
Condensed Architecture (AI MVP)

Date: 2026-02-24
Source: docs/MVP-VERIFICATION.md (condensed to 1-2 pages)

1) System Overview

Ghostfolio AI MVP is a finance-domain assistant embedded in the existing Ghostfolio API and portfolio UI.

Primary goals:

  • Answer natural-language finance queries.
  • Execute domain tools with structured outputs.
  • Preserve memory across turns.
  • Emit verifiable responses (citations, confidence, checks).
  • Stay observable and testable under refactors.

2) Runtime Flow

Client (analysis page chat panel)
  -> POST /api/v1/ai/chat
  -> ai.controller.ts
  -> ai.service.ts (orchestrator)
     -> determineToolPlan(query, symbols)
     -> tool execution (portfolio/risk/market/rebalance/stress)
     -> verification checks
     -> buildAnswer() with provider + deterministic fallback
     -> confidence scoring + observability snapshot
  -> JSON response (answer + metadata)
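The flow above can be condensed into a minimal, self-contained sketch. All helper bodies here are illustrative stand-ins for the real ai.service.ts logic, not the shipped implementation:

```typescript
// Illustrative orchestration sketch; only the determineToolPlan() and
// chat() shapes mirror the flow above, the bodies are simplified stubs.

type ToolName =
  | "portfolio_analysis"
  | "risk_assessment"
  | "market_data_lookup";

function determineToolPlan(query: string, symbols: string[]): ToolName[] {
  // Simplified intent/keyword routing (see the tooling model section).
  if (/risk|volatil/i.test(query)) return ["risk_assessment"];
  if (symbols.length > 0) return ["market_data_lookup"];
  // Conservative fallback when intent is ambiguous.
  return ["portfolio_analysis", "risk_assessment"];
}

interface ChatResult {
  answer: string;
  metadata: { tools: ToolName[]; confidence: number };
}

function chat(query: string, symbols: string[]): ChatResult {
  const plan = determineToolPlan(query, symbols);
  // Tool execution, verification checks, and buildAnswer() with the
  // deterministic fallback would run here; condensed to a stub answer.
  const confidence = plan.length > 0 ? 0.8 : 0.0;
  return {
    answer: `Ran ${plan.join(", ")} for: ${query}`,
    metadata: { tools: plan, confidence },
  };
}
```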

3) Core Components

4) Tooling Model

Implemented tools:

  • portfolio_analysis
  • risk_assessment
  • market_data_lookup
  • rebalance_plan
  • stress_test

Selection policy:

  • Intent- and keyword-based routing.
  • Conservative fallback to portfolio_analysis + risk_assessment when intent is ambiguous.
  • Symbol extraction uses uppercase + stop-word filtering to reduce false positives.
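The symbol-extraction heuristic can be sketched as uppercase-token matching plus a stop-word filter. The stop-word set below is illustrative, not the one shipped in the service:

```typescript
// Hypothetical symbol extractor: keep 1-5 letter all-uppercase tokens,
// drop common uppercase words that are not tickers to cut false positives.

const STOP_WORDS = new Set(["I", "A", "AI", "ETF", "USD", "OK"]);

function extractSymbols(query: string): string[] {
  const tokens = query.match(/\b[A-Z]{1,5}\b/g) ?? [];
  // De-duplicate while preserving first-seen order.
  return [...new Set(tokens.filter((t) => !STOP_WORDS.has(t)))];
}
```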

5) Memory Model

  • Backend: Redis
  • Key: ai-agent-memory-{userId}-{sessionId}
  • TTL: 24h
  • Retention: last 10 turns
  • Stored turn fields: query, answer, timestamp, tool statuses
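The key format, 24h TTL, and last-10-turns retention above can be sketched with an in-memory Map standing in for Redis; the Turn shape and helper names are assumptions for illustration:

```typescript
// Memory-model sketch. A Map replaces Redis here; in the real backend the
// TTL would be applied via EXPIRE/SETEX on the same key.

interface Turn {
  query: string;
  answer: string;
  timestamp: number;
  toolStatuses: Record<string, "ok" | "error">;
}

const TTL_IN_SECONDS = 24 * 60 * 60; // 24h
const MAX_TURNS = 10; // retain last 10 turns

const store = new Map<string, Turn[]>();

function memoryKey(userId: string, sessionId: string): string {
  return `ai-agent-memory-${userId}-${sessionId}`;
}

function appendTurn(userId: string, sessionId: string, turn: Turn): Turn[] {
  const key = memoryKey(userId, sessionId);
  // slice(-MAX_TURNS) drops the oldest turns beyond the retention window.
  const turns = [...(store.get(key) ?? []), turn].slice(-MAX_TURNS);
  store.set(key, turns);
  return turns;
}
```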

6) Verification and Guardrails

Checks currently emitted in response:

  • numerical_consistency
  • market_data_coverage
  • tool_execution
  • output_completeness
  • citation_coverage
  • response_quality
  • rebalance_coverage (when applicable)
  • stress_test_coherence (when applicable)

Quality guardrail:

  • Filters weak generated responses (generic disclaimers, low-information output, missing actionability for invest/rebalance prompts).
  • Falls back to deterministic synthesis when generated output quality is below threshold.
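The guardrail behavior can be sketched as a predicate plus a fallback selector. The specific phrases, word-count threshold, and actionability test below are assumptions, not the shipped rules:

```typescript
// Illustrative quality guardrail: reject generic or low-information
// generated answers and route to the deterministic synthesis instead.

const GENERIC_PHRASES = [
  "consult a financial advisor",
  "i am not able to provide",
];

function passesQualityGuardrail(answer: string, query: string): boolean {
  const lower = answer.toLowerCase();
  // Generic disclaimer -> reject.
  if (GENERIC_PHRASES.some((p) => lower.includes(p))) return false;
  // Low-information output -> reject (threshold is an assumption).
  if (answer.split(/\s+/).length < 12) return false;
  // Invest/rebalance prompts must contain something actionable
  // (here approximated as: at least one number).
  if (/invest|rebalance/i.test(query) && !/\d/.test(answer)) return false;
  return true;
}

function answerOrFallback(
  generated: string,
  query: string,
  deterministic: string
): string {
  return passesQualityGuardrail(generated, query) ? generated : deterministic;
}
```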

7) Observability

Per-chat capture:

  • Total latency
  • LLM / memory / tool breakdown
  • Token estimate
  • Error traces
  • Optional LangSmith trace linkage

Per-eval capture:

  • Category pass summaries
  • Suite pass rate
  • Hallucination-rate heuristic
  • Verification-accuracy metric

8) Performance Strategy

Two layers:

  • Service-level deterministic gate (test:ai:performance)
  • Live model/network gate (test:ai:live-latency:strict)

Latency control:

  • AI_AGENT_LLM_TIMEOUT_IN_MS (default: 3500 ms)
  • Timeout triggers deterministic fallback so tail latency remains bounded.
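The timeout-bounded call can be sketched as racing the provider against the configured deadline; the fallback hook and helper name are illustrative:

```typescript
// Sketch: resolve with the provider's answer if it arrives in time,
// otherwise (or on provider error) resolve with the deterministic fallback,
// keeping tail latency bounded by AI_AGENT_LLM_TIMEOUT_IN_MS.

const LLM_TIMEOUT_IN_MS = Number(process.env.AI_AGENT_LLM_TIMEOUT_IN_MS ?? 3500);

function withTimeout<T>(
  work: Promise<T>,
  ms: number,
  fallback: () => T
): Promise<T> {
  return new Promise<T>((resolve) => {
    const timer = setTimeout(() => resolve(fallback()), ms);
    work.then(
      (value) => {
        clearTimeout(timer);
        resolve(value);
      },
      () => {
        clearTimeout(timer);
        resolve(fallback()); // provider error also falls back
      }
    );
  });
}
```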

9) Testing and Evals

Primary AI gates:

  • npm run test:ai
  • npm run test:mvp-eval
  • npm run test:ai:quality
  • npm run test:ai:performance
  • npm run test:ai:live-latency:strict

Dataset:

  • 53 total eval cases
  • Category minimums satisfied (happy_path, edge_case, adversarial, multi_step)

10) Open Source Path

Prepared package scaffold:

  • The package is ready for dry-run packing and the publication workflow.