AI Development Log
Date: 2026-02-23
Project: Ghostfolio Finance Agent MVP
Domain: Finance
Tools and Workflow
The workflow for this sprint followed a strict loop:
- Presearch and architecture alignment in `docs/PRESEARCH.md`.
- Ticket and execution tracking in `tasks/tasks.md` and `Tasks.md`.
- Implementation in the existing Ghostfolio backend and client surfaces.
- Focused verification through AI unit tests and MVP eval tests.
- Deployment through Railway with public health checks.
Technical stack used in this MVP:
- Backend: NestJS (existing Ghostfolio architecture)
- Agent design: custom orchestrator in `ai.service.ts` with helper modules for tool execution
- Memory: Redis with 24-hour TTL and max 10 turns
- Tools: `portfolio_analysis`, `risk_assessment`, `market_data_lookup`
- Models: `glm-5` via Z.AI primary path, `MiniMax-M2.5` fallback path, OpenRouter backup path
- Deployment: Railway (moved to GHCR image source for faster deploy cycles)
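The three-tier model routing (`glm-5` primary, `MiniMax-M2.5` fallback, OpenRouter backup) can be sketched as a simple ordered fallback chain. This is an illustrative sketch, not the actual `ai.service.ts` implementation; the `Provider` shape and function names are assumptions.

```typescript
type ChatFn = (prompt: string) => Promise<string>;

interface Provider {
  name: string;
  chat: ChatFn;
}

// Try each provider in order; return the first successful completion.
// If every provider throws, surface the last error.
async function completeWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<{ provider: string; text: string }> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      const text = await p.chat(prompt);
      return { provider: p.name, text };
    } catch (err) {
      lastError = err; // fall through to the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

The chain is ordered by preference, so the backup path only pays its latency cost when the primary and fallback paths actually fail.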
MCP Usage
- Railway CLI and Railway GraphQL API:
- linked project/service
- switched service image source to `ghcr.io/maxpetrusenko/ghostfolio:main`
- redeployed and verified production health
- Local shell tooling:
- targeted test/eval runs
- health checks and deployment diagnostics
- GitHub Actions:
- GHCR publish workflow on `main` pushes
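The post-deploy health verification loop can be sketched as a small polling helper. The endpoint path, retry counts, and response handling here are assumptions for illustration; the fetch function is injected so the helper stays testable without a live deployment.

```typescript
interface HealthResult {
  healthy: boolean;
  status: number;
}

// Classify an HTTP status from the health endpoint.
function classifyHealth(status: number): HealthResult {
  return { healthy: status >= 200 && status < 300, status };
}

// Poll the deployed service until it reports healthy or retries run out.
async function waitForHealthy(
  baseUrl: string,
  fetchFn: (url: string) => Promise<{ status: number }>,
  retries = 5,
  delayMs = 3000,
): Promise<HealthResult> {
  let last: HealthResult = { healthy: false, status: 0 };
  for (let i = 0; i < retries; i++) {
    try {
      // '/api/v1/health' is an assumed path, not confirmed from the source.
      last = classifyHealth((await fetchFn(`${baseUrl}/api/v1/health`)).status);
      if (last.healthy) return last;
    } catch {
      last = { healthy: false, status: 0 }; // network error, retry
    }
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return last;
}
```

Bounded retries keep a broken deploy from hanging the pipeline, while a short delay absorbs the cold-start window after a container swap.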
Effective Prompts
The following user prompts drove the highest-impact delivery steps:
- "use z_ai_glm_api_key glm-5 and minimax_api_key minimax m2.5 for mvp"
- "ok 1 and 2 and add data to the app so we can test it"
- "i dotn see activities and how to test and i dont see ai bot windows. where should i see it?"
- "publish you have cli here"
- "ok do 1 and 2 and then 3. AI development log (1 page) 4. AI cost analysis (100/1K/10K/100K users) 5. Submit to GitHub"
Code Analysis
Rough authorship estimate for the MVP slice:
- AI-generated implementation and docs: ~70%
- Human-guided edits, review, and final acceptance decisions: ~30%
The largest human contribution focused on:
- model/provider routing decisions
- deploy-source migration on Railway
- quality gates and scope control
Strengths and Limitations
Strengths observed:
- High velocity on brownfield integration with existing architecture
- Fast refactor support for file-size control and helper extraction
- Reliable generation of deterministic test scaffolding and eval cases
- Strong support for deployment automation and incident-style debugging
Limitations observed:
- CLI/API edge cases required manual schema introspection
- Runtime state and environment drift required explicit verification loops
- Exact token-cost accounting still needs production telemetry wiring
Key Learnings
- Clear, constraint-rich prompts produce fast and stable implementation output.
- Deterministic eval cases are essential for regression control during rapid iteration.
- Deploy speed improves materially when runtime builds move from source builds to prebuilt images.
- Production readiness depends on traceability: citations, confidence scores, verification checks, and explicit assumptions in cost reporting.
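The deterministic eval cases called out above can be sketched as a tiny fixture-driven runner. The keyword router and case names here are illustrative assumptions, not the MVP's actual eval suite; only the tool names come from the log.

```typescript
interface EvalCase {
  name: string;
  input: string;
  expectTool: string;
}

// A trivial deterministic router: keyword -> tool name.
// Real routing would be model-driven; keywords keep the eval reproducible.
function routeTool(input: string): string {
  const text = input.toLowerCase();
  if (text.includes('risk')) return 'risk_assessment';
  if (text.includes('price') || text.includes('quote')) return 'market_data_lookup';
  return 'portfolio_analysis';
}

// Run all cases and return the names of failures; same input, same verdict,
// every run, which is what makes the suite usable as a regression gate.
function runEvals(cases: EvalCase[]): string[] {
  return cases
    .filter((c) => routeTool(c.input) !== c.expectTool)
    .map((c) => c.name);
}
```

Because the cases and the check are both fixed, a failing name pinpoints exactly which behavior regressed during rapid iteration.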