# File Memory
The file memory layer (src/memory/) indexes markdown files into chunks stored in LibSQL. At query time, it uses BM25 full-text search (or vector search when available) to find relevant context.
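The chunk store and its BM25 query can be sketched in SQL. This is a hypothetical schema — the table and column names are illustrative, not the actual layout used by src/memory/ — but it shows the shape of the design: a plain chunks table plus an FTS5 index, with FTS5's bm25() auxiliary function providing the ranking.

```typescript
// Hypothetical LibSQL/SQLite schema for the chunk index; names are
// assumptions, not the real src/memory/ table layout.
const CREATE_CHUNKS = `
  CREATE TABLE IF NOT EXISTS chunks (
    id INTEGER PRIMARY KEY,
    path TEXT NOT NULL,
    start_line INTEGER NOT NULL,
    end_line INTEGER NOT NULL,
    text TEXT NOT NULL,
    hash TEXT NOT NULL
  );
`;

// FTS5 virtual table backed by the chunks table for full-text search.
const CREATE_FTS = `
  CREATE VIRTUAL TABLE IF NOT EXISTS chunks_fts
  USING fts5(text, content='chunks', content_rowid='id');
`;

// Lower bm25() scores mean better matches in FTS5, so ascending order
// returns the most relevant chunks first.
const SEARCH_BM25 = `
  SELECT c.path, c.start_line, c.end_line, c.text
  FROM chunks_fts f JOIN chunks c ON c.id = f.rowid
  WHERE chunks_fts MATCH ?
  ORDER BY bm25(chunks_fts)
  LIMIT ?;
`;
```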
## Architecture

### Chunking
Files are split into overlapping chunks of configurable size (DEFAULT_MEMORY_CONFIG.chunking). Each chunk stores:
- path: relative file path
- startLine/endLine: line range in the source file
- text: raw chunk content
- hash: content hash for change detection (avoids re-indexing unchanged chunks)
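Overlapping line-based chunking with content hashing can be sketched as below. The chunk size and overlap values here are illustrative defaults, not the actual DEFAULT_MEMORY_CONFIG.chunking settings, and chunkFile is a hypothetical helper:

```typescript
import { createHash } from "node:crypto";

// Chunk shape mirroring the fields listed above.
interface MemoryChunk {
  path: string;
  startLine: number;
  endLine: number;
  text: string;
  hash: string;
}

// Sketch of overlapping chunking: each chunk shares `overlap` lines with
// its predecessor so context is not lost at chunk boundaries.
function chunkFile(
  path: string,
  content: string,
  chunkLines = 40,
  overlap = 10
): MemoryChunk[] {
  const lines = content.split("\n");
  const chunks: MemoryChunk[] = [];
  const step = chunkLines - overlap;
  for (let start = 0; start < lines.length; start += step) {
    const end = Math.min(start + chunkLines, lines.length);
    const text = lines.slice(start, end).join("\n");
    chunks.push({
      path,
      startLine: start + 1, // 1-indexed, matching editor line numbers
      endLine: end,
      text,
      // Stable content hash lets re-indexing skip unchanged chunks.
      hash: createHash("sha256").update(text).digest("hex"),
    });
    if (end === lines.length) break;
  }
  return chunks;
}
```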
### Auto-Sync (Memory Watcher)
src/memory/memory-watcher.ts watches the configured paths with chokidar and triggers incremental re-sync when files change:
When the index is marked dirty, the next searchMemory() call automatically triggers a sync before returning results (autoSynced: true in the response).
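The lazy re-sync pattern can be sketched as follows. The class and its internals are illustrative, not the module's real API; in the actual implementation the flag is set from chokidar change callbacks:

```typescript
// Sketch of lazy re-sync: the watcher callback only flips a dirty flag,
// and the next search pays the sync cost so queries never read stale data.
class LazyMemoryIndex {
  private dirty = false;
  public syncCount = 0; // exposed here only to make the sketch observable

  // Called from the file-watcher's change/add/unlink handlers.
  markDirty(): void {
    this.dirty = true;
  }

  // Re-chunk and re-index changed files, then clear the flag.
  private sync(): void {
    this.syncCount++;
    this.dirty = false;
  }

  // Syncs first if needed, and reports whether it did via autoSynced.
  searchMemory(_query: string): { results: string[]; autoSynced: boolean } {
    const autoSynced = this.dirty;
    if (this.dirty) this.sync();
    return { results: [], autoSynced };
  }
}
```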
### Memory Config

## Memory Sessions
Memory sessions (createMemorySession, archiveSession) track which knowledge base was loaded for a given conversation. This enables:
- Session replay (re-load same context)
- Auditing which files influenced a response
- Session isolation (different projects use different memory sets)
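A minimal in-memory sketch of the session lifecycle follows. The record's field names are assumptions based on the capabilities listed above; the real createMemorySession and archiveSession persist to LibSQL rather than a Map:

```typescript
// Hypothetical session record: which knowledge-base files were loaded,
// and whether the session has been archived (kept for replay/auditing).
interface MemorySession {
  id: string;
  createdAt: number;     // epoch ms
  loadedPaths: string[]; // files that influenced this conversation
  archived: boolean;
}

const sessions = new Map<string, MemorySession>();

function createMemorySession(loadedPaths: string[]): MemorySession {
  const session: MemorySession = {
    id: `sess-${sessions.size + 1}`,
    createdAt: Date.now(),
    loadedPaths,
    archived: false,
  };
  sessions.set(session.id, session);
  return session;
}

// Archived sessions stay queryable for replay but are excluded from
// active use.
function archiveSession(id: string): void {
  const session = sessions.get(id);
  if (session) session.archived = true;
}
```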
## Experience Store
src/memory/experience-store.ts records patterns the agent learns from execution:
### Experience Types

### Schema
Each experience has:
- intent: what the user was trying to do (text for similarity search)
- solution: the approach that worked (arbitrary JSON)
- successScore: 0-1, quality of the solution
- weight: decays over time (updated by applyDecay())
- useCount: incremented each time the experience is retrieved and used
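Translated into a hypothetical TypeScript interface (the createdAt field is an assumption needed for decay; the authoritative schema lives in the LibSQL store):

```typescript
// Sketch of an experience row, following the field descriptions above.
interface Experience {
  intent: string;       // searchable description of what the user wanted
  solution: unknown;    // arbitrary JSON describing the approach that worked
  successScore: number; // 0-1, quality of the solution
  weight: number;       // decays over time via applyDecay()
  useCount: number;     // bumped each time the experience is retrieved and used
  createdAt: number;    // epoch ms; assumed here as the basis for decay
}
```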
### Retrieval
findSimilarExperiences(query, tags?, limit?) uses BM25 search on the intent field to find past experiences relevant to the current task. The result is injected into the system prompt when available.
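The prompt-injection step might look like the sketch below. buildSystemPrompt and its formatting are assumptions for illustration, not the actual injection logic:

```typescript
// Minimal shape of a retrieved experience for prompt building.
interface Retrieved {
  intent: string;
  solution: unknown;
}

// Folds retrieved experiences into the system prompt; returns the base
// prompt unchanged when nothing relevant was found.
function buildSystemPrompt(base: string, experiences: Retrieved[]): string {
  if (experiences.length === 0) return base;
  const lines = experiences.map(
    (e) => `- ${e.intent}: ${JSON.stringify(e.solution)}`
  );
  return `${base}\n\nRelevant past experiences:\n${lines.join("\n")}`;
}
```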
### Decay
Experiences are weighted by recency. applyDecay(halfLifeDays) reduces the weight of old experiences.
pruneExpired(minWeight) removes experiences below the minimum weight threshold to keep the store lean.
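A plausible implementation of half-life decay and pruning, assuming an experience's weight halves every halfLifeDays of age (the exact formula and timestamp source are assumptions):

```typescript
// Exponential half-life decay: weight * 0.5^(age / halfLife).
// At age 0 the weight is unchanged; at one half-life it is halved.
function decayedWeight(
  weight: number,
  ageDays: number,
  halfLifeDays: number
): number {
  return weight * Math.pow(0.5, ageDays / halfLifeDays);
}

// Drops experiences whose weight has decayed below the threshold,
// keeping the store lean.
function pruneExpired<T extends { weight: number }>(
  items: T[],
  minWeight: number
): T[] {
  return items.filter((e) => e.weight >= minWeight);
}
```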
## Context Management
getMemoryStats() and needsCompaction() (from src/chat/index.ts) track token usage and trigger conversation compaction before the context window fills:
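The trigger can be sketched as a simple ratio check. The 0.8 threshold and the stats shape are assumptions, not the actual getMemoryStats()/needsCompaction() implementation:

```typescript
// Hypothetical stats shape: current usage vs. the model's context window.
interface MemoryStats {
  tokensUsed: number;
  contextWindow: number;
}

// Compaction fires before the window fills, leaving headroom for the
// next response; the threshold here is an illustrative default.
function needsCompaction(stats: MemoryStats, threshold = 0.8): boolean {
  return stats.tokensUsed / stats.contextWindow >= threshold;
}
```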