April 18, 2026

AI Memory Overhaul: Episodic Simplification & Providers

A major episodic memory simplification was performed, dropping nine columns from the episodes table and adding a consolidated_into back-pointer.

The episodic retrieval process was entirely refactored, introducing a new hybrid FTS+vector retrieval module that supports apex traversal and composite reranking.
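The composite reranking step of such a hybrid retriever can be sketched as a weighted blend of the lexical (FTS) and semantic (vector) scores. Everything below is illustrative, not the actual module: the `Candidate` shape, score normalization, and weights are assumptions, and the apex-traversal stage is omitted entirely.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    """One retrieval hit carrying both scores (hypothetical shape)."""
    episode_id: int
    fts_score: float     # normalized full-text-search rank, 0..1
    vector_score: float  # cosine similarity to the query embedding, 0..1


def composite_rerank(
    candidates: list[Candidate],
    w_fts: float = 0.4,
    w_vec: float = 0.6,
) -> list[Candidate]:
    """Order candidates by a weighted combination of both signals,
    highest combined score first. Weights are illustrative defaults."""
    return sorted(
        candidates,
        key=lambda c: w_fts * c.fts_score + w_vec * c.vector_score,
        reverse=True,
    )
```

A candidate that scores well semantically can thus outrank one that only matches lexically, which is the usual motivation for blending the two signals rather than taking either list alone.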

Super-episode generation transitioned from an all-pairs clique mechanism to a connected-components clustering approach using cosine similarity.
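The connected-components idea can be sketched as follows: link any two episodes whose embeddings exceed a cosine-similarity threshold, then take each resulting component as one cluster. The threshold value and the union-find implementation here are assumptions for illustration, not the system's actual parameters.

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Plain cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def connected_components(
    embeddings: list[list[float]], threshold: float = 0.8
) -> list[list[int]]:
    """Cluster episode indices: edges are pairs above the similarity
    threshold; each connected component becomes one cluster."""
    parent = list(range(len(embeddings)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                parent[find(i)] = find(j)  # union the two components

    groups: dict[int, list[int]] = {}
    for i in range(len(embeddings)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Unlike the all-pairs clique approach, a component only requires a chain of similar pairs, so transitively related episodes land in one cluster even when some pairs fall below the threshold.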

The system now uses an internal one-shot processor to synthesize clusters into super-episodes, moving this logic out of periodic daemon jobs.


The core memory trigger mechanism was switched from a process-local counter to a durable, DB-state-driven query, ensuring extraction fires even after restarts.
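A durable, DB-state-driven trigger can be sketched as a simple count query over unprocessed rows: because the state lives in the database rather than in process memory, a restart cannot lose the count. The `messages` table, `extracted` flag, and batch size below are assumptions for illustration.

```python
import sqlite3


def should_extract(conn: sqlite3.Connection, batch_size: int = 20) -> bool:
    """Fire memory extraction once enough unprocessed messages have
    accumulated. The decision is derived entirely from durable DB state,
    so it survives process restarts (unlike a process-local counter)."""
    (pending,) = conn.execute(
        "SELECT COUNT(*) FROM messages WHERE extracted = 0"
    ).fetchone()
    return pending >= batch_size
```

A process-local counter resets to zero on restart, silently skipping extraction for messages received before the crash; the query above reaches the same decision no matter how many times the process has restarted.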

Support for third-party OpenAI-compatible APIs was added, allowing configuration via base URLs for providers like MiniMax and Groq.
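A provider registry for OpenAI-compatible backends might look roughly like the sketch below. The registry shape, field names, and the MiniMax URL are assumptions; the Groq base URL reflects its commonly documented OpenAI-compatible endpoint.

```python
# Hypothetical provider registry; field names are illustrative only.
PROVIDERS = {
    "minimax": {
        "platform": "openai_compatible",
        "base_url": "https://api.minimax.io/v1",  # assumed URL
        "api_key_env": "MINIMAX_API_KEY",
    },
    "groq": {
        "platform": "openai_compatible",
        "base_url": "https://api.groq.com/openai/v1",
        "api_key_env": "GROQ_API_KEY",
    },
}


def chat_endpoint(provider: str) -> str:
    """Resolve the chat-completions URL for an OpenAI-compatible provider."""
    cfg = PROVIDERS[provider]
    return cfg["base_url"].rstrip("/") + "/chat/completions"
```

Because these providers follow the OpenAI wire format, only the base URL and API key need to vary per provider; the request and response handling code stays shared.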

UI and backend logic were updated to automatically fetch available models for Ollama and Anthropic providers via new backend proxies.

Significant SonarCloud fixes were applied across services, including complexity reduction, dropping unused parameters, and suppressing specific SQL false positives.

    • Dropped 9 columns from episodes table and added consolidated_into back-pointer.
    • Replaced clique clustering with connected-components for super-episode generation.
    • Switched memory extraction trigger to a DB-state query instead of process-local counters.
    • Introduced openai_compatible platform support for custom base URLs.
    • Added auto-fetching of models for Ollama and Anthropic providers via new proxies.
    • Resolved numerous SonarCloud issues by reducing complexity and dropping unused parameters.