March 14, 2026
AI Core & ACT Loop Refinements
Trait extraction is now wired into the standard chat path and includes location data; markdown fences are stripped before the trait JSON is parsed.
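Stripping markdown fences before parsing can be sketched as follows; the helper names and the exact fence pattern are illustrative, not the actual implementation:

```python
import json
import re


def strip_markdown_fences(raw: str) -> str:
    """Remove a surrounding ```json ... ``` fence, if present."""
    match = re.match(r"^\s*```(?:json)?\s*\n(.*?)\n?\s*```\s*$", raw, re.DOTALL)
    return match.group(1) if match else raw


def parse_traits(raw: str) -> dict:
    """Parse trait JSON from an LLM response that may be fenced."""
    return json.loads(strip_markdown_fences(raw))
```

LLMs frequently wrap JSON output in code fences even when asked not to, so stripping them defensively avoids spurious parse failures.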
Triage heuristics were replaced with ONNX-based classification for mode tiebreaking and skill selection, alongside improved readiness gating and observability.
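A plausible shape for the readiness-gated tiebreak, assuming the ONNX classifier's per-mode scores have already been computed (the `pick_mode` helper, its margin, and the fallback mode are all hypothetical):

```python
def pick_mode(scores: dict[str, float], margin: float = 0.15,
              fallback: str = "chat") -> str:
    """Tie-break between candidate modes using classifier scores.

    When the top two scores sit within `margin` of each other, the
    classifier is treated as ambiguous and the fallback mode is used
    (a simple form of readiness gating).
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) >= 2 and ranked[0][1] - ranked[1][1] < margin:
        return fallback
    return ranked[0][0]
```

Gating on the score margin, rather than the raw top score, keeps a low-confidence classifier from silently overriding the safer default.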
Various paths in the digest worker were updated to enqueue a proper text event instead of a close signal when ACT only results in card outputs, ensuring message correlation.
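The card-only fix amounts to emitting a (possibly empty) text event that carries the correlation id, rather than a bare close signal; the `Event` type and `finalize` helper below are illustrative, not the worker's real API:

```python
from dataclasses import dataclass


@dataclass
class Event:
    kind: str            # "text" or "close"
    correlation_id: str
    payload: str = ""


def finalize(outputs: list[dict], correlation_id: str) -> Event:
    """Choose the terminal event for an ACT run's outputs."""
    texts = [o["text"] for o in outputs if o.get("type") == "text"]
    if texts:
        return Event("text", correlation_id, "\n".join(texts))
    if any(o.get("type") == "card" for o in outputs):
        # Card-only runs still emit a text event so the client can
        # correlate the response; a close signal would drop the id.
        return Event("text", correlation_id, "")
    return Event("close", correlation_id)
```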
The ACT loop limits were raised (max iterations to 50, timeout to 180 s), and smart repetition detection now requires two consecutive similar iterations before it triggers.
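The two-consecutive-similar rule can be sketched with a plain similarity check; the threshold, helper names, and the use of `difflib` are assumptions:

```python
from difflib import SequenceMatcher

MAX_ITERATIONS = 50
SIMILARITY_THRESHOLD = 0.9


def similar(a: str, b: str) -> bool:
    return SequenceMatcher(None, a, b).ratio() >= SIMILARITY_THRESHOLD


def should_stop(history: list[str]) -> bool:
    """Stop on hitting the iteration cap, or after two consecutive
    near-duplicate iterations (i.e. the last three outputs are all
    pairwise similar in sequence)."""
    if len(history) >= MAX_ITERATIONS:
        return True
    if len(history) < 3:
        return False
    return similar(history[-1], history[-2]) and similar(history[-2], history[-3])
```

Requiring two similar steps in a row avoids killing runs where a single retry legitimately repeats a previous output.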
Goal proposal learning was introduced, allowing PlanAction to adapt cooldown backoff based on user feedback (cancellation/completion).
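Feedback-driven cooldown backoff might look like the following sketch; the multipliers and bounds are invented for illustration:

```python
def adjust_cooldown(cooldown_s: float, accepted: bool,
                    backoff: float = 2.0, recovery: float = 0.5,
                    min_s: float = 300.0, max_s: float = 86_400.0) -> float:
    """Lengthen the proposal cooldown when the user cancels a goal,
    shorten it when the user completes one, clamped to sane bounds."""
    factor = recovery if accepted else backoff
    return min(max_s, max(min_s, cooldown_s * factor))
```

Multiplicative adjustment with clamping gives the familiar exponential-backoff shape: repeated cancellations quickly quiet the proposer, while a single completion halves the wait.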
ACT loop context engineering was substantially upgraded, adding a scratchpad, an append mode for LLM providers, and persistent task continuity.
Working memory is now hydrated from SQLite upon container restart, recovering the last 12 turns of conversation context.
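Hydration on restart reduces to reading the most recent turns back in chronological order; the schema below (a `turns` table keyed by `session_id` with a `created_at` column) is an assumption:

```python
import sqlite3


def hydrate_working_memory(db_path: str, session_id: str,
                           limit: int = 12) -> list[tuple[str, str]]:
    """Load the most recent `limit` turns for a session, oldest first."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT role, content FROM turns "
            "WHERE session_id = ? ORDER BY created_at DESC LIMIT ?",
            (session_id, limit),
        ).fetchall()
    finally:
        conn.close()
    return list(reversed(rows))  # oldest-first for prompt assembly
```

Selecting DESC with a LIMIT and then reversing in memory is cheaper than scanning the whole session when only the tail is needed.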
Dead code, including GraphService, was removed, and 7 services were decommissioned to simplify the cognitive agent structure.
- ONNX replaced triage heuristics for mode/skill classification.
- Trait extraction now uses location key and is triggered by regular chat messages.
- ACT loop logic updated to send proper message events for card-only outputs.
- Goal proposal learning adapts cooldown based on user cancellation/completion feedback.
- Working memory is restored from SQLite on container restart.
- Significant refactoring reduced cognitive agents and dead services.