March 10, 2026
Core Architecture Deep Dive: Salience & Constraints
A new WorldStateService is introduced to aggregate contextual items using temporal and semantic salience scoring, surfacing goals and tasks according to their relevance and time decay.
This refactoring replaces static context dumps, ensuring goals and lists only consume tokens when contextually relevant, while also adding vector embeddings at creation for semantic matching.
Goal awareness is extended to ACT and CLARIFY modes, injecting active goals into the context for reasoning and disambiguation.
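Mode-gated goal injection can be sketched as a small prompt-assembly helper. The mode names come from the text; the function and prompt layout are assumptions:

```python
def inject_goals(mode: str, base_prompt: str, active_goals: list[str]) -> str:
    """Append active goals to the prompt context, but only for modes
    that reason over them (ACT and CLARIFY here; a sketch, not the
    project's actual prompt builder)."""
    if mode not in {"ACT", "CLARIFY"} or not active_goals:
        return base_prompt
    goal_block = "\n".join(f"- {goal}" for goal in active_goals)
    return f"{base_prompt}\n\nActive goals:\n{goal_block}"
```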
The system is evolving towards a cognitive operating system vision, with updated documentation detailing an interface philosophy and a trajectory toward ambient presence.
The core reasoning pipeline is now mandatory for all messages, decoupling and disabling the fast-path cognitive reflex system to enforce full reasoning invariants.
A robust constraint learning system is implemented, wiring gate rejections into the memory pipeline so the LLM learns from systemic failures.
This constraint learning flows from gate rejections into an interaction log, surfaces summaries in prompts, and can form long-term episodes in episodic memory.
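The rejection-to-prompt flow could be modeled with a bounded interaction log like the sketch below. The class and method names are hypothetical; only the flow (record rejections, surface a short summary for prompts) comes from the text:

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class ConstraintLog:
    """Bounded log of gate rejections (illustrative, not the real API).

    Recent entries are summarized for prompt injection; a separate
    process could promote recurring patterns into episodic memory.
    """
    entries: deque = field(default_factory=lambda: deque(maxlen=50))

    def record_rejection(self, gate: str, reason: str) -> None:
        self.entries.append((gate, reason))

    def prompt_summary(self, limit: int = 3) -> str:
        recent = list(self.entries)[-limit:]
        if not recent:
            return ""
        lines = [f"- {gate}: {reason}" for gate, reason in recent]
        return "Recent constraint violations to avoid:\n" + "\n".join(lines)
```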
The ONNX contradiction classifier is wired as the primary mode-tiebreaker, falling back to the LLM only if confidence is low or the model fails.
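The fallback logic described here reduces to a confidence-thresholded try/except. The classifier callables and threshold value are placeholders for whatever the pipeline actually wires in:

```python
from typing import Callable


def break_mode_tie(premise: str, hypothesis: str,
                   classify_onnx: Callable[[str, str], tuple[str, float]],
                   classify_llm: Callable[[str, str], str],
                   confidence_threshold: float = 0.8) -> str:
    """Prefer the fast ONNX contradiction classifier; fall back to the
    LLM only on low confidence or model failure (a sketch with assumed
    names and threshold)."""
    try:
        label, confidence = classify_onnx(premise, hypothesis)
        if confidence >= confidence_threshold:
            return label
    except Exception:
        pass  # model failure: fall through to the LLM
    return classify_llm(premise, hypothesis)
```

Keeping the threshold explicit makes the latency/accuracy trade-off tunable: a higher threshold routes more ties to the slower LLM path.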
- Rebuilt WorldStateService with temporal and semantic salience scoring for context aggregation.
- Goal awareness extended to ACT and CLARIFY modes, providing active goal context during reasoning.
- The fast-path cognitive reflex system is fully disconnected, enforcing full reasoning through the pipeline.
- A constraint learning system logs gate rejections to inform LLM behavior and build episodic memory.
- ONNX contradiction classifier deployed as the primary, sub-millisecond mode-tiebreaker.
- Three-tier user trait injection replaces flat KNN retrieval for personalized LLM context.
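The text does not spell out what the three tiers are, but a tiered injector generally assembles traits in priority order under a budget, rather than taking a flat nearest-neighbor list. The tier names and budget semantics below are assumptions for illustration only:

```python
def build_trait_context(core: list[str], session: list[str],
                        retrieved: list[str], budget: int = 8) -> list[str]:
    """Assemble user traits tier by tier: always-on core traits first,
    then session-scoped traits, then similarity-retrieved ones,
    deduplicated and trimmed to an item budget (tier semantics assumed,
    not documented in the source)."""
    seen: set[str] = set()
    out: list[str] = []
    for tier in (core, session, retrieved):
        for trait in tier:
            if trait not in seen and len(out) < budget:
                seen.add(trait)
                out.append(trait)
    return out
```

Compared with flat KNN retrieval, this guarantees that high-priority traits survive even when the retrieval tier returns many strong matches.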