April 17, 2026
AI Assistant Refactors Core Memory & Extraction
The ToolSynthesisProcessor was removed entirely: its hourly sweep over tool_calls was an inefficient abstraction that duplicated existing ACT-loop memory skills.
The narration DTO in the ACT loop was renamed from ‘tool_synthesis’ to ‘narration’ to align with its actual function.
Per-channel episode extraction now triggers at 25 inserts initially, then every 20 thereafter, using a 25-entry window so consecutive extractions overlap by 5 entries for context continuity.
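A minimal sketch of this cadence, assuming a simple insert counter per channel (the function and parameter names here are hypothetical, not the actual implementation):

```python
def should_extract(insert_count: int, first_at: int = 25, every: int = 20) -> bool:
    """Fire once at `first_at` inserts, then every `every` inserts after that."""
    if insert_count < first_at:
        return False
    return (insert_count - first_at) % every == 0


def extraction_window(entries: list, window: int = 25) -> list:
    """Take the most recent `window` entries; with a 20-insert cadence this
    leaves a 5-entry overlap between consecutive extractions."""
    return entries[-window:]
```

With these numbers, extraction fires at inserts 25, 45, 65, and so on, and each pass re-reads the last 5 entries of the previous window.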
The concept LUT was updated to include 18 aliases for ‘pets’, and the system migrated to a deterministic LUT-driven rule system, replacing the contradiction classifier.
The data graph was refactored to remove cosine math duplication and implement rule-aware collision handling during canonical key migration, ensuring data integrity.
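Both changes can be illustrated with short sketches: a single shared cosine helper (so the math lives in one place) and a migration step that resolves key collisions through an explicit rule instead of overwriting. Names are hypothetical:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """One shared helper so cosine math isn't duplicated across call sites."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)


def migrate_key(store: dict, old_key: str, new_key: str, merge_rule) -> None:
    """Move a value to its canonical key; on collision, apply `merge_rule`
    instead of silently dropping either value."""
    value = store.pop(old_key, None)
    if value is None:
        return
    if new_key in store:
        store[new_key] = merge_rule(store[new_key], value)
    else:
        store[new_key] = value
```

The rule-aware merge is what preserves integrity: a bare `store[new_key] = value` would lose whichever fact was already under the canonical key.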
The memory tool schema was enhanced by explicitly listing all 27 canonical keys and adding rules to encourage atomic fact storage, addressing LLM summarization gaps.
Episode extraction stability was improved by correcting entry_range semantics so the LLM returns transcript IDs instead of 0-based positions.
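Under the corrected semantics, resolving an entry_range means filtering by stable transcript IDs rather than indexing into the list, which breaks as soon as entries are pruned or reordered. A sketch, with hypothetical field names:

```python
def resolve_entry_range(transcript: list[dict],
                        entry_range: tuple[int, int]) -> list[dict]:
    """entry_range carries transcript IDs (as the LLM now returns),
    not 0-based list positions, so match on the `id` field."""
    start_id, end_id = entry_range
    return [e for e in transcript if start_id <= e["id"] <= end_id]
```

ID-based ranges stay valid even when earlier entries have been deleted, which is exactly where position-based ranges silently drift.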
The per-channel extraction interval was lowered from 35 to 20 to ensure episodic extraction triggers reliably in shorter conversations.
- Removed ToolSynthesisProcessor; renamed ACT-loop DTO to ‘narration’.
- Episode extraction cadence changed to first at 25 inserts, then every 20.
- Replaced contradiction classifier with deterministic LUT-driven rule system (27 concepts).
- Memory tool schema now lists all 27 canonical keys and enforces atomic fact rules.
- Corrected episode extraction logic to use transcript IDs instead of 0-based positions in entry_range.
- Lowered per-channel extraction interval from 35 to 20 for reliable triggering.