Belief revision
Memory without maintenance becomes noise. XTrace applies ideas from formal epistemology — specifically AGM-style belief revision — to keep your knowledge base accurate as things change.
When you correct an AI (“we’re NOT doing the Kafka migration”), XTrace doesn’t just add a new fact alongside the old one. It revises or retracts the outdated belief, propagates the change, and records why.
Two operations
XTrace distinguishes between two fundamentally different kinds of change:

- Revision: a belief is replaced by an updated one (the old fact is superseded)
- Contraction: a belief is withdrawn without any replacement (the fact is retracted)

This matters because “we switched from Postgres to MySQL” is different from “we’re not using a database at all.” The first is revision; the second is contraction.
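The distinction can be sketched in a few lines. This is an illustrative model, not XTrace's actual API; the `Fact` class, statuses, and helper names are invented for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fact:
    text: str
    status: str = "ACTIVE"          # ACTIVE, SUPERSEDED, or RETRACTED
    replaced_by: Optional["Fact"] = None

def revise(old: Fact, new: Fact) -> Fact:
    """Revision: the old belief is superseded by a replacement."""
    old.status = "SUPERSEDED"
    old.replaced_by = new
    return new

def contract(fact: Fact) -> None:
    """Contraction: the belief is withdrawn with no replacement."""
    fact.status = "RETRACTED"

db = Fact("We use Postgres")
new_db = revise(db, Fact("We use MySQL"))   # revision: replaced
contract(new_db)                            # contraction: withdrawn entirely
```

Note that revision keeps a pointer to the replacement while contraction leaves nothing behind, which is why the two need different handling downstream.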
How it works
Inline resolution during extraction
When the extraction pipeline processes a conversation, it can detect that a new statement conflicts with an existing belief. The LLM attaches a resolves directive to the new fact:
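A directive of this kind might look roughly as follows. The field names here are illustrative, not XTrace's exact schema:

```python
# Hypothetical shape of an extracted fact carrying a resolves directive.
# All field names are illustrative.
new_fact = {
    "text": "We are NOT doing the Kafka migration",
    "resolves": {
        "target_index": 3,        # which existing belief this targets
        "action": "retract",      # or "supersede"
        "reason": "correction",   # vs. "temporal_change"
    },
}
```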
The reason field distinguishes corrections (the previous belief was wrong) from temporal changes (the world moved on). This classification feeds the learning loop.
Resolution flow
- Facts are extracted with optional `resolves` directives targeting existing beliefs by index
- `_apply_inline_resolutions` executes the supersede or retract against the store
- Superseded facts get `status: SUPERSEDED` and a `replaced_by` link; retracted facts get `status: RETRACTED`
- Revision events are recorded in Postgres for downstream learning
- Searches automatically exclude superseded and retracted facts
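The flow above can be sketched as follows. The store and event structures are invented for illustration; they are not XTrace's internal types:

```python
# Hypothetical sketch of inline resolution; data model is illustrative.
def apply_inline_resolutions(facts, store, events):
    for fact in facts:
        directive = fact.get("resolves")
        if not directive:
            continue
        target = store[directive["target_index"]]
        if directive["action"] == "supersede":
            target["status"] = "SUPERSEDED"
            target["replaced_by"] = fact["text"]
        else:  # retract
            target["status"] = "RETRACTED"
        # record the revision event for downstream learning
        events.append({
            "old": target["text"],
            "new": fact["text"],
            "reason": directive["reason"],
        })

def search(store, query):
    """Searches skip superseded and retracted facts."""
    return [f for f in store
            if f.get("status", "ACTIVE") == "ACTIVE" and query in f["text"]]
```

The key invariant is that a resolved fact is never deleted: it stays in the store with a terminal status, so provenance survives while search results stay clean.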
Post-ingestion consolidation
Beyond inline resolution, a separate consolidation pass finds beliefs that overlap or conflict across sessions. This uses embedding similarity and LLM-guided pairwise or cluster resolution to supersede redundant facts even when the user didn’t explicitly correct anything. See Consolidation for details.
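The candidate-finding step can be illustrated with plain cosine similarity over embeddings. The threshold and function names are assumptions for the sketch; XTrace's actual similarity metric and cutoff may differ:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def candidate_pairs(facts, embeddings, threshold=0.85):
    """Pairs similar enough to hand to the LLM for pairwise resolution."""
    pairs = []
    for i in range(len(facts)):
        for j in range(i + 1, len(facts)):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                pairs.append((facts[i], facts[j]))
    return pairs
```

Only the pairs that clear the similarity bar are sent to the LLM, which keeps the expensive pairwise resolution step bounded.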
Entrenchment
Not all beliefs are equal. What you explicitly state outranks what the system infers. A core architectural decision outranks a passing mention.
The extraction prompt enforces qualitative entrenchment: the LLM is instructed to treat direct user assertions as higher-weight than system inferences, and to protect core beliefs from silent overrides. This prevents a casual remark from superseding a deliberate decision.
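One way to picture this ordering, if it were made explicit rather than prompt-enforced, is a small entrenchment lattice. The levels and rule below are hypothetical; XTrace applies this qualitatively in the prompt, not with numeric weights:

```python
# Illustrative entrenchment ordering (invented levels, not XTrace's).
ENTRENCHMENT = {
    "core_decision": 3,       # explicit architectural decisions
    "user_assertion": 2,      # direct statements by the user
    "system_inference": 1,    # beliefs the system inferred
    "passing_mention": 0,
}

def may_supersede(new_level: str, old_level: str) -> bool:
    """A belief can only override one that is no more entrenched than itself."""
    return ENTRENCHMENT[new_level] >= ENTRENCHMENT[old_level]
```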
Dependency propagation
When a belief changes, everything built on it is potentially affected:
- Downstream beliefs that reference the changed fact are flagged for review
- Artifact descriptors are re-synthesized when the facts they summarize are superseded
- Episode links are preserved so you can trace why a belief existed and when it changed
The `AsyncDescriptorPropagator` handles the artifact side — when an artifact gets a new version, its descriptor facts are superseded and a rationale narrative is generated from the accumulated change reasons.
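The belief side of propagation is essentially a transitive walk over a dependency graph. A minimal sketch, with an invented data model:

```python
# Hypothetical dependency propagation: when a belief changes, flag
# every belief that (transitively) depends on it for review.
def propagate(changed_id, beliefs, deps):
    """deps maps a belief id to the ids of beliefs that depend on it."""
    flagged, stack = set(), [changed_id]
    while stack:
        current = stack.pop()
        for dependent in deps.get(current, []):
            if dependent not in flagged:
                flagged.add(dependent)
                beliefs[dependent]["needs_review"] = True
                stack.append(dependent)   # follow transitive dependents
    return flagged
```

Flagging for review rather than auto-revising is the conservative choice: a changed premise weakens downstream beliefs but does not by itself determine their replacements.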
Learning from corrections
Every revision event is stored as a labeled example: what was wrong, what replaced it, and why. Before future extractions, the system retrieves similar past mistakes to improve accuracy — without fine-tuning the model.
This means the system gets better at extraction over time, specific to your domain and your patterns of correction.
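The retrieval-before-extraction idea can be sketched as follows. The ranking here uses naive term overlap for self-containment; names and prompt wording are illustrative, and a real system would likely use embedding similarity instead:

```python
# Sketch of correction-driven few-shot retrieval (illustrative names).
def retrieve_similar_mistakes(events, query_terms, k=3):
    """Rank past corrections by term overlap with the new input."""
    def score(event):
        words = set(event["old"].lower().split())
        return len(words & set(query_terms))
    return sorted(events, key=score, reverse=True)[:k]

def build_prompt(base_prompt, examples):
    """Prepend past corrections to the extraction prompt as lessons."""
    lessons = "\n".join(
        f'- Previously wrong: "{e["old"]}" -> corrected to: "{e["new"]}" ({e["reason"]})'
        for e in examples
    )
    return f"{base_prompt}\n\nPast corrections to avoid repeating:\n{lessons}"
```

Because the lessons are injected at inference time, accuracy improves per-domain without retraining or fine-tuning the underlying model.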