# Belief revision

Memory without maintenance becomes noise. XTrace applies ideas from formal epistemology — specifically AGM-style belief revision — to keep your knowledge base accurate as things change.

When you correct an AI (“we’re NOT doing the Kafka migration”), XTrace doesn’t just add a new fact alongside the old one. It revises or retracts the outdated belief, propagates the change, and records why.

## Two operations

XTrace distinguishes between two fundamentally different kinds of change:

| Operation | When to use | What happens |
| --- | --- | --- |
| Supersede (revision) | A belief is replaced by a newer one | The old fact is marked `SUPERSEDED` and linked to its replacement via `supersedes` / `replaced_by` |
| Retract (contraction) | A belief is removed with no replacement | The old fact is marked `RETRACTED`; the system acknowledges it was wrong without asserting something new |

This matters because “we switched from Postgres to MySQL” is different from “we’re not using a database at all.” The first is revision; the second is contraction.
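The two operations can be sketched as status transitions on stored facts. This is an illustrative model, not XTrace's actual schema; only the `SUPERSEDED` / `RETRACTED` statuses and the `replaced_by` link come from this page, and the `ACTIVE` default is an assumption.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    ACTIVE = "ACTIVE"  # assumed default status, not named on this page
    SUPERSEDED = "SUPERSEDED"
    RETRACTED = "RETRACTED"

@dataclass
class Fact:
    text: str
    status: Status = Status.ACTIVE
    replaced_by: Optional["Fact"] = None

def supersede(old: Fact, new: Fact) -> None:
    # Revision: the old belief is replaced and linked to its successor.
    old.status = Status.SUPERSEDED
    old.replaced_by = new

def retract(old: Fact) -> None:
    # Contraction: the old belief is withdrawn with nothing asserted in its place.
    old.status = Status.RETRACTED
```

In these terms, "we switched from Postgres to MySQL" calls `supersede`, while "we're not using a database at all" calls `retract`.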

## How it works

### Inline resolution during extraction

When the extraction pipeline processes a conversation, it can detect that a new statement conflicts with an existing belief. The LLM attaches a `resolves` directive to the new fact:

```json
{
  "resolves": {
    "session_index": 3,
    "action": "supersede",
    "reason": "correction"
  }
}
```

The `reason` field distinguishes corrections (the previous belief was wrong) from temporal changes (the world moved on). This classification feeds the learning loop.

### Resolution flow

  1. Facts are extracted with optional `resolves` directives targeting existing beliefs by index
  2. `_apply_inline_resolutions` executes the supersede or retract against the store
  3. Superseded facts get `status: SUPERSEDED` and a `replaced_by` link; retracted facts get `status: RETRACTED`
  4. Revision events are recorded in Postgres for downstream learning
  5. Searches automatically exclude superseded and retracted facts
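The steps above can be sketched roughly as follows. The function name, dict shapes, and `id` field are illustrative assumptions; only the `resolves` directive fields and the status values come from this page.

```python
def apply_inline_resolutions(new_facts, session_facts):
    """Apply each new fact's optional `resolves` directive (steps 2-4 above)."""
    events = []
    for fact in new_facts:
        directive = fact.get("resolves")
        if directive is None:
            continue
        # Step 1: directives target existing beliefs by session index.
        target = session_facts[directive["session_index"]]
        if directive["action"] == "supersede":
            target["status"] = "SUPERSEDED"
            target["replaced_by"] = fact["id"]
        elif directive["action"] == "retract":
            target["status"] = "RETRACTED"
        # Step 4: record the revision event for downstream learning.
        events.append({"action": directive["action"],
                       "reason": directive["reason"],
                       "old_id": target["id"]})
    return events

def search(facts):
    # Step 5: searches exclude superseded and retracted facts.
    return [f for f in facts if f["status"] == "ACTIVE"]
```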

### Post-ingestion consolidation

Beyond inline resolution, a separate consolidation pass finds beliefs that overlap or conflict across sessions. This uses embedding similarity and LLM-guided pairwise or cluster resolution to supersede redundant facts even when the user didn’t explicitly correct anything. See Consolidation for details.
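A minimal sketch of the shortlisting step, assuming cosine similarity over fact embeddings; the 0.85 threshold and function names are placeholders, not values from this page. Pairs that clear the threshold would be handed to the LLM for pairwise resolution.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def candidate_pairs(facts, embeddings, threshold=0.85):
    # Shortlist belief pairs similar enough to warrant LLM-guided
    # resolution; the cutoff is an arbitrary placeholder.
    pairs = []
    for i in range(len(facts)):
        for j in range(i + 1, len(facts)):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                pairs.append((facts[i], facts[j]))
    return pairs
```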

## Entrenchment

Not all beliefs are equal. What you explicitly state outranks what the system infers. A core architectural decision outranks a passing mention.

The extraction prompt enforces qualitative entrenchment: the LLM is instructed to treat direct user assertions as higher-weight than system inferences, and to protect core beliefs from silent overrides. This prevents a casual remark from superseding a deliberate decision.
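The page describes entrenchment qualitatively, enforced through the prompt rather than code. As a numeric caricature of the same ordering (the level names and values here are invented for illustration):

```python
# Illustrative only: XTrace encodes entrenchment qualitatively in the
# extraction prompt; these numeric levels and names are assumptions.
ENTRENCHMENT = {
    "direct_user_assertion": 3,
    "core_decision": 2,
    "system_inference": 1,
    "passing_mention": 0,
}

def may_supersede(new_source: str, old_source: str) -> bool:
    # A new belief may only override one that is no more entrenched than it,
    # so a casual remark cannot silently displace a deliberate decision.
    return ENTRENCHMENT[new_source] >= ENTRENCHMENT[old_source]
```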

## Dependency propagation

When a belief changes, everything built on it is potentially affected:

  • Downstream beliefs that reference the changed fact are flagged for review
  • Artifact descriptors are re-synthesized when the facts they summarize are superseded
  • Episode links are preserved so you can trace why a belief existed and when it changed

The `AsyncDescriptorPropagator` handles the artifact side — when an artifact gets a new version, its descriptor facts are superseded and a rationale narrative is generated from the accumulated change reasons.
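The flagging step for downstream beliefs might look like this sketch. The `references` and `needs_review` fields are hypothetical; the page only states that dependents are flagged for review rather than revised automatically.

```python
def flag_dependents(changed_id, facts):
    # Collect downstream beliefs that reference the changed fact and
    # mark them for review instead of revising them automatically.
    flagged = [f for f in facts if changed_id in f.get("references", [])]
    for f in flagged:
        f["needs_review"] = True
    return flagged
```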

## Learning from corrections

Every revision event is stored as a labeled example: what was wrong, what replaced it, and why. Before future extractions, the system retrieves similar past mistakes to improve accuracy — without fine-tuning the model.

This means the system gets better at extraction over time, specific to your domain and your patterns of correction.
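Retrieval of similar past mistakes could be sketched as below, assuming revision events are stored with an embedding of their context; the function name, event shape, and `top_k` default are assumptions. The retrieved examples would be surfaced to the extractor as guidance before it runs.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def similar_past_mistakes(context_emb, revision_events, top_k=3):
    # Rank stored revision events (what was wrong, what replaced it, why)
    # by similarity to the upcoming extraction's context embedding.
    ranked = sorted(revision_events,
                    key=lambda e: cosine(context_emb, e["embedding"]),
                    reverse=True)
    return ranked[:top_k]
```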