Why beliefs

Why agents drift, why memory and RAG don't fix it, and what beliefs change.

A bug you've already debugged

Your agent looks up a market size on turn 3 and says "$4.2B." On turn 12 a tool returns SEC filings showing "$3.8B." On turn 18 the agent cites "$4.2B" again — because that's what it said first, and the context window doesn't distinguish "stated earlier" from "verified."

You've seen this. It looks like flakiness, or hallucination, or model error. It's none of those. It's the absence of a primitive: a structured model of what the agent currently believes, and how that belief changed when new evidence arrived.

```
Turn 3   ─▶ "Market is $4.2B"              ⟵ stated. no source.
Turn 12  ─▶ "SEC filings suggest $3.8B"    ⟵ different number. no comparison.
Turn 18  ─▶ "Market is $4.2B"              ⟵ first one wins. drift wins.
```

There's no tracked confidence. No evidence weight. No detection that the numbers disagree. No awareness that one came from a tool and the other from a guess. A peer-reviewed study and a guess three turns ago look identical to the model.

With beliefs, the same trace becomes:

```typescript
// Turn 3
await beliefs.add('Market is $4.2B', { confidence: 0.5 })  // stated, no evidence

// Turn 12
await beliefs.after(secFilings)
// engine extracts "$3.8B", detects contradicts edge to "$4.2B"
// world.contradictions surfaces both with sources

// Turn 18
const context = await beliefs.before(userMessage)
// context.prompt surfaces both numbers, their sources, the conflict
// the agent now knows the question is unsettled
```

The agent stops self-contradicting because the infrastructure remembers what it has and hasn't investigated.

The five symptoms

These are what drift looks like in practice — what you've seen if you've shipped agents that run more than a few turns:

  1. Agents contradict themselves. Turn 3 cites $4.2B, turn 12 surfaces $3.8B, turn 18 cites $4.2B again because it appeared first. No detection, no resolution.
  2. Confidence is invisible. A claim from one source and a claim from ten look identical in the context window. A three-month-old estimate sits next to yesterday's verified data with no distinction.
  3. Guesses and facts are indistinguishable. A user's intuition and a peer-reviewed study carry equal weight in the prompt. There's no separation between assumption and evidence.
  4. Agents don't know what they don't know. No concept of "gap." No mechanism to prioritize what would reduce the most uncertainty.
  5. Bigger context makes it worse. A 200K window doesn't fix any of this — it carries more conflicting claims with more fluency, and the model interpolates fluently across all of it. Wider window, murkier understanding.
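All five symptoms trace back to the same missing structure. A minimal sketch of what a tracked claim could carry (the field names here are illustrative, not the SDK's actual schema):

```typescript
// Hypothetical shape of a tracked claim -- illustrative, not the SDK's schema.
interface Claim {
  text: string
  confidence: number  // 0..1, updated as evidence arrives
  sources: string[]   // provenance: tool outputs, filings, user statements
  statedAt: number    // timestamp, so stale claims can be discounted
}

// A guess and a tool-backed claim about the same topic stop looking identical:
const guess: Claim = {
  text: 'Market is $4.2B',
  confidence: 0.5,
  sources: [],
  statedAt: Date.parse('2024-01-03'),
}

const verified: Claim = {
  text: 'Market is $3.8B',
  confidence: 0.85,
  sources: ['SEC filing 10-K'],
  statedAt: Date.parse('2024-04-01'),
}

// With structure, "which claim wins?" becomes a comparison, not recency luck:
function prefer(a: Claim, b: Claim): Claim {
  if (a.sources.length !== b.sources.length)
    return a.sources.length > b.sources.length ? a : b
  return a.confidence >= b.confidence ? a : b
}
```

With fields like these, symptoms 1 through 4 become queries over data instead of properties of a context window.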

Memory and RAG don't fix it

Memory and retrieval each solve a piece of the problem (recall what was said, find similar text), but neither models what's currently believed. Here's what the gap looks like in practice:

| Dimension | Memory / RAG | Beliefs |
| --- | --- | --- |
| What it stores | Text chunks and vectors (similarity-based retrieval) | Structured claims with confidence and evidence |
| Uncertainty | None: every retrieved chunk looks equally valid | Two channels: decision resolution + knowledge certainty |
| Conflicts | Returns both conflicting chunks, or last-write-wins | Detects, tracks, and resolves by source reliability |
| Decay | Falls out of context window randomly | Principled decay toward an uninformative prior over time |
| Provenance | "This chunk was retrieved" | Full trail: who stated it, what evidence, how confidence evolved |
| What is missing | No concept | Gaps are first-class: they drive the next action |

A three-month-old market estimate and a verified data point from yesterday look identical in memory. Beliefs distinguishes them by confidence, evidence, and recency.

Memory says "this was mentioned before." Belief state says "this is probably true, but confidence dropped after the latest filing — here's the contradiction, here's the next move." That's the difference between recall and judgment.
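The "principled decay" row above can be made concrete. One common scheme is exponential decay of confidence toward an uninformative prior of 0.5; this is a sketch under assumed parameters (`halfLifeDays` is illustrative), not the engine's actual formula:

```typescript
// Sketch: confidence decays toward an uninformative prior (0.5) as a claim ages.
// halfLifeDays is an assumed tuning parameter, not an SDK value.
function decayedConfidence(
  confidence: number,
  ageDays: number,
  halfLifeDays = 30,
  prior = 0.5,
): number {
  const weight = Math.pow(0.5, ageDays / halfLifeDays)
  return prior + (confidence - prior) * weight
}

decayedConfidence(0.9, 0)    // fresh claim keeps its confidence: 0.9
decayedConfidence(0.9, 30)   // one half-life later: ~0.7
decayedConfidence(0.9, 365)  // a year-old estimate is nearly uninformative: ~0.5
```

Under a rule like this, the three-month-old estimate and yesterday's verified data point diverge automatically, without anyone deleting anything.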

What changes when beliefs are explicit

A worked example. Three agents audit a legacy auth module — a code analyst reads the files, an architecture agent maps service dependencies, a runtime profiler watches actual traffic. They share a namespace with writeScope: 'space', so every observation lands in one fused belief state.

```typescript
// Code analyst
await analyst.after(
  'Auth module has 3 token validation paths. Path A uses JWT. ' +
  'Path B uses custom HMAC. Path C checks a session cookie ' +
  'but never validates expiry. 14 services import this module.'
)

// Architecture agent
await architect.after(
  'Only 6 of 14 services use JWT. 5 use HMAC. ' +
  '3 services use Path C — all customer-facing payment APIs.'
)

// Runtime profiler
await profiler.after(
  'Path C handles 73% of all auth requests. It was a "temporary bypass" ' +
  'added during a migration 2 years ago. The migration completed ' +
  'but the bypass was never removed. 4.2M active sessions use this path.'
)
```

The fused world state reveals what no single agent could have seen alone:

```typescript
const world = await analyst.read()

world.beliefs
// [
//   { text: 'Path C handles 73% of auth traffic',      confidence: 0.92 },
//   { text: 'JWT is the primary auth mechanism',       confidence: 0.15 },
//   { text: 'Path C has no session expiry validation', confidence: 0.85 },
// ]

world.edges
// [{ type: 'contradicts', source: 'JWT is primary', target: 'Path C handles 73%' }]

world.moves
// [{ action: 'research',
//    target: 'Audit Path C session security',
//    reason: 'Payment-facing path with no expiry on 4.2M sessions',
//    value: 0.96 }]
```

The team's stated belief was "JWT is our auth." The agents' fused observation was "73% of traffic runs on a forgotten bypass." That contradiction is invisible in any single agent's report. It's the belief state that makes it visible.

What this unlocks

When beliefs are explicit, the world model stops being read-only:

  • Hidden assumptions become examinable. Beliefs that were silently driving decisions get named, scored, and traceable.
  • Uncertainty becomes directional. The system knows which gap, if filled, would reduce the most uncertainty — and surfaces it as a recommended next move.
  • Contradictions become signal, not noise. A swarm of agents producing partially-overlapping views isn't a coordination failure; it's the input to a fusion engine that resolves through evidence.
  • The frontier becomes visible. What was never investigated is just as trackable as what was.
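"Uncertainty becomes directional" can be sketched with a toy scoring rule: the entropy of a belief's confidence measures how unsettled it is, and weighting by impact ranks which gap to fill next. The scoring below is illustrative, not the fusion engine's actual move-ranking:

```typescript
// Toy ranking of knowledge gaps -- illustrative, not the engine's algorithm.
// Entropy of a Bernoulli(confidence) belief: highest at 0.5, zero at 0 or 1.
function entropy(p: number): number {
  if (p <= 0 || p >= 1) return 0
  return -(p * Math.log2(p) + (1 - p) * Math.log2(1 - p))
}

interface Gap { target: string; confidence: number; impact: number }

function rankGaps(gaps: Gap[]): Gap[] {
  // Value of filling a gap = how uncertain it is x how much it matters.
  return [...gaps].sort(
    (a, b) => entropy(b.confidence) * b.impact - entropy(a.confidence) * a.impact,
  )
}

const next = rankGaps([
  { target: 'JWT is the primary auth mechanism', confidence: 0.15, impact: 0.3 },
  { target: 'Path C session expiry is unvalidated', confidence: 0.5, impact: 0.9 },
])[0]
// next.target -> 'Path C session expiry is unvalidated'
```

A rule in this spirit is what turns "we don't know" from a shrug into a recommended next move.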

This is what beliefs change. The rest of these docs show you how.

Where to next

  • Install: API key, install, scopes (the setup path).
  • Concepts: the vocabulary in depth (Beliefs, Intent, Clarity, Moves, World).
  • Use cases: how belief infrastructure plays out in finance, health, science, and engineering.