internals/architecture.mdx

Architecture

Three layers, one loop. How the environment, belief state, and your agent connect, and what happens on every turn.

The loop

A world model is the environment the agent operates on plus a belief state over it. Every turn runs through the same loop.

┌──────────────────────────────────────────────────────────────────┐
│  ENVIRONMENT                                                      │
│  codebase · market · customer · patient · system                  │
└────────┬───────────────────────────────────────▲─────────────────┘
         │ observe                                │ act
         ▼                                        │
┌──────────────────────────────────────────────┴───────────────────┐
│  BELIEF STATE                                                     │
│                                                                   │
│    PAST              PRESENT             FUTURE                   │
│    ────              ───────             ──────                   │
│    ledger      ───►  active claims  ───► ranked moves             │
│    evidence          confidence          (next action)            │
│    provenance        contradictions      (by info gain)           │
│                                                                   │
└────────┬───────────────────────────────────────▲─────────────────┘
         │ before() → context.prompt              │ after(output)
         │                                        │ → extract + fuse
         ▼                                        │
┌──────────────────────────────────────────────┴───────────────────┐
│  YOUR LLM + AGENT                                                 │
│  Claude · GPT · Gemini · any                                      │
└──────────────────────────────────────────────────────────────────┘

The four phases

Every turn passes through four phases. Three are calls you make; one is internal.

1. Observe: after(output)

New evidence enters the belief state. Tool results, agent outputs, user messages, file reads: anything you pass to after() is extracted into structured claims, fused with the existing state, and folded into the ledger.

const delta = await beliefs.after(toolResult)
// delta.changes lists every belief added, modified, or retracted

Internally: extraction → semantic linking → fusion. See How it works.
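As a rough mental model of those three internal steps, here is a toy sketch — the function names and claim shape are illustrative, not the engine's actual implementation:

```typescript
// Toy model of the observe-phase pipeline: extraction → semantic linking → fusion.
// All names and logic here are illustrative stand-ins, not the SDK's API.
type Claim = { subject: string; statement: string; confidence: number };

// 1. Extraction: pull structured claims out of raw output.
// Trivial stand-in: one claim per non-empty "subject: ..." line.
function extractClaims(raw: string): Claim[] {
  return raw
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => ({
      subject: line.split(":")[0].trim(),
      statement: line.trim(),
      confidence: 0.6,
    }));
}

// 2. Semantic linking: group new claims with existing claims about the same subject.
function linkClaims(existing: Claim[], incoming: Claim[]): Map<string, Claim[]> {
  const bySubject = new Map<string, Claim[]>();
  for (const c of [...existing, ...incoming]) {
    bySubject.set(c.subject, [...(bySubject.get(c.subject) ?? []), c]);
  }
  return bySubject;
}

// 3. Fusion: collapse each subject's claims, keeping the most confident one.
function fuseClaims(linked: Map<string, Claim[]>): Claim[] {
  return [...linked.values()].map((claims) =>
    claims.reduce((best, c) => (c.confidence > best.confidence ? c : best))
  );
}
```

The real pipeline is semantic, not string-based; this only shows the order of operations and why an existing high-confidence claim survives a weaker new observation.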

2. Hold: the belief state

At any moment the belief state has three tenses, each surfaced through the SDK:

| Tense   | What                                                                        | Read via                |
| ------- | --------------------------------------------------------------------------- | ----------------------- |
| Past    | Append-only ledger of every observation, with evidence type and provenance  | beliefs.trace(beliefId) |
| Present | Active claims with confidence, edges, contradictions, gaps, clarity score   | beliefs.read()          |
| Future  | Ranked next actions by expected information gain                            | beliefs.moves.list()    |
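A rough picture of the shapes involved — the field names below are inferred from the table above for illustration, not the SDK's exact return types:

```typescript
// Illustrative shapes only; field names are assumptions based on the table above.
type LedgerEntry = { beliefId: string; evidenceType: string; provenance: string; at: number }; // past
type Belief = { id: string; statement: string; confidence: number };                           // present
type WorldRead = { beliefs: Belief[]; contradictions: string[]; gaps: string[]; clarity: number };
type Move = { action: string; expectedInfoGain: number };                                      // future

// The future tense is ranked: highest expected information gain first.
function isRanked(moves: Move[]): boolean {
  return moves.every((m, i) => i === 0 || moves[i - 1].expectedInfoGain >= m.expectedInfoGain);
}
```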

3. Inject: before(input)

The belief state is rendered into a prompt fragment your LLM call can consume. This is not a dump of everything the engine knows: it selects relevant claims, surfaces open contradictions, includes the ranked next moves, and respects your context budget.

const context = await beliefs.before(userMessage)
const result = await myAgent.run({ system: context.prompt })

context also exposes the structured fields (beliefs, gaps, moves, clarity) so you can branch on state without parsing the prompt.
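For example, you might gate the agent on state before spending tokens. A minimal sketch, assuming context carries the fields named above (the Context shape and the thresholds here are illustrative):

```typescript
// Branch on the structured context instead of parsing the prompt.
// Field names mirror those mentioned above; thresholds are arbitrary examples.
type Context = {
  prompt: string;
  gaps: string[];
  moves: { action: string; expectedInfoGain: number }[];
  clarity: number; // assumed 0..1
};

type Decision = { kind: "run" } | { kind: "clarify"; question: string };

function decide(context: Context): Decision {
  // Low clarity with open gaps: ask the user before running the agent at all.
  if (context.clarity < 0.4 && context.gaps.length > 0) {
    return { kind: "clarify", question: `Can you clarify: ${context.gaps[0]}?` };
  }
  return { kind: "run" };
}
```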

4. Act: the LLM run

Your agent runs against the injected context and produces output. That output becomes the next observation. Loop.
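Wired together, one turn of the loop looks roughly like this. The client and agent types below are stubbed stand-ins so the wiring is visible; error handling is omitted:

```typescript
// One turn of the inject → act → observe loop, with minimal stand-in types
// for the SDK client and your agent. Illustrative only.
type BeliefsClient = {
  before: (input: string) => Promise<{ prompt: string }>;
  after: (output: string) => Promise<{ changes: string[] }>;
};
type Agent = { run: (opts: { system: string; user: string }) => Promise<string> };

async function turn(beliefs: BeliefsClient, agent: Agent, userMessage: string) {
  const context = await beliefs.before(userMessage);                             // inject
  const output = await agent.run({ system: context.prompt, user: userMessage }); // act
  const delta = await beliefs.after(output);                                     // observe: output is evidence
  return { output, delta };
}
```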

Multi-agent fusion

When multiple agents share a namespace and writeScope: 'space', they contribute to one fused belief state. Each agent has an agent identifier; the engine merges contributions in timestamp order, weighting each by the contributing agent's trust, and surfaces cross-agent contradictions that no single agent could see.

const researcher = new Beliefs({ apiKey, namespace: 'market-map', agent: 'researcher', writeScope: 'space' })
const critic     = new Beliefs({ apiKey, namespace: 'market-map', agent: 'critic',     writeScope: 'space' })

await researcher.after(researchOutput)
await critic.after(criticReview)

// Both see the same fused world
const world = await researcher.read()
console.log(world.contradictions.length)
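The merge order described above can be pictured with a toy fusion function — an illustration of the idea (timestamp order, trust weighting, cross-agent contradiction surfacing), not the engine's actual algorithm:

```typescript
// Toy trust-weighted fusion: contributions apply in timestamp order,
// confidence is scaled by the agent's trust, and conflicting statements
// about the same subject from different agents become contradictions.
type Contribution = { agent: string; subject: string; statement: string; confidence: number; at: number };

function fuse(contribs: Contribution[], trust: Record<string, number>) {
  const bySubject = new Map<string, Contribution & { weight: number }>();
  const contradictions: string[] = [];
  for (const c of [...contribs].sort((a, b) => a.at - b.at)) {
    const weight = c.confidence * (trust[c.agent] ?? 0.5);
    const prev = bySubject.get(c.subject);
    if (prev && prev.statement !== c.statement && prev.agent !== c.agent) {
      // A cross-agent contradiction no single agent could see alone.
      contradictions.push(`${c.subject}: "${prev.statement}" (${prev.agent}) vs "${c.statement}" (${c.agent})`);
    }
    if (!prev || weight > prev.weight) bySubject.set(c.subject, { ...c, weight });
  }
  return { claims: [...bySubject.values()], contradictions };
}
```

Note the conflict is recorded even when the lower-weight claim loses the merge: the contradiction stays visible in the fused world rather than being silently overwritten.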

See Scoping for namespace, writeScope, thread, and contextLayers.

What this gets you

  • Coherence across turns. The agent doesn't drift back to its training prior because the user-correction belief has higher evidence weight.
  • Coherence across agents. Multiple agents in one namespace share one fused world; contradictions become visible.
  • Auditable decisions. Every claim has a ledger entry. beliefs.trace(id) returns the full provenance walk.
  • Direction. The future tense is first-class. Moves rank what to do next by information gain, not vibes.
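The "not vibes" point can be made concrete with a toy ranking. Here binary entropy stands in for expected information gain — a move that would test a maximally uncertain claim (confidence near 0.5) ranks highest. The real engine's scoring is not specified here:

```typescript
// Toy move ranking by expected information gain, approximated as the
// binary entropy of the claim a move would test. Illustrative only.
function entropy(p: number): number {
  if (p <= 0 || p >= 1) return 0; // a certain claim has nothing left to learn
  return -p * Math.log2(p) - (1 - p) * Math.log2(1 - p);
}

type Move = { action: string; targetConfidence: number };

function rankMoves(moves: Move[]): Move[] {
  return [...moves].sort((a, b) => entropy(b.targetConfidence) - entropy(a.targetConfidence));
}
```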

Where to next

How it works

Extraction, fusion, decay, ledger.


World

The environment + belief state, in detail.


Patterns

Single-turn, multi-turn, streaming, multi-agent.
