---
title: Architecture
description: "Three layers, one loop. How the environment, belief state, and your agent connect, and what happens on every turn."
---

## The loop

A world model is the **environment** the agent operates in plus a **belief state** over it. Every turn runs through the same loop.

```
┌──────────────────────────────────────────────────────────────────┐
│  ENVIRONMENT                                                      │
│  codebase · market · customer · patient · system                  │
└────────┬───────────────────────────────────────▲─────────────────┘
         │ observe                                │ act
         ▼                                        │
┌──────────────────────────────────────────────┴───────────────────┐
│  BELIEF STATE                                                     │
│                                                                   │
│    PAST              PRESENT             FUTURE                   │
│    ────              ───────             ──────                   │
│    ledger      ───►  active claims  ───► ranked moves             │
│    evidence          confidence          (next action)            │
│    provenance        contradictions      (by info gain)           │
│                                                                   │
└────────┬───────────────────────────────────────▲─────────────────┘
         │ before() → context.prompt              │ after(output)
         │                                        │ → extract + fuse
         ▼                                        │
┌──────────────────────────────────────────────┴───────────────────┐
│  YOUR LLM + AGENT                                                 │
│  Claude · GPT · Gemini · any                                      │
└──────────────────────────────────────────────────────────────────┘
```

## The four phases

Every turn passes through four phases. Three are calls you make; one is internal.

### 1. Observe: `after(output)`

New evidence enters the belief state. Tool results, agent outputs, user messages, file reads: anything you pass to `after()` is extracted into structured claims, fused with the existing state, and folded into the ledger.

```ts
const delta = await beliefs.after(toolResult)
// delta.changes lists every belief added, modified, or retracted
```

Internally: extraction → semantic linking → fusion. See [How it works](/dev/internals/how-it-works).

### 2. Hold: the belief state

At any moment the belief state has three tenses, each surfaced through the SDK:

| Tense | What | Read via |
|---|---|---|
| Past | Append-only ledger of every observation, with evidence type and provenance | `beliefs.trace(beliefId)` |
| Present | Active claims with confidence, edges, contradictions, gaps, clarity score | `beliefs.read()` |
| Future | Ranked next actions by expected information gain | `beliefs.moves.list()` |
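The three reads compose in a few lines. A sketch, assuming the return shapes implied by this page (the `world.beliefs[0].id` access is illustrative; check your SDK version for exact field names):

```ts
const world = await beliefs.read()        // present: claims, confidence, contradictions
const moves = await beliefs.moves.list()  // future: next actions ranked by info gain

// past: full provenance walk for one active claim
const trace = await beliefs.trace(world.beliefs[0].id)
```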

### 3. Inject: `before(input)`

The belief state is rendered into a prompt fragment your LLM call can consume. This isn't a dump of everything: the engine selects relevant claims, surfaces open contradictions, includes the ranked next moves, and respects your context budget.

```ts
const context = await beliefs.before(userMessage)
const result = await myAgent.run({ system: context.prompt })
```

`context` also exposes the structured fields (`beliefs`, `gaps`, `moves`, `clarity`) so you can branch on state without parsing the prompt.
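One way to use those fields is to gate behavior before spending an LLM call. A sketch, where the `0.5` threshold, the `askUser` helper, and the ask-first strategy are illustrative choices, not SDK defaults:

```ts
const context = await beliefs.before(userMessage)

if (context.clarity < 0.5 && context.gaps.length > 0) {
  // Low clarity: ask about the most important gap instead of acting
  return askUser(context.gaps[0])
}

const result = await myAgent.run({ system: context.prompt })
```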

### 4. Act: the LLM run

Your agent runs against the injected context and produces output. That output becomes the next observation. Loop.
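Put together, one turn of the loop is a few lines. A sketch, where the `done` condition and the `myAgent.run` shape are placeholders for your own agent harness:

```ts
let input = userMessage
while (!done) {
  const context = await beliefs.before(input)                   // inject
  const output = await myAgent.run({ system: context.prompt })  // act
  await beliefs.after(output)                                   // observe + fuse
  input = output                                                // output becomes the next observation
}
```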

## Multi-agent fusion

When multiple agents share a `namespace` and `writeScope: 'space'`, they contribute to one fused belief state. Each agent has an `agent` identifier; the engine merges contributions in timestamp order, weighted by per-agent trust, surfacing cross-agent contradictions that no single agent could see.

```ts
const researcher = new Beliefs({ apiKey, namespace: 'market-map', agent: 'researcher', writeScope: 'space' })
const critic     = new Beliefs({ apiKey, namespace: 'market-map', agent: 'critic',     writeScope: 'space' })

await researcher.after(researchOutput)
await critic.after(criticReview)

// Both see the same fused world
const world = await researcher.read()
console.log(world.contradictions.length)
```

See [Scoping](/dev/sdk/scoping) for namespace, writeScope, thread, and contextLayers.

## What this gets you

- **Coherence across turns.** When a user corrects the agent, that correction carries higher evidence weight, so the agent doesn't drift back to its training prior.
- **Coherence across agents.** Multiple agents in one namespace share one fused world; contradictions become visible.
- **Auditable decisions.** Every claim has a ledger entry. `beliefs.trace(id)` returns the full provenance walk.
- **Direction.** The future tense is first-class. Moves rank what to do next by information gain, not vibes.

## Where to next

<CardGroup cols={3}>
  <DocsCard title="How it works" description="Extraction, fusion, decay, ledger." href="/dev/internals/how-it-works" />
  <DocsCard title="World" description="The environment + belief state, in detail." href="/dev/core/world" />
  <DocsCard title="Patterns" description="Single-turn, multi-turn, streaming, multi-agent." href="/dev/sdk/patterns" />
</CardGroup>
