---
title: How it works
description: "The lifecycle of a belief — from observation to fused state, with audit and decay along the way."
---

A mental model of what happens when you call `before` and `after`. The behaviors the engine is required to honor — what you build against — live on the [contracts](/dev/internals/contracts) page.

## The lifecycle

Every piece of information that enters the system follows the same path:

```
observation ──▶ extraction ──▶ fusion ──▶ persistence + audit
                    │              │              │
              structured       merged into     ledger entry
              claims out       world state     for replay
```

You don't manage this lifecycle yourself. Calling `before` and `after` drives it. The runtime mutates state atomically and serially, so every mutation is durable and observable the moment it lands. That's what lets an agent course-correct mid-turn — if the first tool result contradicts a hypothesis, the next call already sees the updated state.
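
For orientation, here's a minimal sketch of one turn. The call shapes are assumptions, not the documented API, and `callModel` stands in for your own model invocation:

```ts
// A minimal turn loop. `before`/`after` call shapes are assumptions here;
// `callModel` stands in for your own model invocation.
declare function callModel(world: unknown, message: string): Promise<string>

async function runTurn(userMessage: string) {
  // before(): pull the current fused world state into prompt context
  const world = await beliefs.before()

  const agentOutput = await callModel(world, userMessage)

  // after(): feed both sides of the turn back through
  // extraction → fusion → persistence + audit
  await beliefs.after(userMessage)
  await beliefs.after(agentOutput)

  return agentOutput
}
```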

The runtime processes updates on two timescales. **Real-time** updates merge as evidence arrives, so later actions in the same turn operate on the newest understanding. **Background** processing runs more thorough analysis between turns: relationship detection, contradiction analysis, reassessment of the overall picture. Both feed the same belief state.

## Fusion: combining contributions

When multiple agents — or multiple turns of the same agent — submit beliefs about the same claim, the engine merges them by trust weight. Higher-trust contributors move the fused state more; lower-trust contributors still contribute but with proportionally less pull. The fused state sharpens when sources agree and stays uncertain when they disagree.
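
To make the proportional pull concrete, here's a toy trust-weighted average. It illustrates the shape of the math, not the engine's actual fusion model:

```ts
// Toy trust-weighted fusion: each contribution pulls the fused value in
// proportion to its share of total trust. Not the engine's real model.
type Contribution = { value: number; trust: number }

function fuse(contributions: Contribution[]): number {
  const totalTrust = contributions.reduce((sum, c) => sum + c.trust, 0)
  return contributions.reduce(
    (sum, c) => sum + c.value * (c.trust / totalTrust),
    0,
  )
}

fuse([
  { value: 5e9, trust: 0.9 }, // high-trust analyst says $5B
  { value: 3e9, trust: 0.3 }, // low-trust scraper says $3B
]) // ≈ 4.5e9: the high-trust source pulls harder
```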

Each agent and source carries a reliability weight. The engine seeds it with a calibrated baseline drawn from observed reliability, and you can override it at runtime via [`beliefs.trust.set()`](/dev/sdk/trust). Trust knobs behave predictably — lowering an agent's weight attenuates its contributions proportionally without affecting any other source.
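
For example, with an assumed `(sourceId, weight)` signature (the [trust page](/dev/sdk/trust) has the real one):

```ts
// Assumed signature: (sourceId, weight in [0, 1]). See the trust docs.
await beliefs.trust.set('web-scraper', 0.3) // attenuate a noisy source
await beliefs.trust.set('senior-analyst', 0.9) // weight a vetted source higher
```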

Fusion is order-independent: combining the same set of contributions in any order produces the same result. Retries after a peer's write don't change the outcome.

## Decay: aging evidence

Without decay, agents act on stale analyses indefinitely — a six-month-old market estimate would carry the same weight as last week's verified data. Decay closes that gap: every belief's evidence weight shrinks over time, so old claims lose influence unless refreshed. Stale claims surface for re-verification rather than silently dominating.

Decay rates are configurable per workspace:

- **Fast** — market sentiment, competitive intelligence, security posture: anything where last month's analysis is probably wrong now.
- **Standard** (default) — market sizing, product positioning, strategic analyses: slow-moving but not static.
- **Slow** — regulatory environments, fundamental research, architectural invariants: evidence stays relevant for quarters or years.
- **None** — ground-truth observations and immutable historical facts. Use sparingly.

Decay applies on read, so the runtime always works with time-adjusted values. Decayed beliefs aren't deleted — they stay in the snapshot at reduced weight, so a UI can render them as muted/needs-re-verification rather than hiding them outright.
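
A toy sketch of that read-time adjustment, with made-up half-lives standing in for the tiers above:

```ts
// Toy decay-on-read with exponential half-lives. The tier-to-half-life
// mapping is illustrative only; the engine's calibrated values will differ.
const HALF_LIFE_DAYS = { fast: 14, standard: 90, slow: 365, none: Infinity }

function decayedWeight(
  weight: number,
  ageDays: number,
  tier: keyof typeof HALF_LIFE_DAYS,
): number {
  const halfLife = HALF_LIFE_DAYS[tier]
  if (!Number.isFinite(halfLife)) return weight // 'none': never decays
  return weight * Math.pow(0.5, ageDays / halfLife)
}

decayedWeight(1.0, 90, 'standard') // → 0.5: a quarter-old claim at half weight
```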

## Evidence: types and the is/ought firewall

Different evidence types carry different weight at fusion time, calibrated so quality matters more than volume. A single verified measurement moves confidence more than several inferences.

| Type | Typical source path |
|------|---------------------|
| `measurement` | Tool results from APIs, databases, instrumentation |
| `citation` | Tool results with cited sources; explicit `add(text, { source })` |
| `user-assertion` | `after(userMessage)` from a user-facing surface |
| `expert-judgment` | `add(text, { evidence })` with attributed reasoning |
| `inference` | `after(agentOutput)` extraction (default for free-form agent text) |
| `assumption` | `add(text, { type: 'assumption' })` |
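
To see "quality over volume" in numbers, assign toy weights to the types above. These values are illustrative, not the engine's calibration:

```ts
// Illustrative per-type weights; the real calibration lives in the engine.
// The point: one measurement (1.0) outweighs three inferences (3 × 0.3 = 0.9).
const EVIDENCE_WEIGHT: Record<string, number> = {
  measurement: 1.0,
  citation: 0.8,
  'user-assertion': 0.6,
  'expert-judgment': 0.6,
  inference: 0.3,
  assumption: 0.1,
}
```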

The engine assigns the type based on the source path during extraction. You can override it with the `evidence` option on `add()` when you know better.
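
Putting the source paths from the table into code (the exact option values here are assumptions):

```ts
// Call shapes come from the table above; option values are assumptions.
await beliefs.add('TAM is $5B', { source: 'https://example.com/market-report' }) // citation
await beliefs.add('Churn is probably seasonal', { type: 'assumption' })
await beliefs.add('The v2 rollout regressed p99 latency', {
  evidence: 'expert-judgment', // override the default when you know better
})
```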

**The is/ought firewall** is the most important design choice in evidence handling. Factual evidence updates beliefs; normative information (preferences, goals, desires) does not.

| Input | Effect |
|-------|--------|
| "The TAM is $5B" | Updates the market size belief |
| "Customer X reported a SOC2 audit failure on 2025-09-12" | Updates compliance/risk beliefs |
| "I want to target enterprise" | Recorded as a goal (intent) |
| "We've decided to target SOC2-compliant buyers" | Recorded as a constraint (intent) |
| "Gartner reports 34% growth" | Updates the growth-rate belief |

Without this separation, a user repeating "I want X" would gradually inflate the agent's confidence that X is *true* — preferences masquerading as evidence. The firewall keeps factual claims and normative intent on separate tracks. See [Intent](/dev/core/intent) for how the normative side is handled.

## Ledger: the audit trail

Every belief mutation lands in an append-only ledger. There's no in-place editing, no silent overwrite, no merge that erases history. If a belief exists in any state today, the ledger says how it got there.

Each entry captures what changed, who changed it, the state before and after, and a human-readable reason. Supersession is recorded as a new entry referencing the old one; deletions land as tombstone entries rather than erasing history.

```ts
// Workspace-wide trail
const all = await beliefs.trace()

// One belief's history
const history = await beliefs.trace('claim_market_size')

for (const entry of history) {
  console.log(`${entry.timestamp} | ${entry.action}`)
  if (entry.confidence) {
    console.log(`  ${entry.confidence.before} → ${entry.confidence.after}`)
  }
  if (entry.reason) console.log(`  reason: ${entry.reason}`)
}
```

For replay-shaped reads — "what did the world look like at time T?" — use [`beliefs.stateAt({ asOf })`](/dev/sdk/core-api#beliefsstateatoptions). It walks the ledger and rebuilds state for you.
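
For example, assuming `asOf` takes an ISO timestamp:

```ts
// What did we believe on June 1? (ISO timestamp format assumed.)
const snapshot = await beliefs.stateAt({ asOf: '2025-06-01T00:00:00Z' })
```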

The ledger is what makes calibration analysis possible (compare stated confidence with eventual outcomes), what makes debugging confidence shifts tractable ("why did this belief drop from 85% to 72%?"), and what makes audit trails possible without reconstruction.

<CardGroup cols={2}>
  <DocsCard title="Behavioral contracts" description="The eight guarantees the engine commits to." href="/dev/internals/contracts" />
  <DocsCard title="Concepts" description="The vocabulary: beliefs, intent, clarity, moves, world." href="/dev/core/beliefs" />
</CardGroup>
