Companies are organizing around world models instead of hierarchies. (Block calls this "from hierarchy to intelligence.") But a world model isn't a pile of context, and it isn't a smarter archive. A world model is a living set of beliefs about reality — what the system thinks is true, why, how confident it is, what contradicts it, and what would change its mind.
Beliefs are the unit of account inside a world model. They are how the model stays true as reality changes.
If that already makes sense, jump straight to a working build. If it doesn't, the model is worth learning before the API.
Pick your path
Learn the model
Why this primitive exists, what it teaches your agent, and how to think with it. ~30 minutes.
Ship in 10 minutes
API key, install, three calls, working agent. Copy-paste path with framework recipes.
Evaluate the fit
Common questions answered. How beliefs differ from RAG, vector stores, and memory.
What each path covers
Learn the model
For developers who want to use beliefs correctly — not just call the API.
The category is new. If you treat before / after like a logging hook, you'll miss what beliefs actually do. The Why section walks through the failure modes that motivated the primitive (agents contradicting themselves, confidence going invisible, gaps with no concept), then the Concepts section teaches the vocabulary (Beliefs, Intent, Clarity, Moves, World), then the Tutorial walks you through a guided build.
Path: Why → The Problem → Concepts → Beliefs → Tutorial → Build a Research Agent
Ship in 10 minutes
For developers who want code running today — hackathon, prototype, exploration.
The Hack Guide hands you a key, an install, the three-step loop, and copy-paste recipes for the Vercel AI SDK, the Anthropic SDK, OpenAI, and plain fetch. Project ideas and troubleshooting are at the end.
Path: Hack Guide
Evaluate the fit
For developers who already have memory, RAG, or a vector store and want to know if beliefs is worth adding.
Six high-leverage questions answered: how it differs from RAG, when memory is enough, what happens with conflicting claims, what persists, what doesn't.
Path: FAQ → Why → Memory vs Beliefs → Quickstart
What every path eventually leads to
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'my-project',
  writeScope: 'space',
})

// Before the agent acts — read current understanding
const context = await beliefs.before(userMessage)

// Run your agent with belief context injected
const result = await myAgent.run({ system: context.prompt })

// Feed the output — beliefs extracted, conflicts detected, state updated
const delta = await beliefs.after(result.text)

That's the loop. The three paths above are three different ways to get to the same place — depending on whether you'd rather learn first or ship first.
Why coding agents first
A codebase is already a compact world. It has laws (types, invariants), assumptions (architecture decisions, dependencies), history (commits, PRs), ownership, and contradictions (stale docs, drifted assumptions). Coding agents are already operating inside this world — but with short-lived context and weak memory.
The first world model thinkⁿ targets is the one your coding agent already lives in. Concrete beliefs the engine can hold for a repo:
belief: Authentication is enforced at the API middleware layer
confidence: 0.82
evidence: middleware.ts, auth.test.ts, architecture.md
contradicts: /api/internal/export bypasses middleware
next move: inspect route-level auth coverage before modifying export flow

Same machinery — different content — applies to research agents (claims about a market), analyst agents (beliefs about a customer or portfolio), or any system that needs to maintain a coherent picture of reality across many turns and many sources.
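To make that record's shape concrete, here is a hypothetical TypeScript view of it. The field names mirror the repo example above; the exact names and types the engine actually returns are assumptions.

// Hypothetical shape of a single belief record. Field names mirror the
// repo example above; the engine's real types may differ.
interface BeliefRecord {
  belief: string         // the claim the system currently holds
  confidence: number     // 0..1, e.g. 0.82
  evidence: string[]     // sources supporting the claim (files, docs, PRs)
  contradicts?: string[] // observations that conflict with the claim
  nextMove?: string      // suggested action to raise or resolve confidence
}

const authBelief: BeliefRecord = {
  belief: 'Authentication is enforced at the API middleware layer',
  confidence: 0.82,
  evidence: ['middleware.ts', 'auth.test.ts', 'architecture.md'],
  contradicts: ['/api/internal/export bypasses middleware'],
  nextMove: 'inspect route-level auth coverage before modifying export flow',
}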
Using a coding agent?
Give your agent the SDK reference: llms.txt. It writes correct code on the first try.