---
title: Start Here
description: "World models, beliefs, and the fastest path into the SDK."
---

Companies and agents alike are organizing around **world models** instead of hierarchies and pipelines. A world model isn't a pile of context, and it isn't a smarter archive. **A world model is a living set of beliefs about reality** — what the system thinks is true, why, how confident, what contradicts it, and what would change its mind.

Beliefs are the operational core of a world model: claims with confidence, evidence, and lifecycle. They are how the model stays accurate as reality changes.
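Concretely, a belief can be pictured as a small typed record. The field names below are illustrative, chosen to mirror the claim/confidence/evidence/lifecycle description above — see the SDK reference for the exact types:

```ts
// Illustrative shape of a belief record (field names are for
// explanation only; consult the SDK reference for exact types).
interface Belief {
  claim: string            // what the system currently holds true
  confidence: number       // 0–1, revised as evidence arrives
  evidence: string[]       // sources supporting the claim
  contradictedBy: string[] // observations pushing against it
  updatedAt: Date          // lifecycle: beliefs age and get revised
}

const belief: Belief = {
  claim: 'Authentication is enforced at the API middleware layer',
  confidence: 0.82,
  evidence: ['middleware.ts', 'auth.test.ts'],
  contradictedBy: ['/api/internal/export bypasses middleware'],
  updatedAt: new Date(),
}
```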

```bash
npm i beliefs
```

```ts
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'my-project',
  writeScope: 'space',
})

// Before the agent acts — read current understanding
const context = await beliefs.before(userMessage)

// Run your agent with belief context injected
const result = await myAgent.run({ system: context.prompt })

// Feed the output — beliefs extracted, conflicts detected, state updated
const delta = await beliefs.after(result.text)
```

That's the loop. Three calls per turn, regardless of which framework you ship on.

## Works with your stack

```ts
import { beliefsHooks } from 'beliefs/claude-agent-sdk'   // Anthropic Claude Agent SDK
import { beliefsMiddleware } from 'beliefs/vercel-ai'     // Vercel AI SDK
// React hooks + browser DevTools — coming soon
```

Or call `beliefs.before()` / `beliefs.after()` manually around any LLM (OpenAI, plain fetch, your own agent loop). See the [Hack Guide](/dev/tutorial/hack-guide) for working recipes across frameworks.
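The manual loop can be sketched framework-free. `BeliefsClient` and `callLLM` below are illustrative stand-ins (not SDK exports) so the shape of a turn is visible on its own:

```ts
// Stand-in for the client so the turn shape is self-contained;
// in a real app this is `new Beliefs(...)` from 'beliefs'.
type BeliefContext = { prompt: string }
interface BeliefsClient {
  before(input: string): Promise<BeliefContext>
  after(output: string): Promise<unknown>
}

// One turn: read beliefs, call any LLM, write back what was learned.
async function turn(
  beliefs: BeliefsClient,
  callLLM: (system: string, user: string) => Promise<string>,
  userMessage: string,
) {
  const context = await beliefs.before(userMessage)        // read current understanding
  const text = await callLLM(context.prompt, userMessage)  // your model, your way
  return beliefs.after(text)                               // extract beliefs, update state
}
```

The middle call is the only part that changes per framework; the two belief calls stay identical.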

## I want to...

| I want to... | Start here |
|---|---|
| **Ship in 10 minutes.** Hackathon, prototype, exploration. | [Hack Guide](/dev/tutorial/hack-guide) — install + framework recipes + project ideas |
| **See it run end-to-end** before committing. | [Quickstart](/dev/start/quickstart) — 30 lines that print clarity rising |
| **Learn the model first**, then build. | [Why beliefs](/dev/why/index) → [Concepts](/dev/core/beliefs) → [Tutorial](/dev/tutorial/research-agent) |
| **Build chat memory** that's separate per conversation. | [Install](/dev/start/install) → use `writeScope: 'thread'` and bind `thread: 'id'` |
| **Run multi-agent shared state** (debate, supervisor/worker, swarm). | [Patterns → Multi-Agent](/dev/sdk/patterns) — same namespace, `writeScope: 'space'` |
| **Audit why an agent believes something.** | [How it works → Ledger](/dev/internals/how-it-works) and [`beliefs.trace()`](/dev/sdk/core-api) |
| **Evaluate fit** before integrating. | [FAQ](/dev/start/faq) — when beliefs help, when they don't |
| **Add beliefs to a Claude Agent SDK app.** | [Adapter: Claude Agent SDK](/dev/adapters/claude-agent-sdk) |
| **Add beliefs to a Vercel AI SDK app.** | [Adapter: Vercel AI](/dev/adapters/vercel-ai) |
| **See it across domains** (finance, health, science, engineering). | [Use cases](/dev/cases/finance) |
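The two scoping rows above differ only in client config. A sketch, using the `writeScope` values from the table (per-call thread binding is left as a comment — the exact binding API is covered in the Install guide):

```ts
import Beliefs from 'beliefs'

// Per-conversation memory: each thread keeps its own belief state.
const chatBeliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'support-bot',
  writeScope: 'thread',
  // bind the conversation id per call — see the Install guide
})

// Multi-agent shared state: agents in the same namespace
// read and write one space.
const sharedBeliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'debate-room',
  writeScope: 'space',
})
```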

## Why coding agents first

A codebase is already a compact world. It has laws (types, invariants), assumptions (architecture decisions, dependencies), history (commits, PRs), ownership, and contradictions (stale docs, drifted assumptions). Coding agents are already operating inside this world — but with short-lived context and weak memory.

The first world model thinkⁿ targets is the one your coding agent already lives in. A concrete belief the engine can hold for a repo:

```
belief:      Authentication is enforced at the API middleware layer
confidence:  0.82
evidence:    middleware.ts, auth.test.ts, architecture.md
contradicts: /api/internal/export bypasses middleware
next move:   inspect route-level auth coverage before modifying export flow
```

The same machinery applies to research agents (claims about a market), analyst agents (beliefs about a customer or portfolio), or any system that needs to maintain a coherent picture of reality across many turns and many sources.

<Callout type="tip" title="Using a coding agent?">
Give your agent the SDK reference: [llms.txt](https://thinkn.ai/llms.txt). With it, your agent writes correct code on the first try.
</Callout>
