---
title: Hack Guide
description: "Zero to building with beliefs in 10 minutes. Everything you need for the hackathon."
---

## Get Your Key

1. Sign in at [thinkn.ai](https://thinkn.ai)
2. Go to [Profile > API Keys](/profile/api-keys)
3. Click **Create Key**, copy the `bel_live_...` value

```bash
export BELIEFS_KEY=bel_live_...
```

## Install

```bash
npm i beliefs
```

Verify the connection:

```bash
node -e "import('beliefs').then(async ({default: B}) => { const b = new B({ apiKey: process.env.BELIEFS_KEY, namespace: 'hack-guide', writeScope: 'space' }); const s = await b.read(); console.log('beliefs:', s.beliefs.length, 'clarity:', s.clarity) })"
```

You should see `beliefs: 0 clarity: 0.25` — an empty belief state with baseline clarity, ready to go.

<Callout type="info" title="Choose the right scope">
These examples use `writeScope: 'space'` so they run immediately. For chat apps, keep the SDK default `writeScope: 'thread'` and bind a thread with the `thread` option or `beliefs.withThread(threadId)`.
</Callout>

<Callout type="tip" title="Using a coding agent?">
Give your agent the SDK reference so it can write correct code on the first try: `https://thinkn.ai/llms.txt`
</Callout>

## The Pattern

Every agent turn follows three steps:

```ts
// 1. What does the agent believe right now?
const context = await beliefs.before(userMessage)

// 2. Run your agent with belief context injected
const result = await myAgent.run({ system: context.prompt })

// 3. Feed the output — beliefs extracted automatically
const delta = await beliefs.after(result.text)
```

That is the entire integration. `before()` gives your agent context about what's already known. `after()` feeds the result back, so the world model learns from the turn — claims extracted, conflicts detected, confidence updated, next moves recomputed.

```
  ┌─────────┐     ┌────────────┐     ┌─────────┐
  │ before()│────▶│ your agent │────▶│ after() │
  │ beliefs │     │ runs here  │     │ extract │
  │ + moves │     │            │     │ + fuse  │
  └─────────┘     └────────────┘     └────┬────┘
       ▲                                  │
       └──────────── next turn ───────────┘
```

### What comes back

`before()` gives you a `BeliefContext`:

- `prompt` — inject this into your agent's system prompt
- `beliefs` — current claims with confidence scores
- `gaps` — what the agent doesn't know yet
- `clarity` — 0-1 readiness score (higher = more confident)
- `moves` — ranked next actions by expected information gain

`after()` gives you a `BeliefDelta`:

- `changes` — what was created, updated, or removed
- `clarity` — updated readiness score
- `readiness` — `'low'`, `'medium'`, or `'high'`
- `moves` — updated next actions
- `state` — full world state after this turn
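
One way to act on these fields in a loop is a small decision helper. This is a sketch, not part of the SDK: the `ContextLike` interface below is a hypothetical minimal shape matching the fields listed above, and the `0.7` threshold is an arbitrary example value.

```typescript
// Minimal shape matching the fields listed above (illustrative, not an SDK export)
interface ContextLike {
  clarity: number
  gaps: { text: string }[]
  moves: { target: string }[]
}

// Decide whether an agent loop should keep going, and summarize why
function shouldContinue(
  ctx: ContextLike,
  threshold = 0.7,
): { proceed: boolean; reason: string } {
  if (ctx.clarity >= threshold) {
    return { proceed: false, reason: `clarity ${ctx.clarity.toFixed(2)} is at or above ${threshold}` }
  }
  if (ctx.moves.length === 0) {
    return { proceed: false, reason: 'no moves left to make' }
  }
  return { proceed: true, reason: `${ctx.gaps.length} gaps open, next: ${ctx.moves[0].target}` }
}
```

Check `shouldContinue(context)` after each `before()` call to break out of a research loop once clarity is high enough or the move queue is empty.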

---

## Framework Recipes

### Vercel AI SDK

Best for: streaming, multi-provider model swapping, and TypeScript-first ergonomics.

```ts
import { generateText } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'vercel-ai-hack',
  writeScope: 'space',
})

async function research(question: string) {
  const context = await beliefs.before(question)

  const { text } = await generateText({
    model: anthropic('claude-sonnet-4-20250514'),
    system: context.prompt,
    prompt: question,
  })

  const delta = await beliefs.after(text)
  console.log(`clarity: ${delta.clarity}, changes: ${delta.changes.length}`)
  return text
}
```

With streaming:

```ts
import { streamText } from 'ai'

const context = await beliefs.before(question)
const result = streamText({
  model: anthropic('claude-sonnet-4-20250514'),
  system: context.prompt,
  prompt: question,
})

let fullText = ''
for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
  fullText += chunk
}

await beliefs.after(fullText)
```

<Callout type="warning" title="Streaming lifecycle">
Call `after()` exactly once per turn, after the stream completes. Do not call it on partial chunks — each call triggers extraction and fusion. Calling per-chunk creates duplicate beliefs from incomplete text.
</Callout>

### Anthropic SDK

Best for: direct control over Claude features (tool use, vision, extended thinking), minimal dependencies.

```ts
import Anthropic from '@anthropic-ai/sdk'
import Beliefs from 'beliefs'

const client = new Anthropic()
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'anthropic-hack',
  writeScope: 'space',
})

async function research(question: string) {
  const context = await beliefs.before(question)

  const message = await client.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 4096,
    system: context.prompt,
    messages: [{ role: 'user', content: question }],
  })

  const text = message.content
    .filter(b => b.type === 'text')
    .map(b => b.text)
    .join('')

  const delta = await beliefs.after(text)
  return { text, delta }
}
```

### OpenAI SDK

Best for: GPT/o-series models, Responses API workflows, OpenAI-native ecosystems.

```ts
import OpenAI from 'openai'
import Beliefs from 'beliefs'

const openai = new OpenAI()
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'openai-hack',
  writeScope: 'space',
})

async function research(question: string) {
  const context = await beliefs.before(question)

  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: context.prompt },
      { role: 'user', content: question },
    ],
  })

  const text = completion.choices[0]?.message?.content ?? ''
  const delta = await beliefs.after(text)
  return { text, delta }
}
```

> If using an o-series model (o3, o4-mini), change `role: 'system'` to `role: 'developer'`.
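
If your demo swaps between model families, one way to pick the right role automatically is a plain helper like this (not part of any SDK, just a convention check on the model name):

```typescript
// o-series reasoning models (o3, o4-mini, ...) expect 'developer'
// where GPT models use 'system'
function instructionRole(model: string): 'system' | 'developer' {
  return /^o\d/.test(model) ? 'developer' : 'system'
}
```

Then build messages as `[{ role: instructionRole(model), content: context.prompt }, ...]`.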

### Any LLM / Plain Fetch

Best for: serverless/edge runtimes, custom or self-hosted models, anywhere you don't want a vendor SDK in the dependency tree.

The SDK works with anything that produces text. Call `before`, pass `context.prompt` to your model, call `after` with the output.

```ts
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'plain-fetch-hack',
  writeScope: 'space',
})

async function withBeliefs(input: string, runAgent: (prompt: string) => Promise<string>) {
  const context = await beliefs.before(input)
  const output = await runAgent(context.prompt + '\n\nUser: ' + input)
  const delta = await beliefs.after(output)
  return { output, delta }
}
```

---

## Project Ideas

<Callout type="info" title="These are scaffolds, not runnable files">
The snippets below use `callLLM(systemPrompt, userMessage)` and `searchWeb(query)` as stand-ins for your model and search tool of choice. Plug in any of the four framework recipes above (Vercel AI, Anthropic, OpenAI, plain fetch) wherever you see `callLLM(...)`, and any search API for `searchWeb(...)`. The point of these examples is the belief flow, not the LLM wiring.
</Callout>

### Research Agent (Beginner)

An agent that researches a topic and tracks what it knows, what conflicts, and what's missing. Use `clarity` to decide when to stop researching and summarize.

```ts
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'research-agent',
  writeScope: 'space',
})

async function deepResearch(topic: string) {
  await beliefs.add(`Research: ${topic}`, { type: 'goal' })

  for (let turn = 0; turn < 5; turn++) {
    const context = await beliefs.before(topic)
    if (context.clarity > 0.7) break

    const result = await callLLM(context.prompt, topic)
    const delta = await beliefs.after(result)

    console.log(`Turn ${turn + 1}: clarity ${delta.clarity.toFixed(2)}, ` +
      `${delta.changes.length} new beliefs`)
  }

  const world = await beliefs.read()
  return { beliefs: world.beliefs, gaps: world.gaps, clarity: world.clarity }
}

deepResearch('AI developer tools market').then(console.log)
```

### Multi-Agent Debate (Intermediate)

Two agents with different perspectives contribute to the same namespace. The belief system detects contradictions and tracks which claims survive.

```ts
const optimist = new Beliefs({ apiKey, agent: 'optimist', namespace: 'debate', writeScope: 'space' })
const skeptic = new Beliefs({ apiKey, agent: 'skeptic', namespace: 'debate', writeScope: 'space' })

const bullCase = await callLLM('You are the optimist.', 'Make the bull case for AI startups in 2026')
await optimist.after(bullCase)

const bearCase = await callLLM('You are the skeptic.', 'Make the bear case for AI startups in 2026')
await skeptic.after(bearCase)

const world = await optimist.read()
console.log(`Contradictions: ${world.contradictions.length}`)
console.log(`Beliefs: ${world.beliefs.length}`)
```

### Fact Checker (Intermediate)

Verify claims by gathering evidence. Watch confidence shift as supporting and refuting evidence arrives.

```ts
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'fact-check',
  writeScope: 'space',
})

async function checkClaim(claim: string) {
  await beliefs.add(claim, { confidence: 0.5 })

  const sources = await searchWeb(claim)
  for (const source of sources) {
    await beliefs.after(source.text, { tool: 'web_search' })
  }

  const page = await beliefs.list({ query: claim })
  return page.beliefs.map(b => ({ text: b.text, confidence: b.confidence }))
}

checkClaim('Global AI market is worth $200B by 2030').then(console.log)
```

### Decision Support (Advanced)

Use `moves` and `clarity` to build a system that tells you when you have enough information to make a decision, and what you should investigate next.

```ts
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'decision-support',
  writeScope: 'space',
})

async function decisionLoop(question: string) {
  await beliefs.add(question, { type: 'goal' })

  while (true) {
    const context = await beliefs.before(question)
    if (context.clarity > 0.8) {
      return { recommendation: context.beliefs, confidence: context.clarity }
    }

    const nextMove = context.moves[0]
    if (!nextMove) break

    console.log(`Investigating: ${nextMove.target} (value: ${nextMove.value})`)
    const result = await callLLM(context.prompt, `Investigate: ${nextMove.target}`)
    await beliefs.after(result)
  }
}

decisionLoop('Should we enter the European market?').then(console.log)
```

---

## API Cheatsheet

| Method | What it does | Returns |
|--------|-------------|---------|
| `before(input?)` | Get current beliefs + next moves | `BeliefContext` |
| `after(text, { tool? })` | Feed agent output, extract beliefs | `BeliefDelta` |
| `add(text, opts?)` | Assert a belief, goal, or gap | `BeliefDelta` |
| `add([...items])` | Assert multiple in one request | `BeliefDelta` |
| `resolve(text)` | Mark a gap as resolved (exact text match) | `BeliefDelta` |
| `retract(id, reason?)` | Retract a belief (stays in graph) | `BeliefDelta` |
| `remove(id)` | Delete a belief entirely | `BeliefDelta` |
| `reset()` | Clear all state in this scope | `{ removed }` |
| `read()` | Full world state with clarity + moves | `WorldState` |
| `snapshot()` | Lightweight state without clarity/moves | `BeliefSnapshot` |
| `list({ query, filter, limit })` | Paged search by keyword + filters | `BeliefList` |
| `trace(beliefId?)` | Audit trail of belief changes | `TraceEntry[]` |

Belief types: `claim`, `assumption`, `evidence`, `risk`, `gap`, `goal`
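
The batch form of `add` pairs naturally with these types for seeding a namespace before the first turn. A sketch: the item shape below follows the `add(text, opts?)` signature, but field names beyond `text`, `type`, and `confidence` are assumptions, and the example texts are invented.

```typescript
// Seed a namespace with a goal, an assumption, and a known gap in one request
const seedItems = [
  { text: 'Ship a working demo by Sunday', type: 'goal' },
  { text: 'Users will tolerate a 2s response time', type: 'assumption', confidence: 0.6 },
  { text: 'No pricing data collected yet', type: 'gap' },
]

// await beliefs.add(seedItems)
```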

---

## Troubleshooting

**`BetaAccessError: beliefs is in private beta…`**
Either your API key is missing from the environment, or it's not on the beta allowlist. Check that `BELIEFS_KEY` is exported in the shell you're running from, then verify the key at [Profile > API Keys](/profile/api-keys). New keys start with `bel_live_`.

**`resolve()` didn't remove my gap**
`resolve(text)` matches gap text exactly. Pass the same string you originally added, or call `read()` and copy the gap text from `state.gaps`.
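
If you only remember part of the gap text, one way to recover the exact stored string is to search the gaps first. A sketch, assuming gap entries carry a `text` field as in the examples above:

```typescript
// Find the exact stored text of a gap from a partial, case-insensitive match
function findGapText(gaps: { text: string }[], fragment: string): string | undefined {
  const needle = fragment.toLowerCase()
  return gaps.find(g => g.text.toLowerCase().includes(needle))?.text
}

// const state = await beliefs.read()
// const exact = findGapText(state.gaps, 'pricing')
// if (exact) await beliefs.resolve(exact)
```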

**HTTP 429 — Rate limit exceeded**
The API allows 60 requests/minute per key. Add a small delay between calls in loops, or batch your work into fewer turns.
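
A minimal client-side limiter sketch: track when requests were sent and compute how long to wait before the next one. The 60/min window matches the documented limit; the helper itself is not part of the SDK.

```typescript
// Milliseconds to wait before the next request, given recent request timestamps
function nextDelay(sent: number[], now: number, limit = 60, windowMs = 60_000): number {
  const recent = sent.filter(t => now - t < windowMs)
  if (recent.length < limit) return 0
  // Wait until the oldest request in the window ages out
  return Math.min(...recent) + windowMs - now
}
```

Call `nextDelay` before each request and `await new Promise(r => setTimeout(r, delay))` when it returns a positive value.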

**Empty beliefs after `after()`**
The text you passed might be too short or not contain extractable claims. Try passing a longer, more substantive output. The extraction works best with paragraphs of analysis, not single sentences.

**`before()` returns empty state**
This is expected on a fresh namespace. Beliefs accumulate as you call `after()` and `add()`. The first `before()` will always have zero beliefs.

**Different agents not seeing each other's beliefs**
Make sure both agents use the same `namespace` and a shared write scope. The `agent` parameter identifies who contributed, but `namespace` plus `writeScope: 'space'` determine the shared state.

```ts
const a = new Beliefs({ apiKey, agent: 'agent-a', namespace: 'shared', writeScope: 'space' })
const b = new Beliefs({ apiKey, agent: 'agent-b', namespace: 'shared', writeScope: 'space' })
```

**Need to see what the SDK is doing?**
Enable debug mode to log every request and response:

```ts
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'debug-example',
  writeScope: 'space',
  debug: true,
})
```
