
Hack Guide

Zero to building with beliefs in 10 minutes. Everything you need for the hackathon.

Get Your Key

  1. Sign in at thinkn.ai
  2. Go to Profile > API Keys
  3. Click Create Key and copy the `bel_live_...` value, then export it:

```bash
export BELIEFS_KEY=bel_live_...
```

Install

```bash
npm i beliefs
```

Verify the connection:

```bash
node -e "import('beliefs').then(async ({default: B}) => { const b = new B({ apiKey: process.env.BELIEFS_KEY, namespace: 'hack-guide', writeScope: 'space' }); const s = await b.read(); console.log('beliefs:', s.beliefs.length, 'clarity:', s.clarity) })"
```

You should see `beliefs: 0 clarity: 0.25` — an empty belief state with baseline clarity, ready to go.

Choose the right scope

These examples use `writeScope: 'space'` so they run immediately. For chat apps, keep the SDK default `writeScope: 'thread'` and bind a thread with the `thread` option or `beliefs.withThread(threadId)`.
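The two setups can be sketched as a small options helper. The option names `writeScope` and `thread` come from this guide; the helper itself is illustrative, not part of the SDK:

```typescript
// Illustrative helper: choose constructor options per use case.
// 'writeScope' and 'thread' are the option names used in this guide.
function beliefOptions(useCase: 'hack-demo' | 'chat-app', threadId?: string) {
  if (useCase === 'chat-app') {
    // Keep the SDK default thread scope and bind the conversation.
    return { writeScope: 'thread' as const, thread: threadId }
  }
  // Hackathon demos: space scope, so examples run with no thread setup.
  return { writeScope: 'space' as const }
}
```

Spread the result into your constructor config alongside `apiKey` and `namespace`.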

Using a coding agent?

Give your agent the SDK reference so it can write correct code on the first try: https://thinkn.ai/llms.txt

The Pattern

Every agent turn follows three steps:

```typescript
// 1. What does the agent believe right now?
const context = await beliefs.before(userMessage)

// 2. Run your agent with belief context injected
const result = await myAgent.run({ system: context.prompt })

// 3. Feed the output — beliefs extracted automatically
const delta = await beliefs.after(result.text)
```

That is the entire integration. `before()` returns the agent's current understanding. `after()` extracts claims, detects conflicts, tracks confidence, and tells you what to do next.

```
  ┌─────────┐     ┌────────────┐     ┌─────────┐
  │ before()│────▶│ your agent │────▶│ after() │
  │ beliefs │     │ runs here  │     │ extract │
  │ + moves │     │            │     │ + fuse  │
  └─────────┘     └────────────┘     └────┬────┘
       ▲                                  │
       └──────────── next turn ───────────┘
```

What comes back

`before()` gives you a `BeliefContext`:

  • `prompt` — inject this into your agent's system prompt
  • `beliefs` — current claims with confidence scores
  • `gaps` — what the agent doesn't know yet
  • `clarity` — 0-1 readiness score (higher = more confident)
  • `moves` — next actions, ranked by expected information gain

`after()` gives you a `BeliefDelta`:

  • `changes` — what was created, updated, or removed
  • `clarity` — updated readiness score
  • `readiness` — `'low'`, `'medium'`, or `'high'`
  • `moves` — updated next actions
  • `state` — full world state after this turn
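One way to consume the delta in a loop, sketched below. The field names match the lists above; the decision logic and its thresholds are an illustrative policy, not SDK behavior:

```typescript
// Field names follow the BeliefContext/BeliefDelta descriptions above;
// the control-flow policy is just one plausible way to use them.
type Readiness = 'low' | 'medium' | 'high'
interface Move { target: string; value: number }
interface DeltaSummary {
  changes: unknown[]
  clarity: number
  readiness: Readiness
  moves: Move[]
}

// After each turn: stop when readiness is high, otherwise chase the top move.
function decideNext(delta: DeltaSummary): string {
  if (delta.readiness === 'high') return 'summarize'
  if (delta.moves.length > 0) return `investigate: ${delta.moves[0].target}`
  return 'ask the user for more input'
}
```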

Framework Recipes

Vercel AI SDK

```typescript
import { generateText } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'vercel-ai-hack',
  writeScope: 'space',
})

async function research(question: string) {
  const context = await beliefs.before(question)

  const { text } = await generateText({
    model: anthropic('claude-sonnet-4-20250514'),
    system: context.prompt,
    prompt: question,
  })

  const delta = await beliefs.after(text)
  console.log(`clarity: ${delta.clarity}, changes: ${delta.changes.length}`)
  return text
}
```

With streaming:

```typescript
import { streamText } from 'ai'

const context = await beliefs.before(question)
const result = streamText({
  model: anthropic('claude-sonnet-4-20250514'),
  system: context.prompt,
  prompt: question,
})

let fullText = ''
for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
  fullText += chunk
}

await beliefs.after(fullText)
```

Streaming lifecycle

Call after() exactly once per turn, after the stream completes. Do not call it on partial chunks — each call triggers extraction and fusion. Calling per-chunk creates duplicate beliefs from incomplete text.
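One way to enforce this is to buffer the stream and fire a single callback when it ends. A minimal sketch; the helper is illustrative, not part of the SDK:

```typescript
// Collect every chunk, then invoke `report` exactly once with the full text.
// Pass e.g. `(text) => beliefs.after(text)` as the report callback.
async function collectThenReport(
  chunks: AsyncIterable<string>,
  report: (fullText: string) => Promise<unknown>,
): Promise<string> {
  let fullText = ''
  for await (const chunk of chunks) {
    fullText += chunk
  }
  await report(fullText) // one extraction per turn, never per chunk
  return fullText
}
```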

Anthropic SDK

```typescript
import Anthropic from '@anthropic-ai/sdk'
import Beliefs from 'beliefs'

const client = new Anthropic()
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'anthropic-hack',
  writeScope: 'space',
})

async function research(question: string) {
  const context = await beliefs.before(question)

  const message = await client.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 4096,
    system: context.prompt,
    messages: [{ role: 'user', content: question }],
  })

  const text = message.content
    .filter(b => b.type === 'text')
    .map(b => b.text)
    .join('')

  const delta = await beliefs.after(text)
  return { text, delta }
}
```

OpenAI SDK

```typescript
import OpenAI from 'openai'
import Beliefs from 'beliefs'

const openai = new OpenAI()
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'openai-hack',
  writeScope: 'space',
})

async function research(question: string) {
  const context = await beliefs.before(question)

  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: context.prompt },
      { role: 'user', content: question },
    ],
  })

  const text = completion.choices[0]?.message?.content ?? ''
  const delta = await beliefs.after(text)
  return { text, delta }
}
```

If you're using an o-series model (`o3`, `o4-mini`), change `role: 'system'` to `role: 'developer'`.
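A small helper can pick the role by model family. The role names come from the OpenAI API; the helper and its model-name check are an illustrative sketch:

```typescript
// o-series reasoning models (o3, o4-mini, ...) take 'developer' instead of 'system'.
function instructionRole(model: string): 'system' | 'developer' {
  return /^o\d/.test(model) ? 'developer' : 'system'
}

// Build the messages array with the right instruction role for the model.
function buildMessages(model: string, beliefPrompt: string, question: string) {
  return [
    { role: instructionRole(model), content: beliefPrompt },
    { role: 'user' as const, content: question },
  ]
}
```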

Any LLM / Plain Fetch

The SDK works with anything that produces text. Call `before()`, pass `context.prompt` to your model, then call `after()` with the output.

```typescript
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'plain-fetch-hack',
  writeScope: 'space',
})

async function withBeliefs(input: string, runAgent: (prompt: string) => Promise<string>) {
  const context = await beliefs.before(input)
  const output = await runAgent(context.prompt + '\n\nUser: ' + input)
  const delta = await beliefs.after(output)
  return { output, delta }
}
```

Project Ideas

Research Agent (Beginner)

An agent that researches a topic and tracks what it knows, what conflicts, and what's missing. Use `clarity` to decide when to stop researching and summarize.

```typescript
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'research-agent',
  writeScope: 'space',
})

async function deepResearch(topic: string) {
  await beliefs.add(`Research: ${topic}`, { type: 'goal' })

  for (let turn = 0; turn < 5; turn++) {
    const context = await beliefs.before(topic)
    if (context.clarity > 0.7) break

    const result = await callLLM(context.prompt, topic)
    const delta = await beliefs.after(result)

    console.log(`Turn ${turn + 1}: clarity ${delta.clarity.toFixed(2)}, ` +
      `${delta.changes.length} new beliefs`)
  }

  const world = await beliefs.read()
  return { beliefs: world.beliefs, gaps: world.gaps, clarity: world.clarity }
}

deepResearch('AI developer tools market').then(console.log)
```

Multi-Agent Debate (Intermediate)

Two agents with different perspectives contribute to the same namespace. The belief system detects contradictions and tracks which claims survive.

```typescript
const optimist = new Beliefs({ apiKey, agent: 'optimist', namespace: 'debate', writeScope: 'space' })
const skeptic = new Beliefs({ apiKey, agent: 'skeptic', namespace: 'debate', writeScope: 'space' })

const bullCase = await callLLM('Make the bull case for AI startups in 2026')
await optimist.after(bullCase)

const bearCase = await callLLM('Make the bear case for AI startups in 2026')
await skeptic.after(bearCase)

const world = await optimist.read()
console.log(`Contradictions: ${world.contradictions.length}`)
console.log(`Beliefs: ${world.beliefs.length}`)
```

Fact Checker (Intermediate)

Verify claims by gathering evidence. Watch confidence shift as supporting and refuting evidence arrives.

```typescript
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'fact-check',
  writeScope: 'space',
})

async function checkClaim(claim: string) {
  await beliefs.add(claim, { confidence: 0.5 })

  const sources = await searchWeb(claim)
  for (const source of sources) {
    await beliefs.after(source.text, { tool: 'web_search' })
  }

  const results = await beliefs.search(claim)
  return results.map(b => ({ text: b.text, confidence: b.confidence }))
}

checkClaim('Global AI market is worth $200B by 2030').then(console.log)
```

Decision Support (Advanced)

Use `moves` and `clarity` to build a system that tells you when you have enough information to make a decision, and what you should investigate next.

```typescript
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'decision-support',
  writeScope: 'space',
})

async function decisionLoop(question: string) {
  await beliefs.add(question, { type: 'goal' })

  while (true) {
    const context = await beliefs.before(question)
    if (context.clarity > 0.8) {
      return { recommendation: context.beliefs, confidence: context.clarity }
    }

    const nextMove = context.moves[0]
    if (!nextMove) break

    console.log(`Investigating: ${nextMove.target} (value: ${nextMove.value})`)
    const result = await callLLM(
      `${context.prompt}\n\nInvestigate: ${nextMove.target}`
    )
    await beliefs.after(result)
  }
}

decisionLoop('Should we enter the European market?').then(console.log)
```

API Cheatsheet

| Method | What it does | Returns |
| --- | --- | --- |
| `before(input?)` | Get current beliefs + next moves | `BeliefContext` |
| `after(text, { tool? })` | Feed agent output, extract beliefs | `BeliefDelta` |
| `add(text, opts?)` | Assert a belief, goal, or gap | `BeliefDelta` |
| `add([...items])` | Assert multiple in one request | `BeliefDelta` |
| `resolve(text)` | Mark a gap as resolved (exact text match) | `BeliefDelta` |
| `retract(id, reason?)` | Retract a belief (stays in graph) | `BeliefDelta` |
| `remove(id)` | Delete a belief entirely | `BeliefDelta` |
| `reset()` | Clear all state in this scope | `{ removed }` |
| `read()` | Full world state with clarity + moves | `WorldState` |
| `snapshot()` | Lightweight state without clarity/moves | `BeliefSnapshot` |
| `search(query)` | Find beliefs by keyword | `Belief[]` |
| `trace(beliefId?)` | Audit trail of belief changes | `TraceEntry[]` |

Belief types: `claim`, `assumption`, `evidence`, `risk`, `gap`, `goal`
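For example, a batch for `add([...items])` mixing several types. The `{ text, type, confidence }` item shape is inferred from this guide's `add()` examples, so treat it as an assumption:

```typescript
// One request seeding a namespace with mixed belief types.
// Item shape inferred from the add(text, opts) examples in this guide.
const seedItems = [
  { text: 'Decide whether to enter the EU market', type: 'goal' },
  { text: 'Competitor pricing dropped 20% last quarter', type: 'evidence', confidence: 0.8 },
  { text: 'Regulatory review could delay launch', type: 'risk' },
  { text: 'No churn data for the DACH region yet', type: 'gap' },
]
// await beliefs.add(seedItems)
```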


Troubleshooting

**BetaAccessError: beliefs is in private beta…**
Either your API key is missing from the environment, or it's not on the beta allowlist. Check that `BELIEFS_KEY` is exported in the shell you're running from, then verify the key at Profile > API Keys. New keys start with `bel_live_`.

**resolve() didn't remove my gap**
`resolve(text)` matches gap text exactly. Pass the same string you originally added, or call `read()` and copy the gap text from `state.gaps`.
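If you only remember a keyword, a tiny lookup over the gaps array recovers the exact stored text to hand to `resolve()`. The helper is illustrative; only the array of `{ text }` gap entries comes from the guide:

```typescript
// Find the exact stored gap text by keyword, so resolve() gets a verbatim match.
function findGapText(gaps: { text: string }[], keyword: string): string | undefined {
  const needle = keyword.toLowerCase()
  return gaps.find(gap => gap.text.toLowerCase().includes(needle))?.text
}
// const exact = findGapText((await beliefs.read()).gaps, 'pricing')
// if (exact) await beliefs.resolve(exact)
```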

**HTTP 429 — Rate limit exceeded**
The API allows 60 requests/minute per key. Add a small delay between calls in loops, or batch your work into fewer turns.
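A minimal pacing helper for loops; waiting just over a second between calls keeps you under 60 requests/minute:

```typescript
// Resolve after `ms` milliseconds; await it between SDK calls in tight loops.
function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms))
}
// for (const source of sources) {
//   await beliefs.after(source.text)
//   await sleep(1100) // ~55 calls/minute, under the 60/min limit
// }
```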

**Empty beliefs after after()**
The text you passed might be too short or contain no extractable claims. Pass a longer, more substantive output: extraction works best on paragraphs of analysis, not single sentences.

**before() returns empty state**
This is expected on a fresh namespace. Beliefs accumulate as you call `after()` and `add()`. The first `before()` will always have zero beliefs.

**Different agents not seeing each other's beliefs**
Make sure both agents use the same `namespace` and a shared write scope. The `agent` parameter identifies who contributed, but `namespace` plus `writeScope: 'space'` determine the shared state.

```typescript
const a = new Beliefs({ apiKey, agent: 'agent-a', namespace: 'shared', writeScope: 'space' })
const b = new Beliefs({ apiKey, agent: 'agent-b', namespace: 'shared', writeScope: 'space' })
```

Need to see what the SDK is doing? Enable debug mode to log every request and response:

```typescript
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'debug-example',
  writeScope: 'space',
  debug: true,
})
```