## Use Today with the Core SDK
The core `beliefs` package works with the Vercel AI SDK right now. Wrap your `generateText` or `streamText` calls with `before`/`after`:
```bash
npm i beliefs ai @ai-sdk/anthropic
```

### With generateText
```ts
import { generateText } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  agent: 'research-agent',
  namespace: 'vercel-ai',
  writeScope: 'space',
})

async function research(question: string) {
  const context = await beliefs.before(question)

  const { text } = await generateText({
    model: anthropic('claude-sonnet-4-20250514'),
    system: context.prompt,
    prompt: question,
  })

  const delta = await beliefs.after(text)
  console.log(`clarity: ${delta.clarity}, changes: ${delta.changes.length}`)
  return text
}
```

### With streamText
```ts
import { streamText } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  agent: 'research-agent',
  namespace: 'vercel-ai',
  writeScope: 'space',
})

async function researchStream(question: string) {
  const context = await beliefs.before(question)

  const result = streamText({
    model: anthropic('claude-sonnet-4-20250514'),
    system: context.prompt,
    prompt: question,
  })

  let fullText = ''
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk)
    fullText += chunk
  }

  const delta = await beliefs.after(fullText)
  return { text: fullText, delta }
}
```

### Streaming lifecycle
Call `after()` exactly once per turn, after the stream completes. Do not call it on partial chunks: each call triggers extraction and fusion, so calling per-chunk creates duplicate beliefs from incomplete text. For Next.js route handlers, use the `onFinish` callback shown below.
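A minimal sketch of this lifecycle, using stand-in names (`mockStream` and a fake `after` counter rather than the real SDK and stream), shows the buffer-then-extract shape:

```typescript
// Sketch of the once-per-turn lifecycle. mockStream and after are
// stand-ins for the real stream and SDK call, used only to show shape.
let afterCalls = 0

// Fake after(): in the real SDK each call triggers extraction
// and fusion, which is why it must run once per turn.
async function after(text: string): Promise<void> {
  afterCalls += 1
}

// Fake token stream standing in for result.textStream.
async function* mockStream(): AsyncGenerator<string> {
  yield 'Belief extraction '
  yield 'needs the complete text.'
}

async function turn(): Promise<string> {
  let fullText = ''
  for await (const chunk of mockStream()) {
    // Buffer only. Calling after(chunk) here would extract
    // duplicate beliefs from incomplete fragments.
    fullText += chunk
  }
  // Exactly one extraction, after the stream completes.
  await after(fullText)
  return fullText
}
```

The same shape holds whether the chunks come from `result.textStream` or an `onFinish` callback: one complete string in, one `after()` call out.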
### With Tool Results
Feed tool results individually so beliefs update as evidence arrives:
```ts
import { generateText, tool } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import { z } from 'zod'

const context = await beliefs.before(question)

const { text, toolResults } = await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  system: context.prompt,
  prompt: question,
  tools: {
    search: tool({
      description: 'Search the web',
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => searchWeb(query),
    }),
  },
  maxSteps: 5,
})

for (const result of toolResults) {
  await beliefs.after(JSON.stringify(result.result), { tool: result.toolName })
}
await beliefs.after(text)
```

### In a Next.js Route Handler
```ts
import { streamText } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  agent: 'chat-agent',
  namespace: 'chat',
  writeScope: 'space',
})

export async function POST(req: Request) {
  const { messages } = await req.json()
  const lastMessage = messages[messages.length - 1]?.content ?? ''

  const context = await beliefs.before(lastMessage)

  const result = streamText({
    model: anthropic('claude-sonnet-4-20250514'),
    system: context.prompt,
    messages,
    onFinish: async ({ text }) => {
      await beliefs.after(text)
    },
  })

  return result.toDataStreamResponse()
}
```

## Middleware Adapter
```bash
npm i beliefs
```

The adapter integrates through the Vercel AI SDK's middleware system. Wrap your model with `beliefsMiddleware` for automatic belief extraction:
```ts
import { generateText, wrapLanguageModel } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import Beliefs from 'beliefs'
import { beliefsMiddleware } from 'beliefs/vercel-ai'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  agent: 'research-agent',
  namespace: 'vercel-ai',
  writeScope: 'space',
})

const { text } = await generateText({
  model: wrapLanguageModel({
    model: anthropic('claude-sonnet-4-20250514'),
    middleware: beliefsMiddleware(beliefs),
  }),
  prompt: 'Research the competitive landscape for AI dev tools',
})
```

### Capture Modes
```ts
beliefsMiddleware(beliefs, { capture: 'response' }) // final response (default)
beliefsMiddleware(beliefs, { capture: 'tools' })    // each tool call result
beliefsMiddleware(beliefs, { capture: 'all' })      // both
```

### Configuration
```ts
beliefsMiddleware(beliefs, {
  capture: 'all',
  includeContext: true,
})
```

| Option | Default | Description |
|---|---|---|
| `capture` | `'response'` | What to extract beliefs from: `'response'`, `'tools'`, or `'all'` |
| `includeContext` | `true` | Inject belief context into the system prompt via `before()` |
| `resolveThreadId` | — | Required when `beliefs` uses `writeScope: 'thread'` and no thread is already bound |
### Thread-scoped chat memory
If you keep the SDK default `writeScope: 'thread'`, either bind the thread ahead of time with `beliefs.withThread(threadId)` or pass `resolveThreadId` so the middleware can scope each invocation correctly.
```ts
import { streamText, wrapLanguageModel } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import Beliefs from 'beliefs'
import { beliefsMiddleware } from 'beliefs/vercel-ai'

export async function POST(req: Request) {
  const { messages, conversationId } = await req.json()

  const beliefs = new Beliefs({
    apiKey: process.env.BELIEFS_KEY,
    namespace: 'support',
    writeScope: 'thread',
  })

  const model = wrapLanguageModel({
    model: anthropic('claude-sonnet-4-20250514'),
    middleware: beliefsMiddleware(beliefs, {
      resolveThreadId: () => conversationId,
    }),
  })

  const result = streamText({ model, messages })
  return result.toDataStreamResponse()
}
```
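The other option mentioned above, binding the thread up front with `withThread` instead of passing `resolveThreadId`, might look like this. This is a sketch under the assumption that `withThread` returns a thread-scoped client that the middleware can use directly; check the core SDK reference to confirm that shape:

```typescript
import { streamText, wrapLanguageModel } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import Beliefs from 'beliefs'
import { beliefsMiddleware } from 'beliefs/vercel-ai'

export async function POST(req: Request) {
  const { messages, conversationId } = await req.json()

  const beliefs = new Beliefs({
    apiKey: process.env.BELIEFS_KEY,
    namespace: 'support',
    writeScope: 'thread',
  })

  // Bind the thread before wrapping the model, so the middleware
  // needs no resolveThreadId option. Assumes withThread returns a
  // client scoped to this conversation's thread.
  const scoped = beliefs.withThread(conversationId)

  const model = wrapLanguageModel({
    model: anthropic('claude-sonnet-4-20250514'),
    middleware: beliefsMiddleware(scoped),
  })

  const result = streamText({ model, messages })
  return result.toDataStreamResponse()
}
```

Binding once per request keeps the per-invocation callback out of the middleware options, which can be simpler when one handler always serves exactly one conversation.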