internals/evidence.mdx

# Evidence

How evidence flows into beliefs. Types, direction, and the is/ought firewall.

## What Evidence Does

Evidence is information that updates a belief. It has a type, a direction, and a strength. When evidence arrives, it shifts the belief's confidence; how much it shifts depends on the quality of the evidence.

A Gartner report citing $4.2B market size is strong evidence. An agent's inference based on incomplete data is weaker. Both update beliefs, but by different amounts.

## Evidence Types

Different evidence types carry different weight. A verified measurement updates beliefs more than an inference. A cited research report carries more weight than an explicit assumption.

| Type | Description |
| --- | --- |
| `measurement` | Audited metric, verified data point |
| `citation` | Research report, external source with provenance |
| `user-assertion` | User explicitly stated this |
| `expert-judgment` | Expert opinion with rationale |
| `inference` | Agent-derived inference from available data |
| `assumption` | Explicit assumption, no supporting evidence |

The SDK calibrates the weight of each type so that evidence quality matters, not just volume. A single verified measurement moves confidence more than several inferences.
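The SDK's actual calibration is internal; as a rough illustration only (all type names match the table above, but every weight here is hypothetical), a type-to-weight mapping might look like this:

```typescript
// Hypothetical base weights per evidence type. These numbers are
// illustrative, NOT the SDK's actual calibration.
type EvidenceType =
  | 'measurement'
  | 'citation'
  | 'user-assertion'
  | 'expert-judgment'
  | 'inference'
  | 'assumption'

const BASE_WEIGHT: Record<EvidenceType, number> = {
  measurement: 1.0,        // audited, verified
  citation: 0.8,           // external source with provenance
  'user-assertion': 0.6,   // explicit user statement
  'expert-judgment': 0.5,  // opinion with rationale
  inference: 0.3,          // agent-derived from partial data
  assumption: 0.1,         // no supporting evidence
}

// Quality beats volume: one verified measurement outweighs
// three inferences combined.
const oneMeasurement = BASE_WEIGHT.measurement
const threeInferences = 3 * BASE_WEIGHT.inference
```

Under weights like these, "a single verified measurement moves confidence more than several inferences" falls out directly from the numbers.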

## Direction

Every piece of evidence has a direction:

  • **supports**: increases confidence in the claim.
  • **refutes**: decreases confidence in the claim.
  • **neutral**: adds information weight without shifting direction.

When the research agent finds a Gartner report supporting "Market size is $4.2B," confidence increases. When it finds an SEC filing showing a smaller number, that refuting evidence decreases confidence in the original claim.

Both are captured. Nothing is discarded.
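To make this concrete, here is a minimal sketch of an evidence record holding the two findings from the example above. The field names (`claim`, `direction`, `source`, `strength`) are illustrative, not the SDK's actual schema:

```typescript
// Hypothetical evidence record shape; field names are illustrative only.
type Direction = 'supports' | 'refutes' | 'neutral'

interface Evidence {
  claim: string
  direction: Direction
  source: string
  strength: number // 0..1
}

const gartnerReport: Evidence = {
  claim: 'Market size is $4.2B',
  direction: 'supports',
  source: 'Gartner report',
  strength: 0.8,
}

const secFiling: Evidence = {
  claim: 'Market size is $4.2B',
  direction: 'refutes', // shows a smaller number
  source: 'SEC filing',
  strength: 0.9,
}

// Both are captured; nothing is discarded.
const captured: Evidence[] = [gartnerReport, secFiling]
```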

## How Evidence Updates Beliefs

The SDK converts evidence into calibrated belief updates based on the type, quality, and independence of the source. Higher-quality evidence from independent sources moves beliefs more than redundant or low-quality inputs.

This ensures consistent behavior regardless of which agent or tool produced the evidence. A research finding and a user assertion are both valid inputs, but they carry appropriately different weight.
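One standard way to implement calibrated updates of this kind (a sketch under assumptions, not the SDK's actual algorithm) is to apply each piece of evidence as a weighted shift in log-odds space, so that direction maps to a sign and updates compose cleanly:

```typescript
// Sketch: weighted log-odds confidence update.
// NOT the SDK's actual implementation.
const logit = (p: number): number => Math.log(p / (1 - p))
const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x))

// direction: +1 supports, -1 refutes, 0 neutral.
// weight encodes type, quality, and source independence.
function update(confidence: number, direction: number, weight: number): number {
  return sigmoid(logit(confidence) + direction * weight)
}

let c = 0.5
c = update(c, +1, 1.0) // strong supporting measurement: confidence rises
c = update(c, -1, 0.3) // weak refuting inference: small pullback
```

Working in log-odds makes the behavior order-independent and keeps confidence strictly inside (0, 1), which is one plausible reason a system like this would calibrate updates rather than add raw deltas to a probability.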

## The Is/Ought Firewall

This is the most important design decision in the evidence system.

Factual evidence updates beliefs. Normative evidence (preferences, goals, desires) does not.

A user saying "I want to target enterprise" does not increase confidence that enterprise is the right market. It records a goal. A user saying "I believe the TAM is $5B" is a factual assertion and updates the market size belief.

The distinction:

| Input | Type | Effect |
| --- | --- | --- |
| "The TAM is $5B" | Factual | Updates the market size belief |
| "I want to target enterprise" | Normative | Recorded as a goal in intent |
| "We must support SOC2" | Normative | Recorded as a constraint in intent |
| "Gartner reports 34% growth" | Factual | Updates the growth rate belief |

This prevents a common failure mode: a user's strong preference masquerading as strong evidence. Without this firewall, the more a user says "I want X," the more confident the system becomes that X is the right answer, regardless of what the evidence shows.

> **The firewall**
>
> Preferences do not update factual beliefs. This prevents strong opinions from masquerading as strong evidence. See Intent for how normative information is handled.
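A minimal sketch of the routing idea, matching the table above. The classification here is a toy keyword heuristic and every name is hypothetical; the SDK performs this separation internally:

```typescript
// Hypothetical is/ought routing sketch. The toy keyword check stands in
// for the real classifier; it is NOT how the SDK decides.
type Route =
  | { kind: 'belief' }
  | { kind: 'intent'; as: 'goal' | 'constraint' }

function route(input: string): Route {
  // Obligations become constraints in intent.
  if (/\b(must|have to|required)\b/i.test(input)) {
    return { kind: 'intent', as: 'constraint' }
  }
  // Preferences and desires become goals in intent.
  if (/\b(want|prefer|wish)\b/i.test(input)) {
    return { kind: 'intent', as: 'goal' }
  }
  // Everything else is treated as a factual assertion
  // and flows into belief updates.
  return { kind: 'belief' }
}

route('The TAM is $5B')              // factual: updates a belief
route('I want to target enterprise') // normative: recorded as a goal
route('We must support SOC2')        // normative: recorded as a constraint
```

The key property is structural: no path exists from the normative branch back into belief confidence, so repeating "I want X" cannot raise confidence that X is true.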

## Automatic Extraction

The SDK extracts evidence automatically when you pass agent output to `after`. You do not need to parse agent outputs yourself.

```typescript
// Beliefs and evidence are extracted from the output automatically
await beliefs.after(result)
```

With an adapter, the lifecycle is wired up for you, so you do not call `after` at all:

```typescript
const agent = createAgent({
  hooks: beliefsHooks(beliefs, { capture: 'all' }),
})
```

## Manual Override

When you have domain-specific extraction logic, you can add explicit claims and then run extraction on the output:

```typescript
await beliefs.add('Market size is $4.2B', { confidence: 0.85 })
await beliefs.after(result)
```

For full manual control, use `beliefs.add()` with confidence and evidence options. See the Core API.

All paths feed the same update pipeline.

Related pages:

  • **Intent**: how goals, decisions, and constraints are handled.
  • **Patterns**: common evidence submission patterns.