## Side by Side
| Dimension | Memory / RAG | Beliefs |
|---|---|---|
| What it stores | Text, embeddings | Probability distributions |
| Uncertainty | None. Every retrieved chunk looks equally valid | Two-channel: decision resolution + knowledge certainty |
| Conflicts | Returns both conflicting chunks, or last-write-wins | Detects, tracks, and resolves through trust-weighted fusion |
| Decay | Falls out of context window randomly | Principled decay toward uninformative prior over time |
| Provenance | "This chunk was retrieved" | Full trail: who stated it, what evidence, how confidence evolved |
| What is missing | No concept | Gaps are first-class. They drive the next action. |
## What It Stores
Memory stores strings and vectors. When you ask "what do we know about the market?", it retrieves paragraphs that mention the word "market."
Beliefs stores probability distributions. When you ask the same question, it returns structured claims, each with a confidence score, evidence count, provenance chain, and a timestamp of when it was last confirmed.
The research agent says "Market is $4.2B." Memory records that sentence. Beliefs records a claim at 85% confidence, based on one Gartner citation, created 2 hours ago.
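The difference can be sketched as a data structure. This is a minimal illustration, not the actual schema; every name here (`Claim`, the field names) is hypothetical:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Claim:
    """One belief: the statement plus the metadata that plain memory discards."""
    statement: str
    confidence: float                 # 0.0 to 1.0
    evidence_count: int
    provenance: list = field(default_factory=list)
    last_confirmed: float = field(default_factory=time.time)

# Memory would store only the string. Beliefs stores the full claim:
claim = Claim(
    statement="Market is $4.2B",
    confidence=0.85,
    evidence_count=1,
    provenance=["research_agent: Gartner citation"],
)
```

Retrieval then returns structured claims that can be compared, decayed, and audited, rather than paragraphs that merely mention the query terms.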
## Uncertainty
Memory has no concept of uncertainty. Every retrieved chunk looks equally valid. A three-month-old estimate and a verified data point from yesterday sit side by side with no distinction.
Beliefs tracks uncertainty on two channels. The first, decision resolution, measures whether a claim is far enough from the ambiguous midpoint to act on. The second, knowledge certainty, measures how much evidence has accumulated. This distinction matters: on a single scale, a claim with low confidence from zero investigation and a claim with low confidence from extensive research produce the same number, yet they demand opposite next actions. Two channels separate them.
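One plausible way to compute the two channels, assuming the ambiguous midpoint is 0.5 and certainty saturates with evidence count (both formulas are illustrative, not the system's actual math):

```python
def decision_resolution(confidence: float) -> float:
    # Distance from the ambiguous midpoint (0.5), rescaled to 0..1.
    # 0.0 = coin flip, 1.0 = fully resolved in either direction.
    return abs(confidence - 0.5) * 2

def knowledge_certainty(evidence_count: int) -> float:
    # How much evidence has accumulated, saturating toward 1.0.
    return evidence_count / (evidence_count + 1)

# Same confidence, opposite situations:
uninvestigated  = (decision_resolution(0.5), knowledge_certainty(0))
well_researched = (decision_resolution(0.5), knowledge_certainty(9))
# uninvestigated  -> (0.0, 0.0): go gather evidence
# well_researched -> (0.0, 0.9): genuinely ambiguous despite research
```

A single confidence number collapses these two cases; the pair keeps them apart, and the right next action falls out of which channel is low.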
## Conflicts
Memory returns conflicting information without flagging it. "Market is $4.2B" and "Market is $3.8B" both appear in results. The model picks one, usually the first.
Beliefs detects the conflict. Both values are tracked with their sources. The fusion engine resolves them by trust weight: a measurement from an SEC filing outweighs an inference from an agent. The contradiction remains visible in the ledger even after resolution.
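Trust-weighted fusion can be sketched as a weighted average over sources, with the contradiction preserved in a ledger rather than overwritten. The trust weights and source names below are hypothetical:

```python
# Hypothetical trust weights: a measurement from a filing
# outweighs an inference from an agent.
TRUST = {"sec_filing": 0.9, "agent_inference": 0.4}

def fuse(claims):
    """Resolve conflicting values by trust weight; keep both visible."""
    total = sum(TRUST[src] for _, src in claims)
    fused = sum(val * TRUST[src] for val, src in claims) / total
    # The ledger records every conflicting value, even after resolution.
    ledger = [{"value": v, "source": s, "weight": TRUST[s]} for v, s in claims]
    return fused, ledger

value, ledger = fuse([(4.2, "agent_inference"), (3.8, "sec_filing")])
# The fused value lands closer to 3.8, the higher-trust SEC figure.
```

Note that resolution does not delete anything: the $4.2B inference stays in the ledger alongside the $3.8B measurement, so the disagreement remains auditable.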
## Decay
Memory decays by accident. Old information falls out of the context window or gets ranked below newer chunks by recency bias. There is no principled mechanism for aging evidence.
Beliefs decays by design. Claims lose certainty over time through configurable temporal decay. A market estimate from six months ago carries less weight than one confirmed last week. Decay creates natural pressure to refresh. The system surfaces staleness instead of silently treating old analysis as current.
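Decay toward an uninformative prior can be sketched as exponential interpolation: as a claim ages, its confidence slides back toward 0.5. The half-life and prior values here are illustrative defaults, not the system's:

```python
def decayed_confidence(confidence: float, age_days: float,
                       half_life_days: float = 90.0, prior: float = 0.5) -> float:
    """Pull stored confidence back toward an uninformative prior as it ages."""
    weight = 0.5 ** (age_days / half_life_days)   # 1.0 when fresh, -> 0 when stale
    return prior + (confidence - prior) * weight

fresh = decayed_confidence(0.85, age_days=7)      # barely moved
stale = decayed_confidence(0.85, age_days=180)    # pulled most of the way to 0.5
```

Because decay is monotonic and configurable, a claim's effective confidence signals its own staleness: when it drifts near the prior, the system has concrete pressure to re-confirm it.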
## Provenance
Memory can tell you "this chunk was retrieved from document X." That is the extent of the trail.
Beliefs can tell you: this claim was created by the research agent in turn 3, based on a Gartner citation with high specificity. It was updated in turn 7 when an SEC filing partially contradicted it. Confidence moved from 90% to 72%. The full transition history is in the ledger.
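An append-only ledger is one simple way to keep that trail. A minimal sketch, with all field names and the example entries invented for illustration:

```python
ledger = []

def record(claim_id, turn, source, evidence, old_conf, new_conf):
    """Append-only: updates add a transition, they never erase the trail."""
    ledger.append({
        "claim": claim_id, "turn": turn, "source": source,
        "evidence": evidence, "old_conf": old_conf, "new_conf": new_conf,
    })

# Turn 3: claim created from a Gartner citation.
record("market_size", 3, "research_agent", "Gartner citation", None, 0.90)
# Turn 7: an SEC filing partially contradicts it; confidence drops.
record("market_size", 7, "research_agent",
       "SEC filing (partial contradiction)", 0.90, 0.72)

history = [e for e in ledger if e["claim"] == "market_size"]
```

Replaying `history` reconstructs exactly how the belief evolved: who stated it, what evidence arrived, and how confidence moved at each step.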
## What Is Missing
Memory has no concept of absence. It does not know what it has not retrieved. There is no "gap."
Beliefs tracks gaps explicitly. "No data on enterprise pricing models" is a structured gap that penalizes the clarity score and can drive the next research action. An agent that knows what it does not know is more useful than one that only has answers.
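The mechanics can be sketched in a few lines: gaps subtract from a clarity score and feed directly into action selection. The penalty constant and scoring formula are assumptions for illustration:

```python
def clarity_score(confidences, gaps, gap_penalty=0.1):
    """Average claim confidence, penalized for each known unknown."""
    base = sum(confidences) / len(confidences) if confidences else 0.0
    return max(0.0, base - gap_penalty * len(gaps))

def next_action(gaps):
    # Gaps are first-class: an open gap drives the next research step.
    return f"research: {gaps[0]}" if gaps else "report"

gaps = ["enterprise pricing models"]
score = clarity_score([0.85, 0.72], gaps)   # penalized for the open gap
```

Without the gap entry, the agent would score itself as done; with it, the same state routes to another research step.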
## When Memory Is Enough
Memory works well for single-turn agents, simple retrieval, and short-lived tasks where information does not accumulate or conflict.
If your agent runs more than a few turns on the same topic, if conflicting information matters, or if you need to trace why the agent believes something, beliefs fills the gap that memory cannot.
### Rule of thumb
If your agent runs more than 5 turns on the same topic, or if conflicting information matters, use beliefs.