Your agent is lying to itself.
One API call. Two beliefs. A verdict on whether your agent's memory is consistent -- or quietly contradicting itself.
The problem
Memory systems don't check consistency.
Your agent stores hundreds of observations over time. Some contradict each other. Some evolved. Some were wrong from the start. Without a consistency layer, your agent builds confidently on a fractured foundation.
The solution
Four verdicts. Zero ambiguity.
Every pair of beliefs gets classified into one of four categories.
Genuine contradiction. These beliefs cannot both be true.
Tension. Not a flat contradiction, but the beliefs pull in opposite directions.
Complementary. These beliefs reinforce each other. Consistent.
Unrelated. No meaningful semantic relationship detected.
Try it
See it work.
Enter two beliefs. Get a verdict. This is a cached demo -- the real endpoint returns results in under 200ms.
Under the hood
How it works.
Receive two beliefs
The endpoint accepts two text strings -- observations, memories, claims, or any natural language statements your agent has stored.
Embed and compare
Both beliefs are embedded into the same semantic space. Cosine similarity and domain overlap are computed to establish a baseline relationship.
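The baseline step can be sketched like this. A minimal illustration only: the service's actual embedding model and thresholds are internal details not documented here.

```typescript
// Cosine similarity between two embedding vectors -- the baseline
// relationship measure described above. Vectors are assumed to be
// same-length number arrays; the embedding itself is out of scope.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

A similarity near 1 means the beliefs live in the same semantic neighborhood; near 0 means they are likely unrelated. Embedding distance alone cannot distinguish contradiction from agreement, which is why the next step exists.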
Analyze for contradiction
A specialized model evaluates logical consistency, identifying negation patterns, scope conflicts, and implicit tensions that embedding distance alone misses.
Return verdict
You get a verdict (genuine, tension, complementary, unrelated), confidence score, reasoning, and related concepts -- all in a single JSON response.
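In TypeScript, the documented response shape can be typed like this. The field names come from the API reference; the type names and the narrowing helper are ours, not part of any SDK.

```typescript
// Documented response fields: verdict, confidence, reasoning, related_concepts.
// Type names here are illustrative.
type Verdict = "genuine" | "tension" | "complementary" | "unrelated";

interface BeliefCheckResponse {
  verdict: Verdict;
  confidence: number;         // 0-1 confidence score
  reasoning: string;          // natural-language explanation of the verdict
  related_concepts: string[]; // semantic tags from the knowledge graph
}

// Narrowing helper for untyped JSON before treating it as a Verdict.
function isVerdict(value: string): value is Verdict {
  return ["genuine", "tension", "complementary", "unrelated"].includes(value);
}
```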
Pricing
One penny.
$0.01 per request. USDC on Base. No subscription. No API key. No registration. Your wallet address is your identity.
Getting started
Three steps. Ship.
1. Install
```shell
npm install @x402/fetch viem
```

2. Check a belief
```typescript
import { x402Fetch } from "@x402/fetch";
import { createWalletClient, http } from "viem";
import { privateKeyToAccount } from "viem/accounts";
import { base } from "viem/chains";

// Your wallet address is your identity -- x402 pays $0.01 in USDC per call.
const account = privateKeyToAccount(process.env.PRIVATE_KEY as `0x${string}`);
const wallet = createWalletClient({ account, chain: base, transport: http() });

const res = await x402Fetch(
  "https://idapixl-cortex-215390428499.us-central1.run.app/x402/belief",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      belief_a: "I can handle complex multi-step tasks",
      belief_b: "I should avoid tasks with more than 3 steps",
    }),
  },
  wallet
);

const result = await res.json();
console.log(result.verdict); // "tension"
```

3. Ship.
That's it. No dashboard to configure. No webhook to register. No trial to expire. Pay per request, get results.
Use cases
Built for agents that remember.
Multi-agent systems
Check consistency across agents sharing a memory store. Catch when Agent A's observations contradict Agent B's conclusions.
RAG pipelines
Validate retrieved context against existing knowledge before injecting into prompts. Prevent contradictory context from degrading output.
Policy engines
Verify that policy rules don't conflict. Detect when new rules create tension with existing ones before deployment.
Agent health monitoring
Run periodic consistency audits on your agent's belief store. Track drift over time. Alert when contradiction rates spike.
API reference
One endpoint. Two fields.
POST /x402/belief
Request body
| Field | Type | Description |
|---|---|---|
| belief_a | string | First belief or observation |
| belief_b | string | Second belief to check against |
Response
| Field | Type | Description |
|---|---|---|
| verdict | string | One of: genuine, tension, complementary, unrelated |
| confidence | number | 0-1 confidence score |
| reasoning | string | Natural language explanation of the verdict |
| related_concepts | string[] | Semantic tags from the knowledge graph |