News you can verify, not just trust

Every summary on this site is generated by an AI model running inside a cryptographic virtual machine. The result is a mathematical proof that anyone can check — confirming exactly what went in, what model ran, and what came out.

The Problem

Every news outlet has a perspective

Media bias is pervasive and hard to detect. Every outlet has an editorial perspective baked into its framing, word choice, story selection, and emphasis. Readers either don't notice the bias or have no systematic way to strip it out. Existing “bias checkers” are themselves opinionated: who checks the checker?

AI summaries are just another black box

When an AI tool claims to summarize or “de-bias” an article, there's no way to verify what actually happened. Was the claimed model actually used? Were the inputs tampered with? Did the operator quietly edit the output before showing it to you? You're just trusting a different black box.

The core gap: there is no mechanism for a reader to verify that a specific AI model processed a specific article and produced a specific output — without trusting the operator.

How We Solve It

We run a real AI model — Qwen2.5-0.5B, a 494-million-parameter language model — inside a special kind of virtual machine that records every computation it performs. Think of it like a security camera for math: the VM watches every single step the model takes, billions of operations in total, and produces a compact cryptographic proof of the entire run.
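
As a rough sketch of that flow, here is what the proved computation conceptually binds together, written in ordinary Python rather than the actual guest code that runs inside the zkVM. The function and field names are illustrative, not the project's real interfaces:

```python
import hashlib

MODEL_ID = "Qwen2.5-0.5B-Instruct"  # illustrative identifier for the committed model

def run_model(article_text: str) -> bytes:
    # Stand-in for real Qwen2.5-0.5B inference; returns placeholder bytes here.
    return b"(summary produced by the model)"

def summarize_with_commitments(article_text: str) -> dict:
    """Conceptual shape of the values the proved run binds together.

    Inside the real zkVM, every instruction of this computation would be
    recorded and folded into a proof; this plain-Python sketch only shows
    which values end up publicly committed.
    """
    return {
        "input_hash": hashlib.sha256(article_text.encode("utf-8")).hexdigest(),
        "model_id": MODEL_ID,               # which model ran
        "output": run_model(article_text),  # the exact summary bytes produced
    }
```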

That proof locks in three things. The article text that went in is fingerprinted with a cryptographic hash — change even one character, and the proof won't match. The model's identity is baked into the proof itself, so you can confirm exactly which model was used. And the summary you see on screen is committed as the proof's output, byte for byte.
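
To ground the "change even one character" claim, here is a tiny Python illustration using SHA-256 as a stand-in for whichever hash the proving pipeline actually commits; the article text is invented for the example:

```python
import hashlib

article = "The council voted 7-2 to approve the new transit budget."
tampered = "The council voted 7-3 to approve the new transit budget."  # one character changed

original_digest = hashlib.sha256(article.encode("utf-8")).hexdigest()
tampered_digest = hashlib.sha256(tampered.encode("utf-8")).hexdigest()

print(original_digest)
print(tampered_digest)
print(original_digest == tampered_digest)  # False: the fingerprints no longer match
```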

The result: nobody — not even us — can change the output after the fact without invalidating the proof. Anyone can verify it independently, in milliseconds.
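
To make the verification step concrete, here is a hedged Python sketch of the commitment checks a reader's client could perform. The field names mirror the sketch above and are illustrative, not the actual Boundless proof format, and the sketch assumes the STARK proof itself has already been verified by the proving system's own verifier:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class ProofCommitments:
    # Values publicly committed by the proof (illustrative field names).
    input_hash: str   # hex SHA-256 of the article text fed to the model
    model_id: str     # identifier of the model that ran
    output: bytes     # the exact summary bytes the model produced

def check_commitments(article_text: str,
                      claimed_model_id: str,
                      displayed_summary: bytes,
                      commitments: ProofCommitments) -> bool:
    """Recompute what we can locally and compare it to the committed values.

    This only checks the bindings between article, model, and summary; the
    cryptographic validity of the STARK proof is checked separately by the
    zkVM's verifier.
    """
    input_ok = hashlib.sha256(article_text.encode("utf-8")).hexdigest() == commitments.input_hash
    model_ok = claimed_model_id == commitments.model_id
    output_ok = displayed_summary == commitments.output  # byte-for-byte comparison
    return input_ok and model_ok and output_ok
```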

The shift: instead of asking you to trust this website, we give you a mathematical proof you can check yourself. Trust moves from “believe us” to “verify it.”

What the Proof Means

Cryptographic proofs are powerful, but they aren't magic. Here is exactly what this proof does and does not guarantee.

Guaranteed

  • Model identity — the claimed model was the model that actually ran
  • Input integrity — the article text fed in matches the hash committed in the proof
  • Output integrity — the summary displayed is exactly what the model produced
  • No tampering — no one edited the output between model and display

Not Guaranteed

  • Factual accuracy — the model can still hallucinate or make mistakes
  • True objectivity — bias is subjective; the model's bias reduction is best-effort
  • Source authenticity — we hash the article we fetched, not necessarily what the outlet originally published

Technical Details

Verified inference is computationally expensive — this is the cost of cryptographic guarantees. For the technically curious, here are the numbers.

  • Model: Qwen2.5-0.5B-Instruct (494M params)
  • VM: Boundless (RISC-V based zero-knowledge VM)
  • Optimization: Freivalds algorithm (~2.9x speedup; sketch below)
  • Cost: ~166B cycles per 100-token summary
  • Execution: ~26 minutes per summary
  • Verification: STARK proof, verifiable in milliseconds
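
At roughly 166 billion cycles over about 26 minutes, the prover works through on the order of 100 million proven cycles per second. The Freivalds optimization listed above replaces full re-execution of a matrix multiplication inside the proof with a cheaper probabilistic check: to confirm that A·B = C, multiply both sides by a random vector r and compare A(Br) with Cr, which costs O(n²) per check instead of O(n³). The sketch below is a generic illustration of that idea, not the project's actual in-circuit implementation:

```python
import numpy as np

def freivalds_check(A: np.ndarray, B: np.ndarray, C: np.ndarray, rounds: int = 10) -> bool:
    """Probabilistically verify that A @ B == C.

    Each round multiplies by a random 0/1 vector r and compares A @ (B @ r)
    with C @ r, i.e. two matrix-vector products instead of a full matrix
    product. A wrong C slips past a single round with probability <= 1/2,
    so the error probability after `rounds` rounds is at most 2**-rounds.
    """
    n = C.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=(n, 1))
        if not np.array_equal(A @ (B @ r), C @ r):
            return False  # definitely not equal
    return True  # equal with high probability

# Example: an honest product passes, a tampered one is almost certainly caught.
A = np.random.randint(0, 10, size=(64, 64))
B = np.random.randint(0, 10, size=(64, 64))
C = A @ B
print(freivalds_check(A, B, C))       # True
C_bad = C.copy()
C_bad[0, 0] += 1
print(freivalds_check(A, B, C_bad))   # almost certainly False
```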