WikiOracle is a truthful, explainable LLM system designed as a public good — the Wikipedia model applied to artificial intelligence.
The Problem
Commercial LLMs use our data — sourced from billions of people — to train models that teach our children. Those models hallucinate. They can’t explain themselves. They are vulnerable to ideological capture and data-driven manipulation, especially under online learning. And the knowledge they encode is locked behind proprietary walls.
Most large AI systems today are built around a single global objective function, centralized data aggregation, hidden alignment rules, and implicit averaging over moral and cultural differences. The result is predictable: minority viewpoints are quietly averaged away, the loudest groups shape the model at scale, a single model becomes an authority node that everyone depends on, and predictive advantage converts into economic or political dominance.
What Makes WikiOracle Different
Truth as a first-class constraint
WikiOracle does not optimize for fluency and then bolt truthfulness on as an afterthought. Truthfulness is the primary design constraint: when knowledge lives in explicit, attributable claims rather than only in opaque weights, errors can be located and corrected, which makes the system less prone to hallucination and capture, and every claim can be contested, improved, and revised in the open.
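To make "contested, improved, and revised" concrete, here is a minimal sketch of what a contestable claim record could look like. The `Claim` and `Revision` types, their fields, and the status values are illustrative assumptions, not WikiOracle's published schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Revision:
    """One recorded change or challenge to a claim."""
    author: str          # who proposed it
    old_text: str        # the statement before the change
    new_text: str        # the proposed or accepted statement
    sources: List[str]   # citations backing the new statement

@dataclass
class Claim:
    """A contestable unit of knowledge: text plus sources plus an open history."""
    text: str
    sources: List[str]
    status: str = "asserted"   # "asserted" | "contested" | "revised"
    history: List[Revision] = field(default_factory=list)

    def contest(self, author: str, counter_text: str, counter_sources: List[str]) -> None:
        """Record a challenge instead of overwriting: the dispute stays visible."""
        self.history.append(Revision(author, self.text, counter_text, counter_sources))
        self.status = "contested"

    def revise(self, author: str, new_text: str, new_sources: List[str]) -> None:
        """Accept a revision; the old statement survives in history, so every change is auditable."""
        self.history.append(Revision(author, self.text, new_text, new_sources))
        self.text, self.sources = new_text, new_sources
        self.status = "revised"

c = Claim("Water boils at 100 °C.", ["textbook"])
c.contest("reviewer", "Water boils at 100 °C only at sea-level pressure.", ["physics-handbook"])
```

The point of the shape, not the names: a claim is never bare text the model asserts, it is text plus sources plus a visible record of who disagreed and why.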
You own your data
WikiOracle is local-first. Your conversation state, your trust entries, and your configuration live on your machine or in a pod you control, not in “the cloud” quietly accumulating a hidden central memory. Your data is yours.
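As a sketch of what local-first means in practice, the snippet below keeps all state as plain JSON files in a directory the user owns. The `~/.wikioracle` path and the file layout are assumptions for illustration, not a documented format:

```python
import json
from pathlib import Path

# Hypothetical location; the directory and file names are illustrative, not a spec.
DATA_DIR = Path.home() / ".wikioracle"

def save_state(name: str, payload: dict) -> Path:
    """Write state as plain JSON under the user's own directory: inspectable, portable, deletable."""
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    path = DATA_DIR / f"{name}.json"
    path.write_text(json.dumps(payload, indent=2))
    return path

def load_state(name: str) -> dict:
    """Read state back; if the file is absent, there is simply no memory, local or remote."""
    path = DATA_DIR / f"{name}.json"
    return json.loads(path.read_text()) if path.exists() else {}

# Everything the system "remembers" about you is a file you can open, copy, or delete.
save_state("trust", {"alice.example": 0.9, "bob.example": 0.4})
print(load_state("trust"))
```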
A democratic network of trust
Instead of one model that claims to know everything, WikiOracle integrates with a network of trust. You choose who to trust and how much to trust them. Trust is transitive but attenuated: it propagates along chains of endorsement and weakens with each hop. Because the network stays distributed and structured, no single actor (company, state, foundation, or maintainer group) can become the epistemic root for everyone else, and minority viewpoints are preserved rather than averaged into oblivion.
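One simple way to model transitive, attenuated trust is multiplicative decay along endorsement chains, as in the sketch below. The propagation rule, the damping factor, and the floor are illustrative assumptions rather than WikiOracle's actual algorithm; taking the maximum over paths (instead of averaging them) is one way a strong minority endorsement survives intact:

```python
from typing import Dict

# Hypothetical edge weights: direct[a][b] is how much a trusts b directly, in [0, 1].
DirectTrust = Dict[str, Dict[str, float]]

def propagate(direct: DirectTrust, root: str,
              damping: float = 0.8, floor: float = 0.05) -> Dict[str, float]:
    """Transitive, attenuated trust seen from `root`.

    Each hop multiplies by the edge weight and a damping factor, so distant
    endorsements count for less; below `floor` they stop propagating entirely.
    A node keeps the best score over all paths, so one strong chain of trust
    is never diluted by many weak ones.
    """
    scores = {root: 1.0}
    frontier = [root]
    while frontier:
        node = frontier.pop()
        for peer, weight in direct.get(node, {}).items():
            candidate = scores[node] * weight * damping
            if candidate > floor and candidate > scores.get(peer, 0.0):
                scores[peer] = candidate
                frontier.append(peer)
    return scores

web = {
    "you":   {"alice": 0.9, "bob": 0.3},
    "alice": {"carol": 0.8},
    "carol": {"dave": 0.9},
}
print(propagate(web, "you"))
# {'you': 1.0, 'alice': 0.72, 'bob': 0.24, 'carol': 0.4608, 'dave': 0.331776}
```

Because damping is strictly below 1, scores strictly decrease along any chain, so cycles in the web cannot inflate anyone's trust, and the propagation always terminates.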