The Wikipedia Paradox
Can We Still Know Anything Together?
How many men accompanied John Brown when he raided Harpers Ferry in 1859? The answer seems like it should be simple—count the bodies, check the records, done. But historians give different numbers: 18, 21, 22, depending on whether you count men who backed out at the last minute, scouts who weren’t at the arsenal itself, or supporters who arrived late.
Did Nero fiddle while Rome burned? No—the fiddle didn’t exist yet. But did he perform music while the city was in flames, as the historian Suetonius claimed? Or was he 35 miles away in Antium, as Tacitus suggests? Or did he actually rush back to coordinate relief efforts? We have ancient sources. They flatly contradict each other.
How many people died at Chernobyl? The immediate deaths—around 30—are relatively settled. But long-term deaths from radiation exposure? The World Health Organization says 4,000. Greenpeace says 93,000. Other studies suggest 200,000. These aren’t propaganda figures—they’re serious epidemiological studies that make different methodological choices about what counts as a Chernobyl death, how to measure exposure, and how to attribute causation decades later.
Who invented the telephone? Your elementary school textbook said Alexander Graham Bell. But Antonio Meucci filed a caveat for a similar device earlier. Elisha Gray filed a patent application the same day as Bell—possibly the same hour. Courts, historians, and nations disagree.
In 2002, the U.S. House of Representatives passed a resolution recognizing Meucci’s contribution. Canada still celebrates Bell.
These aren’t fringe conspiracy theories. They’re legitimate historical uncertainties—questions where evidence is incomplete, where sources conflict, where methodological choices determine answers, where political interests shape what counts as fact.
And if we can’t agree on how many men stood with John Brown at Harpers Ferry—an event that happened 165 years ago, in public, with military records and newspaper coverage—how are we supposed to agree on anything happening right now?
This is the epistemological crisis of our age. Not that some people believe false things—that’s always been true. But that we’re losing the shared conditions under which we can recognize anything as true together.
In an era of collapsing trust in institutions, surging conspiracy thinking, and accelerating disinformation, Wikipedia should not work. It has no editorial board, no credentialing gate, no peer review. Anonymous contributors—some teenagers, some trolls—write about war, genomics, climate policy, and AI safety.
To many scholars, this is disqualifying. Can a knowledge system without certified experts be trusted?
Yet for millions of citizens, Wikipedia now has more credibility than Congress, cable news, or even some universities. Not because people think it’s perfect, but because its process is visible, continuously self-correcting, and never fully closed.
This essay argues that Wikipedia succeeds not by bypassing expertise, but by rebuilding public trust in knowledge through procedure rather than prestige—a lesson most of our institutions still refuse to learn.
Because here’s the uncomfortable truth: Even questions that seem straightforward—How many men were with John Brown?—turn out to be contested. Every historical “fact” arrives embedded in choices about sources, definitions, and methods. Every scientific finding involves judgment calls about what to measure and how to interpret it. Every news story reflects decisions about what’s relevant and what’s not.
We’ve always known this, abstractly. But we used to trust institutions to handle these complexities for us. That trust is gone. And it’s not coming back through credentials, prestige, or reassertions of authority.
The question before us is not whether we can return to automatic deference to experts. We cannot, and we shouldn’t—that system often protected injustice as much as truth.
The question is: Can we build knowledge systems that earn trust through visible, contestable, collectively maintained processes?
Systems where authority must be demonstrated, not declared. Where disagreement is structured rather than suppressed. Where expertise is proven through one’s contributions rather than one’s credentials.
Wikipedia shows it’s possible. Now our universities, newsrooms, and civic institutions must learn the same lesson. Or we will continue fracturing into separate realities, unable to agree even on how many people were present at events we have photographs of.
The work is already late. But it is not yet too late.
Diagnosing the Real Crisis
Truth has always been contested. But recently, the conditions under which truth can even be recognized have disintegrated. Hannah Arendt warned in “Truth and Politics” (1967) that the most dangerous moment is not when people are lied to, but when they lose the ability to distinguish lying from truth-telling at all. That condition is epistemic nihilism: facts no longer matter because every fact arrives pre-coded as tribal.
Jürgen Habermas argued that healthy public conversation doesn’t require agreement. It requires arguing in ways others can understand and potentially accept—what he calls “communicative rationality.” The crisis comes when we stop accepting any argument unless it comes from our own side. Then real debate becomes impossible. Politics becomes purely about power.
Bruno Latour came at this issue from a different angle. For decades, he showed that scientific facts don’t emerge in a vacuum—they’re built through networks of people, instruments, and institutions. But later he grew alarmed.
In “Why Has Critique Run Out of Steam?” (2004), he admitted his insights were backfiring. Conspiracy theorists were using the same logic: “Facts are just constructions, so we can dismiss any we don’t like.”
Philosopher Miranda Fricker adds another piece to the puzzle in Epistemic Injustice (2007): marginalized people face a double barrier—they struggle not just to speak but to be heard as credible. When institutions consistently ignore certain voices, those people have good reason to stop trusting them. The crisis of trust has structural causes rooted in who gets taken seriously.
The bottom line: Institutions can’t just be right anymore. They must show their work in ways the public can see and verify.
Universities assume knowledge speaks for itself. Journalists think more facts will solve the problem. Scientists believe peer review automatically guarantees trust. But these systems were built for an era of automatic respect for authority. That era is over.
We’re trying to run 21st-century democracies on 19th-century systems of trust.
Why Wikipedia Works: Procedure Over Prestige
Wikipedia’s credibility doesn’t come from certified experts. It comes from complete process visibility. Every edit is recorded. Every disagreement is saved on “Talk” pages. Every claim must link to a source. Administrators can be challenged. Decisions can be appealed. The work is never finished.
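That visibility is concrete: every article’s full revision history can be queried through the public MediaWiki API. Here is a minimal sketch in Python. The endpoint and parameters are part of the real API, but the article title is only an example, and the script assumes the third-party requests package.

```python
import requests

# Fetch the most recent revisions of a Wikipedia article via the public
# MediaWiki API. Every edit's author, timestamp, and edit summary is on record.
API_URL = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "John Brown (abolitionist)",  # example article only
    "rvprop": "timestamp|user|comment",
    "rvlimit": 5,
    "format": "json",
    "formatversion": 2,
}

response = requests.get(API_URL, params=params, timeout=10)
response.raise_for_status()

page = response.json()["query"]["pages"][0]
for rev in page["revisions"]:
    print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```

Each revision carries an author, a timestamp, and an edit summary: exactly the audit trail that traditional institutions rarely expose.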
Political theorist Hélène Landemore explains in Democratic Reason (2013) why this works: diverse groups can sometimes outperform expert groups—if the system channels contributions properly. Wikipedia gets cognitive diversity through open participation while maintaining quality through clear conflict resolution rules.
Wikipedia doesn’t eliminate disagreement—it manages it. It doesn’t claim final answers—it treats constant revision as normal. This is what legal scholar Yochai Benkler calls “commons-based peer production”: succeeding not despite openness, but because of it.
Wikipedia avoids two traps. It doesn’t say “trust us—we’re experts” (epistemic authoritarianism). It doesn’t say “trust nothing—everything is fake” (epistemic nihilism). Instead, it models something rare: knowledge built publicly, open to correction, created collaboratively.
Consider COVID-19. Wikipedia’s pandemic article became one of the most-edited pages ever—thousands of edits daily at peak. But it didn’t become a conspiracy mess. Through strict sourcing rules and dispute resolution, it maintained a stable account that evolved with scientific understanding. The process was messy—but people could see how it worked. That visibility created trust.
This doesn’t replace expertise. But it’s the only way expertise stays legitimate in a democracy.
Traditional institutions haven’t figured this out yet.
Jonathan Rauch makes a similar argument in The Constitution of Knowledge (2021): science is a “reality-based community” held together not by single authorities but by shared rules—claims must be checkable, no one gets the last word, persuasion beats prestige. Wikipedia applies these principles digitally.
The uncomfortable truth is that our institutions failed to adapt to the loss of faith in shared truth. Nowhere is this more dangerous than in what we’re teaching students.
The Failure of Our Educational Response
Schools and universities have tried to respond—but with tools built for an earlier world. The two dominant strategies—“media literacy” and “critical thinking”—both fail the same way: they teach students to doubt or critique, but not to create shared knowledge.
Media literacy trains students to spot bias, trace funding, check URLs—useful skills. But it often produces what Leon Wieseltier called “a knowingness mistaken for knowledge.” Total suspicion masquerading as intellectual maturity.
danah boyd warns in “You Think You Want Media Literacy...Do You?” (2018) that this can backfire. When students learn “every source has bias,” it metastasizes into “nothing is trustworthy—so choose your tribe.” White supremacists actively promote media literacy because “question everything” becomes “trust nothing except what confirms your priors.”
Critical thinking often does the same—students learn to dismantle arguments and display skepticism but are rarely taught how to steward claims forward, how to revise and co-author knowledge despite disagreement.
We’ve perfected suspicion but failed to build trust. We’ve raised a generation fluent in critique but struggling to construct reliable knowledge collaboratively.
Consider the typical research paper. Students learn to “cite sources” and “avoid plagiarism”—defensive moves. But they’re rarely taught why citation matters epistemically: it lets others verify claims, locates knowledge in ongoing conversation, makes knowledge collectively verifiable. Citation is not treated as infrastructure for shared truth.
Universities Treat Knowledge as a Product, Not a Shared Resource
Too often, students learn as if knowledge is made elsewhere—by experts—then handed to them. Listen to how we talk: professors “deliver” content, students “receive” or “absorb” it, we “assess” their “mastery.”
Knowledge sounds like a finished product, not something built together over time.
Even our best innovations—undergraduate research or media literacy courses—rarely put students’ work into actual circulation. Students practice participating, but their contributions don’t enter the real knowledge ecosystem. They rehearse but never perform.
Rethinking the University’s Knowledge Construction Role
What if universities saw their job as training citizens to maintain our shared understanding of reality?
This means giving up the idea that knowledge is stable “content” to master for tests. Instead, we’d teach knowledge as process and practice—something existing only because people keep working at it.
Political scientist Danielle Allen argues in Education and Equality (2016) that education’s democratic purpose is “participatory readiness”—preparing people not just to vote but to do the hard work of governing together. This requires what she calls “political friendship”—the ability to work with people you disagree with while respecting them.
We need something similar for knowledge: the ability to investigate questions together with people who see things differently.
What Would Epistemic Democratic Education Look Like?
Here are six concrete ways universities could train students to build and maintain shared knowledge:
1. Have Students Edit Authoritative Texts
Ask students to edit a textbook passage or Wikipedia entry—and defend their edits. This teaches: writing for public accountability, handling editorial disputes without appeals to personal authority, synthesizing sources to help others understand, accepting revision as improvement, earning credibility through demonstrated competence.
2. Use Structured Controversies
Have students argue positions they don’t hold, using the best available evidence. Then have them switch sides. Finally, create a synthesis showing what we actually know. The goal isn’t relativism but recognizing that understanding requires grappling with why smart people disagree.
3. Require Public Writing
Don’t just assign research papers. Require public writing—op-eds, policy briefs, Wikipedia entries. This teaches writing for actual audiences, not just in academic isolation.
4. Practice Collaborative Truth-Seeking
Classes could practice adversarial collaboration (design a study together with those who hold opposing views), red teaming (attack your own arguments), and consensus building (produce statements everyone can sign, noting agreements and differences).
5. Cultivate Epistemic Humility
Philip Tetlock found in Superforecasting (2015) that the best forecasters update their beliefs most readily when evidence changes. Students could keep “belief journals” recording confidence in claims and publicly revising assessments as they learn more (see the scoring sketch after this list).
6. Practice Correction, Not Just Critique
Students should practice not just spotting misinformation but correcting it: find conspiracy theories, create accurate counter-narratives, contribute to fact-checking sites. Learn that fixing false information is harder than criticizing it—and more important.
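To make the belief journal in point 5 concrete, here is a minimal sketch of how entries could be scored with the Brier score used in Tetlock-style forecasting tournaments. The claims, confidence values, and outcomes below are hypothetical.

```python
# Score a "belief journal" with the Brier score: the mean squared error
# between stated confidence and what actually happened (0 = perfect).
# All entries here are invented examples.

# Each entry: (claim, stated confidence that it is true, actual outcome)
journal = [
    ("The bill passes this session",          0.80, True),
    ("Inflation falls below 3% by December",  0.60, False),
    ("The experiment replicates",             0.90, True),
]

def brier_score(entries):
    """Average squared gap between confidence and outcome; lower is better."""
    return sum((conf - float(outcome)) ** 2
               for _, conf, outcome in entries) / len(entries)

print(f"Brier score: {brier_score(journal):.3f}")
```

The point of the exercise is the revision habit: as students update entries when evidence changes, their scores should improve, and the improvement is publicly checkable.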
These approaches share a common thread: turning students from knowledge consumers into contributors to knowledge.
A Revealing Example: X’s Community Notes
In 2021, Twitter (now X) introduced Community Notes (originally named Birdwatch), a crowd-sourced system that lets users add context or corrections to posts identified as misleading.
If enough contributors from different viewpoints rate a note as helpful, it becomes publicly visible under the post.
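What counts as “different viewpoints” is itself algorithmic. The production system, which X has open-sourced, infers viewpoint clusters with matrix factorization; the toy sketch below substitutes explicit, hypothetical cluster labels and thresholds to show the core bridging idea: helpful ratings must come from more than one side before a note is surfaced.

```python
from collections import defaultdict

# Toy illustration of "bridging-based" ranking. This is NOT the production
# Community Notes algorithm, only the core intuition: a note is shown when
# raters from at least two distinct viewpoint clusters find it helpful.

# Each rating: (rater_id, viewpoint_cluster, rated_helpful) -- hypothetical data
ratings = [
    ("alice", "left",  True),
    ("bob",   "left",  True),
    ("carol", "right", True),
    ("dave",  "right", False),
    ("erin",  "right", True),
]

MIN_PER_CLUSTER = 1  # helpful ratings required from each cluster (assumed)
MIN_TOTAL = 3        # total helpful ratings required (assumed)

def note_is_shown(ratings):
    helpful_by_cluster = defaultdict(int)
    for _, cluster, helpful in ratings:
        if helpful:
            helpful_by_cluster[cluster] += 1
    total = sum(helpful_by_cluster.values())
    # Bridging requirement: approval must span at least two clusters,
    # so one large like-minded group cannot push a note through alone.
    bridged = sum(1 for n in helpful_by_cluster.values()
                  if n >= MIN_PER_CLUSTER) >= 2
    return bridged and total >= MIN_TOTAL

print(note_is_shown(ratings))  # True: both clusters rated the note helpful
```

The design choice matters: a note popular with only one faction never ships, which is why the mechanism resists simple brigading but can also deadlock, as the findings below show.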
In the best-case scenarios, the system works: research from the University of Washington and UC San Diego shows that when a Community Note appears, the spread of misinformation drops significantly — fewer reposts, fewer likes, and credible corrections delivered in real time. It’s also more transparent and scalable than traditional fact-checking, since the algorithm behind it is public and the crowd does the work.
The system is far from sufficient on its own. A major study found that 74% of misleading election-related posts received no Community Note, even when submissions existed — either because the system timed out, couldn’t get cross-ideological consensus, or surfaced the note too late, after the misinformation had already gone viral.
Even when notes do appear, they often get far fewer views than the original post. The mechanism can be gamed, delayed, or simply outpaced by virality.
In short: Community Notes are a promising tool — but not a complete solution. There’s a danger: if platforms use them to replace professional moderation and fact-checking wholesale, the result may be accuracy theater, not information integrity.
Yet, paradoxically, Community Notes may still inspire more public trust than almost any other model on the table right now — precisely because they’re transparent, participatory, and not controlled from above.
The University’s New Mission
If our crisis is about how we collectively maintain the possibility of knowing anything together, then universities need a new purpose: training students to participate in sustaining a shared world of knowledge.
From this perspective, knowledge isn’t a product to consume or territory to defend. It’s a public resource surviving only because people maintain it.
This means courses where students: enter live debates rather than learn settled answers, practice changing minds publicly without humiliation, experience knowledge as collaborative construction, see disagreement as necessary for understanding, understand expertise is earned through demonstrated contribution.
Some universities are already doing this:
▪ Arizona State’s “Teach-Out” program: students create public explainers during crises
▪ MIT’s Open Learning: students build educational resources anyone can use and adapt
▪ University of Edinburgh: embeds Wikipedia editing across courses as a core skill
But these are exceptions. Most universities still treat knowledge as something students absorb privately, prove on exams, and use for personal advancement.
The alternative is treating classrooms like newsrooms or citizen assemblies—places where knowledge gets actively built and tested according to shared rules. It means treating students not as future professionals but as current participants in maintaining reality itself.
Common Objections
“This will lower standards.”
Expertise must now demonstrate competence publicly, not just claim credential-based authority. Wikipedia doesn’t eliminate expertise—it requires proving it through contribution. This arguably raises standards by making expertise visible and testable.
“Students aren’t ready.”
We learn by doing. Students learn what counts as evidence by making arguments others evaluate, not by memorizing definitions in isolation.
“Students will push their biases.”
Exactly—then they’ll encounter pushback. They’ll learn assertion isn’t enough. The process teaches knowledge requires meeting collective standards, not just expressing conviction.
“This won’t work for STEM.”
Science invented public, self-correcting knowledge. Science education’s problem is presenting results as finished facts rather than showing messy testing and revision. Teaching science like Wikipedia would show how science actually works.
What Else Must Change
Universities can’t solve this alone. Journalism needs to show its work—explaining methods, admitting uncertainty, making corrections prominent. Science needs more open access and post-publication review. Government needs accessible data and citizen interpretation. Tech platforms must choose between optimizing for engagement versus epistemic health.
But universities have unique advantages: they create knowledge and train the next generation, with some protection from market and political pressures. That freedom comes with responsibility.
The Stakes
Hannah Arendt argued that totalitarianism begins when the common world—the space of shared facts—dissolves. When people retreat into tribal certainties, politics becomes impossible because there’s no common ground.
What we’re witnessing may be not just an epistemic crisis but an ontological one—fracturing not just what we know but what we take to be real. Once we recognize categories are constructed, we face perpetual temptation to dismiss any category we dislike as “merely” constructed.
The answer is not naive realism but what we might call “critical constructivism”—recognizing knowledge is socially maintained while taking responsibility for maintaining it well. Not “facts speak for themselves” but understanding all knowledge comes from somewhere, while insisting some situations give better views than others.
From Critique to Knowledge Construction
The great mistake of late 20th-century critical theory was assuming that exposing power’s role in knowledge production would automatically liberate. Instead, it often produced paralysis or nihilism.
We need to transform students from knowledge consumers to co-authors of knowledge.
Can we still know anything together? Yes—but only if we treat knowing-together as something we actively do, not something that simply happens.
We cannot return to automatic deference to authority. That era is gone, and good riddance to hierarchies that often protected injustice as much as truth.
But we can build something better: democratic epistemic institutions where authority is earned through visible competence and collective maintenance, where disagreement is structured rather than suppressed, where students learn not just to question but to construct.
Wikipedia shows it can work. Now universities must learn the lesson.

If I remember correctly, Richard Price has an interesting analysis of how Saramaka elders debated oral histories, a process with the same iterative quality we see in historiography when it functions best.
There have been accusations that Wikipedia has, in certain instances, a left-wing bias. I’m not sure how valid they are, but its editors apparently lean more left than right, or did, and I’m not sure whether or how Wikipedia has addressed this. Perhaps it’s only a mild problem for Wikipedia, but it is a more serious one for academia, where peer review has at times failed because of ideological monoculture. One can have all the checks and procedures, but without some attempt to balance the perspectives of the reviewers, the process will likely be weak.