Wikipedia will be 25 years old in January. During that time, the encyclopedia has gone from a punchline about the unreliability of online information to the factual foundation of the web. The project’s status as a trusted source of facts has made it a target of authoritarian governments and powerful individuals, who are attempting to undermine the site and threaten the volunteer editors who maintain it. (For more on this conflict and how Wikipedia is responding, you can read my feature from September.)
Now Wikipedia’s cofounder Jimmy Wales has written a new book, The Seven Rules of Trust: A Blueprint for Building Things That Last. In it, Wales describes a global decline in people’s trust in government, media, and each other, and looks to Wikipedia and other organizations for lessons about how trust can be maintained or recovered. Trust, he writes, is at its core an interpersonal assessment of someone’s reliability and is best thought of in personal terms, even at the scale of organizations. Transparency, reciprocity (you have to give trust to get trust), and a common purpose are other ingredients he credits for Wikipedia’s success.
We spoke over video call about his book, how Wikipedia handles contentious topics, and the threats facing the project and other fact-based institutions.
The interview has been condensed and edited for clarity.
The Verge: You wrote a book about trust, and a global crisis in trust. Can you tell me what that crisis is and how we got there?
Jimmy Wales: If you look at the Edelman Trust Barometer survey, which has been going since 2000, you’ve seen this steady erosion of trust in journalism and media and business and to some degree in each other. I think it gives rise in a business context to a lot of increased cost and complexity, and politically, I think it’s tied up with the rise of populism. So I think it’s important that we focus on this issue and think about, What’s gone wrong? How do we get back to a culture of trust?
What do you think has gone wrong?
I think there’s a number of things that have gone wrong. The trend actually goes back to before the Edelman data. Some of the things I would point to are the decline of the business model for local journalism. To the extent that the business model for journalism has been very difficult, full stop, you see the rise of low-quality outlets, clickbait headlines, all of that. But also that local piece means people aren’t necessarily getting information that they can verify with their own eyes, and I think that tends to undermine trust. In more recent times, obviously the toxicity of social media hasn’t been helpful.
Why has Wikipedia so far bucked that trend and continued to be fairly widely trusted?
Part of the rationale for writing the book is to say, “Look, Wikipedia has gone from being kind of a joke to one of the few things people trust, even though we’re far from perfect.” I think transparency is hugely important. The idea that Wikipedia is an open, collaborative system and you can come and see how decisions are made, you can join and participate in those decisions — that’s been very helpful. I think neutrality is really important. The idea that we shouldn’t take sides on controversial topics is one that resonates with a lot of people. I don’t want to come to an encyclopedia or frankly a newspaper and be told only one side of the story. I want to get the full picture so I can understand the situation for myself.
You brought up the Edelman survey and decline in trust in media, government, and to a lesser extent individuals. Are we seeing a decline in trust or a transfer of trust from institutions to individuals? In the book, you say we are hardwired to trust at an interpersonal level by gauging other people’s authenticity, which is a trait that plays very well on social platforms, where some very trusted figures also gain extra trust by telling their followers not to trust in the media, the FDA, the universities. Do you see this dynamic playing a role, and if you do, how has Wikipedia, which is an institution, continued to be trusted?
I think there’s some truth to that. But I also think it’s incomplete because I think a lot of people who support Donald Trump will also say they don’t really trust him. They just think it’s not relevant. They’ve sort of lost faith in the idea of people being honest. So they’re more likely to say, “All politicians lie, so why is that a big deal?” I obviously think it is a big deal. I think that’s very problematic.
Similarly, I think a lot of the people who are jumping on a bandwagon undermining trust in science, for example, basically see it as a way to become successful. I mean, that’s a pretty cynical view of those particular people, and I’m not a very cynical person, but it’s hard to come to any other conclusion sometimes: there’s a lot of grifting going on.
I interviewed Frances Frei for the book, and she’s a Harvard academic who also has business experience. One of the things she said to me was that people often say that once you’ve lost trust, that’s it, you’ll never get it back. And she says that’s not true. You can rebuild trust. There are certain definable things that organizations and people can do to rebuild trust. So when we think about institutions being attacked, they probably should reflect on what made them vulnerable.
You have some examples in the book, like the back-and-forth about masking and covid, and obviously journalists do make errors. But I tend to think that most publications are fairly transparent about issuing corrections, though maybe not to the level of Wikipedia. How much of the decline in trust has to do with actual mistakes made by those institutions, versus people or groups that want to be able to define their own reality undermining what they see as rival centers of facts, whether that’s academia or science or journalism?
I absolutely think it’s both. In many cases, we have seen media with a real blind spot, and I typically would view it more often as a blind spot problem rather than deliberate bias. I live in London. All three of the major political parties were opposed to Brexit, and in London you could not really find anybody who was openly supporting Brexit, not among my social group. Everybody thought it was a completely ridiculous idea. And yet the public voted for it.
I think a big part of that was that London wasn’t listening and the media tended too often to portray Brexit support as having to do with racism and so on. Which, of course, if that’s how you come at people, they tend to not go, “Oh, you’re right, I’m sorry. I’m going to stop being racist now and change my political views.” They’re more likely to say, “Hold on a minute, you’re not listening to me. I’m not being racist. There are these problems, functional problems, and I don’t think I’m being listened to.” To the extent the media isn’t representative of broader segments of society and isn’t listening to problems that people are having, that’s a problem. And then we also have people who are taking advantage of it and who see that opportunity to campaign and build trust by pointing the finger at the other guy.
Debates on Wikipedia talk pages can get heated. People rebut other people’s proposals without a lot of pleasantry. There is real conflict, but it is generally productive conflict. People keep engaging with each other and usually reach a compromise, which feels rare in online discourse. What do you think the mechanism or mechanisms are that make this possible?
We have a purpose to build an encyclopedia, to be high-quality and neutral, and we have a commitment to civility as a virtue in the community. We’re human beings, so of course sometimes those conversations are, I might say, a bit brusque, but hopefully not stretching quite into personal attacks. There’s also this view that you really shouldn’t attack people personally. And if it gets overheated, you should probably apologize, and things like that, which is not that unusual except in online contexts. I mean, normally I think most people in real life, if you get into a proper nasty quarrel with someone, there is a sort of feeling like, Yeah, that wasn’t productive and maybe we need to apologize to each other and find a better way to deal with each other. As for how we foster more of that? I think in online spaces, it has to do with changing culture. And in many cases, I think it’s the design of algorithms.
I don’t go on Facebook very much anymore, but if one day I logged in and Facebook had an option that said, “We’d like to show you things we think you will disagree with, but that we have some signals in our algorithm that are of quality. Would you like to see that?” I’d be like, yes, sign me up for that. As opposed to: “Our research has shown that you tend to get agitated about trolls, so we’re going to send more trolls your way because you stay on the site longer.” Or “we’re only going to send you stuff we think you’re going to agree with,” which is also not really healthy intellectually.
One of your other examples of a functional online space was the subreddit r/changemyview, which feels similar to Wikipedia in some ways. It’s text-based. There are rules. You’re there for a specific purpose. Is it possible for a big platform like Facebook or X or whatever to become a healthy space, or do you need to be kind of constrained and purpose-built?
I think it’s hard for sure. And I think that’s a great question because I don’t think anybody knows right now. On Facebook, you’ll find pockets of groups that have good, well-run community members who are keeping the peace and insisting on certain standards. And you find horrible places as well. I think it’s the same on Reddit. And another thing that I do think is interesting is looking back, because I’m now old, and I remember before the World Wide Web and I remember Usenet, which was a giant, enormous, largely unmoderated message board. That was super toxic. It had endless flame wars and horribleness and spam and all kinds of nonsense. So I always try to mention that when people have this view of the lovely, sweet days of the early internet — it was such a utopia. I’m like, it was kind of horrible then too. It turns out we don’t need algorithms to be horrible to each other. That’s actually something humans can do, and humans can be great to each other at the same time. But I do think, as consumers of internet spaces, we should say, “Actually, I really would much rather be in places that are good for me.”
You recently weighed in on one of the most contentious topics on Wikipedia or anywhere, the Israel-Gaza conflict. You wrote that you thought that it shouldn’t be called a genocide in wiki voice. You normally stay out of content debates on Wikipedia. Why did you decide to weigh in on that one?
I think it’s really important that Wikipedia remain neutral and that we refrain from saying things that are controversial in wiki voice. I think that’s not healthy for us and not healthy for the world. So it felt important to weigh in and say, “Let’s take a deeper look at this.” And the other thing is normally, we have this idea of consensus in the community, and I would say it has a certain usually constructive ambiguity, like what is consensus? How do you define that? We’ve avoided for good reason, I think, saying, “it’s 80 percent” or any kind of simple rule like that. And the reason is because there are so many different areas in editing where there are different levels of certainty and different levels of consensus. My simplest example is, which picture of the Eiffel Tower should we have as the main picture on the Eiffel Tower wiki page? Well, maybe somebody does a straw poll and it’s 60-40. Personally, if I’m in the 40 percent, I’m going to go, Most people don’t agree with me, oh well, because it isn’t that important.
Whereas in other cases, if you’ve got a significant number of good Wikipedians who are saying, “I don’t agree with this, I don’t think this should be in wiki voice,” you shouldn’t go for 60 percent. That’s nowhere near good enough, particularly not if it has enormous implications for the reputation of Wikipedia and neutrality. We should hold ourselves to a very high standard. This is the kind of thing that, over the years, we have to reexamine over and over and over. Where are we drawing these lines? And are we doing a good job of it? And should we ratchet it up and be more serious about it? And over the years, we have gotten more serious about it. And I think we should be even more serious about it.
Some of the editors said they felt that there was a consensus, that they’d debated this question for months, and that to frame the article as you wanted would be to give both sides of the debate equal weight, rather than to represent the proportional view of experts and institutions. What are your thoughts on that critique?
Yeah, I think they’re wrong. I think we have to always dig deep and examine it, and I think it’s absolutely fine to say, “The consensus of academic genocide researchers is that this was genocide.” That, as far as I can tell, is a fact, so that’s fine. Report on that fact. That doesn’t mean that Wikipedia should say it in our own voice.
And that’s actually important more broadly: if there’s significant disagreement within the community of Wikipedians and we don’t have consensus, and people are putting forward policy-based reasons to disagree, which they are, then hold on. We should always be looking for as much agreement as possible. So what can we all agree on? Oftentimes that may be stepping back, going meta and saying, “Okay, well, we can all agree to report on the facts. We’re not all going to agree on using wiki voice here. So we’re not going to do that. But we are going to report the facts that we can all agree on.”
And it’s important for two reasons. One, it’s what you want from an encyclopedia. You don’t want to be jumping to a conclusion while there’s still live debate. And two, socially within the community, it means we can all have a win-win situation where we can all point at this and say, “Yeah, we disagree but we can point to this with pride and say, ‘Actually, this is a good presentation. If you read this, you’ll understand the debate.’” Brilliant. That’s where we want to be.
When I see people attack Wikipedia for bias, it often comes down to which sources editors deem reliable. They’ll say, “Well, you don’t let us cite Breitbart, so now it’s going to be biased.” How are you thinking about how to draw the line of what is an acceptable source, and how to maintain neutrality as these decisions no longer seem neutral to people who have a completely different media diet made up of sources deemed unreliable?
It’s something we will always be grappling with. Wikipedia does not have firm rules. That’s one of the core pillars. We don’t completely ban sources. We may deprecate them and say, “Well, it’s not preferred as a source. We’d rather have something better.” And then I make no apologies at all for saying not all sources are equal. I always say, if I have a choice between The New England Journal of Medicine and Breitbart, I’m going with The New England Journal of Medicine. That’s just the way it is, and I think that’s fine. When I say we have to grapple with it and take seriously the question of bias, I think we do. But sometimes we’re going to conclude, Actually, I think we’re fine here.
Elon Musk has been a loud voice complaining about bias on Wikipedia. Now he has Grokipedia, an AI-rewritten version of Wikipedia that draws on a bunch of sources that Wikipedia won’t allow. Have you looked at Grokipedia?
A little. Not enough. I need to do a deep dive.
What are your thoughts on it?
I think a lot of the criticism that it’s getting is not surprising to me. I use large language models a lot and I know about the hallucination problem, and I see it all the time. Large language models really aren’t good enough to write an encyclopedia. And the more obscure the topic, the more likely they are to hallucinate. I also think, in terms of the question of trust, I’m not sure anybody’s going to trust an encyclopedia that has a thumb on the scales. Which is to say, when I’m not happy about something in Wikipedia, I open a conversation and enter the discourse. I’m sure if Elon doesn’t like something, it’s just going to change. I don’t see how you can trust a process like that. You know, it is reported that Grokipedia seems to agree with Elon Musk’s political views quite well. Fine. It’s Elon, but that might not be what we all want from an encyclopedia.
Are you concerned that it could be what some people want, or that people will start to use or prefer an AI-revised version of Wikipedia that conforms to their worldview?
Obviously you can’t dismiss that out of hand, but I actually reflect on various research that we cite in the book about trust, that if people feel like there’s a thumb on the scale, then even if they agree with that thumb on the scale, they are likely to trust it less.
I have great confidence in ordinary people. I think that if you ask people, “Would you prefer to have a news source that reflects all your own prejudices and biases and that you agree with every day?” or “Would you rather get something that is neutral and gives you insight into things you might not agree with?” I don’t think it’d be a contest. Most people would prefer the latter. That doesn’t mean they automatically click on it, and they may still reach for their preferred outlet. That’s fine. That’s humanity. But I don’t think we’re about to all go off into our little mind bubbles permanently.
How are you thinking about Wikipedia and AI more generally? The internet is increasingly full of AI-generated slop, and the foundation noted earlier this year that bots scraping the site were straining Wikipedia’s servers. Do you see AI presenting a threat, possible benefit, both?
Both. AI slop on the internet I don’t think is a huge issue for Wikipedia because we’ve spent, you know, now nearly 25 years studying sources and debating the quality of sources. And so I think Wikipedians aren’t likely to be fooled by, you know, sort of fluff content that is generated by AI.
Obviously, crawling Wikipedia and hammering our servers, that’s not cool. So we hope we find a reasoned solution to that. The money that supports Wikipedia comes from small donors giving an average of just over $10. They’re not donating to subsidize billion-dollar companies crawling Wikipedia. So you know, “pay for what you’re using” seems like a fair request.
Then the other thing that I think is super interesting is the question of how we, the community, might use the technology in new ways. I’m not a very good programmer, but I am a programmer, and I just wrote a little thing where I can feed it a short Wikipedia entry that maybe has five sources, feed it those five sources, and ask, “Is there anything in the sources that should be in Wikipedia but isn’t? Or is there anything in Wikipedia that isn’t supported by the sources?” I haven’t even had time to play with it, but even at a first pass, I thought, this is actually not terrible.
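Wales didn’t share his script, but a minimal sketch of that kind of entry-versus-sources check might look like the following, assuming the OpenAI Python client; the model name, prompt wording, and function are illustrative placeholders, not his actual setup.

```python
# Illustrative sketch: ask an LLM to compare a short Wikipedia entry
# against its cited sources and flag missing or unsupported material.
# Assumes the OpenAI Python client and an API key in the environment;
# the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def check_entry_against_sources(entry_text: str, sources: list[str]) -> str:
    """Return the model's answer to two questions: what do the sources
    contain that the entry omits, and what does the entry claim that the
    sources don't support?"""
    source_block = "\n\n".join(
        f"SOURCE {i + 1}:\n{text}" for i, text in enumerate(sources)
    )
    prompt = (
        "Here is a short encyclopedia entry followed by its cited sources.\n\n"
        f"ENTRY:\n{entry_text}\n\n{source_block}\n\n"
        "1. Is there anything in the sources that should be in the entry but isn't?\n"
        "2. Is there anything in the entry that isn't supported by the sources?\n"
        "Answer both questions, citing the relevant source numbers."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```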
Going back to why Wikipedia works, editors do seem to largely trust each other to be working in good faith, but it also seems like they have a lot of trust or respect for Wikipedia’s rules and processes in a way that feels rare in online communities. Where does that come from?
I think it probably has to do with everything being genuinely community-driven and genuinely consensus-driven. The rules aren’t imposed, the rules are people writing down accepted best practices. Certainly in the early days, that was absolutely how it worked. We would be doing something for a while and then we would notice, like, Oh, actually, you know, best practice is this, so we should maybe write that down as a guide for people, and it becomes policy at some point. That helps to build trust in the rules, that they’re genuinely not imposed top-down, that they are the product of our values and a process and the purpose of Wikipedia.








