It's Time To Stop Blaming Misinformation for Cynicism and Mistrust
On Meta's decision to replace fact-checkers with community notes
Meta is ending fact-checking and removing restrictions on speech across Facebook and Instagram, Mark Zuckerberg, its CEO, recently announced, calling the move an attempt to restore free expression on its platforms. Fact-checkers will be replaced with Community Notes, a system similar to the one Elon Musk introduced on X, in which users flag posts they believe are inaccurate or require more context.
While Meta will continue to target illegal behavior, Zuckerberg wrote in a separate Threads post, it will stop enforcing content rules about issues like immigration and gender that are “out of touch with mainstream discourse.” President Donald Trump, who was suspended from Facebook for two years after the Jan. 6 insurrection, was pleased. “They’ve come a long way,” he said after Zuckerberg’s announcement.
To people who are concerned that democratic societies are plagued by misinformation, this is deeply alarming. Meta, those people argue, is yielding to forces that reduce institutional trust and drive the public to accept falsehoods and conspiracy theories about climate change, vaccines, election fraud, and more. Former President Joe Biden once suggested that misinformation is “killing people.”
The fact is, though, that misinformation doesn’t actually change minds all that much.
As far back as the 2016 election, exposure to online content didn’t appear to affect elections as much as initially thought. A 2017 paper by Stanford economists Levi Boxell and Matthew Gentzkow and Brown economist Jesse Shapiro found that political polarization has been most intense among the oldest Americans, who spend the least time online. The research suggested that cable news was a more significant driver of partisan divisions. A 2018 paper from the same authors found that Trump performed worse than previous Republican candidates among internet users and people who got campaign news online and concluded that “the internet was not a source of advantage to Trump.”
It's worth considering some general facts about misinformation. First, it is relatively rare. Empirical research shows that the average person consumes very little of it, especially when contrasted with information from mainstream sources.
Second, engagement with misinformation is heavily concentrated in a small group of very active social media users.
Third, this small group is not a cross-section of the population. They are people with very specific traits, such as strong conspiratorial worldviews, high partisan animosity (i.e., they actively detest the opposing political party), anti-establishment attitudes, and—most importantly—institutional distrust. Exposure to misinformation seems to play a minor role in this distrust. Indeed, people seek out misinformation because they distrust institutions (science, public health, mainstream media, etc.), not vice versa.
Put differently, it’s not that people who are otherwise trusting of institutions and democratic norms come across a piece of misinformation and become paranoid partisans. Rather, paranoid partisans actively seek out unreliable news sources to confirm their biases.
Interestingly, even these individuals prefer sharing accurate rather than false information because people generally want to look good in the eyes of others, and sharing truth rather than falsehoods garners higher approval even from co-partisans.
What are the goals of people who post misinformation? In a 2021 study, the political scientist Mathias Osmundsen and his colleagues found no evidence that such individuals are ignorant. Perhaps unsurprisingly, the best predictor of sharing misinformation was hatred of the opposing political party.
The researchers also found that left-leaning individuals are correct in their impression that Republicans are more likely to share misinformation online. Why is this the case? Osmundsen and his team found that the main news source type that was more positive toward Republicans than Democrats was “fake” news from unreliable sources. This suggests that when Democrats are motivated to post content that ridicules the opposing party, they can easily locate such ammunition in mainstream outlets. Republicans, in contrast, must seek out fringe news sites to find equally useful ammunition.
The researchers conclude by suggesting that the small numerical minority of individuals who post misleading information are not ignorant but are rather motivated by strong partisan goals. As the philosopher Dan Williams has pointed out in Boston Review, when people seem to have been manipulated by ideas, it is typically not because they have been duped against their interests but rather because the ideas promote their interests.
Trying to bolster trust by targeting online misinformation may only exacerbate the ongoing crisis of confidence in institutions. Interestingly, the economist Tyler Cowen once suggested that it is not fake news that is the problem, but real news. “The world of the internet — fundamentally a world of information — is reporting on the failures of the elites 24/7,” Cowen writes, and “An informed populace, however, can also be a cynical populace, and a cynical populace is willing to tolerate or maybe even support cynical leaders.”
Trying to control the flow of information to a deeply cynical public, and thereby hoping to reduce their cynicism, is like treating a brain tumor with painkillers to manage the headaches — it masks the symptom but doesn’t address the cause.
That’s precisely why preoccupation with misinformation is so appealing. It reframes complex, entrenched social and political dysfunction — often caused or worsened by establishment failures — into a neat, tractable problem: “fighting online misinformation.”
Fact-checking seems harmless enough in theory, but, as the saying goes, “who fact-checks the fact-checkers?” When enforced by biased and imperfect humans, these efforts often deepen the very social fractures and institutional distrust that drive the demand for misinformation. Outsourcing to “community notes” might not be any better, but at least each individual feels they have some say when they encounter falsehoods or misleading content online.
It’s tempting to believe that cracking down on online falsehoods could restore us to some pre-digital era of truth and objectivity. But not only is that golden age a myth, most of the real issues in modern political discourse aren’t born on social media — they’re merely reflected there.
In stepping back from its fact-checking policies, Meta may not be inviting further erosion of facts but acknowledging an uncomfortable truth: The battle against misinformation isn’t won by gatekeeping online speech. Instead, it requires addressing the deeper reasons people lose faith in democratic norms and institutions in the first place.
This article was originally published by the Boston Globe under the title “The myth of misinformation.”
There was a time, in a galaxy far, far away, when media fact-checkers devoted their time exclusively to verifying the accuracy of their employer’s product (e.g., The Washington Post), not the statements of others. Some of those publications employed an ombudsman who publicly took their employers to the woodshed whenever they botched their coverage of a specific event (e.g., the Duke lacrosse team sex scandal).
Also—and I know this seems quaint—editors routinely ripped a cub reporter a new one whenever he submitted a story containing adjectives and adverbs. Tell the story, present the facts, and trust your subscribers’ ability to determine who is right and who is wrong. Finally, if you really wish to regain the public’s trust, own your mistakes.
Hey, I can dream, can’t I?
The false conspiratorial people are de minimis from my view. They get amplified by the actual problem demographic that propagandizes the playing field in their quest to be the real demanding architects of societal and economic design.
Both political sides of these real participants want the same things... the goals are generally shared with only a few conflicts. For example, both would like to see an end to homelessness.
The difference is the "how"... the path to get it done.
And here is where I see a problem with "disinformation". Often the issue of disinformation is what is missing. Using the example of homelessness, what is missing is the large body of evidence that the "how" advocated by the left side of the actual societal change architects has been a dismal failure. What is missing is the stories of how other countries like Finland and Switzerland have been successful with their policies on homelessness. What is missing is honest analysis of the potential for the right-side "how" to solve the problems.
The root cause of this type of disinformation is that the mainstream media is dominated by the left. And the left refuses to give any media air to validate any right-side solutions.
Even today, after the people have essentially elected a right-side solutions platform, the mainstream media is still involved in shooting holes into every policy idea and ignoring the still glaring evidence that left-side policies for states and cities have resulted in a precipitous decline of almost every societal health indicator.
California and New York are a mess of crime, homelessness and disasters made worse by crappy governance... but hey, those fascist, Nazi MAGA types are a threat to democracy!