The Disinformation Vaccine: Is There a Cure for Conspiracy Theories?

Sander van der Linden was working in his office at the University of Cambridge a few years ago when he received a strange phone call. A professor of social psychology and director of the Cambridge Social Decision-Making Laboratory, van der Linden is one of the world’s leading researchers on how to combat the scourge of disinformation and misinformation. He receives requests all the time about his work from government agencies, media organizations, and civil-society groups. But the person who called that day was not a bureaucrat or a diplomat. It was a representative from L’Oréal, the multi-billion-dollar global beauty product company. L’Oréal had what it called a “scientific disinformation” problem related to some of its products. Could van der Linden help?
At the heart of van der Linden’s research is a theory: Our information crisis can and should be treated like a virus. Responding to fake stories or conspiracy theories after the fact is woefully insufficient, just as post-infection treatments don’t compare to vaccines. Indeed, a growing body of social science suggests that fact-checks and debunkings do little to correct falsehoods after people have seen a piece of misinformation (the unintentional spread of misleading or false stories) or disinformation (the deliberate spread of such stories with intent to deceive). Van der Linden believes we can protect people against bad information through something akin to inoculation. A truth vaccine. He calls this tactic “prebunking.”
“I like to think of it as a multi-layered defense system for society,” he says. “We have to start with prebunking or inoculation if you can, then real-time fact checks, and if that fails we can still do debunking.” He adds, “We need to think of this at the systems level.”
Van der Linden told the L’Oréal rep on the phone that he had a policy against consulting on specific products and politely declined. (Based on the company’s intentionally vague description, he suspected the products in question must have been L’Oréal’s shampoos.) But the request nonetheless piqued his interest. Who would launch a disinformation campaign about…shampoo? As van der Linden told me in a recent interview, “It dawned on me that there’s a lot of disinformation in the private sector.”
For the past half-decade, when the problem of disinformation, online trolls, and viral conspiracy theories has come up for discussion, it has typically been in the context of elections and public policy, diplomacy and geopolitics. We think of Russia’s Internet Research Agency trying to influence the 2016 election and Macedonian click farms gaming Facebook’s algorithm to spread fake, inflammatory stories. We think of Americans who, wittingly or not, amplify disturbing conspiracy theories such as QAnon and Stop the Steal, and still other Americans who believe those theories with such passion that they engage in acts of violence in response.
But lately, the disinformation crisis has found fertile terrain in corporate America. The online furniture retailer Wayfair has been targeted by QAnon conspiracy theorists who falsely claimed the company was secretly trafficking young children. Victoria’s Secret was accused without any evidence of embedding human-tracking chips in its bras. A small business in Pennsylvania named Koch Family Farms was victimized by a Russian troll operation that spread the baseless claim that Koch’s turkeys were poisoning people. (Experts suspected the Russians believed the politically active billionaire Koch brothers had a connection to the farm; they did not.) And for every corporate disinformation controversy that makes it into the news, experts say there are many more that happen without ever going public.
Earlier this year, Sander van der Linden got another request from the private sector. This time, it was from the global P.R. firm Edelman. Edelman represents some of the most well-known companies in the world. It also puts out an annual “trust barometer” report, an effort to gauge public trust in companies, brands, and institutions. In 2020, as part of its trust-measurement project, Edelman declared the U.S. to be in the midst of an “infodemic,” so riddled with doubt, cynicism, and confusion was the country’s information ecosystem. “We are in an era of information bankruptcy, when the public believes its leaders are liars and that platforms are biased,” Edelman CEO Richard Edelman says.
The Edelman reps who contacted van der Linden said their firm was toying with the idea of designing a new product to combat the spread of mis- and disinformation in the private sector writ large. And they wanted to incorporate his theories on inoculation and prebunking into that product, making for what could be the biggest test case yet of applying the theories of immunology and epidemiology to our information crisis.
On Tuesday, Edelman launched what it calls the Disinformation Shield, a tool that uses artificial intelligence, real-time media monitoring of the open and dark webs, and social psychology to track, identify, and defuse the next viral meme or hideous conspiracy theory that could bring a major corporation to its knees. The product draws on van der Linden’s inoculation theories to try to prevent the spread of disinformation, much as a public-health agency works to stop a highly transmissible and potentially dangerous virus from rampaging through a population.
Jim O’Leary, Edelman’s head of global corporate affairs, said in an interview that disinformation was the new battleground not just for elections and governments but for the business world. Just as most of corporate America awoke to the importance of cybersecurity six or seven years ago, companies large and small now realize they can’t ignore the global information crisis. “Disinformation is not ‘the next great threat,’” he says. “It’s here today. For our clients, this has entered the realm of not ‘if’ but ‘when.’”
Disinformation has proliferated in large part because social media has made it infinitely quicker and cheaper for anyone to reach an audience and spread a message. A cottage industry of for-hire firms that specialize in disinformation, fake news, and information warfare has emerged. Some firms peddle relatively innocuous services, including buying likes, retweets, and shares on Facebook, Instagram, or Twitter to make clients look more popular, as the New York Times revealed in a 2018 investigative series. But according to Roman Sannikov, the director of cybercrime and underground intelligence at Insikt Group and a former contract linguist for the FBI, there are far more nefarious and possibly illegal services available online if you know where to look and have money to burn.
In 2019, Insikt Group published a report titled “The Price of Influence: Disinformation in the Private Sector,” in which Sannikov and his team went undercover to expose the gray and black markets where you could buy disinformation campaigns, smear campaigns, and more. They created a phony corporation and then sought out the services of two different firms: one to create fake positive content about the company, and one to attack that same company. One outfit quoted Sannikov $15 for a short article and $8 for social media posts; Facebook accounts went for $150 and LinkedIn accounts for $200. One of these companies even offered to file a false police report and provide a “takedown service” that included helping “sink an opponent in an election.”
More recently, Sannikov said he and his colleagues have noticed disinformation-for-hire groups offering to engage in stock manipulation, a modern twist on the old pump-and-dump scheme. He told me they researched whether disinformation played a part in the GameStop frenzy but didn’t find anything. Still, he added he could imagine such groups trying out similar tactics on less-visible penny stocks as a way to game the markets without drawing much attention.
Sannikov told me that most of the groups like this that he encountered operated in Eastern Europe and the Philippines. The biggest misconception about modern disinformation, he said, was that it was something nation-states did to each other, and that it was mostly the work of high-profile organizations like the Russian Internet Research Agency. But in his experience, Sannikov said, there were a lot of what he called “mom-and-pop” operations that offered disinformation for sale. “We tend to focus on the big-name players,” he told me. “In reality, there’s a lot of these smaller groups that can be involved in this activity and they have thousands of bots under their control that they can implement for the right price.”
One day in 2015, the fast-food chain KFC found itself on the receiving end of a viral misinformation campaign. A man in California posted on Facebook that he’d been served a deep-fried rat, warning people to watch “what you eat” because “people are sick out there.” The viral post came to KFC’s attention when CNN ran a story about it, which sent the claim ping-ponging around the world. “We literally watched the news travel across the globe and saw in some cases double-digit sales impact in countries outside the U.S.,” says Staci Rawls, the company’s top communications executive.
Rawls contacted the man who wrote the post, and the company tracked down the mysterious fried product, tested it, and discovered that it was not a rat but merely an unfortunately shaped chicken tender. But by then, the story had reached an audience of many millions of people. Was this a case of misinformation or disinformation, an unintentional post or a purposeful attempt to damage one of the best-known food chains? In a sense, the answer didn’t matter, because the damage had been done.
Every big company nowadays tracks as many social media platforms as possible, watching in real time what people are saying, trying to stay ahead of the next conspiracy theory, the next Great Fried Rat Hoax. But the speed with which a viral tweet or a TikTok video can travel and reach the masses means round-the-clock monitoring isn’t enough of a defense. The tricky question for a big company or a P.R. firm like Edelman is what to do once the meme is already on the move. How do you slow its spread and reduce the potential audience? That’s why Edelman contacted van der Linden, the Cambridge professor, for help.
Van der Linden got turned on to the idea of prebunking and inoculation theory when he stumbled across a decades-old news article about a man named Bill McGuire. McGuire was a social psychologist at Yale in the 1950s and 1960s who was concerned about the effects of Cold War-era propaganda and the Soviet Union’s attempts to brainwash Americans. So McGuire, who had done pioneering work on persuasion, began looking at the subject from the opposite angle — how to resist persuasion. “What he found was if people are not prepared to defend their beliefs or their knowledge, they’re really easily duped by arguments,” van der Linden says. “So he started thinking about fortifying people’s mental defenses.”
According to van der Linden, McGuire’s research into how to prepare and even inoculate people against foreign propaganda faded from the spotlight. But to van der Linden, it offered a ready-made blueprint for tackling a distinctly modern problem.
Five or six years ago, he took on the problem of climate-change denial. He wanted to test whether inoculation theory could be applied to prebunking that denial, building up people’s defenses against attempts to question climate science.
In one experiment, van der Linden tested how to effectively counteract a widely circulated petition by a climate-denial group that claimed there was no consensus that climate change is caused by humans. First, he showed his test subjects a short prebunk message — “Some politically motivated groups use misleading tactics to try to convince the public that there is a lot of disagreement among scientists” — and then showed them the denialist petition. For other subjects, he used a more detailed prebunk — the same “Some politically motivated groups use misleading tactics” warning, plus the fact that the signatories to the denialist petition included members of the Spice Girls and Charles Darwin. Which worked better, he wanted to know: the short generic warning, or the more detailed prebunk that paired the warning with specific facts?
In both cases, he found, the prebunk worked. He and his collaborators went on to develop a series of online games that put the player in the position of a disinformation agent. By showing people how anger, fear, and other lizard-brain emotions are used to spread lies and misleading information, van der Linden theorized, the games would help players develop awareness and protections — informational antibodies, if you will — for when they encountered mis- and disinformation in real life. He likens this to learning a magician’s tricks. “There’s really two ways to build people’s defenses: one is the passive way, someone gives you a fact sheet or a blueprint of how the trick works and you read it or absorb it,” van der Linden says. “The other is learning by doing. You go to a magic show and step into the shoes of the magician for a little while to try to figure out how and why the trick works.”
And now, in the midst of the Covid-19 pandemic, van der Linden says he’s started to think about whether it’s possible to reach a version of herd immunity to disinformation, just as public-health specialists and immunologists study what it will take to reach herd immunity to the coronavirus.
Other information experts are skeptical of applying concepts such as inoculation and herd immunity to disinformation. Shawn Walker, a professor of social and behavioral science at Arizona State University, says the epidemiological approach risks overlooking the nuances and differences between online communities and how one form of intervention or solution might work in, say, a particular Reddit subgroup but not on Twitter. “There has to be thoughtful engagement and the understanding of the different balkanization of these communities,” Walker says. “Some you want to go in and engage, and some you don’t want to because it feeds the beast.”
There are also clear ethical questions about the private sector, with its profit motives, using prebunks and inoculation to intervene in the information ecosystem. Rival companies might have little incentive to share tactics or best practices — or might even put those practices to nefarious use — when a competitor finds itself in a crisis, Walker says. And of course there’s the potential for a P.R. firm like Edelman to position itself as an arbiter of truth, snuffing out not just disinformation but also unpopular opinions and inconvenient facts about a client. An Edelman rep says the firm has enlisted an outside consultancy called Compass Ethics to advise it on how to tackle disinformation without declaring itself judge and jury of what’s true and what’s not.
Sander van der Linden told me he’s aware of these ethical trip wires. But in the end he agreed to advise Edelman on its disinformation project because he could envision real advances being made if big companies put their vast resources and manpower into addressing this issue. “Working with government is great,” he says, “but to achieve maximum effectiveness, we need the private sector on board as well.”