The Future of Election Meddling Is Americans Versus Americans

WASHINGTON — Imagine this: It is the evening of Sunday, November 1, 2020. In two days American voters will go to the polls and pick their next president. Seemingly out of nowhere, a strange video appears online. First on Twitter, then Facebook, then everywhere. The video shows the Democratic nominee talking about a plan to rig the upcoming election.
To the untrained eye, the clip is real — and devastating. The candidate’s face, voice, clothing, and body movements all match up. The candidate scrambles to rebut the video, claiming it’s phony and manufactured, but by that point the video has been viewed and shared tens of millions of times. Only weeks later is the explosive clip revealed to be a deepfake, a type of counterfeit video made possible by highly sophisticated but readily available artificial intelligence technology. But by then, the election is already decided, the damage done.
Deepfakes are just one of the growing threats to the integrity — and potentially the outcomes — of U.S. elections, starting with 2020. So says Paul Barrett, the deputy director of New York University’s Stern Center for Business and Human Rights. In a new report, Barrett describes what election interference might look like in the upcoming 2020 elections and what tech companies, state and federal government, and regular citizens can do about it.
Barrett spoke with Rolling Stone on Monday about Russia’s next moves, why Instagram and WhatsApp are of particular concern, and why we should expect election interference to be more domestic — think Americans vs. Americans — than foreign in 2020. The interview has been edited for length and clarity.
What do you see as the main disinformation threats?
The primary threat, I assume, will remain Russia. In my assessment, they were relatively successful in 2016 in injecting themselves into the election process, spreading divisiveness and trying to tilt the election toward Donald Trump and away from Hillary Clinton.
That’s the starting place: What will Russia do? What will other countries do if they try to imitate Russia? Iran has already been active in this area. China has been active very recently, using English-language disinformation aimed at the protests in Hong Kong. That reinforces the fact that we need to anticipate potential Chinese efforts during our own election season, and perhaps efforts by other countries as well.
On the domestic front, in terms of sheer volume, domestically generated disinformation is far greater than what comes from abroad.
That’s kind of terrifying. How can you tell? Why is that?
We’re doing that to ourselves all the time. Most of it comes from the right but some of it has come from the left. And I see no evidence that it’s going to stop despite the efforts that Facebook, Twitter, and others have made to exclude certain notorious disinformation efforts.
We saw it demonstrated rather graphically with the doctored Nancy Pelosi video of several months ago. I fear there’s going to be a lot more material like that. And we see it practically every day in the person of our president, who is the disinformation purveyor in chief. I think it’s safe to say we’ll see the kind of activity he encourages and exemplifies in the coming election.
How are these potential threats different from 2016?
We need to be on our toes about the Russians potentially focusing more narrowly on specific groups of Facebook, Twitter, Instagram, and YouTube users as opposed to projecting out from Facebook pages at the population as a whole the way they did in 2016.
I included in the report a section on “unwitting Americans” potentially finding themselves caught up in fake events because I think it’s possible that the Russians will take that more narrow approach, which may be harder for the platforms to detect and prevent. And while it may seem on one level to be less threatening because they’re not trying to rile up large portions of the population, if they’re able to confuse people even on a relatively limited scale, that’s something they could do over and over again.
During the 2016 campaign, there were stories about Macedonian fake-news mills and individual Americans making money churning out bogus, inflammatory content tailor-made for Facebook. But those stories seem almost quaint compared to the ways that political consultants are now getting into the disinformation business, spending huge sums to mess with foreign elections.
The business of disinformation is morphing from traditional clickbait to companies doing disinformation for pay for outside clients.
Political dirty tricks are as old as the republic. How do you draw the line between dissuasion and disinformation? Can you?
You’re putting your finger on very important and thorny questions. You have to draw the line not by attempting to do it in a universal or global sense, but by marking out areas that you are more confident can be sorted out. For example, someone telling Twitter and/or Facebook users that this year, for the first time, you can text in your vote. That’s dissuasion, that’s political dirty tricks, and you can laugh it off, but it’s false information that deserves not just to be marked as false but to be removed.
I argue in the report that provably false information deserves to be removed from the social media sites. While I recognize that’s not a conventional position, I see it as less radical than it sounds. It’s just another category added to hate speech, bullying, conspiracy hoaxes, and voter-suppression speech. If you can prove something is completely false or is a conspiracy theory — that the shootings in El Paso or in Sandy Hook in 2012 never took place — I would say rather than just annotating that, you should remove it altogether.
If you can reach a similar conclusion with material related to political candidates, you should do the same thing. If a video shows a candidate stumbling through a speech to suggest she’s drunk, you should take it down. You should keep a record copy of it but not keep it on the site.
You specifically single out Instagram and WhatsApp. Why those two?
For very separate reasons. Instagram played a bigger role than most people appreciate in 2016 as a vehicle for the IRA (Russia’s Internet Research Agency). Instagram has not in the interim taken the steps that its own older brother or older sister platform, Facebook, has taken, and as a result I think it remains vulnerable.
On top of that, people need to appreciate that image-based or meme-based disinformation is at least as prevalent and problematic as strictly text-based material, if not more so. Instagram is obviously a place where images rule. And Instagram is increasingly popular with younger users, and so potentially with new voters, who may or may not be paying attention to the provenance of the material they’re absorbing from social media.
WhatsApp is a different set of issues. The main reason I included WhatsApp is simply the degree to which it was used in Brazil and India during their recent national elections. It’s important to note, as I do in the report, that WhatsApp is not as popular in the U.S. as it is in other countries, and Facebook has already gradually reduced the reach of WhatsApp.
Nevertheless, smart people I talked to about this said the potential to abuse WhatsApp is still there.
You recommend limiting users’ ability to share stories on WhatsApp to one group at a time.
The danger is that you end up with organized efforts that may be a political campaign or supporters of a political campaign forwarding significant amounts of material to numerous WhatsApp groups simultaneously. That sets off a chain reaction of potentially false material being disseminated very broadly very quickly — and beyond the view of the host platform, because the material is encrypted.
The more you reduce the ability to send out batches of materials at one time, the less attractive that platform is for misuse.
How much should we worry when the effects of disinformation on our elections aren’t clear?
There are people on both sides of that question. Kathleen Hall Jamieson at the Annenberg Public Policy Center at Penn wrote a book in which she said it’s likely that Russian interference did have an effect on the election and helped elect Trump. Others have criticized the book and said they didn’t buy it.
But here’s what I’d be concerned about even short of being able to answer that ultimate question about 2016 or about 2020. It is the danger that the credibility of the election process is undermined. That the spread of disinformation heightens cynicism, causes voters to have difficulty telling reality from unreality, and you potentially end up with a situation where a candidate either before the election or — god help us — after the election makes an argument that the whole thing was rigged.
We had a situation in 2016 before Election Day. Then-candidate Trump, in a kind of jocular fashion, mused about whether he would accept the election results. If you end up with disinformation about the legitimacy of elections generally, I think that kind of demagoguery has the potential to be hugely disruptive and have effects that are hard to forecast.
What are the biggest open questions for you?
To what degree will our foreign rivals and antagonists bestir themselves to interfere directly with our election process? That would be question number one.
Number two is how prepared and how vigilant the social media platforms themselves turn out to be. They clearly have made progress since 2016. They’ve taken down huge numbers of fake accounts, and smaller numbers of problematic accounts in more targeted removals — all of that seems to me to be good. They say they’re preparing themselves for the prospect of deepfakes or cheapfakes, and they say they’re ready to deal with that. I hope they’re right.