Can Facebook Be Fixed?

Facebook has not done enough to combat hate speech and misinformation; its negligence could impact the November election; and decisions like allowing President Trump to post racist content unfettered constitute a “significant setback for civil rights.”
Those were among the conclusions of auditors who spent two years examining Facebook’s policies. None of the findings in their 89-page report, first revealed Wednesday in the New York Times, will come as a surprise to anyone who has been charting the proliferation of harmful content on the platform, particularly the civil rights leaders who helped spur a massive advertising boycott of the platform last month. Mark Zuckerberg met with some of these leaders on Tuesday, one of several recent steps the CEO has taken to placate the brands who have vowed not to spend money on the platform until action is taken to tamp down hate speech. The meeting didn’t go very well. “It was a disappointment,” Color of Change President Rashad Robinson wrote on Twitter. “They have had our demands for years and yet it is abundantly clear that they are not yet ready to address the vitriolic hate on their platform.”
The need for Facebook, which has over 2.5 billion active users, to overhaul its content moderation practices feels especially pertinent heading into the summer. Not only have the recent Black Lives Matter demonstrations ignited a national conversation about systemic racism and the spread of hate speech, but in just a few months Americans will decide whether to give the world’s most prominent purveyor of said hate speech another four years in the White House. Considering the role Facebook played in putting him there in the first place, Zuckerberg is facing more pressure than ever to ditch his position that his platform shouldn’t be an “arbiter of truth” — and to do more to prevent President Trump and other bad actors from promoting lies.
To better understand what to expect from Facebook now, in the months leading up to the election and beyond, Rolling Stone spoke with Claire Wardle, executive director of First Draft, a nonprofit organization specializing in misinformation (First Draft has received funding from the Facebook Journalism Project). Though Wardle believes social media platforms have the capacity to allow for more transparency and better regulate hate speech and misinformation, she isn’t very optimistic they’re going to play ball as long as their primary focus is maximizing profits. “Ultimately, you’ve got shareholders, and you just want to make as much money as possible,” she says. “You don’t care.”
It seems like this advertising exodus is making Facebook squirm in a way that a lot of other criticism hasn’t. Do you think there’s potential for this to be a long-term movement that results in real action? Or does this feel like a temporary thing Facebook will weather with surface-level changes until the advertising returns?
It depends on how much pressure the brands continue to put on Facebook. For people sitting in the Unilever or Coca-Cola marketing board room 10 days ago, it was a win-win situation. By doing this, you get a ton of free press and all the moral high ground. But their ad budgets are down anyway because of COVID. How much are they going to see this, from a brand perspective, hitting them at the end of July? If they haven’t been advertising and it has impacted their bottom line, they’re going to say they can’t live without Facebook. Facebook is also sitting here right now saying they’ve done enough. It’s kind of a game of chicken until the end of this month. Brands have the most leverage on a company like Facebook, but do they really care about hate on that platform, or is this a marketing moment? I’m assuming they are under a ton of pressure from advocacy and civil rights groups to keep this pressure going. I also think there was some back-channeling or some sense that Mark Zuckerberg didn’t take this seriously and didn’t think it was going to last. I think both parties are watching to see what happens.
Last fall, former Facebook employee Yael Eisenstat wrote in an op-ed for the Washington Post that “as long as Facebook prioritizes profit over healthy discourse, it can’t avoid damaging democracy.” Do you agree with this? Is any for-profit social media platform fundamentally going to be a detriment to democracy?
I definitely believe that you can have a profit-driven business that understands the impact certain types of speech can have on democracy, and that this can be used as a selling point. [A company can] say they are creating a space where they are putting healthy democratic speech at the center of everything they do, and be clear about their guidelines and that they are taking tough steps against certain types of speech. However, if you are trying to make so much profit that you have growth every single quarter, I would argue that is why we’re in the mess we’re in. Having to hit these different numbers every single quarter means they can’t have discussions about what they want to do [about hate speech and misinformation]. The concern, if you think about the bottom line, is that if they really piss off a particular segment of their user base, that could potentially be half of their users just in the U.S. So for them, ultimately, this is a numbers game. They have over 2 billion users, and anything that they think might impact that is going to be a problem. The level of profit these companies are trying to make means there’s no way that these two things can live side by side.
Prior to the 2016 election, social media platforms were kind of like the Wild West, and Russia took advantage of this. Since then, steps have been taken to help prevent this from happening again, but we’ve also seen an explosion in the spread of conspiracy theories. As far as misinformation on these platforms, particularly misinformation that could impact an election, are we in a better or worse place now than we were four years ago?
There have been steps forward and steps back. I would actually say we’re in a worse place because the amount of domestic misinformation is much more serious than what we saw in the lead-up to 2016. In 2016, there were some domestic actors in play, but really the main thing we saw was Russian interference. Now, the challenge is that there’s been a recognition of what those tactics were, which has led to a much bigger problem with politicians pushing misinformation themselves. It’s asymmetrical, but we’re also seeing both parties now using similar tactics. We were just talking about Courier News, the ACRONYM-backed organization aligned with the Democratic Party, pushing out local news sites. They’re mirroring what we’ve seen on the Republican side. But again, it’s not symmetrical by any stretch.
I would also say we’ve got a lot more of it. Conspiracies are much more damaging. We’re seeing more hate speech and more divisive speech, partly because the president is using language that is taking advantage of those divisions. Compared to the lead-up to 2016, it’s definitely a different level. We’ve seen some steps forward by the platforms, but just barely. I think we’ve kind of been seduced in the last four months because of the things they’ve done about COVID. But with COVID they can hang their decisions off the [World Health Organization], whereas when it comes to political speech they don’t have that lifejacket. They’re still very, very, very nervous about political speech. Even with the ad boycott stuff it’s like, yeah, they pushed off some Boogaloo Boys, but it’s just completely opaque how they make these decisions, which makes me even more uncomfortable. Who is making the decisions and what are the criteria? The fact that they haven’t had a transparent process by which they make decisions around de-platforming over the past four years is, I think, massively problematic.
When we look as a whole at everything that’s happened [since the 2016 election], it’s kind of astonishing how little has changed. If you as a reporter said to me, ‘Claire, are there any voter suppression campaigns happening online in the five battleground states,’ I couldn’t give you an honest answer despite the fact that I have a team of 10 monitoring this every day. Independent researchers can’t analyze what’s happening in real time in order to take [action], whether that’s debunking information or telling local polling locations about a rumor. I mean, there’s a whole bunch of things that we can’t actually react to in real time.
Given the sheer amount of hate speech and misinformation out there, it seems like A.I. could really go a long way in helping platforms combat a lot of this. What role does A.I. play right now, and how could it help shape the future of content moderation?
A.I. is good at certain things. It’s much better in the English language than in any other language. It’s very good at graphic content. It turns out a nipple normally looks like a nipple. When you look at how effective it is around certain types of speech, particularly hate speech when it’s around keywords, it does a pretty good job. It’s pretty bad at misinformation, mostly because the people who are trying to push misinformation understand where the platform guidelines sit, so they go right up to the line of what the guidelines allow. Look at the Plandemic video. People took out the bit of the 25-minute video that broke Facebook’s guidelines and then uploaded that to TikTok. I mean, the sophistication, which isn’t even that sophisticated, of those people trying to push this stuff is a long way outside the bounds of A.I.
A lot of the stuff we see is not false, it’s just that it’s misleading, or it’s genuine but out of context. If you take a genuine photo, A.I. is not clever enough to understand that the caption doesn’t connect with the meaning of that photo. We’re a long way away from it being able to do that. It’s better at doing things like understanding duplication. So if the fact checkers, for example, say this particular image of Joe Biden is photoshopped or whatever, then A.I. is much better at recognizing that that photo is also in these 26,000 videos or these 27,000 memes. So Facebook is getting better at scaling up. If you have one fact checker look at one thing, Facebook is better at sweeping out all of the examples of that thing. But again, I can’t independently tell you how effective it is. Facebook tells me it’s effective. I can’t independently judge that.
Facebook seems to be using the ambiguous nature of harmful content as an excuse to just kind of freelance these decisions on a case-by-case basis. To what extent would it even be possible for platforms to codify real, close-to-comprehensive guidelines about what constitutes content harmful enough to warrant removal?
It’s really frustrating, and it’s not just the platforms. Governments aren’t defining it. The U.K. has a white paper with all of these recommendations about preventing online harm. When you ask them what they mean by harm, they say, oh, you just know it. When you say problematic content, well, what’s problematic about it? The challenge is that most of this stuff, particularly in the U.S., is legal speech. It’s not terrorist content, it’s not child pornography. As a society, there needs to be a conversation about what we want. The problem is we have a lack of empirical data. I think platforms should be taking stronger positions about what type of harms they will have policies around. They need to say one of the harms is people not vaccinating their children or people not taking a COVID vaccine. If that was a stated harm they were trying to prevent, it would allow them to say they were going to take down all of the anti-vax COVID stuff. People might not like it, but [it would help] if they were up front about calling this particular perspective, which is science-based, one of their harms.
My slight worry is we don’t have any research about longitudinal impacts of the kind of drip, drip, drip, low-level harms that don’t hit any kind of threshold. What does it mean to have years and years of hate content? What does it mean to have years and years of conspiratorial content that drives down trust in governments? That is the stuff that we as a society need to have a conversation around. Are we going to be Safety Sam and say we are pretty concerned about the long-term impact of conspiracies, and therefore these platforms can make the decisions about having that kind of content on their platforms? Or are we going to say free speech for all and conspiracies at random and everybody has a right to believe in chemtrails and 5G and blah blah blah?
Platforms have to take pretty strong positions on policies, but I don’t think they’re ever going to. They have these very wooly policies that allow them to do a bit of a U-turn whenever something happens. I think they could have some clear policies on their positions on things like vaccines or climate or hate speech and divisive speech, but I don’t think they ever will, because it would kill their profits. But if we’re serious about it, they would have to define those harms.
What kind of action should governments be taking right now to help hold social media platforms accountable? Are there any kinds of common sense regulatory measures you’d like to see implemented?
My frustration is that we have governments trying to pass pretty problematic regulation because they want to be seen as doing something. At the same time we have almost no empirical foundation about how much of this stuff is out there and what impact it has. There’s a pretty good gut instinct that it’s a lot and the impact is pretty shitty, but I couldn’t actually quantify it. I couldn’t say it’s gone up this year from last year, et cetera. So I’m kind of astonished that governments haven’t used their power to demand data. When they ask these platforms questions in these hearings, [the response] is normally something like, ‘Oh, yes, that’s a very good question. We’ll get back to you next week.’ There’s all this hedging. But what about the need to be able to independently audit the search results that people get when they search for certain queries?
Last year I did some research. I got people in 12 countries, 444 people, to take screenshots on Google and Facebook and Instagram of what they saw when they searched for vaccines. We got all these screenshots back. That was the only way we could audit how different people saw search results for vaccines. It’s insane to me that the government can’t say to a company that it needs to see the search results on these divisive topics and that it needs something to audit. The E.U. Commission has a code of conduct on disinformation. I was asked to look at the filings they gave, and they’d say something like their A.I. has caught 37 percent more disinformation. But there’s no way to independently interrogate the numbers they were giving us. Or they would say they’ve created a Facebook ad library. But it’s not until you start trying to play with it that you realize that it’s complete bullshit, and it’s really buggy, and they take three weeks to fix the bugs. If a company says they’ve added an “i” underneath a source of news, how many people click on that “i”? Because if nobody clicks on it, it’s completely pointless. We need regulators with the knowledge to push back against these companies.
So it’s frustrating when people say governments can’t do anything. Governments could go on a five-year fact-finding mission to figure out the state of the situation, so that governments would be much better placed to understand the longitudinal impacts of this stuff. They could audit a ton of search terms. I mean, right now it’s just really good journalists who do this work. This should be what governments are forcing the platforms to hand over at all times. They can’t say they can’t give the data because of privacy, either. If you’re a government, you can come up with secure processes. They audit financial companies, they audit all sorts of people. They should be able to audit the platforms. I think it’s just that governments don’t have the tech-focused people to know which questions to ask them and what to demand.
Some of these lawmakers are going to have the chance to grill Zuckerberg and some other big tech CEOs at the end of the month. What kind of progress do you think Congress has made over the past few years in terms of literacy about what’s going on with these platforms, and is there anything in particular you’d like to see asked of Zuckerberg?
There are definitely some particular senators who have had really good briefings. There was the last time, when [Rep. Alexandria Ocasio-Cortez] asked Mark Zuckerberg a really good question about the fact checkers. Mark Zuckerberg basically lied, although I don’t think he did it deliberately; he just didn’t know the answer. But in that moment there’s nobody to go, ‘That’s a lie.’ So you have these moments of theater, but the whole system is set up in a really poor way. It’s not an interrogation like in a court where it’s, ‘Here’s Exhibit A. That’s not the case, Mark.’ But yes, I think the quality of the conversation is improving somewhat. But I think we need to keep asking specific questions: How many people internally are working on this? Why is it that journalists are the people who find this stuff? Why are the transparency tools they’re giving people not up to it?
I don’t think there’s anything mind-blowing I would ask. It’s the same old shit we’ve been asking for for years without getting straight answers. The other problem is that politicians are asking the questions when right now we have a serious problem with politicians themselves pushing misinformation. It’s not just Trump. There is an awkward aspect to this, which is that you kind of need an independent group with power to ask questions of the platforms, because, to be blatantly honest, politicians are too wedded to platforms continuing to allow this stuff.
One potential legislative action that has been brought up multiple times by President Trump is altering or doing away with Section 230 [of the Communications Decency Act], which absolves platforms of legal liability for the content users publish. Lawmakers from both parties have argued it needs to be at least tweaked. Can you explain the importance of Section 230 and what kind of impact changing it could have on some of these platforms?
I completely understand and share people’s real frustrations that the platforms can’t be held accountable for some of this speech, and that they’re hiding behind Section 230. I think it’s hard for people, unless they do this work all the time, to see how much of this stuff is really gray. My fear is, if there were these changes to Section 230, the Internet would become a very, very, very different place. That speech would find other spaces that we wouldn’t actually have access to see. That’s where the stuff would go. We saw this with the German law back in 2017: when they really, really feared fines, the platforms just took down so much. That’s what we would see again, I think. I know people are frustrated by it, but I think there’s an in-between. Forcing them to be much clearer about their policy guidelines, forcing them to be audited. I think there are ways [to address] the kind of speech that we’re really troubled by. It’s a spectrum, and because it’s complex and difficult, people point to [Section 230] and say, ‘Yeah, that!’ I just don’t think it’s the right answer.
Yeah, people want action now, but at the same time there don’t seem to be very many quick solutions. Twitter banned political ads but that move has been criticized because of how it could favor incumbents.
I really applauded that decision, because they essentially said, we don’t have any research about that, and we can’t right now understand the harm that misleading ads are having on our public sphere, and therefore we’re just going to pull the plug entirely while we figure that out. I’d much rather that than historians looking back and saying, when you guys were busy experimenting, you basically led us into a civil war or genocide.
And I’m sorry if you’re not an incumbent, but if you’re telling me that you’re not going to win just because you’re not able to push some shitty Twitter ads, you’re not a good politician. If I’m balancing out the cost-benefit there, I’d rather not have it than have it done badly. If Facebook had its house in order, maybe it would be different. But my frustration is that Facebook is still saying it’s going to have ads when there’s just no way for independent researchers to look into those ads, no real moderation of what those ads are saying, and no real sense of the impact of those ads. So why the hell should they be allowed to make money off that?
This interview has been condensed and edited for clarity.