The acts of defiance were rare and, for a surveillance state, rather novel. The way the world found out about them was well-worn. When Chinese protesters took to the streets to shout “Down with Xi Jinping” and denounce China’s harsh Covid lockdown measures, activists and their supporters took to Twitter to share news and clips of the disobedience.
To many, it seemed like a swan song, given Twitter’s uncertain fate under its new management.
“To everyone saying #Twitter is over… Maybe your last act on this godforsaken platform could be to amplify the voices of protesters in China calling for freedom?” Vickie Wang, a Taiwanese writer and comedian, wrote as she retweeted clips of protesters and shared Zoom meeting information where Chinese protesters could organize.
Not long afterwards, Twitter suspended Wang’s account, as well as those of others participating in, and supporting, the protests. That included Ai-Men Lau — a research analyst at the Taiwan-based nonprofit Doublethink Lab — who amplified clips and quotes from protesters and the hashtag they were using to discuss it: #A4Revolution.
Messages to Wang, Lau, and other activists cited fictional violations of the company’s policies on “platform manipulation and spam.”
It’s unclear how Twitter came to believe that their accounts were engaged in spamming, but former staffers for the company tell Rolling Stone that Twitter’s elimination of nearly all of its trust and safety staff and the company’s increasing reliance on automated content moderation will make incidents like this more common. The changes, they say, will make it easier for authoritarian governments like China’s to silence critical voices and make it harder to police abusive content from sophisticated adversaries.
The company uses a number of automated systems to police malicious spam and other abuses. And regarding these shutdowns, “it does look like those were system errors,” one former Twitter staffer tells Rolling Stone.
“Most of those [automations] used some heuristics based around both technical information on accounts and their origins as well as English/Mandarin keywords. All of the Chinese speakers and those who’d worked specifically on [Chinese influence operation] type activities are now gone from the company,” the former staffer says.
In a recent interview with Reuters, Ella Irwin, Twitter’s remaining vice president for trust and safety, said the company’s new owner Elon Musk has encouraged the company to rely more on automation and less on human review when making content moderation decisions.
Veterans of the now-defunct teams which conducted those manual reviews are skeptical of that approach. “Automations are only as good as the human in the loop,” another former Twitter staffer explained. “If you don’t have a human that is helping update and train it, it’s always behind.”
Twitter trust and safety veterans also cited concerns that the lack of staffing and reliance on automation could make it easier for authoritarian countries and sophisticated actors to game the company’s legitimate content reporting systems to boot critics off the platform.
Like a number of social media platforms, Twitter allows users to report posts in violation of its rules on abusive content, spam, and copyright violations. Those reports typically relied on staffers to assess their validity and decide whether to take action against the reported account. But some former employees cited concerns about whether Twitter had the staff to ensure those reporting systems weren’t exploited with bad-faith reports targeted at dissident voices, a practice sometimes called “coordinated adversarial account reporting.”
The concerns are far from theoretical. Even when fully staffed, Twitter’s trust and safety teams still faced determined and often clever networks attempting to game its reporting tools.
In one case, a network of Islamist conservative trolls in Bangladesh succeeded in getting 20 fan accounts affiliated with K-POP group BTS temporarily suspended from Twitter by sending bogus DMCA reports claiming to represent the group, one former staffer told Rolling Stone. Twitter referenced the incident vaguely in its last transparency report before Musk’s takeover, noting that an unspecified network targeted a “popular K-POP group” with the false copyright claims. The network also targeted a former spokesperson for India’s Hindu nationalist BJP political party with similarly bogus claims after her controversial comments about the Prophet Muhammad, according to the source.
Aside from bogus reports, authoritarian governments have another tool they can use to silence critics on the platform: bot-fueled harassment campaigns.
One former Twitter staffer pointed to a specific example: the harassment of Vicky Xu, an Australian-Chinese journalist and researcher at the Australian Strategic Policy Institute (ASPI). For over a year now, Xu has faced a massive, coordinated campaign of threats and harassment over her work critical of the Chinese government and its repression in China’s mostly Muslim province of Xinjiang.
Waves of spam accounts tweeted death threats, porn clips, and propaganda cartoons at Xu for months on end, effectively silencing her on the platform. “That was really damaging. I had to take a break from Twitter for a long time. It made my work a lot harder and it made interacting with other people on Twitter a bit impossible,” Xu says.
“Even in the recent couple of weeks, the trolling has continued,” Xu said. ASPI, the research institute where she works, says that the trolls have stepped up their targeting in recent weeks and appear to belong to “Spamouflage Dragon,” a network of pro-China trolls linked to the Chinese government.
Former Twitter staffers tell Rolling Stone that an internal investigation into the harassment of Xu and other prominent women critical of China echoed that conclusion. Company researchers determined that the campaign was a mixture of harassment from authentic nationalist trolls and coordinated inauthentic behavior from accounts with technical links to previously suspended networks of state-linked, pro-Chinese trolls.
The investigation also concluded that automated systems in place to counter China influence operations needed to be updated in order to prevent similar campaigns. Twitter had intended to update those systems with lessons learned from the harassment campaign and publicly disclose its findings prior to Musk’s takeover of the company. But it’s unclear if either will take place now.
In the meantime, pro-China spam networks have become more aggressive. As protesters took to the streets of Beijing in rare displays of open protest against the Communist Party, trolls flooded hashtags used to share news and commentary on the protests with pornographic content and other irrelevant spam, rendering them nearly useless.