Twitter’s efforts to tackle misleading tweets just made them thrive elsewhere

Misinformation is like a weed that just keeps sprouting up
Misinformation can spread from one social media platform to another. Max Pixel


Misinformation in the modern age is like the many-headed hydra of Greek mythology. Each time one head is cut off on a social media platform, the message can live on as screenshots on other platforms. 

That’s the upshot of a new study from researchers at NYU’s Center for Social Media and Politics, published on Tuesday in Harvard Kennedy School’s Misinformation Review.  

“A lot of the news coverage or popular discourse is ‘what has Facebook done,’ or ‘what has Twitter done.’ And those discussions tend to happen completely unrelated to each other,” says Zeve Sanderson, executive director at NYU’s Center for Social Media and Politics and an author on the paper. “We were interested in understanding when a single mainstream platform bans a particular message, or intervenes on a particular message, what happens to that message not only on that platform but other platforms.” 

To study this topic, the team traced the spread of tweets posted by former president Donald Trump between November 1, 2020 and January 8, 2021 that Twitter flagged for containing election-related misinformation.

Twitter announced during that time that it had applied a “soft intervention” to around 300,000 election-related tweets, marking them as disputed and potentially misleading without removing them or blocking them from being shared. About 456 tweets received a “hard intervention”: they were covered with a warning message, and other users could not retweet, reply to, or like them. Twitter ultimately suspended Trump’s account in January 2021. 

[Related: Our brains love spreading lies all over the internet]

Sanderson and his colleagues found that when Twitter intervened on a message, either by adding a warning label or by blocking engagement (preventing other users from retweeting or commenting on it), the intervention did nothing to stop the message’s broader spread. People could still link to the tweet, copy and paste its text, or take a screenshot of it and post it on other platforms such as Reddit, Instagram, and Facebook, where it could enjoy a viral second life. 

“When platforms are only acting in isolation but people are using multiple platforms at the same time, your content moderation is just doomed to fail given the speed that misinformation moves at,” says Megan Brown, a research scientist at NYU’s Center for Social Media and Politics, and an author on the study. “All of these pieces are very interconnecting—fixing a given thing on each platform in isolation isn’t ever going to totally fix the problem.” 

Researchers say they need more data access

Sanderson stresses that this isn’t to say that hard interventions don’t work; they actually work well at controlling the spread of misinformation within the platform itself. “We see this in the plateau in retweets and engagement,” he says. “That’s also a costly intervention. They completely limit that type of speech. So, that’s in their back pocket but I understand why they would not want to use it all that frequently.” 

Further complicating the situation, Sanderson adds, is that during the 2020 election, loud voices who would usually be considered official sources were also spreading misinformation about the election. “What we’ve seen recently is political elites, people with millions and millions of followers, tend to be the main vectors of spreading this misinformation.”  

“As we look towards 2022 and 2024 with the new elections and the pandemic, it’s critical for social media platforms and public officials to consider broad content moderation policies at the ecosystem level, not just at the individual platform level, if they’re serious about effectively counteracting misinformation,” Sanderson says. More generally, he notes, we should think harder about how the diversity of online information sources, including search engines, work together to inform people, or to misinform them.  

“One place where I am sympathetic [to platforms] is on this epistemological question of authoritative information,” he says. “Information might change over time, and yesterday’s misinformation might become today’s information.” 

[Related: Social media really is making us more morally outraged]

Since the NYU study was observational, the team cannot say whether there is a causal relationship between these different interventions and how fast and far the misinformation spreads. “There are going to be a lot of interventions that have intuitive appeal, but aren’t necessarily effective,” Sanderson says. 

To gauge how effective these interventions actually are, Brown and Sanderson argue that independent researchers need to be able to access data from these platforms, which has been complicated by Facebook’s recent move to block NYU researchers from studying political advertisements on its site. “This kind of research can’t just happen inside Facebook and never see the light of day,” Brown says. “Being able to test it on the outside of the platform is really important for this work.” 

Sanderson and Brown emphasize the need for platforms to collaborate with each other and with researchers to find solutions that work and that can build user trust. That starts with expanding the kind of privacy-protecting research on social media data underway at NYU and elsewhere. 

Sanderson points to a recent transparency report from Facebook as evidence that some social media platforms are still unwilling to give researchers full, direct access to the information they need. “It’s a really bizarre definition of transparency in that only they have access to the underlying data and choose how to analyze it,” Sanderson says. “It’s just not the way that transparency works elsewhere and it’s not the way transparency should work here.”