
If you’re someone who’s “chronically online,” then chances are good that you’ve encountered content that is false or misleading. This is known as “misinformation,” and according to the non-profit organization KFF, 78 percent of US adults either believed or were unsure about at least one false statement about the COVID-19 pandemic or the vaccines.

The online misinformation ecosystem is complicated, which makes actually improving the situation challenging. Social, psychological, and technological factors all shape how fake news and other misleading content spread across the internet.

Here’s one approach: adding labels about the credibility of a source, a method that social media companies and third-party organizations have been using to combat misinformation, although doing so has been contentious. For example, a study from MIT showed that labeling some articles as false led users to give more credibility to unlabeled articles (some of which might not have been checked or verified). The scientists who conducted the study termed this the “implied truth effect.”

But new research in Science Advances found that, on average, these labels have limited effectiveness in changing where people get information and in reducing their misperceptions. However, for a small subset of individuals who consume a lot of low-quality news, the labels do appear to steer them toward adding more high-quality news into the mix.

Researchers from NYU and Princeton University sought to study whether a browser extension that labeled the reliability of news sources would affect viewing patterns, dissuade people from visiting low-quality sites, and change their outlook on issues like trust in media, political cynicism, and common misconceptions.

[Related: How older adults can learn to effectively spot fake news]

To conduct the research, they gathered around 3,000 participants from YouGov who represented the US population across demographics like age, gender, and race, and randomly assigned half of the group to receive the NewsGuard extension. The extension displays a colored shield icon next to site URLs in the browser, as well as in Google search results and in Facebook and Twitter newsfeeds. (The NewsGuard Chrome extension is available to the general public.)

Sites get a green shield when they generally maintain “basic standards of accuracy.” Green sites include Reuters, the AP, and even Fox News. Sites that users should “read with caution” get a red shield because they tend to fall short in areas like accuracy and separating fact from opinion; these include The Epoch Times and Daily Kos. Satire sites, like The Onion, get a gold shield. Sites built largely on unvetted or user-generated posts, like YouTube, Reddit, and Wikipedia, get a gray shield.

The other half of the participants did not receive any guidance or information about the news sources they visited online. 

The study ran in 2020, and the misinformation that participants encountered was mainly about COVID-19 and the Black Lives Matter movement. Researchers surveyed all participants two weeks before asking the treatment group to install NewsGuard for three to four weeks, then surveyed everyone again two weeks after that period.

“As you can see with Elon Musk buying Twitter and this whole debate around freedom of speech online, there’s definitely this fine line between autonomy versus giving people the right information,” says Kevin Aslett, a postdoctoral fellow at NYU’s Center for Social Media and Politics and the first author on the Science Advances paper. “NewsGuard walks this tightrope by not telling people this is false. It kinda gives them subtle information saying: ‘Hey, this is a low-quality news source that doesn’t seem to be reliable for these reasons.’ Previous work has found that if we give people this source information, they’re less likely to believe misinformation when they see it.” 

[Related: Pending any plot twists, Elon Musk will soon own Twitter]

But for the majority of users, these labels didn’t alter online behavior in measurable ways. Overall, the team didn’t observe a significant effect of NewsGuard on the average internet user’s news diet or on any of the “indicators” that misinformation affects, like polarization, political cynicism, trust in media, and common misperceptions, Aslett says. “The possible reason for that is people just don’t view a lot of low-quality news,” he says. “So, about 65 percent of our sample didn’t view anything unreliable over the course of the treatment period.”

What they did find, however, was a notable effect on the individuals who consumed the most low-quality news. “When you do look at the 35 percent who do view unreliable news, I would say that a small percent of that mostly relies on low-quality news, and I think that group of individuals is where we saw the main effects,” Aslett says. “It wasn’t drastic, like they stopped viewing unreliable news completely, but there was definitely a drop in the amount of unreliable news they were viewing.”

[Related: Twitter’s efforts to tackle misleading tweets just made them thrive elsewhere]

Additionally, the drop was more significant in the high-misinformation-consumption group than among individuals who viewed only a couple of pieces of unreliable news a week or so. This finding is in line with previous studies showing that fake news is shared far less often than expected and that, more often than not, a small handful of people are responsible for spreading most of the misinformation online.

“One takeaway from this study is that these interventions are advertised to the average internet user, and probably, [to] internet users that view the most reliable news,” Aslett notes. “But it seems to only have a positive effect on those who are consuming the most misinformation that are probably not downloading these web extensions or turning these web extensions on.” 

“Maybe we need to change who we’re targeting these interventions to and try to find a way to advertise this to these groups of individuals who are viewing a lot of misinformation,” he adds.