How social media companies can benefit from election misinformation

Dangerous misinformation can lead to lots of attention—which isn't necessarily a bad thing for a social media company's bottom line.
Despite claims otherwise, social media companies are giving a pass to dangerous misinformation, according to reports. Pexels


A new, in-depth study from Bloomberg published on Thursday analyzed thousands of social media posts from some of America’s most controversial politicians, and the results are stark. Compared with other topics, candidates’ baseless claims of US election fraud are an engagement boon not only for the candidates themselves, but also for the social media companies that allow the content.

After reviewing all Facebook and Twitter content posted by every Republican running this year for Senate, House, governor, attorney general, and secretary of state, the report found that the two platforms, despite previously flagging election falsehoods, “did not have any context added to the misleading posts” at the time of analysis. These posts were identified by trawling for keywords and phrases like “rigged election” and “illegitimate president,” and they vastly outperformed candidates’ content on other subjects, such as border security and the economy.

[Related: It’s possible to inoculate yourself against misinformation.]

“Nearly 400 election-denying posts from Republican candidates on Facebook collected at least 421,300 total likes, shares and comments across the platform, and reached as many as 120.4 million people,” writes Bloomberg, citing the Facebook-owned analysis tool CrowdTangle. Bloomberg adds that “On Twitter, 526 tweets promoting the Big Lie [a popular nickname for the 2020 stolen election conspiracy] carried at least 401,200 shares on the platform.”

As another example, just six Twitter posts from Rep. Marjorie Taylor Greene, the Trump loyalist from Georgia, garnered over 163,000 likes, retweets, and replies. Rep. Greene’s personal Twitter account was permanently banned in January 2022 after she repeatedly posted inflammatory and false content, although her official political account remains online.

[Related: The complex realm of misinformation intervention.]

The outsized online attention doesn’t only benefit those perpetuating these lies; it also benefits the platforms themselves. Social media companies like Meta (which owns Facebook) and Twitter rely on user engagement as their chief source of profit. The longer people spend on their platforms interacting with posts, each other, and advertising, the more personal data can be harvested and sold to third-party companies for targeted marketing and other purposes. Whether intentional or not, the financial benefits are too lucrative to ignore. It’s a toxic loop—one that erodes public health and institutional trust.

For their part, companies like Facebook rebut these claims, with a Meta spokesperson telling Bloomberg that “Meta has invested a huge amount to help protect elections and prevent voter interference, and we have clear policies about the kind of content we’ll remove, such as misinformation about who can vote and when, calls for violence related to voting, as well as ads that encourage people not to vote or question the legitimacy of the upcoming election.” Twitter, meanwhile, has seen hate speech soar following Elon Musk’s recent $44 billion purchase of the platform, even as its content enforcement staff has been sharply curtailed.