Facebook’s automated image removal system is flawed, says Oversight Board

The semiautonomous committee of experts warned that Facebook's system could even make moderation problems worse.


In 2019, Meta (née Facebook) CEO Mark Zuckerberg announced the creation of a standalone content oversight board. This board of 23 experts and civic leaders was tasked with reviewing controversial moderation decisions following pressure from critics and users over the company’s complex and seemingly uneven policies. According to Zuckerberg at the time, the decisions from his self-dubbed social media Supreme Court “will be binding, even if I or anyone at Facebook disagrees with it.” Since then, the board has issued a number of rulings on issues including hate speech, misinformation, and nudity. But the group’s latest decision is perhaps its most damning assessment yet of Meta’s internal strategy and its ability to tackle the ongoing moderation problem.

Earlier today, the Oversight Board announced its findings from an appeals process regarding the previous removal of a Colombian political cartoon depicting police brutality. In a statement published online this morning, the board explained why Meta should remove the artwork from Facebook’s Media Matching Service banks, a system that uses AI scanning to identify and remove flagged images that violate the platform’s content policies. The board also argued that the entire current system is deeply flawed.
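Meta hasn’t published the internals of its Media Matching Service, but the general pattern the board describes (flagged images are reduced to compact fingerprints stored in a “bank,” and later uploads that match a fingerprint closely enough are removed automatically) can be sketched in a few lines. The Python snippet below is a minimal, hypothetical illustration built around a toy perceptual hash; the names MediaBank, average_hash, and HAMMING_THRESHOLD are assumptions made for the example, not Meta’s actual code.

```python
# Illustrative sketch only: banned images become compact fingerprints,
# and new uploads are auto-removed when their fingerprint lands close
# enough to one already in the bank. All names and thresholds here are
# hypothetical; Meta has not published its real implementation.

from dataclasses import dataclass, field

HAMMING_THRESHOLD = 5  # max differing bits to still count as a "match" (illustrative)


def average_hash(pixels: list[int]) -> int:
    """Toy perceptual hash: each pixel brighter than the mean becomes a 1 bit."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")


@dataclass
class MediaBank:
    """A bank of fingerprints for images that moderators have flagged."""
    banned_hashes: set[int] = field(default_factory=set)

    def add(self, pixels: list[int]) -> None:
        self.banned_hashes.add(average_hash(pixels))

    def remove(self, pixels: list[int]) -> None:
        self.banned_hashes.discard(average_hash(pixels))

    def should_remove(self, pixels: list[int]) -> bool:
        """True if an upload is 'close enough' to any banned fingerprint."""
        h = average_hash(pixels)
        return any(hamming_distance(h, banned) <= HAMMING_THRESHOLD
                   for banned in self.banned_hashes)


if __name__ == "__main__":
    # One wrong human decision (banning the cartoon) then cascades: every
    # later upload matching the fingerprint is removed automatically.
    cartoon = [10, 200, 30, 220, 15, 240, 25, 210]    # stand-in pixel data
    near_copy = [12, 198, 33, 219, 15, 241, 25, 209]  # e.g. a re-shared copy

    bank = MediaBank()
    bank.add(cartoon)                      # the disputed moderation decision
    print(bank.should_remove(near_copy))   # True: auto-removed at scale
    bank.remove(cartoon)                   # the fix the Oversight Board asked for
    print(bank.should_remove(near_copy))   # False
```

The point of the sketch is the cascade the board objected to: a single addition to the bank silently governs every future upload until someone removes the fingerprint, which in this case didn’t happen until the dispute reached the board.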

[Related: Meta could protect abortion-related DMs, advocates say.]

“Meta was wrong to add this cartoon to its Media Matching Service bank, which led to a mass and disproportionate removal of the image from the platform, including the content posted by the user in this case,” writes the Oversight Board, before cautioning that “Despite 215 users appealing these removals, and 98 percent of those appeals being successful, Meta still did not remove the cartoon from this bank until the case reached the Board.”

The Oversight Board goes on to explain that Facebook’s existing automated content removal systems can amplify incorrect decisions made by human employees, and have already done so. This is especially problematic given the ripple effects of such choices. “The stakes of mistaken additions to such banks are especially high when, as in this case, the content consists of political speech criticizing state actors,” it warns. In its recommendations, the board asked Facebook to publish the error rates of its Media Matching Service banks, broken down by content policy, for better transparency and accountability.

[Related: Meta’s chatbot is repeating users’ prejudice and misinfo.]

Unfortunately, here is where the social media giant’s “Supreme Court” differs from the one in Washington, DC: although the board operates independently, Facebook isn’t under any legal obligation to adhere to its recommendations. Still, making the Oversight Board’s opinions public may at least put additional pressure on Zuckerberg and Meta executives to continue reforming the company’s moderation strategies.

 
