Sharing AI-generated images on Facebook might get harder… eventually

And you'll soon have to fess up to posting 'synthetic' images on Meta's platforms.
[Image: Meta hopes to address AI images with a bunch of help from other companies, and you. Credit: Deposit Photos]

That one aunt of yours (you know the one) may finally think twice before forwarding Facebook posts of “lost” photos of hipster Einstein and a fashion-forward Pope Francis. On Tuesday, Meta announced that “in the coming months,” it will begin attempting to flag the AI-generated images flooding Facebook, Instagram, and Threads, including those made with programs from major companies like Microsoft, OpenAI, Midjourney, and Google.

But to tackle the rampant generative AI abuse that experts are calling “the world’s biggest short-term threat,” Meta needs cooperation from every major AI company, self-reporting from its billions of users, and technologies that have yet to be released.

Nick Clegg, Meta’s President of Global Affairs, explained in his February 6 post that the policy and tech rollouts are expected to debut ahead of pivotal election seasons around the world.

“During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve,” Clegg says.

[Related: Why an AI image of Pope Francis in a fly jacket stirred up the internet.]

Meta’s nebulous roadmap centers on working with “other companies in [its] industry” to develop and implement common technical standards for identifying AI imagery. Examples might include digital signature algorithms and cryptographic information “manifests,” as suggested by the Coalition for Content Provenance and Authenticity (C2PA) and the International Press Telecommunications Council (IPTC). Once AI companies begin embedding these watermarks, Meta will label content accordingly, using “classifiers” to help automatically detect AI-generated content.
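
To make the idea concrete: the IPTC standard already defines a “digital source type” value, trainedAlgorithmicMedia, that generators can embed in an image’s XMP metadata to declare it machine-made. The sketch below (plain Python, standard library only, and emphatically not Meta’s actual detection pipeline) scans a file’s raw bytes for that marker; a real verifier would parse the XMP packet or C2PA manifest properly and validate its cryptographic signature.

```python
# Minimal sketch: look for the IPTC "trainedAlgorithmicMedia" digital-source-type
# URI inside an image file's embedded XMP metadata. This is an illustrative
# heuristic, not Meta's classifier; production verification parses the metadata
# and checks the C2PA manifest's cryptographic signature.

# The controlled-vocabulary URI IPTC defines for AI-generated media.
AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_labeled(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI-media marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        verdict = "labeled AI-generated" if looks_ai_labeled(image_path) else "no marker found"
        print(f"{image_path}: {verdict}")
```

Note the obvious weakness, which Meta itself concedes below: the check only works if the marker is present, and anyone can strip metadata out of a file before uploading it.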

“If AI companies begin using these watermarks” might be more accurate. While the company’s own Meta AI feature already labels its content with an “Imagined with AI” watermark, such easy identifiers aren’t currently uniform across AI programs from Google, OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and others.

This, of course, will do little to deter bad actors’ use of third-party programs, often to extremely distasteful effect. Last month, for example, AI-generated pornographic images involving Taylor Swift were shared tens of millions of times across social media.

Meta made clear in Tuesday’s post that these safeguards will be limited to static images. But according to Clegg, anyone concerned about that gap ahead of a high-stakes US presidential election should take it up with other AI companies, not Meta. Although some companies are beginning to include identifiers in their image generators, “they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies,” he writes.

While “the industry works towards this capability,” Meta appears ready to shift the onus onto its users. Another forthcoming feature will allow people to disclose when they upload AI-generated video or audio, something Clegg says may eventually become a requirement backed by “penalties.”

For what it’s worth, Meta at least admitted it’s currently impossible to flag all AI-generated content, and that there remain “ways that people can strip out invisible markers.” To address these issues, Meta hopes to fight AI with AI. Although AI technology has long aided Meta’s policy enforcement, its use of generative AI for this purpose “has been limited,” says Clegg. “But we’re optimistic that generative AI could help us take down harmful content faster and more accurately.”

“While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Clegg continued.