Google made an invisible watermark for AI-generated images

It only works with content generated through Imagen for now.
Photos of a butterfly run through DeepMind’s watermarking tool. DeepMind / Google


AI-generated images are getting increasingly photorealistic, which is going to make spotting deepfakes and other kinds of image-based misinformation even harder. But Google’s DeepMind team thinks it might have a solution: a special watermarking tool called SynthID.

Announced at Google Cloud Next this week, SynthID is a partnership between the Google Cloud and Google DeepMind teams. A beta is already available through Vertex AI, Google Cloud’s generative AI platform. For now, it only works with Imagen, Google’s DALL-E 2-like text-to-image generator, but the company is considering bringing similar technology to other generative AI models available on the web.

According to the announcement blog post from the DeepMind team, SynthID works by embedding a “digital watermark directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification.” It’s their attempt to find “the right balance between imperceptibility and robustness to image manipulations.” A difficult challenge, but an important one.
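DeepMind hasn’t disclosed how the embedding actually works, but a toy example helps make the idea of pixel-level watermarking concrete. The naive least-significant-bit scheme below hides a bit pattern in pixel values in a way the eye can’t see; unlike SynthID, though, it would be wiped out by almost any edit or re-compression, which is exactly the weakness a learned approach is meant to fix.

```python
# Toy illustration only: SynthID's real embedding method is not public.
# This naive least-significant-bit (LSB) scheme hides a bit pattern in
# pixel values, invisible to the eye, but (unlike SynthID) it is wiped
# out by almost any edit or re-compression.
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of each pixel with a watermark bit."""
    flat = image.flatten()
    pattern = np.resize(bits, flat.shape)   # tile the watermark across all pixels
    marked = (flat & 0xFE) | pattern        # clear each LSB, then set it to the bit
    return marked.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the first n_bits watermark bits back out of the pixel LSBs."""
    return (image.flatten() & 1)[:n_bits]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
watermark = rng.integers(0, 2, size=128, dtype=np.uint8)

marked = embed_lsb(img, watermark)
assert np.array_equal(extract_lsb(marked, 128), watermark)
# Each channel value changes by at most 1: imperceptible to a viewer.
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1
```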

As the DeepMind team explains in the announcement, “while generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information—both intentionally or unintentionally.” Having some kind of system in place to help people and platforms identify AI-generated content is going to be crucial to stopping the proliferation of misinformation.

The researchers claim that traditional watermarks—like logos applied over the top of a stock photo—aren’t suitable for AI-generated images because if they’re small, they can be edited out with very little effort, and if they’re big and obvious, they “present aesthetic challenges for creative or commercial purposes.” (In other words, they look really ugly.)

Similarly, while there have been attempts to develop imperceptible watermarks in the past, the DeepMind researchers claim that simple manipulations like resizing the image can be enough to remove them. 

SynthID works using two related deep learning-based AI models: one for watermarking each image and one for identifying watermarks. The two models were trained together on the same “diverse set of images,” and the resulting combined model has been optimized to make the watermark as imperceptible as possible to the human eye while keeping it easily identifiable by the detection model.
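DeepMind hasn’t published SynthID’s architecture or training losses, but jointly trained encoder/decoder pairs like the one it describes have well-known analogues in the watermarking literature. The sketch below is a minimal, illustrative version of that general idea in PyTorch; every layer size, the residual scale, and the loss weighting are assumptions, not SynthID’s actual values.

```python
# Minimal sketch of the general idea: a watermark encoder and a decoder
# trained jointly. SynthID's real architecture and losses are not public,
# so every layer and loss weight here is an illustrative guess.
import torch
import torch.nn as nn

class WatermarkEncoder(nn.Module):
    """Adds a low-amplitude, learned perturbation carrying n_bits to an image."""
    def __init__(self, n_bits: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + n_bits, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, bits):
        # Broadcast the message bits into per-pixel planes, then predict a residual.
        b, _, h, w = image.shape
        planes = bits[:, :, None, None].expand(b, bits.shape[1], h, w)
        residual = self.net(torch.cat([image, planes], dim=1))
        return image + 0.01 * residual  # a small residual keeps the mark invisible

class WatermarkDecoder(nn.Module):
    """Recovers the embedded bits from a (possibly edited) image."""
    def __init__(self, n_bits: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_bits),
        )

    def forward(self, image):
        return self.net(image)  # one logit per watermark bit

encoder, decoder = WatermarkEncoder(), WatermarkDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

images = torch.rand(8, 3, 64, 64)  # stand-in for a "diverse set of images"
bits = torch.randint(0, 2, (8, 64)).float()

marked = encoder(images, bits)
bit_loss = nn.functional.binary_cross_entropy_with_logits(decoder(marked), bits)
visual_loss = nn.functional.mse_loss(marked, images)
# Weighting the two losses trades off recoverability against invisibility.
(bit_loss + 10.0 * visual_loss).backward()
opt.step()
```

Training the two models together is what lets the system balance the competing goals: the visual loss pushes the watermark toward imperceptibility while the bit-recovery loss keeps it detectable.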

[Related: The New York Times is the latest to go to battle against AI scrapers]

Crucially, SynthID is trained to detect the embedded watermarks even after the original image has been edited. Things like cropping, flipping or rotating, adding a filter, changing the brightness, color, or contrast, or using a lossy compression algorithm won’t remove a watermark from an image—or at least, not so much that SynthID can’t still detect it. While there are presumably ways around it with aggressive editing, it should be pretty robust to most common modifications. 
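How would a detector learn to survive those edits? DeepMind doesn’t say, but a standard trick in published watermarking work (the HiDDeN paper, for example) is to apply random edits between the encoder and decoder during training, so the decoder learns to read marks through them. The sketch below, continuing the hypothetical models above, shows what that augmentation step might look like; the specific edits and parameters are illustrative only.

```python
# Hedged sketch: apply a random, watermark-hostile edit to each training
# batch so the decoder learns to read marks through common modifications.
# Whether SynthID actually trains this way is not public.
import random
import torch
import torch.nn.functional as F

def random_edit(image: torch.Tensor) -> torch.Tensor:
    """Apply one of several common edits to a batch of images in [0, 1]."""
    choice = random.choice(["crop", "flip", "brightness", "compress"])
    if choice == "crop":
        _, _, h, w = image.shape
        top, left = h // 8, w // 8
        cropped = image[:, :, top:h - top, left:w - left]
        # Crop away the borders, then scale back up to the original size.
        return F.interpolate(cropped, size=(h, w), mode="bilinear", align_corners=False)
    if choice == "flip":
        return torch.flip(image, dims=[3])  # horizontal mirror
    if choice == "brightness":
        return (image * random.uniform(0.7, 1.3)).clamp(0.0, 1.0)
    return torch.round(image * 32) / 32     # crude stand-in for lossy compression

# During training, the decoder only ever sees edited copies:
#   edited = random_edit(encoder(images, bits))
#   loss = bce(decoder(edited), bits)
```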

As a further guardrail, SynthID reports three confidence levels. If it detects the watermark, you can be fairly confident Imagen was used to create the image. If it doesn’t detect the watermark and the image doesn’t look like it’s been edited beyond recognition, the image is unlikely to have been created by Imagen. And if it finds a possible watermark (or, presumably, areas of an image that resemble a SynthID watermark), it will flag the image as one to treat with caution.
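In code, that three-band readout might reduce to nothing more than two thresholds on a detector score. The cutoffs below are invented for illustration; Google hasn’t published how SynthID’s confidence levels are actually computed.

```python
# Hypothetical sketch of the three-band readout described above. The score
# definition and thresholds are made up, not SynthID's actual values.
def classify_watermark(score: float) -> str:
    """Map a detector confidence score in [0, 1] to one of three levels."""
    if score >= 0.95:
        return "watermark detected: likely generated by Imagen"
    if score <= 0.05:
        return "no watermark detected: unlikely to be from Imagen"
    return "possible watermark: treat with caution"
```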

SynthID isn’t an instant fix for deepfakes, but it does allow ethical creators to watermark their images so they can be identified as AI-generated. Anyone using text-to-image tools to create deliberate misinformation is unlikely to opt in to marking their images as AI-generated, but the tool can at least prevent some AI-generated images from being used out of context.

The DeepMind team aims for SynthID to be part of a “broad suite of approaches” for identifying artificially generated digital content. Even if it proves accurate and effective, metadata, digital signatures, and simple visual inspection are still going to be part of identifying these kinds of images.

Going forward, the team is gathering feedback from users and looking for ways to improve SynthID (it’s still in beta, after all). The team is also exploring integrating it with other Google products and even releasing it to third parties “in the near future.” The end goal is laudable: generative AI is here, so the tools built on it need to empower “people and organizations to responsibly work with AI-generated content.” Otherwise, we’re going to be beset by misinformation.