Adobe is teaching artificial intelligence to sniff out Photoshopped images

AI is getting better at recognizing fakes, but forgers are moving fast.


The AI looks for clues that point to misleading edits. Image: Adobe

People have been manipulating photos since the advent of consumer cameras more than a century ago, and skeptics have been trying to sniff out those manipulations and expose frauds for just as long. As editing tools have progressed, though, the methods for spotting sneaky edits have lagged behind, at least outside of areas like law enforcement and image forensics.

Adobe’s arsenal for finding fakes includes several tools that leverage artificial intelligence. Recently, the company announced research conducted with UC Berkeley that’s designed to detect changes made to a face using a Photoshop feature called Face Aware Liquify. There’s still a long way to go before you can point a piece of software at a photo and get a foolproof answer about its authenticity, but projects like this may be essential as we encounter more and more of the convincing but fraudulent videos known as deepfakes.

What kind of manipulation are we talking about?

This isn’t Adobe’s first foray into AI-based image authentication. The company has already published research on detecting more common editing tricks, like splicing images together, removing objects from a photo, or copying one area of an image and pasting it into another spot. All of these are well-known fakery strategies.

Those edits, however, typically come from a human editor who can see the image and the finished product but may not realize the trail of clues they’re leaving behind. For instance, copying and pasting an element from one place to another can disrupt the pattern of digital noise created by a camera’s sensor, as the sketch below illustrates.
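To make that idea concrete, here is a minimal, hypothetical sketch of a noise-consistency check. It is not Adobe’s detector; it simply estimates the noise level in each block of a grayscale image and flags blocks whose noise deviates sharply from the rest of the frame, one possible clue that a region was pasted in from another photo. The function names, block size, and threshold are all illustrative choices.

```python
# Illustrative sketch only -- not Adobe's method. Flags image blocks whose
# sensor-noise level differs sharply from the rest of the frame.
import numpy as np


def noise_map(gray: np.ndarray, block: int = 32) -> np.ndarray:
    """Estimate per-block noise as the std of a simple high-pass residual.

    Assumes `gray` is a 2-D float array (grayscale image) in [0, 1].
    """
    # Crude high-pass filter: subtract a 3x3 box blur from the image.
    pad = np.pad(gray, 1, mode="edge")
    blur = sum(
        pad[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    residual = gray - blur

    rows, cols = gray.shape[0] // block, gray.shape[1] // block
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            tile = residual[r * block:(r + 1) * block, c * block:(c + 1) * block]
            out[r, c] = tile.std()  # noisier tiles -> larger residual spread
    return out


def suspicious_blocks(gray: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Return a boolean grid marking blocks whose noise is a strong outlier."""
    nm = noise_map(gray)
    z = (nm - np.median(nm)) / (nm.std() + 1e-8)
    return np.abs(z) > z_thresh
```

Real forensic tools use far more sophisticated statistics, but the underlying intuition is the same: content pasted in from a different camera or a different exposure rarely carries exactly the same noise fingerprint as its surroundings.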

The new research, however, outlines a technique that’s meant to reverse-engineer edits that relied on AI in the first place.

Liquify your face

Photoshop’s Liquify tool has been around for generations of the software. Liquifying an image lets you push and pull pixels around, and it’s a common tool for retouchers because it makes it easy (for better or worse) to make a specific part of a model’s body look thicker or thinner. Several versions ago, however, Adobe introduced Face Aware Liquify, which uses AI to recognize individual facial features and lets retouchers easily manipulate variables like the size and shape of the eyes, nose, and mouth.

Crank the Face Aware Liquify sliders all the way and you can create some truly nightmarish results, but apply them subtly and you can change a person’s face or expression in a way that’s often undetectable if you haven’t seen the original.

In its research, Adobe showed pairs of images, one altered photo and the original, to both the neural network and human subjects. Humans could identify the modified photo about 53 percent of the time, but the neural network reportedly achieved roughly 99 percent accuracy. Beyond that, the AI could also sometimes reverse the edits to produce an approximation of the original, using clues such as the distortion left behind by the warping.

Burden of proof

While tools like this are useful, detection still has a long way to go before it catches up with creation. Hany Farid, a computer science professor at UC Berkeley, recently told the Washington Post that researchers trying to detect deepfake videos are outnumbered 100 to 1 by those working to make the videos more convincing and harder to sniff out.

And even if the technology catches up, there are still more variables to figure out, not the least of which is getting people to trust AI’s judgment, especially when the content it’s evaluating could be political or polarizing. There’s also the age-old problem with verifying images: certifying that a photo or video hasn’t been digitally altered can lend too much credibility to its message. The photographer or videographer is still actively choosing what information to present and how it’s presented, so it’s still very possible to mislead viewers without any digital manipulation.

Adobe hasn’t released these detection tools to the public, so you can’t try them for yourself just yet, but as the company and others, along with the military, continue to work on this kind of tech, expect to hear more about it. Also expect to see more faked images and videos.

 
