
Facebook’s AI tool for fixing blinks

Facebook used various data sets to train its AI, including some populated with celebrities around the world.

Taking a good picture of a person is surprisingly difficult. Lots of mistakes can ruin the photo, from bad lighting to an unflattering pose. Nothing, however, kneecaps a portrait quite like a poorly timed blink. Facebook Research is working on a method for replacing closed eyes with open ones using an AI-driven tool that strives to go beyond simply copying and pasting new peepers.

The idea of opening closed eyes in a portrait isn’t a new one, but the process typically involves pulling source material directly from another photo and transplanting it onto the blinking face. For instance, Adobe’s Photoshop Elements software (a simplified version of its professional image editor) has a mode built specifically for this purpose. When you use it, the program prompts you to select another photo from the same session (assuming you took more than one) in which the person’s eyes are open. It can then use Adobe’s AI tech, which it calls Sensei, to try to blend the eyes from the other image into the shot with the blink.

It’s a function that works surprisingly well for a quick fix, especially when you consider how many steps it takes to carefully paste and blend in a new set of eyes using the full-fledged version of Photoshop. But there are small details it can’t always get right, like specific lighting conditions or the direction of shadows.


The system isn’t always perfect

Sometimes the AI would misjudge the color of the subject’s eyes (above) or fail to correct for an obstruction like hair (bottom).

“Understanding shadows is completely intuitive,” says Hany Farid, a professor of computer science at Dartmouth College and a photo forensics expert. “I can reason about where a light source is by looking at the shadow.” When a technician copies and pastes a set of eyes from another photo, the result may not account for things like slight changes to shadows, which, as the study indicates, can sometimes leave the final image looking nearly correct but still inexplicably odd. That’s the uncanny valley, as it’s called, that researchers hope to avoid.

A recent paper published by Facebook Research proposes a different kind of solution for replacing closed eyes, one that depends on a deep neural network that can actually construct the missing data using context from all around the image, not just the affected area. Facebook is using tech called a generative adversarial network (GAN) to fill in this data. It’s the same fundamental technology responsible for a recent wave of “deep fake” videos, in which celebrities appear to say and do things they haven’t really done.
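To make the adversarial idea concrete, here is a minimal, illustrative PyTorch sketch of how a GAN pits two networks against each other. The tiny networks, random stand-in "images," and training settings below are assumptions made for the sake of the example, not anything from Facebook's actual system.

```python
# A minimal sketch of adversarial (GAN) training in PyTorch.
# Everything here -- network sizes, data, learning rates -- is a toy assumption.
import torch
import torch.nn as nn

# Toy generator: maps a random latent vector to a small flattened "image" patch.
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 32 * 32 * 3), nn.Tanh(),
)

# Toy discriminator: scores whether a patch looks real or generated.
discriminator = nn.Sequential(
    nn.Linear(32 * 32 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(100):
    real = torch.rand(16, 32 * 32 * 3)   # stand-in for real image patches
    noise = torch.randn(16, 64)
    fake = generator(noise)

    # Discriminator: learn to label real patches 1 and generated patches 0.
    d_loss = bce(discriminator(real), torch.ones(16, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(16, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: learn to produce patches the discriminator labels as real.
    g_loss = bce(discriminator(fake), torch.ones(16, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The back-and-forth is the point: as the discriminator gets better at spotting fakes, the generator is pushed to produce output that looks more and more like real photographs.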

The Exemplar GAN model they used draws data from other images of the same person, but only as reference material, from which it learns what the subject looks like and any identifying marks on their face. It then uses a process called in-painting to generate the information needed to replace closed eyelids with open eyes. This kind of deep learning requires more than a single reference image, which fits nicely with Facebook’s infrastructure, where the company can typically analyze many photos of the same user, often taken in a variety of lighting situations.
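As a rough illustration of what exemplar-based in-painting involves, the sketch below conditions a toy generator on both the masked portrait and a reference photo of the same person, then synthesizes only the masked eye region. The ExemplarInpainter class, its layer sizes, and the mask handling are hypothetical simplifications, not the architecture described in Facebook's paper.

```python
# A minimal sketch of exemplar-conditioned in-painting, assuming a simple
# convolutional generator. Layer sizes, mask handling, and the loss described
# in the comments are illustrative assumptions, not Facebook's actual model.
import torch
import torch.nn as nn

class ExemplarInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        # Input channels: masked portrait (3) + binary mask (1) + exemplar photo (3).
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, masked_portrait, mask, exemplar):
        x = torch.cat([masked_portrait, mask, exemplar], dim=1)
        out = self.net(x)
        # Keep the original pixels outside the mask; synthesize only the eye region.
        return masked_portrait * (1 - mask) + out * mask

# Toy usage with random tensors standing in for real photos.
model = ExemplarInpainter()
portrait = torch.rand(1, 3, 64, 64)      # photo in which the subject blinked
mask = torch.zeros(1, 1, 64, 64)
mask[:, :, 24:40, 16:48] = 1.0           # rough eye region to fill in
exemplar = torch.rand(1, 3, 64, 64)      # another photo of the same person, eyes open
result = model(portrait * (1 - mask), mask, exemplar)
# During training, `result` would be compared against an open-eyed ground truth
# (reconstruction loss) and scored by a discriminator (adversarial loss).
```

The key design choice the sketch tries to capture is that the exemplar is a conditioning input rather than a source of pixels to paste: the network learns the subject's appearance from it and then generates eyes that match the lighting and shadows of the blinking photo.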

Facebook’s initial results are impressive, if imperfect. The researchers are still working to find the best training methods for the algorithms behind the process and to navigate unpredictable variables, like photos in which part of the eye is blocked by hair or glasses.

Still, the company believes that this kind of computing is useful, even beyond fixing photos with blinking subjects. Maybe AI could make us all even better looking in our profile pictures down the road. Even beyond photos, the company is working on similar AI tools that translate music from one style to another.