The internet is flooded with photos—of your brunch, of your cat, of your estranged elementary school friend’s cousin’s wedding. Some 1.8 billion photos are uploaded daily, and most of them are objectively pretty terrible. Now a team of computer scientists from Princeton University and software company Adobe has created a program to make those photos just a little better by identifying and eliminating distracting elements, according to a paper presented at this year’s Computer Vision and Pattern Recognition Conference in Boston.

Lots of things can make a “casual” photo terrible—bad lighting, awkward angles. But one that’s often overlooked is the presence of distractors: objects that pull the viewer’s attention away from the image’s main subject. Photo editing software allows photographers to remove these elements from their images, of course, but for most “this effort is neither possible nor warranted,” the researchers write. They wanted to create a program that removes distracting elements from a photo with a single tap.

Photos with distractions, left, and without, right.

Before they could create the program, the researchers needed a computer model to accurately identify distracting elements that should be eliminated. They had volunteers, found through Amazon’s Mechanical Turk system and through the photo editing app Fixel, sift through thousands of photos to annotate or edit out the distractors. “Please mark the areas you might point out to a professional photo editor to remove or tone-down to improve the image,” reads the prompt offered to Mechanical Turk participants. When the researchers processed the data, they found that distractors had a number of things in common—they’re usually objects, for example, and tend to appear near the border of the photos.
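The border finding lends itself to a simple measure. As a hypothetical illustration (not the researchers’ actual model), a detector could score how close an object’s bounding box sits to the frame edge:

```python
def border_proximity(box, img_w, img_h):
    """Normalized distance from a bounding box's center to the
    nearest image edge: 0.0 at the edge, 0.5 at dead center.
    box = (x, y, w, h) in pixels."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    dx = min(cx, img_w - cx) / img_w
    dy = min(cy, img_h - cy) / img_h
    return min(dx, dy)

# An object hugging the frame edge scores low, so a program could
# weight it as more likely to be a distractor.
print(border_proximity((0, 400, 100, 100), 1000, 1000))    # near left edge -> 0.05
print(border_proximity((450, 450, 100, 100), 1000, 1000))  # centered -> 0.5
```

A low score alone wouldn’t be decisive; the researchers found distractors share several properties, so any real system would combine cues like this one.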

The researchers then used that information to create algorithms that can pick out distracting elements in a photo. The detectors each focus on a particular element of the photo and, depending on factors like its blurriness and position in the photo, determine whether or not to delete it. “We have a specific car detector in the code because people often want to eliminate cars that wander into the frame,” Ohad Fried, one of the study authors, said in a press release. “We have a face detector. If the face is large and in the center of the photo, we probably don’t want to remove it. But if it is coming in from the side, it might be a photobomb.”
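Fried’s face example can be sketched as a toy decision rule. The thresholds here are invented for illustration; the actual detectors are tuned on the annotated data:

```python
def is_photobomb(face_box, img_w, img_h,
                 min_area_frac=0.05, center_frac=0.5):
    """Toy rule: keep a face that is large and near the center;
    flag a small face entering from the side as a likely photobomb.
    face_box = (x, y, w, h) in pixels; thresholds are illustrative."""
    x, y, w, h = face_box
    area_frac = (w * h) / (img_w * img_h)
    cx = x + w / 2
    # Is the face's center inside the middle half of the frame?
    centered = abs(cx - img_w / 2) < (img_w * center_frac / 2)
    return not (area_frac >= min_area_frac and centered)

# Large, centered face: keep it.
print(is_photobomb((350, 200, 300, 400), 1000, 800))  # False
# Small face at the right edge: likely a photobomb.
print(is_photobomb((920, 300, 70, 90), 1000, 800))    # True
```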

When they tested the algorithm on photo samples, the software performed well, though it sometimes deleted important elements instead. There’s room for improvement.

The researchers hope that, after further development, their algorithm may be incorporated into photo editing software.

Distraction-predicting software in action; photos on the right are the ones with distractions removed.
