Since 1989, Cops has famously aired footage of suspected criminals, many with their faces blurred to protect their privacy. Blurred or pixelated faces have since become standard fare for concealing the identity of individuals who prefer not to be recognized in the media.
YouTube got in the game a few years ago, offering a facial blurring tool to help protect protestors against retribution from law enforcement or employers. But machine learning researchers at Cornell Tech and the University of Texas at Austin have developed software that makes it possible for users to recognize a person’s concealed face in photographs or videos.
The researchers used this software to defeat three different privacy tools, Wired reports, including both blurring and pixelation. By training the program to recognize faces, they could match distorted images to intact ones. They are careful to emphasize that the technique does not reconstruct the original image from a distorted one; it only identifies whose face it likely is.
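The core idea can be illustrated with a toy sketch. This is not the researchers' code: their work used neural networks on real photographs, while the example below stands in a simple nearest-neighbor matcher and random grayscale patterns for faces. The point it demonstrates is the same, though: a matcher trained on obfuscated examples can still identify which known face a pixelated image corresponds to, without ever reconstructing the original.

```python
# Toy illustration of identification-under-obfuscation (assumed setup,
# not the researchers' actual method): pixelate known faces, then match
# a new pixelated photo against that gallery by nearest neighbor.
import random

def pixelate(img, block=4):
    """Average each block x block tile -- the standard mosaic effect."""
    n = len(img)
    out = [[0.0] * n for _ in range(n)]
    for bi in range(0, n, block):
        for bj in range(0, n, block):
            tile = [img[i][j]
                    for i in range(bi, min(bi + block, n))
                    for j in range(bj, min(bj + block, n))]
            avg = sum(tile) / len(tile)
            for i in range(bi, min(bi + block, n)):
                for j in range(bj, min(bj + block, n)):
                    out[i][j] = avg
    return out

def dist(a, b):
    """Sum of squared pixel differences between two images."""
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb))

random.seed(0)
N = 16
# Three synthetic "identities" (random patterns standing in for faces).
identities = {name: [[random.random() for _ in range(N)] for _ in range(N)]
              for name in ("alice", "bob", "carol")}

# "Training": store the pixelated version of each known face.
gallery = {name: pixelate(img) for name, img in identities.items()}

# A new, slightly noisy, pixelated photo of bob is still matched correctly,
# even though the original image is never recovered.
noisy_bob = [[v + random.gauss(0, 0.05) for v in row]
             for row in identities["bob"]]
probe = pixelate(noisy_bob)
guess = min(gallery, key=lambda name: dist(gallery[name], probe))
print(guess)  # -> bob
```

The real attack replaces this nearest-neighbor step with a neural network, but the principle is identical: pixelation discards detail a human needs, not the statistical signal a trained model needs.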
And even though the researchers employed sophisticated machine-learning techniques to train the software to identify faces, the technology they used is available to the average person. According to the researchers, their results call into question the robustness of existing privacy tools.
“The most surprising thing was how the simplest thing we tried worked so well,” Vitaly Shmatikov, one of the Cornell Tech researchers, told Popular Science.
Richard McPherson, Shmatikov’s student who collaborated on the research, emphasized how rudimentary some of the neural nets they used were. “One was almost a tutorial, the first one you download and play with when you’re learning neural nets,” he told Popular Science.
The researchers hope their work will show internet users how important it is to keep up with the rapid pace of privacy threats. “The balance is shifting,” says Shmatikov, “and manufacturers of privacy technology really need to take this into account.”