
In the future we may have to use neural networks to defeat neural networks.

The popularity of neural networks has skyrocketed ever since Google released the source code last month to part of its artificial neural network system, dubbed DeepDream. The artificial intelligence program is what Google's search engine uses to sort and categorize images online. The program learns to do this by sifting through thousands of labeled images such as "starfish," "bird," or "banana," and gradually coming to recognize each distinct thing. The program can also be used to generate images of its own; however, it often gets confused and creates beautiful and sometimes frightening chimeras of slug-dogs, bird-cars, and amoeba-like houses.
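For readers curious what that image generation looks like in practice, here is a minimal DeepDream-style sketch, not Google's released code: take a pretrained image classifier and nudge a photo, step by step, in whatever direction makes one of its internal layers fire harder. The model, layer choice, step size, and file names below are illustrative assumptions, and a real implementation would also normalize the input and work at multiple scales.

```python
# Minimal DeepDream-style sketch (illustrative, not Google's released code):
# amplify whatever features a pretrained classifier already "sees" in an image
# by running gradient ascent on the image itself.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.googlenet(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(512),
    transforms.ToTensor(),
])

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

# Capture activations from an intermediate layer (hypothetical choice: inception4c).
activations = {}
def hook(_module, _inputs, output):
    activations["feat"] = output
model.inception4c.register_forward_hook(hook)

for _ in range(20):                       # a few gradient-ascent steps
    model(img)
    loss = activations["feat"].norm()     # "dream harder": boost this layer's activity
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)
        img.grad.zero_()

transforms.ToPILImage()(img.squeeze(0).detach()).save("dreamed.jpg")
```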

Although this technology is still in its early stages, neural networks are already very effective at categorizing and tracking images. That has software developer Kirk Kaiser wary about the future of privacy, so he ran an experiment to see whether he could throw a wrench into Facebook's face recognition system, DeepFace, by distorting his image using DeepDream.

“Every time somebody takes a photo of you or you upload a photo of yourself, it’s added to the training data set that exists in the ether of who and what you are,” says Kaiser. “The general idea is to corrupt the data set that exists on us and get back a little bit of control.”

Kaiser used DeepDream to alter an image of himself, as many people did last month, and uploaded it to Facebook. He found that even though the image was distorted, DeepFace could still recognize him by his bushy beard: when he uploaded the distorted image, it pointed to his beard and tagged it "Kirk Kaiser." He then took a randomized tiling of his face along with an image of a tree and ran them through DeepDream and another open-source neural network meant to detect age and gender. This generated a tiled image of tree bark and warped faces splashed with neon green and hot pink. DeepFace was confused by the result and recognized around 92 distinct faces in the image, none of which were Kaiser's. He calls his code Deep Graffiti.
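DeepFace itself is not public, so Kaiser's exact setup can't be reproduced here, but the pipeline he describes is easy to sketch: scramble a face photo into random tiles, run a dream-style pass over the result, and then count how many faces a detector finds afterward. In the sketch below, OpenCV's off-the-shelf Haar-cascade face detector stands in for DeepFace purely for illustration, and the file names are hypothetical.

```python
# Rough sketch of the "Deep Graffiti" idea as described above (not Kaiser's actual
# code): shuffle tiles of a face photo, run a dream-style pass over it, then count
# how many faces an off-the-shelf detector finds in the altered image.
import random
import cv2
import numpy as np

def shuffle_tiles(image, tile=64):
    """Cut the image into tile x tile squares and reassemble them in random order."""
    h, w = image.shape[:2]
    h, w = h - h % tile, w - w % tile
    image = image[:h, :w]
    tiles = [image[y:y + tile, x:x + tile]
             for y in range(0, h, tile)
             for x in range(0, w, tile)]
    random.shuffle(tiles)
    per_row = w // tile
    rows = [np.hstack(tiles[i:i + per_row]) for i in range(0, len(tiles), per_row)]
    return np.vstack(rows)

face_img = cv2.imread("face.jpg")
scrambled = shuffle_tiles(face_img)
cv2.imwrite("scrambled.jpg", scrambled)
# ...run a DeepDream-style pass (as in the earlier sketch) over scrambled.jpg...

dreamed = cv2.imread("scrambled_dreamed.jpg")  # hypothetical output filename
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(cv2.cvtColor(dreamed, cv2.COLOR_BGR2GRAY), 1.1, 4)
print(f"Detector now sees {len(faces)} face(s) in the altered image")
```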

[Image: Tagging an image altered with Deep Graffiti to trick Facebook's DeepFace. Credit: Kirk Kaiser]

The final step of Kaiser's experiment was to tag the faces as his own to corrupt DeepFace's data on him. Whether this truly corrupts DeepFace's ability to recognize Kaiser's face, or simply gives it extra data to recognize him even through the noise, is still unknown.

“It’s going to take some more experimentation to get to the point where we can tell whether we’re ruining the data set or not,” says Kaiser. “It also brings up another question of when an image stops being me.”

As machine learning through neural networks advances, it may become harder to control our data and maintain our privacy. As Kaiser's experiment suggests, our best defense might be a good offense: using neural networks to trick other neural networks and fighting fire with fire.