Google is using AI to help humans and computers communicate through art
A program for people who can’t draw good
Google went big on art this week. The company launched a platform to help people who are terrible at art communicate visually. It also published research about teaching art to another terrible stick-figure drawer: a neural network.
On Tuesday, the company announced AutoDraw, a web-based service aimed at users who lack drawing talent. Essentially, the program lets you use your finger (or mouse, if you’re on a computer) to sketch out basic images like apples and zebras. It then analyzes your pathetic drawing and suggests professionally drawn versions of the same thing. Click the nice drawing you want, and it replaces yours with the better one. It’s like autocorrect, but for drawing.
Nooka Jones, the team lead at Google’s Creative Lab, says that AutoDraw is about helping people express themselves. “A lot of people are fairly bad at drawing, but it shouldn’t limit them from being able to communicate visually,” he says. “What if we could help people sketch out their ideas, or bring their ideas to life, through visual communication, with the idea of machine learning?”
The system’s underlying tech has its roots in a surprising place, according to Dan Motzenbecker, a creative technologist at Google. “It’s a neural network that’s actually originally devised to recognize handwriting,” he says. That handwriting could be Latin script, or Chinese or Japanese characters, like kanji. From there, “it’s not that big of a leap to go to a doodle.”
As people make their line drawings, the network tries to figure out what they depict. “The same way that might work for a letter of the alphabet, or a Chinese character,” Motzenbecker says, “we can use that for a doodle of a toaster.”
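To make the analogy concrete, here is a toy sketch of the idea, assuming nothing about Google’s actual model: a doodle is encoded as a sequence of pen offsets, the same kind of input a handwriting recognizer consumes, and a crude nearest-template rule stands in for the neural network. The template labels and feature choices are invented for illustration.

```python
import math

# Hypothetical "training" doodles: one stroke sequence of (dx, dy) pen offsets per label.
TEMPLATES = {
    "circle-ish (apple?)": [(1, 0), (0, 1), (-1, 0), (0, -1)],
    "zigzag (zebra stripes?)": [(1, 1), (1, -1), (1, 1), (1, -1)],
}

def features(strokes):
    """Crude shape descriptor: net displacement plus total pen travel."""
    net_dx = sum(dx for dx, _ in strokes)
    net_dy = sum(dy for _, dy in strokes)
    length = sum(math.hypot(dx, dy) for dx, dy in strokes)
    return (net_dx, net_dy, length)

def classify(strokes):
    """Return the template label whose features are closest to the doodle's."""
    f = features(strokes)
    return min(
        TEMPLATES,
        key=lambda label: sum(
            (a - b) ** 2 for a, b in zip(f, features(TEMPLATES[label]))
        ),
    )

doodle = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # a rough closed loop
print(classify(doodle))  # matches the circle-ish template
```

A real recognizer replaces the hand-written `features` and nearest-template steps with a learned network, but the pipeline shape is the same: stroke sequence in, label out, and the label is what drives AutoDraw’s suggestions.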
Neural networks get better by learning from data, but when asked whether, and how, the system is learning from our drawings, Jones says: “In theory, yes; we don’t quite disclose what we actually use as input back into the algorithm.”
Just like there are different ways to draw a letter, there are multiple representations of an elephant or a horse. “The more variety it sees,” Motzenbecker says, “the more adaptable it is to seeing novel ways of sketching things.” Users are also confirming the AI’s guesses when selecting a new drawing, which could help to guide its future decisions.
“One of the things that you see across the entire industry, and Google has recognized the potential of this much earlier than most other technology companies,” says Shuman Ghosemajumder, the chief technology officer at Shape Security in Mountain View, Calif., and a former Google employee, “is the use of machine learning to be able to do things that were previously thought to require direct human intervention.” And machine learning models need data.
“In this case, if you’ve got an app that millions of people potentially will use to be able to attempt to draw different figures,” he adds, “even if your technology isn’t perfect right now, you are creating this amazing training set of input data that can be used to improve these models over time.”
While AutoDraw is about helping people turn their doodles into more recognizable images, the search giant is also interested in how computers draw. On Thursday, Google Research published a blog post and paper about how they had schooled a recurrent neural network to draw items like cats and pigs.
The research team’s goal was to train “a machine to draw and generalize abstract concepts in a manner similar to humans,” according to a blog post written by David Ha, a Google Brain Resident. The system works by taking human input—say, a drawing of a cat or just the word “cat,” according to a Google spokesperson—and then making its own drawing.
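The drawings themselves reach the model as sequences of pen movements rather than pixels. A minimal sketch of that data format, based on the stroke representation described in the accompanying paper (the helper name and the sample doodle are illustrative): each point is a pen offset plus pen-state flags, with a final flag marking the end of the drawing.

```python
def to_stroke5(stroke3):
    """Convert (dx, dy, pen_lifted) points into the 5-element form
    (dx, dy, p_down, p_up, p_end) that a sequence model can train on."""
    out = []
    for dx, dy, lifted in stroke3:
        # p_down: pen is touching paper; p_up: pen lifts after this point.
        out.append([dx, dy, 1 - lifted, lifted, 0])
    out.append([0, 0, 0, 0, 1])  # p_end: the drawing is finished
    return out

cat_doodle = [(5, 0, 0), (0, 5, 1), (3, 3, 0)]  # made-up three-point doodle
for point in to_stroke5(cat_doodle):
    print(point)
```

Feeding such sequences to a recurrent network, point by point, is what lets it learn how strokes of a “cat” tend to unfold over time, rather than what a finished cat picture looks like.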
The results are fascinating and bizarre. In one example, the researchers presented the system with a sketch of a three-eyed cat. The computer drew its own cat, but this one had the right number of eyes, “suggesting that our model has learned that cats usually only have two eyes.”
In another, when presented with a picture of a toothbrush, the Google neural network’s cat model made a Picasso-like feline that still had a toothbrush-inspired feel to it.
A Google spokesperson confirmed that different neural networks power AutoDraw and the new research, but the similarities are striking: in both cases, the system draws on machine learning to take a piece of input and then either suggest a professionally drawn image or create something entirely new on its own.