A 3D model of a mushroom and a 3D-printed version of that model, both created by A.I. Courtesy of Joel Lehman

Can computers make art? That’s one of the questions animating the field of computational creativity, which seeks to design artificial intelligence that can replicate human creativity.

We wrote recently about a Google effort to create algorithms that make original music. But what if artificial intelligence could design and make 3D objects you could actually hold in your hand? That was the challenge that Joel Lehman, an assistant professor at the IT University of Copenhagen, set out to tackle.

Lehman wondered if he could somehow leverage the remarkable image-recognition power of deep neural networks (DNNs) to create new artifacts without human input. The key, he felt, would be to combine a DNN with an evolutionary algorithm, a process that mimics natural evolution through mechanisms such as selection, reproduction, and mutation. He and a colleague teamed with the University of Wyoming’s Evolving Artificial Intelligence Lab to design an A.I. system that could sculpt.

They call it creative object generation, and here’s how it works: The evolutionary algorithm generates a random blueprint from which it models a 3D image. It invariably resembles a misshapen blob of clay. The algorithm then passes a few snapshots of the blob over to the deep neural network (because the DNN can only comprehend 2D images), and basically asks, “What do you think of this?” The DNN compares the snapshots to the images in its vast database, decides if the object resembles anything it’s familiar with, and gives the algorithm some feedback. At first, it’s pretty harsh. Something like: “This looks .001% like a jellyfish.” Most humans would probably drop ceramics at this point, but the algorithm soldiers on.
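To make that scoring step concrete, here is a minimal Python sketch of how such a fitness function could be wired up. Everything named here is a hypothetical stand-in: render_snapshots and dnn_confidence are placeholders for the real renderer and the pretrained image-recognition network, and only the overall shape of the loop comes from Lehman's description.

```python
import numpy as np

def render_snapshots(blueprint, n_views=3):
    # Stand-in renderer: a real system would rasterize the 3D shape
    # encoded by `blueprint` from several camera angles. Here we just
    # derive pseudo-images from the blueprint so the sketch runs end to end.
    rng = np.random.default_rng(abs(hash(blueprint.tobytes())) % (2**32))
    return [rng.random((224, 224)) for _ in range(n_views)]

def dnn_confidence(image, target_class="jellyfish"):
    # Stand-in classifier: a real system would feed the snapshot to a
    # pretrained image-recognition DNN and read off the probability it
    # assigns to `target_class`. Here we return a placeholder in [0, 1].
    return float(image.mean())

def fitness(blueprint, target_class="jellyfish"):
    # Score the 3D shape by averaging the DNN's confidence over several
    # 2D snapshots, since the network cannot see the object in 3D.
    views = render_snapshots(blueprint)
    return sum(dnn_confidence(v, target_class) for v in views) / len(views)
```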

The evolutionary algorithm then takes the blueprint, mutates it a little, and sends the new version back to the DNN. “How about now?” If the DNN thinks the blob looks worse, the new object is discarded and another mutation is made from the original. If the feedback improves, because it now looks .002% like a jellyfish, say, the new version becomes the basis for further mutations. Like an über-patient master and pupil, the algorithm and the deep neural network go back and forth like this millions of times, slowly but surely sculpting a recognizable object.
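Continuing the sketch above, a simple mutate-and-keep-if-better loop in the spirit of that back-and-forth might look like the following. The actual evolutionary algorithm the researchers used is more sophisticated; mutate, evolve, and every parameter value here are illustrative assumptions.

```python
def mutate(blueprint, rng, rate=0.1, strength=0.05):
    # Copy the parent blueprint and nudge a random ~10% of its
    # parameters, mirroring mutation in natural evolution.
    child = blueprint.copy()
    mask = rng.random(child.shape) < rate
    child[mask] += rng.normal(0.0, strength, int(mask.sum()))
    return child

def evolve(steps=10_000, genome_size=64, seed=0):
    # (1+1)-style hill climber: keep a mutation only when the DNN's
    # score improves ("warmer"); otherwise discard it and try again.
    rng = np.random.default_rng(seed)
    parent = rng.random(genome_size)   # the initial misshapen "blob"
    best = fitness(parent)
    for _ in range(steps):
        child = mutate(parent, rng)
        score = fitness(child)
        if score > best:
            parent, best = child, score
    return parent, best
```

Note that the only signal flowing back into the loop is a single score, warmer or colder, which matches the bare-bones feedback Lehman describes below.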

Some of the object models created by the evolutionary algorithm. Courtesy of Joel Lehman

While Lehman acknowledges the parallels between his algorithm and the human learning process, he believes the more apt comparison is to natural evolution. That’s what inspired the project in the first place. “It’s incredibly fascinating to me that [evolution], with no volitional thinking, was able to create things of such enormous complexity,” he says. “Things that are still beyond our ability to engineer.” It’s the evolutionary discovery of pathways from simple organisms to more complex ones that he’s hoping to mimic. That’s why the DNN gives the algorithm such basic feedback. It doesn’t tell the algorithm what it did right or wrong, just whether it’s getting warmer or colder. That’s like evolution, he says. “Either you live or you die. Evolution doesn’t tell you how your DNA should change in order to become a better organism.”

The team ran the refinement process for about two weeks. More than 2.5 million iterations later, the DNN had rated many of the algorithm’s creations at 95% confidence. In all fairness, a human judge would probably rate some of them lower. The odd appearance of the objects, says Lehman, points to the neural network’s inability to comprehend a three-dimensional object. In a sense, the algorithm exploited that weakness, getting passing grades for fairly abstract-looking creations.

Lehman sent the finalized blueprints (which he calls “artificial DNA”) to a 3D printer. The result: several small sculptures created by an algorithm. Lehman describes them as “kind of pretty.”

Lehman and his team will present their results, along with the 3D printed objects, at the International Conference on Computational Creativity next week in Paris.