Creative A.I. Dreams Up 3D-Printed Objects

Evolving algorithm learns to sculpt based on feedback from a deep neural network

The EA generates a random blueprint from which it models a 3D object, resembling a little blob. It kicks its creation over to a deep neural network, which has a look, compares it to its vast image database, and gives the EA some feedback. Something like: this thing looks about 0.001% like a mushroom. Not an A+, but it's a starting point.

The EA then mutates the blueprint a little and sends it back to the neural network. If the network says the new version got further away from the target object, the EA discards it and makes another. Eventually a mutation produces an incremental improvement (one that looks 0.002% like a mushroom), and the EA bases its next mutations on that slightly improved artifact.
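The mutate-score-keep cycle described above can be sketched in a few lines. This is a minimal stand-in, not the researchers' actual system: the genome is just a list of numbers, and the `critic` function plays the role of the deep network's class confidence.

```python
import random

def evolve(score, genome_len=16, steps=2000, rate=0.1):
    """Minimal evolutionary loop: keep a mutated blueprint only if the
    critic's score improves, mirroring the EA/DNN feedback cycle."""
    best = [random.random() for _ in range(genome_len)]  # random "blueprint"
    best_score = score(best)
    for _ in range(steps):
        # mutate the blueprint a little
        child = [g + random.gauss(0, rate) for g in best]
        s = score(child)
        if s > best_score:
            # an incremental improvement: build on it from now on
            best, best_score = child, s
        # otherwise the mutation is discarded and we try again
    return best, best_score

# Hypothetical stand-in critic: rewards genomes close to an arbitrary
# target, the way the real DNN rewards mushroom-like images.
target = [0.5] * 16
critic = lambda g: -sum((a - b) ** 2 for a, b in zip(g, target))

genome, fitness = evolve(critic)
```

Because a worse mutation is always thrown away, the score can only go up, which is why millions of tiny steps eventually add up to something recognizable.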

A bit like the actual creative process, except that in this case the EA went through 2.5 million iterations. I don't know about you, but I would have written off ceramics class altogether by then.

Or a bit more like evolution, which inspired the algorithm in the first place.

The project was the idea of Joel Lehman, an assistant professor at the IT University of Copenhagen and its lead researcher, in collaboration with the Evolving Artificial Intelligence Laboratory at the University of Wyoming.

Deep neural networks have become very good at classifying images; they power face-recognition and voice-recognition software. Five to ten years ago, recognizing objects in photos was a very hard problem. Thanks to deep learning, and to huge labeled data sets like ImageNet, that's now basically solved, and for many image-recognition tasks the networks operate at superhuman levels. But accessing labeled information isn't really a creative process.

“I was looking at the strengths of that and wondering if it could be leveraged to help computational creativity, to make computers be more creative, to create new artifacts, which the deep learning paradigm is really not well suited for.”

Evolutionary algorithms are more amenable to creativity. Natural evolution is a prolifically creative process: it produced every variety of life, a diversity that is insane if you really think about it. This unguided process created human-level intelligence as well as bacteria and bears and barnacles and all sorts of other B-named organisms.

The project takes the best of these two worlds and shows some of the unique things you can do by combining them.

The researchers linked a state-of-the-art deep neural network with an evolutionary algorithm that tries to create sculptures. The EA is less constrained in the kind of feedback you can give it; it can do things beyond the labeled-data paradigm.

The system has two main ingredients. The first is the deep neural network. Somebody else trained it; the researchers used it off the shelf, as a tool. It's the part of the system that gives feedback on how good a particular sculpture is, that is, how well it resembles an actual object. It takes in an image and says whether it recognizes it as something, and how certain it is.
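The network's role in the system boils down to a single interface: image in, per-class confidence out. Here is a toy sketch of that interface; `classify` is a hypothetical stand-in for the off-the-shelf network, which would really run a forward pass of a trained model.

```python
def classify(image):
    """Hypothetical stand-in for the pre-trained classifier.
    Returns a confidence score for each label it knows."""
    # the real system would run a deep network here; this toy version
    # just maps average pixel brightness to two made-up confidences
    brightness = sum(image) / len(image)
    return {"mushroom": brightness, "tennis ball": 1 - brightness}

def feedback(image, target_label):
    """Score an image by the classifier's confidence in the target class.
    This is the fitness signal handed back to the evolutionary algorithm."""
    return classify(image)[target_label]

score = feedback([0.9, 0.8, 0.7], "mushroom")
```

The EA never looks inside the network; it only sees this scalar score, which is what makes the network swappable for any other critic.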

The second is the EA, which works with a kind of artificial DNA: an encoding that maps to a 3D object. It's like a blueprint for creating an object.
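One simple way to picture such an encoding: the "DNA" is a handful of numbers that parameterize an implicit surface, and a voxel is solid wherever that surface function says so. This is only an illustrative toy (the project's actual encoding was more sophisticated), but it shows how a short genome can be a blueprint for a whole 3D shape.

```python
def decode(dna, res=8):
    """Map 'artificial DNA' (four numbers) to a 3D object: a voxel at
    grid position (i, j, k) is solid where an implicit function of its
    coordinates, controlled by the DNA, is below a threshold."""
    a, b, c, r = dna
    voxels = set()
    for i in range(res):
        for j in range(res):
            for k in range(res):
                # map grid indices to coordinates in [-1, 1)
                x, y, z = (2*i/res - 1, 2*j/res - 1, 2*k/res - 1)
                if a*x*x + b*y*y + c*z*z < r:
                    voxels.add((i, j, k))
    return voxels

blob = decode([1.0, 1.0, 1.0, 0.5])  # roughly a ball of voxels
```

Mutating the DNA (nudging `a`, `b`, `c`, or `r`) deforms the resulting object slightly, which is exactly the property the EA needs for its small incremental tweaks.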

The EA starts from scratch, generating random instances of artificial DNA without any real structure; these are essentially blueprints for random 3D objects. The results are featureless blobs, the humble starting points of the process. The EA takes a blueprint, creates the object, and then takes multiple pictures of it from different angles. Those are the images it feeds into the neural network. It's as if the evolutionary algorithm took some instructions for molding clay, molded an object, and presented snapshots of it, and then the network said what it thinks the object looks like. At first it wouldn't look like much of anything, but maybe it looks a little more like a mushroom than a tennis ball. With that little bit of feedback, the algorithm can say: this is the best example of a mushroom I have so far, so I'll try tweaking that design.
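The multi-angle snapshot step can be sketched as combining the critic's confidence across several views. Both `render` and `confidence` below are hypothetical stand-ins for the real renderer and the real network; the point is only the averaging structure.

```python
def render(obj, angle):
    """Stand-in renderer: pretend the 'image' is just the object's
    values shifted by the camera angle."""
    return [(v + angle) % 1.0 for v in obj]

def confidence(image):
    """Stand-in for the DNN's confidence that the image shows the
    target class (e.g. a mushroom)."""
    return sum(image) / len(image)

def multi_view_score(obj, angles=(0.0, 0.25, 0.5, 0.75)):
    """Photograph the object from several angles and average the
    critic's confidence over the snapshots, so that a shape must look
    plausible from every side, not just one lucky viewpoint."""
    return sum(confidence(render(obj, a)) for a in angles) / len(angles)

s = multi_view_score([0.1, 0.2])
```

Averaging over views is one plausible way to combine the per-snapshot scores; it discourages shapes that only resemble the target from a single angle.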