If Google’s artificial intelligence can paint its dreams, why not make other kinds of art?
On June 1, Google is set to launch Magenta, a research project exploring how artificial intelligence can create art, and making that process easier for TensorFlow users. The group has about six researchers now, and will invite other academics to help tackle the problem of creative machines. The project exists within the Google Brain group.
Douglas Eck, a researcher on the Magenta project, said that the group will first tackle algorithms that can generate music, then move on to video and other visual arts.
“There’s a couple of things that got me wanting to form Magenta, and one of them was seeing the completely, frankly, astonishing improvements in the state of the art [of creative deep learning]. And I wanted to demystify this a little bit,” Eck said during a panel at Moogfest, a music and technology festival.
Eck said that his inspiration for the project came from Google's DeepDream, a technique researchers used to visualize how their artificial intelligence algorithms perceived objects by asking the algorithms to generate examples.
The Magenta team will build all of its deep learning models open-source on top of TensorFlow, Google's open-source artificial intelligence platform, according to Eck. He says the hope behind open-sourcing the project is that others will be able to take Google's work and build on it themselves. The project's GitHub page is currently empty (other than a README file), but will have its first code soon.
Eck also mentioned a potential Magenta app, which would showcase the music and visual art created by the Magenta project. The app would aim to gauge whether people like the art because it's novel or because it has inherent artistic value.
We’ve reached out to Google, and will update with any more information.