A research team at Carnegie Mellon University has developed a new project that embraces the spontaneity and joy of artistic collaboration by merging the strengths of humans, artificial intelligence, and robotics. FRIDA—the Framework and Robotics Initiative for Developing Arts—generates an image from a series of human prompts, much like the generative art-bot DALL-E. But FRIDA takes it a step further by actually painting its idea on a physical canvas.

As described in a paper to be presented in May at the IEEE International Conference on Robotics and Automation, the team first installed a paintbrush onto an off-the-shelf robotic arm, then programmed its accompanying AI to reinterpret human input, photographs, and even music. The final results resemble somewhat rudimentary finger paintings.

Unlike similar designs, FRIDA analyzes its inherently imprecise brushwork in real time and adjusts accordingly. Its perceived mistakes are incorporated into the project as they arise, offering a new level of spontaneity. “It will work with its failures and it will alter its goals,” Peter Schaldenbrand, a Ph.D. student and one of FRIDA’s creators, said in a demonstration video provided by Carnegie Mellon.


Its creators emphasize that the robot is a tool for human creativity. According to the team’s research paper, FRIDA “is a robotics initiative to promote human creativity, rather than replacing it, by providing intuitive ways for humans to express their ideas using natural language or sample images.”

Going forward, the researchers hope to continue honing FRIDA’s abilities and to expand its repertoire, potentially one day to include sculpting—an advancement that could show promise across a range of production industries.