Elon Musk’s Artificial Intelligence Group Opens A ‘Gym’ To Train A.I.
To focus on algorithms as versatile as the human brain
In any scientific arena, good research can be replicated. If others can mimic your experiment and get the same results, that bodes well for the finding's validity. And if others can tweak your study to get better results, the whole community benefits even more.
These ideas are the driving force behind OpenAI Gym, a new platform for artificial intelligence research. OpenAI, announced earlier this year, is the brainchild of Elon Musk, Y Combinator's Sam Altman, and former Googler Ilya Sutskever. The collaboration vows to undertake ambitious artificial intelligence (A.I.) research while publishing and open-sourcing nearly everything it does. The platform aims to become the standard for benchmarking certain kinds of A.I. algorithms, and a place for people to share their results.
Interestingly, though, OpenAI Gym won't have leaderboards based on who can make the top-scoring algorithm. Instead, it will focus on promoting algorithms that generalize well—meaning they're versatile enough to complete other, similar tasks. The lack of generalization is seen by many A.I. researchers as the biggest hindrance to human-level intelligence. Right now, an algorithm that can recognize images of a cat can't also understand speech, because the two tasks approach data in different ways. A generalizing algorithm would know how to deal with both, the way humans do.
The platform isn't necessarily for iterative work with small improvements. The OpenAI team wants projects that change the way we think about algorithms.
“It’s not just about maximizing score; it’s about finding solutions which will generalize well,” says OpenAI Gym’s submission documentation. “Solutions which involve task-specific hardcoding or otherwise don’t reveal interesting characteristics of learning algorithms are unlikely to pass review.”
The OpenAI Gym platform focuses on reinforcement learning, a flavor of artificial intelligence centered around achieving a task. If the algorithm does well, it's given a reward. If it fails, it gets no reward—and it then tries something different. Reinforcement learning has proven to work particularly well with robots and video games; these are the same kinds of techniques Google DeepMind used to beat Atari games.
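To make the reward loop concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy "corridor" task. Everything in it—the corridor, the state count, the learning parameters—is invented for illustration; it is not OpenAI's code, just the textbook shape of reinforcement learning: act, observe a reward, adjust, repeat.

```python
import random

# Toy task: the agent starts at position 0 and earns a reward of 1.0
# only on reaching position 4; every other step pays nothing.
N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the learned estimate of future reward per (state, action) pair
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def run_episode():
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best next action.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

random.seed(0)
for _ in range(200):
    run_episode()

# After training, the greedy policy should move right from every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The interesting part is that no one tells the agent "walk right"; the behavior emerges purely from trial, error, and reward.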
In fact, Atari environments will be an option on the site, as well as simulated robotics tasks and board games. Even Go, the now-famous ancient Chinese board game, will have a home on the site.
The idea is that researchers build their algorithms, then drop them into various environments (virtual spaces where the algorithms are tested). They can then see how their algorithm fared in an objective test, make adjustments, and even publish their benchmarks for the rest of the community to see. The platform works with popular open-source A.I. frameworks, like Google's TensorFlow and the University of Montreal's Theano.
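The environment idea follows a simple calling convention: `reset()` starts an episode and returns an observation, and `step(action)` returns the next observation, a reward, a done-flag, and an info dict. The sketch below mimics that Gym-style loop with a made-up number-guessing environment—`GuessingGameEnv` and its feedback scheme are inventions for illustration, stand-ins for Gym's real Atari, board-game, and robotics environments.

```python
import random

# A minimal environment written against the Gym-style calling convention:
# reset() -> observation; step(action) -> (observation, reward, done, info).
class GuessingGameEnv:
    def reset(self):
        self.target = random.randint(0, 9)
        self.guesses_left = 10
        return 0  # initial observation: no feedback yet

    def step(self, action):
        self.guesses_left -= 1
        if action == self.target:
            return 0, 1.0, True, {}              # correct: reward, episode over
        obs = -1 if action > self.target else 1  # feedback: aim lower / higher
        done = self.guesses_left == 0            # out of guesses: episode over
        return obs, 0.0, done, {}

# A trivial "agent": binary search driven by the environment's feedback.
random.seed(42)
env = GuessingGameEnv()
env.reset()
low, high, total_reward = 0, 9, 0.0
done = False
while not done:
    guess = (low + high) // 2
    obs, reward, done, info = env.step(guess)
    total_reward += reward
    if obs == -1:
        high = guess - 1
    elif obs == 1:
        low = guess + 1
print(total_reward)
```

Because agent and environment talk only through this narrow interface, the same agent code can be pointed at any environment that speaks it—which is exactly what makes shared benchmarks possible.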
OpenAI Gym is now in open beta, and researchers can start submitting their algorithms.