
Researchers at Google’s DeepMind team have developed an artificial intelligence that’s loosely inspired by how the human brain works and can play 49 classic Atari 2600 games, including Space Invaders and Pong.

The AI doesn’t need to know a game’s rules before it starts playing. Instead, it’s equipped with two basic things: the ability to remember and learn from previous rounds of play, and the motivation to maximize its score. With those powers, it can figure out the rules and, over time, improve its strategy. The AI ultimately scored at least 75 percent as well as a professional human games tester on 29 of the games, its creators report today in the journal Nature.
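
Those two ingredients, replaying remembered experience and chasing score, are the core of the reinforcement-learning technique that the Nature paper pairs with a deep neural network. Here’s a minimal sketch of the idea on a made-up toy game; the environment, names, and numbers are illustrative assumptions, not DeepMind’s code, which reads raw screen pixels instead:

```python
import random
from collections import deque

# A toy stand-in for a game: the state is a single number, one action
# nudges it up, the other down, and reaching +3 earns a point.
# Purely illustrative, not DeepMind's actual Atari setup.
class ToyGame:
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += 1 if action == 1 else -1
        reward = 1.0 if self.state == 3 else 0.0
        done = abs(self.state) >= 3
        return self.state, reward, done

q_values = {}                 # (state, action) -> expected future score
replay = deque(maxlen=1000)   # memory of past play to learn from
alpha, gamma, epsilon = 0.1, 0.99, 0.1

def q(state, action):
    return q_values.get((state, action), 0.0)

env = ToyGame()
for episode in range(500):
    state, done = env.reset(), False
    while not done:
        # Mostly pick the action with the highest predicted score;
        # occasionally explore at random.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max([0, 1], key=lambda a: q(state, a))
        next_state, reward, done = env.step(action)
        replay.append((state, action, reward, next_state, done))
        state = next_state

        # "Remember and learn from previous rounds": replay a stored
        # memory and nudge its value toward reward-now plus the best
        # predicted score afterward.
        s, a, r, s2, d = random.choice(replay)
        target = r if d else r + gamma * max(q(s2, 0), q(s2, 1))
        q_values[(s, a)] = q(s, a) + alpha * (target - q(s, a))

# After training, the agent's first move from the start state should be
# the one that heads toward the reward.
print(max([0, 1], key=lambda a: q(0, a)))
```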

The AI is meant as a proof of concept: evidence that it should eventually be possible to build a program that can learn to solve a variety of problems. “The ultimate goal is to build smart, general purpose machines,” Demis Hassabis, DeepMind’s founder, says. (DeepMind was a venture capital-backed startup before Google acquired it.) “We’re many decades off from doing that, but I do think this is the first significant rung of the ladder that we’re on.”

Hassabis imagines that in the near future, software like his AI could go into an app that knows how to respond when you say, “Plan a trip to Europe for me.”

“You just tell it that and it books all the flights and hotels that you need,” he says. Pieces of the AI could go into Google products such as search and translation, he adds.

How can software that now plays Atari games later understand your desire for a continental getaway? The idea is that, at its foundation, the AI is flexible and able to learn many things. In fact, its structure is a little reminiscent of the human brain, which is capable of learning an awe-inspiring variety of tasks. The AI is an artificial neural network, which means it’s made up of a bunch of connections, analogous to the connections between brain cells.

The artificial neural network begins life with connections of random strengths. As it discovers which decisions earn more points, the connections behind those decisions strengthen, while less-used connections weaken. Similarly, when people practice a skill, the corresponding connections in their brains strengthen.
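
In software terms, a “connection” is just a number, a weight, and strengthening it means nudging that number. Here’s a hedged sketch with a single connection; the learning rule and figures are a generic illustration, not DeepMind’s training procedure:

```python
import random

# One artificial connection that begins at a random strength, as the
# article describes. Each trial, we nudge the weight depending on how
# far the decision it produced fell from the points available.
weight = random.uniform(-1.0, 1.0)
learning_rate = 0.05

for trial in range(2000):
    signal = random.uniform(0.0, 1.0)   # activity arriving on this connection
    decision = weight * signal          # how strongly the connection fires
    points = 0.8 * signal               # the score the decision should match
    error = points - decision
    # Helpful connections strengthen; unhelpful ones drift toward zero.
    weight += learning_rate * error * signal

print(round(weight, 2))  # settles near 0.8, the strength that earns the most points
```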

Up next, Hassabis and his colleagues plan to improve the AI so it can handle ’90s games, such as Super Nintendo titles and early PC games. That means the software will have to learn to process more complex images, navigate bigger landscapes, and recall maps. Yes, it also means this AI is chewing its way through your childhood. I hope you’re okay with that.

Meanwhile, working with an AI that learns is full of surprises. For example, the AI came up with some strategies its human makers hadn’t thought of. In Breakout, it discovered that the best technique was to get the ball behind the wall of blocks. In Seaquest, it found it could keep its submarine from running out of oxygen by keeping the sub near the water’s surface.

“Initially, we thought this was a bad thing because by doing that, it stopped accumulating points, but at the same time, it discovered a feature of the game we never knew,” says Vlad Mnih, a DeepMind engineer. “It’s definitely fun to see computers discover things that you didn’t figure out yourself.”