When IBM’s Deep Blue computer beat chess grandmaster Garry Kasparov, the world noticed. Machine had bested man, in a game of man’s own design. The rules for what machines could do had changed.

But now, moments like that (which Popular Science dubbed the “checkmate heard ’round the world” in 1997) are fewer and farther between. In a world where artificial intelligence is either improved slowly in the open, as with virtual personal assistants, or tweaked at the server level to provide better customized content or facial recognition, these benchmark moments are usually more ambiguous.

Today (and late last night), two companies heavily invested in artificial intelligence are each trying to lay claim to the same benchmark: beating human players at the ancient Chinese game of Go.

In the last 24 hours, both Google DeepMind and Facebook’s AI Research lab have announced that their algorithms can play at extremely high levels of competition. Google has the more formal claim, with a publication in Nature and a real-world match in October in which its program, named AlphaGo, beat reigning European Go champion Fan Hui. Facebook’s software placed third in a monthly bot tournament held by the online Go server KGS.

Google DeepMind claims that this level of play, beating a reigning human champion, had previously been estimated to be five to ten years away, a testament to how quickly artificial intelligence research is accelerating. Facebook, meanwhile, says it has seen steady, quantifiable improvement in its algorithms.

For the Google or Facebook user, this is not a momentous occasion; in fact, it probably won’t affect how either service works at all. What it does show is that researchers are getting quantifiably better at applying established algorithms, in this case a flavor of deep neural network that is good at processing visual information. That means algorithms that can detect patterns better and express their conclusions more simply. In the future, that will mean better results.

Now, it just means one more thing computers are better at.

However, Facebook founder and CEO Mark Zuckerberg took the opportunity to flex some research muscle, posting about his company’s achievement on his personal Facebook page, suspiciously just a day before Google DeepMind’s announcement.

Both companies used some combination of deep convolutional neural networks, a technique pioneered by Facebook’s Yann LeCun at Bell Labs, and millions of data points drawn from previous games of Go.

A typical Go board. Nature

The game of Go is a two-player game played on a 19×19 board. Each turn, a player places a round piece, called a stone, trying to surround and capture the other player’s stones. One player plays black stones, the other white. With 361 points on the board, and the value of each shifting with every move, the game is enormously complex, and devising a strategy to outsmart an opponent has long been thought to demand human creativity.
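To make that bookkeeping concrete, here is a minimal sketch of how a Go position might be represented in code, along with the flood-fill liberty count that determines captures. The names and the helper function are illustrative, not drawn from either company’s system.

```python
# Minimal sketch of a Go position. Illustrative only, not from
# either company's system.

EMPTY, BLACK, WHITE = 0, 1, 2
SIZE = 19  # the standard Go board is 19x19

def new_board():
    return [[EMPTY] * SIZE for _ in range(SIZE)]

def group_liberties(board, row, col):
    """Flood-fill the group of stones containing (row, col) and count
    its liberties (adjacent empty points). A group whose liberties
    drop to zero is captured and removed from the board."""
    color = board[row][col]
    seen, liberties = {(row, col)}, set()
    frontier = [(row, col)]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                if board[nr][nc] == EMPTY:
                    liberties.add((nr, nc))
                elif board[nr][nc] == color and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    frontier.append((nr, nc))
    return len(liberties)

board = new_board()
board[3][3] = BLACK
board[3][4] = WHITE   # a white stone takes one of black's liberties
print(group_liberties(board, 3, 3))  # 3
```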

Google’s Approach

Google used two convolutional neural nets to govern which move was played. Convolutional neural networks are the same style of network used in facial recognition and in identifying objects in images, because the way they break down data meshes well with how computers describe pixels. The system is first fed a description of the board: which positions hold white stones, which hold black, and which are unclaimed. The first network, called the policy network, outputs for each potential move the probability that a human player would choose it. This is a relatively quick process, because the computer has already studied 30 million positions from previous Go games, in a process called training. In training, the computer is fed all that positional data along with the outcome of each position, and from this it learns how to rate potential moves. It’s a lot like a football player or boxer watching game footage, except the computer never forgets a move.
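As a rough illustration of the interface involved (not DeepMind’s actual architecture), here is a toy policy network in Python. The position is encoded as planes of empty, black, and white points, and a single stand-in layer maps it to a probability for each of the 361 points; the real system uses a deep convolutional net trained on those 30 million positions.

```python
import numpy as np

SIZE = 19

def encode_board(board):
    """One-hot encode the position into three 19x19 planes:
    empty points, black stones, white stones."""
    planes = np.zeros((3, SIZE, SIZE), dtype=np.float32)
    for r in range(SIZE):
        for c in range(SIZE):
            planes[board[r][c], r, c] = 1.0  # board values are 0/1/2
    return planes

def policy_network(planes, weights):
    """Map a position to a probability for each of the 361 points.
    A single linear layer stands in for the deep convolutional net."""
    logits = weights @ planes.reshape(-1)    # shape (361,)
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()

board = [[0] * SIZE for _ in range(SIZE)]    # an empty board
weights = np.random.randn(SIZE * SIZE, 3 * SIZE * SIZE) * 0.01
probs = policy_network(encode_board(board), weights)
print(probs.shape, probs.sum())              # (361,) 1.0
```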

The outcomes of Google DeepMind’s AlphaGo software against French Go master Fan Hui. Google DeepMind

The second network is the decider. It’s another convolutional neural network, which DeepMind calls the value network. It takes the board positions those moves would produce and outputs a single number for each, an estimate of how likely that position is to win the entire game.
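A companion sketch, under the same caveats as the policy example above: the value network’s job is to compress a whole position into one number, so the stand-in below is just a linear layer squashed to a probability where the real system uses another deep convolutional net.

```python
import numpy as np

def value_network(planes, weights):
    """Return an estimated win probability in (0, 1) for the side to
    move, given the same 3x19x19 encoding as the policy sketch.
    A linear layer plus a sigmoid stands in for the deep net."""
    score = float(weights @ planes.reshape(-1))
    return 1.0 / (1.0 + np.exp(-score))      # sigmoid squashes to (0, 1)

planes = np.zeros((3, 19, 19), dtype=np.float32)
planes[0] = 1.0                              # plane 0 marks empty points
weights = np.random.randn(3 * 19 * 19) * 0.01
print(value_network(planes, weights))        # e.g. 0.51
```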

That’s the system that beat one of the world’s foremost Go champions (five times!) and, in Google’s tests, beat other leading Go programs in 494 out of 495 games, or 99.8 percent of the time.

Facebook’s Approach

Facebook’s stab at Go is a little different. Instead of using two convolutional networks, the team used only one, in conjunction with another form of machine learning called Monte Carlo Tree Search. According to Facebook’s Yann LeCun, this is a kind of randomized search that explores many, many potential moves learned in training, as described above. The convolutional network functions much like Google’s policy network, predicting the best potential next move, and relies on the tree search to actually explore those candidate moves. However, it should be noted that Facebook had one researcher on this problem (albeit one who sat 20 feet from Mark Zuckerberg), while the DeepMind Nature article had 20 co-authors.
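Here is a minimal sketch of that combination: the convnet’s move probabilities act as priors that bias which branches the randomized search spends its playouts on. The priors and the rollout function below are toy stand-ins, not Facebook’s code.

```python
import math
import random

# Toy Monte Carlo Tree Search guided by a policy prior. The priors and
# rollout below are stand-ins; a real engine would plug in its convnet
# and a Go simulator here.

class Node:
    def __init__(self, prior):
        self.prior = prior    # the convnet's probability for this move
        self.visits = 0
        self.wins = 0.0

    def value(self):
        return self.wins / self.visits if self.visits else 0.0

def ucb_score(node, parent_visits, c=1.4):
    """Trade off playout results seen so far (exploitation) against
    the convnet's prior on under-explored moves (exploration)."""
    explore = c * node.prior * math.sqrt(parent_visits) / (1 + node.visits)
    return node.value() + explore

def mcts(priors, rollout, n_simulations=1000):
    children = {move: Node(p) for move, p in priors.items()}
    for i in range(1, n_simulations + 1):
        move = max(children, key=lambda m: ucb_score(children[m], i))
        node = children[move]
        node.visits += 1
        node.wins += rollout(move)   # 1.0 for a win, 0.0 for a loss
    # The most-visited move is the search's final answer.
    return max(children, key=lambda m: children[m].visits)

# Demo: move "b" secretly wins more playouts, despite a lower prior.
priors = {"a": 0.5, "b": 0.3, "c": 0.2}
win_rates = {"a": 0.45, "b": 0.60, "c": 0.40}
best = mcts(priors, lambda m: float(random.random() < win_rates[m]))
print(best)  # almost always "b"
```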

Going forward, Google DeepMind’s AlphaGo will challenge human Go master Lee Sedol, regarded as the best Go player in the world. The match will take place in March 2016 (and we’ll tell you how it goes). Facebook’s LeCun says his team’s model is still in development, and that it will look toward incorporating other types of deep learning in the future.