Almost nothing looks more orderly than chess pieces before a match starts. The first move, however, begins a spiral into chaos. After both players move, 400 possible board setups exist. After the second pair of turns, there are 197,742 possible games, and after three moves, 121 million. At every turn, players chart a progressively more distinctive path, and each game evolves into one that has probably never been played before. According to Jonathan Schaeffer, a computer scientist at the University of Alberta who demonstrates A.I. using games, "The possible number of chess games is so huge that no one will invest the effort to calculate the exact number." Some have estimated it at around 10^100,000. Out of those, 10^120 games are "typical": about 40 moves long with an average of 30 choices per move. There are only 10^15 total hairs on all the human heads in the world, 10^23 grains of sand on Earth, and about 10^81 atoms in the universe. The number of typical chess games is many times as great as all those numbers multiplied together: an impressive feat for 32 wooden pieces lined up on a board.
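The arithmetic behind these comparisons is easy to verify. A quick sanity check in Python, using the standard counts (each side has 20 legal first moves: 16 pawn moves plus 4 knight moves):

```python
import math

# White has 20 legal first moves (16 pawn moves + 4 knight moves),
# and Black has 20 replies, so 20 * 20 = 400 positions exist after
# the first pair of moves.
first_pair = 20 * 20

# Shannon-style estimate of "typical" games: roughly 40 moves per
# side with ~30 choices at each, i.e. 30^80, which is on the order
# of 10^118 -- close to the 10^120 figure quoted above.
typical_exponent = math.log10(30 ** 80)

# Hairs (~10^15) * sand grains (~10^23) * atoms (~10^81) = 10^119,
# still smaller than ~10^120 typical chess games.
product_exponent = 15 + 23 + 81

print(first_pair)        # 400
print(product_exponent)  # 119
```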

Chess is often seen as the ultimate mental challenge: 32 pieces on an 8-by-8 board, and a nearly limitless number of possible games. Chess engines can calculate millions of moves per second, but the traditional approach is to “brute force” the match. Brute forcing is a method in hacking (and, apparently, computer chess) in which the program runs through every possibility until it finds the best solution.
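The core of that brute-force approach is minimax search: try every move, assume the opponent replies with their best move, and pick the branch with the best guaranteed outcome. A minimal sketch on an abstract game tree (toy scores, not a real chess engine):

```python
# Brute-force minimax on an abstract game tree. Leaves are integer
# scores from the maximizing player's point of view; internal nodes
# are lists of child subtrees.

def minimax(node, maximizing):
    if isinstance(node, int):        # leaf: a static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies deep: for each of our three moves, the opponent picks the
# reply that is worst for us; we pick the move whose worst case is best.
tree = [[3, 12], [2, 4], [14, 1]]
best = max(minimax(branch, maximizing=False) for branch in tree)
print(best)  # 3: the first branch guarantees at least 3
```

Real engines add pruning and depth limits, but the exhaustive idea is the same, which is why raw speed matters so much to them.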

But Matthew Lai wants to make chess-playing computers smarter. As part of his master’s degree at Imperial College London, Lai trained artificial neural networks to play at the level of a FIDE International Master, better than 97.8 percent of rated tournament players. He calls his software Giraffe.

After 72 hours of training, Giraffe figured out the best possible move 46 percent of the time, and the move it selected was among the top three moves 70 percent of the time. Previous attempts at machine learning in chess, like KnightCap, needed programmers to design “pattern recognizers”: separate functions for concepts like shielding a king with a pawn, or the importance of keeping bishops of both colors, says Lai. The machine-learning algorithm would then watch these predefined patterns in play and learn how strong they were. Giraffe discovers such patterns automatically, so it can learn moves that even the programmer wouldn’t have considered.
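Those percentages are what is usually called top-k accuracy: how often the reference best move appears among the engine's k highest-ranked choices. A small illustration with made-up positions and moves (not Lai's test data):

```python
# Top-k accuracy: the fraction of test positions where the reference
# move appears among the engine's k highest-ranked moves.

def top_k_accuracy(predictions, references, k):
    hits = sum(ref in ranked[:k]
               for ranked, ref in zip(predictions, references))
    return hits / len(references)

# Hypothetical ranked move lists for three positions, plus the
# reference best move for each.
ranked_moves = [["e4", "d4", "Nf3"], ["Nf6", "d5", "e5"], ["O-O", "h3", "Re1"]]
best_moves = ["e4", "d5", "Re1"]

print(top_k_accuracy(ranked_moves, best_moves, 1))  # 1 of 3 positions
print(top_k_accuracy(ranked_moves, best_moves, 3))  # all 3 positions
```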

“Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans,” Lai writes in his paper detailing Giraffe.

Lai trained his artificial neural networks, which loosely mimic human learning by repeatedly testing a guessed solution and adjusting it, on a set of 175 million data points. He generated these from 5 million legal board configurations drawn from games between humans and other computers, applying a random legal move to each position several times. The learning process involved the computer playing against itself, then judging each position by whether the moves that followed led toward a win.
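The self-play idea can be sketched with a temporal-difference update: each position's estimated value is nudged toward the value of the position that follows it, so the final outcome gradually propagates backward through the game. This toy table-based TD(0) update is a deliberate simplification; Lai's actual training used a more sophisticated variant with a neural network as the evaluator.

```python
# Toy temporal-difference learning from one self-play game.
# values maps a position (here just a label) to its estimated value.
values = {}
ALPHA = 0.1  # learning rate

def td_update(game_positions, outcome):
    """Walk a finished game; pull each position's value toward the
    next position's current value, and the last one toward the outcome."""
    targets = [values.get(p, 0.0) for p in game_positions[1:]] + [outcome]
    for pos, target in zip(game_positions, targets):
        v = values.get(pos, 0.0)
        values[pos] = v + ALPHA * (target - v)

# Pretend one self-play game visited these positions and was won (+1).
td_update(["start", "mid", "end"], outcome=1.0)
print(values["end"])  # moved from 0.0 toward 1.0 -> 0.1
```

After many such games, value flows backward from outcomes to the positions that produced them, which is how self-play can teach an evaluator without labeled examples.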

Without any training, Giraffe scores about 6,000 of 15,000 points in a standardized chess engine test. After training for 72 hours, it peaked at 9,700. It learned.

Giraffe is bested only by an engine called Stockfish 5, which has been developed and tuned since 2008 (and was first built on a 2004 chess engine, Glaurung). Lai writes that Giraffe’s ability to stand up to “carefully hand-designed behemoths with hundreds of parameters” is remarkable for how young it is, and that the testing suite might even underestimate his program.

“Since the test suite is famous, it is likely that at least some of the engines have been tuned specifically against the test suite,” Lai writes in his paper. “Since Giraffe discovered all the evaluation features through self-play, it is likely that it knows about patterns that have not yet been studied by humans, and hence not included in the test suite.”

The next step is making Giraffe more efficient. Lai suggests that training smaller networks to imitate Giraffe could increase its speed, and that another neural network could handle time management.