When it comes to board games, humans don’t stand a chance against AI

“I’m not at the top, even if I become number one,” Korean Go champion Lee Sedol said.

If you sit down to play an old-school board game like chess this holiday season, it might be humbling to keep in mind just how badly you’d fare against a computer. In fact, computers have shown they’re capable of taking humanity’s lunch money at board games for a while now. Remember Deep Blue versus Garry Kasparov in 1997? The computer won. Or AlphaGo against Lee Sedol at the game of Go in South Korea in 2016? Ditto.

Lee, a Go master, is now retiring, and he says artificial intelligence is unbeatable. “With the debut of AI in Go games, I’ve realised that I’m not at the top even if I become the number one,” he said, as the Guardian reported, citing the South Korean Yonhap News Agency.

Last year, the same team that created AlphaGo (the algorithm that beat Lee, four games to one, in 2016) announced something more formidable: an artificial intelligence system capable of teaching itself—and winning at—three different games. The AI is a single network that works across multiple games, and that generalizability makes it more impressive, since it might be able to learn other, similar games as well.

They call it AlphaZero, and it knows chess, shogi (also known as Japanese chess), and Go, a complex board game where black and white stones face off on a large grid. All of these games fall into the category of “full information” or “perfect information” contests—each player can see the entire board and has access to the same information. That’s different from games like poker, where you don’t know what cards an opponent is holding.

“AlphaZero just learns completely on its own, just by playing against itself,” says Julian Schrittwieser, a software engineer at DeepMind, which created it. “And we get a completely new view of the game that is not influenced by how humans traditionally play the game.” Schrittwieser is a co-author on a 2018 study in Science describing AlphaZero, which was first announced in 2017.

Since AlphaZero is “more general” than the AI that won at Go, in the sense that it can play multiple games, “it hints that we have a good chance to extend this to even more real-world problems that we might want to tackle later,” Schrittwieser says.

The network must first be told the rules of a game; after that, it learns by playing against itself. That training took some 13 days for the game of Go, but just 9 hours for chess. After that, it didn’t take long for it to start beating other computer programs that were already experts at those games. At shogi, for example, AlphaZero took only two hours to start beating another program called Elmo. In a blog post, DeepMind boasts that the AI is “the strongest player in history” for chess, shogi, and Go. This same algorithm could be used to play other “full information” games, like the game of hex, with “no problem,” Schrittwieser says.
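That recipe—tell the system the rules, then let it improve purely by playing itself—can be sketched at a much smaller scale. The code below is a hypothetical illustration, not DeepMind’s method: where AlphaZero pairs a deep neural network with Monte Carlo tree search, this toy uses a simple lookup table, and where AlphaZero learns Go, this learns one-pile Nim, another perfect-information game.

```python
import random

# Toy self-play learner for one-pile Nim (NOT DeepMind's code): players
# alternate removing 1-3 stones, and whoever takes the last stone wins.
TAKE = (1, 2, 3)   # legal moves: remove 1, 2, or 3 stones
START = 10         # starting pile size

# value[s]: learned estimate that the player to move wins with s stones left
value = {s: 0.5 for s in range(START + 1)}
value[0] = 0.0     # no stones left to take: the player to move has lost

def best_move(s, eps=0.1):
    """Pick a move, exploring randomly eps of the time."""
    moves = [m for m in TAKE if m <= s]
    if random.random() < eps:
        return random.choice(moves)
    # a good move leaves the opponent in a position with a low win estimate
    return min(moves, key=lambda m: value[s - m])

random.seed(0)
ALPHA = 0.1        # learning rate
for _ in range(20000):       # self-play episodes: the agent plays both sides
    s = START
    while s > 0:
        m = best_move(s)
        # my winning chances are one minus my opponent's after my move
        value[s] += ALPHA * ((1.0 - value[s - m]) - value[s])
        s -= m

# In one-pile Nim the known winning strategy is to always leave the
# opponent a multiple of 4 stones. Did the greedy policy discover it?
print(all((s - best_move(s, eps=0)) % 4 == 0
          for s in range(1, START + 1) if s % 4 != 0))
```

Told nothing but the rules, the table learner should rediscover Nim’s winning strategy on its own—the same self-play idea, minus the neural network and tree search that let AlphaZero scale it to games as vast as Go.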

The new AI is similar to the artificial intelligence system that vanquished Lee Sedol in 2016. That headline-grabbing tournament is the subject of an excellent documentary, called AlphaGo, currently streaming on Netflix. It’s worth watching if the field of AI versus people interests you—or if the fascinating, ancient game of Go does.

And while this is modern AI research, board games have historically been a good way to test computers’ abilities, says Murray Campbell, a research scientist at IBM Research who authored a commentary on AlphaZero in the same issue of Science. He says that the idea of having a computer play a board game dates back to 1950, and that by the 1990s, the machines were besting humans at checkers and chess. “It took us decades of work on these games to reach the point where we can perform them better than people,” Campbell says. “I think they’ve served the field very well; they’ve allowed us to explore techniques such as the ones used in AlphaZero.”

And the experience of working on the techniques used in AlphaZero will be helpful as the field aims at “more complex tasks,” Campbell adds. “And that was the whole point in the first place of tackling games—it wasn’t for their own sake, but [because] it is a constrained kind of environment where we can make progress.”

As for the human players, even if Lee is retiring, he still has a “final challenge” planned for December, according to The Korea Times: he’ll be pitted against another AI, called Handol, that was developed in Korea.

This story was first published in December 2018. It has been updated with the news of Lee’s retirement and his upcoming game against a new AI.

Rob Verger

Rob Verger is an associate editor at PopSci, where he covers aviation, the military, transportation, outdoor gear and gadgets, and other tech topics. A graduate of Columbia Journalism School, he's also written for The Boston Globe, Newsweek, The Daily Beast, CJR, VICE News, and other publications.