
If you’ve ever thought of a competitive situation in life as a chess game, poker might be the better metaphor. Chess is a two-player contest in which each player has access to all the same information, which is why it’s known as a “perfect information” game. But real life isn’t usually like that. Consider a common but complex scenario: a company is trying to hire someone, but it doesn’t know what other firms that candidate is interviewing with, or what other offers they have. That’s more like poker than chess. The interviewee keeps their cards hidden, and could even bluff about the strength of their hand.

Situations like that drive artificial intelligence research. Computer scientists want their algorithms to be able to succeed in scenarios with multiple hidden variables. In that vein, a division of Facebook called FAIR (Facebook AI Research) and Carnegie Mellon University have created an AI that is “superhuman” at poker. And tech like it could have implications far beyond the virtual felt of the gaming table.

“It’s the best player in six-player no-limit Texas hold ’em in the world,” says Noam Brown, a research scientist at FAIR, describing their AI poker whiz.

Software is already great at beating people in games like chess, checkers, and Go. And while AI could already win at two-player poker, the breakthrough here is that the new artificial intelligence system, called Pluribus, can dominate the multiplayer game. The study describing Pluribus was published today in the journal Science.

Like a human, the AI can bluff if its hand is weak. “It’s focused on playing a strategy that’s unpredictable,” Brown says. “The AI knows that if it only bets if it has a good hand, then the opponent is going to know to fold in response.”

The AI doesn’t see bluffing as a lie, but as a strategy for getting its opponent to fold when the bot itself holds a weak hand. “Equally important, the bot is able to recognize that when its opponent bets, it might not have a strong hand,” he adds, meaning that maybe the bot should call.
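To make that idea concrete, here is a minimal sketch of what an unpredictable, “mixed” betting strategy looks like in code. This is not Pluribus’s actual algorithm; the hand-strength scale and the bluffing and value-betting rates below are invented purely for illustration. The point is simply that because weak hands still bet some fraction of the time, an opponent can’t read a bet as proof of a strong hand.

```python
import random

# A toy "mixed strategy" for a single bet-or-check decision.
# NOT Pluribus's algorithm: hand_strength and the rates below are
# assumptions made up for illustration only.

def bet_probability(hand_strength: float) -> float:
    """Map a hand strength in [0, 1] to a probability of betting.

    Strong hands bet most of the time, but weak hands still bet
    occasionally (a bluff), so a bet by itself says little about the hand.
    """
    bluff_rate = 0.25   # assumed chance of betting the weakest hand
    value_rate = 0.90   # assumed chance of betting the strongest hand
    return bluff_rate + (value_rate - bluff_rate) * hand_strength

def act(hand_strength: float) -> str:
    """Randomize the action so it can't be predicted from the situation."""
    return "bet" if random.random() < bet_probability(hand_strength) else "check"

if __name__ == "__main__":
    for strength in (0.1, 0.5, 0.9):
        bets = sum(act(strength) == "bet" for _ in range(10_000))
        print(f"hand strength {strength}: bet {bets / 100:.1f}% of the time")
```

Pluribus balances these frequencies far more carefully across an enormous game, but the principle is the same: the action alone shouldn’t give the hand away.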

The bot does not try to change its technique based on the behavior it sees from humans; it just sticks to a “fixed strategy.” To test it, Facebook pitted the AI against 15 expert poker players, who played thousands of hands against the bot over 12 days. “The opponents are not able to find an effective way to adapt to the bot,” Brown says. “They have not been able to find weaknesses they can take advantage of and exploit.” To learn how to be so good, the AI played against copies of itself for eight days.
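The self-play idea, stripped to its bare bones, can be demonstrated on a much simpler game. The toy sketch below uses regret matching on rock-paper-scissors: the agent repeatedly plays a copy of itself, tracks how much better each alternative action would have done, and its average strategy drifts toward one with no exploitable bias (for rock-paper-scissors, playing each move about a third of the time). Pluribus’s training is far more sophisticated and runs on a vastly larger game, so treat this purely as an illustration of learning through self-play.

```python
import random

ACTIONS = ("rock", "paper", "scissors")

def payoff(a: int, b: int) -> int:
    # +1 if action a beats action b, -1 if it loses, 0 on a tie
    return (0, 1, -1)[(a - b) % 3]

def strategy_from_regret(regret):
    # Regret matching: play each action in proportion to its positive regret
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / len(regret)] * len(regret)  # no regret yet: play uniformly

def train(iterations=100_000):
    regret = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(iterations):
        strat = strategy_from_regret(regret)
        # Self-play: the opponent is a copy using the same strategy
        my_action = random.choices(range(3), weights=strat)[0]
        opp_action = random.choices(range(3), weights=strat)[0]
        # Regret update: how much better each alternative would have done
        for a in range(3):
            regret[a] += payoff(a, opp_action) - payoff(my_action, opp_action)
        # Accumulate the average strategy, which is what converges
        for a in range(3):
            strategy_sum[a] += strat[a]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

if __name__ == "__main__":
    for name, prob in zip(ACTIONS, train()):
        print(f"{name}: {prob:.3f}")  # each should land near 0.333
```

The end result is the same kind of object Brown describes: a fixed strategy that no opponent can profitably adapt to, learned without ever seeing a human play.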

“I was one of the earliest players to test the bot, so I got to see its earlier versions,” professional poker player Darren Elias said in a statement provided by Facebook. “The bot went from being a beatable, mediocre player to competing with the best players in the world in a few weeks.” Another player, Jason Les, said: “Despite my best efforts, I was not successful in finding a way to exploit it.”

The AI’s ability to navigate hidden information and multiple opponents makes it promising for more practical applications. “If we want to deploy AI in the real world, it has to cope with those aspects of the world,” Brown says. “We’re taking a big step towards that direction.”

Poker has been a major benchmark for imperfect-information games (contests in which some information is hidden from the players) in the AI community since 1970, says Tuomas Sandholm, a professor of computer science at Carnegie Mellon and the senior author on the new study. “It’s very clear that a lot of real-world applications—not all, but a lot—are not two-player zero-sum games,” he says.

Think about multi-party negotiations or auctions as scenarios where an unbeatable poker-style bot would be a key asset for the party that deploys it.

Brown says the fact that the human players couldn’t find weaknesses in the poker algorithm is important if AI bots like this are ever to be used in the real world. “When you deploy an AI system on a large scale, if there are weaknesses in it, then somebody is going to find those weaknesses,” he says, “and you have to have an AI that’s able to be unexploitable.”