The list of games in which humans still hold dominion is ever-dwindling. The latest casualty? Six-player Texas hold ’em poker, which the artificial intelligence (AI) Pluribus has mastered, according to a paper published this week in the journal Science.
Over the course of 20,000 games of online poker, Pluribus repeatedly trounced its human opponents, including fifteen of the world’s top professionals.
Pluribus, the brainchild of Facebook’s Noam Brown and Carnegie Mellon University’s Tuomas Sandholm, certainly isn’t the first game-playing AI to best a human—but its achievement stands out for a couple of reasons. Most of the games in which AIs and people have been pitted against one another involve two players, both of whom have complete knowledge of what’s happening in the game (such as when all the playing pieces are clearly displayed on a board in front of them, as in chess). Texas hold ’em, on the other hand, is multiplayer, and hinges on withholding information about which cards each player holds, versus which ones remain in the deck.
Games like poker are also tricky because of the sheer number of possible moves that can be made during gameplay, Tristan Cazenave of Paris Dauphine University, who was not involved in the study, told Donna Lu at New Scientist. Such complexity gives these games a level of unpredictability that, in a way, is more representative of the ambiguities of real-world decision making—something that AIs have yet to navigate with human-like dexterity.
To whip Pluribus into shape, Brown and Sandholm subjected the program to rigorous rounds of training in which the AI repeatedly played against itself, honing its technique through trial and error. The pair also carefully designed Pluribus to predict its opponents’ next couple of moves—a strategy that gave the AI an edge without forcing it to calculate every possibility through the end of the game, which would have quickly gotten computationally expensive. With these parameters in place, Pluribus’ bootcamp took only eight days, in a process that could be replicated at a cost of just $150, Brown and Sandholm report.
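The core idea of learning through self-play can be seen in miniature. Pluribus’ actual training uses a far more sophisticated algorithm (a form of Monte Carlo counterfactual regret minimization with depth-limited search), but the toy sketch below—all names and parameters are illustrative, not from the paper—shows the same trial-and-error loop on rock-paper-scissors: the program plays itself, tracks how much it regrets not having chosen each alternative move, and gradually shifts toward a strategy that can’t be exploited.

```python
import random

# Toy self-play via regret matching on rock-paper-scissors.
# This only illustrates the "learn by playing against yourself"
# idea; it is not Pluribus's actual algorithm.

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[a][b]: payoff to a player choosing a against b.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive regret."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positives]

def train(iterations, seed=0):
    rng = random.Random(seed)
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
        # Both "players" sample from the current strategy: self-play.
        mine = rng.choices(range(ACTIONS), weights=strat)[0]
        opp = rng.choices(range(ACTIONS), weights=strat)[0]
        # Regret: how much better each alternative would have done.
        for a in range(ACTIONS):
            regrets[a] += PAYOFF[a][opp] - PAYOFF[mine][opp]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = train(100_000)
```

After enough iterations the averaged strategy drifts toward playing each move about a third of the time—the unexploitable strategy for this game—without ever being told the answer, only by regretting its own past choices.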
The researchers then sent Pluribus off to the races—in the form of an online poker room where it played 10,000 games against five human opponents, and 10,000 more where a single human opponent faced five copies of the AI. Though Pluribus didn’t emerge victorious every time, it played well enough that, had real money been at stake, the program would have raked in about $1,000 an hour.
Brown and Sandholm attribute part of this winning streak to Pluribus’ unpredictable gameplay. The AI was able to bluff and switch up its strategy; it also bet boldly where human players might have shrunk from the stress, Gizmodo’s George Dvorsky reports. The winning combo paid off.
“Whenever playing the bot, I feel like I pick up something new to incorporate into my game,” poker pro Jimmy Chou said in a statement released to the press. “As humans, I think we tend to oversimplify the game for ourselves, making strategies easier to adopt and remember. The bot doesn’t take any of these shortcuts.”
Clearly, Pluribus is impressive—but it remains to be seen how transferable its skillset really is. Brown and Sandholm call the AI’s accomplishments “superhuman,” and hope to adapt their methodology to applications in cybersecurity, fraud detection, self-driving cars, and other domains that require complex problem-solving.
But artificial intelligence has yet to match—let alone beat—humans in countless other behavioral arenas. Games don’t capture a lot of what’s difficult about living in the real world. To get by, humans have to break rules, collaborate, innovate, and take unexpected developments in stride. Pluribus may now be a poker pro, but that doesn’t guarantee that true intelligence is in the cards.