Google artificial intelligence beats champion at world's most complicated board game

By Nsikan Akpan | Science | Jan 27, 2016 2:40 PM EDT

An artificial intelligence program developed by researchers at Google can beat a human at the board game Go, which some consider the most complicated board game in existence. And this AI program, dubbed AlphaGo, didn't defeat just any ol' human, but the European Go champion Fan Hui, winning a tournament last October five games to nil. The findings, published today in the journal Nature, represent a major coup for machine learning algorithms.

"In a nutshell, by publishing this work as peer-reviewed research, we at Nature want to stimulate the debate about transparency in artificial intelligence," senior editor Tanguy Chouard said at a press briefing yesterday. "And this paper seems like the best occasion for this, as it goes, should I say, right at the heart of the mystery of what intelligence is."

Known as wéiqí in Chinese and baduk in Korean, Go originated in China over 2,500 years ago. The board consists of a 19-by-19 grid of intersecting lines. Two players take turns placing black and white stones on individual intersection points. Once placed, the stones can't be moved, but they can be captured: completely surround an opponent's stone or group of stones, and it is removed from the board. The ultimate objective is to control more than 50 percent of the board, but because the board is so intricate, the possible moves are nearly innumerable.
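That capture rule is simple enough to sketch in code. The snippet below is a minimal illustration, not anything from the paper: the board layout, the helper names and the flood-fill liberty count are all assumptions, and a real Go program would also need the ko and suicide rules, which are omitted here.

```python
# A minimal sketch of Go's capture rule (illustrative assumptions only;
# ko and suicide rules are omitted).

SIZE = 19

def neighbors(row, col):
    """Yield the orthogonally adjacent points that lie on the board."""
    for r, c in ((row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)):
        if 0 <= r < SIZE and 0 <= c < SIZE:
            yield r, c

def group_and_liberties(board, row, col):
    """Flood-fill the group of stones containing (row, col); return the
    group and the set of empty points (liberties) touching it."""
    color = board[row][col]
    group, liberties, frontier = set(), set(), [(row, col)]
    while frontier:
        r, c = frontier.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in neighbors(r, c):
            if board[nr][nc] is None:
                liberties.add((nr, nc))
            elif board[nr][nc] == color:
                frontier.append((nr, nc))
    return group, liberties

def play(board, row, col, color):
    """Place a stone, then remove any adjacent enemy group whose last
    liberty this move filled: the 'completely surrounded' capture."""
    board[row][col] = color
    for nr, nc in neighbors(row, col):
        if board[nr][nc] not in (None, color):
            group, liberties = group_and_liberties(board, nr, nc)
            if not liberties:
                for r, c in group:
                    board[r][c] = None  # captured stones leave the board

board = [[None] * SIZE for _ in range(SIZE)]
play(board, 0, 1, 'W')  # an edge stone with three liberties
play(board, 0, 0, 'B')
play(board, 0, 2, 'B')
play(board, 1, 1, 'B')  # fills white's last liberty; the stone is removed
assert board[0][1] is None
```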
"So Go is probably the most complex game ever devised by man. It has 10^170 possible board configurations, which is more than the number of atoms in the universe," said study author and AlphaGo co-developer Demis Hassabis of Google DeepMind.

Hassabis added that Go is much harder for computer programs to play than chess. At a typical moment in a chess game, a player has an average of 20 possible moves; in Go, it's 200.

To unpack this complexity, Google DeepMind created an artificial neural network, a web consisting of millions of computerized neurons. Actually, to be honest, they built two. The first, the policy network, predicts the next move. AlphaGo uses this network to narrow its search and consider only the moves most likely to lead to a win. It was trained on 30 million moves from games played by human experts, until it could predict the human's move 57 percent of the time; the previous record for a computer program was 44 percent.

The second, the value network, takes a shallower approach, estimating the likely winner from a given position rather than searching all the way to the end of the game, said co-developer David Silver of Google DeepMind.

"The policy network suggests intelligent moves to play, while the value network figures out who's going to win in each of those positions that's reached," Silver said. "AlphaGo looks ahead by playing out the remainder of the game in its imagination many times over."
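To see how the two networks fit together, here is a toy sketch of a search in which a "policy" cuts the breadth and a "value" cuts the depth. Nothing here is DeepMind's code: the game is a tiny subtraction game, the "networks" are hand-written stand-ins, and AlphaGo actually couples its networks to Monte Carlo tree search rather than this fixed-depth lookahead.

```python
# Toy stand-ins for a policy network and a value network, applied to a
# subtraction game: take 1-3 stones from a pile; taking the last one wins.

def toy_policy(pile):
    """Stand-in policy: rank legal moves, best guesses first. A real
    policy net outputs a probability for every legal move."""
    moves = [m for m in (1, 2, 3) if m <= pile]
    return sorted(moves, key=lambda m: (pile - m) % 4)  # prefer leaving 4k

def toy_value(pile):
    """Stand-in value: +1 if the player to move should win, else -1.
    A real value net returns a learned estimate, not a perfect score."""
    return -1.0 if pile % 4 == 0 else 1.0

def search(pile, width, depth):
    """Negamax that considers only the policy's top `width` moves
    (breadth cut) and asks the value function at the horizon instead of
    playing the game out to the end (depth cut)."""
    if pile == 0:
        return -1.0                    # previous player took the last stone
    if depth == 0:
        return toy_value(pile)         # value stand-in judges the position
    return max(-search(pile - m, width, depth - 1)
               for m in toy_policy(pile)[:width])

def best_move(pile, width=2, depth=4):
    return max(toy_policy(pile)[:width],
               key=lambda m: -search(pile - m, width, depth - 1))

print(best_move(10))  # prints 2: leaving a pile of 8 (a multiple of 4) wins
```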
The team's goal was to beat the best human players, not just mimic them. AlphaGo accomplishes this feat by discovering new strategies for itself, a method that contrasts with brute-force programming, wherein a computer tries all possible candidates for a solution. IBM's Deep Blue computer used brute force to beat chess champion Garry Kasparov in 1997.

"[AlphaGo] plays thousands and thousands of games between its neural networks, gradually improving them using a trial-and-error process known as reinforcement learning," Silver said.

By doing so, AlphaGo evaluated thousands of times fewer positions in its Go match against Hui than Deep Blue did in its chess match against Kasparov. In other words, machine learning trumped brute force in terms of efficiency.
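That self-play loop can also be sketched in miniature. The toy below is an assumption-laden stand-in, not AlphaGo's training code: a simple preference table plays the subtraction game from the earlier sketch against a copy of itself, and whatever moves ended up on the winning side get reinforced. This is the trial-and-error scheme Silver describes, minus the deep networks and the scale.

```python
# A tabular toy of self-play reinforcement learning (not AlphaGo's
# policy-gradient training, which updates network weights).
import random
from collections import defaultdict

prefs = defaultdict(lambda: defaultdict(float))  # prefs[pile][move] -> score

def pick(pile, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < explore:                # trial ...
        return random.choice(moves)
    return max(moves, key=lambda m: prefs[pile][m])

def self_play_game():
    """Play one game against a copy of the current policy; return each
    side's (pile, move) decisions and the winner."""
    pile, turn, history = random.randint(5, 20), 0, ([], [])
    while pile > 0:
        m = pick(pile)
        history[turn].append((pile, m))
        pile -= m
        turn = 1 - turn
    return history, 1 - turn   # the player who just moved took the last stone

for _ in range(20000):                           # ... and error, repeated
    history, winner = self_play_game()
    for player, decisions in enumerate(history):
        reward = 1.0 if player == winner else -1.0
        for pile, m in decisions:
            prefs[pile][m] += reward             # reinforce winning moves

print(pick(10, explore=0))  # after training, usually 2 (leaving 8 loses)
```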
"Following the chess match between Garry Kasparov and IBM's Deep Blue in 1996, the goal of some artificial intelligence researchers to beat the top human Go players was an outstanding challenge, perhaps the most difficult one in the realm of games," Jon Diamond, president of the British Go Association, said in a statement.

Now that AlphaGo has bested the European champion, its next challenge will be to face Lee Sedol, who is considered the world champion, in Seoul in March. Not to be outdone, Facebook announced earlier this morning that it is close to creating an artificial intelligence program of its own that can beat a human Go player.

By Nsikan Akpan. Nsikan Akpan is the digital science producer for PBS NewsHour and co-creator of the award-winning NewsHour digital series ScienceScope. @MoNscience