An artificial intelligence program developed by researchers at Google can beat a human at the board game Go, which some consider to be the most complicated board game in existence. And this AI program — dubbed AlphaGo — didn’t defeat any ol’ human, but the European Go champion Fan Hui in a tournament last October by five games to nil. The findings, published today in the journal Nature, represent a major coup for machine learning algorithms.
“In a nutshell, by publishing this work as peer-reviewed research, we at Nature want to stimulate the debate about transparency in artificial intelligence,” senior editor Tanguy Chouard said at a press briefing yesterday. “And this paper seems like the best occasion for this, as it goes- should I say, right at the heart of the mystery of what intelligence is.”
Known as wéiqí in Chinese and baduk in Korean, Go originated in China over 2,500 years ago. The board consists of a 19-by-19 grid of intersecting lines. Two players take turns placing black and white stones on individual intersection points. Once placed, the stones can’t be moved, but they can be captured by completely surrounding an opponent’s stones. The ultimate objective is to control more than 50 percent of the board, but since the board is so intricate, the number of possible moves is enormous.
“So Go is probably the most complex game ever devised by man. It has 10^170 possible board configurations, which is more than the number of atoms in the universe,” said study author and AlphaGo co-developer Demis Hassabis of Google DeepMind.
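That 10^170 figure can be sanity-checked with a quick back-of-the-envelope calculation (this estimate is ours, not from the paper): each of the board’s 361 intersections is either empty, black, or white, so 3^361 is an upper bound on board states, and the count of strictly legal positions lands just below it.

```python
from math import log10

# Rough upper bound on Go board states: 3 possible contents per
# intersection, 361 intersections on a 19x19 board.
intersections = 19 * 19                   # 361 points
digits = round(intersections * log10(3))  # number of digits in 3^361
print(digits)                             # → 172, vs. roughly 80 digits
                                          #   for atoms in the universe
```

The quoted 10^170 counts only legal positions (boards where every group has a liberty), which is why it sits a couple of orders of magnitude below this crude bound.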
Read more about Google DeepMind and how scientists created an artificial neural network.
Read the full transcript of this segment below:
GWEN IFILL: It’s not the first time a computer program has defeated a human in a big match, but researchers say today they have made a breakthrough in the quest to develop artificial intelligence.
Hari Sreenivasan has the story from our New York studios.
HARI SREENIVASAN: Scientists have developed a program that can win at the Chinese board game Go. The object of the game is to surround and control more area of the board with markers than your opponent.
It sounds simple enough, but Go is considered an extremely complicated game. Google developed a program known as AlphaGo to defeat the top human player. The results were published in the scientific journal “Nature.”
Here’s an excerpt of a video about why the team at Google chose Go. It was produced by Nature and Google.
MAN: It’s a very ancient game. It’s probably the most complex game humans play. There’s more configurations of the board than there are atoms in the universe.
MAN: In the game of Go, we need this amazingly complex, intuitive machinery, which people previously thought was only possible within the human brain, to even have an idea of who’s ahead and what the right move is that we should play there.
DEMIS HASSABIS, Google DeepMind: So, our program combines lots of different techniques together. And for the first time ever, AlphaGo, our program, beat a professional player on Go on even stones, so no handicap, on a full-size Go board.
So, I played chess from very young. And that was my first game that I fell in love with.
But then, when I got to Cambridge for my undergrad, there was a very strong Go club there with very active players, lots of mathematicians. It seems to appeal to mathematicians, the game.
MAN: Demis was a very, very strong game player. He’s a child prodigy and one of the strongest chess players in the world.
And it was very important to some people in the company that we could actually beat Demis at something. And I decided that the game of Go was my only chance, because Demis didn’t really know the game at that time.
DEMIS HASSABIS: From then on, I fell in love with the game. And since then, I have always thought it would be a great challenge for computers to be able to play such an aesthetic game, an intuitive game like Go, a much greater challenge than it was to play chess.
MAN: You look at the board, there are hundreds of different places that this stone can be placed down and hundreds of different ways that white can respond to each one of those moves, hundreds of different ways that black can respond in turn to white’s moves.
And you get this enormous search tree with hundreds times hundreds times hundreds of possibilities. And, in fact, the search space of Go is too enormous and too vast for brute force approaches to have any chance of succeeding. And as a result, A.I. researchers have to turn to more interesting approaches than brute force search, which are perhaps more human-like in the way that they deal with the position.
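The explosion the researcher describes is easy to quantify with a short sketch (the branching factors and game lengths below are commonly cited approximations, not figures from the broadcast):

```python
from math import log10

def tree_digits(branching, depth):
    """Digits in branching**depth: the order of magnitude of a full game tree."""
    return round(depth * log10(branching))

# Rough averages: chess offers ~35 legal moves per turn over ~80 plies;
# Go offers ~250 moves per turn over ~150 moves.
print(tree_digits(35, 80))    # → 124  (chess: on the order of 10^124 lines of play)
print(tree_digits(250, 150))  # → 360  (Go: on the order of 10^360)
```

Even granting chess’s already astronomical tree, Go’s is larger by more than 200 orders of magnitude, which is why exhaustive search is a non-starter and more selective, human-like evaluation is required.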
DEMIS HASSABIS: It’s a very intuitive game, so if you ask a great Go player how it is that they decided on a move, they will often tell you it felt right.
HARI SREENIVASAN: Our science correspondent, Miles O’Brien, joins me now from Boston to help us understand more.
So, Miles, we have heard of computers beating humans at chess. We have heard of computers beating video games. Why is it so significant that a computer is getting better and better and beating us at this particular game?
MILES O’BRIEN: Because it is so hard.
This is one of those games that if you saw somebody playing it, you would say, well, that’s not a lot more than glorified checkers. But it has literally trillions of possibilities when you start trying to play out the game. If you go back to 1997, when the IBM Deep Blue machine beat Garry Kasparov, a significant event in the world of artificial intelligence, that actually was brute force, Hari.
Each and every move, the machine would play out every countermove subsequent to that, all the way until the end of the game. It had the ability to do that because, in chess, there’s a limited number of choices. In the case of Go, even though it’s just white stones and black stones, there are just way too many potential choices, hundreds upon hundreds each turn.
And, of course, that builds, and eventually that will choke the computer, actually very quickly. What they did was they figured a way to kind of break down the problem and make the computer smart enough to beat a top Go player.
HARI SREENIVASAN: OK. This is coming from Google. Just a couple of days ago, we heard an announcement from Facebook.
Google, Facebook, Alibaba, these are all big tech companies. Why do they have teams of people working on artificial intelligence? Why does this matter to them?
MILES O’BRIEN: Well, you know what they say. Always follow the money, Hari.
The first and foremost thing, the short term here is to make sure you see the ad which in sometimes creepy ways is exactly what you’re looking for when you are browsing online. That is built on pattern recognition, artificial intelligence.
That’s why the Netflix suggestions are so apt, and that’s why what you see on Amazon seems to fit your needs for the moment. The better they get at that, the better their business model is for the present.
Medium term — and this is where we can talk about IBM’s Watson machine, which was so successful playing “Jeopardy” in 2011 — you could get to a situation where doctors could lean on computers with artificial intelligence using this pattern recognition, machine learning to do diagnoses.
Long term, what Google is talking about today is the idea that we would have essentially computer scientists with enough intelligence to be side by side with human scientists, pushing breakthroughs.
And then it gets into a very interesting world.
HARI SREENIVASAN: You have said that that’s long term and that that long-term future is what also scares some very smart people about artificial intelligence, of what its place is in society and whether we will become subservient to it.
MILES O’BRIEN: All of those things you see in the movies are, frankly, possibilities, if we don’t watch it.
I was talking to one of the leaders in the field today, Ray Kurzweil, and he was talking about the possibility, he thinks, around 2029, when machines will pass the so-called Turing test, which means you will be able to engage in a conversation, a human conversation with the machine, not knowing that it’s a machine at all.
When machines get to that point, when they can read and comprehend language like we do, then it turns into a whole new realm of sophistication and intelligence and ability to think in a machine context.
It is important — and this has been raised by the likes of Bill Gates and Elon Musk and Stephen Hawking — it’s important that we make sure, as this technology grows, and it grows quickly, as we’re seeing, that we keep control of it somehow, because, ultimately, these machines are going to be smarter than we are.
HARI SREENIVASAN: The conversation about artificial intelligence especially right now couldn’t happen without mentioning Marvin Minsky, one of the leaders in this field who just passed away.
I’m assuming that there are proteges of his at every one of these big tech companies. He probably knew that this happened.
MILES O’BRIEN: Yes, he was one of the fathers of this world, and helped coin the term artificial intelligence, legendary thinker in this regard.
And I remember interviewing him, Hari, back in 2010 for a piece we were doing on robotics and artificial intelligence. And what really struck me was this one statement. He said the things that are simple for us are hard for computers. And what’s hard for us is easy for computers.
This idea, which seems so unpredictable until you think about it, is that the commonsense things a 3-year-old knows intuitively are very difficult to teach to a computer, and vice versa.
And that — that interesting kind of paradox is what drives a lot of the thinking in artificial intelligence to this day. He was a great man and a great visionary.
HARI SREENIVASAN: All right, Miles O’Brien, thanks so much for joining us.
MILES O’BRIEN: You’re welcome, Hari.