Big Blue Wins
JIM LEHRER: Now, a chess upset for the ages and to Paul Solman.
ANNOUNCER: Oh, Deep Blue–Kasparov has resigned!
PAUL SOLMAN: Yesterday, in the sixth and final round of man versus machine, the rematch, machine didn’t just beat man but trounced him, as IBM’s Deep Blue computer beat world champion Garry Kasparov.
GARRY KASPAROV, Chess World Champion: I’m ashamed by what I did at the end of this match, but so be it.
PAUL SOLMAN: Yesterday’s loss in New York comes little more than a year after Kasparov beat Deep Blue in a six-game match in Philadelphia, three to one with two draws. In this year’s rematch, Kasparov was even with the new improved super duper supercomputer going into yesterday’s contest. But the champ was clearly shaken–by game two, which he should have played to a draw, but mistakenly resigned instead; by game five, a draw the computer forced, though Kasparov had the advantage; and by the computer’s seeming ability to play human-like strategies. After game five, to some, Kasparov sounded desperate.
GARRY KASPAROV: I’m not afraid to admit that I’m afraid. And I’m not even afraid to say why I’m afraid, because sometimes, you know, it definitely goes beyond any known chess program in the world. You know, it makes decisions that still cannot be made by any computer, and facing such a challenge with no preparation–no preparation before the match, I have to be extremely cautious.
PAUL SOLMAN: Sunday’s showdown proved to be no contest. In fact, Kasparov made an early blunder that shocked experts. After just an hour of playing, instead of the usual four or so, Kasparov resigned the game and, thus, the match. At a post-game press conference Kasparov sounded bitter and said another rematch would prove he could beat any machine.
GARRY KASPAROV: I think it’s time for Deep Blue to start playing real chess. And I personally assure you, everybody here, that if Deep Blue will start playing competitive chess, I personally guarantee you I’ll torn [tear] it in pieces with no question.
PAUL SOLMAN: C. J. Tan, leader of the IBM team, savored the victory.
C. J. TAN, Deep Blue Programmer: It visibly shows the world that technology–what technology can do for man and how far we have been able to push technology.
PAUL SOLMAN: Kasparov says he wants a neutral party to sponsor a future contest. The Deep Blue team says it’s considering his challenge.
JIM LEHRER: More now on this victory and to Margaret Warner.
MARGARET WARNER: And now the smaller and larger meanings of this match. Frederic Friedel is Garry Kasparov’s technical adviser and a computer chess expert. Daniel Dennett teaches philosophy at Tufts University. He wrote about computer intelligence in his book Darwin’s Dangerous Idea. And Hubert Dreyfus is a philosophy professor at the University of California at Berkeley. He’s the author of the book What Computers Still Can’t Do. Welcome, gentlemen, to all of you.
Frederic Friedel, why do you think Garry Kasparov lost this match?
FREDERIC FRIEDEL, Kasparov Adviser: I think that he just didn’t stand up to the pressure of the situation. The situation was very unusual for him. For 20 years he’s been playing chess against human beings of flesh and blood. Here, the opponent was completely invisible. It was backed up by a team of engineers and programmers. And every day we heard of new grand masters who had been on the team, so in some ways he cracked in the end.
MARGARET WARNER: Is that what he meant when he said, “Well, I’m a human being, and when I see something that’s well beyond my understanding, I’m afraid?”
FREDERIC FRIEDEL: What he meant was that there were certain phases of the game which he just didn’t understand. We had most of it analyzed quite well–game one, game three, game four. But game two he didn’t understand. And it played very heavily on his mind. And I think game two lost the match for him.
MARGARET WARNER: And that’s the one where a lot of experts said afterwards that he cracked too soon–that he missed a move he could have made?
FREDERIC FRIEDEL: Well, that was an additional thing that right in the end he resigned because he assumed a computer that’s playing so well would have calculated everything and the game is lost. It looked very lost. And 200 people in the auditorium and 20 grand masters noticed nothing. And then at 1 o’clock in the morning or 2 o’clock in the morning we discovered with a computer that it is a draw. He could have played on and drawn the game.
MARGARET WARNER: Hubert Dreyfus, what do you think is the significance of this? There’d been a lot of commentary about it. “Newsweek” Magazine called it the “brain’s last stand.” What do you see as the significance of this outcome?
HUBERT DREYFUS, University of California, Berkeley: Well, I think that’s a lot of hype, that it’s the brain’s last stand. It’s a significant achievement all right for the use of computers to rapidly calculate in a domain–and this is the important thing–completely separate from everyday human experience. It has no significance at all, as far as the question: will computers become intelligent like us in the world that we’re in? The reason the computer could win at chess–and everybody knew that eventually computers would win at chess–is because chess is a completely isolated domain. It doesn’t connect up with the rest of human life, therefore, like arithmetic, it’s completely formalizable, and you could, in principle, exhaust all the possibilities. And in that case, a fast enough computer can run through enough of these calculable possibilities to see a winning strategy or to see a move toward a winning strategy. But the way our everyday life is, we don’t have a formal world, and we can’t exhaust the possibilities and run through them. So what this shows is in a world in which calculation is possible, brute force meaningless calculation, the computer will always beat people, but when–in a world in which relevance and intelligence play a crucial role and meaning in concrete situations, the computer has always behaved miserably, and there’s no reason to think that that will change with this victory.
MARGARET WARNER: Daniel Dennett, what do you see as the significance? And respond, if you would, to Mr. Dreyfus’s critique.
DANIEL DENNETT, Tufts University: Certainly. It seems to me that right now is a time for the skeptics to start moving the goal posts. And I think Bert Dreyfus is doing just that. A hundred and fifty years ago Edgar Allan Poe was sure in his bones that no machine could ever play chess, and only 30 years ago so was Hubert Dreyfus, and he said so in the earlier edition of his book. Then he’s changed his mind, and, as he says, it’s–this is really no surprise. People in the computer world have known for a couple of decades that this–this day was going to happen. Now it’s happened. I think that the idea that Professor Dreyfus has that there’s something special about the informal world is an interesting idea, but we just have to wait and see. The idea that there’s something special about human intuition that is not capturable in the computer program is a sort of illusion, I think, when people talk about intuition. It’s just because they don’t know how something’s done. If we didn’t know how Deep Blue did what it did, we’d be very impressed with its intuitive powers, and we don’t know how people live in the informal world very well. And as we learn more about it, we’ll probably be able to reproduce that in a computer as well.
MARGARET WARNER: Mr. Dreyfus, do you think he’s right that perhaps we don’t–still just don’t completely understand what it is that humans do when they think, as we think of thinking?
HUBERT DREYFUS: I think that we don’t fully understand it in the sense that Dan Dennett and people in the AI community mean, if I fully understand.
MARGARET WARNER: By AI you mean artificial intelligence.
HUBERT DREYFUS: Right. That is, we don’t–we are not able to analyze it in terms of context-free features and rules for manipulating those features. But I don’t think that’s just a limitation of our current knowledge. That’s where I differ with Dan. There is something about the everyday world which is tied up with the kind of being we are. We’ve got bodies, and we move around in this world, and the way that world is organized is in terms of our implicit understanding of things like we move forward more easily than backward, and we have to move toward a goal, and we have to overcome obstacles. Those aren’t facts that we understand. We understand that just by the way we are, like we understand that insults make us angry. You can state those as facts. But I think there’s a whole underlying domain of what we are as emotional embodied beings which you can’t completely articulate as facts and which underlies our ability to make sense of facts and our ability to find any facts relevant at all. Can I say one word about this–
MARGARET WARNER: Please.
HUBERT DREYFUS: –this story. I never said that computers couldn’t play chess. I’ve got a quote here. I said, “In ’65, still no computer can play even amateur chess.” That was a report on what was going on in 1965. I’ve had to put up for 35 years with this story that I said computers could never play chess. In fact, I said from the beginning it’s a formal game, and of course, computers could play, in principle, could play, world champion chess.
MARGARET WARNER: All right. Let me bring Mr. Friedel back in here. Mr. Friedel, did Garry Kasparov think the computer was thinking?
FREDERIC FRIEDEL: Not thinking but that it was showing intelligent behavior. When Garry Kasparov plays against the computer, he has the feeling that it is forming plans; it understands strategy; it’s trying to trick him; it’s blocking his ideas, and then to tell him, now, this has nothing to do with intelligence, it’s just number crunching, seems very semantic to him. He says the performance is what counts. I see it behaves like something that’s intelligent. If you put–if you put a curtain up, he plays the game and then you open the curtain, and it’s a human being. He says, ah, that was intelligent, and if it’s a box, he says, no, that was just number crunching. It’s the performance he’s interested in.
MARGARET WARNER: Daniel Dennett, I know you’re not a chess expert, but I mean, do you feel that in this situation the computer was thinking in the way that Mr. Friedel said Garry Kasparov thought it was, I mean, that it was somehow independently making judgments? I’m probably using the wrong terminology here.
DANIEL DENNETT: No. I think that’s fine. I think that Kasparov has put his finger on it too. It’s the performance that counts. And Kasparov is not kidding himself when he sees–when he confronts Deep Blue and feels that Deep Blue is, indeed, parrying his threats and recognizing what they are and trying to trick him, this is an entirely appropriate way to deal with that. And if Professor Dreyfus–
MARGARET WARNER: But do you think it was capable of trying to trick Kasparov?
DANIEL DENNETT: Certainly.
MARGARET WARNER: And Mr. Dreyfus, your view on that.
HUBERT DREYFUS: No. I think it was brute force, but the important thing is I’m willing to say, okay, it’s the performance that counts. But it’s performance in a completely circumscribed, formal domain, where mere meaningless calculation can produce performance full of trickery. It’s not performance in the everyday world.
MARGARET WARNER: Daniel Dennett, briefly in the time we have left, where do you think we are in the continuum of developing computer intelligence–are we halfway there, at 50 percent?
DANIEL DENNETT: No. I don’t think that’s the right way to look at it. In fact, Deep Blue and chess programming in general are a sort of offshoot of the most interesting work in artificial intelligence, and largely for the reasons that Bert Dreyfus says. I think the most interesting work is the work that, for instance, Rodney Brooks and his colleagues and I are doing at MIT with the humanoid robot Cog, and as Dreyfus says–you’ve got to be embodied to live in a world, to develop real intelligence, and Cog does have a body. That’s why Cog is a robot. Now, if Bert will tell us what Cog can never do and promise in advance that he won’t move the goal posts and he won’t say, well, this wasn’t done in the right style, so it doesn’t count, if he’ll just give us a few tasks that are now and forever beyond the capacity of Cog, then we’ll have a new test.
MARGARET WARNER: All right. We have just a few seconds. Mr. Dreyfus, give us two tasks it’ll never be capable of, very quickly.
HUBERT DREYFUS: Okay. If Cog is programmed as a symbolic rule-using robot and not as a brain-imitating robot, it won’t be able to understand natural language. There’s no reason why a computer that’s simulating the way the neurons in the brain work won’t be intelligent. I’m talking about how what’s called symbolic manipulation won’t be intelligent.
MARGARET WARNER: All right. Thanks. We have to leave it there, but we’ll return–