MIND OF THE MACHINE
NOVA: You were one of the attendees of the original Dartmouth Summer Research Conference on Artificial Intelligence in 1956. Back then, what was the dream? Was the goal really to build a human intelligence?
Marvin Minsky: Well, the goal was to build something that could do everything we do. [The English mathematician] Alan Turing was perhaps the first person to write intelligible articles about this. He discussed the most complicated processes that we know of and explained in a famous 1936 paper ["On Computable Numbers, With an Application to the Entscheidungsproblem"] the idea that you could build one machine that could imitate any other kind of machine, even one more complicated than it. That was the idea of the "universal Turing machine." So here is this great man in 1936 writing about what could happen in the next 100 years, and the rest of us later read this paper and said, "Let's be part of that."
When you say the goal is to tell a machine to do everything that we can do, what does that mean?
Well, a typical person goes through childhood, learns a language; some people learn two or three languages. That's a wonderful thing. Then they learn a profession. They get good at architecture or street cleaning or baseball or something like that, but nobody gets good at many things. The smartest person might be an expert in four or five fields. It's been estimated that to be an expert at something you have to know maybe 20,000 fragments of knowledge or skills. And you can only learn a few of those a day, so it takes a few thousand days to become an expert. But why can't a person learn a hundred fields or a thousand specialties? Why is everyone so limited?
So one of the ideas is maybe we could build a machine or some gadgets to add to our brains so that we wouldn't have to spend 10 years getting good at something. Rather we could spend five minutes getting good at 20 things.
A NARROWING FIELD
At that conference in 1956, you all thought we would probably have a true artificial intelligence within about 10 or 15 years, is that right?
Well, I think maybe 30 or 40 years, within a human lifetime, we thought maybe we would have machines that would be more or less as smart as a person. And I still think that could have happened.
My picture of what happened, at least in the United States and certainly in most other countries, is that this kind of progress of trying new experiments with computers kept happening in the 1960s and '70s and part of the '80s, but then things tightened up. The great laboratories somehow disappeared, economies became tighter, and companies had to make a profit—they couldn't start projects that would take 10 years to pay off.
"There aren't any machines that can do the commonsense reasoning that a four- or five-year-old child can do."
In the 1950s, '60s, and '70s, almost all of my students became professors teaching other students. But after the 1970s, almost none of my students became professors, because the universities in the United States were filled. Since 1950 the average lifespan in the developed countries has increased one year every four. It's 60 years since 1950, so people are living on average 15 years longer, including the professors. So today, in 2010, very few professors are retiring, and the students have no place to go. Basic research is sort of dying out because there are no new jobs.
That's sobering. But what you and other AI researchers have found is that it's actually pretty difficult to build intelligence, right?
How hard is it to build an intelligent machine? I don't think it's so hard, but that's my opinion, and I've written two books on how I think one should do it. The basic idea I promote is that you mustn't look for a magic bullet. You mustn't look for one wonderful way to solve all problems. Instead you want to look for 20 or 30 ways to solve different kinds of problems. And to build some kind of higher administrative device that figures out what kind of problem you have and what method to use.
Now, if you take any particular researcher today, it's very unlikely that that researcher is going to work on this architectural level of what the thinking machine should be like. Instead a typical researcher says, "I have a new way to use statistics to solve all problems." Or: "I have a new way to make a system that imitates evolution. It does trials and finds the things that work and remembers the things that don't and gets better that way." And another one says, "It's going to use formal logic and reasoning of a certain kind, and it will figure out everything." So each researcher today is likely to have one particular idea, and that researcher is trying to show that he or she can make a machine that will solve all problems in that way.
I think this is a disease that has spread through my profession. Each practitioner thinks there's one magic way to get a machine to be smart, and so they're all wasting their time in a sense. On the other hand, each of them is improving some particular method, so maybe someday in the near future, or maybe it's two generations away, someone else will come around and say, "Let's put all these together," and then it will be smart.
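The "higher administrative device" Minsky describes can be sketched in a few lines. This is purely illustrative, not any design of his: each specialized method comes with a recognizer that says whether it applies, and an administrator dispatches the problem to the first method that claims competence.

```python
# Hypothetical sketch of the "administrative device" idea: several
# specialized solvers, each paired with a recognizer, and a top-level
# administrator that picks which one fits the problem at hand.

def arithmetic_method(problem):
    # Handles problems posed as "a + b".
    a, b = problem.split("+")
    return int(a) + int(b)

def lookup_method(problem, facts={"capital of France": "Paris"}):
    # Handles problems that are answered by a stored fact.
    return facts[problem]

# Each entry: (recognizer, solver). Recognizers are deliberately crude.
METHODS = [
    (lambda p: "+" in p, arithmetic_method),
    (lambda p: p.startswith("capital of"), lookup_method),
]

def administrator(problem):
    """Dispatch to the first method whose recognizer matches."""
    for recognizes, method in METHODS:
        if recognizes(problem):
            return method(problem)
    raise ValueError("no method recognizes this problem")

print(administrator("2 + 3"))              # → 5
print(administrator("capital of France"))  # → Paris
```

The point of the sketch is only the shape of the architecture: intelligence would live less in any single solver than in the layer that knows which kind of problem it is facing.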
WHAT ABOUT WATSON?
"Watson," the computer that plays Jeopardy!, is doing very much what you described. Its creators at IBM are using formal logic and machine learning and databases—basically a kitchen-sink approach—to develop a computer that can answer questions about a wide variety of things. They wouldn't say that they are building an artificial intelligence but rather the best question-answering machine ever built.
There are some projects that have tried to do commonsense reasoning, but none of them can solve difficult problems yet, because each of them relies on one or another single kind of pattern matching. There aren't any machines that can do the commonsense reasoning that a four- or five-year-old child can do. No machine that I've heard of yet can answer a question that involves, for example, knowing that you can pull something with a string but you can't push something with a string—a simple thing like that.
But imagine a machine that's playing a game, and the category is "rhyme time." And the clue is: a politician's rant and a frothy dessert. And within two seconds, the machine comes back with "meringue harangue." Now to me that seems like magic. I mean, it's got to be smart, right?
Well, the average person only knows 20,000 words or so. In one-hundredth of a second, a modern computer can find all possible rhymes in those 20,000 words. Then maybe there are 20 other things it can do, like will a certain phrase connect with a certain year. And if you take about 20 of those, maybe you can answer most Jeopardy! questions. I don't know.
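Minsky's point about finding every rhyme in a 20,000-word vocabulary in a hundredth of a second comes down to indexing. A rough sketch, with the caveat that real rhyme detection uses phonetic endings (e.g. a pronouncing dictionary) rather than spelling; here the last three letters stand in as a crude approximation:

```python
# Why rhyme lookup is fast: index the vocabulary by word ending once,
# then each rhyme query is a single dictionary lookup rather than a
# scan of all 20,000 words.
from collections import defaultdict

def build_rhyme_index(vocab, suffix_len=3):
    index = defaultdict(list)
    for word in vocab:
        index[word[-suffix_len:]].append(word)
    return index

def find_rhymes(word, index, suffix_len=3):
    # Every indexed word sharing the ending, except the query itself.
    return [w for w in index[word[-suffix_len:]] if w != word]

# Toy vocabulary; a real system would load tens of thousands of words.
vocab = ["harangue", "meringue", "dessert", "assert", "politician"]
index = build_rhyme_index(vocab)
print(find_rhymes("harangue", index))  # → ['meringue']
```

With the index built in advance, answering a "rhyme time" clue is mostly a matter of intersecting this cheap lookup with a few other cheap filters, which is Minsky's point about why the feat is less magical than it looks.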
"It would be nice to have a machine that could make the next 10 string quartets that Beethoven didn't quite get to do."
But if we're impressed by somebody's program that plays Jeopardy!, then we have to ask, is this because it's taking a lot of data and doing something really stupid like the chess programs do, having no knowledge of chess itself but only knowing how to do, say, 20 of a certain kind of search and that's all there is to it? If that's the answer, then yes, ignorant people will be impressed, but people who understand how it works won't be impressed.
Now, the minute the Watson people publish a scientific paper saying how they did it, then we'll have something to discuss, because maybe some of us will say, "Yes, that is a good new idea, I'm really interested." Or, as in the case of chess programs, we'll say, "Now, I see, this is just another worthless, stupid trick that answers the kinds of questions that most people are interested in for no particular reason"—like what date did a certain baseball player make a certain kind of play. That doesn't require any intelligence to answer if you have the answer in a list.
But Watson has to understand the question, right? That's hard.
Well, you don't have to understand the question if just fitting and matching five keywords will give you an 80 percent chance of getting the answer without understanding either the question or the answer.
I have a good human example of this. My friend Joe Weizenbaum, who was one of the pioneers of AI, wrote a program that appeared to have a lot of common sense. It was called ELIZA, after the character in that wonderful [George Bernard] Shaw play Pygmalion. Joe said he got the idea because he had an aunt who was considered the wise woman of the neighborhood. People would come and tell her their problems—their daughter did this and that and some terrible thing happened and so forth—and Joe's aunt would listen. And after a while she'd say, "Yes, things like that happen." That's all she did that Joe could remember, but he noticed that it was this kind of reaction of appearing to understand that gave her this reputation in the neighborhood.
I hear what you're saying, but I think, Well, there are computer systems that can understand what I'm saying, ones that can answer questions, ones that are almost beginning to see, ones that can basically begin to move through the world. Is it possible that out of all of those we're going to get an artificial intelligence?
I don't think it will happen without a good architecture. It won't evolve from any particular program. It's a tough one. The problem is that there are some things that impress people, and there are some researchers who, for economic or other reasons, work on things that get an excited reaction from the public.
I'm afraid Watson is that, and I can't tell whether it will ever understand why you can't push something with a string. If you ask the average person why you can't push something with a string, he or she might find it very hard to explain that the string will bend and it won't transmit any force because when it comes to a curve the force will go off the end of the curve [laughs]. No one knows how to think about that.
What about music? Do you think that we'll ever have a computer that appreciates music?
There are a lot of good reasons why it would be nice to understand how music affects people and even to make machines that can produce music that affects them. For example, I like some things that Beethoven wrote toward the end of his life. There is a series of four or five string quartets in which he was getting new ideas, and three or four piano sonatas, the last ones he wrote. In the last sonata, he invents jazz in the second movement and shows some tricks you could do in the future with jazz. It was another 100 years before people like Thelonious Monk went further with that. Well, it would be nice to have a machine that could make the next 10 string quartets that Beethoven didn't quite get to do. And it would be nice to make machines that could listen to the old ones and see things that none of us can see.
I've heard about a French computer program that listened to all of Thelonious Monk and came up with new pieces in his style, and they were remarkably good given the fact that a computer was creating them. Probably it will get better and better at this, but when I hear something really beautiful, I tear up, I show a reaction. The question is: Will we ever have a machine that reacts the way we do?
I don't see why we should doubt that we can make machines that do anything that people do if we understand that people are very complicated machines. In the last 400 million years the nervous system has evolved, and as you can see in any neurology book, there are several hundred specialized organs. Now, when a person reacts to music, maybe 20 or 30 of these little computers are doing particular things, and we don't know what those things are, and maybe each of them is very complicated. Well, it's very hard to understand 20 or 30 complicated things at a time, but maybe someday we'll have a big computer for which that's child's play, and it will understand exactly why most people react to certain kinds of music in such and such a way, and it will say it's obvious.
"Something will seem very mysterious just because you're ignorant, not because it's terribly complicated."
But there's a difference between understanding why someone reacts that way and having the emotional response itself.
I feel emotional responses are much simpler than intellectual responses. It's an unfortunate thing that's happened in most human civilizations that people think that processes like getting angry or jealous or uncomfortable are more complicated than thinking of why triangles with three sides have to have three angles. This idea that emotions are profound and complicated I think comes from the fact that they're very hard to describe. But the reason they're very hard to describe is that they involve lots of rather simple processes that we don't know about. And something will seem very mysterious just because you're ignorant, not because it's terribly complicated.
Will there ever be a computer that laughs at Seinfeld?
The answer is probably somebody, some graduate student, will program a computer that only laughs at Seinfeld.
A NEW KIND OF INTELLIGENCE
So when do you think we'll see a true AI?
I think everything depends on the future of the economy. After World War II, in the 1950s, things looked very good from the vantage point of a young person who liked the idea of doing research. The universities were growing, the budgets were high. Every time you invented something it might increase productivity, and the world would get richer. Then something happened around 1980 that I don't understand, when things stopped growing and universities stopped expanding. I know in Taiwan, they started 20 or 30 new mathematics departments in the last decade. I don't think in the United States anything like that has happened.
And why is it so important to recreate human intelligence?
For most people it's not important. For people who have a larger view, one answer is that we may be the only intelligent species in the universe for all we know, and there is likely to be an accident. We know that in five billion years the sun will turn into a red giant and everything will be fried, so everything goes to waste. We also know that in a few trillion years the stars will go out and, in the current theory of physics, the universe will end. Now, we're not smart enough to fix this yet, but maybe if we were a little smarter we could.
By building a new intelligence?
By building a new kind of universe and jumping into it. Otherwise everything is a waste. So from this point of view everything that people do right now is worthless and useless, because it will just end without a trace. Making one of us or replacing us with something intelligent enough to fix the universe or make a better one and jump into it—that should be our main priority.
Now, if you don't live in the world of science fiction that might sound silly. But if you live in my world, everything else seems a little silly.