Gary Marcus is an experimental psychologist and the Director of the Center for Language and Music at NYU, where he studies what babies and toddlers know about language and music, and how they come to know it. His most recent book, Guitar Zero: The New Musician and the Science of Learning, was a New York Times bestseller, and he blogs regularly for The New Yorker.
The Future of Artificial Intelligence
Let’s cut to the chase. How soon until we’re sharing the earth with robots?
There are different levels of answering that question. We’re already cohabiting the planet with very sophisticated computers that we call cell phones or smartphones. Smartphones are already completely cohabiting with us, and they have cameras and microphones, and unsavory people can collect data from you at any time, which is kind of a disturbing fact. The only thing that a cell phone doesn’t have is wheels.
We will very soon be surrounded by drones, which are a form of robot. They are going to be able to act autonomously, to fly around, film us, maybe to help us. That’s gonna happen in the next five years. These drones, which are likely to be annoying, are like little quadcopters that fly around, collecting data. There will be arguments in the Senate about what restrictions should be placed on them and so forth. There have already been some arguments about that. They are autonomous, computing creatures with sensation and the ability to act. Drones aren’t really our equals. They’re our betters at flight, but they’re not gonna be our conversational partners and they’re not gonna be as smart as we are.
I suppose the question you’re wondering is, when are we going to have robots that are more like our equals? Eventually, there will be robots that may take the form of humanoids. They may take the form of drones – they can take all sorts of different forms – they will be as smart as us; they will be able to do not just very particular things like play a good game of chess, but will be able to do almost anything as well as people. That might take 50 or 100 years but it will happen.
How does this future look compared to Spike Jonze’s film Her?
There are some things in that movie that aren’t that far away. They had video games with that sort of 3D, hologram stuff. That kind of stuff already exists in the lab. It’s not very far off. There’s something called Oculus Rift, which is basically a helmet you put on your head that provides totally immersive 3D virtual reality, and it will be out very soon.
The level of language that the character Samantha had is way beyond what any robot or AI can do right now. Siri is not too far from the state of the art. Most of us have chatted with Siri, and it’s not really like having a conversation. You can ask it a few questions here and there. For some of them it has the answer, and for lots of them it does not.
Samantha is basically able to talk about anything, and we don’t yet know how to build machines with that level of versatility. Some think we will in 20 years. I don’t think we will. From my analysis of what the problems are and what hasn’t been solved, it’s probably more like 30 or 40 or 50 years. The other thing that we don’t yet have that Samantha the character had is a real insight into how human beings work. It wasn’t just that she understood everything [Theodore] said, but that she was able to interact socially, to understand his needs, his desires, what made him happy, what made him sad. Nobody to my knowledge has gotten that far in the AI of human understanding. I think it’s a solvable problem. I think it can be engineered. I put it more on the 50-year horizon.
In the short term, machines don’t have enough common sense to know what we’re talking about or what we’re feeling. People have been avoiding common sense. They’re using big data techniques to find statistical correlations. But that’s not enough to get an abstract understanding of the world so that you can talk about any topic in a flexible way. Right now, I don’t think people are making that much progress towards that. I think there will probably be a period of another five years where people get everything they can out of big data. Then they will say, “Well we’re still not there yet, what do we need to do?”
And then what? Back to the drawing board?
I don’t think we’ll have to go back to the drawing board, but there are some tough problems that people have been avoiding. In terms of common sense reasoning, you just need a lot of very carefully developed information. People are looking for quick-fixes. There’s only so far we can get with the machine learning that churns through all the big data. We’re gonna need a more serious reckoning with how it is that human beings can make complicated inferences. My own field of cognitive science might be able to contribute to AI, by looking at the things that people still do better than machines.
Should we be afraid of the robots?
I’m not sure that we should be afraid, but I’m not sure that we shouldn’t. There are different arguments that people pass back and forth for why the machines might get out of control or why they might not get out of control. My take is we don’t have a proof yet that they won’t. We have some reason to think that the stories that people tell are fanciful, but we don’t have a proof that they are. My opinion is that we should start investing sooner rather than later in trying to develop technologies that will make sure that the robots don’t cause us harm.
You tackle issues like artificial intelligence on your New Yorker blog. Is it different communicating with a broad audience, compared to, say, writing for your scientific peers in a scholarly journal?
What I like about writing for the New Yorker is that I can take on the deep theoretical questions of the day in different fields of science and address the scientists themselves and bring the public along for the ride.
I think that most of the deep questions – not all of them – can be explained, if you really work at it, in 1,500 words. Not really answered, but explained.
People spend too much time in scholarly journals talking only to their three closest friends or three closest colleagues. The really important ideas in science are ones that we should be communicating to other scientists in other fields. I find that as soon as you start trying to explain what you’re doing to another smart scientist who is in a different field, you might as well bring the public along for the ride. So, in recent years I’ve tried to write as much as I can for as wide an audience as I can, even when I’m talking about the most sophisticated theoretical concepts.