This has been a week where concerns over the rapidly expanding use of artificial intelligence resonated loudly in Washington and around the world. Geoffrey Hinton, one of the leading voices in the field of AI, announced he was quitting Google over his worries about what AI could eventually lead to if unchecked. Hinton discussed those concerns with Geoff Bennett.
Geoff Bennett:

This has been a week where concerns over the rapidly expanding use of artificial intelligence resonated loudly in Washington and around the world.
Vice President Kamala Harris met yesterday with top executives from companies leading A.I. development: Microsoft, Google, OpenAI, and Anthropic.
The vice president discussed some of the growing risks and told the companies they had a — quote — "moral obligation" to develop A.I. safely. That meeting came just days after one of the leading voices in the field of A.I., Dr. Geoffrey Hinton, announced he was quitting Google over his worries about the future of A.I. and what it could eventually lead to unchecked.
We're going to hear about some of those concerns now with Dr. Geoffrey Hinton, who joins me from London.
Thank you for being with us. And what are you free to express now about artificial intelligence that you couldn't express freely when you were employed by Google?
Geoffrey Hinton, Artificial Intelligence Pioneer:
It wasn't that I couldn't express it freely when I was employed by Google.
It's that, inevitably, if you work for a company, you tend to self-censor. You tend to think about the impact it will have on the company. I want to be able to talk about what I now perceive as the risks of superintelligent A.I. without having to think about the impact on Google.
Geoff Bennett:

What are those risks, as you see them?
Geoffrey Hinton:

There are quite a few different risks. There's the risk of producing a lot of fake news, so nobody knows what's true anymore.
There's the risk of encouraging polarization by getting people to click on things that make them indignant. There's the risk of putting people out of work. It should be that when we make things more productive, when we greatly increase productivity, it helps everybody.
But there's the worry that it might just help the rich. And then there's the risk that I want to talk about. Many other people talk about those other risks, including risks of bias and discrimination and so on.
I want to talk about a different risk, which is the risk of superintelligent A.I. taking over control from people.
Geoff Bennett:

Well, how do the two compare, human or biological intelligence and machine intelligence?
Geoffrey Hinton:

That's a very good question. And I have quite a long answer.
Biological intelligence has evolved to use very little power, so we only use 30 watts. And we have huge numbers of connections, like 100 trillion connections between neurons. And learning consists of changing the strength of those connections.
The digital intelligence we have been creating uses a lot of power, like a megawatt when you're training it. It has far fewer connections, only a trillion, but it can learn much, much more than any one person knows, which suggests that it's a better learning algorithm than what the brain has got.
Geoff Bennett:

Well, what would smarter-than-human A.I. systems do? What's the concern that you have?
Geoffrey Hinton:

Well, the question is, what's going to motivate them?
Because they could easily manipulate us if they wanted to. Imagine yourself and a 2-year-old child. You could ask it, do you want the peas or the cauliflower? Well, the 2-year-old child doesn't realize it doesn't actually have to have either.
We know, for example, that you can invade a building in Washington without ever going there yourself, by just manipulating people. But imagine something that was much better at manipulating people than any of our current politicians.
Geoff Bennett:

I suppose the question is, then, why would A.I. want to do that? Wouldn't that require some form of sentience?
Geoffrey Hinton:

Let's not get confused with the issue of sentience. I have a lot to say about sentience, but I don't want to confuse the issue with it.
Let me give you one example of why it might want to do that. So, suppose you're getting an A.I. to do something, you give it a goal. And you also give it the ability to create subgoals. So, like, if you want to get to the airport, you create a subgoal of getting a taxi or something to get you to the airport.
Now, one thing it will notice quite quickly is that there's a subgoal that, if you can achieve it, makes it easier to achieve all the other goals that you have been given by people. And the subgoal that makes it easier is: get more control, get more power. The more power you have, the easier it is to get things done.
So, there's the alignment worry that we give it a perfectly reasonable goal, and it decides that, well, in order to achieve that, I'm going to get myself a lot more power. And because it's much smarter than us, and because it's trained on everything people have ever done (it's read every novel that ever was, it's read Machiavelli, it knows a lot about how to manipulate people), there's the worry that it might start manipulating us into giving it more power, and we might not have a clue what's going on.
Geoff Bennett:

When you were at the forefront of this technology decades ago, what did you think it might do? What were the applications that you had in mind at the time?
Geoffrey Hinton:

There are a huge number of very good applications, and that's why it would be a big mistake to stop developing this stuff.
It's going to be tremendously useful in medicine. For example, would you rather see a family doctor that has seen a few thousand patients or a family doctor that has seen a few hundred million patients, including many with the same rare disease you have? You can make much better doctors this way. Eric Topol has been talking about that recently.
You can make better nanotechnology for solar panels. You can predict floods. You can predict earthquakes. You can do tremendous good with this.
Geoff Bennett:

Is the problem, then, the technology, or is the problem the people behind it?
Geoffrey Hinton:

It's the combination.
Obviously, many of the organizations developing this technology are defense departments. And defense departments don't necessarily want to build in "be nice to people" as the first rule. Some defense departments would like to build in "kill people of a particular kind."
So we can't expect them all to have good intentions towards all people.
Geoff Bennett:

There is the question of what to do about it.
This technology is advancing far more quickly than governments and societies can keep pace with. I mean, the capabilities leap forward every few months, while writing legislation, passing legislation, and coming up with international treaties takes years.
Geoffrey Hinton:

So I have gone public to try and encourage much more resources and many more creative scientists to get into this area. I think it's an area in which we can actually have international collaboration, because the machines taking over is a threat for everybody. It's a threat for the Chinese and for the Americans and for the Europeans, just like a global nuclear war was.
And for a global nuclear war, people did actually collaborate to reduce the chances of it.
Geoff Bennett:

There are other experts in the field of A.I. who say that the concerns you're raising about this dystopian future distract from the very real and immediate risks posed by artificial intelligence, some of which you mentioned: misinformation, fraud, discrimination.
How do you respond to that criticism?
Geoffrey Hinton:

Yes, I don't want to distract from those. I think they're very important concerns, and we should be working on those too.
I just want to add this other existential threat of it taking over. And one reason I want to do that is because that's an area in which I think we can get international collaboration.
Geoff Bennett:

Is there any turning back? When you say that there will come a time when A.I. is more intelligent than us, is there any coming back from that?
Geoffrey Hinton:

I don't know.
We're entering a time of great uncertainty, where we're dealing with kinds of things we have never dealt with before. It's as if aliens have landed, but we didn't really take it in because they speak good English.
Geoff Bennett:

How should we think differently, then, about artificial intelligence?
Geoffrey Hinton:

We should realize that we're probably going to get things more intelligent than us quite soon. And they will be wonderful. They will be able to do all sorts of things very easily that we find very difficult. So there's huge positive potential in these things.
But, of course, there are also huge negative possibilities. And I think we should put more or less equal resources into developing A.I. to make it much more powerful and into figuring out how to keep it under control and how to minimize its bad side effects.
Geoff Bennett:

Dr. Geoffrey Hinton, thanks so much for your time and for sharing your insights with us.
Geoffrey Hinton:

Thank you for inviting me.
Geoff Bennett serves as co-anchor of PBS NewsHour. He also serves as an NBC News and MSNBC political contributor.