Tech luminaries and scientists have been worried for years about the existential consequences of artificial intelligence for the human race. Philosopher Nick Bostrom of Oxford University's Future of Humanity Institute thinks money ought to be invested in how to manage machine superintelligence that could one day surpass us -- or even wipe us out. Economics correspondent Paul Solman reports.
Now: the fears around the development of artificial intelligence.
Computer superintelligence is a long, long way from the stuff of sci-fi movies, but several high-profile leaders and thinkers have been worrying quite publicly about what they see as the risks to come.
Our economics correspondent, Paul Solman, explores that. It's part of his weekly series, Making Sense.
I want to talk to you about the greatest scientific event in the history of man.
Are you building an A.I.?
A.I., artificial intelligence.
Do you think I might be switched off?
It's not up to me.
Why is it up to anyone?
Some version of this scenario has had prominent tech luminaries and scientists worried for years.
In 2014, cosmologist Stephen Hawking told the BBC:
STEPHEN HAWKING, Scientist (through computer voice):
I think the development of full artificial intelligence could spell the end of the human race.
And just this week, Tesla and SpaceX entrepreneur Elon Musk told the National Governors Association:
ELON MUSK, CEO, Tesla Motors:
A.I. is a fundamental existential risk for human civilization. And I don't think people fully appreciate that.
OK, but what's the economics angle? Well, at Oxford University's Future of Humanity Institute, founding director Nick Bostrom leads a team trying to figure out how best to invest in, well, the future of humanity.
NICK BOSTROM, Director, Future of Humanity Institute: We are in this very peculiar situation of looking back at the history of our species, 100,000 years old, and now finding ourselves just before the threshold to what looks like it will be this transition to some post-human era of superintelligence that can colonize the universe, and then maybe last for billions of years.
For many years, the philosopher Bostrom has been perhaps the most prominent thinker about the benefits and dangers to humanity of what he calls superintelligence.
Once there is superintelligence, the fate of humanity may depend on what that superintelligence does.
There are plenty of ways to invest in humanity, he says, giving money to anti-disease charities, for example.
But Bostrom thinks longer-term, about investing to lessen existential risks, those that threaten to wipe out the human species entirely. Global warming might be one. But plenty of other people are worrying about that, he says. So, he thinks about other risks.
What are the greatest of those risks?
The greatest existential risks arise from certain anticipated technological breakthroughs that we might make, in particular, machine superintelligence, nanotechnology, and synthetic biology, fundamentally because we don't have the ability to uninvent anything that we invent.
We don't, as a human civilization, have the ability to put the genie back into the bottle. Once something has been published, then we are stuck with that knowledge.
So Bostrom wants money invested in how to manage A.I.
Specifically on the question, if and when in the future you could build machines that were really smart, maybe superintelligent, smarter than humans, how could you then ensure that you could control what those machines do, that they were beneficial, that they were aligned with human intentions?
How likely is it that machines would develop basically a mind of their own, which is what you're saying, right?
I do think that advanced A.I., including superintelligence, is a sort of portal through which humanity will have passage, assuming we don't destroy ourselves prematurely in some other way.
Right now, the human brain is where it's at. It's the source of almost all of the technologies we have.
I'm relieved to hear that.
And the complex social organization we have.
It's why the modern condition is so different from the way that the chimpanzees live.
It's all through the human brain's ability to discover and communicate. But there is no reason to think that human intelligence is anywhere near the greatest possible level of intelligence that could exist, that we are sort of the smartest possible species.
I think, rather, that we are the stupidest possible species that is capable of creating technological civilization.
And capable of creating technology that has begun to surpass us, first in chess, then in "Jeopardy," now in the supposedly impossible game for a machine to win, Go.
This is just task-oriented software, some have argued, and not really intelligence at all. But whatever you call it, there will be enormous benefits, says Bostrom.
On the other hand, if we approach real intelligence, it could also become a threat. Think of "Ex Machina" or "The Matrix" or Elon Musk's fantasy fear this week about advanced A.I.
Well, it could start a war by creating fake news and spoofing e-mail accounts and fake press releases, and just by, you know, manipulating information. The pen is mightier than the sword.
So, this is going to be a cat-and-mouse game between us and the intelligence?
That would be one model. One line of attack is to try to leverage the A.I.'s intelligence to learn what it is that we value and what we want it to do.
In order to protect ourselves from what could be a truly existential risk.
So, how do you get the greatest good for the greatest number of present and future human beings? It might be to invest now in controlling the evolution of artificial intelligence.
For the PBS NewsHour, this is economics correspondent Paul Solman, reporting from Oxford, England.