How do we invest in the future of humanity? Swedish philosopher Nick Bostrom explains

Economics correspondent Paul Solman and Swedish philosopher Nick Bostrom discuss existential threats such as nuclear winter and how the biggest threat to humanity may be what we don't yet know. Photo by Lucas Jackson/Reuters

Editor's note: Economics correspondent Paul Solman recently traveled to Oxford University's Future of Humanity Institute. And yes, there is an institute that studies only that — the future of the human species.

In PBS NewsHour's Thursday Making Sen$e report, Paul speaks with the institute's founding director Nick Bostrom, a Swedish philosopher known for his work on artificial intelligence and existential threats. You can watch Bostrom's TED talk on "superintelligence" — what happens when computers become smarter than humans — here.

At the Future of Humanity Institute, Bostrom leads a team trying to figure out how best to invest in, well, the future of humanity. That means identifying threats to the continued existence of Homo sapiens and figuring out how to reduce the likelihood of such events. Tonight's Making Sen$e report focuses on the need to invest in managing the evolution of artificial intelligence. But Paul and Bostrom discussed much more. Below is an excerpt of their conversation on how the institute decides which existential threats to study, and how the biggest threat to humanity may be what we don't yet know.

— Kristen Doerer, Making Sen$e editor


PAUL SOLMAN: If I care about future generations, 100,000 years from now, and there's some possibility that they won't exist, what should I invest in to give them the best chance of surviving and having a happy life the way I've had one?

NICK BOSTROM: What you should invest in is what we are trying to figure out, and it's a really difficult question. How can we trace out the links between actions that people take today and really long-term outcomes for humanity — outcomes that stretch out indefinitely into the future?

PAUL SOLMAN: And that's why [the institute] is called the Future of Humanity…

NICK BOSTROM: That's one of the reasons it's called that. So I call this effort macrostrategy — that is, to think about the really big strategic situation for having a positive impact on the long-term future. There's the butterfly effect: A small change in an initial condition could have arbitrarily large consequences. And it's hard enough to predict the economy two years from now, so how could we even begin to think about how your actions make a difference a million years from now? So there are some ideas that maybe bring the answer a little bit closer. One idea is this concept of existential risk. That helps focus our attention.

READ MORE: Do labor-saving robots spell doom for American workers?

PAUL SOLMAN: Nuclear winter — that is, the period of abnormal cold that would follow a nuclear war. That has been, in my lifetime, I think the most common existential threat that people have talked about.

NICK BOSTROM: Well, if you think that nuclear war poses a threat to the survival of our species, or even if you just think it would cause enormous destruction, then obviously we would look for ways to try to reduce the probability that there would be a nuclear war. So here you have to introduce a second consideration, which is how easy it is to actually make a difference to a particular risk.

So it is quite difficult for an individual to reduce the probability of a nuclear war, because there are big nations with big stockpiles and strong incentives and a lot of money and a lot of people who have worked on this for decades. So if you, as an individual, choose to join a disarmament campaign, it might make some difference, but a small one. So there might be other scenarios that have been more neglected, where maybe one extra person or one extra million dollars of research funding would make a proportionally larger difference. So you want to think: How big is the problem, and how much difference can you, on the margin, make to the degree to which the problem gets solved?

"So if there are big existential risks, I think they are going to come from our own activities and mostly from our own inventiveness and creativity."

PAUL SOLMAN: And one area that you yourself have been working on a lot is artificial intelligence, which you've called superintelligence. Is that an existential risk, do you think?

NICK BOSTROM: When I survey the possible things that could derail humanity's long-term future, I can roughly distinguish natural risks, such as volcanic eruptions, earthquakes and asteroids, from risks that arise from our own activity. It's pretty clear that all the really big risks to our survival are of the latter kind, anthropogenic. We've survived risks from nature for 100,000 years, right? So it's unlikely any of those things would do us in within the next 100 years. Whereas, in the next century, we will be inventing radical new technologies — machine intelligence, perhaps nanotech, great advances in synthetic biology and other things we haven't even thought of yet. And those new powers will unlock wonderful opportunities, but they might also bring with them certain risks. And we have no track record of surviving those risks. So if there are big existential risks, I think they are going to come from our own activities and mostly from our own inventiveness and creativity.

PAUL SOLMAN: What are the greatest of those risks?

NICK BOSTROM: I think the greatest existential risks over the coming decades or century arise from certain anticipated technological breakthroughs that we might make, in particular machine superintelligence, nanotechnology and synthetic biology. I think each of these has an enormous potential for improving the human condition by helping cure disease, poverty, etcetera. But one could also imagine them being misused to create very powerful weapon systems, or even, in some cases, leading to some kind of accidental destructive scenario, where we suddenly are in possession of a technology that's far more powerful than we are able to control or use wisely.

READ MORE: Why humanity is essential to the future of artificial intelligence

PAUL SOLMAN: How would you rank them in terms of the danger?

NICK BOSTROM: Biotech, synthetic biology and AI, I think, are near the top. I would also add the unknown. Suppose you had asked me this question 100 years ago: What are the biggest existential risks? At that time, nobody would have mentioned AI; they didn't have computers, and it wasn't even a concept. Nobody had heard of nanotechnology or synthetic biology or even nuclear weapons, right? Likewise, 100 years from now, there are likely to be other threats that we haven't yet thought of.