
Technology’s Dark Side

May 10, 2000 at 12:00 AM EST

TRANSCRIPT

RAY SUAREZ: Have you been watching the breathtaking pace of technological change and wondering, where does it all lead? Will humankind control the things it creates or end up controlled by them? Since the Industrial Revolution, critics have asked that question, but recently readers of “Wired” magazine saw worries about the inventions of the future coming from a very unexpected source, one of the leading lights of the digital revolution, William Joy of Sun Microsystems, in a cover story called “Why the Future Doesn’t Need Us.” William Joy joins us now. Very provocative title; what are you suggesting is now possible?

WILLIAM JOY, Chief Scientist, Sun Microsystems: Well, my job is to look at the future for Sun, and I’ve been looking at the technologies that are emerging in the 21st century, and in particular some of these technologies – genetic engineering, nanotech, and robotics – have enormous potential, but along with that enormous potential they carry a hidden danger of a kind that I don’t think we’ve ever faced before. And this concerns me a lot, and so I decided to write this story.

RAY SUAREZ: A lot of people have probably seen the terms genetic modification or genetic engineering, and certainly robotics has been in the news recently, but what’s nanotech?

WILLIAM JOY: Well, just as with genetic engineering you can manipulate the genome, the sequence of letters in our DNA, if you will, and bring it into a computer, as we’ve been doing in the Human Genome Project, with nanotechnology you can take something that is built out of any material, out of atoms, make a kind of blueprint for it, and imagine manufacturing almost anything at the atomic scale. Instead of hitting things to knock them apart, carving things down to what we want, you build things from the bottom up with atoms. And this has the potential of making almost everything manufacturable at extremely low cost.

RAY SUAREZ: A lot of people answering those critics of rapid technological change have said, “Look, don’t worry. These machines, these processes, are essentially dumb. They can’t do these things themselves; they can only do what we teach them or tell them to do, so why worry?”

WILLIAM JOY: Well, the situation is different. I think in general, discovering new things and these new technologies bring enormous benefits and progress; we’re creative, and we can invent things and solve most of the problems. But with these new technologies you can create things which, if they’re released into the environment, make more copies of themselves. You can imagine genetically altered viruses, or a little nanomachine that gets loose and makes more copies of itself, or even a robot species which might then evolve on its own. Such things could be created by individuals using extremely powerful computers, and after they’re released into the environment, the release might be irreversible. It might be impossible to recall them; it’s as hard to imagine getting rid of them as getting rid of mosquitoes. So we have a special kind of danger. Even the nuclear threat, I don’t think, had this kind of danger in it, because bombs are made out of uranium and other materials that are hard to get, and if you have a nuclear bomb, you can only blow it up once. If you release something into the environment and it makes more copies of itself, it could be a problem that you can never get rid of.
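The distinction Joy draws here is essentially arithmetic: a bomb’s damage is fixed at the moment of detonation, while a released replicator’s numbers grow geometrically, generation after generation. A minimal sketch of that asymmetry (the doubling rate and generation counts are illustrative assumptions, not figures Joy gives):

```python
# Toy model (not from the interview): the asymmetry Joy describes between a
# one-time destructive event and a self-replicating agent released into the
# environment. Doubling rate and generation counts are illustrative assumptions.

def one_time_event() -> int:
    """A bomb detonates once; the damage, however large, does not grow."""
    return 1  # a single, non-repeating event

def self_replicator(generations: int, copies_per_generation: int = 2) -> int:
    """A released replicator that copies itself every generation."""
    population = 1
    for _ in range(generations):
        population *= copies_per_generation
    return population

if __name__ == "__main__":
    print("one-time event:", one_time_event())
    for g in (10, 20, 30):
        print(f"replicator after {g} generations: {self_replicator(g):,}")
    # replicator after 10 generations: 1,024
    # replicator after 20 generations: 1,048,576
    # replicator after 30 generations: 1,073,741,824
```

Even at a modest doubling rate, thirty generations turn a single copy into more than a billion; that geometric growth is why Joy says such a release “might be irreversible.”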

RAY SUAREZ: Some of this stuff sounds like what drove science fiction movies in the ’50s, the idea that there was a dark side to technology. What are you suggesting we do?

WILLIAM JOY: Well, it’s a very difficult problem. If you believe that these technologies will be widely available, that they will be democratized the way the personal computer was, so that everybody has access to designing these things on their own personal computer, and that the things are incredibly destructive, then I think we’ve created a situation of extreme peril. So we have to somehow prevent everyone from having the ability to create these kinds of massively destructive things in the 21st century, and this is something that scientists and technologists are going to have to take ethical responsibility for. We can’t simply release things that are beyond our ability to control, things of such incredible power.

RAY SUAREZ: But some recent writers have talked about knowledge as being almost like a virus, something that you can’t control or bottle up. To hear somebody who’s been an architect, a developer of some of the things that we take for granted in the modern world talk about controlling it, or controlling access to it, is pretty surprising.

WILLIAM JOY: Well, on one side we have the current track we’re on, where we’ll develop all this knowledge and the knowledge will be available to everyone; people will have machines, personal computers, say, a million times more powerful than they have today, so you could take the DNA sequence, which we’re just writing down now, and figure out everything. But that gives you the ability to create your own disease, if you will, because you have a complete map… it would be like in “Mission: Impossible.” You have maps of the target, right? You know everything about it. If we let anybody do that… there are crazy people in the world. What happened in the 20th century was we had terrible things like nuclear weapons, but only a couple of nation states had enough nuclear weapons to destroy civilization. The loose-nuke problem is dangerous enough, but we wouldn’t even begin to conceive of giving technologies of this power to everybody. And in an information age, information is itself a weapon, essentially, because the designs for weapons, or for diseases, or for nanomachines, or for robots are just software. We have a problem of figuring out how to control that so that we don’t put ourselves in an insane and unacceptable situation.
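The “million times more powerful” figure squares with simple doubling arithmetic, assuming performance doubles roughly every 18 months over the 30 years to the 2030 horizon mentioned later in the interview; the doubling period is an assumption of this sketch, not something Joy states:

```python
# Back-of-the-envelope check of the "million times more powerful" figure.
# Assumption (not stated in the interview): performance doubles every
# 18 months, the classic Moore's-law cadence, over the 30 years to 2030.

doubling_period_years = 1.5
horizon_years = 30

doublings = horizon_years / doubling_period_years  # 20 doublings
speedup = 2 ** doublings                           # 2^20
print(f"{doublings:.0f} doublings -> ~{speedup:,.0f}x faster")
# 20 doublings -> ~1,048,576x faster, i.e., about a million times
```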

RAY SUAREZ: Well, all the way over on one extreme is doing nothing; just let everybody know everything they can know. Out on the other extreme is totally slamming on the brakes until we figure out what the next right thing to do is.

WILLIAM JOY: Right.

RAY SUAREZ: So where do we get the wisdom to figure out where we draw the line in a way that makes sense?

WILLIAM JOY: Well, I think the first thing we have to do is judge what the risk is if we do nothing. And competent people have looked at this and said the risk of extinction may be in the range of 30% to 50%. And that, to me, is a number that just knocks me out of my chair. It’s beyond completely unacceptable; I don’t know how else to describe it. On the other hand, completely slamming on the brakes is never going to happen, so that wouldn’t help at all. I think that science operates in a community. The scientific community has to stand up and recognize that because we’re in an information age, we face a different kind of threat, and we have to take ethical responsibility for the use of the tools, the information, and the knowledge that we are generating; otherwise we’re complicit in their evil uses.

RAY SUAREZ: Have you gotten some criticism, some negative feedback about suggesting that we should step more carefully into that future?

WILLIAM JOY: Surprisingly little. The best thing for me would be if someone stood up and told me I was completely wrong, and why. Unfortunately, that hasn’t happened, because this seems to be such a difficult problem. I’ve gotten e-mail from Nobel laureates; I’ve gotten letters from people in all walks of life who’ve read the “Wired” article in very, very strong support, and some beginnings of a dialogue. But frankly, from what I’ve seen so far, if we talk about this some, I think we can move the discussion very quickly to what our choices are, because there doesn’t seem to be much controversy here. The people who are proposing nanotechnology freely admit the dangers. The people who are proposing robotics freely discuss the dangers. In the case of genetics, that hasn’t been so much the case. But these things all share the fact that they’re information-based, that the designs are fundamentally software, that the tools to make these things are getting cheaper, and that they all have this self-replicating, amplifying quality, so that an individual act can create irreversible harm.

RAY SUAREZ: Have we gotten to a point that’s significantly different in just the past couple of years, one that called on you to make this point? Have we moved so quickly, so far, so fast, that something that wouldn’t have occurred to you a couple of years ago had to be said now?

WILLIAM JOY: Yeah. Actually, I ran into Ray Kurzweil, who’s a famous inventor, in a bar at a conference, and he told me he thought we were going to have intelligent robots by 2030, and he gave me a preprint of his book. When I read the book, I discovered that he quoted the Unabomber, Ted Kaczynski, who is, you know, a criminal psychopath, right? But when I read the quote, I didn’t realize the words were Kaczynski’s; I thought they were Kurzweil’s, and on the surface they didn’t seem insane. And then later a physicist told me that he thought nanoelectronics, a form of nanotechnology, would happen, which I hadn’t thought could happen; in fact, ten years earlier he had told me the reasons why he didn’t think it would happen. The combination of these two things led me to believe what Ray said, that we could have intelligent machines. Then I went back and looked at the threat from nano, and I went back and looked at where we are with genetics, and I saw a common pattern: the empowerment of extreme individuals, and self-replication. These two factors, among a number of others that make the problem worse, are themselves so scary that I felt I had to write an article and explain this to people, because it’s extremely concerning.

RAY SUAREZ: Bill Joy, thanks a lot for coming by.

WILLIAM JOY: My pleasure.