Android: The core of my writing is not art, but truth. Thus what I tell is the truth.
LUCKY SEVERSON, correspondent: This is the late great Philip K. Dick, the science-fiction writer who wrote so much about androids. Actually, he, this, it is an android. But this is 2010 technology. Artificial intelligence today is vastly more sophisticated. Max Tegmark is a physicist at MIT.
PROFESSOR MAX TEGMARK (Department of Physics, MIT): One thing is certain, and that is that the reason we humans have more power on this planet than tigers is not because we have sharper claws than tigers, or stronger muscles. It’s because we’re smarter. So if we create machines that are smarter than us there’s absolutely no guarantee that we’re going to stay in control.
SEVERSON: And that’s why Professor Tegmark and some of the world’s top scientists, including Stephen Hawking, are warning that “the development of full artificial intelligence could spell the end of the human race.”
TEGMARK: If we can ever build a machine that’s better than us humans at all cognitive tasks, then it’s also going to be better than us at making artificially intelligent machines, because that’s a cognitive task, too. So it can very rapidly figure out how to reprogram itself to get even smarter. That leaves the possibility that after a rather short amount of time it might not just be a little bit smarter than us but vastly smarter than us, much like we are vastly smarter than a snail.
SEVERSON: Tegmark says it’s now possible that within this century artificial intelligence will surpass human intelligence, an event called the “singularity,” a term made famous by the futurist Ray Kurzweil. The worry is that if mankind is not very careful, this runaway technology could rule, or perhaps ruin, the world. Wendell Wallach is a scholar at the Yale Interdisciplinary Center for Bioethics.
WENDELL WALLACH (Lecturer, Yale Interdisciplinary Center for Bioethics): If the technological singularity is truly possible, that will be one of the greatest crises humanity could confront, particularly in terms of whether we can manage or control that and exact benefits from it rather than, what shall I say, being turned into the house pets of superior beings.
TEGMARK: I remember when I was a kid, and they came out with the most powerful computer ever, the new Cray supercomputer. Well, I have a computer in my lab which is more powerful than that now: my phone.
SEVERSON: Computers have come so far so fast. In Japan President Obama played soccer with an android, and baby androids can be programmed to mimic a real baby. But the idea of robots isn’t new. What’s new is the intelligence itself.
ASIMO (humanoid robot): I am Asimo, a humanoid robot. It is a pleasure to meet you.
SEVERSON: And science is trying to determine if artificial intelligence can be instilled with consciousness.
WALLACH: Consciousness is one of the faculties that used to be talked about as a gift of the soul. Consciousness and moral acumen and intelligence, all these things that we are now looking for ways to perhaps reproduce within computational systems. We have theories about how we may proceed. There are scientific ideas about what consciousness is, what moral acumen is, what all these different capabilities are, but those experiments are still at a very early stage.
SEVERSON: Wallach does not believe singularity is as near as some scientists claim, but he doesn’t discount it. He is a co-author of a book called Moral Machines: Teaching Robots Right from Wrong.
WALLACH: I’m one of those people who is really fascinated about what we can actually imbue into artificial intelligence, which of our capabilities, which of our capacities can we actually imbue in them, and what we learn about human nature or how we humans function in the process of trying. I think we’ve learned that moral decision-making is much more complex than we even thought it to be.
PROFESSOR PAUL SCHERZ (School of Theology and Religious Studies, Catholic University): We’ve been challenged over and over again with what humanity can do with technology.
SEVERSON: Paul Scherz is a professor of moral theology and ethics at Catholic University who is skeptical that humans can pass morality along to machines.
SCHERZ: Where do we gain our sense of morality? We gain our sense of morality through community, through those around us, through our relationships. They’re the ones who teach us how to be good people.
SEVERSON: Another skeptic: Professor Gerard Mannion, the Amaturo Professor of Catholic Studies at Georgetown University.
PROFESSOR GERARD MANNION (Department of Theology, Georgetown University): For it to be morality they’d have to have free will, and they’d have to have every facet of consciousness that human beings have. That doesn’t seem to be on the cards just yet, and if you could program a machine with morality it would be a selective form of morality coming from the programmers.
WALLACH: So when you’re talking about programming morality, yes, whose morality? What morality? Do we even want machines making moral decisions? Do you want them—to put them in morally significant situations so they have to make decisions? And even if we agree what the decisions are, can we be sure that the machine will make the appropriate decision?
SEVERSON: Professor Tegmark says there is no law in physics that says man cannot program morality into artificial intelligence.
TEGMARK: I think it should absolutely be possible to make machines which have morality in the same way that we do. If you think about me as a big connection of quarks and electrons that compute, take information in, figure out what to do and then do it, there are certain computational circuits in me that embody my morality, my empathy and compassion for other people, my knowing right from wrong, and just like we can teach this to our children, we should be able to teach it to machines. So, yes, it’s doable, but do we know how to do it now? Not so much.
SEVERSON: So if man can instill morality into a machine, what about a soul, which has been described as the immortal essence of a living thing?
MANNION: You could do an awful lot that would make it appear as if it has a soul. You know, we already have certain robots and so on that seem to be able to mimic and replicate a great deal of what you would attribute to human character. Anyone who has an iPhone, you know, will talk to Siri on a daily basis and marvel at her intelligence.
Siri, do you have a soul?
Siri: I’ve never really thought about it.
SEVERSON: It’s the technology of the present that worries Yale’s Wendell Wallach: drones, self-driving automobiles, our ability to spy on one another. His new book is called A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.
WALLACH: I think that it’s accelerating in a way where our oversight is very poor, and we aren’t attending to the risks. So we are in danger of letting the risks overwhelm the benefits and losing control of technological development.
MANNION: There’s certainly already an increasing spread of artificial intelligence into our daily lives, and that is of concern. I mean you could say in some ways, you know, there’s a form of singularity that’s already long passed. We’ve taken the eye off the ball for how much technology has taken over in our society.
SEVERSON: Professor Tegmark heads the Future of Life Institute in Cambridge, whose goal is to make the most of artificial intelligence for the good of mankind.
TEGMARK: Artificial intelligence is the ultimate powerful technology because if we ever succeed in making machines that are much smarter than us and keep amplifying their own intelligence, there’s basically no limit to how much we can figure out with their help. There’s almost no limit to how awesome life can become flourishing in the cosmos. But there’s also no limit to the extent we could mess up.
MANNION: We need to encourage more ethical literacy across the board and ethical discernment at each stage of scientific and technological development. Before we allow a license for this technology to be rolled out and put out there, we need to ask: has due diligence been done on the ethical and social and legal implications of this?
TEGMARK: For any technology, there’s always a race between the power of the technology on one hand and the wisdom with which we manage the technology on the other hand. You always need to have the wisdom to win this race.
SEVERSON: It’s the wisdom and how to apply it that Max Tegmark and other physicists are working on. Regulations, if there are to be any, are down the road.
For Religion & Ethics NewsWeekly, I'm Lucky Severson in Cambridge, Massachusetts.