12.13.2024

Are We Ready for the AI Revolution? Fmr. Google CEO Eric Schmidt Says No


BIANNA GOLODRYGA, ANCHOR: Well, we turn now, as a new year of rapid technological advancement approaches, all eyes are on A.I. It’s what everyone’s been talking about. And the new bestselling book, “Genesis: Artificial Intelligence, Hope, and the Human Spirit,” is putting the new technology under the microscope, taking a look at how it could help us and how it could stop us from hurting ourselves. Co-author and former Google CEO Eric Schmidt joins Walter Isaacson to discuss.

(BEGIN VIDEOTAPE)

WALTER ISAACSON, CO-HOST, AMANPOUR AND CO.: Thank you, Bianna. And, Eric Schmidt, welcome back to the show.

ERIC SCHMIDT, CO-FOUNDER, SCHMIDT FUTURES AND FORMER CEO AND CHAIRMAN, GOOGLE: Thank you, Walter. It’s great to see you.

ISAACSON: This new book, which you wrote with the late Dr. Henry Kissinger and Craig Mundie, a longtime Microsoft executive, is about how we’re supposed to handle A.I., but it’s even more philosophical. You say it’s a question of human survival that you’re addressing. Why do you say that, and why did you call it “Genesis”?

SCHMIDT: We believe, in particular, Dr. Kissinger believed, that the A.I. revolution is of the scale of the Reformation, that right now we’re used to being the top dog, if you will, and we determine reason, we determine outcomes. And with the A.I. revolution, there’s always a danger that we’re going to be the dog to their computers. In other words, they’re going to tell us what to do. And the book is really a statement of how important human dignity is. The ability to think and to be free, to not be subject to surveillance, all of the things that are possible on the downside of the A.I. revolution. We also spent a lot of time talking about what A.I. can do. And I’ll give you a simple example. In a few years, all of us believe that you’ll have in your pocket 90 percent of Leonardo da Vinci, something you know a lot about, 90 percent of the greatest physicists, the greatest chemists. What will it be like when each and every one of us has that kind of discovery capability? In our book, we start by talking about polymaths, something, again, you know a lot about, because of your previous writing. Polymaths are really important. They change the course of history. Well, what happens when everyone has their own polymath? We’re just not ready as a society for the implications of this powerful arrival of intelligence.

ISAACSON: Well, one of the things you say, because you’re generally a techno optimist, is that sometimes you worry we aren’t going fast enough. What do you mean by that?

SCHMIDT: Well, we have to start by all the things that this A.I. intelligence will provide, much faster cures for diseases. How do you want to solve climate change? You need new energy. You need A.I. to do that. Universal doctors, universal teachers, getting every human on the planet to their top potential, making businesses far more efficient, which means more profits, but also more growth, more jobs and so forth. All of those are going to happen, and they’re going to happen very, very quickly. And the downsides are also quite serious. The ability to do cyber-attacks, nation state tensions, misinformation. One of my theories is that most of what you see politically now is because everyone’s online and they’ve all found their special tribal groups and they’ve all decided that they all believe the same thing, even though reality is much more subtle than what each individual seems to believe.

ISAACSON: Do you think this should be left to the technologists like you and Craig Mundie?

SCHMIDT: Well, Dr. Kissinger got started in this almost 10 years ago because he was very clear that people like myself should not be making these decisions. Let’s look at social media. We’ve now arrived at a situation where we have these huge companies, which I was part of, and they all have this huge positive implication for entertainment and culture, but they have significant negative implications in terms of tribalism, misinformation, individual harm, especially against young people, and especially against young women. None of us foresaw that. Maybe if we’d had some non-technical people doing this with us, we would have foreseen the impact on society. I don’t want us to make that mistake again with a much more powerful tool.

ISAACSON: You talk about social media and you say that people who are technologists, you were at Google, didn’t really foresee some of the downsides. My colleague, Hari Sreenivasan, has been doing this a lot on this show, talking about the algorithms and how the algorithms incent depression sometimes, incent enragement, not just engagement. Is it baked into the algorithms? And if so, should we hold social media accountable for that?

SCHMIDT: We should. And it’s simple to understand. I am personally very strongly in favor of human speech, including human speech, which is terrible and I don’t agree with. That’s my personal view. But I’m not in favor of computer algorithm speech being the same thing. What they’re doing is boosting based on an algorithm. So, let’s imagine, you and I found a company and we’re perfect. We have no biases whatsoever, but we want to maximize revenue. Well, the best way to maximize revenue is to maximize engagement. And the best way to do that is outrage. Even if you and I, as well-meaning as we can be, no bias, no — we want to be truthful and all of that, our system will produce these holes, these cubicles, these caves that people will end up with.

ISAACSON: Do you think we should get rid of what’s sometimes called Section 230 protections, which is that part of the law that says you don’t hold a platform accountable for what gets posted and maybe even amplified?

SCHMIDT: You know, Section 230 was passed in roughly 1994. So, it’s about 30 years old. We had no idea that the internet would be used for this. And so, we simply asked — and I was part of it at the time. We just wanted an exemption for technology and content we didn’t own. That doesn’t make sense anymore. There needs to be restrictions on Section 230 for the worst cases. I’m talking about things where there’s real harm, harm to people, especially the young people, we have to change it.

ISAACSON: Would you count harm to democracy in that list?

SCHMIDT: I think democracies are being harmed by the tribalism and by the misinformation, but I doubt we’re going to come to an agreement, certainly not in the U.S., but also in many other countries as to what truth is. So, I think the best way that we can handle social media is basically to say that if there’s real harm, this thing has to get stopped. If it’s a case where I tell you one thing and you say another and it’s an open debate, that’s probably not going to destroy democracy.

ISAACSON: Tell me how these issues, these problems, are increased as we move from social media, meaning, you know, networks like X or Facebook, to A.I.

SCHMIDT: There are two really big things happening right now in our industry. One is the development of what are called agents, where agents can do something. So, you can say, I want to build a house. So, you find the architect, go through the land use, buy the house. This can all be done by computer, not just by humans. And then, the other thing is the ability for the computer to write code. So, if I say to you, I want to sort of study the audience for this show and I want you to figure out how to make a variant of my show for each and every person who’s watching it, the computer can do that. That’s how powerful the programming capabilities of A.I. are. We’ve — I mean, in my case, I’ve managed programmers my whole life and they typically don’t do what I want. You know, they do whatever they want. But with a computer, it’ll do exactly what you say. And the gains in computer programming from the A.I. systems are frightening. They’re both enticing because they will change the slope. Right now, the slope of A.I. is like this. And when you have A.I. scientists, that is, computers developing A.I., the slope will go like this, it’ll go wham, right? But that development puts an awful lot of power in the hands of an awful lot of people.

ISAACSON: Let me ask this in a very broad way. We often talk about a duty of care that corporations have, others have. What is the duty of care that you think A.I. companies should do?

SCHMIDT: Well, I’m all in favor of A.I. companies inventing this new future, and I understand that they will make mistakes and there will be some initial harm, right? Some bad thing will happen. The secret is not that something bad happens, but that it doesn’t happen again. When we were running Google — and they’re now doing other things — we had a rule that if anything happened in the morning, we would fix it by noon, right? We were on it. And I think that kind of active management of social media, and of A.I. in general for consumer products, is going to be crucial.

ISAACSON: You and I first discussed this, I think, with Dr. Kissinger when we were all in China four or five years ago, I think it was. How do you think since then the Chinese are progressing on A.I.? And one of the things we talked about was they have restrictions on free speech. Is that going to help them or hurt them in this regard?

SCHMIDT: Well, they believe that those restrictions help them. And obviously, that’s horrific. It’s a violation of the sort of liberal western order. But I can’t fix that. When I was in China with Dr. Kissinger a year and a half ago, I was quite convinced that China was about two years behind us. It looks like, unfortunately, I was wrong. And even with all of the chip restrictions that we put in, which President Trump put in and President Biden put in as well, the Chinese have gotten very close to our top models. Now, you sit there and you go, why is this important? Because these are models that can show planning. They can begin to do physics. They can do math. These models are now at the level of graduate students in math and physics, right? True in China and in the United States. So, China clearly understands the value of having what is generally called general intelligence. As it applies to its national security, to its business goals, to its societal goals, and to the surveillance that is characteristic of the state. The west needs to win that battle. It’s really important that the systems that we use reflect American and western liberal values, such as freedom of thought, freedom of expression, the dignity of all the people involved. I’m very, very worried that in this contest they’re now so focused that they’re not only catching up, but that they will catch up. And remember that the country or the company that develops the system that is smarter than any human in the world, this is called super intelligence, can then apply that to itself to get smarter and smarter and smarter. There are people who believe that such a system, when it appears, and we believe it will appear probably within the next decade, will give that country or company an asymmetric, powerful monopoly for decades to come. We just don’t know.

ISAACSON: This fear that China is catching up and may soon surpass us in A.I., is that an argument to not put too many regulations and restrictions in the U.S. on the development of A.I.?

SCHMIDT: In America, I think based on what President Trump has said, any existing restrictions are likely to be eliminated. In China, they are also moving so quickly. Their only restrictions are done after the fact. So, basically, you can do whatever you want to, but if you do something really bad, they will come and arrest you. So, it’s done that way. It’s very important right now in America to allow this innovation to occur during this critical time as quickly as we can. Now, I know people say, oh, that’s terrible. That means my privacy will be violated. We’ll deal with that if it happens. But right now, the sense of destiny that my industry has, that somehow we’re building something larger than ourselves, that the arrival of this intelligence that I’m discussing is so much more powerful than people appreciate, means that we have to do it. I will tell you, by the way, I don’t think western democratic systems are ready for this. There are huge implications for this, wealth distribution, access, privacy, all the things that everyone talks about, but let’s make sure we win. I do not want to have China win this one ahead of us. It’s too important.

ISAACSON: Yes, you talk about the danger of too many regulations. Well, now you have the Trump administration coming in. You have David Sacks, who’s very much of a, you know, techno-progressive, pushing for technology, a very close friend of Elon Musk. How do you think Trump, David Sacks and others will be looking at A.I.?

SCHMIDT: I’m assuming that they’re going to follow a laissez-faire, no-regulation approach. The president has indicated that he’s not going to continue some of the A.I. regulations that were put in place by President Biden. So, my prediction will be it will start with no regulation, but that there will be a major project within the next administration to understand the China versus U.S. national security issues of A.I.

ISAACSON: When you talk about the competition with China, you and, of course, Dr. Kissinger spent a whole lot of time in China talking to the top leadership. Do you think there’s a possibility we could end up cooperating with China more, or do you think it’s inevitably a competition?

SCHMIDT: I spent a lot of years hoping that the collaboration would occur, and there are many people in our industry who think that the arrival and development of this new intelligence is so important, it should be done in a multinational way. It should be done in the equivalent of CERN, the great global physics laboratory in Switzerland. But the political tensions and the stress over values are so great. There’s just no scenario. There’s just — I want to say it again, there’s just no scenario where you can do that.

ISAACSON: You were chairman of the Defense Innovation Board a while back under President Obama, and I think you worked on that with President Biden as well, and I was on the Defense Innovation Board with you, and we looked at A.I. and how that was going to affect warfare, particularly drone warfare. What do you think the future of warfare can and should be in the era of A.I.?

SCHMIDT: If you study the Russia-Ukraine conflict, the Ukrainians, who had no Navy and no Air Force, were forced to scramble, and they did so valiantly. I spent lots of time there, and they ultimately built relatively simple drones that are now turning into very complex weapons. It looks to me like for terrestrial conflict, the correct answer is autonomy, which ultimately means drones. I’ve personally seen situations in Ukraine where you have a soldier sitting at a screen drinking coffee, controlling a weapon that’s very far away doing whatever job it was doing. If you think about war, in our history, thousands of years of history, it was stereotypically a man with a gun shooting the other man with a gun. That is an antiquated model of war. The correct model — and obviously war is horrific — is to have the people well behind and have the weapons well up front and have them networked and controlled by A.I. The future of war is A.I.-networked drones of many different kinds.

ISAACSON: Do humans need to be in the loop?

SCHMIDT: Well, the U.S. rule is called human in the loop, or meaningful human control. So, what will happen is that the computer will produce the battle plan and the human will authorize it, thereby providing both the legitimacy of a human authorizing it and the legitimacy of control and liability if they make a mistake. That’s the likely outcome. One of the key issues, by the way, is that Russia and China do not have this doctrine. And so, there’s always this worry about the Dr. Strangelove situation where you have an automatic weapon which makes the decision on its own. That would be terrible.

ISAACSON: I’m sure you, like me, know the movie “2001: A Space Odyssey” and the question of a computer getting out of control and the humans having to try to pull the plug on it. Do you think we ought to have kill switches, a way to pull the plug, and in what situations would we use that for our A.I. systems?

SCHMIDT: We’re going to have them. One thought experiment is to imagine that everyone in America has a red button that you press that disconnects the house from the internet. And you say, well, that’s stupid. But imagine a future scenario where an adversary has taken over the internet and is now using it to attack your house, right? So, all of a sudden, these questions of national security become very personal. So, I think that you’ll see, first, obviously, huge monitoring systems, but you will have defensive systems along the lines of the red kill button for that reason.

ISAACSON: At the end of your book, you say that you have high confidence that we can imbue our machines with the intrinsic goodness that is in humanity. First of all, are you sure that all of humanity has intrinsic goodness and what about those who don’t?

SCHMIDT: Well, look, I think we all understand that there’s some percentage of people who are truly evil, terrorists, so forth and so on. The good news is the vast majority of humans on the planet are well meaning, they’re social creatures, they want themselves to do well and they want their neighbors and especially their tribe to do well. I see no reason to think that we can’t put those rules into the computers. One of the tech companies started its training of its model by putting in a constitution. And the constitution was embedded inside of the model of how you treat things. Now, of course, we can disagree on what the constitution is, but these systems are under our control. There are humans who are making the decisions to train them. And furthermore, the systems that you use, whether it’s ChatGPT or Gemini or Claude or what have you, have all been carefully examined after they were produced to make sure they don’t have any really horrific rough edges. So, humans are directly involved in the creation of these models, and they have a responsibility to make sure that nothing horrendous occurs as a result of them.

ISAACSON: Eric Schmidt, thank you so much for joining us.

SCHMIDT: Thank you again.

About This Episode

Fellow for the Prevention of Genocide at the Holocaust Museum Stephen Rapp discusses the case he has been building against Bashar Al-Assad. Angela Patton and Natalie Rae tell the story of young girls preparing to reunite with their incarcerated fathers in their new film “Daughters.” Former Google CEO Eric Schmidt discusses his new book “Genesis.”
