GZERO WORLD with Ian Bremmer
Changing the AI Conversation
1/21/2022 | 26m 46s | Video has Closed Captions
Can we learn to control artificial intelligence before it learns to control us?
Can we learn to control artificial intelligence before it learns to control us? On the show this week, former Google CEO Eric Schmidt tries to change the AI conversation. Then, newly former German Chancellor Angela Merkel tries to disconnect.
GZERO WORLD with Ian Bremmer is a local public television program presented by THIRTEEN PBS. The lead sponsor of GZERO WORLD with Ian Bremmer is Prologis. Additional funding is provided...
>> The systems that we've built today that we all live on are basically driving the addiction cycle in humans.
"Oh, my God, there's another message, oh, my God, there's another crisis, oh, my God, there's another outrage.
Oh, my God, oh, my God, oh, my God, oh, my God."
♪♪ >> Hello and welcome to "GZERO World."
I'm Ian Bremmer, and on the show today, can humans learn to control artificial intelligence before it controls us?
You saw that movie.
In the last two years of the pandemic, more and more of our daily lives have moved from the physical to the digital.
And unlike in the brick and mortar world where governments more or less maintain authority, the digital space is still the Wild West.
What's more, A.I.-backed algorithms that power the world are increasingly becoming too smart for us to understand how they work.
So how do we prevent our future robot overlords from taking control?
I'll ask former Google CEO Eric Schmidt.
Don't worry, I've also got your "Puppet Regime."
>> No, I will not proofread your e-mail to Joe Biden.
Auf Wiedersehen.
>> But first, a word from the folks who help us keep the lights on.
>> Major corporate funding provided by founding sponsor First Republic.
At First Republic, our clients come first.
Taking the time to listen helps us provide customized banking and wealth-management solutions.
More on our clients at firstrepublic.com.
Additional funding provided by... ...and by...
>> On April 29, 2020, two months into a pandemic that had sent global markets into a tailspin, Microsoft CEO Satya Nadella presented his company's quarterly earnings to a conference call of anxious investors.
Nearly two years later, that breakneck pace has only quickened as companies and communities have transferred more and more of our daily lives to the digital world.
We've got Zoom happy hours, virtual weddings continuing to replace in-person gatherings, and even many of the high-stakes global summits remaining largely remote.
According to a recent Pew poll, 90% of Americans say the Internet has been essential or important to them during the pandemic, with 40% saying they've used digital technology in new or different ways.
And unlike in the physical world where governments with guns ultimately hold sway, just a handful of Big Tech companies are writing the rules and regulations of the ever expanding digital space, and specifically, they're writing those rules in the form of computer algorithms powered by artificial intelligence and machine learning.
My concern isn't so much that a few fallible -- and if we're being honest -- kind of weird tech billionaires are controlling our digital lives.
It's more that the algorithms that they've set in motion but no longer fully understand are, in effect, in control.
Take A.I.-powered facial recognition software, which comes in handy when you want to unlock your smartphone but can also be used for more sinister purposes.
The Chinese government has already embraced A.I.'s unprecedented surveillance potential.
They've reportedly used A.I. to identify members of the persecuted Uyghur minority and even to publicly shame jaywalkers in real time by displaying their faces, names and I.D. numbers on nearby billboards.
And we've got problems right here at home.
>> Facial recognition technology played a key role in the wrongful arrest of a Farmington Hills man.
Robert Williams was taken into custody in his own driveway in front of his family and then spent 30 hours in jail for a crime he did not commit.
>> Communities of color have frequently taken the worst of A.I. in an even more fundamental way, with Google image search results steeped in racial bias.
So how does humanity learn to control A.I. before A.I. controls humanity?
In the new book "The Age of A.I. and Our Human Future," former Secretary of State Henry Kissinger, former Google CEO Eric Schmidt and MIT computer scientist Daniel Huttenlocher team up to try to answer that question, and Eric Schmidt joins me now.
Eric Schmidt, good to see you back.
Thanks for joining me.
>> Thank you for having me, Ian.
>> So, a 98-year-old statesman, a computer scientist and a tech titan walk into a bar.
And I guess what you come up with is this new book you have on artificial intelligence.
And I will say that one sentence really struck me.
It was almost haunting when you wrote that "we have sought to consider its -- A.I.'s -- implications while it remains within the realm of human understanding," implying that in relatively short order, that will no longer be the case.
Explain your thinking behind that.
>> Well, we speculate that A.I. will achieve near-human-level intelligence within a few decades.
And when we say near-human, we don't mean the same as human.
And the book is a lot about how humans will coexist with this artificial intelligence.
And in particular, what does it mean to be human?
Dr. Kissinger got involved with this because he concluded that the age of artificial intelligence is of the same significance as the transition from the Age of Faith to the Age of Reason where humans, hundreds of years ago, learned how to be critical reasoning beings.
The impact of having non-human intelligence that is with us, controlling us, changing us is not understood and we're not ready for it as a society.
That's what this book is about.
>> So when I think about big technological advances, historically, you know, the advent of nuclear weapons, for example, the average human being may not have understood it, but the specialists knew exactly what was going on, right?
The theorists got it, the practical applications and the rest.
Even today, when we talk about artificial intelligence and the explanations behind why deep learning gets you the results that it gets you, frequently, I hear "We don't know."
How does it change your view of the way we work with artificial intelligence when we are in a situation today, in a context where we're taking advantage of things that we don't actually understand?
>> Well, remember that this is a combination of technologies we've never seen before.
It's not precise, it's dynamic, it's emergent in that when you combine it, new things happen.
But most importantly, it's learning as it's going.
So you got all sorts of problems.
Imagine that the system learned something today, but it didn't tell you or it forgot to tell you and what it learned was not okay with you.
Imagine if your best friend for your kid is in fact not a human, but a computer.
And your kid loves this computer in the form of an A.I. assistant or what have you, or a bear or a toy, and the toy learns something and it says to the kid, "Hey, I learned something interesting."
And the kid's going to say, "Sure, tell me."
But what if it's wrong?
What if it's against the law?
What if it's prejudicial?
We don't have any way of discussing this right now in our society.
>> Do you think that when we're talking about, for example, the exposure of young people to these algorithms we don't understand, do governments need to come in and say, "Actually, we need to significantly constrain what the exposure needs to be"?
>> Well, we just ran this experiment in the form of social media.
And what we learned is that sometimes the revenue goals, the advertising goals and the engagement goals are not consistent with our democratic values and perhaps even how the law should work, and especially on young minds.
We worry a lot in the book that A.I. will amplify all of those errors.
It will all, of course, do amazing things as well, which we talk about.
But a good example here is we don't know how to regulate the objective functions of social media that are A.I.-enabled.
Is the goal engagement?
Well, the best way to get engagement is to get you upset.
Is that okay?
We don't even have a language in our regulatory model to discuss it.
We don't have people in the government who can formulate a solution to this problem.
The only solution we can propose in the book at the moment is to get people beyond computer scientists in a room to have this discussion.
Dr. Kissinger tells the story that in the early 1950s, once the Soviet Union and the arms race emerged, groups got together to develop the notion of mutually assured destruction and deterrence and so forth.
But it wasn't built by the physicists.
It was the physicists working with the historians and the social scientists and the economists and so forth.
We need the same initiative right now before something bad happens.
>> Who's at the forefront right now that you think are talking constructively about this issue?
>> There's a handful of people who have written very clearly on these issues, but there's no organized groups.
There's no organized meetings.
No one is debating the most consequential decisions that we're going to make, which is how are we going to coexist with this kind of intelligence?
And I want to be very clear that we know this intelligence is going to occur in some way.
We just don't know how it will be used.
Thirty years ago, with the advent of the Internet, we knew it was going to happen.
But we certainly did not know that once we connected everybody in the world, we would have all these problems.
I was fortunate enough to be the head of the National Security Commission on Artificial Intelligence for the Congress.
And we came back with lots of recommendations, some of which have been adopted -- more funding, research networks, working with our partners, making sure that we, the democratic countries, stay alive, staying ahead of China and their semiconductors and so forth.
There is no coherent group in our government or at least in our civil society in the West that's working on this.
By the way, China is confronting these things and, as you mentioned, is busy regulating A.I. as we speak.
>> Anything we should be learning from the Chinese in terms of the steps, albeit tentative and early stage that the government is taking to try to rein in control and even understand these technologies?
>> What China announced a few years ago as part of its China 2025 plan and A.I. 2030 is that it would try to dominate the following industries: A.I., quantum, energy, synthetic bio, environmental matters, financial services.
This is my entire world.
This is everything that I've been working on, and I suspect for you as well.
It's a great concern.
And China has identified building platform strategies that are global.
So the thing to learn is that we have a new competitor in the form of China who has a different system, a different set of values.
They're not democratic.
You can like them or not.
I don't particularly care for them, but you get the point.
They're coming and you would not want TikTok, for example, to reflect Chinese censorship.
You may not care where your kids are and TikTok may know where your teenagers are and that may not bother you, but you certainly don't want them to be affected by algorithms that are inspired by the Chinese and not by Western values.
And by the way, this is why a partnership with Japan and South Korea is so incredibly important because the values in South Korea and in Taiwan and in Japan are so very consistent with what we're doing and so much of our world comes from those countries.
>> Do American social media companies, algorithms of those companies, do they in any way reflect American or Western values, in your view?
>> They have not been so framed, and I think most people would argue that the polarization that we're seeing is a direct result of some of the social media algorithms.
A good example is amplification.
Today, social media will take something that some crackpot person has and amplify it a thousand times or 10,000 times.
That's the equivalent of giving the craziest people in the country the loudest loudspeakers.
That just doesn't seem right to me.
It's not the country that I grew up in, and I'm very much in favor of free speech.
I just am not in favor of free speech for robots.
I'm in favor of individual person speech.
>> So I mean, what you seem to be saying -- and I don't want to put words in your mouth, but I'm interested in how you think about this -- is that if the Chinese government is actively trying to ensure that the algorithms its citizens are informed by, filtered into, do reflect Chinese values -- socialist characteristics, if you will -- then the Americans, the Europeans, the Japanese should actually be doing the same, and right now they're not.
>> No, that's too strong a claim.
The Chinese government is clearly making sure that the Internet reflects the priorities of the autocracy that is the CCP.
There's no question when you look at their regulatory structure, they're regulating the Internet to make sure that they remain in power and that the kind of difficult speech which we typically enjoy in a politically free environment is not possible.
We're not saying that, and I'm not saying that.
What I am saying is that it's time for us in the West to decide how we want the online experience to come.
I'm very concerned about the advent of these A.I. algorithms, which boost things and can target things.
So here's an example.
Let's say I was doing a new company and I was completely unprincipled.
What I would do is figure out how to target each and every one of my users based on their individual preferences and completely lock them in with my false narrative, and I would have a different false narrative for each one of them.
Now, you say "He's mad," and of course, I wouldn't actually do that.
But the technology allows it, which means someone will try.
We have to figure out how to handle that.
Do we ban that?
Do we regulate it?
Do we sort of say "that's not appropriate"?
The software is there.
It's possible to be built today.
>> Because a company isn't doing that, but individual political actors across the spectrum are individually doing that presently.
>> Yeah, and again, we have to decide as a country, do you want to be a country which is largely anxious all day because everything is a crisis?
And the reason everything's a crisis is because that's the only way to get your attention.
In the book, we speculate that in the next decade, people will have to have their own assistants, which will be very tuned to their own preferences, that will say "this is something to worry about.
This is a scam.
This is false."
In other words, you're going to have to, if you will, arm yourself with your own defensive tools against the enormous amount of misinformation that's going to be coming to you.
There's a famous Carnegie Mellon economist, Herb Simon, who in 1971 said it's obvious what the scarcity in this new economics is going to be.
It's the scarcity of attention, and that's what we're all fighting about.
And I don't know about you, but I'm overwhelmed by the current systems that want my attention.
Imagine five years from now and 10 years from now, when the A.I. algorithms are fully empowered.
They're going to be both addictive but also wrong.
>> What is interesting about the argument you just made is that right now, if a citizen is going on Facebook or Twitter, they're going in by themselves, right?
I mean, they're not going in with help and they're -- the corporate environment is what the corporation wants them to see and experience.
What I hear you saying is that in relatively short order, individual citizens, individual consumers need to have something on their side, whether that's an A.I. bot or assistant or what have you, because otherwise they just won't be able to navigate systems that frankly are psychologically much more capable of damaging them than they are aware.
>> The systems that we've built today that we all live on are basically driving the addiction cycle in humans.
"Oh, my God, there's another message, oh, my God, there's another crisis, oh, my God, there's another outrage.
Oh, my God, oh, my God, oh, my God, oh, my God."
I don't think humans, at least in modern society, were evolved to be in an "oh, my God" situation all day.
I think it will lead to enormous depression, enormous dissatisfaction unless we come up with the appropriate rate limiters.
A rate limiter is your parents.
Your parents say, "Get off the games."
In China, the government tells you the answer.
Parents understand this with developing minds.
But what about adults?
Look at all the anti-vax people who have become so mired in a set of false facts that they can't get out of it, and they eventually die from their addiction by virtue of getting the disease.
It's just horrific.
How do we accept that as a society?
>> You're kind of a creature of Silicon Valley.
You know these people.
You've lived among them.
I mean, in your conversations with them, senior people in these companies that know full well what's happening to society as a consequence of these algorithms, how do they respond?
How do they deal with it?
>> So, I have not discussed the internal Facebook stuff with my Facebook friends, but I will tell you that if you read the documents that were leaked out of Facebook, it's pretty clear the management knew what was going on.
They have to answer as to why they did one thing and didn't respond to another.
When I was CEO of Google more than a decade ago, we had similar but simpler problems and we always would get right on them and try to sort of establish a moral principle for how to deal with them, and we did a pretty good job, in my opinion.
When I think about 10 years ago, I want to be the first to admit that I was very naive.
I did not understand that the Internet would become such a weaponizing platform.
I did not understand, first and foremost, that governments would use it to interfere with elections.
I was wrong there.
But more importantly, I did not understand that the now A.I.-enabled algorithms would lead to this addiction cycle.
Now, why did I not understand that?
Well, maybe because I'm a computer scientist.
I went to engineering school.
I didn't study these things.
The conclusion in our book is that the only way to sort these issues out is to widen the discussion aperture.
If we simply let the computer scientists like me operate, we'll build beautiful, efficient systems, but they may not reflect the at least implied ethics of how humans want to live, and that debate needs to occur today.
I'm very concerned about national security, that what will happen is that there will be this compression of time where we won't have time to deliberate our response to an attack or a strategic move.
I'm very concerned that our opponents will, for example, launch on warning.
They'll actually have an A.I. system decide to enter into a war without even having humans discussing it because the decision cycle is too fast for everybody.
These issues have to be discussed now, and they have to be discussed between nations and within the nation.
Again, China as an example -- and I'm not endorsing them -- has a law around data privacy and has a new algorithmic modification restriction law that is in process right now.
So they're trying in their own way to do it their way.
What's our answer?
What is the democratic answer?
>> The interesting change, of course, in technology compared to traditional geopolitics is that increasingly there are really only two dominant players.
And you know, the Chinese and the Americans are way ahead technologically of other countries.
They're also spending multiples of what other countries are on A.I. research.
If you're looking at a country like Japan that, I mean, clearly needs to invest in China, needs a security umbrella from the United States, but isn't anywhere close to the technological capabilities of either country, what do you say to them in terms of strategy going forward?
A holistic macro strategy for a government like Japan?
>> Japan should do what the United States has done so far, which is it should organize an A.I. partnership.
It should get all of the components of the Japanese society that are major players in A.I.
And remember, there are large companies using A.I. in Japan, and they need to build an A.I. strategy for the nation.
That A.I. strategy will include a lot more resources in universities, a lot more people trained in those universities and in the government, and an agreement that the A.I. systems that are going to get built in Japan are consistent with Japanese law, but also Japanese culture and values.
That's the only path I know of to do.
Japan will be a significant player because of the extraordinary technological capability of Japanese scientists and Japanese software.
There's every reason to think that Japan, as well as South Korea, and to some degree, India can be very significant players in this because of the scale they have.
This is a game where you need a lot of resources.
And Japan has that.
>> So, Eric, before we close, we've talked about a lot of things that worry you both in terms of software, hardware and global policy.
Talk to me just a little bit about the things that excite you the most in the A.I. field.
Where are the breakthroughs that you think are just going to be magnificent and life-changing for society that are coming soon to a theater near you?
>> Probably the most important is health and wellness and in particular new drugs, new drug discoveries.
And there are many, many such discoveries happening in science.
One of my friends said that biology -- that A.I. is to biology the way math is to physics.
Biology is so difficult.
It's so difficult to model that we're going to have to use these technologies to do so.
I think there's every reason to think that whether it's in synthetic biology, which would be this immense new business.
I had a demonstration a few weeks ago of a company that was growing concrete.
Not mixing it, but growing it.
Now, this concrete wasn't as strong and it was more expensive.
But if you can grow concrete, you should be able to grow anything.
So imagine material science, new materials and all, new drugs, new diagnostics, new biology, as well as the most important thing, which is A.I. assistants that help people be smarter, whether it's somebody who's doing their normal job or somebody who's a brilliant physicist who's just overwhelmed by the problem that they ask the computer to help them with.
The advances will come faster and faster and faster because of A.I.
>> Eric Schmidt, always great to talk to you, my friend.
Thanks so much for joining.
>> Thank you for having me, Ian.
♪♪ >> And now to "Puppet Regime," where a newly former German chancellor, Angela Merkel, still has a puppet.
She's finding that she too has a hard time disconnecting.
Roll that tape.
♪♪ >> Ah, this is the life -- no work, no responsibilities, no petty male egos to navigate.
I just love the peace and quiet.
[ Cellphone rings ] Yeah, what is it?
You need me to make another statement about getting vaccinated?
This is ridiculous.
Goodbye.
Ah, this is the life.
No work, no responsibilities.
No male egos to -- [ Cellphone rings ] Ugh.
Yeah?
Hello, President Macron.
Yes, you are doing fine.
No, I will not proofread your e-mail to Joe Biden.
Auf Wiedersehen.
[ Sighs ] He'll never be me.
He'll never be me.
Anyway, where was I?
Ah, yes, this is the life.
[ Cellphone rings ] Don't answer, Angela.
No, nein, nein, don't do it, no.
Hello.
Yeah, Ursula, I know that Vladimir Putin is still a pain in the -- Yeah, I know.
Look, enough.
You deal with it.
Goodbye.
[ Sighs ] Deep breaths, Angela.
Deep breaths.
Reminder -- this is the life.
No work, no responsibilities.
[ Needle scratches ] Ah, who are you kidding, Angela?
You hate this.
This is not the life.
You don't know what to do with yourself.
You can't just cash out like Gerhard Schroeder.
Angela needs to work.
Angela needs a second act.
But what should it be?
>> "Puppet Regime"!
>> That's our show this week.
Come back next week, and if you like what you see or you're concerned about A.I. overlords taking control of everything, even GZERO Media, I'm not going to be replaced, but you should check us out at gzeromedia.com.
♪♪ >> Major corporate funding provided by founding sponsor First Republic.
At First Republic, our clients come first.
Taking the time to listen helps us provide customized banking and wealth-management solutions.
More on our clients at firstrepublic.com.
Additional funding provided by... ...and by...
