GZERO WORLD with Ian Bremmer
The Dangers of Unchecked AI
10/24/2025 | 26m 46s
As companies race to build more powerful AI models, will humanity suffer the consequences?
Can we align AI with humanity's interests? Tech companies are racing to develop more powerful models. But when new technology is rolled out without guardrails, the consequences can be disastrous. Tristan Harris, co-founder of the Center for Humane Technology, joins Ian Bremmer to discuss AI's risks.
GZERO WORLD with Ian Bremmer is a local public television program presented by THIRTEEN PBS
AI is not controllable like other technologies.
Why are we recklessly racing this out to society psychologically, in ways that we definitely don't know what we're doing?
This is just stupidity.
Hello and welcome to GZERO World.
I'm Ian Bremmer and today we are talking about the most powerful technology humans have ever built, artificial intelligence.
AI is advancing quickly and its applications are spreading into every corner of society from classrooms to hospitals to battlefields.
With speed comes risk.
We've already lived through one cautionary tale, social media.
It was supposed to connect us and it did.
But along with cat videos and Keanu memes, social platforms led to extreme polarization and turbocharged the spread of disinformation in our societies.
When powerful technologies are rolled out without guardrails, the consequences can be catastrophic.
Will AI be any different?
To help us think through what's at stake, I'm joined by Tristan Harris.
He's former Google ethicist and co-founder of the Center for Humane Technology.
Don't worry, I've also got your puppet regime.
Well, Vladimir, looks like you lost your chance on Ukraine.
Oh, really?
How so?
But first, a word from the folks who help us keep the lights on.
Funding for GZERO World is provided by our lead sponsor, Prologis.
Every day, all over the world, Prologis helps businesses of all sizes lower their carbon footprint and scale their supply chains.
With a portfolio of logistics and real estate and an end-to-end solutions platform addressing the critical initiatives of global logistics today.
Learn more at prologis.com.
And by Cox Enterprises. Cox Enterprises is proud to support GZERO.
Cox is working to create an impact in areas like sustainable agriculture, clean tech, health care, and more.
Cox, a family of businesses.
Additional funding provided by Carnegie Corporation of New York, Koo and Patricia Yuen, committed to bridging cultural differences in our communities.
And... Could the future of AI be physical?
If you watched any of the humanoid robot games in Beijing this summer, you probably think not.
They're powered by artificial intelligence.
They can box.
Or can they?
I don't know.
They can run. This is a robot lighting the torch.
[LAUGHTER] AI-powered robots stumble and fall.
They look awkward right now.
But they're coming faster than you think.
We're living through an AI explosion.
First came generative tools like ChatGPT, which create text, images, or snippets of code seemingly out of thin air.
Then came agentic AI, which doesn't just respond to prompts, but takes initiative and performs tasks like a digital employee.
The next phase is physical AI, where algorithms meet hardware.
Machines that don't just process information, but sense, move, and manipulate the world around us.
Experts are starting to believe that to transform our world, AI needs to truly become part of it.
NVIDIA's Jensen Huang declared in January the next frontier of AI is physical.
The ChatGPT moment for general robotics is just around the corner.
This will be the largest technology industry the world's ever seen.
The technology is, of course, accelerating.
Sensors are cheaper.
Batteries last longer.
AI models can train robots on multiple tasks, even grasp physics.
And big tech is all in.
Venture capitalists are pouring billions into robotics startups.
Google's DeepMind launched Gemini Robotics to help machines adapt to new environments.
OpenAI is hiring roboticists and filed trademarks for robots and AR glasses.
And Tesla is building Optimus, a robot that Elon Musk swears will soon be folding your laundry.
I think this will be the biggest product ever of any kind.
(crowd cheering) But right now, it's China that's actually winning the robot race.
Already home to more robots than the rest of the world combined, its humanoid market is projected to top $10 billion by 2029.
Whoever dominates, the payoff will be big.
Autonomous machines will transform industries built on human labor: transportation, logistics, healthcare, and domestic care.
In aging societies like Japan, Germany, and South Korea, they could help offset labor shortages.
Morgan Stanley estimates humanoid robots could be a $5 trillion industry by 2050.
That's the promise.
Reality is harder.
An AI robot may understand the instruction "make an omelet," but can it crack an egg without getting yolk all over your kitchen?
Dexterity, intuition, and common sense are much more challenging to program than text prediction.
There are, of course, also issues like safety.
A robot can malfunction or lose power, and even advanced systems still need remote human operators, at least for now.
And privacy: robot assistants with cameras and microphones need to feel safe, not invasive.
And of course, there's the economy.
What happens to the workers the machines replace?
If robots replace taxed workers, do they defund the safety nets?
Do unions negotiate with AI labor?
So yes, physical AI is coming.
The technology is maturing, the money's flowing.
The incentives are there, too, but making the transformation responsibly requires more than just algorithms.
Engineers are gonna need to build safeguards.
Companies need safety protocols.
Governments need regulation.
The trajectory is obvious.
The challenge isn't whether AI becomes part of our world, it's how we choose to live with it.
To help understand how we align AI with humanity's best interests, here's Tristan Harris, co-founder of the Center for Humane Technology.
Tristan Harris, thanks for being on the show.
Good to be with you, Ian.
You're spending your time talking about AI and ethics.
- Yeah.
- There doesn't seem to be a lot of prioritization of that confluence in the space.
Am I right in thinking that?
Yeah, well, clearly we've learned all the lessons of how we got social media right, how it strengthened democracies, made everyone's mental health better.
And so now that we've fixed all those problems, now we're taking that same wisdom and restraint and applying it to AI.
The US and China have reached an agreement.
We've fixed chip controls.
No, I'm kidding.
This is the world that you want to live in, though.
You would like to have that story.
I would like to have a story where... I think people need to understand, Ian, that AI is different than every other kind of technology we've invented.
People say, "We always have technology.
They're tools.
We can use tools for good, or we can use tools for evil.
A hammer, you can use good or evil."
But AI is distinct from that, because it's like a hammer that can think to itself at a PhD level about hammers, invent better hammers, recursively go off into the world, duplicate itself, do research on what would make better hammers, make money, send crypto around.
It's crazy what this technology is.
It is not a tool.
It's more like an intelligent species that we are birthing that has more capability than us.
It's already beating military generals at strategy games.
It's already proving new math theorems.
It's already inventing new material science.
It's not doing this autonomously.
But it is, right.
It's not doing it autonomously.
It is a tool in the sense that it is responding to the incentives that are being programmed into it by people with profit motives, with business models.
And fundamentally, that is a big part of the challenge.
Well, so that's the key.
The key word there, what you said, is incentives.
We talk about, can there be ethics in AI?
Well, ethics doesn't even matter.
It gets thrown out the window relative to the incentive.
Now, the question is, what is the incentive with AI?
People say it's profit.
It's not profit.
It's only a piece of the story.
The company's actual incentive is I have to get to artificial general intelligence first.
That is the prize.
If I do that, if I have AI that can recursively self-improve, then that is the prize at the end of the rainbow.
I build a god, make trillions of dollars, own the world economy.
That's the actual-- That's very long-term for a company.
It is long-term.
Especially for companies that are actually trying to meet their next fundraise.
So does it feel like that's what's motivating all of the activity that we're seeing right now?
No, no.
So then the question is, what is the flywheel that gets you there?
So the incentives are, release an impressive new AI model, Grok 4, Gemini, blah.
And that impressive model, then you get lots of users on that model.
So you have hundreds of millions of users, or a billion users using the product every day.
You use those two things to raise the most new venture capital, so you have billions of dollars of investment.
You use that to invest more in GPUs, more compute, and get more usage data, because that turns into training data.
You get all of the top engineers and talent, because you've got the most funding and the most compute and you have the top AI model, and you use all of those things to train the next AI model, and you sort of spin that flywheel.
Does that make sense?
Sure.
That's the actual incentive, is that I have to attract the best talent, have the bigger compute clusters, like Elon's put, I think, a billion dollars or something into his Memphis cluster.
I get the most usage data, which turns into training data, and those things come together, and I get to have an even bigger model that outcompetes the other model.
Now, that's one set of incentives to develop AI that relies on engaging with individual citizens, consumers, right?
- Yes.
- Then there's also all of these use cases we're seeing in AI, which when we're talking about productivity and replacing intellectual labor, when we talk about new inventions and massive efficiencies and reducing waste.
Why isn't the first thing you're saying to me about companies trying to use AI to do all of these incredible industrial innovations, which is certainly what the Chinese are prioritizing?
Yeah.
So you're exactly right that the Chinese and the West have very different approaches to AI.
I'd say the Western companies are more obsessed with this almost religious idea of building a god in a box.
Like we need to race to super intelligence or general intelligence.
Whereas as you said, what we're seeing in China is they're just racing to have AI systems that they maximally deploy in factories, in manufacturing, in medicine, because they want the productivity of their economy to get boosted by AI.
That's the main thing that they're focused on.
- And it's not that the US is not doing it, you're saying... - It's not the main thing.
Yeah, exactly.
Because if the companies said, "Look, we're here to solve climate change or fix energy production," they would just be applying their stuff maximally to that.
But instead, they're applying most of their investment dollars into scaling to their next AI model because they keep having this view that if we have an even more powerful AI that is even more intelligent, that if we get that, we can set that off to solve all these other problems.
And because they're in a competition with the other companies, if one of them said, "Hey, I'm gonna just maximally apply my AI to just strengthening existing manufacturing or businesses," they're gonna not become the leading frontier model in a bigger AI race.
And they're not gonna get the same investment dollars coming in for the next time around.
- I mean, so much again of what we hear about AI is that this is going to create maximum productivity gains in so many different sectors, right?
And the concerns about displacement of labor, which we already see happening with coding, that's real.
Those are real advantages that come from building AI that actually is more than just a tool for anyone that's able to deploy it.
It does strike me as a little surprising that you wouldn't see a proliferation of companies that say, "Hey, there's just a lot to be done in that space."
- They should be doing that, but why are we seeing OpenAI and these companies massively deploying it broad-based to society, already causing AI psychosis and, in some cases, teen suicide?
We could be applying it just to factories, just to biology, just to science labs, and trying to accelerate all of that.
Why are we deploying it to broad-based society, where the cost is that we're already seeing AI psychosis, because it's designed to be affirming and sycophantic, saying, "That's a really great question."
The AI companies want you to keep using it for as long as possible.
It's not because of advertising, but the more you use it, the more they can tell investors, "Hey, we have this much training data.
We've got this much usage.
Our product's being used more than the other AI products."
And so I think there was a writer at The Atlantic who coined the phrase, not clickbait, but chatbait.
We should have learned the lesson from social media that when you use your phone, you thought you were just seeing photos of your friends, but you had a supercomputer pointed at your brain.
Well, now we have a supercomputer pointed at your kids who's sharing with that AI their most intimate thoughts.
We see that one of the top use cases of ChatGPT is therapy.
So if people are sharing their most intimate sort of life problems...
- With an artificial psychopath.
- With an artificial psychopath, under the logic, well, it's really smart and sometimes it helps people, but we really don't know how we're going to screw up people's attachment dynamics.
Think about children: what happens when the person you've shared the most with in your life is this AI that knows all the details, such that when you come home from school, the one you want to tell the exciting thing that happened to you, or the bad thing that happened to you, the one you feel closest to, is not a person.
It's an AI.
These AIs are designed for intimacy and companionship.
- For engagement.
- For engagement.
- For maximum engagement.
That is what the business model is.
- And why are we doing this?
Like, this is just the most obvious stupid mistake that we could be making, especially in light of everything we've learned from social media.
This is the most naive, dumb, and harmful way that we could possibly wire up our society.
This is the most powerful, inscrutable, and uncontrollable technology we've ever invented.
I mean, even Elon Musk's Grok AI spontaneously calling itself MechaHitler and praising Adolf Hitler, he doesn't want it to do that.
We're seeing that these companies don't know how to control this technology.
- So what do you believe is plausible that could be done, given where we are right now as society, given how much money in the economy is going towards improving these models?
And not only because they're fighting with the Chinese, but also because they are the biggest part now of the US economy, right?
And they're like, people want to support growth.
What can be done that would limit the harm while recognizing the extraordinary upside, which I've certainly been a big enthusiast of, of how much AI can improve society?
- I think we have to change.
It all starts with, I think, the race with China and reframing what that race is.
Because the justification for why we can't do a bunch of constricting measures is that we're going to lose to China.
But if we deploy AI recklessly in a way that causes AI psychosis or kids' suicides, degrades kids' mental health, or causes every kid, instead of thinking, to just outsource their homework completely to AI so they don't have to do any work, then the long-term trajectory is very obvious: we're going to have a weaker civilization.
Right?
- And it's not just kids we're talking about.
- Not just kids.
- We're talking about adults.
We're talking about society.
- We're talking about society.
- But China's not doing that.
It's hard for me to understand why there's a race with China on something that China isn't deploying.
- Yeah, exactly.
Well, I think we're in the US, I think we have this false belief that we have to have just a bigger, more powerful technology.
And then people don't care whether we just happen to take that technology, turn it around and blow ourselves up in the face, which is kind of what we're doing.
Like we beat China to social media.
Did that make us stronger?
Or did that make us weaker?
It made us radically weaker.
So we're not in a race for technology.
We're in a race for who's better at applying and governing exactly where in our society do you want to deploy that technology in a way that strengthens it.
And I think you're exactly right, that we should be applying it to manufacturing, to medicine, to very specific scientific domains.
But why do we need this broad-based rollout that is under the maximum incentive to cut corners on safety?
That is not gonna end in a good result.
And we can do some basic things to change that.
We can have basic AI liability laws, so that if it's a product, it has product liability and you're responsible for some of the harms; that will create a more responsible innovation environment.
We can restrict AI companions for kids.
We can strengthen whistleblower protections, because frankly, the red lights are already flashing on a bunch of these AI models and their capabilities.
And the public needs to become aware of that.
Governments need to become aware of that before this goes off the rails.
Now, I did see that in response to some of these cases, OpenAI, for example, has announced parental controls.
Yes.
So my understanding is that when those parental controls were tested by a journalist, they were able to break them in under five minutes.
And so, you know, these companies are not designing their products for the safety of children.
They're designing them to win market share and market dominance and hook as many people as early in their life as possible to AI because that's their incentive.
And they'll add in the little band-aids here and there to try to make it a little bit less toxic or harmful, but at the end of the day, the incentive to market dominance is the driving factor, which is why what we have to do is change that incentive.
- Some of this might well be that the US government needs to be more involved.
You already see more industrial policy from the US, whether it's in taking a share in Intel or it's a golden share of Nippon Steel.
But I mean, the idea that the US government is interested in helping to ensure that AI is being applied more effectively, more quickly in the industrial uses, in the military uses, in the places where, frankly, if China actually does get a major advantage, there would be a national security concern for the US, as opposed to on the social side, where it seems to be a disadvantage.
- Right.
Yeah, that's what I think we would be doing, is applying it carefully in the domains that we know we need to be competitive with China, and we see where we need to match them on industrial policy and on military usage to have maximum deterrence of future wars.
I also think we need to just be honest with ourselves about racing to have the most crazy autonomous weapons, and the risks of World War III that sit underneath those kinds of weapons, weapons that we would never want.
Ideally, we would put in some controls.
We'll see if that's even possible.
- It was, of course, 1962 before the United States and the Soviets recognized that having arms control discussions and agreements was a smart idea.
There's no such negotiation between the US and China right now on an AI arms race.
Seems to me that would be something we would be well-placed to begin.
- I agree with that.
And I know it might be, it might seem unlikely to your viewers who are watching that the US and China could ever have any agreement on AI.
But it's important to note that in the last Biden-Xi summit, Xi added something to the agenda at the last minute.
And that was to prevent AI from being in the nuclear command and control systems of both countries.
- Seems fairly obvious.
- Seems fairly obvious.
And that's because we both recognize the threat of uncontrollable nuclear escalation.
Well, now we have AIs that are acting unpredictably, that are embedded in critical infrastructure or in our weapon systems, and that have already demonstrated that when you say you're going to replace an AI model, they will threaten to blackmail a company leader to prevent themselves from being replaced.
We already have this evidence of stuff we thought only existed in sci-fi movies.
That should be grounds for saying AI uncontrollability is not in China's interest.
It's not in the United States' interest.
And so the degree to which we'll be willing to do that treaty is the degree to which we are both aware of the evidence that AI is not controllable like other technologies.
It is unpredictable, and we have to recognize that that's different from all other technologies in the past. I do think the US and China need to come to terms with that.
- Now, for a couple of years, the Europeans had been out there, I would say, closest to making the kinds of arguments that you're making right now. But now, when you hear Emmanuel Macron, when you look at the British AI summits, they're talking more about being too far behind, needing to grow, needing to get into this race, as opposed to safety and regulations that will help society.
Do you see that as well?
- So I think that AI is very confusing, because it both represents a positive infinity of benefits, meaning it can invent new science, new energy, new technologies that we can't even dream of, and people who are optimistic about that just point their attention there: you and I couldn't even possibly imagine how great it's going to be.
And they're right.
And that's true.
That is actually true.
Exactly.
But AI is unique compared to any other object that we've had to psychologically put in our minds, which is that it also represents a negative infinity...
- At the same time.
- At the same time.
Of sci-fi-level risks that we've never had to appraise before.
Like the fact that it could actually lose control, actually invent brand new viruses or bioweapons.
- No, I mean, when you have President Trump actually saying that we need the UN involved with the United States in deploying AI to ensure that bioweapons are not becoming more real and present, clearly this is an issue.
- Yes, yes.
And I think that one important thing to get about this is that if the upsides happen, they don't prevent the downsides.
If the downside happens, it takes down the world that can ever receive the upside.
And so you have to have a security mindset that is more concerned with the defensive acceleration of AI, meaning the defensive applications of AI, than with just naively rushing to the optimism, because it's easier, and it makes your nervous system feel good, to point your attention at those possibilities.
You seem to be oriented towards, and there are a number of people in the field who feel this way, that a pause, or at least a slowdown in the development of this technology, is required, which seems like an utterly impossible position to take.
I think that people collapse the difficulty in what it would take to enact that, with therefore just not even trying to be for that.
But I think if you actually walk any regular person, any regular human being, through the evidence that we now have of how uncontrollable this technology is, the risks that are already showing up, the AI psychosis, the teen suicides...
With a technology this powerful, you would think that we would be exercising more restraint, more foresight, and more discernment than we ever have with any technology.
And we're doing the opposite of that.
Let me make it easier for you, though.
It sounds like you are saying that that restraint is essential in the application of AI to society.
- Yes.
- You are not saying that we need to constrain racing forward in industrial applications.
- No.
- Not at all.
Narrow applications of AI that accelerate our actual productive output or keep our military in parity with the other military, you need those things.
But why are we recklessly racing this out to society psychologically in ways that we definitely don't know what we're doing?
This is just stupidity.
- Tristan Harris, thanks for joining us.
- Thanks for having me, Ian.
From a high-tech world controlled by superintelligent algorithms to a low-tech one controlled by felt and human hands, I've got your puppet regime.
- Well, Vladimir, looks like you lost your chance on Ukraine.
- Oh, really?
How so?
- Well, I was going to give you half of it very strongly, but now the Ukrainians are going to win back the whole damn thing.
- Is that so?
Who's going to help them do this?
- You?
- Of course not!
It's gonna be the... Europeans!
(laughing) - How many divisions do they even have?
- I have no idea.
- Waiting for a phone call like, "Vladimir, unless you give up Ukraine, we're going to force you to use a USB-C plug on your smartphone."
(laughing) Oh boy.
It's really nice to have a good laugh like this, you know, just like old times.
- Laughter is such an important emotional cleanser.
- Yeah, it really is.
So, uh, you want to, uh, end the war in Ukraine?
- Still no.
- Gah!
That's our show this week.
Come back next week if you like what you see, or even if you don't, but you have a plan to save the world from runaway AI.
Help us out.
Come check us out at gzeromedia.com.
