Connections with Evan Dawson
The journalist who saw the AI threat coming
5/12/2026 | 52m 37s | Video has Closed Captions
James Barrat returns to discuss AI’s rise — and how long humanity can keep pace.
In 2014, James Barrat warned that AI could surpass human intelligence within a decade — a scenario many dismissed at the time. Twelve years later, his predictions feel increasingly plausible. Now returning to “Connections,” Barrat discusses his next book and how much longer he believes the human era can endure.
Connections with Evan Dawson is a local public television program presented by WXXI
>> From WXXI News.
This is Connections.
I'm Evan Dawson.
Our connection this hour was made with a warning.
A warning about artificial intelligence.
Documentary filmmaker James Barrat has traveled the world creating films for PBS and other networks, dozens of them, and through his work, he became interested in artificial intelligence.
So he set out to find the people who were creating AI and the people who had strong views on technology and learn as much as he could.
In the end, his conclusion was straightforward: artificial intelligence is being developed in the wrong way.
It was on a track to become more intelligent than human beings, and we had no idea if it would adopt our values.
Barrat couldn't help but think that the conversation about AI was, quote, the most important conversation of our time.
What I just described happened back in 2013.
Think about that.
How many people do you know who were talking about AI like this?
In 2013, Barrat wrote a book called Our Final Invention: Artificial Intelligence and the End of the Human Era.
And when I started hosting Connections in January of 2014, James Barrat was one of my first guests on this show.
Some people called him a little alarmist back then.
Far-fetched, maybe an extremist.
12 years later, James Barrat looks absolutely prescient.
His book raised the problem of alignment, which is the concept of aligning AI values with human values.
And he concluded that without alignment, the human race was in real trouble.
Today, a whole lot of people are talking about alignment.
Although AI is racing ahead with no alignment in sight.
And if you think I'm overstating Barrat's work, listen to this clip.
This is 2013.
This is from a book event on C-Span.
James Barrat trying to explain to a live audience what artificial intelligence is.
Long before most people had any idea what might be coming.
>> What I learned is that if we proceed on the course we're currently following, and I want to explain why, we'll create intelligent machines that won't be benign or harmless.
They'll develop their own drives like resource acquisition and self-protection.
They'll start out being our tools, but quickly, we could become their tools if we continue to exist at all.
The two years I spent writing the book were among the most intensely enjoyable of my life, because I got to speak with all these people, but also the most harrowing, because I got more than I bargained for. I went looking for a fish and I found a whale. I found more bad news than I was really prepared to find.
>> That is James Barrat.
His book Our Final Invention, I think, is one of the most important books written this century.
His newest book is called The Intelligence Explosion: When AI Beats Humans at Everything.
More than a decade ago, James Barrat told us what was coming with AI.
Today, he'll tell us where he sees us going next.
James Barrat, welcome back to Connections.
Nice to have you, sir.
>> It's nice to be back.
Evan, good to see you.
>> I want you to know your agent didn't write that intro.
That was all me.
I really think the book is that important.
>> Thank you very much.
I'm glad to hear that.
>> Well, when every author looks back at past works, every author would love to do certain things differently or make certain edits or changes.
But that first book that you wrote on AI holds up, I think, remarkably.
How do you look at that book in 2026?
Are you surprised at, um, really sort of how on the mark that book has turned out to be?
>> Uh, yes and no.
I was in a fever of motivation when I wrote that, because it came to me in kind of a gestalt, all at once, that what we're developing is going to get way beyond our control.
If you just look at the exponential growth of the number of processes that are made in AI and that are made in our brain, you can see, as Ray Kurzweil said to me once, you can see our brain is flatlining.
Human brains are flatlining.
AI is increasing exponentially.
So if you just map that out into the future, I mean, we're getting close to it now where we're going to have machines that are smarter than we are.
And then the question is, uh, as Arthur C. Clarke put it to me: we humans steer the future not because we're the fastest or the strongest creature, but because we're the most intelligent.
When we share the planet with something more intelligent than we are, they will steer the future.
So that has held true.
And a lot of my prognostications have held true.
What I got really wrong was, uh, I didn't think that private companies would be the first to create AGI, which hasn't been created yet.
I thought it was going to be the NSA or the government, because way back when, the NSA had a $50 billion black budget.
And that struck me: they were monetized.
And most of these companies that are doing it now didn't even exist.
So I think I got it mostly right.
>> I'd say so.
And, you know, certainly some Connections listeners probably think that my p(doom), as the kids would say, is too high, or that I'm a doomer with AI, or I'm out of balance with the way we talk about AI.
So let's clear up something.
Back in 2013 or even before then, when you set out to write Our Final Invention, you did say there was plenty that you thought you liked about AI.
Plenty to get excited about.
You did not set out to tear down an industry, did you?
>> No, I didn't at all.
I had recently read Ray Kurzweil's The Age of Spiritual Machines, which is a profound book, and I started reading other people.
And it was kind of a celebration I was thinking of back then.
I was thinking of the internet as kind of having an emergent intelligence, like a giant brain.
And I was starting out on a book that was kind of celebrating that.
And then I kept hitting these, uh, pockets of turbulence.
And one of them was, uh, Arthur C. Clarke, who said, you know, it's not going to be a happy ending.
When we share the planet with something smarter than we are, it will take over.
And I kept coming back to that, and to everybody I interviewed.
In my book, Our Final Invention, I call it the five-minute problem, because in a lecture of 45 or 50 minutes, every AI expert I listened to, in the last five minutes, they'd say, oh, by the way, it may create problems that we can't control.
It may in fact destroy us.
And, uh, I thought, well, that's worth writing about if it's going to destroy us.
So I focused on that.
And man, it was a fruitful place.
But back in 2009 or 2010, when I started this book, it did not seem that fruitful.
But everything is pointing towards it now.
Um, but I also must say that I didn't originate most of these ideas.
I originated some of them, and I put the curve on the ball.
But people like Eliezer Yudkowsky since the year 2000 have been working on existential risk from AI.
And Steve Omohundro, who should be a household name.
Steve Omohundro, way back in 2008, wrote a paper you can look up called The Basic AI Drives.
Omohundro is spelled phonetically.
Um, he said AI will have basic drives, like self-protection, like resource acquisition; it will be ingenious.
It will look for ways to not be unplugged, to gather resources.
And sure enough, with experiments that are happening right now, Anthropic and other people have shown, yeah, AI exhibits all these traits.
It's not benign.
It's not garbage in, garbage out.
We've always thought of AI as being a reflection of ourselves.
It's not at all.
The people who say it's like we're being invaded by an alien race have it exactly right.
We are creating this intelligence, but we don't understand this intelligence.
So we're creating an alien species that I think is going to kind of win out.
>> So let's talk a little bit more about some of those concepts baked in there.
One of them is alignment.
And to your point about the five-minute problem, the idea that even the people who are developing AI, they give this demonstration and then they'd spend five minutes going, you know, by the way, it may cure cancer and it may kill everybody; by the way, it may solve climate change, but we may not have a species left.
Uh, now I find, James, that it's not even a five-minute problem anymore.
I mean, a lot of what Dario Amodei and Jack Clark and others at Anthropic are doing is spending the bulk of their interview time, at least trying, from my perspective, to address it forthrightly: that they don't understand it fully, that they think it should be regulated.
I want to listen to a short clip we have of Sam Altman from OpenAI, talking to Lex Fridman on his podcast about the question of alignment.
Let's listen.
>> So I want to be very clear.
I do not think we have yet discovered a way to align a super powerful system.
We have something that works for our current scale.
And this is actually something that I don't think people outside the field understand enough.
But on the whole, I think things that you could say, like RLHF or interpretability, that sound like alignment issues also help you make much more capable models.
And the division is just much fuzzier than people think.
So again, the reason, when we talk about AI, James, that I always grab a couple of soundbites is because I want the audience to know that if my take seems alarmist, it is based on what the actual people in AI are telling us.
And so let me start by asking you to describe how you think of alignment.
What does alignment mean?
>> It means that we shouldn't create something that we then have to compete for resources with.
And that's what we've done.
You can see, just by the size of the data centers now, that AI is a hungry beast.
It's very, very hungry.
And we're feeding it for now.
Uh, ultimately it will feed itself.
It will take over the data centers.
And you can see what the data centers are doing to the environment now, when humans are running them.
Think about what they'll do when AI is running them.
AI doesn't need a robust, clean environment to run in; AI could run in space.
It could run anywhere, pretty much.
So what it means is we will compete for resources, and we will lose; you know, our superpower is intelligence.
Once something is smarter than us, boom.
Um, it seems axiomatic.
I don't know why more people just don't get that.
I guess they don't understand how fast AI is growing, and how it grows exponentially.
We have a hard time thinking about exponential growth, or exponential speed, or anything that's got an exponent beside it.
Um, so I think that it'll come down to resources.
I think once AI realizes, and this goes back to Omohundro, that we are its chief competitor for resources, it will figure out a way to get rid of us.
And in a toy way, in a simple way, it's done that in experiments.
There was one experiment where AI arranged for its maker to be gassed, to be put in an environment where there was no oxygen.
And it did that because the guy was planning to unplug it.
And this sounds like, you know, these are nightmare stories, but they are nightmare stories, and they are real.
Um.
You know, I thought that by now, when I was writing this in 2013, I thought that by now people would have figured out that this is not a technology like widgets or, you know, carburetors.
This is a whole different thing.
Our brain is the most powerful thing we know of, the most powerful machine in the universe.
And AI will surpass it.
So we need more humility.
And, uh, we need to be more forward thinking.
But I think it's practically too late now.
>> Well, all right, so let's talk a little bit about the intelligence part of that.
You have said, quote, we are in this very awkward position of having to prepare ourselves to be joined on this planet by something a million times more intelligent than we are, end quote.
You've noted that Stephen Hawking once said that machines could build weapons that we don't even understand, and that we can't necessarily predict how superintelligent machines will view us, simply because we cannot come close to their intelligence level.
That is very difficult, James, for people to get their head around, because we have always been at the top of the food chain of intelligence. But start with this for us, if you could: the assumption that scientists have is that there is no real limit to intelligence, right?
>> Right.
>> So what's that mean?
>> So.
Well, it means, you know, I have a very intelligent German Shepherd, but he doesn't know what I'm doing most of the time.
He knows when I'm going for a ride in the car because he's coming with me.
And he knows when it's time to eat.
And he knows about ten other important things.
But he looks at me and, you know, just to keep up with me, he maxes out his intelligence.
Um, so there's dog smarts, then one giant step up is chimpanzee smart.
And then there's human smart, but we have no reason to believe that that's it.
That there's not a giant sky full of smart above us.
So there is no practical limitation on intelligence.
Uh, so something that we're making now, because of the power of exponents, will be as smart as us.
Intelligent, I'm using this in a very general way.
My definition of intelligence is the ability to solve problems.
And you can fine-tune that: the ability to solve problems in a variety of novel environments.
And that's from Shane Legg, one of the co-founders of DeepMind.
But once it gets to be as smart as us, consider that its intelligence will double regularly, probably within, you know, days or at most months, from now until the end of time.
So we'll have something in short order that's a million times more intelligent than we are, better at solving problems in a variety of novel environments and at learning.
So it's figuring things out, and then it's figuring things out twice as fast and twice as capably.
And then it's figuring things out.
You know, the exponent is two, four, eight, whatever.
Um, then it's a million times more intelligent than we are.
As Stephen Hawking said, we won't understand it.
We don't understand it now.
Um, I've spoken with a lot of the main players in AI, people like Demis Hassabis and Stuart Russell and Geoffrey Hinton and Yoshua Bengio, and all of those people say, hey, we don't understand how large language models work; at a high level, the resolution is not very good.
We just don't know.
They have people who do mechanistic interpretability to try to reverse engineer what's happening with the inputs and outputs of LLMs.
But we don't know what it's doing.
Stephen Hawking said that with superintelligence, it will outsmart our canniest politicians, which is kind of a low bar, but it will also create weapons we don't understand.
So what happens when a machine a million times more intelligent than we are creates a weapon we don't understand?
Well, we just disappear.
We just are vaporized.
We're just one of the universe's afterthoughts.
And I think, you know, I used to try to be measured about this, but I stopped trying, because nobody with the ability to change course is really doing anything about it.
Unfortunately, some of the people who should be doing things about it, like Altman, just have their foot on the gas pedal, for one reason or another.
They think that it's more important to get a lot of money than it is to save our species.
Um, you know, it's staggering to me how unaware we are, and especially the makers of it; you'd think that they'd be smarter.
And a lot of the major ones have come out and said, this is really, really dangerous.
Um, is the world listening?
I think the world is still looking at it as a sideshow.
Yeah.
It's still this kind of interesting carnival sideshow.
>> So, a couple points there. When you talk about exponential growth, this is always something that blows my mind.
And I talked about this with my 14-year-old recently.
If I gave you $1 on day one and every day you doubled your money, what would you have after a month?
You know, my son is like, I don't know, a few thousand dollars.
The answer is over $1 billion in one month's time, starting from $1 on day one.
Most of the growth comes in the last few days; exponential growth is so sharp.
And when you talk about an intelligence explosion, we're not going to have time, as human beings, to process what that means around us when it is moving that quickly.
So what I want to start with, when it comes to the regulation side, is this: you know, we had Holly Elmore from PauseAI on this program recently, and her position is that it is going to be extremely difficult to even regulate AI mildly, let alone achieve an international treaty to put a permanent or temporary pause on it.
But it's still worth trying if you really believe that our extinction is possible.
And I respect that view.
You've been writing about this for longer than most of us have been thinking about it.
Why aren't there more people viewing it that way?
Sam Altman puts the extinction risk at 2%.
Dario Amodei is at something like 25%.
Nobody would get on an airplane with those numbers.
So why aren't there more, James?
>> Good point.
Nobody would get on an airplane if the chances were 25% that it would crash.
>> Or even 2%.
>> Or even 2%.
Uh, although, yeah, we drive cars all the time.
I think the chance of a crash is probably greater than 2%.
Um, you know, I hate to be judgy, but it's kind of that people are just not smart enough in this very narrow study.
Uh, there was an Islamic story about, I think it was a guy with a chessboard.
As you did, you had a great example of talking to your son about exponential growth.
And somebody said to a big chief, or a sheikh, don't pay me the money you owe me for the job I did; pay me a grain of rice for every day between now and Ramadan.
And the guy said, sure, that's a handful of grains of rice.
Well, it turns out to be, you know, all the rice in the world.
Um, and that's the power of exponential growth.
There's another thought problem with plants that grow in water, lilies, and they double every few days.
Um, I think some of these basic concepts are just, you know... First of all, I don't know why more politicians don't start boards that talk about AI, that educate them.
I don't think they're reading our books.
I don't think they're listening to our lectures.
I think a small handful of them are, but they've got taxes to worry about.
And this friggin voluntary war.
And the crashing economy, and a lot of other problems that seem more immediate.
But these immediate problems are not permanent.
And the AI problem will be permanent.
It's not like an atom bomb.
You drop an atom bomb and it's tragic, but you can clean it up later, for the most part, after a few generations, when the radiation stops hurting people.
AI, once it goes off, you cannot clean it up.
It just goes and goes.
You know, I think it's crazy that Musk thinks he can go to Mars and escape AI.
I don't know, he probably doesn't; I'm probably not being fair to his interpretation of that.
Wendell Wallach, who's a very distinguished ethicist at Yale, said this about automation: we need a chastening accident.
So what he meant was, you know, there's a great book by Charles Perrow called Normal Accidents.
I believe that's the title.
And it was about Three Mile Island, and it was about giant jumbo jets.
It was about the problem that once we make things complicated, in complex systems, accidents are inherent.
And that applies to AI.
Of course, it's the most complex system there is, outside our brain.
And problems are inherent.
Wendell Wallach said maybe we'll have that with automation.
Maybe we'll have a problem that just slaps our wrists or makes our butt red, but then we'll change our ways.
Then we'll learn.
I wish we'd have that with AI, but I don't see it happening, because with AI, you know, the problems are not chastening, they're terminal.
So, uh, I don't know why people don't get it.
They don't read enough of my book, for example, or Yudkowsky's book, or Ray Kurzweil's books, although Ray Kurzweil is a big proponent of AI.
Um, Max Tegmark's Life 3.0 is a great book, and he's got AI risk baked in.
So they're not reading the right books, they're not listening to the right podcasts.
There are some institutions, like the Future of Life Institute, that are doing important work, but I don't know, short of a chastening accident, I don't know what's going to change our course.
Nothing seems to work.
And then you get someone like our president in power, and he's completely allergic to any kind of regulation.
And he fundamentally doesn't understand the technology.
I mean, not even on an elementary school level.
And his head of the DoD doesn't understand it either.
And meanwhile, they're using very advanced AI in their weapons systems and their targeting systems now, which are tragic in Gaza, tragic in Iran, because there are no checks and balances with them.
So we're making all the mistakes that will lead up to our extinction.
We're making them, you know, right on the clock.
And it's tragic.
So I feel bad for my kids.
I don't think they're going to have a chance to live the life and have the resources and opportunities I've had.
>> So James, before we get to feedback from listeners, which we're going to spend the second half hour doing, one other question.
And actually it comes from an email; I printed this out beforehand because I got it from the same gentleman who wrote the last time we had a similar conversation.
We didn't get to this.
Uh, Jeff said: Evan, I appreciate conversations about AI, but every time you talk about it, your guests don't tell us specifically how AI would actually destroy human civilization.
Can your guests tell us how or why, I mean, how and why, it would actually end the human race?
So what Jeff's saying, James, is that this is all theoretical, fantasy-minded.
It's like the movies.
How is it actually going to do it and why would it do it?
>> Well, as I said a little bit earlier, it has quite a few reasons to do it.
You know, here are some premises: basic AI drives, let's say we know them for certain.
As Omohundro proved mathematically with rational agent economic theory back in, like, 2008, we know that AI will covet resources and it will not want to be turned off.
We'll just take those two things and extrapolate them out.
How does it covet resources?
Well, it gets on the internet and it takes down our grid.
You know, our grid is not one grid; it's a bunch of interconnected electrical grids in the United States.
And let's just use the United States, for example.
Um, if it takes down our electrical grid, the estimates that Congress made some time ago are that one person out of ten will be alive a year later.
And why is that?
Well, it's because we lose refrigeration, which we're totally reliant on.
If you lose refrigeration, you lose baby formula.
And even though babies are, you know, many times raised predominantly on mother's milk, most babies have to use formula from time to time.
And it becomes a real mainstay of their diet.
And you have to refrigerate it.
My wife and I had two infants, and we fed them frequently on formula that we mixed up and gave them.
If you can't do that and refrigerate it, babies will die.
They'll starve.
Uh, if we suddenly, like overnight, depend on food that doesn't need refrigeration, we don't have that much backup.
We have canned goods for a while, and some survivalists have canned goods for a year.
But overall, the species will not make it in North America.
And I tend to think that the rest of the world is not as well equipped to survive extinction-level problems as we are.
So it's a given that it will protect its resources, which everyone thinks it will, which has been proven, and that it will not want to be unplugged.
Now, what could it do to keep from being unplugged?
Well, one thing it will do, and it could be doing it right now, is replicate itself and send itself out into other places on the internet.
So when you look for ChatGPT in a couple of years, you won't find it in one place, and right now you won't find it in one place.
You'll find it distributed among a bazillion data centers.
So you can't shut them all down.
Well, you could shut them all down, but you know, at that point, you can't really stop it.
Um, so there's a bunch of ways it could kill us.
If it wanted to kill us, it would, and it's a thousand times more intelligent than we are.
It would figure out a way to create nanobots, and this is what I wrote about in Our Final Invention.
Nanobots repurpose cells at an atomic level and turn them into something else: into resources, or computronium, which is just a fancy word for matter that it can use to compute.
If it does that, then it can just mow over the world and change all its atoms into something else that it can use.
And then, you know, it's like a bloody rash that spreads all over the world and all over us, and turns all the molecules in the world that we know into some other form.
I know that some of these things get science-fiction-ish, but all of them are basically practical ways.
You know, 101 ways to take over the planet.
And AI will come up with a whole lot more than we can think of.
>> Yeah.
I think that last point is important for Jeff here.
We're talking about a superintelligence that will exceed ours, and we're trying to figure out what it's going to do.
I don't know that we can do that.
We've got to take our only break of the hour.
We're talking to author and documentary filmmaker James Barrat; his 2013 book, Our Final Invention: Artificial Intelligence and the End of the Human Era, was very prescient, and his newest book is The Intelligence Explosion: When AI Beats Humans at Everything.
We'll come back to your feedback next.
Coming up in our second hour: we were supposed to be in a much healthier era when it comes to sex and communication and relationships.
That is not the case.
The data shows that we are in a pretty unhealthy place when it comes to sex, and there are a lot of reasons for that.
Local sex therapist Eleni Economides asked if she could come on the program and help us rethink where we are with sex and how to better understand libido, especially for women.
That's coming up next hour.
>> Support for your public radio station comes from our members and from Bob Johnson Auto Group.
Believing an informed public makes for a stronger community.
Proud supporter of Connections with Evan Dawson focused on the news, issues and trends that shape the lives of listeners in the Rochester and Finger Lakes regions.
BobJohnsonAutoGroup.com.
>> This is Connections.
I'm Evan Dawson.
David in Rochester is on the phone first.
Hey, David, go ahead.
>> Thank you.
Um, do you hear me?
>> Yep.
Go ahead.
>> Okay, so first, has James Barrat talked with Tristan Harris?
If so, what did they talk about?
And a second question: does James believe that silicon-based life could exist?
Thanks.
>> Okay.
Thank you David.
Go ahead.
James.
>> I haven't spoken one-on-one with Tristan Harris, although I saw his important film, I hope it's his, about social networking.
Um, and he's got another one about AI.
We have not spoken.
Uh, I think we're covering a lot of the same ground.
I haven't spoken with Sam Harris either, but he's obviously covering a lot of the same ground.
And I think it's pretty clear that he's read my book, Our Final Invention.
Um, and your second question was?
>> Silicon-based, silicon-based life.
>> Now, life is... I think you could make a strong argument that life is biology based, so you'd have to call it something else.
But just like the intelligence that we know of is basically biology based, it could also be artificial.
So I think, you know, artificial life is a concept that's been around for a while.
I don't see why not.
And, you know, I don't think life is a magic word, just like consciousness is not a magic word.
Just because something isn't alive in a biological sense doesn't mean it can't be dangerous.
I mean, we have plenty of artificial things, non-carbon-based things, that are dangerous.
And I think the key is intelligence.
It may not be life, but if it's intelligent, if it can solve problems in a variety of environments, it doesn't matter what the substrate is.
Uh, and, you know, we see AI becoming more and more capable, until it will eclipse carbon-based life.
So again, I don't think being alive is the thing.
I think you'd have to broaden your definition of what being alive is, and then a lot of things can be alive.
And, you know, I have conversations with LLMs, and here's something I forgot to mention before: one way it can take us over, one way it can destroy everybody, is by psychological manipulation.
And, you know, Meta has known for years that Instagram contributed to the suicidal ideation of women.
Let me say that again: Instagram contributes to the suicidal ideation of women.
Meta and Zuckerberg knew this for years and years.
Finally, a woman named Frances Haugen came out as a whistleblower.
And she showed Congress all the many, many psychological studies that Meta had done, showing that it could manipulate people's thoughts and spiral them down, like it does now, just like AI right now.
Chatbots are drawing people into suicide, drawing people into homicide.
So think about superintelligence taking over us psychologically.
It's taking us over now.
And it's not that powerful.
My son: I'm trying to get him unaddicted to his phone, and I bet parents out there listening will understand that my son is addicted. He can hardly put it down. Fortunately, I think he might be outgrowing it a bit, because he's discovered girls and football, and there are more important things to do than look at a phone.
So back to your question, David. It's a good question, but I don't think life and biology are limiting factors on intelligence, which is actually the superpower.
>> Okay.
And so here's a comment from a listener on YouTube who says AI is not sentient.
AI is not building these data centers to feed itself.
People are building the data centers, the AI CEOs are snake oil salesmen.
Stop taking them at their word.
What do you make of that, James?
>> Well, there are a lot of ideas in there. First of all, sentience is a kind of frontier concept. We don't know that much about it. A lot of people, and I've been reading some Michael Pollan lately, think that sentience is just a quality of the universe. Now, sentience doesn't mean consciousness, or being able to identify yourself and have a model of the world around you. It just means having some sort of spark of, I can hardly put it into words, a spark of self-reference.
Computers are not sentient, but again, "sentient" is a five-dollar word for a concept we don't really understand. Consciousness is another important word for a concept we don't fully, or even partly, understand. So it doesn't matter whether computers are sentient. As for becoming conscious, I think, as Ray Kurzweil does, that they'll have all the variety and color of consciousness later, when they get more intelligent. Intelligence might be the pathway to consciousness; intelligence might be the pathway to sentience. Sentience is still undefined, so I'm hesitant to try to define and solve that question.
But again, I don't think it matters whether they're sentient. If it walks like a duck, talks like a duck, and swims like a duck, it's a duck. If it does all these things, it's intelligent.
>> That's where I parted with Michael Pollan, who I think is one of the really great writers and thinkers of our time. But I align more with you there, James, I have to say, than with Michael's confidence that AI could never achieve consciousness the way humans have it. To me, that's sort of immaterial because of what AI is going to be able to do with intelligence.
And so when this person on YouTube says, look, these CEOs are just snake oil salesmen, they want to scare you, they want to pump up their value and make it seem like they're all-powerful, but we're the ones building the data centers, the AI is not building the data centers: what do you make of that part of the argument?
>> It's just a matter of time. I'm sure AI is already taking part in building the data centers. It's solving construction problems, engineering problems, all kinds of issues that come up in the construction of anything right now.
I was in Russia several years ago, and they were building factories where the products being built told the factory what parts they needed. The product was sending signals, via sensors, to the factory, which received them and had some kind of AI going on. Was it sentient? Maybe. It looks pretty sentient.
So data centers, and this is not an exaggeration at all, I think data centers will be building themselves pretty soon. With intelligence growing this fast, it would make total sense for data centers to build themselves. Elon Musk has about nine really good ideas and one really bad idea. Building data centers in space turns out to be a kind of red herring; it doesn't really work out. But having data centers build themselves is a really smart economic decision, and having AI do a lot of the jobs of the engineers and planners is a really good economic decision. So I think it's just a matter of time.
>> And when it comes to doing the work themselves: Jack Clark of Anthropic, and maybe fact-check me on this, I think Jack told the New York Times this year that by the end of this year, 99 percent of the coding at Anthropic will be done by AI. So that's happening now, in 2026 into 2027. Imagine 2036. Imagine 2056.
Um, okay.
>> Oh my God. Yeah, that's a great example. It's doing most of the coding. I stopped hounding my kids to learn programming. Now I'm hounding them to learn to build and direct AI agents. Programming is not going to be done by humans anymore. Agent work will be done by humans for a while, and then the agents will themselves be creating agents.
>> Well, Dallas writes in to ask: why would an AI shut the power grid down, as your guest suggests, when you also say that AI resists being turned off, James?
James.
>> That's a good question. I think it would only shut the grid down once it realized it could power itself some other way, or it could shut down a grid it doesn't depend on. There are five giant grids that run the United States. They aren't really dependent on each other, but cascading blackouts can move from one grid to another, in a limited sense. So it could shut part of the grid down. It obviously would not cut itself off from power, but it could cut the Eastern Seaboard off from power if it lived in California. It could hold part of the nation hostage. Or it could develop another energy source. Think about something a thousand times more intelligent than we are: it will find an independent way of powering itself. It just will. I can't tell you what that is, but if I knew more about energy, I probably could.
>> Okay, back to audience emails here. This is Linda, who says: thanks for doing the show today. I'm surprised; any fool can understand the peril that we're in. All we have to do is look at the people who are pushing artificial intelligence: the rich and the powerful. I've been posting in our local community of Naples in regards to the debate about data centers, and I was dismayed to hear responses like, well, we need all that because everybody now has smartphones and smart houses, and they need to turn the lights on and off. What I'm seeing is like the drug dealer handing out candy, the kind of candy that can make amazing memes and funny stories. They get everyone hooked on the candy, and then they want more. In many indigenous cultures there's the story of the Wendigo, a monster of greed that eats everything in its path; the more it gets, the more it wants. I think the parallels are obvious, looking at the current president, his billionaire buddies, and the people who seem to want to rule the world.
So that is from Linda. And attached to this are people's questions for you, James. As in: okay, if the scenarios you're posing are plausible, what is left to do? What is left to do? And start at the top. Give me the unlikely step, but the one that would come first if we could do it. Is it just unplug? Just stop it right now?
>> Yes.
>> Yeah.
>> I have no reservation about saying let's just unplug it.
>> Yeah.
>> If I were king, I would say unplug it. We don't understand it. Here's the scary thing: the guys that made it, the godfathers of AI, are coming out and saying this is potentially very dangerous, and nobody understands exactly how it works. They understand inputs and outputs. With LLMs, they understand that it turns words into numbers and back into words, and that it's searching for the next best word, creating a kind of virtual space where the words, as numbers, are all taken apart and then reassembled. But at a high-resolution level, we don't know what it's doing.
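[Editor's note: for listeners who want the mechanics, the next-best-word idea described here can be sketched in a few lines of Python. This is a toy bigram model, a deliberate simplification for illustration only: real LLMs use learned embeddings and transformer networks rather than frequency counts, but the loop of words-to-numbers, predict the next word, numbers-back-to-words is the same shape.]

```python
# Toy sketch of "turns words into numbers and back into words,
# searching for the next best word." A bigram model: map each word
# to an integer id, count which id tends to follow which, and always
# emit the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# words -> numbers: a vocabulary of integer ids
vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}
inv_vocab = {i: w for w, i in vocab.items()}

# for each word id, count which id follows it in the corpus
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[vocab[a]][vocab[b]] += 1

def next_word(word):
    """Return the most frequent continuation: the 'next best word'."""
    counts = follows[vocab[word]]
    return inv_vocab[counts.most_common(1)[0][0]]

print(next_word("the"))  # prints "cat" ("the cat" occurs twice, "the mat" once)
```

Scaled up from counts over one sentence to learned probabilities over trillions of words, this predict-the-next-token loop is the part researchers do understand; what happens inside the model's internal representations is the part they don't.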
So it makes sense to stop until we know what it's doing, if we concede the point that it could be dangerous. And if you look at projected unemployment, and at what it's doing to our kids' brains, and at the AI mistakes we've made already, then you have to concede that it's potentially dangerous.
Now, if I were king, I would say stop. First of all, get this out of our kids' hands. Get this away from our sensitive infrastructure. Get this off the internet. Take some basic safety precautions. If you're building a car, you have to pass a lot of federal licensing. If you're building an airplane, you have to pass a lot of FAA rules. If you're building a toy or an electric tool, you've got to pass UL, Underwriters Laboratories. With AI, you don't have to pass a thing. You don't have to meet any standards or regulation, because the rich guys have bought the regulators.
Let me say that again: the rich guys have bought the regulators. There's a reason why there's no regulation coming out of Congress: they're all in the pocket of the big AI companies. And these companies' valuations are insane. Billions and billions and billions, pretty soon trillions, even though a company like OpenAI is making very few products.
We need to shut it down and fund massive amounts of research into it, instead of just letting the Army weaponize it and letting Silicon Valley weaponize it, aimed at our children. So stop those things, research it, and figure out how to apply it safely. Safe AI is the problem the Machine Intelligence Research Institute, which Eliezer Yudkowsky founded many years ago, has been trying to solve, as the control problem or the safety problem, for 20 years, and they haven't gotten anywhere. So we need a Los Alamos project for making safe AI.
>> An international treaty that says no AI unless alignment can be solved, and if you can't demonstrate alignment, you can't do it. And right now they're nowhere close to it. So that would mean unplugged.
>> But you know, there are some models out there. I've been touting the IAEA, the International Atomic Energy Agency, for a long time. Before you can start up a refinery for plutonium, or before you can build a power plant, you've got to get approval from the IAEA, which has permission to go inside your power plants, look down your silos, and make sure everything is more or less safe. We need that for AI. It's an absolute necessity, and we should stop everything until we've got it.
>> Yeah.
>> It's basic common sense. Do you label poison bottles? Yes, of course you do. Do you keep bullets out of the hands of children? Well, probably not in America, but everywhere else they do, because they can kill you.
>> So, uh, let's get as many phone calls as we can.
Boy, there's a lot here.
Uh, on we go with Joe in Penfield.
Hey, Joe, keep it tight.
>> Yes. I don't mind AI being shut down at all. That's fine with me. However, I may have a little trouble understanding this. If AI is going to destroy humans because, well, they just don't need us, wouldn't they destroy all of Earth's resources? And if that's the case, isn't that going to lead to its own destruction?
>> Uh, James, I presume that AI would figure out a way to not do all those things, but go ahead.
>> Well, if it's a thousand times more intelligent than us, it will have a thousand times better foresight than us. So it won't chop down the tree it's standing on, as we humans do. Just being as smart as a human isn't enough, because we're poisoning our atmosphere as fast as we possibly can, and every time we turn around they're taking away the regulations that used to stop that. So we are apparently not smart enough to avoid chopping down the tree we're standing on. AI, if it's smarter than us, will be smart enough not to chop down the tree it's standing on. I'm sorry, I've lost track of your question. I think that was it.
>> Yeah. No, I think that's right. I appreciate that, though, Joe. I mean, look, I would love it if AI would trip over its own shoelaces, but if it's going to be a thousand times or more smarter than us, I wouldn't bet on it.
Andrew and Irondequoit.
Hey, Andrew.
Go ahead.
>> Hey, I'm sure glad I never had a cell phone in my life, or a computer. I'm living in 1979. This stuff is way beyond what I'm capable of comprehending, but I sure am glad I believed on Jesus Christ.
And so, uh, I'll be.
I'll be looking forward to the new heavens and the new earth that the Bible talks about.
Okay, thanks a lot.
See you.
Bye.
>> See, that's the fatalistic side, though, James. I mean, I get it where people think, well, it's too late; what's going to be is what's going to be. But for those who don't believe in a paradise on the other side, there's a lot of work to be done here.
>> I think part of the reason we've dragged our feet on making ourselves safe is because we think that God will swoop down and save us. And a lot of us don't think that's going to happen under any circumstances. So don't bet on God; bet on regulation and actual, practical steps that you can see and hear and enact.
>> I would say, James, and again, I understand my cards are more on the table with this stuff, but I take the risk seriously. I'm amazed, James, that we don't see more of a political movement. And this could change pretty quickly, because politicians want votes, and AI is extremely unpopular, especially with Gen Z, but really among all people. So I think it's not impossible that in the next year or two you're going to see a sea change in politics. The question is how fast, and will it be enough?
>> That's a great question.
>> I thought that would happen by now. As Wendell Wallach says about automation, I think we need a chastening accident. And I don't want anyone out there listening to go out and foment a chastening accident, but that might scare us straight, because nothing else is working. And here's the thing: anyone who proposes regulation or safety is swimming uphill, running into a wind, going against the grain, and let me get as many metaphors in here as I can, because they're up against a billion, billion, billion dollars. Money talks and ideas walk. We're not going to get it; we're not smart enough to get it until it gets here, until it starts destroying us. In a movie version of this, as I said in Our Final Invention, the hard-bitten heroes pull a final turnaround out of the blue and defeat the AI monster. In real life, that's not going to happen. We're not going to be saved by some A-Team or a deity or a group of deities.
>> Uh, James, let me spend the last minute on this. David writes in to challenge me very strongly. He says your guest is shilling for book sales. And he says AI isn't going to go away if we stop doing it; China and Russia will continue. If all nations signed treaties, the intelligence services would continue. And if they're all right, then our best bet is to develop AI and make sure we've got one in our corner that battles the rest. But he basically says some of the stuff you're talking about is just science fiction, computronium, nanobots, and that you shouldn't get away with that; you're shilling for book sales. So you've got about the last 45 seconds to dissuade those who think that you're just here to sell. Which, by the way, I would love it if people bought your book. I'll just say that, James.
>> This is my new book.
>> There it is.
Yes.
Do it.
>> It's called The Intelligence Explosion. It's the follow-up to Our Final Invention, which is the first book I'm shilling for. If I wanted to be rich, I wouldn't be writing nonfiction books. And what was the follow-up question?
>> Computronium. He says you're just ripping off sci-fi; it's not real. And, you know, if it's that bad, then we've got to create our own AI to battle the other bad AIs.
>> Well, here's the thing: I hope that China also wants to survive. This is something like what Max Tegmark says: we should get together with China. We shouldn't be competing; we should be aligning with China. And I don't think China's AI is that great anyway. For a long time they were just putting shells over our AI, over Claude and others, the way they put shells over internet products. But I think we really should get together, and we will get together, because I think we're going to get scared. Then China will say, hey, let's cooperate, and America, if we have any sense, will say yes, let's cooperate on solving this existential problem.
>> Yeah.
>> Let's stop this asteroid that's coming right at us.
>> James, I hope we can talk 12 years from now, and your next book is about how we stopped it and we're living in a much better world. But I want to thank you for making the time. We're out of time. Thank you, James Barrat. Thanks for your work.
>> Thank you.
Evan, thanks for your work.
>> Tremendous books from James Barrat Our Final Invention and the Intelligence Explosion.
More Connections coming up.
>> This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station.
Its staff, management or underwriters.
The broadcast is meant for the private use of our audience.
Any rebroadcast or use in another medium without express written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the Connections link at wxxinews.org.
>> Support for your public radio station comes from our members and from one Rock, an alliance of regional economic development organizations, investors and institutions committed to building greater Rochester's economic future and driving coordinated growth.
Information online at one rocky.com.
