Cascade PBS Ideas Festival
Radio Atlantic: AI Elections 1.0
Season 1 Episode 9 | 27m 43s | Video has Closed Captions
Hanna Rosin and Charlie Warzel discuss the collision of AI technology and elections.
This year, two events will collide: AI voice replicas accurate enough to fool your best friend will be easier than ever to use; and half the world’s population will undergo an election. Hanna Rosin, host of Radio Atlantic, and Charlie Warzel, an Atlantic staff writer covering technology, discuss AI's potential to harass, commit fraud and sow political discord.
Cascade PBS Ideas Festival is a local public television program presented by Cascade PBS
- [Sponsor] Did my legs shrink?
I can move, though.
I mean, I knew Alaska Airlines Premium Class had extra leg room, but this, this feels different.
Okay, crazy idea.
On the count of three, I'm going to try and cross my, oh boy, that's nice!
Woo-hoo!
(upbeat music) (mellow music) - [Narrator] And now, the Cascade PBS Ideas Festival, featuring journalists, newsmakers, and innovators from around the country in conversation about the issues making headlines.
Thank you for joining us for Radio Atlantic, AI Elections 1.0 with Charlie Warzel, moderated by Hanna Rosin.
Before we begin, a special thank you to our stage sponsor, Alaska Airlines.
And our founding sponsor, the Kerry and Linda Killinger Foundation.
Finally, thank you to our host sponsor, Amazon.
(audience applauding) - Hello everyone.
Welcome to the Cascade PBS Ideas Festival.
I'm Hanna Rosin, host of Radio Atlantic, and I'm joined today by Charlie Warzel, who's an "Atlantic" staff writer who covers technology.
Charlie, thank you for joining us.
- Thank you for having me.
- Today we're gonna talk about AI.
We're all aware that there's this thing barreling towards us called AI that's gonna lead to huge changes in our world.
You've probably heard something, seen something about deep fakes.
And then the next big word I wanna put in the room is election interference.
Today we're gonna connect the dots between those three big ideas and bring them a little closer to us, because there are two important truths that you need to know about this coming year.
One is that it is extremely easy, by which I mean $10 a month easy, to clone your own voice and possibly anybody's voice well enough to fool your mother.
Now, why do I know this?
Because I cloned my voice and I fooled my mother.
And I also fooled my partner, and I fooled my son.
You can clone your voice so well now that it really, really, really sounds a lot like you or the other person.
And the second fact that it's important to know about this year is that about half the world's population is about to undergo an election.
So those two facts together can lead to some chaos.
And that's something Charlie's been following for a while.
Now we've already had our first taste of AI voice election chaos.
That came ahead of the New Hampshire Democratic primary in January.
Charlie, tell us what happened there.
- Ahead of the New Hampshire Democratic primary in January, a bunch of voters, I think it was about 5,000 people, got a phone call.
And it would say robocall when you pick it up, which is standard if you live in a state doing a primary.
And the voice on the other end of the line was this kind of grainy, but real sounding voice of Joe Biden urging people not to go out and vote in the primary that was coming up on Tuesday.
The reason is a little complicated, but it had to do with, you know, like counting for Democrats in a, you know, anyway.
It was, 5,000 people got this, 5,000 people thought that the president of the United States had recorded this message, and that this was sort of the Democratic party line.
- Let's, before we keep talking about it, listen to the robocall.
Okay, we're gonna play it.
- [Biden AI] Republicans have been trying to push nonpartisan and Democratic voters to participate in their primary.
What a bunch of malarkey.
We know the value of voting Democratic when our votes count.
It's important that you save your vote for the November election.
We'll need your help in electing Democrats up and down the ticket.
Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.
Your vote makes a difference in November, not this Tuesday.
- I'm feeling that some of you are dubious, like that doesn't sound like Joe Biden.
Clap if you think it does not sound like Joe Biden.
(audience applauding) Oh, well, okay, somewhere in there.
So Charlie, is there any way to judge the impact of that call?
- So what's interesting about that is a lot of people were trying to figure out what, you know, who did this, right?
Whether it was, you know, some sort of nefarious, shadowy Republican organization or just like a trickster, just trolls on the internet.
- The Russians.
It's always the Russians.
- Hey, you never know.
And it was actually traced back to a Democratic strategist who paid a magician in New Orleans to make it.
And when everyone kind of confessed, when the, you know, the paper trail led back to them, what the Democratic strategist said was that I was actually just trying to test out these tools so that people could see it, expose it, and then raise awareness because people like us are gonna sit here and talk about it.
I am not going to, it sounds a little like, you know, when you get caught doing something and you have to come up with like a clever excuse, it's like, oh, I was just doing this because, you know, this is, I'm a good Samaritan.
- It's dangerous.
- I'm raising awareness for scams by scamming.
Rather suspect.
- Right.
When you heard that call, did you think uh-oh, here it comes.
Like what was the lesson you took from that call?
Or did you think, oh, this got solved in a second, and so we don't have to worry about it?
- When I saw this, I was actually reporting out a feature for the "Atlantic" about the company, ElevenLabs, whose technology was used to make that phone call.
So it was very resonant for me.
You know, when I started writing, I've been writing about deepfakes and things like that for quite a while.
I mean, in internet time, like since 2017.
But there's always been this feeling of, you know, what is the actual level of concern that I should have here?
Like, what is theoretical?
With technology and especially with misinformation stuff, we tend to, you know, talk and freak out about the theoretical so much, that sometimes we're not really talking about and thinking, grounding it in plausibility.
So with this, I was actually trying to get a sense of, you know, is this something that would actually have any real sway in the primary?
Like, did it go out to the right amount of people?
But also did people believe it, right?
Sort of what you just asked the audience, which is, is this plausible?
And I think when you're sitting here listening to this, you know, with hindsight, and knowing, you know, trying to evaluate, that's one thing.
But the call went out in the morning on a Sunday morning.
So people who are, you know, just having coffee, maybe haven't woken up yet, like, are you really gonna question, like at this moment in time, if you're getting that, especially if you, you know, aren't paying close attention to technology, are you really gonna like, be thinking about that?
Whether it's a robot on the other end of the line?
This software is, it's still working out some of the kinks, but I think the believability has crossed this threshold that is, you know, alarming.
- Okay, we're gonna test you guys now.
What we're gonna do is we have, this is the game, if maybe you played it as a kid, Two Reals and a Fake.
We're gonna play three clips of Joe Biden, your president talking.
One of these clips, and they'll just go by number one, number two, number three.
At the end, I'm gonna ask you which one you think is the fake one.
Okay?
So there are three separate short clips.
Okay, let's play the Biden clips.
- [Biden] My voice is a reflection of my values and my beliefs.
It's a voice that speaks to the heart of America.
A voice that stands up for what is right and a voice that has the power to bring people together.
- Number one.
- [Biden] I'm talking to you, I'm down here talking to these folks who are starting businesses, getting endorsed by minority businesses.
He's up with Marjorie Taylor Greene, north Georgia.
- Number two.
- [Biden] I had my mother and my father, my best friend in the world is my sister, my younger sister, my brother Jimmy, Frankie, the whole family.
I didn't wanna stay in the US Senate.
- Okay.
Charlie, you wanna guess first which one is fake?
No?
(laughing) I'm the only one who knows.
Charlie doesn't know which one is fake, so.
- I thought it was gonna be two, but then the end of the last one was really...
I'm gonna go two.
- Okay.
Audience, if you think number one was fake, clap.
(audience applauding) If you think number two was the fake, clap.
(audience applauding lightly) And if you think number three was the fake, clap.
(audience applauding lightly) I would say one and two were a tie.
- I think one sounded a little more robust to me.
- One sounded more robust?
You're right, it was number one.
That was the fake.
The fake was number one.
I would not have guessed number one if I didn't know the answer.
The rhythm of number one sounded totally, totally natural to me.
Anyway, you all see the problem, it sounds like Joe Biden.
It's a real problem.
So I mean that's just giving you a taste of how good this technology has gotten.
Okay, so Charlie went to visit the company that has brought us here.
And it's really interesting to look at them, because they did not set out to clone Joe Biden's voice.
They did not set out, obviously nobody sets out to run fake robocalls.
So getting behind that fortress and learning like, who are these people, what do they want was an interesting adventure.
I was very surprised, so it's called ElevenLabs.
And by the way, the "Atlantic", I will say, uses ElevenLabs to read out some articles in our magazine.
So just so you know that, a disclaimer.
I was really surprised to learn that it was a small company.
Like, I would expect that it was Google who crossed this threshold, but not this small company in London.
How did that happen?
- So one of the most interesting things I learned when I was there, I was interested in them because they were small and because they had produced this tech that is, I think, better than everyone else.
What I learned when I was there talking to them is they talked about their engineering team.
Their engineering team is seven people.
- Seven?
- Yeah.
So it's like former, this is the engineering research, I guess I should say, engineering research team.
And it's like former academics, a couple people poached from some bigger companies, a college dropout who was working at a call center when they got him, who won a coding competition to get in.
It's this small little team.
And they describe them almost as like, these brains in a tank that would just like, they would say, hey, you know, what we really wanna do is we want to create a dubbing, like part of our technology, right?
Where you can feed it video of a movie in, you know, Chinese, right?
And it will just sort of almost in real time, running it through the technology, dub it out in English or, you know, you name the language.
- Is that because dubbing is historically tragic and so they wanna make dubbing-- - It's quite bad.
It's quite flat in a lot of places.
Obviously if you live in a couple of like the big markets, you can get some good voice acting in the dubbing.
But like in Poland where these guys are from, it is all dubbed in a completely flat, they're called lectors, that's the name for it.
But like, when "The Real Housewives" was dubbed into Polish, it was one male voice that just spoke like this.
- Oh my God, that's amazing.
- For all the real housewives.
So that's like a good example of like, this isn't good.
- That's incredible.
- And so people like, you know, watching US cinema or TV in Poland is kind of like a grinding, terrible experience.
So they wanted to change things like that.
- So these guys did not, like a lot of founders, they did not set out to disrupt the election.
They probably have a dream besides just better dubbing.
What is their dream?
Like when they're sitting around and you got to enter their kind of brain space, what is the future, the magical future of many languages that they envision?
- The full dream is basically breaking down the walls of translation completely, right?
So there's this famous science fiction book, "The Hitchhiker's Guide to the Galaxy," where there's this thing called the Babel fish that, you know, can translate any language, you know, seamlessly in real time, so anyone can understand everyone.
That's what they ultimately wanna make.
They want to have this, you know, like the dubbing has a little bit of latency now, but it's getting faster.
That plus all the different, you know, voices.
And what they essentially want to do is create a tool at the end, you know, down the line that, you know, you can put an AirPod in your ear and you can go anywhere and everyone else has an AirPod in their ear, and you're talking, and so you can, you know, hear everything immediately in whatever language.
That's the end goal.
- So the beautiful dream, if you just take the purest version of it, is all peoples of the world will be able to communicate with each other.
- Yeah, when I started talking to them, because I, you know, living in America, I have a different experience than, you know, most of them are European, or the two founders are European.
You know, they said, you grow up and you have to like, you have to learn English in school, right?
And so there's only a few places where you don't grow up and you know, they say you also gotta learn English.
'Cause you know, if you want to go to university wherever, do whatever and participate in the world.
And they said, you know, what's the world gonna, if we do this, then you don't have to do that anymore.
- Ooh, there goes our hegemony.
- Imagine the time you would save of not having to learn this other language.
And it's interesting, right?
I mean, they also have these ideas for, you know, building a complete and total academic repository of every single accent in the entire world.
So you could take, you know, a Google Maps thing of the world, click on any space and then, you know, pull it open and you could see all the historical accents from that neighborhood, from that, you know, like zip code, whatever it is.
So there's a lot of weird things that could, you know, come out of this.
But right now it's the voice cloning, it's the dubbing.
- So did you talk to them?
Because here are you and I, two journalists, like they're thinking about Babel and this beautiful dream and we're thinking like, oh my God, like who's gonna scam my grandmother?
And like, who's gonna mess up my election?
Do they think about that?
Did you talk to them about that?
Like, how aware are they of the potential chaos coming down?
- They're very aware.
I mean, I've dealt with a lot of, in my career, like tech executives who are sort of, you know, they're not willing to really entertain the question, right?
Or if they do, it's kind of like glib or you know, sort of, there's a little bit of resentment, you can tell.
They were very, and I think because of their age, the CEO is 29, very like, earnest about it.
Like, they care a lot.
They obviously look at all this and see, they're not blinded by the opportunity, but the opportunity looms so large that, you know, these negative externalities are just problems they will solve, right?
Or that they can solve.
And so we had this conversation where I would, you know, I called it like the bad things, right?
And I just kept, you know, like, what are you gonna do about jobs this takes away?
What are you gonna do about, you know, all this misinformation stuff?
What are you gonna do about scams?
And you know, they have these ideas like digitally watermarking all voices, right?
Like working with all sorts of different companies to build, you know, a watermarking coalition.
So when you voice record something on your phone, that has its own sort of like metadata, right?
That says like, this came from Charlie's phone on this time, you know, like, this is real.
And people can quickly decode it, right?
So there's all these ideas.
But we like, I can't tell you, it was like smashing my head against a brick wall for an hour and a half with this really earnest nice person who's like, yeah, no, like it's gonna take a while before we, you know, societally all get used to all these different tools, not just ElevenLabs.
And I was like, and in the meantime?
And they're sort of like, they would never say it this way, but the vibe is sort of like, well you gotta, you know, break a lot of eggs to get the universal translation omelet situation.
- And that's the same story, right?
- But like, you know, some of those eggs might be like the 2024 election maybe, right?
That's a big egg.
- Right, right, right.
So it's the familiar story, but more earnest and more self-aware.
Do you guys wanna do another test?
Okay, you've been listening to me talk for a while.
We're gonna play you me saying the same thing twice.
One of them is me recorded.
I just recorded it.
Me, the human being in the flesh right here.
And one of them is my AI avatar saying this thing.
There's only two, I'm saying the same thing.
So we're gonna vote at the end for which one is fake AI Hanna.
Okay, let's play the two Hannas.
- [Hanna Recording] Charlie, how far do you think artificial intelligence is from being able to spit out a million warrior robots programmed to destroy humanity?
- [Hanna Recording] Charlie, how far do you think artificial intelligence is from being able to spit out a million warrior robots programmed to destroy humanity?
- Okay, who thinks that number one is fake Hanna?
(audience applauding lightly) Who thinks number two is fake Hanna?
(audience applauding) - It's pretty even.
- It's pretty even.
I would say two is more robust, and two is correct.
That's the fake one.
- Man, I'm 0 for 2.
- But man, it's close!
Like, Charlie's spent time at this place and he's gotten both of them wrong so far.
- We work together.
- It is really, we work together!
This is really, really close.
We only have one more test for you guys.
Anyway.
Okay, so you've been to this place, you've talked to these guys.
I think my impression is that no matter how well-meaning they are, we've been down this road before, you can't totally depend on the people making the technology to put in guardrails on the technology, not because they're evil, sometimes because they're evil, but because they're excited.
And it's just like, very exciting.
I mean, it's the same thing with all AI things.
It's like, you stumbled on the most amazing, world-transforming thing, and you can't be expected, for the most part, to restrain yourself.
So is there anything, you know, are there any ideas out there about regulation that seem like they're implementable?
- The short answer is no, because it's all so new and it's moving so fast.
And you know, I mean, there hasn't been any like, other than what might be happening down the line with TikTok, there hasn't been like, big tech regulation in the United States at least.
And we've been, you know, those problems are actually a little more in your face than this stuff right now.
I think that like, the only bulwark right now against this stuff is that I do think people are generally like pretty dubious now of most things.
Like, I do think there is just a general suspicion of stuff that happens online.
Basically like, I don't know how effective these things are yet because of the human element.
It seems like we have a little bit more of a defense now than we did, you know, let's say in 2016.
- Because we're savvier, because we're all talking to these people, because everybody knows that this is a possibility?
- Yeah, and I think a lot of people are just like kind of beaten down by like, the misinformation in the world and things like that, where they just are less, you know, willing to pick up the robocall, right?
You know, there's things just like that.
And I do think that time is our greatest asset here with all of this.
The problem is, you know, it only takes one, right?
It only takes some person, you know, in late October who puts out something just good enough, or early November, that it's the last thing someone sees before they go to the polls.
And it's too hard to debunk or that person doesn't see the debunking.
And these elections are getting closer and closer and it's harder and harder to understand, you know, where they're swinging necessarily.
So those are the things that make you nervous.
But also I don't think yet that we're dealing with like, god-like ability to just totally destroy reality.
It's sort of somewhere in the middle, which is still, you know-- - I see, so the danger scenario is a thin margin, very strategic use of this technology.
Like less informed voters, suppress the vote, someplace where you could use it in small, strategic ways.
That's a realistic fear.
- Yeah, I think like hyper targeted in some way.
I mean, it's funny, I've talked to a couple of, you know, like AI experts and people in the field of this.
And they're so worried about it, it's really hard to coax out nightmare scenarios from them.
They're like, no, I've got mine, and I'm absolutely not telling a journalist.
Like, no way.
I do not want this printed.
I do not want anyone to know about it.
But I do think, and you know, this could be the fact that they're too close to something, or it could be that they're right and they are really close to it.
But there's so much fear from people who work with these tools.
I'm not talking about the ElevenLabs people necessarily.
- But AI people.
- But AI people.
I mean, true believers in the sense of, you know, if it doesn't happen this time around, wait till you see what it's gonna be in four years.
- I know, that really worries me, that some of the people inside are so worried about it.
It's like they've birthed a monster kind of vibe.
- But it's also good marketing.
So like, you can go back and forth on this, right?
Like the whole idea of, you know, we're building the Terminator, we're building Skynet, it could end humanity.
- Makes it exciting.
- There's no better marketing than like, we are creating the potential apocalypse, pay attention.
- Right.
Okay, one final fake voice trick.
This one's on me, since Charlie, you were wrong both times, now it's my turn.
My producers wanted to give me the experience of knowing what it's like to have your voice saying something that you didn't say.
So they took my account, they had my voice say things, and I haven't heard it and I don't know what it is.
So we are gonna listen to that now.
It'll be a surprise for all of us, including me.
So let's listen to these fake voicemails created by my wonderful producers.
- [Hanna Recording] Hi, I'm calling to leave a message about afterschool pickup for my kids.
Just wanted to let their homeroom teacher know that Zeke in the white van is a dear family friend and he'll be picking them up today.
- (laughing) Okay.
- [Hanna Recording] Hi mom, I'm calling from jail and I can't talk long.
I've only got one phone call.
I really need you to send bail money as soon as you can.
I need about $10,000.
Cash App, Venmo or Bitcoin all work.
- My mom does not have $10,000.
- [Hanna Recording] Hey, I hope I have the right number.
This is a voicemail for the folks running the Cascade PBS Ideas Festival.
I'm running late at the moment and wondering if I'm going to make it.
Honestly, I feel like I should just skip it.
I can't stand talking to that Charlie whatever character.
(Hanna laughing) Why am I even here?
Washington DC is clearly the superior Washington anyway.
(audience booing) Ooh.
Yeah, okay, okay, okay.
Now I would say I was talking too fast.
- But also what if then you took audio, so the one from jail, right?
What if you took audio, your producers or our producers are great, and inserted, you know, a lot of noise that sounded like it was coming from a crowd or like a slamming of, you know, like a cell door or something like that in the background, faded it in nicely.
That would be enough to ratchet it up, right?
And so I think all those things can become extremely believable if you layer the right context on them.
- Is there a word you would use to sum up how you feel now?
'Cause clearly it's uncertain, we don't actually know.
We don't know how quickly this technology is gonna move.
How should we feel about it?
Be our proxy.
- I think disorientation is the word.
And I think when you're disoriented it's best to be really wary of your surroundings.
To pay very close attention.
And that's what it feels like right now.
- We can handle the truth.
Thank you for giving us the truth.
And thank you all for coming today and for listening to this talk.
And be prepared to be disoriented.
(audience applauding) (upbeat music)
