Special Report | A.I. in Politics
Clip | 17m 22s | Video has Closed Captions
Public relations experts talk with Tim Skubick about how A.I. is transforming the political landscape.
Artificial Intelligence is transforming the political landscape in the United States and around the world. Tim Skubick, host of WKAR's "Off the Record," sits down with public relations experts John Sellek and Adrian Hemond to get the inside story on the threats and benefits A.I. brings to the political arena.
I'd like to welcome to this segment my two major leaks in the Democratic caucus and the attorney general's office.
Not true.
Gentlemen, thanks for being with us.
Artificial intelligence.
On a scale of 1 to 10, how concerned are you that this is going to go sideways and really damage the political arena?
Well, I think it's a massive opportunity for chaos and distrust and discord, but it's also a massive opportunity for innovation and creativity and advancement.
It can really go both ways.
It probably will go both ways.
But we're already seeing it in elections in other countries. In the UK, they questioned whether some of the candidates even actually existed or whether they were created by A.I. to help boost vote share for burgeoning new parties.
In Slovakia, they recorded a voice that sounded like the pro-Western candidate, making it sound like he was rigging and stealing the election, and the pro-Kremlin candidate won instead.
So it's already happening elsewhere.
It's coming here.
On a scale of 1 to 10, in your private moments, where does it end up?
Well, for now, because of the complete lack of regulation and the fact that the federal government is barely getting its act together...
I just saw on the news on the way over here, Tim... oh, the FCC is thinking about putting restrictions on fake robocalls.
So they're a little bit behind the ball.
So, you know, it's somewhere around a seven as a problem right now, I think.
What's your number?
Oh, it's a ten.
I'm making so much money off of this stuff right now.
It's a ten.
And you should be alarmed.
Oh, well, that's the point.
Okay.
Why are you making money off of this?
There are so many different applications for artificial intelligence and politics.
I think the thing right at the top of the list that worries people the most is just straight-up fakes of things.
We kind of saw this with the DeSantis campaign, where they recorded an ad with an A.I. version of Trump's voice saying something that Trump had said, but they didn't have a recording of it.
So they just made one up.
You know, John pointed to some examples from Europe that we've seen.
That's just the fakes, to say nothing of the way that you can use A.I. tools to get incredibly invasive with mass quantities of information about millions and millions of people at once, which you can analyze in a way that a human being never would.
You can use that to target people.
You can use that to create content that's meant to stimulate their particular brains.
It's incredibly frightening.
Do you have clients coming to you, political clients who are running for office, saying, protect me?
No, because we won't take money from politicians.
They don't have very much.
But we do have clients that are coming to us asking us affirmatively, how can we use these A.I. tools to move our cause forward?
Really?
Okay.
And so give me an example.
You don't have to name names, but how many?
What do you tell them?
It depends on the application.
But a good example of this is we had a client approach us about doing calls at scale to extract information from people. It's a poll, sort of.
But we're trying to harvest much more granular information about people's lives, and you can actually get an A.I. that will have a conversation with someone on their telephone.
And you can do that at scale.
Millions and millions of times over and over again.
Asking what kind of questions?
Literally anything that you want to know about people, any information that they're willing to give to this thing that they think is a human on the other end of the line.
And when you do that, is there a part of your conscience that says, I'm uncomfortable?
No, I mean, this is...
Because you have no control.
I know my standards are low, but they exist.
No, this is the regulatory regime that we have right now.
John alluded to the fact that Washington is way behind the stick in terms of potential regulations here.
Senator Klobuchar has got a couple of bipartisan bills in the Senate to potentially regulate some applications of A.I. Part of the problem with those is that there are serious First Amendment concerns, in that lies are protected by the First Amendment.
Well, how do you control this?
The genie is out of the bottle, John.
Yeah, it absolutely is. The most basic way we're seeing it used right now on a mass scale on campaigns is behind the scenes.
The writers of the world are going to get replaced, because if you need to create a texting program and you need 15 variations on one text to send out as a fundraising ask, ChatGPT can crank that thing out, and in less than a minute you'll have all of them.
Now you still need somebody that knows the strategy of the campaign because that's just a first draft devoid of your particular campaign's polling data and messaging strategy.
So you still have to look at it.
But yesterday I made this press release in 2 minutes and it's actually pretty accurate.
Veteran Michigan political reporter Tim Skubick announces 2026 gubernatorial run.
And not only did it crank out the full press release in AP style in less than a minute, it knew that your lack of direct political experience may become a focal point of the campaign.
My lack of what direct political experience?
And the analysis said it drew shockwaves through Michigan's political landscape, because your candidacy represents an unconventional move from journalism to the governor's race.
So even A.I. right now knows what some of the sticking points are, because it knows you're a journalist, and it knows from all the data it's harvested why that could potentially be an issue. Less than two minutes.
All right.
So let's assume that somebody wanted to put that out and I came to you.
Protect me.
Help me.
Can you do that?
It basically becomes what we call crisis comms, where we have to put everything else aside and go on the warpath, talking to every partner that we can find to show that it's fake.
And that means that large campaigns are going to have to invest resources to do that.
Small campaigns won't be able to do it.
And what we know traditionally with campaigns is you drop the biggest bomb toward the end, when the candidate can't respond. And what we're seeing is these silicon swarms, where not just one of these things comes out, but suddenly on social media there are 20 pieces of false information or disinformation that come out, and a campaign can't respond to them.
So is it fair to say that this has the potential for dismantling our democracy?
I think so, in the sense that voters won't know what to trust and they already feel that way right now.
If video content is the future, even though you've been on video content from the start, you're ahead of the game.
What we had before was words. We had people putting false things in words on Facebook, and now videos and sound are coming out, to the extent of the fake audio of Joe Biden in the primary in New Hampshire this year
that said, don't bother to come out and vote, just vote for me in the fall.
It happened and everybody believed it.
The DeSantis clip, it was Trump's words, but they used A.I. with Trump's voice to actually read it aloud, because he hadn't actually done that, because his written stuff tends to be even more incendiary than his verbal stuff.
So they added the verbal stuff to it.
Nobody will ever know what's true or not.
And unfortunately, what I think will happen is people just come to accept that you can't trust any of it.
Same question.
Democracy at risk here through A.I.?
Sure, absolutely.
I think there are serious risks to democratic governance if people aren't able to participate honestly in politics and aren't able to find honest sources of information.
I think it goes broader than just the democracy argument.
Even, you know, we've seen fake images being deployed in conflicts around the world, in the Gaza war.
Right.
We've seen both sides.
They're using fake images to try to fabricate atrocities that aren't actually real, never mind the atrocities that may already have happened.
We've seen A.I. tools deployed in the Ukraine war, and we've also seen A.I. tools being deployed in the information war on social media, where you just create, you know, fake A.I.-powered bot accounts that will carry on conversations, troll politicians, and disseminate this false information and these fake videos.
There's no way to ever trace those back to a person.
So have you had clients come to you asking to use A.I.?
Well, of course, they all want to know if it's available and whether it's usable.
And that's what they ask first.
They definitely want to know that, but the ethical ones don't want to fake it.
But let's look at the gray line we talked about earlier, First Amendment rights.
You referred to lies, but B-roll footage showing fake families having a happy time at the park as the background of a political ad, that's perfectly acceptable.
Commercial advertising, the same thing.
We don't think any of those moms and dads and kids that we see buying those great products online are real; they're fake.
And so why can't I just make that? Like, remember the old episode (you probably never watched a ton of Entourage on HBO) where they had James Cameron, the famous producer, and they had a joke in 2010:
Will we still need actors in five years?
And he said, probably not. We're getting to that point now where we aren't going to need them at all.
In Variety this morning, you can find an article talking about how, right now, when we watch a foreign-language show or movie, which is becoming more and more popular, we don't have to have the words on the bottom of the screen as closed captioning anymore, because the technology is so strong they can reshape the mouth and have it speak a different language.
You could watch it in whatever language you want.
The democratization of something like ChatGPT, where anybody like me can go on for free and type in a prompt, is going to have a massive effect, and so will generative A.I. visuals for the big-time campaign consultants in Washington, D.C., because at the same time this technology is coming into play and is actually usable.
The political establishment in D.C. is being bought up by venture capitalists.
They're looking at any other place they can make a profit.
And it turns out that billions of dollars are now going through politics.
That means what you're used to, your standalone polling firm, your standalone TV production firm, your standalone political firm,
they're all bought up and put under one umbrella and then empowered with millions and millions of dollars in resources.
They're going to grab ahold of this stuff and take it to a whole other level, while at the same time, because some of it is free and easy to use, campaigns that only have one staff person, or maybe even just the candidate, can essentially fake a whole campaign, a website, all the writing, the press releases, all by themselves.
You agree?
Yes, absolutely.
And I think it's, you know, sort of the blessing and the curse at the same time that large institutions are able to leverage massive computing power from these A.I. tools.
The little guy can do it, too.
Right.
And so at the same time that it creates problems for what information can we trust, what's actually real, being able to pull what would normally be human labor out of a campaign that's under-resourced creates opportunities for the little guy to potentially win.
Right?
So if you have a client that comes to you and says somebody sent out a phony press release, you can't reach enough people to undo what's already been done.
Yes.
No, that's absolutely correct.
So do you tell this client to go away or do you still take their money?
It depends on the context and whether we think we can actually help them.
You can't unring that bell, but depending on what their goals are, you may be able to do enough damage control for them to survive or still achieve their goals.
It gives them the opportunity to essentially make themselves the victim, where you flip the whole thing and bring more attention to yourself as the good guy who's being wronged.
There's a lot of different strategies there.
Yes. With media who get snookered, and it's going to happen, it just is.
We can go back to them and talk to them, and they'll do something different.
They'll even bring up the fact that, you know, something devious is going on here.
But because of social media, especially if it's advertised out there, that's the bell you can't unring.
That's out of control, isn't it?
Yeah.
If we didn't have social media, our lives would be better, right?
I mean, the owners of those social media companies seem to think so.
It's why they largely don't let their children use them.
So what kind of safeguards do you have?
Have you turned away clients who requested stuff that crossed the line for you?
Yeah, we have to create the ethical standards ourselves, because the government is so far behind in creating any of them. We do have to acknowledge that the state of Michigan passed new laws this year in an attempt to get in front of this, and the issue must poll well.
It's important enough that we're seeing people like Detroit Mayor Mike Duggan up testifying and talking about that legislation, saying what a big deal it is, because he's saying the public is nervous about something like this.
That's a hot-button issue outside of his governance.
So he came to talk about it.
So he's in the mix, even though normally that wouldn't necessarily be his thing.
So that's a recognition that the polling shows citizens and voters are really worried about, well, how do I know what I'm looking at is real at all, right?
So state government has passed some laws.
But to me, I wouldn't risk 93 days in jail, but it's only a misdemeanor, 93 days in jail, and we've got all kinds of unscrupulous people out there.
How do you find them?
Because if they're running an outside super PAC or some group from somewhere else, it's a bunch of nameless, faceless stuff.
It would take a prosecutor potentially years to figure out who actually threw that thing out at the last second to turn an election.
Obviously, you guys are playing this game above board because your careers are on the line.
Okay?
But there's got to be people in your business who don't give a hoot.
Yeah, that's right.
And, you know, some of this is, like you mentioned, the politicians being behind the stick on this.
That's exactly right.
These tools have been used in commercial businesses for years and years and years.
Typically, the way it works for firms like mine and John's is that we end up picking up the tools that large multinational corporations have already been using and bringing them into politics.
Right.
But John's exactly right about this.
And, you know, the cat's out of the bag. We're not going to live in a world without these A.I. tools being used.
And so figuring out a regulatory regime for them that complies with the First Amendment is a really important task.
And we're years away from it.
The biggest danger, like I think you implied there, is for people like us that are on the ground, that actually see the media every day, that actually have to talk to candidates and have respect for what we're going to do.
We have to police what's going on ourselves.
But when the "Off the Record" super PAC drops in from L.A. and starts blasting ads for the last three weeks of a campaign, what are they worried about?
They're not worried about anything.
They're a nameless, faceless organization.
Nobody really knows who's behind it.
The only risk they might take is the potential that a prosecutor somewhere would come after them under these new laws that Michigan enacted.
All right.
So if the government ops committee chair called you guys and said, what should we do?
What would you tell them?
I'm not sure there is a whole lot more you can do right now.
They're basically pursuing two tracks at the state level and right now at the FEC, where they're starting to put new rules on the table.
But this rules process takes forever because it's government, right?
Good luck if they get it all enacted and in place by November.
The first track is to require that when A.I. is used, it has to be disclosed.
Now, like I just said, commercial advertising right now and political advertising for decades have used fake stuff in their ads.
Fake families, fake factories, whatever it takes to get their message across.
So we're in kind of a gray period here, but they would require that.
And the second part is trying to ban deepfakes that are somehow intentionally deceptive.
And it kind of brings up that old question of, well, is it intentionally deceptive or is it just telling a story, and how are they going to prosecute those things?
That's never been done before.
It's going to be very interesting.
What's your recommendation?
Yeah, it's a really, really tough problem.
Again, because of the First Amendment.
You know, John brought up the disclosure piece of this.
Senator Klobuchar, one of those bipartisan bills she has would do exactly that, right, require disclosure when A.I. tools are being used to present what you're seeing or hearing. That probably runs afoul of the First Amendment.
Anonymous paid political speech is as American as apple pie.
We know that because the founders were all engaged in it.
Right?
That's what the Federalist Papers are.
We're talking about a different scale.
We're talking about different tools.
Right.
It's A.I. as opposed to a printing press.
But it is in principle the same thing and probably runs afoul of the First Amendment. Some of the more muscular approaches that are being proposed, in terms of, you know, banning deceptive images, that's clearly unconstitutional.
So it's a really thorny regulatory problem: free speech.
And, you know, free speech is foundational to American democracy, but it's also in conflict with these tools that are both a blessing and a threat to American democracy.
It's a thorny issue.
You may actually also enjoy the flip-side idea: as distrust grows and voters question what they see on their screens, what happens to the ability to make legit attacks based on someone's political background, or an arrest for drunk driving, or some lawsuit that was in their background?
Legit campaigns will put out legit information about those things in the last two weeks of a campaign, and voters will say, I don't even know if that's real, so I'm not going to pay attention to it.
And so that's going to alter how campaigns have to adapt and adjust.
It's probably going to get uglier before it gets any better.
And on that ominous note, thank you for your expertise, and good to see both of you.
Thanks.
Thanks for having us.
Yeah.
Take care, guys.
Let me see that.
Support for PBS provided by:
WKAR is supported by the MSU Research Foundation. Bringing new innovations to the marketplace for global impact. Learn more at msufoundation.org

