Connections with Evan Dawson
AI, propaganda, and the future of democracy
11/5/2025 | 52m 25s | Video has Closed Captions
Bret Schafer discusses AI-driven disinformation and how authoritarian regimes manipulate democracies
Authoritarian regimes are using AI to spread propaganda and weaken democracies, says Bret Schafer of the Alliance for Securing Democracy. He studies how AI tools shape public opinion through disinformation. Visiting Rochester with the World Affairs Council, Schafer joins *Connections* to discuss spotting AI-generated falsehoods and strategies to counter them.
Connections with Evan Dawson is a local public television program presented by WXXI
This is Connections.
I'm Evan Dawson.
>> Our connection this hour was made when a group of researchers set out to understand how effective A.I.
propaganda has become.
When we talk A.I. propaganda, we're talking artificial intelligence, of course, and the study from Stanford University is rather chilling to me.
They concluded, quote, major breakthroughs in large language models have catalyzed concerns about nation states using these tools to create convincing propaganda. We sought to answer a single question: could foreign actors use A.I.
to generate persuasive propaganda?
In short, we found that the answer is yes.
End quote.
Now they added that more research is necessary, but every day A.I.
gets a little better and a little more sophisticated, and it's quite possibly cheaper to use than a more elaborate propaganda scheme.
So what does that mean for the way that foreign governments might use A.I.?
What are the risks?
How can we become better consumers of information?
One way to think about this is: if you were shown a news report online, how good do you think you would be at sussing out whether it is real, legitimate news or A.I. propaganda?
You think you'd be pretty good at it?
A lot of people say they'd be pretty good, but we're not as good as we think.
I want to talk about it this hour with our guest, who is in Rochester as a guest of the local chapter of the World Affairs Council.
Bret Schafer is a senior fellow focusing on media and digital disinformation for the Alliance for Securing Democracy.
Bret.
Welcome.
Welcome to Rochester.
Thanks for making time.
>> Thank you.
Appreciate it.
>> Right off the plane.
And he made it on time, which says something about the current state of the airline industry.
So we're grateful for your time.
Let me just start by asking you to tell the audience a little bit more about what it is that you do and what you focus on.
>> Sure.
So, research is sort of the main thing that we do.
So we look online, we try to find vulnerabilities on social media platforms and the information environment more broadly.
So we look at the ways that Russia, China, Iran can take advantage of the current digital ecosystem.
So that's both with creation of content.
So can you make a deepfake video?
Can you create the perception that something is happening that's not really happening?
A lot of our focus, though, is on how they attack information systems.
How do you get information in front of people?
How do you get information in front of people so they don't know the source of the information?
So we look at the ways in which our information space can be manipulated and weaponized by foreign actors.
>> How much has changed in the last three years?
>> A lot, a ton.
So I started this work in 2017 as a full time profession, and then we were focused on social media and fake accounts.
It has changed dramatically since then.
And as you noted, the last three years, with A.I., is when it has radically changed.
So the work that used to be done manually by Russian trolls sitting in Saint Petersburg can now be scaled up significantly.
This is Ford in the information age: you're able to produce at a much higher volume, the speed and scale are better, and the sophistication is much, much better.
>> So I want to spend some time talking about how we form beliefs, or what we accept, and the different biases that we struggle with, our own sort of confirmation bias or motivated reasoning, in regards to A.I.
But before we even do that, I want to back up a little to the work you were doing in 2017, 2018, 2019 because to your point, at that time, you know, there's a sea of bots or Russian trolls, for example, or people who are creating fake accounts just to inflame.
And one of the things that really concerns me, frankly, the reason I barely post on X or Twitter anymore is because I look back and I'm stunned at how often I would actually engage with accounts.
I couldn't tell if they were people or not.
And I'm like, well, someone's wrong on the internet, you know?
I mean, I felt I had to spend some time on this, and my mental health is so much better now that I don't do that.
But also, you know, that was a tool that used to be effective at building community and getting different sources, and now it's mostly gone.
But I'm amazed at how long it took me, and how long it's taken a lot of people.
Frankly, a lot of people still do it.
They will engage with these accounts where you can't tell if they're real.
You can't tell if that's a person.
You can't tell if it's a bot, you can't tell if it's legit.
And we do it and we get so angry.
And then we try to act as if it means something.
We're not very good consumers of information, even before A.I. hits the scene in a big way, are we?
>> No.
I mean, a lot of this is perception hacking.
It's manufacturing consensus around a debate.
So when you look at the purpose behind what Russia is doing here, it is to create rage, a lot of the time.
China's a little bit different; they tend to have policy objectives that they try to push.
Russia is just happy for Americans to be fighting against Americans.
So a lot of times you see comment sections even that are just flooded with commentary meant to keep you enraged and meant to keep you distracted.
And so that's the purpose in and of itself.
And this is a real problem as we try to kind of understand the political and social opinions of those around us: so much of it is being manufactured that if your perception of the political other in this country is being drawn from social media, you have a very warped perception.
And so we need to get away from our computers and actually talk to people, where, yes, we're polarized, but people tend to be reasonable.
A lot of what's happening on social media is people who are doing it for clicks, or for money, or intentionally.
The rage is being manufactured by those who want us to be enraged.
>> And I want to talk, I want to get back to China in a second.
I'm very interested in the differences, as you see them, in the different purposes, but I recall during the Republican presidential primaries that Governor Nikki Haley had talked about possibly regulating in ways that didn't allow for anonymous social media accounts.
You know, I mean, my instinct is to not seek to overregulate speech on platforms.
I also see the damage that's being done.
I don't know how well a proposal like that would fly, but before we get to sort of cultural or educational fixes, are there any policy fixes to some of this stuff?
>> Well, the EU is certainly trying, you know.
>> More aggressively than we are.
>> Right.
Way more aggressively.
We have not done much in this country. Frankly, the only place where we've seen movement is around kid safety.
So we have passed a couple bills to update some of the original digital policies around the safety of children online, but that's about it.
Nothing else has moved forward in the U.S.
The EU has passed several sort of comprehensive bills.
Now, these are controversial.
They're very controversial here.
The Trump administration does not like them.
Silicon Valley really doesn't like them, but they have put in place some efforts to at least get some transparency from the social media companies.
So we have an idea of why we're being shown certain things, how the algorithms work.
There's some accountability.
If it is found that platforms are systematically abusing the terms and conditions that people agree to.
So the EU's kind of pushed way ahead of us on this front.
And again, some of these things are controversial, some I don't necessarily like, but I at least like the effort for there to be some regulation here, because the Wild West has not worked out particularly well.
>> So is there a piece of low-hanging fruit, where you say, look, we're not going to go full EU, but if there's one step we could take that we haven't, it's this: transparency?
>> I don't know why transparency is controversial.
How it works, effectively: you audit banks, right?
There are independent auditors who go in and say, this bank is doing what it's supposed to be doing.
It's not defrauding customers.
You have a better understanding of how it's operating.
That can happen with social media companies, and it doesn't get into speech.
This is not saying, look, you've got to take that down.
That's false.
I don't think anyone wants that.
And depending on which political party you support, you look at the other party in office and then you really don't support that kind of legislation.
But to have an independent auditor go into these companies and say, look, they are systematically preventing certain opinions from rising to the top of their feeds.
They are abusing user data, for example.
These are common sense things that really should not be controversial.
Again, the companies don't want them because they're expensive, right?
This doesn't help their business bottom line.
But I think across sort of political lines, if people understood what transparency could bring about, there would be strong support for it.
>> Is the resistance mostly coming from the companies, and then the pressure that the companies exert on lawmakers?
>> Yeah, exactly.
I mean, these are U.S.
companies.
And so when the EU put in place these regulations, their tack was to sort of align themselves with the current administration, to say that these are European practices trying to hurt U.S. businesses.
And they would, to some degree.
And the other argument coming out of Silicon Valley is, look, you're going to put in place these regulations while China runs wild with their creation of A.I. tools.
So you're putting brakes on us while our competitors, our geopolitical adversaries, are allowed to run ahead with none of these regulations in place.
I mean, that is a compelling argument, but I don't think it really holds water.
>> Yeah, I think it was, was it almost three years ago?
I don't remember when it was, the six-month pause letter, you know, that Musk and so many others signed, which was a nice thing that everyone immediately ignored.
It never struck me as practical for the exact reason that you're describing.
You're going to have nation states competing, and then you're going to have companies competing.
And sometimes it overlaps and sometimes it doesn't.
But all of that competition is going to drive this lightning speed.
It feels like I don't know how you regulate A.I.
in a way that would ever allow us to slow it down and feel, quote, unquote, safe about each new iteration.
>> You don't want to regulate the moment, right?
Again, going back to 2016, 2017, when I started this, everybody was focused on regulating bots.
Do we need to have more transparency around bots?
Should bots be allowed? Three years later, nobody was really talking about bots anymore.
So you want to put in place an infrastructure that can stand up to time.
I mean, this is going to move too quickly, and by the time regulators understand a new technology, it's on to the next thing.
And so this has always been a challenge, particularly on Capitol Hill.
We've got older regulators who don't necessarily understand the new technology of the day.
By the time they're up to speed on something, it's gone.
We're on to the next threat.
And so this again, is why we push transparency.
You put in place certain ways that we can ensure that the companies aren't abusing our data, that they are not systematically, preferentially treating certain political opinions.
And those can withstand time.
>> So before we talk more about solutions, describe for me what you think the downside risk is to doing nothing or to standing on the sidelines the way we have.
>> I think we're seeing it, right?
I mean, I don't think anyone across the political spectrum is happy with where we are from an information standpoint.
You might blame the other side, but when I give these talks to conservatives and liberals, everyone looks at our current information ecosystem as failing us.
It's not doing its job.
There's bipartisan...
>> Agreement.
>> There's bipartisan agreement.
Again, they'll blame the other side.
But everyone says, no, I don't trust information anymore.
No, I don't think it is helping our democracy.
>> Or discourse.
>> Or discourse.
I don't think it has improved how I perceive the political other in this country.
And so I think that is a sort of basis to start from.
Okay, so what do we do about it? That's where we need to go.
And so doing nothing has led us here.
And here is not great.
So I think we need to start coming up with some more creative solutions.
>> Because, as you describe, these are American companies.
They're exerting a lot of pressure on lawmakers.
They're going to be resisting change.
But if the constituents of these lawmakers are, in a bipartisan fashion, furious with the system, why can't that be enough to spark change?
>> I don't know, I mean, you would hope that they are getting this kind of pressure, but I'm not sure that they are.
I think the partisanship has taken over.
And so we've seen different levels of attack coming from the different parties.
So the last 2 or 3 years, the right has focused on what they've called the censorship industrial complex and alleged censorship from the Biden administration.
So there have been all these hearings about that, and there have been all these allegations of the ways in which political parties have tried to manipulate the system for their own benefit.
So we focus too much on the partisan.
And so any driving force from constituents, I think, has focused on how do we kind of attack the other here, as opposed to how do we deal with the problem in a way that will actually create a solution, one that will stand the test of time, that will have bipartisan support, et cetera.
>> As a side note, for progressive listeners, progressive-minded listeners who are really wondering how Republicans or people on the political right can be so angry at the Biden administration: is it fair to say that the attempts the Biden administration made during the pandemic, even if they were well-intended, to talk to Facebook and YouTube about the kind of information that should or should not be platformed, did that backfire?
>> It did.
And there were some cases that, honestly, I don't quite understand.
In particular, there was a push from the Biden administration, I think, to get Facebook to take down posts about the COVID virus originating from a lab in China.
That was a very politically charged topic.
I'm not sure why there would have been the push to take that down.
In areas around public health, I do understand; you know, people were dying.
There was a lot of information out there that was harming people.
I understand why the administration was pushing on these companies to have a sort of healthier discourse around what is actually sort of scientifically valid information.
So I understand that, but I think there were mistakes there for sure.
Now, I don't agree with the meta narrative coming from the right that the Biden administration was systematically censoring conservative voices.
I don't buy that, partly because, you know, we, and me personally, have been attacked by that same machinery.
But I do think there were mistakes, and I do think there should be lessons learned going forward about how the government communicates with social media platforms and the transparency around it.
I think part of the problem is this was all seen as sort of backroom deals, because it was to a degree.
I mean, these were people in the administration calling their counterparts at meta.
And so I think there should be lessons learned going forward not to do that again, because I do think it has fueled a lot of the discontent.
>> I also think that when you describe the partisan narrative that lawmakers have, they can see that people are angry, but they're just going to say, yeah, but that's because of the policies of our political opponents.
I think that is the action of a set of lawmakers who are lazy, who are taking the easy way out, who don't want to do the hard thing, which is to tell the tech giants that you're either going to be regulated or you're going to have changes to how you're allowed to do business.
It's just easier to say, well, I agree with everybody.
It's terrible, but it's because of the Republicans or it's because of the Democrats.
>> Tech regulation is hard.
It's really hard.
The online safety bill in the UK took about five years from when it was first introduced to actually get passed into law.
The EU's Digital Services Act, I don't know exactly the timeframe, but it was years, at least three years, where they had to go through a lot of different rounds and rewrites and soliciting feedback from civil society.
And so getting to that end point is hard.
It's going to last longer than some of these politicians will be in office.
And so that's not a particularly appealing thing to latch on to when the endpoint may happen years down the road, but we're electing them to do the hard work.
And exactly to your point, it is easy just to blame the last administration for everything that's wrong.
It is hard to actually go through the process of figuring out, okay, what does this regulation, what should this regulation look like?
What would it take to come into force?
What are the drawbacks?
That's the hard work we expect from them.
>> And the folks in tech write checks to Republicans and Democrats, don't they?
Yeah, sure they do.
I also want to ask you, apart from policy, about the sort of cultural, becoming-a-smarter-citizen side of this.
If you were talking to children, I have a 13-year-old son.
And I don't want to make this only about kids, because sometimes these conversations happen and it's like, well, how are we going to teach kids? Adults are really struggling with this.
People of all ages are really struggling.
But if you wanted to send a message, especially to young people who are going to be coming up in the social media age, about how to handle online trolls and the outrage machine, what are your guideposts for them?
>> There are, first of all, different challenges by age, by generation.
So the younger generation tends to struggle a bit more with media literacy, but they absolutely understand digital technology.
So they understand how these systems work.
And I think they more intuitively understand that a lot of the people engaging with them online aren't real, in ways that the older generation doesn't.
The older generation does have a better standard of media literacy; they understand what good journalism looks like, but they struggle with the digital technologies.
So I think there are challenges sort of across generational divides.
But in terms of advice, you don't want to say don't trust anything you see, because that leads to an uninformed citizenry.
You don't want people to disengage, but you do have to be exceptionally skeptical, especially when results are being fed to you through an A.I.
system.
So now, as you've seen on search engines, almost all of them have put in place an A.I.-generated result at the top.
What we've seen from studies is that the vast majority of users do not make it past that A.I.-generated summary.
And those summaries are often just wrong.
People have to understand that this is all being trained by the information that's out there.
So if bad information is going in, bad information is coming back out.
There's not a wizard behind the scenes saying, yeah, this is accurate.
We're going to push this forward.
It's all trained on data.
If the training data is corrupt, if the training data is based on bad or biased information, your output is also going to be bad and biased.
>> Yeah.
Just a couple weeks ago, the Rochester City School District got a new superintendent.
In our city, we get one of those just about every year; we've had a lot.
And so I ran a basic search on Google.
I just wanted to see, you know, what's the list, the full list of superintendents going back 25 years.
And it did not include three of the last five, and it told me that this person was followed by that person.
And it was absolutely wrong.
I could see it right away.
I had the expertise or the information to know right away that it was wrong, but if a person was doing that search and didn't, they'd go into a presentation and just present that as if it was fact.
>> Right?
>> So be skeptical.
Although, I don't know if irony is the right word.
What I'm worried about here is this: it is important to teach that skepticism, or that rigor, that says, don't just trust the first A.I. summary.
Know how to check sources; know how to go deeper than that.
Instead, what we have is a growing sort of group of Americans who say, well, I'm skeptical of everything, including all institutions.
And so now I don't trust any institutions.
Now I don't trust vaccines.
Now I don't trust things that do have a track record. And that does its own set of damage, doesn't it?
>> Absolutely, absolutely.
I mean, I would look at that as being the bigger problem right now is the lack of trust in institutions and expertise.
And so, again, I think COVID was disastrous for this whole field.
And so we saw a lot of mistrust grow there.
And so I think there needs to be a better explanation of, look, when the scientific community comes to consensus, this doesn't mean it's necessarily a fact written in stone.
It's just the best information we have right now.
And that information might change and it might turn out to be inaccurate.
But also if you're in a place of saying, well, everybody's wrong and I can't trust anyone, they all have their own agenda.
Like, where do you want your information to come from?
>> Yeah.
>> Like, do you want to live in a world where you're trusting a random account on Twitter more than someone with 40, 50 years of expertise?
>> That's what we have with a number of people.
>> Absolutely do.
And that grew exponentially during COVID.
And so I don't know how we get back from there, but we have to, I think, recognize that and go through like, what did we do wrong?
And I know the public health community is going through this process right now of saying, we have to stop saying, like, this is just the consensus, this is right.
And actually meeting people where they are and saying, like, we understand your skepticism.
We understand this wasn't your lived experience, but this is our process and this is how we got here.
It may turn out not to be right, but this is the rigor that went into this finding versus who knows where this guy came up with this information.
>> Yeah, one prediction I would be confident in making is based on what we've seen with things like polio.
The more time that passes since, for example, the polio vaccine, and really the sea change in public life that happened so quickly, to where Americans, number one, didn't live through that and, also, don't know anybody who had polio, the more they say, you know, I don't even think it was that big of a deal, or, I doubt it was the vaccine.
People email the show and they say it was the aerosol change that we made as a society.
I mean, all kinds of things, the farther you get from it.
I would not be surprised.
And I would predict that as time goes on, people are going to look back at COVID and say the whole thing was a hoax.
I don't even think it was that bad.
It was all about government control, as opposed to, there were a lot of missteps or clumsiness in a crisis.
But, you know, more than a million Americans died.
People died.
I mean, I know close people who were hospitalized and nearly lost their lives.
I think that it's going to be a disaster going forward with that whole thing.
>> I mean, I think it already is for many people.
I mean, we're only a few years out, and I think the fact that a million people died is lost, and people focus on, well, you know, the shutdowns at schools were disastrous and they were terrible for education.
Yes.
And we should learn from them.
>> Oh, yeah.
>> We should 100%.
But again, you have to remember why some of these things happened.
And yes, if we did it again, would we do some things differently?
We should.
But I think you're missing, to your point, the underlying causes of some of these things. And as you do get distance from them, people forget the reasons behind some of the things that have happened, both with COVID and other things.
I mean, we haven't had a war, for example, in this country in a very long time.
Thank God.
And so if you don't have that visceral experience of what it is like to go through something, I think the realities of playing around with disinformation and polarization and radicalization get almost gamified, because the consequences aren't real for people.
>> If I were to tell, if I were to convince an entire generation of teenagers right now that in your whole life of social media usage, the one thing you changed was that you never responded to trolls who were just trying to make you angry, or you never responded to accounts that you couldn't be sure were real people, how much would that help?
>> It would help some, but I do think the toxicity still stays with people.
So even if you put in place guardrails of, I'm not going to engage, the level of vitriol coming at you, I think, stays with you even when you get off.
>> Can't I be optimistic that if we don't engage...
>> It's better.
>> ...it changes or goes...
>> Away?
>> And look, people have individual responsibilities, like you don't have to then share something.
You don't have to create a bigger issue.
It is not actually effective if you get into a fight with a friend on a public forum on Facebook.
I know kids aren't on Facebook right now, but for our older generation: if you have a relative who spreads something that you know to be false, to call them out publicly and get into it, that's not helpful for family dynamics, or discourse in general.
So there is a way to engage better.
But also, you know, I do think there's some onus on the platforms not to prioritize rage, and to deal with accounts that aren't real, that are there for that explicit purpose.
You're not necessarily saying to take them down, but they should not be elevated in people's feeds.
>> When we come back from our only break of the hour, a listener named Jack is asking: does our government use A.I. to create propaganda or misinform?
We'll ask Bret about that.
I do want to talk a little bit more about how different states might use A.I. or propaganda differently, and how we can be better informed as a citizenry about that.
I'm curious to know what our guest makes, as I read more of this Stanford study, of this finding: with human-machine teaming, including editing the prompts fed to a model and curating GPT-3 output, A.I.-generated articles have been, on average, just as persuasive or even more persuasive than real-world propaganda attempts.
That is disturbing, and it's only getting more sophisticated.
My guest this hour is Bret Schafer, senior fellow focusing on media and digital disinformation for the Alliance for Securing Democracy.
He is in Rochester as a guest of the local chapter of the World Affairs Council, and they are open to new members.
They'd love to see you sometime.
If you want to look them up, look up World Affairs Council, the Rochester chapter; they do pretty regular events.
They bring in outstanding, talented, and really high-ranking folks from around the world, people like Bret Schafer, for you to listen to.
So they would love to hear from you.
If you want to do that, we'll come right back and take some of your questions next.
I'm Evan Dawson. Wednesday on the next Connections, we will run down the election results with our team from WXXI News.
We'll tell you what happened in your town and your area and your state and in your country.
And we'll talk about whether it has implications for parties nationally, if it means anything looking into next year.
Then in our second hour, astrophysicist Adam Frank joins us with some of his students talking about what it takes to be a scientist in 2025.
>> Support for your public radio station comes from our members and from Jewish Home, believing older adults are not defined by their age, offering a portfolio of care in a community where all are welcome.
Jewish Home. Come be you. More at Jewish.
>> This is Connections.
I'm Evan Dawson. We'll get Jack's question in a second.
I think Gary has a good one here.
Just for definitions: how do you identify the difference between a troll and someone who just disagrees with you?
>> Well... I mean, I guess that's a fine line.
I mean, typically when we talk about a troll, we're talking about a professional, someone who goes to an office, the Saint Petersburg trolls being the most famous, but they're certainly not the only ones, and it is their full-time job.
It is their occupation to troll people, which is a very narrow definition.
I mean, I think most people would broaden that definition to people who just get kicks out of enraging people.
And that's very different than someone who engages in good faith but might disagree and might disagree in language that you don't like.
I mean, I guess it's sort of a fine line, but to me it's fairly easy to distinguish between someone who's a troll and somebody who genuinely disagrees and wants to engage.
>> Okay.
And I want to stress a point that Bret is making: professional trolls not only have existed, they do exist.
They are paid, in some cases by their government, to make Americans in general, I mean, people in a lot of places, but often Americans, angry.
Just to cue up outrage, to make you angry at fellow Americans, to make you angry at your government, to make you stop trusting the institutions of your country.
And are we just easy marks?
Is that what's happened?
>> Unfortunately, we are.
If your job is, how do we polarize Americans, that's your job: go to an office for eight hours a day and try to make Americans more polarized online.
I mean, you don't have to do a lot of work.
And that's been the sad thing, honestly, in my job over time: the vast majority of what we've seen, the Russians in particular, and the Russians are the best at this because they've been doing it for 100 years, obviously not online, but they understand us because they've engaged in a propaganda war for decades.
They really understand the pressure points and what gets Americans enraged, whether it's race or police brutality.
And so they understand really what makes us tick.
But most of the time, almost all of the time, they are taking content or narratives from Americans.
They're recycling them, they're repurposing them.
They're trying to spread it farther, but they are not inventing these conflicts.
So they're sort of vultures on the internet.
They know the websites to look towards to find whatever is the sort of divisive topic of the day.
And they latch on to it and they spread it farther, but they're not creating these things whole cloth.
So we're doing a lot of the destruction to ourselves.
The Russians and others try to make it worse.
They try to make the conversations worse.
They try to spread it farther so more people see it.
But we are creating most of the content and the conflict for them.
>> So they're very good at this.
We're not very good at being consumers of information.
And you talked about really digging into this work starting in 2017; that's a big part of it.
Then 2022, 2023 hit, and the GPTs of the world and all these A.I. tools are out there now.
And so what are you seeing now that we didn't see five years ago?
>> So one of the things that we focused on five years ago was search results and how often the Russians in particular, would target search engines to try to get favorable narratives to surface higher, because what the Russians understand, what any communication professional understands is if people don't see your message, it doesn't matter.
And they were very good at attacking systems, so they would carpet bomb certain topics that they wanted to rise in social media feeds.
So whether it was their narrative around who shot down the airliner over Ukraine, MH17: Western media would cover it when something new would happen; the Russians would cover it every single day, because they knew search engines would prioritize their content.
So if you're a kid in high school and you search MH17, and you don't know anything about it, you are very likely to find Russian information at the top, because it was the newest and they were the most frequent to publish about it.
Now, with large language models, they're doing the same thing.
So the term is LLM grooming.
So they're trying to groom these large language models with their preferred narrative.
So a lot of times they're spinning up websites that they know people are not going to go to, but that are publishing the same narrative over and over and over again.
And it's a numbers game because they know these large language models are sucking in a lot of data, and so they're more likely to suck in a lot of Russian data now.
And so again, when you prompt a chatbot, what various research studies have found is that quite often you're being spit back pro-Russian perspectives on events, just because they have put out so much content that the training data is being manipulated. And they're doing this very intentionally.
And so they're targeting specific events that they care about, and they're able to kind of manipulate these systems.
So when a user just enters a neutral prompt, they get fed back, not all the time, but sometimes, these Russian perspectives, taken not necessarily from overt Russian state media, but from these cutout websites that are rerunning Russian perspectives.
>> What about the use of A.I.
to create video or audio that is made to look or sound real?
Is that happening more often?
>> Yes, although less often than I think has been the concern now for several years.
So deepfakes were a thing going back to 2017, but they were hard, and you had to have some skills.
There's now been a democratization.
I mean, you don't have to have any real skill to put together a deepfake video or to create an image.
>> I saw someone this week who is an L.A. Dodgers fan and used Vin Scully's voice to create Vin Scully doing the last inning of game seven of the World Series.
I didn't think it sounded great.
And somebody actually gave a really good review, said it sounded like a 1990s, like Electronic Arts, baseball play-by-play version.
But a lot of people in the comments were like, oh, it's like Vin is back. This is beautiful. Thank you for this.
And then, frankly, a lot of other people were like, I wouldn't have known this was A.I. if you didn't tell me.
So even the ones I think are a little clumsy are better than they were a decade ago.
Yes, they're so much better.
And this guy's like, well, I spent like ten minutes on this. Ten minutes.
>> Video is not all the way there yet.
For the most part, if you watch an A.I.-generated video, you'll still see enough tells that you should know that this is A.I. and not real.
But it will get there, maybe within a year or two, certainly within five years.
Audio is the thing that concerns me the most right now, because you don't get any of those visual clues to let you know that something is off, even if it's really well done.
There's often just something off with the video, but audio is so realistic now that we have seen that used the most in sort of real world settings to try to attack candidates.
For example, Slovakia is the classic case.
Early last year, the Slovaks had an election.
It was very, very close.
Right before the election, there was a supposed leaked recording of the opposition candidate saying that they were going to rig the vote and that they were going to raise the price of beer, which may have been a bigger problem for him in Slovakia.
And this was all A.I.
generated and there wasn't enough time to debunk it.
Who knows if it flipped the outcome of the election?
But he did lose and it was close.
So I'm guessing there was a voter somewhere who probably didn't cast a ballot because of this.
And so audio is what concerns me the most right now.
But video will get there pretty soon.
>> How do you regulate?
And then what do we do as a culture?
Two different questions.
>> Well, this is a place where the tech platforms have actually been a little bit more willing to crack down.
And so almost all of them before the last election had said that they will take down deepfake audio video if they catch it.
So if there is a purported leaked bit of audio of Harris or Trump saying something they didn't say, that can be proven, and, you know, their systems are good enough to catch it, they'll take it down.
And so at this point, I think the tech companies feel more comfortable in this space than they do with content that's actually generated by a real user.
Again, I do think there needs to be some light A.I.
regulation around it.
Again, it's not a truth or fiction thing, but what gets complicated is the world of satire.
Most of what we're seeing in terms of the uses of A.I.
generated content right now could be called satirical.
Certainly the Trump administration is using a lot of A.I.
right now through the president's feed.
Most of it, I mean, some of it is not particularly civil, but it's clearly not an attempt to mislead people about an event that didn't happen.
>> Like Trump flying a plane and dumping excrement on the No Kings Day reporters, correct? Or protesters.
>> Correct.
>> And so you have a lot of those examples where, again, I don't like the style of communication, but this is clearly not trying to mislead people.
I think the real test for the companies will come, whether it's before an election or at another time, when a political candidate puts something out that is attacking an opponent and is generated through A.I.
But whoever put it out there says, you know, this was meant to be satirical.
It was meant to sort of prove a point.
I mean, we think about the, you know, Haitians eating pets narrative.
When JD Vance said, well, you know, I know that probably didn't happen, but it was kind of to prove a point.
What do they do then, where there's an argument that, well, we're not really saying this happened, and we could even put a little A.I.-generated note at the bottom, but it still is probably going to mislead people in another kind of way?
>> Yeah.
And maybe related is part of what you told Time magazine in a piece about the coarsening of the discourse.
It's not that when the president posts an A.I.
image of himself with huge muscles wearing a crown or something silly, that people are going to think it's a real image, it's that this used to be what the Redditors did, and now it's what mainstream political people are doing.
What's the cost of that?
>> Well, one, I think broadly, the coarsening of our discourse has led to a lot of people feeling very isolated and removed from their elected leaders, because if you are sitting in a partisan politician's district and you belong to the other party, you certainly do not feel represented by them.
When you look at their Twitter feeds, I mean, they're very hostile, they're very coarse.
The use of A.I. here, I think, again, just cheapens the political discourse.
I understand why comms professionals are saying, well, like, it's the language of the internet now; you've got to get up to date with it or you're going to miss out.
And to a degree, that's true.
I mean, when you look at who are the voices that are the most engaged with on X, those tend to be the people that then get spots on cable news.
Oftentimes they're not the most influential politicians by any means.
And so there's this perverse incentive to engage and behave badly online because it drives engagement most.
>> Marjorie Taylor Greene being an example.
>> Being an example.
Yes.
And you go through the top 20 of the most engaged with politicians, maybe other than party leaders, but possibly even including party leaders.
And they are people that you would find to be very politically polarizing; that drives engagement on social media.
I don't think it is a good way to govern.
I don't think it is a good way of engaging with your actual constituents, but it gets you coverage.
And so this incentive structure being so skewed online has really, I think, created an exceptionally perverse architecture in the ways people engage now, because they know, like, I'm going to be in the wilderness of Twitter unless I become more controversial, more extreme, unless I start using memes.
But I don't think it's good for our politics.
I don't think it's good for our country.
>> So if Russia wants to make us angry, how is China using this kind of propaganda?
>> China has gone through a very interesting evolution over my time working on these issues.
So China initially was not very engaged online.
When we started this work in 2016, 2017, you couldn't even find Chinese diplomats on X. There were maybe one or two.
And Chinese state media had some influence in other parts of the world, but negligible influence in the U.S.
Then around 2019, they learned from the Russians and they became more controversial and they became more hostile, and they leaned into the Russian style of engaging Americans.
And you had the birth of what was called the sort of wolf warrior diplomats.
And so these were Chinese diplomats who were behaving very undiplomatically in the same way that Russians have done this for decades.
And they started talking about things like the George Floyd protests, and they'd never done that before.
They wouldn't touch those issues.
Then, around 2022, 2023, they pivoted back and sort of dropped this wolf warrior approach, because I think it was blowing back on them, in that it was perceived as not being beneficial for China's interests around the world.
And so they now are more subtle, but they certainly try to rewrite historical records in a way that is more beneficial to the Chinese Communist Party.
And so one of those ways is actually trying to limit the kind of information that certain chatbots and large language models get trained on, and they control many of these systems.
So that's where China has an advantage over Russia.
Russia, other than Yandex, a search engine that's relatively popular, hasn't created a lot of digital tools.
China has them.
So China can say, Tiananmen Square, that never happened, and that cannot be in the training data.
And so, going into the history, when people use a Chinese chatbot and ask about Tiananmen Square, you are not going to get a result that comports with history.
You are going to get this sort of pro-PRC censorship model of information.
>> I understand people who don't get good information struggling because of autocratic states trying to control the information.
I'm concerned about us.
I'm concerned about people who should know better and who just want to believe the most salacious thing they see.
If it comports with their bias.
And an example from the last week is a story out of Arizona, the Halloween teachers.
Have you seen this story?
>> I haven't seen that.
>> Okay, so some troll somewhere posts on a social platform.
There's a picture of a group of real math teachers in Arizona.
These are real teachers.
And for Halloween, they all purchased and wore a shirt that said, problem solved, and it looked kind of bloody.
So it was, oh, the problem is solved: math problems. But we're going to do it with blood, because it's Halloween.
So somebody posts online and says, can you believe this? These are our American school teachers now; American school teachers are off the deep end.
This is a group of teachers who went out for Halloween mocking the death of Charlie Kirk.
And it became, I mean, it has touched the highest levels of the political right on social media, to the point where Guy Benson, one of the top Fox News contributors, shared it and said, oh my gosh, it really looks like that's what they did.
I mean, countless people shared it. Fox News has covered this.
So it comes out that not only did they wear the shirts last year, but the shirts have been for sale since 2016.
A lot of people wear them.
It's a dumb math joke.
So what do people who shared this do?
Some say, I'm still not convinced; they knew that Charlie Kirk died, they knew they'd get a reaction.
It's like the worst thing you can think about your fellow humans, and you won't admit that you were wrong about them.
Then they're saying, I think it was an A.I.-generated image; whatever they were wearing last year, I don't think it was real.
I choose to say that that was A.I. last year.
But this image is real, and they must have been at least mocking Charlie Kirk and maybe mocking conservatives.
And then the other flavor, instead of, I'm sorry, I got that wrong, is, well, it's in bad taste for teachers to wear bloody shirts at all.
Like, doesn't that show that teachers are still bad?
We're going to keep moving the goalposts here, because now we're going to be, I'm going to use the political right's favorite word, now we're going to be snowflakes about Halloween, because it can still suit our biases.
Instead of saying, whoa, I got taken in by something.
So maybe you get taken in by an A.I. video.
Maybe you get taken in by A.I. audio, in Slovakia or here.
Will we pull back and say, I was wrong?
I shared it because it comported with my bias.
I was eager to see it be real.
And it wasn't.
Or do we double down?
I can't believe how many people are doubling down.
I don't think we're handling this very well.
>> But that's the style of the internet right now: doubling down, never apologizing, being in your tribal camp and sticking to your guns.
And even if you say, okay, I got this one wrong, it's usually, but still...
You know, the Russians have a term called whataboutism.
Yes, yes.
Which is always, well, yes, we are launching this horrific war in Ukraine, but what about Iraq?
It's this deflection: somebody else on the other side is doing something worse.
The other part of that that I think is important, too, is that as we talk about A.I. and all of these high-tech tools being used, more often than not the really successful campaigns we see are using old-school tactics of taking pictures out of context, changing dates, changing captions.
That all still works incredibly well.
So as we're all kind of freaking out about deepfakes, what we might call the cheap fakes have always worked just fine, and they're actually harder for systems to catch, because that's a real image.
It was just from a year earlier; the context is not right.
Yeah, it takes zero skill to be able to pull that off.
And so that's a concern as well.
But I'm with you.
I mean, I just think that doesn't happen offline to the same degree, because I think it's hard for people to sit across from one another when they're provably wrong and to just dig their heels in and keep saying, well, no.
But online, that's just what we kind of fall back to.
And this is why I always push: we need to get out from behind our computer screens and actually talk to people in person, because the conversations tend to go better.
They devolve very quickly online.
And you do get into this sort of tribalism debate, and again, the whataboutism of, well, sure, we got this one wrong, but let me point out the six examples of you guys behaving terribly.
>> Or, how dare you say this, because one time you did this.
Vice President JD Vance was asked about the political reporting on the racist text thread and said that was a bunch of kids.
It wasn't kids; it was people aged 24 to 35.
But he said, but what about this AG candidate in Virginia?
He also said something bad.
So we are infected with all of the worst ideas that came from the troll farms, and now we're just living that.
So as we get ready to wrap up here, it's been a really inspiring hour... no, honestly, in many ways it has been, because I'm glad, Bret Schafer, that people like you are working on this, and I'm grateful for that.
So two questions to wrap.
First of all, a listener named Jack had emailed earlier saying, do we do the same thing?
Do we use A.I.
or propaganda tools in this way?
>> Well, obviously, as we covered, in political communication it's certainly being used now; Trump, being at the top, has started using A.I. a ton.
In terms of the U.S. running disinformation campaigns outside of our borders, we certainly have done this in the past.
I don't...
>> There's no reason to think that we would stop doing it in the age of A.I.
>> No, and we have been caught a few times in embarrassing ways and in ways that I think are not really defensible.
I mean, the case that comes to mind is during COVID, when it was found that a DoD contractor was running a campaign in, I think, the Philippines, to try to get Filipinos not to take the Chinese COVID vaccine.
And they were running basically the same style of troll campaign, with sockpuppet accounts presenting themselves as Filipinos, saying, this Chinese vaccine will kill you, et cetera, et cetera.
Really kind of gross stuff.
So we've done it, at least in the past.
It was something that was still seen as taboo and something we shouldn't be doing.
But you know, who knows what's happening in the intelligence community?
I would guess some of this is happening.
>> And I want to close with what we can do going forward.
I mean, you've talked policy; we've talked about culture.
But I read something this morning that was pretty interesting.
Back in 2017, someone was writing a book on some of the ways that the right was starting to fracture towards authoritarianism, and some of the strains that we've seen with Tucker Carlson and Nick Fuentes lately.
And the author of the book said that what he has been trying to tell people is that every time you do a takedown of Nick Fuentes or these characters, it just gives them fuel, because they want to present themselves as the titan of this empire online, with an army of people behind them, when really they are just professional trolls who don't have anything if they don't get the attention.
Now, they certainly have got a lot of power now and a lot of access to powerful people.
But sometimes it is as easy as just getting offline, being in the real world, spending more time in discourse with real people as opposed to anonymous accounts.
What would give us hope that we're not doomed?
>> I would say don't let the online discourse set the rules for engagement; sort of shoot over the top of it.
Because if you get into a debate right now about, let's say, Charlie Kirk and everything that happened in those weeks afterwards, yeah, that is going to be a hyper-polarized conversation.
If you set liberals and progressives on one side, conservatives on the other, the outcome is not going to be great.
So I would just leave that, and let's start with concentric circles of what we agree on.
Just take the information space.
What do you want to have actually show up in your feed when you run a Google search that could have life or death results, or affect your money or your life?
You know, I have young kids.
I've googled things in the middle of the night.
Do you want our information systems to work like this: I say, oh, you know, I think my kids might have lice, and the top result says, just throw some bleach on them and they'll be fine?
Everybody says, no, that's terrible.
So okay, like, how do we get to an information system where we say some sources actually are trusted more than others, and we sort of work out from there.
So like, don't let the trolls dictate the topics that we talk about or how we talk about them.
Just shoot right over that.
Because again, even with all the restraint that I try to bring to this, because it's my job, if I start talking about the Charlie Kirk assassination and some of the people who were enraged at Jimmy Kimmel afterwards, knowing that some of those same people have been coming at us, at my colleagues, that's not going to be a constructive debate.
So let's just not have it right now.
And let's kind of push forward and talk about something else where we can have a constructive dialog.
>> That's a great way to end it.
I want to thank Bret Schafer, senior fellow focusing on media and digital disinformation for the Alliance for Securing Democracy.
He's been a guest in Rochester tonight of the local chapter of the World Affairs Council, and they would love to see you in the future if you'd like to join the World Affairs Council.
They bring in people like Bret and colleagues in different fields throughout the year, and sometimes we're lucky enough to have them on Connections.
You are generous with your time.
Thank you for flying in and coming right to the studio, and thank you for the work that you are doing.
>> Thank you.
>> Appreciate it.
And from all of us at Connections, thank you for watching or listening, wherever you are, whatever platform we're on.
This is the real thing.
No A.I.
here.
It's just real people doing journalism and having conversations.
And we'll be right back with you tomorrow.
Recapping the elections on Connections.
>> This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station, its staff, management, or underwriters.
The broadcast is meant for the private use of our audience.
Any rebroadcast or use in another medium without express written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the Connections link at wxxinews.org.
