Disinformation, especially on social media, is threatening the upcoming election just as it did in 2016, tech giants Facebook and Twitter announced this week. Deen Freelon, an associate professor at the University of North Carolina at Chapel Hill, joins Michael Hill to discuss the potential power of these digital disruptions.
This week, Facebook and Twitter announced that disinformation campaigns, just like the ones that targeted the 2016 presidential election, are threatening this year's election.
As more of these sophisticated campaigns persist, many are spreading rapidly online before social media companies can fact check and take them down.
I recently spoke with Deen Freelon, associate professor at the University of North Carolina at Chapel Hill, about these digital election disruptions.
Delivering disinformation in politics and political campaigns is really nothing new. What makes it so dangerous and disruptive?
Well, I think disinformation has really been exploding in the digital sphere, right. So this is something that we saw a lot during the 2016 election and something that we're continuing to see now.
And I think that in the 2016 election, before, during and after, this was something that Americans were not necessarily aware of, they didn't really see coming. And now they're a bit more aware of it.
But, of course, the disinformation has evolved. So as it evolves, it sort of remains ahead of Americans' understanding and perception of it, and that's what really makes it dangerous.
It's savvier now, would you say?
Yes. Right. So if the point of disinformation is to deceive people and to get them to believe things or engage in behaviors that they otherwise wouldn't, you can't really be forthright about your identity and what you're talking about. So you have to keep ahead of the media.
And, of course, the academics like me that are studying it, trying to get to the bottom of it. And the real problem that we encounter is that so much of this can only really be understood in retrospect. So in other words, it's only after the social media companies release the identities of the accounts and the people behind them that we can truly understand what's going on. And that's what really makes it scary, because in the moment, there's really no way to do that.
What is the impact then of this and how effective is it? How well does it work?
The impact of disinformation should be understood within the broader context of media effects more generally. So I participated in a study that showed that disinformation provided by the Russian Internet Research Agency really didn't change anybody's minds. Right? So it wasn't like shifting, you know, left wing to right wing or anything like that. So that's pretty definitive.
And other accounts have also suggested this is not the way that this disinformation typically works if it has any effects at all. What it typically does is it pushes people further along the direction that they're already in. So if you're left, it maybe makes you more left. If you're right, it maybe pushes you more to the right. There needs to be more research on that. But we know that it does not result in this sort of, you know, major opinion shifts that sometimes is out there propagated in popular accounts of this type of phenomenon.
Does it have a disproportionate impact with any segment of the population?
Yes. So we know that there have been certain segments of the population that have been disproportionately targeted. We know that, in the case of the Russian Internet Research Agency, white conservatives and Black protesters on the left were disproportionately targeted in 2016 and afterward. And this, of course, follows the disinformation playbook, where you target the most vulnerable populations and the populations that are most likely to engage with this kind of content, and where the impact is likely to be the greatest.
You said it's savvier now than in 2016. Is more of it also based on stereotypical things?
Yes. I mean, this is one of the areas, the fissures in American life, that foreign-based disinformation actors like to exploit the most. And so race has been a major sort of flashpoint for that. You know, immigration is another one. And so these kinds of hot-button issues are really ripe for disinformation exploitation.
How easy, how hard is it to detect? And the reason I ask that is because The New York Times is reporting this week that there is some Russian troll agency, some agent of the Kremlin, that has been hiring American freelance journalists to work for something called Peace Data or peace research or something like that. And the folks who are being hired don't know that they're working for the Russians, and they're going out and spreading disinformation.
Right. So this is part of that evolution that I'm talking about. In 2016 and immediately after, it was mostly Russian agents, people who were, at least we know, Russian speakers, that were engaging in this. And so the recruitment of actual Americans is really the next evolution of that. And, you know, who better to pretend to be Americans, or who better to represent foreign interests, than Americans themselves, right? They're less suspect because they're some of us, right?
And so, absolutely, I think this is part of that next evolution that I was talking about, making it much more difficult to detect. Even the people that are carrying these messages forward can't detect it. They don't necessarily know what they're engaging in. They're just collecting a paycheck. And of course, at the moment of economic precarity that we're at, you know, there may not be too many questions asked about where that money is coming from if you're desperate to get some of it.
In 2016, I remember I was presented with something that someone apparently got from Facebook about Hillary Clinton and child trafficking and all that stuff. And then I asked, OK, where did that come from? And when someone told me the source, I immediately did not recognize it as anything worthy, anything credible. So that raises the question for me: how do we, as the targets of these disinformation campaigns, separate fiction from fact?
Part of the issue that we've run into is that it is extremely difficult to do so until after the fact, because we have to rely on these social media companies to disclose the identities, or at least the social media handles, of these disinformation actors. But one thing I tell my students is that you really need to understand that disinformation plays upon your preexisting political biases. Right? So it plays on confirmation bias, motivated reasoning, which means that when people are really trying to appeal to you with disinformation, they're going to try to say things and do things that attack people that you already don't like and support people that you do like.
And so that, I think, is where people should really pay attention: when it's something that seems too good to be true, whether it's attacking something you don't like or in support of somebody you do like. That raises the possibility that people are really trying to engage in a disinformation-style attack on you. And this may be from somebody that you know, or an organization that you're aware of, or from a source that you are not familiar with.
But when it's really going overboard in support of your political beliefs, that raises the possibility. It's not definitive proof, but it really should put people on high alert that they may be on the receiving end of a disinformation attack.
Associate Professor Deen Freelon of the University of North Carolina on disinformation in our political system. Professor, thank you very much for joining us.
Thanks a lot.