
Story in the Public Square 2/4/2024
Season 15 Episode 5 | 26m 55s | Video has Closed Captions
This week’s guest is Allie Funk, Research Director for Technology & Democracy
Allie Funk researches the complex and evolving role technology plays in democracy at home and around the world. Funk dives into the ways artificial intelligence is exacerbating digital repression, as it relates to disinformation, censorship, and surveillance. She contends that developments in AI capability have made it easy to create false and misleading information at scale.
Story in the Public Square is a local public television program presented by Ocean State Media

It was easy to get caught up in the triumphalism of the rise of big technology in our lives, including in our democracy.
Today's guest researches the complex and evolving role technology plays in democracy at home and around the world.
She's Allie Funk, this week on "Story in the Public Square".
(bright music) (bright music continues) Hello, and welcome to a "Story in the Public Square" where storytelling meets public affairs.
I'm Jim Ludes, from The Pell Center at Salve Regina University.
- And I'm G. Wayne Miller, also with Salve's Pell Center.
- And our guest this week is Allie Funk, Research Director for Technology and Democracy at Freedom House. She joins us today from New York.
Allie, thank you so much for being with us.
- Thanks for having me, excited to chat with y'all.
- Well, you know, I've gotta tell you, I'm really excited to have you on the show.
I have been a fan of Freedom House since I was a student in high school. I was a weird kid.
But maybe for those who don't know, tell us a little bit about the work that Freedom House does.
- Well, I think you probably won't be alone; a lot of your listeners and viewers likely learned about Freedom House in high school and college.
We pop up in a lot of those classes.
So Freedom House, we're an independent, non-partisan organization founded back in 1941; we're actually the oldest democracy organization still around today.
And we wanna defend and expand democracy throughout the world.
So we do that through our research, including our Freedom in the World and Freedom on the Net reports, which study the global state of political rights and civil liberties; we do it through advocacy to democratic governments and the private sector. And then we promote democracy through direct emergency and programmatic assistance to human rights defenders around the world who are fighting for freedom.
- You know, so you lead Freedom House's work on technology and democracy.
We'll get into some of the details of some of your recent research in a moment, but talk to us about the pairing of those two issues, technology and democracy.
What's the link between them?
- Yeah, so we created this technology and democracy portfolio about a decade ago, with the understanding that technology can be used to drive authoritarianism just as much as it can be used to protect democracy and advance human rights.
So we wanted to understand, how is that happening?
What type of technologies are protecting human rights, undermining them, and what are the best practices?
You know, our everyday lives now connect with the internet, especially since COVID; I mean, I'm calling in from my home in Brooklyn. So with our everyday lives so connected to what's online, to our phones, we wanted to understand how technology undermines or impacts democracy and what we can do about the threats to make sure it's advancing democracy.
- Have we been a little bit naive about the promise of technology and democracy?
There's an iconic, in my mind, cover of MIT Technology Review from 2013 or 2014 boasting about how big data was gonna save politics and save democracy, and that didn't pan out.
Were we naive about the perils of technology to democracy?
- I think we were excited, maybe a little bit naive, but really excited about what the internet could do.
I mean, I think back to even being able to get on AIM at home, when you couldn't do that while my mom was on the phone, and look where we are now, where you can have every device in your home connected to the internet. It was a really exciting time.
And we had history to go on; if you think about how the radio played a really important role for democracy decades before that, there was a lot of potential. But just as technology can promote democracy, it can do the exact opposite.
And now, I think over the past decade, we've really deepened our understanding of how that can happen and we're at a better place to resist those threats than we were, you know, in the early 2000s.
So I think we know a lot more and we're a little less naive, and I hope that we're actually still excited for what technology can bring.
- So late in 2023, you led publication of Freedom House's annual report, which is titled "Freedom on the Net".
And among its findings, quote, "Global internet freedom declined for the 13th consecutive year."
Let's take this in two pieces here.
First, what happened in 2023?
- Oh, so much happened, so yes, we released "Freedom on the Net" back in October.
And for folks who don't know, it's our annual assessment of internet freedom around the world; we've been doing it since 2009. So I have a really deep understanding of how the internet has evolved over these years. 2023 marked the 13th consecutive year of decline.
And what our findings really made clear is that we are seeing a record-breaking crackdown on free expression.
So more governments than ever before, since we started doing this research, are arresting people who are simply expressing their political views, their religious views, their social views online.
We also have a record number of governments that are blocking websites that host political speech, religious speech.
So, you know, what we're really seeing is that, you know, digital repression is deepening, it's worsening.
And one of the largest findings of this report that I'm excited to dive in with you all today is how artificial intelligence is actually making this problem worse.
- So we're gonna get to AI in a moment here, but sort of a general question, how do you measure internet freedom?
I mean, it seems like sort of a nebulous term on one level, but obviously, you're the researcher and the scientist, so fill us in.
- Yeah, happy to.
So first, internet freedom to us, it's just the idea that those rights you have offline, the things you can do in the street or in your home should be protected when you're online.
So what we are measuring in every country we cover is how easy it is for folks to get online in that country.
What does that internet look like once they're online?
You know, can they access reliable information?
Can they access different social media platforms?
Can they communicate with their loved ones?
And then, you know, while they're participating online and doing that communication, or, you know, streaming those videos, are their rights to free expression and privacy protected or undermined?
So things like surveillance laws and free expression laws: is there strong due process?
Are people harassed by the state?
And I think it's important to think about, you know, for all of our countries, we're fundamentally looking at people's experiences, we're not ranking, you know, one government's performance.
So it really allows us to get a comprehensive analysis of how people's freedoms are or are not protected.
- Yeah, so Allie, you mentioned the key finding about artificial intelligence.
And if I remember correctly, it was essentially that generative artificial intelligence threatens, I think the verb you used was, "to supercharge online disinformation."
We've spent a lot of time talking about disinformation on this show in the seven years that we've been on the air.
What does AI portend for disinformation?
- Yeah, so we looked into the ways in which AI is exacerbating digital repression, as it relates to disinformation, censorship and also surveillance.
So zooming in on the disinformation point: like I mentioned, we've been doing this report since 2009, and disinformation campaigns are really just a fundamental feature of the information space. One point we make in the report is that of the 70 countries we cover, at least 47 governments employ pro-government commentators to manipulate information in their favor.
Those could be ministries that are doing it, maybe they're outsourcing to private companies.
But our big concern is with generative AI, and here I'm talking about chatbots like ChatGPT, but also tools that can generate videos or images. Our concern is that generative AI lowers the barrier to entry into the disinformation space. It basically makes it easy to create false and misleading information at scale. And then what we expect to see is that the networks governments have spent years building up can distribute that information at scale. So generative AI lowers the cost of creating disinformation, and the structures to disseminate it already exist.
So we were really interested in, you know, where are we already seeing this?
Where are we already seeing cases of generative AI being used for this purpose?
And we found in 16 different countries that generative AI was used to sow doubt, to smear opponents, to discredit, you know, critics.
And it's popping up in election periods, around protests, and in countries facing political crises, environments already ripe for problematic information, where people are already trying to figure out what's real and what's not. We also found that generative AI is disproportionately impacting women and other marginalized communities that already face increased threats from those in power.
- Yeah, you said 16 countries where you already see this taking place.
Can you name names?
Can you give us some specific examples?
- Absolutely, so one example we really highlight in the report is from Nigeria.
So they had elections last February, big moment for that country's democracy.
And one example that we call out is an AI-generated audio clip that spread across social media, purporting to capture an opposition figure plotting to rig the balloting in their favor.
And you know what I think is important for this context: you might think, oh, it's just an audio clip, I might not listen to that when I'm playing around on Facebook or Instagram. But it can be really hard to figure out whether an audio clip is real or not; you don't always have the same tells in audio that you might have in an AI-generated image, where you might see weird fingers or weird hands.
- [Jim] Yeah.
- Audio might be harder to discern.
And also, within the Nigerian context, there's a serious risk for political violence, and this has happened previously.
So what it's doing is not only muddying the information space, making it harder to tell what's false and what's real, but also raising the risk of violence in an already heightened environment.
Another example that we highlight comes from Venezuela, where we found reporting about broadcasts from a totally made-up news outlet claiming that folks should visit Venezuela and that the economy's going really great. The broadcasts were created by a service in which the user can just type in what they want, and these videos were spread by Venezuelan state media accounts to users in the country.
- So Allie, is it just countries that are doing this or are private companies somehow involved?
And my guess is there are some private companies spreading disinformation?
- Absolutely, so the thing that's really interesting, and what makes this research challenging, is that it's really hard to trace back the actor who created a piece of content and who originated its spread. We try to be really clear in the research, and the broader disinformation research community makes this case too, that you can't always link activity back to one actor, but sometimes you can.
And we did find cases in which the private sector is creating these tools, which can then be used by whoever wants them. A lot of these companies might have terms of service saying the tools can't be used for political campaigning or for creating false information, but enforcing those terms can be really challenging. So there's this enforcement gap between what the rules are and how the tools are actually used.
- So why are private companies doing this?
I'm guessing to make money 'cause they're selling, right?
- Yeah.
(chuckles) Yeah, it's lucrative.
- The profit motive, right?
- Yeah, it's very lucrative.
- Expand on that, if you can.
- And I mean, you see this even beyond the disinformation space; in the digital repression world more broadly, so many forms of digital repression have been outsourced to the private sector. You have companies creating disinformation campaigns, you have hack-for-hire groups that can break into different users' computers, and you have the mercenary spyware industry, where surveillance tools are really cheap and easy to get. What this has done overall, in the world that I exist in and that we monitor, is make digital repression so much easier to carry out because it's more affordable. And that means governments that might not have the same technical sophistication can enter the digital repression space and go after dissidents, journalists, and other folks.
- But, Allie, it's not just disinformation.
I know that you've also written about the use of artificial intelligence by regimes for censorship, could you explain that?
- Absolutely, so we found two different ways that AI is enhancing government censorship.
The first is actually a really new, emerging trend that I think is fascinating: deeply concerning, but intellectually really interesting as well. With ChatGPT, with Bard, with these chatbots, what they can do is give you access to information from across the internet, right? But in countries where the internet is broadly censored, they can also give you access to information that is otherwise blocked. So a chatbot is actually this really great tool that could be used to access government criticism, or information about religions, that might otherwise be censored.
And what that has meant is that some governments have tried to censor the use of chatbots, and China is the one that's really leading this charge.
So companies there cannot incorporate ChatGPT within their products, and regulation in the country also requires that AI services created by Chinese or foreign companies adhere to the ruling party's stances.
So in practice, that means if you play around with this chatbot called Ernie Bot, which I think is a fantastic name, created by the Chinese company Baidu, and you ask questions about what happened in Tiananmen Square decades ago, where there was a massacre of pro-democracy protestors, the bot won't answer. Same thing if you ask about Xinjiang, what's happening to the Muslim community there, and the government repression they're facing: it also won't answer.
So we expect that as these chatbots become increasingly used by people around the world, you're gonna see more attempts to censor what they produce.
The second area we really zoomed in on is requirements for private companies to use AI when they're moderating content in different countries.
So we found that at least 22 different countries' governments have laws that require automated systems to censor political, social and religious speech.
And this gets a little wonky, but basically, any content moderation being done by any platform at scale is going to use AI; you cannot do moderation at scale without it. And AI is really important here: it helps you detect disinformation campaigns, and it safeguards human moderators from seeing really traumatic content.
But a lot of countries now are actually just requiring the use of AI to censor political content.
And one example comes from India, where YouTube and Twitter, well, now X, were required to limit access to posts about a BBC documentary that had investigated Prime Minister Modi's role in communal violence back in 2002. And now, because of the country's legal frameworks, those companies have to use AI to limit access to any future posts about that BBC documentary.
- So, Allie, has the rise of AI replaced conventional forms of information suppression and disinformation?
And what is the trend, if that has not yet happened?
- Yeah, you know, I think this was actually one of the most interesting and most important points of the research: AI is worsening a lot of the trends that we've been concerned about, but it's not replacing them.
So we still found record highs in traditional forms of censorship, things that have nothing to do with AI.
I mentioned this a little bit earlier, but a record high number of governments are blocking websites to limit access to information for people in their countries.
We had, you know, at least 16 different countries that just shut off the internet altogether.
And, you know, why is that happening?
You know, if governments have access to these tools, why aren't they just using those if it's easier for them?
One of the reasons that I think traditional forms of censorship are still thriving is that AI-powered tools aren't always that effective.
And especially during times of crisis, like protests, elections, these AI tools, they just struggle to keep up with a surge of dissent that's happening online.
And I think Iran, this past year, is really such a clear case of this.
The Iranian government operates some of the most sophisticated censorship controls in the world, second really only to China and maybe a few other governments.
But during mass anti-government protests, you know, led by the women in the country, the government still shut off the internet, they blocked the remaining social media platforms, despite having these sophisticated tools.
And we think that's why: those censors just couldn't keep up with the mass outpouring of discontent from women and other folks in the country, so the government had to resort to these blunt controls.
- So given how many people use social media, a little, a lot, all ages, pretty much, how can individuals sort through this?
How can they make sense of this?
How can they find the real versus the not real?
That sounds like a tremendous challenge, any advice here?
- It's a huge challenge.
I mean, there are basic digital security, digital resilience, and information literacy best practices someone can follow. Checking your sources, I think, is really great: if you see a headline, click through to read the article and make sure the source providing it is reliable. I also like to cross-check different outlets when I read the news; that can be really important.
But I think, fundamentally, the individual can only do so much.
What we actually need is for governments to step in, in a rights-respecting way, to encourage the diversity of information and to support local media environments. A huge challenge right now is that so many local media environments have been totally or partly wiped out, so you don't have as many diverse journalists and news outlets.
So governments have a role to play here, as do companies. Companies need to follow best practices when they're building these tools, making sure they test them before they get introduced to the public.
And companies should also make sure they understand the impact of those tools through what are called human rights impact assessments.
'Cause a lot of times we don't even know what is happening once these tools get introduced and how they're impacting our everyday lives.
- Yeah, Allie, there's so much that I wanna ask you about here.
I'm old enough to remember a time when social media companies were largely left to their own devices; they were gonna self-regulate.
And that didn't exactly work to plan, at least for the health of the Republic.
Have we learned, has the US Congress learned anything from that experience that's gonna be applied to the AI companies as they emerge?
- I hope so, and I think they have, if we're looking at how quickly AI regulation has come about. With internet regulation, it took years and years for us to finally understand that the laissez-faire approach wasn't working and we needed a new one. Now think about how quickly things have moved: ChatGPT was introduced only 13 months ago, and we already have the EU finalizing text on a major AI Act that is going to cover some of those services in part.
So we are in a different space fundamentally from where we were when the internet generally got introduced.
Looking at the US, I mean, I think there are challenges with what Congress can do, you know, there's been a lot of legislative paralysis on tech policy and internet regulation that we've seen in Congress.
The Biden administration has sought to fill this gap: there's a new executive order on AI, and the administration also introduced an AI Bill of Rights last year that tries to lay out a vision for how AI should or should not be used. And there seems to be a lot of bipartisan interest in Congress acting on this issue, so I hope it does.
So at Freedom House, we're watching all of this closely, and we're working with our partners to try to guide those in DC to figure out, how do we do this? How do we regulate in a way that's gonna protect human rights while still ensuring innovation and setting us up for a more rights-respecting future?
- You know, we're moving into peak political season here in the United States, there are a number of elections taking place in 2024 around the world.
How are these dynamics, how are these forces, how are these trends going to impact those democratic elections?
- I'm really concerned about this next year of elections.
I mean, 2024 has over 50 different countries with folks heading to the polls.
You've got, obviously, some of the world's biggest democracies, you have the US, you have India, you have Indonesia, Mexico, I mean, the list goes on.
And this is all happening, like we've been talking about, at a time when generative AI is now on the scene, it's worsening disinformation, and governments and their supporters have a whole playbook of tools at their disposal to undermine a reliable information space.
And one of the things I find most concerning is that so many social media platforms, and some AI companies, have cut the very teams that were dedicated to these concerns: the platforms have cut trust and safety teams, and they've cut teams focused on specific country contexts. One of the main points here is that how a company responds to threats to election integrity in India is gonna be very different from how it responds to election integrity threats in Mexico. So it's really important that these companies have internal expertise on those countries and regions, and we've seen a rollback of that.
So I think we're in a really tough position coming into this big election year.
And not enough attention is being paid to these elections around the world.
- You know, Allie, we've got literally about 20 seconds left here.
Are there any bright spots?
It's been sort of a heavy conversation; is there any reason for optimism, for hope going forward?
- Absolutely, I mean, global civil society, and here, I mean, nonprofits, activists, journalists, academia, global civil society, again and again, is pushing back against digital repression and they're winning in a lot of cases.
So, you know, they're taking problematic surveillance laws to judiciaries, and judiciaries are overturning those laws.
They're mobilizing broad public support against government policy, and it's convincing those in power to change their approach.
So again and again, if you empower civil society, if you give them the financial resources that they need to do this work, you're going to see an impact.
And I think that is ultimately how we reverse this global decline in internet freedom.
- It's hugely important work, Allie Funk from Freedom House, thank you so much for sharing some of your work with us today.
That is all the time we have this week, but if you wanna know more about "Story in the Public Square", you can find us on social media or visit pellcenter.org, where you can always catch up on previous episodes.
For G. Wayne Miller, I'm Jim Ludes, asking you to join us again next time for more "Story in the Public Square".
(bright music) (bright music continues) (upbeat music) (no audio)
