
Our First Presidential Election in the Age of Artificial Intelligence
8/12/2024 | 58m 45s | Video has Closed Captions
Panelists discuss the threats we may face in the first presidential election held in the age of AI. Speakers include Dara Lindenbaum, Matthew Perrault and Chris Bail.
Asheville Ideas Fest is a local public television program presented by PBS NC

- Hi, I'm Kirk Swenson here at Ideas Fest in Asheville, North Carolina.
In this next program, we talk about possible threats and benefits of artificial intelligence in the 2024 presidential election.
Hear from election and tech policy experts in this panel discussion.
- [Announcer] Quality public television is made possible through the financial contributions of viewers like you who invite you to join them in supporting PBS NC.
- Artificial intelligence is shaking the foundations of our democracy.
We're months away from our nation's first AI election, and we, I think, have potentially things to fear.
My goal from this panel is I think not that we go from "oh no" to "oh yes", but maybe that we just go from "oh no" to "oh".
[audience laughing] And I'm hoping that we'll get there by looking at three questions.
So the first is, what do we mean when we say that this is our first AI election?
What does that mean?
What are the risks?
What are the opportunities?
The second is, how might we govern AI as we move into a world where it's more of a part of our political discourse?
We have two experts here who will look at that from a range of different issue perspectives, not just from governance but also from how we think about interventions on the platform side that might help us to govern AI more effectively.
And then third, we are all citizens in our democracy.
How do we navigate this new information landscape as citizens and as this election approaches?
We have two extraordinary panelists joining us today to help us move through this, Dr. Chris Bail, one of the world's foremost experts on polarization.
He's a professor of sociology, political science, and public policy at Duke University.
He also runs the Polarization Lab, which he founded.
His 2021 book, "Breaking the Social Media Prism", was featured in "The New York Times" and "The New Yorker", and it was described by "Science Magazine" as masterful, and I read the book and I agree with "Science Magazine's" characterization.
Commissioner Dara Lindenbaum has served on the Federal Election Commission since she was confirmed by the Senate in May 2022.
She was previously a partner at the law firm Sandler Reiff where she advised clients on campaign finance and election laws.
Before that, she worked at the Voting Rights Project at the Lawyers' Committee for Civil Rights Under Law.
So starting off, what do we mean by the first AI election?
Dara?
- Huh, thanks for having me, and thank you all for coming, and Asheville is an incredible place, and you are all so lucky to live here.
[audience laughing] I have fallen for this place hard in my 24 hours here.
[audience laughing] So let me first just say I am here, you know, as a commissioner from the Federal Election Commission.
There are six of us.
So I speak for myself.
I don't get to speak for my colleagues.
So, you know, me, Dara, I see this as the first election cycle where everyone who's running for office and participating in politics has relatively easy access to tools that can help them amplify their message in different ways, and that can be really quite good. Some of the fears are about how that can of course be misused, which I'm sure we'll get more into, and the idea of deepfakes and issues like that. And while I am certainly, you know, concerned about misinformation and disinformation and, you know, polarization, and how, you know, tech can be used to really get at people's beliefs and really edge people in, to me, this is not very new. Particularly when we talk about foreign actors trying to interfere with our elections through social media and through tech, we've seen that for a while now.
We may not have realized it, which was part of the problem, but I think what we are entering is actually a period of time where we are all thinking a little bit more critically about what we see, and I see that as a benefit here.
- So Chris, can you parse out what is new and what's different?
Artificial intelligence wasn't invented in 2023 or 2022 or 2024.
It has been around.
What is new, and do you see this as our first AI election?
- Sure, and, you know, to Dara's earlier point, the first time we saw broad-scale foreign influence on our elections was probably around 2016, and back then, we were dealing with a much different kind of AI.
You know, that was sort of a more primitive version which could do some plausibly human messaging but nothing that we're seeing today, which is to say a technology which is almost indistinguishable from real people when we do research.
So if we show people hundreds of messages, some generated by large language models like ChatGPT or GPT-4 and some written by a human, it's basically a coin flip whether people can tell which one is which.
Now, that's a little scary.
Even scarier is that there's new research that suggests these AI also have persuasive capabilities.
Some research suggests they can even be more persuasive than humans.
So when you put these two things together, there are certainly threats on the horizon that make this, I think, a unique election.
- Okay, so we're zigging toward the risk side.
So everyone is like, "You promised 'oh'.
We're heading toward 'oh no'", but some of your research has talked about the benefits of AI for political discourse, so can you talk a little bit about that?
- Absolutely, and to Dara's point, this is not a new phenomenon.
We've been dealing with foreign influence for a while now.
You know, a shocking piece of research that many people don't hear about is that the influence of misinformation is actually much smaller than you might think in terms of our attitudes and behaviors.
- So can we say that one more time because this was the shocking thing to me.
So I was on the public policy team at Facebook.
I lived through the 2016 election there.
We woke up in the morning after the 2016 election and then for multiple years afterward with the reinforced idea that misinformation is one of the foremost if not the foremost threat to our democracy.
When I got out of Facebook and went into the academy, what you just said was something I heard repeatedly from social scientists, often sort of privately, not stated quite as clearly publicly as you're stating it now.
Can you just say it one more time?
[audience laughing] - Misinformation doesn't have as much influence as most people think, and look, I get it.
It's really easy to blame misinformation and social media for all of our current social malaise.
It's really a facile thing to do, right?
It solves all our issues, right?
There's polarization because of social media.
There's misinformation because of social media, and I just worry personally that if we lay all the blame at a platform's feet, we won't solve the problem, which cuts much deeper than that.
So as we think about how AI represents a threat to the social fabric, we also, to Matt's point, need to think about how it can begin to repair the fabric, and I think there's some good news there too.
- Okay, so I think I know exactly what each of you are thinking right now, which is, "Okay, so Chris said this thing that misinformation is less impactful than we think it is, but I saw this crazy thing on Facebook, and I'm sure it's taking off and everyone believes it."
- Yeah.
[audience laughing] So here's the really interesting thing, the average American in 2016 saw two pieces of misinformation.
Now, if you ask people how much misinformation they see, they say, "Almost none," but then if you ask them how much misinformation do other people see, they say, "Everyone else sees tons of misinformation," [audience laughing] right?
And so that is really the heart of the problem, right?
It may not directly affect our attitudes or our behaviors very often, but it can, we think, decrease public trust, and that's something that is deeply concerning, especially as these things become more capable and scale up.
I don't wanna minimize the threat.
It's still there.
It's just not perhaps an existential threat, I would argue.
- Okay, so I want to go to the threats in just a second, but before we get there, benefits of the first AI election.
What do you see this technology potentially bringing to political discourse that might be positive?
- Well, let me just start with: I certainly understand that point, but from what I've seen, that may be the case across the board, but in Black and brown communities, it's not the case.
- Sure, very good point.
Yeah, very good point.
- And I think it does us a disservice to not address that, that, you know, just like when we look at polling, if you look at the entire country on the presidential election, it doesn't really matter, right?
It's about the individual states that are the swing states, and if we're talking about how foreign actors or local actors are targeting people, it is unfair of us to say that it's approaching people differently or that it's not touching people when certain communities are absolutely being targeted, and we see it day in and day out that they're trying to expose it.
So you all might not be seeing it, but people are.
- Can we just pause on that for a second, because I think that is lots of people's intuition.
It was at one point my intuition.
Can we just get your sense of whether the data supports the assertion that targeted misinformation is more impactful than misinformation generally, particularly in certain communities.
- Yeah, certainly years of research in political science would suggest that targeted persuasion is better than general persuasion.
So it is the case, even way back in 2016, that Black Lives Matter movement and other non-white social movements were targeted by especially the Russian Internet Research Agency, and look, a small amount of misinformation can do a lot of damage.
So just one person, let's say an elected official, sharing misinformation can reverberate very far.
So again, I don't want to minimize the threat, and I do think you're quite right that misinformation has different types of effects on different types of people. And Matt, on your question about the research, I would love to give you a one-line answer, but the key problem is we just don't have the evidentiary basis we need to answer it concretely, because we often lack access to data. When researchers like me try to collaborate with social media platforms, sometimes we have great success, but other times, particularly for sensitive issues like this, we don't get the data we would need to answer a fine-grained question like the one Dara was raising.
- Yep, great, okay.
- Sorry, the benefits.
- So, okay, so yeah, so those are something on the dangerous side.
Let's go briefly to the benefits side.
- Yeah, so there are a couple of different things.
This technology can be very cheap, and it can mean that a, you know, local candidate running for office who is challenging the, you know, establishment can put together really great scripts for their door knocking using ChatGPT and really tailor it there.
One of my favorite examples, unfortunately, "The New York Times" recently covered it, so it's not my secret example, is this group called Fair Count. They were part of a test case where their door knockers, I think in Mississippi, were asking people, "Do you plan to vote?
If not, why?"
It's really hard to put all of that into a system and to look through it and synthesize it, but using this artificial intelligence technology, they were able to use voice notes that were then synthesized, and it spat out some really great ways to reach people who didn't plan to vote.
So that was a really great, you know, use of it.
You know, putting together ads can be a lot cheaper now.
You don't have to pay as much money to your media consultants and, you know, that can be good.
An area where people are actually really concerned right now is chatbots and how they interact with people, and there is one way where a chatbot can be really great.
So it's cheap and it could be across the board good in responding to voters' questions and saying, "How do you feel about this?
How does the candidate feel about this?"
Questions about how to vote, where to vote, things like that, but people are very concerned that that can be turned around and that the chatbot really is just telling people what they wanna hear, which is totally unrelated to the candidate's positions and ends up in a different form of kind of taking you down.
Of course, the biggest concern right now in the press and amongst other people, because it's the sexy topic, is these, you know, deepfake videos. What we've seen is, yeah, now you can do a really good deepfake pretty cheaply, but you've also been able to do that, not as well, for a while.
Back in even 2010, I remember a robocall in Maryland where they had a voice that was, you know, made to sound like Michelle Obama's telling Democrats that they could stay home, that Martin O'Malley and the Democrats have won.
That was easy to prosecute.
All those robocalls are.
And at the same time, we just saw a video about President Biden that was twisted around to make it appear like he wasn't talking to anybody, but you don't need artificial intelligence to do that.
You can just do a different camera angle.
So again, I'm not trying to underplay what a game changer this can be, but so many of the risks that existed already existed before this, and there are these added benefits that we are still seeing.
- Okay, so I wanna go now as deeply as we can into the scary stuff.
So you are responsible for governing the electoral process.
You're at the Federal Election Commission.
We're a couple of, maybe that in itself is scary.
[audience laughing] We're a few months from an election.
What are the things related to AI that you wake up in the middle of the night worrying about?
- Yeah, so let me also be clear that at the Federal Election Commission, we only deal with campaign finance, the money in politics, how money is spent, how money is raised, disclosing it, et cetera.
We have now been put in the spotlight here on artificial intelligence because we had a petition for rule making asking us to create rules for determining if someone is misrepresenting what they're doing.
It is a long way of kind of throwing us in there because there's no one else to throw in there.
In the United States, we have two federal agencies that deal with elections.
It's the Federal Election Commission, which is us, and the Election Assistance Commission, which really deals with administering elections.
They work with the election offices around the country.
They certify the machines.
They're not about, you know, the voting right side or the, you know, adjudicating anything.
So there's not many people you can look at here to do anything.
So they're trying to look at us, and there's not much on the books with Congress.
They're trying to do that as well.
The people we don't talk about that I'm pretty sure are doing something are the intelligence agencies, because to me, that's where the threats really are.
We have these foreign actors that have been meddling for a long time.
They know what they're doing.
They have all of the ways to do it, and my biggest concerns are having these campaigns dropped in really close before an election.
I don't know what they're gonna do 'cause I'm not, you know, master of the dark arts in campaigns.
I just used to represent those people and I don't have to anymore, [audience laughing] but there's a lot of damage I think they can do, and we as regulators can't stop it.
- Can we try to get as specific as we can about that? I mean, I understand you weren't responsible for the dark arts, but what is the specific thing that you think a foreign influence operation might be able to do that strikes you as disruptive?
- There are two sides that scare me the most.
One is a, I guess, a deepfake of sorts that really cannot be, you know, something where they are using all of our concerns already.
You know, Biden's age has been in the news, right?
There's no denying that, but where they take that and exploit it and put together a video of, you know, something happening with him that is so detrimental and seared into people's minds, that kind of thing, that kind of fake thing.
The flip side of it is the liar's dividend idea of it, where, you know, nothing is real anymore.
So let's say you have a candidate who does something truly horrific and we have it on video, and now they're saying, "Oh, that's not me.
That's fake.
That's artificial intelligence."
And that so skews the way that we view the world and what's real and what isn't, and that further divides us, because my best friend and I could have different perspectives on whether something is real, and to me that is terrifying.
- Mm-hmm, but again, there's something new here, but it's not the idea that we can manipulate facts.
Facts have been able to be manipulated.
- Always.
- I think the allegation, the assertion is that AI enables us to do it at scale or enables us to do it in more authentic ways or enables us to target it or personalize information or manipulate information in ways that draw on people's biases in new ways.
Chris, given your understanding of the literature, what are the things that you're scared of?
- Yeah, the thing that keeps me up at night is that we probably won't be able to identify these things.
You know, in 2016, it took huge parts of the US intelligence services, investigative journalists, and hundreds of people like Matt inside platforms to surface several thousand accounts, and those were real people working in a building in Moscow and elsewhere around the world, many other places, in fact, and we were able to pull a thread, you know, looking at a digital trace here, a misspelling there, and stitch together that this was probably not an authentic account.
Now there's no constraints on the number of people, right?
So if these foreign adversaries can permeate the platforms, which is not simple to do but is probably possible, we could have hundreds or tens of thousands of accounts that most problematically we won't know are an AI.
We don't have a reliable way to detect AI, especially text-based AI at scale.
Digital images, we have watermarks that we can embed with some success to identify AI, but the really scary thing is we can't fight something that we can't see, and so I think that's the thing that keeps me up the most.
- But so a few minutes ago, you told us that misinformation is less impactful than the current narrative generally suggested it is.
So why then should we be particularly concerned that you might not be able to identify misinformation if actually it's a little bit less impactful than the general perception?
- Yeah, this is the sorta good news, bad news, is we're already so darn polarized that it's hard to make us more polarized.
To be honest, that's a big part of the answer to this question, you know?
Changing people's minds is really, really hard, and, you know, if I asked you all, with a show of hands, hey, have you ever seen something on social media that made you change your mind about something?
I mean, it's a rare phenomenon because we rarely change our minds.
It's against our basic psychology.
Now, I'm a huge fan of intellectual humility and I see it all over the place at this wonderful festival, but you all are not the typical voter, right?
The typical voter has one or two issues that they care about.
They get cues from elites.
They don't get to sort of calmly and deliberatively go through messages and decide whether they approve of them.
Very few things can stick, and so the good news, bad news is AI doesn't have a lot of influence or misinformation doesn't have a lot of influence, sorry, I should be clear, but neither do most political messaging campaigns.
- So I wanna switch to governance in one second, but before we do, maybe just, like, maybe just in 30 seconds each, what do you think the current narrative around this issue gets wrong?
Where's the disconnection between what you see as part of the governance infrastructure and as someone who really understands on an empirical basis how this actually functions in reality?
What is the general news cycle getting wrong on this issue?
Chris, you wanna start with that one?
- Sure, and I wanna take us to the good side of this a little bit if I may, Matt, because, you know, we could spend 45 minutes talking about the dangers and harms, but like so many other things with AI, it's, you know, you have one group of people saying, "This is an existential threat," and then you have another group of people who say, you know, "In five years, large language models are gonna run every corporation in the S&P 500."
Both perspectives are quite clearly wrong, and the interesting action is gonna be in the middle.
So in this space, how can AI be a counterweight?
We know it's gonna be a threat.
What can we do as researchers, practitioners, and policymakers to turn the tide?
And one thing we discovered in the Polarization Lab is that we can use an AI to do the type of conflict mediation that you all will learn about in a workshop this afternoon with Randy out there from Braver Angels.
People like him do, you know, wonderful work at bringing people together, and someone like Randy, our research shows, can reduce animosity between Republicans and Democrats in a matter of, you know, hours.
The problem is there aren't enough Randys, and we need to intervene at the scale of social media.
So how on earth do we do that?
Well, you know, chatbots can be bad.
We've already talked about that, but chatbots can also be good.
They can help surface our biases and teach us to articulate our opinions in ways that are more productive, and so when we did a large experiment where half of the people were assigned to have a chat assistant powered by GPT-3, one of the larger and more prominent large language models, those people had more productive conversations.
They were rated as having better argumentation.
We didn't force it on them.
It was elective.
You could choose to use this chat assistant tool, but it really seemed to move the needle, and months later, we implemented it with the social media company Nextdoor, which some of you may know as one of the more toxic platforms out there despite being the neighborly platform that it's meant to be, and that reduced toxic language by 15%, which is a huge effect in social science terms.
So there is some good here.
- Dara, what do you think the current dialogue's getting wrong?
- I think it's that deepfake videos are the big danger. I think it's that we can regulate our way out of the real concerns, and I think it's that this is a partisan issue.
We are seeing states across the country pass laws to try to stop or control the use of artificial intelligence in videos.
These are bipartisan, red states, blue states, what have you.
Politicians are terrified 'cause they don't want it to hurt them.
The only place where it's polarizing kind of is in Congress right now, but even there, I've had, you know, Republican congressmen be terrified about this and say, "We've gotta do something about it," but it's just not the big talking point there now.
I think that Congress is going to pass laws somehow restricting this, but it's not gonna happen until next year.
- Mm-hmm, so that's a good segue to governance.
Earlier you described all the different people who are focusing on this issue.
My takeaway is that you're responsible for 1/6 of election governance in this country.
- 10, yeah.
[laughing] - Roughly 16%.
So can you just clarify, just starting at kinda the high level 'cause I don't know how many people in the audience, including myself, are familiar with exactly what the Federal Election Commission can do and then also the limitations to its mandate.
So you alluded to it earlier, but can you give us a little more specificity on what you can do and then also some specifics on what can you not do?
- Yep, yep, so the Federal Election Commission was started in the '70s in the wake of Watergate. Back then, it was kind of a free-for-all in money in politics, and the Federal Election Campaign Act established the commission, made up of three Republicans and three Democrats.
The only way we can get anything done is if there are four votes, so it has to be bipartisan.
That protects against, you know, partisan attacks, et cetera.
The law regulates how much people can contribute.
It requires disclosure of all campaign donations above $200 and all spending by campaigns and PACs and other political actors, and it has a few other things that kind of touch across the board.
So one of the provisions prohibits, you know, certain ways of fundraising, but we really stick to this money in politics: how the money is raised, how the money is spent, and how it's disclosed.
There's one area in our statute about fraudulent misrepresentation, which is now relatively controversial, but it says in essence that one campaign, one politician, so if y'all are familiar with the band Pearl Jam, it's my favorite band.
I use all my examples around them.
You don't need to know the band [audience laughing] in order to understand this analogy, but let's say Eddie is running for Congress and Stone is running for Congress.
If Eddie puts out an ad that makes it appear like Stone has said something he didn't say, and there's a disclaimer on there that says, "Paid for by Stone for Congress," when it wasn't, it was paid for by Eddie for Congress, well, that violates our fraudulent misrepresentation laws.
If the ad says, "Paid for by Eddie," he's the one that paid for it.
Even though it has Stone saying something he didn't say and is cobbled together, not a problem under our laws.
We have been asked to amend the regulations to say that if somebody uses artificial intelligence in that ad to make it appear like Stone is saying something he didn't say, then that violates our laws.
Now, it already does.
So the fact that they used artificial intelligence or not, that doesn't matter.
The fact that somebody is putting paid for by the wrong candidate in and of itself violates it.
We have been brought into this because different organizations wanna try to get involved here.
So they petitioned us to change our rules to specifically say if artificial intelligence is used, that is covered here, when it already is.
So we're kind of in this conundrum on, you know, do we move forward with it?
Do we not?
I'll tell you that during this process, the best thing to come out of it are the comments we receive.
So when we do a rulemaking or a chance at a rulemaking, we seek comments from the public for 60 days.
We've received so many really thoughtful comments from academics and other experts just about different ways to regulate this, many ways that we can't.
We don't have the jurisdiction to do it.
We can only do what Congress lets us do, and Congress does not let us do this.
They are thinking about it.
There are some bills pending that would require a disclaimer on anything that has artificial intelligence or some things that have artificial intelligence, and if Congress passes those bills, then we would have the power, but again, during this process, getting all of that information in and recommendations has been so great, I think, for everybody involved in this community, including Congress, and to me, that has been the net benefit from this whole process.
Whether or not we move forward with doing a rulemaking, we'll see.
That should hopefully be happening soon, but again, there are six of us.
It's hard for us to do much sometimes, but this has been a benefit.
- So while we're talking about Pearl Jam, I think the lyrics, "I'm still alive," apply to the last couple of weeks in your life.
- That is true.
- "The New York Times" wrote a piece, some of you may have seen it, that focused on your willingness to vote not just with Democrats on the commission. It's a three-three commission, and as many of you might expect, bodies that we hope will develop laws or rules typically are not evenly numbered.
The Supreme Court doesn't have eight justices.
The FTC doesn't have six commission members.
The FEC has six, three Republicans, three Democrats, and that means it is set up for there to be challenges in initiating new rules.
The "New York Times" piece talked about your willingness to vote with Republicans.
Chris and I were talking about this the other day, talking about it in the context of polarization, and sort of the framing in the piece was somewhat negative, and we were saying, like, this looks like depolarizing.
This looks like someone who's willing to look at the law and vote with people when you view them to be, you view that to be the right side of the issue independent of their political party.
You alluded a few minutes ago to this issue, AI, being one where you think that the parties might actually be able to reach some agreement.
So I'm curious, given your thought process in the wake of the "New York Times" story and then also your optimism generally about this being a bipartisan issue, how do you see this unfolding in terms of the politics of the commission?
- Yeah, so, you know, with the commission, we come to it in different ways.
There's controversy at the commission on how far we should go, right?
I was a voting rights attorney.
I was a practitioner.
A lot of what I'm talking to you about are things that I know from my personal work experience that some of my colleagues don't have this experience in, and we have some history of personalities disagreeing with each other about how far we go.
So that causes problems for us, and in Congress, we have some people that we consider old-school Republicans when it comes to campaign finance.
They're deregulators.
Senator McConnell is one of them; he does not like any rules, you know?
It's let them play, let's see what happens, but that is drastically changing.
I think of Senator Hawley, who has put together legislation about regulating how outside money can be spent.
So because of those changes, and because Senator McConnell is stepping down next year from being a leader, I see an opening here for more regulation in campaign finance, which is fascinating on, you know, my end, because in reality, the way things work in campaign finance is that Democrats and Republicans are kinda in agreement, [laughing] you know, about not having these restrictive, cumbersome rules on top of us, and it just hasn't been the narrative for so long.
So this "New York Times" piece, you all should go read it.
It is thinly sourced and has some, I would say, misinformation in it, because it quotes a senator who doesn't have any knowledge of all the good we've done, and while my Republican colleagues and I may not agree on some cases, there are some really big things we have found agreement on.
For example, our agency had not requested any more money in the budget for over a decade.
So for an agency facing cost-of-living increases and an ever-growing volume of campaign funds to oversee, we are in the hole, and now we've been requesting $10 million more a year from Congress and fighting for it.
We have put together regulations on internet ads.
The internet's been in existence for a really long time, and the FEC did not have any regulations on internet ads, but we've done that because we talked together and we worked together on it.
We passed these great new procedures for these audits we used to have that used to burn so much staff time, so much campaign time, and now we've streamlined that process.
So we have done so much.
Even though we can't agree on a few things, we gotta let it lie, and my position has always been try to convince me, and when I'm done being convinced, I'll say, "Okay, I'm done," and, you know, that works.
- So I don't think I agree with the perspective in "The New York Times", but I also want, in fairness to that article, to maybe raise some of the arguments raised in it, which I think was that the series, the decisions where you have agreed with Republicans are decisions that will result in more influence of big money in campaigns.
So what's your response to that criticism?
- I think it absolutely does.
It absolutely does.
We are in a current environment where the Supreme Court has a certain perspective on the influence of money in elections and on how the First Amendment controls.
That's a reality.
There's nothing I can do about that.
I also have the law as Congress wrote it and the regulations as my previous generations of FEC commissioners have written it, and I have to follow all of those things.
There's also a level of protecting what exists.
I think any challenges in the courts that go to the Supreme Court are probably gonna go further; ever since Citizens United, we've seen the laws slowly just get torn apart.
The more we keep it away from the courts by properly regulating and by interpreting the law as written, the better, because we are very close to the dismantling of campaign contribution restrictions and disclosure limits.
So there is going to be a day, I think, where it's gonna be unlimited money across the board or where they're not gonna have to disclose the donors, and the longer we keep it away from the courts, the more likely it is that we can push that day off.
So I think, did that answer your question?
[Dara laughing] - I think so, yeah.
We'll see; you can ask the audience in about 10 minutes.
- You all can read that article quickly right now.
- Okay, so that's governance from the FEC's perspective.
Chris, I'm curious about your thoughts on how we govern this new phenomenon.
You've done thinking about how the empirical world translates into public policy.
You're a professor of public policy among other things, and then also as you mentioned with your work in Nextdoor, you've thought about governance not just from the government's perspective but how companies who have some power in influencing, structuring their products might be able to shape how AI is used for good relative to ill.
So what's your thought about how we look forward in terms of governance?
- Sure, you know, one thing, just to give Dara some praise for what is really a brave vote, I think, in a lot of ways: these stories don't get surfaced on social media, right?
You hear about all the bad, but when there's a genuine compromise, right, we don't hear about it, and that's because, you know, the structure of our platforms in many ways rewards engagement.
It rewards people saying the craziest thing they can think of and then their friends telling them it's crazy or their enemies telling them it's crazy, and suddenly, you know, you walk into what looks like a bar fight, right?
And so the really interesting phenomenon that we researchers see is this is quite out of step with reality.
So the story of, you know, Dara reaching across the aisle, and we heard it earlier this week with Mayor Lance Bottoms and this wonderful, if you were there, I think many of you were, this wonderful anecdote about the compromise that she had reached with her, a sort of Republican adversary in the state legislature, and on the way out of the meeting, he says, "Please don't tell anybody we had a good meeting because," and he was serious in her telling because he was worried that it would hurt him politically, right?
So we need to surface these sort of bipartisan leaders more effectively, and one way we can do that is to think about how technology can surface consensus instead of animosity, and so in the Polarization Lab, we developed a tool which looks at content that gets positive feedback from both Republicans and Democrats, and we built some tools, things like apps and bots that can try to surface this content, and we showed in some smaller studies that this could really increase bipartisanship, bipartisan attitudes, productive conversations, and working with Twitter several years ago, we even implemented it on platform and showed that it could reduce the spread of misinformation by 25%.
Again, a huge effect.
So we need to, I think the thing that I'm most worried about is a lack of imagination, you know?
We're so locked into the current moment, and we've accepted that social media is gonna be this way, right, and Congress is gonna be this way when I hear time and again stories like Dara's, of, you know, we did something good here, and we need to find ways to elevate that content.
I think that's the most important thing that platforms and policymakers and academics can work together to do.
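Chris's description of the Polarization Lab tool suggests a simple ranking idea: surface content that draws positive feedback from both parties at once rather than content that only one side engages with. Here is a minimal Python sketch of that idea; the post data and the scoring rule are both made up for illustration, and the lab's actual method is not described in the discussion.

```python
# Toy sketch of "surfacing consensus" ranking: content that gets positive
# feedback from BOTH parties scores higher than content only one side likes.
# Illustration only; not the Polarization Lab's actual algorithm.

def bipartisan_score(rep_likes: int, dem_likes: int) -> float:
    """Reward cross-party approval: the score is dominated by the smaller
    of the two counts, so one-sided engagement scores low."""
    return min(rep_likes, dem_likes) + 0.1 * (rep_likes + dem_likes)

posts = [
    {"id": "a", "rep_likes": 900, "dem_likes": 10},   # one-sided hit
    {"id": "b", "rep_likes": 300, "dem_likes": 280},  # genuine consensus
    {"id": "c", "rep_likes": 5, "dem_likes": 950},    # one-sided hit
]

ranked = sorted(
    posts,
    key=lambda p: bipartisan_score(p["rep_likes"], p["dem_likes"]),
    reverse=True,
)
print([p["id"] for p in ranked])  # the consensus post "b" ranks first
```

The key design choice is using the minimum of the two engagement counts, so a post cannot climb the ranking on one party's enthusiasm alone.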
- So you have tools that you're, yes, applause.
You have tools that have a demonstrated track record of success.
I mean, we talked early on in the panel how hard it is to develop empirical evidence of success and actually moving the needle.
You have developed tools, thought about tools, uncovered tools that do that, so why do we not see that across all tech platforms?
So when you go to a platform and you say, "I have an idea about how you can implement this tool that's going to result in improving X, Y, Z on your platform," why is the answer not yes?
- That is a $100 million question, Matt.
Literally, right?
[audience laughing] Because I'll tell you a story.
I won't name names.
I had a private conversation with the CEO of a major tech company, and I was telling this person about this new technology, and they said, "That's great."
And I said, "Well, would you think about doing it?
You know, might save quite a bit of money on content moderation."
You know, getting rid of misinformation on the platform costs a billion dollars to a place like Facebook, at least by my last analysis, and the CEO said, "You know, yeah, that's great, but my job as CEO is to go sell something new to the board.
Saving a bit of money is not something that gets me, you know, the celebrity and the," so I didn't wanna hear that, but I'm glad that I heard it because it reminded me that all of these solutions, right, I can give you some pie-in-the-sky kumbaya solutions that everyone in this room might love, right, but again, we are unusual.
The people who seek consensus, who wanna reach out beyond their own sort of corner of social media or life in general are rare, right?
Most people are content to sit in what's familiar and comfortable.
So we have to find ways to make this engaging, and one way we can do this is consensus, though, by the way, I don't think we should always promote consensus for everything, right?
I mean, there are issues on which we really disagree in meaningful, important ways.
You know, climate change comes to mind, race.
Many people don't want to move their positions on that issue, and maybe they shouldn't, but we don't have to think about consensus building as a sort of eat-your-broccoli moment.
You can also think of it as, what am I missing out on, right?
And, like, fear of missing out is another human instinct that we can perhaps tap, and we can do that by, you know, if I asked you, "Wouldn't you like to see the piece, the message today," let's say it's the "New York Times", or maybe not.
Let's go "Washington Post" given the previous.
[audience laughing] - I like them best right now.
- Or "Wall Street Journal".
Right, right, exactly, yeah.
You know, wouldn't you like to know what other people have found interesting and important and useful to them that you don't yet know about, right?
And especially if that happens across social divides, right?
Like old people, young people, men, women, any kind of, internationally, right?
If you could optimize the information you're receiving, it might actually be useful to you and efficient, doesn't even have to just be politics.
It could be whatever you're into, fishing, video games, whatever it is, right?
You could design social media to expose you to that type of content.
Those are the type of solutions that I think actually can scale and we can actually and companies can actually make money with.
- So we're about five minutes from questions.
So please get them ready.
So let's transition from this external conversation, AI is out in the world and it's coming to get us, to what we can do as members of our communities, members of our society to digest information in more healthy ways.
I find some of these conversations frustrating in that I think they put us in a position as if we are, all my analogies are not Pearl Jam, they're surfing related, as if we are waiting for a wave to come and it will crash on top of us as opposed to taking steps that we can take to surf the wave.
So Chris, maybe we can start with you for this one.
What's your guidance to this group of members of this community, potential voters, citizens, about how they should navigate an AI world?
- Yeah, well, first, I'm a terrible surfer, and I definitely, [audience laughing] don't take advice on surfing from me.
I think the most important thing, you know, public ignorance about AI is a big, big problem right now.
You know, I think people like us think about AI constantly, but if you look at public opinion data, you know, well over half of the country has never used a tool like ChatGPT, and I suspect many people in this room have never used a tool like ChatGPT, and there's obviously great good.
We've been talking about that.
There's great danger, but we won't be able to identify either unless people become more familiar with it.
So I think there's a number of great resources out there to sort of begin to take a step into AI.
There's a wonderful new book which I'd highly recommend, which is called "AI Snake Oil", and it tries to throw some cold water on a lot of the hype around AI 'cause AI is being over-hyped for sure, but it also tries to get us to that productive plateau.
You know, how can you use AI in your daily life in a way that is pro-social, that is useful, and in this space in particular, elections, I'm aware of several efforts already around voter guides.
Think of things like Dara was talking about earlier, where you can have a conversation with an AI and say, you know, "Here are the issues I care about.
Here's where I live.
You know, who should I vote for?"
And what's really neat about AI is it's conversational, right?
It can go back and forth and it can learn your preferences the more that you talk to it.
Really briefly I also want to just say there's even evidence that this can fight conspiracy theories.
So a new study from MIT shows that about a 20-minute conversation between someone who believes in a conspiracy theory, like, say, Pizzagate.
- Can you just explain Pizzagate?
- Pizzagate is the misinformation that there's a, well, I don't know if I can go there.
[audience laughing] It's that Democrats are doing a bunch of bad stuff in pizza restaurants.
Let's leave it at that.
- Yeah, and potentially resulted in an arrest in 2016.
- Comet Ping Pong.
- Exactly, and we should be serious 'cause it really did have serious, you know, manifestations.
A 20-minute conversation between an AI and someone who believes in that conspiracy drops belief in the conspiracy by 20 to 30%.
Why?
Because these things can patiently explain things to us, learn our preferences, learn our value system, and start to articulate messages in a way that's familiar and persuasive to us.
So this can really be productive provided that it's channeled in pro-social ways like learning about elections, or getting people to vote I think is even a better example because as soon as we start tailoring people's votes, we go to a scary place, but getting people to vote I think we can all agree is a good thing.
- Mm-hmm, so we've got about 15 minutes left.
So please feel free to make your way to the mics if you'd like to ask a question.
Dara, in the next couple of months, how should this audience prepare themselves for our first AI election?
- You know, it's hard to say this because we're seeing, you know, the downfall of, like, local news, but, you know, one of the most important things is finding your trusted resources, which, again, is still hard because what is a trusted resource at this point?
It's been a lot of work training election administrators on how to become the trusted resource that voters can look to when there's so much going on, but again, that's not about the substance. You have to determine what your trusted resources are gonna be, look at them now, and talk to other people you know about your trusted resources, 'cause we can't keep playing whack-a-mole with this kind of thing, where you just try to get rid of it, you know?
I used to spend a lotta time sending takedown notices to Twitter and Facebook and all the things.
You just can't do that.
It's not worth it anymore, and you've gotta fight the bad information with the good information.
- So let's talk for one more second about that.
'cause I think it's easy to say, like, "Consult your trusted sources," but on this panel, we've discussed people, I think, think "The New York Times", very trustworthy, social media, very untrustworthy.
That may be generally true, but we've talked in this panel about, in your view, thinly sourced "New York Times" articles or you've talked about some examples of social media actually helping to correct misinformation.
How do we evaluate trusted and untrusted?
[audience laughing] Lots of answers in the audience.
- I could ask ChatGPT right now.
- Yeah, Chris, what do you think?
- It's a really good question, Matt, I mean, particularly given what I mentioned earlier about the undetectable nature of so much AI, right?
I mean, maybe some of you have been following the news recently of Google, for example, telling people to eat rocks or make pizza with glue, right?
This is AI summarizing information from the internet, and so the first thing you see when you Google pizza recipe might be a pizza recipe with glue in it, right?
Obviously quite dangerous and problematic.
Now, that's probably a small amount of the overall search traffic that saw that message, but it's a proof of concept of what Matt's talking about, right?
As we see the increase in AI-generated content, particularly when it's undetectable, not only will we struggle to see what is AI generated and what is not, but future AI models will also struggle to see what's AI generated and what's not, creating potentially a sort of self-fulfilling prophecy of misinformation or, more likely, slightly false information, delivered almost undetectably and in a confident manner.
That's the really scary thing.
So I think, yeah, I mean, you know, these well-tested sources, you know, we know "The New York Times" doesn't always get it right, but hopefully if we read "The New York Times", "The Wall Street Journal" and "The Washington Post", they'll mostly get it right, and I think that's all we can hope to do at this particular moment personally.
- Dara, anything on trusted versus- - No, I wish I knew.
- All right, go ahead please.
- Yeah, Dara, you referenced earlier the erosion of campaign finance rules that are already going on.
I know that deepfakes and misinformation is obviously kind of the first and foremost in terms of worries with AI, but I'm wondering if there are any sort of specific instances or worries that we have seen where AI can be used to skirt campaign finance laws that are already in place, and when I ask that, I'm thinking more not so much campaigns themselves but super PACs, AIs, or not AIs, independent expenditures, dark money in politics and if there's a way where those entities are using AI to skirt the rules that are already in place in terms of donor reporting, contribution reporting, things like that.
- Yep, absolutely.
It's a great question, and I'm pretty sure we just saw this first used in a New York local election involving matching funds, where if a candidate raises a certain amount of money, then they get matching funds from the city, and those matching funds only match donations from people within the city. I believe they used some kind of artificial intelligence system to game it and make it appear like those funds were coming from within the city, and they got a lot of money out of it, so I think things like that.
I think that when it comes to circumventing some of the rules, it can probably be done easily with different models and different ways of finding real addresses and pairing them with people, but I think the benefit of that is pretty minimal at this point.
So I don't think people are going to be doing it as much.
I just don't see a big benefit to the campaigns or the super PACs, and the super PACs, they don't, you know, so many of them, they're really just trying to take in, you know, large amounts of money from one person or to then work with their candidate, and I don't think they need this in order to do that.
- Chris, thoughts on this one?
- Oh, boy.
I think you're the expert here.
I wanna stay in my lane here.
[audience laughing] - Yeah, please.
- Yeah, hi.
Thank you all for being here.
This subject's so powerful.
I was one of the people that was fighting against the trolls.
I even wrote an article about it called "Troll Training" on Medium and was able to recognize through syntax and all of the things, and I would write about how to tell the trolls by, you know, are you getting paid 5 cents a comment or 25 cents a comment and teach people how to recognize that.
So my question to you is, how big a part does education play in learning how to recognize trolls, and how are the social media platforms getting together to open up the potential for education to recognize AI, with the regulations coming down the pipeline that say, "Hey, this is AI generated"?
Like, what's happening now, where it says AI generated, but how is that happening on a grander scale, essentially unifying all of the social media platforms?
- You wanna start, Chris?
- Yeah, it's a great question.
So if you're not familiar, a troll is just sort of an extremist who sort of searches out antagonism on social media, and, you know, there's certainly synthetic trolling going on, you know, these sort of, we sometimes use the term astroturfing to describe these sort of seemingly grassroots campaigns that are actually people being paid, right?
But a theme in this conversation has also been that many of these issues predate AI, and we should think about the complex causes, not just, you know, AI is responsible for everything, and I'll share a story from some of our research about a troll that I think will help shed a little bit of light on the complexity here.
So we did this massive study where we had people follow a bot who would retweet messages from the opposing political party.
So if you're a Republican, you saw a Democrat's messages for a while.
If you're a Democrat, you saw a Republican's messages for about a month, and we talked to these people for hours.
We studied them in many different ways.
We tracked their online behavior, and one of the most interesting people in the whole study was this guy named Ray, and when we met him in person, nicest guy you ever meet.
I mean, he could be sitting right here, you know, nodding along to, you know, kumbaya, right?
But you find him online and he is one of the most vicious trolls on the internet, I mean, nightly Photoshopping meticulously five or six memes negatively depicting Michelle Obama, Barack Obama and other places, and the question arises, what is this Dr. Jekyll and Mr. Hyde kind of behavior?
And when we really got to know him and study him in great detail, we discovered the real answer is social isolation.
You know, this is a very lonely person who was suffering in his life.
He did not have, in his case, conservative friends.
He lived in a very liberal place.
He worked in a liberal profession, and social media had become sort of, you know, this refuge for him to gain a kind of micro-celebrity, however deleterious for the rest of us, yeah?
So I think that part of the solution, yes, is identifying the synthetic content, but part of the solution is a theme that again has come up throughout this conference.
You know, we're talking about mental health two days ago, right, loneliness, social isolation, social inequality.
We can't just intervene at the platform level.
I would love to say let's just flip a switch inside Facebook and we'll turn off the trolls, right?
But unfortunately we have a supply side problem.
We have a lot of angry, lonely people who are suffering, and no amount of social engineering on Facebook is gonna fix that problem.
- Does the FEC have a role to play in digital literacy issues?
Like, are there things?
There are other agencies that can put out best practices guides or research reports or sort of study an ecosystem and come up with general recommendations.
As you were describing earlier, the FEC's mandate is relatively narrow.
Can you do these sorts of things when digital literacy crosses over into the election context?
- Theoretically we could but we won't is really it.
Again, we would really need four votes for something to come out from the agency, and we just don't have them, you know; we are a small agency without many resources, but individual commissioners sometimes do.
So I can do these kinds of things.
Some of my colleagues can do these kinds of things, but the agency could not.
- Hi, real quick question.
I just wanna say I read the article.
I liked you a lot after reading it, so it didn't, [audience laughing] First Amendment guy with a First Amendment question.
We've seen at the state level some attempts to regulate the use of deepfakes, Minnesota I'm thinking of.
There's a 90-day window around the election where you can't intentionally send something like that around.
Is that on the commission's radar, the First Amendment concerns here, and I guess also the academy's radar, and thanks for your time.
- Can you actually, like, say two seconds more on what would the First Amendment concerns be?
- Yeah, the First Amendment concerns really are throwing the baby out with the bathwater, right, allowing the government the power to determine that something has been sent around maliciously and thus shutting down political speech, which is at the apex of the First Amendment's protection.
- Yeah, this is a thing I'm currently super interested in.
So actually, well, can you stay at the mic for just one second 'cause I just wanna make sure I get this right 'cause I think this is an important thing about governance.
There are a lot of proposals now that focus on different remedies for deceptive political content, and I think if there was a focus on deceptive political content, then the regulation is content based, right?
- Well, yeah, and Donald Trump's perception of what deceptive political content is is gonna be much different than Joe Biden's, and whoever's holding the hammer is gonna swing it.
- Right, so there are those concerns, which I think are immense, but just as a matter of- - [Attendee] Just as a matter of pure law.
- Just as a matter of the current state of the law, once you regulate the content specifically, it is subject to heightened review in court in terms of First Amendment analysis, and that means that it is much more likely to be struck down than a content-neutral distinction, and so these laws that are focused on deceptive AI content, deceptive political content, because it's content based, the First Amendment concerns are likely more acute.
Is that right just as a statement of law?
- Yes, sir.
- Okay.
So how do you, to go back to your question, are these on your radar?
How do you think of them?
- They are 100% on my radar, and I've been following the way that different states have been doing it.
I think many states are trying, but state government is hard.
People, you know, sometimes do this job for $20,000, for two months.
They don't have very professional staff, so they're trying.
Some are really trying to work it through and trying to find a constitutional way and looking at it different ways.
I think that having it be content based with the 90-day window is a good stopgap, but the content-based problem is still there.
One of my thoughts has been that if the ad is saying something about an opponent, then that could fall within something like the way Virginia does ads, where candidates stand by their ad, but that's just one part of it.
So I think that's a concern.
The other concerns, well, my broader concern is something I said before, is that if something is challenged, it could take the whole system down.
So if we talk about disclaimers, one of the ideas out there is that any ad that has artificial intelligence, let's say just artificial intelligence used has to have a disclaimer that says, "This ad uses artificial intelligence."
The disclaimer structure, the "Paid for by," has been upheld for decades, but one of the problems we have right now is digital ads.
So digital ads sometimes are, like, five seconds.
If you have to do a spoken disclaimer, which you don't right now for a digital ad, that would make it a nine-second ad.
So you have five seconds of content, four seconds of a disclaimer.
The court is not going to look at that very well, and my concern is that somebody is gonna do one of these laws that's gonna say, even for digital ads, that you have to have a four-second disclaimer saying, "This is paid for," I mean, "This uses artificial intelligence."
That goes up to the Supreme Court and it strikes down all disclaimers.
So that's something I've been trying to encourage as people are looking at this, you know?
I've never understood why the end of the ad says, "Paid for by."
I think it should be throughout, but I wasn't around in the '70s.
I don't know why that exists, but I think the disclaimer should be on throughout so everybody knows while they're watching or if they take clips.
So there are some creative ways people are looking at it.
I think the way it's happening in states is the way it should happen.
Congress is certainly looking at it and is aware of it, and I think we're, again, Congress I don't think it's gonna do anything this year.
Next year is a different story.
- [Attendee] Thank you.
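Dara's back-of-the-envelope point about spoken disclaimers on short digital ads can be made concrete with a quick sketch; the durations are the hypothetical figures from her example (a five-second ad plus a four-second disclaimer), not anything mandated by current rules.

```python
# Back-of-the-envelope check on the disclaimer-burden argument: a fixed-length
# spoken disclaimer consumes a huge fraction of a short digital ad but a small
# fraction of a standard broadcast spot. Figures are illustrative only.

def disclaimer_share(content_seconds: float, disclaimer_seconds: float) -> float:
    """Fraction of total ad time consumed by the disclaimer."""
    total = content_seconds + disclaimer_seconds
    return disclaimer_seconds / total

print(f"{disclaimer_share(5, 4):.0%}")   # 4 of 9 seconds of a short digital ad
print(f"{disclaimer_share(26, 4):.1%}")  # same disclaimer in a 30-second spot
```

The asymmetry is the point: the same four-second requirement is a minor footnote in a 30-second ad but nearly half of a nine-second one, which is why a court might view a blanket rule skeptically.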
- So we've got about 90 seconds left.
So why don't we take- - Sorry.
[laughing] - No, no, so let's take both questions, and then we can do some rapid answers.
Sorry, let's start over here.
- My question is on voter education, heard some great things about using these tools to help.
I had a Waffle House conversation with a waitress that said, "I've never voted, I wanna vote, but I go on the internet and it's daunting to try to research all this stuff."
My question is, at the institutional level, is there any hope for League of Women Voters, these kinds of organizations at scale helping develop with these tools really effective tools for voters to use to make their own judgments about which way to go?
- I'm interested in linguistic psychology, and I've noticed the word lie has not been used once, and it's not being used.
Is there some reason that people have decided to use false information and words instead of something is a lie?
- One of the fascinating things we've been able to do in research lately is link the digital behavior to the offline behavior, and one shocking revelation that has come up is that many of the people sharing misinformation know that it's misinformation.
So, you know, we like to think these are uneducated people, but it's a political weapon.
So I think to your point, like, lying has become weaponized for politics in ways that are deeply problematic.
- And Dara, let's go to the first question about how we can use these tools to provide better resources for voters.
- Yeah, it's hard 'cause you need to have somebody that's willing to spend the money to reach that voter and to tell them where they can get this information, and a lot of this is grassroots in the Waffle House.
You were talking to people like that.
Part of the problem with money in politics right now is that the parties are very weak, and it used to be the parties that would get that information out and really do that work, but because they have limits because of a lot of reasons, that money is now with the super PACs who aren't usually spending that kind of money.
Locally, there are groups, you know, in local that you don't hear about that are doing this on-the-ground work, that are going and knocking on doors, that are going to the Waffle House and engaging these conversations.
Again, if you're not part of the political community that they're trying to reach, you're not gonna hear about it.
Sometimes that's good.
It means they're using their resources properly, sometimes using AI to target their messaging, but just because you don't know it's happening doesn't mean it's not happening.
- We hope you enjoyed this program.
I'm Kirk Swenson, and thank you for joining us for this year's Asheville Ideas Fest.
[gentle music]