
AI, Deepfakes, and Misinformation with American Sunlight Project
1/31/2025 | 26m 46s
This week, Bonnie Erbé talks to Nina Jankowicz of the American Sunlight Project to discuss the rise of misinformation and how AI and deepfakes play a role.
Funding for “To the Contrary,” provided by: This week, on “To the Contrary.” They target women who are prominent, so celebrities, members of Congress, officials.
But I also have to stress, they target normal women, everyday, ordinary women, as well.
Hello, I'm Bonnie Erbé.
Welcome to “To the Contrary,” our discussion of news and social trends from diverse perspectives.
Misinformation poses significant challenges in today's digital age, affecting various aspects of society, including voting and the democratic process.
It has an impact across different demographics, notably women and communities of color, often through sophisticated methods such as AI-generated deepfakes.
The American Sunlight Project was established to tackle these issues.
This organization focuses on countering misinformation through research and transparency initiatives.
Leading this effort is Nina Jankowicz, who co-founded the project and serves as its CEO.
Jankowicz has a background in disinformation.
She led the Disinformation Governance Board at the Department of Homeland Security and is an author known for her work on disinformation and online harassment.
Welcome to you, Nina.
Thank you for being here.
Thanks for having me.
So let's start out with a brief explanation, including for me, please, about what AI really is.
What capabilities does it have that we didn't have before AI was around, and how does that lead to the production of deepfakes for the internet?
Yeah.
So artificial intelligence takes on a lot of different forms, but I think the form that most people are interested in today is what we see with large language models like ChatGPT and the deepfakes that you mentioned.
So let's take each of them in turn.
Large language models are essentially large sets of data that have been combed through by a computer that has learned all it can about various different topics, so that when you ask it, what is George Washington's birthday?
It has read that many, many, many times.
It can look it up, it has learned it, and it can spit it back out to you.
Similarly, when we're talking about deepfakes, these computer models have been trained again, so fed with and combed through many, many images and videos.
Deepfakes can also be in audio format.
So, they can be recordings of voices and things like that.
Making it sound like your voice and using that to get past, you know, a voice-coded site?
Absolutely.
Yes, yes.
We've actually seen a lot of examples of individuals being scammed with audio deepfakes because it sounds like their grandmother or a relative is calling them in distress, asking them to wire the money, for example.
But we also see photorealistic deepfakes that look like still images or videos.
And these can be used for fun purposes.
You know, we've all seen celebrities dancing or doing things that they might not otherwise do as kind of a joke, but also for nefarious purposes.
In fact, the vast majority of deepfakes on the internet today are nonconsensual, image-based abuse.
So, often pornographic content, primarily of women, that they have not consented to and are subjected to because of the accessibility and widespread nature of this technology.
Tell me some of the most egregious efforts in that department.
And where do they show up?
And who are the women who are being deepfaked, as it were?
Yeah.
So there are millions and millions of these videos online.
Primarily, you know, they target women who are prominent.
So celebrities, members of Congress, officials.
But I also have to stress, they target normal women, everyday, ordinary women, as well, because now for a few dollars, even less, you can download an app that will take one single picture of someone and nudify them.
So we're seeing girls in their pre-teens and middle schools being targeted with this behavior by their classmates.
We're seeing school teachers being targeted with it.
And of course, the infamous Taylor Swift case around this time last year in which deepfake pornography of Taylor Swift, the very famous singer, was trending on X, the social media platform, brought a lot of attention to this plight.
Unfortunately, in the United States, it is something for which there is no federal-level criminal penalty.
And in many states across the country, there isn't even a civil penalty.
So you find yourself with no recourse if you're a victim of deepfake image-based abuse.
And there are a couple of laws percolating in Congress.
One almost made it into last year's continuing resolution, but was removed from that congressional funding bill at the very last minute and hopefully will be reintroduced soon by Senators Ted Cruz and Amy Klobuchar, the co-sponsors of the bill.
So tell me what you're doing to fight these kinds of uses of AI.
Well, one of the things that we're doing is talking about personal experiences, but also putting data behind that.
So I had the unfortunate experience of being depicted in deepfake sexual abuse myself, as a result of conspiracy theories and abuse that were spread about me thanks to my role in the government.
When I found that out, I thought it was important for me to talk about that experience, because I think most Americans will agree, whether they agree with me politically or not, whether they think my role in the Biden administration, which was widely lied about, was a good thing or not, that I don't deserve as a normal person to be subjected to sexual abuse for having taken that job.
So I wrote about that, and I speak about that fairly frequently.
But the other thing that we've done at the American Sunlight Project is try to put numbers to this phenomenon.
And one thing that we recently found out was that 1 in 6 women members of Congress is depicted in deepfake image-based abuse, and only one man in the entire body of Congress is depicted.
So it's a pretty stark comparison.
And it goes across political parties.
It doesn't matter if you're a Republican or a Democrat.
If you are a woman in public office in the United States, you are likely to be targeted.
If you're a man, you are much, much less likely.
And so what this says to me is that people are using deepfakes to attempt to put women in their place, to say: if you speak out, if you engage in public life, if you make me mad in some way, there's this new tool that I have that can humiliate you and can denigrate you.
And we use that research to advocate for solutions, because women in many states across the country do not even have civil recourse against the folks who have put this abuse on the internet.
And members of Congress find themselves wondering what to do about it as well.
And we've spoken with many of their offices, and I think now there is an understanding that this is something that this Congress, despite all of its, you know, animosity, is hopefully going to address.
What are members of Congress doing to try to tamp down the bad publicity or negative reaction from voters they may get who think it's real?
Yeah.
So the interesting thing about deepfake pornography is that many of these videos appear on sites that are clearly labeled as deepfake, which, again, speaks to the motivation of the people that are putting them up there.
You know, perhaps they get some sort of cheap thrill out of it.
But the point is to denigrate the brand, the trustworthiness, of the individuals who are depicted.
In Taylor Swift's case, I think it was pretty clear that, you know, the deepfakes themselves were not very good.
Let's say the quality wasn't very good, but I think there were still some people who thought perhaps they were real because they were percolating on X.
But members of Congress have, I think, you know, a privileged position where they are able to have their staff or Capitol Police or others reach out to these sites and say, you know, this is a copyright issue.
I did not consent to my image being on here.
And, you know, as a member of Congress, I pack a lot of punch.
So, please take it down.
Whereas if you're a normal woman, it's much, much harder to get these sites to take you seriously.
So I think that's why the members themselves are thinking about their constituents and about the fact that, you know, they are in a privileged position, and there are a couple of bills percolating.
I mentioned one sponsored by Senators Cruz and Klobuchar.
It's called the Take It Down Act.
That does two things for victims.
It allows them to pursue criminal penalties if deepfakes of them are spread.
So it would bring in, you know, the federal government in making sure justice is served.
But it also encourages social media platforms to remove content that is non-consensual or deepfake sexual abuse within a certain time period of being notified about it.
So increasing the responsibility of the platforms to police for this content, which I think is incredibly important.
They police for terrorist content, they police for child sexual abuse material, and certainly they should be policing for other image-based sexual abuse that affects adults as well.
There's another bill that has been co-sponsored by Congresswoman Alexandria Ocasio-Cortez, which introduces a civil penalty and would make that a federal law as well.
So women in all 50 states would be covered by these two pieces of federal legislation in order to allow them a pathway to justice.
Now, tell me: why can't a woman just call one of these sites and say, I don't know, you know, I'm fabulously wealthy, I own law firms, and we're going to sue the living bejesus out of you if you don't take it down?
They wouldn't pay attention to that?
Some of them might.
And some women have had luck with copyright related claims in particular.
So you own your image.
And you can say, I own this image and it was used without my consent.
And if you don't take it down, then I will sue you under copyright.
But many of these sites aren't based in the United States. Many of them don't have Contact Us forms.
They're just kind of rogue, right?
They don't answer for the harm that they are perpetrating in the world.
And I'm not sure what Taylor Swift's lawyer or her team have been doing.
But certainly, you know, even in her case and the case of many celebrities, part of the problem is that these deepfakes proliferate across the internet.
And so if it starts on one site, it can be copied to other sites.
And it's like you're playing a game of whack-a-troll.
It's very, very difficult to keep ahead of them.
And it, in fact, requires kind of constant vigilance.
And that's why I think with a penalty involved, a criminal or a civil penalty where money or freedom might be on the line for folks, they'll think twice before uploading.
And how much of this is going on on the dark web?
And please explain for our viewers how one accesses the dark web.
The dark web is a kind of alternative internet.
It's not open.
You have to know where you're going in order to access it.
Often there are access codes that you need to use to be able to get in.
I'm sure there is a degree of this content that is being traded on the dark web, but the sad fact is that most of it is being traded, posted, created, tips shared for how to make them more lifelike on the open web.
And it is searchable.
In fact, when I found out that I was depicted in deepfake image-based sexual abuse myself, I found out because of a Google alert.
So it is being indexed and searchable.
So if someone were to search my name plus deepfake pornography, this would easily come up.
And that is something that tech companies are absolutely responsible for.
And while it's kind of funny to think that one morning I opened my email and, you know, something that normally alerts me when I've been quoted in the press alerted me to this crazy material about me, the tech companies play a role in indexing this stuff and surfacing it for the people who are looking for it, and in providing users, even, you know, underage users, with apps that can nudify people, and taking money for that.
The tech companies' role in this chain of operations needs to be further investigated.
And that's something I hope to see.
And how likely is it that bill will be approved by Congress?
It's actually pretty likely this legislative session.
It's not very often you see the likes of Ted Cruz and Amy Klobuchar and many other kinds of cross-aisle partnerships happening like this.
The bill had quite a lot of support and passed the Senate by voice vote at the end of the last legislative session.
Similarly, the Defiance Act sponsored by Congresswoman AOC also had a lot of support.
So I think it's pretty likely that we see this happen because, again, as I said before, this is not a partisan issue.
It is one that is attempting to keep women out of public life.
And both Republican and Democratic congresspeople recognize that it's something that they can easily improve and deliver to their constituents.
So I'm pretty optimistic about it.
Not much to be optimistic about right now, except for that.
And let's talk about the countries where these companies are located, and the governments, even, that are involved in these deepfake and misinformation campaigns.
I'm thinking China, Russia.
Certainly the deepfake sites, the apps themselves, are located all over the world, sometimes incorporated in places where there is less financial oversight or transparency.
But we see, and I think it's important to note that this isn't just an issue that, you know, schoolboys or internet trolls are using.
It's something that our adversaries are using as well.
We have seen China, Russia and Iran use what we call gendered disinformation.
So disinformation that is false or misleading, that has sex-based tropes included within it, and that is meant to demean and undermine women.
We've seen them use that for decades.
And now with the accessibility of this technology, it makes it very easy to create these videos, in a way that there never was before.
You know, the term kompromat, compromising material, is what Russia used to have to use, where it got, you know, dignitaries or officials into compromising situations and then released the evidence of it.
Now they don't even need to engineer that.
They can just create this stuff from scratch and undermine female officials, or perhaps even gay officials, around the world.
And we see a lot of that.
You know, we've done some preliminary research about NATO leaders and the way that they're being targeted.
We've seen Finnish Prime Minister Sanna Marin, former Chancellor Angela Merkel, and former New Zealand Prime Minister Jacinda Ardern all targeted with deepfakes similar to what we've seen with members of Congress here.
So it's certainly something that our adversaries are looking at.
But also it's something that, unfortunately, the men we sit next to on the bus and at the bar are engaging in as well.
Now, how influential do you believe that deepfakes were in the 2024 elections?
They certainly were more evident than they have been ever before because the accessibility of this technology has become so much greater in the past four years.
You can create a deepfake with an app on your phone or on your computer.
You can have access to ChatGPT and other open, large language models that make all of this stuff much easier to create.
What we saw primarily in the U.S. election was the use of deepfakes in parody.
So, of course, Elon Musk shared a very famous audio deepfake which was overlaid over Kamala Harris's first campaign ad.
We saw a couple of other uses, humorous uses of deepfakes that I think most people knew were deepfakes.
The more troubling example from last election cycle is the one that happened during the New Hampshire primary, in which then-candidate Joe Biden, you know, running for reelection, appeared to be calling New Hampshire voters, saying: stay home, don't go vote in the primary, it'll just give the votes to the Republicans.
And this was found to be an AI-generated robocall.
That sort of deepfake, I think, is much, much, much more scary than the use of deepfakes for parody.
And it's something that I think we're going to see more of as the audio deepfakes in particular get better.
Now, was there deepfake pornography created of Vice President Harris?
Yes, 100%.
There was, but we didn't see any of that getting to a fever pitch.
None of it was taken at face value.
We see a lot of memeification and sexualization of the former vice president that, frankly, has no place in politics.
But I don't think that it was the deciding factor in this election.
What do you think about deepfakes in terms of changing culture at all?
Do you think, for example, since you said the sexual ones, the pornographic ones, mainly target women, is that demeaning women's status in American society?
In a world where perhaps men feel that they are no longer in the position of power that they used to be, I think this is a way for some men of trying to exact that power and rebalance the scales in a way that's extraordinarily unfair.
You know, one thing we haven't touched on yet is that the models with which deepfakes are created are primarily trained on women's bodies.
So you might see deepfakes of men sometimes, but they're much less convincing, let's say, than the deepfakes of women.
The truth is that, yeah, I think this is an attempt to level the playing field and to say to women, again, any women who are voicing their opinions, who are saying things perhaps that men don't like, on both sides of the political spectrum, that this is our way of kind of taking women down a peg.
And until we have laws introduced to stop that, it's going to cause real harm.
And I think that's important to note as well.
There's a lot of research that shows that the psychological response to seeing yourself depicted in a non-consensual sexual situation like this, and many of them are quite violent as well, is similar to actual physical sexual abuse.
So let's not cheapen it and say that it's just, you know, mean words online.
It's much more than that.
Is what you're saying about women and deepfakes true of people of color as well?
When we were able to do our research around members of Congress, we just didn't have a statistically significant enough sample to be able to see the trends for people of color.
But what I can tell you is that with other forms of online abuse, the more compounded identities you have, if you have an intersectional identity, let's say you are a Black lesbian.
You're more likely to be targeted with abuse.
Where do you see this heading?
Tell me how deepfakes and AI are going to get more sophisticated over the next couple of years, and what the ultimate outcome might be.
We're at a moment of inflection where I think there is a lot of concern about artificial intelligence around the world.
We saw the Biden administration sign an executive order that was quite extensive, putting some guardrails on artificial intelligence.
It's unclear what the Trump administration is going to do in this space, or if they're just going to remove any modicum of guardrails that existed in order to pump up innovation.
But as we wait for these guardrails to be put in place, you know, the EU is regulating around this, other entities are regulating around it.
But I'm not sure we're regulating fast enough because, in the meantime, authors, journalists, you know, artists, poets are all having their work hoovered up by these models, which are, I would argue, violating copyright.
They are creating, you know, new images, new works that are derivative of these former works, great works of art, in many cases, with no credit at all.
And in the meantime, the models are also hoovering up all of the biases in our society, whether that's misogyny, racism, homophobia, you name it.
They're taking the worst of the internet, and often the open internet, right?
So the things that aren't copyrighted: chat forums, Reddit, you name it.
They're putting that into the AI, and the result is that the outputs the AI gives you are misogynistic, are racist, are homophobic.
So I think it's incredibly important for this administration coming in to think about putting guardrails on AI, again, to protect intellectual property, but also to protect people, because this is the future of the internet in many cases.
Many are arguing that it's going to be the future of work, and we want it to reflect the best of our society, not the worst.
Is that a tip to our viewers, by the way, that if you copyright your video, which really just means putting a copyright symbol on it, that's a stronger way to protect your videos and photographs online?
Unfortunately, even copyrighted material has been fed into large AI models.
I found out that an article that I wrote for the New York Times several years ago, about eight years ago, was fed into an AI model, and there are a couple of lawsuits percolating from authors who have had their entire books fed into AI models, and now they're suing the AI companies for having done that without their permission, and certainly without having paid them for their knowledge and their work.
So I think that remains to be seen.
Right now, I would say that unless you are specifically opting out of your data being fed into AI models on platforms like Facebook, Instagram, and whatever other social platforms you may be on, you should just assume that your work, your words, your images, your videos are being fed into these models in order to make them smarter.
In fact, I know that on X, if you are on X right now, you automatically are opted in, and there, I believe, is no opt-out for your data being fed into its AI model, which is called Grok.
That paints a bit of a scary picture then, doesn't it?
I certainly would have preferred to not rush to market with these models, and to make sure that they were right, that they were fair and balanced, before unleashing them.
But this is the way that tech works.
The old slogan at Facebook used to be “move fast and break things.” And the question that I've been asking when I think about AI is, okay, you can move fast, but what are you going to break?
Are you going to break humanity in the course of all of this?
And I think, right now, the evidence is pointing toward many of humanity's worst tendencies when it comes to artificial intelligence and the results that it's providing so far.
To wrap it up, tell us something that's hopeful about the future.
I am very hopeful that Congress will pass some fairly, you know, progressive reforms related to deepfake sexual abuse.
And I don't mean progressive politically; I mean progressive in the sense that there's only a handful of countries that have laws like this on the books.
And I think it is important for America to lead the way.
But also, I think, you know, even in a time when many people are confused and we feel a lot of division in our society, I'm still hopeful every day when I run into people who find my work, particularly around, you know, gender and women's equality online, helpful in their own quest to speak out.
This is always going to be a project until, you know, we have equal representation in Congress and many other institutions around our country.
But if I can empower, I was going to say younger women, but older women as well, to go out and use their voice and make it heard online, and in the real world, so to speak.
Then I feel like I'm really making an impact and that gives me hope every day.
Thank you so much, Nina Jankowicz.
It's been a pleasure learning from you.
And I wish you all good things in the future for your efforts to control these deepfakes.
Thank you very much.
That's it for this edition of “To the Contrary.” Keep the conversation going on our social media platforms X, Facebook, Instagram and TikTok.
Reach out to @tothecontrary and visit our website at the address on the screen.
And whether you agree or think to the contrary, see you next time.
Funding for “To the Contrary” provided by the E. Rhodes and Leona B. Carpenter Foundation, the Park Foundation, and the Charles A. Frueauff Foundation.