As technology grows more sophisticated, so does the potential for deception. Last month, images went viral that purported to show police arresting Donald Trump, and the former president wearing an orange prisoner's jumpsuit — but they were fakes. Jack Stubbs, vice president of intelligence at Graphika, a research firm that studies online disinformation, joins William Brangham to discuss.
As technology grows more sophisticated, so does the potential for deception. Last month, images went viral purporting to show police arresting Donald Trump, and the former president wearing an orange prisoner's jumpsuit, but they were fakes. Trump hadn't even been indicted yet. There have also been lots of other so-called deepfakes on social media, including an image supposedly showing Pope Francis wearing a stylish, puffy jacket.
William Brangham spoke with Jack Stubbs, the vice president of intelligence at Graphika, a research firm that studies online disinformation.
Jack Stubbs, thank you so much for being here. Before we get into the weeds of this, can you just start with a clear definition of what a deep fake actually is?
It's a good question. And it's one a lot more people are probably asking themselves than they were a few months ago. Deepfake is often the word used to describe a piece of media content that has been created by artificial intelligence. And typically, you would use deepfake to refer to AI-generated media content that is also misleading — portraying something that hasn't happened.
We showed some of those examples of deepfakes that have been circulating recently. How else are deepfakes being used today?
I mean, we see this type of technology being used across the board. And a lot of it has a very legitimate use case — some fantastic pieces of art, for example, have been created using this technology. But unfortunately, as with anything, some people will use it for good things, some will use it for less good things, and some for things that are outright bad.
At Graphika, we study and analyze a whole host of different harmful online activity, from politically motivated influence operations by foreign nation-state actors to coordinated harassment campaigns. And what we're seeing is that this technology is having an impact across all those different arenas.
And how easy is this technology to use? I think people are familiar with old-school Photoshop, and how creating those images requires a certain level of technical know-how. Is this technology similarly difficult to master?
Well, that's one of the things that's really interesting, and it's how much things have changed over the last six months. This type of technology — using computers to create images or video — has been around for a long time. Special effects have existed in the movies for decades, and they've gotten increasingly good.
But what we're seeing now is that the sophistication of the technology is increasing while, at the same time, it is becoming more accessible. The majority of these tools are now available for anyone to use on the Internet for just a handful of dollars and a subscription fee. And that means more people can do it, and what they're able to do with it spans a wide variety of outputs.
I mean, some of these examples are pretty harmless. I mean, I thought the pope looked pretty swanky in that puffy coat. But it's not too hard to imagine the darker side of all of this. Can you sketch out some of the scarier possibilities for this?
Yes, possibilities and also things we've already seen. For example, we very closely track state-aligned influence operations from a host of different countries that are targeting political conversations in the United States and other Western countries. We recently saw a Chinese state-aligned influence operation using AI-generated fictitious avatars in its videos to create content about domestic political issues like gun violence, and then trying to distribute them online to influence the conversation so that authentic online audiences would engage with them.
I know this is a tricky thing to try to measure, but is there any way to know whether or not people are actually being fooled by these things?
It's very tricky to measure, and it probably comes down to a case-by-case basis. But that image of the pope in a very swanky puffer jacket is a good example. A lot of people, including myself, saw that and thought, you know, it's probably true, and it's quite funny.
Most of these outputs, whether AI-generated video or images, don't stand up to deeper inspection and scrutiny. You'll see that maybe a hand is actually quite blurred, or that they're often quite bad at showing text.
But they're good enough to basically pass at a cursory glance. And that's the nature of the internet, right? It's an attention-deficit environment, and people often don't look at things for more than a few seconds before reacting or feeling a certain way.
We saw recently, with regards to artificial intelligence, that Elon Musk and a number of other prominent tech people called for a moratorium, a sort of pause, on the development of these technologies. Has anyone called for a moratorium on the use of deepfakes?
Not that I'm aware of, and I'm not sure that would be practical or possible to enforce, honestly.
Because the cat is out of the bag, so to speak.
Yes, the cat's out of the bag. The technology is available, and folks are going to express themselves in good and bad ways regardless of what we try to do about it. And I just want to emphasize there are a lot of really positive and legitimate use cases for this technology — not just in terms of deepfake images. Think about the technology we now see with language models and things like ChatGPT. It's an amazing tool. It can organize holidays for you, write emails, and basically be a personal assistant.
But as with any technology, alongside the legitimate, good-faith use cases, you'll see that bad actors also use it for bad ends, whether that is conducting an inauthentic influence operation or coordinating an online harassment campaign.
So you're part of an organization that studies disinformation. How do we go about helping people combat this?
I think we need to talk about it. It's not a particularly original answer, but the tried-and-tested one: a lot of it comes down to education and media literacy, as we were discussing earlier. Many people don't interrogate the sources of media they see online for more than a couple of seconds.
But we need to ingrain a reaction in people: this is a really interesting and funny picture of the Pope in a puffer jacket, but is it actually true? And how do I know that? And how is it making me feel? And what is going to be my reaction after I've made that more informed and thoughtful assessment?
You mentioned how, if you really scrutinize these images currently, you can usually find flaws in the visual detail that are a tip-off. But we know that the technology is getting better every day and will continue to improve. In this ongoing war between fact and fiction, which side do you think is going to win out?
I can't say which side is going to win out, and I want to be optimistic. Humans have existed for a long time, and technology has had multiple leaps forward that have brought really profound impacts to the way we live. And, for the most part, we're actually still in a fairly good place. But we're accelerating toward a situation that some people refer to as zero trust — an environment, particularly online, where it's almost impossible to ascertain what is true and what is false.
It's not just being presented with something that never happened and being convinced it's real. There's also the flip side, where there can be perfectly real, legitimate, authentic events, but it's impossible to verify that's the case. Take, for example, the Access Hollywood tape from a few years ago. If that were released today, it would be very easy to argue that it wasn't real, and very hard to prove otherwise.
All right, that is Jack Stubbs of the research organization Graphika. Thank you so much for being here.
Thanks for having me.
William Brangham is a correspondent and producer for PBS NewsHour in Washington, D.C. He joined the flagship PBS program in 2015, after spending two years with PBS NewsHour Weekend in New York City.