The U.S. intelligence community has made it clear that when it comes to foreign interference in American elections, 2016 was only the beginning. As November approaches, efforts to persuade voters and sow disinformation and mistrust are growing. Nick Schifrin reports and talks to Ben Nimmo of cybersecurity firm Graphika about a Russian operation leveraging American writers and fake personas.
2016 was only the beginning. The threats to the November election from foreign hacking and interference are growing as the vote draws closer.
Nick Schifrin explores the latest we know about Russia's role.
Judy, the Internet Research Agency was a troll farm, a sock puppet army of fake online accounts and automated bots, spreading synchronized talking points.
Today, Facebook and the independent cybersecurity firm Graphika say members of that agency created a new site, Peace Data, which bills itself as a global news organization. Its stories are designed to criticize Biden from the left, to steer possible voters away from his campaign.
It was also hostile to Trump, revealing the main Russian goal remains the same, sow division.
But one difference this time, they're trying to hire Americans. That's an attempt to launder the Russian origins of the disinformation.
And we turn now to the primary author of that Graphika report and memo, the firm's director of investigations.
Ben Nimmo, welcome back to the "NewsHour."
Laundering the source of this information is a favorite tool of Russia, and of Soviet disinformation long before it. How did it work in this case?
In this case, what happened was an operation that was built around a Web site called Peace Data, which published in English and Arabic.
And on the Peace Data Web site, it named its staff members, its editor, its editorial assistants. And all of those were fake personas. They had all been invented by the operators behind this particular network. They all had fake profile pictures which had been generated by A.I.
But what they were doing was, rather than using these fake personas to write stories themselves, they used them to then contact freelance journalists from around the world, including in the United States, and say, would you like to write for us? Can you send us stories?
And they — it seems that they hired quite a few different journalists in different countries to write stories which then went on the Web site. And then the operation itself had accounts on Facebook and Twitter. It had personas on LinkedIn.
And it would use these social media accounts to try and plant the stories in front of receptive audiences. So, for example, on Facebook, once the operation had got freelancers to write the stories that it was interested in, it would use its own Facebook accounts to post into groups that it thought would be particularly important targets.
And the kind of groups that it was targeting were very much progressive groups. There were groups that focused on Democratic socialism. There were groups that focused on DemExit. There were some environmental groups in the mix.
But you can see that the process was, get somebody else to write the story, so it looks authentic. And then you use the fake accounts to drop it in front of the communities that you're trying to target.
Another evolution that you write about is that the Internet Research Agency has changed in the last couple of years, using fewer, more crafted, more targeted accounts than in 2016.
What does that say about how these efforts are evolving?
It really looks like they're trying harder with each persona, partly because they're having to.
If you think back to 2016, the Internet Research Agency was running hundreds of accounts on different platforms, with only ever a paper-thin attempt at an identity.
But what's happened since then is, we have seen multiple rounds of takedowns by different platforms. We have seen multiple exposures of different ways that the Internet Research Agency and other information operations have been working.
And, really, particularly now with the election coming up, the hunt is on. There's a whole community of researchers out there, both inside the platforms and outside, who are looking for this kind of fake account activity.
And so the operators who were behind this particular network were having to try harder to create a persona that would stand the test. So they'd have the same persona on the Web site and on Facebook and on Twitter and on LinkedIn. They would try and give it a little bit of a biography.
They would try and give it a little bit of a personality. But, still, it wasn't enough to stop them getting caught. And that really tells you something important about the way — if you like, the way the game has shifted since 2016.
In 2016, it was almost painfully easy for the Internet Research Agency to run these fake accounts and get away with it. And what we have seen since then is, it's been getting harder and harder.
On the one hand, Facebook says the fact that this particular site had so few audience members is a sign they're doing well in cracking down on this.
On the other hand, the FBI tipped Facebook off about this content in July, and here we are in September. Is Facebook doing enough fast enough?
In 2016, the operation that targeted the U.S. election was finally exposed and taken down the September after the election.
This time around, we have seen an operation being taken down across multiple platforms, in cooperation with law enforcement, September before the election. And that's a really, really important difference. Catching it before it can actually reach the day that it's targeting is much more effective than catching it a year down the line.
We should mention that Peace Data has supposedly responded to these criticisms about it, saying that it's evidence that Facebook and the FBI — quote — "want to shut up independent left-wing voices."
Is that in some ways just part of the disinformation playbook?
Yes, that's something we see every time there's an exposure of an information operation.
Part of what they will do is, they will raise — try and raise a storm of protest and say, we're not an information operation. You guys are.
It's an absolutely typical part of Russian information operations. And I have seen it with various different operations that have been taken down. It's part of the trolling game. What will be interesting to see is whether anybody actually falls for it.
This Web site operates by running fake personas with fake profile pictures. It's now taken down the about page on the Web site, where all these fake profile pictures were being displayed. But that's been kept in a number of Web site archives. So the evidence is still there.
And so the operation has had to take down part of what it was doing. And so, if their claim is that, we are just poor oppressed journalists, then the question remains, so why were you using A.I.-generated profile pictures in the first place?
And they don't seem to have an answer for that, which is why they're now trying to hide the evidence.
Ben Nimmo, thank you very much.
Nick Schifrin is the foreign affairs and defense correspondent for PBS NewsHour, based in Washington, D.C. He leads NewsHour's foreign reporting and has created week-long, in-depth series for NewsHour from China, Russia, Ukraine, Nigeria, Egypt, Kenya, Cuba, Mexico, and the Baltics. The PBS NewsHour series "Inside Putin's Russia" won a 2018 Peabody Award and the National Press Club's Edwin M. Hood Award for Diplomatic Correspondence. In November 2020, Schifrin received the American Academy of Diplomacy’s Arthur Ross Media Award for Distinguished Reporting and Analysis of Foreign Affairs.