Moments after President Donald Trump stepped up to the dais in the House Chamber to deliver his 2019 State of the Union speech to Congress, Cameron Hickey sat at his computer in Cambridge, Massachusetts, and scanned social media for patterns of problematic messages.
He found them. Twitter accounts that had previously been flagged by the platform as abusive had disseminated photos of female lawmakers who attended the address wearing all-white — a nod to the suffragettes. But these photos were edited to include Ku Klux Klan hoods.
In the hours after Trump’s speech, the spokesperson for Trump’s 2016 presidential campaign, Katrina Pierson, echoed the meme, tweeting, “The only thing the Democrats uniform was missing tonight is the matching hood.”
The messages smearing the women in white “took hold instantly” and were “shocking,” said Hickey, technology manager at the Information Disorder Lab at Harvard University’s Shorenstein Center on Media, Politics and Public Policy. He and his team sift through thousands of social media posts per hour, using a tool they developed called NewsTracker, to identify and track emerging misinformation. Hickey originally created the concept for NewsTracker as a science producer at PBS NewsHour.
The fake imagery of congressional female Democrats may have been designed to inflame national tensions over racism. As a candidate and since taking office, Trump has faced several allegations of racism — including when he said an American-born judge was not qualified to hear a case because of his Mexican heritage, and when he reportedly disparaged Haiti and African nations. But this meme, Hickey said, tried to flip the charge back onto the other party. This year, the State of the Union fell amid a controversy over Virginia Gov. Ralph Northam and state Attorney General Mark Herring — both Democrats — having worn blackface in the past.
As fast as these incendiary messages mushroom, Hickey said, it’s unclear how long they stay in the tweetosphere. But failing to successfully weed them out could ultimately undermine political discourse and democracy in the country.
Last year, a majority of Americans got their news from social media, and yet they don’t trust it entirely, said Galen Stocking, a computational social scientist at Pew Research Center.
“There is a sense that news on social media isn’t accurate,” he said, adding that despite those doubts, convenience keeps Americans coming back for more.
Two-thirds of Americans say they are familiar with social media bots, which are computer-operated accounts that post content on platforms, according to a nationally representative survey the Pew Research Center released in October 2018 ahead of the midterm elections.
Among those who had heard anything about social media bots, 80 percent of respondents said these bots were used for bad purposes, and two-thirds said bots have had a mostly negative effect on U.S. news consumers’ ability to stay informed. Nearly half of people within that same pool of respondents said they probably could spot a social media post sent out by a bot.
Although a majority of Americans know this is a threat, many still fall for it. Tweets that compared the congressional women in white to KKK members, for example, were shared over and over again. The motivation to share could take many forms: the account holder may believe the images are real, find them darkly funny, or share them out of partisan tribalism.
Ultimately, Hickey said, low-credibility information can spread virally, and people who do not value truth and accuracy will exploit that vulnerability in social media discourse for their own gain.
Social media companies are aware that orchestrated chaos is unfolding within the information ecosystems they created, and have faced scrutiny and calls to do more to intervene. Congress grilled Facebook founder Mark Zuckerberg last April for the company’s failure to prevent rampant misinformation spread by Russian social media bots during the 2016 presidential election. In November, Zuckerberg announced the company would be introducing a global, independent oversight body to help govern content on the platform.
After the 2018 U.S. midterm election, Twitter conducted a review that revealed competing efforts by users to both register voters and suppress voter participation, as well as foreign information operations (though to a lesser degree than in 2016).
Problematic messages — whether conspiracy theories, hyperpartisan spin or memes designed to inflame tension — sometimes originate in even less regulated online spaces. They may lie dormant in a comment thread on 4chan or Reddit for months or years before moving onto gateway platforms, such as Twitter or Facebook, where the news cycle can summon them like a virus into mainstream media coverage.
Cognitive psychologist Gordon Pennycook, who studies what distinguishes people’s gut feelings from their analytical reasoning at the University of Regina in Canada, admits he has fallen for fake claims that made their way into news stories. A case in point was a reported confrontation in January during the March for Life rally in Washington, D.C., between a Covington Catholic High School student and a Native American protester.
Growing up in rural Saskatchewan, Pennycook said he had witnessed disrespectful behavior toward First Nation communities, so the story of a young high school student being rude to an elderly Native American wasn’t hard to believe. But subsequent reporting by the Washington Post and others suggested the confrontation was more complex than social media initially understood.
“I bought into it like everybody else did,” Pennycook said, but his research armed him with restraint in reacting to the story on social media. “I didn’t pile on or retweet.”
So why do people fall for fake news — and share it? In a May 2018 study published in the journal Cognition, Pennycook and his co-author, David Rand from the Massachusetts Institute of Technology, explored what compels people to share partisan-driven fake news. To test that question, Pennycook and Rand administered the Cognitive Reflection Test to more than 3,400 Amazon Mechanical Turk workers, gauging their ability to discern fake news headlines even when those headlines played to their own ideological biases. The pair concluded a person’s vulnerability to fake news was more deeply rooted in “lazy thinking” than in party politics.
It doesn’t help the U.S. confront the problem of fake news “to have somebody with a very large platform saying things that are demonstrably false,” Pennycook said.
He explained it was socially and politically problematic when Trump used his State of the Union address and the White House to make claims about jobless rates among Latinos and migrant caravans that could be quickly proven untrue.
More broadly, Pennycook says it’s tough to know if humans can control the fake news monster they have created: “It’s a reflection of our nature, in a sense.”
At Indiana University’s Center for Complex Networks and Systems Research, Fil Menczer has built a tool called Hoaxy that he hopes will help people discern the trustworthiness of the news they consume.
To use it, you enter a keyword or phrase (“State of the Union”). The database then builds a webbed network of Twitter accounts that have shared stories on this subject, grading each account on the likelihood that it is a bot. Has the account quoted, mentioned or retweeted anyone? No? Has anyone else quoted, mentioned or retweeted that account? Still no? Then, according to Hoaxy’s companion tool Botometer, there’s a solid chance that account is a bot. Hoaxy continuously monitors hyperpartisan sites, junk science, fake news and hoaxes, as well as fact-checking websites, Menczer said.
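The interaction signals described above can be sketched as a toy scoring rule. This is a hypothetical illustration only — the real Botometer uses supervised machine learning over many account features, not a two-question checklist:

```python
# Toy bot-likelihood heuristic inspired by the interaction questions above.
# A hypothetical sketch, NOT Botometer's actual model.

def bot_likelihood(interactions_made: int, interactions_received: int) -> float:
    """Return a rough 0-1 score. Accounts that never quote, mention or
    retweet anyone, and that no one else engages with, look more bot-like."""
    score = 0.0
    if interactions_made == 0:
        score += 0.5   # the account never engages with other accounts
    if interactions_received == 0:
        score += 0.5   # no other account engages with it
    return score

print(bot_likelihood(0, 0))    # 1.0 — no two-way engagement at all
print(bot_likelihood(12, 40))  # 0.0 — normal back-and-forth activity
```

The point of the sketch is the intuition, not the numbers: genuine accounts leave a trail of two-way engagement, and its absence is a cheap first-pass signal.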
A search of NewsTracker and Hoaxy for memes that popped up before and after Pierson’s tweet linking Democratic women to the KKK shows how quickly bot accounts jumped on the topic.
A small slice of Hoaxy’s data shows how a single bot account, @cparham65, was quickly retweeted by dozens of other bots once it had latched onto the topic. The graphic below represents activity around the tweet, showing a photoshopped meme of former President Obama and a flock of white sheep.
Menczer, a professor of informatics and computer science, did not specifically track or study how bots responded to Trump’s latest State of the Union speech. But he has studied how current events can spawn misinformation.
In a world where people are flooded with messages from their phones, televisions, laptops and more, Menczer said, creators of problematic content compete in an economy of attention. The people behind misinformation want to arrest your attention while you’re scrolling through your newsfeed. They know their message is competing against a lot of other material — news, friends’ baby photos, hypnotic videos of bakers icing cupcakes.
People have begun to realize how easy it is to inject misinformation and warp a community’s perceptions of the world around them, Menczer said.
“If you can manipulate this network, you can manipulate people’s opinions,” he said. “And if you can manipulate people’s opinions, you can do a lot of damage, and democracy’s at risk.”
And the odds of containing misinformation don’t look promising, Menczer warned. At Facebook, for example, the embattled company dismantled billions of suspicious accounts amid widespread public scrutiny. But even if the company removed these accounts with an accuracy rate of 99.9 percent, Menczer said, “you still have millions of accounts that won’t get caught.”
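Menczer's warning is simple arithmetic: a 0.1 percent miss rate applied to billions of accounts still leaves millions standing. The figures below are illustrative, not Facebook's actual numbers:

```python
# Back-of-the-envelope version of Menczer's point. The account total is an
# illustrative assumption ("billions"), not a reported Facebook figure.
suspicious_accounts = 3_000_000_000   # hypothetical pool of bad accounts
accuracy = 0.999                      # a 99.9 percent detection rate

missed = round(suspicious_accounts * (1 - accuracy))
print(f"{missed:,} accounts slip through")  # 3,000,000 accounts slip through
```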
Back in Cambridge, Hickey said he is applying the lessons he learned during the 2018 midterms, tracking problematic content on social media, and gearing up for what he expects to be a proliferation of bad information ahead of the 2020 presidential election.
He does not focus on identifying Russian bots, he said, because it is so hard for anyone outside of a particular social media platform to judge a bot’s origin. Instead, he isolates suspicious accounts by message frequency and how and if they share legitimate (or junky) content.
During the 2018 midterms, Hickey said his team identified 1,700 cases of problematic content that received very high engagement — sometimes thousands of interactions on Facebook or Twitter. The kinds of messages that hit this threshold touched on immigration, Islamophobia and the confirmation hearings of then-Supreme Court nominee Brett Kavanaugh. In that last case, misinformation spread about both Kavanaugh and Christine Blasey Ford, the woman who accused him of sexual assault. One viral anti-Kavanaugh tweet, highlighted by Quartz, referenced a Wall Street Journal article that did not exist. Public reception of these problematic memes was “incredibly responsive,” Hickey said.
While platforms such as Twitter, Facebook and YouTube are trying to mitigate the potentially disastrous effects of misinformation peddlers with a political or financial stake, Hickey said the constant game of cat-and-mouse shows no sign of ending any time soon. Whether the assault is foreign or domestic, he said, the techniques used to shovel misinformation into the news cycle are the same, with similar results.
“You build up a bunch of audiences using this platform,” he said. “And then when you’re prepared to push a particular message, you can do it.”
Laura Santhanam is the Health Reporter and Coordinating Producer for Polling for the PBS NewsHour, where she has also worked as the Data Producer. Follow @LauraSanthanam