Depending on whom you follow online, you could be living in a “parallel universe” of misinformation. Just look at the wildly divergent narratives surrounding the viral encounter that occurred last Friday between Native American activist and Omaha elder Nathan Phillips and Covington Catholic High School students, and how it has become political fodder.
Video clips shared on social media helped to ignite what became a national controversy, and one of the Twitter accounts that helped spread them has since been suspended, the Washington Post reported on Tuesday.
The spark point — the lack of context around the short clips — may have been deliberate manipulation, like the onslaught of “fake news” that helped polarize voters during the 2016 election season.
In the lead-up to the last presidential election, a handful of “supersharers” tweeted hundreds of stories every day, relaying false political information from pervasive, untrustworthy sources.
In fact, most fake news on Twitter during the 2016 election bounced around one “seedy neighborhood,” according to a study published Jan. 24 in Science by researchers at Northeastern University in Boston.
Eighty percent of all content from suspect sources was shared by less than 1 percent of the human tweeters sampled in their study. Those users were disproportionately politically conservative, older and more highly engaged with political news.
This is good news in one way, said Emily Thorson, a political scientist from Syracuse University who was not involved in this study.
“Most people are not being inundated with fake news,” Thorson said. “That is not to say that this isn’t a problem, but I don’t think this is the magnitude of a problem that people often think it is.”
What they studied
Twitter “clearly in some sense has emerged as a ‘public square’ of the 21st century,” said David Lazer, a computational social scientist who led the study. “More than 20 percent of U.S. adults are on Twitter — it isn’t everyone, but it’s a lot of people and a lot of opinion influencers.”
Yet it’s no easy feat to figure out who’s who online, especially if you’re trying to create a representative sample of U.S. Twitter users.
For this study, Lazer and his team needed a big group of Twitter users who could be identified. They turned to a large, publicly accessible source: voter registration.
The team found a sample of nearly 2 million voter records whose combined first and last names were one-of-a-kind in their state. Then, they searched for those people on Twitter, matching their names and locations to individual Twitter profiles. The team also attempted to eliminate bots and hijacked accounts from their panel of subjects.
Just 1 percent of the original 2 million voters could be matched to a Twitter account, and the researchers eventually settled on 16,442 unprotected and uncompromised accounts. Each subject followed at least one other account, had sent at least one tweet and had most likely seen at least one political link in their Twitter feeds during the study.
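The sampling procedure described above can be sketched in a few lines: keep only voter records whose name is unique within their state, then match those names and locations against Twitter profiles. This is a minimal illustration of the idea, not the study's actual pipeline; the record fields and the simple lowercase name comparison are assumptions for the sake of the example.

```python
from collections import Counter

def unique_name_records(voter_records):
    """Keep only records whose (first, last, state) combination appears
    exactly once in that state, mirroring the study's uniqueness filter."""
    counts = Counter((r["first"], r["last"], r["state"]) for r in voter_records)
    return [r for r in voter_records
            if counts[(r["first"], r["last"], r["state"])] == 1]

def match_to_twitter(records, twitter_profiles):
    """Match each unique-named voter to at most one Twitter profile by
    display name and self-reported state; ambiguous names are dropped."""
    by_key = {}
    for p in twitter_profiles:
        key = (p["name"].lower(), p["state"])
        # A name that appears on more than one profile is ambiguous.
        by_key[key] = None if key in by_key else p
    matches = []
    for r in records:
        key = (f'{r["first"]} {r["last"]}'.lower(), r["state"])
        profile = by_key.get(key)
        if profile is not None:
            matches.append((r, profile))
    return matches
```

As the article notes, this kind of matching inherently misses people with common names and people who don't list a real name or location.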
And thanks to the voting records, the team now had information on the age, gender and race of each Twitter user.
To ensure their sample was somewhat representative of U.S. adults overall, the researchers matched their data with a Pew survey of Twitter demographics. Lazer’s team found that their selection of Twitter users closely paralleled Pew’s assessment of who is on Twitter. This result means that the conclusions drawn by Lazer’s study likely match the general experiences of U.S. adults on Twitter.
“That’s reassuring, that we’re not violently off,” said Lazer, though he admitted their panel missed people with common names and people from very populated areas. The study also naturally excluded Twitter users who don’t reveal their locational information or their full names.
“As scientists, we want to be honest that those are concerns,” Lazer said.
The researchers defined “fake news outlets” rather than individual fake news stories, identifying publishers whose stories imitated news but actually spread inaccurate information. The study classified some sites, known to publish almost exclusively made-up stories, as “black” sites, whereas “red” and “orange” sites had low journalistic quality and high levels of inaccuracy.
Twitter users were considered “exposed” to fake news if web addresses from those sites appeared on the users’ Twitter feeds.
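In code, this exposure definition amounts to checking each link in a feed against a list of labeled domains. The sketch below uses made-up example domains in the spirit of the study's black/red/orange tiers; the actual lists were compiled by the researchers.

```python
from urllib.parse import urlparse

# Hypothetical domain labels illustrating the study's tiers
# (these example domains are invented, not from the study).
SITE_LABELS = {
    "totally-fabricated.example": "black",   # almost exclusively made-up stories
    "lowquality-news.example": "red",        # low journalistic quality
    "mostly-dubious.example": "orange",      # high levels of inaccuracy
}

def exposed(feed_urls, labels=SITE_LABELS):
    """Return True if any link in the feed points to a labeled site."""
    return any(urlparse(u).netloc in labels for u in feed_urls)
```

Note that exposure here means a link merely appeared in the feed, not that the user read or clicked it.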
What the researchers found
“Out of about 16,000 accounts, 16 of them accounted for 80 percent of fake news being shared,” Lazer said.
Those 16 accounts sometimes shared hundreds of news stories each day, with a high proportion of false or misleading news. Based on this online cluster of news sources, Lazer said, some of these users appear to live “in a parallel reality where Hillary Clinton runs a pedophile ring in a basement and George Soros has a global network and so on.”
Users who shared articles from fake news sources were also “disproportionately older and had right-leaning political diets,” he said. “People who consumed the content of the right consumed a lot more fake news. That’s been consistent with prior research.”
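The headline finding is a statement about concentration: what fraction of accounts, sorted by volume, accounts for 80 percent of all fake news shares. A simple way to compute that from per-account share counts, assuming a plain list of counts as input:

```python
def supersharer_fraction(share_counts, coverage=0.8):
    """Fraction of accounts that together produce `coverage` of all
    shares, counting from the heaviest sharers downward."""
    counts = sorted(share_counts, reverse=True)
    total = sum(counts)
    running, n = 0, 0
    for c in counts:
        running += c
        n += 1
        if running >= coverage * total:
            break
    return n / len(counts)
```

With numbers shaped like the study's (a handful of prolific accounts among roughly 16,000), this fraction comes out on the order of 0.1 percent.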
A growing body of research points to a host of different factors that can increase readers’ engagement with disinformation — for example, limited attention spans paired with overwhelming floods of information, the prevalence of bots and the novelty inherent in fabricated news.
But Lazer said that this new study could help anyone combating misinformation on Twitter to identify and eliminate some of the most egregious fake news sources.
What the data means
The researchers don’t want to draw too many partisan conclusions.
Weighing in on the study, Thorson said she is “always worried that people are going to see data like this and say, ‘Oh my gosh, conservatives are more susceptible to misinformation than liberals are.’ I don’t think that’s the correct interpretation.”
Instead, she related Lazer’s work to another recent paper that used different methods to study web traffic. That study found that “there is this relationship between age and sharing of fake news that cannot be entirely attributable to party identification.”
That means older people on each political extreme — both liberals and conservatives — tend to share fake news that agrees with their political leaning.
Despite this balance, “I hope this is not interpreted as ‘nothing to worry about,’” said Filippo Menczer, a computer and cognitive scientist and director of Indiana University’s Center for Complex Networks and Systems Research, who was also not involved in the study.
The study pointed out a tendency for the spreaders of fake news to be older, highly politically active people, and that demographic happens to be the most likely to turn out for elections.
“If exposure [to fake news] is concentrated and uneven,” Menczer said, “it might be concentrated in just the right way to make things worse.”
For the moment, the right side of the political spectrum sees far more fake news in its feeds and is more likely to share it. But Thorson thinks this situation is a pendulum.
“When groups are out of power, that’s when these kinds of conspiracy theories start to arise,” Thorson said. “Republicans were out of power for eight years, so it makes sense that there would be more of a demand for that kind of product on that side.”
If Trump spends eight years in office, she expects to see a surge of fake news strike the left.
The team’s data shows that a relatively small amount of fake news seems to infiltrate Twitter — it makes up around 5 percent of all the political links shared. But Menczer argued that this study could be underestimating the infiltration of misinformation, offering only a “lower bound” on it.
Menczer pointed out that the study assumes that all posts in a Twitter user’s feed have an equal chance of being viewed, but social media “platforms rank things based on the estimate of things that will be more likely to be engaging.”
Misinformation is highly engaging, he said. “It’s designed to appeal to our emotions and our pre-existing beliefs.” As a result, those enticingly false posts might have been treated differently by Twitter’s algorithms and been shown more widely than this study would detect.
“I think it would be an important direction for future research,” Menczer said.
Social media sites are taking actions to limit the spread of blatant misinformation on their platforms.
Facebook’s attempted crackdown on fake news seems to be working: After the 2016 election, interactions with fake news decreased sharply on the site, according to a recent paper from Stanford University. Twitter has banned tens of millions of bots, but the Stanford paper reports that the website has still seen a steady rise in fake news content since 2016.
“There’s a broader question: What should the platforms do?” Lazer asked. “That’s a tough one.”