
Cracking the stealth political influence of bots

October 26, 2016 at 6:30 PM EDT
Among the millions of real people tweeting about the presidential race, there are also a lot of accounts operated by fake people, or “bots.” Politicians and regular users alike use these accounts to increase their follower bases and push messages. Science correspondent Miles O’Brien reports on how computer scientists can analyze Twitter handles to determine whether or not they are bots.

HARI SREENIVASAN: More than ever before, a big part of this election campaign has played itself out on social media.

No doubt the candidates and their campaigns have tried to take advantage of these platforms. But there’s been a much bigger role this year as well for unseen players. You might call it the rise of the bots.

Miles O’Brien has the story, part of our weekly reporting on the Leading Edge of science and technology.

HILLARY CLINTON (D), Presidential Nominee: Donald supported the invasion of Iraq.

DONALD TRUMP (R), Presidential Nominee: Wrong.

HILLARY CLINTON: That is absolutely proved over and over again.

DONALD TRUMP: Wrong. Wrong.


MILES O’BRIEN: Much as we may wish otherwise, the race for the Oval Office is not over. But there is already one clear winner, by acclamation, Twitter.

Thanks to a candidate who prefers campaigning in 140-character spurts, the social networking platform has become an essential political forum. What could go wrong with that? Plenty. Tweeters, beware.

FILIPPO MENCZER, Indiana University: Because so much of the opinions that we form and the information that we digest comes from social networks and social media, it is possible to try and manipulate the network to control opinions.

MILES O’BRIEN: Filippo Menczer is director of the Center for Complex Networks and Systems at Indiana University. He says the junction of the political and computer sciences is a dangerous place for democracy.

FILIPPO MENCZER: Just like people have tried to influence elections forever without social media, why wouldn’t they also try with social media? Just like any other tool and technology, it can be used for good things and it can also be manipulated or abused.

MILES O’BRIEN: Menczer and his team are researching the shadowy world of bots, software programmed to mimic humans and engage with them. You probably encounter bots all the time.

ACTRESS: What’s my day look like?

COMPUTER VOICE: Not bad. Only two meetings today.

MILES O’BRIEN: Whether it’s Apple’s Siri.

ACTOR: Alexa, play rock music.


MILES O’BRIEN: Amazon’s Alexa.

ACTOR: Alexa, stop.

MILES O’BRIEN: Or the chat box that pops up offering advice when your cable TV is on the fritz. A good one is hard to differentiate from a real person. And that’s the trouble.

Clever bots, employed in a stealthy, strategic manner, can put a virtual finger on the scale of political discourse. Bots generated a huge volume of tweets pro and con during the Brexit vote in the United Kingdom. They also promoted candidates in the 2014 India elections. ISIS uses bots to amplify propaganda by creating thousands of phony accounts.

FILIPPO MENCZER: It’s like very easy to create one or 10 or 100 or 1,000 accounts controlled by that campaign and make it look like these are just regular people who are expressing their freedom of speech.

MILES O’BRIEN: Bots first reared their ugly heads in U.S. presidential politics in 2012. Mitt Romney got caught buying bots when he gained more than 140,000 followers in two days.

For a few dollars, anyone can buy a few thousand bots and deploy them to inflate their apparent Twitter prowess. But this go-round, the bots are doing much more than providing artificial popularity.

Phil Howard is a professor of Internet studies at Oxford University.

PHIL HOWARD, University of Oxford: You program the bots that are following you to repeat your message. And what happens is that a larger and larger number of people see the retweets and think, this is an important position paper or this is a great new idea.

DONALD TRUMP: You’re talking about taking out ISIS, but you were there and you were secretary of state when it was a little infant.

MILES O’BRIEN: Howard analyzed tweets during the first presidential debate. He found bots were behind one-fifth of pro-Clinton Twitter traffic, and nearly one-third of pro-Trump tweets.

PHIL HOWARD: The real problem here is that not all users can tell when the content that comes up in their social media feed is actually generated by these bots.

It’s very difficult to know what the overall impact is on public opinion, but we do know that most Americans can’t distinguish these bots from real users.

MILES O’BRIEN: It’s hard enough for computer scientists and, for that matter, Twitter itself. The company admits 8.5 percent of its accounts are updated with no discernible human action.

But that includes useful applications like TweetDeck, as well as harmful bots. Finding them is a challenge. Menczer and his team are working on this. They visualize Twitter traffic, helping them find accounts that look and act suspiciously nonhuman.

FILIPPO MENCZER: Very often, there are patterns that we can still discern. They’re not necessarily easy for the human eye, but, sometimes, the machine-learning algorithms, by looking at over 1,000 features, they can recognize some patterns that make one particular account similar to other bots.

MILES O’BRIEN: The result is a Web site called BotOrNot, which analyzes Twitter accounts based on the frequency of tweets, the type of networks and friends, and the sentiment the tweets convey.

This Twitterbot account simply repeats pro-Hillary Reddit postings. BotOrNot predicts it is 70 percent likely to be a bot.
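The kind of scoring BotOrNot performs can be sketched in heavily simplified form. The toy function below combines a few hand-picked account signals into a single 0-to-1 likelihood; the specific features, thresholds, and weights are invented for illustration and are not the actual BotOrNot model, which draws on more than 1,000 features and trained machine-learning classifiers.

```python
# Illustrative sketch only: a crude bot-likelihood score built from a few
# hand-picked account features. The real BotOrNot service uses over 1,000
# features and trained classifiers; these thresholds are invented.

def bot_score(tweets_per_day, followers, friends, retweet_ratio):
    """Return a rough 0.0-1.0 bot-likelihood estimate."""
    signals = []

    # Bots often tweet far more frequently than humans do.
    signals.append(1.0 if tweets_per_day > 50 else tweets_per_day / 50)

    # Bots frequently follow many accounts while attracting few followers.
    friend_ratio = friends / max(followers, 1)
    signals.append(min(friend_ratio / 10, 1.0))

    # Accounts that do almost nothing but retweet look automated.
    signals.append(retweet_ratio)

    # Average the signals into one score.
    return sum(signals) / len(signals)

# A hyperactive account that mostly retweets scores high...
print(round(bot_score(tweets_per_day=200, followers=30,
                      friends=900, retweet_ratio=0.95), 2))   # 0.98
# ...while a typical human usage pattern scores low.
print(round(bot_score(tweets_per_day=5, followers=400,
                      friends=350, retweet_ratio=0.2), 2))    # 0.13
```

In the real system, such features are not weighted by hand but learned from labeled examples of known bots and known humans.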

PHIL HOWARD: One of the fun things about studying bots is that it’s very difficult based on the code alone to predict what they will do. And even designers can’t always predict what a bot will do.

MILES O’BRIEN: Brad Hayes can attest to that. He is a postdoctoral associate at MIT. He read a Boston Globe article that concluded Donald Trump speaks at the level of a fourth-grader. To him, that sounded like good grist for some Twitterbot fun.

BRAD HAYES, Robotics and Artificial Intelligence Scientist: I went to work, and I created this bot, and I went through and grabbed all of the transcripts I could find from his speeches, the Republican debates, from his Twitter feed, basically anything, any kind of Donald Trump information, things that he has said, such that I could train the statistical model on that.

MILES O’BRIEN: His bot, DeepDrumpf, doesn’t understand language, per se, but rather detects patterns in the sequence of letters.

DONALD TRUMP: A lot of things are going on folks, a lot of things.

MILES O’BRIEN: By analyzing reams of Trump speeches, it can make statistically valid and frequently humorous predictions of how the real Donald Trump might complete sentences that Brad begins.
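The idea of a model that "detects patterns in the sequence of letters" can be illustrated with a minimal character-level Markov chain: learn which character tends to follow each short run of characters in a training text, then sample new text one character at a time. This is only a sketch of the intuition; DeepDrumpf itself used a neural network, a far more powerful sequence model, and the tiny training corpus below is a made-up stand-in for the Trump transcripts.

```python
# Toy character-level Markov model: for each short context of `order`
# characters, record which character followed it in the training text,
# then generate new text by repeatedly sampling a next character.
import random
from collections import defaultdict

def train(text, order=4):
    """Map each `order`-character context to the characters seen after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=60, order=4):
    """Extend `seed` one sampled character at a time."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # context never seen in training text
            break
        out += random.choice(choices)
    return out

# Stand-in corpus; the real project trained on speech and debate transcripts.
corpus = "we will make america great again. we will win so much. " * 20
model = train(corpus)
print(generate(model, seed="we w"))
```

Because the model only knows local letter patterns, its output is locally plausible but globally incoherent, which is part of why the statistically driven completions come out sounding strange and often funny.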

BRAD HAYES: “I’m a great judge of this country. We have to control everybody and let them fight each other. They won’t refuse me. I will make a fortune.”

MILES O’BRIEN: What’s your best tweet?

BRAD HAYES: “If I don’t win in the end, I will fire the entire American people. We cannot achieve peace if I don’t want it.”

MILES O’BRIEN: Does it kind of surprise you at times what comes out?

BRAD HAYES: Absolutely. As a computer scientist, the most surprising thing is that this kind of simple model can create such great results.

MILES O’BRIEN: Which is why he doesn’t let his bot tweet without human supervision. DeepDrumpf is all in good fun and clearly labeled a bot.

But, tweeter, beware of the more nefarious bots that lurk out there in the Twitterverse. What may seem like a viral grassroots movement can be nothing more than an empty field covered with Astroturf. So, take a moment to check before you retweet.

Miles O’Brien, the “PBS NewsHour,” Boston.