During the last three months of the campaign, fake news headlines drew more engagement than real reporting, and social media platforms were criticized for not doing enough to dispute false information. Now Facebook is launching new tools to help identify dubious or made-up stories. Hari Sreenivasan talks to Slate’s Will Oremus about weeding out fake news.
It was a stunning finding, even in a digital age where stories of all kinds go viral. During the last three months of the presidential campaign, fake or false news headlines actually generated more engagement on Facebook than true ones. Facebook and other social media platforms were criticized for not doing enough to flag or dispute these posts.
Today, Facebook launched several new tools to flag and dispute what it calls the "worst of the worst" when it comes to clear lies. Those tools are essentially embedded in your individual feed.
Here's a bit of a video the company posted about how it will work.
You may see an alert before you share some links that have been disputed by third-party fact checkers. You can then cancel or continue with the post. If you suspect a news story is fake, you can report it. It just takes a few taps. Your report helps us track and prevent fake news from spreading.
Let's learn more about this effort to detect and slow the spread of fake news, part of our occasional series on the subject. Will Oremus has been writing about this extensively for "Slate" and working on that site's own new tool for identifying false stories.
First, Will, let's talk a little bit about what Facebook announced today. How is it going to work?
WILL OREMUS, Slate: So, Facebook's approach to fake news has several components. One thing it's going to try to do is make it easier for users to report it when they see fake news in their feeds. The next thing they're going to do is take that information about stories that are being reported as fake, run some algorithms on it, and create a dashboard of stories that might be fake, and give access to that dashboard to third-party fact-checking organizations. So, these are like Snopes or PolitiFact, Factcheck.org.
Those fact checkers are going to have their human editors evaluate some of the most viral of the stories that have been flagged as fake, and if they determine it is in fact a fake news story, Facebook is going to treat it differently. It's going to show it to fewer people in its feeds. It's going to make it go less viral and it's also going to give people a warning before they try to share that story, saying this story has been disputed. It will still let you share it. It's not censoring or filtering out anything. But it is downgrading it in the ranking algorithm and it is letting people know that this has been disputed.
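The workflow Oremus describes can be sketched in a few lines of code. This is a hypothetical illustration of the logic only; the function names, the report threshold, and the ranking penalty are all invented for clarity, not Facebook's actual system.

```python
# Hypothetical sketch of the pipeline: user reports -> review dashboard ->
# fact-checker verdict -> downgrade and warn (but never remove).
REPORT_THRESHOLD = 100  # invented: reports needed before a story is surfaced

def build_dashboard(stories):
    """Surface the most-reported stories, most viral first, for fact checkers."""
    flagged = [s for s in stories if s["reports"] >= REPORT_THRESHOLD]
    return sorted(flagged, key=lambda s: s["shares"], reverse=True)

def apply_verdict(story, disputed):
    """If fact checkers dispute a story, lower its ranking and attach a warning."""
    if disputed:
        story["ranking_weight"] *= 0.5  # shown to fewer people, not censored
        story["warning"] = "Disputed by third-party fact checkers"
    return story

stories = [
    {"id": 1, "reports": 250, "shares": 9000, "ranking_weight": 1.0},
    {"id": 2, "reports": 3,   "shares": 500,  "ranking_weight": 1.0},
]
queue = build_dashboard(stories)              # only story 1 reaches reviewers
reviewed = apply_verdict(queue[0], disputed=True)
```

The key design point survives even in this toy version: disputed stories are never deleted, only deprioritized and labeled, so sharing remains possible.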
So, Facebook is not the arbiter of the truth. There are third parties checking this for them, right?
Yes, and Facebook has been incredibly reluctant to become the arbiter of what's true for good reason. Facebook, the value of its business, depends on appealing to people on both sides, all across the political spectrum.
So, it doesn't want to be a media company. It has said this many times. What it is doing here is shrewd, I think. It is delegating the responsibility to respected, non-profit, third-party organizations whose whole job is to figure out what's true and what's not.
You have been covering this space for a while. You want to draw a distinction between what's fake news and what are just outright lies and conspiracies. There is a distinction.
Yes, the term "fake news" is relatively new. A few years ago, if somebody said "fake news," you wouldn't necessarily know what they were talking about; maybe they meant a satire site like "The Onion" or "The Daily Show." It came into currency in recent months because of the rise of a particular type of thing, which is a story that's basically made up. It was very popular during the election season for hoaxsters to make up stories that played to people's political biases.
So, something like, you know, Hillary Clinton is about to be arrested by the New York Police Department for email crimes.
They would just make that up. They would publish it. And it would get shared widely on Facebook. Since then, the term has become applied — it has become a political football. And people call — you hear people on the right calling the "New York Times" fake news, people on the left saying Breitbart is fake news. But originally, it was actual hoaxes that were made up out of whole cloth.
Now, people have been trying to fix the fake news problem. There was a recent hackathon, and some Princeton students came up with what they thought was a fix. Your folks at "Slate" had actually worked on a tool. You guys just launched this, not coincidentally on Monday.
Let's take a look at how this works. We're going to put this up on the screen here. So, if I come across a fake news story in my feed, and there's this big red banner saying, "This news story is fake. Here's how we know. Share the proof."
How do you know? How is this identified as fake? This is the tool.
Yes, that's right. So, what we wanted to do was not just flag stories as fake when they appear in your Facebook feed. We actually wanted to give users the power to do something about it, because — I mean, it's so frustrating, right? You try to be a good consumer of the media, you try to evaluate what's true and what's credible, but then you see friends and relatives sharing this stuff.
So, what we do is we actually provide a link to a reputable debunking of that particular story that will appear automatically. And then we prompt you to share that link with the person who posted the fake news so that they and all of their followers can see that that story is fake, or they can go to the debunking site and judge the evidence for themselves.
Now, there's a tool you can actually add to your browser. It's a Chrome extension with a button that works there. We can look at other examples of stories as well.
Who is the arbiter of truth in your system? Who decided that this story was false, even though it looks just like an ABC news site?
Yes, I mean, that's a good question, and this is really the trickiest question in this whole thing. This is going to be an issue for Facebook, too. I mean, if one of these fact-checking organizations says this story has some parts that are true and some parts that are false, is that a fake news story?
I think what we've done, and in fact it seems what Facebook has done as well, is to try to set a really high bar for what counts as fake. It's not just a story that might be misleading or has a couple of factual errors in it. It's a story intentionally designed to mislead people; it's a hoax, basically. So we have human editors who are going to be reviewing the posts that are flagged by our users as potentially fake, and they're going to be looking for, again, a reputable third-party site that has used evidence to debunk that. We're not going to be doing the debunking ourselves.
Can technology solve this problem? There is a recent Pew study saying 14 percent of people out there shared a fake news story, even after they knew it was fake.
Yes. No, technology can't solve the whole problem. I think technology can be a part of the solution. And that's because it's not just a technological problem or just a human problem.
And there are human issues at work here in why fake news is shared. There's confirmation bias. There's the desire for something to be true. I mean, you want something to be true.
What's your incentive to go and check it out? But there is also a technological component, which is that Facebook in particular has had this leveling effect on the media, where a story from abcnews.com, which is a big, reputable news site, looks just the same in your Facebook feed as a story from abcnews.com.co, which is a hoax site designed to trick people.
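The abcnews.com.co example above illustrates why lookalike domains fool readers: the familiar name appears inside the host, but the registrable domain is different. A minimal sketch of that check, with an invented trusted-site list for illustration:

```python
# Sketch: flag hosts that merely contain a trusted name but aren't that domain.
# TRUSTED is a hypothetical allowlist, not any real product's configuration.
from urllib.parse import urlparse

TRUSTED = {"abcnews.com", "nytimes.com"}

def is_spoof(url):
    """Return True if the host embeds a trusted name inside a different domain."""
    host = urlparse(url).hostname or ""
    # The real site, or a genuine subdomain of it, is not a spoof.
    if host in TRUSTED or any(host.endswith("." + d) for d in TRUSTED):
        return False
    # A trusted name embedded in some other registrable domain is suspicious.
    return any(d in host for d in TRUSTED)

print(is_spoof("http://abcnews.com.co/story"))  # True: lookalike domain
print(is_spoof("http://abcnews.com/story"))     # False: the real site
```

A production version would compare registrable domains against the Public Suffix List rather than doing substring checks, but the principle is the same: the feed flattens visual cues, so the URL itself is the only reliable signal.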
And so, Facebook has created the conditions for this fake news to thrive. And that's why I think, you know, technology, whether it's Facebook or a tool like ours, technology can be part of the solution. But it has to be human, too.
That, and a well-informed citizenry who are media literate.
Will Oremus from "Slate" — thanks so much.
Thanks for having me.