Group Seeks Help From Social Networks to Combat Hate Speech
Photo of a Facebook user by Rodrigo Buendia/AFP/Getty Images.
Social networking websites such as Facebook and Twitter have helped users mobilize around a common cause like never before. But what if their message is one of hate?
The Simon Wiesenthal Center, a Los Angeles-based group working against global racism, has compiled a list of hundreds of websites it deems hateful and is pushing their host sites to remove them. They range from a white supremacist motorcycle group’s blog to a Nazi-era ring listed for sale on eBay.
The group’s associate dean, Rabbi Abraham Cooper, recently spoke at a briefing on Capitol Hill, where he described the group’s efforts and what they’re up against.
After the terrorist attacks of 9/11, said Cooper, many experts were concerned that U.S. homegrown militants — such as neo-Nazis and KKK members — would adopt al-Qaida-like tactics. Instead, al-Qaida has co-opted theirs, using the Internet to spread its message of violence and to recruit members. The ease of communication the Internet brings gives groups seeking to do harm a tool to inspire bad behavior in others, he said.
Cooper and his organization are hoping to convince social networks, which have their own guidelines and standards, to become more proactive in taking down objectionable content, rather than waiting for users to flag it.
Rick Eaton, senior researcher at the Simon Wiesenthal Center, said Facebook personnel “do a pretty good job and are very receptive to our concerns,” but they allow some pages, like “F— Religion,” to stay if they consider them a “discussion.” Twitter, on the other hand, has not agreed to a meeting with the center, he said, and continues to let troubling users keep accounts — for example, Jabhat al-Nusra, which the State Department has identified as a terrorist front.
Facebook declined to be interviewed for this article and Twitter did not respond to our interview request.
Facebook has a page that explains how to report an objectionable page and what the company does in response. It also lists types of prohibited content, such as threats to public safety and pornography, that it can remove.
Twitter also prohibits direct threats of violence, and describes when it can suspend user accounts.
YouTube starts its user guidelines with a message of trust and outlines its grounds for permanent banning.
Individuals can continue to point out objectionable pages, but the social networking companies have the expertise and could be more effective at it, Cooper said.
Liz Heron, director of social media and engagement at the Wall Street Journal, said all three social networks have enormous amounts of content that make it hard to regulate. Facebook and YouTube have teams of people looking for and taking down harmful content, but it’s not always a perfect process, she said.
She cited as an example a nude painting of actress Bea Arthur that was sold at an auction earlier this month for $1.9 million. News organizations that covered the sale were blocked from posting their stories on Facebook.
Twitter has taken a more hands-off approach, Heron said. “I think free speech is really important to them and because of that they tend not to get involved in what should or should not be taken down.”
Twitter posts a transparency report twice a year on the number of requests it gets via court order or from government agencies around the world to remove objectionable material. In its latest report, from July 1 through Dec. 31, 2012, Twitter got 42 such requests and withheld access to one account in Germany and to 44 tweets in France. There were four requests in the United States and none were blocked.
In the case in France, where making anti-Semitic remarks is illegal, a student group had objected to anti-Semitic tweets. So Twitter blocked access to those particular tweets in France.
The technology to detect a user’s country of origin does exist, but it’s up to the social networking sites to use it, said Paul Schiff Berman, a law professor at George Washington University. The websites then could filter content according to what kind of speech is illegal in each country.
“It’s hard when you’re talking about millions of tweets, but it’s not necessarily impossible,” he said. And the companies that want to do business in other countries might find themselves taking on that massive effort.
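The kind of per-country withholding Berman describes — and that Twitter applied to the anti-Semitic tweets in France — can be sketched roughly in code. This is a hypothetical illustration, not any platform’s actual system: the country codes, post IDs, and rules table below are invented for the example, and a real service would determine the viewer’s country from IP geolocation rather than take it as an argument.

```python
# Hypothetical sketch of country-level content withholding: a post
# flagged as illegal in one jurisdiction is hidden from viewers there
# but remains visible everywhere else. All data below is illustrative.

# Per-country rules mapping a country code to the post IDs withheld there.
WITHHELD = {
    "FR": {"tweet_801", "tweet_802"},  # e.g. posts illegal under French law
    "DE": {"account_17"},
}

def visible(post_id: str, viewer_country: str) -> bool:
    """Return True if the post may be shown to a viewer in this country."""
    return post_id not in WITHHELD.get(viewer_country, set())

# A post withheld in France is still served to viewers elsewhere.
print(visible("tweet_801", "FR"))  # False
print(visible("tweet_801", "US"))  # True
```

The design point matches Berman’s: the filtering decision happens at serving time, per viewer, so the same content can be lawful and visible in one country while blocked in another — which is also why the effort scales with the volume of posts and the number of jurisdictions involved.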
In the meantime, groups concerned about hate speech can continue going to the intermediaries, such as search engines and social networking sites, rather than the person who posted the material or the end user who is downloading it, because those individual people are harder to find.
Cooper admits it is an uphill battle. He said he approached Facebook to ask why it removed the “F— Muslims” page but not those lambasting other religions. Facebook ended up restoring the Muslim page.
What do you think the social networking companies should do, if anything? Let us know below in the comments section.