As Facebook Addresses Role in Myanmar Violence, Look Back at Early Warnings

November 6, 2018

by Patrice Taddonio, Digital Writer & Audience Development Strategist

Signage is displayed outside Facebook Inc. headquarters in Menlo Park, California, U.S., on Tuesday, Oct. 30, 2018. David Paul Morris/Bloomberg via Getty Images

On the eve of the U.S. midterm elections, Facebook released an outside report it commissioned on its impact on human rights in Myanmar, where the country’s Rohingya Muslim minority has been the subject of brutal violence that the United Nations has since called a genocide.

The U.N. has said social media — and Facebook in particular — was a significant factor, as the platform allowed hate speech and calls for violence against the Rohingya to spread across Myanmar. Facebook has admitted it was slow to respond to concerns. But this report offers a clearer picture of the company’s impact on the ground.

The report, by the firm Business for Social Responsibility, found that Facebook was “directly linked” to harm in Myanmar when people used the platform in ways that violate its community standards — for example, to incite violence, spread disinformation or promote hate speech. While it said that the company hadn’t caused or contributed to human rights violations “via its own actions,” the assessment found that Facebook’s platform had been “useful” for those seeking to bring about real-world harm in Myanmar, and it outlined recommendations for the company to address the problem. The report also warned of the potential that the run-up to the country’s 2020 elections could pose new risks.

In the past year, Facebook says it’s taken down problematic accounts in Myanmar, hired more language experts, and improved its policies. “We agree that we can and should do more,” said Alex Warofka, a Facebook product policy manager, in a post on Facebook’s blog. Warofka outlined Facebook’s efforts to address five areas for “continued improvement” identified by BSR. He also noted the report’s finding that “Facebook alone cannot bring about the broad changes needed to address the human rights situation in Myanmar.”

But Facebook had plenty of early warnings from Myanmar and other countries about how it was being used to shape events on the ground. Last week, in The Facebook Dilemma, FRONTLINE explored how the company responded to warnings about the platform’s role in spreading disinformation and hate speech, and in sparking real-world violence in Myanmar in particular. In the excerpt below from the documentary, a tech entrepreneur living in Myanmar named David Madden recounts making a presentation at Facebook headquarters back in May of 2015, warning that Myanmar’s Muslim minority was being targeted with hate speech.

“I drew the analogy with what had happened in Rwanda, where radios had played a really key role in the execution of this genocide,” Madden told FRONTLINE. “And so I said, ‘Facebook runs the risk of being in Myanmar what radios were in Rwanda’ – that this platform could be used to foment hate and to incite violence.”

Madden says he received an email saying that the concerns he raised had been shared internally and taken “very seriously.” But the violence intensified. As the film reports, Madden and other local activists had another meeting with Facebook in early 2017, warning that the platform’s processes for addressing content that demonized the country’s Muslims weren’t working.

“I think, I think the, the main response from Facebook was, ‘We’ll need to go away and dig into this and come back with something substantive,’” Madden told FRONTLINE. “The thing was, it never came.”

Watch FRONTLINE’s full two-part investigation into Facebook here.
