Last week I wrote up the #FBrape campaign’s strategy: to hold Facebook accountable for the misogynistic content of its users by pressuring advertisers. Only seven days after the open letter was published, Marne Levine, Facebook’s VP of Global Public Policy, published a response agreeing to the campaign’s demands to better train the company’s moderators, improve reporting processes, and hold offending users more accountable for the content they publish.


The campaigners say they generated 5,000 emails to advertisers, and convinced Nissan to pull its advertising from the platform. This is great initial traction for a social media advocacy campaign, but it represents a minuscule percentage of Facebook’s users and advertisers. For people interested in shaping what kinds of speech social media giants allow, the #FBrape campaign quickly confirmed the relative value of targeting companies’ revenue sources rather than directly petitioning the corporations. The #FBrape campaign also had a clear moral high road over the terrible instances of speech it campaigned to censor. But the results are still illuminating, as we struggle to determine how much power companies like Facebook wield over our self-expression, and the organizational processes and technical mechanisms through which that power is exerted.

Continued attention will be required to hold Facebook, Inc. to its promises to train its content moderators (and an entire planet of actual users) to flag and remove violent content. Facebook has also promised to establish more direct lines of communication with women’s groups organizing against such content. This is the kind of personal relationship and human contact groups have clamored for (see WITNESS and YouTube’s relationship).

‘fair, thoughtful, scalable’

Technology companies have tended to avoid establishing such relationships, probably because they require relatively large amounts of time in a venture that’s taking on an entire planet’s worth of communications. Facebook itself states its preference for solutions to governing speech that are “fair, thoughtful, and scalable.” Given the crazy scale of content uploaded every minute, Facebook might look into algorithmic solutions to identify content before users are exposed to it. YouTube has conducted research to automatically categorize some of its own torrent of incoming user content to identify the higher-quality material. According to its post, Facebook has “built industry leading technical and human systems to encourage people using Facebook to report violations of our terms and developed sophisticated tools to help our teams evaluate the reports we receive.”

This is unlikely to be the last we hear about this. By publishing an official response, Facebook gave 130 media outlets and counting an excuse to cover the campaign, which few had done prior to the company’s reply. And whether they relish the position or not, social media companies like Facebook have positioned themselves as arbiters of speech online: subject to the laws of the lands they operate within, but also comfortable codifying their own preferences into their policies. Kudos to Facebook for taking a minute to respond to some of the messy side effects of connecting over a billion human beings.

Matt Stempeck is a Research Assistant at the Center for Civic Media at the MIT Media Lab. He has spent his career at the intersection of technology and social change, mostly in Washington, D.C. He has advised numerous non-profits, startups, and socially responsible businesses on online strategy. Matt’s interested in location, games, online tools, and other fun things. He’s on Twitter @mstem.

This post originally appeared on the MIT Center for Civic Media blog.