Who decides what is acceptable speech on social media platforms?

There are questions once again about the future of Twitter and what it should and should not allow online. Specifically, how far should the company go when it comes to permitting free speech? What should be taken down when it comes to misinformation? And does the company adequately guard against hatemongering speech? Charlie Warzel joined William Brangham to discuss.


  • Judy Woodruff:

    There are questions once again about the future of the social media platform Twitter and what it should and should not allow online.

    Specifically, how far should the company go when it comes to permitting free speech? What should be taken down when it becomes misinformation? And does the company adequately guard against hatemongering speech?

    William Brangham explores some of those questions.

  • William Brangham:

    Judy, Elon Musk's pending ownership of Twitter has been driving a lot of these questions recently.

    Musk has said he wants to overturn some of Twitter's moderation policies and allow some suspended users, like former President Trump, to return to the platform. But what is acceptable and what isn't? And who gets to decide?

    For example, over the weekend, Ye, the artist formerly known as Kanye West, tweeted a now-deleted antisemitic comment. His Twitter account was locked for violating Twitter's policies. So, will these kinds of bans and blocks continue under new leadership?

    For more on all of this, I'm joined by Charlie Warzel. He's the author of The Atlantic newsletter Galaxy Brain, where he writes about technology, media and politics.

    Charlie, it's great to have you back on the "NewsHour."

    What do you make of this Kanye West episode? I mean, isn't that a pretty clear-cut case of hateful rhetoric, or does it, in your mind, underscore some of the larger issues here?

  • Charlie Warzel, The Atlantic:

    Well, I think it's a great example of some of the policies that Twitter has put into place since around 2017 around content moderation, its own rules, its code of conduct and its terms of use.

    And this is Twitter enforcing those policies rather quickly. I think there is a broad consensus that he overstepped the boundaries of what Twitter deems acceptable speech, and they took action.

    But I think, in this broader context, you're looking at the difficulty, and the spotlight, of content moderation on big platforms like Twitter, on big social networks. You're not just going to have trolls harassing people. You're going to have these huge power users who are in the middle of our public and cultural spotlight either pressing the boundaries or saying patently unacceptable things, and there's going to be all this pressure.

    So it sort of speaks to the way that these content moderation conversations aren't academic. They're real. They're happening all the time. And the stakes are high.

  • William Brangham:

    I mean, a lot of the critics of the social media companies, Twitter included, argue, one, that conservatives are discriminated against, and that, two, the policies are policed in a somewhat willy-nilly fashion.

    I mean, for example, Kanye West gets booted for antisemitism, but the Iranian government's Twitter account, which regularly spouts antisemitism, is alive and well.

    I mean, is there a good-faith argument to be made that these policies are either, A, discriminatory or, B, not really working currently?

  • Charlie Warzel:

    Well, I think they're always very difficult to enforce, right? The hardest part about content moderation is moderating at scale, right? And there's always going to be lots of pressures from outside organizations.

    And content moderation, even though we don't like to talk about it this way, frequently does have ideological components to it. And that's what makes it difficult, right? It is part of this political conversation.

    I do want to say, though, to push back a little, there's this idea that content moderation has harmed conservatives or is unfairly biased against conservatives. I want to be very clear that a lot of the work looking into this and into its enforcement also shows that conservative politicians, shock jocks and potential candidates are pushing the boundaries of acceptable speech a lot more.

    They're the ones who are kind of trying to draw big tech companies into this, because, in all honesty, a lot of times, getting their content moderated by a big tech company is almost the goal, because then they say they have been censored. It's very good for their sort of culture war. Hence…

  • William Brangham:

    Right. It's that I'm going to work the refs as much as I possibly can.

    I mean, let's say that Musk does take over Twitter and does loosen some of the reins. You touched on this earlier, but do you think a looser, more open Twitter is a net positive or a net negative for our body politic?

  • Charlie Warzel:

    Well, I want to be clear that what we have seen so far from Elon Musk is an incredibly shallow idea of what content moderation is, and of just how challenging it really is, right?

    Back last April, when he was sort of floating this idea, he spoke with a couple of people. He spoke at TED in Vancouver, and he was floating ideas that had either already been tried by Twitter or that people who have tried them have realized patently don't work.

    He seems to be sort of coming at this issue the way that a lot of the tech platform founders came at it, say, back in 2004, 2005, 2006, this really permissive idea of speech that didn't really take into account the fact that you're going to be dealing with political entities. You might be dealing with terroristic entities. You might be dealing with inauthentic bot campaigns. There's just so much to this.

    And he's coming at it sort of wide-eyed, very confidently saying, I have had a lot of success in one realm of the tech industry, and surely it will translate into this realm.

    And I don't think that's true.

  • William Brangham:

    Are there good examples, when you look out there on the landscape of social media platforms, that you think do a good job of both keeping a healthy debate going, while keeping the hateful, awful stuff at bay?

  • Charlie Warzel:

    I think Reddit is a platform that initially really struggled with its approach to moderation, but has found a good middle ground, right?

    They have community moderators who sort of keep their communities in check and set their own standards, and then Reddit will come in if things get out of hand or if a community's hateful rhetoric spreads to another community. And they have done a lot of work kicking off communities that are toxic.

    And I think this is an important point about the Internet, which is that every community, every forum, whether large or small, relies on moderators. They set the standards for discourse there, and they make it a space that people want to be in.

  • William Brangham:

    Charlie Warzel writes the Galaxy Brain newsletter.

    Thank you so much for being here.

  • Charlie Warzel:

    Thanks for having me.
