How social media platforms reacted to viral video of New Zealand shootings

Amid the many questions swirling around the New Zealand mosque shootings is whether Facebook and other digital platforms acted swiftly enough to stop video footage of the attacks from circulating. These social media giants are already facing scrutiny for enabling users to perpetuate false stories and hate speech. Judy Woodruff talks to The Washington Post’s Elizabeth Dwoskin for more.

Read the Full Transcript

  • Judy Woodruff:

    Many questions have been raised about how graphic videos of this attack were posted and allowed to spread quickly on YouTube, on Facebook and other social media platforms.

    The companies said they tried to stop it, but faced big challenges. Facebook said that it removed 1.5 million videos depicting images from the shooting in the first 24 hours after it happened.

    More than a million of those were blocked as they were being uploaded. YouTube had its own war room-like response center. But it, too, struggled to stop the posting in the minutes after the attack.

    Reporter Elizabeth Dwoskin took a close look at how YouTube and others tried to combat all this. She's the Silicon Valley correspondent for The Washington Post and she joins us now from San Francisco.

    Elizabeth Dwoskin, welcome to the "NewsHour."

    So I think my first question is, as this shooter, this gunman, decided he was going to use his camera as he began this terrible massacre, was there anything in social media to stop him?

  • Elizabeth Dwoskin:

    That is the 10,000-foot question, and it's a great one.

    I would say that, with the social media companies, it often feels like another day, another failure. This isn't the first murder that's been streamed live on Facebook or uploaded onto YouTube, but it was the one most designed to go viral, because the shooter was live-streaming himself while appealing to niche online communities that aggressively reposted it.

    And they reposted different permutations of it, cutting it in half, changing the length, even turning it into animations, like a video game.

    And even after all these years, after Russian meddling and after the tech companies said they're putting millions of dollars of resources into fighting these types of problems, many of them couldn't take down the viral content for 24 hours.

  • Judy Woodruff:

    How did YouTube, which you — I know you have spent some time talking to them.

    How did they first see what was going on?

  • Elizabeth Dwoskin:

    Well, they knew right away on Thursday night, U.S. time, that a video had been uploaded, because, again, it was streamed live on Facebook.

    But then, immediately, people on 8chan and other sites started taking copies of the Facebook video and uploading it like crazy onto YouTube. So they already knew this was going to be a problem. And what's wild is that, even though it was kind of an all-systems-go effort, by the next morning they were realizing that the stuff was still up and easily findable.

    So what they actually chose to do — and it doesn't say great things about where the situation is — they basically hit a panic button and chose to disable some major features of YouTube. And they made a huge decision, one they had never made before, to suspend the use of human moderators.

    Usually, the content goes through A.I., and then a moderator, a human, makes the decision. But they realized the humans were going too slowly, so they decided to just let the A.I. make the call, even though A.I. is wrong a lot. They would rather err on the side of being wrong and taking down too much video.

    But that was like a stopgap measure.

  • Judy Woodruff:

    And A.I., of course, meaning automated moderation.

    And was that more successful? Did they find that that worked better?

  • Elizabeth Dwoskin:

    Yes. After about 24 hours, and only by disabling core features of their site, they were able to contain it. I mean, this is the biggest video site in the world, and it has core features that are still disabled at this moment.

    But you have to say, and they acknowledge, that that's not a solution, not a long-term one. And it's not the first time that they have told me or other journalists that there was a crisis and that some unprecedented element of that crisis made it hard for them to anticipate it.

    Remember, after the Parkland massacre, the students were being called crisis actors, and those videos rose to the very top of YouTube. That was another case where they said, well, we couldn't have anticipated that.

  • Judy Woodruff:

    So is YouTube — and I want to ask you about Facebook as well. Are they saying they weren't aware that the video could be changed into many different forms and shared in many different ways?

    It's not as if they didn't know this could happen, is it?

  • Elizabeth Dwoskin:

    They know. And, remember, they deal with copyrighted videos. They deal with ISIS videos all the time, where people use similar tactics.

    The difference is, with ISIS recruiting videos and copyrighted material, they actually have pre-recorded files of those videos. Therefore, they can teach their technology to recognize any possible snippet of them at upload.

    But when you have a new video that the technology hasn't seen before, especially when new variations are coming up all the time, what they said is that this tripped the system up.

    But you look at that and say, can't you anticipate that a totally new video might go viral, might be put on your Web site in different ways, and shouldn't you try to teach the technology to anticipate that? And I think the sad truth is that the technology isn't there, which then raises the question: can platforms actually police themselves at all?
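
The contrast Dwoskin draws here, between known files that can be fingerprinted ahead of time and brand-new footage that cannot, can be illustrated with a minimal sketch. Everything below is invented for the example (the toy frame format, the function names, the 0.5 threshold); it is not how YouTube or Facebook actually implement matching, only the general fingerprint-and-lookup idea.

```python
# A toy sketch of fingerprint-and-match, NOT any platform's real system.
# Frames are stand-in 8x8 grayscale grids; production systems hash decoded
# video frames (or audio) with far more robust perceptual fingerprints.

import random
from typing import List, Set

Frame = List[List[int]]  # 8x8 grid of 0-255 grayscale values (invented for the example)


def dhash(frame: Frame) -> int:
    """Difference hash: one bit per left-vs-right pixel comparison."""
    bits = 0
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def fingerprint(video: List[Frame]) -> Set[int]:
    """Hash every frame of a known video into a lookup set, ahead of time."""
    return {dhash(frame) for frame in video}


def matches_known_video(upload: List[Frame], known: Set[int], threshold: float = 0.5) -> bool:
    """Flag an upload if enough of its frames match the known fingerprints."""
    if not upload:
        return False
    hits = sum(1 for frame in upload if dhash(frame) in known)
    return hits / len(upload) >= threshold


if __name__ == "__main__":
    random.seed(0)
    make_frame = lambda: [[random.randrange(256) for _ in range(8)] for _ in range(8)]

    original = [make_frame() for _ in range(100)]   # the known, pre-recorded file
    snippet = original[40:60]                       # a trimmed re-upload of it
    fresh = [make_frame() for _ in range(20)]       # never-seen-before footage

    known = fingerprint(original)
    print(matches_known_video(snippet, known))      # True: exact snippet is caught
    print(matches_known_video(fresh, known))        # False: new video slips past
```

The weakness is the one she describes: re-encoding, cropping, or animating the footage changes the fingerprints, so a lookup set built only from the original file no longer catches the variants, and a genuinely new video has no entry at all.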

  • Judy Woodruff:

    Well, and, of course, all this raises the question: do these social media platforms see their responsibility as stopping this kind of material from being spread?

  • Elizabeth Dwoskin:

    They would say yes. But the reality is that that's where they fail.

    They also will tell you that they can never keep this material from being posted in the first place, because they have a system with no prior review. Anyone can post, and content only gets reviewed later, if it gets reviewed at all. And as long as you have that system, you're accepting that some of this material goes up and gets spread.

    And then let's add to this their responsibility. It's not just that the content goes up and anyone sees it. YouTube and Facebook have highly personalized algorithms, where content is actually designed to be turbocharged when people click on it. They start recommending it.

    So they're making a lot of editorial, curatorial decisions that actually promote content to people who didn't even ask for it. And so they have a huge role. I talked today to a former director at YouTube who said that he himself was stunned by the level of irresponsibility of those design choices.

  • Judy Woodruff:

    But, very quickly, it sounds as if you're saying that if something like this were to happen next week, the same thing could happen again, that it would spread just as quickly.

  • Elizabeth Dwoskin:

    I think the companies couldn't say no to that.

  • Judy Woodruff:

    Well, that's disturbing and something for all of us to reflect on.

    Elizabeth Dwoskin of "The Washington Post," we thank you.

  • Elizabeth Dwoskin:

    Thanks, Judy.
