Twitter launched a new set of tools Tuesday in an attempt to stem harassment on its platform.
The social media site said in a statement it has seen a sharp rise in “abuse, bullying and harassment” over the past few years.
“Because Twitter happens in public and in real-time, we’ve had some challenges keeping up with and curbing abusive conduct,” the company said. “We took a step back to reset and take a new approach, find and focus on the most critical needs, and rapidly improve.”
As part of its reset, Twitter has added a new feature that lets its users “mute” specific words, emojis, and entire conversations, blocking them from being included in a user’s notifications. Twitter users could already mute entire accounts.
The company is also introducing a “more direct way” for users to report conduct that violates Twitter’s “hateful conduct policy,” which prohibits encouraging violence or directly attacking someone based on race, ethnicity, gender, sexual orientation, and other characteristics.
Twitter said it has also retrained its support teams so they can deal with the reports more quickly.
Emma Llanso, director of the Center for Democracy & Technology’s Free Expression Project, said the changes are a good step forward but emphasized that Twitter needed to collect hard evidence to show its new policies are working. In the past, the company has been criticized for its slow response to taking down hateful language.
“What impact do they really have?” she said. “I would like to see more information from social media sites about the impact. We need to see more discussion about whether this was effective,” she added.
Llanso also said she saw a “beneficial reason” for the mute button, “but I can see people using it to mute dissenting views.”
The company said in a statement it realizes the changes will not “suddenly remove abusive conduct from Twitter.”
“No single action by us would do that,” Twitter said. “Instead we commit to rapidly improving Twitter based on everything we observe and learn.”
In 2015, Twitter’s former CEO admitted that the company “suck[ed] at dealing with abuse and trolls on the platform, and we’ve sucked at it for years.”