Twitter unveils measures to curb online abuse
Social media company moves to clearly define line between expression and invective
A Twitter executive acknowledged that the social media platform has not always moved fast enough to clamp down on abusive behaviour. Photograph: Getty Images
Social media companies are under increasing scrutiny for the amount of hate speech that thrives on their platforms, especially since the presidential election.
Now, Twitter has unveiled several new measures to curb online abuse, although the changes are unlikely to be far-reaching enough to quiet the company’s critics. On Tuesday, Twitter said it was making it easier for users to hide content they do not wish to see on the service and to report abusive posts, even when those messages are directed at other users. The company has also trained its support teams to better identify mistreatment on Twitter.
“There’s a fine line between free expression and abuse, and this launch is another step on the path toward getting rid of abuse,” said Del Harvey, Twitter’s vice-president of trust and safety. “We’ve been launching new products to address this, and the cadence of product releases is picking up. We have a lot planned on this path.”
Online harassment and hate speech have long festered on Twitter, but the incidents appeared to rise during the presidential campaign. Exchanges between supporters of President-elect Donald Trump and Hillary Clinton grew personal and acrimonious. Many of Trump’s supporters also relied on a series of images – some anti-Semitic and others quietly coded as racist – to circulate hate speech on Twitter.
Since the Trump victory last week, Twitter has been filled with reports of racist and derogatory taunts against minorities. Many users have expressed fear and concern about the escalation of such behaviour. When asked about harassment of minorities, Mr Trump told the 60 Minutes TV programme that his supporters should “stop it!”
Twitter has not had a comprehensive response for dealing with hate speech, largely because the company did not want to limit freedom of expression on the service. But over time, Twitter has rolled out measures to tackle the problem. It has let people mute the accounts of other users, effectively making their content disappear from view. Last year, it issued an explicit prohibition against hateful conduct.
The company is now taking further action. It is letting people filter more precisely what they do not want to see on the service, including muting words, phrases and even entire conversations. Twitter is also making it easier for people to report abusive behaviour, even if they are only bystanders to the abuse, and for the company to evaluate those reports. And it has overhauled its approach to training support teams, holding special sessions on the cultural and historical context of hateful conduct.
“Someone looking at user complaints in Asia may not recognise something happening in the EU or the US as hateful,” said Ms Harvey. “We need to make sure there is a universal familiarity with the most common trends and themes we’re seeing that are abusive, but may not seem so at first glance.”
Critics said that while the steps are positive, they will not eliminate hate speech. Twitter’s changes “don’t stop the problem of posting abusive content”, said Mark S Luckie, a former Twitter manager who now runs Today in #BlackTwitter, a digest of the topics trending among Twitter’s African-American users. “People will find a way to abuse others online, but these changes may put users at ease and curb the perception of abuse on Twitter.”
Part of the reason abuse has thrived on Twitter is that the company allows anonymity, said Mr Luckie. “But unlike other sites with anonymity, Twitter lets users broadcast to the world – so their abuse has a huge potential impact,” he said. “If you fight with a celebrity and the celebrity fights back, then you have potentially reached a global audience with your abuse.”
Ms Harvey acknowledged that Twitter has not always moved fast enough to clamp down on abusive behaviour. She said that the problem would probably never be entirely solved, but that the company was doing more to identify repeat offenders who create new accounts when Twitter shuts them down or who incite users to gang up on others.
“When I hear that Twitter only cares about this now, that we’re only going through the motions, I understand where that perception comes from,” she said. “But this is important to us because people are experiencing these abusive things, and we need to make sure they know what options they have and that we’re taking action to make things better.”