Twitter tackles online abuse problems

Social media giant says 38% of abusive content is now detected proactively by its own systems rather than by Twitter users

In the past Twitter has been criticised - and rightly so - for its sluggish response to reports of online abuse as well as a lack of screening for hate speech accounts. Twitter, of course, issued a “we must do better” response but a year on, it reports significant improvements.

In a joint post, vice president of Twitter Services, Donald Hicks, and director of Product Management & Health, David Gasca, say that 38 per cent of the abusive content they deal with is now detected directly by their team rather than through reports from Twitter users. They are also clamping down on troll respawning: between January and March 2019, 100,000 accounts were suspended for being new accounts created by people who were already suspended.

Hicks and Gasca say that three times as many abusive accounts were suspended within 24 hours of being reported compared with the same period last year, and that more than twice as much private information has been removed upon request thanks to a new reporting process.

In addition, chief executive Jack Dorsey says the platform is currently testing a mute button that will allow users to selectively hide replies to their own tweets. Readers wouldn't see the contents of muted tweets as they scrolled past; instead, they would have to tap on a muted tweet deliberately to read it.