Social media firms make progress on hate speech, says EU report

Facebook, Twitter, YouTube and Microsoft removing 70% of illegal content

Facebook: the social-media company, where almost half of the illegal content was to be found, reviewed complaints in less than 24 hours in 89.3 per cent of cases. Photograph: Dado Ruvic/Reuters


The drive to take illegal hate speech offline is bearing fruit, the European Commission said on Friday. Its third monitoring survey of industry practices found that the four big IT companies committed to the commission’s voluntary code of conduct removed from their platforms on average 70 per cent of the illegal hate speech notified to them, most of it within 24 hours.

The four – Facebook, Twitter, YouTube and Microsoft – are to be joined by Instagram and the Google+ social network. The companies have a clear incentive to make the voluntary system work to forestall growing demands for legislative action.

Facebook, where almost half of the illegal content was to be found, according to the survey, announced last year that it would hire an additional 3,000 moderators to scour the platform for potential hateful content. It reviewed complaints in less than 24 hours in 89.3 per cent of cases, YouTube in 62.7 per cent of cases, and Twitter in 80.2 per cent of cases.

Welcoming the rising commitment of the companies – the rate of removals has steadily increased from 28 per cent in the first monitoring round, in 2016, to 59 per cent in the second round, in May 2017 – the commissioner for justice, consumers and gender equality, Vera Jourova, said their progress would continue to be closely monitored.

€50 million fine

The success of the code of conduct might obviate the need for legislation like that in Germany, she said, where a law came into force this year providing for fines of up to €50 million for social-media companies that do not remove hate speech quickly enough.

“The code of conduct is now proving to be a valuable tool to tackle illegal content quickly and efficiently,” Ms Jourova said. “This shows that where there is a strong collaboration between technology companies, civil society and policymakers we can get results and, at the same time, preserve freedom of speech.”

She remained critical, however, of the companies’ failure to provide feedback on up to a third of complaints, and of the low level of reporting – just 20 per cent – of cases to the police. Such cases need to be promptly investigated, she said.

Illegal hate speech is defined in EU law as the public incitement to violence, or hatred directed at groups or individuals on the basis of certain characteristics, including race, colour, religion, descent and national or ethnic origin.

The right simply to be rude or to offend remains protected, while separate programmes target content involving terrorist propaganda or child pornography.