Facebook, Twitter and YouTube have stepped up the removal of online hate speech on their platforms, according to new figures from the European Commission.
The news comes as Google and Facebook's Instagram announced plans to join the European Commission's Code of Conduct, which was established in May 2016 to combat the spread of hate speech online.
According to a review carried out by the European Commission, Facebook, Twitter and YouTube, which all signed up to the code in 2016, have increased their monitoring of such content in Europe.
The figures show the companies have removed 70 per cent of hate speech notified to them by non-governmental organisations (NGOs) and public bodies. This compares to just 28 per cent in the first monitoring round conducted two years ago and 59 per cent in a second review carried out last May.
The review also shows that participating tech firms have met a target of reviewing the majority of notifications they receive within a 24-hour period. The European Commission said companies are reacting to 81 per cent of notifications received within a day, up from 51 per cent in the previous monitoring round.
A total of 2,982 notifications were submitted to IT companies, with 46 cases originating in Ireland. The grounds on which content was notified included xenophobia, sexual orientation and anti-Semitism.
Facebook, which received the most notifications overall, removed 79.8 per cent of the content in question, while YouTube and Twitter removed 75 per cent and 45.7 per cent respectively. Microsoft, which has also signed up to the code of conduct, did not receive any notifications during the period under review.
While Twitter was found to have removed the lowest percentage of content flagged, it reviewed 80.2 per cent of hate speech it was notified about, up from just 39 per cent in the prior period.
Just one in five cases reported to tech companies were also reported by NGOs to police or prosecutors. While this is more than double the number recorded in the last review, it remains relatively low.