Instigators of the Dublin riots last week may have used the Irish language to evade social media platform rules that ban the spreading of hatred and illegal content, it has emerged.
Large social media platforms have little if any moderation of content in the Irish language, according to data reported by several of them to the European Commission earlier this year.
“In the case of the riots in Dublin we saw that those spreading hatred, illegal and harmful content ‘exploit’ the lack of Gaelic speaking moderators, as a lot of such content was posted in Gaelic,” an EU official said.
When public disorder broke out in the wake of the stabbing of three children and two adults last week, Irish authorities triggered an alert that enlisted the help of European Union officials in appealing to the major online platforms, reminding them of their obligation to remove illegal content under powerful legislation that came into force this year.
It was the first time the provision under the new Digital Services Act (DSA) had been activated by any country. Under the “incident protocol”, Irish authorities were able to enlist the help of the European Commission in contacting the major online platforms and urging them to act.
The lack of Irish-language moderators “is an issue on X and Google”, the EU official said. “Meta instead was much better prepared.”
Facebook and Instagram have 42 Irish-speaking content moderators, according to data submitted to the Commission. There is no moderation of content in the Irish language on YouTube, according to transparency data submitted by Google earlier this year.
The use of the incident protocol “worked” in getting online platforms to act, the official said, and led to three-way discussions between EU officials, Irish authorities, and the tech giants.
“We see that platforms are more responsive when the Commission is part of the discussion,” the official said, adding that this had also been evident in exchanges with platforms prior to recent elections in Slovakia and the Netherlands.
The DSA, which came into effect in August, allows for huge fines on tech giants in cases of flagrant breaches of the law. It obliges platforms to take down content including illegal hate speech, incitement to terror and disinformation that the Commission considers can lead to real-life violence.
Under the law, national regulators such as Ireland’s Coimisiún na Meán must police online platforms to ensure they are complying.
In the wake of the stabbing attack and disorder, Coimisiún na Meán issued a statement to say it had “immediately contacted large online platforms to bring the incident to their attention so that they could respond”.
The regulator expressed concerns about imagery and videos of the incident being shared online, as well as “the use of this incident to incite violence against individuals or groups”.
The online platforms responded and engaged in discussions with the regulator on how they would deal with the incident, according to the statement.
This week Tánaiste Micheál Martin said X had not been as responsive as other platforms in taking down content, and that there was also a challenge with messaging app Telegram as it is based in Dubai and is therefore “beyond our jurisdiction”.
One voice note that went viral, shared in a private messaging group on Telegram, called on people to gather in the city centre at 7pm and to “kill” foreigners.