Changes introduced to make the online world safer in Ireland and the European Union so far have “not been sufficient” and new harms are continuing to emerge, an Oireachtas committee on artificial intelligence (AI) has heard.
Jeremy Godfrey, executive chairperson at Coimisiún na Meán, said that over the past three years the regulator had “begun to see changes to make the online world safer”, and that a number of investigations had been opened.
“However, the changes have not yet been sufficient and new types of harm, especially harms related to AI, have emerged,” he said.
The committee was examining issues around images, deepfakes and consent, following controversy related to the use of Grok, an AI tool, to generate sexualised images of adults and children.
Under the AI Act – European legislation currently being transposed into Irish law – platforms will be prohibited from deploying AI systems that are manipulative, deceptive or exploitative and that cause users to take actions resulting in serious harm. They will also have transparency obligations related to the labelling of deepfakes.
It was “worth considering whether to supplement these requirements”, for example, by making it “a prohibited practice to deploy AI systems that are capable of producing intimate imagery of real people without their consent, or which are capable of producing child sex abuse material”, Godfrey told the committee.
“It may also be useful to widen the scope of ‘high-risk systems’ under the AI Act to include a wider range of chatbots and generative AI tools,” he said.
The European Commission is conducting an investigation into X over the Grok controversy, and Godfrey was not able to comment further on the matter due to the ongoing investigation.
However, he added that there were additional concerns emerging in this space, including that ‘nudification’ is “not the only way that generative AI might produce illegal content that depicts real people without their consent”.
“For instance, it can be used to produce deepfakes that incite hatred or violence. And it can be used to produce scam ads that include a purported endorsement by a public figure,” he said.
The committee also heard from Des Hogan, chair of the Data Protection Commission (DPC), who said “the sheer pace of AI has taken everyone by surprise over the last couple of years, and is only accelerating”.
Since 2021, the DPC’s supervision function has seen a six-fold increase in AI-related engagements, he said.
By 2025, AI accounted for more than 25 per cent of all DPC controller engagements, and since generative AI’s market emergence in 2023, engagement on generative AI-related products has “dominated”, representing 74 per cent of all AI-related activity.
The DPC’s particular focus is on ensuring that the processing of personal data is done lawfully and transparently, and that the rights of individuals are upheld.
Hogan said AI developers should “mitigate risks and reduce the likelihood of future enquiries” by providing robust, evidence-based documentation, and building products that respect the fundamental rights of European citizens.