The head of the Oireachtas committee on artificial intelligence (AI) has called on the Government to fast-track a Bill that would criminalise the harmful misuse of someone’s voice or image.
Fianna Fáil TD Malcolm Byrne introduced the Protection of Voice and Image Bill in April, which would criminalise what are colloquially known as “deepfakes” - when someone’s voice or image is used to generate a false photo or video with AI.
It follows concerns over Elon Musk’s AI tool Grok being used to digitally undress women and children for distribution on his social media channel X.
The strength of laws in Ireland to criminalise AI-generated non-consensual intimate images and child sex abuse images is being examined by the Attorney General.
Mr Byrne said that while generating child sex abuse imagery and sharing intimate images without consent are already criminal offences, this Bill would create a new standalone criminal offence for those who “knowingly exploit another person’s name, image, voice or likeness without consent”, especially when it is done to harm or to deceive.
“The deliberate misuse of someone’s image or voice without their consent for malign purposes should be a criminal offence. This Bill is a useful baseline and we need to move quickly to address this problem,” Mr Byrne said.
Ireland’s child protection rapporteur, Caoilfhionn Gallagher, has said the harms from deepfake sexual abuse for the individuals depicted are “equivalent” to those from authentic images “because for victims the videos feel real”.
Ms Gallagher also questioned whether Ireland’s protections holding social media platforms to account were sufficient.
The special rapporteur on child protection was speaking on RTÉ radio’s Morning Ireland about concerns over the X platform’s Grok AI tool which includes the facility to “nudify” images.
“Given how realistic they are, victims know that they might be perceived as real by others,” she said.
“Generating these images is often part of a pattern of abuse or harassment. And I’m acutely conscious of the horrendous case of Nicole Coco Fox from Clondalkin, who died by suicide due to online abuse. So we know how devastating online abuse of any kind can be. We have to see this in that perspective.”
Ms Gallagher also raised the issue of other nudification apps being trained on vast data sets of mostly female images “because they tend to work most effectively on women’s bodies”.
“As a result, 99 per cent of sexually explicit deepfakes accessible online are estimated to be of women and girls. So this is also a gender-based violence issue.”
Ms Gallagher added that there was concern internationally about whether existing protections were sufficient, because most legal and policy measures focus on the individual users who generate the images rather than on the platforms and products that facilitate their creation.
“To take the platforms themselves, in this case xAI has its own acceptable use policy, which prohibits depicting likenesses of persons in a pornographic manner, but plainly that’s completely insufficient.”
Ms Gallagher said Ireland’s relevant laws, including section five of the Child Trafficking and Pornography Act 1998 and Coco’s Law, were “quite focused on the individual users”.
“The issue here is Ireland’s not alone and internationally there is a concern about the adequacy of the mechanisms for holding the platforms to account. Ultimately, this is in part a product safety issue and about whether the product itself which allows the images to be generated should be illegal or should be regulated more tightly rather than simply the individual users who take action using it,” she said.