Grok, the generative artificial intelligence chatbot, was one of the most downloaded apps in Ireland on Wednesday morning.
The Elon Musk-owned large language model became the number one free app on the Apple App Store after more than a week of international coverage of its perverse ability to create intimate images of people without their consent and to generate child sex abuse imagery.
It’s impossible to know if the downloads were being driven by the media coverage or something even more grim.
What is more certain is the apparent lack of consequences for those who use the AI model to create illegal sexual images of women and children, and what appears to be a lacuna in regulation for the platform that is not just hosting but generating the illegal content.
Grok, the AI chatbot that lives on the social network X, is a key component of xAI – the Musk venture which the billionaire has boastfully claimed will “transform” humanity.
Though Musk came comparatively late to the AI boom, he has made audacious claims about his ability to catch up to and overtake competitors through his Colossus project – gigantic data centres based in Memphis, Tennessee.
Musk has claimed that he has been able to acquire huge amounts of graphics processing units (GPUs), the highly sought computer chips which are crucial for training and developing AI models.
The most publicly visible and accessible aspect of Musk’s global AI venture, which recently attracted $20 billion in investment funding, is Grok.
People who use X can ask Grok questions or send commands. Over the last week, the X platform has been flooded with non-consensual intimate images as users figured out that Grok could generate sexual content of women and children.
X has previously claimed to have advanced technology that can detect when users are posting child sex abuse imagery on the platform, even before other users report it. But it appeared lethargic when the illegal images were being created by its own chatbot.
In between posting laughing and fire emojis about the scandal, Musk eventually said that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content”.
The free speech absolutist placed responsibility for the illegal, disturbing images on the individuals who had asked Grok to generate them rather than on the AI model that had complied. Musk’s response bears a striking resemblance to that of the Irish Government, which has also responded to the disturbing trend with a focus on the individual.
When someone uses AI to make a false image or video of you, this is called a “deep fake”. Deep fakes are not, in themselves, illegal. But when these images are sexual and intimate, they become illegal under existing laws.
Like many social media giants, X bases its European headquarters in Dublin. The Taoiseach, Tánaiste and the media regulator Coimisiún na Meán were quick to point out that both non-consensual intimate images (formally known by the outdated term “revenge porn”) and child sex abuse images are already illegal.
The Government and the State agency encouraged anyone who witnessed such illegal content to report it.
This means that a hypothetical Irish victim of AI-generated intimate images would have to navigate the complexity of possible cross-jurisdictional crime, if the person harassing them is based in another country, as well as the significant barrier to justice posed by an abuser using an anonymous account.
Such a case would also rely on pre-existing laws that were not specifically designed with AI in mind.
Last year, Ireland’s Artificial Intelligence Advisory Council recommended that the Government create a new law that would ban the creation of digital “deep fakes” of individuals without their consent.
Fianna Fáil TD Malcolm Byrne, who is also chair of the Oireachtas committee on AI, introduced such a Bill last year and is now calling on the Government to fast-track it.
But even if a new or existing law were to produce a successful case, only the individual who asked Grok for an illegal image would be prosecuted. The creators of the AI technology itself would bear no responsibility in this scenario, and X would not even face a fine.
What is likely to be of greater concern to the public is the ease with which an emergent technology can generate illegal, deeply harmful content and what, if anything, the Government can or will do to hold X to account.
There is an EU law called the Artificial Intelligence (AI) Act which regulates AI across all member states. Platforms that breach the regulations could face fines of up to €35 million or 7 per cent of total worldwide annual turnover, whichever figure is higher.
The regulations do not explicitly ban deep fakes, but say that platforms have to disclose that the content has been “artificially generated or manipulated”. But the Department of Communications confirmed that the provisions to “enable surveillance, penalties and enforcement” under the Act will not come into effect until August.
In short, the emergent threat of intimate and illegal AI images has created a situation where there is a very obvious problem with absolutely no obvious regulatory solution.
















