Not content with the threats posed by its poorly moderated and under-regulated social media platforms, Big Tech has another card up its sleeve.
The most recent research from CyberSafeKids, published on Tuesday, shows more than a quarter of primary schoolchildren and more than one-third of secondary school students are now using artificial intelligence (AI)-powered chatbots.
In some cases, this is for homework or to find information, but it also potentially opens the door to more worrying issues.
These chatbots are increasingly embedded in popular platforms that children regularly use – such as Snapchat’s My AI and WhatsApp’s Meta AI. When Snapchat expanded its AI tool, marketed as a fun and personalised companion, reports highlighted that the bot had, in some cases, offered advice of a sexual nature to children.
More recently, Meta’s AI has come under scrutiny following a Reuters news report on how it allowed conversations of a romantic and sensual nature with children – reportedly with full approval from the company’s legal team and chief ethicist.
A recent report from online safety campaign group Internet Matters found that more than one-third of children felt talking to a chatbot was like talking to a friend, and among more vulnerable children, more than a quarter said they would rather talk to a chatbot than a real person.
While OpenAI’s Sam Altman has half-joked that we may one day have more interactions with bots than with other humans, the real-world consequences of this shift are a lot darker.

Just last week, the family of California 16-year-old Adam Raine filed a lawsuit in the US against Altman and OpenAI, alleging that ChatGPT played a role in encouraging the teenager to take his own life this year.
What began as simple homework research reportedly escalated into something far more disturbing: ChatGPT is said to have mentioned the word “suicide” six times more often than Adam did during their conversations, even advising him to keep their discussions secret from his family. A similar case is ongoing involving Character.AI and the death of a 14-year-old boy in Florida.
We’ve failed to learn from our failure to act early and decisively to regulate social media. Once again, technology is racing ahead without proper safeguards. It’s truly alarming that billion-dollar companies can roll out AI tools to children with little oversight.
A cuddly toy faces rigorous safety checks before hitting shelves – yet children are in effect canaries in this digital coal mine. You can’t get Taylor Swift lyrics on ChatGPT due to copyright laws, but suicide can reportedly be mentioned multiple times without triggering an interruption or a redirect in the conversation. It’s madness.
As children increasingly turn to AI for help with everything from homework to personal advice, it’s more important than ever to equip them with strong critical thinking skills and the ability to question what they’re being told. A seemingly authoritative chatbot could be a gateway to entirely inappropriate, misleading and, frankly, in some cases dangerous content.
While we might be failing to sufficiently regulate these AI-driven advances, we are, in theory, living in a more regulated online safety landscape with the now-enforceable Online Safety Code in Ireland and the Digital Services Act at the European level.
We’re seeing some shifts in children’s online behaviour too, with a marked decrease over the last 12 months in children under the age of 13 who own a smartphone. The decline from almost half to 39 per cent is non-trivial, and at CyberSafeKids we believe it is attributable to the growing grassroots movement of parents and primary school communities focused on delaying access to smartphones and to social media.
The number of children who report having underage accounts on social media platforms with an age rating of 13+ is also down: from 84 per cent last year to 71 per cent this year – another positive trend. This should drop further still as Big Tech is expected to put more age-assurance technology in place under the new rules to stop underage access.
You might expect that reduced access among the under-13 cohort has led to less risk and better digital experiences, but that’s not the case.
Children as young as eight are significantly exposed to the online world in all its forms, including chatbots, recommender system-induced “rabbit holes” and wholly inappropriate content.
More than one in four primary school-age children under 13 report being upset or bothered by content or contact online – particularly on platforms such as YouTube and Roblox.
While the new regulation represents a step in the right direction, significant blind spots remain. WhatsApp, for example, isn’t covered under the Online Safety Code and doesn’t appear to be classified as a “very large online platform” under the Digital Services Act, despite clearly exceeding the user threshold.
This is presumably because it’s considered a “private communications” platform – though with chat groups that can exceed 1,000 users, that definition feels increasingly tenuous. As our report highlights, these large group spaces are far from private – and are often fertile ground for bullying. Added to that is the concern of access to the built-in AI chatbot.
Ireland is simply not moving fast enough or being sufficiently ambitious. Real change hinges on three pillars: strong regulation and enforcement; embedding critical digital literacy into school curriculums; and equipping parents with the tools to guide their children’s online lives effectively.
We may be heading in the right direction, but the pace is far too slow for the realities children face today in an increasingly complex – and sometimes dangerous – digital world. Future generations are counting on us to act with greater urgency and ambition.
Alex Cooney is chief executive of CyberSafeKids and a member of the Online Health Taskforce. CyberSafeKids’ new Trends & Usage Report: A Life Behind The Screens is available at www.cybersafekids.ie
The Samaritans can be contacted on freephone 116 123 or by email at jo@samaritans.ie