
Karlin Lillington: Online restrictions must be based on sound evidence, not emotional argument

Despite a lack of clear research on harm to children from social media, there can be a rush towards legal curbs

Ireland’s new Online Safety Commissioner, Niamh Hodnett, faces a weighty task this year. She’ll be establishing the regulatory framework for enforcing Ireland’s Online Safety and Media Regulation Act 2022. According to the Government, one intent of the Act is “to hold designated online services to account for how they tackle the availability of some of the most serious forms of harmful online content”.

For Hodnett, this will include determining the parameters of a binding Online Safety Code. Still to be decided is how the Act and the code will balance rights against protections. The new commission has to clarify boundaries between allowable speech and expression and hate speech, abuse, and harmful activity. This is daunting, as there’s no agreed global definition for free speech, much less online harm.

The US offers the strongest protections for free speech, a right enshrined in the US constitution. But even there, those protections are neither clear-cut nor absolute; they remain bitterly debated and are adjudicated all the way to the US Supreme Court. EU free speech rights and protections are less definitive, even though people here often hazily assume, wrongly, that they have the same free speech safeguards granted to Americans.

In an attempt to better control some of the documented harm caused by social media, the EU will impose some limits on speech through its Digital Services Act, which Ireland pre-empted by signing its own Bill into law in December. The Irish Act will now need to align with the DSA, which itself still lacks detail. Digital and civil rights advocates, including the Irish Council for Civil Liberties, have consistently cautioned against imposing restrictions that stifle important speech rights.


Authorities in the US and EU have often aimed to soften public opposition to restrictive laws by using the most emotive argument: a need to better protect children. And true to form, in announcing the Irish Act’s signing in December, Catherine Martin, Minister for Tourism, Culture, Arts, Gaeltacht, Sport and Media, issued a statement that said the new Act “lays the foundations for the new regulatory frontier of online safety which will be of great importance to protecting children online”.

And yet, despite a broad global assumption that children are harmed, generally, by social media, this is not a clearly established fact. This might seem strange and counterintuitive, when individual stories of online abuse are commonplace, and some children and teens have been tragically affected by such abuse, even to the point of taking their own lives.

But years of overview studies – which weigh together multiple individual studies – repeatedly find individual studies are inconclusive and poorly designed, with results too weak to be taken as evidence of harm. Individual cases also don’t prove general harm.

Quoting from several major studies, a New York Times article stated recently: “Reviews of the existing studies on social media use and adolescents’ mental health have found the bulk of them to be ‘weak,’ ‘inconsistent,’ ‘inconclusive,’ ‘a bag of mixed findings’ and ‘weighed down by a lack of quality’ and ‘conflicting evidence’.” The piece also states: “It’s also hard to prove that social media causes poor mental health, versus being correlated with it.”

But social media companies operate in a secretive, self-beneficial, largely unrestrained way. We absolutely do need better online rights and protections – especially for children. And the EU-wide approach of imposing greater liability and responsibility on platforms is a ground-breaking step, possible here because platforms cannot fall back on broad constitutional free speech rights or on the speech liability shield of the “Section 230” provision of US federal law.

Nonetheless, several EU countries, including Ireland, and the UK in its Online Safety Bill, have indicated they may opt for too-specific, speech-rights-suppressive approaches to this fraught issue, with “protecting children” used as an emotive wedge to push them forward.

But we need far better understanding of online harms, especially for children and teenagers. This requires more algorithmic transparency and greater research access to meaningful platform data. Both of those provisions are included in the DSA, though details have to be worked through, and platforms will fight such access every inch of the way.

Unfortunately, we could have done much more, far sooner. Global lawmakers did little when arguments were made for years for these forms of platform transparency and responsibility. As a result, the regulatory cart is now being placed before the horse, in a way that could have damaging ramifications for important speech rights. Due to their own laggard response to the serious societal threats posed by powerful platforms, lawmakers are scrambling to impose restrictions on the basis of emotional argument rather than sound evidence. Poorly designed restrictions may do little to protect children, as many studies show important benefits to children in using online media.

The danger is that rapidly imposed laws may ultimately be designed to suit individual politicians, lobby groups or constituencies, rather than society – or children – as a whole. We can only hope Hodnett treads carefully as she creates Ireland’s new online codes, and leaves them flexible enough to reflect more adequate online harm research in future.