
Social media's data key to keeping young people safe online

Lack of detailed evidence means regulators are in the dark on what is needed

Online safety for young people is in the headlines – again – which is as it should be for as long as we cannot settle on adequate educational, regulatory, or structural approaches to manage genuine concerns.

At European level, parliamentarians are gearing up to consider concrete steps towards better safety for children under the broad Digital Europe programme. And the UK's online safety Bill has just been published, launching debate across the Irish Sea.

Here in Ireland, we're still waiting on concrete proposals under the Online Safety and Media Regulation Bill 2020, which includes the recommendation to create an Online Safety Commissioner.

Progress has been slow, and it’s uncertain what shape such a role would take. It’s important we get it right, balancing real, not imagined, risks against appropriate steps taken to manage them.

The Joint Committee on Tourism, Culture, Arts, Sport and Media published its pre-legislative scrutiny of the Bill last month, with 33 recommendations.

Real and potential harms

Meanwhile, every week seems to bring more stories of real and potential harms, most notably from the use of social media platforms by children and adolescents. Particularly attention-grabbing were leaked documents from Facebook whistleblower Frances Haugen, which indicated that the company's own internal research showed it knew of harms to adolescents using its Instagram platform, but failed to move to address them.

And yet, we still have little real insight, because the data remains internal to Meta/Facebook.

Two weeks ago, a group of more than 250 academics – global experts in psychology, psychiatry, online technology, child development and social data science – signed an open letter to Mark Zuckerberg, founder and chief executive of Meta (formerly Facebook). In it, they argue that the company has failed to respond to its own internal research showing its platforms can negatively affect young people.

They also said Meta’s own internal research is poorly designed and secretive. They called for more collaboration with outside researchers and more transparency.

Prof Andrew Przybylski, director of research at the Oxford Internet Institute, University of Oxford, and one of the letter's principal authors, told me that proper research with open scrutiny of methods and data sets is needed because, right now, public discussion lacks the real insight that better and more transparent studies would provide.

Therefore, “regulators and policymakers have little idea what practical steps need to be taken to make progress on child and adolescent mental health science”. This is a serious problem and we see it play out as battles between, as he says, “powerful corporate interests” and “scientifically illiterate thought leaders”.

That makes it worryingly easy to get a critical role, such as that of, say, an online safety commissioner, very wrong.

Meta provided me with a detailed response to the letter, which I passed on to Prof Przybylski. He said he welcomed the fact that Meta was clearly taking the letter seriously, though he noted with some amusement that one of the publications it cited to show that adolescents are not harmed by its platforms was a paper he had co-authored.

Although it’s titled There Is No Evidence That Associations Between Adolescents’ Digital Technology Engagement and Mental Health Problems Have Increased, the paper’s main points include those stated in the open letter – that researchers had to depend on limited data and need more and better data from the platforms “to more rigorously examine these possibilities”.

I asked him if other platforms were more forthcoming with data. As far as data that would inform research goes, not really, he says.

“Reddit is by far the most open, followed by Twitter. But those platforms are kind of useless for mental health science. You’re not going to learn a hell of a lot about teenage body image or school bullying from the backend of LinkedIn.

“Meta, TikTok, and YouTube [which do not release adequate data] do have data that would work with other efforts to understand mental health though.”

And yet researchers in many fields need access to properly sourced and protected data. Przybylski points to recommendations in the US surgeon general’s recent youth mental health advisory, which notes: “Technology companies must step up and take responsibility for creating a safe digital environment for children and youth” and recommends giving outside researchers access to data to better analyse products and algorithms.

“Next to doing nothing, doing something without good data is the worst of all possible outcomes,” Przybylski says. “It’s absolutely fine for nations and societies to set value-based rules for how they want companies to operate, but let’s not kid ourselves that any of this has a lick to do with science. That’s nonsense.

“If we want evidence-based policy on online harms, we actually need evidence and this needs to be independently analysed. Not behind closed doors at Meta or some regulator’s office. This data belongs to us and we should be able to learn about ourselves, and govern ourselves, taking it into account.”