
Getting to grips with regulating AI which no one fully understands

Ireland will be a key player in regulation but setting rules requires unsettling levels of guess-making about how AI technologies might affect us

AI deepfakes pose a threat to upcoming elections. AI can help find the source of cancers in the body. Actors fear AI could start taking their voiceover jobs. Amazon is full of awful AI-written travel guides and nonsense titles. AI can spot bank scammers. The Pope has (again) issued a call for caution and responsibility in the use of AI.

All this AI joy and fear comes from a single day’s news headlines (Tuesday’s). Not forgetting “AI reveals what it thinks typical footy fans look like” (thank you, Daily Telegraph).

I can’t remember any technology that has so thoroughly flooded the daily news cycle with such a broad arc of topicality, from the mundane and humorous to the national and geopolitical. Is it a threat? Is it a boon? Should we be terrified? Should we be buying AI company shares? What a time to be an AI (if only they were sentient, which they aren’t).

If you don’t know quite what to think of the world’s obsession with artificial intelligence, neither does anyone else, including leading AI experts. As the stories indicate, AI will penetrate multiple layers of daily life. It is already upending protective strategies and technologies. Few controls are in place, and those that are – like tools for identifying AI-generated text, images and video – are quickly outmanoeuvred.


AI poses a serious – some believe an existential – global management and regulatory challenge. As this week’s progress update to the Government’s National Artificial Intelligence Strategy notes with understatement, AI will present “many unpredictable challenges”.

At the report’s launch, Minister of State for Digital Regulation Dara Calleary said: “We are actively promoting a robust governance framework to safeguard against risk and ensure public trust in AI.”

The Government wants to devise regulatory pathways for these technologies while also enabling national innovation (ah yes, that vague term beloved of the tech industry when it wishes to evade as much regulation as possible). The best of luck.

We know that even as the AI industry pleads for regulation – witness the statements from Microsoft, the plea to the US Congress from Sam Altman, chief executive of ChatGPT developer OpenAI, or the (supposed) thousands of signatories to that strange Elon Musk letter warning of AI’s threats (Musk then went on to start his own AI company) – there’s regulation, and then there’s Regulation.

Altman, for example, wanted regulation, but perhaps not the actual Regulation announced by the EU in its groundbreaking Artificial Intelligence Act. In late May, when the EU announced strengthened AI regulatory proposals, Altman threatened to pull ChatGPT from Europe before moderating his stance after a little chat with European Commissioner Thierry Breton.

Tech CEOs publicly beg for regulation and policy “guard rails” (another favourite tech term) for obviously risky technologies. But they know that effective regulation can be dodged or delayed because of deep confusion over what these new technologies are and what they do or might do. That confusion exists in significant part because of the fundamental evasiveness of the companies themselves, which fail to offer much insight into their algorithms and implementations, or to release meaningful data to researchers so that impact and risk can be assessed.

Companies worry about EU regulation because it’s more effective than US regulation. The EU has moved firmly towards risk-based regulation and oversight, ramping up regulatory scrutiny and corporate responsibility according to the size and power of companies and technologies. It took the same approach with the General Data Protection Regulation (GDPR) and, more recently, with the Digital Services Act and the Digital Markets Act. Amid so many uncertainties, risk-based regulation is the most balanced, impactful approach.

The US, the other significant regulatory environment, takes the much less effective route of divvying up oversight among federal agencies. Those agencies’ strength depends alarmingly on presidential appointments – tying them to shifting presidential agendas – and on a deeply divided Congress approving nominations.

As a result, Europe is the only global regulator of true impact, and that’s unlikely to change. But with AI, this requires unsettling levels of guess-making about how AI technologies might affect us.

There are additional problems, too. While the EU and the US might try their best to protect their own democracies by legislating for AI at home, doing so may increase risks globally, because neither seriously considers global rather than national threats. In a compelling article in Foreign Policy magazine, Bhaskar Chakravorti, dean of global business at Tufts University’s Fletcher School of Law and Diplomacy, argues that “the aggregate effect amounts to a paradox of regulating disinformation: the more you regulate it in the West, the worse it gets globally”.

Ireland, in its AI deliberations, should think big and take such considerations into account. As with the GDPR, we are likely to have international regulatory influence because so many tech multinationals will be regulated from Ireland.

But we still think far too much about national agendas and political point-scoring, rather than EU or international impacts (witness some GDPR decisions, or Ireland’s navel-gazing debate on internet safety). On AI – which will affect data protection, privacy and online safety – we will need to do much, much better.