We are rushing to a new AI-driven world without vital regulation

Tech advances are nearing a crisis point, with big implications for how we live our day-to-day lives

We’re fast approaching a technological crisis point where we will face a growing range of serious global consequences if moderating action isn’t taken.

We’re now saturated in a range of poorly understood and weakly constrained technologies that have the capacity for serious harm, concentrated primarily in the hands of a few opaque technology giants. For evidence, just look to events of the past few weeks.

The marquee tech problem child here is artificial intelligence (AI), and the ongoing OpenAI saga brought all of its most worrying elements into public view. The company’s board fired its chief executive and co-founder Sam Altman on the basis that he had not been transparent with the board. In the wake of that shock announcement, it appeared the underlying issue was the too-fast commercial development of risky technologies, alongside a push for closer industry ties (Microsoft already being a key partner and investor in the company).

OpenAI was set up by Altman and others to create AI for the common good, with its board and governance structured to keep it non-commercial and to prevent these important, costly-to-develop new technologies from being concentrated in the hands of just a few tech giants. But OpenAI’s advances in the field set off commercial fever dreams across the sector, and the company ended up in an awkward position: a commercially focused arm bolted on to a not-for-profit governance structure.

And yet how did this play out? A Big Tech feeding frenzy, of course. Microsoft immediately offered to hire Altman. Microsoft’s stock, which tanked when Altman was fired from OpenAI, soared when he was offered a Microsoft job. Then Altman was apparently forgiven and returned to OpenAI and, surprise, Microsoft now has a foothold on the board of a company that was never supposed to have commercial tech interests shaping it at board or corporate level. It’s a big win for Big Tech. Business as depressingly usual, then.

All this drama also appears to have placed the major incoming EU AI regulation in jeopardy of being watered down into general meaninglessness. Remember, the EU began considering the issues around AI, and how it should be regulated so as not to allow rampant, barely checked development, years ago, before we were all blinded by the AI light of OpenAI’s ChatGPT.

Now the big countries that were firmly in support of AI regulation – France, Germany and Italy – appear ready to row back on needed controls because AI lobbyists are pushing the notion that the big EU countries will lose out on a development extravaganza and the financial rewards of AI. This is ridiculous, dangerous posturing by the tech sector. We need to slow-walk potentially devastating technologies.

Plenty of AI experts acknowledge that we do not understand how AIs learn, whether that means chess strategies or, say, the development and management of nuclear or military technologies. In computing, if you do not understand how the code works, you cannot effectively constrain it. This is hardly a safe footing for the fast-paced development of technologies that quite conceivably pose a threat to our own long-term survival.

Just as bad is the what-could-possibly-go-wrong idiocy of allowing industry self-regulation. Not in an era when we can plainly see the gross harms arising from, say, social media and other big platform companies, where repeated multimillion-euro fines for regulatory violations, and internal documents leaked by whistleblowers, show that Big Tech cannot even comply with existing regulations and knowingly takes decisions that breach the rules.

A swift round-up of other headline-making events adds yet more evidence that we are fast approaching a crunch point at which meaningful, co-ordinated international efforts must be made to restrain and regulate technology companies, their products and how third parties use them.

On Monday, DNA health and ancestry testing company 23andMe confirmed that people’s sensitive health data was exposed in a major data breach earlier this year, in which personal DNA data on millions of users was leaked and offered for sale on the dark web. Such companies, and other commercial DNA research firms, have insisted DNA data is watertight, securely stored and managed. Yet a hacker accessed the data using a relatively simple technique known as credential stuffing, trying passwords leaked in other companies’ breaches against users’ accounts. While “only” some 14,000 accounts were breached directly, these linked to nearly seven million 23andMe customers, because the service’s relative-matching features connect users’ DNA details to the profiles of millions of others.

Finally, and closer to home: in the wake of the Dublin riots that followed the horrific stabbing of children and a teacher, the Government is rushing to push through a Bill allowing Garda use of a highly controversial form of AI – facial recognition technology (FRT) – with little Dáil discussion and in advance of the EU’s AI regulations for FRT. Using such technologies has broad societal impacts beyond finding some rioting suspects, many of them detrimental to groups already vulnerable to discrimination and prejudiced policing, such as people of colour. And such technologies risk ultimately placing every single citizen on a faceprint database, potentially subject to public FRT tracking in real time.

We need public debate and careful consideration of where poorly considered policies, ignored emerging problems and weakened oversight of incoming technologies could lead us, in a future we cannot yet begin to imagine.