The regulation of technology and, in particular, artificial intelligence has become a wedge issue between the US and the EU. How it is resolved has enormous consequences for everyone.
A case in point is the release of the Claude Mythos tool developed by Anthropic, the US AI firm, last week. Claude Mythos is billed by its owners as the most advanced model ever developed to detect cybersecurity risks. It exemplifies the breakneck pace of development in the field.
What is at issue is the level of engagement, or rather the lack of it, with regulatory agencies during the development of Claude Mythos. Appearing before the Oireachtas Communications Committee last Tuesday, representatives of Ireland’s National Cyber Security Centre (NCSC) said the centre had reviewed the technical material published by Anthropic in relation to Claude Mythos. They confirmed that the capabilities described by Anthropic appear to represent a significant change in how hardware and software vulnerabilities are identified and patched.
The experience of the NCSC is mirrored across every EU member state. While all national regulators received a preview of the published technical material, there was no wider engagement.
Anthropic says that is because Claude Mythos is only available to a limited pool of about 40 technology companies, so it did not need to go through the normal regulatory hoops.
This is causing considerable disquiet in the EU. The European Commission published the EU AI Act in 2024 to regulate the technology. It is a comprehensive piece of legislation but its effectiveness has been undermined by the Trump administration.
On a recent visit to Budapest, US vice-president JD Vance again took aim at the European Commission over what he said was its over-intrusive approach to regulating US tech firms.
Unlike the EU, the White House accepts the argument made by US tech firms that they understand the industry best and that anything other than self-regulation will stymie the growth and potential of AI.
Pro-AI groups funded by tech companies have amassed a war chest of $300 million for the midterm elections to campaign against candidates – mostly Democrats – who favour stronger regulation.
Self-regulation has never ended well. In the late 1990s and early 2000s, the financial sector campaigned for a light-touch regulatory regime on the grounds that anything more comprehensive would act as a drag on economic growth. The outcome was the 2008 global financial crisis. Many believe the risks of unregulated AI far outweigh those of an unregulated financial sector. A globally co-ordinated system of checks and controls is essential.