Sam Altman, the chief executive of OpenAI, suggested during a recent European tour that he might have to withdraw ChatGPT from Europe if complying with the EU's proposed new regulations for artificial intelligence became too challenging.
On June 14th the European Parliament adopted its negotiating position on the proposed AI Act. Negotiations have now begun with the Council of the EU, which represents member state governments, and the Act is likely to pass into law across Europe by the end of this year.
The proposed regulations would prohibit AI systems that carry unacceptable risks to public safety or of discrimination, such as classifying people according to their social behaviour. Systems deemed high-risk include those affecting health, rights and the environment, and those influencing the outcome of elections.
Generative systems, such as ChatGPT, would have to disclose that their outputs were computer-generated (so fake images and videos would have to be labelled as such). Full details of the materials used to train each generative system would have to be disclosed, and all copyright protections and applicable legislation duly honoured.
Altman later walked back his position in a tweet at the end of his tour, stating that OpenAI was in fact excited to continue operating in Europe and had no plans to leave.
However, at the end of June an open letter to the European Commission, European Council and European Parliament expressing deep concern about the proposed legislation was signed by 160 senior executives from across Europe, including leadership from Airbus, Dassault Systèmes, Deutsche Telekom, E.ON, Renault and Siemens – though none from Ireland.
The letter notes that highly innovative companies, and investors' risk capital, might move outside Europe to avoid the proposed new regulations. The signatories accept that due care in the development and labelling of AI content should be enforced, but baulk at enshrining this in law, which they regard as bureaucratic and counterproductive. Instead they propose a regulatory body of appropriately qualified experts that would be sufficiently agile to respond to technological change, including in co-operation with the United States.
Three of the less well-known signatories of the letter are Timothée Lacroix, Guillaume Lample and Arthur Mensch, co-founders of Mistral AI, a Parisian start-up. They launched the company in mid-May and just four weeks later had raised an astonishing €105 million in venture capital, valuing it at more than €260 million – one of the fastest and largest fund raises in Europe. First-time founders in their 30s, they have known each other since school days. Mensch was previously at DeepMind (a UK AI company acquired by Google in 2014), while Lacroix and Lample were at Meta's AI research division.
The financing round was led by the Paris office of Lightspeed Venture Partners, a Silicon Valley venture capital firm, with more than a dozen other firms and individuals joining. The French national public investment bank, Bpifrance, also participated strongly, and the French digital minister, Jean-Noël Barrot, publicly congratulated the founders on the raise.
Can Mistral AI take on the even better-funded US leaders in generative AI, including OpenAI, Google, Microsoft and Meta? Some may recall the 2005 joint French and German initiative, Quaero (Latin for “I seek”), to publicly fund a text and multimedia search engine to compete with US dominance. That project ended, unsuccessfully, in 2013. Why could Mistral AI be different?
Perhaps the answer is: because of regulation. Mistral has asserted that it intends to be fully compliant with the imminent EU AI Act, using only publicly available data for its training materials and abiding by all applicable copyright protections. That would certainly differentiate it from its American competitors, whose systems have already been extensively trained with apparently casual regard for the legal protection of content. Mistral could gain competitive advantage through regulation.
Furthermore, Mistral's trained models and data will be made fully open source, allowing others to duplicate them – in contrast to the restrictions imposed by its competitors. Finally, the start-up's commercial focus will be on business rather than consumer applications, including enabling businesses to integrate their own proprietary data sets easily.
It may be surprising, therefore, that the three Mistral co-founders signed the joint letter. In early June, Barrot had publicly voiced similar concerns, not wishing to constrain European companies from competing with the US titans; in May, French president Emmanuel Macron had spoken of the need to synchronise regulation with innovation.
The French leadership cite the success of the European aerospace industry against American competition and appear determined that Europe, and especially France, should develop a counter to American leadership in AI systems. Regulation of digital systems should not only protect society and its norms but, if balanced with innovation, may also play a strategic economic role.