For once, Rishi Sunak enjoyed some positive media coverage last week following the high-profile international conference he convened at the historically evocative location of Bletchley Park, home to the celebrated Enigma codebreakers during the second World War. The Bletchley Declaration signed there by almost 30 countries including the US and China, along with the EU, was a more robust statement of principles than many had expected on the challenges posed by current developments in generative artificial intelligence (AI).
These challenges range from the urgent to the existential. They include concerns about the use of artificially generated material to mislead voters in upcoming elections, as well as the potential loss of millions of jobs as AI takes on tasks previously performed by humans. But they also encompass warnings from experts in the field that unleashing the extraordinary power of AI without adequate safeguards could pose a threat to the future of humankind. This has led to calls for a pause in any further development of AI pending the introduction of appropriate regulation. Such grim warnings must be taken seriously, even as the more immediate concerns are far from trivial.
None of this is to gainsay AI’s potential to drive remarkable improvements in science, medicine and other fields, thereby enhancing human welfare and wellbeing. Silicon Valley advocates such as Elon Musk and Marc Andreessen have been keen to promote a vision of a tech-driven utopia in which, for example, nobody would have to work unless they wanted to. History has taught us to be wary of billionaires bearing such gifts.
The declaration does acknowledge the “enormous global opportunities” offered by AI and calls for it to be developed in a way that is “human-centric, trustworthy and responsible”. It proposes an international network of scientific research on safety issues and recognises the potential for “serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”.
AI technology is advancing at a dizzying speed, with legislators struggling to catch up. In the US, where the most cutting-edge development is taking place, the White House issued an executive order last week requiring developers of systems that pose risks to national security, the economy, public health or safety to share the results of safety tests with the US government before releasing those systems to the public. That is a welcome first step, although some critics believe big tech companies such as Google and Microsoft are over-emphasising safety issues in order to preserve their current competitive advantage over new entrants. The financial rewards of AI are so high, and the geopolitical stakes so great, that it is right to be sceptical of the motives of all concerned. But international co-operation is essential nonetheless.