Some of the biggest names in the development of artificial intelligence (AI) have called for global leaders to work towards mitigating the risk of “extinction” from the technology.
In a short statement, which did not specify what they believe is at risk of extinction, business and academic leaders said the risks from AI should be treated with the same urgency as pandemics or nuclear war.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” they said.
The statement was organised by the Centre for AI Safety, a San Francisco-based non-profit which aims “to reduce societal-scale risks from AI”.
It said the use of AI in warfare could be “extremely harmful” as it could be used to develop new chemical weapons and enhance aerial combat.
The letter was signed by some of the biggest names in the field, including Geoffrey Hinton, who is sometimes nicknamed the “Godfather of AI”.
The signatories also include Sam Altman and Ilya Sutskever, the chief executive and co-founder respectively of ChatGPT-developer OpenAI.
The list also included dozens of academics, senior bosses at companies like Google DeepMind, the co-founder of Skype, and the founders of AI company Anthropic.
AI has entered the global consciousness after several firms released tools that allow users to generate text, images and even computer code simply by describing what they want.
Experts say the technology could take over jobs from humans – but this statement warns of an even deeper concern. – PA