Sam Altman, in the news for his dizzyingly fast transition from being fired to being reinstated as chief executive of OpenAI, likes to tell people that he shares a birthday with the father of the atomic bomb, J Robert Oppenheimer.
Altman, one of the founders of OpenAI, the company behind ChatGPT, believes that work on artificial intelligence resembles the famous Manhattan Project, which gathered the best minds to beat Germany in the race to produce nuclear weapons.
It would seem to be an unfortunate analogy, but Altman believes that by foreseeing the potential for disaster we can work to avoid it and benefit human beings instead.
The recent debacle demonstrates how unlikely it is that this optimistic vision will prevail. OpenAI started as a non-profit in 2015 but soon ran into funding difficulties.
A for-profit subsidiary was created in 2019 under the oversight of the non-profit board, which was to ensure that “safe artificial intelligence is developed and benefits all of humanity”. That mission was to take precedence over “any obligation to create a profit”.
Loosely speaking, the board had more doomsayers, those who worry that AI has the potential to be dangerous to the extent of wiping out all of humanity, while Altman is more of an accelerationist, who believes that the potential benefits far outweigh the risks.
What happened when the board no longer had faith in Altman because “he was not consistently candid in his communications with the board”? Altman jumped ship to Microsoft, followed by Greg Brockman, another founder, and the majority of OpenAI employees threatened to do likewise. Yes, that Microsoft, which was criticised last year by a group of German data-protection regulators over its compliance with GDPR.
The pressure to reinstate Altman may not have been motivated purely by uncritical adoration: staff and investors knew that firing him meant a potential $86 billion deal to sell employee shares would probably fall through.
The board’s first real attempt to rein Altman in failed miserably, in other words. The new board includes Larry Summers, former US treasury secretary and superstar economist, who has been the subject of a number of recent controversies, including his connection to Jeffrey Epstein. When he was president of Harvard, Summers was forced to apologise for substantially understating “the impact of socialisation and discrimination” on the numbers of women employed in higher education in science and maths; he had suggested that the disparity was mostly down to genetic factors rather than discrimination.
At a recent seminar in Bonnevaux, France, at the headquarters of the World Community of Christian Meditators, former Archbishop of Canterbury Rowan Williams addressed the question of how worried we should be about artificial intelligence. He made a valid point, echoed by people such as Jaron Lanier, the computer scientist and virtual reality pioneer, that artificial intelligence is a misnomer for what we now have. He compared the kind of holistic learning that his two-year-old grandson demonstrates with the high-order data processing of large language models. His grandson is learning to navigate a complex landscape without bumping into things or people too often, and to code and decode messages, including metaphors and song, all in a holistic way where it is difficult to disentangle the strands of what is going on. Unlike AI, his grandson is also capable of wonder.
While Archbishop Williams’s distinction between human learning and machine learning is sound, the problem may not be the ways in which AI does not resemble us, or learn like us. We may need to fear AI most when it mirrors the worst aspects of our humanity – without the leavening influence of our higher qualities.
Take hallucinations, the polite term for when ChatGPT lies to you. This year, as outlined in a Washington Post article, it falsely accused a legal scholar of sexual harassment. (To add insult to injury, it cited a non-existent Washington Post article as evidence of the non-existent harassment.) As yet, no one has succeeded in programming a large language model so that it does not hallucinate, partly for technical reasons and partly because these chatbots scrape enormous amounts of information from the internet and reassemble it in plausible ways. As the early computer scientists used to say: garbage in, garbage out.
Human beings used the internet from the beginning to lie and spread disinformation. Human beings created the large language models that mimic humanity so effectively. We allow them to continue to develop even though OpenAI has not shared, for commercial reasons, how it designed and trained its model.
Talking about regulation, as Altman does with plausible earnestness, is meaningless if we do not understand what we are regulating. Meanwhile, real fears of potential mass destruction are brushed aside.
As cartoonist Walt Kelly had his character, Pogo, say in an Earth Day poster, “We have met the enemy and he is us.” Our inability to cry halt or even pause shows our worst qualities – greed, naive belief in inevitable progress, and the inability to think with future generations in mind. We should perhaps focus less on the terrors of AI, and more on the astonishing hubris of those who have created and unleashed them.