Karen Hao on AI tech bosses: ‘Many choose not to have children because they don’t think the world is going to be around much longer’

The author of Empire of AI: Inside the Reckless Race for Total Domination discusses the cost of Big Tech’s huge investment in technologies that may do more harm than good

Author Karen Hao's book tells the story of OpenAI and the company's founder Sam Altman. Photograph: Nick Bradshaw

Scarlett Johansson never intended to take on the might of Silicon Valley. But last summer the Hollywood star discovered a ChatGPT model had been developed whose voice – husky, with a hint of vocal fry – bore an uncanny resemblance to the AI assistant voiced by Johansson in the 2013 Spike Jonze movie Her. On the day of the launch, Sam Altman, chief executive of ChatGPT maker OpenAI, posted a one-word comment on X: “her”. Johansson later released a furious statement revealing she had been asked to voice the new assistant but had declined. Soon the model was scrapped. Johansson and a phalanx of lawyers had defeated the tech behemoths.

That skirmish is one among the many related in Karen Hao’s new book Empire of AI: Inside the Reckless Race for Total Domination, a 482-page volume that, in telling the story of San Francisco company OpenAI and its founder, Altman, concerns itself with large and worrying truths. Could AI steal your job, destabilise your mental health and, via its energy-guzzling servers, plunge the environment into catastrophe? Yes to all of the above, and more.


As Hao puts it in the book: “How do we govern artificial intelligence? AI is one of the most consequential technologies of this era. In a little over a decade, it has reformed the backbone of the Internet. It is now on track to rewire a great many other critical functions in society, from healthcare to education, from law to finance, from journalism to government. The future of AI – the shape this technology takes – is inextricably tied to our future.”

It’s a rainy day in Dublin when I travel to Dalkey to meet Hao, a Hong Kong-dwelling, New Jersey-raised journalist who has become a thorn in Altman’s side. Educated at MIT, she writes for the Atlantic and leads the Pulitzer Centre AI Spotlight series, a programme that trains journalists in covering AI matters. Among families grabbing a bite to eat in a local hotel, the boisterous kids running around tables in the lobby and tourists checking in and out, Hao, neat and professional in a cream blazer with her hair tied back, radiates an air of calm authority.

“AI is such an urgent story,” she says. “The pursuit of AI becomes dangerous as an idea because it’s eroding people’s data privacy. It’s eroding people’s fundamental rights. It’s exploiting labour, but it’s humans that are doing that, in the name of AI.”

Whether you’re in Dublin or San Diego, AI is hurtling into our lives. ChatGPT has 400 million weekly users. You can’t go on to WhatsApp, Google or Meta without encountering an AI bot. A recent UK survey by Internet Matters found that 12 per cent of kids and teens use chatbots to offset feelings of loneliness. Secondary school students are changing their CAO forms to give themselves the best chance on a career ladder that AI has broken.

The impact of AI on the environment is extraordinary. Just one ChatGPT query about something as simple as the weather consumes vast amounts of energy, about 10 times more than a Google search. Or, as Des Traynor of Intercom put it at the Dalkey Book Festival recently, it’s like using a “massive diesel generator to power a calculator”.

It’s far from the utopian ideal of a medical solutions-focused, climate-improving enterprise that was first trumpeted to Hao when she began investigating OpenAI and Altman in 2019.

As a 20-something reporter at MIT Technology Review covering artificial intelligence, Hao became intrigued by the company. Founded as a non-profit, OpenAI claimed not to chase commercialisation. Even its revamp into a partially for-profit model didn’t alter its mission statement: to safely build artificial intelligence for the benefit of humanity. And to be open and transparent while doing it.

But when Hao arrived at the plush headquarters on San Francisco’s 18th and Folsom Streets, all exposed wood beam ceilings and comfy couches, she noticed that nobody seemed to be allowed to talk to her casually. Her photograph had been sent to security. She couldn’t even eat lunch in the canteen with the employees. “They were really secretive, even though they kept saying they were transparent,” Hao says. “Later on, I started sourcing my own interviews. People started telling me: this is the most secretive organisation I’ve ever worked for.”

Karen Hao in Dublin during the Dalkey book festival. Photograph: Nick Bradshaw

The meetings Hao had with OpenAI executives did not impress her. “In the first meeting, they could not articulate what the mission was. I was like, well, this organisation has consistently been positioning itself as anti-Silicon Valley. But this feels exactly like Silicon Valley, where men are thrown boatloads of money when they don’t yet have a clear idea of what they’re even doing.”

Simple questions appeared to wrong-foot the executives. They spoke about AGI (artificial general intelligence), the theoretical notion that silicon chips could one day give rise to a human-like consciousness. AGI would help solve complex problems in medicine and climate change, they enthused. But how would they achieve this and how would AGI technology be successfully distributed? They hedged. “Fire is another example,” Hao was told. “It’s also got some real drawbacks to it.”

Since that time, AGI has not been developed, but billions have been pumped into large language models such as ChatGPT, which can perform tasks such as question answering and translation. Built by consuming vast amounts of often garbage data from the bottom drawer of the Internet, AI chatbots are frequently unreliable. An AI assistant might give you the right answer. Or it might, as Elon Musk’s AI bot Grok did recently, praise Adolf Hitler and cast doubt on people with Jewish surnames. “Quality information and misinformation are being mixed together constantly,” Hao says, “and no one can tell any more what are the sources of truth.”

It didn’t have to be this way. “Before ChatGPT and before OpenAI took the scaling approach, the original trend in AI research was towards tiny AI models and small data sets,” Hao says. “The idea was that you could have really powerful AI systems with highly curated data sets that were only a couple of hundred images or data points. But the key was you needed to do the curation on the way in. When it’s the other way around, you’re culling the gunk and toxicity and that becomes content moderation.”

In one particularly moving section of Hao’s book, she journeys to poorer countries to look at how the people who work on the content moderation side of OpenAI cope day-to-day. Meagre incomes, job instability and exposure to hate speech, child sex abuse and rape fantasies online are just some of the realities contractors face. In Kenya, one worker’s sanity became so frayed that his wife and daughter left him. When he told Hao his story, the author says she felt like she’d been punched in the gut. “I went back to my hotel, and I cried because I was like, this is tearing people’s families apart.”

Hao nearly didn’t get her book out. She had thought she would have some collaboration with Altman and OpenAI, but the participation didn’t happen. “I was devastated,” she admits. “Fortunately I had a lot of amazing people in my life who were like, ‘Are you going to let them win or are you going to continue being the excellent journalist you know you can be, and report it without them?’”

Understanding companies such as OpenAI is becoming more important for everyone. In recent weeks, Meta, Microsoft, Amazon and Alphabet, Google’s parent company, delivered their quarterly public financial reports, disclosing that their year-to-date capital expenditure ran into tens of billions, much of it required for the creation and maintenance of data centres to power AI’s services.

In Ireland, there are more than 80 data centres, gobbling up 50 per cent of the electricity in the Dublin region, and hoovering up more than 20 per cent nationally, as they work to process and distribute huge quantities of digital information.


Hao believes governments must force tech companies to have more transparency in relation to the energy their data centres consume. “If you’re going to build data centres, you have to report to the public what the actual energy consumed is, how much water is actually used. That enables the public and the government to decide if this is a trade-off worth continuing. And they need to invest more in independent institutions for cultivating AI expertise.”

While governments have to play their part, it’s difficult, reading the book, not to find yourself asking a simple question: why aren’t tech bosses themselves concerned about what they’re doing?

Tech behemoths may be making billions – AI researchers are negotiating pay packages of $250 million from companies such as Meta – but surely they’ve spared a thought for their children’s future? And their children’s children? Wouldn’t they prefer them to live in a world that still has flowers and polar bears and untainted water?


“What’s interesting is many of them choose not to have children because they don’t think the world is going to be around much longer,” Hao says. “With some people in more extreme parts of the community, their idea of Utopia is all humans eventually going away and being superseded by this superior intelligence. They see this as a natural force of evolution.”

“It’s like a very intense version of utilitarianism,” she adds. “You’d maximise morality in the world if you created superior intelligences that are more moral than us, and then they inherited our Earth.”

To offer a more positive outlook, many in the AI community would say that the work they are doing will deliver solutions that benefit the planet. AI has the potential to accelerate scientific discovery: its possibilities are exciting because they could be paradigm-shifting.

Is that enough to justify the actions being taken? Not according to Hao. “The problem is: we don’t have time to continue destroying our planet with the hope that one day maybe all of it will be solved by this thing that we’re creating,” she says. “They’re taking real world harm today and offsetting it with a possible future tomorrow. That possible future could go in the opposite direction.”

“They can make these trade-offs because they’re the ones that are going to be fine. They’re the ones with the wealth to build the bunkers. If climate change comes, they have everything ready.”

Empire of AI: Inside the Reckless Race for Total Domination by Karen Hao is published by Allen Lane