With €520bn expected to be invested in AI this year, can it ever really deliver artificial general intelligence?

Impact of AI on productivity is apparent but is still relatively small compared to the considerable investments in the technology to date

The massive investment in computing power and energy required to drive advances in artificial intelligence has yet to show signs of delivering a return. Photograph: Josep Lago/AFP/Getty

“The whole is greater than the sum of its parts” is a quote attributed to Aristotle, from the 4th century BC. Many technologists, such as Sam Altman of OpenAI and Mark Zuckerberg of Meta, believe Aristotle’s perspective is central to the quest for human-level, or even superhuman, artificial general intelligence.

Our current artificial intelligence systems are generally not at human level, but proponents hypothesise that hyper-intelligence will intrinsically emerge from today's systems, needing only the application of sufficient computing resources.

Behaviour can emerge at a large scale even if its properties are not found in the individual parts. Such emergent behaviour occurs in many different domains. Large ensembles of individual birds, ants and bees synchronise their activities.

A stunning visual example appeared on the front page of this newspaper in March 2021: James Crombie's photograph of a flock of starlings over Lough Ennell in Co Westmeath, appearing as if a single large bird. The wetness of water or the hardness of ice cannot be explained from studying single water molecules, but rather are the consequences of interactions across huge numbers of them.

The four nucleotides of genetic DNA – A, C, G and T – enable molecular biologists to study genetic sequences without necessarily having to work through the complex chemical structures of the underlying nucleobases.

In 1776, the economist Adam Smith argued that when individual citizens pursue individual self-interest, then a more stable economy emerges from their myriad reactions to price changes than might otherwise arise from “top-down” planning by a central government. In War and Peace, Leo Tolstoy argued that history emerges from the collective spirit and individual actions of countless ordinary people rather than from the high-level decrees of emperors and kings.

Will artificial general intelligence emerge from today’s machine learning and generative systems if they are expanded to a sufficiently large scale?

Consciousness is a foremost example of emergence in humans. How is it that the intricate mesh of some 86 billion neurons, interconnected by about 100 trillion synapses, in the damp biology of the human brain somehow enables us to become self-aware?


AI technologists wonder how many neurons are required to enable consciousness – if mice each have about 70 million neurons and octopuses about 500 million, is either species conscious? Are elephants sentient, given that they have about three times as many neurons as we do?

At their heart, our current AI engines, such as Alphabet's Gemini, Meta's Llama-4 and OpenAI's GPT-5, produce a response to a query by repeatedly predicting the statistically most likely next word, given the query and the words generated so far. Trained on enormous quantities of data to build their statistical models, they mimic the intricacy of human discourse using pattern recognition.
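The mechanism can be sketched at toy scale. The tiny corpus and the simple bigram counting below are illustrative assumptions, nothing like how Gemini, Llama-4 or GPT-5 are actually built, but the core idea is the same: count which continuations follow which contexts, then emit the statistically most likely next word.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: learn word-following statistics from a
# tiny corpus, then always emit the most likely continuation.
corpus = "the cat sat on the mat . the cat ate".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely word to follow `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Real systems condition on long contexts rather than a single preceding word, and on trillions of words rather than ten, but the prediction step remains a statistical choice over continuations.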

Today's systems do not understand concepts or emotions in the human sense. The emergent behaviour we see elsewhere suggests a hypothesis: if these systems were trained on even more data, would such a human-like understanding then emerge? Perhaps no new architectural breakthrough is required; instead, such a quality might almost magically emerge from the sheer quantity of data.

How much data would be needed? The systems have already each been trained on trillions of words of data, including massive scrapes of the public internet (social media, Wikipedia, government and corporate public reports), email, published academic papers, curated literature archives, code repositories, private collections, and video and audio transcripts. Some sourcing of data has been highly controversial, and it is doubtful that substantially more training data can be found.

How large do the generative models need to become to achieve emergence? The connections between the artificial neurons in today's AI systems are controlled by parameters, which training then optimises. These parameters are conceptually similar to brain synapses, but so far number only about 1 or 2 per cent of the 100 trillion synapses within our own brains.
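The back-of-envelope arithmetic behind that comparison, using the article's figure of 100 trillion synapses and a rough public estimate of one to two trillion parameters for today's largest models (vendors do not disclose exact sizes):

```python
# Scale comparison: frontier model parameters vs human brain synapses.
# The 100 trillion synapse figure is from the article; the 2 trillion
# parameter figure is a rough upper estimate, not a disclosed value.
human_synapses = 100e12      # ~100 trillion synapses
model_parameters = 2e12      # ~2 trillion parameters (rough estimate)

ratio = model_parameters / human_synapses
print(f"{ratio:.0%}")        # prints "2%"
```

By this measure, even the largest current models sit roughly two orders of magnitude short of the brain's synapse count.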

Maybe tens or even hundreds of trillions of parameters are needed before a general artificial intelligence might eventually emerge, and this would of course require a commensurate increase in training data.


As yet, the mathematics of emergent behaviour are poorly understood, whether in AI or any other field. Classical science strives to understand a large entity by studying its component elements, whether these be the molecules of a chemical, the cells of an organism or the atoms of some substance. In contrast, studying complexity investigates how purpose can emerge from an otherwise intricate or even chaotic collection of parts.

However, complexity science is a relatively new discipline, and does not yet have the models and tools to predict the general requirements for emergence, including for artificial general intelligence.

None of this has been cheap. Industry analysts such as IDC estimate that global investment in AI was about €200 billion last year, with some forecasting that it may exceed €520 billion this calendar year, much of it now raised through heavy debt financing.

In contrast, annual global investment in technology at the height of the "dotcom" era, before the 2001 crash, was approximately €90 billion (about €160 billion today, adjusted for inflation). The second World War Manhattan Project to develop the atomic bomb cost about €27 billion in total (in today's prices).

Much of the investment has been into data centres. A McKinsey study earlier this year projected that about 75 per cent of capital expenditure in data centres is now driven by AI workloads. Many are currently loss-making, as depreciation charges on their AI hardware far exceed their income.

Here in Ireland, data centres have received cumulative investment of about €13 billion, including from Alphabet, Amazon, Meta and Microsoft. The centres consumed about 22 per cent of our national electricity in 2024, according to the Central Statistics Office, more than all the power used by our urban homes.

EirGrid's analysis, quoted in the government's 2024 Climate Action Plan, forecasts this growing to 31 per cent by 2027, considerably more than the electricity used by all our housing nationwide.


Three years have now passed since the release of ChatGPT to the world, with substantial resources expended and careers turned upside down. AI technology has improved efficiency in many sectors, particularly among office and white-collar staff.

US labour productivity in the non-agricultural sector has been rising at an annualised rate of about 3 per cent since the tool's release, some 1.5 percentage points above the pre-pandemic trend. The impact of AI is apparent, but it is still relatively small compared with the considerable investments made in it to date.

Perhaps a different approach is needed. The chief scientist at Meta, Yann LeCun, recently resigned, stridently asserting that generative AI has plateaued and is a dead end. His new start-up is instead researching how well machines might learn about the world by analysing video clips, rather than text scraped from across the entire web, hypothesising that artificial general intelligence will then emerge.

The year ahead will be crucial for the AI sector: if anything like €520 billion really has been expended in this year alone, how much more is needed, and can there ever be an acceptable return on the enormous capital invested to date?

Investor scepticism is rising, and the sector must demonstrably progress towards the emergence of artificial general intelligence.