The great innovation debate: has the revolution stalled?
Growth in productivity and output has proved disappointing while society is increasingly divided
Less than two decades into the new millennium, humanity is building new technologies with seemingly limitless practical applications. Yet in The Rise and Fall of American Growth: The US Standard of Living Since the Civil War, Robert Gordon argues that the digital revolution, though important, is comparatively limited in its transformative potential. Decades of advance in information technology have not generated anything like the soaring growth in output per person, adjusted for inflation, that industrialised countries enjoyed in the mid-20th century.
Life expectancy is not growing as fast as it once did. The visions of a wildly prosperous future imagined in the 1950s and 1960s, based on robotics and rocketry and powerful computing, have failed to materialise. Gordon has only to point to the world around him to support his argument. The wonders of the digital revolution have grown in their power and capabilities, but the pay for many workers, adjusted for inflation, has grown at nothing like the rates observed a half-century ago. The societies of the rich world are angry and frustrated rather than satisfied and optimistic.
Productivity pessimists have data on their side. Economists point to growth in productivity, or the amount of output produced with a given set of land, labour and capital, as the key to long-run growth in incomes and living standards. Productivity growth roared ahead across the rich world in the decades immediately after the Second World War, but it decelerated sharply in the 1970s.
In the late 1990s productivity growth rebounded, in the US especially, and many economists hailed the arrival, at long last, of the dividend from information technology. Yet by the mid-2000s the boomlet had petered out again, and no recovery is in sight.
That, Gordon reckons, is all there was. The growth spurt of the tech boom represented the capture of gains from digitisation, personal computing and the internet, he says. As snazzy as more recent advances in those technologies have been, they are insufficient to raise productivity growth.
Mobile technology and social networks do not much change humanity’s ability to produce more with less; we were promised flying cars, to paraphrase Peter Thiel, a venture capitalist, but wound up instead with social networks.
Driverless cars are not productivity boosters, since people can only be so productive when they are sitting in a car, whether or not they are driving.
Meanwhile, pessimists point out, the pace of innovation in computing, which helped sustain improvement in existing technologies, is slowing. Even Moore's law, the observation that the number of transistors on a chip doubles once every two years or so, is now running out of steam.
All things considered, Gordon argues, the outlook for a technology-driven renaissance in the decades ahead seems bleak. Are the prospects really so grim?
Gordon is somewhat unfair to the digital revolution. He rightly credits big innovations such as electrification and automobiles with the long boom in output per person that the rich world enjoyed from the late 19th to the mid-20th century. He does not spend much time discussing a crucial point, however: that realising the potential of these innovations took a very long time indeed. The scientists experimenting with electricity had made many of the fundamental breakthroughs by 1890. Yet productivity-boosting applications did not immediately arrive en masse.
Telegraphy appeared quite early on, for instance, but the broad electrification of homes and factories, and the productivity gains that followed, were not achieved until much later.
Chad Syverson of the University of Chicago compares data on labour-productivity growth during the electrification age with the record so far in the IT era. The pattern is remarkably similar.
The delay between the arrival of a technology and the full exploitation of its potential is mostly accounted for by the time needed to discover how best to use the new innovation and to rearrange the world accordingly. There were horseless carriages rattling around in the late 19th century, but it was a long time later that cars materially lifted growth. First, manufacturers had to figure out how to reduce their cost, governments had to amend regulations and invest in new forms of infrastructure, and firms had to experiment with new business models built around the automobile.
These dynamics mean that productivity growth is always a reflection of the technological development that came before: between five and 15 years before on average, suggest economists Susanto Basu, of Boston College, and John Fernald, of the Federal Reserve Bank of San Francisco, but occasionally more.
Some pessimism is justified, though not of the sort favoured by Gordon and his peers. If it is difficult to imagine exactly how an AI-controlled house or car will change our lives in future, it is far easier to imagine the trouble society will have adjusting to the change. Promising technologies such as driverless cars and drones are already running headlong into regulatory thickets. This will slow the spread of new inventions and dampen their effect on the economy.
The trickiest adjustment of all, however, will be the management of the effect of these new technologies on labour markets and on the pay earned by workers. Indeed, it is possible that labour-market troubles are already having a serious detrimental effect on the use of new technologies and on productivity growth. Over the past few decades, wage growth for most workers in most rich countries has lagged growth in the economy as a whole. At the same time, low levels of unemployment seem to be less effective at generating upward pressure on pay than was the case in the past.
The ways in which businesses and whole economies adjust to the technological changes ahead will almost certainly present one of the biggest social and political challenges between now and 2050.
Driverless cars and trucks could quickly eliminate tens of millions of jobs around the rich world. Clever AI systems could displace tens of millions more, beginning with customer-service representatives and office assistants, then moving on to education and medicine, finance and accounting. Though some people should benefit fantastically from these innovations – because they own shares in profitable companies or possess skills that are complementary to the new machine brains – many more will find themselves threatened with displacement, forced to compete with many others for available work or to accept low pay as the price of retaining a job.
This pattern will leave people and economies on the whole poorer than they ought to be. Unfortunately, there are no easy solutions. Governments might begin to pay larger wage subsidies to workers, or even introduce unconditional basic-income payments to all citizens. But such payments will be expensive and will require heavy taxes on those getting rich from new technologies. Even if those picking up the bill consent, society as a whole might struggle to adapt to a world in which working is optional.
Governments might instead provide make-work jobs to the underemployed, but that would be expensive and wasteful. Or societies might simply become far more unequal than they are now, as technology creates a mass underclass of subservient service labour.
There is a precedent for this kind of difficult social adjustment. Early in the industrial era, the explosive growth of factory employment outstripped society’s capacity to cope. Workers flooded into slums in cities without the infrastructure needed to provide clean water, or decent housing, or management of refuse and waste. The horrible living conditions that this brought about killed millions of workers. Those who survived earned meagre wages.
Loss of work meant the risk of fatal poverty. Only after years of labour organising, social unrest, political reform and, in some cases, revolution did social institutions evolve in ways that facilitated the broad sharing of the gains from growth. These changes, which allowed workers to live longer, healthier lives, to get more education, and to save and invest more, also raised the capacity of the economy to grow by using new technologies.
Part of the reason growth in productivity and output has proved disappointing so far is the collision of new digital technologies with 19th- and 20th-century social institutions. In the absence of new reforms and investments, economies will continue to operate with vast reservoirs of underemployed less-skilled workers. These workers will hold down wages and discourage the use of clever new robots and thinking machines. If, in coming decades, society finds ways to allow workers to be more choosy in seeking where they work and how long to spend on the job, then firms might have an incentive to make better use of both technology and human labour. This could bring back the productivity growth of the good old decades of the 20th century, and make life far better for everyone.
Ryan Avent is a senior editor and economics columnist for The Economist. This is an edited extract from a chapter of Megatech: Technology in 2050, edited by Daniel Franklin. Other contributors include Nobel prize-winner Frank Wilczek, Silicon Valley venture-capitalist Ann Winblad, philanthropist Melinda Gates, science-fiction author Alastair Reynolds and Luciano Floridi of the Oxford Internet Institute.