Breaking Moore’s Law – silicon reaches the bounds of possibility
Silicon chips are expected to hit peak performance by the end of the decade
It’s been happening for so long now that we have come to expect it: each year, computing and other electronic devices shrink in size and drop in price, while growing ever more powerful.
But if you are expecting this embodiment of technology’s most famous dictum – Moore’s Law – to continue forever, get ready to think again, says Robert Colwell, director of the Microsystems Technology Office at the Defense Advanced Research Projects Agency (Darpa) and Intel’s former chief architect. In a recent speech at the Hot Chips conference at Stanford University in California, Colwell argued that Moore’s Law, the engine that has driven the digital age, will reach the end of its life in less than a decade.
Formulated by Intel co-founder Gordon Moore in the 1960s, the law states that the number of transistors that will fit on an integrated circuit (a computer chip, or processor) will double about every two years.
The prediction was so accurate that it has been used as a roadmap for chip innovation for decades.
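The doubling the law describes is simple compounding arithmetic. As a minimal sketch (not from the article), the projection below assumes a strict two-year doubling starting from Intel’s first microprocessor, the 4004 of 1971, which held roughly 2,300 transistors; real chips only ever tracked this idealised curve approximately.

```python
# Illustrative sketch of Moore's Law: transistor counts doubling every two years.
# Baseline assumption: Intel 4004 (1971), roughly 2,300 transistors.
def projected_transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count under a strict two-year doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{projected_transistors(year):,.0f}")
```

Run over four decades, the projection climbs from thousands of transistors to billions, which is why Moore’s observation about cost per function translated into such dramatic growth.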
“He foresaw what is really a cost-based argument. If you can reduce the size of the transistor, you can get more functionality. That’s a function per cost argument. And that gives you incredible growth,” says Prof Jim Greer, head of graduate studies at Cork’s Tyndall Institute.
According to Colwell, twin problems will bring about the demise of Moore’s Law: a range of technological impasses that will limit how much more computing power can be packed on to a circuit and how much smaller it can become, and the exorbitant costs of creating chips using some possible alternative technologies.
Chipmaking is already an expensive business. The fabrication plants, or fabs, that carry out this complex process, such as Intel’s in Leixlip, cost billions. A new fab under construction in New York state is predicted to cost about $6 billion. But Moore’s Law has made chip production – even with falling costs to the end-user – a lucrative business.
The end of Moore’s Law has been predicted before, notes Greer, but new technological processes have always come to the rescue. Processes that were used in the 1980s were becoming obsolete by the mid-1990s, he says, but new materials and slightly different structures for transistors injected new life into the law.
Some limits with current chips have already been reached. Clock speed – the frequency at which a processor performs calculations – “has plateaued”, Greer says, remaining the same for some time now.
To squeeze out greater performance, chipmakers instead place several processors, or “cores”, on a single chip – first two, then four. But adding more cores to a chip is running into physical barriers, making it ever harder to uphold Moore’s Law by shrinking processors while increasing performance at lower cost. “The reason why this trend is inevitably going to end is rooted in the atomic structure of materials like silicon,” says Dr Jiri Vala, SFI research fellow at NUI Maynooth and an expert in quantum computation.