The economics of chip manufacture on advanced technologies
In a speech to the International Electron Devices Meeting (IEDM) in 1975, Gordon Moore set out the path that the semiconductor industry would follow for at least 45 years.
The Moore plot, as it was known before being rebranded as Moore's Law by VLSI design bible co-author Carver Mead, dated back a further ten years. Because it was developed for a much younger industry, it turned out to be too aggressive on timescales and it took 10 years before Moore's Law settled into the pattern that is still followed today – but possibly not for much longer.
From the early 1960s until the mid-1970s, transistor density doubled every year. Moore later explained the background to the 1965 article in Electronics. "We [had] made a few circuits and gotten up to 30 circuits on the most complex chips that were out there in the laboratory. We were working on about 60, and I looked and said, gee, in fact from the days of the original planar transistor, which was 1959, we had about doubled every year the amount of components we could put on a chip. So I blindly extrapolated for about ten years and said OK, in 1975, we'll have about 60,000 components on a chip... I had no idea this was going to be an accurate prediction."
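As a quick back-of-the-envelope check of that extrapolation (a sketch only, using the roughly 60 components and the ten-year horizon taken from the quote above):

```python
# Moore's 1965 extrapolation: roughly 60 components on the most complex
# laboratory chips, doubling every year for the next ten years.
start_components = 60
years = 1975 - 1965
projection = start_components * 2 ** years
print(projection)   # 61440 -- "about 60,000 components on a chip"
```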
However, one of the most important graphs in the original article is often ignored. Moore plotted cost functions for three generations of process: those available in 1962 and 1965, plus a projected cost curve for 1970.
The left-hand side of the curve (see fig 1) reflects the cost of a design being pad- or interconnect-limited. In this case, the perimeter needed to place I/O pads around the die is larger than the area needed for the core logic, effectively leaving empty space in part of the die. A further factor is the proportion of the wafer given over to the scribe lanes used to separate individual chips; this is not significant, except for the smallest chips, which are rarely, if ever, produced on leading-edge processes.
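As a rough sketch of when a design tips into being pad-limited, the minimum die edge set by a peripheral pad ring can be compared with the edge the core logic actually needs; the pad count, pad pitch and core area below are illustrative assumptions rather than figures from the article.

```python
import math

def min_die_edge_mm(pad_count, pad_pitch_um, core_area_mm2):
    """Minimum edge of a square die with a single peripheral pad ring."""
    pad_limited_edge = (pad_count / 4) * pad_pitch_um / 1000.0   # pads shared over four sides
    core_limited_edge = math.sqrt(core_area_mm2)                 # edge needed by the logic alone
    return max(pad_limited_edge, core_limited_edge)

# Illustrative: 400 pads at a 60um pitch around a 25mm2 core.
edge = min_die_edge_mm(400, 60, 25)
print(f"die edge {edge:.1f}mm, die area {edge * edge:.0f}mm2 for 25mm2 of logic")
# 6.0mm edge and 36mm2 of silicon: the pad ring, not the logic, sets the cost.
```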
As flip-chip bonding becomes more common, fewer dice should be pad-limited, which should lead to a flatter curve. However, the effect of design costs needs to be taken into account: it is not worth bearing the additional non-recurring engineering (NRE) costs if the die made on the new process is not much cheaper.
The right-hand side of the curve shows the influence of die size on yield. As die size increases, the probability of a killer defect appearing on one of the many chip layers increases. So, not only do you get fewer large chips from a wafer, but a high proportion of them will also fail to work. While defect densities have fallen over time, allowing the economic die size to increase, the effect was far more dramatic during the first 25 years of Moore's Law than it is now. The sweet spot for die size in terms of density versus random defect density – and therefore yield – has remained stuck in the 100 to 150mm² region for years. You can see this effect clearly in the memory market. High-volume DRAMs are rarely much larger than 60mm² (New Electronics, 28 June). Intel's cheaper processors normally sit in the sub-100mm² region.
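The shape of that right-hand limb can be sketched with a simple Poisson yield model, in which the chance of a die escaping a killer defect falls exponentially with its area. Real fabs use more forgiving models such as Murphy's or the negative binomial, and the wafer cost and defect density below are purely illustrative.

```python
import math

def cost_per_good_die(die_area_mm2, wafer_cost, defects_per_cm2, wafer_diameter_mm=300):
    """Cost per working die under a simple Poisson yield model."""
    # Gross dice per wafer, using the standard correction for edge loss.
    gross = (math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
             - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
    # Poisson yield: probability of zero killer defects landing on the die.
    die_yield = math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)
    return wafer_cost / (gross * die_yield)

# Illustrative: a $5000 wafer with 0.25 defects/cm2.
for area in (50, 100, 150, 300, 600):
    print(f"{area}mm2 die: ${cost_per_good_die(area, 5000, 0.25):.2f} per good die")
# Cost per good die grows far faster than die area once the die passes ~150mm2.
```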
For successive process technologies, the point of minimum cost always shifts to the right. The problem for the industry is maintaining a clear cost advantage for the newer process for more than a few points on the curve. Successive processes have a higher cost-per-wafer because of their higher complexity and the impact of newer equipment being depreciated on the balance sheet. This has become more difficult as the race to make smaller devices has resulted in a dramatic increase in wafer-processing cost.
During the 1980s and 1990s, wafer cost for a given wafer size increased by approximately 12% per generation, or 5.5% per year. This allowed a significant reduction in manufacturing cost for a shrink from one process to the next, so long as the design did not turn out to be pad-limited. While the process itself might be more expensive to deploy, you would still see a payback from the near doubling in transistor density: the fall in cost per function has historically been close to 70% per generation.
Chipmakers are now faced with a problem: wafer cost has increased by 20% per generation on average in the past decade. And the increase is expected to be far greater for the upcoming 20nm process than it has been for any previous generation. In fact, the rise is so large that it threatens the economics of the semiconductor business, if the projections become reality. International Business Strategies has estimated that wafer costs for the 20nm process will be 70% higher than those of the 28nm technology currently ramping up at TSMC and expected to go into production soon at GlobalFoundries. The 14nm process is likely to see an increase of another 60%.
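Taking those figures at face value, and assuming a full shrink still doubles usable transistor density while everything else (yield, design cost) stays equal, the relative cost per transistor after a node change is simply the wafer-cost ratio divided by two:

```python
# Relative cost per transistor after a shrink, assuming density exactly doubles.
def relative_cost_per_transistor(wafer_cost_increase):
    return (1 + wafer_cost_increase) / 2

for label, rise in [("historical, ~12% per node", 0.12),
                    ("past decade, ~20% per node", 0.20),
                    ("28nm to 20nm, ~70%", 0.70)]:
    print(f"{label}: {relative_cost_per_transistor(rise):.2f}x the previous node")
# 0.56x, 0.60x and 0.85x: at a 70% jump in wafer cost, most of the economic
# benefit of moving to the new process evaporates.
```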
The main contributor to this sudden rise in cost is the arrival of double patterning as a mainstream technique for production lithography. This approach, which splits the pattern between two separately exposed masks, has become necessary because it is no longer possible to stretch optical lithography using mask corrections to pack features together more closely. While techniques such as optical proximity correction can make progressively smaller features, they cannot squeeze those shapes any closer together. Double patterning lets chipmakers sidestep the problem, at least for a while, by having a minimum pitch on each mask that is twice the minimum pitch of the on-chip features. When the two exposures are combined, you get features at the target resolution.
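A toy one-dimensional illustration of the pitch-splitting idea is below: alternate minimum-pitch lines go to two different masks, so each mask only has to resolve twice the final pitch. Real decomposition tools work on full two-dimensional layouts and have to resolve colouring conflicts as well; the 64nm pitch is just an example value.

```python
# Toy 1-D pitch split (litho-etch-litho-etch style): alternate lines are
# assigned to mask A and mask B, each of which carries twice the final pitch.
target_pitch_nm = 64
lines = [i * target_pitch_nm for i in range(8)]   # positions of the wanted lines

mask_a = lines[0::2]   # first exposure and etch
mask_b = lines[1::2]   # second exposure and etch

print("mask A pitch:", mask_a[1] - mask_a[0], "nm")    # 128nm -- within optical reach
print("mask B pitch:", mask_b[1] - mask_b[0], "nm")    # 128nm
combined = sorted(mask_a + mask_b)
print("pitch on the wafer:", combined[1] - combined[0], "nm")   # 64nm
```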
Double patterning has already been used in a limited form, but it brings a critical economic drawback. Initially, Intel used the approach on the very lowest layers of its 45nm-generation chips to avoid the problems caused by water immersion, a technique favoured by its competitors. The problem is that you can either halve the number of wafers going through the fab at the stage affected by double patterning or you can buy twice as much equipment. As lithography scanners represent the greatest proportion of a fab's equipment cost, and the depreciation of that equipment is the greatest contributor to manufacturing cost during the first five years of a fab line's operation, having to buy twice the number of scanners is not good news.
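The capacity hit can be put into rough numbers: if a handful of critical layers switch from one exposure to two, the extra passes per wafer translate directly into extra scanners or fewer wafer starts. The layer counts below are illustrative assumptions, not figures from any particular process.

```python
# Illustrative scanner-capacity arithmetic for double patterning.
critical_layers = 10      # layers needing the most advanced scanners (assumed)
double_patterned = 4      # of those, layers needing two exposures (assumed)

passes_before = critical_layers
passes_after = critical_layers + double_patterned
extra_capacity = passes_after / passes_before - 1

print(f"extra scanner passes per wafer: {extra_capacity:.0%}")   # 40%
# With scanner depreciation dominating early fab cost, that 40% has to be met
# either by buying more scanners or by cutting wafer starts.
```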
There is one option chipmakers can use to prevent wafer costs from increasing too quickly: a larger wafer size. Moving from 200mm to 300mm wafers at the beginning of the past decade effectively doubled the number of chips per wafer, but operating costs did not rise by anywhere near that amount. As a result, on a given process, once the technology became mature, a foundry with a 300mm line could easily outcompete one with access only to 200mm equipment. Each die would cost, on average, 30% less than its equivalent from a 200mm line.
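The 'roughly doubled' chip count can be checked with the standard gross-die approximation used in the earlier yield sketch; the 100mm² die size is an illustrative choice, and the 30% cost saving then depends on operating costs rising by much less than the die count.

```python
import math

def gross_dice(wafer_diameter_mm, die_area_mm2):
    """Standard approximation for gross dice per wafer, allowing for edge loss."""
    return (math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

die_area = 100   # mm2, an illustrative mid-range die
for diameter in (200, 300, 450):
    print(f"{diameter}mm wafer: {gross_dice(diameter, die_area):.0f} gross dice")
# Each step up in wafer size gives roughly 2.3x the dice: the 2.25x area ratio
# plus proportionally less silicon lost at the wafer edge.
```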
A model put together by analyst firm IC Knowledge projects a similar cost benefit for 450mm over 300mm emerging towards the end of this decade, around three or four years after the first fabs go into production – assuming the industry can hit an aggressive deadline of having 450mm ready by 2015. This timescale is unlikely to help even the 14nm process, which looks set to present chipmakers with yet another expensive change in lithography.
The situation could have been worse at 20nm, because double patterning works out a lot cheaper than attempting to use extreme ultraviolet (EUV) lithography. Not only is EUV wafer throughput less than 10% of that of an optical scanner, but energy costs would also rise dramatically because of the intense power demands of the EUV light source. Despite these problems, EUV is a distinct possibility beyond 14nm, as the alternative might be triple or even quadruple patterning.
One possibility might be to 'tweak the knobs' used to provide Moore's Law scaling: simple lithographic scaling has not always been the key to increasing density. What is most surprising is the consistency in the rate of growth in density since the mid-1970s, given that all the assumptions Moore used to guide his projection have changed radically. The one he considered to be the least important in 1975 turned out to be the main driver for more than 20 years.
In his 1975 IEDM speech, Moore claimed there were three main contributors to improvements in integration. He saw increases in die size as contributing almost half the growth in transistor count, with reductions in transistor dimensions contributing far less. However, taken together, the two factors were seen as providing two-thirds of the growth needed for what was to become Moore's Law. The remainder lay in the 'contribution of device and circuit cleverness' – architectural changes made to circuits to improve overall density.
Circuit cleverness was a major component in the early days of Moore's Law: engineers took a number of years to learn ways to pack more transistors into smaller and smaller spaces. By 1975, there did not seem, at least to Moore, to be a great deal of cleverness left to find. So, he predicted a slowdown in the rate of growth in chip density.
When it comes to logic chips, it is possible to argue that circuit density actually went into reverse. Moore pointed to the problem in 1975: he knew it would get harder and harder to design chips as they became more complex. The response from the industry was to employ more automation, such as the techniques pioneered by Lynn Conway and Carver Mead. The techniques needed to make that automation possible generally led to less space-efficient circuitry – a brake on Moore's Law. Microprocessors stayed closer to the predicted trend because they used a higher proportion of hand-drawn circuits, compared with chips aimed at products that would ship in lower volumes, such as ASICs.
Unfortunately, the industry now has fewer degrees of freedom left with which it can improve effective density. The last big shift came courtesy of the increase in the number of metal layers during the 1990s. One of the rarely mentioned aspects of chip design – except when it comes to routing-poor architectures such as FPGAs – is just how much space is wasted when implementing logic. Estimates indicate that, on most digital chips, around 30% of the area is 'white space'. Part of the reason for this is that overall density is limited by congestion in the wiring layers. Increasing the number of metal layers helps to alleviate congestion and increase overall density.
As chipmakers moved away from larger die sizes as a route to higher transistor counts, they turned instead to more metal layers to improve device density, a change made possible by the development of chemical-mechanical polishing (CMP) during the early 1990s. Before then, only two or three metal layers were feasible. With planarisation, it became possible to build chips with more than ten layers of metal. The impact on density of using more metal layers has become apparent in images of high-density SRAM shown at successive IEDM conferences in recent years.
Chipmakers have gradually moved from using just polysilicon and the first metal layer for the SRAM cell to using three metal layers. They have also taken the risk of using more connections between layers as they shifted to denser asymmetric memory-cell arrangements. Vias tend to fail more easily than metal lines, which is why manufacturers like to use redundant vias where possible. But SRAM cells cannot afford that option if they are to maintain a high packing density, so any redundancy needs to be implemented at the array level or the manufacturer simply suffers lower overall yield.
Lithography rules now demand that lower-level metal lines have preferred directions – either horizontal or vertical – which limits further design-led reductions in cell size, although a move further into the growing metal stack may provide scope for scaling beyond what is possible with an optical shrink.
With limited room to manoeuvre on lithography or with SRAMs, what the industry cannot afford to do is lose what scaling is achievable to looser logic design. But this is a distinct possibility because of the knock-on effects of double patterning. Exposing two masks on the most sensitive layers of a chip poses clear challenges in terms of alignment and registration. Any misalignment translates into increased variability and loss of performance. While design-tool companies are working on ways to limit the impact, the simplest techniques tend to reduce density – which is exactly what the chipmakers do not want. They are keenly aware that, to justify the migration to 20nm, they need a doubling in density with no compromise.
The demand for density is translating into a higher cost of design in an industry where those costs have already risen significantly. A study for the 2010 International Technology Roadmap for Semiconductors showed chip design costs, as calculated for hardware, would surpass $40m within three years, although software costs could be expected to peak next year at close to $80m and then fall back. Analyst Gary Smith puts the point at which most companies will give up on SoC design at around $60m. Only those guaranteed the highest volumes would venture into the world of chip design – which will limit competition and how far prices will fall.
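The volume cut-off follows from simple amortisation of the design bill over the units shipped. The sketch below uses the roughly $60m figure quoted above; the shipment volumes and the $4 manufactured die cost are illustrative assumptions.

```python
# Amortising a ~$60m SoC design cost over shipped volume.
design_cost = 60e6    # the design-cost threshold quoted above, $
die_cost = 4.0        # assumed manufacturing cost per good die, $

for volume in (1e6, 10e6, 100e6):
    amortised = design_cost / volume
    print(f"{volume/1e6:.0f}M units: ${amortised + die_cost:.2f} per chip "
          f"(${amortised:.2f} of that is design cost)")
# $64.00, $10.00 and $4.60: only products guaranteed very high volumes can
# absorb a design bill of this size.
```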
Chip design can be expected to be expensive, in the absence of a breakthrough, for the rest of the decade. For those that stay in the game, limits on competition may help to deal with a fall in margins created by increased wafer costs – but that is likely to lead to a slowdown in process migration as chipmakers attempt to milk what they can from each generation and use improved manufacturing efficiencies on existing technologies to push costs down.