Shrinking feature sizes: Putting off the inevitable
Chris Edwards explores the 'tricks' semiconductor device manufacturers are using to cope with shrinking feature sizes.
At the International Electron Device Meeting (IEDM) held at the end of 1984 – a boom year, driven by the first home computers and gaming systems – Texas Instruments' George Heilmeier predicted a bust. He explained how, in five to ten years, the silicon industry might have to shift gears and get used to being a low-growth industry.
By 1994, Intel was shipping Pentiums in volume and chipmakers were putting their collective foot on the scaling accelerator, taking advantage of advances in lithography to deliver smaller than expected transistors. Heilmeier had predicted that, without a radical change in lithography, minimum feature sizes would stall at 0.3µm to 0.5µm; instead, design improvements smashed through the barrier.
Some 25 years later, senior Intel engineer Kelin Kuhn stood up at IEDM and pointed to Heilmeier and others as Jeremiahs who failed to foresee how an industry with a $300 billion per year market in its sights might find ways around the problems brought on by Moore's Law scaling.
To be fair to Heilmeier, his IEDM paper was far more nuanced in its details. He warned the industry would be stymied if it did not find revolutionary ways around the problems it was running headlong toward, so he projected a migration to devices that use quantum mechanical effects. Those devices remain an option for the future. But, as with optical lithography, the silicon industry has found evolutionary ways to compensate for the difficulties caused by scaling. Quantum mechanics is making a different contribution, at least for the next decade: steering device evolution.
The field effect transistor (FET) has one simple problem when the spacing between the key contacts – gate, source and drain – reduces to distances measured in tens of nanometres. The fields they generate overlap. The effect that now dominates process engineers' concerns is drain induced barrier lowering (DIBL).
In a transistor, the gate usually has full control over a potential barrier formed by the channel below it that joins the source and drain, so when the transistor is off, practically no current passes through the channel. While that was true of older, longer-gate transistors, for the past decade the electric field of the drain has crept closer to that of the source, lowering the potential barrier and allowing electrons to flow.
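For a back of the envelope feel for why this matters, DIBL is usually quoted as a threshold voltage shift per volt of drain bias. Combined with the subthreshold swing, even a modest barrier lowering multiplies off state leakage many times over. The Python sketch below uses illustrative figures, not data from any particular process:

```python
# Illustrative numbers only: how a DIBL-induced threshold shift
# multiplies off-state leakage via the subthreshold swing.

def off_current_increase(dibl_mv_per_v, vdd, swing_mv_per_decade):
    """Factor by which off-current rises when Vds swings from 0 to Vdd."""
    vth_shift_mv = dibl_mv_per_v * vdd            # barrier lowering, in mV
    decades = vth_shift_mv / swing_mv_per_decade  # each decade is a 10x rise
    return 10 ** decades

# A long-channel device (DIBL around 30mV/V) against a short-channel one
# (around 150mV/V), both with a 70mV/decade swing at a 1V supply.
for dibl in (30, 150):
    print(f"DIBL = {dibl}mV/V -> leakage x{off_current_increase(dibl, 1.0, 70):.0f}")
```

A fivefold rise in DIBL costs roughly 50 times more leakage in this toy model, which is why the figure dominates process engineers' concerns.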
To some degree, it is possible to fight DIBL by improving the electrostatic control the gate has over the channel. Traditionally, this has been achieved by thinning the insulator that divides the gate from the channel, providing a stronger electric field. This came at the cost of increased current leakage through the gate.
This factor has encouraged the adoption of high dielectric constant (k) insulators – first by Intel and now by the major foundries – because a higher k makes it possible to use a physically thicker insulator for a given field strength.
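The arithmetic is straightforward. Gate stacks are compared by their equivalent oxide thickness (EOT): a high-k film behaves, capacitively, like a much thinner layer of SiO2. A minimal sketch, assuming an illustrative hafnium style dielectric:

```python
# Equivalent oxide thickness (EOT): a high-k film provides the same gate
# capacitance as a much thinner layer of SiO2 (k = 3.9).

K_SIO2 = 3.9

def eot_nm(physical_thickness_nm, k):
    """SiO2 thickness giving the same gate capacitance as the high-k film."""
    return physical_thickness_nm * K_SIO2 / k

# Illustrative: 3nm of a k=20 hafnium-based film matches the control of
# sub-0.6nm SiO2, while being far too thick for electrons to tunnel through.
print(f"3nm of k=20 dielectric -> EOT = {eot_nm(3.0, 20):.2f} nm")
```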
Ultimately, even high-k dielectrics will run out of steam on a conventional planar transistor. One answer is to provide greater surface area for the gate contact. The finFET architecture moves the channel above the surface of the silicon wafer, allowing the gate to wrap around three of its sides.
A big problem for finFETs is design, particularly when it comes to analogue circuits. The fin has to be made with a specific width. Higher-strength transistors are built by using more fins in parallel, in contrast to conventional FETs, which can simply be made a bit wider. As a result, the finFET's circuit performance is to some extent quantised.
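A short sketch makes the quantisation visible. Assuming illustrative fin dimensions, rather than those of any production process, effective width only comes in whole fin increments:

```python
# FinFET drive strength is quantised: effective width comes in steps of one
# fin (two sidewalls plus the top), unlike a planar FET, whose width is
# continuously adjustable. Dimensions below are illustrative only.

FIN_HEIGHT_NM = 30.0
FIN_WIDTH_NM = 8.0

def effective_width_nm(n_fins):
    """Tri-gate effective width: both sidewalls plus the top of each fin."""
    return n_fins * (2 * FIN_HEIGHT_NM + FIN_WIDTH_NM)

for n in (1, 2, 3):
    print(f"{n} fin(s): W_eff = {effective_width_nm(n):.0f} nm")
# Anything between these steps (say, the 100nm an analogue designer
# might want) is simply not available.
```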
An alternative is to make the channel thinner and to isolate it from the silicon substrate by placing it on top of a layer of oxide. This is the ultra thin body (UTB) silicon on insulator (SOI) approach. As with the finFET, the DIBL is massively reduced in a UTB device.
But it has the handicap of poor carrier mobility at normal operating temperatures. The on state current is often ten times lower than that of conventional cmos transistors. Although off state leakage is much lower, the reduced on-current makes it hard to build fast, complex logic circuits. For decades, on-current has been a key metric of device design, with process engineers using heavier doping and increasing amounts of strain to improve this number.
However, people working in low power design claim that on-current is not necessarily the most appropriate metric. At IEDM last year, PR Chidambaram, technology director at Qualcomm, argued that for many designs, the bigger problem is having to charge up the gate at the beginning of a cycle rather than the current flowing through the device when it is fully switched on. At the SOI Workshop that followed the conference, Thomas Skotnicki, director of advanced devices at STMicroelectronics, argued that drive strength only matters for high performance designs, such as pc processors.
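A crude model shows why the two metrics pull in different directions; the capacitance and current figures below are invented for illustration. Energy per switching event depends on the gate capacitance and supply voltage, while on-current only determines how quickly each transition completes:

```python
# Illustrative figures: switching energy scales with gate capacitance and
# supply voltage squared; on-current only sets the transition delay.

def switching_energy_fj(c_gate_ff, vdd):
    """Energy (fJ) to charge and discharge the gate once: C * V^2."""
    return c_gate_ff * vdd ** 2

def gate_delay_ps(c_load_ff, vdd, i_on_ua):
    """Crude CV/I delay estimate, in picoseconds."""
    return 1000.0 * c_load_ff * vdd / i_on_ua

# Halving the on-current doubles the delay but leaves per-cycle energy
# untouched: a trade a low power design may happily make.
for i_on_ua in (600.0, 300.0):
    print(f"I_on = {i_on_ua:.0f}uA: delay = {gate_delay_ps(2.0, 1.0, i_on_ua):.1f} ps, "
          f"energy = {switching_energy_fj(2.0, 1.0):.1f} fJ")
```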
A study conducted by IBM ten years ago found that low power designs never exercise the transistor's full drive current. Far more important are parasitics such as DIBL.
As a result, Skotnicki argues the best path forward, at least for low power designs, is to move to new device architectures, such as UTB. FinFETs should also see lower DIBL, but TSMC revealed figures for its 20nm finFET process that were higher than expected. They were lower than those of bulk silicon, but above the sub-100mV/V level normally expected of finFETs.
Ultimately, the gate may wrap all the way around the channel. A number of researchers have proposed designs built around silicon-based nanowires that might be built horizontally or, to save space and potentially improve manufacturability, vertically.
New devices, such as finFETs and UTB transistors, have a further advantage over planar devices: they exhibit lower variability because they do not need the high doping concentrations that are now commonplace in conventional cmos transistors. In larger devices, it was reasonable to assume that the dopants would form smooth concentration gradients after implantation and annealing. As devices head toward the 22nm node, the number of dopants in a channel will fall to the point where the placement of any given atom will have a material effect on device performance.
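The arithmetic behind that claim is stark. Assuming an illustrative channel size and doping density:

```python
# Rough estimate of how few dopant atoms sit in a scaled channel.
# Dimensions and doping density are assumed, illustrative values.

def dopants_in_channel(l_nm, w_nm, depth_nm, doping_per_cm3):
    """Expected dopant count in a box-shaped channel region."""
    volume_cm3 = (l_nm * w_nm * depth_nm) * 1e-21  # nm^3 -> cm^3
    return doping_per_cm3 * volume_cm3

# A 22 x 22 x 10nm channel doped at 5e18 atoms/cm^3 holds only about
# 24 atoms, so a single misplaced atom visibly shifts the threshold.
print(f"~{dopants_in_channel(22, 22, 10, 5e18):.0f} dopant atoms")
```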
Changes in device performance manifest themselves as subtle shifts in threshold voltage or leakage. The threshold shift is particularly problematic for sram cells as it reduces the memory array's signal to noise margin. Many of the cells will have good noise performance, but with growing arrays and shrinking static noise margin, there is a growing probability that individual cells will not be able to write or read out data reliably.
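A toy yield model, assuming Gaussian threshold variation and invented figures, shows how quickly array size turns a rare event into a near certainty:

```python
# Toy yield model: even a tiny per-cell failure probability is ruinous once
# multiplied across millions of sram cells. All figures are illustrative.
import math

def cell_fail_prob(noise_margin_mv, sigma_vth_mv):
    """P(threshold shift exceeds the static noise margin), Gaussian model."""
    z = noise_margin_mv / sigma_vth_mv
    return 0.5 * math.erfc(z / math.sqrt(2))

def array_yield(n_cells, p_fail):
    """Probability that every cell in the array works."""
    return (1.0 - p_fail) ** n_cells

# A five-sigma margin sounds comfortable for one cell, yet an 8Mbit array
# built from such cells will, more often than not, contain failures.
p = cell_fail_prob(100.0, 20.0)  # 100mV margin, 20mV sigma
print(f"per-cell failure: {p:.2e}; 8Mbit array yield: {array_yield(8 * 2**20, p):.1%}")
```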
Dopants do not provide the only source of variability. Another key contributor lies in the techniques that made it possible to extend optical lithography beyond Heilmeier's limit to geometries an order of magnitude smaller. The resolution enhancement techniques used tend to exacerbate the roughness of the drawn edges. The fluctuations in width, even though they are small, have an effect on electron flow and lead to an increase in variability.
Then there is deterministic variability: the subtle interplay between each transistor and its local environment. Engineers who design cell libraries and analogue circuits have had to deal with some of these effects for years. However, the environment that affects a transistor in a digital circuit now extends beyond the limit of an individual cell.
Again, lithography has an influence, because the complex interference patterns created by resolution enhancement mean shapes some way from the target can influence how the pattern appears on the wafer during exposure. Furthermore, masking effects by the resist placed on a device during implantation can alter how dopants sit in the channel. The level of doping will change depending on where a transistor lies relative to its neighbours, changing the effective gate length and its threshold voltage, sometimes by tens of millivolts.
As an example, a study by Mentor Graphics found that transistors near the edge of a cell would have gates up to 5nm longer than those at the centre on a 65nm process – a difference of around 10% that made the outer transistors slower to switch, although they would typically also exhibit lower leakage. The answer was to surround critical transistors with dummy pieces of polysilicon. The downside of this technique is that it tends to spread cells out. The alternative is to rework designs so that critical path transistors are buried well inside a cell and away from isolation rings. Those transistors that do not need high switching speed can be placed near the edges of cells.
While more regular design will help with predictability and manufacturability, it will make designs larger than they would be if a wider array of cells were used. As they struggle with variability, chipmakers will also have to wrestle with worsening parasitics, such as increased capacitance between the gate, source and drain contacts as they move ever closer together. And, with smaller contacts, series resistance is getting worse.
As always with the semiconductor industry, whether manufacturers choose to slash variability and improve parasitics by moving to more complex structures, such as finFETs or UTB devices, or to put greater restrictions on design with planar transistors will come down to relative cost. On paper, the more exotic architectures are more expensive to make. But the combination of lower areal density and an increasing number of process tweaks needed to make planar devices work may tip the balance in favour of a change in device structure. Either way, the work going on in all these areas is likely to extend silicon technology for more than a few process generations.