The measures chipmakers are taking to continue using 193nm lithography
We have an industry that relies on using light with a wavelength close to 200nm to draw features as narrow as 30nm on chips.
Conventional optics says this is impossible because the diffraction limit that determines the performance of a lens sets a theoretical resolution of almost 400nm. It's a wonder that anything appears on the surface of the wafer.
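The back-of-envelope arithmetic runs like this. The k1 factor and lens apertures below are illustrative, not figures from any particular scanner:

```python
# Rayleigh criterion: resolution R = k1 * wavelength / NA
wavelength = 193.0   # nm, ArF excimer laser
k1 = 0.61            # classical Rayleigh prefactor
na_early = 0.3       # numerical aperture of an older projection lens

print(f"Classical limit: {k1 * wavelength / na_early:.0f} nm")   # ~392nm

# Every resolution-enhancement trick in this article amounts to driving
# k1 towards its hard floor of 0.25 and NA as high as possible:
print(f"Aggressive limit: {0.25 * wavelength / 1.35:.0f} nm")    # ~36nm
```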
Yet the industry passed the 350nm generation without a hitch in the late 1990s – and that was achieved using the slightly longer wavelength of 248nm, rather than today's 193nm. The 28nm generation is now ramping as manufacturers start to work out what equipment to buy to move to the 22nm node.
The diffraction effects are intense. Today, if you exposed a wafer using a mask that contained the transistor and wiring shapes you actually wanted to appear in the circuit, the result would be some blurry, isolated blobs. But it is possible to make diffraction work for you, instead of against you.
Optical proximity correction (OPC) creates tiny additional features around the basic shapes in an attempt to put the corners back into features that would otherwise be blurred into rounded blobs; the decorated shapes are then 'fractured' into the simple rectangles a mask writer can handle. Interference between the diffracted light rays helps sharpen the edges.
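A toy model makes the trick visible. The sketch below stands in for the scanner's optics with a simple Gaussian blur – real OPC flows use rigorous imaging models and far subtler decorations – but it shows how corner 'serifs' claw back area that diffraction rounds away:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy imaging model: a Gaussian blur stands in for the scanner's
# point-spread function; production OPC uses rigorous optical models.
def printed_image(mask, blur=4.0, threshold=0.5):
    aerial = gaussian_filter(mask.astype(float), sigma=blur)
    return aerial > threshold   # resist exposes where intensity clears dose

mask = np.zeros((100, 100))
mask[30:70, 30:70] = 1.0        # the square we actually want to print

opc_mask = mask.copy()
for y, x in [(30, 30), (30, 69), (69, 30), (69, 69)]:
    opc_mask[y - 3:y + 4, x - 3:x + 4] = 1.0   # serifs pushed onto corners

# The serifed mask recovers corner area the plain square loses to rounding.
print(printed_image(mask).sum(), printed_image(opc_mask).sum())
```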
Some shapes have simply been allowed to go round: the contact holes that connect the layers of a chip's circuitry together used to be drawn as neat squares; they now print as circles.
When the 180nm process was being readied, phase-shift mask techniques were introduced to help improve effective resolution. These use phase cancellation around critical features to provide cleaner edges between exposed and unexposed areas of the wafer. However, OPC has become the dominant technique because, although it is computationally expensive to calculate which shapes should be used where, the masks themselves are easier to make and check.
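The principle of phase cancellation can be caricatured in one dimension, assuming idealised Gaussian amplitudes rather than real lens behaviour:

```python
import numpy as np

# Two adjacent apertures; the second is etched so its light arrives half
# a wavelength late (a pi phase shift). A Gaussian is a stand-in for the
# diffracted amplitude spreading out from each slit.
x = np.linspace(-4, 4, 801)

def amplitude(centre):
    return np.exp(-(x - centre) ** 2)

conventional = np.abs(amplitude(-1) + amplitude(+1)) ** 2   # fields add
phase_shift = np.abs(amplitude(-1) - amplitude(+1)) ** 2    # fields cancel

# Intensity midway between the two features (x = 0):
print(conventional[400], phase_shift[400])
```

With both apertures in phase, the midpoint between the features is bright and they merge; with one shifted by half a wavelength, the fields cancel exactly and a dark line keeps them separate.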
Because modern lithography relies on interference effects, the easiest features to print on a chip are, in effect, diffraction gratings. Chipmakers have exploited this by changing the way they lay out their circuits: traces run in straight lines as much as possible and are spaced at regular intervals. Jogs and short turns are kept to a bare minimum.
In Intel's case, even the contacts connecting transistor electrodes to the wiring became rectangular, rather than square, for the company's 45nm-generation processors: they simply printed better that way, and designers were encouraged to work around the increased gate-to-contact capacitance the change brought.
Even if the design of a layer looks like a diffraction grating, there are still limits to how densely the features can be packed. The optical effects mean it's relatively straightforward to print narrow lines – but the quality of the lines degrades dramatically as they are brought closer together.
This proved useful in the generations between 180nm and 90nm. The gate length of transistors scaled significantly more quickly than the spacing between the devices. This allowed processor makers to increase clock speed with comparatively little effort.
The increase in leakage that came with these ever-faster transistors put an end to the gate-scaling race, and the pace of shrinkage there has slowed dramatically, allowing the packing density of transistors to catch up. The problem now is that it is hard to move source and drain contacts as close together as they will need to be to maintain the 2x density improvement per generation that Moore's Law predicts.
One option is to split the exposure into two, a technique called double patterning. If you can't pack the features together well in one exposure, you simply move every other one to a second mask and expose that separately. Even with double patterning, some shapes are tough to define: features such as vias, which link two different chip layers, turn out to be problematic because of their near-random distribution.
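Underneath, splitting a layer is a graph two-colouring problem: any two features spaced too tightly for a single exposure must land on different masks. A minimal sketch, with made-up positions and pitches, shows both the assignment and why awkward placements can break it:

```python
# Hypothetical single-exposure limit and feature positions, in nanometres.
MIN_PITCH = 80
lines = [0, 40, 80, 120, 200]   # x-positions of parallel metal lines

# Build the conflict graph: an edge joins any two lines spaced too
# tightly to share one exposure.
conflicts = {i: set() for i in range(len(lines))}
for i in range(len(lines)):
    for j in range(i + 1, len(lines)):
        if abs(lines[i] - lines[j]) < MIN_PITCH:
            conflicts[i].add(j)
            conflicts[j].add(i)

# Two-colour the graph: colour 0 goes on the first mask, colour 1 on
# the second.
mask_of = {}
for start in conflicts:
    if start in mask_of:
        continue
    mask_of[start] = 0
    queue = [start]
    while queue:
        node = queue.pop()
        for neighbour in conflicts[node]:
            if neighbour not in mask_of:
                mask_of[neighbour] = 1 - mask_of[node]
                queue.append(neighbour)
            elif mask_of[neighbour] == mask_of[node]:
                raise ValueError("odd conflict cycle: no two-mask split")

print(mask_of)   # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0}
```

An odd-length cycle of conflicts – three vias mutually too close together, say – has no valid two-colouring at all, which is one reason randomly scattered vias cause such trouble.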
Most important of all, double patterning is expensive. It demands two passes through a lithography tool – by far the most expensive item in a semiconductor fab, in a business where cost is dominated by capital depreciation. To maintain a throughput of around 100 wafers/hr – the level many fab operators aim for – you need to double up on these tools, adding significantly to the cost of moving to a new process.
Immersion lithography has bought some time for chipmakers. Replacing the air between the lens and the wafer with water, which has a higher refractive index, improves the effective resolution of the scanner, although it brings its own problems, such as bubbles forming in the liquid that distort the printed image. But it has only bought a little time. Manufacturers are now trying to work out whether it is time to jump to extreme ultraviolet (EUV) lithography – which still has problems with throughput and source power – or wring a little more out of the 193nm UV wavelength in use today.
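The gain from water is easy to quantify. Assuming a refractive index of roughly 1.44 at 193nm and an illustrative lens half-angle:

```python
import math

n_water = 1.44           # refractive index of water at 193nm
theta = math.asin(0.93)  # illustrative maximum half-angle of the lens

print(f"Dry NA:       {1.00 * math.sin(theta):.2f}")     # 0.93
print(f"Immersion NA: {n_water * math.sin(theta):.2f}")  # ~1.34

# Since R = k1 * wavelength / NA, resolution improves by the same ~1.4x.
```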
There is still some mileage left in OPC, some manufacturers and design-tool companies reckon. But this is not an OPC that merely modifies circuit features to nudge lines back into shape. It rethinks the whole process as one in which the wavefront of light coming from the scanner is transformed into an image resolved on the surface of the wafer. That involves radical changes not just to the way the mask is constructed, but also to the way the light itself is generated.
Companies have taken a number of separate, but conceptually similar, approaches that go under the banner of computational lithography. One key technique is source-mask optimisation. Effectively, it is OPC taken to its logical conclusion: using it not just on the mask, but also on the light source.
For a number of years, lithography tools have used subtle lighting tricks, such as off-axis illumination or multiple sources, to improve performance under heavy diffraction. Even today, though, they use only a few simple lighting shapes, such as four arcs arranged around the edge of a ring.
Computational lithography goes much further, using far more complex shapes, often reminiscent of a Rorschach inkblot image, either using custom masks or pixellated light sources to allow arbitrary shapes to be programmed.
Rather than using models to work out which shapes to add to the mask features, the software effectively works backward from the desired image to calculate the best combination of mask and source light shapes. The result is a mask that, on inspection, has almost nothing in common with the final shapes that print and a light source that is known to print those shapes well.
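In spirit, though nothing like in fidelity, the inversion can be sketched as gradient descent over mask pixels. The toy below again uses a Gaussian blur as the imaging model and holds the source fixed; production tools co-optimise source and mask against rigorous optics:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0        # the shape we want on the wafer

mask = target.copy()              # start from the naive mask
for _ in range(200):
    aerial = gaussian_filter(mask, sigma=3.0)   # forward imaging model
    error = aerial - target
    # A Gaussian blur is its own adjoint, so blurring the error gives the
    # gradient of the squared print error with respect to each mask pixel.
    mask = np.clip(mask - 0.5 * gaussian_filter(error, sigma=3.0), 0.0, 1.0)

# The optimised mask no longer looks like the target square, but its
# simulated print matches the target far better than the naive mask's.
naive = np.abs(gaussian_filter(target, sigma=3.0) - target).mean()
tuned = np.abs(gaussian_filter(mask, sigma=3.0) - target).mean()
print(naive, tuned)
```

After a couple of hundred iterations the mask has sprouted fringes and halos that bear little resemblance to the target square – exactly the sort of unrecognisable-but-printable mask described above.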
The light source optimised for a contact mask turns out to be quite different from one optimised for printing metal lines. This can make checking the mask difficult: the only way to test it fully is to simulate what the finished mask will actually print.
One of the key benefits of source-mask optimisation is that it combats one of the biggest problems in lithography today: the terrible depth of field of modern tools. The region in which the image produced by a mask stays in focus is incredibly narrow. Despite the precise tolerances used to make silicon wafers, they can bow enough for one part of the disc to be in focus while another part, centimetres away, receives a fuzzier image.
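The usual rule of thumb, DOF ≈ k2·λ/NA², puts a number on how unforgiving this is (k2 is taken here as an illustrative constant of one):

```python
wavelength, na = 193.0, 1.35   # nm; high-NA immersion scanner
k2 = 1.0                       # illustrative order-one constant

print(f"Depth of focus ≈ {k2 * wavelength / na**2:.0f} nm")  # ~106nm

# Roughly a tenth of a micron: wafer bow or topography beyond that
# across the exposure field pushes part of the image out of focus.
```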
Experiments performed by IBM and Mentor Graphics, which have an agreement to develop source-mask optimisation tools, have shown that pixellated sources can expand the effective depth of field by 30%, demonstrated as a reduction in the roughness of printed lines. As line-edge roughness is a major contributor to process variability, this should mean better consistency between chips from the same wafer.
A significant component of the cost of mask manufacture, already in the millions of dollars per set, lies in testing and repair. Even with conventional OPC, it is hard to work out which features are intentional and which are errors. Evaluating a mask that has no apparent connection to the final image demands advanced software that can simulate how the lighting will interact with the mask shapes.
Computational lithography has a major impact on design rules. As the source and mask are optimised to print certain types of features on a given mask – generally contact holes and vias or interconnect lines – it helps if the design fits with those types of features. OPC has already forced the use of restrictive design rules to reduce the number of 'problem' shapes.
Computational lithography takes that a step further, forbidding combinations that are known to print badly with a certain type of mask. Yet more computation time – already substantial for source-mask optimisation alone – is needed to work out the best design rules for a given combination of mask and source.
Even mask writers are getting involved. For years, manufacturers have wanted square shapes on their chips and design tool companies prefer to work with rectangular features for ease of layout and simulation. The electron-beam tools used to make masks, however, are very good at producing circles.
So tool manufacturers are trying to encourage chipmakers to use circles where they can, because doing so will cut the time needed to write masks and, with it, their cost. If huge amounts of computer time are going to be invested in working out which mask shapes will work, they might as well be devoted to shapes that can be made more readily.
Although the deadline for 22nm is approaching, there are still questions over whether computational lithography will pay off, especially given the cost of the supercomputers that will be needed to implement it. But the other options do not currently look all that attractive. If it simply allows the introduction of expensive EUV equipment to be delayed for even half a generation, that may be enough.