As process technologies shrink, the problems for analogue designers grow in complexity
Low-voltage supply rails and strict design rules are changing the way designers need to think about circuit design. They are sweeping away old assumptions: it is a topsy-turvy world in which iterative topologies can be fast and in which analogue circuits can be implemented using digital synthesis. Because of the complexity of the most advanced processes, some level of automated synthesis has become almost essential for analogue design.
The new crop of finFET processes shows how difficult traditional approaches to analogue design have become. Initially, concern focused on the quantised width of finFET transistors: it is no longer possible to scale a transistor to an arbitrary width to provide the required current drive for a given device. Designers have to pick an integral number of fins.
But, as analogue circuitry is often less dense than digital logic, this has turned out to be less of a problem than expected. Instead, SRAM designers have borne the brunt of fin quantisation, because they need to work with transistors that are as small as possible to achieve the desired density, in an environment where choosing between a one-fin and a two-fin device results in a massive difference in drive current.
In practice, the problems for analogue designers revolve around a massive increase in the physical effects the layout has on the circuitry. The proximity of a transistor to the edge of a well can affect its performance dramatically. The finFET adds even more problems because its complex shape leads to a massive increase in parasitic effects, and these need to be simulated. Meanwhile, the need to use lithographic tricks such as double patterning means even the distances between transistors and low-level interconnect need to be constrained.
"Everything is on a grid," says Jean-Marie Brunet, product marketing director at Mentor Graphics. The need to lock to a grid leads to the problem of the final layout not being what that circuit designer really wanted because the devices wind up being close to a well edge and connected to metal traces of the wrong length.
The solution that some of the design tools companies have adopted is to lay out early and lay out often. Pulsic launched a tool earlier in the year that produces multiple variants of a circuit; simulation in SPICE identifies the best candidates, which the design team can then pick as the basis for the final layout. Last year, Cadence Design Systems added to its Virtuoso mixed-signal environment the ability to work with partial designs, giving circuit engineers the chance to see how layout parasitics would affect them.
Wilbur Luo, senior group director for custom IC design at Cadence, says: "Ideally, you have an early view of layout, instead of waiting for layout engineers to finish their layout, feed it back and then find a problem."
The push towards synthesis is leading to the introduction of novel, even counterintuitive, architectures that enable lower-power designs on advanced nodes. One is a novel form of flash A/D converter that lends itself not to analogue synthesis but to digital.
In a conventional flash A/D converter, comparators are carefully arranged along a reference ladder so that their thresholds are spaced one least-significant bit apart. Designers have had to live with the random offsets that push each comparator away from this ideal spacing. The stochastic A/D converter devised by a team from Oregon State University turns these random offsets into a virtue (see fig 1).
The converter works on the basis that, if all the comparators are sized identically, they will show a Gaussian distribution of offsets. Rather than reading off the value of an input from a comparator ladder, the converter counts the number of comparators triggered: the more that are triggered, the higher the input signal. But rather than following a straight line, the count follows the Gaussian cumulative distribution. DSP linearises this output into a usable reading with a comparatively simple piecewise-linear approximation, and the sparsely populated extreme tails of the distribution are not used.
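A minimal numerical sketch shows the principle; the comparator count, offset spread and calibration points below are illustrative, not figures from the Oregon State design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: many identical comparators whose
# input-referred offsets are Gaussian-distributed.
N_COMP = 1023
SIGMA = 0.25                       # offset standard deviation, volts
offsets = rng.normal(0.0, SIGMA, N_COMP)

def raw_count(vin):
    """Raw reading: the number of comparators that trip for input vin."""
    return int(np.sum(vin > offsets))

# The count tracks the Gaussian CDF rather than a straight line, so a
# DSP back-end linearises it. Here a crude piecewise-linear lookup table
# is built from a calibration ramp that avoids the extreme tails.
cal_v = np.linspace(-2 * SIGMA, 2 * SIGMA, 64)
cal_counts = [raw_count(v) for v in cal_v]

def linearised(vin):
    """Map a raw comparator count back to an input-voltage estimate."""
    return np.interp(raw_count(vin), cal_counts, cal_v)

print(round(linearised(0.1), 3))   # close to 0.1, within the resolution
```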
Although the design tools need to be prevented from optimising apparently redundant comparators out of the design, a key advantage of the stochastic converter is that it can be synthesised digitally and so take advantage of the automated layout and verification support that logic has. So far, the converter has limited resolution – just six or seven bits – but the use of small comparators results in a flash A/D converter that is comparatively lean on power.
Even what are now comparatively mature processes provide enough of a difference in performance to make it worth revisiting older architectures. One that has already proven a fruitful avenue is the successive approximation register (SAR) converter. This used to be much slower than traditional, power-hungry flash architectures because it steps through the conversion one bit at a time, from the most significant bit to the least, gradually closing in on the incoming voltage level.
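The classic MSB-first conversion is essentially a binary search on the input voltage, as a short sketch makes plain; the comparator decision is modelled here as a simple floating-point compare.

```python
def sar_convert(vin, vref=1.0, nbits=8):
    """Classic MSB-first successive approximation: a binary search that
    resolves one bit per cycle, halving the search window each time."""
    code = 0
    for bit in reversed(range(nbits)):
        trial = code | (1 << bit)               # tentatively set this bit
        if vin >= trial * vref / (1 << nbits):  # comparator decision
            code = trial                        # keep the bit
    return code

print(sar_convert(0.6))   # 0.6V against a 1V reference -> code 153
```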
To try to improve throughput, older SAR implementations used pipelining, but the architecture still imposed significant latency. "At 65nm and smaller, latency becomes less of an issue," says Darren Hobbs, director of product management at design house S3, claiming that small geometry SARs can now compete with flash A/D converters.
Process technology has greatly improved the speed of the switched-capacitor circuitry that underpins the SAR architecture, although it does increase the issues caused by parasitics, says Hobbs. "It leads to extra complexity in calibration. The DSP is your workhorse to calibrate out the non-linearities."
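One common way of doing this, sketched below with a toy static non-linearity rather than S3's actual scheme, is to characterise the converter against a known ramp and fit an inverse correction that the DSP applies to every sample.

```python
import numpy as np

# Toy static calibration: drive the converter with a known slow ramp,
# fit a low-order polynomial inverse, then apply it to every sample.
ideal = np.linspace(0.0, 1.0, 1000)                    # applied ramp
measured = ideal + 0.02 * ideal**2 - 0.01 * ideal**3   # made-up non-linearity

coeffs = np.polyfit(measured, ideal, deg=3)            # inverse transfer fit

def correct(raw_sample):
    """Apply the fitted correction to a raw converter output."""
    return np.polyval(coeffs, raw_sample)
```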
For an SAR converter that runs at up to 160Msample/s, S3 claims an efficiency of 31fJ/conversion on a 40nm process. But it is a design that maps readily onto smaller geometries. "That's the beauty of the architecture: you can scale rapidly and probably a bit more easily than others because of the large digital content."
The ability to couple logic and analogue processing has led to some novel takes on the SAR approach. A design presented at the 2014 International Solid-State Circuits Conference by researchers from the Massachusetts Institute of Technology (MIT) flipped the traditional approach to SAR on its head (see fig 2). Instead of approximating from the most significant bit first, MIT's design uses digital control to take a best guess and adjusts it starting from the least significant bits.
MIT researcher Frank Yaul explains: "Our assumption is that the input signal has a low mean rate of change, so it is likely that the current sample is close in value to the previous one. By taking LSB-sized steps first, we can save power by reducing the number of bit cycles used to complete the conversion."
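The principle can be caricatured in a few lines. The sketch below simply walks in unit steps from the previous sample's code until the comparator flips; the MIT converter itself uses a more sophisticated step schedule, so treat this as the simplest illustration of the idea.

```python
def lsb_first_convert(vin, prev_code, vref=1.0, nbits=8):
    """Simplified LSB-first search: start from the previous sample's code
    and take LSB-sized steps until the comparator flips, so a slowly
    changing input finishes in only a few bit cycles."""
    lsb = vref / (1 << nbits)
    code = prev_code
    if vin >= code * lsb:
        while code + 1 < (1 << nbits) and vin >= (code + 1) * lsb:
            code += 1                 # comparator says 'higher': step up
    else:
        while code > 0 and vin < code * lsb:
            code -= 1                 # comparator says 'lower': step down
    return code

print(lsb_first_convert(0.6, prev_code=150))   # reaches 153 in a few steps
```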
The approach slashed the energy figure of merit to less than 4fJ/conversion in favourable cases. Even with bad initial guesses, the conversion energy per step was still only 20fJ. "The converter's figure of merit scales logarithmically with the mean rate of change of the signal," Yaul says.
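For context, the widely used Walden figure of merit divides power by sample rate and effective resolution to give an energy per conversion step. The numbers below are purely illustrative, not taken from the MIT paper.

```python
# Walden figure of merit: energy per effective conversion step.
# All values here are illustrative placeholders.
power = 3e-6       # watts
enob = 8           # effective number of bits
fs = 5e6           # samples per second

fom = power / (2**enob * fs)                      # joules per step
print(f"{fom * 1e15:.1f} fJ/conversion-step")     # ~2.3 fJ
```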
The combination of higher switching speeds, strict process rules and designs that rely on the increasing use of digital techniques will lead to more circuit architectures that turn conventional wisdom on its head.