A/D converter architectures still up to the challenge
Electronics design may have gone strongly digital, but the world with which most systems deal remains unreservedly analogue. As more signal-processing functions have moved into the digital domain – partly for flexibility and increasingly because the algorithms offer better power-performance than analogue-domain processing in the latest deep submicron processes – the analogue-to-digital (A/D) converter has become more important than ever.
Despite the growing significance of the A/D converter, all the architectures in widespread use today date back decades, with most of the work today going into finding novel implementations and ways to combine them to improve their cost and power efficiency. And low cost does not necessarily mean poor accuracy.
The simplest, cheapest and smallest A/D converter is very slow, but can be incredibly accurate, as long as you have a signal that is equally slow moving. The dual-slope A/D converter is found in voltmeters and in interfaces for sensors such as strain gauges.
All you need, in addition to a few passive components, are an analogue integrator, a comparator, a counter and a clock. When conversion starts, the input is switched to the integrator, whose output ramps up for a fixed number of clock cycles at a rate proportional to the input voltage. A controller circuit then switches the integrator input to a known reference voltage of opposite polarity, so the integrator output falls steadily back to 0V. The counter measures how long the voltage takes to reach zero, a time proportional to the input voltage being sampled.
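The two phases can be sketched in a few lines of Python. This is a minimal idealised model – the integrator and comparator are reduced to arithmetic, and the reference voltage and cycle count are illustrative values, not from the article:

```python
# Sketch of a dual-slope conversion with idealised components.
# Phase 1: integrate the unknown input for a fixed number of clocks.
# Phase 2: integrate a known reference of opposite polarity and count
# clocks until the output returns to zero.

def dual_slope(v_in, v_ref=2.5, t_fixed=1000):
    """Return the counter value for input v_in (0 <= v_in <= v_ref)."""
    # Phase 1: the integrator charges at a rate proportional to v_in.
    charge = v_in * t_fixed
    # Phase 2: discharge at a rate proportional to v_ref; count cycles
    # until the comparator sees the output cross zero.
    count = 0
    while charge > 0:
        charge -= v_ref
        count += 1
    return count  # count / t_fixed ~= v_in / v_ref

print(dual_slope(1.25))  # 500: half of full scale
```

Note that the result depends only on the ratio of the two slopes, which is why modest passive components can still give high accuracy.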
There are a number of variations on the dual-slope converter, mainly aimed at improving accuracy. In the basic design, the quality of the capacitor in the integrator and the accuracy of the comparator affect the overall results. But, through simplicity, it has proved a popular choice of converter for integration on low-cost microcontrollers.
One modification is to increase the number of slopes. In the charge-balance or multi-slope architecture, a free-running integrator sits in a feedback loop. The converter continuously attempts to null its input by subtracting precise amounts of charge when the accumulated charge exceeds a limit. The rate at which the charge packets are generated is proportional to the input voltage, so a counter converts the rate to a sample reading. But this is still a slow approach.
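A toy model of the charge-balance loop makes the rate-to-voltage relationship concrete. The packet size and clock count here are arbitrary illustrative numbers, and the charge packets are assumed ideal:

```python
# Sketch of the charge-balance (multi-slope) idea. The loop subtracts a
# fixed charge packet whenever the accumulated charge crosses a threshold;
# the rate of subtraction tracks the input voltage.

def charge_balance(v_in, packet=1.0, n_clocks=1000):
    """Return how many charge packets were removed in n_clocks cycles."""
    acc, count = 0.0, 0
    for _ in range(n_clocks):
        acc += v_in            # input charges the integrator each clock
        if acc >= packet:      # threshold crossed: remove one packet
            acc -= packet
            count += 1
    return count               # count / n_clocks ~= v_in / packet

print(charge_balance(0.25))  # 250 packets in 1000 clocks
```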
For any application that calls for the measurement of a signal that changes more quickly than on the order of tens of milliseconds, you need faster techniques.
The most common A/D converter architecture used in industrial applications today is the successive approximation converter, which uses a binary search for the input voltage.
At the start of conversion, all bits of the register used to store the digitised value are set to zero, except the most significant bit. This value is used to drive a digital-to-analogue (D/A) converter. If the D/A converter's output is greater than the input as measured at the comparator, the most-significant bit is reset to zero. Otherwise it is left at one. The next most-significant bit is then set to one and tested. The process repeats until all of the bits have been set or reset as needed; conversion will take as many steps as there are bits of resolution.
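The binary search above reduces to a short loop. In this minimal sketch the comparator and D/A converter are ideal arithmetic, and the resolution and reference voltage are illustrative choices:

```python
# Sketch of a successive-approximation conversion: test one bit per step,
# most significant first, keeping each bit only if the trial D/A output
# does not exceed the input.

def sar_convert(v_in, v_ref=1.0, bits=8):
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)            # set the bit under test
        v_dac = trial * v_ref / (1 << bits)  # ideal D/A converter output
        if v_dac <= v_in:                    # comparator decision
            code = trial                     # keep the bit
    return code

print(sar_convert(0.5))   # 128: mid-scale in 8 bits
print(sar_convert(0.25))  # 64
```

Eight bits take exactly eight trips round the loop, which is the serial bottleneck the flash architecture removes.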
However, a 16bit conversion does not simply take twice as long as an 8bit process, as you might expect. Settling time has a big impact: it takes longer for the circuit to settle to 16bit accuracy than to 8bit, and this settling time is largely unaffected by faster silicon processes, although more power-hungry circuits can achieve a faster response. In practice, the maximum speed of a 16bit successive-approximation converter is limited to some way less than 1Msample/s.
To achieve speed, you need to do more in parallel, which is where the flash A/D converter comes in. The limiting factors in the successive-approximation converter are the comparison loop and how long it takes on each step before the comparator sees a reliable signal. The flash removes the loop by simply adding more comparators: an n-bit flash converter needs 2^n - 1 of them, which for non-trivial resolutions is a lot.
Each comparator is fed by a reference voltage equivalent to one least-significant bit higher than that of the comparator beneath it in the chain. For a given input voltage, all the comparators beneath a certain point will have their input voltage higher than their reference voltage, giving a logic one output. All the comparators above that point will have a reference voltage larger than the input voltage and so will give a logic zero output.
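The resulting pattern of ones topped by zeroes is a thermometer code, which an encoder collapses into a binary value. A minimal sketch, assuming ideal, evenly spaced references:

```python
# Sketch of a flash converter: 2^n - 1 comparators, each referenced one
# LSB above the one beneath it, produce a thermometer code; counting the
# ones gives the binary output.

def flash_convert(v_in, v_ref=1.0, bits=3):
    n = (1 << bits) - 1                       # 2^n - 1 comparators
    lsb = v_ref / (1 << bits)
    # Comparator i trips when the input exceeds its reference.
    thermometer = [v_in > (i + 1) * lsb for i in range(n)]
    return sum(thermometer)                   # encode: count the ones

print(flash_convert(0.4))  # 3: 0.4V lies between 3 and 4 LSBs of 0.125V
```

All the comparisons happen in the same clock cycle – the price is that the comparator count, and hence area and power, doubles with every extra bit.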
Because there is much more circuitry and a demand for speed that is realised through the use of more power-hungry comparator architectures, flash converters are rarely low-energy devices. But they can deliver sample rates of more than 1Gsample/s, if only at comparatively low resolutions of 8 or 10bit.
For greater resolution, it is possible to combine multiple flash stages in a pipeline: two 7bit flash A/D converters are very much smaller than a single 14bit converter. Each stage in the pipeline can carry out an operation on a sample, providing the output for the following stage (see fig 1). At any given time, all stages in the pipeline can be processing different samples. So, the maximum possible throughput in these subranging converters depends only on the speed of each stage and the acquisition time of the next sampler.
In a two-stage 14bit subranging converter, the seven most-significant bits are digitised by the first flash converter. The digital output of that stage is then applied to a 7bit D/A converter, the output of which is subtracted from the stored sample. The resulting signal is amplified and applied to the second 7bit flash A/D converter. The outputs of the two flash converters are then combined into a single 14bit binary output word.
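The recombination arithmetic can be sketched as below. This idealised model ignores the sample-and-hold, amplifier gain error and stage mismatch that a real design must correct for; the clamping and scaling are illustrative:

```python
# Sketch of a two-stage 14bit subranging conversion: a coarse 7bit flash,
# subtraction of its ideal D/A value, then a second 7bit flash on the
# amplified residue. The two codes concatenate into one 14bit word.

def subrange_14bit(v_in, v_ref=1.0):
    coarse = min(int(v_in / v_ref * 128), 127)       # first 7bit flash
    residue = v_in - coarse * v_ref / 128            # subtract D/A output
    amplified = residue * 128                        # gain of 128 before stage 2
    fine = min(int(amplified / v_ref * 128), 127)    # second 7bit flash
    return (coarse << 7) | fine                      # 14bit output word

code = subrange_14bit(0.3)
print(code, code / 16384)  # code / 16384 approximates v_in
```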
Designing such a converter is not quite as simple as taking some 7bit converter macros and plugging them together. Although the digital resolution of each stage is only 7bit, the circuitry needs to be designed to cater for 14bit accuracy and mismatches and the degradation of the held sample need to be taken into account. In practice, the second flash converter is designed to overlap with the first – perhaps using an 8bit circuit rather than 7bit – and the results combined using some form of digital correction.
High-accuracy, high-speed converters can be built using more than two stages, although analogue circuit issues will, in practice, limit how far these architectures can go.
While flash converters have moved into the gigahertz range, what happens if you need to go further? The answer is to stretch time: an exercise that is not quite as difficult as it sounds although it is far from being a cheap option.
The technique relies on the ability to control the speed of light based on its frequency by passing it through dispersive media (see fig 2). In most cases, this differential slowing is an unwanted side effect of sending light down an optical fibre. But in the case of the photonic time-stretching A/D converter, it is a useful attribute. First, a pulse of wide-bandwidth light is sent into a frequency-dispersive loop of optical fibre, which stretches out the pulse into a chirp. Then, a portion of the incoming electrical signal is modulated onto the optical signal and sent into another loop of dispersive fibre so it stretches out even further.
The light is filtered into an array of photodetectors, a setup similar to that used in wavelength-division multiplexed communications systems, so that each photodetector receives a small, time-stretched portion of the original signal. The resulting electrical waveform – which is now slow enough to be sampled accurately – is then fed to a conventional A/D converter. Digital logic reconstructs the input signal by stitching together the time-stretched fragments.
In more conventional systems, as power consumption, rather than all-out speed, becomes an increasingly pressing problem, designers have looked again at the trade-offs between flash and successive-approximation architectures. At the 2008 International Solid-State Circuits Conference, researchers from the University of Texas pointed out that the power consumption of successive-approximation converters scales linearly with bit-resolution. In flash, because of the way the comparator count increases, it is an exponential relationship.
However, if the successive-approximation circuit needs to operate at a relatively high clock rate to deliver samples quickly enough, the faster, lower-threshold transistors this demands will increase the energy needed compared with a highly parallel flash circuit using transistors designed to switch far more slowly. As a result, the two energy-consumption curves cross over at lower resolutions.
By replacing multiple single-bit comparison steps with multibit flash converters, it is possible to decode several bits at a time in each step. In the University of Texas converter – a 1.25Gsample/s 6bit A/D device – the circuitry employed a 3bit flash module in a three-step successive-approximation loop.
In operation, the flash module would compare the input voltage against three different reference voltages using a network of capacitors to generate the required levels. On the first step, three voltages are programmed to provide sample outputs of 16, 32 and 48. The selection from that comparison would be programmed into the most significant bits of the second stage and added to three intermediate values representing 4, 8 and 12. The output from that stage would finally be added to the comparison for the least-significant bits. The three 3bit outputs from the comparisons are then passed to three full adders to compute the final 6bit output (see fig 3).
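The weighted, multibit-per-step search can be modelled with plain integer arithmetic. This sketch abstracts away the capacitor network entirely and works on ideal code levels; the structure – three comparisons per step, three steps, weights of 16, 4 and 1 – follows the description above:

```python
# Sketch of the 6bit, three-step search: each step resolves two bits with
# three comparators testing the residue against 1x, 2x and 3x the step
# weight, so six bits take three steps instead of six.

def hybrid_convert(code_in):
    """code_in: ideal input level, 0..63. Returns the reconstructed code."""
    result = 0
    for weight in (16, 4, 1):
        residue = code_in - result
        level = sum(residue >= k * weight for k in (1, 2, 3))
        result += level * weight
    return result

print(hybrid_convert(45))  # 45: resolved as 2*16 + 3*4 + 1*1
```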
The result was a converter that could make use of switched-capacitor circuitry and which suited deep submicron processes well. Fabbed on a 130nm process, the device had the lowest power consumption reported for such an A/D converter at that time.
Another way to take advantage of the rapidly increasing density of digital logic is to force more of the converter's workload into that domain, leaving behind a relatively small number of high-accuracy components that deal with the analogue signal. This is the philosophy behind the sigma-delta A/D converter: an architecture that uses signal-processing theory to render 24bit accuracy from a 1bit-resolution input.
A practical monolithic sigma-delta converter (see fig 4) contains very simple analogue circuit blocks – a comparator, a switch, one or more integrators and analogue-summing circuits. This feeds into a very large digital filter.
For close to 30 years, sigma-delta converters were practically impossible to make because of the size of the filter. It was only in the 1990s that logic integration progressed far enough to implement them at reasonable cost. Now, because they neatly fit the audio range and are easier to integrate with digital processors, the sigma-delta has become the consumer audio converter of choice: from mobile phones to home cinema. Because of that, the sigma-delta is arguably the most prevalent A/D converter architecture in production.
The key to the sigma-delta architecture is the 1bit quantiser, which usually runs at a clock rate many times higher than the Nyquist rate – the lowest sampling rate at which a signal of a given bandwidth can be reliably reconstructed from its samples. The quantiser creates a large number of low-resolution samples that, when passed through a decimation filter to average them, yield a much higher dynamic range.
To work with a 1bit quantiser, the sigma-delta A/D converter relies on feedback. The modulator generates positive or negative voltage pulses in order to track the incoming analogue signal. Each pulse corresponds to a digital output of one or zero, respectively. The output from this loop is a pulse stream: the density of ones or zeroes provides a digital representation of the input signal.
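A first-order modulator is compact enough to sketch directly. This idealised model uses a perfect integrator and comparator, with the input scaled to the range -1 to +1; real designs differ in many details:

```python
# Sketch of a first-order sigma-delta modulator: the integrator accumulates
# the difference between the input and the 1bit feedback D/A, and the
# comparator's decisions form the output pulse stream.

def sigma_delta(v_in, n_samples=1000):
    """v_in in [-1, 1]; returns the stream of 1bit outputs."""
    integ, feedback, stream = 0.0, 0.0, []
    for _ in range(n_samples):
        integ += v_in - feedback         # summing node feeds the integrator
        bit = 1 if integ >= 0 else 0     # 1bit quantiser (comparator)
        feedback = 1.0 if bit else -1.0  # 1bit D/A closes the loop
        stream.append(bit)
    return stream

stream = sigma_delta(0.5)
print(sum(stream) / len(stream))  # pulse density tracks (v_in + 1) / 2
```

For an input of 0.5, roughly three-quarters of the output bits are ones – the density of ones, not any single bit, carries the signal.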
Over-sampling provides a signal-to-noise ratio (SNR) improvement of approximately 6dB – 1bit's worth – for every quadrupling of the sample rate. However, even at audio rates, that demands a comparatively high clock rate for many consumer electronics products. So dynamic range is instead extended by combining over-sampling with an increase in the effective resolution of the quantiser, or by adding more integration stages to the modulator.
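The 6dB-per-quadrupling figure follows from the standard assumption that quantisation noise is white, so the in-band portion falls by 10·log10 of the over-sampling ratio. A quick check:

```python
# Oversampling gain under the white-quantisation-noise assumption:
# in-band noise falls by 10*log10(OSR) dB, i.e. 3dB per doubling of the
# rate, or ~6dB (one bit) per quadrupling.
import math

def oversampling_gain_db(osr):
    return 10 * math.log10(osr)

for osr in (4, 16, 64):
    print(osr, round(oversampling_gain_db(osr), 1))
# 4 -> 6.0, 16 -> 12.0, 64 -> 18.1 (dB)
```

This is the gain from over-sampling alone; the noise shaping described next improves on it considerably.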
A useful side-effect of the process for audio applications is noise shaping. Over-sampling effectively spreads quantisation noise and reduces the in-band noise at the expense of higher out-of-band noise. Digital filtering then attenuates the out-of-band noise and signal components.
Using more integrators within the loop can improve this shaping to provide further dynamic range increases without increasing the quantiser's sampling rate, although the quality of the analogue circuitry will determine the ultimate SNR characteristics.
Almost all sigma-delta converters found in audio applications are discrete-time designs. Sample-and-hold circuits and switched-capacitor filters, used to reject signals higher than the target sampling rate, can create their own mixing products, giving rise to aliasing noise in the sigma-delta converter itself. These noise components are usually very small for audio applications, but the frequency at which the switched-capacitor stages can be operated puts a limit on the maximum signal bandwidth. As a result, sigma-delta designs have not progressed far beyond audio, even though the high resolution would be useful in applications such as ultrasound.
However, in a variation of the sigma-delta, the sample-and-hold circuit is removed from the front-end so the converter is presented with a continuous-time signal. This makes it possible to increase the frequency at which the A/D converter works into the ultrasonic range.
The main drawback of the continuous-time architecture is that the digital filter needs to be redesigned to accommodate a continuous-time input. The filter needs to be tuned carefully to the input sampling rate, which is straightforward for a fixed sampling rate. But that makes it difficult for vendors to support a range of applications with one product. The answer is to use programmable, adaptive filters in the digital stages. These are more area-intensive, but silicon technology is at least moving in the same direction.
As system-on-chip devices move down the process geometry path, they sprout more A/D converters, and we can expect more architectures that demand large amounts of digital assistance and processing to be revisited – with more investigation into hybrid architectures that use different sampling elements to offer better power consumption at a given process node and circuit speed.