Holistic power supply designs gain popularity
The question of efficiency is never far away when looking at power supply design, even though it might be trumped by cost when the final choice is made. For years, however, efficiency has been about a single number: the peak efficiency at a favourable load point.
The trend now is to look at power supply efficiency from a more holistic point of view. This is particularly true of the server environment where a focus on the idea of the green data centre – or at least one that is nowhere near as heavy on energy consumption – has taken hold.
Air conditioning is one of the reasons why data centre energy consumption is so high. One option is to use less of it and run the equipment at a higher temperature. Naturally, this will affect component reliability, not least that of temperature-sensitive parts in the power supply, such as the large electrolytic capacitors. This may force manufacturers to look at components that offer longer lifetimes in hot conditions.
Point of load conversion
A further trend is the gradual, albeit slow, adoption of 380V distribution around the racks, converting down to 48V only at the blade level, from where the PCB-level voltages are distributed. Even there, the trend towards ever lower PCB distribution voltages may go into reverse in response to the problems of feeding high current at low voltages to large processors and SoCs. These devices are increasingly likely to sport on-chip or in-package voltage regulators so that the number of power pins a device needs can be reduced.
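To see why higher distribution voltages are attractive, a rough conduction-loss calculation is enough. The sketch below uses purely illustrative figures for power and cable resistance, not data from any particular installation.

```python
# Why higher distribution voltages help: for a fixed power draw, the I^2 R
# conduction loss in the cabling falls with the square of the bus voltage.
# Power and resistance figures are assumptions chosen for illustration.
POWER_W = 2000.0              # power delivered to one section of a rack
CABLE_RESISTANCE_OHM = 0.01   # total resistance of the distribution path

for bus_voltage in (12.0, 48.0, 380.0):
    current = POWER_W / bus_voltage
    loss = current ** 2 * CABLE_RESISTANCE_OHM
    print(f"{bus_voltage:5.0f} V bus: {current:6.1f} A, {loss:6.1f} W lost in the cabling")
```

The same argument, scaled down, is what pushes regulation onto the chip or into the package: the shorter the distance over which very high currents have to travel at low core voltages, the better.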
Then there is the question of what peak efficiency means in the context of a complete system. This level of performance is often only seen when the load is a very high proportion of the maximum output. Because the power supply may have been specified with a certain level of headroom and because the computer itself may only be expected to run at high load for short periods of time, the actual delivered efficiency can be a long way short of the peak. Attention has now shifted to real-world use and how well power supplies stand up to loads of 50% or less. This is having a knock-on effect on the architecture of the power supplies themselves.
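The gap between peak and delivered efficiency is easy to quantify once a load profile is assumed. Both the efficiency curve and the share of time spent at each load point below are invented for illustration; the point is simply that a supply quoted at 94% peak can deliver something closer to 90% in practice.

```python
# Weighted 'delivered' efficiency over an assumed load profile.
# Both dictionaries below are illustrative assumptions, not measured data.
efficiency_at_load = {0.10: 0.82, 0.20: 0.88, 0.50: 0.92, 1.00: 0.94}  # load fraction -> efficiency
time_share        = {0.10: 0.40, 0.20: 0.35, 0.50: 0.20, 1.00: 0.05}  # load fraction -> share of time

rated_output_w = 800.0
energy_out = sum(time_share[l] * l * rated_output_w for l in time_share)
energy_in  = sum(time_share[l] * l * rated_output_w / efficiency_at_load[l] for l in time_share)

print(f"peak efficiency:      {max(efficiency_at_load.values()):.0%}")
print(f"delivered efficiency: {energy_out / energy_in:.1%}")  # roughly 90% with these figures
```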
Today, in servers and similar systems, power factor corrected (PFC) rectifiers have pretty much replaced their uncorrected predecessors. A simple rectifier circuit has very high peak current consumption and imposes high harmonic distortion on the input, bringing the power factor down as low as 0.5. The PFC converter introduces much faster switching, usually under pulse width modulation (PWM) control, to reduce the peak current level. Typically, the PFC converter takes the form of a full-bridge rectifier, followed by a boost converter, which contains the PWM-controlled switch itself, an inductor and an output diode.
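The power factor figure quoted above follows directly from the definition: real power divided by apparent power. A crude model – a clean sinusoidal mains voltage and a narrow current pulse near each voltage peak, standing in for an uncorrected rectifier-capacitor front end – is enough to reproduce a number around 0.5; the pulse width and amplitude here are assumptions, not measurements.

```python
import numpy as np

# Crude model: sinusoidal 230 V RMS mains and a narrow current pulse drawn
# only near the voltage peaks, as an uncorrected rectifier tends to do.
t = np.linspace(0.0, 0.02, 20000, endpoint=False)   # one 50 Hz cycle
v = 325.0 * np.sin(2 * np.pi * 50 * t)
pulse = np.abs(np.sin(2 * np.pi * 50 * t)) > 0.98    # conduct near the peaks only
i = np.where(pulse, 10.0, 0.0) * np.sign(v)

real_power = np.mean(v * i)
apparent_power = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))
print(f"power factor = {real_power / apparent_power:.2f}")   # about 0.5 with these numbers
```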
There are two ways to control the switching. One is continuous conduction mode (CCM), in which the inductor current need not have fallen to zero when the switch is turned on; because the boost diode is still conducting at that moment, it experiences reverse recovery. Switching the MOSFET on and off regulates the inductor current, which is smoothed using a large output capacitor.
Under the alternative scheme, called boundary conduction mode (BCM), the inductor current is allowed to fall to zero before the MOSFET is turned on. This results in soft switching and so reverse recovery does not occur in the diode. However, the technique tends to result in higher conduction losses through the MOSFET and diode because the peak inductor current tends to be higher than with CCM switching. BCM also relies on variable-frequency switching so that the inductor current can be allowed to fall to zero.
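The conduction-loss penalty of BCM comes straight from the current waveforms. The comparison below assumes the same cycle-average current in both modes and an arbitrary 40% ripple figure for the CCM case; the numbers are illustrative only.

```python
# Peak inductor current, assuming the same cycle-average current in both modes.
average_current = 5.0        # A; illustrative value

# BCM: triangular current that starts from zero, so the peak is twice the average.
bcm_peak = 2.0 * average_current

# CCM: a ripple riding on a DC level; assume 40% peak-to-peak ripple here.
ccm_ripple_fraction = 0.4
ccm_peak = average_current * (1.0 + ccm_ripple_fraction / 2.0)

print(f"BCM peak current: {bcm_peak:.1f} A")   # 10.0 A
print(f"CCM peak current: {ccm_peak:.1f} A")   #  6.0 A
```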
Typically, BCM wins in designs of up to around 300W, but the distinction is not clear cut. The arrival of silicon carbide diodes has made it possible to improve the efficiency of CCM because these diodes have very much lower reverse-recovery losses than their conventional silicon counterparts, although they are more expensive. Some silicon diodes can also cut reverse-recovery losses, but sometimes at the cost of higher conduction losses.
The diodes in the full-bridge rectifier itself can experience large losses, particularly as they may be slow-recovery parts. This is where the 'bridgeless' PFC topology comes in. MOSFET switches are used in place of two of the diodes and allow a reworking of the bridge-boost converter topology. With the switches replacing half of the bridge, it is possible to remove the boost diode that follows the inductor in a traditional circuit; the two remaining bridge diodes take on its job. As this topology looks a little like two boost converters glued together, sharing a common inductor, the bridge seems to have disappeared – hence the term 'bridgeless'. Overall efficiency improves because the number of semiconductor devices in the inductor-charging current path falls from three to two.
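The benefit of dropping from three devices to two can be gauged with a back-of-the-envelope conduction-loss estimate. The forward drop and on-resistance below are assumed values for illustration, and the model ignores switching losses entirely.

```python
# Conduction loss in the inductor-charging path: conventional boost PFC
# (two bridge diodes plus the MOSFET) versus bridgeless (one device fewer).
# Device parameters are illustrative assumptions.
current = 8.0          # A in the charging path
diode_drop = 0.95      # V forward drop per bridge diode
mosfet_r_on = 0.09     # ohm on-resistance of the boost switch

conventional = 2 * diode_drop * current + current ** 2 * mosfet_r_on
bridgeless   = 1 * diode_drop * current + current ** 2 * mosfet_r_on

print(f"conventional path: {conventional:.1f} W")   # about 21 W
print(f"bridgeless path:   {bridgeless:.1f} W")     # about 13 W
```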
However, the bridgeless-PFC topology is more complex outside the power path as the switching modules need to be able to perform current sensing and input voltage sensing. This, in turn, is driving the development of digital, rather than analogue, control.
There is a further complication: some of the power losses that would normally be handled by diodes in a bridge rectifier now have to be handled by the power MOSFETs, leading to higher junction temperatures and an increase in transistor size and cost.
Noise is also a problem with the conventional bridgeless-PFC topology as there is now no low-frequency path to the output. This leads to an increase in common-mode electromagnetic interference (EMI) from the charging and discharging of parasitic capacitances. The result is that practical bridgeless-PFC converters tend to be more complex than the basic theoretical topology.
In response to the idea of approaching power design within a large system as a holistic problem, one PFC technology that is becoming more common is the boost-follower PFC. In this topology, the output voltage is allowed to change with the input voltage, which increases the efficiency of the PFC converter itself and can lift overall system efficiency – as long as the downstream converters are designed to deal with it. Any downstream DC/DC converter has to operate over a much wider input voltage range, perhaps 200V to 400V, which in turn restricts the topologies that can be used.
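The behaviour is easy to sketch: rather than regulating to a fixed 380V or so, the controller lets the bus sit a little above the rectified line peak, within limits. The tracking rule, margin and clamp values below are assumptions for illustration only.

```python
# A sketch of a boost-follower PFC output: the bus tracks the line voltage
# instead of holding a fixed level. Margin and limits are assumptions.
def follower_bus_voltage(line_rms: float) -> float:
    """Return a bus voltage a little above the rectified line peak."""
    peak = line_rms * 2 ** 0.5
    margin = 20.0                                  # V of headroom above the peak
    return min(max(peak + margin, 200.0), 400.0)   # clamp to the downstream range

for line_rms in (100.0, 115.0, 230.0, 265.0):
    print(f"{line_rms:5.0f} V RMS in -> {follower_bus_voltage(line_rms):5.0f} V bus")
```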
Within the DC/DC converters themselves, the topologies have moved to more complex forms of switched conversion, such as quasi-resonant operation or multiphase conversion. These switchers tend to operate at high frequencies, so switching losses are the ones that need to be tamed. The quasi-resonant approach takes advantage of soft switching to reduce them: unlike a fully resonant converter, it exploits resonance only around the switching transitions, with the turn-on instant following the natural oscillation of an inductor-capacitor circuit rather than a fixed PWM clock – which is where the 'quasi' comes from, and why the switching frequency varies with load.
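The variable-frequency nature of the approach shows up in a simple timing estimate. For an assumed quasi-resonant flyback stage, the delay from the end of demagnetisation to the first voltage valley is half a period of the ring between the primary inductance and the drain node capacitance; the component values below are assumptions chosen only to put a number on it.

```python
import math

# Valley-switching timing for an assumed quasi-resonant flyback stage.
L_PRIMARY = 600e-6    # H, primary inductance (illustrative)
C_DRAIN   = 100e-12   # F, effective drain node capacitance (illustrative)

# After demagnetisation the drain rings with period 2*pi*sqrt(L*C); the first
# valley, where the controller turns the MOSFET back on, arrives half a period later.
t_valley = math.pi * math.sqrt(L_PRIMARY * C_DRAIN)
print(f"delay to first valley: {t_valley * 1e6:.2f} us")   # about 0.77 us here

# Because the on-time and demagnetisation time also change with load, the
# overall switching frequency of a quasi-resonant converter is not fixed.
```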
On their own, quasi-resonant and similar topologies do not fare so well with widely varying loads, such as a server blade that can suddenly come out of idle, process heavily for a few minutes and then go back into a quiescent state. This is where the multiphase converter comes in: it splits the task of converting from one voltage to another among several parallel circuits.
A multiphase converter (see fig 2) might have four or five phases, all of which are active at full load. As the processor slows, the load reduces and the power supply – if it is intelligent enough – can start to drop phases. This reduces the impact of switching losses, which would otherwise start to dominate at lower loads, and better matches the power supply's output to the load.
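A phase-shedding decision can be as simple as keeping each active phase near the current level at which it runs efficiently. The sketch below assumes a four-phase converter and a 25A-per-phase target; both are illustrative, and a real controller would add hysteresis to avoid hunting between phase counts.

```python
import math

# Simplified phase shedding: keep each active phase near its efficient
# operating current. Phase count and per-phase target are assumptions.
PHASE_COUNT = 4
PER_PHASE_TARGET_A = 25.0

def active_phases(load_current_a: float) -> int:
    """Return how many phases to keep switching for a given load current."""
    needed = math.ceil(load_current_a / PER_PHASE_TARGET_A)
    return min(PHASE_COUNT, max(1, needed))

for load in (100.0, 60.0, 30.0, 8.0):
    print(f"{load:5.1f} A load -> {active_phases(load)} phase(s) active")
```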
When dealing with multiphase converters, it is not entirely straightforward to work out how many phases are needed. More phases give more control over the efficiency curve, but each phase costs money in terms of additional control circuitry and power transistors. In general, the rule of thumb is that each phase should deliver around 25A. Improvements in technology are steadily pushing that number up, although the increase in output capacitance needed for a smaller number of phases can negate some of the cost saving. As efficiency at lower loads becomes more important, it can make sense to soak up the cost of the extra active components.
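At design time the same rule of thumb fixes the phase count: round up the maximum output current divided by the per-phase figure. In the sketch below, the 40A column stands in for an assumed, more capable per-phase design, not any specific device.

```python
import math

# Design-time phase count from the ~25 A-per-phase rule of thumb; the 40 A
# figure is an assumed example of an improved per-phase capability.
def phase_count(max_output_a: float, per_phase_a: float) -> int:
    return max(1, math.ceil(max_output_a / per_phase_a))

for i_max in (60.0, 120.0, 200.0):
    print(f"{i_max:5.0f} A max load: {phase_count(i_max, 25.0)} phases at 25 A/phase, "
          f"{phase_count(i_max, 40.0)} at 40 A/phase")
```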
Sometimes, even the current from one phase is too much. This is where pulse-skipping or burst mode comes in. In burst mode, the switching circuit is only activated when the output voltage starts to move out of regulation, effectively skipping some of the switching cycles that would have happened if it were left to run as normal.
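The control law behind burst mode is essentially a comparator with hysteresis on the output voltage. The toy model below captures that behaviour; the 12V output and the roughly ±1% window are assumptions, not a description of any particular controller.

```python
# Toy burst-mode / pulse-skipping regulator: switching runs only while the
# output is drifting out of its window. All thresholds are assumptions.
V_LOW = 11.88    # V: re-enable switching below this
V_HIGH = 12.12   # V: skip cycles above this

def burst_controller(v_out: float, switching: bool) -> bool:
    """Return whether the switcher should run during the next cycle."""
    if v_out < V_LOW:
        return True        # output sagging: deliver a burst of cycles
    if v_out > V_HIGH:
        return False       # output topped up: skip cycles to save losses
    return switching       # inside the window: carry on as before

# A lightly loaded output drooping slowly, then being topped back up.
running = False
for v in (12.10, 12.02, 11.95, 11.87, 11.92, 12.05, 12.13):
    running = burst_controller(v, running)
    print(f"Vout = {v:.2f} V -> {'switch' if running else 'skip'}")
```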
When the load is powered down completely, there is the question of what happens to the front-end power supply. If left to run, it will tick over, but waste most of the energy it draws. This situation is particularly problematic for battery chargers, which run until the cells are fully charged and are then left connected to the mains with no load to supply.
Through improvements in efficiency, a quiescent charger might draw 500mW. This can be brought down further, to less than 100mW, by moving to active control over the power supply itself – using a digital controller on a separate circuit to disable the main switching stage when no load is detected.
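The 'separate circuit' here amounts to a small supervisor that stays awake while everything else is shut off. A minimal sketch of that decision logic, with assumed threshold and delay values, might look like this:

```python
# Supervisory no-load shutdown: disable the main switching stage once the
# output has been effectively unloaded for a while. Values are assumptions.
NO_LOAD_THRESHOLD_A = 0.05   # below this the output counts as unloaded
SHUTDOWN_DELAY_S = 30.0      # grace period before cutting the main stage

def main_stage_enabled(load_current_a: float, idle_time_s: float) -> bool:
    """Decide whether the main switching stage should be kept powered."""
    if load_current_a > NO_LOAD_THRESHOLD_A:
        return True                          # a real load is present
    return idle_time_s < SHUTDOWN_DELAY_S    # otherwise power down after the delay

print(main_stage_enabled(0.8, 0.0))    # True: charging in progress
print(main_stage_enabled(0.01, 45.0))  # False: cells full, main stage disabled
```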
Power supply design has moved on from simply chasing peak efficiency: what happens on the load side is now the dominant factor. That is likely to lead to greater levels of interaction between the equipment and the power supply architecture as designers try to eke more performance out of their circuits.