Component size and communication speed propel PCB design
Viewed from the outside, the world of the printed circuit board (PCB) seems to move very slowly. The core technologies used in PCB fabrication have not changed radically in decades; boards remain based on glass fibre and vias are still the result of holes being drilled and plated. But PCB tools have evolved as more subtle changes in packaging and fabrication technology have taken place.
Most of the recent technology focus in PCB design has been on high speed, high density projects. Although many users may not consider themselves to be doing high speed design with dense interconnections, the general trend of component design points in that direction, particularly when it comes to microprocessors and memory buses. Memory ICs tend to have a short shelf life: even if you wanted to design with older, slower memory buses, most of the components that support them are past end of life and increasingly difficult to source.
The buses are, in general, moving away from traditional clocked parallel buses to multi lane serial designs that function more like longer distance communications protocols. Because these buses transfer data at rates of more than 1Gbit/s, traditional clocking schemes have given way to self clocked data lines that use relatively small voltage swings. The reduced voltage swing puts more emphasis on the PCB's signal integrity, as there is less room for error.
The other general trend is that of increasing wiring density. As with speed, designers do not necessarily get a choice about using high density interconnect unless they are working with comparatively low end microcontrollers. As packages such as BGAs become more mainstream, even designers working with comparatively low density boards will have to find ways to break out the many signal, power and ground connections – at a pitch of 1mm or less – on a BGA to the outside world.
One of the ways in which PCB manufacturers have allowed for the rise of high density packages such as BGAs has been through smaller and more flexible via construction. Instead of drilling all the way through a PCB stack and then plating the hole, buried and blind vias are made using sequential buildup techniques – a blind via does not pass all the way through the stack, while a buried via is a connection between two internal layers and does not reach either outer surface.
Vias can also be made smaller than traditional through hole forms, hence the name microvia. The IPC defines a microvia as being smaller than 0.15mm in diameter.
As well as improving density, microvias improve electrical properties as they have less capacitance than a traditional plated through hole via. But the flexibility that microvias offer can lead to problems with manufacturing yield. The temptation is to put them directly underneath or very close to BGA pads. Some PCB manufacturers warn against these practices or use stringent design rules because the solder that is meant to attach the ball to the PCB can wick into the via, leading to a weak or broken connection.
Although not mainstream outside of the portable device market, embedded components are now being designed into PCBs and this has had a knock on effect on the design tools as some traditional assumptions about how components are wired up have changed. Two techniques that have called for changes in the rules are the use of vertically mounted components and transistors, and dual sided contacts. Components with contacts on either side make routing more flexible as the electrical connection can be made to one side or the other. Potentially, routing to both sides at once could improve overall board yield in case a via fails.
The overall effect of these changes has been an increase in the number and complexity of design rules. And components with high speed I/O requirements demand that designers pay close attention to how much degradation individual signal lines can suffer through PCB parasitics. Even the length of bus lines is now critical to ensure that data bits turn up in sync. Without tool support, the designer has to spend time inserting jogs and detours to balance out trace lengths and maintain consistent delays across a bundle of signals. As a result, more intelligence is being deployed in autorouters.
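To give a feel for the arithmetic involved, the short Python sketch below estimates how much serpentine 'jog' each trace in a bundle would need to keep its delay within a skew budget of the longest member. The net names, lengths, skew budget and the nominal 7ps/mm propagation delay (a rough figure for an FR4 inner layer trace) are all assumptions made purely for illustration.

    # Illustrative only: estimate the extra 'jog' length needed to length match a bus.
    PS_PER_MM = 7.0        # rough propagation delay for an FR4 inner layer trace (assumption)
    SKEW_BUDGET_PS = 10.0  # allowable skew across the bundle (assumption)

    routed_lengths_mm = {"DQ0": 41.2, "DQ1": 43.8, "DQ2": 39.5, "DQS": 44.0}  # hypothetical nets

    longest = max(routed_lengths_mm.values())
    for net, length in sorted(routed_lengths_mm.items()):
        skew_ps = (longest - length) * PS_PER_MM
        if skew_ps > SKEW_BUDGET_PS:
            extra_mm = (skew_ps - SKEW_BUDGET_PS) / PS_PER_MM
            print(f"{net}: add at least {extra_mm:.2f}mm of serpentine routing")
        else:
            print(f"{net}: within the skew budget")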
From a computer science perspective, autorouting remains an unsolved problem. Most routing related problems are classed as 'NP complete': there is no known algorithm that solves them quickly in the general case. The task of the writer of PCB autorouting software is not made any easier by design rules which may vary from layer to layer and by the fact that different types of via can be used.
Luckily, PCB autorouters only have to do a job that is good enough and not provide a perfect solution. PCB tools vendors have tried out numerous algorithms, but the one that still seems to be popular is the 'rip up and retry' router. This will attempt to perform, one at a time, a detailed route for all nets that do not have fixed layouts. As its name suggests, if this first attempt is unsuccessful – it cannot complete detailed routing for all the nets – it will 'rip up' the work it has done and choose to route the nets in a different order.
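In outline, a rip up and retry pass can be sketched as below. Here, route_net is a hypothetical stand-in for the detailed router, and the only 'intelligence' shown is moving the net that failed to the front of the order for the next pass; real routers apply far more sophisticated ordering and costing.

    import random

    def rip_up_and_retry(nets, route_net, max_passes=10):
        # route_net(net, routed) stands in for a detailed router: it returns a
        # route for net given the routes already committed, or None if it
        # cannot complete the net without breaking the design rules.
        order = list(nets)
        for _ in range(max_passes):
            routed = {}
            failed = None
            for net in order:
                route = route_net(net, routed)
                if route is None:
                    failed = net
                    break
                routed[net] = route
            if failed is None:
                return routed              # every net completed successfully
            # Rip up the work done so far and try the nets in a different
            # order, giving the troublesome net an earlier slot next time.
            order.remove(failed)
            random.shuffle(order)
            order.insert(0, failed)
        return None                        # gave up after max_passes attempts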
In addition, most autorouters in use today employ shape based algorithms, rather than the older and simpler gridded techniques, which constrain routes to lie on a fixed grid. The rise of microvias and high density packages causes problems for gridded routers, as it implies the size of the grid needs to change to accommodate them, which can slow the routing of less densely populated board areas. Shape based algorithms use the true shape of components and traces to determine where obstacles lie.
Routing algorithms are often driven by multiple cost functions. For example, vias can be assigned a high cost because of their impact on yield, pushing the autorouter to use paths that lie on just one board layer as much as possible. Even design rule violations can be treated as costs. Although this can cause problems in terms of design for manufacture, allowing the autorouter to infringe design rules can improve its chances of completion, and manual cleanup may be all that is needed to make the board design rule clean. Many autorouters are now interactive, so the designer can make changes 'on the fly' and have the computer route other traces around the manually fixed path and do other clean up work.
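A typical cost function can be sketched in a few lines; the weights below are invented for illustration, and a real router would tune them or expose them as user settings rather than hard code them.

    # Illustrative cost function for a candidate route; the weights are made up.
    def route_cost(length_mm, via_count, layer_changes, drc_violations,
                   via_weight=5.0, layer_weight=2.0, drc_weight=50.0):
        cost = length_mm                      # base cost: copper length
        cost += via_weight * via_count        # vias are costly because of their yield impact
        cost += layer_weight * layer_changes  # discourage hopping between layers
        cost += drc_weight * drc_violations   # violations allowed, but strongly discouraged
        return cost

    # A longer single layer path can still beat a shorter one that needs two vias.
    print(route_cost(length_mm=32.0, via_count=0, layer_changes=0, drc_violations=0))
    print(route_cost(length_mm=25.0, via_count=2, layer_changes=2, drc_violations=0))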
In many cases, the tool will provide interactive feedback on design rule violations when the user is performing manual routing to avoid having a pile-up of errors that only become obvious when the final production checks are made.
Automatic placement is less widespread than autorouting, but a number of tools offer it. The function can be useful in the early stages of design to see whether the board size is reasonable for the number of components that have to go onto it. Typically, few boards will be placed completely automatically as many devices, particularly connectors, have to be inserted at specific points to fit the enclosure. But the autoplacer can speed the tedious placement of components that do not need to sit in a specific part of the board.
Typically, the autoplacer will choose a layout that minimises the length of the interconnects. The placer generally does not attempt to perform any routing – instead it will calculate the length based on straight line 'rat's nest' interconnections.
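The metric being minimised is simple to sketch: the Python fragment below sums the straight line distances of a hypothetical rat's nest for one trial placement, a figure an autoplacer would evaluate repeatedly while swapping or shifting parts. The part positions and connections are invented for illustration.

    from math import hypot

    # Illustrative only: total rat's nest length for a trial placement,
    # i.e. the sum of straight line distances between connected parts.
    placement = {"U1": (10.0, 10.0), "U2": (35.0, 12.0), "C1": (18.0, 30.0)}  # part -> (x, y) in mm
    ratsnest = [("U1", "U2"), ("U1", "C1"), ("U2", "C1")]                      # hypothetical connections

    def total_length(placement, ratsnest):
        return sum(hypot(placement[a][0] - placement[b][0],
                         placement[a][1] - placement[b][1]) for a, b in ratsnest)

    print(f"Estimated interconnect length: {total_length(placement, ratsnest):.1f}mm")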
The next step from autoplacement and autorouting is constraint based design – introduced largely to help deal with the signal integrity issues of high speed I/O. Constraint based design tends to be a feature of higher end tools aimed at high speed interconnects, but it looks likely to migrate to low end tools as gigabit serial buses become more commonly used. Constraint based design provides a link between the original schematic and the placed and routed board layout. It lets the user tag components and interconnects with information on how they should be treated by the autoplacer and autorouter.
Constraints make it possible for the designer to specify allowable propagation delays and skews on the schematic along with physical rules on the way in which components should be placed relative to each other and how connections between them should be routed.
Hierarchical constraint management systems let the designers group timing and physical rules, such as maximum stub length or via count. For example, by defining the various rules for a memory bus, the constraint set can be stored and recalled in future designs that use that type of bus. Constraint managers tend to use a spreadsheet like structure on the basis that it will be familiar to most users who have, in many cases, been managing these types of rules manually using Microsoft Excel.
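As an illustration of what such a reusable constraint set might contain, the sketch below groups delay, skew, via count and stub length limits for a hypothetical memory data group; the structure and the numbers are invented and do not correspond to any particular vendor's constraint manager.

    from dataclasses import dataclass, field

    # Illustrative only: a reusable constraint set of the kind a hierarchical
    # constraint manager might store for a particular class of bus.
    @dataclass
    class ConstraintSet:
        name: str
        min_delay_ps: float
        max_delay_ps: float
        max_skew_ps: float
        max_via_count: int
        max_stub_length_mm: float
        nets: list = field(default_factory=list)

    ddr_data = ConstraintSet(name="ddr_data_group",
                             min_delay_ps=250.0, max_delay_ps=400.0, max_skew_ps=25.0,
                             max_via_count=2, max_stub_length_mm=1.5,
                             nets=[f"DQ{i}" for i in range(8)] + ["DQS", "DQS_N"])

    # The same set can be stored and recalled for the next design that uses this type of bus.
    print(ddr_data.name, len(ddr_data.nets), "nets")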
Although high speed memory buses and serial communications protocols make PCB design more intricate, increasing integration is pushing some of the complexity inside packaged devices. 3D packaging will accelerate this process by combining multiple pieces of silicon on a silicon interposer or in a stack of silicon dice. Both approaches increase the speed at which chip interfaces can communicate and cut power, helping to keep high frequency signals inside the package.
For many users, high integration microcontrollers and FPGAs are already soaking up a lot of functions that would previously have relied on combinations of discrete devices. These devices potentially provide the PCB layout engineer with a lot of flexibility.
Often, the pinout for microcontrollers can be altered to some extent by reprogramming various configuration registers. Some vendors provide graphical tools to ease the process of configuring I/O functions and pinouts. One way this can be used to improve flexibility is to have the firmware engineers, who configure the logical behaviour of the microcontroller, provide a set of pinout options. From these, the PCB layout engineer can select the one that causes the fewest routing problems.
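The selection step itself can be as simple as the sketch below, in which the pinout options and the routing cost estimates are entirely hypothetical; in practice the cost figure would come from the layout tool's own estimate of rat's nest length, crossings and likely via count.

    # Illustrative only: choose, from a set of firmware approved pinout options,
    # the one the layout tool estimates will be cheapest to route.
    pinout_options = {
        "option_a": {"UART_TX": "PA9", "UART_RX": "PA10", "SPI_SCK": "PB3"},
        "option_b": {"UART_TX": "PB6", "UART_RX": "PB7",  "SPI_SCK": "PA5"},
    }

    # Stand-in for the layout tool's routing estimate; the numbers are made up.
    estimated_cost = {"option_a": 143.0, "option_b": 118.5}

    best = min(pinout_options, key=lambda name: estimated_cost[name])
    print(f"Selected {best}: {pinout_options[best]}")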
As they are almost fully programmable, FPGAs offer even more flexibility on pinouts. As a result, providing a set of design options is not practical. Instead, some PCB design tools have built links to FPGA development software to communicate design changes between the two domains. This makes it possible to generate a component with appropriate pin names that can be used in the schematic and layout editors, instead of demanding that the PCB engineer create symbols and pin names manually for devices with as many as 1000 I/Os.
If the PCB designer requests a pin swap to avoid having to route a trace that uses too many vias or conflicts with design rules, this can be fed back into the FPGA place and route tools or rejected if the change violates timing or the device's own design rules.
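A much simplified version of the check that might be applied to such a request is sketched below; the swap groups are invented, and a real flow would also verify I/O standards, differential pairing and timing before accepting the swap.

    # Illustrative only: reject pin swap requests that cross swap group boundaries.
    swap_groups = {
        "bank_14": {"T22", "U22", "V22", "W22"},
        "bank_15": {"A8", "B8", "C8"},
    }

    def swap_allowed(pin_a, pin_b):
        # A swap is only a candidate if both pins sit in the same group.
        return any(pin_a in group and pin_b in group for group in swap_groups.values())

    print(swap_allowed("T22", "W22"))  # True: same bank, candidate for a swap
    print(swap_allowed("T22", "A8"))   # False: crosses banks, must be rejected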
Data interchange is one of the slowest moving portions of the PCB design environment. The export format of choice remains the Gerber format, although it is now primarily the extended Gerber format, otherwise known as RS274X. Dating back to the early 1980s, the Gerber format is based on ASCII text and is, in principle, human readable. It contains a set of commands and coordinates that, when interpreted, create 2D shapes. Although Gerber files can also contain data for drilling holes, manufacturers tend to use the IPC-NC-349 or Excellon formats for drill data; like the original Gerber format, these trace their roots to the EIA RS-274 family of numerical control standards.
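As a flavour of the format, the Python sketch below writes a minimal extended Gerber file describing a single 0.2mm wide trace; the coordinates, aperture size and file name are chosen purely for illustration.

    # Minimal sketch: emit one straight 0.2mm trace as extended Gerber (RS274X).
    commands = [
        "%FSLAX26Y26*%",   # coordinate format: absolute, 2 integer / 6 decimal digits
        "%MOMM*%",         # units are millimetres
        "%ADD10C,0.200*%", # define aperture D10: a 0.2mm circle
        "D10*",            # select aperture D10
        "X0Y0D02*",        # move to (0, 0) without exposing
        "X5000000Y0D01*",  # draw to (5mm, 0), creating a trace segment
        "M02*",            # end of file
    ]
    with open("example.gbr", "w") as f:
        f.write("\n".join(commands) + "\n")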
Recently, efforts to improve compatibility with mechanical design have led to bigger changes. Although companies have looked at building better bridges to 3D mechanical CAD tools for many years, it is only comparatively recently that PCB design tool vendors have embraced the idea.
More than a decade ago, the US based organisation PDES defined two standards for exchanging data between PCB and mechanical CAD tools – STEP AP210 and AP214 – but vendors were not, in the main, enthusiastic about supporting them. A later iteration of the STEP work has proven more popular. Taking requirements from a series of workshops held in 2005, the ProSTEP iVIP group worked on EDMD, a data exchange format based on XML that took many of its elements from AP210 and AP214.
The result was the IDX format, first supported by Mentor Graphics and by Parametric Technology's Pro/Engineer MCAD tool; Cadence has recently added support for the iVIP derived work. But the main interchange format of choice remains the Intermediate Data Format (IDF).
Free tools such as Farnell's Eagle and DesignSpark PCB from RS Components, in combination with Google's SketchUp, are helping to widen the audience for 3D design. SketchUp is not a 3D CAD tool in the traditional sense. It uses the Collada format developed for interactive 3D modelling and animation and so focuses on surface modelling rather than the full description of solid objects that would be expected from true MCAD tools.
However, SketchUp can provide virtual mock ups of products that make it possible to determine whether connectors fit an enclosure properly and that tall components such as heatsinks will not cause problems. To get designs into SketchUp, utilities are available that map the IDF output of most PCB design tools into the Collada format.
Further work with extracted 3D models is helping to map heatflow through an enclosure, using electrical models to determine where hot spots are likely to turn up. Research is continuing to explore the possibility of using similar simulations to gauge electromagnetic compatibility – particularly as more systems turn to gigabit serial interconnects. As electronic controls become a part of many consumer and industrial products, the integration between the electrical and mechanical parts of the design is becoming much tighter.