Issues and solutions for variation-aware custom IC design
In today's highly competitive semiconductor industry, profitability hinges largely on advantageous design performance, high yield and rapid time to market. As leading edge designs push into smaller process nodes, this is becoming even more evident.
However, due to the common use of foundries, silicon technology no longer provides the differentiating factor that it once did. Rather, the key lies in custom integrated circuit design delivering the necessary advantages in performance, power and area.
With this continued node migration, variation has become a critical design concern and managing it properly is fast becoming the primary issue facing design engineers. Fortunately, fast, accurate and scalable specialist tools are now available to enable variation-aware design techniques to be implemented within the context of existing EDA design and foundry process flows.
So what do we mean by variation-aware custom IC design?
Variation impacts the full spectrum of custom IC designs, from analogue/mixed-signal, RF and I/O through to memory and standard cell libraries. It results in actual silicon performance and yield differing from that predicted in simulation, and is caused by factors such as global and local random effects, environmental (voltage, temperature, load etc) conditions and layout-dependent effects such as proximity. Engineers are increasingly required to manage this variation under tight performance, power, yield and time constraints.
Given that local and global process variation, environmental variation and proximity variation can differ between circuits, variation design problems can be broadly categorised as process, voltage and temperature (PVT) corner, Monte Carlo statistical, high-sigma Monte Carlo statistical or proximity problems. Depending on the type of variation design problem, engineers face different challenges.
Process voltage temperature corners
In some cases, a PVT corners approach to variation, in which global F/S corners are used to model process variation, may be enough. However, it can be cumbersome: running a comprehensive range of PVT corner combinations can take days. To overcome this, engineers can try guessing which PVT corner combinations are worst case, but this leads to inaccuracies and increases design risk.
To reduce risk, they may try adding margins, at the expense of performance, power or area. Additionally, the chosen PVT corners do not always reflect the actual worst-case conditions, so a closer look at random variation becomes important.
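To see why a comprehensive PVT sweep quickly becomes expensive, consider a minimal sketch of a full-factorial corner set. The corner values and the simulate() stub are illustrative placeholders, not part of any particular tool flow.

```python
# Minimal sketch: enumerating an exhaustive PVT corner set.
# The corner values and the simulate() stub are illustrative only.
from itertools import product

process = ["FF", "SS", "FS", "SF", "TT"]   # global fast/slow process corners
voltage = [1.08, 1.20, 1.32]               # nominal supply +/- 10%
temperature = [-40, 25, 125]               # degrees Celsius

corners = list(product(process, voltage, temperature))
print(f"{len(corners)} corner simulations required")   # 5 x 3 x 3 = 45

def simulate(corner):
    """Placeholder for a SPICE run at one PVT corner, returning a measured delay."""
    raise NotImplementedError

# An exhaustive sweep is accurate but slow when each simulation takes minutes to hours:
# results = {c: simulate(c) for c in corners}
# worst_case = max(results, key=results.get)
```

Each additional supply, load or bias axis multiplies the count, which is why a comprehensive sweep can take days of simulation time.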
Monte Carlo statistical
In many design problems, global F/S corners are not accurate enough and statistical models of the global and local process distribution are needed. Engineers can use Monte Carlo sampling, but design iterations in this case are time-consuming, typically taking days rather than hours, making it expensive to incorporate into any design flow. As mentioned previously, the alternative of using PVT corners increases design risk.
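As a rough illustration of what one statistical iteration involves, the sketch below estimates parametric yield with brute-force Monte Carlo. The two "process parameters", the delay formula and the spec limit are invented stand-ins; a real flow draws statistical parameters from the foundry models and runs a SPICE simulation for every sample.

```python
# Minimal sketch of brute-force Monte Carlo yield estimation.
# The process parameters, delay formula and spec limit are invented stand-ins.
import numpy as np

rng = np.random.default_rng(0)
N = 2000                                    # each sample would be one full simulation

vth_shift = rng.normal(0.0, 0.03, N)        # stand-in for threshold-voltage variation
beta_shift = rng.normal(0.0, 0.05, N)       # stand-in for gain-factor variation

delay = 1.0 + 4.0 * vth_shift - 2.0 * beta_shift   # stand-in performance measure

spec = 1.25                                 # pass if the delay stays below the spec
yield_estimate = np.mean(delay < spec)
print(f"Estimated parametric yield: {yield_estimate:.3f}")
```

With thousands of simulator runs per iteration, it is easy to see why each Monte Carlo design iteration takes days rather than hours.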
High sigma Monte Carlo statistical
Circuits like bitcells, sense amps and standard cell digital library circuits are replicated thousands or millions of times and must, therefore, have a very low failure rate (e.g. one in a billion). A brute-force Monte Carlo run is not feasible, as it would take on the order of a billion samples just to observe a single failure. Alternatively, engineers could use a small Monte Carlo run and extrapolate with density estimation, but this is inaccurate because there is no information at the tails of the distribution.
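The back-of-the-envelope arithmetic below shows where the "billion samples" figure comes from; the one-in-a-billion failure rate is taken from the discussion above and the 95% confidence level is an illustrative assumption.

```python
# Back-of-the-envelope arithmetic behind the "billion samples" problem.
# The 95% confidence level is an illustrative choice.
import math

p_fail = 1e-9                        # target failure rate for a replicated cell
print(f"{1 / p_fail:.0e} samples per observed failure, on average")

# Samples needed for 95% confidence of observing at least one failure:
#   1 - (1 - p)^N >= 0.95   =>   N >= ln(0.05) / ln(1 - p)
N = math.log(0.05) / math.log(1 - p_fail)
print(f"~{N:.2e} samples for 95% confidence of seeing even one failure")
```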
Proximity effects
Designing for proximity variation effectively involves trading off costly schematic and layout iterations against over-designing and blowing out area. Ignoring proximity effects can lead to catastrophic failure, which can result in expensive foundry iterations and delays in getting to market. Engineers may defer managing proximity until the layout is complete and then measure proximity effects; however, trying to reach convergence in this way is time-consuming because it forces long design iterations. Finally, engineers could add a guard-band to every device and accept the heavy area penalty.
In examining these issues and the challenges they present, a pattern emerges. Regardless of the type of variation problem, engineers face a similar dilemma. With traditional approaches they face slow design iterations; the alternative is to risk design failures (under-design) or compromises in performance, power and area (over-design).
While methodologies exist to deal with variation issues, with the complexities involved at the sub-micron level, traditional techniques simply do not resolve the engineering dilemma. Engineers have traditionally been forced to deal with variation issues either by over-margining, which leaves significant performance, power and area on the table, or under-designing, which results in yield failures. This is costly as it either vastly under-utilises the benefits of node migration or wastes money on failed silicon.
The question is: how can this dilemma be resolved so that engineers can iterate their designs quickly yet use an accurate model of variation so that over- and under-design are avoided? The answer lies in using appropriately chosen design-specific corners – a small representative set that simulates quickly, yet captures the bounds of the performance distribution. The first step of such a flow would be to verify the circuit using a verification tool.
If the design is acceptable, the flow is complete. If not, representative corners need to be extracted and the loop repeated for each type of variation problem: PVT, Monte Carlo statistical or high-sigma Monte Carlo statistical. At the end of the flow, engineers can use a proximity-aware tool to compute appropriate guard-bands. Engineers can combine the flows for different variation problems, for example undertaking a first-cut design using just a single nominal corner, then adding some PVT corners and designing against them, and finally adding Monte Carlo statistical corners.
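A minimal sketch of that verify / extract-corners / redesign loop is shown below. The single "design knob", the toy performance model and the corner values are placeholders standing in for real schematic edits, SPICE simulations and a corner-extraction tool.

```python
# Minimal sketch of the verify / extract-corners / redesign loop described above.
# The design is reduced to one sizing knob and the corners to scalar severities;
# in a real flow each step wraps simulator runs and a corner-extraction tool.

def simulate(design_knob, corner):
    """Toy performance model: a larger knob tolerates a harsher corner."""
    return design_knob - corner            # passes when the result is positive

def verify(design_knob, corners):
    return [c for c in corners if simulate(design_knob, c) <= 0]

def extract_corners(variation_type):
    """Stand-in for PVT / statistical / high-sigma corner extraction."""
    return {"PVT": [0.3], "MC_statistical": [0.5], "high_sigma": [0.8]}[variation_type]

design_knob = 0.2                          # first-cut design against the nominal corner
corners = [0.0]

for variation_type in ("PVT", "MC_statistical", "high_sigma"):
    corners += extract_corners(variation_type)
    while verify(design_knob, corners):
        design_knob += 0.1                 # "fix" the design against the failing corners

print(f"Final design knob: {design_knob:.1f}, corners covered: {corners}")
```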
Managing variation properly with appropriate tools and technology delivers considerable benefits in terms of improving design performance, power and area, maximising parametric yield, avoiding project delays and preventing catastrophic silicon failures.
To meet the challenge, engineers need fast, accurate, specialised variation analysis and design tools, such as those provided by Solido's Variation Designer platform, that scale with design complexity and take full advantage of the variation models provided by foundries. Fully integrated with the simulation environments and custom IC design flows of the leading EDA tool vendors (Cadence, Synopsys, Mentor, Magma and Berkeley Design) as well as the process flows of leading foundries (TSMC and Global Foundries), Variation Designer goes beyond purely analysing the impact of variation.
With packages to deal with PVT, Monte Carlo Statistical, High-Sigma Monte Carlo Statistical and Proximity variation issues, it also provides the ability to identify the electrical hotspots that have the greatest impact on performance and recommend design adjustments to meet the specification.
Solido draws on new core techniques and technologies based on two variation-aware design methodologies: Optimal Sampling and Design-Specific Corners. Optimal Sampling comprises efficient sampling algorithms that return the same accuracy of results in far fewer simulations than traditional PVT corner and Monte Carlo analysis.
Design-Specific Corners are a reduced number of representative corners that can be simulated quickly to accurately capture the boundaries of variation, including environmental and random variation. Using these techniques, the amount of information derived per simulation is maximised while the number of simulations is minimised. As a consequence, engineers no longer need to make the trade-off between accuracy and speed of results that is inherent in traditional variation analysis.
In doing so, design teams can recover anywhere between 15% and 50% potential loss in performance, power, area and yield as well as perform variation aware design tasks much more quickly. Furthermore, they can do so without disrupting their existing, standard design flows.
The key to fast, accurate and scalable PVT verification and corner extraction is evolutionary design of experiments with response surface modelling, enabling speed-ups of 2 to 50 times over traditional approaches.
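The evolutionary design-of-experiments algorithm itself is proprietary, but the response-surface idea can be illustrated with a simplified stand-in: simulate a small subset of the PVT grid, fit a cheap quadratic model, and use the model to nominate a likely worst-case corner for real simulation. All values and the synthetic delay model below are invented for illustration.

```python
# Simplified stand-in for response-surface-assisted PVT corner search.
# The grid values and the synthetic delay model are invented for illustration.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Full PVT grid: process skew, supply voltage, temperature
grid = np.array(list(product([-1, 0, 1], [1.08, 1.2, 1.32], [-40, 25, 125])), float)

def simulate(c):
    """Stand-in for a SPICE measurement at one corner (a synthetic delay)."""
    p, v, t = c
    return 1.0 + 0.2 * p - 0.8 * (v - 1.2) + 0.002 * t + 0.1 * p * (v - 1.2)

def features(X):
    """Quadratic response-surface terms."""
    p, v, t = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), p, v, t, p*v, p*t, v*t, p*p, v*v, t*t])

train_idx = rng.choice(len(grid), size=12, replace=False)   # simulate a small subset
y_train = np.array([simulate(c) for c in grid[train_idx]])
coef, *_ = np.linalg.lstsq(features(grid[train_idx]), y_train, rcond=None)

pred = features(grid) @ coef                 # model-predicted delay at every corner
candidate = grid[np.argmax(pred)]            # predicted worst case, then verify it for real
print("Predicted worst-case corner:", candidate, "->", simulate(candidate))
```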
Monte Carlo statistical verification can exploit Optimal Spread Sampling and density estimation for a 1.5x to 10x speed-up over naïve Monte Carlo, with no loss in accuracy. Monte Carlo statistical corner extraction produces corners at pre-specified target yields, leveraging density estimation, response surface modelling and non-linear programming for corners that are 5 to 10 times more accurate than naïve Monte Carlo corners.
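Optimal Spread Sampling itself is Solido's own technology; as a generic illustration of the same ideas, the sketch below uses a scrambled Sobol low-discrepancy sequence to obtain well-spread samples and then extracts a corner at a 99.9% target yield from a toy output distribution. The process-parameter dimensionality and the delay formula are assumptions made for the example.

```python
# Illustrative stand-in only: a Sobol low-discrepancy sequence plays the role of
# well-spread samples, and a corner is extracted at a 99.9% target yield.
import numpy as np
from scipy.stats import qmc, norm

dim, n = 4, 4096
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
u = sobol.random(n)                         # well-spread points in the unit cube
x = norm.ppf(u)                             # map to standard-normal process parameters

# Toy performance output; a real flow measures this with a simulator per sample
delay = 1.0 + x @ np.array([0.05, -0.03, 0.02, 0.01])

# Extract a design-specific corner at a pre-specified target yield (99.9%):
# take the sample whose output sits at that percentile of the distribution.
target_yield = 0.999
threshold = np.quantile(delay, target_yield)
corner_idx = np.argmin(np.abs(delay - threshold))
print("Corner process-parameter vector:", np.round(x[corner_idx], 3))
```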
For high-sigma statistical problems, high-sigma sampling techniques give orders-of-magnitude speed-ups with no loss in accuracy. The results of an optimal sampling run can be exploited to extract design-specific corners as representative failure cases to design against in order to improve the circuit.
For proximity problems, the key is to combine adaptive sampling with sensitivity analysis. By placing guard-bands only where they are needed, area can be reduced by upwards of 50% and over-design avoided. By giving visibility to proximity variation earlier in the design flow, under-design is also avoided.
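A minimal sketch of the "guard-band only where needed" idea, assuming a per-device sensitivity figure is already available from analysis: rank devices by how much a proximity-induced shift moves the output and band only the top contributors. The device names, sensitivities, area costs and shift budget are invented placeholders.

```python
# Minimal sketch of sensitivity-driven guard-banding.
# Device names, sensitivities, area costs and the shift budget are placeholders.
devices = {
    # name: (output sensitivity to proximity shift, guard-band area cost in um^2)
    "M1": (0.40, 1.2),
    "M2": (0.25, 0.8),
    "M3": (0.05, 1.5),
    "M4": (0.02, 0.6),
}

budget = 0.10                 # tolerable output shift once guard-bands are applied
total_shift = sum(s for s, _ in devices.values())

added_area = 0.0
for name, (sensitivity, area) in sorted(devices.items(), key=lambda kv: -kv[1][0]):
    if total_shift <= budget:
        break                              # remaining devices need no guard-band
    total_shift -= sensitivity             # guard-band neutralises this device's shift
    added_area += area
    print(f"Guard-band {name}: residual shift {total_shift:.2f}, area +{area} um^2")

# Guard-banding every device would cost 4.1 um^2; here only M1 and M2 are banded.
print(f"Total guard-band area added: {added_area} um^2")
```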
To conclude, as leading edge design moves further into sub-micron process geometries, engineers must manage variation effectively to satisfy profitability requirements.
For engineers, this translates into the need to manage diverse variations (global and local process variation, environmental variation, etc) and reconcile yield with performance (power, speed, area, etc), all under intense time pressure. The deployment of a variation-aware design flow using design-specific corners enables engineers to manage variation effectively, because the corners simulate quickly yet represent the bounds of performance.
Amit Gupta is co-founder, president and CEO of Solido Design Automation.