Solving the DFM puzzle
Until now, design for manufacturing has been all about the individual processes. If it is to be successful, it must be applied on a broader scale.
DFM processes such as lithography-friendly design, critical area analysis and planarity analysis have been used independently to solve one particular manufacturing problem at a time (see Figure 1).
Although there have been some limited attempts to incorporate these processes into the design flow (for example, DFM-aware place and route tools), the industry has not yet put together an integrated DFM toolset, nor has it brought manufacturing processes into the flow. More than that, we are still locked into applying DFM to one integrated circuit design at a time.
The next big leap that designers and manufacturers have to take – and they have to take it together – is to begin regarding DFM as a holistic, evolutionary process in which information about manufacturing design sensitivities is gathered across all designs implemented on a given process node to improve our long-term chances of success. DFM needs to evolve into a system that can capture learnings from one tape-out and apply them to the next, as a company releases multiple designs within a process node and as those designs move from prototyping to production volumes.
Why is this so important? The external specifications of an ic – such as functionality, clock rate and power consumption – determine the competitiveness of a product. As we migrate to smaller nodes, process variability has a growing impact on core design issues such as timing, speed and power. When DFM is tied to these core issues, it affects all designers, not just the 'DFM team'.
To be successful and profitable in the ic business, designers need to 'out-design' their competitors. Everyone faces the same manufacturing limitations, so the company that finds the best ways to minimise these limitations holds the advantage. It's not enough to be first to market anymore; now, you have to be 'first to the money' with a product that not only performs, but which can also be produced profitably and reliably.
DFM has traditionally been viewed primarily as a yield improvement strategy, but embracing and mastering DFM in its fullest implementation can provide that competitive advantage in terms of design optimisation. The premise of DFM is simple: if designers can selectively reduce the size of guard bands based on superior knowledge of how physical design features interact with manufacturing variability, they can design a product with more competitive performance specifications. In effect, DFM provides the sensitivity analysis needed to 'tighten up' the design process.
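To make the guard-band argument concrete, consider a toy model in which the timing margin on a critical path is set at k standard deviations of its process-induced delay variation. The Python sketch below uses invented numbers and a deliberately simple k-sigma model; it is an illustration of the principle, not a description of any production DFM flow.

```python
# Toy guard-band model (hypothetical values; real sign-off uses far richer statistics).
def guard_band_ps(sigma_ps: float, k: float = 3.0) -> float:
    """Timing margin reserved to cover process-induced delay variation."""
    return k * sigma_ps

def max_clock_mhz(nominal_delay_ps: float, sigma_ps: float, k: float = 3.0) -> float:
    """Clock rate achievable once the guard band is added to the nominal path delay."""
    period_ps = nominal_delay_ps + guard_band_ps(sigma_ps, k)
    return 1e6 / period_ps  # period in picoseconds -> frequency in MHz

# A pessimistic, corner-based sigma versus a tighter, DFM-characterised sigma.
print(round(max_clock_mhz(800.0, 40.0)))  # ~1087 MHz
print(round(max_clock_mhz(800.0, 25.0)))  # ~1143 MHz
```

The point of the sketch is simply that better knowledge of how layout features interact with variability shrinks sigma, which shrinks the guard band and buys back performance without any change to the process itself.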
To realise the full value of DFM, this analysis must not be confined to a single design; it should be based on cumulative process data from all designs at a given technology node. DFM then becomes an integral part of the classical yield learning process.
How do DFM methods and tools need to change to enable this closed-loop engineering approach? First, we must adopt a lifecycle approach, in which DFM data becomes an integral part of the design and manufacturing process, and is saved and refined from one design to the next.
Next, we must adjust our mindset to view increasing process variability as an opportunity to customise products and produce better results than the competition, rather than seeing it as the unwelcome effect of shrinking geometries. We need to put in place a system that can collect and leverage volume test data from manufacturing operations to create a feedback loop that can identify and analyse systematic yield loss, then use that information to drive proactive design-side corrective actions.
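One way to picture such a feedback loop is a simple aggregation of volume diagnosis results by suspected cause, so that systematic mechanisms stand out from random defects. The sketch below is hypothetical Python; the record format, field names and threshold are assumptions for illustration and do not reflect any particular diagnosis tool's output.

```python
# Sketch of a diagnosis-driven feedback loop (data format and names are invented).
from collections import Counter

# Each record pairs a failing die with the cause suspected by volume diagnosis.
diagnosis_records = [
    ("die_001", "via_open_metal3"),
    ("die_002", "random_particle"),
    ("die_003", "via_open_metal3"),
    ("die_004", "bridge_metal2_min_space"),
    ("die_005", "via_open_metal3"),
]

def rank_systematic_causes(records, min_count=2):
    """Count failures per suspected cause; flag those frequent enough to look systematic."""
    counts = Counter(cause for _, cause in records)
    return [(cause, n) for cause, n in counts.most_common() if n >= min_count]

for cause, n in rank_systematic_causes(diagnosis_records):
    print(f"{cause}: {n} failing die -> candidate for a design-side corrective action")
```

In practice the causes would be binned by layout pattern or DFM rule and normalised by how often each pattern occurs, but even this crude ranking shows how volume data can point the design team at specific, recurring mechanisms.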
Finally, we need to use design constraint data provided by the foundries to drive intelligent manufacturing optimisations downstream, starting with smarter tape-out operations and mask optimisation processes, as shown in Figure 2.
Some foundries are beginning to force these changes as they add DFM requirements to their qualification kits. Many current DFM tools may be too conservative to ensure both the manufacturability and the profitability of advanced node designs. Likewise, DFM tools that are not integrated into design flows create more work for designers and distract them from their primary objectives. Ideal DFM tools provide immediate and intuitive insight into how designs are rendered in the manufacturing process, and give designers directed guidance on which specific design improvements will deliver the greatest increase in overall yield.
However, three things need to happen before interoperability becomes the norm. First, the value of DFM needs to be validated through use in real production environments at major design firms and foundries, and documented with real production yield data. Second, DFM tools need to be tightly integrated with the design environments currently used by ic designers. Finally, the tools need to have designer-friendly interfaces that give specific and intuitive guidance on how designs can best be improved, or the ability to make fixes automatically.
On the manufacturing side, we need to better understand how specific design practices interact with the manufacturing process in order to determine the best DFM design practices. However, in manufacturing, production test and diagnosis focus primarily on die failures; whether a failure is DFM-related or not isn't relevant to the analysis. Companies don't have the time or resources to characterise features that aren't failing (to find out how much more they could optimise), or to characterise failure rates for every DFM rule. What design houses need is a way to analyse test data that is specific to their designs, so they can use those results to automatically adjust the DFM rule set for specific design styles and market goals.
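As a hypothetical illustration of that kind of adjustment, the sketch below reweights DFM rules according to how often violations of each rule are actually implicated in observed failures for a particular design. The rule names, counts and thresholds are invented; a real system would be driven by the foundry's rule deck and the company's own diagnosis data.

```python
# Hypothetical reprioritisation of DFM rules from design-specific test data.
rule_stats = {
    # rule_name: (instances present in the design, failures attributed to the rule)
    "via_doubling_recommended": (12000, 18),
    "min_metal_density":        (3500, 1),
    "litho_hotspot_pattern_A":  (900, 7),
}

def reprioritise(stats, promote_rate=1e-3, demote_rate=1e-4):
    """Assign each rule a severity based on its observed failure rate in this design."""
    decisions = {}
    for rule, (present, failed) in stats.items():
        rate = failed / present if present else 0.0
        if rate >= promote_rate:
            decisions[rule] = "required"      # strongly correlated with failures here
        elif rate <= demote_rate:
            decisions[rule] = "optional"      # little observed impact in this design style
        else:
            decisions[rule] = "recommended"
    return decisions

print(reprioritise(rule_stats))
```

Such a scheme only works if enough failing material has been diagnosed, which leads directly to the next problem.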
One of the main challenges in integrating DFM, production test, diagnosis and yield analysis is that designs must be pushed to failure to get the data needed to make meaningful design changes. This can be a chicken-and-egg situation for a lot of companies – you can't get the data without failures, but failures cost time and money.
One option would be the development of a unified platform, providing automated support and tool integration to let design, test and yield engineers share information and work together to optimise designs and yields. All the pieces exist, but no one has quite put them all together. Some companies are ahead of the game, with tools built on the same underlying engine, which allows easier integration and exchange of information. Companies with many designs, or multiple spins of a single design, that can apply what they learn to the next design in a short timeframe will likely be the leaders in adopting and benefiting from a unified approach.