Over the past 20 years, the integrated circuit (IC) industry has progressed in ways that once seemed unimaginable, and we are now witnessing the next phase of that change. Harnessing the opportunities presented by advances in semiconductor process technology has required the continuous development of new tools and methodologies to keep the design engineers working with these technologies as productive as possible. In tandem with these developments, machine learning (ML) has progressed to the point where it has become one of those phrases that everyone seems to be using these days, but what does ML mean, and more importantly, what does it mean for chip design? A quick trip through the annals of digital design, and a look at how ML is taking it to the next level, will leave you in no doubt that ML is indeed the future of chip design.
Productivity challenge
In the nascent days of digital design, engineers employed a full-custom approach to circuit layout, manually placing each transistor before connecting it to surrounding devices - an arduous and often time-consuming task. Designers soon realised that layout could be accelerated by using standard cells and schematic netlists to implement digital logic designs. However, creating schematic netlists was also manually intensive. The advent of desktop Unix workstations made register transfer level (RTL) synthesis possible, allowing engineers to describe digital logic functions in high-level hardware description languages (such as VHDL and Verilog) from which a netlist of thousands of logic gates could be synthesised quickly. Although this overcame the design-entry problem, it inadvertently created another: how to lay out vast numbers of standard cells. This, in turn, was solved by the development of automated place-and-route, and the combined effect of these two technologies was to massively increase the productivity of the digital design flow, allowing designers to focus on optimising power, performance and area (PPA).
However, the design challenge scales with the magnitude of the design, and standard-cell counts have quickly grown from thousands to millions. While the size of ICs continues to increase, the number of available IC design engineers is not keeping pace, creating a productivity gap that continues to widen. As foundry process dimensions shrink, transistor density inevitably increases. At 7nm and below, it is no longer feasible to build chips from blocks of ‘only’ 2 million cells; ICs with over 50 blocks, each containing more than 5 million cells, are becoming typical. For the industry to keep pace with this rising complexity, design engineers must become more productive.
Machine learning is ideally placed to help achieve this goal.
ML in EDA
In 1959, Arthur Samuel, a pioneer in computer gaming and artificial intelligence, defined ML as the “field of study that gives computers the ability to learn without being explicitly programmed”. It is difficult to automate each element of the chip design process in such a way that it can be ‘programmed’, because each element relies heavily on the experience of the individual engineer.
Traditionally, the industry has tackled massive chip design projects by breaking them into smaller tasks, with a significant amount of time spent crafting the ingredients that go into the recipe for making a chip. These include different approaches to timing closure, placement constraints and floor-planning, as well as understanding the finer details of electromigration and adhering to the design rules of specific processes. There are many variables requiring the input of many experts, and almost every chip design company, large or small, has resident ‘gurus’ with expertise in timing, floor-planning or power convergence.
What makes ML ideally suited to design automation is that so much of the design process is manual, requiring the iterative evaluation of predictable ‘what-if’ scenarios. The power of ML inferencing is that it does not need every data point in a dataset to arrive at a result, so a trained model can deliver improved results in less time than a manual approach. While ML was little more than a theory back in 1959, massive advances in computer technology - multiple powerful GPUs running in parallel and the ability to perform complex calculations in the cloud - have allowed computer scientists to make huge strides in the field. In recent years, ML has been successfully applied to individual tasks within the design flow (e.g., post-route timing), but its full power will only be revealed by applying it at an even higher level of abstraction.
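To make this concrete, consider ML applied to one such individual task: predicting post-route timing from features sampled from earlier flow stages, so that weak candidates can be screened without committing to a full route. The sketch below is purely illustrative - the feature names, data and model choice are assumptions, not any vendor's actual tool.

```python
# Minimal sketch: learn to predict post-route worst negative slack (WNS)
# from pre-route features. All feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row holds features sampled from a completed flow run, e.g.
# [cell_count, utilisation, clock_period, congestion_estimate].
X = rng.random((500, 4))
# Target: post-route WNS in ns (synthetic stand-in for real signoff data).
y = 0.3 - 0.8 * X[:, 1] - 0.4 * X[:, 3] + 0.05 * rng.standard_normal(500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")

# Screen a new candidate configuration before committing to a full route.
predicted_wns = model.predict(rng.random((1, 4)))[0]
print(f"Predicted WNS for candidate: {predicted_wns:.3f} ns")
```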
Design flow optimisation
While ML has delivered improvements in individual components of the design flow, the next step is to use it to accelerate the entire flow, which has always demanded high levels of manual interaction within the design team. Capturing this expertise in an intelligent system can potentially have an even greater impact on productivity. In the current manual, iterative flow development process, designers create an initial flow, run the design and generate results. Based on those results, experienced engineers adjust the flow, which is then rerun to generate new results. This cycle continues until the desired PPA is achieved or the team exhausts its schedule and must accept the available results. It requires significant engineering effort and is an inefficient use of computing resources, and adding more engineers to the team does not necessarily translate to a PPA improvement.
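In outline, this manual loop looks something like the sketch below. The function names and toy scoring are hypothetical stand-ins; in reality each run_flow() call is a full RTL-to-GDS run that can take hours or days, which is what makes this sequential, human-in-the-loop cycle so expensive.

```python
# Sketch of the manual, iterative flow-tuning loop described above.
import random

PPA_TARGET = 0.90  # hypothetical composite PPA score the team must reach

def run_flow(settings):
    """Toy stand-in for a full RTL-to-GDS run returning a PPA score."""
    return min(1.0, settings["effort"] * random.uniform(0.8, 1.0))

def expert_adjusts(settings, score):
    """Stand-in for an engineer tweaking the flow after reviewing results."""
    settings["effort"] += 0.05  # e.g., raise placement/optimisation effort
    return settings

settings, best = {"effort": 0.7}, 0.0
for iteration in range(10):        # the team's schedule budget
    score = run_flow(settings)     # one full, expensive run
    best = max(best, score)
    if score >= PPA_TARGET:
        break                      # desired PPA achieved within schedule
    settings = expert_adjusts(settings, score)
print(f"Best PPA score within schedule: {best:.2f}")
```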
Now, there is a revolutionary, machine-learning-driven approach to chip design flow optimisation. This new approach allows engineers to specify PPA targets, and it then optimises all aspects of the digital RTL-to-GDS flow, in a fully automated way, to meet those targets much more quickly than a manual flow. It also makes it possible for engineers to optimise the flow for multiple blocks concurrently. Today’s engineer can build on existing ML architectures and take advantage of the massive computing power now available.
ML uses only samples of real-time design data, allowing it to make optimisation decisions ‘on-the-fly’. This means it can immediately halt runs that are not converging on better PPA results and reallocate their computing resources to alternative configurations. This approach is far more efficient than manual flow tuning, where results are only reviewed at the end of each run. During flow optimisation, the learning engine analyses huge volumes of design data, and as the reinforcement learning process proceeds it builds a machine learning model that captures this analysis. That model can then serve as the starting point for future design flow optimisation, re-using data between projects, saving significant computing time and delivering improved PPA even more quickly.
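The sketch below illustrates the general technique (a successive-halving style search with early stopping), not Cadence's actual engine: several candidate flow configurations advance stage by stage, intermediate PPA is sampled, and the weaker half is halted at each stage so compute shifts to the more promising candidates. The configuration parameters and scoring function are invented for illustration.

```python
# Early-stopping flow optimisation sketch: advance candidate configurations
# in stages, sample intermediate PPA, and halt runs that are not converging.
import random

random.seed(1)

# Hypothetical flow configurations: (placement effort, congestion weight).
candidates = {f"config_{i}": (random.random(), random.random())
              for i in range(8)}

def sampled_ppa(config, stage):
    """Toy stand-in for sampling a composite PPA score mid-run."""
    effort, weight = config
    progress = (stage + 1) / 4
    return progress * (0.6 * effort + 0.4 * weight) + random.uniform(0, 0.05)

alive = dict(candidates)
for stage in range(4):  # e.g., placement, CTS, routing, post-route opt
    scores = {name: sampled_ppa(cfg, stage) for name, cfg in alive.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    keep = ranked[: max(1, len(ranked) // 2)]   # halt the weaker half early
    alive = {name: alive[name] for name in keep}
    print(f"stage {stage}: continuing with {keep}")

print("best configuration found:", next(iter(alive)))
```

A model trained on the data gathered during such a search could then seed the next project's search, which is the re-use of data between projects described above.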
Conclusion
The continued growth of the semiconductor industry will require chip design engineers to become more productive. By taking advantage of the cloud-enabled, parallel and distributed computing resources now available, ML tools will further improve PPA and allow engineering teams to achieve the productivity levels needed to meet the challenges posed by larger and increasingly complex chip designs. In the past, EDA tools increased the productivity of engineers. From here forward, ML will increase the productivity of EDA tools and, hence, the engineers who use them.
Author details: CHIN-CHI TENG, PhD, Senior Vice President & General Manager, Digital & Signoff Group, Cadence