Outlook 2020 - Swapping Moore for Kurzweil in pursuit of accelerating returns


The end of Moore’s Law has been proclaimed on many occasions, and it’s probably safe to say that we are now working in the post-Moore era. But no one is ready to slow down just yet. We can view Gordon Moore’s observation on transistor densification as just one aspect of a longer-term underlying technological trend: the Law of Accelerating Returns articulated by Ray Kurzweil.

Arguably, companies became somewhat complacent in the Moore era, happy to settle for the gains brought by each new process node. Although we can expect scaling to continue, albeit at a slower pace, the end of Moore’s Law delivers a stronger incentive to push other trends harder.

Exciting new technologies are now emerging, such as multi-chip 3D integration: having maxed out every square millimetre, we go cubic. Rapid gains will also come from storage-class memory and silicon photonics. As we come up against the physical speed limit for conventional electrical I/O, around 100Gb/s per metre, the quest to increase the speed of multi-chip connectivity and lower I/O power will draw silicon photonics technology into future generations of advanced ICs such as FPGAs.
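As a back-of-the-envelope illustration, treating that limit as a bandwidth-distance product (an interpretation, with deliberately round numbers):

$$ B \times L \approx 100\ \mathrm{Gb/s \cdot m} \;\;\Rightarrow\;\; B \approx 100\ \mathrm{Gb/s}\ \text{at}\ L = 1\ \mathrm{m}, \qquad B \approx 1\ \mathrm{Tb/s}\ \text{at}\ L = 10\ \mathrm{cm} $$

Optical links largely escape this trade-off, since fibre and waveguide losses are nearly independent of reach at these scales, which is exactly why photonics becomes attractive as connectivity stretches beyond the package.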

A golden age for computer architecture
This is genuinely a hugely exciting period. John Hennessy and David Patterson, the 2017 Turing Award winners and professors at Stanford and Berkeley respectively, have hailed a golden age for computer architecture, with the pursuit of domain-specific optimisations among its key drivers. I can point to an example in Xilinx’s AI Engine, one of the most important and powerful features of the Versal ACAP (adaptive compute acceleration platform), which we introduced in October 2018. I think it’s unlikely we would have done anything like this even just a few years ago, when performance gains were more easily achievable elsewhere.

Today, progress is not only about processing performance. I mentioned earlier the development of silicon photonics as a technology to increase I/O speeds. In fact, the explosion of AI workloads is one of the most powerful forces shifting our attention towards faster ways of moving data into, across, and out of accelerators like Versal chips. The programmable high-bandwidth network-on-chip (NoC) interconnect of Versal, and other design features such as the close, short connections between distributed on-chip memories and processing elements, are further examples of the ways chip makers are looking beyond scaling to achieve next-generation performance gains.

Right now, AI is probably the dominant influence on current and future processor architectures. It’s fair to say that the demands of data-centre applications drive much of what Xilinx is doing today, and the story there is workload diversification.

Historically, hyperscale data centres have served as huge repositories of our data archives, storing video, pictures, and audio, and serving content on demand. Increasingly, all of us, as individuals and as businesses, are demanding much more as we connect autonomous vehicles and numerous IoT data streams from smart factories, smart cities, and smart infrastructure. We need help finding the deep insights required to keep raising business productivity, energy efficiency, public safety and security, and standards of living.

Adaptable and configurable data centres
Given this diversification of workloads, data centres need different arrays of resources to tackle them efficiently. Data-centre architectures are moving away from rigid CPU-centric structures towards adaptability and configurability, so that resources such as memory and accelerators can be optimised for individual workloads. There is no longer a single figure of merit; it’s not all about Tera-OPS. Other metrics such as transfers-per-second and latency come to the fore as demands become more real-time, autonomous vehicles being an obvious and important example.
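To make the multi-metric point concrete, with deliberately invented numbers: consider a 100 Tera-OPS accelerator that reaches its headline rating only at a batch size of 64, against a 20 Tera-OPS device that runs at batch size 1, on a workload of 10 GOP per input:

$$ \frac{64 \times 10\ \mathrm{GOP}}{100\ \mathrm{TOPS}} = 6.4\ \mathrm{ms} \qquad \text{versus} \qquad \frac{1 \times 10\ \mathrm{GOP}}{20\ \mathrm{TOPS}} = 0.5\ \mathrm{ms} $$

The first device wins every throughput benchmark, but under a 5ms real-time deadline only the second qualifies.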

Clearly this is an area where Xilinx’s expertise in programmable devices is directly applicable, and we are directing solutions like the Versal ACAP to meet these industry needs. It shows how far the company has come when you consider that our early business was mostly with ASIC designers, seeking faster design cycles and lower engineering costs, and with EDA software users.

Today, our customer base is transitioning towards computer scientists and data scientists, and this is placing increased emphasis on delivering powerful tools to help them unleash the maximum power from programmable devices without needing to know the low-level architectural details. Our PYNQ – Python on Zynq – initiative will, I believe, be an important move that makes advanced programmable architectures more usable for more diverse engineering communities.
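To give a flavour of what that looks like in practice, here is a minimal PYNQ sketch that loads a hardware overlay and streams data through it from Python. The overlay file (base.bit) and the DMA instance name (axi_dma_0) are assumptions for illustration; both depend on the board image and the design loaded onto the fabric.

```python
# Minimal PYNQ sketch: program the FPGA fabric and move data through it.
# "base.bit" and "axi_dma_0" are placeholder names for this illustration.
import numpy as np
from pynq import Overlay, allocate

overlay = Overlay("base.bit")        # program the fabric with a bitstream
dma = overlay.axi_dma_0              # hypothetical DMA IP inside the overlay

# Physically contiguous buffers that the hardware can reach directly.
in_buf = allocate(shape=(1024,), dtype=np.uint32)
out_buf = allocate(shape=(1024,), dtype=np.uint32)
in_buf[:] = np.arange(1024, dtype=np.uint32)

dma.sendchannel.transfer(in_buf)     # stream data into the fabric
dma.recvchannel.transfer(out_buf)    # collect the processed result
dma.sendchannel.wait()
dma.recvchannel.wait()
```

The point is the register of the code: a data scientist manipulates NumPy arrays while the architectural details stay hidden underneath.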

The rise of 5G
How will priorities change in the future? The transition to 5G is an area to which Xilinx has devoted intensive resources, and where we can provide value propositions no one else can offer. In many cases, solutions operate across the traditional boundaries between the cloud, the edge, and embedded platforms that are necessarily power-conscious and cost-sensitive. There is plenty more to do on the tools front to accommodate this spread of applications across those boundaries.

Like IoT, 5G will be heavily reliant on edge computing and machine learning. We all know that these technologies are just at the beginning of their development, and there is much greater potential to be unleashed as our understanding grows. Today, commercial machine-learning applications are realised in two phases: the first comprises data collection, data labelling, and neural-network training, while the second is deploying the trained inference engine in the field.
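In code terms, the two-phase pattern looks something like this; the data, labels, and least-squares "model" are toy stand-ins chosen so that the sketch runs with NumPy alone:

```python
# A toy rendering of the two-phase workflow: train offline, deploy frozen.
import numpy as np

# Phase 1: collect data, label it, train offline.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                          # "collected" data
y = X @ np.arange(1, 9) + 0.1 * rng.normal(size=1000)   # "labelled" targets
weights, *_ = np.linalg.lstsq(X, y, rcond=None)         # offline training

# Phase 2: freeze the parameters and ship a fixed inference engine.
def inference_engine(sample: np.ndarray) -> float:
    """Fixed-function inference: no further learning in the field."""
    return float(sample @ weights)

print(inference_engine(rng.normal(size=8)))
```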

Already we can see that this established sequence is cumbersome and slow, demanding massive quantities of data and laborious labelling that require huge resources and infrastructure. Many see it as unsustainable, from both energy-consumption and time-to-market standpoints. Nor is the technology accessible to enough developers to deliver the solutions we will need. Transitioning from traditional data-intensive training of neural networks to reinforcement learning could provide faster and more economical strategies by enabling training and deployment to happen concurrently.
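As a minimal sketch of that concurrent alternative, here is an epsilon-greedy bandit in pure Python; the reward probabilities are invented and stand in for whatever feedback a deployed agent would receive in the field:

```python
# Reinforcement-learning flavour: the agent is deployed and training at once.
import random

ARMS = 4
true_reward = [0.2, 0.5, 0.7, 0.4]   # hidden environment (illustrative)
estimates = [0.0] * ARMS
counts = [0] * ARMS

for step in range(10_000):
    # Deployment: act on the current policy, occasionally exploring.
    if random.random() < 0.1:
        arm = random.randrange(ARMS)
    else:
        arm = max(range(ARMS), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_reward[arm] else 0.0

    # Training: fold the observed reward straight back into the policy.
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("preferred arm:", estimates.index(max(estimates)))  # almost surely arm 2
```

There is no separate labelling stage and no offline training run; every action taken in the field doubles as a training example.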

The relevance of blockchain
There is one more industry megatrend that I want to mention, and it’s blockchain. To some, it may already have a bad reputation, tarnished by association with the anarchy of cryptocurrency, but I believe it will be more widely relevant than many of us realise. Who could have foreseen the development of today’s Internet when ARPANET first appeared as a simple platform for distributed computing and sending email? Through projects such as the open-source Hyperledger, blockchain technology could be game-changing as a platform for building trust in transactions executed over the Internet.
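The trust property rests on a simple structure: every block commits to a hash of its predecessor, so tampering with any record invalidates everything downstream. Here is a minimal sketch using Python’s hashlib; the transaction payloads are illustrative, and a real ledger such as Hyperledger Fabric adds consensus, signatures, and much more:

```python
# Minimal hash chain: each block commits to the hash of the previous one.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]
for i, payload in enumerate(["tx: A->B 5", "tx: B->C 2"], start=1):
    chain.append({"index": i, "prev": block_hash(chain[-1]), "data": payload})

# Verification: re-deriving the links exposes any altered block.
valid = all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("chain valid:", valid)
```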

We may soon be talking in terms of the Trusted Internet: one that protects privacy by enabling people to prove facts about their data without having to hand over the data itself, and that finally delivers a solution to problems such as fake news by clearly identifying the origins and sources of information. We need to find ways to build and scale blockchain applications efficiently, and technologies such as ACAP that accelerate compute, storage, and networking will be a major part of the solution.

The predictability of Moore’s Law may have become rather too comfortable and slow. The future requires maximising the flexibility, agility, and efficiency of our technologies, and reaching out to communities who may not yet be familiar with them, but whose intellects we must include if we are to achieve the advances we all need. With Moore’s Law now behind us, we can more clearly see the inevitability of Kurzweil’s Law of Accelerating Returns.

Author details:

Ivo Bolsens, Senior VP & Chief Technology Officer, Xilinx