In his opening address to the Forum, Ramine Roane, Xilinx's senior director of software and IP, said that with CPU architectures failing to meet demand, there had been a resurgence of interest in FPGAs as designers look to deliver both higher performance and lower latency.
“Processor frequency scaling stopped around 10 to 12 years ago and while the move to multicore architectures initially addressed this, we’re now seeing multicore architecture scaling flattening,” he argued.
“There are too many transistors switching at the same time and current leakage at lower geometries is hitting power constraint limits, and this is all happening at a time when workload demand is growing exponentially both in the Cloud and at the edge,” he suggested.
“We’re seeing increased demand for video transcoding in the Cloud, which requires both high performance and low latency, for example. Likewise, at the edge, we are seeing similar increased workloads, especially when it comes to data analytics.
“Processor speeds are not scaling, so there has been a need for application specific accelerators, whether that’s for video, AI, machine learning or search services.”
The problem, however, according to Roane, is that application specific accelerators can only be justified if developers are looking to address the needs of a particular ‘killer app’.
According to research undertaken by Google into workloads at its datacentres, so-called ‘killer apps’ – such as search and video transcoding – were found to account for just 9.9% of total CPU cycles.
“Over the past few years, Google has found that its data centres are seeing workloads becoming more diverse, with demand changing rapidly.”
As a result, Roane suggests that there’s been a move away from building application specific accelerators, even for video.
“We are seeing a move towards reconfigurable accelerators which are, in most cases, the only viable option for addressing this type of flexible workload – and this is not only valid in the Cloud, but also at the edge.”
Reconfigurable accelerators
According to Roane, this move to reconfigurable accelerators is playing to the strengths of FPGAs and SoCs.
“They provide configurable processor sub-systems and hardware that can be reconfigured dynamically. Design engineers can build their own custom data flow graph, tailored to their application, with its own custom memory hierarchy. That is probably the biggest advantage, as it lets you keep data internal to your pipeline.”
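To make the data-flow point concrete, the sketch below models a three-stage pipeline in plain C++, with intermediate results held in internal queues – the software analogue of the on-chip FIFOs an FPGA design would use – so that data only touches external memory at the ends of the pipeline. It is an illustrative sketch only; the stage operations and buffer choices are assumptions, not Xilinx tooling.

```cpp
// Minimal C++ sketch of a three-stage dataflow pipeline in which intermediate
// results stay in small internal buffers rather than going back to external
// memory. In an FPGA design these queues would map to on-chip FIFOs/BRAM;
// the stage operations and sizes here are purely illustrative.
#include <cstdint>
#include <iostream>
#include <queue>
#include <vector>

int main() {
    std::vector<int32_t> input{1, 2, 3, 4, 5, 6, 7, 8};
    std::queue<int32_t> stage1_to_2;   // models an on-chip FIFO
    std::queue<int32_t> stage2_to_3;   // models an on-chip FIFO
    std::vector<int32_t> output;

    // Stage 1: read from external memory once and push into the pipeline.
    for (int32_t x : input) stage1_to_2.push(x);

    // Stage 2: transform in flight; data never returns to external memory.
    while (!stage1_to_2.empty()) {
        int32_t x = stage1_to_2.front(); stage1_to_2.pop();
        stage2_to_3.push(x * x);        // example compute stage
    }

    // Stage 3: final stage writes results back out once at the end.
    while (!stage2_to_3.empty()) {
        output.push_back(stage2_to_3.front() + 1);
        stage2_to_3.pop();
    }

    for (int32_t y : output) std::cout << y << ' ';
    std::cout << '\n';
    return 0;
}
```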
While FPGAs can offer massive computational throughput and minuscule latency, the problem of how to program these devices remains: despite being incredibly powerful, they have proved difficult to engineer.
“It’s fair to say that FPGAs are not the easiest devices to program, compared to CPUs,” admitted Roane, “and historically they have been costly to manage and develop.”
Roane conceded that the cost of FPGA engineering, along with the complexity of programming the devices, has been an important reason why they haven't become mainstream.
However, Xilinx and its growing ecosystem of partners are now delivering a much richer development stack, allowing hardware, embedded and application software developers to program FPGAs more easily using higher-level languages such as C, C++ and OpenCL.
“We are now able to deliver a development stack that designers are increasingly familiar with and which is also available on the Cloud via secure cloud services platforms,” explains Roane.
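To give a sense of what that higher-level entry point looks like, the fragment below is the sort of plain C++ a high-level synthesis (HLS) flow can compile into FPGA logic. It is a deliberately minimal sketch rather than any specific Xilinx API: the kernel name and interface are assumptions, and vendor-specific pragmas are omitted.

```cpp
// Illustrative example of the kind of plain C++ a high-level synthesis flow
// can turn into FPGA hardware. The function name and data sizes are assumed
// for the sketch; vendor-specific pragmas and interfaces are omitted.
#include <cstddef>
#include <cstdint>

// A simple vector-add "kernel": in an HLS flow the loop below would typically
// be pipelined so that one result is produced per clock cycle.
void vadd(const int32_t* a, const int32_t* b, int32_t* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = a[i] + b[i];
    }
}

int main() {
    constexpr std::size_t N = 8;
    int32_t a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    int32_t b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    int32_t c[N] = {};
    vadd(a, b, c, N);   // on hardware this call would be an offload to the FPGA
    return c[0] == 9 ? 0 : 1;
}
```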
For example, Amazon Web Services' (AWS) EC2 F1 instances not only make it possible to program Xilinx FPGAs, but are also helping customers to move their workloads to the cloud. AWS also provides a hardware development kit for building FPGA configurations.
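As a rough illustration of what handing a workload to such an FPGA instance looks like on the software side, the sketch below uses the standard OpenCL host API to load a pre-built kernel binary and run it on an accelerator device. The binary filename ("vadd.bin") and kernel name ("vadd") are assumptions for the example rather than part of any AWS kit, and error handling is trimmed for brevity.

```cpp
// Hedged sketch of an OpenCL host program handing work to an FPGA kernel.
// Broadly the same host code runs whether the accelerator sits in a local card
// or a cloud instance; only the pre-built kernel binary changes. The file name
// "vadd.bin" and kernel name "vadd" are assumptions; error handling is trimmed.
#include <CL/cl.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

int main() {
    // Pick the first accelerator device the OpenCL runtime reports.
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, nullptr);

    cl_int err;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    // Load a pre-compiled FPGA kernel binary from disk (path assumed).
    std::ifstream f("vadd.bin", std::ios::binary);
    std::vector<unsigned char> bin((std::istreambuf_iterator<char>(f)),
                                   std::istreambuf_iterator<char>());
    const unsigned char* bin_ptr = bin.data();
    size_t bin_len = bin.size();
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &bin_len,
                                                &bin_ptr, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "vadd", &err);

    // Copy inputs to the device, launch the kernel, read the result back.
    std::vector<int> a(1024, 1), b(1024, 2), c(1024, 0);
    size_t bytes = a.size() * sizeof(int);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY, bytes, nullptr, &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY, bytes, nullptr, &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, nullptr, &err);
    clEnqueueWriteBuffer(q, da, CL_TRUE, 0, bytes, a.data(), 0, nullptr, nullptr);
    clEnqueueWriteBuffer(q, db, CL_TRUE, 0, bytes, b.data(), 0, nullptr, nullptr);
    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);
    clEnqueueTask(q, k, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, bytes, c.data(), 0, nullptr, nullptr);
    clFinish(q);
    std::printf("c[0] = %d\n", c[0]);
    return 0;
}
```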
“The ability to access FPGAs as a service via the Cloud has been a significant development in recent years. Platforms from Amazon or Alibaba Cloud provide much broader access and mean that FPGA development, as a service, is expanding worldwide.
“I think there’s a trend here,” he concluded, “and it is one in which things are going to get more accessible.”