Tachyum completes testing of debugger on its Prodigy FPGA prototype

Tachyum has revealed that it has completed testing of hardware debugging features on the FPGA prototype of its Prodigy Universal Processor.

Testing was conducted with the GNU Debugger (gdb), a widely used Linux tool for debugging software. Verifying correct operation under gdb is a key step in Prodigy’s move towards production.

As part of the hardware development process, debuggers are used to find and identify components that are not operating correctly or are incorrectly configured. The Prodigy platform provides four debug registers for hardware breakpoints and four for hardware watchpoints on memory operations.

The four hardware PC breakpoints can even be used to debug ROM code, where software breakpoints cannot be placed because the debugger is unable to patch instructions in read-only memory.
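As a rough illustration of how such features are typically exercised from gdb, the session below sketches setting a hardware breakpoint and hardware watchpoints on a remote target. The remote port, ROM address, and variable name are hypothetical placeholders, and the exact workflow on the Prodigy FPGA prototype is not described in the announcement; the commands themselves (`hbreak`, `watch`, `rwatch`, `info breakpoints`) are standard gdb.

```
(gdb) target remote :3333      # attach to the prototype via a debug probe (port is hypothetical)
(gdb) hbreak *0xffff0000       # hardware breakpoint on a ROM address; a software
                               # breakpoint cannot be written into ROM
(gdb) watch boot_stage         # hardware watchpoint: halt when the variable is written
(gdb) rwatch boot_stage        # halt when the variable is read
(gdb) info breakpoints         # list active break- and watchpoints
(gdb) continue
```

Because the platform exposes four breakpoint registers and four watchpoint registers, at most four of each kind can be armed in hardware at once; gdb reports an error if the target runs out of debug registers.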

“Having hardware debugging features incorporated into the Prodigy FPGA allows our customers to debug directly on the prototype itself instead of through less effective software processes,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “Previous debugging, which ran on Prodigy software c-model, is now running on FPGA and ensures that hardware issues can be resolved prior to tape-out. This is the latest feature we’ve incorporated into the Prodigy FPGA tests to ensure our customers have all the necessary tools and capabilities implemented into our Universal Processor on day 1 of launch.”

As a Universal Processor offering performance for all workloads, Prodigy-powered data centre servers will be able to dynamically switch between computational domains (such as AI/ML, HPC, and cloud) within a single homogeneous architecture. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilisation, Prodigy is intended to significantly reduce CAPEX and OPEX while delivering better performance and power efficiency more economically.

Prodigy integrates 192 high-performance custom-designed 64-bit compute cores and, according to Tachyum, can deliver up to 4.5x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and 6x for AI applications.