Researchers at the Georgia Institute of Technology have developed a new approach that intelligently processes radar array data closer to where it is generated - on the antenna subarrays themselves.
Combining technologies including machine learning, field-programmable gate arrays (FPGAs), graphics processing units (GPUs), and a new radio-frequency image processing algorithm, the research has streamlined the modular handling of radar signals to reduce processing time and cost. The improvements – as much as two or three orders of magnitude – could lead to real-time analysis of RF image data from sources ranging from potential enemy targets to speeding automobiles headed toward collisions.
The research, which has been tested on a 16-element digital antenna array, was funded by the Defense Advanced Research Projects Agency’s (DARPA) Tensors for Reprogrammable Intelligent Array Demonstrations (TRIAD) programme. While the project has so far focused on real-time imaging operations on vast amounts of data, it also supports the conventional beamforming operations performed by phased arrays.
“The goal is to push processing up front, to where all the raw data is coming in,” said Ryan Westafer, a principal research engineer at the Georgia Tech Research Institute (GTRI). “We work to manage the high-dimensional data there and extract features in real time. With so many data sources from autonomous vehicles to drones, we can’t be sharing all those raw data feeds. We need to be analysing the data locally and sharing only the information content – the relevant features.”
With potentially hundreds or even thousands of subarrays generating terabytes of data every second, Westafer said this “edge intelligence” can pull out the desired information in real time, allowing defence and transportation applications alike to get the important details right away – when they need them – without waiting for processing by backend servers.
“Classical approaches process the data in analogue format, selecting only certain components of the vast information flow for digitising where needed,” noted Alex Saad-Falcon, a Georgia Tech Ph.D. student and former GTRI researcher who co-led the project. Other portions of the data can be stored on a server for later analysis.
“We want to digitise all of the data, then off-load a smaller digital portion to be shared,” he said. “That gives more flexibility to antenna array algorithm designers, because it is much easier to create an algorithm in the digital domain – you can write it in code – versus analogue, where you have to design a circuit and get it built. That also facilitates reprogramming when conditions change.”
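To picture the flexibility Saad-Falcon describes: a digital delay-and-sum beamformer amounts to a few lines of code, and “reprogramming” it means swapping a weight vector rather than rebuilding hardware. The sketch below is purely illustrative – a minimal NumPy example assuming a hypothetical 16-element uniform linear array, not the project’s actual software:

```python
import numpy as np

# Hypothetical uniform linear array: 16 elements at half-wavelength spacing.
N_ELEMENTS = 16
D_OVER_LAMBDA = 0.5  # element spacing in wavelengths

def steering_weights(angle_deg: float) -> np.ndarray:
    """Complex weights that steer the array toward angle_deg (broadside = 0)."""
    n = np.arange(N_ELEMENTS)
    phase = 2 * np.pi * D_OVER_LAMBDA * n * np.sin(np.radians(angle_deg))
    return np.exp(-1j * phase) / N_ELEMENTS

def beamform(samples: np.ndarray, angle_deg: float) -> np.ndarray:
    """Delay-and-sum beamforming on digitised samples.

    samples: (N_ELEMENTS, n_snapshots) complex baseband data from the ADCs.
    Returns one beamformed output stream.
    """
    return steering_weights(angle_deg).conj() @ samples

# Steer toward 30 degrees, then "reprogram" to 45 degrees in software alone.
rng = np.random.default_rng(0)
x = rng.standard_normal((N_ELEMENTS, 1024)) + 1j * rng.standard_normal((N_ELEMENTS, 1024))
y30 = beamform(x, 30.0)
y45 = beamform(x, 45.0)
```

Re-steering the array is a one-line change to the weights – the reprogrammability that, in the analogue domain, would require a new circuit.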
FPGAs and GPUs are essential to Georgia Tech’s modular TRIAD approach. With low power consumption and high processing speeds, the FPGAs are located adjacent to the analogue-to-digital converters on the antenna subarrays. With help from GPUs, they process the data, quickly sending it to a central processing unit (CPU) where information from other subarrays is aggregated.
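In software terms, that staged flow might look like the sketch below. The stage boundaries and the FFT-based feature step are our assumptions for illustration – a toy NumPy model of the FPGA-to-GPU-to-CPU hand-off, not the project’s firmware:

```python
import numpy as np

def fpga_stage(adc_samples: np.ndarray) -> np.ndarray:
    # FPGA next to the ADC: a hypothetical channelisation step that keeps
    # only the positive-frequency half of the spectrum.
    spectrum = np.fft.fft(adc_samples, axis=-1)
    return spectrum[..., : adc_samples.shape[-1] // 2]

def gpu_stage(channelised: np.ndarray) -> np.ndarray:
    # GPU-class step: collapse each subarray's data to a compact feature
    # vector (per-channel energy) instead of forwarding raw samples.
    return (np.abs(channelised) ** 2).mean(axis=0)

def cpu_stage(subarray_features: list) -> np.ndarray:
    # CPU: aggregate the compact features arriving from every subarray.
    return np.stack(subarray_features).mean(axis=0)

# Four hypothetical 16-element subarrays, 1024 ADC samples each.
rng = np.random.default_rng(1)
subarrays = [rng.standard_normal((16, 1024)) for _ in range(4)]
features = [gpu_stage(fpga_stage(s)) for s in subarrays]
aggregate = cpu_stage(features)
print(aggregate.shape)  # (512,) features cross the link, not 65,536 raw samples
```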
As a key feature of the project, GTRI researchers collaborated with academic researchers in Georgia Tech’s School of Electrical and Computer Engineering (ECE) to utilise SoloPulse, a new array processing algorithm designed for radio-frequency images generated in synthetic aperture radar (SAR).
“The algorithm provides an estimate of energy coming from different points in the vicinity of the array,” Saad-Falcon explained. “That allows you to form an image, though you have some uncertainty about where the actual source is. The goal was to train the machine learning model to reduce that uncertainty, or learn from it to predict the source location.”
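SoloPulse itself belongs to the ECE group and its details are not reproduced here, but the kind of output Saad-Falcon describes – an energy estimate at each candidate point near the array – can be sketched with a generic narrowband focusing imager. Everything below (carrier frequency, geometry, function names) is a hypothetical stand-in, not SoloPulse:

```python
import numpy as np

C = 3e8   # propagation speed, m/s
FC = 1e9  # hypothetical carrier frequency, Hz

def energy_map(samples, element_pos, grid_pts):
    """Estimate the energy arriving from each candidate point near the array.

    samples:     (n_elements, n_snapshots) complex baseband snapshots
    element_pos: (n_elements, 2) element coordinates, metres
    grid_pts:    (n_points, 2) candidate source locations, metres
    The peak of the returned (n_points,) map marks the likeliest source;
    its spread is the uncertainty an ML model could learn to reduce.
    """
    k = 2 * np.pi * FC / C
    # Range from every grid point to every element: (n_points, n_elements)
    ranges = np.linalg.norm(grid_pts[:, None, :] - element_pos[None, :, :], axis=-1)
    steering = np.exp(1j * k * ranges)   # undo each point's propagation phase
    focused = steering @ samples         # (n_points, n_snapshots)
    return (np.abs(focused) ** 2).mean(axis=-1)

# 16 elements on a line, a 64 x 64 grid of candidate points in front of them.
elements = np.stack([np.linspace(-1.2, 1.2, 16), np.zeros(16)], axis=-1)
gx, gy = np.meshgrid(np.linspace(-5, 5, 64), np.linspace(1, 11, 64))
grid = np.stack([gx.ravel(), gy.ravel()], axis=-1)

# Synthetic emitter: phase-delayed copies of one signal at every element.
rng = np.random.default_rng(2)
src = np.array([1.0, 4.0])
tau = np.linalg.norm(elements - src, axis=-1) * 2 * np.pi * FC / C
samples = np.exp(-1j * tau)[:, None] * rng.standard_normal((1, 256))
emap = energy_map(samples, elements, grid)
print(grid[emap.argmax()])  # peaks near the true emitter at [1, 4]
```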
Though SoloPulse was not originally designed for the purpose the GTRI researchers needed, their collaborators – ECE Professor Christopher Barnes and Research Technologist J. Michael McKinney – supported its adaptation to the TRIAD goals.
Programming in the digital domain can utilise tensors – multilinear algebraic objects that generalise scalars, vectors, and matrices to higher dimensions. Utilising tensor operations also allows data representations to be shared with machine learning algorithms such as deep neural networks, which can learn how to improve their operation every time they receive new data.
“You funnel the data into the new artificial intelligence tensor operations, which you also bundle up, and then at the end you get a detection, some kind of an end result that is human-actionable,” said Saad-Falcon. “The whole idea is that because you frame both the traditional algorithms and the machine learning algorithms in the same format as these tensor operations, you can effectively chain them together and get speedups that you wouldn’t be able to get otherwise.”
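That chaining can be pictured in a few lines: if the classical front end (beamforming) and the learned back end (a small neural network) are both written as tensor operations, one framework can execute the whole pipeline as a single graph. The PyTorch sketch below is our own illustration under assumed shapes and layer sizes, not the TRIAD implementation:

```python
import torch
import torch.nn as nn

N_ELEMENTS, N_BEAMS, N_SNAPSHOTS = 16, 8, 256

class RadarChain(nn.Module):
    """Classical beamforming and a learned detector as one tensor pipeline."""
    def __init__(self):
        super().__init__()
        # Classical step: fixed beamforming weights, expressed as an ordinary
        # matrix so they compose with the neural network below.
        self.beam_weights = nn.Parameter(
            torch.randn(N_BEAMS, N_ELEMENTS, dtype=torch.cfloat),
            requires_grad=False)
        # Learned step: a small network mapping beam energies to a detection score.
        self.detector = nn.Sequential(
            nn.Linear(N_BEAMS, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, samples: torch.Tensor) -> torch.Tensor:
        # samples: (batch, N_ELEMENTS, N_SNAPSHOTS) complex baseband data
        beams = self.beam_weights @ samples       # tensor op: beamform
        energy = beams.abs().pow(2).mean(dim=-1)  # tensor op: beam energy
        return self.detector(energy)              # tensor op: learned detection

x = torch.randn(4, N_ELEMENTS, N_SNAPSHOTS, dtype=torch.cfloat)
score = RadarChain()(x)  # one chained graph of tensor operations
print(score.shape)       # torch.Size([4, 1]) - a human-actionable detection score
```

Because every stage is a tensor operation, the same graph can run end to end on accelerator hardware – the source of the speedups Saad-Falcon describes.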
Beyond accelerating the data processing, the use of FPGA and GPU chips could help conserve power, which can be critical for mobile applications. “You have a finite compute budget on the array, so you need to intelligently allocate the computation and use an algorithm that extracts the information you want from the signal most effectively,” he said. “This is of interest to a lot of different applications in the industry right now.”
Part of the project was a demonstration processing radar pulses received by the 16-element array. The researchers used a moving emitter on a turntable in their lab to evaluate TRIAD’s imaging ability. “We could immediately see the result, and our total latency from emitter motion to screen update was on the order of 20 milliseconds – almost faster than the human eye can see.”
The DARPA project concluded in December 2022, and the researchers are now looking at other potential applications for the technologies. Among the possible uses is shared perception, which could have applications in autonomous vehicle networks, for both commercial and defence needs.