MIT researchers claim compression breakthrough ‘faster than FFT’
Researchers from MIT have developed a new algorithm designed to improve on the fast Fourier transform (FFT) in a large number of cases. The discovery could pave the way for smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.
Under some circumstances, the team claims the new algorithm can result in a tenfold increase in speed.
The Fourier transform is a method for representing an irregular signal, such as the voltage fluctuations in the wire that connects an mp3 player to a loudspeaker, as a combination of pure frequencies. It is universal in signal processing and is also used for applications such as the compression of image and audio files. The Fourier transform owes much of its prevalence to the FFT algorithm, which makes it practical to calculate Fourier transforms on the fly. However, the MIT researchers believe they have found an even faster algorithm.
Like the FFT, the new algorithm works on digital signals. It takes a digital signal containing a certain number of samples of an analogue signal and expresses it as the 'weighted' sum of an equivalent number of frequencies. 'Weighted' means that some of the frequencies count more toward the total than others; many may have such low weights that they can be safely disregarded, which is why the Fourier transform is useful for compression. The researchers note that an eight-by-eight block of pixels can be thought of as a 64-sample signal – the sum of 64 different frequencies – but add that studies show that, on average, 57 of those frequencies can be discarded with minimal loss of image quality.
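The compression idea can be sketched with the ordinary FFT in a few lines of Python. This is only an illustration of discarding low-weight frequencies, not the researchers' method; the signal, the choice of NumPy's FFT, and the decision to keep exactly 7 of the 64 coefficients are illustrative assumptions taken from the figures in the text.

```python
import numpy as np

# A 64-sample signal standing in for a row of image pixels;
# a random walk is used purely as an illustrative smooth-ish signal.
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=64))

coeffs = np.fft.fft(signal)                # 64 weighted frequencies

# Keep only the 7 most heavily weighted frequencies; discard the other 57.
keep = np.argsort(np.abs(coeffs))[-7:]
compressed = np.zeros_like(coeffs)
compressed[keep] = coeffs[keep]

reconstructed = np.fft.ifft(compressed).real
error = np.linalg.norm(reconstructed - signal) / np.linalg.norm(signal)
print(f"relative error keeping 7 of 64 frequencies: {error:.3f}")
```

For a reasonably smooth signal, most of the energy sits in a handful of frequencies, so the reconstruction from 7 coefficients stays close to the original.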
Signals whose Fourier transforms include a relatively small number of heavily weighted frequencies are called 'sparse', and the new algorithm determines the weights of a signal's most heavily weighted frequencies. The sparser the signal, the greater the speed-up the algorithm can provide; if the signal is sparse enough, the algorithm can simply sample it randomly rather than reading it in its entirety.
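What "sparse" means here can be seen with a signal built from just three pure frequencies out of 1,024 possible ones. The example below uses the full FFT to expose those three heavy weights; the particular frequencies and weights are illustrative choices, and the sparse algorithm itself, unlike this sketch, would avoid computing the full transform.

```python
import numpy as np

n = 1024
t = np.arange(n)

# A sparse signal: only 3 of the n possible frequencies carry any weight.
tones = {50: 1.0, 300: 0.6, 417: 0.3}      # frequency -> weight (illustrative)
signal = sum(w * np.exp(2j * np.pi * f * t / n) for f, w in tones.items())

spectrum = np.fft.fft(signal) / n
heavy = {int(f): abs(spectrum[f]) for f in np.argsort(np.abs(spectrum))[-3:]}
print(heavy)                               # the 3 planted frequencies and weights
```

Every other coefficient is essentially zero, which is exactly the structure the new algorithm exploits.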
The new algorithm relies on two key ideas. The first is to divide a signal into narrower slices of bandwidth, sized so that a slice will generally contain only one frequency with a heavy weight. In signal processing, filters isolate particular frequencies, but they tend to have blurry boundaries: one range of frequencies passes through the filter more or less intact, while frequencies outside that range are attenuated more and more the further out they lie, until eventually they are filtered out almost entirely.
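The blurry-boundary behaviour can be demonstrated with the frequency response of a simple moving-average (boxcar) filter, which crudely isolates the lowest slice of bandwidth. This is a generic textbook filter, not the researchers' construction; the filter length and the probe frequencies are illustrative assumptions.

```python
import numpy as np

n = 512
# Time-domain boxcar: averaging 32 consecutive samples acts as a
# crude low-pass filter over the lowest slice of bandwidth.
h = np.zeros(n)
h[:32] = 1.0 / 32

response = np.abs(np.fft.fft(h))           # gain at each frequency

print(f"gain at frequency 0  (in-band):       {response[0]:.3f}")
print(f"gain at frequency 8  (near the edge): {response[8]:.3f}")
print(f"gain at frequency 64 (far outside):   {response[64]:.6f}")
```

The in-band frequency passes intact, the edge frequency is noticeably attenuated, and the distant frequency is suppressed almost perfectly, which is precisely the edge-of-filter problem the overlapping filters are designed to fix.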
According to the MIT team, if the one frequency with a heavy weight falls at the edge of a filter, it could end up so attenuated that it can't be identified. The researchers have found a computationally efficient way to combine filters so that they overlap, ensuring that no frequency inside the target range is unduly attenuated but that the boundaries between slices of spectrum remain fairly sharp. Once a slice of spectrum has been isolated, it is still necessary to identify the most heavily weighted frequency in that slice. The researchers did this by repeatedly cutting the slice of spectrum into smaller pieces and keeping only those in which most of the signal power was concentrated. They are currently working on an even more efficient technique that borrows a signal-processing strategy from 4G cellular networks. A frequency is generally represented as an 'up and down squiggle', but it can also be thought of as an oscillation; by sampling the same slice of bandwidth at different times, the researchers claim they can determine where the dominant frequency is in its oscillatory cycle.
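The "cut the slice and keep the piece with most of the power" step amounts to a binary search over frequency. The sketch below illustrates that search using the full FFT's power spectrum purely for clarity; the actual algorithm gains its speed precisely by avoiding the full transform, and the signal and its dominant frequency are illustrative assumptions.

```python
import numpy as np

n = 1024
t = np.arange(n)
dominant = 300                             # illustrative dominant frequency
signal = np.exp(2j * np.pi * dominant * t / n)

power = np.abs(np.fft.fft(signal)) ** 2    # power in each frequency bin

# Binary search: repeatedly halve the frequency range, keeping the half
# in which most of the signal power is concentrated.
lo, hi = 0, n
while hi - lo > 1:
    mid = (lo + hi) // 2
    if power[lo:mid].sum() >= power[mid:hi].sum():
        hi = mid
    else:
        lo = mid

print(f"dominant frequency found: {lo}")   # -> 300
```

Each halving discards half of the remaining candidate frequencies, so the dominant one is pinned down in about log2(n) steps rather than by examining every bin.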
The team will present the new algorithm at the Association for Computing Machinery's Symposium on Discrete Algorithms this week.