The team’s sensor uses one light-sensitive pixel to build up moving images of objects placed in front of it. Not only are single-pixel sensors much cheaper than the megapixel sensors found in digital cameras, but they can also build images at wavelengths where conventional cameras are expensive or simply don’t exist, such as infrared or terahertz frequencies.
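To see how a single detector can yield a full picture, the sketch below simulates the general single-pixel imaging principle: a sequence of known light patterns is projected onto the scene, one intensity value is recorded per pattern, and an image is reconstructed from those values. The pattern choice (Hadamard patterns), sizes and variable names are illustrative assumptions, not the Glasgow team’s actual implementation.

```python
import numpy as np
from scipy.linalg import hadamard

N = 32                        # image side length; N*N = 1024 patterns (assumed for the demo)
scene = np.zeros((N, N))
scene[8:24, 8:24] = 1.0       # a bright square stands in for the object

# Each row of a Hadamard matrix, reshaped to N x N, is one +/-1 illumination pattern.
H = hadamard(N * N)

# The single light-sensitive pixel records one number per pattern:
# the total light returned when that pattern illuminates the scene.
measurements = H @ scene.ravel()

# Hadamard patterns are orthogonal, so the image can be recovered as a
# measurement-weighted sum of the same patterns.
reconstruction = (H.T @ measurements / (N * N)).reshape(N, N)

print(np.allclose(reconstruction, scene))   # exact recovery: True
```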
Lead researcher Dr David Phillips said: “Initially, I was trying to maximise the frame rate of the single-pixel system to make the video output as smooth as possible.
“However, I started to think a bit about how vision works in living things and realised that building a program which could interpret the data from our single-pixel sensor along similar lines could solve the problem.”
The system produces square images containing 1,000 pixels. Instead of spreading those pixels evenly across the image, the Glasgow system can prioritise the most important areas within the frame, sharpening the detail in some sections while sacrificing it in others. This pixel distribution can be changed from one frame to the next, much as biological vision systems do.
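The sketch below shows one simple way such a variable pixel budget could be laid out in software: cells near a chosen point of interest (the “fovea”) keep full detail, while cells further away are averaged over larger blocks, and the fovea can be moved from one frame to the next. The function name, block sizes and distance rule are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def foveate(image, fovea_row, fovea_col, levels=(1, 2, 4, 8)):
    """Re-render `image` with a non-uniform pixel budget: full detail near
    (fovea_row, fovea_col), progressively coarser cells further away.
    The cell sizes in `levels` are illustrative choices."""
    h, w = image.shape
    rows, cols = np.indices((h, w))
    dist = np.hypot(rows - fovea_row, cols - fovea_col)

    # Map distance from the fovea to a resolution level:
    # level 0 = finest cells, last level = coarsest.
    level = np.minimum((dist / dist.max() * len(levels)).astype(int),
                       len(levels) - 1)

    out = np.empty_like(image, dtype=float)
    for idx, block in enumerate(levels):
        # Average the image over block x block cells (assumes the image
        # dimensions are divisible by every block size).
        coarse = image.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        upsampled = np.kron(coarse, np.ones((block, block)))
        out[level == idx] = upsampled[level == idx]
    return out

# The fovea can be repositioned between frames, changing which regions keep
# fine detail while the overall pixel budget stays roughly fixed.
frame = np.random.rand(64, 64)
foveated_left = foveate(frame, fovea_row=32, fovea_col=10)
foveated_right = foveate(frame, fovea_row=32, fovea_col=54)
```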
“By prioritising the information from the sensor in this way,” Dr Phillips added, “we’ve not only managed to produce images at an improved frame rate, but we’ve also taught the system a valuable new skill.”
The team’s paper – Adaptive foveated single-pixel imaging with dynamic supersampling – is published in Science Advances.