Field of View (FOV) is what is visible through the camera at a particular position and orientation, and it is controlled by the focal length, which is the distance from the lens's point of convergence to the focal plane. Focal length also varies with the wavelength of light: the focal length for blue light (450nm), for example, is slightly shorter than that for red light (620nm).
Lenses with a variable focal length are called zoom lenses, while those with a fixed focal length are prime lenses. Among fixed-focal-length lenses, those with shorter focal lengths are called wide-angle lenses (14 to 35mm, 114 to 64° FOV), while longer-focal-length lenses are referred to as long-focus lenses (85mm to >300mm, 30° to <1° FOV) and are used to magnify distant objects.
Home security cameras, for example, which make up a large portion of the IoT, typically have wide-angle, fixed-focal-length lenses with which to monitor and report on specific regions of interest.
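As a rough illustration (a sketch, not a figure from any particular datasheet), the angle of view of a rectilinear lens can be estimated from the focal length and the sensor diagonal. The snippet below assumes a 43.3mm full-frame diagonal, which reproduces the figures quoted above; the diagonal should be swapped for that of the optical format actually in use.

import math

def fov_degrees(focal_length_mm, sensor_diagonal_mm=43.3):
    # Diagonal field of view of a rectilinear lens, in degrees.
    # 43.3mm is the full-frame diagonal; use the diagonal of the
    # actual optical format for smaller IoT sensors.
    return math.degrees(2 * math.atan(sensor_diagonal_mm / (2 * focal_length_mm)))

print(round(fov_degrees(14)))    # ~114 degrees, wide-angle
print(round(fov_degrees(35)))    # ~64 degrees
print(round(fov_degrees(300)))   # ~8 degrees, long-focus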
Depth of field
Another metric to consider is depth of field (DOF), the distance between the nearest and farthest objects that appear acceptably sharp in an image. It is determined by three factors: aperture size, focus distance, and the focal length of the lens. A larger aperture (smaller f-number) results in a shallower DOF. A closer focus distance also results in a shallower DOF, which suits artistic imaging: it keeps the subject at hand in focus and blurs out the background, highlighting the main object. For a given f-number, increasing the magnification, either by moving the camera closer to the subject or by using a lens of greater focal length, decreases the DOF; decreasing magnification increases it.
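As a minimal sketch of the relationship described above, the standard thin-lens depth-of-field approximation can be expressed in a few lines; the 0.03mm circle of confusion is an assumed value that depends on the sensor format and viewing conditions.

import math

def depth_of_field_m(f_number, focus_distance_m, focal_length_mm, coc_mm=0.03):
    # Approximate DOF from the classic hyperfocal-distance formulas.
    # coc_mm (circle of confusion) is an assumption, not a fixed constant.
    f = focal_length_mm / 1000.0             # focal length in metres
    c = coc_mm / 1000.0                      # circle of confusion in metres
    s = focus_distance_m
    H = f * f / (f_number * c) + f           # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)     # nearest sharp distance
    far = math.inf if s >= H else s * (H - f) / (H - s)   # farthest sharp distance
    return far - near

# Larger aperture (smaller f-number) gives a shallower DOF at the same distance
print(round(depth_of_field_m(1.8, 2.0, 35), 2))   # ~0.35m at f/1.8
print(round(depth_of_field_m(8.0, 2.0, 35), 2))   # ~1.81m at f/8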
Aperture is an indicator of how much light can enter the lens, similar to the iris of the eye. Consider two lenses with the same optical format: a wider aperture (f/1, for example) will allow more light to enter than a narrower aperture (f/12, say). A wider aperture permits a faster shutter speed, which captures high-speed motion with less blur, and more light also means less graininess in low light. If low-light performance is critical to the application, then a lens with a lower f-number is important.
For a home security application, an f/1.8 lens is the most common option. Superior image quality, especially in darker environments, is key for most home automation devices; in fact, low-light performance is one of the strongest selling points for an image sensor.
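Since the light admitted scales with the inverse square of the f-number, the relative light-gathering of two lenses can be compared directly; the short sketch below uses the figures quoted above and is a generic optics calculation rather than a vendor specification.

def relative_light(f_number_a, f_number_b):
    # How much more light lens A gathers than lens B for the same
    # exposure time: illuminance at the sensor scales with 1/N^2.
    return (f_number_b / f_number_a) ** 2

print(round(relative_light(1.0, 12.0)))    # f/1 passes ~144x more light than f/12
print(round(relative_light(1.8, 2.8), 1))  # f/1.8 passes ~2.4x more light than f/2.8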
Dynamic range
For an image sensor, dynamic range (DR) describes the range between the brightest and darkest parts of an image whose detail can be captured simultaneously. Image sensors typically have a dynamic range between 54dB and 70dB. Higher dynamic range is derived through image processing, which can happen in the sensor or in the image processor, and some sensors on the market support high dynamic range (HDR) of up to 105dB.
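Dynamic range quoted in dB maps onto a linear contrast ratio at 20dB per decade; the short conversion below illustrates why the step from a standard-DR sensor to a 105dB HDR sensor is so significant.

def dr_db_to_ratio(dr_db):
    # Convert a dynamic range in dB into a brightest:darkest contrast ratio.
    return 10 ** (dr_db / 20.0)

print(round(dr_db_to_ratio(60)))    # ~1,000:1 for a typical standard-DR sensor
print(round(dr_db_to_ratio(105)))   # ~178,000:1 for a 105dB HDR sensor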
In HDR mode, sensors capture two exposures sequentially within the same frame by maintaining two separate read and reset pointers interleaved within the rolling-shutter readout. As soon as a pixel's two exposure values are available, they are combined to create a linearised value for each pixel's response. Alternatively, the sensors can output two separate streams of data representing the two exposures, which can then be processed off-chip.
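The snippet below is an illustrative, simplified version of that two-exposure combination; the exposure ratio, bit depth and saturation threshold are all assumptions, and real sensors blend the transition more carefully.

import numpy as np

def merge_two_exposures(long_exp, short_exp, exposure_ratio, full_scale=4095):
    # Scale the short exposure onto the long exposure's linear axis and
    # use it wherever the long exposure has clipped.
    long_exp = long_exp.astype(np.float64)
    short_exp = short_exp.astype(np.float64) * exposure_ratio
    saturated = long_exp >= 0.95 * full_scale
    return np.where(saturated, short_exp, long_exp)

# Example: a 12-bit readout with a 16:1 exposure ratio; the second pixel
# is clipped in the long exposure, so the scaled short exposure is used.
long_exp = np.array([300, 4095])
short_exp = np.array([19, 1500])
print(merge_two_exposures(long_exp, short_exp, 16))   # [  300. 24000.]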
Initially, DIY devices used sensors with a standard dynamic range (54dB to 70dB), but as devices have got 'smarter' and use cases more diverse, higher dynamic range has become a requirement. In an indoor environment, there is a lower probability of large differences in lighting within a single scene; use the same device outdoors, however, and there is a much higher chance of a brightly lit area coupled with shadowed areas in the same scene.
Today, cameras are expected to compensate for sudden changes in lighting, such as a door opening or a light being switched on. An HDR image sensor can see the door opening, or the light being switched on and off, while maintaining image clarity. Such technology makes it much easier to track people and objects and to identify faces in the hardest of lighting conditions.
Low light performance
High-quality imaging in low light is a key selling feature for IoT applications. When choosing a sensor for camera systems that must produce high-quality images under low-light conditions, there are several parameters to consider, such as modulation transfer function (MTF) and signal-to-noise ratio (SNR).
MTF is a common way to quantify a sensor's ability to deliver a sharp image. MTF in the visible light spectrum is quite consistent; problems tend to arise at longer wavelengths. A lower MTF limits the resolution of the system, making fine detail harder to resolve.
SNR is another key factor influencing the ability of a sensor to deliver a useful image: the higher the SNR, the better the image quality. SNR gives the ratio of signal to noise present in the image, with noise showing up as graininess. There are two primary ways in which a sensor's SNR can be increased: decreasing noise, or increasing the signal by optimising the response of the sensor in terms of quantum efficiency (QE).
QE represents the percentage of photons that get converted into electrons. For low-light use cases, where subtle differences in light levels must be captured, even less than 1mV of noise on the chip can be perceived as graininess in the image, and noise can easily dominate the low-level signal generated in low-light or shaded conditions. Peak SNR, as seen on some high-performing CMOS image sensors in the IoT space, runs between 39dB and 41dB.
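A simplified per-pixel model (an assumption for illustration, not a manufacturer's formula) shows how QE and noise interact to set the SNR: signal electrons are photons multiplied by QE, while noise combines photon shot noise with an assumed read noise.

import math

def snr_db(photons, quantum_efficiency, read_noise_e=2.0):
    # Signal in electrons is photons x QE; noise is shot noise and
    # read noise added in quadrature. read_noise_e is an assumed value.
    signal_e = photons * quantum_efficiency
    noise_e = math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise_e)

print(round(snr_db(100, 0.6), 1))     # ~17.5dB: a dim scene, read noise still visible
print(round(snr_db(10000, 0.6), 1))   # ~37.8dB: a bright scene, close to the shot-noise limit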
Some home automation products are beginning to adopt RGB-NIR image sensors. RGB-NIR is a pattern of colour filters on the image sensor's pixel array that individually collect red, green, blue and near-infrared (NIR) photons. These sensors provide good colour reproduction during the day, when IR levels are low, and capture black-and-white images at night under active NIR LED illumination. They also eliminate the need for a mechanical IR-cut filter, which reduces overall cost and cuts down on field failures resulting from mechanical operation.
When streaming over data-limited wireless protocols, the video typically goes through compression. The higher the compression, the more complex the processing, leading to higher power consumption. So, while 18Mpixel cameras may sound compelling, power trade-offs force cameras for most IoT applications to be in the 2Mpixel to 5Mpixel range.
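A back-of-the-envelope bitrate estimate makes the trade-off concrete; the bit depth, frame rate and 50:1 compression ratio below are illustrative assumptions rather than measured figures.

def stream_bitrate_mbps(megapixels, bits_per_pixel=12, fps=30, compression_ratio=50):
    # Raw sensor bitrate scaled down by an assumed compression ratio, in Mbit/s.
    raw_bps = megapixels * 1e6 * bits_per_pixel * fps
    return raw_bps / compression_ratio / 1e6

print(round(stream_bitrate_mbps(2)))    # ~14 Mbit/s for a 2Mpixel stream
print(round(stream_bitrate_mbps(18)))   # ~130 Mbit/s for an 18Mpixel stream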
Cameras are at the heart of the IoT revolution, giving rise to new categories of devices and applications, and advances in CMOS image sensor technology are helping to drive that revolution.
Author profile:
Radhika Arora is product line manager (IoT) with ON Semiconductor.