Wireless sensors deployed at the edge of a network can prolong equipment life, increase security, enhance productivity and improve comfort in the field. Everyday products such as wall switches, environmental sensors and even curbside trash sensors can be brought into automation and monitoring ecosystems at an attractive cost and performance point.
ML has traditionally been accomplished with neural networks, a computation-intensive and expensive platform. Such cloud/data centre solutions have limitations in terms of latency, data bandwidth, power consumption and other factors. Let's look at whether there's a better answer: running embedded ML at the edge.
AI and ML in the data centre
Centralised data centres limit the tech sector's exposure to rising CapEx and OpEx costs by allowing servers, utilities, cooling, real estate and security to be shared. Furthermore, data centres can scale resources, such as compute and storage, up and down as required. Because the costs are shared, new technologies such as ML and their models are more readily available.
The interconnection of globally distributed data centres also gives the tech sector the ability to use regional facilities. An IoT company based in the US could offer services to consumers in Europe without incurring a transatlantic delay. Data can be transmitted and routed without being moved between continents, avoiding breaches of regional privacy and data protection laws. It also keeps latency low – no-one wants to wait two seconds for the lights to come on after they flick the switch.
This data centre model has served ML well. But as the technology is implemented in more IoT applications and billions more sensors, the latency and bandwidth costs are becoming increasingly problematic.
Take predictive maintenance as an example: ML offers an effective way to automatically evaluate the data that monitored equipment generates. ML can pick out the tiny, and therefore subtle, changes in a device's digital signature before a failure happens – changes that are difficult to capture with conventional hand-written rules. These changes could be vibrational or acoustic in a motor, or slight temperature deviations from expectation in a heat exchanger or condenser; it is like having a knowledgeable sentinel at each sensor.
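To make this concrete, here is a minimal sketch of one simple way such a sentinel could work: tracking a running baseline of a vibration feature and flagging samples that drift away from it. The smoothing factor and threshold are illustrative assumptions, not values from any particular product.

```cpp
#include <cmath>
#include <cstddef>

// Track a running baseline of a scalar feature (e.g. vibration RMS) with an
// exponential moving average of mean and variance. A sample whose z-score
// exceeds the threshold is flagged as a potential early fault signature.
struct AnomalyDetector {
    float mean = 0.0f;
    float var  = 1.0f;
    float alpha;      // smoothing factor, e.g. 0.01f (illustrative)
    float threshold;  // z-score above which we flag, e.g. 4.0f (illustrative)

    AnomalyDetector(float a, float t) : alpha(a), threshold(t) {}

    bool update(float x) {
        float d = x - mean;
        mean += alpha * d;
        var   = (1.0f - alpha) * (var + alpha * d * d);
        float z = std::fabs(x - mean) / std::sqrt(var + 1e-9f);
        return z > threshold;  // true = worth reporting upstream
    }
};

// Compute the RMS of one window of raw accelerometer samples.
float rms(const float* samples, std::size_t n) {
    float acc = 0.0f;
    for (std::size_t i = 0; i < n; ++i) acc += samples[i] * samples[i];
    return std::sqrt(acc / static_cast<float>(n));
}
```

In a scheme like this, the flagged windows – rather than the raw waveform – are what get reported upstream.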
But the volume of data created can be huge – which has created new technical challenges for developers and operators. On the surface, those look like scaling problems: just add more servers, more storage and other data centre resources. However, fixing these issues doesn't solve the growing problems at the other end of the data pipe.
Sending massive volumes of data that may represent 'no change' is expensive; radios consume power, and in busy RF spectrum they consume even more through transmission retries. More sensors lead to an even busier RF environment. In addition to the issues surrounding battery life and local bandwidth, some applications may be more exposed to security concerns: massive quantities of data form patterns that those with malicious intent could exploit if intercepted. There's got to be a better solution, right?
Computing on the Edge
There is a growing trend to counter these issues by moving much of that decision-making back near the sensors (a.k.a. the edge), and only transmitting data that is identified as important, as in the report-by-exception sketch below. This reduces power consumption, bandwidth use and the node's digital signature. However, returning decision-making to the end node may mean an increase in end-node processing, storage and, once again, power consumption. It seems the IoT is caught in a vicious cycle limiting its accessibility and market growth.
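One common way to transmit only what matters is report by exception: send a reading only when it has genuinely changed, plus an occasional heartbeat. A minimal sketch follows; the deadband and heartbeat values are illustrative assumptions.

```cpp
#include <cmath>
#include <cstdint>

// Report-by-exception: forward a reading upstream only when it moves
// outside a deadband around the last reported value, or when a heartbeat
// interval expires, so the radio is not kept busy with 'no change' data.
struct ExceptionReporter {
    float    lastSent   = 0.0f;
    uint32_t lastSentMs = 0;
    float    deadband;     // e.g. 0.5 degC for a temperature sensor
    uint32_t heartbeatMs;  // e.g. 60000 - proves the node is still alive

    ExceptionReporter(float db, uint32_t hb) : deadband(db), heartbeatMs(hb) {}

    // Returns true if this sample should be transmitted.
    bool shouldSend(float value, uint32_t nowMs) {
        bool changed = std::fabs(value - lastSent) > deadband;
        bool stale   = (nowMs - lastSentMs) >= heartbeatMs;
        if (changed || stale) {
            lastSent   = value;
            lastSentMs = nowMs;
            return true;
        }
        return false;  // suppress: keep the radio off
    }
};
```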
Innovations in embedded ML have enabled the use of smaller microcontrollers, such as the Arm Cortex-M class, and reduced the demands on memory, both flash and RAM. The code needed to implement ML in a system can also be much smaller than traditionally written code for complex algorithms that must handle real-life corner cases. This also makes firmware updates smaller, faster to develop, and easier to distribute across large sensor fleets.
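For a flavour of what embedded ML looks like on a Cortex-M class part, here is a minimal sketch using TensorFlow Lite for Microcontrollers – one popular option, not necessarily what any given product uses. The model array, arena size and operator list are illustrative assumptions, and the exact API varies between library versions.

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Hypothetical model flatbuffer, exported from training and stored in flash.
extern const unsigned char model_data[];

// Scratch memory for tensors; a few tens of kB is typical for small models.
constexpr int kArenaSize = 16 * 1024;
alignas(16) static uint8_t tensor_arena[kArenaSize];

static tflite::MicroInterpreter* interpreter = nullptr;

void ml_setup() {
    const tflite::Model* model = tflite::GetModel(model_data);

    // Register only the operators the model uses, keeping code size down.
    static tflite::MicroMutableOpResolver<3> resolver;
    resolver.AddFullyConnected();
    resolver.AddRelu();
    resolver.AddSoftmax();

    static tflite::MicroInterpreter static_interpreter(
        model, resolver, tensor_arena, kArenaSize);
    interpreter = &static_interpreter;
    interpreter->AllocateTensors();  // map tensors into the arena
}

// Feed one feature vector through the model; returns the first output,
// e.g. the probability that a fault signature is present.
float ml_infer(const float* features, int n) {
    TfLiteTensor* input = interpreter->input(0);
    for (int i = 0; i < n; ++i) {
        input->data.f[i] = features[i];
    }
    interpreter->Invoke();
    return interpreter->output(0)->data.f[0];
}
```

Note how the resolver registers only the handful of operators the model actually needs – one of the reasons the flash footprint can stay so small.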
In practice, an AI/ML implementation for industrial IoT may well have multiple layers, with only the appropriate data being routed up to the next layer. For example, a simple device like a thermostat might include a Cortex-M class microcontroller, which can run a slimmed-down, embedded ML model to help with decision-making.
Moving up a layer, but still on-premises, we might use an application processor, such as a Cortex-A class device, to handle more data and provide higher performance, making decisions at a building-wide level such as controlling heating and lighting in response to users' movements. At a basic level this means turning off the lights and adjusting temperature settings when people go home, reducing the energy consumption of an unoccupied building without compromising security or safety.
Then, we still have the option to send data to the cloud for remote data processing when extra computing performance is required, or to link together multiple locations to monitor and analyse data across a broader area.
Putting it into practice
While the concepts are relatively simple, putting together an ML-based system may seem complicated, but there is a lot of knowledge and experience available – we're all in this together. This is where development kits can help, enabling design engineers to experiment and learn about sensors, ML and radios to prototype and demonstrate their systems. For example, the Thunderboard Sense 2 kit from Silicon Labs provides an SoC with an Arm Cortex-M4 core, onboard radio capabilities and a range of sensors.
ML is a tool for making decisions that involve pattern recognition, including patterns that are difficult for a human observer to identify. It has a lot to offer, and its limitations can often be addressed by running embedded ML at the edge. Conventional signal processing on small microcontrollers, combined with ML algorithms at the edge, turns high-bandwidth raw data into usable signals.
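As a sketch of that pipeline, the following uses CMSIS-DSP (assumed available in an Arm toolchain) to reduce a raw vibration window to a handful of spectral band energies – a compact feature vector that an edge model, or a radio packet, can use instead of the raw stream. The window length and band count are illustrative.

```cpp
#include <cstdint>

#include "arm_math.h"  // CMSIS-DSP (assumed available)

constexpr uint16_t kFftLen   = 256;  // raw samples per window
constexpr int      kNumBands = 8;    // features out per window

// Reduce one vibration window to kNumBands spectral band energies.
// Note: arm_rfft_fast_f32 modifies its input buffer in place.
void extract_band_energies(float32_t* samples,
                           float32_t out_bands[kNumBands]) {
    static arm_rfft_fast_instance_f32 fft;
    static bool ready =
        (arm_rfft_fast_init_f32(&fft, kFftLen) == ARM_MATH_SUCCESS);
    if (!ready) return;

    float32_t spectrum[kFftLen];  // packed complex FFT output
    float32_t mags[kFftLen / 2];  // magnitude per frequency bin

    arm_rfft_fast_f32(&fft, samples, spectrum, 0);   // forward FFT
    arm_cmplx_mag_f32(spectrum, mags, kFftLen / 2);  // bin magnitudes

    const int binsPerBand = (kFftLen / 2) / kNumBands;
    for (int b = 0; b < kNumBands; ++b) {
        float32_t sum = 0.0f;
        for (int i = 0; i < binsPerBand; ++i) {
            sum += mags[b * binsPerBand + i];
        }
        out_bands[b] = sum;  // energy proxy for this frequency band
    }
}
```

Eight floats per window, instead of 256 raw samples, is the kind of reduction that keeps both the classifier and the radio link small.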
By starting small with a development kit, engineers can see it for themselves and ensure they make the most of this new technology.

Author details: Paul Daigle, Senior Product Marketing Manager, Silicon Labs