At the heart of this smartphone explosion – and hence the always-on connectivity revolution – is the Applications Processor. The Applications Processor evolved from the baseband processor of the early mobile phone era, but quickly moved beyond voice processing to become the forerunner of today's multimedia processors. Today, multimedia and convenience have become basic expectations of every consumer electronics device.
Applications Processors were then optimised for other fields – early productivity-enhancing PDAs and e-readers, for example – even transforming the utilitarian cars of yesteryear into the comfort- and safety-focused cars of today, with their advanced driver information systems. Over the past five years, with the rapid development of the 'Internet of Things' (IoT), Applications Processors have expanded further to drive many hundreds more connected applications in the consumer and industrial markets. In fact, market analysts today actively track more than 100 user market segments, not including handsets and computers, with a combined market size of $18bn – overtaking even the traditional MCU market.
‘Smart’ and ‘Aware’ edge nodes
At the southern end of an IoT network are billions of devices called 'edge nodes' – the things with which we interact and, therefore, the face and experience of the IoT for end users. This brings an increasing demand for edge nodes to have smartphone-like, user-friendly interfaces with rich 2D/3D graphics and features such as touch sensing, voice assistance, biometrics for password-less access and facial recognition. This growing trend requires edge nodes to have substantial compute performance.
The next challenge is data exchange between edge nodes and the cloud. In simple cases the amount of data exchanged is quite limited – potentially just a yes or no. But if the application is more complex and requires a lot of data exchange, a few problems arise. One is the increasing cost of wireless data exchange; another, potentially more important, concern is the latency of edge-to-cloud communication, especially when the response is time-critical.
An effective approach to these challenges is to create a distributed computing network of significantly more capable devices, in which edge nodes handle most of the data processing and communicate with the cloud only as necessary. This emergence of 'edge computing' not only relieves the burden on the wireless network, but also improves edge-node response time and reduces data-centre costs.
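The edge-computing pattern described above can be sketched in a few lines: the node processes raw sensor samples locally and contacts the cloud only when something noteworthy happens. All names here (EdgeNode, upload, THRESHOLD) are illustrative assumptions, not any real device API.

```python
THRESHOLD = 30.0  # e.g. a temperature alarm level, in degrees C

class EdgeNode:
    def __init__(self):
        self.uploads = []  # stand-in for messages sent to the cloud

    def upload(self, summary):
        # In a real node this would be a wireless transmission.
        self.uploads.append(summary)

    def process(self, readings):
        # Local processing: reduce a burst of raw samples to one
        # decision, instead of streaming every sample upstream.
        peak = max(readings)
        if peak > THRESHOLD:
            self.upload({"event": "over_threshold", "peak": peak})

node = EdgeNode()
node.process([21.5, 22.0, 21.8])   # nothing sent to the cloud
node.process([21.5, 35.2, 22.0])   # one compact alert sent
print(len(node.uploads))           # 1
```

Six raw samples are reduced to a single small message, which is the bandwidth and latency saving the text describes.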
Then there is the concern of security and privacy when dealing with information so sensitive that you don't want to (or aren't allowed to) transmit it all to the cloud. This is where edge computing with localised data storage will thrive. In this scenario, the cloud still plays a role in providing large-scale data analytics, but does not become a repository for all the data on the network. By not storing all the private data in the cloud, the incentive to attack the cloud is reduced significantly; but it also means that edge nodes must be more resistant to attack. Nor can the network be built from edge devices that all share a homogeneous security implementation, lest a single vulnerable device unlock the secrets of every device on the network.
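One established technique for avoiding that homogeneous-security trap is key diversification: each device holds a unique key derived from its identity, so extracting one device's key tells an attacker nothing about its neighbours'. The sketch below uses HMAC-SHA256 as the derivation primitive; the master key value is illustrative, and in practice the derivation and key storage would happen inside a hardware security element rather than in application code.

```python
import hashlib
import hmac

MASTER_KEY = b"factory-master-secret"  # illustrative value only

def device_key(device_id: bytes) -> bytes:
    # HMAC-SHA256 as a key-derivation primitive: the result is
    # unique per device and cannot be inverted to recover the
    # master key.
    return hmac.new(MASTER_KEY, device_id, hashlib.sha256).digest()

key_a = device_key(b"node-0001")
key_b = device_key(b"node-0002")
print(key_a != key_b)  # True: no two devices share a key
```

Because every device's key is distinct, compromising one node does not unlock the rest of the network.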
“Every edge node has the benefit of learning from the collective of the network without the burden of direct interaction with each other. This is Metcalfe’s Law in action – the collective value of a network is proportional to the square of the connected nodes.” – Geoff Lees
Welcome to Machine Learning Phase II – with embedded Artificial Intelligence, where edge nodes are not only smart, but are also trained to be ‘aware’ of their environment and situation, making them capable of taking decisions unsupervised. Much of the higher-level training and classification will still happen in the cloud, with the results pushed down to the edge nodes in the form of inference rules. These inference rules represent the aggregated knowledge of the entire network, so every edge node has the benefit of learning from the collective of the network without the burden of direct interaction with the other nodes. This is Metcalfe’s Law in action – the collective value of a network is proportional to the square of the connected nodes.
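The train-in-the-cloud, infer-at-the-edge split can be illustrated with a deliberately trivial "inference rule" – a threshold learned from the pooled data of every node. Real deployments would push down a quantised neural-network model or similar; the function names and the 1.5 margin below are assumptions made for the sketch.

```python
def train_in_cloud(all_node_data):
    # Aggregate data from every node and derive one shared rule –
    # the collective knowledge Metcalfe's Law alludes to.
    flat = [x for node_data in all_node_data for x in node_data]
    mean = sum(flat) / len(flat)
    return {"anomaly_above": mean * 1.5}  # the pushed-down rule

def infer_at_edge(rule, sample):
    # Local, low-latency decision using the collective knowledge;
    # no round-trip to the cloud is needed per sample.
    return sample > rule["anomaly_above"]

network_data = [[10, 12, 11], [9, 10, 10], [11, 13, 12]]
rule = train_in_cloud(network_data)
print(infer_at_edge(rule, 20))  # True  – anomalous
print(infer_at_edge(rule, 11))  # False – normal
```

Each node benefits from data it never saw directly: the rule encodes readings from all three nodes, yet inference runs entirely at the edge.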
Secure connectivity between the edge and the cloud is another requirement of this envisioned network, since it is the cloud that must securely authenticate and provision the edge nodes, handle subscription services, provide over-the-air updates and manage device lifecycle. But there is a tremendous diversity in cloud platforms, which means that edge nodes need a cloud-agnostic software enablement.
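One common way to achieve the cloud-agnostic enablement described above is to hide each cloud platform behind one narrow interface, so edge-node application code never names a specific provider. The class and method names below are illustrative assumptions, not any vendor's SDK.

```python
from abc import ABC, abstractmethod

class CloudBackend(ABC):
    # The single seam between application code and a cloud platform.
    @abstractmethod
    def publish(self, topic: str, payload: bytes) -> None: ...

class LoggingBackend(CloudBackend):
    # Stand-in backend for testing; a real port would wrap a
    # provider SDK (AWS IoT, Azure IoT Hub, etc.) behind the
    # same publish() call.
    def __init__(self):
        self.sent = []

    def publish(self, topic, payload):
        self.sent.append((topic, payload))

def report_temperature(cloud: CloudBackend, celsius: float):
    # Application code depends only on the abstract interface,
    # so it ports to a new cloud by swapping the backend object.
    cloud.publish("sensors/temperature", f"{celsius:.1f}".encode())

backend = LoggingBackend()
report_temperature(backend, 21.57)
print(backend.sent)  # [('sensors/temperature', b'21.6')]
```

Retargeting the node to a different cloud then means writing one new `CloudBackend` subclass, not rewriting the application.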
Which Embedded Processor?
The growth of edge nodes with embedded AI demands an Applications Processor level of performance, which traditional MCUs have not evolved to deliver. As the use cases for Applications Processors expanded, so did their architecture: from a single core with some peripherals, they have evolved into multi-core architectures with graphics capability, advanced security, power management and support for a variety of peripherals. More recently, Applications Processors with heterogeneous cores have been introduced, in which advanced computation and graphics are handled by Cortex-A cores, with Cortex-M cores integrated to handle low-power and sensor-fusion needs.
Switching to Applications Processors is not always easy for those developing IoT edge nodes. Developers of solutions for the thousands of IoT applications often lack the scale and resources to support Linux- or Android-based Applications Processor designs. An example would be an appliance maker looking to add edge-computing capabilities to its products without drastically increasing the per-unit cost or greatly extending its time-to-market with an extensive redesign. In such cases, choosing the right embedded processor can be the difference between success and failure in the market.
As we look ahead at the expanding possibilities of a connected world and map the potential of traditional Applications Processors and MCUs to provide embedded processing solutions for the future, we find a growing gap in addressing these needs. What is needed are scalable, cost-effective embedded processors that are easy to use and respond in real time, but which also deliver high-performance compute, advanced security and support for rich user experiences such as graphics, display and audio.
NXP believes embedded product designers should be able to develop new products without being limited by the performance constraints of traditional MCUs or the system-level complexities of Applications Processors. But this requires breaking the technological boundary between Applications Processors and MCUs – and that is exactly what NXP has done. NXP’s new class of ‘Crossover Processors’ is architected from the Applications Processor for its performance and functional capabilities, but built around a Cortex-M core to realise the ease of use, low power, real-time operation and low interrupt latency of traditional MCUs. In addition, Crossover Processors integrate fast, secure interfaces to external memories and a high density of on-chip RAM, which eliminates the need for embedded flash. This not only reduces silicon cost, but also lowers product cost, thanks to the cost-effectiveness of manufacturing and programming external serial flash.
The edge is getting smarter and more aware. The reality is that the connected world requires edge computing with embedded AI and data management, reliable security and assured privacy. And Crossover Processors are set to meet those needs.