Outlook 2011: Making a difference
Virtualisation technology has been widely adopted in the enterprise market to make the best use of ever more powerful microprocessors with multiple cores. Virtualisation is now spreading to the embedded market.
New multicore devices are bringing a true change in the way embedded developers are designing systems. There are several key differences in the way that embedded developers are looking at multicore and virtualisation.
Virtualisation allows multiple instances of OSs to run on one processor. Each instance is usually referred to as a virtual machine or virtual board. The virtual board provides an environment in which a guest OS operates in isolation. A virtual machine manager (or hypervisor) manages the virtual boards and arbitrates scheduling, as well as memory and device access. Virtualisation can be used in single or multicore processors.
A common way to use a multicore processor is by running a single OS in a symmetric multiprocessing (SMP) configuration. The single OS has a single scheduler and can dispatch processes and tasks to the different cores. One of the benefits of SMP is that it makes load balancing across the cores straightforward. However, SMP does not allow multiple cores to execute different OSs; for example, a general purpose OS as well as a real time OS. The single OS in SMP mode is also a single point of failure; if the system crashes, all cores will crash and a reboot will be needed.
The multiple cores can also be configured in an asymmetric multiprocessing (AMP) configuration, where each core has a separate OS that schedules its own tasks. The OSs can be the same type (for instance, multiple instances of Linux) or a mix of OSs (for instance, Wind River Linux and VxWorks).
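In spirit, an AMP layout is a static assignment of one OS image to each core. The sketch below illustrates the idea only; the structure and names are invented for this article and are not any vendor's board-support API:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical static AMP layout: one OS image per core.
 * Illustrative only, not a real configuration interface. */
struct core_assignment {
    int         core_id;
    const char *os_image;   /* OS booted on this core        */
    int         realtime;   /* 1 if the OS must be real time */
};

static const struct core_assignment amp_layout[] = {
    { 0, "vxworks", 1 },    /* core 0: deterministic real-time work */
    { 1, "linux",   0 },    /* core 1: networking and graphics      */
};

/* Look up which OS image a core is expected to run. */
static const char *os_on_core(int core_id)
{
    for (size_t i = 0; i < sizeof amp_layout / sizeof amp_layout[0]; i++)
        if (amp_layout[i].core_id == core_id)
            return amp_layout[i].os_image;
    return NULL;            /* core not configured */
}
```

In a real system this table would be consumed at boot time by firmware or a hypervisor, which is what makes the assignment enforceable rather than merely conventional.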
Embedded systems are constrained in the power they can use, the amount of memory and the form factor. This means many server virtualisation solutions are not well suited to embedded applications.
Virtualisation allows designers to consolidate different pieces of functionality that have traditionally required multiple dedicated processors into one processor – single or multicore. This reduces cost, increases functionality and allows differentiation.
Embedded virtualisation solutions need to be configurable and adaptable to run on constrained hardware. Here are some of the considerations:
• Real-time response: Real time means fast and deterministic responses to events. Virtualisation requires context switches between the different virtual boards and these need to be fast and to not impact determinism.
• Reliability: A fault in one virtual board should not bring the entire system down. The faulty board can be reset individually to restore it into service.
• Boot time: The time from power-on to a responsive system is important in many industries, for example automotive.
• Memory footprint: Embedded systems have more memory than they once did, but memory remains tightly budgeted. Virtualisation requires memory, and the amount needs to be minimised.
Luckily, hardware and software virtualisation technologies are coming together to provide a foundation that provides enough capability with a small enough footprint for embedded devices.
Hardware support for virtualisation, such as Intel's VT-x and VT-d, provides a boost in efficiency for embedded systems virtualisation. Hardware support speeds the administrative work a hypervisor has to perform. This involves operations such as memory access protection, IRQ dispatching and so forth. It also provides a boost in efficiency for virtualisation of devices such as communications and networking elements.
Virtualisation is done through an embedded hypervisor: a thin administration layer running directly on the hardware that arbitrates the resources of the hardware between the different virtual boards. Use of a hypervisor is beneficial in both single and multicore scenarios; it arbitrates scheduling if multiple virtual boards run on one core and it arbitrates resources such as memory and devices between virtual boards in both single and multicore configurations.
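When several virtual boards share one core, the arbitration described above amounts to time-slicing between them. A deliberately simplified round-robin dispatch routine, with invented types and names, might look like this:

```c
#define NUM_BOARDS 3

/* Invented per-virtual-board state; a real hypervisor would hold
 * full register, MMU and device context here. */
struct vboard {
    int id;
    int runnable;           /* 0 while the board is halted or faulted */
};

static struct vboard boards[NUM_BOARDS] = {
    { 0, 1 }, { 1, 1 }, { 2, 1 }
};

/* Pick the next runnable board after `current`, round robin.
 * Returns -1 if no board is runnable. */
static int next_board(int current)
{
    for (int step = 1; step <= NUM_BOARDS; step++) {
        int candidate = (current + step) % NUM_BOARDS;
        if (boards[candidate].runnable)
            return candidate;
    }
    return -1;
}
```

The real cost lies not in choosing the next board but in the context switch around it, which is why the article stresses that these switches must be fast and deterministic.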
In order to run an OS inside a virtual board on a hypervisor, the OS must either be paravirtualised or the hypervisor must perform emulation. Emulation allows an OS to run unmodified on a virtual board. While this seems attractive, it has a serious drawback: it requires more work from the hypervisor to emulate hardware when the guest OSs try to access it. More work means more code and memory and less performance and determinism.
With paravirtualisation, the OS is modified to collaborate with the hypervisor. This provides greater performance by ensuring the fastest possible interaction between the hypervisor and the guest OSs. Applications on the OS continue to run unmodified. Paravirtualisation also allows for direct interaction between a guest OS and a hardware device, if approved by the hypervisor, which improves throughput and latency.
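The essence of paravirtualisation is that the guest calls the hypervisor explicitly (often called a hypercall) rather than being trapped mid-instruction and decoded. A toy dispatch table, with made-up call numbers and handlers, conveys the shape of it:

```c
/* Made-up hypercall numbers, for illustration only. */
enum { HC_YIELD = 0, HC_SEND = 1, HC_COUNT };

static long hc_yield(long arg) { (void)arg; return 0; }
static long hc_send (long arg) { return arg * 2; } /* stand-in for real work */

/* The hypervisor's dispatch table: a direct, predictable jump,
 * instead of decoding a faulting guest instruction. */
static long (*const hypercall_table[HC_COUNT])(long) = {
    [HC_YIELD] = hc_yield,
    [HC_SEND]  = hc_send,
};

static long hypercall(int number, long arg)
{
    if (number < 0 || number >= HC_COUNT)
        return -1;          /* reject unknown calls */
    return hypercall_table[number](arg);
}
```

Because the guest cooperates, every entry into the hypervisor takes this short, bounded path, which is what makes the interaction fast and deterministic.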
A hypervisor can execute a virtual board which contains a complete guest OS; but the virtual board can also contain a 'minimal executive'. Here, the virtual board presents an interface in which to run an executive without an OS. One of the benefits is a very quick boot time; the minimal executive comes up first and is operational, while the other OSs take their time to boot.
Most people will consider a hypervisor when one core needs to execute multiple virtual boards with different OSs. However, hypervisors can provide benefits in other situations. Consider the case where the user has a dual core processor and wants to run VxWorks on one core – for deterministic real time behaviour – and Linux on the other core, for network connectivity or graphics. This is an AMP configuration. Without arbitration, both OSs have access to the full hardware, which means that Linux, for example, could overwrite memory owned by VxWorks and vice versa. Manually configuring each OS instance to avoid conflicts is complicated and error prone.
By design, hypervisors provide this separation between the virtual boards. The hypervisor can map every virtual board one to one to the cores on a multicore processor. Each virtual board now runs on a core as the only OS on that core; there is no need (and hence no overhead) for scheduling. The hypervisor only gets involved to protect memory and separate devices. If required, the hypervisor could even map an OS to multiple cores within a multicore design, delivering an SMP OS over a subset of the cores.
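The memory separation described here reduces to a simple invariant: each virtual board owns a range of physical memory, and the hypervisor (in practice via MMU page tables it controls) refuses anything outside it. A minimal sketch, with invented addresses and sizes:

```c
#include <stdint.h>

/* Hypothetical per-board memory windows; a real hypervisor would
 * enforce these through MMU page tables, not a runtime check. */
struct mem_region {
    uint32_t base;
    uint32_t size;
};

static const struct mem_region board_mem[2] = {
    { 0x10000000u, 0x04000000u },   /* board 0: 64 MB */
    { 0x14000000u, 0x04000000u },   /* board 1: 64 MB */
};

/* Would this board be allowed to touch address `addr`? */
static int access_ok(int board, uint32_t addr)
{
    const struct mem_region *r = &board_mem[board];
    return addr >= r->base && addr - r->base < r->size;
}
```

The same ownership idea extends to devices: each device is assigned to exactly one virtual board, so a misconfigured guest cannot reach a peripheral it does not own.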
Another benefit of using hypervisors is reliability. The hypervisor is in control of the hardware at all times; it can detect whether a virtual board misbehaves and it can reboot this virtual board without affecting any of the other virtual boards in the system.
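One common way a hypervisor detects a misbehaving board is a per-board watchdog: each guest must report in periodically, and only a board that falls silent is reset, leaving its neighbours running. A simplified, invented version of the mechanism:

```c
#define NUM_BOARDS    2
#define TIMEOUT_TICKS 3

/* Ticks of silence per board, and how often each has been reset.
 * Illustrative state only; reboots[b]++ stands in for an actual
 * per-board restart. */
static int silence[NUM_BOARDS];
static int reboots[NUM_BOARDS];

/* Called by a healthy guest to report that it is alive. */
static void board_kick(int b) { silence[b] = 0; }

/* Called on every hypervisor timer tick: reset only boards that
 * have been silent too long; the others are untouched. */
static void watchdog_tick(void)
{
    for (int b = 0; b < NUM_BOARDS; b++) {
        if (++silence[b] > TIMEOUT_TICKS) {
            reboots[b]++;
            silence[b] = 0;
        }
    }
}
```

Because the hypervisor owns the timer and the boards' contexts, a hung or crashed guest cannot prevent its own recovery.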
Using a hypervisor to configure an AMP system provides the developer with a full set of capabilities (AMP, SMP, protection, boot) to configure multicore easily. This allows managers to feel confident they have a proven solution that provides portability and future-proofs their projects.
So what does this allow embedded developers to do differently?
• Reduce the number of processors by consolidating them onto virtual boards in a single or multicore processor.
• Increase the reliability of AMP systems by guaranteeing resource separation and the ability to restart virtual boards.
• Migrate systems into a virtual board and add more functionality, providing the opportunity for reuse and innovation.
• Combine real-time, legacy and general purpose OSs in the same device.
• Provide faster performance through the use of multicore.
Mark Hermeling is a senior product manager at Wind River