If you take a look at Barr Group’s annual survey of language use in embedded, C continues to enjoy a commanding lead, used by close to 70 per cent of respondents. Remarkably, usage of C++, which was created a decade after C, apparently peaked in 2006 before dropping back to around 20 per cent, though its penetration has increased slowly since the middle of the current decade.
Inertia is one reason for the seemingly slow pace of change. Anders Holmberg, chief strategy officer at IAR Systems, says: “There’s so much code out there written in C that nobody really wants to rewrite in another language, as that will risk introducing new bugs into the code base. So even if a design is updated with new features, the code base is very likely to still be C.”
Another thing C has had in its favour is the ubiquity of tools that support it. Today, no-one attempts to launch a microcontroller architecture without some kind of C compiler, build and debug support. However, circumstances are changing in ways that ease the switch to other options.
Holmberg points to the increasing use of operating systems such as Windows, Linux and Android in embedded, where development tends to favour C++. That pushes teams towards C++ for the bare-metal or RTOS-based code as well, in order to maintain a common code base. “This trend is partly driven by the fact that most cross-compilers for embedded, at least the ‘big names’, now solidly support C++14 and often most of C++17.”
The migration could be taken further by a trend in toolchain design towards more modular compilers. The prime open-source example is the Low-Level Virtual Machine (LLVM) toolchain, which has been embraced by companies such as Arm and, more recently, Wind River. As well as supporting recent iterations of C++, LLVM has, according to Michel Chabroux, senior director of product management at Wind River, been instrumental in the company’s ability to support alternatives to C such as the Rust language. It has also made it easier to run prototype code on a development host through the WebAssembly intermediate format the compiler can generate.
One big advantage of Rust claimed by its advocates is that it streamlines the way dynamic memory allocation is handled compared with C or C++, without resorting to the garbage-collection strategies used by Java or by Go, another C-like language from the world of cloud computing. Without garbage collection, there is no need to periodically halt the runtime to search for memory to clean up.
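To see how Rust manages this, the sketch below (with purely illustrative names) shows the ownership model at work: the heap buffer is freed deterministically the moment its owner goes out of scope, so no collector ever has to pause the program to reclaim it.

```rust
// Minimal sketch of ownership-based memory management in Rust.
fn make_report() -> String {
    let mut line = String::from("sensor ok"); // heap allocation, owned by `line`
    line.push_str(" (checked)");
    line                                      // ownership moves to the caller
}

fn main() {
    let report = make_report(); // `report` now owns the heap buffer
    println!("{}", report);
}   // `report` goes out of scope here and the memory is released immediately
```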
Rust enforces checks at compile time on how memory is allocated and accessed. In principle, this makes it easier to avoid memory leaks, a variety of race conditions and other task-terminating flaws. For situations where programmers need greater control, Rust has an ‘unsafe’ keyword that marks blocks in which the compile-time checks are disabled. Even using the default constructs, however, Rust is not completely safe in an embedded-systems context when it comes to memory handling. With direct memory access (DMA), it is entirely possible for hardware to corrupt a heap managed by Rust’s scope-based allocation system even with the protections turned on. Niall Cooling, CEO at training company Feabhas, points out that it is unlikely any software language will ever deal with the potential mismatches between system design and software behaviour.
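Hardware access is where the escape hatch typically appears. The sketch below assumes a hypothetical memory-mapped status register at address 0x4000_0000: the raw-pointer write is only permitted inside the ‘unsafe’ block, while the surrounding code keeps the usual compile-time guarantees.

```rust
// Minimal sketch of Rust's `unsafe` escape hatch for register access.
// The address below is an assumption for illustration, not a real device.
const STATUS_REG: *mut u32 = 0x4000_0000 as *mut u32;

fn clear_status() {
    unsafe {
        // The programmer, not the compiler, guarantees this address is valid.
        core::ptr::write_volatile(STATUS_REG, 0);
    }
}

fn main() {
    // Not called on a desktop host, where the address is unmapped; on a
    // target device the call would clear the peripheral's status register.
    let _clear: fn() = clear_status;
    println!("unsafe block compiles; the register write only makes sense on the target");
}
```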
Chabroux says Rust has attracted attention from some big names: “Microsoft, Google, Amazon, Intel and others have or are starting to use Rust in low-level subsystems.”
But such high-profile users may not move the needle much in the wider, highly fragmented embedded-systems space. Cooling notes: “Rust has the potential to be the third language in embedded but I can’t see it replacing C or C++ anytime soon.”
Holmberg says the overall support for Rust in embedded systems is not mature and there can be significant differences in what is available for a target architecture. “On the other hand, there are people out there who feel that C++17 and C++20 are way too complex, with new language features bolted on left and right. I think it might be a very strong contender against C++ as the syntax is a bit leaner and it does not have the burden of the C heritage built into the language.”
Joe Fabbre, global technology director at Green Hills Software, says that, having already decided to introduce support for C++17 in addition to C++14, the company is looking at Rust and its use in embedded systems: “We are generally encouraged that there is demand for language support that makes writing safe and secure code more practical and scalable.”
Modern C++
Modern C++, the blanket term for versions of the language standardised from 2011 onwards, is evolving rapidly and acquiring features that may not be entirely useful in an embedded context. Even so, it is attracting greater attention because it deals with many of the memory-protection and code-quality issues from which C programs so often suffer.
Cooling points to a number of features that make Modern C++ attractive from a code-quality perspective. One of the earlier additions was the smart pointer, which wraps the raw pointer in a class that takes care of releasing the memory when it is no longer needed, though it is still possible to wind up with issues caused by the way some functions in the standard template library are implemented.
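A minimal sketch of the idea, using std::unique_ptr from the standard library and an illustrative Packet type: the allocation is released automatically on every exit path, so the early return cannot leak it the way a raw new/delete pair could.

```cpp
// Minimal sketch of smart-pointer ownership with std::unique_ptr.
#include <cstdio>
#include <memory>

struct Packet { int id; int payload; };   // illustrative type

bool process(int id) {
    auto packet = std::make_unique<Packet>();   // heap allocation, owned by `packet`
    packet->id = id;
    if (id < 0) {
        return false;   // `packet` is freed here as the unique_ptr goes out of scope
    }
    packet->payload = id * 2;
    std::printf("packet %d -> %d\n", packet->id, packet->payload);
    return true;        // ...and freed automatically on the normal path too
}

int main() {
    process(3);
    process(-1);
    return 0;
}
```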
The static_assert keyword makes it possible to embed compile-time checks in the source code to perform tasks such as checking that a function is called properly, echoing the contract-based programming of the Eiffel language. Similarly, the constexpr keyword is used to perform some computations at compile time so that programmers can maintain more readable code. Cooling says the 2017 and 2020 versions of C++ push a wider range of expressions to compile time, which should improve quality. He also points to the standard attributes, which provide more information to the compiler so that it can issue meaningful warnings if a function does not comply with the attributes declared for it by the programmer.
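The sketch below pulls those features together; the configuration constants and the init_dma function are invented for illustration. Here constexpr moves the buffer-size calculation to compile time, static_assert rejects an invalid configuration before anything runs and the standard [[nodiscard]] attribute lets the compiler warn when a caller ignores a return value.

```cpp
// Minimal sketch of compile-time checks in Modern C++ (requires C++17).
#include <cstdio>

constexpr int buffer_size(int channels, int samples) {
    return channels * samples;   // evaluated by the compiler for constant arguments
}

// Hypothetical configuration values for illustration.
constexpr int kChannels = 4;
constexpr int kSamples  = 64;

static_assert(buffer_size(kChannels, kSamples) <= 1024,
              "buffer does not fit in the reserved SRAM region");

[[nodiscard]] bool init_dma() {
    return true;   // ignoring this return value can trigger a compiler warning
}

int main() {
    if (!init_dma()) {
        return 1;
    }
    static int buffer[buffer_size(kChannels, kSamples)];   // size fixed at compile time
    std::printf("buffer holds %zu samples\n", sizeof(buffer) / sizeof(buffer[0]));
    return 0;
}
```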
Above: The Barr Group's graph of language use over time, published in 2019
“It seems to be a general trend in bigger or more mature software development organisations to move towards more formalised coding guidelines, enforced by static-analysis tools. One of the reasons for that is that even a ‘small’ project might contain several hundred thousand lines of code, so it’s simply impossible to uphold quality standards without tools,” Holmberg says.
Compilers such as LLVM, with its Clang front-end, can go further by adding custom parsing engines to the compilation process. One system described at the LLVM Developers Meeting last year tried to identify situations where a program might access a null pointer and crash. Such checks are prone to false positives because they lack runtime information, but better heuristics may improve their accuracy over time and reduce the number of unnecessary warnings.
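The kind of defect such checks hunt for can be shown with a short, contrived C++ fragment: find_sensor below returns a null pointer when nothing matches, and whether read_value is ever called with an id that misses the table is exactly the runtime information the analyser lacks, which is why it may warn even when the code is safe in practice.

```cpp
// Contrived example of a potential null-pointer dereference that a
// path-sensitive static analyser aims to flag at compile time.
#include <cstdio>

struct Sensor { int id; int value; };

// Returns nullptr when no sensor matches; the caller is expected to check.
Sensor* find_sensor(Sensor* table, int count, int id) {
    for (int i = 0; i < count; ++i) {
        if (table[i].id == id) {
            return &table[i];
        }
    }
    return nullptr;
}

int read_value(Sensor* table, int count, int id) {
    Sensor* s = find_sensor(table, count, id);
    return s->value;   // potential null dereference on the no-match path
}

int main() {
    Sensor sensors[] = {{1, 42}, {2, 17}};
    std::printf("%d\n", read_value(sensors, 2, 1));   // safe here, but the tool cannot know that
    return 0;
}
```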
For safety-critical systems, programmers have used rule decks such as MISRA for many years, often enforced by static tools used ahead of compilation. Such guidelines often rule out things like dynamic memory allocation because they are so fraught with problems. More advanced static checks will make it easier to soften those guidelines and continue to take advantage of the low-level control of C and, once again, slow the migration to new languages.