Meeting critical aerospace requirements with DO-178B
The number of software components in the aerospace sector has grown enormously over time and, as the role of software has become more pivotal, so too has the need for improved safety analysis and for proof that software-intensive systems are safe.
It is relatively easy to write software; the hard part is guaranteeing that it will do only what it is required to do, that it will do it consistently and reliably, and that it will do it when, and only when, it is needed. This is especially difficult within a systems context, and it is the basis for producing software in a safety- or mission-critical domain such as the aerospace sector, where, depending on the subsystem in question, the failure of a software system could cause a catastrophic accident.
For example, the Full Authority Digital Engine Controller (FADEC) on an aircraft is, in many ways, similar to the engine management system (EMS) in your car. But if the EMS fails, it causes a breakdown by the side of the road; if the FADEC fails, an aircraft falls from the sky. As a consequence, the software for the FADEC needs to be verified to the highest level possible. The process of fully testing, verifying and certifying the software on an aircraft starts with a system safety analysis to determine which systems are critical and which are incidental.
So, for instance, the FADEC is a critical system because it is required to keep the aircraft in the sky. The in-flight entertainment system is not critical: its failure might frustrate flight attendants and passengers, but it does not affect the safety of the flight. This difference in safety criticality allows systems to be produced and certified to different levels of integrity.
At this stage, the assurance level applies to the system as a whole, but systems are usually made up of hardware and software, and the system-level assurance can be flowed down to these components. Once the safety requirements are known, development of the hardware and software can proceed, following the correct processes to achieve the necessary assurance level. To ensure uniform application of these processes, an international standard called DO-178 was published by RTCA (formerly the Radio Technical Commission for Aeronautics).
The latest issue, DO-178B, was released in 1992 and is due to be updated to DO-178C at the end of 2011. However, the underlying requirements have not changed and it can be argued that the standard has been remarkably successful in preventing accidents, since no accident has been attributed solely to software. DO-178B, or Software Considerations in Airborne Systems and Equipment Certification, is an objective-based standard.
It does not prescribe how to achieve anything; rather, it outlines what needs to be achieved for each assurance level. Each piece of airborne software is assigned an assurance level (see table 1), and the corresponding objectives are then prescribed; the number of objectives required decreases as the assurance level decreases (see table 2).
Testing, design reviews, code reviews and so on are objectives. However, it is not only the number of objectives that is important; there is also a requirement for independent verification, which means an activity must be checked by someone not linked to the design. If you review your own code, you will assume it is OK; someone else has to understand what you have done and confirm that the requirement has been met. Independence in testing is a good thing because it tends to remove the possibility that management and budget pressures may lead to corners being cut.
Because of the complexity of software, it is impossible to test it exhaustively and guarantee that there are no faults or unintended consequences. Even the game of chess, bounded by an 8 x 8 board, offers an astronomical number of possibilities. In a complex software module, there are so many variables and interactions that achieving 100% coverage of every input combination and execution path would effectively take forever. This is not to say that testing is unnecessary, but it cannot be the sole proof that software is correct.
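To give a sense of the scale involved, the C sketch below (purely illustrative; the routine blend is hypothetical and not drawn from the standard) counts the cases needed to test exhaustively even a trivial function with two 32-bit inputs, and estimates how long that would take at an optimistic test rate.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical example routine: two 32-bit inputs. */
static int32_t blend(int32_t a, int32_t b)
{
    return (a / 2) + (b / 2);
}

int main(void)
{
    /* Each 32-bit input has 2^32 possible values, so exhaustively testing
     * both inputs means 2^64 combinations. */
    const double combinations     = 18446744073709551616.0; /* 2^64 */
    const double tests_per_second = 1.0e6;                  /* optimistic rig */
    const double seconds_per_year = 3600.0 * 24.0 * 365.0;

    printf("blend(6, 4) = %d\n", (int)blend(6, 4));
    printf("exhaustive test cases: %.3e\n", combinations);
    printf("years at %.0e tests/s: %.0f\n",
           tests_per_second,
           combinations / (tests_per_second * seconds_per_year));
    return 0;
}

Even at a million tests per second, the exhaustive run would take roughly 585,000 years, which is why testing has to be targeted at the requirements and backed by reviews rather than relying on brute-force input coverage.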
It is not sufficient to test only that the software does what it is designed to do; tests also need to show that it does not have unintended consequences. In an incident involving Malaysia Airlines in 2005, the FAA eventually determined that a software error on a Boeing 777 had permitted the air data inertial reference unit to accept data from a failed accelerometer, causing the aircraft to make uncommanded manoeuvres. Not all possible input conditions had been considered when testing the software.
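As a simplified, hypothetical illustration of that point (it is in no way the actual ADIRU design), the C sketch below shows the kind of input-validity check, and the negative test cases, that robustness testing is meant to exercise: data from a sensor flagged as failed, or data that is physically implausible, must be rejected.

#include <assert.h>
#include <stdbool.h>

typedef struct {
    double value;   /* measured acceleration, m/s^2         */
    bool   failed;  /* set when built-in test flags a fault */
} accel_sample_t;

/* Accept a sample only if the sensor is healthy and the value is plausible. */
static bool accept_sample(const accel_sample_t *s)
{
    const double limit = 100.0;  /* hypothetical plausibility bound */
    if (s->failed) {
        return false;            /* never use data from a failed sensor */
    }
    return s->value >= -limit && s->value <= limit;
}

int main(void)
{
    accel_sample_t good   = {   9.81, false };
    accel_sample_t failed = {   9.81, true  }; /* plausible value, failed unit */
    accel_sample_t wild   = { 5000.0, false }; /* implausible value            */

    /* Requirements-based test: nominal data is accepted. */
    assert(accept_sample(&good));

    /* Robustness tests: abnormal conditions must also be exercised. */
    assert(!accept_sample(&failed));
    assert(!accept_sample(&wild));
    return 0;
}

The first assertion is the normal, requirements-based case; the last two are the abnormal input conditions that also need to be considered.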
DO-178B also takes into account the design process and software lifecycle, demonstrating confidence at every stage that the software has been developed to the required assurance level. It requires documented evidence that an appropriate lifecycle has been chosen and detailed information on how certification will be achieved. It identifies the information that needs to be considered and suggests the documents that should be produced at each lifecycle stage (see table 3).
DO-178B describes the information that is required, not the tools that should be used or the format in which the information should be presented. Various documents may be combined where that makes sense and, as long as the objective is achieved and the required information is available, the format can be the standard output from a favoured tool set. The software industry has moved a long way since DO-178B was published: tools and environments have evolved, programming languages have changed and the entire design process has progressed.
These changes could have required a complete rewrite of the standard, but the objectives have remained the same, which is why DO-178C does not contain fundamental changes: an organisation that is compliant with DO-178B will also be compliant with DO-178C. While this level of required documentation may seem burdensome, it is appropriate for achieving certification at the required assurance level, and any organisation with a recognised and documented development process will already have much of the information required for certification.
The issue is to identify the appropriate processes and information that are already available and to present them in a way that meets the objectives of the standard. To achieve certification, it is necessary to appoint a Designated Engineering Representative, approved by the certifying body, at the start of the development process. The planning process and documentation are then reviewed at an early stage to ensure that the appropriate issues have been considered and that there is a suitable route to certification.
Software development is often considered arcane, but DO-254, the standard that covers compliance for complex airborne electronic hardware, prescribes a process remarkably similar to that for software. This symmetry is natural, given that hardware and software are equally critical to air safety.
Russell Jugg is a consulting engineer with Critical Software.