One of the problems with bug finding is knowing just how many bugs a program actually contains. Often, they are only found once the software is being used in earnest; a particular combination of circumstances highlights the program’s inability to operate as intended.
There are, of course, tools which can be used to analyse software, but how effective are they? Researchers from New York University’s Tandon School of Engineering say there is no way to determine how many bugs go unnoticed, nor to measure the efficacy of bug finding tools. So they have addressed the problem counterintuitively: by injecting a known number of bugs into programs written in C and seeing how many can be found.
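To make that approach concrete, here is a minimal sketch of the kind of synthetic bug such a tool might plant in C code. The function name, trigger value and overflow pattern below are illustrative assumptions, not details taken from the researchers’ work; the key idea is that the planted flaw only fires on a specific input, so the injector knows exactly which input proves the bug exists.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: a synthetic, trigger-guarded bug of the sort
 * an injection tool might plant. The unsafe path is taken only when
 * a magic value appears at the start of the input, so the injector
 * retains a known input that demonstrates the bug. */
static void process_record(const char *input)
{
    char buf[16];
    unsigned trigger = 0;

    /* Read the first four bytes of the input as the trigger value. */
    memcpy(&trigger, input, sizeof trigger);

    if (trigger == 0xdeadbeefu) {
        /* Injected bug: length check skipped, potential overflow. */
        strcpy(buf, input);
    } else {
        /* Normal path: bounded copy, always NUL-terminated. */
        strncpy(buf, input, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
    }
    printf("processed: %.15s\n", buf);
}

int main(void)
{
    process_record("benign input"); /* safe path taken */
    return 0;
}
```

Because every injected bug comes with a known triggering input, a tool’s detection rate can be measured directly: run the tool over the seeded program and count how many of the planted bugs it flags.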
And the results are, let’s say, interesting: some of the tools tested detected as few as 2% of the injected bugs.
How well these results translate into the embedded software sector isn’t clear, but you might make some educated guesses.
Exercising mission-critical software to find weaknesses before deployment has always been a challenge, and the stakes are rising: autonomous and semi-autonomous vehicles are beginning to appear on our roads, making bug-free software more important than ever.
But I still come back to a session at the 2014 Electronics Design Show conference, where an attendee asked: “Who in this room has written bug-free software?” Nobody put their hand up in answer to that question, yet everyone relies on tools of some kind to help debug their code. The question now is: have we been relying on tools which are less capable than we believed?