Leveraging methods and techniques of Airborne Software Certification to make autonomous cars safer

Thursday, April 25, 2019
Amine Smires
Director, Product Management
CS Communication & Systèmes Canada

The challenges related to certifying safety-critical embedded automotive systems are huge, and the entry into service of partially or fully autonomous vehicles is only a few years away. With the emergence of Advanced Driver-Assistance Systems (ADAS) such as lane keeping, emergency braking and assisted parking, combined with the exponential growth of advanced automotive projects, the Society of Automotive Engineers (SAE) defined in 2014 six levels of driving automation (0 to 5), as shown in Figure 1.


Figure 1: The SAE levels of driving automation.
Source: SAE International.
We are all familiar with levels 0 and 1: level 0 means no automation whatsoever, while at level 1 the vehicle automates only one aspect of driving (braking, acceleration or steering). An example of level 1 is adaptive cruise control. However, things get interesting with level 2 automation, where the system controls both steering and speed. Such systems have appeared only very recently, with the Mercedes Distronic Plus, the General Motors Super Cruise and Tesla's Autopilot.
There are no level 3, 4 or 5 automation vehicles commercially available yet. Audi is the Original Equipment Manufacturer (OEM) closest to launching a level 3 technology with its Traffic Jam Pilot system, which allows the car to operate autonomously under certain conditions; however, it has not yet received legal approval in most countries (including the United States). There is also a debate around this level of automation, because drivers must be prepared to take back control when conditions are no longer met, sometimes after not having paid attention for quite some time. That is why most OEMs have skipped level 3 and prefer to jump straight to levels 4 and 5, avoiding any confusion and liability issues around automation.

Safety Challenges:

The development of the advanced products needed for high levels of autonomy is achieved predominantly through software. This has drastically increased software complexity, not only by incorporating artificial intelligence but also by exponentially increasing code size. As shown in Figure 2, a current modern high-end car (with level 1 automation) embeds 100 million lines of code, almost ten times as many as a Boeing 787 Dreamliner!

Figure 2: Comparison of the software size of different systems and applications.
Source: NASA, IEEE, Wired, Boeing, Microsoft, Linux Foundation, Ohioh.


When I met with one of the leading Tier 1 companies in Michigan in 2018, their engineers told me that their electric power steering product used for level 2 automation contains 50 million lines of code by itself! It is therefore very likely that a fully autonomous level 5 car will require more than a billion lines of code, given the numerous ADAS systems involved (Figure 3) combined with the introduction of machine learning.

Figure 3: ADAS systems and their corresponding sensor usage.
Source: Michigan Tech Research Institute.


Consequently, the most pressing question is: how do we ensure that these hundreds of millions of lines of code are safe? How do we prevent fatal accidents such as the Tesla Autopilot[1] or Uber[2] self-driving car accidents that happened last year?

We believe that part of the solution will come from adopting good practices from other industries, more specifically aerospace, which has long dealt with fail-operational systems.

Testing Methods for Fail-Operational Systems:

For SAE level 1 and 2 ADAS systems, we recommend applying the ISO 26262 standard (Road Vehicles – Functional Safety). Released in 2011, it draws extensively on the airborne standard DO-178C. Based on our aerospace experience, CS Canada proposes an agile strategy for applying ISO 26262:

  1. Start by conducting a Test Readiness Review, which essentially 'shakes' the software to ensure the major functionalities do not malfunction under stress.
  2. Minimum Gate Testing: run a minimum set of tests that provides a high level of confidence in the system before releasing an official software version.
  3. System-Level Testing, which is performed to verify functional and dynamic system behaviour (stack analysis, memory margin, timing margin, etc.).
  4. Finally, Requirements-Based Testing is recommended to achieve the desired software testing objectives.
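
To make step 4 concrete, a requirements-based test case is a test traceable to one specific software requirement. The sketch below is purely illustrative: the requirement ID `SWR-042`, the `steering_correction` function and the ±5.0° saturation limit are all invented for this example, not taken from any actual project.

```python
# Hypothetical requirement SWR-042: "The steering correction command
# shall saturate at +/-5.0 degrees for any lane-offset input."

def steering_correction(lane_offset_m, gain=2.0, limit_deg=5.0):
    """Proportional steering correction, saturated to +/-limit_deg."""
    command = gain * lane_offset_m
    return max(-limit_deg, min(limit_deg, command))

def test_swr_042_saturation():
    # Each assertion is traceable back to requirement SWR-042.
    assert steering_correction(10.0) == 5.0    # large positive offset saturates
    assert steering_correction(-10.0) == -5.0  # large negative offset saturates
    assert steering_correction(1.0) == 2.0     # nominal case: proportional response

test_swr_042_saturation()
print("SWR-042 verified")
```

The value of this discipline is the traceability itself: every test exists because a requirement demands it, which is what the coverage analyses described later measure.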

All these phases are performed on Hardware-in-the-Loop (HIL) and Software-in-the-Loop (SIL) platforms developed by CS Canada and our partners, as shown in Figure 4.

Figure 4: Hardware-in-the-loop (HIL) platforms at CS Canada.
Source: CS Communication & Systèmes Canada Inc.


In addition, most of the test cases used in steps 1 and 2 are reused in all the other phases, which accelerates the ISO 26262 certification process and makes it more cost-effective.

Moreover, most fail-operational systems use both redundancy and a degraded mode to achieve maximum safety. In redundancy mode, the system must continue to work properly by providing an alternative when a failure is detected (loss of communication, loss of sensor signals, an actuator system fault, or a power failure, for example). Since these system failures are expected, alternative operating modes are provided. To verify the system's behaviour in redundancy mode, each expected failure is injected into the system and the behaviour is analyzed to ensure that the system keeps working without perturbation. The verification activities consist of testing that:

  • The transition between normal operation and redundancy mode is performed without interruption (e.g. a channel switchover)
  • The system delivers the same performance every time (response time, precision, etc.)
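
A channel-switchover test of this kind can be sketched in a few lines. The dual-channel sensor model and the injected failure below are a toy illustration, not an actual CS Canada test bench:

```python
class DualChannelSensor:
    """Redundant sensor: channel B takes over when channel A fails."""
    def __init__(self):
        self.a_failed = False

    def read(self):
        # Both channels measure the same physical value; B is the backup.
        channel_a = None if self.a_failed else 42.0
        channel_b = 42.0
        if channel_a is not None:
            return ("A", channel_a)
        return ("B", channel_b)   # switchover: output continues uninterrupted

sensor = DualChannelSensor()
before = sensor.read()
sensor.a_failed = True            # inject the expected failure
after = sensor.read()

# Verify the switchover: measured value unchanged, active channel changed.
assert before == ("A", 42.0) and after == ("B", 42.0)
print("switchover OK:", before[0], "->", after[0])
```

On a real HIL platform the failure would be injected at the electrical or bus level, but the verification logic is the same: the output must show no perturbation across the transition.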

If a critical function is lost, the system must continue to operate safely and within the defined categories of failure conditions. The system's critical operations must still be performed even with a compromised system and a loss of performance. For this degraded mode, the verification activities consist of checking that:

  • The transition between operating modes, from normal to critical, is performed as per the system requirements
  • The failure annunciation is propagated correctly throughout the global system (vehicle, aircraft, etc.)
  • The system maintains an acceptable behaviour for a minimum duration of time (e.g. long enough to allow the autonomous car to park safely).
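
These degraded-mode checks can be sketched the same way. The mode manager below is a toy model; the state names, the annunciation event and the minimum duration are all invented for illustration:

```python
def step(state, sensors_ok):
    """Toy mode manager: NORMAL -> DEGRADED on a critical failure,
    with the fault annunciated to the rest of the vehicle."""
    if state == "NORMAL" and not sensors_ok:
        return "DEGRADED", "FAULT_ANNUNCIATED"  # requirement: announce the failure
    return state, None

# Verify the transition and the fault annunciation.
state, event = step("NORMAL", sensors_ok=False)
assert state == "DEGRADED" and event == "FAULT_ANNUNCIATED"

# In degraded mode the system must stay controllable long enough to
# reach a safe stop (e.g. park the car), modeled here as N time steps.
MIN_SAFE_STEPS = 10
for _ in range(MIN_SAFE_STEPS):
    state, _ = step(state, sensors_ok=False)
    assert state == "DEGRADED"                  # behaviour stays acceptable
print("degraded-mode transition verified")
```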

Finally, to meet the software testing objectives, we strongly recommend performing test coverage analysis for both requirement coverage and structural coverage. Requirement coverage analysis determines how well the requirements-based testing verified the implementation of the software requirements; it may reveal the need for additional requirement test cases due to either an incorrect requirement implementation or a code structure that was not exercised. Structural coverage analysis determines which code structures were not exercised by the requirements-based test procedures. This analysis can also lead to additional testing due to:

  • Missing or incomplete requirements
  • Unjustified deactivated code
  • Extraneous code (e.g. dead code)
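
A minimal sketch of the idea (not using any of the tools named below): the function and the two hypothetical requirements are invented, and branch hits are simply counted so that the analysis can report which branch the requirements-based tests never reached.

```python
# Record how many times each branch of the function is exercised.
coverage = {"neg_branch": 0, "high_branch": 0, "nominal": 0}

def clamp_speed(speed):
    """Clamp a speed command to the range [0, 130] km/h."""
    if speed < 0:                      # defensive branch
        coverage["neg_branch"] += 1
        return 0
    if speed > 130:
        coverage["high_branch"] += 1
        return 130
    coverage["nominal"] += 1
    return speed

# Requirements-based tests (hypothetical requirements:
# "limit speed to 130 km/h" and "pass nominal speeds through"):
assert clamp_speed(200) == 130
assert clamp_speed(50) == 50

# Structural coverage analysis: which branches were never exercised?
uncovered = [name for name, hits in coverage.items() if hits == 0]
print("uncovered branches:", uncovered)
```

Here the analysis flags the negative-speed branch: either a requirement is missing (what should happen on a negative command?) or the branch must be justified as defensive code.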

Any lack of coverage must be justified, and its effect on the system must be assessed for safety and security. Although this analysis is mandatory for ASIL D systems, CS Communication & Systèmes Canada (CS Canada) has been surprised to notice that some companies skip this critical phase of the process. We at CS Canada have certified more than thirty safety-critical software products and have developed scalable and systematic ways of performing coverage analysis using tools such as Reqtify, VectorCAST or LDRA.

Application and Usage of ‘Formal Methods’:

As a final point, we would like to highlight one of the most innovative and advanced capabilities we offer to support our customers in developing safe software: Formal Methods. Formal Methods are techniques based on various elements of discrete mathematics (e.g. finite state machines, symbolic logic, set theory) combined with powerful automated-reasoning algorithms. Judicious use of Formal Methods has the potential to reduce the amount of conventional testing and, most importantly, to reduce the cost of testing by finding errors earlier in the Software Development Life Cycle (SDLC), as shown in Figure 5.

Figure 5: CS Canada objective: find errors very early in the process!
Source: CS Communication & Systèmes Canada Inc.


Developed over the past forty years through research in universities and industry, Formal Methods are now being deployed as a state-of-the-art verification and validation technology thanks to the increase in available computing power. For example, Airbus has used Formal Methods for avionics software certification on the A380 superjumbo aircraft[3].

CS Canada has applied them on various aerospace projects, such as engine controls, to automatically ensure that the inputs and outputs of a model-based design stay within their safety ranges (without creating additional test procedures).

More recently, we have applied another formal-methods approach called ''model checking'' to ADAS systems developed by automotive Tier 1 companies, to ensure their safety properties are verifiably true. Before the design and the code are even developed, we build an abstract model of the system and make it as robust as possible, in just a few weeks. We then enter safety properties into the model (e.g. the longitudinal center of the vehicle is always within fifteen inches of the lane center) and 'ask' the program whether the properties hold. The power of this method is that it will not only answer TRUE or FALSE, but also provide data on how a failure occurred and, ultimately, whether the product is viable.
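
A toy version of this approach can be sketched as an exhaustive search: explore every reachable state of a discrete model of the vehicle's lateral motion and check the safety property |offset| ≤ 15 inches in each one, returning a counterexample trace if the property ever fails. The lane-keeping model, the disturbance magnitude and the correction law below are all invented for illustration; industrial model checkers work on far richer models, but the principle is the same.

```python
from collections import deque

def model_check(step_fn, init, prop, max_depth=50):
    """Breadth-first exploration of the state space. Returns (True, None)
    if prop holds in every reachable state, otherwise (False, trace)
    where trace is a counterexample path from init to the bad state."""
    queue = deque([(init, [init])])
    seen = {init}
    while queue:
        state, trace = queue.popleft()
        if not prop(state):
            return False, trace
        if len(trace) > max_depth:
            continue
        for nxt in step_fn(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return True, None

# Toy lane-keeping model: lateral offset in inches; each step a
# disturbance of +/-2 inches, then an imperfect corrective action.
def step_fn(offset):
    for disturbance in (-2, 2):
        drifted = offset + disturbance
        yield drifted - (drifted // 3)   # controller pulls toward center

safe = lambda offset: abs(offset) <= 15  # the safety property to verify
ok, trace = model_check(step_fn, 0, safe)
print("property holds:", ok)
```

For this model the property holds in every reachable state; if the correction term were weakened, the checker would instead return the exact sequence of disturbances that drives the car out of its lane, which is precisely the failure data mentioned above.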

What’s next?

The transformation of the automotive industry is accelerating, and the race to fully autonomous cars among OEMs, Tier 1s and Tier 2s is peaking. In 2018 we saw that awareness of applying functional safety and certification to SAE level 1 and 2 automation is growing, and we must continue supporting these programs as they face their functional safety challenges. In that regard, a new standard called SOTIF[4] (Safety Of The Intended Functionality) was published in January 2019 and complements ISO 26262 for the safety of road vehicles. SOTIF does not address the safety risks related to electronic malfunctions, as ISO 26262 does, but rather other unreasonable risks arising from the operating environment.

However, the industry does not yet have a solution for certifying the neural-network software used at levels 4 and 5, although such certification must be in place before we start seeing these autonomous cars on the road. Within CS Canada, we are exploring various research avenues to address this huge industry challenge. Given our experience in Formal Methods, we see them as a potential solution, especially for covering corner cases. We are also exploring the development of a 'Doer/Checker' architecture with redundant inputs, in which the 'checker' is responsible for the safety of the system while the artificial intelligence continues to learn in real time.
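
The Doer/Checker idea can be illustrated with a minimal sketch: an untrusted 'doer' (standing in for a learned policy) proposes a command, and a simple, separately verifiable 'checker' enforces a safety envelope no matter what the doer outputs. Everything below, including the names, the limit and the numbers, is hypothetical:

```python
def doer(sensor_input):
    """Untrusted component (stand-in for a neural-network policy)."""
    return 12.0 * sensor_input           # may propose an unsafe command

def checker(command, limit_deg=5.0):
    """Trusted, simple component: enforce the safety envelope."""
    if abs(command) > limit_deg:
        # Override: clamp to the safe limit. A real checker might also
        # trigger a minimal-risk manoeuvre and annunciate the fault.
        return max(-limit_deg, min(limit_deg, command))
    return command

raw = doer(1.0)                          # doer proposes 12.0 degrees
safe_cmd = checker(raw)
assert safe_cmd == 5.0                   # checker keeps the command safe
print("doer:", raw, "-> checker:", safe_cmd)
```

The appeal of this architecture is that only the small, deterministic checker needs to be certified to the highest integrity level, while the learning component can keep evolving behind it.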
