
Assuring safety of Artificial Intelligence

Bosch Research Blog | Post by Lydia Gauerhof, 2021-04-29

Lydia Gauerhof explains how to assure safety of Artificial Intelligence.

Co-authors: Roman Gansch, Christoph Schorn, Markus Schweizer, Andreas Heyl, Andreas Rohatschek

The use of Artificial Intelligence (AI) in products we use in our daily lives has surged in the past years. While it offers many benefits, there are also risks when an AI algorithm makes a safety-critical decision, e.g., for object detection in an automated vehicle. It is essential to assure that such decisions are made by a safe AI component.

However, there are numerous challenges in assuring the safety of Artificial Intelligence.

Prerequisites for assuring AI safety

  • First, it is necessary to specify what constitutes a safe behavior of the AI function, including under which conditions the component will provide which service.
  • Second, it must be assured that the behavior implemented by the AI component is safe under all conditions.

This doesn’t sound new in terms of safety assurance. So what is different about assuring the safety of AI?

Blackbox AI

AI is used for complex tasks, which makes an exhaustive specification unfeasible. The nature of an AI component is that it learns the required behavior on its own from the supplied data. To us humans it acts mostly as a blackbox, i.e., the inner workings of how a decision is made are not transparent to us. The AI may therefore exhibit a so-called functional insufficiency, meaning it does not work as intended, for example by misdetecting an object under unfavorable conditions.

Safety assurance involves mastering these functional insufficiencies of the AI components as well as the often a priori unknown influences of the data, some of which stem from the open context the AI must operate in. For further information, please have a look at this publication about structuring validation targets of a machine learning function applied to automated driving.

But what does that mean for the use of AI components in safety-critical systems?

5 phases of AI lifecycle

Let’s look at an example from automated driving, where neural networks are used for object detection. A crucial task is to detect pedestrians in order to prevent them from being harmed. To achieve this goal, we pursue a safety assurance case based on the five phases of the AI lifecycle:

5 phases of AI Safety Assurance: specification, data management, design and training, verification and validation, deployment.

The specification phase includes the requirements elicitation discussed in one of our recent publications. We intend to use these requirements to guide the activities in developing and training a neural network. Important, but not that easy: these requirements must be formulated not only for the function, but also for the data!
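To make the idea of machine-checkable requirements concrete, here is a minimal sketch (not taken from the publication) of how a functional requirement restricted to an operating condition could be expressed and evaluated in code. The sample schema, the condition names and the 99 % recall target are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sample:
    """One labelled evaluation sample (hypothetical schema)."""
    is_pedestrian: bool   # ground truth: a pedestrian is present
    detected: bool        # model output: a pedestrian was reported
    distance_m: float     # distance to the object in meters
    lighting: str         # operating condition, e.g. "day" or "night"

@dataclass
class Requirement:
    """A detection requirement restricted to an operating condition."""
    name: str
    applies_to: Callable[[Sample], bool]  # which samples the requirement covers
    min_recall: float                     # required detection rate on those samples

def check(req: Requirement, samples: List[Sample]) -> bool:
    """Evaluate one requirement against a labelled evaluation set."""
    relevant = [s for s in samples if s.is_pedestrian and req.applies_to(s)]
    if not relevant:
        return False  # not testable with this data set: itself a finding
    recall = sum(s.detected for s in relevant) / len(relevant)
    print(f"{req.name}: recall = {recall:.3f} (required >= {req.min_recall})")
    return recall >= req.min_recall

# Example: pedestrians closer than 50 m shall also be detected at night.
near_night = Requirement(
    name="near pedestrians at night",
    applies_to=lambda s: s.distance_m < 50.0 and s.lighting == "night",
    min_recall=0.99,
)
```

Writing requirements in such a checkable form is one way to let the same statements drive testing later in the lifecycle.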

One root cause of functional insufficiencies lies in unsuitable training data. If the provided data does not enable the AI component to distinguish between similar objects, trees might be detected as pedestrians. Therefore, we must analyze which other patterns may have been learned that have little or nothing to do with our functionality. This means that we can, and must, optimize not only the neural network but also our data. Data curation strategies should be applied with the goal of reaching data suitability.
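A minimal sketch of one possible data-suitability check, assuming each training sample carries metadata about its operating conditions: count how those conditions are covered so that gaps become visible before training. The attribute names and the threshold are illustrative and not part of the curation strategies referenced above.

```python
from collections import Counter
from itertools import product

# Hypothetical per-sample metadata: (object class, lighting, weather).
training_metadata = [
    ("pedestrian", "day", "clear"),
    ("pedestrian", "day", "rain"),
    ("pedestrian", "night", "clear"),
    ("tree", "day", "clear"),
    # ... in practice, metadata for millions of labelled samples
]

coverage = Counter(training_metadata)

# Flag condition combinations that are under-represented for pedestrians.
MIN_SAMPLES = 1  # toy threshold; real targets come from the data requirements
for lighting, weather in product(("day", "night"), ("clear", "rain")):
    n = coverage[("pedestrian", lighting, weather)]
    if n < MIN_SAMPLES:
        print(f"Coverage gap: pedestrian / {lighting} / {weather} ({n} samples)")
```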

The design and training of neural networks is the focus of many publications, where the aim is often to improve performance by a few percentage points, or even a fraction of a percentage point. In the context of safety assurance, however, we aim to satisfy the requirements, which may include robustness requirements as well as performance requirements. It is therefore essential to understand which measures contribute to the desired properties and to be able to explain the decisions taken, e.g., design decisions.
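As a hedged illustration of what "satisfying the requirements" can mean during training, the sketch below accepts a candidate model only if it meets both a performance target and a simple robustness target (accuracy under a brightness perturbation). The thresholds and the perturbation are placeholders, not the acceptance criteria used at Bosch.

```python
import numpy as np

def accuracy(model, images, labels):
    """Fraction of correctly classified images; `model` maps a batch to class ids."""
    return float(np.mean(model(images) == labels))

def release_gate(model, images, labels,
                 min_accuracy=0.95, min_robust_accuracy=0.90):
    """Accept a trained model only if nominal and perturbed accuracy meet their targets."""
    nominal = accuracy(model, images, labels)

    # A deliberately simple robustness probe: darken the images and re-evaluate.
    # Real robustness requirements would rely on a validated set of perturbations.
    darkened = np.clip(images * 0.6, 0.0, 1.0)
    robust = accuracy(model, darkened, labels)

    print(f"nominal accuracy = {nominal:.3f}, robust accuracy = {robust:.3f}")
    return nominal >= min_accuracy and robust >= min_robust_accuracy
```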

Furthermore, we want to know how to deal with the fact that our data is always just a subset of reality. As part of verification and validation, these and other testing challenges are discussed in this blog about machine learning testing. We also aim to make sure that the AI function works as intended in an embedded environment with limited resources. Despite all the fault prevention measures mentioned so far, we still have to find solutions for situations in which functional insufficiencies or other failures emerge at runtime: there are already ways to mitigate errors before they lead to system failures, e.g., anomaly detection and defenses against adversarial attacks, as described in this blog article on how to increase the robustness of AI perception.
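One way to mitigate such errors at runtime, sketched here only as an outline, is a plausibility monitor that withholds low-confidence or unfamiliar inputs from the downstream system and triggers a safe fallback instead. The confidence threshold and the simple feature-statistics novelty score below are assumptions for illustration; the referenced blog article describes more capable techniques.

```python
import numpy as np

class RuntimeMonitor:
    """Flags predictions that the downstream system should not rely on."""

    def __init__(self, train_features: np.ndarray,
                 confidence_threshold: float = 0.8,
                 novelty_threshold: float = 3.0):
        # Mean and spread of feature activations seen during training serve as a
        # crude reference for "known" inputs (illustrative, not a full OOD detector).
        self.mean = train_features.mean(axis=0)
        self.std = train_features.std(axis=0) + 1e-8
        self.confidence_threshold = confidence_threshold
        self.novelty_threshold = novelty_threshold

    def is_trustworthy(self, features: np.ndarray, confidence: float) -> bool:
        # How far the current input's features deviate from the training distribution.
        z_score = float(np.abs((features - self.mean) / self.std).max())
        return (confidence >= self.confidence_threshold
                and z_score <= self.novelty_threshold)

# If is_trustworthy() returns False, the system falls back to a conservative
# behavior (e.g. a degraded driving mode) instead of acting on the detection.
```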

Even after a product with an AI component is released and deployed, it has to be monitored and checked for unknown behavior during operation in the field. We can only deploy an AI-based function if we are aware of its strengths and weaknesses, and if we are certain that the residual risk in the overall system is acceptably low.
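A minimal sketch of what such field monitoring could look like under simple assumptions: the deployed function reports the daily rate of low-confidence detections, and a drift relative to the baseline recorded at release time raises a flag for further analysis. The baseline value, confidence threshold and alert factor are invented for the example.

```python
# Baseline measured during release testing: fraction of frames with
# low-confidence pedestrian detections (value invented for this example).
BASELINE_LOW_CONF_RATE = 0.02
ALERT_FACTOR = 2.0  # flag if the field rate exceeds twice the baseline

def low_confidence_rate(confidences, threshold=0.5):
    """Fraction of detections in one day of operation that fall below the threshold."""
    low = sum(1 for c in confidences if c < threshold)
    return low / len(confidences) if confidences else 0.0

def field_check(daily_confidences):
    """Compare one day of field data against the release baseline."""
    rate = low_confidence_rate(daily_confidences)
    if rate > ALERT_FACTOR * BASELINE_LOW_CONF_RATE:
        print(f"Field alert: low-confidence rate {rate:.3f} exceeds "
              f"{ALERT_FACTOR} x baseline ({BASELINE_LOW_CONF_RATE:.3f})")
    return rate
```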

For this, it is necessary to have a consensus on the effectiveness of the methods and approaches used in the safety assurance case. To this end, we cooperate with many industrial and academic partners on this topic, for example in the AI assurance project and the Assuring Autonomy International Program (AAIP), to name just two.

All in all, it is essential to assure that AI components are safe. With our 5-phase approach, we consider each stage of the lifecycle, identify the causes of why an AI component might not work as intended, and provide measures to mitigate them.

What are your thoughts on this topic?

Please feel free to share them via LinkedIn or to contact me directly.

Author: Lydia Gauerhof

Lydia is a researcher in the field of dependable systems and software engineering. She started working at Bosch in 2016 and has focused on Safety Assurance of Artificial Intelligence applied in Automated Driving since 2017. Lydia is passionate about bringing together industry and research as well as linking the topics of safety and AI. In addition to contributing to Bosch research and internal development, she is a Fellow of the Assuring Autonomy International Program.


