Artificial Intelligence (AI) — Opportunities, risks, and responsibility
Bosch is pursuing the aim of developing technology “Invented for life” in the realm of artificial intelligence as well. Christoph Peylo, head of the Bosch Center for Artificial Intelligence, gives insights into current projects, describes what AI is capable of, and explains why this technology of the future must be managed responsibly.
Mr. Peylo, why do we need artificial intelligence?
At Bosch, we are guided by the aim of developing technologies that make life easier. Artificial intelligence makes an important contribution to achieving this goal. Technological development and digitization have made many areas of the modern world so complex that it’s now difficult for individuals to manage them. Cyber security is a good example of this. Because of the large flow of data, guaranteeing security in the digital world is hardly possible without artificial intelligence. AI can also support people in many other areas by reducing complexity, accelerating processes, and simplifying decision-making. Take the medical sector for example, where individualized, customized treatment is becoming increasingly common. But such therapies are still very expensive and thus only accessible to a small group of patients. AI can make these treatment concepts more scalable and less expensive.
150 AI projects have already been initiated
When it comes to AI, how did Bosch progress in 2018 and what are you currently working on?
To date, we have initiated more than 150 AI projects. We are currently working intensively on a manufacturing analytics system to optimize production processes. The system aims to identify errors in production processes and correct them more quickly. To this end, we are developing an intelligent, data-based decision support system that gives the associates concerned the information they need to make decisions. In another project, we are using intelligent control systems to understand and reduce their influence on vehicle emissions. This example shows another important application for AI: the eco-friendly design of technical systems.
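To make the idea of data-based error detection concrete, here is a minimal sketch of the kind of check such a manufacturing analytics system might run: flag production measurements that deviate strongly from the learned norm so that associates can investigate quickly. All names, data, and thresholds are illustrative assumptions, not details of Bosch's actual system.

```python
# Hypothetical example: flag outliers in a series of production
# measurements using a simple z-score test.
from statistics import mean, stdev

def flag_anomalies(measurements, threshold=2.5):
    """Return indices of measurements more than `threshold`
    standard deviations away from the mean."""
    mu = mean(measurements)
    sigma = stdev(measurements)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(measurements)
            if abs(x - mu) / sigma > threshold]

# Assumed example data: a run of torque readings with one outlier
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 25.0, 10.0, 9.9, 10.1]
print(flag_anomalies(readings))  # → [6]
```

A real system would use far richer models than a z-score, but the principle is the same: learn what "normal" looks like from data, then surface deviations to the people who make the decisions.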
Does this mean AI can contribute to fighting climate change?
Absolutely. To make the best possible use of the potential of AI in this realm, we are currently conducting an internal analysis in several areas. We are examining the ways in which AI can help save energy, for instance at our computing centers, with heating and air conditioning systems, and at production facilities. Simply put: we want to find out how energy consumption can be better managed by drawing on the experience of learned usage profiles.
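The idea of "learned usage profiles" can be sketched in a few lines: average historical load data into an hourly profile, then identify the hours in which heating or air conditioning output can safely be reduced. This is my own illustrative assumption of the approach, not Bosch's implementation; all names and figures are invented.

```python
# Hypothetical sketch: learn an hourly energy-usage profile from
# historical samples, then pick hours where output can be throttled.
from collections import defaultdict

def learn_profile(history):
    """history: list of (hour, load_kw) samples -> mean load per hour."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hour, load in history:
        sums[hour] += load
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}

def throttle_hours(profile, idle_threshold_kw=5.0):
    """Hours whose learned average load is low enough that heating or
    cooling output can be reduced without affecting anyone."""
    return sorted(h for h, load in profile.items() if load < idle_threshold_kw)

# Assumed samples: busy daytime hours, near-idle night hours
history = [(9, 40.0), (9, 42.0), (13, 38.0), (13, 36.0),
           (2, 3.0), (2, 4.0), (3, 2.5), (3, 3.5)]
print(throttle_hours(learn_profile(history)))  # → [2, 3]
```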
Critics are concerned that intelligent machines will one day outsmart people. Do you understand this worry?
Of course, but I don’t share this view. The more you learn about AI, the more you begin to admire the “system architecture” of human beings. If you compare the ratio of energy consumption to cognitive performance, people almost always outperform machines. For example, you can talk on the phone while simultaneously climbing stairs, eating a cheese sandwich, and greeting a colleague who is coming down the steps. And you do this in a very energy-efficient way. In fact, you might be able to get through the entire day on that one cheese sandwich and still be able to master complex tasks. In contrast, AI requires high computing power, hardware, software, power, and much more. And besides, replacing people with AI is not the aim. AI should help humans with their weaknesses. It should be an aid, like my glasses, for example. While they don’t see better than I do, they help me see better.
To a certain degree, intelligent machines act independently. This means they could potentially make wrong decisions. How can this risk be mitigated, and how can AI be used responsibly?
The use of AI calls for clear rules, because intelligent objects can act independently. According to our understanding of social values, acting also entails responsibility: the actor must ensure that an action is compatible with social rules and values, and is judged accordingly. But a machine cannot tell whether its actions meet this requirement. This is why people are needed: they must make the rules that govern the machine’s actions.
How does that work in practice?
The rules must be made part of the system. Human beings cannot always be available as a last resort; this would take far too much time. For instance, it would be ill-advised for an airbag to ask someone during a car crash whether it should deploy. Within the framework of the rules that a human being has provided, there are areas of application in which AI should not decide on its own, or in which the decisions it makes should be reversible. In any case, we need to make sure that decisions made by artificial intelligence do not lead to discrimination against certain groups of people.
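One way to picture "rules made part of the system" is as an explicit policy layer that every proposed action must pass through before it is executed. The following toy sketch is entirely my own assumption, not a Bosch design: humans define which action domains are reversible and which the AI may decide alone, and the system refuses anything outside those bounds.

```python
# Hypothetical policy layer: human-defined rules that constrain what
# an AI system may do autonomously. All domain names are invented.
REVERSIBLE_DOMAINS = {"lighting", "climate_setpoint"}  # humans decide this set
AUTONOMOUS_DOMAINS = {"climate_setpoint"}              # AI may act alone here

def allowed(domain, autonomous):
    """Return True if the system may take this action under the rules."""
    if domain not in REVERSIBLE_DOMAINS:
        return False  # irreversible actions are never left to the AI
    if autonomous and domain not in AUTONOMOUS_DOMAINS:
        return False  # reversible, but still needs a human in the loop
    return True

print(allowed("climate_setpoint", autonomous=True))  # True
print(allowed("lighting", autonomous=True))          # False: human in the loop
print(allowed("medication_dosing", autonomous=True)) # False: irreversible
```

The point of the sketch is only that the constraints are data the humans own, checked on every action, rather than behavior the machine is trusted to infer for itself.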
What principles are AI activities at Bosch based on?
Bosch develops technology “Invented for life”, and we take this approach with AI as well. We see AI as a beneficial technology that makes life easier and more pleasant. It does not replace people; it supports them. It goes without saying that we also analyze and manage AI-related risks, just as we do with every other product we develop. With any AI product that we develop, we thus aim to ensure that it is “safe, robust, and transparent.” What is more, our AI solutions have to be explainable: it must be possible to understand how an intelligent system arrived at a given decision.
AI is the subject of heated and polarized debate. How can people’s trust in AI be increased?
We have to involve people. We need to have a broad debate about what our society expects of artificial intelligence, in which contexts AI should be used, and what the limits should be. While AI is a very powerful tool, and it makes sense to use it in a broad range of areas, it will also fundamentally change our society, as we will have systems that are capable of acting independently. We must agree on what these systems should and shouldn’t be allowed to do. Experience has shown that without this debate, a technology can falter before it reaches its full potential. However, if we invest enough time in a societal debate, a broad majority of the population is likely to support this technology. At Bosch, we actively contribute to shaping this debate, for instance through the European Commission’s “High-Level Expert Group on Artificial Intelligence”, of which I am a member. This platform advises the European Commission on AI and drives the discussion within Europe forward. At Bosch, too, we are working on a Code of Ethics that addresses AI.
If you could create the AI of your choice: what would it look like, what would it be capable of, and where would it be used?
We want to connect and design our products in such a way that they can adapt even better, respond to their environment, and better support human beings. I think that is what our customers really want: good products that adapt to individual users. Personally, I see a great deal of potential for AI as an assistant for thought, because many of the problems humanity faces stem from the fact that people grasp mainly simple, linear concepts. Things get difficult very quickly the moment people start dealing with complex concepts that involve many variables. A tool that compensates for this weakness would really allow us to progress as a society.