
More transparent AI with visual analytics

Liu Ren is chief scientist at the Bosch Research and Technology Center in Silicon Valley. With visual analytics software, he and his team are combining artificial intelligence with human knowledge to bring out the best of both worlds.

For Liu Ren, there is no doubt that visual analytics has huge potential.

Artificial intelligence (AI) provides the tools to mine big data, and self-driving cars would not be possible without it. Wherever it is applied, it helps make people’s jobs and lives easier. But users have to be able to understand and control the way AI works and the grounds for its decisions. To make sure this is so, Dr. Liu Ren is exploring a method called visual analytics. Ren leads a team of Bosch experts at the Bosch Research and Technology Center in Silicon Valley. As chief scientist for intelligent human-machine interaction (HMI) technologies and systems, his job is to explore how human and machine intelligence can be combined. “Machines make mistakes, humans make mistakes. Visual analytics helps both avoid mistakes,” Ren says. In this interview, Liu Ren explains just how it works.

“We want to know what AI doesn’t know, and why. Once we’ve achieved that, we can help it acquire the knowledge it lacks.”
Liu Ren, chief scientist for intelligent HMI technologies and systems at the Bosch Research and Technology Center, Silicon Valley

Mr. Ren, what is visual analytics all about?
AI-assisted visual analytics, or AiVA for short, is an exploration of an AI’s reasoning. It helps us understand how an AI system arrives at a decision and how this decision-making process can be improved. There are three phases to this. First, data from an AI system is processed so it can be understood by humans. In the next step, the data is visualized. Finally, with a minimum amount of interaction, people can draw on this visualized information to guide and optimize the AI system.
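To make the three phases a bit more concrete, here is a minimal, self-contained Python sketch. The data, scenario labels, and threshold are invented for illustration and do not reflect Bosch's actual pipeline; they only mirror the process–visualize–guide loop described above.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Prediction:
    case_id: int
    scenario: str   # human-readable description of the situation
    correct: bool   # did the AI system get this case right?

def process(predictions):
    """Phase 1: condense raw AI output into human-readable error rates per scenario."""
    totals, errors = Counter(), Counter()
    for p in predictions:
        totals[p.scenario] += 1
        errors[p.scenario] += (not p.correct)
    return {s: errors[s] / totals[s] for s in totals}

def visualize(error_rates):
    """Phase 2: render a simple text 'dashboard' so problem areas stand out at a glance."""
    for scenario, rate in sorted(error_rates.items(), key=lambda kv: -kv[1]):
        print(f"{scenario:<40} {rate:5.1%} {'#' * int(rate * 20)}")

def guide(error_rates, threshold=0.2):
    """Phase 3: a person (here, a simple rule standing in for one) flags what to improve."""
    return [s for s, rate in error_rates.items() if rate > threshold]

predictions = [
    Prediction(1, "traffic light, clear day, head-on", True),
    Prediction(2, "traffic light, clear day, head-on", True),
    Prediction(3, "traffic light at an angle, heavy rain", False),
    Prediction(4, "traffic light at an angle, heavy rain", True),
]
rates = process(predictions)
visualize(rates)
print("Needs attention:", guide(rates))
```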

What’s the point of that?
AI algorithms are often like a black box. They churn out a result, but we have no idea how they got there. This can give rise to questions, such as whether decisions are truly unbiased in situations like automated recruitment processes or credit approvals. Visual analytics can allay these doubts by providing a transparent picture of the decision process. Seeing is understanding!

Liu Ren: casting light into the dark recesses of AI.
Teamwork with Liang Gou, Panpan Xu, Nanxiang Li, and Michael Hofmann to generate more trust in AI — in this case, for an Industry 4.0 application.

Can you give us an example?
Let’s take the systems we’re working on with the Bosch Functional Testing Team for automated driving. When it comes to image recognition, these cars depend on AI. But they also have to contend with what we call “corner cases.” These are rarely occurring situations where several unusual conditions converge — for example, when a car faces a traffic light at a certain angle in inclement weather. What is needed for the system to distinguish a red light in those conditions? Visual analytics helps detect blind spots, supplement the data, and increase overall system accuracy.

What happens when it detects one of these shortcomings?
Our visual analytics approach uses a second AI that automatically fills the gaps in the data. The process is transparent and involves human interaction. In this way, the shortcomings of the first AI are remedied.

More than 10

different scenarios are usually defined to describe a traffic light.

This is how visual analytics spots gaps in an AI’s knowledge

AI hits the road: visual analytics and the traffic-light challenge
Each of these small rectangles groups between 10 and 1,000 examples of traffic lights with a similar appearance. The color of a rectangle indicates whether the AI correctly recognized the traffic lights in that group; red means it did not. But for automated driving, it is essential that the AI gets it right every time. The visual analytics process helps it do that.
One problem with AI is that there is just not enough training data available for each variation of a traffic light. If several unusual conditions converge, there is a certain risk of “corner cases” — AI might misinterpret a signal because it is not sufficiently represented in the training data set.
No rules without exceptions
“Even if you were to capture as many traffic lights as possible, you might still not have a complete picture,” Liu Ren says. “What’s more, you might still not be able to capture enough corner cases. And on top of all that, collecting all this data would also take a tremendous amount of effort.”
To address this issue, an AI approach called representation learning is used. Here, a second AI comes into play. In our example, this second AI teaches the system the characteristics that define a traffic light: each traffic light can be roughly represented as a combination of about a dozen variations, each of which people can easily understand. Our example assumes that four representative variations suffice to describe any traffic light: color, symbol, direction, and background.
Whenever AI fails to recognize a traffic light, the outcome can be summarized and visualized, allowing people to spot the instances of error at a glance. Using representation learning, a second AI can map these failure cases to the representative variations and help users to understand where and how the AI recognizer failed.
Every small rectangle in this visual interface summarizes anything from ten to a thousand cases where the same problem occurred. In this way, the user can see why the AI failed to recognize the traffic light.
More homework for the AI: new data can improve artificial intelligence.
The existing data can be used to generate ever-new variations of the traffic-light representations. This yields a landscape view of what these traffic lights look like (top), as well as of where the AI may potentially fail (the yellow and red bands in the lower landscape view). This is very helpful, since it allows the AI to be improved, either by generating new data or by collecting more data that resembles these unusual failure cases.
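As a rough illustration of the rectangle view in the slides above, the following Python sketch groups cases of similar appearance into cells and colors each cell by whether the recognizer handled all of its cases correctly. Real systems cluster on a learned embedding; here, simple binning of a two-dimensional feature vector stands in for that, and all numbers are made up.

```python
from collections import defaultdict

def build_cells(cases, grid=4):
    """cases: list of (feature_x, feature_y, correct) tuples with features in [0, 1)."""
    cells = defaultdict(list)
    for x, y, correct in cases:
        cells[(int(x * grid), int(y * grid))].append(correct)
    return cells

def cell_color(results):
    # Green only if every case in the cell was recognized correctly: for
    # automated driving, the AI has to get it right every time.
    return "green" if all(results) else "red"

cases = [
    (0.10, 0.20, True), (0.15, 0.22, True),   # easy cases, same cell
    (0.80, 0.70, False), (0.82, 0.74, True),  # a cell containing a failure
]
for position, results in sorted(build_cells(cases).items()):
    print(position, f"{len(results)} cases ->", cell_color(results))
```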

How does the second AI generate this data?
It leverages a method called representation learning. To stay with our traffic-light example: based on the training data, the second AI learns a representation in which each traffic light can be categorized into roughly a dozen cases and their variants, all of which are easy for people to understand. To keep our example simple, we’ll stick to just four variants — the traffic light’s color, the symbol in the signal light, the background, and the direction it’s pointing. These four representative categories can describe every traffic light. Using them, the second AI categorizes the training data and recognizes and classifies the error cases of the traffic-light detector, the first AI. In the event of a corner case, the second AI can also generate new training data, based on the four categories and human input, to further improve the performance of our traffic-light recognizer.
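As a toy illustration of this answer, the sketch below describes every traffic light by the four human-readable attributes and counts how often each combination appears in the training data; combinations that barely occur are candidate corner cases. In the real system a second AI learns this representation from data, whereas here the attribute values are fixed lists made up for the example.

```python
from collections import Counter
from itertools import product

COLORS      = ["red", "yellow", "green"]
SYMBOLS     = ["circle", "arrow", "pedestrian"]
BACKGROUNDS = ["sky", "buildings", "trees"]
DIRECTIONS  = ["head-on", "angled"]

def find_corner_cases(training_examples, min_count=5):
    """Return (color, symbol, background, direction) combinations with too few examples."""
    counts = Counter(training_examples)
    return [combo
            for combo in product(COLORS, SYMBOLS, BACKGROUNDS, DIRECTIONS)
            if counts[combo] < min_count]

# Toy training set: plenty of easy cases, almost no angled lights in front of trees.
training_examples = (
    [("red", "circle", "sky", "head-on")] * 40
    + [("green", "circle", "buildings", "head-on")] * 30
    + [("red", "arrow", "trees", "angled")] * 1
)
gaps = find_corner_cases(training_examples)
print(f"{len(gaps)} under-represented combinations, for example:", gaps[:3])
```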

Liu Ren. In the background, a visual analytics application for the retail trade.
Liu Ren has tomorrow’s trustworthy AI firmly in his sights.

Where do people come into this?
The way the data is visualized allows people to spot errors immediately. People can easily analyze these cases by associating them with the learned representations, and pinpoint the gaps in the training data for the AI. In a second step, the system either generates new data or offers guidance on how to collect real data to fill these gaps. This way, people and machine work together to increase the performance of our AI system.
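A rough sketch of that last step, using the same made-up traffic-light attributes as before: once a person has pinpointed a gap, the system either synthesizes new examples for that combination or issues a concrete collection task. The synthesis here is a trivial stand-in (randomly varied parameter sets), not the generative model the interview refers to.

```python
import random

def fill_gap(combination, can_synthesize, n_examples=100):
    """combination: a (color, symbol, background, direction) gap flagged by a person."""
    color, symbol, background, direction = combination
    if can_synthesize:
        # Generate labeled variations of the missing case; a real system would
        # render or augment images, here we only emit parameter dictionaries.
        return [
            {"color": color, "symbol": symbol, "background": background,
             "direction": direction, "brightness": round(random.uniform(0.3, 1.0), 2)}
            for _ in range(n_examples)
        ]
    # Otherwise, hand people a targeted data-collection task instead.
    return (f"Collect about {n_examples} images of a {color} {symbol} light, "
            f"seen {direction}, against a background of {background}.")

gap = ("red", "arrow", "trees", "angled")           # flagged in the visual interface
print(fill_gap(gap, can_synthesize=False))          # guidance for data collection
print(len(fill_gap(gap, can_synthesize=True)), "synthetic training descriptions generated")
```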

“In the industrial AI field, Bosch is a force to be reckoned with — the awards we have won show that.”
Liu Ren, chief scientist for intelligent HMI technologies and systems at the Bosch Research and Technology Center, Silicon Valley

Where else does Bosch use visual analytics?
A recently developed algorithm called Tensor Partition Flow, or TPFlow for short, allows retailers to analyze customer traffic data. Urban traffic flows can also be analyzed more effectively, allowing ride-sharing services to be dispatched properly and to make the most of the available capacity. TPFlow won the best paper award at the 2018 IEEE VISUALIZATION conference, the leading conference on big data and visual analytics. We have also won an award for a solution that spots bottlenecks on Industry 4.0 production lines. Here, we are working closely with the Bosch Center for Artificial Intelligence to make a large-scale rollout possible.

What will future visual analytics applications be able to do?
Their objective will still be to make sense of the black box that is AI. Without this understanding, people will not develop trust in AI – trust that will be a crucial quality feature in tomorrow’s connected world. That’s why we at Bosch want to develop safe, robust, and explainable AI products. I firmly believe the visual analytics approach that keeps people in the loop will continue to play a key role here.

Profile


Liu Ren

Chief scientist for HMI at the Bosch Research and Technology Center

Demand for transparent and understandable AI is on the rise. Algorithms have to be explainable.

Dr. Liu Ren is VP and Chief Scientist for Intelligent HMI Technologies and Systems at the Bosch Research and Technology Center in Silicon Valley. He is also the global head overseeing AI research for the human-machine collaboration program, with research teams located in Silicon Valley and Pittsburgh in the United States and in Renningen, Germany. Liu received his PhD and MSc degrees in computer science from Carnegie Mellon University and holds a BSc degree in computer science from Zhejiang University in Hangzhou, China.
