
Assisting the technical workforce with Neuro-symbolic AI

Bosch Research Blog | Post by Alessandro Oltramari, 2022-02-28

Alessandro Oltramari works as a researcher for Bosch Corporate Research and focuses on neuro-symbolic reasoning for decision support systems.

Neuro-symbolic AI methods can combine machine-generated data and human technical know-how into an integrated knowledge corpus, ultimately generating recommendations that domain experts can use in the workplace.

Supporting human decisions with Neuro-symbolic AI

Behind the products and services that populate the Bosch universe lies the passionate work of subject-matter experts, researchers, and engineers. Added to this, AI-based technologies have become increasingly relevant when it comes to assisting our highly specialized workforce in making technical decisions, reflecting the general trend in industry to adopt AI at different stages of the product lifecycle – from design to commercialization.

It is crucial in this context to have a framework that guarantees that AI-based decision support can evolve effectively and efficiently, and in which human technical expertise, typically accumulated over years of hands-on experience and constant learning, can be coherently aggregated and leveraged together with computational algorithms. Failing to address this requirement could lead to an erosion of trust in AI, which would defeat one of the core purposes of this technology, i.e. to foster human-machine collaboration. The integration of symbolic and sub-symbolic AI methods, which we refer to as Neuro-symbolic AI, can assume the role of the above-mentioned framework: Neuro-symbolic AI can harvest expert know-how, combine it with semantically structured data, and ultimately transform this knowledge corpus into actionable recommendations that experts can follow to make decisions in a timely manner. This transformation occurs through neuro-symbolic reasoning, which emerges from the interplay between rule-based inference, a cognitively adequate way to formalize expert know-how, and machine learning, which can rapidly extract insights from high volumes of data – a process that would otherwise be time consuming and require extraordinary manual effort (e.g., finding errors, recurring patterns, etc.).
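
To make this interplay concrete, here is a minimal, illustrative sketch in Python. All names, the risk metric, and the 0.05 threshold are assumptions for illustration, not Bosch's implementation: an expert heuristic is formalized as a symbolic rule, while a learned model supplies an input that would otherwise require manual analysis of large amounts of data.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """Symbolic encoding of expert know-how: a named condition plus a conclusion."""
    name: str
    condition: Callable[[dict], bool]
    conclusion: str

def learned_risk_score(measurements: list[float]) -> float:
    """Stand-in for a machine-learned estimate distilled from high volumes of data."""
    return sum(measurements) / len(measurements)  # placeholder for model.predict(...)

# Hypothetical expert rules; the 0.05 threshold is an assumption for illustration.
rules = [
    Rule("elevated_risk", lambda facts: facts["risk_score"] > 0.05, "flag for expert review"),
    Rule("nominal", lambda facts: facts["risk_score"] <= 0.05, "proceed"),
]

# Sub-symbolic step produces a fact; the symbolic step reasons over it.
facts = {"risk_score": learned_risk_score([0.02, 0.03, 0.01])}
for rule in rules:
    if rule.condition(facts):
        print(f"{rule.name} -> {rule.conclusion}")  # prints: nominal -> proceed
```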

Use case: Assisting specialists in emission calibration

Project Feasibility Assessment (PFA) in the ECU (Electronic Control Unit) calibration of powertrain systems can be defined as the comprehensive process of establishing whether a motor vehicle can achieve a target emission standard. PFA typically covers conventional combustion engines (gasoline, diesel, or alternative fuels), as well as hybrid powertrain configurations. This process is performed by experts, who are tasked with aggregating and analyzing information from various sources, including emission measurements, vehicle data, etc., and with evaluating engines based on different requisites and functionalities. Experts in emission calibration must understand correlations and interdependencies across vast troves of heterogeneous data, making PFA a challenging task.

Accordingly, we developed a Neuro-symbolic-AI-based decision support solution, which includes computational rules that model how emission calibration experts decide whether a project is feasible, and machine learning algorithms that are used to cluster projects based on similarity features and to predict missing measurements. Our approach provides a coherent and compact semantic representation of the data at scale through a knowledge graph-based integration pipeline. The integration between rule-based reasoning and machine learning is governed by different factors, among which information completeness is key: for instance, when a new engine is evaluated but the emissions for specific pollutants are unavailable, the system first fills the gaps by predicting the missing measurement values and then applies context-relevant rules modeled on the experts' heuristics. It is important to point out that this Neuro-symbolic-AI-based decision support system is dynamic: it can be retrained to account for new data and rules, where the former become naturally available as combustion and car technology evolve, and where the latter reflect the need for experts to update their rules as new standards and, consequently, new public policies are put in place.
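
The sketch below illustrates the flow just described under simplified assumptions; the data fields, the clustering setup, and the NOx limit are hypothetical and stand in for the actual knowledge-graph pipeline and models. Projects are clustered on similarity features, a missing pollutant measurement is imputed from the most similar cluster, and an expert-style rule then judges feasibility.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy, flattened view of calibration projects (values are invented for illustration).
projects = [
    {"id": "P1", "displacement_l": 1.6, "mass_kg": 1300, "nox_mg_per_km": 48.0},
    {"id": "P2", "displacement_l": 2.0, "mass_kg": 1550, "nox_mg_per_km": 62.0},
    {"id": "P3", "displacement_l": 1.5, "mass_kg": 1280, "nox_mg_per_km": 45.0},
    {"id": "P4", "displacement_l": 2.2, "mass_kg": 1600, "nox_mg_per_km": 66.0},
]
new_project = {"id": "P5", "displacement_l": 1.6, "mass_kg": 1320, "nox_mg_per_km": None}

# Cluster existing projects on similarity features.
features = np.array([[p["displacement_l"], p["mass_kg"]] for p in projects])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# Information completeness governs the flow: impute the missing value first, then reason.
if new_project["nox_mg_per_km"] is None:
    cluster = kmeans.predict([[new_project["displacement_l"], new_project["mass_kg"]]])[0]
    peers = [p for p, c in zip(projects, kmeans.labels_) if c == cluster]
    new_project["nox_mg_per_km"] = float(np.mean([p["nox_mg_per_km"] for p in peers]))

# Hypothetical expert rule: feasible if NOx stays below an assumed target limit.
NOX_LIMIT = 60.0
verdict = "feasible" if new_project["nox_mg_per_km"] <= NOX_LIMIT else "not feasible"
print(new_project["id"], round(new_project["nox_mg_per_km"], 1), verdict)
```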

Following the Bosch code of ethics for AI, our Neuro-symbolic AI solution for PFA aims to be trustworthy, by leveraging large data repositories, expert rules, and well-established machine learning techniques; robust, by testing hybrid reasoning across ECU configurations, with historical records as ground truth; and explainable, by making explicit the reasoning methods and details behind each recommendation.
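
One simple way to picture the explainability aspect is to attach provenance to every recommendation; the structure below is an assumed sketch, not the system's actual output format. Each verdict carries the rule that fired and whether the inputs were measured or predicted.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    verdict: str
    fired_rule: str                                # the symbolic rule that produced the verdict
    evidence: dict = field(default_factory=dict)   # input values and their provenance

rec = Recommendation(
    verdict="feasible",
    fired_rule="nox_mg_per_km <= 60.0",
    evidence={"nox_mg_per_km": {"value": 46.5, "source": "predicted (cluster mean)"}},
)
print(f"{rec.verdict} because [{rec.fired_rule}] given {rec.evidence}")
```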

What are your thoughts on this topic?

Please feel free to share them or to contact me directly.

Research Expert Alessandro Oltramari standing in his office at Bosch Corporate Research Pittsburgh, USA.

Author: Alessandro Oltramari

Alessandro joined Bosch Corporate Research in 2016, after working as a postdoctoral fellow at Carnegie Mellon University. At Bosch, he focuses on neuro-symbolic reasoning for decision support systems. Alessandro’s primary interest is to investigate how semantic resources can be integrated with data-driven algorithms to help humans and machines make sense of the physical and digital worlds. Alessandro holds a PhD in Cognitive Science from the University of Trento (Italy).

LinkedIn

ResearchGate

Google Scholar
