
Limitations and potential of artificial intelligence

Discussing AI with the Bosch-endowed professor at the University of Tübingen


2019-02-08

What is artificial intelligence capable of, and what not? Matthias Hein researches machine learning at the University of Tübingen and says: AI has to learn the limits of its own knowledge.

“I know that I know nothing” is how the thinkers of classical antiquity described the limits of mankind’s striving for knowledge. More than 2,000 years later, Matthias Hein is trying to transfer this human ability to recognize the limits of one’s own knowledge to machines, and thereby make them smarter. The computer scientist has worked on artificial intelligence for more than two decades. He knows that society’s trust in learning machines, neural networks and deep learning depends largely on how AI arrives at its decisions. “The biggest current challenges are the robustness, the explainability and the fairness of decision-making processes,” says Hein.

Minor errors, grave consequences


The scientist has a striking example of the lack of robustness, that is, the error-proneness, of decision-making processes: automatic image recognition in road traffic. “If there are little stickers on a stop sign, artificial intelligence can be duped into thinking it’s a give way sign.” Such errors can have grave consequences. The same goes for medicine, where, despite all technological advances, AI errors carry the risk of false diagnoses. Whenever there is uncertainty, the risk increases if the AI stubbornly sticks to its erroneous decision instead of signaling “I don’t know”.
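The principle of an AI that can say “I don’t know” can be illustrated with a reject option: a classifier only answers when its confidence exceeds a threshold and otherwise abstains. The following is a minimal sketch in Python, not Hein’s actual method; the threshold value and the example probabilities are invented for illustration.

import numpy as np

def predict_or_abstain(probs, threshold=0.9):
    """Return the predicted class, or None ("I don't know") if the model
    is not confident enough. `probs` is a vector of class probabilities."""
    probs = np.asarray(probs)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None  # abstain instead of insisting on an uncertain answer
    return best

# Hypothetical softmax outputs of a traffic-sign classifier:
print(predict_or_abstain([0.97, 0.02, 0.01]))  # confident -> class 0
print(predict_or_abstain([0.55, 0.40, 0.05]))  # uncertain -> None ("I don't know")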

Calling refusals into question

Also important: explainability. “If an AI can explain why it has arrived at a particular decision, then the results are easier to understand. Errors occurring within the process are therefore easier to identify.” Hein points to credit and job applications: “A ‘no’ without a reason does not create trust.” If, however, the machine indicates that the refusal comes down to certain parameters, humans can perhaps adjust the intelligent algorithm, or at least understand the decision-making process, which in turn boosts the credibility of artificial intelligence.
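One simple way such an explanation could look, sketched here under assumptions: a linear scoring model whose refusal is broken down into the contribution of each parameter. The feature names, weights and input values below are hypothetical and only illustrate the idea of pointing to the parameters behind a “no”.

import numpy as np

# Hypothetical linear credit-scoring model; weights and features are invented
feature_names = ["income", "debt_ratio", "years_employed", "missed_payments"]
weights = np.array([0.8, -1.5, 0.4, -2.0])
bias = -0.2

def explain_decision(x):
    """Approve if the score is positive; otherwise list the parameters
    that contributed most strongly to the refusal."""
    contributions = weights * x
    score = contributions.sum() + bias
    if score >= 0:
        return "approved", []
    order = np.argsort(contributions)  # most negative contributions first
    reasons = [feature_names[i] for i in order if contributions[i] < 0]
    return "refused", reasons

decision, reasons = explain_decision(np.array([0.3, 0.9, 0.1, 1.0]))
print(decision, "- main factors:", reasons)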

“We have to paint a realistic picture of what the technology can do and of what it can’t. Above all what it can’t.”
Matthias Hein

Explainability also goes hand in hand with a better understanding of fairness. Hein, for example, wants to train artificial intelligence “so that it is equally inclined to decide in favor of a woman or a man whenever people apply for bank loans.” Social discrimination must not be built into machines. Hein is naturally aware that this goal depends heavily on the data the algorithm uses for its calculations. If the data, for example, reflects a fundamental preference for male applicants, even intelligent systems will be unable to counteract it. “However, I think it is easier to configure a machine’s decision-making process to make it fair than it is to rid people of bias,” he says. One thing is also clear: society has to keep debating what exactly fairness means.
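One common way to check this kind of equal treatment, shown here only as a rough sketch, is the demographic parity gap: the difference in approval rates between two groups. The article does not say which fairness criterion Hein uses, and the toy data below is invented.

import numpy as np

def approval_rate_gap(decisions, group):
    """Difference in approval rates between two groups (demographic parity gap).
    `decisions` holds 1 for approved / 0 for refused; `group` holds "f" or "m"."""
    decisions = np.asarray(decisions)
    group = np.asarray(group)
    rate_f = decisions[group == "f"].mean()
    rate_m = decisions[group == "m"].mean()
    return rate_f - rate_m

# Invented toy data: a gap near zero means women and men are approved equally often
decisions = [1, 0, 1, 1, 0, 1, 0, 1]
group     = ["f", "f", "f", "f", "m", "m", "m", "m"]
print(approval_rate_gap(decisions, group))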

Transparency is the goal

In Matthias Hein’s view, the discussion about the future of artificial intelligence has to be conducted publicly: “It’s important that people understand what’s behind artificial intelligence.” For Hein, speculating about an artificial superintelligence is science fiction; reality holds other challenges and solutions in store for the physicist: “Artificial intelligence can have a positive influence on society. That’s why we have to make it more transparent.” And transparency also means being able to say: “I don’t know.”

Profile


Matthias Hein, 42

University of Tübingen

“The notion of contributing to society is something that motivates me.”

Matthias Hein has been working at the University of Tübingen as a Bosch-endowed professor in machine learning since 2018. From 2002 to 2007, he was part of a research group at the Max Planck Institute for Biological Cybernetics. This was followed by research into machine learning as a professor of mathematics and computer science at Saarland University in Saarbrücken.

Summary

To boost artificial intelligence’s credibility within society, and therefore people’s trust in it, Matthias Hein wants to make machine decision-making processes robust, explainable and fair. He wants to show how machines think, and thereby allay people’s skepticism.
