“AI will never replace a human being”

The very specific strengths of artificial intelligence can help improve decision-making

Everyone, it seems, is talking about artificial intelligence. Yet people still have reservations, particularly regarding data protection and the perceived threat to jobs. What’s the significance of AI for Fraunhofer IOSB?

Rauschenbach: Well, firstly, I should say that artificial intelligence is not all that new. We’ve been using AI since the 1990s. It’s what power companies use to predict the consumption of tens of thousands of customers, and what automakers use to ensure quality in vehicle production. But AI doesn’t replace an actual person in these or any other areas. When we use AI in our projects, it is to support human decision-making.

In which areas is AI already better than humans?

Rauschenbach: The human brain is extremely good at pattern recognition. For example, we have no problem distinguishing a cup of coffee from a glass of water. The picture changes, however, when it comes to recognizing patterns in extremely large datasets, also known as big data. Artificial intelligence is far better at this than humans. At Fraunhofer IOSB, we use pattern recognition to analyze so-called PMU data (see overleaf: Enhancing grid stability with AI), where every single sensor generates 150 datasets every second. The aim is to recognize malfunctions in the power grid before they even occur. This is the kind of application in which we can really exploit the strengths of AI. Other areas currently being investigated by our researchers include predictive maintenance for Industrie 4.0, automated detection of physical assaults, and object recognition for autonomous underwater vehicles. In other words, all of these involve AI-based systems that assist with the human decision-making process.
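
To illustrate the kind of pattern recognition described here, the following is a minimal sketch of anomaly detection on PMU-style measurements. The frequency and voltage features, the synthetic data, and the choice of an off-the-shelf isolation forest are all illustrative assumptions; the interview does not detail the actual features or models used at Fraunhofer IOSB.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic PMU-style measurements: 150 samples per second per sensor,
# each holding, say, a frequency reading (Hz) and a voltage magnitude (p.u.).
# These features are illustrative assumptions, not the institute's actual setup.
rng = np.random.default_rng(42)
normal = rng.normal(loc=[50.0, 1.0], scale=[0.01, 0.005], size=(150 * 60, 2))
faulty = rng.normal(loc=[49.7, 0.93], scale=[0.05, 0.02], size=(20, 2))

# Fit on a window of normal operation, then flag samples that deviate from it.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

flags = detector.predict(faulty)  # -1 = anomalous, 1 = normal
print(f"{(flags == -1).sum()} of {len(faulty)} samples flagged as anomalous")
```

In a real deployment, a model like this would run continuously over the sensor stream, raising an alert for a human operator to assess rather than acting on its own, in keeping with the decision-support role described above.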

Where is research still needed? And what are the drawbacks of AI compared to conventional statistical analysis?

Rauschenbach: One of the big challenges we face with a number of deep-learning processes is that it is impossible to explain with absolute certainty just how they arrive at their results. By contrast, we understand very well how a statistically based predictive model, such as the similar days method, generates its results. A lot of AI is a bit like a black box: if the result isn’t quite right, it’s very difficult to say what has gone wrong. That’s why “explainable AI” (XAI) is such a big research topic right now. And it’s one we’re looking at very closely here, too.
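
For contrast with a black-box model, here is a minimal sketch of the similar days method mentioned above, under the assumption that each day is described by simple features such as weekday and temperature. Unlike a deep network, every step of the forecast, from neighbor selection to averaging, can be inspected and explained.

```python
import numpy as np

def similar_days_forecast(history, target_features, k=5):
    """Forecast a 24-hour load profile as the mean profile of the k
    historical days whose features are closest to the target day's.
    This is the similar days method in its simplest form; production
    systems weight features such as weekday type and temperature."""
    features = np.array([day["features"] for day in history])
    profiles = np.array([day["profile"] for day in history])
    # Distance between the target day's features and every past day's.
    dist = np.linalg.norm(features - np.asarray(target_features), axis=1)
    nearest = np.argsort(dist)[:k]  # indices of the k most similar days
    return profiles[nearest].mean(axis=0)

# Hypothetical usage: a year of history with features [weekday, temperature].
rng = np.random.default_rng(1)
history = [{"features": [d % 7, 10 + 10 * rng.random()],
            "profile": 50 + 50 * rng.random(24)}
           for d in range(365)]
print(similar_days_forecast(history, target_features=[2, 15.0]).round(1))
```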

 

Prof. Dr.-Ing. habil. Thomas Rauschenbach is director of the Fraunhofer IOSB-AST and was the spokesperson for the Artificial Intelligence and Autonomous Systems business unit in 2019/2020.

 

Artificial Intelligence and Autonomous Systems

Learn more about the application areas and technologies of our Artificial Intelligence and Autonomous Systems business unit.