“Our XAI tools shed light on the black box”

Explainable Artificial Intelligence (XAI): an important tool for AI systems engineering and for the predictable, systematic use of AI in demanding applications

Mr. Frey, why is the concept of explainability such a big issue when it comes to AI methods?

Christian Frey: Although AI algorithms such as deep learning methods often deliver impressively good results, it is usually hard to understand how and why they produce a particular result. This is the gap XAI aims to fill: it stands for a group of methods that, in a sense, shed light on the black box and make the decisions of AI models easier to interpret. This is important not only for acceptance – that is, for humans to trust and accept the decisions made by an AI – but also as a tool for the development phase and the subsequent life cycle of AI components in the context of AI systems engineering, for example to track down the causes of errors.

Do you have any concrete examples?

Frey: We use a variety of methods, depending on whether explanations are needed for a specific case or for the model as a whole, how much is known about the underlying AI model, and so on. Semantic XAI, for example, is based on knowledge models and can generate textual reasoning along the lines of: “This object is very probably a car because I have detected two wheels, headlights, exterior mirrors and a license plate in the image.” Or, in the case of AI-assisted route planning for autonomous systems, XAI methods visualize the decisions that lead a robot system along the best path through unknown terrain in a way humans can readily understand.
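To make the idea of knowledge-based textual reasoning more concrete, the following is a minimal sketch of how such an explanation could be assembled from part detections. The class names, confidence threshold and the tiny "knowledge model" are illustrative assumptions for this sketch; they are not the institute's actual Semantic XAI implementation.

```python
# Illustrative sketch only: a toy knowledge-based explainer that turns part
# detections into a textual justification. Names and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    part: str          # detected component, e.g. "wheel"
    confidence: float  # detector confidence in [0, 1]

# Toy knowledge model: which parts support the hypothesis "car",
# and how many of each are typically expected.
CAR_EVIDENCE = {"wheel": 2, "headlight": 2, "exterior mirror": 2, "license plate": 1}

def explain_car_hypothesis(detections: list[Detection], threshold: float = 0.5) -> str:
    """Turn part detections into a human-readable justification."""
    found = [d for d in detections if d.part in CAR_EVIDENCE and d.confidence >= threshold]
    if not found:
        return "No evidence for the hypothesis 'car' was found in the image."
    # Count detected parts and compute a crude evidence coverage score.
    seen: dict[str, int] = {}
    for d in found:
        seen[d.part] = seen.get(d.part, 0) + 1
    coverage = sum(min(n, CAR_EVIDENCE[p]) for p, n in seen.items()) / sum(CAR_EVIDENCE.values())
    parts_text = ", ".join(f"{n} {p}(s)" for p, n in seen.items())
    certainty = "very probably" if coverage >= 0.7 else "possibly"
    return f"This object is {certainty} a car because I detected {parts_text} in the image."

print(explain_car_hypothesis([
    Detection("wheel", 0.94), Detection("wheel", 0.91),
    Detection("headlight", 0.88), Detection("exterior mirror", 0.75),
    Detection("license plate", 0.97),
]))
```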

What specific range of services does Fraunhofer IOSB offer in this area?

Frey: As part of an internal research and development program, we have been building up targeted XAI expertise for more than two years. This expertise is embedded in our broad applied AI knowledge across sectors ranging from industrial production, automotive and medicine to energy. One of the results is an XAI toolbox that is available immediately for analyzing data, debugging and explaining the predictions of any black-box model. This means we can support virtually any type of AI application project and contribute significantly to developing not just “pretty” proof-of-principle demonstrators, but robust, practical and accepted production-ready solutions.
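To illustrate what "explaining the predictions of any black-box model" can mean in practice, here is a minimal sketch of one standard model-agnostic technique, permutation feature importance, which needs nothing more than prediction access to the model. It is a generic textbook method shown for context, not the IOSB toolbox itself; the function and parameter names are assumptions.

```python
# Illustrative sketch: permutation feature importance for an arbitrary black-box model.
# Only `predict` access is required; names/parameters here are assumptions.

import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Importance of feature j = average drop in `metric` (higher is better)
    when column j of X is randomly shuffled, breaking its link to the target."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # destroy feature j's information
            drops.append(baseline - metric(y, predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances

# Usage with any fitted model exposing .predict, e.g. a scikit-learn regressor:
#   imp = permutation_importance(model.predict, X_val, y_val,
#                                metric=lambda y, p: -np.mean((y - p) ** 2))
```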


Dipl.-Ing. Christian Frey is spokesperson for the Artificial Intelligence and Autonomous Systems business unit and head of the Systems for Measurement, Control and Diagnosis (MRD) department.

Annual report »We make AI fly«

The above interview is taken from our Annual report 2021/2022, which focuses on Fraunhofer IOSB's AI application expertise.

