Mr. Frey, why is the concept of explainability such a big issue when it comes to AI methods?
Christian Frey: Although AI algorithms such as deep learning methods often deliver impressively good results, it is usually hard to understand how and why they produce those results. Explainable AI (XAI) aims to close this gap: the term covers a group of methods that, in a sense, shed light on the black box and make the decisions of AI models easier to interpret. This is important not only for acceptance – that is, for humans to trust and accept the decisions an AI makes – but also as a tool during the development phase and the subsequent life cycle of AI components in the context of AI systems engineering, for example to track down the causes of errors.
Do you have any concrete examples?
Frey: We use a variety of methods, depending on whether explanations are required only for a specific case or in general, how much is known about the underlying AI model, and so on. Semantic XAI, for example, is based on knowledge models and can generate textual reasoning along the lines of: “This object is very probably a car because I have detected two wheels, headlights, exterior mirrors and a license plate in the image.” Or, in the case of AI-assisted route planning for autonomous systems, XAI methods visualize the decisions that lead a robot system along the best path through unknown terrain in a way that humans can clearly understand.
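The semantic XAI idea described here can be illustrated with a minimal sketch: a small knowledge model maps concepts to the parts one expects to see, and a detected-parts set is turned into a textual justification. All names, thresholds and the knowledge model below are illustrative assumptions, not the IOSB implementation.

```python
# Hypothetical semantic-XAI sketch: explain a classification by matching
# detected object parts against a tiny knowledge model.

KNOWLEDGE_MODEL = {
    "car": {"wheel", "headlight", "exterior mirror", "license plate"},
    "bicycle": {"wheel", "handlebar", "pedal"},
}

def explain(detected_parts):
    """Pick the concept whose expected parts best match the detections
    and phrase the result as a human-readable justification."""
    def coverage(concept):
        expected = KNOWLEDGE_MODEL[concept]
        return len(expected & detected_parts) / len(expected)

    best = max(KNOWLEDGE_MODEL, key=coverage)
    evidence = sorted(KNOWLEDGE_MODEL[best] & detected_parts)
    confidence = "very probably" if coverage(best) > 0.7 else "possibly"
    return (f"This object is {confidence} a {best} "
            f"because I detected: {', '.join(evidence)}.")

print(explain({"wheel", "headlight", "exterior mirror", "license plate"}))
```

The point of the sketch is that the explanation is grounded in symbolic evidence (the detected parts) rather than in raw network activations, which is what makes it readable for humans.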
What specific range of services does the IOSB offer in this area?
Frey: In the context of an internal research and development program, we have focused on building up targeted XAI expertise for more than two years. This expertise is embedded in our comprehensive applied AI knowledge in sectors ranging from industrial production, automotive and medicine to energy. One of the results is an XAI toolbox that is ready for immediate use in analyzing data, debugging, and explaining the predictions of any black-box model. This means we can support virtually any type of AI application project and contribute significantly to developing not just “pretty” proof-of-principle demonstrators, but robust, practical and accepted production solutions.
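A common building block of such model-agnostic ("black-box") explanation toolboxes is perturbation-based feature importance: perturb one input feature at a time and measure how strongly the prediction reacts. The sketch below is a generic illustration of that technique under assumed names and a toy model, not the actual IOSB toolbox.

```python
# Perturbation-based feature importance for an arbitrary black-box model.
import random

def black_box_model(x):
    # Stand-in for any opaque model: here, feature 0 dominates and
    # feature 2 is irrelevant.
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def feature_importance(model, x, n_samples=200, noise=1.0, seed=0):
    """Average absolute change in the prediction when each feature
    is perturbed with Gaussian noise, one feature at a time."""
    rng = random.Random(seed)
    base = model(x)
    importances = []
    for i in range(len(x)):
        total = 0.0
        for _ in range(n_samples):
            perturbed = list(x)
            perturbed[i] += rng.gauss(0.0, noise)
            total += abs(model(perturbed) - base)
        importances.append(total / n_samples)
    return importances

imp = feature_importance(black_box_model, [1.0, 2.0, 3.0])
# imp[0] comes out largest, imp[2] is zero: the explanation recovers
# which inputs the model actually relies on, without looking inside it.
```

Because the method only queries the model's predictions, it works for any model type, which is what "explaining the predictions of any black-box model" refers to in practice.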
Dipl.-Ing. Christian Frey is spokesperson for the Artificial Intelligence and Autonomous Systems business unit and head of the Systems for Measurement, Control and Diagnosis (MRD) department.