Explainable and Transparent AI

In many of today's AI applications, developers, users, and those affected by AI results do not know why or how a particular result was produced. When an AI application delivers an incorrect result, it is often unclear how to avoid it in the future. This problem particularly affects, but is not limited to, machine learning methods such as deep learning.

Explainable and transparent AI (also known as XAI, explainable artificial intelligence) refers to a family of methods that address this problem. The aim is to make the results of AI applications comprehensible and, where necessary, to reveal how the AI itself works.

Explainability is essential for the acceptance of AI applications and for their responsible, legally compliant use, including adherence to fundamental ethical principles and assurance of their trustworthiness. Not least, the EU AI Act imposes trustworthiness requirements on certain AI applications. Understanding an AI application also enables targeted optimization and adaptation of the underlying models and processes.

At Fraunhofer IOSB, we have been researching not only a wide range of AI methods for many years, but also XAI methods and approaches that explain AI decisions. By bringing together different departments with their respective methods, expertise, and application contexts, we can address the cross-cutting nature of explainable AI comprehensively and holistically, and respond to individual customer requirements in a targeted way.

Project/Product

XAI-Toolbox

Our XAI toolbox enables rapid evaluation of various XAI methods and can be used, for example, for data analysis, debugging, and explaining the predictions of arbitrary black-box models.
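One classic model-agnostic technique for explaining arbitrary black-box predictors is permutation feature importance: shuffle a single input feature and measure how much the model's performance degrades. The sketch below is an illustrative example only, not code from the XAI toolbox itself; the predictor and all names are hypothetical.

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic importance: how much does shuffling one feature hurt the metric?"""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the j-th feature's information
            drops.append(metric(y, predict(Xp)) - baseline)
        importances[j] = np.mean(drops)
    return importances

# Stand-in for an opaque model: the target depends on feature 0 only.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0]
predict = lambda X: 2.0 * X[:, 0]
mse = lambda y, p: np.mean((y - p) ** 2)

imp = permutation_importance(predict, X, y, mse)
print(imp)  # feature 0 dominates; features 1 and 2 are irrelevant
```

Because the method only calls `predict`, it works for any model, from linear regression to deep networks, which is what makes such techniques attractive for a general-purpose toolbox.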

Science-Journal visIT

Explainable AI in Practice

The issue reports on a series of practical projects and applications of explainable AI, with a special focus on the XAI toolbox.

Research topic

Semantic XAI

While machine learning methods are purely data-driven, humans also draw on knowledge about semantic relationships to gain insights, check their plausibility, and justify decisions.

By supplementing machine learning approaches with semantic knowledge models, Semantic XAI makes AI models and their results more comprehensible by taking semantic context into account. Semantic interaction techniques additionally facilitate the exploration of models.
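As a hypothetical mini-example of this idea (not Fraunhofer's implementation; the concept grouping below is invented for illustration), per-feature attributions produced by any XAI method can be aggregated into named concepts taken from a semantic knowledge model, so that explanations are phrased in domain terms rather than raw feature indices:

```python
import numpy as np

# Hypothetical semantic knowledge model: raw features grouped into named concepts.
CONCEPTS = {
    "geometry": [0, 1],  # e.g. length, width
    "material": [2],     # e.g. density
}

def concept_attribution(feature_attributions, concepts):
    """Aggregate per-feature attribution scores into semantic concepts."""
    a = np.asarray(feature_attributions)
    return {name: float(np.sum(np.abs(a[idx]))) for name, idx in concepts.items()}

# Attributions for one prediction (e.g. from permutation importance or SHAP-like methods).
attr = np.array([0.5, 0.3, 0.1])
scores = concept_attribution(attr, CONCEPTS)
print(scores)  # "geometry" should dominate
```

The resulting concept-level scores can then be checked for plausibility against domain knowledge, which is the kind of semantic context the paragraph above describes.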