With many of today's AI applications, developers, users, and those affected by the results often do not know why or how a particular result was produced. When an AI application delivers an incorrect result, it is often unclear how to avoid this in the future. This problem particularly affects, but is not limited to, machine learning methods such as deep learning.
Explainable and transparent AI, known as XAI (explainable artificial intelligence), refers to a group of methods that address this problem. The aim is to make the results delivered by AI applications comprehensible and, where necessary, to shed light on how the AI itself works.
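To make this concrete, the sketch below shows one widely used model-agnostic XAI technique, permutation feature importance, applied to a synthetic classifier. It is an illustrative example only; the scikit-learn setup, the synthetic data, and the model choice are our assumptions, not a method described in this text:

```python
# Illustrative sketch of a common model-agnostic XAI technique:
# permutation feature importance. Synthetic data and model choice
# are assumptions for demonstration purposes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only 2 of which are informative.
X, y = make_classification(
    n_samples=500, n_features=5, n_informative=2, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# the larger the drop, the more the model relied on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Techniques of this kind explain a model's behavior from the outside, without requiring access to its internals, which is one reason they are applicable across very different AI methods.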
Explainability is essential for the acceptance of AI applications and for their responsible, legally compliant use, including adherence to fundamental ethical principles and assurance of their trustworthiness. Not least, the EU AI Act sets requirements for the trustworthiness of certain AI applications. Understanding AI applications also enables targeted optimization and adaptation of the underlying models and processes.
At Fraunhofer IOSB, we have for many years been researching not only a wide range of AI methods but also XAI methods and approaches that explain AI decisions. By bringing together different departments with their respective methods, expertise, and application contexts, we can address the cross-cutting nature of explainable AI comprehensively and holistically, and respond precisely to individual customer requirements.