AI methods, especially those from the field of deep learning, deliver excellent results in many application areas and are used in a wide variety of ways. However, they have the disadvantage that the path by which a result was reached is often not comprehensible to the user. Explainable and transparent AI (commonly abbreviated as XAI) refers to a group of methods whose goal is to make the decisions and results of AI models more interpretable and comprehensible for users. An explanation can consist, for example, of marking those elements of the input data that had the greatest influence on the result of a specific AI model.
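The attribution idea mentioned above can be illustrated with a simple occlusion test: each input element is replaced by a neutral baseline value in turn, and the resulting change in the model's output measures that element's influence. The following is a minimal sketch in plain Python; the linear model and its weights are illustrative assumptions only, standing in for an arbitrary black-box model.

```python
def occlusion_importance(model, x, baseline=0.0):
    """Score each input feature by how much the model's output
    changes when that feature is replaced with a baseline value."""
    reference = model(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline  # "hide" one input element
        scores.append(abs(reference - model(occluded)))
    return scores

# Toy stand-in for a black-box model (weights are illustrative only).
weights = [0.1, 2.0, -0.5, 0.0]
model = lambda x: sum(w * v for w, v in zip(weights, x))

x = [1.0, 1.0, 1.0, 1.0]
scores = occlusion_importance(model, x)
print(scores)  # the second feature has the largest influence
```

The same idea underlies occlusion-based saliency maps for images, where patches of pixels rather than single values are masked out.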
Traceable decision-making increases user trust in AI methods and enables better analysis of failure cases. XAI provides clearer insight into AI models for end users who want to understand decisions and results, for developers who want to embed and parameterize AI models in their applications, and for scientists who want to develop and improve their AI models.
XAI methods are being researched and developed at Fraunhofer IOSB across a wide range of AI applications.