Explainable and transparent AI

AI methods, especially deep learning methods, deliver excellent results in many application areas and are used in a wide variety of ways. However, they have the disadvantage that the way these results are reached is often not comprehensible to the user. Explainable and transparent AI (often abbreviated as XAI) refers to a group of methods whose goal is to make the decisions and results of AI models more interpretable and comprehensible for users. An explanation can consist, for example, of marking those elements of the input data that had the greatest influence on the result of a specific AI model.
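As a minimal sketch of this idea, the following gradient-based saliency example marks the input elements with the greatest local influence on a classifier's decision. The model and input here are placeholders for illustration, not an IOSB implementation; any differentiable classifier can be analyzed in the same way.

```python
import torch
import torch.nn as nn

# Placeholder classifier for illustration; any differentiable model works.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))
model.eval()

x = torch.rand(1, 32, 32, requires_grad=True)  # one 32x32 input "image"
logits = model(x)
predicted = logits.argmax(dim=1).item()

# Gradient of the predicted class score w.r.t. the input: large absolute
# values mark the input elements with the greatest local influence.
logits[0, predicted].backward()
saliency = x.grad.abs().squeeze(0)

print(saliency.shape)     # torch.Size([32, 32]) relevance map over the input
print(saliency.argmax())  # flat index of the most influential pixel
```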

Traceable decision-making increases user confidence in AI methods and enables improved analysis of error cases. XAI provides clearer insight into AI models for end users who want to understand decisions and results, for developers who want to embed and parameterize AI models in their applications, and for scientists who want to develop and improve their AI models.

XAI methods are being researched and developed at Fraunhofer IOSB across a wide range of AI applications.

Fields of application

Explainability of object detection and classification decisions in large volumes of video data is currently being developed for the fine-grained classification of vehicles. In many cases, interactive workflows are implemented with evaluation personnel, for whom the comprehensibility of AI results represents significant added value.

We offer AI-based assistance systems that support humans with knowledge-based XAI in classification tasks. Semantic knowledge models can be used to subject results generated by machine learning to a plausibility check and to reason about them in a way that is understandable to humans (e.g., in natural language).
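A toy sketch of such a knowledge-based plausibility check is shown below. The knowledge model, classes, and properties are invented for illustration; a real system would draw on a semantic knowledge base such as an ontology.

```python
# Hypothetical semantic constraints: each class implies expected properties.
KNOWLEDGE_MODEL = {
    "truck":      {"min_length_m": 5.0, "allowed_zones": {"road", "depot"}},
    "pedestrian": {"min_length_m": 0.0, "allowed_zones": {"sidewalk", "crossing"}},
}

def plausibility_check(label, length_m, zone):
    """Check an ML classification against the knowledge model and return
    a human-readable explanation of the verdict."""
    rules = KNOWLEDGE_MODEL[label]
    if length_m < rules["min_length_m"]:
        return (False, f"Implausible: a {label} should be at least "
                       f"{rules['min_length_m']} m long, observed {length_m} m.")
    if zone not in rules["allowed_zones"]:
        return (False, f"Implausible: a {label} is not expected in zone '{zone}'.")
    return (True, f"Plausible: the observed properties are consistent with '{label}'.")

ok, explanation = plausibility_check("truck", length_m=2.1, zone="road")
print(explanation)  # natural-language reasoning about the ML result
```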

Semantic segmentation of image data refers to the pixel- or point-wise classification of two- and three-dimensional data. In off-road robotics, this classification largely determines whether a potential path is considered traversable. When AI methods are used for obstacle detection and avoidance, the comprehensibility of classification decisions is therefore of utmost relevance.
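The sketch below illustrates the connection: pixel-wise class probabilities (randomly generated here as a stand-in for a segmentation model's output) are turned into a traversability decision, and each blocking pixel is reported together with the class and confidence that caused it. The class set and array shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for per-pixel class probabilities from a segmentation model:
# shape (classes, H, W) with classes = (ground, vegetation, obstacle).
CLASSES = ("ground", "vegetation", "obstacle")
probs = rng.dirichlet(np.ones(len(CLASSES)), size=(8, 8)).transpose(2, 0, 1)

labels = probs.argmax(axis=0)                    # pixel-wise classification
traversable = labels == CLASSES.index("ground")  # traversability decision

# Explanation: report which pixels block traversability and how confident
# the model was about each blocking decision.
blocking = np.argwhere(~traversable)
for y, x in blocking[:3]:
    cls = CLASSES[labels[y, x]]
    conf = probs[labels[y, x], y, x]
    print(f"pixel ({y},{x}) blocked by '{cls}' (confidence {conf:.2f})")
```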

For vessel traffic control and monitoring, we develop methods for situation awareness and anomaly detection. With the help of trajectory classification, different vessel types can be distinguished based on AI methods and geographical characteristics. To ensure that these models provide meaningful results, XAI methods are used to build transparent prediction models.
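One way to obtain such a transparent prediction model is a shallow decision tree over trajectory features, where every prediction can be traced along one explicit decision path. The feature names and synthetic labels below are assumptions for illustration, not IOSB's actual feature set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical geometric trajectory features; real features would be derived
# from AIS tracks (speed profile, heading changes, distance to shore, ...).
FEATURES = ["mean_speed_kn", "heading_variance", "dist_to_shore_km"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)  # synthetic labels: 0 = cargo, 1 = ferry

# A shallow decision tree is transparent by construction.
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(clf, feature_names=FEATURES))  # human-readable rules
print("feature importances:", dict(zip(FEATURES, clf.feature_importances_)))
```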

AI techniques can accurately predict the losses that occur in a power grid. Grid operators must procure replacement power for these losses, and accurate loss forecasts allow them to do so efficiently and cost-effectively. Traceable AI-based loss predictions therefore represent significant added value for grid operators.
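As a minimal illustration of traceable loss prediction, the sketch below fits a linear model whose coefficients directly expose each input's contribution to the forecast. The feature names and synthetic data are assumptions and do not reflect a real grid model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical drivers of grid losses; real inputs would come from grid
# measurements and weather forecasts.
FEATURES = ["load_mw", "temperature_c", "wind_speed_ms"]
X = rng.normal(size=(500, 3))
losses = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, losses)

# The fitted coefficients make the forecast traceable: each feature's
# contribution to a prediction is its coefficient times its value.
for name, coef in zip(FEATURES, model.coef_):
    print(f"{name}: {coef:+.2f} MW loss per unit")
```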

Manufacturing processes such as plastic injection molding are characterized by an abundance of free parameters. In practice, these are carefully set by experienced personnel to ensure product quality. To make the supporting AI-based quality estimation more transparent and reproducible, an assistance system based on XAI is being developed at Fraunhofer IOSB.
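The sketch below shows one way such an assistance system could expose which process parameters drive a quality estimate, using permutation importance on a generic regressor. The parameter names and the quality target are invented for illustration, not taken from the IOSB system.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical injection molding parameters; the quality target is synthetic.
PARAMS = ["melt_temp_c", "injection_pressure_bar", "cooling_time_s"]
X = rng.uniform(size=(300, 3))
quality = X[:, 1] ** 2 + 0.3 * X[:, 2] + rng.normal(scale=0.05, size=300)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, quality)

# Permutation importance shows which parameters drive the quality estimate,
# making the AI-based estimation more transparent to operators.
result = permutation_importance(model, X, quality, n_repeats=10, random_state=0)
for name, imp in zip(PARAMS, result.importances_mean):
    print(f"{name}: importance {imp:.3f}")
```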