Explainable and transparent AI

AI methods, especially deep learning methods, deliver excellent results in many application areas and are used in a wide variety of ways. However, they have the disadvantage that how they arrive at these results is often not comprehensible to the user. Explainable and transparent AI (often abbreviated as XAI) refers to a group of methods whose goal is to make the decisions and results of AI models more interpretable and comprehensible for users. An explanation can consist, for example, of marking those elements of the input data that had the greatest influence on the result of a specific AI model.
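
As a minimal illustration of such an input attribution, the following sketch computes a gradient-based saliency map with PyTorch; the pretrained ResNet-18 and the random input image are placeholders, and any differentiable classifier could be substituted:

```python
import torch
from torchvision import models

# Placeholder model: any differentiable image classifier works the same way.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder input: a single RGB image tensor with gradients enabled.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then gradient of the top class score w.r.t. the input pixels.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency map: per-pixel gradient magnitude. Large values mark the input
# elements with the greatest influence on this particular decision.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
```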

Traceable decision making increases user confidence in AI methods and enables better analysis of error cases. XAI provides clearer insight into AI models for end users who want to understand decisions and results, for developers who want to embed and parameterize AI models in their applications, and for scientists who want to develop and improve their AI models.

XAI methods are being researched and developed at Fraunhofer IOSB across a wide range of AI applications.

Object Detection and Classification  

The detection and classification of objects in large volumes of video data often form the first step in a variety of AI-assisted applications. In many cases, interactive workflows involve human analysts, for whom comprehensible AI results represent a significant added value. The explainability of detection and classification decisions is currently being developed for fine-grained vehicle classification, person detection, and vehicle detection.

Knowledge-based XAI  

Semantic knowledge models can be used to subject results generated by machine learning to a plausibility check and to reason about them in a way that is understandable to humans (e.g., in natural language). Current applications at IOSB include AI-based assistance systems for classification tasks.
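
A deliberately simplified sketch of this idea (not IOSB's actual system): a small knowledge model encoded as constraints checks an ML classification for plausibility and states any conflicts in natural language. The example domain, class properties, and observation fields are invented for illustration:

```python
# Hypothetical knowledge model: properties each class is assumed to imply.
KNOWLEDGE = {
    "truck": {"min_length_m": 5.0, "allowed_zones": {"road", "depot"}},
    "pedestrian": {"allowed_zones": {"sidewalk", "crossing"}},
}

def check_plausibility(label: str, observation: dict) -> list[str]:
    """Return human-readable reasons why `label` conflicts with the observation."""
    facts = KNOWLEDGE[label]
    reasons = []
    if "min_length_m" in facts and observation["length_m"] < facts["min_length_m"]:
        reasons.append(
            f"A {label} is at least {facts['min_length_m']} m long, "
            f"but the object measures {observation['length_m']} m."
        )
    if observation["zone"] not in facts.get("allowed_zones", {observation["zone"]}):
        reasons.append(f"A {label} is not expected in zone '{observation['zone']}'.")
    return reasons

# The ML model (not shown) classified the object as "truck"; the knowledge
# model flags the result as implausible and explains why.
print(check_plausibility("truck", {"length_m": 1.8, "zone": "sidewalk"}))
```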

Semantic Segmentation  

Semantic segmentation of image data refers to the pixel- or point-wise classification of two- and three-dimensional data. In off-road robotics, this classification significantly influences whether a potential path is judged traversable. Thus, when AI methods are used for obstacle detection and avoidance, the comprehensibility of classification decisions is of utmost relevance.
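
One way to make such decisions more comprehensible is to expose the model's per-pixel uncertainty alongside the traversability mask. The following sketch assumes a hypothetical segmentation network that outputs per-pixel class probabilities; the class set and thresholds are illustrative:

```python
import numpy as np

# Stand-in for network output: per-pixel class probabilities (H x W x C)
# with classes 0 = ground, 1 = vegetation, 2 = obstacle.
rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=[2.0, 1.0, 1.0], size=(64, 64))  # shape (64, 64, 3)

TRAVERSABLE = {0}  # only 'ground' counts as traversable in this toy setup

labels = probs.argmax(axis=-1)
traversable = np.isin(labels, list(TRAVERSABLE))

# Per-pixel predictive entropy as a simple transparency measure: high values
# mark pixels where the traversability decision rests on an uncertain
# classification and deserves the operator's attention.
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
uncertain = entropy > 0.8 * np.log(probs.shape[-1])  # flag near-uniform pixels

print(f"traversable: {traversable.mean():.0%} of pixels, "
      f"uncertain: {uncertain.mean():.0%} flagged for review")
```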

Trajectory Classification  

Various methods for situation awareness and anomaly detection are being developed at IOSB for the control and monitoring of vessel traffic. One way to support operators in this task is AI-based ship type classification from the route traveled together with additional features, e.g. geographic characteristics. To ensure that these models provide meaningful results, XAI methods are used to build transparent prediction models.
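
As an illustrative sketch of a transparent prediction model, a shallow decision tree over hand-crafted route features can be shown to the operator rule by rule; the feature names and synthetic data below are assumptions, not the IOSB feature set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["mean_speed_kn", "mean_turn_rate_deg", "dist_to_coast_nm"]

rng = np.random.default_rng(1)
# Toy data: fishing vessels (label 1) tend to be slow, turn often, stay coastal.
X_cargo = np.column_stack([rng.normal(14, 2, 200),
                           rng.normal(1, 0.5, 200),
                           rng.normal(30, 8, 200)])
X_fish = np.column_stack([rng.normal(5, 2, 200),
                          rng.normal(6, 2, 200),
                          rng.normal(6, 3, 200)])
X = np.vstack([X_cargo, X_fish])
y = np.array([0] * 200 + [1] * 200)  # 0 = cargo, 1 = fishing

# A shallow decision tree is transparent by construction: its decision
# path can be presented to the operator as the explanation.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=FEATURES))
```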

Loss Prediction  

Power system operators inject electricity into the grid to compensate for grid losses. To procure this power efficiently and cost-effectively, accurate predictions of grid losses are needed. Such predictions can be produced with AI techniques, and their traceability represents a significant added value for grid operators.
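
A transparent baseline for such a forecast is a linear model, whose prediction decomposes exactly into per-feature contributions. The features and synthetic data in this sketch are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

FEATURES = ["load_mw", "temperature_c", "wind_speed_ms"]

rng = np.random.default_rng(2)
X = np.column_stack([rng.normal(800, 150, 500),
                     rng.normal(10, 8, 500),
                     rng.normal(6, 3, 500)])
# Toy ground truth: losses grow with load, mildly with temperature.
y = 0.02 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 1, 500)

model = LinearRegression().fit(X, y)

# Per-feature contribution for one forecast: coefficient * feature value.
# Contributions plus the intercept sum exactly to the prediction, so the
# operator can trace where the number comes from.
x_new = X[0]
for name, c in zip(FEATURES, model.coef_ * x_new):
    print(f"{name}: {c:+.2f} MW")
print(f"intercept: {model.intercept_:+.2f} MW -> "
      f"prediction: {model.predict(x_new.reshape(1, -1))[0]:.2f} MW")
```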

Quality Assurance  

Manufacturing processes such as plastic injection molding are characterized by an abundance of free parameters. In practice, these are carefully set by experienced personnel to ensure product quality. To make the supporting AI-based quality estimation more transparent and reproducible, an assistance system based on XAI is being developed at IOSB.
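
One model-agnostic way to make such a quality model transparent is permutation importance, which measures how much prediction quality drops when a single process parameter is shuffled. The parameter names and synthetic data below are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

PARAMS = ["melt_temp_c", "injection_pressure_bar", "cooling_time_s"]

rng = np.random.default_rng(3)
X = np.column_stack([rng.normal(230, 10, 400),
                     rng.normal(900, 100, 400),
                     rng.normal(20, 5, 400)])
# Toy rule: parts are OK when melt temperature and pressure are in range.
y = ((np.abs(X[:, 0] - 230) < 10) & (np.abs(X[:, 1] - 900) < 120)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: accuracy drop when one parameter is shuffled,
# answering "which settings matter for this quality prediction?"
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(PARAMS, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```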