Artificial intelligence (AI) can support humans in many ways, for example in optimizing industrial production processes, in driving, or in medicine when selecting therapy options. As AI solutions play an increasingly important role in everyday life, ethical principles, such as those set out by the EU in the AI Act currently under negotiation, are important cornerstones of their development. Not least in order to observe these principles, the traceability and transparency of AI models are essential: they support the development of AI models and strengthen trust in AI results.
Inherently comprehensible white-box models, such as small decision trees or simple Bayesian networks, have a long tradition in AI. Their simple and clear structure makes it possible to understand the algorithmic relationships directly.
For more than ten years, methods from the field of machine learning, in particular deep learning, have been the focus of attention. These are trained on large amounts of data and yield high-quality AI models for specific tasks. Such models are very complex; we cannot intuitively follow how they arrive at their results. They are therefore referred to as black boxes. So-called explanation models from the research field of Explainable Artificial Intelligence (XAI) make it possible to better understand what happens inside the black box. They can be developed specifically for a particular AI model or generically for different AI models. Explanations are generated either for the AI model as a whole or for individual AI outputs. Procedures specially developed for user interface design and usability engineering ensure optimal usability.
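The idea of a model-agnostic, local explanation can be illustrated with a minimal sketch: perturb each input feature of a single prediction and measure how the black box's output changes (occlusion-style attribution). The `black_box` function below is a purely hypothetical stand-in; in practice it would be any trained model, and real XAI toolkits offer far more sophisticated methods.

```python
# Minimal sketch of a model-agnostic, local explanation via input
# perturbation (occlusion). "black_box" is a hypothetical stand-in
# for a trained model; only its input/output behavior is used.

def black_box(features):
    # Hypothetical black box: a simple weighted sum of the inputs.
    weights = [0.5, 0.25, 0.25]
    return sum(w * f for w, f in zip(weights, features))

def occlusion_explanation(model, features, baseline=0.0):
    """Attribute an importance score to each feature by replacing it
    with a neutral baseline value and measuring how much the model's
    output drops relative to the unperturbed prediction."""
    reference = model(features)
    attributions = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = baseline  # "occlude" feature i
        attributions.append(reference - model(perturbed))
    return attributions

x = [1.0, 1.0, 1.0]
print(occlusion_explanation(black_box, x))  # → [0.5, 0.25, 0.25]
```

Because the method only queries the model's inputs and outputs, it applies to any black box; this is what distinguishes generic (model-agnostic) explanation methods from those developed for one specific AI model.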
Fraunhofer IOSB has extensive expertise in developing AI solutions and the associated explanation models and applies them in a range of application areas. As a partner for SMEs and industry, we are happy to bring this experience to your use case. In this visIT, we provide insights into concrete XAI use cases from quality assurance and production, sensor-based reconnaissance and surveillance, and medicine. Finally, we present our XAI toolbox, which enables us to generate a variety of explanations for your AI process as well, thereby ensuring targeted development, trust, and transparency in your application.
We hope you enjoy reading our issue "Explainable AI in Practice".