Understanding and interpreting AI models

The importance of explainable AI


AI offers great potential to assist humans in many areas. Doctors can use AI to determine the best therapeutic option, inefficiencies in industrial production can be detected at an early stage, and AI-equipped vehicles can relieve drivers of parts of the driving task.

The key to successful use of AI is not only to maximize performance and achieve the best possible results, but also to build trust in the AI's decisions. AI decisions must therefore be trustworthy, transparent, and comprehensible for humans. At Fraunhofer IOSB, we consequently develop not only AI methods, but also methods that explain the AI's decisions.

Examples of applications include the analysis of patient data to derive therapy recommendations, the analysis of ship navigation data to assist operators in monitoring maritime areas, and the analysis of cyber data to provide indications of cyber attacks.

Developing, interpreting, and explaining Artificial Intelligence is our goal and a long-standing research topic at Fraunhofer IOSB.

Briefly explained: key terms in Artificial Intelligence

Artificial Intelligence (AI) is a branch of computer science that aims to equip computer systems, robots, or other machines and systems with intelligence. This is done by mimicking human thought and decision processes and mapping them in a computer program.

Machine Learning methods learn to make decisions from training or reference data. One particular form of Machine Learning is Deep Learning.

Deep Learning is based on neural networks and has become very popular due to its exceptional results in many fields of application. Given large amounts of reference data, such methods can learn, largely without human intervention, to replicate previously made decisions. A major obstacle to their acceptance, however, is the opaque nature of the resulting predictive models. Methods for explaining the AI's decisions are therefore required.

To understand the decisions of Deep Learning methods, two families of explanation methods exist: global explanation methods consider the entire model behavior at once to provide insight into the AI method as a whole, whereas local explanation methods consider a certain subset of the model behavior, typically an individual prediction, to derive explanations.
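
As a minimal sketch of a local explanation method (not a method from our toolbox; the dataset, model, and perturbation scheme below are illustrative assumptions), the following Python snippet scores each feature of a single prediction by perturbing that feature and observing how the model's output changes:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a black-box model on a public example dataset (placeholder for any model).
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

def local_explanation(model, x, n_samples=200, noise=0.1, seed=0):
    """Score each feature of one instance x by perturbing it and measuring
    the average change in the predicted probability of class 1."""
    rng = np.random.default_rng(seed)
    base = model.predict_proba(x[None, :])[0, 1]
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        perturbed = np.tile(x, (n_samples, 1))
        # Jitter only feature j around its original value.
        perturbed[:, j] += rng.normal(0.0, noise * (abs(x[j]) + 1e-9), n_samples)
        scores[j] = base - model.predict_proba(perturbed)[:, 1].mean()
    return scores  # large |score| = feature strongly influences this prediction

x = data.data[0]
scores = local_explanation(model, x)
for j in np.argsort(-np.abs(scores))[:5]:
    print(f"{data.feature_names[j]}: {scores[j]:+.4f}")
```

Features with large absolute scores dominate this particular decision; a global method would instead aggregate such effects over the whole dataset.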

We can support you in this:

Developing AI methods

Do you want to find correlations, such as typical patterns or anomalies, in your data? We support you in selecting an appropriate AI method, preparing your data, and implementing the AI.

Understanding AI

In addition to applying different XAI methods to look inside the black box of AI decisions, we offer intuitive visualizations that make the explanation of an AI decision visually comprehensible; a small example follows below.
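
As a small illustration of such a visualization (the feature names and attribution scores are made-up values, not output of our methods), a diverging bar chart is one common way to present a local explanation:

```python
import matplotlib.pyplot as plt
import numpy as np

# Made-up attribution scores for one prediction (illustrative values only).
features = ["mean radius", "mean texture", "mean perimeter", "mean area"]
scores = np.array([0.12, -0.03, 0.08, -0.05])

# Diverging bar chart: red bars push the prediction up, blue bars pull it down.
colors = ["tab:red" if s > 0 else "tab:blue" for s in scores]
plt.barh(features, scores, color=colors)
plt.axvline(0, color="black", linewidth=0.8)
plt.xlabel("contribution to predicted probability")
plt.title("Local explanation of one AI decision (illustrative)")
plt.tight_layout()
plt.show()
```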

AI User Interface Design

Only a suitable user interface makes it possible to use an AI system efficiently. With our experience in the corresponding design processes, we can create a user-friendly interface tailored to your needs.


Application examples in our projects

We research AI methods and their transparency and comprehensibility in the following areas: healthcare, maritime surveillance, autonomous systems, mobility, and data management.


eXplainable Artificial Intelligence (XAI)-Toolbox

The XAI toolbox developed by Fraunhofer IOSB is designed with AI explainability at its core. It enables the swift evaluation of various XAI methods for a given AI model and can be used, for example, for data analysis, debugging, and explaining the predictions of any black-box model. This strengthens trust in AI decisions.
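
The toolbox's own interface is not reproduced here; as a hedged sketch of the kind of black-box analysis such a toolbox automates, the following snippet computes a global explanation via permutation feature importance with scikit-learn (the dataset and model are placeholder choices):

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Any fitted estimator can play the role of the black box here.
X_train, X_test, y_train, y_test = train_test_split(
    *load_wine(return_X_y=True), random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: shuffle one feature at a time on held-out data and
# record how much the model's accuracy drops on average.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for j in np.argsort(-result.importances_mean)[:5]:
    print(f"feature {j}: drop {result.importances_mean[j]:.3f} "
          f"+/- {result.importances_std[j]:.3f}")
```

Shuffling a feature that the model relies on degrades its held-out score, so the mean score drop serves as a model-agnostic, global importance measure.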


MED²ICIN

In MED²ICIN, IAD is developing, among other things, an XAI interface design that makes the complexity of an AI decision comprehensible to both doctors and patients by means of suitable user interfaces.


Maritime Situation Analysis

We have gained many years of research expertise in maritime situational awareness. The goal is, among other things, to assist operators in maritime surveillance tasks. Based on AIS or radar data, we can detect critical situations (e.g. smuggling), classify ship types, detect deviations from ship routes, and much more.
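
As a simplified sketch of the route-deviation idea (the reference route, track positions, and threshold below are invented illustrative values, not project data), an AIS position can be flagged when its distance to the nearest segment of a known shipping route exceeds a threshold:

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to the line segment a-b (planar approximation)."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)  # projection clamped to segment
    return np.linalg.norm(p - (a + t * ab))

def route_deviation(position, route):
    """Minimum distance from a position to a route given as a waypoint list."""
    return min(point_to_segment(position, route[i], route[i + 1])
               for i in range(len(route) - 1))

# Invented reference route and AIS track in local x/y coordinates (km).
route = np.array([[0.0, 0.0], [10.0, 2.0], [20.0, 2.0]])
track = np.array([[1.0, 0.3], [9.0, 1.5], [15.0, 6.0]])  # last point strays

THRESHOLD_KM = 2.0
for p in track:
    d = route_deviation(p, route)
    if d > THRESHOLD_KM:
        print(f"position {p} deviates from the route by {d:.1f} km")
```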

VALDERRAMA

In this Dutch-German research project, AI-equipped autonomous systems are examined with respect to three central elements of trustworthiness: reliability, transparency, and control of Artificial Intelligence.


CyberProtect

In this project, we are exploring the question of why a Machine-Learning-based IT security system for production facilities triggers an alarm.

Besides generating explanations, we develop reliable AI methods to reduce false alarms.


Interested in an AI consultation?

Would you like to have your AI system reviewed, or to understand why it makes a certain recommendation? We can help you with that.