05 April 2022 | doi: 10.5281/zenodo.6397649

Explaining AI – How to explain the unexplainable?

Automated decision-making (ADM) systems have become ubiquitous in our everyday lives and are complex to understand. So should we even care how AI-based decisions are made? Most definitely, since the use of these systems entails risks on an individual as well as on a societal level, such as perpetuated stereotypes or incorrect results due to faulty input data. These risks are not new to the debate. Nonetheless, there is strong evidence that humans working with automated decisions tend to follow the systems in almost 100 % of cases. So how can we empower people to think with AI, to question or challenge it, instead of blindly trusting its correctness? The solution is meaningful explanations for lay users, by and about the system and its decision-making procedure. But how? This blog article shows which criteria such explanations should fulfil and how they can meet the requirements of the General Data Protection Regulation (GDPR).

Explainable AI – a possible solution

Explanations of how ADM systems reach their decisions (explainable AI, or XAI) are considered a promising way to mitigate these negative effects. An explanation of an ADM system can empower users to legally appeal a decision, prompt developers to confront negative side effects, and increase the overall legitimacy of the decision. These effects all sound very promising, but what exactly has to be explained, to whom, and in which way, to best reach these goals?

The legal approach towards a good explanation

To find an answer to this complex question, one could start by looking at what the law says about ADM systems. The term “meaningful information about the logic involved”, found in the GDPR, can be seen as the legal codification of XAI within the EU. Although the GDPR is among the world’s most analysed privacy regulations, there is no concrete understanding of what type of information developers have to provide (and at which time and to what type of user).

Only some aspects can be determined from a legal perspective alone: First, the explanation has to enable the user to appeal the decision. Second, the user needs to actually gain knowledge through the explanation. Third, the explanation has to balance the power of the ADM developer and of the user. Last but not least, the GDPR focuses on individual rather than collective rights; in other words, an individual without any technical expertise must be able to understand the decision.

Interdisciplinary approach: Tech and Design

Since legal methods alone do not lead to a complete answer, an interdisciplinary approach seemed a promising way to better understand the legal requirements on explainable AI. A suggestion of what such an approach could look like is made by the interdisciplinary XAI report of the Ethics of Digitisation project. It combines the views of legal, technical and design experts to answer the overarching question behind the legal requirements: What is a good explanation? We started by defining three questions about a good explanation: Who needs to understand what in a given scenario? What can be explained about the system in use? And what should explanations look like in order to be meaningful to the user?

Who needs to know what?

What a good explanation looks like depends heavily on the target group. For instance, in a clinical setting a radiologist might need to know more about the general functioning of the model (global explanation), while a patient would need an explanation of the result of a single decision (local explanation).
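
To make this distinction more concrete, the sketch below contrasts a global and a local explanation for a deliberately simple linear classifier on synthetic data; it is a stand-in for illustration only, not the clinical model discussed here, and all names and numbers in it are hypothetical.

```python
# Minimal sketch of global vs. local explanations, assuming a simple
# linear model on synthetic data rather than an actual clinical system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three illustrative input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels
model = LogisticRegression().fit(X, y)

# Global explanation: how the model weighs each feature across all decisions.
print("global feature weights:", model.coef_[0])

# Local explanation: how each feature contributed to one individual decision.
single_case = X[0]
print("local contributions:", model.coef_[0] * single_case)
```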

Besides these expert (radiologist) and lay (patient) users, another target group for explanations is public or community advocates. Advocate groups support individuals confronted with an automated decision. Their interest lies more in understanding the models and their limitations as a whole (global) than in the result of one individual decision (local). The importance of advocate groups is already recognised in other societal and political contexts, such as inclusive design for AI systems, i.e. the insight that design teams need more people of colour and women to avoid problems of bias and discrimination. Advocates should also play a bigger role in the field of explainable AI.

The design – What should explanations look like?

The type of visualisation also depends on the context, the point in time and, among many other factors, the target group. There is no single answer that fits all types of explanations. We therefore propose building a participatory process for designing the explanation into the development process of the ADM system. Advocate groups should be part of this process, representing the lay users. This should lead to an explanation that is “meaningful” to the user and compliant with the GDPR.

The technical view – What can be explained about the system in use? 

One way to provide an explanation might be post-hoc interpretations, which are delivered after the decision has been made (hence post-hoc). An example is a saliency map, commonly used to analyse deep neural networks. These maps highlight the parts of the input (image, text, etc.) that are deemed most important to the model’s prediction. However, they do not reveal the actual functioning of the model. We therefore do not consider them capable of empowering the user to appeal a decision.
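
For illustration, here is a minimal sketch of one common post-hoc method, a gradient-based saliency map, assuming a pretrained torchvision classifier and an already preprocessed input image; the highlighted pixels show where the prediction is most sensitive, but not why the model behaves the way it does.

```python
# Minimal sketch of a gradient-based saliency map, a typical post-hoc method.
# Assumes a pretrained torchvision classifier and a preprocessed input image.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder input: one RGB image tensor; in practice this would be a real,
# normalised image of the same shape.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Gradient of the top predicted class score with respect to the input pixels.
scores = model(image)
top_class = scores[0].argmax().item()
scores[0, top_class].backward()

# Per-pixel saliency: maximum absolute gradient across the colour channels.
# High values mark the input regions the prediction is most sensitive to.
saliency = image.grad.abs().max(dim=1).values.squeeze()  # shape: (224, 224)
```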

Instead, we propose making the underlying rationale, design and development process transparent and documenting the input data. This may require obligations to document the processes of data gathering and preparation, including annotation or labelling. The latter can be achieved through datasheets. The method selected for the main model, as well as the extent of testing and deployment, should also be documented. This could constitute “the logic involved” from a technical perspective.
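
As a rough illustration of what such documentation could capture, here is a hypothetical datasheet entry for the clinic example; the field names and contents are our own invention and not prescribed by the GDPR or by any particular datasheet standard.

```python
# Hypothetical sketch of a datasheet entry documenting the data and model
# choices behind an ADM system; all field names and values are illustrative.
datasheet = {
    "purpose": "Support radiologists in flagging suspicious regions on scans.",
    "data_gathering": "Scans collected from partner clinics with informed consent.",
    "data_preparation": "Images pseudonymised, resampled and intensity-normalised.",
    "annotation": "Each scan labelled by two radiologists; disagreements adjudicated.",
    "known_limitations": "Younger patients are under-represented in the data.",
    "model_selection": "Convolutional neural network chosen for image input.",
    "testing": "Evaluated on a held-out set from a clinic not used in training.",
    "deployment": "Decision support only; the final call stays with the clinician.",
}
```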

Another major issue in explainable AI is the so-called black-box models: models that are perceived as non-interpretable. However, such systems tend to come with very high performance. We therefore propose weighing the benefits of high performance against the risks of low explainability. From a technical perspective, it might be useful to work with such a risk-based approach, although this might contradict the GDPR’s legal requirement to always provide an explanation.

Bringing the views together

As shown in this article as well as in the report, law, design and technology have different, in some points even contradictory, perspectives on what “meaningful information about the logic involved” is. Although we did not find the one definition of these terms, we found some common ground: The explanation should be developed and designed in a process involving representation of the user. The minimum requirement is documentation of the input data as well as of architectural choices. However, it is unlikely that documenting this process alone will enable the user to appeal an automated decision. Therefore, other types of explanations have to be found in the participatory process in order to be compliant with the GDPR.

I would like to thank Hadi Asghari and Matthias C. Kettemann, both also authors of the clinic report, for their thoughts and suggestions for this blog post.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Vincent Hofmann

Former Researcher: AI & Society Lab
