30 July 2021

Research Clinic: “Explainable AI”

Bridging explainability gaps in automated decision-making from a governance, technical and design perspective  


Artificial Intelligence (AI) changes how we think about deciding – and about thinking. It challenges economic dependencies, enables new business models and intensifies the datafication of our economies. Yet the use of AI entails risks on an individual as well as on a societal level, especially for marginalized groups. AI systems are trained with data that treat people only as members of groups rather than as individuals. This group-based treatment can lead to the objectification of a person, which amounts to a violation of human dignity. However, it is not only the outcomes of AI-based decision-making that pose challenges and can lead to discriminatory results. The opacity of machine-learning algorithms and the functioning of (deep) neural networks make it difficult to adequately explain how AI systems reach their results (the ‘black box phenomenon’). Calls for more insight into how automated decisions are made have grown increasingly louder over the past few years.

The solution seems clear: we need to know enough about automated decision-making processes to be able to provide the reasons for a decision to those affected by it – in a way they understand (explainability). The explanation must be simple enough to be understood, yet sufficiently complex that the AI’s complexity is not glossed over. Explainability is the necessary first step in a chain of conditions that lead to a decision being perceived as legitimate: decisions that can be justified are perceived as legitimate. But only what can be questioned can be justified, and only what is understood can be questioned. And to be understood, the decision has to be explained. Thus, explainability is a precondition for a decision to be perceived as legitimate (justifiability).

Given these circumstances, it is not easy to ensure that we can harness the power of AI for good and make it explain to us how decisions were reached – even though this is a requirement under European law, such as the GDPR.


In our Clinic “Explainable AI” we aim to tackle these challenges and explore the following key questions from an interdisciplinary perspective:

  • Governance perspective: What requirements regarding explainability does the GDPR impose, and what must be explained in order to meet them?
  • Technical perspective: What can be explained?
  • Design perspective: What should explanations look like in order to be meaningful to affected users?

We will invite around 10 international researchers from law, computer science, and UX design to participate in an impact-driven, interdisciplinary Clinic focused on specific use cases. The Clinic will span five intense days (8–12 September 2021) and is hosted by the Alexander von Humboldt Institute for Internet and Society (HIIG). It is a joint initiative of the Ethics of Digitalisation project and the AI & Society Lab, a research structure within the institute that explores new formats and perspectives on AI.



About the research project

The Clinic is part of the NoC research project “The Ethics of Digitalisation – From Principles to Practices”, which aims to develop viable answers to challenges at the intersection of ethics and digitalisation. Innovative formats facilitate interdisciplinary scientific work on application- and practice-oriented questions and achieve outputs of high societal relevance and impact. Previous formats included a research sprint on AI in content moderation and a clinic on fairness in online advertising. The project promotes an active exchange between science, politics and society and thus contributes to a global dialogue on the ethics of digitalisation.

Besides the HIIG, the main project partners are the Berkman Klein Center at Harvard University, the Digital Asia Hub, and the Leibniz Institute for Media Research | Hans-Bredow-Institut.

Nadine Birner

Former Coordinator: The ethics of digitalisation | NoC
