
Human in the loop?

Automated decisions and AI systems are becoming increasingly important in our digital world. A well-known example is the granting of loans, where banks use technological systems to automatically assess the creditworthiness of applicants. Similar decision-making processes can also be found in content moderation on digital platforms such as Instagram, Facebook or TikTok. Here, algorithms decide, among other things, which posts, images and videos uploaded by users are approved or labeled as inappropriate.

These automated decisions are often not flawless: they carry unintentional biases from their training data and lack human contextual understanding. As a result, purely automated decisions often fail to do justice to people's individual situations. This is why there have long been calls to integrate humans into such processes so that they can monitor and improve the technological systems.
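
To make this idea concrete, the minimal Python sketch below shows one common pattern for such integration: the automated model decides only when it is confident, and uncertain cases are handed to a human reviewer. All names and thresholds (Case, model_score, human_review, the 0.2/0.8 band) are illustrative assumptions, not taken from the project or any real system.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A single case to be decided, e.g. a loan application or a flagged post."""
    case_id: str
    features: dict

def model_score(case: Case) -> float:
    """Placeholder for an automated scoring model; returns a value in [0, 1]."""
    return 0.5  # dummy value for illustration only

def human_review(case: Case, score: float) -> bool:
    """Placeholder for the human in the loop; in practice a trained reviewer
    with access to the case context makes this call."""
    print(f"Case {case.case_id} escalated to a human reviewer (score={score:.2f})")
    return False  # illustrative default

def decide(case: Case, approve_above: float = 0.8, reject_below: float = 0.2) -> bool:
    """Decide automatically only when the model is confident; otherwise
    route the case to a human reviewer."""
    score = model_score(case)
    if score >= approve_above:
        return True   # confident automated approval
    if score <= reject_below:
        return False  # confident automated rejection
    return human_review(case, score)  # uncertain band: the human decides

print(decide(Case("A-001", {})))  # the dummy score of 0.5 falls in the human-review band
```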

The research project "Human in the loop? Autonomy and automation in socio-technical systems" investigates how the active involvement of humans can make a difference in automated decision-making processes. The central questions are: How should meaningful interaction between humans and machines be designed? What role do human decisions play in the quality assurance of automated decisions? How can we ensure that this interaction is not only legally compliant, but also transparent and comprehensible? And what requirements apply to the interaction between humans and machines when considering the technical system, the human decision-makers, their context and their environment?

Project focus and transfer

Four case studies

Analysis of human participation in automated decision-making processes through field analyses, workshops and dialogue formats in four selected scenarios.

Taxonomy of influencing factors

Examination of the factors that influence human decisions and identification of the errors, vulnerabilities and strengths of all technical systems and people involved in decision-making processes.

Recommendations for action

Development of practical solutions to optimise collaboration between humans and machines and improve the implementation and interpretation of existing legislation and regulations (GDPR, AI Act and DSA).


Credit granting: between automation and ethical challenges

Automated credit granting brings efficiency benefits for consumers, but also raises ethical and trust issues. We investigate risks such as biases in loan decisions based on factors such as gender and place of residence, as well as the possibility that credit institutions prioritise profit maximisation over the needs of borrowers. Our research questions focus on how automated credit decisions influence consumers' trust in their credit institutions. We also examine the principle of non-discrimination and the role of legal frameworks. Where is the human being in the process? What responsibility do they bear for the final decision?
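
As a purely illustrative example of the kind of bias check alluded to above, the short Python sketch below compares approval rates of recorded automated credit decisions across groups. The data, the field names and the approval_rates helper are invented for illustration; they do not describe any bank's system or the project's methodology.

```python
from collections import defaultdict

# Toy records of automated credit decisions; fields and values are invented.
decisions = [
    {"gender": "f", "approved": True},
    {"gender": "f", "approved": False},
    {"gender": "m", "approved": True},
    {"gender": "m", "approved": True},
]

def approval_rates(records, attribute):
    """Share of approved applications per value of the given attribute."""
    totals, approved = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[attribute]
        totals[group] += 1
        approved[group] += int(record["approved"])
    return {group: approved[group] / totals[group] for group in totals}

print(approval_rates(decisions, "gender"))
# {'f': 0.5, 'm': 1.0} -> a gap like this would call for human scrutiny of the model
```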

Field analysis

May to September 2024

Dialogue formats with experts and practitioners

September to December 2024

Recommendations for action on automated credit granting

2025

Automated content moderation: power, law and the role of human decisions

Content moderation plays a central role in the online world. It involves the control and regulation of content to ensure that it complies with platform guidelines. This includes identifying and editing or deleting inappropriate or harmful content such as insults, hate speech and misinformation. The process requires both automated decisions, where algorithms use certain criteria to evaluate content, and human decisions. Our research questions aim to analyse the interaction between automated and human decisions, with the goal of determining ethical standards and developing proposals for their implementation, particularly in the area of tension between the current approach of large US platforms and European standards.
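
The hedged sketch below illustrates, in Python, how such an interplay is often described: an automated classifier scores content against policy categories, clear violations are removed automatically, and borderline cases are escalated to human moderators. The classify and moderate functions, the policy list and the thresholds are assumptions made for illustration only, not the actual pipeline of any platform.

```python
POLICIES = ("hate_speech", "harassment", "misinformation")

def classify(text: str) -> dict:
    """Placeholder for an automated classifier returning a score in [0, 1]
    per policy category; a real system would use trained models."""
    return {policy: 0.0 for policy in POLICIES}  # dummy scores for illustration

def moderate(text: str, remove_above: float = 0.9, review_above: float = 0.5) -> str:
    """Return 'remove', 'human_review' or 'approve' for a piece of content."""
    scores = classify(text)
    worst = max(scores.values())
    if worst >= remove_above:
        return "remove"        # clear violation: automated removal
    if worst >= review_above:
        return "human_review"  # borderline case: a human moderator decides
    return "approve"           # no policy concern detected by the classifier

print(moderate("example post"))  # -> 'approve' with the dummy classifier
```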

Field analysis

January to April 2025

Dialogue formats with experts and practitioners

April to June 2025

Recommendations for action on automated content moderation

August 2025

"When automation – accelerated by artificial intelligence – creates risks, it is often pointed out that a human must ultimately be involved and make the final decision. But under what conditions does this‚human in the loop‘ really make a difference? That depends on many conditions: their qualifications, the ability to influence the machine processes, liability regulations and much more. In the project that is now starting, we want to analyse these conditions indifferent areas of society. The results should help to enable the use of AI that is orientated towards rights and values."

Wolfgang Schulz

"We are fascinated by the question of how humans and AI systems interact in decision-making processes. What can machines control for us? When do humans have to make decisions? These are topics that are becoming increasingly relevant and help us to contribute to redefining the role of people in digital times. Humans are often seen as a panacea for the problems and sources of error in automated decision-making. However, it is often unclear exactly how such integration should work. With our case studies in the Human in the loop? research project, we are looking for solutions to this problem that also hold up in practice."

Matthias C. Kettemann

"In many areas, the supervision and final decision in the interaction between humans and AI systems and algorithms should remain with humans. We are asking ourselves how this interaction needs to be organised in order for this to succeed and what ‚good‘ decision-making systems look like. Because in our society, the interaction between humans and machines plays a role in more and more decisions. In the Human in the loop? research project, we are asking ourselves how the interaction between humans and AI systems must be designed so that they can continue to safeguard democratic values and civil rights in the future."

Theresa Züger

Other publications

Mahlow, P., Züger, T., & Kauter, L. (2024). KI unter Aufsicht: Brauchen wir ‘Humans in the Loop’ in Automatisierungsprozessen? Digital Society Blog.

Lectures and presentations

Workshop: Zukunft der Content Moderation durch effektive Mensch-Maschine-Kollaboration
Lecture: "Recht und Ethik der Mensch-Maschine-Interaktion". Humboldt Institut für Internet und Gesellschaft, Berlin, Germany, 07.10.2024.

Matthias C. Kettemann

Organisation of events

Human in the Loop: Content Moderation
Zukunft der Content Moderation durch effektive Mensch-Maschine-Kollaboration. 07.10.2024. Humboldt Institut für Internet und Gesellschaft, Berlin, Germany (National)

Philipp Mahlow, Ann-Kathrin Watolla, Lara Kauter, Daniel Pothmann, Sarah Spitz, Katharina Mosene, Matthias C. Kettemann, Wolfgang Schulz, Theresa Züger

Human in the Loop: Kreditvergabe im Fokus
Human in the Loop: Kreditvergabe im Fokus. 10.04.2024. Humboldt Institut für Internet und Gesellschaft, Berlin, Germany (National)

Philipp Mahlow, Lara Kauter, Daniel Pothmann, Sarah Spitz, Vincent Hofmann, Katharina Mosene, Matthias C. Kettemann, Wolfgang Schulz, Theresa Züger

Funded by


Duration: October 2023 - September 2027
Funding: Stiftung Mercator


Contact

Sarah Spitz

Head of Dialogue & Knowledge Transfer | Project Coordinator Human in the Loop?

AI & Society Lab

The AI & Society Lab is a research group at HIIG. It functions as an interface between research, industry and civil society.

Events

Round-table: The future of content moderation
Digitaler Salon: Damage Control
Round-table: Credit granting
Digitaler Salon: Fahrplan 4.0

Blog articles