17 July 2024 | doi: 10.5281/zenodo.12755262

AI under supervision: Do we need Humans in the Loop in automation processes?

Who is this human that is kept in the loop? In the era of artificial intelligence, vast amounts of data are processed to generate automated recommendations and even make decisions. These systems are increasingly integral to our daily lives: automated systems in banks assess the creditworthiness of applicants, for instance, while others aid doctors in diagnosing illnesses. The benefits of automation therefore seem clear to many. On the one hand, automated systems accelerate numerous work processes and operations, with the aim of reducing costs. On the other hand, AI-driven systems can identify hidden relationships and patterns that humans might overlook. Despite these advantages, however, automated decisions are not always flawless. For example, they tend to adopt unintended biases from the data they are trained on. Many believe that involving a human in the process, the so-called Human in the Loop, could solve many automation issues: such a human could optimise the output of an automated system or oversee its decisions. But does human involvement automatically lead to better outcomes? And is it only the outcome that matters? How do we ensure that human interventions genuinely add value? Who decides what in the automated process, and on what basis? In the following article, we explain how we are exploring these questions in the research project Human in the Loop? at the Alexander von Humboldt Institute for Internet and Society.

Human-Machine Interactions: Where Errors Arise and Consequences Follow

Instances of consequential, inadequate human-machine interactions have recently become more frequent. In the British Post Office scandal, for example, faulty accounting software incorrectly suggested that post office managers had committed embezzlement and falsified accounts, leading to the conviction of over 900 employees as well as repayment demands. Another negative example was the Dutch childcare benefits scandal, in which the Dutch tax authorities used an AI system to identify potential welfare fraud; even combined with human oversight, the system produced discriminatory outcomes. Over 20,000 parents were wrongly asked to repay significant amounts, plunging many into financial difficulties. These cases clearly demonstrate that many current automation processes are still highly error-prone. The reasons are manifold: humans tend to place excessive trust in machine-made pre-decisions in some contexts, a tendency known as automation bias. Another issue is the opacity of machine decision-making: how can a human effectively intervene if they do not sufficiently understand the workings of the system they are monitoring, including the logic and reasoning behind its decisions (or recommendations)? This lack of clarity also makes it difficult for those affected by erroneous automated decisions to challenge them legally, as they cannot prove that the decisions are unlawful (for example, due to discrimination). Beyond the machine's shortcomings, human biases can also affect human-machine interactions: if the training data of an AI system are not adequately prepared, biases inherent in those data persist in the system, perpetuating the discriminatory human decisions it has learned.
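
To make this last point concrete, here is a deliberately minimal sketch of how a discriminatory pattern in historical decisions can survive training. All data, district names and figures are invented for illustration; real credit-scoring systems are far more complex. The "model" below merely learns approval frequencies, yet it faithfully reproduces the human bias baked into its training data.

```python
from collections import defaultdict

# Hypothetical historical lending decisions: (district, repaid, approved).
# Past human decision-makers systematically rejected applicants from
# district "B", regardless of whether those applicants actually repaid.
training_data = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# A naive "model" that simply learns the historical approval rate per district.
history = defaultdict(list)
for district, _repaid, approved in training_data:
    history[district].append(approved)

def recommend(district: str) -> bool:
    """Recommend approval if most past applicants from the district were approved."""
    past = history[district]
    return sum(past) > len(past) / 2

print(recommend("A"))  # True  -- approvals continue for district A
print(recommend("B"))  # False -- the learned bias persists for district B
```

Note that repayment behaviour, the factor that should matter, never influences the recommendation: the system has learned the historical decision, not the underlying creditworthiness.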

The European Union is trying to address some of these issues with the AI Act, adopted in May 2024 and the world's first comprehensive attempt at AI regulation. The regulation specifically mandates that AI systems used for high-risk applications must be designed in such a way that they can be effectively supervised by humans. This applies especially to areas where errors could have severe consequences. The Human in the Loop is thus cast as a figure of hope, expected to steer such human-machine interactions towards good decisions. We clarify later in the article what 'good' entails here.

Human Involvement: Who Takes on Which Role?

Firstly, who exactly is this Human in the Loop? For our research project, we define them as individuals who actively participate in an automated process in order to enhance the system's performance or monitor the quality of its decisions. Our definition incorporates technical descriptions, which primarily locate human involvement in the development stage of an AI system, such as in data preparation or the monitoring of machine learning processes. It also considers regulatory perspectives, which understand the Human in the Loop mainly as a supervisor of an operational system, as described by the Bundesrat in its file 165/19. In our view, examples of Humans in the Loop therefore include both doctors who use AI systems for initial X-ray assessments and human actors who clean the training data for such systems. How effective such human intervention can be, however, remains an open question and depends on the specifics of each case of automation.
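
One common pattern that fits the supervisory side of this definition is confidence-based escalation: the system decides routine cases on its own and hands uncertain ones to a human. The following sketch is purely illustrative; the threshold, the field names and the review function are our assumptions, not a description of any real deployment.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    decision: str      # e.g. "no finding" or "reject application"
    confidence: float  # the system's estimated certainty, between 0.0 and 1.0

def decide(rec: Recommendation,
           human_review: Callable[[Recommendation], str],
           threshold: float = 0.9) -> str:
    """Accept confident machine output; escalate uncertain cases to a human."""
    if rec.confidence >= threshold:
        return rec.decision
    return human_review(rec)  # this is where the Human in the Loop intervenes

# Example: a doctor overrides an uncertain automated X-ray assessment.
final = decide(Recommendation("no finding", 0.55),
               human_review=lambda rec: "refer to a specialist")
print(final)  # "refer to a specialist"
```

Even in this tiny sketch, a design choice with real consequences is visible: whoever sets the threshold decides how often the human is actually asked.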

Decisions Under Scrutiny: Case Studies on AI-Supported Decision Processes

In our research project, we examine this interplay between Humans in the Loop and 'machines' in automated processes in order to understand it better. The goal is to generate new insights into how the interplay must be shaped to achieve good decisions. Through various case studies, we identify and gather the most relevant factors influencing decision quality.

The first case study focuses on credit-granting decisions. The use of AI systems here offers efficiency gains but raises fundamental questions: does the process involve a Human in the Loop who reviews individual credit decisions? In a second case study, we examine content moderation on digital platforms and social networks. Here, we analyse decision interactions between algorithms and humans that aim, for example, to enforce community rules and remove problematic content such as hate speech or misinformation.
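
In content moderation, the division of labour between algorithm and human often takes a three-way form: clear violations are removed automatically, borderline cases are queued for human moderators, and the rest stays online. The sketch below illustrates this pattern only; the scores and thresholds are invented and do not describe any particular platform.

```python
# Illustrative three-way routing in content moderation. A hypothetical
# upstream classifier is assumed to supply the toxicity score.
def moderate(post: str, toxicity_score: float) -> str:
    if toxicity_score >= 0.95:
        return "removed automatically"    # unambiguous rule violation
    if toxicity_score >= 0.60:
        return "queued for human review"  # the Human in the Loop decides
    return "published"

for post, score in [("clear hate speech", 0.98),
                    ("sarcastic remark", 0.70),
                    ("holiday photo caption", 0.05)]:
    print(f"{post!r}: {moderate(post, score)}")
```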

Based on these findings, we develop concrete recommendations on how decision systems can be designed to facilitate successful human-machine interactions. Various factors influence the final decision, including how information is presented, personal values, legal liability issues, economic incentives, and the time available to make a decision. Each case study brings us closer to our two overarching project goals. Firstly, we are developing a comprehensive taxonomy: a practical overview of decision-relevant factors and characteristics. Secondly, we are creating specific recommendations for action in the cases studied, contributing to better collaboration between humans and machines in decision-making.
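
To give a sense of what recording such factors per case could look like, here is a hypothetical data-structure sketch. The field names are our own illustrative assumptions derived from the factors listed above; they are not the project's actual taxonomy.

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    information_presentation: str  # e.g. "single risk score" vs. "full case file"
    time_available_seconds: int    # time pressure on the human decision-maker
    liability_rests_with: str      # e.g. "reviewer", "operator", "vendor"
    economic_incentive: str        # e.g. "throughput bonus", "none"
    human_can_override: bool       # whether intervention is possible at all

# A hypothetical entry for the credit-granting case study.
credit_case = DecisionContext(
    information_presentation="single risk score",
    time_available_seconds=120,
    liability_rests_with="operator",
    economic_incentive="throughput bonus",
    human_can_override=True,
)
print(credit_case)
```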

Efficiency and Ethics: Managing the Complexity of Our Research Questions

Examining the interplay between humans and machines in AI-supported decision processes presents several challenges. In particular, detailed information on human involvement in areas such as credit granting is hard to access. Companies in this sector seek to protect their internal processes and decision criteria in order to maintain competitive advantages and to avoid the exploitation of potential weaknesses in their systems. Many of these processes therefore remain undocumented practices or business secrets. In our research project, we have consequently engaged stakeholders who are willing to share their expertise. They help us understand which actors (such as humans, AI systems, or companies) are involved in the decision processes of our case studies, how these actors collaborate, and which factors critically influence decisions. At the same time, we grapple with the fundamental question of what criteria define a 'good' decision. Together, this helps us assess how decisions in human-machine interactions should function factually, procedurally, and structurally. The subsequent question is more complex still: how do we measure the quality of those decisions? Assessing decision quality depends heavily on the perspective of the observer, as an example from credit granting illustrates: is the decision good for the individual, the bank, or society? The answer can differ significantly depending on the viewpoint.
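
A deliberately toy calculation makes this measurement problem tangible: the same credit decision can receive different quality scores depending on the viewpoint. All scoring rules below are invented for illustration and carry no empirical weight.

```python
def quality(approved: bool, would_repay: bool, viewpoint: str) -> int:
    """Toy quality score for a single credit decision, by viewpoint."""
    if viewpoint == "bank":
        # The bank gains on repaid loans, loses on defaults; a rejection
        # costs it nothing directly.
        return 1 if (approved and would_repay) else (-1 if approved else 0)
    if viewpoint == "applicant":
        # The applicant is harmed both by rejection and by unaffordable debt.
        return 1 if approved == would_repay else -1
    if viewpoint == "society":
        # Society values access to credit but bears the cost of over-indebtedness.
        gain = 1 if (approved and would_repay) else 0
        cost = 2 if (approved and not would_repay) else 0
        return gain - cost
    raise ValueError(viewpoint)

# A creditworthy applicant is wrongly rejected: harmless for the bank (0),
# harmful for the applicant (-1), a missed opportunity for society (0).
for v in ("bank", "applicant", "society"):
    print(v, quality(approved=False, would_repay=True, viewpoint=v))
```

Even this caricature shows why a single accuracy number cannot settle whether a human-machine decision process is working well: the error that is invisible to one party can be the decisive harm for another.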

Conclusion: The Future of the Human in the Loop

In the future, automated decision-making processes will be established in many more industries and will thus occupy more space in our daily lives. It is therefore crucial that we, as a society, understand their impacts and risks in order to ensure fair and transparent decision-making processes for all. We must ensure that capable individuals with specific qualifications receive sufficient and meaningful opportunities to influence automation processes, and that they are empowered to accurately assess the quality of automated outputs and to intervene where necessary. In the coming years, we will investigate the conditions under which this can be achieved. Our research aims to help integrate algorithmic systems into human-guided decision processes in an ethically responsible and practically feasible way.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Philipp Mahlow

Research assistant: Human in the Loop?

Theresa Züger, Dr.

Research Group Lead: Public Interest AI | AI & Society Lab, Co-Lead: Human in the Loop

Lara Kauter

Student assistant: Human in the Loop?
