Algorithmic decision making and human rights

16 January 2018 | doi: 10.5281/zenodo.1148297

Embedded in smart technologies, algorithms make decisions on a daily basis. What challenges does algorithmic decision making pose for human rights and for the regulation of artificial intelligence? In this blog article, Wolfgang Schulz and Anne-Kristin Polster document some initial findings from the discussion within the Global Network of Internet and Society Research Centers.

There is a societal debate that never seems to go away, shaped by the notion that artificial intelligence could threaten individuals’ rights. Undoubtedly, from a legal perspective, automated tools and innovations in fields like machine learning, the Internet of Things and robotics – often grouped under the umbrella term AI – are already deeply involved in areas of our everyday lives that are protected by human rights under various frameworks.

How these frameworks play out in the face of the specific challenges posed by technical innovations in the AI field was discussed at a recent gathering of the Global Network of Internet and Society Research Centers on Algorithmic Decision Making and its Human Rights Implications, hosted by HIIG and the Hans-Bredow-Institute for Media Research on 12 – 13 October 2017. The workshop’s findings will feed into a Global Symposium on Artificial Intelligence & Inclusion from 8 – 10 November in Rio de Janeiro. The following six cross-cutting observations, which are partly intertwined, seek to capture some cornerstones of the ongoing discussion within the network.

We find that, when it comes to some of the risks discussed in the context of algorithmic decision making, the automation of decision making processes in morally sensitive situations often only brings further visibility and public attention to pre-existing problems.

On the one hand, there are considerable implicit components to human decision-making processes that need explicit discussion in the process of automation. Questions of the moral and ethical foundations of decision making demand new attention when we have to decide how to decide. Also, when they make certain decisions, technical tools mirror what they are shown. This is especially apparent when we think about problems of bias and machine discrimination; these might be the result of the structure of the training data, which reflects existing inequalities.

Those expressing reasonable concerns are not necessarily questioning automation itself, nor do they blame algorithms for making harmful decisions. Still, they recognise the risk of locking these decisions into algorithmic black boxes. As AI tools continue to permeate processes of communication and to govern public and private spheres, severe concerns will arise from the resulting power shifts and the broad possibilities for the misuse of power.


Linked to the notion of power structures playing out through the design and use of technology, many of the ongoing research projects show that, when it comes to processes of automated decision making, context is key. This context might be a particular social, economic or political frame. Many of the effects on a societal level that we are interested in are, to take one example, the result of social practices that have emerged in the process of using the technology. Given the importance of technical expert systems, even for the construction of reality, the economic context in which people decide whether to use a particular algorithm, or whether they have an economic incentive to deviate from a suggestion given by an expert system, also urgently requires further research.

Within those frames, AI systems can again be looked at from different angles: a system may appear as a mere tool, as a hybrid construction consisting of a human operator and a technical system, or as an actor in its own right. While it might seem as if we are not yet encountering many completely autonomous decision-making systems of the kind found in algorithmic trading, the relevance of de facto autonomous algorithmic decision making should not be underestimated. The empirical findings Jedrzej Niklas from the London School of Economics presented on the use of automated profiling mechanisms for the allocation of unemployment benefits in Poland are one such example: on the basis of a short questionnaire, the software sorts applicants into three groups and thereby suggests one of several support packages. Although this output serves as a mere suggestion or first indicator, the research shows that the advice is followed in the vast majority of cases. Arguably, in such cases, saying that there is “a human in the loop” is little more than a formality. We can easily imagine similar outcomes when looking at the role of technical assistants in medical diagnostics or the legal system.
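To illustrate how lightweight such de facto decision making can be, here is a minimal, purely hypothetical sketch in Python. The questionnaire items, weights, thresholds and group labels are our own assumptions for illustration and do not reproduce the actual Polish profiling software.

```python
# Hypothetical sketch: questionnaire-based profiling of benefit applicants.
# The answers, thresholds and group labels are invented for illustration;
# they do not reproduce the actual Polish profiling system.

def profile_applicant(answers: dict) -> str:
    """Sum up questionnaire answers and map the score to one of three groups."""
    score = sum(answers.values())
    if score >= 20:
        return "Group I: close to the labour market, light support"
    elif score >= 10:
        return "Group II: standard activation measures"
    else:
        return "Group III: intensive support programme"

# A caseworker sees only the suggested group -- and, as the research shows,
# the suggestion is followed in the vast majority of cases.
answers = {"education": 8, "work_experience": 5, "age_bracket": 3, "health": 2}
print(profile_applicant(answers))  # -> "Group II: standard activation measures"
```

The point of the sketch is not the scoring logic itself but how little stands between a short questionnaire and a consequential allocation decision once the suggestion is routinely accepted.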

To further assess the benefits and risks of autonomous artificial agents, we have to keep in mind some characteristics of AI systems. For instance, AI systems rely on generalisation. To behave accurately, they need to understand what the “concept” of a given entity in reality is, let’s say a cat. As shown by Douglas Eck, a research scientist at Google, this can be explored in a playful way, for instance, by looking at how AI can be trained to draw cat faces. In the process, the program recognises certain features as necessary conditions. If, during the training process, the AI links the concept of a cat face to a symmetrical allocation of whiskers, it will stick to that principle and either not recognise or automatically correct a cat face with different numbers of whiskers on each side. While the ability to generalise or recognise the “essential” is helpful in many ways, there might be a very basic tension between it and concepts like “creativity” and “diversity”. The latter also needs to be taken into consideration when automated systems perform administrative tasks.
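As a toy illustration of what such a learned “necessary condition” can look like, the following Python sketch is entirely hypothetical and unrelated to the actual model Douglas Eck described; it only shows how a system that has internalised “cat faces have symmetrical whiskers” will silently normalise inputs that deviate from that rule.

```python
# Hypothetical sketch: a model that has learned "cat faces have an equal
# number of whiskers on each side" as a necessary condition of the concept.

from dataclasses import dataclass

@dataclass
class CatFace:
    whiskers_left: int
    whiskers_right: int

def reconstruct(face: CatFace) -> CatFace:
    """Return the model's idea of the face: asymmetry is 'corrected' away."""
    whiskers = round((face.whiskers_left + face.whiskers_right) / 2)
    return CatFace(whiskers_left=whiskers, whiskers_right=whiskers)

# An unusual but perfectly real cat face...
observed = CatFace(whiskers_left=5, whiskers_right=2)
# ...comes out of the model looking like the learned generalisation instead.
print(reconstruct(observed))  # CatFace(whiskers_left=4, whiskers_right=4)
```

The real-world analogue is less playful: whatever an automated system has learned to treat as “essential” quietly overrides the diversity of the cases it is applied to.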

Another factor shaping the way we interact with automated systems is the principle that AI often has to rely on probabilities. This is in fact true for all participants in the communication process. A study presented by Ulrike Klinger from the Institute of Mass Communication and Media Research at the University of Zurich, which gathers empirical evidence on the role and impact of social bots in the 2017 German federal election, examines bot activities on Twitter around the time of the election. When a “Botometer” is used to predict the probability of an account being a bot, the result will always be a score based on the software’s analysis of the account’s behaviour. Any given actor might be human or bot to a certain extent. We have to consider that, when we are active on digital platforms, we do not know whether we are interacting with a bot or a human being and, if it is a human being, whether it is male or female or has any other features; we might only know the likelihood of it being one or the other. This works both ways: within online communication spaces, we also have to be aware that others, be they humans or artificial agents, are only interacting with our “probabilistic alter ego”.
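The following Python sketch illustrates the general idea of such a score. The behavioural features and weights are invented and bear no relation to the actual Botometer model, which is far more complex; the point is simply that the output is always a probability, never a yes/no verdict about the account behind the handle.

```python
# Hypothetical sketch of a bot-likelihood score. The features and weights are
# invented for illustration; the real Botometer uses many more signals and a
# trained classifier rather than a hand-picked linear combination.

import math

def bot_score(tweets_per_day: float, follower_ratio: float,
              account_age_days: float) -> float:
    """Map a few behavioural signals to a score between 0 and 1."""
    # Invented linear combination of signals, squashed by a logistic function.
    z = 0.02 * tweets_per_day - 1.5 * follower_ratio - 0.001 * account_age_days
    return 1 / (1 + math.exp(-z))

# The same account is never simply "a bot" or "a human", only more or less bot-like.
print(f"{bot_score(tweets_per_day=120, follower_ratio=0.1, account_age_days=30):.2f}")   # ~0.90
print(f"{bot_score(tweets_per_day=4, follower_ratio=1.2, account_age_days=2000):.2f}")   # ~0.02
```

Whatever the concrete model, the decisive point for the debate is that downstream actors often treat such probabilistic scores as if they were categorical facts.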

Furthermore, the importance of how we talk about algorithmic decision making and “AI” is underlined again and again in the debate. The various narratives we use might have substantial implications for a wide range of issues, for instance for how we design decision-making procedures. We need to maintain a self-reflective discourse when developing guidelines and regulatory instruments or when forming new institutions for the governance of AI. This discussion might also revolve around the question of whether we are actually talking about the specifics of AI, about algorithmic decision making, or even about automation in general.


Wolfgang Schulz is Research Director: Internet and Media Regulation at HIIG and a member of the Hans-Bredow Institute’s directorate. Furthermore, he is chairman of the committee of experts ‘Communication and Information’ of the German UNESCO-Commission. Anne-Kristin Polster is a junior researcher at the Hans-Bredow-Institut.


The article above is part of a series on algorithmic decisions and human rights. If you are interested in submitting an article yourself, send us an email with your suggestions.


This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.



