Algorithmic decisions and human rights
16 January 2018 | doi: 10.5281/zenodo.1148297

Embedded in intelligent technologies, algorithms make decisions for us every day. What challenges does this pose for the protection afforded by human rights and for the regulation of artificial intelligence? Wolfgang Schulz and Anne-Kristin Polster document some observations from the current discussion on algorithmic decisions within the "Global Network of Internet and Society Research Centers".

There is a societal debate that never seems to go away, shaped by the notion that artificial intelligence could threaten individuals’ rights. From a legal science perspective, there is no doubt that automated tools and innovations in fields like machine learning, IoT and robotics – often grouped under the umbrella term AI – are already deeply involved in areas of our everyday lives that are protected by human rights under various frameworks.

How these frameworks play out in the face of the specific challenges posed by technical innovations in the AI field was discussed at a recent gathering of the Global Network of Internet and Society Research Centers on Algorithmic Decision Making and its Human Rights Implications, hosted by the HIIG and the Hans-Bredow-Institute for Media Research on 12 and 13 October 2017. The workshop’s findings will feed into the Global Symposium on Artificial Intelligence & Inclusion, held from 8 to 10 November 2017 in Rio de Janeiro. The following six cross-cutting observations, which are partly intertwined, seek to capture some cornerstones of the ongoing discussion within the network.

We find that many of the risks discussed in the context of algorithmic decision making are not new: automating decision-making processes in morally sensitive situations often simply brings greater visibility and public attention to pre-existing problems.

On the one hand, there are considerable implicit components to human decision-making processes that must be made explicit in the course of automation. Questions about the moral and ethical foundations of decision making demand new attention once we have to decide how to decide. On the other hand, technical tools mirror what they are shown when they make decisions. This is especially apparent in problems of bias and machine discrimination, which may result from the structure of the training data reflecting existing inequalities.
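
To make this mechanism tangible, here is a minimal sketch in Python using entirely synthetic data (all numbers and feature names are invented): a standard classifier is trained on "historical" decisions that systematically disadvantaged one group. The model never sees the group label itself, only a correlated proxy feature, and still reproduces the disparity it was shown.

```python
# Minimal sketch with synthetic data: a classifier trained on biased
# historical decisions reproduces the bias it was shown.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)               # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)                 # the attribute we actually care about

# "Historical" labels: past decision makers systematically marked group B down.
past_decision = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# The model never sees "group" directly, only a proxy that correlates with it.
proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, past_decision)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {'AB'[g]}: predicted positive rate = {pred[group == g].mean():.2f}")
# The model favours group A, mirroring the bias in its training data.
```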

Those expressing reasonable concerns are not necessarily questioning automation itself, nor do they blame algorithms for making harmful decisions. Still, they recognise the risk of locking these decisions into algorithmic black boxes. As AI tools continue to permeate processes of communication and to govern public and private spheres, serious concerns arise from the resulting power shifts and the broad possibilities for misuse of that power.

Linked to the notion of power structures playing out through the design and use of technology, many of the ongoing research projects show that context is key in processes of automated decision making. This context might be a particular social, economic or political frame. Many of the societal effects we are interested in are, for example, the result of social practices that have emerged around the use of the technology. Given the importance of technical expert systems, even for the construction of reality, the economic context in which people decide whether to use a particular algorithm, and whether they have an economic incentive to deviate from a suggestion given by an expert system, also urgently requires further research.

Within those frames, AI systems can again be looked at from different angles: the same system may appear as a mere tool, as a kind of hybrid construction consisting of a human operator and the machine, or as an actor in its own right. While it might seem as if we are not yet encountering many completely autonomous decision-making systems of the kind found in algorithmic trading, the relevance of possible de facto autonomous algorithmic decision making should not be underestimated. The empirical findings Jedrzej Niklas from the London School of Economics presented on the use of automated profiling mechanisms for the allocation of unemployment benefits in Poland are one such example: on the basis of a short questionnaire, the software sorts applicants into three groups and thereby suggests one of several forms of support package. Although the output serves as a mere suggestion or first indicator, the research shows that the advice is followed in the vast majority of cases. Arguably, in such cases it is little more than a formality to say that there is “a human in the loop”. We can easily imagine similar outcomes when looking at the role of technical assistants in medical diagnostics or in the legal system.
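
To picture how such a system can work, the sketch below is purely illustrative: the questions, weights, thresholds and profile names are invented and do not describe the actual Polish software. It only shows how a short questionnaire can be collapsed into a score and a coarse three-way profile that a caseworker then tends to accept.

```python
# Purely illustrative questionnaire profiling; all weights, cut-offs and
# profile names are invented and do not reflect the real Polish system.
def profile_applicant(answers: dict) -> str:
    """Map short questionnaire answers to one of three support profiles."""
    score = 0
    score += {"none": 0, "vocational": 2, "higher": 4}[answers["education"]]
    score -= min(answers["months_unemployed"], 24) // 6   # long spells lower the score
    score += 3 if answers["recent_work_experience"] else 0
    # Hard cut-offs turn a continuous situation into a coarse three-way decision.
    if score >= 6:
        return "profile I: job placement support"
    if score >= 2:
        return "profile II: training and activation measures"
    return "profile III: intensive social support"


suggestion = profile_applicant(
    {"education": "vocational", "months_unemployed": 14, "recent_work_experience": False}
)
print(suggestion)  # in practice, a caseworker rarely overrides a suggestion like this
```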

To further assess the benefits and risks of autonomous artificial agents, we have to keep some characteristics of AI systems in mind. For instance, AI systems rely on generalisation. To behave accurately, they need to grasp the “concept” of a given entity in reality, let’s say a cat. As shown by Douglas Eck, a research scientist at Google, this can be explored in a playful way, for instance by looking at how AI can be trained to draw cat faces. In the process, the program recognises certain features as necessary conditions. If, during training, the AI links the concept of a cat face to a symmetrical allocation of whiskers, it will stick to that principle and either not recognise or automatically correct a cat face with a different number of whiskers on each side. While the ability to generalise, to recognise the “essential”, is helpful in many ways, there might be a very basic tension between it and concepts like “creativity” and “diversity”. This tension also needs to be taken into consideration when automated systems perform administrative tasks.
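
A stylised toy example may help to illustrate how an incidental regularity in the training data can be locked in as a supposedly necessary feature of a concept. The code below is in no way Eck’s model; it simply hard-codes whisker symmetry into a learned “cat face” concept and then either fails to recognise or silently “corrects” an input that deviates from it.

```python
# Stylised toy example (not Eck's actual model): a concept learned as a set
# of necessary conditions, one of which is merely incidental.
from dataclasses import dataclass

@dataclass
class CatFace:
    ears: int
    whiskers_left: int
    whiskers_right: int

# Suppose the training data happened to contain only symmetric whisker counts,
# so the learned "cat face" concept treats symmetry as a necessary condition.
def matches_learned_concept(face: CatFace) -> bool:
    return face.ears == 2 and face.whiskers_left == face.whiskers_right

def normalise(face: CatFace) -> CatFace:
    """'Autocorrect' an input towards the learned concept, erasing the deviation."""
    if face.whiskers_left != face.whiskers_right:
        w = max(face.whiskers_left, face.whiskers_right)
        return CatFace(face.ears, w, w)
    return face

odd_cat = CatFace(ears=2, whiskers_left=3, whiskers_right=2)
print(matches_learned_concept(odd_cat))  # False: a real but unusual cat is not recognised
print(normalise(odd_cat))                # the system silently makes the face symmetric
```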

Another factor shaping the way we interact with automated systems is that AI often has to rely on probabilities. This is in fact true for all participants in the communication process. A study presented by Ulrike Klinger from the Institute of Mass Communication and Media Research at the University of Zurich gathers empirical evidence on the role and impact of social bots in the 2017 German federal election by examining bot activity on Twitter around the time of the vote. When a tool such as the “Botometer” is used to predict whether an account is a bot, the result is always a score based on the software’s analysis of the account’s behaviour. Any given actor might, in this sense, be human or bot only to a certain degree of probability. We have to accept that, when we are active on digital platforms, we do not know whether we are interacting with a bot or a human being and, if it is a human being, whether that person is male or female or has any other particular features; we might only know the likelihood of it being one or the other. This works both ways: within online communication spaces, we also have to be aware that others, be they humans or artificial agents, are only ever interacting with our “probabilistic alter ego”.
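
As a toy illustration of what such a score means, the snippet below computes a hand-written logistic score from a few invented account features. It is not Botometer’s actual model; the point is only that the output is a probability between 0 and 1, never a definitive verdict.

```python
# Toy illustration only: a hand-written logistic score over invented account
# features. Not Botometer's model; it shows that the output is a probability.
import math

def bot_likelihood(tweets_per_day: float, followers: int, following: int,
                   account_age_days: int) -> float:
    """Return a score in (0, 1): the estimated probability that an account is a bot."""
    follow_ratio = following / max(followers, 1)
    z = (0.15 * tweets_per_day        # very high activity pushes the score up
         + 0.8 * follow_ratio         # following many accounts while being followed by few
         - 0.002 * account_age_days   # older accounts look more human
         - 1.5)                       # bias term
    return 1 / (1 + math.exp(-z))

score = bot_likelihood(tweets_per_day=20, followers=150, following=300, account_age_days=200)
print(f"bot score: {score:.2f}")  # roughly 0.94: probably a bot, but never a certainty
```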

Furthermore, the importance of how we talk about algorithmic decision making and “AI” is frequently underlined in the debate. The narratives we use might have substantial implications for a wide range of issues, for instance for how we design decision-making procedures. We need to maintain a self-reflective discourse when developing guidelines and regulatory instruments or when forming new institutions for the governance of AI. Part of this discussion is the question of whether we are actually talking about the specifics of AI, about algorithmic decision making, or about automation in general.


Wolfgang Schulz is Research Director for Internet and Media Regulation at HIIG and a member of the board of directors of the Hans-Bredow-Institut. He chairs the Expert Committee on Communication and Information of the German Commission for UNESCO. Anne-Kristin Polster is a research associate at the Hans-Bredow-Institut.


This article is part of a dossier on algorithmic decisions and human rights. Would you like to publish an article as part of this series? Then send us an email with your topic proposal.


This post reflects the opinion of the authors and neither necessarily nor exclusively the opinion of the institute. For more information on the content of these posts and the associated research projects, please contact info@hiig.de.

Wolfgang Schulz, Prof. Dr.

Research Director

Anne-Kristin Polster

Former research associate at the Hans-Bredow-Institut
