169 HD – AI is neutral – 1
10 May 2021| doi: 10.5281/zenodo.4745653

Myth: AI will end discrimination

Because AI is often presented as an objective, state-of-the-art technology, there are hopes that it may overcome human weaknesses. Some people believe that AI might gain privileged access to knowledge, free of human biases and errors, and thus end discrimination by making consistently fair and objective decisions.
We approach the de-mystification of this claim by looking at concrete examples of how AI (re)produces inequalities and by connecting them to several aspects that illustrate its socio-technical entanglements. Drawing on a range of critical scholars, we argue that this simplifying myth can even be dangerous, and we point out what to do about it.

Myth

AI will end discrimination (or is at least less discriminatory than fallible and unfair human beings).

As part of society, AI is deeply rooted in social structures and, as such, not separable from structures of discrimination. Because of this socio-technical embeddedness, AI cannot make discrimination disappear by itself.
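To see why training a model on existing data is not enough to escape existing inequalities, consider a minimal, purely illustrative sketch (hypothetical data, NumPy only, not taken from the talk or the readings): a simple decision rule learned from historically biased hiring records carries that bias forward when applied to new, equally qualified applicants.

```python
# Minimal sketch (hypothetical data): a rule learned from historically
# biased hiring decisions reproduces the disparity it was trained on.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical qualification distributions.
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)      # same distribution for both groups

# Historical decisions: group B needed a higher skill level to be hired.
hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

# "Learn" a per-group hiring threshold from the historical records
# (a stand-in for any statistical model fitted to these labels).
thresholds = {g: skill[(group == g) & (hired == 1)].min() for g in (0, 1)}

# Apply the learned rule to new, equally qualified applicants.
new_skill = rng.normal(0.0, 1.0, n)
for g in (0, 1):
    rate = (new_skill > thresholds[g]).mean()
    print(f"group {g}: predicted hiring rate {rate:.2%}")

# Despite identical qualifications, the historical disadvantage of
# group B is reproduced in the model's predictions.
```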

Watch the talk

Material

Presentation slides
CORE READINGS

Benjamin, R. (2019a): Captivating Technology. Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life. Durham: Duke University Press.

Benjamin, R. (2019b): Race after technology: abolitionist tools for the new Jim code. Cambridge, UK: Polity.

Criado-Perez, C. (2020): Unsichtbare Frauen. Wie eine von Daten beherrschte Welt die Hälfte der Bevölkerung ignoriert [German edition of Invisible Women]. München: btb Verlag.

D’Ignazio, C.; Klein, L. F. (2020): Data Feminism. Strong Ideas series. Cambridge, MA / London: The MIT Press.

Buolamwini, J.; Gebru, T. (2018): Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In: Proceedings of Machine Learning Research 81. Paper presented at the Conference on Fairness, Accountability, and Transparency, 1–15.

ADDITIONAL READINGS

Eubanks, V. (2017): Automating inequality. How high-tech tools profile, police, and punish the poor. First Edition. New York, NY: St. Martin’s Press.

O’Neil, C. (2016): Weapons of math destruction. How big data increases inequality and threatens democracy. First edition. New York: Crown.

Zuboff, S. (2020): The Age of Surveillance Capitalism. The Fight for a Human Future at the New Frontier of Power. First Trade Paperback Edition. New York: PublicAffairs.

Cave, S.; Dihal, K. (2020): The Whiteness of AI. In: Philosophy & Technology 33(4), 685–703.
UNICORN IN THE FIELD

Epicenter.works
AlgorithmWatch
netzforma* e.V.

About the authors

Miriam Fahimi, Digital Age Research Center (D!ARC), University of Klagenfurt

Miriam, MA BSc, is a Marie Skłodowska-Curie Fellow within the ITN-ETN Marie Curie Training Network "NoBIAS – Artificial Intelligence without Bias", funded by the EU through Horizon 2020, at the Digital Age Research Center (D!ARC), University of Klagenfurt. She is also a PhD candidate in Science and Technology Studies at the University of Klagenfurt, supervised by Katharina Kinder-Kurlanda. Her research interests include algorithmic fairness, philosophy of science, science and technology studies, and feminist theory.

@feminasmus

Phillip Lücking, Gender/Diversity in Informatics Systems (GeDIS), University of Kassel

Phillip is a research associate and PhD candidate at the University of Kassel. He graduated from Bielefeld University in Intelligent Systems (MSc). His research interests encompass machine learning and robotics in relation to their societal impacts, as well as questions of how these technologies can be utilized for social good.


Why, AI?

This post is part of our project “Why, AI?”, a learning space that helps you find out more about the myths and truths surrounding automation, algorithms, society and ourselves. It is continuously being filled with new contributions.

Explore all myths


This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

