A large intersection with heavy traffic.
26 October 2021 | doi: 10.5281/zenodo.5596584

New toolkit collects easy tips for intersectional AI

By drawing on marginalised practices to fundamentally reshape how AI technologies are developed and used, intersectional approaches to AI (IAI) are key to making AI more inclusive. Our new toolkit provides an introductory guide to IAI and argues that anyone should be able to understand what AI is and what AI ought to be.

AI bias reinforces discrimination

AI systems have made the way some of us work, move and socialise much easier. However, their promise to enhance user experiences and open up opportunities has not held true equally for everyone. On the contrary: for many, AI systems have further widened the gaps of inequality and worsened discrimination, instead of tackling them at their roots. Even so-called intelligent systems merely reproduce the existing analogue world, including its underlying power structures. This means AI applications, like any technology, are never neutral. Allowing only a small but powerful fraction of society to design and implement AI systems means power imbalances remain, or are even amplified by computation. Unfair internet infrastructures will continue to be passed off as impartial ones, and with no one to say otherwise, we may never be able to imagine them any other way.
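To make this amplification concrete, here is a minimal, hypothetical Python sketch. Everything in it is invented for illustration: a deliberately naive system that simply learns from skewed historical decisions hardens the old disparity into automated policy.

```python
import random

random.seed(0)

# Invented "historical" approval rates: group A was favoured in the past.
HISTORICAL_APPROVAL_RATE = {"A": 0.60, "B": 0.20}

def make_history(n_per_group=1000):
    """Generate synthetic past decisions that mirror the old disparity."""
    records = []
    for group, rate in HISTORICAL_APPROVAL_RATE.items():
        for _ in range(n_per_group):
            records.append({"group": group, "approved": random.random() < rate})
    return records

def train(records):
    """A deliberately naive 'model': approve an applicant whenever the
    historical approval rate for their group exceeds 50%."""
    rates = {}
    for group in {r["group"] for r in records}:
        rows = [r for r in records if r["group"] == group]
        rates[group] = sum(r["approved"] for r in rows) / len(rows)
    return lambda applicant: rates[applicant["group"]] > 0.5

model = train(make_history())
for group in ("A", "B"):
    print(group, "approved:", model({"group": group}))
# Prints: A approved: True, then B approved: False.
# The model did not invent the bias; it learned and automated it.
```

Real systems are vastly more complex, but the failure mode is the same: a model optimised to agree with biased historical data reproduces, and can entrench, the power structure behind that data.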

Why we need inclusive AI

Communities that are already marginalised are often left out of conversations about what kinds of AI systems should and should not exist, and how they should be created and used, despite the fact that these groups are disproportionately affected by the harmful impacts of AI systems. Scholars like Joy Buolamwini and 2021 MacArthur Fellow Safiya Noble cite the dangers of algorithmic injustice across insidious but widespread examples, from shadow banning to predictive policing.

With the increasing automation of public and private infrastructures, future AI systems should be made by diverse, interdisciplinary and intersectional communities rather than by a select few. These communities need support in addressing the adverse effects they face; at the same time, system designers can improve AI for everyone by listening to the knowledge gained from many perspectives. Diverse groups, for example Black feminists and queer and disability theorists, have long been considering aspects of the same questions that problematic AI now exacerbates. We can and must rely on a broader variety of perspectives if we are to shift the course of AI’s future toward more inclusive systems.

Building on its research on public interest AI, HIIG’s AI & Society Lab puts a strong focus on questions in this area: How can AI and other technologies be made more approachable for everyone, so that people better understand AI systems and how those systems affect them? What do particularly marginalised communities wish to change about AI, and how can we support them in doing so?

How Intersectional AI can help

The Intersectional AI Toolkit helps answer these questions by connecting communities in order to create introductory guides to AI from multiple, approachable perspectives. Developed by Sarah Ciston during a virtual fellowship at the AI & Society Lab, the Intersectional AI Toolkit argues that anyone can and should be able to understand what AI is and what AI ought to be. 

Intersectionality describes how power operates structurally, and how multiple forms of discrimination have compounding, interdependent effects. The American legal scholar Kimberlé Crenshaw introduced the term, using the image of an intersection where paths of power cross to illustrate the interwoven nature of social inequalities (Crenshaw, 1989).
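The compounding effect is easiest to see in a disaggregated audit of the kind scholars like Joy Buolamwini have popularised: error rates that look moderate along any single axis can concentrate sharply at an intersection. Below is a minimal Python sketch; the subgroup labels and all numbers are invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical audit results: (errors, total predictions) per subgroup.
AUDIT = {
    ("male",   "lighter"): (5, 100),
    ("male",   "darker"):  (12, 100),
    ("female", "lighter"): (8, 100),
    ("female", "darker"):  (35, 100),
}

# Aggregated along one axis at a time, the gaps look moderate...
for axis, index in (("gender", 0), ("skin type", 1)):
    groups = defaultdict(lambda: [0, 0])
    for key, (errors, total) in AUDIT.items():
        groups[key[index]][0] += errors
        groups[key[index]][1] += total
    for value, (errors, total) in sorted(groups.items()):
        print(f"{axis}={value}: {errors / total:.1%} errors")

# ...but disaggregating intersectionally shows where failure concentrates:
for (gender, skin), (errors, total) in AUDIT.items():
    print(f"{gender}/{skin}: {errors / total:.1%} errors")
```

No single-axis view surfaces the worst-served subgroup (35% errors in this toy data); only the combined breakdown does. That is Crenshaw’s insight translated into the language of a model audit.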

As imagined by this toolkit, Intersectional AI brings decades of work on intersectional ideas, ethics, and tactics to bear on the inequalities AI raises. By drawing on established ideas and practices, and understanding how to combine them, intersectionality can help reshape AI in fundamental ways. Through its layered, structural approach, Intersectional AI connects the dots between concepts, as seen from different disciplines and operating across systems, so that individuals and researchers can help address gaps that others could not see.

A toolkit that helps to think about intersectionality and code inclusive AI

The Intersectional AI Toolkit is a collection of small magazines (or zines) that offer practical, accessible guides to both AI and intersectionality. They are written for engineers, artists, activists, academics, makers and anyone who wants to understand the automated systems that affect them. By sharing key concepts, tactics, and resources, they serve as jumping-off points to inspire readers’ own further research and conversation across disciplines and communities, asking questions like “Is decolonizing AI possible?” or “What does it mean to learn to code?”

The toolkit is available as a digital resource that continues to grow with community contributions, as well as printable zines that can be folded, shared, and discussed offline. With issues like the two-sided glossary “IAI A-to-Z”, the strategy flashcards “Tactics for Intersectional AI”, and the guide for sceptics “Help Me Understand Intersectionality”, the zine collection focuses on using plain language and fostering tangible impacts.

This toolkit is not the first or only resource on intersectionality or AI. Instead, it gathers some of the amazing people, ideas, and forces working to re-examine the foundational assumptions built into these technologies, such as Catherine D’Ignazio and Lauren Klein’s work on “Data Feminism” or Ruha Benjamin’s “Race After Technology”. It also looks at which people are (and are not) involved when AI is developed, and at which processes and safeguards do or should exist. In this way, it helps us understand power and aims to link AI development back to democratic processes.

Why is the future of AI intersectional?

Current approaches to AI fail to address two major problems. First, those who create AI systems, from code to policy to infrastructure, fail to listen to the needs or wisdom of the marginalised communities most injured by those systems. Second, the current language and tools of AI put up intimidating barriers that prevent outsiders from understanding, building, or changing these systems. If we want improved, inclusive AI systems, we must consider a broader range of people’s needs just as much as a broader range of people’s knowledge. Otherwise we face a future that perpetuates the same problems under the guise of fairness and automation.

The Intersectional AI Toolkit tries to intervene by facilitating much-needed exchange between different groups around these issues. The AI & Society Lab hosted the launch of the Toolkit as an Edit-a-thon workshop in order to gain multiple valuable perspectives through diverse public participation. Over the coming months, more digital and in-person zine-making workshops are planned to keep building the Toolkit while advocating for intersectional approaches to AI in various sectors, such as AI governance.

All AI systems are socio-technical; they interconnect humans and machines. Intersectionality reminds us how power imbalances affect those connections. By addressing the gap between those who want to understand and shape AI, and those who already make and regulate it, Intersectional AI can help us find the shared language we need to reimagine AI together. 

References

Crenshaw, K. (1989). Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum, 1989(1), 139–167.

tl;dr

The Intersectional AI Toolkit will remain accessible for contributions and comments at intersectionalai.com.
The Intersectional AI Toolkit Edit-a-thon took place on 1 September 2021 and was hosted by HIIG’s AI & Society Lab in collaboration with our partners MOTIF, netzforma* e.V., SUPERRR and the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI).

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Sarah Ciston

Former Associated Researcher: AI & Society Lab

Daniela Dicks

Former Co-Lead & Spokesperson: AI & Society Lab

