29 October 2019

Busted: AI will fix it

There is a strong belief on the internet that AI will solve basically all of future society’s problems, if we just give it enough time. Christian Katzenbach took a close look at this myth to determine whether there is truth to it.


In time for this year’s Internet Governance Forum (IGF), Matthias C. Kettemann (HIIG) and Stephan Dreyer (Leibniz-Institut für Medienforschung | Hans-Bredow-Institut (HBI)) will be publishing a volume called “Busted! The Truth About the 50 Most Common Internet Myths”. As an exclusive sneak peek, we are publishing an assortment of these myths here on our blog – some of which have been busted by HIIG’s own researchers and associates.

The entire volume will be accessible soon at internetmyths.eu.


Myth

“Artificial intelligence” (AI) is the key technological development of our time. AI will not only change how we live, communicate, work and travel tomorrow; AI-based solutions will also fix the fundamental problems of our societies, from the detection of illnesses and misinformation to online hate speech and urban mobility.

Busted

The current hype about AI is strongly connected to the myth that AI will by itself solve key problems of our societies. In the 2018 US congressional hearings, Facebook’s CEO Mark Zuckerberg used phrases such as “AI will fix this” and “in the future we will have technology that addresses these issues” more than a dozen times when pressed on issues of misinformation, hate speech and privacy. In other sectors, businesses and technologists promise that AI-powered technologies and products will detect cancer at early stages, identify tax fraud patterns, guide vehicles efficiently through urban areas and identify antisocial and criminal behaviour in public spaces.

The narrative that technology will fix social problems is a recurrent theme in the history of technology and society. The “technological fix” (Rudi Volti) seeks functional solutions for problems that are social and political in nature: autonomous vehicles might drive more safely through the city (by some criteria), but will not provide urban mobility to broad segments of the population. Filtering software might get better at identifying misinformation and hate speech, but it will not eradicate them, and it will never strike a perfect (and widely accepted) balance between freedom of expression and harmful speech. These problems are fundamentally social in nature, so there is no single right answer that could simply be implemented in technology.

Talk about ‘AI fixing things’ is also misleading because it obfuscates the human labour and the social relations that seemingly autonomous technologies build upon. AI-based products don’t just appear; they are made by people. Typical AI-powered devices and services such as autonomous vehicles and image-detection solutions are products of companies with commercial interests and normative assumptions – and these are inscribed into the products themselves. What is more, AI products are the result of immense amounts of human labour, ranging from developing complex mathematical models to mundane activities such as training image recognition AIs picture by picture.

Consequently, even if AI-powered services and devices one day function perfectly according to preset criteria, the phrase “AI will fix this” will still be utterly misleading. Many of these problems are fundamentally social in nature and do not yield to a functional solution. AI technology is not an autonomous agent but is constructed by humans and society.

Truth

While AI cannot fix everything, humans using AI might fix some things. Rapid developments in AI technologies provide opportunities for many stakeholders to be more responsive to societal challenges. These technologies will contribute to innovations across many societal sectors and change the way we live, communicate, work and travel – not automatically for the public good, though.


Sources

Evgeny Morozov, To save everything, click here: The folly of technological solutionism (New York: PublicAffairs, 2013).
Julia Powles and Helen Nissenbaum, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence, Medium, 8 December 2018, https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Christian Katzenbach, Prof. Dr.

Associated researcher: The evolving digital society
