Busted: AI will fix it
There is a strong belief on the internet that AI will solve basically all of future society’s problems, if we just give it enough time. Christian Katzenbach took a close look at this myth to determine whether there is truth to it.
In time for this year’s Internet Governance Forum (IGF), Matthias C. Kettemann (HIIG) and Stephan Dreyer (Leibniz-Institut für Medienforschung | Hans-Bredow-Institut (HBI)) will be publishing a volume called “Busted! The Truth About the 50 Most Common Internet Myths”. As an exclusive sneak peek, we are publishing an assortment of these myths here on our blog – some of them busted by HIIG’s own researchers and associates.
The entire volume will be accessible soon at internetmyths.eu.
Myth
“Artificial intelligence” (AI) is the key technological development of our time. AI will not only change how we live, communicate, work and travel tomorrow; AI-based solutions will also fix the fundamental problems of our societies, from the detection of illnesses and misinformation to online hate speech and urban mobility.
Busted
The current hype about AI is strongly connected to the myth that AI will by itself solve key problems of our societies. In the 2018 US congressional hearings, Facebook’s CEO Mark Zuckerberg used phrases such as “AI will fix this” and “in the future we will have technology that addresses these issues” more than a dozen times when pressed on issues of misinformation, hate speech and privacy. In other sectors, businesses and technologists promise that AI-powered technologies and products will detect cancer in its early stages, identify tax fraud patterns, guide vehicles efficiently through urban areas and identify antisocial and criminal behaviour in public spaces.
The narrative that technology will fix social problems is a recurrent theme in the history of technology and society. The “technological fix” (Rudi Volti) seeks functional solutions for problems that are social and political in nature: autonomous vehicles might drive more safely through the city (by some criteria), but will not provide urban mobility to broad segments of the population. Filtering software might get better at identifying misinformation and hate speech, but will not eradicate them, and will always be unable to strike the perfect (and widely accepted) balance between freedom of expression and harmful speech. These problems are fundamentally social in nature, so there is no single right answer that can be technologically implemented.
Talk about ‘AI fixing things’ is also misleading because it obfuscates the human labour and the social relations that the seemingly autonomously operating technologies are built upon. AI-based products don’t just appear; they are man-made. Typical AI-powered devices and services such as autonomous vehicles and image-detection solutions are products of companies with commercial interests and normative assumptions – and these are inscribed into the products themselves. What is more, AI products are the results of immense amounts of human labour, ranging from developing complex mathematical models to mundane activities such as training image recognition AIs picture by picture.
Consequently, even if AI-powered services and devices function perfectly according to preset criteria in the future, the phrase “AI will fix this” will still be utterly misleading. Many of these problems are fundamentally social in nature and do not yield to a functional solution. AI technology is not an autonomous agent but is constructed by humans and society.
Truth
While AI cannot fix everything, humans using AI might fix some things. Rapid developments in AI technologies provide opportunities for many stakeholders to be more responsive to societal challenges. These technologies will contribute to innovations across many societal sectors and change the way we live, communicate, work and travel – not automatically for the public good, though.
Sources
Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (New York: PublicAffairs, 2013).
Julia Powles and Helen Nissenbaum, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence, Medium, 8 December 2018, https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.