The cultural factor in the age of AI
Artificial intelligence is on everyone's lips – and it evokes great hopes as well as dystopian horror scenarios. But how can the enthusiasm for this technology be explained? Underlying the technological debate about AI seems to be a deep cultural layer in which human longings and fears lie hidden. In this article, Theresa Züger takes a closer look at the cultural factor of AI and shows that the debate about AI can also teach us something about ourselves.
Artificial intelligence in culture
Artificial intelligence is a cultural reference more than it is a technological one. This does not question the existence of trained machines labelled as artificial intelligence, or the transformative impact these developments will have on current societies. Artificial intelligence as a cultural myth refers to the narrative of machines overtaking humans and leading the human world to a higher form of existence. The term artificial intelligence carries a subconscious collective meaning – a myth in the extended definition of Roland Barthes (Mythologies, 1957).
The myth of AI
In Barthes’ sense, a myth is more than an ancient story known to many. By his definition, potentially any semiotic process can gain a subconscious collective meaning. Artificial intelligence as a myth stands for a human narrative that is both deeply feared and deeply longed for.
In an increasingly secular society, where religious belief in an afterlife has become rare, powerful personalities like Ray Kurzweil (Director of Engineering at Google), speaking with the authority of a scientist, represent the belief that a singularity will inevitably emerge from AI and transform human life into a higher mode of existence. In this fantasy of redemption, the human gives the world a superhuman machine, becoming the eternal creator of a superior being.
On the side of human fears, others, like Oxford professor Nick Bostrom, predict that AI will grow out of control. Bostrom considers an intelligence explosion of AI very likely. In this scenario, humanity faces a machine dictatorship.
As new as Nick Bostrom's and Ray Kurzweil's visions of a non-human entity destroying or saving human life may seem, they represent a very old human fear and longing in a new (robotic) outfit. In this fantasy, AI becomes a placeholder for a human reflection on our own making – it becomes what has, for a long time, been called a demon.
Today’s demons wear wire
In his book In the Dust of this Planet (2010), Eugene Thacker introduces his understanding of demons. Humans of nearly all cultures have known demons as non-human, supernatural creatures. In many myths, demons play the role of the antagonist to human life and well-being – as seducers and dark powers. The demon seems to fulfil an important cultural role: personifying human fears as well as hopes for divine intervention. In this sense, our projections onto AI can be seen as the demons of our times.
In his book, Thacker explains: “The demon functions as a metaphor for the human – both in the sense of the human’s ability to comprehend itself, as well as the relations between one human being and another. The demon is not really a supernatural creature, but an anthropological motif through which we human beings project, externalize, and represent the darker side of the human to ourselves” (p. 26).
The outdatedness of humankind
To better understand our subconscious fears of artificial intelligence, Günther Anders’ idea of Promethean shame is helpful. Anders used the term to describe the human discomfort of realising our own limitations in comparison with the machines we have created.
Science has led to several disappointments for humankind (as Freud already described):
- the cosmological disappointment, which occurred with Copernicus and the realisation that the earth is not the centre of the universe,
- the biological, with Darwin, when humankind had to recognise that it was not simply made by God but a part of evolution,
- the psychological, which Freud saw in his method of psychoanalysis and the discovery of the subconscious,
- the technological, which Anders added with his idea of Promethean shame: the disappointment of humankind realising its inferiority in comparison with its own creations.
This feeling confronts us with a reliance on, and even dependence on, technological objects that usually escapes our consciousness – and in its extreme form it even makes us wish we could function like a machine. Behind this shame lies a frustration with humanness as a state of being that can never be fully understood or controlled, that is inevitably painful at times, that is powerless against many twists of fate and that eventually ends in death.
In his book The Outdatedness of Humankind (1956), Anders argues that a gap is growing between the human ability to develop technologies that both create and destroy our world, and our capacity to comprehend this power and imagine its consequences. Anders wrote this book under the peril of the nuclear threat. The prospect of humanity destroying itself with nuclear weapons is no less real today, yet we now also face a different, equally urgent threat.
Today we need to face the fact that we are a species that destroys (or, hopefully, only comes close to destroying) the planetary basis of its own existence. Perhaps this can be seen as a fifth disappointment for humankind – at least in a Western worldview in which both religion and philosophy told us that homo sapiens is the superior being among all beings on earth. If any human ego remained after the disappointments described above, the realisation that our own choices and inventions are most likely killing our planet, and potentially most of us, must crush whatever pride is left. Besides living in the age of AI, we are also living with the prospect of an age of very real existential crisis for humankind and nature.
Why modern myths matter
Why does it matter that myths are an essential part of the discourse on AI? In his theory of myth, Roland Barthes argued that myth is de-politicised speech. De-politicised here means that all human relations, in their structure and their power to make the world, are stripped from the narrative. By becoming a myth, he argues, things lose the memory of how they were made.
And that is what happens when we mystify AI: we forget how and for what purpose it is created, and we lose track of the invisible power relations that machine dependence and AI will extend. The myths around AI, as culturally interesting and important as they are, cloud our view of the actual dangers and decisions ahead.
If we strip away the mythical figure of the demon, we can see the myth of AI as a human reflection on our own dark impulses. What we are looking at is a realistic human fear: the fear of creating entities and structures that implement our own failures, weaknesses and wrongdoings.
More than anything, AI development is a race for power, since AI will be used in critical infrastructures of the economy and of governance. We need to look at the men (and few women) who hold this power and ask ourselves whether we trust them to make choices that benefit all and do not exclude vulnerable groups from their equation. Our rightful fear should concentrate on the human weaknesses that already show in AI today, such as biased data sets and the unreflective use of AI in surveillance and the military.
As with any powerful technology, our question should be how the power to govern AI is distributed, who will benefit, and who will be overlooked and de-humanised by the loving grace of the machines we create.
A slightly different version of this article was first published in Goethe-Institut Australia’s magazine “Kultur”.