17 May 2017

AI: A metaphor, or will machines become persons in a digital society?

Is artificial intelligence a metaphor or a programme? Can machines be intelligent in the way that humans are? This question has been contested ever since the term was coined. The so-called weak variant of artificial intelligence treats the term as a metaphor; the strong variant takes it literally. The answer to this question promises clues about the future role of intelligent machines in the digital society. This article is part of an ongoing series on the politics of metaphors in the digital society. HIIG researcher Christian Katzenbach and Stefan Larsson (Lund University Internet Institute) edit the contributions to this series.

Dossier: How metaphors shape the digital society

Will computers, robots and machines one day be considered intelligent persons? Will automatic agents imbued with artificial intelligence become members of our society? The concept of personality is fluid: there were times when slaves had no personality rights, and recently a growing movement has argued for conferring legal personality on animals. Hence, our concept of personality might change in the course of the digitisation of society. This might be due to advances in artificial intelligence, but also to the way the term is framed.

Is artificial intelligence a metaphor or a descriptive concept? This question cannot be answered one way or the other, as there is a semantic struggle surrounding the concept of artificial intelligence. As will be shown, some people treat artificial intelligence as a broad metaphor for the ability of machines to solve specific problems. Others take it literally and conceive of artificial intelligence as being of the same kind as human intelligence. Some researchers go as far as to reject the concept completely. To shed more light on the issue, it is worthwhile to go back to the time the term was coined.

Historical Origins

The term artificial intelligence was first used in 1956, when John McCarthy, Claude Shannon and Marvin Minsky organised a six-week summer workshop at Dartmouth College in Hanover, New Hampshire, supported by the Rockefeller Foundation. They introduced their grant application in the following terms:

…The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.

Interestingly, even the organisers of the workshop did not really approve of the term artificial intelligence. John McCarthy stated that “one of the reasons for inventing the term ‘artificial intelligence’ was to escape association with ‘cybernetics’”, as he did not agree with Norbert Wiener. Yet the term caught on and came to be used frequently. Today, artificial intelligence constitutes a subdiscipline of computer science.

The weak AI thesis vs the strong AI thesis

As can be seen from the statement above, there has always been an ambiguity to the term. The statement can be taken to mean that every aspect of human intelligence can be replicated. Yet it can also be read as a conjecture, since the use of the word simulate suggests that artificial intelligence and human intelligence remain different. These different interpretations have been conceptualised as the strong and the weak AI thesis. The strong AI thesis holds that such a simulation in fact replicates the mind and that there is nothing more to the mind than the processes simulated by the computer. The weak AI thesis, by contrast, holds only that machines can act as if they were intelligent. It transfers the concept of intelligence to a context in which it normally would not apply; in the context of weak AI, the term intelligence is therefore used in a metaphorical sense.

One of the active proponents of the concept of weak AI was Joseph Weizenbaum, a Jewish German-American computer scientist who was responsible for some important technical inventions but remained critical of the societal impact of computers. He programmed the famous chatbot ELIZA. Weizenbaum used a few formal rules to keep the conversation going: the chatbot analyses the structure of the user's sentence and either rephrases it as a question or replies with a standard utterance.
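To make the idea of such formal rules concrete, here is a minimal sketch of an ELIZA-style responder in Python. The patterns and stock replies are invented for this illustration and are far simpler than Weizenbaum's original DOCTOR script; pronoun reflection, which ELIZA also performed, is omitted.

```python
import random
import re

# A few illustrative pattern/response rules in the spirit of ELIZA.
# The patterns and replies are made up for this sketch, not Weizenbaum's originals.
RULES = [
    (re.compile(r"\bi need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]

# Standard utterances used when no rule matches, to keep the conversation going.
DEFAULTS = ["Please go on.", "I see.", "How does that make you feel?"]


def respond(utterance: str) -> str:
    """Rephrase the user's statement as a question, or fall back to a stock reply."""
    for pattern, replies in RULES:
        match = pattern.search(utterance)
        if match:
            # Reflect the matched fragment back to the user as a question.
            return random.choice(replies).format(match.group(1).rstrip(".!?"))
    return random.choice(DEFAULTS)


if __name__ == "__main__":
    print(respond("I am worried about intelligent machines."))
    print(respond("My computer keeps asking me questions."))
    print(respond("That is all I wanted to say."))
```

The point of the sketch is how little machinery is needed: a handful of surface patterns, no model of meaning, and yet the conversation keeps going.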

The proponents of the strong artificial intelligence thesis have tried to find ways to replicate processes in the brain, for example by designing neural networks. The strong AI thesis suggests that machines can be intelligent in the same way as human beings. One of its proponents, Klaus Haefner, once had an exchange with Weizenbaum. They used arguments that had already been anticipated by Alan Turing in his seminal text “Computing Machinery and Intelligence” from 1950. Turing famously replaced the question “Can machines think?” with an “imitation game”. In this game, the interrogator has a written conversation with one human being and one machine, both of which are in separate rooms. The task Turing describes is to design a machine that acts such that the interrogator cannot distinguish it from the human being on the basis of its communication. The goal is therefore not to design a system that equals a human being, but one that acts in such a way that a human being cannot tell the difference. Whether this is achieved by replicating the human brain, or in any other way, was not important to Turing.
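As a rough illustration of this setup, the following sketch expresses one round of the imitation game in Python. The interrogator, human_respond and machine_respond callables are hypothetical placeholders supplied by the caller, not any real implementation; the sketch only captures the protocol Turing describes.

```python
import random


def imitation_game(interrogator, human_respond, machine_respond, questions):
    """Run one round of Turing's imitation game.

    The interrogator converses in writing with two hidden parties, labelled
    A and B, and must guess which label hides the machine. All three
    participants are hypothetical callables supplied by the caller.
    """
    # Hide the machine behind one of the two labels at random.
    parties = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        parties = {"A": machine_respond, "B": human_respond}

    # The interrogator only ever sees written answers, never the parties themselves.
    transcript = []
    for question in questions:
        transcript.append({label: parties[label](question) for label in ("A", "B")})

    guess = interrogator(questions, transcript)  # expected to return "A" or "B"
    machine_label = "A" if parties["A"] is machine_respond else "B"

    # The machine "passes" this round if the interrogator picks the wrong party.
    return guess != machine_label
```

What matters in this formulation is that only the written answers count: how the machine produces them, by replicating the brain or by any other means, plays no role in the game.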

Flying Differently than Birds

In the literature on AI, the possible advances of the field are often compared to other technologies, such as aeroplanes. Early flying machines tried to imitate birds, yet in the end aeroplanes came to fly in a very different way. One aim of Turing's article was to shift the focus from a general and teleological debate to the actual problems to be solved. On this approach, there is no single general answer to the question of what AI can achieve in the future, but there are many small improvements to machines.

There might be a day when we suddenly realise that in many respects the line between humans and machines has blurred. Games like chess or Go are examples of problems in which machines have surpassed humans. If this trend continues, it might give a completely different connotation to the term digital society. While we cannot say that we are there yet, does that mean it can never happen? Try “bot or not”, an adaptation of the Turing test for poems. You will find that even today it can be tricky to distinguish machines from human beings.

Would you like to publish an article of your own as part of this series? Then send us an email with your topic proposal.

This post reflects the opinion of the authors and neither necessarily nor exclusively the opinion of the institute. For more information on the content of these posts and the associated research projects, please contact info@hiig.de

Christian Djeffal, Prof. Dr.

Former project lead | Associated researcher
