17 April 2019

The Challenges of Social Robots

What are the most important ethical, legal and social challenges of social robots? And how can these challenges be addressed? To answer these questions, Christoph Lutz and two fellow researchers conducted four interactive and interdisciplinary workshops at leading robotics conferences. In this blog post, we outline the key findings.

Social robots: what are they and what do they do?

Social robots are robots that interact with humans, for example by engaging with us in conversations or acting as emotional companions. Examples include SoftBank Robotics’ Nao and Pepper as well as the cute robot seal Paro, which was developed in Japan and is used in elderly care to help patients with dementia. Research has started to look into the advantages and disadvantages of social robots. An upside of robots, including social robots, is that they can do dangerous, dull and dirty tasks, thus giving humans time for more fulfilling activities. At the same time, social robots can be used for nefarious purposes and are a source of ethical, legal and social (ELS) concerns, for example in terms of their privacy risks.

A solution-oriented approach

Philosophers in the field of robot and machine ethics have long discussed the ELS challenges of social robots. However, empirical ELS research remains scarce and discussions are fragmented across different scientific communities. To provide a more holistic and empirical understanding of the ELS challenges of social robots, particularly in therapy and education, Eduard Fosch Villaronga, Aurelia Tamò-Larrieux, and Christoph Lutz organized four workshops at leading robotics conferences in Europe and Japan. The workshops, held between 2015 and 2017, invited participants from all backgrounds, including academics and practitioners, to engage in open discussions on the key ELS challenges of social robots. In total, 43 participants from more than ten countries took part. Aiming for a solution-oriented format, we not only discussed the ELS challenges but also asked for recommendations on how they could be overcome. After the workshops, we synthesized the results into a working paper.

Key findings

Based on the workshop discussions, ELS challenges can be grouped into five broad categories: (1) privacy and security, (2) legal uncertainty, including liability questions, (3) autonomy and agency, (4) economic implications, and (5) human-robot interaction, including the replacement of human-human interaction. Within each category, specific challenges emerged. For example, discussions on autonomy and agency centered on the question of legal personhood for social robots as well as hierarchies in decision-making processes (e.g., should a robot in a hospital be allowed to override an incorrect decision by a nurse?). Recommendations to address these ELS challenges were of both a legal and a technological nature. Within the privacy and security category, technological solutions included the removal of cameras and strategies of visceral notice, such as a robot making a noise whenever it takes a picture of its surroundings. Legal approaches stressed the importance of a more dynamic consent model and a potential revision of privacy understandings. Across categories, living labs were mentioned as a promising approach, especially the Japanese Tokku zones, where robots are tested in realistic scenarios with concrete policy implications in mind.
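To make the visceral-notice idea concrete, here is a minimal sketch of how such a mechanism could be wired into a robot's camera pipeline: the audible cue fires before every capture, so bystanders perceive the data collection as it happens. All names and functions below are hypothetical illustrations, not part of any real robot API.

```python
# Hypothetical sketch of "visceral notice": wrap the camera so that an
# audible cue is emitted every time an image is captured.

class VisceralNoticeCamera:
    """Wraps a capture function and signals each capture audibly."""

    def __init__(self, capture_fn, notify_fn):
        self._capture = capture_fn    # stand-in for the robot's camera driver
        self._notify = notify_fn      # stand-in for playing a shutter sound

    def take_picture(self):
        self._notify()                # the cue always precedes the capture
        return self._capture()


# Usage with stand-in functions that just record what happened:
events = []
cam = VisceralNoticeCamera(
    capture_fn=lambda: events.append("capture") or "image-bytes",
    notify_fn=lambda: events.append("beep"),
)
image = cam.take_picture()
print(events)   # ['beep', 'capture'] — notice fires before the capture
```

The design point is simply that the notice is structurally coupled to the capture: the robot cannot photograph silently, because the wrapper is the only path to the camera.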

Methodological insights

In addition to community-building, the workshops also provided methodological insights, demonstrating the value of participant-focused research approaches. A key takeaway was the importance of keeping the discussions open, allowing for flexibility. While we had prepared three case studies for structuring the workshops, some categories and themes only emerged outside the boundaries of these case studies. A further insight was the usefulness of conducting the workshop at more than one conference and at different types of venues. This guaranteed a plurality of voices and a broader representation of different research cultures. The third workshop, held at the Japanese Society for Artificial Intelligence’s Annual Symposium on Artificial Intelligence (JSAI-isAI), was particularly fruitful in opening up new perspectives. Finally, documenting the workshops with notes and audio recordings (of course with the permission of the participants) was an important part of preserving the conversations for further analysis. So: make sure to always bring enough Post-its.


Christoph Lutz is an Associate Professor at the Department of Communication and Culture and at the Nordic Centre for Internet and Society, BI Norwegian Business School (Oslo). The article was written as a follow-up to the conference “AI: Legal & Ethical Implications” of the NoC European Hub, which took place in Haifa.

This post reflects the opinion of the authors and neither necessarily nor exclusively the opinion of the institute. For more information on the content of these posts and the associated research projects, please contact info@hiig.de

