01 March 2022 | doi: 10.5281/zenodo.6320730

Civil Society and AI: Striving for Ethical Governance

The involvement of civil society has been identified by a wide range of state and non-state actors as key to ensuring ethical and equitable approaches to the governance of AI. Civil society has the potential to hold organisations and institutions accountable, to advocate for marginalised voices to be heard, to spearhead ethically sound applications of AI, and to mediate between a multitude of different perspectives (Sanchez, 2021). Yet despite proclaimed ambitions and visible potential, civil society actors face major challenges when they seek to participate actively in the governance of AI.

Involving civil society actors is fundamental to the human-centric development and deployment of Artificial Intelligence (AI), proclaims the German government in its Update to the National Artificial Intelligence Strategy (NAIS), released on December 20, 2020. This call for the involvement of civil society acknowledges its increasing role in the governance of AI. Independent actors, such as the watchdog organisation AlgorithmWatch or the Gesellschaft für Informatik (German Informatics Society), address topics ranging from the monitoring of Instagram’s newsfeed algorithm to the AI auditing project ExamAI.

Despite proclaimed national ambitions aimed at the human-centric development of AI through the involvement of civil society, researchers at the Stiftung Neue Verantwortung (SNV) find that “European civil society organisations that study and address the social, political and ethical challenges of AI are not sufficiently consulted and struggle to have an impact on the policy debate” (Beining et al., 2020: p. 1). The HIIG Discussion Paper Towards Civil Strategization of AI in Germany explores this stark discrepancy between lofty ambitions and the reality of policy-making through the lens of the NAIS. The following paragraphs provide a look into the core themes and findings of this study on civil society and AI.

Civility in the governance of AI

The involvement of civil society in the governance of AI has been identified as crucial by a wide range of state and non-state actors. The World Economic Forum (WEF) sees the involvement of civil society actors as key to ensuring ethical and equitable approaches towards AI for the benefit of the common good. As watchdogs, they hold the power to move beyond mere principles for AI ethics towards holding organisations accountable. As advocates, they enable marginalised voices and communities to participate in the governance of AI. By making use of AI technologies, they can spearhead AI applications for the common good. As intermediaries, they can mediate between diverse sets of voices and perspectives.

Civil society is in a unique position to put critical topics on the governance agenda that economic and state actors might not be aware of, especially in contexts that proclaim ethical, human-centric, or for-the-common-good approaches towards AI. Why, then, is there such a great discrepancy between identified opportunities, proclaimed ambitions, and the reality of civil society participation in AI governance?

Algorithmic civil society in Germany

Many issues faced by civil society in the governance of AI are not new but rather rooted in historical, sociopolitical conceptions of the role of civil society vis-à-vis the state. In Germany, the state is envisioned as enabling the participation of a self-activating civil society (Strachwitz et al., 2020).

Unlike other countries, where AI tends to be treated as an independent subject, in Germany its regulation is largely seen as a subtopic of greater questions of digitalisation. As such, it is not only governments, corporations, and academia, but also civil society actors that address questions of AI through the broader lens of digitalisation. This is commonly referred to as the digital civil society.

The digital civil society ecosystem in Germany is strongly interwoven. Organisations such as the Gesellschaft für Informatik (German Informatics Society), the Bertelsmann Stiftung, the Stiftung Neue Verantwortung, AlgorithmWatch, and the iRights.lab frequently collaborate on a variety of projects. One example is Algo.rules, a joint project and study by the iRights.lab and the Bertelsmann Stiftung’s Ethik der Algorithmen (Ethics of Algorithms) initiative, which outlines a set of standards for the ethical design of algorithmic systems.

These actors are not only active on the national level but are also spearheading European initiatives. On November 30, 2021, AlgorithmWatch, for instance, was at the forefront of a group of 119 civil society organisations under the umbrella of the European Digital Rights (EDRi) association. This consortium released a collective statement calling upon the European Union to put consideration for fundamental rights at the forefront of the European Artificial Intelligence Act (EAIA). Despite this strong entanglement, the wide range of organisations is far from presenting a unified view, sharing instead a diverse yet common criticality towards AI and digitalisation more broadly.

Strategizing AI: Lofty ambitions, faulty procedures, lacking expertise

On December 20, 2020, the German government released the latest Update to its National Artificial Intelligence Strategy (NAIS), a concerted effort by its three leading ministries: the Federal Ministry for Education and Research (BMBF), the Federal Ministry for Economic Affairs and Energy (BMWi), and the Federal Ministry for Labour and Social Affairs (BMAS). The NAIS is the result of a participatory policy-making process spanning online consultations and expert hearings, encompassing representatives from the government, the private sector, academia, and civil society.

This consultative process faced several challenges, which are reflected by both the resulting policy documents, as well as the feedback of involved civil society actors. Among these challenges were:

  • a lack of systematic approaches towards participatory governance processes; 
  • a disregard for inviting relevant civil society actors in favour of public, private, and academic actors;
  • unfolding and continuing interministerial competition;
  • a hardening of individual argumentative positions;
  • and above all, a lack of expertise across the board.

While these issues challenged the participatory governance effort, there was an evolution throughout the process. The original NAIS, released on November 15, 2018, barely touched upon any critical topics of civil society and AI beyond a reference to the development of AI for the common good (which is never clearly defined in any of the policy documents). In contrast, the Update to the NAIS, which involved a broader range of civil society organisations throughout the consultative process, touched upon concrete topics of human-centric AI, curbing the effects of automation on labour, and environmental protection. In addition, the policy document references the involvement of civil society actors as key to addressing these questions.

This certainly points towards greater involvement of civil society and its concerns throughout the unfolding policy-making process. As one involved representative pointed out, though:

“Overall the focus lies on things such as the AI competence centres, which help to bring AI applications to corporations. It is less centred on how to use potentials for the common good or how to use regulatory tools that can aid corporations in implementing ethical and societal visions in the development of AI. […] To summarise, civil society concerns are mainly found in the headlines of the AI strategy.”

By merely referencing human-centred AI in the headlines of policy documents, these documents refrain from deeper critical engagement with what this means in concrete terms. This is further illustrated by the fact that any concrete measures backed by the allocation of resources were negotiated behind the closed doors of interministerial negotiations. Despite the lofty proclamations of inclusive policy-making processes, this rather underlines the black-boxing not only of the technology but also of its governance.

Where to now?

Many of the hurdles faced by civil society in the governance of AI are not particular to this technology but rather reflect existing systemic issues in the organisation of participatory governance processes. The rapid development of AI in light of larger digital transformations rather multiplies the negative effects of inadequate governance, a disregard for equal and equitable representation, and a lack of expertise in decision-making.

This then poses fundamental questions to participatory governance processes, including: How valuable is a participatory process that amounts to a knowledge-making exercise but lacks any formal decision-making power? How democratic are these participatory processes when the actual allocation of resources is hidden behind closed doors? Why, with notable existing experience and research on participatory governance, are processes still poorly designed? Are these processes in support of the envisioned enabling function of the state vis-à-vis civil society?

A lack of expertise among all involved actors further raises the question of what counts as expertise. Is a technical understanding of AI fundamental? An understanding of societal effects? An understanding of policy-making processes? What about the usually unheard expertise of often already marginalised people who are most affected by the deployment of AI systems?

These and other fundamental questions related to governance processes at the interface between government, research, industry, civil society and AI are addressed by the HIIG’s AI & Society Lab.

References

Beining, L., Bihr, P., & Heumann, S. (2020). Towards a European AI & Society Ecosystem. Stiftung Neue Verantwortung.

Sanchez, C. (2021, July). Civil society can help ensure AI benefits us all. Here’s how. World Economic Forum. https://www.weforum.org/agenda/2021/07/civil-society-help-ai-benefits/

Strachwitz, R. G., Priller, E., & Triebe, B. (2020). Handbuch Zivilgesellschaft (Sonderausgabe für die Bundeszentrale für Politische Bildung). Bpb.

This post represents the view of the authors and neither necessarily nor exclusively reflects the view of the institute. For more information about the contents of these posts and the associated research projects, please contact info@hiig.de

Maurice Jones

Former Associated Researcher: The evolving digital society
