23 April 2019

AI-infused decisions: "and a spoonful of dignity"

AI has the potential to take over decisions and optimise processes – for example in medical treatment. Yet this new kind of AI-infused decision-making often works in obscure ways for which we need intelligible translations. In her blog post, Aviva de Groot describes how we should value the aspect of dignity – an elusive ingredient of the "right to explanation" – in automated decisions.

The ability to make decisions is a salient shared feature of the manifold applications referred to under the umbrella term AI. Its use affects existing decisional practices and produces transformative experiences, for example in personal communications in the health and political domains. Where the decisional elements of input, analysis and output become harder to trace, or even start to escape our human capacities for understanding, AI-infused decisions can no longer be explained with previous methods. And where such analysis inevitably produces only correlations, causation still needs to be investigated before results can be understood. Technical-operational fixes are being developed, but researchers also call attention to human(e) ingredients. Some of these need some explanation themselves in order to be used responsibly. This blog post briefly treats the volatile entry of dignity, preceded by some professed catalysers.[1]

Augmented Intelligence

Confusingly abbreviated as ‘AI’ too, the A here stands for Augmented. It communicates the understanding that in certain situations, the combination of human and artificial intelligence holds the greatest positive potential. The term also scores lower in the ‘scary headlines’ department, which has gained it some industry popularity. Use responsibly: although the term highlights the distinct natures of human and machine thinking, it can obscure the human colouring of the artificial input, as it becomes increasingly challenging to separate each intelligence’s contribution.

Raw data

Don’t be fooled: this does not exist. It is said to grow in those parts of the AI landscape where the idea that technology is neutral still flourishes. Disagreeing, Feenberg and other scholars stress the importance of recognising our (possibly hidden) motives at play in the human-technology “co-construction” of reality: what we design and implement in society shapes how we live and interact, and these experiences in turn seed further designs.

Automation pessimism

This substance induces a heightened sense of the awareness advised under the previous lemma. Seen as characteristically European, and as inspiring legal restrictions and safeguards on automated decision-making, it boosts calls for transparency and understandability. Administrative innovations in support of the destructive machinery of the Second World War are seen to have facilitated dehumanising decisional processes in an unacceptable way.

De-objectification

Often combined with automation pessimism, this element benefits both parties of the explanatory exchange. It is promoted to (re-)instate them with an understanding of how people are represented in the digital age and treated on that basis. AI is seen to exacerbate earlier upgrades for controlling humans: predicting their behaviour now depends even less on knowledge and understanding of them, and their choice environments are set on the basis of digitally ‘observed’ behaviour. It is a popular ingredient with those who oppose such treatment on principled grounds.

The capability approach

To (re-)instate people in the described way, they will need to be (re-)instilled with the right capabilities. A known supplement in the realisation of human rights, the capability approach holds as one central idea that merely providing a resource – like a right to explanation – may ignore people’s actual possibilities to enjoy its functions. People will actually need to be able to provide and assess explanations in order to (re-)act as responsible decision makers. This is an ingredient to watch, as it is becoming very popular. Think of the problem of ‘deskilling’ in light of the declining demand for people’s own decision-making capabilities.

Care ethics

Not to be confused with the ‘AI ethics’ varieties that currently spring up like mushrooms in industry, academic and political environments. Care ethics call upon the virtues of humans, accepting them as co-dependent and vulnerable. Their primary principles, shared within the medical domain, harbour proven beneficial potential: ‘autonomy’, for example, contains a strong obligation to explain and inform patients. Frequently used together with dignity, as these ethics activate the benign forces of the latter.

Dignity

The dignity-informed move from ‘doctor knows best’ to ‘informed consent’ has urged doctors to afford insight into what lies within and beyond the limits of their medical knowledge, in support of patients’ decisional capabilities. The ensuing challenges to the power relationship bring us to an important care-related value of dignity: its mutuality. Dignity is cultivated within us and feeds upon what we come to understand as proper, humane behaviour. The user should understand that withholding such treatment from another (and even from herself) will drain her own supply. Grand misuses of the past and present are looked to for examples. Some progress has been made: slavery and genocide have been legally recognised as harmful to the shared value space we all depend on, and qualified as crimes against humanity. Yet grave harms are still inflicted where powerful players wield dignity as a dependent blessing, wrongfully conflating it with freedom or autonomy rights – which, unlike dignity, can be legally restricted for defensible reasons relative to age, state or behaviour. Progress in humanity’s appreciation of dignity continues to redefine the limits to these limitations. And so we develop …

A spoonful of dignity may serve to highlight the human relations that are seen to fade through puzzling uses of automation, and to act as a binding agent in developing prescriptions. It propels the need to identify proper understandings of augmented intelligence. As a bonus, it may relieve the exhaustive calls on individual autonomy: increasingly disqualified as a universal fix, autonomy may be nursed back into a healthy resource by a shift of focus to human dignity. But that is another story.


Aviva de Groot is a PhD researcher at the Tilburg Institute for Law, Technology and Society. Her research focuses on automated decision processes. The article was written as a follow-up to the conference “AI: Legal & Ethical Implications” of the NoC European Hub in Haifa.

This post reflects the opinion of the authors and neither necessarily nor exclusively the opinion of the institute. For more information on the content of these posts and the associated research projects, please contact info@hiig.de

