23 April 2019 | doi: 10.5281/zenodo.3087441

AI-infused decisions: “and a spoonful of dignity”

AI has the potential to make decisions and optimise processes – for example in medical treatments. But this new kind of AI-infused decision making operates in opaque ways and needs explaining. In her blog post, Aviva de Groot describes how to appreciate dignity – an elusive ingredient of a ‘right to explanation’ – when thinking about automated decision-making.

The ability to make decisions is a salient shared feature of the manifold applications gathered under the umbrella term AI. Its use affects existing decisional practices and produces transformative experiences, for example in personalised communications in the health and political domains. Where the decisional elements of input, analysis and output become harder to trace, or even start to escape our human capacities for understanding, AI-infused decisions can no longer be explained with previous methods. And where such analysis inevitably produces only correlations, causation still needs to be investigated before results can be understood. Technical-operational fixes are being developed, but researchers also call attention to human(e) ingredients. Some of these need some explanation themselves to be used responsibly. This blog post briefly treats the volatile entry of dignity, preceded by some of its professed catalysers.[1]

Augmented Intelligence

Confusingly abbreviated as ‘AI’ too, the A here stands for Augmented. The term communicates the understanding that in certain situations, the combination of human and artificial intelligence holds the greatest positive potential. It also scores lower in the ‘scary headlines’ department, which has gained it some industry popularity. Use responsibly: although it acknowledges the distinct natures of human and machine thinking, it potentially obscures the human colouring of the artificial input, as it becomes increasingly challenging to separate each intelligence’s contribution.

Raw data

Don’t be fooled: this does not exist. It is said to grow in parts of the AI landscape where the idea that technology is neutral still flourishes. Disagreeing, Feenberg and other scholars stress the importance of recognising our (possibly hidden) motives at play in the human-technology “co-construction” of reality, as what we design and implement in society shapes how we live and interact. These experiences in turn seed further designs.

Automation pessimism

This substance induces a heightened sense of the kind of awareness advised under the previous lemma. Seen as characteristically European, and as inspiring legal restrictions and safeguards on automated decision making, it boosts calls for transparency and understandability. The administrative innovations that supported the destructive machinery of the Second World War are seen to have facilitated dehumanising decisional processes in an unacceptable way.

De-objectification

Often combined with automation pessimism, this element benefits both parties to the explanatory exchange. It is promoted to (re-)instate them with an understanding of how people are represented in the digital age and treated on that basis. AI is seen to amplify earlier upgrades in the control of humans: predicting their behaviour now depends even less on knowledge and understanding of them. Based on digitally ‘observed’ behaviour, their choice environments are set. It is a popular ingredient with those who oppose such treatment on principled grounds.

The capability approach

To (re-)instate people in the way described, they will need to be (re-)instilled with the right capabilities. A known supplement in the realisation of human rights, the capability approach centres on the idea that merely providing a resource – like a right to explanation – may ignore people’s actual possibilities to enjoy its functions. People will actually need to be able to provide and assess explanations in order to (re-)act as responsible decision makers. This is an ingredient to watch, as it is becoming very popular. Think of the problem of ‘deskilling’ in light of the declining demand for people’s own decision-making capabilities.

Care ethics

Not to be confused with the ‘AI ethics’ varieties that currently spring up like mushrooms in industry, academic and political environments. Care ethics calls upon the virtues of humans, accepting them as co-dependent and vulnerable. Its primary principles, shared within the medical domain, harbour proven beneficial potential: ‘autonomy’, for example, contains a strong obligation to explain and inform patients. It is frequently used together with dignity, as these ethics activate the benign forces of the latter.

Dignity

The dignity-informed move from ‘doctor knows best’ to ‘informed consent’ has urged doctors to afford insight into what lies within and beyond the limits of their medical knowledge, in support of patients’ decisional capabilities. The ensuing challenges to the power relationship bring us to an important care-related value of dignity: its mutuality. Dignity is cultivated within us and feeds upon what we come to understand as proper, humane behaviour. The user should understand that to deprive another (and even herself) of such treatment will drain her own supply. Grand misuses of the past and present are looked to for examples. Some progress has been made: slavery and genocide have been legally recognised as harmful to the shared value space we all depend on and qualified as crimes against humanity. But grave harms are still inflicted where powerful players wield dignity as a conditional blessing – a wrongful conflation with freedom or autonomy rights, which can be legally restricted for defensible reasons relative to age, state or behaviour. Progress in humanity’s appreciation of dignity continues to redefine the limits to these limitations. And so we develop …

A spoonful of dignity may serve to highlight the human relations that are seen to fade through opaque uses of automation, and to act as a binding agent in developing prescriptions. It propels the need to identify proper understandings of augmented intelligence. As a bonus, it may relieve the exhausting calls on individual autonomy: increasingly disqualified as a universal fix, autonomy might – with a shift of focus to human dignity – be nursed back into a healthy resource. But that is another story.


Aviva de Groot is a PhD researcher at the Tilburg Institute for Law, Technology and Society. Her research focuses on automated decision processes. This article was written as a follow-up to the conference “AI: Legal & Ethical Implications” of the NoC European Hub in Haifa.


