20 March 2018 | doi: 10.5281/zenodo.1204395

Omens and algorithms: A response to Elena Esposito

Can algorithms actually predict the future? And if so, does this make them the gods of our modern society? In her lecture ‘Future and uncertainty in the digital society’, Elena Esposito questions these assumptions and gives us reason to be concerned about over-reliance on prediction. HIIG researcher Rebecca Kahn responds to the lecture, and argues that by creating algorithms in our own image, we risk creating monstrosities.

Faith in algorithms

Are algorithms a substitute for god? Do they know things that people don’t and can’t know? And if so, then who are their priests – which figures have the knowledge to interpret their predictions? These were some of the provocations posed by Elena Esposito in her lecture ‘Future and Uncertainty in the Digital Society’.


While the use of religious terms such as ‘god’ and ‘priest’ may have made some of us uncomfortable, they were entirely appropriate in the context. Many people are more likely to put faith in an algorithm than in the traditional idea of an omnipotent god. Esposito’s lecture explored the relationship between algorithmic prediction and the ancient art of divination, both of which claim to predict the future by processing data or information gleaned from the present.

In the era of boundless data and unlimited computing capacity, algorithmic prediction promises a certainty free of subjectivity: correlations computed at scale, without the uncertainties created by sampling and generalisation. Rather than providing a broad view of the overall picture, algorithmic prediction offers a specific ‘truth’ tailored to the individual as a result of ‘their’ data, regardless of context.

Revival of a divinatory tradition

In the ancient world, divination was a mechanism for seeing into a future which was unknowable to most humans, but which was pre-existing, determined and, most significantly, known to the gods. From the Latin divinare, meaning “to foresee” or “to be inspired by a god”, divination was (and in many places still is) practiced by priests, oracles and soothsayers who read and interpret omens and signs.

Esposito argues that algorithmic prediction revives many of the characteristics of the divinatory tradition. Unlike science, which is interested in explaining why a phenomenon occurs, divination and algorithmic prediction have no interest in explaining ‘why’ – they focus on the ‘what’. They are invoked in response to a particular reality, but do not try to understand how it has come about. Rather, both mechanisms share the goal of producing a response which can be coordinated with the cosmic or algorithmic order, and of producing a future which optimises the use of available resources. In the ancient world, this may have been knowing when to plant crops or when to go to war. In the present, it may be automated fraud detection, pre-emptive illness prevention or predictive policing.

In this context, it is easy to conflate the idea of algorithmic prediction with the idea of an all-knowing god. However, Esposito pointed to one critical difference between algorithmic prediction and divination: the context in which they take place, and the temporal aspect of that context. In the ancient world, divination depended on the unavoidability of its outcomes. They were essential for preserving the existence of an invisible higher order and of a pre-established, already existent (although unknown) future. Algorithms, on the other hand, cannot predict anything more than a present-future, based only on the data used to power them. They are unable to know what might happen in a slightly more distant future, in which their predictions are acted upon. Put another way, while divination needed to produce true outcomes in order to justify the practice, algorithms are not required to be true to prove their value – they just have to be accurate.

In the ancient world, the inevitability of the prediction proved the existence of a higher order. In our time, the accuracy of the prediction is not a reflection of the all-encompassing ability of the algorithm, but proof only that it knows its own data. And here is the critical issue, which Esposito touched upon, and which is increasingly causing unease among scholars and researchers: we know that data is not, and can never be, neutral[1].


The AI bias

Esposito’s anxieties dovetail with other red flags raised by those who work on the theoretical and practical implications of predictive algorithms, Big Data and AI for our society. Just as successful divination depended on balancing accurate predictions with just the right amount of mystique about the methods of prediction, the black-box nature of algorithmic prediction and deep machine learning depends on the majority of people accepting the results without questioning too closely the mechanics which created them. However, issues such as algorithmic bias, which may already be prevalent in some AI systems[2], are a reminder that if machines are given biased data, they will produce biased results. These biases may not be intentional, or even visible, but they affect the accuracy of the prediction in significant ways.
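Esposito’s point that data is never neutral can be made concrete with a small illustration. The following is a minimal, hypothetical sketch (not drawn from the lecture or from any of the systems discussed here) of how a skew in record-keeping alone produces skewed predictions: two groups share an identical underlying rate of some outcome, but one group’s outcomes are written down more often, and a simple classifier trained on those records duly ‘learns’ that the more heavily recorded group is riskier.

```python
# A minimal, hypothetical sketch of how biased record-keeping alone can
# produce biased predictions. All numbers and group labels are invented
# for illustration; this is not a model of any system discussed above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with the *same* true 20% base rate of the outcome.
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
true_outcome = rng.random(n) < 0.20

# Biased data collection: group A's outcomes are only recorded half the
# time, while group B's are always recorded. The skew lives in the labels.
recorded = rng.random(n) < np.where(group == 0, 0.5, 1.0)
label = (true_outcome & recorded).astype(int)

# Train a simple classifier on the biased labels, with group as the feature.
model = LogisticRegression().fit(group.reshape(-1, 1), label)

# The predicted "risk" per group mirrors the recording bias, not reality.
for g in (0, 1):
    predicted = model.predict_proba([[g]])[0, 1]
    actual = true_outcome[group == g].mean()
    print(f"group {'AB'[g]}: predicted risk {predicted:.2f}, true rate {actual:.2f}")
```

Nothing in this toy example is malicious, and the model is not wrong about its own data; the distortion sits entirely in which outcomes were recorded, which is precisely the kind of bias that remains invisible when only the predictions are in view.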

Many people of colour who uploaded selfies to the recent Google Arts & Culture selfie-matching service noticed that the results were heavily skewed towards images of non-white people represented in exoticized ways, and some reported having their race misread by the algorithm[3]. This example illustrates the complex nature of the problem: the dataset of cultural heritage materials used by Google is heavily Eurocentric to begin with, and the creators of the algorithm may have been unaware of that bias (or may not have accounted for it) before releasing the tool to the public. The algorithm itself is not capable of responding to the contextual complexities it highlighted, and so the results reinforce the representational bias of the underlying data.

A less benign example of this opacity, which researchers and civil society groups are increasingly concerned about, is the use of algorithms in predictive policing. A study by ProPublica in 2016[4] showed that algorithmic prediction was not only less than accurate in predicting whether individuals classed as “high-risk” were in fact likely to commit certain crimes; it also falsely flagged individuals of colour as likely future criminals at almost twice the rate of white individuals.

Algorithmic bias, and the overall lack of will on the part of tech companies to address the risk it poses in real-world applications[5], are a real cause for concern. The influence of algorithms in our day-to-day knowledge-gathering practices means that their bias has the potential to subtly reinforce existing stereotypes, as explored by Dr Safiya Umoja Noble in her book Algorithms of Oppression (NYU Press, 2018). As Esposito put it: “About the future they produce, algorithms are blind.” And it is in this blindness, and in society’s blindness to it, that the risk is located. If we don’t spend time considering the ‘how’ of algorithms, and critically questioning the ways in which we deploy them, they risk duplicating and mirroring our worst traits.

References

[1] Boyd, Keller & Tijerina (2016). Supporting Ethical Data Research: An Exploratory Study of Emerging Issues in Big Data and Technical Research. Working paper, Data & Society. https://www.datasociety.net/pubs/sedr/SupportingEthicsDataResearch_Sept2016.pdf

[2] https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/

[3] https://mashable.com/2018/01/16/google-arts-culture-app-race-problem-racist/#1htlxqJqpsqR

[4] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[5] https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/


Rebecca Kahn completed her PhD in the Department of Digital Humanities at King’s College London in 2017. Her research examines the impact and effect of digital transformation on cultural heritage institutions, their documentation, data models and internal ontologies. It also examines how the identity of an institution can be traced and observed throughout its digital assets.


This article is a response to Elena Esposito’s lecture in our lecture series Making Sense of the Digital Society.


This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.


Rebecca Kahn, Dr.

Associated Researcher: Knowledge & Society
