
Research in Focus

Artificial intelligence and society

Artificial intelligence (AI) unveils a world in which the capabilities of technical systems resemble those of human intelligence. But AI isn't just about algorithms; it is deeply interwoven with our society. The future of AI technologies is strongly interlinked with the automation of social processes and will touch every facet of our lives: from tailoring social media feeds to driving innovations in healthcare and climate research. It reaches beyond the screens we scroll; it is in our offices, our hospitals, our roads, and even in the cutting-edge robotic systems we design. Our research investigates the interplay of AI with the political, social, and cultural landscapes, and explores the impact of AI discourses on society.

About our Research

The manifold benefits of AI systems are evident as we witness their pervasive impact on various domains. For example, technological automation helps to evaluate large amounts of data and thus to generate new knowledge. Further examples include the analysis of diagnostic images in medicine and the recognition of patterns in voice, visual, or text data.

AI within our Society

AI systems are poised to undertake tasks that were once the exclusive domain of human capability. The intent to bring AI technologies close to the nature of human beings is illustrated by the widespread use of technical terms such as "machine learning" or "neural networks", which link the capabilities of technical systems to a rational intelligence similar to that found in humans.

Today, AI systems are used to support or replace humans in everyday life, and are associated with an increasing automation of social processes. This shift prompts critical reflections on many levels about AI in society, encompassing questions about accountability, autonomy, and trust.

As we contemplate the dynamic between AI and society, it's clear that the impact of artificial intelligence extends far beyond technology.

AI Systems for Public Good

At HIIG we are particularly interested in systems serving the public interest. In our research, we explore what public interest AI means in political theory, what challenges these projects face globally, and how AI systems with this purpose should be built and governed. The question of how AI affects society resonates deeply, urging us to explore the multidimensional impact of public interest AI. With the publicinterest.ai interface, we aim to expand the data and discourse on public interest AI systems worldwide, fostering exchange and sharing standards and learnings on how these systems are shaping our society.

Different traditions of thinking about AI

The imaginaries associated with AI also play a major role in shaping the societal implications of these technologies. What is associated with the term AI varies considerably from culture to culture, which is why one of our projects compares AI-related controversies in different countries, their media, and their policy-making. In more theoretical work, we investigate how different traditions of thought in European and Asian countries shape the ways AI is interpreted as either a risk or a solution.

Relatedly, it is impossible to think about artificial intelligence or artificial humans without creating particular ideas about what is human in the first place. In our research we pay particular attention to the relationship between humans and machines, which is strongly related to questions on machine autonomy.

In summary, it is crucial for us to further deepen our understanding of the complex relationship between AI and society. Our research emphasises the need for an informed dialogue to collaboratively shape the role that AI will play in our future.

From the HIIG Channel

MAKING SENSE OF THE DIGITAL SOCIETY

Louise Amoore: Our lives with algorithms

Explore how machine learning algorithms are radically changing the way we find meaning in society.

MAKING SENSE OF THE DIGITAL SOCIETY

Judith Simon: The ethics of AI and big data

How exactly can fundamental rights and moral values be taken into account when developing various AI systems?

Digitaler Salon

AI – The last one is cleaning up the internet

How is AI actually negotiated in our society, and do we need more information about the use and handling of AI?

From the Press

Feature with Theresa Züger about the use of artificial intelligence for the common good

Feature with Wolfgang Schulz about AI text generators

From the HIIG Blog


Participation with Impact: Insights into the processes of Common Voice

What makes the Common Voice project special and what can others learn from it? An inspiring example that shows what effective participation can look like.


Public Interest AI – Quo vadis?

A lot has happened in science, society, and politics since the founding of our research group on public interest-oriented AI. We provide an overview.


Inside Hugging Face

Understanding which actors and organisations are active on Hugging Face is crucial to understanding the current dynamics of open-source research in machine learning.


Lowering the barriers: Accessible language and “Leichte Sprache” on the German Web

How much of the German web uses understandable language? And how much of it is in Leichte Sprache? Our AI & Society Lab takes a closer look.


Public Interest Tech: A take on the actors’ perspectives on ecological sustainability

We asked actors in the field of public interest AI how they deal with ecological sustainability. What do they know about it in general and what measures do they take...


The AI Transparency Cycle

A common notion of AI transparency is to either make code public or to explain exactly how an algorithm makes a decision. Both approaches sound plausible but fail in practice.