21 August 2023| doi: 10.5281/zenodo.8289392

Public Interest AI – Quo vadis?

Our research group on public interest-oriented AI has existed since 2020. Since then, a lot has happened around the topic, not only from a scientific perspective but also politically and socially. This blog post gives an introduction to the topic, explains important findings and offers an outlook on what we have in mind.

Since ChatGPT became publicly available, everyone is suddenly talking about AI, and the hype seems to have gained new momentum. By now, at the latest, everyone is aware that AI systems are accompanied by far-reaching social changes, such as the transformation of entire industries, the automation of once-social interactions, or the entrenchment of power structures. Who do ChatGPT and other AI applications actually serve, and who should benefit from them? Shouldn't that ultimately be all of us and our shared societal goals?

From the beginning we have been asking how AI systems can serve public interest goals and what conditions for the technology and its governance these goals entail. But what actually is in the public interest? This is another question we are exploring in our research.

On the common good

There is a rich debate on the idea of the common good in political theory and legal philosophy. Our work is based on an understanding of the public interest as proposed by Barry Bozeman: according to Bozeman, the public interest “refers to the outcomes best serving the long-run survival and well-being of a social collective construed as a public” (Bozeman, 2007, p. 12).

Understood in this way, the public interest cannot be universally defined; it must always be negotiated publicly, in a participatory and deliberative manner and on a case-by-case basis, by those affected by an issue. As a research group, we orient our work towards this understanding and derive concrete factors from this theoretical foundation. The question we ask ourselves is: how can this understanding change the process and technical implementation of AI development?

Our approach

We share our thoughts on this at www.publicinterest.ai and present, for example, criteria that we consider important for the development of public interest-oriented AI. We also try to implement the conditions we demand for public interest-oriented AI in our own prototypes. Two of the PhD students in the research team work with Natural Language Processing: one on translating German texts into simplified language, the other on supporting fact checkers in their work. The third PhD student explores ways to manage data in a participatory manner in order to strengthen the public-good orientation of projects.

Public Interest AI Interface

Through the publicinterest.ai interface, we also want to improve the data available on public interest-oriented AI projects by inviting projects to take part in a survey and mapping the results globally. Two projects that can be found on the map are the Seaclear project, which uses robots, sensors and Computer Vision to fish rubbish from the seabed while sparing marine animals, and VFRAME, a project that enables Computer Vision-based analysis of conflict zones for human rights groups. We want to support these projects by increasing their visibility and, at the same time, enabling scientific research on such projects.

Common good needs many voices

But fortunately, we are far from the only ones interested in public interest AI. This year, the Civic Coding Network launched an office to support public-good-oriented AI projects. And we are very happy about the study by Wikimedia, which, among other things, evaluates various federal data projects for their public-good orientation on the basis of our considerations.

Our biggest next goal is to further expand our work on public interest AI: to initiate more application-oriented research projects, support prototypes and build up a network for public interest AI. In October 2023, for example, we are launching our first Public Interest AI Fellowship round to connect students from different technical universities with NGOs working on public interest AI projects. Over the next five years, we want to make public interest AI a real and sustainable alternative to purely commercial AI projects.

References

Bozeman, B. (2007). Public values and public interest: Counterbalancing economic individualism. Georgetown University Press.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Theresa Züger, Dr.

Research Group Lead: Public Interest AI | AI & Society Lab, Co-Lead: Human in the Loop

