When scholars sprint, bad algorithms are on the run
The first research sprint of the project “The Ethics of Digitalisation”, funded by Stiftung Mercator, has crossed the finish line. Thirteen international fellows tackled the challenges that come with the use of AI in the moderation of online content. After ten intensive weeks of interdisciplinary research, we provide an overview of the key results.
In response to increasing public pressure to tackle hate speech and other challenging content, platform companies have turned to algorithmic content moderation systems. These automated tools promise to identify potentially illegal or unwanted material more effectively and efficiently. But algorithmic content moderation also raises many questions, none of which have simple answers. Where is the line between hate speech and freedom of expression, and how can it be drawn automatically on a global scale? Should platforms use AI tools only against illegal online speech, such as the promotion of terrorism, or also for everyday content governance? Are platforms’ algorithms over-enforcing against legitimate speech, or are they rather failing to limit hateful content on their sites? And how can policymakers ensure an adequate level of transparency and accountability in platforms’ algorithmic content moderation processes?
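To make the trade-off between over- and under-enforcement concrete, here is a minimal, purely illustrative Python sketch. It is not any platform’s actual system: real moderation pipelines rely on large machine learning models, while this toy uses a hypothetical keyword lexicon and a simple decision threshold.

```python
# Purely hypothetical toy: real platform systems use large ML models,
# not keyword lists. This sketch only illustrates the trade-off.

FLAGGED_TERMS = {"vermin", "subhuman", "exterminate"}  # hypothetical lexicon


def toxicity_score(text: str) -> float:
    """Fraction of words that appear in the flagged lexicon."""
    words = text.lower().split()
    return sum(w in FLAGGED_TERMS for w in words) / len(words) if words else 0.0


def moderate(text: str, threshold: float) -> str:
    """Remove content whose score exceeds the decision threshold."""
    return "remove" if toxicity_score(text) > threshold else "keep"


posts = [
    "they are vermin and deserve nothing",      # hateful post
    "calling migrants vermin is dehumanising",  # counter-speech
]

# A low threshold over-enforces (counter-speech is removed too);
# a high threshold under-enforces (hateful content stays up).
for t in (0.05, 0.5):
    print(f"threshold={t}:", [moderate(p, t) for p in posts])
```

Note how a scorer like this cannot distinguish a hateful post from counter-speech quoting the same slur: a strict threshold removes both (over-enforcement), while a lenient one keeps both (under-enforcement). This is exactly the kind of error pattern that the transparency and auditing proposals discussed below are meant to surface.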
Research sprint within the framework of “The Ethics of Digitalisation”
These were just some of the issues that drove the research sprint on AI and content moderation hosted by the Alexander von Humboldt Institute for Internet and Society. The sprint, which took place virtually over the course of ten weeks from August until October 2020, was the first research format of the project “The Ethics of Digitalisation – from Principles to Practices” under the patronage of German Federal President Frank-Walter Steinmeier. This project, which will run until July 2022, aims to foster a global dialogue on the ethics of digitalisation by involving stakeholders from academia, civil society, policy, and industry. The project comprises research sprints and smaller clinic formats hosted by several research institutes of the Global Network of Centers. The main partners of the project are Stiftung Mercator, the HIIG, the Berkman Klein Center at Harvard University, and the Digital Asia Hub.
Thirteen fellows, nine countries, seven time zones
In line with the project’s interdisciplinary approach, the HIIG team led by Nadine Birner, Christian Katzenbach, Matthias C. Kettemann, Alexander Pirang and Friederike Stock assembled a highly diverse group of participants for the first research sprint. They selected thirteen brilliant fellows working in nine different countries and seven different time zones, whose academic expertise ranged from law and public policy to data science and digital ethics.
The fellows formed working groups to address key challenges arising from the use of automation and machine learning in content moderation. They were mentored in this effort by Julia Reda (Gesellschaft für Freiheitsrechte, Berkman Klein Center), Mackenzie Nelson (AlgorithmWatch), and Juan Carlos de Martin (Politecnico di Torino, Berkman Klein Center). To engage with industry perspectives, the fellows also met with representatives from Facebook and Google. Most importantly, however, the fellows had intense discussions among themselves, which we – as scientific leads – found as captivating as they were thought-provoking.
This is a sprint, not a marathon: three policy briefs to guide the way
This journey was challenging at times. Research usually feels more like a marathon than a sprint, yet, in our case, the time pressure was high right from the start. And mind you, all this took place virtually during a pandemic.
The fellows more than met our high expectations, constantly pushing the boundaries of the research sprint’s format with their motivation and intellectual curiosity. Starting with the premise that algorithmic content moderation is here to stay, the fellows identified glaring gaps in our knowledge of how platform companies automate content moderation processes. Moreover, they recognized that highly imperfect machines pose grave risks to fundamental rights, particularly freedom of expression. Against this background, the working groups produced policy briefs that make recommendations on how to address these challenges across the following key areas.
Meaningful transparency obligations: In order to overcome the current information gap, the fellows propose wide-ranging measures to establish a multi-level transparency regime, thus facilitating evidence-based platform regulation and society-wide debate about how algorithmic content moderation systems should be designed.
Effective appeal mechanisms: Given the lack of redress against automated enforcement decisions, the fellows recommend imposing binding and enforceable obligations on platforms to provide users with effective appeal mechanisms. They also recommend establishing an independent Ombudsperson with the power to supervise and evaluate platforms’ algorithmic content moderation practices.
Principle-based algorithmic auditing: Lastly, the fellows identify algorithmic audits as the most promising mechanism for monitoring the risks associated with the use of AI in content moderation. To ensure that audit mandates are carefully crafted in law, the fellows recommend four guiding principles: independence, access, publicity, and resources.
For more details, we invite you to read the policy briefs here:
- Bloch-Wehba, H., Fernandez, A., Morar, D. (2020). Making Audits Meaningful. HIIG Research Sprint “AI in Content Moderation”, Project: “Ethics of Digitalization”.
- Iramina, A., Spencer-Smith, C., Yan, W. (2020). Disclosure Rules for Algorithmic Content Moderation. HIIG Research Sprint “AI in Content Moderation”, Project: “Ethics of Digitalization”.
- Cowls, J., Darius, P., Golunova, V., Mendis, S., Prem, E., Santistevan, D., Wang, W. (2020). Freedom of Expression in the Digital Public Sphere. HIIG Research Sprint “AI in Content Moderation”, Project: “Ethics of Digitalization”.
What’s next?
Interested in the ethics of digitalisation? Take a look at our upcoming virtual clinic on “Increasing Fairness in Targeted Advertising: The Risk of Gender Stereotyping by Job Ad Algorithms”.