23 August 2022

Content Moderation – What can stay, what must go?

Automated deletion on social networks is putting freedom of expression at risk. That’s why we need a few rules. The research project “Ethics of Digitalisation” has worked out what these might be.

Guest article by Alexandra Borchardt

Should Donald Trump be allowed to tweet? Should the state broadcaster Russia Today be allowed to spread war propaganda on Facebook, Instagram and YouTube? For a long time, those in charge of the large social media platform corporations such as Facebook/Meta or Google evaded these and similar questions. This was not just out of ignorance or the naïve belief that having more opinions on the internet would automatically guarantee diversity. Above all, they wanted to make money and relied on a simple formula: more speech = more money. Moreover, they did not want to play a policing role in public discourse. But the more tense the political climate has become and the louder the discussion about hate, incitement, violence and lies on the internet has grown, the more this resistance has crumbled. It did not always crumble voluntarily: laws like the German Network Enforcement Act (Netzwerkdurchsetzungsgesetz, or NetzDG for short) have increased the pressure. Platforms such as Facebook or YouTube face fines if they do not remove certain content quickly, and they are required to be more transparent. But when the US Capitol was stormed on 6 January 2021, many doubters in the corporate world had to concede that staying neutral is not an option in the face of such calls for violence. Hate on the web can put democracy in danger.

Why do machines often systematically delete the wrong content? 

Today, corporations systematically delete content that violates their internal rules and the law. Systematically means they leave the task of deleting and hiding content to machines, at least initially. Such automated moderation programs, some of which are self-learning, decide on their own what can stay and what must go. The technical term for this is automated content moderation.

But software is not as smart as many people think and perhaps even fear. It can only compare content to something it has previously seen; it cannot put that content into context, especially when it comes to cultural idiosyncrasies or humour. An image that might thrill the art scene in one country, for example, could be considered pornographic in another. The platforms' algorithms are only really well trained in a few languages, because most platform corporations are based in the USA. And they often struggle to interpret certain formats, such as visual symbols like memes and GIFs. That is why deletion often goes wrong: posts that clearly violate the law stay up, while harmless or even important statements are taken down. This, in turn, puts freedom of expression at risk. What needs to happen so that social networks can remain central forums for debate and information-sharing without providing a platform for propagandists?
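
To make this limitation concrete, here is a minimal, purely hypothetical Python sketch of a matcher that can only compare new posts with material it has already seen. All phrases, names and outputs are invented for illustration; this does not describe any platform's actual system.

```python
# Hypothetical sketch: a "moderator" that only recognises material it has seen
# before and has no notion of context, irony or cultural setting.

BLOCKED_PHRASES = {"known propaganda slogan"}   # invented "training data"

def moderate(post: str) -> str:
    """Return 'remove' if the post matches previously seen material, else 'keep'."""
    text = post.lower()
    if any(phrase in text for phrase in BLOCKED_PHRASES):
        return "remove"   # matches something seen before, so it is taken down
    return "keep"         # anything unfamiliar stays up, harmful or not

# A fact-check quoting the slogan in order to debunk it is removed like the original post:
print(moderate("Fact check: the claim 'known propaganda slogan' is false."))   # -> remove
# A harmful post in a language or meme format the system was never trained on stays up:
print(moderate("a novel coded insult the system has never encountered"))       # -> keep
```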

The scientific debate on content moderation

This question is being addressed by the civil society organisations that developed the Santa Clara Principles on Transparency and Accountability in Content Moderation and, in particular, by the EU with its Digital Services Act. However, good regulation needs scientific input. In the research project “Ethics of Digitalisation” – supported by internet institutes worldwide, financed by the Mercator Foundation and under the patronage of Federal President Frank-Walter Steinmeier – 13 researchers from nine countries, seven time zones and a range of disciplines have been working intensively on automated content moderation. From August to October 2020, they analysed the situation in so-called research sprints, with the support of mentors. The Alexander von Humboldt Institute for Internet and Society (HIIG) in Berlin was in charge of the project. The result is a set of recommendations for policy-makers.

Nothing works without people as monitors

The researchers started from several assumptions. First, in the internet’s communication channels it is algorithms that sort out and remove content, and this will not change, simply because the volumes involved make anything else unthinkable. Second, there are huge knowledge gaps among all stakeholders about how these algorithms operate and learn, which makes it particularly difficult to develop appropriate and effective regulation. Third, it has so far often been unclear who in the world of digital information channels is responsible – who is not only called upon to act but also bears liability for their actions. And fourth, software will never be able to sort content perfectly. When it is tasked with doing so, fundamental rights, especially freedom of expression, are put at risk. Policymakers will not be able to find perfect answers to these questions, because in most cases it is the context that matters: a quote, a film clip or a picture can be interpreted very differently depending on who posts it and under which heading.

The experts advise caution when states begin to remove content that can be classified as merely “problematic” or “harmful” but is not illegal, because such vague categories open the door to censorship. At the same time, hardly any software will be able to correctly assess the legal situation at all times and in all places. Nothing will therefore work without people to monitor the process. In their proposals, the researchers in the Ethics of Digitalisation project have developed a few principles that could guide all parties involved – not just governments and parliamentarians but also the platform corporations.

We need facts and the opportunity to fight back

The first concern is far-reaching transparency. This demand is directed at the platform corporations on the one hand and at regulators on the other. Google, Facebook and similar companies should be obliged to disclose how their systems work and to verify regularly that they respect fundamental rights such as freedom of expression and privacy. This demand has largely been addressed by the Digital Services Act. Legislators, for their part, should above all disclose their intentions, justify them and offer reliability. Once a regulatory process is underway, everyone should know how it is proceeding, what it aims to achieve and what its results are. Regulations should be based on research findings and be flexible enough to adapt to new technical developments. This requires a broad social debate on how content should be sorted and automatically filtered.

Currently, platform companies’ algorithms largely optimise content according to its chances of attracting attention. Behind this is an ad-driven business model that relies on views – the so-called “battle for the eyeballs”. By applying this model, the companies automatically encourage the posting of all kinds of nonsense – which then has to be checked for its legality. However, positive selection would also be possible: algorithms could give preference to posts and information whose factual accuracy has been checked or that comes from sources certified as reputable. The Journalism Trust Initiative of the organisation Reporters Without Borders is campaigning for a system like this, with the aim of helping to make quality journalism more visible. Other content that is less conducive to constructive debate then automatically moves further down the list and becomes barely visible.
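
The difference between the two selection logics can be sketched in a few lines of Python. The posts, scores and ranking rules below are invented purely for illustration and are not taken from any real platform.

```python
# Illustrative sketch: the same three (invented) posts ranked two ways.

posts = [
    {"title": "Outrage bait",         "predicted_engagement": 0.95, "source_certified": False},
    {"title": "Verified news report", "predicted_engagement": 0.40, "source_certified": True},
    {"title": "Conspiracy meme",      "predicted_engagement": 0.80, "source_certified": False},
]

def rank_by_attention(feed):
    # The ad-driven model: whatever promises the most eyeballs comes first.
    return sorted(feed, key=lambda p: p["predicted_engagement"], reverse=True)

def rank_by_trust(feed):
    # "Positive selection": certified sources first, engagement only as a tie-breaker.
    return sorted(feed, key=lambda p: (p["source_certified"], p["predicted_engagement"]), reverse=True)

print([p["title"] for p in rank_by_attention(posts)])
# ['Outrage bait', 'Conspiracy meme', 'Verified news report']
print([p["title"] for p in rank_by_trust(posts)])
# ['Verified news report', 'Outrage bait', 'Conspiracy meme']
```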

But because machines and people inevitably make mistakes, sorting content is not enough on its own. Citizens and institutions must also be given simple, fast and unbureaucratic ways to enforce their rights when they feel they have been treated unfairly – when they have been blocked or censored for no apparent reason. Platform corporations should be obliged to create such structures, for example options to appeal incorrect decisions with just a few clicks. The researchers recommend that an independent ombudsperson, or platform advisory board, be appointed as the final authority to arbitrate in disputes and make decisions that are binding on all parties. In addition, of course, there is always the option of going to the national and international courts.
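
This chain of redress can be summarised schematically. The stage names in the sketch below paraphrase the proposal for illustration; they are not a quotation from it.

```python
# Schematic sketch of the proposed escalation path for contested moderation decisions.

REDRESS_STAGES = [
    "automated decision (post removed or account blocked)",
    "in-platform appeal with a few clicks",
    "independent ombudsperson / platform advisory board (binding decision)",
    "national or international courts",
]

def next_stage(current: str) -> str:
    """Return the next escalation step, or the final one if the chain is exhausted."""
    i = REDRESS_STAGES.index(current)
    return REDRESS_STAGES[min(i + 1, len(REDRESS_STAGES) - 1)]

print(next_stage("in-platform appeal with a few clicks"))
# -> 'independent ombudsperson / platform advisory board (binding decision)'
```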

Who actually monitors the algorithms that monitor me?

In addition, the researchers suggest that the algorithms themselves should be subject to regular monitoring and that audits should be established for this purpose – a kind of MOT test for algorithms. These audits are intended to ensure, among other things, that the algorithms comply with the law, for example that they do not discriminate. Discrimination can arise quickly, because artificial intelligence “learns” what works best; if these systems are not watched and regularly checked, stereotypes will not only be perpetuated but possibly even reinforced. In recent years, there has been a growing social debate about what algorithms must be able to do and what they are used for. Whereas initially the focus was on solving tasks as efficiently as possible – granting loans, selecting applicants or personalising content, for example – values now play an increasingly important role. It is now considered crucial to define from the outset what goals automated selection should achieve and to check whether those goals are met.
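
What such an audit might actually check can be sketched roughly in code. The sample below compares how often posts in different languages are removed and flags large gaps; the data, the factor-of-two threshold and the function names are assumptions made purely for illustration, not an established audit standard.

```python
# Hypothetical audit check: compare removal rates across language groups.

from collections import Counter

# Invented audit sample of (language of post, moderation decision) pairs.
decisions = [
    ("en", "keep"), ("en", "keep"), ("en", "remove"), ("en", "keep"),
    ("tl", "remove"), ("tl", "remove"), ("tl", "keep"), ("tl", "remove"),
]

def removal_rates(sample):
    totals, removals = Counter(), Counter()
    for language, decision in sample:
        totals[language] += 1
        if decision == "remove":
            removals[language] += 1
    return {lang: removals[lang] / totals[lang] for lang in totals}

rates = removal_rates(decisions)
baseline = min(rates.values())
for lang, rate in rates.items():
    # Flag groups removed far more often than the least-affected group;
    # the factor of 2 is an arbitrary illustration, not a legal threshold.
    if baseline > 0 and rate / baseline > 2:
        print(f"Audit flag: posts in '{lang}' are removed {rate / baseline:.1f}x more often")
```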

Enforcing such an MOT will be a challenge, however, because algorithms are the modern equivalent of the Coca-Cola formula: the platform companies regard them as trade secrets, want to keep others in the dark about how their sorting software is optimised and do not want to give up control over it. Critics find this unacceptable. After all, these are powerful instruments that influence public debates and may mean that life-defining information is not spread widely enough. The researchers have therefore established four basic principles for such audits: the independence of auditors, access to data, publication of results and sufficient resources.

Who will safeguard my freedom of expression now: the state, industry or civil society? 

Audits would not be entirely new. This instrument already exists in European law – for example, in the GDPR data protection package, where “data protection audits” are included as a control option. Another possibility would be public registers for algorithms to disclose the basis on which automated decisions are made. In Europe, this tool is currently being tested by authorities in Amsterdam, Helsinki and Nantes. Such registers could also be set up for the private sector.
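
What an entry in such a public register might record can be sketched as a simple data structure. The fields below are an assumption, loosely inspired by the kind of information the municipal pilots publish; they do not reproduce any official schema.

```python
# Illustrative sketch of a public register entry for an automated system.

from dataclasses import dataclass

@dataclass
class AlgorithmRegisterEntry:
    name: str                   # what the system is called
    operator: str               # who runs it
    purpose: str                # which decisions it informs
    data_sources: list[str]     # which data it uses or was trained on
    human_oversight: str        # how and when people can intervene
    appeals_contact: str        # where affected people can turn

entry = AlgorithmRegisterEntry(
    name="Example content-ranking system",   # all values are hypothetical
    operator="Example Platform Ltd.",
    purpose="Orders posts in users' feeds",
    data_sources=["engagement signals", "source certification labels"],
    human_oversight="Sampled decisions reviewed weekly by a moderation team",
    appeals_contact="appeals@example.com",
)
print(entry.name, "-", entry.purpose)
```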

However, the researchers admit that such regulation could also open the door to abuse. Governments could use it as a pretext to restrict privacy, suppress dissent or prevent people from exercising other fundamental rights, as is happening in Russia, for example. “Like any regulation, audits would need to be prudently established to protect against abuse, misuse, politicisation and disproportionate interference,” the team writes. The relevant processes would therefore have to be set up in such a way that neither states nor industry alone could exert undue influence over them, either because both have equal voting rights or because civil society actors are involved.

Companies should provide data

In a world that is changing rapidly – due to technological developments, among other things – it is important to involve all actors and groups with the requisite knowledge and experience in political processes. Lengthy, hierarchically controlled decision-making does not do justice to such dynamic developments; flexibility and adaptability are needed. Making policy on algorithmic content management increasingly resembles open-heart surgery. More than ever, it is important to shorten the path between action and impact through practice-oriented research. For that, however, scientists need access to data. There is certainly no shortage of data, but there is likely a lack of willingness on the part of influential companies to make it available. In a society built on knowledge and facts, knowledge cannot be shared quickly enough. Everyone is called upon to contribute to this.
