18 July 2024

Navigating platform power: from European elections to the regulatory future

For the first time since the new EU rules on digital services and markets came into force, an EU-wide election has taken place. Six weeks later, it is time to take stock. How are the new EU rules against platform-based challenges to democracy working under real-life conditions? Has the much-vaunted Digital Services Act (DSA) helped in navigating platform power? This article shows that the challenges of implementing the DSA have only just begun to emerge.

The anticipated wave of digital challenges

In the weeks leading up to the European elections, news outlets, researchers, authorities, and civil society actors alike warned of an anticipated wave of disinformation. The EU elections, held in early June 2024, are “a flagship of European democracy”, as the European Parliament puts it. However, experts across the globe feared the very real possibility that actors both inside and outside the European Union would try to undermine the democratic processes of the European elections. Their main concern was the dissemination of false information about voting procedures and the sowing of division and polarisation within the EU. Apart from disinformation, the NGO Democracy Reporting International (DRI) identified hate speech, foreign interference, and Paid Political Ads (PPAs) as the most dominant digital threats to the EU elections. However, if there is one thing we can learn from DRI’s stakeholder meeting, it is that these threats do not come in one massive wave, but rather as separate, smaller surges that slowly try to erode the foundations of democratic discourse. In the effort to halt these highly dispersed attacks, projects like Elections24Check, with its fact-checking database ahead of the 2024 EU elections, are important to counterbalance online disinformation.

No major incidents to report, so good news?

Now that the elections are behind us, it is safe to say that the anticipated massive wave did not wash over the EU. According to the European Digital Media Observatory’s (EDMO) Task Force on the EU Parliament elections, there were no major disinformation incidents. Is this good news? Not quite. The absence of major incidents does not mean that the digital threats identified by DRI did not become a reality. Let’s take disinformation as an example. Correctiv and DRI found that chatbots provided misinformation about the EU elections. Irrespective of the chatbot used, whether Google Gemini, Microsoft Copilot, or ChatGPT, accurate information about the voting process was not always provided. This is all the more alarming as people also use chatbots as a search engine. As DRI reminds us, “when voters are wrongly informed on electoral requirements, they may be deterred from voting (for example, thinking it is more complicated than it is), miss deadlines, or make other mistakes. In short, this unintentional misinformation can impact the right to vote and electoral outcomes.” (DRI, p. 2) This is why navigating platform power is crucial to protecting democratic principles.

But it is not just chatbots that spread disinformation around the EU elections: studies also document a variety of specific disinformation narratives, often involving attacks on politicians, anti-EU sentiment, or fake election results.

Problems – meet laws

Since February 2024, a new set of EU rules has been in place to counteract these digital challenges and to create a safer and more equitable digital space in which users’ fundamental rights are protected. In particular, the Digital Services Act (DSA) requires platforms to “put in place reasonable, proportionate and effective mitigation measures” (Art. 35 (1)) targeting, for example, “any actual or foreseeable negative effects on civic discourse and electoral processes, and public security” (Art. 34 (1) (c)).

As the DSA has now been in force for five months, platforms are obliged to develop measures to counter the digital threats mentioned above. However, as a recent study suggests, we may not be quite there yet. Looking at the five very large online platforms (VLOPs) Facebook, Instagram, TikTok, X and YouTube, the study conducted by the Spanish fact-checking portal Maldita found that in 45% of instances of disinformation content, the platforms took no visible action. With digital platforms serving as the “primary public opinion battleground”, this is not something to be taken lightly. And while the European Commission continues to open formal proceedings against service providers to ensure these obligations are met, we still have a long way to go in navigating platform power to create a safer digital environment.

Where do we go from here? 

While it may seem somewhat disappointing that the DSA’s entry into full application in February 2024 did not immediately solve all the challenges of the digital space, this does not mean that nothing is changing. To begin with, recent analyses of how disinformation agendas were deployed in the context of the EU elections are already providing valuable insights into platform-based challenges to democracy. We anticipate gaining even more insights from the individual services’ systemic risk assessments (Art. 34), to be published in autumn 2024. Moving forward, further research can build on this to better understand “what the most pressing sources of systemic risk are, where common vulnerabilities arise, and what mitigations can effectively reduce negative effects”, as the Centre on Regulation in Europe (CERRE) points out. Since navigating platform power is a highly complex endeavour, we have built our DSA Research Network on the principles of communication and collaboration: bringing together stakeholders from NGOs, academia and regulatory bodies, we set out to provide early recommendations on how to implement the DSA properly and to identify possible areas for reform early on. With our Circle of Friends, we provide a unique space for merging diverse perspectives on the challenges and opportunities of the DSA that require further research, placing further emphasis on multi-stakeholder engagement.

The project is a collaborative initiative of the Leibniz Institute for Media Research | Hans Bredow Institute, the DSA Observatory of the Institute for Information Law at the University of Amsterdam, and the Humboldt Institute for Internet and Society, funded by the Mercator Foundation. An interdisciplinary approach was built into its design: bringing together legal and social-science perspectives, we aim to provide a comprehensive view of the implementation of the DSA. To this end, we are currently working on three focus areas:

  • operationalisation of risk-based governance approaches
  • impact of hybrid and risk-based governance on collective rights and values
  • assessment of due diligence and hybrid governance from a fundamental rights perspective 

We will showcase preliminary findings from these focus areas in autumn 2024, when our Circle of Friends comes together for the first time. With the individual services’ risk assessments to be published around that time as well, we will have a solid basis for better understanding and navigating platform power in the context of the DSA.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Ann-Kathrin Watolla

Researcher: DSA research network

Matthias C. Kettemann, Prof. Dr. LL.M. (Harvard)

Head of Research Group and Associate Researcher: Global Constitutionalism and the Internet


