Understanding our networked world
Is the COVID-19 pandemic social media platforms’ chance at redemption?
Online platforms are pushing back against the flood of false reports and conspiracy theories about the coronavirus circulating on the web. Their measures to fight the “infodemic” give cause for optimism, but they also carry considerable risks.
Over the past few weeks, as the coronavirus pandemic has disrupted public life across the globe, misinformation around COVID-19 has surged on social media platforms—from kooky origin myths linking the virus outbreak to the deployment of 5G technology in Wuhan, China, to miracle cures such as drinking chlorine dioxide, an industrial bleach. As early as mid-February, the WHO sounded the alarm about an online “infodemic,” i.e. a barrage of information about the virus—some accurate, some misleading, and some outright dangerous—that makes it difficult to find trustworthy sources.
In response, major social media platforms have ramped up their efforts to address COVID-19-related misinformation. Facebook in particular appears to be seizing the chance at redemption; the company, long beleaguered by scandal, seems to have mounted a surprisingly robust response to the pandemic. Since COVID-19 was declared a global public health emergency in January, Facebook has been working to ensure that “everyone can access credible and accurate information,” which, as Mark Zuckerberg noted in a post in early March, “is critical […] when there are precautions that you can take to reduce the risk of infection.”
To this end, the company launched a COVID-19 Information Center for real-time updates from authoritative sources and started to prominently display pop-ups connecting users to information from the WHO and regional health authorities. Facebook also expanded its partnerships with fact-checking organizations that review content in more than 50 languages. As of April 16, Facebook claims to have displayed warning labels on about 40 million posts containing false information related to COVID-19 and to have removed “hundreds of thousands of pieces of misinformation that could lead to imminent physical harm.”
Deepening a policy change already underway
Although tempting, it would be too simplistic to portray these efforts as a complete paradigm shift in how Facebook approaches content regulation. Rather, they are in line with a gradual but profound policy change that was already underway before the virus outbreak. After years of drawing sustained criticism for ruthlessly optimizing their systems for user engagement, Facebook and other major platforms have recently taken steps to reduce the spread of clickbait, manipulated media, and other problematic content, and have started to work with independent fact-checkers.
Yet, the current developments substantially reinforce this policy change. Two aspects stand out in particular: First, platform companies have taken a much more assertive stance on removing misinformation from high-level politicians. Prominent politicians such as Brazil’s President Jair Bolsonaro, who endorsed an unproven drug as a treatment for COVID-19, have seen posts and videos deleted on Twitter, Facebook, and Instagram. These takedowns are remarkable, as platforms have long shied away from removing misinformation, especially when shared by heads of state. Platforms now appear to belatedly acknowledge that dangerous misinformation, if not taken down swiftly, can have a harmful real-life impact. This particularly applies to COVID-19 misinformation from politicians and other public figures, which, according to Oxford researchers, disproportionately drives up social media engagement, whereas misinformation from ordinary people is far less visible.
Second, social media platforms are reinventing themselves as major providers of virus-related news. This especially pertains to Facebook, where an internal analysis reportedly found an “unprecedented increase in the consumption of news articles on Facebook” over the past weeks. The company even announced that it would begin showing messages in News Feed to users who have interacted with harmful misinformation about COVID-19 in the past. According to Facebook, these messages will direct users to fact-checked information on the disease.
This marks a sharp contrast to Facebook’s previous strategy; in 2018, the company overhauled its News Feed algorithm to prioritize content by family and friends, essentially turning the platform into “the virtual equivalent of a sleepy bingo parlor,” as the New York Times wrote, “an outmoded gathering place populated mainly by retirees looking for conversation and cheap fun.” Faced with a pandemic, Facebook and other platforms at last seem to fully embrace their responsibility for users’ information diet—which brings to mind Sheryl Sandberg’s evasive answer on Facebook’s role in an interview from 2017: “We’re very different than a media company. […] We don’t cover the news. But when we say that, we’re not saying we don’t have a responsibility. In fact we’re a new kind of platform… [and] as our size grows, we think we have more responsibility.”
It is not entirely clear why platforms seem to be rising to the occasion this time, given their track record of staggering from one scandal to the next. Perhaps executives like Mark Zuckerberg truly see the pandemic as a (last?) chance to prove their platforms’ value to a wary public. There is also another important aspect that sets the current moment apart: unlike, say, the bungling of the US presidential election in 2016, today’s misinformation crisis does not feel like it is of the platform companies’ own making. Facebook and others are therefore able to build public support for their actions by presenting themselves as part of the solution. Moreover, there exists broad scientific consensus regarding the basic facts about COVID-19, so platform companies do not have to worry about allegations of bias.
Lessons learned?
Given the unique nature of COVID-19 misinformation, it is also too early to tell whether the new efforts will crystallize into more long-term rules and practices, and what the takeaways from the battle against the coronavirus infodemic will be. Thus far, platform companies have given no indication that they intend to extend their new COVID-19 policies to other types of misinformation, such as false claims in political ads.
What we have already learned, however, is that public-value-driven content governance is no far-fetched ideal once platform companies start pulling their weight. At the same time, we should be mindful that the measures rolled out in the wake of COVID-19 are no panacea. If not implemented cautiously, they can be problems posing as solutions. Large-scale removal of misinformation, especially if carried out by automated systems, will likely lead to massive numbers of questionable decisions. Platforms’ newfound role as news powerhouses also raises gatekeeping concerns. Major challenges for the health of the online information ecosystem will therefore remain post-COVID-19. If platforms are serious about redeeming past mistakes, then their work has only just begun.
This post reflects the opinion of the authors and neither necessarily nor exclusively the opinion of the institute. For more information about the content of these posts and the associated research projects, please contact info@hiig.de