30 April 2020

Digital technologies and the pandemic

Dangers and opportunities

How will the coronavirus pandemic affect particular kinds of digital technologies and practices? HIIG researchers give some preliminary answers.


At this point, we have all been overwhelmed by an avalanche of predictions about the coronavirus pandemic. Some of these forecasts concern the public health emergency itself: when will it peak, how will it end, how many will die? Others consider not the crisis itself but its wider societal consequences. Is this the end of neoliberalism? Can globalization survive such a shock? Are we ever going to shake hands again? These conjectures are not, of course, disconnected from the scenarios they envision. Not all prophecies are self-fulfilling. But, by influencing people’s perceptions of what is happening, they may help to shape what actually occurs. Put another way, predictions matter, particularly at moments of great uncertainty.

With regard to digital technologies, a common notion is that this pandemic will not only increase the use of existing surveillance systems, reinforcing practices that were already prevalent, but also enable the development and deployment of new kinds of bodily monitoring – and do so in a way that appears morally justified. When life itself is at stake, some might think it acceptable to renounce certain civil freedoms. Yet while surveillance is central to understanding today’s digital technologies, it does not exhaust the vast range of areas that have been digitalized over the past decades and that might be affected by the COVID-19 crisis.

With that in mind, I asked HIIG researchers:

How will the coronavirus pandemic change, or not, particular aspects of digital technologies – and why?

The range of the responses, in the form of a few sharp paragraphs or full posts, reflects the breadth of expertise housed in the institute: platform governance and regulation of content, AI, cybersecurity, innovation, open access and scientific collaboration. Some highlight the opportunities created by this moment; others focus on the perils. Taken together, they provide a kaleidoscopic perspective from which to think about the highly complex ways in which digitalization might change in response to this crisis.


Robert Gorwa, fellow, on platform regulation

When faced with lawmakers demanding greater intervention in the types of content that users post and access online, platform companies have historically deployed a few rhetorical strategies to justify their reluctance to intervene. The oldest, and by now the most widely debunked, was their claim to ‘neutrality’ — of being a mere conduit and carrier of user behaviour, rather than its algorithmic facilitator. The latest playbook has been dominated by foregrounding the combination of (a) the technical difficulty of making content decisions at scale, for billions of users across billions of topics, and (b) the fundamental fuzziness and subjectivity of speech, which makes firm boundaries and bright-line rules extremely difficult to establish. Science communication has provided a perfect example of this: in the debate about how platforms should handle content with public health ramifications (such as anti-vaccine conspiracy theories) or environmental ramifications (such as climate change denial), firms have combined ideological arguments about the nature of free expression with technical arguments about the infeasibility of policing the boundaries of a contested and complex scientific discourse. Famously, Mark Zuckerberg argued it would be impossible and undesirable for Facebook to become such an ‘arbiter of truth’.

If it’s changing anything about today’s platform regulation landscape, the current COVID-19 pandemic is punching holes in this line of argument. As many observers have noted in the past few weeks, search engines, social networks, and other major information intermediaries have begun displaying warning notices on content related to the coronavirus, interpreting the pandemic as a clear mandate to intervene far more aggressively. The balancing act between preventing public harm and protecting speech rights has shifted to privilege the former, as firms appear to be increasing the prevalence of fully automated takedowns in the most problematic areas. This relatively muscular response, while imperfect, has led commentators to wonder why firms don’t take similar steps for other types of content widely seen as harmful. Why not run vaccine information interstitials for all anti-vax search keywords? Or ensure that searches linked to Holocaust denial surface authoritative sources, rather than conspiracy forums? Firms have shown that they can do more. Will policymakers let them go back to the previous status quo?
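To make the idea of keyword-triggered interstitials concrete, here is a minimal, purely illustrative sketch. The flagged terms and authoritative sources below are hypothetical examples, not any platform’s actual lists or implementation.

```python
# Illustrative sketch only: mapping flagged query terms to an authoritative
# source shown as an interstitial above the results. Keywords and URLs are
# hypothetical examples, not any platform's actual policy.
from typing import Optional

INTERSTITIAL_SOURCES = {
    "coronavirus": "https://www.who.int/emergencies/diseases/novel-coronavirus-2019",
    "vaccine": "https://www.who.int/health-topics/vaccines-and-immunization",
}

def interstitial_for(query: str) -> Optional[str]:
    """Return an authoritative-source URL to display, if any flagged keyword matches."""
    normalized = query.lower()
    for keyword, source in INTERSTITIAL_SOURCES.items():
        if keyword in normalized:
            return source
    return None

# Example: a coronavirus-related query triggers the informational banner.
print(interstitial_for("is the coronavirus a hoax?"))
```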


Alexander Pirang, doctoral student, on content regulation. See the full blog post here

In response to rampant online misinformation around COVID-19, major social media platforms have ramped up their efforts to address the “infodemic”. Facebook in particular appears to be seizing the chance at redemption: the company, long beleaguered by various scandals, seems to have implemented a surprisingly robust response to the pandemic.

Given the unique nature of COVID-19 misinformation, it is too early to tell whether the new efforts will crystallize into more long-term rules and practices, and what the takeaways from the battle against the coronavirus infodemic will be. Thus far, platform companies have given no indication that they intend to expand their new COVID-19 policies to other types of misinformation, such as political ads.

What we have already learned, however, is that public-value-driven content governance is no far-fetched ideal once platform companies start pulling their weight. At the same time, we should be mindful that the measures rolled out in the wake of COVID-19 are no panacea. If not implemented cautiously, they can be problems posing as solutions. Large-scale removal of misinformation, especially if carried out by automated systems, will likely lead to massive amounts of questionable decisions. Platforms’ newfound role as news powerhouses also raises gatekeeping concerns. Major challenges for the health of the online information ecosystem will therefore likely remain post-COVID-19.


Daniela Dicks, research coordinator at the AI & Society Lab, on artificial intelligence

The field of artificial intelligence has seen promising developments during this crisis. The challenging situation is pushing technology enthusiasts to become more creative and to innovate. In recent weeks, exciting ideas have emerged, such as how AI might help develop a cure or vaccine for the coronavirus.

Despite optimism and technological capabilities, times like these show that we need inclusive debates on artificial intelligence. For one thing is clear: The increasing integration of AI in political, social and cultural processes will challenge the status quo.

But it’s on us to shape the future of AI according to our needs and goals. To this end, we need to address all the pressing questions surrounding AI today. As a society, we have to agree – not only in times of crisis – on which way we want to go and how AI as a technology can ‘serve’ us. This is one of the topics that currently interests us most at HIIG’s AI & Society Lab. From autumn 2020 onward, a new research project will thus focus on “Public Interest AI”. The goal is to move away from abstract debates about ethical AI and to examine how AI can be implemented for the common good. AI is becoming so important for the future of our society that we should all have a say in how it is used.


Philip Meier, doctoral researcher, on social innovation

Historian Yuval Noah Harari recently stated in a Financial Times article that “many short-term emergency measures will become a fixture of life. That is the nature of emergencies. They fast-forward historical processes”. I therefore argue that we should actively design emergency-accelerated innovation for lasting social benefit.

The bottom-up innovation that can be observed in almost all infected regions is astounding. Digitally enabled platforms, products, and services are being developed and brought to market in record time to relieve the burdens on the most vulnerable among us. Examples include local virus information applications, peer-to-peer grocery shopping services, and remote classwork for young pupils.

By their nature, a significant number of these innovations, such as serving a person in need, address fundamental tenets of human society. If we believe Harari, at least some of them will be here to stay. I therefore argue that we should ask about the operating and ownership models for these products and services once the time of urgent need is over. Then, the innovators will have to decide how digital social benefit ought to be sustained in the tension between the monetization of their business activity and the social mission with which they started out.


Marcel Wrzesinski, Open Access officer, on the academic publishing system. See his full interview with Frédéric Dubois, managing editor of the Internet Policy Review here

As researchers, we build upon the results of others. Right now, the global community is hugely affected: research on SARS-CoV-2 needs to be accessible immediately and worldwide. This is being done through several research repository hubs (e.g., ZB Med, medRxiv/bioRxiv, or Elsevier). While this is great, providing access to research literature remains a politicised and economic decision: one could ask why the global community has not responded similarly to the HIV pandemic, or why barely any publisher opens up its research papers to counter recurrent public health crises in the Global South.
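As a small illustration of what “accessible immediately and worldwide” can mean in practice, open repositories also expose their metadata programmatically. The following is a minimal sketch, assuming the public bioRxiv/medRxiv details endpoint at api.biorxiv.org and its JSON “collection” field; the endpoint and schema are outside this interview and may change.

```python
# Minimal sketch: harvesting recent preprint metadata from an open repository.
# Assumes the public bioRxiv/medRxiv details endpoint (api.biorxiv.org) and its
# JSON "collection" field; both are assumptions here and may change over time.
import json
import urllib.request

def fetch_preprints(server="medrxiv", start="2020-03-01", end="2020-04-30", cursor=0):
    """Return a list of preprint metadata records for the given date range."""
    url = f"https://api.biorxiv.org/details/{server}/{start}/{end}/{cursor}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return data.get("collection", [])

if __name__ == "__main__":
    for record in fetch_preprints()[:5]:
        print(record.get("date"), record.get("doi"), record.get("title"))
```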

That aside, access to publicly funded research is key for a researcher’s everyday work, all over the world. The digitalisation of society enables us to be more collaborative in our work; now the publishing system needs to catch up. Open licensing and sustainable archiving are matters of fairness, particularly in a world where research funding and therefore acquisition budgets are unevenly distributed.


Benedikt Fecher, head of the research programme Knowledge & Society. See the full blog post here, originally published in Elephant in the Lab

Perhaps the most important insight I have gained over the years is that scholarly impact is a matter of complexity and that attempts by researchers to avoid complexity may ultimately reduce the impact of their work.

As serious as the COVID-19 situation is, I believe that the pandemic can be an opportunity for research to embrace complexity and to prove that things can be done better. And the good news is that it is happening right now.


Bruna Toso de Alcântara, fellow, on cybersecurity. See the full blog post here

The pandemic has generated a gold mine for malicious actors, as people’s fear of or curiosity about the virus outbreak makes them more susceptible to psychological manipulation, enabling cyberattacks based on social engineering. However, cybercriminal activity related to COVID-19 is not restricted to individuals seeking financial gain.

There have been some findings on suspected state-sponsored groups conducting cyber operations. The Thales Group’s Cyber Threat Intelligence Center and the threat intelligence company IntSights showed in their reports that more state-sponsored groups are using COVID-19 as part of their espionage campaigns. In essence, the malicious actors emulate a trusted source and offer documents with COVID-19 information, luring their targets into opening these documents and unknowingly downloading hidden malware. Once downloaded, the malware provides remote control of the infected device.

These findings are significant because the targets are typically government agencies, allowing malicious actors to gain access to sensitive state information and thus making espionage campaigns feasible.
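To make the lure mechanism described above more concrete, here is a minimal, purely defensive sketch of the kind of heuristic a mail filter might apply to COVID-19-themed phishing. The keywords, file extensions and the spoofed-sender check are hypothetical simplifications, not the detection logic used in the cited reports.

```python
# Illustrative, defensive sketch only: a naive heuristic for flagging
# COVID-19-themed lure emails of the kind described above. Keywords,
# extensions and the sender check are hypothetical simplifications.
import email
from email import policy

LURE_KEYWORDS = {"covid-19", "coronavirus", "outbreak", "health advisory"}
RISKY_EXTENSIONS = (".docm", ".xlsm", ".exe", ".js", ".scr")

def looks_like_covid_lure(raw_message: bytes) -> bool:
    msg = email.message_from_bytes(raw_message, policy=policy.default)

    # 1. The subject plays on fear of or curiosity about the outbreak.
    subject = (msg["Subject"] or "").lower()
    themed = any(keyword in subject for keyword in LURE_KEYWORDS)

    # 2. The display name imitates a trusted source (here: the WHO),
    #    while the actual sending domain does not match.
    spoofed = False
    from_header = msg["From"]
    if from_header and from_header.addresses:
        sender = from_header.addresses[0]
        display_name = (sender.display_name or "").lower()
        if "world health" in display_name and not sender.domain.lower().endswith("who.int"):
            spoofed = True

    # 3. An attached "information document" can carry executable content
    #    (e.g. macros) that would install the hidden malware.
    risky_attachment = any(
        (part.get_filename() or "").lower().endswith(RISKY_EXTENSIONS)
        for part in msg.iter_attachments()
    )

    return themed and (spoofed or risky_attachment)
```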

This post represents the view of the authors and does not necessarily or exclusively reflect the position of the institute. For more information about the topics of these posts and the associated research projects, please contact info@hiig.de

João Carlos Magalhães, Dr.

Former Senior Researcher: The evolving digital society
