Digital tech and the pandemic
Perils and opportunities
How will the coronavirus pandemic affect specific kinds of digital technologies and practices? HIIG researchers offer some tentative answers.
At this point, we have all been overwhelmed by an avalanche of predictions about the coronavirus pandemic. Some of these forecasts concern the public health emergency itself: when will it peak, how will it end, how many will die? Others consider not the crisis itself but its wider societal consequences. Is this the end of neoliberalism? Can globalization survive such a shock? Are we ever going to shake hands again? These conjectures are not, of course, disconnected from the scenarios they envision. Not all prophecies are self-fulfilling. But by influencing people’s perceptions of what is happening, they may help shape what actually occurs. Put another way, predictions matter, particularly at moments of great uncertainty.
With regard to digital technologies, a common notion is that this pandemic will not only increase the use of existing surveillance systems, reinforcing practices that were already prevalent, but also enable the development and deployment of new kinds of bodily monitoring – and do so in a way that is seen as morally justified. When life itself is at stake, some might find it acceptable to renounce certain civil freedoms. Yet while surveillance is central to understanding today’s digital technologies, it hardly exhausts the many areas that have been digitalized over the past decades and that might be affected by the COVID-19 crisis.
With that in mind, I asked HIIG researchers:
How will the coronavirus pandemic change, or not, particular aspects of digital technologies – and why?
The range of the responses, in the form of a few sharp paragraphs or full posts, reflects the breadth of expertise housed in the institute: platform governance and regulation of content, AI, cybersecurity, innovation, open access and scientific collaboration. Some highlight the opportunities created by this moment; others focus on the perils. Taken together, they provide a kaleidoscopic perspective from which to think about the highly complex ways in which digitalization might change in response to this crisis.
Robert Gorwa, fellow, on platform regulation
When faced with lawmakers demanding greater intervention in the types of content that users post and access online, platform companies have historically deployed a few rhetorical strategies to justify their reluctance to intervene. The oldest, and by now the most widely debunked, was their claim to ‘neutrality’: being a mere conduit and carrier of user behaviour, rather than its algorithmic facilitator. The latest playbook has been dominated by foregrounding the combination of (a) the technical difficulty of making content decisions at scale, for billions of users across billions of topics, and (b) the fundamental fuzziness and subjectivity of speech, which makes firm boundaries and bright-line rules extremely difficult to establish. Science communication has provided a perfect example of this: in the debate about how platforms should handle content with public health ramifications (such as anti-vaccine conspiracy theories) or environmental ramifications (such as climate change denial), firms have combined ideological arguments about the nature of free expression with technical arguments about the unfeasibility of policing the boundaries of a contested and complex scientific discourse. Famously, Mark Zuckerberg argued it would be impossible and undesirable for Facebook to become such an ‘arbiter of truth’.
If it is changing anything about today’s platform regulation landscape, the current COVID-19 pandemic is punching holes in this line of argument. As many observers have noted in the past few weeks, search engines, social networks, and other major information intermediaries have begun displaying warning notices on content related to the coronavirus, interpreting the pandemic as a clear mandate to intervene far more aggressively. The balancing act between preventing public harms and protecting speech rights has shifted to privilege the former, as firms seem to be increasing the prevalence of fully automated takedowns in the most problematic areas. This relatively muscular response, while imperfect, has led commentators to wonder why firms don’t take similar steps for other types of content seen as harmful. Why not run vaccine information interstitials for all anti-vax search keywords? Or ensure that searches linked to Holocaust denial surface authoritative sources rather than conspiracy forums? Firms have shown that they can do more. Will policymakers let them go back to the previous status quo?
Alexander Pirang, doctoral student, on content regulation. See the full blog post here
In response to rampant online misinformation around COVID-19, major social media platforms have ramped up their efforts to address the “infodemic”. Facebook in particular appears to be seizing the chance at redemption: the company, long beleaguered by various scandals, seems to have implemented a surprisingly robust response to the pandemic.
Given the unique nature of COVID-19 misinformation, it is too early to tell whether the new efforts will crystallize into more long-term rules and practices, and what the takeaways from the battle against the coronavirus infodemic will be. Thus far, platform companies have given no indication that they intend to extend their new COVID-19 policies to other types of misinformation, such as political ads.

What we have already learned, however, is that public-value-driven content governance is no far-fetched ideal once platform companies start pulling their weight. At the same time, we should be mindful that the measures rolled out in the wake of COVID-19 are no panacea. If not implemented cautiously, they can be problems posing as solutions. Large-scale removal of misinformation, especially if carried out by automated systems, will likely lead to massive numbers of questionable decisions. Platforms’ newfound role as news powerhouses also raises gatekeeping concerns. Major challenges for the health of the online information ecosystem will therefore likely remain post-COVID-19.
Daniela Dicks, research coordinator at the AI & Society Lab, on artificial intelligence
During this crisis, the field of artificial intelligence has seen promising developments. The challenging situation is pushing technology enthusiasts to become more creative and to innovate. In recent weeks, exciting ideas have emerged, such as using AI to help develop a cure or vaccine for the coronavirus.
Despite the optimism and the technological capabilities, times like these show that we need inclusive debates on artificial intelligence. One thing is clear: the increasing integration of AI into political, social and cultural processes will challenge the status quo.
But it is on us to shape the future of AI according to our needs and goals. To this end, we need to address the pressing questions surrounding AI today. As a society, we have to agree – not only in times of crisis – which way we want to go and how AI as a technology can ‘serve’ us. This is one of the topics that currently interests us most at HIIG’s AI & Society Lab. From autumn 2020 onward, a new research project will therefore focus on “Public Interest AI”. The goal is to move away from abstract debates about ethical AI and to examine how AI can be implemented for the common good. AI is becoming so important for the future of our society that we should all have a say in how it is used.
Philip Meier, doctoral researcher, on social innovation
Historian Yuval Noah Harari recently stated in a Financial Times article that “many short-term emergency measures will become a fixture of life. That is the nature of emergencies. They fast-forward historical processes”. I therefore argue for actively designing emergency-accelerated innovation for lasting social benefit.
The bottom-up innovation that can be observed in almost all infected regions is astounding. Digitally enabled platforms, products, and services are developed and brought to market in record time to ease the hardships of the most vulnerable among us. These include local virus information applications, peer-to-peer services for grocery shopping, and remote classwork for young pupils.
By nature, a significant number of these innovations, such as serving a person in need, address fundamental tenets of human society. If we believe Harari, at least some of them are here to stay. My claim, therefore, is that we should ask about the operating and ownership models for these products and services once the time of urgent need is over. Then the innovators will have to decide how digital social benefit ought to be sustained in the tension between the monetization of their business activity and the social mission with which they started out.
Marcel Wrzesinski, Open Access officer, on the academic publishing system. See his full interview with Frédéric Dubois, managing editor of the Internet Policy Review here
As researchers, we build upon the results of others. Right now, the global community is hugely affected: research on SARS-CoV-2 needs to be accessible immediately and worldwide. This is happening through several research repository hubs (e.g., ZB Med, medRxiv/bioRxiv, or Elsevier). While this is great, providing access to research literature remains a politicised and economic decision: one could ask why the global community does not respond similarly to the HIV pandemic, or why barely any publisher opens up their research papers to counter recurrent public health crises in the Global South.
That aside, access to publicly funded research is key to researchers’ everyday work, all over the world. The digitalisation of society enables us to be more collaborative in our work; now the publishing system needs to catch up. Open licensing and sustainable archiving are matters of fairness, particularly in a world where research funding, and therefore acquisition budgets, are unevenly distributed.
Benedikt Fecher, head of research programme Knowledge & Society. See the full blog post here, originally published in Elephant in the Lab
Perhaps the most important insight I have gained over the years is that scholarly impact is a matter of complexity and that attempts by researchers to avoid complexity may ultimately reduce the impact of their work.
As serious as the COVID-19 situation is, I believe that the pandemic can be an opportunity for research to embrace complexity and to prove that things can be done better. And the good news is that it is happening right now.
Bruna Toso de Alcântara, fellow, on cybersecurity. See the full blog post here
The pandemic has generated a gold mine for malicious actors: people’s fear of, and curiosity about, the virus outbreak make them more susceptible to psychological manipulation, enabling cyberattacks based on social engineering. However, COVID-19-related cybercriminal activity is not restricted to individuals seeking financial gain.
There have also been findings on suspected state-sponsored groups conducting cyber operations. Reports by the Thales Group’s Cyber Threat Intelligence Center and the threat intelligence company IntSights show that more state-sponsored groups are using COVID-19 as part of their espionage campaigns. In essence, the malicious actors impersonate a trusted source and offer documents with COVID-19 information, luring their targets into opening these documents and unknowingly downloading hidden malware. Once downloaded, the malware gives the attackers remote control over the infected device.
These findings are significant because the targets are typically government agencies, which gives malicious actors potential access to sensitive state information and thus makes espionage campaigns feasible.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.