Content moderation on social media platforms
While social media platforms see themselves as neutral, analysing their structure, their operational dynamics and the associated regulatory frameworks is becoming ever more complex. Our guest author Sana Ahmad researches the content moderation industry in India.
The term ‘social media’ has come a long way. Information scientists and interpersonal communication researchers of the early 1980s spoke of ‘computer-mediated communication’ in the form of emails, forums and BBSs; the 1990s brought the label ‘new media’, distinguishing it from broadcast media; in the mid-2000s, ‘Web 2.0’ described then-growing social technologies such as MySpace, Wikipedia and Reddit; and the broad-ranging ‘digital media’ also took in video games, e-books and internet radio. Today, ‘social media’ is the commonly used term.
It was Tarleton Gillespie, a Microsoft researcher, however, who helped facilitate the definitional evolution of social networking sites into ‘social media platforms’. He sheds light on the growth of the digital intermediaries he identifies as ‘platforms’ and recommends looking at social media sites as platforms in terms of their ‘technical design, economic imperatives, regulatory frameworks and public character’.
This is an important development, to which other critical theorists of political economy have contributed as well. However, with the prevalence of hate speech, violent content, fake news and other illicit material on social media platforms, as well as their role in manipulating democratic processes and influencing election outcomes, sudden public interest has been sparked in how these otherwise leniently regulated platforms operate.
Countries such as Germany, Austria and now the USA (especially in the wake of the 2016 US election scandal and the Cambridge Analytica controversy) are using legal channels to prohibit hate speech and violent content on social media platforms. However, there are conceptual difficulties in defining hate speech, which at times blurs into users’ freedom of expression. Examples such as the GamerGate scandal, a misogynistic campaign of hate-driven harassment of women in the world of video games, the manifestations of online hate in the era of bulletin board systems, or the progression of 4chan and its infamous random board /b/ provide grounds for analyzing the complex interplay of factors such as power, history, culture and subjectivity in networked communicative practices.
Content moderation practices are treated as industrial secrets
While there is an ongoing discussion about the need to protect social media users from harmful content and to enable stringent regulatory measures, there is not enough information on the industrial-level processes of moderating and controlling illicit content on these platforms. From what is known, content moderation practices are treated as industrial secrets by the social media companies, on the grounds of protecting the identity of the workers (the moderators), of guarding their tech property, or simply because disclosure would create further liability for the moderators.
Moreover, moderation on social media platforms is publicly understood in terms of automation. Technologies such as PhotoDNA, an image-matching tool used against child exploitation material, developments in adaptive listening technology to assess user intent, or 3D modelling technology modelled on industrial assembly-line moderation assist in moderating the mammoth amount of content posted online. The question worth asking, however, is whether these automated technologies are capable of handling cases involving satire, awareness or education, or politically sensitive issues.
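How such matching works, and why it falls short on context, can be sketched in a few lines. The snippet below is a toy illustration only: the hash database and function names are hypothetical, and PhotoDNA itself is a proprietary perceptual hash robust to resizing and re-encoding, unlike the exact cryptographic hash used here. An upload is flagged when its fingerprint matches a database of previously identified images; nothing in this pipeline can tell whether a borderline image is journalism, satire or education.

```python
import hashlib

# Toy sketch of hash-based image matching (hypothetical example).
# Real systems such as PhotoDNA use proprietary perceptual hashes that
# survive resizing and re-encoding; SHA-256 only catches exact copies.

KNOWN_ILLEGAL_HASHES = {
    # Hypothetical database of fingerprints of previously flagged images.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_illegal(image_bytes: bytes) -> bool:
    """Flag an upload whose fingerprint matches a known-bad entry."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_ILLEGAL_HASHES

# The match is purely mechanical: it recognises known images but cannot
# judge context -- satire, news reporting and education look identical.
```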
There is much that could be written about the discrepancies in assuming that machines are appropriating human jobs, an assumption that contradicts the wealth of academic literature indicating that humans still occupy menial service-sector jobs. The focus of this blog post, however, remains on the importance of researching existing content moderation labour practices. Researchers such as Sarah Roberts and Gillespie have been occupied with shedding light on industrial-level commercial content moderation. Yet these research pieces, along with sporadic media articles and carefully packaged audio-visual documentaries, have to peer through the closed doors of an industry that guards its secrets heavily.
My doctoral project looks at the content moderation industry’s production model, with a focus on labour practices in India. This research is exciting because it enables me to learn about this invisible work, performed by moderators in exchange for low wages and without basic work standards. While social media companies also maintain small, highly skilled in-house moderation teams, the work is often outsourced across national borders, either to a content management company and/or online to a global pool of freelancers through both international and domestic online labour markets. A large share of this work is outsourced to India, where dreams of belonging to the Information and Communication Technology sector run high. India’s pre-existing business connections, markedly lower wage rates and weak regulatory frameworks have made the country a popular destination for work shed from the Global North.
Studying this subject is not straightforward, especially given the lack of access to company policies and workers’ testimonies. Nevertheless, it is a risk worth taking in order to begin to understand the current production and consumption models of networked communication systems.
Sana Ahmad is a doctoral candidate at Freie Universität Berlin, researching the content moderation industry in India. She is currently a visiting researcher in the group “Globalisation, Work and Production” at the Wissenschaftszentrum Berlin für Sozialforschung (WZB).
This post reflects the opinion of the authors, and neither necessarily nor exclusively the opinion of the institute. For more information about the content of these posts and the associated research projects, please contact info@hiig.de