24 July 2018 | doi: 10.5281/zenodo.1327555

A risk worth taking: Studying content moderation on social media platforms

While social media platforms choose to identify themselves as neutral platforms, there is growing complexity in how to analyze their structure and operational dynamics, as well as the regulatory frameworks that must accompany them. Guest author Sana Ahmad researches the content moderation industry in India.

The term ‘social media’ has come a long way. From its early incarnation as ‘computer-mediated communication’ in the form of emails, forums and bulletin board systems (BBSs), identified by information scientists and interpersonal communication researchers in the early 1980s, to the evolutionary terminology (borrowed from broadcast media) of ‘new media’ in the 1990s, to ‘Web 2.0’ in the mid-2000s for then-growing social technologies such as MySpace, Wikipedia and Reddit, to the broad-ranging ‘digital media’, which also includes video games, e-books and internet radio, ‘social media’ has become a commonly used term.

However, it was Tarleton Gillespie, a Microsoft researcher, who helped facilitate the definitional evolution of social networking sites into ‘social media platforms’. He sheds light on the growth of the digital intermediaries he identifies as ‘platforms’ and recommends looking at social media sites as platforms in terms of their ‘technical design, economic imperatives, regulatory frameworks and public character’.

This is an important development, to which other critical theorists of political economy have contributed as well. However, with the prevalence of hate speech, violent content, fake news and other illicit material on social media platforms, as well as their role in manipulating democracies and influencing election outcomes, sudden public interest has been sparked in how these otherwise leniently regulated platforms operate.

Countries such as Germany, Austria and now the USA (especially in the wake of the 2016 US election scandal and the Cambridge Analytica controversy) are using legal channels to prohibit hate speech and violent content on social media platforms. However, there are conceptual difficulties in defining hate speech, which at times overlaps with users’ freedom of expression. Examples such as the GamerGate scandal, a misogynistic campaign of hate-driven harassment of women in the world of video games, the manifestations of online hate in the era of bulletin board systems, or the progression of 4chan and its infamous random board /b/, provide grounds for analyzing the complex interplay of power, history, culture, subjectivity and other factors in networked communicative practices.

Content moderation practices are treated as industrial secrets

While there is an ongoing discussion on the need to protect social media users from harmful content and to enable stringent regulatory measures to do so, there is not enough information on the industrial-level processes of moderating and controlling illicit content on these platforms. From what is known, content moderation practices are treated as industrial secrets by the social media companies, on the grounds of protecting the identity of the workers (the moderators), guarding their tech property, or simply because disclosure would create further liability for the moderators.

Further, moderation on social media platforms is publicly understood mainly in terms of automation. Technologies such as PhotoDNA, an image-detection tool used against child exploitation material, developments in adaptive listening technology to assess user intent, or even 3D modeling technology modeled on industrial assembly-line moderation, assist in moderating the mammoth amount of content posted online. However, the question worth asking is whether these automated technologies are capable of detecting cases involving satire, awareness- or education-related content, or politically sensitive issues.

Much can be written about the discrepancies involved in assuming that machines will appropriate human jobs, a claim that contradicts the wealth of academic literature indicating that humans still occupy menial service-sector jobs. However, the focus of this blog post remains on the importance of researching existing content moderation labour practices. Researchers such as Sarah Roberts and Gillespie have been occupied with shedding light on industrial-level commercial content moderation. However, these research pieces, along with sporadic media articles and carefully packaged audio-visual documentaries, have to peer through the closed doors of an industry that guards its secrets heavily.


My doctoral project looks at the content moderation industry’s production model, with a focus on labour practices in India. This research is exciting, especially because it enables me to learn about this invisible work, performed by moderators in exchange for low wages and without basic work standards. While social media companies also maintain small, highly skilled in-house moderation teams, the work is often outsourced across national borders, either to a content management company and/or online to a global pool of freelancers through both international and domestic online labour markets. A large share of this work is outsourced to India, where dreams of belonging to the Information and Communication Technology sector run high. India’s pre-existing business connections, markedly lower wage rates and weak regulatory frameworks have made the country a popular destination for work offloaded from the Global North.

The process of studying this subject is not uncomplicated, especially given the lack of access to companies’ policies and workers’ testimonials. Nevertheless, it is a risk worth taking in order to begin understanding the current production and consumption models of networked communication systems.


Sana Ahmad is a PhD student at the Freie Universität Berlin and is writing her thesis on the content moderation industry in India. She is currently affiliated as a guest researcher with the “Globalisation, Work and Production” unit at the Wissenschaftszentrum Berlin für Sozialforschung (WZB).

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

