30 June 2014

All the world’s a laboratory? On Facebook’s emotional contagion experiment and user rights

by Cornelius Puschmann and HIIG Fellow Engin Bozdag

How significant is the impact of what we read on Facebook on what we post there, specifically on our emotions? And to what extent can we trust what we read on social media sites more generally, when what we see is increasingly filtered using algorithmic criteria that are largely opaque? The first question is addressed in a controversial study recently published in the journal PNAS; the second is what we want to critically discuss in this blog post.

The article Experimental evidence of massive-scale emotional contagion through social networks by Adam D. Kramer (Facebook), Jamie E. Guillory (University of California) and Jeffrey T. Hancock (Cornell University) recently provoked some very strong reactions both on international news sites and among scholars and bloggers (e.g. The Atlantic, Forbes, Venture Beat, The Independent, The New York Times; James Grimmelman, John Grohol, Tal Yarkoni, Zeynep Tufekci, Michelle N. Meyer, Thomas J. Leeper, Brian Keegan, David Gorski). The New York Times’ Vindu Goel surmises that “to Facebook, we are all lab rats” and The Atlantic’s Robinson Meyer calls the study a “secret mood manipulation experiment”. Responses from scholars have been somewhat more mixed: several have noted that the research design and the magnitude of the experiment have been poorly represented by the media, while others argue that there has been a massive breach of research ethics. First author Adam D. Kramer has responded to the criticism with a Facebook post in which he explains the team’s aims and apologizes for the distress that the study has caused.

So what is the issue? The paper tests the assumption that basic emotions, positive and negative, are contagious, i.e. that they spread from person to person through exposure. This has been tested for face-to-face communication in laboratory settings before, but not online. The authors studied roughly three million English-language posts written by close to 700,000 users in January 2012. The researchers adjusted the Facebook News Feed of these users to randomly filter out specific posts with positive and negative emotion words that the users would normally have been exposed to, and then studied the emotional content of the subjects’ posts in the following period. Kramer and colleagues stress that no content was added to anyone’s News Feed, and that the percentage of posts filtered out of the News Feed in this way was very small. The basis for the filtering decision was the LIWC software package, developed by James Pennebaker and colleagues at the University of Texas, which is used to correlate physical well-being with word usage. LIWC’s origins lie in clinical environments, and the approach was originally tested on diaries and other very personal (and fairly wordy) genres, rather than short Facebook status updates, a potential methodological issue that John Grohol points out in his blog post.
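To make the classification step more concrete, here is a minimal Python sketch of the general dictionary-based approach that LIWC embodies: a post is scored by counting occurrences of words from predefined lists of positive and negative emotion terms. The word lists below are invented placeholders, not the actual LIWC dictionaries, and the code illustrates the general technique rather than Facebook’s production system.

```python
# Illustrative dictionary-based emotion scoring in the spirit of LIWC.
# The word lists are invented placeholders, not the real LIWC categories.

POSITIVE = {"happy", "love", "great", "wonderful", "good"}
NEGATIVE = {"sad", "hate", "awful", "angry", "hurt"}

def emotion_counts(post: str) -> dict:
    """Count positive and negative emotion words in a status update."""
    words = [w.strip(".,!?;:") for w in post.lower().split()]
    return {
        "positive": sum(w in POSITIVE for w in words),
        "negative": sum(w in NEGATIVE for w in words),
        "total": len(words),
    }

def contains_emotion(post: str, polarity: str) -> bool:
    """A post counts as emotional if it contains at least one word
    from the respective category, as described in the paper."""
    return emotion_counts(post)[polarity] > 0

print(contains_emotion("What a great day, I love this!", "positive"))  # True
```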

What did Kramer and his co-authors discover? The study’s central finding is that basic emotions are in fact contagious, though the influence the authors measured is relatively small. However, they note that given the large sample, the global effect is still important, and they argue that emotional contagion had not previously been observed in a computer-mediated setting based purely on textual content. Psychologist Tal Yarkoni has responded that, given the small size of the observed effect, speaking of manipulation is really overblown, and that similar ‘nudges’ are made in online platforms all the time without the knowledge or consent of users.

So much for the results; it is rather the ethical aspects of the experiment that, unsurprisingly, provoked a strong response. The 689,003 users whose News Feeds were changed between January 11 and 18, 2012 were not aware of their participation in the experiment and had no way of knowing how their News Feeds were adjusted. In their defense, Kramer and colleagues point out (1) that the content omitted from the News Feed as part of the experiment was still available by going directly to the user’s Wall, (2) that the percentage of omitted content was very small, and (3) that the content of the News Feed is generally the product of algorithmic filtering rather than a verbatim reproduction of everything that your friends are posting. In other words, they merely added a filter to the News Feed and conducted an A/B test for the impact of the filtering. Furthermore, they stress that no content was examined manually, that is, read by a human researcher, and that all classification was performed automatically by LIWC. This step was taken both to achieve the large scale of the study and to ensure that no breach of privacy took place. From the paper:

“LIWC was adapted to run on the Hadoop Map/Reduce system (11) and in the News Feed filtering system, such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research”

It is a subject of intense debate whether or not agreeing to the Facebook Terms of Service constitutes informed consent to an experiment in which the News Feed is manipulated in the described way. What seems certain is that this kind of research raises a whole host of questions, from the responsibility of the institutional review boards (IRBs) charged with ensuring that academic research is ethically acceptable, to Facebook’s right to conduct such research in the first place. As has been pointed out, internet companies change their algorithms and filtering mechanisms all the time without informing anyone about it, and legally there is no need for them to do so. But for many commentators a line has apparently been crossed, from optimizing a product to studying and influencing behavior without seeking consent.

While the study has provoked strong reactions, it is worth pointing out that this is not the first time that Facebook has filtered users’ News Feeds for research purposes without acquiring prior consent. In a 2012 study on information diffusion, Facebook researchers Eytan Bakshy, Itamar Rosenn, Cameron Marlow, and Lada Adamic found that novel information in online platforms mainly propagates through weak ties, in other words that news travels through groups of relatively informal acquaintances. To study this effect in more detail than had previously been possible, the researchers randomly blocked some status updates from the News Feeds of a pool of some 250 million users, many more than in the emotional contagion experiment.

While the blame has focused on Facebook, it is by no means the only company that performs such experiments. A/B testing is a standard practice that internet companies use to improve their products; Google provides a set of tools to conduct A/B tests for website optimization, as does Amazon. Beyond A/B testing to improve the quality of search results, issues become yet more complicated when experiments around information exposure are conducted with social improvement in mind and without explicit consent. In another recent experiment, researchers at Microsoft changed search engine results in order to promote civil discourse. In the study in question, the authors modified the search results displayed for specific political queries, so that users entering the query obamacare would be exposed to both liberal and conservative sources, rather than just to content biased in one ideological direction. In light of the discrepancy between the ethical standards of academic research on human subjects and the entirely different requirements of building and optimizing social media platforms and search engines, it is tempting but simplistic to single out Facebook for filtering content algorithmically. But the public outcry underlines a growing expectation of more transparency regarding how content is filtered and presented, rather than a ‘take it or leave it’ attitude. Social media platforms cater to consumers, not to citizens, but they increasingly carry a responsibility for vital information that influences people’s decisions, and for the transparency of the mechanisms used to filter that information.
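To illustrate what such a test involves, the following Python sketch shows the basic logic of an A/B test: users are deterministically assigned to a control or a treatment group, and an outcome metric is then compared between the two. It is a minimal illustration under assumed names and numbers, not the tooling that Facebook, Google, or Amazon actually use.

```python
# Minimal sketch of A/B test logic: deterministic assignment to control or
# treatment, followed by a comparison of an outcome metric between groups.
# All names and numbers here are illustrative assumptions.

import hashlib
import random
from statistics import mean

def assign_variant(user_id: int, experiment: str = "experiment-42") -> str:
    """Stable 50/50 split based on a hash of the experiment name and user id."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

def summarize(outcomes: dict) -> None:
    """Compare the average outcome (e.g. rate of positive emotion words) per group."""
    groups = {"control": [], "treatment": []}
    for user_id, value in outcomes.items():
        groups[assign_variant(user_id)].append(value)
    for name, values in groups.items():
        print(f"{name}: n={len(values)}, mean outcome={mean(values):.4f}")

# Simulated per-user outcome, standing in for a metric observed after the change.
simulated = {uid: random.gauss(0.05, 0.01) for uid in range(10_000)}
summarize(simulated)
```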

What the storm of criticism in response to the research clearly shows is the implicit expectation of users, scholars and the media alike that what we see on Facebook should be a reflection of what our friends and acquaintances are actually saying, rather than an opaque, curated experience, even when the curation is to our supposed benefit. The paper’s editor, Susan Fiske (Princeton), noted the complexity of the situation in a response to The Atlantic, pointing out that the Institutional Review Board of the authors’ institutions did approve the research, and arguing that Facebook could not be held to the same standards as academic institutions. Kramer and colleagues clearly saw their experiment as in line with Facebook’s continued efforts to optimize the News Feed. Again, from the paper:

“In Facebook, people frequently express emotions, which are later seen by their friends via Facebook’s “News Feed” product … Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging.”

This characterization clarifies that the approach taken in the study is quite consistent with Facebook’s goal of producing a more engaging user experience. Increasing engagement is also consistent with the goal of achieving higher click-through rates for targeted advertisements, which is in the legitimate interest of any social media platform. What is less clear is whether the same approach is also consistent with social science research ethics and with the public’s moral expectations towards companies with virtually unlimited access to their data. There is justifiably the expectation that we are all subject to the same basic type of filtering and that Facebook should be open about how it determines the content of the News Feed. The deterministic argument made by some that users should simply accept that platform providers can run their service as they please seems short-sighted in light of these expectations, which have grown just as the reach of these platforms has grown.

In the following we summarize and discuss five arguments that have been made in defense of Facebook’s (and other companies’) approach to this type of experimental research.

1. No manipulation has occurred because the researchers did not insert messages, but just filtered existing ones, and the effects were minimal

It has been argued that because Facebook did not insert emotional messages into the News Feed, but only hid certain posts for certain users, the experiment does not constitute manipulation. However, according to some scholars who have worked on persuasion (Smids, Spahn), persuasion that does not happen voluntarily, and in which the persuader does not reveal their intentions before the persuading act takes place, is to be considered manipulative. Others argue that involuntary persuasion is acceptable only if there is a very significant benefit for society that would outweigh possible harms. In the case of Facebook, it is difficult to justify the action, as it was not voluntary and the benefits hardly seem to outweigh the harms. The lack of transparency towards the participants is likely to weigh more heavily in the eyes of most users than the small size of the effect and the details of how the filtering was conducted.

2. The News Feed is the result of algorithmic filtering anyway

Another claim is that the News Feed is constantly being adjusted and improved as a result of countless ongoing experiments. For instance, Facebook already personalizes various aspects of the platform in order to keep the experience interesting, so that users are more likely to return to the site frequently. Since Facebook use is always an experiment, the argument goes, this is not a special case and the complaint is therefore unjustified. However, Facebook has previously been criticized for tailoring its News Feed without properly explaining the criteria, and this criticism is unlikely to simply disappear. While personalization is an instrument to counter information overload, corrupt personalization seems an increasingly relevant issue. An average Facebook post reaches only 12% of a user’s followers. As of January 2014, community and organization pages cannot reach their subscribers through regular status updates; they are instead forced to use ad campaigns. Tailoring content in a user’s best interest and then making a profit from this service is one thing; prioritizing commercial content over regular content is another (Google has also been accused of this). If Facebook makes a profit but also aims to serve the public’s best interest, then its algorithms must be transparent enough for the public to judge them on that.

3. Social media companies are not bound to the same standards as publicly funded research

Another widely held argument is that private companies can make changes to their services as they see fit. End users have generally accepted the Terms of Service and should accordingly accept the consequences. However, Facebook, like Google, is not just any Internet company. It has reached a dominant market position in which there is barely any competition, and in many respects it acts as a public service that many people rely on completely. Public goods are important for society and democracy, and are therefore often regulated.

Another argument along these lines is that social media companies are under constant pressure to improve their products and that such experiments are the most efficient way of achieving this. However, there are inherent risks associated with systematic user profiling. Different people react differently to strategic persuasion, and companies are therefore likely to devise different persuasion strategies. These include authority (the user values the opinion of an expert), consensus (users do as others do), and liking (users say yes to people they like). Different strategies can be applied to different persuasion profiles: depending on a user’s susceptibility to a particular strategy, the system can be tailored to achieve persuasiveness.
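Purely as an illustration of the idea, here is a hypothetical Python sketch of what such a persuasion profile could look like: per-user response rates for each strategy, used to pick the strategy a user has historically responded to most. The strategy names follow the categories mentioned above; all data, names and numbers are assumptions made for the example.

```python
# Hypothetical sketch of a persuasion profile: observed response rates per
# strategy for one user, and a selector that picks the most effective one.
# All data and names are illustrative assumptions.

from typing import Dict

PersuasionProfile = Dict[str, float]  # strategy -> observed response rate

def pick_strategy(profile: PersuasionProfile) -> str:
    """Return the strategy with the highest observed response rate."""
    return max(profile, key=profile.get)

# Example: this (fictional) user responds most strongly to consensus cues,
# so the system would favor "your friends also did X" style messages.
example_user: PersuasionProfile = {"authority": 0.02, "consensus": 0.11, "liking": 0.07}
print(pick_strategy(example_user))  # consensus
```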

Once a platform provider knows which persuasion strategy works for a particular user, such a persuasion profile can be sold to third parties or used for other purposes, such as political advertising. In a 61-million-user experiment in 2010, Facebook users were shown messages at the top of their News Feeds that encouraged them to vote, pointed to nearby polling places, offered a place to click “I Voted”, and displayed images of select friends who had already voted (the “social message”). The data suggest that the Facebook social message increased turnout by about 340,000 votes. Recently, Jonathan Zittrain argued that if Facebook can persuade users to vote, it can also persuade them to vote for a certain candidate.

4. Experiments are constantly performed by social media companies

Some claim that online experiments should be accepted as a fact of life, since every social media company conducts them. However, just because this is how it is does not mean this is how it should be. In a paper on nanotechnologies, Ibo van de Poel lists a number of criteria that must be fulfilled in order to justify a social experiment: a social experiment is only acceptable when (1) there is an absence of alternatives, (2) the experiment is controllable, (3) users give their informed consent, (4) the hazards and benefits are proportional, (5) the experiment is approved by democratically legitimized bodies, (6) subjects can influence the set-up, carrying out and stopping of the experiment if needed, and (7) vulnerable subjects are either not included or specially protected. Clearly, many online intermediaries do not adhere to most of these principles. In the case of Facebook it is clear that the company has done this testing for its own purposes, but, as we have mentioned, such testing can also be done with the intention of improving society, with implications that are no less problematic. It follows that all actors involved need to jointly discuss and devise criteria for the ethics of online experiments and big data research involving human subjects, in accordance with existing guidelines.

5. Criticism will lead to less open publication of industry research results

There is a very real danger that the wave of public outrage (and in some cases very personal attacks on the authors of the study) will lead to less cooperation between industry and academia. This may ensure that researchers at publicly funded institutions do not participate in research with potentially questionable aims, but it also has a number of problematic consequences. If social media industry research becomes a complete black box, ethical standards in industry research are likely to suffer, rather than improve. There is a potential for real cross-fertilization between the two, but it lies in industry research becoming more like academia, rather than the other way around. Perhaps this is illusory, but it seems certain that industry research will grow in coming years, as the user bases of social media platforms and other online services continue to grow.

Data science must follow stricter standards — both methodologically and ethically — to deliver on its many promises. Laboratories, regardless of their size, are governed by rules ensuring that the research conducted in them is not just legal, but also ethical. We need to start devising similar rules for online research as well. Hiding behind the ToS will not do.


Image: Flickr, Paul Butler, Planet Facebook or Planet Earth? 

Cornelius Puschmann is an associated researcher of the Alexander von Humboldt Institute for Internet and Society, Engin Bozdag is an HIIG Fellow. The post does not necessarily represent the view of the Institute itself. For more information about the topics of these articles and associated research projects, please contact presse@hiig.de.


