11 February 2020 | doi: 10.5281/zenodo.3752951

Packaging and scholarship – a summary of Zuboff’s talk during the Making Sense of the Digital Society lecture series.

Is the following information relevant to you? Tech companies collect and sell your data to business customers, in some cases since the early 2000s. Strategies developed from this analysis “nudge” your feelings and behaviour without you being aware of it. During the Making Sense of the Digital Society event, Shoshana Zuboff – scholar, activist and professor emerita at Harvard Business School – elaborated on the development of and outlook for data usage. She foresees the “remote control” of humans as systems and concluded with an appeal for stronger regulation. Tina Krell sums up Zuboff’s lecture and wonders whether her predictions are as likely to come true as she thinks.


Expelled from Wonderland.

On the first and second decades of internet use, hopes and understanding.

Our attitudes toward the world wide web have changed. In the early 2000s, people were like Alice in Wonderland chasing the white rabbit. The internet era promised users free use and exchange: internet services were offered at no charge, privacy was considered private, and information on literally anything was just a fingertip away. Two decades later, public understanding has changed. Internet users realized that they were “searched as well”: they were paying for ostensibly free services with their data. Most importantly, companies were selling this information to business customers. Those buyers then use it to, for example, optimize facial recognition algorithms, run statistical analyses and make more precise predictions about users, which in turn help them tailor advertising and marketing strategies. This is how Zuboff started her lecture and – as we will see in the following – this is her major point of criticism.

A new era is beginning, which Zuboff calls “economies of action”. As is common among scholars, she does not disappoint in coming up with her own terminology for this new phenomenon. She describes the productification of human data as the “trade in human futures”, and the generated profit as the “surveillance dividend” – terms drawn from her most recent book, “The Age of Surveillance Capitalism”.

Economies of action, she explained, mark the shift from the monitoring of people to the “actualisation of people”. By that, Zuboff says, she draws on an idea from systems management: we have collected so much information about people that we can manage and remote-control them as human systems. To her, the large tech corporations in particular follow “epistemic dominance plans”, asking how to “automate remote control human behavior at scale”. Her prologue ended with the observation that “we enter the third decade of the 21st century, marked by an extreme new form of social inequality that threatens to remake society as it unmakes democracy.”

Before I dive deeper into the content, a few words on the evening itself: the venue was packed. Zuboff entertained an audience of roughly 900 people, and she managed that impressively well. Over the course of 60 minutes she kept the audience’s attention through illustrations, anecdotes and interactions. She structured her talk clearly, starting with exemplary cases, followed by her analysis and finishing with a call to action. It was easy to follow. Yet, to some extent, I was left confused about some core aspects of her analysis and terminology. In the following, I will give you a summary of her key points along with some of my thoughts and concerns. In short: (dis)agreeing with Zuboff in this lecture is not so much a scholarly decision as a political one.

From the beauty of probability to “behavioral nudging”.

On the fundamental change in advertising strategies.

There are multiple ways to get and keep people’s attention. Zuboff chose a collection of exemplary cases and built her analysis from there. The examples illustrate the different phases – and the length and variety of trials – that companies have undertaken, and still undertake, to “remote control human behaviour at scale”. She focused exclusively on the well-known large American tech companies:

| When | Who | What |
|------|-----|------|
| 2000 | Google | Google started collecting user information, which became clear in the accompanying prospectus at its initial public offering (IPO). |
| 2014 | Facebook | A study of Facebook users indicated how emotional states can be transmitted to others through networks (not in person) without their awareness. |
| 2016 | Niantic Labs (Google) | Google-owned Niantic Labs introduced Pokémon Go, using gamification to incentivise players with rewards (Pokémon collection) to go somewhere they would not otherwise go. |
| 2017 | Facebook | An article about a leaked Facebook document on youth-targeted advertising practices indicated how the company can estimate the emotional states of teens in Australia and New Zealand (age 14+) at different times of the day and week. Advertising companies were encouraged to exploit this with strategies centring on when teens are (on average) at their “most vulnerable”. |
| 2018 | Facebook | Another article on another leaked Facebook document revealed that its “prediction engine” (FBLearner Flow) now makes on average more than 6 million predictions per second. |
| 2019 | Sidewalk Labs (Google) | Alphabet Inc.-owned Sidewalk Labs is in negotiations to create a “Smart City” neighborhood in Toronto, Canada. The project is under ethical scrutiny for its own data collection strategy concerning residents and visitors. |

Based on this chronology, Zuboff made two clear and distinct points: 1) We are witnessing a fundamental paradigm shift of corporate centralisation that impacts business, governance and society globally. 2) There seem to be alarming signals that concentrated data collection processes are enabling companies to manipulate users at scale. So, what does she mean by that, and how does she argue for it?

Paradigm Shifts & Human Systems.

On propaganda. Old story in a new costume?

Propaganda has existed throughout the history of mankind. To Zuboff, the paradigm shift is marked by the aforementioned “economies of action”: companies that buy and analyse human data and build strategies on top of it. What distinguishes this from traditional propaganda is something she did not mention explicitly, but which I understand as higher predictability.

In probability theory, the law of large numbers gives us more certainty about the average: the larger the number of repetitions of an experiment, the better the sample mean tends to approximate the expected outcome. In statistics, increased sample size increases statistical power. Higher statistical power works like a more powerful microscope that lets you see smaller objects: it can tell apart two numbers that lie close together, and is therefore a more granular tool for detecting effects. As a rule of thumb, increased access to data – including previously unobtainable, new sorts of data – benefits statistics and our ability to make more precise and differentiated predictions. That is, “Given x, what is the likelihood of y doing z?” (“y” as in: you, falling into a given category defined for that question).
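To make the microscope analogy concrete, here is a minimal, illustrative Python sketch (all numbers, such as the hypothetical 50.0% vs. 50.1% click rates, are invented for the example): with a small sample, a tiny difference between two groups is statistically invisible, while a very large sample detects it reliably.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Law of large numbers: the sample mean of a fair coin (true mean 0.5)
# approaches the expected value as the number of flips grows.
for n in (10, 1_000, 100_000):
    flips = rng.integers(0, 2, size=n)
    print(f"n = {n:>7}: sample mean = {flips.mean():.4f}")

# Statistical power: a tiny true difference between two groups
# (hypothetical click rates of 50.0% vs 50.1%) is invisible at
# n = 1,000 but detectable at n = 10,000,000 -- the "microscope".
for n in (1_000, 10_000_000):
    a = rng.binomial(1, 0.500, size=n)
    b = rng.binomial(1, 0.501, size=n)
    _, p_value = stats.ttest_ind(a, b)
    print(f"n = {n:>10,}: p-value = {p_value:.4f}")
```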

To Zuboff, business customers with those data have more precise information that allows them not only to tailor their marketing but also to train their own algorithms. She mentioned, for example, the military as a customer that buys digital images and video frames to train its facial recognition algorithms. Her outlook concerns the loss of internet users’ autonomy, their exploitation by advertising companies, and their inability to see through all of this. She brought up a few examples, all of them emotionally resonant. The most controversial one likely pertains to “digital instrumentation”, which, for her, differs from propaganda in previous ages, because:

“[i]t’s not sending anybody to our homes at night to take us to the Gulag or the camp. It’s not threatening us with murder or terror. It is not totalitarian power […], it works its will remotely. It comes to us secretly, quietly. And if we ever know it’s there, it might actually greet us with a cappuccino and a smile.”

It is important to note that Zuboff describes herself on Twitter as a scholar and activist. Emotionally strong examples and exaggerations are powerful and common stylistic means to make people stop in their tracks, so it might have been the activist in her that chose them. People remember, on average, just a fraction of what they hear and see; exaggeration therefore increases the likelihood that key words and ideas stick with the audience long after the talk, and it is an individual choice to use it. Yet it also heightens emotion and creates distrust in the scholar, whose core attribute is to stay nuanced. If she was not exaggerating, I would certainly question whether data-enabled targeting and surveillance is even similar to the experience of the Gulag. Even if it were possible to change internet users’ (consumer) behaviour at scale, that is fundamentally different from the experience of forced-labour camps.

The outlawing of … what exactly?

On outlawing supply and demand. Is government regulation the answer?

Despite Zuboff’s largely dystopian assessment throughout the beginning and middle parts of the lecture, she was optimistic in her outlook towards the end. Referring to the fall of the Berlin Wall, she said that “anything that humans made, can be unmade”. She also stressed that countries like Germany – with a surveillance alertness stemming from their own history – give her hope. In her closing remarks she offered hands-on suggestions that could be described as a fairer capitalism.

Ideally, innovators would produce “without having to compete on the surveillance dividend.” That is, companies would not be able to generate profit from the analysis of internet user behaviour, and companies without access to those data would not need to compete with companies that currently have such access. To make that a reality, Zuboff appealed for two forms of governmental regulation: first, to “interrupt supply” by “outlawing the taking and translating of human experience into insightful information”; secondly, to “interrupt demand” by eliminating “the incentives to sell predictions of human behaviour”.

The core of scholarly work is to provide meaningful insights and analysis, which requires specialised knowledge. What you usually get is a comprehensive and detailed analysis; what you rarely get are clear next steps. The fact that Zuboff takes a clear position and calls for governmental action strikes me as strong, and again underlines her self-description as an activist. Yet I was still puzzled about what exactly she wants regulated.

“Markets that trade in human futures should be illegal”, just as “markets that trade human organs, […] in human slaves [and] babies are illegal”.

What does she mean by that? Hearing “trade in human futures”, you would immediately think of the futures market. There, though, buyers and sellers lock in a price today for a transaction that takes place in the future, each expecting the agreed price to work to their advantage (buyers anticipate rising prices, while sellers fear falling ones). Trading in futures serves as a form of security. The same applies to futures traded on the derivatives market, where a futures contract is a financial product that derives its value from changes in the price of the underlying asset (human data, in our example) without owning it. Read this way, Zuboff would mean that speculation on changes in the price of human data should be outlawed.

In either case, though, the futures market doesn’t try to change or influence prices (to make them cheaper or more expensive). Its purpose is to reduce risk in markets with highly fluctuating prices (commodities, for example) and to predict. Advertising, on the contrary, tries to change human behaviour.
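To illustrate that difference with a toy example (all numbers invented): a conventional futures contract merely locks in a price and shifts risk between the two parties; nothing about it attempts to alter anyone’s behaviour.

```python
# Toy sketch of a conventional futures hedge. The contract fixes a price
# today for delivery later; it redistributes risk between buyer and
# seller but does not try to change anyone's behaviour. All numbers
# are invented for illustration.

futures_price = 100.0    # price agreed today for delivery at a later date
spot_at_delivery = 80.0  # actual market price when the contract settles

# The seller locked in 100 and gains relative to selling at spot;
# the buyer pays 100 for something now worth 80 and loses the difference.
seller_gain = futures_price - spot_at_delivery
buyer_gain = spot_at_delivery - futures_price
print(f"Seller's gain vs. selling at spot: {seller_gain:+.2f}")
print(f"Buyer's gain vs. buying at spot:   {buyer_gain:+.2f}")
```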

What she more likely means is that advertising companies buy information on human behaviour and build their marketing strategies on it, and that – following her assessment of ever more precise predictions – they are thereby able to change internet users’ future behaviour. This assessment, though, appears quite “thin” to me, and repurposing a well-established financial term makes it even more misleading.

The data economy encompasses a whole value chain involving all sorts of B2B parties. Even now, I remain unsure what exactly she wants to outlaw when she says “trade in human futures”.

Boomers, Millennials and Gen Z.

On generational differences. Do we actually have a problem?

Change is a constant in society, and so is change in technology, data collection, and people’s attitudes and values. Undoubtedly, advertising and many other aspects of our lives are changing fundamentally because of the internet. Yet it is unclear whether those disrupted fields were any better in the past, or whether sticking to strategies that don’t reflect current market demands or analytical opportunities is productive.

Zuboff is optimistic when she says that anything human-made can be unmade. Yet history tells us that such reversals are outliers, not the rule: it is much harder to undo established things than to prevent them from emerging in the first place. Additionally, countries are well aware of the economic consequences. Outlawing the development and use of data trade would likely set domestic R&D back and ultimately benefit other countries in the global race in the data economy. I also want to appeal to Zuboff as a scholar. Two of the six examples she mentioned at the beginning take their authority from newspaper articles about leaked documents. One of them is behind a paywall; the other refers to the leaked document without providing it. Hence, trained professionals cannot test, back up or reject what is written in these news articles without access to the source of the controversy. Additionally, the study she quoted on the contagion of emotions through networks explicitly says that “the effect sizes from the manipulations are small (as small as d = 0.001)”. To be sure, small effects can aggregate into large consequences, and they do matter. Yet the original authors stress that theirs is “first experimental evidence”. Selling these results as more than they are is unscientific.
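For a sense of just how small d = 0.001 is, here is a back-of-the-envelope power calculation (a sketch only, assuming a standard two-sample design with the conventional 80% power and 5% significance level): reliably detecting such an effect would require roughly 15.7 million participants per group.

```python
# Rough power calculation: per-group sample size needed to detect an
# effect of d = 0.001 with 80% power at alpha = 0.05. Illustrative
# only; assumes a simple two-sample t-test design.
from statsmodels.stats.power import TTestIndPower

n_required = TTestIndPower().solve_power(
    effect_size=0.001, alpha=0.05, power=0.8
)
print(f"Participants needed per group: {n_required:,.0f}")  # ~15.7 million
```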

I do give Zuboff credit for going on stage and advocating for regulations with her face, persona and brand. That is brave, and we do need outspoken scholars with a voice. Her ultimate assessment and outlook, though, left me unsatisfied.

Her talk completely neglected any positive aspects of the new data-enabled possibilities – for example, technological leapfrogging or cancer prediction. Yet even if that was necessary – so that she could focus on the points she wanted to make – her analysis seems flawed. It appears to stem from a dichotomous understanding of end users on the one side, and algorithms and companies on the other.

People are not powerless, and algorithms are not able to remote-control them. In fact, these networks are affected by people in the same way that they affect people. A system is not free to do what it wants! It is bound to the conditions that helped it grow. A social network, for instance, can’t play freely with its users’ data. The limits – the simple nonlinearity of a system – are the crucial point. These systems are bound by very hard limitations, imposed on them by the systems that nourish them.

Zuboff has been working in the information technology field for longer than I have been alive, and her credentials are unchallenged. I want to believe that, that evening, it was the activist rather than the scholar who spoke to the audience. Also, conveying your thoughts in a talk is different from the written word, which is why I encourage everyone to read her book “The Age of Surveillance Capitalism” for a more insightful understanding of her thinking.

Yet, although I hope that it was just the packaging of the talk that did not speak to me – her terminology and activist rhetoric – I was left with a feeling of scholarly discomfort. I will leave the task of assessing her written word to others.


TL;DR

Scholar and activist Shoshana Zuboff’s outlook on the future is grim. For her, governments need to outlaw the collection of human data (data that captures internet users’ online behaviour) and its sale to business customers (advertising firms, etc.). Otherwise, she predicts, online users will be remotely controlled on a vast scale: enough data in the “wrong” hands will make it possible to manage individuals as human systems, stripping them of their autonomy. Tina Krell sums up the key points Zuboff made during the Making Sense of the Digital Society lecture series and adds her own assessment of the possibilities and likelihood of “at scale” analysis and targeting of internet users.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Tina Krell

Former associated researcher: Innovation, Entrepreneurship & Society
