12 July 2018

“It’s about human dignity and autonomy”

Privacy and data protection are currently being debated more intensively than ever before. In this interview, Frederike Kaltheuner from the civil rights organisation Privacy International explains why those terms have become so fundamentally important to us. The article was first published in the newly launched magazine ROM. The interview was conducted by ROM publisher Khesrau Behroz and writers Patrick Stegemann and Milosz Paul Rosinski.

Frederike Kaltheuner, you work for Privacy International. The first, seemingly basic question then is: What exactly is privacy and what does it mean these days?

Privacy has never been more alive. But it has never been more under threat. We face unprecedented threats from governments, companies and others who can now know more about us than was ever previously possible. It is indeed the ‘golden age of surveillance’. At the same time, we are seeing the richest and most informed debate on privacy that we have ever had – and across the world. This wasn’t the case even as recently as three years ago. It’s a live right, constantly being questioned, explored, elaborated upon, built upon, adapted.

Essentially, people want to be able to create barriers and manage boundaries to protect themselves from unwarranted interference in their lives. People want to establish boundaries to limit who has access to their bodies, places and things, as well as their communications and information. People also want to negotiate who they are and how they want to interact with the world around them. Privacy is about enabling and empowering individuals to do all of this. Framed like this, privacy isn’t the opposite of connecting and sharing – it’s fundamentally about human dignity and autonomy.

I thought it was quite fascinating to watch the Facebook hearings in the US Senate and Congress. Facebook wants privacy to be exclusively about settings and controls – that is, about actively navigating whether others can see your posts. Targeted advertising, for instance, is not part of the platform’s privacy settings – it sits under ad settings.

Privacy = data protection?

It’s important to stress that international human rights law recognises the fundamental right to privacy. States owe human rights obligations to all individuals subject to their jurisdiction: at a minimum, they are required to respect the right to privacy of all persons whose communications they handle, and they also have positive obligations to ensure and protect individuals’ privacy when the act of conducting surveillance renders those individuals within their effective control.

The human rights dimension is important, since there’s sometimes the misconception that privacy is a European concept. We work with partners around the world, some of whom put themselves at great risk to fight for human rights, including the right to privacy. The Indian Supreme Court recognised privacy as a constitutional right just last year.

It feels like “privacy” and “data protection” are used synonymously, though they are two very different terms.

They are indeed related, but also different. The fundamental rights charter of the European Union, for instance, recognises both a fundamental right to privacy and a fundamental right to the protection of personal data. Privacy is more than data protection and data protection is more than privacy.

Data protection regulates how personal data can be processed (collected, stored, analysed, shared and so on). You need a legal basis to process personal data. It places obligations on those that process data and grants rights to “data subjects”. This sounds very technical but is actually quite powerful (when the law is actually enforced): no matter who holds personal data about you, you have rights over those data, such as the right of access to the data or the right to correct inaccurate data. Data protection seeks to address the inherent power imbalances between those that collect, analyse and hold personal data about people and the people themselves. It’s also a tool to hold companies – and, to a lesser extent, governments, because there are more exceptions – to account. Some of these rights, for instance the right to portability, i.e. the right to easily transfer your data to a competitor, are about much more than privacy.


As important as data protection is, many novel privacy threats don’t necessarily involve personal data and harms often affect groups, or entire segments of society. Data protection is organised around the individual and thus not always able to effectively protect from these more collective harms. One example would be emotion detection technologies that are employed in public spaces. They clearly pose a threat to privacy (amongst other rights), but don’t necessarily fall under data protection.

That said, data protection is still absolutely crucial to safeguarding the right to privacy – perhaps more than ever. In a world of ubiquitous data collection and machine learning, sectoral approaches to privacy regulation simply don’t work anymore. You can have strong laws that govern health data (the US, for instance, has HIPAA), but when health data can be derived and inferred from your browsing history and purchasing data, such sectoral regulation is easily undermined. Governments around the world have enacted, or are in the process of enacting, laws that protect data. Regulatory frameworks around the world are diverse, but they are all designed to protect individuals’ data and reflect a judgment that such protections are an important aspect of the right to privacy.

Surveillance Capitalism and Political Targeting

The industry narrative is: Targeting is important because you get ads/information that are relevant to you. What’s the problem then? After all, isn’t this better than getting random ads for things one doesn’t care for?

Sure, everyone wants relevant ads, but who defines what is relevant? The industry narrative assumes that there are two kinds of ads: relevant ads and obnoxious ads. I like to think of it in terms of more and less invasive ads instead. Over the past few years, so-called “relevant” ads have become ever more invasive. It’s one thing to target ads based on self-disclosed interests. It’s a very different thing to target ads based on ever more granular targeting options that are derived from everything you’ve ever searched for, everything you’ve ever read online, everywhere you’ve ever been, everyone you’ve ever talked to online and everything you’ve ever bought – plus all the patterns and inferences that can be derived from such granular data.

Secondly, the drive to create ever more targeted (“relevant”) ads has created an entire ecosystem made up of thousands of companies that are all in the business of tracking, profiling and targeting people 24 hours a day. Targeted advertising seems harmless, but the ecosystem required to sustain it is so complex and often so unaccountable that it can be hijacked and tapped into by all sorts of people.

Political targeting is just one example – scammers and other nefarious actors can and do exploit these systems in unexpected ways. The same is true for the vast troves of data that fuel these systems. From credit scoring to law enforcement, foreign and domestic intelligence and insurance companies – a whole range of actors is interested in such data.

Maybe one of the key questions is: What is persuasion and what is manipulation?

We are living under surveillance capitalism – a term coined by Shoshana Zuboff for a wholly new subspecies of capitalism in which profits derive from the unilateral surveillance and modification of human behaviour. The entire point of building intimate profiles of individuals, including their interests, personalities and emotions, is to change the way that people behave. This is the definition of marketing – political or commercial – and is itself nothing new. What is new is the scale, granularity and automated way in which such persuasion is now possible.

We don’t like to think of ourselves as easily persuaded or susceptible to influence and manipulation, but the reality is a lot more complex. To hijack a discourse, or to distract from an issue, you don’t necessarily need to persuade individuals – you need to influence the influencers.
It’s quite fascinating to look at the Russian-sponsored ads that were released by Congress a few weeks ago. It’s not just ads; there are also a lot of groups and pages that seek to build or strengthen communities around specific beliefs or grievances. Not all of them are alt-right causes; many are also left causes, such as LGBT and women’s rights issues.

We’re talking about a whole range of issues here, from misinformation to political ads and the inherent politics of platforms, which all play together in ways that we are just beginning to understand. I do believe, though, that targeting plays at least some role. When companies can use the knowledge that you are depressed or feeling lonely to sell you products you otherwise wouldn’t want, political campaigns and lobbyists around the world can do the same: target the vulnerable and manipulate the masses.

For many, Facebook is the Internet

After so many scandals and breaches, people still use Facebook and other social media. Accepting terms and conditions is a running joke nowadays. What are your thoughts on that? Do you think that people are actually aware of how valuable their data is – or do they just not care?

Social media plays into our deepest fears and desires. We all want to connect. We don’t want to be abandoned, ignored, neglected or excluded. Just today Facebook reminded me that 400 people have LOVED my posts over the past weeks and that this means my friends care about me. It’s important to acknowledge these desires and fears.

The problem is that companies track and profile people in ways they cannot understand or meaningfully consent to. As a result, people can be targeted in ways that are invasive and manipulative. We strongly believe that people should not have to be tech experts to have their rights respected. This is not limited to Facebook – the problem is systemic, and self-regulation will not solve this part of the puzzle.

Apps and platforms are designed to keep us hooked and to get us to share as much as possible. Notifications give us instant gratification. Technology is often designed for addiction. Many apps nudge you to share all of your contacts, for instance, without you understanding what that means.

Social media is not just a tool that you can decide whether or not to use. For many people around the world, Facebook is the internet – that’s why violence-inciting posts in Myanmar are so dangerous.

It’s simply not possible to opt out.

But ultimately, leaving social media will not stop the ad tech ecosystem from tracking your every move. We’re beginning to see facial recognition technologies in retail; smart home devices that are often designed to mine data from your social interactions at home; loyalty cards; trackers in apps and on websites. We are moving towards a world where your hairbrush has a microphone and your toaster a camera; where the spaces we move in are equipped with sensors and actuators that make decisions about us in real time. All of these devices collect and share massive amounts of personal data that will be used to make sensitive judgements about who we are and what we are going to do next. It’s simply not possible to opt out.

Why is the narrative of the politician who just doesn’t understand technology so problematic and maybe even dangerous?

It’s dangerous because it suggests that technology cannot be regulated. The law always lags behind new and emerging technologies. That doesn’t mean that every new technology inevitably calls for new laws and legislation, but new technologies, especially in the context of surveillance, often challenge (and thereby undermine) existing legal protections.

We see an interesting pattern in our work on policing technology. Every single time there’s a new technology – from IMSI catchers to facial recognition or mobile phone extraction – there is a tendency to assume that all existing laws and protections somehow don’t apply. Take mobile phone extraction technology: police in the UK need a warrant to search your home, but don’t need a warrant to copy all data from your phone, including deleted messages and photos, as well as other metadata that you don’t have access to yourself. That’s just crazy (luckily the situation is different in Germany and the US).

The Facebook scandal: What now?

We all heard about Cambridge Analytica and its role in the presidential election in the United States. Still, the company operated on a much more global scale, across five continents. Its website boasts of over 100 campaigns. Could you elaborate on the campaigns we don’t get to hear about in the news, especially those outside the United States and Europe? Kenya, for example, a country without any data protection laws.

My sense is that nefarious companies use countries with weaker protections as a testing ground. Cambridge Analytica or its parent company SCL Group worked on the 2013 and 2017 campaigns of Kenya’s President Uhuru Kenyatta. The company was also hired to support the failed re-election bid of then-president Goodluck Jonathan of Nigeria in 2015, according to The Guardian. There’s a long history of Western PR companies working for governments around the world, and researchers have been studying them for just as long.

Cambridge Analytica is insolvent now. Is the danger over then? What’ll happen in the next big elections in the US and elsewhere – or rather, what should we look out for?

Oh, of course not. It has concerned me from the beginning that this scandal was so narrowly focused on a single company. The part that freaked people out was the company’s reliance on psychometric profiling.

Psychometrics is a field of psychology devoted to measuring personality traits, aptitudes and abilities. Inferring psychometric profiles means learning information about an individual that previously could only be learned through the results of specifically designed tests and questionnaires: how neurotic you are, how open you are to new experiences or how conscientious you are.

That sounds sinister (and it is), but again, psychometric predictions are a pretty common practice. Researchers have predicted personality from Instagram photos, Twitter profiles and phone-based metrics. IBM offers a tool that infers personality from unstructured text (such as tweets, emails or your blog). The start-up Crystal Knows gives customers access to personality reports of their contacts from Google or social media and offers real-time suggestions for how to personalise emails or messages. This is a systemic problem that needs a systemic response.

History also teaches us that data is valuable even after a company ceases to exist. Do you remember Myspace – the social media network of the early 2000s? The website is long dead, but it sold its data on 1 billion users to the ad tech giant Viant in 2011, and Viant was bought by Time Inc. in 2016. The data is obviously old, so in order to keep it relevant, Time combines it with “consumer interactions across the Time universe” (things like magazine subscriptions), but it also partners with data brokers like Experian.

Any recommendations for politicians – what is the teachable moment here, what are the consequences, what should be done next?

This entire scandal is a wake-up call for companies and governments around the world. An under-regulated data ecosystem is a real threat to democracy.

Far too many countries around the world have little to no legal framework (or weak to non-existent enforcement) – that’s simply not sustainable. For example: more than half of Africa’s 54 countries have no data protection or privacy laws. And of the 14 countries that do, nine have no regulators to enforce them. In a region that has clocked the world’s fastest growth in internet use over the past decade, that leaves a lot of Africans, many of whom are accessing the internet for the first time, with little or no protection.

The onus, though, is not entirely on politicians. Many governments have a vested interest in not introducing such laws because they use citizens’ data for their own ends – whether for government surveillance, political campaigning or suppressing political dissent.

Here’s an example from the US: Nearly every single 2016 US presidential candidate has either sold, rented, or loaned their supporters’ personal information to other candidates, marketing companies, charities, or private firms. Marco Rubio alone made $504,651 by renting out his list of supporters. This sounds surprising but can be legal as long as the fine print below a campaign donation says that the data might be shared.

That’s why I think global companies with billions of users have a special responsibility. What we’ve seen so far, unfortunately, is too little too late. But the problem goes beyond large platforms. For too long, too many companies have used data in ways that people could not reasonably expect. This simply needs to stop.


Frederike Kaltheuner works for the civil rights organisation Privacy International, where she heads the data department. She speaks regularly at technology, political and art conferences and comments on new technologies in the British and international press.

In 2016, she was a Transatlantic Digital Fellow for cyber security and platform regulation at the Global Public Policy Institute in Berlin and the New America Foundation in Washington, D.C. Previously she worked for the Centre for Internet & Human Rights as a technology reporter and in the R&D department of an online newspaper.

Kaltheuner holds a master’s degree in Internet Studies from Oxford University and a bachelor’s degree in Philosophy and Politics from Maastricht University. Previously, she was a researcher at the University of Amsterdam and a visiting scholar at Bogazici University, Istanbul.

ROM is a new society magazine. It is about people and the realities of a digitised world – the changes in it, the resistances, the visions. ROM is not a professional journal, because the digital society is not a parallel society in which only so-called “nerds” reside – it is society itself. The first issue of ROM covers virtual realities, permeable filter bubbles and crypto rave parties. It is about privacy as a fundamental human right and about trolls supporting the aims of the New Right – and state minister Dorothee Baer reveals the background to her Instagram images. There is much more besides: ROM runs to a total of 164 pages.

Title image: ‘las amigas hacker’ by Constanza Figueroa for Derechos Digitales, CC BY 3.0

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
