PRIVACY
07 July 2015

“The experience of privacy does not necessarily depend on the mercy of the law”

A short legal-psychological interview on privacy between Julian Staben and Ricarda Moll.
First published on www.juwiss.de.

The German Constitution protects a right to privacy under several labels. Firstly, there are explicit rights that protect aspects of a person’s personality and privacy, such as the right to private communication (Art. 10) and the right to the inviolability of the home (Art. 13). Additionally, the German Constitutional Court has found a general right of personality that is implicit in the Constitution. (Similar rights can be found in Art. 7 and 8 of the Charter of Fundamental Rights of the European Union and Art. 8 of the European Convention on Human Rights.) First and foremost, all these personality rights are designed to keep the government out of a certain realm of the individual. Additionally, these constitutional rights can – indirectly and under certain circumstances – be invoked to keep society out of the protected realm. These personality rights guarantee something that is often called privacy. Underlying them is the assumption that privacy is needed in order to feel free and act in an uninhibited manner. With this assumption in mind, the question arises as to how psychological research on privacy can contribute to our legal understanding. The following spontaneous conversation identifies some common talking points between the disciplines.

Julian Staben, a doctoral researcher at HIIG writing on chilling effects in constitutional law, interviewed Ricarda Moll, a psychologist and member of the Research Training Group 1712/1 “Trust and Communication in a Digitized World” of the German Research Foundation (DFG), at the re:publica 2015 conference.

Julian Staben: How do psychologists define privacy and how do they study it?

Ricarda Moll: Psychological research on privacy has quite a long tradition, going back at least to the 1960s. Back then, privacy was mainly studied with regard to the establishment of relationships (for example by Alan Westin or Irwin Altman).

As psychologists understand it, privacy is first of all something that individuals experience and manage. Privacy is often seen as a good that users can trade for certain benefits. The two ideas of privacy – privacy as a right and privacy as an experience – do not necessarily contradict each other. However, privacy as a psychological experience is a much more layered concept and, as such, completely subjective. Additionally, the experience of privacy does not necessarily depend on the mercy and possibilities of the law: people may still feel private in public spaces, because they have beliefs and expectations about what privacy means, what exactly will happen (or not happen) to their data, and whether they have to fear any negative consequences from privacy breaches.

One of the key questions when it comes to people’s communicative behaviour on Social Networking Sites is: Why do they disclose so much about themselves although they know that these contents are persistent and potentially accessible to a large public?

In our own research we focused on people’s knowledge and beliefs about their online audience, because this is the only kind of privacy-related information that people have access to. It is also strongly related to the legal idea of informational self-determination, namely the idea that people should be able to control who has access to their information.

What did you find out?

Firstly, it turned out that people have surprisingly little knowledge about their audience when it comes to their communication on Social Networking Sites. For example, we found that Facebook users struggle to name the correct privacy settings of their self-disclosed contents and were not aware of how little they knew about their privacy. In other words, people have no idea who their potential audience is. People also seem to have cognitive constraints when answering the question “to whom am I making my information accessible”: we found in several experiments that people generally have difficulty remembering their audience even when information about the audience is very salient. In this context we hypothesised that although (or because) users have little knowledge about their audience, they would develop beliefs about their audience – for example, how their audience behaves and who actually accesses their information. In other words, they might limit their perceived audience to those who like, comment on, or share their posts. This is a question we are currently investigating.

Data protection law mostly operates under the assumption that it is sufficient to give people control over whom they disclose certain facts to. This control is mainly exercised by giving and withholding consent (cf. also Art. 4 para. 8, Art. 6 no. 1 (a) of the draft General Data Protection Regulation). The underlying assumption is that giving people this control is enough to achieve a feeling of privacy. Is this assumption valid?

It is important to recognise in this context that people often do not actually read what they agree to. Most of the time they just consent in order to be able to use the application or service – and they should not be blamed for that. You cannot ask everyday users to try to understand complicated and time-consuming legal agreements for every application they are about to use. The question then is whether you can really call it “consent” if people do not read or understand what they agree to. But even if they understand what they agree to, their expectations may differ from what they are legally agreeing to – which is the second reason why I doubt that informed consent really contributes to actual control over information. For example, they might agree that their data is stored on a server, but they might expect by default that their data will not actually be accessed (not least because it is so opaque what actually happens to data once it is stored and transferred). We have done a series of experiments on users’ audience expectations, and they seem to confirm that users have these kinds of expectations (publication forthcoming; audio of a presentation is available). Niklas Lundblad called this the “collective privacy expectation”: people seem to discount the probability that their information is accessed because they know that this requires cognitive and/or technological resources that are limited.

What alternative concepts to consent could help users achieve privacy?

A possible alternative could be layered concepts of consent that go beyond the current “take it or leave it” approach. People may need the ability to make different choices regarding different aspects of the use of their data – within one and the same application or service.
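To make this concrete: a layered consent model would record a separate, independently revocable choice per purpose of data use, instead of one all-or-nothing agreement. The following TypeScript sketch is purely illustrative – the type names and example purposes (“analytics”, “advertising”, etc.) are hypothetical assumptions made for this illustration, not part of any existing standard or of the research discussed in this interview.

// Illustrative sketch of a layered consent model (hypothetical types).
// Instead of one all-or-nothing agreement, each purpose of data use
// gets its own, independently revocable choice.

type Purpose = "core_service" | "analytics" | "advertising" | "third_party_sharing";

interface ConsentChoice {
  purpose: Purpose;
  granted: boolean;
}

interface ConsentRecord {
  userId: string;
  choices: ConsentChoice[];
}

// Check consent before each specific use of the data.
function hasConsent(record: ConsentRecord, purpose: Purpose): boolean {
  return record.choices.some((c) => c.purpose === purpose && c.granted);
}

// A user can allow the core service while refusing everything else.
const example: ConsentRecord = {
  userId: "user-123",
  choices: [
    { purpose: "core_service", granted: true },
    { purpose: "analytics", granted: false },
    { purpose: "advertising", granted: false },
    { purpose: "third_party_sharing", granted: false },
  ],
};

// hasConsent(example, "core_service") -> true
// hasConsent(example, "analytics")    -> false

The point of such a design is that refusing one purpose (say, advertising) would not lock the user out of the service as a whole, which is exactly what the current take-it-or-leave-it approach forces.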

Did you become more privacy-aware throughout the process of your research?

At first I did not. I was focused on what drives people’s behaviour: while I was not totally privacy-aware myself, I was still wondering why people put their whole lives on social networking sites. Then the Snowden revelations made me rethink a lot of aspects. Naturally, this also triggered new research ideas. I started to think about my own research from a broader societal perspective. My research group and my supervisor had a great impact on this as well.

Some people believe that privacy is outdated and that we need to advance to a post-privacy society. They believe that if a person feels the need to hide something, either they need to change their behaviour or social norms need to change. What is your answer to a person who believes that they have nothing to hide?

The problem with giving an answer to “I’ve got nothing to hide” is – so it seems to me – that the statement reveals a rather stable mindset, or rather, the ways and categories people generally think in. Many people seem to have accepted things as they are without questioning whether this is adequate. That is why it may be really difficult to give a short reply to “I’ve got nothing to hide” that will actually have an impact on a person’s opinion. I guess the specific reasons and conditions under which such statements develop need to be investigated. In my opinion, people need to really understand privacy as their democratic right – this does not seem to be the case when people say that they have nothing to hide. Also, things that you may not want to hide right now may be worth hiding in the future, because the criteria for which pieces of information you want to hide may change. Historic examples of such developments are plentiful: a fact that is not compromising today can, in retrospect, get you into trouble tomorrow or the day after. But then again, people in general seem to prefer certain and immediate benefits over uncertain future risks.

Photo: Perspecsys Photos, CC BY-SA 2.0

This post is part of a weekly series of articles by doctoral candidates of the Alexander von Humboldt Institute for Internet and Society. It does not necessarily represent the view of the Institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.


Julian Staben, Dr.

Former Associate Doctoral Researcher: Internet and Media Regulation
