
How People Analytics can affect the perception of fairness in the workplace
People Analytics promises optimised decision-making in personnel management. However, when data-driven systems are integrated into decision-making processes, the relationship between employees and managers can suffer. A new study suggests that managers are perceived as even more unfair when they use People Analytics to reach a decision that is already received negatively. This heightened sense of injustice can not only strain workplace relationships but also alter long-term workplace dynamics.
People Analytics is a key topic in the modern workplace. As a data-driven approach to personnel management, People Analytics refers to the systematic collection and analysis of employee data. The primary aim is often to optimise decision-making while reducing personal biases or stereotypes.
A common example in everyday HR management is the use of People Analytics in promotion decisions: rather than relying solely on managers’ subjective assessments, these systems analyse collected data such as project successes or team feedback. The goal is to ensure that talented employees are identified and supported regardless of personal biases or stereotypes.
The objective is to enable fair, data-driven decisions that enhance employee satisfaction while also securing long-term business success. The systems used for data collection and analysis draw on a variety of sources, including the number of emails sent, recorded working hours, and meeting activity. These data are intended to help identify behavioural patterns and provide recommendations for decision-making, whether in performance evaluations, employee development, or team composition.
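To make this concrete, here is a minimal, hypothetical sketch of how such a system might aggregate activity data into a recommendation. The feature names, weights, and threshold are illustrative assumptions for this example, not taken from any real product.

```python
# Hypothetical People Analytics scoring sketch (illustrative only).
# Feature names, weights, and the threshold are assumptions for this example.

from dataclasses import dataclass

@dataclass
class ActivityRecord:
    emails_sent: int        # e.g. per month
    hours_logged: float     # recorded working hours
    meetings_attended: int  # meeting activity

# Illustrative weights: a real system would derive these from historical data.
WEIGHTS = {"emails_sent": 0.2, "hours_logged": 0.5, "meetings_attended": 0.3}

def performance_score(record: ActivityRecord) -> float:
    """Combine activity signals into a single score (toy linear model)."""
    return (WEIGHTS["emails_sent"] * record.emails_sent
            + WEIGHTS["hours_logged"] * record.hours_logged
            + WEIGHTS["meetings_attended"] * record.meetings_attended)

def recommend_for_development(record: ActivityRecord, threshold: float = 100.0) -> bool:
    """Flag an employee for a development conversation if the score is high."""
    return performance_score(record) >= threshold

employee = ActivityRecord(emails_sent=120, hours_logged=160.0, meetings_attended=25)
print(performance_score(employee))           # 111.5
print(recommend_for_development(employee))   # True
```

Even this toy version shows that the choice of features and weights is itself a value judgement, a point worth keeping in mind when such systems are presented as neutral.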
Data-driven personnel management at a glance
Traditionally, personnel decisions have often been based on the experience and intuition of managers—alongside their personal preferences and inherent biases. Today, data-driven analytics is meant to support these processes. At the same time, People Analytics enables managers to monitor and assess nearly every aspect of employee performance, even when staff work remotely.
People Analytics thus operates within a tension between the promise of more precise, discrimination-free decision-making and the risks associated with new concerns around transparency, fairness, and trust. As the balance between surveillance and control on the one hand and employee autonomy and privacy on the other is challenged, significant impacts on the relationship between employees and management can emerge.
People Analytics systems, particularly those based on machine learning techniques, do more than collect data and present it in statistical form—they can also generate independent recommendations for action. In some cases outside Germany, these systems can even implement decisions autonomously. Such approaches are already embedded in the platform and gig economy, where they fall under the broader concept of algorithmic management, which includes the automation of managerial tasks.
A well-known example is Uber, which employs algorithmic management to collect and analyse data such as trip duration, location, and typical driving speed. These analyses are used not only to determine the most efficient routes but also to adjust prices during peak times. In some countries, the system can even implement these decisions autonomously, without human intervention. For instance, Uber drivers who reject too many ride requests may receive automated emails warning them that further refusals could lead to account deactivation.
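To illustrate what such automated escalation can look like in code, here is a minimal sketch of a rejection-rate rule. The thresholds, action names, and function names are invented for this example and do not describe Uber's actual system.

```python
# Illustrative sketch of an automated escalation rule in algorithmic
# management. Thresholds and actions are invented for this example and
# do not describe Uber's actual logic.

def rejection_rate(accepted: int, rejected: int) -> float:
    """Share of ride requests a driver rejected."""
    total = accepted + rejected
    return rejected / total if total else 0.0

def escalation_action(accepted: int, rejected: int) -> str:
    """Map a driver's rejection rate to an automated action."""
    rate = rejection_rate(accepted, rejected)
    if rate >= 0.5:   # hypothetical deactivation threshold
        return "deactivate_account"
    if rate >= 0.3:   # hypothetical warning threshold
        return "send_warning_email"
    return "no_action"

print(escalation_action(accepted=70, rejected=30))  # send_warning_email
```

The decisive design question is not the arithmetic but who sets the thresholds and whether a human reviews the action before it takes effect.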
Between precision and control: The ambivalence of data-driven systems
A key issue associated with People Analytics is the growing power imbalance between managers and employees, which arises primarily from the informational advantage held by leadership and from questions of voluntariness. While managers actively choose to implement such systems to support their decision-making, employees often have no choice: their work-related data is collected and processed without their direct input or control. As a result, they become passive subjects of these systems, whose functionality and consequences they may not fully understand. This asymmetry can create tension and exacerbate power imbalances.
The problem becomes particularly acute when the system makes or suggests incorrect or unfair decisions, such as in bonus allocation or promotions. Such decisions could undermine employees’ trust in management and negatively impact their perception of fairness. However, in Germany, it is still far from reality that algorithms fully determine salary adjustments or promotions. The ultimate responsibility remains with human managers. Algorithms and data analytics primarily serve as tools to inform decision-making, meaning that key aspects, such as resource distribution, remain in managerial hands. Thus, if unfair or misguided recommendations are implemented, the accountability still lies with management.
Given the rapid development of People Analytics, the question arises as to whether this will remain the case in the long term, or whether the trend towards autonomous systems discussed in international research could also take hold in Germany.
Study: How algorithms influence perceptions of fairness
How do employees react to unfair decisions made with the help of People Analytics? We explored this question in a study conducted as part of our research project Between Autonomy and Surveillance: Employee-Centred Use of People Analytics. The study was carried out in collaboration with Prof. Uwe Messer.
In an experiment, participants took on the role of employees and completed a real-effort task: using a computer, they had to adjust sliders with their mouse to the correct position, making their work performance easily quantifiable. Their reward, in the form of an individual bonus, was then assigned in one of three ways:
- By a manager without access to People Analytics.
- By a manager who could consult People Analytics insights.
- By a manager who delegated the bonus decision to an autonomous People Analytics system.
The experiment included a deliberate manipulation: some participants who had performed very well in the task nevertheless received a very low bonus, placing them in a situation they perceived as unfair.
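A toy sketch of this design might look as follows; the bonus amounts are illustrative assumptions and do not reproduce the study's actual parameters. The unfair outcome is held constant, and only the attributed decision process varies across the three conditions.

```python
# Toy sketch of the experimental design: the same deliberately low bonus
# is attributed to three different decision processes. Amounts are
# illustrative assumptions; the study's actual parameters differ.

from dataclasses import dataclass

CONDITIONS = (
    "manager_only",            # manager decides without People Analytics
    "manager_with_analytics",  # manager may consult People Analytics
    "autonomous_system",       # system assigns the bonus autonomously
)

@dataclass
class BonusDecision:
    condition: str
    performance: float  # share of sliders set correctly (0.0 to 1.0)
    bonus: float        # payout in euros

def manipulated_decision(condition: str, performance: float) -> BonusDecision:
    """Return a deliberately low bonus despite high performance.

    The outcome is identical across conditions; only the perceived
    decision-maker changes, which is what the study varies.
    """
    return BonusDecision(condition=condition, performance=performance, bonus=0.50)

for condition in CONDITIONS:
    print(manipulated_decision(condition, performance=0.9))
```

Holding the outcome constant is what allows the study to attribute differences in perceived fairness to the decision process rather than to the bonus itself.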
Our findings indicate that employees perceive decisions as more unfair when data collection and analysis systems are involved. As soon as employees suspected that an algorithm had played a role in the decision, even if only by providing information, they perceived the outcome as even more unjust than a comparable unfair decision made without People Analytics. The involvement of such systems intensified feelings of injustice and even led to a sense of betrayal by managers. This, in turn, triggered stronger demands for compensation and increased the likelihood of negative reactions, such as retaliatory behaviour towards managers or the organisation. The use of People Analytics can therefore further strain relationships between employees and managers, particularly in already problematic situations.
Doubly unfair?
Employees understandably feel unfairly treated when they do not receive compensation that reflects their performance. However, this feeling was not the core finding of our study; it was a deliberately induced reaction. The key result is that employees perceive unfair situations as even more unjust when People Analytics is involved. Here, the question of voluntariness raised above plays a crucial role.
In particular, the second scenario we examined reflects the realistic conditions under which People Analytics is used in Germany: a manager has access to analytical insights but is not required to follow them. However, if they make an unfair decision based on these insights, employees in our experiment felt not only unfairly treated but also betrayed. They may perceive that their manager failed to exercise due diligence. This creates a stressful dynamic in which employees feel both at the mercy of an unfair system and abandoned by their leadership.
Building trust through transparency: The future of People Management
It is therefore crucial that managers recognise the responsibility they bear when relying on such systems. An awareness of the potential and limitations of People Analytics should be combined with a thoughtful approach to interpersonal interactions to maintain trust and satisfaction in the workplace. Our study highlights the need not only for transparent communication but also for managers’ responsible handling of algorithmic systems. Only in this way can organisations ensure that the implementation of such approaches does not lead to a sense of injustice or employee alienation.
Instead, their introduction should contribute to an environment where both individual and organisational goals can be aligned. Moreover, it is essential that organisations develop an awareness of the ethical implications of People Analytics beyond data protection, which often takes centre stage in discussions. Managers should place greater emphasis on the human aspects of work, which risk being overshadowed by the use of seemingly rational and value-neutral algorithm-driven systems.
Author
Miriam Klöpper is an Information Systems Researcher from Karlsruhe. As part of her doctoral research, she critically examined the impact of People Analytics on power structures in traditional organisations. She served as research project manager for the project Between Autonomy and Surveillance (ZAUber), which explored employee-oriented use of People Analytics. In general, she is interested in the social and ethical implications of algorithmic systems, with a particular focus on equal opportunities and surveillance capitalism.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
