[Image: a transparent umbrella seen from above, symbolising the AI Transparency Cycle.]
24 March 2023 | doi: 10.5281/zenodo.8273113

The AI Transparency Cycle

Why AI transparency?

AI is omnipresent and invisible at the same time. Do you notice every time you interact with an algorithm? What data is being collected and processed while you casually scroll through social media or browse products on retail websites? Privacy statements by platform providers promise full transparency, but what does this even mean and what is the underlying goal?

The devil’s in the details

Defining transparency has never been straightforward, and defining it in the context of AI systems is no exception. Transparency, in a broad sense, means that one can perceive and comprehend something, and act in light of that knowledge. Considering that big tech companies’ privacy statements, which aim to inform users about their intentions and protective rights, span well beyond 10,000 words, the effectiveness of the transparency measures in place appears questionable. Do you understand, for example, when you interact with an AI system and why platforms recommend certain content to you? Even if this information is available, it might not be transparent, since availability does not always equal attainability.

Metaphors of transparency

Research on the use of transparency as a metaphor in the context of non-governmental organisations and other political stakeholders (Ball, 2009) reveals that by transparency we imply different ends of information sharing. Ball (2009) identified three: accountability, openness and efficiency. Openness is probably the most intuitive goal of transparency. Openness uses transparency to create trust, for instance by allowing observers to see what is being withheld from others and why, e.g. to protect someone’s privacy. This includes not only informed decision-making, but also knowing which questions to ask in the first place. Efficiency might be less intuitive as a goal of transparency, but it is nonetheless crucial for today’s complex societies. Only if we know and understand complex systems can we let them function efficiently, because we then do not need to question their workings each time we depend on them. Transparency is therefore also important for progress in societies. Last, but not least, let’s look closely at accountability.

Accountability

The third goal of transparency that is often recognised is accountability. Regarding AI systems, this refers to the question of who is responsible for each step in the development and application of machine learning algorithms. Mark Bovens, who researches public accountability, defined it “as a social relationship in which an actor feels an obligation to explain and to justify his or her conduct to some significant other” (Bovens, 2005). He sees five characteristics of public accountability: (1) public access to accountability; (2) proactive explanation and justification of actions; (3) addressing a specific audience; (4) an intrinsic motivation for accountability (in contrast to acting only on demand); and (5) the possibility of debate, including potential sanctions, in contrast to unsolicited monologues. Characteristic four in particular presents a challenge, considering the common perception of accountability as a tool for preventing blame and legal ramifications. For accountability to be realised, practising diligent AI transparency is crucial, so that it does not turn “into a garbage can filled with good intentions, loosely defined concepts, and vague images of good governance” (Bovens, 2005).

One-size-does-not-fit-all

Transparency is a constant process, not an everlasting fact. It must be viewed in its context and from the perspective of the stakeholders affected (Lee & Boynton, 2017). A large company providing transparency about its software to a governmental agency cannot give the same explanation and information to a user and expect transparency to be achieved. In a way, more transparency can lead to less transparency when an overwhelming quantity of information is provided to the wrong recipient. Relevant factors for tailoring AI transparency measures include the necessary degree of transparency, the political or societal function of the system, the target group(s), and the specific function of transparency. At the core lies the need for informed decision-making.

AI transparency is a multi-stakeholder effort

In practice, transparency cannot be implemented by a single actor; it has to be applied at every step of the process. A data scientist is often not aware of ethical and legal risks, and a legal counsel, for example, cannot spot those by reading through code. This becomes especially apparent in the case of unintended outcomes, which call not only for prior certifications but also for periodic auditing and possibilities of intervention for the stakeholders at the end of the line. A frequent hurdle for clearer transparency standards in this area arises from the conflict between the protection of business secrets and the need to access source code for auditing purposes.

The ‘AI Transparency Cycle’ (see graphic above) provides an overview of how the many dimensions of AI development and deployment, and their ever-changing nature, could be modelled, and serves as a roadmap for solving the transparency conundrum. It is important not to interpret the cycle as a chronological step-by-step manual, but rather as a continuous, self-improving feedback process in which development, validation, interventions, and education by the actors involved happen in parallel.
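For readers who think in code, one way to make this non-chronological reading concrete is to model the cycle as parallel activities writing to a shared feedback record rather than as a pipeline. The following Python sketch is purely illustrative: the stage names and the TransparencyState structure are our own assumptions, not part of the cycle’s formal definition.

from dataclasses import dataclass, field

# Illustrative sketch only: the stage names and this structure are
# assumptions, not a specification of the AI Transparency Cycle.
STAGES = ["development", "validation", "intervention", "education"]

@dataclass
class TransparencyState:
    """Shared record that every stage reads from and writes to."""
    findings: dict = field(default_factory=lambda: {s: [] for s in STAGES})

    def report(self, stage: str, observation: str) -> None:
        # Any stage can log an observation at any time, and every other
        # stage sees it immediately: a feedback cycle, not a pipeline.
        self.findings[stage].append(observation)

state = TransparencyState()
# The stages run in parallel, not in sequence: an audit finding during
# validation can trigger new development work and new user education.
state.report("validation", "audit flagged opaque recommendation logic")
state.report("development", "documented the recommendation model in response")
state.report("education", "updated the user-facing explanation")

The point of the sketch is simply that no stage ‘completes’ before another begins; each continuously consumes and produces feedback.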

References

Ball, C. (2009). What is Transparency? Public Integrity, 11, 293–308.

Bovens, M. (2005). The Concept of Public Accountability. In Ferlie, E., Lynn Jr., L. E., & Pollitt, C. (Eds.), The Oxford Handbook of Public Management (p. 182). Oxford: Oxford University Press.

Lee, T., & Boynton, L. A. (2017). Conceptualizing transparency: Propositions for the integration of situational factors and stakeholders’ perspectives. Public Relations Inquiry, 6, 233–251.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Theresa Züger, Dr.

Research Group Lead: Public Interest AI | AI & Society Lab, Co-Lead: Human in the Loop

Daniel Pothmann

Project Assistant: Knowledge Transfer | Public Interest AI
