ai imaginaries
21 May 2019| doi: 10.5281/zenodo.3089175

“I miss imaginaries that spell out ideas of AI as a public good”

Christian Katzenbach recently worked with his colleague Jascha Bareis on a project that compared the strategies of countries such as France, the US and China to compete in a global AI race, and identified distinct approaches across these countries. The researcher and writer Nicolas Nova interviewed him for a report commissioned by the City of Lyon about these different strategies and their impact on the imaginaries of AI. The original French version of the interview has been published in the report and by the French magazine “Millénaire 3”.

NN: Who are you (as a researcher)? What are you working on? And what was the focus of your project about AI policies and national visions?

CK: Having studied media and communications, philosophy, computer science and Science and Technology Studies (STS), I have an interdisciplinary research agenda that addresses the entanglements of technology, communication, and politics. From this perspective I study platform governance, algorithmic decision-making, communication dynamics and the formation of the public sphere, and media and internet governance.

In recent years, I have become interested in the rush towards AI – which seems to be happening in all domains: media, business, politics, research. AI appears to be a catchword that is used to frame many diverse things at once. The debates about bias, fairness, agency, and transparency also seem to have shifted from the notion of algorithms to the notion of AI without much substantial change. Based on an analysis of policy reports – such as strategy papers, plans and policies issued by institutions like the Chinese Communist Party, the White House or the French Parliament – and the public discourse of state representatives, we noted both commonalities in the ambition to become top research hubs and competitive economic leaders, and differences in focus, approach and values between the countries, even touching upon well-known national narratives.

The term “AI” (Artificial Intelligence) is indeed very polysemic. Do you see differences in the way it is framed in the different countries you looked at?

As a term, “AI” is indeed used in various ways – so at the semantic level there is plenty of variation, both within the countries that we studied and across them. Where we have identified striking differences is in the general framing of AI and its relation to social, political and economic issues. The French AI strategy, for example, which Emmanuel Macron presented in 2018, is called “AI for humanity”, draws tight connections to a “new renaissance”, calls AI a “promethean promise”, and stresses the role of a strong regulatory state and the need to consider AI a “public good”. In order to “boost the potential of French research”, Macron announced his intention to strengthen public research institutes (in addition to notable public-private research partnerships) and stated his aim to create a national research coordination hub, including a network of four or five institutes across France. In total, Macron plans to spend €1.5 billion on AI during his current presidency, with the biggest part of the sum going to research and industrial projects.

The US, by contrast, focuses its national strategy on deregulation and competitive advantage. Policy aims at removing barriers to AI innovation “wherever and whenever we can”. The US government wants to foster the combined strength of government, industry and academia and to generate a competitive advantage over other nations. Concretely, according to the strategy document, the US has loosened the regulatory frameworks for AI in autonomous driving, commercial and public drone operations, and medical diagnostics. Concerning Research and Development (R&D) and the private sector, the Trump government emphasises its ambition to remain “the global leader in AI”, having increased investment in unclassified R&D for AI by over 40% since 2015 ($1.1 billion in 2015).

Read the full analysis here!

Among all the governments, the Chinese Communist Party (CCP) presents the most detailed, comprehensive and ambitious AI strategy. The CCP is planning to use AI as a universal problem solver. To make things concrete, its detailed plan gives technical specifications for how to integrate AI into the information and manufacturing industries in order to turn “China into a manufacturing (…) and a cyber superpower.” Neither the French nor the American strategy papers have such accuracy and detail, which once more stresses the CCP’s determination to fulfil its ambitious three-step future plan. What is also noticeable about the Chinese strategy is the ambition to fuse such “civilian” AI technology with military innovations and applications.

Based on your research, how do you think such differences in framing AI lead to different policies?

The national AI strategies that we analysed are a peculiar hybrid between policy and discourse. They are at the same time tech policy, national strategic positioning and an imaginary of public and private goods. In most cases, they sketch broad visions and ambitions – and are rather scarce when it comes to concrete measures and policies. Most do allocate – or at least promise – resources to AI research, list already issued policies and regulations, and present roadmaps for future measures and initiatives. So their function is a mix of strategic positioning, jumping on the bandwagon, and providing orientation and legitimation for future measures – much less initiating concrete policies and regulations. The impact of the papers’ own framings on the policies is therefore hard to evaluate for now. But taken as a whole, these documents most probably already reflect the different framings and imaginaries that circulate in the different countries. We are currently planning follow-up studies that look at the media discourses around AI over time and across countries to understand how different imaginaries travel across domains and become dominant or marginal.

As you mentioned, these visions can guide and reinforce imaginaries of AI. That’s a common pattern in the history of technology. Why is it important to shape these imaginaries? Or, put differently, what do nation states such as China, the US and France expect from it?

These national AI strategies are the first cornerstones in the institutionalisation – and naturalisation – of AI in our lives and societies. Although AI is severely over-hyped – creating a myth of human intelligence and empathy that AI is simply not able to deliver – we are currently setting the frames for how to understand this development and identify problems – thus setting the frame in which we articulate the need to take action and start searching for solutions. In this way, these framings are more than mere talk. Socio-technological imaginaries materialise in the drafting of policies, the mobilisation of industries and the allocation of resources. Thus, the imaginaries are not only to be understood as constitutive but as performative: they create situations of irreversibility, as investments ask for returns and political promises have to be met. For instance, the Chinese Communist Party is strategically tapping civilian innovation for military use and vice versa. Whereas Google retreated from working together with the Pentagon, in China governmental actors work hand in hand with commercial companies or simply strategically appropriate innovations from the private sector. The CCP is taking advantage of its authoritarian centralising power, enforcing synergies wherever it can and leaving aside ethical considerations in order to push China to become the leading AI nation.

For this reason, nation states currently struggle to balance the perceived need for quick action with the setting of adequate frameworks for understanding and coping with AI, and with the design of desirable futures. Thus, the national governments try to shape the currently negotiated sociotechnical imaginaries along their institutional and national interests, be that competitiveness, surveillance, or public welfare – and most often mixtures of all of these.

Down the road, do you think these various policies lead to different imaginaries of AI?

The strategy papers and policies are part of the broader social process of negotiating sociotechnical imaginaries and shared understandings of technologies and social developments. Thus, they reinforce, slightly change or fundamentally reorient specific imaginaries and frames. Policies, once in place, are very solid materialisations of imaginaries. They may have a strong impact because they can be enforced in cases of non-compliance. However, without broad social legitimation they usually fall short. In other words: the strict anti-smoking regulations of the early 2000s in Europe would probably not have proved successful without the growing interest of European societies in health and fitness issues.

The striking differences between France, the US and China that we identified obviously point to deep political and cultural differences, but they also show that the future, and especially the role of automation and AI in that future, is highly contested. We are currently negotiating how we want to live with automation and AI. And this negotiation is not only about technology, policy and budgets – it is strongly entrenched in myths and metaphors. Let’s be aware of that.

In your opinion, based on your research, what kind of imaginaries of AI are missing? What isn’t considered? Why is that?

Although there is some talk about AI for humanity, about ethics and fairness, most concrete imaginaries and scenarios are strongly led by economic and technological arguments: What is possible, what is convenient, what is efficient? I miss imaginaries that spell out ideas of AI as a public good and its use for public welfare – and I particularly miss imaginaries that highlight scenarios that do without AI, identifying domains where we do not want automatic sorting and decision-making to take place. We currently seem to take for granted that AI technologies will necessarily permeate every domain of society and all aspects of our lives. But this is not inevitable. It could be different. We are living in a crucial and critical time in which we (re)build the infrastructures of our lives and societies. Let’s talk about what we want, collectively – and how we can achieve that.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Christian Katzenbach, Prof. Dr.

Associated researcher: The evolving digital society
