Myth: AI algorithms decide what you see online
There’s more than one myth about algorithmic visibility regimes: one posits that AI algorithms are tools used unilaterally by corporations to control what we see; the other argues that these algorithms are mere mirrors, and we are the ones who control what we see online.
Some say that AI algorithms decide what we see online; others argue that algorithms merely do what we tell them to do. Neither idea is accurate. What we see online results from relationships among many actors and things: users and algorithms, but also platforms, coders, data and interfaces. What matters is understanding the inequality that marks these relationships.
Materials
KEY LITERATURE
Bucher, T. (2018). If…Then: Algorithmic Power and Politics. Oxford: Oxford University Press.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167-193). Cambridge, MA: MIT Press.
Introna, L. D., & Nissenbaum, H. (2000). Shaping the Web: Why the politics of search engines matters. The Information Society, 16(3), 169-185.
UNICORN IN THE FIELD
The Social Media Collective is ‘a network of social science and humanistic researchers’, funded by Microsoft but working on their own independent agendas. Much of what they do concerns the broad field of platforms’ algorithmic visibility and often helps steer debates on the theme.
About the author
João Carlos Magalhães
Senior researcher at the Alexander von Humboldt Institute for Internet and Society
Much of João’s work explores the political and moral ramifications of algorithmic media and technologies. At the HIIG, he heads an EU-funded project that is mapping out social media platforms’ governance structures, with a focus on copyright policies and automated filters. In 2020, he was awarded a fellowship from the Wikimedia Foundation to help create an open database of platforms’ policies.
Why, AI?
This post is part of our project “Why, AI?”, a learning space that helps you find out more about the myths and truths surrounding automation, algorithms, society and ourselves. It is continuously being filled with new contributions.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.