5 questions to…
Max Hänska, Lecturer (Assistant Professor) at De Montfort University in Leicester, UK. His research focuses on social media in political communication and grassroots journalism, and on the role of critical communication in decision-making. As a visiting researcher at HIIG, Max Hänska is currently studying Twitter discussions and the emergence of a European political public sphere.
Interview by Cornelius Puschmann.
1. Max, on your website it says that your work explores the role of social media in facilitating a European public sphere. What does that mean in more concrete terms?
Many have argued that the emergence of a genuinely European public sphere would be desirable, but that it faces several obstacles. Foremost among them are Europe’s linguistic and cultural diversity, as well as nation-specific media systems. Broadcast media, for the most part, are decidedly national institutions that operate in a particular language and address national audiences. Social media’s scope is not linguistically or geographically defined. This absence of linguistic and geographic determination makes it plausible that social media could provide a platform for the emergence of a European public sphere.
Now, in some respects such a European public sphere hosted on social media may look similar to a broadcast public sphere, in that we can ask whether European issues are thematised and discussed by users in different parts of Europe—just as we can ask whether broadcast media across Europe thematise a particular issue.
Yet, unlike broadcast media, social media are not in the business of creating content. Rather, it is their users who interact, generate content and leave other traces of their digital lives. So a public sphere hosted on social media would also look quite different to one facilitated through broadcast media. Because social media afford users the opportunity to create content and interact with each other, we would expect a public sphere hosted on social media to involve interactions between users. For instance, we would expect users to engage with each other across Europe’s national borders. So, if social media users in different parts of Europe discuss European issues, and these users engage with each other across national boundaries, that starts to look like the beginnings of a genuinely European public sphere.
2. Any research project with the phrase ‘public sphere’ in it inevitably evokes Habermas. How comparable is social media to his deliberative ideal?
Habermas’ conception of the public sphere is decision-oriented, and stipulates a set of discursive qualities that, if satisfied, are said to make collective decisions legitimate. For the purposes of my current work we don’t evaluate the quality of discourse. In any case, the constraints of a tweet significantly limit discursive possibilities. Instead we start by simply asking whether there is any kind of discourse, any kind of communicative interaction, between people in different parts of Europe. In a second step we could then evaluate the quality of this discourse. That said, from what we know, different kinds of social platforms seem to host different kinds of discourse. For instance, engagement on Twitter is said to be more adversarial than on Instagram. The rules of engagement, as it were, vary from platform to platform.
3. Your recent work relies on Twitter in particular. What issues have you encountered when working with Twitter data, both technically and conceptually?
Conceptual and technical challenges are really two sides of the same coin. Twitter makes a significant amount of its data accessible to researchers free of charge, which is why there is so much Twitter research. A Twitter update, unlike a Facebook post, is also (mostly) public, a general requirement for data collection. However, the free APIs (there is a search and a streaming API) come with significant constraints. The search API has very limited uses, as it only grants access to ‘indices of recent popular tweets.’ The streaming API allows access to up to 1% of all tweets, collected in real time; that is, you can only collect tweets live, but not search for historical tweets. Buying historical Twitter data is expensive. Though you can get cheaper access to historical Twitter data through some third-party providers, these lock your analysis into their proprietary data analysis platforms, effectively black-boxing your analysis. That isn’t really an option for academic research, as we need to know exactly how the analysis is carried out. For most purposes that leaves researchers with the streaming API.
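To make those access constraints concrete, here is a minimal sketch of what a filtered live collection from the streaming API might look like. It assumes the Python library tweepy (3.x) and placeholder credentials; the keywords, languages and file name are purely illustrative, not taken from the project described here.

```python
import json
import tweepy

# Placeholder credentials, obtained by registering an app with Twitter.
CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."

class TweetCollector(tweepy.StreamListener):
    """Writes every matching tweet to a file as it arrives (live only)."""

    def __init__(self, outfile):
        super().__init__()
        self.outfile = outfile

    def on_status(self, status):
        # Each status is a tweet matching the filter parameters set below.
        json.dump(status._json, self.outfile)
        self.outfile.write("\n")

    def on_error(self, status_code):
        # 420 means we are being rate-limited; disconnect rather than retry.
        if status_code == 420:
            return False

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

with open("tweets.jsonl", "w") as f:
    stream = tweepy.Stream(auth=auth, listener=TweetCollector(f))
    # Illustrative filter parameters: the chosen keywords and languages
    # define the selective data set, and thereby the bias, discussed below.
    stream.filter(track=["brexit", "eurozone"], languages=["en", "de", "fr"])
```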
The conceptual problems mostly derive from the way the data is generated (Twitter data, unlike survey data, is not generated for research), and from the different ways we can collect or extract data sets for analysis. These problems are familiar, and encountered in most ‘big data’ research. Namely, we don’t know how representative our data is. The streaming API requires you to set filter parameters, but that means that we work with selective data sets that match particular parameters. And it is hard to know how our parameters bias our data collection, which, in turn, raises the question of how representative our sample is of Twitter more generally.
The API also allows us to collect a random sample of 1% of global tweets (though Twitter tells us little about the sampling process). A random sample may thus allow us to generalise about Twitter. Nevertheless, as we don’t have a sampling frame for social media, we still don’t know how representative the users in our sample are of the population in general. There are efforts to infer the demographic characteristics of Twitter users, which would allow us to weight results accordingly, but this is tricky and certainly no exact science.
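For comparison, that 1% random sample is exposed through a separate streaming endpoint. A minimal sketch, again assuming tweepy 3.x and reusing the hypothetical auth handler and TweetCollector listener from the previous snippet:

```python
# Reuses the auth handler and TweetCollector listener defined above.
with open("sample_tweets.jsonl", "w") as f:
    stream = tweepy.Stream(auth=auth, listener=TweetCollector(f))
    # sample() delivers a ~1% random sample of all public tweets with no
    # filter parameters, but Twitter documents little about how it samples.
    stream.sample()
```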
So the assumption sometimes made that ‘lots of data = all data’ is certainly fallacious. We need to pay careful attention to how data is generated, how we may access it, and what (if anything) we may have that could fulfil the function of a sampling frame. These considerations make it much more complicated to figure out how to interpret results when analysing Twitter data: that is, whether we can generalise, and if so, about which population. You really have to check your desire to overstate results.
4. One case that you have studied is the discourse surrounding the European financial crisis, another is Brexit. Can you think of other issues that are widely discussed by the emerging European public sphere, perhaps more positive ones, or are these types of political crises particularly prone to provoking cross-cutting social media debates?
Twitter discussions are event-driven, so you could really study any significant event on Twitter: the Olympics, the Eurovision Song Contest, or European elections. Whether the patterns that we observed in the case of the European fiscal and sovereign debt crisis would also be observed in these cases (in terms of pan-European attention to the event, and interactions between users in different countries) is an empirical question that would be interesting to investigate.
5. Your research to date has also examined the impact of user-generated content (UGC) on newsrooms, particularly at the BBC. What role do you see UGC playing in the future of journalism?
UGC is not going anywhere; it is a natural part of an environment where communication technology affords everyone the opportunity to produce content (for lack of a better word). Think of it this way: if a newsworthy event occurs, particularly an unplanned one, it is much more likely that the first footage to emerge will have been shared by a non-journalist. The news value of such UGC will not be diminished by the fact that it wasn’t captured by a professional. If such a video of an important event is publicly available, and if no equivalent professional footage exists, journalists will render themselves irrelevant by ignoring it. So, in my view, UGC is here to stay.
What will change is the ways newsrooms and journalists integrate UGC into their reporting—there has been much experimentation in this area. Here it is key to recognise that UGC exists in a broader media ecology. It is not merely that audiences can now capture news-relevant material with their smartphones, but that they also use the same smartphone to read and share news stories. The question is what kind of role news professionals carve out for themselves within this social media ecology, in which news is produced, consumed, and shared. UGC is an important part of the story, but the bigger picture is one where content production is not the exclusive preserve of journalists, where gatekeepers can be bypassed, where news is read and shared across platforms, and where journalists have to discover ways of renewing their vital societal function in very different media environments.