Upload-Filters: Bypassing Classical Concepts of Censorship?
Author: Heldt, A.
Published in: JIPITEC – Journal of Intellectual Property, Information Technology and E-Commerce Law, 10(1), 56-65
Year: 2019
Type: Academic article
Protecting human rights in the context of automated decision-making might not be limited to the relationship between intermediaries and their users. In fact, to adequately address human rights issues vis-à-vis social media platforms, we need to include the state as an actor too. In the German and European human rights frameworks, fundamental rights are in principle only applicable vertically, that is, between the state and the citizen. Where does that leave the right to freedom of expression when user-generated content is deleted by intermediaries based on an agreement with a public authority? We must address this question in light of the use of artificial intelligence to moderate online speech and its (until now lacking) regulatory framework. When states create incentives for private actors to delete user content proactively, is it still accurate to examine solely the relationship between platforms and users? Are we facing an expansion of collateral censorship? Does the use of soft law instruments, such as codes of conduct, enhance the protection of third parties, or is it rather an opaque instrument that tends to be conflated with policy laundering? This paper aims to analyze the different layers of platforms' use of artificial intelligence when it is triggered by a non-regulatory mode of governance. In light of the ongoing struggle in content moderation to balance freedom of speech against other legal interests, it is necessary to analyze whether intelligent technologies could meet the requirements of freedom of speech and information to a sufficient degree.
Connected HIIG researchers
Amélie Heldt
- Open Access
- Peer Reviewed