Two years after the takeover: Four key policy changes of X under Musk
Since controversial businessman and investor Elon Musk took over X (formerly Twitter) in October 2022, the platform has implemented a range of policy changes. Some changes, such as the removal of a rule against the misgendering of trans people, have made headlines, while others have gone largely unnoticed. What overall direction has X’s rulebook taken? And how have the platform’s policies on hate speech, misinformation and child abuse evolved? This blog post charts four central developments within the platform’s complex web of policies.
Policy shifts since Musk’s acquisition
Much has been written about the transformation of the platform previously known as Twitter since Elon Musk announced that “the bird is freed” on 28 October 2022. With regard to the complex web of policies that Twitter had developed over its 16-year existence, his acquisition sparked concerns about a potential rollback of content moderation on issues such as hate speech and misinformation.
This blog post provides an overview of four key developments in X’s Rules and policies since Musk took control of the platform. The analysis is based on the Platform Governance Archive, which enables continuous tracking of the historical evolution of X’s policies.
1. Stepping back from misinformation policies
The most pronounced changes in X’s policies have concerned misinformation. Since Musk’s takeover, X has removed its policies on crisis misinformation, COVID-19 misleading information, and misinformation about election outcomes, as well as the concept of “informational harm”.
- Crisis misinformation: X no longer addresses “false or misleading information that could bring harm to crisis-affected populations (…) such as in situations of armed conflict, public health emergencies, and large-scale natural disasters” as part of its corporate policies. The former policy covered cases such as false allegations of war crimes or false reporting about conditions on the ground in an armed conflict, if they had the potential to cause serious harm to people.
- COVID-19 misleading information: X removed former policies targeting “demonstrably false or misleading information about COVID-19 (…) which may lead to harm.” The policy previously aimed to curb harmful misinformation and false claims about the virus, vaccines, and treatments.
- Civic integrity policy: The platform has eliminated provisions prohibiting misleading information about election results. They previously banned posts containing claims like “unverified information about election rigging” or “claiming victory before election results have been certified.”
- Informational harm: Along with its policy on coordinated harmful activity, X discarded this type of harm from its rules and regulations. Twitter had previously defined it as negatively impacting an individual’s ability to access essential information for exercising their rights; the definition also covered information that “significantly disrupts the stability and/or safety of a social group or society”.
Through these changes, X under Musk has stepped back from addressing misinformation in domains such as election outcomes, medical crises and international conflicts. In doing so, the company has abandoned the “arbiter of truth” role that Twitter had assumed on these issues, along with the arguably vague definition of “informational harm”.
2. Expansion of child abuse policies
One area where X has extended its rules and regulations is the physical and sexual abuse of children. Previously, Twitter’s child sexual exploitation policy only covered cases of sexual child abuse. Now, X’s new policy on child safety also addresses the physical abuse of children.
- Media depicting physical child abuse: X has broadened its policy to cover most instances of physical child abuse, aiming to prevent revictimization and the normalization of violence against children.
- Media of minors in physical altercation: Under this new category, the platform may remove content showing minors in physical fights or restrict its reach, depending on the context.
- Removal of exceptions: The updated policy also removed an exception for “depictions of nude minors in a non-sexualized context”, which had previously protected such depictions in scientific, educational, or artistic contexts from removal.
The tightening of X’s provisions on child abuse has been accompanied by company communications emphasizing these strengthened rules and the growing use of automated detection technologies. According to the platform, these measures have resulted in a steep increase in account suspensions related to the sexual exploitation of children.
3. Mixed changes in hate and violent speech policies
With regard to hate speech and violent threats, the changes to X’s policies present a more mixed picture. While protections against misgendering and deadnaming have been reduced, other provisions on violent speech have been extended.
- Misgendering and deadnaming: One change, widely covered by the media, is the platform’s removal of a passage from its policy on hateful conduct which specifically prohibited the “misgendering or deadnaming of transgender individuals”. X also removed from the policy’s introductory paragraph a sentence listing “women, people of color, and LGBTQ+ individuals” as examples of groups disproportionately targeted with online abuse.
- Retention of core prohibitions: The overarching provision prohibiting attacks on people “on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease”, however, remains intact. X furthermore introduced a new provision on the “Use of Prior Names and Pronouns” to its policy on abuse and harassment. It states: “Where required by local laws, we will reduce the visibility of posts that purposefully use different pronouns to address someone other than what that person uses for themselves”. Musk himself has stressed the involuntary nature of this change, noting that “Turns out this was due to a court judgment in Brazil, which is being appealed, but should not apply outside of Brazil”.
- Expansion of violent threats policies: X has expanded its approach to handling violent threats. Previously, Twitter’s policy only prohibited “statements to inflict serious physical harm” on others. Now, X’s policy covers all threats and not just those causing serious physical harm. Additionally, the policy has been extended to include threats aimed at damaging civilian homes or critical infrastructure through “physical violence and/or violent rhetoric.”
- Update to provisions on violent content: X’s policies on violent content were further revised to include new guidelines addressing the use of “coded language (often called ‘dog whistles’) to indirectly incite violence”. Additionally, the platform removed an exception from its policy on the perpetrators of violent attacks. It previously exempted individuals and content related to “violent resistance against those actively participating in hostilities in an armed conflict” from enforcement under this policy.
These changes indicate that, although X has reduced protections against misgendering and deadnaming, other policies related to hateful and violent speech have been strengthened.
4. Softer sanctions for policy violations
Although X’s definitions of permitted and prohibited content have remained largely consistent across many policy areas, the consequences for violations have shifted significantly. Instead of focusing on content removal and account suspensions, the company has moved towards softer measures, like limiting the visibility of problematic content and profiles.
- Civic integrity policy: X’s civic integrity policy provides a clear example of this shift. Before Musk’s takeover, the platform used a strike system that included measures like labeling content, limiting visibility, removing posts, and suspending accounts. Accumulating more than five policy violations led to a permanent suspension from the platform.
- Freedom of Speech, Not Freedom of Reach: As of now, X primarily imposes reach restrictions for policy violations, such as excluding posts and profiles from search results and recommendations or adding informative labels. This aligns with X’s “Freedom of Speech, Not Freedom of Reach” philosophy, which limits account suspensions and permanent bans to “severe violations” like illegal activities, violent threats, targeted harassment, privacy violations, and platform manipulation or spam.
What to make of X’s policy transformation?
An analysis of these four developments reveals that X’s policy transformation cannot be seen as a simple rollback of content moderation. Rather than eliminating all previous rules, the platform has selectively removed, expanded, or restructured them. At the same time, there has been a noticeable shift towards softer sanctions across different policy areas.
Nonetheless, policies represent only one aspect of X’s transformation. Changes to the platform’s functionalities, algorithmic operations and overall culture are other important factors to consider. Musk’s very outspoken political agenda and his self-proclaimed crusade against the “woke mind virus” raise concerns about how the richest man in the world is using his power to influence which voices are suppressed and amplified on the platform.
Ultimately, X’s extensive and evolving rulebook remains a focal point in the ongoing debate over how social media platforms should govern digital communications and interactions, making it a rich subject for future research.