20 January 2021 | doi: 10.5281/zenodo.4452615

Trump’s very own platform? Two scenarios and their legal implications

Should we applaud social media platforms for finally restricting Donald Trump’s accounts? May we hold them responsible for allowing incitement to violence to spread? Should it be up to private actors to decide whether or not to ban the US President from the digital public sphere? Most people probably have a clear opinion on these questions, but in fact, they are not as easy to answer as they may seem.


The storming of the Capitol on January 6th has shown dramatically, once again, that the spread of lies and hateful speech leads to real-life harm. But protecting freedom of expression and the free formation of opinion online while setting rules for a civilized communicative space is a complicated endeavor. It raises many questions of constitutional law and of power structures in democratic societies, especially in the US but with repercussions for the rest of the world. What would change if Trump either launched his own service or shifted to another one, assuming there would be no content moderation in either case?

Scenario 1: Trump starts his own social network.

After being “indefinitely” (Facebook) and “permanently” (Twitter) suspended from the largest social media platforms, Trump suggested he could start his own service.

Two factual aspects make this scenario rather unlikely (I’ll get to the legal questions afterwards). First, Trump isn’t interested in dialogue. He has been using Twitter as a “typewriter”, not as a place to exchange viewpoints. He wants a channel to broadcast unilaterally, not a forum. Secondly, he would still need the infrastructure necessary to operate. Usually, web infrastructure providers are invisible to the public and do not interfere, but when they do, it makes a difference. After the attack on the Capitol, host providers could be reluctant to support a Trump-Twitter or Trumpbook, just as Amazon suspended Parler (not least because public awareness of these matters has greatly increased). App stores, too, could ban the new social network’s app (they have already banned Parler’s). The lack of infrastructure doesn’t make this impossible for Trump, of course, but it does make it less probable.

Assuming Trump starts his own service, he will be very free to express his viewpoints, even more so than before. Without Twitter’s or Facebook’s community guidelines/standards, there will be no control over what he decides to share. The protection of free speech under the First Amendment is very broad and allows only very few exceptions. Trump will be even more untamed after Joe Biden’s inauguration because his social media profile will no longer meet the requirements of a governmental public forum. According to the court decision in Knight First Amendment Institute v. Trump, the tweets sent by the President qualified as a designated public forum, and he was therefore not allowed to exclude people (by “blocking” them) from accessing his Twitter account. Governmental communication via private digital actors is a whole other aspect of the underlying question, but what counts here is this: once Trump is no longer a government official, he will no longer have to allow people into his bubble. His private social network could become yet another refuge for extremists and conspiracy ideologues. Authorities would have almost no legal means to restrict it without violating the First Amendment.

Scenario 2: Trump uses a conservative platform to continue spreading his viewpoints without the barrier of content moderation.

He could turn to a platform such as Parler to avoid “strict” content rules. Indeed, Trump and other conservatives have long criticized platforms such as Facebook, YouTube, and Twitter for propagating a left-wing perspective and for “censoring” them (as Trump put it just before his supporters stormed the Capitol: “We will finally hold big tech accountable.”).

Under Sec. 230 CDA, social media platforms don’t have to moderate: they can simply act as mere “pipes”, as opposed to editors, a distinction drawn by the Supreme Court (Smith v. California, 361 U.S. 147). Thanks to this immunity from liability, it is up to social networks whether to moderate content, for instance by banning hate speech and misinformation, or not (as we have witnessed over the past four years). Because, under the First Amendment, the State is, in principle, not allowed to regulate speech (“Congress shall make no law … abridging the freedom of speech”), this power over what may or may not be said is reserved to non-state actors, i.e. it lies in the legal relationships between private parties. Under this legal regime, Twitter is allowed to moderate or ban content because social media platforms are private actors and therefore not bound by the First Amendment. Under the state action doctrine, private parties are not required to respect the fundamental rights of third parties enshrined in the Bill of Rights. The rationale behind the state action doctrine is to preserve private autonomy, leaving the relationship between private parties immune to the application of the Constitution. Private parties may only be subject to the same obligations as the government if they fall under the public function or the entanglement exception.[1]

One might argue that social media platforms are not mere private actors like other market participants. Instead, they could be considered state actors so that they would have to respect their users’ right to free speech (under the public function exception). This has been a constant discussion among First Amendment scholars, but courts are reluctant to treat social media platforms as state actors, even though, according to the Supreme Court, they provide “the most important places for the exchange of views” (Packingham v. North Carolina, 582 U.S. ___ (2017)). It is, therefore, at the platform’s discretion whether or not to remove certain types of speech.

Could Trump then behave the way he has so far if he were to use another platform? Most likely yes, but that is not connected to the type of platform: it has to do with the First Amendment’s protection. While incitement to violence is one of the very few exceptions to its scope of protection, the criteria established by case law are hardly applicable to online speech. Incitement to violence may only be forbidden if it is likely to produce “imminent lawless action” (Brandenburg v. Ohio, 395 U.S. 444), and the point in time may not lie at “some indefinite future time” (Hess v. Indiana, 414 U.S. 105). Measured by these criteria, online hate speech isn’t concrete enough in most cases because its effects can unfold at any future point in time. Hence, even if constant incitement fuels violent actions and can have real consequences, as we have seen (not only at the Capitol), it has to be more specific to lose constitutional protection. This leads me to the conclusion that Trump’s speech just before the storming of the Capitol (“we are going to walk down Pennsylvania Avenue, and we are going to the Capitol, (…) and give them the kind of pride and boldness that they need to take back our country.”) would probably have been concrete enough to justify an exception to the First Amendment, even if posted on a conservative network. But for the removal of other, less specific incitements via social media, we would depend on the respective platform to act.

What’s next?

Overall, the whole situation would become even worse if Trump turned away from the platforms he has been using so far. This result leaves no room for doubt: adapting First Amendment doctrines to the digital public sphere is a pressing need, not merely a matter of scholarly opinion. It needs to be addressed by the courts (because Congress has only very limited options under the First Amendment) in order to overcome the current obstacles and to provide a democratically legitimate answer, instead of allocating these substantial matters to the goodwill of the biggest social media platforms. One option would be a new interpretation of the criteria mentioned above, allowing for a contemporary protection of First Amendment values.


[1] For a comparison of the Drittwirkungslehre and the US public forum theories: Heldt, Merging the Social and the Public: How Social Media Platforms Could be a New Public Forum, Mitchell Hamline Law Review, Vol. 6 Issue 5, Art. 1.


This article was first published on JuWiss:

Amélie Heldt, Trump’s very own platform? Two scenarios and their legal implications, JuWissBlog Nr. 3/2021 v. 11.01.2021, https://www.juwiss.de/03-2021/

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Amélie Heldt

Former Associated Researcher: Platform Governance

