Censorship on social media—where do we draw the line?

Former President Trump’s permanent suspension from Twitter raises critical questions about the ability to regulate speech across the internet.

Addison Freiheit, Staff Writer

Former President Donald Trump’s permanent suspension from Twitter brought forth two primary reactions from the public: outrage and relief. To some, the censorship of a world leader reflects bias from social media platforms, while others wish Trump had been banned from social media for his insensitive tweets years ago. But the matter of censorship is not about whether one likes what is said on the internet—it is a matter of free speech. 


Contrary to popular belief, the First Amendment only applies when there is state action. That is why students walking onto Biola’s campus, a private university, are subject to a dress code, while students at state colleges are not. One’s First Amendment rights do not disappear on private property; they simply do not apply unless the government is directly involved. Twitter, as a privately owned business, has the right, and even the responsibility, to regulate what is posted on its site.

Further, when someone signs up for Twitter, they accept a terms of service agreement. By agreeing to abide by Twitter’s rules and regulations, users also give Twitter the right to suspend them if they break that contract. 

After the Capitol riot on Jan. 6 and Trump’s responding tweet, which referred to his voters as “American Patriots” who “will have a GIANT voice long into the future,” Twitter permanently suspended him. According to Twitter, Trump’s words could incite violence, a clear violation of its glorification of violence policy. Trump’s suspension from Twitter is not an infringement of his constitutional rights, because it is Twitter’s constitutional right to determine what is harmful on its platform.


Platforms also have a responsibility to manage content on their sites. In 2018, YouTube was criticized for allowing popular YouTuber Logan Paul to post a video of his visit to Japan’s “suicide forest.” Paul’s video captured him joking about a deceased man his group found in the forest, and it gathered over 6 million views before YouTube took it down. YouTube’s failure to manage its content exposed the need for regulation on social media platforms. 

Anyone with access to the internet can witness disturbing content like Paul’s or stumble upon sexually exploitative material. According to 2020 research by the British Board of Film Classification, 51% of 11 to 13-year-olds have been exposed to pornography, and 62% of those encountered it unintentionally. Kids are stumbling upon harmful content, a fact that calls for regulation from social media sites. 


The difficulty for social media sites is discerning harmful content. Sexually exploitative posts are more apparent than, say, a tweet inciting violence. Many platforms use artificial intelligence to flag and ban content, or depend on the community to report harmful posts. Both are faulty: AI struggles to parse comments whose meaning is often nuanced, while user-reported posts can be influenced by bias. 

In recent years, platforms have also attempted to police content deemed false. Twitter’s COVID-19 misleading information policy claims the right to flag and take down any false information about the pandemic that may be harmful to the public. Although this may seem honorable, it raises questions about what is true and who gets to decide. 

Scientists are still learning about COVID-19. If a social media platform is in charge of distinguishing between false information and facts, then America could easily find itself in a situation similar to that of China, which blatantly denied the first warnings about the coronavirus. 

The ultimate fear about allowing social media companies to censor their users is that those companies will be able to control what the public hears and sees. Although these companies are within their rights to regulate what is posted on their platforms, they must be limited in their ability to do so in order to protect democracy. 


Social media is not a small, private business; it is a critical and powerful part of global communication today. When it comes to contested topics such as false information and political opinion, it is better to err on the side of freedom. 

The government must protect this freedom by clarifying what content can and cannot be regulated by social media platforms. With the ability to block accounts or refrain from participating on a social media platform, users can learn to discern for themselves what voices are worth listening to. 

Subjects already regulated by the federal government, like pornography and violence, should have clear rules outlined by both the government and the platforms. The lines cannot continuously be left up for interpretation, or to the questionable discernment of social media authorities. 
