Social media companies should be held responsible. (Courtesy of Pixabay)
In 2021, Frances Haugen, the Facebook whistleblower, argued that Facebook ignored internal research showing that Instagram harms the mental health of teenage girls. More recently, TikTok has become controversial as various lawmakers suggest the app is Chinese spyware. Controversies like these have sparked debate about whether social media companies are responsible for the content their users post.
Companies are, at least indirectly, responsible for the content on their platforms. Making them legally responsible, however, is a dangerous path.
Currently, a law known as Section 230 shields social media platforms from liability for content published by their users. If someone objects to a post or video, they can report it or complain about it, but they cannot sue the platform over it.
Repealing Section 230, as some have suggested, would make these platforms liable for user content––but it would also create a host of new issues. Content moderation would increase as Facebook and Twitter scramble to prevent potential lawsuits. If free speech is already uncertain on social media, repealing Section 230 would kill it completely.
More recently, lawmakers introduced a bill called the RESTRICT Act, widely described as a TikTok ban. Whether it would actually ban the popular app is uncertain, since the bill never names TikTok, but, again, it would create a laundry list of problems.
The RESTRICT Act allows the federal government “to identify, deter, disrupt, prevent, prohibit, investigate, or otherwise mitigate…any risk arising from any covered transaction.” Language this vague suggests that any action that increases “risk,” such as using a virtual private network (VPN) to access apps or websites securely, could be punished as part of the “mitigation measures.”
Additionally, because the bill does not name TikTok as its target, its reach could easily expand to any other app the government deems dangerous.
Because of these threats to free speech, legal accountability is not the right approach. Companies are, however, morally responsible for the content they allow, although how they define that responsibility can create problems of its own.
Even so, increased content moderation could quickly become more discriminatory than helpful. If simply told to “ban bad posts,” these companies would take the job all too seriously. A company could easily declare a new policy to remove “harmful content” and the users who post it. In a world of hate speech and rapid cancellations, that would let the company remove any content it disagrees with by labeling it “harmful.” Twitter is already notorious for censoring conservatives (although Elon Musk has since reinstated many of them), and Facebook has also censored views it disagrees with. New policies would give these companies more excuses to remove alternative viewpoints.
Instead, stronger content moderation would require specific, detailed rules explaining what is not allowed on a platform, along with a genuine commitment from companies to enforce them. TikTok, for example, says it does not allow posts encouraging suicide, yet research shows that children can encounter suicide-related content within minutes of using the app. It is not enough to write new rules; companies must commit to following them.
That kind of content moderation is unlikely to happen. As long as users keep engaging with their platforms, social media companies have no incentive to change their policies. They have the ability to address these problems, and they undoubtedly should. In the meantime, users will have to protect themselves.
