Thursday, July 3, 2025

Does Internet Censorship Protect Society—or Control It?

Imagine waking up and checking your feed, only to find that your favorite creator has disappeared overnight.

No warning, no explanation, just gone. Or maybe you post something about a political issue and find your account shadow banned for a week. These scenarios aren’t hypothetical. They’re becoming more and more common as governments and tech platforms increase their control over what can and can’t be said online. The question is: are these actions protecting us from harm, or are they quietly reshaping the boundaries of what we’re allowed to think, say, and believe?

The scales of safety and control tip into each other more easily than we like to admit. The debate over internet censorship is one of the defining issues of the digital age. On one hand, it’s obvious why we want some control over harmful content: no one wants misinformation, hate speech, or incitement to violence running wild. But on the other hand, the tools used to stop that content can just as easily be aimed at dissent, criticism, or inconvenient truths. And unlike traditional media censorship, which was often top-down and obvious, modern internet censorship is algorithmic, opaque, and disturbingly quiet.

So, does internet censorship actually protect us? Or is it quietly shaping a reality where control is disguised as safety?

The Rise of Algorithmic Gatekeepers

Let’s start with the platforms. Social media companies have built vast infrastructures that filter, promote, or bury content at scale. But they don’t do it manually; they use algorithms. These algorithms aren't neutral. They’re programmed to prioritize engagement, which often means amplifying content that is sensational or polarizing. To curb the harm, platforms then use AI tools to detect and remove "harmful" posts. Sounds reasonable, right? But studies show that these tools are often biased. According to R. Binns [3], algorithmic moderation can disproportionately target minority voices or controversial but fact-based statements, because the training data is often skewed or lacks context.
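
To make that context problem concrete, here is a deliberately naive sketch in Python of the kind of context-blind filtering automated moderation can collapse into. The flagged-term list, function name, and example posts are all invented for illustration; real platforms rely on trained classifiers rather than word lists, but the failure mode is similar: the system scores the words, not the situation.

# Hypothetical sketch of context-blind automated moderation.
# The flagged-term list and example posts are invented; real systems use
# trained classifiers, but the core weakness is the same: the filter sees
# words, not who is speaking or why.

FLAGGED_TERMS = {"attack", "violence", "kill"}  # toy stand-in for a model's "harm" signal

def flag_for_removal(post: str) -> bool:
    """Flag any post containing a flagged term, with no notion of context."""
    words = {w.strip(".,!?:;\"'").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

if __name__ == "__main__":
    posts = [
        "I will attack you tonight",                          # genuine threat: removed
        "Police violence against minorities rose last year",  # factual reporting: also removed
        "Lovely weather in Berlin today",                     # harmless: kept
    ]
    for post in posts:
        verdict = "REMOVE" if flag_for_removal(post) else "KEEP"
        print(f"{verdict:6s} | {post}")

Even in this toy version, a factual statement about violence gets the same verdict as a genuine threat, which is exactly the kind of disproportionate, context-blind targeting the research describes.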

Mozilla's "YouTube Regrets" project [9] further revealed how recommendation algorithms frequently steered users toward conspiracy theories, extremist content, or polarizing opinions, even when the user never searched for them in the first place. So now we're in a situation where the algorithm pulls you into a dangerous rabbit hole, then punishes creators who say anything even remotely risky.

Even platforms with well-meaning intentions often lack transparency. Appeals are frequently automated, with little explanation. Users don’t even know which policy they violated or how to avoid the same mistake. That opacity breeds confusion, fear, and silence.

Governments Step In: Protection or Power Grab?

Governments haven’t been sitting idle either. In fact, many are taking aggressive steps to control online speech in the name of safety.

Germany has some of the strictest hate speech laws in the world. In 2025, German authorities raided over 65 homes for suspected online hate speech, confiscating devices and pressing charges against individuals based on their social media posts [11]. Their justification? Protecting society from incitement and extremism. And while many of the arrested were linked to genuine hate groups, critics argue that these laws are so broad they can easily be misused against legitimate political expression.

In the UK, the Communications Act 2003 has been used to arrest people for tweets deemed "grossly offensive." One retired police officer was arrested by six officers at home in 2023 for a comment about rising antisemitism [12]. The tweet was controversial, yes, but was it criminal? Police later admitted it was a mistake and dropped the charges, but the damage was done. The man’s devices were seized, his reputation dragged, and public trust took another hit.

These aren’t isolated incidents; they’re evidence of how laws originally meant to stop terrorism or hate speech can bleed into everyday expression. The line between protecting society and silencing it grows blurrier by the year.

When Science Says Censorship Helps

Of course, there’s also solid evidence that some forms of censorship do protect people.

A groundbreaking study published in Science by Vosoughi et al. [1] found that false news spreads faster, deeper, and more broadly than true news on Twitter. That may justify some interventions. During the COVID-19 pandemic, misinformation about vaccines led to real-world consequences, from people refusing treatment to deaths that could have been prevented. Wilson et al. [2] showed that removing COVID-19 misinformation from YouTube actually increased public trust in some cases, especially when transparency was part of the process.

Another dimension is mental health. Studies on algorithmic exposure to harmful content (such as body shaming or suicide glorification) have shown that unchecked recommendation engines can contribute to anxiety, depression, and even self-harm in teenagers [13]. This has led to calls for age-specific censorship protocols that limit content exposure without silencing ideas entirely.

So yes, removing harmful misinformation can be beneficial. But it has to be done with transparency, accountability, and public trust, or it becomes a form of manipulation.

Public Opinion Is Divided

Public support for restricting false information online has grown in recent years.

Figure 1: Growing public support in the U.S. for restricting false information online, even at the expense of free expression. Source: Pew Research Center, “Most Americans favor restrictions on false information, violent content online,” 2023.


According to Pew Research [8], 65% of Americans believe tech companies should remove false information from their platforms, a share that has risen over time, as Figure 1 shows. But that same study reveals a deep partisan divide: while 80% of Democrats support censorship of misinformation, only 39% of Republicans do. Trust in platforms has also eroded, especially as users feel their content is unfairly targeted.


Figure 2: Americans are far less willing to discuss government surveillance online than in person. Source: Pew Research Center, “Social Media and the ‘Spiral of Silence’,” 2014.


There’s also a psychological cost. The “spiral of silence” effect, documented on social media by Hampton et al. [6], shows that people are less likely to share their opinions online if they believe their views are in the minority or fear backlash. As Figure 2 shows, people are far less willing to raise a topic like government surveillance online than in person, and less willing at work or community meetings than with family and close friends. That leads to a chilling effect where important perspectives vanish not because they're wrong, but because people are afraid.

Control Disguised as Safety

What’s particularly troubling is how seamlessly censorship blends into the user experience.

There are no red stamps saying “REJECTED BY THE GOVERNMENT.” Instead, your reach is reduced, your video is demonetized, your account is silently suppressed. This is what makes internet censorship so dangerous: it's invisible and you can't fight back against what you can't see.

As Zeynep Tufekci [7] writes, we've entered a "democracy-poisoning golden age of free speech," where everyone can speak but only a few can be heard. Platforms shape what we see, not through brute force, but through subtle filtering, nudges, and design.

And while platforms argue that algorithmic curation is about safety and personalization, it also means that different people are seeing entirely different versions of reality—a recipe for polarization.

So, Where Does That Leave Us?

There’s no easy answer. The internet isn’t the same chaotic frontier it was 20 years ago. The stakes are higher now. Bad actors can manipulate millions. Foreign governments can wage information wars. And public health can crumble under waves of misinformation. But we can’t give platforms and governments a blank check to decide what truth is. That path leads to control, not protection.

The way forward isn't to abandon moderation entirely. It’s to make it transparent, accountable, and fair. Platforms should disclose how decisions are made. Governments should narrowly define harmful speech and create independent review boards. And users should be given real appeal processes and explanations.

We need to talk about censorship not as a binary issue but as a spectrum. And we need to stop pretending that tech platforms are just "neutral tools." They're the gatekeepers now, and if we don’t hold them accountable, censorship won’t just protect society. It will quietly, effectively, and invisibly control it.




References:

[1] S. Vosoughi, D. Roy, and S. Aral, “The spread of true and false news online,” Science, vol. 359, no. 6380, pp. 1146–1151, Mar. 2018.

[2] S. Wilson et al., “Social media censorship and COVID-19 misinformation: Content removal and public trust,” EMBO Reports, vol. 21, no. 11, 2020.

[3] R. Binns, “Algorithmic accountability and public reasoning,” Philosophy & Technology, vol. 31, pp. 543–556, 2018.

[4] J. Ong and J. Cabañes, “Architects of networked disinformation,” Newton Tech for Dev Report, Feb. 2018.

[5] J. Rydzak, “Of blackouts and bandhs: The strategy and structure of disconnected protest in India,” Global Network Initiative, 2019.

[6] K. Hampton et al., “Social media and the ‘spiral of silence’,” Pew Research Center, Aug. 26, 2014.

[7] Z. Tufekci, “It’s the (democracy-poisoning) golden age of free speech,” WIRED, Jan. 16, 2018.

[8] Pew Research Center, “Most Americans favor restrictions on false information, violent content online,” Jul. 20, 2023.

[9] Mozilla Foundation, “YouTube Regrets: A crowdsourced investigation into harmful algorithmic recommendations,” 2021.

[10] United Nations, “Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression,” A/HRC/50/55, Jun. 2022.

[11] “Where Online Hate Speech Can Bring the Police to Your Door,” The New York Times, Sep. 2022.

[12] J. Vincent, “Twitter user sentenced to 150 hours of community service in UK for posting ‘grossly offensive’ tweet,” The Verge, Mar. 31, 2022.

[13] A. Orben and A. K. Przybylski, “The association between adolescent well-being and digital technology use,” Nature Human Behaviour, vol. 3, pp. 173–182, 2019.






1 comment:

  1. Really interesting post! It’s interesting to see that support for censorship has actually increased in recent years, and it’s fascinating that Americans are far less likely to discuss these issues online. I also think you show the damaging effects of online censorship very well, showing that it is often impossible to know what a person did wrong after they are censored. Well done!
