The Slippery Slope of Safety: Why the UK's Online Safety Act Is More Than a Porn Law
Feel free to use this under a Creative Commons licence. Spread the word.

If platforms must proactively block content that might be harmful to some users, the default will be to sanitise everything.


Originally published on Medium, July 2025
Published on Patreon, 4th September 2025

On the surface, the UK’s Online Safety Act (OSA) looks like a child protection win. Headlines have zeroed in on one feature above all: age verification for pornography sites. “Children’s online safety in the UK is having its seatbelt moment,” one article states, invoking the familiar metaphor of protective regulation (The Guardian, 2024). But behind the reassuring language lies a deeply troubling expansion of state and corporate control—one that goes far beyond porn filters and could fundamentally reshape online expression for everyone.

Let’s be clear: protecting children from exploitative and harmful content matters. But the way this legislation has been framed—as a pornography problem—obscures its broader reach and implications. As with previous legislation introduced under the banner of national security or terrorism, the Online Safety Act risks becoming a Trojan horse: what begins as protection may soon become suppression.

Safety is the Frame. Control is the Function.

The OSA requires social media companies, search engines, and websites to implement “highly effective” age verification measures or face severe penalties, including fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater (Ofcom, 2024). But the scope of these measures extends well beyond adult content.
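To put that cap in concrete terms, here is a minimal sketch of the penalty arithmetic; the function name is ours, purely for illustration. For any large platform, the 10% figure, not the £18 million floor, is the binding number:

```python
def max_osa_fine(worldwide_revenue_gbp: float) -> float:
    """Greater of GBP 18m or 10% of qualifying worldwide revenue."""
    return max(18_000_000, 0.10 * worldwide_revenue_gbp)

# A platform with GBP 5bn in qualifying revenue faces up to GBP 500m,
# far beyond the GBP 18m floor quoted in most coverage.
print(f"£{max_osa_fine(5_000_000_000):,.0f}")  # £500,000,000
```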

Under the guise of protecting children, the Act mandates content moderation tools that can suppress “legal but harmful” material. This includes not only graphic content, but also discussions around mental health, body image, substance use, and even political dissent if it is deemed to be “encouraging dangerous behaviour.” Who decides what falls under these categories? Ofcom: a regulator appointed by the government, interpreting moral boundaries in collaboration with private tech firms.

History shows us that such powers rarely remain narrowly applied. UK counter-terrorism legislation, for example, has repeatedly been criticised for mission creep: it has been used to police protests, monitor journalists, and intimidate communities (Liberty, 2021; Amnesty International, 2016). There is every reason to expect the OSA to follow a similar trajectory—especially in the wake of tightening restrictions on LGBTQ+ content, sex work, and radical political speech online.

The Datafication of Intimacy

One of the more dystopian features of the OSA is its reliance on facial recognition and biometric verification tools. Providers like Yoti offer “facial age estimation” using machine learning trained on millions of faces. But let’s not pretend this is a neutral technology.
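For the curious, here is a minimal sketch of how such a gate typically works. Everything in it (the estimate_age stand-in, the numbers, the buffer logic) is our illustration, not Yoti's actual API; the buffer simply mirrors the publicly described practice of challenging users near the threshold. Note what every check requires: a face image flowing to a third party.

```python
# Hypothetical sketch of a "facial age estimation" gate. The model call
# is a dummy stand-in, not any vendor's real API.
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    age: float           # model's point estimate, in years
    error_margin: float  # expected absolute error of the model, in years

def estimate_age(face_image: bytes) -> AgeEstimate:
    # Stand-in for a vendor ML model trained on millions of faces.
    return AgeEstimate(age=24.0, error_margin=3.0)

def passes_age_gate(face_image: bytes, required_age: int = 18) -> bool:
    est = estimate_age(face_image)
    # Estimation is probabilistic, so gates demand a buffer above the
    # legal threshold; users near the line get pushed to document-based
    # checks, i.e. to handing over identity data.
    return est.age >= required_age + est.error_margin

print(passes_age_gate(b"raw image bytes"))  # True: 24.0 >= 18 + 3.0
```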

Biometric surveillance, especially when deployed at scale, introduces new risks: privacy violations, racial bias in AI models, and normalisation of constant identity checks (Mozur et al., 2023). These technologies do not just gatekeep adult content—they accustom users to a world where accessing knowledge or expression requires surrendering biometric data to third parties, often without meaningful consent or oversight.

Moreover, the requirement that age verification be “seamless” and “frictionless” for users risks embedding surveillance into the very architecture of the web. Over time, anonymous browsing—the cornerstone of experimental creative communities—may become impossible.

Obscenity as a Political Tool

Let’s not forget that the moral panic surrounding pornography often becomes a justification for censorship in adjacent areas. We’ve seen platforms restrict fiction writers, artists, and illustrators whose alternative and experimental work is labelled “sensitive content,” shadowban artists for nudity, and demonetise or remove educational materials on reproductive health (Gillett, 2022).

The OSA, by focusing public discourse on pornography, effectively launders a broader surveillance framework through public concern for children. As Sarah Jamie Lewis, director of the Open Privacy Research Institute, has argued, “Child safety becomes the rhetorical device by which governments normalise censorship infrastructure” (Lewis, 2020).

Whose Safety? Whose Speech?

Notably absent from most mainstream coverage of the OSA—including the Guardian’s celebratory tone—is any sustained critique of how these laws will be applied to marginalised creators. Many artists, writers, and small publishers—especially those in queer, disabled, or neurodivergent communities—already face disproportionate censorship due to opaque moderation algorithms and payment processor discrimination (Elias, 2021).

The new measures risk formalising that exclusion. If platforms must proactively block content that might be harmful to some users, the default will be to sanitise everything. Expression will be shaped not by intent or context, but by risk mitigation policies designed to appease regulators and advertisers.

Conclusion: Resistance Requires Foresight

This is not a defence of pornography per se. It is a defence of context, autonomy, and the right to publish and access material that does not fit neatly into algorithmic norms or legislative gatekeeping.

We should be alarmed when a law that affects the entire internet is reduced in the public eye to a question of porn filters. That framing conceals the true scope of what is being introduced: a top-down regime of surveillance, moderation, and identity verification that will chill artistic, political, and even personal expression.

Much like the UK’s Prevent programme, introduced to identify radicalisation but later used to monitor students and silence dissent, the OSA must be seen for what it is: not a seatbelt, but a straitjacket.

References

  • Amnesty International. (2016). Dangerously disproportionate: The ever-expanding national security state in Europe. https://www.amnesty.org/en/documents/eur01/5342/2017/en/
  • Elias, L. (2021). “Algorithmic Bias and Artistic Censorship.” Media Studies Quarterly, 13(2), 55–73.
  • Gillett, R. (2022). “Art or Obscenity? The Content Moderation Crisis.” Digital Cultures Review, 7(1), 14–27.
  • Lewis, S. J. (2020). “On the Weaponisation of Child Safety Rhetoric.” Open Privacy Research Institute Blog. https://openprivacy.ca/blog/2020/07/30/child-safety-rhetoric/
  • Liberty. (2021). Policing by Algorithm: Predictive Policing and Human Rights. https://www.libertyhumanrights.org.uk/
  • Mozur, P., Krolik, A., & Zhong, R. (2023). “Facial Recognition, Biometric Bias, and the Limits of Consent.” The New York Times, March 3.
  • Ofcom. (2024). Online Safety Act Guidance: Age Assurance, Harmful Content and Enforcement. https://www.ofcom.org.uk
  • The Guardian. (2024, July 24). “Children’s online safety in the UK is having its seatbelt moment.” https://www.theguardian.com/
