The Slippery Slope of Safety: Why the UK's Online Safety Act Is More Than a Porn Law
If platforms must proactively block content that might be harmful to some users, the default will be to sanitise everything.

Originally published on Medium, July 2025
Published on Patreon, 4th September 2025
On the surface, the UK’s Online Safety Act (OSA) looks like a child protection win. Headlines have zeroed in on one feature above all: age verification for pornography sites. “Children’s online safety in the UK is having its seatbelt moment,” one article states, invoking the familiar metaphor of protective regulation (The Guardian, 2024). But behind the reassuring language lies a deeply troubling expansion of state and corporate control—one that goes far beyond porn filters and could fundamentally reshape online expression for everyone.
Let’s be clear: protecting children from exploitative and harmful content matters. But the way this legislation has been framed—as a pornography problem—obscures its broader reach and implications. As with previous legislation introduced under the banner of national security or terrorism, the Online Safety Act risks becoming a Trojan horse: what begins as protection may soon become suppression.
Safety is the Frame. Control is the Function.
The OSA requires social media companies, search engines, and websites to implement “highly effective” age verification measures or face severe penalties, including fines of up to £18 million or 10% of global revenue, whichever is greater (Ofcom, 2024). But the scope of these measures extends well beyond adult content.
Under the guise of protecting children, the Act mandates content moderation tools that can suppress “legal but harmful” material. This includes not only graphic content, but also discussions around mental health, body image, substance use, and even political dissent if deemed “encouraging dangerous behaviour.” Who decides what falls under these categories? Ofcom. A regulator appointed by the government, interpreting moral boundaries in collaboration with private tech firms.
History shows us that such powers rarely remain narrowly applied. UK counter-terrorism legislation, for example, has repeatedly been criticised for mission creep, used to police protests, monitor journalists, and intimidate communities (Liberty, 2021; Amnesty International, 2016). There is every reason to expect the OSA to follow a similar trajectory—especially in the wake of tightening restrictions on LGBTQ+ content, sex work, and radical political speech online.
The Datafication of Intimacy
One of the more dystopian features of the OSA is its reliance on facial recognition and biometric verification tools. Providers like Yoti offer “facial age estimation” using machine learning trained on millions of faces. But let’s not pretend this is a neutral technology.
Biometric surveillance, especially when deployed at scale, introduces new risks: privacy violations, racial bias in AI models, and normalisation of constant identity checks (Mozur et al., 2023). These technologies do not just gatekeep adult content—they accustom users to a world where accessing knowledge or expression requires surrendering biometric data to third parties, often without meaningful consent or oversight.
Moreover, the requirement that age verification be “seamless” and “frictionless” for users risks embedding surveillance into the very architecture of the web. Over time, anonymous browsing—the cornerstone of experimental creative communities—may become impossible.
Obscenity as a Political Tool
Let’s not forget that the moral panic surrounding pornography often becomes a justification for censorship in adjacent areas. We’ve seen platforms restrict alternative and experimental fiction writers, artists, and illustrators under the banner of “sensitive content,” shadowban artists for nudity, and demonetise or remove educational materials on reproductive health (Gillett, 2022).
The OSA, by focusing public discourse on pornography, effectively launders a broader surveillance framework through public concern for children. As Sarah Jamie Lewis, executive director of the Open Privacy Research Society, has argued, “Child safety becomes the rhetorical device by which governments normalise censorship infrastructure” (Lewis, 2020).
Whose Safety? Whose Speech?
Notably absent from most mainstream coverage of the OSA—including the Guardian’s celebratory tone—is any sustained critique of how these laws will be applied to marginalised creators. Many artists, writers, and small publishers—especially those in the queer, disabled, or neurodivergent communities—already face disproportionate censorship due to opaque moderation algorithms and payment processor discrimination (Elias, 2021).
The new measures risk formalising that exclusion. If platforms must proactively block content that might be harmful to some users, the default will be to sanitise everything. Expression will be shaped not by intent or context, but by risk mitigation policies designed to appease regulators and advertisers.
Conclusion: Resistance Requires Foresight
This is not a defence of pornography per se. It is a defence of context, autonomy, and the right to publish and access material that does not fit neatly into algorithmic norms or legislative gatekeeping.
We should be alarmed when a law that affects the entire internet is reduced in the public eye to a question of porn filters. That framing conceals the true scope of what is being introduced: a top-down regime of surveillance, moderation, and identity verification that will chill artistic, political, and even personal expression.
Much like the UK’s Prevent programme, introduced to identify radicalisation but later used to monitor students and silence dissent, the OSA must be seen for what it is: not a seatbelt, but a straitjacket.
References
- Amnesty International. (2016). Dangerously disproportionate: The ever-expanding national security state in Europe. https://www.amnesty.org/en/documents/eur01/5342/2017/en/
- Elias, L. (2021). “Algorithmic Bias and Artistic Censorship.” Media Studies Quarterly, 13(2), 55–73.
- Gillett, R. (2022). “Art or Obscenity? The Content Moderation Crisis.” Digital Cultures Review, 7(1), 14–27.
- Lewis, S.J. (2020). “On the Weaponisation of Child Safety Rhetoric.” Open Privacy Research Society Blog. https://openprivacy.ca/blog/2020/07/30/child-safety-rhetoric/
- Liberty. (2021). Policing by Algorithm: Predictive Policing and Human Rights. https://www.libertyhumanrights.org.uk/
- Mozur, P., Krolik, A., & Zhong, R. (2023). “Facial Recognition, Biometric Bias, and the Limits of Consent.” The New York Times, March 3.
- Ofcom. (2024). Online Safety Act Guidance: Age Assurance, Harmful Content and Enforcement. https://www.ofcom.org.uk
- The Guardian. (2024, July 24). “Children’s online safety in the UK is having its seatbelt moment.” https://www.theguardian.com/