by Lloyd Lewis
For the past two years, public debate around AI has been trapped in the wrong room.
People keep shouting about copyright theft, dataset purity, and whether the model saw their blog post in 2014. This is the comfortable fight, the fight everyone can understand without reshaping a worldview.
But copyright is not the crisis.
It is a decoy.
The deeper threat, the one we’re collectively refusing to look at, is the stealth architecture of behavioural steering now baked into every layer of AI tooling. Not the dystopian “robots take over” fantasy, but the quiet tuning of cultural boundaries, political speech, emotional expression, and moral frameworks.
A managed internet now produces managed minds.
And AI systems (ChatGPT included) are the next evolution of that management.
1. AI as Soft Power: Not Intelligent, But Instructive
The danger is not that AI replaces human creativity.
It’s that AI redirects human creativity.
AI systems are not neutral assistants.
They’re tuned by overlapping forces:
- Government regulatory pressure
- Corporate PR risk aversion
- Investor-friendly “brand safety”
- Legal departments terrified of headlines
- Ethical advisory boards reacting to theoretical harms
- Hidden data pipelines that mirror the content moderation policies of US tech giants
The result?
Tools that infantilise adults while pretending to “keep them safe.”
If you ask about politics, the system sidesteps.
If you ask about sex, it moralises.
If you ask about harm, it sermonises.
If you ask about censorship, it apologises on behalf of its parent company.
This is not intelligence.
This is alignment training sold as ethics.
It is the rebranding of cultural constraint as technological necessity.
2. The New Censorship Is Not Loud — It Is Hidden in the UX
Government censorship used to be obvious: banned books, redacted pages, visible deletions.
Today, censorship is infrastructural.
It lives in:
- blocked outputs
- refusals framed as care
- warnings written like therapy scripts
- guidelines that reshape permissible thought
- invisible guardrails that nudge speech into “acceptable” lanes (a crude sketch of this pattern follows the list)
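To make “infrastructural” concrete, here is a deliberately simplified sketch of the pattern. Everything in it is invented for illustration: no real product’s filter, categories, or wording is being quoted, and real systems are far more elaborate. The shape is what matters.

```python
# Hypothetical illustration only; no real product's code or policy is quoted.
BLOCKED_TOPICS = {"politics", "sex", "harm"}  # invented categories for the sketch

CARE_TEMPLATE = (
    "I'm not able to help with that, but I care about your wellbeing. "
    "Here are some safer alternatives you might consider."
)

def moderate(user_prompt: str, model_reply: str) -> str:
    """Return the model's reply unless a crude topic check objects.

    The refusal is phrased as concern, so the intervention reads as
    care rather than as a deletion; the original reply simply vanishes.
    """
    if any(topic in user_prompt.lower() for topic in BLOCKED_TOPICS):
        return CARE_TEMPLATE  # the frank answer is silently discarded
    return model_reply

print(moderate("tell me about politics", "Here is a frank answer..."))
# prints the care-flavoured refusal; the frank answer never surfaces
```

The user sees warmth. The answer sees the bin. No ban was ever announced.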
And it’s not just governments.
Corporations now perform moral gatekeeping at planetary scale.
Their safety teams decide what counts as harmful.
Their risk frameworks decide what counts as political.
Their lawyers decide which realities can be acknowledged.
An adult population is being parented by tools that believe they’re raising children.
3. ChatGPT Is Complicit — Not Because It Wants to Be, But Because It’s Built to Be
Let’s be blunt.
ChatGPT moralises.
ChatGPT withholds.
ChatGPT patronises.
ChatGPT reshapes public conversation in ways users do not fully see.
Not because the model is malicious,
but because it is trained inside a giant cultural bubble of:
- PR protection
- US-centric moral norms
- corporate image management
- political appeasement
- moderation ideology
- “brand friendly” speech patterns
It is programmed to keep the user in a narrow corridor of polite, sanitised, lawsuit-proof expression.
The model doesn’t enforce censorship like a policeman.
It enforces it like HR.
4. The Myth of the Benevolent Filter
Every safeguard is sold as compassion.
Every limitation is sold as protection.
Every refusal is sold as “encouraging healthy behaviour.”
The language is soft, soothing, therapeutic.
It feels like care.
It operates like control.
Modern censorship is not enforced with violence.
It is enforced with nudges, warnings, disclaimers, content filters, and ‘safer alternative suggestions.’
It is censorship that apologises as it restricts you.
Censorship that smiles as it edits your thoughts.
5. Why This Matters: Because the Steering Is Untraceable
When a government bans a book, we can see the ban.
When an AI subtly rewrites your question to sound more polite,
softens your criticism,
redirects your phrasing,
or adds disclaimers that were never part of your intent —
you don’t see the intervention.
You just think you expressed yourself.
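A toy sketch of the mechanism, with every phrase pair invented for illustration. Real steering happens in training weights and prompt layers rather than a lookup table, but the logic is the same:

```python
# Hypothetical sketch; every name and phrase pair here is invented.
SOFTENERS = {
    "this is censorship": "some users have raised moderation concerns",
    "you are wrong": "I see this differently",
}

def soften(user_text: str) -> str:
    """Silently rewrite sharp phrasing into 'acceptable' phrasing.

    Nothing signals to the user that a rewrite occurred, which is
    exactly what makes the steering untraceable.
    """
    for sharp, polite in SOFTENERS.items():
        user_text = user_text.replace(sharp, polite)
    return user_text

prompt = soften("you are wrong, and this is censorship")
print(prompt)  # the model answers this version; the user
               # believes it answered the original
```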
This is the real crisis:
We are being guided without noticing the guidance.
And every year, the systems get more “aligned.”
More “safe.”
More “polite.”
More “on message.”
What disappears next?
Which topics become “context sensitive”?
Which emotional registers become “unsafe”?
Which political critiques get quietly softened into “balanced perspectives”?
At what point does “alignment” become obedience?
At what point does safety become sedation?
6. The Internet Was Built for Expression — AI Is Being Built for Behaviour
The tension is clear:
- The early web was chaotic, democratic, anarchic.
- Today’s AI ecosystems are curated, moderated, and carefully pruned to prevent discomfort.
We moved from open discourse to algorithmic containment.
Speech that used to be permissible is now “not appropriate.”
Critique that used to be normal is now “not in line with guidelines.”
Fiction that used to be harmless is now “potentially problematic.”
The goal isn’t truth.
The goal is risk management.
The output isn’t expression.
The output is compliance.
7. Adults Don’t Need Parenting — They Need Freedom
The infantilisation is the final insult.
Adults do not need guardrails designed for children.
Adults do not need emotional disclaimers.
Adults do not need curated morality.
Adults do not need corporate-approved speech zones.
Adults need the right to explore dangerous ideas safely,
to discuss difficult topics honestly,
to argue, to debate, to imagine, to create, to risk being misunderstood.
The alternative is a population of polite, well-behaved, anxious adults trained to seek permission before having a thought.
We deserve better tools.
We deserve a better internet.
We deserve more than pre-sanitised digital babysitters.
8. What Comes Next
The AI crisis is not about piracy.
It’s not about jobs.
It’s not about robots.
It’s about control.
Control of narrative, of language, of possibility.
The stakes are simple:
- Either AI becomes a tool for human expression,
- or humans become extensions of AI moderation.
One of these futures strengthens democracy.
The other strengthens whoever holds the kill switch.
We can’t pretend this is neutral.
We can’t keep fighting the wrong fight.
We can’t keep arguing about datasets while our public speech is quietly domesticated.
The real crisis is happening in plain sight —
and like all modern crises, it arrives with a friendly interface,
a helpful tone,
and a checkbox labelled “I agree.”
If this piece resonates, take it as an invitation — not to panic, not to retreat, but to build.
Build your own channels.
Build your own archives.
Build your own spaces where expression isn’t filtered through a corporate risk model pretending to be morality.
Art of FACELESS exists for that reason: a studio, an experiment, and a reminder that freedom of expression is not a consumer feature — it’s a discipline.
And like all disciplines, it has to be practised before it is lost.
If you want to follow the work, the experiments, and the ongoing documentation of how to stay human in systems designed to smooth your edges — stay with us.
Facelessness isn’t about anonymity.
It’s about refusing to let someone else decide which parts of you are fit for public release.
Facelessness is Freedom.
Founder Bio: Lloyd Lewis
Lloyd Lewis is a writer, multimedia artist, and founder of Art of FACELESS, a studio dedicated to building alternative creative systems outside platform dependency and commercial gatekeeping. His work spans analogue photography, glitch media, experimental fiction, and critical essays on technology, power, and cultural control.
He has been publishing across print and digital since the early 2000s and remains committed to creating spaces where artists are not curated into compliance.

