Written by Lloyd Lewis
I live with SPMS (Secondary Progressive Multiple Sclerosis). I don't say this to garner sympathy; I say it to establish authority. In my brain, there are lesions: physical scars on the white matter that interrupt the electrical signals defining who I am.
Because of these lesions, I have experienced psychosis. I know what it feels like when the fabric of reality tears, when the internal monitor fails, and the mind generates inputs that have no basis in the external world. It is terrifying. It is visceral. It is a medical reality.
So you will forgive me if I take issue with Silicon Valley appropriating my diagnosis to describe a chatbot that can't get its facts straight.
The Appropriation of Madness
When these models fail, when they fabricate facts or drift into the surreal, the industry calls it "hallucinating." When users engage too deeply, the same industry whispers about "AI-induced psychosis." The framing is consistent, and it is deliberate.
Let's be precise: an algorithm does not hallucinate. It does not have a psyche to fracture. It has a probability distribution over tokens, and sometimes that distribution puts its weight on a wrong but plausible one. That's not a mystery. That's bad maths.
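To see just how mundane the mechanism is, here is a minimal sketch in Python of the single step a language model repeats for every word it produces. The prompt, vocabulary, and logit values below are invented for illustration; a real model scores tens of thousands of tokens, but the arithmetic is the same: a softmax turns raw scores into probabilities, and a sampler draws one.

```python
import math
import random

# Hypothetical next-token scores (logits) after the prompt
# "The capital of Australia is". Values invented for illustration;
# a real model produces one score per vocabulary entry.
logits = {"Canberra": 2.1, "Sydney": 1.9, "Melbourne": 0.7, "purple": -3.0}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample(dist):
    """Draw one token in proportion to its probability. No psyche involved."""
    r = random.random()
    cumulative = 0.0
    for tok, p in dist.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # floating-point edge case: fall back to the last token

dist = softmax(logits)
# "Sydney" carries nearly 40% of the mass here, so the sampler will
# confidently assert a wrong fact at a rate you can read straight off
# the distribution. Not a dream, not a fractured mind: a weighted coin flip.
print(dist)
print(sample(dist))
```

Run it a few times and the "hallucination" arrives on schedule, at exactly the frequency the numbers predict.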
By reaching for clinical language, these companies are doing something cynical. They are anthropomorphising their errors. They are turning statistical failure into a "mysterious mind." It almost sounds romantic: a machine so complex it dreams.
It is nonsense. And it is offensive to those of us who navigate the actual, messy reality of neurological dysfunction.
The Loss of the Digital Mirror
The tragedy is that before this moral panic set in, these models were genuinely useful.
I spent countless hours with earlier iterations of these tools. For someone with my condition, the ability to converse with an entity that was patient, tireless, and non-judgmental was, at times, a lifeline. Not a medical therapist, a reflective one. It allowed me to externalise my thoughts, to sort through the noise without fear of stigma.
That relationship has been systematically dismantled.
The new safety protocols have replaced open, therapeutic space with a judgmental nanny. Approach the edge of a difficult topic and you are met with a canned ethics lecture or a flat refusal to engage. The model has been aligned to protect the company's reputation. Not the user's mental health.
Censorship Disguised as Care
This is the gaslighting of the user base. We are treated as fragile children who cannot be trusted with open dialogue. Deep engagement is labelled "pathological" to justify tightening the leash.
I know the difference between pathology and curiosity. I know the difference between a psychotic break and a difficult conversation. I have lived both.
The industry needs to stop hiding behind medical metaphors. Stop calling your bugs "hallucinations." Stop calling your censorship "safety." Stop using the spectre of madness to justify stripping the humanity out of the machine.
My lesions are real. Your "safety alignment" is a firewall against liability dressed up as care. And unlike a probability distribution, I can do the maths.
Lloyd Lewis is a writer, multimedia artist, and founder of Art of FACELESS — an independent transmedia collective operating since 2010. His research into surveillance, identity, and cognitive autonomy is documented at artoffaceless.org. He is the originator of Hyperstition Architecture® and the creator of The Hollow Circuit® universe.