Art of FACELESS | February 19, 2026
Every major AI company ships its product with some version of the same disclaimer.
"This output may contain errors. Do not rely on this for medical, legal, or financial decisions. Always consult a qualified professional."
Fine. Except.
These are the same companies telling us their technology will transform healthcare. Accelerate scientific research. Replace junior lawyers. Augment doctors. Reshape education. Rewrite how knowledge work gets done.
You cannot do both.
You cannot sell a product as a reasoning system capable of transforming professional practice — and simultaneously disclaim responsibility for everything it produces.
The Calculator Doesn't Do This
My calculator doesn't come with a disclaimer. Neither does the analytical balance in a pharmaceutical laboratory. Both are validated, calibrated, and certified. The data they produce is legally defensible. If the balance is wrong, the manufacturer is accountable. That's what validation means.
Lab instruments used in regulated industries are certified to ISO standards (calibration traceable through ISO/IEC 17025-accredited laboratories, for instance) precisely because the output carries consequences. Someone makes a decision based on that number, so someone in the chain that produced it, the manufacturer or the calibration lab, is accountable for the number.
AI companies are currently selling consequences — your business will run better, your research will go faster, your decisions will be sharper — while refusing accountability for the output those consequences depend on.
That's not a minor inconsistency. That's the entire business model.
Pick One
The honest position is simple. Pick one of the following and commit to it:
Option A: Your output is validated, documentable, and carries a known, certified error rate. You accept product liability. You market accordingly.
Option B: Your output is probabilistic text generation with no guaranteed accuracy. You market accordingly — as a useful drafting tool, a starting point, a thinking aid. Not a replacement for validated professional judgement.
What you cannot do is market Option A and ship Option B with a disclaimer stapled to the bottom.
That's, well, a bit shit to be honest.
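For concreteness, here is a minimal sketch of what the "known, certified error rate" in Option A could mean in practice: audit a sample of outputs, count the failures, and publish a statistical upper bound on the true error rate instead of a vague disclaimer. The function name and the audit numbers below are hypothetical, and a real certification regime (like the ISO machinery above) involves far more than a confidence bound. The point is only that the arithmetic is not exotic.

import math

def certified_error_rate_bound(errors: int, trials: int, alpha: float = 0.05) -> float:
    # One-sided upper confidence bound on the true error rate, via
    # Hoeffding's inequality: with probability at least 1 - alpha, the
    # true rate is no higher than the observed rate plus
    # sqrt(ln(1/alpha) / (2 * trials)).
    observed = errors / trials
    margin = math.sqrt(math.log(1.0 / alpha) / (2.0 * trials))
    return min(1.0, observed + margin)

# Hypothetical audit: 120 bad outputs found in 10,000 sampled responses.
bound = certified_error_rate_bound(errors=120, trials=10_000, alpha=0.05)
print(f"With 95% confidence, the error rate is at most {bound:.2%}")

A vendor willing to publish and stand behind a number like that is making an Option A claim. A vendor unwilling to publish one is, by definition, shipping Option B.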
Why This Matters Now
AI agents are being deployed at scale into healthcare, finance, legal services, and critical infrastructure. Anthropic published research on this yesterday. The technology is moving faster than the accountability frameworks around it.
The disclaimer culture isn't caution. It's liability offloading. The companies capture the commercial upside of appearing to be authoritative intelligence systems. The user absorbs all the risk when it goes wrong.
That asymmetry needs to be named. And challenged.
This is just the beginning of that conversation.
Art of FACELESS | Cardiff | Est. 2010
Research infrastructure: artoffaceless.org