AIs Don't Shit. That Matters.
- owenwhite
- Oct 19
- 8 min read
Updated: Nov 9

Why the machines will never be truly intelligent in a humanly meaningful sense – no matter how clever they get.
1. The Cult of Computation
There’s a new kind of priesthood rising from the glass towers of Silicon Valley. They wear hoodies instead of robes, speak in code rather than Latin, and promise a form of salvation: Artificial General Intelligence.
Sam Altman, Geoffrey Hinton, Demis Hassabis, Ray Kurzweil – these men are the prophets of a new faith. They believe that intelligence can be abstracted, digitised, and scaled. That consciousness can be uploaded, replicated, improved. That the messy, meaty business of being human can be transcended through data and computation.
In this worldview, the mind is just software and the body merely outdated hardware. Human frailties – hunger, fatigue, emotion, mortality – are bugs in the code. And soon, they tell us, the code will be rewritten.
It’s an intoxicating vision. It borrows the glamour of science, the prestige of progress, the utopian shimmer of a world liberated from suffering.
But there’s a rather pungent fact missing from this dream.
AIs don’t shit.
And that, believe it or not, changes everything.
Because the fact that you and I do shit – that we are embodied, vulnerable creatures dependent on the world around us – isn’t some incidental detail about being human. It’s the foundation of everything that makes us intelligent, wise, and capable of meaning.
You can build a machine that knows everything about digestion, but you can’t build one that knows what it’s like to have bowels. You can teach an algorithm the chemistry of tears, but it won’t ever know what it means to cry.
Human intelligence begins in the body. AI begins nowhere.
⸻
2. The Dream of Disembodied Mind
To understand why this matters, it’s worth looking at how we got here.
The belief that intelligence can be detached from the body has deep roots. It runs back through Silicon Valley to the Scientific Revolution — to Descartes, Galileo, and Newton — when Western thought first split the world into two domains: mind and matter.
Mind was the realm of reason, logic, calculation. Matter was everything else — the world of sensation, emotion, and decay. The great dream of modernity was to purify the mind from the mess of the body, to build a clean, objective, disembodied intelligence that could stand outside the world and understand it perfectly.
AI is the 21st-century offspring of that dream. It’s the purest expression yet of the idea that intelligence is about thinking rather than being.
For the tech evangelists, there is nothing that cannot be replicated if you have enough data and compute.
A child’s curiosity? Just pattern recognition.
Empathy? A large enough emotional dataset.
Love? Neural correlates waiting to be modelled.
Death? Merely a technical glitch awaiting repair.
In this cosmology, “knowing how” – the practical, embodied, context-sensitive knowledge that governs real life – is ultimately reducible to “knowing that.” If you can gather enough facts, you can simulate the feeling. If you can simulate the feeling, you can replace the experience.
Give the machine enough data and processing power, the argument goes, and eventually it will know what it’s like to be human.
But that’s where the fantasy curdles. Because knowing about something isn’t the same as knowing it. And experience can’t be compiled.
⸻
3. The View from the Ground
You don’t need a PhD in philosophy to understand this. You just need a body.
Watch a toddler learning to walk — the wobbling balance, the triumph of a single step, the tumble that follows. That’s intelligence in motion. It’s learning born from failure, coordination, sensation, and care.
Or think of a carpenter, hands roughened by years of practice, sensing grain and weight through touch. No algorithm can do what those hands can do, because the knowledge isn’t in the head alone. It’s in the muscles, the tendons, the eyes that see and feel at once.
It’s the same for a good nurse, who can read pain before it’s spoken, or a teacher who senses confusion in the hush of a classroom. These are forms of knowing that live in the body and in relationship with others.
You can’t get them by sitting outside the world; you get them by being in it.
But the culture that builds AI lives outside. Its luminaries live in the sterile glow of screens, abstracted from the tactile reality that grounds understanding. They inhabit what the philosopher Thomas Nagel called “the view from nowhere” — the detached, disembodied stance that Hubert Dreyfus spent his career arguing could never add up to intelligence.
It’s why, for all their brilliance, many of the Valley’s great minds seem oddly unworldly. They can code a universe but struggle to read the room. They can model cognition but not conversation. They dream of superintelligence while failing, daily, to understand ordinary life.
⸻
4. Knowing How, Not Just Knowing That
Philosophers have long seen the distinction the technologists fail to grasp.
Gilbert Ryle called it the difference between “knowing that” and “knowing how.”
“Knowing that” is propositional – a storehouse of facts and data. “Knowing how” is practical – the embodied skill of applying those facts in the flow of life.
A pianist doesn’t “know that” the A above middle C vibrates at 440 hertz; she “knows how” to make it sing. A comic doesn’t “know that” surprise creates laughter; he “knows how” to time a pause so it does.
AI, by contrast, is trapped in the realm of “knowing that.” It can process vast amounts of information, identify patterns, generate plausible answers — but it doesn’t live those answers. It doesn’t inhabit them.
It can tell you how to build a campfire but not what it feels like to sit beside one. It can write a sonnet about grief but not ache.
And that’s why the boosterish claim that “knowing how will follow from enough knowing that” is a dead end. It mistakes the map for the territory, the menu for the meal.
You can feed a machine every cookbook ever written, and it still won’t know what anything tastes like.
⸻
5. The Grand Illusion
Still, the believers march on.
To them, human intelligence is a primitive prototype; AI is version 2.0.
Each new model – GPT-5, Gemini, Claude – is hailed as another leap toward the singularity, the moment when machines become not just clever but conscious.
It’s a seductive story because it flatters our own illusions. It suggests that what makes us special is our computational power — our ability to think fast and solve problems. And if that’s all intelligence is, then yes, the machines are catching up.
But intelligence, as anyone who’s lived a real life knows, isn’t just problem-solving. It’s discernment, empathy, intuition, moral imagination. It’s the ability to feel the texture of a situation and act wisely within it.
That kind of intelligence doesn’t live in code. It lives in the body, in the gut, in the eyes that meet another’s and sense unease.
AI’s great trick is to mimic this — to simulate understanding so convincingly that we start to mistake simulation for comprehension. But behind the fluent sentences there’s nobody home. There’s no consciousness, no context, no “there” there.
It’s the world’s most brilliant impression of intelligence — and it’s being taken far too seriously by people who should know better.
⸻
6. Complicated vs Complex
This is the other thing the boosters miss: the difference between complicated systems and complex ones. Complicated systems, like engines or chessboards, obey clear rules. They can be solved. They’re the natural domain of machines.
Complex systems, like families, ecosystems, or societies, are different. They change as you interact with them. They’re unpredictable, emergent, deeply contextual. You don’t solve them; you navigate them.
Human life is a complex system par excellence. It’s full of ambiguity, contradiction, and emotion. It requires judgment, humility, moral courage — all qualities that arise from experience, not computation.
AI can thrive in the complicated world of rules and data, but it will always flounder in the complex world of relationships and meaning.
⸻
7. The Nowhereness of Machine Minds
And this, ultimately, is the strangest thing about AI: its nowhereness.
Human beings are always somewhere — situated in time and space, born into culture, history, and flesh. Our intelligence is shaped by that context.
AI is nowhere. It exists in data centres humming in the desert, trained on the debris of human language scraped from the web. It has no home, no body, no childhood, no death. It floats above the world like a ghost of reason, clever but clueless.
That’s why it can imitate almost anything and inhabit nothing.
Why it can write poems about love but never fall in it.
Why it can predict what empathy sounds like but never feel it.
To be human is to be located — to live in a body that sweats, aches, and eventually gives up. To be AI is to be placeless, painless, and consequence-free.
And that absence of consequence is the giveaway.
When a surgeon makes a mistake, they feel it. When an AI does, it simply updates its model. When a friend lies, they feel shame. When an AI hallucinates, it just outputs another token.
Without the capacity for consequence, there can be no morality, no growth, no wisdom. Intelligence without consequence is not superintelligence. It’s idiocy at scale.
⸻
8. The Human Condition, Revisited
None of this means AI isn’t extraordinary.
It is. It’s already transforming research, medicine, education, art. It’s the most powerful cognitive tool we’ve ever built.
But it’s still a tool.
It can help us write essays, design vaccines, predict protein folds. It can amplify our cleverness. But it can’t replace the forms of intelligence that give life its meaning — empathy, judgment, love, courage, and care.
It can help us manage the world, but not understand it.
And understanding, for humans, is never just cognitive. It’s moral and emotional and relational. It’s a way of being in the world.
That’s what the AI luminaries miss when they speak of “superintelligence.” They mistake intellect for wisdom, calculation for consciousness. They see the human body as a liability rather than the source of everything that makes us human.
They’re trying to build minds without bodies — and in doing so, they reveal how little they understand of either.
⸻
9. The Final Joke
So let’s return to that crude but clarifying truth: AIs don’t shit.
It’s funny, yes. But it’s also profound.
Because to shit is to belong to the world — to depend on it, to be entangled with it, to need it to keep you alive.
To shit is to have skin in the game.
It’s to be implicated in the cycle of life and death, creation and decay. It’s to be a creature rather than a god.
AI can simulate everything except that.
It can pretend to understand, to empathise, to be wise. But it cannot be any of those things because it cannot be. It only processes.
And that’s why, for all its brilliance, AI remains strangely hollow — a genius without presence, a brain without a body, a mind without a world.
When the next tech luminary declares that Artificial General Intelligence is just around the corner, remember this: the corner they’re turning leads nowhere.
AI will change the world, yes. It will revolutionise industries, accelerate knowledge, and challenge our sense of self. But it will never join the party of human life. It will never dance, or weep, or smell rain on the pavement. It will never know the relief of shitting after a long journey, or the humility that comes from realising you too are made of meat and mud.
That’s the difference that matters.
Because intelligence isn’t just the ability to think. It’s the ability to live.
And until the machines can do that — until they can laugh and cry and love and lose and shit — they’ll remain what they are: extraordinary tools, but nowhere near the heart of what it means to be alive.
So yes: AIs don’t shit.
That matters.
Pay attention.