The Simulation Trap: Why AI's Failure to "Understand" Matters
- owenwhite
- Dec 27, 2024
- 15 min read

PART I: THE GREAT PROMISE—AND ITS SHADOW
It was March 2016 in downtown Seoul when the world first heard the news: a computer program had beaten one of humanity’s top players at the ancient game of Go. In a hushed hotel arena, Lee Sedol—widely regarded as one of the greatest Go masters—was locked in a match with AlphaGo, a system created by London-based Google DeepMind. Though the match was taking place thousands of miles from Silicon Valley, the tech world’s gaze sharpened as if the drama were unfolding right there on Sand Hill Road. If a machine could outplay a top champion in a game of such formidable complexity, many wondered what else lay on the horizon.
From a distance, you can’t help but admire the scale of this accomplishment. AI systems today can drive cars, generate art, compose music, and even write essays that mimic human style. Companies like OpenAI offer large language models so attuned to subtle linguistic cues that it’s easy to assume they truly understand every word. Such performance has led AI boosters—investors, entrepreneurs, and even some scientists—to proclaim the imminent arrival of machine intelligence. Sam Altman, Ray Kurzweil, and other luminaries tell us that given enough time, AI will replicate (and quite possibly surpass) our capacity to reason, empathise, and understand the world.
But amid this technophilic exuberance, a profound question nags: Is this really intelligence? More pointedly, what does it mean for something to “understand”? A computer program can be extremely good at appearing to know what it’s talking about. But appearance isn’t the same as reality. Indeed, the story of AI’s development—from Marvin Minsky’s early conviction that intelligence resides in rule-following, to Geoffrey Hinton’s belief that ever more sophisticated neural networks model how the brain really works—contains a thread of caution that’s often overlooked.
Hubert Dreyfus, a philosopher who famously tangled with AI pioneers in the 1960s, argued that genuine human intelligence is bound up with experience, embodiment, and context. He was an early critic of Minsky’s approach, pointing out that we don’t simply parse the world through discrete rules. Rather, we navigate it through an intangible matrix of personal history, cultural norms, bodily sensations, and unspoken assumptions. Dreyfus drew heavily on Martin Heidegger, whose account of being-in-the-world is fundamentally non-computational. We don’t stand outside of reality, applying formulas or extracting patterns. As humans, we’re immersed in it.
Over the decades, AI research pivoted. Rule-based systems—the old “expert systems” of the 1970s and 80s—gave way to data-driven statistical models. Neural networks, championed by Hinton and others, now reign supreme. Instead of following explicit instructions, these models “learn” from massive datasets. They excel at recognising patterns far too subtle for human eyes. This new approach has fuelled breathtaking advances, including language models that can churn out reams of text in seconds, drawing on patterns gleaned from billions of words scraped off the internet. Yet the fundamental critique remains. Even if an AI can produce text that looks mindful or empathic, it does so without any embodied reference to the world. It has no childhood, no body, no sense of pain, no emotional heartbeat that ties experience into a living tapestry of meaning.
To many boosters, this gap is merely temporary. “Just wait,” they say. “Give it more processing power, more data, more sophisticated algorithms. In time, these systems will truly understand.” But there’s a growing chorus of experts who beg to differ. They argue that no amount of data or advanced computing can replicate the qualitative nature of human consciousness. There’s a vital distinction between knowing about empathy, for instance, and feeling empathy. One is a matter of gleaning patterns; the other is a deeply personal phenomenon—something no disembodied AI can claim, no matter how many words it processes.
This gap matters, deeply. At stake is not only our philosophical conception of the mind but the very trajectory of our technological future. If we treat advanced language models as if they truly understand the world, we risk ceding power to simulacra. Worse, we become complacent about how these systems can mislead, manipulate, or accelerate social problems—much like social media did in the last decade, but at a scale and pace unimaginable before. For those who believe in the inevitability of AI dominance, these dangers are minor footnotes. For the rest of us, they are flashing red lights on the control panel of progress, warning us to look beyond the shiny façade and to question what it really means for a machine to “think.”
PART II: APPEARANCE VS. REALITY—THE UNDERSTANDING GAP
Imagine you’re studying a map of your hometown. The map is detailed: every street, every corner shop, every park meticulously laid out. You could learn a great deal from this map about where certain things are located or how to navigate from one point to another. But no matter how thorough or accurate it is, the map is not the territory. It doesn’t capture the smells of the bakery on the corner, the warmth of the sun on your face as you sit in the park, or the memories you have of the pizza place you used to visit with your friends.
AI, in its current forms, is like the most elaborate map you can imagine. A large language model can ingest countless texts, building a massive statistical representation of word associations. It’s an amazing feat of engineering, no doubt. But there’s a crucial understanding gap between these representations and the direct, lived reality they aim to depict.
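To make the map-and-territory point concrete, here is a deliberately toy sketch (the tiny corpus and the bigram approach are my own, purely for illustration; real language models are neural networks trained on billions of words, not raw co-occurrence counts). Crude as it is, the shape of the exercise is the same: the program only ever handles statistics about which words tend to follow which.

```python
# A toy "language model": predict the next word purely from co-occurrence counts.
# Modern LLMs are vastly more sophisticated, but the point stands: the model only
# ever manipulates statistics about words, never the things the words refer to.
from collections import Counter, defaultdict
import random

corpus = (
    "the sun warms the park "
    "the bakery on the corner smells of bread "
    "the park is quiet in the morning"
).split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample a likely next word from the observed associations."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

print(predict_next("the"))  # e.g. "park" or "bakery": an association, not an experience
```

Scale that table up by many orders of magnitude and you get something far more fluent, but its world is still a table of word statistics. The smell of the bakery never enters into it.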
Yes, AI can collate enough data to appear deeply knowledgeable. Ask a generative model about the emotional nuance in Tolstoy’s Anna Karenina, and it might deliver a polished analysis, at times even enlightening. Still, it’s a simulation of literary insight, not a genuine, experienced understanding. It doesn’t love or despair. It doesn’t sense the weight of heartbreak or the complexities of 19th-century Russian society the way a human immersed in that story might.
“But look how far we’ve come,” AI’s defenders insist. “We used to have clunky expert systems that spat out lifeless responses. Now GPT-style models generate text that feels so human!” Yet that sense of humanness is precisely the trap. We see a reflection of ourselves in these outputs and assume there’s a self behind them—an entity that truly understands the world. But this is the “Eliza effect” on steroids: our tendency to attribute human-like qualities to anything that mimics them.
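The original ELIZA, Joseph Weizenbaum’s 1966 chatbot, ran on little more than pattern matching and canned reflections, yet people confided in it as though it understood them. A minimal sketch of the same trick (the rules below are my own simplification, not Weizenbaum’s actual script) shows how little machinery it takes to trigger that attribution:

```python
# A few ELIZA-style rules: surface pattern matching that can feel attentive
# even though nothing behind it comprehends a single word.
import re

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "What makes you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a canned reflection if any pattern matches; otherwise a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I am worried about my exams"))
# "Why do you say you are worried about my exams?" (it cannot even swap the pronoun)
```

If three regular expressions can produce the feeling of being heard, it is not hard to see why a model trained on most of the internet produces the feeling of being understood.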
Part of this confusion stems from a long-standing emphasis in AI on performance rather than genuine cognition. The famous Turing Test measures whether a machine’s responses can fool a human judge into believing it’s human. While revolutionary for its time—shifting the question from “Can machines think?” to “Can they act as though they think?”—it inevitably puts the spotlight on deception rather than authenticity. ChatGPT, for example, might well pass many Turing Test scenarios. But fooling a person into believing there’s a mind behind its words doesn’t mean the AI has genuine understanding or lived context. It’s a powerful demonstration of mimicry, not a realisation of consciousness.
We can turn here to Iain McGilchrist, a scientist and philosopher whose work on the brain’s two hemispheres offers a revealing insight. In his great book The Master and His Emissary, McGilchrist argues that the two hemispheres of the brain, while functionally similar, attend to the world in different ways. The left hemisphere attends to the world in an analytical, mechanistic way—prizing representation over direct engagement. It’s brilliant at dissecting problems and applying methods but tends to assume it has the full picture. Meanwhile, the right hemisphere attends to the world in a more direct and holistic way; it dwells in the immediate, lived reality. It sees nuance, change, and the continuous flow of experience. According to McGilchrist, modern culture often overemphasises left-hemisphere modes of thinking. AI, with its reliance on algorithms, symbols, and statistical pattern recognition, is the ultimate extension of that left-hemisphere worldview.
The tension McGilchrist describes in the human brain is captured in his use of Nietzsche’s parable of the Master and his Emissary. The Master (the right hemisphere) has the broader vision and understanding, while the Emissary (the left hemisphere) is adept at carrying out specific tasks and dealing with abstractions—but mistakenly believes it knows more than it does. In the realm of AI, this parable becomes doubly apt. We have a technologically sophisticated Emissary—highly capable, enamoured with its own prowess—yet cut off from the Master’s depth of insight, context, and felt reality. What we call “progress” here is the Emissary climbing ever higher in its realm of representations, without recognising the deeper territory it fails to inhabit.
This is where the Dreyfus critique resurfaces with renewed force. Back in the 1980s, Stuart and Hubert Dreyfus described the eternal optimism of AI researchers as “the belief that someone climbing a tree is making progress toward reaching the moon.” Every rung up the trunk feels like progress, but in reality, the distance to the moon is of an entirely different order. No matter how tall the tree, you’ll never make that cosmic journey by continuing to climb. In the same way, the leaps in generative AI—impressive though they are—don’t actually bring us any closer to bridging the qualitative gap between simulation and true, situated, experiential understanding.
Even pioneers like Geoffrey Hinton acknowledge that current AI lacks true comprehension. Hinton’s genius lies in harnessing pattern recognition, not conjuring consciousness out of silicon. So when Sam Altman or others tout “reasoning” abilities, they often blur the line between highly sophisticated pattern matching and bona fide reasoning. By the latter, we mean a capacity tied to awareness of context, goals, emotional states, cultural background, and personal stakes. We reason because we are living, feeling creatures engaged with the messy realities of existence.
Critics point out that AI’s success is ironically making this gap less visible. The more seamless the simulation, the easier it is for us to lapse into complacency. We start to treat these systems as if they were wise advisors, fellow thinkers, or empathic counsellors. This creep toward anthropomorphising AI becomes a real danger when it shapes public policy or corporate decision-making. If we forget the difference between map and territory—if the Emissary convinces us it is the Master—we might trust AI with moral or existential terrain that it simply cannot grasp. As the Dreyfuses warned, no amount of “tree-climbing” will get us to the moon of genuine intelligence.
PART III: LIVING, DYING, AND FEELING—THE HUMAN ROOTS OF INTELLIGENCE
Picture a toddler taking her first steps, teetering on pudgy legs, arms outstretched for balance, the wide-eyed wonder on her face as she experiences the incredible sensation of walking upright. That child isn’t following a rule book or dissecting thousands of YouTube clips about the mechanics of ambulation. Her learning is embodied, honed by trial and error, guided by the innate drive to explore, and cheered on by empathetic caregivers. It is messy, risky, and infused with feeling—fear, determination, delight.
This is precisely the dimension of intelligence that generative AI lacks. Humans exist as finite, vulnerable creatures who come to know the world through experience, memory, and emotional resonance. Our reasoning is shaped by hunger, desire, pain, and love—forces no disembodied algorithm can replicate. We don’t deduce empathy by sifting through data. We live it, forged by shared hardships and joys. We resonate with others because we ourselves have tasted sorrow and euphoria, not because we have calculated it.
Philosophers like Heidegger would call this “being-in-the-world.” We aren’t spectators applying rules to the external environment; we are inseparably woven into that environment. Our intelligence arises from a deep, pre-reflective entanglement with all the textures and contours of life. When we speak of “context,” we don’t mean just the immediate topic or setting. We mean the entire web of cultural, historical, and personal threads that shape our perceptions.
AI, no matter how advanced, remains an outsider to this tapestry of lived human meaning. A language model can tell us about the chemical changes in the brain that accompany sadness. It can identify signs of sadness in the human face. It can produce hauntingly poetic lines about heartbreak. But it has never cried from heartbreak, never lost a parent, never worried that it might fail an exam or lose a friend’s trust. That lived dimension is missing—and with it, a massive slice of what we call understanding.
Indeed, this difference is not just quantitative (“we need more parameters”). It’s qualitative. Human understanding arises from subjective experience, from a body that aches and delights, from relationships that nurture and injure. Without any of that, AI is a locked window. It can show you a reflection of life’s complexities but never open the pane to feel the breeze or smell the earth after rain.
McGilchrist’s dual-hemisphere view underscores this again: the left hemisphere thinks it can master reality by categorising and dissecting, but it fails to appreciate the fuller, more direct engagement that the right hemisphere provides. AI, as the pinnacle of left-brain-style cognition, can simulate our outputs but can’t join us in the existential weight of living. The toddler’s experience of walking—full of sensation, risk, and emotional support—remains forever out of reach for an algorithmic entity. It can climb every branch in sight, but that moon is still a universe away.
PART IV: THE LIMITS OF MEANS-ENDS THINKING
Many of the misunderstandings about what AI can and can't do stem from a deeper assumption about intelligence and how we think: that intelligence is fundamentally about identifying hidden rules and applying them to achieve goals. This assumption is baked into our scientific culture, our educational systems, and much of modern management philosophy. We see it in STEM curricula emphasising formulas and laws of nature, and we see it in the corporate world’s reliance on methods like Key Performance Indicators (KPIs) or Six Sigma. Define the goal, design the system, and execute.
Yet this kind of means-ends thinking is not the sum of all thinking. It works in some contexts but not others, and it works best where cause and effect are predictable and stable. Baking a cake follows a straightforward recipe: gather ingredients, combine them in the correct proportions, apply heat, and voilà. Even extremely complicated undertakings like building a space rocket can be accommodated by means-ends reasoning. This is because rockets are machine-like: intricate, yes, but ultimately solvable through expertise, engineering, and discrete processes.
The problem arises when we extend this approach to complex human environments—cultures, markets, communities, or political systems. These are not well-contained machines but dynamic, ever-evolving networks shaped by unpredictable interactions. Dave Snowden’s Cynefin framework offers a compelling way to see why. It distinguishes between obvious, complicated, complex, and chaotic environments, each requiring a different method of understanding and intervention.
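For readers who want the framework at a glance, the pairing below follows the commonly cited Cynefin summaries (the code scaffolding is mine, used only to lay the domains side by side). Note that only the first two domains reward the “define the goal, design the system, execute” style of thinking.

```python
# The Cynefin domains as commonly summarised; only the first two suit means-ends planning.
CYNEFIN = {
    "obvious":     {"approach": "sense, categorise, respond", "practice": "apply best practice"},
    "complicated": {"approach": "sense, analyse, respond",    "practice": "bring in expert analysis"},
    "complex":     {"approach": "probe, sense, respond",      "practice": "run safe-to-fail experiments"},
    "chaotic":     {"approach": "act, sense, respond",        "practice": "stabilise first, then assess"},
}

for domain, guidance in CYNEFIN.items():
    print(f"{domain:>11}: {guidance['approach']}  ->  {guidance['practice']}")
```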
Means-ends thinking can handle the obvious and the complicated: set goals, deploy best practices or expert analysis, and outcomes generally follow. But in the complex domain, cause and effect are only clear in hindsight, and outcomes emerge through feedback loops. A neat solution in one corner can trigger unintended consequences elsewhere. Business transformation and culture change are classic examples. Leaders set out a plan—“We’ll have a more open, innovative culture”—as if it were an engineering project. But, as many leaders soon realise, a stray remark in a meeting, a rumour about management’s ulterior motives, or the subtle dynamics of trust can easily unravel the entire plan. These sorts of problems don’t yield to rule-based approaches.
The same goes for what are sometimes called “wicked problems”. Challenges like affordable housing, healthcare reform, homelessness, or climate action span multiple interdependent causes and refuse tidy technical solutions. When we treat these as machine-like puzzles, we often end up with partial “fixes” that create new problems elsewhere. This mismatch between means-ends thinking and the real world is precisely the mismatch between AI’s rule-and-pattern-based approach and our lived, context-rich experience. The AI sees a recipe for culture change or a policy fix in abstract, siloed terms. Human reality plays out in tangled social currents that follow no simple script.
Here we come back to Heidegger, Dreyfus, and McGilchrist: human intelligence isn’t about extracting universal rules from behind the curtain of phenomena. It’s about being attuned to ever-shifting contexts, about experimenting and improvising in the face of changing conditions, about dwelling in a world rather than standing outside it. AI, to the extent it inherits a mechanistic worldview, remains trapped in the mindset of means-ends rationality. It can be an astonishingly effective Emissary for certain tasks—optimising processes, analysing data, streamlining logistics—but it cannot replace the Master who engages holistically, adaptively, and experientially with the complexities of life.
PART V: WHEN DREAMS BECOME NIGHTMARES—THE RISKS AND CONSEQUENCES OF AI
The ongoing expansion of AI capabilities isn’t merely academic; it has real and potentially far-reaching consequences. We’ve already glimpsed a cautionary example with social media. When Mark Zuckerberg launched Facebook, he sold it as a tool for deeper human connection. He envisioned a world where friendships would be strengthened by digital networks, free from physical barriers. And indeed, that vision partly came to pass—millions found communities, reconnected with old friends, and discovered spaces for self-expression.
But then, a darker side emerged, fuelled by algorithms designed to maximise engagement. Content that provoked outrage or fear got more clicks, so the platform doubled down on pushing it. Misinformation metastasised. Political echo chambers hardened. Depression and anxiety soared among teens. Even Zuckerberg—once the consummate optimist—had to reckon with unintended consequences.
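The feedback loop itself is almost embarrassingly simple to sketch. The toy ranker below is entirely my own illustration (no real platform’s fields, model, or weights), but it captures the essay’s point: optimise a proxy for attention and whatever feeds that proxy gets amplified, with no grasp of the harm along the way.

```python
# A toy feed ranker: score each post by predicted engagement and surface the
# highest scorers. If outrage reliably earns clicks, outrage reliably rises.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # stand-in for a learned engagement model
    outrage_score: float     # stand-in for emotional-arousal signals

def engagement_score(post: Post) -> float:
    """Posts that provoke strong reactions tend to score higher."""
    return post.predicted_clicks * (1.0 + post.outrage_score)

feed = [
    Post("Local park reopens after renovation", 0.3, 0.1),
    Post("You won't BELIEVE what they're hiding from you", 0.4, 0.9),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.text)
```

The ranker is not malicious; it is indifferent. That indifference, multiplied across billions of feeds, is what far more capable systems now threaten to scale up.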
Now imagine AI systems far more advanced than social media algorithms, woven into every aspect of life—chatbots guiding mental health decisions, machine learning engines running financial markets, and maybe even AI-driven war machines deciding on targets. The fundamental problem is that these systems, no matter how advanced, do not understand the moral weight of their choices. They can’t experience empathy or guilt. They can’t weigh the intangible elements of a decision that make it truly ethical.
Tech evangelists respond with reassurances about “guardrails” and “regulations.” But these are often afterthoughts, erected by the same people who view AI through rose-tinted glasses and have vested interests in its unimpeded growth. Business and science both share this sense of inevitability: if it can be done, it should be done. The fact that these breakthroughs might spawn enormous social fractures is lamentable but never quite reason enough to slow down.
Yet all this “progress” stands on the shaky foundation of an AI that simulates understanding rather than embodying it. When simulation meets real-world stakes, outcomes can be disastrous. We’re dealing with black-box systems that humans themselves struggle to interpret, let alone control. If an AI recommends closing a rural hospital because it’s “inefficient,” who can say it’s wrong without the moral intelligence to weigh the hospital’s role in its community? If an autonomous weapon calculates it should neutralise a population to “reduce risk,” does it grasp the horror behind that action? Of course not. It’s all simulation.
That’s why one cautionary note bears repeating: the danger lies not in what AI can do but in what we wrongly imagine it to be. We risk fetishising AI as a wise, neutral arbiter that can rescue us from human frailty. But it’s neither wise nor neutral. It’s a tool shaped by the data and objectives we feed it. When those objectives are shortsighted or profit-driven, we reap tragedies instead of utopias.
To top it off, none of the big names—Sam Altman, Geoffrey Hinton, Demis Hassabis, Mustafa Suleyman, or Ray Kurzweil—can offer a convincing path to bridging the gap between simulation and reality. They promise a future where AI “truly understands,” but that promise remains speculative—many argue it’s conceptually impossible without a drastic overhaul of what we even mean by intelligence. Meanwhile, they press onward, building ever-larger models, reaping accolades and funding, and championing a vision that might be unattainable or, worse, a stepping stone to unanticipated harms.
PART VI: A CALL FOR CAUTION AND WISDOM
What if we questioned the narrative of inevitability? What if we acknowledged that the gap between simulating understanding and actually being an understanding entity is not just one or two breakthroughs away but is instead a chasm that might never be bridged? Instead of doubling down on an AI arms race, we could channel our human ingenuity into ensuring technology remains our tool, not our master.
There is a place for AI. Used judiciously, it can accelerate research, sift through massive datasets, and automate tedious work. It can even generate creative prompts that spark genuine human insight. But if we treat AI as a replacement for human intelligence or empathy, we delude ourselves. If we give it moral or social authority, we risk empowering a system that is fundamentally blind to our lived realities.
Human intelligence is messy, contradictory, alive. It emerges from bodily sensations and social interactions—from the way a child learns to walk to the way an adult grapples with loss and love. Our reasoning is not purely logical; it’s marinated in emotion, tradition, and personal experience. AI can approximate the outputs of that reasoning, but it remains a carnival mirror—reflecting, refracting, and distorting.
This doesn’t mean we should shun technological advancement. But we must keep our eyes open. Too often, those pushing AI forward have a deep faith in progress for progress’s sake. They see the wonders of modern computing—the leaps made in just a decade—and extrapolate toward a horizon of limitless potential. Rarely do they pause to consider that some problems aren’t simply “scaling issues.” As Dreyfus and Heidegger both noted, human intelligence isn’t about dissecting the world into rules; it’s about dwelling in it, an ongoing negotiation with complexity rather than a one-time engineering fix.
Many challenges, especially “wicked problems,” call for approaches that balance technical expertise, human relationships, and social nuance. When we mistake complex systems for complicated machines, we deploy means-ends thinking in realms where it’s a poor fit—where everything is entangled, emergent, and impossible to reduce to a linear series of steps. AI, as a marvel of means-ends engineering, is unsuited to fully tackle those domains. It climbs higher in the tree, but the moon remains out of reach.
The genie is out of the bottle, but we still shape its future. We can demand transparency about AI systems and how they’re developed. We can insist on regulatory frameworks that sometimes say “No” rather than always “Yes, but carefully.” We can foster public discourse that goes beyond PR, grappling honestly with the existential implications of AI. And we can reaffirm the uniqueness of human insight—rooted in emotion, embodiment, and context—in ways no machine can replicate.
Ultimately, if there’s one lesson worth carrying forward, it’s that real intelligence isn’t a matter of faking it until you make it. It’s an ongoing, lived process bound to the reality of being human. AI can mimic forms of knowledge and generate outputs that astound us, but the difference between map and territory remains—and it’s enormous. Mistaking one for the other has always been a dangerous error. In an age of advanced AI, it may well be the defining error of our time.
And that is why we must pay attention to the illusions we create—lest they come to rule us and lead us to misunderstand not only the machines we build, but the very essence of what it means to understand at all.


