
The Limits of Machine Intelligence

  • owenwhite
  • Jan 5
  • 11 min read


In the mid-1960s, the hallways of MIT seemed to glow with promise. Artificial Intelligence, many believed, was just a few breakthroughs away from equaling—or even surpassing—human intelligence. Computers that had once clunked through simple arithmetic would soon think, learn, and solve the world’s most pressing problems. Marvin Minsky, a brilliant and charismatic scientist, stood at the centre of this storm. He and his collaborators declared that human intelligence could be reduced to symbols, logic, and formal operations—a neat grid of data in which reasoning was the outcome of rule application. The mind, on this view, might be complex, but it was ultimately computational, and Minsky was determined to (de-)code it.


Not far from Minsky’s lab at MIT, a philosopher named Hubert Dreyfus saw things quite differently. Dreyfus, immersed in the writings of Martin Heidegger and Maurice Merleau-Ponty, believed that intelligence was more than problem-solving. It was, he argued, a way of being in the world. Humans don’t just shuffle mental symbols around; they live and breathe in a cultural setting, guided by tacit rules, social norms, and a sense of personal vulnerability related to their bodies.  To Dreyfus, Minsky’s approach—treating the mind as a symbol-manipulating machine—was not just incomplete, it was blind to the very qualities that make human understanding human. The stage was set for a legendary standoff.


Minsky vs. Dreyfus: A Philosophical Showdown

For Minsky, the brain was essentially an information-processing machine, and replicating human intelligence therefore hinged on cracking the “codes” or “rules” by which we process information. He believed that if you could break knowledge down into symbolic units and define precise procedures for combining those symbols, you’d replicate the inner workings of the mind. Early AI programs, which performed tasks like playing checkers or solving algebra problems, seemed to confirm his vision: to Minsky, these successes suggested that, with enough time and logic, machines could one day master more sophisticated challenges—conversation, perception, even moral judgment. Philosophy, in Minsky’s eyes, had spent centuries speculating without building anything practical. Now science would do the job.
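
To make the contrast concrete, here is a minimal sketch, in present-day Python, of the kind of symbol-and-rule system Minsky’s tradition had in mind. The facts, the single rule, and the names are invented purely for illustration, and real systems of the era were vastly larger, but the principle is the same: knowledge as discrete symbols, intelligence as the repeated application of formal rules.

```python
# Illustrative sketch only: knowledge as symbolic facts, reasoning as rule application.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def derive_grandparents(known):
    """One hand-coded rule: parent(X, Y) and parent(Y, Z) implies grandparent(X, Z)."""
    new = set()
    for rel1, x, y1 in known:
        for rel2, y2, z in known:
            if rel1 == "parent" and rel2 == "parent" and y1 == y2:
                new.add(("grandparent", x, z))
    return new

def forward_chain(known, rules):
    """Apply every rule until no new facts appear: the classic inference loop."""
    while True:
        derived = set().union(*(rule(known) for rule in rules)) - known
        if not derived:
            return known
        known = known | derived

all_facts = forward_chain(facts, [derive_grandparents])
print(("grandparent", "alice", "carol") in all_facts)  # True
```

Everything the system “knows” had to be spelled out in advance by a programmer; nothing here is learned.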


Dreyfus, however, questioned the very idea that the mind was like a computer processing information. He pointed to the massive gaps in this model: Where is the body? Where is the sense of mortality, the childhood shaped by culture, the moral awareness that grows from vulnerability? Where is the tacit knowledge that lets us read a room, sense irony, or spontaneously shift register when speaking to a child rather than a coworker? These intangible skills—so important to human life, and so effortless for humans—proved devilishly difficult for machines. Dreyfus published a damning report for the RAND Corporation, spelling out why symbolic AI would continue to falter on everyday tasks. Minsky was furious, seeing in it not only a philosophical challenge but a threat to research funding.


Minsky never managed to rise to the challenge Dreyfus presented. Most computer and cognitive scientists, it seems, chose to ignore Dreyfus rather than attempt to refute him. And the AI Winter that eventually arrived seemed to vindicate him. Symbolic systems excelled at carefully defined puzzles—like geometry theorems or contrived dialogues—but they continually stumbled in messy, real-world environments. Real people lived in a universe thick with context, nuance, and intuitive leaps that refused to be captured by mere rule lists. Dreyfus insisted that one cannot replicate understanding without replicating the lived experiences that inform it. A bodiless computer, however sophisticated, was missing the ground from which meaning arises.


The Connectionist Revolution: Hinton and Deep Learning

While Minsky’s logic-based approach eventually stalled, another AI tradition quietly gained traction—one explicitly inspired by how real neurons in the brain seem to work. Instead of relying on hand-coded facts and rules, this “connectionist” school proposed that machines could learn from data, much as the brain’s neurons strengthen and weaken their connections through experience. Geoffrey Hinton led this movement, arguing that intelligence might emerge from the interplay of countless simple units (or “neurons”) that adapt and form patterns as they process information. This stood in stark contrast to Minsky’s symbolic framework: Hinton’s networks didn’t need to be told how to structure knowledge; they simply absorbed examples and self-organised, uncovering relationships in the data that no programmer had ever explicitly defined.


For years, connectionism remained on the fringes. Critics, including Minsky, highlighted theoretical limitations in early neural nets, and the idea of “perceptrons” fell out of favour. But Hinton persisted, refining backpropagation algorithms, amassing bigger datasets, and harnessing more powerful hardware. By the 2010s, the deep learning boom was on, and neural networks began to outperform almost every previous AI method in fields like speech recognition, image classification, and strategic gameplay.
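
A toy example makes the difference in style vivid. The sketch below, written in modern Python with NumPy rather than anything Hinton actually used, trains a tiny two-layer network on the XOR problem. Nobody writes a rule for XOR here; the connection weights are nudged by backpropagation until the behaviour emerges from the examples.

```python
# Toy connectionist sketch: a two-layer network learns XOR from data alone.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # forward pass: activations flow through the layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: errors flow back, nudging every weight a little
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())  # typically converges toward [0, 1, 1, 0]
```

The point is not the arithmetic but the inversion of Minsky’s recipe: the structure is learned from data, not declared in advance.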


Yet with the success of the new connectionist approach came a familiar question: Do these deep nets really understand anything? Beneath the reams of data and multiple layers of processing, a deep-learning system still forms internal representations—albeit “distributed” patterns of weights rather than Minsky’s discrete symbols. It’s impressive, but is it akin to a mind, or is it just a statistical powerhouse that can mimic a mind’s outputs under certain conditions? The question of understanding loomed larger when generative AI systems like GPT-4 appeared. Suddenly, chatbots could spout fluent prose on quantum physics, romantic poetry, or historical analysis. But they often “hallucinated,” confidently asserting nonsense or mixing up facts. They lacked any sense of reality behind their words. This problem, hammered home by Dreyfus decades earlier, returned with a vengeance: Without a lived sense of context, how can a machine discern truth from a plausible but incorrect statement?


Gary Marcus: A Cognitivist Reborn

Into this debate stepped Gary Marcus, a psychologist who echoes Minsky’s cognitivist perspective. Marcus acknowledges that deep learning has extraordinary powers but insists it’s missing essential “symbolic reasoning” for true reliability, consistency, and conceptual coherence. Generative models might sound empathic or knowledgeable, but they can slip into bizarre contradictions. The solution, in Marcus’s view, is to merge symbolic logic with neural networks, forging a hybrid that marries pattern recognition to robust, structured knowledge.
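
What such a hybrid might look like is easiest to see in schematic form. The sketch below is purely illustrative and is not Marcus’s (or anyone’s) actual system: a hard-coded stand-in for a neural model proposes fluent candidate answers, and a small hand-authored knowledge base acts as the symbolic layer that vets them, rejecting outputs that sound plausible but contradict explicit facts.

```python
# Schematic neuro-symbolic sketch: a "neural" proposer plus a symbolic checker.
KNOWLEDGE = {  # hand-authored symbolic facts, invented for illustration
    "paris": {"is_a": "city", "capital_of": "france"},
    "france": {"is_a": "country"},
}

def neural_propose(question):
    """Stand-in for a neural model: returns fluent candidates, some of them wrong."""
    return ["paris", "france"]  # a real model would rank these by learned probability

def symbolic_filter(question, candidates):
    """Keep only candidates consistent with the explicit knowledge base."""
    if "capital" in question.lower():
        return [c for c in candidates
                if KNOWLEDGE.get(c, {}).get("capital_of") is not None]
    return candidates

question = "What is the capital of France?"
print(symbolic_filter(question, neural_propose(question)))  # ['paris']
```

In Marcus’s terms, the statistical component supplies fluency and breadth, while the symbolic component supplies the consistency that pure pattern-matching lacks.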


At first glance, one might think Marcus was channeling Dreyfus by pointing to generative AI’s inability to truly “understand.” But actually, Marcus’s aim is quite different. Whereas Dreyfus said the machine could never replicate human understanding without the experience of being a vulnerable, embodied being, Marcus still places his hope in representation—only with more refined conceptual scaffolding. He is returning to the old cognitivist notion that better logical frameworks might “fix” the illusions. But, as Dreyfus would retort, more logic doesn’t conjure a childhood, a sense of mortality, or the moral perspective that emerges from living among others. You can’t reduce qualitative human experience to quantitative models. The gap between simulating empathy and having empathy remains unbridged.


The Empathy Issue: Mustafa Suleyman’s Claim

That gap emerges starkly when we look at the statements of Mustafa Suleyman, co-founder of DeepMind and Inflection AI and now heading up Microsoft’s AI division. Suleyman has publicly claimed that AI systems have already “mastered” empathy—referring to how an AI can be fine-tuned to display warm, supportive language or “human-sounding” concern. In practice, these systems respond with phrases that mirror empathy’s outer form: “I’m so sorry you’re feeling that way,” or “I understand how difficult that must be.” For those steeped in computational or cognitivist assumptions, that might look like “mastering empathy.” The AI has a representational module that leads it to produce the behaviours we associate with empathic communication.


But from a real-world standpoint—especially for people who treasure the virtues of genuine human connection and depth—the AI isn’t even close to mastering empathy. This is simply a simulation of empathy, not the real thing. The difference is profound. True empathy isn’t just about patterning the words of care; it’s about feeling concern for another’s distress, shaped by the knowledge that we, too, can be hurt. It involves the experience of being in a body, facing mortality, having lived through joys and losses. Dreyfus argued that a system lacking real vulnerability or emotional life is inevitably faking it. The cognitivist lens, however, reduces empathy to “behaviours,” which can indeed be captured and manipulated in numerical or symbolic terms. This mismatch throws the “understanding gap” into stark relief.


Computer Science Eyes vs. Human Experience

Why do so many AI luminaries seem untroubled by this difference between performance and lived understanding? Part of the reason traces back to the backgrounds of pioneers like Turing, McCarthy, Minsky, and today’s boosters—most are mathematicians or computer scientists, schooled in a tradition that sees the mind in quantifiable, computational terms.  Their methodology is about reducing the qualitative to the quantitative so that it can be manipulated and mastered.


When Suleyman claims AI can “master empathy,” he’s effectively claiming that empathy can be coded as a set of observable signals or behaviours. But is that true?  Is that what matters about empathy for real people in real-life situations? For someone in genuine distress—say, a terminally ill patient facing deep existential fears—these signals are always going to ring hollow if they’re known to come from a system that has no true sense of pain, mortality, or moral accountability. It’s the difference between “simulating” a phenomenon and “living” it. This is precisely the realm that defies straightforward computational measure.


This tension isn’t trivial. Our culture is profoundly shaped by scientific narratives, and the achievements of mathematics and computer science rightly inspire admiration. Yet that very success can overshadow the possibility that some human experiences—empathy, wisdom, moral insight—lie outside the scope of numeric representation. It’s easy for those with a purely computational worldview to treat intangible experiences as mere illusions or forms of “input–output.” But for the rest of us, particularly those who treasure a humanistic perspective and find meaning in relationships, culture, vulnerability, and the sense of shared existence, the computational approach shrinks our sense of what it means to be human.


Enter Iain McGilchrist: A Different Critique

While earlier debates circled around “symbolic” vs. “connectionist” AI, and whether more representation would suffice, Iain McGilchrist offers an upstream critique that extends beyond Minsky or Dreyfus or any purely philosophical stance. After decades studying the neurology of the human brain, McGilchrist argues that our cognition operates in two distinct modes linked to the two hemispheres of the brain. The left hemisphere is adept at dissecting reality, forming abstractions, and controlling them—precisely what AI does so well. Meanwhile, the right hemisphere directly engages the richness of lived experience, perceiving context, novelty, and the intangible qualities that make events unique and meaningful.


To McGilchrist, AI’s brand of intelligence is almost entirely “left hemisphere,” focused on analysis, patterns, and representation. Even when Gary Marcus says “we need more logic,” he’s doubling down on left-hemisphere logic. Even when Hinton refines neural networks, it’s still a left-hemisphere enterprise if the aim is to manipulate or replicate patterns without grounding in experience as we feel it. And that is why machines can simulate empathy or irony—replicating the outward markers—yet remain fundamentally disconnected from the underlying lived resonance that shapes those human phenomena.


McGilchrist’s deeper warning, however, concerns society itself. If we systematically elevate a left-hemisphere style—quantitative, optimising, representational—we risk crowding out the right hemisphere’s gift of relational understanding, embodied sense-making, and what might be called “wisdom.” In a culture enthralled by economic metrics and the illusion of total control, AI can appear to confirm that everything can be engineered. This trend feeds into the mental health crisis, a pervasive sense of disconnection, and the broader meaning crisis that many see in modern life. It is, in McGilchrist’s metaphor, a case of “the emissary” (the left hemisphere) trying to seize power from “the master” (the right hemisphere) and insisting its narrower perspective is the only valid reality.


Why “Simulation” Can’t Become “Real Experience”

Dreyfus hammered the point that AI lacks human embodiment, moral sense, and cultural embedding. McGilchrist’s hemispheric research offers a neurological underpinning for why that matters. True experience arises from living in a body that can be hurt or die, from a childhood shaped by family and culture, from intangible moments that defy neat classification. A machine, no matter how intricately programmed, can’t replicate that reality; it can only assemble representations or simulate outward signals.


Thus, even if we bolster AI’s model with more rules or bigger neural nets, we don’t magically produce a creature that knows what heartbreak feels like or truly cares when a patient’s life is on the line. The “understanding gap” remains, not because we lack a certain algorithmic trick, but because we’re dealing with a fundamental difference between life and simulation. AI’s “expert systems” may transform scientific research or provide breathtaking solutions to computational challenges, but they stay on the far side of the empathy threshold. And no amount of computing power bridges that divide.


This is why many in the humanities push back against the fervor of AI boosters who see, in every problem, a data puzzle. Literature is prized precisely because it captures the “show, don’t tell” ethos: it reveals contexts, vulnerabilities, moral dilemmas, and layers of character that no numeric parameter can fully encode. Similarly, music, art, and theater traffic in emotional truths that do not boil down to discrete signals. A cognitivist or connectionist framework might generate passable imitations, but “passable” is not “lived.” Confusing the two leads to illusions of mastery—“AI has empathy!”—that vanish on closer contact with real human anguish.


Future Directions: Why It Still Matters

Society today stands at a crossroads. Figures like Sam Altman, Mustafa Suleyman, Ray Kurzweil, and Demis Hassabis proclaim an AI future that could overshadow human intelligence. They conjure scenarios of fully automated empathy, improved doctor–patient interactions, even entire fields of creative production handled by generative models. On the surface, these visions dazzle. But from a McGilchrist perspective, it’s all part of a left-hemisphere intoxication—giddy with data processing, brilliant at short-term optimisation, but missing the intangible wellsprings of meaning.


The real risk is cultural. If economic imperatives push us to accept AI-based empathy as “good enough,” we might find ourselves in medical clinics staffed by hyper-efficient chatbots that do a polite pantomime of concern. That scenario might cut costs and speed up service, but it also robs patients of genuine human connection. A cynic would say it’s about boosting share prices more than helping souls in distress. McGilchrist’s caution is that such a shift further entrenches a worldview that sees all phenomena as mechanical and quantifiable, edging out the “right hemisphere” approach that fosters depth, moral sense, and real emotional resonance.


In a world enthralled by technological expansion, we could end up forced—by capitalism’s productivity demands—into a version of intelligence that is brilliant at engineering but impoverished in humanity. The central failing wouldn’t be a lack of “clever code” but a lack of respect for the dimension of intelligence that can’t be reduced to code at all. This is precisely the domain Hubert Dreyfus insisted was untranslatable into data structures, and that McGilchrist’s lateralisation studies reveal is crucial for a balanced human society.


Conclusion: The Master and the Emissary in the Age of AI

The story of AI, from Minsky’s optimism to Dreyfus’s skepticism, from Hinton’s deep learning to Marcus’s symbolic revival, circles back to the same question: What does it mean to understand? Does an AI that “fakes” empathy or irony really grasp what those states are, or is it an elaborate conjuring trick? Could a machine, lacking bodily vulnerability, enculturation, and moral imagination, ever experience real empathy? Most boosters either assume it’s inevitable with bigger models or dismiss the question as philosophical fluff. But the debate is alive, fueled by AI’s repeated inability to ground itself in the experiential realities that make our intelligence meaningful.


Iain McGilchrist’s contribution is to show how these limitations mirror the dominance of a left-hemisphere style of attention, a form of intelligence that breaks the world down into parts and manipulates them, but does not dwell in it. The risk is that we lean so heavily on the left-hemisphere approach—across medicine, education, and personal relationships—that we estrange ourselves from the right-hemisphere dimension: the capacity for presence, vulnerability, moral reflection, and genuine emotional resonance.


This is not a minor philosophical quibble; it’s a civilizational issue. If society is seduced by the illusions of total computational mastery—like “AI doctors who empathize better than any human”—we could accelerate a crisis of meaning, where real emotional experiences are sidelined by “efficient” simulations. Conversely, acknowledging the gap between simulation and lived experience keeps us honest about what AI can and cannot do. It also preserves the crucial space for the arts, humanities, and the intangible aspects of life that defy mechanistic representation.


The question, then, is not whether AI will keep transforming our world—it will—but whether we, in turn, will keep hold of the other side of our intelligence, the one that McGilchrist identifies with the right hemisphere. That side of us cultivates wisdom, fosters genuine connection, and embraces the truth that not every vital human experience can be reduced to data. In that sense, the old dispute between Minsky and Dreyfus never really ended; it has only grown more urgent. And it may be Iain McGilchrist, with his scientific insights into how the brain balances two ways of attending, who shows us why AI’s left-hemisphere brilliance could be a gift—so long as we remember it’s only half the story of intelligence, and that real understanding may remain firmly on the side of embodied, culturally enmeshed human beings.

 
 
 
