
What Matters vs. What Works: Why AI Hype Misunderstands Human Intelligence

  • owenwhite
  • Jan 19
  • 14 min read


PART I: THE ILLUSION OF “INTELLIGENCE” IN THE AGE OF IQ TESTS

In the early 20th century, French psychologist Alfred Binet devised a test intended to help identify schoolchildren who might need additional support. Over time, this modest diagnostic tool grew into something far larger: the Intelligence Quotient, or IQ test, and with it a global obsession with measuring “intelligence.” By mid-century, IQ had become strongly associated with success in academic and professional arenas, and “intelligence” was linked to skill in logic, mathematics, and abstract reasoning—all the “hard” cognitive capabilities.


And yet, if you’ve moved through the world long enough, you’ll know that a high IQ is no guarantee of success, happiness, or even good judgment. We’ve all met that person who can solve complex equations but can’t hold a conversation without offending half the room. Or the colleague who always had the right answers in college but lacks the emotional radar to build strong workplace relationships. Academic aptitude can take you far, but it rarely gets you all the way—certainly not in the messy terrain of human relationships, leadership, or self-understanding.


Over the past few decades, educators, psychologists, and business leaders have come to appreciate the crucial role of “soft skills” like empathy, communication, self-awareness, and emotional regulation. Daniel Goleman’s Emotional Intelligence popularised the notion that many of life’s greatest challenges depend less on computational or analytical prowess and more on how we perceive and manage our own emotions and those of others. It’s not that “book smarts” are irrelevant—ask any doctor who’s had to memorise volumes of clinical data—but that raw cognition alone is insufficient to navigate real-life complexities.


Yet for reasons both historical and cultural, when we hear the word “intelligence,” our minds often revert to that old 20th-century model of IQ. Media coverage of top-tier universities and child prodigies perpetuates the mystique of mental fireworks—fast problem-solving, deep recall, elegant proofs. That mythos orbits around academic brilliance, relegating interpersonal savvy and emotional attunement to subsidiary status.


Enter artificial intelligence. In the popular imagination, AI is heralded as the next step in the evolution of “intelligence.” Enthusiasts point out that these systems can solve equations at lightning speed, detect patterns buried in oceans of data, and even outscore humans on standardised tests. Indeed, if intelligence is measured by a test or a puzzle, then generative AI—well trained on vast corpora—will keep surpassing human benchmarks with breathtaking efficiency.


The trouble is, those benchmarks measure something that’s more akin to “book smarts” than the fullness of real-life intelligence. AI’s speed and capacity for analysis make it an unquestionable genius in the realm of “hard cognitive capabilities.” But when we think about day-to-day human life—what truly shapes our relationships, our happiness, and our moral judgments—it’s the intangible, context-dependent, emotional, and nuanced side of intelligence that tends to matter most.


Paradoxically, most of us grasp this. The difference between “book smarts” and “street smarts” is part of cultural common sense. We see it in the colleague who might not have a fancy degree but excels at reading a room. We see it in the friend who never studied psychology yet offers the best advice. We see it in family members who dropped out of formal education but have a deep intuitive wisdom about life.


However, despite this broad societal awareness, we collectively forget it when it comes to AI hype. Every new iteration of generative AI is treated like a milestone in “surpassing human intelligence.” Journalists ask, “Have machines become smarter than us?” But if we scratched the surface, we might realise they’re asking about a thin, specialized definition of “smart”—a version that elevates pattern recognition and problem-solving above all else.


That’s where the concept of “what matters” enters. If the yardstick for intelligence is heavily based on rational-analytical performance, then sure, AI is barreling ahead. But if intelligence also includes empathy, context, moral judgment, humor, authenticity—traits we rely on for making life meaningful—then these machines have barely stepped up to the starting line. They can simulate emotional expressions, produce therapy-like chat responses, and even pretend at empathy, but they don’t live those emotions. They cannot hold a trembling friend’s hand in a hospital waiting room or see the flicker of fear in someone’s eyes and respond with heartfelt assurance.


And that, as we’ll explore, is where the hype around AI’s intelligence quickly dissolves into something more akin to an illusion—a story we tell ourselves based on incomplete assumptions about what “intelligence” really is.


PART II: WHEN AI IS BOOK-SMART BUT SOCIALLY CLUELESS

You’re in a meeting, pitching a new idea to your team. The atmosphere is tense—some colleagues fidget nervously, others stare fixedly at their phones. A new manager, fresh from a top MBA program, launches into a PowerPoint presentation dense with complicated flowcharts, each one reflecting weeks of precise analysis. The data is immaculate; the logic is sound. But as the manager speaks, the room’s temperature drops. Nobody is inspired or even engaged. The manager, brimming with “book smarts,” fails to notice the human undercurrent: the fear of budget cuts, the bruised egos from last quarter’s restructuring, the quiet cynicism that no new initiative ever sees adequate funding. In short, the manager can’t “read the room.”


In a corporate environment, the best leaders aren’t merely good at number-crunching or strategic planning. They excel at empathy, communication, conflict resolution, and forging trust—those often-dismissed “soft skills” that make a team gel. You can’t reduce that to an equation. It’s not about the manager’s IQ. It’s about understanding the nuances that shape human interaction.


Now, let’s consider generative AI. From an analytical standpoint, these systems are the ultimate bright manager: they can juggle spreadsheets, generate dazzling presentations, and produce reams of well-structured, academically toned content. AI can demonstrate uncanny pattern recognition—say, anticipating stock market shifts or diagnosing medical conditions from X-ray images. But the crucial question is this: would you let a language model run your meeting? Would you trust it to foster team morale, sense hidden resentments, or calm the tension in a heated debate?


Probably not. Because even if it could generate convincing dialogue or mimic a pep talk, it wouldn’t feel tension, sense vulnerabilities, or know how to adapt intuitively to the swirl of human emotions in the room. Chatbots can produce text that reads like empathy, but that’s only a clever string of probabilities pulling from training data. They have no actual stake in the emotional well-being of anyone involved.


This is why, for all its computational brilliance, AI lacks the soft skills that help people navigate the complexities of social and professional life. The AI boosters—those who predict that human-level or even superhuman AI will arrive any day now—are mostly talking about a narrow slice of intelligence: the capacity to solve problems systematically, optimise decisions, or produce creative outputs from a vast data reservoir. Indeed, that’s impressive. But it’s also incomplete. It reduces intelligence to a measurable, mechanical process.


The boosters often retort that, eventually, AI will learn emotional intelligence the same way it’s learned logic. They’ll point to nascent projects that analyse facial expressions or vocal intonations. They’ll show off language models “simulating” empathy in roleplay scenarios. But a simulation is not the real deal. Even if a machine learns to identify that a user’s voice quavers with sadness, it doesn’t share in the existential experience of sorrow. It doesn’t have the lived memory of losing someone dear, nor does it abide in the same precarious condition of mortality that allows humans to connect so deeply through shared vulnerability.


Why does this matter beyond philosophical navel-gazing? Because corporate success and personal fulfillment alike hinge on precisely those intangible skills. In a world increasingly shaped by digital interactions, the tendency to conflate “book-smart intelligence” with “human intelligence” can lead us down a treacherous path. We might begin offloading tasks that require genuine sensitivity to a system that can only ape sensitivity. Or we might interpret a chatbot’s prepackaged wisdom as genuine empathy, and thus grow even more alienated in our relationships.


A telling example is social media. Mark Zuckerberg once promised that connecting billions of people on a single platform would bring them closer. The data-driven approach “worked” to keep eyes glued to the screen, but the deeper result was a surge in polarised discourse, misinformation, and mental health challenges. The platform lacked the nuance of real human facilitation—no capacity to read the emotional climate or foster authentic, mutual understanding. It optimised only for “engagement,” not for emotional or social well-being.


So here we are, again on the cusp of a new wave of AI mania, with super-smart systems that can pass advanced math tests and write plausible essays. The question is whether we’ll remember the lesson we collectively understand in everyday life: that intelligence is not the same as wisdom, or empathy, or moral discernment. These intangible qualities can’t be replaced by an algorithm, no matter how elegantly it handles data. To forget that is to invite more illusions—the kind that lead to social and personal harm despite glossy short-term gains.


PART III: THE HIDDEN WORLDVIEW—TECHNOCRATIC NEOLIBERALISM AND “WHAT WORKS”

Why, then, do we keep falling for the hype? Why do we let tech luminaries assure us that “super intelligence” is on the horizon, and that this will solve everything from climate change to loneliness, while we quietly ignore the gaping holes in AI’s grasp of empathy and moral awareness?


Part of the answer lies in the deep currents of our present era. We inhabit a society shaped by what many call “technocratic neoliberalism”—a worldview that prizes efficiency, productivity, and market-driven solutions above all else. Over the last half-century, global culture has leaned heavily into the belief that private enterprises, guided by data and expertise, can optimise nearly every domain of human life. This perspective aligns beautifully with the “what works” mindset.


In that mindset, the measuring rod for success is functional performance. Can this system produce results? Can it scale? Can it deliver returns on investment or reduce costs? If so, it’s considered an unambiguous good. The “soft” stuff—human connection, moral depth, spiritual longing—doesn’t factor easily into the profit-and-loss statement. Therefore, it’s relegated to the realm of personal hobbies or intangible ideals. The world keeps spinning on the axis of “what works.”


AI emerges as the crowning achievement of this paradigm. It’s data-driven, it’s efficient, it’s an optimisation engine on steroids. If the goal is to boost productivity, AI is unstoppable. If we want to glean insights from massive data sets—be they consumer behaviours or genomic sequences—AI can see patterns we’d otherwise miss. This synergy between AI and the technocratic worldview is why so many venture capitalists, CEOs, and even government leaders gush about generative AI’s potential. It slots perfectly into the “what works” framework.


But here’s the rub: “what works” is not necessarily what matters. Productivity and profit are not synonyms for a meaningful life, nor do they guarantee the health of societies. Indeed, as the social media experiment taught us, tools optimised for certain kinds of engagement can produce unforeseen cultural and psychological damage. The same risk looms larger with AI, because its capabilities are far more expansive than simply showing us cat videos and vacation photos.


For all their brilliance, AI researchers who fixate on building increasingly powerful systems rarely address the deeper question: Powerful in service of what? Indeed, many of them are unaware of how thoroughly their worldview is shaped by technocratic neoliberal assumptions. They measure success by the ability of a machine to perform tasks that either generate revenue or push the boundaries of what they consider “intellectual breakthroughs.” They rarely step back to ask whether the tasks themselves align with deeper human needs: the sense of belonging, the yearning for purpose, the moral imperative to protect vulnerable communities, or the interplay of love and grief that defines our days.


This unexamined worldview underlies the widespread fascination with “super intelligence.” Figures like Ray Kurzweil foresee a “Singularity,” a moment when AI transcends human intellect and presumably ushers in a new era of techno-salvation. Sam Altman and Demis Hassabis speak earnestly about AGI “solving intelligence,” as though intelligence were purely a puzzle of computational complexity. In their zeal, these luminaries conflate intellectual horsepower with wisdom, ignoring that wisdom has always been about ends rather than means. A super-intelligent AI can find means to a goal, but it can’t tell us which goals are worth pursuing. It can reconfigure the world but can’t advise us on why or whether we should.


Look deeper into these statements, and you notice that they implicitly champion a particular social order: one where skill, data, and algorithmic might shape the future, and where the intangible dimensions of human life are secondary—or expected to somehow materialise as a byproduct. This, in a nutshell, is the essence of the “what works” mindset. The “willing stooges” of technocratic neoliberalism, as some critics put it, aren’t necessarily bad people; they’re often well-intentioned innovators. But they have internalised a worldview so thoroughly that they no longer see its boundaries. They assume “progress” is measured by how effectively we harness technology, not by how deeply we connect with one another or how wisely we steward our planet.


And so the public discourse about AI remains skewed. We hear about breakthroughs in language modelling but not about how to foster human empathy in a world of automated chat. We debate whether AI can replace certain jobs—usually focusing on productivity metrics—without questioning whether those jobs deliver intangible forms of community, purpose, or identity. We marvel at AI’s capacity to pass standardised tests without noting that such tests themselves are relics of a narrow notion of intelligence.


All the while, the world is losing sight of the intangible virtues that don’t show up in the code or in a CEO’s earnings report. Qualities like empathy, love, and moral courage—what truly matters in human affairs—are sidelined as “emotional” or “subjective.” Yet these qualities shape the destinies of individuals, families, and entire civilizations. They are the bedrock of stable communities and flourishing relationships, the guiding lights that help us navigate the darkest nights. And they’re precisely the facets AI cannot replicate, no matter how sophisticated it becomes in raw processing power.


PART IV: RECLAIMING WHAT MATTERS—AND REFRAMING THE AI DEBATE

In a calmer, more reflective universe, we’d start our assessment of AI not with the question, “What amazing tasks can it do?” but rather, “How can this technology serve the deeper ends that give human life meaning?” For instance, if emotional well-being, social connection, and moral wisdom are crucial to living well, how might AI support rather than replace these qualities?


One answer might be that AI can take on some mundane or repetitive tasks, freeing humans to cultivate relationships, practice creativity, and invest time in moral reflection. In that scenario, “what works” is subordinate to “what matters”: the technology is a tool for enabling more authentic human flourishing. But that vision demands intentional design decisions and robust public dialogue. It cannot simply be left to market forces.


A further challenge is that the media environment loves a good hype story. “AI passes the bar exam!” or “AI masters new scientific puzzle!” are headlines that garner clicks and convey excitement. More nuanced points—like how AI might inadvertently erode empathy or feed into existing systems of inequality—don’t translate into a glossy PR pitch. To resist this shallow discourse, we need journalists, educators, and policymakers who can dissect the assumptions driving AI coverage and highlight the intangible aspects of intelligence that remain outside AI’s purview.


The Difference Between “Knowing About” and “Knowing”

One central point that underscores AI’s limitations is the gap between “knowing about something” and “knowing it.” A language model can read the entire corpus of philosophical and religious texts on love, but it doesn’t know love in the visceral sense of holding a newborn child, or in the tortured heartbreak of losing a lifelong partner. That experiential dimension—call it embodiment, call it consciousness—shapes human perspectives in ways that can’t be learned by devouring data.


When Mustafa Suleyman (co-founder of DeepMind) or other AI influencers speak about AI and empathy, they often describe how machines can be trained to detect emotional states or to respond with empathy-like messages. But that’s akin to reading a script about grief versus actually grieving. It’s a chasm so large that it might never be bridged by computational means alone. Pretending it’s a minor detail is precisely how illusions about “superintelligence” gain traction.


The Stakes Are High

We only need to recall the social media saga to see how naive enthusiasm can blind us to a technology’s larger societal impact. Mark Zuckerberg genuinely believed Facebook would bring people together, only to see it become a driver of polarisation, misinformation, and mental-health crises. Similarly, if AI systems that can pass advanced exams start displacing human experts in certain domains, we might gain efficiency but lose the human touch that transforms knowledge into wisdom. We risk letting crucial decisions about ethics, compassion, and justice slip into a black box of algorithmic optimisation.


A hyper-technocratic world might solve certain tasks elegantly, but it could also diminish the richness of human contact and moral striving. The challenge is to see these trade-offs clearly—without the distortions of hype—and choose a path that preserves the “soft” aspects of intelligence. If we outsource all decision-making to systems that handle data more deftly than we do, we might inadvertently forfeit our own ability to wrestle with moral ambiguities, face the raw vulnerability of being human, and cultivate the empathy that underpins genuine community.


Bringing It All Together

So how do we reframe the AI debate to emphasise what truly matters?


1. Elevate the “Soft” in the Conversation

We need a shift in the popular imagination around intelligence. Every time a headline trumpets a new AI milestone, we should ask: “What aspect of intelligence does this milestone represent?” Is it purely computational, or does it involve genuine interpersonal nuance, empathy, or moral reflection? By making that distinction explicit, we counter the tendency to treat all intelligence as fungible.

2. Include Diverse Voices

AI should not be the exclusive domain of data scientists and venture capitalists. Philosophers, sociologists, artists, spiritual leaders, and ethicists all have perspectives on human intelligence and flourishing that can broaden the technology’s aims beyond “what works.” Instead of tacking on an “ethics committee” at the end, we can involve such voices at the inception of AI projects—when the goals and metrics of success are still malleable.

3. Encourage Transparency and Accountability

If big tech companies or government labs are pushing AI forward, they must be open about their assumptions and objectives. A big part of the problem with social media’s trajectory was how little everyday users knew (or understood) about the platforms’ algorithms. The more we publicly scrutinise AI’s design and constraints, the less likely we are to be blindsided by unintended consequences.

4. Redefine Progress

We need cultural guardrails that remind us: progress is not measured solely by computational power or GDP. Progress must also be measured by improvements in overall well-being, mental health, social trust, and moral depth. If AI can’t bolster these dimensions—or worse, if it undermines them—then it’s not real progress.

5. Cultivate Human Resilience

Finally, let’s remember that technology doesn’t exist in a vacuum. Even if AI becomes a fixture in every sphere of life, humans still shape social norms and personal choices. Education systems can emphasise emotional intelligence, empathy training, critical thinking, and the art of real dialogue—skills that AI cannot replicate. By nurturing these human capabilities, we guard against a future where “what works” steamrolls “what matters.”


Looking Ahead

At this juncture, it’s tempting to ask: Should we slow AI development? Should we ban certain applications? Those are debates worth having. But perhaps a more immediate step is to simply recognise what AI is—and what it isn’t. The technology is extraordinary at processing data and solving well-defined problems. It is far less capable in the domain of moral imagination, empathy, and genuine human connection.


We can and should harness AI’s strengths—medical breakthroughs, climate modelling, scientific discovery—while being vigilant about its limitations. That means challenging the persistent myth that “book smarts” and “true intelligence” are one and the same. We must remember that, in the real world, people with average IQs but high emotional intelligence can outperform geniuses who lack basic interpersonal skills. The same principle applies to AI: it’s exceptional at a certain type of thinking, but that alone does not constitute the fullness of intelligence, let alone wisdom.


In short, AI might be “what works” for many tasks, but humanity thrives on “what matters.” Those intangible, delicate dimensions—love, compassion, empathy, moral courage, authentic connection—are the cornerstones of our shared humanity. And if we allow the current AI discourse to sideline them, we risk losing something far more precious than a fleeting competitive advantage. We risk losing the very qualities that make life worthwhile.


EPILOGUE: KNOWING WHAT WE TRULY VALUE

When we tune into mainstream media discussions about AI, the rhetoric often frames the debate as one of excitement versus caution—engineers marvelling at possibilities while sceptics warn of job displacement or dystopian outcomes. Rarely do we see a conversation that reframes “intelligence” itself as something more than computational might. That’s the key oversight.


If we fail to unmask the assumptions embedded in the AI hype—assumptions birthed by a technocratic neoliberal culture that prizes efficiency and profit above all else—we’ll continue to chase illusions. We’ll celebrate each AI milestone as though it were the next step toward a machine that “knows everything,” forgetting that life’s greatest joys and sorrows arise from experiences no machine can share. Knowing about empathy isn’t the same as knowing empathy in the crucible of real relationships. And that, ultimately, is where the line is drawn between “what works” and “what matters.”


In the end, we may find ourselves at a crossroads: Will we continue to measure intelligence by the narrow yardstick of IQ-style problem-solving, letting AI overshadow the intangible yet crucial aspects of humanity? Or will we reclaim a more expansive definition of intelligence that includes the emotional, moral, and existential depth that makes life meaningful? The choice is ours—and how we answer will shape our shared future in the age of AI.

 
 
 
