AI & the Seductive Power of Metrics
- owenwhite
- Sep 29, 2024
- 5 min read
Updated: Oct 6, 2024

1. The Seduction of the Measurable
Generative AI promises immense benefits in healthcare, education, and labor markets. Its ability to sift through vast datasets, identify patterns, and optimize processes at lightning speed is undeniably impressive. For institutions desperate to show improvement—whether hospitals or schools—AI offers a powerful tool for measuring success. On paper, it looks like progress: diagnostic rates improve, drug development accelerates, and student test scores rise. The metrics are clear and persuasive.
But there’s a deeper issue at play. In the rush to embrace what is measurable, we risk neglecting what truly matters in human experience. The problem with AI is not just that it changes what we do, but that it potentially helps us become more efficient at doing the wrong things—or more precisely, the things that matter less to real people in real-life situations.
Take my mother’s experience as an example. A number of years ago, she was admitted to a hospital widely regarded for its technological sophistication. The latest digital systems were in place, the medical equipment was state-of-the-art, and the hospital had a stellar reputation. And yet, despite all of this, she was deeply unhappy. The problem wasn’t the technology itself; it was the absence of something more fundamental: the human touch, the “care” in “healthcare.” The doctors and nurses, while competent, seemed more focused on efficiency than empathy. The care felt mechanical, the warmth lacking. My mother didn’t want cutting-edge systems; she wanted kindness, attention, and human interaction.
This experience illuminates the core issue: AI may help hospitals become more efficient, but it can also lead to a hollowing out of care, the very thing patients like my mother value most. In our technocratic obsession with progress—measured through reduced wait times, faster diagnoses, or digital precision—we overlook the qualitative, experiential aspects of care that can't be easily quantified.
2. The Blind Spots in Healthcare Transformation
AI will undoubtedly bring improvements in healthcare. It will likely sharpen diagnostic accuracy and reduce medical errors. But these advances, impressive as they are, address only part of what constitutes quality healthcare. What happens to the emotional support patients need? The empathy that comforts and reassures? These are aspects of healthcare that AI cannot touch, and yet they are often the most important to patients.
As AI becomes more embedded in healthcare systems, there’s a danger that we will focus solely on the metrics it can improve, like survival rates or the speed of service, at the expense of the human elements that make healthcare meaningful. The result? A more efficient system that feels colder, more distant, and ultimately less humane.
This is where the AI boosters and technocrats promoting healthcare transformation fall short. They focus on the explicit benefits—those that can be measured and reported in neat statistical packages—but ignore the qualitative aspects of care. AI may make hospitals better at treating diseases, but it will not make them better at caring for people. And this is not a small oversight; it’s a profound one, particularly when the experience of care, rather than the process of treatment, is what patients like my mother value most.
3. Education: The Misleading Metrics of Progress
The same dynamic is unfolding in education, where governments around the world trumpet improvements based on rising test scores. Politicians and educational leaders herald these numbers as evidence that schools are improving, that students are learning more effectively, and that education systems are becoming more successful. But as with healthcare, the metrics are misleading.
Test scores are a tempting indicator of progress. They provide clear, measurable data points that allow governments to declare success. But are they reliable indicators of the true quality of education? Do they reflect whether students are engaged, curious, and developing a lifelong love of learning? Or do they simply show that schools are becoming better at teaching to the test, focusing on the narrow criteria that get measured and rewarded?
There is a growing concern among educators that test scores are not a good proxy for genuine learning. The ability to perform well on an exam is not the same as developing critical thinking, creativity, or a passion for discovery. But because these qualities are difficult to measure, they get sidelined in favor of the easily quantifiable.
This is where the technocratic approach to education reform, driven by AI and data analytics, falls into the same trap as healthcare. It focuses on what can be measured—improved test scores—rather than what really matters in the long run. A school may produce students who perform well on standardized tests, but if it fails to ignite curiosity, foster independent thinking, or instill a love of learning, has it truly succeeded?
4. The Alienation of Phony Progress
Herein lies the crux of the issue: AI-driven systems help us become more efficient at achieving what is easily measurable, but not necessarily what is important. This leads to a disconnect between the metrics that institutions use to measure success and the actual lived experience of individuals. When healthcare systems boast of improved outcomes based on quantitative metrics, patients like my mother may still feel uncared for. When governments celebrate higher test scores, students may still feel disengaged and uninspired by their education.
This disconnect creates a kind of alienation—a sense that progress is being claimed, but not truly felt. It is a crisis of meaning that stems from our over-reliance on metrics. The metrics tell us things are getting better, but in our hearts, we sense that something essential is being lost.
In healthcare, patients might live longer, but the quality of that extended life may feel diminished by a lack of empathy and personal connection. In education, students may perform better on tests, but they may not leave school with a sense of purpose or a love for learning. This is the deeper crisis AI risks exacerbating: a world where everything appears to be improving on the surface, yet life itself feels emptier and more disconnected.
5. A Broader Vision for AI’s Role
The challenge, then, is not to reject AI, but to ensure that its deployment is aligned with what truly matters in human life. This means acknowledging that while AI can improve measurable outcomes, it is not a substitute for the qualitative aspects of healthcare, education, or any other human service. We need a broader vision of progress—one that includes both the measurable and the unmeasurable, the explicit and the implicit.
In healthcare, this means valuing the human touch alongside technological advancements. It means recognizing that empathy, communication, and personal connection are as important to healing as diagnostic accuracy and treatment efficacy. In education, it means fostering creativity, curiosity, and a love of learning, even if these qualities are harder to quantify than test scores.
Generative AI will undoubtedly continue to transform our world, but we must be careful not to let it define progress solely in terms of efficiency and quantifiable outcomes. True progress requires a balance between the measurable and the immeasurable, the technological and the human. Without this balance, we risk creating systems that deliver surface-level improvements but fail to meet the deeper needs of those they serve.
In the end, the real measure of success is not in the numbers, but in how we feel about the care we receive, the education we experience, and the lives we live.