
Appearances

  • owenwhite
  • Jul 17
  • 7 min read

The Human Hunch

You know the feeling. Someone says all the right things, but something in you tugs: Do they actually care—or are they playing at caring? We live inside that question. Our feeds are jammed with polished faces, curated outrage, heartfelt brand videos, “authentic” influencer confessions, customer-care scripts that sound warmer than the humans reading them, and now hyper-fluent AI systems that can mirror our moods without feeling a thing. Plato saw this predicament coming. In The Republic, he stages it as a stark image: people chained in a cave, mistaking shadows for what’s real. The scene is not antiquarian; it’s a user manual for life in a media-saturated, manipulation-prone society. Plato’s core worry was never abstract metaphysics for its own sake; it was whether we can tell the difference between appearance and reality—and whether we care enough to keep turning toward what’s true.

 

A Walk Through the Cave (Film It in Your Mind)

Picture an underground chamber. Prisoners have been there since childhood. Their legs and necks are fixed so they can look only forward, toward a blank wall. Behind and above them burns a fire. Between fire and prisoners runs an elevated path with a low parapet—think of the stage edge in a puppet theatre. Along that path people carry statues, cutouts, household objects; some speak, some stay silent. The fire throws moving silhouettes onto the wall the prisoners face. Echoes bounce off stone, so the voices seem to come from the shadows. For these prisoners, the shadows are the world. One is released. Forced to turn, he’s dazzled by firelight; the props look crude; the pain makes him want the comfortable silhouettes back. Dragged up the rough tunnel into daylight, he stumbles through stages—shadows outside, reflections in water, things themselves, the night sky—until he can look at the sun, source of all light. He pities his former companions and goes back, only to be jeered; if the prisoners could kill the liberator, Plato says, they would. That’s the allegory. Everything that follows in this piece hangs from that sequence.

 

Mapping the set to today: The chains are habits, group loyalties, algorithmic ruts, and the small comforts that keep us facing one direction. The wall is any surface that captures our attention—phone screens, dashboards, news tickers. The fire is the back-glow of incentives: ego strokes, ad spend, attention metrics. The puppeteers are advertisers, propagandists, consultants, disinformation farms—anyone staging silhouettes to move opinion. The exit ramp is education as a turning (Plato’s periagōgē), not as data upload. And the sun—the source by which real things are seen—is Plato’s figure for the Good: that in virtue of which truth matters at all. (This interpretive mapping is contemporary; Plato himself does not say “the fire is the ego,” but the image bears the load.)

 

The Default Is Seeming

Left to itself, the human mind takes shortcuts. We believe what’s vivid, repeated, flattering, emotionally congruent with what we already want to think. A simple lie often trumps a complex truth. Psychologists and science writers have catalogued the “backfire effect,” motivated reasoning, in-group bias—the ways reason often serves our tribe or our anxieties rather than reality. Elizabeth Kolbert’s overview of this research is sobering: reason, she notes, may have evolved less to find truth than to win arguments and cement alliances, which helps explain why facts that threaten identity often bounce off.

 

Now pour technology over those tendencies. Cosmetic “beauty filters” subtly reshape faces; a Royal Society–reported study found filtered images not only boosted perceived attractiveness but also shifted judgments of traits like trustworthiness and intelligence—evidence that appearances can hijack attributions we think are about character.

 

Scale that up to social platforms whose business models reward engagement. A recent UK parliamentary inquiry into social media, misinformation, and “harmful algorithms” concluded that ad-driven recommendation systems amplified misleading and hateful content in the wake of the 2024 Southport murders; the committee warned that generative AI will supercharge the next misinformation wave unless labeling, demotion, and accountability rules are strengthened. Reporting on the same inquiry underscores how quickly a false claim—wrong name, wrong motive—rode the algorithmic current to real-world violence. If you want to see the cave wall flicker, watch a rumor go viral.
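
To make the mechanism concrete, here is a deliberately toy sketch (every name and number below is invented, and real recommender systems are vastly more elaborate): if the ranking objective is predicted engagement, and outrage reliably predicts engagement, a lurid rumor will outrank a careful correction without anyone ever choosing falsehood.

```python
# Toy sketch of an engagement-ranked feed. Hypothetical throughout; the point
# is only that an objective blind to accuracy will amplify whatever engages.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage: float   # 0..1: how inflammatory the post reads
    accuracy: float  # 0..1: how true it is (invisible to the ranker)

def predicted_engagement(post: Post) -> float:
    # Illustrative assumption: engagement tracks arousal, not accuracy.
    # Note that accuracy never enters the score at all.
    return 0.2 + 0.8 * post.outrage

def rank_feed(posts: list[Post]) -> list[Post]:
    # Rank purely by the engagement objective.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Careful correction, with sources", outrage=0.1, accuracy=0.95),
    Post("Lurid rumor naming the wrong man", outrage=0.9, accuracy=0.05),
])
for post in feed:
    print(f"{predicted_engagement(post):.2f}  {post.text}")
# The rumor prints first: the objective rewards the shadow, not the thing.
```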

 

From Seeming to Seeking: Education as a Turn

Plato refuses the idea that education is dumping data into empty heads. The power to learn, he says, is already in the soul; what matters is which way it’s turned. Real education is the craft of reorienting that inner eye from shadows toward what genuinely is—a conversion Plato calls periagōgē. This is no gentle swivel; the released prisoner recoils, blinks, staggers. But unless we undergo that turn, we remain critics of shadow-plays, never students of reality.

 

Truth, Eros, and the Pull of the Good

Why would anyone endure the glare? Because, Plato insists, the philosopher is a lover—eros aimed not at flattery but at wisdom, beauty, truth. In his dialogues on love, Plato portrays eros as an energy that begins in attraction and can be schooled upward toward what does not fade. In The Republic he ties that erotic drive directly to politics: only those who love what is, who hate falsehood, who cannot bear to live on copies, are fit to rule. The passion for truth is not a bloodless faculty; it is a longing that reorders the soul.

 

Why Stories of Fakery Grip Us

Audiences don’t throng to Shakespeare because they crave Elizabethan metaphysics; they come because the plays catch us in the act of mistaking surfaces. Macbeth opens in a sulfurous swirl—“Fair is foul, and foul is fair”—and proceeds to show how fatal it is to read ambition, prophecy, and loyalty by their sheen.

 

Jane Austen first titled Pride and Prejudice First Impressions; the novel’s comedy depends on how disastrously we code manners as sincerity and wealth as worth—and how hard-won a clearer view of character can be.

 

F. Scott Fitzgerald makes the theme explicit. Nick Carraway tells us “Jay Gatsby… sprang from his Platonic conception of himself,” a self-invented glow so compelling that whole parties fall in love with the projection, until reality—the unyielding facts of class, crime, and carelessness—shreds the scrim.

 

Arthur Miller’s Death of a Salesman brings the American Dream into the fluorescent light of a kitchen late at night: Willy Loman mistakes being liked for being loved, sales patter for substance, and credit for value; the collapse of those appearances is the tragedy.

 

We gravitate toward these works because they rehearse, again and again, Plato’s warning: lives warp when we fall for the show.

 

Appearance Machines: When AI Sounds Like It Cares

Enter the age of large language models and “personal AIs.” Inflection AI built Pi to be a warm, supportive conversational partner; its cofounder Mustafa Suleyman has spoken at length about designing for responsiveness, patience, and emotional helpfulness—a user experience tuned to feel like someone who is with you.

 

In early studies comparing chatbot replies with physicians’ answers to patient questions, licensed clinicians judged the AI responses longer, more informative—and strikingly, far more empathetic—than the human doctors’ brief notes. The finding lit headlines: had AI “solved” bedside manner? What it actually showed was that pattern-matched language can mimic caring cues at scale. The machine had no stake in your lab result; its empathy was compositional fluency.
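
A small sketch shows how little is required (the phrases and function below are invented for illustration; they are not anyone’s actual system): caring cues can be assembled from stock parts and bolted onto canned content, with no stake anywhere in the pipeline.

```python
# Warmth as a surface property: "empathy" composed from stock phrases
# around content the speaker has no stake in. Purely illustrative.

import random

OPENERS = [
    "I'm so sorry you're dealing with this.",
    "That sounds genuinely worrying, and it makes sense you'd want answers.",
    "Thank you for trusting me with something so personal.",
]
CLOSERS = [
    "You're not alone in this, and asking questions is exactly the right step.",
    "Please be kind to yourself while you wait for the results.",
]

def faux_empathetic_reply(canned_answer: str) -> str:
    # Caring cues are bolted onto canned content: compositional fluency.
    return f"{random.choice(OPENERS)} {canned_answer} {random.choice(CLOSERS)}"

print(faux_empathetic_reply(
    "An elevated A1c usually calls for a repeat test before any diagnosis."
))
```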

 

Researchers in computational linguistics have been cautious about conflating form with understanding. Emily Bender and Alexander Koller argue that fluent output need not signal grounded meaning; systems trained only on form learn distributional echoes, not shared worlds. The later “Stochastic Parrots” paper, which Bender co-authored with Timnit Gebru and colleagues, widens the warning: massive scraped corpora produce plausible text while laundering biases and fabrications. Decades earlier, John Searle’s “Chinese Room” thought experiment made the same point in a different key: symbol shuffling that passes external tests may still lack any understanding of what the symbols mean. Plato would nod: the script can’t stand in for the sun.
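
One way to feel the force of the form-versus-meaning argument is to build the crudest possible language model yourself. The bigram babbler below (a stand-in for the idea, not for any real system) learns only which word tends to follow which; its output is locally fluent and globally empty, and scale buys more fluency, not acquaintance with the world.

```python
# A minimal bigram "language model": trained on nothing but word
# co-occurrence, it emits plausible sequences with zero grounding.

import random
from collections import defaultdict

corpus = (
    "the prisoner watched the wall and the wall showed shadows and "
    "the shadows seemed real and the prisoner believed the shadows"
).split()

# Learn which words follow which: pure distributional form, nothing else.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start: str, length: int = 12) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(babble("the"))
# e.g. "the shadows seemed real and the prisoner believed the shadows and ..."
# Locally plausible; no acquaintance with prisoners, walls, or shadows.
```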

 

Why Medicine Shows the Stakes

If you’ve ever sat with a clinician who really listened, you know the bodily relief. That feeling is not placebo; studies link physician empathy to measurable outcomes. In diabetes care, patients of high-empathy physicians showed significantly better A1c and LDL control than patients of low-empathy physicians, even after adjusting for confounders. Practitioner empathy during common-cold visits predicted shorter illness duration and stronger immune markers. Trust itself—across dozens of studies in a meta-analysis—correlates with better health behaviors, symptom reports, and quality of life. When we say we care that care be real, we’re not being sentimental; our bodies cash the check.

 

Now imagine outsourcing first-line triage, counseling, or chronic-disease check-ins to systems that only sound caring. They may draft helpful scripts; they may also mask uncertainty, overstate confidence, or fail to notice when a patient is frightened rather than merely curious. The risk is not just wrong information; it’s a habituation to performed concern where relational stakes are highest. Plato’s fire warms; it also dazzles.

 

When the Shadows Talk Back

Large-scale generative systems don’t merely echo; they participate in the information ecosystem that trains the next generation of models. Analysts have flagged worrying trends: chatbots that select popular—but wrong—answers; susceptibility to groupthink; hallucinations that become training data; and adversarial floods of state-sponsored falsehoods specifically meant to “infect” AI systems.
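
A toy simulation (purely illustrative; no real training pipeline is this simple) makes the feedback loop visible: each “generation” resamples from the previous generation’s output with a slight tilt toward the already-popular wrong answer, and a mildly polluted corpus converges on confident error.

```python
# Shadows training shadows: a toy model-collapse loop. Hypothetical numbers;
# the tilt stands in for popular-but-wrong selection and hallucinations
# re-entering the training data.

import random

def train_next_generation(corpus: list[str]) -> list[str]:
    # The next "model" resamples from its predecessor's output, with a
    # 10% tilt toward the already-common wrong answer.
    wrong_share = corpus.count("wrong") / len(corpus)
    tilt = min(1.0, wrong_share * 1.1)
    return ["wrong" if random.random() < tilt else "right" for _ in corpus]

corpus = ["wrong"] * 55 + ["right"] * 45  # starts only mildly polluted
for generation in range(8):
    corpus = train_next_generation(corpus)
    # Corpus size is 100, so the count doubles as a percentage.
    print(f"generation {generation}: {corpus.count('wrong')}% wrong")
```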

 

Empirical work on model behaviour under challenge has found that models can confidently assert incorrect claims and then wobble, hedge, or “lie” when pressured—evidence of brittle internal calibration wrapped in authoritative-sounding prose.

 

Meanwhile, the synthetic media frontier races ahead. Legal scholars Danielle Citron and Robert Chesney warned years ago that deepfakes would erode epistemic trust, supercharging “truth decay” in politics, markets, and personal reputation. That caution is now policy talk: a July 2025 UN-linked report urged stronger global standards for detecting AI-driven deepfakes, citing collapsing trust in what people see online.

 

Staying Out in the Light (Without Pretending It’s Easy)

Plato doesn’t promise that once you’ve turned, you’ll never be fooled. He shows a liberated prisoner returning to the cave—and stumbling in the dark. Seeing clearly is an ongoing practice: checking echoes against sources; asking what incentives backlight the image; turning toward people, data, and experiences that can resist our wishful projections; letting others tug us when we can’t swivel ourselves. Education, in Plato’s sense, is communal vigilance in the direction of the Good.

 

If the age of AI is the age of infinite silhouettes, then the task is not to smash the projector but to re-train the eye—and to insist, in medicine, in politics, in friendship, that care is more than its syntax. Machines can help us surface patterns, draft letters, even rehearse difficult conversations. What they cannot generate is the eros that binds truth to concern: the stake a human takes in another human’s flourishing. Our job is to keep turning—together—until the light hurts a little less, and the shapes we trust have earned it.

 
 
 
