Rooms Without Mirrors
- owenwhite
- Sep 20
- 16 min read
Updated: Sep 21

Part I — What’s Water?
There’s a well-known story from the American writer David Foster Wallace. Two young fish are swimming along when they meet an older fish going the other way. The older fish nods at them and says, “Morning boys, how’s the water?” The two young fish swim on a bit, and then one looks at the other and asks, “What the hell is water?”
I love this parable. The most pervasive realities are the ones we seldom notice. We live inside them. We assume them.
Public life in the early twenty-first century swims in its own water. Call it neoliberalism, technocracy, behaviourism, scientism. Four overlapping doctrines that feel less like ideologies than like common sense. We breathe them in, pass them on, and forget that they are only one set of pictures among many that could be drawn.
Let's start by looking inside the fish tank.
On a wet Tuesday morning a dozen well-meaning people gather in a government building to “fix” something: hospital delays, school outcomes, crime in city centres. The meeting has everything modern problem-solving requires. There are dashboards and charts. There is a PowerPoint with a traffic-light schema and a target dated to the quarter. There is a line about “value for money” and a slide about “evidence-based policy.” There is also something missing, though no one notices, because rooms like this seldom have mirrors. Nobody is invited to look at the assumptions on their faces.
It isn’t a conspiracy. It’s closer to the way fish probably don't know they are swimming in water. Policy people, economists, consultants, philanthropists, tech founders—most of us—inherit a background picture of how the world works and what people are like. We treat the picture as the world. We do not pause to ask where the picture came from, what it leaves out, or what happens to the humans who don’t fit inside its frame.
I'm not entirely sure what to call the water of modern life, but let's call this background picture the modern doctrine of technique. Its family includes neoliberalism, technocracy, behaviourism and scientism. They differ in accent and temperament, but they share a set of quiet convictions: that people are best understood as calculative individuals; that problems yield to analysis, method and technical expertise; that numbers tell the truth and what cannot be counted is, at best, sentimental; that systems are controllable from the centre if we pick the right levers; that values are private decorations hung after the engineering is done.
These convictions slipped into public life the way water fills a basin. Thatcher and Reagan did much of the pouring, and they drew their water from economists such as Hayek and Friedman. They shaped the post-1980s settlement in which markets and market-like incentives became the preferred tools for organising common life; in which experts spoke the language of models and metrics; in which politics styled itself as delivery. They also fashioned a habit of mind that grew suspicious of history and blind to context. If the method is right, the setting shouldn’t matter. If the data is clean, the decision is neutral. If the model is elegant, the world should comply.
Reality, alas, is not neutral or compliant. It is thick with history. It is made of people who love and fear, who imitate and resist, who are loyal to tribes and stories and memories. It is an ecology of feedback loops in which small acts cascade and apparently rational incentives backfire. In other words, reality is complex.
You can see the collision everywhere. A hospital “efficiency” drive trims seemingly redundant posts and squeezes idle time from staff rotas, raising productivity on the spreadsheet, then winter comes and the system has no slack; resilience, which looked like waste, has been hollowed out. A school chases a league-table boost by narrowing the curriculum to what is measured; grades rise while the appetite to learn grinds down to dust. A policing unit is handed a detection target; the easiest arrests multiply, while the hard cases—the ones community trust depends on—age out of sight. The target is met. The public sense things are worse.
It is tempting, at this point, to present an alternative doctrine. But replacing one totalising scheme with another simply repeats the error. The more human move is lower to the ground and older in spirit: learn to see your assumptions before you act. Hold them up to the light. In rooms without mirrors, install mirrors.
So before we argue about policy or “what works,” I'd like to surface the shared picture we carry in our heads. Four clusters of assumptions do most of the work: assumptions about persons, about systems, about knowledge, and about value. If we don’t interrogate these, we will go on repeating the same mistakes with greater efficiency.
Consider this your assumption audit. Not a manifesto. More a way of noticing the water.
Part II — The Human Picture: From Homo Economicus to Human Beings
I think the first and most consequential assumption is about what a person is. Neoliberalism paints the human being as a sovereign calculator: a chooser with stable preferences, a shopper in a cosmic supermarket. Behaviourism offers a cousin to this picture: a creature of stimulus and response whose behaviour can be shaped by cues and rewards. Technocracy inherits both: if you can model preference and design incentives—or nudge the stimulus—you can steer the crowd. The policy task becomes an engineering problem: optimise the choice architecture; align rewards; measure outputs.
There is a surface plausibility here. People do respond to prices and incentives. Habits can be nudged. If your goal is to sell sneakers or increase organ-donor sign-ups, these tools have their place. But as a picture of human agency, it is radically thin. It ignores how we become the kind of people who want what we want. It treats preferences as given, not grown. It reduces attention, character and relationship to noise in the model.
Complexity thinking does not romanticise people, but it does treat us as situated: embodied, social, imitative, status-seeking, story-driven. What we do depends heavily on what people like us appear to be doing now. Norms and networks outrun incentives. Emotions recruit reason after the fact to justify decisions already made under the skin. In such a world, policy that treats citizens as isolated utility-maximisers misfires. The behaviourist fix—“just add nudges”—does not repair the picture; it entrenches it. A nudge assumes we know the direction of the good independent of dialogue, and it turns the citizen into an object to be acted on.
Certain traditions in moral philosophy can be helpful here. Iris Murdoch described morality as an achievement of attention rather than a sequence of choices made by a sovereign chooser. We learn to see others, and ourselves, more truthfully; freedom is not simply selecting from a menu, but growing the kind of vision that recognises the good in context. That is a profoundly different anthropology from the one neoliberalism and behaviourism share. It tells us that better health, education or climate action are not only matters of optimised incentives but of cultivated vision—practices, institutions and narratives that enlarge what people notice and care about together.
You could call this humanism, but it is also plain empiricism in the older sense. Watch what humans are like in the wild. We don’t live as solitary calculators. We live in webs of mutual recognition and mimicry. We don’t simply “have” values; we are formed by families, schools, congregations, unions, teams. Our agency is real and limited and supported by culture. When policy neglects that ecology, it defaults to manipulation: tweak the dial and wait for the graph to move. When it honours that ecology, it works with communities to co-create the conditions for wiser action.
If you change your human picture, everything else begins to move. Targets look less like the point and more like a partial feedback signal. Participation looks less like a box to tick and more like the heart of the work. And success starts to sound like a richer sentence than “metric up, cost down.”
So here's the question to ask yourself: Do I imagine people as isolated choosers to be steered, or as relational agents to be engaged? I think your answer will silently shape everything that follows.
Part III — The System Picture: From Clocks to Weather
The second assumption hides in the way we picture systems. Neoliberalism borrowed from mid-century economics a fondness for equilibrium: markets, left alone, tend toward order. Technocracy imports a mechanical image: if you can map the parts and set the parameters, complex wholes will behave. Scientism provides the final gloss: there is one proper method—the method of natural science—for knowing how the world works; find the law and drive the system to its optimum.
I don't think these images are wicked; I just think they are misapplied. In contexts where a system is stable and decomposable—industrial production lines, for instance—this way of thinking delivers miracles. The trouble begins when we carry the same picture into human ecosystems.
Complexity science offers a humbler image. Human systems are like weather, not like clocks. Causes are entangled; small actions cascade; patterns emerge that you can see and shape but not control. Dave Snowden’s Cynefin framework is one practical expression of this. It distinguishes the simple (cause and effect obvious), the complicated (cause and effect discoverable by expertise and analysis), the complex (cause and effect only clear in retrospect), and the chaotic (no discernible relation yet). Most public problems—poverty, addiction, attainment gaps, public safety—live in the complex domain. Approaching them as if they were merely complicated tempts us to hunt for a best practice and roll it out at scale. The right move is different: probe, sense, respond. Run multiple small, safe-to-fail experiments. Watch for emergent patterns. Amplify what works here; dampen what doesn’t. Build enabling constraints that invite desirable behaviour without pretending to script it.
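To make “probe, sense, respond” a little more concrete, here is a toy sketch of a portfolio of safe-to-fail experiments with pre-agreed amplify/dampen rules. It is purely illustrative: the names (Probe, the thresholds) and the numbers are my own assumptions, not part of the Cynefin framework or any real policy tool.

```python
# A toy sketch of "probe, sense, respond" as portfolio management.
# Illustrative only: names and thresholds are assumptions, not Cynefin itself.
from dataclasses import dataclass, field

@dataclass
class Probe:
    """One small, safe-to-fail experiment."""
    name: str
    budget: float                      # kept small so failure is affordable
    observations: list = field(default_factory=list)

    def sense(self, signal: float) -> None:
        """Record what actually happened; in the complex domain we cannot predict it."""
        self.observations.append(signal)

    def respond(self, amplify_above: float = 0.6, dampen_below: float = 0.2) -> str:
        """Decide in retrospect: amplify what is working, dampen what is not."""
        if not self.observations:
            return "keep probing"
        avg = sum(self.observations) / len(self.observations)
        if avg >= amplify_above:
            return "amplify"           # grow the budget, extend the constraint
        if avg <= dampen_below:
            return "dampen"            # wind down, harvest the learning
        return "keep probing"          # the pattern has not yet emerged

# Run several probes in parallel rather than one flagship solution.
portfolio = [Probe("peer mentoring", 5_000), Probe("home visits", 5_000)]
portfolio[0].sense(0.7)
portfolio[1].sense(0.1)
for probe in portfolio:
    print(probe.name, "->", probe.respond())
```

The point of the sketch is only the shape of the loop: many small bets, signals observed rather than predicted, and the response decided after the fact rather than scripted in advance.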
Once you see the systems picture, many policy pathologies make sudden sense. “Efficiency savings” that strip redundancy out of hospitals or power grids look smart in isolation and short time horizons. In complex systems, redundancy is another name for resilience—the capacity to absorb shocks without collapse. Likewise the obsession with standardisation: uniform protocols ease oversight but reduce diversity, which is the raw material for adaptation. The more you optimise a network for a single metric, the more brittle it becomes.
Markets themselves display these dynamics. Deregulation was sold on the assumption that dispersing decision-making would reduce risk. But financial markets are networks of imitation as well as calculation; leverage and information cascades create tight couplings; the optimisation of return under thin capital buffers erodes system slack. Fragility follows. Equilibrium thinking misses this because it treats risks as independent and assumes shocks fade. Complex systems hoard surprises.
Technology intensifies both sides of the story. Dashboards and real-time telemetry can widen our peripheral vision if we use them as instruments of sensemaking. Too often we reverse the order: we contort reality to match the dashboard. A city opens a new analytics centre for policing or social care. We celebrate the map and stop walking the streets. The map becomes a blindfold.
What does a better systems picture demand? Three habits. First, distinguish domain: simple, complicated, complex, chaotic. Resist the one-size-fits-all playbook. Second, prefer portfolio approaches to flagship solutions. Spread bets, learn quickly, keep options open. Third, design for reversibility: when you act under uncertainty, choose moves you can back out of without catastrophic cost—what the mountaineers call “climbing with three points of contact.”
So here's the second question to ask yourself: Do I imagine the world as an engine to be tuned, or as an ecology to be stewarded? I suspect if you pick engine, you will look for the right spanner. If you pick ecology, you will look for gardeners, paths, hedges, and weather forecasts.
Part IV — The Knowledge Picture: What Counts as Evidence?
The third assumption hides in our epistemology: what we count as knowing. Modern public life is fluent in numbers. We have never had so much data, and much of it is genuinely good. The danger isn’t measurement itself; it’s the silent belief that only measurement is knowledge and that quantified knowledge is neutral.
Neoliberalism was delighted to inherit this creed. It helped redraw arguments about the common good as arguments about growth rates and productivity curves. Scientism—the overreach of scientific method into domains it cannot govern—gave the creed its gravitas. Behaviourism chipped in with the promise that with enough data you can predict and shape human conduct. Technocracy fused the lot into a politics of benchmarking. If you can’t show it on a chart, don’t bring it to the table.
But charts are not omniscient. Numbers are theories with decimal points. Every metric packages a choice about what matters, and those choices carry values. When a school system prizes exam scores, it demotes untested goods—confidence, curiosity, community ties. When a health service chases the four-hour target, it pushes staff to move bodies faster, even if the right care takes longer. When welfare policy obsesses over fraud prevention because that is visible, it neglects the harder task of dignity preservation because that resists counting. In all three cases the number becomes a magnet for attention, and attention is what we live by.
A richer knowledge picture has at least three planks. First, plural methods: quant and qual, RCTs where they fit, ethnography where they don’t, historical sensibility always. Second, proportion: treat statistical significance with humility and practical significance with care; learn to ask “so what?” before declaring victory. Third, interpretation as a craft: sensemaking in groups that include insiders and outsiders, users and providers, frontline workers and analysts. This is where complexity and humanism meet: the recognition that understanding emerges from conversation across perspectives, not from a single privileged view from nowhere.
We also need to remember what philosophers from Plato to Murdoch and Charles Taylor have argued: vision can be educated. Attention is moral muscle. The more we exercise it on the right objects—the lives of those we serve, the texture of actual practice, the costs paid by the least advantaged—the better our judgement becomes. This is not a rejection of science; it is a refusal of scientism. It is to place science in conversation with history, literature, law, and lived experience; to hold the measured and the meaningful together.
So here's the third question to ask yourself: Do I treat numbers as the truth and stories as decoration, or do I treat both as kinds of evidence that correct each other’s blindness? Again, I suspect your stance will decide who gets a hearing and, ultimately, who gets served.
Part V — The Value Picture: What Are We Optimising For?
The final assumption is the one we most often deny we are making. What are we optimising for? Neoliberalism’s answer—efficiency and growth—sounds technical, even natural. But it is not neutral. Efficiency for whom, growth of what, at what cost, for how long? These are ethical questions disguised as engineering.
This is where technocracy is most seductive. It offers a release from the messy work of democratic judgement: pick an output, find a proxy, drive the metric. In my experience, anyone who raises questions about what the proxy leaves out is accused—gently—of being “unrealistic” or of overcomplicating things. The polite fiction is that ends are obvious; only means require expertise. The reality is the other way up. As soon as you choose a metric you choose a moral: what will be seen and what will be left out.
Because values are unavoidable, the only grown-up move is to bring them into the open. Name the ends and argue about them. When we do, we discover there are always many. Some may insist that schools are fundamentally about test scores and about preparing young people to make the national economy competitive. Fine. But is that sufficient? A school is a place to transmit knowledge, cultivate character, raise citizens, and shelter the vulnerable—at least. So let's argue, in the open, about which of those purposes we want to serve.
In complex systems, wise ends are not lofty slogans but enabling constraints: clear statements of purpose that narrow the field enough to guide action while leaving room for judgement. “Every child known by an adult who notices them.” “No one dies alone.” “Streets where a ten-year-old can walk to school.” These are not measurable in a single metric, but they are actionable. They invite plural measures and a culture of practical wisdom.
There is a political bite here. The technocratic reflex treats disagreement about ends as a failure to be ironed out by more data. A democratic reflex treats disagreement as the point: the way we discover, together, what matters in this place, now. That is slow work. It can be infuriating. It is also how free people make peace with reality and with each other.
So here's the fourth question to ask yourself: Do I believe that ends are obvious and private, or that they are contested and must be articulated in public? If you choose the latter, you will need thicker institutions and a better class of meeting—ones with mirrors.
Part VI — From Audit to Action: Practising Post-Neoliberal Sensemaking
Auditing assumptions is not an academic parlour game. It is a change of posture. Here is how it moves from page to practice.
1) Start with an assumption brief. Before you launch a programme or a product, write one page titled “What we think is true.” Include your human picture (how we think people decide here), your system picture (what domain we are in—simple, complicated, complex, chaotic), your knowledge picture (what kinds of evidence we will accept), and your value picture (what purposes we will guard). Make this public inside your team. Invite critique. A minimal template is sketched just after this list.
2) Design portfolios, not silver bullets. If your problem lives in the complex domain, run a portfolio of safe-to-fail experiments. Vary the approach, context and actors. Decide beforehand the signs that will tell you to amplify, dampen or kill each strand. Build time in for learning. Reward the teams that kill their own darlings when reality teaches them to.
3) Build enabling constraints. Replace over-specification with clear, value-laden boundaries. In healthcare: “We will not trade staff well-being for short-term throughput.” In education: “We will not pursue gains that require narrowing the experience that makes school humane.” In climate: “We will not shift costs onto those with the least capacity to pay.”
4) Cultivate attention. Borrow from Murdoch here. Make space for practices that re-train vision: shadow frontline workers; hold listening sessions with those whose lives the policy touches; read a case study aloud in the meeting where the budget is set. This is not sentimentality. It calibrates judgement.
5) Treat numbers as servants. Use metrics as feedback, not as aims. Rotate what you measure to prevent gaming. Pair every quantitative indicator with a qualitative check. If your dashboard looks perfect and your staff look broken, the dashboard is lying.
6) Keep slack on purpose. Build redundancy into systems you cannot afford to see fail. Measure resilience explicitly: how quickly can we recover from a shock; how many points of failure have been eliminated by “efficiency”; where is our optionality? If the answer is “we’ve squeezed all the give out,” you are not efficient; you are brittle.
7) Make reversibility a design criterion. Prefer interventions you can unwind. Commit hard where the evidence is strong and the stakes require it; elsewhere, keep your weight over your feet.
8) Re-embed markets. Markets are astonishing tools. They are also social institutions that depend on norms they cannot produce—trust, honesty, a rough sense of fairness. Treat them as means, not masters. Price signals help, but they cannot answer the question “what kind of life together do we seek?”
9) Tell thicker stories. Humans change when stories do. Replace the tale of the isolated chooser with stories of mutual recognition and real competence. Celebrate practitioners who exercise judgement under pressure, not just managers who move numbers.
10) Keep the mirror in the room. End meetings with the question, “What assumptions did we just act from? Which ones served us? Which ones shrank our view?” The first few times will feel awkward. Then it will feel like oxygen.
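As promised in step 1, here is a minimal sketch of the assumption brief expressed as a data structure, so a team could version it, diff it, and invite critique on it like any other artefact. The field names and example entries are my own assumptions, offered only as a starting template, not a prescribed format.

```python
# A minimal sketch of the one-page "assumption brief" from step 1.
# Field names and example entries are assumptions: a starting template only.
from dataclasses import dataclass

@dataclass
class AssumptionBrief:
    human_picture: str      # how we think people decide here
    system_picture: str     # simple, complicated, complex, or chaotic
    knowledge_picture: str  # what kinds of evidence we will accept
    value_picture: str      # the purposes we will guard

    def as_page(self) -> str:
        """Render the brief as the one page titled 'What we think is true'."""
        return "\n".join([
            "What we think is true",
            f"People: {self.human_picture}",
            f"System: {self.system_picture}",
            f"Knowledge: {self.knowledge_picture}",
            f"Ends: {self.value_picture}",
        ])

brief = AssumptionBrief(
    human_picture="social and situated, not isolated calculators",
    system_picture="complex: cause and effect clear only in retrospect",
    knowledge_picture="plural: metrics paired with qualitative checks",
    value_picture="no one left behind; resilience over short-term optics",
)
print(brief.as_page())
```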
And here we can borrow from organisational scholar Chris Argyris, who distinguished between single-loop learning—adjusting tactics without questioning the governing variables—and double-loop learning—surfacing and testing the assumptions beneath our actions. What this article calls an assumption audit is simply double-loop learning at the scale of public life. It is the recognition that without mirrors, without deliberate reflection on what we take for granted, we will remain stuck in the same water, swimming harder but never seeing where we are.
Part VII — A Different Kind of Expertise
If you have read this far you may hear a charge forming: isn’t this just anti-science romanticism dressed as systems talk? My answer is No, it isn’t. Rather, it is a plea to put disciplines in their proper places. The natural sciences remain our best tools for understanding repeatable causal structures. They are less good at telling us what to love, what to prioritise, how to govern trade-offs, or how to act under deep uncertainty in the company of other free people. That is not a critique; it is a boundary. Cross it without care and you do damage.
The point I am trying to make is that we need to develop a kind of expertise that is more than technical expertise. It's human expertise: the ability to make good judgements in challenging contexts. Aristotle called this practical wisdom (phronesis, in the Greek), and it is grounded in experience of the world that enables the practically wise person to "read the room" and to discern the fitting thing to do here and now. You cannot standardise practical wisdom. But you can cultivate it: by apprenticing novices to masters; by practising in communities that honour truthfulness over face-saving; by rewarding candour when an approach fails; by making room for context in a culture addicted to general rules. You will know you are on the way when your senior people are prized not for having the right answer but for asking the right question at the right time.
If you want a picture, return to the room where we began, the one with the dashboard and the target. Imagine a mirror installed on the far wall. Imagine, too, a second screen, not for KPIs but for stories: a circulating feed of short vignettes from patients, pupils, officers on the beat; fieldnotes from pilots that were wound down and the reasons why; quick sketches of what surprised teams this week. Picture the chair of the meeting asking, before the numbers roll, “What are we assuming about people? About the system we’re in? About the kind of knowing we trust? About the ends that justify these means?” Watch how the tone of the room changes. Watch how the decisions change.
Part VIII — The Work After the Audit
Neoliberalism did not fail because markets are evil or because numbers lie. It failed because we mistook a part for the whole and a method for a morality. Behaviourism did not fail because habits don’t matter; it failed when it pretended that human beings are nothing but habits. Technocracy failed where it treated disagreement about ends as an embarrassment rather than the business of politics. Scientism failed by forgetting that science is a glorious tool housed in a human world where love, loss, loyalty and meaning are not noise but the signal.
We will go on failing, and faster, if we don’t learn to see the water we swim in. The good news is that learning to see is something humans can do. We can put mirrors in our rooms. We can change the pictures we carry of persons, systems, knowledge and value. We can keep markets and metrics and still say out loud the things that matter but refuse to sit still for measurement: dignity, belonging, beauty, honour, mercy.
The young fish in Wallace’s story asked, “What the hell is water?” Our task is to ask the same of ourselves, not once but habitually. The water we swim in—neoliberalism, technocracy, behaviourism, scientism—is not destiny. It is a picture, a set of assumptions. And once we see them, we can choose differently.
A final image. On another wet Tuesday, in a different building, a team starts its assumption brief. They write: “People are social and situated.” They write: “This problem is complex.” They write: “Evidence will be plural.” They write: “Our ends are: no one left behind; long-term resilience over short-term optics; learning faster than the problem changes.” When they finish, they turn to the work. It will be slower in some ways and quicker in others. There will be fewer grand launches and more small, honest adjustments. Fewer certainties, more competence. The graphs may rise more gently; the lives behind them may feel less like they are being managed and more like they are being served.
That is not an ideology. It is a posture: modest about method, ambitious about meaning. And it begins with the question that rooms without mirrors never ask: What are we assuming?


