How to understand this hidden driver of the modern world

We have a choice. We could leave our goals and values fuzzy. We can value things like wisdom, communication, friendship, or community. These are recognizable and very human values. But without further sharpening, we will probably disagree viciously about how to apply them.

Or we can make our values mechanical. The more explicit and mechanical we make our goals and values, the easier it will be to coordinate, and the easier it will be to figure out exactly how well we’ve done. Instead of aiming at health, we can aim at achieving our step-count goals. Instead of aiming at community and connection, we can aim at Likes and Follows. Instead of aiming at educating our students for wisdom and reflection, we can aim at standardized test scores, fast graduation rates, and higher salaries.

Yet it can also feel like these mechanical values are systematically missing out on something else, something crucial — something hard to name but absolutely essential to human life. There is a gap between what’s easy to count and what’s really important.

To get a clearer grip on what that something is, we need to understand what happens when we trade fuzzy values for mechanical values. Mechanical values — and the mechanical rules at their heart — are one of the most important hidden drivers of the modern world.

The upside to mechanical values is that they’re easy to apply. It’s very hard to agree with other people about what counts as a full life, as great art, or as a soul-nourishing vocation. But it’s easy to agree about what leads to statistically longer lifespans, more page views and engagement hours, or more money. When we turn our values mechanical, we make it easy to agree on who did better. We can compare our achievements instantly, and automatically; there is no arguing about which post got more Likes. But we also lose something.

Why do mechanical values seem so thin and insensitive? Mechanical values turn life into something like a game; at least, they take away our fuzzy but deeply felt human values and replace them with clear, cut-and-dried rules for judging how well we did, and who won. But what does that do? For help, we can turn to the intellectual historian Lorraine Daston, who has given us a profound investigation into the nature of a rule. Because mechanical values are rule systems for evaluating success — and failure.

Historically, says Daston, we’ve used three incredibly different ideas of a rule. The first kind of rule is a principle. This is a general abstract statement about what to do — but there are exceptions. A principle isn’t meant to be applied unthinkingly and automatically. It’s supposed to be applied with judgment and care — and the knowledge that the rule won’t always work.

When I was taking creative writing classes, my teachers always told us the rule: “Show, don’t tell.” And for the most part, good fiction follows the “show, don’t tell” rule. But if you search through great literature, you’ll find plenty of exceptions. Tolstoy starts Anna Karenina with one of the finest opening lines in all of literature: “Happy families are all alike; every unhappy family is unhappy in its own way.” Tolstoy is telling and not showing.

I was the kind of smart‑ass who loved pointing out exceptions like this. But my creative writing teacher said that I was missing the point. “Show, don’t tell” is a general guideline, not an absolute rule. If you really know what you’re doing — if you understand the deep reason underneath the simple rule — you know when to break it. Principles are generalities meant to be applied with care, judgment, and discretion.

The second kind of rule is what Daston calls a model. This is an ideal — a role model, an exemplar. Daston turns to an old religious manual, The Rule of Saint Benedict. And it turns out the rule here isn’t some explicit statement in words. The rule is Saint Benedict himself, the actual historical person. To follow the rule of Saint Benedict isn’t to follow some explicit procedure, but to model yourself on Saint Benedict, to do what he would have done. This is the kind of rule embodied in mottoes like “What would Jesus do?” And notice that when you apply a model, you aren’t following some formula. You have a complex and open‑ended process: activating your understanding of this model person, and imagining how they’d act in some new situation.

Principles and models both require careful judgment to apply. Whether such a rule applies to this particular case will always be open to interpretation and up for debate.

The third kind of rule is completely different. This is a rule as an algorithm, an explicit directive meant to be applied mechanically — without discretion or judgment, exactly as it is written, with no exceptions. And it is this algorithmic conception of a rule that has become dominant in the past century, says Daston.

You might think algorithmic rules arose with the rise of computing machines. But they actually showed up about a hundred years earlier, she says, in a move to cheapen labor. Older mathematical methods often involved principles — that is, mathematical rules that had to be applied with care and discretion. For a given problem, you would have your choice of different mathematical tools and methods, each of which would yield a different result.

A simplified example: There are many different methods to split a slice of pie in two. You can do it by weight. You can divide it according to an exact angle, which you have measured with a protractor. Or you can use the “I cut, you choose” method. Each yields a slightly different result, and different situations call for different methods. If the goal is to create two perfect‑looking slices of pie for an advertising photo shoot, maybe you want a protractor. If you are a chemist trying to figure out the exact caloric count of a serving of pie, you should use weight. If the goal is to let two feuding siblings split a piece of pie without either of them feeling like they got screwed, then use the “I cut, you choose” method.

For complicated problems, choosing the right method took a considerable amount of expertise, so you had to hire people with lots of mathematical training and experience in the right field. Such trained mathematicians were rare and expensive. So corporations and governments spent a lot of resources creating an alternative to expensive, experienced experts: rule sets that anybody could mechanically follow.

Early examples of algorithmic rules include logarithm tables and various tables for performing navigational calculations. Such tables could be used by virtually anybody, and their work could be checked by virtually anybody else. There was no choice about which method to use: You just took some numbers and plugged them into the charts. So now you could hire cheaper workers — basically unskilled labor — to do the same job. And anybody could audit a worker’s performance; you didn’t need to pay another specialized expert to check over your first specialized expert’s work.

To be an algorithm here doesn’t mean that something is executed on a machine rather than by a human. To be an algorithm is to be a rule that has been written to be used without significant skill, judgment, or discretion. It is a procedure that anybody can follow.

This all might seem pretty abstract. So let’s turn to thinking about a very familiar and concrete case of mechanical rules: recipes.

Old-school recipes vs. modern recipes

My mother was an excellent cook. She learned to cook not from cookbooks and recipes, but from her family and friends in Vietnam. Like a lot of people from her generation, she cooked relatively few dishes, but she cooked them extraordinarily well.

I learned to cook through recipes, using a few key classic cookbooks: Julia Child’s French cookbooks and Marcella Hazan’s Italian cookbooks. And I got great results. So on one visit home, I asked my mom to teach me my very favorite Vietnamese dish: hot and sour catfish soup. So she did — or she tried to.

What she gave me wasn’t anything I could follow; it was nothing like a recipe at all. There were no clear measurements, nothing like “add 2 cups of broth” or “simmer for 30 minutes.” It seemed to me, at the time, like this vast and disorganized ramble, a weird organic messy flowchart of possibilities and decisions and judgment calls. I was supposed to add tomato and pineapple, but I was supposed to taste the ingredients first. If one was sweet and the other sour, I was probably fine. But if they were both particularly sweet, I would need to balance them with some extra vinegar. Or if they were both sour, I might need to add a little brown sugar. My mom wouldn’t ever tell me how much; it all depended on how things were tasting that day. And I had to smell the catfish — was it a particularly clean, farmed variety, or was it one of those funky‑smelling wild ones?

I was horrified by the mess of what she was giving me. And what I said to her then, to my eternal shame, was: “Mom, what is this Third World bullshit? Give me a real recipe!”

What I didn’t understand then was that my mom was giving me something profoundly real. But it was completely different from the kinds of tidy recipes I was used to from my modern cookbooks. I was used to recipes where I didn’t have to taste the food as I went, where I didn’t have to make judgment calls — where I could just dump in the required ingredients in the exact specified amounts.

These precise, modern recipes had, in a weird way, disrupted my sense of what cooking was and could be. I had come to assume that cooking — real cooking — had to proceed via an algorithm. I had refused to accept that real cooking might involve a messy and organic decision space, full of a thousand decision points and judgment calls.

Recipes didn’t always look so precise. If you look at cookbooks from the early 20th century, you often find recipes like this: “Beat 2-3 eggs, and then blend handfuls of flour until just barely workable. Knead until firm, and bake in a hot oven until it makes a nice ringing sound when knocked.”

That old‑school recipe is made up of principles, meant to be applied with judgment and flexibility. Our new‑school recipes are made up of algorithms, meant to be applied mechanically.

Let me be clear here: I learned to cook from algorithmic recipes, and I never would have been able to get a start with cooking Japanese, Mexican, or Russian food without them. Algorithmic recipes are a useful means of finding your way into a new cuisine.

Algorithmic recipes are also great if you’re a massive fast‑food franchise and want to use low‑skill, replaceable employees to produce food that tastes the same in every location. To make this work, industrial‑scale food companies need to make sure that the buns and the burger patties and the cheese are exactly the same. Standardized inputs plus algorithmic procedures equals consistent results, without any need for expert workers. But notice: Here, the algorithmic recipe isn’t just a starting point; it’s where we end up.

The disadvantages of the algorithmic recipe are subtler, but they are very real. So what are the downsides of following an algorithmic recipe precisely, exactly as it’s written? What is the cost of engineered accessibility?

Back in an earlier era of my life, when I was a food writer, I interviewed an incredible pizza chef in LA. He ran a wild‑yeast sourdough Neapolitan pizza shop called Mother Dough that made some of the most glorious, absurdly radiant pizzas I have ever had in my life.

He wasn’t from Naples; he was Lebanese. He told me that one day, eating a perfect Neapolitan pizza, he’d had a mystical insight about the unity of all the flatbread traditions, about the beautiful spectrum that encompassed both Lebanese flatbreads and Neapolitan pizza. So he moved to Naples and apprenticed himself to a pizza master for a decade, and then moved to LA and opened up his shop.

I asked him how he made pizza so good — God‑pizza, pizza that sang with the most delicate balance of crispy to chewy, that gave me the most angelic hit of pure beauty, while smacking me with pure animal gut‑joy. He pointed out the enormous wood‑fired copper pizza oven at the back of the shop.

“See that?” he said. “That’s the temperature gauge. I painted it over, with black paint, so I couldn’t look at it. It’s a distraction. You have to put your hand here”— he placed his hand directly at the open mouth of the pizza oven — “and feel how it’s breathing. It will tell you how the pizza wants to be cooked that day. You can’t trust the temperature gauge to tell you the truth.”

What he meant was that temperature wasn’t all that mattered, but if you had the temperature gauge, you would be tempted to hyper‑focus on it to the exclusion of all else. Baking is a complex act, where a live product — yeast and dough — reacts to a complex set of ever‑changing environmental qualities. Temperature matters, but also humidity, air flow, air pressure. And all that atmospheric stuff is changing, every day. There is no single correct baking time and temperature. What you need to do changes each day, in response to those changing variables.

And this pizza chef had learned to perform, by feel, a complex synthesis of all these factors. He had painted over his temperature gauge because it was a distraction. It tempted you to focus completely on one thing, to treat the single measurement it tracked — temperature — as the all‑important one, instead of looking at the complex interaction of all the relevant qualities.

His shop was also incredibly consistent. It produced amazing wood‑fired pizzas, and he nailed that exact texture of dough every single day. Other shops — those where people were following an algorithm, baking a pizza with an exact amount of time and temperature each day — were actually far more inconsistent in their results. They had crispy pizza dough one day and rubbery dough the next, even though their procedure was more consistent. Why? Because the rigidity of the algorithm, when followed mechanically, prevents the cooks from adapting to continually changing conditions. Air pressure changes, humidity changes, yeast changes, but the algorithm remains the same. The true master cook adapts their procedures to changing inputs.

A recipe is an example of a mechanical procedure. A procedure is mechanical if it’s consistently applicable, by different people, without the need for judgment. And that consistency is often quite narrow. A mechanical recipe leads to consistency in procedure, but not necessarily in the final results.

By implementing mechanical procedures, we can rid ourselves of the need for skill or sensitivity, to varying degrees. Your typical recipe is built to be followed by anybody who can read and has access to some basic cup measures. There are also mechanical procedures for experts, like a standard procedure for chem lab techs running a test. These are written assuming a greater level of background knowledge, but once you have that background, the procedure will be executed in the same way by all users — without the need for judgment.

Here is the trade‑off between fuzzy principles and mechanical procedures. Fuzzy principles, like those old‑school recipes, require the trained judgment of a highly experienced person to apply, which means they can use all the experience, sensitivity, and attunement of that judge. They can use complex cues for action, like “Bake until it makes a hollow ringing sound,” or “Add pineapple until it tastes balanced.” That linguistic fuzziness opens a space for expertise and sensitivity. Fuzzy language is a cue for the person executing the rule to exercise their own judgment, to adapt. But that procedure won’t be as easy to execute by the public at large, nor can the public as easily inspect and oversee other people’s applications of the procedure. And different experts might end up doing different things following that procedure.

A mechanical procedure, on the other hand, is highly repeatable and accessible. Mechanical procedures work best with the kinds of things that are naturally public and observable by everybody. This is when we truly get to harness the power of scale, and reap the rewards of massive collections of data. But other times, the situation may be subtler — something that requires discernment, sensitivity, and expertise to notice. And in those cases, mechanical procedures are far less accurate. They will miss the mark in subtle terrain, because they’re bound by the demand that the steps be clear and explicit enough that they may be consistently applied by anybody.

So: When we’re adopting mechanical values, what we’re really doing is accepting mechanical rules for evaluating what’s important. We’re taking on a mechanical procedure for evaluating our successes and failures.

Mechanical evaluation systems have power. They protect us from a certain kind of corruption and bias. They make agreement automatic, our conclusions inarguable. But they also introduce a new kind of bias: a bias toward paying attention to the kinds of things that we can count mechanically.

From the perspective of the big organization, though, mechanical procedures are incredibly efficient. When we standardize a system, we make the parts easy to replace. When we standardize nuts and bolts, it’s no big deal when you lose a ¼” bolt. You can just get another one. And when you standardize work — when you give workers mechanical rules to follow — then it’s no big deal to fire a particular worker, because you can find another and just slot them into place.

So when we adopt mechanical values, we make ourselves perfectly replaceable — in valuing and judging what’s important. If you’re running a restaurant and aiming for the delicate balance between crisp and soft chewiness of true Neapolitan pizza, then you need to hire people with the experience and sensitivity to tell — who know in their hearts the subtle feel of true Neapolitan pizza. But if you’re aiming to maximize sales, then anybody can judge that — and anybody can understand it. Mechanical values make it easier to communicate, easier to explain ourselves. Mechanical values are designed to make self-justification frictionless. They make our loves and desires utterly transparent.

But to get that, we must sacrifice any subtlety in our values. We can no longer seek targets that require experience or discernment to detect. The power of mechanical values is accessibility. The price we pay is our sensitivity.

From The Score: How to Stop Playing Somebody Else’s Game by C. Thi Nguyen, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2026 by Christopher Thi Nguyen. The book is available for purchase.


