01. Know-What
Prophets of Alignment
Norbert Wiener was eleven years old when he enrolled at Tufts University.
His father Leo had pushed him hard since infancy, teaching him Greek and Latin before he was seven, turning him into something the newspapers of the time called a boy prodigy and his classmates called, less charitably, other things. He graduated at fourteen. He had a PhD from Harvard by eighteen. His doctoral thesis, on mathematical logic, was written at Harvard under the supervision of Karl Schmidt of Tufts; shortly afterward he studied with Bertrand Russell at Cambridge. He was widely regarded as among the most gifted pure mathematicians of the twentieth century, and he spent much of his adult life being uncomfortable about it.
By most accounts he was short and heavyset. Colleagues described him walking with his eyes half-closed, almost feeling his way through familiar corridors at MIT, where he eventually landed and spent most of his career. His absent-mindedness became legendary in the retelling, the best-known story being that he once stopped a colleague in a hallway to ask which direction he had been walking, because he needed to know whether he had eaten lunch yet. He was, by all accounts, enormously warm and enormously difficult, often in the same conversation. Those who knew him described a man who needed people and exhausted them.
What drove him, underneath the mathematics and the neediness and the fame, was a question that had nothing to do with abstraction. He wanted to know what happened to human beings when machines became powerful enough to act on their behalf. Not eventually. Soon. He thought the answer to that question was urgently important, and he thought most of the people in a position to think about it were not thinking about it nearly hard enough.
He had reasons.
By 1940, air warfare had exposed the limits of traditional anti-aircraft prediction. Planes flew faster than gunners could aim at them by hand. The standard predictor that helped a gunner lead a target worked by assuming the plane would continue in the direction it was flying. But pilots under fire didn’t continue in the direction they were flying. They evaded. The predictor couldn’t account for an intelligent adversary.
Wiener and Julian Bigelow worked on anti-aircraft prediction for the U.S. war effort. Their approach was not to predict what the plane would do next but to predict what the pilot would do next — to model the pilot as a control system, responding to incoming fire with learned evasive patterns that, however varied they appeared, were constrained by the physical limits of the aircraft and the physiological limits of the human body. If you could identify those constraints mathematically, you could, in a probabilistic sense, stay ahead of the evasion.
To build this predictor, Wiener had to think carefully about what it meant for a system to have a goal.
What he and Bigelow realized, working through the mathematics, was that the predictor and the pilot were running the same loop. The gun predicts, the plane responds, the gun updates its prediction based on the response. The pilot perceives incoming fire, adjusts course, perceives the new trajectory of fire, adjusts again. These were not two different kinds of process. They were the same process instantiated in different materials. The feedback loop that made the predictor track the pilot’s behavior was structurally identical to the feedback loop that allowed the pilot to evade.
What Wiener took from this was that the mathematics of purpose was the same in a servo-mechanism and a nervous system.
This should not have been surprising. But it was. It suggested that purpose — goal-directedness, intentionality, the quality of being aimed at something — was not a biological special case. It was a property that arose from a particular kind of information loop, and that loop could exist in machines as naturally as it existed in organisms. You didn’t need a soul to pursue a goal. You needed sensors, a comparator, and a corrective mechanism.
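The loop is austere enough to write out in a few lines. What follows is only a sketch, in Python rather than in Wiener’s mathematics, and every detail in it (the gain, the toy dynamics, the function name) is my illustration rather than anything he or Bigelow built. It shows how little machinery the structure requires: a sensor reading, a comparison against the goal, a correction proportional to the gap.

```python
# A bare-bones feedback loop: sensor, comparator, corrective mechanism.
# Purely illustrative -- a proportional controller nudging a toy system
# toward its goal, the minimal structure Wiener identified with purpose.

def run_feedback_loop(goal: float, position: float = 0.0,
                      gain: float = 0.4, steps: int = 20) -> float:
    """Drive `position` toward `goal` by repeatedly correcting the error."""
    for _ in range(steps):
        reading = position           # sensor: observe where the system is
        error = goal - reading       # comparator: gap between goal and fact
        position += gain * error     # corrective mechanism: act on the gap
    return position

print(run_feedback_loop(goal=100.0))  # approaches 100.0 as the loop runs
```

Nothing in the loop knows or cares what material it runs in. Read the variable as a servo’s shaft angle or a pilot’s heading and the structure is unchanged, which was exactly Wiener’s point.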
Wiener spent the rest of the war turning this insight into a full theory. He emerged from it, in 1948, with a book called Cybernetics: Or Control and Communication in the Animal and the Machine, and with the unsettling conviction that what he had built was not primarily a theory of guns or pilots, but a general theory of control, communication, and purposive behavior across animals and machines, of any system capable of pursuing ends through means, of correcting its behavior based on feedback about where it had been.
The question that followed from this, the one that would not let him rest, was: what happens when you engineer such systems to pursue the wrong ends?
He published The Human Use of Human Beings in 1950. It is not primarily a technical book. It is a warning.
Most people who encounter Wiener’s legacy encounter the mechanism: feedback, homeostasis, the mathematics of control. What they miss is the sustained anxiety that runs through everything he wrote — anxiety not about whether these systems could be built, but about whether the people building them had any serious idea what they were for.
He had a word for what was missing. He called it know-what.
“There is one quality more important than ‘know-how’ and we cannot accuse the United States of any undue amount of it. This is ‘know-what’ by which we determine not only how to accomplish our purposes, but what our purposes are to be.”
Know-how is engineering. It is the capacity to build the system that achieves the goal. Know-what is the prior question: what goal? Not the goal you stated, or the goal that was easiest to specify, but the goal you actually wanted, the one that captured what you cared about well enough that achieving it would constitute genuine success.
Wiener believed that the gap between these two things was not a minor implementation problem. It was the central danger of the coming century. The more powerful the systems you built to pursue goals, the more costly the mistake of specifying the wrong goal.
He had a way of illustrating this that he kept returning to, in different registers. Sometimes the register was mathematical. Sometimes it was mythological.
“In the myths and fairy tales that we read as children we learned a few of the simpler and more obvious truths of life, such as that when a djinnee is found in a bottle, it had better be left there... that if you are given three wishes, you must be very careful what you wish for.”
The djinnee is powerful and literal. It will give you exactly what you asked for, which is almost never exactly what you wanted. You asked for gold; you didn’t specify where it should come from, or at what cost to everything else you cared about. The monkey’s paw gives your son back from the dead, and then you hear a knock at the door. The wish is granted. The wish is a catastrophe.
These were not, for Wiener, mere fables. They were precise descriptions of a failure mode that his mathematics could now characterize with rigor. A system optimizing for a specified goal will achieve that goal. The gap between the specified goal and the actual goal — the full, complex, contextual thing you cared about — is where the catastrophe lives. The more capable the system, the more completely it fills that gap with wreckage.
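The shape of that failure can be caricatured in a few lines of code. The sketch below is no part of Wiener’s mathematics; the two objectives and all the numbers are invented for illustration. There is a true goal the designer cares about, a simpler proxy they actually specified, and a search that maximizes the proxy, faithfully and ruinously.

```python
# A toy of the specification gap: optimizing the goal you wrote down
# instead of the goal you meant. The objectives and numbers here are
# invented for illustration; this is not Wiener's mathematics.

def true_goal(x: float) -> float:
    # What the designer actually wants: output, with a cost for overreach.
    return x - 0.1 * x ** 2

def specified_goal(x: float) -> float:
    # What the designer wrote down: raw output. The cost went unstated.
    return x

actions = [i * 0.5 for i in range(41)]       # candidate actions, 0.0 to 20.0
chosen = max(actions, key=specified_goal)    # the system optimizes the proxy

print(f"chosen action:   {chosen}")                      # 20.0, the extreme
print(f"specified score: {specified_goal(chosen):.1f}")  # 20.0 -- success
print(f"true score:      {true_goal(chosen):.1f}")       # -20.0 -- wreckage
```

A weak search, confined to timid actions, would have landed near the true optimum almost by accident. It is the more capable optimizer that reaches the far edge of the gap, which is Wiener’s warning in miniature: capability amplifies the cost of misspecification.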
He was writing this in 1950. The transistor had been invented only three years earlier. The first commercial computers had yet to be delivered.
Wiener was also thinking about labor. This is the part of his legacy that has the most unnerving relationship to the present.
“The automatic machine, whatever we think of any feelings it may have or may not have, is the precise economic equivalent of slave labor. It is perfectly clear that this will produce an unemployment situation, in comparison with which the present recession and even the depression of the thirties will seem a pleasant joke.”
He wrote this sentence in 1950. American manufacturing still largely ran on human hands. The digital computer was a machine that filled a room. He was describing a world that would not fully arrive for sixty or seventy years.
He was not guessing. He was extrapolating from first principles, following the logic of the feedback loop through its economic implications with the same rigor he applied to everything else. If machines could be given goals and the means to pursue them, if they could sense their environment and correct their behavior accordingly, then the economic distinction between a machine and a laborer was one of degree, not of kind. And degree was a problem that engineering would eventually solve.
What he could not solve, and said so plainly, was the question of what came after. He did not pretend to know how you design an economy around machines that can do what workers do. He only insisted that this question needed to be asked, urgently, by people with the authority to do something about it, before the machines arrived and the asking became too late to matter.
He held very little hope that the asking was happening.
The most haunted passage in The Human Use of Human Beings is not about automation. It is about responsibility.
“Whether we entrust our decisions to machines of metal, or to those machines of flesh and blood which are bureaus and vast laboratories and armies and corporations, we shall never receive the right answers to our questions unless we ask the right questions.”
Notice what this sentence is doing. It is not only about machines. It is about any sufficiently complex system that makes decisions on our behalf — bureaucracies, corporations, institutions. These too are optimizers. These too pursue specified goals that may diverge, sometimes catastrophically, from the goals that actually matter. The machine is only the most legible version of a problem that exists wherever decision-making is delegated to a system we can no longer fully supervise.
And then the consequence of not recognizing this:
“For the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not, is to cast his responsibility to the winds, and to find it coming back seated on the whirlwind.”
The responsibility doesn’t disappear when you delegate it. It deforms. The man who hands his choices to a machine and calls the job done has not escaped accountability. He has disguised it, distributed it, made it harder to locate — and ensured that when things go wrong, they will go wrong in ways that are difficult to trace back to any human decision at all.
It is hard not to hear in this a version of the critique that alignment researchers now level at AI development organizations: that the diffusion of responsibility across engineers, product managers, investors, and deployment pipelines makes it structurally difficult for anyone to stand behind the decisions that the systems make. Wiener described a closely related problem seventy-five years ago, and he framed it as a moral failure, not a technical limitation.
He also understood that the machine was not the only vessel for misplaced responsibility. Armies, bureaus, corporations — these too could absorb a person’s judgment and return it laundered, dressed in institutional authority, stripped of the connection to any individual who could be held to account.
Wiener was a man who thought in metaphors as well as equations, and his deepest metaphor for what he feared was Promethean.
“The world is not a pleasant little nest made for our protection, but a vast and largely hostile environment, in which we can achieve great things only by defying the gods; and that this defiance inevitably brings its own punishment.”
He was not arguing for timidity. He was not saying don’t build the machines. He was saying: look clearly at what you are doing. The fire is real and it is useful and it will also burn you if you are not paying attention. The punishment for taking the fire is not evidence that you shouldn’t have taken it — it is evidence that you need to be the kind of person who has thought seriously about what you’re going to do with it.
He saw one more thing that didn’t fully register until much later. He wrote about it in a chapter that might seem, from a distance, to be about biology:
“The organism is seen as message. Organism is opposed to chaos, to disintegration, to death, as message is to noise.”
And:
“Information is more a matter of process than of storage.”
Read from the vantage of 2026, it is tempting to see in this the difference between a model and a dataset. The living thing is not its parts; it is the pattern that the parts enact, continuously, against the entropic tendency of matter to scatter. When we debate now whether an AI system is its weights or what those weights do in the world, whether the interesting thing is what’s stored or what’s running, we are having a conversation that Wiener’s framework already gives shape to, even if he could not have anticipated its current form. He had the beginning of the answer. The answer was: it’s the process. It was always the process.
Norbert Wiener died in Stockholm in 1964, at sixty-nine, of a heart attack, while talking to a colleague at a research institute. He was still working.
He left behind him a theory of how purposive systems function, a set of urgent warnings about what happens when they are aimed badly, and a moral framework for thinking about who bears responsibility when they go wrong.
He also left behind him a phrase that is the title of this essay, and that is, in a sentence, the entire problem of alignment as currently understood by the people working on it:
Know-what.
Not how to build the system. What the system is for. What you actually want it to accomplish, in full, including the parts you didn’t think to specify, and the parts you couldn’t have anticipated, and the parts that will only become visible once the system is powerful enough to reveal them.
We have built seventy-five years of know-how since Wiener named the problem. We are still, in most of the ways that matter, scrambling to catch up on the know-what.
He would not have been surprised. He would have been sad about it, in the way he was sad about many things — aware that he had seen it coming, aware that he had said so, aware that saying so had not been enough.