AI · ETHICS · DECISION-MAKING · 4 MIN READ · 2026-04-08

What the Machine Cannot Decide

AI is good at predicting. A decision is a different operation. The two are often confused.

"Knowledge is one thing, wisdom another. Knowledge accumulates, wisdom chooses." — a fragment of Heraclitus, in Diels's reconstruction.

Modern machine learning systems are prediction machines. They estimate the probability of an outcome given the inputs. That is useful. It is not a decision. The distinction is not pedantic — it is practical: confusing prediction with decision leads to delegating moral choice to an algorithm that never claimed the role of moral subject.

A decision is the act that adds values to the prediction. What do we prefer? What are we willing to pay? What trade-off between type-I and type-II errors do we accept? The machine does not answer these questions; it takes the answers as given. If a human did not state them explicitly, they were stated by default by the engineer who wrote the loss function three years ago, not knowing that this choice would become the ethics of a clinic, a bank, or a court.
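That trade-off can be made explicit. A minimal sketch in Python (the cost names are illustrative): under the standard expected-cost argument, the threshold that turns a predicted probability into action follows directly from the two costs that humans chose, not from anything the model learned.

```python
def decision_threshold(cost_false_positive: float, cost_false_negative: float) -> float:
    """Expected-cost argument: act when p * cost_fn >= (1 - p) * cost_fp,
    i.e. when p >= cost_fp / (cost_fp + cost_fn).
    The model supplies p; the two costs are a human value judgment."""
    return cost_false_positive / (cost_false_positive + cost_false_negative)

def decide(p: float, cost_fp: float, cost_fn: float) -> bool:
    """The 'decision' is nothing but the prediction plus the stated costs."""
    return p >= decision_threshold(cost_fp, cost_fn)

# If missing a true case is judged nine times worse than a false alarm,
# the rational threshold drops to 0.1 -- a value choice, not a model output.
print(decision_threshold(1.0, 9.0))  # 0.1
```

Change the costs and the "same" model starts deciding differently, which is exactly the point: the ethics lives in the costs, not in the score.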

The ancient frame

Aristotle, in the Nicomachean Ethics, distinguished techne from phronesis. Techne is the knowledge of how to make a thing correctly by rule: the carpenter knows the technique, the doctor knows the protocols. Phronesis is practical wisdom: knowing what to do in the particular case, where the rule is not enough. A good doctor is not reducible to a walking algorithm: the doctor's art is to choose which rule applies to this particular case, and to bear the consequences of the choice.

AI is techne in pure form. The algorithm knows how to compute the probability; it does not know whether to act on it. The decision is phronesis, and it always remains with the human, even when the human does not notice.

The modern confusion

When we delegate to AI a "decision" — hiring, credit, medical diagnosis — we are in fact delegating prediction. The decision stays with us: we choose the threshold, we choose the loss function, we choose what success means. If we do not choose consciously, we delegate it to the system's defaults. Then the defaults become our ethics.

The hidden danger here is not "the rise of the machines" but the transfer of responsibility. When a credit denial is presented as "the algorithm decided," the person who actually picked the threshold is hidden behind the machine. Ancient ethics called such hiding sacrilege against phronesis: an attempt to avoid responsibility by dressing it as rule.

Where the line runs

The line runs wherever three things are at stake: values that cannot be enumerated in advance; contexts where the rule itself requires an exception; and responsibility that someone must personally bear. Aristotle said phronesis requires experience, that is, a historical body, formed by mistakes and holding their memory. The machine has no such body; it "remembers" only what is in the dataset.

The machine can help you decide. It cannot decide for you. To delegate the choice is to make a choice without calling it one.

What "keeping the decision" means

It means asking, each time AI hands you a score: what do I do with this? Not "is the prediction right?" but "what action does this prediction warrant?". That is the line the machine cannot cross. A good AI interface honestly shows where prediction ends and decision begins, and does not try to disguise the second as the first.
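One way to draw that line in code, sketched under invented names rather than any real library's API: the predictor returns only a score, and the acting function deliberately has no default policy, so a silent default can never become the decision.

```python
from typing import Callable, Optional

def predict(income: float, debt: float) -> float:
    """Prediction: the machine's part. A toy score standing in for a model."""
    return max(0.0, min(1.0, debt / max(income, 1.0)))

def act(p: float, policy: Optional[Callable[[float], str]]) -> str:
    """Decision: refuses to run without an explicit, human-authored policy.
    There is intentionally no default threshold here."""
    if policy is None:
        raise ValueError("no policy supplied: the machine does not decide")
    return policy(p)

# The policy is the human's statement of values, named and visible.
cautious_policy = lambda p: "deny" if p >= 0.4 else "approve"
print(act(predict(50_000, 10_000), cautious_policy))  # approve (score 0.2)
```

The design choice is the argument: by making the policy a required parameter, the interface shows exactly where prediction ends and decision begins.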

Organisations that understand this document not just the models but the thresholds: who chose them, on the basis of what values, with what review date. That is ancient discipline rewritten in the language of machine ops.

Delegation as corporate anaesthesia

In large organisations there is a hidden benefit of delegating a decision to a machine, never named aloud: no one is to blame. If a loan is denied by an algorithm, the customer's anger dissipates into air; if denied by a person, the anger has a name, and the name carries a cost. The machine becomes corporate anaesthesia: it removes the pain of responsibility without removing the fact of the decision.

This is convenient in the short run and corrosive in the long. Responsibility cannot be handed to an algorithm, because an algorithm has no status as a moral subject. When we pretend that "the machine decided," we do not remove the decision; we only remove the name of the one who made it. The decision remains, the name vanishes, and what sociologists call a deficit of legitimacy accumulates.

The best organisations build AI infrastructure so that the name always remains. Every automatic decision has an owner; every threshold an author; every refusal to intervene a review date. That is the technical form of responsibility translated into the era of algorithms.
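That discipline can be as small as a record type. A minimal sketch (all names and values are illustrative, not taken from any standard): every threshold carries its author, its rationale, and a review date, so the name never disappears behind the machine.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ThresholdRecord:
    model_id: str       # what the model predicts
    threshold: float    # what turns prediction into action
    owner: str          # who answers for the threshold
    rationale: str      # the values the number encodes
    review_by: date     # when the choice must be revisited

    def review_due(self, today: date) -> bool:
        """An unreviewed default quietly becomes policy; this flags it."""
        return today >= self.review_by

# Hypothetical example of one such record.
record = ThresholdRecord(
    model_id="credit-default-v3",
    threshold=0.27,
    owner="risk-committee@example.org",
    rationale="false denial judged three times worse than false approval",
    review_by=date(2026, 10, 1),
)
print(record.review_due(date(2026, 4, 8)))  # False: still within its mandate
```

The record is frozen on purpose: changing a threshold means writing a new record with a new author, not silently editing the old one.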

What to do

When you use AI for classification, forecasting, or ranking — write down three things explicitly. What the model predicts. What threshold turns prediction into action. Who is responsible for the threshold. The three answers should be on paper and signed by a human. If you cannot name any of them, you are not using AI; AI is using you, quietly substituting its defaults for your decisions. Phronesis does not delegate. Ever.

