Module II·Article II·~3 min read
Inductive Logic, Probabilistic Thinking, and Decision-Making
Formal Logic and Argumentation Theory
Uncertainty as the Norm
In real life, we rarely have complete information. We make decisions under conditions of uncertainty: data are incomplete, models are approximate, the future is unpredictable. Probabilistic thinking is the skill of working with this uncertainty—without denying it and without paralysis.
Basic Probability Theory
The probability of an event is a number from 0 to 1, showing relative frequency or degree of confidence. P(A) = 1: event A will certainly occur. P(A) = 0: never. P(A) = 0.5: equally likely yes or no.
Addition (mutually exclusive events): P(A ∨ B) = P(A) + P(B). For a coin toss: P(heads) + P(tails) = 1. Multiplication (independent events): P(A ∧ B) = P(A) × P(B). Two coin tosses: P(two heads) = 0.5 × 0.5 = 0.25. Conditional probability: P(A|B) is the probability of A given that B has occurred.
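These rules can be checked with a short script (a minimal sketch using the coin examples above):

```python
# Basic probability rules, illustrated with fair coin tosses.
p_heads = 0.5
p_tails = 0.5

# Addition rule for mutually exclusive events: P(A or B) = P(A) + P(B)
assert p_heads + p_tails == 1.0

# Multiplication rule for independent events: P(A and B) = P(A) * P(B)
p_two_heads = p_heads * p_heads
print(p_two_heads)  # 0.25

# Conditional probability: P(A|B) = P(A and B) / P(B)
# P(two heads | first toss is heads) = 0.25 / 0.5 = 0.5
p_cond = p_two_heads / p_heads
print(p_cond)  # 0.5
```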
Bayes' Theorem
Perhaps the most important formula for rational thinking:
P(H|E) = P(E|H) × P(H) / P(E)
Where H is the hypothesis, E is the observed evidence. We update the prior probability P(H) considering the new evidence E to obtain the posterior probability P(H|E).
Example: A test for a disease has 99% sensitivity and 99% specificity. The disease occurs in 1 out of 1,000 people. You test positive—what is the probability that you are actually sick?
Intuition says: “99% accuracy means you’re surely sick.” Bayes says differently: out of 100,000 people, about 100 are sick (with 99% sensitivity, ~99 of them test positive) and 99,900 are healthy (1% false positives ≈ 999 people). In total, ~1,098 positive tests, of which only ~99 are truly sick. P(sick | positive test) ≈ 9%. Unexpected!
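The same result falls out of Bayes’ formula directly (a sketch using the numbers from the example):

```python
# Bayes' theorem applied to the disease-test example.
p_sick = 0.001              # base rate: 1 in 1,000
p_pos_given_sick = 0.99     # sensitivity
p_pos_given_healthy = 0.01  # 1 - specificity (false positive rate)

# Total probability of a positive test, P(E)
p_pos = p_pos_given_sick * p_sick + p_pos_given_healthy * (1 - p_sick)

# Posterior: P(sick | positive) = P(positive | sick) * P(sick) / P(positive)
p_sick_given_pos = p_pos_given_sick * p_sick / p_pos
print(round(p_sick_given_pos, 3))  # 0.09
```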
Bayesian thinking teaches: always consider the base rate (prior). A rare event remains rare, even if the test “detects” it.
Errors in Probabilistic Thinking
Base rate error: ignoring prior probability. “A serial killer is a middle-aged man, reserved, likes weapons.” This profile fits millions of people. You can’t diagnose a rare event by a vague profile.
Simpson’s paradox: a trend observed in subgroups disappears or reverses when the data are combined. Hospital A has higher survival for both mild and severe cases; Hospital B has higher survival overall. How? Hospital A accepts more of the severe cases. The level at which data are aggregated is critically important.
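A toy dataset (the counts are invented purely for illustration) shows how the paradox arises:

```python
# Simpson's paradox: Hospital A is better in each subgroup,
# yet Hospital B is better overall, because A takes more severe cases.
data = {  # (survived, total) per severity group
    "A": {"mild": (90, 100), "severe": (450, 900)},
    "B": {"mild": (720, 900), "severe": (40, 100)},
}

overall = {}
for hospital, groups in data.items():
    for severity, (survived, total) in groups.items():
        print(hospital, severity, survived / total)
    s = sum(v[0] for v in groups.values())
    t = sum(v[1] for v in groups.values())
    overall[hospital] = s / t
    print(hospital, "overall", overall[hospital])
# A: mild 0.90, severe 0.50, overall 0.54
# B: mild 0.80, severe 0.40, overall 0.76
```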
Neglecting sample size: an impressive result in a small sample may be due to chance. Four successful quarters is too little to conclude anything about a long-term trend.
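A quick sanity check on “four successful quarters,” assuming (purely for illustration) that each quarter is a 50/50 coin flip for a company with no real edge:

```python
# Probability that a company with no real edge (p = 0.5 per quarter)
# strings together 4 successful quarters by pure chance.
p_streak = 0.5 ** 4
print(p_streak)  # 0.0625

# Among 100 such companies, the expected number showing the streak:
print(100 * p_streak)  # 6.25
```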
Regression to the mean: after an extreme result, the next period’s result tends to be closer to the average. “The curse of the Sports Illustrated cover”: athletes on the cover often perform worse afterward, because they appeared on the cover during peak form. It is a mistake to attribute the decline to “stardom.”
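The effect can be reproduced in a simulation (a sketch where observed performance is modeled as stable skill plus random luck):

```python
import random

random.seed(42)

# Observed result = stable skill + luck (both standard normal).
n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]
period1 = [s + random.gauss(0, 1) for s in skill]
period2 = [s + random.gauss(0, 1) for s in skill]

# Select the top 1% performers of period 1 (the "cover athletes").
top = sorted(range(n), key=lambda i: period1[i], reverse=True)[:n // 100]

avg1 = sum(period1[i] for i in top) / len(top)
avg2 = sum(period2[i] for i in top) / len(top)
print(f"period 1 average of top performers: {avg1:.2f}")
print(f"period 2 average of the same group:  {avg2:.2f}")
# The period-2 average is noticeably lower: their luck does not repeat.
```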
Expected Value and Risk
Expected Value (EV) = sum of (probability × outcome) over all outcomes. Should you play the lottery? EV = P(win) × prize − ticket price (you pay for the ticket regardless of the outcome). If EV is negative, it is rational not to play.
But EV is not the only criterion. Risk aversion: a rational person may prefer a guaranteed $100 to a lottery with EV $150, because variability costs them dearly. Loss asymmetry: losing $100 is subjectively more painful than winning $100. Hence different strategies for large and small bets.
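Both ideas can be made concrete (the lottery numbers are illustrative, and the square-root utility is one common way to model risk aversion, not the only one):

```python
import math

# Expected value of a lottery ticket (illustrative numbers).
ticket_price = 2.0
prize = 1_000_000.0
p_win = 1 / 10_000_000

# You pay the ticket price regardless of the outcome.
ev = p_win * prize - ticket_price
print(ev)  # -1.9: on average you lose $1.90 per ticket

# Risk aversion with a concave utility u(x) = sqrt(x):
# guaranteed $100 vs a 50% chance of $300 (EV = $150).
u_sure = math.sqrt(100)                     # 10.0
u_lottery = 0.5 * math.sqrt(300) + 0.5 * 0  # ~8.66
# The sure $100 has higher expected utility despite the lower EV.
print(u_sure > u_lottery)  # True
```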
Question for reflection: Name a major decision in your industry that “did not work out.” Can its failure be partially explained by an error in probabilistic thinking—base rate error, regression to the mean, insufficient sample size?