§ PILLAR B · 42 MIN READ · Updated 2026-05-13
Cognitive Biases: The Complete List with Examples
The systematic errors in human reasoning — discovered, named, and (occasionally) corrected. With the honest acknowledgment that knowing about them isn't enough.
"Nothing in life is as important as you think it is, while you are thinking about it."

A cognitive bias is a systematic pattern of deviation from rationality in judgment — an error in thinking that affects everyone, in similar ways, in similar circumstances. The study of cognitive biases began with Daniel Kahneman and Amos Tversky in the 1970s, and the field they founded has transformed our understanding of human reasoning.
This guide catalogues the twenty-eight most important cognitive biases — what each is, why it exists, where it shows up, and (cautiously) what you can do about it. The intended audience is anyone who wants to think more clearly about their own thinking: decision-makers, analysts, and anyone trying to understand why people make decisions that seem irrational.
The guide also covers the meta-question: what to do with the knowledge that we're all biased. The honest answer — that simply knowing about biases doesn't reliably reduce them — is the most important thing to understand.
What a cognitive bias is
A cognitive bias is a systematic departure from rational judgment. It is predictable (occurring in particular circumstances), consistent (across people), and usually invisible to the person experiencing it (we don't feel ourselves being biased; we feel ourselves thinking clearly).
The term emerged from the heuristics and biases research program of Tversky and Kahneman. Their core insight: human reasoning relies on mental shortcuts (heuristics) that work well in many circumstances but produce systematic errors (biases) in others. The biases aren't random — they follow patterns determined by the structure of the shortcuts.
This was a revolutionary view. Earlier economics had assumed homo economicus — a fully rational decision-maker. Tversky and Kahneman showed that real humans systematically deviate from rationality in measurable, repeatable ways. The implication for economics, finance, medicine, law, and policy was substantial.
System 1 vs System 2 (Kahneman)
Kahneman's framework, popularized in Thinking, Fast and Slow (2011): we have two cognitive systems.
System 1 is fast, automatic, intuitive, emotional. It runs nearly all our thinking. It produces the immediate "answer" to most situations without conscious effort.
System 2 is slow, effortful, analytical, rational. It can override System 1 but requires deliberate engagement.
Most of our daily decisions are made by System 1. System 2 is "lazy" — it doesn't engage unless triggered. Cognitive biases are mostly errors in System 1 that System 2 fails to catch and correct.
This framework is heuristic (not literal — there aren't two physical systems in the brain), but useful for understanding when biases occur and how to interrupt them.
Categories of biases
For organization, biases cluster into several categories. The boundaries are fuzzy.
Category 1 — Memory and availability: how our memory and recent experiences shape judgment.
Category 2 — Anchoring and adjustment: how starting points distort judgments.
Category 3 — Representativeness: how surface features misleadingly substitute for underlying probabilities.
Category 4 — Confirmation and motivated reasoning: how our existing beliefs filter new information.
Category 5 — Loss aversion and framing: how the way options are presented changes choices.
Category 6 — Self-related biases: how self-perception distorts judgment about ourselves.
Category 7 — Social biases: how group dynamics affect individual reasoning.
We'll go through each.
Category 1: Memory and availability
1. Availability heuristic
We estimate the frequency or probability of events by how easily examples come to mind.
Example: After a plane crash dominates the news, people overestimate the risk of flying. The vividness of the recent crash makes air travel seem more dangerous, even though statistically it remains the safest form of transportation.
Why it exists: For most of human history, easily-recalled events were more frequent — recent threats to the tribe were closer in time and more relevant. The heuristic worked well in evolutionary contexts. In a world of mass media, where vivid events are amplified disproportionately to their frequency, it misleads us.
Where it shows up: fear of shark attacks (vastly overestimated), terrorism risk (vastly overestimated), news coverage of drug deaths (vivid opioid stories while alcohol's comparable toll receives far less attention).
2. Recency bias
We weight recent information more heavily than older information.
Example: Investors overweight the last 6 months of returns when projecting future performance, even when 10 years of data is available.
Why it exists: Recent information is usually more relevant to current circumstances. The bias is the over-application.
Where it shows up: forecasting, hiring decisions (last impression dominates), customer satisfaction surveys (last interaction overweighted).
3. Survivorship bias
We focus on entities that survived a selection process while ignoring those that didn't.
Example: We study successful startups and conclude that their founders' traits cause success. We don't study the equally-talented founders whose startups failed.
Detail: Survivorship Bias: The Hidden Pattern.
4. Hindsight bias ("I knew it all along")
After an event, we remember our prior predictions as more confident and accurate than they were.
Example: After the 2008 financial crisis, many investors and analysts reported having "known all along" that the housing market was overheated. Pre-crisis writings of the same people showed they didn't.
Why it exists: Memory is reconstructive, not a recording. We update our memories of past predictions to align with what actually happened.
Where it shows up: post-mortems (everyone "saw it coming"), historical analysis (events seem inevitable in retrospect), regret.
Category 2: Anchoring and adjustment
5. Anchoring bias
We are unduly influenced by the first piece of information we encounter when making judgments.
Classic experiment (Tversky and Kahneman, 1974): Subjects watched a wheel of fortune that landed on a number, then were asked to estimate the percentage of African countries in the UN. The wheel's number influenced their estimates — even though everyone knew the wheel was random and unrelated.
Where it shows up: salary negotiations (first number offered anchors the discussion), real estate (list price anchors buyers' perception of value), pricing strategy (the "decoy" item makes other options seem reasonable by comparison).
6. Status quo bias
We have a preference for the current state of affairs over change, beyond what rational analysis would justify.
Example: People stay with default options in retirement plans, default 401(k) contribution rates, default privacy settings, even when other choices would be better for them.
Why it exists: The current state is known; changes have uncertain consequences. Combined with loss aversion (see below), this produces strong inertia.
Where it shows up: corporate inertia, voting patterns, choice of doctor, choice of profession.
Category 3: Representativeness
7. Representativeness heuristic
We judge probability based on how much something resembles a typical case, ignoring statistical frequency.
Classic experiment (Tversky and Kahneman, 1983): Subjects were given a description of "Linda" — a woman whose characteristics match those of a feminist activist. They were asked which is more likely:
- (a) Linda is a bank teller.
- (b) Linda is a bank teller and a feminist.
Most subjects chose (b), even though (b) is logically a subset of (a) and therefore cannot be more probable.
The conjunction (bank teller AND feminist) cannot be more probable than the single component (bank teller). The representativeness heuristic ("Linda sounds like a feminist") overrides the logical structure.
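To see the logical structure in isolation, here is a minimal sketch with made-up probabilities; the specific numbers are assumptions for illustration only.

```python
# Conjunction rule: P(A and B) can never exceed P(A), whatever numbers you plug in.
# The probabilities below are purely illustrative assumptions.
p_teller = 0.05                     # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.90      # assumed P(feminist | bank teller), deliberately high

p_teller_and_feminist = p_teller * p_feminist_given_teller  # = 0.045

# Holds for any values in [0, 1]: multiplying by a probability can only shrink it.
assert p_teller_and_feminist <= p_teller
print(f"P(teller) = {p_teller:.3f}, P(teller and feminist) = {p_teller_and_feminist:.3f}")
```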
8. Base rate neglect
We ignore underlying frequency information when judging probabilities.
Classic example: A medical test for a rare disease has 99% accuracy. The disease affects 1 in 1,000 people. If a random person tests positive, what's the probability they have the disease?
Intuition says 99%. The actual answer, by Bayes' theorem, is about 9% — because the rarity of the disease (base rate) dominates the calculation. Most people, including doctors, miss this.
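For concreteness, here is a minimal sketch of the Bayes calculation. It assumes "99% accuracy" means both 99% sensitivity and 99% specificity, an assumption the example above leaves implicit.

```python
# Bayes' theorem for the rare-disease example.
# Assumption: "99% accuracy" = 99% sensitivity and 99% specificity.
prevalence = 1 / 1000            # base rate: 1 in 1,000 people have the disease
sensitivity = 0.99               # P(positive | disease)
false_positive_rate = 0.01       # P(positive | no disease) = 1 - specificity

# Total probability of testing positive.
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Posterior probability of disease given a positive test.
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.1%}")  # about 9.0%
```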
Where it shows up: medical diagnosis, security screening, hiring decisions (judging from interview impressions without regard to baseline success rates).
9. Gambler's fallacy
Believing that past random events affect the probability of future random events.
Example: A roulette wheel has come up red 7 times in a row. People bet on black, believing "it's due."
Why it's wrong: Each spin is independent. The probability of black is the same as it was 7 spins ago.
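A quick simulation makes the independence claim tangible. It models spins as a fair red/black coin, ignoring the green zero (a simplifying assumption):

```python
import random

# Simulate a long run of spins, modelled as a fair red/black coin (green zero ignored).
random.seed(1)
spins = [random.choice("RB") for _ in range(1_000_000)]

# Find every spin that follows a streak of 7 reds and count how often it comes up black.
after_streak = [spins[i] for i in range(7, len(spins)) if spins[i - 7:i] == ["R"] * 7]
p_black = sum(s == "B" for s in after_streak) / len(after_streak)

print(f"streaks found: {len(after_streak)}")
print(f"P(black | 7 reds just happened) = {p_black:.3f}")  # close to 0.5: the streak has no effect
```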
10. Hot hand fallacy
Believing that streaks in random events will continue.
Example: A basketball player has made 5 shots in a row. Spectators (and the player) believe the next shot is more likely to go in.
For pure chance (like coin flipping), this is a fallacy — past success doesn't affect future probability. For skilled activities, the empirical question is more complicated (some research suggests genuine momentum in basketball; other research disputes it).
11. Conjunction fallacy
Believing that the conjunction of two events is more probable than one alone.
Same as the Linda problem. The conjunction "A and B" cannot be more probable than A alone, because A is a superset.
Category 4: Confirmation and motivated reasoning
12. Confirmation bias
We seek, interpret, and remember information that confirms our prior beliefs.
Example: A reader who supports a political party reads articles favorable to that party, dismisses critical articles as biased, and remembers favorable points more vividly. Over time, their beliefs become more entrenched even when contrary evidence accumulates.
Detail: Confirmation Bias in Everyday Decisions.
13. Motivated reasoning
We reason differently when the conclusion matters to us emotionally or politically. We accept weak evidence for conclusions we want; we demand strong evidence against them.
Example: A smoker dismisses studies showing cigarettes cause cancer ("flawed methodology") while accepting at face value studies suggesting any benefit from smoking ("interesting").
Why it exists: Reasoning evolved partly for argumentation (Mercier and Sperber). We're built to find reasons for what we already believe, not to find what's true.
14. Belief perseverance
Beliefs persist even after the evidence that produced them is discredited.
Example: A study is published suggesting X causes Y. Public belief in the connection forms. Later, the study is retracted as flawed. Public belief largely persists.
Why it exists: We don't reset beliefs on the basis of a single piece of evidence; we accumulate them. Once a belief is in place, removing the evidence that produced it doesn't fully remove the belief.
15. Backfire effect
Confronted with evidence against our beliefs, we sometimes become more committed to the beliefs.
Status: Initially supported by Nyhan and Reifler (2010), the backfire effect has been less reliably reproduced in later research. Current consensus: the effect exists but is weaker and more conditional than the original studies suggested. People mostly don't change their minds in response to contrary evidence, but they don't always strengthen their original beliefs either.
Category 5: Loss aversion and framing
16. Loss aversion
Losses feel roughly twice as bad as equivalent gains feel good.
Classic experiment (Kahneman and Tversky, 1979 — the Prospect Theory paper): Subjects were offered a choice:
- (a) A certain gain of $100.
- (b) A 50% chance of gaining $200 and a 50% chance of gaining nothing.
Most choose (a) — risk aversion in the gain frame.
The same subjects were then offered:
- (a) A certain loss of $100.
- (b) A 50% chance of losing $200 and a 50% chance of losing nothing.
Most choose (b) — risk-seeking in the loss frame.
This is the foundation of Prospect Theory, for which Kahneman received the Nobel Prize in Economics in 2002 (Tversky died in 1996 and was not eligible).
Where it shows up: investor behavior (selling winners too soon, holding losers too long), insurance decisions, status quo bias.
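To illustrate the asymmetry, here is a minimal sketch of the prospect-theory value function. The parameters are the commonly cited Tversky and Kahneman (1992) estimates (alpha = beta = 0.88, lambda = 2.25); treat them as assumptions for illustration rather than part of the text above.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function with commonly cited 1992 parameter estimates.

    Gains are evaluated as x**alpha; losses are amplified by the loss-aversion
    coefficient lam, so a loss weighs more than an equal-sized gain.
    """
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

gain = prospect_value(100)    # subjective value of winning $100
loss = prospect_value(-100)   # subjective value of losing $100

print(f"v(+100) = {gain:.1f}, v(-100) = {loss:.1f}")    # roughly +57.5 vs -129.4
print(f"|loss| / gain = {abs(loss) / gain:.2f}")        # about 2.25: losses loom larger
```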
17. Endowment effect
We value things we own more than equivalent things we don't own.
Classic experiment (Kahneman, Knetsch, and Thaler, 1990): Half the subjects were given a coffee mug; half were not. Both groups were then invited to trade. Those who had been given the mug demanded much higher prices to sell it than those without one were willing to pay to buy it.
Why it exists: Loss aversion. Giving up the mug feels like a loss, so "what would I sell this for?" and "what would I pay to buy this?" are not symmetric questions. The answers differ.
18. Sunk cost fallacy
Continuing to invest in a failing course of action because of past investments.
Example: You've already spent $500 on something that isn't working out. You keep going "because you've already paid for it." But the $500 is spent regardless. The rational question is: from here forward, is continuing worth it?
Why it exists: Loss aversion (acknowledging sunk cost feels like accepting a loss) combined with effort justification.
Where it shows up: failed projects continued past their stop date, bad relationships maintained out of "investment," persistence with strategies that aren't working.
19. Framing effect
Identical options described differently produce different choices.
Classic experiment (Tversky and Kahneman): Medical treatment with two framings:
- "90% survival rate" → more people choose treatment.
- "10% mortality rate" → fewer people choose treatment.
Same information; different choices.
Where it shows up: marketing (price framing), policy presentation, medical communication, negotiation.
Category 6: Self-related biases
20. Overconfidence bias (and Dunning-Kruger effect)
We systematically overestimate our knowledge, abilities, and the accuracy of our beliefs.
The Dunning-Kruger effect is a specific form: people with low ability in a domain often overestimate their ability, because the skills needed to perform well are the same skills needed to recognize poor performance.
Calibration: When experts are asked to provide 90% confidence intervals, they often miss the true value 30-50% of the time. They are overconfident in their estimates.
Where it shows up: project planning (almost everyone is overconfident about timelines), financial forecasting, business strategy, intellectual self-assessment.
21. Optimism bias
We tend to believe we are less likely to experience negative events than average.
Example: Most people believe they are above-average drivers (impossible if "average" is the median). Most smokers believe they are less likely than other smokers to get cancer.
Where it shows up: financial planning (people don't save enough because they underestimate negative scenarios), health behaviors, project planning.
22. Illusion of control
We overestimate our ability to influence outcomes that are partly or entirely random.
Example: People who pick their own lottery numbers feel more confident about winning than people who get random numbers, even though the probability is identical.
Where it shows up: gambling, day trading, business decisions in high-variance industries.
23. Self-serving bias
We attribute our successes to internal factors (skill, effort) and our failures to external factors (luck, circumstance, others).
Example: After a winning quarter, a manager attributes it to her strategic decisions. After a losing quarter, she attributes it to market conditions, competition, regulatory changes.
Where it shows up: performance reviews, post-mortems, sports analysis, political analysis.
24. Fundamental attribution error
We attribute others' behavior to their character; we attribute our own behavior to circumstances.
Example: Another driver cuts you off — they're a jerk. You cut someone off — you were in a hurry.
Why it exists: We see others' behavior; we don't see their context. We see our context; we don't always see how our behavior appears to others.
25. Actor-observer asymmetry
Related: we explain our own behavior differently when we're the actor vs. the observer.
Explaining our own behavior as the actor, we emphasize situational factors ("I was tired"). Explaining others' behavior as observers, we emphasize dispositional factors ("they were lazy").
Category 7: Social biases
26. In-group bias
We favor members of our own group over members of other groups, often without realizing it.
Examples: Hiring decisions (preferring people who share our background), patriotism, sports loyalty.
Why it exists: Evolutionary — in-group cohesion was adaptive. In modern multicultural contexts, it produces discrimination and tribalism.
27. Bandwagon effect
We adopt beliefs and behaviors as they become more popular.
Example: Stocks rising in price draw in more buyers, who push prices higher, drawing in more buyers. Bubbles.
Where it shows up: stock market bubbles, fashion, social movements.
28. Groupthink
Groups make worse decisions than their individual members would because of conformity pressure.
Classic example: Bay of Pigs invasion (1961). Members of the Kennedy administration who individually had serious doubts about the plan agreed in group meetings, partly to maintain group cohesion. The result: a disastrous policy decision.
Why it exists: Social pressure to conform, particularly when the group is high-status or cohesive. Disagreement feels costly.
Where it shows up: corporate boards (especially CEO-dominated), investment committees, government policy groups.
Why biases exist
A critical question: if these biases are so common, why do they exist? Three answers, in increasing depth.
Answer 1 — Computational efficiency. The mind has limited processing capacity. Heuristics provide fast judgments in situations where slow, careful analysis would be too costly. Many biases are heuristics that work well most of the time and fail in specific situations.
Answer 2 — Adaptive in evolutionary contexts. Many biases were adaptive in ancestral environments. Loss aversion makes sense when small losses (lost food, lost shelter) could mean death. The availability heuristic makes sense when recent events were usually the most relevant ones. The biases are mismatched to modern environments — but they're not random errors.
Answer 3 — Reasoning evolved for argumentation, not truth. Mercier and Sperber's The Enigma of Reason (2017) argues that reasoning evolved primarily for social and argumentative purposes: convincing others, justifying our own behavior, and spotting weak arguments from opponents. This explains confirmation bias and motivated reasoning naturally — they're features of reasoning's actual function, not bugs.
These three answers aren't mutually exclusive. Biases are computationally efficient, evolutionarily adaptive, and shaped by reasoning's social functions.
What you can actually do about biases
The honest answer: not as much as you'd hope.
What doesn't work (or works less than expected):
- Just knowing about biases. Studies show that simply teaching people about cognitive biases does not reliably reduce them in real decisions. People learn the concepts but apply them to others' thinking, not their own.
- General "be more careful" instructions. Vague increases in care don't reliably correct specific biases.
- Confidence in being unbiased. People who believe themselves unbiased tend to be more biased, not less — they don't engage System 2.
What works (modestly):
- Pre-commitment to procedures. Decisions made by following a procedure (checklist, structured analysis) are less biased than decisions made by judgment. This is why airline pilots use checklists.
- Outside view (reference class forecasting). Comparing your situation to a class of similar situations and using base rates from that class. Kahneman recommends this as the most powerful debiasing technique.
- Pre-mortem analysis. Imagine the project failed; explain why. Surfaces concerns that confirmation bias would suppress.
- Adversarial collaboration. Working with someone who has different priors. Forces argumentation, exposes weak reasoning.
- Structured decision-making tools. Decision matrices, scoring systems, formal forecasting — these constrain System 1 from dominating.
- Time delays for emotionally loaded decisions. Sleeping on it. Emotional intensity decays; System 2 can engage when System 1's grip loosens.
- Calibration training. Specifically for overconfidence. Make predictions with explicit probabilities; track results; recalibrate. Superforecasters do this systematically.
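As a concrete illustration of calibration training, here is a minimal sketch of tracking forecasts with the Brier score. The forecasts and outcomes are placeholder data, not real predictions.

```python
# Calibration tracking with the Brier score (lower is better; 0.25 = always guessing 50%).
# The forecasts and outcomes below are placeholder data for illustration.
forecasts = [0.9, 0.8, 0.6, 0.95, 0.7, 0.3]   # stated probabilities that each event happens
outcomes  = [1,   1,   0,   0,    1,   0]     # what actually happened (1 = yes, 0 = no)

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# A simple calibration check: among events you called ~90% likely, did ~90% occur?
high_confidence = [(f, o) for f, o in zip(forecasts, outcomes) if f >= 0.85]
hit_rate = sum(o for _, o in high_confidence) / len(high_confidence)
print(f"Hit rate on >=85% calls: {hit_rate:.0%} (should be close to your stated confidence)")
```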
Frequently asked
- Are cognitive biases the same as logical fallacies?
- No, though they're related. Biases are patterns in how humans think (psychological). Fallacies are flaws in arguments (logical/structural). Biases often produce fallacies — confirmation bias produces cherry-picking, for example — but they live at different levels.
- Can you become bias-free?
- Probably not. Biases are deep features of how the human mind processes information. You can reduce specific biases in specific decisions through structured procedures, but eliminating biases entirely isn't realistic. The goal is *less* bias, not *no* bias.
- Are biases always bad?
- No. Many biases are adaptive in many contexts. Loss aversion protects you from disastrous gambles. The availability heuristic alerts you to real recent threats. Confirmation bias maintains stable beliefs through normal noise. The biases become problematic when they're misapplied — when the situation calls for different reasoning.
- Why don't more people use debiasing techniques?
- Because they're costly. They require time, attention, and the willingness to consider that you might be wrong. Most of the time, the cost-benefit favors using System 1's fast judgments. Debiasing is appropriate for high-stakes decisions; using it for everything would be paralyzing.
- Is the Dunning-Kruger effect real?
- The original effect (Dunning and Kruger 1999) has been challenged in recent research. The pattern they observed in their data may partly reflect statistical artifacts (regression to the mean) rather than psychological reality. The core observation — that some people overestimate their ability in domains where they lack the skills to evaluate themselves — appears robust, but the precise magnitude is contested.
- How do biases affect organizations?
- Organizations amplify biases through group dynamics (groupthink, conformity) and incentive structures (motivated reasoning to please leaders, sunk cost in failing projects, hindsight bias in post-mortems). They can also reduce biases through structured decision-making, diverse teams, and procedural discipline. Whether organizations are more or less biased than individuals depends on their design.
- What's the most consequential bias in everyday life?
- Probably confirmation bias and motivated reasoning combined. They distort how we process political information, how we evaluate evidence, how we update beliefs. Loss aversion is probably second — it underlies many financial errors and many resistance-to-change behaviors.
Cited works & further reading
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. — Essential.
- Tversky, A. and Kahneman, D. (1974). "Judgment under Uncertainty: Heuristics and Biases." Science, 185: 1124–1131. — The founding paper.
- Kahneman, D. and Tversky, A. (1979). "Prospect Theory: An Analysis of Decision under Risk." Econometrica, 47: 263–291.
- Mercier, H. and Sperber, D. (2017). The Enigma of Reason. Harvard University Press.
- Stanovich, K. (2011). Rationality and the Reflective Mind. Oxford University Press.
- Ariely, D. (2008). Predictably Irrational. HarperCollins. — Popular treatment.
- Tetlock, P. and Gardner, D. (2015). Superforecasting. Crown. — On calibration.
About the author
Tim Sheludyakov writes the Stoa library.
By Tim Sheludyakov · Edited 2026-05-13