The Conjunction Fallacy
People consistently judge a specific, detailed scenario as more probable than a broader one that contains it. This violates a foundational rule of probability. Understanding why it happens, and what it looks like in the wild, is the best protection against it.
Opening Hook
Meet Linda. She is 31 years old, single, outspoken, and very bright. As a student she majored in philosophy, was deeply concerned with issues of discrimination and social justice, and participated in anti-nuclear demonstrations.
Now answer this question: which of the following is more probable?
(a) Linda is a bank teller.
(b) Linda is a bank teller who is active in the feminist movement.
When Amos Tversky and Daniel Kahneman ran this experiment in the early 1980s, 85 percent of respondents chose option (b). The result held across different populations, different phrasings, and different professional groups, including statistically trained graduate students who had just attended lectures on probability theory.
The correct answer is (a). It cannot be anything else.
Here is why. Option (b) is option (a) with an extra condition attached. For Linda to be a feminist bank teller, she first has to be a bank teller. The set of feminist bank tellers is a subset of the set of bank tellers. Every feminist bank teller is a bank teller, but not every bank teller is a feminist. So the probability of being a bank teller who is also a feminist cannot be higher than the probability of simply being a bank teller. It can only be equal to it or lower.
If this is not immediately obvious, try a physical version. Imagine a bag containing 100 balls. Some are red. Some are red and spotted. The number of red-and-spotted balls cannot exceed the number of red balls, because every red-and-spotted ball is, by definition, also a red ball. The more specific category cannot be bigger than the general category that contains it.
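The bag example can be checked in a few lines of Python. The counts below are invented purely for illustration; any split of the 100 balls gives the same conclusion:

```python
# Illustrative, assumed counts for a bag of 100 balls:
# 10 red-and-spotted, 20 red but unspotted, 70 neither.
balls = (
    [{"red": True,  "spotted": True}]  * 10
    + [{"red": True,  "spotted": False}] * 20
    + [{"red": False, "spotted": False}] * 70
)

red = sum(b["red"] for b in balls)
red_and_spotted = sum(b["red"] and b["spotted"] for b in balls)

print(red, red_and_spotted)   # 30 10
assert red_and_spotted <= red  # the subset can never outnumber the set
```

However you redistribute the balls, the final assertion cannot fail: every red-and-spotted ball is counted in both tallies.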
Most people, shown this argument, agree with it immediately. They also still get the Linda question wrong.
That is the puzzle Tversky and Kahneman spent years unravelling, and the answer they found reveals something important about how the human mind actually works, as opposed to how we assume it works.
The Concept
The conjunction fallacy is the error of judging a conjunction of two events as more probable than one of those events alone. “Conjunction” simply means the joint occurrence of two things: A and B both being true. The fallacy is treating that compound event as more likely than the simpler event it contains.
The formal rule being violated is basic enough that it appeared in Unit 1.3: the probability of A and B cannot exceed the probability of A, and it cannot exceed the probability of B. Written out: P(A and B) is always less than or equal to P(A), always less than or equal to P(B). Always. Without exception.
This is not a subtle statistical result. It follows directly from the definition of probability. Any rule that assigns a higher probability to “A and B” than to “A alone” is not a probability rule. It is something else masquerading as one.
So why do 85 percent of people, including trained statisticians, make this error with Linda?
The answer Tversky and Kahneman identified is representativeness. When people are asked how probable something is, they frequently substitute a different question: how well does this description match my mental picture of that category? Linda, with her philosophy degree, her social activism, and her concern for justice, fits the mental image of a feminist activist almost perfectly. She fits the image of a bank teller rather badly. When people compare the two options, they are not comparing probabilities at all. They are comparing fit, comparing similarity, comparing how much Linda “sounds like” each description. And in that contest, option (b) wins easily.
The mind runs a representativeness calculation and reports the result as a probability. The output feels like a probability judgment. It is nothing of the sort.
This substitution is the engine behind the error. Representativeness is a reasonable heuristic in many situations: things that are more typical of a category often are more common. But it fails badly when specificity is involved, because specificity always reduces probability even as it increases narrative coherence. A story that adds details becomes more vivid, more believable, more like something that would really happen. But each added detail also adds a new condition that must be satisfied. Coherence goes up; probability goes down. The mind tracks coherence. It should be tracking probability.
This failure shows up in two domains beyond psychology experiments.
In legal reasoning, the conjunction fallacy inflates the persuasive weight of detailed accusations. A prosecution narrative that specifies the defendant's motive, method, timing, and opportunity feels more credible than a vague assertion of guilt, even though a more specified claim can never be more probable than the less specified one it entails. Jurors are asked to assess probability, but they respond to narrative coherence: detailed, internally consistent stories are rated as more likely than their own components. Experimental research has found evidence of conjunction errors in legal probability judgments among both legally trained and untrained participants.
In investment analysis, the same trap appears as scenario thinking. An analyst who projects that a company will report strong earnings next quarter because of a specific product launch, specific cost reductions, and a specific favourable currency movement is constructing a story that may feel compelling precisely because it is specific. But the probability of that exact combination of events is lower than the probability of any single element occurring. The more complete the narrative, the lower its probability, and the more convincing it feels.
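The arithmetic behind the investment example can be made explicit. The probabilities below are hypothetical placeholders; even under the generous assumption that the three elements are independent, the full narrative is less likely than any one of them:

```python
# Hypothetical, illustrative probabilities for each scenario element.
p_product_launch_succeeds = 0.7
p_cost_cuts_delivered = 0.6
p_currency_moves_favourably = 0.5

# The complete narrative requires all three to come true.
p_full_story = (p_product_launch_succeeds
                * p_cost_cuts_delivered
                * p_currency_moves_favourably)

print(round(p_full_story, 2))  # 0.21, well below any single element
```

Each added element multiplied in another factor below one, which is exactly why a richer story must be a less probable one.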
Why It Matters
The most consequential version of this error is in intelligence and strategic analysis, where detailed narratives about future threats are routinely rated as more likely than simpler, more general ones.
Tversky and Kahneman demonstrated this directly. They asked policy experts to estimate the probability that the Soviet Union would invade Poland in the following year and that the United States would subsequently break off diplomatic relations. The experts assigned a higher average probability to this two-step scenario than experts had assigned to the simpler question of whether the United States would break off diplomatic relations with the Soviet Union for any reason at all. The specific causal story, Soviet invasion leading to diplomatic rupture, was rated as more likely than the broader outcome it was one path towards.
Think about what this means in practice. An intelligence report that describes a precise chain of events leading to a terrorist attack, specifying the group, the method, the target, and the triggering conditions, will be read as more credible and more probable than a report that simply says “increased likelihood of terrorist activity in this region.” The detailed report is coherent. It hangs together. It feels like the truth. But by the laws of probability it can only be less likely than the general prediction it is a specific version of.
This is not a marginal problem. It means that the more an analyst works to produce a complete, coherent, plausible account of how a threat might materialise, the more their readers may overestimate the probability of that specific account and underestimate the probability of the general category of threat. The vivid story crowds out the base rate.
Investment scenarios built on specific multi-factor narratives carry the same flaw. A business case that specifies exactly how and why a new market will open, exactly which competitors will fail, and exactly what regulatory conditions will hold is likely to receive more investment support than a simple statement of the general opportunity, even though the specific case is arithmetically less probable than the general one. Board members are approving narrative coherence and calling it probability assessment.
Legal arguments built on richly specified evidence chains carry it too. A case theory that accounts for every piece of evidence in a single coherent story is more persuasive than one that leaves gaps, but the probability of that complete theory is constrained by every one of its specified elements.
How to Spot It
The tell is a detailed narrative rated as more probable than the broader category that contains it.
The documented case is the policy expert experiment conducted by Tversky and Kahneman as part of their original 1983 research programme. Experts in geopolitics assigned higher probability to a specific two-event chain, Soviet invasion followed by US diplomatic break, than to the simpler event of a US diplomatic break for any reason. They were not naive students. They were people paid to make probability assessments for a living.
The mechanism was the same as the Linda problem. The two-step story had a causal logic that the single event lacked. It told a coherent, plausible tale about how things would unfold. That narrative quality made it feel more probable. Probability had been replaced by plausibility, and no one in the room noticed.
In practice, watch for this pattern: a claim is presented in the form of a specific scenario, and the scenario is used to argue that an outcome is probable. The more detailed the scenario, the more it has this effect. Each new specified element, the who, the how, the when, the why, makes the story easier to picture and harder to discount. It also makes it formally less probable. You are being given an increasingly vivid mental film and asked to assess its likelihood. The film is getting more compelling while the probability is getting smaller.
The correction is mechanical. When you encounter a detailed scenario and find yourself believing it, ask: what is the probability of each element of this scenario, taken separately? For all of them to occur together, each one has to come true. The joint probability is constrained by every link in the chain.
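That mechanical correction can be sketched as a small function. The link probabilities here are placeholders you would estimate yourself for a real scenario:

```python
def scenario_probability(link_probabilities):
    """Probability that every link in the chain comes true,
    assuming the links are independent."""
    product = 1.0
    for p in link_probabilities:
        product *= p
    return product

# Hypothetical estimates for a four-link scenario.
links = [0.5, 0.4, 0.6, 0.3]

print(round(scenario_probability(links), 3))  # 0.036

# Whatever the dependence between links, the joint probability can
# never exceed the weakest single link.
assert scenario_probability(links) <= min(links)
```

Four individually plausible links combine into a scenario with under a four percent chance, and even if the links reinforce each other the joint probability is still capped by the weakest one.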
Thinking in frequencies rather than in stories helps. Instead of asking “does this seem plausible?”, ask “out of 100 times this situation arises, how many would involve this exact combination of events?” The frequency question forces you to think about how often each element occurs, which makes the shrinking probability visible.
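The "out of 100 times" framing can be simulated directly. The element probabilities below are invented for illustration, and the elements are drawn independently, which is itself an assumption:

```python
import random

random.seed(1)

# Out of 100 repetitions of the situation, how often does this exact
# combination of (independently drawn, hypothetical) elements occur?
runs = 100
element_probs = [0.3, 0.4, 0.5]  # e.g. specific group, method, target

all_elements = sum(
    all(random.random() < p for p in element_probs)
    for _ in range(runs)
)
# Expected around 0.3 * 0.4 * 0.5 * 100 = 6 of the 100 runs.
print(f"{all_elements} out of {runs} runs")
```

Run it a few times with different seeds: the exact combination shows up in only a handful of the hundred runs, even though each individual element occurs in thirty to fifty of them.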
Your Challenge
An intelligence agency produces two assessments of a regional threat.
Assessment A: “There is an elevated risk of a significant terrorist attack in the region over the next twelve months.”
Assessment B: “There is an elevated risk of a significant terrorist attack in the region over the next twelve months, specifically by a domestic cell responding to the upcoming election results, targeting transport infrastructure in the capital, using vehicle-based methods.”
A senior official reads both reports. She notes that Assessment B is “much more informative” and rates it as more likely to be an accurate description of what will happen.
What error is she making? Which assessment must be assigned the higher probability, and why? What would you need to know about each specified element in Assessment B to estimate how much lower its probability is compared to Assessment A?
There is no answer on this page.
References
Original experiment and formal analysis: Tversky, A., and Kahneman, D., “Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment,” Psychological Review, 90(4), 293–315 (1983). Available at: https://pages.ucsd.edu/~cmckenzie/TverskyKahneman1983PsychRev.pdf
Book treatment of the Linda problem: Kahneman, D., Thinking, Fast and Slow, Farrar, Straus and Giroux (2011), Chapter 15: “Linda: Less Is More.” The policy expert experiment (Soviet invasion scenario) is also discussed in this chapter and in the original 1983 paper.
Further experimental evidence: Sides, A., Osherson, D., Bonini, N., and Viale, R., "On the reality of the conjunction fallacy," Memory and Cognition, 30(2), 191–198 (2002). Available at: http://fitelson.org/confirmation/confal.pdf
Legal probability judgment study: Villejoubert, G., and Mandel, D.R., “Is There a Conjunction Fallacy in Legal Probabilistic Decision Making?”, Frontiers in Psychology (2018). Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC5895783/
Wikipedia overview of the conjunction fallacy with further citations: https://en.wikipedia.org/wiki/Conjunction_fallacy