The Availability Heuristic

We judge the probability of an event by how easily examples come to mind. Vivid, recent, or emotionally charged events are overweighted. This one cognitive shortcut shapes public policy, insurance markets, and personal decisions in ways that have nothing to do with actual risk.

Time: 12 minutes
Requires: Unit 1.1

The Hook

Which kills more Americans each year: sharks or lightning?

Most people, if they give it any thought at all, picture the shark. They have seen the films. They have read the headlines. They have a clear, terrifying image of it. The lightning strike is somehow less defined, less cinematic.

Lightning kills roughly 20 to 27 Americans per year. Sharks kill, on average, about one. You are somewhere between twenty and twenty-seven times more likely to be killed by lightning than by a shark, depending on the year you pick.
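If you would rather verify the arithmetic than take it on trust, the comparison fits in a few lines of Python, using the figures just quoted:

```python
# Annual US fatality figures quoted above.
lightning_low, lightning_high = 20, 27   # lightning deaths per year
shark_deaths = 1                         # average shark-attack deaths per year

ratio_low = lightning_low / shark_deaths
ratio_high = lightning_high / shark_deaths
print(f"Lightning is {ratio_low:.0f} to {ratio_high:.0f} times deadlier than sharks")
```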

Almost everyone gets this wrong. And the reason they get it wrong is not stupidity. It is a specific, named cognitive mechanism that affects every human mind regardless of intelligence or education. Once you understand it, you will start finding it everywhere, because it is everywhere.


The Concept

In 1973, psychologists Amos Tversky and Daniel Kahneman published a paper called “Availability: A Heuristic for Judging Frequency and Probability.” A heuristic, in this context, means a mental shortcut: a rule of thumb the mind uses to make judgements quickly, without doing the full calculation. The availability heuristic is the rule that goes like this: if examples come to mind easily, the thing they are examples of must be common.

This is not an irrational rule. In most everyday contexts, it is a reasonable proxy. Things you have encountered often are more easily recalled than things you have rarely encountered, so ease of recall tends to track frequency reasonably well. If you are trying to estimate whether it rains more in Glasgow or Seville without doing any research, the fact that you can quickly picture a dozen rainy days in Glasgow is a fair guide.

The problem arises when ease of recall is decoupled from actual frequency. And the force most reliably responsible for that decoupling is the media.

Media coverage is not proportional to risk. It is proportional to vividness, novelty, and emotional impact. A shark attack on a beach in Florida generates dozens of news reports, photographs, survivor interviews, and documentary follow-ups. The 17,000 Americans who will die of diabetes-related causes in the same month generate almost no news coverage at all. The shark attack is a story. The diabetes deaths are a statistic.

The result is systematic distortion. Events that are rare but dramatic become highly available in memory. Events that are common but undramatic remain invisible. And because availability feels like a signal about frequency, people walk around with a map of risk that is almost precisely inverted in the places that matter most.

This distortion can amplify itself in a process that legal scholars Timur Kuran and Cass Sunstein named the availability cascade. The mechanism works like this. A dramatic event occurs. Media coverage makes it highly available in the public mind. People express alarm. The alarm itself becomes news, generating more coverage. More coverage increases availability further. More availability increases alarm. Political pressure builds for a response. Policy is made in response to the cascade rather than in response to the underlying risk. The event has, by this point, grown to occupy a place in public consciousness that bears no relationship to its statistical significance.
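The self-amplifying structure described above can be sketched as a toy feedback loop. The gain figure below is purely illustrative, not an empirical estimate; the point is only that any gain above 1 produces runaway growth:

```python
# Toy model of an availability cascade: alarm becomes news, news raises
# availability, availability raises alarm. A gain above 1 means each
# news cycle amplifies the last; the value 1.3 is a made-up illustration.
alarm = 1.0
gain = 1.3
for cycle in range(1, 6):
    coverage = gain * alarm    # public alarm generates coverage
    alarm = gain * coverage    # coverage raises availability, hence alarm
    print(f"news cycle {cycle}: coverage {coverage:.1f}, alarm {alarm:.1f}")
```

The numbers mean nothing in themselves; the shape is the point. Once the loop closes, the size of the public reaction is set by the feedback, not by the triggering event.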

The corrective the availability heuristic demands is straightforward to state and genuinely difficult to apply: replace the vividness of individual examples with frequency data about the population. Do not ask “can I think of a case?” Ask “what is the actual rate?” Do not let narrative substitute for numbers. This is easier said than done, because the narrative is always right there, vivid and compelling, and the frequency data requires you to go and look for it.


Why It Matters

The terrorism and diabetes comparison is the clearest illustration of the policy consequences.

In the years following the September 11 attacks, the United States spent trillions of dollars and reorganised entire government agencies around the threat of terrorism. In the decade from 2001 to 2011, domestic terrorist attacks killed approximately 3,380 Americans, including those killed on September 11 itself. Diabetes mellitus killed approximately 73,000 Americans per year across the same period, meaning that in any single year, diabetes killed roughly twenty times as many Americans as terrorism killed in a decade.
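The "roughly twenty times" claim is easy to check against the figures quoted above:

```python
# Figures quoted above for the decade 2001-2011.
terrorism_deaths_decade = 3_380    # including September 11 itself
diabetes_deaths_per_year = 73_000

ratio = diabetes_deaths_per_year / terrorism_deaths_decade
print(f"Diabetes deaths in one year ≈ {ratio:.1f}x terrorism deaths in the decade")
```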

This is not an argument that terrorism does not matter or that it should not be countered. Security has dimensions, including deterrence and psychological security, that are not captured in body counts. The point is narrower: the allocation of public attention and policy energy was shaped in significant part by availability, not by comparative risk. The shark on the poster generates more votes than the slow rise in blood sugar.

Aviation versus road travel tells the same story at the personal level. Commercial aviation accidents kill, in a typical year, fewer than 100 Americans. Road accidents kill approximately 40,000. A person who is frightened of flying but comfortable driving is, in statistical terms, choosing the more dangerous option every time they get in the car. The fear of flying is driven by availability: a plane crash is vivid, total, and heavily covered. A car accident is private, dispersed, and barely newsworthy unless multiple people die.
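The same kind of check works here, with one caveat worth building into any such comparison: these are raw annual counts, not rates per mile or per trip, so treat the ratio as an order-of-magnitude illustration rather than a precise per-journey risk figure:

```python
# Typical annual US fatality counts quoted above. These are raw counts,
# not exposure-adjusted rates; the ratio is an order-of-magnitude guide.
road_deaths = 40_000
aviation_deaths = 100   # upper bound for a typical year

print(f"Road deaths outnumber commercial aviation deaths ~{road_deaths // aviation_deaths}x")
```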

Gerd Gigerenzer, a German psychologist who studies risk perception, documented the cost of this misperception directly. After the September 11 attacks, millions of Americans avoided flying and drove instead. His 2004 analysis estimated that this shift caused approximately 353 additional road fatalities in the three months following the attacks. Later, more fine-grained analyses extending the window to a full year put the figure higher, at well over 1,000 excess road deaths. The people who died in those cars were, in a measurable sense, casualties of the availability heuristic. They were killed by fear calibrated on media coverage rather than on actual comparative risk.

Post-disaster insurance purchasing is a smaller but revealing example of the same mechanism. After a flood or earthquake, sales of insurance against that specific disaster spike sharply, then fade as the images leave the news cycle. People’s assessment of their risk moves almost exactly in step with the availability of recent images of the disaster, not with any underlying change in actual risk. The ground does not become more earthquake-prone because you saw footage of one last week.


How to Spot It

The tell for availability-driven reasoning is a mismatch between emotional weight and statistical frequency. When the response to a risk is large relative to the number of people it actually affects, when the policy argument rests heavily on vivid case studies rather than comparative figures, and when the urgency began immediately after a high-profile incident rather than in response to accumulating evidence, you are probably looking at an availability cascade.

The Alar scare in 1989 is one of the most thoroughly documented examples. Alar was a chemical used to regulate the ripening of apples. In February 1989, a CBS “60 Minutes” report warned that Alar was “the most potent cancer-causing agent in our food supply” and posed a cancer risk, particularly to children. The report drew on a model produced by the Natural Resources Defense Council. Apple sales collapsed immediately. Schools removed apples from cafeterias. The apple industry lost approximately $250 million. Alar was withdrawn from food uses later in 1989 after the EPA moved to ban it, though subsequent risk analyses suggested the original cancer risk estimate rested on an extrapolation from extremely high doses in animal studies that did not translate meaningfully to human dietary exposure.

The availability cascade was textbook: a single vivid news story, a sympathetic victim class (children), heavy media repetition, public alarm, reputational pressure on retailers and schools to act visibly, and policy response calibrated to the cascade rather than to a sober comparative risk assessment. The underlying science was disputed. The coverage was not.

The corrective is always the same: go looking for the base rate. How many people does this actually affect, per year, in the population? What is the comparative figure for risks that are not in the news? If those questions are hard to answer from the coverage itself, that is diagnostic. A news environment built on availability will systematically suppress the denominator.
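Searching out the denominator can itself be made mechanical. A minimal sketch, using counts quoted in this unit and assuming a US population of roughly 330 million:

```python
# Convert raw annual death counts into rough per-person annual odds.
US_POPULATION = 330_000_000   # rough assumption, for illustration only
annual_us_deaths = {
    "diabetes": 73_000,       # 2001-2011 average quoted above
    "road accidents": 40_000,
    "lightning": 24,          # midpoint of the 20-27 range
    "shark attacks": 1,
}
for cause, deaths in annual_us_deaths.items():
    print(f"{cause}: roughly 1 in {US_POPULATION // deaths:,} per year")
```

None of these odds is precise, and precision is not the point. The point is that once the denominator is written down, the comparison becomes impossible to fudge.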


Your Challenge

Here are five causes of death in the United States. Before you read any further, rank them from most to least common in terms of annual deaths.

The five causes are: heart disease, homicide, diabetes, commercial airline crashes, and lightning strikes.

Write down your ranking. Then, alongside each one, write down how confident you feel in its position.

Once you have your ranking, find the actual figures from the CDC’s National Vital Statistics Reports. Compare your ranking with reality. Then ask yourself: for each cause you misranked, where does your mental image of that cause come from? How many news stories, films, or conversations contributed to your sense of how dangerous it is? And how many actual death statistics did?

There is no answer key here, because the exercise is about noticing the process, not getting the right number. The discomfort of a large gap between your felt sense of risk and the statistical record is the beginning of calibrated thinking.


References

  1. Tversky A, Kahneman D. Availability: A heuristic for judging frequency and probability. Cognitive Psychology. 1973;5(2):207–232. The original paper naming and defining the availability heuristic.

  2. Kahneman D. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux; 2011. Chapters 12–13 cover availability and its policy consequences in full, accessible depth.

  3. Kuran T, Sunstein CR. Availability cascades and risk regulation. Stanford Law Review. 1999;51(4):683–768. The paper that named the availability cascade and documented its role in the Alar scare, Love Canal, and TWA Flight 800 panics.

  4. Gigerenzer G. Dread risk, September 11, and fatal traffic accidents. Psychological Science. 2004;15(4):286–287. The study estimating 353 excess road fatalities in the three months after 9/11 from people switching from flying to driving.

  5. Gaissmaier W, Gigerenzer G. 9/11, Act II: A fine-grained analysis of regional variations in traffic fatalities in the aftermath of the terrorist attacks. Psychological Science. 2012;23(12):1449–1454. A broader examination of driving fatalities following 9/11, with higher estimates of excess deaths.

  6. Centers for Disease Control and Prevention. National Center for Health Statistics. Deaths and Mortality. cdc.gov/nchs/fastats/deaths.htm. Source for 2023 US mortality figures: heart disease (680,909 deaths), diabetes (approximately 100,000 deaths annually), homicide (approximately 22,000 deaths annually).

  7. International Shark Attack File, Florida Museum of Natural History, University of Florida. Risk of Death: Lightning Strikes vs Shark Attacks. floridamuseum.ufl.edu/shark-attacks/odds/compare-risk/lightning-strikes. Source for the lightning vs shark comparison: lightning kills approximately 20–27 Americans per year; shark attacks kill approximately one.

  8. National Highway Traffic Safety Administration and National Safety Council. Annual road traffic fatality data. Source for approximately 40,000 annual road deaths in the United States.

  9. Aviation Safety Network / National Transportation Safety Board. Annual civil aviation fatality data. Source for commercial aviation deaths typically under 100 per year in the United States.