
10-1: Heuristics & Biases

Psychology of Learning

Module 10: Decision-Making 2

Part 1: Heuristics & Biases

Looking Back

In Module 09, we explored decision-making from empirical, theoretical, & prescriptive/descriptive perspectives. Normative models—expected utility theory, subjective utility theory, utilitarianism, & probability theory—prescribe optimal decision-making strategies. Descriptive models—satisficing, prospect theory, & regret theory—reveal how people actually decide, often deviating systematically from normative prescriptions. But most real-world decisions involve uncertainty—incomplete information, unpredictable outcomes, & time pressure. How do people make judgments & choices under uncertainty? They use cognitive shortcuts—heuristics—that enable rapid decisions with limited information.

The Role of Intuition in Decision-Making

Intuition is a type of thinking that involves understanding that is quick & effortless & often involves insight. Generally, people cannot verbalize the thought processes leading to intuitive judgments—they “just know” something feels right or wrong (Kahneman, 2011).

Intuition operates without conscious deliberation. When you meet someone & immediately sense they’re trustworthy or untrustworthy, that’s intuition. When you instantly recognize a chess pattern & know the right move without calculating variations, that’s intuition. When you read a sentence & immediately detect that something’s grammatically wrong without being able to articulate the rule, that’s intuition. These rapid, effortless judgments feel compelling—we experience strong confidence even when we can’t explain our reasoning.

Unfortunately, our intuitions also often fail us & lead to misjudgments. Intuition works well in stable, predictable environments where we’ve accumulated extensive experience. Chess masters’ intuitions about good moves reflect decades of pattern recognition. Native speakers’ grammatical intuitions reflect years of language exposure. But intuition fails spectacularly in novel situations, statistically complex problems, & contexts where feedback is delayed or absent. Our intuitions about probability are notoriously poor. Our intuitions about causal relationships often mislead us. Our intuitions about other people’s thoughts & motivations frequently prove wrong.

Why Use Intuition Despite Its Failures?

If intuition can lead to such stunning failures in thinking, why would we possibly use it instead of cold, logical, & rational thought? Because we don’t have the time or cognitive resources for exhaustive analysis of every decision. The human brain evolved to make rapid decisions with incomplete information in environments demanding immediate action. A predator approaches—there’s no time to calculate probabilities & expected utilities. You must react instantly based on pattern recognition & heuristic processing.

Modern life presents thousands of decisions daily. What to wear, what to eat, which route to drive, how to respond to an email, whether to trust a claim, which product to buy. Analyzing each decision rationally would paralyze us. Intuition enables rapid, “good enough” decisions that free cognitive resources for situations truly requiring deliberation. The key is knowing when to trust intuition (familiar domains with rapid feedback) & when to override it with deliberate analysis (novel situations, statistical problems, high-stakes decisions).

Intuition Requires Mastery

Though intuition seems to involve no conscious thinking, it can generally occur only after one has mastered a skill. An expert in English grammar (often a native speaker) can look at a sentence & instantly know that it’s wrong—“That don’t sound right”—even though the expert cannot articulate which rule was broken. This grammatical intuition develops through years of language exposure, allowing automatic pattern recognition. Beginners lack this intuition; they must consciously apply rules.

Similarly, chess masters develop intuition about board positions through thousands of hours of practice. They see patterns instantly that beginners must analyze consciously. Medical diagnosticians develop intuition about diseases through extensive experience with patients. Experienced teachers develop intuition about which students need help. This expertise-based intuition is reliable within its domain but doesn’t generalize—a chess master’s intuition about chess positions doesn’t extend to financial investments or relationship decisions.

Algorithms Versus Heuristics in Decision-Making

An algorithm is a methodical rule that guarantees that a problem will be solved. Algorithms provide systematic procedures that, if followed correctly, always produce correct solutions (Newell & Simon, 1972).

Algorithm for solving a maze: (1) At each junction, take the leftmost passage you have not yet tried. (2) At a dead end, backtrack to the most recent junction that still has an untried passage. (3) Continue until you reach the exit. This systematic exploration with backtracking guarantees finding the exit if one exists, though it may not find the shortest path.
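The maze procedure above amounts to a depth-first search with backtracking. A minimal sketch in Python, assuming the maze is a grid of open (0) & walled (1) cells with made-up start & exit coordinates:

```python
def solve_maze(grid, start, goal):
    """Depth-first search: systematically explore passages, backtracking at
    dead ends. Guarantees finding the exit if a path exists (not necessarily
    the shortest one)."""
    stack, visited = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        for dr, dc in ((0, -1), (-1, 0), (0, 1), (1, 0)):  # try left first
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None  # no exit reachable

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = solve_maze(maze, (0, 0), (0, 2))
```

Because every reachable cell is eventually visited, the search cannot fail when an exit exists—exactly the guarantee that distinguishes an algorithm from a heuristic.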

Algorithm for determining leap years: (1) If the year is divisible by 400, it’s a leap year. (2) If the year is divisible by 100 (but not 400), it’s not a leap year. (3) If the year is divisible by 4 (but not 100), it’s a leap year. (4) Otherwise, it’s not a leap year. This algorithm correctly identifies every leap year.
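The leap-year rules translate directly into code. A sketch in Python, checking the divisibility conditions in the order given above:

```python
def is_leap_year(year):
    """Apply the divisibility rules in order: 400, then 100, then 4."""
    if year % 400 == 0:   # divisible by 400 -> leap year
        return True
    if year % 100 == 0:   # divisible by 100 (but not 400) -> not a leap year
        return False
    return year % 4 == 0  # divisible by 4 (but not 100) -> leap year

print(is_leap_year(2000))  # True
print(is_leap_year(1900))  # False
```

Followed correctly, the procedure classifies every year without exception—the defining property of an algorithm.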

Algorithms are valuable when correctness is essential & computational resources are available. Computer programs use algorithms extensively. Mathematical proofs follow algorithmic procedures. Scientific experiments use algorithmic data analysis. But algorithms have limitations: they can be computationally expensive (requiring extensive time or mental effort), they require complete information (all relevant factors must be known), & they’re often unavailable (many real-world problems lack algorithmic solutions).

A heuristic is a mental shortcut that helps decision-makers make judgments & choices quickly in the face of uncertainty. Heuristics work from partial information & provide approximate solutions rather than guaranteed correct answers (Gigerenzer & Gaissmaier, 2011).

Heuristic for solving a maze: Work forward from the entrance & backward from the exit, trying to make the two paths meet in the middle. This usually finds a solution quickly, but it guarantees neither that a solution will be found nor that the solution will be optimal.

Heuristics are mental shortcuts for solving problems quickly, delivering a result that is good enough to be useful given time constraints. They sacrifice guaranteed correctness for speed & efficiency. When you need a quick decision with limited information, heuristics enable action. Unfortunately, like intuition, heuristics often lead to errors in thinking—systematic biases that cause predictable misjudgments.

The Availability Heuristic: Easy to Recall Means Common

The availability heuristic is a mental shortcut that leads us to believe that things that come easily to mind occur more often than things that do not. We judge frequency or probability based on how readily examples come to mind (Tversky & Kahneman, 1973).

Example: Are there more words in the English language that start with the letter K or more words that have K as the third letter? Most people say more words start with K. Actually, there are about three times as many words with K in the third position. Why the error? Words starting with K come to mind more easily (king, kite, kitchen) than words with K in third position (acknowledge, making). Easy retrieval suggests high frequency, so we misjudge actual frequency.

The availability heuristic often works well because frequent events generally are easier to recall. If you’ve experienced many car accidents, car accidents probably are relatively common in your life. But the heuristic fails when factors other than frequency affect availability:

Vividness: Dramatic events are more memorable than mundane events. Plane crashes receive extensive media coverage, making them highly available, leading people to overestimate aviation risk while underestimating automobile risk—even though driving is far more dangerous per mile traveled.

Recency: Recent events are more available than distant events. After seeing news coverage of a shark attack, people overestimate shark risk for weeks, avoiding beaches unnecessarily.

Personal Experience: Events we’ve experienced personally are more available than statistics. Someone who knows a person harmed by a vaccine may overestimate vaccine risks despite overwhelming statistical evidence of safety.

Salience: Distinctive or unusual events are more memorable. People overestimate the frequency of rare but memorable events (lottery winners, celebrities) while underestimating common but unremarkable events (routine commutes, typical work days).

The Representativeness Heuristic: Stereotypes & Probability

Linda is 31 years old, single, outspoken, & very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination & social justice, & also participated in anti-nuclear demonstrations. Which is more probable: A. Linda is a bank teller. B. Linda is a bank teller & is active in the feminist movement.

Nearly everyone judges option B—feminist bank teller—as more probable. This is a logical impossibility: The category of bank tellers necessarily includes all feminist bank tellers, so “bank teller” must be at least as probable as “feminist bank teller.” The conjunction (bank teller & feminist) cannot be more probable than either constituent alone.

The representativeness heuristic is a mental shortcut used to assess probability whereby the more an object appears to represent a class of objects, the more likely we judge it to belong to that class. We judge probability based on similarity to stereotypes rather than actual statistical likelihood (Kahneman & Tversky, 1972).

Linda’s description fits the stereotype of a feminist beautifully—philosophy major, concerned with social justice, participated in demonstrations. This strong representativeness overwhelms logical reasoning about probability. The description seems representative of feminists, so we judge Linda likely to be a feminist, & the conjunction “feminist bank teller” seems more probable than “bank teller” alone because it better matches the description.

As Tversky & Kahneman noted: “As the amount of detail in a scenario increases, its probability can only decrease steadily, but its representativeness & hence its apparent likelihood may increase.” More specific scenarios feel more probable when they match stereotypes, even though adding details always reduces actual probability.
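The conjunction rule can be checked with simple arithmetic. A sketch with illustrative, made-up probabilities for the Linda problem (the specific numbers are assumptions; only the inequality matters):

```python
# Illustrative (made-up) probabilities:
p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.90   # P(feminist | bank teller) -- even if very high...

# P(teller AND feminist) = P(teller) * P(feminist | teller)
p_both = p_teller * p_feminist_given_teller

# ...the conjunction can never exceed either constituent probability:
assert p_both <= p_teller
print(round(p_both, 3))  # 0.045 -- strictly less than 0.05
```

Multiplying by a conditional probability (which is at most 1) can only shrink the result, which is why adding detail to a scenario can only lower its actual probability.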

The representativeness heuristic produces several systematic errors:

Base Rate Neglect: People ignore statistical base rates when given individuating information. If told someone is shy & likes poetry, people judge “librarian” more probable than “salesperson” even when salespeople vastly outnumber librarians, making any randomly selected person far more likely to be a salesperson.

Conjunction Fallacy: Specific scenarios seem more probable than general categories when they’re more representative, violating basic probability rules.

Gambler’s Fallacy: After seeing several heads in coin flips, people judge tails “due” because the sequence HHHHT seems more representative of randomness than HHHHH. But each flip is independent; past flips don’t affect future probabilities.
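The base-rate point can be made concrete with Bayes’ rule. A sketch using hypothetical numbers for the shy-poetry-lover example (all figures are assumptions chosen to favor the librarian stereotype):

```python
# Hypothetical base rates & hit rates (assumed for illustration):
n_librarians, n_salespeople = 200_000, 10_000_000
p_shy_given_librarian = 0.50    # assume half of librarians fit the description
p_shy_given_salesperson = 0.05  # assume few salespeople fit it

# Expected number of "shy poetry lovers" in each group:
shy_librarians = n_librarians * p_shy_given_librarian      # 100,000
shy_salespeople = n_salespeople * p_shy_given_salesperson  # 500,000

# P(librarian | fits the description), by Bayes' rule:
p_librarian = shy_librarians / (shy_librarians + shy_salespeople)
print(round(p_librarian, 3))  # 0.167
```

Even with a description five times more diagnostic of librarians, the huge base rate of salespeople means a randomly selected shy poetry lover is still about five times more likely to be a salesperson.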

The Recognition Heuristic: Familiar Means Better

The recognition heuristic is a mental shortcut that biases us towards choosing those things that are familiar to us. When choosing between options, we often prefer recognized options over unrecognized options, assuming familiarity indicates quality or importance (Goldstein & Gigerenzer, 2002).

When children are presented with French fries from McDonald’s & identical fries (unbranded or from unknown sources), they overwhelmingly prefer the McDonald’s fries, reporting they taste better. The fries are identical; only the branding differs. Recognition creates preference. This demonstrates why companies invest billions in advertising—not primarily to inform consumers about products, but to create recognition that triggers the recognition heuristic.

The recognition heuristic often works well: Widely recognized brands often are higher quality (quality leads to market success leads to recognition). Widely recognized cities often are larger (size leads to prominence leads to recognition). But the heuristic fails when recognition correlates poorly with the criterion of interest: Heavily advertised products aren’t necessarily superior. Famous people aren’t necessarily wise. Recognized investment strategies aren’t necessarily profitable.

Additional Heuristics: Take-the-Best & Fast-and-Frugal Trees

The take-the-best heuristic is a mental shortcut that is based on a single “good” reason, ignoring all other information. You identify the most important cue, make your decision based on that cue alone, & ignore everything else (Gigerenzer & Goldstein, 1996).

If asked which is larger, Richmond, Virginia or Salem, Oregon, many people recognize Richmond but not Salem. The take-the-best heuristic uses recognition as the single cue: Richmond is larger (it is—Richmond has about 230,000 people; Salem about 175,000). This heuristic is computationally efficient—requiring only one cue comparison—but ignores potentially relevant information. It works well when the chosen cue strongly predicts the criterion but fails when other factors matter.
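The take-the-best procedure can be sketched as code: walk through cues in order of validity, decide on the first cue that discriminates, & ignore the rest. The city data below is a made-up illustration with recognition as the only cue:

```python
def take_the_best(option_a, option_b, cues):
    """Compare options on cues ordered from most to least valid; decide on
    the first cue that discriminates, ignoring all remaining cues."""
    for cue in cues:
        a, b = cue(option_a), cue(option_b)
        if a != b:                       # this cue discriminates: stop here
            return option_a if a > b else option_b
    return None                          # no cue discriminates: guess

# Hypothetical data -- which city is larger?
recognized = {"Richmond": True, "Salem": False}
cues = [lambda city: recognized[city]]   # a single cue: recognition

print(take_the_best("Richmond", "Salem", cues))  # Richmond
```

One cue comparison settles the question—computationally cheap, & correct whenever the cue genuinely predicts the criterion.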

Fast-and-frugal trees are decision-making shortcuts that pose a short series of yes/no questions, each of which either triggers a decision immediately or passes to the next question. These decision trees use minimal information, making sequential binary decisions until reaching a conclusion (Martignon et al., 2008).

Example: Medical triage deciding whether to immediately treat a patient or wait: Question 1: Is the patient unconscious? If yes → treat immediately. If no → continue. Question 2: Is there severe bleeding? If yes → treat immediately. If no → continue. Question 3: Is breathing labored? If yes → treat soon. If no → can wait.
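The triage tree above can be sketched as a short chain of conditionals. This is an illustrative toy, not a real clinical protocol; the question order & labels simply follow the example in the text:

```python
def triage(unconscious, severe_bleeding, labored_breathing):
    """Fast-and-frugal tree: each yes answer triggers an immediate decision;
    only a no answer passes control to the next question."""
    if unconscious:
        return "treat immediately"
    if severe_bleeding:
        return "treat immediately"
    if labored_breathing:
        return "treat soon"
    return "can wait"

print(triage(unconscious=False, severe_bleeding=True, labored_breathing=False))
# treat immediately
```

At most three yes/no checks yield a decision, which is what makes such trees usable under time pressure.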

Fast-and-frugal trees enable rapid decisions under time pressure with limited information. They sacrifice comprehensiveness for speed. Emergency medical personnel, military commanders, & others facing urgent decisions often rely on such trees, making quick determinations based on sequential evaluation of critical factors rather than comprehensive analysis of all available information.

Evolution & Choice: Why Heuristics Exist

Animals living in a world of constant struggle for survival must often make choices with an immediate effect on their lives, & with far less margin for error than modern humans typically face. Natural selection clearly favors organisms that can make effective choices—choices that lead to survival & reproduction. When chased by a predator, choosing to run toward open ground rather than dense cover may determine whether the animal survives to reproduce.

Heuristics obviously played an important role in helping humans adapt to a changing environment. The recognition heuristic exemplifies this situation well: In ancestral environments, familiar foods, territories, & social partners generally were safer than unfamiliar ones. Familiarity indicated prior successful experience. The recognition heuristic evolved because it usually produced good outcomes—choosing recognized options avoided novel dangers.

Similarly, the availability heuristic makes sense evolutionarily. In small hunter-gatherer groups, ease of recall generally correlated with frequency. If predator attacks came to mind easily, they probably were common threats requiring vigilance. The representativeness heuristic enabled rapid categorization—this animal resembles dangerous predators we’ve encountered, so treat it as dangerous.

The problem is that modern environments differ dramatically from ancestral environments. Availability now reflects media coverage more than actual frequency. Representativeness reflects cultural stereotypes more than statistical reality. Recognition reflects advertising budgets more than quality. Heuristics that evolved for one context now operate in radically different contexts, producing systematic errors. Understanding these evolutionary origins helps explain both why heuristics feel intuitively compelling & why they so often mislead us in modern contexts.

Looking Forward

We’ve explored how people make decisions under uncertainty using intuition & heuristics—mental shortcuts enabling rapid judgment with incomplete information. The availability heuristic judges frequency by ease of recall. The representativeness heuristic judges probability by similarity to stereotypes, producing base rate neglect, conjunction fallacies, & gambler’s fallacy. The recognition heuristic biases us toward familiar options. But heuristics aren’t the only source of systematic judgment errors. In Part 2, we’ll examine unconscious biases—anchoring, framing, overconfidence, & gambler’s fallacy—showing how seemingly irrelevant factors systematically influence judgment.

License

Psychology of Learning TxWes Copyright © by Jay Brown. All Rights Reserved.