Glossary

149 terms from How Humans Get Stuck

# A B C D E F G H I L M N O P Q R S T U V W Y

#

"Disrupt the incumbents"
The survivors disrupted. Many failures also tried to disrupt and were destroyed. Disruption is a strategy that works spectacularly when it works and catastrophically when it doesn't. Studying only the successes tells you nothing about when it works. → Chapter 5: Survivorship Bias at Scale
"Follow your passion"
The survivors followed their passion. So did the failures. Passion is not a differentiating factor; it is a base-rate characteristic of anyone who starts a company. → Chapter 5: Survivorship Bias at Scale
"Move fast and break things"
The survivors moved fast. So did Theranos, WeWork, FTX, and thousands of other companies that moved fast and broke themselves. Speed is not inherently a virtue; it depends entirely on the context. → Chapter 5: Survivorship Bias at Scale
"Persistence is the key to success"
The survivors persisted and eventually succeeded. But so did the failures — they persisted and eventually went bankrupt. Persistence is a necessary condition for success (you can't succeed if you quit) but not a sufficient one. The survivorship bias makes it look sufficient because we never see the persistent failures. → Chapter 5: Survivorship Bias at Scale
"The founder's vision was key"
Every founder has a vision. The difference between a "visionary founder" and a "delusional founder" is often just the outcome. We retroactively label the winners as visionaries and the losers as delusional, but the *ex ante* characteristics may be indistinguishable. → Chapter 5: Survivorship Bias at Scale
36% produced statistically significant results
The average effect size in the replications was **roughly half** that in the originals, and many of the most famous and widely cited findings failed to replicate. → Chapter 10: The Replication Problem
6 out of 53
an 11% replication rate. → Chapter 10: The Replication Problem
61% replicated
better than psychology but far from reassuring for a field that claims scientific rigor. → Chapter 10: The Replication Problem
⚠️ Common Pitfall
A mistake to avoid, with explanation of why it's wrong. → How to Use This Book
⚡ Quick Reference
Compact summary for future reference. → How to Use This Book
✅ Best Practice
Expert-recommended approach with rationale. → How to Use This Book
🌍 Global Perspective
How this differs across cultures or regions. → How to Use This Book
🎓 Advanced
Graduate-level extension. Skip on first reading. → How to Use This Book
💡 Intuition
A mental model or analogy to build understanding. → How to Use This Book
📊 Real-World Application
How this concept plays out in practice. → How to Use This Book
📐 Project Checkpoint
Your contribution to the Epistemic Audit progressive project. → How to Use This Book
📜 Historical Context
How this concept evolved over time. → How to Use This Book
📝 Note
Additional context or nuance. → How to Use This Book
🔄 Check Your Understanding
Retrieval practice: try to answer without looking back. → How to Use This Book
🔍 Why Does This Work?
Prompts you to explain the mechanism, not just the result. → How to Use This Book
🔗 Connection
Link to another chapter's concept. → How to Use This Book
🚪 Threshold Concept
An idea that, once understood, irreversibly transforms how you see the subject. → How to Use This Book
🧩 Productive Struggle
A problem posed *before* the technique is taught. The attempt primes learning. → How to Use This Book
🪞 Learning Check-In
Metacognitive reflection on your learning process. → How to Use This Book

A

acceleration levers
variables that can be deliberately changed to speed up correction. → Chapter 22: The Speed of Truth
ad hoc auxiliary hypotheses
additional assumptions added to a theory not because they're independently motivated but specifically to save the theory from falsification. Each epicycle was an ad hoc rescue operation, making the theory more complex without making it more predictive. → Chapter 3: Unfalsifiable by Design
After-action reviews (AARs)
systematic debriefs after every operation, from platoon patrols to major campaigns
- **Lessons-learned centers** that collect, analyze, and disseminate operational lessons
- **Red teams** that test plans and assumptions by arguing the enemy's perspective
- **War games** that simulate future conflict → Chapter 28: Field Autopsy: Military Strategy
artificial consensus
consensus that reflects social pressure rather than genuine agreement. Artificial consensus is indistinguishable from genuine consensus from the outside: in both cases, the published literature supports the dominant view, conference presentations are paradigm-consistent, and public statements endorse it. → Chapter 14: The Consensus Enforcement Machine
authority cascade
the most powerful entry mechanism for wrong ideas, where one prestigious wrong answer becomes everyone's wrong answer. You'll meet the full Barry Marshall story, alongside Wegener (continental drift), Semmelweis (hand-washing), and others. → Key Takeaways: The Archaeology of Error
Authority cascade at machine speed
AI's confident presentation propagates wrong answers to millions in seconds
- **Consensus enforcement by algorithm** — engagement optimization suppresses dissent without anyone making a deliberate decision
- **Survivorship bias at database scale** — every bias in every digitized dataset is inherited → Key Takeaways: The Failure Modes of the Future

B

Be patient
generational turnover may be required → Key Takeaways: The Zombie Idea — Part II Synthesis
Bounded rationality
Herbert Simon's insight that real humans optimize *satisfactorily* rather than *optimally* — was acknowledged as interesting but not incorporated into the mainstream framework → Case Study 2: The Rational Actor — Economics' Invisible Prison

C

Calibration examples:
Medicine: 6/10 — Dissent is tolerated but slow to influence; Marshall and Warren's experience was typical, not exceptional
- Criminal justice: 2/10 — Challenging forensic science or prosecutorial practices carries severe professional risk
- Technology (AI): 5/10 — Varies by subfield; the neural network … → Chapter 32: The Epistemic Health Checklist
caloric theory
the idea that heat was a fluid ("caloric") that flowed from hot objects to cold objects. The theory was endorsed by the most prestigious scientists of the era, including Lavoisier. It explained many observations, generated productive research, and was embedded in the vocabulary of chemistry and physics. → Chapter 9: The Sunk Cost of Consensus
Career investment
Devaluation of professional work and expertise
2. **Reputational capital** — Damage to field's credibility
3. **Textbook infrastructure** — Coordination cost of revising training, guidelines, certifications
4. **Funding commitments** — Obsolescence of current research programs
5. **Identity investment** … → Key Takeaways: The Sunk Cost of Consensus
Cognitive psychology (core areas)
attention, memory, language processing — generally replicates well, though specific findings (particularly in the more "social" areas of cognition) have been challenged. → Chapter 25: Field Autopsy: Psychology
Common scoring pitfalls:
**Confirmation bias in scoring.** If you already suspect the claim is wrong, you'll tend to score everything red. If you believe the claim is right, you'll tend to score everything green. Counteract this by asking: "What would I score this if I had no opinion about the claim?"
- **Insufficient investigation** … → Case Study: Scoring a Current Controversy — What the Scorecard Says About Claims You're Evaluating Right Now
Complexity Hiding in Simplicity
the demand for clean answers in a complex world. → Key Takeaways: The Consensus Enforcement Machine
Conceptual path dependence
the realization that the first explanation a field adopts constrains all subsequent thinking, regardless of the evidence — is a threshold concept that transforms how you see knowledge claims. **Before this clicks:** "This is how depression works." / "This is how economies function." / "This is how …" → Chapter 7: The Anchoring of First Explanations
credibility tax
a reduction in professional standing that comes from challenging the consensus. The tax is automatic, structural, and disproportionate: → Chapter 33: How to Disagree Productively
Crisis and Correction
why fields change only when forced to. → Key Takeaways: The Outsider Problem
Cultured the bacterium
eventually named *Helicobacter pylori* — proving it existed in the stomach, not as a contaminant but as a resident organism. → Case Study 1: The H. Pylori Revolution — A Structural Anatomy

D

Difficulty Guide:
⭐ Foundational (5–10 min each)
- ⭐⭐ Intermediate (10–20 min each)
- ⭐⭐⭐ Challenging (20–40 min each)
- ⭐⭐⭐⭐ Advanced/Research (40+ min each) → Exercises: The Archaeology of Error

E

Einstellung
officers trained in conventional operations perceive new conflicts through the conventional framework → Key Takeaways: Field Autopsy — Military Strategy
Errors will persist for years or decades
because the combination of low dissent tolerance and low outsider access means that no one with the knowledge and standing to challenge errors has an incentive or an opportunity to do so
3. **When correction eventually happens, it will be driven by external crisis** — because the internal correction … → Case Study: When the Checklist Reveals a Sick Organization
Examples across fields:
**Evolutionary psychology "just-so stories":** "Humans evolved X behavior because of Y adaptive pressure." For nearly any behavior X, a plausible adaptive story Y can be constructed after the fact. The question is whether these stories can predict behaviors we haven't yet observed.
- **Market explanations** … → Chapter 3: Unfalsifiable by Design
Examples:
**"Culture eats strategy for breakfast"** (attributed to Peter Drucker, though the attribution is disputed). What constitutes "culture"? If a company with a "great culture" fails, was the culture actually not great? How would you know? The claim is unfalsifiable not because it's wrong but because "culture" … → Chapter 3: Unfalsifiable by Design

F

Failure modes active:
**Incentive structures (Ch.11):** Pharmaceutical companies spent billions marketing opioids to physicians. Sales representatives provided misleading information about addiction risk. "Key opinion leaders" — physicians paid by pharmaceutical companies — published favorable articles and gave lectures … → Chapter 23: Field Autopsy: Medicine
Field Autopsy: Economics
the field that mathematicized its theories to the point of unfalsifiability and responded to its greatest empirical failure with remarkably little change. → Key Takeaways: Field Autopsy — Medicine
Field Autopsy: Nutrition Science
the field that made everyone distrust science. → Key Takeaways: Field Autopsy — Psychology
Field Autopsy: Psychology
the field with the most dramatic recent correction. → Key Takeaways: Field Autopsy — Economics
Follow the style guide:
Narrative tone: story-driven, never smug
- Second person for engagement, first person plural for analysis
- Vary section structures (don't use identical patterns)
- Concrete examples before abstractions
3. **Respect the citation honesty system:**
- Tier 1: Only for sources you can verify exist
- Tier 2: … → Contributing to How Humans Get Stuck
Funding
Who pays? What do they want?
2. **Research** — What gets studied? What's rewarded?
3. **Evaluation** — Who reviews? What are their incentives?
4. **Publication** — What gets published? What's the selection criterion?
5. **Dissemination** — How does it reach the public? What drives selection? → Key Takeaways: How Incentive Structures Manufacture Error

G

Guiding questions:
Can you name a prominent dissenter in your field who challenged the consensus? What happened to them?
- If you published a paper challenging a core assumption of your field tomorrow, what would happen to your career?
- Does your field have formal mechanisms for structured dissent (red teams, devil's advocates, …)? → Case Study: Scoring Your Own Field — A Guided Health Assessment

H

hindsight bias
the well-documented tendency to believe, after learning an outcome, that you "knew it all along." Hindsight bias doesn't just distort memory; it distorts the *feeling* of understanding. Once you know that the housing market collapsed, the contributing factors seem obvious. But they didn't seem obvious in advance. → Chapter 6: The Plausible Story Problem
Historical evidence:
The Open Science movement succeeded because it built a broad coalition *before* launching its critique. Brian Nosek, the Reproducibility Project, and the pre-registration advocates spent years building consensus among reformers before going public with their challenge to the replication status quo. → Chapter 33: How to Disagree Productively
How Incentive Structures Manufacture Error
the business model of being wrong. → Key Takeaways: The Replication Problem
hydraulic systems
building a physical machine with levers and water chambers to represent the economy. → Chapter 8: Imported Error

I

If risk had been reported qualitatively
"Our exposure is substantial and the models may underestimate extreme scenarios" — senior management and regulators would have been more cautious.
- **If risk had been reported with wide confidence intervals** — "Our loss could be anywhere from $50 million to $500 million, with a small but non-negligible …" → Chapter 12: Precision Without Accuracy
implementation fidelity
the drug is the same drug, the dosage is the same dosage, the protocol is the same protocol. Educational interventions depend on *teachers* — human beings with different skills, beliefs, motivations, and contexts. An instructional approach that works brilliantly with a skilled, motivated teacher in one setting may fail in another. → Chapter 30: Field Autopsy: Education
Imported Error
what happens when fields borrow ideas from other fields, and why the borrowed ideas calcify faster because of borrowed prestige. → Key Takeaways: The Anchoring of First Explanations
In this chapter, you will learn to:
Distinguish between being individually wrong (a cognitive bias) and being *systematically* wrong (an institutional failure mode)
- Recognize the "lifecycle of a wrong idea" — the predictable trajectory that wrong consensuses follow
- Understand why this book is not about stupid people, and why that matters → Chapter 1: The Archaeology of Error
incentive misalignment
when the structures that fund, produce, evaluate, and reward knowledge are designed in ways that systematically bias outcomes toward error. Unlike the sunk cost of consensus (Chapter 9), which maintains error through the cost of changing, and the replication problem (Chapter 10), which maintains error through … → Chapter 11: How Incentive Structures Manufacture Error
Initial framing
First plausible explanation establishes vocabulary and metaphor
2. **Vocabulary adoption** — Field adopts the language because it needs *some* vocabulary
3. **Research channeling** — Studies designed within the framing; evidence accumulates within it
4. **Invisible constraint** — The framing becomes … → Key Takeaways: The Anchoring of First Explanations
Insufficient allies
all three dissenters faced opposition from more powerful institutional actors without sufficient coalition support
2. **Wrong framing** (Semmelweis) or right framing overwhelmed by counter-framing (Born, climate scientists) — the defender's narrative dominated
3. **Evidence short of undeniable** — … → Case Study: When Dissent Fails — Lessons From the Casualties
Intellectual honesty
The willingness to consider that things you currently believe might be wrong.
- **Comfort with ambiguity** — This book often lands on "it's complicated" rather than a clean answer. That's deliberate.
- **Curiosity about your own field** — You'll get the most from this book if you're willing to turn its tools on your own field. → Prerequisites
interregnum
the period after the old metaphor has been recognized as inadequate but before a new metaphor has been established. During this period, practitioners know the old framework is wrong but don't have a replacement. The result is often paralysis or, worse, a retreat to the old metaphor under pressure. → Chapter 7: The Anchoring of First Explanations
Introduction
A plausible wrong idea enters the field
2. **Adoption** — It gains traction, gets cited, enters textbooks
3. **Entrenchment** — It becomes the unquestioned default
4. **Counter-evidence** — Anomalies accumulate; outsiders notice
5. **Resistance** — The establishment uses institutional mechanisms to … → Key Takeaways: The Archaeology of Error
Intuitive appeal
Feels true on a gut level
2. **Usefulness to powerful interests** — Economic constituency maintains it
3. **Institutional embedding** — Woven into practice, training, regulations
4. **Narrative stickiness** — Tells a compelling story
5. **Simplicity** — Easier to understand and implement than the truth → Key Takeaways: The Zombie Idea — Part II Synthesis

L

Learning and conditioning
the legacy of behaviorism — also replicate well. The effects are robust, the paradigms are simple, and decades of animal research established strong baselines. → Chapter 25: Field Autopsy: Psychology
learning styles
the widely believed claim that students learn better when instruction is matched to their preferred modality (visual, auditory, kinesthetic). The review examined decades of research and reached a clear conclusion: "there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice." → Chapter 16: The Zombie Idea
learning styles hypothesis
sometimes called the **meshing hypothesis** — claims that students have distinct learning preferences (visual, auditory, kinesthetic, reading/writing — the **VARK model** is the most popular classification) and that instruction is most effective when it is "matched" to the student's preferred style. → Chapter 30: Field Autopsy: Education
Limitations:
**Willingness problem:** Both parties must agree to participate. Defenders of the consensus have no incentive to enter an adversarial collaboration — they're winning under the current system. The tool is most useful when the disagreement is genuine and both sides are uncertain.
- **Asymmetric stakes** … → Chapter 34: Adversarial Collaboration and Other Tools for Producing Less Wrong Knowledge

M

Makes current errors invisible
if the system "always self-corrects," why suspect current errors?
2. **Understates the cost of delay** — the human suffering is erased along with the messy history
3. **Produces complacency** — no incentive to improve correction mechanisms if the existing system "works"
4. **Betrays the people who paid the price** → Key Takeaways: The Revision Myth
Markers of a wasted crisis:
Investigations produce reports that are shelved
- Temporary reforms are quietly rolled back
- Institutional memory of the crisis fades within 5–10 years
- The next crisis involves the same failure mode
- The field's official history minimizes or sanitizes the crisis → Chapter 19: Crisis and Correction
Markers of cosmetic correction:
Reforms focus on procedures rather than assumptions
- The same people remain in charge, implementing the reforms
- No change in how the field trains new practitioners
- Similar failure recurs within a generation
- Reforms create compliance burden without changing decision-making → Chapter 19: Crisis and Correction
Markers of genuine correction:
New training curricula, not just new procedures
- New hiring criteria reflecting new values
- Changes that persist after the crisis leaves the news cycle
- Former defenders publicly acknowledging the old paradigm's failure
- The correction extends to *adjacent* areas, not just the specific point of failure → Chapter 19: Crisis and Correction
McNamara's fallacy
named for the defense secretary whose technocratic approach exemplified it. The fallacy has two steps: → Chapter 28: Field Autopsy: Military Strategy
metaphor awareness
the ability to see the frame as a frame and to recognize its constraints. → Chapter 7: The Anchoring of First Explanations

N

narrative survivorship
the construction of compelling causal stories from survivorship-biased evidence. Narrative survivorship is responsible for most of what passes for strategic wisdom in business, most of what passes for "lessons learned" in organizational failure analysis, and a distressing amount of what passes for history. → Chapter 6: The Plausible Story Problem
No single variable is decisive
variables interact
2. **Alternative availability is the hidden key** — determines depth, not just speed
3. **Switching cost × defender power is the main brake** — strongest predictor of slow correction
4. **Crisis accelerates timing but not depth** — depth depends on alternative availability
5. **Re…** → Key Takeaways: The Speed of Truth
normalization of deviance
the process by which an institution gradually accepts increasingly risky conditions as normal because nothing bad has happened yet. Each successful launch in the presence of a known risk made the next launch seem acceptable. The deviation from design specifications became the *new* specification. → Chapter 19: Crisis and Correction

O

observational epidemiology
studying what people report eating and correlating it with health outcomes. This methodology has fundamental limitations: → Chapter 26: Field Autopsy: Nutrition Science
One-off training
knowledge decays, incentives unchanged
- **Exhortation** — "be humble" is correct and useless
- **Punishing error without celebrating correction** — produces error-hiding → Key Takeaways: Teaching Epistemic Humility
Only then
sometimes decades later, sometimes never — is a rigorous RCT conducted. → Chapter 23: Field Autopsy: Medicine
Original error
the field holds a wrong position
2. **Crisis** — a visible, costly failure forces confrontation
3. **Traumatic correction** — reform shaped by "never again" rather than balanced analysis
4. **Equal and opposite error** — the new position overshoots the optimal point
5. **Invisible cost accumulation** … → Key Takeaways: When Correction Overcorrects
overshot
not because the reformers were wrong, but because the structural forces of overcorrection (trauma, political asymmetry, absent stopping mechanisms) pushed the correction past the optimal point. The overcorrection produced its own costs, measured in different but no less real currency: patients who … → Chapter 21: When Correction Overcorrects

P

P-hacking
Analyzing data multiple ways until p < 0.05 appears
2. **HARKing** — Hypothesizing After Results are Known
3. **Researcher degrees of freedom** — The "garden of forking paths" of analytical choices → Key Takeaways: The Replication Problem
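The arithmetic behind p-hacking is easy to simulate. The sketch below is a simplification, not the book's own example: it models each analytic choice as an independent test on fresh null data (real p-hacked analyses share one dataset and are correlated, which changes the numbers but not the direction). With no true effect anywhere, trying 20 analyses instead of 1 pushes the false-positive rate from about 5% toward 1 − 0.95²⁰ ≈ 64%.

```python
import math
import random

def two_sided_p(a, b, sigma=1.0):
    """Two-sided p-value for a difference in means (z-test, known sigma)."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / (sigma * math.sqrt(2.0 / n))
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

def study_reports_effect(n_analyses, n=30):
    """One null 'study': keep trying analyses until some p < .05, or give up."""
    for _ in range(n_analyses):
        a = [random.gauss(0, 1) for _ in range(n)]  # both groups drawn from the
        b = [random.gauss(0, 1) for _ in range(n)]  # same distribution: no effect
        if two_sided_p(a, b) < 0.05:
            return True
    return False

random.seed(42)
trials = 400
honest = sum(study_reports_effect(1) for _ in range(trials)) / trials
hacked = sum(study_reports_effect(20) for _ in range(trials)) / trials
print(f"false positives with 1 analysis:  {honest:.0%}")   # ~5%
print(f"false positives with 20 analyses: {hacked:.0%}")   # ~64%
```

Nothing in the "hacked" condition involves fraud in the obvious sense; each individual test is computed correctly. The error lives entirely in the unreported search.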
Part II: The Persistence Engine
how wrong ideas STAY. Chapter 9: The Sunk Cost of Consensus. → Key Takeaways: Imported Error
Part III: The Correction
How wrong ideas finally die. Chapter 17: Planck's Principle. → Key Takeaways: The Zombie Idea — Part II Synthesis
Part IV: Field Autopsies
applying everything from Parts I–III to eight specific disciplines for deep diagnostic examination. → Key Takeaways: The Speed of Truth
Peer review gatekeeping
Asymmetric scrutiny of paradigm-challenging work
2. **Conference culture** — In-group citation and visibility networks exclude outsiders
3. **Hiring orthodoxy** — Each generation selects for paradigm consistency
4. **Chilling effect** — Anticipation of enforcement produces self-censorship
5. **Reput…** → Key Takeaways: The Consensus Enforcement Machine
Peptic Ulcers
Marshall & Warren vs. the gastroenterology establishment (30 years)
2. **Dietary Fat Hypothesis** — How Ancel Keys's flawed study became gospel (50 years)
3. **Neural Networks** — Minsky & Papert killed a correct approach (30 years)
4. **Challenger Disaster** — Normalization of deviance at NASA
5. … → How Humans Get Stuck
Peptic Ulcers (Marshall & H. pylori)
The 30-year wrong consensus. Authority cascade + sunk cost + outsider punishment in one story. (Medicine, institutional failure, heroic correction)
2. **Dietary Fat Hypothesis (Ancel Keys, 1960s–2010s)** — Flawed Seven Countries Study → nutritional gospel. Authority cascade, survivorship bias, incentive … → How Humans Get Stuck — Complete Outline
Peptic ulcers / Marshall & Warren
Deep treatment Ch.2, referenced in Ch.9, Ch.14, Ch.17, Ch.18. Most frequent anchor.
2. **Dietary fat hypothesis / Ancel Keys** — Threaded through Ch.2, Ch.5, Ch.7, Ch.9, Ch.15. Extensive treatment.
3. **Neural networks / Minsky & Papert** — Ch.2 case study, Ch.13, Ch.17. Ch.29 DEEPEST treatment (com…) → Session Handoff
Perception and psychophysics
which study basic sensory processing — have high replication rates. The reason: the phenomena are directly measurable, the methods are well-standardized, and the effects are large. → Chapter 25: Field Autopsy: Psychology
Plausible story problem
imposing coherent narrative patterns on diverse, complex cases
2. **Survivorship bias** — selecting cases where dissenters were vindicated; consensus is right more often than examples suggest
3. **AI author problem** — credibility based on source rather than evidence
4. **Framework overconfidence** → Key Takeaways: The Meta-Question
Post-hoc rationalization
Explains any outcome after the fact without predicting outcomes in advance (Freud, market commentary)
2. **Ad hoc auxiliary hypotheses (epicycles)** — New assumptions added specifically to save the theory (Ptolemaic astronomy, DSGE models)
3. **Moving goalposts** — Success criteria redefined in response … → Key Takeaways: Unfalsifiable by Design
precedent
the principle that similar cases should be decided similarly, based on prior decisions. Precedent is a form of institutional expertise: it captures accumulated judicial wisdom, ensures consistency, and provides predictability. → Chapter 13: The Einstellung Effect
Precision Without Accuracy
the seduction of exact numbers that are exactly wrong. → Key Takeaways: How Incentive Structures Manufacture Error
premature closure
settling on a diagnosis too early because the initial narrative is compelling. → Chapter 6: The Plausible Story Problem
Presentation
Evidence dismissed based on credentials, not merit
2. **Escalating resistance** — Active opposition through institutional mechanisms
3. **Personal cost** — Career damage, isolation, emotional toll
4. **Vindication** — Sometimes. Average timeline: ~30 years. → Key Takeaways: The Outsider Problem
Prestige Investment
A prestigious individual or institution proposes the idea
2. **Deference Amplification** — Others cite and adopt the claim without independent verification
3. **Cascade Lock-In** — The cost of dissent exceeds the cost of conformity; the cascade becomes self-maintaining → Key Takeaways: The Authority Cascade
Principle analysis:
**P1 (Build Allies): Failed.** Semmelweis alienated potential supporters with personal attacks
- **P2 (Frame as Extension): Failed.** He framed hand-washing as proof that doctors were killing their patients — the most threatening possible framing
- **P3 (Positive Evidence First): Partially succeeded** … → Case Study: When Dissent Fails — Lessons From the Casualties
Productive borrowing
Initial cross-domain transfer that illuminates
2. **Invisible analogy** — The import stops being treated as analogy and becomes "description"
3. **Breakdown** — Phenomena emerge that the import can't handle
4. **Epicycles** — The field adds patches rather than questioning the import
5. **Maximum resistance** … → Key Takeaways: Imported Error
Properties:
**Visibility:** Moderate. The crisis received media coverage but didn't dominate front pages the way financial or engineering disasters do.
- **Undeniability:** Moderate to high. The replication failures were real and documented, but defenders could argue about individual studies.
- **Cost:** Low in … → Chapter 19: Crisis and Correction

Q

quantification bias
the tendency to treat quantified claims as more objective and reliable than qualitative claims, regardless of whether the quantification is valid. → Chapter 12: Precision Without Accuracy

R

Randomization difficulty
ethical and logistical barriers to RCTs
2. **Measurement problems** — test scores are narrow proxies for real learning
3. **Implementation dependence** — interventions depend on individual teachers
4. **Time horizon mismatch** — important outcomes take years to manifest
5. **Opinion density** — everyone … → Key Takeaways: Field Autopsy — Education
Red Flag Scorecard
a structured assessment tool that compiles results across all 15 questions
- **Traffic-light scoring** (green/yellow/red) for each question, enabling pattern recognition
- **Structural risk assessment** — the scorecard detects the *conditions* that sustain wrong consensuses, not the wrongness itself → Chapter 31: Red Flags
Regular Confidence Audit
investigate one high-confidence belief per month
2. **Pre-Mortem** — imagine the decision was wrong and explain why
3. **Surprise Journal** — track surprises as evidence of model incompleteness → Key Takeaways: The Humility Chapter
root metaphors
the foundational analogies that shape how a field conceptualizes its subject matter. → Chapter 7: The Anchoring of First Explanations

S

Scoring indicators:
🟢 **Healthy (8-10):** Dissent is actively sought — through red teams, devil's advocates, pre-registered adversarial studies. Dissenters face no career penalty. Prominent examples of productive disagreement within the field.
- 🟡 **Moderate (4-7):** Dissent is tolerated but not encouraged. Mild career … → Chapter 32: The Epistemic Health Checklist
Scoring:
🟢 **Green:** Funding is independent of the outcome; funders have no financial stake in the claim being true
- 🟡 **Yellow:** Mixed funding; some independence, some conflict
- 🔴 **Red:** Primary funding comes from entities that benefit financially from the claim being true → Chapter 31: Red Flags
Selected Red Flag scores:
Q3 (Falsifiability): 🟡 — What would disprove the paradigm shift model? Since it describes a general pattern, any specific counter-example (a field that changed gradually rather than through revolution) can be accommodated as "not a paradigm shift case."
- Q5 (Evidence age): 🟡 — Published in 1962. … → Case Study: Scoring Other Frameworks — Applying Self-Critique to Popular Analytical Tools
self-correction illusion
the institution's belief that its errors are caught and fixed through existing processes. This belief is itself a failure mode: the revision myth (Chapter 20) applied to the institution's own correction mechanisms, and it reduces the perceived need for structural reform. → Chapter 37: Building Better Knowledge Systems
stare decisis
the principle that courts should follow prior decisions — functions as an error-preservation mechanism. Once a court admits bite mark evidence (citing a prior court that admitted it, which cited a prior court that admitted it), each subsequent court treats the prior admission as authority. The chain … → Chapter 27: Field Autopsy: Criminal Justice
structural buffers
institutional protection (tenure), geographic distance from the power center, supportive collaborators, and/or dramatic evidence that bypassed normal channels. The destroyed lacked these buffers and were exposed directly to the full force of the persistence engine. → Chapter 18: The Outsider Problem
structural nature of epistemic failure
once you understand that being wrong is usually about systems, not stupidity, you can never unsee it. → Key Takeaways: The Archaeology of Error
Survivorship Bias at Scale
how fields build on the evidence that survived while ignoring what didn't. → Key Takeaways: The Streetlight Effect

T

The Anchoring of First Explanations
why the first answer proposed becomes the hardest to dislodge. → Key Takeaways: The Plausible Story Problem
The Attribution Battle
Defenders successfully blame execution, not the paradigm
2. **The Reform Exhaustion Effect** — Cosmetic reforms consume the appetite for deeper change
3. **Generational Forgetting** — Institutional memory of the crisis fades (20–30 years) while the paradigm persists → Key Takeaways: Crisis and Correction
The Consensus Enforcement Machine
how social pressure maintains wrong answers. → Key Takeaways: The Einstellung Effect at Institutional Scale
The dangerous archer is C
precise but not accurate. Their consistency creates confidence. If you watched Archer C shoot, you would think: "They're very good — they hit the same spot every time." You would *not* think: "They're consistently missing the target." The precision masks the inaccuracy. → Chapter 12: Precision Without Accuracy
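The archer metaphor maps directly onto the standard statistical split between bias (accuracy) and variance (precision). A minimal sketch of that split, with illustrative numbers of my own choosing (`simulate_archer` and its parameters are not from the book):

```python
import random
import statistics

def simulate_archer(bias, spread, n=1000, seed=0):
    """Return (accuracy_error, precision_spread) for n simulated shots.

    bias   -- systematic offset from the bullseye (an accuracy problem)
    spread -- random scatter around the archer's own aim point (a precision problem)
    """
    rng = random.Random(seed)
    xs = [bias + rng.gauss(0, spread) for _ in range(n)]
    accuracy_error = abs(statistics.fmean(xs))  # how far the group's center sits from the target
    precision = statistics.stdev(xs)            # how tight the group is, regardless of where it lands
    return accuracy_error, precision

# Archer C: tight grouping far from the bullseye (high precision, low accuracy).
c_acc, c_prec = simulate_archer(bias=5.0, spread=0.2)
# Archer A: centered on the bullseye but scattered (low precision, high accuracy).
a_acc, a_prec = simulate_archer(bias=0.0, spread=2.0, seed=1)
```

Archer C wins on the precision measure while landing far from the target; any quality signal built on consistency alone would rank C above A, which is exactly the trap the chapter describes.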
The Einstellung Effect
when expertise becomes a prison. → Key Takeaways: Precision Without Accuracy
The Outsider Problem
why correct dissenters are punished before they're celebrated. → Key Takeaways: Planck's Principle and Its Exceptions
The overcorrection concerns:
**Lending standards.** Post-crisis lending standards became dramatically more restrictive. This prevented the reckless lending that had fueled the housing bubble — but it also made it significantly harder for creditworthy borrowers, particularly in minority and low-income communities, to access mortgage credit. → Chapter 21: When Correction Overcorrects
The overcorrection warning
every tool carries its own risk of swinging too far. → Chapter 34: Adversarial Collaboration and Other Tools for Producing Less Wrong Knowledge
The Plausible Story Problem
when narrative coherence substitutes for evidence. → Key Takeaways: Survivorship Bias at Scale
The Predictable Ways Knowledge Goes Wrong
> *Why every field believes it's uniquely rational while making the same mistakes as every other field — the failure modes of human knowledge that only become visible from outside.* → How Humans Get Stuck
The ratio shifts over time:
- Early career: low competence, low blindness → high potential for novelty, low ability to execute
- Mid career: moderate competence, moderate blindness → optimal balance for innovation within the paradigm
- Late career: high competence, high blindness → maximum ability to solve known problems, minimum potential for novelty → Chapter 13: The Einstellung Effect
The Replication Problem
what happens when nobody checks the homework. → Key Takeaways: The Sunk Cost of Consensus
The Revision Myth
how fields rewrite history to make corrections look inevitable, sanitizing the messy process described in this chapter into a tidy narrative of progress. → Key Takeaways: Crisis and Correction
The Speed of Truth
a synthesis of Part III, building a predictive model for how long correction takes and what can accelerate it. → Key Takeaways: When Correction Overcorrects
The Streetlight Effect
why fields study what's measurable instead of what matters. Introduces Goodhart's Law and the McNamara Fallacy. → Key Takeaways: Unfalsifiable by Design
The Zombie Idea
Part II's finale: why some wrong ideas cannot be killed. → Key Takeaways: Complexity Hiding in Simplicity
Then the Google SRE book
for the concrete blameless postmortem implementation. Next in the reading order: **Simpkin and Schwartzstein** (2016), for the medical uncertainty argument; then **Dweck** (*Mindset*), for the psychological foundations of belief updating. → Further Reading: Teaching Epistemic Humility
They don't ask journals to be more fair
they make fairness structural. When reviewers evaluate the question and method without seeing results, their biases toward exciting findings are eliminated. → Case Study: Registered Reports in Action — Psychology's Most Effective Reform
They don't ask researchers to be more honest
they make honesty the easiest path. When the publication decision is made before results exist, there is no incentive to p-hack, no incentive to suppress null results, and no incentive to engage in questionable research practices. → Case Study: Registered Reports in Action — Psychology's Most Effective Reform
They don't ask the field to value replication
they make replication valuable. When null results are published alongside positive results, the literature naturally becomes more accurate. → Case Study: Registered Reports in Action — Psychology's Most Effective Reform
This is the meta-question
it asks whether the field has the structural capacity to detect its own errors. A field where errors are invisible (criminal justice), where outcomes take decades to manifest (education), or where the metric doesn't track the reality (military body counts) can be wrong for a very long time without a correction. → Chapter 31: Red Flags
Track A: The Complete Epistemic Audit
Professional-grade assessment of a field or organization (20–40 pages). The companion tracks: **Track B: The Failure Mode Field Guide** — create a diagnostic reference for a specific domain; **Track C: The Correction Proposal** — design an institutional reform proposal backed by the book's frameworks. → How Humans Get Stuck — Complete Outline

U

unfalsifiable ideas
claims structured so that no evidence could ever disprove them. If authority cascades are about *who* says it, unfalsifiability is about *how* the idea is structured. → Key Takeaways: The Authority Cascade

V

Visibility
The failure must be visible to people *outside* the field. The companion conditions: **Undeniability** — the failure must be too clear to reinterpret as something else; **Cost** — the failure must impose severe costs, especially on people outside the institution; **Attribution** — the failure must be traceable to the paradigm. → Key Takeaways: Crisis and Correction

W

What failure modes it addresses:
- **Consensus enforcement (Ch. 14):** Forces genuine engagement with opposing views rather than dismissal
- **Confirmation bias:** Both sides design the study, preventing either from rigging it
- **Researcher degrees of freedom:** Methodology is agreed in advance, eliminating post-hoc analytical choices → Chapter 34: Adversarial Collaboration and Other Tools for Producing Less Wrong Knowledge
What it looks like:
- **In research:** Rapid replication of important findings; pre-prints that enable early critique; continuous data monitoring rather than end-of-study analysis
- **In organizations:** Dashboard metrics that track error rates in real time; incident reporting systems with rapid response protocols; short feedback loops → Chapter 37: Building Better Knowledge Systems
What would have caught:
- The dietary fat hypothesis was sustained partly by food industry funding of research that favored sugar over fat as the dietary villain
- Forensic science techniques were validated by the forensic science community itself, not by independent scientists (Chapter 27)
- EdTech products are evaluated in studies funded by the companies that sell them → Chapter 31: Red Flags
When Correction Overcorrects
the pendulum problem. What happens when a field traumatized by being wrong swings too far in the other direction. → Key Takeaways: The Revision Myth

Y

Your own field's core consensus
the claim that most practitioners accept without question 2. **A current controversy** — a claim that is actively debated, where some experts are confident and others are skeptical 3. **A historical case from this book** — apply the scorecard to the H. pylori hypothesis circa 1985, or to neural netw → Chapter 31: Red Flags