
Think Twice
Harnessing the Power of Counterintuition
Categories
Business, Nonfiction, Self Help, Psychology, Finance, Science, Economics, Leadership, Audiobook, Personal Development
Content Type
Book
Binding
Hardcover
Year
2009
Publisher
Harvard Business Review Press
Language
English
ISBN13
9781422176757
Think Twice Summary
Introduction
Making decisions is an intrinsic part of human existence, yet our cognitive architecture often leads us astray in predictable and consequential ways. Even the brightest minds, equipped with impressive credentials and vast experience, routinely fall prey to cognitive mistakes that result in poor choices. These mistakes aren't random failures but systematic errors arising from the mismatch between our mental software—evolved for a simpler world—and the complex realities we now navigate. The real opportunity lies not in lamenting these shortcomings but in understanding them well enough to improve our decision-making processes. By identifying common mental traps and developing strategies to avoid them, we can significantly enhance our choices across personal, professional, and organizational domains. The approach is both practical and profound: prepare by learning about potential mistakes, recognize situations where they might occur, and then apply specific techniques to mitigate their effects. This three-step framework offers a path to better decisions that doesn't require superhuman abilities—just a willingness to think twice when it matters most.
Chapter 1: The Outside View: Why We Fail to Consider External Evidence
We often approach problems believing they are unique, focusing intensely on the specific details of our situation while ignoring valuable information about similar cases. This mistake—favoring the inside view over the outside view—leads to consistently poor decisions across domains from business to personal life. The inside view occurs when we focus narrowly on our specific situation, building predictions based on our immediate circumstances and personal experiences. For example, when planning a project, we typically concentrate on our particular challenges and capabilities, mapping out a path to completion that seems reasonable. This approach feels natural but almost invariably produces optimistic timelines and budgets. Research consistently shows that most projects run significantly over time and budget, yet we persist in believing our situation will be different. The outside view, by contrast, looks at the statistical reality of similar situations. It asks: "How have comparable projects or decisions typically turned out?" When executives consider corporate mergers, research shows roughly two-thirds result in lost value for acquiring shareholders. Yet executives consistently believe their acquisition will beat these odds. Similarly, when homeowners plan renovations, they often dismiss statistics about typical cost overruns, convinced their project will be the exception. Three psychological illusions fuel our preference for the inside view. The illusion of superiority leads us to believe we're above average in abilities and judgment. The illusion of optimism causes us to anticipate more favorable outcomes than statistics suggest are likely. And the illusion of control makes us overestimate our ability to influence events. Together, these illusions create a powerful resistance to considering external evidence. Overcoming this cognitive mistake requires deliberately seeking the outside view. We must identify an appropriate reference class of similar situations, assess the distribution of outcomes in that class, make a prediction based on those statistics, and then fine-tune our prediction based on specific circumstances. For instance, when evaluating a merger, executives should examine the success rates of similar deals rather than focusing exclusively on their unique synergy projections. Research by Daniel Kahneman and Amos Tversky shows that forcing yourself to consider the outside view can dramatically improve decision accuracy. In one telling example, a curriculum development team initially estimated their project would take 1-2 years. When asked about similar projects, they learned none had finished in under seven years, and 40% were never completed. This outside view radically altered their perspective—demonstrating how external evidence can provide a reality check against our natural optimism.
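The four-step procedure described here—choose a reference class, look at its distribution of outcomes, anchor on that base rate, then adjust for specifics—can be sketched in a few lines of code. The data, weighting, and function below are hypothetical illustrations of the idea, not anything prescribed by the book.

```python
import statistics

def outside_view_estimate(reference_outcomes, inside_estimate, weight_on_base_rate=0.7):
    """Blend a bottom-up (inside-view) estimate with the base rate from a
    reference class of comparable past cases. The 0.7 weight is illustrative,
    not a figure from the book."""
    base_rate = statistics.median(reference_outcomes)
    return weight_on_base_rate * base_rate + (1 - weight_on_base_rate) * inside_estimate

# Hypothetical reference class: completion times (months) of similar past projects.
past_projects = [18, 22, 24, 26, 28, 31, 35, 40]
print(outside_view_estimate(past_projects, inside_estimate=12))
# The optimistic 12-month plan is pulled toward the 27-month base rate (prints 22.5).
```

However the blending is done, the point is the same: the reference class, not the plan on your desk, supplies the starting estimate.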
Chapter 2: Tunnel Vision: How Limited Perspectives Restrict Our Options
When making decisions, humans display a remarkable tendency to consider too few alternatives—a phenomenon that might be called tunnel vision. This cognitive limitation prevents us from seeing the full range of possibilities and frequently leads to suboptimal choices across personal, professional, and financial domains. At the root of this problem lies the way our minds construct mental models. Psychologist Philip Johnson-Laird explains that when we reason, we create internal representations of external realities that are necessarily incomplete. These mental models help us navigate complex situations by simplifying them, but they also blind us to alternatives that fall outside our current frame of reference. Essentially, our minds shine a narrow beam of attention on what seems most probable, leaving many viable options in darkness. This tunnel vision manifests in several predictable patterns. The anchoring-and-adjustment heuristic demonstrates how arbitrary starting points can constrain our thinking. In a classic demonstration, when people are asked to estimate the number of doctors in Manhattan after seeing the last four digits of their phone number, those with higher phone numbers provide significantly higher estimates. Even when people understand this bias intellectually, they struggle to overcome it because the effect operates largely beyond conscious awareness. Other manifestations of tunnel vision include representativeness and availability biases. Doctors sometimes misdiagnose patients because they focus too heavily on typical presentations of diseases, overlooking atypical cases. Financial investors extrapolate recent trends into the future, despite evidence that such patterns frequently reverse. And under conditions of stress or time pressure, our tendency to narrow our focus becomes even more pronounced, further limiting the alternatives we consider. Perhaps most insidiously, various psychological mechanisms like cognitive dissonance and confirmation bias actively prevent us from reconsidering our initial perspectives. Once we've committed to a particular view, we seek evidence that confirms it while dismissing contradictory information. This phenomenon explains why religious believers, political partisans, and even scientific researchers can remain steadfast in their positions despite compelling evidence to the contrary. Practical strategies for combating tunnel vision include explicitly forcing yourself to consider multiple alternatives, seeking out dissenting viewpoints, keeping track of previous decisions to counter hindsight bias, avoiding decisions during emotional extremes, and understanding how incentives might be narrowing your focus. These techniques won't eliminate the natural tendency toward tunnel vision, but they can significantly expand your field of view when decisions matter most.
Chapter 3: Expert Overreliance: When Algorithms and Crowds Outperform Specialists
We naturally defer to experts in fields requiring specialized knowledge, assuming their credentials and experience guarantee superior judgment. Yet extensive research across numerous domains shows this confidence is often misplaced. Experts frequently deliver predictions and recommendations that prove less accurate than simple statistical models or even the aggregated opinions of non-specialists. The evidence for this "expert squeeze" is both extensive and compelling. Studies dating back to the 1950s consistently demonstrate that mathematical models outperform expert judgment in domains from clinical psychology to financial forecasting. More recently, Philip Tetlock's landmark research, which tracked over 28,000 predictions from 300 experts across 15 years, found that experts performed worse than "crude extrapolation algorithms" in predicting political and economic outcomes. Even more striking, collectives of non-experts often generate more accurate forecasts than individual specialists. Best Buy, a consumer electronics retailer, discovered this phenomenon when it implemented a prediction market for sales forecasting. When the company compared its internal experts' holiday sales forecast with the aggregate prediction of hundreds of employees (most with no forecasting expertise), the crowd's estimate was 99.5% accurate while the experts' was off by 5 percentage points. Similarly, in evaluating wine quality, a simple equation based on weather data has proven more reliable than the judgments of renowned wine connoisseurs. This pattern appears because different problem types demand different solution approaches. Problems with clear rules and limited outcomes are initially understood by experts, but once the patterns are identified, computers can apply them more consistently. For probabilistic problems with wide ranges of possible outcomes, collectives excel because they aggregate diverse perspectives, canceling out individual errors. Experts maintain advantages only in domains requiring creative problem-solving or specialized knowledge not easily codified. Despite overwhelming evidence, we resist this reality for several reasons. We harbor an intuitive belief in the value of human experience, especially when decisions seem complex or nuanced. Experts themselves have professional and psychological incentives to maintain their privileged status. And many organizations lack the technical capabilities to implement algorithmic or collective decision-making approaches effectively. To navigate this changing landscape, decision-makers should carefully classify problems to determine whether expert judgment, algorithmic approaches, or collective wisdom will likely yield better results. For instance, Netflix has demonstrated that algorithmic recommendations based on viewing patterns outperform human curators in predicting viewer preferences. Similarly, prediction markets have proven remarkably accurate in forecasting everything from election outcomes to product launches. The expert's role is increasingly to create and maintain these systems rather than to make individual predictions.
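The mechanics behind why diverse collectives beat individual specialists on probabilistic problems come down to error cancellation: as long as individual errors do not all point the same way, averaging washes much of them out. The toy simulation below illustrates that idea with made-up numbers; it is not Best Buy's prediction market or any system described in the book.

```python
import random

random.seed(42)
TRUE_VALUE = 1_000  # the quantity being forecast (hypothetical, e.g., units sold)

def individual_forecast():
    """One forecaster: sees the truth through a personal lean plus random error."""
    lean = random.uniform(-150, 150)
    noise = random.gauss(0, 100)
    return TRUE_VALUE + lean + noise

forecasts = [individual_forecast() for _ in range(300)]
crowd_estimate = sum(forecasts) / len(forecasts)

avg_individual_error = sum(abs(f - TRUE_VALUE) for f in forecasts) / len(forecasts)
print(f"average individual error: {avg_individual_error:.0f}")
print(f"crowd (average) error:    {abs(crowd_estimate - TRUE_VALUE):.0f}")
```

In this toy setup the typical forecaster misses by roughly a hundred units while the simple average misses by only a handful—precisely because the individual leans are uncorrelated and cancel.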
Chapter 4: Situational Awareness: The Power of Context in Shaping Decisions
Our decisions are profoundly influenced by situational factors that operate largely beneath our conscious awareness. While we perceive ourselves as independent actors making choices based on rational deliberation, research consistently reveals that contextual elements—from environmental cues to social dynamics—shape our decisions in powerful and often unrecognized ways. Social conformity represents one of the most dramatic examples of situational influence. In Solomon Asch's famous experiments, participants frequently gave obviously incorrect answers to simple visual tasks when surrounded by confederates providing wrong responses. More remarkably, recent neuroscience research by Gregory Berns has shown that this conformity isn't merely people saying what they think others want to hear—their actual perception changes. Brain imaging reveals that when people conform to group judgments, activation occurs in perceptual areas rather than decision-making regions, suggesting the group's influence literally alters what they see. Beyond social influence, our decisions are shaped by subtle environmental cues through a process called priming. In one illuminating study, customers in a supermarket were more likely to purchase French wines when French music played and German wines when German music played. When asked afterward, 86% of shoppers denied the music had influenced their choice, highlighting our blindness to these contextual effects. Similar studies have shown that exposure to words associated with the elderly makes people walk more slowly, and the scent of cleaning products promotes tidier eating behaviors. Choice architecture—how options are presented—represents another powerful situational influence. Countries with "opt-out" organ donation policies achieve near-universal participation rates, while "opt-in" countries with identical public attitudes have much lower rates. This pattern emerges because many people simply accept default options rather than actively choosing. Such findings have led behavioral economists to advocate "libertarian paternalism," where defaults are designed to promote welfare while preserving freedom of choice. Our emotional reactions to risk also depend heavily on context. When outcomes have strong affective associations, we tend to focus on the outcomes rather than their probability. This explains why many people simultaneously play the lottery (accepting tiny chances of winning because the jackpot feels so good) and buy insurance (avoiding small risks of loss because the consequences feel so bad). The strength of these emotional reactions frequently overwhelms statistical reasoning. Perhaps most disturbing is how situations can trigger extreme behaviors at odds with personal values. Philip Zimbardo's Stanford Prison Experiment demonstrated how quickly ordinary students adopted abusive behaviors when placed in roles as prison guards. Similarly, Stanley Milgram's obedience studies showed that most people would administer apparently painful shocks to others when instructed by an authority figure. These findings suggest that situational power can override individual dispositions in ways that traditional Western psychology, with its focus on internal character, often underestimates.
Chapter 5: System Complexity: Why Understanding Parts Doesn't Explain the Whole
Complex adaptive systems—from honey bee colonies to financial markets—consistently defy our attempts to understand them by analyzing their components. This fundamental mistake—trying to extrapolate the behavior of a complex system from studying its individual parts—leads to persistent failures in prediction and management across domains. The mismatch occurs because in complex systems, the interactions between components create emergent properties and behaviors that cannot be deduced from knowledge of the parts alone. As Nobel physicist Philip Anderson noted, "More is different"—meaning that at each level of complexity, entirely new properties appear. This principle explains why studying individual ants tells us little about colony behavior, or why interviewing individual investors offers scant insight into market movements. Consider honey bee swarms seeking new homes. Only a few hundred scout bees evaluate potential sites, returning to perform "waggle dances" whose duration indicates site quality. When about fifteen scouts gather at a promising location, they return to trigger the swarm's departure. This decentralized decision-making process almost always identifies the optimal home site, yet no individual bee understands or directs the collective choice. The intelligence exists at the system level, not the component level. Our difficulty with complex systems stems partly from our evolved tendency to seek simple cause-effect relationships. This tendency served our ancestors well in natural environments where most patterns were predictable. However, in systems with many interacting parts, our intuitive causal models break down. The Yellowstone National Park ecosystem offers a sobering example. When park managers eliminated wolves to protect elk populations, they triggered cascading effects—exploding elk numbers led to overgrazing, which reduced beaver populations, which altered stream habitats, ultimately degrading the entire ecosystem. Each intervention aimed at fixing one component created unforeseen consequences throughout the system. Financial markets demonstrate similar dynamics. The 2008 decision to allow Lehman Brothers to fail assumed markets could absorb the impact of one investment bank's collapse. However, the bankruptcy triggered a global cascade of increased risk aversion that threatened the entire financial system. What seemed like a contained problem within one component rapidly propagated through the interconnected network of financial institutions. Even organizations' attempts to improve performance through hiring "star" performers often backfire because of systems thinking failures. Research tracking top-rated equity analysts who changed firms found their performance typically plummeted after moving. Their previous success depended not just on individual skill but on the surrounding system—firm reputation, supporting analysts, familiar processes—that couldn't transfer with them. The isolated individual, despite exceptional personal attributes, cannot replicate performance without the supporting system. Effectively addressing complex systems requires shifting focus from components to interactions, using simulation tools to model emergent behaviors, distinguishing between tightly and loosely coupled systems, and recognizing that interventions may have non-linear effects. Most importantly, it requires humility—acknowledging that complex systems cannot be fully controlled or predicted, only influenced with careful attention to potential unintended consequences.
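The bee example describes a decentralized protocol—quality-weighted recruitment plus a quorum threshold—whose logic can be caricatured in a few lines. The sketch below is a deliberately crude toy model of that positive-feedback recruitment loop, not Seeley's actual findings or parameters; only the roughly fifteen-scout quorum comes from the text, and the sites and numbers are invented.

```python
import random

random.seed(7)
SITE_QUALITY = {"hollow tree": 0.9, "rock crevice": 0.6, "old shed": 0.4}  # hypothetical sites
QUORUM = 15            # scouts needed at one site to trigger departure (from the text)
scouts_at = {s: 0 for s in SITE_QUALITY}
uncommitted = 100      # scouts still searching (arbitrary)

while max(scouts_at.values()) < QUORUM and uncommitted > 0:
    # Each new recruit follows a dance with probability proportional to the
    # dance's vigor: site quality amplified by how many scouts already back it.
    weights = {s: SITE_QUALITY[s] * (1 + scouts_at[s]) for s in SITE_QUALITY}
    site = random.choices(list(weights), weights=list(weights.values()))[0]
    scouts_at[site] += 1
    uncommitted -= 1

print(scouts_at)  # the highest-quality site usually reaches the quorum first
```

No individual "bee" in the simulation compares sites or counts votes; the good collective choice emerges from the interaction rule alone, which is the chapter's point about system-level intelligence.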
Chapter 6: Circumstantial Evidence: The Critical Role of Context in Decision Frameworks
The effectiveness of any strategy or decision-making approach depends fundamentally on circumstances, yet we consistently overlook this crucial dimension. We habitually seek universal principles, best practices, or formulas for success without recognizing that what works brilliantly in one context may fail miserably in another. This tendency manifests clearly in how we develop and apply theories. When building theories to explain phenomena, we typically progress through stages of observation, classification, and definition. However, as Paul Carlile and Clayton Christensen emphasize, many theories remain stuck at primitive levels—identifying attributes correlated with success rather than understanding the circumstances under which different approaches succeed or fail. The result is dangerously incomplete guidance that overlooks context. Boeing's troubled 787 Dreamliner program illustrates this pitfall. Observing successful companies like Apple and Dell that extensively outsourced production, Boeing dramatically expanded outsourcing for its new aircraft. But Boeing failed to recognize that outsourcing works under specific circumstances—particularly when components are modular and integration is straightforward. Aircraft manufacturing, with its complex interdependencies and precision requirements, presented fundamentally different circumstances. The result was severe delays and billions in cost overruns as Boeing discovered outsourcing a tightly integrated product created insurmountable coordination challenges. The Colonel Blotto game—a mathematical model where players distribute limited resources across multiple battlefields—provides further insight into circumstantial decision-making. The game reveals that optimal strategies depend critically on both resources (attributes) and dimensions (circumstances). In low-dimension contests like tennis, the player with superior resources (skill, strength, speed) almost always wins. But in high-dimension contests like baseball, outcomes include substantial randomness, and even weaker players regularly defeat stronger ones. Understanding this contextual difference explains why baseball has more upsets than tennis and why tournaments often fail to reveal the "best" competitor. Another common mistake is confusing correlation with causation, making it seem as if certain attributes cause success when circumstances may be the determining factor. The "Super Bowl Indicator"—which notes that stock markets rise when NFC teams win the championship—has been correct nearly 80% of the time despite having no causal connection. Such spurious correlations lead to mistaking circumstantial patterns for universal principles. Perhaps most instructive is the fate of the Norse settlements in Greenland, which collapsed after four centuries despite the settlers' intelligence and industriousness. Their failure stemmed from inflexibility—continuing practices that had worked in Norway and Iceland but were unsuited to Greenland's environment. Even as their society deteriorated, they refused to adopt successful techniques from the native Inuit, eventually starving amid unused food resources. Their story exemplifies how clinging to context-free "best practices" while ignoring circumstances can lead to catastrophe. Effective decision-making requires identifying the circumstances that determine when different approaches succeed or fail. 
This means developing theories that specify not just what works but when and why it works, recognizing that causality depends on context, balancing simple rules with changing conditions, and understanding that in multi-dimensional domains, no single "best" practice exists. The appropriate answer to most strategic questions isn't "this is what works" but rather "it depends"—followed by a sophisticated analysis of what it depends on.
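The Super Bowl Indicator mentioned above is a useful reminder of how easily chance masquerades as signal: search enough meaningless indicators and some will "predict" the market most of the time. The sketch below makes that point with purely random data; the numbers are arbitrary and not drawn from the book.

```python
import random

random.seed(1)
YEARS = 40
market_up = [random.random() < 0.5 for _ in range(YEARS)]  # toy up/down market history

def match_rate(signal, outcome):
    """Fraction of years in which a binary indicator matched the market's direction."""
    return sum(s == o for s, o in zip(signal, outcome)) / len(outcome)

# Test 10,000 "indicators" that are nothing but coin flips.
best = max(
    match_rate([random.random() < 0.5 for _ in range(YEARS)], market_up)
    for _ in range(10_000)
)
print(f"best purely random indicator matched the market {best:.0%} of the time")
```

With enough candidate indicators, the best of them typically matches the market close to 80% of the time despite having no causal connection to it—exactly the pattern that makes circumstantial correlations look like universal principles.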
Chapter 7: Phase Transitions: How Small Changes Can Lead to Dramatic Outcomes
In many complex systems, small incremental changes in causes can produce sudden, dramatic shifts in effects—a phenomenon known as a phase transition. These "grand ah-whooms," as physicist Philip Ball calls them, occur when a system crosses a critical threshold, fundamentally changing its behavior. Understanding these transitions is crucial because they defy our intuitive expectations of proportionality between cause and effect. The Millennium Bridge in London vividly illustrates this principle. When the elegant footbridge opened in 2000, engineers had thoroughly tested it against established standards. Yet once about 165 pedestrians were on the span, it began to sway dramatically. Testing revealed that with 156 people walking normally, the bridge exhibited minimal movement. Adding just 10 more pedestrians triggered a phase transition where slight sideways forces from walking created feedback loops, amplifying the swaying motion and forcing people to synchronize their gait. The transition from stability to instability was not gradual but sudden and dramatic. This pattern appears across diverse systems. Financial markets typically operate with balance between negative feedback (stabilizing arbitrage) and positive feedback (trend-following). But research by economist Blake LeBaron shows that as investors increasingly adopt similar strategies, markets can continue rising even as diversity decreases. This creates invisible vulnerability—everything appears stable until a small perturbation triggers a phase transition, sending prices plummeting. The market doesn't gradually decline; it crashes once a critical threshold is crossed. Our cognitive architecture is poorly equipped to anticipate these transitions. We naturally expect proportional relationships between cause and effect and struggle to comprehend systems where small changes produce outsized results. This limitation leads to several decision mistakes. First, we fall prey to the problem of induction, extrapolating from past patterns into the future without recognizing potential tipping points. Second, we exhibit reductive bias, treating complex nonlinear systems as if they were simpler and more predictable than they actually are. The challenge of prediction in systems with phase transitions is further illustrated by research on cultural markets. When researchers created an experimental music download site, they found that song popularity varied dramatically across identical participant groups based solely on small initial differences in download patterns. One song ranked 26th in a control group (where participants couldn't see others' choices) became the number one hit in one social influence world and ranked 40th in another. Small initial differences amplified through social influence created radically different outcomes that were fundamentally unpredictable. For decision-makers navigating systems with phase transitions, several strategies can help. First, study the distribution of outcomes in your system to distinguish between normal variation and extreme events. Second, watch for declining diversity as a warning sign of potential phase transitions. Third, approach forecasts with appropriate skepticism, recognizing the inherent unpredictability in these systems. Finally, prepare for both positive and negative extreme outcomes, following investor Peter Bernstein's wisdom that "consequences are more important than probabilities." 
The key insight is that in systems with phase transitions, focusing exclusively on the most likely outcome is insufficient. Instead, decision-makers must consider the full range of possibilities, particularly extreme scenarios, and develop robust strategies that can withstand dramatic shifts rather than optimize for a single predicted future.
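The music-download result reflects cumulative advantage: when choices feed back into visibility, small early differences compound. Below is a minimal toy version of that dynamic—identical songs and listeners, different random histories—inspired by, but far simpler than, the experiment described above; the parameters are invented.

```python
import random

random.seed(3)
NUM_SONGS, LISTENERS = 20, 2_000
quality = [random.random() for _ in range(NUM_SONGS)]  # hypothetical intrinsic appeal

def run_world():
    """One 'social-influence world': listeners pick songs with probability
    proportional to quality times (1 + prior downloads)."""
    downloads = [0] * NUM_SONGS
    for _ in range(LISTENERS):
        weights = [quality[i] * (1 + downloads[i]) for i in range(NUM_SONGS)]
        song = random.choices(range(NUM_SONGS), weights=weights)[0]
        downloads[song] += 1
    return downloads

for world in range(3):
    d = run_world()
    winner = max(range(NUM_SONGS), key=lambda i: d[i])
    print(f"world {world}: top song is #{winner} with {d[winner]} downloads")
# Across worlds the leaders' download counts vary dramatically, and the winner
# itself can differ, even though song quality is identical in every run.
```

The design choice that drives the divergence is the feedback term (1 + downloads): remove it and the worlds converge on quality; keep it and early luck gets locked in.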
Chapter 8: Skill vs. Luck: Misinterpreting Results in Probabilistic Environments
In domains where outcomes reflect both skill and luck, we consistently make a fundamental error: we misattribute results, overemphasizing skill when outcomes are favorable and underestimating the role of chance. This mistake leads to poor decisions across fields from business to sports to investing, as we draw incorrect lessons from successes and failures. Francis Galton's discovery of reversion to the mean provides the scientific foundation for understanding this phenomenon. Studying hereditary traits like height, Galton observed that while tall parents tend to have tall children, those children are typically closer to average height than their parents. Similarly, short parents generally have short children who are taller than themselves. This pattern—extreme outcomes followed by more moderate ones—appears whenever a system combines stable skill differences with random variation. The implications for decision-making are profound yet frequently overlooked. When George Steinbrenner, owner of the New York Yankees baseball team, berated his players after a poor start to the 2005 season, he failed to recognize that their early losses largely reflected bad luck rather than lack of skill. The team's subsequent improvement wasn't due to his criticism but to reversion to the mean as their underlying skill level emerged over more games. This misunderstanding leads to three common mistakes. First, we think we're special, believing we can defy statistical regularities. Executives implementing turnaround strategies often attribute subsequent improvements to their interventions rather than regression effects that would have occurred regardless. Second, we misinterpret what the data reveals. Researchers examining corporate performance often note that extreme profitability tends to moderate over time, incorrectly concluding that all companies become mediocre. In reality, the overall distribution remains stable—companies simply shuffle positions as luck evens out. The third mistake involves feedback and reinforcement. Flight instructors in the Israeli air force noted that pilots typically performed worse after praise and better after criticism, leading them to conclude that criticism improved performance. What they missed was that pilots receiving praise had often performed exceptionally well partly due to good luck, making subsequent regression inevitable. Similarly, pilots who received criticism had performed poorly partly due to bad luck and naturally improved afterward. These misinterpretations contribute to what Phil Rosenzweig calls "the halo effect"—our tendency to attribute multiple positive qualities to successful organizations and multiple negative qualities to struggling ones, ignoring how performance itself colors our perception. Studies show that business magazines typically praise companies for visionary leadership and strong culture when performance is good, then criticize the same leadership and culture when performance declines. The same media that celebrated ABB's CEO Percy Barnevik as "charismatic and visionary" during good times later described him as "arrogant and imperial" when results faltered. To improve decision quality, we must carefully assess the relative contributions of skill and luck in any domain. Activities where outcomes are heavily influenced by chance require larger sample sizes to distinguish skill from luck. 
We should also recognize that streaks and hot hands, while psychologically compelling, often reflect statistical patterns rather than changes in underlying ability. Perhaps most importantly, we should focus feedback and evaluation on process rather than outcomes, as process more accurately reflects the skill component that individuals can control.
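Reversion to the mean falls straight out of a two-ingredient model: outcome equals skill plus luck. The sketch below uses made-up normal distributions (not data from the book) to show that period-one top performers land much closer to their underlying skill in period two.

```python
import random

random.seed(11)
N = 1_000
skill = [random.gauss(0, 1) for _ in range(N)]      # stable ability

def outcome(s):
    """One period's result: skill plus an equal dose of luck."""
    return s + random.gauss(0, 1)

period1 = [outcome(s) for s in skill]
period2 = [outcome(s) for s in skill]
avg = lambda xs: sum(xs) / len(xs)

# Pick the top 10% of performers from period 1 and follow the same group.
top = sorted(range(N), key=lambda i: period1[i], reverse=True)[: N // 10]

print(f"top decile in period 1:   {avg([period1[i] for i in top]):.2f}")
print(f"same group in period 2:   {avg([period2[i] for i in top]):.2f}")
print(f"their underlying skill:   {avg([skill[i] for i in top]):.2f}")
# Period-2 results drop back toward the group's true skill: the extra edge in
# period 1 was partly luck, which does not persist.
```

The same arithmetic explains the flight-instructor story: praise follows lucky extremes and criticism follows unlucky ones, so the subsequent regression gets misread as the effect of the feedback.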
Summary
The journey through common cognitive errors reveals a fundamental gap between how we believe we make decisions and how our minds actually work. Our mental architecture, optimized for an ancestral world of immediate physical threats, leads us systematically astray when facing complex modern challenges. We favor the inside view over statistical evidence, tunnel our vision to exclude viable alternatives, trust experts when algorithms would serve us better, remain oblivious to situational influences, misunderstand complex systems, overlook crucial contextual factors, fail to anticipate phase transitions, and misattribute the roles of skill and luck. The path to better decisions lies not in lamenting these limitations but in developing practical strategies to overcome them. By learning to seek the outside view, explicitly considering alternatives, using technology and collective wisdom appropriately, recognizing situational factors, respecting system complexity, accounting for circumstances, preparing for nonlinear outcomes, and properly distinguishing skill from luck, we can significantly improve our choices. Decision journals, premortems, and checklists offer concrete tools to implement these insights. The ultimate lesson is that thinking twice—pausing to question our initial judgments and apply appropriate analytical frameworks—can transform our decision-making from a liability into a strategic advantage in both professional and personal domains.
Best Quote
“However, once you realize the answer to most questions is, ‘It depends,’ you are ready to embark on the quest to figure out what it depends on.” ― Michael J. Mauboussin, Think Twice: Harnessing the Power of Counterintuition
Review Summary
Strengths: Mauboussin provides a precise summary of the black swan concept, a notable strength for readers who find Taleb's writings heavy going, and the book is described as an enjoyable enough read.
Weaknesses: The book is criticized for lacking energy and spark, with the reviewer arguing the subject matter needed livelier treatment. The reviewer also implies that the book's brevity is its most commendable feature, suggesting limited depth.
Overall Sentiment: Mixed. While the book offers valuable insights, particularly in summarizing complex concepts, it is perceived as dull in presentation.
Key Takeaway: The book explains complex ideas, especially the black swan concept, concisely and precisely, but its flat writing makes it less engaging despite its brevity.

Think Twice
By Michael J. Mauboussin