
The Undoing Project

A Friendship That Changed Our Minds

4.0 (62,546 ratings)
23-minute read | Text | 9 key ideas
In a world where the human mind is both a labyrinth and a marvel, Daniel Kahneman and Amos Tversky emerged as the audacious trailblazers who dared to map its intricacies. "The Undoing Project" by Michael Lewis captures the electrifying partnership of these two pioneering psychologists, whose groundbreaking insights into human decision-making have redefined our understanding of choice and reason. From the bustling halls of academia to the practical realms of economics, medicine, and governance, their Nobel-winning theories illuminate the invisible biases that guide our every move. With a narrative as captivating as it is enlightening, this book chronicles not just their scientific achievements but also the profound friendship that fueled a seismic shift in how we perceive reality. Perfect for readers eager to unravel the enigmas of human behavior, Lewis crafts a tale of intellectual courage and camaraderie that resonates far beyond the pages.

Categories

Business, Nonfiction, Psychology, Philosophy, Finance, Science, Biography, History, Economics, Audiobook

Content Type

Book

Binding

Paperback

Year

2017

Publisher

W. W. Norton & Company

Language

English

ASIN

0393354776

ISBN

0393354776

ISBN13

9780393354775


The Undoing Project Plot Summary

Introduction

Have you ever wondered why you feel the pain of losing $100 more intensely than the pleasure of winning the same amount? Or why doctors sometimes make different treatment recommendations when presented with identical patient information in different formats? These puzzling aspects of human behavior remained largely unexplained until two brilliant psychologists, Daniel Kahneman and Amos Tversky, embarked on a remarkable intellectual journey together in the late 1960s.

Their collaboration revealed something profound about the human mind: we don't make decisions as rationally as we think. Through ingenious experiments, they discovered that our minds rely on mental shortcuts—heuristics—that work well enough most of the time but lead to systematic errors in judgment. Their findings challenged the foundation of economic theory, which had long assumed humans were rational actors.

By exposing the predictable irrationality in how we assess risks, make judgments, and form decisions, they created a new field at the intersection of psychology and economics. Their work not only transformed academic thinking but also revolutionized fields from medicine to public policy, offering practical insights into how we can make better decisions by understanding the quirks and biases of our minds.

Chapter 1: The Unlikely Partnership That Changed Science

In the psychology department at Hebrew University in Jerusalem, laughter regularly echoed from behind a closed door. Inside, two men with dramatically different personalities were engaged in one of the most productive intellectual partnerships in history. Danny Kahneman was introspective, perpetually self-doubting, and imaginative. Amos Tversky was confident, analytical, and brilliant. When they met in the late 1960s, they were already accomplished academics, but something extraordinary happened when they began working together.

"When I gave him an idea," Danny later recalled, "he would look for what was good in it. For what was right with it. That, for me, was the happiness in the collaboration. He understood me better than I understood myself." Their conversations would stretch for hours, punctuated by bursts of laughter as they discovered new insights about human judgment. Danny would often balance precariously on the back legs of his chair, laughing so hard he nearly fell backwards. To outsiders, it seemed as if they were having too much fun to be doing serious work.

Their first major paper together examined what they called "the belief in the law of small numbers"—people's tendency to draw sweeping conclusions from limited evidence. They demonstrated that even trained statisticians made this error, believing small samples were more representative of larger populations than they actually were. When they presented their findings to a group of statisticians, one remarked, "I know all this is true, but I don't really believe it." This reaction revealed exactly what Kahneman and Tversky were discovering: human intuition often conflicts with statistical reality.

The partnership worked because they complemented each other perfectly. Danny generated creative ideas and identified psychological puzzles; Amos brought analytical rigor and mathematical precision. "In our work together," Danny said, "I was the worrier, identifying gaps and weaknesses, while Amos was the voice of confidence, finding solutions and pushing forward." They developed a unique working method—talking through ideas, challenging each other, and writing together so seamlessly that they often couldn't remember who had contributed which idea.

Their collaboration produced insights that would eventually reshape multiple fields. They discovered that human judgment under uncertainty doesn't follow the rules of probability but instead relies on mental shortcuts that lead to predictable errors. These findings directly challenged economic theory, which had long assumed that people make rational decisions to maximize their self-interest. By revealing the systematic ways humans deviate from rationality, Kahneman and Tversky laid the groundwork for behavioral economics and influenced fields from medicine to public policy.

Chapter 2: Heuristics: Mental Shortcuts That Lead Us Astray

In 1969, Danny Kahneman invited Amos Tversky to speak to his graduate seminar at Hebrew University. The seminar was called "Applications of Psychology," and each week Danny brought in some real-world problem for students to address using psychological principles. Amos described an experiment where researchers presented people with two bags of poker chips—one containing 75% white chips and 25% red chips, the other with the opposite ratio. Participants would draw chips from a randomly selected bag and estimate the probability of which bag they were drawing from.

When Amos finished describing the experiment, Danny was baffled. The research seemed trivially obvious—of course people would update their beliefs based on the chips they drew. But Danny's intuition told him something was missing. "I had never thought much about thinking," he later admitted. "But this research into the human mind bore no relationship to what I knew about what people actually did in real life." Danny suspected people weren't the careful statisticians that researchers assumed. "I knew I was a lousy intuitive statistician," he said. "And I really didn't think I was stupider than anyone else." This conversation sparked their investigation into what they would later call "heuristics"—mental shortcuts people use to make judgments under uncertainty.

The first heuristic they identified was "representativeness." When making judgments, people compare whatever they're judging to some mental model. How similar are these clouds to my mental image of an approaching storm? How closely do this patient's symptoms match my conception of a particular disease? The more similar something seems to our mental model, the more likely we think it belongs to that category—even when this leads us astray. In one experiment, they found people thought the birth sequence G-B-G-B-B-G was more likely than B-B-B-G-G-G, even though both sequences are equally probable, because the first "looks more random."

The second heuristic they discovered was "availability." The more easily we can recall examples of something, the more common or likely we believe it to be. After seeing a car accident, we drive more cautiously because the possibility of a crash seems more probable. News coverage of shark attacks makes us fear swimming in the ocean, despite the minuscule statistical risk. Our judgments are skewed by what comes easily to mind—vivid, recent, or emotionally charged examples—rather than by actual frequencies or probabilities.

These mental shortcuts serve us well in many situations—they're efficient and usually get us close enough to the right answer. But they also lead to systematic errors in judgment. By identifying these patterns of error, Kahneman and Tversky revealed something profound: human irrationality isn't random. It follows predictable patterns that can be studied, understood, and potentially corrected. This insight would eventually transform our understanding of decision-making across domains from medicine to finance to public policy.
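Both examples in this chapter have exact normative answers that intuition tends to miss. The short Python sketch below is not from the book—the particular draw sequence and the even prior are invented for illustration—but it computes the Bayesian posterior that serves as the benchmark in the two-bag task, along with the probability of each six-birth sequence.

```python
from math import prod

def bag_posterior(draws, p_red_bag_a=0.25, p_red_bag_b=0.75, prior_a=0.5):
    """Posterior probability that the chips came from bag A (25% red, 75% white).

    draws: sequence of 'R' / 'W' chips drawn with replacement.
    """
    like_a = prod(p_red_bag_a if d == "R" else 1 - p_red_bag_a for d in draws)
    like_b = prod(p_red_bag_b if d == "R" else 1 - p_red_bag_b for d in draws)
    return like_a * prior_a / (like_a * prior_a + like_b * (1 - prior_a))

# Three white chips and one red already make the mostly-white bag very likely:
print(round(bag_posterior("WWWR"), 3))   # 0.9 under Bayes' rule, the normative benchmark

# And both six-birth sequences are equally probable: (1/2) ** 6 each.
print(0.5 ** 6, 0.5 ** 6)                # 0.015625 0.015625
```

The gap between answers like these and people's gut estimates is precisely what the heuristics program set out to explain.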

Chapter 3: Why We Fear Losses More Than We Value Gains

In 1979, Kahneman and Tversky published a paper in the economics journal Econometrica that would eventually become one of the most cited papers in the field. The paper, titled "Prospect Theory: An Analysis of Decision Under Risk," challenged the dominant economic theory of how people make decisions involving risk.

The breakthrough came during a conversation when Amos asked a simple question: "What if we flipped the signs?" Until then, they had been studying how people choose between potential gains. Now they wondered how people would behave when facing potential losses. They posed questions like: Would you rather lose $500 for sure or take a 50-50 chance of losing $1,000? The results were striking. When dealing with gains, people were risk-averse, preferring a sure thing over a gamble with the same expected value. But when facing losses, people became risk-seeking, preferring to take chances to avoid a certain loss. "It was a eureka moment," Kahneman recalled. "We immediately felt like fools for not thinking of that question earlier."

This asymmetry in how people approach gains versus losses became known as loss aversion—the tendency for losses to loom larger than gains of the same magnitude. To accept a 50-50 gamble, most people require the potential gain to be at least twice the potential loss. As they wrote, "The happiness involved in receiving a desirable object is smaller than the unhappiness involved in losing the same object."

Their experiments revealed another crucial insight: people evaluate outcomes relative to a reference point, not in absolute terms. Imagine Jack and Jill each have $5 million today. Yesterday, Jack had $1 million and Jill had $9 million. Are they equally happy? Of course not—Jack is elated while Jill is distraught. What matters isn't the final state but the change from a reference point. This explained why the same objective outcome could be perceived as either a gain or a loss depending on expectations.

Prospect Theory also explained several economic puzzles that traditional theory couldn't account for. Why do people simultaneously buy insurance (showing risk aversion) and lottery tickets (showing risk seeking)? Why do investors hold onto losing stocks too long but sell winners too quickly? Why do taxi drivers work longer hours on slow days and quit early on busy days? All these behaviors made sense through the lens of Prospect Theory.

The implications were revolutionary. Economic models had long assumed that people make rational decisions to maximize their utility. Prospect Theory showed that real humans don't behave this way—they have predictable biases that lead to suboptimal decisions. This insight would eventually spawn behavioral economics, a field that incorporates psychological insights into economic models. As Richard Thaler, who would later win a Nobel Prize for his work in behavioral economics, put it: "That paper was what an economist would call proof of existence. It showed you could do math with psychology in it. It captured so much of human nature."
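The value function at the heart of Prospect Theory makes these choices concrete. The sketch below is only an illustration: the curvature and loss-aversion parameters (0.88 and 2.25) are taken from Tversky and Kahneman's later 1992 estimates rather than the 1979 paper, and the theory's probability-weighting component is omitted for simplicity.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a change x relative to the reference point.

    Concave for gains, convex and steeper for losses. Parameters are the
    1992 Tversky-Kahneman estimates, used here purely for illustration.
    """
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Gains: a sure $500 is worth more than a 50-50 shot at $1,000 (risk aversion).
print(value(500), 0.5 * value(1000))      # ~237 vs ~218

# Losses: a 50-50 shot at losing $1,000 hurts less than a sure $500 loss
# (risk seeking over losses).
print(value(-500), 0.5 * value(-1000))    # ~-533 vs ~-491

# Loss aversion: break-even gain in a 50-50 bet against a $100 loss.
print((2.25 * 100 ** 0.88) ** (1 / 0.88))  # ~251, i.e. well over twice the loss
```

With these illustrative parameters the same function reproduces all three patterns in the chapter: risk aversion over gains, risk seeking over losses, and a required gain of at least twice the potential loss before a mixed gamble looks acceptable.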

Chapter 4: The Framing Effect: Same Facts, Different Decisions

In the early 1980s, Kahneman and Tversky presented a scenario that would become one of their most famous demonstrations. Known as the "Asian Disease Problem," it went like this: Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. If Program A is adopted, 200 people will be saved. If Program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved. Which program would you choose?

When presented with this choice, the majority of participants chose Program A, preferring the certainty of saving 200 lives. Then they presented a second group with the same scenario but framed differently: If Program C is adopted, 400 people will die. If Program D is adopted, there is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die. Which program would you choose? This time, the majority chose Program D, preferring to take a risk to avoid the certain death of 400 people.

Yet the two scenarios are identical—Programs A and C both result in 200 people living and 400 dying, while Programs B and D both offer a 1/3 chance of saving everyone and a 2/3 chance of everyone dying. The only difference was how the options were framed: as gains (lives saved) or losses (lives lost). "This was a profound discovery," explained a public health official who later applied these insights. "It showed that people don't choose between things; they choose between descriptions of things." The same medical treatment can be presented as having a "90% survival rate" or a "10% mortality rate," and these different frames lead to different decisions, even among medical professionals.

The power of framing extends far beyond hypothetical scenarios. When credit card companies label the difference between cash and credit prices as a "cash discount" rather than a "credit card surcharge," customers are more likely to pay with cash. When doctors describe a surgery as having a "90% survival rate" rather than a "10% mortality rate," more patients opt for the procedure. When retirement plans are framed as "opt-out" rather than "opt-in," participation rates increase dramatically.

Kahneman and Tversky realized that framing effects violated a fundamental assumption of rational choice theory: description invariance. According to traditional economic theory, preferences shouldn't change based on how options are described. Yet their research showed that framing could completely reverse people's preferences. "Framing is not a mistake that people make," Kahneman later explained. "It's the way the human mind works."

The implications of framing effects are enormous for anyone trying to influence decisions—from marketers and politicians to doctors and policy makers. By understanding how framing shapes perception and choice, we can become more aware of how our own decisions are influenced by the way information is presented to us. And we can make more deliberate choices about how we frame information when communicating with others.
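The arithmetic behind the demonstration is easy to check: the two frames describe the same sure outcome and the same gamble over 600 lives. A few illustrative lines make that explicit.

```python
TOTAL = 600

# Gain frame: lives saved.
saved_A = 200                       # Program A, certain
saved_B = (1/3) * 600 + (2/3) * 0   # Program B, expected lives saved

# Loss frame: lives lost.
lost_C = 400                        # Program C, certain
lost_D = (1/3) * 0 + (2/3) * 600    # Program D, expected lives lost

print(saved_A, TOTAL - lost_C)      # 200 200     -> A and C are the same outcome
print(saved_B, TOTAL - lost_D)      # 200.0 200.0 -> B and D are the same gamble
```

Identical expected outcomes, yet the gain frame pushes most respondents toward the sure thing and the loss frame toward the gamble—the asymmetry Prospect Theory predicts.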

Chapter 5: When Experts Make Systematic Errors

In 1960, Paul Hoffman established the Oregon Research Institute, a private institution devoted exclusively to the study of human behavior. One of their groundbreaking studies examined how radiologists diagnosed stomach ulcers. The researchers asked doctors to judge the probability of cancer in ninety-six different stomach ulcers on a seven-point scale from "definitely malignant" to "definitely benign." Without telling the doctors, they showed them each ulcer twice, mixing up the duplicates randomly so the doctors wouldn't notice they were diagnosing the same ulcer twice.

The results were unsettling. First, the doctors' diagnoses could be predicted by a simple mathematical model, suggesting their thought processes weren't as complex as they believed. More surprisingly, the doctors frequently disagreed with each other about the same ulcers. Most disturbing of all, when presented with duplicates of the same ulcer, every doctor contradicted himself at least once, giving different diagnoses to identical cases.

The researchers repeated similar experiments with psychiatrists, asking them to predict which psychiatric patients might become violent if released. Once again, the experts disagreed with each other, and graduate students with minimal training were just as accurate in their predictions as seasoned professionals. Experience appeared to provide little advantage in making these judgments.

Then one of the researchers made a radical suggestion: "One of these models you built might actually be better than the doctor." They tested this hypothesis by creating a simple algorithm based on the factors doctors said they considered when diagnosing ulcers. Remarkably, this algorithm outperformed not just the average doctor but even the single best doctor in the study. As researcher Lewis Goldberg explained: "The clinician is not a machine. While he possesses his full share of human learning and hypothesis-generating skills, he lacks the machine's reliability. He 'has his days': Boredom, fatigue, illness, situational and interpersonal distractions all plague him."

When Amos Tversky visited the Oregon Research Institute in 1970, he was fascinated by these findings. They aligned perfectly with what he and Danny had been discovering about human judgment: even experts were susceptible to systematic biases. The problem wasn't just that doctors occasionally had "off days"—their very approach to diagnosis relied on mental shortcuts that could lead them astray. They would leap from a patient's symptoms to a diagnosis that seemed representative of those symptoms, without adequately considering statistical base rates or alternative explanations.

These findings had profound implications for fields from medicine to finance to criminal justice. In many domains, simple algorithms could outperform expert judgment not because the algorithms were sophisticated, but because they were consistent. They applied the same rules every time, without being influenced by irrelevant factors, recent experiences, or emotional states. This didn't mean human experts were worthless—their knowledge and experience were essential for identifying relevant factors and building the algorithms in the first place. But it suggested that in many situations, we might improve decision quality by supplementing or replacing human judgment with systematic procedures.
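The procedure behind this result—often called building a "model of the judge"—amounts to regressing an expert's own ratings on the cues the expert says they use, then applying the fitted weights mechanically. The sketch below uses synthetic data and invented weights purely to illustrate the idea; it is not the Oregon study itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# 96 synthetic "ulcer" cases described by the cues a radiologist says they
# use (size, crater shape, contour regularity, ...). All values are invented.
n_cases, n_cues = 96, 5
cues = rng.normal(size=(n_cases, n_cues))

# Assume the true malignancy signal depends on the cues in a stable way.
true_weights = np.array([0.9, 0.5, -0.7, 0.3, 0.1])
truth = cues @ true_weights

# The doctor follows roughly the same policy, plus the boredom, fatigue,
# and distraction Goldberg described, modeled here as random noise.
doctor = truth + rng.normal(scale=1.2, size=n_cases)

# "Model of the judge": regress the doctor's own ratings on the cues,
# then score every case with the fitted weights.
weights, *_ = np.linalg.lstsq(cues, doctor, rcond=None)
model = cues @ weights

print(np.corrcoef(doctor, truth)[0, 1])  # the doctor's own accuracy
print(np.corrcoef(model, truth)[0, 1])   # the model of the doctor, typically higher

# Unlike the doctor, the model never contradicts itself on a duplicate case:
print(np.allclose(cues[0] @ weights, model[0]))  # True
```

In this toy setup the regression strips out the day-to-day noise while keeping the doctor's weighting policy, which is why the model of the judge tends to track the underlying signal at least as well as the judge it was built from.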

Chapter 6: Hindsight Bias: Why the Past Seems Predictable

In the summer of 1972, Amos Tversky gave a talk at the State University of New York at Buffalo titled "Historical Interpretation: Judgment Under Uncertainty." With a flick of the wrist, he showed a roomful of professional historians just how much of human experience could be reexamined in a fresh, new way, if seen through the lens he had created with Danny. "In the course of our personal and professional lives, we often run into situations that appear puzzling at first blush," Amos began. "We cannot see for the life of us why Mr. X acted in a particular way, we cannot understand how the experimental results came out the way they did, etc. Typically, however, within a very short time we come up with an explanation, a hypothesis, or an interpretation of the facts that renders them understandable, coherent, or natural."

As an example, Amos described research being conducted by his graduate student Baruch Fischhoff. When Richard Nixon announced his surprising intention to visit China and Russia, Fischhoff asked people to assign odds to various possible outcomes—that Nixon would meet Chairman Mao at least once, that the U.S. and Soviet Union would create a joint space program, that Soviet Jews would be arrested for attempting to speak with Nixon. After Nixon's trip, Fischhoff asked the same people to recall the odds they had assigned to each outcome. Their memories were badly distorted. Everyone believed they had assigned higher probabilities to what actually happened than they actually had. They greatly overestimated the odds they had originally given to events that ended up occurring. Once they knew the outcome, they thought it had been far more predictable than they had found it to be before. Fischhoff named this phenomenon "hindsight bias."

Amos described this tendency as "creeping determinism"—the sense that what happened was inevitable, even when it wasn't. "All too often, we find ourselves unable to predict what will happen; yet after the fact we explain what did happen with a great deal of confidence," he told the historians. "This 'ability' to explain that which we cannot predict, even in the absence of any additional information, represents an important, though subtle, flaw in our reasoning. It leads us to believe that there is a less uncertain world than there actually is."

The historians in his audience prided themselves on their ability to construct explanatory narratives that made past events seem almost predictable. The only question that remained, once the historian had explained how and why some event had occurred, was why the people in the narrative had not seen what the historian could now see. "All the historians attended Amos's talk," recalled a colleague, "and they left ashen-faced."

Hindsight bias affects us all. After a stock market crash, financial commentators confidently explain why it was inevitable. After a medical diagnosis, patients remember symptoms they believe should have made the condition obvious earlier. After an election, political analysts act as if the outcome was foreseeable all along. This bias has serious consequences: it makes us overconfident about our ability to predict future events, prevents us from learning from surprises, and leads us to unfairly blame decision-makers who couldn't have known what we now know. As Amos noted in his personal notes: "He who sees the past as surprise-free is bound to have a future full of surprises." By recognizing hindsight bias, we can maintain appropriate humility about our predictive abilities and make better decisions in an uncertain world.
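Fischhoff's measure of the bias is straightforward bookkeeping: compare the probability each person assigned beforehand with the probability they later recall assigning, separately for outcomes that did and did not occur. The numbers below are invented purely to illustrate the calculation.

```python
# Each record: (probability assigned before the trip,
#               probability recalled afterwards,
#               whether the outcome actually occurred). Values are invented.
judgments = [
    (0.30, 0.50, True),
    (0.10, 0.25, True),
    (0.40, 0.25, False),
    (0.20, 0.10, False),
]

def mean_recall_shift(records, occurred):
    """Average (recalled - original) probability for outcomes matching `occurred`."""
    shifts = [recalled - original
              for original, recalled, happened in records
              if happened == occurred]
    return sum(shifts) / len(shifts)

# Hindsight bias shows up as recalled odds drifting toward what happened:
print(mean_recall_shift(judgments, True))    # +0.175 for events that occurred
print(mean_recall_shift(judgments, False))   # -0.125 for events that did not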

Chapter 7: From Theory to Practice: Nudging Better Decisions

In 1986, the economist Richard Thaler received a phone call from a Wall Street firm. They wanted him to speak to their traders about a new field called behavioral economics—the application of psychological insights to economic problems. Thaler had been building this field on the foundation laid by Kahneman and Tversky, whose work he had discovered a decade earlier while compiling a list of "stupid things people do" that standard economic theory couldn't explain.

"When I first read their papers, I felt like I was running from one to another," Thaler recalled. "As if I had discovered a secret pot of gold." Their research explained many of the anomalies he had observed: why people were reluctant to sell homes at a loss, why NFL teams overvalued draft picks, why investors held onto losing stocks too long. These weren't random errors that would cancel out in markets, as traditional economists claimed. They were systematic biases that could affect entire economies.

The Wall Street presentation marked a turning point. Financial professionals, who had built careers on the assumption that markets were efficient and investors were rational, were now paying to hear about human irrationality. Soon, investment firms were hiring behavioral economists to help them understand market anomalies and investor psychology. Firms like Fuller & Thaler Asset Management explicitly based their investment strategies on exploiting the systematic biases Kahneman and Tversky had identified.

The influence spread beyond finance. In the 1990s, the U.S. government began exploring how to increase retirement savings. Traditional economic incentives like tax breaks had limited impact. Drawing on behavioral insights, economists proposed automatic enrollment in 401(k) plans—making saving the default option rather than requiring employees to actively sign up. This simple change, which recognized people's tendency to stick with defaults, increased participation rates by over 30 percentage points.

By the early 2000s, these ideas were transforming public policy. In the UK, a "Behavioural Insights Team"—nicknamed the "Nudge Unit" after Thaler and Cass Sunstein's influential book—was established to apply behavioral science to government programs. Simple changes to tax letters, emphasizing that most people pay their taxes on time, increased compliance rates. Redesigned forms made it easier for low-income students to apply for college financial aid. Text message reminders reduced missed court appearances and hospital appointments.

In 2002, Kahneman was awarded the Nobel Prize in Economics—remarkable for a psychologist who had never taken an economics course. (Tversky, who had died in 1996, would certainly have shared the prize had he lived.) In his Nobel lecture, Kahneman acknowledged the distance their ideas had traveled: "The work that Amos Tversky and I did was a conversation between us, but we thought we were just doing psychology. The idea that it would have an impact on economics, or that I would be standing here—that would have seemed to us like a joke." The journey from abstract psychological theory to practical applications that improve millions of lives stands as a testament to the power of their collaboration. As Thaler put it, "They changed the way we think about thinking."

Summary

At its core, The Undoing Project reveals a profound truth: our brains are not designed for perfect rationality but for quick, adequate responses that sometimes lead us astray in predictable ways. By understanding the systematic biases in our thinking—from loss aversion to framing effects—we can design better systems and make better choices. Start by recognizing situations where your intuitions might mislead you, particularly when facing complex statistical problems or emotionally charged decisions. Create decision environments that work with human psychology rather than against it, using defaults, framing, and other "nudges" to guide behavior in beneficial directions. Finally, approach your own certainty with healthy skepticism; the more confident you feel about a judgment, the more carefully you should examine your thinking process. The greatest protection against poor decisions isn't perfect rationality, but awareness of when and how our minds are likely to err.

Best Quote

“Man is a deterministic device thrown into a probabilistic universe. In this match, surprises are expected.” ― Michael Lewis, The Undoing Project: A Friendship That Changed Our Minds

Review Summary

Strengths: The review highlights Michael Lewis's distinctive clarity and well-developed sense of irony, especially as he tackles the complex collaboration between Daniel Kahneman and Amos Tversky. The reviewer appreciates Lewis's ability to convey the chaos and unlikelihood of their success, ultimately making the narrative resonate emotionally with the reader.

Weaknesses: Not explicitly mentioned.

Overall Sentiment: Enthusiastic

Key Takeaway: The review emphasizes Lewis's successful portrayal of the groundbreaking partnership between Kahneman and Tversky, illustrating how their collaboration led to significant achievements. The book effectively captures the essence of human behavior and decision-making, leaving a profound emotional impact on the reader.

About Author


Michael Lewis

Michael Monroe Lewis is an American author and financial journalist. He has also been a contributing editor to Vanity Fair since 2009, writing mostly on business, finance, and economics. He is known for his nonfiction work, particularly his coverage of financial crises and behavioral finance.

Lewis was born in New Orleans and attended Princeton University, from which he graduated with a degree in art history. After attending the London School of Economics, he began a career on Wall Street during the 1980s as a bond salesman at Salomon Brothers. The experience prompted him to write his first book, Liar's Poker (1989). Fourteen years later, Lewis wrote Moneyball: The Art of Winning an Unfair Game (2003), in which he investigated the success of Billy Beane and the Oakland Athletics. His 2006 book The Blind Side: Evolution of a Game was his first to be adapted into a film, The Blind Side (2009). In 2010, he released The Big Short: Inside the Doomsday Machine. The film adaptation of Moneyball was released in 2011, followed by The Big Short in 2015.

Lewis's books have won two Los Angeles Times Book Prizes and several have reached number one on the New York Times Bestsellers Lists, including his most recent book, Going Infinite (2023).


