
The Great Mental Models
General Thinking Concepts
Categories
Business, Nonfiction, Self Help, Psychology, Philosophy, Science, Memoir, Leadership, Spirituality, Productivity, Audiobook, Music, Personal Development
Content Type
Book
Binding
Audiobook
Year
2019
Publisher
Latticework Publishing Inc.
Language
English
ASIN
B0DM2F39T4
The Great Mental Models Book Summary
Introduction
In a world of overwhelming complexity and constant change, how can we make better decisions? The quality of our thinking directly impacts the quality of our lives, yet most of us have never been taught how to think effectively. We rely on intuition, follow conventional wisdom, or simply repeat what worked in the past—often with suboptimal results.

The great mental models approach offers a powerful alternative. By learning fundamental concepts from various disciplines—physics, biology, psychology, economics, and more—we can build a latticework of mental models that helps us see reality more clearly. These models serve as thinking tools that reveal blind spots, challenge assumptions, and provide frameworks for understanding complex situations. Rather than being trapped in one way of seeing the world, mental models allow us to examine problems from multiple perspectives, leading to more nuanced understanding and better decisions. The approach isn't about accumulating knowledge for its own sake, but about developing practical wisdom that can be applied across all areas of life.
Chapter 1: The Map is Not the Territory: Understanding Reality vs Representation
The map is not the territory is a fundamental mental model that reminds us that our representations of reality are not reality itself. Even the best maps are imperfect because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. This distinction between our models of reality and reality itself is crucial for clear thinking.

Maps and models serve as abstractions that help us navigate complexity. Think of financial statements that distill thousands of transactions into digestible reports, or manuals that simplify complex procedures. These abstractions are necessary because we cannot process all the details of reality simultaneously. However, problems arise when we forget that our abstractions are not reality but merely representations with inherent limitations. When we mistake the map for the territory, we start to think we have all the answers and create static rules that fail to account for a changing world.

The usefulness of any map depends on its purpose. The London Underground map is invaluable for travelers navigating the subway system but useless for train drivers who need detailed track information. Similarly, our mental models are useful only to the extent that they serve our specific needs and accurately represent the aspects of reality relevant to our goals. When we forget this, we risk making decisions based on outdated or inappropriate models that no longer match current conditions.

Three key considerations can help us use maps and models more effectively. First, reality is the ultimate update—when our models conflict with our experiences, we should update our models, not ignore reality. Second, we must consider the cartographer—maps reflect the values, standards, and limitations of their creators. Third, maps can influence territories—sometimes our models shape reality rather than just describing it, as when city planners design cities based on theoretical models without understanding how cities actually work.

The limitations of maps don't mean we should abandon them. We have to use some model of the world in order to simplify it and thereby interact with it; we cannot explore every bit of territory for ourselves. The key is to use maps as guides while remaining aware of their limitations and being willing to update them when they no longer serve us. By understanding the relationship between the map and the territory, we can make better decisions and avoid the pitfalls of confusing our models with reality itself.
Chapter 2: Circle of Competence: Knowing Your Boundaries
The circle of competence concept helps us understand where we have genuine knowledge and where we merely have opinions. When ego and not competence drives what we undertake, we develop blind spots. If you know what you understand, you know where you have an edge over others. When you're honest about where your knowledge is lacking, you know where you're vulnerable and where you can improve. Understanding your circle of competence improves decision-making and outcomes.

Imagine an elderly person who has spent their entire life in a small town—the "Lifer." They know every detail of the town's history, the lineage and behavior of every resident, and all the social dynamics at play. Now imagine a visitor—the "Stranger"—who arrives and within days believes they understand the town completely. The difference between the Lifer's detailed web of knowledge and the Stranger's surface understanding illustrates the difference between being inside a circle of competence and outside it. The Lifer can make better decisions because they have a deeper understanding of reality, giving them more flexibility and efficiency in responding to challenges.

A circle of competence cannot be built quickly. It requires years of experience, making mistakes, and actively seeking better methods of practice and thought. Within our circles of competence, we know exactly what we don't know. We can make decisions quickly and relatively accurately, anticipate objections, and draw on different information resources when confronting problems. Outside our circles, we lack this fluency and are more likely to make errors in judgment.

Building and maintaining a circle of competence requires three key practices: curiosity and a desire to learn, monitoring your performance, and seeking external feedback. Learning comes when experience meets reflection—either from your own experiences or from others through books, articles, and conversations. Monitoring involves keeping honest records of your decisions and outcomes to identify patterns and areas for improvement. External feedback from trusted sources helps overcome blind spots that self-assessment might miss.

Operating outside your circle of competence is inevitable—life doesn't allow us to stay only within our areas of expertise. When you find yourself outside your circle, learn the basics of the realm you're operating in while acknowledging your limitations, talk to someone whose circle of competence in the area is strong, and use a broad understanding of mental models to augment your limited understanding. By recognizing the boundaries of your knowledge and developing strategies for operating beyond them, you can make better decisions even in unfamiliar territory.
Chapter 3: First Principles Thinking: Breaking Down Complex Problems
First principles thinking is one of the best ways to reverse-engineer complicated situations and unleash creative possibility. Sometimes called reasoning from first principles, it's a tool to help clarify complicated problems by separating the underlying ideas or facts from any assumptions based on them. What remain are the essentials. If you know the first principles of something, you can build the rest of your knowledge around them to produce something new.

The idea of building knowledge from first principles has a long tradition in philosophy, dating back to Plato, Socrates, and Aristotle. However, first principles thinking isn't necessarily about finding absolute truths. Rather, it identifies the elements that are, in the context of any given situation, non-reducible. First principles are the boundaries we have to work within in any given situation—so when it comes to thermodynamics, an appliance maker might have different first principles than a physicist.

There are two powerful techniques for establishing first principles: Socratic questioning and the Five Whys. Socratic questioning is a disciplined process used to establish truths, reveal underlying assumptions, and separate knowledge from ignorance. It involves clarifying your thinking, challenging assumptions, looking for evidence, considering alternative perspectives, examining consequences, and questioning the original questions. The Five Whys technique involves repeatedly asking "why" to delve deeper into a statement or concept until you reach a fundamental truth. If your "whys" result in a statement of falsifiable fact, you've hit a first principle. If they end with "because I said so," you've landed on an assumption.

First principles thinking helps us avoid the problem of relying on someone else's tactics without understanding the rationale behind them. The discovery that bacteria, not stress, caused most stomach ulcers illustrates what can be accomplished when we push past assumptions. For decades, scientists believed bacteria couldn't grow in the stomach due to acidity. When pathologist Robin Warren observed bacteria in stomach samples, he and gastroenterologist Barry Marshall challenged this dogma by questioning their assumptions and looking for evidence. Their work eventually earned them the Nobel Prize and revolutionized ulcer treatment.

The real power of first principles thinking is moving away from random change into choices that have a real possibility of success. Instead of looking for ways to improve existing constructs, like mitigating the environmental impacts of the livestock industry, scientists are now developing lab-grown meat by understanding the first principles of what makes meat desirable: taste, texture, smell, and cooking properties. By starting with these fundamentals rather than the assumption that meat must come from animals, they're creating innovative solutions to significant environmental and ethical challenges.
Chapter 4: Inversion: Solving Problems Backward
Inversion is a powerful tool to improve your thinking because it helps you identify and remove obstacles to success. The root of inversion is "invert," which means to upend or turn upside down. As a thinking tool, it means approaching a situation from the opposite end of the natural starting point. Most of us tend to think one way about a problem: forward. Inversion allows us to flip the problem around and think backward. Sometimes it's good to start at the beginning, but it can be more useful to start at the end.

There are two approaches to applying inversion in your life. First, you can start by assuming that what you're trying to prove is either true or false, then show what else would have to be true. The 19th-century German mathematician Carl Jacobi became famous for his advice to "invert, always invert." When faced with proving an axiom in a difficult math problem, he might assume a property of the axiom was correct and then try to determine the consequences of this assumption. This approach has been used throughout history, from Greek mathematician Hippasus's discovery of irrational numbers to Sherlock Holmes's detective methods.

The second approach involves asking what you're trying to avoid rather than what you're trying to achieve. Instead of thinking through the achievement of a positive outcome, ask how you might achieve a terrible outcome, and let that guide your decision-making. Index funds, promoted by Vanguard's John Bogle, are a great example of inversion applied to the stock market. Instead of asking how to beat the market, as so many had before him, Bogle recognized the difficulty of the task and inverted the approach. The question became: how can we help investors minimize losses to fees and poor money manager selection? This led to one of the greatest ideas in finance.

Psychologist Kurt Lewin's force field analysis provides a theoretical foundation for this type of thinking. His process involves identifying the problem, defining your objective, identifying forces that support change toward your objective, identifying forces that impede change, and then strategizing a solution. Even if we are quite logical, most of us stop after identifying supporting forces. Lewin theorized that it can be just as powerful to remove obstacles to change. The inversion happens when you consider not only what you could do to solve a problem, but what you could do to make it worse—and then avoid doing that.

Florence Nightingale used this inversion approach to significantly reduce the mortality rate of British soldiers in military hospitals in the mid-19th century. By collecting detailed statistics, she demonstrated that the leading cause of death was poor sanitation. Rather than just focusing on treating diseases, she worked to prevent them by improving sanitary conditions. Her advocacy for statistics as a means of prevention represents another instance of inversion thinking—not just "how do we fix this problem," but "how do we stop it from happening in the first place."
Chapter 5: Probabilistic Thinking: Navigating Uncertainty
Probabilistic thinking is essentially trying to estimate, using tools of math and logic, the likelihood of any specific outcome coming to pass. It is one of the best tools we have to improve the accuracy of our decisions. In a world where each moment is determined by an infinitely complex set of factors, probabilistic thinking helps us identify the most likely outcomes. When we know these, our decisions can be more precise and effective.

We need the concept of probabilities because the future is inherently unpredictable. Things either are or are not, but we don't know which until they happen. Our lack of perfect information about the world gives rise to probability theory and its usefulness. The best we can do is estimate the future by generating realistic, useful probabilities. There are three important aspects of probability that can help us get into the ballpark: Bayesian thinking, fat-tailed curves, and asymmetries.

Bayesian thinking, named after the 18th-century minister Thomas Bayes, concerns how we should adjust probabilities when we encounter new data. The core idea is that we should take into account what we already know when we learn something new. For example, if you read a headline "Violent Stabbings on the Rise," without Bayesian thinking you might become genuinely afraid. But a Bayesian approach would have you put this information into the context of what you already know about violent crime rates. If your chance of being a victim was previously one in 10,000 (0.01%) and has now doubled to two in 10,000 (0.02%), is that worth being terribly worried about? (The sketch at the end of this chapter works through these numbers.)

Fat-tailed curves represent another crucial aspect of probabilistic thinking. Unlike the familiar bell curve, where extremes are predictable and limited, fat-tailed curves have no real cap on extreme events. In domains with fat tails, like wealth or stock market returns, you may regularly encounter outcomes that are 10, 100, or even 10,000 times more extreme than the average. This means that in fat-tailed domains, rare events can have disproportionate impacts, and we need to position ourselves to survive or even benefit from the wildly unpredictable future.

Finally, we need to consider asymmetries—the probability that our probability estimates themselves are any good. We tend to be overconfident in our estimates, particularly in complex domains. Our estimation errors often skew in a single direction, usually toward over-optimism. For example, investors frequently aim for 25% annual returns but rarely achieve them consistently. Similarly, we regularly underestimate how much traffic will delay our travel but almost never overestimate it. Recognizing these asymmetries helps us make more realistic assessments and better decisions under uncertainty.
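To put numbers on the ideas above, here is a minimal sketch in Python. The base-rate figures are the chapter's own; the simulation parameters (a normal distribution with mean 100 and standard deviation 15, and a Pareto distribution with shape 1.5) are illustrative assumptions, not figures from the book.

```python
import random

# Base rates behind the "Violent Stabbings on the Rise" headline:
# the chapter's numbers, where risk doubles from 1 in 10,000 to 2 in 10,000.
prior_risk = 1 / 10_000   # 0.01%
new_risk = 2 / 10_000     # 0.02% after the reported doubling

print(f"New risk is {new_risk / prior_risk:.0%} of the old risk")           # 200%
print(f"Absolute increase: {new_risk - prior_risk:.2%} percentage points")  # 0.01%

# Thin tails vs. fat tails: compare the most extreme draw, relative to the
# mean, from a normal sample against a fat-tailed (Pareto) sample.
# Distribution parameters here are illustrative assumptions, not book figures.
random.seed(42)
N = 100_000

normal_draws = [random.gauss(100, 15) for _ in range(N)]
pareto_draws = [random.paretovariate(1.5) for _ in range(N)]

for name, draws in (("normal", normal_draws), ("fat-tailed", pareto_draws)):
    mean = sum(draws) / len(draws)
    print(f"{name}: largest draw is {max(draws) / mean:,.1f}x the mean")
```

On a run like this, the normal sample's largest draw sits within roughly a factor of two of its mean, while the Pareto sample's largest draw can be orders of magnitude larger, which is why averages are a poor guide in fat-tailed domains.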
Chapter 6: Occam's Razor: Embracing Simplicity in Complexity
Simpler explanations are more likely to be true than complicated ones. This is the essence of Occam's Razor, a classic principle of logic and problem-solving. Instead of wasting your time trying to disprove complex scenarios, you can make decisions more confidently by basing them on the explanation that has the fewest moving parts. Named after medieval logician William of Ockham, this principle states that "a plurality is not to be posited without necessity"—essentially that we should prefer the simplest explanation with the fewest moving parts.

We often jump to overly complex explanations about everyday occurrences. If your husband is late getting home, you might worry he's been in an accident. If your toe hurts, you might fear bone cancer. Although these worst-case scenarios are possible, without other correlating factors it's significantly more likely that your husband got caught up at work and your shoe is too tight. Occam's Razor helps us avoid unnecessary complexity by identifying and committing to the simplest explanation possible.

Why are more complicated explanations less likely to be true? We can understand this mathematically. If two competing explanations equally explain a phenomenon, but one requires the interaction of three variables and the other requires thirty variables, which is more likely to be wrong? If each variable has a 99% chance of being correct, the first explanation is only 3% likely to be wrong. The second, more complex explanation is about nine times as likely to be wrong, or 26%. The simpler explanation is more robust in the face of uncertainty. (The sketch at the end of this chapter checks this arithmetic.)

Astronomer Vera Rubin's work on dark matter illustrates Occam's Razor in action. In the 1970s, she observed that the Andromeda Galaxy was rotating in a way that violated Newton's Laws of Motion—the edges moved just as fast as the center, contrary to what gravity should allow. The simplest explanation for this phenomenon was the existence of dark matter, an invisible substance affecting galactic rotation. While dark matter has never been directly observed, it remains the simplest explanation with the most explanatory power for the observed behavior of galaxies.

Simplicity can also increase efficiency. With limited time and resources, we can't investigate every possible explanation for complex events. Without Occam's Razor, we waste time chasing dead ends. For example, when the Ivanhoe Reservoir in Los Angeles needed protection from carcinogenic bromate formation, initial solutions included building a ten-acre tarp or a huge retractable dome. The simple solution that prevailed was deploying millions of "bird balls"—small, floating black balls that blocked sunlight without requiring construction, maintenance, or complex systems.
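The arithmetic behind this comparison is easy to verify. A minimal sketch, assuming (as the chapter implies) that the variables are independent and each is correct with probability 0.99:

```python
def chance_wrong(n_parts: int, p_correct: float = 0.99) -> float:
    """Chance an explanation fails when it relies on n independent parts,
    each of which is correct with probability p_correct."""
    return 1 - p_correct ** n_parts

simple = chance_wrong(3)     # 1 - 0.99**3  ~= 0.0297  -> about 3%
complex_ = chance_wrong(30)  # 1 - 0.99**30 ~= 0.2603  -> about 26%

print(f"3-variable explanation wrong:  {simple:.1%}")    # 3.0%
print(f"30-variable explanation wrong: {complex_:.1%}")  # 26.0%
print(f"Ratio: {complex_ / simple:.1f}x")                # about nine times
```

Small per-part error rates compound quickly, which is what makes many-part explanations fragile.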
Chapter 7: Second-Order Thinking: Considering Consequences of Consequences
Almost everyone can anticipate the immediate results of their actions. This type of first-order thinking is easy and safe, but it's also a way to ensure you get the same results that everyone else gets. Second-order thinking is thinking farther ahead and thinking holistically. It requires us to consider not only our actions and their immediate consequences, but the subsequent effects of those actions as well. Failing to consider the second- and third-order effects can unleash disaster.

It is often easier to find examples of when second-order thinking didn't happen—when people did not consider the effects of the effects. When they tried to do something good, or even just benign, and instead brought calamity, we can safely assume the negative outcomes weren't factored into the original thinking. This concept is often referred to as the "Law of Unintended Consequences." A classic example is the British colonial government's attempt to reduce venomous cobras in Delhi by offering rewards for dead snakes. In response, Indian citizens began breeding cobras to kill and collect rewards, ultimately making the snake problem worse.

In another example of second-order thinking deficiency, we have been feeding antibiotics to livestock for decades to make meat safer and cheaper. Only in recent years have we begun to realize that in doing so we have helped create bacteria that we cannot defend against. The first-order consequence of antibiotics in animal feed is that animals gain more weight per pound of food consumed, increasing profits for farmers. The second-order effects, however, include the development of antibiotic-resistant bacteria that enter our food chain, creating serious public health problems.

Second-order thinking can be used to great benefit in two key areas: prioritizing long-term interests over immediate gains and constructing effective arguments. Regarding the first, consider Cleopatra's decision in 48 BC to align herself with Caesar despite the short-term risks. This choice initially angered her brother and the Egyptian people, effectively starting a civil war. However, Cleopatra recognized that if she could survive the short-term pain, her leadership had much greater chances of success with Roman support. By considering the effects of the effects, she made a decision that led to many years of successful rule.

When constructing arguments, second-order thinking helps us anticipate challenges and address them in advance. In late 18th-century England, philosopher Mary Wollstonecraft didn't simply argue that women should have rights. She demonstrated the value these rights would confer by explaining the benefits to society that would result. She argued that educating women would make them better wives and mothers, more able to support themselves and raise intelligent, conscientious children. By discussing the logical consequences of women's rights, the second-order effects, she started a conversation that eventually led to significant social change.
Summary
The Great Mental Models approach offers a transformative framework for navigating our complex world with greater clarity and wisdom. By integrating these fundamental thinking tools—from understanding the difference between maps and reality to considering the consequences of consequences—we develop a more accurate perception of how the world actually works. These models don't just improve our decision-making; they fundamentally change how we process information and interact with the world around us.

The true power of mental models lies in their practical application across all domains of life. When we build a latticework of these models and learn to apply them in combination, we develop a kind of intellectual superpower—the ability to see multiple layers of reality simultaneously. This multidimensional thinking allows us to avoid common errors, identify opportunities others miss, and make decisions with greater confidence and effectiveness. By mastering these timeless principles, we equip ourselves not just to understand the world as it is, but to navigate it with the kind of wisdom that leads to better outcomes and a more meaningful life.
Best Quote
“You can’t improve if you don’t know what you’re doing wrong.” ― Shane Parrish, The Great Mental Models: General Thinking Concepts
Review Summary
Strengths: The review praises the book for making mental models approachable, providing rich examples of general thinking concepts, and offering a distinctive cross-disciplinary perspective on problem-solving.
Weaknesses: The review names no specific weaknesses or areas for improvement.
Overall: High praise; the reviewer finds the book valuable for expanding one's thinking and problem-solving skills and strongly recommends it, suggesting readers owe it to themselves to explore its content.
The Great Mental Models
By Shane Parrish