
Algorithms to Live By
The Computer Science of Human Decisions
Categories
Business, Nonfiction, Self Help, Psychology, Philosophy, Science, Technology, Audiobook, Programming, Computer Science
Content Type
Book
Binding
Hardcover
Year
2016
Publisher
Henry Holt and Co.
Language
English
ISBN13
9781627790369
Algorithms to Live By Plot Summary
Introduction
Every day, we face countless decisions that shape our lives—from choosing a restaurant to selecting a life partner, from organizing our workspace to planning our careers. These choices often feel overwhelming, leaving us paralyzed by uncertainty or rushing into decisions we later regret. What if there were systematic approaches to these seemingly chaotic challenges? What if the same mathematical principles that power our digital world could help us navigate life's complexities? The fascinating intersection of computer science and human decision-making offers exactly this promise. By understanding the algorithms that solve complex computational problems, we gain powerful mental models for addressing our own challenges. These aren't cold, mechanical formulas but rather elegant frameworks that acknowledge our human limitations while maximizing our chances of success. Whether you're struggling with timing, organization, prioritization, or uncertainty, these principles can transform how you approach life's most difficult decisions.
Chapter 1: Apply the 37% Rule for Optimal Stopping
The 37% Rule provides a powerful solution to one of life's most common dilemmas: when should you stop searching and make a decision? Whether you're apartment hunting, dating, or interviewing job candidates, this mathematically proven approach optimizes your chances of selecting the best option when you must evaluate choices sequentially without the ability to go back. Consider the story of Johannes Kepler, the renowned 17th-century astronomer. After his first wife died, Kepler approached his search for a second wife with methodical precision. Historical records show he courted eleven women in sequence, carefully evaluating each candidate. He rejected the first few outright, using them to establish his standards. When he met Susanna Reuttinger, who exceeded his baseline expectations, he proposed—following what we now recognize as the optimal stopping strategy. Their marriage proved successful and lasted until Kepler's death. The mathematical principle behind Kepler's approach states that when facing sequential options, you should examine approximately 37% of the available choices without making a commitment. Use this time to establish your standards and understand what matters to you. Then, select the first subsequent option that exceeds all previous ones. To implement this strategy effectively, first determine your time horizon or the number of options available. For a one-year apartment search, spend about 37% of that time (roughly four and a half months) exploring without commitment. For job candidates, review about 37% of your applicant pool before making any offers. During this "looking phase," take careful notes about what you value and what disappoints you. Remember that this approach doesn't guarantee perfection—even with optimal stopping, you'll still miss the best option about 63% of the time. The goal isn't perfection but optimization given constraints.
Also, consider adapting the rule when recall is possible (you can return to previously seen options, which argues for looking longer before leaping) or when your proposals might be rejected (which argues for leaping sooner); the optimal looking fraction shifts in each case. The 37% Rule reminds us that decision-making isn't about finding perfect solutions but about making the best choices possible with limited information and time. By embracing this calculated approach to uncertainty, we can move forward with confidence rather than endless deliberation.
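To make the rule concrete, here is a small simulation (a sketch for illustration, not taken from the book) of the classic secretary problem: look at the first n/e candidates without committing, then take the first one who beats everyone seen so far. Over many random trials, this strategy picks the single best candidate about 37% of the time.

```python
import random

def secretary_trial(n, rng):
    """One run of the secretary problem with candidates ranked 1 (best) to n."""
    order = list(range(1, n + 1))
    rng.shuffle(order)
    cutoff = round(n / 2.718281828459045)  # look at ~37% (n/e) without committing
    best_seen = min(order[:cutoff], default=n + 1)
    for rank in order[cutoff:]:
        if rank < best_seen:        # first candidate better than the looking phase
            return rank == 1        # success only if it's the overall best
    return order[-1] == 1           # otherwise we're forced to take the last one

rng = random.Random(42)
n, trials = 100, 20_000
wins = sum(secretary_trial(n, rng) for _ in range(trials))
print(f"picked the very best option in {wins / trials:.1%} of trials")  # ~37%
```

Note that the simulation also shows the flip side the text mentions: in roughly 63% of trials the strategy does not land the very best candidate.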
Chapter 2: Balance Exploration and Exploitation
The tension between trying new experiences and sticking with what we know works represents one of life's fundamental tradeoffs. Should you order your favorite dish at a restaurant or try something new? Return to a familiar vacation spot or venture somewhere unexplored? This balance between exploration (seeking new information) and exploitation (leveraging what you already know) shapes countless decisions throughout our lives. Computer scientists formalize this challenge as the "multi-armed bandit problem," named after casino slot machines. Imagine facing several slot machines, each with unknown payout rates. How do you maximize your winnings? If you spend too much time exploring all machines, you waste opportunities to win from the best ones. If you commit too early to one machine, you might miss out on better options elsewhere. This same dilemma appears when music critics must decide between discovering new artists and enjoying established favorites, or when companies allocate resources between researching new products and optimizing existing ones. Chris Stucchio, a data scientist who worked with various tech companies, faced this exact challenge when optimizing websites. His team needed to determine which version of a webpage would generate the most user engagement. Traditional A/B testing would split traffic evenly between versions for a set period, then implement the winner. But Stucchio realized this wasted valuable opportunities—continuing to show underperforming versions to users even after patterns emerged. Instead, he implemented a dynamic allocation strategy that gradually shifted more traffic to better-performing versions while still maintaining some exploration. To apply this wisdom in your own life, adjust your exploration-exploitation balance based on your time horizon. When entering a new domain—whether restaurants in a new city, potential career paths, or learning opportunities—start with broad exploration. 
As you gather information and your remaining time decreases, gradually shift toward exploiting what works best, while still occasionally exploring new possibilities. As Stucchio noted, "I'm more likely to try a new restaurant when I move to a city than when I'm leaving it." Remember that your stage of life naturally influences this balance. Children appropriately spend more time exploring because they have longer time horizons. As Berkeley psychologist Alison Gopnik explains, childhood provides "a period in which you can just explore possibilities, and you don't have to worry about payoffs." Similarly, as we age, it makes mathematical sense to shift toward exploitation—focusing more on relationships and activities we know bring us joy. The wisdom of this approach lies in its dynamic nature—continuously adjusting based on accumulated knowledge and remaining time rather than rigidly following predetermined plans. By embracing strategic exploration while increasingly exploiting what works, you can navigate life's uncertainties with both curiosity and confidence.
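The dynamic-allocation idea can be sketched with an epsilon-greedy bandit, one of the simplest strategies for the multi-armed bandit problem (and not necessarily the one Stucchio used): mostly pull the machine with the best observed payout so far, but reserve a fixed fraction of pulls for exploration.

```python
import random

def epsilon_greedy(payout_probs, pulls=10_000, epsilon=0.1, seed=1):
    """Pull a random arm with probability epsilon; otherwise pull the arm
    with the best observed payout rate so far."""
    rng = random.Random(seed)
    n = len(payout_probs)
    counts, wins = [0] * n, [0] * n
    total = 0
    for _ in range(pulls):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(n)                                  # explore
        else:
            arm = max(range(n), key=lambda a: wins[a] / counts[a])  # exploit
        reward = rng.random() < payout_probs[arm]
        counts[arm] += 1
        wins[arm] += reward
        total += reward
    return total, counts

total, counts = epsilon_greedy([0.2, 0.5, 0.7])
print(total, counts)   # the 0.7 arm should accumulate the bulk of the pulls
```

Lowering epsilon over time mirrors the chapter's advice: explore broadly early on, then shift toward exploitation as your remaining time shrinks.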
Chapter 3: Organize with Strategic Sorting
Sorting—arranging items in a specific order—represents one of our most fundamental organizational challenges. Whether managing emails, prioritizing tasks, or organizing physical spaces, we constantly face decisions about what goes where. The science of sorting reveals that different organizational strategies excel in different contexts, and understanding these distinctions can transform how we manage the chaos in our lives. The importance of sorting is illustrated by its role in the birth of modern computing. In the late 19th century, the American population was growing rapidly, and the 1880 census took eight years to tabulate—barely finishing before the 1890 census began. Herman Hollerith's invention of a machine to sort punched cards revolutionized the process, completing the 1890 census in just one year. His company eventually became IBM, demonstrating how the humble task of sorting can have world-changing implications. While we often assume more sorting is better, computer science reveals a surprising insight: sorting involves steep diseconomies of scale. Sorting a shelf of a hundred books will take you longer than sorting two bookshelves of fifty apiece. This explains why organizing large collections feels so overwhelming—the effort grows disproportionately with size. Different sorting algorithms offer different approaches to this challenge, from the intuitive but inefficient Bubble Sort to the more sophisticated Mergesort developed by John von Neumann. To apply sorting principles effectively in your life, first determine the purpose of your organizational efforts. Are you organizing to find items quickly, to process them in a specific sequence, or simply for aesthetic satisfaction? Different goals demand different strategies. For frequently accessed items, consider using a "self-organizing list" approach—place recently used items at the front of storage areas, allowing your usage patterns to naturally create an efficient arrangement. 
Remember that perfect organization is rarely worth the effort it requires. As IBM researcher Steve Whittaker discovered in his study of email organization, many people waste significant time meticulously filing messages into folders when search functions make this unnecessary. His paper was aptly titled "Am I Wasting My Time Organizing Email?" The most effective systems acknowledge our cognitive limitations and optimize for actual use rather than theoretical perfection. By focusing on retrieval rather than arrangement—organizing based on how you'll actually use items rather than abstract categorization schemes—you can create organizational systems that truly serve your needs rather than becoming burdens themselves. Sometimes, the wisest approach is to leave a strategic mess.
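The self-organizing "recently used goes in front" idea from the chapter can be sketched as a move-to-front list (item names here are hypothetical):

```python
class MoveToFrontShelf:
    """Self-organizing list: whatever you use goes back at the front,
    so frequently used items drift toward the front on their own."""
    def __init__(self, items):
        self.items = list(items)

    def access(self, item):
        position = self.items.index(item) + 1  # how far we had to search
        self.items.remove(item)
        self.items.insert(0, item)
        return position

shelf = MoveToFrontShelf(["stapler", "tape", "scissors", "notebook"])
first = shelf.access("scissors")    # found in position 3
second = shelf.access("scissors")   # now found immediately, in position 1
print(first, second, shelf.items)
```

No deliberate sorting happens at all; the arrangement emerges from usage, which is exactly why the chapter argues that a strategic mess can outperform meticulous organization.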
Chapter 4: Optimize Your Mental Cache
Our minds, like computers, have limited capacity for immediate processing and storage. This constraint forces us to make constant decisions about what information to keep readily accessible and what to archive or discard. Understanding how to manage this "mental cache" effectively can dramatically improve your productivity, learning, and decision-making. The concept of caching emerged in the 1960s when computer scientists realized that having a small, fast memory and a large, slow one could be more efficient than a single memory system. The key was managing that small, precious memory so it contained what you needed most often. This principle now appears everywhere in computing, from processor chips to the global internet. In 1966, IBM researcher László "Les" Bélády published groundbreaking work showing that the optimal strategy is to evict whatever item won't be needed again for the longest time. Since perfect foresight is impossible, the next best approach is the Least Recently Used (LRU) principle: remove whatever hasn't been accessed for the longest time. Japanese economist Yukio Noguchi discovered this principle independently when developing his "super" filing system. Rather than grouping similar documents together, he simply inserted each document at the left side of his filing box after using it. This practice began because it was easier than finding the original spot, but he gradually realized it was remarkably efficient. Computer scientists later proved this approach optimal—the Noguchi Filing System is LRU in action. To optimize your own mental cache, start by recognizing your limited capacity. Instead of trying to remember everything, use external systems strategically. Create a "cache" of information you access frequently—perhaps a document with key contacts, important dates, or common reference materials. For physical items, apply the LRU principle by organizing your workspace so recently used tools stay within reach. 
Another powerful caching strategy is "pre-fetching"—anticipating what you'll need before you actually need it. Just as web browsers pre-load pages they predict you'll visit next, you can prepare materials for upcoming projects, review notes before meetings, or stage ingredients before cooking. This reduces the cognitive load of context-switching and makes you more efficient. This understanding of caching also changes how we think about human memory. Carnegie Mellon psychologist John Anderson realized that forgetting isn't a bug but a feature—our brains are optimally tuned to keep the most likely-to-be-needed information readily accessible. The pattern by which things fade from our minds mirrors the pattern by which things fade from use around us, making our memory a perfect cache for navigating the world.
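The Least Recently Used principle described above can be sketched in a few lines (a minimal illustration, not a production cache):

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least recently used entry once capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # touching "a" makes "b" the least recently used
cache.put("c", 3)     # capacity exceeded: "b" is evicted
print(cache.get("b")) # None
```

The Noguchi filing box works the same way: every access moves a document to the front, so whatever sits at the back is, by construction, the least recently used.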
Chapter 5: Schedule Tasks with Mathematical Precision
Deciding what to do when represents one of our most persistent challenges. Whether managing a personal to-do list or coordinating complex projects, we constantly face questions about sequence, timing, and priority. The science of scheduling offers powerful frameworks for making these decisions more effectively. The importance of scheduling is illustrated by a dramatic incident during the 1997 Mars Pathfinder mission. Shortly after landing on Mars, the spacecraft began experiencing mysterious system resets that threatened the entire mission. Engineers at NASA's Jet Propulsion Laboratory traced the problem to a scheduling issue called "priority inversion"—a lower-priority task was blocking a critical high-priority one. The solution they beamed to Mars, "priority inheritance," teaches us that sometimes the most urgent task isn't the one directly in front of us, but rather the one that's blocking other important work. This principle applies directly to human scheduling. When facing competing priorities, we should consider not just importance but dependencies—which tasks enable other tasks? Computer scientist Laura Albert McLay explains how this insight helps her manage family life: "We can't get out the door unless the kids get breakfast first, and they can't get breakfast if I don't remember to give them a spoon." Identifying these dependencies helps prevent bottlenecks that delay everything else. For personal productivity, scheduling theory offers several proven algorithms. If your goal is to minimize the maximum lateness of any task (meeting all deadlines), use the "Earliest Due Date" approach—simply do tasks in order of their deadlines. If you want to complete as many tasks as possible quickly, use "Shortest Processing Time"—tackle the quickest tasks first. For tasks with different importance levels, use "Weighted Shortest Processing Time"—divide each task's importance by the time it will take, then do them in descending order of this ratio. 
When implementing these strategies, be mindful of context-switching costs. Every time you shift between tasks, you pay a mental penalty. Batch similar activities together and consider setting aside blocks of uninterrupted time for focused work. And remember that preemption—the ability to pause one task for another—is both powerful and dangerous; use it strategically for genuinely urgent matters. The science of scheduling reminds us that how we organize our time is just as important as how much time we have. By applying these algorithmic principles, you can reduce stress, meet more deadlines, and accomplish more of what truly matters.
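The Weighted Shortest Processing Time rule from the chapter reduces to a one-line sort. The task list below is hypothetical, with "weight" standing for importance and "hours" for estimated duration:

```python
def weighted_shortest_processing_time(tasks):
    """Order tasks by importance per hour, highest ratio first; this
    minimizes the sum of importance-weighted completion times."""
    return sorted(tasks, key=lambda t: t["weight"] / t["hours"], reverse=True)

tasks = [
    {"name": "grant proposal", "hours": 8, "weight": 4},    # ratio 0.5
    {"name": "expense report", "hours": 1, "weight": 1},    # ratio 1.0
    {"name": "reply to client", "hours": 0.5, "weight": 2}, # ratio 4.0
]
for task in weighted_shortest_processing_time(tasks):
    print(task["name"])   # reply to client, expense report, grant proposal
```

Earliest Due Date is the same idea with a different key: sort by deadline, ascending, and work straight down the list.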
Chapter 6: Avoid Overthinking with Regularization
Sometimes thinking less leads to better decisions. This counterintuitive truth, well-established in machine learning research, offers a powerful antidote to analysis paralysis and overthinking. The key insight is that excessive complexity often leads to "overfitting"—becoming too attuned to the specific data you have, at the expense of generalizing well to new situations. Charles Darwin's approach to deciding whether to marry illustrates this challenge. In 1838, Darwin created a detailed pro-con list, meticulously weighing factors from "charms of music & female chit-chat" to concerns that "perhaps my wife won't like London." While this seems admirably thorough, such exhaustive analysis can actually lead to worse decisions by giving too much weight to irrelevant details. After all this deliberation, Darwin simply concluded: "Marry—Marry—Marry Q.E.D." Machine learning researchers have discovered that when building predictive models, there's a sweet spot of complexity. Models that are too simple miss important patterns, but models that are too complex start fitting to random fluctuations in the data. For example, a nine-factor model might perfectly explain past marriage satisfaction data but make wildly inaccurate predictions about future happiness. A simpler two-factor model, while less precise about the past, often makes more reliable predictions. To avoid overthinking in your own decisions, practice "regularization"—deliberately limiting complexity. For decisions with high uncertainty or limited data, simpler approaches often work better. When planning far into the future, use broader strokes rather than detailed plans that will likely need revision anyway. As entrepreneurs Jason Fried and David Heinemeier Hansson advise, use thicker markers for longer-term planning to force yourself to think in broad concepts rather than details. Another effective approach is "early stopping"—deliberately limiting how much you think about certain problems. 
Tom Griffiths, the book's co-author and a professor, spent over ten hours preparing each hour of lecture for his first course, perfecting every detail. Surprisingly, his students preferred his second course, for which he had less time to prepare. The extra preparation hours had been spent on nitty-gritty details that only confused students rather than helping them. Remember that the goal isn't to be lazy or careless, but to match your thinking process to the nature of the problem. For well-understood domains with ample high-quality data, detailed analysis makes sense. But for novel situations with limited information, excessive deliberation can be counterproductive. Sometimes your first instinct, based on the most important factors, is actually the most rational solution.
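Overfitting can be seen directly in a toy experiment (synthetic data, invented for illustration): fit the same noisy observations with a two-parameter model and a ten-parameter model, then ask each to predict beyond the data it has seen.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Synthetic data: the underlying truth is y = 2x, observed with a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(0, 0.1, size=10)

simple = Polynomial.fit(x, y, deg=1)   # two parameters: captures the trend
wiggly = Polynomial.fit(x, y, deg=9)   # ten parameters: memorizes the noise

# Predict beyond the observed range, where the truth is y = 3.0.
x_new = 1.5
simple_error = abs(simple(x_new) - 3.0)
wiggly_error = abs(wiggly(x_new) - 3.0)
print(simple_error, wiggly_error)   # the degree-9 model's error is far larger
```

The wiggly model fits the past perfectly and the future badly, which is exactly the nine-factor marriage model the chapter warns about; regularization and early stopping are ways of forcing yourself back toward the simple fit.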
Chapter 7: Embrace Uncertainty with Bayesian Thinking
In a world where certainty is rare, the ability to make good decisions under uncertainty becomes invaluable. Bayesian thinking—named after 18th-century Presbyterian minister Thomas Bayes—offers a powerful framework for navigating uncertainty by treating beliefs as provisional rather than fixed, continuously updating them as new evidence emerges. J. Richard Gott, a Princeton astrophysicist, demonstrated this principle dramatically during a 1969 visit to the Berlin Wall. Standing before this seemingly permanent structure, Gott wondered how long it would remain. Rather than claiming certainty or avoiding prediction entirely, he applied what he called the Copernican Principle—the idea that we're unlikely to be observing something at a special moment in its history. Since the Wall had stood for eight years at that point, Gott estimated with 50% confidence that it would last between 2⅔ and 24 more years. His prediction proved remarkably accurate when the Wall fell in 1989, twenty years after his visit. This same approach applies to countless everyday situations. When you arrive at a bus stop and learn another tourist has been waiting seven minutes, the Copernican Principle suggests the bus will arrive in about another seven minutes. When your friend has been dating someone for a month, it suggests the relationship will last about another month. Without any prior knowledge, the best guess for how long something will last is exactly as long as it's lasted already. To apply Bayesian thinking in your own life, start by explicitly acknowledging your uncertainty. Rather than thinking in binary terms—"I know" versus "I don't know"—practice expressing beliefs in terms of probability. When facing a decision, ask yourself: "Based on what I currently know, how likely is each possible outcome?" This simple shift creates space for nuanced thinking. Next, actively seek disconfirming evidence—information that might challenge your current beliefs.
This counterintuitive practice helps overcome confirmation bias, where we unconsciously favor information that supports existing views. By deliberately exposing yourself to contrary perspectives, you accelerate the learning process. The power of Bayesian thinking is illustrated by Stephen Jay Gould's experience after being diagnosed with abdominal mesothelioma. When he discovered the median survival time was just eight months, he dug deeper and found the distribution was "strongly right skewed, with a long tail." This meant the longer he lived, the more evidence it provided that he would live longer. Understanding this statistical pattern gave him hope and perspective—he ultimately survived twenty more years after diagnosis. Remember that embracing uncertainty doesn't mean abandoning conviction. Rather, it means holding beliefs with appropriate confidence—strongly where evidence warrants, tentatively where information remains sparse. This balanced approach allows you to act decisively when necessary while remaining open to new information that might change your perspective.
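Gott's 50%-confidence interval follows a simple formula: if something has been around for a given age, its remaining lifetime lies between one third and three times that age. A small sketch reproduces the Berlin Wall numbers from the chapter:

```python
def copernican_interval(observed_age, confidence=0.5):
    """Gott's delta-t argument: with the given confidence, the remaining
    lifetime lies between age*(1-c)/(1+c) and age*(1+c)/(1-c)."""
    c = confidence
    return observed_age * (1 - c) / (1 + c), observed_age * (1 + c) / (1 - c)

# The Berlin Wall had stood for 8 years when Gott saw it in 1969.
low, high = copernican_interval(8)
print(low, high)   # 2.67 to 24 more years, at 50% confidence
```

Raising the confidence widens the interval, which is the honest trade-off Bayesian thinking asks us to make explicit.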
Summary
Throughout our exploration of algorithmic thinking, we've discovered powerful frameworks for navigating life's complexities. From the 37% Rule for optimal stopping to the balance of exploration and exploitation, from strategic sorting to efficient mental caching, these principles transform uncertainty from an enemy to be feared into a landscape to be navigated with wisdom and confidence. As Brian Christian notes in the original text, "The most powerful minds are those that can turn uncertainty into an ally rather than an enemy." The journey toward better decisions begins with a single step: choose one area where you've been struggling with uncertainty or overwhelm, and apply an appropriate algorithm. If you're apartment hunting, try the 37% rule. If you're wondering whether to try a new restaurant or return to a favorite, consider where you are in your time horizon. If your desk is cluttered, embrace the natural caching of the paper pile. By bringing algorithmic thinking to your daily choices, you might find that the path to better decisions isn't always more analysis—sometimes, it's knowing exactly when to stop.
Best Quote
“Seemingly innocuous language like 'Oh, I'm flexible' or 'What do you want to do tonight?' has a dark computational underbelly that should make you think twice. It has the veneer of kindness about it, but it does two deeply alarming things. First, it passes the cognitive buck: 'Here's a problem, you handle it.' Second, by not stating your preferences, it invites the others to simulate or imagine them. And as we have seen, the simulation of the minds of others is one of the biggest computational challenges a mind (or machine) can ever face.” ― Brian Christian, Algorithms to Live By: The Computer Science of Human Decisions
Review Summary
Strengths: The book is praised for making mathematics relevant to everyday life, particularly in decision-making contexts. It is noted for its engaging style, which makes complex concepts accessible and interesting. The application of mathematical principles to real-life scenarios, such as job interviews and house buying, is highlighted as a particularly compelling aspect.
Weaknesses: Not explicitly mentioned.
Overall Sentiment: Enthusiastic
Key Takeaway: The book effectively demystifies the role of mathematics in human decision-making, presenting it in a way that is both accessible and applicable to everyday situations, thereby countering the common perception that mathematics lacks practical utility.