
What We Owe the Future
A Guide to Ethical Living for the Sake of Our Future
Categories
Nonfiction, Philosophy, Science, History, Economics, Politics, Audiobook, Society, Environment, Futurism
Content Type
Book
Binding
Hardcover
Year
2022
Publisher
Basic Books
Language
English
ISBN13
9781541618626
What We Owe the Future Plot Summary
Introduction
The fate of countless future generations hangs in the balance of decisions we make today. If humanity survives for even a fraction of its possible future, trillions of people who do not yet exist will experience lives filled with either flourishing or suffering based on the trajectory we set now. This perspective—that we have profound moral responsibilities to future people—challenges conventional ethical frameworks that focus primarily on present concerns. The moral weight of future generations stems from three simple yet powerful premises: future people matter morally, there could be astronomically many of them, and our actions today can meaningfully shape their world.
Longtermism represents a revolutionary expansion of our moral circle across time. Just as we've gradually extended moral consideration beyond tribe, nation, and species, we now face the challenge of extending it to those separated from us by centuries or millennia. This temporal expansion doesn't require abandoning our obligations to contemporaries but rather recognizing that when actions affect both present and future people, the sheer scale of potential future lives often dominates the moral calculus. Through philosophical reasoning, empirical analysis, and historical perspective, we can develop a framework for understanding these obligations and identify concrete ways to safeguard humanity's vast potential.
Chapter 1: The Moral Weight of Future People: Why Time Doesn't Diminish Value
Future people matter morally, even though they don't yet exist. When we consider concrete examples, this becomes intuitive. If you drop a glass bottle on a hiking trail, the harm caused to a child who cuts herself tomorrow is morally equivalent to the harm caused to a child who cuts herself a century from now. The temporal distance doesn't diminish the moral significance of the injury. Similarly, the joy experienced by future generations matters just as much as our own.
The scale of potential future flourishing is almost incomprehensibly vast. If humanity survives for even a small fraction of Earth's remaining habitable period—roughly 500 million years—future people would outnumber us by thousands or millions to one. Their collective experiences, whether positive or negative, would dwarf everything in human history thus far. This numerical reality creates a profound moral imperative to consider their interests in our decisions today.
Time itself provides no inherent reason to discount future wellbeing. While we naturally have special obligations to those close to us in time and relationship, this doesn't justify ignoring distant future generations entirely. Geographic distance doesn't diminish moral worth—we recognize that people on the other side of the world deserve moral consideration—and temporal distance shouldn't either. A child born in the year 3000 has the same capacity for suffering and joy as a child born today.
When we consider humanity's potential timeline, we discover that we are living remarkably early in history. Human civilization is only about 10,000 years old. If we survive even a million years—a mere 0.2% of Earth's remaining habitable lifetime—we stand at the very beginning of human history. We are effectively the ancients, making decisions that could shape the entire future course of civilization. This perspective transforms how we should think about our moral responsibilities.
Our ability to affect the long-term future is more substantial than we might initially think. Historical figures like the American Founding Fathers deliberately aimed at long-term impact and succeeded. Today, our actions have even greater potential for lasting consequences. Climate change exemplifies this: carbon dioxide remains in the atmosphere for centuries, making our emissions choices relevant for countless future generations. Similarly, how we develop transformative technologies like artificial intelligence or biotechnology could shape civilization's trajectory for eons to come.
What makes our era particularly significant is its unusual rate of change and technological development. For the vast majority of human history, economic and technological progress was minimal. Only in recent centuries has the world economy begun doubling every few decades rather than every few millennia. This rapid development, combined with our unprecedented global interconnectedness, gives us extraordinary power to influence humanity's long-term trajectory. We are living through a period of remarkable plasticity, where our choices could determine the shape of civilization for thousands or millions of years.
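To make the scale claim above concrete, here is a rough back-of-the-envelope calculation in Python. The population figures (roughly 100 billion people born to date, a stable future population of about 10 billion per century) are illustrative assumptions for this sketch, not figures taken from the book.

```python
# Back-of-the-envelope scale of the future (illustrative assumptions, not book figures).

EARTH_HABITABLE_YEARS_LEFT = 500_000_000   # ~500 million years, as cited above
SURVIVAL_YEARS = 1_000_000                 # the "even a million years" scenario

fraction_of_habitable_time = SURVIVAL_YEARS / EARTH_HABITABLE_YEARS_LEFT
print(f"{fraction_of_habitable_time:.1%}")  # 0.2% -- we would still be at the very start

# Assumed figures: ~100 billion people born to date, ~10 billion alive per century hereafter.
PEOPLE_BORN_SO_FAR = 100e9
PEOPLE_PER_CENTURY = 10e9

future_people = PEOPLE_PER_CENTURY * (SURVIVAL_YEARS / 100)
ratio = future_people / PEOPLE_BORN_SO_FAR
print(f"Future people: {future_people:.0e}, roughly {ratio:.0f}x everyone born so far")
```

Even under these conservative assumptions, future people outnumber everyone who has ever lived by roughly a thousand to one, which is the order of magnitude the chapter relies on.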
Chapter 2: Shaping Centuries Ahead: How Our Actions Impact the Long-term Future
We can meaningfully influence the long-term future despite deep uncertainty about specific outcomes. Historical analysis reveals that many pivotal developments—from constitutional democracy to nuclear weapons—resulted from contingent human choices that could have gone differently. These weren't inevitable historical forces but the product of specific decisions that permanently altered civilization's trajectory.
The significance, persistence, and contingency framework offers a powerful lens for evaluating potential long-term impact. Significance measures how much better or worse a state of affairs is compared to alternatives. Persistence captures how long that state lasts once established. Contingency reflects the likelihood that the world would have been different without specific interventions. Multiplying these factors helps assess the expected value of actions aimed at shaping the future.
Two primary mechanisms exist for influencing the long-term future. First, we can affect humanity's duration by preventing extinction or civilizational collapse. Second, we can change civilization's trajectory by influencing the values, institutions, and technologies that guide future generations. Both approaches can have enormous expected value when we consider the vast scale of potential future lives at stake.
When evaluating long-term impacts, we must reason probabilistically rather than demanding certainty. Expected value theory provides a framework for making decisions under uncertainty by weighing the probability of different outcomes against their value. Climate change illustrates this approach: uncertainty about precise impacts doesn't justify inaction but rather heightens concern, since worst-case scenarios could be catastrophic. Similarly, the uncertain but potentially enormous stakes of artificial intelligence development demand serious attention despite our inability to predict specific outcomes with certainty.
History reveals that moments of plasticity—periods when institutions, technologies, or norms are still forming—offer particularly valuable opportunities for long-term influence. After World War II, the international community debated various governance structures for nuclear weapons; once established, these arrangements became difficult to change. Similarly, climate action would have faced less political resistance if begun decades earlier when the science was already clear but polarization was minimal.
The lesson from these examples is that we should pay special attention to emerging challenges before they solidify into rigid structures. By identifying today's moments of plasticity—whether in artificial intelligence governance, synthetic biology regulation, or great power relations—we can shape systems while they remain malleable rather than trying to reform them after they've hardened. This perspective transforms seemingly abstract future concerns into concrete present-day priorities.
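The significance, persistence, and contingency framework described above can be read as a rough expected-value calculation: multiply how much better the new state of affairs is, how long it lasts, and how likely it is that it would not have come about anyway. The sketch below is a minimal illustration with invented numbers; the function name and the two hypothetical interventions are assumptions for exposition, not examples from the book.

```python
# Rough expected-value sketch of the significance x persistence x contingency framing.
# All numbers are invented for illustration.

def expected_value(significance: float, persistence_years: float, contingency: float) -> float:
    """Value per year of the improved state, times how long it lasts,
    times the probability it would not have happened anyway."""
    return significance * persistence_years * contingency

# Hypothetical intervention A: modest yearly benefit, but very long-lasting and unlikely otherwise.
a = expected_value(significance=1.0, persistence_years=10_000, contingency=0.1)

# Hypothetical intervention B: large yearly benefit, but short-lived and likely to happen anyway.
b = expected_value(significance=50.0, persistence_years=20, contingency=0.5)

print(a, b)  # 1000.0 vs 500.0
```

The point of the toy comparison is that a modest but very persistent and highly contingent change can carry more expected value than a flashier change that would likely have happened anyway.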
Chapter 3: Existential Risks: Threats That Could End Human Potential Forever
Humanity faces several threats that could permanently end our civilization or cause our extinction. These existential risks deserve special attention because of their unique moral gravity: unlike other catastrophes from which society might eventually recover, extinction would foreclose the entire future of humanity, an irreversible loss of our entire potential.
Engineered pandemics stand out as particularly concerning in the near term. The COVID-19 pandemic demonstrated how natural pathogens can disrupt global society, but engineered biological agents could be far deadlier. Biotechnology is advancing rapidly, with the cost of DNA synthesis and gene editing falling dramatically. These technologies promise medical breakthroughs but also enable the creation of novel pathogens with dangerous properties. Unlike nuclear weapons, which require rare materials and massive infrastructure, bioweapons could eventually be developed with widely available equipment and knowledge.
Laboratory safety practices have proven alarmingly inadequate. Historical examples abound of dangerous pathogens escaping from labs, including multiple smallpox leaks in the UK and an anthrax release in the Soviet Union that killed dozens. Even after catastrophic breaches, institutions often fail to implement adequate safeguards, as evidenced by repeated foot-and-mouth disease escapes from the same British facility. The democratization of biotechnology means that accidents could occur in an increasing number of locations worldwide.
Great power conflict represents another major threat to humanity's future. While the "Long Peace" since World War II is encouraging, it has involved considerable luck in avoiding nuclear escalation. Power transitions, like China's rise relative to the United States, historically increase the risk of conflict. A full-scale nuclear war could kill hundreds of millions directly and potentially billions through subsequent famine if nuclear winter disrupted global agriculture.
Advanced artificial intelligence could pose existential risks through several mechanisms. AI systems with goals misaligned with human values might pursue those goals with superhuman capabilities, potentially viewing humans as obstacles or competitors for resources. Even AI systems designed with seemingly benign objectives could cause harm if those objectives are incompletely or incorrectly specified. The technical challenge of ensuring advanced AI systems reliably pursue their designers' true intentions—the alignment problem—remains unsolved.
If humanity were to go extinct, would another technologically capable species evolve to take our place? This seems far from guaranteed. The evolutionary path from simple life to human-level intelligence may have involved several highly improbable steps. The Fermi paradox—the absence of observable alien civilizations despite the vastness of the universe—suggests that the emergence of technological intelligence might be extraordinarily rare. If so, human extinction could represent the permanent loss of the universe's self-understanding.
Chapter 4: Value Lock-in: The Danger of Permanent Moral Stagnation
Throughout history, human values have evolved dramatically. Practices once widely accepted—slavery, public torture, denial of women's rights—are now considered morally abhorrent in most societies. This moral progress resulted from the contingent spread of ideas, not inevitable historical forces. Had certain key events unfolded differently, our moral landscape might look radically different today. This historical contingency raises a crucial concern: the possibility of value lock-in.
Value lock-in occurs when a particular set of values becomes permanently or near-permanently established, preventing further moral evolution. Several mechanisms could cause such lock-in, with advanced artificial intelligence being perhaps the most concerning. As AI systems approach or exceed human-level intelligence, they may develop the capacity to entrench whatever values are encoded in them. An AI system designed to maximize a specific metric—corporate profits, national power, or even seemingly benign goals like human happiness—might reshape civilization around that singular objective. Once such systems surpass human control, their values could become effectively permanent.
Digital minds—whether AI systems or uploaded human consciousnesses—could potentially survive for billions of years, far outlasting biological humans. If created with flawed values, they might perpetuate those values indefinitely. Similarly, space colonization could spread and entrench whatever values guide the colonization process. The first civilization to develop interstellar travel capabilities might determine the values of all future civilizations throughout our cosmic neighborhood.
The pursuit of value lock-in has been common throughout history. Religious crusades, ideological purges, and totalitarian regimes have all attempted to eliminate competing value systems. Cultural evolution helps explain this pattern—cultures that entrench themselves and suppress alternatives tend to persist longer. The combination of AI's potential immortality with humanity's demonstrated desire to spread and preserve particular values creates conditions for permanent value lock-in.
This possibility is particularly concerning because our current moral understanding is almost certainly incomplete. Past generations held values we now find abhorrent, such as acceptance of slavery or extreme patriarchy. Future generations will likely view some of our current moral beliefs as similarly misguided. If today's flawed values were to become permanently locked in through advanced AI, we would foreclose the possibility of moral progress.
Rather than rushing to lock in our current values, we should aim to create what might be called a "morally exploratory world"—one structured to enable ongoing moral reflection and progress. This would involve keeping options open by delaying potentially irreversible developments, encouraging political experimentation and cultural diversity, and structuring global society to favor the spread of better ideas through reason and good-faith debate rather than coercion or manipulation.
Chapter 5: Safeguarding Humanity: Preventing Extinction, Collapse, and Stagnation
The fall of the Roman Empire illustrates how even advanced civilizations can collapse. In AD 100, Rome was at its zenith—a sophisticated society with central heating, concrete construction, complex trade networks, and a population of one million in its capital city. Yet within a few centuries, Rome's population had dwindled to thirty thousand, and much of its technological sophistication was lost. While this collapse was dramatic, it was also local—other civilizations continued to flourish elsewhere, and Europe eventually recovered.
Modern global civilization has demonstrated remarkable resilience in the face of catastrophes. Even after devastating events like the Black Death, which killed around 10% of the global population, or the atomic bombing of Hiroshima, societies have rebounded with surprising speed. This resilience stems from human adaptability and the preservation of knowledge across geographically dispersed locations. Even if a catastrophe killed 99% of humanity, survivors would likely retain critical skills and access to preserved knowledge in libraries worldwide.
However, certain catastrophes could potentially cause unrecovered collapse. An all-out nuclear war might kill billions through direct effects and subsequent nuclear winter, making agriculture impossible across much of the Northern Hemisphere. If such a catastrophe coincided with the depletion of easily accessible fossil fuels, recovery might be significantly harder. Without readily available energy sources that powered the original Industrial Revolution, a post-collapse society might struggle to reindustrialize.
Climate change represents another long-term challenge. While current projections suggest it's unlikely to directly cause civilization's collapse, extreme scenarios involving feedback effects could potentially trigger catastrophic warming. More concerning is how climate change might interact with other risks—a warmed world might make recovery from other catastrophes more difficult, and the continued burning of fossil fuels depletes resources that could be crucial for future recovery.
Technological stagnation poses a subtler but potentially more serious risk. Economic data suggests that innovation is becoming harder over time—we need exponentially more researchers to maintain constant rates of progress. Meanwhile, global fertility rates are falling below replacement levels in most countries. These trends could eventually lead to near-zero growth and technological progress. Such stagnation would be dangerous because it would extend the period during which civilization remains vulnerable to extinction risks without developing the technologies needed to protect against them.
The combination of these risks—extinction, collapse, and stagnation—means that safeguarding civilization requires a multi-faceted approach. We must reduce direct extinction risks through better biosecurity and nuclear security, preserve the conditions for recovery by maintaining accessible knowledge and resources, and prevent stagnation by supporting scientific progress and institutional innovation. The stakes are enormous—if we fail, we might permanently foreclose the possibility of a vast and flourishing future.
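The stagnation dynamic described above can be made concrete with a toy model: if maintaining constant technological progress requires research effort to keep growing exponentially, while the population that supplies researchers is shrinking, the required share of people doing research eventually exceeds any plausible ceiling. Every number in the sketch below is an invented assumption used only to illustrate the shape of the argument.

```python
# Toy model of stagnation: constant progress needs exponentially growing research effort,
# but a shrinking population can only supply so much. All numbers are invented assumptions.

population = 8e9              # rough current world population
pop_growth = 0.997            # population shrinking ~0.3% per year (assumed)
effort = 0.001 * population   # research effort, starting at 0.1% of people (assumed)
effort_growth = 1.04          # effort must grow ~4% per year for constant progress (assumed)
max_share = 0.05              # assume at most 5% of people could plausibly do research

for year in range(1, 300):
    effort *= effort_growth
    population *= pop_growth
    if effort / population > max_share:
        print(f"Constant progress becomes unsustainable around year {year}")
        break
```

Under these made-up parameters progress stalls within roughly a century; the qualitative point is that harder-to-find ideas and sub-replacement fertility compound one another.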
Chapter 6: The Value of Future Lives: Population Ethics and Moral Mathematics
Is bringing new happy people into existence morally good? This seemingly abstract question has profound implications for how we value the future. If creating new happy lives is good, then preventing extinction becomes overwhelmingly important—it preserves the possibility of trillions of valuable future lives. If creating new lives is merely neutral, then improving existing lives might take priority.
The "intuition of neutrality"—the idea that creating happy lives is neither good nor bad—initially seems plausible. We rarely justify having children by claiming their existence will make the world better. However, this intuition faces devastating logical problems. If creating a child with a good life is neutral, and creating a child with a slightly worse (but still good) life is also neutral, then the two options must be equally good as each other, yet it is clearly better to create the child with the better life.
Philosopher Derek Parfit revolutionized this field, arguing that the nonexistence of future people who would have had good lives represents a genuine moral loss. This view directly challenges the neutrality intuition.
Several further arguments undermine the neutrality intuition. First, while we might think creating miserable lives is bad, it seems inconsistent to deny that creating happy lives is good. Second, the fragility of identity means that almost any policy change will completely change which future people exist. If we believe such policies can be good despite changing identities, we must reject neutrality.
Alternative views also face difficulties. The "average view," which holds we should maximize average wellbeing, implies absurd conclusions, such as preferring a tiny population of extremely happy people to a vast population of very happy people. It can even imply that adding lives filled with suffering is good, so long as those lives are less bad than the existing average.
The "total view"—that we should maximize total wellbeing—avoids these problems but leads to what Parfit called the "Repugnant Conclusion": the idea that a vast population with lives barely worth living could be better than a smaller population with excellent lives. While initially counterintuitive, this conclusion follows from seemingly indisputable premises about improving lives and the value of equality.
When we apply population ethics to assess the value of preventing extinction, the stakes become astronomical. If future people with good lives count morally, then extinction would represent the loss of potentially trillions of valuable future lives. This gives us powerful reasons to safeguard humanity's future and eventually expand sustainably into space, where civilization could flourish for billions or trillions of years.
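A tiny numerical example makes the contrast between the average view and the total view concrete. The two populations and their wellbeing levels below are invented for illustration; they are not examples from the book.

```python
# Comparing the "average view" and the "total view" on two invented populations.

population_a = [90] * 10        # 10 people, each with very high wellbeing
population_b = [1] * 10_000     # 10,000 people, each with lives barely worth living

def total(pop):
    return sum(pop)

def average(pop):
    return sum(pop) / len(pop)

print(total(population_a), total(population_b))      # 900 vs 10000: the total view prefers B
print(average(population_a), average(population_b))  # 90.0 vs 1.0: the average view prefers A
```

The total view's preference for the enormous population of barely-worth-living lives is exactly Parfit's Repugnant Conclusion, while the average view's preference for the tiny elite population illustrates the problems noted above.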
Chapter 7: Taking Action: Practical Steps Toward a Flourishing Future
Positively influencing the long-term future requires identifying concrete priorities and actions. Three principles can guide this effort: take robustly good actions that work across many scenarios, build options that preserve future flexibility, and invest in learning to reduce uncertainty about crucial considerations.
For individuals, longtermism transforms how we should approach personal decisions. It suggests prioritizing actions with potential long-term significance over those with only short-term benefits. This might mean choosing careers in fields positioned to address existential risks, like biosecurity, AI safety research, or institutional design. It could involve supporting organizations working on neglected longtermist priorities through donations or volunteering. Even small contributions can have outsized impacts when directed toward underfunded areas with enormous long-term stakes.
For policymakers, longtermism implies institutional reforms to better represent future generations' interests. Democratic systems naturally favor short-term thinking, as future people cannot vote or lobby for their interests. Possible reforms include creating dedicated positions for future generations' representatives, establishing independent agencies with mandates to protect long-term interests, or implementing mechanisms that slow down potentially irreversible decisions to allow for more careful deliberation.
Specific policy priorities emerge from longtermist analysis. Preventing catastrophic biological risks requires strengthening international biosafety standards, investing in monitoring technologies, and potentially restricting access to dangerous information. Navigating AI development safely demands careful governance frameworks that prevent racing dynamics while enabling beneficial innovation. Reducing great power conflict through diplomatic engagement and institutional cooperation becomes crucial for preserving humanity's long-term potential.
For areas with deep strategic uncertainty, like AI alignment, field-building becomes crucial. Developing communities of researchers focused on long-term risks creates capacity to address emerging challenges. Similarly, improving institutional decision-making helps society better manage complex global risks. Building resilient food systems and knowledge repositories enhances civilization's ability to recover from catastrophes.
Taking a community perspective enhances effectiveness. Rather than asking what single individuals can accomplish alone, we should consider how communities can collectively address humanity's greatest challenges. This portfolio approach allows for specialization, experimentation, and mutual support in service of our shared long-term future.
Perhaps most fundamentally, longtermism calls for moral expansiveness—extending our circle of moral concern across both space and time. Just as we've gradually expanded moral consideration to people of all races, genders, and nationalities, we should further expand it to include future generations. This expansion doesn't require abandoning special obligations to those close to us, but it does mean giving significant weight to how our actions affect the distant future.
Summary
Future generations matter morally, and we can meaningfully shape their world. Through careful analysis of historical contingency, technological development, and moral progress, we've seen that humanity stands at a pivotal moment. Our choices today could determine whether civilization flourishes for billions of years or ends this century—and whether future society embodies values that enable genuine flourishing or lock in flawed priorities. The case for longtermism rests on three pillars: the vast scale of potential future flourishing, our ability to positively influence long-term outcomes despite uncertainty, and the moral significance of future people despite their temporal distance. This perspective doesn't require neglecting present concerns but rather expands our moral circle across time, recognizing that those who come after us deserve consideration in our decisions today. By developing resilient institutions, managing existential risks, and preserving the possibility of moral progress, we can fulfill our responsibilities to the countless generations that may follow.
Best Quote
“Whether the future is wonderful or terrible is, in part, up to us.” ― William MacAskill, What We Owe the Future
Review Summary
Strengths: The book presents an interesting concept, encouraging readers to think in longer-term increments as individuals and civilizations mature.
Weaknesses: The book is criticized for being a collection of vague notions and half-baked philosophical musings. The reviewer feels the core idea could have been condensed into a blog post, with the rest being filler content on topics like climate and AI. It is perceived as out of touch with real-world operations, similar to criticisms of Nick Bostrom's "Superintelligence."
Overall Sentiment: Critical
Key Takeaway: While the book's concept of long-term thinking is appealing, its execution is seen as lacking substance and relevance, failing to effectively engage with practical realities.

What We Owe the Future
By William MacAskill