
Risk

The Science and Politics of Fear

4.0 (5,715 ratings)
25-minute read | Text | 9 key ideas
In a world brimming with anxiety, Dan Gardner peels back the layers of fear that cloud our judgment in "Risk." This thought-provoking exploration challenges the alarming narratives we're fed daily, revealing the psychological and societal mechanisms that heighten our sense of danger. Gardner masterfully argues that our perception is distorted, painting a reality far less threatening than we imagine. With engaging insights and a sharp wit, he dismantles the myths that drive our fears, inviting readers to reconsider what truly deserves our concern. Prepare to question everything you thought you knew about risk and safety in this captivating journey through the modern psyche.

Categories

Business, Nonfiction, Self Help, Psychology, Philosophy, Science, Economics, Politics, Audiobook, Sociology

Content Type

Book

Binding

Hardcover

Year

2007

Publisher

McClelland & Stewart

Language

English

ASIN

0771032994

ISBN

0771032994

ISBN13

9780771032998


Risk Plot Summary

Introduction

We live in the safest era in human history, yet paradoxically, we are increasingly afraid. This fundamental contradiction defines our relationship with risk in the modern world. While objective measures show dramatic improvements in human welfare—longer lifespans, reduced disease, fewer violent deaths—our subjective experience remains dominated by anxiety about an ever-expanding catalog of threats. This disconnect between statistical reality and perceived danger stems from the mismatch between our Stone Age brains and our Information Age environment. Our risk assessment mechanisms evolved to help our ancestors survive immediate, visible dangers on the African savanna—not to evaluate statistical probabilities or assess the relative dangers of climate change versus processed foods. Understanding this paradox requires examining the complex interplay between our psychological biases, media systems, and the economic incentives that shape our perception of danger. Through rigorous analysis of how these forces interact, we can identify the systematic patterns that lead us to fear the wrong things while ignoring genuine threats. This exploration isn't merely academic—it has profound implications for both personal decision-making and public policy. When fear rather than evidence drives our responses to risk, we misallocate resources, restrict freedoms unnecessarily, and often create new dangers in our attempts to eliminate others. By understanding the mechanisms that distort our risk perception, we can develop more rational approaches to navigating an uncertain world.

Chapter 1: The Safety Paradox: Living in Fear During History's Safest Era

The modern world presents a striking contradiction: we have never been safer, yet we have never been more afraid. By virtually every objective measure, people in developed countries face fewer threats to life and health than any previous generation. Life expectancy has nearly doubled over the past century. Child mortality has plummeted from around 30% in 1900 to less than 1% today in developed nations. Infectious diseases that once regularly decimated populations have been largely controlled through vaccination, antibiotics, and improved sanitation. Even violence has declined dramatically—homicide rates in modern Western societies are a fraction of what they were in medieval times, and deaths from war have fallen precipitously since the mid-20th century. Yet despite these remarkable improvements, surveys consistently show high and often increasing levels of anxiety about health, safety, and environmental threats. Parents fear letting children play unsupervised in neighborhoods far safer than those they themselves grew up in. Consumers worry intensely about trace chemicals in food while making lifestyle choices that pose far greater health risks. Citizens demand protection from terrorism despite its statistical rarity compared to everyday dangers like automobile accidents. This persistent anxiety in the face of objective safety constitutes what researchers call the "risk perception paradox." This paradox stems partly from how progress itself changes our relationship with risk. As Steven Pinker notes, "The decline of violent behavior has been paralleled by a decline in attitudes that tolerate or glorify violence." As actual dangers recede, our sensitivity to remaining threats increases. A society that has eliminated smallpox and polio can turn its attention to more subtle health concerns. Communities where violent crime has fallen dramatically become increasingly intolerant of the violence that remains. 
This shifting baseline creates the impression of deteriorating conditions despite objective improvements. Media systems further amplify this distortion through their structural bias toward negative, dramatic events. News organizations operate on the principle that "if it bleeds, it leads"—unusual, frightening stories attract attention and boost ratings or readership. A plane crash anywhere in the world receives extensive coverage, while the approximately 100,000 commercial flights that land safely each day go unreported. This selective reporting makes rare events seem common and common events invisible. The advent of 24-hour news channels and social media has intensified this effect, creating a constant stream of alarming information that our ancestors never had to process. The psychology of risk perception compounds these distortions. Our brains evolved in environments where immediate, visible threats—predators, hostile tribes, natural disasters—posed the greatest dangers. We developed powerful emotional responses to these threats that helped our ancestors survive. But these same mechanisms poorly equip us to evaluate statistical risks or long-term, invisible dangers. We respond more strongly to risks that are dramatic, unfamiliar, involuntary, or man-made than to statistically greater dangers that lack these qualities. This explains why terrorism generates more fear than heart disease despite killing far fewer people. The consequences of this paradox extend beyond unnecessary anxiety. When fear rather than evidence drives decision-making, societies misallocate resources toward addressing highly publicized but relatively minor risks while neglecting more significant threats. After the September 11 attacks, for example, many Americans switched from flying to driving—a decision that statistical analysis suggests resulted in approximately 1,500 additional road fatalities in the following year. 
Understanding the mechanisms behind the risk perception paradox represents the first step toward developing more rational approaches to both personal and societal risk management.
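The driving-versus-flying trade-off described in this chapter can be made concrete with a back-of-the-envelope expected-fatalities calculation. The per-mile rates below are rough illustrative assumptions (roughly 1.5 deaths per 100 million vehicle miles driven versus about 0.07 deaths per billion passenger miles flown), not figures taken from the book:

```python
# Illustrative sketch: expected fatalities when trips shift from flying to driving.
# The per-mile rates below are rough, assumed figures for illustration only.

DRIVING_DEATHS_PER_MILE = 1.5 / 100_000_000    # ~1.5 deaths per 100M vehicle miles
FLYING_DEATHS_PER_MILE = 0.07 / 1_000_000_000  # ~0.07 deaths per 1B passenger miles

def expected_extra_deaths(trips: int, miles_per_trip: float) -> float:
    """Expected additional fatalities if `trips` journeys of `miles_per_trip`
    miles each are driven instead of flown."""
    total_miles = trips * miles_per_trip
    return total_miles * (DRIVING_DEATHS_PER_MILE - FLYING_DEATHS_PER_MILE)

# E.g., 100 million one-way trips of 1,000 miles shifted from air to road:
extra = expected_extra_deaths(100_000_000, 1_000)
print(f"{extra:,.0f} expected additional deaths")  # prints: 1,493 expected additional deaths
```

Even with crude assumed rates, the calculation lands in the same ballpark as the roughly 1,500 additional road fatalities cited above, which is the point: the danger gap per mile is large enough that modest shifts in travel behavior produce substantial changes in expected deaths.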

Chapter 2: System 1 vs System 2: How Our Dual Thinking Processes Assess Risk

Human risk assessment operates through two distinct cognitive systems that psychologist Daniel Kahneman, building on his long collaboration with Amos Tversky, popularized as System 1 and System 2. System 1 thinking is fast, intuitive, and emotional—it makes snap judgments based on mental shortcuts without conscious deliberation. System 2, by contrast, is slow, analytical, and deliberate—it carefully weighs evidence and calculates probabilities. These systems aren't physically separate brain regions but rather different modes of cognitive processing that evolved to serve complementary purposes. System 1 developed to help our ancestors survive in environments where quick reactions often meant the difference between life and death. When early humans heard rustling in the grass, those who immediately assumed "predator" and fled survived more often than those who paused to analyze the statistical likelihood of various causes. This intuitive system operates through emotional responses and mental shortcuts called heuristics. The "availability heuristic" judges risk based on how easily examples come to mind—events we can readily recall seem more probable regardless of their actual frequency. The "affect heuristic" assesses danger based on emotional associations—things that trigger negative feelings seem riskier than those evoking positive emotions. System 2 thinking emerged more recently in human evolution and allows for sophisticated statistical reasoning and logical analysis. It can calculate that flying is safer than driving despite dramatic news coverage of plane crashes. It can recognize that the lifetime risk of dying from terrorism is far smaller than the risk of dying from heart disease. However, System 2 requires significant mental effort and attention—resources often in short supply in our information-saturated environment. As Kahneman notes, "System 2 is lazy," defaulting to System 1 judgments whenever possible rather than engaging in the hard work of analysis.
The interaction between these systems creates systematic biases in risk perception. System 1 generates initial impressions based on emotional responses, which System 2 may then adjust or override. But this adjustment process is imperfect and often insufficient. Even when presented with statistical evidence contradicting their fears, people frequently maintain emotional responses to perceived threats. A person who understands the minuscule probability of shark attacks may still feel terror when swimming in the ocean. This explains why public education campaigns focusing solely on statistics often fail to change risk perceptions. Modern media environments exploit and amplify these cognitive tendencies. News organizations naturally emphasize dramatic, emotional stories that trigger System 1 responses while providing little statistical context that would engage System 2. A single vivid example—a child abduction, a terrorist attack, a rare disease—activates powerful emotional responses that statistical information rarely counterbalances. As risk perception researcher Paul Slovic observes, "The image of a child with leukemia carries more emotional weight than statistics about cancer rates." The dual-system framework helps explain otherwise puzzling risk perceptions. Why do people fear nuclear power more than coal, despite evidence that coal causes far more deaths? Because nuclear accidents trigger powerful System 1 responses that coal's steady toll doesn't. Why do parents fear stranger abduction more than automobile accidents? Because the former evokes stronger emotions despite being dramatically less common. Understanding these mechanisms provides a foundation for developing more rational approaches to both personal and societal risk management—not by eliminating System 1 responses, which is impossible, but by recognizing when they might lead us astray and deliberately engaging System 2 analysis.

Chapter 3: Media Amplification: How News Coverage Magnifies Rare Threats

News media fundamentally transform how we perceive dangers through systematic patterns of coverage that amplify certain risks while minimizing others. This distortion occurs not primarily through deliberate manipulation but through structural features of journalism itself. News organizations naturally prioritize unusual, dramatic events over common occurrences—the proverbial "man bites dog" rather than "dog bites man." This selection bias creates a profoundly skewed portrait of reality where rare but frightening events appear commonplace. Studies examining media coverage of various risks consistently reveal this pattern. A comprehensive analysis of newspaper coverage of causes of death found that homicide, terrorism, and accidents received vastly more attention than heart disease, cancer, and stroke—despite the latter claiming far more lives. One study calculated that newspapers devoted 8,571 times more coverage per death to exotic threats like mad cow disease than to common killers like diabetes. This disproportionate coverage appears across countries and media formats, from television news to digital platforms. The presentation of stories further distorts risk perception. News reports typically focus on individual victims rather than statistical patterns, triggering what psychologists call the "identifiable victim effect." A profile of a child with a rare disease evokes powerful emotions that abstract numbers cannot match. Research shows that such personalized stories generate stronger responses than statistical information—even when the statistics describe far greater harm. As risk communication expert Peter Sandman notes, "People respond to the story, not the statistic." Media coverage suffers from what statisticians call "denominator blindness"—reporting the number of people harmed by a threat (the numerator) without providing context about the total population at risk (the denominator). 
Headlines announce "30 Children Sickened by New Virus" without mentioning there are 74 million children in the country. Similarly, relative risk increases ("Study Shows Chemical Doubles Cancer Risk") appear without absolute risk information (perhaps from one in a million to two in a million), creating misleading impressions of danger. These distortions create feedback loops that amplify fear. When media coverage of a threat increases, public concern rises, which generates demand for more coverage. Officials respond to public anxiety with statements and investigations, creating more news events. This cycle continues until another dramatic story captures attention. The "Summer of the Shark" in 2001 exemplifies this pattern—extensive coverage of shark attacks created the impression of an epidemic despite no statistical increase in incidents. The cycle ended abruptly on September 11th when terrorism became the new focus of fear. The consequences extend beyond unnecessary anxiety. Resources flow toward highly publicized risks rather than greater but less dramatic threats. After the 9/11 attacks, many Americans switched from flying to driving, resulting in approximately 1,500 additional road fatalities. Similarly, excessive fear of stranger abduction has restricted children's outdoor play, potentially contributing to childhood obesity. Understanding media amplification mechanisms represents a crucial step toward more rational risk assessment and response—not by ignoring genuine threats but by placing them in appropriate statistical context.
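The relative-versus-absolute framing problem described above can be shown with the chapter's own hypothetical numbers (a risk rising from one in a million to two in a million). A minimal sketch:

```python
# Sketch: the same study result reported as relative vs. absolute risk.
# Baseline figures are the hypothetical ones used in the text above.

baseline_risk = 1 / 1_000_000  # absolute risk without exposure
exposed_risk = 2 / 1_000_000   # absolute risk with exposure

relative_increase = exposed_risk / baseline_risk  # the headline: "doubles cancer risk"
absolute_increase = exposed_risk - baseline_risk  # the context: one extra case per million

print(f"Relative risk: {relative_increase:.0f}x")
print(f"Absolute increase: {absolute_increase * 1_000_000:.0f} extra case(s) per million people")
# Both lines describe the identical finding; only the framing differs.
```

A headline built from the first number sounds alarming; one built from the second barely registers. Reporting both is the simple fix for denominator blindness.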

Chapter 4: The Psychology of Contamination: Why We Fear Chemicals Irrationally

Human aversion to contamination runs deeper than rational calculation—it taps into primal psychological mechanisms that evolved long before modern science. Our ancestors faced constant threats from spoiled food, contaminated water, and disease vectors. Those who developed strong intuitive aversions to potential contaminants gained survival advantages, passing these tendencies to descendants. Today, these same psychological mechanisms drive disproportionate fears about chemical exposure despite scientific evidence suggesting minimal risk from trace contaminants. The psychology of contamination fear operates through several distinct mechanisms. First is the "contagion heuristic"—the intuitive belief that once something has contacted a contaminant, it remains fundamentally tainted regardless of concentration. This explains why people refuse to drink water containing even one part per billion of a carcinogen, though the actual risk is negligible. Second is "dose insensitivity"—our inability to intuitively grasp that toxicity depends on concentration. While toxicologists emphasize "the dose makes the poison," laypeople tend to view substances as either safe or dangerous regardless of quantity. These psychological tendencies interact with cultural factors that amplify chemical fears. Modern society has redefined "chemical" to mean synthetic, laboratory-created substances rather than the technical definition encompassing all matter. This linguistic shift creates a false dichotomy between "natural" (presumed safe) and "chemical" (presumed dangerous). Surveys reveal this bias clearly—when asked about risks, respondents consistently rate natural substances as safer than synthetic ones with identical properties. This explains the premium consumers willingly pay for products labeled "natural" or "chemical-free." The media reinforces these tendencies through coverage that emphasizes contamination threats while providing little context about exposure levels or comparative risks. 
Headlines announce "Toxic Chemicals Found in Blood" without explaining that detection technology can identify substances at parts per trillion—concentrations far below levels of toxicological concern. Environmental organizations further amplify these fears through campaigns highlighting the presence of synthetic chemicals in human tissues, often without discussing whether these trace amounts pose actual health risks. Risk perception researchers have identified a stark divide between expert and public views on chemical risks. When surveyed about various threats, toxicologists consistently rate synthetic chemicals as low-risk concerns, while the general public ranks them among the most dangerous hazards. This gap persists even among educated populations and reflects not merely knowledge differences but fundamentally different approaches to evaluating contamination. Experts apply analytical reasoning considering dose, exposure pathways, and comparative risk, while laypeople rely on intuitive feelings of disgust and contamination. The consequences of chemical panic extend beyond unnecessary anxiety. Resources devoted to eliminating trace contaminants might achieve greater health benefits if directed toward more significant risks. Moreover, excessive fear of synthetic chemicals can lead to rejection of beneficial technologies or adoption of "natural" alternatives that may carry their own, often unexamined, risks. Addressing this disconnect requires not merely providing more information but understanding and accounting for the deep psychological roots of contamination fear—recognizing that our intuitive responses to potential toxins evolved in environments vastly different from today's world.

Chapter 5: Social Dynamics: How Group Processes Intensify Risk Perception

Risk perception is not merely an individual psychological process but a profoundly social phenomenon shaped by cultural values and group dynamics. People do not evaluate threats in isolation but as members of communities with shared worldviews, values, and social networks. These social factors help explain why different societies—and different groups within societies—often reach dramatically different conclusions about identical risks despite access to the same factual information. Cultural worldviews function as powerful filters through which risk information passes. Anthropologist Mary Douglas and political scientist Aaron Wildavsky identified how different social groups select certain dangers for attention while downplaying others based on their cultural orientations. Those with "hierarchical" worldviews (valuing social order and authority) tend to perceive different threats than those with "egalitarian" worldviews (emphasizing social equality). Similarly, "individualists" (prioritizing personal freedom) and "communitarians" (valuing collective welfare) diverge systematically in their risk assessments. These differences explain why debates about threats like climate change, nuclear power, or gun ownership often seem irreconcilable—they reflect not merely disagreements about facts but fundamentally different value systems. Group polarization further intensifies these cultural divides. When like-minded individuals discuss risks together, their views typically become more extreme than their initial positions. This occurs partly because people want to be perceived as appropriately concerned about threats to the group, and partly because they share information that predominantly supports their existing views. In online communities focused on specific threats—whether environmental chemicals, vaccine injuries, or crime—members often develop increasingly apocalyptic perspectives as they reinforce each other's concerns while dismissing contrary evidence. 
Social networks determine which information sources people trust and which risks receive attention. Studies show that people are far more likely to adopt risk perceptions from those they consider part of their cultural in-group than from outsiders, regardless of expertise. This explains why scientific consensus often fails to resolve risk controversies—when scientific information contradicts group values, people question the credibility of the scientists rather than adjusting their beliefs. As Yale law professor Dan Kahan demonstrates, people with strong cultural identities process identical risk information differently depending on whether it aligns with their group's worldview. Media consumption patterns reinforce these social dynamics through what communication researchers call "selective exposure." People increasingly choose news sources that align with their cultural perspectives, creating information bubbles that amplify certain risks while minimizing others. Conservative media emphasizes threats from terrorism, crime, and moral decline, while progressive outlets highlight environmental dangers, corporate malfeasance, and social injustice. These divergent narratives create fundamentally different risk landscapes for their audiences, making shared risk assessment increasingly difficult across cultural divides. Institutional trust plays a crucial role in these social processes. People assess dangers not just by evaluating evidence but by considering the trustworthiness of information sources. Those who distrust government institutions will be skeptical of official reassurances about vaccine safety or water quality, while those suspicious of corporations will doubt industry claims about chemical safety or pollution controls. These trust relationships explain why risk communication from experts often fails—the issue isn't just information deficiency but credibility gaps between institutions and communities. 
Understanding these social dimensions reveals why purely technical approaches to risk communication frequently prove ineffective and suggests that addressing risk controversies requires engaging with the cultural values and social dynamics that make certain dangers meaningful to different communities.

Chapter 6: The Fear Industry: Economic Incentives Behind Risk Amplification

A vast and growing ecosystem of commercial interests systematically profits from and perpetuates public anxiety. While fear serves evolutionary purposes by alerting us to genuine threats, it has also become a valuable commodity exploited by various industries to drive consumption, viewership, and political support. Understanding these economic incentives helps explain why certain risks receive disproportionate attention despite limited objective danger. The security industry represents perhaps the most direct beneficiary of fear. Global spending on private security exceeds $200 billion annually, with steady growth driven largely by perceptions of increasing danger rather than actual crime rates. Home security companies routinely advertise using emotionally charged scenarios—masked intruders breaking into suburban homes while families sleep—despite residential burglary rates having declined significantly in most developed nations. Similarly, manufacturers of personal safety products, surveillance equipment, and child tracking devices market their products by amplifying parental anxiety about statistically rare threats like stranger abduction. Pharmaceutical and healthcare industries likewise benefit from heightened risk perception. Drug companies have pioneered techniques for "disease mongering"—expanding diagnostic boundaries and promoting awareness of conditions requiring medication. Campaigns to lower thresholds for treating conditions like high cholesterol, hypertension, and osteoporosis have dramatically expanded potential markets. Direct-to-consumer advertising frequently employs fear-based messaging, presenting normal life variations as medical conditions requiring intervention. One analysis found that after pharmaceutical advertising was permitted in the United States, prescription rates for advertised conditions increased substantially compared to similar conditions not advertised. Media organizations constitute another major beneficiary of public anxiety. 
News outlets face economic incentives to emphasize threatening content that captures attention and drives engagement. Cable news networks experienced dramatic viewership increases during periods of heightened threat perception, such as terrorism alerts or disease outbreaks. Digital media has intensified these incentives through engagement-based algorithms that systematically promote content triggering strong emotional responses—particularly fear and outrage. This creates a self-reinforcing cycle where threatening content generates engagement, which algorithms then reward with greater visibility. Environmental and consumer advocacy organizations, while often pursuing worthy goals, sometimes employ fear-based messaging to generate support and donations. Campaigns highlighting chemical contamination, food additives, or technological risks frequently emphasize worst-case scenarios while downplaying scientific uncertainty or comparative risk information. These approaches prove effective in mobilizing supporters and generating media coverage but can contribute to disproportionate public anxiety about relatively minor threats. Political actors similarly convert fear into political capital. Campaigns frequently emphasize threats—whether from crime, terrorism, economic competition, or environmental degradation—while positioning candidates as protectors. Research demonstrates that inducing anxiety makes voters more receptive to persuasion and more likely to support policies framed as reducing threats. This dynamic creates incentives for politicians to amplify rather than contextualize risks. The expansion of criminal penalties, creation of new regulatory agencies, and growth of military and security budgets all reflect the political utility of fear. These profit-driven dynamics create significant distortions in societal risk management. 
Resources flow disproportionately toward dramatic but statistically minor threats that generate profit opportunities, while more significant but less marketable risks receive inadequate attention. Moreover, commercial interests systematically apply sophisticated psychological research to trigger emotional responses that bypass rational risk assessment. Understanding these economic incentives represents a crucial step toward developing more balanced approaches to societal risk management—recognizing when our fears are being deliberately amplified for commercial gain rather than reflecting genuine threats requiring action.

Chapter 7: Beyond Gut Reactions: Developing More Rational Risk Assessment

Moving beyond intuitive fear responses toward more rational risk assessment requires both individual awareness of our psychological biases and institutional changes to how risks are communicated and managed. This transition doesn't mean eliminating emotional responses to threats—an impossible and undesirable goal—but rather developing the capacity to recognize when our intuitions might be leading us astray and deliberately engaging more analytical thinking in such situations. At the individual level, understanding the mechanisms that distort our risk perceptions creates opportunities to counteract these tendencies. Simply recognizing that our intuitive judgments may be systematically biased by availability heuristics, affect heuristics, and confirmation bias allows us to approach risk decisions more deliberately. When encountering alarming information about a potential threat, we can pause to consider whether vivid anecdotes might be overriding statistical reality, whether emotional responses might be distorting our judgment, or whether we're seeking information that confirms rather than challenges our existing beliefs. Improving statistical literacy provides significant protection against many risk perception biases. Research shows that people with stronger numerical skills are less susceptible to framing effects and other manipulations of risk information. Even simple techniques can enhance understanding—converting percentages to natural frequencies (saying "5 out of 100 people" rather than "5% of people"), using visual representations of risk, and providing absolute risk information alongside relative risk changes. These approaches make statistical information more accessible to intuitive thinking, helping bridge the gap between our analytical and emotional risk assessment systems. Media organizations can contribute to more rational risk assessment by providing context for frightening events. 
This includes statistical information about baseline risks, comparisons to more familiar hazards, and explanations of how individual cases relate to broader patterns. Some news outlets have adopted "explanatory journalism" approaches that go beyond reporting isolated incidents to explore underlying trends and contextualize dramatic events. These efforts help counterbalance the natural tendency of news reporting to highlight unusual, frightening occurrences without adequate context. Government agencies and other risk management institutions need to recognize that public trust is essential for effective risk communication. When authorities are perceived as dishonest or motivated by political agendas, even scientifically sound risk information may be rejected. Building and maintaining trust requires transparency about uncertainties, acknowledgment of mistakes, and consistent communication that respects public intelligence rather than attempting to manipulate emotions. Institutions that have maintained public trust, like the National Weather Service, demonstrate how technical expertise can be effectively communicated when built on a foundation of credibility. Perhaps most fundamentally, moving toward more rational risk assessment requires recognizing that risk decisions inevitably involve both facts and values. Questions about which risks are "acceptable" cannot be answered through scientific analysis alone; they require explicit discussion of the trade-offs involved and the values that should guide those trade-offs. Creating forums for thoughtful public deliberation about risk—rather than allowing risk decisions to be driven by media cycles and interest group politics—could help align societal resources more closely with actual threats to human welfare. The challenge is substantial, as the psychological mechanisms that distort risk perception are deeply ingrained in human cognition. 
But understanding these mechanisms is the first step toward developing both personal strategies and institutional structures that can help us navigate risks more rationally in our complex modern world. The goal isn't perfect rationality but a more balanced approach that integrates both emotional wisdom and analytical reasoning in facing an uncertain future.
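The natural-frequency technique mentioned in this chapter (saying "5 out of 100 people" rather than "5%") is simple enough to sketch in code. The helper below is hypothetical, not from the book:

```python
# Hypothetical helper illustrating the natural-frequency framing
# described above: express a probability as "k out of n people".

def as_natural_frequency(probability: float, reference: int = 100) -> str:
    """Convert a probability into a natural-frequency phrase."""
    count = probability * reference
    whole = round(count)
    # Tolerate floating-point noise when the count is effectively a whole number.
    if abs(count - whole) < 1e-9:
        return f"{whole} out of {reference:,} people"
    return f"about {count:.1f} out of {reference:,} people"

print(as_natural_frequency(0.05))                  # "5 out of 100 people"
print(as_natural_frequency(0.000001, 1_000_000))   # "1 out of 1,000,000 people"
```

The second call shows why the reference group matters: a one-in-a-million risk is far easier to grasp as a single person in a city-sized population than as "0.0001%".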

Summary

The fundamental paradox explored throughout this analysis reveals a profound truth about modern existence: we have never been safer, yet we have never been more afraid. This disconnect stems not from irrationality but from the sophisticated interplay between ancient psychological mechanisms and modern information environments. Our risk perception systems—evolved for a world of immediate physical dangers—now operate in a landscape of abstract threats, media amplification, and commercial exploitation of fear. The result is a systematic misallocation of concern, where rare but vivid threats generate disproportionate anxiety while more significant but mundane dangers receive insufficient attention. This analysis offers more than critique—it provides a framework for developing more balanced approaches to risk assessment. By understanding the psychological, social, and economic forces that shape our fears, we gain the capacity to evaluate threats more rationally without dismissing genuine concerns. The goal is not to eliminate fear—an impossible and undesirable aim—but to align our anxieties more closely with actual dangers, directing our protective instincts where they can do the most good. In a world where attention and resources are finite, getting risk right matters. Our survival as a species once depended on our capacity to fear; our flourishing now depends on our ability to fear wisely.

Best Quote

“Put all these numbers together and what do they add up to? In a sentence: We are the healthiest, wealthiest, and longest-lived people in history. And we are increasingly afraid. This is one of the great paradoxes of our time.” ― Daniel Gardner, The Science of Fear: Why We Fear the Things We Shouldn't--and Put Ourselves in Greater Danger

Review Summary

Strengths: Gardner's ability to make complex psychological and statistical concepts accessible stands out, engaging readers with his clear writing style. The exploration of psychological and evolutionary aspects of fear offers profound insights. Additionally, the book's relevance in the context of media sensationalism and misinformation is particularly noteworthy.

Weaknesses: Some readers note that the book occasionally oversimplifies complex issues. Repetition in examples is another criticism, as is the limited offering of practical solutions for overcoming cognitive biases in risk perception.

Overall Sentiment: The general reception is positive, with many appreciating the book's clarity and relevance. It is regarded as a thought-provoking examination of fear and risk perception.

Key Takeaway: Ultimately, the book emphasizes the need for a deeper understanding of how fear and risk perception are manipulated, encouraging readers to critically assess risks in their own lives.

About Author


Daniel Gardner

