
Not Born Yesterday

The Science of Who We Trust and What We Believe

3.9 (436 ratings)
What if our instincts for trust and belief are sharper than we think? Hugo Mercier challenges the myth of human gullibility with "Not Born Yesterday," a riveting exploration of how we navigate the complex web of persuasion. Through the lens of experimental psychology, political science, and anthropology, Mercier dismantles the idea that we're easily duped by crafty advertisers, politicians, or religious figures. Instead, he reveals our innate ability to sift through information, guided by cognitive mechanisms that balance skepticism with openness. Even when we falter—falling for falsehoods or succumbing to rumors—these missteps are anomalies in an otherwise robust system of discernment. This provocative narrative isn't just about understanding our strengths; it's a roadmap for enhancing them, making us not only informed but resilient in a world brimming with information.

Categories

Nonfiction, Psychology, Philosophy, Science, Politics, Audiobook, Sociology, Evolution, Social, Language

Content Type

Book

Binding

Hardcover

Year

2020

Publisher

Princeton University Press

Language

English

ASIN

0691178704

ISBN

0691178704

ISBN13

9780691178707


Not Born Yesterday Plot Summary

Introduction

The idea that humans are fundamentally gullible has dominated both academic discourse and popular culture for centuries. From ancient philosophers to modern social scientists, influential thinkers have portrayed the masses as easily manipulated, readily accepting whatever they hear or read without critical evaluation. This narrative of human credulity has shaped political philosophy, influenced public policy, and informed our everyday interactions. Yet this widespread belief faces a fundamental problem: if humans were truly gullible, communication itself would collapse as speakers would have overwhelming incentives to manipulate listeners.

A radically different perspective emerges when we examine human psychology through the lens of evolutionary theory and empirical evidence. Rather than being passive receivers of information, humans possess sophisticated cognitive mechanisms that help evaluate messages based on content, source credibility, and contextual factors. These mechanisms, collectively termed "open vigilance," allow us to remain receptive to valuable information while filtering out deception and manipulation.

Understanding these mechanisms not only challenges conventional wisdom about human psychology but also offers insights into why mass persuasion often fails, how misconceptions persist despite our vigilance, and how we might build better information ecosystems that work with rather than against our natural cognitive defenses.

Chapter 1: The Myth of Human Gullibility

Throughout history, a pervasive assumption has dominated our understanding of human psychology: that people are fundamentally gullible. From ancient Greek philosophers to modern social scientists, influential thinkers have portrayed the masses as easily manipulated, readily accepting whatever they hear or read. This belief in widespread credulity has shaped political philosophy, influenced public policy, and informed our everyday interactions.

The evidence for human gullibility seems overwhelming at first glance. We need only look at the apparent success of demagogues who sway crowds, propagandists who shape public opinion, advertisers who influence consumer choices, and the spread of bizarre beliefs from flat-earth theories to conspiracy theories. Classic psychological experiments appear to confirm this view: Solomon Asch's conformity studies showed people denying the evidence of their own eyes to agree with a group, while Stanley Milgram's experiments revealed how readily people follow authority figures even when ordered to harm others.

This belief in human credulity has been reinforced by sophisticated evolutionary models. Theorists argue that humans evolved to learn from others, and that this cultural learning capacity requires us to be somewhat credulous. The logic seems compelling: to benefit from accumulated cultural wisdom, we must accept what others tell us, even if this occasionally leads us to adopt harmful or mistaken beliefs. According to this view, gullibility is not a bug but a feature of human cognition - an evolutionary adaptation that allowed our species to thrive through cultural transmission.

However, this dominant narrative faces a fundamental problem: if humans were truly gullible, communication itself would collapse. Speakers would have overwhelming incentives to manipulate listeners, who would eventually learn to ignore all messages. Instead, we must have evolved sophisticated mechanisms to evaluate communication - mechanisms that allow us to be open to valuable information while remaining vigilant against manipulation.

A closer examination of the classic experiments reveals a different picture than commonly portrayed. In Asch's studies, participants conformed only on a minority of trials, and many later explained that they knew the correct answer but felt social pressure to conform. The Milgram experiments have been widely misinterpreted - participants were not blindly following orders but were persuaded by the scientific context and goals of the experiment. When orders were given without scientific justification, compliance plummeted.

Chapter 2: Open Vigilance: Our Evolved Defense System

Human communication presents an evolutionary puzzle. On one hand, our ability to share information has been crucial to our success as a species. Through communication, we learn which foods are safe, how to avoid dangers, who to trust, and countless other vital skills. On the other hand, communication creates opportunities for exploitation. If listeners blindly accepted whatever they were told, speakers would have overwhelming incentives to manipulate them.

The solution to this puzzle lies in understanding how communication evolves in nature. For communication to be evolutionarily stable, it must benefit both senders and receivers. When interests are perfectly aligned - as between cells in a body or worker bees in a hive - communication can be completely open. But human interests are rarely perfectly aligned. Even a mother and her fetus engage in a hormonal tug-of-war over resources.

Given these potential conflicts, humans have evolved mechanisms of "open vigilance" - cognitive systems that allow us to be open to communication while remaining vigilant against manipulation. These mechanisms help us evaluate messages based on their content, the speaker's competence, their trustworthiness, and other relevant factors. They allow us to accept beneficial information while rejecting harmful manipulation.

Contrary to popular belief, these mechanisms don't function like an arms race, with increasingly sophisticated manipulation techniques battling increasingly sophisticated defenses. If that were the case, disrupting our higher cognitive functions would leave us defenseless against manipulation. Instead, openness and vigilance evolved together, making our communication system robust. When our more sophisticated cognitive mechanisms are disrupted, we don't become more gullible - we become more conservative, more likely to reject messages rather than accept them. This explains why attempts at "brainwashing" and subliminal influence have consistently failed. Despite popular fears, techniques that aim to bypass conscious thought - from sleep learning to subliminal advertising - have negligible effects.

The open vigilance system operates through several interconnected mechanisms. First, we engage in plausibility checking, comparing new information against our existing knowledge and beliefs. Second, we evaluate arguments based on their logical coherence and evidential support. Third, we assess the competence of information sources, recognizing domain-specific expertise rather than general authority. Fourth, we evaluate trustworthiness based on incentives and past reliability. Finally, we use emotional signals as one input among many when evaluating communication.

Chapter 3: How We Evaluate Information and Arguments

When evaluating communicated information, we rely on several cognitive mechanisms that work together as an integrated system. The most basic is plausibility checking, which compares new information against our existing beliefs. If someone tells us the moon is made of cheese, we reject this claim because it contradicts what we already know. This mechanism is always active, filtering out messages that don't fit with our understanding of the world.

Contrary to popular belief, plausibility checking doesn't make us irrationally stubborn. The infamous "backfire effect" - where people become more entrenched in their views when presented with contradictory evidence - is actually quite rare. In most cases, when presented with reliable information that challenges our views, we move some distance toward incorporating it into our worldview, even if we don't accept it completely.

Beyond plausibility checking, we also evaluate arguments. When someone provides reasons for their claims, we assess these reasons using our existing inferential mechanisms. A good argument connects dots in a way that resonates with our intuitions, even if we wouldn't have made those connections ourselves. This allows us to accept conclusions we would never have reached independently.

The power of argumentation is evident in how it transforms group decision-making. When people exchange reasons in small discussion groups, they become much better at discriminating between opinions they should reject and those they should accept. This improves performance across domains, from forecasting to medical diagnosis to scientific hypothesis development. Importantly, we can recognize good arguments even when they challenge our deeply held beliefs. In experiments, participants who were extremely confident in wrong answers to logical problems were just as likely to recognize correct arguments as those who were less confident. Throughout history, good arguments have changed minds on topics ranging from mathematical foundations to moral issues like slavery and civil rights.

The main limitation of these mechanisms isn't bias but the quality of our prior beliefs and intuitions. In domains where evolution and learning have equipped us well, our intuitions are generally sound. But when facing novel questions in unfamiliar domains, we may systematically err. This explains why some misconceptions - like creationism or anti-vaccination views - are so intuitive and widespread. They aren't primarily spread through manipulation but because they align with our intuitive understanding of the world.

Chapter 4: Trust Calibration: Assessing Sources and Intentions

Beyond evaluating message content, we carefully assess message sources. We recognize that some people know more than we do, either because they've had better access to information or because they're more competent in certain domains. This recognition allows us to overcome our initial skepticism and accept information that conflicts with our prior beliefs when it comes from trusted sources.

From an early age, we track who has reliable access to information. Even preschoolers understand that someone who has seen what's in a box knows better than someone who has merely guessed. We also evaluate past performance, recognizing who has consistently given good advice or made accurate predictions in specific domains. These assessments are remarkably precise - we don't simply defer to prestigious individuals but track domain-specific expertise.

We also consider how many people hold a given opinion. While we don't blindly follow the majority, we rationally weigh factors like group size, consensus, member competence, and independence between opinions. This allows us to benefit from collective wisdom without falling prey to groupthink. Contrary to popular interpretations of Asch's conformity experiments, people rarely accept obviously wrong judgments simply because a group endorses them.

However, knowing who is knowledgeable isn't enough - we must also determine who is trustworthy. Surprisingly, detecting deception isn't primarily about spotting behavioral cues like fidgeting or gaze aversion. Despite popular claims, no reliable nonverbal cues to lying exist. Instead, we focus on incentives: we trust speakers when their interests align with ours, either naturally (as when coordinating on a joint task) or through reputation mechanisms. We track who has been reliable in the past and adjust our trust accordingly.

We also pay attention to how much speakers commit to their messages. More confident speakers are more influential, but they also lose more credibility when proven wrong. This creates a system of costly signaling that keeps communication honest without requiring speakers to pay costs every time they communicate.

These mechanisms work best in small-scale settings where we know the people we're interacting with. In modern societies, we often must trust strangers or institutions about whom we have limited information. This doesn't make us more gullible, but it does create challenges for establishing trust in contemporary contexts where traditional reputation tracking mechanisms may not apply.

Chapter 5: Why Mass Persuasion Often Fails

Despite widespread beliefs about the power of propaganda, advertising, and other forms of mass persuasion, historical and experimental evidence suggests these influences are far more limited than commonly assumed. This challenges fundamental assumptions about how easily public opinion can be manipulated through mass communication.

Consider Nazi propaganda, often considered the quintessential example of effective mass manipulation. Detailed historical research reveals that Nazi propaganda had minimal impact on German public opinion. It succeeded only where it aligned with existing prejudices and failed whenever it conflicted with public sentiment. Even at the height of Nazi power, Germans rejected propaganda campaigns they found implausible, such as attempts to promote euthanasia of the disabled. Far from shaping German opinion, Hitler and Goebbels responded to it, carefully avoiding topics that might alienate their audience.

Similar patterns emerge across different regimes and contexts. Soviet propaganda failed to convert workers to communism, instead having to adapt its message to existing patriotic sentiments. Chinese propaganda under Mao motivated only those who stood to benefit personally from the regime's initiatives. Contemporary Chinese authorities have largely abandoned attempts to change minds through persuasion, focusing instead on censorship and distraction.

Democratic political campaigns fare no better. Despite billions spent on U.S. elections, rigorous studies show that campaign interventions - from canvassing to advertising - have essentially no effect on voting behavior in high-profile races. The media primarily influences elections not by persuading voters but by providing basic information about candidates and issues. The more informed voters are, the less susceptible they are to persuasion attempts.

Even commercial advertising, backed by enormous resources and sophisticated techniques, has surprisingly modest effects. Most ads have no discernible impact on consumer behavior. Those that do work primarily by providing information about product characteristics, not by creating irrational preferences. Advertising works best on consumers who have no prior experience with a product and has little effect on those who do.

This consistent pattern of failure contradicts the notion that humans are easily manipulated. Mass persuasion attempts rarely change minds unless they align with people's existing beliefs or serve their interests. When messages encounter resistance - when they contradict prior beliefs or ask people to do things they don't want to do - they typically fail, regardless of how skillfully they're presented or how much money backs them.

Chapter 6: Understanding Persistent Misconceptions

If humans are equipped with sophisticated mechanisms for evaluating communication, why do so many misconceptions persist? The answer lies not in widespread credulity but in understanding how these mechanisms function in different contexts.

First, our vigilance mechanisms prioritize practically relevant information. When rumors affect our immediate interests - like who will be promoted at work or deployed to the front lines - we carefully track their accuracy. Studies show that workplace rumors and military grapevines are remarkably accurate, often exceeding 80% reliability. People in these contexts track who said what, monitor outcomes, and adjust their trust accordingly. By contrast, many popular misconceptions have little practical relevance. People may say they believe Barack Obama was born abroad or that a pizzeria houses a child trafficking ring, but they rarely act on these beliefs. This suggests they hold these views reflectively rather than intuitively - they accept them in a shallow way that doesn't integrate with their broader understanding or influence their behavior.

Second, many misconceptions persist because they tap into our cognitive biases and intuitions. We find information about threats, powerful people, and conspiracies inherently interesting - even when this information has no practical consequences for us. This creates a market for titillating rumors that spread not because people are gullible but because such information has social currency. Sharing an interesting rumor allows us to score social points, especially if others can use this information to score points in turn.

Third, we often lack reliable feedback on the accuracy of many beliefs. When we act on false information in practically relevant domains, we quickly discover our error. But beliefs about distant events, complex systems, or hidden conspiracies rarely face such reality checks. We also tend to share questionable information selectively, avoiding those who might challenge it, further insulating these beliefs from correction.

Fourth, some misconceptions persist because they align with our intuitive understanding of the world. Creationist beliefs, for example, align with our intuitive tendency to see purpose and design in complex structures. Anti-vaccination views tap into intuitive notions of contamination and purity. These intuitive misconceptions aren't primarily spread through manipulation but because they resonate with how we naturally make sense of the world.

Understanding these dynamics reveals that the spread of false beliefs doesn't necessarily indicate gullibility. Instead, it reflects how we prioritize cognitive resources, our interest in socially relevant information, and the social functions beliefs can serve beyond their truth value.

Chapter 7: Building Better Information Ecosystems

Understanding how our open vigilance mechanisms function offers insights into both why misconceptions persist and how we might combat them. The key lies not in assuming people are gullible but in recognizing the sophisticated ways they evaluate communication and designing information ecosystems that work with rather than against these natural tendencies.

First, we should acknowledge that many misconceptions persist not because people are too credulous but because they don't take certain claims seriously enough. When people spread rumors about political figures or distant events, they often hold these beliefs reflectively rather than intuitively. To combat such misconceptions, we should encourage people to consider the practical implications of their beliefs. What would they do if they truly believed a pizzeria was trafficking children? Thinking through the practical consequences of beliefs can motivate more careful evaluation.

Second, we should recognize the social dynamics that drive the spread of misconceptions. People share interesting rumors not primarily because they believe them but because such information has social currency. By questioning the plausibility of dubious claims and denying social rewards to those who spread them, we can reduce their circulation. This may entail social costs - no one likes a skeptical killjoy - but these costs represent a contribution to the public good.

Third, we should improve the quality of sourcing and feedback. Many misconceptions persist because people lack reliable information about their accuracy. Creating trusted information sources, providing clear attribution, and establishing feedback mechanisms can help people distinguish reliable from unreliable claims. The success of rumor control centers during crises demonstrates how providing reliable information can effectively combat misinformation.

Fourth, we should leverage the power of argumentation. While mass persuasion generally fails, small-group discussion and exchange of reasons can effectively change minds. Creating opportunities for people to engage with diverse perspectives and develop their own arguments can be more effective than simply presenting them with facts or expert opinions.

Finally, we should design institutions that support our natural vigilance mechanisms. Transparency allows people to apply their natural evaluation processes rather than requiring blind trust. Independence from conflicting interests enhances credibility by addressing a specific vulnerability in our trust assessment systems. Diversity of perspectives within institutions serves as a built-in error correction mechanism, allowing weaknesses in reasoning or evidence to be identified and addressed.

By understanding how our open vigilance mechanisms function, we can develop more effective approaches to combating misconceptions and building trustworthy information ecosystems. Rather than assuming people are gullible and need to be protected from manipulation, we should recognize their sophisticated evaluation capacities and help them apply these capacities more effectively across contexts.

Summary

The evidence overwhelmingly suggests that humans are not inherently gullible but possess sophisticated cognitive mechanisms for evaluating communication. These mechanisms of "open vigilance" allow us to be receptive to valuable information while remaining vigilant against deception and manipulation. They explain why mass persuasion attempts - from political propaganda to commercial advertising - typically fail unless they align with people's existing beliefs or interests. When misconceptions persist, it's not primarily because people are too credulous but because specific social and cognitive factors override or circumvent our natural vigilance mechanisms.

This understanding fundamentally changes how we should approach information problems in society. Rather than assuming people need to be protected from their own gullibility, we should focus on creating conditions that support our natural vigilance mechanisms. This means designing institutions that promote transparency, accountability, and diverse information sources; creating opportunities for meaningful argumentation and discussion; and recognizing that effective communication works with rather than against our sophisticated evaluation capacities. By aligning our information ecosystems with how humans naturally process communication, we can build more robust defenses against misinformation while preserving the openness that allows valuable knowledge to spread.


Review Summary

Strengths: The reviewer appreciates the book for providing new insights into human gullibility and information processing, even as a professional in the field. The book offers a deeper understanding of mass persuasion, the viral nature of fake news, and trust mechanisms.

Weaknesses: The reviewer disagrees with the author's attempt to redefine "gullibility," indicating a shift in the argument that they find problematic.

Overall Sentiment: Enthusiastic

Key Takeaway: The book challenges the notion that people are inherently gullible, offering a nuanced perspective on human cognition and information processing, though it may attempt to redefine key concepts like gullibility in ways that not all readers will agree with.

About Author


Hugo Mercier

