
Rationality

What It Is, Why It's Scarce, and How to Get More

3.8 (6,052 ratings)
25 minutes read | Text | 9 key ideas
In a world tangled in misinformation and cognitive traps, Steven Pinker offers a beacon of clarity with "Rationality." This compelling narrative challenges the notion that humans are eternally shackled to primitive instincts, highlighting our remarkable capacity for reason. How can beings capable of decoding the universe's secrets fall prey to delusion? Pinker dissects this paradox with incisive wit, arguing that understanding rationality is key to navigating our complex modern landscape. As society teeters on decisions that shape our collective future, the book serves as a vital guide, empowering individuals to harness the power of logic and critical thinking. By exploring the tools that elevate human thought, Pinker not only defends our cognitive prowess but also arms us with the skills to foster progress and sustain the institutions that uphold our shared humanity.

Categories

Nonfiction, Self Help, Psychology, Philosophy, Science, History, Politics, Productivity, Audiobook, Sociology, Social Science

Content Type

Book

Binding

Hardcover

Year

2021

Publisher

Allen Lane

Language

English


Rationality Plot Summary

Introduction

Rationality represents one of humanity's most distinctive cognitive achievements, yet paradoxically remains widely misunderstood and frequently underutilized. At its core, rational thinking involves the systematic application of logical principles and evidence-based reasoning to form accurate beliefs and make effective decisions. However, this seemingly straightforward process faces significant challenges from our evolved psychology, which includes numerous cognitive biases and heuristics that can lead us astray when confronting complex problems requiring statistical thinking or logical analysis.

The exploration of rationality requires examining both its theoretical foundations and practical applications across diverse domains. By understanding the dual systems of human cognition, recognizing common cognitive biases, mastering Bayesian reasoning, distinguishing correlation from causation, applying signal detection theory, acknowledging the limits of intuition, and extending rational frameworks to moral questions, we gain powerful tools for improving our decision-making. This journey through the landscape of rational thought illuminates not just how we can think more clearly as individuals but how collective rationality might address the most pressing challenges facing humanity in an increasingly complex world.

Chapter 1: The Dual Nature of Human Reasoning Systems

Human cognition operates through two distinct yet complementary systems that shape how we perceive the world and make decisions. System 1 functions rapidly, automatically, and effortlessly, generating intuitive judgments based on pattern recognition and emotional responses. This system evolved to help our ancestors navigate dangerous environments where quick reactions could mean the difference between life and death. By contrast, System 2 operates slowly, deliberately, and with conscious effort, engaging in abstract reasoning, probability calculations, and logical analysis. This dual-process framework helps explain the paradoxical nature of human rationality—our capacity for remarkable intellectual achievements alongside persistent cognitive errors.

The interplay between these systems becomes evident in simple puzzles like the Cognitive Reflection Test. When asked, "A bat and ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?" many people immediately answer "10 cents." This intuitive response feels right but proves wrong upon reflection (the correct answer is 5 cents). Such errors reveal not inherent irrationality but the tendency of our intuitive system to dominate unless deliberately overridden by analytical thinking. Understanding this dynamic helps explain why intelligent people can hold irrational beliefs—their System 2 reasoning may be sophisticated but remains subordinate to System 1 intuitions in many contexts.

These reasoning systems evolved in environments vastly different from our modern world. Our ancestors faced immediate physical threats and social challenges within small groups, not complex statistical problems or abstract policy decisions affecting millions. Consequently, our intuitive system excels at social reasoning, threat detection, and navigating familiar environments but struggles with probability, statistics, and long-term planning. The analytical system developed more recently in evolutionary history and requires substantial energy and education to operate effectively. This explains why rational thinking often feels unnatural and demanding compared to intuitive judgments.

The relationship between these systems is not simply antagonistic, with reason struggling to overcome emotion. Rather, emotions provide essential information about our values and motivations—what we care about and why. Without emotional guidance, rational deliberation lacks direction and purpose, as seen in patients with certain brain injuries who can reason logically but make disastrous life decisions due to emotional disconnection. The challenge lies not in suppressing emotion but in integrating emotional wisdom with analytical reasoning to make decisions that effectively serve our goals and values.

Cultural and educational factors significantly influence how these systems interact. Some cultures emphasize analytical thinking and explicit reasoning, while others prioritize holistic thinking and intuitive wisdom. Educational systems similarly vary in their emphasis on critical thinking versus memorization and conformity. These differences shape not just what people believe but how they think about their beliefs—whether they subject intuitions to critical scrutiny or accept them as self-evident truths. Understanding these cultural variations helps explain why rationality manifests differently across societies while highlighting universal cognitive tendencies that transcend cultural boundaries.
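The bat-and-ball puzzle above can be checked with a few lines of arithmetic. The sketch below (in Python, working in cents to keep the numbers exact) simply verifies why the intuitive "10 cents" fails the stated constraints and what the deliberate System 2 calculation yields:

```python
# Constraints from the puzzle: bat + ball = 110 cents, bat = ball + 100 cents.
total, difference = 110, 100

# The intuitive answer ("10 cents") violates the total:
print(10 + (10 + 100))            # 120 cents, not 110

# Solving the two constraints deliberately:
ball = (total - difference) // 2  # 5 cents
bat = ball + difference           # 105 cents
print(ball, bat, ball + bat)      # 5 105 110
```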

Chapter 2: Cognitive Biases: Systematic Errors in Judgment

Cognitive biases represent systematic patterns of deviation from rationality in human judgment. Unlike random errors, these biases produce predictable distortions in how we perceive information, evaluate evidence, and make decisions. The availability heuristic illustrates this pattern—we judge the likelihood of events based on how easily examples come to mind. While this mental shortcut serves us well in environments where more common events are more easily recalled, it fails spectacularly when our information sources are biased. Media coverage that overrepresents dramatic but rare events like terrorist attacks or shark encounters leads people to overestimate these risks while underestimating more common dangers like heart disease or car accidents.

Confirmation bias further undermines rational thinking by leading us to seek, interpret, and remember information that confirms our existing beliefs while dismissing contradictory evidence. This tendency operates largely outside conscious awareness—we genuinely believe we're evaluating information objectively while unconsciously filtering our perceptions. In political contexts, this bias creates "echo chambers" where people expose themselves primarily to information that reinforces their existing views. Even when confronted with balanced information, people tend to find flaws in evidence that contradicts their position while uncritically accepting supporting evidence. This selective scrutiny explains why factual corrections often fail to change minds on politically charged issues.

Anchoring effects demonstrate how arbitrary information can dramatically influence judgment. When estimating uncertain quantities, people tend to adjust insufficiently from initial values, even when these anchors are obviously irrelevant. In one famous experiment, participants who saw the number 65 as a purported random number subsequently estimated the percentage of African nations in the UN at 45%, while those who saw the number 10 estimated only 25%. This effect persists even among experts making judgments in their domain of expertise. Real estate appraisers, for instance, are significantly influenced by suggested listing prices when estimating property values, despite their professional training and experience.

The conjunction fallacy reveals how intuitive judgments can violate basic probability laws. When presented with the famous "Linda problem"—where Linda is described as a politically active philosophy graduate—most people judge it more likely that she is "a bank teller who is active in the feminist movement" than simply "a bank teller." This judgment contradicts the mathematical certainty that a subset cannot be more probable than the larger set containing it. The error stems from confusing representativeness with probability, substituting the question "How well does Linda match my stereotype of a feminist bank teller?" for "What is the mathematical probability that Linda is a feminist bank teller?"

What makes cognitive biases particularly challenging is their persistence even after explanation. Unlike visual illusions, which we can recognize as illusions while still experiencing them, cognitive biases often feel like sound reasoning even when we intellectually understand their flawed nature. This metacognitive blindness—our difficulty in recognizing when our thinking has gone astray—represents perhaps the greatest obstacle to improving human rationality. Overcoming biases requires not just knowledge of their existence but developing habits of mind that counteract our natural tendencies, such as deliberately considering alternative perspectives, seeking disconfirming evidence, and applying formal decision procedures in high-stakes situations.
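The Linda problem comes down to one rule of probability: a conjunction can never be more probable than either of its parts. A minimal sketch of that rule, with made-up numbers purely for illustration:

```python
# P(teller and feminist) = P(teller) * P(feminist | teller), and since any
# conditional probability is at most 1, the conjunction can never exceed P(teller).
p_teller = 0.05                    # hypothetical chance Linda is a bank teller
p_feminist_given_teller = 0.95     # even if nearly all such tellers are feminists
p_teller_and_feminist = p_teller * p_feminist_given_teller

print(p_teller_and_feminist)               # 0.0475
print(p_teller_and_feminist <= p_teller)   # True, for any valid probabilities
```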

Chapter 3: Bayesian Reasoning: Updating Beliefs with Evidence

Bayesian reasoning provides a formal framework for updating beliefs in light of new evidence, offering both a descriptive model of ideal reasoning and a normative standard for rational belief formation. Named after Thomas Bayes, this approach quantifies how we should revise our confidence in a hypothesis when confronted with relevant data. The fundamental insight of Bayesian thinking is that rational belief formation involves starting with prior probabilities—our initial assessment of how likely something is—and then adjusting these beliefs proportionally based on how well they explain new observations.

The mathematical expression of Bayes' theorem reveals the precise relationship between prior beliefs and new evidence: the posterior odds equal the prior odds multiplied by the likelihood ratio (how much more likely the evidence would be if the hypothesis were true than if it were false). This formula captures the essence of rational belief updating—starting with what we already know, weighing new evidence according to its diagnosticity, and arriving at a revised belief that properly balances prior knowledge with new information. While the mathematics may seem abstract, the underlying principle reflects a deeply intuitive aspect of rational thinking—giving appropriate weight to both existing knowledge and new observations.

Medical diagnosis illustrates the practical importance of Bayesian reasoning. Consider a patient who tests positive for a disease that affects 1% of the population, using a test with 90% sensitivity (true positive rate) and 90% specificity (true negative rate). Many physicians mistakenly conclude the patient has a 90% chance of having the disease, focusing exclusively on the test's sensitivity. Proper Bayesian reasoning reveals the actual probability is only about 8.3%. This dramatic difference arises because the low base rate of the disease (the prior probability) substantially constrains the posterior probability despite the seemingly convincing positive test result. This example demonstrates why neglecting base rates leads to systematic errors in probabilistic reasoning across domains from medicine to law to everyday life.

The power of Bayesian reasoning extends beyond numerical calculations to qualitative principles that guide rational thinking. One such principle is that extraordinary claims require extraordinary evidence. Claims with low prior probability—such as telepathy, alien visitation, or revolutionary scientific discoveries that contradict established theories—require correspondingly strong evidence to overcome their initial implausibility. This principle explains why scientists rightly demand more rigorous evidence for studies reporting anomalous findings than for those confirming established theories. Far from representing closed-mindedness, this graduated skepticism reflects the mathematical logic of Bayesian updating.

Despite the framework's normative appeal, humans systematically deviate from Bayesian reasoning in predictable ways. Base-rate neglect represents one of the most common errors, as people focus on specific case information while ignoring relevant background statistics. The representativeness heuristic further compounds this error, as we judge probability based on similarity to prototypes rather than statistical likelihood. When told that Steve is "shy, withdrawn, and helpful, with little interest in the social world," people judge him more likely to be a librarian than a farmer, despite farmers vastly outnumbering librarians in the population. This pattern reveals how intuitive judgments substitute representativeness for probability, violating the principles of Bayesian reasoning.

Interestingly, these deviations from Bayesian norms diminish when information is presented in formats that align with how humans naturally process quantitative information. When probabilities are reframed as natural frequencies—"Out of 1,000 people, 10 have the disease; of these 10, 9 test positive"—even statistically naive individuals can perform Bayesian reasoning accurately. This suggests that our difficulty lies not in the underlying logic but in translating abstract probabilities into concrete representations that our minds can process effectively. This insight has important implications for education, risk communication, and the design of decision support systems that help people reason more rationally under uncertainty.
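The screening example above is easy to verify. A small sketch (using the figures from the text: 1% prevalence, 90% sensitivity, 90% specificity) computes the posterior both from Bayes' theorem and from the natural-frequency framing:

```python
def posterior_given_positive(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test, by Bayes' theorem."""
    true_positives = prevalence * sensitivity               # sick and flagged
    false_positives = (1 - prevalence) * (1 - specificity)  # healthy but flagged
    return true_positives / (true_positives + false_positives)

print(posterior_given_positive(0.01, 0.90, 0.90))  # ~0.083, not 0.90

# Natural frequencies tell the same story: of 1,000 people, 10 are sick and 9
# of them test positive; of the 990 healthy, 99 also test positive.
print(9 / (9 + 99))                                # ~0.083
```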

Chapter 4: Distinguishing Correlation from Causation

The distinction between correlation and causation represents one of the most fundamental principles in rational thinking, yet it remains persistently misunderstood. Correlation merely indicates that two variables tend to move together—when one changes, the other changes in a predictable way. Causation, by contrast, means that changes in one variable directly produce changes in another. This distinction proves crucial because interventions based on merely correlational evidence often fail or produce unexpected negative consequences. Understanding the gap between these concepts provides essential protection against misleading conclusions in domains from medicine to public policy to personal decision-making.

Several mechanisms can produce correlations between variables that have no causal relationship. Reverse causation occurs when we correctly identify a causal relationship but get the direction backward—for instance, assuming that exercise causes weight loss when weight loss might actually motivate exercise. Confounding variables create spurious correlations by independently affecting both observed variables. For example, ice cream sales and drowning deaths both increase in summer months, not because ice cream causes drowning but because a third factor—warm weather—influences both. Selection effects generate correlations when the process of data collection itself creates artificial relationships, as when studying only patients who seek treatment creates the illusion that a harmless condition causes symptoms.

The post hoc fallacy represents another common error, where temporal sequence is mistaken for causation. When event B follows event A, we naturally tend to assume A caused B. This explains why ineffective medical treatments often appear to work—people typically seek treatment when symptoms are at their worst, so natural regression to the mean creates the illusion of effectiveness. Similarly, superstitions persist because coincidental sequences get misinterpreted as causal relationships, especially when they trigger emotional responses that strengthen memory formation. These temporal associations feel intuitively compelling despite lacking causal mechanisms.

Statistical regression techniques can help disentangle correlation from causation by controlling for potential confounding variables. Multiple regression analysis allows researchers to estimate the relationship between two variables while mathematically adjusting for other factors that might influence the outcome. However, even sophisticated statistical methods cannot definitively establish causation from observational data alone. True causal inference requires either randomized controlled experiments or careful analysis of natural experiments where random assignment occurs through circumstance rather than design. These methodological constraints explain why causal claims based solely on observational studies should be treated with appropriate skepticism.

The cluster illusion further complicates our ability to distinguish meaningful patterns from random noise. Human minds excel at detecting patterns—an evolutionary advantage in many contexts—but this talent makes us prone to seeing correlations where none exist. Random processes naturally produce clusters and streaks that appear meaningful to pattern-seeking brains. This explains why people perceive "hot hands" in basketball, winning streaks in gambling, or meaningful coincidences in daily life that actually represent statistical inevitabilities given enough opportunities. Recognizing this tendency helps guard against inferring causal relationships from what may be merely random fluctuations.
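The ice-cream-and-drowning example can be made concrete with a tiny simulation: both variables are driven by a hidden common cause (temperature), so they correlate even though neither causes the other, and the association largely disappears once temperature is controlled for. The numbers below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(20, 8, 5_000)                   # hidden common cause
ice_cream = 2.0 * temperature + rng.normal(0, 5, 5_000)  # sales rise with heat
drownings = 0.5 * temperature + rng.normal(0, 5, 5_000)  # swimming rises with heat

# The raw correlation is clearly positive even with no causal link between them.
print(np.corrcoef(ice_cream, drownings)[0, 1])

def residuals(y, x):
    """What is left of y after removing its linear relationship with x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Controlling for the confounder: the residual correlation is near zero.
print(np.corrcoef(residuals(ice_cream, temperature),
                  residuals(drownings, temperature))[0, 1])
```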

Chapter 5: Signal Detection Theory in Everyday Decisions

Signal detection theory provides a powerful framework for understanding how we distinguish meaningful patterns from random noise under conditions of uncertainty. Originally developed for radar operators during World War II, this approach has profound implications for decision-making across domains ranging from medical diagnosis to judicial verdicts. The fundamental insight of signal detection theory is that any decision about ambiguous information involves navigating an unavoidable tradeoff between different types of errors—false positives and false negatives—that cannot be simultaneously minimized without improving the quality of the information itself.

When attempting to detect a signal—whether a tumor on a medical scan, evidence of guilt in a trial, or a genuine pattern in scientific data—we must establish some threshold of evidence that triggers a positive judgment. Setting this threshold involves balancing two competing risks: false alarms (Type I errors) occur when we mistakenly identify noise as signal, while misses (Type II errors) happen when we fail to detect a signal that actually exists. Crucially, these errors trade off against each other—reducing one type inevitably increases the other unless we can improve the underlying quality of our information. This fundamental constraint explains why perfect accuracy remains unattainable in many real-world decisions.

The optimal threshold depends on the relative costs of different errors in a particular context. In cancer screening, missing a tumor (a false negative) typically causes more harm than unnecessarily investigating a benign growth (a false positive), justifying a relatively low threshold that generates many false alarms. Conversely, in criminal justice, where false convictions are considered particularly egregious, we set a high threshold—"beyond reasonable doubt"—accepting that many guilty individuals will go free to minimize the risk of punishing the innocent. These different thresholds reflect not differences in the quality of evidence but different value judgments about the relative costs of errors.

Base rates—the underlying frequency of what we're trying to detect—critically influence signal detection decisions. When searching for rare phenomena like terrorist plots or rare diseases, even highly accurate detection methods will generate mostly false positives simply because true instances are so uncommon. This mathematical reality explains why mass surveillance programs and universal screening for rare conditions inevitably flag many innocent people or healthy individuals for each genuine case identified. Understanding this constraint helps explain why perfect security or perfect medical screening remains unattainable even with advanced technology.

Signal detection theory also illuminates scientific practice, particularly the concept of statistical significance. When researchers set a significance threshold (typically p < 0.05), they establish a decision rule that controls the rate of false positives under the assumption that no effect exists. However, this approach says nothing about the probability that a significant result represents a genuine effect rather than a statistical fluke. Understanding this distinction helps explain why many published findings fail to replicate—the scientific literature systematically selects for results that cross an arbitrary threshold, regardless of their underlying truth. This insight has prompted reforms in scientific practice, including pre-registration of studies and greater emphasis on effect sizes rather than mere statistical significance.

The principles of signal detection theory extend to everyday judgments about social interactions, consumer choices, and personal risk assessment. When deciding whether someone's ambiguous comment contained an insult, whether a product represents a good value, or whether a situation poses a genuine threat, we implicitly set thresholds that reflect our personal values and experiences. Individual differences in these thresholds help explain why people react differently to the same ambiguous situations—some consistently err toward suspicion and alarm, while others habitually give the benefit of the doubt. Recognizing these patterns in ourselves and others can reduce unnecessary conflict and improve decision-making under uncertainty.
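The false-alarm/miss tradeoff can be sketched with two overlapping distributions, one for noise alone and one for genuine signals. The distributions and threshold values below are arbitrary choices for illustration; the point is only that moving the threshold trades one error for the other:

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 100_000)    # readings when nothing is there
signal = rng.normal(1.5, 1.0, 100_000)   # readings when a real signal is present

for threshold in (0.5, 1.0, 1.5):
    false_alarms = np.mean(noise > threshold)   # noise mistaken for signal
    misses = np.mean(signal <= threshold)       # real signals overlooked
    print(f"threshold {threshold}: false alarms {false_alarms:.2f}, misses {misses:.2f}")

# Raising the threshold cuts false alarms but raises misses; only better
# separation of the two distributions improves both at once.
```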

Chapter 6: The Limits of Intuition in Complex Choices

Human intuition, while remarkably effective in familiar environments, systematically fails when confronting problems requiring statistical thinking or expected utility calculations. These limitations stem from our evolutionary history, which optimized decision-making for immediate survival challenges rather than abstract probabilistic reasoning. Understanding these intuitive shortcomings provides the foundation for more rational approaches to consequential decisions in modern contexts where our evolved psychology may lead us astray.

Prospect theory, developed by Kahneman and Tversky, reveals how our intuitive decisions systematically deviate from rational utility maximization. People evaluate outcomes not in absolute terms but as gains or losses relative to a reference point, typically the status quo. This framing dramatically influences decisions—we tend to be risk-averse when considering potential gains but risk-seeking when facing potential losses. Consequently, the same objective outcome can trigger opposite choices depending on how it's presented. When medical treatments are described in terms of survival rates, patients prefer different options than when the same treatments are described in terms of mortality rates, despite the mathematical equivalence of the information.

Our intuitive probability assessments suffer from numerous distortions. We overweight small probabilities (explaining the appeal of lotteries) while treating very high probabilities as certainties (explaining why we neglect precautions against unlikely but catastrophic events). The availability heuristic leads us to judge probability based on how easily examples come to mind, making vivid or recent events seem more likely than statistics would justify. This explains why people fear airplane crashes more than car accidents despite the vastly higher risk posed by driving. Similarly, we tend to neglect the role of random chance in producing outcomes, attributing success to skill and failure to external factors in ways that flatter our self-image.

Temporal discounting represents another significant limitation of intuitive decision-making. We naturally discount future consequences relative to immediate outcomes, often at inconsistent rates that lead to preference reversals over time. This explains why people make plans to save money or exercise next month but abandon these intentions when the time arrives. Such hyperbolic discounting creates a conflict between our present and future selves, with immediate gratification often winning at the expense of long-term welfare. This pattern helps explain persistent problems like insufficient retirement saving, environmental degradation, and addictive behaviors—all situations where immediate benefits come with delayed costs that intuitive decision-making tends to underweight.

The sunk cost fallacy further demonstrates intuition's limitations. Rational decision-making considers only future costs and benefits, disregarding resources already expended. Yet people routinely throw good money after bad, continue watching boring movies because they've already invested time, or persist with failing projects because of previous investments. This tendency to honor sunk costs likely evolved as a commitment mechanism in social contexts but leads to irrational resource allocation when applied to impersonal decisions. Recognizing when we're being influenced by sunk costs rather than future prospects can substantially improve decision quality.

These intuitive limitations don't imply that humans are hopelessly irrational. Rather, they suggest that consequential decisions benefit from structured analytical approaches that compensate for our natural biases. Expected utility calculations, decision trees, and other formal methods can supplement intuition, particularly for complex choices with significant stakes. By understanding when to trust our gut and when to override it with deliberate analysis, we can make better decisions across personal, professional, and policy domains. The most effective decision-makers develop metacognitive awareness that allows them to recognize situations where intuition is likely to mislead and apply appropriate analytical tools in those contexts.
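The preference reversals that hyperbolic discounting produces can be shown with a standard textbook formula, value = amount / (1 + k × delay). The amounts, delays, and discount rate below are arbitrary illustrative choices, not figures from the book:

```python
def hyperbolic_value(amount, delay_days, k=0.05):
    """Present value of a delayed reward under hyperbolic discounting."""
    return amount / (1 + k * delay_days)

# Viewed a year out, the larger-later reward ($150 in 395 days) beats the
# smaller-sooner one ($100 in 365 days)...
print(hyperbolic_value(100, 365), hyperbolic_value(150, 395))   # ~5.2 vs ~7.2

# ...but when the smaller reward becomes immediate, the preference flips.
print(hyperbolic_value(100, 0), hyperbolic_value(150, 30))      # 100 vs 60
```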

Chapter 7: Applying Rational Frameworks to Moral Questions

Moral reasoning presents unique challenges for rationality because it must integrate factual judgments about the world with value judgments about what ought to be. While facts can be empirically verified, values appear subjective and potentially beyond rational adjudication. However, rational frameworks can still meaningfully contribute to moral deliberation by clarifying concepts, identifying inconsistencies, and analyzing the consequences of different ethical positions. This approach doesn't eliminate the need for value judgments but provides tools for making these judgments more coherent and defensible.

The naturalistic fallacy—deriving "ought" from "is"—represents a fundamental logical error in moral reasoning. No amount of factual information about the world logically entails any particular moral conclusion without additional normative premises. Evolution may explain why certain moral intuitions emerged, but it cannot justify them. Similarly, appeals to what is "natural" provide no logical basis for moral judgments, as nature encompasses both cooperation and conflict, altruism and selfishness. Rational moral reasoning requires explicitly identifying the normative principles that bridge the gap between descriptive facts and prescriptive conclusions.

Impartiality serves as a cornerstone of rational ethical frameworks. The recognition that one's own interests have no special status from an objective viewpoint—that "I" am just one person among billions—provides a foundation for moral reasoning that transcends mere self-interest. This principle appears across diverse ethical traditions, from Kant's categorical imperative to Rawls's veil of ignorance to utilitarian impartiality regarding whose happiness counts. While impartiality doesn't resolve all moral questions, it establishes a minimal requirement for any system of ethics that claims rational justification.

Consequentialist reasoning evaluates actions based on their outcomes rather than intrinsic properties. This approach requires estimating the expected utility of different choices by multiplying the value of possible outcomes by their probabilities. Applied to moral dilemmas, consequentialism often yields counterintuitive conclusions that challenge conventional moral intuitions. For instance, it suggests that spending thousands of dollars on luxury goods while children die from preventable diseases represents a moral failure of resource allocation. These challenges highlight the tension between intuitive moral judgments and systematic ethical reasoning.

Taboo trade-offs occur when sacred values come into conflict with secular considerations, particularly market values. People resist explicitly weighing sacred values like human life, love, or religious commitments against monetary considerations, even though policy decisions implicitly make such trade-offs constantly. This resistance creates a paradox—we simultaneously demand that institutions protect sacred values at any cost while expecting them to use limited resources efficiently. Rational frameworks help navigate these tensions by making implicit trade-offs explicit, allowing for more transparent deliberation about genuinely difficult moral choices.

Game theory illuminates moral dilemmas involving multiple actors with interdependent choices. The Prisoner's Dilemma, Tragedy of the Commons, and other game structures explain why individually rational actions can produce collectively irrational outcomes. These models help explain why moral norms against free-riding, exploitation, and defection emerge across cultures, and why institutions that enforce cooperation often enhance welfare for all participants. By formalizing the structure of social dilemmas, game theory provides insights into both the evolution of moral intuitions and the design of institutions that align individual incentives with collective welfare.
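The Prisoner's Dilemma structure can be written out explicitly. The payoff numbers below are a conventional textbook choice, not Pinker's; they show how defection dominates for each player individually while mutual defection leaves both worse off than mutual cooperation:

```python
# Payoffs to "me" (higher is better) indexed by (my_choice, their_choice).
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

# Whatever the other player does, defecting pays me more...
for theirs in ("cooperate", "defect"):
    print(theirs, payoff[("defect", theirs)] > payoff[("cooperate", theirs)])  # True, True

# ...yet mutual defection (1 each) is worse for both than mutual cooperation (3 each).
print(payoff[("defect", "defect")], payoff[("cooperate", "cooperate")])
```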

Summary

Rationality represents humanity's most distinctive cognitive achievement and our most powerful tool for understanding and shaping our world. Through systematic application of logic, probability theory, and evidence-based reasoning, we can overcome the cognitive biases and heuristics that evolved for simpler environments but often mislead us in complex modern contexts. This pursuit requires both individual effort to recognize and correct our own thinking errors and collective institutions that harness diverse perspectives to compensate for individual limitations.

The greatest insight from examining rationality is that it represents not an unattainable ideal but a practical set of skills and habits that can be cultivated through deliberate practice and supportive environments. By understanding the predictable patterns in human reasoning—from base-rate neglect to confirmation bias to prospect theory—we gain the metacognitive awareness necessary to improve our decision-making. Similarly, by designing information environments and social institutions that work with rather than against our cognitive architecture, we can achieve collectively rational outcomes even with imperfectly rational individuals. The path toward greater rationality lies not in suppressing our humanity but in more fully understanding it, embracing both the remarkable capabilities and the systematic limitations that define how we think.

Best Quote

“Disagreement is necessary in deliberations among mortals. As the saying goes, the more we disagree, the more chance there is that at least one of us is right.” ― Steven Pinker, Rationality

Review Summary

Strengths: The book provides insightful discussions from psychological and sociological perspectives, particularly in the first and second-to-last chapters. It offers intriguing examples of suboptimal decision-making and an intuitive explanation of the Monty Hall problem. The concluding discussion on dividing personal knowledge into real and mythological zones is both fascinating and illuminating.

Weaknesses: The book primarily focuses on using probability and statistics to understand numerical implications, which was not what the reviewer expected. The content is technical and reads like a textbook, often presented in a heavily pedantic style with many terms and formulas.

Overall Sentiment: Mixed

Key Takeaway: While the book offers valuable insights into human irrationality and decision-making, its technical focus on probability and statistics may not align with all readers' expectations, resulting in a mixed reception.

About Author


Steven Pinker

Steven Arthur Pinker is a prominent Canadian-American experimental psychologist, cognitive scientist, and author of popular science. Pinker is known for his wide-ranging explorations of human nature and its relevance to language, history, morality, politics, and everyday life. He conducts research on language and cognition, writes for publications such as the New York Times, Time, and The New Republic, and is the author of numerous books, including The Language Instinct, How the Mind Works, Words and Rules, The Blank Slate, The Stuff of Thought, The Better Angels of Our Nature, The Sense of Style, and most recently, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.

He was born in Canada and graduated from Montreal's Dawson College in 1973. He received a bachelor's degree in experimental psychology from McGill University in 1976, and then went on to earn his doctorate in the same discipline at Harvard in 1979. He did research at the Massachusetts Institute of Technology (MIT) for a year, then became an assistant professor at Harvard and then Stanford University. From 1982 until 2003, Pinker taught at the Department of Brain and Cognitive Sciences at MIT, and eventually became the director of the Center for Cognitive Neuroscience, except for a one-year sabbatical at the University of California, Santa Barbara in 1995-96. As of 2008, he is the Johnstone Family Professor of Psychology at Harvard.

Pinker was named one of Time Magazine's 100 most influential people in the world in 2004 and one of Prospect and Foreign Policy's 100 top public intellectuals in 2005. He has also received honorary doctorates from the universities of Newcastle, Surrey, Tel Aviv, McGill, and the University of Tromsø, Norway. He was twice a finalist for the Pulitzer Prize, in 1998 and in 2003. In January 2005, Pinker defended Lawrence Summers, President of Harvard University, whose comments about the gender gap in mathematics and science angered much of the faculty. On May 13, 2006, Pinker received the American Humanist Association's Humanist of the Year award for his contributions to public understanding of human evolution. In 2007, he was invited on The Colbert Report and asked under pressure to sum up how the brain works in five words – Pinker answered "Brain cells fire in patterns."

Pinker was born into the English-speaking Jewish community of Montreal. He has said, "I was never religious in the theological sense... I never outgrew my conversion to atheism at 13, but at various times was a serious cultural Jew." As a teenager, he says he considered himself an anarchist until he witnessed civil unrest following a police strike in 1969. His father, a trained lawyer, first worked as a traveling salesman, while his mother was first a home-maker, then a guidance counselor and high-school vice-principal. He has two younger siblings. His brother is a policy analyst for the Canadian government. His sister, Susan Pinker, is a columnist for the Wall Street Journal and the author of The Sexual Paradox and The Village Effect.

Pinker married Nancy Etcoff in 1980 and they divorced in 1992; he married Ilavenil Subbiah in 1995 and they too divorced. He is married to the novelist and philosopher Rebecca Goldstein, the author of 10 books and winner of the National Humanities Medal. He has no children. His next book will take off from his research on "common knowledge" (knowing that everyone knows something).
Its tentative title is: Don't Go There: Common Knowledge and the Science of Civility, Hypocrisy, Outrage, and Taboo.


