
What Kind of Creatures Are We?
A deep exploration of human nature, language, and consciousness
Categories
Nonfiction, Psychology, Philosophy, Science, Politics, Anthropology, Audiobook, Sociology, Linguistics, Language
Content Type
Book
Binding
Hardcover
Year
2015
Publisher
Columbia University Press
Language
English
ASIN
0231175965
ISBN
0231175965
ISBN13
9780231175968
What Kind of Creatures Are We? Summary
Introduction
What kind of creatures are we? This seemingly simple question leads us into profound inquiries about human nature, cognition, and society. Through a series of interconnected explorations into language, understanding, and ethics, we encounter fundamental questions about our cognitive capacities, the nature of thought, and the social arrangements that promote human flourishing. The philosophical journey begins with language—that uniquely human capacity that shapes our thinking and defines our species. Moving beyond traditional conceptions, we examine the biological foundations of human cognition, exploring both its remarkable scope and inherent limitations. This naturalistic approach to philosophical questions yields surprising insights about consciousness, knowledge, and the nature of reality itself. Finally, this examination of human nature extends to political and social domains, asking what social arrangements best serve human development and the common good. These investigations not only challenge conventional wisdom but also offer a coherent framework for understanding ourselves as biological creatures with remarkable cognitive and moral capacities.
Chapter 1: Language as Computational System: Beyond Communication
Human language represents one of the most distinctive aspects of our species. While traditional views have often characterized language primarily as a tool for communication, a deeper analysis reveals it to be fundamentally a computational system for thought. This perspective shifts our understanding away from externalized speech or writing toward the internal cognitive operations that make language possible. At its core, language exhibits what might be called the Basic Property: each language provides an unbounded array of hierarchically structured expressions that receive interpretations at two interfaces—the sensorimotor interface for externalization and the conceptual-intentional interface for mental processes. This computational system operates through principles of minimal computation, with the simplest operation being "Merge," which takes objects already constructed and forms new objects without modifying the original elements. Crucially, investigation reveals that the rules governing language invariably depend on structural relationships rather than linear order. When forming questions, for example, we intuitively follow structural principles rather than simple linear distance, despite the latter being computationally simpler. Neuroscientific evidence supports this view, showing that the brain processes language according to structural, not linear, principles. This suggests that linear order is merely peripheral to language, a necessary accommodation to the sensorimotor system's requirement for sequential output. The implications of this computational view are profound. If language is primarily a system for structuring thought, with externalization being secondary, then communication represents a derivative rather than essential function. Most language use occurs internally as a kind of dialogue with oneself, rarely reaching the external world. 
This challenges the common assumption that language evolved primarily for communication, suggesting instead that it emerged as an instrument of thought—a cognitive leap that dramatically expanded human mental capacities. This perspective aligns with evidence suggesting language emerged suddenly in human evolution, perhaps through a slight genetic mutation that yielded the Merge operation, providing the foundation for unbounded and creative thought. The resulting system follows principles of optimal design for thought, even when these conflict with ease of communication—further evidence that language is fundamentally a cognitive rather than communicative system.
Chapter 2: The Basic Property and Computational Procedure of Language
The core computational procedure underlying language provides remarkable insights into how our minds work. The mechanism of Merge, in its simplest form, takes two syntactic objects and combines them to create a new object without altering the original elements. This operation comes in two varieties: External Merge combines distinct objects, while Internal Merge applies to elements within an already formed structure, creating what linguists call "displacement"—phrases interpreted in one position but pronounced in another. What makes this system particularly revealing is its adherence to principles of minimal computation. Internal Merge creates copies of elements, yielding structures that are optimal for semantic interpretation but problematic for pronunciation. Universal principles of language design resolve this by pronouncing only one copy while deleting others during externalization. This creates difficulties for language processing and communication, yet these inefficiencies are consistently tolerated in language design—strong evidence that computational simplicity takes precedence over communicative efficiency. The structure-dependence of language operations provides another compelling example. Sentences like "Instinctively, eagles that fly swim" demonstrate that linguistic rules invariably refer to structural positions rather than linear proximity. The adverb "instinctively" modifies "swim," not the closer verb "fly." This pattern holds across all languages and constructions, despite linear distance being computationally simpler. Children acquire these complex principles without explicit instruction, suggesting they are part of our innate language faculty. This theoretical framework allows us to reinterpret apparent "imperfections" in language as consequences of optimal computational design. Displacement, once considered a strange property requiring explanation, emerges naturally from the simplest formulation of Merge. 
Similarly, seemingly arbitrary constraints on extraction (so-called "islands") likely reflect computational efficiency rather than arbitrary limitations. The minimalist approach to language thus provides not only a deeper understanding of linguistic phenomena but also insights into the nature of human cognition itself. It suggests that language is structured by principles of minimal computation, with externalization—and therefore communication—being secondary to its primary function as an instrument of thought. This perspective aligns with evidence from comparative biology, language acquisition, and neuroscience, offering a cohesive account of a defining human capacity.
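The Merge mechanism described above can be sketched as a small toy model, treating syntactic objects as unordered sets so that hierarchy, not linear order, is what the system builds. This is an illustrative sketch only; the function names (`merge`, `internal_merge`, `contains`) and the set-based encoding are my own choices, not notation from the book:

```python
# Toy model of Merge as unordered set formation (illustrative sketch,
# not Chomsky's formal notation).

def merge(x, y):
    """External Merge: form a new object {x, y} from two distinct
    syntactic objects, without modifying either original element."""
    return frozenset((x, y))

def internal_merge(root, constituent):
    """Internal Merge: re-merge a constituent already contained in
    `root` with `root` itself, yielding two 'copies' of the
    constituent (one at the edge, one in its original position)."""
    return frozenset((constituent, root))

def contains(obj, item):
    """Check whether `item` occurs anywhere inside the hierarchy."""
    if obj == item:
        return True
    if isinstance(obj, frozenset):
        return any(contains(member, item) for member in obj)
    return False

# Build a structure for "John read what", then displace "what":
vp = merge("read", "what")                 # {read, what}
clause = merge("John", vp)                 # {John, {read, what}}
question = internal_merge(clause, "what")  # {what, {John, {read, what}}}

# "what" now occupies two structural positions. On externalization only
# the higher copy would be pronounced ("what John read ..."), while
# semantic interpretation has access to both positions.
assert contains(question, "what")
assert clause == merge("John", vp)  # originals remain untouched
```

Using unordered sets makes the point from the text concrete: linear order is simply absent from the objects the core system constructs, and must be imposed separately when structures are externalized for the sensorimotor system.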
Chapter 3: Human Cognition: Problems, Mysteries, and Biological Limitations
Human cognitive abilities exist within a framework of both scope and limits, a relationship that reflects our nature as biological organisms. Just as physical capacities like walking enable certain movements while precluding others, our cognitive architecture both enables and constrains what we can understand. This perspective on cognition represents what might be called "mysterianism"—not as a defeatist position, but as recognition of our biological nature. Drawing on Charles Sanders Peirce's account of abduction, we can understand human cognition as a biological system providing a limited array of "admissible hypotheses" that form the foundation of scientific inquiry. These innate structures determine not only what we can understand but even what questions we can meaningfully formulate. Questions that fall within our cognitive capacities represent "problems," while those that exceed these capacities constitute "mysteries." This distinction is organism-relative: what remains mysterious to humans might be transparent to differently constituted minds. This perspective aligns with observations from neuroscience. The human brain, like other biological systems, has evolved specialized neural structures with specific capacities and limitations. Even seemingly elementary aspects of neural computation remain poorly understood—we do not yet know the "basic instruction set" of the nervous system or how it computes at the cellular and molecular level. These gaps in understanding extend to higher cognitive functions, with consciousness being merely one among many challenging domains. The atoms of human linguistic and cognitive computation reveal further limitations. Unlike animal communication systems, where signals directly link to external objects, human concepts lack straightforward referential properties. 
Words do not simply pick out mind-independent entities—even seemingly concrete terms like "house" or abstract concepts like "person" involve intricate mental constructions. This suggests that natural language lacks referential semantics in the conventional sense, possessing instead syntax (internal symbol manipulation) and pragmatics (modes of use). Recognizing these cognitive limitations carries significant implications. It suggests that certain questions might permanently exceed human understanding—not because of current technological or methodological shortcomings, but because of the inherent structure of our minds. However, this recognition need not lead to pessimism. The very limitations that constrain our understanding also enable our remarkable cognitive achievements. Without constraints on possible hypotheses, cognition would lack structure and direction. Like the genetic constraints that guide an organism's development into a specific form, cognitive constraints provide the foundation for our distinctive mental capacities.
Chapter 4: The Science-Philosophy Boundary: Post-Newtonian Dilemmas
The scientific revolution fundamentally transformed our understanding of knowledge and its limits. When Newton demonstrated that objects could interact at a distance without contact—contradicting the mechanical philosophy that had defined early modern science—he confronted his contemporaries with a profound intellectual crisis. The very notion of what constituted scientific understanding had to be reconsidered. Newton himself considered action at a distance "inconceivable" and "absurd," yet his mathematical principles accurately described gravitational phenomena. This created a tension between the quest for intelligibility and the success of his theory. The resolution involved a significant shift in scientific ambition: rather than seeking ultimate explanations that render the world intelligible, science would aim for intelligible theories about the world. This distinction marks a crucial turning point in the history of scientific thought. The consequences of this shift extended far beyond physics. Hume observed that while Newton had "drawn the veil from some of the mysteries of nature," he had simultaneously restored "Nature's ultimate secrets to that obscurity, in which they ever did and ever will remain." This recognition of permanent mysteries-for-humans marks an intellectual humility grounded in an understanding of our cognitive limitations rather than merely temporary gaps in knowledge. The mechanical philosophy had provided a reassuring framework where objects interacted only through direct contact, aligning with our intuitive understanding of causality. Its collapse required accepting that certain aspects of reality might remain unintelligible to us. As Bertrand Russell later articulated, physics could only hope to discover "the causal skeleton of the world," while aspects of experience lay beyond its purview. This perspective acknowledges both the power and limitations of scientific inquiry. 
This historical episode provides valuable lessons for contemporary philosophy of mind. Just as Newton's contemporaries struggled to conceive how objects could interact without contact, we struggle to understand how consciousness relates to physical processes. But rather than indicating an insurmountable explanatory gap, this difficulty may simply reflect the limitations of our conceptual framework. The "hard problem" of consciousness takes its place alongside other historical "hard problems" that were never solved but eventually transcended through conceptual evolution. The Newtonian revolution thus teaches us to be cautious about what appears inconceivable. The history of science suggests that when confronted with apparent mysteries, we should neither abandon inquiry nor insist that our current conceptual framework must be adequate to all phenomena. Instead, we should pursue intelligible theories while remaining open to fundamental reconceptualizations of what constitutes explanation.
Chapter 5: Challenging Mind-Body Dualism: From Newton to Contemporary Neuroscience
The collapse of the mechanical philosophy in the wake of Newton's discoveries had profound implications for understanding the relationship between mind and matter. When Newton demonstrated that objects could interact at a distance without mechanical contact, he undermined not just one aspect of scientific thinking but the entire conceptual foundation of early modern science. The material world no longer conformed to intuitive notions of causality, raising the question: if matter itself had become mysterious, what grounds remained for distinguishing it from mind? John Locke recognized this implication, suggesting that if God had added the inconceivable property of gravitational attraction to matter, he might similarly have "superadded" the capacity for thought. This insight, developed through the eighteenth century, culminated in Joseph Priestley's work arguing that the Newtonian revolution had eliminated any coherent notion of material substance that could stand in opposition to mind. As Priestley observed, once "solidity had apparently so very little to do in the system," there remained no reason to suppose that "the principle of thought or sensation [is] incompatible with matter." This historical perspective reveals a curious irony in contemporary philosophy of mind. Many modern discussions proceed as if we have a clear conception of physical reality against which mental phenomena appear anomalous. Yet as Bertrand Russell argued, physics only describes the "causal skeleton of the world," leaving open the intrinsic nature of what it describes mathematically. There is therefore no compelling reason to think that conscious experience cannot be physical—we simply lack understanding of what physical reality fundamentally is. Contemporary neuroscience often rediscovers these insights, sometimes presenting them as revolutionary new ideas. 
Vernon Mountcastle, summarizing research from the Decade of the Brain, described minds as "emergent properties of brains" produced by principles "we do not yet understand"—echoing almost verbatim formulations from centuries earlier. Similarly, when Francis Crick announced his "astonishing hypothesis" that mental states are "the behavior of a vast assembly of nerve cells," he was restating conclusions that had seemed nearly inescapable after Newton. The mind-body problem as traditionally conceived may thus be largely misconceived. Gilbert Ryle famously ridiculed Cartesian dualism as positing a "ghost in the machine," but the Newtonian revolution actually exorcised the machine while leaving the ghost intact. What remains is not an explanatory gap between two well-understood domains but an incomplete understanding of a single domain whose nature we have only begun to comprehend. This perspective shifts our approach from trying to bridge an allegedly unbridgeable divide to developing more adequate conceptions of the natural world that can accommodate the full range of phenomena, including consciousness.
Chapter 6: Pragmatism in Scientific Inquiry: Accepting Explanatory Gaps
Scientific progress often involves acknowledging rather than resolving certain explanatory gaps. This pragmatic approach has repeatedly proven fruitful across disciplines, offering valuable lessons for understanding phenomena ranging from consciousness to language. Historical examples demonstrate how insistence on complete explanation can impede rather than advance understanding. Chemistry provides a particularly instructive case. For much of its history, chemistry developed largely independently from physics, despite attempts to reduce chemical phenomena to physical laws. In the eighteenth century, Joseph Black advised that "chemical affinity be received as a first principle, which we cannot explain any more than Newton could explain gravitation," allowing chemists to build "a body of doctrine" without requiring reduction to underlying physical mechanisms. This approach proved remarkably successful. When John Dalton introduced his atomic theory, he advanced chemistry while explicitly rejecting the "philosophically more coherent" but less successful reductionist approaches of Newtonian physicists. Well into the twentieth century, prominent scientists viewed chemistry as providing mere calculating devices rather than genuine explanations of reality. August Kekulé doubted that "absolute constitutions of organic molecules could ever be given," while America's first Nobel Prize-winning chemist dismissed talk about the real nature of chemical bonds as metaphysical "twaddle." Yet these skeptical attitudes did not prevent chemistry from developing a rich theoretical framework that eventually facilitated its unification with physics through quantum theory—a unification that required not the reduction of chemistry to pre-existing physics but a radical reconceptualization of physics itself. This historical pattern suggests a productive approach to current challenges in cognitive science. 
Rather than demanding that theories of language, perception, or consciousness immediately reduce to neuroscience, researchers should develop "bodies of doctrine" in these domains while remaining open to eventual unification. The study of insect navigation, human language, or moral judgment can proceed legitimately even without complete neural explanations, just as chemistry advanced without reduction to classical physics. The pragmatic stance acknowledges that explanatory gaps may persist not because the phenomena themselves are mysterious but because our understanding of the presumed reduction base may be inadequate. Just as chemistry could not be reduced to classical physics because that physics itself was incomplete, cognitive phenomena may resist current neurobiological explanation not because they are non-physical but because our conception of the physical requires revision. This perspective encourages productive research while avoiding premature judgments about what can or cannot be explained. Such pragmatism should not be confused with defeatism. It simply recognizes that scientific progress often involves developing theories at multiple levels simultaneously, allowing connections to emerge as understanding deepens across domains. This approach has consistently proven more productive than insisting that every phenomenon immediately conform to current explanatory frameworks.
Chapter 7: Democracy and Common Good: Libertarian Socialist Perspective
The exploration of human nature extends beyond cognitive capacities to questions of social organization. What arrangements best serve human development and the common good? This inquiry reveals another aspect of what kind of creatures we are: social beings whose flourishing depends on collective institutions that either foster or hinder individual development. Classical liberal thinkers of the Enlightenment emphasized human development as a central value. John Stuart Mill, quoting Wilhelm von Humboldt, identified "the absolute and essential importance of human development in its richest diversity" as the "grand, leading principle" of liberty. Even Adam Smith, often associated exclusively with free markets, sharply criticized the division of labor for reducing workers to performing "a few simple operations" that render them "as stupid and ignorant as it is possible for a human creature to be." These concerns point to a conception of the common good centered on enabling the full development of human capacities. This tradition found expression in libertarian socialist thought, which Rudolf Rocker described as seeking "the free unhindered unfolding of all the individual and social forces in life." Unlike authoritarian socialism, this approach emphasizes direct democratic control of both political and economic institutions. It advocates worker ownership of productive enterprises, community self-governance, and the dismantling of hierarchies that cannot justify themselves. These principles connect classical liberalism with anarchist traditions that challenge both state power and private capital as potential sources of domination. John Dewey, while not an anarchist, expressed similar concerns. He argued that "power today resides in control of the means of production, exchange, publicity, transportation and communication," making politics "the shadow cast on society by big business." 
True democracy would require workers to become "the masters of their own industrial fate," not tools rented by employers nor directed by state authorities. Similarly, education should not train children to work "for the sake of the wage earned" but should encourage creativity, exploration, and independence. These ideals confront a very different conception of democracy prevalent in contemporary societies. The mainstream view, expressed by figures like Walter Lippmann, holds that the public are "ignorant and meddlesome outsiders" who must be kept as "spectators, not participants in action." This perspective traces back to the founding of the American republic, when James Madison designed constitutional checks to "protect the minority of the opulent against the majority." Contemporary evidence suggests this anti-democratic vision has largely succeeded, with studies showing that ordinary citizens have virtually no influence on policy while economic elites effectively determine legislation. The contrast between these visions—democracy as self-governance versus democracy as elite management—reflects fundamentally different understandings of human nature. The libertarian socialist tradition views ordinary people as capable of managing their own affairs, while the managerial conception sees them as requiring guidance from "responsible men." This philosophical disagreement has profound practical consequences for how we organize social institutions and understand the common good.
Summary
The philosophical inquiry into what kind of creatures we are yields profound insights across domains. Language emerges not primarily as a tool for communication but as a computational system for thought, adhering to principles of optimal design that prioritize mental operations over externalization. This computational perspective reveals how humans uniquely possess a finite biological system capable of generating infinite meaningful expressions—a capacity that likely emerged suddenly in our evolutionary history through minimal genetic changes. These investigations into language and cognition lead to a naturalistic understanding of knowledge and its limits. As biological organisms, humans possess cognitive capacities with both remarkable scope and inherent limitations. This perspective reconciles the seeming paradox that our intellectual achievements depend on the very constraints that make certain questions potentially unanswerable. Similarly, in social and political domains, human flourishing requires institutions that respect our nature as beings capable of self-development and self-governance. The common thread uniting these diverse inquiries is recognition that understanding ourselves as creatures—biological organisms with distinctive cognitive and social capacities—illuminates both our potential and limitations. Through this integrated approach to language, mind, and society, we gain not only philosophical clarity but practical wisdom about how to structure our individual and collective lives.
Best Quote
“So understood, anarchism is the inheritor of the classical liberal ideas that emerged from the Enlightenment. It is part of a broader range of libertarian socialist thought and action that ranges from the left anti-Bolshevik Marxism of Anton Pannekoek, Karl Korsch, Paul Mattick, and others, to the anarcho-syndicalism that crucially includes the practical achievements of revolutionary Spain in 1936, reaching further to worker-owned enterprises spreading today in the Rust Belt of the United States, in northern Mexico, in Egypt, and in many other countries, most extensively in the Basque country in Spain, also encompassing the many cooperative movements around the world and a good part of feminist and civil and human rights initiatives.” ― Noam Chomsky, What Kind of Creatures Are We?
Review Summary
Strengths: The book effectively explores the nature and limits of human language, cognition, and scientific knowledge. Despite its density, the key points about language being a unique human trait and its hierarchical design are clearly communicated. The inclusion of four essays provides a comprehensive examination of the topics.
Weaknesses: The book suffers from considerable repetition across essays, suggesting it could have been more concise. The dense, almost telegraphic style, particularly in the opening essay, may be challenging for readers unfamiliar with linguistic terms and acronyms, which are not adequately explained.
Overall Sentiment: Mixed
Key Takeaway: While the book provides valuable insights into human language and cognition, its repetitive and dense presentation may limit accessibility, suggesting a more focused approach could have enhanced clarity and engagement.

What Kind of Creatures Are We?
By Noam Chomsky