
How Language Works
How Babies Babble, Words Change Meaning, and Languages Live or Die
Categories
Nonfiction, Psychology, Science, History, Education, Writing, Reference, Linguistics, Adult, Language
Content Type
Book
Binding
Hardcover
Year
2006
Publisher
Harry N. Abrams
Language
English
ISBN13
9781585678488
How Language Works Summary
Introduction
Have you ever wondered why a simple string of sounds can make you laugh, cry, or change your entire worldview? Language is perhaps humanity's most extraordinary superpower—one we use so effortlessly that we rarely pause to marvel at its complexity. Every day, we transform abstract thoughts into sound waves that travel through air, enter another person's ears, and somehow recreate those same thoughts in their mind. This seemingly magical process happens billions of times across our planet each day, enabling everything from casual conversations to scientific breakthroughs.
The science of human communication reveals fascinating insights about our brains, our social nature, and our evolutionary history. When we understand language, we gain a window into how our minds organize information, how cultures transmit knowledge across generations, and how our species evolved the unique ability to share complex ideas. Throughout this exploration, we'll discover why children can master intricate grammar rules without formal instruction, how sign languages reveal the brain's remarkable flexibility, and why even our text messages and emojis follow sophisticated linguistic patterns that would have astonished our ancestors.
Chapter 1: The Building Blocks: Sounds, Words, and Grammar
Language begins with sounds—vibrations traveling through air that our brains interpret as meaningful signals. Human languages use approximately 600 possible speech sounds, though any single language typically employs only 40-60 of these possibilities. English uses about 44 distinct sounds, or phonemes, including consonants like /p/ and /b/ and vowels like /i/ and /e/. What makes these sounds special is that they're contrastive—changing just one phoneme can completely alter meaning, as in "pat" versus "bat." Our vocal apparatus—lungs, vocal cords, tongue, teeth, and lips—works together with remarkable precision to produce these sounds, allowing us to communicate with extraordinary subtlety.
These sounds combine to form words, the vocabulary items we store in our mental dictionaries. The average adult English speaker knows between 20,000 and 35,000 words, each associated with both a pronunciation pattern and a meaning. Words can be simple (dog, run) or complex (unhappiness, rethinking). Languages build words differently—English tends to create new words by combining existing ones, while languages like Turkish build complex words by adding numerous suffixes to a root. For instance, the single Turkish word "evlerinizden" translates to the English phrase "from your houses," demonstrating how languages can package meaning in dramatically different ways.
Grammar provides the rules for combining words into meaningful sentences. These rules include syntax (sentence structure) and morphology (word formation). English syntax generally follows a Subject-Verb-Object pattern ("The dog chased the cat"), while languages like Japanese use Subject-Object-Verb ("The dog the cat chased"). Grammar rules operate largely unconsciously—native speakers can recognize incorrect sentences without necessarily being able to explain why they're wrong. This implicit knowledge develops naturally in children without formal instruction, suggesting some aspects of grammar may be hardwired into our brains.
What's fascinating is that these building blocks are both universal and diverse. All human languages have sounds, words, and grammar, yet the specific inventory of sounds, vocabulary items, and grammatical rules varies tremendously across the world's 7,000+ languages. This combination of universal structure with cultural variation reflects both our shared cognitive abilities and our diverse human experiences. No language is more "primitive" or "advanced" than another—each represents a complete and sophisticated system for communication.
The building blocks of language work together in a hierarchical system. We combine phonemes to form words, words to form phrases, and phrases to form sentences. This allows us to create an infinite number of meaningful utterances from a finite set of elements—a property linguists call "discrete infinity" that makes human language uniquely powerful as a communication system. This creative aspect of language distinguishes human communication from even the most complex animal communication systems, which typically consist of a limited set of signals with fixed meanings.
Chapter 2: The Brain's Language Network: Neural Processing
The human brain processes language through a remarkable network of specialized regions working in concert. When you hear someone speak, sound waves enter your ears and are transformed into neural signals that travel to your temporal lobe. Here, in an area called Wernicke's area, your brain begins decoding these signals into meaningful linguistic units. Meanwhile, Broca's area in the frontal lobe helps with speech production and grammar processing. These two classical language areas, connected by a neural pathway called the arcuate fasciculus, form the core of the brain's language network.
Modern neuroscience has revealed that language processing is far more distributed throughout the brain than previously thought. It engages regions responsible for motor control (when we speak or sign), visual processing (when we read), memory (to retrieve word meanings), and emotional processing (to understand tone and connotation). This distributed network explains why language feels so integrated with our thoughts and emotions—it literally activates many of the same brain regions involved in other cognitive processes. The speed of this processing is astonishing: your brain can recognize a spoken word within 200 milliseconds, even distinguishing it from similar-sounding words like "cap" versus "cat."
The brain handles different aspects of language simultaneously through parallel processing. As you hear a sentence, your brain is analyzing its sounds, identifying words, parsing grammatical structure, and extracting meaning—all while predicting what might come next. This predictive aspect of language processing explains why you can often complete someone else's sentence or why reading becomes difficult when text doesn't follow expected patterns. Your brain is constantly generating expectations based on context, grammar, and your knowledge of the world, allowing for rapid comprehension even when input is noisy or incomplete.
Interestingly, the brain processes written and spoken language somewhat differently. Reading, a relatively recent human invention, repurposes brain circuits that evolved for object recognition. A region called the Visual Word Form Area becomes specialized for recognizing written words through years of practice. This explains why learning to read requires explicit instruction, unlike spoken language, which most children acquire naturally. The brain must develop new neural pathways connecting visual processing regions with language areas—a remarkable example of neural plasticity, the brain's ability to reorganize itself in response to new demands.
Bilingual brains show fascinating adaptations to managing multiple languages. Early research suggested separate brain areas for each language, but current evidence indicates overlapping networks with subtle differences in activation patterns. Bilinguals develop enhanced executive control mechanisms that help them select the appropriate language and suppress the non-target language. This neural flexibility may contribute to the cognitive advantages observed in bilinguals, including enhanced attention control, better task-switching abilities, and even delayed onset of dementia symptoms in older adults. The bilingual experience literally reshapes the brain's language network, demonstrating our remarkable neural adaptability.
Chapter 3: Language Acquisition: How We Learn to Communicate
Children acquire language with remarkable speed and precision, progressing from babbling to complex sentences within just a few years. This journey begins even before birth—fetuses can recognize their mother's voice and show preferences for the rhythms of their native language during the last trimester of pregnancy. After birth, infants demonstrate an extraordinary sensitivity to speech sounds, able to distinguish between all the phonetic contrasts used in the world's languages—a universal ability that gradually narrows as they become specialized listeners for their native language.
The first year of life establishes crucial foundations for language. Around 6-12 months, babies begin specializing in the sounds of their native language, becoming less sensitive to sound distinctions that don't matter in the languages they hear regularly. Simultaneously, they start babbling, producing consonant-vowel combinations like "ba-ba-ba" that practice the motor patterns needed for speech. This babbling becomes increasingly language-specific, as infants unconsciously imitate the sound patterns they hear around them. By their first birthday, many babies produce their first recognizable words, though they understand many more than they can say.
Word learning accelerates dramatically around 18 months in what researchers call the "vocabulary explosion." Children begin learning new words at a rate of several per day, using sophisticated cognitive abilities to infer meanings from context. They employ strategies like the "whole object assumption" (assuming a new word refers to an entire object rather than its parts) and "fast mapping" (connecting a new word to its meaning after just one or two exposures). By age 6, the average child knows approximately 10,000 words—requiring them to learn, on average, about 5-10 new words daily during these early years.
Grammar acquisition follows a predictable sequence across languages. Children first use single words, then two-word combinations ("more juice," "daddy go"), and gradually incorporate grammatical markers like plural endings and past tense. What's fascinating is that young children often overgeneralize grammatical rules, producing forms like "goed" instead of "went" or "mouses" instead of "mice." These creative errors demonstrate that children aren't simply imitating adults but actively constructing rule systems—they've learned the rule for forming the past tense (add -ed) but haven't yet learned the exceptions. This pattern of overgeneralization provides compelling evidence that language acquisition involves rule-learning, not just memorization.
The debate about whether language acquisition is primarily innate or learned continues, though most researchers now recognize the interaction between biological predispositions and environmental input. Children come equipped with specialized learning mechanisms that help them extract patterns from linguistic input, but they absolutely require exposure to language to develop normally. Cases of extreme language deprivation, though rare, show that children who aren't exposed to language during critical periods may never fully develop linguistic competence. This suggests a sensitive period for language acquisition, particularly for grammar, that gradually closes as children approach puberty.
Chapter 4: Sign Language: Visual-Spatial Communication Systems
Sign languages are complete, natural languages that use visual-gestural modalities instead of sound. Contrary to common misconception, they aren't universal—American Sign Language (ASL) differs significantly from British Sign Language (BSL), just as spoken English differs from Mandarin. Each sign language has evolved within its community and possesses its own grammar, syntax, and vocabulary. These languages aren't simplified versions of spoken languages but rich linguistic systems capable of expressing any concept, from concrete objects to abstract philosophy, poetry, humor, and metaphor.
The structure of sign languages reveals fascinating linguistic principles. Signs are composed of smaller units analogous to the phonemes of spoken language: handshape, location, movement, orientation, and non-manual features like facial expressions. Changing just one of these parameters can completely alter meaning, just as changing a sound in spoken language creates a different word. Sign languages use space in sophisticated ways that spoken languages cannot—establishing locations in the signing space to represent people or concepts, then referring back to those locations to track references throughout a conversation. This spatial grammar allows for complex expressions that would require multiple sentences in spoken languages.
Neuroscience research has revealed that sign languages activate the same core language areas in the brain as spoken languages, including Broca's and Wernicke's areas. This confirms that these brain regions are specialized for language processing regardless of modality, not just for speech. However, sign languages also engage visual-spatial processing regions more extensively than spoken languages do, demonstrating how the brain adapts its language network to different input types. This neural flexibility highlights the brain's remarkable capacity to develop language through different sensory channels.
Children acquire sign language following the same developmental milestones as spoken language when exposed from birth. Deaf children with deaf parents who use sign language babble with their hands, produce their first signs around the same age hearing children produce first words, and progress through similar grammatical stages. This parallel development provides powerful evidence that language acquisition reflects innate human capacities rather than being specific to speech. Importantly, deaf children who lack early exposure to sign language often experience significant language delays, underscoring the critical importance of early language access regardless of modality.
Sign languages have contributed enormously to our understanding of language evolution. Their emergence in deaf communities demonstrates how humans spontaneously create language when needed. Nicaraguan Sign Language, which developed in the 1980s when deaf children were brought together in schools for the first time, offers a rare opportunity to observe a language being born. Each successive generation of children has added complexity to the system, transforming what began as simple gestures into a full-fledged language within decades. This natural experiment provides compelling evidence for humans' innate capacity to create and structure language, even without an existing model to follow.
Chapter 5: Language Evolution: How Communication Changes
Languages constantly evolve, changing gradually but inevitably across generations. This evolution happens at every level—pronunciation, vocabulary, grammar, and meaning—and provides fascinating insights into both linguistic processes and human history. Sound changes often follow predictable patterns. The Great Vowel Shift that occurred in English between 1400 and 1700 CE systematically altered the pronunciation of long vowels, transforming words like "mice" (once pronounced "meece") and "house" (once closer to "hoose"). These changes didn't happen overnight but progressed gradually across generations, with younger speakers adopting slightly different pronunciations than their elders.
Vocabulary evolves through various mechanisms that reflect changing human needs and cultural contact. New words enter languages through borrowing (English has borrowed "tsunami" from Japanese and "algebra" from Arabic), coining new terms for new concepts (like "internet" or "selfie"), and repurposing existing words (like "mouse" for a computer device). Words can also fade from use—terms like "forsooth" and "betwixt" that were once common in English now feel archaic. Technological and cultural changes drive much vocabulary evolution—just consider how many new words have entered English in the smartphone era, from "doomscrolling" to "FOMO" to "ghosting."
Grammatical changes typically occur more slowly but can be profound. Old English (spoken before 1100 CE) was highly inflected, with nouns changing form based on their grammatical role, similar to modern German. Over centuries, English lost most of these inflections, compensating by developing stricter word order rules. This simplification accelerated after the Norman Conquest, when English became the language of common people while French dominated elite communication. Today, we can observe ongoing grammatical shifts, such as the increasing acceptance of "they" as a singular pronoun or the declining use of "whom" in everyday speech.
Social factors significantly influence the rate and direction of language change. Periods of intensive language contact, like conquests or mass migrations, typically accelerate change. The Norman invasion of England in 1066 transformed English vocabulary and grammar as French influences permeated the language. Similarly, globalization and digital communication are currently accelerating linguistic innovation worldwide, with English borrowing terms from diverse languages while simultaneously exporting technological vocabulary globally. Young people often lead language change, introducing innovations that may eventually become standard features if they spread throughout the speech community.
Language evolution also reveals the tension between two fundamental forces: the need for efficiency versus the need for clarity. Speakers tend to simplify pronunciation through processes like assimilation (making adjacent sounds more similar) and reduction (shortening words in predictable contexts). However, when these simplifications threaten comprehension, languages develop compensatory strategies to maintain clarity. This balance explains why languages don't simply become progressively shorter over time—they evolve in somewhat cyclical patterns, with periods of simplification followed by periods of elaboration. This dynamic equilibrium ensures that languages remain efficient yet sufficiently distinct to serve their primary purpose: effective communication.
Chapter 6: Digital Communication: Language in the Electronic Age
The digital revolution has transformed language in unprecedented ways, creating new linguistic forms and practices that would have been unimaginable just decades ago. Text messaging, social media, and instant messaging have spawned distinctive styles characterized by abbreviations (LOL, BRB), emoticons and emojis, deliberate misspellings, and creative punctuation. Rather than representing a degradation of language, as some feared, these innovations demonstrate language's remarkable adaptability to new communicative needs and technological constraints. Digital writing often attempts to capture features of spoken conversation—immediacy, informality, emotional tone—within text-based formats, creating what linguists call "written speech" or "spoken writing."
Digital platforms have accelerated language change. New words, expressions, and usage patterns can spread globally within days through social media, compared to the years or decades such changes might have taken in previous eras. Terms like "doomscrolling," "FOMO," and "to Google" emerged and became widely adopted with unprecedented speed. This acceleration challenges traditional language authorities and democratizes the process of language evolution, giving ordinary users more influence over linguistic norms. Young people, traditionally the drivers of language change, now have global platforms to spread their linguistic innovations, while previously isolated language communities can connect online, preserving and evolving their languages in digital spaces.
The constraints of different platforms shape linguistic innovation in fascinating ways. Twitter's character limits spawned creative abbreviations and syntactic compressions, while Snapchat's ephemeral nature encouraged more casual, unedited communication. Voice-to-text technologies introduce their own distinctive patterns and errors. Each technological environment creates selection pressures that favor certain linguistic features over others, demonstrating how language adapts to its medium. Emojis represent a particularly interesting development—a partially standardized system of visual communication that complements text, adding emotional nuance and sometimes replacing words entirely in a way that crosses language boundaries.
Multilingualism flourishes online in new ways. Code-switching (alternating between languages) has become common in digital communication among bilinguals, with distinct patterns emerging for when and how languages are mixed. Many users develop digital repertoires that include elements from multiple languages even when they aren't fluent in all of them. English loanwords appear frequently in online communication in other languages, but local languages also influence global internet English. This digital multilingualism creates new hybrid forms that reflect our increasingly interconnected world while preserving aspects of linguistic diversity.
Artificial intelligence is revolutionizing language technology, with profound implications for how we communicate. Machine translation systems like Google Translate, powered by neural networks trained on vast text corpora, now provide serviceable translations between hundreds of language pairs. While not perfect, these systems have made cross-language communication accessible to millions who previously faced insurmountable language barriers.
Similarly, voice assistants, chatbots, and large language models are creating new interfaces between humans and machines based on natural language rather than specialized computer commands. These developments raise fascinating questions about the future relationship between human and machine language processing, as we increasingly communicate with and through artificial systems that process language differently than human brains do.
Summary
The science of human communication reveals language as both a biological miracle and a cultural achievement. Our brains come equipped with specialized networks that process language with remarkable efficiency, allowing us to master complex grammatical systems largely unconsciously. Yet language is also deeply social, with our speech patterns reflecting and constructing our identities while following sophisticated conversational rules. This dual nature—biological capacity meets cultural creation—makes language distinctively human and explains both the universal patterns found across all languages and the rich diversity of the world's communication systems.
As we look to the future, language continues evolving in response to technological and social changes. Digital communication creates new forms of expression, while artificial intelligence increasingly mediates our linguistic interactions. Yet the fundamental human capacity for language remains constant—our extraordinary ability to transform thoughts into words and back again, connecting minds across time and space. Understanding the science behind this process not only satisfies our intellectual curiosity but helps us appreciate what may be our species' defining characteristic: the ability to share our inner worlds through the remarkable code we call language.
Best Quote
“Language death is like no other form of disappearance. When people die, they leave signs of their presence in the world, in the form of their dwelling places, burial mounds, and artefacts - in a word, their archaeology. But spoken language leaves no archaeology. When a language dies, which has never been recorded, it is as if it has never been.” ― David Crystal, How Language Works: How Babies Babble, Words Change Meaning, and Languages Live or Die
Review Summary
Strengths: The review appreciates the book's comprehensive exploration of various linguistic topics without adhering to a singular "Big Idea" or grand thesis. The author, David Crystal, is recognized for his extensive knowledge and lifelong engagement with language, offering a broad summary of his insights.
Weaknesses: The review suggests a potential weakness in the book's approach, noting that each topic is given limited space, which might be insufficient for in-depth exploration. This could be seen as a drawback for readers seeking detailed analysis of specific linguistic subjects.
Overall Sentiment: Mixed. The reviewer acknowledges the value in the book's broad scope and Crystal's expertise but also notes the limitations of covering such expansive topics in a concise manner.
Key Takeaway: The book serves as a wide-ranging overview of linguistic concepts, offering insights across various topics without centering on a singular, unifying theory, which may appeal to readers interested in a general survey of language rather than an in-depth analysis of a specific idea.

How Language Works
By David Crystal