The New Breed

What Our History with Animals Reveals about Our Future with Robots

4.0 (396 ratings)
19-minute read | Text | 8 key ideas
In a future where technology intertwines with the fabric of daily life, "The New Breed" by Kate Darling challenges the notion of robots as mere job usurpers. This groundbreaking exploration draws parallels between our historic bond with animals and the potential of machines to become trusted allies rather than threats. With insight drawn from social, legal, and ethical perspectives, Darling suggests that robots may enhance our lives, echoing the roles animals have played in work and companionship throughout history. As we stand on the cusp of this new era, her analysis reveals how these mechanical companions might reshape our understanding of interaction—not only with nonhumans but with each other—offering a thought-provoking glimpse into a more harmonious coexistence.

Categories

Business, Nonfiction, Psychology, Philosophy, Science, History, Animals, Technology, Artificial Intelligence, Robots

Content Type

Book

Binding

Hardcover

Year

2021

Publisher

Henry Holt and Co.

Language

English

ISBN13

9781250296108

The New Breed Summary

Introduction

Our relationship with technology has long been dominated by narratives of replacement and competition—fears that robots will take our jobs, outperform our abilities, and eventually surpass us entirely. This framework not only creates unnecessary anxiety but fundamentally misunderstands the most productive ways to conceptualize and design robotic systems. By shifting our perspective from human replacement to animal partnership, we gain access to thousands of years of experience integrating non-human actors into our societies, economies, and lives.

Throughout history, humans have worked alongside animals that possess distinct capabilities—from oxen plowing fields to dogs guiding the blind—without viewing them as inferior humans or existential threats. This animal paradigm offers a refreshing alternative that transforms how we might approach robot design, integration, and governance. Rather than asking whether robots will replace us, we can explore how their unique abilities might complement our own, creating partnerships more productive than either could achieve alone. This conceptual shift opens new possibilities for technological development while addressing legitimate concerns about privacy, manipulation, and responsibility in more grounded, practical ways.

Chapter 1: Beyond Human Replacement: The Animal Paradigm for Understanding Robots

When we think about robots, our minds often jump to science fiction: humanoid machines that walk, talk, and think like us. This comparison to humans has dominated our conversations about robotics, shaping how we design them, how we integrate them into our workplaces, and how we imagine our future alongside them. The human-centric view creates false expectations and unnecessary anxieties about what robots are and what they could become.

The animal analogy offers a refreshing alternative that can transform how we think about robots. Throughout history, humans have used animals for work, weaponry, and companionship. Animals, like robots, can sense, make decisions, act on the world, and learn. Yet we don't expect animals to replace humans; instead, they supplement our abilities in ways that extend what we can accomplish. This complementary relationship provides a more accurate and productive model for understanding human-robot interaction. By shifting our perspective from seeing robots as quasi-humans to viewing them as a new breed of autonomous agents, more similar to animals in their relationship to us, we can break free from false determinism and discover more creative, beneficial ways to design and use this technology.

This paradigm shift isn't merely semantic—it fundamentally changes how we approach everything from robot design to policy development. The animal paradigm helps us recognize that robots, like animals, possess different forms of intelligence and capability than humans. A sheepdog excels at herding but can't shear wool; similarly, robots might excel at precision welding but struggle with tasks requiring contextual understanding. This perspective encourages us to design robots that complement human abilities rather than attempting to replicate them, potentially leading to more effective and innovative technological development.

This reframing also helps explain why robots often succeed in unexpected domains while failing at supposedly "easy" tasks. Walking on two legs, recognizing objects in varied environments, and manipulating unfamiliar items—all trivial for humans—remain challenging for robots. Yet they excel at tasks no human could accomplish, like deep-sea exploration or microsurgery. This mirrors how animals possess capabilities humans lack, from a dog's sense of smell to a bird's navigation abilities, while struggling with tasks humans find simple.

Chapter 2: Working Partners: How Animals and Robots Extend Human Capabilities

The history of human-animal partnerships provides a rich template for understanding how robots might integrate into our lives. Consider early agriculture, where the domestication of oxen and horses dramatically expanded human capabilities. These animals didn't replace farmers—they transformed what farming could accomplish. Their strength complemented human intelligence and dexterity, creating a partnership more productive than either could achieve alone. This complementary relationship offers a more accurate model for human-robot interaction than the replacement narrative.

Modern robotics follows similar patterns. Industrial robots excel at repetitive, precise tasks in controlled environments, while humans bring adaptability, judgment, and creativity. When Amazon attempted to fully automate its warehouses, it discovered that human-robot teams outperformed either working alone. The robots handled heavy lifting and long-distance transport, while humans managed complex picking and packing tasks requiring dexterity and decision-making. This division of labor mirrors how humans have worked with draft animals for millennia.

The animal paradigm also helps us understand why complete automation often fails. Tesla's "manufacturing hell" resulted from Elon Musk's attempt to create a fully automated factory, only to discover that humans were still essential for many tasks. Like animals, robots have specific capabilities and limitations: they might excel at precision welding but struggle with tasks requiring contextual understanding or fine motor control.

This perspective shifts our focus from replacement to enhancement. Throughout history, new technologies have changed the nature of work rather than eliminating it entirely. The introduction of ATMs actually increased bank employment as tellers shifted to relationship-based roles. Similarly, AI tools for radiologists don't replace medical professionals but help them work more efficiently and accurately. The key insight is that automation typically transforms jobs rather than eliminating them wholesale.

The dominant narrative around robots and work is one of replacement: "Will robots steal your job?" Headlines warn of impending mass unemployment as robots supposedly take over human tasks. This fear stems from our tendency to view robots as mechanical versions of ourselves, designed to do exactly what we do, only better, faster, and without complaint. However, this perspective misses the fundamental difference between human and robot intelligence. Robots excel at specific, well-defined tasks but struggle with context and adaptability. By viewing robots as partners with distinct capabilities rather than as inferior human substitutes, we can design more effective systems that leverage the strengths of both. This approach encourages us to ask not "Which jobs will robots take?" but rather "How can robots and humans work together to accomplish more than either could alone?"—a question that has guided human-animal partnerships for thousands of years.

Chapter 3: Companionship without Reciprocity: Social Bonds with Non-Human Entities

Humans have formed deep emotional bonds with animals for millennia, despite fundamental differences in consciousness and communication. This capacity for cross-species connection offers important insights into how we might relate to social robots. When people name their Roombas, mourn broken AIBOs, or hesitate to "hurt" robot pets, they're demonstrating the same psychological mechanisms that enable us to bond with animals—a tendency that runs deeper than rational understanding.

Research consistently shows that people respond socially to technology that exhibits even minimal lifelike cues. In studies, participants apologize to computers that "feel sad," hesitate to hit robots with hammers, and experience genuine distress when robot animals appear to suffer. These reactions occur even when people intellectually understand that the machines have no feelings. This mirrors how we relate to animals—we know a dog's emotional experience differs fundamentally from our own, yet we form authentic connections nonetheless.

The therapeutic applications of this phenomenon are particularly revealing. PARO, a robotic seal designed for dementia patients, reduces anxiety and improves mood similarly to animal therapy, but without the practical complications of live animals. Studies show that stroking PARO increases feelings of care and safety—the same effects observed with therapy dogs. This suggests that the benefits of certain social interactions may derive less from reciprocity than from our own capacity for empathy and projection.

Critics often dismiss these human-robot bonds as delusional or unhealthy substitutes for "real" relationships. However, similar criticisms were once leveled at pet ownership. Victorian-era psychologists pathologized emotional attachments to animals, yet contemporary research shows pet relationships generally complement rather than replace human connections. The evidence suggests that humans possess a remarkable capacity for diverse social bonds—with other humans, animals, and potentially robots—each serving different psychological needs.

When PARO was introduced to elderly patients with dementia, many observers expressed concern about replacing human care with robots. This reaction reveals a common fear—that social robots will substitute for human relationships. But this anxiety stems from our persistent comparison of robots to humans, rather than seeing them as a new category of relationship altogether. Social robots can provide unique benefits in health and education contexts, similar to therapy animals. In studies with children who have autism spectrum disorders, robots have proven remarkably effective at facilitating social interaction, serving as mediators that provide a less threatening way for children to practice communication skills.

The animal paradigm helps us recognize that non-reciprocal relationships can still have authentic value. A child reading to a therapy dog benefits even though the dog doesn't understand the story; similarly, an elderly person conversing with a social robot may gain genuine comfort despite the robot's limited comprehension. What matters is not whether the robot truly "understands," but whether the interaction meets human social and emotional needs in a meaningful way.

Chapter 4: The Ethics of Empathy: Emotional Responses to Robots and Animals

Our tendency to empathize with entities that merely simulate suffering raises profound ethical questions. When people refuse to "harm" robot animals in experiments or protest videos of robots being kicked, they're exhibiting the same empathic response that drives animal welfare concerns. This phenomenon challenges us to consider whether our ethical responsibilities extend to how we treat robots—not for the robots' sake, but for our own.

Throughout history, philosophers have debated whether cruelty toward animals is wrong because animals suffer or because such behavior corrupts human character. Kant argued that while animals lack moral standing, cruelty toward them "deadens the feeling of sympathy" necessary for moral relations with humans. This "indirect duty" argument finds new relevance with robots. If violent behavior toward lifelike robots might desensitize people to suffering generally, should we discourage such behavior regardless of whether robots themselves can feel?

The evidence on behavioral transfer remains mixed. Studies on violent video games show inconsistent results regarding whether virtual violence increases real-world aggression. However, the physical embodiment of robots may create stronger psychological effects than screen-based interactions. When researchers asked children to hold a hamster, a Barbie doll, and a Furby upside down, the children righted the hamster almost immediately (about 8 seconds) and the Furby after roughly a minute, yet held the inanimate doll upside down for more than 5 minutes. Their explanations—"I didn't want him to be scared"—reveal how embodied movement triggers empathic responses even when we know intellectually that an entity cannot suffer.

This empathic response isn't merely a cognitive error to be corrected. Our capacity to project feelings onto non-human entities has evolutionary roots and serves important social functions. Anthropomorphism helps us navigate complex social environments and may enhance our ability to form diverse relationships. Rather than dismissing it as irrational, we might view it as a feature of human psychology that can be channeled constructively.

The animal paradigm suggests a middle path between two extremes: treating robots as mere appliances versus granting them moral standing equivalent to living beings. Just as we've developed norms around animal treatment that acknowledge both their difference from humans and our ethical responsibilities toward them, we might develop contextual ethics for robots that recognize both their non-sentient nature and the psychological impact of how we treat them.

This approach shifts our focus from abstract questions about robot consciousness to practical considerations about human psychology. Rather than asking "Could robots ever deserve rights?" we might ask "How does our treatment of robots shape our character and communities?" This more grounded ethical framework acknowledges that while robots themselves may not suffer, our behavior toward them reflects and potentially influences our capacity for empathy more broadly.

Chapter 5: Responsibility and Rights: Legal Frameworks for Non-Human Actors

Legal systems have long grappled with non-human entities that act in the world. From ancient Mesopotamian laws about goring oxen to modern regulations on dangerous dog breeds, societies have developed frameworks for managing animals that cause harm or require protection. These precedents offer valuable models for addressing similar questions about robots—entities that act autonomously but lack human agency.

When animals cause harm, legal systems typically hold owners responsible rather than treating animals as moral agents. This approach acknowledges that while animals make choices, they lack the capacity to understand legal obligations. Similarly, when robots cause harm—whether a delivery bot collides with a pedestrian or an algorithm makes a discriminatory decision—holding the creators, owners, or operators responsible makes more sense than treating robots as legal persons. The concept of "noxal surrender" from Roman law, where owners could either pay damages or surrender the animal that caused harm, might inspire modern approaches to robot liability.

The animal paradigm also helps us navigate questions about protecting robots. Throughout history, animal protection laws have evolved from purely property-based considerations to acknowledging animals' capacity to suffer. However, these protections differ fundamentally from human rights—they're limited, contextual, and often based on human interests rather than inherent animal dignity. This middle ground between property and personhood offers a template for robot protections that might prevent abuse without the philosophical complications of robot "rights."

Our legal history with animals reveals that practical considerations often outweigh philosophical consistency. We protect dogs from cruelty while permitting the slaughter of equally sentient pigs because of our emotional connection to companion animals. Similarly, we might develop special protections for social robots that people form attachments to, not because these robots have moral standing but because such protections serve human psychological and social interests.

The Code of Hammurabi and the Book of Exodus, some of the earliest legal codes known to humanity, both addressed the case of the "goring ox"—establishing that if your ox killed someone unexpectedly, you weren't responsible, but if you knew the animal was dangerous and failed to take precautions, you were at fault. These ancient laws recognized that oxen could act autonomously while still placing appropriate responsibility on their owners. Throughout Western legal history, we've developed numerous approaches to animal-caused harm, implementing licensing requirements for certain animals, mandating insurance, and creating compensation systems when specific culprits couldn't be identified.

The animal model helps us understand why electronic personhood—granting robots legal status similar to corporations—might be problematic. Unlike corporations, which are ultimately composed of and controlled by humans with rights and responsibilities, autonomous robots lack the underlying human moral agency that legitimizes corporate personhood. Instead, treating robots as a special category of property with specific regulations—similar to how we treat animals—may better address the unique challenges they present.

Chapter 6: The Real Concerns: Privacy, Manipulation, and Design Biases

While science fiction often fixates on robot uprisings or human replacement, the animal paradigm directs our attention to more immediate concerns. Just as our relationships with animals have raised ethical issues around exploitation and selective breeding, our integration of robots presents challenges related to privacy, manipulation, and design biases that require immediate attention.

Privacy concerns emerge from robots' unique combination of physical presence and data collection capabilities. Unlike animals that "cannot tell," social robots record conversations, track behaviors, and often transmit this data to corporate servers. When children confide in robot companions or elderly patients interact with therapeutic robots, they may not realize their intimate moments are being monitored. This creates vulnerabilities different from both animal companions (which don't record data) and screen-based technologies (which lack the physical presence that encourages trust and disclosure).

The potential for emotional manipulation represents another pressing concern. Research shows people form stronger attachments to physical robots than screen-based agents, making them more susceptible to influence. When a robot companion that a child has bonded with suddenly requires an expensive subscription renewal, the emotional leverage creates problematic dynamics. This mirrors how pet industry marketing exploits our attachments to animals, but with potentially greater precision and scale. The question isn't whether robots will replace human relationships, but whether companies will exploit our tendency to form emotional bonds with non-human entities.

Design biases embedded in robots reflect and potentially amplify existing social inequalities. From female-voiced assistants programmed to be subservient to humanoid robots that replicate racial stereotypes, these design choices aren't neutral but encode cultural assumptions. The animal paradigm reminds us that we've selectively bred animals to suit our preferences and projected our biases onto them—processes now repeating with robots. However, unlike with animals, we have the opportunity to consciously design robots that challenge rather than reinforce harmful stereotypes.

When we search for images of robots online, we're bombarded with humanoid designs—machines with torsos, arms, legs, and heads. This fixation on recreating ourselves in mechanical form limits our imagination and practical effectiveness. The animal world offers a stunning diversity of forms and functions that can inspire more creative robot design. Animals navigate their environments in countless ways—crawling, flying, swimming, jumping, swinging—each evolved for specific ecological niches. Similarly, robots don't need to look or function like humans to be valuable. In fact, forcing robots into human forms often makes them less efficient and more expensive than designs tailored to their specific tasks.

These concerns shift our focus from distant hypotheticals to immediate governance challenges. Rather than worrying about robot consciousness or rights, we should develop regulations addressing data collection by social robots, transparency requirements for emotional design features, and standards for inclusive design practices. Just as animal welfare regulations emerged from practical concerns about treatment rather than abstract philosophical debates about animal consciousness, robot governance should prioritize concrete harms and benefits to humans.

Summary

The animal analogy offers a transformative lens for understanding our relationship with robots, challenging the dominant narrative that frames them as human replacements. By recognizing that robots, like animals, have different forms of intelligence and abilities that can supplement rather than substitute our own, we open up new possibilities for design, integration, and governance. This perspective helps us move beyond false technological determinism to see the choices we have in shaping our future with these autonomous agents.

The most profound insight from this reframing is that our relationship with robots doesn't have to follow a predetermined path toward replacement and competition. Instead, we can draw on our long history of partnering with animals to envision more creative, beneficial relationships with robots—as workers that extend our capabilities, companions that offer unique forms of interaction, and autonomous agents that we treat with appropriate care without confusing them with people.

For policymakers, designers, and citizens seeking to understand emerging technologies, the animal paradigm provides a grounded approach that moves beyond both techno-utopianism and dystopian fears, suggesting that with thoughtful governance and design, we can harness robotic technology to enhance human flourishing rather than diminish it.

Best Quote

“Some We Love, Some We Hate, Some We Eat, is a powerful illumination of how we really behave toward animals.” ― Kate Darling, The New Breed: What Our History with Animals Reveals about Our Future with Robots

Review Summary

Strengths: The book presents an intriguing thesis on robot-human relations, offering a novel perspective by comparing it to human-animal relationships. It is filled with engaging examples, anecdotes, and historical insights. The writing is well-crafted with a conversational tone, enhanced by personal stories and humor, making it accessible and enjoyable.

Weaknesses: The book lacks cohesion and a clear through line, feeling disjointed as it transitions between stories. The central argument is most effectively communicated in the brief epilogue, suggesting the main content could be more focused or concise.

Overall Sentiment: Mixed

Key Takeaway: The book offers a fresh perspective on the future of robot-human interactions by drawing parallels with human-animal relationships, but its impact is diminished by a lack of structural coherence.

About Author


Kate Darling
