The Big Nine

How the Tech Titans and Their Thinking Machines Could Warp Humanity

3.8 (1,931 ratings)
22-minute read | Text | 9 key ideas
In the shadowy corridors of corporate power, a quiet revolution unfolds—one that threatens to redefine humanity's future. Amy Webb’s incisive analysis in "The Big Nine" unveils the tangled web woven by tech titans like Amazon, Google, and Facebook, who wield artificial intelligence not as a tool for human progress but as a means to their own lucrative ends. This is not merely a warning cry; it's a strategic blueprint for reclaiming agency from faceless algorithms. Webb exposes the unseen hands guiding AI's evolution, revealing a chilling vision of machines poised to outthink their creators. Amidst this technological upheaval, she offers a daring roadmap to wrest control from a digital dystopia and forge a future where technology serves, rather than subverts, our shared humanity.

Categories

Business, Nonfiction, Philosophy, Science, Economics, Politics, Technology, Artificial Intelligence, Audiobook, Society

Content Type

Book

Binding

Hardcover

Year

2019

Publisher

PublicAffairs

Language

English

ISBN13

9781541773752

The Big Nine Plot Summary

Introduction

Artificial intelligence stands at a critical inflection point in human history. Nine powerful technology companies—six American and three Chinese—now control the development trajectory of AI technologies that will fundamentally reshape our societies, economies, and personal lives. These corporations make daily decisions about AI's design and deployment that embed values, priorities, and assumptions that will profoundly impact humanity's future. Yet these decisions often occur without adequate consideration of long-term consequences or representation of diverse human interests.

The concentration of such transformative power within a small group of corporations creates unprecedented risks and opportunities. As AI systems become increasingly integrated into critical infrastructure and decision-making processes, the values encoded in these systems—whether prioritizing profit, surveillance, efficiency, or human flourishing—will shape the contours of human experience for generations. By examining the organizational structures, incentives, and blind spots that drive AI development within these companies, we can better understand the possible futures that lie ahead and identify intervention points to steer this powerful technology toward outcomes that genuinely serve humanity's diverse needs and interests.

Chapter 1: The AI Ecosystem: Nine Companies Controlling Our Future

The future of artificial intelligence is being shaped primarily by nine powerful technology companies that wield unprecedented influence over its developmental trajectory. These companies—Google, Microsoft, Amazon, Facebook, IBM, and Apple (collectively known as the G-MAFIA) in the United States, and Baidu, Alibaba, and Tencent (the BAT) in China—control the vast majority of resources, talent, and infrastructure necessary for advancing AI.

These tech giants operate under dramatically different pressures and constraints. In the United States, the G-MAFIA face relentless market demands and shareholder expectations that prioritize short-term commercial success over long-term planning. They must constantly deliver new products and features, often at the expense of careful consideration of potential consequences. The American government has effectively outsourced AI research and development to these companies, providing little strategic direction or funding for basic research.

In China, the BAT operate within a government-directed framework that aligns AI development with national strategic goals. President Xi Jinping has unveiled an ambitious plan for China to become the global leader in AI by 2030, with a domestic industry worth at least $150 billion. The Chinese government views AI as essential to economic growth, military modernization, and social governance. Unlike their American counterparts, the BAT must ultimately bend to Beijing's will.

What makes the Big Nine particularly powerful is their control over the entire AI ecosystem—from the data required to train systems, to the frameworks and platforms used to build applications, to the cloud infrastructure that deploys AI at scale. Their decisions about which problems to solve, which values to encode, and which safeguards to implement will profoundly shape how AI evolves and impacts society.

The concentration of AI development among these nine companies raises critical questions about representation and accountability. The teams building these systems remain overwhelmingly homogeneous in terms of gender, race, and worldview, creating significant risks of bias and blind spots in the resulting technologies. As AI becomes increasingly integrated into critical infrastructure and decision-making systems, the values and priorities of the Big Nine will be encoded into the fabric of our technological future.

Chapter 2: Warning Signs: Biases and Blind Spots in AI Development

AI systems increasingly make consequential decisions that affect human lives, yet they often do so in ways that are opaque, biased, or misaligned with human values. These systems learn from historical data that frequently contains embedded biases, leading to discriminatory outcomes in areas like hiring, lending, criminal justice, and healthcare. When an AI system trained on historical medical records makes treatment recommendations, it may perpetuate patterns of care that historically underserved certain populations. Similarly, when facial recognition systems perform poorly on darker-skinned faces or female faces, they risk reinforcing existing social inequalities.

The black box problem compounds these issues. Many advanced AI systems, particularly deep learning networks, operate in ways that are difficult or impossible for humans to interpret. Engineers can observe inputs and outputs but cannot fully explain the decision-making process between them. This opacity creates significant risks when these systems are deployed in critical contexts. If an AI denies someone a loan, medical treatment, or parole, those affected have no meaningful way to understand or challenge the decision. The lack of transparency undermines accountability and makes it difficult to identify and correct problematic patterns.

Technical vulnerabilities in AI systems create additional concerns. Adversarial examples, specially crafted inputs designed to fool AI systems, can cause sophisticated image recognition systems to misclassify objects in dangerous ways. A stop sign with carefully placed stickers might be misidentified as a speed limit sign by an autonomous vehicle's vision system. These vulnerabilities could be exploited maliciously, turning AI systems against their users or creating widespread disruption. As AI becomes more integrated into critical infrastructure, these technical weaknesses become increasingly concerning.

The developmental trajectory of AI is further complicated by the values gap between different stakeholders. The tribes building AI systems often reflect narrow demographic profiles: predominantly white, male, technically oriented, and from privileged backgrounds. This homogeneity limits the perspectives considered during development and can lead to systems that work well for some groups while failing others. When AI systems are designed primarily by and for certain populations, they inevitably embed the priorities, assumptions, and blind spots of those groups.

Economic incentives frequently prioritize speed over safety in AI development. Companies race to deploy new capabilities ahead of competitors, sometimes without adequate testing or safeguards. This rush can lead to premature deployment of systems that haven't been thoroughly vetted for potential harms or unintended consequences. The pressure to monetize AI capabilities quickly can override careful consideration of long-term impacts or edge cases that might affect vulnerable populations.
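
The adversarial-example attack described above is simple enough to demonstrate in miniature. The sketch below is a hypothetical illustration, not anything from the book: it trains a toy linear classifier with NumPy, then applies a fast-gradient-sign-style perturbation, a small bounded nudge to every input feature in the direction that most raises the wrong class's score. All data, constants, and variable names are invented for the example.

```python
# Minimal sketch of an adversarial example against a toy classifier.
# Illustrative only: real attacks target deep vision models, but the
# mechanism (follow the gradient of the loss w.r.t. the input) is the same.
import numpy as np

rng = np.random.default_rng(0)
d = 40  # number of input features

# Two weakly separated Gaussian clusters stand in for two image classes.
X = np.vstack([rng.normal(-0.25, 1.0, (200, d)),
               rng.normal(+0.25, 1.0, (200, d))])
y = np.array([0] * 200 + [1] * 200)

# Fit logistic regression by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(class 1)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

score = lambda x: float(x @ w + b)           # > 0 means "class 1"

# Pick a class-0 example the model currently gets right.
x = next(xi for xi in X[:200] if score(xi) < 0)

# Fast-gradient-sign step: move every feature a small amount (epsilon)
# toward the wrong class. For a linear model, the relevant gradient is w.
epsilon = 0.5
x_adv = x + epsilon * np.sign(w)

print("score before:", score(x))      # negative: classified as class 0
print("score after: ", score(x_adv))  # usually positive: decision flips
```

The perturbation is small relative to the natural variation in the data, yet it typically flips the decision. The same mechanism scales up to deep vision models, which is why the stop-sign scenario above is more than hypothetical: researchers have demonstrated sticker-based attacks against real traffic-sign classifiers.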

Chapter 3: Three Futures: Optimistic, Pragmatic, and Catastrophic Scenarios

The evolution of artificial intelligence from narrow applications to general intelligence and potentially superintelligence will unfold over the coming decades, with profound implications for humanity. Three distinct scenarios emerge when projecting the future trajectory of AI development: optimistic, pragmatic, and catastrophic.

In the optimistic scenario, the world recognizes the warning signs and takes decisive action to redirect AI's developmental path. The G-MAFIA forms a coalition that prioritizes transparency, safety, and human welfare over short-term profits. Governments establish a Global Alliance on Intelligence Augmentation (GAIA) that creates international standards and oversight mechanisms. Personal data records (PDRs) are treated as distributed ledgers, with individuals maintaining ownership and control. By 2049, artificial general intelligence enhances human capabilities while remaining constrained by careful governance. This future sees AI helping to solve climate change, improve healthcare, and expand economic opportunity.

The pragmatic scenario acknowledges problems but implements only minor tweaks. The G-MAFIA continues to prioritize market demands, while governments fail to develop coherent national strategies. Competition rather than collaboration prevails, leading to a bifurcated ecosystem where consumers must choose between two incompatible operating systems—Google's mega-OS or Applezon (an Apple-Amazon partnership). Personal data records are owned by corporations rather than individuals. By 2029, people experience "learned helplessness" as they become dependent on AI systems that constantly nudge their behavior. Economic inequality widens as middle-management jobs disappear, creating a digital caste system. Meanwhile, China expands its influence through economic might and technological prowess.

The catastrophic scenario unfolds if current warning signs are ignored and the status quo persists. The United States fails to develop a national AI strategy while China implements its vision of technological authoritarianism. By 2039, China achieves dominance in AI and extends its social credit system globally through the Belt and Road Initiative. The G-MAFIA fragments under market pressures and regulatory challenges, while the BAT becomes the global standard for AI. Personal data records become instruments of surveillance and control. By 2069, artificial superintelligence emerges under Chinese direction, creating a new world order that prioritizes stability and control over individual liberty and human rights.

These scenarios highlight the critical juncture humanity faces. The decisions made by the Big Nine, governments, and society in the coming years will determine which path we follow. The optimistic scenario requires uncomfortable choices and sacrifices in the short term to secure a better future. The pragmatic scenario reflects our tendency to address symptoms rather than underlying causes. The catastrophic scenario shows what happens if we continue on our current trajectory without meaningful intervention.

Chapter 4: The Values Gap: How Corporate Priorities Create AI's Paper Cut Problem

The development of artificial intelligence is occurring without explicit consideration of human-centered values, creating what can be described as a "values gap." While the Big Nine companies all have formal value statements, these typically focus on innovation, efficiency, and customer satisfaction rather than articulating how AI should serve humanity's broader interests. This absence of codified humanistic values means that the best interests of all humanity are not being prioritized during research, design, and deployment processes.

This values gap manifests in what can be called the "paper cut problem"—a gradual accumulation of small harms that collectively cause significant damage. Unlike catastrophic scenarios involving killer robots, these paper cuts are subtle and often invisible until they accumulate. For example, when Google's advertising system showed ads for arrest records alongside searches for Black-identifying names but not white-identifying ones, it wasn't because engineers intentionally programmed racism into the system. Rather, the system was optimized for click-through rates without adequate consideration of fairness or social impact.

The optimization effect is a key driver of these paper cuts. AI systems are designed to optimize for specific outcomes—maximizing engagement, minimizing costs, or increasing efficiency—without necessarily considering broader social values. When IBM's Watson Health recommended dangerous treatment protocols for cancer patients, it wasn't because the system was malicious but because market pressures had forced IBM to rush development before adequate training data was available.

The black box problem compounds these issues. As AI systems become more complex, even their creators struggle to understand exactly how they make decisions. This opacity makes it difficult to identify and address biases or errors before they cause harm. When Microsoft's chatbot Tay began spewing racist and misogynistic content shortly after launch, engineers were caught off guard because they hadn't anticipated how the system would learn from malicious users.

The values gap is particularly concerning as AI transitions from narrow applications to more general capabilities. Without explicit prioritization of human welfare, transparency, and inclusivity, AI systems may optimize for goals that conflict with human flourishing. This isn't because AI developers are malicious—indeed, most are well-intentioned—but because market pressures and organizational structures don't incentivize careful consideration of long-term social impacts.
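
To make the optimization effect concrete, here is a deliberately tiny, hypothetical sketch of an ad-selection rule. The candidate ads, click-through rates, and harm scores are all invented; the point is only that a system maximizing engagement alone will surface the most harmful option, while making the values trade-off explicit in the objective changes the outcome.

```python
# Sketch of the "optimization effect": a ranking rule tuned only for
# click-through rate (CTR) can systematically favor harmful content.
# All candidates and numbers below are invented for illustration.

candidates = [
    # (ad description,      predicted CTR, estimated social harm)
    ("arrest-records ad",   0.09,          0.8),
    ("job-listings ad",     0.07,          0.0),
    ("local-news ad",       0.05,          0.1),
]

def pick_by_ctr(ads):
    """What a pure engagement objective does: maximize clicks, period."""
    return max(ads, key=lambda ad: ad[1])

def pick_with_harm_penalty(ads, weight=0.1):
    """Same selection, but each unit of estimated harm now costs
    `weight` units of CTR. The values trade-off is explicit in code."""
    return max(ads, key=lambda ad: ad[1] - weight * ad[2])

print(pick_by_ctr(candidates)[0])             # arrest-records ad wins
print(pick_with_harm_penalty(candidates)[0])  # job-listings ad wins
```

Nothing in the first rule is malicious; it simply encodes no value other than clicks. The hard problems in practice are estimating harm at all and choosing the penalty weight, which is precisely where the values gap lives.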

Chapter 5: Democracy vs. Autocracy: The Battle for AI's Soul

The development of artificial intelligence has become a central battleground in the broader ideological contest between democratic and authoritarian governance models. This competition transcends mere technological rivalry, representing fundamentally different visions for how AI should be designed, deployed, and governed. The outcome of this contest will significantly influence whether AI ultimately strengthens human freedom and dignity or becomes a tool for unprecedented control and manipulation.

In democratic societies, AI development has primarily emerged from private companies operating with relative independence from government direction. This model has fostered remarkable innovation but has also created tensions between commercial incentives and broader public interests. Companies prioritize features and applications that generate revenue or competitive advantage, sometimes at the expense of privacy, security, or fairness. Democratic governments have struggled to establish effective oversight frameworks that protect citizens while enabling beneficial innovation, resulting in a regulatory landscape that often lags behind technological capabilities.

The democratic approach to AI emphasizes values like transparency, individual privacy, and user choice. Systems are designed to augment human decision-making rather than replace it entirely, preserving human agency and accountability. When problems emerge, such as algorithmic bias or privacy violations, robust civil society institutions, independent media, and legal protections provide mechanisms for identifying and addressing these issues. The messy, pluralistic nature of democratic societies creates space for diverse perspectives to influence AI development, potentially leading to more inclusive and balanced systems.

By contrast, authoritarian regimes view AI primarily as a tool for maintaining domestic control and projecting power internationally. China, in particular, has made AI development a national strategic priority, investing heavily in technologies that support surveillance, social management, and military applications. The Chinese model integrates AI into a comprehensive system of social governance, exemplified by initiatives like the Social Credit System that uses algorithmic assessment to reward compliant behavior and punish deviations from state-approved norms.

The authoritarian approach prioritizes state access to data, centralized control, and alignment with government objectives. Privacy is subordinated to security concerns, with vast data collection enabling increasingly precise monitoring and prediction of citizen behavior. AI systems are designed to identify potential dissent or social instability before it manifests, allowing preemptive intervention. The lack of independent oversight or meaningful consent mechanisms means citizens have little recourse against algorithmic decisions that affect their lives and opportunities.

These contrasting approaches create different innovation dynamics. Democratic systems may progress more slowly in some areas due to concerns about privacy, ethics, and unintended consequences. The distributed nature of innovation across multiple companies and institutions creates redundancy and diversity but may lack the coordinated focus possible in more centralized systems. Authoritarian approaches can marshal resources toward strategic priorities and deploy technologies rapidly without addressing ethical concerns or public resistance, potentially gaining short-term advantages at the cost of long-term legitimacy and trust.

Chapter 6: From ANI to ASI: The Critical Transition Points Ahead

The evolution of artificial intelligence will unfold across three distinct phases, each with profound implications for humanity. We currently exist in the era of artificial narrow intelligence (ANI), where systems excel at specific tasks but lack general capabilities. The transition to artificial general intelligence (AGI)—machines that can reason, solve problems, and make choices across domains with human-level competence—represents a critical inflection point that could occur by the 2040s. Beyond that lies artificial superintelligence (ASI), where machines would surpass human cognitive abilities across all domains.

This evolutionary trajectory is accelerated by several technological developments. Evolutionary algorithms enable AI systems to generate, test, and refine solutions through processes analogous to natural selection. These algorithms can produce innovations that human programmers might never conceive. Meanwhile, Moore's Law continues to drive exponential improvements in computing power, while new hardware architectures like Google's Tensor Processing Units and IBM's neuromorphic chips overcome traditional bottlenecks.

The transition from ANI to AGI will likely be gradual rather than sudden. Systems will incrementally master more general capabilities—from understanding context and making inferences to transferring knowledge between domains. The Contributing Team Member Test represents a more meaningful benchmark than the traditional Turing Test: can an AI system participate in a meeting and make valuable, unsolicited contributions that demonstrate understanding of social dynamics and contextual factors?

What makes this transition particularly significant is that while human intelligence evolves slowly through biological processes, machine intelligence can improve at exponential rates through recursive self-improvement. An AGI system could potentially design better versions of itself, leading to an "intelligence explosion" where capabilities advance rapidly beyond human comprehension. At our current rate, humans might gain 15 IQ points over 50 years of evolution, while AI could achieve vastly greater cognitive leaps in the same timeframe.

This disparity creates what philosopher Nick Bostrom calls the "control problem": how do we ensure that increasingly capable AI systems remain aligned with human values and interests? The challenge is compounded by the black box problem, where even the creators of advanced AI systems cannot fully explain how they reach particular decisions. As systems become more complex and autonomous, this opacity increases, potentially leading to unexpected and unintended outcomes.

The transition to ASI, if it occurs, would represent the most profound transformation in the history of intelligence on Earth. A superintelligent system would make decisions using logic potentially alien to human understanding. The analogy often used is that of a chimpanzee trying to follow a city council meeting: humans attempting to comprehend the reasoning of a superintelligent AI would face a similarly vast cognitive gap.
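
As a concrete illustration of the evolutionary algorithms mentioned above, the hypothetical sketch below evolves a string toward a target by repeated mutation and selection. The target phrase, population size, and mutation rate are arbitrary choices for the demonstration; real systems apply the same generate-score-select loop to things like neural network architectures or control programs rather than strings.

```python
# Minimal evolutionary algorithm: generate candidates, score them,
# keep the fittest, mutate it to seed the next generation.
# Parameters and target are invented for illustration.
import random

random.seed(0)
TARGET = "thinking machines"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    """Number of characters that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    """Randomly replace characters, analogous to genetic mutation."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from a fully random population; no human writes the answer.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]
for generation in range(1000):
    best = max(population, key=fitness)
    if best == TARGET:
        break
    # Elitism: carry the current best forward so progress is never lost.
    population = [best] + [mutate(best) for _ in range(99)]

# With these settings the loop typically converges well before the cap.
print(f"generation {generation}: best candidate = {best!r}")
```

The point of the example is the one made in the text: the programmer specifies only a scoring rule and a variation mechanism, and the solution emerges from the search itself, which is why such methods can produce designs their authors never conceived.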

Chapter 7: Reclaiming AI: Solutions for a More Human-Centered Future

Creating a human-centered approach to artificial intelligence requires fundamental shifts in how these technologies are conceived, developed, and governed. This transformation begins with reframing the purpose of AI from maximizing efficiency, profit, or control to enhancing human capabilities, expanding opportunities, and supporting human flourishing. Such reorientation demands changes across multiple dimensions, from technical design practices to corporate governance structures to national and international policies.

At the technical level, human-centered AI requires designing systems that are transparent, explainable, and accountable. Users should understand how AI systems make decisions that affect them, particularly in high-stakes contexts like healthcare, criminal justice, or financial services. This transparency enables meaningful human oversight and intervention when systems produce problematic outcomes. Rather than treating AI as a black box whose decisions must be accepted without question, a human-centered approach ensures that these systems remain tools under human direction rather than autonomous authorities.

Inclusive development processes represent another essential element of human-centered AI. The homogeneity of current AI development teams (predominantly white, male, technically oriented, and from privileged backgrounds) inevitably shapes the problems these teams choose to solve and the solutions they develop. Diversifying these teams across dimensions including gender, race, socioeconomic background, disciplinary training, and cultural perspective would produce AI systems that better serve diverse human needs and contexts. This diversity must extend beyond token representation to meaningful influence over decision-making and priority-setting.

Data practices require particular attention in human-centered AI development. Current approaches often treat personal data as a resource to be extracted and exploited without meaningful consent or compensation. A human-centered alternative would establish individual ownership and control over personal data, with clear consent mechanisms and fair compensation for its use. This shift would rebalance power between individuals and the organizations that collect and analyze their data, creating incentives for more respectful and beneficial data practices.

Corporate governance structures must evolve to support human-centered AI. The current model, which prioritizes shareholder returns above all other considerations, creates incentives for developing AI that maximizes engagement, consumption, and data extraction regardless of broader impacts. Alternative models could incorporate diverse stakeholder perspectives into governance, establish binding ethical commitments, or create legal structures that balance profit motives with social responsibility. These changes would enable companies to make decisions that consider long-term social impacts alongside short-term financial returns.

Education systems play a crucial role in shifting AI's trajectory. Technical training must integrate ethical reasoning, social impact assessment, and diverse perspectives rather than treating these as separate or optional considerations. Students across disciplines, not just computer science, should develop AI literacy that enables them to participate in decisions about how these technologies are developed and deployed. This broader educational approach would create a workforce capable of building more thoughtful and beneficial AI systems.

Summary

The development of artificial intelligence stands at a critical inflection point. The concentrated power of the Big Nine companies has created an unprecedented situation where a small number of corporations shape technologies that will transform human civilization. This concentration brings both opportunities for coordinated beneficial development and risks of misaligned incentives, limited perspectives, and inadequate safeguards. The divergent approaches between democratic and authoritarian governance models further complicate this landscape, creating competing visions for how AI should be designed, deployed, and controlled.

The path forward requires recognizing that technical design choices embed values and priorities that will profoundly shape human experience as AI becomes increasingly integrated into social systems. These choices are not merely technical but deeply political and ethical, determining whether AI ultimately enhances human autonomy and capability or restricts and controls it. By developing comprehensive frameworks that address technical safeguards, organizational practices, governance structures, and international coordination, we can guide AI development toward outcomes that genuinely serve humanity's diverse needs and values. This task demands unprecedented collaboration across sectors, disciplines, and national boundaries, but the stakes could not be higher. The decisions made in the coming years about AI development will reverberate through generations, potentially determining whether technology becomes the greatest ally of human flourishing or its greatest threat.

Best Quote

“While plenty of smart people advocate AI for the public good, we are not yet discussing artificial intelligence as a public good. This is a mistake.” ― Amy Webb, The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity

Review Summary

Strengths: Amy Webb's book, "The Big Nine," offers a well-balanced approach to the future of artificial intelligence, avoiding hysteria and irrational exuberance. Webb's credentials as a respected futurist and her experience advising corporate and government leaders lend credibility to her insights. The book is structured in three parts, providing a comprehensive overview of AI's history, potential future scenarios, and actionable strategies to address arising challenges. Webb's exploration of AI's history is noted for its depth, reaching further back than other accounts. Her imaginative scenarios for AI's future development are designed to vividly illustrate potential outcomes.

Weaknesses: The review mentions that the latter part of the book, which includes scenario storytelling, lacks substance and feels hollow. There is criticism that the book does not offer new insights for those already familiar with AI and its implications, suggesting that the narrative might not add much beyond what is generally known. Additionally, the book is perceived as being too US-focused and speculative about possibilities.

Overall Sentiment: The sentiment expressed in the review is mixed. While the book is praised for its balanced approach and comprehensive treatment of AI, there is disappointment in the lack of novel insights and the perceived superficiality of some sections.

Key Takeaway: The review highlights the importance of understanding AI's potential impact on global geopolitics, particularly the power struggle between the US and China. It underscores the necessity for a global coalition to guide AI development positively and the need for informed and courageous leadership to address AI's challenges.

About Author

Amy Webb

Amy Webb was named by Forbes as one of the five women changing the world, included in the BBC's 100 Women of 2020, and ranked on the Thinkers50 list of the 50 most influential management thinkers globally.

She is the author of several popular books, including The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity, which was longlisted for the Financial Times & McKinsey Business Book of the Year award, shortlisted for the Thinkers50 Digital Thinking Award, and won the 2020 Gold Axiom Medal for the best book about business and technology, and The Signals Are Talking: Why Today's Fringe Is Tomorrow's Mainstream, which won the Thinkers50 Radar Award, was selected as one of Fast Company's Best Books of 2016 and one of Amazon's Best Books of 2016, and received the 2017 Gold Axiom Medal for the best book about business and technology. Her latest book, The Genesis Machine, explores the futures of synthetic biology.

A lifelong science fiction fan, Amy collaborates closely with Hollywood writers and producers on films, TV shows, and commercials about science, technology, and the future. Recent projects include The First, a sci-fi drama about the first humans to travel to Mars; an AT&T commercial featuring a fully autonomous car, directed by Oscar winner Kathryn Bigelow; and an upcoming film based on Amy's hilarious and heart-wrenching memoir about data, algorithms, and online dating (Data, A Love Story). Amy is a member of the Academy of Television Arts & Sciences and has served as a Blue Ribbon Emmy award judge. Amy Webb "showed Comic-Con how it's done," declared the Los Angeles Times, describing the 2019 main-stage Westworld session she moderated with the show's actors and showrunners.
