
Supremacy

AI, ChatGPT, and the Race that Will Change the World

4.1 (2,130 ratings)
25-minute read | Text | 9 key ideas
In the shadowy corridors of tech empires, a tempest brews as two titans, OpenAI and DeepMind, engage in a high-stakes duel for AI dominance. "Supremacy" by Parmy Olson is a riveting exposé that unfurls the secretive and intense rivalry between visionaries Sam Altman and Demis Hassabis. This isn't just a tale of technological marvels, but a piercing examination of ambition, ethics, and the unchecked power of AI behemoths. Olson, with insider access and astute analysis, reveals how the relentless pursuit of Artificial General Intelligence could redefine humanity's future, raising urgent questions about innovation's role in our lives. Will this race lead us to enlightenment, or plunge us into chaos? "Supremacy" beckons readers to ponder the thin line between progress and peril in a world where machines may soon surpass their creators.

Categories

Business, Nonfiction, Philosophy, Science, History, Politics, Technology, Artificial Intelligence, Audiobook, Society

Content Type

Book

Binding

Hardcover

Year

2024

Publisher

St. Martin's Press

Language

English

ISBN13

9781250337740


Supremacy Plot Summary

Introduction

In the quiet laboratories of London and the bustling offices of Silicon Valley, two men with radically different backgrounds have been engaged in what may be the most consequential technological race of our time. Sam Altman, the entrepreneurial wunderkind from Missouri, and Demis Hassabis, the chess prodigy turned neuroscientist from North London, have emerged as the chief architects of artificial general intelligence—a technology with the potential to transform human civilization more profoundly than any invention that came before. Their competing visions for AI's development have shaped not just their respective organizations, OpenAI and DeepMind, but the very trajectory of humanity's relationship with intelligent machines. What makes their parallel journeys so fascinating is not merely the cutting-edge technology they've created, but the philosophical and ethical questions their work has forced us to confront. Through their stories, we witness the tension between idealism and pragmatism, between open research and commercial advantage, between moving fast and proceeding with caution. As their AI systems grow increasingly powerful, these once-obscure researchers have found themselves at the center of a global conversation about the future of work, the nature of intelligence, and ultimately, humanity's place in a world where we may no longer be the most intelligent entities.

Chapter 1: Early Visionaries: Divergent Paths to AI Leadership

Sam Altman's journey to AI leadership began far from the cutting-edge laboratories where artificial intelligence is developed. Growing up in St. Louis, Missouri, he displayed an entrepreneurial spirit from an early age. As a teenager in the late 1990s, Altman found refuge in AOL chat rooms, where he could explore his identity as a gay youth in a relatively anonymous environment. This early experience with technology as a tool for connection would later inform his vision of AI as something that could empower and unite people rather than divide them. Altman's path took him to Stanford University, though he would drop out to pursue his entrepreneurial ambitions. In 2005, at just 19 years old, he founded Loopt, a location-sharing mobile app that predated the smartphone revolution. While Loopt ultimately failed to gain significant traction, it positioned Altman as a young entrepreneur to watch in Silicon Valley circles. His real breakthrough came when he was appointed president of Y Combinator, the prestigious startup accelerator, in 2014. Under his leadership, YC expanded its portfolio and influence, backing companies that would become unicorns and establishing Altman as a kingmaker in the tech industry.

Demis Hassabis followed a markedly different trajectory. Born in London to a Greek Cypriot father and Chinese Singaporean mother, Hassabis was a chess prodigy who began playing at age four and ranked among the world's strongest players in his age group by his early teens. His intellectual gifts led him first to the world of video game development, where he joined the legendary studio Bullfrog Productions at just 17 before founding his own gaming company, Elixir Studios. For Hassabis, games weren't merely entertainment but laboratories for exploring how computers could simulate aspects of human decision-making and creativity. Recognizing the limitations of game AI, Hassabis made a pivotal decision to pursue formal academic training in neuroscience.
After selling Elixir Studios, he earned a PhD from University College London, focusing on the neural processes underlying episodic memory. This interdisciplinary background—spanning computer science, gaming, and neuroscience—gave him a unique perspective on artificial intelligence. He became convinced that understanding the human brain was essential to creating truly intelligent machines. In 2010, Hassabis co-founded DeepMind with Shane Legg and Mustafa Suleyman, with the mission to "solve intelligence, then use that to solve everything else." His approach was methodical and research-focused, drawing inspiration from how the human brain works. Five years later, Altman would co-found OpenAI with Elon Musk and others, driven by concerns that advanced AI might be developed by a single corporation for profit rather than for humanity's benefit. Despite their different backgrounds, both men shared a conviction that AGI represented the most important technological development in human history—one that required careful stewardship to ensure it benefited humanity rather than harmed it.

Chapter 2: Strategic Choices: Games vs. Language Models

The divergent approaches taken by DeepMind and OpenAI in their pursuit of artificial general intelligence reflected their founders' distinct philosophies about how intelligence could be developed in machines. Hassabis, with his background in games and neuroscience, believed that mastering strategic games provided an ideal training ground for developing AI systems. Games offered clear rules, definable objectives, and measurable success—creating perfect conditions for reinforcement learning, where AI systems improve through trial and error, much like humans do. This games-first approach bore spectacular fruit in 2016 when DeepMind's AlphaGo defeated world champion Lee Sedol at the ancient board game Go—a feat many experts had predicted was decades away. The victory wasn't just a technical achievement; it was a powerful demonstration of Hassabis's vision. AlphaGo didn't win through brute-force calculation but through intuitive-seeming play that sometimes appeared almost human. The system made moves that professional players initially described as mistakes but later recognized as innovative strategies. For Hassabis, this validated his belief that AI systems could develop capabilities previously thought to be uniquely human. OpenAI initially pursued a similar path, developing systems that could master complex games like Dota 2. However, under Altman's leadership, the organization gradually shifted its focus toward language as the primary medium for developing intelligence. This pivot represented a fundamentally different understanding of what constitutes intelligence. While games offered structured environments with clear rules, language was messy, contextual, and deeply human. By training models on vast amounts of text from the internet, OpenAI bet that understanding and generating human language would lead to something approximating general intelligence. 
The language model approach culminated in the GPT (Generative Pre-trained Transformer) series, with each iteration growing dramatically in size and capability. GPT-3, released in 2020, demonstrated an uncanny ability to generate coherent text across a wide range of topics and styles, from poetry to programming code. This suggested a form of understanding that seemed increasingly humanlike, though debates raged about whether the system truly "understood" anything or was merely performing sophisticated pattern matching. These strategic divergences reflected deeper philosophical differences about the nature of intelligence itself. Hassabis, with his neuroscience background, approached AGI as a scientific problem to be solved through careful experimentation and incremental progress. His systems learned through reinforcement learning techniques inspired by how humans learn. Altman and OpenAI increasingly embraced a more engineering-focused approach, believing that with enough data and computing power, intelligence would emerge from statistical patterns in language. By 2022, these different paths had led to AI systems with distinct strengths and limitations. DeepMind's game-playing systems excelled at strategic reasoning and planning but struggled with the nuances of human language and culture. OpenAI's language models could generate remarkably human-like text but lacked the strategic planning abilities of game-playing systems. The race toward AGI had become not just a competition between organizations but between fundamentally different conceptions of what intelligence is and how it can be created in machines.

Chapter 3: From Idealism to Commerce: OpenAI's Transformation

OpenAI's evolution from an idealistic nonprofit to a commercial powerhouse represents one of the most dramatic pivots in recent technology history. Founded in December 2015 with a billion-dollar pledge from Elon Musk, Sam Altman, and other tech luminaries, OpenAI began with a noble mission: to ensure that artificial general intelligence benefited all of humanity rather than serving narrow corporate interests. The organization promised to make its research open to the public—hence its name—and to prioritize safety over commercial gain. The founding team articulated a clear philosophy: if artificial general intelligence was inevitable, then it should be developed by an organization committed to sharing its benefits widely rather than maximizing profit. This approach attracted top research talent who shared this idealistic vision. In its early years, OpenAI published groundbreaking papers on reinforcement learning and robotics, making its findings freely available to the research community. This commitment to openness earned goodwill among academics and aligned with the organization's stated goal of ensuring that advanced AI would not be controlled by a small group of powerful entities. By 2019, however, this idealistic vision collided with harsh economic realities. Building cutting-edge AI systems required enormous computing resources and top talent, both increasingly expensive commodities in the AI boom. When Musk departed in 2018, citing conflicts of interest with Tesla's AI development, OpenAI faced an existential crisis. Altman, now leading the organization, made a controversial decision: OpenAI would create a "capped-profit" company that could attract investment while theoretically maintaining its original mission. This new structure limited investor returns to 100 times their investment—an enormous cap that still allowed for billions in potential profits. 
The transformation accelerated in July 2019 when OpenAI announced a $1 billion investment from Microsoft, a deal that made Azure OpenAI's exclusive cloud provider and positioned the tech giant to commercialize OpenAI's technology; Microsoft would go on to secure an exclusive license to GPT-3 in 2020. Critics immediately pointed out the irony: an organization founded to prevent AGI from being controlled by large corporations was now deeply entangled with one of the world's most powerful tech companies. Defenders argued that without this pivot, OpenAI would have ceased to exist, and its mission would have died with it. This strategic shift reflected a deeper tension in the AI field between idealism and pragmatism. The resources needed to compete in the AGI race had grown so enormous that independence seemed increasingly impossible. Even as OpenAI maintained that its nonprofit board still governed the organization's direction, ensuring alignment with its original mission, the commercial pressures and incentives had fundamentally changed. The organization that had once promised to make its research open was now keeping crucial details of its models secret, citing safety concerns but also protecting valuable intellectual property. By 2022, OpenAI had completed its transformation from nonprofit to commercial entity, though with an unusual governance structure that maintained some connection to its original mission. This evolution illustrated a broader pattern in technology development: idealistic visions often give way to commercial realities as technologies mature and require greater resources. The question remained whether OpenAI could balance its commercial success with its founding commitment to ensure that artificial general intelligence benefits all of humanity—a tension that would only intensify as its technology grew more powerful and more profitable.

Chapter 4: Corporate Entanglements: Microsoft and Google's Influence

The entry of tech giants Microsoft and Google into advanced AI development fundamentally altered the competitive landscape, transforming what had begun as a scientific and philosophical quest into a high-stakes commercial battle. Google's 2014 acquisition of DeepMind for approximately $500 million initially appeared to be a straightforward talent acquisition, but it quickly became clear that the search giant had secured a crucial advantage in the race toward artificial general intelligence. Under Google's umbrella, DeepMind gained access to vast computing resources and data that accelerated its research capabilities. As part of the acquisition, DeepMind's founders negotiated unusual terms: Google agreed to establish an ethics board to oversee the company's research and promised not to use DeepMind's technology for military applications. These provisions reflected Hassabis's determination to maintain some independence and ethical oversight even within a corporate structure. However, the practical implementation of these safeguards became increasingly complicated as DeepMind integrated more deeply into Google's corporate structure. By 2021, DeepMind's attempts to establish a more independent legal structure within Google had repeatedly stalled, and the founders had to accept that their dream of semi-autonomy would never fully materialize. Microsoft, initially slower to prioritize cutting-edge AI research, found itself playing catch-up. The company's strategic pivot came under CEO Satya Nadella, who recognized that artificial intelligence would be central to the future of computing. In 2019, Microsoft made a decisive move by investing $1 billion in OpenAI, establishing a partnership that would profoundly shape both organizations. This alliance gave OpenAI access to Microsoft's Azure cloud computing infrastructure—essential for training increasingly large AI models—while giving Microsoft exclusive licensing rights to OpenAI's technology. 
The Microsoft-OpenAI partnership intensified in early 2023 when Microsoft integrated ChatGPT into its Bing search engine, directly challenging Google's core business. This move prompted Google to declare a "code red" and accelerate the development of its own conversational AI system, Bard. The stock market reflected these shifting fortunes, with Microsoft's market capitalization growing while Google parent Alphabet faced investor concerns about potential disruption to its search business. These corporate entanglements transformed the economics of AI development. Training frontier models required enormous computational resources—typically tens or hundreds of millions of dollars worth of specialized chips and energy—creating significant barriers to entry. The three major cloud providers—Amazon Web Services, Microsoft Azure, and Google Cloud—controlled the vast majority of the computational resources necessary for training advanced AI models. This created a dynamic where even well-funded AI startups ultimately depended on these platforms, potentially limiting their independence. The corporate embrace of AI research also concentrated decision-making power about this transformative technology in the hands of a few executives and shareholders, raising profound questions about democratic oversight and public accountability. Researchers who had once dreamed of building AGI to solve humanity's greatest challenges now found themselves helping to optimize advertising algorithms or cloud services. As these systems became more capable and more integrated into daily life, the question of who controlled them—and to what ends—took on ever greater importance in shaping humanity's technological future.

Chapter 5: The ChatGPT Moment: Accelerating the AI Race

The release of ChatGPT in November 2022 marked a watershed moment in artificial intelligence, transforming public perception of AI capabilities virtually overnight. OpenAI launched the conversational AI system with little fanfare, describing it as a "research preview" rather than a polished product. Yet within five days, ChatGPT had attracted over one million users, and within two months, it had become the fastest-growing consumer application in history, reaching 100 million monthly active users by January 2023. ChatGPT's impact stemmed from its unprecedented accessibility. Previous advances in AI had often remained confined to research papers or specialized applications, but ChatGPT offered a simple chat interface that anyone could use without technical expertise. Users could ask questions, request creative content, seek advice, or simply engage in conversation. The system's ability to generate coherent, contextually appropriate responses across an enormous range of topics astonished both experts and the general public, creating a moment of collective realization about how far AI had advanced while few were paying attention. The technology behind ChatGPT represented the culmination of years of research. Based on GPT-3.5 (and later GPT-4), it utilized a transformer architecture—ironically, an innovation originally developed at Google—combined with reinforcement learning from human feedback. This approach allowed the model to generate more helpful, harmless, and honest responses than previous systems. While OpenAI had been developing these capabilities incrementally, ChatGPT's conversational interface made the progress suddenly tangible to millions. The business implications were immediate and far-reaching. Microsoft, having secured exclusive access to OpenAI's technology through its earlier investment, quickly integrated ChatGPT capabilities into its Bing search engine and Microsoft 365 productivity suite. 
This move threatened Google's core search business, intensifying the "code red" response and the rush to launch Bard. ChatGPT's success also triggered a broader AI arms race. Venture capital flooded into generative AI startups, with funding increasing from approximately $4.8 billion in 2022 to over $40 billion in 2023. Established companies across industries scrambled to incorporate generative AI into their products and services. This acceleration raised concerns among AI safety advocates, who worried that competitive pressures would lead to the hasty deployment of increasingly powerful systems without adequate safety measures. For DeepMind, ChatGPT's explosive success represented both a challenge and an opportunity. The company had been pursuing a more methodical approach to AI development, focused on scientific breakthroughs rather than consumer products. Now, it faced pressure to demonstrate comparable capabilities. In response, Google merged DeepMind with its internal Google Brain team in 2023 to create Google DeepMind, a unified division focused on accelerating AI development. This reorganization reflected the new competitive reality: the race for AI supremacy had entered a new, more intense phase, with public attention and commercial stakes higher than ever before.

Chapter 6: Ethical Crossroads: Balancing Safety and Innovation

As AI systems grew more powerful, the field faced an increasingly urgent dilemma: how to balance the push for greater capabilities against the need for safety and ethical deployment. This tension manifested in different approaches at DeepMind and OpenAI, reflecting their founders' divergent philosophies and the pressures of their corporate relationships. DeepMind, under Hassabis's leadership, generally adopted a more cautious approach, extensively testing systems before release and publishing research papers that detailed their methods while keeping the actual models proprietary. OpenAI's approach evolved more dramatically over time. In 2019, the organization made headlines by announcing it would initially withhold the full version of its GPT-2 language model due to concerns about potential misuse for generating fake news and impersonation, releasing it only in stages over the following months. This decision sparked debate about responsible AI development, with some praising OpenAI's caution and others questioning whether withholding technology was the right approach to safety. However, as commercial pressures mounted, OpenAI's stance shifted. By the time ChatGPT was released in late 2022, the organization had embraced a strategy of iterative deployment, releasing powerful AI systems to the public and refining safeguards in response to real-world use, arguing that broad deployment was necessary to understand the technology's actual impacts. The ethical challenges extended beyond just when and how to release AI systems. Both organizations grappled with issues of bias and fairness, as their models—trained on internet data—inevitably absorbed and sometimes amplified societal prejudices. ChatGPT, for instance, was found to generate stereotypical or discriminatory content about certain groups, reflecting biases present in its training data. These issues highlighted the challenge of building AI systems that were not just powerful but also aligned with human values and ethical principles. The field also divided over existential risk—the possibility that advanced AI might pose a threat to humanity itself.
In March 2023, thousands of AI researchers and tech leaders, most prominently Elon Musk, signed an open letter calling for a six-month pause in training AI systems more powerful than GPT-4, citing potential risks to society. Two months later, both Hassabis and Altman signed a separate one-sentence statement from leading AI labs declaring that mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war. Critics argued that these concerns distracted from addressing immediate harms like algorithmic discrimination and surveillance. Governance structures became increasingly important as AI systems grew more powerful. OpenAI's unusual structure, with a nonprofit board overseeing a for-profit company, was designed to ensure that the organization remained committed to its original mission of beneficial AGI. However, this arrangement faced its greatest test in November 2023 when the board briefly ousted Altman as CEO, citing concerns about his candor in communications. The ensuing crisis, which saw Microsoft intervene and most OpenAI employees threaten to quit, raised profound questions about who ultimately controlled this powerful technology. As both organizations continued to develop increasingly capable AI systems, they found themselves navigating competing demands: advancing capabilities to remain competitive, ensuring safety to prevent harm, satisfying commercial partners to secure resources, and maintaining public trust. These tensions reflected a broader challenge facing the field: how to develop governance mechanisms that could ensure powerful AI systems would be developed responsibly in an environment driven by intense competition and commercial pressures.

Chapter 7: Legacy and Impact: Reshaping Technology's Future

The work of Altman, Hassabis, and their respective organizations has fundamentally altered our technological trajectory, creating ripple effects that extend far beyond artificial intelligence itself. Their competing visions for AI development have not only shaped their own organizations but have influenced how governments, businesses, and society at large think about the future of technology. As their AI systems have grown more capable, they have forced us to reconsider fundamental questions about the nature of intelligence, creativity, and what it means to be human. In the business world, the impact has been swift and dramatic. Companies across industries have scrambled to integrate generative AI into their products and services, from legal document analysis to creative content production. The technology has begun disrupting established business models, particularly in knowledge work, where tasks once thought to require human judgment can increasingly be performed by AI systems. This has raised urgent questions about the future of work and economic distribution in a world where intelligence—once exclusively human—can be replicated and scaled through technology. The educational implications have been equally profound. Schools and universities have had to rapidly adapt to a world where students can generate essays or solve complex problems using AI assistants. This has prompted a fundamental rethinking of assessment methods and learning objectives, shifting focus from information recall to critical thinking, creativity, and effective collaboration with AI systems. The technology has democratized access to certain forms of knowledge while simultaneously raising concerns about critical thinking and the development of fundamental skills. On the geopolitical stage, AI has emerged as a key battleground in the competition between nations. 
The United States, China, and the European Union have developed distinct approaches to AI governance, reflecting their different values and strategic interests. The U.S. has generally favored innovation with limited regulation, China has pursued AI development as a national priority with state direction, while the EU has emphasized regulation to ensure AI aligns with European values. These divergent approaches have created a fragmented global landscape for AI governance, with implications for international cooperation on safety standards. Perhaps most significantly, the work of these AI pioneers has forced us to confront philosophical questions about consciousness, intelligence, and humanity's place in the world. As AI systems demonstrate increasingly sophisticated capabilities in domains once thought to be uniquely human—from creative writing to strategic reasoning—they challenge our understanding of what makes human intelligence special. This has sparked renewed interest in consciousness studies and the philosophy of mind, as we attempt to understand the differences between human and artificial intelligence. The legacies of Altman and Hassabis will ultimately be judged not just by the capabilities of the systems they create, but by how those systems reshape our economic, social, and political landscape. Their rival visions for AI development have accelerated technological progress while raising profound questions about governance, safety, and the distribution of benefits. As we stand at this technological crossroads, their stories remind us that the development of artificial intelligence is not merely a technical challenge but a deeply human endeavor, shaped by the values, ambitions, and limitations of its creators.

Summary

The parallel journeys of Sam Altman and Demis Hassabis reveal how the pursuit of artificial general intelligence transformed from an idealistic scientific endeavor into a high-stakes commercial and geopolitical contest. Both men began with visions that transcended mere profit: Altman sought to ensure that AGI would benefit humanity broadly rather than a select few, while Hassabis aimed to unlock the mysteries of intelligence itself. Yet as their organizations grew and the potential of AI became more apparent, they found themselves increasingly entangled with corporate giants whose resources were essential for advancing their work but whose commercial imperatives sometimes conflicted with their original missions. The story of these AI architects offers a profound lesson about the tension between idealism and pragmatism in technological innovation. Their experiences suggest that transformative technologies inevitably concentrate power, despite the best intentions of their creators. As we move deeper into an era where artificial intelligence reshapes every aspect of society, their journeys remind us that the most important questions about AI are not technical but ethical and political: who controls these systems, who benefits from them, and how can we ensure they serve humanity's best interests rather than merely the interests of those who own them? The answers to these questions will determine whether the artificial intelligence revolution leads to a more equitable and flourishing society or merely reinforces existing power structures—a responsibility that now falls to all of us, not just the architects who set this transformation in motion.

Best Quote

“I think one reason why chess has survived so successfully over generations is because the knight and bishop are perfectly balanced,” Hassabis told Thiel as canapés were being passed around. “I think that causes all the creative asymmetric tension.” ― Parmy Olson, Supremacy: AI, ChatGPT, and the Race that Will Change the World

Review Summary

Strengths: The review highlights the book's ability to provide an approachable introduction to complex AI topics, even for those who are not tech-savvy. It praises the author's thorough investigation into the financial and ethical dimensions of AI development, including detailed accounts of deals, investments, and the challenges in establishing ethics committees.

Overall Sentiment: Mixed. The review conveys both fascination and concern, reflecting a balance of intrigue about AI advancements and worry about the ethical implications and the motivations of those behind these technologies.

Key Takeaway: Parmy Olson's "Supremacy" offers a compelling exploration of AI's rapid development, focusing on key figures like Sam Altman and Demis Hassabis. It raises important questions about the ethical stewardship of AI and the potential risks involved, making it a significant read for understanding the broader implications of AI on society.

About Author


Parmy Olson

Parmy Olson is a Bloomberg Opinion columnist covering technology regulation, artificial intelligence, and social media. A former reporter for the Wall Street Journal and Forbes, she is the author of We Are Anonymous and a recipient of the Palo Alto Networks Cybersecurity Canon Award. Olson has been writing about artificial intelligence systems and the money behind them for seven years. Her reporting on Facebook's $19 billion acquisition of WhatsApp and the subsequent fallout resulted in two Forbes cover stories and two honourable mentions in the SABEW business journalism awards. At the Wall Street Journal she investigated companies that exaggerated their AI capabilities and was the first to report on a secret effort at Google's top AI lab to spin out from the company in order to control any artificial superintelligence it might create.


