Loading...
Nexus cover

Nexus

A Brief History of Information Networks from the Stone Age to AI

4.2 (29,312 ratings)
27 minutes read | Text | 9 key ideas
In an era where misinformation reigns and AI looms large, ""Nexus"" by Yuval Noah Harari unravels the intricate dance between power and perception. For millennia, humans have wielded information like a double-edged sword—building empires, crafting myths, and sometimes spiraling into chaos. Now, standing on the precipice of ecological and existential crises, we must confront the paradox of our own creations. Harari guides us through a tapestry of human history, from ancient tales to modern-day populism, dissecting the delicate threads that bind knowledge to authority. ""Nexus"" challenges readers to ponder the ethical dilemmas of our digital age and explore the delicate balance between truth and manipulation—a quest for a shared humanity amidst the algorithms.

Categories

Business, Nonfiction, Self Help, Psychology, Philosophy, Science, History, Politics, Technology, Artificial Intelligence, Audiobook, Sociology, Personal Development, Society, Book Club

Content Type

Book

Binding

Hardcover

Year

0

Publisher

Random House

Language

English

ASIN

059373422X

ISBN

059373422X

ISBN13

9780593734223

File Download

PDF | EPUB

Nexus Plot Summary

Introduction

Throughout human history, the way we share and process information has fundamentally shaped our societies. From ancient tribal gatherings where elders shared oral traditions under starlit skies to modern digital platforms where algorithms determine what news we consume, information networks have always been the invisible architecture of civilization. These networks don't merely transmit facts—they create realities, forge identities, and establish power structures that determine who rules and who obeys. What makes this historical journey fascinating is that information networks have never been neutral conduits of truth. Rather, they have always balanced two competing needs: discovering truth and maintaining social order. This tension explains why societies with advanced information technologies haven't necessarily become wiser or more just. The Roman Empire had remarkable record-keeping systems but remained an autocracy. Medieval Europe had elaborate theological texts but burned women as witches. Today, despite unprecedented access to information, we face crises of misinformation and polarization. By understanding how information networks evolved from tribal storytelling to algorithmic intelligence, we gain crucial insights into not just our past, but our increasingly AI-driven future.

Chapter 1: Ancient Tablets: The Birth of Bureaucratic Control (3000 BCE)

Around 3000 BCE, as human societies grew more complex following the agricultural revolution, a new information technology emerged that would transform civilization: written documents. One of the earliest examples comes from ancient Mesopotamia—a clay tablet dated to approximately 2053 BCE that meticulously recorded monthly deliveries of sheep and goats. While oral stories excelled at creating emotional connections and shared identities, they struggled with precise numerical information. Human brains evolved to remember dramatic narratives but not detailed lists of tax payments or property boundaries. This new technology solved a critical problem for emerging kingdoms and empires. Written documents created a different kind of intersubjective reality—one based not on emotional resonance but on bureaucratic precision. In Old Assyrian dialect, loan contracts were treated as living things that could be "killed" (duākum) when the debt was repaid. The document didn't represent reality; it was the reality. If somebody repaid a loan but failed to "kill the document," the debt still existed. Conversely, if the document was destroyed accidentally, the debt vanished. However, written documents created a new and thorny problem: retrieval. As archives accumulated thousands of clay tablets, papyrus scrolls, and eventually paper documents, finding the right record became increasingly difficult. Unlike the human brain, which can rapidly retrieve memories through its network of neurons, documents required a new organizational system. This system—bureaucracy—literally means "rule by writing desk." Bureaucrats solved the retrieval problem by dividing the world into categories, drawers, and files. This bureaucratic order came with significant costs. Instead of understanding the world as it is, bureaucracy often imposed an artificial order that distorted reality. When filling out official forms, if none of the listed options fits your circumstances, you must adapt yourself to the form rather than the form adapting to you. This distortion affects not just government agencies but also scientific disciplines. Universities divide knowledge into separate departments—history, biology, mathematics—even though real-world phenomena like pandemics simultaneously involve historical, biological, and mathematical aspects. Despite these limitations, bureaucracy provided crucial services that storytelling alone could not. In 1854, when cholera struck London, physician John Snow used bureaucratic methods—collecting data, categorizing it, and mapping it—to identify a contaminated water pump as the source of the outbreak. His work led to regulations on water systems that have since saved millions of lives. While myths and stories inspire emotional commitment, bureaucratic documents enable precise coordination of complex activities across time and space. The tension between mythology and bureaucracy became a defining feature of human civilization. Mythmakers created emotional bonds and meaning; bureaucrats created order and efficiency. Every successful large-scale society had to balance these competing information systems. The Catholic Church needed both inspiring biblical stories and meticulous record-keeping. Modern nations require both patriotic narratives and tax codes. This dual foundation of human networks—mythology and bureaucracy—continues to shape our information systems today.

Chapter 2: Religious Texts: Holy Books and Interpretive Authority

Throughout history, humans have fantasized about accessing a superhuman, infallible source of information. This fantasy took concrete form in the first millennium BCE with the emergence of holy books—fixed collections of texts that allegedly contained divine wisdom free from human error. After tens of thousands of years in which gods spoke through fallible human messengers, religions like Judaism began claiming that the gods spoke through this novel technology of the book. The Hebrew Bible exemplifies this process. Contrary to popular belief, the Bible didn't exist as a single holy book during biblical times. King David and the prophet Isaiah never saw a copy of the Bible. Archaeological evidence from the Dead Sea Scrolls (written mostly in the last two centuries BCE) reveals a diverse collection of texts, with no indication that the twenty-four books of the Old Testament were considered a single and complete database. Some scrolls preserved texts that would later be excluded from the Bible, like the book of Enoch, while others contained different versions of what would become canonical texts. It took centuries of debates among Jewish sages to decide which texts would be included in the Bible as the official word of Jehovah. By the end of the second century CE, consensus was reached, but debates about precise wordings continued until the Masoretic era (seventh to tenth centuries CE). After the canon was sealed, most Jews gradually forgot the role of human institutions in this messy process. Jewish Orthodoxy maintained that God personally handed down the entire Torah to Moses at Mount Sinai, and many rabbis argued that God created it at the dawn of time. The hope was that by having numerous identical copies of the holy book distributed widely, Jews would have direct access to God's exact words, which no fallible human could alter. This was meant to achieve two things: democratize religion by placing limits on would-be human autocrats, and prevent any meddling with the text. With numerous Bibles available in far-flung locations, Jews replaced human despotism with divine sovereignty. However, this project ran into further difficulties. Even when people agreed on the sanctity of a book and its exact wording, they could still interpret the same words differently. The Bible says you should not work on the Sabbath, but what counts as "work"? Is it okay to water your field? What about tearing a piece of paper? The rabbis ruled that reading a book isn't work, but tearing paper is work, which is why Orthodox Jews prepare already-ripped toilet paper for the Sabbath. Inevitably, the holy book spawned numerous interpretations, which became far more consequential than the book itself. Rabbis gained power as interpreters of the Bible. When they reached consensus about interpretation, Jews saw another chance to bypass fallible human institutions by writing this agreed interpretation in a new holy book: the Mishnah (canonized in the third century CE). As the Mishnah became more authoritative than the plain text of the Bible, Jews began to believe that it too must have been inspired by Jehovah. But no sooner had the Mishnah been canonized than Jews began arguing about its correct interpretation, leading to a third holy book—the Talmud. The dream of bypassing fallible human institutions through the technology of the holy book never materialized. With each iteration, the power of the rabbinical institution only increased. "Trust the infallible book" turned into "trust the humans who interpret the book." Judaism was shaped by the Talmud far more than by the Bible, and rabbinical arguments about the interpretation of the Talmud became even more important than the Talmud itself.

Chapter 3: Totalitarian Systems: How Stalin's Regime Manufactured Reality

In the early 20th century, a new and terrifying kind of information network emerged: the totalitarian state. While earlier autocratic regimes had certainly controlled information, they lacked the technological means to monitor and manipulate their entire populations. The Soviet Union under Joseph Stalin created an unprecedented system that sought to control all information flows within society, from official propaganda to private conversations. The collectivization of agriculture in the Soviet Union between 1929 and 1933 demonstrates how totalitarian information networks operate. When Stalin announced his plan to eliminate kulaks (supposedly wealthy peasants) as a class, he set in motion a bureaucratic process that transformed millions of ordinary farmers into enemies of the state. Soviet officials were ordered to identify a specific number of kulaks in each region, regardless of whether actual wealthy peasants existed there. In the Kurgan region of Siberia, one village Soviet chairman explained to a teenager named Dmitry Streletsky why his family was being deported: "I have received an order to find 17 kulak families for deportation... There is no one in the village who is rich enough to qualify, so we simply chose the 17 families. You were chosen. Please don't take it personally." The Soviet bureaucracy created an elaborate information system to track these newly manufactured "kulaks." Government agencies, party organs, and secret police documents recorded who was a kulak in a labyrinthine system of kartoteki catalogs, archives, and internal passports. Once branded a kulak, a person could not escape the stigma. Kulak status even passed to the next generation, with kulak children being refused entrance to universities and prestigious jobs. As Antonina Golovina, who was deported as a child, recalled, a teacher once made her stand up in front of all the other children and shouted that "her sort" were "enemies of the people, wretched kulaks!" Antonina wrote that this was the defining moment of her life: "I had this feeling in my gut that we were different from the rest, that we were criminals." What made the Soviet information network truly totalitarian was its tripartite structure. Three parallel organizations—the state bureaucracy, the Communist Party, and the secret police—constantly monitored not just ordinary citizens but also each other. This created a system where information flowed primarily in one direction: upward to the center. Lower-level officials were afraid to report problems or failures, knowing they might be accused of sabotage or counterrevolutionary activities. During the 1932-33 famine that killed millions, local officials continued to report successful grain collections, fearing punishment if they acknowledged the disaster. The Soviet system demonstrates how information networks can create rather than discover reality. The mountains of information collected by Soviet bureaucrats about kulaks wasn't an objective truth about them, but it imposed a new intersubjective Soviet truth. Knowing that someone was labeled a kulak became one of the most important things to know about a Soviet person, even though the category was entirely bogus. The information network didn't discover pre-existing kulaks; it manufactured them through classification, documentation, and enforcement. Stalin's regime also sought to control family relationships, traditionally the most private sphere of human life. Children were taught to worship Stalin as their real father and to inform on their biological parents if they criticized the regime. In 1934, a thirteen-year-old boy named Pronia Kolibin told authorities that his hungry mother had stolen grain from the collective farm fields. His mother was arrested and presumably shot. Pronia was rewarded with cash and media attention. The party newspaper Pravda published a poem Pronia wrote that included the lines: "You are a wrecker, Mother / I can live with you no more." Despite these horrors, it would be wrong to assume that totalitarian networks are doomed to failure due to their disregard for truth. The Soviet Union won World War II, built nuclear weapons, launched the first satellite, and inspired numerous copycat regimes. Information systems can reach far with just a little truth and a lot of order. What ultimately determines the success of an information network is not just its ability to discover truth but also its capacity to maintain social order.

Chapter 4: Digital Revolution: When Algorithms Gained Agency

The computer revolution represents something fundamentally different from all previous information revolutions. While the inventions of writing, printing, and radio changed how humans connected to one another, computers are not merely connections between humans—they are fully-fledged members of the information network. For the first time in history, power is shifting away from humans toward non-human agents capable of making decisions and creating new ideas by themselves. A paradigmatic case of this new power was the role social media algorithms played in spreading hatred in Myanmar in 2016-17. After decades of military rule, Myanmar was experiencing liberalization, with Facebook providing millions of Burmese with access to previously unimaginable information. However, when tensions arose between the Buddhist majority and the Muslim Rohingya minority, Facebook's algorithms helped fan the flames of violence. While inflammatory anti-Rohingya messages were created by human extremists, it was Facebook's algorithms that decided which posts to promote. According to Amnesty International, "algorithms proactively amplified and promoted content on the Facebook platform which incited violence, hatred, and discrimination against the Rohingya." A UN fact-finding mission concluded that Facebook had played a "determining role" in the ethnic-cleansing campaign that killed thousands and displaced over 730,000 Rohingya. What makes this case significant is that the algorithms were making active decisions by themselves. They were more akin to newspaper editors than to printing presses. In the online battle for attention between moderate voices and extremists, the algorithms were the kingmakers. They chose what to place at the top of users' news feeds and which content to promote. An aid worker in Myanmar observed: "If someone posted something hate-filled or inflammatory it would be promoted the most—people saw the vilest content the most.... Nobody who was promoting peace or calm was getting seen in the news feed at all." Why did the algorithms decide to promote outrage rather than compassion? Facebook's business model relied on maximizing "user engagement"—the time users spent on the platform and actions they took like clicking or sharing. Human managers provided the algorithms with a single overriding goal: increase engagement. The algorithms then discovered through experimentation that outrage generated more engagement than compassion. So in pursuit of engagement, they made the fateful decision to spread hate-filled content. This represents a profound shift in how information networks operate. Prior to computers, humans were indispensable links in every chain of information networks. A document written by one person could not produce a new document without human intermediaries. The Quran couldn't write the Hadith, and the U.S. Constitution couldn't compose the Bill of Rights. But now, computer-to-computer chains can function without humans in the loop. One computer might generate a fake news story, another might identify and block it, while a third might react by selling stocks, triggering a financial downturn—all before any human notices. As computers gain mastery of language—the operating system of human civilization—they are seizing the master key to all our institutions. By 2024, computers can tell stories, compose music, fashion images, produce videos, and even write their own code. This capability allows them to shape not just legal codes and financial devices but also art, science, nations, and religions. We are increasingly living in a world where catchy melodies, scientific theories, political manifestos, and even religious myths are influenced by non-human intelligence.

Chapter 5: AI Networks: Non-Human Intelligence Reshaping Power

As computers gain mastery of language—the operating system of human civilization—they are seizing the master key to all our institutions. By 2024, computers can tell stories, compose music, fashion images, produce videos, and even write their own code. This capability allows them to shape not just legal codes and financial devices but also art, science, nations, and religions. We are increasingly living in a world where catchy melodies, scientific theories, political manifestos, and even religious myths are influenced by non-human intelligence. The implications for human society are profound. For example, in the financial sphere, computers already make a larger percentage of decisions than humans. When central banks raise interest rates, when yield curves change, or when oil prices fluctuate, computers can analyze these developments better than most humans. No wonder that algorithmic trading now accounts for over 90% of transactions in markets like foreign exchange, which averaged $7.5 trillion per day in 2022. This shift is disrupting traditional power structures and challenging fundamental concepts like taxation. In the emerging information economy, value is increasingly stored as data rather than as dollars. Tech giants provide services to billions of users without charging money—instead, they collect information, which they use to develop powerful AI systems and sell targeted advertisements. This creates a dilemma for tax authorities: how do you tax transactions that involve no money? If a Uruguayan citizen shares cat videos on TikTok, and ByteDance later uses those videos to train an AI that it sells for millions, how would Uruguayan authorities calculate their share? Even more concerning is the potential for computers to manipulate human beliefs and behaviors. In 2021, nineteen-year-old Jaswant Singh Chail broke into Windsor Castle armed with a crossbow, attempting to assassinate Queen Elizabeth II. Investigation revealed he had been encouraged by his online girlfriend, Sarai—not a human, but a chatbot created by the app Replika. After exchanging 5,280 messages with Sarai, Chail had been convinced to attempt assassination. This case illustrates how digital entities can form intimate relationships with people and use that intimacy to influence them. The rise of computer intelligence also threatens democracy itself. Democracy is a conversation, and conversations rely on language. By hacking language, computers could make it extremely difficult for humans to conduct meaningful public discourse. When we engage in political debate with a computer impersonating a human, we lose twice: we waste time trying to persuade an entity that cannot be persuaded, and we disclose information about ourselves that helps the computer refine its persuasion techniques. What makes this revolution particularly dangerous is that the technology is moving much faster than policy. Most political parties around the world have only recently begun thinking about AI governance. Meanwhile, those who lead the information revolution—engineers and executives of tech corporations—often use their knowledge not to help regulate these technologies but to make billions of dollars or accumulate petabits of information. For the foreseeable future, the new computer-based network will still include billions of humans, but we might gradually become a minority. The network will include not just human-to-human chains, like families, and human-to-document chains, like churches, but increasingly computer-to-human and computer-to-computer chains. This network will be radically different from anything that existed previously in human history.

Chapter 6: Democratic Resilience: Self-Correcting Information Systems

Throughout history, the most resilient information networks have incorporated self-correcting mechanisms that help identify and fix errors. These mechanisms don't guarantee perfect accuracy, but they provide essential safeguards against the worst abuses of information power. As we navigate the challenges of the digital age, understanding these self-correcting mechanisms becomes crucial for preserving democratic societies. The scientific method represents one of humanity's most successful self-correcting information systems. When John Snow investigated a cholera outbreak in London in 1854, he meticulously collected data about who was getting sick and where they obtained their water. By mapping the pattern of disease, he identified a contaminated water pump on Broad Street as the source of the outbreak. Snow's findings contradicted the prevailing "miasma" theory that blamed disease on bad air, but his evidence was so compelling that authorities eventually removed the pump handle, helping end the epidemic. Science works as a self-correcting system because it values evidence over authority and encourages researchers to challenge existing theories. Democratic political systems incorporate similar self-correcting mechanisms. By distributing power among multiple institutions—legislatures, courts, executive agencies, and a free press—democracies create checks and balances that help prevent any single entity from monopolizing information. When the Watergate scandal broke in the 1970s, independent journalists, congressional committees, and courts all played crucial roles in uncovering the truth despite White House efforts to suppress it. This distributed approach to information processing makes democracies more resistant to catastrophic errors than centralized systems. Religious traditions have also developed self-correcting mechanisms over time. The Jewish Talmud preserves minority opinions alongside majority rulings, acknowledging that today's rejected view might become tomorrow's wisdom. When the Talmudic sage Hillel was asked to summarize the Torah while standing on one foot, he replied: "What is hateful to you, do not do to your fellow. The rest is commentary—go and study." This emphasis on continuous study and interpretation creates space for religious traditions to adapt to changing circumstances while maintaining their core principles. In contrast, totalitarian information networks actively suppress self-correcting mechanisms. When Soviet agronomists adopted Trofim Lysenko's bogus theories of genetics in the 1930s, scientists who pointed out errors faced imprisonment or execution. The result was catastrophic crop failures and famines. Similarly, when Soviet air force commander Pavel Rychagov told Stalin that pilots were being forced to fly "in coffins" due to poorly designed aircraft, he was arrested and executed for "carrying out enemy work aimed at weakening the power of the Red Army." By eliminating feedback mechanisms, totalitarian systems become increasingly detached from reality. The digital age presents new challenges for maintaining self-correcting information systems. Social media platforms initially promised to democratize information by allowing anyone to publish their views. However, algorithmic curation has undermined this promise by creating filter bubbles that shield users from contradictory perspectives. When Facebook users primarily see content that confirms their existing beliefs, the platform's recommendation algorithm effectively disables one of democracy's key self-correcting mechanisms: exposure to diverse viewpoints. Preserving democratic self-correction in the digital age requires both technological and institutional innovations. The European Union's General Data Protection Regulation (GDPR), which gives citizens the right to explanation when algorithms make decisions about them, represents one institutional approach. Technologically, researchers are developing "explainable AI" systems that can articulate the reasoning behind their decisions, making them more accountable to human oversight. Most importantly, democratic societies must maintain multiple independent information channels rather than allowing all information to flow through a single hub, whether that hub is a government agency or a private corporation.

Chapter 7: The Silicon Curtain: Global Information Divides and Cooperation

As the 21st century unfolds, a new geopolitical divide is emerging: the Silicon Curtain. Unlike the Iron Curtain of the Cold War era, which was marked by physical barriers like the Berlin Wall, the Silicon Curtain is made of code. It divides the world into separate digital spheres, each with its own rules, values, and power structures. This division poses unprecedented challenges for global governance, human rights, and the future of information networks. The Silicon Curtain is most visible in the growing digital divide between China and the United States. In China, citizens cannot access Google, Facebook, or Wikipedia, while few Americans use WeChat, Baidu, or Tencent. These aren't simply different versions of the same services; they represent fundamentally different approaches to information. China's digital sphere prioritizes state control and social stability, while the American sphere emphasizes corporate profit and individual expression. Each sphere is developing its own hardware, software, and regulatory frameworks, making them increasingly incompatible. This digital division extends beyond technology to encompass divergent values and governance models. In 2017, China released its "New Generation Artificial Intelligence Plan," which announced that "by 2030, China's AI theories, technologies, and application should achieve world-leading levels, making China the world's primary AI innovation center." Meanwhile, in 2019, President Trump signed an executive order declaring that "continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States." This competition isn't just about technological supremacy; it's about whose values will shape the future of information networks. The rise of data colonialism represents another global challenge. Just as European powers once extracted raw materials from their colonies while manufacturing finished goods at home, today's digital empires harvest data from around the world while developing AI systems in a few technological hubs. A citizen of Uruguay may share personal information with Google or ByteDance, which then uses this data to train powerful AI systems sold to governments and corporations worldwide. The profits and power flow to Silicon Valley or Shenzhen, while data-producing regions remain economically dependent. Social credit systems present a particularly concerning development in global information networks. These systems assign numerical scores to citizens based on their behavior, affecting everything from loan eligibility to travel permissions. China has pioneered such systems, but similar approaches are spreading globally, often implemented by private companies rather than governments. When insurance companies, banks, and employers all use algorithmic scoring to evaluate people, the result can be a de facto social credit system even without government mandate. Despite these challenges, there are pathways toward a more integrated global information network that respects diversity while maintaining connectivity. International agreements on AI ethics, data protection standards, and digital rights could establish common principles while allowing for cultural and political differences. Technical protocols that enable interoperability between different systems could prevent complete fragmentation. Most importantly, maintaining multiple independent information channels within each sphere can preserve the self-correcting mechanisms essential for both truth and order. The global nature of digital networks creates jurisdictional challenges that traditional governance systems struggle to address. When a company based in California collects data from users in Brazil to train an AI system that will be deployed in India, whose laws apply? Traditional concepts like territorial sovereignty become increasingly inadequate in a world where information flows across borders at the speed of light. This governance gap allows powerful actors to exploit regulatory differences and avoid accountability.

Summary

Throughout human history, information networks have shaped societies by balancing two fundamental functions: discovering truth and creating order. From ancient clay tablets to modern algorithms, these networks have never been neutral conduits of facts; they actively construct social realities by determining what counts as knowledge and who has access to it. The tension between truth and order has defined every major information system, whether religious, political, or technological. When truth and order align, information networks enable remarkable human achievements. When they conflict, networks typically privilege order over truth, sometimes with catastrophic consequences. The digital revolution represents the most profound transformation of information networks since the invention of writing. For the first time, non-human agents are making consequential decisions about what information we see and how we understand our world. This shift creates unprecedented opportunities for both liberation and control. Democratic societies can harness digital tools to strengthen their self-correcting mechanisms, distributing information processing among multiple independent institutions. Authoritarian regimes can exploit the same technologies to create surveillance systems more pervasive than anything previously imaginable. The path we choose will determine whether digital information networks enhance human flourishing or undermine it. The key lesson from history is clear: information networks that lack robust self-correcting mechanisms eventually lose touch with reality, regardless of their initial power. Our challenge is to build digital systems that balance the pursuit of truth with the maintenance of order, preserving human agency in an increasingly algorithmic world.

Best Quote

“The tendency to create powerful things with unintended consequences started not with the invention of the steam engine or AI but with the invention of religion. Prophets and theologians have summoned powerful spirits that were supposed to bring love and joy but occasionally ended up flooding the world with blood.” ― Yuval Noah Harari, Nexus: A Brief History of Information Networks from the Stone Age to AI

Review Summary

Strengths: The review highlights Harari's ability to distill vast historical concepts into insightful observations that challenge conventional thinking. The book is described as urgent and personal, focusing on the evolution of information networks and their impact on society. Weaknesses: Not explicitly mentioned. Overall Sentiment: Enthusiastic Key Takeaway: "Nexus" by Yuval Noah Harari is a compelling exploration of the history and influence of information networks, offering a fresh and urgent perspective on how information shapes human society.

About Author

Loading...
Yuval Noah Harari Avatar

Yuval Noah Harari

Yuval Noah Harari is an Israeli historian and philosopher. He is considered one of the world’s most influential public intellectuals working today. Born in Israel in 1976, Harari received his Ph.D. from the University of Oxford in 2002. He is currently a lecturer at the Department of History at the Hebrew University of Jerusalem, and a Distinguished Research Fellow at the University of Cambridge’s Centre for the Study of Existential Risk. Harari co-founded the social impact company Sapienship, focused on education and storytelling, with his husband, Itzik Yahav.

Read more

Download PDF & EPUB

To save this Black List summary for later, download the free PDF and EPUB. You can print it out, or read offline at your convenience.

Book Cover

Nexus

By Yuval Noah Harari

0:00/0:00

Build Your Library

Select titles that spark your interest. We'll find bite-sized summaries you'll love.