
New Dark Age
Technology and the End of the Future
Categories
Nonfiction, Philosophy, Science, Politics, Technology, Artificial Intelligence, Sociology, Essays, Theory, Internet
Content Type
Book
Binding
Kindle Edition
Year
2018
Publisher
Verso
Language
English
ASIN
B076NR24RZ
ISBN
178663550X
ISBN13
9781786635501
New Dark Age Plot Summary
Introduction
We live in an age defined by computational systems and data-driven decision-making, yet our capacity to understand the world seems to be diminishing rather than increasing. Despite unprecedented access to information, humanity faces a paradoxical crisis where more data often leads to less clarity and worse outcomes. This crisis of computational knowledge stems from a fundamental misunderstanding of what computation can and cannot do for human cognition and society. The computational paradigm promises that with sufficient data and processing power, we can model, predict, and control increasingly complex phenomena. However, this promise contains within it several dangerous assumptions about the nature of knowledge, complexity, and human understanding. As our technological systems grow more sophisticated, they also become more opaque, creating a world where critical decisions are increasingly delegated to algorithmic processes we cannot fully comprehend. The consequences of this delegation extend far beyond technical concerns, reaching into politics, economics, social structures, and even our conception of reality itself. Understanding how computational thinking has reshaped our approach to knowledge is essential for anyone attempting to navigate our increasingly complex and networked world.
Chapter 1: The Technological Paradox: More Information, Less Understanding
The defining paradox of our age is that unprecedented access to information has not produced correspondingly improved understanding. Instead, we find ourselves drowning in data while starving for wisdom. The exponential growth in computational power and information availability has outpaced our ability to make sense of it all. We now generate more data in a single day than was created in entire centuries of human history, yet this abundance has not translated into proportionally better decisions or deeper insights.

This paradox manifests across domains. In finance, sophisticated algorithmic trading systems process market data at speeds impossible for human comprehension, yet financial crises continue to blindside experts. In medicine, we sequence genomes and analyze vast clinical datasets, yet struggle to translate these insights into improved health outcomes at scale. Even in our personal lives, we have access to more information than ever before, yet studies show increased polarization and entrenchment in beliefs rather than convergence toward evidence-based consensus.

The root of this paradox lies in a fundamental misunderstanding of the relationship between information and understanding. Understanding requires not just raw data, but context, interpretation, and integration into existing knowledge frameworks. Computational systems excel at processing information but struggle with these more nuanced aspects of cognition. They can identify patterns but often miss the meaningful relationships that constitute true understanding. The result is that computational approaches tend to fragment knowledge into discrete, manageable units that can be processed efficiently but lose the holistic connections that give knowledge its meaning.

Further complicating matters is the opacity of modern computational systems. As these systems grow more complex, they become black boxes whose internal workings are incomprehensible even to their creators. Machine learning algorithms, for instance, may achieve impressive results without anyone—including their programmers—fully understanding how they arrive at their conclusions. This opacity creates a dangerous feedback loop: we rely increasingly on systems we cannot fully understand, which in turn shapes our conception of what constitutes knowledge itself.

The consequence is a profound shift in our relationship to knowledge. Traditional epistemology valued coherent explanations and causal understanding. The computational paradigm, in contrast, often prioritizes prediction over explanation, correlation over causation, and efficiency over comprehensibility. This shift has real-world consequences, as decisions affecting millions of lives are increasingly made by systems optimized for specific metrics rather than holistic understanding.
Chapter 2: Computational Thinking and Its Dangerous Limitations
Computational thinking represents a specific mode of approaching problems that has become increasingly dominant across disciplines. At its core, it involves breaking down complex problems into discrete, manageable components that can be processed algorithmically. This approach has proven remarkably effective for certain classes of problems, but its limitations become dangerous when it is applied universally as the primary or only valid mode of thought.

The first limitation concerns complexity itself. Real-world systems—whether ecosystems, economies, or human societies—are characterized by emergent properties that arise from the interactions of their components. These emergent phenomena cannot be fully captured by analyzing individual components in isolation. Computational thinking tends to adopt a reductionist approach, assuming that understanding the parts will lead to understanding the whole. However, complex systems exhibit nonlinear behaviors where small changes can produce disproportionate effects, rendering straightforward algorithmic approaches insufficient.

A second critical limitation involves the treatment of uncertainty and ambiguity. Computational models require well-defined parameters and clear rule sets to function effectively. Yet many of the most important aspects of human experience resist such definition. Values like justice, freedom, or dignity cannot be easily quantified or optimized. When computational thinking attempts to incorporate these elements, it often does so by reducing them to measurable proxies that capture only a fraction of their meaning. The result is systems that optimize for what can be measured rather than what matters.

Computational thinking also struggles with contextual knowledge. Human expertise involves tacit understanding gained through experience that cannot be easily formalized into explicit rules. Doctors, judges, teachers, and other professionals rely on this tacit knowledge to make nuanced judgments. Computational approaches tend to discount this form of knowledge in favor of explicit, codifiable information, potentially discarding crucial insights in the process.

Perhaps most concerning is how computational thinking shapes our conception of agency and responsibility. When algorithms make decisions, questions of accountability become muddled. Is the programmer responsible? The user? The algorithm itself? This diffusion of responsibility creates what philosophers call the "problem of many hands," where no single actor can be held accountable for algorithmic outcomes. This undermines traditional notions of moral responsibility and makes systemic biases particularly difficult to address.

The limitations of computational thinking are not merely theoretical concerns. They manifest in real-world failures: predictive policing algorithms that reinforce racial biases, content recommendation systems that promote extremism, or healthcare algorithms that discriminate against vulnerable populations. These failures occur not despite computational thinking but because of its inherent limitations when applied to domains requiring contextual judgment, ethical reasoning, and recognition of human complexity.
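The chapter's claim that small changes can produce disproportionate effects is easiest to see in a standard toy model rather than in prose. The sketch below is illustrative only and is not drawn from Bridle's book: it iterates the logistic map, a textbook nonlinear system, from two starting values that differ by one part in a million, and shows how quickly the trajectories part ways. This is why simply gathering more precise data does not guarantee better long-range prediction.

```python
# Illustrative sketch (not from the book): the logistic map, a standard
# example of how a simple nonlinear rule defeats long-range prediction.
# Two trajectories that start almost identically end up completely different,
# so more decimal places of input do not buy proportionally more foresight.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate x -> r * x * (1 - x) and return the visited values."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # differs by one part in a million

for step in (0, 10, 30, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}  "
          f"gap = {abs(a[step] - b[step]):.6f}")
```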
Chapter 3: Network Effects: How Technology Amplifies Inequality and Opacity
Network effects—where a service becomes more valuable as more people use it—represent one of the most powerful dynamics in modern technology. However, these effects do not distribute their benefits equally; instead, they tend to concentrate power in ways that create and amplify various forms of inequality while simultaneously making these power structures increasingly opaque.

Digital networks naturally tend toward centralization through winner-take-all dynamics. The platform with the most users attracts still more users, creating self-reinforcing cycles of growth. This centralization manifests economically as unprecedented market concentration. Today's dominant technology companies control market shares that would have triggered antitrust action in previous eras. This concentration is not merely economic but extends to information access and infrastructure control. A handful of companies now determine the architecture of digital spaces where billions live, work, and form communities.

The opacity of these networked systems compounds their unequal effects. Complex algorithms determine what information we see, which opportunities we're offered, and how our data is used, yet the functioning of these systems remains largely invisible to those affected by them. This creates fundamental asymmetries of knowledge and power. Platform owners possess comprehensive data about users, while users have minimal insight into how platforms operate or how their data is leveraged. Even experts often cannot fully explain why algorithmic systems make specific decisions, a problem that grows more acute as machine learning systems become more sophisticated.

These inequalities extend beyond individual platforms to shape broader social structures. Algorithmic systems increasingly mediate access to essential resources and opportunities—from credit and housing to education and employment. When these systems reflect and amplify existing social biases, they can systematically disadvantage already marginalized groups. Research consistently shows that facial recognition systems perform worse on darker-skinned faces, that automated hiring systems can discriminate against women, and that predictive algorithms in criminal justice can perpetuate racial disparities. The network effects that drive platform growth ensure these biased systems rapidly achieve scale and influence.

The geographical distribution of technological benefits further illustrates how network effects concentrate advantage. Digital infrastructure remains highly uneven, with high-speed connectivity concentrated in wealthy urban centers. Even as technology companies achieve global reach, their economic benefits cluster in specific regions, creating what economists call "superstar cities" with astronomical living costs alongside regions facing economic stagnation. The same network dynamics that create trillion-dollar companies also produce new forms of spatial inequality.

Perhaps most troubling is how these systems interface with democratic governance. Network effects have enabled the rapid spread of misinformation and the formation of ideologically isolated communities. Platform algorithms optimized for engagement often promote sensational, divisive content that undermines shared understanding. Meanwhile, the complexity and opacity of these systems make effective regulation exceptionally difficult. Policymakers struggle to understand systems that even their creators cannot fully explain, creating substantial governance gaps.
As networked technologies continue to mediate more aspects of human experience, these dynamics threaten to create self-reinforcing cycles of advantage and disadvantage. Those with access to technological tools, technical literacy, and network centrality gain increasing leverage, while others face growing exclusion. Without intervention, network effects promise not a democratizing technological future but rather new, more entrenched forms of inequality and opacity.
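The winner-take-all dynamic described above can be made concrete with a minimal simulation. The following sketch is an illustration under a standard preferential-attachment assumption, not a model taken from the book: each new user joins a platform with probability proportional to its current user base, and an early, largely accidental lead snowballs into lasting concentration.

```python
# Illustrative sketch (an assumption-driven toy, not the book's model):
# a preferential-attachment market. Each new user joins a platform with
# probability proportional to that platform's current share, so an early,
# essentially random lead compounds into lasting dominance.
import random

def simulate_market(platforms=5, new_users=100_000, seed=42):
    random.seed(seed)
    users = [1] * platforms          # every platform starts with one user
    for _ in range(new_users):
        # random.choices with weights implements "the big get bigger"
        winner = random.choices(range(platforms), weights=users)[0]
        users[winner] += 1
    return users

final = simulate_market()
total = sum(final)
for rank, u in enumerate(sorted(final, reverse=True), start=1):
    print(f"rank {rank}: {u / total:.1%} of users")
```

Re-running with different seeds changes which platform ends up on top, but rarely whether one platform dominates, which is the point: under these assumptions, concentration is a structural outcome rather than a verdict on product quality.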
Chapter 4: The New Dark Age: When Systems Exceed Human Comprehension
We are entering an era where our technological systems have surpassed our collective ability to fully comprehend them—a new dark age defined not by absence of knowledge but by its impenetrable abundance. This condition represents a profound epistemological crisis, where increasing amounts of information paradoxically lead to decreased understanding and diminished capacity for effective action.

The scale and complexity of contemporary technological systems have grown beyond what any individual—or even any team—can fully grasp. Modern software applications contain millions of lines of code interacting in ways that defy complete analysis. Machine learning systems process such vast datasets and develop such intricate internal representations that even their designers cannot fully explain their decision-making processes. Financial markets operate at nanosecond speeds through the interaction of thousands of algorithmic trading systems. Each of these domains has become a world unto itself, comprehensible only partially and through specialized technical languages.

This complexity manifests as a form of cognitive overload that affects even technical experts. Software engineers routinely work with libraries and frameworks they don't fully understand. Scientists increasingly rely on computational tools whose inner workings remain partially opaque. Financial regulators struggle to keep pace with algorithmic innovations. In each case, specialists find themselves working at the edges of their comprehension, making decisions based on partial understanding. When these systems interact—as they increasingly do—the complexity compounds exponentially.

The consequences of this limited comprehension become visible in system failures. Flash crashes in financial markets, unexpected behaviors in autonomous systems, and unpredictable interactions between algorithmic systems reveal the limits of our understanding. These failures aren't simply technical malfunctions but symptoms of a deeper epistemological problem: we have built systems whose behaviors emerge from interactions too numerous and subtle for human minds to fully anticipate or comprehend.

This incomprehensibility extends beyond purely technical domains to affect social and political spheres. Complex sociotechnical systems like social media platforms shape public discourse through mechanisms that elude straightforward analysis. The interplay between algorithmic content recommendations, user behavior, and broader social dynamics creates patterns that neither platform architects nor users fully grasp. The result is often surprising and disturbing emergent behaviors: the rapid spread of misinformation, the formation of extremist communities, or unexpected electoral outcomes.

Perhaps most troubling is how this incomprehensibility affects our ability to govern these systems. Democratic governance presupposes that citizens and representatives can understand the systems they seek to regulate. Yet today's technological systems resist such understanding. How can we effectively regulate systems whose behaviors even their designers cannot fully predict? How can citizens meaningfully consent to participate in systems whose operations they cannot comprehend? These questions reveal how technological complexity challenges fundamental assumptions of democratic theory. The new dark age thus represents not merely a technical challenge but a governance crisis.
As our systems exceed human comprehension, traditional mechanisms of oversight, accountability, and democratic control face unprecedented difficulties. Without new approaches to understanding and governing complex systems, we risk creating a world where technological systems operate beyond meaningful human direction—a darkness defined not by absence of information but by our inability to make sense of it.
Chapter 5: Confronting Technological Opacity: New Ways of Thinking
Confronting the crisis of technological opacity requires developing new cognitive approaches and conceptual frameworks that can help us navigate increasingly complex systems. These approaches must acknowledge the limitations of traditional analytical methods while providing practical strategies for understanding and governing technologies that resist straightforward comprehension.

One promising approach involves developing what some scholars call "systems literacy"—the ability to understand the behavior of complex systems without necessarily comprehending all their internal components. Systems literacy focuses on identifying patterns, feedback loops, and emergent behaviors rather than attempting exhaustive analysis of individual elements. It recognizes that in complex systems, the relationships between components often matter more than the components themselves. This shift in perspective allows us to develop useful models of system behavior even when complete understanding remains elusive.

Complementing systems literacy is the practice of "cognitive mapping," which helps us orient ourselves within complex information environments. Cognitive mapping involves creating mental or visual representations that organize complex phenomena into comprehensible patterns. These maps necessarily simplify reality, but they do so strategically, highlighting relationships and dynamics most relevant to specific contexts. Unlike reductive approaches that claim to capture objective reality, cognitive mapping acknowledges its own partiality while still providing navigational value in complex terrains.

Another essential approach involves embracing distributed cognition and collective intelligence. If individual comprehension has limits, then we must develop better methods for pooling intellectual resources across diverse perspectives. This means creating institutional structures and technological tools that facilitate collaborative sense-making. Examples include multidisciplinary research teams, open-source knowledge projects, and participatory modeling approaches. These collective approaches can achieve insights beyond what any individual expert could develop alone.

Equally important is developing comfort with uncertainty and provisionality. Traditional epistemological approaches often seek certainty and finality, but navigating complex systems requires embracing a more provisional relationship to knowledge. This means treating understanding as an ongoing process rather than a fixed achievement, remaining open to revising models as new information emerges, and developing tolerance for ambiguity. Such cognitive flexibility becomes increasingly valuable as systems evolve in unpredictable ways.

New approaches to transparency also prove crucial for confronting technological opacity. Rather than seeking total transparency—which remains impossible for truly complex systems—we might instead pursue "meaningful transparency" that illuminates system behaviors most relevant to human values and concerns. This could involve developing new interfaces that make algorithmic decisions more interpretable, creating tiered disclosure systems that provide different levels of detail for different stakeholders, or designing systems with built-in accountability mechanisms that track consequential decisions.

Finally, confronting technological opacity requires developing new ethical frameworks appropriate to conditions of partial understanding.
Traditional ethical approaches often presuppose complete knowledge of causes and effects, but this assumption breaks down in complex sociotechnical systems. We need ethical frameworks that acknowledge uncertainty while still providing guidance for action—frameworks that emphasize precaution when facing potentially irreversible consequences, that distribute responsibility appropriately across complex networks of actors, and that remain attentive to power asymmetries in who benefits from and who bears the risks of technological opacity. These approaches will not eliminate complexity or restore some imagined era of complete comprehension. Rather, they offer strategies for meaningful navigation and governance even under conditions of partial understanding—ways of finding light in the new darkness.
Chapter 6: Beyond Computation: Embracing Uncertainty in Complex Systems
Moving beyond the limitations of purely computational approaches requires embracing uncertainty as a fundamental feature of complex systems rather than a problem to be eliminated. This shift represents not a retreat from rationality but rather a more sophisticated understanding of the relationship between knowledge, uncertainty, and effective action in complex domains.

Traditional computational approaches often treat uncertainty as noise to be filtered out or as a temporary gap in knowledge to be filled with more data. This attitude reflects a deterministic worldview that assumes perfect prediction is theoretically possible given sufficient information and processing power. However, complex systems research increasingly demonstrates that certain forms of uncertainty are irreducible—they stem from fundamental properties like nonlinearity, emergence, and open-endedness rather than from temporary ignorance. Weather systems, ecosystems, economies, and human societies all exhibit behaviors that resist perfect prediction regardless of how much data we collect.

Embracing this irreducible uncertainty requires rethinking fundamental aspects of how we approach knowledge and decision-making. Rather than seeking to eliminate uncertainty through ever more elaborate models, we might instead develop what complexity theorist Stuart Kauffman calls "living with uncertainty"—approaches that acknowledge the limits of prediction while still enabling effective navigation of complex terrains. This means shifting from prediction to preparedness, developing robust strategies that perform reasonably well across a range of possible futures rather than optimizing for a single predicted outcome.

Practically, this shift manifests in approaches like scenario planning, which explores multiple plausible futures without claiming to predict which will occur, or adaptive management, which treats policies as experiments to be adjusted based on observed outcomes rather than as fixed solutions. These approaches recognize that in complex systems, we often learn more by acting and observing than by attempting to model everything in advance. They embrace a more iterative relationship between knowledge and action, where doing becomes a form of knowing.

This perspective also helps us recognize the value of diverse forms of knowledge beyond computation. While computational approaches excel at processing large datasets and identifying certain patterns, they struggle with contextual understanding, tacit knowledge, and value-laden judgments. Human expertise, intuition, local knowledge, and ethical reasoning all provide crucial insights that complement computational approaches. By integrating these diverse epistemologies rather than subordinating everything to computation, we develop more robust understanding of complex phenomena.

Embracing uncertainty also means rethinking how we communicate about complex systems. Current practices often obscure uncertainty, presenting model outputs as definitive predictions or reducing complex phenomena to misleadingly precise metrics. More honest communication would acknowledge ranges of possibility, highlight key uncertainties, and make explicit the assumptions underlying different models. Such transparency might initially seem to undermine authority, but ultimately it builds more sustainable trust by aligning claims more closely with actual knowledge.

Perhaps most profoundly, embracing uncertainty challenges us to reconsider what constitutes progress in knowledge.
Rather than measuring progress solely by increased precision or improved prediction, we might also value deepened understanding of a system's fundamental dynamics, clearer articulation of the limits of our knowledge, or improved capacity to adapt to surprising developments. This expanded conception of epistemic progress opens space for more nuanced engagement with complexity. The path beyond computation thus involves not abandoning rationality but expanding it to encompass a more sophisticated understanding of complexity, uncertainty, and the diversity of knowledge forms. It means developing comfort with provisionality, embracing the iterative nature of understanding, and recognizing that some of the most important aspects of complex systems may always exceed our capacity for complete formalization.
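The difference between optimizing for a single forecast and choosing a strategy that holds up across several plausible futures can be shown with a small worked example. The payoff numbers and strategy names below are hypothetical, invented for illustration rather than taken from the book: a point forecast of a boom favors the specialised strategy, while a simple robustness rule (maximize the worst case) favors the diversified one.

```python
# Illustrative sketch with hypothetical payoffs (not from the book):
# comparing "optimize for the single most likely forecast" with a robust
# rule that looks at how each strategy fares across all plausible scenarios.

# Payoff of each strategy under three plausible futures.
payoffs = {
    "specialise": {"boom": 100, "steady": 40, "crash": -60},
    "diversify":  {"boom":  60, "steady": 50, "crash":  10},
    "hold_cash":  {"boom":   5, "steady":  5, "crash":   8},
}

forecast = "boom"  # the single scenario a point forecast says will happen

best_for_forecast = max(payoffs, key=lambda s: payoffs[s][forecast])
most_robust = max(payoffs, key=lambda s: min(payoffs[s].values()))

print("Optimised for the forecast:", best_for_forecast)  # specialise
print("Robust across scenarios:   ", most_robust)        # diversify
print("Worst case of each:",
      {s: min(p.values()) for s, p in payoffs.items()})
```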
Chapter 7: Guardianship: Responding to Technological Complexity
Confronting the challenges of technological complexity requires moving beyond purely technical solutions to embrace a broader ethic of guardianship—a commitment to responsible stewardship over systems whose full consequences we cannot predict or control. This guardianship ethic represents an essential response to a world where technological systems increasingly exceed human comprehension while profoundly shaping human possibilities.

At its core, technological guardianship acknowledges both the power and the limitations of human agency in complex systems. It recognizes that while we cannot achieve perfect control or complete understanding, we retain significant responsibility for how technological systems develop and operate. This responsibility extends beyond narrow technical optimization to encompass broader questions about what values these systems embody, whose interests they serve, and what futures they make possible or foreclose.

Practicing guardianship begins with expanding how we evaluate technological systems. Rather than focusing exclusively on efficiency, performance, or profitability, guardianship demands attention to broader impacts: environmental sustainability, social equity, human dignity, and long-term flourishing. It requires moving beyond short-term metrics to consider intergenerational impacts and systemic effects that may emerge gradually but irreversibly. This expanded evaluation framework helps identify potential harms that narrow technical assessment might miss.

Guardianship also demands a more inclusive approach to technological governance. If no single perspective can fully comprehend complex technological systems, then meaningful governance requires incorporating diverse viewpoints—including those of marginalized communities often excluded from technical decision-making. This means creating institutions and processes that facilitate broad participation, that value different forms of expertise, and that provide meaningful transparency even to non-specialists. Such inclusive governance can help identify blind spots in technical assessment and ensure that technological development aligns with diverse human needs.

A commitment to guardianship further requires embracing the precautionary principle when facing potentially irreversible consequences. This means proceeding carefully with technologies that pose systemic risks, conducting thorough testing in controlled environments before widespread deployment, establishing robust monitoring systems to detect emerging problems, and maintaining the capacity to modify or withdraw technologies that demonstrate unexpected harms. Far from opposing innovation, such precaution makes innovation more sustainable by preventing catastrophic failures that could trigger backlash against entire technological domains.

Practicing guardianship also means nurturing technological diversity rather than allowing premature convergence on single approaches. Monocultures—whether in agriculture, finance, or technology—create fragility by betting everything on one approach. Maintaining diverse technological approaches provides insurance against the inevitable limitations and failures of any single system. This technological pluralism requires resisting the winner-take-all dynamics common in digital domains and instead cultivating institutional and economic arrangements that support multiple technological trajectories.

Perhaps most fundamentally, guardianship requires maintaining human agency within increasingly automated systems.
This means designing technologies that augment rather than replace human judgment, that make their operations interpretable to human oversight, and that respect human autonomy. It means preserving space for democratic deliberation about technological futures rather than treating them as inevitable or determined solely by technical imperatives. And it means cultivating the human capacities—ethical reasoning, contextual judgment, and democratic citizenship—that automated systems cannot replace. The guardianship ethic thus offers a middle path between uncritical technological optimism and reflexive technological pessimism. It acknowledges the genuine benefits technological systems can provide while remaining clear-eyed about their limitations and potential harms. Most importantly, it insists that even as our technological systems grow more complex, we retain both the capability and the responsibility to shape them in accordance with human values and democratic choices.
Summary
The crisis of computational knowledge reveals how our increasing reliance on data-driven systems paradoxically diminishes our capacity for genuine understanding. By mistaking calculation for comprehension, we have created a technological environment that exceeds human cognition while simultaneously reshaping it. The most profound insight emerging from this analysis is that true knowledge cannot be reduced to information processing alone—it requires embracing uncertainty, contextual judgment, and diverse forms of understanding that computational approaches systematically exclude. This recognition opens the possibility of moving beyond computational reductionism toward more nuanced approaches to complexity. This crisis demands more than technical solutions; it requires fundamentally rethinking our relationship to knowledge and technology. The path forward involves developing new cognitive approaches that can navigate complexity without requiring complete comprehension, creating more inclusive processes for technological governance, and cultivating an ethic of guardianship over systems whose full consequences we cannot predict. Rather than surrendering agency to computational systems or retreating into simplistic rejection of technology, we must develop the intellectual and institutional tools to maintain meaningful human direction even under conditions of partial understanding. In this way, what appears as a crisis might ultimately lead to more sophisticated and humane approaches to knowledge—ones that acknowledge the limits of computation while expanding our capacity to think beyond it.
Best Quote
“Thus paranoia in an age of network excess produces a feedback loop: the failure to comprehend a complex world leads to the demand for more and more information, which only further clouds our understanding – revealing more and more complexity that must be accounted for by ever more byzantine theories of the world. More information produces not more clarity, but more confusion.” ― James Bridle, New Dark Age: Technology and the End of the Future
Review Summary
Strengths: The introduction is described as riveting and promising, raising important questions about the internet age. The book is appreciated for its thought-provoking nature and its attempt to tackle significant issues.
Weaknesses: The book fails to meet expectations by lacking a coherent theory and merely cataloging problems without deeper analysis. It does not address profound issues like community and politics as the reviewer hoped. The narrative is sprawling and lacks clarity, making it difficult for a moderately educated reader to follow.
Overall Sentiment: Mixed
Key Takeaway: While the book raises crucial questions about the internet age, it falls short of providing a coherent and profound analysis, leaving the reader wanting a more structured and insightful exploration akin to Neil Postman's work.

New Dark Age
By James Bridle