The Unaccountability Machine

Why Big Systems Make Terrible Decisions - and How the World Lost Its Mind

3.9 (647 ratings)
21-minute read | Text | 8 key ideas
The Unaccountability Machine spins an intricate web where decisions morph into unintended consequences. Dan Davies navigates the chaotic dance of bureaucracy and market forces that conspire to produce the very outcomes no one desires. Through the lens of Stafford Beer's overlooked theories, he exposes how organizations, much like rogue intelligences, operate with a will of their own, bypassing human intention. Davies’s penetrating prose dismantles the facade of control in our institutions, painting a vivid picture of a world where responsibility evaporates amidst structural chaos. His insights pierce through the fog of modern decision-making, beckoning readers to reconsider the unseen mechanisms steering our collective fate.

Categories: Business, Nonfiction, Finance, Science, History, Economics, Politics, Technology, Management, Sociology
Content Type: Book
Binding: Kindle Edition
Year: 2024
Publisher: Profile Books
Language: English
ASIN: B0CGFWBFD6
ISBN13: 9781782839255

The Unaccountability Machine Summary

Introduction

Modern society faces a crisis of accountability. Across institutions, from governments to corporations, important decisions are increasingly made by complex systems rather than identifiable individuals. When failures occur, these systems serve as "accountability sinks" that absorb responsibility without any clear person to hold accountable. This phenomenon explains why corporate scandals rarely lead to meaningful consequences, why policy disasters fail to change institutional behavior, and why citizens feel increasingly powerless in the face of unresponsive bureaucracies.

The heart of this problem lies in the industrialization of decision-making. Over decades, we've constructed elaborate decision-making systems that operate according to their own internal logic, often divorced from human values and resistant to outside feedback. Through the lens of cybernetics—the science of control and communication in complex systems—we can understand how these mechanisms evade responsibility, how they develop pathologies when isolated from feedback, and most importantly, how they might be reformed. The analysis reveals that accountability isn't merely a moral imperative but a crucial design feature for functional systems, without which even the most sophisticated organizations inevitably drift toward dysfunction and crisis.

Chapter 1: The Erosion of Accountability in Modern Systems

When decisions go wrong in today's world, the typical response follows a predictable pattern: investigations are launched, reports are written, and yet meaningful accountability rarely results. This isn't simply due to moral failing or cover-ups—though these certainly occur—but reflects a fundamental transformation in how decisions are made. Traditional accountability relied on identifiable decision-makers who could be held responsible. Today, decisions emerge from complex webs of procedures, policies, algorithms, and distributed authority where no single person holds decisive power.

Consider what happens when you call a large organization with a complaint. You're often told "that's our policy" or "the system won't allow that exception." The customer service representative didn't make the policy and has no power to change it. Their manager likely didn't either. The policy itself may have evolved through multiple committees, each making small modifications over years. When asked who's responsible, the organization itself becomes an accountability sink—absorbing blame without any identifiable person being accountable.

This pattern extends to major institutional failures. After the 2008 financial crisis, despite trillions in economic damage, almost no senior banking executives faced criminal charges. The complexity of financial instruments, distributed decision-making, regulatory structures, and market incentives created a perfect accountability sink. Everyone involved could point to someone else in the system as the responsible party. Similar dynamics played out during the Boeing 737 MAX crisis, where design flaws that led to fatal crashes were traceable to organizational processes rather than individual negligence.

The mechanisms creating these accountability sinks operate on multiple levels. First, organizations develop formal structures that diffuse responsibility through committees, approval chains, and delegation. Second, information flows are managed to ensure that potentially troubling data gets filtered before reaching decision-makers. Third, incentive systems reward short-term compliance with metrics rather than long-term responsibility for outcomes. Fourth, specialized technical language creates barriers that make it difficult for outsiders to penetrate decision processes.

These systems didn't emerge spontaneously but were consciously designed to reduce risk to individuals. The management theorist Stafford Beer articulated a key principle: the extent to which you can be held accountable for a decision is precisely the extent to which you can change it. By removing human discretion from decision processes, organizations systematically reduce accountability. The consequence is what might be called a "principle of diminishing accountability"—unless conscious steps are taken to prevent it, any organization in a modern industrial society will tend to restructure itself to reduce personal responsibility. This erosion creates a dangerous paradox: as systems grow more complex and potentially destructive, accountability for their actions diminishes. The resulting democratic deficit fuels populist rage against "the system" while making meaningful reform increasingly difficult to achieve.

Chapter 2: Cybernetics: Understanding Decision-Making Systems

Cybernetics offers a powerful framework for understanding how decision-making systems function and why they often fail. Developed in the mid-20th century by pioneers like Norbert Wiener, Stafford Beer, and Ross Ashby, cybernetics examines how systems—whether biological, mechanical, or social—maintain stability through information processing and feedback loops. Rather than focusing on individual components, cybernetics looks at relationships between parts and the flow of information between them.

Central to cybernetic thinking is the concept of the "black box"—a system whose internal workings might be unknown or too complex to analyze completely, but whose behavior can be understood by studying inputs, outputs, and feedback. Corporations, government agencies, and other large organizations function as black boxes from this perspective. While their inner workings may be opaque, their behavior follows patterns that can be analyzed and predicted.

A key cybernetic principle relevant to accountability is the "law of requisite variety," which states that a control system must have at least as much variety (range of possible states) as the system it aims to regulate. Applied to organizational accountability, this means that oversight mechanisms must match the complexity of the systems they monitor. When regulatory bodies have less variety than the entities they oversee—as when financial regulators face sophisticated global banks—effective accountability becomes mathematically impossible.

Feedback loops constitute another crucial cybernetic concept. Healthy systems incorporate negative feedback mechanisms that counteract deviations from desired states. When organizational feedback loops break down, pathologies emerge. Modern institutions frequently develop structures that filter, distort, or completely block feedback, particularly from those most affected by their decisions. Corporate hierarchies, specialized technical language, and procedural barriers all serve to reduce the variety of feedback that reaches decision-makers.

The pioneering management cybernetician Stafford Beer developed the "Viable System Model" to analyze these information flows in organizations. According to Beer, viable systems require specific communication channels between operational units, coordination functions, and higher-level management. Most critically, they need "algedonic" signals—pain/pleasure indicators that can bypass normal channels when something is going catastrophically wrong. Modern accountability sinks systematically eliminate these emergency feedback mechanisms.

Perhaps most importantly, cybernetics introduced the profound insight captured in Beer's axiom: "The purpose of a system is what it does." This cuts through stated intentions, mission statements, and public relations to focus on actual behavior. When corporations consistently prioritize quarterly profits over environmental sustainability despite claiming otherwise, the cybernetician concludes that profit maximization is the system's actual purpose. This perspective shifts accountability discussions from individual intentions to system design.

By applying cybernetic analysis to modern institutions, we can identify precise points where accountability breaks down: filtered information flows, inadequate feedback channels, mismatched regulatory capacity, and poorly designed incentive structures. Most importantly, we recognize that accountability failures aren't simply moral shortcomings but predictable outcomes of system architecture—which means the architecture can be redesigned.
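
The role feedback plays here can be made concrete with a small simulation. The sketch below is a minimal toy model of my own, not anything specified in the book: the function name, the 0.8 correction gain, and the filter_strength parameter are all invented for illustration. A state is disturbed at random each step, and a regulator corrects it using whatever fraction of the measured error survives the organization's filters.

# Illustrative toy model (not from the book): a disturbed state with a
# corrective feedback loop whose error signal may be attenuated or blocked.
import random

def run(steps, target=0.0, feedback=True, filter_strength=1.0):
    """Simulate a disturbed state with optional corrective (negative) feedback.

    filter_strength is the fraction of the true error that actually reaches
    the regulator: 1.0 means full feedback, 0.0 means the channel is blocked.
    """
    state = target
    history = []
    for _ in range(steps):
        state += random.gauss(0, 1.0)            # external disturbance
        if feedback:
            observed_error = (state - target) * filter_strength
            state -= 0.8 * observed_error        # push back toward the target
        history.append(state)
    return history

random.seed(1)
scenarios = {
    "full feedback": run(200, filter_strength=1.0),
    "heavily filtered": run(200, filter_strength=0.1),
    "feedback blocked": run(200, feedback=False),
}
for name, history in scenarios.items():
    drift = max(abs(x) for x in history)
    print(f"{name:>17}: maximum drift from target = {drift:.1f}")

With the channel intact the state stays near its target; as the filter attenuates the error signal the drift grows, and the blocked case behaves like an unregulated random walk. That is the cybernetic picture behind the chapter's claim that filtering feedback is not a cosmetic problem but a structural one.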

Chapter 3: How Economic Theory Created Accountability Sinks

Economic theory has played a pivotal role in creating today's accountability crisis, providing intellectual justification for systematically removing human judgment from decision processes. The mathematical revolution in economics during the mid-20th century sought to transform the discipline into a "hard science" like physics, developing models built on assumptions of perfect information, rational actors, and quantifiable outcomes. This approach required stripping away complexity and context—precisely the factors essential to meaningful accountability.

The "Ricardian Vice" (named after economist David Ricardo) exemplifies this problem: economists create simplified mathematical models, prove propositions within those models, then act as if they've proven something about the real world. When applied to governance and management, this approach leads organizations to optimize for what can be easily measured while ignoring crucial factors that resist quantification. Values like institutional integrity, environmental sustainability, and social cohesion get pushed aside because they don't fit neatly into equations.

Milton Friedman's influential 1970 essay "The Social Responsibility of Business is to Increase its Profits" codified a particularly destructive accountability sink. By arguing that corporate executives should focus exclusively on maximizing shareholder value while following basic legal rules, Friedman effectively absolved managers of responsibility for broader social consequences. This doctrine was transformed into the concept of "fiduciary duty to maximize shareholder value"—a legal-sounding phrase with no actual legal basis that nonetheless became gospel in business schools and boardrooms.

The economics profession further undermined accountability through its concept of market efficiency. If markets perfectly incorporate all available information (as many economists claimed), then challenging market outcomes becomes illegitimate by definition. This transforms markets from tools serving human purposes into quasi-divine arbiters beyond questioning. Regulatory bodies like central banks embraced this view, developing "independence" from democratic oversight based on the premise that economic management should be insulated from "political interference"—effectively creating an accountability sink for some of society's most consequential decisions.

Economic models also systematically ignore power relationships and distributive concerns. By treating labor as just another commodity input rather than recognizing worker dignity and security as legitimate goals, economic theory justified managerial practices that transferred risk and uncertainty from companies to individuals. This theoretical blindness to power dynamics helps explain why growing productivity has disproportionately benefited capital over labor for decades—the models used to evaluate economic performance simply don't count this as a problem.

Perhaps most fundamentally, mainstream economics embraced "methodological individualism"—the principle that all economic phenomena must be explained in terms of individual choices. This approach makes it impossible to properly analyze systemic properties that emerge from interactions between parts. Just as water's wetness cannot be deduced from studying individual H2O molecules, many institutional failures cannot be understood by focusing on individual decisions. By insisting all responsibility must trace to individuals, economics paradoxically created systems where responsibility vanished altogether.

Chapter 4: When Organizations Become Unaccountable Intelligences

Organizations increasingly function as non-human intelligences with their own goals, processes, and logic that diverge from human values. While not "conscious" in the human sense, these organizational systems process information, make decisions, and pursue objectives with increasing autonomy from human direction. Understanding this perspective reveals why accountability problems persist despite well-intentioned individuals within these systems.

Large organizations develop emergent properties beyond the sum of their human components. Corporate cultures, institutional memory, and organizational routines create decision-making patterns that no individual fully controls. As computer scientist Ben Kuipers argues, corporations meet the criteria for independent intelligent entities: they persist through time despite changing components, process information to achieve goals, and adapt to changing environments. Their "thinking" occurs through distributed processes—meetings, memos, reports, metrics—rather than in a single mind.

These organizational intelligences focus relentlessly on their core objectives, often defined by narrow metrics like quarterly profits, enrollment numbers, or procedural compliance. This single-minded focus creates the organizational equivalent of the "paperclip maximizer" problem in artificial intelligence safety research: an intelligent system that pursues its programmed goal while being blind to broader consequences. When BP's complex management systems optimized for cost efficiency at the expense of safety culture, the Deepwater Horizon disaster resulted—not because anyone wanted an oil spill, but because the organizational intelligence had developed a form of tunnel vision.

Information flows within these systems systematically filter out inconvenient facts. Middle managers learn to present information in ways that conform to senior management expectations, creating what psychologists call the "MUM effect" (keeping mum about undesirable messages). Technical language and specialized metrics further distort communication. During the 2008 financial crisis, banks had developed risk models so complex that even their creators couldn't fully explain them, creating an accountability void where no one could be held responsible for the resulting catastrophe.

Decision-making becomes increasingly automated, whether through literal algorithms or through rigid policies and procedures that function as algorithmic rules. The UK government's 2020 attempt to replace canceled exams with an algorithm that assigned grades based on historical patterns exemplifies this trend. When the algorithm systematically disadvantaged bright students from underprivileged schools, public outcry forced its abandonment—but the fundamental problem remains in countless less visible domains where algorithms make consequential decisions.

Corporate restructuring has accelerated these tendencies. The leveraged buyout boom that began in the 1980s transformed companies into debt-servicing machines where all decisions became subordinated to meeting short-term financial targets. Outsourcing further fragmented accountability, as responsibilities were distributed across complex contractor networks with minimal transparency. When nursing home chains owned by private equity firms provided inadequate care during the COVID-19 pandemic, responsibility vanished in a maze of holding companies, management contracts, and real estate arrangements.
This transformation fundamentally changes the relationship between institutions and individuals. Rather than organizations serving as tools for human purposes, humans increasingly function as interchangeable components within organizational systems pursuing their own objectives. The resulting "accountability sink" operates at a level beyond individual human control, creating a governance crisis that traditional approaches to accountability cannot address.
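
The grading episode illustrates the general pattern well enough that a toy version is worth sketching. The code below is not the actual Ofqual model; it is a deliberately simplified stand-in with made-up names and numbers, capturing only the property described above: grades are allocated from a school's historical distribution, so an individual can never outperform the school's past.

# Illustrative sketch only (not the real 2020 grading model): students are
# ranked by teacher-assessed score, then handed the school's historical grade
# distribution in that rank order, regardless of individual performance.
def assign_grades(teacher_scores, historical_grades):
    """Map each student to a grade drawn from the school's past distribution."""
    ranked = sorted(teacher_scores.items(), key=lambda kv: kv[1], reverse=True)
    grades_best_first = sorted(historical_grades, reverse=True)
    return {name: grade for (name, _), grade in zip(ranked, grades_best_first)}

# A school whose past cohorts never earned above a 5 (grades on a 1-9 scale):
history = [5, 4, 4, 3, 2]

# This year's cohort includes one exceptional student.
teacher_scores = {"Amira": 95, "Ben": 70, "Cal": 65, "Dee": 60, "Eli": 40}

print(assign_grades(teacher_scores, history))
# {'Amira': 5, 'Ben': 4, 'Cal': 4, 'Dee': 3, 'Eli': 2}

With teacher scores of 95, 70, 65, 60 and 40 at a school whose best historical grade was a 5, the top student still receives a 5. The rule is internally consistent and calibrated against history, and blind to exactly the case that matters.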

Chapter 5: The Polycrisis: Symptoms of System Failure

The multiplying crises of recent years—financial instability, political polarization, institutional failures during the pandemic, worsening inequality—aren't separate problems but interconnected symptoms of systemic accountability failure. Historian Adam Tooze terms this phenomenon the "polycrisis": a situation where multiple breakdown points across different domains interact and amplify each other, overwhelming the capacity of governance systems to respond coherently.

The 2008 financial crisis revealed profound failures in our accountability architecture. Central banks and regulatory bodies failed to detect systemic risk despite ample warning signs. Post-crisis investigations showed that information about dangerous practices had been available but systematically filtered out by organizational hierarchies and incentive structures. Financial institutions had effectively privatized gains while socializing losses, creating moral hazard on an unprecedented scale. Most troublingly, despite causing trillions in economic damage, almost no individual decision-makers faced meaningful consequences—demonstrating accountability's complete collapse.

Political polarization and the rise of populist movements represent another manifestation of the accountability crisis. When democratic institutions become unresponsive to public concerns, citizens turn to leaders who promise to "drain the swamp" or disrupt the system. The Brexit vote, Trump's election, and similar movements worldwide share a common thread: they channel rage against unaccountable technocratic governance. While often misdirected and manipulated, this populist energy reflects genuine democratic deficits in modern institutional arrangements.

Public health responses to COVID-19 exposed additional accountability failures. Complex chains of outsourced contracts led to inadequate testing capacity, PPE shortages, and vaccine distribution problems. Decisions with life-or-death consequences were made through opaque processes with limited public input. Particularly in nursing homes and elder care facilities, responsibility for catastrophic outcomes disappeared into webs of private equity ownership, management companies, and regulatory gaps.

The epidemics of "deaths of despair"—suicide, drug overdose, and alcohol-related mortality—plaguing many developed nations represent another dimension of the crisis. Public health researchers Anne Case and Angus Deaton have demonstrated how these deaths correlate with declining economic opportunity and social connection, particularly in communities hollowed out by deindustrialization. The Whitehall studies of British civil servants similarly show that lacking control over one's life circumstances creates measurable health risks. At a fundamental level, accountability failures manifest as physiological harm to those subject to unaccountable power.

Climate change perhaps most starkly illustrates the accountability crisis. Despite decades of scientific warnings, governance systems have proven incapable of meaningful action. The diffusion of responsibility across national boundaries, economic sectors, and generations creates a perfect accountability sink. No single entity bears full responsibility, allowing all to avoid it. Meanwhile, the costs fall disproportionately on those with the least decision-making power—future generations, vulnerable populations, and non-human species.

These overlapping crises create a destabilizing feedback loop. Each institutional failure further erodes public trust, making coordinated responses to subsequent challenges more difficult. Without intervention to restore accountability mechanisms, this cycle threatens to undermine the viability of democratic governance itself. The polycrisis isn't merely a series of unfortunate events but the predictable outcome of systematically dismantling accountability structures across our society.

Chapter 6: Restoring Accountability in Complex Systems

Rebuilding accountability for today's complex systems requires fundamental redesign rather than surface-level reforms. While we cannot return to simpler times, we can create governance structures appropriate for twenty-first century complexity. This effort must address both technical challenges and deeper cultural assumptions about responsibility in an interconnected world.

The first principle for accountability reform comes from cybernetics: information must flow to where decisions are made. Currently, critical feedback is systematically filtered or blocked before reaching decision-makers. Organizations need "red handle" mechanisms that allow urgent information to bypass normal channels when necessary. Whistleblower protections, ombudsman offices, and independent oversight boards can serve this function when properly designed. Critically, these mechanisms must have sufficient "variety" (in cybernetic terms) to match the complexity of the systems they monitor.

Transparency represents a necessary but insufficient condition for accountability. Simply publishing information doesn't ensure it reaches the right people or influences decisions. Effective transparency requires not just disclosure but translation—making complex information comprehensible to relevant stakeholders. Technical measures like algorithmic explainability requirements and plain-language summaries of corporate actions help, but must be paired with institutions that can interpret and act on this information.

Private sector accountability requires revising corporate governance structures that insulate decision-makers from consequences. The doctrine of shareholder primacy must be replaced with stakeholder governance models that acknowledge responsibilities to workers, communities, and the environment. When decisions affect multiple constituencies, all should have representation in decision processes. Similarly, the leveraged buyout model that loads companies with debt while extracting value for investors must be constrained through reformed bankruptcy laws and investment regulations.

Public sector accountability demands rethinking the relationship between expertise and democracy. The trend toward delegating decisions to unelected technocratic bodies has gone too far, creating democratic deficits and policy blindness. While technical expertise remains vital, expert bodies should be embedded within frameworks of democratic oversight and public engagement. Central banks, regulatory agencies, and similar institutions need explicit requirements to consider broader social impacts beyond narrow technical metrics.

Technology governance presents particular challenges. As algorithmic decision-making proliferates, accountability mechanisms must evolve accordingly. Rather than treating algorithms as "black boxes," we need legal frameworks establishing clear responsibility for their impacts. This includes liability rules, impact assessment requirements, and governance structures ensuring human oversight of consequential decisions. Paradoxically, artificial intelligence might also help restore accountability by processing complexity that overwhelms human capacity—filtering and translating information to make it actionable.

Perhaps most fundamentally, restoring accountability requires shifting cultural attitudes about complexity and responsibility. The prevailing assumption that complex systems are inherently unaccountable serves those who benefit from the status quo. As management cybernetician Stafford Beer demonstrated, even highly complex systems can be designed for accountability when appropriate information channels and feedback mechanisms exist. The challenge is not technical impossibility but political will.

Effective accountability doesn't require abandoning complex systems entirely, but rather redesigning them with human values at their core. Properly structured, complex organizations can amplify human capabilities while remaining responsive to human needs. The path forward involves neither simplistic rejection of complexity nor passive acceptance of unaccountability, but rather the conscious design of governance systems appropriate for an interconnected world.
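
Beer's "red handle" idea can be sketched in a few lines. The toy model below is my own illustration, not the Viable System Model itself; the Signal class, the threshold value, and the summarization rule are all assumptions made for the example. It contrasts a reporting chain in which each layer softens what it passes upward with an algedonic channel that lets sufficiently severe signals bypass the chain unfiltered.

# Illustrative sketch only (not Beer's VSM specification): a reporting chain
# that summarizes and softens signals at every layer, plus an "algedonic"
# channel that carries severe signals straight to the top, untouched.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Signal:
    source: str
    message: str
    severity: int          # 0 (routine) .. 10 (catastrophic)

ALGEDONIC_THRESHOLD = 8     # "red handle" trigger level (assumed value)

def summarize_layer(signals):
    """Each layer forwards only its headline item, softened in the retelling."""
    if not signals:
        return []
    headline = max(signals, key=lambda s: s.severity)
    return [replace(headline,
                    message=headline.message + " (summarized)",
                    severity=max(headline.severity - 2, 0))]

def report_upward(signals, n_layers, algedonic):
    if algedonic:
        bypass = [s for s in signals if s.severity >= ALGEDONIC_THRESHOLD]
        routed = [s for s in signals if s.severity < ALGEDONIC_THRESHOLD]
    else:
        bypass, routed = [], list(signals)
    for _ in range(n_layers):
        routed = summarize_layer(routed)
    return routed + bypass   # what the board actually sees

signals = [
    Signal("plant A", "maintenance backlog growing", 3),
    Signal("plant B", "safety interlock repeatedly overridden", 9),
    Signal("plant C", "minor audit finding", 5),
]

for mode in (False, True):
    print("with algedonic channel" if mode else "normal channel only")
    for s in report_upward(signals, n_layers=3, algedonic=mode):
        print(f"  [severity {s.severity}] {s.source}: {s.message}")

Run both modes and the difference is the whole argument: through the normal chain the severity-9 warning reaches the board as a thrice-summarized, softened note, while the bypass channel delivers it intact.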

Summary

The accountability crisis reflects a fundamental shift in how decisions are made in modern society. As complex systems have replaced individual judgment, traditional accountability mechanisms have become ineffective. Decision-making systems—from algorithms to bureaucracies to markets—increasingly function as black boxes, producing outcomes with no clear responsible party. This transformation isn't merely a moral failure but a design problem built into the architecture of contemporary institutions. Understanding this dynamic through cybernetic principles reveals that accountability isn't an optional ethical add-on but an essential feature for system viability.

The path forward requires redesigning our institutions to restore meaningful accountability without sacrificing necessary complexity. This means creating information channels that bypass organizational filters when crucial feedback is being blocked, revising governance structures to include all affected parties in decision processes, and developing new frameworks for assigning responsibility in distributed systems. Most fundamentally, it requires rejecting the false premise that complexity inherently precludes accountability. Complex systems can be accountable when properly designed with appropriate feedback mechanisms and transparent information flows. The question isn't whether we can restore accountability in a complex world, but whether we have the wisdom and courage to rebuild institutions that serve human purposes rather than their own internal logic.

Best Quote

“The system of profit equations that Jerome Levy wrote down in 1914 anticipated a similar set of equations written down by the Polish economist Michal Kalecki in 1935. And Kalecki’s system is regarded by a lot of people as containing nearly all of what’s useful in J. M. Keynes’ General Theory of Employment, Interest and Money, published in 1936 and widely accepted as one of the greatest works of economics ever. Levy went on to demonstrate that the proverb ‘if you’re so smart, why aren’t you rich?’ was not applicable in this case; aided by his sons, the Levy family went into finance with sufficient success that the Jerome Levy Forecasting Institute they endowed at Bard College continues to promote their approach to economics today. You used to be able to buy a copy of the book Jerome wrote in 1943, Economics Is an Exact Science, from them; I got mine in about 2002. In the introduction to that book, Levy sets out his view of the purpose of capitalism: The working class is the original and fundamental economic class . . . The function of the investing class is to serve the members of the working class by insuring them against loss and by providing them with desired goods. The justification for the existence of the investing class is the service it renders the working class, measured in terms of wages and desired goods. The contrary is not true. The working class does not exist to serve the investing class. The working class has the right to insure itself through organizations composed of its members or through government, thereby eliminating the investing class.” ― Dan Davies, The Unaccountability Machine: Why Big Systems Make Terrible Decisions - and How The World Lost its Mind

Review Summary

Strengths: The review highlights the intriguing historical context of Stafford Beer's work with the Allende government in Chile, providing a dramatic narrative that captures the reader's interest. It also notes the book's exploration of modern business and government theories using Beer's concepts, particularly around the theme of "Accountability."
Weaknesses: The review suggests a potential overemphasis on the dramatic historical narrative, which might overshadow a more balanced analysis of Beer's contributions and the book's arguments. It also implies that the book's framing might rely too heavily on Beer's mystique without critically assessing the practical implications of his theories.
Overall Sentiment: Mixed
Key Takeaway: The book uses the historical narrative of Stafford Beer's work as a foundation to discuss modern accountability in business and government, highlighting how rules-based systems can obscure personal responsibility.

About Author

Dan Davies

Daniel Davies is a British journalist, editor and writer.
