
Atlas of AI
Power, Politics, and the Planetary Costs of Artificial Intelligence
Categories
Business, Nonfiction, Philosophy, Science, Economics, Politics, Technology, Artificial Intelligence, Audiobook, Environment
Content Type
Book
Binding
ebook
Year
2021
Publisher
Yale University Press
Language
English
ASIN
0300209576
ISBN
0300209576
ISBN13
9780300209570
Atlas of AI Plot Summary
Introduction
When you ask your smart speaker a question or marvel at an AI-generated image, the technology seems almost magical—as if intelligence has been conjured from thin air. But behind this apparent magic lies a complex and often troubling reality. Artificial intelligence systems are not ethereal entities floating in a digital cloud; they are deeply physical technologies that extract resources from our planet, depend on hidden human labor, and consume vast amounts of personal data.

The true costs of AI remain largely invisible to most of us. We don't see the lithium mines that provide materials for the batteries powering our devices, the warehouse workers whose movements are dictated by algorithms, or the data laborers who train AI systems by labeling thousands of images for pennies.

This book pulls back the curtain on these hidden foundations of AI, revealing how technologies marketed as autonomous and immaterial actually rely on extensive extraction—of earth's resources, human labor, and personal data. Understanding these extractive processes is essential not just for grasping how AI works, but for recognizing who benefits from these systems and who bears their costs.
Chapter 1: The Material Foundations of "Immaterial" AI
Artificial intelligence has a physical presence that begins deep in the earth. In remote deserts across the globe, mining operations extract lithium, cobalt, and rare earth minerals essential for the batteries and components that power AI infrastructure. These operations transform landscapes, contaminate water supplies, and displace communities. In Nevada's Clayton Valley, for instance, lithium extraction consumes massive amounts of water in an already drought-prone region, while cobalt mining in the Democratic Republic of Congo has been linked to severe human rights abuses and environmental degradation.

The environmental footprint extends far beyond mining. Data centers—the physical infrastructure where AI systems live—consume enormous amounts of electricity and water. Training a single large language model can produce carbon emissions equivalent to the lifetime emissions of five cars, including their manufacturing. These centers require constant cooling, often using millions of gallons of water annually in regions already facing water scarcity. Despite the tech industry's carefully crafted image of sustainability, the carbon footprint of AI systems is growing rapidly as models become larger and more complex.

Global supply chains further amplify these environmental costs. The components for AI systems travel thousands of miles before reaching their destination, with shipping vessels producing significant pollution. The standardized cargo containers that enable this global movement create a system where environmental damage in one location powers technological advancement in another, with the costs and benefits distributed unequally across the globe.

This extraction follows historical patterns. Just as San Francisco was built from the wealth generated by pulling gold and silver from California and Nevada lands in the 1800s, today's tech industry follows similar patterns of resource extraction. The difference is scale—AI systems now depend on a "planetary mine" that spans continents, connecting cities and mines, companies and supply chains in a global network of resource extraction.

The tech sector carefully obscures these material realities, preferring to present AI as immaterial and ethereal. This obscuring serves a purpose: by hiding the physical infrastructure and environmental costs, companies can maintain the illusion that AI represents a clean break from industrial capitalism rather than its intensification. Understanding AI requires recognizing it as what philosopher Lewis Mumford called a "megamachine"—a set of technological approaches dependent on industrial infrastructures, supply chains, and human labor that stretch around the globe but are kept deliberately opaque.
Chapter 2: Hidden Human Labor Behind Automated Systems
Behind every "autonomous" AI system lies an extensive network of human labor. At Amazon's vast fulfillment centers, workers navigate under constant algorithmic surveillance, with every second monitored and tallied. They must maintain specific "picking rates"—selecting and packing items within allocated times—while robots glide across the floor carrying heavy shelving units. Many workers wear knee braces, elbow bandages, and wrist guards to cope with the physical demands of keeping pace with algorithmic expectations.

This control over human time represents a core logic of how AI-powered systems actually function. Rather than robots replacing humans, we often see humans being treated more like robots—their movements optimized, their efficiency constantly measured, their bodies run according to computational logics. The promise of full automation serves as a useful narrative for companies while the reality involves new forms of algorithmic management that intensify labor rather than eliminate it.

AI systems also depend on what anthropologist Mary Gray calls "ghost work"—hidden labor performed by millions of people around the world who train, maintain, and repair AI systems. These workers perform tasks ranging from labeling thousands of images to moderating disturbing content, often for wages far below minimum standards. Amazon's Mechanical Turk platform connects businesses with an unseen mass of workers who bid against one another for microtasks, creating what Amazon CEO Jeff Bezos brazenly called "artificial artificial intelligence"—human intelligence disguised as machine intelligence.

Sometimes workers are explicitly asked to pretend to be AI systems. Companies like x.ai claimed their digital personal assistant "Amy" could "magically schedule meetings," but investigations revealed it was carefully checked and rewritten by teams of contract workers pulling long shifts. This "Potemkin AI"—facades designed to demonstrate what an automated system would look like while actually relying on human labor—reveals how much supposedly automated functionality still depends on people behind the scenes.

This hidden labor continues historical dynamics inherent in industrial capitalism. From Adam Smith's observations about the division of manufacturing tasks to Frederick Taylor's scientific management, workers have encountered increasingly powerful tools that change how labor is managed while transferring more value to employers. The crucial difference today is that employers now observe, assess, and modulate intimate parts of the work cycle and bodily data that were previously off-limits, creating new forms of algorithmic control that extend far beyond the factory floor.
Chapter 3: Data Extraction: The New Digital Mining
Data has become the new raw material of the digital economy, extracted from our online activities, physical movements, and even facial expressions. This extraction operates through what can be called a "data mining" process—though unlike traditional mining, the people whose data is being extracted rarely receive compensation, or even know that the extraction is occurring. Every click, like, search, and purchase becomes raw material for AI systems to find patterns and make predictions.

The scale of this extraction is staggering. Computer vision systems require massive amounts of labeled images to learn how to detect objects. ImageNet, a foundational dataset for AI research, grew to include over fourteen million images organized into more than twenty thousand categories. These images weren't created for AI training—they were harvested from across the internet, often without the knowledge or consent of those who created or appeared in them. Researchers began to presume that the contents of the internet were theirs for the taking, beyond the need for agreements or ethics reviews.

This mass extraction is justified through metaphors that shift the notion of data away from something personal toward something inert and nonhuman. Data is described as a resource to be consumed, a flow to be controlled, or an investment to be harnessed. The expression "data as oil" became commonplace, suggesting data as a crude material for extraction. When data is framed as oil just waiting to be extracted, then machine learning becomes its necessary refinement process—turning raw data into valuable predictions and classifications.

The ideology of data extraction rests on a core premise: the belief that everything is data and available for the taking. When researchers at the National Institute of Standards and Technology (NIST) included thousands of mug shots in a database used to test facial recognition algorithms, they transformed these images from identifying specific individuals in law enforcement contexts to becoming technical baselines for commercial AI systems. The context and power dynamics these images represent were considered irrelevant because they had been redefined as mere data points.

This extraction creates a privatization by stealth—an appropriation of knowledge value from public goods that primarily benefits a handful of privately owned companies. Personal data, cultural artifacts, and public knowledge are captured and enclosed within proprietary systems. The value generated from this extraction flows primarily to the companies that control the infrastructure, creating new forms of digital inequality where data becomes a source of power and profit for a small number of corporations while the people who generate that data receive little benefit.
Chapter 4: Classification Systems and Embedded Inequality
At the heart of artificial intelligence lies classification—the process of sorting the world into categories that machines can recognize and process. These classifications are not neutral technical decisions but expressions of power that shape how AI systems understand and interact with the world. When AI systems produce discriminatory results along categories of race, class, gender, disability, or age, companies often respond with narrow technical fixes to make systems appear more fair. But this misses more fundamental questions about how classification itself functions as an act of power.

The politics of classification has a long and troubling history. In the early 1800s, Samuel Morton measured human skulls by filling their cranial cavities with lead shot, then comparing the volumes between different racial groups. His work was cited for decades as "objective" evidence proving the biological superiority of Caucasians, helping to justify slavery and racial segregation. Later analysis by Stephen Jay Gould revealed that Morton's measurements were shaped by his prior prejudices—his way of seeing the world determined what he found in his supposedly objective measurements.

This legacy foreshadows epistemological problems with measurement and classification in artificial intelligence. When Amazon attempted to automate hiring by training a model on ten years of résumés submitted to the company, the system began downgrading résumés from women candidates. It had learned to recommend men because the vast majority of engineers hired by Amazon over those ten years had been men. The employment practices of the past and present were shaping the hiring tools for the future, creating what Crawford calls a "statistical ouroboros"—a self-reinforcing discrimination machine.

The AI industry has traditionally understood bias as a bug to be fixed rather than a feature of classification itself. But the problem goes deeper than technical errors—it reflects how patterns of inequality across history shape access to resources and opportunities, which in turn shape data. That data is then extracted for use in technical systems that produce results perceived as somehow objective, creating a cycle where historical inequalities are encoded into seemingly neutral technologies.

Classification is never merely technical—it is always also social and political. As sociologist Geoffrey Bowker notes, classification can disappear "into infrastructure, into habit, into the taken for granted." When AI systems classify people as high or low risk, worthy or unworthy of loans, likely or unlikely to succeed in jobs, they are not simply making predictions based on patterns—they are participating in and often amplifying existing social hierarchies. Understanding AI requires recognizing these classification systems not as neutral tools but as expressions of power that can reinforce structural inequalities under the guise of technical objectivity.
Chapter 5: Emotion Recognition: Science or Pseudoscience?
Emotion recognition technology claims to detect how people feel by analyzing their facial expressions. Companies like Affectiva have built what they call the world's largest emotion database, made up of over ten million people's expressions from eighty-seven countries. Microsoft's Face API claims to detect emotions like "anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise" and asserts that "these emotions are understood to be cross-culturally and universally communicated with particular facial expressions." These systems are increasingly used in hiring, education, marketing, and security applications.

The scientific foundations for these claims trace back to psychologist Paul Ekman's research in the 1960s. Ekman proposed that all humans exhibit a small number of universal emotions that are natural, innate, cross-cultural, and expressed the same way all over the world. In 1967, he traveled to remote highlands of Papua New Guinea with a collection of flashcards showing facial expressions, seeking to prove that even isolated communities would recognize the same emotions from the same expressions. His research became the foundation for today's emotion recognition industry, worth over seventeen billion dollars.

However, the claim that a person's interior emotional state can be accurately assessed by analyzing their face rests on increasingly shaky evidence. A comprehensive review of the scientific literature published in 2019 was definitive: there is no reliable evidence that you can accurately predict someone's emotional state from their face. As psychologist Lisa Feldman Barrett explains, "Companies can say whatever they want, but the data are clear. They can detect a scowl, but that's not the same thing as detecting anger." Facial expressions are far more variable across cultures, contexts, and individuals than Ekman's theory suggests.

The methodological problems with emotion recognition run deep. Many of the datasets underlying these systems are based on actors simulating emotional states, performing for the camera. This means AI systems are trained to recognize faked expressions of feeling rather than genuine emotional experiences. Even for images captured of people responding to commercials or films, those people are aware they're being watched, which can change their responses. The forced-choice response format used in many studies alerts subjects to the connections designers have already made between facial expressions and emotions, creating circular reasoning.

Despite these serious scientific controversies, emotion recognition continues to be deployed in high-stakes contexts. The danger is that these tools may flag the speech affects of women differently from men, particularly Black women, and interpret Black faces as having more negative emotions than white faces, even controlling for their degree of smiling. This takes us back to the phrenological past, where spurious claims about reading character from physical features were used to support existing systems of power. Emotion recognition risks becoming a modern form of physiognomy—a pseudoscience dressed in the language of artificial intelligence and computer vision.
Chapter 6: Military Origins and Surveillance Applications
The military origins of artificial intelligence run deep and continue to shape the field today. As historian Paul Edwards describes, military research agencies actively shaped the emerging field of AI from its earliest days. The Office of Naval Research partly funded the first Summer Research Project on Artificial Intelligence at Dartmouth College in 1956, and the Defense Advanced Research Projects Agency (DARPA) became the primary patron for the first twenty years of AI research. This military funding wasn't incidental—it fundamentally shaped what AI would become.

The military priorities of command and control, automation, and surveillance profoundly influenced AI development. The tools and approaches that emerged from DARPA funding have marked the field, including computer vision, automatic translation, and autonomous vehicles. Infused into the overall logics of AI are certain kinds of classificatory thinking—from explicitly battlefield-oriented notions such as target, asset, and anomaly detection to subtler categories of high, medium, and low risk that now permeate civilian applications.

In 2017, the Department of Defense announced Project Maven, an initiative to integrate artificial intelligence more effectively across military operations. The goal was to create an AI system that would allow analysts to select a target and then see every existing clip of drone footage featuring the same person or vehicle. Google won the first contract but faced internal backlash when employees discovered the extent of the company's role in the project. More than 3,100 employees signed a letter of protest stating that Google should not be in the business of war.

Beyond the military, AI surveillance systems are increasingly used at the local level of government, from welfare agencies to law enforcement. Companies like Palantir, cofounded by PayPal billionaire Peter Thiel, have moved from serving intelligence agencies to providing surveillance tools for Immigration and Customs Enforcement (ICE) and local police departments. Palantir's approach has been described as "an intelligence platform designed for the global War on Terror" that is now "weaponized against ordinary Americans at home."

The surveillance armory once reserved for agencies like the NSA and CIA is now deployed domestically at a municipal level. Undocumented immigrants are tracked with logistical systems of total information control, welfare decision-making systems flag anomalous data patterns to cut people off from benefits, and license plate reader technology is used by home surveillance systems. The result is a profound expansion of surveillance and a blurring between private contractors, law enforcement, and the tech sector, creating new forms of algorithmic governance that often operate with limited public oversight or accountability.
Chapter 7: Power Structures: Who Benefits from AI Systems?
Artificial intelligence is not an objective, universal, or neutral computational technique that makes determinations without human direction. Its systems are embedded in social, political, cultural, and economic worlds, shaped by humans, institutions, and imperatives that determine what they do and how they do it. They are designed to discriminate, to amplify hierarchies, and to encode narrow classifications that reflect and often reinforce existing power structures.

The standard accounts of AI often center on a kind of algorithmic exceptionalism—the idea that because AI systems can perform uncanny feats of computation, they must be smarter and more objective than their flawed human creators. When DeepMind's AlphaGo Zero mastered the game of Go, cofounder Demis Hassabis described it as "rediscovering three thousand years of human knowledge in 72 hours!" Such narratives of magic and mystification draw bright circles around spectacular displays of speed and computational reasoning while obscuring the purpose and power dynamics of these systems.

AI systems are expressions of power that emerge from wider economic and political forces, created to increase profits and centralize control for those who wield them. They serve the existing structures of power, optimizing and amplifying structural inequalities under the guise of technical neutrality. The concentration of AI development in a small number of large technology companies means that the benefits of these systems flow primarily to corporate shareholders rather than being distributed across society.

The real stakes of AI are not the technocratic imaginaries of artificiality, abstraction, and automation, but the global interconnected systems of extraction and power. AI is born from salt lakes in Bolivia and mines in Congo, constructed from crowdworker-labeled datasets that seek to classify human actions, emotions, and identities. It is used to navigate drones over Yemen, direct immigration police in the United States, and modulate credit scores of human value and risk across the world. Understanding AI requires recognizing these power dynamics rather than focusing solely on technical capabilities.

While many companies have signed AI ethics principles, these documents are often unenforceable and unaccountable to a broader public. Self-regulating ethical frameworks allow companies to choose how to deploy technologies and, by extension, to decide what ethical AI means for the rest of the world. To understand what is at stake, we must focus less on ethics and more on power—who controls these systems, who benefits from them, and who bears their costs. Only by addressing these fundamental questions of power can we begin to imagine AI systems that serve broader social goals rather than reinforcing existing inequalities.
Summary
Artificial intelligence is not the ethereal, neutral technology that corporate marketing suggests. It is a deeply material system built upon extensive extraction—of earth's resources, human labor, and personal data. The environmental damage from mining rare minerals, the hidden labor of workers who train and maintain AI systems, and the massive harvesting of personal data all represent costs that are deliberately obscured. These extractive processes don't just power AI; they shape what it is and how it functions in the world.

The most important insight this exploration reveals is that AI systems are expressions of power rather than objective computational techniques. They encode and often amplify existing social hierarchies under the guise of technical neutrality. As these systems become more embedded in our lives—determining who gets hired, who receives loans, who faces additional scrutiny from law enforcement—we must ask more fundamental questions about who controls them and who benefits from them.

Rather than focusing narrowly on making AI more accurate or ethical within existing frameworks, we should examine how these technologies might be reimagined to serve broader social goals and challenge rather than reinforce structural inequalities. The future of AI isn't predetermined by technology but will be shaped by our collective choices about how these powerful tools should be governed and for whose benefit they should operate.
Best Quote
“To suggest that we democratize AI to reduce asymmetries of power is a little like arguing for democratizing weapons manufacturing in the service of peace. As Audre Lorde reminds us, the master's tools will never dismantle the master's house.” ― Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence
Review Summary
Strengths: The book's exploration of the ethical, environmental, and social implications of AI offers profound insights. Crawford's interdisciplinary approach, integrating history, politics, and sociology, provides a comprehensive perspective. Her critique of the exploitation of resources and the perpetuation of inequalities in AI systems is particularly compelling. The well-researched content and thought-provoking arguments challenge the mainstream narrative of AI as inherently beneficial.

Weaknesses: Some readers find the academic style dense and challenging to navigate. There is a noted desire for more solutions or actionable steps toward reform within the AI industry.

Overall Sentiment: Reception is generally positive, with particular appreciation for the book's critical lens and comprehensive analysis. It is recommended for those interested in the broader societal implications of AI.

Key Takeaway: AI is not merely a technological advancement but a complex system deeply intertwined with political, economic, and cultural forces, necessitating a critical examination of its broader impacts.

Atlas of AI
By Kate Crawford