
AI Needs You

How We Can Change AI's Future and Save Our Own

3.5 (202 ratings)
20-minute read | Text | 9 key ideas
As artificial intelligence surges forward, who will determine its direction? "AI Needs You" is not just a book; it's a clarion call for everyday heroes to step into the spotlight. Verity Harding, a seasoned insider in tech and politics, dismantles the bombastic narratives surrounding AI, instead drawing on pivotal moments from our past—like the space race and the dawn of the internet—to craft a vision of technology guided by shared human values. This manifesto ignites hope, urging citizens to steer AI away from dystopian futures and towards a world where technology serves humanity, not just profit. The future of AI is ours to shape, and Harding invites us all to be its architects.

Categories

Business, Nonfiction, Science, History, Technology, Artificial Intelligence, Audiobook

Content Type

Book

Binding

Hardcover

Year

2024

Publisher

Princeton University Press

Language

English

ASIN

0691244871

ISBN

0691244871

ISBN13

9780691244877


AI Needs You Summary

Introduction

Artificial intelligence stands at a crossroads between unprecedented opportunity and grave risk. As AI capabilities rapidly advance, society faces critical questions about who should shape this technology and what values it should embody. Yet too often, these discussions exclude ordinary citizens, as technical jargon and expert-dominated debates create barriers to meaningful participation.

At its core, this exploration challenges the notion that AI governance should be left solely to technologists or corporations. Through examining historical parallels in transformative technologies—from nuclear weapons to reproductive medicine—we discover that technology is never value-neutral but rather reflects the societies and systems that create it. When we recognize AI as fundamentally human-made and value-laden, we unlock the possibility for democratic oversight that balances innovation with responsibility.

By analyzing how past societies navigated similar technological transitions through deliberative processes, consensus-building, and purposeful limitation, we gain a framework for ensuring AI serves humanity rather than the reverse. The path forward requires voices from diverse backgrounds, experiences, and disciplines to participate in shaping technologies that will profoundly impact our collective future.

Chapter 1: Lessons from History: Governing Transformative Technologies

The governance of transformative technologies throughout history reveals a recurring pattern: breakthroughs initially developed for warfare or national security gradually evolve toward peaceful purposes through deliberate political leadership and diplomatic effort. The space race provides an illuminating example. While rocket technology originated in Nazi Germany's V-2 program—a weapon that brought terror to civilian populations—it later became the foundation for humanity's greatest exploratory achievement: landing on the moon.

This transformation didn't happen by accident. It required visionary leadership from presidents like Eisenhower, who established NASA as a civilian agency despite military pressure, and Kennedy, who framed space exploration as a peaceful endeavor even while pursuing Cold War advantages. Their efforts culminated in the 1967 UN Outer Space Treaty, which declared space "the province of all mankind" and prohibited nuclear weapons in orbit. This achievement came during intense Cold War tensions, demonstrating that negotiated limits on dangerous technologies remain possible even in divided times.

The treaty's success came from a dual approach: protecting national security interests while simultaneously advocating for international cooperation. Rather than pursuing technological dominance alone, American leadership recognized that establishing global norms would better serve long-term interests. The space race teaches us that technology must be governed not just technically but diplomatically, with an eye toward establishing boundaries that prevent worst-case scenarios.

Today's AI development lacks comparable diplomatic effort. While nations compete to develop ever more powerful systems, little attention goes to establishing international norms for responsible use. This creates dangerous blind spots, particularly regarding autonomous weapons systems and surveillance technologies that threaten fundamental rights.

The historical lesson is clear: technological capability must be matched with governance frameworks that direct innovation toward beneficial ends. Nations leading in AI development have a special responsibility to initiate these governance conversations. Just as Eisenhower and Kennedy balanced military necessity with peaceful intentions, today's leaders must advocate for AI development that serves human flourishing. History shows that self-interest and ethical considerations need not conflict—indeed, the most enduring technological frameworks emerge when they align.

Chapter 2: From Regulation to Progress: How Boundaries Enable Innovation

The development of in vitro fertilization (IVF) in Britain during the 1970s and 1980s demonstrates how thoughtful regulation can actually accelerate scientific progress rather than hinder it. When Louise Brown, the first "test-tube baby," was born in 1978, the breakthrough triggered profound ethical questions about human embryo research. Rather than allowing polarization to paralyze progress, the British government appointed a diverse commission led by philosopher Mary Warnock to develop a consensus framework.

The Warnock Commission's most significant contribution was establishing the "fourteen-day rule," which permitted embryo research only within the first two weeks after fertilization. This bright line was not strictly determined by science alone—there was no perfect biological moment when an embryo suddenly gained moral status. Instead, the commission recognized that society needed a clear, enforceable boundary that most people could accept as reasonable. The simplicity of the rule made it transparent and monitorable, creating certainty for researchers while addressing public concerns.

Critics emerged from both sides. Scientists initially complained that the fourteen-day limitation was arbitrary and would constrain discovery, while religious groups opposed any research on human embryos. Yet over time, this "strict but permissive" regulatory approach created tremendous benefits. Britain became a world leader in reproductive medicine and embryonic research precisely because scientists had clear parameters within which they could confidently innovate.

The Human Fertilisation and Embryology Authority (HFEA), established following the Warnock recommendations, provided ongoing oversight without micromanagement. This created a stable environment where investment could flourish. Meanwhile, countries without such frameworks often experienced unpredictable policy swings as political winds changed. In the United States, for instance, embryo research funding became entangled in abortion politics, resulting in research bans that hindered scientific progress.

The IVF case illustrates that regulation works best when it acknowledges legitimate societal concerns while creating space for innovation. Rather than treating regulation as the enemy of progress, the Warnock approach recognized that well-designed boundaries actually enable advancement by building public trust. For AI governance, this suggests that clear limitations on specific high-risk applications might paradoxically accelerate development in beneficial domains by providing the certainty that companies and researchers need to invest with confidence.

Chapter 3: The Power of Purpose: Aligning AI with Human Values

The early internet offers crucial lessons about how technologies evolve when their development lacks clear purpose beyond market forces. Originally created through ARPANET, a defense project funded by the Advanced Research Projects Agency, the internet began as a government-subsidized research network built on principles of openness and collaboration. Its founding engineers, mostly graduate students at elite universities, established a culture of consensus-driven decision making and transparent governance through informal processes like "Requests for Comments" (RFCs).

During the 1990s, this purpose-driven project underwent dramatic transformation. Vice President Al Gore initially championed a vision of the "information superhighway" as a federally supported network that would guarantee universal access to knowledge, particularly for schools and libraries. His National Research and Education Network (NREN) legislation aimed to create a public-minded infrastructure that would benefit all citizens regardless of geography or economic status.

However, as commercial interests recognized the internet's potential, Gore's public-interest vision gave way to market-driven development. The Clinton administration, embracing deregulation as part of its "New Democrat" positioning, accelerated the privatization of internet infrastructure without establishing requirements for equitable access, security, or transparency. The result was impressive innovation and economic growth, but also widening digital divides that persist decades later.

The internet's commercialization wasn't inherently problematic, but the abandonment of public purpose meant that market forces alone determined development priorities. The potential for universal access and democratic empowerment remained unrealized in many communities. Rural areas and low-income neighborhoods experienced "digital redlining" as private companies focused investment where profits were highest. Without purposeful design, the internet evolved to maximize engagement and advertising revenue rather than public goods like information accuracy or privacy protection.

For AI development, this history highlights the importance of establishing clear societal purposes before commercial incentives fully take hold. When profit becomes the primary driver, technologies tend to evolve toward maximizing return on investment rather than addressing the most pressing human needs. The handful of companies with resources to develop cutting-edge AI systems will naturally focus on commercially viable applications unless deliberate efforts guide development toward broader social goals.

AI requires intentional alignment with human values through policies that incentivize applications serving urgent challenges like climate change, healthcare access, and educational opportunity. Without such direction, market forces alone will likely replicate existing inequalities and prioritize engagement over well-being. The internet's history demonstrates that technological trajectories are not inevitable but shaped through choices about purpose, access, and governance.

Chapter 4: Trust and Participation: Building Consensus in AI Development

The aftermath of the September 11 attacks revealed how quickly technological trust can erode when surveillance capabilities expand without transparent oversight. Edward Snowden's 2013 revelations about mass data collection by intelligence agencies damaged global confidence in both government institutions and technology companies. For AI governance, this episode provides crucial insights about the fragility of trust and the necessity of meaningful stakeholder participation.

Before Snowden's disclosures, the National Security Agency and Britain's GCHQ had secretly implemented programs like PRISM and Tempora that collected vast amounts of communications data from ordinary citizens. When these programs became public, they created a legitimacy crisis. International allies felt betrayed upon learning their leaders' communications had been monitored. Technology companies faced user backlash when their complicity was exposed. Citizens questioned whether their digital lives were truly private. This trust breakdown had tangible consequences, including calls from countries like Brazil to create separate internet infrastructure that would bypass U.S. control.

The surveillance controversy also transformed debates about internet governance. Until then, the United States maintained oversight of the Internet Corporation for Assigned Names and Numbers (ICANN), which manages the internet's domain name system. Though originally promised as temporary, this arrangement persisted for years, with American officials claiming it preserved internet stability. After Snowden's revelations, global pressure mounted to end U.S. control, culminating in the 2016 transition to truly international governance.

This experience demonstrates that technological systems require ongoing legitimacy derived from transparent processes and diverse participation. When governance happens behind closed doors with limited stakeholder input, even well-intentioned systems eventually face crises of confidence. For AI development, similar dynamics are already emerging, with concerns about biased algorithms, surveillance applications, and autonomous capabilities developing faster than accountability mechanisms.

ICANN's multistakeholder model, despite its limitations, offers valuable lessons for AI governance. By bringing together technical experts, civil society organizations, businesses, and governments in transparent decision-making processes, ICANN created legitimacy across diverse constituencies. Though sometimes cumbersome, this inclusive approach has proven more durable than alternatives that concentrate power among technical elites or government officials alone.

Organizations like the Partnership on AI have begun exploring similar multistakeholder approaches for AI governance, bringing together companies, researchers, and civil society groups to develop voluntary standards. While insufficient alone, these efforts demonstrate the possibility of governance mechanisms that combine technical expertise with broader societal representation. Building trust in AI systems will require not just technical safety but genuine public participation in determining how these technologies integrate into society.

Chapter 5: Beyond the Tech Elite: Democratizing AI Decision-making

Meaningful democratic participation in AI governance requires moving beyond superficial consultation to create genuine opportunities for diverse voices to influence technological development. The success of Britain's Warnock Commission in resolving divisive questions about reproductive technology demonstrates the power of well-designed deliberative processes that respect both technical expertise and public concerns.

When confronting the complex ethical issues raised by in vitro fertilization, the British government assembled a commission that included not just scientists and doctors but also theologians, lawyers, social workers, and laypeople. This diverse composition enabled the group to consider medical questions alongside social, ethical, and legal implications. Chair Mary Warnock, a philosopher rather than a scientist, brought crucial skills in ethical reasoning and public communication that helped translate technical issues into terms the public could understand and accept. The commission gathered evidence from over 300 organizations and individuals, ensuring all perspectives were heard before formulating recommendations.

Perhaps most importantly, Warnock recognized that public acceptance required clear boundaries rather than nuanced academic distinctions. While scientists initially criticized the fourteen-day limit on embryo research as arbitrary, this straightforward rule proved more effective at building public confidence than complex criteria based on developmental milestones.

For AI governance, similar approaches can bridge technical complexities and public values. When the UK's Centre for Data Ethics and Innovation studied public attitudes toward automated decision-making, it found people were more comfortable with AI in high-stakes contexts when systems provided transparency and accountability. These findings directly influenced policy development, showing how public input can shape practical governance frameworks.

Democratizing AI decisions requires recognizing that expertise comes in many forms. Those experiencing algorithmic systems in welfare offices, schools, or workplaces have valuable insights about impacts that developers may miss. Britain's algorithm-based grading fiasco during the pandemic illustrates this dynamic perfectly. When officials implemented an automated system to replace pandemic-canceled exams, it systematically disadvantaged talented students from historically lower-performing schools. Student protests forced a policy reversal, demonstrating that those subject to algorithmic systems often understand their flaws better than designers.

Creating genuinely inclusive processes means addressing participation barriers, providing accessible information about technical systems, and ensuring that concerns raised actually influence outcomes. When companies like OpenAI and Meta establish community forums to gather feedback, they take a positive step, but these efforts must evolve from public relations exercises into genuine power-sharing if they are to build lasting legitimacy.

Chapter 6: Ethical Imperatives: Setting Limits for Technological Progress

Establishing ethical boundaries for AI development requires balancing competing values while acknowledging that pure neutrality in technology is impossible. Martin Luther King Jr. recognized this fundamental truth when he warned that "our scientific power has outrun our spiritual power. We have guided missiles and misguided men." His insight remains relevant as AI capabilities rapidly advance without corresponding progress in defining their appropriate limits.

Ethical governance of AI must start by rejecting the false choice between innovation and responsibility. The experience of British embryology research demonstrates that clear limitations can actually accelerate beneficial innovation by creating certainty and public trust. Similar approaches can work for AI, where concerns about surveillance, autonomous weapons, or manipulative applications require distinct ethical frameworks from beneficial uses in healthcare or climate science.

Surveillance technologies exemplify this need for ethical limits. Facial recognition systems deployed without constraints raise profound concerns about privacy, freedom of assembly, and democratic participation. When the London Metropolitan Police implemented live facial recognition in public spaces, they captured biometric data from thousands of innocent citizens. Similar systems have been used to monitor protesters in Hong Kong and track minority populations in China. Even in commercial settings, these technologies enable troubling practices, as when Madison Square Garden used facial recognition to identify and ban lawyers associated with litigation against the venue.

Drawing ethical lines around such applications means recognizing that technical feasibility does not imply social acceptability. The European Union's AI Act takes this approach by identifying "unacceptable risk" applications that must be prohibited, including manipulative subliminal techniques and social scoring systems. While specific regulations will vary across jurisdictions, establishing ethical red lines creates space for beneficial applications to flourish without undermining fundamental rights.

Autonomous weapons systems present another domain requiring ethical limitations. Unlike the space race, where diplomacy established the Outer Space Treaty to prevent orbital nuclear weapons, autonomous weapons development proceeds with minimal international constraints. Leading AI researchers have called for treaties prohibiting lethal autonomous weapons, arguing that systems removing human moral judgment from killing decisions cross an ethical boundary regardless of their technical sophistication.

Creating ethical frameworks for AI requires acknowledging that values are embedded in technical systems through countless design choices. Facial recognition algorithms trained primarily on white faces perform poorly on darker skin tones not because of technical necessity but because of choices about training data. Hiring algorithms that perpetuate historical biases reflect decisions about what constitutes "success." Recognizing these value judgments means accepting responsibility for making them consciously rather than defaulting to whatever serves market incentives or technical convenience.

Chapter 7: Global Governance: Learning from Internet and Space Regulation

The history of internet governance offers critical insights for establishing global frameworks to guide AI development across national boundaries. When the Clinton administration created the Internet Corporation for Assigned Names and Numbers (ICANN) in 1998, it designed a novel institution neither fully governmental nor fully private. This multistakeholder model brought together technical experts, businesses, civil society organizations, and governments to manage critical internet infrastructure through consensus-based processes.

Though initially under American oversight, ICANN evolved toward true international governance through decades of negotiation and compromise. The model wasn't perfect—developing countries often lacked resources to participate meaningfully, and powerful commercial interests exerted disproportionate influence. Nevertheless, ICANN demonstrated that global technology governance could function without traditional treaty organizations or single-nation control.

The transition away from U.S. oversight accelerated after Edward Snowden's surveillance revelations damaged American credibility. When countries like Russia and China proposed moving internet governance to the International Telecommunication Union, a United Nations agency, in 2012, the multistakeholder model faced its greatest challenge. The crisis forced American officials to acknowledge that maintaining legitimacy required genuine internationalization rather than continued U.S. dominance. The eventual ICANN transition in 2016 preserved the multistakeholder approach while removing American control.

For AI governance, similar dynamics are emerging. China has established a Digital Silk Road initiative to spread its technology standards globally, while the United States restricts semiconductor exports to maintain technological advantage. These competing visions threaten to fragment AI development along geopolitical lines. Avoiding this outcome requires governance innovations that acknowledge legitimate security concerns while preserving common standards and ethical frameworks.

The United Nations Outer Space Treaty offers another instructive model. By declaring space "the province of all mankind" and prohibiting nuclear weapons in orbit, this 1967 agreement established boundaries that enabled peaceful cooperation despite Cold War tensions. Its success came from combining high-level principles with practical limitations that served mutual interests. Major powers accepted constraints on military activities in space because they recognized that unrestrained competition would threaten everyone's security.

AI governance requires similar pragmatism. While comprehensive global regulation remains unlikely in the current geopolitical climate, focused agreements on specific high-risk applications remain possible. Autonomous weapons, surveillance standards, and transparency requirements for powerful AI systems represent domains where international cooperation serves mutual interests despite broader competition.

Organizations like the Global Partnership on AI, initiated by Canada and France, demonstrate potential starting points for international collaboration. Though lacking enforcement mechanisms, such forums can develop standards that shape corporate behavior across borders. For truly effective global governance, however, leading AI nations must recognize that technological dominance alone cannot substitute for legitimate frameworks that earn worldwide trust.

Summary

The governance of artificial intelligence represents one of humanity's most consequential challenges, requiring a delicate balance between innovation and responsibility. Through examining historical parallels in space exploration, reproductive technology, and internet development, we discover that technology is never value-neutral but rather embodies the priorities and power structures of those who create it. This fundamental insight transforms how we approach AI governance—from technical problem-solving to democratic deliberation about what kind of society we wish to build.

Effective AI governance emerges not from technical expertise alone but from purposeful inclusion of diverse perspectives. The Warnock Commission's success in establishing trustworthy frameworks for reproductive technology demonstrates that non-specialists often contribute crucial insights about ethical boundaries and social implications. Similarly, ICANN's evolution from American oversight to global multistakeholder governance shows that legitimate frameworks require genuine participation across geographical, sectoral, and disciplinary boundaries.

By bringing these lessons to AI development, we can create systems that reflect our collective values rather than merely technical possibility or market imperatives. The future of AI belongs not just to its creators but to all whose lives it will transform—which means everyone deserves a voice in determining its path.


Review Summary

Strengths: The review highlights the book's originality in contributing new ideas and insights to the AI governance debate, which is often repetitive. It praises the book for being engaging and stimulating. The historical analysis of postwar technologies like space exploration, IVF, and the internet is noted as particularly interesting, showcasing the complex interplay of various factors in governance.

Weaknesses: Not explicitly mentioned.

Overall Sentiment: Enthusiastic.

Key Takeaway: Verity Harding's book is commended for its fresh perspective on AI policy and governance, offering a compelling historical analysis of past technological governance that informs current debates.

About Author


Verity Harding

