
The Reality Game
How the Next Wave of Technology Will Break the Truth
Categories
Nonfiction, Science, Communication, Politics, Technology, Audiobook, Social Science, Society
Content Type
Book
Binding
Hardcover
Year
2020
Publisher
PublicAffairs
Language
English
ISBN13
9781541768253
The Reality Game Summary
Introduction
Technology companies initially promised that social media would connect people, facilitate dialogue and democratize information. Yet, instead of heralding a digital utopia, these platforms have created an environment where disinformation flourishes and propaganda spreads at unprecedented speeds. The past decade has seen the rise of computational propaganda – the use of algorithms, automation, and human curation to manipulate public opinion through digital platforms. Bots, deepfakes, and sophisticated AI systems have emerged as powerful tools that challenge our perception of reality and threaten democratic institutions.
The rapid evolution of these technologies presents an urgent challenge that extends beyond existing social media platforms. As virtual reality, augmented reality, and increasingly human-like AI systems develop, the potential for manipulation grows exponentially. Rather than merely responding to current crises of disinformation, society must anticipate how emerging technologies might be weaponized against truth and democracy. By examining how computational propaganda has evolved and identifying patterns in its deployment, we can develop systematic approaches to ensure that future technological innovations strengthen rather than undermine our democratic values and human rights.
Chapter 1: The Evolving Nature of Digital Propaganda
The origins of computational propaganda stretch further back than most people realize. While many associate the phenomenon with the 2016 US presidential election, the deliberate manipulation of social media for political purposes has a longer history. One of the earliest documented cases occurred during the 2010 Massachusetts special Senate election between Scott Brown and Martha Coakley. Researchers discovered a network of suspicious Twitter accounts launching coordinated attacks against Coakley, falsely claiming she was anti-Catholic – a serious accusation in a state where nearly half the population identifies as Catholic. Investigation revealed these were automated accounts operated by Tea Party activists in Iowa, designed to create the illusion of widespread opposition to Coakley.
This early example demonstrated a pattern that would become increasingly common: the use of automation to manufacture apparent consensus. By deploying seemingly independent accounts that amplified identical messages, propagandists could trick both humans and algorithms into believing that fringe views were mainstream. The strategy proved effective – mainstream media outlets began reporting on Coakley's supposed anti-Catholic tendencies, citing the Twitter activity as evidence of growing sentiment against her. Brown ultimately won the traditionally Democratic seat.
In subsequent years, similar tactics emerged across the globe. During the Arab Spring in 2011, researchers observed bot networks flooding hashtags used by democratic organizers with spam, effectively disrupting their ability to communicate. Government-affiliated groups deployed automated accounts to artificially boost follower counts of embattled leaders, creating a false impression of popular support. In countries from Mexico to Ukraine to Syria, political actors utilized social media to silence opposition and manipulate the information environment.
Social media companies largely ignored these developments, despite growing evidence of manipulation on their platforms. Researchers and experts repeatedly warned Facebook, Twitter, and other companies about computational propaganda campaigns targeting activists, journalists, and voters worldwide. Yet the platforms continued to prioritize growth and engagement over safeguarding democratic discourse. Only after the 2016 US election, when public concern about "fake news" reached a crescendo, did these companies begin acknowledging the severity of the problem.
The tactics of digital propagandists have typically been blunt but effective. Most campaigns have relied on rudimentary bots and basic amplification techniques rather than sophisticated AI. These approaches succeed not through technical complexity but through sheer volume and strategic targeting. By overwhelming specific hashtags, manipulating trending algorithms, or flooding particular conversations with noise, even simple automated systems can effectively distort public discourse and create false perceptions of reality.
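To illustrate that last point, here is a minimal, hypothetical sketch (not from the book) of how a volume-only trending metric can be dominated by a handful of automated accounts, and how counting distinct accounts instead changes the picture. The account names, hashtags, and post counts are invented for illustration.

```python
from collections import Counter

# Hypothetical stream of (account, hashtag) posts within one trending window.
# Five automated accounts repeat the same hashtag at high volume, while
# three hundred genuine accounts each post a different hashtag once.
posts = (
    [(f"bot_{i}", "#FakeScandal") for i in range(5) for _ in range(200)]   # 5 bots x 200 posts
    + [(f"user_{i}", "#LocalElection") for i in range(300)]                # 300 real users, 1 post each
)

def naive_trending(posts):
    """Rank hashtags by raw post count, the kind of volume-only metric
    that coordinated automation can cheaply game."""
    return Counter(tag for _, tag in posts).most_common()

def account_adjusted_trending(posts):
    """Rank hashtags by the number of distinct accounts posting them,
    a simple (and still imperfect) way to discount repeated amplification."""
    seen = {}
    for account, tag in posts:
        seen.setdefault(tag, set()).add(account)
    return sorted(((tag, len(accounts)) for tag, accounts in seen.items()),
                  key=lambda item: item[1], reverse=True)

print(naive_trending(posts))             # the bot-driven hashtag leads on raw volume (1000 vs 300)
print(account_adjusted_trending(posts))  # counting unique accounts reverses the ranking (300 vs 5)
```

Real platforms use far more elaborate ranking signals, but the underlying vulnerability is the same one the chapter describes: metrics that reward raw volume, which automation can supply cheaply.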
Chapter 2: From Social Media to Computational Manipulation
The digital propaganda landscape operates through an ecosystem where various actors play different but complementary roles. Some create the content – fabricated news stories, misleading memes, or manipulated videos. Others distribute this material through networks of bots, coordinated human accounts, or paid amplification. Still others develop the infrastructure that facilitates these campaigns, building tools that allow propagandists to operate at scale while maintaining anonymity.
This ecosystem thrives in part because the fundamental design of social media platforms makes them vulnerable to manipulation. Trending algorithms, recommendation systems, and engagement metrics can all be gamed by coordinated groups. Facebook's news feed algorithm prioritizes content that generates strong emotional reactions, unwittingly promoting divisive and often misleading information. Twitter's trending topics can be manipulated by networks of automated accounts working in concert. YouTube's recommendation algorithm has been shown to guide viewers toward increasingly extreme content, creating what researchers call "radicalization pipelines."
The business models of these companies exacerbate the problem. Platforms generate revenue through advertising, which depends on maximizing user engagement and time spent on the platform. Content that provokes outrage, fear, or tribalism typically generates more engagement than nuanced, factual information. This creates perverse incentives where the financial interests of platforms align with the goals of propagandists seeking to spread divisive content.
While much attention has focused on foreign interference, domestic actors play an equally significant role in computational propaganda. Political campaigns, consultancies, and extremist groups within countries have become adept at using these techniques to influence their own citizenry. During the 2016 US election, for instance, domestic political consultants deployed sophisticated targeting strategies that far outpaced those used by foreign entities. These operations often employ a combination of legitimate campaign tactics and more manipulative approaches, blurring the line between persuasion and propaganda.
The individuals who create computational propaganda campaigns range from ideologically motivated activists to mercenary contractors who sell their services to the highest bidder. Some are sophisticated technologists who build custom tools for manipulation, while others are marketing professionals who have adapted their skills to the political arena. Many operate in a legal gray zone, exploiting gaps in election laws and platform policies that have failed to keep pace with technological change.
The psychological impact of these campaigns extends beyond immediate political effects. Constant exposure to contradictory information and manufactured controversies creates what researchers call "reality apathy" – a state where citizens become so overwhelmed by conflicting claims that they stop trying to distinguish truth from falsehood. This erosion of trust in shared facts undermines the foundation of democratic discourse and creates fertile ground for authoritarian movements that promise certainty in a chaotic information environment.
Chapter 3: When Artificial Intelligence Meets Disinformation
The integration of artificial intelligence into computational propaganda represents a quantum leap in the potential for manipulation. While most current disinformation campaigns rely on relatively simple automation, the rapid development of machine learning and natural language processing is creating new possibilities for generating and distributing deceptive content. These technologies enable more personalized, responsive, and convincing forms of manipulation that will be increasingly difficult to detect.
AI-powered chatbots are evolving from crude automation to sophisticated conversational agents capable of engaging in nuanced exchanges. Early political bots on Twitter were easily identifiable by their repetitive messaging and unnatural patterns of activity. Today's more advanced systems can learn from interactions, adapt their communication styles, and generate contextually appropriate responses. As these capabilities improve, AI bots will become more effective at influencing opinions through seemingly authentic conversations, targeting users with personalized persuasion strategies based on their psychological profiles and past behavior.
Voice synthesis technology has progressed to the point where systems like Google's Duplex can generate speech indistinguishable from human communication, complete with natural pauses, verbal tics, and emotional inflections. Research demonstrates that people are more likely to find spoken arguments persuasive and to attribute greater intelligence to communicators they hear rather than read. When deployed for political purposes, these increasingly human-sounding systems could be used for mass robocalling campaigns that target vulnerable populations with deceptive messages tailored to their specific concerns.
Machine learning systems are also transforming how propagandists identify and target audiences. Algorithms can analyze vast datasets to identify psychological vulnerabilities, political leanings, and emotional triggers for specific demographic groups. These insights enable micro-targeted campaigns that deliver precisely calibrated messages to the most receptive audiences. As one digital marketing expert explained, social media companies provide political operatives with access to highly specific demographics and interest groups, allowing them to reach exactly the right people with exactly the right message.
The technical barriers to deploying AI-powered propaganda are steadily falling. Tools that were once available only to well-resourced state actors or technology companies are becoming more accessible to smaller political operations, fringe groups, and even individuals. This democratization of sophisticated manipulation technology creates a world where anyone with modest technical skills can launch campaigns that would have required significant resources just a few years ago.
Despite these growing threats, AI also offers potential solutions to computational propaganda. Machine learning algorithms can be trained to detect bot networks, identify manipulated media, and flag disinformation. Companies like Facebook are increasingly relying on AI tools to supplement human content moderation. However, these technological countermeasures face significant limitations. They struggle with contextual understanding, cultural nuances, and novel forms of manipulation. Most importantly, they address symptoms rather than underlying causes of the disinformation ecosystem.
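As a rough, hypothetical illustration of the bot-detection idea mentioned above, the sketch below scores accounts on a few behavioral signals (posting rate, account age, repeated content) that are commonly discussed in bot research. The thresholds, weights, and example accounts are invented for illustration; production systems learn such weights from labeled data and draw on far richer features.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    posts_per_day: float      # sustained posting rate
    account_age_days: int     # freshly created accounts are weakly suspicious
    duplicate_ratio: float    # share of posts that are near-identical to others (0..1)

def bot_likelihood(acct: Account) -> float:
    """Combine a few behavioral signals into a crude 0..1 suspicion score.
    The hand-picked thresholds and weights are purely illustrative."""
    score = 0.0
    if acct.posts_per_day > 100:         # humans rarely sustain this volume
        score += 0.4
    if acct.account_age_days < 30:       # bursts of new accounts often precede a campaign
        score += 0.2
    score += 0.4 * acct.duplicate_ratio  # heavy copy-paste amplification
    return min(score, 1.0)

accounts = [
    Account("organizer_jane", posts_per_day=12, account_age_days=2400, duplicate_ratio=0.05),
    Account("amplifier_0042", posts_per_day=480, account_age_days=9, duplicate_ratio=0.92),
]
for acct in accounts:
    print(acct.name, round(bot_likelihood(acct), 2))
# organizer_jane scores about 0.02; amplifier_0042 scores about 0.97
```

Even this toy version hints at the limitations the chapter notes: behavioral thresholds catch crude automation but miss accounts that mimic human pacing, and they say nothing about whether the content itself is deceptive.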
Chapter 4: Deepfakes and the Crisis of Visual Evidence
The emergence of deepfakes – AI-generated or manipulated videos that convincingly depict events that never occurred – represents one of the most concerning developments in computational propaganda. These technologies undermine the longstanding status of video as reliable evidence and create unprecedented challenges for establishing shared understanding of reality. As one expert noted, "the body has no metric for fake" – our innate trust in what we see and hear makes us particularly vulnerable to these forms of manipulation.
Deepfakes originated in late 2017 when anonymous users on Reddit began using Google's open-source TensorFlow AI tool to create fake pornographic videos featuring celebrities. The technology, based on generative adversarial networks (GANs), allows creators to superimpose one person's face onto another's body or to manipulate facial expressions and speech. While pornographic deepfakes remain the most common application, political uses are emerging. In 2018, a video of President Trump appeared to show him calling on Belgium to withdraw from the Paris climate agreement – a statement he never actually made. Although the video quality was imperfect, it demonstrated the potential for creating false political statements that appear authentic.
The technology for creating convincing deepfakes continues to advance rapidly. Research teams from institutions including Stanford University and Germany's Max Planck Institute have developed increasingly sophisticated methods for manipulating video. These systems can now create "deep video portraits" that convincingly alter subjects' facial expressions, eye movements, and speech. While many of these research efforts acknowledge the potential for misuse, the underlying technology continues to proliferate, making increasingly realistic manipulations possible with decreasing technical expertise.
Beyond sophisticated deepfakes, simple video manipulations can be equally effective in spreading misinformation. During a contentious press conference exchange between CNN reporter Jim Acosta and President Trump, the White House shared a video that appeared to show Acosta acting aggressively toward a White House intern. Analysis revealed the video had been subtly sped up to make Acosta's movements appear more forceful. This "shallow fake" – a basic manipulation rather than an AI-generated video – demonstrated how even minor alterations can significantly change the narrative around an event.
The rise of manipulated video creates a dual threat to truth. Most obviously, convincing fakes can spread falsehoods that appear to have the credibility of video evidence. But perhaps more insidiously, the mere existence of this technology allows genuine videos to be dismissed as fabrications. Politicians caught making inappropriate comments or engaging in misconduct can simply claim they were victims of deepfake technology, creating what researchers call "the liar's dividend" – a benefit to those who actually do engage in wrongdoing.
Media organizations are developing techniques to detect manipulated videos, combining technological analysis with traditional journalistic verification. Methods include examining videos frame-by-frame for inconsistencies, checking for unnatural blinking patterns (which early deepfakes often failed to replicate), and using blockchain technology to establish video provenance. However, as detection methods improve, so do the fakes, creating an ongoing technological arms race between truth and deception.
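The blink-pattern check mentioned above can be sketched in a few lines. Assuming we already have a per-frame eye-openness measurement (an "eye aspect ratio") extracted from a talking-head video, an unusually low blink rate is one weak signal that footage may be synthetic. The threshold and sample values below are illustrative assumptions, and newer deepfakes frequently defeat this particular cue.

```python
def count_blinks(eye_aspect_ratios, closed_threshold=0.2):
    """Count blinks as transitions from open to closed eyes in a sequence of
    per-frame eye aspect ratios (low values mean the eyes are nearly shut)."""
    blinks = 0
    eyes_closed = False
    for ear in eye_aspect_ratios:
        if ear < closed_threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= closed_threshold:
            eyes_closed = False
    return blinks

def blink_rate_suspicious(eye_aspect_ratios, fps=30, min_blinks_per_minute=5):
    """Flag footage whose blink rate falls well below typical human rates
    (roughly 15 to 20 blinks per minute at rest). One weak heuristic,
    not a deepfake detector on its own."""
    minutes = len(eye_aspect_ratios) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(eye_aspect_ratios) / minutes
    return rate < min_blinks_per_minute

# Toy example: one minute of video (1800 frames) containing only two brief blinks.
frames = [0.3] * 1800
frames[400:403] = [0.1, 0.05, 0.1]
frames[1200:1203] = [0.1, 0.05, 0.1]
print(blink_rate_suspicious(frames))  # True: two blinks per minute is far below normal
```

In practice such a heuristic would be only one signal among many, combined with frame-level forensic analysis and provenance checks like those described above.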
Chapter 5: Extended Reality: The Next Frontier of Manipulation
Extended reality (XR) technologies – including virtual reality (VR), augmented reality (AR), and mixed reality (MR) – represent the next frontier for computational propaganda. These immersive experiences engage multiple senses simultaneously, creating unprecedented opportunities for persuasion and manipulation. As these technologies become more accessible and integrated into everyday life, they will transform how information is consumed and how reality is perceived.
Virtual reality creates fully immersive digital environments that can generate powerful emotional responses and lasting psychological impacts. Research demonstrates that VR experiences can produce genuine empathy by allowing users to "embody" different perspectives. This capability has been used for positive purposes – helping people understand environmental degradation or experience life from marginalized perspectives. However, the same immersive qualities make VR an ideal medium for indoctrination and propaganda. In China, the Communist Party has already begun using VR for "loyalty tests," requiring party members to don headsets and answer questions about party doctrine in virtual environments.
Social VR platforms – where users interact through avatars in shared virtual spaces – present novel challenges for combating disinformation. These environments blur the line between human and automated interactions, making it difficult to determine who or what is behind particular messages. As one researcher explained, "If a friend in a VR simulation is telling us the latest news, does it matter that they are a humanlike bot?" The social dynamics of these spaces could make users particularly susceptible to manipulation, especially as the technology enables increasingly realistic avatar representations.
Augmented reality, which overlays digital content onto the physical world, creates different but equally concerning manipulation possibilities. AR applications could present users with personalized, context-specific information that shapes how they perceive their surroundings. Political actors could deploy AR filters that subtly alter campaign materials, present misleading statistics about neighborhoods, or even modify how individuals appear to others. The technology could effectively create parallel realities where different users experience fundamentally different versions of the same physical space.
The data collection capabilities of extended reality technologies compound these concerns. VR and AR systems typically track users' movements, gaze patterns, physiological responses, and environmental details. This data provides unprecedented insights into how users respond to specific stimuli, enabling highly sophisticated psychological profiling. Political propagandists could leverage this information to craft increasingly effective manipulation strategies tailored to individual psychological vulnerabilities.
As these technologies converge with artificial intelligence, the potential for automated, personalized manipulation increases dramatically. AI systems could generate dynamic VR or AR content that adapts in real time based on users' emotional responses, leading them progressively toward particular viewpoints or actions. The multi-sensory nature of these experiences makes them particularly resistant to critical analysis – users become fully immersed in carefully constructed narratives with little opportunity for external verification.
Chapter 6: Building Human-Like Technology: Ethical Challenges
As technology becomes increasingly human-like in appearance, voice, and behavior, we face profound ethical questions about how these anthropomorphic qualities influence our relationship with machines and their capacity for manipulation. The development of technology that mimics human characteristics is not merely a technical challenge but a social one with significant implications for truth, trust, and democratic discourse.
Voice-based digital assistants like Apple's Siri, Amazon's Alexa, and Google Assistant increasingly sound and behave like humans, incorporating natural speech patterns, emotional inflections, and conversational nuances. Research conducted by Dr. Juliana Schroeder at UC Berkeley demonstrates that people systematically perceive communicators as more intelligent, competent, and thoughtful when they hear their voice rather than read their words. This finding has crucial implications for voice-based AI systems, suggesting that humanlike automated voices will be more persuasive and trusted than text-based communication – even when delivering identical content.
Google's Duplex system exemplifies this trend, producing voice calls so convincingly human that recipients cannot distinguish them from real people. The system incorporates verbal hesitations, acknowledgment sounds like "mmm-hmm," and natural-sounding speech modulation. When demonstrated making restaurant reservations, Duplex did not identify itself as an automated system, raising ethical concerns about disclosure and consent. While such technology offers convenience, it also creates powerful tools for political actors to conduct mass calling campaigns that appear to come from real supporters rather than automated systems.
The development of increasingly realistic digital faces poses similar challenges. Companies like Nvidia have created AI systems capable of generating photorealistic human faces that are indistinguishable from photographs of actual people. These technologies undermine traditional methods for identifying fake accounts, as propagandists no longer need to steal images or use obvious stock photos. They can simply generate unique, convincing faces for each false profile. This capability makes it increasingly difficult to distinguish between authentic grassroots movements and manufactured "astroturf" campaigns.
The psychological impact of these humanlike technologies extends beyond their persuasive capabilities. They create what researchers call "false social presence" – the sense that we are interacting with conscious entities rather than programmed systems. This misconception leads people to attribute human motivations, values, and ethical frameworks to technologies that possess none of these qualities. The result is often unwarranted trust in systems designed to maximize engagement or influence behavior rather than provide objective information.
As AI systems become more integrated into our information ecosystem, questions of transparency and disclosure become increasingly urgent. Should AI-generated content be clearly labeled as such? Should automated voices identify themselves as non-human? How can we ensure that people understand when they are interacting with machines rather than humans? These questions have no simple technical solutions but require thoughtful consideration of how technology shapes our understanding of reality and our capacity for informed decision-making.
Chapter 7: Designing Technology with Human Rights in Mind
The challenges posed by computational propaganda and emerging technologies demand a fundamental shift in how we approach technology development, regulation, and use. Rather than responding reactively to each new manifestation of digital manipulation, we must adopt a proactive, principles-based approach that prioritizes human rights, democratic values, and shared reality. This requires coordinated action from technology companies, governments, civil society organizations, and individual citizens.
Technology companies must move beyond piecemeal fixes and superficial self-regulation to address the structural problems with their platforms. Facebook, Twitter, and YouTube have made incremental improvements to their policies and enforcement mechanisms, but these efforts remain inadequate to the scale of the challenge. Companies must recognize that they are effectively media organizations with significant influence over public discourse, and they bear corresponding responsibilities. This includes redesigning recommendation algorithms to prioritize factual information over engaging falsehoods, implementing transparent content moderation practices, and providing researchers with meaningful access to data about manipulation campaigns.
Regulatory frameworks must evolve to address the unique challenges of digital propaganda. Existing laws governing political advertising, campaign finance, and election interference were designed for traditional media environments and have proven inadequate for the digital age. New policies should require transparency in political advertising across all platforms, expand the definition of electioneering communications to include online content, increase disclosure requirements for paid issue advertisements, and create independent authorities empowered to investigate digital political activity. These measures would help illuminate the currently opaque flows of money and influence in online political campaigns.
The development of new technologies, from virtual reality to voice synthesis, must incorporate ethical considerations from the outset rather than as an afterthought. The principle of "ethical design" requires that technology creators anticipate potential misuses of their innovations and build in safeguards before deployment. For social XR platforms, this might include clear identification of automated avatars, robust verification systems to prevent impersonation, and enforceable codes of conduct that prohibit harassment and deception. Such approaches do not stifle innovation but channel it in directions that strengthen rather than undermine democratic values.
Media literacy and critical thinking skills must be reimagined for the age of computational propaganda. Traditional approaches that focus solely on evaluating source credibility are insufficient when facing sophisticated manipulation campaigns that simulate credible sources or exploit platform vulnerabilities. New educational approaches must help people understand how attention is captured and directed online, how algorithms curate information, and how emotional responses can be weaponized to spread misinformation. These efforts should target all age groups, recognizing that older adults are often most vulnerable to digital manipulation.
Ultimately, addressing computational propaganda requires recognizing that technology alone cannot solve what is fundamentally a social and political problem. The rise of digital disinformation reflects deeper societal divisions, declining trust in institutions, and economic incentives that reward sensationalism over accuracy. Technical solutions must therefore be accompanied by broader efforts to rebuild social cohesion, strengthen democratic institutions, and create economic models that support quality information.
Summary
The fundamental threat posed by computational propaganda lies not in the technology itself but in how it erodes our shared understanding of reality. From simple Twitter bots to sophisticated deepfakes and immersive virtual environments, these tools can manipulate not just what information we receive but how we process and respond to it. The convergence of artificial intelligence with increasingly humanlike interfaces creates unprecedented opportunities for personalized, emotionally resonant manipulation that bypasses our critical faculties and exploits our psychological vulnerabilities.
Nevertheless, the situation is not hopeless. By recognizing that technology is fundamentally shaped by human choices and values, we can redirect its development toward strengthening rather than undermining democracy. This requires sustained effort from multiple sectors: technology companies willing to prioritize societal wellbeing over engagement metrics, policymakers committed to updating regulatory frameworks, researchers developing tools to detect manipulation, educators teaching critical digital literacy, and citizens demanding transparency and accountability. The challenges are significant, but so is our capacity to build technologies that embody our highest democratic ideals rather than our worst manipulative impulses.
Review Summary
Strengths: The book provides detailed information on the manipulation of perceptions via technology, such as bots, AI, and deepfakes. It offers a thorough explanation of the events surrounding the 2016 election and includes a deep dissection of AI and VR's impact on human perception. The book is thought-provoking and serves as a warning for those in the tech industry and for social media users.
Weaknesses: The reviewer expresses skepticism about the feasibility of developers and governments addressing technological misuse, particularly in the U.S., due to cultural tendencies towards exploitation. The reviewer also finds it challenging to remain hopeful despite the author's reassurances.
Overall Sentiment: Mixed
Key Takeaway: The book is an informative and thought-provoking examination of how technology can manipulate reality, emphasizing the need for awareness and proactive measures against potential misuse, despite societal and cultural challenges.

The Reality Game
By Samuel Woolley