
A History of Fake Things on the Internet

From Hoaxes to Deepfakes

In a digital age where the line between reality and illusion blurs, "A History of Fake Things on the Internet" uncovers the fascinating evolution of digital deception. Walter J. Scheirer, a pioneer in computer science, invites readers on a thrilling exploration of how fake news, deepfakes, and digital hoaxes have infiltrated our world. From the primitive manipulations of 19th-century photography to the sophisticated fabrications of AI, Scheirer exposes the technical wizardry and human ingenuity fueling our online realities. This book delves into the minds of the hackers, artists, and tech savants who have shaped—and sometimes warped—our perception of truth. With wit and insight, Scheirer reveals that the real story of digital fakery is less about the technology and more about the age-old human dance of creation and destruction.

Categories

Nonfiction, History, Technology

Content Type

Book

Binding

Hardcover

Year

2023

Publisher

Stanford University Press

Language

English

ISBN13

9781503632882


A History of Fake Things on the Internet: Summary

Introduction

In the early days of the internet, a small group of computer enthusiasts gathered in university labs and bedrooms across America, connected by slow modems and a shared fascination with the emerging digital frontier. Little did they know they were laying the groundwork for what would become one of the most transformative forces in human communication: the ability to create, manipulate, and spread digital content that blurs the line between reality and fiction. From those primitive bulletin board systems to today's sophisticated AI-generated media, the evolution of internet fakery reveals fascinating insights about human psychology, technological innovation, and our complex relationship with truth. This journey through digital deception takes us from the playful pranks of early hackers to the sophisticated deepfakes threatening democratic discourse today. Along the way, we'll discover how techniques pioneered in underground communities eventually transformed mainstream media, how verification technologies emerged in response to manipulation, and how creative expression and malicious deception often share the same technological foundations. Whether you're a technology professional concerned about the future of information integrity, a media consumer trying to navigate an increasingly synthetic landscape, or simply someone curious about how we arrived at our current post-truth predicament, understanding this history provides essential context for making sense of our digital present and preparing for what comes next.

Chapter 1: Early Hackers: Creating Alternative Realities (1980s-1990s)

The birth of digital deception can be traced to the vibrant hacker subculture that emerged in the 1980s, when personal computers were still novelties and the internet existed only as a primitive network connecting government and academic institutions. During this formative period, a diverse community of technical enthusiasts—ranging from curious teenagers to counterculture activists and computer science students—began exploring the boundaries of these new digital systems, often finding creative ways to manipulate them for both practical and playful purposes. The early hacker scene was populated by colorful characters who adopted pseudonyms or "handles" that reflected their digital personas. Figures like "The Mentor," "Captain Crunch," and "Erik Bloodaxe" became legendary within underground circles for their technical prowess and daring exploits. These individuals formed loose-knit groups such as the Legion of Doom and Masters of Deception, which functioned as knowledge-sharing collectives and social communities. Their activities ranged from exploring telephone networks (a practice known as "phreaking") to penetrating corporate and government computer systems, often motivated more by curiosity and the thrill of exploration than malicious intent. Communication within these communities happened primarily through Bulletin Board Systems (BBSs)—primitive precursors to internet forums where users could post messages, share files, and exchange information. These digital gathering places became laboratories for early forms of online deception. Hackers would create elaborate textfiles containing a mixture of genuine technical information and creative fiction, often making it difficult for readers to distinguish between fact and fantasy. The infamous "Blotto Box" hoax exemplifies this approach—a detailed technical description of a device that could supposedly disable entire telephone networks, which many accepted as real despite being entirely fictional. 
What drove this early culture of digital manipulation wasn't simply technical mischief but a deeper philosophical stance. The hacker ethic, articulated in manifestos like "The Conscience of a Hacker" (1986), emphasized free access to information, distrust of authority, and the belief that computers could be tools for liberation. When hackers created fictitious technical documents or spread elaborate digital myths, they weren't merely engaging in deception—they were constructing alternative realities that challenged official narratives and institutional control of information. As one former hacker explained, "We were creating spaces where different rules applied, where imagination could become reality through code." The relationship between hackers and mainstream media during this period revealed another dimension of digital deception. When NBC's Dateline aired a segment on hackers in 1992, featuring an anonymous figure who claimed to have discovered evidence of UFO activity in military computers, they unwittingly became vehicles for hacker mythology. The segment, which showed classified-looking documents scrolling across a computer screen, wasn't revealing actual government secrets but rather showcasing hackers' ability to construct convincing digital fictions that could penetrate mainstream discourse. This pattern—where underground digital communities created content that eventually influenced broader public perception—would become a recurring theme in the evolution of internet fakery. By the mid-1990s, as the World Wide Web began to reach mainstream audiences, the techniques pioneered by early hackers had established a template for digital deception that would evolve with each new communication technology. The legacy of this era wasn't just technical but cultural—it demonstrated that in digital environments, reality itself could become malleable, subject to creative manipulation by those with the skills and imagination to reshape it. 
This understanding would profoundly influence how subsequent generations approached digital media, setting the stage for increasingly sophisticated forms of fakery as technology advanced.

Chapter 2: Photoshop Revolution: Democratizing Image Manipulation

The launch of Adobe Photoshop in February 1990 marked a watershed moment in the history of visual manipulation. While image alteration had existed since photography's invention—with darkroom techniques allowing skilled practitioners to remove unwanted elements or combine multiple exposures—Photoshop democratized these capabilities, placing powerful manipulation tools in the hands of anyone with access to a personal computer. This software revolution transformed image editing from a specialized craft requiring chemical processes and physical equipment into a digital activity accessible to millions. Photoshop emerged from an unlikely collaboration between brothers Thomas and John Knoll. Thomas, an electrical engineering graduate student at the University of Michigan, began developing image processing software in 1987 to display grayscale images on his Macintosh Plus computer. His brother John, who worked at Industrial Light & Magic (George Lucas's visual effects company), recognized the commercial potential of Thomas's project and encouraged him to develop it into a full image editing program. After shopping their creation to various companies, they eventually struck a deal with Adobe Systems, which released the first version of Photoshop in 1990 for Macintosh computers only. The software's impact was immediate and profound. Professional photographers and graphic designers quickly adopted Photoshop, attracted by its ability to perform traditional darkroom techniques more efficiently and with greater precision. Features like layers, introduced in version 3.0 in 1994, allowed users to manipulate elements of an image independently, creating compositions that would have been extremely difficult or impossible using traditional methods. Perhaps most significantly, Photoshop introduced non-destructive editing—the ability to make changes that could be adjusted or reversed later—which encouraged experimentation and more ambitious manipulations. 
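The layer model described above can be illustrated with a toy example. The sketch below is not Photoshop's actual implementation; it is a minimal Porter-Duff "over" composite using numpy, with made-up layer data. It shows why independent layers make edits reversible: each layer keeps its own pixels, and the flattened result is simply recomputed from them.

```python
import numpy as np

def over(top, bottom):
    """Porter-Duff 'over': composite one RGBA layer onto another.

    Arrays are floats in [0, 1] with shape (H, W, 4). Neither input is
    modified, so any layer can be edited and the composite rebuilt.
    """
    a_t = top[..., 3:4]            # top layer's alpha
    a_b = bottom[..., 3:4]         # bottom layer's alpha
    a_out = a_t + a_b * (1 - a_t)  # combined coverage
    rgb = (top[..., :3] * a_t + bottom[..., :3] * a_b * (1 - a_t)) \
        / np.maximum(a_out, 1e-8)  # guard against divide-by-zero
    return np.concatenate([rgb, a_out], axis=-1)

# Background layer: solid opaque red. Foreground layer: a
# half-transparent blue square covering the center pixels.
bg = np.zeros((4, 4, 4)); bg[..., 0] = 1.0; bg[..., 3] = 1.0
fg = np.zeros((4, 4, 4)); fg[1:3, 1:3, 2] = 1.0; fg[1:3, 1:3, 3] = 0.5

result = over(fg, bg)
# Corner pixels stay pure red; center pixels blend to purple.
```

Editing `fg` and calling `over` again regenerates the image without ever touching `bg`, which is the essence of layer-based, reversible editing.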
As Photoshop's user base expanded beyond professionals to include hobbyists, students, and casual users, a new visual culture emerged online. Internet forums like Something Awful's "Photoshop Phriday" showcased increasingly sophisticated image manipulations that blended humor, social commentary, and technical skill. These communities developed their own aesthetic conventions and inside jokes, creating a visual language that rewarded both technical proficiency and creative subversion. By the late 1990s, "photoshopped" had entered common parlance as a verb describing any digitally altered image, regardless of whether Adobe's software was actually used—a testament to the program's cultural dominance. The democratization of image manipulation through Photoshop had profound implications for visual truth. As manipulated images became more prevalent and more convincing, the photograph's status as objective evidence began to erode. This shift was particularly significant because it coincided with the internet's growth as a mass medium, creating an environment where altered images could spread rapidly without context or verification. While professional publications maintained editorial standards for image manipulation, these norms didn't extend to the broader internet, where altered images circulated freely alongside authentic ones. By the early 2000s, Photoshop had transformed from a specialized tool to a cultural phenomenon that influenced how people understood visual media. Its impact extended far beyond professional contexts to shape everyday perceptions of beauty, reality, and truth. The "Photoshop effect" became visible in advertising, fashion magazines, and celebrity culture, where digitally perfected images created unrealistic standards. Meanwhile, the software's capabilities continued to advance, with each new version making manipulations more seamless and detection more difficult. 
This technological evolution, combined with the internet's growing reach, created conditions where visual fakery could flourish at an unprecedented scale—setting the stage for even more sophisticated forms of digital deception in the decades to come.

Chapter 3: Textfiles as Cultural Artifacts: Underground Knowledge Sharing

In the pre-web era of the late 1980s and early 1990s, plain text files became the primary medium for underground knowledge sharing, creating a rich ecosystem of digital folklore that mixed technical information with creative fiction. These "textfiles"—simple ASCII documents that could be easily transmitted across slow modem connections—served as the cultural backbone of early online communities, particularly those centered around hacking, phreaking (phone system exploration), and other forms of technological subversion. Their importance as artifacts of early internet culture cannot be overstated; they represent the first large-scale example of digital knowledge being created, compiled, and distributed outside traditional publishing channels. The content of these textfiles ranged widely, from practical tutorials on computer programming and network security to elaborate conspiracy theories, manifestos, and pure fiction masquerading as fact. Files with titles like "How to Make a Dumb Cell" or "The Anarchist Cookbook" promised forbidden knowledge that could empower readers to manipulate technological systems. What made these documents particularly fascinating was their deliberate ambiguity about truth—many contained a mixture of accurate technical information, plausible-sounding fabrications, and outright fantasy, often with no clear distinction between them. This blending of fact and fiction wasn't merely deceptive; it reflected the playful, subversive ethos of early digital culture. Distribution of textfiles happened primarily through Bulletin Board Systems (BBSs)—dial-up computer systems that allowed users to connect, read messages, and download files. Underground BBSs with names like "The Temple of the Screaming Electron" and "Black Ice Private" became hubs for sharing these digital samizdat. The operators of these systems, known as SysOps, functioned as curators and gatekeepers, deciding which files to host and who could access them. 
This created a tiered information ecosystem where the most sensitive or controversial content was restricted to trusted members of the community, while more general files circulated widely. The authors of textfiles often wrote under pseudonyms or "handles," creating elaborate personas that existed solely in digital space. Writers like "The Mentor," "Phrack Staff," and "Doctor Crash" became legendary figures whose work was eagerly anticipated and widely shared. These digital authors developed distinctive writing styles that blended technical jargon with countercultural attitudes and dark humor. Their textfiles weren't simply informational; they helped construct a shared identity and value system for the emerging digital underground. As Jason Scott, creator of the textfiles.com archive, observed: "These weren't just documents—they were the mythology of a new culture being born in the electronic frontier." Perhaps most significantly, textfiles established patterns of information sharing and verification that would influence later internet culture. Readers developed sophisticated heuristics for evaluating the reliability of information, recognizing that not everything presented as fact was meant to be taken literally. Some files were clearly labeled as fiction or humor, while others maintained deliberate ambiguity, challenging readers to separate useful knowledge from creative embellishment. This created a media literacy specific to digital culture—an understanding that online information required critical evaluation and contextual knowledge to interpret correctly. As the World Wide Web emerged in the mid-1990s, the textfile era began to wane, but its influence persisted in new forms. The cultural practices established in these early digital communities—pseudonymous identity, mixing of fact and fiction, collaborative mythmaking, and skeptical reading—would shape how subsequent generations approached online information. 
From conspiracy theories to creative writing communities to meme culture, the legacy of textfiles can be seen throughout contemporary internet culture. They represent the beginning of a distinctly digital approach to knowledge sharing—one that values creativity and subversion alongside accuracy and utility.

Chapter 4: Media Forensics: The Battle for Verification

As digital manipulation technologies became more sophisticated throughout the 1990s and early 2000s, a parallel field emerged dedicated to detecting and authenticating digital media. Media forensics—the scientific analysis of digital images, audio, and video to determine authenticity—developed in response to growing concerns about the integrity of digital evidence in legal proceedings, news reporting, and scientific research. This emerging discipline represented a technological counterbalance to the democratization of manipulation tools, establishing methods to distinguish between authentic and altered content. The pioneers of media forensics came primarily from academic computer science departments and law enforcement agencies. Researchers like Hany Farid at Dartmouth College developed groundbreaking techniques for detecting image manipulation by analyzing statistical patterns and inconsistencies that might be invisible to the human eye. Farid's work focused on identifying telltale signs of tampering, such as incongruent lighting and shadows, cloning artifacts, and inconsistent noise patterns. Meanwhile, Jessica Fridrich at Binghamton University pioneered methods for identifying the unique "fingerprint" of specific camera sensors, making it possible to determine whether an image actually came from the camera that supposedly captured it. Interestingly, the development of media forensics was driven less by concerns about political disinformation (which would become prominent later) and more by practical legal challenges, particularly in child exploitation cases. Following the 2002 U.S. Supreme Court decision in Ashcroft v. Free Speech Coalition, which established that simulated child pornography was protected speech, prosecutors needed ways to prove that evidence depicted real children rather than computer-generated images.
This created an urgent demand for forensic techniques that could distinguish between authentic photographs and sophisticated digital creations, leading to significant investment in research and development. The relationship between media forensics and the technologies it sought to detect was inherently adversarial, creating a technological arms race that continues today. As one media forensics expert noted, "Remember that security is hard. Bad guys figure out ways to circumvent your security. The situation is the same with forensics." This dynamic meant that forensic researchers were often developing detection methods for manipulation techniques that hadn't yet been widely deployed—essentially anticipating future forms of deception. The field had to constantly evolve to keep pace with advances in manipulation technology, leading to increasingly sophisticated analytical approaches. By the mid-2000s, media forensics had established itself as a legitimate scientific discipline with dedicated journals, conferences, and university programs. Commercial applications began to emerge, with companies like Fourandsix Technologies (co-founded by Farid) developing software tools for detecting image manipulation. Law enforcement agencies around the world established digital forensics units equipped with specialized tools and training. Despite these advances, the field faced significant challenges, including the difficulty of keeping pace with rapidly evolving manipulation technologies and the computational intensity of many forensic techniques. The rise of social media in the late 2000s and early 2010s dramatically changed the landscape for media forensics. As platforms like Facebook, Twitter, and Instagram became primary channels for news and information sharing, the volume of potentially manipulated content increased exponentially, while the time available for verification decreased. 
Traditional media organizations, which had once served as gatekeepers applying professional standards to published images, were increasingly bypassed as information flowed directly from creators to audiences. This created new urgency for automated verification tools that could operate at scale and in real-time—a challenge that continues to drive innovation in the field today.
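The sensor-fingerprint idea behind Fridrich's camera attribution can be sketched in toy form. The code below is an illustrative simulation, not her actual PRNU algorithm: it invents two "cameras" as fixed noise patterns, estimates one camera's fingerprint by averaging denoising residuals over many synthetic photos, and then checks that a fresh photo correlates with the right fingerprint and not the wrong one.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64

def box_blur(img):
    # Simple 3x3 mean filter with edge padding.
    padded = np.pad(img, 1, mode="edge")
    return sum(padded[i:i + H, j:j + W]
               for i in range(3) for j in range(3)) / 9.0

def residual(img):
    # Crude "denoiser": subtracting a blurred copy leaves mostly
    # high-frequency noise, including the sensor's fixed pattern.
    return img - box_blur(img)

def ncc(a, b):
    # Normalized cross-correlation between two noise residuals.
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def shoot(fingerprint):
    # A synthetic photo: smooth scene content + this camera's pattern
    # noise + random per-shot noise.
    scene = box_blur(box_blur(rng.normal(0, 10, (H, W))))
    return scene + 2.0 * fingerprint + rng.normal(0, 1, (H, W))

camera_a = rng.normal(0, 1, (H, W))  # each sensor's unique pattern
camera_b = rng.normal(0, 1, (H, W))

# Estimate camera A's fingerprint by averaging residuals of 20 photos:
# scene content averages out, the fixed pattern does not.
estimate = np.mean([residual(shoot(camera_a)) for _ in range(20)], axis=0)

match = ncc(estimate, residual(shoot(camera_a)))     # same device
mismatch = ncc(estimate, residual(shoot(camera_b)))  # different device
```

Real PRNU forensics uses far better denoisers and statistical tests, but the attribution logic is the same: the query image's residual correlates strongly only with the fingerprint of the sensor that produced it.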

Chapter 5: Shock Content: From Fringe to Mainstream

The internet's capacity to deliver disturbing content has evolved dramatically from the early shock sites of the mid-1990s to today's algorithmic recommendation systems that can lead users down increasingly extreme rabbit holes. This evolution reveals important insights about human psychology, platform economics, and the changing boundaries between mainstream and fringe digital culture. What began as a niche phenomenon has become a central challenge for content moderation systems and a significant influence on broader internet culture. The shock content ecosystem began with sites like rotten.com, launched in 1996 by a computer hacker using the handle "Soylent" (real name Tom Dell). The site specialized in posting grotesque photos depicting bizarre sex acts, mangled corpses, and other material designed to horrify viewers. Dell, who had previously been part of the textfile crew Anarchy Incorporated, brought his experience in transgressive digital content to this new visual medium. Rotten.com gained notoriety in 1997 when it posted what purported to be a photo of Princess Diana's body being extracted from the car wreck that killed her—though the image was later revealed to be fake. Despite (or perhaps because of) the debunking, traffic to the site surged to unprecedented levels for that era. What made these early shock sites particularly significant was their ambiguous relationship with authenticity. Rotten.com's policy claimed to post only real photos, yet Dell frequently played with this boundary, leaving visitors guessing about what they were actually seeing. This approach echoed earlier freak shows and carnival attractions, where the narrative surrounding an exhibit was often more important than its authenticity. As Dell wrote, "We see a lot of fake pictures, and can spot them fairly easily. Real pictures of this nature aren't particularly rare; they are merely hidden from the public in most cases." 
This blending of the real and the fake created a compelling psychological experience that kept users returning. The shock content model evolved significantly with the launch of 4chan in 2003. Founded by Christopher Poole (known online as "moot"), who was friends with Dell, 4chan expanded on the participatory features that rotten.com had pioneered. By allowing anonymous posting and commenting, 4chan created an environment where users could share increasingly extreme content without fear of personal consequences. The site's /b/ board in particular became notorious for its shocking imagery and transgressive humor, establishing patterns of content creation and community behavior that would influence the broader internet. Media theorist Marshall McLuhan's insights help explain the psychological appeal of shock content. McLuhan argued that communication mediums physically impact the body's senses, and users naturally seek to maximize this impact by consuming increasingly stimulating content. In his words, a new medium "takes hold of them. It rubs them off, it massages them and bumps them around." This explains why even people who consider themselves progressive or enlightened might find themselves drawn to increasingly extreme content online—the medium itself encourages escalation through its direct impact on our sensory experience. As social media platforms achieved massive scale in the 2010s, shock content became simultaneously more accessible and more problematic. YouTube, Facebook, TikTok, and other mainstream platforms minimized costs and legal liability by relying on user-generated content, focusing instead on advertising and subscription revenue. This business model created few incentives to aggressively moderate disturbing material until it attracted negative publicity or legal scrutiny. 
The result was a paradoxical situation where content that would have been restricted to specialized shock sites in earlier eras could now appear in mainstream feeds, potentially reaching billions of users. The mainstreaming of shock content has had profound social consequences, particularly in shaping attitudes toward violence and sexuality. Research suggests that repeated exposure to extreme content can normalize behaviors that would otherwise be considered deviant or harmful. As journalist Elizabeth Bruenig discovered in interviews with high school students, many young people were mimicking extreme acts they had seen online without understanding that such content was often produced specifically to exploit the psychological dynamics of digital media. This normalization process represents one of the most concerning aspects of how shock content has evolved from a fringe phenomenon to a mainstream influence.

Chapter 6: Generative AI: The Rise of Synthetic Media

The 2010s witnessed a revolutionary development in digital fakery with the rise of generative artificial intelligence—algorithms capable of creating realistic images, videos, audio, and text that never existed in the physical world. This technological leap represented a fundamental shift from earlier forms of digital manipulation, which typically modified existing content, to the wholesale synthesis of new content that was increasingly indistinguishable from reality. The implications of this transition extend far beyond technical innovation, raising profound questions about truth, authenticity, and human creativity in the digital age. The breakthrough that enabled this revolution came in 2014 with the introduction of generative adversarial networks (GANs) by researcher Ian Goodfellow and his colleagues at the University of Montreal. GANs work through a competitive process between two neural networks—one that generates content and another that tries to detect whether that content is real or fake. Through repeated iterations, the generator becomes increasingly adept at creating convincing forgeries. This approach proved remarkably effective at producing photorealistic images of faces, scenes, and objects that existed only in the digital realm. By 2017, the technology had advanced enough to enable "deepfakes"—synthetic videos that could place one person's face onto another person's body with convincing results. The term "deepfake" originated in online communities where users applied the technique to create non-consensual pornographic videos featuring celebrities. While the earliest examples required significant technical expertise, user-friendly applications soon emerged that democratized access to this powerful technology. FakeApp, released in January 2018, allowed anyone with a decent computer and some patience to create their own deepfakes. 
Suddenly, creating convincing fake videos no longer required specialized knowledge or expensive equipment—a pattern that echoed the democratization of image manipulation through Photoshop decades earlier, but with potentially more serious implications. Beyond pornography, synthetic media quickly found applications in entertainment, art, and even political commentary. In 2018, director Jordan Peele collaborated with BuzzFeed to create a public service announcement about deepfakes, featuring a synthetic video of President Barack Obama delivering warnings about digital misinformation. The video demonstrated both the capabilities of the technology and its potential for political manipulation. Meanwhile, artists began exploring generative AI as a creative medium—the Paris-based collective Obvious created "Portrait of Edmond de Belamy," an AI-generated artwork that sold at Christie's auction house for $432,500 in October 2018. The media and security establishments responded to these developments with alarm. Headlines warned of an "information apocalypse" where seeing would no longer be believing. Government agencies like DARPA launched programs specifically targeting the detection of synthetic media, while academic researchers raced to develop "media forensics" techniques that could identify AI-generated fakes. However, these efforts faced a fundamental challenge: the same adversarial approach that made GANs so effective also made them increasingly resistant to detection methods. Each advance in detection technology could be incorporated into the training of the next generation of generative models, creating an arms race with no clear end. By 2020, the technology had advanced further with the introduction of GPT-3 by OpenAI, a language model capable of generating remarkably coherent and contextually appropriate text. 
This development extended synthetic media capabilities beyond visual content to written communication, raising concerns about automated misinformation campaigns and the potential flooding of online spaces with artificially generated content. Similar advances in audio synthesis made it possible to create convincing fake recordings of people's voices, potentially enabling new forms of fraud and impersonation. The ethical questions raised by generative AI remain largely unresolved. The technology enables new forms of non-consensual imagery, political manipulation, and scientific fraud, yet also offers unprecedented creative possibilities and accessibility. As philosopher Shannon Vallor has argued, the key question is not whether a technology can be used for good or ill—most can be used for both—but whether it promotes virtuous living within communities. This framework suggests that the impact of generative AI will ultimately depend less on the technology itself and more on the social contexts in which it is deployed, the regulatory frameworks that govern its use, and the ethical norms that evolve around its capabilities.
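The adversarial dynamic Goodfellow introduced can be observed even in a drastically simplified setting. The sketch below is an illustrative toy, nothing like a production GAN or deepfake model: a one-dimensional "generator" (an affine transform of noise) competes with a logistic "discriminator," and over many rounds the generator's output drifts toward the real data's mean of 4, because each update exploits whatever separation the discriminator has just learned.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 1). Generator: g(z) = a*z + b, z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), a "real vs. fake" judge.
a, b = 1.0, 0.0          # generator parameters (starts far from the data)
w, c = 0.0, 0.0          # discriminator parameters
lr_d, lr_g, batch = 0.05, 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator gradient ascent: push D(real) -> 1, D(fake) -> 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on log D(fake): adjust a, b so fakes fool the
    # freshly updated discriminator.
    d_fake = sigmoid(w * fake + c)
    b += lr_g * np.mean((1 - d_fake) * w)
    a += lr_g * np.mean((1 - d_fake) * w * z)
```

After training, `b` sits near 4: the generator has learned to mimic the real distribution's location purely from the discriminator's feedback, the same feedback loop that, scaled up to deep networks and images, makes GAN forgeries progressively harder to detect.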

Chapter 7: The Metaverse: Virtualizing Human Experience

The concept of the metaverse—an immersive, persistent virtual world where people can interact, create, and transact—represents the culmination of decades of digital evolution. While the term gained mainstream attention in 2021 when Mark Zuckerberg announced Facebook's rebranding as Meta, the underlying vision has deeper roots in both science fiction and early internet culture. Neal Stephenson's 1992 novel Snow Crash, which popularized the term "metaverse," described a virtual reality successor to the internet where users navigated as avatars through a three-dimensional urban environment. This fictional concept has gradually moved toward reality through advances in virtual reality, blockchain technology, and artificial intelligence. The technical foundations for the metaverse were laid over several decades. In 1965, computer graphics pioneer Ivan Sutherland described a "looking glass into a mathematical wonderland" that would allow users to experience concepts not realizable in the physical world. Early virtual worlds like Second Life, launched in 2003, created primitive metaverse experiences where users could build environments, socialize, and conduct business using digital currencies. The 2010s brought crucial advances in virtual reality hardware, with Oculus (later acquired by Facebook) making high-quality VR headsets accessible to consumers. Meanwhile, blockchain technology enabled new forms of digital ownership through cryptocurrencies and non-fungible tokens (NFTs), creating the economic infrastructure for virtual worlds. What distinguishes the contemporary metaverse vision from earlier virtual worlds is its emphasis on interoperability and persistence. Rather than isolated environments controlled by single companies, the metaverse aims to create a connected ecosystem where digital assets, identities, and experiences can move seamlessly between platforms. 
This vision aligns with Web3 concepts of decentralization and user ownership, though in practice, major technology companies are positioning themselves to control significant portions of metaverse infrastructure. The tension between corporate control and user autonomy remains one of the central unresolved questions in metaverse development. The social implications of virtualizing human experience are profound. In the metaverse, identity becomes fluid—users can adopt entirely new bodies and personas, transcending the limitations of physical appearance, ability, or geography. This promises liberation but also raises questions about authenticity and connection. If everyone is presenting a carefully constructed avatar, what happens to genuine human interaction? The metaverse may exacerbate existing tendencies toward performative identity that have already emerged on social media platforms. As media theorist Sherry Turkle observed in her studies of early virtual worlds, digital environments can function as "laboratories for the construction of identity," allowing people to explore aspects of themselves that might be suppressed in physical reality. Economic considerations are equally significant. The metaverse creates new forms of digital property and labor, from virtual real estate to the creation of digital assets. In 2021, parcels of land in metaverse platforms like Decentraland and The Sandbox sold for hundreds of thousands of dollars, while digital artists earned substantial sums creating virtual clothing, accessories, and environments. These developments suggest the emergence of a parallel economy with its own rules, opportunities, and inequalities. Critics worry that existing economic disparities will be reproduced or even amplified in virtual spaces, creating new forms of digital feudalism where a small number of owners control the platforms on which everyone else depends. 
Perhaps most fundamentally, the metaverse represents a new frontier for digital fakery—not just manipulating individual pieces of media, but constructing entire synthetic realities. In these spaces, the distinction between "real" and "fake" becomes increasingly meaningless, as everything is simultaneously artificial and experientially authentic. Political scientist Bruno Maçães has characterized this shift as the rise of "virtualism"—a new political philosophy that may eventually replace liberalism as the dominant world order. In his view, the metaverse isn't merely an entertainment platform but a parallel existence that could eventually supplant physical reality in importance for many people.

As we stand at the threshold of this new era, the metaverse represents both the culmination of decades of digital evolution and the beginning of something entirely new. It promises to transform how we work, play, socialize, and understand ourselves—continuing the long tradition of digital technologies that blur the boundary between the real and the virtual, but at an unprecedented scale and with far greater immersiveness than anything that has come before.

Summary

The evolution of digital deception reveals a consistent pattern: technologies initially developed for creative or security purposes eventually become tools for manipulation, only to spark new innovations in detection and authentication. This cycle has accelerated dramatically since the 1980s, from the text-based hoaxes of early hackers to today's AI-generated synthetic realities. What began as playful subversion within small technical communities has expanded into a global phenomenon that challenges our fundamental relationship with truth and reality. Throughout this journey, we've seen how digital fakery isn't primarily driven by malicious intent but by deeper human impulses: creativity, community-building, resistance to authority, and the universal desire to shape and control narratives about ourselves and the world around us.

This historical perspective offers crucial insights for navigating our increasingly virtualized future. First, we must recognize that technical solutions alone cannot solve problems that are fundamentally social and psychological in nature—the most effective responses to digital deception combine technological tools with media literacy and critical thinking skills. Second, we should approach new technologies with nuanced understanding rather than binary judgments, recognizing that the same capabilities that enable harmful manipulation also empower creative expression and democratize media creation. Finally, as we move toward increasingly immersive virtual environments, we need ethical frameworks that acknowledge both the liberating potential of digital spaces and their capacity to manipulate and exploit. By understanding the historical patterns of digital deception, we can make more informed choices about how we design, regulate, and interact with the technologies that increasingly mediate our experience of reality.

Review Summary

Strengths: The review highlights the book's compelling exploration of the historical context of internet fakery, emphasizing its playful and creative aspects. The author effectively draws parallels between ancient satirical imagery and modern memes, and provides insightful discussions on hacker culture and metanarratives. The author's background in hacking is noted as a valuable asset, providing credibility and guidance to readers unfamiliar with the subject.

Weaknesses: The review criticizes the author's understanding of contemporary political and cultural realities, describing it as embarrassingly detached and utopian. The conclusion's celebration of NFTs and a fully virtualized world is seen as unrealistic and overly optimistic.

Overall Sentiment: Mixed

Key Takeaway: While the book offers an engaging historical perspective on internet fakery and hacker culture, its conclusions about the current and future digital landscape are seen as overly idealistic and disconnected from reality.

About Author


Walter Scheirer

A History of Fake Things on the Internet

By Walter Scheirer
