
Who Can You Trust?
How Technology Brought Us Together – and Why It Might Drive Us Apart
By Rachel Botsman
Categories
Business, Nonfiction, Psychology, Philosophy, Science, Economics, Technology, Artificial Intelligence, Sociology, Cultural
Content Type
Book
Binding
Kindle Edition
Year
2017
Publisher
Penguin
Language
English
ASIN
B073R5QTJT
Who Can You Trust? Book Summary
Introduction
Throughout human history, trust has been the invisible infrastructure that enables cooperation between individuals, communities, and societies. From the handshake agreements in medieval marketplaces to the complex algorithms powering today's digital platforms, the mechanisms of trust have continuously evolved to solve an unchanging human problem: how do we cooperate with people we don't know personally? This fundamental question has shaped economies, technologies, and social structures across centuries.

The transformation of trust reveals fascinating patterns that help us understand not just our past, but our rapidly changing present and possible futures. By tracing how trust evolved from face-to-face interactions in small villages to the complex digital networks of today, we gain insight into why certain technologies succeed while others fail, how power shifts between institutions and individuals, and what might be gained or lost as artificial intelligence increasingly mediates our trust relationships.

Whether you're a business leader navigating digital disruption, a policymaker considering regulation of emerging technologies, or simply someone trying to understand how trust works in our increasingly connected world, this exploration offers valuable perspective on one of society's most fundamental yet invisible forces.
Chapter 1: Local Trust: The Foundation of Early Communities
For tens of thousands of years, human trust operated primarily at the local level. In small hunter-gatherer bands and early agricultural settlements, trust was based on direct personal knowledge and face-to-face interactions. These communities, typically numbering between 50 and 150 individuals (corresponding to what anthropologists call "Dunbar's number"), relied on intimate knowledge of each member's character, abilities, and past behavior to determine trustworthiness.

The mechanics of local trust were remarkably effective yet straightforward. When everyone knows everyone else, reputation becomes the primary currency of social life. A hunter who failed to share meat, a farmer who didn't contribute to communal projects, or a trader who cheated in exchanges would face immediate social consequences - from gossip and ridicule to outright ostracism. These penalties were particularly severe in environments where survival depended on group cooperation, creating powerful incentives for trustworthy behavior.

Archaeological evidence reveals how these trust systems enabled increasingly complex forms of collaboration. The construction of Göbekli Tepe in modern-day Turkey around 9500 BCE required hundreds of people working together over generations to create massive stone structures - all without formal contracts, institutions, or even writing. This coordination was possible because local trust systems created reliable expectations about how others would behave, allowing for sophisticated division of labor and resource sharing.

The Maghribi traders of the 11th-century Mediterranean demonstrate how local trust could extend beyond immediate geographic proximity. These Jewish merchants created a coalition spanning from Sicily to Egypt that enabled long-distance trade without formal contracts. When a merchant in Cairo needed to sell goods in Palermo, he would entrust them to an agent. If that agent cheated the merchant, word would spread throughout the network, and the dishonest agent would be collectively boycotted by all traders. This reputation-based system proved remarkably effective at facilitating complex economic transactions across vast distances.

However, local trust had significant limitations that became increasingly problematic as societies grew more complex. It couldn't easily scale beyond Dunbar's number - the cognitive limit on the stable social relationships one person can maintain. It created insular communities often suspicious of outsiders, limiting innovation and exchange. And it relied on information that spread slowly and imperfectly through social networks, making it vulnerable to rumors and manipulation.

As human societies urbanized and commerce expanded, these limitations became increasingly constraining. The transition from villages to cities, from tribes to kingdoms, required new mechanisms to facilitate cooperation among strangers. This necessity would drive the next major evolution in trust systems - the rise of institutional trust, which would enable unprecedented social complexity and economic growth beyond the boundaries of personal relationships.
Chapter 2: Institutional Trust: The Rise of Hierarchical Authority (1800s-1950s)
The Industrial Revolution marked a pivotal shift in how humans organized trust. As populations grew and urbanized, the intimate local trust of small communities could no longer sustain increasingly complex economic and social interactions. The solution that emerged was institutional trust - a system where confidence was placed in centralized organizations rather than in individuals.

Banks became the quintessential trust institutions of this era. The Bank of England, founded in 1694, established a model for central banking that spread worldwide during the 19th century. These institutions transformed from private ventures into public utilities, backed by government guarantees and regulations. When people deposited their money, they weren't trusting the individual banker but rather the institution itself, with its imposing architecture, formal procedures, and government oversight. The physical manifestation of institutional trust was deliberately impressive - grand marble buildings with classical columns projected permanence, stability, and authority to inspire confidence.

Brands emerged as another powerful vehicle for institutional trust during this period. In the 1870s, companies like Bass Brewery registered the first trademarks to assure customers of consistent quality. As mass production separated producers from consumers, brands like Coca-Cola, Ford, and Kellogg's became proxies for personal relationships. The marketing message was clear: "You may never meet Mr. Heinz, but you can trust his 57 varieties." These corporate entities invested heavily in reputation building through advertising and consistent quality control, creating trust signals that could scale across vast markets.

Professional licensing and regulatory bodies further extended institutional trust. The American Medical Association (founded 1847), the American Bar Association (founded 1878), and countless other professional organizations created standards, credentials, and codes of conduct that signaled trustworthiness. A doctor's framed diploma on the wall indicated reliability not because you knew the doctor personally, but because you trusted the institutions that certified them. Government agencies like the Food and Drug Administration (established 1906) extended this principle to entire industries, creating rules and inspection regimes that enabled consumers to trust products from unknown manufacturers.

The institutional trust model reached its zenith in the mid-20th century. The period from 1945 to 1970 saw unprecedented faith in major institutions across Western democracies. Walter Cronkite, the CBS news anchor dubbed "the most trusted man in America," embodied this phenomenon. When he delivered the news, millions accepted it not because they knew Cronkite personally, but because they trusted CBS as an institution to verify information and present it accurately.

However, the institutional trust model contained inherent vulnerabilities that would eventually undermine it. It was opaque, hierarchical, and often unaccountable. The very distance and formality that enabled trust at scale also created opportunities for abuse. The Tuskegee Syphilis Study, conducted by the U.S. Public Health Service from 1932 to 1972, demonstrated how institutional trust could be weaponized against marginalized communities. This unethical experiment, in which African American men were denied treatment for syphilis without their informed consent, left a legacy of distrust that continues to affect healthcare outcomes today. Such abuses of institutional power would eventually lead to widespread questioning of authority and set the stage for new forms of trust to emerge.
Chapter 3: Digital Disruption: How Technology Transformed Trust (1990s-2010s)
The digital revolution that began in the 1990s fundamentally challenged institutional trust models. The internet created new possibilities for connection and commerce, but it also introduced unprecedented uncertainty. How could you trust someone you'd never met, in a place you'd never been, selling things you couldn't physically examine? This question drove one of the most significant transformations in trust mechanisms in human history.

The early internet was characterized by anonymity and uncertainty. The famous 1993 New Yorker cartoon captured this perfectly: "On the internet, nobody knows you're a dog." Without physical cues or institutional guarantees, new trust mechanisms were needed. The first wave of solutions simply transplanted institutional trust online. PayPal succeeded where earlier digital payment systems failed because it was backed by bank guarantees and government regulations - traditional trust anchors in a new digital context.

However, the true innovation came with the development of distributed trust systems. eBay pioneered this approach shortly after its 1995 launch by introducing user ratings and reviews. This seemingly simple innovation was revolutionary - it allowed strangers to establish trustworthiness through accumulated feedback from previous interactions. A seller with thousands of positive reviews could be trusted not because of their credentials or affiliations, but because of their demonstrated behavior over time. This represented a fundamental shift in how trust was established and maintained. Amazon expanded this model with its sophisticated review system, allowing consumers to make informed decisions based on the collective experiences of others. By 2010, these platforms had created what futurist Alvin Toffler called "prosumers" - users who both produced and consumed trust information. The power to determine trustworthiness had shifted from centralized gatekeepers to distributed networks of users. This democratization of trust assessment was unprecedented in human history and enabled cooperation at scales previously unimaginable.

The sharing economy represented the next frontier in distributed trust. When Airbnb launched in 2008, the idea of staying in a stranger's home seemed ludicrous to many. Founder Brian Chesky recalls investors telling him, "Somebody's going to get raped or murdered, and the blood is gonna be on your hands." Yet by creating robust digital trust mechanisms - verified profiles, mutual reviews, secure payments, and insurance guarantees - Airbnb enabled millions of trust leaps between strangers. By 2015, it was accommodating more guests per night than the entire Hilton hotel chain worldwide, without owning a single property.

What made these platforms revolutionary wasn't just their technology but their ability to create what Rachel Botsman calls "trust stacks" - layered systems that build confidence through multiple reinforcing mechanisms. Take Uber: users trust the app (institutional layer), the payment system (transactional layer), and ultimately the specific driver (personal layer). This combination of institutional and distributed trust enabled interactions that would have been unthinkable just years earlier, fundamentally transforming industries from transportation to accommodation to retail.

The success of these digital trust platforms demonstrated that distributed systems could scale more efficiently than institutional ones. By taking what had previously been an economic liability - the uncertainty of dealing with strangers - and transforming it into an asset through innovative trust mechanisms, these platforms created enormous value. However, as they grew in power and influence, new questions emerged about accountability, bias, and control that would shape the next phase of trust evolution in the digital age.
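To make these mechanics concrete, here is a minimal sketch of how a platform might aggregate feedback into a reputation score. It is illustrative only - not eBay's or any real platform's algorithm - and every name and parameter in it is invented. It uses a Bayesian average, a common technique in which sellers with little history are pulled toward a neutral prior, so that trust must be earned through accumulated feedback rather than a handful of early reviews.

```python
# Minimal sketch of a distributed reputation score (illustrative only;
# not any real platform's algorithm). A Bayesian average blends a
# seller's observed ratings with a global prior, so a single glowing
# review cannot establish trust the way hundreds of ratings can.

from dataclasses import dataclass, field


@dataclass
class Seller:
    name: str
    ratings: list[float] = field(default_factory=list)  # stars, 1.0-5.0

    def add_rating(self, stars: float) -> None:
        self.ratings.append(stars)


def reputation(seller: Seller, prior_mean: float = 3.0, prior_weight: int = 10) -> float:
    """Bayesian-average score: near prior_mean with few ratings,
    converging to the seller's observed mean as feedback accumulates."""
    n = len(seller.ratings)
    return (prior_weight * prior_mean + sum(seller.ratings)) / (prior_weight + n)


newcomer = Seller("new_seller")
newcomer.add_rating(5.0)  # one perfect review

veteran = Seller("veteran_seller")
for _ in range(500):
    veteran.add_rating(4.8)  # consistent behavior over many transactions

print(f"{newcomer.name}: {reputation(newcomer):.2f}")  # ~3.18 - still unproven
print(f"{veteran.name}: {reputation(veteran):.2f}")    # ~4.76 - earned over time
```

The design choice mirrors the chapter's central point: in distributed trust, what moves the score is demonstrated behavior over time, not credentials or affiliations.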
Chapter 4: The Dark Side: When Distributed Trust Systems Fail
Despite their transformative potential, distributed trust systems revealed significant vulnerabilities as they scaled. The same mechanisms that enabled trust between strangers could be manipulated, exploited, and weaponized in ways their creators never anticipated. By the mid-2010s, the dark side of distributed trust became increasingly apparent, raising profound questions about responsibility, accountability, and governance.

The Uber Kalamazoo incident of February 2016 exposed the limitations of algorithmic trust assessment. Jason Dalton, an Uber driver with a stellar 4.73 rating, went on a shooting spree between picking up passengers, killing six people. Despite erratic behavior reported by one passenger, Uber's systems failed to identify the threat. The incident raised disturbing questions: Who was responsible? The platform? The algorithm? The users who had previously rated Dalton positively? In a distributed system, accountability itself becomes distributed - often to the point of dissolution, creating a "responsibility vacuum" where no single entity is clearly liable when things go wrong.

Manipulation of reputation systems emerged as another critical vulnerability. Amazon and Yelp have battled waves of fake reviews, while social media platforms struggle with bot accounts and manufactured consensus. A 2016 Harvard Business School study revealed how racial bias infected Airbnb's reputation system, with hosts systematically discriminating against guests with African-American-sounding names. The study demonstrated how distributed trust systems can amplify existing social biases rather than eliminating them, raising questions about whether algorithmic systems merely reproduce human prejudices at scale.

The darknet markets represent perhaps the most extreme example of distributed trust's complexity. Platforms like Silk Road created functional trust systems for illegal transactions, complete with escrow payments, detailed reviews, and dispute resolution mechanisms. When Silk Road founder Ross Ulbricht (known as Dread Pirate Roberts) was arrested in 2013, the FBI was stunned to discover that the illegal marketplace had processed over $1.2 billion in transactions with remarkably few instances of fraud. Even in the absence of legal recourse, distributed trust mechanisms had created functional markets among individuals society would classify as inherently untrustworthy.

Privacy concerns have become increasingly central to the distributed trust debate. The systems that enable trust between strangers often do so by collecting unprecedented amounts of personal data. Uber's controversial "God View" tool allowed employees to track users' movements in real time without their knowledge. Facebook's emotional contagion experiment in 2014, where researchers secretly manipulated users' news feeds to affect their emotional states, revealed how platforms could exploit trust for manipulation. These incidents highlighted the asymmetric power relationship between users and platforms.

Perhaps most concerning is the emergence of what scholar Shoshana Zuboff calls "surveillance capitalism" - the commodification of personal data for prediction and influence. The trust mechanisms that enable the digital economy increasingly double as surveillance infrastructure. Every rating, review, and transaction generates data that platforms use not just to facilitate trust, but to predict and modify behavior. This creates a paradox in which the very systems that liberate us from old hierarchies establish new forms of control and manipulation.

The Facebook-Cambridge Analytica scandal of 2018 exemplified this danger. The harvesting of 87 million users' data without meaningful consent for political targeting demonstrated how distributed trust systems could be weaponized against democratic processes. It revealed that while users trusted Facebook with their data, the platform's incentives were fundamentally misaligned with user interests. This misalignment represents the central challenge for distributed trust systems moving forward: how to create governance structures that align platform incentives with the public good.
Chapter 5: Artificial Intelligence and the Future of Trust
Artificial intelligence represents the next evolutionary frontier in trust relationships. As AI systems become increasingly autonomous and integrated into daily life, humans face unprecedented questions about how to establish trust with non-human entities capable of making consequential decisions. This shift is transforming not just how we trust, but who - or what - we trust.

The development of self-driving vehicles illustrates this challenge perfectly. When Tesla introduced its Autopilot feature in 2015, it created a new kind of trust relationship between humans and machines. Unlike trusting a car to function mechanically, Autopilot required trusting an AI system to make life-or-death decisions in real time. The fatal crash involving Joshua Brown in May 2016, the first known death involving a self-driving system, highlighted the stakes. Brown had apparently placed excessive trust in the system, reportedly watching a Harry Potter movie while the car was in Autopilot mode.

Trust in AI follows what researchers call the "performance-expectation gap." Studies show that humans rapidly develop trust in autonomous systems that perform well initially, but this trust collapses catastrophically after a single failure. This pattern differs significantly from human-to-human trust, which tends to be more resilient to occasional mistakes. The challenge for AI designers is creating appropriate levels of trust - neither the overtrust that led to Brown's death nor the undertrust that would prevent adoption of potentially beneficial technologies.

Anthropomorphism has emerged as a powerful but problematic trust mechanism for AI. Research consistently shows that humans more readily trust machines with human-like features. One frequently cited study found that participants were significantly more likely to trust a self-driving car when it was given a name ("Iris") and a female voice. Similarly, voice assistants like Alexa, Siri, and Cortana are deliberately designed with personalities that evoke trust through familiarity. However, this anthropomorphism can create misleading impressions of capability and judgment.

The "Tay" chatbot incident of 2016 demonstrated the dangers of misplaced trust in AI systems. Microsoft's experimental Twitter bot was designed to learn from interactions with users. Within 24 hours, malicious users had taught Tay to post racist, sexist, and anti-Semitic content. The incident revealed how AI systems can amplify the worst aspects of human behavior when their trust mechanisms are naively designed. It also raised questions about responsibility - was Microsoft at fault, or the users who manipulated the system?

The emergence of "explainable AI" reflects growing recognition that transparency is essential for appropriate trust. Traditional "black box" machine learning systems make decisions without providing understandable explanations. This opacity makes it impossible to assess whether trust is warranted. New approaches focus on making AI systems that can articulate their reasoning in human-understandable terms. For example, medical diagnostic AI is being designed to explain which features of an X-ray influenced its conclusion, allowing doctors to evaluate the system's judgment.

Perhaps most profoundly, AI is forcing us to codify trust in explicit rules and values. Autonomous vehicles face versions of the famous "trolley problem" - ethical dilemmas about how to allocate harm when all options involve some damage. Should a self-driving car prioritize its passengers or pedestrians? Should it consider the age of potential victims? These questions, once philosophical thought experiments, now require explicit programming decisions. Companies like DeepMind have established ethics boards to address these questions, recognizing that trust in AI ultimately depends on alignment with human values and transparent governance structures that ensure accountability.
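The principle behind explainable AI can be illustrated with a toy example. The sketch below uses a hypothetical linear risk model with invented features and weights; real diagnostic systems rely on far more sophisticated attribution methods over deep networks. What it demonstrates is the core idea: the system returns not just a prediction but each input's contribution to it, so a human can audit the reasoning.

```python
# Toy illustration of explainable AI: a linear scoring model that
# reports per-feature contributions alongside its prediction.
# Features, weights, and the applicant are all hypothetical.

import math

WEIGHTS = {
    "on_time_payments": 1.8,   # invented weight
    "account_age_years": 0.6,  # invented weight
    "recent_defaults": -2.5,   # invented weight
}
BIAS = -1.0


def predict_with_explanation(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return a probability plus each feature's contribution to the
    decision score, making the model's reasoning inspectable."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))  # logistic link
    return probability, contributions


applicant = {"on_time_payments": 0.9, "account_age_years": 2.0, "recent_defaults": 1.0}
prob, why = predict_with_explanation(applicant)

print(f"predicted probability: {prob:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")  # largest influences first
```

A doctor or loan officer reading that output can see which factors drove the decision and challenge any that seem wrong - exactly the accountability that opaque "black box" systems lack.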
Chapter 6: Rating Society: The Emergence of Reputation Economics
The convergence of distributed trust mechanisms and ubiquitous data collection is giving rise to what might be called a "rating society" - a world where personal reputation becomes a form of capital more valuable than traditional assets. This transformation represents both the culmination of trust's evolution and a potential return to its origins in small communities where reputation was everything.

China's Social Credit System, announced in 2014, represents the most comprehensive attempt to formalize reputation as currency. This government initiative aims to assign every citizen and business a trust score based on their behavior across financial, social, and even personal domains. High scores bring privileges like faster loan approvals and visa processing, while low scores restrict access to services like high-speed trains and better schools. Private companies like Ant Financial's Sesame Credit already score millions of citizens based on their purchasing patterns, social connections, and online behavior.

While often portrayed as Orwellian in Western media, the Chinese system addresses a genuine trust deficit in a rapidly developing society. With limited credit history for most citizens and widespread concerns about counterfeit goods and corruption, the Social Credit System attempts to create accountability in a society transitioning from local to distributed trust. As one Chinese citizen told researchers, "It's like living in a small town again, where everyone knows what you do."

Similar systems are emerging organically in Western economies, albeit in more fragmented forms. Platforms like Airbnb and Uber have created portable reputation systems that follow users across contexts. A poor Uber rating can make it difficult to get rides, while a strong Airbnb profile makes it easier to book accommodations. These ratings increasingly function as a form of identity, with some landlords now requesting Airbnb profiles from potential tenants as character references.

The financial sector is rapidly adopting alternative credit scoring based on behavioral data. Companies like ZestFinance and Lenddo analyze thousands of data points - from social media activity to smartphone usage patterns - to assess creditworthiness. These approaches allow millions previously excluded from traditional credit systems to access financial services. However, they also raise profound questions about privacy, consent, and algorithmic bias.

The gamification of reputation represents another frontier in the rating society. Apps like Peeple (launched in 2016) allow users to rate other people across professional, personal, and dating contexts. While Peeple faced significant backlash, similar functionality has been integrated into mainstream platforms. LinkedIn endorsements, Facebook's reputation system, and even dating apps create quantified measures of social worth that influence real-world opportunities.

The implications of this shift are profound. In a rating society, trust becomes simultaneously more democratic and more controlling. Anyone can contribute to reputation formation, but everyone is subject to constant evaluation. The system rewards conformity to whatever behaviors the algorithm values, creating powerful incentives for social compliance. The spontaneity and forgiveness that characterized local trust systems are replaced by permanent digital records that never forget a mistake.

Perhaps most significantly, the rating society blurs the line between economic and social value. When reputation directly determines access to resources and opportunities, every social interaction becomes an economic transaction. The Uber driver who offers water and pleasant conversation isn't just being kind - they're rationally investing in their reputation capital. This commodification of social interaction represents both the triumph of distributed trust and its fundamental limitation, raising essential questions about what kind of society we want to create as reputation becomes our most valuable currency.
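A deliberately simplified sketch can make the mechanics of a rating society concrete. Everything below - the signals, weights, and thresholds - is invented for illustration; no real system, including China's Social Credit System, is this simple. The point is the mechanism: weighted behavioral signals collapse into a single number that gates access to privileges.

```python
# Simplified sketch of a "rating society" trust score (all signals,
# weights, and thresholds invented). Behavioral signals, normalized
# to the 0-1 range, collapse into one number that gates privileges.

SIGNAL_WEIGHTS = {
    "payment_history": 0.4,
    "peer_ratings": 0.3,
    "verified_identity": 0.2,
    "platform_tenure": 0.1,
}

ACCESS_THRESHOLDS = {  # hypothetical gated privileges
    "standard_service": 0.3,
    "fast_track_approval": 0.6,
    "premium_access": 0.8,
}


def trust_score(signals: dict[str, float]) -> float:
    """Weighted sum of normalized behavioral signals."""
    return sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())


def granted_privileges(score: float) -> list[str]:
    """Every privilege whose threshold the score meets or exceeds."""
    return [name for name, cutoff in ACCESS_THRESHOLDS.items() if score >= cutoff]


citizen = {
    "payment_history": 0.9,
    "peer_ratings": 0.7,
    "verified_identity": 1.0,
    "platform_tenure": 0.5,
}

score = trust_score(citizen)
print(f"trust score: {score:.2f}")               # 0.82
print("privileges:", granted_privileges(score))  # all three unlocked
```

Note what the sketch makes visible: a small change in any weighted behavior shifts the score, and with it real-world access - the "powerful incentives for social compliance" described above.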
Summary
Throughout human history, trust has undergone three major transformations, each reshaping how societies function at their core. We began with local trust, where face-to-face interactions within tight-knit communities governed cooperation. This evolved into institutional trust, where credentials, regulations, and brands enabled transactions between strangers across vast distances. Most recently, we've witnessed the rise of distributed trust, where digital networks and algorithms enable direct cooperation between individuals without traditional intermediaries. Each system solved certain trust problems while creating new vulnerabilities and power dynamics.

The story of trust transformation offers crucial lessons for navigating our increasingly complex digital landscape. First, we must recognize that trust systems are never neutral - they always embed values and power relationships that shape who benefits and who is excluded. Second, we need governance frameworks that balance the efficiency of algorithmic trust with human judgment and ethical considerations. Finally, as individuals, we must develop new literacy around digital trust signals, understanding both their value and limitations. By consciously shaping these emerging trust systems rather than passively accepting them, we can harness their potential while preserving the human connection and contextual judgment that have always been essential to meaningful trust relationships.
Best Quote
“‘The mass population is relying less on newspapers and magazines and instead chooses self-affirming online communities,’ says Richard Edelman.” ― Rachel Botsman, Who Can You Trust?: How Technology Brought Us Together – and Why It Might Drive Us Apart
Review Summary
Strengths: The review highlights the book's comprehensive exploration of trust, including concepts like the "trust leap," reputation, and the evolution of trust from local and institutional to distributed systems. It appreciates the detailed explanations of how technology influences trust, such as through blockchain and artificial intelligence.
Weaknesses: The review notes a lack of direct answers to practical questions about whom to trust and how to build trust, suggesting the book may not fully meet readers' expectations for actionable guidance.
Overall Sentiment: Mixed
Key Takeaway: The book offers an in-depth understanding of trust's role in modern society and its technological evolution, though it may not provide direct solutions for building or identifying trustworthiness.
