
The Master Algorithm

How the Quest for the Ultimate Learning Machine Will Remake Our World

3.7 (6,328 ratings)
24-minute read | Text | 9 key ideas
Hidden within the labyrinth of today's technological frontier lies a quest as profound as any mythic odyssey: the creation of a Master Algorithm, a learning entity so adaptable, it mirrors human cognition. In "The Master Algorithm," Pedro Domingos unveils the secretive, exhilarating race to craft a universal learner—one capable of deciphering the vast tapestry of data before us. This narrative isn't just a journey through code; it's a glimpse into the soul of modern science, where lines of algorithms sculpt the future of business, reshape societal norms, and redefine the very essence of knowledge. Domingos doesn't just outline the evolution of machine learning; he positions it as the new gospel of the digital age, a transformative force already entwined with every facet of our lives. This is not merely a book but a gateway to understanding the invisible engines propelling our world into tomorrow.

Categories

Business, Nonfiction, Philosophy, Science, Technology, Artificial Intelligence, Programming, Mathematics, Computer Science, Computers

Content Type

Book

Binding

Hardcover

Year

2015

Publisher

Basic Books

Language

English

ASIN

0465065708

ISBN

0465065708

ISBN13

9780465065707


The Master Algorithm Book Summary

Introduction

In today's data-driven world, machine learning algorithms silently shape our daily experiences. When you scroll through your social media feed, search for information online, or receive product recommendations while shopping, you're interacting with machine learning systems that are constantly learning from your behavior. These algorithms have become so embedded in our digital lives that we barely notice their presence, yet their impact is profound and far-reaching.

The Master Algorithm represents the holy grail of machine learning—a hypothetical ultimate algorithm that could learn anything from data without requiring specialized programming for each new task. Imagine a single algorithm that could diagnose diseases, drive cars, translate languages, and make scientific discoveries, all by learning from examples rather than being explicitly programmed.

Throughout this book, we'll explore the five major "tribes" of machine learning—each with their own approach to learning—and see how they might be unified into this master algorithm. We'll discover how machine learning is transforming industries, advancing science, and potentially reshaping our understanding of intelligence itself.

Chapter 1: The Five Tribes: Competing Approaches to Machine Learning

Machine learning isn't a monolithic field but rather a diverse ecosystem of approaches, each with its own philosophy about how machines should learn. These approaches can be organized into five major "tribes," each drawing inspiration from different intellectual traditions and each with its own master algorithm. Understanding these different perspectives is essential for grasping the full landscape of machine learning and the path toward unification.

The Symbolists take their inspiration from logic and philosophy, viewing learning as a process of manipulating symbols according to logical rules. Their approach centers on inverse deduction: while traditional deduction moves from premises to conclusions, inverse deduction works backward, figuring out what knowledge would allow a conclusion to be derived. Symbolist algorithms typically produce explicit rules or decision trees that humans can interpret and understand. This transparency makes them particularly valuable in domains like medicine or finance, where being able to explain decisions is crucial. However, they sometimes struggle with messy real-world data where rules have many exceptions.

The Connectionists draw inspiration from the human brain, building artificial neural networks that learn by adjusting the strengths of connections between simulated neurons. When you use face recognition on your smartphone or when a voice assistant understands your commands, you're likely benefiting from connectionist algorithms. These networks excel at finding patterns in perceptual data like images, sounds, and text, but they often function as "black boxes," making decisions through processes that even their creators struggle to explain. The recent explosion of deep learning represents the modern triumph of the connectionist approach.

The Evolutionaries look to nature's own learning algorithm—evolution by natural selection—as their guide. They create populations of candidate solutions that compete, with the fittest ones surviving and reproducing with variations. Over many generations, these algorithms can discover surprisingly creative solutions to complex problems. Evolutionary approaches have designed efficient antennas for NASA satellites, optimized factory floor layouts, and even created art and music. They excel at problems with vast solution spaces where the path to the optimal solution isn't clear, though they can be computationally intensive.

The Bayesians focus on uncertainty, viewing learning as a form of probabilistic inference. They start with prior beliefs about the world and update them as new evidence arrives, following Bayes' theorem. This approach naturally handles noisy data and can incorporate expert knowledge through prior probabilities. Bayesian methods excel at making predictions with limited data and explicitly representing uncertainty—instead of simply predicting that a patient has a particular disease, a Bayesian system might say there's a 70% probability of that disease. This nuanced approach is valuable in high-stakes domains, though exact Bayesian inference can be computationally challenging.

The Analogizers learn by recognizing similarities between situations. When Netflix recommends movies similar to ones you've enjoyed or when a doctor diagnoses a patient based on similarity to previous cases, they're using analogical reasoning. Support vector machines, which find the optimal boundaries between categories, represent the analogizers' most sophisticated approach. These methods excel at tasks where similarity is easy to define but rules are hard to formulate, though their effectiveness depends heavily on how similarity is measured.

Chapter 2: Symbolists: Learning Through Logic and Rules

The Symbolist approach to machine learning has its roots in logic and philosophy, dating back to ancient Greek thinkers like Aristotle who sought to formalize human reasoning. For Symbolists, intelligence fundamentally involves manipulating symbols according to logical rules, and learning is the process of discovering the rules that explain observed data. This perspective aligns with how we often teach and explain concepts to each other—through explicit principles, definitions, and logical relationships. At the heart of Symbolist learning is inverse deduction. In standard deduction, we apply general rules to specific cases: "All humans are mortal; Socrates is human; therefore, Socrates is mortal." Inverse deduction works backward: given examples like "Socrates is mortal," "Plato is mortal," and "Both are human," the algorithm infers the general rule that "All humans are mortal." This process mirrors scientific discovery, where researchers observe specific phenomena and then formulate general theories to explain them. Decision trees exemplify this approach, creating a series of yes/no questions that lead to classifications: "If income > $50,000 and credit score > 700, then approve loan." The Symbolist approach offers several distinct advantages. First, the knowledge it discovers is transparent and interpretable—you can follow the reasoning step by step, understanding exactly why the system reached a particular conclusion. This transparency is crucial in domains like medicine, finance, or criminal justice, where stakeholders need to understand and trust algorithmic decisions. Second, Symbolist methods can easily incorporate existing knowledge. If experts already know certain rules, these can be directly encoded rather than having to be rediscovered from data. Third, these methods can learn from relatively small datasets, since they're looking for logical patterns rather than statistical correlations. However, Symbolists face significant challenges. Real-world concepts often don't have clean logical boundaries—they're fuzzy, with exceptions and special cases. A purely rule-based system might struggle to capture the nuance in concepts like "chair" (Is a beanbag a chair? What about a tree stump used for sitting?). Additionally, as the number of attributes increases, the space of possible rules grows exponentially, creating a "combinatorial explosion" that makes finding the optimal rules computationally challenging. Symbolists have developed various techniques to address these issues, such as pruning the search space and incorporating probabilistic elements into their rules. Despite these challenges, Symbolist methods have achieved remarkable successes. Expert systems have been developed for specialized domains like diagnosing bacterial infections, configuring computer systems, and predicting chemical reactions. More recently, Symbolist approaches have been combined with other machine learning methods to create hybrid systems that leverage the strengths of multiple paradigms. For instance, rules might be used to incorporate domain knowledge into neural networks, or logical constraints might guide evolutionary algorithms toward feasible solutions. The Symbolist perspective reminds us that learning isn't just about finding patterns in data—it's about discovering knowledge that can be communicated, verified, and integrated with what we already know. 
As we pursue the Master Algorithm, the Symbolists' emphasis on logical reasoning and interpretable knowledge will remain essential, especially as machine learning systems take on increasingly important roles in society.
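To make the rule-learning idea concrete, here is a minimal sketch in plain Python, assuming a handful of made-up loan applicants with income and credit-score attributes (the data and the loan setting are purely illustrative). It searches the candidate yes/no questions of the form "attribute > threshold" and keeps the one that best separates approved from rejected examples; a full symbolist learner such as a decision-tree inducer would apply this step recursively to grow a whole tree.

```python
# Minimal symbolist-style rule learner: find the single yes/no question
# "attribute > threshold" that best separates the labeled examples.
# The loan-applicant data below is hypothetical and purely illustrative.

examples = [
    # (income in dollars, credit score, approved?)
    (62_000, 720, True),
    (48_000, 710, False),
    (75_000, 680, True),
    (30_000, 590, False),
    (90_000, 760, True),
    (41_000, 640, False),
]

ATTRIBUTES = {0: "income", 1: "credit score"}

def rule_accuracy(attr, threshold):
    """Fraction of examples the rule 'approve if attr > threshold' gets right."""
    hits = sum((ex[attr] > threshold) == ex[2] for ex in examples)
    return hits / len(examples)

best = None
for attr in ATTRIBUTES:
    for threshold in sorted({ex[attr] for ex in examples}):
        acc = rule_accuracy(attr, threshold)
        if best is None or acc > best[0]:
            best = (acc, attr, threshold)

acc, attr, threshold = best
print(f"Learned rule: approve if {ATTRIBUTES[attr]} > {threshold} "
      f"(training accuracy {acc:.0%})")
```

On this toy data the learned rule is explicit and human-readable, which is exactly the transparency the Symbolist tribe values.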

Chapter 3: Connectionists: Neural Networks and Deep Learning

The Connectionist approach draws inspiration from the most sophisticated learning system we know: the human brain. With its billions of neurons connected in an intricate network, the brain somehow transforms sensory inputs into thoughts, decisions, and actions. Connectionists aim to recreate this magic through artificial neural networks—simplified mathematical models that capture the essence of how neurons might work together to learn from experience. At its core, a neural network consists of layers of interconnected nodes or "neurons." Each artificial neuron receives inputs, applies weights to them, sums them up, and passes the result through an activation function to produce an output. The true power of neural networks comes from their ability to learn these weights automatically through a process called backpropagation. When a network makes a prediction error, this algorithm calculates how much each connection contributed to the error and adjusts the weights accordingly, with these adjustments propagating backward through the network. Through repeated exposure to examples, the network gradually improves its predictions, sometimes achieving remarkable accuracy. What makes neural networks particularly powerful is their ability to discover representations of data without being explicitly programmed. Consider image recognition: while traditional programming would require manually specifying the features that define a cat—pointy ears, whiskers, a certain body shape—neural networks learn these features automatically from examples. The early layers might detect simple edges and textures, middle layers might recognize parts like eyes and paws, and deeper layers combine these into complete concepts like "cat" or "dog." This hierarchical learning mirrors how our visual system processes information and allows neural networks to tackle problems that defy explicit programming. The recent explosion of deep learning—neural networks with many layers—has transformed artificial intelligence. In 2012, a deep neural network called AlexNet dramatically outperformed traditional approaches in a prestigious image recognition competition, sparking a revolution. Since then, deep learning has achieved one breakthrough after another: defeating world champions at the ancient game of Go, generating remarkably human-like text, creating photorealistic images from text descriptions, and enabling real-time language translation. These advances have moved AI from research labs into our everyday lives through applications like voice assistants, photo organization, and content recommendation. Despite their remarkable successes, neural networks have limitations. They typically require enormous amounts of labeled data—far more than human learners need to grasp new concepts. They can be computationally expensive to train, sometimes requiring specialized hardware and days or weeks of processing time. Perhaps most significantly, they often function as "black boxes"; while they make accurate predictions, understanding exactly how they arrive at those predictions remains challenging. This opacity raises concerns in high-stakes applications like medical diagnosis or criminal justice, where being able to explain decisions is crucial. The Connectionist approach reminds us that intelligence doesn't necessarily require explicit rules or symbols—it can emerge from the collective behavior of simple processing units. 
As we pursue the Master Algorithm, the Connectionists' insights about distributed representation and hierarchical learning will be essential components, particularly for perceptual tasks where patterns are easier to recognize than to describe. The challenge lies in combining these strengths with the interpretability of symbolic approaches and the principled uncertainty handling of Bayesian methods.
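As a hedged illustration of the forward pass and backpropagation described above, the sketch below trains a tiny two-layer network on the classic XOR toy problem using only NumPy. The architecture, learning rate, and iteration count are arbitrary choices made for the example, not anything prescribed by the book or by production deep-learning frameworks.

```python
import numpy as np

# Toy backpropagation sketch: a two-layer network learning XOR.
# Hyperparameters below are illustrative assumptions.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output connections
lr = 1.0

for step in range(10_000):
    # Forward pass: compute predictions layer by layer.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: how much did each weight contribute to the squared error?
    d_out = (p - y) * p * (1 - p)           # gradient at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)    # gradient propagated back to the hidden layer

    # Adjust each weight slightly in the direction that reduces the error.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid)
    b1 -= lr * d_hid.sum(axis=0)

print(np.round(p, 2).ravel())   # should approach [0, 1, 1, 0]
```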

Chapter 4: Evolutionaries: Nature's Learning Algorithm

Evolution is perhaps the most powerful learning algorithm nature has ever produced—a process that transformed single-celled organisms into the vast diversity of complex life forms we see today, including humans with our remarkable cognitive abilities. The Evolutionary approach to machine learning draws inspiration from this natural process, creating algorithms that mimic evolution's core mechanisms: variation, selection, and inheritance. At the heart of evolutionary algorithms is a population of candidate solutions to a problem. These might be strategies for playing chess, designs for a bridge, or computer programs that perform specific tasks. The algorithm evaluates each solution according to a fitness function that measures how well it solves the problem. The fittest solutions "survive" and "reproduce," creating a new generation of solutions that inherit characteristics from their "parents." Variation is introduced through mechanisms like mutation (random changes to a solution) and crossover (combining parts of different solutions), allowing the population to explore new possibilities. Over many generations, these simple mechanisms can produce surprisingly sophisticated solutions. What makes evolutionary algorithms particularly powerful is their ability to discover solutions that human designers might never conceive. When engineers at NASA used a genetic algorithm to design an antenna for a satellite, the algorithm produced a strange, asymmetric shape that looked nothing like conventional antennas. Yet this unusual design significantly outperformed human-engineered alternatives. By exploring countless variations and keeping what works without preconceptions about what a "proper" solution should look like, evolutionary algorithms can find creative solutions to complex problems with many interacting parts. Evolutionary approaches excel at problems where the relationship between design choices and performance isn't well understood. They're particularly valuable for optimization problems—finding the best configuration among countless possibilities. Applications range from designing more efficient jet engines to discovering new drug molecules that might treat diseases. They're also used to evolve robot behaviors, trading strategies in financial markets, and even artistic creations like music and visual art. In each case, the evolutionary algorithm explores a vast space of possibilities that would be impractical for humans to search exhaustively. Unlike other machine learning approaches that typically optimize a single model, evolutionary methods maintain diversity within their population. This diversity helps them avoid getting stuck in suboptimal solutions (local optima) and makes them robust to changing conditions. If the problem changes slightly, some members of the diverse population might already be well-adapted to the new circumstances, allowing for rapid adaptation—just as in natural evolution. Despite these strengths, evolutionary algorithms face challenges. They can be computationally expensive, often requiring thousands or millions of evaluations to find good solutions. They sometimes struggle with problems where gradient-based methods (like those used in neural networks) can efficiently navigate toward better solutions. And while they excel at optimization, they're not always the best choice for classification or prediction tasks that other machine learning approaches handle well. 
The Evolutionary perspective reminds us that intelligence doesn't require central planning or explicit reasoning—it can emerge from simple processes of variation and selection operating over time. As we pursue the Master Algorithm, the Evolutionaries' insights about creative exploration and adaptation will be essential components, particularly for complex design problems where the path to optimal solutions isn't clear.
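The variation-selection-inheritance loop can be shown in a few lines. The following is a toy sketch of a genetic algorithm on the classic "OneMax" problem (evolve a bit string of all ones); the problem choice, population size, and mutation rate are illustrative assumptions, not drawn from the book.

```python
import random

# Toy genetic algorithm: evolve a 30-bit string toward all ones ("OneMax").
random.seed(0)
LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 40, 60, 0.02

def fitness(bits):
    return sum(bits)                      # number of ones in the string

def mutate(bits):
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in bits]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)     # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: the fitter half of the population survives as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]

    # Inheritance with variation: children come from crossover plus mutation.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"Best fitness after {GENERATIONS} generations: {fitness(best)}/{LENGTH}")
```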

Chapter 5: Bayesians: Reasoning Under Uncertainty

The Bayesian approach to machine learning centers on a fundamental insight: learning is inseparable from uncertainty. In a world of incomplete information and noisy data, we can never be absolutely certain about our conclusions. Instead of seeking definitive answers, Bayesians frame learning as updating beliefs based on evidence, using probability theory as the mathematical foundation for this process. At the core of Bayesian learning is Bayes' theorem, a simple yet profound formula discovered by Thomas Bayes in the 18th century. In its simplest form, it states that the probability of a hypothesis given some evidence equals the probability of the evidence given the hypothesis, multiplied by the prior probability of the hypothesis, divided by the overall probability of the evidence. This mathematical formulation captures the essence of scientific reasoning: we start with prior beliefs, observe evidence, and update our beliefs accordingly. The more surprising the evidence is under alternative hypotheses, the more strongly it shifts our beliefs toward the hypothesis that predicts it well. Bayesian networks exemplify this approach in practice. These graphical models represent variables as nodes and their probabilistic relationships as arrows. For example, a medical diagnosis network might include nodes for diseases, symptoms, and risk factors, with arrows showing how they influence each other. The network encodes both the structure of these relationships and their strengths as probabilities. When new evidence arrives—such as a patient presenting with certain symptoms—the network can calculate the updated probabilities of various diseases, taking into account all relevant factors and their interactions. What distinguishes Bayesians from other machine learning tribes is their emphasis on incorporating prior knowledge and quantifying uncertainty. Rather than making point predictions ("this patient has pneumonia"), Bayesian methods provide probability distributions ("there's a 70% chance of pneumonia, 20% chance of bronchitis, and 10% chance of another condition"). This approach is particularly valuable in high-stakes domains like medicine, where understanding the certainty of a diagnosis affects treatment decisions, or autonomous systems, where knowing when the system is uncertain is crucial for safety. Bayesian methods excel in situations with limited data. While other approaches might struggle to draw conclusions from small datasets, Bayesians can leverage prior knowledge to make reasonable inferences even with sparse evidence. As more data arrives, the influence of the prior diminishes, and the evidence increasingly dominates the conclusions. This graceful handling of the small-data regime makes Bayesian methods valuable for problems where collecting data is expensive or time-consuming, such as rare disease research or space exploration. Despite their elegant theoretical foundation, Bayesian methods face practical challenges. Calculating the updated probabilities (posterior distributions) often involves complex integrals that don't have analytical solutions. While approximation techniques like Markov Chain Monte Carlo have made Bayesian inference more tractable, computational complexity remains a limitation for large-scale problems. Additionally, specifying appropriate prior distributions can be challenging—too informative a prior might bias the results, while too vague a prior might not provide enough guidance. 
The Bayesian perspective reminds us that learning is not about finding the "correct" model but about maintaining and updating a distribution over possible models as evidence accumulates. This view aligns with how science itself progresses: not through absolute certainty but through increasingly refined degrees of belief based on accumulating evidence. As we pursue the Master Algorithm, the Bayesians' principled handling of uncertainty will be an essential component, particularly for problems where data is limited or where understanding confidence levels is crucial.
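The verbal statement of Bayes' theorem above reduces to a short calculation. In the sketch below, the priors and the symptom likelihoods for the pneumonia/bronchitis example are invented solely to show the mechanics of the update; they carry no medical meaning.

```python
# Bayes' theorem in miniature: update beliefs about a diagnosis given a symptom.
# All probabilities below are invented purely to illustrate the mechanics.

prior = {"pneumonia": 0.10, "bronchitis": 0.30, "other": 0.60}

# Hypothetical likelihoods: P(persistent cough | disease).
likelihood = {"pneumonia": 0.90, "bronchitis": 0.80, "other": 0.20}

# Posterior is proportional to likelihood * prior; dividing by the
# overall probability of the evidence normalizes it to sum to one.
evidence = sum(likelihood[d] * prior[d] for d in prior)
posterior = {d: likelihood[d] * prior[d] / evidence for d in prior}

for disease, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({disease} | cough) = {p:.2f}")
```

Observing the symptom shifts belief toward the diagnoses that predict it well, which is exactly the updating behavior the chapter describes.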

Chapter 6: Analogizers: The Power of Similarity-Based Learning

The Analogizer approach to machine learning is perhaps the most intuitive of all the tribes, reflecting how humans often learn: by drawing parallels between new situations and familiar ones. When you encounter a new fruit that looks like an apple but with orange skin, you might guess it tastes sweet with citrus notes—a prediction based purely on similarity to things you already know. Analogizers formalize this reasoning process, creating algorithms that learn by measuring similarities between examples. The simplest manifestation of this approach is the nearest-neighbor algorithm, which classifies new examples by finding the most similar previously seen examples. Imagine a doctor diagnosing patients based on their symptoms. Rather than following explicit rules, she might recall similar patients she's treated before and make a diagnosis based on their conditions. Nearest-neighbor algorithms work the same way: to classify a new email as spam or not, they find the most similar emails in their database and check how those were classified. Despite its simplicity, this approach can be remarkably effective for many problems. Support Vector Machines (SVMs) represent a more sophisticated version of similarity-based learning. Instead of simply comparing new examples to all previous ones, SVMs identify the most informative examples—called support vectors—that lie at the boundaries between categories. These boundary examples define the decision surface that separates different classes. What makes SVMs powerful is their use of the "kernel trick," which allows them to implicitly map data into high-dimensional spaces where complex patterns become linearly separable, without actually having to compute in those high dimensions. Analogizers excel at problems where defining explicit rules is difficult but similarity is easy to measure. Image recognition is a prime example—it's hard to write rules that define what makes something a cat, but we can easily measure how similar a new image is to known cat images. Recommendation systems also rely heavily on similarity: Netflix suggests movies similar to ones you've enjoyed, and Amazon recommends products purchased by customers with similar buying patterns. These systems don't need to understand why you like certain items; they just need to recognize patterns of similarity. One of the key advantages of similarity-based methods is their ability to learn from very few examples. While neural networks might need thousands of examples of each digit to learn handwriting recognition, a nearest-neighbor approach can make reasonable predictions after seeing just a few examples of each digit. This "few-shot learning" capability makes analogizers valuable in domains where labeled data is scarce or expensive to obtain, such as rare disease diagnosis or specialized industrial applications. The main challenge for analogizers is defining appropriate similarity measures. In simple cases with numerical features, Euclidean distance (the straight-line distance between points in feature space) might suffice. But for complex data like text documents, protein structures, or social networks, designing effective similarity measures requires domain knowledge and careful engineering. Additionally, similarity-based methods can struggle with high-dimensional data due to the "curse of dimensionality"—as the number of features increases, all examples tend to become equally distant from each other, making similarity judgments less meaningful. 
The Analogizer perspective reminds us that intelligence doesn't always require explicit models or rules—sometimes, recognizing patterns of similarity is enough. As we pursue the Master Algorithm, the analogizers' insights about similarity-based reasoning will be an essential component, particularly for problems where data is limited or where the flexibility to recognize novel patterns is essential.
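Here is a minimal sketch of nearest-neighbor classification, assuming only NumPy, plain Euclidean distance, and a pair of hypothetical fruit features (sweetness and acidity): a new example simply takes the label of its single closest stored neighbor.

```python
import numpy as np

# 1-nearest-neighbor sketch: label a new example by its single closest neighbor.
# The (sweetness, acidity) features and labels are hypothetical.
train_X = np.array([[8.0, 2.0],   # apple
                    [7.5, 2.5],   # apple
                    [6.0, 6.5],   # orange
                    [5.5, 7.0]])  # orange
train_y = np.array(["apple", "apple", "orange", "orange"])

def predict(x):
    distances = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to each stored example
    return train_y[np.argmin(distances)]              # copy the closest example's label

print(predict(np.array([7.0, 3.0])))   # -> "apple": closest to the stored apples
```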

Chapter 7: The Quest for Unification: Toward the Master Algorithm

The quest for a unified theory of machine learning—a Master Algorithm that combines the strengths of all five tribes—represents one of the most exciting frontiers in artificial intelligence. Each tribe has developed powerful tools for learning from data, but each also has blind spots and limitations. Symbolists struggle with perceptual tasks and uncertainty, connectionists with incorporating prior knowledge and explaining their reasoning, evolutionaries with efficiency, Bayesians with scaling to complex models, and analogizers with defining appropriate similarity measures. A truly unified approach would overcome these limitations, creating a learning system greater than the sum of its parts. The path toward unification begins with recognizing the complementary nature of the five approaches. Symbolists excel at representing structured knowledge and logical relationships, while connectionists capture complex patterns in perceptual data. Evolutionaries explore vast solution spaces creatively, Bayesians handle uncertainty rigorously, and analogizers make effective predictions from sparse data. Rather than viewing these as competing paradigms, we can see them as different facets of the same underlying phenomenon: learning from experience. Several promising approaches to unification have emerged in recent years. Probabilistic programming languages combine the expressiveness of programming languages with Bayesian inference, allowing developers to build models that incorporate both prior knowledge and learning from data. Deep learning has incorporated elements from multiple tribes, with architectures that include symbolic representations, evolutionary optimization techniques for hyperparameter tuning, and Bayesian methods for quantifying uncertainty. Hybrid systems that combine rule-based reasoning with neural networks have shown promise in domains requiring both perceptual processing and logical reasoning. The implications of a Master Algorithm would be profound across all domains of human endeavor. In science, it could accelerate discovery by automatically formulating and testing hypotheses. In medicine, it could integrate genetic, clinical, and environmental data to provide truly personalized treatments. In education, it could create customized learning experiences that adapt to each student's strengths, weaknesses, and interests. In business, it could optimize complex systems while adapting to changing conditions. The potential for positive impact is enormous, though so too are the challenges of ensuring such powerful technology is used ethically and responsibly. Perhaps most importantly, the quest for the Master Algorithm forces us to confront fundamental questions about the nature of knowledge and learning. How do we balance prior beliefs with new evidence? How do we represent complex relationships in the world? How do we explore creatively while building on what works? These questions transcend machine learning, touching on philosophy, psychology, and neuroscience. By seeking to build machines that learn, we gain deeper insight into our own learning processes. While a complete Master Algorithm remains an aspiration rather than a reality, the journey toward it has already transformed our world. Each step toward unification brings new capabilities and applications, from more accurate medical diagnoses to more efficient energy systems to more engaging educational tools. 
As we continue to integrate insights from all five tribes, we move closer to creating truly intelligent systems that can learn anything from data—a development that may ultimately be as significant as the invention of the scientific method or the industrial revolution.
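One modest, everyday step in this direction is simply to combine learners of different styles and let them vote. This is ordinary model combination rather than the unified Master Algorithm the chapter envisions, but it shows how complementary learners can cover for one another. The sketch below (synthetic one-dimensional data and three deliberately simple classifiers) is an illustration built on stated assumptions, not a method from the book.

```python
import numpy as np

# Majority-vote ensemble over three deliberately simple learners.
# Ordinary model combination, not the Master Algorithm; the data is synthetic.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 1))
y = (X[:, 0] > 5).astype(int)            # the true concept is simply "x > 5"

def rule_learner(X, y):                  # symbolist-flavored: one threshold rule
    best = max(np.unique(X[:, 0]), key=lambda t: np.mean((X[:, 0] > t) == y))
    return lambda Xq: (Xq[:, 0] > best).astype(int)

def nearest_neighbor(X, y):              # analogizer-flavored: 1-nearest neighbor
    return lambda Xq: np.array([y[np.argmin(np.abs(X[:, 0] - q))] for q in Xq[:, 0]])

def class_means(X, y):                   # statistical baseline: closest class mean
    m0, m1 = X[y == 0].mean(), X[y == 1].mean()
    return lambda Xq: (np.abs(Xq[:, 0] - m1) < np.abs(Xq[:, 0] - m0)).astype(int)

learners = [fit(X, y) for fit in (rule_learner, nearest_neighbor, class_means)]

X_test = np.array([[2.0], [4.9], [5.1], [9.0]])
votes = np.stack([learner(X_test) for learner in learners])   # shape (3, 4)
print((votes.sum(axis=0) >= 2).astype(int))                   # majority vote, expected [0 0 1 1]
```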

Summary

Machine learning represents one of the most transformative technologies of our time, quietly reshaping everything from how we shop and consume entertainment to how we diagnose diseases and conduct scientific research. The five tribes of machine learning—symbolists, connectionists, evolutionaries, Bayesians, and analogizers—each offer unique perspectives on how to extract knowledge from data. While they differ in their approaches and philosophies, they all contribute essential pieces to the puzzle of creating truly intelligent systems. The symbolists' logical clarity, the connectionists' pattern recognition, the evolutionaries' creative exploration, the Bayesians' principled handling of uncertainty, and the analogizers' similarity-based reasoning all capture important aspects of learning. The ultimate goal of unifying these approaches into a Master Algorithm represents not just a technical challenge but a profound intellectual journey. Such a unified theory would transform our understanding of learning itself, with implications far beyond computer science. It would give us new tools to tackle humanity's greatest challenges—from climate change to disease to poverty—by enabling us to learn more effectively from the vast amounts of data we now collect. For those interested in the future of technology and society, understanding machine learning's foundations and potential is essential. Whether you're a student considering a career in data science, a professional looking to leverage these tools in your field, or simply a curious mind wondering how algorithms are reshaping our world, exploring the diverse landscape of machine learning approaches offers valuable insights into one of the defining technologies of the 21st century.

Best Quote

“If you’re a lazy and not-too-bright computer scientist, machine learning is the ideal occupation, because learning algorithms do all the work but let you take all the credit.” ― Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World

Review Summary

Strengths: The book covers a very interesting and significant subject, exploring the world of computer programs that solve problems through learning. It delves into various algorithms, such as neural networks and Bayesian algorithms, in search of a "master algorithm."

Weaknesses: The reviewer finds the book irritating, particularly due to the author's use of examples that seem relevant only to a niche audience, like Silicon Valley enthusiasts. The reviewer also criticizes the author's overly enthusiastic tone, which feels disconnected from the practical experiences of most readers.

Overall Sentiment: Critical

Key Takeaway: While the book tackles an intriguing topic with potential for broad interest, its execution is flawed by an overly niche focus and a fervent tone that may alienate readers not immersed in Silicon Valley tech culture.

About Author


Pedro Domingos

I'm a professor emeritus of computer science and engineering at the University of Washington and the author of 2040 and The Master Algorithm. I'm a winner of the SIGKDD Innovation Award and the IJCAI John McCarthy Award, two of the highest honors in data science and AI. I'm a Fellow of the AAAS and AAAI, and I've received an NSF CAREER Award, a Sloan Fellowship, a Fulbright Scholarship, an IBM Faculty Award, several best paper awards, and other distinctions. I received an undergraduate degree (1988) and M.S. in Electrical Engineering and Computer Science (1992) from IST, in Lisbon, and an M.S. (1994) and Ph.D. (1997) in Information and Computer Science from the University of California at Irvine. I'm the author or co-author of over 200 technical publications in machine learning, data science, and other areas. I'm a member of the editorial board of the Machine Learning journal, co-founder of the International Machine Learning Society, and past associate editor of JAIR. I was program co-chair of KDD-2003 and SRL-2009, and I've served on the program committees of AAAI, ICML, IJCAI, KDD, NIPS, SIGMOD, UAI, WWW, and others. I've written for the Wall Street Journal, Spectator, Scientific American, Wired, and others. I helped start the fields of statistical relational AI, data stream mining, adversarial learning, machine learning for information integration, and influence maximization in social networks.


