
How to Measure Anything
Finding the Value of "Intangibles" in Business
Categories
Business, Nonfiction, Finance, Science, Economics, Leadership, Audiobook, Management, Entrepreneurship
Content Type
Book
Binding
Hardcover
Year
2007
Publisher
John Wiley & Sons Inc
Language
English
ASIN
0470110120
ISBN
0470110120
ISBN13
9780470110126
How to Measure Anything Summary
Introduction
Have you ever faced a business challenge that seemed impossible to quantify? Perhaps you needed to measure customer satisfaction, the value of your brand, or the effectiveness of a training program, but these concepts felt too intangible to pin down with numbers. This common dilemma leads many professionals to make critical decisions based on gut feeling rather than evidence, often with costly consequences. The truth is that virtually anything that matters in business can be measured, even if it initially seems abstract or intangible. The key lies not in achieving perfect precision, but in reducing uncertainty enough to make better decisions. By reframing how we think about measurement—seeing it as a process of uncertainty reduction rather than perfect quantification—we unlock powerful methods that can transform seemingly impossible measurement problems into practical solutions that drive better outcomes.
Chapter 1: Define Your Measurement Problem with Clarity
The journey to measuring what matters begins with a crucial first step: defining exactly what you're trying to measure and why it matters. Many measurement challenges appear insurmountable simply because they haven't been properly defined. When we face questions like "How effective is our marketing?" or "What's the value of employee training?", these concepts seem too nebulous to quantify.

The Department of Veterans Affairs faced this exact challenge when evaluating a $100 million IT security investment. Initially, executives struggled with how to measure "security"—a concept that seemed entirely intangible. Their breakthrough came when they reframed the question. Instead of asking "How do we measure IT security?", they asked "What specific events are we trying to prevent, and what costs would those events create?" This clarification revealed that improved security meant reducing the frequency and severity of specific undesirable events like virus attacks, unauthorized access, and data breaches. By decomposing the abstract concept of "security" into observable components, the VA team created a measurement framework that identified specific systems (like encryption infrastructure and biometric authentication) and linked them to specific outcomes (like reduced virus attacks and unauthorized access). Each outcome could then be associated with concrete costs: productivity losses, fraud losses, legal liabilities, and mission interference. This clarification process transformed an apparently immeasurable concept into something quantifiable. The team realized that when they observe "better security," they're actually observing a reduction in the frequency and impact of specific events. This insight allowed them to measure what previously seemed unmeasurable.

The clarification chain works for any measurement challenge: If something matters, it must be detectable in some way. If it's detectable, it must be detectable in some amount. And if it can be detected in some amount, it can be measured. This approach works whether you're measuring customer satisfaction, strategic alignment, or any other seemingly intangible concept. When facing your own measurement challenges, start by asking: "What problem am I trying to solve with this measurement?" and "What would I observe if this thing improved or worsened?" These questions will help you define your measurement problem with clarity and precision, setting the foundation for all subsequent measurement efforts.
Chapter 2: Calibrate Your Uncertainty Before Measuring
Before diving into measurement, it's essential to understand what you already know about the thing you're trying to measure. Even when you don't know the exact value of something, you still know something about it—some values would be impossible or highly unlikely. The key is expressing this uncertainty in a way that's both honest and useful.

When security experts at the Department of Veterans Affairs initially claimed they couldn't estimate virus attack durations, careful questioning revealed they actually knew quite a bit. Through structured conversation, one expert admitted: "I would be 90% confident the average downtime is between 4 and 12 hours." This range represents valuable knowledge, even if it's not precise. By expressing uncertainty as a calibrated probability assessment—a range with a specific confidence level—we transform "I don't know" into useful information.

Unfortunately, research by Nobel Prize winner Daniel Kahneman and his colleague Amos Tversky has shown that most people are naturally poor at assessing probabilities. We tend toward overconfidence, providing ranges that are too narrow. When asked to give 90% confidence intervals for general knowledge questions, most people capture the correct answer only 50% of the time or less—a clear sign of overconfidence.

The good news is that calibration is a learnable skill. Through training exercises involving trivia questions with known answers, people can dramatically improve their ability to assess probabilities accurately. After just a half-day of training with methods like equivalent betting, pros and cons listing, and avoiding anchoring, most people become nearly perfectly calibrated—their 90% confidence intervals actually contain the correct answer 90% of the time. Pat Plunkett, a program manager at the Department of Housing and Urban Development, describes calibration as "an eye-opening experience. Many people, including myself, discovered how optimistic we tend to be when it comes to estimating. Once calibrated, you are a changed person. You have a keen sense of your level of uncertainty."

This calibration skill transfers to real-world predictions. In a controlled experiment with IT analysts at Giga Information Group, analysts who received calibration training made predictions about future industry events with remarkable accuracy. Their stated confidence levels closely matched their actual success rates, while untrained executives showed significant overconfidence.

By quantifying your current uncertainty through calibrated estimates, you establish the foundation for all subsequent measurement. You gain a realistic understanding of what you know and don't know, which is essential for determining what to measure next and how to measure it.
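To make the idea concrete, here is a minimal Python sketch of a calibration check, assuming you have recorded 90% confidence intervals and the true answers from a trivia-style exercise; the data below are purely illustrative.

```python
# Check calibration: what fraction of your stated 90% confidence
# intervals actually contain the true answer? A calibrated estimator
# should capture roughly 9 out of 10.

# Illustrative (low, high, true_value) triples from a trivia exercise.
responses = [
    (1900, 1920, 1912),
    (300, 700, 553),
    (10, 40, 26),
    (5, 15, 22),     # a miss: the true value falls outside the range
    (100, 400, 250),
]

hits = sum(low <= true <= high for low, high, true in responses)
hit_rate = hits / len(responses)
print(f"Captured {hits}/{len(responses)} = {hit_rate:.0%} "
      f"(target for 90% intervals: 90%)")
```

A hit rate far below 90% is the overconfidence signal described above; the remedy is to widen your ranges until your stated confidence matches your actual performance.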
Chapter 3: Calculate the Value of Additional Information
Not all measurements are worth making. The key insight that transforms measurement practices is understanding that information has economic value precisely because it reduces uncertainty about decisions with consequences. By calculating this value before measuring, you can determine which measurements are worth pursuing and how much effort they justify.

Consider a company evaluating a bold new advertising campaign expected to cost $5 million. If calibrated experts estimate a 60% chance of success (generating $40 million in profit) and a 40% chance of failure (losing the $5 million investment), we can calculate the "expected opportunity loss" for each decision option. If they approve the campaign and it fails, they lose $5 million. If they reject the campaign and it would have succeeded, they forgo $40 million in profit. Multiplying these potential losses by their probabilities gives an expected opportunity loss of $2 million for approval versus $24 million for rejection. Approval is therefore the better default choice, and its expected opportunity loss of $2 million is the expected value of perfect information: the most the company should ever spend to eliminate uncertainty about whether the campaign would succeed. In practice, they might aim to spend 2-10% of this amount (between $40,000 and $200,000) on market research or other measurements to reduce that uncertainty.

Douglas Hubbard discovered a pattern after analyzing over 60 major business decisions involving more than 4,000 variables, which he calls the "Measurement Inversion": the economic value of measuring a variable is usually inversely proportional to how much measurement attention it typically receives. Organizations routinely measure things with low information value while ignoring variables that could significantly affect decisions. A large UK insurance company illustrated this principle when they devoted considerable resources to "function point counting" for software development estimates. Analysis showed this expensive measurement added no value—it was no more accurate than initial estimates. Meanwhile, they neglected to measure the benefits of proposed projects, which would have provided far more decision value.

The value of information follows a distinctive curve—it rises quickly with initial uncertainty reduction but levels off as we approach perfect certainty. Meanwhile, the cost of information typically starts low but accelerates as we seek greater precision. This relationship reveals a counterintuitive truth: when you have a lot of uncertainty, you don't need much data to tell you something useful. By calculating the value of information before measuring, you can focus your measurement efforts where they matter most, avoid wasting resources on low-value measurements, and discover high-value measurements you might otherwise have overlooked.
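As a worked illustration, here is the expected-opportunity-loss arithmetic from the campaign example, sketched in Python with the figures given above.

```python
# Expected opportunity loss (EOL) for the $5M advertising campaign:
# EOL = (probability you are wrong) x (cost of being wrong).
p_success, p_failure = 0.60, 0.40
profit_if_success = 40_000_000   # gained if the campaign succeeds
loss_if_failure = 5_000_000      # lost if the campaign fails

eol_approve = p_failure * loss_if_failure    # approve, but it fails
eol_reject = p_success * profit_if_success   # reject, but it would have worked

# The better choice is the one with the smaller EOL; its EOL is the
# expected value of perfect information (EVPI): the most that
# eliminating the uncertainty could possibly be worth.
evpi = min(eol_approve, eol_reject)
print(f"EOL approve: ${eol_approve:,.0f}")   # $2,000,000
print(f"EOL reject:  ${eol_reject:,.0f}")    # $24,000,000
print(f"EVPI:        ${evpi:,.0f}")          # $2,000,000
print(f"Suggested measurement budget: "
      f"${0.02 * evpi:,.0f} to ${0.10 * evpi:,.0f}")
```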
Chapter 4: Choose the Right Measurement Methods
Once you've defined your measurement problem and calculated the value of information, the next challenge is selecting appropriate measurement methods. The good news is that you don't need to invent new approaches—effective measurement methods already exist for virtually any business problem.

Consider how Eratosthenes, working in Alexandria in the third century BC, measured Earth's circumference without modern technology. He observed that at noon on the summer solstice, a well in Syene was fully illuminated (meaning the sun was directly overhead), while in Alexandria, vertical objects cast shadows. By measuring the angle of these shadows and knowing the distance between the cities, he calculated Earth's circumference to within 3% of the actual value. This illustrates a fundamental truth about measurement: the first approach you think of is often "the hard way." With ingenuity, you can identify simpler, more practical methods.

When the Cleveland Orchestra wanted to measure whether performances were improving, they didn't implement complex patron surveys—they simply counted standing ovations. This provided meaningful data with minimal effort. Similarly, when a European paint distributor wanted to measure how network speed affected sales, they cross-referenced PBX phone logs (showing customer hang-ups while on hold) with network utilization data and sales records. This revealed the relationship between slower network response times and lost sales.

For most measurement challenges, four basic approaches can be applied. First, look for existing trails—almost every phenomenon leaves evidence that can be analyzed. Second, use direct observation, even if limited to samples. The Environmental Protection Agency needed to measure how many drivers were using leaded gasoline in cars designed for unleaded fuel. Their solution was surprisingly simple: they stationed observers at randomly selected gas stations to record which type of fuel cars used, then matched license plates to vehicle records. Third, if no trail exists, create one by adding "tracers." Amazon offers free gift wrapping partly to track which books are purchased as gifts. Similarly, coupons help retailers determine which newspapers their customers read. Fourth, if tracking isn't feasible, conduct an experiment. When measuring the effect of customer relationship training, one company randomly assigned 30 support staff to training while keeping others as a control group. By comparing post-call survey results, they determined with 99% confidence that the training improved word-of-mouth recommendations.

The right measurement method depends on your specific situation, but remember that you need less data than you think. The Rule of Five states that with just five random samples from any population, there's a 93.75% chance that the median of the population lies between the smallest and largest values in your sample. For many business decisions, this level of uncertainty reduction is sufficient. By applying these approaches iteratively and starting with the simplest methods first, you can make significant progress in measuring what previously seemed immeasurable.
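The Rule of Five is easy to verify yourself. Below is a short Python sketch that computes the 93.75% figure directly and then sanity-checks it by simulation on an arbitrary skewed population.

```python
import random

# Rule of Five: with 5 random samples, the chance that the population
# median lies between the sample min and max is 1 - 2 * (1/2)^5 = 93.75%.
# (The median is missed only if all 5 samples land on the same side of
# it, which happens with probability (1/2)^5 for each side.)
analytic = 1 - 2 * 0.5**5
print(f"Analytic: {analytic:.4f}")  # 0.9375

# Sanity-check by simulation on a skewed (lognormal) population.
population = [random.lognormvariate(0, 1) for _ in range(100_000)]
median = sorted(population)[len(population) // 2]

trials, hits = 20_000, 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= median <= max(sample):
        hits += 1
print(f"Simulated: {hits / trials:.4f}")  # ~0.9375
```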
Chapter 5: Apply Sampling Techniques to Reduce Uncertainty
Sampling—observing some things to learn about all things—is perhaps the most powerful and versatile measurement tool available. It allows us to reduce uncertainty about large populations by examining just a small portion, making it practical to measure things that would otherwise be impossible to count completely.

When William Sealy Gosset, a chemist at the Guinness brewery in Dublin, needed to measure which types of barley produced the best beer yields, he couldn't test large numbers of batches. Instead, he developed the "t-statistic" (publishing under the pseudonym "Student") to estimate confidence intervals from very small samples—as few as two. This breakthrough made it possible to draw meaningful conclusions from limited data.

The power of small samples is illustrated by how quickly uncertainty diminishes with additional observations. With just five random samples, you can significantly narrow your confidence interval about a population. After 30 samples, the interval becomes much narrower, but diminishing returns set in—you need four times as many samples (120) to cut the error in half again. This principle applies whether you're measuring jelly bean weights, customer satisfaction, or employee productivity.

Various sampling techniques can be applied depending on your measurement challenge. Catch-recatch sampling helps measure populations that can't be directly counted. Marine biologists use this to estimate fish populations by tagging a sample of fish, releasing them, then catching another sample and noting what percentage are tagged. This same approach can measure undiscovered software bugs, unauthorized system intrusions, or potential customers not yet identified. Spot sampling involves taking random snapshots instead of continuous tracking. To determine how employees spend their time, you might randomly observe 100 instances throughout the day. Finding employees on conference calls in 12 of those instances indicates they spend about 12% of their time on calls (with a 90% confidence interval of 8% to 18%). Serial number sampling can reveal production quantities from limited observations. During World War II, Allied intelligence estimated German tank production by analyzing the serial numbers of captured tanks. With just eight captured tanks, they could estimate total production with surprising accuracy—far better than traditional intelligence methods.

For many business measurements, the "mathless 90% confidence interval" provides a simple yet powerful approach. With 5 samples, use the highest and lowest values; with 8 samples, use the second highest and second lowest; with 11 samples, use the third highest and third lowest. This method requires no calculations yet provides reliable confidence intervals for the median of a population.

Remember that the uncertainty about whether a value exceeds a decision threshold can fall much faster than uncertainty about the exact value. After just a few observations, you may still have a wide range, but if the threshold is well outside that range, you can be highly confident about which side of the threshold the true value lies on. By applying these sampling techniques iteratively and focusing on decision-relevant thresholds, you can achieve practical certainty with far less data than you might expect.
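As one concrete example, here is a minimal sketch of the catch-recatch estimate described above, applied to the fish scenario; the counts are invented for illustration.

```python
# Catch-recatch (Lincoln-Petersen): estimate a population you cannot
# count directly. Tag a first sample, release it, then see what
# fraction of a second sample carries tags.
tagged_first = 1000      # fish tagged and released (illustrative)
second_sample = 1000     # fish caught in the second pass
tagged_in_second = 50    # of those, how many were already tagged

# If 5% of the second sample is tagged, the 1,000 tagged fish are
# presumably about 5% of the whole population.
estimate = tagged_first * second_sample / tagged_in_second
print(f"Estimated population: {estimate:,.0f}")  # 20,000
```

The same arithmetic applies to the software-bug case mentioned above: have two testers work independently, treat the first tester's bugs as "tagged," and see how many of them the second tester rediscovers.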
Chapter 6: Leverage Bayesian Methods for Better Accuracy
Traditional statistical methods often ignore valuable information we already possess. When measuring something, we rarely start from complete ignorance—we typically have prior knowledge, experience, or contextual understanding that should inform our analysis. Bayesian methods provide a framework for systematically incorporating this knowledge into our measurements.

Named after Thomas Bayes, an 18th-century mathematician, the Bayesian approach treats probability as a measure of belief that can be updated as new evidence emerges. The process begins with a "prior probability" representing what we know before gathering new data, then updates this to a "posterior probability" after considering new evidence. Consider a software company trying to estimate defect rates in a new application. Based on historical data from similar projects, they might start with a prior belief that between 2 and 8 defects per thousand lines of code is 90% likely. After testing a sample of the code and finding 3 defects in 1,000 lines, they can update this range to a narrower posterior estimate. The power of Bayesian updating becomes evident when comparing it to traditional methods that might ignore the valuable historical context.

Douglas Hubbard applied this approach when helping a major consumer products company estimate the value of a proposed information system. Starting with calibrated expert estimates, they identified the key uncertainties affecting the decision. As they gathered data on each uncertainty, they systematically updated their estimates using Bayesian methods, ultimately determining that the system would deliver significant value—a conclusion later confirmed by actual results.

Bayesian approaches are particularly valuable when dealing with rare events or small samples. Traditional methods might struggle to estimate the risk of a catastrophic failure that has never occurred before, but Bayesian analysis can incorporate expert judgment and related evidence to produce meaningful estimates. One powerful application is heterogeneous benchmarking, which uses data from similar but not identical situations. A retail chain opening a new location might have no direct historical data for that specific area. However, they can start with data from existing stores, then update this prior information with local demographic data, competitor analysis, and early sales results from the new location to refine their forecast.

The Bayesian approach also helps overcome cognitive biases. When presented with new evidence, people tend to either overreact to it or stubbornly maintain their original beliefs. Bayesian updating provides a mathematical framework for appropriate belief revision—neither ignoring new evidence nor giving it too much weight. By integrating prior knowledge with new observations, Bayesian methods often produce more accurate estimates with less data than traditional approaches. This makes them particularly valuable for business decisions where perfect information is unattainable but improved estimates can significantly reduce risk.

To apply Bayesian methods in practice, start by explicitly documenting your prior beliefs about the variable you're measuring. Gather new data through observation or experiment. Then use Bayes' theorem to update your prior beliefs in light of the new evidence. This process can be repeated iteratively as more information becomes available, continuously refining your estimates and reducing uncertainty.
Remember that the goal isn't mathematical elegance but practical decision support. Even simplified Bayesian approaches can dramatically improve your measurement accuracy compared to methods that ignore prior knowledge.
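For readers who want to see an update in code, here is one simplified way to run the defect-rate example above, assuming a conjugate Gamma-Poisson model; the prior parameters are illustrative choices made to roughly match the 2-8 defects-per-KLOC belief, not anything prescribed by the book.

```python
import random

# One simplified Bayesian update for the defect-rate example: model the
# rate (defects per 1,000 lines) with a conjugate Gamma-Poisson pair.
# Gamma(shape=7.5, rate=1.5) is an illustrative prior with mean 5 whose
# 90% interval roughly matches the 2-8 defects/KLOC range stated above.
prior_shape, prior_rate = 7.5, 1.5

# Evidence: 3 defects found in 1,000 lines (1 KLOC of exposure).
defects_observed, kloc_tested = 3, 1.0

# Conjugate update: counts add to the shape, exposure adds to the rate.
post_shape = prior_shape + defects_observed
post_rate = prior_rate + kloc_tested

def summarize(shape, rate, label):
    # random.gammavariate takes a *scale* parameter, i.e. 1/rate.
    draws = sorted(random.gammavariate(shape, 1 / rate)
                   for _ in range(100_000))
    lo, hi = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
    print(f"{label}: mean {shape / rate:.1f}, "
          f"90% interval [{lo:.1f}, {hi:.1f}]")

summarize(prior_shape, prior_rate, "Prior    ")  # roughly [2.3, 8.3]
summarize(post_shape, post_rate, "Posterior")    # narrower, pulled toward 3
```

The posterior mean (about 4.2 per KLOC) sits between the prior mean of 5 and the observed rate of 3, weighted by how much evidence each carries—exactly the "appropriate belief revision" the chapter describes.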
Chapter 7: Make Decisions Based on Meaningful Data
The ultimate purpose of measurement is to inform decisions. After defining your problem, calibrating uncertainty, calculating information value, choosing methods, and gathering data, you must translate your findings into action. This requires understanding not just the measurements themselves but their implications for risk and return.

Monte Carlo simulation provides a powerful framework for this decision-making process. Unlike traditional business cases that use single-point estimates, Monte Carlo models incorporate ranges of uncertainty for each variable and simulate thousands of possible scenarios. This reveals not just the expected outcome but the full range of possibilities and their likelihoods. Consider a manufacturing company evaluating a new machine lease costing $400,000 annually. They estimate maintenance savings of $10-$20 per unit, labor savings of -$2 to $8 per unit, materials savings of $3-$9 per unit, and production levels of 15,000-35,000 units. A simple calculation using midpoints suggests annual savings of $600,000, making the lease appear profitable. However, a Monte Carlo simulation reveals that despite the positive expected value, there's a 14% chance the lease will lose money. Is this acceptable? The answer depends on the decision maker's risk tolerance and alternative opportunities.

When the Environmental Protection Agency needed to evaluate a Geographic Information System for tracking methyl mercury (a substance suspected of lowering children's IQ), they had to weigh the $3 million cost against potential health benefits. By modeling the uncertainty in exposure levels, population affected, and potential IQ impact, they could make an informed decision about whether this investment represented the best use of limited resources.

Douglas Hubbard discovered what he calls the "Risk Paradox"—the largest, most consequential decisions often receive the least rigorous analysis. While sophisticated quantitative methods are routinely applied to operational decisions like loan approvals or insurance pricing, major strategic decisions about mergers, research initiatives, or IT portfolios often rely on subjective judgments or simplistic scoring systems. This represents a massive missed opportunity, as studies show that organizations using quantitative risk analysis methods make better decisions and achieve superior financial performance. NASA found that cost and schedule estimates from Monte Carlo simulations had less than half the error of traditional accounting estimates. Similarly, oil exploration firms using quantitative risk assessment methods consistently outperformed competitors. These examples demonstrate that embracing uncertainty through rigorous analysis leads to better decisions than pretending certainty exists where it doesn't.

To apply these principles in your organization, start by identifying a specific decision that depends on uncertain variables. Model the decision explicitly, including the ranges of uncertainty for each variable. Use Monte Carlo simulation to understand the full distribution of possible outcomes. Focus on the probabilities that matter for your decision—not just the average outcome but the likelihood of exceeding or falling below critical thresholds. As Lord Kelvin observed, "When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind."
By following the path from problem definition through measurement to decision, you transform uncertainty into practical certainty—not perfect knowledge, but knowledge sufficient for confident action.
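Here is a minimal Monte Carlo sketch of the machine-lease example in Python. For simplicity it treats each range above as a uniform distribution, so the loss probability it reports will differ somewhat from the 14% figure quoted in the chapter, which is based on treating the ranges as 90% confidence intervals on normal distributions.

```python
import random

LEASE_COST = 400_000   # annual lease cost, from the example above
TRIALS = 100_000

losses = 0
total = 0.0
for _ in range(TRIALS):
    maintenance = random.uniform(10, 20)     # $/unit saved
    labor = random.uniform(-2, 8)            # $/unit saved (can be negative)
    materials = random.uniform(3, 9)         # $/unit saved
    units = random.uniform(15_000, 35_000)   # annual production volume
    net = (maintenance + labor + materials) * units - LEASE_COST
    total += net
    losses += net < 0

print(f"Mean annual net benefit: ${total / TRIALS:,.0f}")   # ~$200,000
print(f"Chance the lease loses money: {losses / TRIALS:.1%}")
```

The point is not the exact percentage but the shape of the answer: a single midpoint calculation says "profitable," while the simulation tells you how often it isn't.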
Summary
Throughout this exploration of measurement, we've discovered that virtually anything that matters in business can be measured, even if it initially seems intangible or abstract. The journey begins with clearly defining what you're trying to measure and why it matters, then progresses through calibrating your uncertainty, calculating the value of additional information, choosing appropriate measurement methods, applying sampling techniques, leveraging Bayesian approaches, and ultimately making better decisions based on meaningful data. As Douglas Hubbard reminds us, "When you understand that measurement is about reducing uncertainty, not eliminating it, you realize that even small amounts of relevant data can be surprisingly valuable." This perspective liberates us from the paralysis of seeking perfect measurement and encourages us to take practical steps toward better decisions through incremental uncertainty reduction. Start today by identifying one important decision in your work that depends on a seemingly immeasurable variable, then apply these principles to reduce your uncertainty just enough to make a better choice. Remember that the goal isn't perfect knowledge but practical certainty—the confidence to act wisely in an uncertain world.
Best Quote
“If a measurement matters at all, it is because it must have some conceivable effect on decisions and behaviour. If we can't identify a decision that could be affected by a proposed measurement and how it could change those decisions, then the measurement simply has no value” ― Douglas W. Hubbard, How to Measure Anything: Finding the Value of "Intangibles" in Business
Review Summary
Strengths: The review highlights several strengths of the book, including its clear explanation of measurements as approximations, the art and science of making educated guesses, and the use of Bayesian thinking. It praises the book's examples, such as Fermi's estimation problem and the innovative use of metrics by the Cleveland Orchestra. The book's discussion of the importance of the confidence interval, Monte Carlo simulations, and practical applications like Amazon's strategy to identify gift purchases is also noted as impressive.
Overall Sentiment: Enthusiastic
Key Takeaway: The book provides a comprehensive and enlightening approach to understanding and applying measurement techniques, emphasizing that exactness is not necessary for usefulness and reliability in metrics.