
Perfectly Confident

How to Calibrate Your Decisions Wisely

4.1 (918 ratings)
23-minute read | Text | 9 key ideas
"Perfectly Confident (2020) is an in-depth exploration of what confidence is and how we can leverage it effectively. Having too much confidence can be just as bad as having too little. But how do you strike the right balance and become perfectly confident? Drawing on psychology, economics, and his own research into business leadership, author Don A. Moore offers some insightful answers."

Categories

Business, Nonfiction, Self Help, Psychology, Science, Leadership, Management, Personal Development

Content Type

Book

Binding

Kindle Edition

Year

2020

Publisher

Harper Business

Language

English

ASIN

B07W66YN2C

ISBN

0062887777

ISBN13

9780062887771


Perfectly Confident Plot Summary

Introduction

Confidence is a double-edged sword. Too much of it can lead to catastrophic decisions, while too little can paralyze us into inaction. The challenge lies not in maximizing confidence, but in calibrating it just right. When was the last time you felt absolutely certain about something, only to discover later you were completely wrong? Or perhaps you've hesitated to pursue an opportunity because you doubted your abilities, only to watch someone less qualified succeed? The path to making better decisions doesn't lie in blind optimism or crippling self-doubt, but in what psychologists call "calibrated confidence" - the ability to match your level of certainty to the actual evidence at hand. Throughout these pages, you'll discover research-backed strategies to recognize when you're being overconfident, when you're selling yourself short, and how to find that elusive middle ground where your confidence aligns with reality. This journey isn't about becoming more confident in every situation - it's about becoming perfectly confident, with the wisdom to know the difference between justified certainty and dangerous delusion.

Chapter 1: Understand the Three Faces of Confidence

Confidence isn't a single trait but manifests in three distinct forms that affect our judgment in different ways. The first is estimation - how good we think we are or how likely we believe we'll succeed. The second is placement - how we compare ourselves to others. The third is precision - how certain we are that our beliefs are accurate. Each of these dimensions can be miscalibrated in ways that lead to poor decisions.

Consider Elon Musk, whose confidence has fueled remarkable achievements. As a young immigrant, Musk helped create PayPal, then used his earnings to invest in Tesla, revolutionizing the automobile market. He simultaneously created SpaceX, transforming the space industry with plans to colonize Mars. His relentless work ethic was legendary - colleagues at his early company Zip2 would arrive at work to find him asleep in a beanbag chair after working through the night. Yet this same confidence led to significant failures. He was ousted as CEO from both Zip2 and PayPal. SpaceX's first rockets exploded disastrously. Tesla regularly missed production targets, with Musk tweeting in 2018: "I'm back to sleeping at the factory. Car biz is hell."

What makes Musk's story instructive isn't just his successes but how his confidence sometimes exceeded reality. When Tesla fell dramatically short of his promised production numbers for the Model 3, it wasn't just a minor miscalculation - it was overestimation by a factor of ten. Yet in other areas, his confidence was well-calibrated and led to breakthrough innovations that others thought impossible.

The key insight isn't that confidence is bad or good, but that it must be matched to reality. Research shows we're particularly prone to overprecision - being too sure we know the truth. When asked to provide 90% confidence intervals for uncertain quantities (ranges where we're 90% sure the true answer lies), most people's intervals contain the correct answer only about 50% of the time.
To calibrate your confidence, start by distinguishing which type of confidence you're dealing with. Are you overestimating your abilities? Comparing yourself too favorably to others? Or being too certain about your beliefs? Each requires different corrective strategies. Try widening your confidence intervals when making predictions, seek specific feedback on your performance relative to peers, and actively consider how you might be wrong about your most cherished beliefs.
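The overprecision finding lends itself to a quick self-test. Here is a minimal sketch (the trivia questions, ranges, and function name are invented for illustration) that scores how often your stated 90% intervals actually contain the truth:

```python
# Score a set of "90% confident" interval estimates: a well-calibrated
# forecaster's intervals should contain the true value about 90% of the time.

def interval_hit_rate(estimates):
    """estimates: list of (low, high, true_value) tuples."""
    hits = sum(1 for low, high, truth in estimates if low <= truth <= high)
    return hits / len(estimates)

# Invented example: three trivia answers given with "90% confident" ranges.
my_estimates = [
    (1000, 5000, 6650),           # length of the Nile in km -> miss
    (1900, 1950, 1903),           # year of the first powered flight -> hit
    (100_000, 500_000, 384_400),  # distance to the Moon in km -> hit
]
print(round(interval_hit_rate(my_estimates), 2))  # 0.67 - far below the 90% claimed
```

If your hit rate over many such questions is well under 90%, the research above suggests widening your intervals until it isn't.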

Chapter 2: Challenge Your Assumptions Through Active Testing

The most dangerous beliefs are those we never think to question. Our minds naturally search for evidence that confirms what we already believe, creating a closed loop of self-reinforcing certainty. Breaking this cycle requires deliberately testing your assumptions rather than simply defending them.

Harold Camping provides a cautionary tale of unchallenged assumptions. As president of the Christian network Family Radio, Camping predicted the world would end on May 21, 2011. His calculation was based on his interpretation of biblical numerology, claiming the apocalypse would come exactly 7,000 years after Noah's flood. When economists studied Camping's followers, they found these believers were so certain of the prophecy that they refused to accept any bet that would pay them after the predicted doomsday, regardless of how large the potential payout. Their certainty was absolute.

Of course, May 21 came and went without apocalypse. What's remarkable isn't just that Camping was wrong, but how resistant his followers were to considering this possibility. When researchers asked them to contemplate being wrong, many refused to even engage with the question. This extreme case illustrates a universal human tendency: we naturally seek confirmation rather than disconfirmation of our beliefs.

This confirmation bias appears in everyday thinking. When we ask "Is this hypothesis true?" we unconsciously search for supporting evidence. The question itself directs our attention toward confirmation. Researchers demonstrated this by asking volunteers to determine if someone was an introvert or extrovert. Those tasked with testing if someone was an extrovert chose questions likely to elicit extroverted responses, while those testing for introversion chose questions biased toward revealing introversion.

To overcome this bias, actively challenge your assumptions through a technique called "consider the opposite." Before making important decisions, explicitly ask: "How might I be wrong?"
This simple question activates different mental pathways and helps uncover blind spots. Organizations can institutionalize this approach by appointing a devil's advocate for key decisions. The Catholic Church formalized this role in 1587 when evaluating candidates for sainthood. The devil's advocate's job was to argue against canonization, suggesting natural explanations for alleged miracles. When testing your assumptions, seek out people who disagree with you. Their perspective isn't a threat but a gift that helps you see what you might be missing. Amazon's leadership principles include the obligation to "respectfully challenge decisions when they disagree, even when doing so is uncomfortable." This isn't just about avoiding errors - it's about finding truth through the collision of diverse viewpoints.

Chapter 3: Quantify Uncertainty Using Probability Distributions

When facing uncertainty, most people simplify their thinking into three categories: "It's definitely going to happen," "It definitely won't happen," or "It's 50/50." This crude approximation leads to poor decisions because it fails to capture the rich spectrum of possibilities between certainty and complete uncertainty.

The Wright brothers' journey illustrates how even pioneers struggle with uncertainty. In 1901, after a disastrous attempt to build a glider at Kitty Hawk, Wilbur Wright told his brother they should abandon their flying experiments, declaring "man won't be flying for a thousand years." Fortunately, they tried again, and in 1903 achieved the first powered flight. Wilbur later reflected, "Ever since, I have distrusted myself and avoided all predictions." While humility is warranted, none of us can avoid making predictions - every decision depends on forecasting uncertain outcomes.

A more sophisticated approach is to think in terms of probability distributions rather than single-point predictions. Consider how the chemical company BASF approached forecasting product sales. Initially, they asked product managers for "best guess" estimates of future sales. This created three problems: First, the precise number was inevitably wrong. Second, it neglected the range of possibilities. Third, it exacerbated overconfidence by focusing on a single outcome.

A better approach is to create a histogram showing the full distribution of possible outcomes with their associated probabilities. Instead of saying "We'll sell 100,000 kilograms of ibuprofen next quarter," a manager might specify: "There's a 30% chance we'll sell between 90,000-110,000 kilograms, a 5% chance we'll sell less than 30,000, and a 10% chance we'll sell over 170,000." This forces consideration of both likely outcomes and edge cases. Research shows that when people complete such histograms, they become better calibrated in their confidence.
In one study, participants estimated someone's weight from a photograph. When asked for a single "best guess" with confidence, they were right only about 30% of the time despite claiming 60-70% confidence. When forced to distribute probabilities across multiple weight ranges, their confidence aligned much better with actual accuracy.

To practice thinking in distributions, identify an upcoming project with an uncertain completion date. Instead of making a single prediction, create bins representing different time frames (e.g., "before April 1," "April 1-June 30," etc.). Assign probabilities to each bin, ensuring they sum to 100%. This exercise helps you recognize the full range of possibilities and communicate uncertainty more accurately to others. The goal isn't to eliminate uncertainty but to quantify it realistically. As the statistician Frederick Mosteller noted: "It is easy to lie with statistics, but easier without them."
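The bin-and-probability forecast described above can be sketched in a few lines; the bins and probabilities here are illustrative, not BASF's actual figures:

```python
# A forecast expressed as bins (sales ranges in kg) with probabilities
# that must sum to 100%, rather than a single "best guess" number.

forecast = {
    (0, 30_000): 0.05,
    (30_000, 90_000): 0.25,
    (90_000, 110_000): 0.30,   # the "most likely" bin from the example above
    (110_000, 170_000): 0.30,
    (170_000, 250_000): 0.10,
}

assert abs(sum(forecast.values()) - 1.0) < 1e-9, "probabilities must sum to 100%"

# The distribution answers questions a point estimate cannot,
# e.g. the chance of selling at least 110,000 kg:
p_high = sum(p for (low, _high), p in forecast.items() if low >= 110_000)
print(round(p_high, 2))  # 0.4
```

The assertion is the key discipline: forcing the probabilities to sum to 100% is what makes you confront the edge cases instead of just the most likely bin.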

Chapter 4: Calculate Expected Value to Make Better Choices

When facing uncertain decisions, our intuitions often lead us astray. We might avoid a risky opportunity with positive expected value while embracing a seemingly safe option that's actually detrimental. The solution is to systematically calculate expected value - the probability-weighted average of all possible outcomes.

Consider this personal example: When finishing my PhD, I worried about finding employment. My advisor offered an unusual insurance policy: "If you don't get a job, I'll pay you $90,000 from my own pocket." The catch? I'd need to pay him $5,000 for this insurance. Was it worth it? The expected value calculation is straightforward: if I ended up unemployed, I'd net $85,000 (the $90,000 payout minus the $5,000 premium); if I found a job, I'd simply be out the $5,000 premium. The breakeven probability is 5.6% - $5,000 divided by $90,000 - so if my chance of unemployment exceeded that threshold, the insurance would be worthwhile.

This example illustrates how expected value clarifies decisions under uncertainty. However, two common biases distort our calculations. First, wishful thinking leads us to overestimate the probability of desirable outcomes. Jerry Yang, co-founder of Yahoo!, rejected Microsoft's $44.6 billion acquisition offer in 2008, insisting Yahoo was "positioned for accelerated financial growth." Eight years later, Verizon bought Yahoo's core businesses for just $4.83 billion - roughly one-tenth of Microsoft's offer.

The second bias is fear-driven pessimism. After the September 11 attacks, Americans estimated their chance of being injured in a terrorist attack at 20%, though the actual risk was less than 0.0001%. This exaggerated fear led to costly overreactions, including wars that caused far more deaths than the original attacks.

Entrepreneurs face a particular dilemma: they need accurate beliefs to make good decisions, but also must project confidence to attract investors.
Research shows 81% of entrepreneurs rate their chances of success at least 7 out of 10, with one-third claiming certainty of success. Yet nearly 80% of new businesses fail within five years. This overconfidence can be destructive when entrepreneurs invest everything they have - maxing out credit cards, mortgaging homes, and convincing friends and family to invest - in ventures with negative expected value.

To calculate expected value effectively, separate your assessment of probabilities from your desires. For each possible outcome, assign an objective probability and multiply by its value. Then sum these products. This applies to personal decisions too, like choosing between jobs or romantic partners. Identify key dimensions (pay, advancement opportunities, personal fulfillment), score each option on these dimensions while accounting for uncertainties, and calculate the expected value.

Remember that expected value calculations are only as good as the numbers you input. When others present you with forecasts, examine their underlying assumptions.
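The insurance anecdote works out as follows (a sketch using the dollar figures from the chapter; the function name is ours):

```python
# Expected-value check for the $5,000 unemployment-insurance offer:
# the advisor pays $90,000 if no job materializes.

premium = 5_000
payout = 90_000

# The policy breaks even when probability-of-unemployment * payout = premium.
breakeven = premium / payout
print(round(breakeven * 100, 1))  # 5.6 (percent)

def insurance_ev(p_unemployed):
    """Expected value of buying the policy at a given unemployment probability."""
    return p_unemployed * payout - premium

print(insurance_ev(0.10) > 0)  # True: at a 10% risk, the policy is worth buying
```

The separation the chapter recommends is visible in the code: `p_unemployed` is an honest probability estimate, kept apart from the dollar values you hope for.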

Chapter 5: Define Clear Standards for Performance Assessment

Ambiguity breeds overconfidence. When standards are vague, it's easy to believe we're performing better than we actually are. Defining clear, measurable criteria for success is essential for well-calibrated confidence.

Consider the infamous "American Idol" contestant Kathleen Colby Dunn (stage name Koby), who delivered what judges called "one of the longest, loudest, and oddest vocal runs in 'Idol' history." When eliminated, she was genuinely shocked: "I thought I sang really, really well. I'm pretty sure I just nailed it. I guess they wanted mediocre singers?" She even suggested judge Katy Perry was "just a little jealous" because "she can't hit those notes." Koby's confidence, completely disconnected from reality, led to public embarrassment.

This disconnect isn't limited to reality TV contestants. In a classic study, psychologist Ola Svenson found that 93% of American drivers claimed their skill put them in the top half - a statistical impossibility. However, when researchers clarified specific driving skills (alertness, checking blind spots, braking, etc.), overplacement decreased substantially. Similarly, while students often claim to be more intelligent than their peers, this effect disappears when asked about performance on a specific test they all just took.

The lesson is clear: ambiguity enables self-deception, while specificity promotes accuracy. This applies in professional settings too. When employees complain about being passed over for promotion, clarifying performance standards helps them understand both the decision and what they need to do to advance. In classrooms, sharing examples of excellent work and detailed grading criteria reduces grade disputes.

Beyond individual assessment, organizations must define what constitutes success. Many proudly describe themselves as "results-oriented," but this approach can backfire. If you reward only successful outcomes and punish failures, you incentivize caution and penalize well-intentioned risk-taking.
Consider a new product that costs $100 million to develop with only a 5% chance of success. If success means $200 billion in profits, that's a good bet with an expected value of $9.9 billion. But an engineer working for a results-oriented boss would be wise to decline this opportunity since failure is likely and could cost their job.

Instead of focusing solely on outcomes, evaluate decisions based on expected value at the time they were made. Apple's Newton personal digital assistant failed commercially despite $100 million in development costs. Yet it pioneered innovations later incorporated into the iPhone, which generated over $200 billion in profits. The initial investment had positive expected value even though the specific outcome was negative.

To improve performance assessment, start by defining specific, measurable criteria for success. Break vague qualities like "leadership potential" into observable behaviors. When evaluating others or yourself, focus on the quality of decisions rather than just outcomes. And remember that even failed projects can have positive expected value if the potential upside was sufficiently large.
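The product-bet arithmetic above can be checked directly (the figures come from the chapter's hypothetical):

```python
# Expected value of the risky product: $100M cost, 5% chance of $200B profit.
cost = 100e6
p_success = 0.05
profit_if_success = 200e9

ev = p_success * profit_if_success - cost
print(f"${ev / 1e9:.1f} billion")  # $9.9 billion, matching the figure above

# Positive expected value despite a 95% chance of failure - which is why
# judging the engineer on the outcome alone punishes a good decision.
assert ev > 0
```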

Chapter 6: Learn from Both Success and Failure Through Analysis

Our natural tendency is to celebrate successes and quickly move past failures. But both contain valuable lessons that can only be extracted through deliberate analysis. Organizations and individuals who systematically examine both outcomes gain a significant advantage in calibrating their confidence.

The airline industry exemplifies this approach through rigorous post-accident investigations. When Lion Air flight 610 crashed in 2018, killing all 189 people aboard, investigators meticulously analyzed what went wrong. They discovered that Boeing's new 737 Max aircraft had a flawed automated system that pushed the plane's nose down. The pilots, unaware of this system, frantically searched the plane's manuals as they struggled to maintain control. When a similar crash occurred with Ethiopian Airlines months later, the investigation led to the grounding of all 737 Max aircraft until Boeing could fix the problem.

What makes the airline industry remarkable isn't just these post-crash investigations but their proactive approach to near-misses. The Aviation Safety Reporting System allows pilots to report incidents that almost resulted in tragedy, with immunity from prosecution for most errors. These reports help identify systemic weaknesses before they cause disasters. This culture of learning has made commercial air travel extraordinarily safe - less than 0.05 deaths per billion kilometers traveled.

Tech companies have adapted similar practices. Google conducts formal postmortems after failures, using a standard form with two key questions: "What happened?" and "What can we do differently next time?" The form includes a quote from Henry Ford: "Failure is only opportunity to begin again. Only this time, more wisely." Rather than hiding failures, Google celebrates them as learning opportunities.

For future-oriented analysis, psychologist Gary Klein developed the "premortem" technique.
Before launching a project, team members imagine it has failed spectacularly and write down every possible reason why. This exercise reveals potential problems that might otherwise go unmentioned. When the Philadelphia 76ers basketball team conducted a premortem, coach Brett Brown asked, "If we're going to die, what's it going to look like?" The team identified vulnerabilities like "jacking up bad shots" and "turning the ball over," then developed strategies to address these weaknesses.

Another powerful technique is backcasting - starting with a desired outcome and working backward to identify necessary steps. While forecasting predicts the most likely future, backcasting maps the path to a preferred future. The most effective approach combines both: use backcasting to identify your ideal outcome, then conduct a premortem to anticipate obstacles.

To apply these techniques personally, conduct your own postmortems after significant successes and failures. Ask: What went well? What didn't? What assumptions proved correct or incorrect? Where did luck play a role? Then identify specific lessons for future decisions. For upcoming projects, imagine both success and failure, mapping the path to the former while preparing for the latter.

Chapter 7: Seek Diverse Perspectives to Test Your Views

Our natural tendency is to surround ourselves with people who think like us and confirm our existing beliefs. Breaking this echo chamber is essential for calibrating confidence, as diverse perspectives help us identify blind spots and correct errors in our thinking.

Ray Dalio, founder of the world's most successful hedge fund, learned this lesson the hard way. In 1982, he confidently predicted an economic depression: "There'll be no soft landing. I can say that with absolute certainty, because I know how markets work." Instead, the U.S. experienced unprecedented growth. Dalio lost so much money that he had to fire all his employees, becoming his company's sole remaining worker. "Being so wrong—and especially so publicly wrong—was incredibly humbling and cost me just about everything I had built," he recalled.

From this failure, Dalio developed a radically different approach. He actively sought out independent thinkers who disagreed with him: "By engaging them in thoughtful disagreement, I'd be able to understand their reasoning and have them stress-test mine." He built a culture of "radical transparency" at his firm Bridgewater Associates, where everyone from interns to executives was encouraged to challenge others' views. Most meetings were recorded and available to everyone within the company. This culture of constructive disagreement helped Bridgewater make extraordinarily successful investment decisions.

Professional poker player Annie Duke suggests a simple way to invite such challenges: ask "Wanna bet?" This question forces both parties to quantify their confidence and consider what the other person might know that they don't. When two people disagree about a forecast, a wager can have positive expected value for both if they have different information or perspectives. The mere possibility that someone would take the opposite side of your bet should cause you to reconsider your position.
This principle explains why day trading is generally a losing proposition. Every time you trade a stock, someone is on the other side of that transaction. Are you more informed than they are? Most trading volume comes from sophisticated institutional investors and algorithms. If you don't know why the person on the other side is trading with you, you might be what poker players call "the patsy." Studies show day traders lose an annualized 23% after fees.

A better approach is to harness the "wisdom of crowds" - the phenomenon where collective judgment often outperforms individual experts. When 787 visitors to a fair guessed the weight of an ox, the average guess was remarkably accurate - just one pound off the actual weight of 1,198 pounds. Individual errors canceled out, preserving the signal while eliminating noise. This principle works best when the crowd is diverse and independent, with errors that aren't correlated.

To apply this in your own decisions, actively seek out people with different backgrounds, expertise, and viewpoints. Before important meetings, collect everyone's opinions independently to prevent groupthink. Take the "outside view" by asking how similar situations typically turn out. And remember that disagreement isn't a threat but an opportunity to improve your understanding. As John Stuart Mill wrote: "It is only by the collision of adverse opinions that the remainder of the truth has any chance of being supplied."
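The error-cancellation behind the ox story is easy to simulate (these are simulated guesses, not Galton's actual data):

```python
import random

# 787 independent, noisy guesses scattered around the ox's true weight of
# 1,198 pounds; the noise scale of 150 lbs is an assumption for illustration.
random.seed(0)  # fixed seed so the sketch is reproducible
true_weight = 1198
guesses = [true_weight + random.gauss(0, 150) for _ in range(787)]

crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate))  # lands very close to 1198: individual errors cancel
```

The caveat in the text shows up here too: the averaging only works because each guess's error is independent. If every guesser copied the same loud voice, the errors would be correlated and the crowd would be no wiser than that one voice.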

Summary

The path to better decisions lies not in maximizing confidence, but in calibrating it perfectly to match reality. Throughout these chapters, we've explored how overconfidence leads to catastrophic errors while underconfidence causes missed opportunities. The solution isn't choosing one extreme over the other, but finding what Aristotle called the "golden mean" between excess and deficiency.

As Ray Dalio discovered after his devastating market loss: "In retrospect, my crash was one of the best things that ever happened to me." It taught him to let go of his attachment to feeling right and instead seek critical reflection: "How do I know I'm right?" This question - a willingness to consider how we might be wrong - is at the heart of well-calibrated confidence. The most successful decision-makers combine the courage to act decisively with the humility to question their own certainty.

Start today by identifying one important upcoming decision. Rather than making a single prediction, create a probability distribution of possible outcomes. Seek out someone who disagrees with your assessment and genuinely try to understand their perspective. Calculate the expected value of different options, separating your desires from your probability estimates. Remember that perfect confidence isn't about being more certain - it's about matching your certainty precisely to the evidence at hand.


Review Summary

Strengths: The reviewer appreciates the practical strategies offered by Don A. Moore in the book, finding it useful for improving decision-making and understanding confidence levels. The reviewer mentions personal growth and improved performance in academic and work-related situations as a result of implementing the book's concepts. Weaknesses: Not explicitly mentioned. Overall: The reviewer highly recommends "Perfectly Confident" by Don A. Moore for its valuable insights into confidence, decision-making, and practical tools for self-improvement. The book is praised for its impact on personal and professional development.

About Author


Don A. Moore


