Tag: Decision Making

[Episode 28] The Return of a Decision-Making Jedi: My Discussion With Michael Mauboussin

Guess who's back? Back again?
Michael Mauboussin is back; tell a friend.

Mauboussin was actually the very first guest on the podcast, when it was still very much an experiment. I enjoyed it so much, I decided to continue with the show. (If you missed his last interview, you can listen to it here, or if you’re a member of The Learning Community, you can download a transcript.)

Michael is one of my very favorite people to talk to, and I couldn’t wait to pick up right where we left off.

In this interview, Michael and I dive deep into some of the topics we care most about here at Farnam Street, including:

  • The concept of “base rates” and how they can help us make far better decisions and avoid the pain and consequences of making poor choices
  • How to know where you land on the luck/skill continuum and why it matters
  • Michael’s advice on creating a systematic decision-making process in your organization to improve outcomes
  • The two most important elements of any decision-making process
  • How to train your intuition to be one of your most powerful assets instead of a dangerous liability
  • The three tests Michael uses in his company to determine the health and financial stability of his environment
  • Why “algorithm aversion” is creating such headaches in many organizations and how to help your teams overcome it, so you can make more rapid progress
  • The most significant books that he’s read since we last spoke, his reading habits, and the strategies he uses to get the most out of every book
  • The importance of sleep in Michael's life to make sure his body and mind are running at peak efficiency
  • His greatest failures and what he learned from them
  • How Michael and his wife raised their kids and the unique parenting style they adopted
  • How Michael defines happiness and the decisions he makes to maximize the joy in his life

Any one of those insights alone is worth a listen, so I think you’re really going to enjoy this interview.



An edited transcript is available to members of our learning community or for purchase separately ($9).

More Episodes

A complete list of all of our podcast episodes.


Members can discuss this post on the Learning Community Forum

What You Can Learn from Fighter Pilots About Making Fast and Accurate Decisions

“What is strategy? A mental tapestry of changing intentions for harmonizing and focusing our efforts as a basis for realizing some aim or purpose in an unfolding and often unforeseen world of many bewildering events and many contending interests.”

— John Boyd

What techniques do people use in the most extreme situations to make decisions? What can we learn from them to help us make more rational and quick decisions?

If these techniques work in the most drastic scenarios, they have a good chance of working for us. This is why military mental models can have such wide, useful applications outside their original context.

Military mental models are constantly tested in the laboratory of conflict. If they weren’t agile, versatile, and effective, they would quickly be replaced by others. Military leaders and strategists invest a great deal of time in developing and teaching decision-making processes.

One strategy that I’ve found repeatedly effective is the OODA loop.

Developed by strategist and U.S. Air Force Colonel John Boyd, the OODA loop is a practical concept designed to be the foundation of rational thinking in confusing or chaotic situations. OODA stands for Observe, Orient, Decide, and Act.

Boyd developed the strategy for fighter pilots. However, like all good mental models, it can be extended into other fields. We used it at the intelligence agency where I once worked. I know lawyers, police officers, doctors, businesspeople, politicians, athletes, and coaches who use it.

Fighter pilots have to work fast. Taking a second too long to make a decision can cost them their lives. As anyone who has ever watched Top Gun knows, pilots have a lot of decisions and processes to juggle when they’re in dogfights (close-range aerial battles). Pilots move at high speeds and need to avoid enemies while tracking them and keeping a contextual knowledge of objectives, terrains, fuel, and other key variables.

Dogfights are nasty. I’ve talked to pilots who’ve been in them. They want the fights to be over as quickly as possible. The longer they go, the higher the chances that something goes wrong. Pilots need to rely on their creativity and decision-making abilities to survive. There is no game plan to follow, no schedule or to-do list. There is only the present moment when everything hangs in the balance.

Forty-Second Boyd

Boyd was no armchair strategist. He developed his ideas during his own time as a fighter pilot. He earned the nickname “Forty-Second Boyd” for his ability to win any fight in under 40 seconds.

In a tribute written after Boyd’s death, General C.C. Krulak described him as “a towering intellect who made unsurpassed contributions to the American art of war. Indeed, he was one of the central architects of the reform of military thought…. From John Boyd we learned about competitive decision making on the battlefield—compressing time, using time as an ally.”

Reflecting Robert Greene’s maxim that everything is material, Boyd spent his career observing people and organizations. How do they adapt to changeable environments in conflicts, business, and other situations?

Over time, he deduced that these situations are characterized by uncertainty. Dogmatic, rigid theories are unsuitable for chaotic situations. Rather than trying to rise through the military ranks, Boyd focused on using his position as colonel to compose a theory of the universal logic of war.

Boyd was known to ask his mentees the poignant question, “Do you want to be someone, or do you want to do something?” In his own life, he certainly focused on the latter path and, as a result, left us ideas with tangible value. The OODA loop is just one of many.

The Four Parts of the OODA Loop

Let's break down the four parts of the OODA loop and see how they fit together.

OODA stands for Observe, Orient, Decide, Act. The description of it as a loop is crucial: Boyd intended the four steps to be repeated again and again until a conflict finishes. Although most depictions of the OODA loop are superficial, there is a lot of depth to it. Using it should be simple, but it rests on a rich basis of interdisciplinary knowledge.

1. Observe

The first step in the OODA loop is to observe. At this stage, the main focus is to build a comprehensive picture of the situation with as much accuracy as possible.

A fighter pilot needs to consider: What is immediately affecting me? What is affecting my opponent? What could affect us later on? Can I make any predictions, and how accurate were my prior ones? A pilot's environment changes rapidly, so these observations need to be broad and fluid.

And information alone is not enough. The observation stage requires awareness of the overarching meaning of the information. It also necessitates separating the information which is relevant for a particular decision from that which is not. You have to add context to the variables.

The observation stage is vital in decision-making processes.

For example, faced with a patient in an emergency ward, a doctor needs to start by gathering as much foundational knowledge as possible. That might be the patient's blood pressure, pulse, age, underlying health conditions, and reason for admission. At the same time, the doctor needs to discard irrelevant information and figure out which facts are relevant for this precise situation. Only by putting the pieces together can she make a fast decision about the best way to treat the patient. The more experienced a doctor is, the more factors she is able to take into account, including subtle ones, such as a patient's speech patterns, his body language, and the absence (rather than presence) of certain signs.

2. Orient

Orientation, the second stage of the OODA loop, is frequently misunderstood or skipped because it is less intuitive than the other stages. Boyd referred to it as the schwerpunkt, a German term which loosely translates to “the main emphasis.” In this context, to orient is to recognize the barriers that might interfere with the other parts of the process.

Without an awareness of these barriers, the subsequent decision cannot be a fully rational one. Orienting is all about connecting with reality, not with a false version of events filtered through the lens of cognitive biases and shortcuts.

“Orientation isn't just a state you're in; it's a process. You're always orienting.”

— John Boyd

Including this step, rather than jumping straight to making a decision, gives us an edge over the competition. Even if we are at a disadvantage to begin with, having fewer resources or less information, Boyd maintained that the Orient step ensures that we can outsmart an opponent.

For Western nations, cyber-crime is a huge threat — mostly because for the first time ever, they can’t outsmart, outspend, or out-resource the competition. Boyd has some lessons for them.

Boyd believed that four main barriers prevent us from seeing information in an unbiased manner:

  1. Our cultural traditions
  2. Our genetic heritage
  3. Our ability to analyze and synthesize
  4. The influx of new information — it is hard to make sense of observations when the situation keeps changing

Boyd was one of the first people to discuss the importance of building a toolbox of mental models, prior to Charlie Munger’s popularization of the concept among investors.

Boyd believed in “destructive deduction” — taking note of incorrect assumptions and biases and then replacing them with fundamental, versatile mental models. Only then can we begin to garner a reality-oriented picture of the situation, which will inform subsequent decisions.

Boyd employed a brilliant metaphor for this — a snowmobile. In one talk, he described how a snowmobile comprises elements of different devices. The caterpillar treads of a tank, skis, the outboard motor of a boat, the handlebars of a bike — each of those elements is useless alone, but combining them creates a functional vehicle.

As Boyd put it: “A loser is someone (individual or group) who cannot build snowmobiles when facing uncertainty and unpredictable change; whereas a winner is someone (individual or group) who can build snowmobiles, and employ them in an appropriate fashion, when facing uncertainty and unpredictable change.”

To orient ourselves, we have to build a metaphorical snowmobile by combining practical concepts from different disciplines.

Although Boyd is regarded as a military strategist, he didn’t confine himself to any particular discipline. His theories encompass ideas drawn from various disciplines, including mathematical logic, biology, psychology, thermodynamics, game theory, anthropology, and physics. Boyd described his approach as a “scheme of pulling things apart (analysis) and putting them back together (synthesis) in new combinations to find how apparently unrelated ideas and actions can be related to one another.”

3. Decide

No surprises here. Having gathered information and oriented ourselves, we have to make an informed decision. The previous two steps should have generated a plethora of ideas, so this is the point where we choose the most relevant option.

Boyd cautioned against first-conclusion bias, explaining that we cannot keep making the same decision again and again. This part of the loop needs to be flexible and open to Bayesian updating. In some of his notes, Boyd described this step as the hypothesis stage. The implication is that we should test the decisions we make at this point in the loop, spotting their flaws and including any issues in future observation stages.

4. Act

While technically a decision-making process, the OODA loop is all about action. The ability to act upon rational decisions is a serious advantage.

The other steps are mere precursors. Once the decision is made, it is time to act on it. Also known as the test stage, this is when we experiment to see how good our decision was. Did we observe the right information? Did we use the best possible mental models? Did we get swayed by biases and other barriers? Can we disprove the prior hypothesis? Whatever the outcome, we then cycle back to the first part of the loop and begin observing again.
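To make the cycle concrete, here is a minimal sketch in Python. Boyd never specified an algorithm, so everything here, including the bisection toy and the function name, is purely illustrative; the point is only that the outcome of Act feeds the next Observe.

```python
# Illustrative only: Boyd specified no code. This toy frames a bisection
# search as an OODA cycle; each pass feeds the next observation.

def ooda_search(target, low=0, high=100):
    guess = None
    while guess != target:
        bounds = (low, high)           # Observe: build a picture of the situation
        midpoint = sum(bounds) // 2    # Orient: interpret it through a model
        guess = midpoint               # Decide: commit to a hypothesis
        if guess < target:             # Act: test the hypothesis against reality,
            low = guess + 1            # then cycle the outcome back to Observe
        elif guess > target:
            high = guess - 1
    return guess

print(ooda_search(42))  # converges in a handful of loops
```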

Why the OODA Loop Works

The OODA loop has four key benefits.

1. Speed

Fighter pilots must make many decisions in fast succession. They don’t have time to list pros and cons or to consider every available avenue. Once the OODA loop becomes part of their mental toolboxes, they should be able to cycle through it in a matter of seconds.

Speed is a crucial element of military decision making. Using the OODA loop in everyday life, we probably have a little more time than a fighter pilot would. But Boyd emphasized the value of being decisive, taking initiative, and staying autonomous. These are universal assets and apply to many situations.

Take the example of modern growth hacker marketing.

“The ability to operate at a faster tempo or rhythm than an adversary enables one to fold the adversary back inside himself so that he can neither appreciate nor keep up with what is going on. He will become disoriented and confused…”

— John Boyd

The key advantage growth hackers have over traditional marketers is speed. They observe (look at analytics, survey customers, run A/B tests, etc.) and orient themselves (consider vanity versus meaningful metrics, assess interpretations, and ground themselves in the reality of a market) before making a decision and then acting. The final step serves to test their ideas, and they have the agility to switch tactics if the desired outcome is not achieved.

Meanwhile, traditional marketers are often trapped in lengthy campaigns which do not offer much in the way of useful metrics. Growth hackers can adapt and change their techniques every single day depending on what works. They are not confined by stagnant ideas about what worked before.

So, although they may have a small budget and fewer people to assist them, their speed gives them an advantage. Just as Boyd could defeat any opponent in under 40 seconds (even starting at a position of disadvantage), growth hackers can grow companies and sell products at extraordinary rates, starting from scratch.

2. Comfort With Uncertainty

Uncertainty does not always equate to risk. A fighter pilot is in a precarious situation, where there will be gaps in their knowledge. They cannot read the mind of the opponent and might have incomplete information about the weather conditions and surrounding environment. They can, however, take into account key factors such as the opponent's nationality, the type of airplane they are flying, and what their maneuvers reveal about their intentions and level of training.

If the opponent uses an unexpected strategy, is equipped with a new type of weapon or airplane, or behaves in an irrational, ideologically motivated way, the pilot must accept the accompanying uncertainty. However, Boyd stressed that uncertainty is irrelevant if we have the right filters in place.

If we don’t, we can end up stuck at the observation stage, unable to decide or act. But if we do have the right filters, we can factor uncertainty into the observation stage. We can leave a margin of error. We can recognize the elements which are within our control and those which are not.

Three key principles supported Boyd’s ideas. In his presentations, he referred to Gödel’s Proof, Heisenberg’s Uncertainty Principle, and the Second Law of Thermodynamics.

Gödel’s theorems indicate that any mental model we have of reality will omit certain information and that Bayesian updating must be used to bring it in line with reality. Our understanding of science illustrates this.

In the past, people’s conception of reality missed crucial concepts such as criticality, relativity, the laws of thermodynamics, and gravity. As we have discovered these concepts, we have updated our view of the world. Yet we would be foolish to think that we now know everything and our worldview is complete. Other key principles remain undiscovered. The same goes for fighter pilots — their understanding of what is going on during a battle will always have gaps. Identifying this fundamental uncertainty gives it less power over us.

The second concept Boyd referred to is Heisenberg’s Uncertainty Principle. In its simplest form, this principle describes a limit on the precision with which certain pairs of physical properties, such as position and momentum, can be known at the same time: the more precisely we pin down one, the less precisely we can know the other. Although Heisenberg’s Uncertainty Principle was initially formulated for particles, Boyd’s ability to combine disciplines led him to apply it to planes. If a pilot focuses too hard on where an enemy plane is, they will lose track of where it is going, and vice versa; trying harder to track both variables at once only produces more inaccuracy. By analogy, Heisenberg’s Uncertainty Principle applies to myriad areas where excessive observation proves detrimental. Reality is imprecise.

Finally, Boyd made use of the Second Law of Thermodynamics. In a closed system, entropy always increases and everything moves towards chaos. Energy spreads out and becomes disorganized.

Although Boyd’s notes do not specify the exact applications, his inference appears to be that a fighter pilot must be an open system or they will fail. They must draw “energy” (information) from outside themselves or the situation will become chaotic. They should also aim to cut their opponent off, forcing them to become a closed system. Drawing on his studies, Boyd developed his Energy Maneuverability theory, which recast maneuvers in terms of the energy they used.

“Let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt.”

— Sun Tzu

3. Unpredictability

Using the OODA loop should enable us to act faster than an opponent, thereby seeming unpredictable. While they are still deciding what to do, we have already acted. This resets their own loop, moving them back to the observation stage. Keep doing this, and they are either rendered immobile or forced to act without making a considered decision. So, they start making mistakes, which can be exploited.

Boyd recommended making unpredictable changes in speed and direction, and wrote, “we should operate at a faster tempo than our adversaries or inside our adversaries[’] time scales. … Such activity will make us appear ambiguous (non predictable) [and] thereby generate confusion and disorder among our adversaries.” He even helped design planes better equipped to make those unpredictable changes.

For the same reason that you can’t run the same play 70 times in a football game, rigid military strategies often become useless after a few uses, or even one iteration, as opponents learn to recognize and counter them. The OODA loop can be endlessly used because it is a formless strategy, unconnected to any particular maneuvers.

We know that Boyd was influenced by Sun Tzu (he owned seven thoroughly annotated copies of The Art of War), and he drew many ideas from the ancient strategist. Sun Tzu depicts war as a game of deception where the best strategy is that which an opponent cannot pre-empt. Apple has long used this strategy as a key part of their product launches. Meticulously planned, their launches are shrouded in secrecy and the goal is for no one outside the company to see a product prior to the release.

When information has been leaked, the company has taken serious legal action as well as firing associated employees. We are never sure what Apple will put out next (just search for “Apple product launch 2017” and you will see endless speculation based on few facts). As a consequence, Apple can stay ahead of their rivals.

Once a product launches, rival companies scramble to emulate it. But by the time their technology is ready for release, Apple is on to the next thing and has taken most of the market share. And although Apple's launches are inexpensive compared with the drawn-out campaigns other companies run, their unpredictability makes us pay attention. Stock prices rise the day after, tickets to launches sell out in seconds, and the media reports launches as if they were news events, not marketing events.

4. Testing

A notable omission in Boyd’s work is any sort of specific instructions for how to act or which decisions to make. This is presumably due to his respect for testing. He believed that ideas should be tested and then, if necessary, discarded.

“We can't just look at our own personal experiences or use the same mental recipes over and over again; we've got to look at other disciplines and activities and relate or connect them to what we know from our experiences and the strategic world we live in.”

— John Boyd

Boyd’s OODA is a feedback loop, with the outcome of actions leading back to observations. Even in Aerial Attack Study, his comprehensive manual of maneuvers, Boyd did not describe any particular one as superior. He encouraged pilots to have the widest repertoire possible so they could select the best option in response to the maneuvers of an opponent.

We can incorporate testing into our decision-making processes by keeping track of outcomes in decision journals. Boyd’s notes indicate that he may have done just that during his time as a fighter pilot, building up the knowledge that went on to form Aerial Attack Study. Rather than guessing how our decisions lead to certain outcomes, we can get a clear picture to aid us in future orientation stages. Over time, our decision journals will reveal what works and what doesn’t.

Applying the OODA Loop

In sports, there is an adage that carries over to business quite well: “Speed kills.” If you are able to be nimble, able to assess the ever-changing environment and adapt quickly, you'll always carry the advantage over your opponent.

Start applying the OODA loop to your day-to-day decisions and watch what happens. You'll start to notice things that you would have been oblivious to before. Before jumping to your first conclusion, you'll pause to consider your biases, take in additional information, and be more thoughtful of consequences.

As with anything you practice, if you do it right, the more you do it, the better you'll get. You'll start making better decisions more quickly. You'll see more rapid progress. And as John Boyd would prescribe, you'll start to DO something in your life, not just BE somebody.


Members can discuss this post on the Learning Community Forum

Poker, Speeding Tickets, and Expected Value: Making Decisions in an Uncertain World

“Take the probability of loss times the amount of possible loss from the probability of gain times the amount of possible gain. That is what we're trying to do. It's imperfect but that's what it's all about.”

— Warren Buffett

You can train your brain to think like CEOs, professional poker players, investors, and others who make tricky decisions in an uncertain world by weighing probabilities.

All decisions involve potential tradeoffs and opportunity costs. The question is, how can we make the best possible choices when the factors involved are often so complicated and confusing? How can we determine which statistics and metrics are worth paying attention to? How do we think about averages?

Expected value is one of the simplest tools you can use to think better. While not a natural way of thinking for most people, it instantly turns the world into shades of grey by forcing us to weigh probabilities and outcomes. Once we've mastered it, our decisions become supercharged. We know which risks to take, when to quit projects, when to go all in, and more.

Expected value refers to the long-run average of a random variable.

If you flip a fair coin ten times, the heads-to-tails ratio will probably not be exactly equal. If you flip it one hundred times, the ratio will be closer to 50:50, though again probably not exact. But over a very large number of flips, you can expect heads to come up half the time and tails the other half. The law of large numbers dictates that the observed average will, in the long run, converge on the expected value, even if the first few flips seem lopsided.

The more coin flips, the closer you get to the 50:50 ratio. If you bet a sum of money on a coin flip, the potential winnings on a fair coin have to be bigger than your potential loss to make the expected value positive.
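A quick simulation makes the convergence visible. This is just an illustrative sketch; the flip counts are arbitrary.

```python
import random

# Flip a fair coin n times and report the share of heads.
# The share drifts toward 0.5 as n grows: the law of large numbers at work.
def heads_ratio(n, rng=random.Random(1)):
    return sum(rng.random() < 0.5 for _ in range(n)) / n

for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} flips: {heads_ratio(n):.4f}")  # approaches 0.5000

# For a bet on a fair coin, EV = 0.5 * win - 0.5 * loss,
# so the potential win must exceed the potential loss to be worth taking.
```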

We make many expected-value calculations without even realizing it. If we decide to stay up late and have a few drinks on a Tuesday, we regard the expected value of an enjoyable evening as higher than the expected costs the following day. If we decide to always leave early for appointments, we weigh the expected value of being on time against the frequent instances when we arrive early. When we take on work, we view the expected value in terms of income and other career benefits as higher than the cost in terms of time and/or sanity.

Likewise, anyone who reads a lot knows that most books they choose will have minimal impact on them, while a few books will change their lives and be of tremendous value. Looking at the required time and money as an investment, books have a positive expected value (provided we choose them with care and make use of the lessons they teach).

These decisions might seem obvious. But the math behind them would be somewhat complicated if we tried to sit down and calculate it. Who pulls out a calculator before deciding whether to open a bottle of wine (certainly not me) or walk into a bookstore?

The factors involved are impossible to quantify in a non-subjective manner – like trying to explain how to catch a baseball. We just have a feel for them. This expected-value analysis is unconscious – something to consider if you have ever labeled yourself as “bad at math.”

Parking Tickets

Another example of expected value is parking tickets. Let's say that a parking spot costs $5 and the fine for not paying is $10. If you can expect to be caught only one-third of the time, why pay for parking? The expected cost of skipping the fee is just $3.33: you can park without paying three times and expect only $10 in fines, instead of paying $15 for three parking spots. But if the fine is $100, paying becomes worthwhile whenever the chance of being caught is higher than one in twenty. This is why fines tend to seem excessive: they cover the people who are not caught while giving everyone an incentive to pay.
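The arithmetic above is easy to check directly. A minimal sketch using the same dollar figures:

```python
# Expected cost of skipping the $5 parking fee, using the figures above.
def expected_fine(fine, p_caught):
    return fine * p_caught

print(expected_fine(fine=10, p_caught=1/3))    # ~3.33 < 5.00: skipping "pays"
print(expected_fine(fine=100, p_caught=1/20))  #  5.00: exactly break-even
# With a $100 fine, paying beats skipping once the catch rate tops 1 in 20.
```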

Consider speeding tickets. Here, the expected value can be more abstract, encompassing different factors. If speeding on the way to work saves 15 minutes, then a monthly $100 fine might seem worthwhile to some people. For most of us, though, a weekly fine would mean that speeding has a negative expected value. Add in other disincentives (such as the loss of your driver's license), and speeding is not worth it. So the calculation is not just financial; it takes into account other tradeoffs as well.

The same goes for free samples and trial periods on subscription services. Many companies (such as Graze, Blue Apron, and Amazon Prime) offer generous free trials. How can they afford to do this? Again, it comes down to expected value. The companies know how much the free trials cost them. They also know the probability of someone's paying afterwards and the lifetime value of a customer. Basic math reveals why free trials are profitable. Say that a free trial costs the company $10 per person, and one in ten people then sign up for the paid service, going on to generate $150 in profits. The expected value is positive. If only one in twenty people sign up, the company needs to find a cheaper free trial or scrap it.

Similarly, expected value applies to services that offer a free “lite” version (such as Buffer and Spotify). Doing so costs them a small amount or even nothing. Yet it increases the chance of someone's deciding to pay for the premium version. For the expected value to be positive, the combined cost of the people who never upgrade needs to be lower than the profit from the people who do pay.
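The free-trial and freemium logic is the same one-line calculation. A sketch using the figures from the free-trial example above:

```python
# Expected value of a free trial, per user, with the figures from above.
def trial_ev(trial_cost, conversion_rate, profit_per_convert):
    return conversion_rate * profit_per_convert - trial_cost

print(trial_ev(10, 1/10, 150))  #  5.0 -> positive EV: keep the trial
print(trial_ev(10, 1/20, 150))  # -2.5 -> negative EV: cheapen it or scrap it
```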

Lottery tickets prove to be poor investments when viewed through the lens of expected value. If a ticket costs $1 and there is a possibility of winning $500,000, it might seem as if the ticket's expected value is positive. But it is almost always negative. If one million people purchase a ticket, the expected payout is just $0.50 per ticket; the other 50 cents is the profit that lottery companies make. Only on sporadic occasions, such as unusually large rollover jackpots, does the expected value turn positive, and even then the probability of winning remains minuscule.

Failing to understand expected value leads to common logical fallacies. Getting a grasp of it can help us to overcome many limitations and cognitive biases.

“Constantly thinking in expected value terms requires discipline and is somewhat unnatural. But the leading thinkers and practitioners from somewhat varied fields have converged on the same formula: focus not on the frequency of correctness, but on the magnitude of correctness.”

— Michael Mauboussin

Expected Value and Poker

Let's look at poker. How do professional poker players manage to win large sums of money and hold impressive track records? Well, we can be certain that the answer isn't all luck, although there is some of that involved.

Professional players rely on mathematical mental models that create order among random variables. Although these models are basic, it takes extensive experience to create the fingerspitzengefühl (“fingertips feeling,” or instinct) necessary to use them.

A player needs to make correct calculations every minute of a game with an automaton-like mindset. Emotions and distractions can corrupt the accuracy of the raw math.

In a game of poker, the expected value is the average return on each dollar invested in the pot. Each time a player bets or calls, they are weighing the probability of winning more money than they put in. If a player is risking $100 with a 1-in-5 probability of success, the pot must contain at least $500 for the call to break even: the expected winnings must at least equal the amount the player stands to lose. If the pot contains only $300 at that same 1-in-5 probability, the expected value is negative. The idea is that even if this tactic fails at times, in the long run the player will profit.

Expected-value analysis gives players a clear idea of probabilistic payoffs. Successful poker players can win millions one week, then make nothing or lose money the next, depending on the probability of winning. Even the best possible hands can lose due to simple probability. With each move, players also need to use Bayesian updating to adapt their calculations, because sticking with a prior figure could prove disastrous. Casinos make their fortunes from people who bet on situations with a negative expected value.
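The break-even pot size above falls out of one line of arithmetic. A sketch following the simplified model in the text (it ignores implied odds and future betting rounds):

```python
# Simplified model from the text: pay `stake` to play, win the whole
# pot with probability p_win. EV = p_win * pot - stake.
def call_ev(pot, stake, p_win):
    return p_win * pot - stake

print(call_ev(pot=500, stake=100, p_win=0.2))  #   0.0 -> break-even pot
print(call_ev(pot=300, stake=100, p_win=0.2))  # -40.0 -> negative EV: fold
```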

Expected Value and the Ludic Fallacy

In The Black Swan, Nassim Taleb explains the difference between everyday randomness and randomness in the context of a game or casino. Taleb coined the term “ludic fallacy” to refer to “the misuse of games to model real-life situations.” (Or, as the website logicallyfallacious.com puts it: the assumption that flawless statistical models apply to situations where they don’t actually apply.)

In Taleb’s words, gambling is “sterilized and domesticated uncertainty. In the casino, you know the rules, you can calculate the odds… ‘The casino is the only human venture I know where the probabilities are known, Gaussian (i.e., bell-curve), and almost computable.’ You cannot expect the casino to pay out a million times your bet, or to change the rules abruptly during the game….”

Games like poker have a defined, calculable expected value. That’s because we know the outcomes, the cards, and the math. Most decisions are more complicated. If you decide to bet $100 that it will rain tomorrow, the expected value of the wager is incalculable. The factors involved are too numerous and complex to compute. Relevant factors do exist; you are more likely to win the bet if you live in England than if you live in the Sahara, for example. But that doesn't rule out Black Swan events, nor does it give you the neat probabilities which exist in games.

In short, there is a key distinction between Knightian risks, which are computable because we have enough information to calculate the odds, and Knightian uncertainty, which is non-computable because we don’t have enough information to calculate odds accurately. (This distinction between risk and uncertainty is based on the writings of economist Frank Knight.) Poker falls into the former category. Real life is in the latter. If we take the concept literally and only plan for the expected, we will run into some serious problems.

As Taleb writes in Fooled By Randomness:

Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance. Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem or a brain teaser. Mother nature does not tell you how many holes there are on the roulette table, nor does she deliver problems in a textbook way (in the real world one has to guess the problem more than the solution).

The Monte Carlo Fallacy

Even in the domesticated environment of a casino, probabilistic thinking can go awry if the principle of expected value is forgotten. This famously occurred at the Monte Carlo Casino in 1913, when a group of gamblers lost millions as the roulette wheel landed on black 26 times in a row. That sequence is no more or less likely than any of the other 67,108,863 possible sequences of 26 red-or-black spins, but the people present kept thinking, “It has to be red next time.” They saw the likelihood of the wheel landing on red as higher each time it landed on black. In hindsight, what sense does that make? A roulette wheel does not remember the color it landed on last time. The likelihood of either outcome is the same on every spin, and in fact slightly under 50%, since the green zero belongs to the house. So the potential winnings on each spin need to be more than double the player's bet, or the expected value is negative.
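A simulation illustrates both points at once: the wheel has no memory, and the green zero quietly makes every even-money bet negative EV. This sketch assumes a single-zero European wheel (18 red, 18 black, 1 green).

```python
import random

# Single-zero European wheel: 18 red, 18 black, 1 green. Spins are independent.
rng = random.Random(0)
POCKETS = ["red"] * 18 + ["black"] * 18 + ["green"]

spins = [rng.choice(POCKETS) for _ in range(200_000)]

# P(red | the last 3 spins were black) is no higher than P(red) overall.
after_streak = [spins[i] for i in range(3, len(spins))
                if spins[i-3:i] == ["black"] * 3]
print(sum(s == "red" for s in after_streak) / len(after_streak))  # ~18/37 = 0.486

# Even-money payout on a sub-50% event: win 1 with p 18/37, lose 1 with p 19/37.
print(18/37 - 19/37)  # ~ -0.027 per unit bet: the house edge
```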

“A lot of people start out with a 400-horsepower motor but only get 100 horsepower of output. It's way better to have a 200-horsepower motor and get it all into output.”

— Warren Buffett

Given all the casinos and roulette tables in the world, the Monte Carlo incident had to happen at some point. Perhaps some day a roulette wheel will land on red 26 times in a row and the incident will repeat. The gamblers involved did not consider the negative expected value of each bet they made. We know this mistake as the Monte Carlo fallacy (or the “gambler's fallacy” or “the fallacy of the maturity of chances”) – the assumption that prior independent outcomes influence future outcomes that are actually also independent. In other words, people assume that “a random process becomes less random and more predictable as it is repeated.”1

It's a common error. People who play the lottery for years without success think that their chance of winning rises with each ticket, but the expected value is unchanged between iterations. Amos Tversky and Daniel Kahneman consider this kind of thinking a component of the representativeness heuristic, stating that the more we believe we control random events, the more likely we are to succumb to the Monte Carlo fallacy.

Magnitude over Frequency

Steven Crist, in Bet with the Best, offers an example of how an expected-value mindset can be applied. Consider a hypothetical race with four horses. If you’re trying to maximize return on investment, you might want to avoid the horse with a high likelihood of winning. Crist writes,

The point of this exercise is to illustrate that even a horse with a very high likelihood of winning can be either a very good or a very bad bet, and that the difference between the two is determined by only one thing: the odds.2

Everything comes down to payoffs. A horse with a 50% chance of winning might be a good bet, but it depends on the payoff. The same holds for a 100-to-1 longshot. It's not the frequency of winning but the magnitude of the win that matters.
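Crist's point reduces to a single comparison. A sketch with invented odds (not Crist's actual table):

```python
# EV per $1 bet: odds of "4-to-1" pay $4 profit on a winning $1 stake.
def bet_ev(p_win, odds):
    return p_win * odds - (1 - p_win)

print(bet_ev(p_win=0.50, odds=0.8))  # -0.10: a likely winner, but a bad bet
print(bet_ev(p_win=0.50, odds=1.2))  # +0.10: same horse, better odds, good bet
print(bet_ev(p_win=0.01, odds=150))  # +0.51: a longshot can still be +EV
```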

Error Rates, Averages, and Variability

When Bill Gates walks into a room with 20 people, the average wealth per person in the room quickly goes beyond a billion dollars. It doesn't matter if the 20 people are wealthy or not; Gates's wealth is off the charts and distorts the results.

An old joke tells of the man who drowns in a river which is, on average, three feet deep. If you're deciding to cross a river and can't swim, the range of depths matters a heck of a lot more than the average depth.
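Both anecdotes describe the same failure: a single average hides the shape of the distribution. A quick illustration with invented figures (Gates's wealth rounded to $100 billion):

```python
from statistics import mean, median

# 20 people worth $1M each, then one ~$100B outlier walks in.
room = [1_000_000] * 20 + [100_000_000_000]
print(f"mean:   ${mean(room):,.0f}")    # ~$4.8 billion, dragged up by one person
print(f"median: ${median(room):,.0f}")  # $1,000,000, the typical person

# A river "three feet deep on average" can still drown you:
depths = [1, 1, 2, 9, 2, 1, 5]          # feet: mean is 3.0, but the max is 9
print(mean(depths), max(depths))
```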

The Use of Expected Value: How to Make Decisions in an Uncertain World

Thinking in terms of expected value requires discipline and practice. And yet, the top performers in almost any field think in terms of probabilities. While this isn't natural for most of us, once you implement the discipline of the process, you'll see the quality of your thinking and decisions improve.

In poker, players can predict the likelihood of a particular outcome. In the vast majority of cases, we cannot predict the future with anything approaching accuracy. So what use is expected value outside gambling? It turns out, quite a lot. Recognizing how expected value works puts any of us at an advantage. We can mentally leap through various scenarios and understand how they affect outcomes.

Expected value takes into account wild deviations. Averages are useful, but they have limits, as the man who tried to cross the river discovered. When making predictions about the future, we need to consider the range of outcomes. The greater the possible variance from the average, the more our decisions should account for a wider range of outcomes.

There's a saying in the design world: when you design for the average, you design for no one. Large deviations can mean more risk, which is not always a bad thing. So expected-value calculations take the deviations into account. If we can make decisions with a positive expected value and the lowest possible risk, we are open to large benefits.

Investors use expected value to make decisions. Choices with a positive expected value and minimal risk of losing money are wise. Even if some losses occur, the net gain should be positive over time. In investing, unlike in poker, the potential losses and gains cannot be calculated in exact terms. Expected-value analysis reveals opportunities that people who just use probabilistic thinking often miss. A trade with a low probability of success can still carry a high expected value. That's why it is crucial to have a large number of robust mental models. As useful as probabilistic thinking can be, it has far more utility when combined with expected value.

Understanding expected value is also an effective way to overcome the sunk costs fallacy. Many of our decisions are based on non-recoverable past investments of time, money, or resources. These investments are irrelevant; we can't recover them, so we shouldn't factor them into new decisions. Sunk costs push us toward situations with a negative expected value. For example, consider a company that has invested considerable time and money in the development of a new product. As the launch date nears, they receive irrefutable evidence that the product will be a failure. Perhaps research shows that customers are uninterested, or a competitor launches a similar, better product. The sunk costs fallacy would lead them to release their product anyway. Even if they take a loss. Even if it damages their reputation. After all, why waste the money they spent developing the product? Here's why: because the product has a negative expected value, which will only worsen their losses. An escalation of commitment will only increase sunk costs.
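Stated as arithmetic: the development money appears nowhere in the go-forward comparison. A sketch with invented figures:

```python
# Only future cash flows enter the launch decision; sunk cost is excluded.
def launch_ev(p_success, payoff, loss_if_flop):
    return p_success * payoff - (1 - p_success) * loss_if_flop

sunk = 2_000_000                         # spent already; gone either way
print(launch_ev(0.1, 500_000, 400_000))  # -310,000.0 -> shelve the product
# The answer is the same whether `sunk` is $2 million or $200 million.
```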

When we try to justify a prior expense, calculating the expected value can prevent us from worsening the situation. The sunk costs fallacy robs us of our most precious resource: time. Each day we are faced with the choice between continuing and quitting numerous endeavors. Expected-value analysis reveals where we should continue, and where we should cut our losses and move on to a better use of time and resources. It's an efficient way to work smarter, and not engage in unnecessary projects.

Thinking in terms of expected value will make you feel awkward when you first try it. That's the hardest thing about it; you need to practice it a while before it becomes second nature. Once you get the hang of it, you'll see that it's valuable in almost every decision. That's why the most rational people in the world constantly think about expected value. They've uncovered the key insight that the magnitude of correctness matters more than its frequency. And yet, human nature is such that we're happier when we're frequently right.

1. From https://rationalwiki.org/wiki/Gambler’s_fallacy, accessed on 11 January 2018.

2. Steven Crist, “Crist on Value,” in Andrew Beyer et al., Bet with the Best: All New Strategies From America’s Leading Handicappers (New York: Daily Racing Form Press, 2001), 63-64.

All Models Are Wrong

How is your journey towards understanding Farnam Street’s latticework of mental models going? Is it proving useful? Changing your view of the world? If the answer is that it’s going well, that’s good. There’s just one tiny hitch.

All models are wrong.

Yep. It's the truth. However, there is another part to that statement:

All models are wrong, but some are useful.

Those words come from the British statistician George Box. In a groundbreaking 1976 paper, Box revealed the fallacy of our desire to categorize and organize the world. We create models (a term with many applications), only to confuse them with reality.

Box also stated:

Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.

What Exactly Is A Model?

First, we should understand precisely what a model is.

The dictionary definition states a model is ‘a representation, generally in miniature, to show the construction or appearance of something’ or ‘a simplified description, especially a mathematical one, of a system or process, to assist calculations and predictions.’

For our purposes here, we are better served by the second definition. A model is a simplification which fosters understanding.

Think of an architectural model. It is typically a small-scale model of a building, made before the real building is built. Its purpose is to show what the building will look like and to help the people working on the project develop a clear picture of the overall feel. In the iconic scene from Zoolander, Derek (played by Ben Stiller) looks at the architectural model of his proposed ‘school for kids who can’t read good’ and shouts, “What is this? A center for ants??”

That scene illustrates the wrong way to understand models: Too literally.

Why We Use Models, and Why They Work

At Farnam Street, we believe in using models to build a massive but finite amount of fundamental, invariant knowledge about how the world really works. Applying this knowledge is the key to making good decisions and avoiding stupidity.

“Scientists generally agree that no theory is 100 percent correct. Thus, the real test of knowledge is not truth, but utility. Science gives us power. The more useful that power, the better the science.”

— Yuval Noah Harari

Time-tested models allow us to understand how things work in the real world. And understanding how things work prepares us to make better decisions without expending too much mental energy in the process.

Instead of relying on fickle and specialized facts, we can learn versatile concepts. The mental models we cover are intended to be widely applicable.

It's crucial for us to understand as many mental models as possible. As the adage goes, a little knowledge can be dangerous and creates more problems than total ignorance. No single model is universally applicable – we find exceptions for nearly everything. Even hardcore physics has not been totally solved.

“The basic trouble, you see, is that people think that “right” and “wrong” are absolute; that everything that isn't perfectly and completely right is totally and equally wrong.”

— Isaac Asimov

Take a look at almost any comment section on the internet and you are guaranteed to find at least one pedant raging about a minor perceived inaccuracy, throwing out the good with the bad. While ignorance and misinformation are certainly not laudable, neither is an obsession with perfection.

Like heuristics, models earn their keep by being helpful most of the time across a wide range of situations, not by being perfectly reliable in a narrow set of them.

Models can assist us in making predictions and forecasting the future. Forecasts are never guaranteed, yet they provide us with a degree of preparedness and comprehension of the future. For example, a weather forecast which claims it will rain today may get that wrong. Still, it's correct often enough to enable us to plan appropriately and bring an umbrella.

Mental Models and Minimum Viable Products

Think of mental models as minimum viable products.

Sure, all of them can be improved. But the only way that can happen is if we try them out, educate ourselves and collectively refine them.

We can apply one of our mental models, Occam’s razor, to this. Occam’s razor states that the simplest solution is usually correct. In the same way, our simplest mental models tend to be the most useful. This is because there is minimal room for errors and misapplication.

“The world doesn’t have the luxury of waiting for complete answers before it takes action.”

— Daniel Gilbert

Your kitchen knives are not as sharp as they could be. Does that matter as long as they still cut vegetables? Your bed is not as comfortable as it could be. Does that matter if you can still get a good night’s sleep in it? Your internet is not as fast as it could be. Does that matter as long as you can load this article? Arguably not. Our world runs on the functional, not the perfect. This is what a mental model is – a functional tool. A tool which maybe could be a bit sharper or easier to use, but still does the job.

The statistician David Hand made the following statement in 2014:

In general, when building statistical models, we must not forget that the aim is to understand something about the real world. Or predict, choose an action, make a decision, summarize evidence, and so on, but always about the real world, not an abstract mathematical world: our models are not the reality.

Decades earlier, in 1960, Georg Rasch had made much the same point:

When you construct a model you leave out all the details which you, with the knowledge at your disposal, consider inessential…. Models should not be true, but it is important that they are applicable, and whether they are applicable for any given purpose must, of course, be investigated. This also means that a model is never accepted finally, only on trial.

Imagine a world where physics-like precision is prized over usefulness.

We would lack medical care because a medicine or procedure can never be perfect. In a world like this, we would possess little scientific knowledge, because research can never be 100% accurate. We would have no art because a work can never be completed. We would have no technology because there are always little flaws which can be ironed out.

“A model is a simplification or approximation of reality and hence will not reflect all of reality … While a model can never be “truth,” a model might be ranked from very useful, to useful, to somewhat useful to, finally, essentially useless.”

— Ken Burnham and David Anderson

In short, we would have nothing. Everything around us is imperfect and uncertain. Some things are more imperfect than others, but issues are always there. Over time, incremental improvements happen through unending experimentation and research.

The Map is Not the Territory

As we know, the map is not the territory. A map can be seen as a symbol or index of a place, not an icon.

When we look at a map of Paris, we know it is a representation of the actual city. There are bound to be flaws: streets which have been renamed, demolished buildings, perhaps a new Metro line. Even so, the map will help us find our way. It is far more useful to have a map showing the way from Notre Dame to Gare du Nord (a tool) than to know how many meters apart they are (a piece of trivia).

Someone who has spent a lot of time studying a map will be able to use it with greater ease, just as with a mental model. Someone who lives in Paris will find the map easier to understand than a tourist will, just as someone who uses a mental model in their day-to-day life will apply it better than a novice. As long as there are no major errors, we can consider the map useful, even if it is by no means a reflection of reality. Gregory Bateson writes in Steps to an Ecology of Mind that the purpose of a map is not to be true, but to have a structure which represents truth within the current context.

“A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.”

— Alfred Korzybski

Physical maps generally become more accurate as time passes. Not long ago, they often included countries which didn’t exist, omitted some which did, portrayed the world as flat or fudged distances. Nowadays, our maps have come a long way.

The same goes for mental models – they are always evolving, being revised – never really achieving perfection. Certainly, over time, the best models are revised only slightly, but we must never consider our knowledge “set”.

Another consideration in using models is what they're being used for.

Many mental models (e.g., entropy, critical mass, and activation energy) are based on scientific and mathematical concepts. A person who works in those areas will obviously need a deeper understanding of them than someone who wants to learn to think better when making investment decisions. They will need a different map, a more detailed one, showing elements the rest of us have no need for.

“A model which took account of all the variation of reality would be of no more use than a map at the scale of one to one.”

— Joan Robinson

In Partial Enchantments of the Quixote, Jorge Luis Borges provides an even more interesting analysis of the confusion between models and reality:

Let us imagine that a portion of the soil of England has been leveled off perfectly and that on it a cartographer traces a map of England. The job is perfect; there is no detail of the soil of England, no matter how minute, that is not registered on the map; everything has there its correspondence. This map, in such a case, should contain a map of the map, which should contain a map of the map of the map, and so on to infinity. Why does it disturb us that the map be included in the map and the thousand and one nights in the book of the Thousand and One Nights? Why does it disturb us that Don Quixote be a reader of the Quixote and Hamlet a spectator of Hamlet? I believe I have found the reason: these inversions suggest that if the characters of a fictional work can be readers or spectators, we, its readers or spectators, can be fictions.

How Do We Know If A Model Is Useful?

This is a tricky question to answer. When looking at any model, it is helpful to ask some of the following questions:

  • How long has this model been around? As a general rule, mental models which have been around for a long time (such as Occam’s razor) will have been subjected to a great deal of scrutiny. Time is an excellent curator, trimming away inefficient ideas. A mental model which is new may not be particularly refined or versatile. Many of our mental models originate from Ancient Greece and Rome, meaning they have to be functional to have survived this long.
  • Is it a representation of reality? In other words, does it reflect the real world? Or is it based on abstractions?
  • Does this model apply to multiple areas? The more elastic a model is, the more valuable it is to learn about. (Of course, be careful not to apply the model where it doesn't belong. Mind Feynman: “You must not fool yourself, and you're the easiest person to fool.”)
  • How did this model originate? Many mental models arise from scientific or mathematical concepts. The more fundamental the domain, the more likely the model is to be true and lasting.
  • Is it based on first principles? A first principle is a foundational concept which cannot be deduced from any other concept and must be known.
  • Does it require infinite regress? Infinite regress refers to something which is justified by principles which themselves require justification by other principles. A model based on infinite regress is likely to require extensive knowledge of a particular topic and to have minimal real-world application.

When using any mental model, we must avoid becoming too rigid. There are exceptions to all of them, and situations in which they are not applicable.

Think of the latticework as a toolkit. It pays to do the work up front to stock that toolbox with as many models as possible, at a deep level. If you only have one or two, you're likely to attempt to use them in places that don't make sense. If you've absorbed them only lightly, you will not be able to use them when the time is at hand.

If, on the other hand, you have a toolbox full of them and they're sunk in deep, you're more likely to pull out the best ones for the job exactly when they are needed.

Too many people are caught up wasting time on physics-like precision in areas of practical life that do not have such precision available. A better approach is to ask “Is it useful?” and, if yes, “To what extent?”

Mental models are a way of thinking about the world that prepares us to make good decisions in the first place.

Rory Sutherland on The Psychology of Advertising, Complex Evolved Systems, Reading, Decision Making

“There is a huge danger in looking at life as an optimization problem.”


Rory Sutherland (@rorysutherland) is the Vice Chairman of Ogilvy & Mather Group, which is one of the largest advertising companies in the world.

Rory started the behavioral insights team and spends his days applying behavioral economics and evolutionary psychology to problems that conventional advertising agencies haven't been able to solve.

In this wide-ranging interview we talk about: how advertising agencies are solving airport security problems, what Silicon Valley misses, how to mess with self-driving cars, reading habits, decision making, the intersection of advertising and psychology, and so much more.

This interview was recorded live in London, England.

Enjoy this amazing conversation.

“The problem with economics is not only that it is wrong but that it's incredibly creatively limiting.”


A lot of people like to take notes while listening. A transcription of this conversation is available to members of our learning community, or you can purchase one separately.


If you liked this, check out all the episodes of The Knowledge Project.

Get Smart: Three Ways of Thinking to Make Better Decisions and Achieve Results

“Give me six hours to chop down a tree and I will spend the first four sharpening the axe.”
— Abraham Lincoln


Your ability to think clearly determines the decisions you make and the actions you take.

In Get Smart!: How to Think and Act Like the Most Successful and Highest-Paid People in Every Field, author Brian Tracy presents ten different ways of thinking that enable better decisions. Better decisions free up your time and improve results. At Farnam Street, we believe that a multidisciplinary approach based on mental models allows you to gauge situations from different perspectives and profoundly affect the quality of decisions you make.

Most of us slip into a comfort zone of what Tracy calls “easy thinking and decision-making.” We use far less than our full cognitive capacity because we become lazy and jump to simple conclusions.

This isn't about being faster. I disagree with the belief that decisions should be, first and foremost, fast and efficient. A better approach is to be effective. If it takes longer to come to a better decision, so be it. In the long run, this will pay for itself over and over with fewer messes, more free time, and less anxiety.

In Get Smart, Tracy does a good job of presenting a series of simple, practical, and powerful ways of examining a situation to improve the odds that you're making the best decision.

Let's take a look at a few of them.

1. Long-Time Perspective Versus Short-Time Perspective

Dr. Edward Banfield of Harvard University studied upward economic mobility for almost 50 years. He wondered why some people and families moved from lower socioeconomic classes to higher ones while others didn't. Many of these people went from labor jobs to riches in a single lifetime. He wanted to know why. His findings are summarized in the controversial book The Unheavenly City, and one simple conclusion has endured: “time perspective” was overwhelmingly the most important factor.

Tracy picks up the thread:

At the lowest socioeconomic level, lower-lower class, the time perspective was often only a few hours, or minutes, such as in the case of the hopeless alcoholic or drug addict, who thinks only about the next drink or dose.

At the highest level, those who were second- or third-generation wealthy, their time perspective was many years, decades, even generations into the future. It turns out that successful people are intensely future oriented. They think about the future most of the time.


The very act of thinking long term sharpens your perspective and dramatically improves the quality of your short-term decision making.

So what should we do about this? Tracy advises:

Resolve today to develop long-time perspective. Become intensely future oriented. Think about the future most of the time. Consider the consequences of your decisions and actions. What is likely to happen? And then what could happen? And then what? Practice self-discipline, self-mastery, and self-control. Be willing to pay the price today in order to enjoy the rewards of a better future tomorrow.

Sounds a lot like Garrett Hardin's three lessons from ecology. But really, what we're talking about here is second-level thinking.

2. Slow Thinking 

“If it is not necessary to decide, it is necessary not to decide.” 
— Lord Acton

I don't know many consistently successful people or organizations that are constantly reacting without thinking. And yet most of us are habitually in reactive mode. We react and respond to what's happening around us with little deliberate thought.

“From the first ring of the alarm clock,” Tracy writes, we are “largely reacting and responding to stimuli from [our] environment.” This feeds our impulses and appetites. “The normal thinking process is almost instantaneous: stimulus, then immediate response, with no time in between.”

The superior thinking process is also triggered by stimulus, but between the stimulus and the response there is a moment or more where you think before you respond. Just like your mother told you, “Count to ten before you respond, especially when you are upset or angry.”

The very act of stopping to think before you say or do anything almost always improves the quality of your ultimate response. It is an indispensable requirement for success.

One of the best things we can do to improve the quality of our thinking is to understand when we gain an advantage from slow thinking and when we don't.

Ask yourself, “Does this decision require fast or slow thinking?”

Shopping for toothpaste is a situation where we derive little benefit from slow thinking. On the other hand, if we're making an acquisition or an investment, we want to be deliberate. Where do we draw the line? A good shortcut is to consider the consequences. Telling your boss he's an idiot when he says something stupid will feel really good in the moment but carry lasting consequences. Don't react.

Pause. Think. Act. 

This sounds easy, but it's not. One habit you can develop is to continually ask “How do we know this is true?” about each piece of information you think is relevant to the decision.
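To make the shortcut concrete, here's a minimal Python sketch of routing a decision to fast or slow thinking. The two questions it asks are hypothetical illustrations, not a formula from Tracy's book:

```python
# A minimal sketch of the "consider the consequences" shortcut.
# The two inputs are hypothetical stand-ins for the questions
# you might actually ask about a decision.

def thinking_mode(consequences_are_lasting: bool, reversible: bool) -> str:
    """Route a decision to fast or slow thinking."""
    # Slow down when the outcome is lasting or hard to undo.
    if consequences_are_lasting or not reversible:
        return "slow: pause, think, then act"
    return "fast: decide and move on"

print(thinking_mode(consequences_are_lasting=False, reversible=True))   # toothpaste
print(thinking_mode(consequences_are_lasting=True, reversible=False))   # an acquisition
```

The code is trivial on purpose: two quick questions about how lasting and how reversible the consequences are will usually tell you which mode a decision deserves.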

3. Informed Thinking Versus Uninformed Thinking

“Beware of endeavouring to be a great man in a hurry.
One such attempt in ten thousand may succeed: these are fearful odds.”
—Benjamin Disraeli


I know a lot of entrepreneurs, and most of them religiously say the same two words: “due diligence.” In fact, a great friend of mine has a 20+ page due diligence checklist. Due diligence means taking the time to make the right decision. You may still be wrong, but it won't be because you rushed. Of course, most of the people who preach due diligence have skin in the game. It's easier to be cavalier (or stupid) when it's heads I win, tails I don't lose much (hello, government).

Harold Geneen, who built ITT into a conglomerate, said, “The most important elements in business are facts. Get the real facts, not the obvious facts or assumed facts or hoped-for facts. Get the real facts. Facts don't lie.”

Heck, use the scientific method. Tracy writes:

Create a hypothesis— a yet-to-be-proven theory. Then seek ways to invalidate this hypothesis, to prove that your idea is wrong. This is what scientists do.

This is exactly the opposite of what most people do. They come up with an idea, and then they seek corroboration and proof that their idea is a good one. They practice “confirmation bias.” They only look for confirmation of the validity of the idea, and they simultaneously reject all input or information that is inconsistent with what they have already decided to believe.

Create a negative or reverse hypothesis. This is the opposite of your initial theory. For example, you are Isaac Newton, and the idea of gravity has just occurred to you. Your initial hypothesis would be that “things fall down.” You then attempt to prove the opposite—“things fall up.”

If you cannot prove the reverse or negative hypothesis of your idea, you can then conclude that your hypothesis is correct.
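To make the habit concrete, here's a minimal Python sketch of hunting for disconfirming evidence. The hypothesis and the survey numbers are hypothetical illustrations, not an example from the book:

```python
# A minimal sketch of testing a hypothesis by searching for counterexamples.

def find_counterexamples(hypothesis, observations):
    """Return every observation that contradicts the hypothesis."""
    return [obs for obs in observations if not hypothesis(obs)]

# Hypothesis: "customers will pay at least $10 for this product"
willingness_to_pay = [12.0, 15.5, 9.0, 20.0]  # hypothetical survey data
counterexamples = find_counterexamples(lambda price: price >= 10, willingness_to_pay)

if counterexamples:
    print(f"Rejected: found counterexample(s) such as {counterexamples[0]}")
else:
    print("No counterexamples yet; the hypothesis survives, for now.")
```

One caveat worth stating plainly: failing to find a counterexample doesn't prove the hypothesis true; it only means you haven't found the flaw yet, which is why the advice below is to keep gathering information.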



One of the reasons why Charles Darwin was such an effective thinker is that he relentlessly sought out disconfirming evidence.

As the psychiatrist Jerry Jampolsky once wrote, “Do you want to be right or do you want to be happy?”

It is amazing how many people come up with a new product or service idea and fall in love with it long before they validate whether enough customers are willing to pay for it.

Keep gathering information until the proper course of action becomes clear, as it eventually will. Check and double-check your facts. Assume nothing on faith. Ask, “How do we know that this is true?”

Finally, search for the hidden flaw, the one weak area in the decision that could prove fatal to the product or business if it occurred. J. Paul Getty, once the richest man in the world, was famous for his approach to making business decisions. He said, “We first determine that it is a good business opportunity. Then we ask, ‘What is the worst possible thing that could happen to us in this business opportunity?’ We then go to work to make sure that the worst possible outcome does not occur.”

Most importantly, never stop gathering information. One of the reasons that Warren Buffett is so successful is that he spends most of his day reading and thinking. I call this the Buffett Formula.



If you're a knowledge worker, decisions are your product. Milton Friedman, the economist, wrote: “The best measure of quality thinking is your ability to accurately predict the consequences of your ideas and subsequent actions.”

If there's a single message to Get Smart, it's one that fits the Farnam Street mold: be conscious. Stop and think before deciding, especially if the consequences are serious. The more ways you have to look at a problem, the more likely you are to understand it. And when you understand a problem, when you really understand it, the solution becomes obvious. A friend of mine has a great expression: “To understand is to know what to do.”

Get Smart goes on to discuss goal- and results-oriented thinking, positive and negative thinking, entrepreneurial vs. corporate thinking, and more.