Tag: Decision Making

Earning Your Stripes: My Conversation with Patrick Collison [The Knowledge Project #32]

Subscribe on iTunes | Stitcher | Spotify | Android | Google Play

On this episode of the Knowledge Project, I chat with Patrick Collison, co-founder and CEO of the leading online payment processing company, Stripe. If you’ve purchased anything online recently, there’s a good chance that Stripe facilitated the transaction.

What is now an organization with over a thousand employees, handling billions of dollars of online purchases every year, began as a small side experiment while Patrick and his brother John were in college.

During our conversation, Patrick shares the details of their unlikely journey and some of the hard-earned wisdom he picked up along the way. I hope you have something handy to write with because the nuggets per minute in this episode are off the charts. Patrick was so open and generous with his responses that I’m really excited for you to hear what he has to say.

Here are just a few of the things we cover:

  • The biggest (and most valuable) mistakes Patrick made in the early days of Stripe and how they helped him get better
  • The characteristics that Patrick looks for in a new hire to fit and contribute to the Stripe company culture
  • What compelled him and his brother to move forward with the early concept of Stripe, even though on paper it was doomed to fail from the start
  • The gaps Patrick saw in the market that dozens of other processing companies were missing — and how he capitalized on them
  • The lessons Patrick learned from scaling Stripe from two employees (him and his brother) to nearly 1,000 today
  • How he evaluates the upsides and potential dangers of speculative positions within the company
  • How his Irish upbringing influenced his ability to argue and disagree without taking offense (and how we can all be a little more “Irish”)
  • The power of finding the right peer group in your social and professional circles and how impactful and influential it can be in determining where you end up.
  • The 4 ways Patrick has modified his decision-making process over the last 5 years and how it’s helped him develop as a person and as a business leader (this part alone is worth the listen)
  • Patrick’s unique approach to books and how he chooses what he’s going to spend his time reading
  • …life in Silicon Valley, Baumol’s cost disease, and so, so much more.

Patrick truly is one of the warmest, most humble, and most down-to-earth people I’ve had the pleasure to speak with, and I thoroughly enjoyed our conversation. I hope you will too!

Listen

Transcript

Normally only members of our learning community have access to transcripts; however, we pick one or two a year to make available to everyone. Here's the complete transcript of the interview with Patrick.

If you liked this, check out other episodes of The Knowledge Project.

***

Members can discuss this podcast on the Learning Community Forum

Go Fast and Break Things: The Difference Between Reversible and Irreversible Decisions

Reversible vs. irreversible decisions. We often think that collecting as much information as possible will help us make the best decisions. Sometimes that's true, but sometimes it hamstrings our progress. Other times, it can be flat-out dangerous.

***

Many of the most successful people adopt simple, versatile decision-making heuristics to remove the need for deliberation in particular situations.

One heuristic might be defaulting to saying no, as Steve Jobs did. Or saying no to any decision that requires a calculator or computer, as Warren Buffett does. Or it might mean reasoning from first principles, as Elon Musk does. Jeff Bezos, the founder of Amazon.com, has another one we can add to our toolbox. He asks himself, is this a reversible or irreversible decision?

If a decision is reversible, we can make it fast and without perfect information. If a decision is irreversible, we had better slow down the decision-making process and ensure that we consider ample information and understand the problem as thoroughly as we can.

Bezos used this heuristic to make the decision to found Amazon. He recognized that if Amazon failed, he could return to his prior job. He would still have learned a lot and would not regret trying. The decision was reversible, so he took a risk. The heuristic served him well and continues to pay off when he makes decisions.

Decisions Amidst Uncertainty

Let’s say you decide to try a new restaurant after reading a review online. Having never been there before, you cannot know if the food will be good or if the atmosphere will be dreary. But you use the incomplete information from the review to make a decision, recognizing that it’s not a big deal if you don’t like the restaurant.

In other situations, the uncertainty is a little riskier. You might decide to take a particular job, not knowing what the company culture is like or how you will feel about the work after the honeymoon period ends.

Reversible decisions can be made fast and without obsessing over finding complete information. We can be prepared to extract wisdom from the experience with little cost if the decision doesn’t work out. Frequently, it’s not worth the time and energy required to gather more information and look for flawless answers. Although further research might make your decision 5% better, the delay might cost you the opportunity entirely.

Treating decisions as reversible is not an excuse to act recklessly or stay ill-informed; rather, it reflects the belief that we should adapt our decision-making framework to the type of decision we are making. Reversible decisions don’t need to be made the same way as irreversible decisions.

The ability to make decisions fast is a competitive advantage. One major advantage that start-ups have is that they can move with velocity, whereas established incumbents typically move with speed. The difference between the two is meaningful and often means the difference between success and failure.

Speed is measured as distance over time. If we’re headed from New York to LA on an airplane and we take off from JFK and circle around New York for three hours, we’re moving with a lot of speed, but we’re not getting anywhere. Speed doesn’t care if you are moving toward your goals or not. Velocity, on the other hand, measures displacement over time. To have velocity, you need to be moving toward your goal.

This heuristic explains why start-ups making quick decisions have an advantage over incumbents. That advantage is magnified by environmental factors, such as the pace of change. The faster the pace of environmental change, the more an advantage will accrue to people making quick decisions because those people can learn faster.

Decisions provide us with data, which can then make our future decisions better. The faster we can cycle through the OODA loop (Observe, Orient, Decide, Act), the better. This framework isn’t a one-off to apply to certain situations; it is a heuristic that needs to be an integral part of a decision-making toolkit.

With practice, we also get better at recognizing bad decisions and pivoting, rather than sticking with past choices due to the sunk costs fallacy. Equally important, we can stop viewing mistakes or small failures as disastrous and view them as pure information which will inform future decisions.

“A good plan, violently executed now, is better than a perfect plan next week.”

— General George Patton

Bezos compares decisions to doors. Reversible decisions are doors that open both ways. Irreversible decisions are doors that allow passage in only one direction; if you walk through, you are stuck there. Most decisions are the former and can be reversed (even though we can never recover the invested time and resources). Going through a reversible door gives us information: we know what’s on the other side.

In his shareholder letter, Bezos writes[1]:

Some decisions are consequential and irreversible or nearly irreversible – one-way doors – and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don’t like what you see on the other side, you can’t get back to where you were before. We can call these Type 1 decisions. But most decisions aren’t like that – they are changeable, reversible – they’re two-way doors. If you’ve made a suboptimal Type 2 decision, you don’t have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.

As organizations get larger, there seems to be a tendency to use the heavy-weight Type 1 decision-making process on most decisions, including many Type 2 decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention. We’ll have to figure out how to fight that tendency.

Bezos gives the example of the launch of one-hour delivery to those willing to pay extra. This service launched less than four months after the idea was first developed. In 111 days, the team “built a customer-facing app, secured a location for an urban warehouse, determined which 25,000 items to sell, got those items stocked, recruited and onboarded new staff, tested, iterated, designed new software for internal use – both a warehouse management system and a driver-facing app – and launched in time for the holidays.”

As further guidance, Bezos considers 70% certainty to be the cut-off point where it is appropriate to make a decision. That means acting once we have 70% of the required information, instead of waiting longer. Making a decision at 70% certainty and then course-correcting is a lot more effective than waiting for 90% certainty.

In Blink: The Power of Thinking Without Thinking, Malcolm Gladwell explains why decision-making under uncertainty can be so effective. We usually assume that more information leads to better decisions — if a doctor proposes additional tests, we tend to believe they will lead to a better outcome. Gladwell disagrees: “In fact, you need to know very little to find the underlying signature of a complex phenomenon. All you need is evidence of the ECG, blood pressure, fluid in the lungs, and an unstable angina. That’s a radical statement.”

In medicine, as in many areas, more information does not necessarily ensure improved outcomes. To illustrate this, Gladwell gives the example of a man arriving at a hospital with intermittent chest pains. His vital signs show no risk factors, yet his lifestyle does, and he had heart surgery two years earlier. If a doctor looks at all the available information, it may seem that the man needs to be admitted to the hospital. But the additional factors, beyond the vital signs, are not important in the short term. In the long run, he is at serious risk of developing heart disease. Gladwell writes,

… the role of those other factors is so small in determining what is happening to the man right now that an accurate diagnosis can be made without them. In fact, … that extra information is more than useless. It’s harmful. It confuses the issues. What screws up doctors when they are trying to predict heart attacks is that they take too much information into account.

We can all learn from Bezos’s approach, which has helped him to build an enormous company while retaining the tempo of a start-up. Bezos uses his heuristic to fight the stasis that sets in within many large organizations. It is about being effective, not about following the norm of slow decisions.

Once you understand that reversible decisions are in fact reversible, you can start to see them as opportunities to increase the pace of your learning. At a corporate level, allowing employees to make and learn from reversible decisions helps you move at the pace of a start-up. After all, if someone else is merely moving with speed, you’re going to pass them when you move with velocity.

***

Members can discuss this on the Learning Community Forum.

End Notes

[1] https://www.sec.gov/Archives/edgar/data/1018724/000119312516530910/d168744dex991.htm

The Return of a Decision-Making Jedi [The Knowledge Project #28]

Subscribe on iTunes | Stitcher | Spotify | Android | Google Play

Michael Mauboussin returns for a fascinating encore interview on the Knowledge Project, a show that explores ideas, methods, and mental models that will help you expand your mind, live deliberately, and master the best of what other people have already figured out.

In my conversation with Michael, we geek out on decision making, luck vs. skill, work/life balance, and so much more.

Mauboussin was actually the very first guest on the podcast when it was still very much an experiment. I enjoyed it so much, I decided to continue with the show. (If you missed his last interview, you can listen to it here, or if you’re a member of The Learning Community, you can download a transcript.)

Michael is one of my very favorite people to talk to, and I couldn’t wait to pick up right where we left off.

In this interview, Michael and I dive deep into some of the topics we care most about here at Farnam Street, including:

  • The concept of “base rates” and how they can help us make far better decisions and avoid the pain and consequences of making poor choices.
  • How to know where you land on the luck/skill continuum and why it matters
  • Michael’s advice on creating a systematic decision-making process in your organization to improve outcomes.
  • The two most important elements of any decision-making process
  • How to train your intuition to be one of your most powerful assets instead of a dangerous liability
  • The three tests Michael uses in his company to determine the health and financial stability of his environment
  • Why “algorithm aversion” is creating such headaches in many organizations and how to help your teams overcome it, so you can make more rapid progress
  • The most significant books that he’s read since we last spoke, his reading habits, and the strategies he uses to get the most out of every book
  • The importance of sleep in Michael's life to make sure his body and mind are running at peak efficiency
  • His greatest failures and what he learned from them
  • How Michael and his wife raised their kids and the unique parenting style they adopted
  • How Michael defines happiness and the decisions he makes to maximize the joy in his life

Any one of those insights alone is worth a listen, so I think you’re really going to enjoy this interview.

Listen

Transcript

An edited transcript is available to members of our learning community or for purchase separately ($7).

More Episodes

A complete list of all of our podcast episodes.

***

Members can discuss this post on the Learning Community Forum

What You Can Learn from Fighter Pilots About Making Fast and Accurate Decisions

“What is strategy? A mental tapestry of changing intentions for harmonizing and focusing our efforts as a basis for realizing some aim or purpose in an unfolding and often unforeseen world of many bewildering events and many contending interests.”

— John Boyd

What techniques do people use in the most extreme situations to make decisions? What can we learn from them to help us make more rational and quick decisions?

If these techniques work in the most drastic scenarios, they have a good chance of working for us. This is why military mental models can have such wide, useful applications outside their original context.

Military mental models are constantly tested in the laboratory of conflict. If they weren’t agile, versatile, and effective, they would quickly be replaced by others. Military leaders and strategists invest a great deal of time in developing and teaching decision-making processes.

One strategy that I’ve found repeatedly effective is the OODA loop.

Developed by strategist and U.S. Air Force Colonel John Boyd, the OODA loop is a practical concept designed to be the foundation of rational thinking in confusing or chaotic situations. OODA stands for Observe, Orient, Decide, and Act.

Boyd developed the strategy for fighter pilots. However, like all good mental models, it can be extended into other fields. We used it at the intelligence agency where I once worked. I know lawyers, police officers, doctors, businesspeople, politicians, athletes, and coaches who use it.

Fighter pilots have to work fast. Taking a second too long to make a decision can cost them their lives. As anyone who has ever watched Top Gun knows, pilots have a lot of decisions and processes to juggle when they’re in dogfights (close-range aerial battles). Pilots move at high speeds and need to avoid enemies while tracking them and keeping a contextual knowledge of objectives, terrains, fuel, and other key variables.

Dogfights are nasty. I’ve talked to pilots who’ve been in them. They want the fights to be over as quickly as possible. The longer they go, the higher the chances that something goes wrong. Pilots need to rely on their creativity and decision-making abilities to survive. There is no game plan to follow, no schedule or to-do list. There is only the present moment when everything hangs in the balance.

Forty-Second Boyd

Boyd was no armchair strategist. He developed his ideas during his own time as a fighter pilot. He earned the nickname “Forty-Second Boyd” for his ability to win any fight in under 40 seconds.

In a tribute written after Boyd’s death, General C.C. Krulak described him as “a towering intellect who made unsurpassed contributions to the American art of war. Indeed, he was one of the central architects of the reform of military thought…. From John Boyd we learned about competitive decision making on the battlefield—compressing time, using time as an ally.”

Reflecting Robert Greene’s maxim that everything is material, Boyd spent his career observing people and organizations. How do they adapt to changeable environments in conflicts, business, and other situations?

Over time, he deduced that these situations are characterized by uncertainty. Dogmatic, rigid theories are unsuitable for chaotic situations. Rather than trying to rise through the military ranks, Boyd focused on using his position as colonel to compose a theory of the universal logic of war.

Boyd was known to ask his mentees the pointed question, “Do you want to be someone, or do you want to do something?” In his own life, he certainly focused on the latter path and, as a result, left us ideas with tangible value. The OODA loop is just one of many.

The Four Parts of the OODA Loop

Let's break down the four parts of the OODA loop and see how they fit together.

OODA stands for Observe, Orient, Decide, Act. The description of it as a loop is crucial. Boyd intended the four steps to be repeated again and again until a conflict finishes. Although most depictions of the OODA loop portray it as a superficial idea, there is a lot of depth to it. Using it should be simple, but it has a rich basis in interdisciplinary knowledge.

1. Observe

The first step in the OODA loop is to observe. At this stage, the main focus is to build a comprehensive picture of the situation with as much accuracy as possible.

A fighter pilot needs to consider: What is immediately affecting me? What is affecting my opponent? What could affect us later on? Can I make any predictions, and how accurate were my prior ones? A pilot's environment changes rapidly, so these observations need to be broad and fluid.

And information alone is not enough. The observation stage requires awareness of the overarching meaning of the information. It also necessitates separating the information which is relevant for a particular decision from that which is not. You have to add context to the variables.

The observation stage is vital in decision-making processes.

For example, faced with a patient in an emergency ward, a doctor needs to start by gathering as much foundational knowledge as possible. That might be the patient's blood pressure, pulse, age, underlying health conditions, and reason for admission. At the same time, the doctor needs to discard irrelevant information and figure out which facts are relevant for this precise situation. Only by putting the pieces together can she make a fast decision about the best way to treat the patient. The more experienced a doctor is, the more factors she is able to take into account, including subtle ones, such as a patient's speech patterns, his body language, and the absence (rather than presence) of certain signs.

2. Orient

Orientation, the second stage of the OODA loop, is frequently misunderstood or skipped because it is less intuitive than the other stages. Boyd referred to it as the Schwerpunkt, a German term which loosely translates to “the main emphasis.” In this context, to orient is to recognize the barriers that might interfere with the other parts of the process.

Without an awareness of these barriers, the subsequent decision cannot be a fully rational one. Orienting is all about connecting with reality, not with a false version of events filtered through the lens of cognitive biases and shortcuts.

“Orientation isn't just a state you're in; it's a process. You're always orienting.”

— John Boyd

Including this step, rather than jumping straight to making a decision, gives us an edge over the competition. Even if we are at a disadvantage to begin with, having fewer resources or less information, Boyd maintained that the Orient step ensures that we can outsmart an opponent.

For Western nations, cyber-crime is a huge threat — mostly because for the first time ever, they can’t outsmart, outspend, or out-resource the competition. Boyd has some lessons for them.

Boyd believed that four main barriers prevent us from seeing information in an unbiased manner:

  1. Our cultural traditions
  2. Our genetic heritage
  3. Our ability to analyze and synthesize
  4. The influx of new information — it is hard to make sense of observations when the situation keeps changing

Boyd was one of the first people to discuss the importance of building a toolbox of mental models, prior to Charlie Munger’s popularization of the concept among investors.

Boyd believed in “destructive deduction” — taking note of incorrect assumptions and biases and then replacing them with fundamental, versatile mental models. Only then can we begin to garner a reality-oriented picture of the situation, which will inform subsequent decisions.

Boyd employed a brilliant metaphor for this — a snowmobile. In one talk, he described how a snowmobile comprises elements of different devices. The caterpillar treads of a tank, skis, the outboard motor of a boat, the handlebars of a bike — each of those elements is useless alone, but combining them creates a functional vehicle.

As Boyd put it: “A loser is someone (individual or group) who cannot build snowmobiles when facing uncertainty and unpredictable change; whereas a winner is someone (individual or group) who can build snowmobiles, and employ them in an appropriate fashion, when facing uncertainty and unpredictable change.”

To orient ourselves, we have to build a metaphorical snowmobile by combining practical concepts from different disciplines.

Although Boyd is regarded as a military strategist, he didn’t confine himself to any particular discipline. His theories encompass ideas drawn from various disciplines, including mathematical logic, biology, psychology, thermodynamics, game theory, anthropology, and physics. Boyd described his approach as a “scheme of pulling things apart (analysis) and putting them back together (synthesis) in new combinations to find how apparently unrelated ideas and actions can be related to one another.”

3. Decide

No surprises here. Having gathered information and oriented ourselves, we have to make an informed decision. The previous two steps should have generated a plethora of ideas, so this is the point where we choose the most relevant option.

Boyd cautioned against first-conclusion bias, explaining that we cannot keep making the same decision again and again. This part of the loop needs to be flexible and open to Bayesian updating. In some of his notes, Boyd described this step as the hypothesis stage. The implication is that we should test the decisions we make at this point in the loop, spotting their flaws and including any issues in future observation stages.

4. Act

While technically a decision-making process, the OODA loop is all about action. The ability to act upon rational decisions is a serious advantage.

The other steps are mere precursors. Once a decision is made, it is time to act on it. Also known as the test stage, this is when we experiment to see how good our decision was. Did we observe the right information? Did we use the best possible mental models? Did we get swayed by biases and other barriers? Can we disprove the prior hypothesis? Whatever the outcome, we then cycle back to the first part of the loop and begin observing again.

Why the OODA Loop Works

The OODA loop has four key benefits.

1. Speed

Fighter pilots must make many decisions in fast succession. They don’t have time to list pros and cons or to consider every available avenue. Once the OODA loop becomes part of their mental toolboxes, they should be able to cycle through it in a matter of seconds.

Speed is a crucial element of military decision making. Using the OODA loop in everyday life, we probably have a little more time than a fighter pilot would. But Boyd emphasized the value of being decisive, taking initiative, and staying autonomous. These are universal assets and apply to many situations.

Take the example of modern growth hacker marketing.

“The ability to operate at a faster tempo or rhythm than an adversary enables one to fold the adversary back inside himself so that he can neither appreciate nor keep up with what is going on. He will become disoriented and confused…”

— John Boyd

The key advantage growth hackers have over traditional marketers is speed. They observe (look at analytics, survey customers, run A/B tests, etc.) and orient themselves (separate vanity metrics from meaningful ones, assess interpretations, and ground themselves in the reality of the market) before making a decision and then acting. The final step serves to test their ideas, and they have the agility to switch tactics if the desired outcome is not achieved.

Meanwhile, traditional marketers are often trapped in lengthy campaigns which do not offer much in the way of useful metrics. Growth hackers can adapt and change their techniques every single day depending on what works. They are not confined by stagnant ideas about what worked before.

So, although they may have a small budget and fewer people to assist them, their speed gives them an advantage. Just as Boyd could defeat any opponent in under 40 seconds (even starting at a position of disadvantage), growth hackers can grow companies and sell products at extraordinary rates, starting from scratch.

2. Comfort With Uncertainty

Uncertainty does not always equate to risk. A fighter pilot is in a precarious situation, where there will be gaps in their knowledge. They cannot read the mind of the opponent and might have incomplete information about the weather conditions and surrounding environment. They can, however, take into account key factors such as the opponent's nationality, the type of airplane they are flying, and what their maneuvers reveal about their intentions and level of training.

If the opponent uses an unexpected strategy, is equipped with a new type of weapon or airplane, or behaves in an irrational, ideologically motivated way, the pilot must accept the accompanying uncertainty. However, Boyd emphasized that uncertainty is irrelevant if we have the right filters in place.

If we don’t, we can end up stuck at the observation stage, unable to decide or act. But if we do have the right filters, we can factor uncertainty into the observation stage. We can leave a margin of error. We can recognize the elements which are within our control and those which are not.

Three key principles supported Boyd’s ideas. In his presentations, he referred to Gödel’s Proof, Heisenberg’s Uncertainty Principle, and the Second Law of Thermodynamics.

Gödel’s theorems indicate that any mental model we have of reality will omit certain information and that Bayesian updating must be used to bring it in line with reality. Our understanding of science illustrates this.

In the past, people’s conception of reality missed crucial concepts such as criticality, relativity, the laws of thermodynamics, and gravity. As we have discovered these concepts, we have updated our view of the world. Yet we would be foolish to think that we now know everything and our worldview is complete. Other key principles remain undiscovered. The same goes for fighter pilots — their understanding of what is going on during a battle will always have gaps. Identifying this fundamental uncertainty gives it less power over us.

The second concept Boyd referred to is Heisenberg’s Uncertainty Principle. In its simplest form, this principle describes the limit of the precision with which pairs of physical properties, such as position and momentum, can be known at the same time: the more precisely we pin down one, the less precisely we can know the other. Although the principle was formulated to describe particles, Boyd’s ability to combine disciplines led him to apply it to planes. If a pilot focuses too hard on where an enemy plane is, they will lose track of where it is going, and vice versa. Trying harder to track both variables at once only leads to more inaccuracy. For Boyd, the same dynamic appears in myriad areas where excessive observation proves detrimental. Reality is imprecise.

Finally, Boyd made use of the Second Law of Thermodynamics. In a closed system, entropy always increases and everything moves towards chaos. Energy spreads out and becomes disorganized.

Although Boyd’s notes do not specify the exact applications, his inference appears to be that a fighter pilot must be an open system or they will fail. They must draw “energy” (information) from outside themselves or the situation will become chaotic. They should also aim to cut their opponent off, forcing them to become a closed system. Drawing on his studies, Boyd developed his Energy Maneuverability theory, which recast maneuvers in terms of the energy they used.

“Let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt.”

— Sun Tzu

3. Unpredictability

Using the OODA loop should enable us to act faster than an opponent, thereby seeming unpredictable. While they are still deciding what to do, we have already acted. This resets their own loop, moving them back to the observation stage. Keep doing this, and they are either rendered immobile or forced to act without making a considered decision. So, they start making mistakes, which can be exploited.

Boyd recommended making unpredictable changes in speed and direction, and wrote, “we should operate at a faster tempo than our adversaries or inside our adversaries[’] time scales. … Such activity will make us appear ambiguous (non predictable) [and] thereby generate confusion and disorder among our adversaries.” He even helped design planes better equipped to make those unpredictable changes.

For the same reason that you can’t run the same play 70 times in a football game, rigid military strategies often become useless after a few uses, or even one iteration, as opponents learn to recognize and counter them. The OODA loop can be endlessly used because it is a formless strategy, unconnected to any particular maneuvers.

We know that Boyd was influenced by Sun Tzu (he owned seven thoroughly annotated copies of The Art of War), and he drew many ideas from the ancient strategist. Sun Tzu depicts war as a game of deception where the best strategy is that which an opponent cannot pre-empt. Apple has long used this strategy as a key part of their product launches. Meticulously planned, their launches are shrouded in secrecy and the goal is for no one outside the company to see a product prior to the release.

When information has been leaked, the company has taken serious legal action as well as firing associated employees. We are never sure what Apple will put out next (just search for “Apple product launch 2017” and you will see endless speculation based on few facts). As a consequence, Apple can stay ahead of their rivals.

Once a product launches, rival companies scramble to emulate it. But by the time their technology is ready for release, Apple is on to the next thing and has taken most of the market share. And although Apple’s launches are inexpensive compared with the drawn-out campaigns other companies run, their unpredictability makes us pay attention. Stock prices rise the day after, tickets to launches sell out in seconds, and the media reports launches as if they were news events, not marketing events.

4. Testing

A notable omission in Boyd’s work is any sort of specific instructions for how to act or which decisions to make. This is presumably due to his respect for testing. He believed that ideas should be tested and then, if necessary, discarded.

“We can't just look at our own personal experiences or use the same mental recipes over and over again; we've got to look at other disciplines and activities and relate or connect them to what we know from our experiences and the strategic world we live in.”

— John Boyd

Boyd’s OODA is a feedback loop, with the outcome of actions leading back to observations. Even in Aerial Attack Study, his comprehensive manual of maneuvers, Boyd did not describe any particular one as superior. He encouraged pilots to have the widest repertoire possible so they could select the best option in response to the maneuvers of an opponent.

We can incorporate testing into our decision-making processes by keeping track of outcomes in decision journals. Boyd’s notes indicate that he may have done just that during his time as a fighter pilot, building up the knowledge that went on to form Aerial Attack Study. Rather than guessing how our decisions lead to certain outcomes, we can get a clear picture to aid us in future orientation stages. Over time, our decision journals will reveal what works and what doesn’t.

Applying the OODA Loop

In sports, there is an adage that carries over to business quite well: “Speed kills.” If you are able to be nimble, able to assess the ever-changing environment and adapt quickly, you'll always carry the advantage over your opponent.

Start applying the OODA loop to your day-to-day decisions and watch what happens. You'll start to notice things that you would have been oblivious to before. Before jumping to your first conclusion, you'll pause to consider your biases, take in additional information, and be more thoughtful of consequences.

As with anything you practice, if you do it right, the more you do it, the better you'll get. You'll start making better decisions more quickly. You'll see more rapid progress. And as John Boyd would prescribe, you'll start to DO something in your life, and not just BE somebody.

***

Members can discuss this post on the Learning Community Forum

Poker, Speeding Tickets, and Expected Value: Making Decisions in an Uncertain World

“Take the probability of loss times the amount of possible loss from the probability of gain times the amount of possible gain. That is what we're trying to do. It's imperfect but that's what it's all about.”

— Warren Buffett

You can train your brain to think like CEOs, professional poker players, investors, and others who make tricky decisions in an uncertain world by weighing probabilities.

All decisions involve potential tradeoffs and opportunity costs. The question is, how can we make the best possible choices when the factors involved are often so complicated and confusing? How can we determine which statistics and metrics are worth paying attention to? How do we think about averages?

Expected value is one of the simplest tools you can use to think better. While not a natural way of thinking for most people, it instantly turns the world into shades of grey by forcing us to weigh probabilities and outcomes. Once we've mastered it, our decisions become supercharged. We know which risks to take, when to quit projects, when to go all in, and more.

Expected value refers to the long-run average of a random variable.

If you flip a fair coin ten times, the heads-to-tails ratio will probably not be exactly equal. If you flip it one hundred times, the ratio will be closer to 50:50, though again not exactly. But for a very large number of iterations, you can expect heads to come up half the time and tails the other half. The law of large numbers dictates that the observed ratio will, over many flips, converge toward the expected 50:50 split, even if the first few flips seem lopsided.

The more coin flips, the closer you get to the 50:50 ratio. If you bet a sum of money on a coin flip, the potential winnings on a fair coin have to be bigger than your potential loss to make the expected value positive.
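
To make this concrete, here is a minimal Python sketch of that idea. The payoff figures ($1.20 won versus $1.00 lost) are illustrative assumptions, not from the text; the point is simply that the running average of many bets drifts toward the expected value.

    import random

    def expected_value(payoff_win, payoff_loss, p_win=0.5):
        """Probability-weighted average of the two outcomes."""
        return p_win * payoff_win + (1 - p_win) * payoff_loss

    def simulate(n_flips, payoff_win=1.2, payoff_loss=-1.0):
        """Average realized payoff over n_flips fair-coin bets."""
        total = 0.0
        for _ in range(n_flips):
            total += payoff_win if random.random() < 0.5 else payoff_loss
        return total / n_flips

    # Winning $1.20 vs. losing $1.00 on a fair coin gives a positive expected value.
    print(expected_value(1.2, -1.0))        # 0.10 per flip
    for n in (10, 100, 10_000):
        print(n, round(simulate(n), 3))     # drifts toward ~0.10 as n grows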

We make many expected-value calculations without even realizing it. If we decide to stay up late and have a few drinks on a Tuesday, we regard the expected value of an enjoyable evening as higher than the expected costs the following day. If we decide to always leave early for appointments, we weigh the expected value of being on time against the frequent instances when we arrive early. When we take on work, we view the expected value in terms of income and other career benefits as higher than the cost in terms of time and/or sanity.

Likewise, anyone who reads a lot knows that most books they choose will have minimal impact on them, while a few books will change their lives and be of tremendous value. Looking at the required time and money as an investment, books have a positive expected value (provided we choose them with care and make use of the lessons they teach).

These decisions might seem obvious. But the math behind them would be somewhat complicated if we tried to sit down and calculate it. Who pulls out a calculator before deciding whether to open a bottle of wine (certainly not me) or walk into a bookstore?

The factors involved are impossible to quantify in a non-subjective manner – like trying to explain how to catch a baseball. We just have a feel for them. This expected-value analysis is unconscious – something to consider if you have ever labeled yourself as “bad at math.”

Parking Tickets

Another example of expected value is parking tickets. Let's say that a parking spot costs $5 and the fine for not paying is $10. If you expect to be caught only one-third of the time, why pay for parking? The expected cost of skipping payment is about $3.33 per visit, so the fine is a weak disincentive: you can park without paying three times and expect only $10 in fines, instead of paying $15 for three parking spots. But if the fine is $100, paying becomes worthwhile unless the chance of getting caught falls below one in twenty. This is why fines tend to seem excessive: they have to cover the people who are never caught while still giving everyone an incentive to pay.
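
A minimal sketch of the arithmetic above, using the figures from the paragraph ($5 spot, $10 fine at a one-in-three chance of being caught, then a $100 fine):

    def expected_cost_of_skipping(fine, p_caught):
        """Average cost per visit if you never pay for parking."""
        return p_caught * fine

    parking_fee = 5.0

    # $10 fine, caught one-third of the time: skipping costs about $3.33 per visit
    # on average, cheaper than the $5 fee, so the fine is a weak deterrent.
    print(expected_cost_of_skipping(10, 1 / 3), "vs", parking_fee)

    # $100 fine: at a 1-in-20 chance of being caught, skipping costs exactly $5,
    # the break-even point. Any higher chance of being caught, and paying wins.
    print(expected_cost_of_skipping(100, 1 / 20), "vs", parking_fee)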

Consider speeding tickets. Here, the expected value can be more abstract, encompassing different factors. If speeding on the way to work saves 15 minutes, then a monthly $100 fine might seem worthwhile to some people. For most of us, though, a weekly fine would mean that speeding has a negative expected value. Add in other disincentives (such as the loss of your driver's license), and speeding is not worth it. So the calculation is not just financial; it takes into account other tradeoffs as well.

The same goes for free samples and trial periods on subscription services. Many companies (such as Graze, Blue Apron, and Amazon Prime) offer generous free trials. How can they afford to do this? Again, it comes down to expected value. The companies know how much the free trials cost them. They also know the probability of someone's paying afterwards and the lifetime value of a customer. Basic math reveals why free trials are profitable. Say that a free trial costs the company $10 per person, and one in ten people then sign up for the paid service, going on to generate $150 in profits. The expected value is positive. If only one in twenty people sign up, the company needs to find a cheaper free trial or scrap it.
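
The same calculation in code, using the illustrative figures above ($10 trial cost, $150 lifetime profit per converting customer):

    def expected_profit_per_trial(trial_cost, conversion_rate, lifetime_profit):
        """Expected profit from giving one person a free trial."""
        return conversion_rate * lifetime_profit - trial_cost

    # 1 in 10 convert: positive expected value, the free trial pays for itself.
    print(expected_profit_per_trial(10, 0.10, 150))  # 5.0
    # 1 in 20 convert: negative expected value, rework or scrap the trial.
    print(expected_profit_per_trial(10, 0.05, 150))  # -2.5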

Similarly, expected value applies to services that offer a free “lite” version (such as Buffer and Spotify). Doing so costs them a small amount or even nothing. Yet it increases the chance of someone's deciding to pay for the premium version. For the expected value to be positive, the combined cost of the people who never upgrade needs to be lower than the profit from the people who do pay.

Lottery tickets prove useless when viewed through the lens of expected value. If a ticket costs $1 and there is a possibility of winning $500,000, it might seem as if the expected value of the ticket is positive. But it is almost always negative. If one million people purchase a ticket, the expected payout per ticket is only $0.50, half the ticket price. That difference is the profit that lottery companies make. Only on sporadic occasions is the expected value positive, and even then the probability of winning remains minuscule.
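
A quick check of that arithmetic, assuming (as the paragraph does) a single $500,000 prize shared among one million $1 tickets:

    jackpot = 500_000
    tickets_sold = 1_000_000
    ticket_price = 1.0

    expected_payout = jackpot / tickets_sold          # 0.50 per ticket
    expected_value = expected_payout - ticket_price   # -0.50 per ticket
    print(expected_payout, expected_value)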

Failing to think in terms of expected value is a common source of error. Getting a grasp of it can help us to overcome many limitations and cognitive biases.

“Constantly thinking in expected value terms requires discipline and is somewhat unnatural. But the leading thinkers and practitioners from somewhat varied fields have converged on the same formula: focus not on the frequency of correctness, but on the magnitude of correctness.”

— Michael Mauboussin

Expected Value and Poker

Let's look at poker. How do professional poker players manage to win large sums of money and hold impressive track records? Well, we can be certain that the answer isn't all luck, although there is some of that involved.

Professional players rely on mathematical mental models that create order among random variables. Although these models are basic, it takes extensive experience to create the fingerspitzengefühl (“fingertips feeling,” or instinct) necessary to use them.

A player needs to make correct calculations every minute of a game with an automaton-like mindset. Emotions and distractions can corrupt the accuracy of the raw math.

In a game of poker, the expected value is the average return on each dollar invested in the pot. Each time a player bets or calls, they weigh the probability of winning against the amount they must put in. If a player risks $100 with a 1-in-5 probability of success, the pot must return at least $500 (a $400 profit on the $100 risked) for the call to break even: one win covers the four losses. If the pot contains only $300 at the same 1-in-5 probability, the expected value is negative. The idea is that even though any single call may fail, a player who consistently makes positive-expected-value calls will profit in the long run.
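
A sketch of that pot-odds calculation, treating "the pot" as the total amount returned to the player on a win (so a $500 pot means a $400 profit on the $100 risked, as in the paragraph above):

    def call_expected_value(pot, cost_to_call, p_win):
        """Expected profit of calling: win the pot (minus your call) or lose the call."""
        return p_win * (pot - cost_to_call) - (1 - p_win) * cost_to_call

    print(call_expected_value(500, 100, 0.2))  # 0.0   -> break-even call
    print(call_expected_value(300, 100, 0.2))  # -40.0 -> fold
    print(call_expected_value(800, 100, 0.2))  # 60.0  -> profitable call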

Expected-value analysis gives players a clear idea of probabilistic payoffs. Successful poker players can win millions one week, then make nothing or lose money the next, depending on how the probabilities play out. Even the best possible hands can lose due to simple probability. With each move, players also need to use Bayesian updating to adapt their calculations, because sticking with a prior figure could prove disastrous. Casinos, meanwhile, make their fortunes from people who bet on situations with a negative expected value.

Expected Value and the Ludic Fallacy

In The Black Swan, Nassim Taleb explains the difference between everyday randomness and randomness in the context of a game or casino. Taleb coined the term “ludic fallacy” to refer to “the misuse of games to model real-life situations.” (Or, as the website logicallyfallacious.com puts it: the assumption that flawless statistical models apply to situations where they don’t actually apply.)

In Taleb’s words, gambling is “sterilized and domesticated uncertainty. In the casino, you know the rules, you can calculate the odds… ‘The casino is the only human venture I know where the probabilities are known, Gaussian (i.e., bell-curve), and almost computable.’ You cannot expect the casino to pay out a million times your bet, or to change the rules abruptly during the game….”

Games like poker have a defined, calculable expected value. That’s because we know the outcomes, the cards, and the math. Most decisions are more complicated. If you decide to bet $100 that it will rain tomorrow, the expected value of the wager is incalculable. The factors involved are too numerous and complex to compute. Relevant factors do exist; you are more likely to win the bet if you live in England than if you live in the Sahara, for example. But that doesn't rule out Black Swan events, nor does it give you the neat probabilities which exist in games. In short, there is a key distinction between Knightian risks, which are computable because we have enough information to calculate the odds, and Knightian uncertainty, which is non-computable because we don’t have enough information to calculate odds accurately. (This distinction between risk and uncertainty is based on the writings of economist Frank Knight.) Poker falls into the former category. Real life is in the latter. If we take the concept literally and only plan for the expected, we will run into some serious problems.

As Taleb writes in Fooled By Randomness:

Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance. Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem or a brain teaser. Mother nature does not tell you how many holes there are on the roulette table, nor does she deliver problems in a textbook way (in the real world one has to guess the problem more than the solution).

The Monte Carlo Fallacy

Even in the domesticated environment of a casino, probabilistic thinking can go awry if the principle of expected value is forgotten. This famously occurred at the Monte Carlo Casino in 1913. A group of gamblers lost millions when the roulette wheel landed on black 26 times in a row. That particular sequence is no more or less likely than any of the other 67,108,863 possible sequences of the same length, but the people present kept thinking, “It has to be red next time.” They saw the likelihood of the wheel landing on red as higher each time it landed on black. In hindsight, what sense does that make? A roulette wheel does not remember the color it landed on last time. The likelihood of either color is the same on every spin, regardless of the previous result, and because of the green zero it is in fact slightly below 50%. So unless the potential winnings on each spin are more than twice the bet, the expected value is negative.
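
Two quick calculations behind this story, assuming a single-zero European wheel with 18 red, 18 black, and one green pocket (the exact layout of the 1913 Monte Carlo wheel is an assumption here):

    p_black = 18 / 37                      # single-zero wheel
    print(p_black ** 26)                   # ~7e-9: chance of 26 blacks in a row
    print(0.5 ** 26, 2 ** 26 - 1)          # the 50/50 simplification and the 67,108,863 figure

    # Even-money bet of $1: win $1 with p = 18/37, lose $1 otherwise.
    ev_even_money = p_black * 1 + (1 - p_black) * -1
    print(ev_even_money)                   # ~ -0.027 per dollar: negative expected value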

“A lot of people start out with a 400-horsepower motor but only get 100 horsepower of output. It's way better to have a 200-horsepower motor and get it all into output.”

— Warren Buffett

Given all the casinos and roulette tables in the world, the Monte Carlo incident had to happen at some point. Perhaps some day a roulette wheel will land on red 26 times in a row and the incident will repeat. The gamblers involved did not consider the negative expected value of each bet they made. We know this mistake as the Monte Carlo fallacy (or the “gambler's fallacy” or “the fallacy of the maturity of chances”) – the assumption that prior independent outcomes influence future outcomes that are actually also independent. In other words, people assume that “a random process becomes less random and more predictable as it is repeated.”[1]

It's a common error. People who play the lottery for years without success think that their chance of winning rises with each ticket, but the expected value is unchanged between iterations. Amos Tversky and Daniel Kahneman consider this kind of thinking a component of the representativeness heuristic, stating that the more we believe we control random events, the more likely we are to succumb to the Monte Carlo fallacy.

Magnitude over Frequency

Steven Crist, in his book Bet with the Best, offers an example of how an expected-value mindset can be applied. Consider a hypothetical race with four horses. If you’re trying to maximize return on investment, you might want to avoid the horse with a high likelihood of winning. Crist writes,

The point of this exercise is to illustrate that even a horse with a very high likelihood of winning can be either a very good or a very bad bet, and that the difference between the two is determined by only one thing: the odds.[2]

Everything comes down to payoffs. A horse with a 50% chance of winning might be a good bet, but it depends on the payoff. The same holds for a 100-to-1 longshot. It's not the frequency of winning but the magnitude of the win that matters.
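
A sketch of Crist's point with invented odds (the horses and prices here are hypothetical, not from his book): the favorite can be the worse bet and the longshot the better one, depending entirely on the payoff.

    def bet_expected_value(p_win, net_odds, stake=1.0):
        """Expected profit on a win bet paying net_odds-to-1."""
        return p_win * net_odds * stake - (1 - p_win) * stake

    # Favorite: 50% chance of winning, but the track pays only 4-to-5 (0.8-to-1).
    print(bet_expected_value(0.50, 0.8))    # -0.10 per dollar: a bad bet
    # Longshot: 2% chance of winning, paying 100-to-1.
    print(bet_expected_value(0.02, 100))    # +1.02 per dollar: a good bet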

Error Rates, Averages, and Variability

When Bill Gates walks into a room with 20 people, the average wealth per person in the room quickly goes beyond a billion dollars. It doesn't matter if the 20 people are wealthy or not; Gates's wealth is off the charts and distorts the results.

An old joke tells of the man who drowns in a river which is, on average, three feet deep. If you're deciding to cross a river and can't swim, the range of depths matters a heck of a lot more than the average depth.
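
A small illustration of both points, with invented figures: a single outlier drags the mean far from the typical value, and an average depth says nothing about the deepest point.

    from statistics import mean, median

    # Twenty people with $100k each, plus one extreme outlier.
    wealth = [100_000] * 20 + [90_000_000_000]
    print(f"mean: ${mean(wealth):,.0f}")      # over $4 billion
    print(f"median: ${median(wealth):,.0f}")  # still $100,000

    # A river that is three feet deep "on average" can still drown you.
    depths_ft = [1, 2, 2, 3, 9, 1]
    print(mean(depths_ft), max(depths_ft))    # 3.0 average, 9 ft at the deepest point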

The Use of Expected Value: How to Make Decisions in an Uncertain World

Thinking in terms of expected value requires discipline and practice. And yet, the top performers in almost any field think in terms of probabilities. While this isn't natural for most of us, once you implement the discipline of the process, you'll see the quality of your thinking and decisions improve.

In poker, players can predict the likelihood of a particular outcome. In the vast majority of cases, we cannot predict the future with anything approaching accuracy. So what use is expected value outside gambling? It turns out, quite a lot. Recognizing how expected value works puts any of us at an advantage. We can mentally leap through various scenarios and understand how they affect outcomes.

Expected value takes into account wild deviations. Averages are useful, but they have limits, as the man who tried to cross the river discovered. When making predictions about the future, we need to consider the range of outcomes. The greater the possible variance from the average, the more our decisions should account for a wider range of outcomes.

There's a saying in the design world: when you design for the average, you design for no one. Large deviations can mean more risk, which is not always a bad thing. So expected-value calculations take the deviations into account. If we can make decisions with a positive expected value and the lowest possible risk, we are open to large benefits.
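
A hedged sketch of that idea, with made-up payoffs: two choices can share the same expected value while differing wildly in their range of outcomes, and the decision should account for that spread.

    def ev(outcomes):
        """Probability-weighted average payoff."""
        return sum(p * x for p, x in outcomes)

    def spread(outcomes):
        """Standard deviation of payoffs around the expected value."""
        m = ev(outcomes)
        return sum(p * (x - m) ** 2 for p, x in outcomes) ** 0.5

    steady = [(0.5, 40), (0.5, 60)]       # EV 50, outcomes close together
    swingy = [(0.5, -400), (0.5, 500)]    # EV 50, outcomes far apart

    for name, outcomes in (("steady", steady), ("swingy", swingy)):
        print(name, ev(outcomes), round(spread(outcomes), 1))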

Investors use expected value to make decisions. Choices with a positive expected value and minimal risk of losing money are wise. Even if some losses occur, the net gain should be positive over time. In investing, unlike in poker, the potential losses and gains cannot be calculated in exact terms. Expected-value analysis reveals opportunities that people who just use probabilistic thinking often miss. A trade with a low probability of success can still carry a high expected value. That's why it is crucial to have a large number of robust mental models. As useful as probabilistic thinking can be, it has far more utility when combined with expected value.

Understanding expected value is also an effective way to overcome the sunk costs fallacy. Many of our decisions are based on non-recoverable past investments of time, money, or resources. These investments are irrelevant; we can't recover them, so we shouldn't factor them into new decisions. Sunk costs push us toward situations with a negative expected value. For example, consider a company that has invested considerable time and money in the development of a new product. As the launch date nears, they receive irrefutable evidence that the product will be a failure. Perhaps research shows that customers are uninterested, or a competitor launches a similar, better product. The sunk costs fallacy would lead them to release their product anyway. Even if they take a loss. Even if it damages their reputation. After all, why waste the money they spent developing it? Here's the problem: the product has a negative expected value, and releasing it will only worsen their losses. An escalation of commitment will only increase sunk costs.

When we try to justify a prior expense, calculating the expected value can prevent us from worsening the situation. The sunk costs fallacy robs us of our most precious resource: time. Each day we are faced with the choice between continuing and quitting numerous endeavors. Expected-value analysis reveals where we should continue, and where we should cut our losses and move on to a better use of time and resources. It's an efficient way to work smarter, and not engage in unnecessary projects.

Thinking in terms of expected value will make you feel awkward when you first try it. That's the hardest thing about it; you need to practice it a while before it becomes second nature. Once you get the hang of it, you'll see that it's valuable in almost every decision. That's why the most rational people in the world constantly think about expected value. They've uncovered the key insight that the magnitude of correctness matters more than its frequency. And yet, human nature is such that we're happier when we're frequently right.

Footnotes
[1] From https://rationalwiki.org/wiki/Gambler’s_fallacy, accessed on 11 January 2018.

[2] Steven Crist, “Crist on Value,” in Andrew Beyer et al., Bet with the Best: All New Strategies From America’s Leading Handicappers (New York: Daily Racing Form Press, 2001), 63–64.

All Models Are Wrong

How is your journey toward understanding Farnam Street’s latticework of mental models going? Is it proving useful? Changing your view of the world? If the answer is that it’s going well, that’s good. There’s just one tiny hitch.

All models are wrong.

Yep. It's the truth. However, there is another part to that statement:

All models are wrong, some are useful.

Those words come from the British statistician George Box. In a groundbreaking 1976 paper, Box revealed the fallacy of our desire to categorize and organize the world. We create models (a term with many applications), only to confuse them with reality.

Box also stated:

Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.

What Exactly Is A Model?

First, we should understand precisely what a model is.

The dictionary definition states a model is ‘a representation, generally in miniature, to show the construction or appearance of something’ or ‘a simplified description, especially a mathematical one, of a system or process, to assist calculations and predictions.’

For our purposes here, we are better served by the second definition. A model is a simplification which fosters understanding.

Think of an architectural model. This is typically a small-scale representation of a building, made before the building itself is constructed. Its purpose is to show what the building will look like and to help the people working on the project develop a clear picture of the overall feel. In the iconic scene from Zoolander, Derek (played by Ben Stiller) looks at the architectural model of his proposed ‘school for kids who can’t read good’ and shouts “What is this? A center for ants??”

That scene illustrates the wrong way to understand models: Too literally.

Why We Use Models – and Why They Work

At Farnam Street, we believe in using models to build a massive but finite body of fundamental, invariant knowledge about how the world really works. Applying this knowledge is the key to making good decisions and avoiding stupidity.

“Scientists generally agree that no theory is 100 percent correct. Thus, the real test of knowledge is not truth, but utility. Science gives us power. The more useful that power, the better the science.”

— Yuval Noah Harari

Time-tested models allow us to understand how things work in the real world. And understanding how things work prepares us to make better decisions without expending too much mental energy in the process.

Instead of relying on fickle and specialized facts, we can learn versatile concepts. The mental models we cover are intended to be widely applicable.

It's crucial to understand as many mental models as possible. As the adage goes, a little knowledge is a dangerous thing: knowing only a model or two can create more problems than total ignorance, because we're tempted to apply them everywhere. No single model is universally applicable – we find exceptions for nearly everything. Even hardcore physics has not been totally solved.

“The basic trouble, you see, is that people think that “right” and “wrong” are absolute; that everything that isn't perfectly and completely right is totally and equally wrong.”

— Isaac Asimov

Take a look at almost any comment section on the internet and you are guaranteed to find at least one pedant raging about a minor perceived inaccuracy, throwing out the good with the bad. While ignorance and misinformation are certainly not laudable, neither is an obsession with perfection.

Like heuristics, models work because they are usually helpful in most situations, not because they are always helpful in a small number of situations.

Models can assist us in making predictions and forecasting the future. Forecasts are never guaranteed, yet they provide us with a degree of preparedness and comprehension of the future. For example, a weather forecast which claims it will rain today may get that wrong. Still, it's correct often enough to enable us to plan appropriately and bring an umbrella.

Mental Models and Minimum Viable Products

Think of mental models as minimum viable products.

Sure, all of them can be improved. But the only way that can happen is if we try them out, educate ourselves and collectively refine them.

We can apply one of our mental models, Occam’s razor, to this. Occam’s razor states that the simplest solution is usually correct. In the same way, our simplest mental models tend to be the most useful. This is because there is minimal room for errors and misapplication.

“The world doesn’t have the luxury of waiting for complete answers before it takes action.”

— Daniel Gilbert

Your kitchen knives are not as sharp as they could be. Does that matter as long as they still cut vegetables? Your bed is not as comfortable as it could be. Does that matter if you can still get a good night’s sleep in it? Your internet is not as fast as it could be. Does that matter as long as you can load this article? Arguably not. Our world runs on the functional, not the perfect. This is what a mental model is – a functional tool. A tool which maybe could be a bit sharper or easier to use, but still does the job.

The statistician David Hand made the following statement in 2014:

In general, when building statistical models, we must not forget that the aim is to understand something about the real world. Or predict, choose an action, make a decision, summarize evidence, and so on, but always about the real world, not an abstract mathematical world: our models are not the reality.

Georg Rasch made a similar point back in 1960:

When you construct a model you leave out all the details which you, with the knowledge at your disposal, consider inessential…. Models should not be true, but it is important that they are applicable, and whether they are applicable for any given purpose must, of course, be investigated. This also means that a model is never accepted finally, only on trial.

Imagine a world where physics-like precision is prized over usefulness.

We would lack medical care, because no medicine or procedure can ever be perfect. We would possess little scientific knowledge, because research can never be 100% accurate. We would have no art, because no work is ever truly finished. We would have no technology, because there are always little flaws still waiting to be ironed out.

“A model is a simplification or approximation of reality and hence will not reflect all of reality … While a model can never be “truth,” a model might be ranked from very useful, to useful, to somewhat useful to, finally, essentially useless.”

— Ken Burnham and David Anderson

In short, we would have nothing. Everything around us is imperfect and uncertain. Some things are more imperfect than others, but issues are always there. Over time, incremental improvements happen through unending experimentation and research.

The Map is Not the Territory

As we know, the map is not the territory. A map can be seen as a symbol or index of a place, not an icon.

When we look at a map of Paris, we know it is a representation of the actual city. There are bound to be flaws: streets which have been renamed, demolished buildings, perhaps a new Metro line. Even so, the map will help us find our way. It is far more useful to have a map showing the way from Notre Dame to Gare du Nord (a tool) than to know how many meters apart they are (a piece of trivia).

Someone who has spent a lot of time studying a map will be able to use it with greater ease, just like a mental model. Someone who lives in Paris will find the map easier to understand than a tourist, just as someone who uses a mental model in their day-to-day life will apply it better than a novice. As long as there are no major errors, we can consider the map useful, even if it is by no means a reflection of reality. Gregory Bateson writes in Steps to an Ecology of Mind that the purpose of a map is not to be true, but to have a structure which represents truth within the current context.

“A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.”

— Alfred Korzybski

Physical maps generally become more accurate as time passes. Not long ago, they often included countries which didn’t exist, omitted some which did, portrayed the world as flat or fudged distances. Nowadays, our maps have come a long way.

The same goes for mental models – they are always evolving, being revised – never really achieving perfection. Certainly, over time, the best models are revised only slightly, but we must never consider our knowledge “set”.

Another factor to consider is what a model is being used for.

Many mental models (e.g. entropy, critical mass and activation energy) are based upon scientific and mathematical concepts. A person who works in those areas will obviously need a deeper understanding of them than someone who simply wants to think better when making investment decisions. They will need a different, more detailed map – one showing elements which the rest of us have no need for.

“A model which took account of all the variation of reality would be of no more use than a map at the scale of one to one.”

— Joan Robinson

In Partial Enchantments of the Quixote, Jorge Luis Borges provides an even more interesting analysis of the confusion between models and reality:

Let us imagine that a portion of the soil of England has been leveled off perfectly and that on it a cartographer traces a map of England. The job is perfect; there is no detail of the soil of England, no matter how minute, that is not registered on the map; everything has there its correspondence. This map, in such a case, should contain a map of the map, which should contain a map of the map of the map, and so on to infinity. Why does it disturb us that the map be included in the map and the thousand and one nights in the book of the Thousand and One Nights? Why does it disturb us that Don Quixote be a reader of the Quixote and Hamlet a spectator of Hamlet? I believe I have found the reason: these inversions suggest that if the characters of a fictional work can be readers or spectators, we, its readers or spectators, can be fictions.

How Do We Know If A Model Is Useful?

This is a tricky question to answer. When looking at any model, it is helpful to ask some of the following questions:

  • How long has this model been around? As a general rule, mental models which have been around for a long time (such as Occam’s razor) will have been subjected to a great deal of scrutiny. Time is an excellent curator, trimming away inefficient ideas. A mental model which is new may not be particularly refined or versatile. Many of our mental models originate in Ancient Greece and Rome; they must be functional to have survived this long.
  • Is it a representation of reality? In other words, does it reflect the real world? Or is it based on abstractions?
  • Does this model apply to multiple areas? The more elastic a model is, the more valuable it is to learn about. (Of course, be careful not to apply the model where it doesn't belong. Mind Feynman: “You must not fool yourself, and you're the easiest person to fool.”)
  • How did this model originate? Many mental models arise from scientific or mathematical concepts. The more fundamental the domain, the more likely the model is to be true and lasting.
  • Is it based on first principles? A first principle is a foundational concept which cannot be deduced from any other concept and must be known.
  • Does it require infinite regress? Infinite regress refers to something which is justified by principles which themselves require justification by other principles. A model based on infinite regress is likely to require extensive knowledge of a particular topic and to have minimal real-world application.

When using any mental model, we must avoid becoming too rigid. There are exceptions to all of them, and situations in which they are not applicable.

Think of the latticework as a toolkit. It pays to do the work up front to put as many models as possible in your toolbox, at a deep, deep level. If you only have one or two, you're likely to attempt to use them in places that don't make sense. If you've absorbed them only lightly, you will not be able to use them when the time comes.

If on the other hand, you have a toolbox full of them and they're sunk in deep, you're more likely to pull out the best ones for the job exactly when they are needed.

Too many people waste time chasing physics-like precision in areas of practical life where no such precision is available. A better approach is to ask “Is it useful?” and, if yes, “To what extent?”

Mental models are a way of thinking about the world that prepares us to make good decisions in the first place.