# Blog

## Spies, Crime, and Lightning Strikes: The Value of Probabilistic Thinking

Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass. It is one of the best tools we have to improve the accuracy of our decisions. In a world where each moment is determined by an infinitely complex set of factors, probabilistic thinking helps us identify the most likely outcomes. When we know these, our decisions can be more precise and effective.

## Are you going to get hit by lightning or not?

Why we need the concept of probabilities at all is worth thinking about. Things either are or are not, right? We either will get hit by lightning today or we won’t. The problem is, we just don’t know until we live out the day, which doesn’t help us at all when we make our decisions in the morning. The future is far from determined and we can better navigate it by understanding the likelihood of events that could impact us.

Our lack of perfect information about the world gives rise to all of probability theory, and its usefulness. We know now that the future is inherently unpredictable because not all variables can be known and even the smallest error imaginable in our data very quickly throws off our predictions. The best we can do is estimate the future by generating realistic, useful probabilities. So how do we do that?

Probability is everywhere, down to the very bones of the world. The probabilistic machinery in our minds—the cut-to-the-quick heuristics made famous by the psychologists Daniel Kahneman and Amos Tversky—evolved in a time before computers, factories, traffic, middle managers, and the stock market. It served us in a time when human life was about survival, and it still serves us well in that capacity.

But what about today—a time when, for most of us, survival is not so much the issue? We want to thrive. We want to compete, and win. Mostly, we want to make good decisions in complex social systems that were not part of the world in which our brains evolved their (quite rational) heuristics.

For this, we need to consciously add a layer of probability awareness. What is it, and how can we use it to our advantage?

There are three important aspects of probability that we need to explain so you can integrate them into your thinking to get into the ballpark and improve your chances of catching the ball:

1. Bayesian thinking
2. Fat-tailed curves
3. Asymmetries

Thomas Bayes and Bayesian thinking: Bayes was an English minister in the first half of the 18th century, whose most famous work, “An Essay Toward Solving a Problem in the Doctrine of Chances” was brought to the attention of the Royal Society by his friend Richard Price in 1763—two years after his death. The essay, the key to what we now know as Bayes's Theorem, concerned how we should adjust probabilities when we encounter new data.

The core of Bayesian thinking (or Bayesian updating, as it can be called) is this: given that we have limited but useful information about the world, and are constantly encountering new information, we should probably take into account what we already know when we learn something new. As much of it as possible. Bayesian thinking allows us to use all relevant prior information in making decisions. Statisticians might call it a base rate, taking in outside information about past situations like the one you’re in.

Consider the headline “Violent Stabbings on the Rise.” Without Bayesian thinking, you might become genuinely afraid because your chances of being a victim of assault or murder are higher than they were a few months ago. But a Bayesian approach will have you putting this information into the context of what you already know about violent crime.

You know that violent crime has been declining to its lowest rates in decades. Your city is safer now than it has been since this measurement started. Let’s say your chance of being a victim of a stabbing last year was one in 10,000, or 0.01%. The article states, with accuracy, that violent crime has doubled. It is now two in 10,000, or 0.02%. Is that worth being terribly worried about? The prior information here is key. When we factor it in, we realize that our safety has not really been compromised.
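To make the arithmetic concrete, here is a minimal sketch using the illustrative one-in-10,000 figure from the example above (not a real crime statistic):

```python
# The headline says violent crime "doubled"; the prior says the base
# rate was tiny. The 1-in-10,000 figure is the example's illustrative
# number, not real data.
prior_risk = 1 / 10_000          # last year's chance: 0.01%
current_risk = 2 * prior_risk    # after the doubling: 0.02%
absolute_increase = current_risk - prior_risk

print(f"Prior risk:        {prior_risk:.2%}")
print(f"Current risk:      {current_risk:.2%}")
print(f"Absolute increase: {absolute_increase:.2%}")
```

Doubling sounds dramatic; one extra case per 10,000 people does not.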

Conversely, if we look at the diabetes statistics in the United States, our application of prior knowledge would lead us to a different conclusion. Here, a Bayesian analysis indicates you should be concerned. In 1958, 0.93% of the population was diagnosed with diabetes. In 2015 it was 7.4%. When you look at the intervening years, the climb in diabetes diagnoses is steady, not a spike. So the prior relevant data, or priors, indicate a trend that is worrisome.

It is important to remember that priors themselves are probability estimates. For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You’re assigning it a probability of being true. Therefore, you can’t let your priors get in the way of processing new knowledge. In Bayesian terms, this is called the likelihood ratio or the Bayes factor. Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced. Eventually, some priors are replaced completely. This is an ongoing cycle of challenging and validating what you believe you know. When making uncertain decisions, it’s nearly always a mistake not to ask: What are the relevant priors? What might I already know that I can use to better understand the reality of the situation?
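For readers who want the mechanics, here is a minimal sketch of the updating cycle via the odds form of Bayes's Theorem; the prior and Bayes factor below are made-up numbers for illustration:

```python
# Odds form of Bayes's Theorem: posterior odds = prior odds x Bayes
# factor (the likelihood ratio of the new evidence). All numbers are
# invented for illustration.

def update(prior: float, bayes_factor: float) -> float:
    """Return the posterior probability after seeing the evidence."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

belief = 0.90                  # a strong prior that some claim is true
belief = update(belief, 0.25)  # evidence 4x likelier if the claim is false
print(f"{belief:.2f}")         # weakened to about 0.69, not discarded
```

Notice that the prior is reduced, not flipped to false: exactly the ongoing cycle of challenging and validating described above.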

Now we need to look at fat-tailed curves: Many of us are familiar with the bell curve, that nice, symmetrical wave that captures the relative frequency of so many things from height to exam scores. The bell curve is great because it’s easy to understand and easy to use. Its technical name is “normal distribution.” If we know we are in a bell curve situation, we can quickly identify our parameters and plan for the most likely outcomes.

Fat-tailed curves are different.

At first glance they seem similar enough. Common outcomes cluster together, creating a wave. The difference is in the tails. In a bell curve the extremes are predictable. There can only be so much deviation from the mean. In a fat-tailed curve there is no real cap on extreme events.

The more extreme events that are possible, the longer the tails of the curve get. Any one extreme event is still unlikely, but the sheer number of options means that we can’t rely on the most common outcomes as representing the average. The more extreme events that are possible, the higher the probability that one of them will occur. Crazy things are definitely going to happen, and we have no way of identifying when.

Think of it this way. In a bell curve type of situation, like displaying the distribution of height or weight in a human population, there are outliers on the spectrum of possibility, but the outliers have a fairly well defined scope. You’ll never meet a man who is ten times the size of an average man. But in a curve with fat tails, like wealth, the central tendency does not work the same way. You may regularly meet people who are ten, 100, or 10,000 times wealthier than the average person. That is a very different type of world.
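The height-versus-wealth contrast can be simulated in a few lines. This is a rough sketch with illustrative parameters (a bell curve for height, a Pareto distribution for wealth), not real demographic data:

```python
# Height behaves like a bell curve; wealth behaves like a fat-tailed
# Pareto distribution. Parameters are illustrative only.
import random

random.seed(42)
N = 100_000

heights = [random.gauss(175, 7) for _ in range(N)]       # cm, normal
wealth = [random.paretovariate(1.16) for _ in range(N)]  # fat-tailed

print(f"tallest / average height: {max(heights) / (sum(heights) / N):.2f}x")
print(f"richest / average wealth: {max(wealth) / (sum(wealth) / N):.0f}x")
```

The tallest draw sits a few percent above the mean; the richest draw can sit thousands of times above it. That is the fat tail at work.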

Let’s re-approach the example of the risks of violence we discussed in relation to Bayesian thinking. Suppose you hear that you have a greater risk of slipping on the stairs and cracking your head open than of being killed by a terrorist. The statistics, the priors, seem to back it up: 1,000 people slipped on the stairs and died last year in your country and only 500 died of terrorism. Should you be more worried about stairs or terror events?

Some use examples like these to prove that terror risk is low—since the recent past shows very few deaths, why worry?[1] The problem is in the fat tails: The risk of terror violence is more like wealth, while stair-slipping deaths are more like height and weight. In the next ten years, how many events are possible? How fat is the tail?

The important thing is not to sit down and imagine every possible scenario in the tail (by definition, it is impossible) but to deal with fat-tailed domains in the correct way: by positioning ourselves to survive or even benefit from the wildly unpredictable future, by being the only ones thinking correctly and planning for a world we don’t fully understand.

Asymmetries: Finally, you need to think about something we might call “metaprobability”—the probability that your probability estimates themselves are any good.

This massively misunderstood concept has to do with asymmetries. If you look at nicely polished stock pitches made by professional investors, nearly every time an idea is presented, the investor looks their audience in the eye and states they think they’re going to achieve a rate of return of 20% to 40% per annum, if not higher. Yet exceedingly few of them ever attain that mark, and it’s not because they don’t have any winners. It’s because they get so many so wrong. They consistently overestimate their confidence in their probabilistic estimates. (For reference, the general stock market has returned no more than 7% to 8% per annum in the United States over a long period, before fees.)

Another common asymmetry is people’s ability to estimate the effect of traffic on travel time. How often do you leave “on time” and arrive 20% early? Almost never? How often do you leave “on time” and arrive 20% late? All the time? Exactly. Your estimation errors are asymmetric, skewing in a single direction. This is often the case with probabilistic decision-making.[2]

Far more probability estimates are wrong on the “over-optimistic” side than the “under-optimistic” side. You’ll rarely read about an investor who aimed for 25% annual returns and subsequently earned 40% over a long period of time. You can throw a dart at the Wall Street Journal and hit the names of lots of investors who aim for 25% per annum with each investment and end up closer to 10%.

## The spy world

Successful spies are very good at probabilistic thinking. High-stakes survival situations tend to make us evaluate our environment with as little bias as possible.

When Vera Atkins was second in command of the French unit of the Special Operations Executive (SOE), a British intelligence organization reporting directly to Winston Churchill during World War II[3], she had to make hundreds of decisions by figuring out the probable accuracy of inherently unreliable information.

Atkins was responsible for the recruitment and deployment of British agents into occupied France. She had to decide who could do the job, and where the best sources of intelligence were. These were literal life-and-death decisions, and all were based in probabilistic thinking.

First, how do you choose a spy? Not everyone can go undercover in high-stress situations and make the contacts necessary to gather intelligence. The result of failure in France in WWII was not getting fired; it was death. What factors of personality and experience show that a person is right for the job? Even today, with advancements in psychology, interrogation, and polygraphs, it’s still a judgment call.

For Vera Atkins in the 1940s, it was very much a process of assigning weight to the various factors and coming up with a probabilistic assessment of who had a decent chance of success. Who spoke French? Who had the confidence? Who was too tied to family? Who had the problem-solving capabilities? From recruitment to deployment, her development of each spy was a series of continually updated, educated estimates.

Getting an intelligence officer ready to go is only half the battle. Where do you send them? If your information was so great that you knew exactly where to go, you probably wouldn’t need an intelligence mission. Choosing a target is another exercise in probabilistic thinking. You need to evaluate the reliability of the information you have and the networks you have set up. Intelligence is not evidence. There is no chain of command or guarantee of authenticity.

The stuff coming out of German-occupied France was at the level of grainy photographs, handwritten notes that passed through many hands on the way back to HQ, and unverifiable wireless messages sent quickly, sometimes sporadically, and with the operator under incredible stress. When deciding what to use, Atkins had to consider the relevancy, quality, and timeliness of the information she had.

She also had to make decisions based not only on what had happened, but on what possibly could happen. Trying to prepare for every eventuality means that spies would never leave home, but they must somehow prepare for a good deal of the unexpected. After all, their jobs are often executed in highly volatile, dynamic environments.

The women and men Atkins sent over to France worked in three primary occupations: organizers were responsible for recruiting locals, developing the network, and identifying sabotage targets; couriers moved information all around the country, connecting people and networks to coordinate activities; and wireless operators had to set up heavy communications equipment, disguise it, get information out of the country, and be ready to move at a moment’s notice.

All of these jobs were dangerous. The full scope of the threats was never completely identifiable. There were so many things that could go wrong, so many possibilities for discovery or betrayal, that it was impossible to plan for them all. The average life expectancy in France for one of Atkins’ wireless operators was six weeks.

Finally, the numbers suggest an asymmetry in the estimation of the probability of success of each individual agent. Of the 400 agents that Atkins sent over to France, 100 were captured and killed. This is not meant to pass judgment on her skills or smarts. Probabilistic thinking can only get you in the ballpark. It doesn’t guarantee 100% success.

There is no doubt that Atkins relied heavily on probabilistic thinking to guide her decisions in the challenging quest to disrupt German operations in France during World War II. It is hard to evaluate the success of an espionage career, because it is a job that comes with a lot of loss. Atkins was extremely successful in that her network conducted valuable sabotage to support the allied cause during the war, but the loss of life was significant.

## Conclusion

Successfully thinking in shades of probability means roughly identifying what matters, coming up with a sense of the odds, doing a check on our assumptions, and then making a decision. We can act with a higher level of certainty in complex, unpredictable situations. We can never know the future with exact precision. Probabilistic thinking is an extremely useful tool to evaluate how the world will most likely look so that we can effectively strategize.

Members can discuss this post on the Learning Community Forum

References:

[1] Taleb, Nassim Nicholas. Antifragile. New York: Random House, 2012.

[2] Bernstein, Peter L. Against the Gods: The Remarkable Story of Risk. New York: John Wiley and Sons, 1996. (This book includes an excellent discussion in Chapter 13 on the idea of the scope of events in the past as relevant to figuring out the probability of events in the future, drawing on the work of Frank Knight and John Maynard Keynes.)

[3] Helm, Sarah. A Life in Secrets: The Story of Vera Atkins and the Lost Agents of SOE. London: Abacus, 2005.

## Earning Your Stripes: My Conversation with Patrick Collison [The Knowledge Project #32]

Subscribe on iTunes | Stitcher | Spotify | Android | Google Play

On this episode of the Knowledge Project, I chat with Patrick Collison, co-founder and CEO of the leading online payment processing company, Stripe. If you’ve purchased anything online recently, there’s a good chance that Stripe facilitated the transaction.

What is now an organization with over a thousand employees, handling billions of dollars of online purchases every year, began as a small side experiment while Patrick and his brother John were in college.

During our conversation, Patrick shares the details of their unlikely journey and some of the hard-earned wisdom he picked up along the way. I hope you have something handy to write with because the nuggets per minute in this episode are off the charts. Patrick was so open and generous with his responses that I’m really excited for you to hear what he has to say.

Here are just a few of the things we cover:

• The biggest (and most valuable) mistakes Patrick made in the early days of Stripe and how they helped him get better
• The characteristics that Patrick looks for in a new hire to fit and contribute to the Stripe company culture
• What compelled him and his brother to move forward with the early concept of Stripe, even though on paper it was doomed to fail from the start
• The gaps Patrick saw in the market that dozens of other processing companies were missing — and how he capitalized on them
• The lessons Patrick learned from scaling Stripe from two employees (him and his brother) to nearly 1,000 today
• How he evaluates the upsides and potential dangers of speculative positions within the company
• How his Irish upbringing influenced his ability to argue and disagree without taking offense (and how we can all be a little more “Irish”)
• The power of finding the right peer group in your social and professional circles and how impactful and influential it can be in determining where you end up.
• The 4 ways Patrick has modified his decision-making process over the last 5 years and how it’s helped him develop as a person and as a business leader (this part alone is worth the listen)
• Patrick’s unique approach to books and how he chooses what he’s going to spend his time reading
• …life in Silicon Valley, Baumol’s cost disease, and so, so much more.

Patrick truly is one of the warmest, most humble, and most down-to-earth people I’ve had the pleasure to speak with, and I thoroughly enjoyed our conversation together. I hope you will too!

## Transcript

Normally, only members of our learning community have access to transcripts; however, we pick one or two a year to make available to everyone. Here's the complete transcript of the interview with Patrick.

If you liked this, check out other episodes of The Knowledge Project.

***

Members can discuss this podcast on the Learning Community Forum

## The Nerds Were Right. Math Makes Life Beautiful.

Math has long been the language of science, engineering, and finance, but can math help you feel calm on a turbulent flight? Get a date? Make better decisions? Here are some heroic ways math shows up in our everyday life.

***

Sounds intellectually sophisticated, doesn’t it? Other than sounding really smart at after-work cocktails, what could be the benefit of understanding where math and physics permeate your life?

Well, what if I told you that math and physics can help you make better decisions by aligning with how the world works? What if I told you that math can help you get a date? Help you solve problems? What if I told you that knowing the basics of math and physics can help make you less afraid and confused? And, perhaps most important, they can help make life more beautiful. Seriously.

If you’ve ever been on a plane when turbulence has hit, you know how unnerving that can be. Most people get freaked out by it, and no matter how much we fly, most of us have a turbulence threshold. When the sides of the plane are shaking, noisily holding themselves together, and the people beside us are white with fear, hands clenched on their armrests, even the calmest of us will ponder the wisdom of jetting 38,000 feet above the ground in a metal tube moving at 1,000 km an hour.

Considering that most planes don’t fall from the sky on account of turbulence isn’t that comforting in the moment. Aren’t there always exceptions to the rule? But what if you understood why, or could explain the physics involved to the freaked-out person beside you? That might help.

In Storm in a Teacup: The Physics of Everyday Life, Helen Czerski spends a chapter describing the gas laws. Covering subjects from the making of popcorn to the deep dives of sperm whales, her amazingly accessible prose describes how the movement of gas is fundamental to the functioning of pretty much everything on earth, including our lungs. She reveals air to be not the static clear thing that we perceive when we bother to look, but rivers of molecules in constant collision, pushing and moving, giving us both storms and cloudless skies.

So when you appreciate air this way, as a continually flowing and changing collection of particles, turbulence is suddenly less scary. Planes are moving through a substance that is far from uniform. Of course, there are going to be pockets of more or less dense air molecules. Of course, they will have minor impacts on the plane as it moves through these slightly different pressure areas. Given that the movement of air can create hurricanes, it’s amazing that most flights are as smooth as they are.

You know what else is really scary? Approaching someone for a date or a job. Rejection sucks. It makes us feel awful, and therefore the threat of it often stops us from taking risks. You know the scene. You’re out at a bar with some friends. A group of potential dates is across the way. Do you risk the cringingly icky feeling of rejection and approach the person you find most attractive, or do you just throw out a lot of eye contact and hope that person approaches you?

Most men go with the former, as difficult as it is. Women will often opt for the latter. We could discuss social conditioning, with the roles that our culture expects each of us to follow. But this post is about math and physics, which actually turn out to be a lot better in providing guidance to optimize our chances of success in the intimidating bar situation.

In The Mathematics of Love, Hannah Fry explains the Gale-Shapley matching algorithm, which essentially proves that “If you put yourself out there, start at the top of the list, and work your way down, you’ll always end up with the best possible person who’ll have you. If you sit around and wait for people to talk to you, you’ll end up with the least bad person who approaches you. Regardless of the type of relationship you’re after, it pays to take the initiative.”

The math may be complicated, but the principle isn’t. Your chances of ending up with what you want — say, the guy with the amazing smile or that lab director job in California — dramatically increase if you make the first move. Fry says, “aim high, and aim frequently. The math says so.” Why argue with that?
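The Gale-Shapley algorithm Fry describes fits in a short sketch. The names and preferences below are invented, and this toy version matches two equal-sized groups with complete preference lists:

```python
# Gale-Shapley stable matching: proposers work down their preference
# lists; reviewers provisionally accept and trade up. The proposing
# side ends up with its best achievable match, which is the math
# behind "it pays to take the initiative." All names are invented.

def gale_shapley(proposer_prefs, reviewer_prefs):
    """Return a stable matching as {proposer: reviewer}."""
    free = list(proposer_prefs)                 # proposers not yet matched
    next_pick = {p: 0 for p in proposer_prefs}  # next reviewer index to try
    engaged = {}                                # reviewer -> proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}

    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_pick[p]]
        next_pick[p] += 1
        if r not in engaged:
            engaged[r] = p                      # r was free: accept
        elif rank[r][p] < rank[r][engaged[r]]:  # r prefers the newcomer
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)                      # rejected: try the next pick
    return {p: r for r, p in engaged.items()}

proposers = {"A": ["X", "Y"], "B": ["Y", "X"]}
reviewers = {"X": ["B", "A"], "Y": ["A", "B"]}
print(gale_shapley(proposers, reviewers))  # each proposer gets their first choice
```

The result is always stable: no proposer-reviewer pair would both rather be with each other than with their assigned match.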

Understanding more physics can also free us from the panic-inducing, heart-pounding fear that we are making the wrong decisions. Not because physics always points out the right decision, but because it can lead us away from this unproductive, subjective, binary thinking. How? By giving us the tools to ask better questions.

Consider this illuminating passage from Czerski:

We live in the middle of the timescales, and sometimes it’s hard to take the rest of time seriously. It’s not just the difference between now and then, it’s the vertigo you get when you think about what “now” actually is. It could be a millionth of a second, or a year. Your perspective is completely different when you’re looking at incredibly fast events or glacially slow ones. But the difference hasn’t got anything to do with how things are changing; it’s just a question of how long they take to get there. And where is “there”? It is equilibrium, a state of balance. Left to itself, nothing will ever shift from this final position because it has no reason to do so. At the end, there are no forces to move anything, because they’re all balanced. The physical world, all of it, only ever has one destination: equilibrium.

How can this change your decision-making process?

You might start to consider whether you are speeding up the goal of equilibrium (working with force) or trying to prevent equilibrium (working against force). One option isn’t necessarily worse than the other. But the second one is significantly more work.

So then you will understand how much effort is going to be required on your part. Love that house with the period Georgian windows? Great. But know that you will have to spend more money fighting to counteract the desire of the molecules on both sides of the window to achieve equilibrium in varying temperatures than you will if you go with the modern bungalow with the double-paned windows.
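The window example can be roughed out with the steady-state conduction formula Q = kAΔT/d. The figures below are illustrative guesses, and the sketch ignores surface air films and radiation, so it overstates absolute losses; the comparison between panes is the point:

```python
# Steady-state heat conduction: Q = k * A * dT / d. Values are
# illustrative, not real glazing specs, and surface effects are
# ignored, so absolute numbers run high.
def heat_loss_watts(k, area_m2, delta_t, thickness_m):
    return k * area_m2 * delta_t / thickness_m

# single pane: 4 mm of glass (k ~ 1.0 W/m.K), 2 m^2, 20 C difference
single = heat_loss_watts(1.0, 2.0, 20, 0.004)
# double pane: dominated by a 12 mm air gap (k ~ 0.026 W/m.K)
double = heat_loss_watts(0.026, 2.0, 20, 0.012)

print(f"{single / double:.0f}x more conductive loss through the single pane")
```

The molecules will reach equilibrium either way; the thin glass just lets them do it much faster, and your heating bill pays the difference.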

And finally, curiosity. Being curious about the world helps us find solutions to problems by bringing new knowledge to bear on old challenges. Math and physics are actually powerful tools for investigating the possibilities of what is out there.

Fry writes that “Mathematics is about abstracting away from reality, not replicating it. And it offers real value in the process. By allowing yourself to view the world from an abstract perspective, you create a language that is uniquely able to capture and describe the patterns and mechanisms that would otherwise remain hidden.”

Physics is very similar. Czerski says, “Seeing what makes the world tick changes your perspective. The world is a mosaic of physical patterns, and once you’re familiar with the basics, you start to see how those patterns fit together.”

Math and physics enhance your curiosity. These subjects allow us to dive into the unknown without being waylaid by charlatans or sidetracked by the impossible. They allow us to tackle the mysteries of life one at a time, opening up the possibilities of the universe.

As Czerski says, “Knowing about some basics bits of physics [and math!] turns the world into a toybox.” A toybox full of powerful and beautiful things.

## Inertia: The Force That Holds the Universe Together

Inertia is the force that holds the universe together. Literally. Without it, things would fall apart. It's also what keeps us locked in destructive habits, and resistant to change.

***

“If it were possible to flick a switch and turn off inertia, the universe would collapse in an instant to a clump of matter,” write Peter and Neal Garneau in In the Grip of the Distant Universe: The Science of Inertia.

### “…death is the destination we all share. No one has ever escaped it. And that is as it should be, because death is very likely the single best invention of life. It’s life’s change agent; it clears out the old to make way for the new … Your time is limited, so don’t waste it living someone else’s life.”

— Steve Jobs

Inertia is the force that holds the universe together. Literally. Without it, matter would lack the electric forces necessary to form its current arrangement. Inertia is counteracted by the heat and kinetic energy produced by moving particles. Subtract it and everything cools to -459.67 degrees Fahrenheit (absolute zero temperature). Yet we know so little about inertia and how to leverage it in our daily lives.

## The Basics

The German astronomer Johannes Kepler (1571–1630) coined the word “inertia.” The etymology of the term is telling. Kepler obtained it from the Latin for “unskillfulness, ignorance; inactivity or idleness.” True to its origin, inertia keeps us in bed on a lazy Sunday morning (we need to apply activation energy to overcome this state).

Inertia refers to resistance to change — in particular, resistance to changes in motion. Inertia may manifest in physical objects or in the minds of people.

We learn the principle of inertia early on in life. We all know that it takes a force to get something moving, to change its direction, or to stop it.

Our intuitive sense of how inertia works enables us to exercise a degree of control over the world around us. Learning to drive offers further lessons. Without external physical forces, a car would keep moving in a straight line in the same direction. It takes a force (energy) to get a car moving and overcome the inertia that kept it still in a parking space. Changing direction to round a corner or make a U-turn requires further energy. Inertia is why a car does not stop the moment the brakes are applied.

The heavier a vehicle is, the harder it is to overcome inertia and make it stop. A light bicycle stops with ease, while an eight-carriage passenger train needs a good mile to halt. Similarly, the faster we run, the longer it takes to stop. Running in a straight line is much easier than twisting through a crowded sidewalk, changing direction to dodge people.
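The stopping-distance intuition can be put in numbers: the kinetic energy the brakes must dissipate grows linearly with mass and with the square of speed. The masses and speeds below are illustrative guesses, not vehicle data:

```python
# Kinetic energy E = (1/2) m v^2: overcoming inertia to stop means
# dissipating all of it. Figures are illustrative, not vehicle specs.
def kinetic_energy_joules(mass_kg: float, speed_ms: float) -> float:
    return 0.5 * mass_kg * speed_ms ** 2

bicycle = kinetic_energy_joules(100, 8)     # rider + bike at ~29 km/h
train = kinetic_energy_joules(400_000, 25)  # eight carriages at 90 km/h

print(f"the train must shed about {train / bicycle:,.0f}x the energy")
```

Tens of thousands of times the energy to shed is why the train needs a mile and the bicycle needs a few metres.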

Any object that can be rotated, such as a wheel, has rotational inertia. This tells us how hard it is to change the object’s speed around the axis. Rotational inertia depends on the mass of the object and its distribution relative to the axis.
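A quick sketch of the point-mass formula, I = Σ m·r², shows why distribution matters as much as mass; the masses and radii below are illustrative:

```python
# Rotational inertia for point masses: I = sum of m * r^2, where r is
# the distance from the axis. Same total mass, different placement.
def rotational_inertia(point_masses):
    """point_masses is a list of (mass_kg, radius_m) pairs."""
    return sum(m * r ** 2 for m, r in point_masses)

hub_weighted = rotational_inertia([(1.0, 0.1)] * 4)  # mass near the axle
rim_weighted = rotational_inertia([(1.0, 0.3)] * 4)  # same mass at the rim

print(f"{rim_weighted / hub_weighted:.1f}")  # 9.0: triple the radius, 9x the inertia
```

Moving the same four kilograms from near the axle out to the rim triples the radius and makes the wheel nine times harder to spin up or slow down.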

The principle of inertia is Newton’s first law of motion, a fundamental principle of physics. Newton summarized it this way: “The vis insita, or innate force of matter, is a power of resisting by which every body, as much as in it lies, endeavors to preserve its present state, whether it be of rest or of moving uniformly forward in a straight line.”

When developing his first law, Newton drew upon the work of Galileo Galilei. In a 1624 letter to Francesco Ingoli, Galileo outlined the principle of inertia:

I tell you that if natural bodies have it from Nature to be moved by any movement, this can only be a circular motion, nor is it possible that Nature has given to any of its integral bodies a propensity to be moved by straight motion. I have many confirmations of this proposition, but for the present one alone suffices, which is this.

I suppose the parts of the universe to be in the best arrangement so that none is out of its place, which is to say that Nature and God have perfectly arranged their structure… Therefore, if the parts of the world are well ordered, the straight motion is superfluous and not natural, and they can only have it when some body is forcibly removed from its natural place, to which it would then return to a straight line.

In 1786, Immanuel Kant elaborated further: “All change of matter has an external cause. (Every body remains in its state of rest or motion in the same direction and with the same velocity, if not compelled by an external cause to forsake this state.) … This mechanical law can only be called the law of inertia (lex inertiæ)….”

Now that we understand the principle, let’s look at some of the ways we can understand it better and apply it to our advantage.

## Decision Making and Cognitive Inertia

We all experience cognitive inertia: the tendency to stick to existing ideas, beliefs, and habits even when they no longer serve us well. Few people are truly able to revise their opinions in light of disconfirmatory information. Instead, we succumb to confirmation bias and seek out verification of existing beliefs. It’s much easier to keep thinking what we’ve always been thinking than to reflect on the chance that we might be wrong and update our views. It takes work to overcome cognitive dissonance, just as it takes effort to stop a car or change its direction.

When the environment changes, clinging to old beliefs can be harmful or even fatal. Whether we fail to perceive the changes or fail to respond to them, the result is the same. Even when it’s obvious to others that we must change, it’s not obvious to us. It’s much easier to see something when you’re not directly involved. If I ask you how fast you’re moving right now, you’d likely say zero, but you’re moving about 67,000 miles an hour around the sun. Perspective is everything, and the perspective that matters is the one that most closely lines up with reality.

### “Sometimes you make up your mind about something without knowing why, and your decision persists by the power of inertia. Every year it gets harder to change.”

— Milan Kundera, The Unbearable Lightness of Being

Cognitive inertia is the reason that changing our habits can be difficult. The default is always the path of least resistance, which is easy to accept and harder to question. Consider your bank, for example. Perhaps you know that there are better options at other banks. Or you have had issues with your bank that took ages to get sorted. Yet very few people actually change their banks, and many of us stick with the account we first opened. After all, moving away from the status quo would require a lot of effort: researching alternatives, transferring balances, closing accounts, etc. And what if something goes wrong? Sounds risky. The switching costs are high, so we stick to the status quo.

Sometimes inertia helps us. After all, questioning everything would be exhausting. But in many cases, it is worthwhile to overcome inertia and set something in motion, or change direction, or halt it.

The important thing about inertia is that it is only the initial push that is difficult. After that, progress tends to be smoother. Ernest Hemingway had a trick for overcoming inertia in his writing. Knowing that getting started was always the hardest part, he chose to finish work each day at a point where he had momentum (rather than when he ran out of ideas). The next day, he could pick up from there. In A Moveable Feast, Hemingway explains:

I always worked until I had something done and I always stopped when I knew what was going to happen next. That way I could be sure of going on the next day.

Later on in the book, he describes another method, which was to write just one sentence:

Do not worry. You have always written before and you will write now. All you have to do is write one true sentence. Write the truest sentence that you know. So, finally I would write one true sentence and go on from there. It was easy then because there was always one true sentence that I knew or had seen or had heard someone say. If I started to write elaborately, or like someone introducing or presenting something, I found that I could cut that scrollwork or ornament out and throw it away and start with the first true simple declarative sentence I had written.

We can learn a lot from Hemingway’s approach to tackling inertia and apply it in areas beyond writing. As with physics, the momentum from getting started can carry us a long way. We just need to muster the required activation energy and get going.

## Status Quo Bias: “When in Doubt, Do Nothing”

Cognitive inertia also manifests in the form of status quo bias. When making decisions, we are rarely rational. Faced with competing options and information, we often opt for the default because it’s easy. Doing something other than what we’re already doing requires mental energy that we would rather preserve. In many areas, this helps us avoid decision fatigue.

Many of us eat the same meals most of the time, wear similar outfits, and follow routines. This tendency usually serves us well. But the status quo is not necessarily the optimum solution. Indeed, it may be outright harmful or at least unhelpful if something has changed in the environment or we want to optimize our use of time.

### “The great enemy of any attempt to change men's habits is inertia. Civilization is limited by inertia.”

— Edward L. Bernays, Propaganda

In a paper entitled “If you like it, does it matter if it's real?,” Felipe De Brigard[1] offers a powerful illustration of status quo bias. It builds on one of the best-known thought experiments in philosophy: Robert Nozick’s “experience machine.” Nozick asked us to imagine that scientists have created a virtual reality machine capable of simulating any pleasurable experience. We are offered the opportunity to plug ourselves in and live out the rest of our lives in permanent but artificial enjoyment. (The experience machine would later inspire the Matrix film series.) Presented with the thought experiment, most people balk and claim they would prefer reality. But what if we flip the narrative? De Brigard suspected that we reject the experience machine not because it is fake, but because it contradicts the status quo, the life we are accustomed to.

In an experiment, he asked participants to imagine being woken by the doorbell on a Saturday morning. A man in black, introducing himself as Mr. Smith, is at the door, claiming to have vital information. Mr. Smith explains that there has been an error: you are in fact connected to an experience machine, and everything you have lived through so far has been a simulation. He offers a choice: stay plugged in, or return to an unknown real life. Unsurprisingly, far fewer people chose to return to reality in this flipped version than refused to plug in under Nozick’s original framing. The aversive element is not the experience machine itself but the departure from the status quo it represents.

## Conclusion

Inertia is a pervasive, problematic force. It's the pull that keeps us clinging to old ways and prevents us from trying new things. But as we have seen, it is also a necessary one. Without it, the universe would collapse. Inertia is what enables us to maintain patterns of functioning, maintain relationships, and get through the day without questioning everything. We can overcome inertia much like Hemingway did — by recognizing its influence and taking the necessary steps to create that all-important initial momentum.

***

Prime Members can discuss this on the Learning Community Forum.

## End Notes

[1] https://www.tandfonline.com/doi/abs/10.1080/09515080903532290

## Go Fast and Break Things: The Difference Between Reversible and Irreversible Decisions

Reversible vs. irreversible decisions. We often think that collecting as much information as possible will help us make the best decisions. Sometimes that’s true, but sometimes it hamstrings our progress. Other times it can be flat-out dangerous.

***

Many of the most successful people adopt simple, versatile decision-making heuristics to remove the need for deliberation in particular situations.

One heuristic might be defaulting to saying no, as Steve Jobs did. Or saying no to any decision that requires a calculator or computer, as Warren Buffett does. Or it might mean reasoning from first principles, as Elon Musk does. Jeff Bezos, the founder of Amazon.com, has another one we can add to our toolbox. He asks himself, is this a reversible or irreversible decision?

If a decision is reversible, we can make it fast and without perfect information. If a decision is irreversible, we had better slow down the decision-making process and ensure that we consider ample information and understand the problem as thoroughly as we can.

Bezos used this heuristic to make the decision to found Amazon. He recognized that if Amazon failed, he could return to his prior job. He would still have learned a lot and would not regret trying. The decision was reversible, so he took a risk. The heuristic served him well and continues to pay off when he makes decisions.

## Decisions Amidst Uncertainty

Let’s say you decide to try a new restaurant after reading a review online. Having never been there before, you cannot know if the food will be good or if the atmosphere will be dreary. But you use the incomplete information from the review to make a decision, recognizing that it’s not a big deal if you don’t like the restaurant.

In other situations, the uncertainty is a little riskier. You might decide to take a particular job, not knowing what the company culture is like or how you will feel about the work after the honeymoon period ends.

Reversible decisions can be made fast and without obsessing over finding complete information. We can be prepared to extract wisdom from the experience with little cost if the decision doesn’t work out. Frequently, it’s not worth the time and energy required to gather more information in search of flawless answers. Extra research might make your decision 5 percent better, but in the meantime you might miss the opportunity.

Treating decisions as reversible is not an excuse to act recklessly or stay ill-informed; rather, it reflects a belief that we should adapt our decision-making frameworks to the types of decisions we are making. Reversible decisions don’t need to be made the same way as irreversible decisions.

The ability to make decisions fast is a competitive advantage. One major advantage that start-ups have is that they can move with velocity, whereas established incumbents typically move with speed. The difference between the two is meaningful and often means the difference between success and failure.

Speed is measured as distance over time. If we’re headed from New York to LA on an airplane and we take off from JFK and circle around New York for three hours, we’re moving with a lot of speed, but we’re not getting anywhere. Speed doesn’t care if you are moving toward your goals or not. Velocity, on the other hand, measures displacement over time. To have velocity, you need to be moving toward your goal.
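The distinction comes down to simple arithmetic. A minimal sketch in Python (the 500 mph cruising speed and 1,500-mile circling distance are illustrative assumptions, not figures from the text):

```python
# Speed vs. velocity, using the airplane example above.

def speed(distance_traveled_miles, hours):
    """Speed: total distance covered per unit time, regardless of direction."""
    return distance_traveled_miles / hours

def velocity(displacement_miles, hours):
    """Velocity: net displacement toward the goal per unit time."""
    return displacement_miles / hours

# A plane circling New York for 3 hours at 500 mph covers 1,500 miles
# of distance but makes zero progress toward Los Angeles.
print(speed(1500, 3))   # → 500.0 (lots of motion)
print(velocity(0, 3))   # → 0.0 (no progress)
```

The same numbers that show plenty of speed show zero velocity, which is the whole point of the metaphor.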

This heuristic explains why start-ups making quick decisions have an advantage over incumbents. That advantage is magnified by environmental factors, such as the pace of change. The faster the pace of environmental change, the more an advantage will accrue to people making quick decisions because those people can learn faster.

Decisions provide us with data, which can then make our future decisions better. The faster we can cycle through the OODA loop (observe, orient, decide, act), the better. This framework isn’t a one-off to apply to certain situations; it is a heuristic that needs to be an integral part of a decision-making toolkit.

With practice, we also get better at recognizing bad decisions and pivoting, rather than sticking with past choices due to the sunk costs fallacy. Equally important, we can stop viewing mistakes or small failures as disastrous and view them as pure information which will inform future decisions.

### “A good plan, violently executed now, is better than a perfect plan next week.”

— General George Patton

Bezos compares decisions to doors. Reversible decisions are doors that open both ways. Irreversible decisions are doors that allow passage in only one direction; if you walk through, you are stuck there. Most decisions are the former and can be reversed (even though we can never recover the invested time and resources). Going through a reversible door gives us information: we know what’s on the other side.

In his shareholder letter, Bezos writes[1]:

Some decisions are consequential and irreversible or nearly irreversible – one-way doors – and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don’t like what you see on the other side, you can’t get back to where you were before. We can call these Type 1 decisions. But most decisions aren’t like that – they are changeable, reversible – they’re two-way doors. If you’ve made a suboptimal Type 2 decision, you don’t have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.

As organizations get larger, there seems to be a tendency to use the heavy-weight Type 1 decision-making process on most decisions, including many Type 2 decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention. We’ll have to figure out how to fight that tendency.

Bezos gives the example of the launch of one-hour delivery to those willing to pay extra. This service launched less than four months after the idea was first developed. In 111 days, the team “built a customer-facing app, secured a location for an urban warehouse, determined which 25,000 items to sell, got those items stocked, recruited and onboarded new staff, tested, iterated, designed new software for internal use – both a warehouse management system and a driver-facing app – and launched in time for the holidays.”

As further guidance, Bezos considers 70% certainty to be the cut-off point where it is appropriate to make a decision. That means acting once we have 70% of the required information, instead of waiting longer. Making a decision at 70% certainty and then course-correcting is a lot more effective than waiting for 90% certainty.

In Blink: The Power of Thinking Without Thinking, Malcolm Gladwell explains why decision-making under uncertainty can be so effective. We usually assume that more information leads to better decisions — if a doctor proposes additional tests, we tend to believe they will lead to a better outcome. Gladwell disagrees: “In fact, you need to know very little to find the underlying signature of a complex phenomenon. All you need is evidence of the ECG, blood pressure, fluid in the lungs, and an unstable angina. That’s a radical statement.”

In medicine, as in many areas, more information does not necessarily ensure improved outcomes. To illustrate this, Gladwell gives the example of a man arriving at a hospital with intermittent chest pains. His vital signs show no risk factors, yet his lifestyle does and he had heart surgery two years earlier. If a doctor looks at all the available information, it may seem that the man needs admitting to the hospital. But the additional factors, beyond the vital signs, are not important in the short term. In the long run, he is at serious risk of developing heart disease. Gladwell writes,

… the role of those other factors is so small in determining what is happening to the man right now that an accurate diagnosis can be made without them. In fact, … that extra information is more than useless. It’s harmful. It confuses the issues. What screws up doctors when they are trying to predict heart attacks is that they take too much information into account.

We can all learn from Bezos’s approach, which has helped him to build an enormous company while retaining the tempo of a start-up. Bezos uses his heuristic to fight the stasis that sets in within many large organizations. It is about being effective, not about following the norm of slow decisions.

Once you recognize that reversible decisions really are reversible, you can start to see them as opportunities to increase the pace of your learning. At a corporate level, allowing employees to make and learn from reversible decisions helps you move at the pace of a start-up. After all, if someone else is merely moving with speed, you’re going to pass them when you move with velocity.

***


## End Notes

[1] https://www.sec.gov/Archives/edgar/data/1018724/000119312516530910/d168744dex991.htm

## Learning How to Learn: My Conversation With Barbara Oakley

In this interview, Barbara Oakley, eight-time author and creator of Learning How to Learn, an online course with over a million enrolled students, shares the science and strategies to learn more quickly, overcome procrastination, and get better at practically anything.

***

Just when I start to think I’m using my time well and getting a lot done in my life, I meet someone like Barbara Oakley.

Barbara is a true polymath. She was a captain in the U.S. Army, a Russian translator on Soviet trawlers, a radio operator at the South Pole, an engineer, a university professor, a researcher, and the author of eight books.

Oh, and she is also the creator and instructor of Learning How to Learn, the most popular Massive Open Online Course (MOOC) ever(!), with over one million enrolled students.

In this fascinating interview, we cover many aspects of learning, including how to make it stick so we remember more and forget less, how to be more efficient so we learn more quickly, and how to remove the barriers that get in the way of effective learning.

Specifically, Barbara covers:

• How she changed her brain from hating math and science to loving it so much she now teaches engineering to college students
• The two modes of your brain and how that impacts what and how you learn
• Why backing off can sometimes be the best thing you can do when learning something new
• How to “chunk” your learning so new knowledge is woven into prior knowledge making it easily accessible
• The best ways to develop new patterns of learning in our brains
• How to practice a skill so you can blast through plateaus and improve more quickly
• Her favorite tactic for dealing with procrastination so you can spend more time learning
• The activities she recommends that rapidly increase neural connections like fertilizer on the brain
• Whether memorization still has a place in learning, or is simply a barrier to true understanding
• The truth about “learning types” and how identifying as a visual or auditory learner might be setting you up for failure

…and a whole lot more.

If you want to be the most efficient learner you can be, and have more fun doing it, you won’t want to miss this discussion.

Listen

Transcript
An edited copy of this transcript is available to members of our learning community or for purchase separately (\$7).

If you liked this, check out all the episodes of the knowledge project.

***
