Blog

Winning the Battle, Losing the War

“War ends at the moment when peace permanently wins out. Not when the articles of surrender are signed or the last shot is fired, but when the last shout of a sidewalk battle fades, when the next generation starts to wonder whether the whole thing ever really happened.”

— Lee Sandlin

The Basics

In a classic American folktale, a stubborn railroad worker decides to prove his skill by competing with a drilling machine. John Henry, enraged to hear that machines might take his job, claims that his digging abilities are superior. A contest is arranged. He goes head to head with the new drill. The result is impressive — the drill breaks after three meters, whereas John Henry makes it to four meters in the same amount of time. As the other workers begin to celebrate his victory, he collapses and dies of exhaustion.

John Henry might have been victorious against the drill, but that small win was meaningless in the face of his subsequent death. In short, we can say that he won the battle but lost the war.

Winning a battle but losing the war is a military mental model that refers to achieving a minor victory that ultimately results in a larger defeat, rendering the victory empty or hollow. It can also refer to gaining a small tactical advantage that corresponds to a wider disadvantage.

One particular type of hollow victory is the Pyrrhic victory, which Wikipedia defines as a victory that “inflicts such a devastating toll on the victor that it is tantamount to defeat.” That devastating toll can come in the form of an enormous number of casualties, the wasting of resources, high financial costs, damage to land, and other losses. Or, in that folktale, the death of the railroad worker.

Another hollow victory occurs when you engage in a conventional war and prompt a response from an opponent who has significantly more firepower than you do. The attack on Pearl Harbor was considered a victory for the Japanese. However, by provoking an opponent with far superior forces, they set in motion something they could not control.

While the concept of a hollow victory arises in military contexts, understanding the broader principle allows you to apply it to other areas of life. It can often be helpful in the context of non-zero-sum situations, in which both parties suffer even if one has technically succeeded.

We have won a battle but lost a war whenever we achieve some minor (or even major) aim that leads to wider loss. We might win an argument with a partner over a small infraction, only to come across as hostile and damage the relationship. We may achieve a short-term professional goal by working overtime, only to harm our health and reduce our long-term productivity. We might pursue a particular career for the sake of money, but feel unfulfilled and miserable in the process.

“Grand strategy is the art of looking beyond the present battle and calculating ahead. It requires that you focus on your ultimate goal and plot to reach it.”

— Robert Greene, The 33 Strategies of War

The Original Pyrrhic Victory

The term “Pyrrhic victory” is named after the Greek king Pyrrhus of Epirus. In 280 and 279 BC, Pyrrhus’s army defeated the Romans in two major battles. Striding into Italy with 25,000 men and 20 elephants — a new sight for the Romans — Pyrrhus was confident that he could extend his empire. However, the number of lives lost in the process made the victories meaningless. According to Plutarch, Pyrrhus told a friend that one more such victory against the Romans would “utterly undo him.”

Pyrrhus did not have access to anywhere near enough potential recruits to replenish his army. He had, after all, lost most of his men, including the majority of his friends and commanders. Meanwhile, the Romans were only temporarily defeated. They could replace their lost soldiers with relative ease. Even worse, the two losses had enraged the Romans and made them more willing to continue fighting. The chastened king gathered his remaining troops and sailed back to Greece.

The Battle of Bunker Hill

A classic example of a Pyrrhic victory is the Battle of Bunker Hill, fought on June 17th, 1775, during the American Revolutionary War. Colonial and British troops grappled for control of the strategically advantageous Bunker Hill in Massachusetts.

Four days earlier, on June 13th, the colonial army received intelligence that the British were planning to take control of the hills around Boston, which would give them greater authority over the nearby harbor. About 1,200 colonial soldiers situated themselves on the hills, while others spread throughout the surrounding area. The British army, realizing this, mounted an attack.

The British army succeeded in its aim after the colonial army ran out of ammunition. Yet the Battle of Bunker Hill was anything but a true victory, because the British lost a substantial number of men, including 100 of their officers. This left the British army depleted (having sustained 1,000 casualties), low on resources, and without proper leadership.

This Pyrrhic victory was unexpected; the British troops had far more experience and outnumbered the colonial army by almost 2:1. The Battle of Bunker Hill sapped British morale but was somewhat motivating for the colonials, who had sustained fewer than half as many casualties.

In The American Revolutionary War and the War of 1812, the situation is described this way:

… the British were stopped by heavy fire from the colonial troops barricaded behind rail fences that had been stuffed with grass, hay, and brush. On the second or third advance, however, the attackers carried the redoubt and forced the surviving defenders, mostly exhausted and weaponless, to flee. …

If the British had followed this victory with an attack on Dorchester Heights to the South of Boston, it might have been worth the heavy cost. But, presumably, because of their severe losses and the fighting spirit displayed by the rebels, the British commanders abandoned or indefinitely postponed such a plan. Consequently, after Gen. George Washington took colonial command two weeks later, enough heavy guns and ammunition had been collected that he was able in March 1776 to seize and fortify Dorchester Heights and compel the British to evacuate Boston.… Also, the heavy losses inflicted on the British in the Battle of Bunker Hill bolstered the Americans' confidence and showed that the relatively inexperienced colonists could indeed fight on par with the mighty redcoats of the British army.

In The War of the American Revolution, Robert W. Coakley writes of the impact of Bunker Hill:

Bunker Hill was a Pyrrhic victory, its strategic effect practically nil since the two armies remained in virtually the same position they had held before. Its consequences, nevertheless, cannot be ignored. A force of farmers and townsmen, fresh from their fields and shops, with hardly a semblance of orthodox military organization, had met and fought on equal terms with a professional British army. …[N]ever again would British commanders lightly attempt such an assault on Americans in fortified positions.

“I wish we could sell them another hill at the same price.”

— Nathanael Greene, general in the colonial army

The Battle of Borodino

Fought on September 7, 1812, the Battle of Borodino was the bloodiest day of the Napoleonic Wars. The French army, led by Napoleon, was invading Russia. Roughly a quarter of a million soldiers fought at the Battle of Borodino, with more than 70,000 casualties. Although the French army succeeded in forcing the Russians into retreat, their victory was scarcely a triumphant one. Both sides ended up depleted and low on morale without having achieved their respective aims.

The Battle of Borodino is considered a Pyrrhic victory because the French army destroyed itself in the process of capturing Moscow. The Russians had no desire to surrender, and the conflict was far more costly for the French than for their opponent.

By the time Napoleon's men began their weary journey back to France, they had little reason to consider themselves victorious. The Battle of Borodino served no clear purpose, as no lasting tactical advantage was gained. Infighting broke out, and Napoleon eventually lost both the war and his role as leader of France.

History has shown again and again that attempting to take over Russia is rarely a good idea. Napoleon was at a serious disadvantage to begin with. The country's size and climate made tactical movements difficult. Bringing supplies in proved nearly impossible, and the French soldiers easily succumbed to cold, starvation, and infectious diseases. Even as they hastened to retreat, the Russian army recovered its lost men quickly and continued to whittle away at the remaining French soldiers. Of the original 95,000 French troops, a mere 23,000 returned from Russia (exact figures are impossible to ascertain due to each side's exaggerating or downplaying the losses). The Russian approach to defeating the French is best described as attrition warfare – a stubborn, unending wearing down. Napoleon might have won the Battle of Borodino, but in the process he lost everything he had built during his time as a leader and his army was crushed.

Something we can note from both Borodino and Bunker Hill is that Pyrrhic victories often serve as propaganda in the long term – for the losing side, not for the victors. As the adage goes, history is written by the winners; the saying “to the victor belong the spoils” expresses the same idea. Except that neither quite rings true when it comes to Pyrrhic victories, which tend to be a source of shame for the winning side. Borodino, for instance, became an emblem of patriotism and pride for the Russians.

“[I]t is much better to lose a battle and win the war than to win a battle and lose the war. Resolve to keep your eyes on the big ball.”

— David J. Schwartz, The Magic of Thinking Big

Hollow Victories in Business

A company has won a Pyrrhic victory when it leverages all available resources to take over another company, only to be ruined by the financial costs and the loss of key employees. Businesses can also ruin themselves over lawsuits that drain resources, distract managers, and get negative attention in the press.

American Apparel is one instance of a company ending up bankrupt, partially as a result of mounting legal fees. The exact causes of the company’s downfall are not altogether understood, though a number of lawsuits are believed to have been a major factor. It began with a series of sexual harassment lawsuits against founder Dov Charney.

American Apparel’s board of directors fired Charney after the growing fees associated with defending him began harming the company’s finances (as well as its reputation). Charney responded by attempting a hostile takeover, as unwilling to surrender control of the company he founded as Czar Alexander was to surrender Moscow to Napoleon. More lawsuits followed as American Apparel shareholders and board members seemingly sued everyone in sight and were sued by suppliers, by more than 200 former employees, and by patent holders.

As everyone involved focused on winning their respective battles, the company ended up filing for bankruptcy and losing the war. In short, everyone suffered substantial losses, from Charney himself to the many factory workers who were made redundant.

Hollow Victories in Court Cases

Hollow victories are common in the legal system. For example, consider the following scenarios:

  • A divorced couple engages in a lengthy, tedious legal battle over the custody of their children. Eventually, they are given shared custody. Yet the tense confrontations associated with the court case have alienated the children from their parents and removed tens of thousands of dollars from the collective purse.
  • A man unknowingly plants trees that slightly cross over onto his neighbor's property. The man tries to compromise, offering to trim the trees or to let the neighbor cross onto his property in exchange for leaving the trees up. No dice; the neighbor sticks to his guns. Unable to resolve the matter, the neighbor sues the man and wins, forcing him to cut down the trees and pay all the legal expenses. While the neighbor has technically won the case, he now has an enemy next door, and enemies up and down the street who think he's a Scrooge.
  • A freelance illustrator discovers that her work has been used without permission or payment by a non-profit group that printed her designs on T-shirts and sold them, with the proceeds going to charity. The illustrator sues for copyright infringement and wins, but costs herself and the charity substantial legal fees. Unhappy that the illustrator sued a charity instead of making a compromise, the public boycotts her and she has trouble selling her future work.
  • A well-known business magnate discovers that his children are suing him for the release of trust fund money they believe they are owed. He counter-sues, arguing publicly that his children are greedy and don't deserve the money. He wins the case on a legal technicality, but both his public image and his relationships with his children are tarnished. He's kept his money, but not his happiness.

A notable instance of a legal Pyrrhic victory was the decade-long McLibel case, the longest-running trial in English legal history. The fast-food chain McDonald's sued two environmental activists, Helen Steel and David Morris, over leaflets they had distributed. McDonald's claimed the contents of the leaflets were false. Steel and Morris claimed they were true.

Court hearings found that both parties were partly wrong – some of the claims were verifiable; others were fabricated. After ten years of tedious litigation and negative media attention, McDonald's won the case, but the victory was far from worthwhile. The (uncollected) £40,000 in damages the company was awarded was paltry compared to the millions the legal battle had cost it. Meanwhile, Steel and Morris chose to represent themselves and spent only £30,000 (both had limited income and did not receive Legal Aid).

Although McDonald's did win the case, it came at enormous cost, both financial and reputational. The case attracted a great deal of media attention as a result of its David-vs.-Goliath nature. The idea of two unemployed activists taking on an international corporation had an undeniable appeal, and the portrayals of McDonald's were almost unanimously negative. The case did far more harm to the company's reputation than a few leaflets distributed in London ever could have. At one point, McDonald's attempted to placate Steel and Morris by offering to donate money to a charity of their choice, provided that they stopped criticizing the company publicly and did so only “in private with friends.” The pair responded that they would accept the terms if McDonald's halted all advertising and had staff recommend the chain only “in private with friends.”

“Do not be ashamed to make a temporary withdrawal from the field if you see that your enemy is stronger than you; it is not winning or losing a single battle that matters, but how the war ends.”

— Paulo Coelho, Warrior of the Light

Hollow Victories in Politics

Theresa May’s General Election win is a perfect example of a political Pyrrhic victory, as is the Brexit vote the year prior.

Much like Napoleon at Borodino, David Cameron achieved his aims, only to lose his role as a leader in the process. And much like the French soldiers who defeated the Russians at Borodino, only to find themselves limping home through snow and ice, the triumphant Leave voters now face a drop in wages and general quality of life, making the fulfilment of their desire to leave the European Union seem somewhat hollow. Elderly British people (the majority of whom voted to leave) must deal with shrinking pensions and potentially worse healthcare due to reduced funding. Leave voters won the battle, but at a cost that remains unknown.

Even before the shock of the Brexit vote had worn off, Britain saw a second dramatic Pyrrhic victory: Theresa May’s train-wreck General Election. Amid soaring inflation, May called the election aiming to win a decisive majority and secure her leadership. Although she was not voted out of office, her failure to retain a majority only served to weaken her position. Continued economic decline has weakened it further.

“Victorious warriors win first and then go to war, while defeated warriors go to war first and then seek to win.”

— Sun Tzu, The Art of War

How We Can Avoid Hollow Victories in Our Lives

One important lesson we can learn from hollow victories is the value of focusing on the bigger picture, rather than chasing smaller goals.

One way to avoid winning a battle but losing the war is to think in terms of opportunity costs. Charlie Munger has said that “All intelligent people use opportunity cost to make decisions”; maybe what he should have said is that “All intelligent people should use opportunity cost to make decisions.”

Consider a businessman, well versed in opportunity cost economics, who chooses to work late every night instead of spending time with his family, whom he then alienates and eventually becomes distanced from. The opportunity cost of the time spent at the office between 7-10 pm wasn't just TV, or dinner, or any other thing he would have done were he at home. It was a good long-term relationship with his wife and children! Talk about opportunity costs! Putting in the late hours may have helped him with the “battle” of business, but what about the “war” of life? Unfortunately, many people realize too late that they paid too high a price for their achievements or victories.

Hollow victories can occur as a result of a person or party focusing on a single goal – winning a lawsuit, capturing a hill, winning an election – while ignoring the wider implications. It's like looking at the universe by peering into one small corner of space with a telescope.

As was noted earlier, this mental model isn't relevant just in military, legal, or political contexts; hollow victories can occur in every part of our lives, including relationships, health, personal development, and careers. Understanding military tactics and concepts can teach us a great deal about being effective leaders, achieving our goals, maintaining relationships, and more.

It's obvious that we should avoid Pyrrhic victories wherever possible, but how do we do that? In spite of situations differing vastly, there are some points to keep in mind:

  • Zoom out to see the big picture. By stepping back when we get too focused on minutiae, we can pay more attention to the war, not just the battle. Imagine that you are at the gym when you feel a sharp pain in your leg. You ignore it and finish the workout, despite the pain increasing with each rep. Upon visiting a doctor, you find you have a serious injury and will be unable to exercise until it heals. If you had focused on the bigger picture, you would have stopped the workout, preventing a minor injury from getting worse, and been able to get back to your workouts sooner.
  • Keep in mind core principles and focus on overarching goals. When Napoleon sacrificed thousands of his men in a bid to take control of Moscow, he forgot his core role as the leader of the French people. His own country should have been the priority, but he chose to chase more power and ended up losing everything. When we risk something vital – our health, happiness, or relationships – we run the risk of a Pyrrhic victory.
  • Recognize that we don't have to lose our minds just because everyone else has. As Warren Buffett once said, “be fearful when others are greedy and greedy when others are fearful.” Or, as Nathan Rothschild wrote, “great fortunes are made when cannonballs fall in the harbor, not when violins play in the ballroom.” When others are thrashing to win a battle, we would do well to pay attention to the war. What can we notice that they ignore? If we can't (or don't want to) resolve the turmoil, how can we benefit from it?
  • Recognize when to give up. We cannot win every battle we engage in, but we can sometimes win the war. In some situations, the optimum choice is to withdraw or surrender to avoid irreparable problems. The goal is not the quick boost from a short-term victory; it is the valuable satisfaction of long-term success.
  • Remember that underdogs can win – or at least put up a good fight. Remember what the British learned the hard way at Bunker Hill, and what it cost McDonald's to win the McLibel case. Even if we think we can succeed against a seemingly weaker party, that victory can come at a very high cost.

***


Making the Most of Second Chances

We all get lucky. Once in a while we do something really stupid that could have resulted in death, but didn’t. Just the other day, I saw someone who was texting walk out into oncoming traffic, narrowly avoiding the car whose driver slammed on the brakes. As the adrenaline starts to dissipate, we realize that we don’t ever want to be in that situation again. What can we do? We can make the most of our second chances by building margins of safety into our lives.

What is a margin of safety and where can I get one?

The concept is a cornerstone of engineering. Engineers design systems to withstand significantly more emergencies, unexpected loads, misuse, or degradation than would normally be expected.

Take a bridge. You are designing a bridge to cross just under two hundred feet of river. The bridge has two lanes going in each direction. Given the average car size, the bridge could reasonably carry 50 to 60 cars at a time. At 4,000 pounds per car, your bridge needs to be able to carry at least 240,000 pounds of weight; otherwise, don’t bother building it. So that’s the minimum consideration for safety — but only the worst engineer would stop there.

Can anyone walk across your bridge? Can anyone park their car on the shoulder? What if cars get heavier? What if 20 cement trucks are on the bridge at the same time? How does the climate affect the integrity of your materials over time? You don’t want the weight capacity of the bridge to ever come close to the actual load. Otherwise, one seagull decides to land on the railing and the whole structure collapses.

Considering these questions and looking at the possibilities is how you get the right information to adjust your specs and build in a margin of safety. That margin is the difference between what your system is expected to withstand and what it actually can withstand. So when you are designing a bridge, the first step is to figure out the maximum load it should ever see (bumper-to-bumper vehicles, hordes of tourist groups, and birds perched wing to wing), and then you design for at least double that load.
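
To make the arithmetic concrete, here is a minimal Python sketch using the figures from the example above. The safety factor of 2 mirrors the “design for at least double that load” rule of thumb; the variable names and print-out are illustrative, not an engineering standard.

```python
# A rough sketch of the bridge example, using the numbers from the text.
CAR_WEIGHT_LB = 4_000   # assumed average vehicle weight
MAX_CARS = 60           # upper end of the 50-60 cars estimate
SAFETY_FACTOR = 2.0     # "design for at least double that load"

expected_load = CAR_WEIGHT_LB * MAX_CARS      # 240,000 lb minimum
design_load = expected_load * SAFETY_FACTOR   # 480,000 lb target

print(f"Expected load: {expected_load:,} lb")
print(f"Design load:   {design_load:,.0f} lb")
print(f"Margin:        {design_load - expected_load:,.0f} lb")
```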

Knowing that the infrastructure was designed to withstand significantly more than the anticipated maximum load makes us happy when we are on bridges, or in airplanes, or jumping on the bed in our second-story bedroom. We feel confident that many smart people have conspired to make these activities as safe as possible. We’re so sure of this that it almost never crosses our minds. Sure, occasional accidents happen. But it is remarkably reassuring that these structures can withstand quite a bit of the unexpected.

So how do we make ourselves a little more resilient? Less susceptible to the vagaries of change? Turns out that engineers aren’t the only ones obsessed with building in margins of safety. Spies are pretty good at it, too, and we can learn a lot from them.

Operation Kronstadt, by Harry Ferguson, chronicles the remarkable story of Paul Dukes, the only British secret agent working in Russia in 1919, and the equally amazing adventures of the small team that was sent in to rescue him.

Paul Dukes was not an experienced spy. He was actually a pianist. It was his deep love of Russian culture that led him to approach his government and volunteer for the mission of collecting information on Bolshevik activities in St. Petersburg. As Ferguson writes, “Paul had no military experience, let alone any experience of intelligence work and yet they were going to send him back into one of the toughest espionage environments in the world.”

However, MI6, the part of British Intelligence that Paul worked for, wasn’t exactly the powerful and well-prepared agency that it’s portrayed as today. Consider this description by Ferguson: “having dragged Paul out of Russia, MI6 did not appear to have given much thought to how he should get back or how he would survive once he got there: ‘As to the means whereby you gain access to the country, under what cover you will live there, and how you will send out reports, we shall leave it to you, being best informed as to the conditions’.”

So off went Paul into Russia, not as a musician but as a spy. No training, no gadgets, no emergency network, no safe houses. Just a bunch of money and sentiments of ‘good luck’. So it is all the more amazing that Paul Dukes turned out to be an excellent spy. After reading his story, I think the primary reason for this is that he learned extremely quickly from his experiences. One of the things he learned quickly was how to build margins of safety into his tradecraft.

There is no doubt that the prospect of death wakes us up. We don’t often think about how dangerous something can be until we almost die doing it. Then, thanks to our big brains that let us learn from experience, we adapt. We recognize that if we don’t, we might not be so lucky next time. And no one wants to rely on luck as a survival strategy.

This is where margins of safety come in. We build them to reduce the precariousness of chance.

Imagine you are in St. Petersburg in 1919. What you have going for you is that you speak the language, understand the culture, and know the streets. Your major problem is that you have no idea how to start this spying thing. How do you get contacts and build a network in a city that is under psychological siege? The few names you have been given come from dubious sources at the border, and the people attached to those names may have been compromised, arrested, or both. You have nowhere to sleep at night, and although you have some money, it can’t buy anything, not even food, because there is nothing for sale. The whole country is on rations.

Not to mention, if by some miracle you actually get a few good contacts who give you useful information, how do you get it home? There are no cell phones or satellites. Your passport is fake and won’t hold up to any intense scrutiny, yet all your intelligence has to be taken out by hand from a country that has sealed its borders. And it’s 1919. You can’t hop on a plane or drive a car. Train or foot are your only options.

This is what Paul Dukes faced. Daunting to be sure. Which is why his ultimate success reads like the improbable plot of a Hollywood movie. Although he made mistakes, he learned from them as they were happening.

Consider this tense moment as described by Ferguson:

The doorbell in the flat rang loudly and Paul awoke with a start.

He had slept late. Stepanova had kindly allowed him to sleep in one of the spare beds and she had even found him an old pair of Ivan's pyjamas. There were no sheets, but there were plenty of blankets and Paul had been cosy and warm. Now it was 7.45 a.m., and here he was half-asleep and without his clothes. Suppose it was the Cheka [Russian Bolshevik Police] at the door? In a panic he realised that he had no idea what to do. The windows of the apartment were too high for him to jump from and like a fool he had chosen a hiding place with no other exits. … He was reduced to waiting nervously as he stood in Ivan's pyjamas whilst Stepanova shuffled to the door to find out who it was. As he stood there with his stomach in knots, Paul swore that he would never again sleep in a place from which there was only one exit.

One exit was good enough for normal, anticipated use. But one exit wouldn't allow him to adapt to the unexpected, the unusual load produced by the appearance of the state police. So from then on, his sleeping accommodations were chosen with a minimum margin of safety of two exits.

This type of thinking dictated a lot of his actions. He never stayed at the same house more than two nights in a row, and often moved after just one night. He arranged for the occupants to signal him, such as by placing a plant in the window, if they believed the house was unsafe. He siloed knowledge as much as he could, never letting the occupants of one safe house know about the others. Furthermore, as Ferguson writes:

He also arranged a back-up plan in case the Cheka finally got him. He had to pick one trustworthy agent … and soon Paul began entrusting her with all the details of his movements and told her at which safe house he would be sleeping so that if he did disappear MI6 would have a better idea of who had betrayed him. He even used her as part of his courier service and she hid all his reports in the float while he was waiting for someone who could take them out of the country.

Admittedly this plan didn’t provide a large margin of safety, but at least he wasn’t so arrogant as to assume he was never going to get captured.

Large margins of safety are not always possible. Sometimes they are too expensive. Sometimes they are not available. Dukes liked to have an extra identity handy should some of his dubious contacts turn him in, but this wasn’t always an option in a country that changed identity papers frequently. Most important, though, he was aware that planning for the unexpected was his best chance of staying alive, even if he couldn’t always put in place as large a margin of safety as he would have liked. And survival was a daily challenge, not something to take for granted.

The disaster at the Fukushima nuclear power plant taught us a lot about being cavalier regarding margins of safety. The unexpected is just that: not anticipated. That doesn’t mean it is impossible or even improbable. The unexpected is not the worst thing that has happened before. It is the worst thing, given realistic parameters such as the laws of physics, that could happen.

In the Fukushima case, the margin of safety was good enough to deal with the weather of the recent past. But preparing for the worst we have seen is not the same as preparing for the worst.

The Fukushima power plant was overwhelmed by a tsunami, creating a nuclear disaster on par with Chernobyl. Given the seismic activity in the area, although a tsunami wasn’t predictable, it was certainly possible. The plant could have been designed with a margin of safety to better withstand a tsunami. It wasn’t. Why? Because redundancy is expensive. That’s the trade-off. You are safer, but it costs more money.

Sometimes, when the stakes are low, we decide the trade-off isn't worth it; maybe we wouldn't pay to insure an inexpensive wedding ring. The consequences of a lost ring are some emotional pain and the cost of a new one. The consequences of a nuclear accident are exponentially higher: lives are lost and the environment is corrupted. You would think, then, that power plants wouldn't cut it so close. In the Fukushima case, the world will be dealing with the negative effects for a long time.

What decisions would you make differently if you were factoring safety margins into your life? To be fair, you can’t put them everywhere. Otherwise, your life might be all margin and no living. But you can identify the maximum load your life is currently designed to withstand and figure out how close to it you are coming.

For example, having your expenses equal 100 percent of your income allows you no flexibility in the load you have to carry. A job loss, a bad flood in your neighborhood, or a serious illness are all unexpected events that would change the load your financial structure has to support. Without a margin of safety, such as a healthy savings or investment account, you could find your structure collapsing, compromising the roof over your head.

The idea is to identify the unlikely but possible risks to your survival and build margins of safety that will allow you to continue your lifestyle should these things come to pass. That way, a missed paycheck will be easily absorbed instead of jeopardizing your ability to put food on the table.
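
As a toy illustration of that financial margin, here is a minimal Python sketch computing how many months of a zero-income stretch your savings could absorb. All the dollar figures are hypothetical.

```python
# A crude financial margin of safety: months of expenses your savings
# can absorb if income drops to zero. All figures are hypothetical.

def months_of_runway(savings: float, monthly_expenses: float) -> float:
    return savings / monthly_expenses

# Expenses equal to 100% of income, nothing saved: no margin at all.
print(months_of_runway(savings=0, monthly_expenses=5_000))       # 0.0

# Spend less than you earn and bank the difference: a real margin.
print(months_of_runway(savings=24_000, monthly_expenses=4_000))  # 6.0
```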

To figure out where else you should build margins of safety into your life, think of the times you’ve been terrified and desperate. Those might be good places to start learning from experience and making the most of your second chances.

[Episode 28] The Return of a Decision Making Jedi: My Discussion With Michael Mauboussin

Guess who's back? Back again?
Michael Mauboussin is back, tell a friend.

Mauboussin was actually the very first guest on the podcast when it was still very much an experiment. I enjoyed it so much, I decided to continue with the show. (If you missed his last interview, you can listen to it here, or if you’re a member of The Learning Community, you can download a transcript.)

Michael is one of my very favorite people to talk to, and I couldn’t wait to pick up right where we left off.

In this interview, Michael and I dive deep into some of the topics we care most about here at Farnam Street, including:

  • The concept of “base rates” and how they can help us make far better decisions and avoid the pain and consequences of making poor choices
  • How to know where you land on the luck/skill continuum and why it matters
  • Michael’s advice on creating a systematic decision-making process in your organization to improve outcomes
  • The two most important elements of any decision-making process
  • How to train your intuition to be one of your most powerful assets instead of a dangerous liability
  • The three tests Michael uses in his company to determine the health and financial stability of his environment
  • Why “algorithm aversion” is creating such headaches in many organizations and how to help your teams overcome it, so you can make more rapid progress
  • The most impactful books that he’s read since we last spoke, his reading habits, and the strategies he uses to get the most out of every book
  • The importance of sleep in Michael's life to make sure his body and mind are running at peak efficiency
  • His greatest failures and what he learned from them
  • How Michael and his wife raised their kids and the unique parenting style they adopted
  • How Michael defines happiness and the decisions he makes to maximize the joy in his life

Any one of those insights alone is worth a listen, so I think you’re really going to enjoy this interview.

Transcript

An edited transcript is available to members of our learning community or for purchase separately ($9).

More Episodes

A complete list of all of our podcast episodes.

***


Bayes and Deadweight: Using Statistics to Eject the Deadweight From Your Life

“[K]nowledge is indeed highly subjective, but we can quantify it with a bet. The amount we wager shows how much we believe in something.”

— Sharon Bertsch McGrayne

The quality of your life will, to a large extent, be decided by whom you elect to spend your time with. Supportive, caring, and funny are great attributes in friends and lovers. Unceasingly negative cynics who chip away at your self-esteem? We need to jettison those people as far and fast as we can.

The problem is, how do we identify these people who add nothing positive — or not enough positive — to our lives?

Few of us keep relationships with obvious assholes. There are always a few painfully terrible family members we have to put up with at weddings and funerals, but normally we choose whom we spend time with. And we’ve chosen these people because, at some point, our interactions with them felt good.

How, then, do we identify the deadweight? The people who are really dragging us down and who have a high probability of continuing to do so in the future? We can apply the general thinking tool called Bayesian Updating.

Bayes's theorem can involve some complicated mathematics, but at its core lies a very simple premise. Probability estimates should start with what we already know about the world and then be incrementally updated as new information becomes available. Bayes can even help us when that information is relevant but subjective.
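
In symbols, that premise is just Bayes's rule, stated here for reference (the notation is standard, not McGrayne's):

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Here P(H) is your prior belief in a hypothesis, P(E | H) is how likely the new evidence would be if the hypothesis were true, and P(H | E) is your updated (posterior) belief once the evidence arrives.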

How? As McGrayne explains in the quote above, from The Theory That Would Not Die, you simply ask yourself to wager on the outcome.

Let’s take an easy example.

You are going on a blind date. You’ve been told all sorts of good things in advance — the person is attractive and funny and has a good job — so of course, you are excited. The date starts off great, living up to expectations. Halfway through you find out they have a cat. You hate cats. Given how well everything else is going, how much should this information affect your decision to keep dating?

Quantify your belief in the most probable outcome with a bet. How much would you wager that harmony on the pet issue is an accurate predictor of relationship success? Ten cents? Ten thousand dollars? Do the thought experiment. Imagine walking into a casino and placing a bet on the likelihood that this person’s having a cat will ultimately destroy the relationship. How much money would you take out of your savings and lay on the table? Your answer will give you an idea of how much to factor the cat into your decision-making process. If you wouldn’t part with a dime, then I wouldn’t worry about it.
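
One way to turn the casino thought experiment into an actual number is the standard conversion from betting odds to implied probability: the probability you assign is the fraction of the total pot you are willing to stake. A minimal sketch, with made-up dollar figures:

```python
# Convert a bet you'd happily accept into an implied probability.
def implied_probability(stake: float, payout_if_right: float) -> float:
    """Fraction of the total pot you are willing to risk."""
    return stake / (stake + payout_if_right)

# Would you risk $1 to win $99 that the cat sinks the relationship?
print(implied_probability(1, 99))    # 0.01 -- barely worth a thought
# Would you risk $50 to win $50?
print(implied_probability(50, 50))   # 0.50 -- a genuine red flag
```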

This kind of approach can help us when it comes to evaluating our interpersonal relationships. Deciding if someone is a good friend, partner, or co-worker is full of subjective judgments. There is usually some contradictory information, and ultimately no one is perfect. So how do you decide who is worth keeping around?

Let’s start with friends. The longer a friendship lasts, the more likely it is to have ups and downs. The trick is to start quantifying these. A hit from a change in geographical proximity is radically different from a hit from betrayal — we need to factor these differently into our friendship formula.

This may seem obvious, but the truth is that we often give the same weight to a wide variety of behaviors. We’ll say things like “yeah, she talked about my health problems when I asked her not to, but she always remembers my birthday.” By treating all aspects of the friendship equally, we have a hard time making reasonable estimates about the future value of that friendship. And that’s how we end up with deadweight.

For the friend who has betrayed your confidence, what you really want to know is the likelihood that she’s going to do it again. Instead of trying to remember and analyze every interaction you’ve ever had, just imagine yourself betting on it. Go back to that casino and head to the friendship roulette wheel. Where would you put your money? All in on “She can’t keep her mouth shut” or a few chips on “Not likely to happen again”?

Using a rough Bayesian model in our heads, we’re forcing ourselves to quantify what “good” is and what “bad” is. How good? How bad? How likely? How unlikely? Until we do some (rough) guessing at these things, we’re making decisions much more poorly than we need to be.

The great thing about using Bayes’s theorem is that it encourages constant updating. It also encourages an open mind by giving us the chance to look at a situation from multiple angles. Maybe she really is sorry about the betrayal. Maybe she thought she was acting in your best interests. There are many possible explanations for her behavior and you can use Bayes’s theorem to integrate all of her later actions into your bet. If you find yourself reducing the amount of money you’d bet on further betrayal, you can accurately assume that the probability she will betray your trust again has gone down.
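
Here is what that updating might look like as a quick calculation. The probabilities are invented for illustration; only the formula is Bayes's.

```python
# Bayesian updating of the betrayal example, with made-up numbers.
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes's rule."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# H: "she will betray a confidence again."
prior = 0.60  # after one betrayal, you'd bet 60 cents on the dollar

# E: she apologizes and guards the next secret you share. Suppose that's
# twice as likely if she is genuinely unlikely to betray you again.
posterior = update(prior, p_e_given_h=0.30, p_e_given_not_h=0.60)
print(f"{posterior:.2f}")  # ~0.43 -- you'd now wager noticeably less
```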

Using this strategy can also stop the endless rounds of second-guessing. Why did that co-worker steal my idea? Who else do I have to watch out for? This what-if thinking is paralyzing. You end up justifying your current behavior by anticipating the worst possible scenarios you can imagine. Thus, you don't change anything, and you step further away from a solution.

In reality, who cares? The why isn’t important; the most relevant task for you is to figure out the probability that your coworker will do it again. Don’t spend hours analyzing what to do, get upset over the doomsday scenarios you have come up with, or let a few glasses of wine soften the experience.

Head to your mental casino and place the bet, quantifying all the subjective information in your head that is messy and hard to articulate. You will cut through the endless “but maybes” and have a clear path forward that addresses the probable future. It may make sense to give him the benefit of the doubt. It may also be reasonable to avoid him as much as possible. When you figure out how much you would wager on the potential outcomes, you’ll know what to do.

Sometimes we can’t just get rid of people who aren’t good for us — family being the prime example. But you can also use Bayes to test how your actions will change the probability of outcomes to find ways of keeping the negativity minimal. Let’s say you have a cousin who always plans to visit but then cancels. You can’t stop being his cousin and saying “you aren’t welcome at my house” will cause a big family drama. So what else can you do?

Your initial equation — your probability estimate — indicates that the behavior is likely to continue. In your casino, you would comfortably bet your life savings that it will happen again. Now imagine ways in which you could change your behavior. Which of these would reduce your bet? You could have an honest conversation with him, telling him how his actions make you feel. To know if he’s able to openly receive this, consider whether your bet would change. Or would you wager significantly less after employing the strategy of always being busy when he calls to set up future visits?

And you can dig even deeper. Which of your behaviors would increase the probability that he actually comes? Which behaviors would increase the probability that he doesn’t bother making plans in the first place? Depending on how much you like him, you can steer your changes to the outcome you’d prefer.

Quantifying the subjective and using Bayes’s theorem can help us clear out some of the relationship negativity in our lives.

What You Can Learn from Fighter Pilots About Making Fast and Accurate Decisions

“What is strategy? A mental tapestry of changing intentions for harmonizing and focusing our efforts as a basis for realizing some aim or purpose in an unfolding and often unforeseen world of many bewildering events and many contending interests.”

— John Boyd

What techniques do people use in the most extreme situations to make decisions? What can we learn from them to help us make more rational and quick decisions?

If these techniques work in the most drastic scenarios, they have a good chance of working for us. This is why military mental models can have such wide, useful applications outside their original context.

Military mental models are constantly tested in the laboratory of conflict. If they weren’t agile, versatile, and effective, they would quickly be replaced by others. Military leaders and strategists invest a great deal of time in developing and teaching decision-making processes.

One strategy that I’ve found repeatedly effective is the OODA loop.

Developed by strategist and U.S. Air Force Colonel John Boyd, the OODA loop is a practical concept designed to be the foundation of rational thinking in confusing or chaotic situations. OODA stands for Observe, Orient, Decide, and Act.

Boyd developed the strategy for fighter pilots. However, like all good mental models, it can be extended into other fields. We used it at the intelligence agency I used to work at. I know lawyers, police officers, doctors, businesspeople, politicians, athletes, and coaches who use it.

Fighter pilots have to work fast. Taking a second too long to make a decision can cost them their lives. As anyone who has ever watched Top Gun knows, pilots have a lot of decisions and processes to juggle when they’re in dogfights (close-range aerial battles). Pilots move at high speeds and need to avoid enemies while tracking them and keeping a contextual knowledge of objectives, terrains, fuel, and other key variables.

Dogfights are nasty. I’ve talked to pilots who’ve been in them. They want the fights to be over as quickly as possible. The longer they go, the higher the chances that something goes wrong. Pilots need to rely on their creativity and decision-making abilities to survive. There is no game plan to follow, no schedule or to-do list. There is only the present moment when everything hangs in the balance.

Forty-Second Boyd

Boyd was no armchair strategist. He developed his ideas during his own time as a fighter pilot. He earned the nickname “Forty-Second Boyd” for his ability to win any fight in under 40 seconds.

In a tribute written after Boyd’s death, General C.C. Krulak described him as “a towering intellect who made unsurpassed contributions to the American art of war. Indeed, he was one of the central architects of the reform of military thought…. From John Boyd we learned about competitive decision making on the battlefield—compressing time, using time as an ally.”

Reflecting Robert Greene’s maxim that everything is material, Boyd spent his career observing people and organizations. How do they adapt to changeable environments in conflicts, business, and other situations?

Over time, he deduced that these situations are characterized by uncertainty. Dogmatic, rigid theories are unsuitable for chaotic situations. Rather than trying to rise through the military ranks, Boyd focused on using his position as colonel to compose a theory of the universal logic of war.

Boyd was known to ask his mentees the pointed question, “Do you want to be someone, or do you want to do something?” In his own life, he certainly focused on the latter path and, as a result, left us ideas with tangible value. The OODA loop is just one of many.

The Four Parts of the OODA Loop

Let's break down the four parts of the OODA loop and see how they fit together.

OODA stands for Observe, Orient, Decide, Act. The description of it as a loop is crucial: Boyd intended the four steps to be repeated again and again until a conflict finishes. Although the OODA loop is often depicted as a simple cycle, there is real depth to it. Using it should be simple, but it rests on a rich base of interdisciplinary knowledge.

1. Observe

The first step in the OODA loop is to observe. At this stage, the main focus is to build a comprehensive picture of the situation with as much accuracy as possible.

A fighter pilot needs to consider: What is immediately affecting me? What is affecting my opponent? What could affect us later on? Can I make any predictions, and how accurate were my prior ones? A pilot's environment changes rapidly, so these observations need to be broad and fluid.

And information alone is not enough. The observation stage requires awareness of the overarching meaning of the information. It also necessitates separating the information which is relevant for a particular decision from that which is not. You have to add context to the variables.

The observation stage is vital in decision-making processes.

For example, faced with a patient in an emergency ward, a doctor needs to start by gathering as much foundational knowledge as possible. That might be the patient's blood pressure, pulse, age, underlying health conditions, and reason for admission. At the same time, the doctor needs to discard irrelevant information and figure out which facts are relevant for this precise situation. Only by putting the pieces together can she make a fast decision about the best way to treat the patient. The more experienced a doctor is, the more factors she is able to take into account, including subtle ones, such as a patient's speech patterns, his body language, and the absence (rather than presence) of certain signs.

2. Orient

Orientation, the second stage of the OODA loop, is frequently misunderstood or skipped because it is less intuitive than the other stages. Boyd referred to it as the Schwerpunkt, a German term that loosely translates to “the main emphasis.” In this context, to orient is to recognize the barriers that might interfere with the other parts of the process.

Without an awareness of these barriers, the subsequent decision cannot be a fully rational one. Orienting is all about connecting with reality, not with a false version of events filtered through the lens of cognitive biases and shortcuts.

“Orientation isn't just a state you're in; it's a process. You're always orienting.”

— John Boyd

Including this step, rather than jumping straight to making a decision, gives us an edge over the competition. Even if we are at a disadvantage to begin with, having fewer resources or less information, Boyd maintained that the Orient step ensures that we can outsmart an opponent.

For Western nations, cyber-crime is a huge threat — mostly because for the first time ever, they can’t outsmart, outspend, or out-resource the competition. Boyd has some lessons for them.

Boyd believed that four main barriers prevent us from seeing information in an unbiased manner:

  1. Our cultural traditions
  2. Our genetic heritage
  3. Our ability to analyze and synthesize
  4. The influx of new information — it is hard to make sense of observations when the situation keeps changing

Boyd was one of the first people to discuss the importance of building a toolbox of mental models, prior to Charlie Munger’s popularization of the concept among investors.

Boyd believed in “destructive deduction” — taking note of incorrect assumptions and biases and then replacing them with fundamental, versatile mental models. Only then can we begin to garner a reality-oriented picture of the situation, which will inform subsequent decisions.

Boyd employed a brilliant metaphor for this — a snowmobile. In one talk, he described how a snowmobile comprises elements of different devices. The caterpillar treads of a tank, skis, the outboard motor of a boat, the handlebars of a bike — each of those elements is useless alone, but combining them creates a functional vehicle.

As Boyd put it: “A loser is someone (individual or group) who cannot build snowmobiles when facing uncertainty and unpredictable change; whereas a winner is someone (individual or group) who can build snowmobiles, and employ them in an appropriate fashion, when facing uncertainty and unpredictable change.”

To orient ourselves, we have to build a metaphorical snowmobile by combining practical concepts from different disciplines.

Although Boyd is regarded as a military strategist, he didn’t confine himself to any particular discipline. His theories encompass ideas drawn from various disciplines, including mathematical logic, biology, psychology, thermodynamics, game theory, anthropology, and physics. Boyd described his approach as a “scheme of pulling things apart (analysis) and putting them back together (synthesis) in new combinations to find how apparently unrelated ideas and actions can be related to one another.”

3. Decide

No surprises here. Having gathered information and oriented ourselves, we have to make an informed decision. The previous two steps should have generated a plethora of ideas, so this is the point where we choose the most relevant option.

Boyd cautioned against first-conclusion bias, explaining that we cannot keep making the same decision again and again. This part of the loop needs to be flexible and open to Bayesian updating. In some of his notes, Boyd described this step as the hypothesis stage. The implication is that we should test the decisions we make at this point in the loop, spotting their flaws and including any issues in future observation stages.

4. Act

While technically a decision-making process, the OODA loop is all about action. The ability to act upon rational decisions is a serious advantage.

The other steps are mere precursors. With a decision made, now is the time to act on it. Also known as the test stage, this is when we experiment to see how good our decision was. Did we observe the right information? Did we use the best possible mental models? Did we get swayed by biases and other barriers? Can we disprove the prior hypothesis? Whatever the outcome, we then cycle back to the first part of the loop and begin observing again.
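
As a programmer's sketch, the whole cycle can be compressed into a few functions. Boyd's four steps supply the structure; the dogfight-flavored example data, the function names, and the stop condition are all illustrative assumptions, not part of his formulation.

```python
import random

def observe() -> dict:
    # Step 1: gather current, relevant information about the situation.
    return {"opponent_distance_km": random.uniform(0.1, 10.0)}

def orient(obs: dict) -> dict:
    # Step 2: filter observations through mental models; add context.
    obs["in_range"] = obs["opponent_distance_km"] < 2.0
    return obs

def decide(picture: dict) -> str:
    # Step 3: choose the best available option, held loosely as a hypothesis.
    return "engage" if picture["in_range"] else "close_distance"

def act(decision: str) -> bool:
    # Step 4: test the decision in the world; success ends the conflict.
    print(f"acting: {decision}")
    return decision == "engage"

# The "loop" in OODA loop is the point: cycle until the conflict resolves,
# feeding each action's results straight back into observation.
while not act(decide(orient(observe()))):
    pass
```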

Why the OODA Loop Works

The OODA loop has four key benefits.

1. Speed

Fighter pilots must make many decisions in fast succession. They don’t have time to list pros and cons or to consider every available avenue. Once the OODA loop becomes part of their mental toolboxes, they should be able to cycle through it in a matter of seconds.

Speed is a crucial element of military decision making. Using the OODA loop in everyday life, we probably have a little more time than a fighter pilot would. But Boyd emphasized the value of being decisive, taking initiative, and staying autonomous. These are universal assets and apply to many situations.

Take the example of modern growth hacker marketing.

“The ability to operate at a faster tempo or rhythm than an adversary enables one to fold the adversary back inside himself so that he can neither appreciate nor keep up with what is going on. He will become disoriented and confused…”

— John Boyd

The key advantage growth hackers have over traditional marketers is speed. They observe (look at analytics, survey customers, perform A/B tests, etc.) and orient themselves (consider vanity versus meaningful metrics, assess interpretations, and ground themselves in the reality of a market) before making a decision and then acting. The final step serves to test their ideas, and they have the agility to switch tactics if the desired outcome is not achieved.

Meanwhile, traditional marketers are often trapped in lengthy campaigns which do not offer much in the way of useful metrics. Growth hackers can adapt and change their techniques every single day depending on what works. They are not confined by stagnant ideas about what worked before.

So, although they may have a small budget and fewer people to assist them, their speed gives them an advantage. Just as Boyd could defeat any opponent in under 40 seconds (even starting at a position of disadvantage), growth hackers can grow companies and sell products at extraordinary rates, starting from scratch.

2. Comfort With Uncertainty

Uncertainty does not always equate to risk. A fighter pilot is in a precarious situation, where there will be gaps in their knowledge. They cannot read the mind of the opponent and might have incomplete information about the weather conditions and surrounding environment. They can, however, take into account key factors such as the opponent's nationality, the type of airplane they are flying, and what their maneuvers reveal about their intentions and level of training.

If the opponent uses an unexpected strategy, is equipped with a new type of weapon or airplane, or behaves in an irrational, ideologically motivated way, the pilot must accept the accompanying uncertainty. However, Boyd stressed that uncertainty is irrelevant if we have the right filters in place.

If we don’t, we can end up stuck at the observation stage, unable to decide or act. But if we do have the right filters, we can factor uncertainty into the observation stage. We can leave a margin of error. We can recognize the elements which are within our control and those which are not.

Three key principles supported Boyd’s ideas. In his presentations, he referred to Gödel’s Proof, Heisenberg’s Uncertainty Principle, and the Second Law of Thermodynamics.

Gödel’s theorems indicate that any mental model we have of reality will omit certain information and that Bayesian updating must be used to bring it in line with reality. Our understanding of science illustrates this.

In the past, people’s conception of reality missed crucial concepts such as criticality, relativity, the laws of thermodynamics, and gravity. As we have discovered these concepts, we have updated our view of the world. Yet we would be foolish to think that we now know everything and our worldview is complete. Other key principles remain undiscovered. The same goes for fighter pilots — their understanding of what is going on during a battle will always have gaps. Identifying this fundamental uncertainty gives it less power over us.

The second concept Boyd referred to is Heisenberg’s Uncertainty Principle. In its simplest form, this principle describes the limit of the precision with which pairs of physical properties can be known. We cannot know both the exact position and the exact momentum of a particle at the same time; the more precisely we measure one, the less precisely we can know the other. Although the Uncertainty Principle was formulated for particles, Boyd’s ability to combine disciplines led him to apply the idea to planes. If a pilot focuses too hard on where an enemy plane is, they will lose track of where it is going, and vice versa; trying harder to track both variables at once only makes the tracking worse. By analogy, the same pattern shows up in myriad areas where excessive observation proves detrimental. Reality is imprecise.
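
For reference, the formal statement (a standard result from physics, not something taken from Boyd's briefings) bounds the product of the two uncertainties:

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
\]

where \(\Delta x\) is the uncertainty in position, \(\Delta p\) the uncertainty in momentum, and \(\hbar\) the reduced Planck constant. Shrinking either factor forces the other to grow.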

Finally, Boyd made use of the Second Law of Thermodynamics. In a closed system, entropy always increases and everything moves towards chaos. Energy spreads out and becomes disorganized.

Although Boyd’s notes do not specify the exact applications, his inference appears to be that a fighter pilot must be an open system or they will fail. They must draw “energy” (information) from outside themselves or the situation will become chaotic. They should also aim to cut their opponent off, forcing them to become a closed system. Drawing on his studies, Boyd developed his Energy Maneuverability theory, which recast maneuvers in terms of the energy they used.
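
The quantitative core of Energy-Maneuverability theory is specific excess power, the rate at which an aircraft can gain or shed energy. In its standard textbook form:

\[
P_s = \frac{(T - D)\,V}{W}
\]

where \(T\) is thrust, \(D\) is drag, \(V\) is velocity, and \(W\) is weight. A fighter with higher \(P_s\) at a given speed and altitude can out-climb and out-accelerate its opponent.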

“Let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt.”

— Sun Tzu

3. Unpredictability

Using the OODA loop should enable us to act faster than an opponent, thereby seeming unpredictable. While they are still deciding what to do, we have already acted. This resets their loop, pushing them back to the observation stage. Keep doing this, and they are either rendered immobile or forced to act without making a considered decision. Either way, they start making mistakes, which can be exploited.

Boyd recommended making unpredictable changes in speed and direction, and wrote, “we should operate at a faster tempo than our adversaries or inside our adversaries[’] time scales. … Such activity will make us appear ambiguous (non predictable) [and] thereby generate confusion and disorder among our adversaries.” He even helped design planes better equipped to make those unpredictable changes.

For the same reason that you can't run the same play 70 times in a football game, rigid military strategies often become useless after a few uses, or even a single one, as opponents learn to recognize and counter them. The OODA loop can be used endlessly because it is a formless strategy, unconnected to any particular maneuvers.

We know that Boyd was influenced by Sun Tzu (he owned seven thoroughly annotated copies of The Art of War), and he drew many ideas from the ancient strategist. Sun Tzu depicts war as a game of deception in which the best strategy is the one an opponent cannot pre-empt. Apple has long used this approach as a key part of their product launches. Meticulously planned, the launches are shrouded in secrecy, and the goal is for no one outside the company to see a product prior to its release.

When information has leaked, the company has taken serious legal action and fired the employees involved. We are never sure what Apple will put out next (just search for “Apple product launch 2017” and you will find endless speculation based on few facts). As a consequence, Apple stays ahead of their rivals.

Once a product launches, rival companies scramble to emulate it. But by the time their technology is ready for release, Apple is on to the next thing and has taken most of the market share. And although Apple's launches are inexpensive compared with the drawn-out campaigns other companies run, their unpredictability makes us pay attention. Stock prices rise the day after a launch, tickets sell out in seconds, and the media reports launches as if they were news events, not marketing events.

4. Testing

A notable omission in Boyd's work is any specific instruction about how to act or which decisions to make. This is presumably due to his respect for testing. He believed that ideas should be tested and, if necessary, discarded.

“We can't just look at our own personal experiences or use the same mental recipes over and over again; we've got to look at other disciplines and activities and relate or connect them to what we know from our experiences and the strategic world we live in.”

— John Boyd

Boyd’s OODA is a feedback loop, with the outcome of actions leading back to observations. Even in Aerial Attack Study, his comprehensive manual of maneuvers, Boyd did not describe any particular one as superior. He encouraged pilots to have the widest repertoire possible so they could select the best option in response to the maneuvers of an opponent.

We can incorporate testing into our decision-making processes by keeping track of outcomes in decision journals. Boyd’s notes indicate that he may have done just that during his time as a fighter pilot, building up the knowledge that went on to form Aerial Attack Study. Rather than guessing how our decisions lead to certain outcomes, we can get a clear picture to aid us in future orientation stages. Over time, our decision journals will reveal what works and what doesn’t.
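
One way to put this into practice is a simple structured log. Here is a minimal sketch in Python; the fields are illustrative assumptions, not anything Boyd prescribed:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """One entry in a decision journal (illustrative fields only)."""
    day: date
    situation: str            # what we observed
    reasoning: str            # how we oriented: models used, biases considered
    decision: str             # what we chose to do
    expected_outcome: str     # the prediction made at decision time
    actual_outcome: str = ""  # filled in later, closing the feedback loop

journal = [DecisionRecord(
    day=date(2018, 1, 11),
    situation="Competitor cut prices by 10%",
    reasoning="Retention data still strong; risk of reacting out of loss aversion",
    decision="Hold prices; improve onboarding instead",
    expected_outcome="Churn rises less than 1% this quarter",
)]

# Months later, record what actually happened and compare it to the prediction.
journal[0].actual_outcome = "Churn rose 0.4%; the decision held up"
```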

Applying the OODA Loop

In sports, there is an adage that carries over to business quite well: “Speed kills.” If you are nimble, able to assess an ever-changing environment and adapt quickly, you'll carry the advantage over your opponent.

Start applying the OODA loop to your day-to-day decisions and watch what happens. You'll start to notice things you would have been oblivious to before. Before jumping to your first conclusion, you'll pause to consider your biases, take in additional information, and think through the consequences.

As with anything you practice, if you do it right, the more you do it, the better you'll get. You'll start making better decisions more quickly. You'll see more rapid progress. And, as John Boyd would prescribe, you'll start to DO something in your life, not just BE somebody.

***


Poker, Speeding Tickets, and Expected Value: Making Decisions in an Uncertain World

“Take the probability of loss times the amount of possible loss from the probability of gain times the amount of possible gain. That is what we're trying to do. It's imperfect but that's what it's all about.”

— Warren Buffett

You can train your brain to think like CEOs, professional poker players, investors, and others who make tricky decisions in an uncertain world by weighing probabilities.

All decisions involve potential tradeoffs and opportunity costs. The question is, how can we make the best possible choices when the factors involved are often so complicated and confusing? How can we determine which statistics and metrics are worth paying attention to? How do we think about averages?

Expected value is one of the simplest tools you can use to think better. While not a natural way of thinking for most people, it instantly turns the world into shades of grey by forcing us to weigh probabilities and outcomes. Once we've mastered it, our decisions become supercharged. We know which risks to take, when to quit projects, when to go all in, and more.

Expected value is the long-run average of a random variable: the probability-weighted average of all its possible outcomes.

If you flip a fair coin ten times, the heads-to-tails ratio will probably not be exactly equal. If you flip it one hundred times, the ratio will be closer to 50:50, though still not exact. Over a very large number of flips, though, you can expect heads to come up half the time and tails the other half. The law of large numbers says that the sample average converges on the expected value in the long run, even if the first few flips seem lopsided.

The more coin flips, the closer you get to the 50:50 ratio. If you bet a sum of money on a coin flip, the potential winnings on a fair coin have to be bigger than your potential loss to make the expected value positive.
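
A quick simulation makes the convergence concrete (a sketch using only Python's standard library; exact figures will differ from run to run):

```python
import random

def heads_ratio(flips: int) -> float:
    """Flip a fair coin `flips` times and return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return heads / flips

for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} flips: {heads_ratio(n):.4f}")
# The ratio drifts toward 0.5000 as n grows: the law of large numbers at work.
```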

We make many expected-value calculations without even realizing it. If we decide to stay up late and have a few drinks on a Tuesday, we judge the expected value of an enjoyable evening to be higher than the expected costs of the following day. If we decide to always leave early for appointments, we weigh the expected value of being on time against the cost of the many occasions when we arrive early and wait. When we take on work, we judge the expected value in terms of income and other career benefits to be higher than the cost in time and/or sanity.

Likewise, anyone who reads a lot knows that most books they choose will have minimal impact on them, while a few books will change their lives and be of tremendous value. Looking at the required time and money as an investment, books have a positive expected value (provided we choose them with care and make use of the lessons they teach).

These decisions might seem obvious. But the math behind them would be somewhat complicated if we tried to sit down and calculate it. Who pulls out a calculator before deciding whether to open a bottle of wine (certainly not me) or walk into a bookstore?

The factors involved are impossible to quantify in a non-subjective manner – like trying to explain how to catch a baseball. We just have a feel for them. This expected-value analysis is unconscious – something to consider if you have ever labeled yourself as “bad at math.”

Parking Tickets

Another example of expected value is parking tickets. Let's say that a parking spot costs $5 and the fine for not paying is $10. If you can expect to be caught only one-third of the time, why pay for parking? Skipping the fee has the better expected value: you can park without paying three times and expect only $10 in fines, instead of paying $15 for three parking spots. A fine that small is effectively a disincentive to pay. But if the fine is $100, paying becomes worthwhile whenever the chance of getting caught exceeds one in twenty. This is why fines tend to seem excessive: they have to cover the people who are never caught while still giving everyone an incentive to pay.
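
Here is the arithmetic written out, a small sketch using the numbers from the paragraph above:

```python
def expected_value(outcomes):
    """Sum of probability * payoff across all possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Skipping the $5 spot: caught 1/3 of the time ($10 fine), otherwise free.
ev_skip = expected_value([(1/3, -10), (2/3, 0)])  # -3.33: cheaper than paying $5
print(ev_skip)

# With a $100 fine, skipping wins only while the catch rate stays below 5/100.
breakeven_catch_rate = 5 / 100  # above 1 in 20, paying becomes the better deal
```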

Consider speeding tickets. Here, the expected value can be more abstract, encompassing different factors. If speeding on the way to work saves 15 minutes, then a monthly $100 fine might seem worthwhile to some people. For most of us, though, a weekly fine would mean that speeding has a negative expected value. Add in other disincentives (such as the loss of your driver's license), and speeding is not worth it. So the calculation is not just financial; it takes into account other tradeoffs as well.

The same goes for free samples and trial periods on subscription services. Many companies (such as Graze, Blue Apron, and Amazon Prime) offer generous free trials. How can they afford to do this? Again, it comes down to expected value. The companies know how much the free trials cost them. They also know the probability of someone's paying afterwards and the lifetime value of a customer. Basic math reveals why free trials are profitable. Say that a free trial costs the company $10 per person, and one in ten people then sign up for the paid service, going on to generate $150 in profits. The expected value is positive. If only one in twenty people sign up, the company needs to find a cheaper free trial or scrap it.

Similarly, expected value applies to services that offer a free “lite” version (such as Buffer and Spotify). Doing so costs them a small amount or even nothing. Yet it increases the chance of someone's deciding to pay for the premium version. For the expected value to be positive, the combined cost of the people who never upgrade needs to be lower than the profit from the people who do pay.

Lottery tickets prove to be a poor purchase when viewed through the lens of expected value. If a ticket costs $1 and there is a chance of winning $500,000, it might seem as if the ticket's expected value is positive. It is almost always negative. If one million people purchase a ticket, the expected value of each $1 ticket is just $0.50, a net loss of fifty cents. That difference is the lottery operator's profit. Only on sporadic occasions, such as unusually large rollover jackpots, is the expected value positive, and even then the probability of winning remains minuscule.
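
The same one-line calculation covers the free-trial and lottery examples, again with the numbers from the text:

```python
def expected_value(outcomes):
    """Sum of probability * payoff across all possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Free trial: $10 cost per user; a converting user generates $150 in profit.
print(expected_value([(1/10, 150)]) - 10)  # +5.0 per trial -> keep it
print(expected_value([(1/20, 150)]) - 10)  # -2.5 per trial -> cheapen or scrap it

# Lottery: $1 ticket, $500,000 prize, one winner among a million tickets.
print(expected_value([(1/1_000_000, 500_000)]) - 1)  # -0.5 per ticket
```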

Failing to understand expected value underlies many common logical errors. Getting a grasp of it can help us overcome numerous limitations and cognitive biases.

“Constantly thinking in expected value terms requires discipline and is somewhat unnatural. But the leading thinkers and practitioners from somewhat varied fields have converged on the same formula: focus not on the frequency of correctness, but on the magnitude of correctness.”

— Michael Mauboussin

Expected Value and Poker

Let's look at poker. How do professional poker players manage to win large sums of money and hold impressive track records? Well, we can be certain that the answer isn't all luck, although there is some of that involved.

Professional players rely on mathematical mental models that create order among random variables. Although these models are basic, it takes extensive experience to create the fingerspitzengefühl (“fingertips feeling,” or instinct) necessary to use them.

A player needs to make correct calculations every minute of a game with an automaton-like mindset. Emotions and distractions can corrupt the accuracy of the raw math.

In a game of poker, the expected value is the average return on each dollar invested in the pot. Each time a player bets or calls, they are weighing the probability of winning against the amount they are investing. If a player is risking $100 with a 1 in 5 probability of success, the pot must contain at least $500 for the call to break even: the expected return per call must at least equal the amount the player stands to lose. If the pot contains only $300 at that same 1 in 5 probability, the expected value is negative. The idea is that even when this tactic fails in a given hand, in the long run, the player will profit.
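
In code, the pot-odds check from this paragraph looks like the following; it is a deliberately simplified model that ignores later betting rounds:

```python
def call_ev(pot: float, cost: float, p_win: float) -> float:
    """Expected value of a call: pay `cost`, collect `pot` with probability `p_win`."""
    return p_win * pot - cost

print(call_ev(pot=500, cost=100, p_win=1/5))  #   0.0 -> break-even; bigger pots are profitable
print(call_ev(pot=300, cost=100, p_win=1/5))  # -40.0 -> negative expected value; fold
```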

Expected-value analysis gives players a clear idea of probabilistic payoffs. Successful poker players can win millions one week, then make nothing or lose money the next, depending on the odds they faced. Even the best possible hands can lose through simple probability. With each move, players also need to use Bayesian updating to adapt their calculations, because sticking with a prior figure could prove disastrous. Casinos, meanwhile, make their fortunes from people who bet on situations with a negative expected value.

Expected Value and the Ludic Fallacy

In The Black Swan, Nassim Taleb explains the difference between everyday randomness and randomness in the context of a game or casino. Taleb coined the term “ludic fallacy” to refer to “the misuse of games to model real-life situations.” (Or, as the website logicallyfallacious.com puts it: the assumption that flawless statistical models apply to situations where they don’t actually apply.)

In Taleb’s words, gambling is “sterilized and domesticated uncertainty. In the casino, you know the rules, you can calculate the odds… ‘The casino is the only human venture I know where the probabilities are known, Gaussian (i.e., bell-curve), and almost computable.’ You cannot expect the casino to pay out a million times your bet, or to change the rules abruptly during the game….”

Games like poker have a defined, calculable expected value. That’s because we know the outcomes, the cards, and the math. Most decisions are more complicated. If you decide to bet $100 that it will rain tomorrow, the expected value of the wager is incalculable. The factors involved are too numerous and complex to compute. Relevant factors do exist; you are more likely to win the bet if you live in England than if you live in the Sahara, for example. But that doesn't rule out Black Swan events, nor does it give you the neat probabilities that exist in games.

In short, there is a key distinction between Knightian risks, which are computable because we have enough information to calculate the odds, and Knightian uncertainty, which is non-computable because we don’t have enough information to calculate the odds accurately. (The distinction between risk and uncertainty is based on the writings of economist Frank Knight.) Poker falls into the former category; real life is in the latter. If we take the concept of expected value literally and plan only for the expected, we will run into serious problems.

As Taleb writes in Fooled By Randomness:

Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance. Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem or a brain teaser. Mother nature does not tell you how many holes there are on the roulette table, nor does she deliver problems in a textbook way (in the real world one has to guess the problem more than the solution).

The Monte Carlo Fallacy

Even in the domesticated environment of a casino, probabilistic thinking can go awry when expected value is forgotten. This famously occurred at the Monte Carlo Casino in 1913, when gamblers lost millions as the roulette wheel landed on black 26 times in a row. That particular sequence is no more or less likely than any of the other 67,108,863 possible 26-spin sequences, but the people present kept thinking, “It has to be red next time.” They saw the likelihood of the wheel landing on red as rising each time it landed on black. In hindsight, what sense does that make? A roulette wheel does not remember the color it landed on last time. Red and black are equally likely on every spin (each just under 50%, thanks to the green zero), regardless of previous spins. So, for a red-or-black bet to break even, a win would need to return slightly more than double the stake; since the payout is exactly double, the expected value of every such bet is slightly negative.
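
A short simulation shows why the gamblers' reasoning fails: on a single-zero wheel, red is no more likely after a black spin than after any other (illustrative code, not a reconstruction of the 1913 table):

```python
import random

RED, BLACK, GREEN = 18, 18, 1  # pocket counts on a single-zero wheel

def spin() -> str:
    r = random.randrange(RED + BLACK + GREEN)
    return "red" if r < RED else ("black" if r < RED + BLACK else "green")

spins = [spin() for _ in range(1_000_000)]
p_red = spins.count("red") / len(spins)

# Probability of red given that the previous spin was black.
after_black = [nxt for prev, nxt in zip(spins, spins[1:]) if prev == "black"]
p_red_after_black = after_black.count("red") / len(after_black)

print(f"{p_red:.4f} vs {p_red_after_black:.4f}")  # both ~0.4865: the wheel has no memory
```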

“A lot of people start out with a 400-horsepower motor but only get 100 horsepower of output. It's way better to have a 200-horsepower motor and get it all into output.”

— Warren Buffett

Given all the casinos and roulette tables in the world, the Monte Carlo incident was bound to happen at some point. Perhaps someday a wheel will land on red 26 times in a row and the story will repeat. The gamblers involved did not consider the negative expected value of each bet they made. We know this mistake as the Monte Carlo fallacy (also called the “gambler's fallacy” or “the fallacy of the maturity of chances”): the assumption that prior independent outcomes influence future outcomes that are, in fact, also independent. In other words, people assume that “a random process becomes less random and more predictable as it is repeated.”1

It's a common error. People who play the lottery for years without success think that their chance of winning rises with each ticket, but the expected value is unchanged between iterations. Amos Tversky and Daniel Kahneman consider this kind of thinking a component of the representativeness heuristic, stating that the more we believe we control random events, the more likely we are to succumb to the Monte Carlo fallacy.

Magnitude over Frequency

Steven Crist, in his contribution to Bet with the Best, offers an example of how an expected-value mindset can be applied. Consider a hypothetical race with four horses. If you're trying to maximize return on investment, you might want to avoid the horse with the highest likelihood of winning: whether any horse is a good bet depends entirely on the odds. Crist writes,

“The point of this exercise is to illustrate that even a horse with a very high likelihood of winning can be either a very good or a very bad bet, and that the difference between the two is determined by only one thing: the odds.”2

Everything comes down to payoffs. A horse with a 50% chance of winning might be a good bet, but it depends on the payoff. The same holds for a 100-to-1 longshot. It's not the frequency of winning but the magnitude of the win that matters.
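
A quick worked example makes the point, with invented odds in the spirit of Crist's illustration:

```python
def bet_ev(p_win: float, odds: float, stake: float = 1.0) -> float:
    """EV of a win bet at odds-to-1: gain odds * stake if the horse wins, lose the stake otherwise."""
    return p_win * odds * stake - (1 - p_win) * stake

print(bet_ev(p_win=0.50, odds=0.8))  # -0.10: the likely winner is a bad bet at these odds
print(bet_ev(p_win=0.50, odds=1.5))  # +0.25: the same horse is a good bet at better odds
print(bet_ev(p_win=0.01, odds=150))  # +0.51: even a longshot can be the best bet on the board
```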

Error Rates, Averages, and Variability

When Bill Gates walks into a room with 20 people, the average wealth per person in the room quickly rises past a billion dollars. It doesn't matter whether the other 20 are wealthy or not; Gates's fortune is so far off the charts that it distorts the average.

An old joke tells of the man who drowned crossing a river that was, on average, three feet deep. If you're deciding whether to cross a river and can't swim, the range of depths matters a heck of a lot more than the average depth.
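
Both stories reduce to the same caution about summary statistics; here it is in miniature, with hypothetical numbers:

```python
from statistics import mean, median

# Twenty people of modest wealth, then Bill Gates (roughly $90 billion) walks in.
wealth = [100_000] * 20 + [90_000_000_000]
print(f"mean:   ${mean(wealth):,.0f}")    # about $4.3 billion: distorted by one outlier
print(f"median: ${median(wealth):,.0f}")  # $100,000: much closer to the typical person

# A river that is three feet deep on average can still have an eight-foot hole.
depths = [1, 1, 2, 3, 8]
print(mean(depths), max(depths))  # average 3; the maximum is what drowns you
```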

The Use of Expected Value: How to Make Decisions in an Uncertain World

Thinking in terms of expected value requires discipline and practice. And yet, the top performers in almost any field think in terms of probabilities. While this isn't natural for most of us, once you adopt the discipline, you'll see the quality of your thinking and decisions improve.

In poker, players can calculate the likelihood of a particular outcome. In the vast majority of real-life cases, we cannot predict the future with anything approaching accuracy. So what use is expected value outside gambling? It turns out, quite a lot. Recognizing how expected value works puts any of us at an advantage. We can mentally leap through various scenarios and understand how they affect outcomes.

Expected value takes wild deviations into account. Averages are useful, but they have limits, as the man who tried to cross the river discovered. When making predictions about the future, we need to consider the range of possible outcomes, not just the most likely one. The greater the possible variance from the average, the more our decisions should account for that wider range.

There's a saying in the design world: when you design for the average, you design for no one. Large deviations can mean more risk, which is not always a bad thing. So expected-value calculations take the deviations into account. If we can make decisions with a positive expected value and the lowest possible risk, we are open to large benefits.

Investors use expected value to make decisions. Choices with a positive expected value and minimal risk of losing money are wise. Even if some losses occur, the net gain should be positive over time. In investing, unlike in poker, the potential losses and gains cannot be calculated in exact terms. Expected-value analysis reveals opportunities that people who just use probabilistic thinking often miss. A trade with a low probability of success can still carry a high expected value. That's why it is crucial to have a large number of robust mental models. As useful as probabilistic thinking can be, it has far more utility when combined with expected value.

Understanding expected value is also an effective way to overcome the sunk-cost fallacy. Many of our decisions are based on non-recoverable past investments of time, money, or resources. Those investments are irrelevant; we can't recover them, so we shouldn't factor them into new decisions. Sunk costs push us toward situations with a negative expected value. For example, consider a company that has invested considerable time and money in developing a new product. As the launch date nears, they receive irrefutable evidence that the product will be a failure. Perhaps research shows that customers are uninterested, or a competitor launches a similar, better product. The sunk-cost fallacy would lead them to release the product anyway. Even if they take a loss. Even if it damages their reputation. After all, why waste the money spent developing it? Here's why: because the launch has a negative expected value, it will only worsen their losses. Escalating their commitment would simply increase the sunk costs.

When we are tempted to justify a prior expense, calculating the expected value can stop us from making the situation worse. The sunk-cost fallacy robs us of our most precious resource: time. Each day we face the choice between continuing and quitting numerous endeavors. Expected-value analysis reveals where we should persist, and where we should cut our losses and move on to a better use of time and resources. It's an efficient way to work smarter and avoid unnecessary projects.

Thinking in terms of expected value will feel awkward when you first try it. That's the hardest thing about it; you need to practice for a while before it becomes second nature. Once you get the hang of it, you'll see that it's valuable in almost every decision. That's why the most rational people in the world constantly think about expected value. They've internalized the key insight that the magnitude of correctness matters more than its frequency. And yet, human nature is such that we're happier when we're frequently right.

Footnotes
1. From https://rationalwiki.org/wiki/Gambler’s_fallacy, accessed on 11 January 2018.

2. Steven Crist, “Crist on Value,” in Andrew Beyer et al., Bet with the Best: All New Strategies From America’s Leading Handicappers (New York: Daily Racing Form Press, 2001), 63–64.