
Half Life: The Decay of Knowledge and What to Do About It

Understanding the concept of a half-life will change what you read and how you invest your time. It will explain why our careers are increasingly specialized and offer a look into how we can compete more effectively in a very crowded world.

The Basics

A half-life is the time taken for a quantity to fall to half of its initial value. The term is most often used in the context of radioactive decay, which occurs when unstable atomic nuclei lose energy by emitting radiation. Twenty-nine elements are known to be capable of undergoing this process. Information also has a half-life, as do drugs, marketing campaigns, and all sorts of other things. We see the concept in any area where the quantity or strength of something decreases over time.

Radioactive decay is random, and measured half-lives are based on the most probable rate. We know that a nucleus will decay at some point; we just cannot predict when. It could be anywhere between instantaneous and the total age of the universe. Although scientists have measured half-lives for different isotopes, the exact moment at which any individual nucleus decays is completely random.

Half-lives vary tremendously, and different isotopes of the same element can have very different half-lives. The most common isotope of carbon, carbon-12, is stable and does not decay at all, which is part of why carbon is a reliable building block for living organisms; radioactive carbon-14, by contrast, has a half-life of about 5,730 years.

Three main types of nuclear decay have been identified: alpha, beta, and gamma. Alpha decay occurs when a nucleus splits into two parts: a helium nucleus and the remainder of the original nucleus. Beta decay occurs when a neutron in the nucleus of an element changes into a proton, turning the atom into a different element, as when potassium decays into calcium. Beta decay also releases an electron (the beta particle) and an antineutrino, a particle with virtually no mass. If a nucleus emits radiation without experiencing a change in its composition, it is undergoing gamma decay. Gamma radiation carries an enormous amount of energy.

The Discovery of Half-Lives

The discovery of half-lives (and alpha and beta radiation) is credited to Ernest Rutherford, one of the most influential physicists of his time. Rutherford was at the forefront of this major discovery when he worked with physicist Joseph John Thomson on complementary experiments leading to the discovery of electrons. Rutherford recognized the potential of what he was observing and began researching radioactivity. Two years later, he identified the distinction between alpha and beta rays. This led to his discovery of half-lives, when he noticed that samples of radioactive materials always took the same amount of time to decay by half. By 1902, Rutherford and his collaborators had a coherent theory of radioactive decay (which they called “atomic disintegration”). They demonstrated that radioactive decay enabled one element to turn into another — research which would earn Rutherford a Nobel Prize. A year later, he spotted the missing piece in the work of the chemist Paul Villard and named the third type of radiation gamma.

Half-lives are based on probabilistic thinking. If the half-life of an element is seven days, it is most probable that half of the atoms will have decayed in that time. For a large number of atoms, we can expect the overall behavior to be very consistent. It’s important to note that a radioactive half-life depends on the isotope itself, not on the quantity present. By contrast, in other situations, the half-life may vary depending on the amount of material. For example, the half-life of a chemical someone ingests might depend on the dose.
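To see how individually random decays add up to a predictable whole, here is a minimal sketch in Python (the seven-day half-life and the atom count are just illustrative assumptions taken from the example above). It gives each simulated atom an independent chance of decaying on each day:

    import random

    HALF_LIFE_DAYS = 7  # illustrative half-life from the example above
    DAILY_DECAY_PROB = 1 - 0.5 ** (1 / HALF_LIFE_DAYS)  # chance a surviving atom decays on a given day

    def atoms_remaining(num_atoms=100_000, days=7):
        """Simulate each atom's fate day by day and return how many survive."""
        surviving = num_atoms
        for _ in range(days):
            surviving -= sum(1 for _ in range(surviving) if random.random() < DAILY_DECAY_PROB)
        return surviving

    print(atoms_remaining())

Run it a few times: any single atom's fate is unpredictable, but with 100,000 atoms the count left after seven days lands near 50,000 on almost every run.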

In biology, a half-life is the time taken for a substance to lose half its effects. The most obvious instance is drugs; the half-life is the time it takes for their effect to halve, or for half of the substance to leave the body. The half-life of caffeine is around 6 hours, but (as with most biological half-lives) numerous factors can alter that number. People with compromised liver function or certain genes will take longer to metabolize caffeine. Consumption of grapefruit juice has been shown in some studies to slow caffeine metabolism. It takes around 24 hours for a dose of caffeine to fully leave the body.
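As a rough illustration of that arithmetic (taking the six-hour half-life at face value and ignoring the individual factors just mentioned), the fraction of a dose still in the body follows the same halving rule:

    def fraction_remaining(hours_elapsed, half_life_hours=6.0):
        """Fraction of the original dose still in the body after hours_elapsed."""
        return 0.5 ** (hours_elapsed / half_life_hours)

    for hours in (6, 12, 18, 24):
        print(f"After {hours:2d} hours: {fraction_remaining(hours):.1%} of the dose remains")

After 24 hours, four half-lives have passed and only about 6 percent of the dose remains, which is why “around 24 hours” is a reasonable rule of thumb for caffeine to clear.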

The half-lives of drugs vary from a few seconds to several weeks. To complicate matters, biological half-lives vary for different parts of the body. Lead has a half-life of around a month in the blood, but a decade in bone. Plutonium in bone has a half-life of a century — more than double the time for the liver.

Marketers refer to the half-life of a campaign — the time taken to receive half the total responses. Unsurprisingly, this time varies among media. A paper catalog may have a half-life of about three weeks, whereas a tweet might have a half-life of a few minutes. Calculating this time is important for establishing how frequently a message should be sent.
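A hedged sketch of what that calculation might look like: given a log of when responses arrived, find the first point at which half of the eventual total had come in. The response counts below are invented purely for illustration:

    def campaign_half_life(daily_responses):
        """Return the first day by which at least half of all responses had arrived."""
        total = sum(daily_responses)
        cumulative = 0
        for day, count in enumerate(daily_responses, start=1):
            cumulative += count
            if cumulative >= total / 2:
                return day
        return None

    # hypothetical responses per day after a message goes out
    responses = [40, 120, 180, 150, 110, 80, 60, 45, 30, 20, 15, 10, 8, 5]
    print(campaign_half_life(responses))  # prints 4 for this made-up data

The catch is that the true total is only known after the campaign has fully run its course; in practice it is usually estimated from comparable past campaigns in the same medium.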

“Every day that we read the news we have the possibility of being confronted with a fact about our world that is wildly different from what we thought we knew.”

— Samuel Arbesman

The Half-Life of Facts

In The Half-Life of Facts: Why Everything We Know Has an Expiration Date, Samuel Arbesman (see our Knowledge Project interview) posits that facts decay over time until they are no longer facts or perhaps no longer complete. According to Arbesman, information has a predictable half-life: the time taken for half of it to be replaced or disproved. Over time, one group of facts replaces another. As our tools and knowledge become more advanced, we can discover more — sometimes new things that contradict what we thought we knew, sometimes nuances about old things. Sometimes we discover a whole area that we didn’t know about.

The rate of these discoveries varies. Our body of engineering knowledge changes more slowly, for example, than does our body of psychological knowledge.

Arbesman studied the nature of facts. The field he works in, scientometrics (the quantitative study of science itself), traces back to 1947, when Derek J. de Solla Price was arranging a complete set of the Philosophical Transactions of the Royal Society on his shelf. Price noticed something surprising: stacked in chronological order, the volumes formed an exponential curve. His curiosity piqued, he began to check whether the same growth applied to science as a whole. Price established that the quantity of scientific knowledge was doubling every 15 years. This meant that some of the existing information had to be rendered obsolete with time.

Scientometrics shows us that facts are always changing, and much of what we know is (or soon will be) incorrect. Indeed, much of the available published research, however often it is cited, has never been reproduced and cannot be considered established. In a controversial paper entitled “Why Most Published Research Findings Are False,” John Ioannidis covers the rampant nature of poor science. Many researchers are incentivized to find results that will please those giving them funding. Intense competition makes it essential to find new information, even if it is found in a dubious manner. Yet we all have a tendency to turn a blind eye when beliefs we hold dear are disproved and to pay attention only to information confirming our existing opinions.

As an example, Arbesman points to the number of chromosomes in a human cell. Up until 1956, 48 was the accepted number that medical students were taught. (In 1953, it had been declared an established fact by a leading cytologist.) Yet in 1956, two researchers, Joe Hin Tjio and Albert Levan, made a bold assertion: they declared the true number to be 46. During their research, Tjio and Levan could never find the number of chromosomes they expected. Discussing the problem with their peers, they discovered they were not alone. Plenty of other researchers found themselves two chromosomes short of the expected 48, and many had even abandoned their work because of this perceived error. But Tjio and Levan were right (for now, anyway). Although an extra two chromosomes seems like a minor mistake, we don’t know the opportunity costs of the time researchers invested in faulty hypotheses or the value of the work that was abandoned. It was an emperor’s-new-clothes situation: anyone counting 46 chromosomes assumed they were the ones making the error.

As Arbesman puts it, facts change incessantly. Many of us have seen the ironic (in hindsight) doctor-endorsed cigarette ads from the past. A glance at a newspaper will doubtless reveal that meat or butter or sugar has gone from deadly to saintly, or vice versa. We forget that laughable, erroneous beliefs people once held are not necessarily any different from those we now hold. The people who believed that the earth was the center of the universe, or that some animals appeared out of nowhere or that the earth was flat, were not stupid. They just believed facts that have since decayed. Arbesman gives the example of a dermatology test that had the same question two years running, with a different answer each time. This is unsurprising considering the speed at which our world is changing.

As Arbesman points out, in the last century the world’s population has swelled from 2 billion to 7 billion, we have taken on space travel, and we have altered the very definition of science.

Our world seems to be in constant flux. With our knowledge changing all the time, even the most informed people can barely keep up. All this change may seem random and overwhelming (Dinosaurs have feathers? When did that happen?), but it turns out there is actually order within the shifting noise. This order is regular and systematic, and it can be described by science and mathematics.

The order Arbesman describes mimics the decay of radioactive elements. Whenever new information is discovered, we can be sure it will break down and be proved wrong at some point. As with a radioactive atom, we don’t know precisely when that will happen, but we know it will occur at some point.

If we zoom out and look at a particular body of knowledge, the random decay becomes orderly. Through probabilistic thinking, we can predict the half-life of a group of facts with the same certainty with which we can predict the half-life of a radioactive atom. The problem is that we rarely consider the half-life of information. Many people assume that whatever they learned in school remains true years or decades later. Medical students who learned in university that cells have 48 chromosomes would not learn later in life that this is wrong unless they made an effort to do so.

OK, so we know that our knowledge will decay. What do we do with this information? Arbesman says,

… simply knowing that knowledge changes like this isn’t enough. We would end up going a little crazy as we frantically tried to keep up with the ever changing facts around us, forever living on some sort of informational treadmill. But it doesn’t have to be this way because there are patterns. Facts change in regular and mathematically understandable ways. And only by knowing the pattern of our knowledge evolution can we be better prepared for its change.

Recent initiatives have sought to calculate the half-life of an academic paper. Ironically, academic journals have largely neglected research into how people use them and how best to fund the efforts of researchers. Research by Philip Davis measures the time taken for a paper to receive half of its total downloads. Davis’s results are compelling. While most forms of media have half-lives measured in days or even hours, 97 percent of academic papers have a half-life longer than a year. Engineering papers skew slightly shorter than other fields: 6 percent of them (double the overall average) have a half-life of under a year, which makes sense considering what we looked at earlier in this post. Health and medical publications have the shortest half-lives, at two to three years, while physics, mathematics, and humanities publications have the longest, at two to four years.

The Half-Life of Secrets

According to Peter Swire, writing in “The Declining Half-Life of Secrets,” the half-life of secrets (by which Swire generally means classified information) is shrinking. In the past, a government secret could be kept for over 25 years. Nowadays, hacks and leaks have shrunk that time considerably. Swire writes:

During the Cold War, the United States developed the basic classification system that exists today. Under Executive Order 13526, an executive agency must declassify its documents after 25 years unless an exception applies, with stricter rules if documents stay classified for 50 years or longer. These time frames are significant, showing a basic mind-set of keeping secrets for a time measured in decades.

Swire notes that there are three main causes: “the continuing effects of Moore’s Law — or the idea that computing power doubles every two years, the sociology of information technologists, and the different source and methods for signals intelligence today compared with the Cold War.” One factor is that spreading leaked information is easier than ever. In the past, it was often difficult to get information published. Newspapers feared legal repercussions if they shared classified information. Anyone can now release secret information, often anonymously, as with WikiLeaks. Governments cannot as easily rely on media gatekeepers to cover up leaks.

Rapid changes in technology or geopolitics often reduce the value of classified information, so the value of some (though not all) secrets decays with a half-life of its own. Sometimes it’s days or weeks, and sometimes it’s years. For some secrets, it’s not worth investing the massive amount of computing time that would be needed to break them, because by the time you cracked the code, the information you wanted might have expired.

(As an aside, if you were to invert the problem of all these credit card and SSN leaks, you might conclude that reducing the value of possessing this information would be more effective than spending money to secure it.)

“Our policy (at Facebook) is literally to hire as many talented engineers as we can find. The whole limit in the system is that there are not enough people who are trained and have these skills today.”

— Mark Zuckerberg

The Half-Lives of Careers and Business Models

The issue with information having a half-life should be obvious. Many fields depend on individuals with specialized knowledge, learned through study or experience or both. But what if those individuals are failing to keep up with changes and clinging to outdated facts? What if your doctor is offering advice that has been rendered obsolete since they finished medical school? What if your own degree or qualifications are actually useless? These are real problems, and knowing about half-lives will help you make yourself more adaptable.

While figures for the half-lives of most knowledge-based careers are hard to find, we do know the half-life of an engineering career. A century ago, it would take 35 years for half of what an engineer learned when earning their degree to be disproved or replaced. By the 1960s, that time span shrank to a mere decade. Today that figure is probably even lower.

In a 1966 paper entitled “The Dollars and Sense of Continuing Education,” Thomas Jones calculated the effort that would be required for an engineer to stay up to date, assuming a 10-year half-life. According to Jones, an engineer would need to devote at least five hours per week, 48 weeks a year, to stay up to date with new advancements. A typical degree requires about 4800 hours of work. Within 10 years, the information learned during 2400 of those hours would be obsolete. The five-hour figure does not include the time necessary to revise forgotten information that is still relevant. A 40-year career as an engineer would require 9600 hours of independent study.

Keep in mind that Jones made his calculations in the 1960s. Modern estimates place the half-life of an engineering degree at between 2.5 and 5 years, requiring between 10 and 20 hours of study per week. Welcome to the treadmill, where you have to run faster and faster so that you don’t fall behind.
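Jones’s figures can be reproduced with a simple back-of-envelope model: assume a degree represents about 4800 hours of material and that roughly half of it must be replaced over each half-life. This is only a sketch of that arithmetic, not a claim about Jones’s exact method:

    def weekly_study_hours(degree_hours=4800, half_life_years=10, weeks_per_year=48):
        """Hours per week needed to replace the half of a degree that decays over one half-life."""
        hours_per_year = (degree_hours / 2) / half_life_years
        return hours_per_year / weeks_per_year

    for half_life in (10, 5, 2.5):
        print(f"{half_life:>4}-year half-life: about {weekly_study_hours(half_life_years=half_life):.0f} hours/week")

A 10-year half-life gives Jones’s five hours per week (and 5 × 48 × 40 = 9600 hours over a 40-year career); half-lives of 2.5 to 5 years give the modern 10 to 20 hours per week.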

Unsurprisingly, putting in this kind of time is simply impossible for most people. The result is that the typical engineering career keeps getting shorter, along with a bias towards hiring recent graduates. A partial escape from this time-consuming treadmill is to recognize that the need for learning never ends. Once you accept that, it becomes easier to place time and emphasis on developing heuristics and systems that foster learning. The faster the pace of knowledge change, the more valuable the skill of learning becomes.

A study by PayScale found that the median age of workers in most successful technology companies is substantially lower than that of other industries. Of 32 companies, just six had a median worker age above 35, despite the average across all workers being just over 42. Eight of the top companies had a median worker age of 30 or below — 28 for Facebook, 29 for Google, and 26 for Epic Games. The upshot is that salaries are high for those who can stay current while gaining years of experience.

In a similar vein, business models have ever-shrinking half-lives. The nature of capitalism is that you have to be better this year than you were last year — not to gain market share but to maintain what you already have. If you want to get ahead, you need asymmetry; otherwise, you get lost in trench warfare. How long would it take for half of Uber’s or Facebook’s business model to become irrelevant? It’s hard to imagine it being more than a couple of years, or even months.

In The Business Model Innovation Factory: How to Stay Relevant When the World Is Changing, Saul Kaplan highlights the changing half-lives of business models. In the past, models could last for generations. The majority of CEOs oversaw a single business for their entire careers. Business schools taught little about agility or pivoting. Kaplan writes:

During the industrial era once the basic rules for how a company creates, delivers, and captures value were established[,] they became etched in stone, fortified by functional silos, and sustained by reinforcing company cultures. All of a company’s DNA, energy, and resources were focused on scaling the business model and beating back competition attempting to do a better job executing the same business model. Companies with nearly identical business models slugged it out for market share within well-defined industry sectors.


Those days are over. The industrial era is not coming back. The half-life of a business model is declining. Business models just don’t last as long as they used to. In the twenty-first century business leaders are unlikely to manage a single business for an entire career. Business leaders are unlikely to hand down their businesses to the next generation of leaders with the same business model they inherited from the generation before.

The Burden of Knowledge

The flip side of a half-life is the time it takes for something to double. A useful guideline for calculating doubling time is to divide 70 by the percentage growth rate. This formula isn’t perfect, but it gives a good indication. Known as the Rule of 70, it applies only to exponential growth, where the relative growth rate remains constant, as with compound interest.

The higher the rate of growth, the shorter the doubling time. For example, if the population of a city is increasing by 2 percent per year, we divide 70 by 2 to get a doubling time of 35 years. The Rule of 70 is a useful heuristic; population growth of 2 percent might seem low, but your perspective might change when you consider that the city’s population could double in just 35 years. The Rule of 70 can also be used to calculate the time for an investment to double in value; for example, $100 at 7 percent compound interest will double in just a decade and quadruple in 20 years. The average newborn baby doubles its birth weight in under four months. The average doubling time for a tumor is also four months.
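A minimal sketch comparing the Rule of 70 with the exact doubling time (ln 2 divided by ln(1 + r), with r the growth rate as a decimal) shows how good the approximation is for modest growth rates:

    import math

    def doubling_time_rule_of_70(growth_rate_percent):
        """Approximate doubling time, in periods, via the Rule of 70."""
        return 70 / growth_rate_percent

    def doubling_time_exact(growth_rate_percent):
        """Exact doubling time for steady compound growth at the given rate per period."""
        return math.log(2) / math.log(1 + growth_rate_percent / 100)

    for rate in (2, 7):  # the city-population and compound-interest examples above
        print(f"{rate}% growth: Rule of 70 gives {doubling_time_rule_of_70(rate):.1f} periods, "
              f"exact value is {doubling_time_exact(rate):.1f}")

The rule also runs in reverse: dividing 70 by a doubling time gives a rough growth rate, so a body of knowledge that doubles every 35 years is growing at about 2 percent per year.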

We can get a sense of how quickly information changes from figures for how long it takes a body of knowledge to double in size. The figures quoted by Arbesman (drawn from Little Science, Big Science … and Beyond by Derek J. de Solla Price) are compelling, including:

  • Time for the number of entries in a dictionary of national biographies to double: 100 years
  • Time for the number of universities to double: 50 years
  • Time for the number of known chemical compounds to double: 15 years
  • Time for the number of known asteroids to double: 10 years

Arbesman also gives figures for the time taken for the available knowledge in a particular field to double, including:

  • Medicine: 87 years
  • Mathematics: 63 years
  • Chemistry: 35 years
  • Genetics: 32 years

The doubling of knowledge increases the learning load over time. As a body of knowledge doubles, so does the cost of wrapping your head around what we already know. This cost is the burden of knowledge. To be the best in a general field today, you have to know more than the person who was the best only 20 years ago. Not only do you have to be better to be the best, but you also have to be better just to stay in the game.

The corollary is that because there is so much to know, we specialize in very niche areas. This makes it easier to grasp the existing body of facts, keep up to date on changes, and rise to the level of expert. The problem is that specializing also makes it easier to see the world through the narrow focus of your specialty, makes it harder to work with other people (as niches are often dominated by jargon), and makes you prone to overvalue the new and novel.


As we have seen, understanding how half-lives work has numerous practical applications, from determining when radioactive materials will become safe to figuring out effective drug dosages. Half-lives also show us that if we spend time learning something that changes quickly, we might be wasting our time. Like Alice in Through the Looking-Glass — a perfect example of the Red Queen Effect — we have to run faster and faster just to stay where we are. So if we want our knowledge to compound, we’ll need to focus on the invariant general principles.



The Law of Unintended Consequences: Shakespeare, Cobra Breeding, and a Tower in Pisa

“When we try to pick out anything by itself, we find it hitched to everything else in the universe”

— John Muir

In 1890, a New Yorker named Eugene Schieffelin took his intense love of Shakespeare to the next level.

Most Shakespeare fanatics channel their interest by going to see performances of the plays, meticulously analyzing them, or reading everything they can about the playwright's life. Schieffelin wanted more; he wanted to look out his window and see the same kind of birds in the sky that Shakespeare had seen.

Inspired by a mention of starlings in Henry IV, Part 1, Schieffelin released 100 of the non-native birds in Central Park over two years. (He wasn't acting alone – he had the support of scientists and the American Acclimatization Society.) We can imagine him watching the starlings flutter off into the park and hoping for them to survive and maybe breed. Which they did. In fact, the birds didn't just survive; they thrived and spread like weeds.

Unfortunately, Schieffelin's plan worked too well. Far, far too well. The starlings multiplied exponentially, spreading across America at an astonishing rate. Today, we don't even know how many of them live in the U.S., with official estimates ranging from 45 million to 200 million. Most, if not all, of them are descended from Schieffelin's initial 100 birds. The problem is that, as an alien species introduced into an ecosystem they were never naturally part of, the starlings wreak havoc, and the local species had (and still have) no defense against them.

If you live in an area with a starling population, you are doubtless familiar with the hardy, fearless nature of these birds. They gather in enormous flocks, destroying crops, snatching food supplies from native birds, and scavenging in cities. Starlings now consume millions of dollars' worth of crops each year and have caused fatal airplane crashes. They also spread diseases, including E. coli and Salmonella infections.

Schieffelin's starlings are a prime example of unintended consequences. In Best Laid Plans: The Tyranny of Unintended Consequences and How to Avoid Them, William A. Sherden writes:

Sometimes unintended consequences are catastrophic, sometimes beneficial. Occasionally their impacts are imperceptible, at other times colossal. Large events frequently have a number of unintended consequences, but even small events can trigger them. There are numerous instances of purposeful deeds completely backfiring, causing the exact opposite of what was intended.

We all know that our actions and decisions can have surprising reverberations that have no relation to our initial intentions. This is why second-order thinking is so crucial. Sometimes we can open a Pandora’s box or kick a hornet’s nest without realizing it. In a dynamic world, you can never do merely one thing.

Unintended consequences arise because of the chaotic nature of systems. When Schieffelin released the starlings, he did not know the minutiae of the ecological and social systems they would be entering. As the world becomes more complicated and interconnected, the potential for ever more serious unintended consequences grows.

All too often when we mess with complicated systems, we have no more control over the outcomes than we would if we performed shamanistic dances. The simple fact is that we cannot predict how a system will behave through mathematical models or computer simulations or basic concepts like cause and effect or supply and demand.

In The Gene: An Intimate History, Siddhartha Mukherjee writes that unintended consequences can be the result of scientists failing to appreciate the complexity of systems:

The parables of such scientific overreach are well-known: foreign animals, introduced to control pests, become pests in their own right; the raising of smokestacks, meant to alleviate urban pollution, releases particulate effluents higher in the air and exacerbates pollution; stimulating blood formation, meant to prevent heart attacks, thickens the blood and results in an increased risk of blood clots to the heart.

Mukherjee notes that unintended consequences can also be the result of people thinking that something is more complex than it actually is:

… when nonscientists overestimate complexity — “No one can possibly crack this code” — they fall into the trap of unanticipated consequences. In the early 1950s, a common trope among some biologists was that the genetic code would be so context dependent — so utterly determined by a particular cell in a particular organism and so horribly convoluted — that deciphering it would be impossible. The truth turned out to be quite the opposite: just one molecule carries the code, and just one code pervades the biological world. If we know the code, we can intentionally alter it in organisms, and ultimately in humans.

As was mentioned in the quote from Sherden above, sometimes perverse unintended consequences occur when actions have the opposite of the desired effect. From The Nature of Change or the Law of Unintended Consequences by John Mansfield:

An example of the unexpected results of change is found in the clearing of trees to make available more agricultural land. This practice has led to rising water tables and increasing salinity that eventually reduces the amount of useable land.

Some additional examples:

  • Suspending problematic children from school worsens their behavior, as they are more likely to engage in criminal behavior when outside school.
  • Damage-control lawsuits can lead to negative media attention and cause more harm (as occurred in the notorious McLibel case).
  • Banning alcohol has, time and time again, led to higher consumption and the formation of criminal gangs, resulting in violent deaths.
  • Abstinence-based sex education has repeatedly been followed by a rise in teenage pregnancies.
  • Many people who experience a rodent infestation will stop feeding their cats, assuming that this will encourage them to hunt more. The opposite occurs: well-fed cats are better hunters than hungry ones.
  • When the British government offered financial rewards for people who killed and turned in cobras in India, people, reacting to incentives, began breeding the snakes. Once the reward program was scrapped, the population of cobras in India rose as people released the ones they had raised. The same thing occurred in Vietnam with rats.

This phenomenon, of the outcome being the opposite of the intended one, is known as “blowback” or the Cobra effect, for obvious reasons. Just as with iatrogenics, interventions often lead to worse problems.

Sometimes the consequences are mixed and take a long time to appear, as with the famous Leaning Tower of Pisa. From The Nature of Change again:

When the tower was built, it was undoubtedly intended to stand vertical. It took about 200 years to complete, but by the time the third floor was added, the poor foundations and loose subsoil had allowed it to sink on one side. Subsequent builders tried to correct this lean and the foundations have been stabilised by 20th-century engineering, but at the present time, the top of the tower is still about 15 feet (4.5 meters) from the perpendicular. Along with the unexpected failure of the foundations is the unexpected consequence of the Leaning Tower of Pisa becoming a popular tourist attraction, bringing enormous revenue to the town.

It's important to note that unintended consequences can sometimes be positive. Someone might have a child because they think parenthood will be a fulfilling experience. If their child grows up and invents a drug that saves thousands of lives, that consequence is positive yet unplanned. Pokémon Go, strange as it seemed, encouraged players to get more exercise. The no-man's-lands created during conflicts can preserve the habitats of local wildlife, as occurred along the Berlin Wall. Sunken ships form coral reefs where wildlife thrives. Typically, though, when we talk about the law of unintended consequences, we're talking about negative consequences.

“Any endeavor has unintended consequences. Any ill-conceived endeavor has more.”

— Stephen Tobolowsky, The Dangerous Animals Club

The Causes of Unintended Consequences

By their nature, unintended consequences can be a mystery. I’m not a fan of the term “unintended consequences,” though, as it’s too often a scapegoat for poor thinking. There are always consequences, whether you see them or not.

When we reflect on the roots of consequences that we failed to see but could have, we are liable to build a narrative that packages a series of chaotic events into a neat chain of cause and effect. A chain that means we don’t have to reflect on our decisions to see where we went wrong. A chain that keeps our egos intact.

Sociologist Robert K. Merton has identified five potential causes of consequences we failed to see:

  1. Our ignorance of the precise manner in which systems work.
  2. Analytical errors or a failure to use Bayesian thinking (not updating our beliefs in light of new information).
  3. Focusing on short-term gain while forgetting long-term consequences.
  4. The requirement for or prohibition of certain actions, despite the potential long-term results.
  5. The creation of self-defeating prophecies (for example, due to worry about inflation, a central bank announces that it will take drastic action, thereby accidentally causing crippling deflation amidst the panic).

Most unintended consequences are just unanticipated consequences.

Drawing on our knowledge of logical fallacies and mental models, and keeping Schieffelin’s starlings in mind, we can identify several more possible causes of consequences that we likely should have seen in advance but didn’t. Here they are:

Over-reliance on models and predictions — mistaking the map for the territory. Schieffelin could have made a predictive model of how his starlings would breed and affect their new habitat. The issue is that models are not gospel, and the outcomes they predict do not always match the real world. All models are wrong, but that doesn’t mean they’re never useful. You have to understand both the model and the terrain it’s based on. Schieffelin’s predictive model might have told him that the starlings' breeding habits would have a very minor impact on their new habitat. But in reality, the factors involved were too diverse and complex to take into account. The starlings bred faster and interacted with their new environment in ways that would have been hard to predict. We can assume that he based his estimates of the starlings' future on their behavior in their native range.

Survivorship bias. Unintended consequences can also occur when we fail to take into account all of the available information. When predicting an outcome, we have an inherent tendency to search for other instances in which the desired result occurred. Nowadays, when anyone considers introducing a species to a new area, they are likely to hear about Schieffelin’s starlings. And Schieffelin was likely influenced by stories about, perhaps even personal experiences with, successfully introducing birds into new habitats, unaware of the many ecosystem-tampering experiments that had gone horribly wrong.

The compounding effect of consequences. Unintended results do not progress in a linear manner. Just as untouched money in a savings account compounds, the population of Schieffelin’s starlings compounded over the following decades. Each new bird that was hatched meant more hatchlings in future generations. At some point, the bird populations reached critical mass and no attempts to check their growth could be successful. As people in one area shot or poisoned the starlings, the breeding of those elsewhere continued.
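To get a feel for that compounding, here is a rough, purely illustrative calculation. It assumes the upper estimate of 200 million birds, treats growth as smooth (which it certainly was not), and uses round numbers for the time span:

    import math

    initial_birds = 100              # Schieffelin's releases of 1890-1891
    current_estimate = 200_000_000   # upper end of the estimates quoted earlier
    years = 125                      # roughly 1890 to the recent estimates

    doublings = math.log2(current_estimate / initial_birds)
    annual_growth = (current_estimate / initial_birds) ** (1 / years) - 1

    print(f"about {doublings:.0f} doublings, one roughly every {years / doublings:.1f} years "
          f"({annual_growth:.1%} growth per year)")

The exact numbers are unknowable; the point is the shape of the curve, in which a modest growth rate sustained for a century turns a hundred birds into hundreds of millions.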

Denial. Just as we seek out confirmatory evidence, we are inclined to deny the existence of disconfirming information. We may be in denial about the true implications of actions. Governments in particular tend to focus on the positive consequences of legislation while ignoring the costs. Negative unintended consequences do not always result in changes being made. Open-plan offices are another instance; they were first designed to encourage collaboration and creativity. Even though research has shown that they have the opposite effect, many companies continue to opt for open offices. They sound like a good idea, and airy offices with beanbags and pot plants might look nice, but those who continue building them are in obvious denial.

Failure to account for base rates. When we neglect to ask how often similar actions have worked out in the past, we are failing to account for base rates. Schieffelin likely failed to consider the base rate of success for introducing a species into a new habitat.

Curiosity. We sometimes perform actions out of curiosity, without any idea of the potential consequences. The problem is that our curiosity can lead us to behave in reckless, unplanned, or poorly thought-through ways. The release of Schieffelin’s starlings was in part the result of widespread curiosity about the potential for introducing European species to America.

The tendency to want to do something. We are all biased towards action. We don’t want to sit around — we want to act and make changes. The problem is that sometimes doing nothing is the best route to take. In the case of Schieffelin’s starlings, he was biased towards making alterations to the wildlife around him to bring Shakespeare’s world to life, even though leaving nature alone is usually preferable.

Mental Models for Avoiding or Minimizing Unintended Consequences

We cannot eliminate unintended consequences, but we can become more aware of them through rational thinking techniques. In this section, we will examine some ways of working with and understanding the unexpected. Note that the examples provided here are simplifications of complex issues. The observations made about them are those of armchair critics, not those involved in the actual decision making.

Inversion. When we invert our thinking, we consider what we want to avoid, not what we want to cause. Rather than seeking perfection, we should avoid stupidity. By considering potential unintended consequences, we can then work backwards. For example, laws requiring cyclists to wear helmets at all times have in some places been followed by a rise in fatalities. (People who feel safer behave in a riskier manner.) If we use inversion, we know we do not want any change in road safety laws to cause more injuries or deaths. So, we could consider creating stricter laws around risky cycling and enforcing penalties for those who fail to follow them.

Another example is laws which aim to protect endangered animals by preventing new developments on land where rare species live. Imagine that you are a landowner, about to close a lucrative deal. You look out at your land and notice a smattering of endangered wildflowers. Do you cancel the sale and leave the land to the flowers? Of course not. Unless you are exceptionally honest, you grab a spade, dig up the flowers, and keep them a secret. Many people shoot, poison, remove, or otherwise harm endangered animals and plants. If lawmakers used inversion, they would recognize that they want to avoid those consequences, and work backwards.

We have to focus on avoiding the worst unintended consequences, rather than on controlling everything.

Looking for disconfirming evidence. Instead of looking for information that confirms that our actions will have the desired consequences, we should rigorously search for evidence that they will not. How have similar actions played out in the past? Take the example of minimum-wage and worker-protection laws. Every country has people pushing for a higher minimum wage and for more protection of workers. If we search for disconfirming evidence, we see that these laws can do more harm than good. On paper, the French appear to have perfected labor laws. All employees are, on the face of it, blessed with a minimum wage of 17,764 euros per year, a 35-hour work week, five weeks of paid holiday, and strict protection against redundancy (layoffs). So, why don’t we all just move to France? Because these measures result in a lot of negative unintended consequences. Unemployment rates are high, as many businesses cannot afford to hire many employees. Foreign companies are reluctant to hire French workers, as they can’t fire them during tough economic times. Everyone deserves a fair minimum wage and protection from abuse of their rights, but France illustrates how taking this principle too far can have negative unintended consequences.

Understanding our circle of competence. Each of us has areas we understand well and are familiar with. When we act outside our circle of competence, we increase the risk of unintended consequences. If you decide to fix your boiler without consulting a plumber, you are acting outside of your circle of competence and have a good chance of making the problem worse. When the British government implemented bounties for dead cobras in India, their circle of competence did not include an understanding of the locals. Perhaps if they had consulted some Indian people and asked how they would react to such a law, they could have avoided causing a rise in the cobra population.

Second-order thinking. We often forget that our actions can have two layers of consequences, of which the first might be intended and the second unintended. With Schieffelin’s starlings, the first layer of consequences was positive and as intended. The birds survived and bred, and Shakespeare fans living in New York got to feel a bit closer to the iconic playwright. But the negative second layer of consequences dwarfed the first layer. For the parents of a child who grows up to invent a life-saving drug, the first layer of consequences is that those parents (presumably) have a fulfilling experience. The second layer of consequences is that lives are saved. When we use second-order thinking, we ask: what could happen? What if the opposite of what I expect happens? What might the results be a year, five years, or a decade from now?


Most unintended consequences are just unanticipated consequences. And in the world of consequences, intentions often don't matter. Intentions, after all, apply only to positive anticipated consequences. Only in rare circumstances would someone intend to cause negative consequences.

So when we make decisions, we must ask what the consequences will be. This is where having a toolbox of mental models becomes helpful.



Winning the Battle, Losing the War

“War ends at the moment when peace permanently wins out. Not when the articles of surrender are signed or the last shot is fired, but when the last shout of a sidewalk battle fades, when the next generation starts to wonder whether the whole thing ever really happened.”

— Lee Sandlin

The Basics

In a classic American folktale, a stubborn railroad worker decides to prove his skill by competing with a drilling machine. John Henry, enraged to hear that machines might take his job, claims that his digging abilities are superior. A contest is arranged. He goes head to head with the new drill. The result is impressive — the drill breaks after three meters, whereas John Henry makes it to four meters in the same amount of time. As the other workers begin to celebrate his victory, he collapses and dies of exhaustion.

John Henry might have been victorious against the drill, but that small win was meaningless in the face of his subsequent death. In short, we can say that he won the battle but lost the war.

Winning a battle but losing the war is a military mental model that refers to achieving a minor victory that ultimately results in a larger defeat, rendering the victory empty or hollow. It can also refer to gaining a small tactical advantage that corresponds to a wider disadvantage.

One particular type of hollow victory is the Pyrrhic victory, which Wikipedia defines as a victory that “inflicts such a devastating toll on the victor that it is tantamount to defeat.” That devastating toll can come in the form of an enormous number of casualties, the wasting of resources, high financial costs, damage to land, and other losses. Or, in that folktale, the death of the railroad worker.

Another hollow victory occurs when you engage in a conventional war and prompt a response from an opponent who has significantly more firepower than you do. The attack on Pearl Harbor was considered a victory for the Japanese. However, by provoking an opponent with far superior forces and resources, they set something in motion that they could not control.

While the concept of a hollow victory arises in military contexts, understanding the broader principle allows you to apply it to other areas of life. It can often be helpful in the context of non-zero-sum situations, in which both parties suffer even if one has technically succeeded.


We have won a battle but lost a war whenever we achieve some minor (or even major) aim that leads to wider loss. We might win an argument with a partner over a small infraction, only to come across as hostile and damage the relationship. We may achieve a short-term professional goal by working overtime, only to harm our health and reduce our long-term productivity. We might pursue a particular career for the sake of money, but feel unfulfilled and miserable in the process.

“Grand strategy is the art of looking beyond the present battle and calculating ahead. It requires that you focus on your ultimate goal and plot to reach it.”

— Robert Greene, The 33 Strategies of War

The Original Pyrrhic Victory

The term “Pyrrhic victory” is named after the Greek king Pyrrhus of Epirus. In 280 and 279 BC, Pyrrhus’s army defeated the Romans in two major battles. Striding into Italy with 25,000 men and 20 elephants — a new sight for the Romans — Pyrrhus was confident that he could extend his empire. However, the number of lives lost in the process made the victories all but meaningless. According to Plutarch, Pyrrhus is said to have told a friend that another such victory against the Romans would “utterly undo him.”

Pyrrhus did not have access to anywhere near enough potential recruits to replenish his army. He had, after all, lost most of his men, including the majority of his friends and commanders. Meanwhile, the Romans were only temporarily defeated. They could replace their lost soldiers with relative ease. Even worse, the two losses had enraged the Romans and made them more willing to continue fighting. The chastened king gathered his remaining troops and sailed back to Greece.

The Battle of Bunker Hill

A classic example of a Pyrrhic victory is the Battle of Bunker Hill, fought on June 17th, 1775, during the American Revolutionary War. Colonial and British troops grappled for control of the strategically advantageous Bunker Hill in Massachusetts.

Four days earlier, on June 13th, the colonial army received intelligence that the British were planning to take control of the hills around Boston, which would give them greater authority over the nearby harbor. About 1200 colonial soldiers situated themselves on the hills, while others spread throughout the surrounding area. The British army, realizing this, mounted an attack.

The British army succeeded in their aim after the colonial army ran out of ammunition. Yet the Battle of Bunker Hill was anything but a true victory, because the British lost a substantial number of men, including 100 of their officers. This left the British army depleted (having sustained 1000 casualties), low on resources, and without proper management.

This Pyrrhic victory was unexpected; the British troops had far more experience and outnumbered the colonial army by almost 2:1. The Battle of Bunker Hill sapped British morale but was somewhat motivating for the colonials, who had sustained less than half the number of casualties.

In The American Revolutionary War and the War of 1812, the situation is described this way:

… the British were stopped by heavy fire from the colonial troops barricaded behind rail fences that had been stuffed with grass, hay, and brush. On the second or third advance, however, the attackers carried the redoubt and forced the surviving defenders, mostly exhausted and weaponless, to flee. …

If the British had followed this victory with an attack on Dorchester Heights to the South of Boston, it might have been worth the heavy cost. But, presumably, because of their severe losses and the fighting spirit displayed by the rebels, the British commanders abandoned or indefinitely postponed such a plan. Consequently, after Gen. George Washington took colonial command two weeks later, enough heavy guns and ammunition had been collected that he was able in March 1776 to seize and fortify Dorchester Heights and compel the British to evacuate Boston.… Also, the heavy losses inflicted on the British in the Battle of Bunker Hill bolstered the Americans' confidence and showed that the relatively inexperienced colonists could indeed fight on par with the mighty redcoats of the British army.

In The War of the American Revolution, Robert W. Coakley writes of the impact of Bunker Hill:

Bunker Hill was a Pyrrhic victory, its strategic effect practically nil since the two armies remained in virtually the same position they had held before. Its consequences, nevertheless, cannot be ignored. A force of farmers and townsmen, fresh from their fields and shops, with hardly a semblance of orthodox military organization, had met and fought on equal terms with a professional British army. …[N]ever again would British commanders lightly attempt such an assault on Americans in fortified positions.

“I wish we could sell them another hill at the same price.”

— Nathanael Greene, general in the colonial army

The Battle of Borodino

Fought on September 7, 1812, the Battle of Borodino was the bloodiest day of the Napoleonic Wars. The French army (led by Napoleon) sought to invade Russia. Roughly a quarter of a million soldiers fought at the Battle of Borodino, with more than 70,000 casualties. Although the French army succeeded in forcing the Russians into retreat, their victory was scarcely a triumphant one. Both sides ended up depleted and low on morale without having achieved their respective aims.

The Battle of Borodino is considered a Pyrrhic victory because the French army destroyed itself in the process of winning the battle and capturing Moscow. The Russians had no desire to surrender, and the conflict was more costly for the French than for their opponent.

By the time Napoleon's men began their weary journey back to France, they had little reason to consider themselves victorious. The Battle of Borodino had served no clear purpose in the end, as no lasting advantage was gained. Infighting broke out, and Napoleon eventually lost both the war and his role as leader of France.

History has shown again and again that attempting to take over Russia is rarely a good idea. Napoleon was at a serious disadvantage to begin with. The country's size and climate made tactical movements difficult. Bringing supplies in proved nearly impossible, and the French soldiers easily succumbed to cold, starvation, and infectious diseases. Even as they hastened to retreat, the Russian army recovered its lost men quickly and continued to whittle away at the remaining French soldiers. Of the original 95,000 French troops, a mere 23,000 returned from Russia (exact figures are impossible to ascertain due to each side's exaggerating or downplaying the losses). The Russian approach to defeating the French is best described as attrition warfare – a stubborn, unending wearing down. Napoleon might have won the Battle of Borodino, but in the process he lost everything he had built during his time as a leader and his army was crushed.


Something we can note from both Borodino and Bunker Hill is that Pyrrhic victories often serve as propaganda in the long term – for the losing side, not for the victors. As the adage goes, history is written by the winners, and to the victor belong the spoils. Except that this doesn't quite ring true when it comes to Pyrrhic victories, which tend to become a source of shame for the winning side. In the case of Borodino, the battle became an emblem of patriotism and pride for the Russians.

“[I]t is much better to lose a battle and win the war than to win a battle and lose the war. Resolve to keep your eyes on the big ball.”

— David J. Schwartz, The Magic of Thinking Big

Hollow Victories in Business

A company has won a Pyrrhic victory when it leverages all available resources to take over another company, only to be ruined by the financial costs and the loss of key employees. Businesses can also ruin themselves over lawsuits that drain resources, distract managers, and get negative attention in the press.

American Apparel is one instance of a company ending up bankrupt, partially as a result of mounting legal fees. The exact causes of the company’s downfall are not altogether understood, though a number of lawsuits are believed to have been a major factor. It began with a series of sexual harassment lawsuits against founder Dov Charney.

American Apparel’s board of directors fired Charney after the growing fees associated with defending him began harming the company’s finances (as well as its reputation). Charney responded by attempting a hostile takeover, as unwilling to surrender control of the company he founded as Czar Alexander was to surrender Moscow to Napoleon. More lawsuits followed as American Apparel shareholders and board members seemingly sued everyone in sight and were sued by suppliers, by more than 200 former employees, and by patent holders.

As everyone involved focused on winning their respective battles, the company ended up filing for bankruptcy and losing the war. In short, everyone suffered substantial losses, from Charney himself to the many factory workers who were made redundant.

Hollow Victories in Court Cases

Hollow victories are common in the legal system. For example, consider the following scenarios:

  • A divorced couple engages in a lengthy, tedious legal battle over the custody of their children. Eventually, they are given shared custody. Yet the tense confrontations associated with the court case have alienated the children from their parents and removed tens of thousands of dollars from the collective purse.
  • A man unknowingly plants trees that slightly cross over onto his neighbor's property. The man tries to come to a compromise, offering to trim the trees or to let the neighbor use part of his property in exchange for leaving the trees up. No dice; the neighbor sticks to his guns. Unable to resolve the matter, the neighbor sues the man and wins, forcing him to cut down the trees and pay all legal expenses. While the neighbor has technically won the case, he now has an enemy next door, and enemies up and down the street who think he's a Scrooge.
  • A freelance illustrator discovers that her work has been used without permission or payment by a non-profit group that printed her designs on T-shirts and sold them, with the proceeds going to charity. The illustrator sues the group for copyright infringement and wins, but the case costs both her and the charity substantial legal fees. Unhappy that the illustrator sued a charity instead of reaching a compromise, the public boycotts her, and she has trouble selling her future work.
  • A well-known business magnate discovers that his children are suing him for the release of trust fund money they believe they are owed. He counter-sues, arguing publicly that his children are greedy and don't deserve the money. He wins the case on a legal technicality, but both his public image and his relationships with his children are tarnished. He's kept his money, but not his happiness.

A notable instance of a legal Pyrrhic victory was the decade-long McLibel case, the longest-running trial in English legal history. The fast-food chain McDonald's sued two environmental activists, Helen Steel and David Morris, over leaflets they had distributed. McDonald's claimed the contents of the leaflets were false; Steel and Morris claimed they were true.

The court found that both sides were partly wrong – some of the leaflets' claims were borne out, while others were not. After ten years of tedious litigation and negative media attention, McDonald's won the case, but it was far from worthwhile. The £40,000 in damages the company was awarded (and never collected) was paltry compared to the millions the legal battle had cost it. Meanwhile, Steel and Morris, who represented themselves, spent only £30,000 (both had limited income and did not receive Legal Aid).

Although McDonald's did win the case, it came at enormous cost, both financial and reputational. The case attracted a great deal of media attention as a result of its David-versus-Goliath nature. The idea of two unemployed activists taking on an international corporation had an undeniable appeal, and the portrayals of McDonald's were almost unanimously negative. The case did far more harm to the company's reputation than a few leaflets distributed in London ever would have. At one point, McDonald's attempted to placate Steel and Morris by offering to donate money to a charity of their choice, provided that they stopped criticizing the company publicly and did so only “in private with friends.” The pair responded that they would accept the terms if McDonald's halted all advertising and had its staff recommend the restaurant only “in private with friends.”

“Do not be ashamed to make a temporary withdrawal from the field if you see that your enemy is stronger than you; it is not winning or losing a single battle that matters, but how the war ends.”

— Paulo Coelho, Warrior of the Light

Hollow Victories in Politics

Theresa May’s 2017 General Election win is a perfect example of a political Pyrrhic victory, as is the Brexit vote the year before.

Much like Napoleon at Borodino, David Cameron got the referendum he had promised, only to lose his role as a leader in the process. And much like the French soldiers who defeated the Russians at Borodino, only to find themselves limping home through snow and ice, the triumphant Leave voters now face a drop in wages and general quality of life, making the fulfilment of their desire to leave the European Union seem somewhat hollow. Elderly British people (the majority of whom voted to leave) must deal with shrinking pensions and potentially worse healthcare due to reduced funding. Leave voters won the battle, but at a cost that is still unknown.

Even before the shock of the Brexit vote had worn off, Britain saw a second dramatic Pyrrhic victory: Theresa May’s train-wreck General Election. Amid soaring inflation, May aimed to win a clear majority and secure her leadership. Although she was not voted out of office, her failure to win an outright majority only served to weaken her position. Continued economic decline has weakened it further.

“Victorious warriors win first and then go to war, while defeated warriors go to war first and then seek to win.”

— Sun Tzu, The Art of War

How We Can Avoid Hollow Victories in Our Lives

One important lesson we can learn from hollow victories is the value of focusing on the bigger picture, rather than chasing smaller goals.

One way to avoid winning a battle but losing the war is to think in terms of opportunity costs. Charlie Munger has said that “All intelligent people use opportunity cost to make decisions”; maybe what he should have said is that “All intelligent people should use opportunity cost to make decisions.”

Consider a businessman, well versed in opportunity-cost economics, who chooses to work late every night instead of spending time with his family, whom he gradually alienates and becomes distanced from. The opportunity cost of the time spent at the office between 7 and 10 p.m. wasn't just TV, or dinner, or anything else he would have done at home. It was a good long-term relationship with his wife and children! Talk about opportunity costs! Putting in the late hours may have helped him with the “battle” of business, but what about the “war” of life? Unfortunately, many people realize too late that they paid too high a price for their achievements or victories.

Hollow victories can occur as a result of a person or party focusing on a single goal – winning a lawsuit, capturing a hill, winning an election – while ignoring the wider implications. It's like looking at the universe by peering into one small corner of space with a telescope.

As was noted earlier, this mental model isn't relevant just in military, legal, or political contexts; hollow victories can occur in every part of our lives, including relationships, health, personal development, and careers. Understanding military tactics and concepts can teach us a great deal about being effective leaders, achieving our goals, maintaining relationships, and more.

It's obvious that we should avoid Pyrrhic victories wherever possible, but how do we do that? In spite of situations differing vastly, there are some points to keep in mind:

  • Zoom out to see the big picture. By stepping back when we get too focused on minutiae, we can pay more attention to the war, not just the battle. Imagine that you are at the gym when you feel a sharp pain in your leg. You ignore it and finish the workout, despite the pain increasing with each rep. Upon visiting a doctor, you find you have a serious injury and will be unable to exercise until it heals. If you had focused on the bigger picture, you would have stopped the workout, preventing a minor injury from getting worse, and been able to get back to your workouts sooner.
  • Keep in mind core principles and focus on overarching goals. When Napoleon sacrificed thousands of his men in a bid to take control of Moscow, he forgot his core role as the leader of the French people. His own country should have been the priority, but he chose to chase more power and ended up losing everything. When we risk something vital – our health, happiness, or relationships – we run the risk of a Pyrrhic victory.
  • Recognize that we don't have to lose our minds just because everyone else has. As Warren Buffett once said, “be fearful when others are greedy and greedy when others are fearful.” Or, as Nathan Rothschild wrote, “great fortunes are made when cannonballs fall in the harbor, not when violins play in the ballroom.” When others are thrashing to win a battle, we would do well to pay attention to the war. What can we notice that they ignore? If we can't (or don't want to) resolve the turmoil, how can we benefit from it?
  • Recognize when to give up. We cannot win every battle we engage in, but we can sometimes win the war. In some situations, the optimum choice is to withdraw or surrender to avoid irreparable problems. The goal is not the quick boost from a short-term victory; it is the valuable satisfaction of long-term success.
  • Remember that underdogs can win – or at least put up a good fight. Remember what the British learned the hard way at Bunker Hill, and what it cost McDonald's to win the McLibel case. Even if we think we can succeed against a seemingly weaker party, that victory can come at a very high cost.


Making the Most of Second Chances

We all get lucky. Once in a while we do something really stupid that could have resulted in death, but didn’t. Just the other day, I saw someone who was texting walk out into oncoming traffic, narrowly avoiding the car whose driver slammed on the brakes. As the adrenaline starts to dissipate, we realize that we don’t ever want to be in that situation again. What can we do? We can make the most of our second chances by building margins of safety into our lives.

What is a margin of safety and where can I get one?

The concept is a cornerstone of engineering. Engineers design systems to withstand significantly more than they would normally be expected to face: emergencies, unexpected loads, misuse, and degradation.

Take a bridge. You are designing a bridge to cross just under two hundred feet of river. The bridge has two lanes going in each direction. Given the average car size, the bridge could reasonably carry 50 to 60 cars at a time. At 4,000 pounds per car, your bridge needs to be able to carry at least 240,000 pounds of weight; otherwise, don’t bother building it. So that’s the minimum consideration for safety — but only the worst engineer would stop there.

Can anyone walk across your bridge? Can anyone park their car on the shoulder? What if cars get heavier? What if 20 cement trucks are on the bridge at the same time? How does the climate affect the integrity of your materials over time? You don’t want the weight capacity of the bridge to ever come close to the actual load. Otherwise, one seagull decides to land on the railing and the whole structure collapses.

Considering these questions and looking at the possibilities is how you get the right information to adjust your specs and build in a margin of safety. That margin is the difference between the load your system is expected to bear and the load it can actually withstand. So when you are designing a bridge, the first step is to figure out the maximum load it should ever see (bumper-to-bumper vehicles, hordes of tourist groups, and birds perched wing to wing), and then you design for at least double that load.
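
To make the arithmetic concrete, here is a minimal sketch in Python. The car count, the loaded cement-truck weight, and the safety factor of two are illustrative assumptions drawn loosely from the example above, not real engineering figures.

```python
# Minimal sketch of the margin-of-safety idea using the rough numbers above.
# All figures are illustrative assumptions, not real bridge-engineering data.

def required_capacity(max_expected_load_lbs: float, safety_factor: float = 2.0) -> float:
    """Design capacity = worst expected load times a safety factor."""
    return max_expected_load_lbs * safety_factor

cars = 60
avg_car_weight = 4_000                          # pounds
ordinary_load = cars * avg_car_weight           # 240,000 lbs: the bare minimum
worst_case_load = 20 * 66_000 + ordinary_load   # add, say, 20 loaded cement trucks

capacity = required_capacity(worst_case_load)

print(f"Ordinary load:     {ordinary_load:,} lbs")
print(f"Worst-case load:   {worst_case_load:,} lbs")
print(f"Design capacity:   {capacity:,.0f} lbs")
print(f"Margin of safety:  {capacity - worst_case_load:,.0f} lbs")
```

The exact numbers don't matter; the structure of the calculation does. You estimate the worst realistic load, not the average one, and then design well past it.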

Knowing that the infrastructure was designed to withstand significantly more than the anticipated maximum load makes us happy when we are on bridges, or in airplanes, or jumping on the bed in our second-story bedroom. We feel confident that many smart people have conspired to make these activities as safe as possible. We’re so sure of this that it almost never crosses our minds. Sure, occasional accidents happen. But it is remarkably reassuring that these structures can withstand quite a bit of the unexpected.

So how do we make ourselves a little more resilient? Less susceptible to the vagaries of change? Turns out that engineers aren’t the only ones obsessed with building in margins of safety. Spies are pretty good at it, too, and we can learn a lot from them.

Operation Kronstadt, by Harry Ferguson, chronicles the remarkable story of Paul Dukes, the only British secret agent working in Russia in 1919, and the equally amazing adventures of the small team that was sent in to rescue him.

Paul Dukes was not an experienced spy. He was actually a pianist. It was his deep love of Russian culture that led him to approach his government and volunteer for the mission of collecting information on Bolshevik activities in St. Petersburg. As Ferguson writes, “Paul had no military experience, let alone any experience of intelligence work and yet they were going to send him back into one of the toughest espionage environments in the world.”

However, MI6, the part of British Intelligence that Paul worked for, wasn’t exactly the powerful and well-prepared agency that it’s portrayed as today. Consider this description by Ferguson: “having dragged Paul out of Russia, MI6 did not appear to have given much thought to how he should get back or how he would survive once he got there: ‘As to the means whereby you gain access to the country, under what cover you will live there, and how you will send out reports, we shall leave it to you, being best informed as to the conditions’.”

So off went Paul into Russia, not as a musician but as a spy. No training, no gadgets, no emergency network, no safe houses. Just a bunch of money and sentiments of ‘good luck’. So it is all the more amazing that Paul Dukes turned out to be an excellent spy. After reading his story, I think the primary reason for this is that he learned extremely quickly from his experiences. One of the things he learned quickly was how to build margins of safety into his tradecraft.

There is no doubt that the prospect of death wakes us up. We don’t often think about how dangerous something can be until we almost die doing it. Then, thanks to our big brains that let us learn from experience, we adapt. We recognize that if we don’t, we might not be so lucky next time. And no one wants to rely on luck as a survival strategy.

This is where margins of safety come in. We build them to reduce the precariousness of chance.

Imagine you are in St. Petersburg in 1919. What you have going for you is that you speak the language, understand the culture, and know the streets. Your major problem is that you have no idea how to start this spying thing. How do you get contacts and build a network in a city that is under psychological siege? The few names you have been given come from dubious sources at the border, and the people attached to those names may have been compromised, arrested, or both. You have nowhere to sleep at night, and although you have some money, it can’t buy anything, not even food, because there is nothing for sale. The whole country is on rations.

Not to mention, if by some miracle you actually get a few good contacts who give you useful information, how do you get it home? There are no cell phones or satellites. Your passport is fake and won’t hold up to any intense scrutiny, yet all your intelligence has to be taken out by hand from a country that has sealed its borders. And it’s 1919. You can’t hop on a plane or drive a car. Train or foot are your only options.

This is what Paul Dukes faced. Daunting to be sure. Which is why his ultimate success reads like the improbable plot of a Hollywood movie. Although he made mistakes, he learned from them as they were happening.

Consider this tense moment as described by Ferguson:

The doorbell in the flat rang loudly and Paul awoke with a start.

He had slept late. Stepanova had kindly allowed him to sleep in one of the spare beds and she had even found him an old pair of Ivan's pyjamas. There were no sheets, but there were plenty of blankets and Paul had been cosy and warm. Now it was 7.45 a.m., and here he was half-asleep and without his clothes. Suppose it was the Cheka [Russian Bolshevik Police] at the door? In a panic he realised that he had no idea what to do. The windows of the apartment were too high for him to jump from and like a fool he had chosen a hiding place with no other exits. … He was reduced to waiting nervously as he stood in Ivan's pyjamas whilst Stepanova shuffled to the door to find out who it was. As he stood there with his stomach in knots, Paul swore that he would never again sleep in a place from which there was only one exit.

One exit was good enough for normal, anticipated use. But one exit wouldn't allow him to adapt to the unexpected, the unusual load produced by the appearance of the state police. So from then on, his sleeping accommodations were chosen with a minimum margin of safety of two exits.

This type of thinking dictated a lot of his actions. He never stayed at the same house more than two nights in a row, and often moved after just one night. He arranged for the occupants to signal him, such as by placing a plant in the window, if they believed the house was unsafe. He siloed knowledge as much as he could, never letting the occupants of one safe house know about the others. Furthermore, as Ferguson writes:

He also arranged a back-up plan in case the Cheka finally got him. He had to pick one trustworthy agent … and soon Paul began entrusting her with all the details of his movements and told her at which safe house he would be sleeping so that if he did disappear MI6 would have a better idea of who had betrayed him. He even used her as part of his courier service and she hid all his reports in the float while he was waiting for someone who could take them out of the country.

Admittedly this plan didn’t provide a large margin of safety, but at least he wasn’t so arrogant as to assume he was never going to get captured.

Large margins of safety are not always possible. Sometimes they are too expensive. Sometimes they are not available. Dukes liked to have an extra identity handy should some of his dubious contacts turn him in, but this wasn’t always an option in a country that changed identity papers frequently. Most important, though, he was aware that planning for the unexpected was his best chance of staying alive, even if he couldn’t always put in place as large a margin of safety as he would have liked. And survival was a daily challenge, not something to take for granted.

The disaster at the Fukushima nuclear power plant taught us a lot about being cavalier regarding margins of safety. The unexpected is just that: not anticipated. That doesn’t mean it is impossible or even improbable. The unexpected is not the worst thing that has happened before. It is the worst thing, given realistic parameters such as the laws of physics, that could happen.

In the Fukushima case, the margin of safety was good enough to deal with the weather of the recent past. But preparing for the worst we have seen is not the same as preparing for the worst.

The Fukushima power plant was overwhelmed by a tsunami, creating a nuclear disaster on par with Chernobyl. Given the seismic activity in the area, although a tsunami wasn’t predictable, it was certainly possible. The plant could have been designed with a margin of safety to better withstand a tsunami. It wasn’t. Why? Because redundancy is expensive. That’s the trade-off. You are safer, but it costs more money.

Sometimes when the stakes are low, we decide the trade-off isn't worth it. For instance, maybe we wouldn't pay to insure an inexpensive wedding ring; the consequences of losing it are some emotional pain and the cost of a replacement. You would think, however, that power plants wouldn't cut it so close. The consequences of a nuclear accident are exponentially higher: lives are lost, and the environment is contaminated. In the Fukushima case, the world will be dealing with the negative effects for a long time.

What decisions would you make differently if you were factoring safety margins into your life? To be fair, you can’t put them everywhere. Otherwise, your life might be all margin and no living. But you can identify the maximum load your life is currently designed to withstand and figure out how close to it you are coming.

For example, having your expenses equal 100 percent of your income is allowing you no flexibility in the load you have to carry. A job loss, a bad flood in your neighborhood, or significant sickness are all unexpected events that would change the load your financial structure has to support. Without a margin of safety, such as a healthy savings or investment account, you could find your structure collapsing, compromising the roof over your head.

The idea is to identify the unlikely but possible risks to your survival and build margins of safety that will allow you to continue your lifestyle should these things come to pass. That way, a missed paycheck will be easily absorbed instead of jeopardizing your ability to put food on the table.
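
Here is the same idea as a rough sketch in Python. The income, expense, and savings figures are invented for illustration; the point is simply to see how much slack your current financial structure gives you.

```python
# Rough sketch: how much margin does your financial structure have?
# All figures below are invented for illustration.

monthly_income = 5_000
monthly_expenses = 4_000        # spending 100% of income would leave no margin at all
emergency_fund = 12_000

monthly_margin = monthly_income - monthly_expenses      # slack in the monthly "load"
months_of_runway = emergency_fund / monthly_expenses    # buffer if income stops entirely

print(f"Monthly margin:          ${monthly_margin:,}")
print(f"Runway with no income:   {months_of_runway:.1f} months")
```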

To figure out where else you should build margins of safety into your life, think of the times you’ve been terrified and desperate. Those might be good places to start learning from experience and making the most of your second chances.

Bayes and Deadweight: Using Statistics to Eject the Deadweight From Your Life

“[K]nowledge is indeed highly subjective, but we can quantify it with a bet. The amount we wager shows how much we believe in something.”

— Sharon Bertsch McGrayne

The quality of your life will, to a large extent, be decided by whom you elect to spend your time with. Supportive, caring, and funny are great attributes in friends and lovers. Unceasingly negative cynics who chip away at your self-esteem? We need to jettison those people as far and fast as we can.

The problem is, how do we identify these people who add nothing positive — or not enough positive — to our lives?

Few of us keep relationships with obvious assholes. There are always a few painfully terrible family members we have to put up with at weddings and funerals, but normally we choose whom we spend time with. And we’ve chosen these people because, at some point, our interactions with them felt good.

How, then, do we identify the deadweight? The people who are really dragging us down and who have a high probability of continuing to do so in the future? We can apply the general thinking tool called Bayesian Updating.

Bayes's theorem can involve some complicated mathematics, but at its core lies a very simple premise. Probability estimates should start with what we already know about the world and then be incrementally updated as new information becomes available. Bayes can even help us when that information is relevant but subjective.

How? As McGrayne explains in the quote above, from The Theory That Would Not Die, you simply ask yourself to wager on the outcome.

Let’s take an easy example.

You are going on a blind date. You’ve been told all sorts of good things in advance — the person is attractive and funny and has a good job — so of course, you are excited. The date starts off great, living up to expectations. Halfway through you find out they have a cat. You hate cats. Given how well everything else is going, how much should this information affect your decision to keep dating?

Quantify your belief in the most probable outcome with a bet. How much would you wager that harmony on the pet issue is an accurate predictor of relationship success? Ten cents? Ten thousand dollars? Do the thought experiment. Imagine walking into a casino and placing a bet on the likelihood that this person’s having a cat will ultimately destroy the relationship. How much money would you take out of your savings and lay on the table? Your answer will give you an idea of how much to factor the cat into your decision-making process. If you wouldn’t part with a dime, then I wouldn’t worry about it.
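
For readers who want to see the machinery, here is a minimal sketch of the update in Python. The bayes_update helper and every number in it are invented, subjective estimates, which is exactly the point: quantify the belief roughly, then let Bayes's theorem do the bookkeeping.

```python
# Minimal Bayesian update for the blind-date example.
# All numbers are invented, subjective estimates.

def bayes_update(prior: float, p_evidence_given_h: float, p_evidence_given_not_h: float) -> float:
    """Posterior P(H | evidence) via Bayes's theorem."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# H = "this relationship works out". Before the cat came up, the date was going
# well, so suppose you would have put the odds at 60%.
prior = 0.60

# How much would you wager that the cat matters? If you'd barely bet a dime,
# your estimates for "pet mismatch" look nearly the same under both hypotheses.
p_mismatch_if_works = 0.30      # pet mismatches still show up in good relationships
p_mismatch_if_fails = 0.40      # slightly more common in ones that fail

posterior = bayes_update(prior, p_mismatch_if_works, p_mismatch_if_fails)
print(f"Belief it works out: {prior:.0%} -> {posterior:.0%}")
# ~53%: a small nudge downward, not a deal-breaker, which matches the instinct
# that you wouldn't put much money on the cat sinking things.
```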

This kind of approach can help us when it comes to evaluating our interpersonal relationships. Deciding if someone is a good friend, partner, or co-worker is full of subjective judgments. There is usually some contradictory information, and ultimately no one is perfect. So how do you decide who is worth keeping around?

Let’s start with friends. The longer a friendship lasts, the more likely it is to have ups and downs. The trick is to start quantifying these. A hit from a change in geographical proximity is radically different from a hit from betrayal — we need to factor these differently into our friendship formula.

This may seem obvious, but the truth is that we often give the same weight to a wide variety of behaviors. We’ll say things like “yeah, she talked about my health problems when I asked her not to, but she always remembers my birthday.” By treating all aspects of the friendship equally, we have a hard time making reasonable estimates about the future value of that friendship. And that’s how we end up with deadweight.

For the friend who has betrayed your confidence, what you really want to know is the likelihood that she’s going to do it again. Instead of trying to remember and analyze every interaction you’ve ever had, just imagine yourself betting on it. Go back to that casino and head to the friendship roulette wheel. Where would you put your money? All in on “She can’t keep her mouth shut” or a few chips on “Not likely to happen again”?

Using a rough Bayesian model in our heads, we’re forcing ourselves to quantify what “good” is and what “bad” is. How good? How bad? How likely? How unlikely? Until we do some (rough) guessing at these things, we’re making decisions much more poorly than we need to be.

The great thing about using Bayes’s theorem is that it encourages constant updating. It also encourages an open mind by giving us the chance to look at a situation from multiple angles. Maybe she really is sorry about the betrayal. Maybe she thought she was acting in your best interests. There are many possible explanations for her behavior and you can use Bayes’s theorem to integrate all of her later actions into your bet. If you find yourself reducing the amount of money you’d bet on further betrayal, you can accurately assume that the probability she will betray your trust again has gone down.

Using this strategy can also stop the endless rounds of asking why. Why did that co-worker steal my idea? Who else do I have to watch out for? This kind of thinking is paralyzing. You end up justifying your current behavior by anticipating the worst possible scenarios you can imagine. Thus, you don't change anything, and you step further away from a solution.

In reality, who cares? The why isn’t important; the most relevant task for you is to figure out the probability that your coworker will do it again. Don’t spend hours analyzing what to do, get upset over the doomsday scenarios you have come up with, or let a few glasses of wine soften the experience.

Head to your mental casino and place the bet, quantifying all the subjective information in your head that is messy and hard to articulate. You will cut through the endless “but maybes” and have a clear path forward that addresses the probable future. It may make sense to give him the benefit of the doubt. It may also be reasonable to avoid him as much as possible. When you figure out how much you would wager on the potential outcomes, you’ll know what to do.

Sometimes we can’t just get rid of people who aren’t good for us — family being the prime example. But you can also use Bayes to test how your actions will change the probability of outcomes to find ways of keeping the negativity minimal. Let’s say you have a cousin who always plans to visit but then cancels. You can’t stop being his cousin and saying “you aren’t welcome at my house” will cause a big family drama. So what else can you do?

Your initial equation — your probability estimate — indicates that the behavior is likely to continue. In your casino, you would comfortably bet your life savings that it will happen again. Now imagine ways in which you could change your behavior. Which of these would reduce your bet? You could have an honest conversation with him, telling him how his actions make you feel. To know if he’s able to openly receive this, consider whether your bet would change. Or would you wager significantly less after employing the strategy of always being busy when he calls to set up future visits?

And you can dig even deeper. Which of your behaviors would increase the probability that he actually comes? Which behaviors would increase the probability that he doesn’t bother making plans in the first place? Depending on how much you like him, you can steer your changes to the outcome you’d prefer.

Quantifying the subjective and using Bayes’s theorem can help us clear out some of the relationship negativity in our lives.

What You Can Learn from Fighter Pilots About Making Fast and Accurate Decisions

“What is strategy? A mental tapestry of changing intentions for harmonizing and focusing our efforts as a basis for realizing some aim or purpose in an unfolding and often unforeseen world of many bewildering events and many contending interests.”

— John Boyd

What techniques do people use in the most extreme situations to make decisions? What can we learn from them to help us make more rational and quick decisions?

If these techniques work in the most drastic scenarios, they have a good chance of working for us. This is why military mental models can have such wide, useful applications outside their original context.

Military mental models are constantly tested in the laboratory of conflict. If they weren’t agile, versatile, and effective, they would quickly be replaced by others. Military leaders and strategists invest a great deal of time in developing and teaching decision-making processes.

One strategy that I’ve found repeatedly effective is the OODA loop.

Developed by strategist and U.S. Air Force Colonel John Boyd, the OODA loop is a practical concept designed to be the foundation of rational thinking in confusing or chaotic situations. OODA stands for Observe, Orient, Decide, and Act.

Boyd developed the strategy for fighter pilots. However, like all good mental models, it can be extended into other fields. We used it at the intelligence agency I used to work at. I know lawyers, police officers, doctors, businesspeople, politicians, athletes, and coaches who use it.

Fighter pilots have to work fast. Taking a second too long to make a decision can cost them their lives. As anyone who has ever watched Top Gun knows, pilots have a lot of decisions and processes to juggle when they’re in dogfights (close-range aerial battles). Pilots move at high speeds and need to avoid enemies while tracking them and keeping a contextual knowledge of objectives, terrains, fuel, and other key variables.

Dogfights are nasty. I’ve talked to pilots who’ve been in them. They want the fights to be over as quickly as possible. The longer they go, the higher the chances that something goes wrong. Pilots need to rely on their creativity and decision-making abilities to survive. There is no game plan to follow, no schedule or to-do list. There is only the present moment when everything hangs in the balance.

Forty-Second Boyd

Boyd was no armchair strategist. He developed his ideas during his own time as a fighter pilot. He earned the nickname “Forty-Second Boyd” for his ability to win any fight in under 40 seconds.

In a tribute written after Boyd’s death, General C.C. Krulak described him as “a towering intellect who made unsurpassed contributions to the American art of war. Indeed, he was one of the central architects of the reform of military thought…. From John Boyd we learned about competitive decision making on the battlefield—compressing time, using time as an ally.”

Reflecting Robert Greene’s maxim that everything is material, Boyd spent his career observing people and organizations. How do they adapt to changeable environments in conflicts, business, and other situations?

Over time, he deduced that these situations are characterized by uncertainty. Dogmatic, rigid theories are unsuitable for chaotic situations. Rather than trying to rise through the military ranks, Boyd focused on using his position as colonel to compose a theory of the universal logic of war.

Boyd was known to ask his mentees the pointed question, “Do you want to be someone, or do you want to do something?” In his own life, he certainly focused on the latter path and, as a result, left us ideas with tangible value. The OODA loop is just one of many.

The Four Parts of the OODA Loop

Let's break down the four parts of the OODA loop and see how they fit together.

OODA stands for Observe, Orient, Decide, Act. The description of it as a loop is crucial. Boyd intended the four steps to be repeated again and again until a conflict finishes. Although most depictions of the OODA loop portray it as a superficial idea, there is a lot of depth to it. Using it should be simple, but it has a rich basis in interdisciplinary knowledge.

1. Observe

The first step in the OODA loop is to observe. At this stage, the main focus is to build a comprehensive picture of the situation with as much accuracy as possible.

A fighter pilot needs to consider: What is immediately affecting me? What is affecting my opponent? What could affect us later on? Can I make any predictions, and how accurate were my prior ones? A pilot's environment changes rapidly, so these observations need to be broad and fluid.

And information alone is not enough. The observation stage requires awareness of the overarching meaning of the information. It also necessitates separating the information which is relevant for a particular decision from that which is not. You have to add context to the variables.

The observation stage is vital in decision-making processes.

For example, faced with a patient in an emergency ward, a doctor needs to start by gathering as much foundational knowledge as possible. That might be the patient's blood pressure, pulse, age, underlying health conditions, and reason for admission. At the same time, the doctor needs to discard irrelevant information and figure out which facts are relevant for this precise situation. Only by putting the pieces together can she make a fast decision about the best way to treat the patient. The more experienced a doctor is, the more factors she is able to take into account, including subtle ones, such as a patient's speech patterns, his body language, and the absence (rather than presence) of certain signs.

2. Orient

Orientation, the second stage of the OODA loop, is frequently misunderstood or skipped because it is less intuitive than the other stages. Boyd referred to it as the Schwerpunkt, a German term which loosely translates to “the main emphasis.” In this context, to orient is to recognize the barriers that might interfere with the other parts of the process.

Without an awareness of these barriers, the subsequent decision cannot be a fully rational one. Orienting is all about connecting with reality, not with a false version of events filtered through the lens of cognitive biases and shortcuts.

“Orientation isn't just a state you're in; it's a process. You're always orienting.”

— John Boyd

Including this step, rather than jumping straight to making a decision, gives us an edge over the competition. Even if we are at a disadvantage to begin with, having fewer resources or less information, Boyd maintained that the Orient step ensures that we can outsmart an opponent.

For Western nations, cyber-crime is a huge threat — mostly because for the first time ever, they can’t outsmart, outspend, or out-resource the competition. Boyd has some lessons for them.

Boyd believed that four main barriers prevent us from seeing information in an unbiased manner:

  1. Our cultural traditions
  2. Our genetic heritage
  3. Our ability to analyze and synthesize
  4. The influx of new information — it is hard to make sense of observations when the situation keeps changing

Boyd was one of the first people to discuss the importance of building a toolbox of mental models, prior to Charlie Munger’s popularization of the concept among investors.

Boyd believed in “destructive deduction” — taking note of incorrect assumptions and biases and then replacing them with fundamental, versatile mental models. Only then can we begin to garner a reality-oriented picture of the situation, which will inform subsequent decisions.

Boyd employed a brilliant metaphor for this — a snowmobile. In one talk, he described how a snowmobile comprises elements of different devices. The caterpillar treads of a tank, skis, the outboard motor of a boat, the handlebars of a bike — each of those elements is useless alone, but combining them creates a functional vehicle.

As Boyd put it: “A loser is someone (individual or group) who cannot build snowmobiles when facing uncertainty and unpredictable change; whereas a winner is someone (individual or group) who can build snowmobiles, and employ them in an appropriate fashion, when facing uncertainty and unpredictable change.”

To orient ourselves, we have to build a metaphorical snowmobile by combining practical concepts from different disciplines.

Although Boyd is regarded as a military strategist, he didn’t confine himself to any particular discipline. His theories encompass ideas drawn from various disciplines, including mathematical logic, biology, psychology, thermodynamics, game theory, anthropology, and physics. Boyd described his approach as a “scheme of pulling things apart (analysis) and putting them back together (synthesis) in new combinations to find how apparently unrelated ideas and actions can be related to one another.”

3. Decide

No surprises here. Having gathered information and oriented ourselves, we have to make an informed decision. The previous two steps should have generated a plethora of ideas, so this is the point where we choose the most relevant option.

Boyd cautioned against first-conclusion bias, explaining that we cannot keep making the same decision again and again. This part of the loop needs to be flexible and open to Bayesian updating. In some of his notes, Boyd described this step as the hypothesis stage. The implication is that we should test the decisions we make at this point in the loop, spotting their flaws and including any issues in future observation stages.

4. Act

While technically a decision-making process, the OODA loop is all about action. The ability to act upon rational decisions is a serious advantage.

The other steps are mere precursors. With a decision made, it is time to act on it. Also known as the test stage, this is when we experiment to see how good our decision was. Did we observe the right information? Did we use the best possible mental models? Were we swayed by biases and other barriers? Can we disprove our hypothesis? Whatever the outcome, we then cycle back to the first part of the loop and begin observing again.
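
For the programmatically inclined, here is a minimal sketch of the loop's shape in Python. The Situation class and the observe, orient, decide, and act functions are hypothetical placeholders, not Boyd's actual model; his version is far richer, with orientation feeding back into every other step. The sketch only shows the cycle structure: act, feed the outcome back into observation, repeat.

```python
# Minimal sketch of the OODA cycle as a feedback loop.
# The four functions are hypothetical placeholders; the point is the shape of
# the process: repeat until the situation is resolved, folding the outcome of
# each action back into the next round of observation.

from dataclasses import dataclass, field

@dataclass
class Situation:
    resolved: bool = False
    observations: list = field(default_factory=list)

def observe(situation: Situation) -> dict:
    """Gather raw information, including the outcome of our last action."""
    return {"raw": situation.observations[-5:]}

def orient(info: dict, mental_models: list) -> dict:
    """Filter the information through our mental models; note and discard biases."""
    return {"picture": info, "models_used": mental_models}

def decide(picture: dict) -> str:
    """Pick the most relevant option - really a hypothesis to be tested."""
    return "best_available_option"

def act(option: str, situation: Situation) -> None:
    """Carry out the decision and record what happened for the next cycle."""
    situation.observations.append(f"outcome of {option}")

def ooda(situation: Situation, mental_models: list, max_cycles: int = 100) -> None:
    for _ in range(max_cycles):
        if situation.resolved:
            break
        info = observe(situation)
        picture = orient(info, mental_models)
        option = decide(picture)
        act(option, situation)
```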

Why the OODA Loop Works

The OODA loop has four key benefits.

1. Speed

Fighter pilots must make many decisions in fast succession. They don’t have time to list pros and cons or to consider every available avenue. Once the OODA loop becomes part of their mental toolboxes, they should be able to cycle through it in a matter of seconds.

Speed is a crucial element of military decision making. Using the OODA loop in everyday life, we probably have a little more time than a fighter pilot would. But Boyd emphasized the value of being decisive, taking initiative, and staying autonomous. These are universal assets and apply to many situations.

Take the example of modern growth hacker marketing.

“The ability to operate at a faster tempo or rhythm than an adversary enables one to fold the adversary back inside himself so that he can neither appreciate nor keep up with what is going on. He will become disoriented and confused…”

— John Boyd

The key advantage growth hackers have over traditional marketers is speed. They observe (look at analytics, survey customers, perform A/B tests, etc.) and orient themselves (consider vanity versus meaningful metrics, assess interpretations, and ground themselves in the reality of a market) before making a decision and then acting. The final step serves to test their ideas, and they have the agility to switch tactics if the desired outcome is not achieved.

Meanwhile, traditional marketers are often trapped in lengthy campaigns which do not offer much in the way of useful metrics. Growth hackers can adapt and change their techniques every single day depending on what works. They are not confined by stagnant ideas about what worked before.

So, although they may have a small budget and fewer people to assist them, their speed gives them an advantage. Just as Boyd could defeat any opponent in under 40 seconds (even starting at a position of disadvantage), growth hackers can grow companies and sell products at extraordinary rates, starting from scratch.

2. Comfort With Uncertainty

Uncertainty does not always equate to risk. A fighter pilot is in a precarious situation, where there will be gaps in their knowledge. They cannot read the mind of the opponent and might have incomplete information about the weather conditions and surrounding environment. They can, however, take into account key factors such as the opponent's nationality, the type of airplane they are flying, and what their maneuvers reveal about their intentions and level of training.

If the opponent uses an unexpected strategy, is equipped with a new type of weapon or airplane, or behaves in an irrational, ideologically motivated way, the pilot must accept the accompanying uncertainty. However, Boyd repeatedly stressed that uncertainty is irrelevant if we have the right filters in place.

If we don’t, we can end up stuck at the observation stage, unable to decide or act. But if we do have the right filters, we can factor uncertainty into the observation stage. We can leave a margin of error. We can recognize the elements which are within our control and those which are not.

Three key principles supported Boyd’s ideas. In his presentations, he referred to Gödel’s Proof, Heisenberg’s Uncertainty Principle, and the Second Law of Thermodynamics.

Gödel’s theorems indicate that any mental model we have of reality will omit certain information and that Bayesian updating must be used to bring it in line with reality. Our understanding of science illustrates this.

In the past, people’s conception of reality missed crucial concepts such as criticality, relativity, the laws of thermodynamics, and gravity. As we have discovered these concepts, we have updated our view of the world. Yet we would be foolish to think that we now know everything and our worldview is complete. Other key principles remain undiscovered. The same goes for fighter pilots — their understanding of what is going on during a battle will always have gaps. Identifying this fundamental uncertainty gives it less power over us.

The second concept Boyd referred to is Heisenberg’s Uncertainty Principle. In its simplest form, this principle describes a limit on the precision with which certain pairs of physical properties, such as position and momentum, can be known at the same time: the more precisely we pin down one, the less precisely we can know the other. Although the principle was formulated to describe particles, Boyd’s habit of combining disciplines led him to apply it, by analogy, to planes. If a pilot focuses too hard on where an enemy plane is, they will lose track of where it is going, and vice versa; trying harder to track both variables only introduces more inaccuracy. For Boyd, the broader lesson was that excessive observation can prove detrimental in many areas. Reality is imprecise.

Finally, Boyd made use of the Second Law of Thermodynamics. In a closed system, entropy always increases and everything moves towards chaos. Energy spreads out and becomes disorganized.

Although Boyd’s notes do not specify the exact applications, his inference appears to be that a fighter pilot must be an open system or they will fail. They must draw “energy” (information) from outside themselves or the situation will become chaotic. They should also aim to cut their opponent off, forcing them to become a closed system. Drawing on his studies, Boyd developed his Energy Maneuverability theory, which recast maneuvers in terms of the energy they used.

“Let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt.”

— Sun Tzu

3. Unpredictability

Using the OODA loop should enable us to act faster than an opponent, thereby seeming unpredictable. While they are still deciding what to do, we have already acted. This resets their own loop, moving them back to the observation stage. Keep doing this, and they are either rendered immobile or forced to act without making a considered decision. So, they start making mistakes, which can be exploited.

Boyd recommended making unpredictable changes in speed and direction, and wrote, “we should operate at a faster tempo than our adversaries or inside our adversaries[’] time scales. … Such activity will make us appear ambiguous (non predictable) [and] thereby generate confusion and disorder among our adversaries.” He even helped design planes better equipped to make those unpredictable changes.

For the same reason that you can’t run the same play 70 times in a football game, rigid military strategies often become useless after a few uses, or even one iteration, as opponents learn to recognize and counter them. The OODA loop can be endlessly used because it is a formless strategy, unconnected to any particular maneuvers.

We know that Boyd was influenced by Sun Tzu (he owned seven thoroughly annotated copies of The Art of War), and he drew many ideas from the ancient strategist. Sun Tzu depicts war as a game of deception where the best strategy is that which an opponent cannot pre-empt. Apple has long used this strategy as a key part of their product launches. Meticulously planned, their launches are shrouded in secrecy and the goal is for no one outside the company to see a product prior to the release.

When information has leaked, the company has taken serious legal action and fired the employees involved. We are never sure what Apple will put out next (just search for “Apple product launch 2017” and you will see endless speculation based on few facts). As a consequence, Apple can stay ahead of their rivals.

Once a product launches, rival companies scramble to emulate it. But by the time their technology is ready for release, Apple is on to the next thing and has taken most of the market share. Although Apple's launches are inexpensive compared with the drawn-out campaigns other companies run, their unpredictability makes us pay attention. The stock price rises the day after, tickets to launch events sell out in seconds, and the media covers the launches as if they were news events, not marketing events.

4. Testing

A notable omission in Boyd’s work is any sort of specific instructions for how to act or which decisions to make. This is presumably due to his respect for testing. He believed that ideas should be tested and then, if necessary, discarded.

“We can't just look at our own personal experiences or use the same mental recipes over and over again; we've got to look at other disciplines and activities and relate or connect them to what we know from our experiences and the strategic world we live in.”

— John Boyd

Boyd’s OODA is a feedback loop, with the outcome of actions leading back to observations. Even in Aerial Attack Study, his comprehensive manual of maneuvers, Boyd did not describe any particular one as superior. He encouraged pilots to have the widest repertoire possible so they could select the best option in response to the maneuvers of an opponent.

We can incorporate testing into our decision-making processes by keeping track of outcomes in decision journals. Boyd’s notes indicate that he may have done just that during his time as a fighter pilot, building up the knowledge that went on to form Aerial Attack Study. Rather than guessing how our decisions lead to certain outcomes, we can get a clear picture to aid us in future orientation stages. Over time, our decision journals will reveal what works and what doesn’t.

Applying the OODA Loop

In sports, there is an adage that carries over to business quite well: “Speed kills.” If you are nimble, able to assess the ever-changing environment and adapt quickly, you'll always carry the advantage over your opponent.

Start applying the OODA loop to your day-to-day decisions and watch what happens. You'll start to notice things that you would have been oblivious to before. Before jumping to your first conclusion, you'll pause to consider your biases, take in additional information, and be more thoughtful of consequences.

As with anything you practice, if you do it right, the more you do it, the better you'll get. You'll start making better decisions more quickly. You'll see more rapid progress. And as John Boyd would prescribe, you'll start to DO something in your life, and not just BE somebody.

