“You can reduce the number of mistakes you make by thinking about problems more clearly.”
In his book Think Twice: Harnessing the Power of Counterintuition, Michael Mauboussin discusses how we can “fall victim to simplified mental routines that prevent us from coping with the complex realities inherent in important judgment calls.” One of those routines is the inside view, which we're going to talk about in this article. But first, let's get a bit of context.
No one wakes up thinking, “I am going to make bad decisions today.” Yet we all make them. What is particularly surprising is that some of the biggest mistakes are made by people who are, by objective standards, very intelligent. Smart people make big, dumb, and consequential mistakes.
Mental flexibility, introspection, and the ability to properly calibrate evidence are at the core of rational thinking and are largely absent on IQ tests. Smart people make poor decisions because they have the same factory settings on their mental software as the rest of us, and that software isn’t designed to cope with many of today’s problems.
We don't spend enough time thinking and learning from the process. Generally, we're pretty indifferent to the process by which we make decisions.
… typical decision makers allocate only 25 percent of their time to thinking about the problem properly and learning from experience. Most spend their time gathering information, which feels like progress and appears diligent to superiors. But information without context is falsely empowering.
That reminds me of what Daniel Kahneman wrote in Thinking, Fast and Slow:
A remarkable aspect of your mental life is that you are rarely stumped … The normal state of your mind is that you have intuitive feelings and opinions about almost everything that comes your way. You like or dislike people long before you know much about them; you trust or distrust strangers without knowing why; you feel that an enterprise is bound to succeed without analyzing it.
Context comes from broad understanding — looking at the problem from the outside in and not the inside out. When we make a decision, we're not really gathering and contextualizing information as much as trying to confirm our existing intuition: the very thing a good decision process should help root out. Think about it this way: every time you make a decision, you're saying you understand something. Most of us stop there. But understanding is not enough; you need to test that your understanding is correct, which comes through feedback and reflection. Then you need to update your understanding. This is the learning loop.
So why are we so quick to assume we understand?
Ego-Induced Blindness
We tend to favor the inside view over the outside view.
An inside view considers a problem by focusing on the specific task and by using information that is close at hand, and makes predictions based on that narrow and unique set of inputs. These inputs may include anecdotal evidence and fallacious perceptions. This is the approach that most people use in building models of the future and is indeed common for all forms of planning.
The outside view asks if there are similar situations that can provide a statistical basis for making a decision. Rather than seeing a problem as unique, the outside view wants to know if others have faced comparable problems and, if so, what happened. The outside view is an unnatural way to think, precisely because it forces people to set aside all the cherished information they have gathered.
When the inside view is more positive than the outside view, you effectively have a base rate argument. You're saying (knowingly or, more likely, unknowingly) that this time is different. Our brains are all too happy to help us construct this argument.
Mauboussin argues that we embrace the inside view for a few primary reasons. First, we're optimistic by nature. Second, there's the “illusion of optimism”: we see our future as brighter than that of others. Finally, there's the illusion of control: we think that chance events are subject to our control.
One interesting point is that while we're bad at looking at the outside view when it comes to ourselves, we're better at it when it comes to other people.
In fact, the planning fallacy embodies a broader principle. When people are forced to look at similar situations and see the frequency of success, they tend to predict more accurately. If you want to know how something is going to turn out for you, look at how it turned out for others in the same situation. Daniel Gilbert, a psychologist at Harvard University, ponders why people don’t rely more on the outside view, “Given the impressive power of this simple technique, we should expect people to go out of their way to use it. But they don’t.” The reason is most people think of themselves as different, and better, than those around them.
So it's mostly ego. I'm better than the people who tackled this problem before me. We see the differences between situations and use them as rationalizations for why things are different this time.
We incorrectly think that differences are more valuable than similarities.
After all, anyone can see what’s the same but it takes true insight to see what’s different, right? We’re all so busy trying to find differences that we forget to pay attention to what is the same.
Incorporating the Outside View
1. Select a Reference Class
Find a group of situations, or a reference class, that is broad enough to be statistically significant but narrow enough to be useful in analyzing the decision that you face. The task is generally as much art as science, and is certainly trickier for problems that few people have dealt with before. But for decisions that are common—even if they are not common for you—identifying a reference class is straightforward. Mind the details. Take the example of mergers and acquisitions. We know that the shareholders of acquiring companies lose money in most mergers and acquisitions. But a closer look at the data reveals that the market responds more favorably to cash deals and those done at small premiums than to deals financed with stock at large premiums. So companies can improve their chances of making money from an acquisition by knowing what deals tend to succeed.
2. Assess the Distribution of Outcomes
Once you have a reference class, take a close look at the rate of success and failure. … Study the distribution and note the average outcome, the most common outcome, and extreme successes or failures.
Two other issues are worth mentioning. The statistical rate of success and failure must be reasonably stable over time for a reference class to be valid. If the properties of the system change, drawing inference from past data can be misleading. This is an important issue in personal finance, where advisers make asset allocation recommendations for their clients based on historical statistics. Because the statistical properties of markets shift over time, an investor can end up with the wrong mix of assets.
Also keep an eye out for systems where small perturbations can lead to large-scale change. Since cause and effect are difficult to pin down in these systems, drawing on past experiences is more difficult. Businesses driven by hit products, like movies or books, are good examples. Producers and publishers have a notoriously difficult time anticipating results, because success and failure are based largely on social influence, an inherently unpredictable phenomenon.
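To make step 2 concrete, here is a minimal sketch of summarizing a reference class. The numbers are invented purely for illustration (hypothetical percentage returns from twenty comparable deals; none of them come from real data):

```python
from statistics import mean, mode

# Hypothetical reference class: outcomes (% return) from 20 comparable
# situations. These numbers are invented for illustration only.
outcomes = [-12, -8, -5, -5, -3, -3, -3, -1, 0, 0,
            1, 2, 2, 4, 5, 6, 8, 12, 18, 25]

# Rate of success and failure: here, "success" means a positive return.
success_rate = sum(1 for o in outcomes if o > 0) / len(outcomes)

average = mean(outcomes)        # average outcome
most_common = mode(outcomes)    # most common outcome
worst, best = min(outcomes), max(outcomes)  # extreme failures/successes

print(f"success rate: {success_rate:.0%}")          # success rate: 50%
print(f"average outcome: {average:+.2f}%")          # average outcome: +2.15%
print(f"most common outcome: {most_common:+d}%")    # most common outcome: -3%
print(f"extremes: {worst:+d}% to {best:+d}%")       # extremes: -12% to +25%
```

Note how the average (+2.15%) and the most common outcome (-3%) tell different stories; looking at the whole distribution, not just one summary number, is the point of this step.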
3. Make a Prediction
With the data from your reference class in hand, including an awareness of the distribution of outcomes, you are in a position to make a forecast. The idea is to estimate your chances of success and failure. For all the reasons that I’ve discussed, the chances are good that your prediction will be too optimistic.
Sometimes when you find the right reference class, you see the success rate is not very high. So to improve your chance of success, you have to do something different than everyone else.
4. Assess the Reliability of Your Prediction and Fine-Tune
How good we are at making decisions depends a great deal on what we are trying to predict. Weather forecasters, for instance, do a pretty good job of predicting what the temperature will be tomorrow. Book publishers, on the other hand, are poor at picking winners, with the exception of those books from a handful of best-selling authors. The worse the record of successful prediction is, the more you should adjust your prediction toward the mean (or other relevant statistical measure). When cause and effect is clear, you can have more confidence in your forecast.
The main lesson we can take from this is that we tend to focus on what's different, whereas the best decisions often focus on just the opposite: what's the same. While a situation may seem a little different, it's almost always the same.
As Charlie Munger has said: “if you notice, the plots are very similar. The same plot comes back time after time.”
Particulars may vary but, unless those particulars are the variables that govern the outcome of the situation, the pattern remains. If you're going to focus on what's different rather than what's the same, you'd best be sure the variables you're clinging to matter.