In The Art of War, Sun Tzu said, “The general who wins a battle makes many calculations in his temple before the battle is fought.”
Those ‘calculations’ are the tools we have available to think better. One of the best questions we can ask is how to make our mental processes work better.
Charlie Munger says that “developing the habit of mastering the multiple models which underlie reality is the best thing you can do.”
Those models are mental models.
They fall into two categories: (1) ones that help us simulate time (and predict the future) and better understand how the world works (e.g., a useful idea like autocatalysis), and (2) ones that help us better understand how our mental processes lead us astray (e.g., availability bias).
When our mental models line up with reality, they help us avoid problems. When they don’t, however, they cause problems: we believe something that isn’t true.
In Seeking Wisdom, Peter Bevelin highlights Munger talking about autocatalysis:
If you get a certain kind of process going in chemistry, it speeds up on its own. So you get this marvellous boost in what you're trying to do that runs on and on. Now, the laws of physics are such that it doesn't run on forever. But it runs on for a goodly while. So you get a huge boost. You accomplish A – and, all of a sudden, you're getting A + B + C for awhile.
He continues telling us how this idea can be applied:
Disney is an amazing example of autocatalysis … They had those movies in the can. They owned the copyright. And just as Coke could prosper when refrigeration came, when the videocassette was invented, Disney didn't have to invent anything or do anything except take the thing out of the can and stick it on the cassette.
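The autocatalysis Munger describes can be sketched as a self-reinforcing process: the output feeds back as a catalyst, so the rate of growth rises with the amount already produced, until some physical limit flattens the curve. This is a minimal illustrative model (logistic growth), not anything from the text; the rate, cap, and step count are assumptions chosen for demonstration.

```python
# A minimal sketch of autocatalysis: the product of the process feeds
# back as a catalyst, so the rate grows with the amount already made.
# A cap ("the laws of physics are such that it doesn't run on forever")
# eventually flattens the curve. All numbers here are illustrative.

def autocatalytic_growth(x0, rate, cap, steps):
    """Discrete logistic growth: each step adds rate * x * (1 - x/cap).
    Returns the full trajectory as a list."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + rate * x * (1 - x / cap))
    return xs

trajectory = autocatalytic_growth(x0=1.0, rate=0.5, cap=100.0, steps=30)

# Early on, the process speeds up on its own: each gain exceeds the last.
assert trajectory[2] - trajectory[1] > trajectory[1] - trajectory[0]
# But it doesn't run on forever: growth stalls as it approaches the cap.
assert trajectory[-1] < 100.0
```

The "huge boost" Munger mentions is the early regime, where every unit produced accelerates the next; the cap is the point where physics reasserts itself.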
This leads us to an interesting problem. The world is always changing so which models should we prioritize learning?
How we prioritize our learning has implications beyond the day-to-day. Often we focus on things that change quickly. We chase the latest study, the latest findings, the most recent best-sellers. We do this to keep up-to-date with the latest-and-greatest.
Despite our intentions, learning in this way fails to account for cumulative knowledge. Instead, we consume all of our time keeping up to date.
If we are to prioritize our learning, we should focus on things that change slowly. As Munger explains:
The models that come from hard science and engineering are the most reliable models on this Earth. And engineering quality control – at least the guts of it that matters to you and me and people who are not professional engineers – is very much based on the elementary mathematics of Fermat and Pascal: It costs so much and you get so much less likelihood of it breaking if you spend this much…
And, of course, the engineering idea of a backup system is a very powerful idea. The engineering idea of breakpoints – that's a very powerful model, too. The notion of a critical mass – that comes out of physics – is a very powerful model.
After we learn a model we have to make it useful. We have to integrate it into our existing knowledge.
Our world is multi-dimensional and our problems are complicated. Most problems cannot be solved using one model alone. The more models we have, the better able we are to rationally solve problems.
But if we don’t have the models we become the proverbial man with a hammer. To the man with a hammer everything looks like a nail. If you only have one model you will fit whatever problem you face to the model you have. If you have more than one model, however, you can look at the problem from a variety of perspectives and increase the odds you come to a better solution.
“Since no single discipline has all the answers,” Peter Bevelin writes in Seeking Wisdom, “we need to understand and use the big ideas from all the important disciplines: Mathematics, physics, chemistry, engineering, biology, psychology, and rank and use them in order of reliability.”
Charlie Munger illustrates the importance of this:
Suppose you want to be good at declarer play in contract bridge. Well, you know the contract – you know what you have to achieve. And you can count up the sure winners you have by laying down your high cards and your invincible trumps.
But if you're a trick or two short, how are you going to get the other needed tricks? Well, there are only six or so different, standard methods: You've got long-suit establishment. You've got finesses. You've got throw-in plays. You've got cross-ruffs. You've got squeezes. And you've got various ways of misleading the defense into making errors. So it's a very limited number of models. But if you only know one or two of those models, then you're going to be a horse's patoot in declarer play…
If you don't have the full repertoire, I guarantee you that you'll overutilize the limited repertoire you have – including use of models that are inappropriate just because they're available to you in the limited stock you have in mind.
As for how we can use different ideas, Munger again shows the way …
Have a full kit of tools … go through them in your mind checklist-style … you can never make any explanation that can be made in a more fundamental way in any other way than the most fundamental way. And you always take with full attribution to the most fundamental ideas that you are required to use. When you're using physics, you say you're using physics. When you're using biology, you say you're using biology.
But ideas alone are not enough. We need to understand how they interact and combine. This leads to lollapalooza effects.
You get lollapalooza effects when two, three or four forces are all operating in the same direction. And, frequently, you don't get simple addition. It's often like a critical mass in physics where you get a nuclear explosion if you get to a certain point of mass – and you don't get anything much worth seeing if you don't reach the mass.
Sometimes the forces just add like ordinary quantities and sometimes they combine on a break-point or critical-mass basis … More commonly, the forces coming out of … models are conflicting to some extent. And you get huge, miserable trade-offs … So you [must] have the models and you [must] see the relatedness and the effects from the relatedness.
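The lollapalooza idea above can be sketched as a threshold effect: below a critical mass, forces combine by simple addition; past it, they reinforce one another and the result is disproportionate. This is a hedged toy model, not Munger's formulation; the threshold and the squared payoff are illustrative assumptions standing in for "nuclear explosion" non-linearity.

```python
# A toy sketch of a lollapalooza effect: forces operating in the same
# direction merely add below a critical mass, but past the breakpoint
# they reinforce one another and the combined result is non-linear.
# The threshold value and the squared payoff are illustrative choices.

def combined_effect(forces, critical_mass):
    """Below the threshold, forces simply add; at or above it,
    the combination produces an outsized, non-linear result."""
    total = sum(forces)
    if total < critical_mass:
        return total        # "you don't get anything much worth seeing"
    return total ** 2       # past the breakpoint: disproportionate effect

# Two forces alone: below critical mass, plain addition.
print(combined_effect([2, 3], critical_mass=10))        # → 5
# Four forces in the same direction cross the breakpoint.
print(combined_effect([2, 3, 3, 4], critical_mass=10))  # → 144
```

The point of the sketch is the discontinuity: adding one more aligned force near the breakpoint changes the outcome far more than simple addition would predict, which is why seeing the relatedness between models matters.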