
Complexity Bias: Why We Prefer Complicated to Simple

Complexity bias is a cognitive bias that leads us to give undue credence to complex concepts.

Faced with two competing hypotheses, we are likely to choose the more complex one. That’s usually the option with the most assumptions and regressions. As a result, when we need to solve a problem, we may ignore simple solutions — thinking “that will never work” — and instead favor complex ones.

To understand complexity bias, we need first to establish the meaning of three key terms associated with it: complexity, simplicity, and chaos.

Complexity, like pornography, is hard to define when we’re put on the spot, although most of us recognize it when we see it. The Cambridge Dictionary defines complexity as “the state of having many parts and being difficult to understand or find an answer to.” The definition of simplicity is the inverse: “something [that] is easy to understand or do.” Chaos is defined as “a state of total confusion with no order.”

“Life is really simple, but we insist on making it complicated.”

— Confucius

Complex systems contain individual parts that combine to form a collective that often can’t be predicted from its components. Consider humans. We are complex systems. We’re made of some 37 trillion cells, and yet we are so much more than the aggregation of our cells. You’d never predict what we’re like or who we are from looking at our cells.

Complexity bias is our tendency to look at something that is easy to understand, or something we encounter in a state of confusion, and view it as having many parts that are difficult to understand.

We often find it easier to face a complex problem than a simple one.

A person who feels tired all the time might insist that their doctor check their iron levels while ignoring the fact that they are unambiguously sleep deprived. Someone experiencing financial difficulties may stress over the technicalities of their telephone bill while ignoring the large sums of money they spend on cocktails.

Marketers make frequent use of complexity bias.

They do this by incorporating confusing language or insignificant details into product packaging or sales copy. Most people who buy “ammonia-free” hair dye, or a face cream which “contains peptides,” don’t fully understand the claims. Terms like these often mean very little, but we see them and imagine that they signify a product that’s superior to alternatives.

How many of you know what probiotics really are and how they interact with gut flora?

Meanwhile, we may also see complexity where only chaos exists. This tendency manifests in many forms, such as conspiracy theories, superstition, folklore, and logical fallacies. The distinction between complexity and chaos is not a semantic one. When we imagine that something chaotic is in fact complex, we are seeing it as having an order and more predictability than is warranted. In fact, there is no real order, and prediction is incredibly difficult at best.

Complexity bias is interesting because the majority of cognitive biases occur in order to save mental energy. For example, confirmation bias enables us to avoid the effort associated with updating our beliefs. We stick to our existing opinions and ignore information that contradicts them. Availability bias is a means of avoiding the effort of considering everything we know about a topic. It may seem like the opposite is true, but complexity bias is, in fact, another cognitive shortcut. By opting for impenetrable solutions, we sidestep the need to understand. Of the fight-or-flight responses, complexity bias is the flight response. It is a means of turning away from a problem or concept and labeling it as too confusing. If you think something is harder than it is, you surrender your responsibility to understand it.

“Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities.”

— Andy Benoit

Faced with too much information on a particular topic or task, we see it as more complex than it is. Often, understanding the fundamentals will get us most of the way there. Software developers often find that 90% of the code for a project takes about half the allocated time. The remaining 10% takes the other half. Writing — and any other sort of creative work — is much the same. When we succumb to complexity bias, we are focusing too hard on the tricky 10% and ignoring the easy 90%.

Research has revealed our inherent bias towards complexity.

In a 1989 paper entitled “Sensible reasoning in two tasks: Rule discovery and hypothesis evaluation,” Hilary F. Farris and Russell Revlin evaluated the topic. In one study, participants were asked to establish an arithmetic rule. They received a set of three numbers (such as 2, 4, 6) and tried to generate a hypothesis by asking the experimenter if other number sequences conformed to the rule. Farris and Revlin wrote, “This task is analogous to one faced by scientists, with the seed triple functioning as an initiating observation, and the act of generating the triple is equivalent to performing an experiment.”

The actual rule was simple: list any three ascending numbers.

The participants could have said anything from “1, 2, 3” to “3, 7, 99” and been correct. It should have been easy for the participants to guess this, but most of them didn’t. Instead, they came up with complex rules for the sequences. (Also see Falsification of Your Best Loved Ideas.)
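To see how restrictive the participants’ guesses were, here is a minimal sketch of the task in Python (hypothetical code, not from the paper), comparing the experimenters’ simple rule with a typical over-complex hypothesis:

    # Minimal sketch of the Farris and Revlin task (hypothetical code).
    # The experimenters' actual rule is trivially simple; a typical
    # participant's hypothesis is far more restrictive.

    def actual_rule(triple):
        # The real rule: any three ascending numbers.
        a, b, c = triple
        return a < b < c

    def participant_hypothesis(triple):
        # A typical over-complex guess: even numbers increasing by two.
        a, b, c = triple
        return a % 2 == 0 and b - a == 2 and c - b == 2

    # Every triple that fits the complex hypothesis also fits the simple
    # rule, so confirming tests like (8, 10, 12) can never separate them.
    for triple in [(2, 4, 6), (8, 10, 12), (1, 2, 3), (3, 7, 99)]:
        print(triple, actual_rule(triple), participant_hypothesis(triple))

Only a triple that breaks the complex hypothesis, such as (1, 2, 3), can reveal that the simpler rule is the one in play, which is why generating disconfirming tests matters so much.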

A paper by Helena Matute looked at how intermittent reinforcement leads people to see complexity in chaos. Three groups of participants were placed in rooms and told that a loud noise would play from time to time. The volume, length, and pattern of the sound were identical for each group. Group 1 (Control) was told to sit and listen to the noises. Group 2 (Escape) was told that there was a specific action they could take to stop the noises. Group 3 (Yoked) was told the same as Group 2, but in their case, there was actually nothing they could do.

Matute wrote:

Yoked participants received the same pattern and duration of tones that had been produced by their counterparts in the Escape group. The amount of noise received by Yoked and Control subjects depends only on the ability of the Escape subjects to terminate the tones. The critical factor is that Yoked subjects do not have control over reinforcement (noise termination) whereas Escape subjects do, and Control subjects are presumably not affected by this variable.

The result? Not one member of the Yoked group realized that they had no control over the sounds. Many members came to repeat particular patterns of “superstitious” behavior. Indeed, the Yoked and Escape groups had very similar perceptions of task controllability. Faced with randomness, the participants saw complexity.

Does that mean the participants were stupid? Not at all. We all exhibit the same superstitious behavior when we believe we can influence chaotic or simple systems.

Funnily enough, animal studies have revealed much the same. In particular, consider B.F. Skinner’s well-known research on the effects of random rewards on pigeons. Skinner placed hungry pigeons in cages equipped with a random-food-delivery mechanism. Over time, the pigeons came to believe that their behavior affected the food delivery. Skinner described this as a form of superstition. One bird spun in counterclockwise circles. Another butted its head against a corner of the cage. Other birds swung or bobbed their heads in specific ways. Although there is some debate as to whether “superstition” is an appropriate term to apply to birds, Skinner’s research shed light on the human tendency to see things as being more complex than they actually are.

Skinner wrote (in “‘Superstition’ in the Pigeon,” Journal of Experimental Psychology, 38):

The bird behaves as if there were a causal relation between its behavior and the presentation of food, although such a relation is lacking. There are many analogies in human behavior. Rituals for changing one's fortune at cards are good examples. A few accidental connections between a ritual and favorable consequences suffice to set up and maintain the behavior in spite of many unreinforced instances. The bowler who has released a ball down the alley but continues to behave as if he were controlling it by twisting and turning his arm and shoulder is another case in point. These behaviors have, of course, no real effect upon one's luck or upon a ball half way down an alley, just as in the present case the food would appear as often if the pigeon did nothing—or, more strictly speaking, did something else.

The world around us is a chaotic, entropic place. But it is rare for us to see it that way.

In Living with Complexity, Donald A. Norman offers a perspective on why we need complexity:

We seek rich, satisfying lives, and richness goes along with complexity. Our favorite songs, stories, games, and books are rich, satisfying, and complex. We need complexity even while we crave simplicity… Some complexity is desirable. When things are too simple, they are also viewed as dull and uneventful. Psychologists have demonstrated that people prefer a middle level of complexity: too simple and we are bored, too complex and we are confused. Moreover, the ideal level of complexity is a moving target, because the more expert we become at any subject, the more complexity we prefer. This holds true whether the subject is music or art, detective stories or historical novels, hobbies or movies.

As an example, Norman asks readers to contemplate the complexity we attach to tea and coffee. Most people in most cultures drink tea or coffee each day. Both are simple beverages, made from water and coffee beans or tea leaves. Yet we choose to attach complex rituals to them. Even those of us who would not consider ourselves to be connoisseurs have preferences. Offer to make coffee for a room full of people, and we can be sure that each person will want it made in a different way.

Coffee and tea start off as simple beans or leaves, which must be dried or roasted, ground and infused with water to produce the end result. In principle, it should be easy to make a cup of coffee or tea. Simply let the ground beans or tea leaves [steep] in hot water for a while, then separate the grounds and tea leaves from the brew and drink. But to the coffee or tea connoisseur, the quest for the perfect taste is long-standing. What beans? What tea leaves? What temperature water and for how long? And what is the proper ratio of water to leaves or coffee?

The quest for the perfect coffee or tea maker has been around as long as the drinks themselves. Tea ceremonies are particularly complex, sometimes requiring years of study to master the intricacies. For both tea and coffee, there has been a continuing battle between those who seek convenience and those who seek perfection.

Complexity, in this way, can enhance our enjoyment of a cup of tea or coffee. It’s one thing to throw some instant coffee in hot water. It’s different to select the perfect beans, grind them ourselves, calculate how much water is required, and use a fancy device. The question of whether this ritual makes the coffee taste better or not is irrelevant. The point is the elaborate surrounding ritual. Once again, we see complexity as superior.

“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.”

— Edsger W. Dijkstra

The Problem with Complexity

Imagine a person who sits down one day and plans an elaborate morning routine. Motivated by the routines of famous writers they have read about, they lay out their ideal morning. They decide they will wake up at 5 a.m., meditate for 15 minutes, drink a liter of lemon water while writing in a journal, read 50 pages, and then prepare coffee before planning the rest of their day.

The next day, they launch into this complex routine. They try to keep at it for a while. Maybe they succeed at first, but entropy soon sets in and the routine gets derailed. Sometimes they wake up late and do not have time to read. Their perceived ideal routine has many different moving parts. Their actual behavior ends up being different each day, depending on random factors.

Now imagine that this person is actually a famous writer. A film crew asks to follow them around on a “typical day.” On the day of filming, they get up at 7 a.m., write some ideas, make coffee, cook eggs, read a few news articles, and so on. This is not really a routine; it is just a chaotic morning based on reactive behavior. When the film is posted online, people look at the morning and imagine they are seeing a well-planned routine rather than the randomness of life.

This hypothetical scenario illustrates the issue with complexity: it is unsustainable without effort.

The more individual constituent parts a system has, the greater the chance of its breaking down. Charlie Munger once said that “Where you have complexity, by nature you can have fraud and mistakes.” Any complex system — be it a morning routine, a business, or a military campaign — is difficult to manage. Addressing one of the constituent parts inevitably affects another (see the Butterfly Effect). Unintended and unexpected consequences are likely to occur.

As Daniel Kahneman and Amos Tversky wrote in 1974 (in Judgment Under Uncertainty: Heuristics and Biases): “A complex system, such as a nuclear reactor or the human body, will malfunction if any of its essential components fails. Even when the likelihood of failure in each component is slight, the probability of an overall failure can be high if many components are involved.”
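The arithmetic behind that claim is worth seeing. A rough sketch (with illustrative failure rates, not figures from the paper), assuming each component fails independently:

    # Illustrative arithmetic for Kahneman and Tversky's point: if a system
    # needs all n components to work, small per-component failure rates compound.
    def overall_failure(p_fail_one, n_components):
        p_all_work = (1 - p_fail_one) ** n_components  # every part must succeed
        return 1 - p_all_work

    for n in (10, 100, 1000):
        print(n, round(overall_failure(0.01, n), 3))
    # With a 1% failure rate per part: 10 parts -> ~10% overall failure,
    # 100 parts -> ~63%, 1000 parts -> ~100%.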

This is why complexity is less common than we think. It is unsustainable without constant maintenance, self-organization, or adaptation. Chaos tends to disguise itself as complexity.

“Human beings are pattern-seeking animals. It's part of our DNA. That's why conspiracy theories and gods are so popular: we always look for the wider, bigger explanations for things.”

— Adrian McKinty, The Cold Cold Ground

Complexity Bias and Conspiracy Theories

A musician walks barefoot across a zebra-crossing on an album cover. People decide he died in a car crash and was replaced by a lookalike. A politician’s eyes look a bit odd in a blurry photograph. People conclude that he is a blood-sucking reptilian alien taking on a human form. A photograph shows an indistinct shape beneath the water of a Scottish lake. The area floods with tourists hoping to glimpse a surviving prehistoric creature. A new technology overwhelms people. So, they deduce that it is the product of a government mind-control program.

Conspiracy theories are the ultimate symptom of our desire to find complexity in the world. We don’t want to acknowledge that the world is entropic. Disasters happen and chaos is our natural state. The idea that hidden forces animate our lives is an appealing one. It seems rational. But as we know, we are all much less rational and logical than we think. Studies have shown that a high percentage of people believe in some sort of conspiracy. It’s not a fringe concept. According to research by Joseph E. Uscinski and Joseph M. Parent, about one-third of Americans believe the notion that Barack Obama’s birth certificate is fake. Similar numbers are convinced that 9/11 was an inside job orchestrated by George Bush. Beliefs such as these are present in all types of people, regardless of class, age, gender, race, socioeconomic status, occupation, or education level.

Conspiracy theories are invariably far more complex than reality. Although education does reduce the chances of someone’s believing in conspiracy theories, one in five Americans with postgraduate degrees still holds conspiratorial beliefs.

Uscinski and Parent found that, just as uncertainty led Skinner’s pigeons to see complexity where only randomness existed, a sense of losing control over the world around us increases the likelihood of our believing in conspiracy theories. Faced with natural disasters and political or economic instability, we are more likely to concoct elaborate explanations. In the face of horrific but chaotic events such as Hurricane Katrina, or the recent Grenfell Tower fire, many people decide that secret institutions are to blame.

Take the example of the “Paul McCartney is dead” conspiracy theory. Since the 1960s, a substantial number of people have believed that McCartney died in a car crash and was replaced by a lookalike, usually said to be a Scottish man named William Campbell. Of course, conspiracy theorists declare, The Beatles wanted their most loyal fans to know this, so they hid clues in songs and on album covers.

The beliefs surrounding the Abbey Road album are particularly illustrative of the desire to spot complexity in randomness and chaos. A police car is parked in the background — an homage to the officers who helped cover up the crash. A car’s license plate reads “LMW 28IF” — naturally, a reference to McCartney being 28 if he had lived (although he was 27) and to Linda McCartney (whom he had not met yet). Matters were further complicated once The Beatles heard about the theory and began to intentionally plant “clues” in their music. The song “I’m So Tired” does in fact feature backwards mumbling about McCartney’s supposed death. The 1960s were certainly a turbulent time, so is it any wonder that scores of people pored over album art or played records backwards, looking for evidence of a complex hidden conspiracy?

As Henry Louis Gates Jr. wrote, “Conspiracy theories are an irresistible labor-saving device in the face of complexity.”

Complexity Bias and Language

We have all, at some point, had a conversation with someone who speaks the way philosopher Theodor Adorno wrote: in incessant jargon and technical terms, even when simpler synonyms exist and would be perfectly appropriate. We have all heard people say things which we do not understand, but which we do not question for fear of sounding stupid.

Jargon is an example of how complexity bias affects our communication and language usage. When we use jargon, especially out of context, we are putting up unnecessary semantic barriers that reduce the chances of someone’s challenging or refuting us.

In an article for The Guardian, James Gingell describes his work translating scientific jargon into plain, understandable English:

It’s quite simple really. The first step is getting rid of the technical language. Whenever I start work on refining a rough-hewn chunk of raw science into something more pleasant I use David Dobbs’ (rather violent) aphorism as a guiding principle: “Hunt down jargon like a mercenary possessed, and kill it.” I eviscerate acronyms and euthanise decrepit Latin and Greek. I expunge the esoteric. I trim and clip and pare and hack and burn until only the barest, most easily understood elements remain.

[…]

Jargon…can be useful for people as a shortcut to communicating complex concepts. But it’s intrinsically limited: it only works when all parties involved know the code. That may be an obvious point but it’s worth emphasising — to communicate an idea to a broad, non-specialist audience, it doesn’t matter how good you are at embroidering your prose with evocative imagery and clever analogies, the jargon simply must go.

Gingell writes that even the most intelligent scientists struggle to differentiate between thinking (and speaking and writing) like a scientist, and thinking like a person with minimal scientific knowledge.

Unnecessarily complex language is not just annoying. It's outright harmful. The use of jargon in areas such as politics and economics does real harm. People without the requisite knowledge to understand it feel alienated and removed from important conversations. It leads people to believe that they are not intelligent enough to understand politics, or not educated enough to comprehend economics. When a politician talks of fiscal charters or rolling four-quarter growth measurements in a public statement, they are sending a crystal clear message to large numbers of people whose lives will be shaped by their decisions: this is not about you.

Complexity bias is a serious issue in politics. For those in the public eye, complex language can be a means of minimizing the criticism of their actions. After all, it is hard to dispute something you don't really understand. Gingell considers jargon to be a threat to democracy:

If we can’t fully comprehend the decisions that are made for us and about us by the government then how can we possibly revolt or react in an effective way? Yes, we have a responsibility to educate ourselves more on the big issues, but I also think it’s important that politicians and journalists meet us halfway.

[…]

Economics and economic decisions are more important than ever now, too. So we should implore our journalists and politicians to write and speak to us plainly. Our democracy depends on it.

In his essay “Politics and the English Language,” George Orwell wrote:

In our time, political speech and writing are largely the defence of the indefensible. … Thus, political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenceless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification. Millions of peasants are robbed of their farms and sent trudging along the roads with no more than they can carry: this is called transfer of population or rectification of frontiers. People are imprisoned for years without trial, or shot in the back of the neck or sent to die of scurvy in Arctic lumber camps: this is called elimination of unreliable elements.

An example of the problems with jargon is the Sokal affair. In 1996, Alan Sokal (a physics professor) submitted a fabricated paper entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” to Social Text, a respected cultural-studies journal. The paper had absolutely no relation to reality and argued that quantum gravity is a social and linguistic construct. Even so, it was published. Sokal’s paper consisted of convoluted, essentially meaningless claims, such as this paragraph:

Secondly, the postmodern sciences deconstruct and transcend the Cartesian metaphysical distinctions between humankind and Nature, observer and observed, Subject and Object. Already quantum mechanics, earlier in this century, shattered the ingenuous Newtonian faith in an objective, pre-linguistic world of material objects “out there”; no longer could we ask, as Heisenberg put it, whether “particles exist in space and time objectively.”

(If you're wondering why no one called him out, or, more specifically, why we have a bias against calling out BS, check out pluralistic ignorance.)

Jargon does have its place. In specific contexts, it is absolutely vital. But in everyday communication, its use is a sign that we wish to appear complex and therefore more intelligent. Great thinkers throughout the ages have stressed the crucial importance of using simple language to convey complex ideas. Many of the ancient thinkers whose work we still reference today — people like Plato, Marcus Aurelius, Seneca, and Buddha — were known for their straightforward communication and their ability to convey great wisdom in a few words.

“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.”

— Ernst F. Schumacher

How Can We Overcome Complexity Bias?

The most effective tool we have for overcoming complexity bias is Occam’s razor. Also known as the principle of parsimony, this is a problem-solving principle used to eliminate improbable options in a given situation. Occam’s razor suggests that the simplest solution or explanation is usually the correct one. When we lack the empirical evidence to decide between hypotheses, we should avoid making unfounded assumptions or adding unnecessary complexity; that restraint lets us make quick decisions and establish provisional truths.

An important point to note is that Occam’s razor does not state that the simplest hypothesis is the correct one, but states rather that it is the best option before the establishment of empirical evidence. It is also useful in situations where empirical data is difficult or impossible to collect. While complexity bias leads us towards intricate explanations and concepts, Occam’s razor can help us to trim away assumptions and look for foundational concepts.

Returning to Skinner’s pigeons, had they known of Occam’s razor, they would have realized that there were two main possibilities:

  • Their behavior affects the food delivery.

Or:

  • Their behavior is irrelevant because the food delivery is random or on a timed schedule.

Using Occam’s razor, the head-bobbing, circle-turning pigeons would have realized that the first hypothesis involves numerous assumptions, including:

  • There is a particular behavior they must enact to receive food.
  • The delivery mechanism can somehow sense when they enact this behavior.
  • The required behavior is different from behaviors that would normally give them access to food.
  • The delivery mechanism is consistent.

And so on. Occam’s razor would dictate that because the second hypothesis is the simplest, involving the fewest assumptions, it is most likely the correct one.
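The razor can even be made mechanical. Here is a toy sketch in Python (the assumption lists are paraphrased from the pigeon example above; the counting rule is an illustration, not a formal method):

    # Toy sketch of Occam's razor: prefer the hypothesis with the fewest
    # assumptions. Assumption lists paraphrased from the pigeon example.
    hypotheses = {
        "behavior controls the food": [
            "a specific behavior must be enacted",
            "the mechanism senses that behavior",
            "the behavior differs from normal food-seeking",
            "the mechanism is consistent",
        ],
        "food arrives on its own (random or timed)": [
            "delivery is independent of behavior",
        ],
    }

    preferred = min(hypotheses, key=lambda h: len(hypotheses[h]))
    print("Prefer:", preferred)  # the one-assumption hypothesis wins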

Many geniuses are really good at eliminating unnecessary complexity. Einstein, for instance, was a master at sifting the essential from the non-essential. Steve Jobs was the same.


The Power of Incentives: Inside The Hidden Forces that Shape Behavior

“Never, ever, think about something else when you should be thinking about the power of incentives.”

— Charlie Munger

According to Charlie Munger, there are only a few forces more powerful than incentives. In his speech “The Psychology of Human Misjudgment,” he reflects on how the power of incentives never disappoints him:

Well, I think I’ve been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I’ve underestimated it. And never a year passes but I get some surprise that pushes my limit a little farther.

Sometimes the solution to a behavior problem is simply to revisit incentives and make sure they align with the desired goal. Munger talks about Federal Express, which is one of his favorite examples of the power of incentives:

The heart and soul of the integrity of the system is that all the packages have to be shifted rapidly in one central location each night. And the system has no integrity if the whole shift can’t be done fast. And Federal Express had one hell of a time getting the thing to work.
And they tried moral suasion, they tried everything in the world, and finally somebody got the happy thought that they were paying the night shift by the hour, and that maybe if they paid them by the shift, the system would work better. And lo and behold, that solution worked.

If you’re trying to change a behavior, reason will take you only so far. Reflecting on another example where misaligned incentives hampered the sales of a superior product, Munger said:

Early in the history of Xerox, Joe Wilson, who was then in the government, had to go back to Xerox because he couldn’t understand how their better, new machine was selling so poorly in relation to their older and inferior machine. Of course when he got there, he found out that the commission arrangement with the salesmen gave a tremendous incentive to the inferior machine.

Ignoring incentives almost never works out well. Thinking about the incentives of others is necessary to create win-win relationships.

We can turn to psychology to obtain a more structured and thorough understanding of how incentives shape our actions.

The Science of Reinforcement

The science of reinforcement was furthered by Burrhus Frederic Skinner (usually called B.F. Skinner), a professor of psychology at Harvard from 1958 to 1974.

Skinner, unlike his contemporaries, refused to hypothesize about what happened on the inside (what people or animals thought and felt) and preferred to focus on what we can observe. To him, focusing on how much people ate meant more than focusing on subjective measures, like how hungry people were or how much pleasure they got from eating. He wanted to find out how environmental variables affected behavior, and he believed that behavior is shaped by its consequences.

If we don’t like the consequences of an action we’ve taken, we’re less likely to do it again; if we do like the consequences, we’re more likely to do it again. That assumption is the basis of operant conditioning, “a type of learning in which the strength of a behavior is modified by [its] consequences, such as reward or punishment.”

One of Skinner’s most important inventions was the operant conditioning chamber, also known as a “Skinner box,” which was used to study the effects of reinforcers on lab animals. The rats in the box had to figure out how to do a task (such as pushing a lever) that would reward them with food. Such an automated system allowed Skinner and thousands of successors to study conditioned behavior in a controlled setting.

What years of studies on reinforcement have revealed is that consistency and timing play important roles in shaping new behaviors. Psychologists argue that the best way for us to learn complex behaviors is via continuous reinforcement, in which the desired behavior is reinforced every time it’s performed.

If you want to teach your dog a new trick, for example, it is smart to reward him for every correct response. At the very beginning of the learning curve, a failure to immediately reward a positive behavior might be interpreted by the dog as a sign that the behavior was incorrect.

Intermittent reinforcement is reinforcement that is given only some of the times that the desired behavior occurs, and it can be done according to various schedules, some predictable and some not (see “Scheduling Reinforcement,” below). Intermittent reinforcement is argued to be the most efficient way to maintain an already learned behavior, for three reasons.

First, rewarding the behavior takes time away from the behavior’s continuation. Paying a worker after each piece is assembled on the assembly line simply does not make sense.

Second, intermittent reinforcement is better from an economic perspective. Not only is it cheaper not to reward every instance of a desired behavior, but by making the rewards unpredictable, you trigger excitement and thus get an increase in response without increasing the amount of reinforcement. Intermittent reinforcement is how casinos work; they want people to gamble, but they can’t afford to have people win large amounts very often.

Finally, intermittent reinforcement can induce resistance to extinction (stopping the behavior when reinforcement is removed). Consider the example of resistance outlined in the textbook Psychology: Core Concepts:

Imagine two gamblers and two slot machines. One machine inexplicably pays off on every trial and another, a more usual machine, pays on an unpredictable, intermittent schedule. Now, suppose that both devices suddenly stop paying. Which gambler will catch on first?

Most of us would probably guess it right:

The one who has been rewarded for each pull of the lever (continuous reinforcement) will quickly notice the change, while the gambler who has won only occasionally (on partial reinforcement) may continue playing unrewarded for a long time.

Scheduling Reinforcement

Intermittent reinforcement can be used on various schedules, each with its own degree of effectiveness and situations to which it can be appropriately applied. Ratio schedules are based on the number of responses (the amount of work done), whereas interval schedules are based on the amount of time spent. A short code sketch after the list illustrates the logic of each.

  • Fixed-ratio schedules are used when you pay your employees based on the amount of work they do. Fixed-ratio schedules are common in freelancing, where contractors are paid on a piecework basis. Managers like fixed-ratio schedules because the response to reinforcement is usually very high (if you want to get paid, you do the work).
  • Variable-ratio schedules are unpredictable because the number of responses between reinforcers varies. Telemarketers, salespeople, and slot machine players are on this schedule because they never know when the next sale or the next big win will occur. Skinner himself demonstrated the power of this schedule by showing that a hungry pigeon would peck a disk 12,000 times an hour while being rewarded on average for only every 110 pecks. Unsurprisingly, this is the type of reinforcement that normally produces more responses than any other schedule. (Varying the intervals between reinforcers is another way of making reinforcement unpredictable, but if you want people to feel appreciated, this kind of schedule is probably not the one to use.)
  • Fixed-interval schedules are the most common type of payment — they reward people for the time spent on a specific task. You might have already guessed that the response rate on this schedule is very low. Even a rat in a Skinner box programmed for a fixed-interval schedule learns that lever presses beyond the required minimum are just a waste of energy. Ironically, the “9-5 job” is a preferred way to reward employees in business.
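Here is the promised sketch (illustrative numbers, not from the text): each function answers the question “does this response earn a reward?” for one schedule.

    import random

    # Sketch of the three schedules above (illustrative numbers).

    def fixed_ratio(responses_so_far, ratio=20):
        # Piecework: every 20th completed piece is paid.
        return responses_so_far % ratio == 0

    def variable_ratio(mean_ratio=110):
        # Skinner's pigeons and slot machines: on average one reward per
        # 110 responses, but any single response might pay off.
        return random.random() < 1.0 / mean_ratio

    def fixed_interval(time_since_last_reward, interval=3600.0):
        # Hourly-style pay: only elapsed time matters, so responses beyond
        # the required minimum are wasted energy.
        return time_since_last_reward >= interval

The contrast explains the response rates: under the variable-ratio function every response has a chance of paying off, while under the fixed-interval function extra responses change nothing.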

While the design of scheduling can be a powerful technique for continuing or amplifying a specific behavior, we may still fail to recognize an important aspect of reinforcement — individual preferences for specific rewards.

Experience suggests that survival is propelled by our need for food and water. However, most of us don’t live in conditions of extreme scarcity, and thus the types of reinforcement that appeal to us will differ.

Culture plays an important role in determining effective reinforcers. And what’s reinforced shapes culture. Offering tickets to a cricket match might serve as a powerful reward for someone in a country where cricket is a big deal, but would be meaningless to most Americans. Similarly, an air-conditioned office might be a powerful incentive for employees in Indonesia, but won’t matter as much to employees in a more temperate area.

What About Punishment?

So far we’ve talked about positive reinforcement — the carrot, if you will. However, there is also a stick.

There is no doubt that our society relies heavily on threat and punishment as a way to keep ourselves in line. Still, we keep arriving late, forgetting birthdays, and receiving parking fines, even though we know there is the potential to be punished.

There are several reasons that punishment might not be the best way to alter someone’s behavior.

First of all, Skinner observed that the power of punishment to suppress behavior usually disappears when the threat of punishment is removed. Indeed, we all refrain from using social networks during work hours when we know our boss is around, and we similarly adhere to the speed limit when we know we are being watched by a police patrol.

Second, punishment often triggers a fight-or-flight response and renders us aggressive. When punished, we seek to flee from further punishment, and when the escape is blocked, we may become aggressive. This punishment-aggression link may also explain why abusive parents tend to come from abusive families themselves.

Third, punishment inhibits the ability to learn new and better responses. Punishment leads to a variety of responses — such as escape, aggression, and learned helplessness — none of which aid in the subject’s learning process. Punishment also fails to show subjects what exactly they must do and instead focuses on what not to do. This is why environments that forgive failure are so important in the learning process.

Finally, punishment is often applied unequally. We are ruled by bias in our assessment of who deserves to be punished. We scold boys more often than girls, physically punish grade-schoolers more often than adults, and control members of racial minorities more often (and more harshly) than whites.

What Should I Do Instead?

There are three alternatives that you can try the next time you feel tempted to punish someone.

The first we already touched upon — extinction. A response will usually diminish or disappear if it ceases to produce the rewards it once did. However, it is important that all possible reinforcements are withheld. This is far more difficult to do in real life than in a lab setting.

What makes it especially difficult is that during the extinction process, organisms tend to look for novel techniques to obtain reinforcement. This means that a whining child will either redouble her efforts or change tactics to regain the parent’s attention before ceasing the behavior. In this case, a better extinction strategy is to combine methods by withholding attention after whining occurs and rewarding more desirable behaviors with attention before the whining occurs.

The second alternative is positively reinforcing preferred activities. For example, people who exercise regularly (and enjoy it) might use a daily run as a reward for getting other tasks done. Similarly, young children learn to sit still by being rewarded with occasional permission to run around and make noise. The main principle of this idea is that a preferred activity, such as running around, can be used to reinforce a less preferred activity. This idea is also called the Premack principle.

Finally, prompting and shaping are two actions we can use together to change behavior in an iterative manner. A prompt is a cue or stimulus that encourages the desired behavior. When shaping begins, any approximation of the target response is reinforced. Once you see the approximation occurring regularly, you can make the criterion for the target more strict (the actual behavior has to match the desired behavior more closely), and you continue narrowing the criteria until the specific target behavior is performed. This tactic is often the preferred method of developing a habit gradually and of training animals to perform a specific behavior.
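Shaping is effectively an iterative algorithm, and a rough sketch (hypothetical numbers, not from the text) makes the loop explicit: reinforce anything close to the target, then tighten the criterion once successes are regular.

    import random

    # Rough sketch of shaping (hypothetical numbers): reinforce responses
    # that approximate the target, then tighten the criterion once the
    # approximation occurs regularly.

    def respond(skill):
        # Stand-in for the learner: practice moves responses toward the target.
        return random.gauss(skill, 0.1)

    TARGET = 1.0
    criterion = 1.0   # start by reinforcing any rough approximation
    skill = 0.0

    for _ in range(30):
        hits = sum(abs(TARGET - respond(skill)) <= criterion for _ in range(20))
        skill = min(skill + 0.05 * hits / 20, TARGET)  # reinforcement builds skill
        if hits >= 15:                                 # occurring regularly?
            criterion = max(criterion * 0.8, 0.2)      # make the criterion stricter
    print(f"criterion={criterion:.2f}, skill={skill:.2f}")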

***

I hope that you are now better equipped to recognize incentives as powerful forces shaping the way we and others behave. The next time you wish someone would change the way they behave, think about changing their incentives.

Like any parent, I experiment with my kids all the time. One of the most effective things I do when one of them has misbehaved is to acknowledge my child’s feelings and ask him what he was trying to achieve.

When one kid hits the other, for example, I ask him what he was trying to accomplish. Usually, the response is “He hit me. (So I hit him back.)” I know this touches on an automatic human response that many adults can’t control, which makes me wonder how I can change my kids’ behavior to be more effective.

“So, you were angry and you wanted him to know?”

“Yes.”

“People are not for hitting. If you want, I’ll help you go tell him why you’re angry.”

Tensions dissipate. And I’m (hopefully) starting to get my kids thinking about effective and ineffective ways to achieve their goals.

Punishment works best to prevent actions whereas incentives work best to encourage them.

Let’s end with an excellent piece of advice that has been given regarding incentives. Here is Charlie Munger, speaking at the University of Southern California commencement:

You do not want to be in a perverse incentive system that’s causing you to behave more and more foolishly or worse and worse — incentives are too powerful a control over human cognition or human behavior. If you’re in one [of these systems], I don’t have a solution for you. You’ll have to figure it out for yourself, but it’s a significant problem.


Charlie Munger on Getting Rich, Wisdom, Focus, Fake Knowledge and More

“In the chronicles of American financial history,” writes David Clark in The Tao of Charlie Munger: A Compilation of Quotes from Berkshire Hathaway's Vice Chairman on Life, Business, and the Pursuit of Wealth, “Charlie Munger will be seen as the proverbial enigma wrapped in a paradox—he is both a mystery and a contradiction at the same time.”

On one hand, Munger received an elite education and it shows: He went to Cal Tech to train as a meteorologist for the Second World War and then attended Harvard Law School and eventually opened his own law firm. That part of his success makes sense.

Yet here's a man who never took a single course in economics, business, marketing, finance, psychology, or accounting, and managed to become one of the greatest, most admired, and most honorable businessmen of our age. He was noted by essentially all observers for the originality of his thoughts, especially about business and human behavior. You don't learn that in law school, at Harvard or anywhere else.

Bill Gates said of him: “He is truly the broadest thinker I have ever encountered.” His business partner Warren Buffett put it another way: “He comes equipped for rationality… I would say that to try and typecast Charlie in terms of any other human that I can think of, no one would fit. He's got his own mold.”

How does such an extreme result happen? How is such an original and uncommonly capable mind formed? In the case of Munger, it's clearly a combination of unusual genetics and an unusual approach to learning and life.

While we can't have his genetics, we can try to steal his approach to rationality. There's almost no limit to the amount one could learn from studying the Munger mind, so let's at least start with a rundown of some of his best ideas.

***

Wisdom and Circles of Competence

“Knowing what you don’t know is more useful than being brilliant.”
“Acknowledging what you don’t know is the dawning of wisdom.”

Identify your circle of competence and use your knowledge, when possible, to stay away from things you don't understand. There are no points for difficulty at work or in life. Avoiding stupidity is easier than seeking brilliance.

Of course this principle relates to another of Munger's sayings: “People are trying to be smart—all I am trying to do is not to be idiotic, but it’s harder than most people think.”

And this reminds me of perhaps my favorite Mungerism of all time, the very quote that sits right beside my desk:

“It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.”

***

Divergence

“Mimicking the herd invites regression to the mean.”

Here's a simple axiom to live by: If you do what everyone else does, you're going to get the same results that everyone else gets. This means that, taking out luck (good or bad), if you act average, you're going to be average. If you want to move away from average, you must diverge. You must be different. And if you want to outperform others, you must be different and correct. As Munger would say, “How could it be otherwise?”

***

Know When to Fold ’Em

“Life, in part, is like a poker game, wherein you have to learn to quit sometimes when holding a much-loved hand—you must learn to handle mistakes and new facts that change the odds.”

Mistakes are an opportunity to grow. How we handle adversity is up to us. This is how we become personally antifragile.

***

False Models

Echoing Einstein, who said that “Not everything that counts can be counted, and not everything that can be counted counts,” Munger said this about his and Buffett's shift to acquiring high-quality businesses for Berkshire Hathaway:

“Once we’d gotten over the hurdle of recognizing that a thing could be a bargain based on quantitative measures that would have horrified Graham, we started thinking about better businesses.”

***

Being Lazy

“Sit on your ass. You’re paying less to brokers, you’re listening to less nonsense, and if it works, the tax system gives you an extra one, two, or three percentage points per annum.”

Time is a friend to a good business and the enemy of the poor business. It's also the friend of knowledge and the enemy of the new and novel. As Seneca said, “Time discovers truth.”

***

Investing Is a Parimutuel System

“You’re looking for a mispriced gamble,” says Munger. “That’s what investing is. And you have to know enough to know whether the gamble is mispriced. That’s value investing.” At another time, he added: “You should remember that good ideas are rare—when the odds are greatly in your favor, bet heavily.”

May the odds forever be in your favor. Actually, learning properly is one way you can tilt the odds in your favor.

***

Focus

When asked about his success, Munger says, “I succeeded because I have a long attention span.”

Long attention spans allow for a deep understanding of subjects. When combined with deliberate practice, focus allows you to increase your skills and get out of your rut. The Art of Focus is a divergent and correct strategy that can help you identify where the leverage points are and apply your efforts toward them.

***

Fake Knowledge

“Smart people aren’t exempt from professional disasters from overconfidence.”

We're so used to outsourcing our thinking to others that we've forgotten what it's like to really understand something from all perspectives. We've forgotten just how much work that takes. The path of least resistance, however, is just a click away. Fake knowledge, which comes from reading headlines and skimming the news, seems harmless, but it's not. It makes us overconfident. It's better to remember a simple trick: anything you're getting easily through Google or Twitter is likely to be widely known and should not be given undue weight.

However, Munger adds, “If people weren’t wrong so often, we wouldn’t be so rich.”

***

Sit Quietly

Echoing Pascal, who said some version of “All of humanity's problems stem from man's inability to sit quietly in a room alone,” Munger adds an investing twist: “It’s waiting that helps you as an investor, and a lot of people just can’t stand to wait.”

Do Something syndrome afflicts so many of us; few can sit alone with their thoughts and turn ideas over and over without giving in to it. A perfectly reasonable option is to hold your ground and await more information.

***

Deal With Reality

“I think that one should recognize reality even when one doesn’t like it; indeed, especially when one doesn’t like it.”

Munger clearly learned from Joseph Tussman's wisdom. This means facing harsh truths that you might prefer to ignore. It means meeting the world on the world's terms, not according to how you wish it would be. If this causes temporary pain, so be it. “Your pain,” writes Kahlil Gibran in The Prophet, “is the breaking of the shell that encloses your understanding.”

***

There Is No Free Lunch

We like quick solutions that don't require a lot of effort. We're drawn to the modern equivalent of an old hustler selling an all-curing tonic. However, the world does not work that way. Munger expands:

“There isn’t a single formula. You need to know a lot about business and human nature and the numbers… It is unreasonable to expect that there is a magic system that will do it for you.”

Acquiring knowledge is hard work. It's reading and adding to your knowledge so it compounds. It's going deep and developing fluency, something Darwin knew well.

***

Maximization/Minimization

“In business we often find that the winning system goes almost ridiculously far in maximizing and/or minimizing one or a few variables—like the discount warehouses of Costco.”

When everything is a priority, nothing is a priority. Attempting to maximize competing variables is a recipe for disaster. Picking one variable and relentlessly focusing on it, which is an effective strategy, diverges from the norm. It's hard to compete with businesses that have correctly identified the right variables to maximize or minimize. When you focus on one variable, you'll increase the odds that you're quick and nimble — and can respond to changes in the terrain.

***

Map and Terrain

“At Berkshire there has never been a master plan. Anyone who wanted to do it, we fired because it takes on a life of its own and doesn’t cover new reality. We want people taking into account new information.”

Plans are maps that we become attached to. Once we've told everyone there is a plan and what that plan is, especially multi-year plans, we're psychologically more likely to stick to it because coming out and changing it would be admitting we were wrong. This makes it harder for us to change our strategies when we need to, so we're stacking the odds against ourselves. Detailed five-year plans (that will clearly be wrong) are as disastrous as overly general five-year plans (which can never be wrong).

Scrap the plan, isolate the key variables that you need to maximize and minimize, and follow the agile path blazed by Henry Singleton and followed by Buffett and Munger.

***

The Keys to Good Government

There are three keys: honesty, effectiveness, and efficiency. Munger says:

“In a democracy, everyone takes turns. But if you really want a lot of wisdom, it’s better to concentrate decisions and process in one person. It’s no accident that Singapore has a much better record, given where it started, than the United States. There, power was concentrated in an enormously talented person, Lee Kuan Yew, who was the Warren Buffett of Singapore.”

Lee Kuan Yew put it this way: “With few exceptions, democracy has not brought good government to new developing countries. … What Asians value may not necessarily be what Americans or Europeans value. Westerners value the freedoms and liberties of the individual. As an Asian of Chinese cultural background, my values are for a government which is honest, effective, and efficient.”

***

One Step At a Time

“Spend each day trying to be a little wiser than you were when you woke up. Discharge your duties faithfully and well. Slug it out one inch at a time, day by day. At the end of the day—if you live long enough—most people get what they deserve.”

An incremental approach to life reminds one of the nature of compounding. There will always be someone going faster than you, but you can learn from the Darwinian guide to overachieving your natural IQ. In order for this approach to be effective, you need a long axis of time as well as continuous incremental progress.

***

Getting Rich

“The desire to get rich fast is pretty dangerous.” 

Getting rich is a function of being happy with what you have, spending less than you make, and time.

***

Mental Models

“Know the big ideas in the big disciplines and use them routinely—all of them, not just a few.”

Mental models are the big ideas from multiple disciplines. While most people agree that these are worth knowing, they often think they can identify which models will add the most value, and in so doing they miss something important. There is a reason that the “know-nothing” index fund almost always beats the investors who think they know. Understanding this idea in greater detail will change a lot of things, including how you read. Acquiring the big ideas — without selectivity — is the way to mimic a know-nothing index fund.

***

Know-it-alls

“I try to get rid of people who always confidently answer questions about which they don’t have any real knowledge.”

Few things have made as much of a difference in my life as systematically removing (and when that's not possible, reducing the importance of) people who think they know the answer to everything.

***

Stoic Resolve

“There’s no way that you can live an adequate life without many mistakes. In fact, one trick in life is to get so you can handle mistakes. Failure to handle psychological denial is a common way for people to go broke.”

While we all make mistakes, it's how we respond to failure that defines us.

***

Thinking

“We all are learning, modifying, or destroying ideas all the time. Rapid destruction of your ideas when the time is right is one of the most valuable qualities you can acquire. You must force yourself to consider arguments on the other side.”

“It’s bad to have an opinion you’re proud of if you can’t state the arguments for the other side better than your opponents. This is a great mental discipline.”

Thinking is a lot of work. “My first thought,” William Deresiewicz said in one of my favorite speeches, “is never my best thought. My first thought is always someone else’s; it’s always what I’ve already heard about the subject, always the conventional wisdom.”

***

Choose Your Associates Wisely

“Oh, it’s just so useful dealing with people you can trust and getting all the others the hell out of your life. It ought to be taught as a catechism. … [W]ise people want to avoid other people who are just total rat poison, and there are a lot of them.”

No comment needed there.

***

Complement The Tao of Charlie Munger with this excellent Peter Bevelin interview.

Who’s in Charge of Our Minds? The Interpreter

One of the most fascinating discoveries of modern neuroscience is that the brain is a collection of distinct modules (grouped, highly connected neurons) performing specific functions rather than a unified system.

We'll get to why this is so important when we introduce The Interpreter later on.

This modular organization of the human brain is considered one of the key properties that set us apart from animals, so much so that it has displaced the theory that our abilities stem from disproportionately bigger brains for our body size.

As neuroscientist Dr. Michael Gazzaniga points out in his wonderful book Who's In Charge? Free Will and the Science of the Brain, in terms of numbers of cells, the human brain is a proportionately scaled-up primate brain: it is what is expected for a primate of our size and does not possess relatively more neurons. Researchers have also found that the ratio between nonneuronal brain cells and neurons in human brain structures is similar to the ratios found in other primates.

So it's not the size of our brains or the number of neurons, it's about the patterns of connectivity. As brains scaled up from insect to small mammal to larger mammal, they had to re-organize, for the simple reason that billions of neurons cannot all be connected to one another — some neurons would be way too far apart and too slow to communicate. Our brains would be gigantic and require a massive amount of energy to function.
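Back-of-envelope arithmetic shows why. Using commonly cited estimates (roughly 86 billion neurons and on the order of 10^14 synapses; these figures are illustrative, not from the book):

    # Back-of-envelope check: all-to-all wiring is physically impossible.
    neurons = 8.6e10                          # commonly cited rough estimate
    all_to_all = neurons * (neurons - 1) / 2  # pairwise connections if fully wired
    actual_synapses = 1e14                    # order-of-magnitude estimate

    print(f"fully connected: {all_to_all:.1e} connections")  # ~3.7e21
    print(f"actual synapses: {actual_synapses:.0e}")
    print(f"ratio: ~{all_to_all / actual_synapses:.0e}")     # tens of millions of times short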

Instead, our brain specializes and localizes. As Dr. Gazzaniga puts it, “Small local circuits, made of an interconnected group of neurons, are created to perform specific processing jobs and become automatic.” This is an important advance in our efforts to understand the mind.

Dr. Gazzaniga is most famous for his work studying split-brain patients, where many of the discoveries we're talking about were refined and explored. Split-brain patients give us a natural controlled experiment to find out “what the brain is up to” — and more importantly, how it does its work. What Gazzaniga and his co-researchers found was fascinating.

Emergence

We experience our conscious mind as a single unified thing. But if Gazzaniga & company are right, it most certainly isn't. How could a “specialized and localized” modular brain give rise to the feeling of “oneness” we feel so strongly about? It would seem there are too many things going on separately and locally:

Our conscious awareness is the mere tip of the iceberg of nonconscious processing. Below our level of awareness is the very busy nonconscious brain hard at work. Not hard for us to imagine are the housekeeping jobs: the brain constantly struggles to keep homeostatic mechanisms up and running, such as our heart beating, our lungs breathing, and our temperature just right. Less easy to imagine, but being discovered left and right over the past fifty years, are the myriads of nonconscious processes smoothly putt-putting along. Think about it.

To begin with there are all the automatic visual and other sensory processing we have talked about. In addition, our minds are always being unconsciously biased by positive and negative priming processes, and influenced by category identification processes. In our social world, coalitionary bonding processes, cheater detection processes, and even moral judgment processes (to name only a few) are cranking away below our conscious mechanisms. With increasingly sophisticated testing methods, the number and diversity of identified processes is only going to multiply.

So what's going on? Who's controlling all this stuff? The idea is that the brain works more like traffic than a car. No one is controlling it!

It's due to a principle of complex systems called emergence, and it explains why all of these “specialized and localized” processes can give rise to what seems like a unified mind.

The key to understanding emergence is to understand that there are different levels of organization. My favorite analogy is that of the car, which I have mentioned before. If you look at an isolated car part, such as a cam shaft, you cannot predict that the freeway will be full of traffic at 5:15 PM, Monday through Friday. In fact, you could not even predict that the phenomenon of traffic would occur if you just looked at a brake pad. You cannot analyze traffic at the level of car parts. Did the guy who invented the wheel ever visualize the 405 in Los Angeles on Friday evening? You cannot even analyze traffic at the level of the individual car. When you get a bunch of cars and drivers together, with the variables of location, time, weather, and society all in the mix, then at that level you can predict traffic. A new set of laws emerges that can't be predicted from the parts alone.

Emergence, Gazzaniga goes on, is how to understand the brain. Sub-atomic particles, atoms, molecules, cells, neurons, modules, the mind, and a collection of minds (a society) are all different levels of organization, with their own laws that cannot necessarily be predicted from the properties of the level below.
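To make emergence concrete, here is a toy example of our own choosing, not one from Gazzaniga's book: Conway's Game of Life. Each cell obeys two trivial local rules, yet the grid as a whole produces moving "gliders" that no amount of staring at a single cell's rule would predict.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # The whole rulebook: a cell is born with exactly 3 live neighbors,
    # survives with 2 or 3, and dies otherwise.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# After four generations the same shape reappears, shifted diagonally.
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```

Nothing in the rules mentions motion, yet a coherent "object" travels across the grid. Traffic from car parts, as Gazzaniga's analogy has it: the glider belongs to a higher level of organization, invisible at the lower one.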

The unified mind we experience emerges from thousands of lower-level processes operating in parallel. Most of it is so automatic that we have no idea it's going on. (And the mind doesn't just work bottom-up; top-down processes also influence it. In other words, what you think influences what you see and hear.)

And when we do start consciously explaining what's going on — or trying to — we start getting very interesting results. The part of our brain that seeks explanations and infers causality turns out to be a quirky little beast.

The Interpreter

Let's say you were to see a snake and jump back, automatically and quickly. Did you choose that action? If asked, you'd almost certainly say so, but the truth is more complicated.

If you were to have asked me why I jumped, I would have replied that I thought I'd seen a snake. That answer certainly makes sense, but the truth is I jumped before I was conscious of the snake: I had seen it, but I didn't know I had seen it. My explanation is from post hoc information I have in my conscious system: the facts are that I jumped and that I saw a snake. The reality, however, is that I jumped way before (in a world of milliseconds) I was conscious of the snake. I did not make a conscious decision to jump and then consciously execute it. When I answered that question, I was, in a sense, confabulating: giving a fictitious account of a past event, believing it to be true. The real reason I jumped was an automatic nonconscious reaction to the fear response set into play by the amygdala. The reason I would have confabulated is that our human brains are driven to infer causality. They are driven to create explanations that make sense out of scattered facts. The facts my conscious brain had to work with were that I saw a snake, and I jumped. It did not register that I jumped before I was consciously aware of the snake.

Here's how it works: A thing happens, we react, we feel something about it, and then we go on explaining it. Sensory information is fed into an explanatory module which Gazzaniga calls The Interpreter, and studying split-brain patients showed him that it resides in the left hemisphere of the brain.

With that knowledge, Gazzaniga and his team were able to do all kinds of clever things to show how ridiculous our Interpreter can often be, especially in split-brain patients.

Take this case of a split-brain patient unconsciously making up a nonsense story when his two hemispheres are shown different images and he is instructed to choose a related image from a group of pictures. Read carefully:

We showed a split-brain patient two pictures: A chicken claw was shown to his right visual field, so the left hemisphere only saw the claw picture, and a snow scene was shown to the left visual field, so the right hemisphere saw only that. He was then asked to choose a picture from an array of pictures placed in full view in front of him, which both hemispheres could see.

The left hand pointed to a shovel (which was the most appropriate answer for the snow scene) and the right hand pointed to a chicken (the most appropriate answer for the chicken claw). Then we asked why he chose those items. His left-hemisphere speech center replied, “Oh, that's simple. The chicken claw goes with the chicken,” easily explaining what it knew. It had seen the chicken claw.

Then, looking down at his left hand pointing to the shovel, without missing a beat, he said, “And you need a shovel to clean out the chicken shed.” Immediately, the left brain, observing the left hand's response without the knowledge of why it had picked that item, put it into a context that would explain it. It interpreted the response in a context consistent with what it knew, and all it knew was: chicken claw. It knew nothing about the snow scene, but it had to explain the shovel in his left hand. Well, chickens do make a mess, and you have to clean it up. Ah, that's it! Makes sense.

What was interesting was that the left hemisphere did not say, “I don't know,” which truly was the correct answer. It made up a post hoc answer that fit the situation. It confabulated, taking cues from what it knew and putting them together in an answer that made sense.

The left hand, responding to the snow scene Gazzaniga covertly showed to the left visual field, pointed to the snow shovel. This all took place in the right hemisphere of the brain (think of it like an “X” — the right hemisphere controls the left side of the body and vice versa). But because he was a split-brain patient, the left hemisphere was not given any of the information about the snow.

And yet the left hemisphere is where the Interpreter resides! So what did the Interpreter do when asked to explain why the shovel was chosen, having no information about snow, only about chickens? It made up a story about shoveling chicken coops!

Gazzaniga goes on to describe several cases in which his team was able to fool the left-brain Interpreter over and over, often in subtle ways.

***

This left-brain module is what we use to explain causality, seeking it for its own sake. The Interpreter, like all of our mental modules, is a wonderful adaptation: it has led us to understand and explain causality and the world around us, to our great advantage. But as any good student of social psychology knows, when we have nothing solid to go on, we'll simply make up a plausible story — leading to a narrative fallacy.

This leads to odd results that seem pretty maladaptive, like our tendency to gamble like idiots. (Charlie Munger calls this mis-gambling compulsion.) But outside of the artifice of the casino, the Interpreter works quite well.

But here's the catch. In the words of Gazzaniga, “The interpreter is only as good as the information it gets.”

The interpreter receives the results of the computations of a multitude of modules. It does not receive the information that there are multitudes of modules. It does not receive the information about how the modules work. It does not receive the information that there is a pattern-recognition system in the right hemisphere. The interpreter is a module that explains events from the information it does receive.

[…]

The interpreter is receiving data from the domains that monitor the visual system, the somatosensory system, the emotions, and cognitive representations. But as we just saw above, the interpreter is only as good as the information it receives. Lesions or malfunctions in any one of these domain-monitoring systems leads to an array of peculiar neurological conditions that involve the formation of either incomplete or delusional understandings about oneself, other individuals, objects, and the surrounding environment, manifesting in what appears to be bizarre behavior. It no longer seems bizarre, however, once you understand that such behaviors are the result of the interpreter getting no, or bad, information.

This can account for a lot of the ridiculous behavior and ridiculous narratives we see around us. The Interpreter must deal with what it's given, and as Gazzaniga's work shows, it can be manipulated and tricked. He calls it “hijacking” — and when the Interpreter is hijacked, it makes pretty bad decisions and generates strange explanations.
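As a toy illustration of that catch (our sketch, not Gazzaniga's model), picture the Interpreter as a function that receives only the outputs of other modules, with no record of where they came from or whether any are missing:

```python
def interpreter(facts):
    """Weave one causal story from whatever facts arrive. The function
    cannot answer "I don't know," and cannot see which modules reported."""
    return "I " + " because I ".join(facts) + "."

# Normal case: the motor and visual reports both arrive.
print(interpreter(["jumped", "saw a snake"]))
# -> I jumped because I saw a snake.

# Split-brain case: the report about the snow never arrives,
# so the story gets woven from the facts that did.
print(interpreter(["pointed at a shovel", "know chickens make a mess"]))
# -> I pointed at a shovel because I know chickens make a mess.
```

The function returns an equally fluent explanation either way; that is the sense in which the Interpreter is only as good as the information it gets.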

Anyone who's watched a friend acting hilariously when wearing a modern VR headset can see how easy it is to “hijack” one's sensory perceptions even if the conscious brain “knows” that it's not real. And of course, Robert Cialdini once famously described this hijacking process as a “click, whirr” reaction to social stimuli. It's a powerful phenomenon.

***

What can we learn from this?

The story of the multi-modular mind and the Interpreter module shows us that the brain does not have a rational “central command station” — your mind is at the mercy of what it's fed. The Interpreter is constantly weaving a story of what's going on around us, applying causal explanations to the data it receives and doing the best job it can with what it's got.

This is generally useful: a few thousand generations of data have honed our modules to understand the world well enough to keep us surviving and thriving. The job of the brain, after all, is to pass on our genes. But that doesn't mean it's always making optimal decisions in the modern world.

We must realize that our brain can be fooled; it can be tricked and played with, and we won't always realize it immediately. Our Interpreter will weave a plausible story — that's its job.

For this reason, Charlie Munger employs a “two-track” analysis: What are the facts, and where is my brain fooling me? We're wise to follow suit.

What Are You Doing About It? Reaching Deep Fluency with Mental Models

The mental models approach is very intellectually appealing, almost seductive to a certain type of person. (It certainly is for us.)

The whole idea is to take the world's greatest, most useful ideas and make them work for you!

How hard can it be?

Nearly all of the models themselves are perfectly understandable by the average well-educated knowledge worker, including all of you reading this piece. Ideas like Bayes' rule, multiplicative thinking, hindsight bias, or the bias from envy and jealousy are all obviously true and part of the reality we live in.

There's a problem we're seeing, though: people are reading the stuff, enjoying it, agreeing with it…but not taking action. It's not becoming part of their standard repertoire.

Let's say you followed up on Bayesian thinking after reading our post on it — you spent some time soaking in Thomas Bayes' great wisdom on updating your understanding of the world incrementally and probabilistically rather than changing your mind in black-and-white fashion. Great!
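To make that concrete, here is a minimal sketch of Bayesian updating in Python. The scenario and every number in it are invented for illustration; none of it comes from Bayes or from our post.

```python
# Bayesian updating with made-up numbers: how much should one positive
# result from a diagnostic test shift your belief in a rare condition?

prior = 0.01            # P(condition): assume 1% of people have it
sensitivity = 0.90      # P(positive | condition), assumed
false_positive = 0.05   # P(positive | no condition), assumed

# Bayes' rule:
#   P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive test) = {posterior:.1%}")  # about 15.4%
```

If roughly 15 percent surprises you (the test is "90 percent accurate," after all), that's exactly the incremental updating the post was about: the evidence moves you from 1 percent to 15 percent, not to certainty.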

But a week later, what have you done with that knowledge? How has it actually impacted your life? If the honest answer is “It hasn't,” then haven't you really wasted your time?

Ironically, it's this habit of “going halfway” instead of “going all the way,” like Sisyphus pushing his boulder halfway up the mountain over and over, that is the biggest waste of time!

See, the common reason why people don't truly “follow through” with all of this stuff is that they haven't raised their knowledge to a “deep fluency” — they're skimming the surface. They pick up bits and pieces — some heuristics or biases here, a little physics or biology there, and then call it a day and pull up Netflix. They get a little understanding, but not that much, and certainly no doing.

The better approach, if you actually care about making changes, is to imitate Charlie Munger, Charles Darwin, and Richard Feynman, and start raising your knowledge of the Big Ideas to a deep fluency, and then figuring out systems, processes, and mental tricks to implement them in your own life.

Let's work through an example.

***

Say you're just starting to explore all the wonderful literature on heuristics and biases and come across the idea of Confirmation Bias: The idea that once we've landed on an idea we really like, we tend to keep looking for further data to confirm our already-held notions rather than trying to disprove our idea.

This is common, widespread, and perfectly natural. We all do it. John Kenneth Galbraith put it best:

“In the choice between changing one's mind and proving there's no need to do so, most people get busy on the proof.”

Now, what most people do, the ones you're trying to outperform, is say, “Great idea! Thanks, Galbraith,” and then stop thinking about it.

Don't do that!

The next step would be to push a bit further, to get beyond the sound bite: What's the process that leads to confirmation bias? Why do I seek confirmatory information and in which contexts am I particularly susceptible? What other models are related to the confirmation bias? How do I solve the problem?

The answers are out there: They're in Daniel Kahneman, in Charlie Munger, and in Jon Elster. They're available by searching through Farnam Street.

The big question: How far do you go? A good question without a perfect answer. But the best test I can think of is to perform something like the Feynman technique, and to think about the chauffeur problem.

Can you explain it simply to an intelligent layperson, using vivid examples? Can you answer all the follow-ups? That's fluency. And you must be careful not to fool yourself, because, in the wise words of Feynman, “…you are the easiest person to fool.”

While that's great work, you're not done yet. You have to make the rubber hit the road now. Something has to happen in your life and mind.

The way to do that is to come up with rules, systems, parables, and processes of your own, or to copy someone else's that are obviously sound.

In the case of Confirmation Bias, we have two wonderful models to copy, one from each of the Charlies — Darwin, and Munger.

Darwin had a rule, one we have written about before but will restate here: Make a note, immediately, if you come across a thought or idea that is contrary to something you currently believe.

As for Munger, he implemented a rule in his own life: “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”

Now we're getting somewhere! With the implementation of those two habits and some well-earned deep fluency, you can immediately, tomorrow, start improving the quality of your decision-making.

Sometimes when we get outside the heuristic/biases stuff, it's less obvious how to make the “rubber hit the road” — and that will be a constant challenge for you as you take this path.

But that's also the fun part! With every new idea and model you pick up, you also pick up the opportunity to synthesize for yourself a useful little parable to make it stick or a new habit that will help you use it. Over time, you'll come up with hundreds of them, and people might even look to you when they're having problems doing it themselves!

Look at Buffett and Munger — both guys are absolute machines, chock full of pithy little rules and stories they use in order to implement and recall what they've learned.

For example, Buffett discovered early on the manipulative psychology behind open-outcry auctions. What did he do? He made a rule to never go to one! That's how it's done.

Even if you can't come up with a great rule like that, you can figure out a way to use any new model or idea you learn. It just takes some creative thinking.

Sometimes it's just a little mental rule or story that sticks particularly well. (Recall one of the prime lessons from our series on memory: Salient, often used, well-associated, and important information sticks best.)

We did this very thing recently with Lee Kuan Yew's Rule. What a trite way to refer to the simple idea of asking if something actually works…attributing it to a Singaporean political leader!

But that's exactly the point. Give the thing a name and a life and, like clockwork, you'll start recalling it. The phrase “Lee Kuan Yew's Rule” actually appears in my head when I'm approaching some new system or ideology, and as soon as it does, I find myself backing away from ideology and towards pragmatism. Exactly as I'd hoped.

Your goal should be to create about a thousand of those little tools in your head, each attached to a deep fluency in the material from which it came.

***

I can hear the objection coming. Who has time for this stuff?

You do. It's about making time for the things that really matter. And what could possibly matter more than upgrading your whole mental operating system? I solemnly promise that you're spending way more time right now making sub-optimal decisions and trying to deal with the fallout.

If you need help learning to manage your time right this second, check out our Productivity Seminar, which has changed some people's lives entirely. The central idea is to become more thoughtful and deliberate with how you spend your hours. When you start doing that, you'll notice you do have an hour a day to spend on this Big Ideas stuff. It's worth the 59 bucks.

If you don't have 59 bucks, at least imitate Cal Newport and start scheduling your days and put an hour in there for “Getting better at making all of my decisions.”

Once you find that solid hour (or more), start using it in the way outlined above, and let the world's great knowledge actually start making an impact. Just do a little every day.

What you'll notice, over the weeks and months and years of doing this, is that your mind will really change! It has to! And with that, your life will change too. The only way to fail at improving your brain is by imitating Sisyphus, pushing the boulder halfway up, over and over.

Unless and until you really understand this, you'll continue spinning your wheels. So here's your call to action. Go get to it!

Bias from Disliking/Hating

(This is a follow-up to our post on the Bias from Liking/Loving, which you can find here.)

Think of a cat snarling and spitting, lashing its tail and standing with its back arched. Its pulse is elevated, its blood vessels are constricted, and its muscles are tense. This reaction may sound familiar, because everyone has experienced the same tensed-up feeling of rage at least once in their lives.

When rage is directed towards an external object, it becomes hate. Just as we learn to love certain things or people, we learn to hate others.

There are several cognitive processes that awaken the hate within us, and most of them stem from our need for self-protection.

Reciprocation

We tend to dislike people who dislike us (and, true to Newton's third law, with equal strength). The more we perceive they hate us, the more we hate them.

Competition

A lot of hate comes from scarcity and competition. Whenever we compete for resources, our own mistakes can mean good fortune for others. In these cases, we affirm our own standing and preserve our self-esteem by blaming others.

Robert Cialdini explains that because of the competitive environment in American classrooms, school desegregation may increase the tension between children of different races instead of decreasing it. Imagine being a secondary school child:

If you knew the right answer and the teacher called on someone else, you probably hoped that he or she would make a mistake so that you would have a chance to display your knowledge. If you were called on and failed, or if you didn't even raise your hand to compete, you probably envied and resented your classmates who knew the answer.

At first we are merely annoyed. But as the situation fails to improve and our frustration grows, we are slowly drawn into false attributions and hate. We keep blaming “the others” who are doing better, associating them with the loss and scarcity we are experiencing (or perceive ourselves to be experiencing). That is one way our emotional frustration boils over into hate.

Us vs. Them

The ability to separate friends from enemies has been critical for our safety and survival. Because mistaking the two can be deadly, our mental processes have evolved to quickly spot potential threats and react accordingly. We are constantly feeding information about others into our “people information lexicon,” which shapes our view not only of individuals, whom we must decide how to act around, but of entire classes of people, as we average out that information.

To shortcut our reactions, we classify narrowly and think in dichotomies: right or wrong, good or bad, heroes or villains. (The type of Grey Thinking we espouse is almost certainly unnatural, but, then again, so is a good golf swing.) Since most of us are merely average at everything we do, even superficial and small differences, such as race or religious affiliation, can become an important source of identification. We are, after all, creatures who seek to belong to groups above all else.

Seeing ourselves as part of a special, different, and, in its own way, superior group decreases our willingness to empathize with the other side. This works both ways – the hostility towards the others also increases the solidarity of the group. In extreme cases, we are so drawn towards the inside view that we create a strong picture of the enemy that has little to do with reality or our initial perceptions.

From Compassion to Hate

We think of ourselves as compassionate, empathetic and cooperative. So why do we learn to hate?

Part of the answer lies in the fact that we think of ourselves in a specific way. If we cannot reach a consensus, then the other side, which is in some way different from us, must necessarily be uncooperative for our assumptions about our own qualities to hold true.

Our inability to examine the situation from all sides and shake our beliefs, together with self-justifying behavior, can lead us to conclude that others are the problem. Such asymmetric views, amplified by strong perceived differences, often fuel hate.

What started off as odd or difficult to understand quickly turns into something unholy.

If the situation is characterized by competition, we may also see ourselves as victims. The others, who abuse our rights, take away our privileges, or restrict our freedom, are seen as bullies who deserve to be punished. We convince ourselves that we are doing good by doing harm to those who threaten to cross the line.

This is understandable. In critical times our survival indeed may depend on our ability to quickly spot and neutralize dangers. The cost of a false positive – mistaking a friend for a foe – is much lower than the potentially fatal false negative of mistaking our adversaries for innocent allies. As a result, it is safest to assume that anything we are not familiar with is dangerous by default. Natural selection, by its nature, “keeps what works,” and this tendency towards distrust of the unfamiliar probably survived in that way.
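A toy expected-cost calculation shows why that default could pay off. Every number below is invented purely for illustration:

```python
# Two crude policies toward unfamiliar strangers, with invented numbers.
p_threat = 0.02             # assumed chance a stranger is actually hostile
cost_false_alarm = 1.0      # treating a friend as a foe: awkward but cheap
cost_missed_threat = 500.0  # treating a foe as a friend: potentially fatal

# Policy 1: distrust everything unfamiliar (pay the false-alarm cost).
expected_distrust = (1 - p_threat) * cost_false_alarm   # 0.98

# Policy 2: trust by default (pay the missed-threat cost).
expected_trust = p_threat * cost_missed_threat          # 10.0

print(f"distrust-by-default: {expected_distrust:.2f}")
print(f"trust-by-default:    {expected_trust:.2f}")
# Even with threats at only 2%, blanket distrust is ten times "cheaper"
# on average: the kind of trade natural selection keeps.
```

Crude as it is, the arithmetic captures why the tendency survived even though it misfires constantly in the modern world.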

The Displays of Hate

Physical and psychological pain is very mobilizing. We despise foods that make us nauseous and people who have hurt us. Because we are scared to suffer, we end up either avoiding or destroying the “enemy,” which is why revenge can be pursued with such ferocity. In short, hate is a defense against enduring pain repeatedly.

There are several ways that the bias from disliking and hating displays itself to the outside world. The most obvious of them is war, which has been more or less ever-present throughout the history of mankind.

This would lead us to think that war may well be unavoidable. Charlie Munger offers the more moderate opinion that while hatred and dislike cannot be avoided, the instances of war can be minimized by channeling our hate and fear into less destructive behaviors. (A good political system allows for dissent and disagreement without explosions of bloody upheaval.)

Even with the spread of religion, and the advent of advanced civilization, modern war remains pretty savage. But we also get what we observe in present-day Switzerland and the United States, wherein the clever political arrangements of man “channel” the hatreds and dislikings of individuals and groups into nonlethal patterns including elections.

But these dislikings and hatreds, arguably inherent to our nature, never go away completely; they find their way into politics. Think of the dichotomies: the left versus the right wing, the nationalists versus the communists, the libertarians versus the authoritarians. This might be the reason there are maxims like “Politics is the art of marshaling hatreds.”

Finally, as we move away from politics, arguably the most sophisticated and civilized way of channeling hatred is litigation. Charlie Munger attributes the following words to Warren Buffett:

A major difference between rich and poor people is that the rich people can spend their lives suing their relatives.

While most of us reflect on our memories of growing up with our siblings with fondness, there are cases where the competition for shared attention or resources breeds hatred. If the siblings can afford it, they will sometimes litigate endlessly to lay claims over their parents' property or attention.

Under the Influence of Bias

There are several ways that the bias from hating can interfere with our normal judgment and lead to suboptimal decisions.

Ignoring Virtues of The Other Side

Michael Faraday was once asked after a lecture whether he meant to imply that a hated academic rival was always wrong. His reply was short and firm: “He’s not that consistent.” Faraday must have recognized the bias from hating and corrected for it with that witty comment.

What we should recognize here is that no situation is ever black or white. We all have our virtues and we all have our weaknesses. However, when possessed by the strong emotions of hate, our perceptions can be distorted to the extent that we fail to recognize any good in the opponent at all. This is driven by consistency bias, which motivates us to form a coherent (“she is all-round bad”) opinion of ourselves and others.

Association Fueled Hate

The principle of association goes that the nature of the news tends to infect the teller. This means that the worse the experience, the worse the impression of anything related to it.

Association is why we blame the messenger who tells us something that we don't want to hear even when they didn't cause the bad news. (Of course, this creates an incentive not to speak the truth and avoid giving bad news.)

A classic example is the unfortunate and confused weatherman who receives hate mail whenever it rains. One such weatherman went so far as to seek advice from Arizona State professor of psychology Robert Cialdini, whose work we have discussed before.

Cialdini explained to him that, in light of the destinies of other messengers, he was born lucky. Rain might ruin someone’s holiday plans, but it will rarely change the destiny of a nation, as the news carried by ancient Persian war messengers could. For them, delivering good news meant a feast, whereas delivering bad news meant death.

The weatherman left Cialdini’s office with a sense of privilege and relief.

“Doc,” he said on his way out, “I feel a lot better about my job now. I mean, I'm in Phoenix where the sun shines 300 days a year, right? Thank God I don't do the weather in Buffalo.”

Fact Distortion

Under the influence of liking or disliking bias, we tend to fill gaps in our knowledge by building our conclusions on assumptions based on very little evidence.

Imagine you meet a woman at a party and find her to be a self-centered, unpleasant conversation partner. Now her name comes up as someone who could be asked to contribute to a charity. How likely do you feel it is that she will give to the charity?

In reality, you have no useful knowledge, because there is little to nothing that should make you believe that people who are self-centered are not also generous contributors to charity. The two traits are unrelated, yet because of the well-known halo effect, we often assume one predicts the other.

By association, you are likely to believe that this woman will not be generous towards charities, despite the lack of any evidence. And because you now also believe she is stingy and ungenerous, you probably dislike her even more.

This is just an innocent example, but the larger effects of such distortions can be so extreme that they lead to a major miscognition. Each side literally believes that every single bad attribute or crime is attributable to the opponent.

Charlie Munger explains this with a relatively recent example:

When the World Trade Center was destroyed, many Pakistanis immediately concluded that the Hindus did it, while many Muslims concluded that the Jews did it. Such factual distortions often make mediation between opponents locked in hatred either difficult or impossible. Mediations between Israelis and Palestinians are difficult because facts in one side's history overlap very little with facts from the other side's. These distortions and the overarching mistrust might be why some conflicts seem to never end.

Avoiding Being Hated

To varying degrees, we value acceptance and affirmation from others. Very few of us wake up wanting to be disliked or rejected. Social approval, which lies at the heart of social influence, shapes behavior and contributes to conformity. Francois VI, Duc de La Rochefoucauld wrote: “We only confess our little faults to persuade people that we have no big ones.”

Remember the old adage: “The nail that sticks out gets hammered down.” This is why we don't openly speak the truth or question people: we don't want to be the nail.

How do we resolve hate?

It is only normal that we can find more common ground with some people than with others. But are we really destined to fall into the traps of hate or is there a way to take hold of these biases?

That’s a question worth over a hundred million lives. Fortunately, psychologists believe there are ways we can minimize prejudice against others.

Firstly, we can engage with others in sustained close contact to breed familiarity. The contact must not only be prolonged, but also positive and cooperative in nature – either working towards a common cause or against a common enemy.

Secondly, prejudice is reduced when groups attain equal status in all respects, including education, income, and legal rights. This effect is further reinforced when equality is supported not only “on paper” but also ingrained within broader social norms.

And finally, the obvious: we should practice awareness of our own emotions and our ability to hold back the temptation to dismiss others. Whenever we are confronted with strong feelings, it might simply be best to sit back, breathe, and do our best to eliminate the distorted thinking.

***

Want more? Check out the bias from liking/loving, or check out a whole bunch of mental models.