
Complexity Bias: Why We Prefer Complicated to Simple

Complexity bias is a cognitive bias that leads us to give undue credence to complex concepts.

Faced with two competing hypotheses, we are likely to choose the most complex one. That’s usually the option with the most assumptions and regressions. As a result, when we need to solve a problem, we may ignore simple solutions — thinking “that will never work” — and instead favor complex ones.

To understand complexity bias, we need first to establish the meaning of three key terms associated with it: complexity, simplicity, and chaos.

Complexity, like pornography, is hard to define when we’re put on the spot, although most of us recognize it when we see it. The Cambridge Dictionary defines complexity as “the state of having many parts and being difficult to understand or find an answer to.” The definition of simplicity is the inverse: “something [that] is easy to understand or do.” Chaos is defined as “a state of total confusion with no order.”

“Life is really simple, but we insist on making it complicated.”

— Confucius

Complex systems contain individual parts that combine to form a collective that often can’t be predicted from its components. Consider humans. We are complex systems. We’re made of tens of trillions of cells and yet we are so much more than the aggregation of our cells. You’d never predict what we’re like or who we are from looking at our cells.

Complexity bias is our tendency to look at something that is easy to understand, or that we encounter in a state of confusion, and to view it as having many parts that are difficult to understand.

We often find it easier to face a complex problem than a simple one.

A person who feels tired all the time might insist that their doctor check their iron levels while ignoring the fact that they are unambiguously sleep deprived. Someone experiencing financial difficulties may stress over the technicalities of their telephone bill while ignoring the large sums of money they spend on cocktails.

Marketers make frequent use of complexity bias.

They do this by incorporating confusing language or insignificant details into product packaging or sales copy. Most people who buy “ammonia-free” hair dye, or a face cream which “contains peptides,” don’t fully understand the claims. Terms like these often mean very little, but we see them and imagine that they signify a product that’s superior to alternatives.

How many of you know what probiotics really are and how they interact with gut flora?

Meanwhile, we may also see complexity where only chaos exists. This tendency manifests in many forms, such as conspiracy theories, superstition, folklore, and logical fallacies. The distinction between complexity and chaos is not a semantic one. When we imagine that something chaotic is in fact complex, we are seeing it as having an order and more predictability than is warranted. In fact, there is no real order, and prediction is incredibly difficult at best.

Complexity bias is interesting because the majority of cognitive biases occur in order to save mental energy. For example, confirmation bias enables us to avoid the effort associated with updating our beliefs. We stick to our existing opinions and ignore information that contradicts them. Availability bias is a means of avoiding the effort of considering everything we know about a topic. It may seem like the opposite is true, but complexity bias is, in fact, another cognitive shortcut. By opting for impenetrable solutions, we sidestep the need to understand. Of the fight-or-flight responses, complexity bias is the flight response. It is a means of turning away from a problem or concept and labeling it as too confusing. If you think something is harder than it is, you surrender your responsibility to understand it.

“Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities.”

— Andy Benoit

Faced with too much information on a particular topic or task, we see it as more complex than it is. Often, understanding the fundamentals will get us most of the way there. Software developers often find that 90% of the code for a project takes about half the allocated time. The remaining 10% takes the other half. Writing — and any other sort of creative work — is much the same. When we succumb to complexity bias, we are focusing too hard on the tricky 10% and ignoring the easy 90%.

Research has revealed our inherent bias towards complexity.

In a 1989 paper entitled “Sensible reasoning in two tasks: Rule discovery and hypothesis evaluation,” Hilary F. Farris and Russell Revlin examined this tendency. In one study, participants were asked to establish an arithmetic rule. They received a set of three numbers (such as 2, 4, 6) and tried to generate a hypothesis by asking the experimenter if other number sequences conformed to the rule. Farris and Revlin wrote, “This task is analogous to one faced by scientists, with the seed triple functioning as an initiating observation, and the act of generating the triple is equivalent to performing an experiment.”

The actual rule was simple: list any three ascending numbers.

The participants could have said anything from “1, 2, 3” to “3, 7, 99” and been correct. It should have been easy for the participants to guess this, but most of them didn’t. Instead, they came up with complex rules for the sequences. (Also see Falsification of Your Best Loved Ideas.)
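To make the participants' predicament concrete, here is a minimal Python sketch of the task. The candidate rules and probe triples are illustrative assumptions rather than items from the paper; the point is that probes chosen to confirm a complex hypothesis never separate it from the simpler true rule.

```python
# Minimal sketch of the rule-discovery task. The true rule is simple:
# any three ascending numbers.
def actual_rule(triple):
    a, b, c = triple
    return a < b < c

# Candidate rules a participant might form from the seed (2, 4, 6).
# These particular candidates are illustrative, not taken from the paper.
candidate_rules = {
    "even numbers increasing by 2": lambda t: all(x % 2 == 0 for x in t)
                                              and t[1] - t[0] == 2 and t[2] - t[1] == 2,
    "arithmetic progression": lambda t: t[1] - t[0] == t[2] - t[1],
    "any three ascending numbers": lambda t: t[0] < t[1] < t[2],
}

# Probing only with triples that fit the complex hypotheses never falsifies them,
# because every such triple also satisfies the simpler true rule.
probes = [(2, 4, 6), (10, 12, 14), (100, 102, 104)]
for name, rule in candidate_rules.items():
    agrees = all(rule(p) == actual_rule(p) for p in probes)
    print(f"{name}: agrees with all feedback so far -> {agrees}")

# Only a disconfirming probe such as (1, 2, 3) separates the hypotheses:
print(actual_rule((1, 2, 3)))                                      # True
print(candidate_rules["even numbers increasing by 2"]((1, 2, 3)))  # False
```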

A paper by Helena Matute looked at how intermittent reinforcement leads people to see complexity in chaos. Three groups of participants were placed in rooms and told that a loud noise would play from time to time. The volume, length, and pattern of the sound were identical for each group. Group 1 (Control) was told to sit and listen to the noises. Group 2 (Escape) was told that there was a specific action they could take to stop the noises. Group 3 (Yoked) was told the same as Group 2, but in their case, there was actually nothing they could do.

Matute wrote:

Yoked participants received the same pattern and duration of tones that had been produced by their counterparts in the Escape group. The amount of noise received by Yoked and Control subjects depends only on the ability of the Escape subjects to terminate the tones. The critical factor is that Yoked subjects do not have control over reinforcement (noise termination) whereas Escape subjects do, and Control subjects are presumably not affected by this variable.

The result? Not one member of the Yoked group realized that they had no control over the sounds. Many members came to repeat particular patterns of “superstitious” behavior. Indeed, the Yoked and Escape groups had very similar perceptions of task controllability. Faced with randomness, the participants saw complexity.

Does that mean the participants were stupid? Not at all. We all exhibit the same superstitious behavior when we believe we can influence chaotic or simple systems.

Funnily enough, animal studies have revealed much the same. In particular, consider B.F. Skinner’s well-known research on the effects of random rewards on pigeons. Skinner placed hungry pigeons in cages equipped with a random-food-delivery mechanism. Over time, the pigeons came to believe that their behavior affected the food delivery. Skinner described this as a form of superstition. One bird spun in counterclockwise circles. Another butted its head against a corner of the cage. Other birds swung or bobbed their heads in specific ways. Although there is some debate as to whether “superstition” is an appropriate term to apply to birds, Skinner’s research shed light on the human tendency to see things as being more complex than they actually are.

Skinner wrote (in “‘Superstition’ in the Pigeon,” Journal of Experimental Psychology, 38):

The bird behaves as if there were a causal relation between its behavior and the presentation of food, although such a relation is lacking. There are many analogies in human behavior. Rituals for changing one's fortune at cards are good examples. A few accidental connections between a ritual and favorable consequences suffice to set up and maintain the behavior in spite of many unreinforced instances. The bowler who has released a ball down the alley but continues to behave as if he were controlling it by twisting and turning his arm and shoulder is another case in point. These behaviors have, of course, no real effect upon one's luck or upon a ball half way down an alley, just as in the present case the food would appear as often if the pigeon did nothing—or, more strictly speaking, did something else.
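A toy simulation can make the mechanism visible. The behaviours, timings, and probabilities below are invented for illustration and are not Skinner's procedure; the point is that when food arrives on a schedule unrelated to behaviour, whatever the bird happens to be doing at delivery time accumulates spurious "credit."

```python
import random

# Toy sketch: food arrives on a fixed timer, regardless of what the bird does.
# Whatever behaviour coincides with delivery gets "credited" as the cause.
random.seed(42)

behaviours = ["turn counter-clockwise", "bob head", "peck at corner", "stand still"]
FOOD_INTERVAL = 15                  # food every 15 ticks, independent of behaviour
credited = {b: 0 for b in behaviours}

current = random.choice(behaviours)
for tick in range(1, 601):
    if random.random() < 0.2:       # the bird switches behaviour now and then
        current = random.choice(behaviours)
    if tick % FOOD_INTERVAL == 0:   # delivery has nothing to do with `current`...
        credited[current] += 1      # ...yet the coincidence reads like a cause

print(credited)  # an uneven tally that invites a "my behaviour brings food" story
```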

The world around us is a chaotic, entropic place. But it is rare for us to see it that way.

In Living with Complexity, Donald A. Norman offers a perspective on why we need complexity:

We seek rich, satisfying lives, and richness goes along with complexity. Our favorite songs, stories, games, and books are rich, satisfying, and complex. We need complexity even while we crave simplicity… Some complexity is desirable. When things are too simple, they are also viewed as dull and uneventful. Psychologists have demonstrated that people prefer a middle level of complexity: too simple and we are bored, too complex and we are confused. Moreover, the ideal level of complexity is a moving target, because the more expert we become at any subject, the more complexity we prefer. This holds true whether the subject is music or art, detective stories or historical novels, hobbies or movies.

As an example, Norman asks readers to contemplate the complexity we attach to tea and coffee. Most people in most cultures drink tea or coffee each day. Both are simple beverages, made from water and coffee beans or tea leaves. Yet we choose to attach complex rituals to them. Even those of us who would not consider ourselves to be connoisseurs have preferences. Offer to make coffee for a room full of people, and we can be sure that each person will want it made in a different way.

Coffee and tea start off as simple beans or leaves, which must be dried or roasted, ground and infused with water to produce the end result. In principle, it should be easy to make a cup of coffee or tea. Simply let the ground beans or tea leaves [steep] in hot water for a while, then separate the grounds and tea leaves from the brew and drink. But to the coffee or tea connoisseur, the quest for the perfect taste is long-standing. What beans? What tea leaves? What temperature water and for how long? And what is the proper ratio of water to leaves or coffee?

The quest for the perfect coffee or tea maker has been around as long as the drinks themselves. Tea ceremonies are particularly complex, sometimes requiring years of study to master the intricacies. For both tea and coffee, there has been a continuing battle between those who seek convenience and those who seek perfection.

Complexity, in this way, can enhance our enjoyment of a cup of tea or coffee. It’s one thing to throw some instant coffee in hot water. It’s different to select the perfect beans, grind them ourselves, calculate how much water is required, and use a fancy device. The question of whether this ritual makes the coffee taste better or not is irrelevant. The point is the elaborate surrounding ritual. Once again, we see complexity as superior.

“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.”

— Edsger W. Dijkstra

The Problem with Complexity

Imagine a person who sits down one day and plans an elaborate morning routine. Motivated by the routines of famous writers they have read about, they lay out their ideal morning. They decide they will wake up at 5 a.m., meditate for 15 minutes, drink a liter of lemon water while writing in a journal, read 50 pages, and then prepare coffee before planning the rest of their day.

The next day, they launch into this complex routine. They try to keep at it for a while. Maybe they succeed at first, but entropy soon sets in and the routine gets derailed. Sometimes they wake up late and do not have time to read. Their perceived ideal routine has many different moving parts. Their actual behavior ends up being different each day, depending on random factors.

Now imagine that this person is actually a famous writer. A film crew asks to follow them around on a “typical day.” On the day of filming, they get up at 7 a.m., write some ideas, make coffee, cook eggs, read a few news articles, and so on. This is not really a routine; it is just a chaotic morning based on reactive behavior. When the film is posted online, people look at the morning and imagine they are seeing a well-planned routine rather than the randomness of life.

This hypothetical scenario illustrates the issue with complexity: it is unsustainable without effort.

The more individual constituent parts a system has, the greater the chance of its breaking down. Charlie Munger once said that “Where you have complexity, by nature you can have fraud and mistakes.” Any complex system — be it a morning routine, a business, or a military campaign — is difficult to manage. Addressing one of the constituent parts inevitably affects another (see the Butterfly Effect). Unintended and unexpected consequences are likely to occur.

As Daniel Kahneman and Amos Tversky wrote in 1974 (in Judgment Under Uncertainty: Heuristics and Biases): “A complex system, such as a nuclear reactor or the human body, will malfunction if any of its essential components fails. Even when the likelihood of failure in each component is slight, the probability of an overall failure can be high if many components are involved.”
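The arithmetic behind that claim is worth making explicit: if a system needs all of its n components and each fails independently with probability p, the chance of an overall failure is 1 - (1 - p)^n. A quick sketch with illustrative numbers:

```python
# Chance that at least one of n independent components fails,
# when each fails with a small probability p: 1 - (1 - p)**n
def overall_failure(p, n):
    return 1 - (1 - p) ** n

for n in (10, 100, 1000):
    print(f"{n:>4} components at p = 0.01 each -> overall failure = {overall_failure(0.01, n):.3f}")
# 10 parts: ~0.096, 100 parts: ~0.634, 1000 parts: ~1.000
```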

This is why complexity is less common than we think. It is unsustainable without constant maintenance, self-organization, or adaptation. Chaos tends to disguise itself as complexity.

“Human beings are pattern-seeking animals. It's part of our DNA. That's why conspiracy theories and gods are so popular: we always look for the wider, bigger explanations for things.”

— Adrian McKinty, The Cold Cold Ground

Complexity Bias and Conspiracy Theories

A musician walks barefoot across a zebra-crossing on an album cover. People decide he died in a car crash and was replaced by a lookalike. A politician’s eyes look a bit odd in a blurry photograph. People conclude that he is a blood-sucking reptilian alien taking on a human form. A photograph shows an indistinct shape beneath the water of a Scottish lake. The area floods with tourists hoping to glimpse a surviving prehistoric creature. A new technology overwhelms people. So, they deduce that it is the product of a government mind-control program.

Conspiracy theories are the ultimate symptom of our desire to find complexity in the world. We don’t want to acknowledge that the world is entropic. Disasters happen and chaos is our natural state. The idea that hidden forces animate our lives is an appealing one. It seems rational. But as we know, we are all much less rational and logical than we think. Studies have shown that a high percentage of people believe in some sort of conspiracy. It’s not a fringe concept. According to research by Joseph E. Uscinski and Joseph M. Parent, about one-third of Americans believe the notion that Barack Obama’s birth certificate is fake. Similar numbers are convinced that 9/11 was an inside job orchestrated by George Bush. Beliefs such as these are present in all types of people, regardless of class, age, gender, race, socioeconomic status, occupation, or education level.

Conspiracy theories are invariably far more complex than reality. Although education does reduce the chances of someone’s believing in conspiracy theories, one in five Americans with postgraduate degrees still hold conspiratorial beliefs.

Uscinski and Parent found that, just as uncertainty led Skinner’s pigeons to see complexity where only randomness existed, a sense of losing control over the world around us increases the likelihood of our believing in conspiracy theories. Faced with natural disasters and political or economic instability, we are more likely to concoct elaborate explanations. In the face of horrific but chaotic events such as Hurricane Katrina, or the recent Grenfell Tower fire, many people decide that secret institutions are to blame.

Take the example of the “Paul McCartney is dead” conspiracy theory. Since the 1960s, a substantial number of people have believed that McCartney died in a car crash and was replaced by a lookalike, usually said to be a Scottish man named William Campbell. Of course, conspiracy theorists declare, The Beatles wanted their most loyal fans to know this, so they hid clues in songs and on album covers.

The beliefs surrounding the Abbey Road album are particularly illustrative of the desire to spot complexity in randomness and chaos. A police car is parked in the background — an homage to the officers who helped cover up the crash. A car’s license plate reads “LMW 28IF” — naturally, a reference to McCartney being 28 if he had lived (although he was 27) and to Linda McCartney (whom he had not met yet). Matters were further complicated once The Beatles heard about the theory and began to intentionally plant “clues” in their music. The song “I’m So Tired” does in fact feature backwards mumbling about McCartney’s supposed death. The 1960s were certainly a turbulent time, so is it any wonder that scores of people pored over album art or played records backwards, looking for evidence of a complex hidden conspiracy?

As Henry Louis Gates Jr. wrote, “Conspiracy theories are an irresistible labor-saving device in the face of complexity.”

Complexity Bias and Language

We have all, at some point, had a conversation with someone who speaks like philosopher Theodor Adorno wrote: using incessant jargon and technical terms even when simpler synonyms exist and would be perfectly appropriate. We have all heard people say things which we do not understand, but which we do not question for fear of sounding stupid.

Jargon is an example of how complexity bias affects our communication and language usage. When we use jargon, especially out of context, we are putting up unnecessary semantic barriers that reduce the chances of someone’s challenging or refuting us.

In an article for The Guardian, James Gingell describes his work translating scientific jargon into plain, understandable English:

It’s quite simple really. The first step is getting rid of the technical language. Whenever I start work on refining a rough-hewn chunk of raw science into something more pleasant I use David Dobbs’ (rather violent) aphorism as a guiding principle: “Hunt down jargon like a mercenary possessed, and kill it.” I eviscerate acronyms and euthanise decrepit Latin and Greek. I expunge the esoteric. I trim and clip and pare and hack and burn until only the barest, most easily understood elements remain.

[…]

Jargon…can be useful for people as a shortcut to communicating complex concepts. But it’s intrinsically limited: it only works when all parties involved know the code. That may be an obvious point but it’s worth emphasising — to communicate an idea to a broad, non-specialist audience, it doesn’t matter how good you are at embroidering your prose with evocative imagery and clever analogies, the jargon simply must go.

Gingell writes that even the most intelligent scientists struggle to differentiate between thinking (and speaking and writing) like a scientist, and thinking like a person with minimal scientific knowledge.

Unnecessarily complex language is not just annoying; it's harmful. The use of jargon in areas such as politics and economics does real damage: people without the requisite knowledge to understand it feel alienated and removed from important conversations. It leads people to believe that they are not intelligent enough to understand politics, or not educated enough to comprehend economics. When a politician talks of fiscal charters or rolling four-quarter growth measurements in a public statement, they are sending a crystal-clear message to the large numbers of people whose lives will be shaped by their decisions: this is not about you.

Complexity bias is a serious issue in politics. For those in the public eye, complex language can be a means of minimizing the criticism of their actions. After all, it is hard to dispute something you don't really understand. Gingell considers jargon to be a threat to democracy:

If we can’t fully comprehend the decisions that are made for us and about us by the government then how can we possibly revolt or react in an effective way? Yes, we have a responsibility to educate ourselves more on the big issues, but I also think it’s important that politicians and journalists meet us halfway.

[…]

Economics and economic decisions are more important than ever now, too. So we should implore our journalists and politicians to write and speak to us plainly. Our democracy depends on it.

In his essay “Politics and the English Language,” George Orwell wrote:

In our time, political speech and writing are largely the defence of the indefensible. … Thus, political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenceless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification. Millions of peasants are robbed of their farms and sent trudging along the roads with no more than they can carry: this is called transfer of population or rectification of frontiers. People are imprisoned for years without trial, or shot in the back of the neck or sent to die of scurvy in Arctic lumber camps: this is called elimination of unreliable elements.

An example of the problems with jargon is the Sokal affair. In 1996, Alan Sokal (a physics professor) submitted a fabricated scientific paper entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity.” The paper had absolutely no relation to reality and argued that quantum gravity is a social and linguistic construct. Even so, the paper was published in a respected journal. Sokal’s paper consisted of convoluted, essentially meaningless claims, such as this paragraph:

Secondly, the postmodern sciences deconstruct and transcend the Cartesian metaphysical distinctions between humankind and Nature, observer and observed, Subject and Object. Already quantum mechanics, earlier in this century, shattered the ingenious Newtonian faith in an objective, pre-linguistic world of material objects “out there”; no longer could we ask, as Heisenberg put it, whether “particles exist in space and time objectively.”

(If you're wondering why no one called him out, or more specifically why we have a bias to not call BS out, check out pluralistic ignorance).

Jargon does have its place. In specific contexts, it is absolutely vital. But in everyday communication, its use is a sign that we wish to appear complex and therefore more intelligent. Great thinkers throughout the ages have stressed the crucial importance of using simple language to convey complex ideas. Many of the ancient thinkers whose work we still reference today — people like Plato, Marcus Aurelius, Seneca, and Buddha — were known for their straightforward communication and their ability to convey great wisdom in a few words.

“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.”

— Ernst F. Schumacher

How Can We Overcome Complexity Bias?

The most effective tool we have for overcoming complexity bias is Occam’s razor. Also known as the principle of parsimony, this is a problem-solving principle used to eliminate improbable options in a given situation. Occam’s razor suggests that the simplest solution or explanation is usually the correct one. When we don’t have enough empirical evidence to disprove a hypothesis, we should avoid making unfounded assumptions or adding unnecessary complexity so we can make quick decisions or establish truths.

An important point to note is that Occam’s razor does not state that the simplest hypothesis is the correct one, but states rather that it is the best option before the establishment of empirical evidence. It is also useful in situations where empirical data is difficult or impossible to collect. While complexity bias leads us towards intricate explanations and concepts, Occam’s razor can help us to trim away assumptions and look for foundational concepts.

Returning to Skinner’s pigeons, had they known of Occam’s razor, they would have realized that there were two main possibilities:

  • Their behavior affects the food delivery.

Or:

  • Their behavior is irrelevant because the food delivery is random or on a timed schedule.

Using Occam’s razor, the head-bobbing, circles-turning pigeons would have realized that the first hypothesis involves numerous assumptions, including:

  • There is a particular behavior they must enact to receive food.
  • The delivery mechanism can somehow sense when they enact this behavior.
  • The required behavior is different from behaviors that would normally give them access to food.
  • The delivery mechanism is consistent.

And so on. Occam’s razor would dictate that because the second hypothesis is the simplest, involving the fewest assumptions, it is most likely the correct one.
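Read this way, Occam's razor amounts to "prefer the candidate that posits the fewest assumptions until the evidence discriminates." A toy sketch, hand-encoding the two hypotheses above:

```python
# Toy Occam's razor: with no discriminating evidence, prefer the hypothesis
# that carries the fewest assumptions. The entries mirror the lists above.
hypotheses = {
    "my behaviour triggers the food delivery": [
        "a particular behaviour must be enacted to receive food",
        "the delivery mechanism can sense that behaviour",
        "the required behaviour differs from normal food-seeking behaviour",
        "the delivery mechanism is consistent",
    ],
    "delivery is random or on a timer, regardless of behaviour": [
        "delivery is independent of behaviour",
    ],
}

preferred = min(hypotheses, key=lambda h: len(hypotheses[h]))
print(preferred)  # the simpler hypothesis wins until evidence says otherwise
```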

Many geniuses are really good at eliminating unnecessary complexity. Einstein, for instance, was a master at sifting the essential from the non-essential. Steve Jobs was the same.


Proximate vs Root Causes: Why You Should Keep Digging to Find the Answer

“Anything perceived has a cause.
All conclusions have premises.
All effects have causes.
All actions have motives.”
— Arthur Schopenhauer

***

The Basics

One of the first principles we learn as babies is that of cause and effect. Infants learn that pushing an object will cause it to move, crying will cause people to give them attention, and bumping into something will cause pain. As we get older, this understanding becomes more complex. Many people love to talk about the causes of significant events in their lives (if I hadn’t missed the bus that day I would never have met my partner! or if I hadn’t taken that class in college I would never have discovered my passion and got my job!) Likewise, when something bad happens we have a tendency to look for somewhere to pin the blame.

The mental model of proximate vs root causes is a more advanced version of this reasoning, which involves looking beyond what appears to be the cause and finding the real cause. As a higher form of understanding, it is useful for creative and innovative thinking. It can also help us to solve problems, rather than relying on band-aid solutions.

Much of our understanding of cause and effect comes from Isaac Newton. His work examined how forces lead to motion and other effects. Newton’s first law explains that a body remains at rest (or in uniform motion) unless a force acts upon it. From this, we can take a cause to be whatever force or action produces an effect.

For example, someone might ask: Why did I lose my job?

  • Proximate cause: the company was experiencing financial difficulties and could not continue to pay all its employees.
  • Root cause: I was not of particular value to the company and they could survive easily without me.

This can then be explored further: Why was I not of value to the company?

  • Ultimate cause: I allowed my learning to stagnate and did not seek constant improvement. I continued doing the same as I had been for years which did not help the company progress.
  • Even further: Newer employees were of more value because they had more up-to-date knowledge and could help the company progress.

This can then help us to find solutions: How can I prevent this from happening again?

  • Answer: In future jobs, I can continually aim to learn more, keep to date with industry advancements, read new books on the topic and bring creative insights to my work. I will know this is working if I find myself receiving increasing amounts of responsibility and being promoted to higher roles.

This example illustrates the usefulness of this line of thinking. If our hypothetical person went with the proximate cause, they would walk away feeling nothing but annoyance at the company which fired them. By establishing the root causes, they can mitigate the risk of the same thing happening in the future.

There are a number of relevant factors which we must take into account when figuring out root causes. These are known as predisposing factors and can be used to prevent a future repeat of an unwanted occurrence.

Predisposing factors tend to include:

  • The location of the effect
  • The exact nature of the effect
  • The severity of the effect
  • The time at which the effect occurs
  • The level of vulnerability to the effect
  • The cause of the effect
  • The factors which prevented it from being more severe.

Looking at proximate vs root causes is a form of abductive reasoning, a process used to unearth simple, probable explanations. We can use it in conjunction with philosophical razors (such as Occam’s and Hanlon’s) to make smart decisions.

In Root Cause Analysis, Paul Wilson defines root causes as:

Root cause is that most basic reason for an undesirable condition or problem which, if eliminated or corrected, would have prevented it from existing or occurring.

In Leviathan, Chapter XI (1651) Thomas Hobbes wrote:

Ignorance of remote causes disposeth men to attribute all events to the causes immediate and instrumental: for these are all the causes they perceive…Anxiety for the future time disposeth men to inquire into the causes of things: because the knowledge of them maketh men the better able to order the present to their best advantage. Curiosity, or love of the knowledge of causes, draws a man from consideration of the effect to seek the cause; and again, the cause of that cause; till of necessity he must come to this thought at last that there is some cause whereof there is no former cause.

In Maxims of the Law, Francis Bacon wrote:

It were infinite for the law to consider the causes of causes, and their impulsions one of another; therefore it contented itself with the immediate cause, and judgeth of acts by that, without looking to any further degree.

A rather tongue-in-cheek perspective comes from the ever-satirical George Orwell, in Animal Farm:

Man is the only real enemy we have. Remove Man from the scene, and the root cause of hunger and overwork is abolished forever.

The issue with root cause analysis is that it can lead to oversimplification; it is rare for there to be one single root cause. It can also lead us to go too far (as George Orwell illustrates). Overemphasizing root causes is common among depressed people, who can end up seeing their own existence as the cause of all their problems. As a consequence, suicide can seem like a solution (although it is the exact opposite). The same can occur after a relationship ends, as people imagine their personality and nature to be the cause. To use this mental model in an effective manner, we must avoid letting it lead to self-blame or negative thought spirals. When using it to examine our lives, it is best to do so only with a qualified therapist, rather than while ruminating in bed late at night. Finding root causes should be done with the future in mind, not for dwelling on past issues. Expert root cause analysts use it to prevent further problems and create innovative solutions. We can do the same in our own lives and work.

“Shallow men believe in luck or in circumstance. Strong men believe in cause and effect.”

— Ralph Waldo Emerson

Establishing Root Causes

Establishing root causes is rarely an easy task. However, there are a number of techniques we can use to simplify the deduction process. These are similar to the methods used to find first principles:

Socratic questioning
Socratic questioning is a technique which can be used to establish root causes through strict analysis. This is a disciplined questioning process used to uncover truths, reveal underlying assumptions, and separate knowledge from ignorance. The key distinction between Socratic questioning and normal discussion is that the former seeks to draw out root causes in a systematic manner. Socratic questioning generally follows this process:

  1. Clarifying thinking and explaining origins of ideas. (What happened? What do I think caused it?)
  2. Challenging assumptions. (How do I know this is the cause? What could have caused that cause?)
  3. Looking for evidence. (How do I know that was the cause? What can I do to prove or disprove my ideas?)
  4. Considering alternative perspectives. (What might others think? What are all the potential causes?)
  5. Examining consequences and implications. (What are the consequences of the causes I have established? How can they help me solve problems?)
  6. Questioning the original questions. (What can I do differently now that I know the root cause? How will this help me?)

The 5 Whys
This technique is simpler and less structured than Socratic questioning. Parents of young children will no doubt be familiar with the process, which involves asking “why?” five times in response to a given statement. The purpose is to understand cause-and-effect relationships and work back to the root cause. Five repetitions are usually enough, and each question is based on the previous answer, not the initial statement.

Returning to the example of our hypothetical laid off employee (mentioned in the introduction), we can see how this technique works.

  • Effect: I lost my job.
  • Why? Because I was not valuable enough to the company and they could let me go without it causing any problems.
  • Why? Because a newer employee in my department was getting far more done and having more creative ideas than me.
  • Why? Because I had allowed my learning to stagnate and stopped keeping up with industry developments. I continued doing what I had been doing for years because I thought it was effective.
  • Why? Because I only received encouraging feedback from people higher up in the company, and even when I knew my work was substandard, they avoided mentioning it.
  • Why? Because whenever I received negative feedback in the past, I got angry and defensive. After a few occurrences of this, I was left to keep doing work which was not of much use. Then, when the company began to experience financial difficulties, firing me was the practical choice.
  • Solution: In future jobs, I must learn to be responsive to feedback, aim to keep learning and make myself valuable. I can also request regular updates on my performance. To avoid becoming angry when I receive negative feedback, I can try meditating during breaks to stay calmer at work.

As this example illustrates, the 5 whys technique is useful for drawing out root causes and finding solutions.
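For readers who like to see the chain laid out, here is a minimal sketch of the same exercise as data. The strings simply re-encode the worked example above; in real use each answer comes from investigation, not from a script.

```python
# The 5 Whys as a simple chain: each answer becomes the next question's subject.
effect = "I lost my job."
whys = [
    "I was not valuable enough for the company to keep.",
    "A newer employee was producing more and having more creative ideas.",
    "I had let my learning stagnate and stopped following the industry.",
    "I only ever heard encouraging feedback, even when my work was substandard.",
    "I reacted angrily to criticism, so people stopped offering it.",
]

print(f"Effect: {effect}")
for i, answer in enumerate(whys, start=1):
    print(f"Why ({i})? Because {answer}")  # each 'why' interrogates the previous answer

print(f"Candidate root cause: {whys[-1]}")
```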

Cause and Effect Mapping

This technique is often used to establish causes of accidents, disasters, and other mishaps. Let’s take a look at how cause and effect mapping can be used to identify the root cause of a disaster which occurred in 1987: the King's Cross fire. This was a shocking event in which 31 people died and 100 were injured in a tube station fire. It was the first fatal fire to have occurred on the London Underground and led to tremendous changes in rules and regulations. The main factors which combined to produce the tragedy included: flammable grease on the floors, which allowed flames to spread; out-of-date wooden escalators, which were themselves flammable; complacent staff who failed to control the initial flames; untrained staff with no knowledge of how to evacuate people; blocked exits (believed to be due to cleaning staff negligence); and a dropped match (assumed to have been discarded by someone lighting a cigarette).

Once investigators had established these factors which led to the fire, they could begin looking for solutions to prevent another fatal fire. Of course, solving the wrong problem would have been ineffective. Let’s take a look at each of the causes and figure out the root problem:

  • Cause: A dropped match. Smoking on Underground trains had been banned three years prior, but many people still lit cigarettes on the escalators as they left. Investigators were certain that the fire was caused by a match and was not arson. Research found that many other fires had begun in the past, yet had not spread, so this alone did not explain the severity of this particular fire. Better measures have since been put in place to prevent smoking in stations (although Londoners can vouch for the fact that it still occasionally happens late at night or in secluded stations).
  • Cause: flammable grease on escalators. Research found that this was indeed highly flammable. Solving this would have been almost impossible: the sheer size of stations and the numbers of people passing through them made thorough cleaning difficult. Solving this alone would not have been sufficient.
  • Cause: wooden escalators. Soon after the fire, stations began replacing these with metal ones (although it took until 2014 for the entire Underground network to replace every single one).
  • Cause: untrained staff. This was established to be the root cause. Even if the other factors were resolved, the lack of staff training and of access to firefighting equipment still left a high risk of another fatal incident. Investigations found that staff were instructed to call the Fire Brigade only once a fire was out of control; most had no training and little ability to communicate with each other. Once this root cause was found, it could be dealt with: staff were given improved emergency training and better radios for communicating, and heat detectors and sprinklers were fitted in stations.

From this example, we can see how useful finding root causes is. The lack of staff training was the root cause, while the other factors were proximate causes which contributed.

From this information, we can map the relationship between the causes.
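As a rough sketch of that map in code (the causes and the "root" flag follow the description above; the intermediate effect labels are illustrative assumptions):

```python
# A rough cause-and-effect map of the King's Cross fire, following the text above.
# Each cause points at the intermediate effect it fed; "root" marks the
# investigators' conclusion as described in the preceding list.
causes = {
    "dropped match":              {"fed": "initial flames",       "root": False},
    "flammable grease on floors": {"fed": "spread of the fire",   "root": False},
    "wooden escalators":          {"fed": "spread of the fire",   "root": False},
    "blocked exits":              {"fed": "difficult evacuation", "root": False},
    "untrained staff":            {"fed": "uncontrolled fire",    "root": True},
}

for cause, info in causes.items():
    marker = "ROOT CAUSE" if info["root"] else "proximate"
    print(f"{cause:28s} -> {info['fed']:22s} [{marker}]")
```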

“All our knowledge has its origins in our perceptions … In nature, there is no effect without a cause … Experience never errs; it is only your judgments that err by promising themselves effects such as are not caused by your experiments.”

— Leonardo da Vinci

How We Can Use This Mental Model as Part of our Latticework

  • Hanlon’s Razor — This mental model states: never attribute to malice that which can be attributed to incompetence. It is relevant when looking for root causes. Take the aforementioned example of the King's Cross fire. It could be assumed that staff failed to control the fire due to malice. However, we can be 99% certain that their failure to act was due to incompetence (the result of poor training and miscommunication). When analysing root causes, we must be sure not to attribute blame where it does not exist.
  • Occam’s Razor — This model states: the simplest solution is usually correct. In the case of the fire, there are infinite possible causes which could be investigated. It could be claimed that the fire was started on purpose, that the builders of the station made it flammable on purpose so they would be required to rebuild it, or that the whole thing is a conspiracy and people actually died in an alternate manner. However, the simplest solution is that the fire was caused by a discarded match. When looking for root causes, it is wise to first consider the simplest potential causes, rather than looking at everything which could have contributed.
  • Arguing from first principles — This mental model involves establishing the first principles of any given area of knowledge: information which cannot be deduced from anything else. Understanding the first principles of how fire spreads (such as the fire triangle) could have helped to prevent the event.
  • Black swans — This model, developed by Nassim Taleb, describes “an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact…. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.” The King's Cross fire was a black swan: surprising, impactful, and much analyzed afterwards. Understanding that black swans do occur can help us to plan for serious events before they happen.
  • Availability bias — This model states: we misjudge the frequency of events which have happened recently and of information which is vivid. Imagine a survivor of the King's Cross fire who had also been on a derailed train a few months earlier. The intensity of the two memories would be likely to lead them to see travelling on the Underground as dangerous. However, this is not the case – only one in 300 million journeys experiences issues (much safer than driving). When devising root causes, we must be sure to consider all information, not just that which comes to mind with ease.
  • Narrative fallacy — This model states: we tend to turn the past into a narrative, imagining events as fitting together in a manner which is usually false.
  • Hindsight bias — This model states: we see events as predictable when looking back on them.
  • Confirmation bias — This model states: we tend to look for information which confirms pre-existing beliefs and ideas.

Amusing Ourselves To Death

I've read Orwell's 1984. I'm currently reading Neil Postman's Amusing Ourselves to Death: Public Discourse in the Age of Show Business and I recently ordered Aldous Huxley's Brave New World.

Contrary to popular belief, Orwell and Huxley did not prophesy the same thing. This brilliant comic by Stuart McMillen, which takes its words from Amusing Ourselves to Death, contrasts the visions of Orwell and Huxley.

“In short,” Postman writes, “Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.”


(↬ Biblioklept)

Can History Tell the Truth?

“I know it is the fashion to say,” George Orwell once wrote, “that most of recorded history is lies anyway. I am willing to believe that history is for the most part inaccurate and biased, but what is peculiar to our own age is the abandonment of the idea that history could be truthfully written.”

Charles Frankel, in The Case for Modern Man:

An exciting Japanese movie of a few years ago, Rashomon, offers a peculiarly apt illustration of (Karl) Mannheim's central thesis (that all social thinking is inevitably determined by unconscious assumptions and unacknowledged commitments – that is, everyone views events through a limited point of view.) The movie is in the form of a story told by a woodsman, who is in despair at what he has seen and heard, and has lost all of his faith in man. He reports that a Japanese lady and her husband have been set upon in the woods by a highwayman. The lady has been raped, the husband killed. And then he repeats in turn the accounts he has overheard at the police station, where the highwayman, the lady, and the dead husband, speaking through a medium, have had to tell the events that transpired. Each participant tells a different story, each subtly arranges the events in a pattern that will put his own position in the best light. As each of these stories is re-enacted before our eyes, our tension mounts. We are not sure whether what really happened was murder and rape, whether the lady was treacherous or loyal, the husband cowardly or heroic, the highwayman an aggressor or a victim. Each time we move to the next story we hope to get closer to the truth, and each time we are put off. But suddenly we seem to see an opening. For it turns out that the woodsman, who has claimed to be merely repeating the stories he overheard at the police station, has been an eyewitness to the actual scene in the forest. So the woodsman tells his story. But, once more, we hear a story which has something subtly off-center about it. A dagger is unaccounted for. And then it turns out that the woodsman has stolen it. He has not been a neutral bystander; he too is a participant.

This notion that we are all participants in what happens in human history, and that there can therefore be no such thing as objectivity about history, is the central theme, and the central problem, in Mannheim's philosophy of history. We never see what “really” happens, and in fact it makes no sense even to ask. The affairs of men take place in a hall of mirrors, each with its own angle of distortion; and all we can report is what we see in the mirrors, for there is nothing else to see. All social thinking is inevitably the thinking of men who have a role in events, feelings about them, and a limited perspective upon them. Every belief comes labeled with the date, place, and social pedigree of the man who holds it. And the idea that there is an objective truth about human affairs, independently of who asserts it, is only one element in the special perspective of liberalism.

Frankel continues:

There can be no disengaged intelligence seeking a universal truth. Intelligence is inevitably earth-bound, practical and biased. The questions men ask about social affairs are always selected questions that are suggested by some particular point of view and serve some special interest. The answers men accept as satisfactory are always partial answers with an inescapable element of arbitrariness in them. And even the standards of truth that men employ are limited by the social perspectives in which they are framed.

Mannheim's case that we cannot objectively observe social affairs rests on two main lines of argument: “The first of these rests on a sharp distinction between the study of nature and the study of human affairs. The second rests on an assumption about the meaning of terms like 'partiality' and 'bias.'” Frankel goes on to look at each of those.

We are putting the cart before the horse when we think that a science of politics must be different from other sciences because political behavior is random and haphazard. It is not because political behavior is random and haphazard that we do not have much objective knowledge about it. It is because we do not have much objective knowledge about it that it is random and haphazard.

Ultimately, Frankel comes to doubt Mannheim:

There are obvious differences between the behavior of human beings and the behavior of physical things; but they do not justify setting these two in separate worlds, or suggesting that the ideals of truth and reason we apply to the physical sciences do not apply to the study of human history. The natural sciences, after all, have also had social origins and social consequences.
