Tag: Philosophy

So Two Stoics Walk Into a Bar…

The first thing he ordered was OJ with a splash of vodka. When people came to the FS bar, the first thing they did was order a drink, so this didn't seem out of the ordinary. But looking closely … this was no ordinary man.

Why was Seneca ordering a drink at the FS bar? And who was that next to him? Was that Epictetus? It was clear this was going to be no ordinary night at the FS bar.

It's time to get to work.

***

(What follows is our imagined dialogue between Epictetus and Seneca, two essential contributors to Stoic thought, at the FS bar, presided over by an intellectually curious bartender, Kit.

Imagine: There is a slight breeze as the door opens. In walk Seneca and Epictetus. They are both dressed decently, but plainly. After taking a moment to adjust to the light, they each take a seat at the FS bar.)

***

Kit: Evening, gentlemen. What can I get you?

Epictetus: I'll have an orange juice with a little vodka. Get my friend here a hemlock tea.

Seneca: Very humorous. I'll have the same, please.

Kit: No problem. (She begins to mix the drinks)

(Seneca turns to Epictetus, obviously continuing a conversation they had started earlier.)

Seneca: I’m not sure I agree with you. Relationships don’t automatically interfere with our ability to be content. If you find someone who has the same approach to life as you, then it’s possible to share your life with them.

Epictetus: Ah, that makes me nervous. Other people, their decisions, their actions, are outside of our control. If we can’t walk away from relationships then we’re relying on things that we have no control over. And it’s impossible to be content like that.

Seneca: But surely a life without emotional attachment is not the kind of life that will provide contentment?

Epictetus: Why not?

(Seneca pauses to think about this.)

Seneca: It was nature’s intention that there should be no need of great equipment for a good life. Every individual can make himself happy. That implies that feeling something positive is the goal.

Epictetus: Yes, but happiness comes when you can generate it yourself. Like you said, everyone is born with the tools to make himself happy. You don’t need anything else in this world to achieve it. Money, stuff, or relationships.

Seneca: I guess the question then, is can you have something without needing it? Can you enjoy something without relying on it?

(pause while they both consider this)

Kit: And how are the screwdrivers, gentlemen?

Seneca: Exactly as they should be. Thank you.

Epictetus: You probably think our conversation isn’t very appropriate for a bar.

Kit: (smiles) Everything is appropriate for a bar. It’s a good place to work out your thoughts.

Seneca: What do you think? About my friend’s point that we should form no real attachments to anyone. Spouses. Children. Because we can never be truly content relying on anything outside of our control.

Kit: It sounds pretty impossible. If you didn’t care about anyone, why would you even bother getting married or having kids? What would be the point?

Seneca: Exactly! I think that relationships can play a crucial role in being content with your life. The goal is not to avoid feeling because it can cause pain, but accept that pain will inevitably come, and learn to deal with it with equanimity. And if you have a close relationship with someone who’s similar, you can find contentment with each other. It’s about enjoying relationships without becoming attached to them.

Epictetus: No, no. Denial is better than moderation. Wanting nothing means no one has power over you. As soon as you want a spouse, you compromise your ability to control your life.

Seneca: As soon as you desire anything, you compromise. But what if it’s not about wanting a spouse. Or children. What if it’s just about doing it if the opportunity presents itself, and then it becomes about loving the ones you have.

Kit: (who has continued to listen to their conversation due to a lack of other patrons) I think it would be really hard to not want your children to grow up and have great lives.

Epictetus: It’s not ‘wanting’ or ‘not wanting’. It’s not feeling anything at all beyond what you can control.

Kit: Is that even possible?

Epictetus: (shrugs) It’s something to work towards. (Sees Kit’s skeptical expression) Look, if you go buy a chocolate bar, it costs you a dollar. If you don’t buy it, you don’t have the chocolate bar, but you still have the dollar. You can’t both get something and not pay for it. It’s the same with relationships. You can’t derive benefit from them without it costing you to some degree. And if you don’t invest yourself in them, you’ll still have that effort available for yourself.

Seneca: I disagree. I think it is possible to love. You just can’t let yourself be controlled by it. It is desires that blind us to the truth. The wanting, not the being. You can and should love your children. But you must also be mindful of the precariousness of life, and not be amazed or devastated by the things that happen to them. A lot of bad shit happens in life, to us and the ones we love. The problem is that we are always surprised by it.

Epictetus: Ah, so when a little wine is stolen, don’t get upset. It’s the price you pay for tranquility.

Seneca: Right.

Kit: So, you just have to accept that your husband will leave you, and your children will die, that way when it happens you will just be like ‘oh, okay’?

Seneca: (shakes his head) Not quite. It’s more knowing that they could. See, it might not ever happen, but then again, it might. And if you start off accepting that fortune, or fate, or however you understand the world, brings both good and bad, then you will be able to still find contentment no matter what life throws at you.

Kit: Hmm. And does it work for you?

Seneca: (laughs) Sometimes.

Epictetus: I think it’s about trying to be one step removed from what’s happening. If you can recognize, for instance, that it’s not people who are irritating, but your judgment about their behavior that is irritating, then you create a space where you can change how you feel without needing anyone else to change.

Seneca: Yes. The more understanding and acceptance you have of the reality of living, the less you are impacted when circumstances knock you down.

Kit: Well, that I can get behind. Another drink?

Making Compassionate Decisions: The Role of Empathy in Decision Making

“The biggest deficit that we have in our society and in the world right now is an empathy deficit. We are in great need of people being able to stand in somebody else's shoes and see the world through their eyes.”

— Barack Obama

You don’t have to look hard to find quotes expounding the need for more empathy in society. As with Barack Obama’s quote above, we are encouraged to actively build empathy with others — especially those who are different from us. The implicit message in these pleas is that empathy will make us treat each other with more respect and caring and will help reduce violence. But is this true? Does empathy make us appreciate others, help us behave in moral ways, or help us make better decisions?

These are questions Paul Bloom tackles in his book Against Empathy: The Case for Rational Compassion. As the title suggests, Bloom’s book makes a case against empathy as an inherent force for good and takes a closer look at what empathy is (and is not), how empathy works in our brains, how empathy can lead to immoral outcomes despite our best intentions, and how we can improve our ability to have a positive impact by strengthening our intelligence, compassion, self-control, and ability to reason.

To explore these questions, we first need to define what we’re talking about.

What Is Empathy?

Empathy is an often-used word that can mean different things. Bloom quotes one team of empathy researchers who joke that “there are probably nearly as many definitions of empathy as people working on this topic.” For his part, Bloom defines empathy as “the act of coming to experience the world as you think someone else does.” This type of empathy was explored by philosophers of the Scottish Enlightenment. Bloom writes:

As Adam Smith put it, we have the capacity to think about another person and “place ourselves in his situation and become in some measure the same person with him, and thence form some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them.”

This is the definition and view of empathy that Bloom devotes most of the book to exploring. This is the “standing in another man’s shoes” type of empathy from Barack Obama’s quote above, which Bloom calls emotional empathy.

“I feel your pain” is more than a metaphor. It's literal.

With emotional empathy, you actually experience a weaker degree of what somebody else feels. Researchers in recent years have been able to show that empathic responses of pain occur in the same area of the brain where real pain is experienced.

So “I feel your pain” isn’t just a gooey metaphor; it can be made neurologically literal: Other people’s pain really does activate the same brain area as your own pain, and more generally, there is neural evidence for a correspondence between self and other.

To make the shoe metaphor literal, imagine that you see somebody drop something heavy on their foot — you flinch because you know what this feels like and the parts of your brain that experience pain (the anterior insula and the cingulate cortex) react. You don’t feel the same degree of pain, of course — you didn’t drop anything on your foot after all — but it is likely that you have an involuntary physical reaction like a flinch, a facial grimace, or an audible outburst. This is an emotionally empathic response.

But there is another form of empathy that Bloom wants us to be aware of and consider differently. It relates to our ability to understand what is going on in the minds of others. Bloom refers to this form as cognitive empathy:

… if I understand that you are in pain without feeling it myself, this is what psychologists describe as social cognition, social intelligence, mind reading, theory of mind, or mentalizing. It's also sometimes described as a form of empathy—“cognitive empathy” as opposed to “emotional empathy.”

In this sense, cognitive empathy speaks to our capacity to understand what is going on in the minds of others. In the case of pain, which is where a lot of empathy research is done, we’re not talking about feeling any degree of pain, as we might with emotional empathy, but instead, we simply understand that the other person is feeling pain without feeling it ourselves. Cognitive empathy goes beyond pain — our ability to understand what is going on in somebody else’s mind is an important part of being human and is necessary for us to relate to each other.

The brain is, of course, very complicated, so it is plausible that these two types of empathy could take place in the same part of the brain. So far, though, the research seems to indicate that they are largely separate:

In a review article, Jamil Zaki and Kevin Ochsner note that hundreds of studies now support a certain perspective on the mind, which they call “a tale of two systems.” One system involves sharing the experience of others, what we’ve called empathy; the other involves inferences about the mental states of others—mentalizing or mind reading. While they can both be active at once, and often are, they occupy different parts of the brain. For instance, the medial prefrontal cortex, just behind the forehead, is involved in mentalizing, while the anterior cingulate cortex, sitting right behind that, is involved in empathy.

The difference between cognitive and emotional empathy is important for understanding Bloom’s arguments. From Bloom’s perspective, cognitive empathy is “…a useful and necessary tool for anyone who wishes to be a good person—but it is morally neutral.” On the other hand, Bloom believes that emotional empathy is “morally corrosive,” and the bulk of his attack is directed at highlighting the pitfalls of relying on emotional empathy while making the case for cultivating and practicing “rational compassion” instead.

I believe that the capacity for emotional empathy, described as “sympathy” by philosophers such as Adam Smith and David Hume, often simply known as “empathy” and defended by so many scholars, theologians, educators, and politicians, is actually morally corrosive. If you are struggling with a moral decision and find yourself trying to feel someone else’s pain or pleasure, you should stop. This empathic engagement might give you some satisfaction, but it’s not how to improve things and can lead to bad decisions and bad outcomes. Much better to use reason and cost-benefit analysis, drawing on a more distanced compassion and kindness.

Here again, the definition of the terms is important for understanding the argument. Empathy and compassion are synonyms in many dictionaries and used interchangeably by many, but they have different characteristics. Bloom outlines the difference:

… compassion and concern are more diffuse than empathy. It is weird to talk about having empathy for the millions of victims of malaria, say, but perfectly normal to say that you are concerned about them or feel compassion for them. Also, compassion and concern don’t require mirroring of others’ feelings. If someone works to help the victims of torture and does so with energy and good cheer, it doesn’t seem right to say that as they do this, they are empathizing with the individuals they are helping. Better to say that they feel compassion for them.

Bloom references a review paper written by Tania Singer and Olga Klimecki to help make the distinction clear. Singer and Klimecki write:

In contrast to empathy, compassion does not mean sharing the suffering of the other: rather, it is characterized by feelings of warmth, concern and care for the other, as well as a strong motivation to improve the other’s well-being. Compassion is feeling for and not feeling with the other.

To summarize, emotional empathy could be simply described as “feeling what others feel,” cognitive empathy as “understanding what others feel,” and compassion as “caring about how others feel.”

Empathy and Morality

Many people believe that our ability to empathize is the basis for morality because it causes us to consider our actions from another’s perspective. “Treat others as you would like to be treated” is the basic morality lesson repeated thousands of times to children all over the world.

In this way, empathy harnesses our self-centered nature and extends it outward. Bloom suggests that the argument in its simplest form would go like this:

Everyone is naturally interested in him- or herself; we care most about our own pleasure and pain. It requires nothing special to yank one’s hand away from a flame or to reach for a glass of water when thirsty. But empathy makes the experiences of others salient and important—your pain becomes my pain, your thirst becomes my thirst, and so I rescue you from the fire or give you something to drink. Empathy guides us to treat others as we treat ourselves and hence expands our selfish concerns to encompass others.

In this way, the willful exercise of empathy can motivate kindness that would never have otherwise occurred. Empathy can make us care about a slave, or a homeless person, or someone in solitary confinement. It can put us into the mind of a gay teenager bullied by his peers, or a victim of rape. We can empathize with a member of a despised minority or someone suffering from religious persecution in a faraway land. All these experiences are alien to me, but through the exercise of empathy, I can, in some limited way, experience them myself, and this makes me a better person.

When we consider the plight of others by imagining ourselves in their situation, we experience an empathic response that can cause us to evaluate the morality of our actions.

In an interview, Steven Pinker hypothesizes that it was an increase in empathy, made possible by the technology of the printing press and the resulting increase in literacy, that led to the Humanitarian Revolution during the Enlightenment. The increase in empathy brought about by our ability to read accounts of violent punishments like disembowelment and mutilation caused us to reconsider the morality of treating other human beings in such ways.

So in certain instances, empathy can play a role in motivating us to take moral action. But is an empathic response required to do so?

To use a classic example from philosophy—first thought up by the Chinese philosopher Mencius—imagine that you are walking by a lake and see a young child struggling in shallow water. If you can easily wade into the water and save her, you should do it. It would be wrong to keep walking.

What motivates this good act? It is possible, I suppose, that you might imagine what it feels like to be drowning, or anticipate what it would be like to be the child’s mother or father hearing that she drowned. Such empathic feelings could then motivate you to act. But that is hardly necessary. You don’t need empathy to realize that it’s wrong to let a child drown. Any normal person would just wade in and scoop up the child, without bothering with any of this empathic hoo-ha.

And so there has to be more to morality than empathy. Our decisions about what’s right and what’s wrong, and our motivations to act, have many sources. One’s morality can be rooted in a religious worldview or a philosophical one. It can be motivated by a more diffuse concern for the fates of others—something often described as concern or compassion…

I hope most people reading this would agree that failing to attempt to save a drowning child or supporting or perpetrating violent punishments like disembowelment would be at the very least morally reprehensible, if not outright evil.

But what motivates people to be “evil”? For researchers like Simon Baron-Cohen, evil is defined as “empathy erosion” — truly evil people lack the capacity to empathize, and it is this lack of empathy that causes them to act in evil ways. Bloom looks at the question of what causes people to be evil from a slightly different angle:

Indeed, some argue that the myth of pure evil gets things backward. That is, it’s not that certain cruel actions are committed because the perpetrators are self-consciously and deliberatively evil. Rather it is because they think they are doing good. They are fueled by a strong moral sense.

When the perpetrators of violence or cruelty believe that their actions are morally justified, what motivates them? Bloom suggests that it can be empathy. Empathy often causes us to choose sides, to choose whom to empathize with. We see this tendency play out in politics all the time.

Politicians representing one side believe they are saving the world, while representatives on the other side believe that their adversaries are out to destroy civilization as we know it. If I believe that I am protecting a person or group of people whom I choose to empathize with, then I may be motivated to act in a way I believe is morally justified, even though others may believe that I have harmed them.

Steven Pinker weighed in on this issue when he wrote the following in The Better Angels of Our Nature:

If you added up all the homicides committed in pursuit of self-help justice, the casualties of religious and revolutionary wars, the people executed for victimless crimes and misdemeanors, and the targets of ideological genocides, they would surely outnumber the fatalities from amoral predation and conquest.

Bloom quotes Pinker and goes on to write:

Henry Adams put this in stronger terms, with regard to Robert E. Lee: “It's always the good men who do the most harm in the world.”

This might seem perverse. How can good lead to evil? One thing to keep in mind here is that we are interested in beliefs and motivations, not what’s good in some objective sense. So the idea isn’t that evil is good; rather, it’s that evil is done by those who think they are doing good.

So from a moral perspective, empathy can lead us astray. We may believe we are doing good or that our actions are justified, but this may not be true for everyone involved. This is especially troublesome when we consider how we are affected by a growing list of cognitive biases.

Empathy and Biases

While empathy may not be required to motivate us to save a drowning child, it can still help us consider the experiences or suffering of another person, motivating us to see things from their perspective and to act to relieve their suffering:

I see the bullied teenager and might be tempted initially to join in with his tormenters, out of sadism or boredom or a desire to dominate or be popular, but then I empathize—I feel his pain, I feel what it’s like to be bullied—so I don’t add to his suffering. Maybe I even rise to his defense. Empathy is like a spotlight directing attention and aid to where it’s needed.

On the surface this seems like an excellent case for the positive power of empathy; it shines a “spotlight” on a person in need and motivates us to help them. But what happens when we dig a little deeper into this metaphor? Bloom writes:

… spotlights have a narrow focus, and this is one problem with empathy. It does poorly in a world where there are many people in need and where the effects of one’s actions are diffuse, often delayed, and difficult to compute, a world in which an act that helps one person in the here and now can lead to greater suffering in the future.

He adds:

Further, spotlights only illuminate what they are pointed at, so empathy reflects our biases. Although we might intellectually believe that the suffering of our neighbor is just as awful as the suffering of someone living in another country, it’s far easier to empathize with those who are close to us, those who are similar to us, and those we see as more attractive or vulnerable and less scary. Intellectually, a white American might believe that a black person matters just as much as a white person, but he or she will typically find it a lot easier to empathize with the plight of the latter than the former. In this regard, empathy distorts our moral judgments in pretty much the same way that prejudice does.

We are all predisposed to care more deeply for those we are close to. From a purely biological perspective, we will care for and protect our children and families before the children or families of strangers. Our decision making often falls victim to narrow framing, and our actions are affected by biases like Liking/Loving and Disliking/Hating and our tendency to discount the pain of people we don’t like:

We are constituted to favor our friends and family over strangers, to care more about members of our own group than people from different, perhaps opposing, groups. This fact about human nature is inevitable given our evolutionary history. Any creature that didn’t have special sentiments toward those that shared its genes and helped it in the past would get its ass kicked from a Darwinian perspective; it would falter relative to competitors with more parochial natures. This bias to favor those close to us is general—it influences who we readily empathize with, but it also influences who we like, who we tend to care for, who we will affiliate with, who we will punish, and so on.

There are many causes for human biases — empathy is only one — but taking a step back, we can see how the intuitive gut responses motivated by emotional empathy can negatively affect our ability to make rational decisions.

Empathy’s narrow focus, specificity, and innumeracy mean that it’s always going to be influenced by what captures our attention, by racial preferences, and so on. It’s only when we escape from empathy and rely instead on the application of rules and principles or a calculation of costs and benefits that we can, to at least some extent, become fair and impartial.

While many of us are motivated to be good and to make good decisions, it isn’t always cut and dried. Our preferences for whom to help or which organizations to support are affected by our biases. If we’re not careful, empathy can cloud our view of the potential impacts of our actions. Considering these impacts takes much more than empathy and a desire to do good; it takes awareness of our biases and mental effort to combat their effects:

… doing actual good, instead of doing what feels good, requires dealing with complex issues and being mindful of exploitation from competing, sometimes malicious and greedy, interests. To do so, you need to step back and not fall into empathy traps. The conclusion is not that one shouldn’t give, but rather that one should give intelligently, with an eye toward consequences.

In addition to biases like Liking/Loving and Disliking/Hating, empathy can lead to biases related to the Representativeness Heuristic. Actions motivated by empathy often fail to take the broader picture into account; the spotlight doesn’t encourage us to consider base rates or sample size when we make our decisions. Instead, we are motivated by positive emotions for a specific individual or small group:

Empathy is limited as well in that it focuses on specific individuals. Its spotlight nature renders it innumerate and myopic: It doesn’t resonate properly to the effects of our actions on groups of people, and it is insensitive to statistical data and estimated costs and benefits.

Part of the challenge that exists with empathy is this innumeracy that Bloom describes. It is impossible for us to form genuine empathic connections with abstractions. Conversely, if we see the suffering of one, empathy can motivate us to help make it stop. As Mother Teresa said, “If I look at the mass, I will never act. If I look at the one, I will.” This is what psychologists call “the identifiable victim effect.”

Perhaps an example will help illustrate. On October 17, 1987, 18-month-old Jessica McClure fell 22 feet down an eight-inch-diameter well in the backyard of her home in Midland, Texas. Over the next 2 ½ days, fire, police, and volunteer rescuers worked around the clock to save her. Media coverage of the emergency was broadcast all over the world, and Jessica McClure became internationally known as “Baby Jessica,” prompting then-President Ronald Reagan to proclaim that “…everybody in America became the godmothers and godfathers of Jessica while this was going on.” The intense coverage and global awareness led to an influx of donations, resulting in an $800,000 trust being established in Jessica’s name.

What prompted this massive outpouring of concern and support? There are millions of children in need every day all over the world. How many of the people who sent donations to Baby Jessica had ever tried to help these faceless children? In the case of Baby Jessica, they had an identifiable victim, and empathy motivated many of them to help Jessica and her family. They could imagine what it might feel like for those poor parents and they felt genuine concern for the child’s future; all the other needy children around the world were statistical abstractions. This ability to identify and put a face on the suffering child and their family enables us to experience an empathic response with them, but the random children and their families remain empathically out of reach.

None of this is to say that rescuers should not have worked to save Jessica McClure — she was a real-world example of Mencius’s proverbial drowning child — but there are situations every day where we choose to help individuals at the cost of the continued suffering of others. Our actions often have diffuse and unknowable impacts.

If our concern is driven by thoughts of the suffering of specific individuals, then it sets up a perverse situation in which the suffering of one can matter more than the suffering of a thousand.

Furthermore, not only are we more likely to empathize with the identifiable victim, but our empathy also has limits of scale. If we hear that an individual in a faraway land is suffering, we may have an empathic response, but will that response increase proportionally if we learn that thousands or millions of people are suffering? Adam Smith got to the heart of this question in The Theory of Moral Sentiments when he wrote:

Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connection with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labors of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquility, as if no such accident had happened.

Empathy can inadvertently motivate us to act to save the one at the expense of the many. While the examples provided are by no means clear-cut issues, it is worth considering how the morality or goodness of our actions to help the few may have negative consequences for the many.

Charlie Munger has written and spoken about the Kantian Fairness Tendency, in which he suggests that for certain systems to be moral to the many, they must be unfair to the few.

Empathy and Reason

We are emotional creatures, then, but we are also rational beings, with the capacity for rational decision-making. We can override, deflect, and overrule our passions, and we often should do so. It’s not hard to see this for feelings like anger and hate—it’s clear that these can lead us astray, that we do better when they don’t rule us and when we are capable of circumventing them.

While we need kindness and compassion, and we should strive to be good people making good decisions, we are not necessarily well served by empathy in this regard; emotional empathy’s negatives often outweigh its positives. Instead, we should rely on our capacity to reason and to control our emotions. Empathy is not something that can be removed or ignored; it is, after all, a normal function of our brains. But we can and do combine reason with our natural instincts and intuitions:

The idea that human nature has two opposing facets—emotion versus reason, gut feelings versus careful, rational deliberation—is the oldest and most resilient psychological theory of all. It was there in Plato, and it is now the core of the textbook account of cognitive processes, which assumes a dichotomy between “hot” and “cold” mental processes, between an intuitive “System 1” and a deliberative “System 2.”

We know from Daniel Kahneman’s Thinking, Fast and Slow that these two systems are not inherently separate in practice. They are both functioning in our brains at the same time.

Some decisions are made faster due to heuristics and intuitions from experiences or our biology, while other decisions are made in a more deliberative and slow fashion using reason. Bloom writes:

We go through a mental process that is typically called “choice,” where we think about the consequences of our actions. There is nothing magical about this. The neural basis of mental life is fully compatible with the existence of conscious deliberation and rational thought—with neural systems that analyze different options, construct logical chains of argument, reason through examples and analogies, and respond to the anticipated consequences of actions.

We have an impulsive, emotional, and intuitive decision-making system in System 1 and a deliberative, reasoning, and (sometimes) rational decision-making system in System 2.

We will always have emotional reactions, but on average our decision making will be better served by improving our ability to reason rather than by leveraging our ability to empathize. One way to increase our ability to reason is to focus on improving our self-control:

Self-control can be seen as the purest embodiment of rationality in that it reflects the working of a brain system (embedded in the frontal lobe, the part of the brain that lies behind the forehead) that restrains our impulsive, irrational, or emotive desires.

While Bloom is unabashedly against empathy as an inherent force for good in the world, he is also a firm supporter of being and doing good. He believes that the “feeling with” nature of emotional empathy leads us to make biased and bad decisions despite our best intentions and that we should instead foster and encourage the “caring for” nature of compassion while combining it with our intelligence, self-control, and ability to reason:

… none of this is to deny the importance of traits such as compassion and kindness. We want to nurture these traits in our children and work to establish a culture that prizes and rewards them. But they are not enough. To make the world a better place, we would also want to bless people with more smarts and more self-control. These are central to leading a successful and happy life—and a good and moral one.

 

[Editor's note: Where you see boldface in block quotes, emphasis has been added by Farnam Street.]

The Code of Hammurabi: The Best Rule To Manage Risk

Almost 4,000 years ago, King Hammurabi of Babylon, Mesopotamia, laid out one of the first sets of laws.

Hammurabi’s Code is among the oldest translatable writings. It consists of 282 laws, most concerning punishment. Each law takes into account the perpetrator’s status. The code also includes the earliest known construction laws, designed to align the incentives of builder and occupant to ensure that builders created safe homes:

  1. If a builder builds a house for a man and does not make its construction firm, and the house which he has built collapses and causes the death of the owner of the house, that builder shall be put to death.
  2. If it causes the death of the son of the owner of the house, they shall put to death a son of that builder.
  3. If it causes the death of a slave of the owner of the house, he shall give to the owner of the house a slave of equal value.
  4. If it destroys property, he shall restore whatever it destroyed, and because he did not make the house which he builds firm and it collapsed, he shall rebuild the house which collapsed at his own expense.
  5. If a builder builds a house for a man and does not make its construction meet the requirements and a wall falls in, that builder shall strengthen the wall at his own expense.

Hammurabi became ruler of Babylon in 1792 BC and held the position for 43 years. In the era of city-states, Hammurabi grew his modest kingdom (somewhere between 60 and 160 square kilometers) by conquering several neighboring states. Satisfied, then, with the size of the area he controlled, Hammurabi settled down to rule his people.

“This world of ours appears to be separated by a slight and precarious margin of safety from a most singular and unexpected danger.”

— Arthur Conan Doyle

Hammurabi was a fair leader and concerned with the well-being of his people. He transformed the area, ordering the construction of irrigation ditches to improve agricultural productivity, as well as supplying cities with protective walls and fortresses. Hammurabi also renovated temples and religious sites.

By today’s standards, Hammurabi was a dictator. Far from abusing his power, however, he considered himself the “shepherd” of his people. Although the Babylonians kept slaves, they too had rights. Slaves could marry other people of any status, start businesses, and purchase their freedom, and they were protected from mistreatment.

At first glance, it might seem as if we have little to learn from Hammurabi. I mean, why bother learning about the ancient Babylonians? They were just barbaric farmers, right?

In fact, we’re not as different as it might appear. Our modern beliefs are not separate from those of people in Hammurabi’s time; they are a continuation of them. Early legal codes are the ancestors of the ones we now put our faith in.

Whether a country is a dictatorship or a democracy, one of the keys to any effective legal system is that anyone can understand its laws. Ours is showing cracks in that regard, and we can learn from the simplicity of Hammurabi’s Code, which concerned itself with practical justice rather than lofty principles. To even call it a set of laws is misleading; the ancient Babylonians did not appear to have an equivalent term.

Three important concepts are implicit in Hammurabi’s Code: reciprocity, accountability, and incentives.

We have no figures for how often Babylonian houses fell down before and after the implementation of the Code. We have no idea how many (if any) people were put to death as a result of failing to adhere to Hammurabi’s construction laws. But we do know that human self-preservation instincts are strong. More than strong, they underlie most of our behavior. Wanting to avoid death is the most powerful incentive we have. If we assume that people felt and thought the same way 4,000 years ago, we can guess at the impact of the Code.

Imagine yourself as a Babylonian builder. Each time you construct a house, there is a risk it will collapse if you make any mistakes. So, what do you do? You allow for the widest possible margin of safety. You plan for any potential risks. You don’t cut corners or try to save a little bit of money. No matter what, you are not going to allow any known flaws in the construction. It wouldn’t be worth it. You want to walk away certain that the house is solid.

Now contrast that with modern engineers or builders.

They don’t have much skin in the game. The worst they face if they cause a death is a fine. We saw this in Hurricane Katrina: 1,600 people died due to flooding caused in part by the poor design of hurricane protection systems in New Orleans. Hindsight analysis showed that the city’s floodwalls, levees, pumps, and gates were poorly designed and maintained. The death toll was worse than it would otherwise have been. And yet, no one was held accountable.

Hurricane Katrina is regarded as a disaster that was part natural and part man-made. In recent months, in the Grenfell Tower fire in London, we saw the effects of negligent construction. At least 80 people died in a blaze that is believed to have started accidentally but that, according to expert analysis, was accelerated by the conscious use of cheap building materials that had failed safety tests.

The portions of Hammurabi’s Code that deal with construction laws, as brutal as they are (and as uncertain as we are of their short-term effects), illustrate an important concept: margins of safety. When we construct a system, ensuring that it can handle the expected pressures is insufficient.

A Babylonian builder would not have been content to make a house that was strong enough to handle just the anticipated stressors. A single Black Swan event — such as abnormal weather — could cause its collapse and in turn the builder’s own death, so builders had to allow for a generous margin of safety. The larger the better. In 59 mph winds, we do not want to be in a house built to withstand winds of only 60 mph.

But our current financial systems do not incentivize people to create wide margins of safety. Instead, they do the opposite — they encourage dangerous risk-taking.

Nassim Taleb referred to Hammurabi’s Code in a New York Times opinion piece in which he described a way to prevent bankers from threatening the public well-being. His solution? Stop offering bonuses for the risky behavior of people who will not be the ones paying the price if the outcome is bad. Taleb wrote:

…it’s time for a fundamental reform: Any person who works for a company that, regardless of its current financial health, would require a taxpayer-financed bailout if it failed should not get a bonus, ever. In fact, all pay at systemically important financial institutions — big banks, but also some insurance companies and even huge hedge funds — should be strictly regulated.

The issue, in Taleb’s opinion, is not the usual complaint of income inequality or overpay. Instead, he views bonuses as asymmetric incentives. They reward risks but do not punish the subsequent mistakes that cause “hidden risks to accumulate in the financial system and become a catalyst for disaster.” It’s a case of “heads, I win; tails, you lose.”

Bonuses encourage bankers to ignore the potential for Black Swan events, with the 2008 financial crisis being a prime (or rather, subprime) example. Rather than ignoring these events, banks should seek to minimize the harm caused.

Some career fields have a strict system of incentives and disincentives, both official and unofficial. Doctors get promotions and respect if they do their jobs well, and risk heavy penalties for medical malpractice. With the exception of experiments in which patients are fully informed of and consent to the risks, doctors don’t get a free pass for taking risks that cause harm to patients.

The same goes for military and security personnel. As Taleb wrote, “we trust the military and homeland security personnel with our lives, yet we don’t give them lavish bonuses. They get promotions and the honor of a job well done if they succeed, and the severe disincentive of shame if they fail.”

Hammurabi and his advisors were unconcerned with complex laws and legalese. Instead, they wanted the Code to produce results and to be understandable by everyone. And Hammurabi understood how incentives work — a lesson we’d be well served to learn.

When you align the incentives of everyone, in both positive and negative ways, you create a system that takes care of itself. Taleb describes Law 229 of Hammurabi’s Code as “the best risk-management rule ever.” Although barbaric to modern eyes, it took into account certain truisms. Builders typically know more about construction than their clients do and can take shortcuts in ways that aren’t obvious. After completing construction, a builder can walk away with a little extra profit, while the hapless client is unknowingly left with an unsafe house.

The little extra profit that builders can generate is analogous to the bonus system in some of today’s industries. It rewards those who take unwise risks, trick their customers, and harm other people for their own benefit. Hammurabi’s system had the opposite effect; it united the interests of the person getting paid and the person paying. Rather than the builder being motivated to earn as much profit as possible and the homeowner being motivated to get a safe house, they both shared the latter goal.

The Code illustrates the efficacy of using self-preservation as an incentive. We feel safer in airplanes that are flown by a person and not by a machine because, in part, we believe that pilots want to protect their own lives along with ours.

When we lack an incentive to protect ourselves, we are far more likely to risk the safety of other people. This is why bankers are willing to harm their customers if it means the bankers get substantial bonuses. And why male doctors prescribed contraceptive pills to millions of female patients in the 1960s, without informing them of the risks (which were high at the time). This is why companies that market harmful products, such as fast food and tobacco, are content to play down the risks. Or why the British initiative to reduce the population of Indian cobras by compensating those who caught the snakes had the opposite effect. Or why Wells Fargo employees opened millions of fake accounts to reach sales targets.

Incentives backfire when there are no negative consequences for those who exploit them. External incentives are based on extrinsic motivation, which easily goes awry.

When we have real skin in the game—when we have upsides and downsides—we care about outcomes in a way that we wouldn’t otherwise. We act in a different way. We take our time. We use second-order thinking and inversion. We look for supporting evidence and for ways to disprove our own conclusions.

Four thousand years ago, the Babylonians understood the power of incentives, yet we seem to have since forgotten about the flaws in human nature that make it difficult to resist temptation.

The Fairness Principle: How the Veil of Ignorance Helps Test Fairness

“But the nature of man is sufficiently revealed for him to know something of himself and sufficiently veiled to leave much impenetrable darkness, a darkness in which he ever gropes, forever in vain, trying to understand himself.”

— Alexis de Tocqueville, Democracy in America

The Basics

If you could redesign society from scratch, what would it look like?

How would you distribute wealth and power?

Would you make everyone equal or not? How would you define fairness and equality?

And — here’s the kicker — what if you had to make those decisions without knowing who you would be in this new society?

Philosopher John Rawls asked just that in a thought experiment known as “the Veil of Ignorance” in his 1971 book, A Theory of Justice.

Like many thought experiments, the Veil of Ignorance could never be carried out in the literal sense, nor should it be. Its purpose is to explore ideas about justice, morality, equality, and social status in a structured manner.

The Veil of Ignorance, a component of social contract theory, allows us to test ideas for fairness.

Behind the Veil of Ignorance, no one knows who they are. They lack clues as to their class, their privileges, their disadvantages, or even their personality. They exist as an impartial group, tasked with designing a new society with its own conception of justice.

As a thought experiment, the Veil of Ignorance is powerful because our usual opinions regarding what is just and unjust are informed by our own experiences. We are shaped by our race, gender, class, education, appearance, sexuality, career, family, and so on. On the other side of the Veil of Ignorance, none of that exists. In theory, the resulting society should be a fair one.

In Ethical School Leadership, Spencer J. Maxcy writes:

Imagine that you have set for yourself the task of developing a totally new social contract for today's society. How could you do so fairly? Although you could never actually eliminate all of your personal biases and prejudices, you would need to take steps at least to minimize them. Rawls suggests that you imagine yourself in an original position behind a veil of ignorance. Behind this veil, you know nothing of yourself and your natural abilities, or your position in society. You know nothing of your sex, race, nationality, or individual tastes. Behind such a veil of ignorance all individuals are simply specified as rational, free, and morally equal beings. You do know that in the “real world,” however, there will be a wide variety in the natural distribution of natural assets and abilities, and that there will be differences of sex, race, and culture that will distinguish groups of people from each other.

“The Fairness Principle: When contemplating a moral action, imagine that you do not know if you will be the moral doer or receiver, and when in doubt err on the side of the other person.”

— Michael Shermer, The Moral Arc: How Science and Reason Lead Humanity Toward Truth, Justice, and Freedom

The Purpose of the Veil of Ignorance

Because people behind the Veil of Ignorance do not know who they will be in this new society, any choice they make in structuring that society could either harm them or benefit them.

If they decide men will be superior, for example, they must face the risk that they will be women. If they decide that 10% of the population will be slaves to the others, they cannot be surprised if they find themselves to be slaves. No one wants to be part of a disadvantaged group, so the logical belief is that the Veil of Ignorance would produce a fair, egalitarian society.

Behind the Veil of Ignorance, cognitive biases melt away. The hypothetical people are rational thinkers. They use probabilistic thinking to assess the likelihood of their being affected by any chosen measure. They possess no opinions for which to seek confirmation. Nor do they have any recently learned information to pay undue attention to. The sole incentive they are biased towards is their own self-preservation, which is equivalent to the preservation of the entire group. They cannot stereotype any particular group as they could be members of it. They lack commitment to their prior selves as they do not know who they are.

So, what would these people decide on? According to Rawls, in a fair society all individuals must possess the following:

  • Rights and liberties (including the right to vote, the right to hold public office, free speech, free thought, and fair legal treatment)
  • Power and opportunities
  • Income and wealth sufficient for a good quality of life (Not everyone needs to be rich, but everyone must have enough money to live a comfortable life.)
  • The conditions necessary for self-respect

For these conditions to occur, the people behind the Veil of Ignorance must figure out how to achieve what Rawls regards as the two key components of justice:

  • Everyone must have the best possible life which does not cause harm to others.
  • Everyone must be able to improve their position, and any inequalities must be present solely if they benefit everyone.

However, the people behind the Veil of Ignorance cannot be completely blank slates or it would be impossible for them to make rational decisions. They understand general principles of science, psychology, politics, and economics. Human behavior is no mystery to them. Neither are key economic concepts, such as comparative advantage and supply and demand. Likewise, they comprehend the deleterious impact of social entropy, and they have a desire to create a stable, ordered society. Knowledge of human psychology leads them to be cognizant of the universal desire for happiness and fulfillment. Rawls considered all of this to be the minimum viable knowledge for rational decision-making.

Ways of Understanding the Veil of Ignorance

One way to understand the Veil of Ignorance is to imagine that you are tasked with cutting up a pizza to share with friends. You will be the last person to take a slice. Being of sound mind, you want to get the largest possible share, and the only way to ensure this is to make all the slices the same size. You could cut one huge slice for yourself and a few tiny ones for your friends, but one of them might take the large slice and leave you with a meager share. (Not to mention, your friends won’t think very highly of you.)

Another means of appreciating the implications of the Veil of Ignorance is by considering the social structures of certain species of ants. Even though queen ants are able to form colonies alone, they will band together to form stronger, more productive colonies. Once the first group of worker ants reaches maturity, the queens fight to the death until one remains. When they first form a colony, the queen ants are behind a Veil of Ignorance. They do not know if they will be the sole survivor or not. All they know, on an instinctual level, is that cooperation is beneficial for their species. Like the people behind the Veil of Ignorance, the ants make a decision which, by necessity, is selfless.

The Veil of Ignorance, as a thought experiment, shows us that ignorance is not always detrimental to a society. In some situations, it can create robust social structures. In the animal kingdom, we see many examples of creatures that cooperate even though they do not know if they will suffer or benefit as a result. In a paper entitled “The Many Selves of Social Insects,” Queller and Strassmann write of bees:

…social insect colonies are so tightly integrated that they seem to function as single organisms, as a new level of self. The honeybees' celebrated dance about food location is just one instance of how their colonies integrate and act on information that no single individual possesses. Their unity of purpose is underscored by the heroism of workers, whose suicidal stinging attacks protect the single reproducing queen.

We can also consider the Tragedy of the Commons. Introduced by ecologist Garrett Hardin, this mental model states that shared resources will be exploited if no system for fair distribution is implemented. Individuals have no incentive to leave a share of free resources for others. Hardin’s classic example is an area of land which everyone in a village is free to use for their cattle. Each person wants to maximize the usefulness of the land, so they put more and more cattle out to graze. Yet the land is finite and at some point will become too depleted to support livestock. If the people behind the Veil of Ignorance had to choose how the common land should be shared, the logical decision would be to give each person an equal part and forbid them from introducing too many cattle.
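Hardin’s dynamic can be sketched as a toy model (illustrative only; the capacity, herd sizes, and names are all invented for this sketch):

```python
# Toy model of the Tragedy of the Commons (all numbers invented for
# illustration). A shared pasture supports a fixed number of cattle.

CAPACITY = 100   # head of cattle the pasture can sustain
VILLAGERS = 10

def unmanaged(rounds=20):
    """Each villager adds one cow per round: individually rational,
    collectively ruinous once the herd exceeds CAPACITY."""
    herd = 0
    for _ in range(rounds):
        herd += VILLAGERS  # every villager captures a private gain
    return herd

def equal_quota():
    """The veil-of-ignorance choice: split the capacity equally and
    forbid anyone from exceeding their share."""
    return VILLAGERS * (CAPACITY // VILLAGERS)

print(unmanaged())    # 200 cattle on land that supports only 100
print(equal_quota())  # 100: at capacity, but sustainable
```

The point is not the arithmetic but the asymmetry it encodes: each added cow is a private gain, while the depleted pasture is a shared loss, so without a quota the herd grows past what the land can bear.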

As N. Gregory Mankiw writes in Principles of Microeconomics:

The Tragedy of the Commons is a story with a general lesson: when one person uses a common resource, he diminishes other people's enjoyment of it. Because of this negative externality, common resources tend to be used excessively. The government can solve the problem by reducing use of the common resource through regulation or taxes. Alternatively, the government can sometimes turn the common resource into a private good.

This lesson has been known for thousands of years. The ancient Greek philosopher Aristotle pointed out the problem with common resources: “What is common to many is taken least care of, for all men have greater regard for what is their own than for what they possess in common with others.”

In The Case for Meritocracy, Michael Faust uses other thought experiments to support the Veil of Ignorance:

Let’s imagine another version of the thought experiment. If inheritance is so inherently wonderful — such an intrinsic good — then let’s collect together all of the inheritable money in the world. We shall now distribute this money in exactly the same way it would be distributed in today’s world… but with one radical difference. We are going to distribute it by lottery rather than by family inheritance, i.e, anyone in the world can receive it. So, in these circumstances, how many people who support inheritance would go on supporting it? Note that the government wouldn’t be getting the money… just lucky strangers. Would the advocates of inheritance remain as fiercely committed to their cherished principle? Or would the entire concept instantly be exposed for the nonsense it is?

If inheritance were treated as the lottery it is, no one would stand by it.

[…]

In the world of the 1% versus the 99%, no one in the 1% would ever accept a lottery to decide inheritance because there would be a 99% chance they would end up as schmucks, exactly like the rest of us.

And a further surrealistic thought experiment:

Imagine that on a certain day of the year, each person in the world randomly swaps bodies with another person, living anywhere on earth. Well, for the 1%, there’s a 99% chance that they will be swapped from heaven to hell. For the 99%, 1% might be swapped from hell to heaven, while the other 98% will stay the same as before. What kind of constitution would the human race adopt if annual body swapping were a compulsory event?! They would of course choose a fair one.

“In the immutability of their surroundings the foreign shores, the foreign faces, the changing immensity of life, glide past, veiled not by a sense of mystery but by a slightly disdainful ignorance.”

— Joseph Conrad, Heart of Darkness

The History of Social Contract Theory

Although the Veil of Ignorance was first described by Rawls in 1971, many earlier philosophers and writers explored similar concepts. Social contract theory was discussed as far back as ancient Greece.

In Crito, Plato describes a conversation in which Socrates discusses the laws of Athens and how they are responsible for his existence. Finding himself in prison and facing the death penalty, Socrates rejects Crito’s suggestion that he should escape. He states that further injustice is not an appropriate response to prior injustice. Crito believes that by refusing to escape, Socrates is aiding his enemies, as well as failing to fulfil his role as a father. But Socrates views the laws of Athens as a single entity that has always protected him. He describes breaking any of the laws as being like injuring a parent. Having lived a long, fulfilling life as a result of the social contract he entered at birth, he has no interest in now turning away from Athenian law. Accepting death is essentially a symbolic act that Socrates intends to use to illustrate rationality and reason to his followers. If he were to escape, he would be acting out of accord with the rest of his life, during which he was always concerned with justice.

Social contract theory is concerned with the laws and norms a society decides on and the obligation individuals have to follow them. Socrates’ dialogue with Crito has similarities with the final scene of Arthur Miller’s The Crucible. At the end of the play, John Proctor is hanged for witchcraft despite having the option to confess and avoid death. In continuing to follow the social contract of Salem and not confessing to a crime he obviously did not commit, Proctor believes that his death will redeem his earlier mistakes. We see this in the final dialogue between Reverend Hale and Elizabeth (Proctor's wife):

HALE: Woman, plead with him! […] Woman! It is pride, it is vanity. […] Be his helper! What profit him to bleed? Shall the dust praise him? Shall the worms declare his truth? Go to him, take his shame away!

 

ELIZABETH: […] He have his goodness now. God forbid I take it from him!

In these two situations, individuals allow themselves to be put to death in the interest of following the social contract they agreed upon by living in their respective societies. Earlier in their lives, neither person knew what their ultimate fate would be. They were essentially behind the Veil of Ignorance when they chose (consciously or unconsciously) to follow the laws enforced by the people around them. Just as the people behind the Veil of Ignorance must accept whatever roles they receive in the new society, Socrates and Proctor followed social contracts. To modern eyes, the decision both men make to abandon their children in the interest of proving a point is not easily defensible.

Immanuel Kant wrote about justice and freedom in the late 1700s. Kant believed that fair laws should not be based on making people happy or reflecting the desire of individual policymakers, but should be based on universal moral principles:

Is it not of the utmost necessity to construct a pure moral philosophy which is completely freed from everything that may be only empirical and thus belong to anthropology? That there must be such a philosophy is self-evident from the common idea of duty and moral laws. Everyone must admit that a law, if it is to hold morally, i.e., as a ground of obligation, must imply absolute necessity; he must admit that the command, “Thou shalt not lie,” does not apply to men only, as if other rational beings had no need to observe it. The same is true for all other moral laws properly so called. He must concede that the ground of obligation here must not be sought in the nature of man or in the circumstances in which he is placed, but sought a priori solely in the concepts of pure reason, and that every other precept which is in certain respects universal, so far as it leans in the least on empirical grounds (perhaps only in regard to the motive involved), may be called a practical rule but never a moral law.

How We Can Apply This Concept

We can use the Veil of Ignorance to test whether a certain issue is fair.

When my kids are fighting over the last cookie, which happens more often than you'd imagine, I ask them to decide who will split the cookie; the other person picks first. This is the old playground rule, “you split, I pick.” Without this rule, one of them would surely give the other a smaller portion. With it, the halves are as equal as they would be with sensible adults.

When considering whether we should endorse a proposed law or policy, we can ask: if I did not know if this would affect me or not, would I still support it? Those who make big decisions that shape the lives of large numbers of people are almost always those in positions of power. And those in positions of power are almost always members of privileged groups. As Benjamin Franklin once wrote: “Justice will not be served until those who are unaffected are as outraged as those who are.”

Laws allowing or prohibiting abortion have typically been made by men, for example. As the issue lacks real significance in their personal lives, they are free to base decisions on their own ideological views, rather than consider what is fair and sane. However, behind the Veil of Ignorance, no one knows their sex. Anyone deciding on abortion laws would have to face the possibility that they themselves will end up as a woman with an unwanted pregnancy.

In Justice as Fairness: A Restatement, Rawls writes:

So what better alternative is there than an agreement between citizens themselves reached under conditions that are fair for all?

[…]

[T]hreats of force and coercion, deception and fraud, and so on must be ruled out.

And:

Deep religious and moral conflicts characterize the subjective circumstances of justice. Those engaged in these conflicts are surely not in general self-interested, but rather, see themselves as defending their basic rights and liberties which secure their legitimate and fundamental interests. Moreover, these conflicts can be the most intractable and deeply divisive, often more so than social and economic ones.

 

In Ethics: Studying the Art of Moral Appraisal, Ronnie Littlejohn explains:

We must have a mechanism by which we can eliminate the arbitrariness and bias of our “situation in life” and insure that our moral standards are justified by the one thing all people share in common: reason. It is the function of the veil of ignorance to remove such bias.

When we have to make decisions that will affect other people, especially disadvantaged groups (such as when a politician decides to cut benefits or a CEO decides to outsource manufacturing to a low-income country), we can use the Veil of Ignorance as a tool for making fair choices.

As Robert F. Kennedy (the younger brother of John F. Kennedy) said in the 1960s:

Few will have the greatness to bend history itself, but each of us can work to change a small portion of events. It is from numberless diverse acts of courage and belief that human history is shaped. Each time a man stands up for an ideal, or acts to improve the lot of others, or strikes out against injustice, he sends forth a tiny ripple of hope, and crossing each other from a million different centers of energy and daring, those ripples build a current which can sweep down the mightiest walls of oppression and resistance.

When we choose to position ourselves behind the Veil of Ignorance, we have a better chance of creating one of those all-important ripples.


Finding Truth in History

If we are to learn from the past, does the account of it have to be true? One would like to think so. Otherwise you might be preparing for the wrong battle. There you are, geared up for mountains, and instead you find swamps. You've done a bunch of reading, trying to understand the terrain you are about to enter, only to find it useless. The books must have been written by crazy people. You are upset and confused. Surely there must be some reliable, objective account of the past. How are you supposed to prepare for the possibilities of the future if you can't trust the accuracy of the reports on anything that has come before?

For why do we study history, anyway? Why keep a record of things that have happened? We fear that if we don't, we are doomed to repeat history; but often that doesn't seem to stop us from repeating it. And we have an annoying tendency to remember only the things which don't really challenge or upset us. But still we try to capture what we can, through museums and ceremonies and study, because somehow we believe that eventually we will come to learn something about why things happen the way they do. And armed with this knowledge, we might even be able to shape our future.

This “problem of historical truth” is explored by Isaiah Berlin in The Hedgehog and the Fox: An Essay on Tolstoy's View of History. He explains that Tolstoy was driven by a “desire to penetrate to first causes, to understand how and why things happen as they do and not otherwise.” We can understand this goal – because if we know how the world really works, we know everything.

Of course, it's not that simple, and — spoiler alert — Tolstoy never figured it out. But Berlin's analysis can illuminate the challenges we face with history and help us find something to learn from.

Tolstoy's main problem with historical efforts at the time was that they were “nothing but a collection of fables and useless trifles. … History does not reveal causes; it presents only a blank succession of unexplained events.” Seen like this, the study of history is a waste of time, other than for trivia games or pub quizzes. Being able to recite what happened is supremely uninteresting if you can't begin to understand why it happened in the first place.

But Tolstoy was also an expert at tearing down the theories of anyone who attempted to make sense of history and provide the why. He thought that they “must be imposters, since no theories can possibly fit the immense variety of possible human behavior, the vast multiplicity of minute, undiscoverable causes and effects which form that interplay of men and nature which history purports to record.”

And therein lies the crux of the problem for Tolstoy. History is more than just factoids, but its complexity makes it difficult for us to learn exactly why things happened the way they did. A battle is more than dates and times, but trying to trace the real impact of the decisions of Napoleon or Churchill is a fool's errand. There is too much going on – too many decisions and interactions happening in every moment – for us to be able to conclude cause and effect with any certainty. After leaving an ice cube to melt on a table, you can't untangle exactly what happened with each molecule from the puddle. That doesn't mean we can't learn from history; it means only that we need to be careful with the lessons we draw and the confidence we have in them.

Berlin explains:

There is a particularly vivid simile [in War and Peace] in which the great man is likened to the ram whom the shepherd is fattening for slaughter. Because the ram duly grows fatter, and perhaps is used as a bellwether for the rest of the flock, he may easily imagine that he is the leader of the flock, and that the other sheep go where they go solely in obedience to his will. He thinks this and the flock may think it too. Nevertheless the purpose of his selection is not the role he believes himself to play, but slaughter – a purpose conceived by beings whose aims neither he nor the other sheep can fathom. For Tolstoy, Napoleon is just such a ram, and so to some degree is Alexander, and indeed all the great men of history.

Arguing against this view of history was N. I. Kareev, who said:

…it is men, doubtless, who make social forms, but these forms – the ways in which men live – in their turn affect those born into them; individual wills may not be all-powerful, but neither are they totally impotent, and some are more effective than others. Napoleon may not be a demigod, but neither is he a mere epiphenomenon of a process which would have occurred unaltered without him.

This means that studying the past is important for making better decisions in the future. If we can't always follow the course of cause and effect, we can at least discover some very strong correlations and act accordingly.

We have a choice between these two perspectives: Either we can treat history as an impenetrable fog, or we can figure out how to use history while accepting that each day might reveal more and we may have to update our thinking.

Sound familiar? Sounds a lot like the scientific method to me – a preference for updating the foundation of knowledge versus being adrift in chaos or attached to a raft that cannot be added to.

Berlin argues that Tolstoy spent his life trying to find a theory strong enough to unify everything. A way to build a foundation so strong that all arguments would crumble against it. Although that endeavor was ambitious, we don't need to fully understand the why of history in order to be able to learn from it. We don't need the foundation of the past to be solid and fixed in order to gain some insight into our future. We can still find some truth in history.

How?

Funnily enough, Berlin clarifies that Tolstoy “believed that only by patient empirical observation could any knowledge be obtained.” But he also believed “that simple people often know the truth better than learned men, because their observation of men and nature is less clouded by empty theories.”

Unhelpfully, Tolstoy's position amounts to “the more you know, the less you learn.”

The answer to finding truth in history is not to be found in Tolstoy's writing. He was looking for “something too indivisibly simple and remote from normal intellectual processes to be assailable by the instruments of reason, and therefore, perhaps, offering a path to peace and salvation.” He never was able to conclude what that might be.

But there might be an answer in how Berlin interprets Tolstoy's major dissonance in life, the discrepancy that drove him and was never resolved. Tolstoy “tried to resolve the glaring contradiction between what he believed about men and events, and what he thought he believed, or ought to believe.”

Finding truth in history is about understanding that this truth is not absolute. In this sense, truth is based on perspective. The perspective of the person who captured it and the person interpreting it. And the perspective of the translators and editors and primary sources. We don't get to be invisible observers of moments in the past, and we don't get to go into other minds. The best we can do is keep our eyes open and keep our biases in check. And what history can teach us is found not just in the moments it tries to describe, but also in what we choose to look at and how we choose to represent it.

Zero — Invented or Discovered?

It seems almost a bizarre question. Who thinks about whether zero was invented or discovered? And why is it important?

Answering this question, however, can tell you a lot about yourself and how you see the world.

Let’s break it down.

“Invented” implies that humans created the zero and that without us, the zero and its properties would cease to exist.

“Discovered” means that although the symbol is a human creation, what it represents would exist independently of any human ability to label it.

So do you think of the zero as a purely mathematical function, and by extension think of all math as a human construct like, say, cheese or self-driving cars? Or is math, and the zero, a symbolic language that describes the world, the content of which exists completely independently of our descriptions?

The zero is now a ubiquitous component of our understanding.

The concept is so basic it is routinely mastered by the pre-kindergarten set. Consider the equation 3-3=0. Nothing complicated about that. It is second nature to us that we can represent “nothing” with a symbol. It makes perfect sense now, in 2017, and it's so common that we forget that zero was a relatively late addition to the number scale.

Here's a fact that's amazing to most people: the zero is actually younger than mathematics. Pythagoras’s famous conclusion — that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides — was achieved without a zero. As was Euclid’s entire Elements.

How could this be? It seems surreal, given the importance the zero now has to mathematics, computing, language, and life. How could someone figure out the complex geometry of triangles, yet not realize that nothing was also a number?

Tobias Dantzig, in Number: The Language of Science, offers this as a possible explanation: “The concrete mind of the ancient Greeks could not conceive the void as a number, let alone endow the void with a symbol.” This gives us a good direction for finding the answer to the original question because it hints that you must first understand the concept of the void before you can name it. You need to see that nothingness still takes up space.

It was thought, and sometimes still is, that the number zero was invented in the pursuit of ancient commerce. Something was needed as a placeholder; otherwise, 65 would be indistinguishable from 605 or 6050. The zero represents “no units” of the particular place that it holds. So for that last number, we have six thousands, no hundreds, five tens, and no singles.
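The placeholder role described above can be sketched in a few lines of Python (an illustrative sketch of positional notation; the function names are mine, not drawn from any source discussed here):

```python
def decompose(n: int) -> list[tuple[int, int]]:
    """Break a numeral into (digit, place-value) pairs.

    6050 -> [(6, 1000), (0, 100), (5, 10), (0, 1)]:
    six thousands, no hundreds, five tens, no singles.
    """
    digits = [int(d) for d in str(n)]
    return [(d, 10 ** (len(digits) - 1 - i)) for i, d in enumerate(digits)]


def without_placeholders(n: int) -> int:
    """What a numeral collapses to if zero could not hold a place."""
    return int("".join(d for d in str(n) if d != "0"))


# With zero as a placeholder, these three numerals are distinct;
# without it, they all read as "65".
assert decompose(6050) == [(6, 1000), (0, 100), (5, 10), (0, 1)]
assert without_placeholders(65) == 65
assert without_placeholders(605) == 65
assert without_placeholders(6050) == 65
```

The sketch makes the ancient bookkeeper's problem tangible: strip out the zeros and 65, 605, and 6050 become the same string of marks.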

A happy accident of no great original insight, zero then made its way around the world. In addition to being convenient for keeping track of how many bags of grain you were owed, or how many soldiers were in your army, it turned our number scale into an extremely efficient decimal system. More so than any numbering system that preceded it (and there were many), the zero transformed the power of our other numerals, propelling mathematics into fantastic equations that can explain our world and fuel incredible scientific and technological advances.

But there is, if you look closely, a missing link in this story.

What changed in humanity that made us comfortable with confronting the void and giving it a symbol? And is it reasonable to imagine creating the number without understanding what it represented? Given its properties, can we really think that it started as a placeholder? Or did it contain within it, right from the beginning, the notion of defining the void, of giving it space?

In Finding Zero, Amir Aczel offers some insight. Basically, he claims that the people who discovered the zero must have had an appreciation of the emptiness that it represented. They were labeling a concept with which they were already familiar.

He rediscovered the oldest known zero, on a stone tablet dating from 683 CE in what is now Cambodia.

On his quest to find this zero, Aczel realized that it was far more natural for the zero to first appear in the Far East, rather than in Western or Arab cultures, due to the philosophical and religious understandings prevalent in the region.

Western society was, and still is in many ways, a binary culture. Good and evil. Mind and body. You’re either with us or against us. A patriot or a terrorist. Many of us naturally try to fit our world into these binary understandings. If something is “A,” then it cannot be “not A.” The very definition of “A” is that it is not “not A.” Something cannot be both.

Aczel writes that this duality is not at all reflected in much Eastern thought. He describes the catuskoti, found in early Buddhist logic, that presents four possibilities, instead of two, for any state: that something is, is not, is both, or is neither.

At first, a typical Western mind might rebel against this kind of logic. My father is either bald or not bald. He cannot be both and he cannot be neither, so what is the use of these two other almost nonsensical options?

A closer examination of our language, though, reveals that the expression of the non-binary is understood, and therefore perhaps more relevant than we think. Take, for example, “you’re either with us or against us.” Is it possible to say “I’m both with you and against you”? Yes. It could mean that you are for the principles but against the tactics. Or that you are supportive despite your values. And to say “I’m neither with you nor against you” could mean that you aren’t supportive of the tactic in question, but won’t do anything to stop it. Or that you just don’t care.
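The contrast between binary and catuskoti logic can be made concrete with a small sketch (hypothetical code, not from Aczel; the two axes follow the with-us/against-us example above):

```python
from enum import Enum


class Stance(Enum):
    """The catuskoti's four possibilities, where a Boolean allows only two."""
    IS = "is"
    IS_NOT = "is not"
    BOTH = "is both"          # e.g. for the principles but against the tactics
    NEITHER = "is neither"    # e.g. not supportive, but won't act to stop it


def catuskoti(with_us: bool, against_us: bool) -> Stance:
    """Treat 'with us' and 'against us' as independent axes, yielding four states."""
    if with_us and against_us:
        return Stance.BOTH
    if with_us:
        return Stance.IS
    if against_us:
        return Stance.IS_NOT
    return Stance.NEITHER


# A single true/false value can express the first two stances,
# but not the last two:
assert catuskoti(True, False) is Stance.IS
assert catuskoti(False, True) is Stance.IS_NOT
assert catuskoti(True, True) is Stance.BOTH
assert catuskoti(False, False) is Stance.NEITHER
```

The point of the sketch is simply that two independent questions yield four answers, which is exactly the state space a binary "for or against" framing throws away.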

Feelings, in particular, are a realm where the binary is often insufficient. Watching my children, I know that it's possible to be both happy and sad, a traditional binary, at the same time. And the zero itself defies binary categorization. It is something and nothing simultaneously.

Aczel reflects on a conversation he had with a Buddhist monk. “Everything is not everything — there is always something that lies outside of what you may think covers all creation. It could be a thought, or a kind of void, or a divine aspect. Nothing contains everything inside it.”

He goes on to conclude that “Here was the intellectual source of the number zero. It came from Buddhist meditation. Only this deep introspection could equate absolute nothingness with a number that had not existed until the emergence of this idea.”

Which is to say, certain properties of the zero likely were understood conceptually before the symbol came about — nothingness was a thing that could be represented. This idea fits with how we treat the zero today; it may represent nothing, but that nothing still has properties. And investigating those properties demonstrates that there is power in the void — it has something to teach us about how our universe operates.

Further contemplation might illuminate that the zero has something to teach us about existence as well. If we accept zero, the symbol, as being discovered as part of our realization about the existence of nothingness, then trying to understand the zero can teach us a lot about moving beyond the binary of alive/not alive to explore other ways of conceptualizing what it means to be.