Bayes and Deadweight: Using Statistics to Eject the Deadweight From Your Life

“[K]nowledge is indeed highly subjective, but we can quantify it with a bet. The amount we wager shows how much we believe in something.”

— Sharon Bertsch McGrayne

The quality of your life will, to a large extent, be decided by whom you elect to spend your time with. Supportive, caring, and funny are great attributes in friends and lovers. Unceasingly negative cynics who chip away at your self-esteem? We need to jettison those people as far and fast as we can.

The problem is, how do we identify these people who add nothing positive — or not enough positive — to our lives?

Few of us keep relationships with obvious assholes. There are always a few painfully terrible family members we have to put up with at weddings and funerals, but normally we choose whom we spend time with. And we’ve chosen these people because, at some point, our interactions with them felt good.

How, then, do we identify the deadweight? The people who are really dragging us down and who have a high probability of continuing to do so in the future? We can apply the general thinking tool called Bayesian Updating.

Bayes's theorem can involve some complicated mathematics, but at its core lies a very simple premise. Probability estimates should start with what we already know about the world and then be incrementally updated as new information becomes available. Bayes can even help us when that information is relevant but subjective.

How? As McGrayne explains in the quote above, from The Theory That Would Not Die, you simply ask yourself to wager on the outcome.

Let’s take an easy example.

You are going on a blind date. You’ve been told all sorts of good things in advance — the person is attractive and funny and has a good job — so of course, you are excited. The date starts off great, living up to expectations. Halfway through you find out they have a cat. You hate cats. Given how well everything else is going, how much should this information affect your decision to keep dating?

Quantify your belief in the most probable outcome with a bet. How much would you wager that harmony on the pet issue is an accurate predictor of relationship success? Ten cents? Ten thousand dollars? Do the thought experiment. Imagine walking into a casino and placing a bet on the likelihood that this person’s having a cat will ultimately destroy the relationship. How much money would you take out of your savings and lay on the table? Your answer will give you an idea of how much to factor the cat into your decision-making process. If you wouldn’t part with a dime, then I wouldn’t worry about it.

This kind of approach can help us when it comes to evaluating our interpersonal relationships. Deciding if someone is a good friend, partner, or co-worker is full of subjective judgments. There is usually some contradictory information, and ultimately no one is perfect. So how do you decide who is worth keeping around?

Let’s start with friends. The longer a friendship lasts, the more likely it is to have ups and downs. The trick is to start quantifying these. A hit from a change in geographical proximity is radically different from a hit from betrayal — we need to factor these differently into our friendship formula.

This may seem obvious, but the truth is that we often give the same weight to a wide variety of behaviors. We’ll say things like “yeah, she talked about my health problems when I asked her not to, but she always remembers my birthday.” By treating all aspects of the friendship equally, we have a hard time making reasonable estimates about the future value of that friendship. And that’s how we end up with deadweight.

For the friend who has betrayed your confidence, what you really want to know is the likelihood that she’s going to do it again. Instead of trying to remember and analyze every interaction you’ve ever had, just imagine yourself betting on it. Go back to that casino and head to the friendship roulette wheel. Where would you put your money? All in on “She can’t keep her mouth shut” or a few chips on “Not likely to happen again”?

Using a rough Bayesian model in our heads, we’re forcing ourselves to quantify what “good” is and what “bad” is. How good? How bad? How likely? How unlikely? Until we do some (rough) guessing at these things, we’re making decisions much more poorly than we need to be.

The great thing about using Bayes’s theorem is that it encourages constant updating. It also encourages an open mind by giving us the chance to look at a situation from multiple angles. Maybe she really is sorry about the betrayal. Maybe she thought she was acting in your best interests. There are many possible explanations for her behavior and you can use Bayes’s theorem to integrate all of her later actions into your bet. If you find yourself reducing the amount of money you’d bet on further betrayal, you can accurately assume that the probability she will betray your trust again has gone down.

Using this strategy can also stop the endless rounds of asking why. Why did that co-worker steal my idea? Who else do I have to watch out for? This what-if thinking is paralyzing. You end up self-justifying your behavior through anticipating the worst possible scenarios you can imagine. Thus, you don’t change anything, and you step further away from a solution.

In reality, who cares? The why isn’t important; the most relevant task for you is to figure out the probability that your coworker will do it again. Don’t spend hours analyzing what to do, get upset over the doomsday scenarios you have come up with, or let a few glasses of wine soften the experience.

Head to your mental casino and place the bet, quantifying all the subjective information in your head that is messy and hard to articulate. You will cut through the endless “but maybes” and have a clear path forward that addresses the probable future. It may make sense to give him the benefit of the doubt. It may also be reasonable to avoid him as much as possible. When you figure out how much you would wager on the potential outcomes, you’ll know what to do.

Sometimes we can’t just get rid of people who aren’t good for us — family being the prime example. But you can also use Bayes to test how your actions will change the probability of outcomes to find ways of keeping the negativity minimal. Let’s say you have a cousin who always plans to visit but then cancels. You can’t stop being his cousin, and saying “you aren’t welcome at my house” will cause a big family drama. So what else can you do?

Your initial equation — your probability estimate — indicates that the behavior is likely to continue. In your casino, you would comfortably bet your life savings that it will happen again. Now imagine ways in which you could change your behavior. Which of these would reduce your bet? You could have an honest conversation with him, telling him how his actions make you feel. To know if he’s able to openly receive this, consider whether your bet would change. Or would you wager significantly less after employing the strategy of always being busy when he calls to set up future visits?

And you can dig even deeper. Which of your behaviors would increase the probability that he actually comes? Which behaviors would increase the probability that he doesn’t bother making plans in the first place? Depending on how much you like him, you can steer your changes to the outcome you’d prefer.

Quantifying the subjective and using Bayes’s theorem can help us clear out some of the relationship negativity in our lives.

Thomas Bayes and Bayes’s Theorem

Thomas Bayes was an English minister in the first half of the 18th century, whose (now) most famous work, “An Essay towards Solving a Problem in the Doctrine of Chances,” was brought to the attention of the Royal Society in 1763 – two years after his death – by his friend Richard Price. The essay, the key to what we now know as Bayes's Theorem, concerned how we should adjust probabilities when we encounter new data.

In The Signal And The Noise, Nate Silver explains the theory:

[Richard] Price, in framing Bayes's essay, gives the example of a person who emerges into the world (perhaps he is Adam, or perhaps he came from Plato's cave) and sees the sun rise for the first time. At first, he does not know whether this is typical or some sort of freak occurrence. However, each day that he survives and the sun rises again, his confidence increases that it is a permanent feature of nature. Gradually, through this purely statistical form of inference, the probability he assigns to his prediction that the sun will rise again tomorrow approaches (although never exactly reaches) 100 percent.

The argument made by Bayes and Price is not that the world is intrinsically probabilistic or uncertain. Bayes was a believer in divine perfection; he was also an advocate of Isaac Newton's work, which had seemed to suggest that nature follows regular and predictable laws. It is, rather, a statement—expressed both mathematically and philosophically—about how we learn about the universe: that we learn about it through approximation, getting closer and closer to the truth as we gather more evidence.

This contrasted with the more skeptical viewpoint of the Scottish philosopher David Hume, who argued that since we could not be certain that the sun would rise again, a prediction that it would was inherently no more rational than one that it wouldn't. The Bayesian viewpoint, instead, regards rationality as a probabilistic matter. In essence, Bayes and Price are telling Hume, don't blame nature because you are too daft to understand it: if you step out of your skeptical shell and make some predictions about its behavior, perhaps you will get a little closer to the truth.

Bayes's Theorem

Bayes's theorem wasn't first formulated by Thomas Bayes. Instead, it was developed by the French mathematician and astronomer Pierre-Simon Laplace.

Laplace believed in scientific determinism — given the location of every particle in the universe and enough computing power, we could predict the universe perfectly. However, it was the disconnect between the perfection of nature and our human imperfections in measuring and understanding it that led to Laplace's involvement in a theory based on probabilism.

Laplace was frustrated at the time by astronomical observations that appeared to show anomalies in the orbits of Jupiter and Saturn — they seemed to predict that Jupiter would crash into the sun while Saturn would drift off into outer space. These predictions were, of course, quite wrong, and Laplace devoted much of his life to developing much more accurate measurements of these planets' orbits. The improvements that Laplace made relied on probabilistic inferences in lieu of exacting measurements, since instruments like the telescope were still very crude at the time. Laplace came to view probability as a waypoint between ignorance and knowledge. It seemed obvious to him that a more thorough understanding of probability was essential to scientific progress.

The Bayesian approach to probability is simple: take the odds of something happening, and adjust for new information. This, of course, is most useful in the cases where you have strong prior knowledge. If your initial probability is off, the Bayesian approach is much less helpful.

In her book, The Theory That Would Not Die, Sharon Bertsch McGrayne lays out the Bayesian process:

We modify our opinions with objective information: Initial Beliefs + Recent Objective Data = A New and Improved Belief. … each time the system is recalculated, the posterior becomes the prior of the new iteration. It was an evolving system, which each new bit of information pushed closer and closer to certitude.

Here is a short example, found in Investing: The Last Liberal Art, on how it works:

Let's imagine that you and a friend have spent the afternoon playing your favorite board game, and now, at the end of the game, you are chatting about this and that. Something your friend says leads you to make a friendly wager: that with one roll of the die from the game, you will get a 6. Straight odds are one in six, a 16 percent probability. But then suppose your friend rolls the die, quickly covers it with her hand, and takes a peek. “I can tell you this much,” she says; “it's an even number.” Now you have new information and your odds change dramatically to one in three, a 33 percent probability. While you are considering whether to change your bet, your friend teasingly adds: “And it's not a 4.” With this additional bit of information, your odds have changed again, to one in two, a 50 percent probability. With this very simple example, you have performed a Bayesian analysis. Each new piece of information affected the original probability, and that is a Bayesian inference.
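Here is a minimal Python sketch of that die example (mine, not the book's). Each clue conditions the distribution on what it rules out, and, just as McGrayne describes, each posterior becomes the prior for the next update:

```python
from fractions import Fraction

# Prior: each face of a fair die is equally likely.
beliefs = {face: Fraction(1, 6) for face in range(1, 7)}

def update(dist, consistent):
    """Zero out the faces the new evidence rules out, then renormalize."""
    kept = {face: p for face, p in dist.items() if consistent(face)}
    total = sum(kept.values())
    return {face: p / total for face, p in kept.items()}

print(beliefs[6])                                # 1/6 (~16 percent)
beliefs = update(beliefs, lambda f: f % 2 == 0)  # "it's an even number"
print(beliefs[6])                                # 1/3 (~33 percent)
beliefs = update(beliefs, lambda f: f != 4)      # "and it's not a 4"
print(beliefs[6])                                # 1/2 (50 percent)
```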

Knowing the exact math is not really the key to understanding Bayesian thinking, although being able to quantify is a huge advantage in thinking and life.

“Bayes's theorem,” Silver continues, “is concerned with conditional probability. That is, it tells us the probability that a theory or hypothesis is true if some event has happened.”

When our priors are strong, they can be surprisingly resilient in the face of new evidence. One classic example of this is the presence of breast cancer among women in their forties. The chance that a woman will develop breast cancer in her forties is fortunately quite low — about 1.4 percent. But what is the probability if she has a positive mammogram?

Studies show that if a woman does not have cancer, a mammogram will incorrectly claim that she does only about 10 percent of the time. If she does have cancer, on the other hand, they will detect it about 75 percent of the time. When you see those statistics, a positive mammogram seems like very bad news indeed. But if you apply Bayes's Theorem to these numbers, you'll come to a different conclusion: the chance that a woman in her forties has breast cancer given that she's had a positive mammogram is still only about 10 percent. These false positives dominate the equation because very few young women have breast cancer to begin with. For this reason, many doctors recommend that women do not begin getting regular mammograms until they are in their fifties and the prior probability of having breast cancer is higher.
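The arithmetic behind that 10 percent figure is short once you plug the quoted numbers into the theorem; a quick sketch:

```python
# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_cancer = 0.014              # prior: ~1.4% of women in their forties
p_pos_given_cancer = 0.75     # detection rate quoted above
p_pos_given_healthy = 0.10    # false positive rate quoted above

p_positive = (p_pos_given_cancer * p_cancer
              + p_pos_given_healthy * (1 - p_cancer))
print(round(p_pos_given_cancer * p_cancer / p_positive, 3))  # 0.096, ~10 percent
```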

When doing research for this post, I stumbled on Eliezer Yudkowsky's intuitive explanation (building upon the mammogram example above):

The most common mistake is to ignore the original fraction of women with breast cancer, and the fraction of women without breast cancer who receive false positives, and focus only on the fraction of women with breast cancer who get positive results. For example, the vast majority of doctors in these studies seem to have thought that if around 80% of women with breast cancer have positive mammographies, then the probability of a woman with a positive mammography having breast cancer must be around 80%.

Figuring out the final answer always requires all three pieces of information – the percentage of women with breast cancer, the percentage of women without breast cancer who receive false positives, and the percentage of women with breast cancer who receive (correct) positives.

To see that the final answer always depends on the original fraction of women with breast cancer, consider an alternate universe in which only one woman out of a million has breast cancer. Even if mammography in this world detects breast cancer in 8 out of 10 cases, while returning a false positive on a woman without breast cancer in only 1 out of 10 cases, there will still be a hundred thousand false positives for every real case of cancer detected. The original probability that a woman has cancer is so extremely low that, although a positive result on the mammography does increase the estimated probability, the probability isn't increased to certainty or even “a noticeable chance”; the probability goes from 1:1,000,000 to 1:100,000.

Similarly, in an alternate universe where only one out of a million women does not have breast cancer, a positive result on the patient's mammography obviously doesn't mean that she has an 80% chance of having breast cancer! If this were the case her estimated probability of having cancer would have been revised drastically downward after she got a positive result on her mammography – an 80% chance of having cancer is a lot less than 99.9999%! If you administer mammographies to ten million women in this world, around eight million women with breast cancer will get correct positive results, while one woman without breast cancer will get false positive results. Thus, if you got a positive mammography in this alternate universe, your chance of having cancer would go from 99.9999% up to 99.999987%. That is, your chance of being healthy would go from 1:1,000,000 down to 1:8,000,000.

These two extreme examples help demonstrate that the mammography result doesn't replace your old information about the patient's chance of having cancer; the mammography slides the estimated probability in the direction of the result.
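Those alternate-universe numbers are easiest to check in odds form, where the posterior odds are simply the prior odds times the likelihood ratio. A sketch (not Yudkowsky's code) using the first universe's figures; note the quote rounds the result to 1:100,000:

```python
prior_odds = 1 / 999_999        # one woman in a million has cancer
likelihood_ratio = 0.8 / 0.1    # detection rate / false positive rate = 8
posterior_odds = prior_odds * likelihood_ratio
print(round(1 / posterior_odds))  # 125000: about 1 in 125,000
```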

Part of the problem is the availability heuristic — we focus on what's readily available. In this case, that's the newest information, and the bigger picture gets lost. We fail to adjust the probability to reflect new information.

The big idea behind Bayes's theorem is that we must continuously update our probability estimates on an as-needed basis.

Let's take a look at another example, only this time we'll do some basic algebra.

Consider a somber example: the September 11 attacks. Most of us would have assigned almost no probability to terrorists crashing planes into buildings in Manhattan when we woke up that morning. But we recognized that a terror attack was an obvious possibility once the first plane hit the World Trade Center. And we had no doubt we were being attacked once the second tower was hit. Bayes's theorem can replicate this result.

For instance, say that before the first plane hit, our estimate of the possibility of a terror attack on tall buildings in Manhattan was just 1 chance in 20,000, or 0.005 percent. However, we would also have assigned a very low probability to a plane hitting the World Trade Center by accident. This figure can actually be estimated empirically: in the previous 25,000 days of aviation over Manhattan prior to September 11, there had been two such accidents: one involving the Empire State Building in 1945 and another at 40 Wall Street in 1946. That would make the possibility of such an accident about 1 chance in 12,500 on any given day. If you use Bayes's theorem to run these numbers (see below), the probability we'd assign to a terror attack increased from 0.005 percent to 38 percent the moment that the first plane hit.

The Signal And The Noise, Nate Silver
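Silver's “see below” refers to a table in the book. Here is a sketch of the same calculation; the prior and the accident rate come from the passage, but the 100 percent figure for a plane strike given an attack is not stated above, so treat it as an assumption:

```python
x = 0.00005      # prior probability of a terror attack that day (0.005 percent)
y = 1.0          # assumed: P(plane hits tall building | terror attack)
z = 1 / 12_500   # P(plane hits | no attack): two accidents in 25,000 days
posterior = (x * y) / (x * y + z * (1 - x))
print(round(posterior, 2))  # 0.38, i.e. the 38 percent in the passage
```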

Weigh the Evidence

Tim Harford adds:

Bayes’ theorem is an important reality check on our efforts to forecast the future. How, for instance, should we reconcile a large body of theory and evidence predicting global warming with the fact that there has been no warming trend over the last decade or so? Sceptics react with glee, while true believers dismiss the new information.

A better response is to use Bayes’ theorem: the lack of recent warming is evidence against recent global warming predictions, but it is weak evidence. This is because there is enough variability in global temperatures to make such an outcome unsurprising. The new information should reduce our confidence in our models of global warming – but only a little.

The same approach can be used in anything from an economic forecast to a hand of poker, and while Bayes’ theorem can be a formal affair, Bayesian reasoning also works as a rule of thumb. We tend to either dismiss new evidence, or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way.
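In code, Harford's point is just a likelihood ratio close to 1: weak evidence barely moves the posterior. A sketch with made-up numbers (both the prior and the ratio are illustrative assumptions, not Harford's):

```python
def bayes_update(prior, likelihood_ratio):
    """Update a probability via the odds form of Bayes's theorem."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.90         # assumed confidence in the warming models
weak_evidence = 0.8  # a flat decade is only slightly likelier if the models are wrong
print(round(bayes_update(prior, weak_evidence), 3))  # 0.878: confidence drops only a little
```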

Here is another example, this time from Quora. A reader poses the question, “What does it mean when a girl smiles at you every time she sees you?” Another reader, using Bayes's Theorem, replies:

The probability she likes you is

P(like|smile) = P(smile|like) × P(like) / P(smile)

P(like|smile) is what you want to know – the probability she likes you given the fact that she smiles at you.

P(smile|like) is the probability that she will smile given that she sees someone she likes.

P(like) is the probability that she likes a random person.

P(smile) is the probability that she will smile at a random person.

For example, suppose she just smiles at everyone. Then intuition says that the fact that she smiles at you doesn't mean anything one way or another. Indeed, P(smile|like) = 1 and P(smile) = 1, and we have

P(like|smile) = P(like)

meaning that knowing that she smiles at you doesn't change anything.

At the other extreme, suppose she smiles at everyone she likes, and only those she likes. Then P(smile) = P(like) and P(smile|like) = 1, and we have

P(like|smile) = 1

and she is certain to like you.

In the intermediate case, what you need to do is find the ratio of odds of smiling to people she likes to smiles in general, multiply by the percentage of people she likes, and there is your answer.

The more she smiles in general, the lower the chance she likes you. The more she smiles at people she likes, the better the chance. And of course the more people she likes, the better your chances are.

Of course, how to actually determine these values is a mystery I have never solved.
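The answer translates directly into a function. All three inputs are the guesses the reader admits are the hard part; the numbers below are placeholders, used only to reproduce the two extreme cases above:

```python
def p_like_given_smile(p_smile_given_like, p_like, p_smile):
    """Bayes's theorem applied to the smile question."""
    return p_smile_given_like * p_like / p_smile

# She smiles at everyone: the smile tells you nothing new.
print(p_like_given_smile(1.0, 0.2, 1.0))   # 0.2, unchanged from P(like)

# She smiles at exactly the people she likes: P(smile) = P(like).
print(p_like_given_smile(1.0, 0.2, 0.2))   # 1.0, she is certain to like you
```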

Decision Trees

In The Essential Buffett: Timeless Principles for the New Economy, Robert Hagstrom writes:

Bayesian analysis is an attempt to incorporate all available information into a process for making inferences, or decisions, about the underlying state of nature. Colleges and universities use Bayes's theorem to help their students study decision making. In the classroom, the Bayesian approach is more popularly called the decision tree theory; each branch of the tree represents new information that, in turn, changes the odds in making decisions. “At Harvard Business School,” explains Charlie Munger, “the great quantitative thing that bonds the first-year class together is what they call decision tree theory. All they do is take high school algebra and apply it to real life problems. The students love it. They're amazed to find that high school algebra works in life.”
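A toy decision tree in the spirit Munger describes (my sketch, not from the book): each branch carries a probability and a payoff, and the value of a choice is the probability-weighted sum of its branches.

```python
# Branches are (probability, payoff) pairs; all the numbers are assumed.
launch = [
    (0.6, 100_000),   # assumed: 60 percent chance the project succeeds
    (0.4, -50_000),   # assumed: 40 percent chance it fails
]
dont_launch = [(1.0, 0)]  # doing nothing has a certain payoff of zero

def expected_value(branches):
    """Probability-weighted sum of payoffs: the high school algebra at work."""
    return sum(p * payoff for p, payoff in branches)

print(expected_value(launch))        # 40000.0
print(expected_value(dont_launch))   # 0.0 -> launching has the higher expected value
```

New information would update that 0.6 via Bayes's theorem, which is exactly how each branch "changes the odds in making decisions."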

Limitations of the Bayesian Approach

Besides seeing the world as an ever-shifting array of probabilities, we must also remember the limitations of inductive reasoning, such as the “sun rising every day” example given by Price/Bayes above.

The most useful example of this is explained by Nassim Taleb in The Black Swan:

Consider a turkey that is fed every day. Every single feeding will firm up the bird's belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.

Don't walk away thinking the Bayesian approach will enable you to predict everything. In fact, with the volume of information increasing exponentially, the future may be as unpredictable as ever, concludes Silver:

There is no reason to conclude that the affairs of man are becoming more predictable. The opposite may well be true. The same sciences that uncover the laws of nature are making the organization of society more complex.

In the final analysis, though, picking up Bayesian reasoning can truly change your life, as Julia Galef of the Center for Applied Rationality explains in this Big Think video:

After you’ve been steeped in Bayes’ rule for a little while, it starts to produce some fundamental changes to your thinking. For example, you become much more aware that your beliefs are grayscale. They’re not black and white. You have levels of confidence in your beliefs about how the world works that are less than 100 percent but greater than zero percent. And even more importantly, as you go through the world and encounter new ideas and new evidence, that level of confidence fluctuates as you encounter evidence for and against your beliefs.

Bayes's Theorem is part of the Farnam Street latticework of mental models.