Tag: Bayesian

Bayes and Deadweight: Using Statistics to Eject the Deadweight From Your Life

“[K]nowledge is indeed highly subjective, but we can quantify it with a bet. The amount we wager shows how much we believe in something.”

— Sharon Bertsch McGrayne

The quality of your life will, to a large extent, be decided by whom you elect to spend your time with. Supportive, caring, and funny are great attributes in friends and lovers. Unceasingly negative cynics who chip away at your self-esteem? We need to jettison those people as far and fast as we can.

The problem is, how do we identify these people who add nothing positive — or not enough positive — to our lives?

Few of us keep relationships with obvious assholes. There are always a few painfully terrible family members we have to put up with at weddings and funerals, but normally we choose whom we spend time with. And we’ve chosen these people because, at some point, our interactions with them felt good.

How, then, do we identify the deadweight? The people who are really dragging us down and who have a high probability of continuing to do so in the future? We can apply the general thinking tool called Bayesian Updating.

Bayes's theorem can involve some complicated mathematics, but at its core lies a very simple premise. Probability estimates should start with what we already know about the world and then be incrementally updated as new information becomes available. Bayes can even help us when that information is relevant but subjective.
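The updating step itself is simple arithmetic. Here is a minimal sketch in Python with invented numbers (the function and both likelihoods are ours, purely for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes's theorem."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Invented numbers: you start 70% confident in a hypothesis, then observe
# evidence a true hypothesis would produce 30% of the time and a false
# one 80% of the time.
print(round(bayes_update(0.70, 0.30, 0.80), 2))  # 0.47 -- belief revised down
```

The harder question is where those probabilities come from when the evidence is subjective.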

How? As McGrayne explains in the quote above, from The Theory That Would Not Die, you simply ask yourself to wager on the outcome.

Let’s take an easy example.

You are going on a blind date. You’ve been told all sorts of good things in advance — the person is attractive and funny and has a good job — so of course, you are excited. The date starts off great, living up to expectations. Halfway through you find out they have a cat. You hate cats. Given how well everything else is going, how much should this information affect your decision to keep dating?

Quantify your belief in the most probable outcome with a bet. How much would you wager that harmony on the pet issue is an accurate predictor of relationship success? Ten cents? Ten thousand dollars? Do the thought experiment. Imagine walking into a casino and placing a bet on the likelihood that this person’s having a cat will ultimately destroy the relationship. How much money would you take out of your savings and lay on the table? Your answer will give you an idea of how much to factor the cat into your decision-making process. If you wouldn’t part with a dime, then I wouldn’t worry about it.
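If you want to turn the wager into an actual number, betting odds imply a probability: risking a stake to win a payout only makes sense if you think the event is at least stake / (stake + payout) likely. A rough sketch, with hypothetical stakes:

```python
def implied_probability(stake, payout):
    """Break-even belief for risking `stake` to win `payout`:
    the bet is rational only if P(event) >= stake / (stake + payout)."""
    return stake / (stake + payout)

# Hypothetical wagers on "the cat ultimately destroys the relationship":
print(f"{implied_probability(stake=10_000, payout=10_000):.2f}")  # 0.50: a real worry
print(f"{implied_probability(stake=0.10, payout=10_000):.6f}")    # 0.000010: forget the cat
```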

This kind of approach can help us when it comes to evaluating our interpersonal relationships. Deciding if someone is a good friend, partner, or co-worker is full of subjective judgments. There is usually some contradictory information, and ultimately no one is perfect. So how do you decide who is worth keeping around?

Let’s start with friends. The longer a friendship lasts, the more likely it is to have ups and downs. The trick is to start quantifying these. A hit from a change in geographical proximity is radically different from a hit from betrayal — we need to factor these differently into our friendship formula.

This may seem obvious, but the truth is that we often give the same weight to a wide variety of behaviors. We’ll say things like “yeah, she talked about my health problems when I asked her not to, but she always remembers my birthday.” By treating all aspects of the friendship equally, we have a hard time making reasonable estimates about the future value of that friendship. And that’s how we end up with deadweight.

For the friend who has betrayed your confidence, what you really want to know is the likelihood that she’s going to do it again. Instead of trying to remember and analyze every interaction you’ve ever had, just imagine yourself betting on it. Go back to that casino and head to the friendship roulette wheel. Where would you put your money? All in on “She can’t keep her mouth shut” or a few chips on “Not likely to happen again”?

Using a rough Bayesian model in our heads, we’re forcing ourselves to quantify what “good” is and what “bad” is. How good? How bad? How likely? How unlikely? Until we do some (rough) guessing at these things, we’re making decisions much more poorly than we need to be.

The great thing about using Bayes’s theorem is that it encourages constant updating. It also encourages an open mind by giving us the chance to look at a situation from multiple angles. Maybe she really is sorry about the betrayal. Maybe she thought she was acting in your best interests. There are many possible explanations for her behavior and you can use Bayes’s theorem to integrate all of her later actions into your bet. If you find yourself reducing the amount of money you’d bet on further betrayal, you can accurately assume that the probability she will betray your trust again has gone down.
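For those who want the updating made explicit, a standard way to model a repeated yes/no behavior is a Beta-Bernoulli model, where each interaction nudges the estimate. The sketch below is our illustration, not anything from McGrayne, and the interaction history is invented:

```python
def update(alpha, beta, betrayed):
    """One Bayesian update of a Beta(alpha, beta) belief about betrayal."""
    return (alpha + 1, beta) if betrayed else (alpha, beta + 1)

alpha, beta = 1, 1          # flat prior: no idea how likely betrayal is
history = [1, 0, 0, 0, 0]   # one betrayal, then four trustworthy episodes
for event in history:
    alpha, beta = update(alpha, beta, betrayed=event)

print(f"Estimated betrayal probability: {alpha / (alpha + beta):.2f}")  # 2/7, about 0.29
```

Each trustworthy episode pulls the estimate down, which is exactly the “reducing your bet” intuition above.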

Using this strategy can also stop the endless rounds of asking why. Why did that co-worker steal my idea? Who else do I have to watch out for? This kind of thinking is paralyzing. You end up justifying inaction by anticipating the worst possible scenarios you can imagine. Thus, you don’t change anything, and you step further away from a solution.

In reality, who cares? The why isn’t important; the most relevant task for you is to figure out the probability that your co-worker will do it again. Don’t spend hours analyzing what to do, get upset over the doomsday scenarios you have come up with, or let a few glasses of wine soften the experience.

Head to your mental casino and place the bet, quantifying all the subjective information in your head that is messy and hard to articulate. You will cut through the endless “but maybes” and have a clear path forward that addresses the probable future. It may make sense to give him the benefit of the doubt. It may also be reasonable to avoid him as much as possible. When you figure out how much you would wager on the potential outcomes, you’ll know what to do.

Sometimes we can’t just get rid of people who aren’t good for us — family being the prime example. But you can also use Bayes to test how your actions will change the probability of outcomes, finding ways to keep the negativity minimal. Let’s say you have a cousin who always plans to visit but then cancels. You can’t stop being his cousin, and saying “you aren’t welcome at my house” will cause a big family drama. So what else can you do?

Your initial equation — your probability estimate — indicates that the behavior is likely to continue. In your casino, you would comfortably bet your life savings that it will happen again. Now imagine ways in which you could change your behavior. Which of these would reduce your bet? You could have an honest conversation with him, telling him how his actions make you feel. If you imagine him receiving this openly, does your bet change? Or would you wager significantly less after adopting the strategy of always being busy when he calls to set up future visits?

And you can dig even deeper. Which of your behaviors would increase the probability that he actually comes? Which behaviors would increase the probability that he doesn’t bother making plans in the first place? Depending on how much you like him, you can steer your changes to the outcome you’d prefer.

Quantifying the subjective and using Bayes’s theorem can help us clear out some of the relationship negativity in our lives.

Gaming the System

Some college students used game theory to get an A by exploiting a loophole in the grading curve.

Catherine Rampell explains:

In several computer science courses at Johns Hopkins University, the grading curve was set by giving the highest score on the final an A, and then adjusting all lower scores accordingly. The students determined that if they collectively boycotted, then the highest score would be a zero, and so everyone would get an A.

Inside Higher Ed writes:

“The students refused to come into the room and take the exam, so we sat there for a while: me on the inside, they on the outside,” [Peter Fröhlich, the professor,] said. “After about 20-30 minutes I would give up…. Then we all left.” The students waited outside the rooms to make sure that others honored the boycott, and were poised to go in if someone had. No one did, though.

Andrew Kelly, a student in Fröhlich’s Introduction to Programming class who was one of the boycott’s key organizers, explained the logic of the students’ decision via e-mail: “Handing out 0’s to your classmates will not improve your performance in this course,” Kelly said.

“So if you can walk in with 100 percent confidence of answering every question correctly, then your payoff would be the same for either decision. Just consider the impact on your other exam performances if you studied for [the final] at the level required to guarantee yourself 100. Otherwise, it’s best to work with your colleagues to ensure a 100 for all and a very pleasant start to the holidays.”

Bayesian Nash equilibria

In this one-off final exam, there are at least two Bayesian Nash equilibria (a stable outcome, where no student has an incentive to change his strategy after considering the other students’ strategies). Equilibrium #1 is that no one takes the test, and equilibrium #2 is that everyone takes the test. Both equilibria depend on what all the students believe their peers will do.

If all students believe that everyone will boycott with 100 percent certainty, then everyone should boycott (#1). But if anyone suspects that even one person will break the boycott, then at least someone will break the boycott, and everyone else will update their choices and decide to take the exam (#2).
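To make the equilibrium claim concrete, here is a toy three-student version. The payoffs are assumptions for illustration (a made-up raw score of 80, a curved A worth 100), not details of the actual exam:

```python
from itertools import product

def payoff(my_choice, others, my_skill=80):
    """Grade for one student; choices are 'boycott' or 'take'."""
    if my_choice == "boycott" and all(o == "boycott" for o in others):
        return 100   # top score is 0, so everyone is curved to an A
    if my_choice == "boycott":
        return 0     # someone took it; your 0 is curved against their score
    return my_skill  # you take the exam and earn your raw score

def is_equilibrium(profile):
    """No student can improve by unilaterally switching strategy."""
    for i, choice in enumerate(profile):
        others = profile[:i] + profile[i + 1:]
        alternative = "take" if choice == "boycott" else "boycott"
        if payoff(alternative, others) > payoff(choice, others):
            return False
    return True

for profile in product(["boycott", "take"], repeat=3):
    if is_equilibrium(profile):
        print(profile)  # only all-boycott and all-take survive
```

In every mixed profile, a lone boycotter scores zero against a nonzero curve and would rather take the exam, which is why the boycott had to be unanimous (and policed at the door).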

Two incomplete thoughts

First, exploiting loopholes invites more rules, laws, and language (to close the previous loopholes), which creates more complexity. More complexity, in turn, leads to more loopholes (among other things). … you see where this is going.

Second, ‘gaming the system’ is game theory in action. What’s best for you, the individual (or, in this case, a small group), may not be best for society.

Today’s college kids are tomorrow’s bankers and CEOs. Just because you can do something doesn’t mean you should.

Update (via MetaFilter): In 2009, Peter Fröhlich, the instructor mentioned above, published Game Design: Tricking Students into Learning More.

Still curious? Learn more about game theory with the Prisoners' Dilemma.

Blindness to the Benefits of Ambiguity

“Decision makers,” write Stefan Trautmann and Richard Zeckhauser in their paper Blindness to the Benefits of Ambiguity, “often prove to be blind to the learning opportunities offered by ambiguous probabilities. Such decision makers violate rational decision making and forgo significant expected payoffs.”

Trautmann and Zeckhauser argue that we often don’t recognize the benefits in commonly occurring ambiguous situations. In part this is because we often treat repeated decisions involving ambiguity as one-shot decisions. In doing so, we ignore the opportunity for learning when we encounter ambiguity in decisions that offer repeated choices.

To put this in context, the authors offer the following example:

A patient is prescribed a drug for high cholesterol. It is successful, lowering her total cholesterol from 230 to 190, and her only side effect is a mild case of sweaty palms. The physician is likely to keep the patient on this drug as long as her cholesterol stays low. Yet, there are many medications for treating cholesterol. Another might lower her cholesterol even more effectively or impose no side effects. Trying an alternative would seem to make sense, since the patient is likely to be on a cholesterol medication for the rest of her life.

In situations of ambiguity with repeated choices we often gravitate towards the first decision that offers a positive payoff. Once we've found a positive payoff we're likely to stick with that decision when given the opportunity to make the same choice again rather than experiment in an attempt to optimize payoffs. We ignore the opportunity for learning and favor the status quo. Another way to think of this is uncertainty avoidance (or ambiguity aversion).

Few individuals recognize that ambiguity offers the opportunity for learning. If a choice situation is to be repeated, ambiguity brings benefits, since one can change one’s choice if one learns the ambiguous choice is superior.

“We observe,” they offer, “that people's lack of a clear understanding of learning under ambiguity leads them to adopt non-Bayesian rules.”
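What would a Bayesian rule look like here? One textbook approach is Thompson sampling: keep a belief about each option and choose according to how plausible it is that each one is best. The sketch below compares it with the “stick with the first winner” rule the authors describe; the payoff rates are invented for illustration:

```python
import random

RATES = {"familiar": 0.60, "ambiguous": 0.75}  # invented; learnable only by sampling

def thompson(rounds=10_000, seed=0):
    """Bayesian (Thompson) sampling: keep Beta beliefs, play the best draw."""
    rng = random.Random(seed)
    beliefs = {arm: [1, 1] for arm in RATES}   # [wins + 1, losses + 1]
    wins = 0
    for _ in range(rounds):
        arm = max(RATES, key=lambda a: rng.betavariate(*beliefs[a]))
        won = rng.random() < RATES[arm]
        beliefs[arm][0 if won else 1] += 1     # update only the chosen option
        wins += won
    return wins / rounds

def stick_with_first_winner(rounds=10_000, seed=1):
    """Non-Bayesian rule: try the familiar option; if it pays, never explore."""
    rng = random.Random(seed)
    arm = "familiar" if rng.random() < RATES["familiar"] else "ambiguous"
    return sum(rng.random() < RATES[arm] for _ in range(rounds)) / rounds

print(f"Thompson sampling: {thompson():.3f}")                 # approaches 0.75
print(f"First winner:      {stick_with_first_winner():.3f}")  # ~0.60 here: locked in early
```

The sampler pays a small exploration cost early and then converges on the better option; the status-quo rule is only ever as good as its first lucky draw.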

Another example of how this manifests itself in the real world:

In the summer of 2010, the consensus estimate is that there are five applicants for every job opening, yet major employers who expect to hire significant numbers of workers once the economy turns up are sitting by the sidelines and having current workers do overtime. The favorability of the hiring situation is unprecedented in recent years. Thus, it would seem to make sense to hire a few workers, see how they perform relative to the norm. If the finding is much better, suggesting that the ability to select in a very tough labor market and among five applicants is a big advantage, then hire many more. This situation, where the payoff to the first-round decision is highly ambiguous, but perhaps well worthwhile once learning is taken into account, is a real world exemplar of the laboratory situations investigated in this paper.

According to Tolstoy, happy families are all alike, while every unhappy family is unhappy in its own way. A similar observation seems to hold true for situations involving ambiguity: There is only one way to capitalize correctly on learning opportunities under ambiguity, but there are many ways to violate reasonable learning strategies.

From an evolutionary perspective, why would learning avoidance persist if the benefits from learning are large?

Psychological findings suggest that negative experiences are crucial to learning, while good experiences have virtually no pedagogic power. In the current setting, ambiguous options would need to be sampled repeatedly in order to obtain sufficient information on whether to switch from the status quo. Both bad and good outcomes would be experienced along the way, but only good ones could trigger switching. Bad outcomes would also weigh much more heavily, leading people to require too much positive evidence before shifting to ambiguous options. In individual decision situations, losses often weigh 2 to 3 times as much as gains.

In addition, if one does not know what returns would have come from an ambiguous alternative, one cannot feel remorse from not having chosen it. Blame from others also plays an important role. In principal-agent relationships, bad outcomes often lead to criticism, and possibly legal consequences because of responsibility and accountability. Therefore, agents, such as financial advisors or medical practitioners may experience an even higher asymmetry from bad and good payoffs. Most people, for that reason, have had many fewer positive learning experiences with ambiguity than rational sampling would provide.
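That asymmetry is easy to simulate. In the rough sketch below (all numbers invented), an option that genuinely pays off 65% of the time “feels” net-positive in only about a quarter of short evaluations once each loss is weighted 2.5 times as heavily as a gain:

```python
import random

def feels_positive(p_win=0.65, loss_weight=2.5, samples=10, trials=100_000, seed=1):
    """Fraction of evaluations (`samples` draws each) whose loss-weighted
    score comes out positive."""
    rng = random.Random(seed)
    favorable = 0
    for _ in range(trials):
        score = sum(1 if rng.random() < p_win else -loss_weight
                    for _ in range(samples))
        favorable += score > 0
    return favorable / trials

print(f"{feels_positive():.0%}")  # roughly 26%: a genuinely good option usually feels bad
```

So the ambiguous option has to be very good indeed before sampling it ever starts to feel worthwhile.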

It might be a good idea to try a new brand the next time you're at the store rather than just making the same choice over and over. Who knows, you might discover you like it better.