Edge.org asked the likes of Christopher Chabris, Nicholas Epley, Jason Zweig, William Poundstone, Cass Sunstein, Phil Rosenzweig, Richard Thaler & Sendhil Mullainathan, Nassim Nicholas Taleb, Steven Pinker, and Rory Sutherland among others: “How has Kahneman's work influenced your own? What step did it make possible?”
Kahneman's work is summarized in the international best-seller Thinking, Fast and Slow.
Here are some select excerpts that I found interesting.
Christopher Chabris (author of The Invisible Gorilla)
There's an overarching lesson I have learned from the work of Danny Kahneman, Amos Tversky, and their colleagues who collectively pioneered the modern study of judgment and decision-making: Don't trust your intuition.
After what I see as years of hard work, experiments of admirable design, lucid writing, and quiet leadership, Kahneman, a man who spent the majority of his career in departments of psychology, earned the highest prize in economics. This was a reminder that some of the best insights into economic behavior could be (and had been) gleaned outside of the discipline.
Jason Zweig (author of Your Money and Your Brain)
… nothing amazed me more about Danny than his ability to detonate what we had just done.
Anyone who has ever collaborated with him tells a version of this story: You go to sleep feeling that Danny and you had done important and incontestably good work that day. You wake up at a normal human hour, grab breakfast, and open your email. To your consternation, you see a string of emails from Danny, beginning around 2:30 a.m. The subject lines commence in worry, turn darker, and end around 5 a.m. expressing complete doubt about the previous day's work.
You send an email asking when he can talk; you assume Danny must be asleep after staying up all night trashing the chapter. Your cellphone rings a few seconds later. “I think I figured out the problem,” says Danny, sounding remarkably chipper. “What do you think of this approach instead?”
The next thing you know, he sends a version so utterly transformed that it is unrecognizable: It begins differently, it ends differently, it incorporates anecdotes and evidence you never would have thought of, it draws on research that you've never heard of. If the earlier version was close to gold, this one is hewn out of something like diamond: The raw materials have all changed, but the same ideas are somehow illuminated with a sharper brilliance.
The first time this happened, I was thunderstruck. How did he do that? How could anybody do that? When I asked Danny how he could start again as if we had never written an earlier draft, he said the words I've never forgotten: “I have no sunk costs.”
William Poundstone (author of Are You Smart Enough to Work at Google?)
As a writer of nonfiction I'm often in the position of trying to connect the dots—to draw grand conclusions from small samples. Do three events make a trend? Do three quoted sources justify a conclusion? Both are maxims of journalism. I try to keep in mind Kahneman and Tversky's Law of Small Numbers. It warns that small samples aren't nearly so informative, in our uncertain world, as intuition counsels.
Cass R. Sunstein (author of Why Nudge?)
These ideas are hardly Kahneman’s most well-known, but they are full of implications, and we have only started to understand them.
1. The outrage heuristic. People’s judgments about punishment are a product of outrage, which operates as a shorthand for more complex inquiries that judges and lawyers often think relevant. When people decide about appropriate punishment, they tend to ask a simple question: How outrageous was the underlying conduct? It follows that people are intuitive retributivists, and also that utilitarian thinking will often seem uncongenial and even outrageous.
2. Scaling without a modulus. Remarkably, it turns out that people often agree on how outrageous certain misconduct is (on a scale of 1 to 8), but also remarkably, their monetary judgments are all over the map. The reason is that people do not have a good sense of how to translate their judgments of outrage onto the monetary scale. As Kahneman shows, some work in psychophysics explains the problem: People are asked to “scale without a modulus,” and that is an exceedingly challenging task. The result is uncertainty and unpredictability. These claims have implications for numerous questions in law and policy, including the award of damages for pain and suffering, administrative penalties, and criminal sentences.
3. Rhetorical asymmetry. In our work on jury awards, we found that deliberating juries typically produce monetary awards against corporate defendants that are higher, and indeed much higher, than the median award of the individual jurors before deliberation began. Kahneman’s hypothesis is that in at least a certain category of cases, those who argue for higher awards have a rhetorical advantage over those who argue for lower awards, leading to a rhetorical asymmetry. The basic idea is that in light of social norms, one side, in certain debates, has an inherent advantage – and group judgments will shift accordingly. A similar rhetorical asymmetry can be found in groups of many kinds, in both private and public sectors, and it helps to explain why groups move.
4. Predictably incoherent judgments. We found that when people make moral or legal judgments in isolation, they produce a pattern of outcomes that they would themselves reject, if only they could see that pattern as a whole. A major reason is that human thinking is category-bound. When people see a case in isolation, they spontaneously compare it to other cases that are mainly drawn from the same category of harms. When people are required to compare cases that involve different kinds of harms, judgments that appear sensible when the problems are considered separately often appear incoherent and arbitrary in the broader context. In my view, Kahneman’s idea of predictable incoherence has yet to be adequately appreciated; it bears on both fiscal policy and on regulation.
For years, there were (as the old saying has it) two kinds of people: those relatively few of us who were aware of the work of Danny Kahneman and Amos Tversky, and the much more numerous who were not. Happily, the balance is now shifting, and more of the general public has been able to hear directly a voice that is in equal measures wise and modest.
Sendhil Mullainathan (author of Scarcity: Why Having Too Little Means So Much)
… Kahneman and Tversky's early work opened this door exactly because it was not what most people think it was. Many think of this work as an attack on rationality (often defined in some narrow technical sense). That misconception still exists among many, and it misses the entire point of their exercise. Attacks on rationality had been around well before Kahneman and Tversky—many people recognized that the simplifying assumptions of economics were grossly over-simplifying. Of course humans do not have infinite cognitive abilities. We are also not as strong as gorillas, as fast as cheetahs, and cannot swim like sea lions. But we do not therefore say that there is something wrong with humans. That we have limited cognitive abilities is both true and no more helpful to doing good social science than to acknowledge our weakness as swimmers. Pointing it out did not open any new doors.
Kahneman and Tversky's work did not just attack rationality, it offered a constructive alternative: a better description of how humans think. People, they argued, often use simple rules of thumb to make judgments, which incidentally is a pretty smart thing to do. But this is not the insight that left us one step from doing behavioral economics. The breakthrough idea was that these rules of thumb could be catalogued. And once understood they can be used to predict where people will make systematic errors. Those two words are what made behavioral economics possible.
Nassim Taleb (author of Antifragile)
Here is an insight Danny K. triggered that changed the course of my work. I figured out a nontrivial problem in randomness and its underestimation a decade ago while reading the following sentence in a 1986 paper by Kahneman and Miller:
A spectator at a weight lifting event, for example, will find it easier to imagine the same athlete lifting a different weight than to keep the achievement constant and vary the athlete's physique.
This idea of varying one side, not the other, also applies to mental simulations of future (random) events, when people engage in projections of different counterfactuals. Authors and managers have a tendency to take one variable as fixed, sort of a numeraire, and perturbate the other, as a default in mental simulations. One side is going to be random, not the other.
It hit me that the mathematical consequence is vastly more severe than it appears. Kahneman and colleagues focused on the bias in which variable is treated as fixed and which as random. But the paper set off in my mind the following realization: what if we were to go one step beyond and perturbate both? The response would be nonlinear. I had never considered the effect of such nonlinearity before, nor seen it made explicit in the literature on risk and counterfactuals. And you never encounter one single random variable in real life; there are many things moving together.
Increasing the number of random variables compounds the number of counterfactuals and causes more extremes—particularly in fat-tailed environments (i.e., Extremistan): imagine perturbating by producing a lot of scenarios and, in one of the scenarios, increasing the weight of the barbell and decreasing the bodyweight of the weightlifter. This compounding would produce an extreme event of sorts. Extreme, or tail events (Black Swans) are therefore more likely to be produced when both variables are random, that is, in real life. Simple.
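Taleb's compounding claim can be illustrated with a minimal Monte Carlo sketch (this is my illustration, not from his piece; all numbers are hypothetical, and thin-tailed Gaussians are used only to show the mechanism, which fat-tailed distributions amplify further):

```python
import random

random.seed(42)
N = 100_000

# Hypothetical numbers: an athlete's lifting capacity (~100 kg) vs. a barbell
# weight (~80 kg); a "tail event" is failing by more than a 10 kg margin.
CAPACITY_MEAN, WEIGHT_MEAN, SPREAD = 100.0, 80.0, 10.0

# Scenario A: hold the athlete fixed, perturb only the weight.
margins_one = [CAPACITY_MEAN - random.gauss(WEIGHT_MEAN, SPREAD)
               for _ in range(N)]

# Scenario B: perturb both the athlete and the weight.
margins_two = [random.gauss(CAPACITY_MEAN, SPREAD) - random.gauss(WEIGHT_MEAN, SPREAD)
               for _ in range(N)]

tail_one = sum(m < -10 for m in margins_one)
tail_two = sum(m < -10 for m in margins_two)
# Randomizing both variables widens the distribution of outcomes, so the same
# tail threshold is crossed far more often in Scenario B than in Scenario A.
```

With these (arbitrary) numbers, Scenario B's tail count comes out roughly an order of magnitude higher than Scenario A's, even though both scenarios share the same means.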
Now, in the real world we never face one variable without something else with it. In academic experiments, we do. This marks the serious difference between the laboratory (or the casino's “ludic” setup) and real life, the difference between academia and the real world. And such difference is, sort of, tractable.
… Say you are the manager of a fertilizer plant. You try to issue various projections of the sales of your product—like the weights in the weightlifter's story. But you also need to keep in mind that there is a second variable to perturbate: what happens to the competition—you do not want them to get lucky, invent better products, or find cheaper technologies. So not only do you need to predict your fate (with errors) but also that of the competition (also with errors). And the variances from these errors add arithmetically when one focuses on differences.
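That last point, that independent forecast errors add in variance when what matters is the difference, can be checked numerically with a short sketch (my illustration, with hypothetical error spreads):

```python
import random
import statistics

random.seed(0)
N = 200_000
SIGMA_OURS, SIGMA_THEIRS = 5.0, 5.0  # hypothetical forecast-error spreads

# Independent forecast errors for our sales and the competitor's.
ours = [random.gauss(0, SIGMA_OURS) for _ in range(N)]
theirs = [random.gauss(0, SIGMA_THEIRS) for _ in range(N)]

# What we actually care about: our position relative to the competition.
diff = [o - t for o, t in zip(ours, theirs)]

var_ours = statistics.pvariance(ours)
var_theirs = statistics.pvariance(theirs)
var_diff = statistics.pvariance(diff)
# For independent errors, Var(ours - theirs) ≈ Var(ours) + Var(theirs):
# uncertainty about the difference exceeds uncertainty about either side alone.
```

The simulated variance of the difference lands close to the sum of the two individual variances, which is why forecasting a relative outcome is harder than forecasting either absolute one.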
Rory Sutherland
When I met Danny in London in 2009 he diffidently said that the only hope he had for his work was that “it might lead to a better kind of gossip”—where people discuss each other's motivations and behaviour in slightly more intelligent terms. To someone from an industry where a new flavour-variant of toothpaste is presented as being an earth-changing event, this seemed an incredibly modest aspiration for such important work.
However, if this was his aim, he has surely succeeded. When I meet people, I now use what I call “the Kahneman heuristic”. You simply ask people “Have you read Danny Kahneman's book?” If the answer is yes, you know (p>0.95) that the conversation will be more interesting, wide-ranging and open-minded than otherwise.
And it then occurred to me that his aim—for better conversations—was perhaps not modest at all. Multiplied a millionfold it may be very important indeed. In the social sciences, I think it is fair to say, the good ideas are not always influential and the influential ideas are not always good. Kahneman's work is now both good and influential.