Tag: Richard Feynman

First Principles: The Building Blocks of True Knowledge

First-principles thinking is one of the best ways to reverse-engineer complicated problems and unleash creative possibility. Sometimes called “reasoning from first principles,” the idea is to break down complicated problems into basic elements and then reassemble them from the ground up. It’s one of the best ways to learn to think for yourself, unlock your creative potential, and move from linear to non-linear results.

This approach was used by the philosopher Aristotle and is used now by Elon Musk and Charlie Munger. It allows them to cut through the fog of shoddy reasoning and inadequate analogies to see opportunities that others miss.

“I don’t know what’s the matter with people: they don’t learn by understanding; they learn by some other way—by rote or something. Their knowledge is so fragile!”

— Richard Feynman

The Basics

A first principle is a foundational proposition or assumption that stands alone. We cannot deduce first principles from any other proposition or assumption.

Aristotle, writing[1] on first principles, said:

In every systematic inquiry (methodos) where there are first principles, or causes, or elements, knowledge and science result from acquiring knowledge of these; for we think we know something just in case we acquire knowledge of the primary causes, the primary first principles, all the way to the elements.

Later he connected the idea to knowledge, defining first principles as “the first basis from which a thing is known.”[2]

The search for first principles is not unique to philosophy. All great thinkers do it.

Reasoning by first principles removes the impurity of assumptions and conventions. What remains is the essentials. It’s one of the best mental models you can use to improve your thinking because the essentials allow you to see where reasoning by analogy might lead you astray.

The Coach and the Play Stealer

My friend Mike Lombardi (a former NFL executive) and I were having dinner in L.A. one night, and he said, “Not everyone that’s a coach is really a coach. Some of them are just play stealers.”

Every play we see in the NFL was at some point created by someone who thought, “What would happen if the players did this?” and went out and tested the idea. Since then, thousands, if not millions, of plays have been created. That’s part of what coaches do. They assess what’s physically possible, along with the weaknesses of the other teams and the capabilities of their own players, and create plays that are designed to give their teams an advantage.

The coach reasons from first principles. The rules of football are the first principles: they govern what you can and can’t do. Everything is possible as long as it’s not against the rules.

The play stealer works off what’s already been done. Sure, maybe he adds a tweak here or there, but by and large he’s just copying something that someone else created.

While both the coach and the play stealer start from something that already exists, they generally have different results. These two people look the same to most of us on the sidelines or watching the game on TV. Indeed, they look the same most of the time, but when something goes wrong, the difference shows. Both the coach and the play stealer call successful plays and unsuccessful plays. Only the coach, however, can determine why a play was successful or unsuccessful and figure out how to adjust it. The coach, unlike the play stealer, understands what the play was designed to accomplish and where it went wrong, so he can easily course-correct. The play stealer has no idea what’s going on. He doesn’t understand the difference between something that didn’t work and something that played into the other team’s strengths.

Musk would identify the play stealer as the person who reasons by analogy, and the coach as someone who reasons by first principles. When you run a team, you want a coach in charge and not a play stealer. (If you’re a sports fan, you need only look at the difference between the Cleveland Browns and the New England Patriots.)

We’re all somewhere on the spectrum between coach and play stealer. We reason by first principles, by analogy, or a blend of the two.

Another way to think about this distinction comes from another friend, Tim Urban. He says[3] it’s like the difference between the cook and the chef. While these terms are often used interchangeably, there is an important nuance. The chef is a trailblazer, the person who invents recipes. He knows the raw ingredients and how to combine them. The cook, who reasons by analogy, uses a recipe. He creates something, perhaps with slight variations, that’s already been created.

The difference between reasoning by first principles and reasoning by analogy is like the difference between being a chef and being a cook. If the cook lost the recipe, he’d be screwed. The chef, on the other hand, understands the flavor profiles and combinations at such a fundamental level that he doesn’t even use a recipe. He has real knowledge as opposed to know-how.

Authority

So much of what we believe is based on some authority figure telling us that something is true. As children, we learn to stop questioning when we’re told “Because I said so.” (More on this later.) As adults, we learn to stop questioning when people say “Because that’s how it works.” The implicit message is “understanding be damned — shut up and stop bothering me.” It’s not intentional or personal. OK, sometimes it’s personal, but most of the time, it’s not.

If you outright reject dogma, you often become a problem: a student who is always pestering the teacher. A kid who is always asking questions and never allowing you to cook dinner in peace. An employee who is always slowing things down by asking why.

When you can’t change your mind, though, you die. Sears was once thought indestructible before Wal-Mart took over. Sears failed to see the world change. Adapting to change is an incredibly hard thing to do when it comes into conflict with the very thing that caused so much success. As Upton Sinclair aptly pointed out, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” Wal-Mart failed to see the world change and is now under assault from Amazon.

If we never learn to take something apart, test the assumptions, and reconstruct it, we end up trapped in what other people tell us — trapped in the way things have always been done. When the environment changes, we just continue as if things were the same.

First-principles reasoning cuts through dogma and removes the blinders. We can see the world as it is and see what is possible.

When it comes down to it, everything that is not a law of nature is just a shared belief. Money is a shared belief. So is a border. So are bitcoins. The list goes on.

Some of us are naturally skeptical of what we’re told. Maybe it doesn’t match up to our experiences. Maybe it’s something that used to be true but isn’t true anymore. And maybe we just think very differently about something.

“To understand is to know what to do.”

— Wittgenstein

Techniques for Establishing First Principles

There are many ways to establish first principles. Let’s take a look at a few of them.

Socratic Questioning

Socratic questioning can be used to establish first principles through stringent analysis. This is a disciplined questioning process, used to establish truths, reveal underlying assumptions, and separate knowledge from ignorance. The key distinction between Socratic questioning and normal discussions is that the former seeks to draw out first principles in a systematic manner. Socratic questioning generally follows this process:

  1. Clarifying your thinking and explaining the origins of your ideas (Why do I think this? What exactly do I think?)
  2. Challenging assumptions (How do I know this is true? What if I thought the opposite?)
  3. Looking for evidence (How can I back this up? What are the sources?)
  4. Considering alternative perspectives (What might others think? How do I know I am correct?)
  5. Examining consequences and implications (What if I am wrong? What are the consequences if I am?)
  6. Questioning the original questions (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)

This process stops you from relying on your gut and limits strong emotional responses. This process helps you build something that lasts.

“Because I Said So” or “The Five Whys”

Children instinctively think in first principles. Just like us, they want to understand what’s happening in the world. To do so, they intuitively break through the fog with a game some parents have come to hate.

“Why?”

“Why?”

“Why?”

Here’s an example that has played out numerous times at my house:

“It’s time to brush our teeth and get ready for bed.”

“Why?”

“Because we need to take care of our bodies, and that means we need sleep.”

“Why do we need sleep?”

“Because we’d die if we never slept.”

“Why would that make us die?”

“I don’t know; let’s go look it up.”

Kids are just trying to understand why adults are saying something or why they want them to do something.

The first time your kid plays this game, it’s cute, but for most teachers and parents, it eventually becomes annoying. Then the answer becomes what my mom used to tell me: “Because I said so!” (Love you, Mom.)

Of course, I’m not always that patient with the kids. For example, I get testy when we’re late for school, or we’ve been travelling for 12 hours, or I’m trying to fit too much into the time we have. Still, I try never to say “Because I said so.”

People hate the “because I said so” response for two reasons, both of which play out in the corporate world as well. The first reason we hate the game is that we feel like it slows us down. We know what we want to accomplish, and that response creates unnecessary drag. The second reason we hate this game is that after one or two questions, we are often lost. We actually don’t know why. Confronted with our own ignorance, we resort to self-defense.

I remember being in meetings and asking people why we were doing something this way or why they thought something was true. At first, there was a mild tolerance for this approach. After three “whys,” though, you often find yourself on the other end of some version of “we can take this offline.”

Can you imagine how that would play out with Elon Musk? Richard Feynman? Charlie Munger? Musk would build a billion-dollar business to prove you wrong, Feynman would think you’re an idiot, and Munger would profit based on your inability to think through a problem.

“Science is a way of thinking much more than it is a body of knowledge.”

— Carl Sagan

Examples of First Principles in Action

So we can better understand how first-principles reasoning works, let’s look at a few examples.

Elon Musk and SpaceX

Perhaps no one embodies first-principles thinking more than Elon Musk. He is one of the most audacious entrepreneurs the world has ever seen. My kids (grades 3 and 2) refer to him as a real-life Tony Stark, which conveniently gives me an opening to remind them that by fourth grade, Musk was reading the Encyclopedia Britannica and not Pokemon.

What’s most interesting about Musk is not what he thinks but how he thinks:

I think people’s thinking process is too bound by convention or analogy to prior experiences. It’s rare that people try to think of something on a first principles basis. They’ll say, “We’ll do that because it’s always been done that way.” Or they’ll not do it because “Well, nobody’s ever done that, so it must not be good.” But that’s just a ridiculous way to think. You have to build up the reasoning from the ground up—“from the first principles” is the phrase that’s used in physics. You look at the fundamentals and construct your reasoning from that, and then you see if you have a conclusion that works or doesn’t work, and it may or may not be different from what people have done in the past.[4]

His approach to understanding reality is to start with what is true — not with his intuition. The problem is that we don’t know as much as we think we do, so our intuition isn’t very good. We trick ourselves into thinking we know what’s possible and what’s not. The way Musk thinks is much different.

Musk starts out with something he wants to achieve, like building a rocket. Then he starts with the first principles of the problem. Running through how Musk would think, Larry Page said in an interview, “What are the physics of it? How much time will it take? How much will it cost? How much cheaper can I make it? There’s this level of engineering and physics that you need to make judgments about what’s possible and interesting. Elon is unusual in that he knows that, and he also knows business and organization and leadership and governmental issues.”[5]

Rockets are absurdly expensive, which is a problem because Musk wants to send people to Mars. And to send people to Mars, you need cheaper rockets. So he asked himself, “What is a rocket made of? Aerospace-grade aluminum alloys, plus some titanium, copper, and carbon fiber. And … what is the value of those materials on the commodity market? It turned out that the materials cost of a rocket was around two percent of the typical price.”[6]

Why, then, is it so expensive to get a rocket into space? Musk, a notorious self-learner with degrees in both economics and physics, literally taught himself rocket science. He figured that the only reason getting a rocket into space is so expensive is that people are stuck in a mindset that doesn’t hold up to first principles. With that, Musk decided to create SpaceX and see if he could build rockets himself from the ground up.

In an interview with Kevin Rose, Musk summarized his approach:

I think it's important to reason from first principles rather than by analogy. So the normal way we conduct our lives is, we reason by analogy. We are doing this because it's like something else that was done, or it is like what other people are doing… with slight iterations on a theme. And it's … mentally easier to reason by analogy rather than from first principles. First principles is kind of a physics way of looking at the world, and what that really means is, you … boil things down to the most fundamental truths and say, “okay, what are we sure is true?” … and then reason up from there. That takes a lot more mental energy.[7]

Musk then gave an example of how he uses first principles to innovate at lower cost:

Somebody could say — and in fact people do — that battery packs are really expensive and that's just the way they will always be because that's the way they have been in the past. … Well, no, that's pretty dumb… Because if you applied that reasoning to anything new, then you wouldn't be able to ever get to that new thing…. you can't say, … “oh, nobody wants a car because horses are great, and we're used to them and they can eat grass and there’s lots of grass all over the place and … there's no gasoline that people can buy….”

He then gave a fascinating example about battery packs:

… they would say, “historically, it costs $600 per kilowatt-hour. And so it’s not going to be much better than that in the future.” … So the first principles would be, … what are the material constituents of the batteries? What is the spot market value of the material constituents? … It’s got cobalt, nickel, aluminum, carbon, and some polymers for separation, and a steel can. So break that down on a material basis; if we bought that on a London Metal Exchange, what would each of these things cost? Oh, jeez, it's … $80 per kilowatt-hour. So, clearly, you just need to think of clever ways to take those materials and combine them into the shape of a battery cell, and you can have batteries that are much, much cheaper than anyone realizes.
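
To make the arithmetic concrete, here is a minimal sketch of that costing exercise in Python. Everything in it is a placeholder: the masses and spot prices are invented for illustration, not Musk’s figures or real commodity data, and the real exercise is to plug in an actual bill of materials and actual exchange prices.

  # First-principles costing sketch: price the pack from its raw materials
  # instead of from what packs have historically sold for.
  # Every number below is a made-up placeholder, used only to show the shape
  # of the calculation.

  kg_per_kwh = {                # hypothetical kilograms of material per kWh of cells
      "nickel": 0.8,
      "cobalt": 0.2,
      "aluminum": 0.7,
      "carbon (graphite)": 1.0,
      "separator polymer": 0.1,
      "steel can": 0.5,
  }
  usd_per_kg = {                # hypothetical commodity spot prices, USD per kilogram
      "nickel": 18.0,
      "cobalt": 35.0,
      "aluminum": 2.5,
      "carbon (graphite)": 1.5,
      "separator polymer": 3.0,
      "steel can": 1.0,
  }

  material_floor = sum(kg_per_kwh[m] * usd_per_kg[m] for m in kg_per_kwh)
  historical_price = 600.0      # the $/kWh figure quoted in the passage

  print(f"material floor: ~${material_floor:.0f}/kWh")
  print(f"historical pack price: ${historical_price:.0f}/kWh")
  print(f"materials are ~{material_floor / historical_price:.0%} of the historical price")

Whatever the exact inputs, the point is the gap between the two numbers: the material floor sits far below the historical price, and everything above it is engineering and convention, which are fair game for rethinking. The same arithmetic applies to the rocket example above, where materials came to around two percent of the typical price.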

BuzzFeed

After studying the psychology of virality, Jonah Peretti founded BuzzFeed in 2006. The site quickly grew to be one of the most popular on the internet, with hundreds of employees and substantial revenue.

Peretti figured out early on the first principle of a successful website: wide distribution. Rather than publishing articles people should read, BuzzFeed focuses on publishing those that people want to read. This means aiming to garner maximum social shares to put distribution in the hands of readers.

Peretti recognized the first principles of online popularity and used them to take a new approach to journalism. He also ignored SEO, saying, “Instead of making content robots like, it was more satisfying to make content humans want to share.”[8] Unfortunately for us, we share a lot of cat videos.

A common aphorism in the field of viral marketing is, “content might be king, but distribution is queen, and she wears the pants” (or “and she has the dragons”; pick your metaphor). BuzzFeed’s distribution-based approach is based on obsessive measurement, using A/B testing and analytics.
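
The passage doesn't describe BuzzFeed's actual tooling, so the following is only a minimal sketch, in Python, of the kind of headline test that "obsessive measurement" implies: show two variants to random slices of traffic, then ask whether the difference in share rates is larger than noise. The counts are invented, and the two-proportion z-test is just one common way to make that call.

  import math

  def share_rate_test(shares_a, views_a, shares_b, views_b):
      """Two-proportion z-test: is variant B's share rate credibly different from A's?"""
      p_a, p_b = shares_a / views_a, shares_b / views_b
      p_pool = (shares_a + shares_b) / (views_a + views_b)   # pooled rate under "no difference"
      se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
      return p_a, p_b, (p_b - p_a) / se

  # Hypothetical counts: each headline shown to 10,000 readers.
  p_a, p_b, z = share_rate_test(shares_a=310, views_a=10_000,
                                shares_b=405, views_b=10_000)
  print(f"headline A shared {p_a:.2%} of the time, headline B {p_b:.2%}, z = {z:.2f}")
  # |z| above roughly 1.96 is the usual threshold for treating the gap as real
  # rather than noise; below that, keep the test running.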

Jon Steinberg, president of BuzzFeed, explains the first principles of virality:

Keep it short. Ensure [that] the story has a human aspect. Give people the chance to engage. And let them react. People mustn’t feel awkward sharing it. It must feel authentic. Images and lists work. The headline must be persuasive and direct.

Derek Sivers and CD Baby

When Sivers founded his company CD Baby, he reduced the concept to first principles. Sivers asked, “What does a successful business need?” His answer was happy customers.

Instead of focusing on garnering investors or having large offices, fancy systems, or huge numbers of staff, Sivers focused on making each of his customers happy. An example of this is his famous order confirmation email, part of which reads:

Your CD has been gently taken from our CD Baby shelves with sterilized contamination-free gloves and placed onto a satin pillow. A team of 50 employees inspected your CD and polished it to make sure it was in the best possible condition before mailing. Our packing specialist from Japan lit a candle and a hush fell over the crowd as he put your CD into the finest gold-lined box money can buy.

By ignoring unnecessary details that cause many businesses to expend large amounts of money and time, Sivers was able to rapidly grow the company to $4 million in monthly revenue. In Anything You Want, Sivers wrote:

Having no funding was a huge advantage for me.
A year after I started CD Baby, the dot-com boom happened. Anyone with a little hot air and a vague plan was given millions of dollars by investors. It was ridiculous. …
Even years later, the desks were just planks of wood on cinder blocks from the hardware store. I made the office computers myself from parts. My well-funded friends would spend $100,000 to buy something I made myself for $1,000. They did it saying, “We need the very best,” but it didn't improve anything for their customers. …
It's counterintuitive, but the way to grow your business is to focus entirely on your existing customers. Just thrill them, and they'll tell everyone.

To survive as a business, you need to treat your customers well. And yet so few of us master this principle.

Employing First Principles in Your Daily Life

Most of us have no problem thinking about what we want to achieve in life, at least when we’re young. We’re full of big dreams, big ideas, and boundless energy. The problem is that we let others tell us what’s possible, not only when it comes to our dreams but also when it comes to how we go after them. And when we let other people tell us what’s possible or what the best way to do something is, we outsource our thinking to someone else.

The real power of first-principles thinking is moving away from incremental improvement and into possibility. Letting others think for us means that we’re using their analogies, their conventions, and their possibilities. It means we’ve inherited a world that conforms to what they think. This is incremental thinking.

When we take what already exists and improve on it, we are in the shadow of others. It’s only when we step back, ask ourselves what’s possible, and cut through the flawed analogies that we see what is possible. Analogies are beneficial; they make complex problems easier to communicate and increase understanding. Using them, however, is not without a cost. They limit our beliefs about what’s possible and allow us to argue without ever exposing our (faulty) thinking. Analogies move us to see the problem in the same way that someone else sees the problem.

The gulf between what people currently see because their thinking is framed by someone else and what is physically possible is filled by the people who use first principles to think through problems.

First-principles thinking clears the clutter of what we’ve told ourselves and allows us to rebuild from the ground up. Sure, it’s a lot of work, but that’s why so few people are willing to do it. It’s also why the rewards for filling the chasm between what’s possible and incremental improvement tend to be non-linear.

Let’s take a look at a few of the limiting beliefs that we tell ourselves.

“I don’t have a good memory.” [10]
People have far better memories than they think they do. Saying you don’t have a good memory is just a convenient excuse to let you forget. Taking a first-principles approach means asking how much information we can physically store in our minds. The answer is “a lot more than you think.” Now that we know it’s possible to put more into our brains, we can reframe the problem as finding the optimal way to store information in our brains.

“There is too much information out there.”
A lot of professional investors read Farnam Street. When I meet these people and ask how they consume information, they usually fall into one of two categories. The differences between the two apply to all of us. The first type of investor says there is too much information to consume. They spend their days reading every press release, article, and blogger commenting on a position they hold. They wonder what they are missing. The second type of investor realizes that reading everything is unsustainable and stressful and makes them prone to overvaluing information they’ve spent a great amount of time consuming. These investors, instead, seek to understand the variables that will affect their investments. While there might be hundreds, there are usually three to five variables that will really move the needle. The investors don’t have to read everything; they just pay attention to these variables.

“All the good ideas are taken.”
A common way that people limit what’s possible is to tell themselves that all the good ideas are taken. Yet, people have been saying this for hundreds of years — literally — and companies keep starting and competing with different ideas, variations, and strategies.

“We need to move first.”
I’ve heard this in boardrooms for years. The answer isn’t as black and white as this statement. The iPhone wasn’t first; it was better. Microsoft wasn’t the first to sell operating systems; it just had a better business model. There is a lot of evidence showing that first movers in business are more likely to fail than latecomers. Yet this myth about the need to move first persists.

Sometimes the early bird gets the worm and sometimes the first mouse gets killed. You have to break each situation down into its component parts and see what’s possible. That is the work of first-principles thinking.

“I can’t do that; it’s never been done before.”
People like Elon Musk are constantly doing things that have never been done before. This type of thinking is analogous to looking back at history and building, say, floodwalls based on the worst flood that has happened before. A better bet is to look at what could happen and plan for that.

“As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.”

— Harrington Emerson

Conclusion

The thoughts of others imprison us if we’re not thinking for ourselves.

Reasoning from first principles allows us to step outside of history and conventional wisdom and see what is possible. When you really understand the principles at work, you can decide if the existing methods make sense. Often they don’t.

Reasoning by first principles is useful when you are (1) doing something for the first time, (2) dealing with complexity, and (3) trying to understand a situation that you’re having problems with. In all of these areas, your thinking gets better when you stop making assumptions and you stop letting others frame the problem for you.

Analogies can’t replace understanding. While it’s easier on your brain to reason by analogy, you’re more likely to come up with better answers when you reason by first principles. This is what makes it one of the best sources of creative thinking. Thinking in first principles allows you to adapt to a changing environment, deal with reality, and seize opportunities that others can’t see.

Many people mistakenly believe that creativity is something that only some of us are born with, and either we have it or we don’t. Fortunately, there seems to be ample evidence that this isn’t true.[11] We’re all born rather creative, but during our formative years, it can be beaten out of us by busy parents and teachers. As adults, we rely on convention and what we’re told because that’s easier than breaking things down into first principles and thinking for ourselves. Thinking through first principles is a way of taking off the blinders. Most things suddenly seem more possible.

“I think most people can learn a lot more than they think they can,” says Musk. “They sell themselves short without trying. One bit of advice: it is important to view knowledge as sort of a semantic tree — make sure you understand the fundamental principles, i.e., the trunk and big branches, before you get into the leaves/details or there is nothing for them to hang on to.”[12]

***

End Notes

[1] Aristotle, Physics 184a10–21

[2] Aristotle, Metaphysics 1013a14–15

[3] https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html

[4] Elon Musk, quoted by Tim Urban in “The Cook and the Chef: Musk’s Secret Sauce,” Wait But Why https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html

[5] Vance, Ashlee. Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future (p. 354)

[6] https://www.wired.com/2012/10/ff-elon-musk-qa/all/

[7] https://www.youtube.com/watch?v=L-s_3b5fRd8

[8] David Rowan, “How BuzzFeed mastered social sharing to become a media giant for a new era,” Wired.com. 2 January 2014. https://www.wired.co.uk/article/buzzfeed

[9] https://www.quora.com/What-does-Elon-Musk-mean-when-he-said-I-think-it%E2%80%99s-important-to-reason-from-first-principles-rather-than-by-analogy/answer/Bruce-Achterberg

[10] https://www.scientificamerican.com/article/new-estimate-boosts-the-human-brain-s-memory-capacity-10-fold/

[11] Breakpoint and Beyond: Mastering the Future Today, George Land

[12] https://www.reddit.com/r/IAmA/comments/2rgsan/i_am_elon_musk_ceocto_of_a_rocket_company_ama/cnfre0a/

What Are You Doing About It? Reaching Deep Fluency with Mental Models

The mental models approach is very intellectually appealing, almost seductive to a certain type of person. (It certainly is for us.)

The whole idea is to take the world's greatest, most useful ideas and make them work for you!

How hard can it be?

Nearly all of the models themselves are perfectly well understandable by the average well-educated knowledge worker, including all of you reading this piece. Ideas like Bayes' rule, multiplicative thinking, hindsight bias, and the bias from envy and jealousy are all obviously true and part of the reality we live in.

There's a bit of a problem we're seeing though: People are reading the stuff, enjoying it, agreeing with it…but not taking action. It's not becoming part of their standard repertoire.

Let's say you followed up on Bayesian thinking after reading our post on it — you spent some time soaking in Thomas Bayes' great wisdom on updating your understanding of the world incrementally and probabilistically rather than changing your mind in black-and-white. Great!

But a week later, what have you done with that knowledge? How has it actually impacted your life? If the honest answer is “It hasn't,” then haven't you really wasted your time?

Ironically, it's this habit of “going halfway” instead of “going all the way,” like Sisyphus constantly getting halfway up the mountain, which is the biggest waste of time!

See, the common reason why people don't truly “follow through” with all of this stuff is that they haven't raised their knowledge to a “deep fluency” — they're skimming the surface. They pick up bits and pieces — some heuristics or biases here, a little physics or biology there, and then call it a day and pull up Netflix. They get a little understanding, but not that much, and certainly no doing.

The better approach, if you actually care about making changes, is to imitate Charlie Munger, Charles Darwin, and Richard Feynman, and start raising your knowledge of the Big Ideas to a deep fluency, and then figuring out systems, processes, and mental tricks to implement them in your own life.

Let's work through an example.

***

Say you're just starting to explore all the wonderful literature on heuristics and biases and come across the idea of Confirmation Bias: The idea that once we've landed on an idea we really like, we tend to keep looking for further data to confirm our already-held notions rather than trying to disprove our idea.

This is common, widespread, and perfectly natural. We all do it. John Kenneth Galbraith put it best:

“In the choice between changing one's mind and proving there's no need to do so, most people get busy on the proof.”

Now, what most people do, the ones you're trying to outperform, is say, “Great idea! Thanks, Galbraith,” and then stop thinking about it.

Don't do that!

The next step would be to push a bit further, to get beyond the sound bite: What's the process that leads to confirmation bias? Why do I seek confirmatory information and in which contexts am I particularly susceptible? What other models are related to the confirmation bias? How do I solve the problem?

The answers are out there: They're in Daniel Kahneman and in Charlie Munger and in Elster. They're available by searching through Farnam Street.

The big question: How far do you go? A good question without a perfect answer. But the best test I can think of is to perform something like the Feynman technique, and to think about the chauffeur problem.

Can you explain it simply to an intelligent layperson, using vivid examples? Can you answer all the follow-ups? That's fluency. And you must be careful not to fool yourself, because in the wise words of Feynman, “…you are the easiest person to fool.”

While that's great work, you're not done yet. You have to make the rubber hit the road now. Something has to happen in your life and mind.

The way to do that is to come up with rules, systems, parables, and processes of your own, or to copy someone else's that are obviously sound.

In the case of Confirmation Bias, we have two wonderful models to copy, one from each of the Charlies — Darwin, and Munger.

Darwin had a rule, one we have written about before but will restate here: Make a note, immediately, if you come across a thought or idea that is contrary to something you currently believe.

As for Munger, he implemented a rule in his own life: “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”

Now we're getting somewhere! With the implementation of those two habits and some well-earned deep fluency, you can immediately, tomorrow, start improving the quality of your decision-making.

Sometimes when we get outside the heuristic/biases stuff, it's less obvious how to make the “rubber hit the road” — and that will be a constant challenge for you as you take this path.

But that's also the fun part! With every new idea and model you pick up, you also pick up the opportunity to synthesize for yourself a useful little parable to make it stick or a new habit that will help you use it. Over time, you'll come up with hundreds of them, and people might even look to you when they're having problems doing it themselves!

Look at Buffett and Munger — both guys are absolute machines, chock full of pithy little rules and stories they use in order to implement and recall what they've learned.

For example, Buffett discovered early on the manipulative psychology behind open-outcry auctions. What did he do? He made a rule to never go to one! That's how it's done.

Even if you can't come up with a great rule like that, you can figure out a way to use any new model or idea you learn. It just takes some creative thinking.

Sometimes it's just a little mental rule or story that sticks particularly well. (Recall one of the prime lessons from our series on memory: Salient, often used, well-associated, and important information sticks best.)

We did this very thing recently with Lee Kuan Yew's Rule. What a trite way to refer to the simple idea of asking if something actually works…attributing it to a Singaporean political leader!

But that's exactly the point. Give the thing a name and a life and, like clockwork, you'll start recalling it. The phrase “Lee Kuan Yew's Rule” actually appears in my head when I'm approaching some new system or ideology, and as soon as it does, I find myself backing away from ideology and towards pragmatism. Exactly as I'd hoped.

Your goal should be to create about a thousand of those little tools in your head, attached to a deep fluency in the material from which they came.

***

I can hear the objection coming. Who has time for this stuff?

You do. It's about making time for the things that really matter. And what could possibly matter more than upgrading your whole mental operating system? I solemnly promise that you're spending way more time right now making sub-optimal decisions and trying to deal with the fallout.

If you need help learning to manage your time right this second, check out our Productivity Seminar, one that's changed some people's lives entirely. The central idea is to become more thoughtful and deliberate with how you spend your hours. When you start doing that, you'll notice you do have an hour a day to spend on this Big Ideas stuff. It's worth the 59 bucks.

If you don't have 59 bucks, at least imitate Cal Newport and start scheduling your days and put an hour in there for “Getting better at making all of my decisions.”

Once you find that solid hour (or more), start using it in the way outlined above, and let the world's great knowledge actually start making an impact. Just do a little every day.

What you'll notice, over the weeks and months and years of doing this, is that your mind will really change! It has to! And with that, your life will change too. The only way to fail at improving your brain is by imitating Sisyphus, pushing the boulder halfway up, over and over.

Unless and until you really understand this, you'll continue spinning your wheels. So here's your call to action. Go get to it!

A Few Useful Mental Tools from Richard Feynman

We've covered the brilliant physicist Richard Feynman (1918-1988) many times here before. He was a genius. A true genius. But there have been many geniuses — physics has been fortunate to attract some of them — and few of them are as well known as Feynman. Why is Feynman so well known? It's likely because he had tremendous range outside of pure science, and although he won a Nobel Prize for his work in quantum electrodynamics, he's probably best known for other things, primarily his wonderful ability to explain and teach.

This ability was on display in a series of non-technical lectures in 1963, memorialized in a short book called The Meaning of It All: Thoughts of a Citizen-Scientist. The lectures are a wonderful example of how well Feynman's brain worked outside of physics, talking through basic reasoning and some of the problems of his day.

Particularly useful are a series of “tricks of the trade” he gives in a section called This Unscientific Age. These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day. They're wonderfully instructive. Let's check them out.

Mental Tools from Richard Feynman

Before we start, it's worth noting that Feynman takes pains to mention that not everything needs to be considered with scientific accuracy; there's no reason to apply scientific rigor to matters that aren't scientific. So let's start with a deep breath:

Now, that there are unscientific things is not my grief. That's a nice word. I mean, that is not what I am worrying about, that there are unscientific things. That something is unscientific is not bad; there is nothing the matter with it. It is just unscientific. And scientific is limited, of course, to those things that we can tell about by trial and error. For example, there is the absurdity of the young these days chanting things about purple people eaters and hound dogs, something that we cannot criticize at all if we belong to the old flat foot floogie and a floy floy or the music goes down and around. Sons of mothers who sang about “come, Josephine, in my flying machine,” which sounds just about as modern as “I'd like to get you on a slow boat to China.” So in life, in gaiety, in emotion, in human pleasures and pursuits, and in literature and so on, there is no need to be scientific, there is no reason to be scientific. One must relax and enjoy life. That is not the criticism. That is not the point.

As we enter the realm of “knowable” things in a scientific sense, the first trick has to do with deciding whether someone truly knows their stuff or is mimicking:

The first one has to do with whether a man knows what he is talking about, whether what he says has some basis or not. And my trick that I use is very easy. If you ask him intelligent questions—that is, penetrating, interested, honest, frank, direct questions on the subject, and no trick questions—then he quickly gets stuck. It is like a child asking naive questions. If you ask naive but relevant questions, then almost immediately the person doesn't know the answer, if he is an honest man. It is important to appreciate that.

And I think that I can illustrate one unscientific aspect of the world which would be probably very much better if it were more scientific. It has to do with politics. Suppose two politicians are running for president, and one goes through the farm section and is asked, “What are you going to do about the farm question?” And he knows right away— bang, bang, bang.

Now he goes to the next campaigner who comes through. “What are you going to do about the farm problem?” “Well, I don't know. I used to be a general, and I don't know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem. And it must be a hard problem. So the way that I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can't tell you ahead of time what conclusion, but I can give you some of the principles I'll try to use—not to make things difficult for individual farmers, if there are any special problems we will have to have some way to take care of them,” etc., etc., etc.

That's a wonderfully useful way to figure out whether someone is Max Planck or the chauffeur.

The second trick regards how to deal with uncertainty:

People say to me, “Well, how can you teach your children what is right and wrong if you don't know?” Because I'm pretty sure of what's right and wrong. I'm not absolutely sure; some experiences may change my mind. But I know what I would expect to teach them. But, of course, a child won't learn what you teach him.

I would like to mention a somewhat technical idea, but it's the way, you see, we have to understand how to handle uncertainty. How does something move from being almost certainly false to being almost certainly true? How does experience change? How do you handle the changes of your certainty with experience? And it's rather complicated, technically, but I'll give a rather simple, idealized example.

You have, we suppose, two theories about the way something is going to happen, which I will call “Theory A” and “Theory B.” Now it gets complicated. Theory A and Theory B. Before you make any observations, for some reason or other, that is, your past experiences and other observations and intuition and so on, suppose that you are very much more certain of Theory A than of Theory B—much more sure. But suppose that the thing that you are going to observe is a test. According to Theory A, nothing should happen. According to Theory B, it should turn blue. Well, you make the observation, and it turns sort of a greenish. Then you look at Theory A, and you say, “It's very unlikely,” and you turn to Theory B, and you say, “Well, it should have turned sort of blue, but it wasn't impossible that it should turn sort of greenish color.” So the result of this observation, then, is that Theory A is getting weaker, and Theory B is getting stronger. And if you continue to make more tests, then the odds on Theory B increase. Incidentally, it is not right to simply repeat the same test over and over and over and over, no matter how many times you look and it still looks greenish, you haven't made up your mind yet. But if you find a whole lot of other things that distinguish Theory A from Theory B that are different, then by accumulating a large number of these, the odds on Theory B increase.

Feynman is talking about Grey Thinking here, the ability to put things on a gradient from “probably true” to “probably false” and how we deal with that uncertainty. He isn't proposing a method of figuring out absolute, doctrinaire truth.

Another term for what he's proposing is Bayesian updating — starting with a priori odds, based on earlier understanding, and “updating” the odds of something based on what you learn thereafter. An extremely useful tool.
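
To make the updating mechanics concrete, here is a minimal sketch of Feynman's Theory A versus Theory B story in Python. The prior odds and the two likelihoods are invented numbers; the sketch only shows the odds form of Bayes' rule at work.

  # Odds form of Bayes' rule: posterior odds = prior odds x likelihood ratio.
  prior_odds_A = 10.0          # we start out much more confident in Theory A, say 10:1

  # The observation ("it turned sort of greenish") is judged very unlikely
  # under Theory A and merely somewhat unlikely under Theory B. Made-up values:
  p_obs_given_A = 0.02
  p_obs_given_B = 0.30

  posterior_odds_A = prior_odds_A * (p_obs_given_A / p_obs_given_B)
  print(f"odds on Theory A drop from {prior_odds_A:.0f}:1 to {posterior_odds_A:.2f}:1")

  # Each new, *different* test multiplies in another likelihood ratio, which is
  # why accumulating varied evidence, not repeating one test, is what moves the odds.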

Feynman's third trick is the realization that as we investigate whether something is true or not, new evidence and new methods of experimentation should show the effect getting stronger and stronger, not weaker. He uses an excellent example here by analyzing mental telepathy:

I give an example. A professor, I think somewhere in Virginia, has done a lot of experiments for a number of years on the subject of mental telepathy, the same kind of stuff as mind reading. In his early experiments the game was to have a set of cards with various designs on them (you probably know all this, because they sold the cards and people used to play this game), and you would guess whether it's a circle or a triangle and so on while someone else was thinking about it. You would sit and not see the card, and he would see the card and think about the card and you'd guess what it was. And in the beginning of these researches, he found very remarkable effects. He found people who would guess ten to fifteen of the cards correctly, when it should be on the average only five. More even than that. There were some who would come very close to a hundred percent in going through all the cards. Excellent mind readers.

A number of people pointed out a set of criticisms. One thing, for example, is that he didn't count all the cases that didn't work. And he just took the few that did, and then you can't do statistics anymore. And then there were a large number of apparent clues by which signals inadvertently, or advertently, were being transmitted from one to the other.

Various criticisms of the techniques and the statistical methods were made by people. The technique was therefore improved. The result was that, although five cards should be the average, it averaged about six and a half cards over a large number of tests. Never did he get anything like ten or fifteen or twenty-five cards. Therefore, the phenomenon is that the first experiments are wrong. The second experiments proved that the phenomenon observed in the first experiment was nonexistent. The fact that we have six and a half instead of five on the average now brings up a new possibility, that there is such a thing as mental telepathy, but at a much lower level. It's a different idea, because, if the thing was really there before, having improved the methods of experiment, the phenomenon would still be there. It would still be fifteen cards. Why is it down to six and a half? Because the technique improved. Now it still is that the six and a half is a little bit higher than the average of statistics, and various people criticized it more subtly and noticed a couple of other slight effects which might account for the results.

It turned out that people would get tired during the tests, according to the professor. The evidence showed that they were getting a little bit lower on the average number of agreements. Well, if you take out the cases that are low, the laws of statistics don't work, and the average is a little higher than the five, and so on. So if the man was tired, the last two or three were thrown away. Things of this nature were improved still further. The results were that mental telepathy still exists, but this time at 5.1 on the average, and therefore all the experiments which indicated 6.5 were false. Now what about the five? . . . Well, we can go on forever, but the point is that there are always errors in experiments that are subtle and unknown. But the reason that I do not believe that the researchers in mental telepathy have led to a demonstration of its existence is that as the techniques were improved, the phenomenon got weaker. In short, the later experiments in every case disproved all the results of the former experiments. If remembered that way, then you can appreciate the situation.

This echoes Feynman's dictum about not fooling oneself: We must refine our process for probing and experimenting if we're to get at real truth, always watching out for little troubles. Otherwise, we torture the world so that results fit our expectations. If we carefully refine and re-test and the effect gets weaker all the time, it's likely not true, or at least not of the magnitude originally hoped for.

The fourth trick is to ask the right question, which is not “Could this be the case?” but “Is this actually the case?” Many get so caught up with the former that they forget to ask the latter:

That brings me to the fourth kind of attitude toward ideas, and that is that the problem is not what is possible. That's not the problem. The problem is what is probable, what is happening. It does no good to demonstrate again and again that you can't disprove that this could be a flying saucer. We have to guess ahead of time whether we have to worry about the Martian invasion. We have to make a judgment about whether it is a flying saucer, whether it's reasonable, whether it's likely. And we do that on the basis of a lot more experience than whether it's just possible, because the number of things that are possible is not fully appreciated by the average individual. And it is also not clear, then, to them how many things that are possible must not be happening. That it's impossible that everything that is possible is happening. And there is too much variety, so most likely anything that you think of that is possible isn't true. In fact that's a general principle in physics theories: no matter what a guy thinks of, it's almost always false. So there have been five or ten theories that have been right in the history of physics, and those are the ones we want. But that doesn't mean that everything's false. We'll find out.

The fifth trick is a very, very common one, even 50 years after Feynman pointed it out. You cannot judge the probability of something happening after it's already happened. That's cherry-picking. You have to run the experiment forward for it to mean anything:

I now turn to another kind of principle or idea, and that is that there is no sense in calculating the probability or the chance that something happens after it happens. A lot of scientists don't even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it's a general principle of psychologists that in these tests they arrange so that the odds that the things that happen happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out.

This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let's say. I can't remember exactly. He had to do a great number of tests, because, of course, they could go to the right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And it's hard to do, and he did his number. Then he found that it didn't work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn't count.”

He said, “Why?” I said, “Because it doesn't make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”

For example, I had the most remarkable experience this evening. While coming in here, I saw license plate ANZ 912. Calculate for me, please, the odds that of all the license plates in the state of Washington I should happen to see ANZ 912. Well, it's a ridiculous thing. And, in the same way, what he must do is this: The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn't work.
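
A small simulation makes the point vivid. The sketch below, in Python, with patterns and thresholds chosen arbitrarily for illustration, runs many honest experiments of twenty fair left/right choices and checks a handful of "peculiarities" an observer might notice only after staring at the data. Each pattern on its own is rare, but the chance that something eye-catching turns up is several times larger, which is exactly why the hypothesis the data suggested has to be tested on a fresh experiment.

  import random

  def peculiarities(seq):
      """Patterns a person might notice only after looking at the results."""
      rights = seq.count("R")
      switches = sum(a != b for a, b in zip(seq, seq[1:]))
      return {
          "strong right bias (>= 15 of 20)": rights >= 15,
          "strong left bias  (<= 5 of 20)": rights <= 5,
          "near-perfect alternation": switches >= 15,
          "very streaky (<= 4 switches)": switches <= 4,
          "first 7 all the same": len(set(seq[:7])) == 1,
          "last 7 all the same": len(set(seq[-7:])) == 1,
      }

  random.seed(1)
  trials = 50_000
  fired = {name: 0 for name in peculiarities(list("RL" * 10))}
  fired_any = 0

  for _ in range(trials):
      seq = [random.choice("RL") for _ in range(20)]
      flags = peculiarities(seq)
      fired_any += any(flags.values())
      for name, hit in flags.items():
          fired[name] += hit

  for name, count in fired.items():
      print(f"{name:32s} {count / trials:6.3f}")       # each pattern on its own: rare
  print(f"{'SOME peculiarity or other':32s} {fired_any / trials:6.3f}")  # noticing *something*: far more common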

The sixth trick is one that's familiar to almost all of us, yet almost all of us forget about every day: The plural of anecdote is not data. We must use proper statistical sampling to know whether or not we know what we're talking about:

The next kind of technique that's involved is statistical sampling. I referred to that idea when I said they tried to arrange things so that they had one in twenty odds. The whole subject of statistical sampling is somewhat mathematical, and I won't go into the details. The general idea is kind of obvious. If you want to know how many people are taller than six feet tall, then you just pick people out at random, and you see that maybe forty of them are more than six feet so you guess that maybe everybody is. Sounds stupid.

Well, it is and it isn't. If you pick the hundred out by seeing which ones come through a low door, you're going to get it wrong. If you pick the hundred out by looking at your friends you'll get it wrong because they're all in one place in the country. But if you pick out a way that as far as anybody can figure out has no connection with their height at all, then if you find forty out of a hundred, then, in a hundred million there will be more or less forty million. How much more or how much less can be worked out quite accurately. In fact, it turns out that to be more or less correct to 1 percent, you have to have 10,000 samples. People don't realize how difficult it is to get the accuracy high. For only 1 or 2 percent you need 10,000 tries.
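
The 10,000-sample figure falls straight out of the standard error of a sample proportion. Here is a minimal sketch in Python, using the roughly forty-in-a-hundred fraction from the example and the usual normal approximation, showing how the margin of error shrinks with the square root of the sample size:

  import math

  def margin_95(p, n):
      """Approximate 95% margin of error for a sample proportion (normal approximation)."""
      return 1.96 * math.sqrt(p * (1 - p) / n)

  p = 0.40   # the fraction found to be over six feet tall in the sample
  for n in (100, 1_000, 10_000):
      print(f"n = {n:6,d}  ->  estimate {p:.0%} +/- {margin_95(p, n):.1%}")
  # Quadrupling the precision takes sixteen times the samples; pinning the
  # answer down to about one percent takes on the order of 10,000 people.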

The last trick is to realize that many errors people make simply come from lack of information. They don't even know they're missing the tools they need. This can be a very tough one to guard against — it's hard to know when you're missing information that would change your mind — but Feynman gives the simple case of astrology to prove the point:

Now, looking at the troubles that we have with all the unscientific and peculiar things in the world, there are a number of them which cannot be associated with difficulties in how to think, I think, but are just due to some lack of information. In particular, there are believers in astrology, of which, no doubt, there are a number here. Astrologists say that there are days when it's better to go to the dentist than other days. There are days when it's better to fly in an airplane, for you, if you are born on such a day and such and such an hour. And it's all calculated by very careful rules in terms of the position of the stars. If it were true it would be very interesting. Insurance people would be very interested to change the insurance rates on people if they follow the astrological rules, because they have a better chance when they are in the airplane. Tests to determine whether people who go on the day that they are not supposed to go are worse off or not have never been made by the astrologers. The question of whether it's a good day for business or a bad day for business has never been established. Now what of it? Maybe it's still true, yes.

On the other hand, there's an awful lot of information that indicates that it isn't true. Because we have a lot of knowledge about how things work, what people are, what the world is, what those stars are, what the planets are that you are looking at, what makes them go around more or less, where they're going to be in the next 2000 years is completely known. They don't have to look up to find out where it is. And furthermore, if you look very carefully at the different astrologers they don't agree with each other, so what are you going to do? Disbelieve it. There's no evidence at all for it. It's pure nonsense.

The only way you can believe it is to have a general lack of information about the stars and the world and what the rest of the things look like. If such a phenomenon existed it would be most remarkable, in the face of all the other phenomena that exist, and unless someone can demonstrate it to you with a real experiment, with a real test, took people who believe and people who didn't believe and made a test, and so on, then there's no point in listening to them.

***

Still Interested? Check out the (short) book: The Meaning of it All: Thoughts of a Citizen-Scientist.

Richard Feynman on Teaching Math to Kids and the Lessons of Knowledge

Legendary scientist Richard Feynman (1918-1988) was famous for his penetrating insight and clarity of thought. He was famous not only for the work that garnered him a Nobel Prize, but also for the lucidity of his explanations of ordinary things, such as why trains stay on the tracks as they go around a curve, how we look for new laws of science, and how rubber bands work.

Feynman knew the difference between knowing the name of something and knowing something. And he was often prone to telling the emperor he had no clothes, as this illuminating example from James Gleick's book Genius: The Life and Science of Richard Feynman shows.

Educating his children made him think carefully about how the elements of teaching should be employed. By the time his son Carl was four, Feynman was “actively lobbying against a first-grade science book proposed for California schools.”

It began with pictures of a mechanical wind-up dog, a real dog, and a motorcycle, and for each the same question: “What makes it move?” The proposed answer—“Energy makes it move”—enraged him.

That was tautology, he argued—empty definition. Feynman, having made a career of understanding the deep abstractions of energy, said it would be better to begin a science course by taking apart a toy dog, revealing the cleverness of the gears and ratchets. To tell a first-grader that “energy makes it move” would be no more helpful, he said, than saying “God makes it move” or “moveability makes it move.”

Feynman proposed a simple test for whether one is teaching ideas or mere definitions: “Without using the new word which you have just learned, try to rephrase what you have just learned in your own language. Without using the word energy, tell me what you know now about the dog’s motion.”

The other standard explanations were equally horrible: gravity makes it fall, or friction makes it wear out. You didn't get a pass on learning just because you were a first-grader. Feynman's explanations not only captured the attention of his audience—from Nobel winners to first-graders—but also offered true knowledge. “Shoe leather wears out because it rubs against the sidewalk and the little notches and bumps on the sidewalk grab pieces and pull them off.” That is knowledge. “To simply say, ‘It is because of friction,’ is sad, because it’s not science.”

Richard Feynman on Teaching

Choosing Textbooks for Grade Schools

In 1964 Feynman made the rare decision to serve on a public commission for choosing mathematics textbooks for California's grade schools. As Gleick describes it:

Traditionally this commissionership was a sinecure that brought various small perquisites under the table from textbook publishers. Few commissioners—as Feynman discovered—read many textbooks, but he determined to read them all, and had scores of them delivered to his house.

This was the era of new math in children's textbooks: introducing high-level concepts, such as set theory and nondecimal number systems, into grade school.

Feynman was skeptical of this approach, but rather than simply letting it go, he popped the balloon.

He argued to his fellow commissioners that sets, as presented in the reformers’ textbooks, were an example of the most insidious pedantry: new definitions for the sake of definition, a perfect case of introducing words without introducing ideas.

A proposed primer instructed first-graders: “Find out if the set of the lollipops is equal in number to the set of the girls.”

To Feynman this was a disease. It confused without adding precision to the normal sentence: “Find out if there are just enough lollipops for the girls.”

According to Feynman, specialized language should wait until it is needed. (In case you're wondering, he argued the peculiar language of set theory is rarely, if ever, needed—only in understanding different degrees of infinity—which certainly wasn't necessary at a grade-school level.)

Feynman convincingly argued this was knowledge of words without actual knowledge. He wrote:

It is an example of the use of words, new definitions of new words, but in this particular case a most extreme example because no facts whatever are given…. It will perhaps surprise most people who have studied this textbook to discover that the symbol ∪ or ∩ representing union and intersection of sets … all the elaborate notation for sets that is given in these books, almost never appear in any writings in theoretical physics, in engineering, business, arithmetic, computer design, or other places where mathematics is being used.

The point became philosophical.

It was crucial, he argued, to distinguish clear language from precise language. The textbooks placed a new emphasis on precise language: distinguishing “number” from “numeral,” for example, and separating the symbol from the real object in the modern critical fashion—pilpul for schoolchildren, it seemed to Feynman. He objected to a book that tried to teach a distinction between a ball and a picture of a ball—the book insisting on such language as “color the picture of the ball red.”

“I doubt that any child would make an error in this particular direction,” Feynman said, adding:

As a matter of fact, it is impossible to be precise … whereas before there was no difficulty. The picture of a ball includes a circle and includes a background. Should we color the entire square area in which the ball image appears all red? … Precision has only been pedantically increased in one particular corner when there was originally no doubt and no difficulty in the idea.

In the real world, absolute precision can never be reached, and the search for degrees of precision that are not possible (however desirable) causes a lot of folly.

Feynman had his own ideas for teaching children mathematics.

***

Process vs. Outcome

Feynman proposed that first-graders learn to add and subtract more or less the way he worked out complicated integrals—free to select any method that seems suitable for the problem at hand. A modern-sounding notion held that the answer isn’t what matters, so long as you use the right method. To Feynman no educational philosophy could have been more wrong. The answer is all that does matter, he said.

He listed some of the techniques available to a child making the transition from being able to count to being able to add. A child can combine two groups into one and simply count the combined group: to add 5 ducks and 3 ducks, one counts 8 ducks. The child can use fingers or count mentally: 6, 7, 8. One can memorize the standard combinations. Larger numbers can be handled by making piles—one groups pennies into fives, for example—and counting the piles. One can mark numbers on a line and count off the spaces—a method that becomes useful, Feynman noted, in understanding measurement and fractions. One can write larger numbers in columns and carry sums larger than 10.

To Feynman the standard texts were flawed. The problem

29
+3

was considered a third-grade problem because it involved the concept of carrying. However, Feynman pointed out that most first-graders could easily solve this problem by counting 30, 31, 32.
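None of this is Feynman's code, of course, but a minimal sketch (in Python, purely as an illustration) can make the point concrete: the strategies above are genuinely different procedures, and each one reaches the same answer for 29 + 3.

```python
# Illustrative sketch only: three of the addition strategies described above,
# each applied to the 29 + 3 problem Feynman mentions.

def add_by_combining(a, b):
    """Combine two groups into one pile and count the whole pile."""
    pile = ["x"] * a + ["x"] * b
    return len(pile)

def add_by_counting_on(a, b):
    """Start at the first number and count upward b times: 30, 31, 32."""
    total = a
    for _ in range(b):
        total += 1
    return total

def add_by_columns(a, b):
    """Write the numbers in columns and carry sums larger than 10."""
    total, carry, place = 0, 0, 1
    while a or b or carry:
        digit = a % 10 + b % 10 + carry
        carry, digit = divmod(digit, 10)
        total += digit * place
        a, b, place = a // 10, b // 10, place * 10
    return total

for method in (add_by_combining, add_by_counting_on, add_by_columns):
    assert method(29, 3) == 32  # every route arrives at the same answer
```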

He proposed that kids be given simple algebra problems (2 times what plus 3 is 7) and be encouraged to solve them through the scientific method, which is tantamount to trial and error. This, he argued, is what real scientists do.
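Again as a hedged sketch rather than anything Feynman wrote, here is what that guess-and-check approach looks like for the example problem, "2 times what plus 3 is 7": propose a candidate, test it against the condition, and move on if it fails.

```python
# Guess-and-check ("the scientific method" as trial and error) for 2*x + 3 == 7.

def solve_by_trial(check, candidates):
    """Return the first candidate that satisfies the check, or None."""
    for guess in candidates:
        if check(guess):
            return guess
    return None

answer = solve_by_trial(lambda x: 2 * x + 3 == 7, range(10))
print(answer)  # prints 2
```

The procedure mirrors the loop Feynman had in mind: hypothesize an answer, test it, and discard it if the test fails.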

“We must,” Feynman said, “remove the rigidity of thought.” He continued, “We must leave freedom for the mind to wander about in trying to solve the problems…. The successful user of mathematics is practically an inventor of new ways of obtaining answers in given situations. Even if the ways are well known, it is usually much easier for him to invent his own way—a new way or an old way—than it is to try to find it by looking it up.”

It was better in the end to have a bag of tricks at your disposal that could be used to solve problems than one orthodox method. Indeed, part of Feynman's genius was his ability to solve problems that baffled others precisely because they were using the standard method to try to solve them. He would come along and approach the problem with a different tool, which often led to simple and beautiful solutions.

***

If you give some thought to how Farnam Street helps you, one of the ways is by adding to your bag of tricks so that you can pull them out when you need them to solve problems. We call these tricks mental models and they work kinda like lego — interconnecting and reinforcing one another. The more pieces you have, the more things you can build.

Complement this post with Feynman's excellent advice on how to learn anything.

Richard Feynman on Refusing an Honorary Degree, Being Driven, and Understanding his Circle of Competence

Perfectly Reasonable Deviations From the Beaten Track is a wonderful collection of letters written to and from the physicist and professor Richard Feynman—champion of understanding, explainer, exemplar of curiosity, lover of beauty, knowledge seeker, asker of questions—during his life and career in science.

The book explores the timeless qualities that we cherish in Feynman. Let's dive a little deeper.

Driven

Feynman was precocious; it's clear that even early in his career, he knew he had the intelligence and drive to make an impact in science. At the age of 24 he had the foresight to mention, in a letter to his parents defying their wish that he not marry a dying woman (his fiancée Arlene had tuberculosis, a deadly diagnosis in those days), that:

I have other desires and aims in the world. One of them is to contribute to physics as much as I can. This, in my mind, is of even more importance than my love for Arlene.

He worked hard at that goal, and he showed signs of enjoying the process. In letters written during his time in academia and on the atomic bomb project, Feynman writes:

I'm hitting some mathematical difficulties which I will either surmount, walk around, or go a different way—all of which consumes all of my time—but I like to do (it) very much and am very happy indeed. I have never thought so much so steadily about one problem—so if I get nowhere I really will be very disturbed—However, I have gotten somewhere, quite far—to Prof. Wheeler's satisfaction. However the problem is not at completion, although I'm just beginning to see how far it is to the end and how we might get there (although aforementioned mathematical difficulties loom ahead)—SOME FUN!

This week has been unusual. There is an especially important problem to be worked out on the project, and it's a lot of fun so I am working quite hard on it. I get up at about 10:30 AM after a good night's rest, and go to work until 12:30 or 1 AM the next morning when I go back to bed. Naturally I take off about 2 hrs for my two meals. I don't eat any breakfast, but I eat a midnight snack before I go to bed. It's been that way for 4 or 5 days.

We see this frequently in genius-level contributors doing intensive work. It is not so much that they find the work easy, but they do find pleasure in the struggle. (There is actually another book about Feynman called “The Pleasure of Finding Things Out.”) Warren Buffett has said many times that he tap dances to work every day, and those who have spent time with him have corroborated the story. It's not a lie. Charlie Munger has mentioned that one of the main reasons for Berkshire's success is the fact that they enjoy the work.

Feynman is an interesting character though; for a super-genius scientist, he comes off as unusually romantic with passages like the following one, in a letter to his then-wife, Arlene:

There is a third thing you will be interested in. I love you. You are a strong and beautiful woman. You are not always as strong as other times but it rises and falls like the flow of a mountain stream. I feel I am a reservoir for your strength — without you I would be empty and weak like I was before I knew you — but your moments of strength make me strong and thus I am able to comfort you with your strength when your steam is low.

And long-time readers will remember the heart-breaking letter he wrote after she had passed away.

Honor and Honesty 

As the book rolls along and Feynman gets older and more famous, he is regularly asked to be honored. Generally, as most who have studied Feynman would know, he showed considerable discomfort with the process, which valued exclusivity and puffery over knowledge. One letter is typical of the middle-aged Feynman:

Dear George,

Yours is the first honorary degree that I have been offered, and I thank you for considering me for such an honor.

However, I remember the work I did to get a real degree at Princeton and the guys on the same platform receiving honorary degrees without work—and felt an “honorary degree” was a debasement of the idea of a “degree which confirms certain work has been accomplished.” It is like giving an “honorary electrician's license.” I swore then that if by chance I was ever offered one, I would not accept it.

Now at last (twenty-five years later) you have given me a chance to carry out my vow.

So thank you, but I do not wish to accept the honorary degrees you offered.

Sincerely yours,

Richard P. Feynman

He also offers his usual wit upon resigning from the National Academy of Sciences:

Dear Prof. Handler:

My request for resignation from the National Academy of Sciences is based entirely on personal psychological quirks. It represents in no way any implied or explicit criticism of the Academy, other than those characteristics that flow from the fact that most of the membership consider their installation as a significant honor.

Sincerely yours,
Richard P. Feynman

In fact, Feynman constantly displayed his tendency towards intellectual honesty. He understood his circle of competence. Several letters scattered throughout his life show him essentially throwing up his hands and saying “I don't know,” and he took pride in doing so. His general philosophy towards ignorance and learning was summed up in a statement he made in 1963 that “I feel a responsibility as a scientist who knows the great value of a satisfactory philosophy of ignorance, and the progress made possible by such a philosophy…that doubt is not to be feared, but it is to be welcomed…”

The following letter was typical of his lack of intellectual arrogance, this one coming in response to something he'd written about teaching kids math in his younger years:

Dear Mrs. Cochran:

As I get more experience I realize that I know nothing whatsoever as to how to teach children arithmetic. I did write some things before I reached my present state of wisdom. Perhaps the references you heard came from the article which I enclose.

At present, however, I do not know whether I agree with my past self or not.

Wishy-washy,
Richard P. Feynman

He does it again here, opening a reply to a highly critical letter about a TV appearance with the following:

Dear Mr. Rogers,

Thank you for your letter about my KNXT interview. You are quite right that I am very ignorant about smog and many other things, including the use of Finest English.

I won the Nobel Prize for work I did in physics trying to uncover the laws of nature. The only thing I really know very much about are these laws….

***

In the end, Feynman's greatest strength, outside of his immense scientific talent, was his basic philosophy on life. In 1954, Feynman wrote with tenderness to his mother:

Wealth is not happiness nor is swimming pools and villas. Nor is great work alone reward, or fame. Foreign places visited themselves give nothing. It is only you who bring to the places your heart, or in your great work feeling, or in your large house place. If you do this there is happiness.

Check out Perfectly Reasonable Deviations From the Beaten Track, and learn more about life and learning from the best.

Cargo Cult Science: Richard Feynman On Believing What Isn’t True

“The first principle is that you must not fool yourself—and you are the easiest person to fool.”

— Richard Feynman

Richard Feynman (1918-1988) has long been one of my favorites — for both his wisdom and heart.

Reproduced below you can find the entirety of his 1974 commencement address at Caltech entitled Cargo Cult Science.

The entire speech requires about 10 minutes to read, which is time well invested if you ask me. If you're pressed for time, however, there are two sections I wish to draw to your attention.

In the South Seas there is a Cargo Cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he’s the controller—and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.

You're probably chuckling at this point. Yet many of us are no better. This is all around us. Thinking is hard, and we fool ourselves in part because fooling ourselves is easy. That's Feynman's point.

The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.

Your job is to find the current cargo cults.

When we start a project without determining what success looks like … when we mistake the map for the territory … when we look at outcomes without looking at process … when we blindly copy what others have done … when we confuse correlation and causation, we find ourselves on the runway.

***

Cargo Cult Science

During the Middle Ages there were all kinds of crazy ideas, such as that a piece of rhinoceros horn would increase potency. (Another crazy idea of the Middle Ages is these hats we have on today—which is too loose in my case.) Then a method was discovered for separating the ideas—which was to try one to see if it worked, and if it didn’t work, to eliminate it. This method became organized, of course, into science. And it developed very well, so that we are now in the scientific age. It is such a scientific age, in fact, that we have difficulty in understanding how witch doctors could ever have existed, when nothing that they proposed ever really worked—or very little of it did.

But even today I meet lots of people who sooner or later get me into a conversation about UFOs, or astrology, or some form of mysticism, expanded consciousness, new types of awareness, ESP, and so forth. And I’ve concluded that it’s not a scientific world.

Most people believe so many wonderful things that I decided to investigate why they did. And what has been referred to as my curiosity for investigation has landed me in a difficulty where I found so much junk to talk about that I can’t do it in this talk. I’m overwhelmed. First I started out by investigating various ideas of mysticism, and mystic experiences. I went into isolation tanks (they’re dark and quiet and you float in Epsom salts) and got many hours of hallucinations, so I know something about that. Then I went to Esalen, which is a hotbed of this kind of thought (it’s a wonderful place; you should go visit there). Then I became overwhelmed. I didn’t realize how much there was.

I was sitting, for example, in a hot bath and there’s another guy and a girl in the bath. He says to the girl, “I’m learning massage and I wonder if I could practice on you?” She says OK, so she gets up on a table and he starts off on her foot—working on her big toe and pushing it around. Then he turns to what is apparently his instructor, and says, “I feel a kind of dent. Is that the pituitary?” And she says, “No, that’s not the way it feels.” I say, “You’re a hell of a long way from the pituitary, man.” And they both looked at me—I had blown my cover, you see—and she said, “It’s reflexology.” So I closed my eyes and appeared to be meditating.

That’s just an example of the kind of things that overwhelm me. I also looked into extrasensory perception and PSI phenomena, and the latest craze there was Uri Geller, a man who is supposed to be able to bend keys by rubbing them with his finger. So I went to his hotel room, on his invitation, to see a demonstration of both mind reading and bending keys. He didn’t do any mind reading that succeeded; nobody can read my mind, I guess. And my boy held a key and Geller rubbed it, and nothing happened. Then he told us it works better under water, and so you can picture all of us standing in the bathroom with the water turned on and the key under it, and him rubbing the key with his finger. Nothing happened. So I was unable to investigate that phenomenon.

But then I began to think, what else is there that we believe? (And I thought then about the witch doctors, and how easy it would have been to check on them by noticing that nothing really worked.) So I found things that even more people believe, such as that we have some knowledge of how to educate. There are big schools of reading methods and mathematics methods, and so forth, but if you notice, you’ll see the reading scores keep going down—or hardly going up—in spite of the fact that we continually use these same people to improve the methods. There’s a witch doctor remedy that doesn’t work. It ought to be looked into: how do they know that their method should work? Another example is how to treat criminals. We obviously have made no progress—lots of theory, but no progress—in decreasing the amount of crime by the method that we use to handle criminals.

Yet these things are said to be scientific. We study them. And I think ordinary people with commonsense ideas are intimidated by this pseudoscience. A teacher who has some good idea of how to teach her children to read is forced by the school system to do it some other way—or is even fooled by the school system into thinking that her method is not necessarily a good one. Or a parent of bad boys, after disciplining them in one way or another, feels guilty for the rest of her life because she didn’t do “the right thing,” according to the experts.

So we really ought to look into theories that don’t work, and science that isn’t science.

I tried to find a principle for discovering more of these kinds of things, and came up with the following system. Any time you find yourself in a conversation at a cocktail party—in which you do not feel uncomfortable that the hostess might come around and say, “Why are you fellows talking shop?’’ or that your wife will come around and say, “Why are you flirting again?”—then you can be sure you are talking about something about which nobody knows anything.

Using this method, I discovered a few more topics that I had forgotten—among them the efficacy of various forms of psychotherapy. So I began to investigate through the library, and so on, and I have so much to tell you that I can’t do it at all. I will have to limit myself to just a few little things. I’ll concentrate on the things more people believe in. Maybe I will give a series of speeches next year on all these subjects. It will take a long time.

I think the educational and psychological studies I mentioned are examples of what I would like to call Cargo Cult Science. In the South Seas there is a Cargo Cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he’s the controller—and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.

Now it behooves me, of course, to tell you what they’re missing. But it would be just about as difficult to explain to the South Sea Islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones. But there is one feature I notice that is generally missing in Cargo Cult Science. That is the idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.

The easiest way to explain this idea is to contrast it, for example, with advertising. Last night I heard that Wesson Oil doesn’t soak through food. Well, that’s true. It’s not dishonest; but the thing I’m talking about is not just a matter of not being dishonest, it’s a matter of scientific integrity, which is another level. The fact that should be added to that advertising statement is that no oils soak through food, if operated at a certain temperature. If operated at another temperature, they all will—including Wesson Oil. So it’s the implication which has been conveyed, not the fact, which is true, and the difference is what we have to deal with.

We’ve learned from experience that the truth will out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in Cargo Cult Science.

A great deal of their difficulty is, of course, the difficulty of the subject and the inapplicability of the scientific method to the subject. Nevertheless, it should be remarked that this is not the only difficulty. That’s why the planes don’t land—but they don’t land.

We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.

But this long history of learning how to not fool ourselves—of having utter scientific integrity—is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.

The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.

I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I’m not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to do when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.

For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of this work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing—and if they don’t want to support you under those circumstances, then that’s their decision.

One example of the principle is this: If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish both kinds of result. For example—let’s take advertising again—suppose some particular cigarette has some particular property, like low nicotine. It’s published widely by the company that this means it is good for you—they don’t say, for instance, that the tars are a different proportion, or that something else is the matter with the cigarette. In other words, publication probability depends upon the answer. That should not be done.

I say that’s also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state. If you don’t publish such a result, it seems to me you’re not giving scientific advice. You’re being used. If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don’t publish it at all. That’s not giving scientific advice.

Other kinds of errors are more characteristic of poor science. When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this—I don’t remember it in detail, but it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do A. So her proposal was to do the experiment under circumstances Y and see if they still did A.

I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person—to do it under condition X to see if she could also get result A—and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.

She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1935 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happens.

Nowadays there’s a certain danger of the same thing happening, even in the famous field of physics. I was shocked to hear of an experiment done at the big accelerator at the National Accelerator Laboratory, where a person used deuterium. In order to compare his heavy hydrogen results to what might happen to light hydrogen he had to use data from someone else’s experiment on light hydrogen, which was done on different apparatus. When asked he said it was because he couldn’t get time on the program (because there’s so little time and it’s such expensive apparatus) to do the experiment with light hydrogen on this apparatus because there wouldn’t be any new result. And so the men in charge of programs at NAL are so anxious for new results, in order to get more money to keep the thing going for public relations purposes, they are destroying—possibly—the value of the experiments themselves, which is the whole purpose of the thing. It is often hard for the experimenters there to complete their work as their scientific integrity demands.

All experiments in psychology are not of this type, however. For example, there have been many experiments running rats through all kinds of mazes, and so on—with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and, still the rats could tell.

He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

Now, from a scientific standpoint, that is an A‑Number‑1 experiment. That is the experiment that makes rat‑running experiments sensible, because it uncovers the clues that the rat is really using—not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat‑running.

I looked into the subsequent history of this research. The subsequent experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic of Cargo Cult Science.

Another example is the ESP experiments of Mr. Rhine, and other people. As various people have made criticisms—and they themselves have made criticisms of their own experiments—they improve the techniques so that the effects are smaller, and smaller, and smaller until they gradually disappear. All the parapsychologists are looking for some experiment that can be repeated—that you can do again and get the same effect—statistically, even. They run a million rats—no, it’s people this time—they do a lot of things and get a certain statistical effect. Next time they try it they don’t get it any more. And now you find a man saying that it is an irrelevant demand to expect a repeatable experiment. This is science?

This man also speaks about a new institution, in a talk in which he was resigning as Director of the Institute of Parapsychology. And, in telling people what to do next, he says that one of the things they have to do is be sure they only train students who have shown their ability to get PSI results to an acceptable extent—not to waste their time on those ambitious and interested students who get only chance results. It is very dangerous to have such a policy in teaching—to teach students only how to get certain results, rather than how to do an experiment with scientific integrity.

So I wish to you—I have no more time, so I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom. May I also give you one last bit of advice: Never say that you’ll give a talk unless you know clearly what you’re going to talk about and more or less what you’re going to say.