
Half Life: The Decay of Knowledge and What to Do About It

Understanding the concept of a half-life will change what you read and how you invest your time. It will explain why our careers are increasingly specialized and offer a look into how we can compete more effectively in a very crowded world.

The Basics

A half-life is the time taken for something to halve its quantity. The term is most often used in the context of radioactive decay, which occurs when unstable atomic nuclei lose energy by emitting radiation. Twenty-nine elements are known to be capable of undergoing this process. Information also has a half-life, as do drugs, marketing campaigns, and all sorts of other things. We see the concept in any area where the quantity or strength of something decreases over time.

Radioactive decay is random, and measured half-lives are based on the most probable rate. We know that a nucleus will decay at some point; we just cannot predict when. It could be anywhere between instantaneous and the total age of the universe. Although scientists have measured half-lives for different isotopes, the moment at which any individual nucleus decays is entirely random.

Half-lives vary tremendously between isotopes. For example, carbon-14 has a half-life of about 5,730 years, while carbon-12, the most common isotope of carbon, is stable and does not decay at all; that stability is part of why carbon can serve as a building block of living organisms. As the example shows, different isotopes of the same element can have very different half-lives.
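The arithmetic behind any half-life is plain exponential decay: after an elapsed time t, the fraction remaining is (1/2)^(t / half-life). Here is a minimal sketch in Python, using carbon-14's roughly 5,730-year half-life for illustration:

```python
def fraction_remaining(elapsed, half_life):
    """Fraction of a decaying quantity left after `elapsed` time units."""
    return 0.5 ** (elapsed / half_life)

# Carbon-14's half-life is roughly 5,730 years.
for years in (5_730, 11_460, 57_300):
    print(f"after {years:,} years, {fraction_remaining(years, 5_730):.4%} remains")
```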

Three main types of nuclear decay have been identified: alpha, beta, and gamma. Alpha decay occurs when a nucleus splits into two parts: a helium nucleus and the remainder of the original nucleus. Beta decay occurs when a neutron in the nucleus changes into a proton, turning the atom into a different element, such as when potassium decays into calcium. Beta decay also releases an electron (the beta particle itself) and an antineutrino, a particle with virtually no mass. If a nucleus emits radiation without experiencing a change in its composition, it is undergoing gamma decay. Gamma radiation carries an enormous amount of energy.

The Discovery of Half-Lives

The discovery of half-lives (and alpha and beta radiation) is credited to Ernest Rutherford, one of the most influential physicists of his time. Rutherford was at the forefront of this major discovery when he worked with physicist Joseph John Thomson on complementary experiments leading to the discovery of the electron. Rutherford recognized the potential of what he was observing and began researching radioactivity. Two years later, he identified the distinction between alpha and beta rays. This led to his discovery of half-lives, when he noticed that samples of radioactive materials took the same amount of time to decay by half. By 1902, Rutherford and his collaborators had a coherent theory of radioactive decay (which they called “atomic disintegration”). They demonstrated that radioactive decay enabled one element to turn into another — research which would earn Rutherford a Nobel Prize. A year later, he spotted the missing piece in the work of the chemist Paul Villard and named the third type of radiation gamma.

Half-lives are based on probabilistic thinking. If the half-life of an element is seven days, it is most probable that half of the atoms in a sample will have decayed in that time. For a large number of atoms, we can expect half-lives to be fairly consistent. It’s important to note that radioactive decay depends on the isotope itself, not on the quantity of it. By contrast, in other situations, the half-life may vary depending on the amount of material. For example, the half-life of a chemical someone ingests might depend on the quantity.
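One way to see how individually random decays still add up to a predictable half-life is to simulate a large sample in which each atom has the same small chance of decaying in each time step. This is only an illustrative sketch; the seven-day half-life is the hypothetical figure from the paragraph above.

```python
import random

HALF_LIFE_DAYS = 7                                  # hypothetical isotope from the text
p_decay_per_day = 1 - 0.5 ** (1 / HALF_LIFE_DAYS)   # daily decay probability per atom

atoms = 100_000
for _day in range(HALF_LIFE_DAYS):
    # each surviving atom decays independently with the same probability
    atoms = sum(1 for _ in range(atoms) if random.random() > p_decay_per_day)

print(f"{atoms} of 100,000 atoms survive after {HALF_LIFE_DAYS} days (about half, as expected)")
```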

In biology, a half-life is the time taken for a substance to lose half its effects. The most obvious instance is drugs; the half-life is the time it takes for their effect to halve, or for half of the substance to leave the body. The half-life of caffeine is around 6 hours, but (as with most biological half-lives) numerous factors can alter that number. People with compromised liver function or certain genes will take longer to metabolize caffeine. Consumption of grapefruit juice has been shown in some studies to slow caffeine metabolism. It takes around 24 hours for a dose of caffeine to mostly clear the body.
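Here is the same decay arithmetic applied to caffeine. The 6-hour half-life is the rough figure quoted above, and the 200 mg dose is just an illustrative cup of strong coffee:

```python
half_life_hours = 6      # approximate figure from the text; varies from person to person
dose_mg = 200            # hypothetical dose of caffeine

for hours in (6, 12, 18, 24):
    remaining = dose_mg * 0.5 ** (hours / half_life_hours)
    print(f"{hours:>2} hours later: about {remaining:5.1f} mg still in the body")
# After 24 hours (four half-lives) roughly 1/16 of the dose is still circulating.
```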

The half-lives of drugs vary from a few seconds to several weeks. To complicate matters, biological half-lives vary for different parts of the body. Lead has a half-life of around a month in the blood, but a decade in bone. Plutonium in bone has a half-life of a century — more than double the time for the liver.

Marketers refer to the half-life of a campaign — the time taken to receive half the total responses. Unsurprisingly, this time varies among media. A paper catalog may have a half-life of about three weeks, whereas a tweet might have a half-life of a few minutes. Calculating this time is important for establishing how frequently a message should be sent.
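A marketer with a log of response timestamps could estimate this half-life directly: sort the responses by arrival time and find the moment when half of the eventual total had come in. A minimal sketch; the response log here is made-up data for a hypothetical tweet, and it assumes the campaign has already run its course:

```python
def campaign_half_life(response_times):
    """Time at which half of all logged responses had arrived (any time unit)."""
    ordered = sorted(response_times)
    half_index = (len(ordered) + 1) // 2 - 1      # response that reaches the halfway count
    return ordered[half_index]

# Hypothetical responses, in hours after a tweet went out.
responses = [0.1, 0.2, 0.4, 0.5, 0.9, 1.5, 3.0, 6.0, 12.0, 48.0]
print(f"estimated campaign half-life: {campaign_half_life(responses)} hours")
```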

“Every day that we read the news we have the possibility of being confronted with a fact about our world that is wildly different from what we thought we knew.”

— Samuel Arbesman

The Half-Life of Facts

In The Half-Life of Facts: Why Everything We Know Has an Expiration Date, Samuel Arbesman (see our Knowledge Project interview) posits that facts decay over time until they are no longer facts or perhaps no longer complete. According to Arbesman, information has a predictable half-life: the time taken for half of it to be replaced or disproved. Over time, one group of facts replaces another. As our tools and knowledge become more advanced, we can discover more — sometimes new things that contradict what we thought we knew, sometimes nuances about old things. Sometimes we discover a whole area that we didn’t know about.

The rate of these discoveries varies. Our body of engineering knowledge changes more slowly, for example, than does our body of psychological knowledge.

Arbesman studied the nature of facts. The field of scientometrics was born in 1947, when mathematician Derek J. de Solla Price was arranging a complete set of the Philosophical Transactions of the Royal Society on his shelf. Price noted something surprising: the sizes of the volumes fit an exponential curve. His curiosity piqued, he began to see whether the same curve applied to science as a whole. Price established that the quantity of scientific data available was doubling every 15 years. This meant that some of the information had to be rendered obsolete with time.

Scientometrics shows us that facts are always changing, and much of what we know is (or soon will be) incorrect. Indeed, much of the available published research, however often it is cited, has never been reproduced and cannot automatically be considered true. In a controversial paper entitled “Why Most Published Research Findings Are False,” John Ioannidis covers the rampant nature of poor science. Many researchers are incentivized to find results that will please those giving them funding. Intense competition makes it essential to find new information, even if it is found in a dubious manner. Yet we all have a tendency to turn a blind eye when beliefs we hold dear are disproved and to pay attention only to information confirming our existing opinions.

As an example, Arbesman points to the number of chromosomes in a human cell. For decades, 48 was the accepted number that medical students were taught. (In 1953, it had been declared an established fact by a leading cytologist.) Yet in 1956, two researchers, Joe Hin Tjio and Albert Levan, made a bold assertion: they declared the true number to be 46. During their research, Tjio and Levan could never find the number of chromosomes they expected. Discussing the problem with their peers, they discovered they were not alone. Plenty of other researchers found themselves two chromosomes short of the expected 48. Many researchers even abandoned their work because of this perceived error. But Tjio and Levan were right (for now, anyway). Although an error of two chromosomes seems like a minor mistake, we don’t know the opportunity costs of the time researchers invested in faulty hypotheses or the value of the work that was abandoned. It was an emperor’s-new-clothes situation, and anyone counting 46 chromosomes assumed they were the ones making the error.

As Arbesman puts it, facts change incessantly. Many of us have seen the ironic (in hindsight) doctor-endorsed cigarette ads from the past. A glance at a newspaper will doubtless reveal that meat or butter or sugar has gone from deadly to saintly, or vice versa. We forget that laughable, erroneous beliefs people once held are not necessarily any different from those we now hold. The people who believed that the earth was the center of the universe, or that some animals appeared out of nowhere or that the earth was flat, were not stupid. They just believed facts that have since decayed. Arbesman gives the example of a dermatology test that had the same question two years running, with a different answer each time. This is unsurprising considering the speed at which our world is changing.

As Arbesman points out, in the last century the world’s population has swelled from 2 billion to 7 billion, we have taken on space travel, and we have altered the very definition of science.

Our world seems to be in constant flux. With our knowledge changing all the time, even the most informed people can barely keep up. All this change may seem random and overwhelming (Dinosaurs have feathers? When did that happen?), but it turns out there is actually order within the shifting noise. This order is regular and systematic and is one that can be described by science and mathematics.

The order Arbesman describes mimics the decay of radioactive elements. Whenever new information is discovered, we can be sure it will break down and be proved wrong at some point. As with a radioactive atom, we don’t know precisely when that will happen, but we know it will occur at some point.

If we zoom out and look at a particular body of knowledge, the random decay becomes orderly. Through probabilistic thinking, we can predict the half-life of a group of facts with the same certainty with which we can predict the half-life of a radioactive atom. The problem is that we rarely consider the half-life of information. Many people assume that whatever they learned in school remains true years or decades later. Medical students who learned in university that cells have 48 chromosomes would not learn later in life that this is wrong unless they made an effort to do so.

OK, so we know that our knowledge will decay. What do we do with this information? Arbesman says,

… simply knowing that knowledge changes like this isn’t enough. We would end up going a little crazy as we frantically tried to keep up with the ever changing facts around us, forever living on some sort of informational treadmill. But it doesn’t have to be this way because there are patterns. Facts change in regular and mathematically understandable ways. And only by knowing the pattern of our knowledge evolution can we be better prepared for its change.

Recent initiatives have sought to calculate the half-life of an academic paper. Ironically, academic journals have largely neglected research into how people use them and how best to fund the efforts of researchers. Research by Philip Davis shows the time taken for a paper to receive half of its total downloads. Davis’s results are compelling. While most forms of media have a half-life measured in days or even hours, 97 percent of academic papers have a half-life longer than a year. Engineering papers have a slightly shorter half-life than other fields of research, with double the average (6 percent) having a half-life of under a year. This makes sense considering what we looked at earlier in this post. Health and medical publications have the shortest overall half-life: two to three years. Physics, mathematics, and humanities publications have the longest half-lives: two to four years.

The Half-Life of Secrets

According to Peter Swire, writing in “The Declining Half-Life of Secrets,” the half-life of secrets (by which Swire generally means classified information) is shrinking. In the past, a government secret could be kept for over 25 years. Nowadays, hacks and leaks have shrunk that time considerably. Swire writes:

During the Cold War, the United States developed the basic classification system that exists today. Under Executive Order 13526, an executive agency must declassify its documents after 25 years unless an exception applies, with stricter rules if documents stay classified for 50 years or longer. These time frames are significant, showing a basic mind-set of keeping secrets for a time measured in decades.

Swire notes that there are three main causes: “the continuing effects of Moore’s Law — or the idea that computing power doubles every two years, the sociology of information technologists, and the different source and methods for signals intelligence today compared with the Cold War.” One factor is that spreading leaked information is easier than ever. In the past, it was often difficult to get information published. Newspapers feared legal repercussions if they shared classified information. Anyone can now release secret information, often anonymously, as with WikiLeaks. Governments cannot as easily rely on media gatekeepers to cover up leaks.

Rapid changes in technology or geopolitics often reduce the value of classified information, so the value of some, but not all, classified information also has a half-life. Sometimes it’s days or weeks, and sometimes it’s years. For some secrets, it’s not worth investing the massive amount of computer time that would be needed to break them because by the time you crack the code, the information you wanted to know might have expired.

(As an aside, if you were to invert the problem of all these credit card and SSN leaks, you might conclude that reducing the value of possessing this information would be more effective than spending money to secure it.)

“Our policy (at Facebook) is literally to hire as many talented engineers as we can find. The whole limit in the system is that there are not enough people who are trained and have these skills today.”

— Mark Zuckerberg

The Half-Lives of Careers and Business Models

The issue with information having a half-life should be obvious. Many fields depend on individuals with specialized knowledge, learned through study or experience or both. But what if those individuals are failing to keep up with changes and clinging to outdated facts? What if your doctor is offering advice that has been rendered obsolete since they finished medical school? What if your own degree or qualifications are actually useless? These are real problems, and knowing about half-lives will help you make yourself more adaptable.

While figures for the half-lives of most knowledge-based careers are hard to find, we do know the half-life of an engineering career. A century ago, it would take 35 years for half of what an engineer learned when earning their degree to be disproved or replaced. By the 1960s, that time span shrank to a mere decade. Today that figure is probably even lower.

In a 1966 paper entitled “The Dollars and Sense of Continuing Education,” Thomas Jones calculated the effort that would be required for an engineer to stay up to date, assuming a 10-year half-life. According to Jones, an engineer would need to devote at least five hours per week, 48 weeks a year, to stay up to date with new advancements. A typical degree requires about 4800 hours of work. Within 10 years, the information learned during 2400 of those hours would be obsolete. The five-hour figure does not include the time necessary to revise forgotten information that is still relevant. A 40-year career as an engineer would require 9600 hours of independent study.
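Jones's numbers are easy to check with a few lines of arithmetic. A quick sketch using the figures quoted above:

```python
hours_per_week = 5        # study time Jones recommends
weeks_per_year = 48
degree_hours = 4_800      # hours of work in a typical engineering degree
half_life_years = 10      # assumed half-life of that knowledge
career_years = 40

annual_study = hours_per_week * weeks_per_year
print(f"coursework obsolete after {half_life_years} years: {degree_hours // 2} hours")
print(f"independent study over a {career_years}-year career: {annual_study * career_years} hours")
```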

Keep in mind that Jones made his calculations in the 1960s. Modern estimates place the half-life of an engineering degree at between 2.5 and 5 years, requiring between 10 and 20 hours of study per week. Welcome to the treadmill, where you have to run faster and faster so that you don’t fall behind.

Unsurprisingly, putting in this kind of time is simply impossible for most people. The result is that the typical engineering career keeps getting shorter, and employers develop a bias towards hiring recent graduates. A partial escape from this time-consuming treadmill is to recognize that the need for learning never ends. Once you accept that, it becomes easier to devote time and attention to developing heuristics and systems that foster learning. The faster the pace of knowledge change, the more valuable the skill of learning becomes.

A study by PayScale found that the median age of workers in most successful technology companies is substantially lower than that of other industries. Of 32 companies, just six had a median worker age above 35, despite the average across all workers being just over 42. Eight of the top companies had a median worker age of 30 or below — 28 for Facebook, 29 for Google, and 26 for Epic Games. The upshot is that salaries are high for those who can stay current while gaining years of experience.

In a similar vein, business models have ever-shrinking half-lives. The nature of capitalism is that you have to be better this year than you were last year — not to gain market share but to maintain what you already have. If you want to get ahead, you need asymmetry; otherwise, you get lost in trench warfare. How long would it take for half of Uber’s or Facebook’s business model to become irrelevant? It’s hard to imagine it being more than a couple of years or even months.

In The Business Model Innovation Factory: How to Stay Relevant When the World Is Changing, Saul Kaplan highlights the changing half-lives of business models. In the past, models could last for generations. The majority of CEOs oversaw a single business for their entire careers. Business schools taught little about agility or pivoting. Kaplan writes:

During the industrial era once the basic rules for how a company creates, delivers, and captures value were established[,] they became etched in stone, fortified by functional silos, and sustained by reinforcing company cultures. All of a company’s DNA, energy, and resources were focused on scaling the business model and beating back competition attempting to do a better job executing the same business model. Companies with nearly identical business models slugged it out for market share within well-defined industry sectors.


Those days are over. The industrial era is not coming back. The half-life of a business model is declining. Business models just don’t last as long as they used to. In the twenty-first century business leaders are unlikely to manage a single business for an entire career. Business leaders are unlikely to hand down their businesses to the next generation of leaders with the same business model they inherited from the generation before.

The Burden of Knowledge

The flip side of a half-life is the time it takes for something to double. A useful guideline for calculating doubling time is to divide 70 by the percentage rate of growth. This formula isn’t perfect, but it gives a good indication. Known as the Rule of 70, it applies only to exponential growth, where the relative growth rate remains constant, as with compound interest.

The higher the rate of growth, the shorter the doubling time. For example, if the population of a city is increasing by 2 percent per year, we divide 70 by 2 to get a doubling time of 35 years. The rule of 70 is a useful heuristic; population growth of 2 percent might seem low, but your perspective might change when you consider that the city’s population could double in just 35 years. The Rule of 70 can also be used to calculate the time for an investment to double in value; for example, $100 at 7 percent compound interest will double in just a decade and quadruple in 20 years. The average newborn baby doubles its birth weight in under four months. The average doubling time for a tumor is also four months.
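The Rule of 70 is easy to check against the exact doubling time for compound growth, which is ln 2 / ln(1 + r). A small sketch using the 2 percent and 7 percent examples above:

```python
import math

def doubling_time_rule_of_70(growth_pct):
    return 70 / growth_pct

def doubling_time_exact(growth_pct):
    # exact doubling time for steady compound growth at growth_pct per period
    return math.log(2) / math.log(1 + growth_pct / 100)

for rate in (2, 7):
    approx = doubling_time_rule_of_70(rate)
    exact = doubling_time_exact(rate)
    print(f"{rate}% growth: Rule of 70 says {approx:.1f} periods, exact answer is {exact:.1f}")
```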

We can see how information changes in the figures for how long it takes for a body of knowledge to double in size. The figures quoted by Arbesman (drawn from Little Science, Big Science … and Beyond by Derek J. de Solla Price) are compelling, including:

  • Time for the number of entries in a dictionary of national biographies to double: 100 years
  • Time for the number of universities to double: 50 years
  • Time for the number of known chemical compounds to double: 15 years
  • Time for the number of known asteroids to double: 10 years

Arbesman also gives figures for the time taken for the available knowledge in a particular field to double, including:

  • Medicine: 87 years
  • Mathematics: 63 years
  • Chemistry: 35 years
  • Genetics: 32 years
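Running the Rule of 70 in reverse (growth rate ≈ 70 divided by the doubling time) turns these doubling times into rough annual growth rates. A sketch using the figures just listed:

```python
doubling_times_years = {
    "medicine": 87,
    "mathematics": 63,
    "chemistry": 35,
    "genetics": 32,
}

for field, years in doubling_times_years.items():
    growth_pct = 70 / years          # Rule of 70 inverted: rate ≈ 70 / doubling time
    print(f"{field:12s} knowledge grows roughly {growth_pct:.1f}% per year")
```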

The doubling of knowledge increases the learning load over time. As a body of knowledge doubles, so does the cost of wrapping your head around what we already know. This cost is the burden of knowledge. To be the best in a general field today requires that you know more than the person who was the best only 20 years ago. Not only do you have to be better to be the best, but you also have to be better just to stay in the game.

The corollary is that because there is so much to know, we specialize in very niche areas. This makes it easier to grasp the existing body of facts, keep up to date on changes, and rise to the level of expert. The problem is that specializing also makes it easier to see the world through the narrow focus of your specialty, makes it harder to work with other people (as niches are often dominated by jargon), and makes you prone to overvalue the new and novel.


As we have seen, understanding how half-lives work has numerous practical applications, from determining when radioactive materials will become safe to figuring out effective drug dosages. Half-lives also show us that if we spend time learning something that changes quickly, we might be wasting our time. Like Alice running alongside the Red Queen in Through the Looking-Glass — a perfect example of the Red Queen Effect — we have to run faster and faster just to keep up with where we are. So if we want our knowledge to compound, we’ll need to focus on the invariant general principles.



Activation Energy: Why Getting Started Is the Hardest Part

The Basics

The beginning of any complex or challenging endeavor is always the hardest part.

Not all of us wake up and jump out of bed ready for the day. Some of us, like me, need a little extra energy to transition out of sleep and into the day. Once I've had a cup of coffee, my energy level jumps and I'm good for the rest of the day.

Chemical reactions work in much the same way. They need their coffee, too.

Understanding how this works can be a useful perspective as part of our latticework of mental models.

Whether you use chemistry in your everyday work or have tried your best not to think about it since school, the ideas behind activation energy are simple and useful outside of chemistry. Understanding the principle can, for example, help you get kids to eat their vegetables, motivate yourself and others, and overcome inertia.

How Activation Energy Works in Chemistry

Chemical reactions need a certain amount of energy to begin working. Activation energy is the minimum energy required to cause a reaction to occur.

To understand activation energy, we must first think about how a chemical reaction occurs.

Anyone who has ever lit a fire will have an intuitive understanding of the process, even if they have not connected it to chemistry.

Most of us have a general feel for the heat necessary to start flames. We know that putting a single match to a large log will not be sufficient and a flame thrower would be excessive. We also know that damp or dense materials will require more heat than dry ones. The imprecise amount of energy we know we need to start a fire is representative of the activation energy.

For a reaction to occur, existing bonds must break and new ones form. A reaction will only proceed if the products are more stable than the reactants. In a fire, we convert the carbon in wood into CO2, which is a more stable form of carbon, so the reaction proceeds and produces heat in the process. In this example, the activation energy is the initial heat required to get the fire started. Our effort and spent matches are representative of this.

We can think of activation energy as the height of the energy barrier separating the energy minima (the low-energy resting states) of the reactants and the products in a chemical reaction.

The Arrhenius Equation

Svante Arrhenius, a Swedish scientist, established the existence of activation energy in 1889.

The Arrhenius equation: k = A·e^(−Ea/RT)

Arrhenius developed his eponymous equation to describe the correlation between temperature and reaction rate.

The Arrhenius equation is crucial for calculating the rates of chemical reactions and, importantly, the quantity of energy necessary to start them.

In the Arrhenius equation, k is the reaction rate coefficient (the rate of the reaction), A is the frequency factor (how often molecules collide), R is the universal gas constant (units of energy per temperature increment per mole), T is the absolute temperature (usually measured in kelvins), and Ea is the activation energy.

It is not necessary to know the value of A to calculate Ea as this can be figured out from the variation in reaction rate coefficients in relation to temperature. Like many equations, it can be rearranged to calculate different values. The Arrhenius equation is used in many branches of chemistry.
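As an illustration, here is a minimal sketch of both uses: computing k from a known activation energy, and recovering Ea from rate coefficients measured at two temperatures. The frequency factor and activation energy are made-up values, not data for any particular reaction:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def rate_coefficient(A, Ea, T):
    """Arrhenius equation: k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

def activation_energy(k1, T1, k2, T2):
    """Two-temperature form: ln(k2 / k1) = -(Ea / R) * (1/T2 - 1/T1), solved for Ea."""
    return -R * math.log(k2 / k1) / (1 / T2 - 1 / T1)

# Hypothetical reaction: frequency factor 1e13 per second, Ea = 75 kJ/mol.
k_300 = rate_coefficient(1e13, 75_000, 300)
k_320 = rate_coefficient(1e13, 75_000, 320)
print(f"k at 300 K: {k_300:.3e}   k at 320 K: {k_320:.3e}")

# Back out Ea from the two "measured" rate coefficients; A is not needed.
print(f"recovered Ea: {activation_energy(k_300, 300, k_320, 320) / 1000:.1f} kJ/mol")
```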

Why Activation Energy Matters

Understanding the energy necessary for a reaction to occur gives us control over our surroundings.

Returning to the example of fire, our intuitive knowledge of activation energy keeps us safe. Many chemical reactions have high activation energy requirements, so they do not proceed without an additional input. We all know that a book on a desk is flammable, but will not combust without heat application. At room temperature, we need not see the book as a fire hazard. If we light a candle on the desk, we know to move the book away.

If chemical reactions did not have reliable activation energy requirements, we would live in a dangerous world.


Chemical reactions which require substantial amounts of energy can be difficult to control.

Increasing temperature is not always a viable source of energy due to costs, safety issues, or simple impracticality. Chemical reactions that occur within our bodies, for example, cannot use high temperatures as a source of activation energy. Consequently, it is sometimes necessary to reduce the activation energy required.

Speeding up a reaction by lowering the activation energy required is called catalysis. This is done with an additional substance known as a catalyst, which is generally not consumed in the reaction. In principle, you need only a tiny amount of catalyst to speed up the reaction.

Catalysts work by providing an alternative pathway with lower activation energy requirements. Consequently, more of the particles have sufficient energy to react. Catalysts are used in industrial scale reactions to lower costs.
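The Arrhenius equation from the previous section shows why this works so well: with everything else unchanged, lowering the activation energy multiplies the rate by exp(ΔEa / RT). A sketch with illustrative, made-up numbers:

```python
import math

R, T = 8.314, 298            # gas constant in J/(mol*K); room temperature in kelvins

Ea_uncatalysed = 75_000      # J/mol, hypothetical uncatalysed pathway
Ea_catalysed = 50_000        # J/mol, lower-energy pathway offered by a catalyst

# Ratio of the two Arrhenius rates (the frequency factor A cancels out).
speedup = math.exp((Ea_uncatalysed - Ea_catalysed) / (R * T))
print(f"the catalysed reaction runs roughly {speedup:,.0f} times faster at {T} K")
```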

Returning to the fire example, we know that attempting to light a large log with a match is rarely effective. Adding some paper provides an easier pathway to get the log burning, playing a role similar to a catalyst (although, unlike a true catalyst, the paper is consumed). Firestarters do the same.

Within our bodies, enzymes serve as catalysts in vital reactions (such as building DNA).

How We Can Apply the Concept of Activation Energy to Our Lives

“Energy can have two dimensions. One is motivated, going somewhere, a goal somewhere, this moment is only a means and the goal is going to be the dimension of activity, goal oriented-then everything is a means, somehow it has to be done and you have to reach the goal, then you will relax. But for this type of energy, the goal never comes because this type of energy goes on changing every present moment into a means for something else, into the future. The goal always remains on the horizon. You go on running, but the distance remains the same.

No, there is another dimension of energy: that dimension is unmotivated celebration. The goal is here, now; the goal is not somewhere else. In fact, you are the goal. In fact, there is no other fulfillment than that of this moment–consider the lilies. When you are the goal and when the goal is not in the future, when there is nothing to be achieved, rather you are just celebrating it, then you have already achieved it, it is there. This is relaxation, unmotivated energy.”
— Osho, Tantra


Although activation energy is a scientific concept, we can use it as a practical mental model.

Returning to the morning coffee example, many of the things we do each day depend upon an initial push.

Take the example of a class of students assigned an essay as coursework. Each student requires a different sort of activation energy to get started. For one student, it might be hearing a friend say she has already finished hers. For another, it might be blocking social media and turning off their phone. A different student might need a few cans of Red Bull and an impending deadline. For yet another, it might be reading an interesting article on the topic that provides a spark of inspiration. The act of writing an essay necessitates a certain sort of energy.

Getting kids to eat their vegetables can be a difficult process. In this case, incentives can act as a catalyst. “You can't have your dessert until you eat your vegetables” is not only a psychological play on incentives; it also often requires less energy than constantly fighting with the kids to eat their vegetables. Once kids eat a carrot, they generally eat another one and another one. While they still want dessert, you won't have to remind them each time, so you'll save a lot of energy.

The concept of activation energy can also apply to making drastic life changes. Anyone who has ever done something dramatic and difficult (such as quitting an addiction, leaving an abusive relationship, quitting a long-term job, or making crucial lifestyle changes) knows that it is necessary to reach a breaking point first. The bigger and more challenging an action is, the more activation energy we require to do it.

Our coffee drinker might need only a little activation energy (a cup or two) to begin their day if they are well rested. Meanwhile, it will take a whole lot more coffee for them to get going if they slept badly and have a dull day to get through.


To understand and use the concept of activation energy in our lives does not require a degree in chemistry. While the concept as used by scientists is complex, we can use the basic idea.

It is no coincidence that many of the most useful mental models in our latticework originate from science. There is something quite poetic about the way in which human behavior mirrors what occurs at a microscopic level.

For other examples, look to Occam’s Razor, falsification, feedback loops, and equilibrium.

Survival of the Fittest: Groups versus Individuals

If ‘survival of the fittest’ is the prime evolutionary tenet, then why do some behaviors that lead to winning or success, seemingly justified by this concept, ultimately leave us cold?

Taken from Darwin’s theory of evolution, survival of the fittest is often conceptualized as the advantage that accrues with certain traits, allowing an individual to both thrive and survive in their environment by out-competing for limited resources. Qualities such as strength and speed were beneficial to our ancestors, allowing them to survive in demanding environments, and thus our general admiration for these qualities is now understood through this evolutionary lens.

However, in humans this evolutionary concept is often co-opted to defend a wide range of behaviors, not all of them good: winning by cheating, say, or stepping on others to achieve one’s goals.

Why is this?

One answer is that humans are not only concerned with our individual survival, but the survival of our group. (Which, of course, leads to improved individual survival, on average.) This relationship between individual and group survival is subject to intense debate among biologists.

Selecting for Unselfishness?

Humans display a wide range of behavior that seems counter-intuitive to the survival of the fittest mentality until you consider that we are an inherently social species, and that keeping our group fit is a wise investment of our time and energy.

One of the behaviors that humans display a lot of is “indirect reciprocity”. Distinguished from “direct reciprocity”, in which I help you and you help me, indirect reciprocity confers no immediate benefit to the one doing the helping. Either I help you, then you help someone else at a later time, or I help you and then someone else, some time in the future, helps me.

Martin A. Nowak and Karl Sigmund have studied this phenomenon in humans for many years. Essentially, they ask the question “How can natural selection promote unselfish behavior?”

Many of their studies have shown that “propensity for indirect reciprocity is widespread. A lot of people choose to do it.”


Humans are the champions of reciprocity. Experiments and everyday experience alike show that what Adam Smith called ‘our instinct to trade, barter and truck' relies to a considerable extent on the widespread tendency to return helpful and harmful acts in kind. We do so even if these acts have been directed not to us but to others.

We care about what happens to others, even if the entire event is one that we have no part in. If you consider evolution in terms of survival of the fittest group, rather than individual, this makes sense.

Supporting those who harm others can breed mistrust and instability. And if we don’t trust each other, day to day transactions in our world will be completely undermined. Sending your kids to school, banking, online shopping: We place a huge amount of trust in our fellow humans every day.

If we consider this idea of group survival, we can also see value in a wider range of human attributes and behaviors. It is now not about “I have to be the fittest in every possible way in order to survive“, but recognizing that I want fit people in my group.

In her excellent book, Quiet: The Power of Introverts in a World That Can’t Stop Talking, author Susan Cain explores, among other things, the relevance of introverts to social function. How their contributions benefit the group as a whole. Introverts are people who “like to focus on one task at a time, … listen more than they talk, think before they speak, … [and] tend to dislike conflict.”

Though out of step with the culture of “the extrovert ideal” we are currently living in, introverts contribute significantly to our group fitness. Without them we would be deprived of much of our art and scientific progress.

Cain argues:

Among evolutionary biologists, who tend to subscribe to the vision of lone individuals hell-bent on reproducing their own DNA, the idea that species include individuals whose traits promote group survival is hotly debated and, not long ago, could practically get you kicked out of the academy.

But the idea makes sense. If personality types such as introverts aren’t the fittest for survival, then why did they persist? Possibly because of their value to the group.

Cain looks at the work of Dr. Elaine Aron, who has spent years studying introverts, and is one herself. In explaining the idea of different personality traits as part of group selection in evolution, Aron offers this story in an article posted on her website:

I used to joke that when a group of prehistoric humans were sitting around the campfire and a lion was creeping up on them all, the sensitive ones [introverts] would alert the others to the lion's prowling and insist that something be done. But the non-sensitive ones [extroverts] would be the ones more likely to go out and face the lion. Hence there are more of them than there are of us, since they are willing and even happy to do impulsive, dangerous things that will kill many of us. But also, they are willing to protect us and hunt for us, if we are not as good at killing large animals, because the group needs us. We have been the healers, trackers, shamans, strategists, and of course the first to sense danger. So together the two types survive better than a group of just one type or the other.

The lesson is this: Groups survive better if they have individuals with different strengths to draw on. The more tools you have, the more likely you are to complete a job. The more people you have who are different, the more likely you are to survive the unexpected.

Which Group?

How then, does one define the group? Who am I willing to help? Arguably, I’m most willing to sacrifice for my children, or family. My immediate little group. But history is full of examples of those who sacrificed significantly for their tribes or sports teams or countries.

We can’t argue that it is just about the survival of our own DNA. That may explain why I will throw myself in front of a speeding car to protect my child, but the beaches of Normandy were stormed by thousands of young, childless men. When soldiers from World War I were interviewed about why they would jump out of a trench and try to take a slice of no man’s land, they most often said they did it “for the guy next to them”. They initially joined the military out of a sense of “national pride” or other very non-DNA reasons.

Clearly, human culture is capable of defining “groups” very broadly through a complex system of mythology, creating deep loyalty to “imaginary” groups like sports teams, corporations, nations, or religions.

As technology shrinks our world, our group expands. Technological advancement pushes us into higher degrees of specialization, so that individual survival becomes clearly linked with group survival.

I know that I have a vested interest in doing my part to maintain the health of my group. I am very attached to indoor plumbing and grocery stores, yet don’t participate at all in the giant webs that allow those things to exist in my life. I don’t know anything about the configuration of the municipal sewer system or how to grow raspberries. (Of course, Adam Smith called this process of the individual benefitting the group through specialization the Invisible Hand.)

When we see ourselves as part of a group, we want the group to survive and even thrive. Yet how big can our group be? Is there always an us vs. them? Does our group’s survival always have to come at the expense of others? We leave you to speculate.


Under One Roof: What Can We Learn from the Mayo Clinic?

The biologist Lewis Thomas, who we've written about before, has a wonderful thought on creating great organizations.

For Thomas, creating great science was not about command-and-control. It was about Getting the Air Right.

It cannot be prearranged in any precise way; the minds cannot be lined up in tidy rows and given directions from printed sheets. You cannot get it done by instructing each mind to make this or that piece, for central committees to fit with the pieces made by the other instructed minds. It does not work this way.

What it needs is for the air to be made right. If you want a bee to make honey, you do not issue protocols on solar navigation or carbohydrate chemistry, you put him together with other bees (and you’d better do this quickly, for solitary bees do not stay alive) and you do what you can to arrange the general environment around the hive. If the air is right, the science will come in its own season, like pure honey.

One organization which clearly “gets the air right” is the much lauded Mayo Clinic in Rochester, Minnesota.

The organization has 4,500 physicians and over $10 billion in revenue across three main campuses, and it is regularly rated among the top hospital systems in the United States in a wide variety of specialities. Yet it was founded back in the late 19th century by William Worrall Mayo. Its main campus is in Rochester, Minnesota, not exactly a hub of bustling activity, yet its patients are willing to fly or drive hundreds of miles to receive care. (So-called “destination medicine.”)

How does an organization sustain that kind of momentum for more than 150 years, in an industry that's changed as much as medicine? What can the rest of us learn from that?

It's a prime example of where culture eats strategy. Even Warren Buffett admires the system:

A medical partnership led by your area’s premier brain surgeon may enjoy outsized and growing earnings, but that tells little about its future. The partnership’s moat will go when the surgeon goes. You can count, though, on the moat of the Mayo Clinic to endure, even though you can’t name its CEO.

Pulling the Same Oar

The Mayo Clinic is an integrated, multi-specialty organization — they're known for doing almost every type of medicine at a world class level. And the point of having lots of specialities integrated under one roof is teamwork: Everyone is pulling the same oar. Integrating all specialities under one umbrella and giving them a common set of incentives focuses Mayo's work on the needs of the patient, not the hospital or the doctor.

This extreme focus on patient needs and teamwork creates a unique environment that is not present in most healthcare systems, where one's various care-takers often don't know each other, fail to communicate, and even have trouble accessing past medical records. (Mayo is able to have one united electronic patient record system because of its deep integration.)

Importantly, they don't just say they focus on integrated care, they do it. Everything is aligned in that direction. For example, as with Apple Retail stores (also known for extreme customer focus), there are no bonuses or incentive payments for physicians — only salaries.

An interesting book called Management Lessons from the Mayo Clinic (recommended by the great Sanjay Bakshi) details some of Mayo's interesting culture:

The clinic ardently searches for team players in its hiring and then facilitates their collaboration through substantial investment in communications technology and facilities design. Further encouraging collaboration is an all-salary compensation system with no incentive payments based on the number of patients seen or procedures performed. A Mayo physician has no economic reason to hold onto patients rather than referring them to colleagues better suited to meet their needs. Nor does taking the time to assist a colleague result in lost personal income.


The most amazing thing of all about the Mayo clinic is the fact that hundreds of members of the most highly individualistic profession in the world could be induced to live and work together in a small town on the edge of nowhere and like it.

The Clinic was carefully constructed by self-selection over time: It’s a culture that attracts teamwork-focused physicians and then executes on that promise.

One of the internists in the book is quoted as saying that working at Mayo is like “working in an organism; you are not a single cell when you are out there practicing. As a generalist, I have access to the best minds on any topic, any disease or problem I come up with, and they’re one phone call away.”

In that sense, part of Mayo’s moat is simply a feedback loop of momentum: Give a group of high performers an amazing atmosphere in which to do their work, and eventually the best people will simply be attracted to each other. This can go on for a long time.

Under One Roof

The other part of Mayo’s success — besides correct incentives, a correct system, and a feedback loop — is simply scale and critical mass. Mayo is like Ford in its early days: They can do everything under one roof, with all of the specialities and sub-specialities covered. That allows them to deliver a very different experience, accelerating the patient care cycle through extreme efficiency relative to a “fractured” system.

Craig Smoldt, chair of the department of facilities and support services in Rochester, makes the point that Mayo Clinic can offer efficient care, the cornerstone of destination medicine, because it functions as one integrated organization. He notes that having everyone work under one roof, so to speak, and on the payroll of the same organization makes a huge difference: “The critical mass of what we have here is another factor. Few healthcare organizations in the country have as many specialities and sub-specialities working together in one organization.” So Mayo Clinic patients come to one of three locations, and virtually all of their diagnoses and treatment can be delivered by that single organization in a short time.

Contrast that to the way care is delivered elsewhere, the fractured system that represents Mayo's competitors. This is another factor in Mayo's success — they're up against a pretty uncompetitive lot:

Most U.S. healthcare is not delivered in organizations with a comparable degree of integrated operations. Rather than receiving care under one roof, a single patient's doctors commonly work in offices scattered around a city. Clinical laboratories and imaging facilities may be either in the local hospital or at different locations. As a report by the Institute of Medicine and the National Academy of Engineering notes, “The increase in specialization in medicine has reinforced the cottage-industry structure of U.S. healthcare, helping to create a delivery system characterized by disconnected silos of function and specialization.”

How does this normally work out in practice, at places that don't work like Mayo? We're probably all familiar with the process. The Institute of Medicine report referenced above continues:

“Suppose the patient has four medical problems. That means she would likely have at least five different doctors.” For instance, this patient could have (1) a primary care doctor providing regular examinations and treatments for general health, (2) an orthopedist who treats a severely arthritic knee, (3) a cardiologist who is monitoring the aortic valve in her heart that may need replacement soon, (4) a psychiatrist who is helping her manage depression, and (5) an endocrinologist who is helping her adjust her diabetes medications. Dr. Cortese then notes, “With the possible exception of the primary care physician, most of these doctors probably do not know that the patient is seeing the others. And even if they do know, it is highly unlikely they know the impressions and recommendations the other doctors have recorded in the medical record, or exactly what medications and dosages are prescribed.” If the patient is hospitalized, it is probable that only the admitting physician and the primary care physician will have that knowledge.

Coordinating all of these doctors takes time and energy on the part of the patient. Repeat and follow-up visits happen days later; often test results, MRI results, or X-ray results are not determined quickly or communicated effectively to the other parts of the chain.

Mayo solves that by doing everything efficiently and under one roof. The patient or his/her family doesn't have to push to get efficient service. Take the case of a woman with fibrocystic breast disease who had recently found a lump. Her experience at Mayo took a few hours; the same experience in the past had taken multiple days elsewhere, and initiative on her end to speed things up.

As a patient in the breast clinic, she began with an internist/breast specialist who took the medical history and performed an exam. The mammogram followed in the nearby breast imaging center. The breast ultrasound, ordered to evaluate a specific area on the breast, was done immediately after the mammogram.

The breast radiologist who performed the ultrasound had all the medical history and impressions of the other doctors available in the electronic medical record (EMR). The ultrasound confirmed that the lump was a simple cyst, not a cancer. The radiologist shared this information with the patient and offered her an aspiration of the cyst that would draw off fluid if the cyst was painful. But comforted with the diagnosis of the simple cyst and with the fact that it was not painful, the veteran patient declined the aspiration. Within an hour of completing the breast imaging, the radiologist communicated to the breast specialist a “verbal report” of the imaging findings. The patient returned to the internist/breast specialist who then had a wrap-up visit with the patient and recommended follow-up care. This patient's care at Mayo was completed in three and one-half hours–before lunch.

So what are some lessons we can pull together from studying Mayo?

The book offers a bunch, but one in particular seemed broadly useful, from a chapter describing Mayo's “systems” approach to consistently improving the speed and level of care. (Industrial engineers are put to work fixing broken systems inside Mayo.)

Mayo wins by solving the totality of the customer's problem, not part of it. This is the essence of an integrated system. While this wouldn't work for all types of businesses, it's probably a useful way for most “service” companies to think.

Why is this lesson particularly important? Because it leads to all the others. Innovation in patient care, efficiency in service delivery, continuous adoption of new technology, “Getting the Air Right” to attract and retain the best possible physicians, and creating a feedback loop are products of the “high level” thought process below: Solve the whole problem.

Lesson 1: Solve the customer's total problem. Mayo Clinic is a “systems seller” competing with a connected, coordinated service. Systems sellers market coordinated solutions to the totality of their customers' problems; they offer whole solutions instead of partial solutions. In systems selling, the marketer puts together all the services needed by customers rather than leaving them to assemble the pieces themselves. The Clinic uses systems thinking to execute systems selling that pleasantly surprises patients (and families) and exceeds their expectations.

The scheduling and service production systems at Mayo Clinic have created a differentiated product–destination medicine–that few competitors can approach. So even if patients feel that the doctors and hospitals at home are fine, they still place a high value on a service system that can deliver a product in days rather than weeks or months.


Patients not only require competent care but also coordinated and efficient care. Mayo excels in both areas. In a small Midwestern town, it created a medical city offering “systems solutions” that encourage favorable word of mouth and sustained brand strength, and then it exported the model to new campuses in Arizona and Florida.

If you liked this post, you might like these as well:

Creating Effective Incentive Systems: Ken Iverson on the Principles that Unleash Human Potential — Done poorly, compensation systems foster a culture of individualism and gaming. Done properly, however, they unleash the potential of all employees.

Can Health Care Learn From Restaurant Chains? — Atul Gawande pens a fascinating piece in the New Yorker about what health care can learn from the Cheesecake Factory.

Principles for an Age of Acceleration

The MIT Media Lab is a creative nerve center where great ideas like One Laptop per Child, LEGO Mindstorms, and the Scratch programming language have emerged.

Its director, Joi Ito, has done a lot of thinking about how prevailing systems of thought will not be the ones to see us through the coming decades. In his book Whiplash: How to Survive Our Faster Future, he notes that sometime late in the last century, technology began to outpace our ability to understand it.

We are blessed (or cursed) to live in interesting times, where high school students regularly use gene editing techniques to invent new life forms, and where advancements in artificial intelligence force policymakers to contemplate widespread, permanent unemployment. Small wonder our old habits of mind—forged in an era of coal, steel, and easy prosperity—fall short. The strong no longer necessarily survive; not all risk needs to be mitigated; and the firm is no longer the optimum organizational unit for our scarce resources.

Ito's ideas are not specific to our moment in history, but adaptive responses to a world with certain characteristics:

1. Asymmetry
In our era, effects are no longer proportional to the size of their source. The biggest change-makers of the future are the small players: “start-ups and rogues, breakaways and indie labs.”

2. Complexity
The level of complexity is shaped by four inputs, all of which are extraordinarily high in today’s world: heterogeneity, interconnection, interdependency and adaptation.

3. Uncertainty
Not knowing is okay. In fact, we’ve entered an age where the admission of ignorance offers strategic advantages over expending resources (subcommittees, think tanks, and sales forecasts) toward the increasingly futile goal of forecasting future events.

When these three conditions are in place, certain guiding principles serve us best. In his book, Ito shares some of the maxims that organize his “anti-disciplinary” Media Lab in a complex and uncertain world.

Emergence over Authority

Complex systems show properties that their individual parts don’t possess, and we call this process “emergence”. For example, life is an emergent property of chemistry. Groups of people also produce a wondrous variety of emergent behaviors—languages, economies, scientific revolutions—when each intellect contributes to a whole that is beyond the abilities of any one person.

Some organizational structures encourage this kind of creativity more than others. Authoritarian systems only allow for incremental changes, whereas nonlinear innovation emerges from decentralized networks with a low barrier to entry. As Stephen Johnson describes in Emergence, when you plug more minds into the system, “isolated hunches and private obsessions coalesce into a new way of looking at the world, shared by thousands of individuals.”

Synthetic biology best exemplifies the type of new field that can arise from emergence. Not to be confused with genetic engineering, which modifies existing organisms, synthetic biology aims to create entirely new forms of life.

Having emerged in the era of open-source software, synthetic biology is becoming an exercise in radical collaboration between students, professors, and a legion of citizen scientists who call themselves biohackers. Emergence has made its way into the lab.

As a result, the cost of sequencing DNA is plummeting at six times the rate of Moore’s Law, and a large Registry of Standard Biological Parts, or BioBricks, now offers genetic components that perform well-understood functions in whatever organism is being created, like a block of Lego.

There is still a place for leaders in an organization that fosters emergence, but the role may feel unfamiliar to a manager from a traditional hierarchy. The new leader spends less time leading and more time “gardening”—pruning the hedges, watering the flowers, and otherwise getting out of the way. (As biologist Lewis Thomas puts it, a great leader must get the air right.)

Pull over Push

“Push” strategies involve directing resources from a central source to sites where, in the leader’s estimation, they are likely to be needed or useful. In contrast, projects that use “pull” strategies attract intellectual, financial and physical resources to themselves just as they are needed, rather than stockpiling them.

Ito is a proponent of the sharing economy, through which a startup might tap into the global community of freelancers and volunteers for a custom-made task force instead of hiring permanent teams of designers, programmers or engineers.

Here's a great example:

When the Fukushima nuclear meltdown happened, Ito was living just outside of Tokyo. The Japanese government took a command-and-control (“push”) approach to the disaster, in which information would slowly climb up the hierarchy, and decisions would then be passed down stepwise to the ground-level workers.

It soon became clear that the government was not equipped to assess or communicate the radioactivity levels of each neighborhood, so Ito and his friends took the problem into their own hands. Pulling in expertise and money from far-flung scientists and entrepreneurs, they formed a citizen science group called Safecast, which built its own GPS-equipped Geiger counters and strapped them to cars for faster monitoring. They launched a website that continues to share data – more than 50 million data points so far – about local environments.

To benefit from these kinds of “pull” strategies, it pays to foster an environment that is rich with weak ties – a wide network of acquaintances from which to draw just-in-time knowledge and resources, as Ito did with Safecast.

Compasses over Maps

Detailed maps can be more misleading than useful in a fast-changing world, where a compass is the tool of choice. In the same way, organizations that plan exhaustively will be outpaced in an accelerating world by ones that are guided by a more encompassing mission.

A map implies a straightforward knowledge of the terrain, and the existence of an optimum route; the compass is a far more flexible tool and requires the user to employ creativity and autonomy in discovering his or her own path.

One advantage to the compass approach is that when a roadblock inevitably crops up, there is no need to go back to the beginning to form another plan or draw up multiple plans for each contingency. You simply navigate around the obstacle and continue in your chosen direction.

It is impossible, in any case, to make detailed plans for a complex and creative organization. The way to set a compass direction for a company is by creating a culture—or set of mythologies—that animates the parts in a common worldview.

In the case of the MIT Media Lab, that compass heading is described in three values: “Uniqueness, Impact, and Magic”. Uniqueness means that if someone is working on a similar project elsewhere, the lab moves on.

Rather than working to discover knowledge for its own sake, the lab works in the service of Impact, through start-ups and physical creations. That commitment was expressed in the lab’s motto “Deploy or die”, but Barack Obama suggested they work on their messaging, so Ito shortened it to “Deploy.”

The Magic element, though hard to define, speaks to the delight that playful originality so often awakens.

Both students and faculty at the lab are there to learn, but not necessarily to be “educated”. Learning is something you pursue for yourself, after all, whereas education is something that’s done to you. The result is “agile, scrappy, permissionless innovation”.

The new job landscape requires more creativity from everybody. The people who will be most successful in this environment will be the ones who ask questions, trust their instincts, and refuse to follow the rules when the rules get in their way.

Other principles discussed in Whiplash include Risk over Safety, Disobedience over Compliance, Practice over Theory, Diversity over Ability, Resilience over Strength, and Systems over Objects.

Mozart’s Brain and the Fighter Pilot

Most of us want to be smarter but have no idea how to go about improving our mental apparatus. We intuitively think that if we raised our IQ a few points, we'd be better off intellectually. This isn't necessarily the case. I know a lot of people with high IQs who make terribly stupid mistakes. The way around this is to improve not our IQ, but our overall cognition.

Cognition, argues Richard Restak, “refers to the ability of our brain to attend, identify, and act.” You can think of this as a melange of our moods, thoughts, decisions, inclinations and actions.

Included among the components of cognition are alertness, concentration, perceptual speed, learning, memory, problem solving, creativity, and mental endurance.

All of these components have two things in common. First, our efficacy at each depends on how well the brain is functioning relative to its capabilities. Second, that efficacy can be improved with the right discipline and the right habits.

Restak convincingly argues that we can make our brains work better by “enhancing the components of cognition.” How we go about improving our brain performance, and thus cognition, is the subject of his book Mozart’s Brain and the Fighter Pilot.

Improving Our Cognitive Power

To improve the brain we need to exercise our cognitive powers. Most of us believe that physical exercise helps us feel better and live healthier; yet how many of us exercise our brain? As with our muscles and our bones, “the brain improves the more we challenge it.”

This is possible because the brain retains a high degree of plasticity; it changes in response to experience. If the experiences are rich and varied, the brain will develop a greater number of nerve cell connections. If the experiences are dull and infrequent, the connections will either never form or die off.

If we’re in stimulating and challenging environments, we increase the number of nerve cell connections. Our brain literally gets heavier, as the number of synapses (connections between neurons) increases. The key that many people miss here is “rich and varied.”

Memory is the most important cognitive function. Imagine if you lost your memory permanently: Would you still be you?

“We are,” Restak writes, “what we remember.” And poor memories are not limited to those who suffer from Alzheimer's disease. While some of us are genetically endowed with superlative memories, the rest of us need not fear.

In a short book on memory, Aristotle suggested that the mind is like a wax tablet, arguing that the passage of time fades the impression unless we take steps to preserve it. He was right in ways he never knew; memory researchers now know that, like a wax tablet, our memory changes every time we access it, due to the plasticity Restak refers to. It can also be molded and improved, at least to a degree.

Long ago, the Greeks hit upon the same idea — mostly starting with Plato — that we don’t have to accept our natural memory. We can take steps to improve it.

Learning and Knowledge Acquisition

When we learn something new, we expand the complexity of our brain. We literally increase our brainpower.

[I]ncrease your memory and you increase your basic intelligence. … An increased memory leads to easier, quicker accessing of information, as well as greater opportunities for linkages and associations. And, basically, you are what you can remember.

Too many of us can’t remember these days because we’ve outsourced our brains. One of the most common complaints at the neurologist's office from people over forty is poor memory. Luckily, most of these people do not suffer from anything neurological; they are simply experiencing the cumulative effect of disuse, a gradual erosion of their memory.

Those who are not depressed (the commonest cause of subjective complaints of memory impairment) are simply experiencing the cumulative effect of decades of memory disuse. Part of this disuse is cultural. Most businesses and occupations seldom demand that their employees recite facts and figures purely from memory. In addition, in some quarters memory is even held in contempt. ‘He’s just parroting a lot of information he doesn’t really understand’ is a common put-down when people are enviously criticizing someone with a powerful memory. Of course, on some occasions, such criticisms are justified, particularly when brute recall occurs in the absence of understanding or context. But I’m not advocating brute recall. I’m suggesting that, starting now, you aim for a superpowered memory, a memory aimed at quicker, more accurate retrieval of information.

Prior to the printing press, we had to use our memories. Epics such as The Odyssey and The Iliad were recited word for word. Today, however, we live in a different world, and we forget that such feats were even possible. Information is everywhere. Thanks to technology, we need not remember much of anything. This both helps and hinders the development of our memory.

[Y]ou should think of the technology of pens, paper, tape recorders, computers, and electronic diaries as an extension of the brain. Thanks to these aids, we can carry incredible amounts of information around with us. While this increase in readily available information is generally beneficial, there is also a downside. The storage and rapid retrieval of information from a computer also exerts a stunting effect on our brain’s memory capacities. But we can overcome this by working to improve our memory by aiming at the development and maintenance of a superpowered memory. In the process of improving our powers of recall, we will strengthen our brain circuits, starting at the hippocampus and extending to every other part of our brain.

Information is only as valuable as what it connects to. Echoing the latticework of mental models, Restak states:

Everything that we learn is stored in the brain within that vast, interlinking network. And everything within that network is potentially connected to everything else.

From this we can draw a reasonable conclusion: if you stop learning, mental capacity declines.

That’s because of the weakening and eventual loss of brain networks. Such brain alterations don’t take place overnight, of course. But over a varying period of time, depending on your previous training and natural abilities, you’ll notice a gradual but steady decrease in your powers if you don’t nourish and enhance these networks.

The Better Network: Your Brain or the Internet

Networking is a fundamental operating principle of the human brain. All knowledge within the brain is based on networking. Thus, any one piece of information can be potentially linked with any other. Indeed, creativity can be thought of as the formation of novel and original linkages.

In his book Weaving the Web: The Original Design and the Ultimate Destiny of the World Wide Web, Tim Berners-Lee, the inventor of the Web, distills the importance of the connections the brain forms.

A piece of information is really defined only by what it’s related to, and how it’s related. There really is little else to meaning. The structure is everything. There are billions of neurons in our brains, but what are neurons? Just cells. The brain has no knowledge until connections are made between neurons. All that we know, all that we are, comes from the way our neurons are connected.

Cognitive researchers now accept that it may not be the size of the human brain that gives it such unique abilities; other animals have large brains as well. Rather, it's the structure: the way our neurons are arranged and linked.

The more you learn, the more you can link. The more you can link, the more you increase the brain's capacity. And the more you increase the capacity of your brain, the better able you’ll be to solve problems and make decisions quickly and correctly. This is real brainpower.
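To make the arithmetic behind “the more you learn, the more you can link” concrete, here is a toy sketch, my own illustration rather than Restak’s: with n items stored in a network, the number of possible pairwise linkages grows as n(n-1)/2, so each new piece of knowledge opens up many potential new associations.

```python
from itertools import combinations

# Toy illustration (not from the book): the number of potential pairwise links
# among n stored items grows quadratically, so each new item adds many
# possible new associations.

def potential_links(items):
    """Return all distinct pairs of items that could be linked."""
    return list(combinations(items, 2))

for n in (5, 10, 20, 40):
    print(f"{n:>3} items -> {len(potential_links(range(n))):>4} potential pairwise links")
```

Doubling the number of stored items roughly quadruples the number of possible pairings, which is one way to read the claim that learning compounds.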

Multidisciplinary Learning

Restak argues that a basic insight about knowledge and intelligence is: “The existence of certain patterns, which underlie the diversity of the world around us and include our own thoughts, feelings, and behaviors.”

Intelligence enhancement therefore involves creating as many neuronal linkages as possible. But in order to do this we have to extricate ourselves from the confining and limiting idea that knowledge can be broken down into separate “disciplines” that bear little relation to one another.

This brings the entire range of ideas into play, rather than just silos of knowledge from human-created specialties. Charlie Munger and Richard Feynman would probably agree that such over-specialization can be quite limiting. As the old proverb goes, the frog in the well knows nothing of the ocean.

Charles Cameron, a game theorist, adds to this conversation:

The entire range of ideas can legitimately be brought into play: and this means not only that ideas from different disciplines can be juxtaposed, but also that ideas expressed in ‘languages’ as diverse as music, painting, sculpture, dance, mathematics and philosophy can be juxtaposed, without first being ‘translated’ into a common language.

Mozart's Brain and the Fighter Pilot goes on to provide 28 suggestions and exercises for enhancing your brain's performance, a few of which we’ll cover in future posts.