Albert Bandura on Acquiring Self-Efficacy and Personal Agency

Psychologist Albert Bandura is famous for his social learning theory, which is really more of a model than a theory.

He stresses the importance of observational learning. Who you spend time with matters. “Learning would be exceedingly laborious, not to mention hazardous, if people had to rely solely on the effects of their own actions to inform them what to do,” Bandura explains.

There is an excerpt in Stronger: Develop the Resilience You Need to Succeed that explains how we can acquire and maintain the factors of personal resilience.

1. Seek to successfully demonstrate and repeatedly practice each of our five factors of personal resilience. Success is a powerful learning tool—Just do it! If the challenge is too large or complex at first, start by taking small steps in the desired direction. Don't try to achieve too much at first. And keep trying until you succeed. The first success is the hardest.

2. Observe resilient people. Use them as role models. Human beings learn largely by observation. Frequent venues where you can watch people exhibiting the skills you wish to acquire. Read books about people who have overcome obstacles similar to those you face. Call or write them. Ask them to share their lessons learned. Their successes will be contagious.

3. Vigorously pursue the encouragement and support of others. Affiliate with supportive and compassionate people who are willing to give of themselves to be supportive of you.

4. Practice self-control. In highly stressful times, myriad physiological and behavioral reactions occur. Physiologically, people experience the fight-or-flight response we mentioned in Chapter One. This cascade of hormones such as adrenalin better prepares you to fight or to flee a threat. They increase your heart rate, muscle strength, and tension. They dramatically improve your memory for certain things while decreasing your ability to remember others, and they cause your blood vessels to shift their priorities. This often results in headaches, cold hands and feet, and even an upset gastrointestinal system. The most significant problem, however, is that this very basic survival mechanism also tends to interfere with rational judgment and problem solving.

According to Bandura, we need to control the stress around us so that it doesn't become excessive, in part because we often act without thinking in stressful situations.

People often act impulsively in reaction to stressful events, sometimes running away from them. Remember the 1999 movie Runaway Bride, starring Richard Gere and Julia Roberts? It was the fictional story of a woman who had a penchant for falling in love and getting engaged, then developing cold feet and leaving her fiancés at the altar. On a more somber note, after the conclusion of the Vietnam War, many veterans chose to retreat to lives of isolation and solitude. The stress of war and the lack of social support motivated many to simply withdraw from society.

Similarly, over many years of clinical practice, we have seen individuals who have great difficulty establishing meaningful relationships after surviving a traumatic or vitriolic divorce. It's hard for them to trust another person after having been “betrayed.” They exhibit approach-avoidance behaviors—engaging in a relationship initially but backing away when it intensifies.

Contrary to these patterns of escape and avoidance, sometimes people will impulsively act aggressively in response to stressful situations. Chronic irritability is often an early warning sign of subsequent escalating aggressive behavior. Rarely, although sometimes catastrophically, people will choose to lie, cheat, or steal in highly stressful situations. For years, psychologists have tried to predict dishonesty using psychological testing. The results have been uninspiring. The reason is that the best predictor of dishonesty is finding oneself in a highly stressful situation. So in highly stressful times, resist the impulsive urges to take the easy way out.

Also, remember to take care of yourself, physically as well as psychologically. Maladaptive self-medication is a common pattern of behavior for people who find themselves in the abyss. Alcohol has long been observed as a chemical crutch. Others that have only recently emerged are the myriad energy drinks on the market. Both of these crutches have been linked to numerous physical ailments and even deaths. If you are looking for the best single physical mechanism to aid you in your ascent from the abyss, it's establishing healthy patterns of rest and sleep.

But note the distinction between controlling and suppressing. Often controlling is impossible, so we suppress and fool ourselves into thinking we're controlling. And suppressing volatility is often a horrible idea, especially in the long run.

Instead of what's intended, we create a coiled spring that most often leads to negative emergent effects. In the end this moves us toward fragility and away from robustness and resiliency.

***

If you're still curious, The Hour Between Dog and Wolf: How Risk Taking Transforms Us, Body and Mind discusses a bit of this topic as well.

10 Principles to Live an Antifragile Life

“How can you think yourself a great man, when the first accident that comes along can wipe you out completely?”

— Euripides

It's one thing to live and another to live your life in a way that is antifragile.

What is Antifragility?

Author Nassim Taleb defines the term antifragile this way:

Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better. This property is behind everything that has changed with time: evolution, culture, ideas, revolutions, political systems, technological innovation, cultural and economic success, corporate survival, good recipes (say, chicken soup or steak tartare with a drop of cognac), the rise of cities, cultures, legal systems, equatorial forests, bacterial resistance … even our own existence as a species on this planet.

Things that are antifragile benefit from randomness, uncertainty, and variation.

Living an Antifragile Life

Now that we have this knowledge, what should we do with it?

Life is messy and seemingly getting messier. Can we position ourselves to gain from this disorder … to not only recover from mistakes but get stronger?

The answer is yes. There are principles we can follow to help us.

Buster Benson has some excellent thoughts on how to live an antifragile life, giving us these core principles taken from the Antifragile book:

  • Stick to simple rules
  • Build in redundancy and layers (no single point of failure)
  • Resist the urge to suppress randomness
  • Make sure that you have your soul in the game
  • Experiment and tinker — take lots of small risks
  • Avoid risks that, if lost, would wipe you out completely
  • Don’t get consumed by data
  • Keep your options open
  • Focus more on avoiding things that don’t work than trying to find out what does work
  • Respect the old — look for habits and rules that have been around for a long time

In short, stop optimizing for today or tomorrow and start playing the long game. That means being less efficient in the short term but more effective in the long term. It's easy to optimize for today: simply spend more money than you make, or eat food designed in a lab to make you eat more and more. But if you play the long game you stop optimizing and start thinking ahead to the second-order consequences of your decisions.

It's hard to play the long game when there is a visible negative as the first step. You have to be willing to look like an idiot in the short term to look like a genius in the long term. I believe that's why so many people play the short game. But as the old adage goes, when you do what everyone else does, don't be surprised when you get the same results everyone else does.

Breakpoint: When Bigger is Not Better

Jeff Stibel's book Breakpoint: Why the Web Will Implode, Search Will Be Obsolete, and Everything Else You Need to Know About Technology Is in Your Brain is an interesting read. The book is about “understanding what happens after a breakpoint. Breakpoints can't and shouldn't be avoided, but they can be identified.”

In any system continuous growth is impossible. Everything reaches a breakpoint. The real question is how the system responds to this breakpoint. “A successful network has only a small collapse, out of which a stronger network emerges wherein it reaches equilibrium, oscillating around an ideal size.”

The book opens with an interesting example.

In 1944, the United States Coast Guard brought 29 reindeer to St. Matthew Island, located in the Bering Sea just off the coast of Alaska. Reindeer love eating lichen, and the island was covered with it, so the reindeer gorged, grew large, and reproduced exponentially. By 1963, there were over 6,000 reindeer on the island, most of them fatter than those living in natural reindeer habitats.

There were no human inhabitants on St. Matthew Island, but in May 1965 the United States Navy sent an airplane over the island, hoping to photograph the reindeer. There were no reindeer to be found, and the flight crew attributed this to the fact that the pilot didn’t want to fly very low because of the mountainous landscape. What they didn’t realize was that all of the reindeer, save 42 of them, had died. Instead of lichen, the ground was covered with reindeer skeletons.

The network of St. Matthew Island reindeer had collapsed: the result of a population that grew too large and consumed too much. The reindeer crossed a pivotal point, a breakpoint, when they began consuming more lichen than nature could replenish. Lacking any awareness of what was happening to them, they continued to reproduce and consume. The reindeer destroyed their environment and, with it, their ability to survive. Within a few short years, the remaining 42 reindeer were dead. Their collapse was so extreme that for these reindeer there was no recovery.

— Jeff Stibel

In the wild, of course, reindeer can move if they run out of lichen, which allows lichen in the area to be replenished before they return.

Nature rarely allows the environment to be pushed so far that it collapses. Ecosystems generally keep life balanced. Plants create enough oxygen for animals to survive, and the animals, in turn, produce carbon dioxide for the plants. In biological terms, ecosystems create homeostasis.

We evolved to reproduce and consume whatever food is available.

Back when our ancestors started climbing down from the trees, this was a good thing: food was scarce so if we found some, the right thing to do was gorge. As we ate more, our brains were able to grow, becoming larger than those of any other primates. This was a very good thing. But brains consume disproportionately large amounts of energy and, as a result, can only grow so big relative to body size. After that point, increased calories are actually harmful. This presents a problem for humanity, sitting at the top of the food pyramid. How do we know when to stop eating? The answer, of course, is that we don’t. People in developed nations are growing alarmingly obese, morbidly so. Yet we continue to create better food sources, better ways to consume more calories with less bite.

Mother Nature won’t help us because this is not an evolutionary issue: most of the problems that result from eating too much happen after we reproduce, at which point we are no longer evolutionarily important. We are on our own with this problem. But that is where our big brains come in. Unlike reindeer, we have enough brainpower to understand the problem, identify the breakpoint, and prevent a collapse.

We all know that physical things have limits. But so do the things we can't see or feel. Knowledge is an example. “Our minds can only digest so much. Sure, knowledge is a good thing. But there is a point at which even knowledge is bad.” This is information overload.

We have been conditioned to believe that bigger is better and this is true across virtually every domain. When we try to build artificial intelligence, we start by shoveling as much information into a computer as possible. Then we stare dumbfounded when the machine can't figure out how to tie its own shoes. When we don't get the results we want, we just add more data. Who doesn't believe that the smartest person is the one with the biggest memory and the most degrees, that the strongest person has the largest muscles, that the most creative person has the most ideas?

Growth is great until it goes too far.

[W]e often destroy our greatest innovations by the constant pursuit of growth. An idea emerges, takes hold, crosses the chasm, hits a tipping point, and then starts a meteoric rise with seemingly limitless potential. But more often than not, it implodes, destroying itself in the process.

Growth isn't bad. It's just not as good as we think.

Nature has a lesson for us if we care to listen: the fittest species are typically the smallest. The tiniest insects often outlive the largest lumbering animals. Ants, bees, and cockroaches all outlived the dinosaurs and will likely outlive our race. … The deadliest creature is the mosquito, not the lion. Bigger is rarely better in the long run. What is missing—what everyone is missing—is that the unit of measure for progress isn't size, it's time.

Of course, “The world is a competitive place, and the best way to stomp out potential rivals is to consume all the available resources necessary for survival.”

Otherwise, the risk is that someone else will come along and use those resources to grow and eventually encroach on the ones we need to survive.

Networks rarely approach limits slowly “… they often don't know the carrying capacity of their environments until they've exceeded it. This is a characteristic of limits in general: the only way to recognize a limit is to exceed it.” This is what happened with MySpace. It grew too quickly. Pages became cluttered and confusing. There was too much information. It “grew too far beyond its breakpoint.”

There is an interesting paradox here, though: unless you want to keep your social network small, the best way to keep the site clean is actually to use a filter that prevents you from seeing a lot of information, which creates a filter bubble.

Stibel offers three phases to any successful network.

first, the network grows and grows and grows exponentially; second, the network hits a breakpoint, where it overshoots itself and overgrows to a point where it must decline, either slightly or substantially; finally, the network hits equilibrium and grows only in the cerebral sense, in quality rather than in quantity.
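These three phases can be sketched with a toy population model (my own illustration, not from the book): a delayed-logistic update in which growth responds to last period's crowding rather than the current one. That lag is what carries the population past its carrying capacity before the system corrects and settles into oscillation around equilibrium, much like the St. Matthew Island reindeer, minus the total collapse.

```python
# Hypothetical delayed-logistic model sketching Stibel's three phases:
# exponential growth, overshoot past the breakpoint, then damped
# oscillation toward equilibrium around the carrying capacity K.

def simulate(r=0.5, K=1000.0, n0=10.0, steps=200):
    history = [n0, n0]  # the population needs one step of memory (the delay)
    for _ in range(steps):
        prev, curr = history[-2], history[-1]
        # Growth this step reacts to *last* step's crowding; the lag is
        # what makes the population overshoot before it corrects.
        history.append(curr + r * curr * (1 - prev / K))
    return history

pop = simulate()
peak = max(pop)
print(f"peak {peak:.0f} overshoots K=1000; settles near {pop[-1]:.0f}")
```

With a stronger growth rate or a longer lag, the same model overshoots so far that it crashes instead of stabilizing, which is the difference between MySpace and a network that finds its equilibrium.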

He offers some advice:

Rather than endless growth, the goal should be to grow as quickly as possible—what technologists call hypergrowth—until the breakpoint is reached. Then stop and reap the benefits of scale alongside stability.

Breakpoint goes on to predict the fall of Facebook.

Nassim Taleb and the Seven Rules of Anti-Fragility

Nassim Taleb, writing in an edge.org piece:

Something central, very central, is missing in historical accounts of scientific and technological discovery. The discourse and controversies focus on the role of luck as opposed to teleological programs (from telos, “aim”), that is, ones that rely on pre-set direction from formal science.

He continues:

The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than to those coming from the teleological, outside physics—even after discounting for the sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This makes us live in the contradiction that we largely got here to where we are thanks to undirected chance, but we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.

The point we will be making here is that logically, neither trial and error nor “chance” and serendipity can be behind the gains in technology and empirical science attributed to them. By definition chance cannot lead to long term gains (it would no longer be chance); trial and error cannot be unconditionally effective: errors cause planes to crash, buildings to collapse, and knowledge to regress.

As for the seven rules of anti-fragility:

  1. Convexity is easier to attain than knowledge.
  2. A “1/N” strategy is almost always best with convex strategies.
  3. Serial optionality
  4. Nonnarrative Research
  5. Theory is born from (convex) practice more often than the reverse (the nonteleological property)
  6. Premium for simplicity
  7. Better cataloguing of negative results
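Rule 2 can be made concrete with a back-of-the-envelope calculation (the numbers are hypothetical, not Taleb's): when each bet has a capped loss and a large possible payoff, splitting a budget 1/N across many independent bets leaves the expected value unchanged but makes total ruin far less likely.

```python
# Sketch of the "1/N" rule with made-up numbers: each long-shot bet wins
# with probability 2%. Betting everything on one option risks losing it
# all 98% of the time; spreading the same budget across 50 independent
# options has the same expected value but a much smaller chance of ruin.

def prob_total_loss(n_bets, p_win=0.02):
    # Probability that every one of n independent long-shot bets misses.
    return (1 - p_win) ** n_bets

concentrated = prob_total_loss(1)   # whole budget on a single bet
spread = prob_total_loss(50)        # 1/N across 50 bets
print(f"P(total loss): concentrated {concentrated:.2f}, spread {spread:.2f}")
```

The convexity matters: this only works when each individual loss is small and bounded while the wins are open-ended, which is exactly the asymmetry the other rules protect.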

Taleb is the author of Antifragile.

Five Rules to Help You Learn to Love Volatility

Nassim Taleb's book Antifragile: Things That Gain from Disorder is having a profound impact on how I see the world.

In this adapted piece from Antifragile, which appeared in the Wall Street Journal, Taleb offers five policy rules that can help us establish antifragility as a principle of our socioeconomic life.

***

Rule 1: Think of the economy as being more like a cat than a washing machine.

We are victims of the post-Enlightenment view that the world functions like a sophisticated machine, to be understood like a textbook engineering problem and run by wonks. In other words, like a home appliance, not like the human body. If this were so, our institutions would have no self-healing properties and would need someone to run and micromanage them, to protect their safety, because they cannot survive on their own.

By contrast, natural or organic systems are antifragile: They need some dose of disorder in order to develop. Deprive your bones of stress and they become brittle. This denial of the antifragility of living or complex systems is the costliest mistake that we have made in modern times. Stifling natural fluctuations masks real problems, causing the explosions to be both delayed and more intense when they do take place. As with the flammable material accumulating on the forest floor in the absence of forest fires, problems hide in the absence of stressors, and the resulting cumulative harm can take on tragic proportions.

And yet our economic policy makers have often aimed for maximum stability, even for eradicating the business cycle. “No more boom and bust,” as voiced by the U.K. Labor leader Gordon Brown, was the policy pursued by Alan Greenspan in order to “smooth” things out, thus micromanaging us into the current chaos. Mr. Greenspan kept trying to iron out economic fluctuations by injecting cheap money into the system, which eventually led to monstrous hidden leverage and real-estate bubbles. On this front there is now at least a glimmer of hope, in the U.K. rather than the U.S., alas: Mervyn King, governor of the Bank of England, has advocated the idea that central banks should intervene only when an economy is truly sick and should otherwise defer action.

Promoting antifragility doesn't mean that government institutions should avoid intervention altogether. In fact, a key problem with overzealous intervention is that, by depleting resources, it often results in a failure to intervene in more urgent situations, like natural disasters. So in complex systems, we should limit government (and other) interventions to important matters: The state should be there for emergency-room surgery, not nanny-style maintenance and overmedication of the patient—and it should get better at the former.

In social policy, when we provide a safety net, it should be designed to help people take more entrepreneurial risks, not to turn them into dependents. This doesn't mean that we should be callous to the underprivileged. In the long run, bailing out people is less harmful to the system than bailing out firms; we should have policies now that minimize the possibility of being forced to bail out firms in the future, with the moral hazard this entails.

Rule 2: Favor businesses that benefit from their own mistakes, not those whose mistakes percolate into the system.

Some businesses and political systems respond to stress better than others. The airline industry is set up in such a way as to make travel safer after every plane crash. A tragedy leads to the thorough examination and elimination of the cause of the problem. The same thing happens in the restaurant industry, where the quality of your next meal depends on the failure rate in the business—what kills some makes others stronger. Without the high failure rate in the restaurant business, you would be eating Soviet-style cafeteria food for your next meal out.

These industries are antifragile: The collective enterprise benefits from the fragility of the individual components, so nothing fails in vain. These businesses have properties similar to evolution in the natural world, with a well-functioning mechanism to benefit from evolutionary pressures, one error at a time.

By contrast, every bank failure weakens the financial system, which in its current form is irremediably fragile: Errors end up becoming large and threatening. A reformed financial system would eliminate this domino effect, allowing no systemic risk from individual failures. A good starting point would be reducing the amount of debt and leverage in the economy and turning to equity financing. A firm with highly leveraged debt has no room for error; it has to be extremely good at predicting future revenues (and black swans). And when one leveraged firm fails to meet its obligations, other borrowers who need to renew their loans suffer as the chastened lenders lose their appetite to extend credit. So debt tends to make failures spread through the system.

A firm with equity financing can survive drops in income, however. Consider the abrupt deflation of the technology bubble during 2000. Because technology firms were relying on equity rather than debt, their failures didn't ripple out into the wider economy. Indeed, their failures helped to strengthen the technology sector.

Rule 3: Small is beautiful, but it is also efficient.

Experts in business and government are always talking about economies of scale. They say that increasing the size of projects and institutions brings costs savings. But the “efficient,” when too large, isn't so efficient. Size produces visible benefits but also hidden risks; it increases exposure to the probability of large losses. Projects of $100 million seem rational, but they tend to have much higher percentage overruns than projects of, say, $10 million. Great size in itself, when it exceeds a certain threshold, produces fragility and can eradicate all the gains from economies of scale. To see how large things can be fragile, consider the difference between an elephant and a mouse: The former breaks a leg at the slightest fall, while the latter is unharmed by a drop several multiples of its height. This explains why we have so many more mice than elephants.

So we need to distribute decisions and projects across as many units as possible, which reinforces the system by spreading errors across a wider range of sources. In fact, I have argued that government decentralization would help to lower public deficits. A large part of these deficits comes from underestimating the costs of projects, and such underestimates are more severe in large, top-down governments. Compare the success of the bottom-up mechanism of canton-based decision making in Switzerland to the failures of authoritarian regimes in Soviet Russia and Baathist Iraq and Syria.

Rule 4: Trial and error beats academic knowledge.

Things that are antifragile love randomness and uncertainty, which also means—crucially—that they can learn from errors. Tinkering by trial and error has traditionally played a larger role than directed science in Western invention and innovation. Indeed, advances in theoretical science have most often emerged from technological development, which is closely tied to entrepreneurship. Just think of the number of famous college dropouts in the computer industry.

But I don't mean just any version of trial and error. There is a crucial requirement to achieve antifragility: The potential cost of errors needs to remain small; the potential gain should be large. It is the asymmetry between upside and downside that allows antifragile tinkering to benefit from disorder and uncertainty.
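A one-line model makes that asymmetry visible (the numbers are hypothetical, chosen only to illustrate the shape of the payoff): cap the downside of each trial at a small fixed cost, and the same average outcome becomes worth more with volatility than without it.

```python
# Capped-downside payoff: a failed tinkering experiment costs at most 1;
# a success pays off in full. With this convex shape, adding volatility
# around the same mean *raises* the expected payoff (Jensen's inequality).

def payoff(x):
    return max(x, -1)

calm = payoff(0)                                  # no volatility: outcome is the mean
volatile = 0.5 * payoff(-10) + 0.5 * payoff(10)   # same mean (0), high spread
print(f"calm: {calm}, volatile expected payoff: {volatile}")
```

Flip the payoff so losses are open-ended and gains are capped, and the preference reverses: that is the fragile position, which volatility punishes.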

Perhaps because of the success of the Manhattan Project and the space program, we greatly overestimate the influence and importance of researchers and academics in technological advancement. These people write books and papers; tinkerers and engineers don't, and are thus less visible. Consider Britain, whose historic rise during the Industrial Revolution came from tinkerers who gave us innovations like iron making, the steam engine and textile manufacturing. The great names of the golden years of English science were hobbyists, not academics: Charles Darwin, Henry Cavendish, William Parsons, the Rev. Thomas Bayes. Britain saw its decline when it switched to the model of bureaucracy-driven science.

America has emulated this earlier model, in the invention of everything from cybernetics to the pricing formulas for derivatives. They were developed by practitioners in trial-and-error mode, drawing continuous feedback from reality. To promote antifragility, we must recognize that there is an inverse relationship between the amount of formal education that a culture supports and its volume of trial-and-error by tinkering. Innovation doesn't require theoretical instruction, what I like to compare to “lecturing birds on how to fly.”

Rule 5: Decision makers must have skin in the game.

At no time in the history of humankind have more positions of power been assigned to people who don't take personal risks. But the idea of incentive in capitalism demands some comparable form of disincentive. In the business world, the solution is simple: Bonuses that go to managers whose firms subsequently fail should be clawed back, and there should be additional financial penalties for those who hide risks under the rug. This has an excellent precedent in the practices of the ancients. The Romans forced engineers to sleep under a bridge once it was completed.

Because our current system is so complex, it lacks elementary clarity: No regulator will know more about the hidden risks of an enterprise than the engineer who can hide exposures to rare events and be unharmed by their consequences. This rule would have saved us from the banking crisis, when bankers who loaded their balance sheets with exposures to small probability events collected bonuses during the quiet years and then transferred the harm to the taxpayer, keeping their own compensation.

***

In these five rules, I have sketched out only a few of the more obvious policy conclusions that we might draw from a proper appreciation of antifragility. But the significance of antifragility runs deeper. It is not just a useful heuristic for socioeconomic matters but a crucial property of life in general. Things that are antifragile only grow and improve under adversity. This dynamic can be seen not just in economic life but in the evolution of all things, from cuisine, urbanization and legal systems to our own existence as a species on this planet.

We all know that the stressors of exercise are necessary for good health, but people don't translate this insight into other domains of physical and mental well-being. We also benefit, it turns out, from occasional and intermittent hunger, short-term protein deprivation, physical discomfort and exposure to extreme cold or heat. Newspapers discuss post-traumatic stress disorder, but nobody seems to account for post-traumatic growth. Walking on smooth surfaces with “comfortable” shoes injures our feet and back musculature: We need variations in terrain.

Modernity has been obsessed with comfort and cosmetic stability, but by making ourselves too comfortable and eliminating all volatility from our lives, we do to our bodies and souls what Mr. Greenspan did to the U.S. economy: We make them fragile. We must instead learn to gain from disorder.

***

Still curious? Buy the book. It'll change the way you see the world.