I was talking with someone the other day about antifragility, and I realized that while a lot of people use the word, not many have read Antifragile, the book in which Nassim Taleb defines it.
Just as being clear on what constitutes a black swan allowed us to better discuss the subject, so too will defining antifragility.
The classic example of something antifragile is the Hydra, the creature from Greek mythology with numerous heads: when one is cut off, two grow back in its place.
Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better. This property is behind everything that has changed with time: evolution, culture, ideas, revolutions, political systems, technological innovation, cultural and economic success, corporate survival, good recipes (say, chicken soup or steak tartare with a drop of cognac), the rise of cities, cultures, legal systems, equatorial forests, bacterial resistance … even our own existence as a species on this planet. And antifragility determines the boundary between what is living and organic (or complex), say, the human body, and what is inert, say, a physical object like the stapler on your desk.
The antifragile loves randomness and uncertainty, which also means—crucially—a love of errors, a certain class of errors. Antifragility has a singular property of allowing us to deal with the unknown, to do things without understanding them—and do them well. Let me be more aggressive: we are largely better at doing than we are at thinking, thanks to antifragility. I’d rather be dumb and antifragile than extremely smart and fragile, any time.
It is easy to see things around us that like a measure of stressors and volatility: economic systems, your body, your nutrition (diabetes and many similar modern ailments seem to be associated with a lack of randomness in feeding and the absence of the stressor of occasional starvation), your psyche. There are even financial contracts that are antifragile: they are explicitly designed to benefit from market volatility.
Antifragility makes us understand fragility better. Just as we cannot improve health without reducing disease, or increase wealth without first decreasing losses, antifragility and fragility are degrees on a spectrum.
By grasping the mechanisms of antifragility we can build a systematic and broad guide to nonpredictive decision making under uncertainty in business, politics, medicine, and life in general— anywhere the unknown preponderates, any situation in which there is randomness, unpredictability, opacity, or incomplete understanding of things.
It is far easier to figure out if something is fragile than to predict the occurrence of an event that may harm it. Fragility can be measured; risk is not measurable (outside of casinos or the minds of people who call themselves “risk experts”). This provides a solution to what I’ve called the Black Swan problem— the impossibility of calculating the risks of consequential rare events and predicting their occurrence. Sensitivity to harm from volatility is tractable, more so than forecasting the event that would cause the harm. So we propose to stand our current approaches to prediction, prognostication, and risk management on their heads.
In every domain or area of application, we propose rules for moving from the fragile toward the antifragile, through reduction of fragility or harnessing antifragility. And we can almost always detect antifragility (and fragility) using a simple test of asymmetry: anything that has more upside than downside from random events (or certain shocks) is antifragile; the reverse is fragile.
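As a toy illustration of that asymmetry test (my sketch, not Taleb's formal method: the payoff functions, Gaussian shocks, and 5% tolerance are all illustrative assumptions), you can expose a payoff to symmetric random shocks and compare the accumulated upside with the accumulated downside:

```python
import random

def classify(payoff, trials=100_000, seed=42):
    """Toy asymmetry test: hit a payoff function with symmetric
    random shocks and compare total upside with total downside
    relative to the no-shock baseline."""
    rng = random.Random(seed)
    baseline = payoff(0.0)
    gains = losses = 0.0
    for _ in range(trials):
        delta = payoff(rng.gauss(0, 1)) - baseline
        if delta > 0:
            gains += delta
        else:
            losses -= delta
    if gains > 1.05 * losses:      # more upside than downside
        return "antifragile"
    if losses > 1.05 * gains:      # more downside than upside
        return "fragile"
    return "robust"                # roughly symmetric

print(classify(lambda x: x * x))   # convex payoff: antifragile
print(classify(lambda x: -x * x))  # concave payoff: fragile
print(classify(lambda x: x))       # linear payoff: robust
```

A convex exposure (a long option, a Hydra) gains more from a shock than it loses from the shock's mirror image; a concave exposure is the reverse.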
Deprivation of Antifragility
Crucially, if antifragility is the property of all those natural (and complex) systems that have survived, depriving these systems of volatility, randomness, and stressors will harm them. They will weaken, die, or blow up. We have been fragilizing the economy, our health, political life, education, almost everything … by suppressing randomness, volatility, and stressors. Much of our modern, structured world has been harming us with top-down policies and contraptions (dubbed “Soviet-Harvard delusions” in the book) which do precisely this: an insult to the antifragility of systems. This is the tragedy of modernity: as with neurotically overprotective parents, those trying to help are often hurting us the most (see iatrogenics).
Antifragile is the antidote to Black Swans. The modern world may increase technical knowledge but it will also make things more fragile.
… Black Swans hijack our brains, making us feel we “sort of” or “almost” predicted them, because they are retrospectively explainable. We don’t realize the role of these Swans in life because of this illusion of predictability. Life is more, a lot more, labyrinthine than shown in our memory— our minds are in the business of turning history into something smooth and linear, which makes us underestimate randomness. But when we see it, we fear it and overreact. Because of this fear and thirst for order, some human systems, by disrupting the invisible or not so visible logic of things, tend to be exposed to harm from Black Swans and almost never get any benefit. You get pseudo-order when you seek order; you only get a measure of order and control when you embrace randomness.
Complex systems are full of interdependencies— hard to detect— and nonlinear responses. “Nonlinear” means that when you double the dose of, say, a medication, or when you double the number of employees in a factory, you don’t get twice the initial effect, but rather a lot more or a lot less. Two weekends in Philadelphia are not twice as pleasant as a single one— I’ve tried. When the response is plotted on a graph, it does not show as a straight line (“linear”), rather as a curve. In such environments, simple causal associations are misplaced; it is hard to see how things work by looking at single parts.
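The doubling example can be sketched in a couple of lines (the two response functions are my illustrative assumptions, not from the book):

```python
def convex(dose):
    """Response that accelerates: doubling the input
    more than doubles the output."""
    return dose ** 2

def concave(dose):
    """Response that saturates: doubling the input
    less than doubles the output."""
    return dose ** 0.5

print(convex(2) / convex(1))    # 4.0: a lot more than twice
print(concave(2) / concave(1))  # ~1.41: a lot less than twice
```

Only a linear response doubles when you double the dose; in complex systems, responses are almost never linear.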
Man-made complex systems tend to develop cascades and runaway chains of reactions that decrease, even eliminate, predictability and cause outsized events. So the modern world may be increasing in technological knowledge, but, paradoxically, it is making things a lot more unpredictable.
An annoying aspect of the Black Swan problem—in fact the central, and largely missed, point—is that the odds of rare events are simply not computable.
Robustness is not enough.
Consider that Mother Nature is not just “safe.” It is aggressive in destroying and replacing, in selecting and reshuffling. When it comes to random events, “robust” is certainly not good enough. In the long run everything with the most minute vulnerability breaks, given the ruthlessness of time—yet our planet has been around for perhaps four billion years and, convincingly, robustness can’t just be it: you need perfect robustness for a crack not to end up crashing the system. Given the unattainability of perfect robustness, we need a mechanism by which the system regenerates itself continuously by using, rather than suffering from, random events, unpredictable shocks, stressors, and volatility.
Fragile and antifragile are relative — there is no absolute. You may be more antifragile than your neighbor but that doesn't make you antifragile.
The Triad is FRAGILE — ROBUST — ANTIFRAGILE.
Here's an example
All of this leads to some pretty significant conclusions. Often it's impossible to be antifragile, but falling short of that, you should at least be robust, not fragile. How do you become robust? Make sure you're not fragile: eliminate the things that make you so. In an interview, Taleb offers some ideas:
You have to avoid debt because debt makes the system more fragile. You have to increase redundancies in some spaces. You have to avoid optimization. That is quite critical for someone who is doing finance to understand because it goes counter to everything you learn in portfolio theory. … I have always been very skeptical of any form of optimization. In the black swan world, optimization isn’t possible. The best you can achieve is a reduction in fragility and greater robustness.
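The redundancy-versus-optimization point can be made concrete with a toy simulation (my assumptions, not Taleb's: Gaussian demand shocks and a system that fails the first time load exceeds capacity):

```python
import random

def survives(capacity, shocks):
    """A system fails the first time a shock exceeds its capacity."""
    return all(shock <= capacity for shock in shocks)

rng = random.Random(0)
histories = [[rng.gauss(1.0, 0.5) for _ in range(50)] for _ in range(2000)]

# 'Optimized' system: capacity sized exactly to the average load, no slack.
lean = sum(survives(1.0, h) for h in histories) / len(histories)
# Redundant system: same average load, but a large buffer of spare capacity.
buffered = sum(survives(3.0, h) for h in histories) / len(histories)

print(f"optimized survival rate: {lean:.1%}")     # close to 0%
print(f"redundant survival rate: {buffered:.1%}")  # close to 100%
```

The optimized system is efficient on the average day and dead on the first bad one; the redundant system "wastes" capacity almost all the time, which is exactly what buys its robustness.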
If you haven't already, I highly encourage you to read Antifragile.
Image credit: velinov.