The first thing we do is try to figure out what went wrong. When people in organizations evaluate poor outcomes, determining what went wrong, and why, is one of the first steps.
Once we have a cause, whether accurate or (often) not, we distribute this information around the organization in the hope that knowing why we made a mistake will prevent us from repeating it.
We attempt to prevent the mistake from ever happening again.
In his masterful book, Seeing What Others Don’t: The Remarkable Ways We Gain Insights, Gary Klein writes:
“Organizations have lots of reasons to dislike errors: they can pose severe safety risks, they disrupt coordination, they lead to waste, they reduce the chance for project success, they erode the culture, and they can result in lawsuits and bad publicity. … In your job as a manager, you find yourself spending most of your time flagging and correcting errors. You are continually checking to see if workers meet their performance standards. If you find deviations, you quickly respond to get everything back on track. It’s much easier and less frustrating to manage by reducing errors than to try to boost insights. You know how to spot errors.”
We hate errors, and we make every effort not to repeat them.
Here’s an idea that I’ve been toying around with recently — we can’t repeat the same error twice, in part because things are always changing.
In his wonderful book of Fragments, Heraclitus writes:
“No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.”
The river changes, and so does the person.
Evolution is blind to failure.
Evolution doesn’t have intent. When a species’ DNA is copied and a variation arises, say a shorter beak or a sweeter taste, it arises without any awareness that the trait might have been tried before. These traits are not purposeful; evolution is blind to previous failures and does not care whether a mutation that failed eight years ago occurs again. This is not a conscious process. What failed to become an advantageous trait two generations ago may become one today. It may be that the environment changed, and where a shorter beak was once preferred, a longer one now offers an advantage, however slight.
By repeating errors, evolution adapts. This is why natural selection works. Artificial selection, on the other hand, makes us fragile because the selection isn’t blind anymore.
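The dynamic above, blind variation that happily repeats old "failures" plus selection against the *current* environment, can be sketched in a toy simulation. Everything here (the beak-length framing, the numbers, the `fitness` and `evolve` functions) is invented for illustration, not drawn from the essay:

```python
import random

random.seed(42)

def fitness(beak_length, optimal):
    """Higher when beak length is closer to the environment's current optimum."""
    return -abs(beak_length - optimal)

def evolve(optimal, generations=200, pop_size=100):
    """Blind variation + selection: mutations recur with no memory of past failures."""
    population = [5.0] * pop_size  # everyone starts with a mid-length beak
    for _ in range(generations):
        # Blind mutation: variation is generated regardless of what failed before.
        offspring = [b + random.gauss(0, 0.3) for b in population]
        # Selection: keep the half best suited to the *current* environment.
        everyone = population + offspring
        everyone.sort(key=lambda b: fitness(b, optimal), reverse=True)
        population = everyone[:pop_size]
    return sum(population) / pop_size  # mean beak length after selection

# Same blind mutation process, two different environments: a short beak that
# "failed" when long beaks were favored succeeds once the optimum shifts.
print(evolve(optimal=8.0))  # population drifts toward long beaks
print(evolve(optimal=2.0))  # the once-losing short beak now wins
```

The point of the sketch is that nothing in `evolve` remembers which mutations lost in earlier generations; when `optimal` changes, the "same mistake" becomes the winning move.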
So why do we fail? One of the reasons for failure is our own ignorance.
“We may err because science has given us only a partial understanding of the world and how it works,” writes Atul Gawande in The Checklist Manifesto. “There are skyscrapers we do not yet know how to build, snowstorms we cannot predict, heart attacks we still haven’t learned how to stop.”
These things are within our grasp, but we are not quite there yet. Human knowledge grows by the day. Knowledge, in this case, can be positive (‘what works’) or negative (‘what doesn’t work’). For example, we can now build skyscrapers more than a hundred stories tall; this knowledge didn’t exist 100 years ago. Thanks to computers and technology, we can now model more variables, and we’re better able to predict the weather.
(In these endeavors we’re improving quickly in terms of knowledge and technology, while the environment changes slower.)
The same water never crosses your foot twice. The world is always changing. What used to be a tailwind is now a headwind, and vice versa.
Excusing Ignorance
We can excuse ignorance when we only have limited understanding, but we cannot excuse ineptitude. Failures when the knowledge exists, and we act contrary to it, become hard to forgive. This is important in the context of organizations because we tend to forgive someone who makes a ‘mistake’ for the first time but punish the person who makes the same ‘mistake’ again. This is a form of artificial selection.
So we punish a person who, whether intentionally or not, is mimicking evolution. Yet we can never really make the ‘same mistake’ twice, because the same exact conditions never exist again. We’re not the same, and neither is the world. (Of course, they are only punished if the outcome is negative.)
I’m not trying to say learning from mistakes is bad, only that it is limited (and a form of artificial selection). It’s a piece of the puzzle of knowledge. But if your process for learning from mistakes doesn’t account for changing knowledge, technology, and environments, you have a blind spot. Things change.
Improving our ability to learn from mistakes involves more than simply determining what went wrong and trying to avoid that again in the future. We need a deeper understanding of the key variables that govern the situation (and their relation to the environment), the decision-making process, and our knowledge at the time of the decision.
Sometimes it’s smart to attempt things without knowledge of previous mistakes, and sometimes it’s not.