The Narratives of History: Applying Lessons from the Past

“History is written by the winners” is the popular view. But your winner may not be my winner. A lot depends on the narrative you are trying to build.

History is rewritten all the time.

Sometimes it is rewritten because new information has come to light, perhaps from an archaeological find or previously classified documents. When this happens, it is exciting. We joyfully anticipate that more information will deepen our understanding.

But rewriting frequently happens in the service of building a cultural or national narrative. We highlight bits of the past that support our perceived identities and willfully ignore the details that don’t fit. We like our history uncomplicated. It’s hard for us to understand our groups or our countries, and by extension ourselves, as both good and not-good at the same time.

Culture is collective memory. It’s the interwoven stories that we use to explain who we are as nations, organizations, or just loosely formed groups.

Many of us belong to multiple cultural groups, but only one national group. Margaret MacMillan, in The Uses and Abuses of History, explains that “Collective memory is more about the present than the past because it is integral to how the group sees itself.” And “while collective memory is usually grounded in fact, it need not be.”

We have seen how people justify all kinds of mistakes to preserve the personal narratives they are invested in, and groups also engage in this behavior. Countries rewrite their histories, from the textbook up, to support how they see themselves now. Instinctively we may recoil from this idea, believing that it’s better to turn over all the rocks and confront what is lurking underneath. However, as MacMillan writes, “It can be dangerous to question the stories people tell about themselves because so much of our identity is both shaped by and bound up with our history. That is why dealing with the past, in deciding on which version we want, or on what we want to remember and to forget, can become so politically charged.”

For example, when Canada’s new war museum opened, controversy immediately ensued because part of the World War II exhibit called attention “to the continuing debate over both the efficacy and the morality of the strategy of the Royal Air Force’s bomber command, which sought to destroy Germany’s capacity to fight on by massive bombing of German industrial and civilian targets.” RAF veterans were outraged that their actions were considered morally ambiguous. Critics of the exhibit charged that the veterans should have the final say because, after all, “they were there.”

We can see that this rationale makes no sense. Galilean relativity reminds us that there is no privileged frame of reference: the pilots who flew the bombing campaigns were embedded in the events themselves, which makes them among the least likely to have an objective understanding of those events. And the ends don’t always justify the means. It is possible to do bad things in the pursuit of morally justified outcomes.

MacMillan warns that the danger of abusing history is that it “flattens out the complexity of human experience and leaves no room for different interpretations of the past.”

Which leaves us asking, What do we want from history? Do we want to learn from it, with the hopes that in doing so we will avoid mistakes by understanding the experiences of others? Or do we want to practice self-justification on a national level, reinforcing what we already believe about ourselves in order to justify what we did and what we are doing? After all, “you could almost always find a basis for your claims in the past if you looked hard enough.”

As with medicine, there is a certain fallibility to history. Our propensity to fool ourselves with self-justified narratives is hard to overcome. If we selectively use the past only to reinforce our claims in the present, then the situation becomes precarious when there is pressure to change. Instead of looking as objectively as possible at history, welcoming historians who challenge us, we succumb to confirmation bias, allowing only those interpretations that are consistent with the narrative we are invested in.

Consider what MacMillan writes about nationalism, which “is a very late development indeed in terms of human history.”

It all started so quietly in the nineteenth century. Scholars worked on languages, classifying them into different families and trying to determine how far back into history they went. They discovered rules to explain changes in language and were able to establish, at least to their own satisfaction, that texts centuries old were written in early forms of, for example, German or French. Ethnographers like the Grimm brothers collected German folk tales as a way of showing that there was something called the German nation in the Middle Ages. Historians worked assiduously to recover old stories and pieced together the history of what they chose to call their nation as though it had an unbroken existence since antiquity. Archaeologists claimed to have found evidence that showed where such nations had once lived, and where they had moved to during the great waves of migrations.

The cumulative result was to create an unreal yet influential version of how nations formed. While it could not be denied that different peoples, from Goths to Slavs, had moved into and across Europe, mingling as they did so with peoples already there, such a view assumed that at some point, generally in the Middle Ages, the music had stopped. The dancing pieces had fallen into their chairs, one for the French, another for the Germans and yet another for the Poles. And there history had fixed them as “nations.” German historians, for example, could depict an ancient German nation whose ancestors had lived happily in their forests from before the time of the Roman Empire and which at some time, probably in the first century A.D., had become recognisably “German.” So — and this was the dangerous question — what was properly the German nation’s land? Or the land of any other “nation”? Was it where the people now lived, where they had lived at the time of their emergence in history, or both?

Would the scholars have gone on with their speculations if they could have seen what they were preparing the way for? The bloody wars that created Italy and Germany? The passions and hatred that tore apart the old multinational Austria-Hungary? The claims, on historical grounds, by new and old nations after World War I for the same pieces of territory? The hideous regimes of Hitler and Mussolini with their elevation of the nation and the race to the supreme good and their breathtaking demands for the lands of others?

When we selectively reach back into the past to justify claims in the present, we reduce the complexity of history and of humanity. This puts us in an awkward position because the situations we are confronted with are inherently complex. If we cut ourselves off from the full scope of history because it makes us uncomfortable, or doesn’t fit with the cultural narrative in which we live, we reduce our ability to learn from the past and apply those lessons to the situations we are facing today.

MacMillan says, “There are also many lessons and much advice offered by history, and it is easy to pick and choose what you want. The past can be used for almost anything you want to do in the present. We abuse it when we create lies about the past or write histories that show only one perspective. We can draw our lessons carefully or badly. That does not mean we should not look to history for understanding, support and help; it does mean that we should do so with care.”

We need to accept that people can do great things while still having flaws. Our heroes don’t have to be perfect, and we can learn just as much from their imperfections as from their achievements.

We have to allow that there are at least two sides to every story, and we have to be willing to listen to both. There are no conflicts in which one side doesn’t feel morally justified in their actions; that’s why your terrorist can be my freedom fighter. History can be an important part of bridging this divide only if we are willing to lift up all the rocks and shine our lights on what is lurking underneath.

The Art of Having an Informed Opinion

“What the pupil must learn, if he learns anything at all, is that the world will do most of the work for you, provided you cooperate with it by identifying how it really works and aligning with those realities. If we do not let the world teach us, it teaches us a lesson.”

— Joseph Tussman

The first thing opinionated people always do is tell you what they think. When someone has an opinion about everything, they want to share it with you. They often tout stats and research as if they were working from an imaginary checklist of facts they need to rattle off to establish themselves as experts in a field they actually know very little about. Because they have an opinion on everything, they are quick to judge others – for their lack of opinions, for their lack of knowledge, for their lack of outrage … the list goes on.

I'm a firm believer that you can learn something from everyone. Sometimes that effort is more time-consuming than others. People who have opinions about everything barf so much noise that it's hard to find the signal. Your brain has to work overtime to figure out if they did the work to come up with their opinions themselves or if they're simply regurgitating some op-ed in a newspaper. Over time, opinionated people also end up in their own prisons and they try to take you with them.

The problem comes from how we see the world. Our opinions are often rooted in how we think the world should work, according to our morals, values, and principles. If we see the world through the lens of our opinions, much of what happens will not agree with us. This is feedback, and how we respond to this feedback is key.

The world never tells you that you're wrong; it only gives you outcomes.

When an outcome is not what you want it to be, things get tough. You can ignore the result and continue to think that you're right. This protects your ego. It also carries the risk of your continuing to believe something that isn't true. Alternatively, you can calibrate your believability on the subject at hand by lowering the odds that you're right. For example, maybe you gave yourself an 85 out of 100 for the ability to hold a firm opinion on this subject, and now you lower your score to 75. If the world continues to provide undesirable outcomes, eventually you get the hint and change your beliefs. Finally, you can give up your opinions and just respond to the world as it is. This option is the hardest.

People who can't change their minds never move forward. Worse still, they see themselves as heroes. And I mean “heroes” in the Hollywood sense. They hold opinions that have been proven wrong over and over again. And they pay a dear price.

They stop getting promoted. Their work colleagues avoid them. Their friends call less often. Their disagreeable dispositions mean that people don't want them around. They are prisoners of their beliefs. They want everyone to see that they're right. If they persist long enough, the only people they have in their circles are people who have the same (incorrect) worldview.

If you insist on having an opinion, carry a mental scorecard. Start it with 50/50 on all subjects and adjust it based on outcomes. Use a decision journal. When you're right – and “right” means that you're right for the right reasons – you raise your score. When you're wrong, lower the score. Over time, you'll calibrate your circle of competence.
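To make the scorecard idea concrete, here is a minimal sketch of one way it could work in code. This is an illustration of my own, not a method from the text: the starting score of 50, the fixed five-point adjustment, and the subject names are all assumptions.

```python
class OpinionScorecard:
    """Track self-assessed confidence per subject, starting at 50/50."""

    def __init__(self, start=50, step=5, lo=0, hi=100):
        self.start, self.step = start, step
        self.lo, self.hi = lo, hi
        self.scores = {}  # subject -> current score

    def record(self, subject, was_right):
        """Raise the score when right for the right reasons; lower it when wrong."""
        score = self.scores.get(subject, self.start)
        score += self.step if was_right else -self.step
        self.scores[subject] = max(self.lo, min(self.hi, score))
        return self.scores[subject]


card = OpinionScorecard()
card.record("monetary policy", was_right=False)     # drops to 45
card.record("monetary policy", was_right=False)     # drops to 40
card.record("software estimation", was_right=True)  # rises to 55
```

Over many entries, the per-subject scores become a rough map of your circle of competence: subjects that keep losing points are ones where you should stop offering firm opinions.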

If that sounds like a lot of work, just say, “I don't have an opinion on that; why don't you tell me how you got to have such a firm one? It sounds like I could learn something.”


Finding Truth in History

If we are to learn from the past, does the account of it have to be true? One would like to think so. Otherwise you might be preparing for the wrong battle. There you are, geared up for mountains, and instead you find swamps. You've done a bunch of reading, trying to understand the terrain you are about to enter, only to find it useless. The books must have been written by crazy people. You are upset and confused. Surely there must be some reliable, objective account of the past. How are you supposed to prepare for the possibilities of the future if you can't trust the accuracy of the reports on anything that has come before?

Why do we study history, anyway? Why keep a record of things that have happened? We fear that if we don’t, we are doomed to repeat history; yet that often doesn’t seem to stop us from repeating it. And we have an annoying tendency to remember only the things that don’t really challenge or upset us. But still we try to capture what we can, through museums and ceremonies and study, because somehow we believe that eventually we will come to learn something about why things happen the way they do. And armed with this knowledge, we might even be able to shape our future.

This “problem of historical truth” is explored by Isaiah Berlin in The Hedgehog and the Fox: An Essay on Tolstoy's View of History. He explains that Tolstoy was driven by a “desire to penetrate to first causes, to understand how and why things happen as they do and not otherwise.” We can understand this goal – because if we know how the world really works, we know everything.

Of course, it's not that simple, and — spoiler alert — Tolstoy never figured it out. But Berlin's analysis can illuminate the challenges we face with history and help us find something to learn from.

Tolstoy's main problem with historical efforts at the time was that they were “nothing but a collection of fables and useless trifles. … History does not reveal causes; it presents only a blank succession of unexplained events.” Seen like this, the study of history is a waste of time, other than for trivia games or pub quizzes. Being able to recite what happened is supremely uninteresting if you can't begin to understand why it happened in the first place.

But Tolstoy was also an expert at tearing down the theories of anyone who attempted to make sense of history and provide the why. He thought that they “must be imposters, since no theories can possibly fit the immense variety of possible human behavior, the vast multiplicity of minute, undiscoverable causes and effects which form that interplay of men and nature which history purports to record.”

And therein lies the spectrum of the problem for Tolstoy. History is more than just factoids, but its complexity makes it difficult for us to learn exactly why things happened the way they did. A battle is more than dates and times, but trying to trace the real impact of the decisions of Napoleon or Churchill is a fool's errand. There is too much going on – too many decisions and interactions happening in every moment – for us to be able to conclude cause and effect with any certainty. After leaving an ice cube to melt on a table, you can't untangle exactly what happened with each molecule from the puddle. That doesn't mean we can't learn from history; it means only that we need to be careful with the lessons we draw and the confidence we have in them.

Berlin explains:

There is a particularly vivid simile [in War and Peace] in which the great man is likened to the ram whom the shepherd is fattening for slaughter. Because the ram duly grows fatter, and perhaps is used as a bellwether for the rest of the flock, he may easily imagine that he is the leader of the flock, and that the other sheep go where they go solely in obedience to his will. He thinks this and the flock may think it too. Nevertheless the purpose of his selection is not the role he believes himself to play, but slaughter – a purpose conceived by beings whose aims neither he nor the other sheep can fathom. For Tolstoy, Napoleon is just such a ram, and so to some degree is Alexander, and indeed all the great men of history.

Arguing against this view of history was N. I. Kareev, who said:

…it is men, doubtless, who make social forms, but these forms – the ways in which men live – in their turn affect those born into them; individual wills may not be all-powerful, but neither are they totally impotent, and some are more effective than others. Napoleon may not be a demigod, but neither is he a mere epiphenomenon of a process which would have occurred unaltered without him.

This means that studying the past is important for making better decisions in the future. If we can't always follow the course of cause and effect, we can at least discover some very strong correlations and act accordingly.

We have a choice between these two perspectives: Either we can treat history as an impenetrable fog, or we can figure out how to use history while accepting that each day might reveal more and we may have to update our thinking.

Sound familiar? Sounds a lot like the scientific method to me – a preference for updating the foundation of knowledge versus being adrift in chaos or attached to a raft that cannot be added to.

Berlin argues that Tolstoy spent his life trying to find a theory strong enough to unify everything. A way to build a foundation so strong that all arguments would crumble against it. Although that endeavor was ambitious, we don't need to fully understand the why of history in order to be able to learn from it. We don't need the foundation of the past to be solid and fixed in order to gain some insight into our future. We can still find some truth in history.


Funnily enough, Berlin clarifies that Tolstoy “believed that only by patient empirical observation could any knowledge be obtained.” But he also believed “that simple people often know the truth better than learned men, because their observation of men and nature is less clouded by empty theories.”

Unhelpfully, Tolstoy's position amounts to “the more you know, the less you learn.”

The answer to finding truth in history is not to be found in Tolstoy's writing. He was looking for “something too indivisibly simple and remote from normal intellectual processes to be assailable by the instruments of reason, and therefore, perhaps, offering a path to peace and salvation.” He never was able to conclude what that might be.

But there might be an answer in how Berlin interprets Tolstoy's major dissonance in life, the discrepancy that drove him and was never resolved. Tolstoy “tried to resolve the glaring contradiction between what he believed about men and events, and what he thought he believed, or ought to believe.”

Finding truth in history is about understanding that this truth is not absolute. In this sense, truth is based on perspective. The perspective of the person who captured it and the person interpreting it. And the perspective of the translators and editors and primary sources. We don't get to be invisible observers of moments in the past, and we don't get to go into other minds. The best we can do is keep our eyes open and keep our biases in check. And what history can teach us is found not just in the moments it tries to describe, but also in what we choose to look at and how we choose to represent it.

Loops of Progress, or How Modern Are You?

On your way to work, you grab breakfast from one of the dozen coffee shops you pass. Most of the goods you buy get delivered right to your door. If you live in a large city and have a car, you barely use it, preferring Uber or ride-sharing services. You feel modern. Your parents didn’t do any of this. Most of their meals were consumed at home, and they took their cars everywhere, in particular to purchase all the stuff they needed.

You think of your life as being so different from theirs. It is. You think of this as progress. It isn’t.

We tend to consider social development as occurring in a straight line: we progressed from A to B to C, with each step being more advanced and, we assume, better than the one before. This perception isn’t always accurate, though. Part of learning from the past is appreciating that we humans have tried many different ways to organize ourselves, with lots of repetitions. If we want success now, we need to understand our past efforts in order to see what changes might be needed this time around.

Would you be surprised to learn that in Victorian London (the nineteenth century), the vast majority of people ate their food on the run? That ride sharing was common? Or that you could purchase everything you needed without ever leaving your house?

To be fair, these situations didn’t exist in the exact instantiations that they do today. Obviously, the technologies behind them didn’t exist back then. But while the parallels are not exact, they are worth exploring, if only to remind us that no matter the array of pressures we face as a society, there are only so many ways we can organize ourselves.

To start with, street food was the norm. All classes except the very wealthy (thus, essentially, anyone who worked) ate on the run. At outdoor stalls or indoor counters. Food purchased from street vendors or chophouses (the Victorian equivalent of fast-food outlets). Food was purchased and consumed outside of the home, on the commute to or from work.

Why? Why would everyone from the middle classes to the working poor eat out?

Unlike today, eating out was cheaper then. As Judith Flanders explains in The Victorian City:

Today, eating out is more expensive than cooking at home, but in the nineteenth century the situation was reversed. Most of the working class lived in rooms, not houses. They might have had access to a communal kitchen, but more often they cooked in their own fireplace: to boil a kettle before going to work, leaving the fire to burn when there was no one home, was costly, time-consuming and wasteful. … Several factors — the lack of storage space, routine infestations of vermin and being able, because of the cost, to buy food only in tiny quantities — meant that storing any foodstuff, even tea, overnight was unusual.

Even food delivery isn’t new.

Every eating place expected to deliver meals, complete with cutlery, dishes and even condiments, which were brought by waiters who then stayed on, if wanted, to serve. Endless processions of meals passed through the streets daily. … Large sums of money were not necessary for this service.

People need to eat. It’s fundamental. No matter what living conditions we find ourselves in, the drive away from starvation means that we are willing to experiment in how we organize to get our food.

Public transportation took hold in Victorian London and is another interesting point of comparison. Then, its use was not due to a sense of civic responsibility or concerns about the environment. Public transportation succeeded because it was faster. Most cities had grown organically, and streets were not designed for the volume they had to carry in the nineteenth century. There was no flow, and there were no traffic rules. The population was swelling, and the road surfaces were rough enough to be devastating even to today’s SUVs. It was simply painful to get anywhere.

Thus the options exploded. Buses and cabs to get about the city. Stagecoaches and the railroad for longer excursions (and commutes!). And the Underground. Buses “increased the average speed of travel to nearly six miles an hour; with the railway this figure rose to over twelve, sometimes double that.” Public transportation allowed people to move faster, and “therefore, areas that had traditionally been on the edges of London now housed commuters.”

As a direct consequence of the comparable efficiency of the public transportation system, “most people could not imagine ever owning a private carriage. It was not just the cost of the carriage itself, of the horse and its accoutrements — harnesses and so on — but the running costs: the feed and care of the horse, the stabling, as well as the taxes that were imposed on carriages throughout the century.” As well as the staff. A driver, footmen, their salaries and uniforms.

A form of ride-sharing was also common then. For travel outside of the city, one could hire a post-chaise. “A post-chaise was always hired privately, to the passenger’s own schedule, but the chaise, horses, driver and postboys all belonged to the coaching inn or a local proprietor.”

Aside from the cost of owning your own transportation, neither the work day nor the city infrastructure was designed for reliance on individual transport. London in the nineteenth century (and to a large extent today) functioned better with an extensive public transport system.

Finally, living in London in the nineteenth century was very much about survival. There was no social safety net. You worked or you died. And given the concentration of wealth in the top tier of society, there was a lot of competition among the working poor for a slight edge that would mean the difference between living another day and starvation.

This situation is likely part of the reason that sellers went to buyers, rather than the other way around. Unlike today, when so many bookstores are owned by the same company or when a conglomerate makes multiple brands of “unique” luxury goods, a watercress girl owned and sold only the watercress she could carry. And this watercress was no different from the bundles the girl one street over had. The competition to sell was fierce.

And so, as Flanders describes, in the first half of the nineteenth century, street vendors in all neighborhoods sold an astonishing array of goods and services. First chimney sweeps, then milkmaids; “the next sellers were the watercress girls, followed by the costermongers, then the fishmongers’, the butchers’ and the bakers’ boys to take the daily orders.” Next came the guy selling horsemeat.

Other goods regularly available from itinerant sellers in the suburbs included: footstools; embroidery frames; clothes horses, clothes-pegs and clothes line; sponges, chamois leathers, brushes and brooms; kitchen skewers, toasting-forks and other tinware; razors and penknives; trays, keyrings, and small items of jewellery; candlesticks, tools, trivets, pots and pans; bandboxes and hatboxes; blackleading for kitchen ranges and grates, matches and glue; china ornaments and crockery; sheets, shirts, laces, thread, ribbons, artificial flowers, buttons, studs, handkerchiefs; pipes, tobacco, snuff, cigars; spectacles, hats, combs and hairbrushes; firewood and sawdust.

You didn’t have to leave your house to purchase items for meeting your daily needs.

This is not to say that Victorian London had everything figured out or that progress is always a loop. For example, there is no time in history in which it was better to be a woman than it is now, and modern medicine and the scientific method are significant improvements over what came before. But reading these accounts of how London functioned almost two hundred years ago hints that a lot of what we consider modern innovations have been tried before.

Maybe ways of organizing come and go depending on time and place. When things are useful, they appear; as needs change, those things disappear. There really is no new way of doing business.

But we can look at the impact of social progress, how it shapes communities, and what contributes to its ebb and flow. Flanders notes that in the second half of the nineteenth century, there was a shift to going out to shop in stores. What changes did this give rise to? And how did those changes contribute to the loop we are experiencing and to our current desire to have everything brought to us?

Zero — Invented or Discovered?

It seems almost a bizarre question. Who thinks about whether zero was invented or discovered? And why is it important?

Answering this question, however, can tell you a lot about yourself and how you see the world.

Let’s break it down.

“Invented” implies that humans created the zero and that without us, the zero and its properties would cease to exist.

“Discovered” means that although the symbol is a human creation, what it represents would exist independently of any human ability to label it.

So do you think of the zero as a purely mathematical function, and by extension think of all math as a human construct like, say, cheese or self-driving cars? Or is math, and the zero, a symbolic language that describes the world, the content of which exists completely independently of our descriptions?

The zero is now a ubiquitous component of our understanding.

The concept is so basic it is routinely mastered by the pre-kindergarten set. Consider the equation 3-3=0. Nothing complicated about that. It is second nature to us that we can represent “nothing” with a symbol. It makes perfect sense now, in 2017, and it's so common that we forget that zero was a relatively late addition to the number scale.

Here's a fact that's amazing to most people: the zero is actually younger than mathematics. Pythagoras’s famous conclusion — that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides — was achieved without a zero. As was Euclid’s entire Elements.

How could this be? It seems surreal, given the importance the zero now has to mathematics, computing, language, and life. How could someone figure out the complex geometry of triangles, yet not realize that nothing was also a number?

Tobias Dantzig, in Number: The Language of Science, offers this as a possible explanation: “The concrete mind of the ancient Greeks could not conceive the void as a number, let alone endow the void with a symbol.” This gives us a good direction for finding the answer to the original question because it hints that you must first understand the concept of the void before you can name it. You need to see that nothingness still takes up space.

It was thought, and sometimes still is, that the number zero was invented in the pursuit of ancient commerce. Something was needed as a placeholder; otherwise, 65 would be indistinguishable from 605 or 6050. The zero represents “no units” of the particular place that it holds. So for that last number, we have six thousands, no hundreds, five tens, and no singles.
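The placeholder role of zero is easy to demonstrate. Here is a small illustrative sketch (the function name and output format are my own, not from the text) that breaks a number into digit-and-place-value pairs; without a digit reserved for “no units,” the three numbers below would collapse into the same string of marks.

```python
def decompose(n):
    """Break a positive integer into (digit, place-value) pairs, most significant first."""
    digits = str(n)
    return [(int(d), 10 ** (len(digits) - i - 1)) for i, d in enumerate(digits)]

# Without a placeholder digit, these would all read as just "6" and "5":
print(decompose(65))    # [(6, 10), (5, 1)]
print(decompose(605))   # [(6, 100), (0, 10), (5, 1)]
print(decompose(6050))  # [(6, 1000), (0, 100), (5, 10), (0, 1)]
```

The zero contributes no quantity of its own in 605 or 6050; its whole job is to keep the 6 and the 5 in the right columns.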

A happy accident of no great original insight, zero then made its way around the world. In addition to being convenient for keeping track of how many bags of grain you were owed, or how many soldiers were in your army, it turned our number scale into an extremely efficient decimal system. More so than any numbering system that preceded it (and there were many), the zero transformed the power of our other numerals, propelling mathematics into fantastic equations that can explain our world and fuel incredible scientific and technological advances.

But there is, if you look closely, a missing link in this story.

What changed in humanity that made us comfortable with confronting the void and giving it a symbol? And is it reasonable to imagine creating the number without understanding what it represented? Given its properties, can we really think that it started as a placeholder? Or did it contain within it, right from the beginning, the notion of defining the void, of giving it space?

In Finding Zero, Amir Aczel offers some insight. Basically, he claims that the people who discovered the zero must have had an appreciation of the emptiness that it represented. They were labeling a concept with which they were already familiar.

He rediscovered the oldest known zero, on a stone tablet dating from 683 CE in what is now Cambodia.

On his quest to find this zero, Aczel realized that it was far more natural for the zero to first appear in the Far East, rather than in Western or Arab cultures, due to the philosophical and religious understandings prevalent in the region.

Western society was, and still is in many ways, a binary culture. Good and evil. Mind and body. You’re either with us or against us. A patriot or a terrorist. Many of us naturally try to fit our world into these binary understandings. If something is “A,” then it cannot be “not A.” The very definition of “A” is that it is not “not A.” Something cannot be both.

Aczel writes that this duality is not at all reflected in much Eastern thought. He describes the catuskoti, found in early Buddhist logic, that presents four possibilities, instead of two, for any state: that something is, is not, is both, or is neither.

At first, a typical Western mind might rebel against this kind of logic. My father is either bald or not bald. He cannot be both and he cannot be neither, so what is the use of these two other almost nonsensical options?

A closer examination of our language, though, reveals that the expression of the non-binary is understood, and therefore perhaps more relevant than we think. Take, for example, “you’re either with us or against us.” Is it possible to say “I’m both with you and against you”? Yes. It could mean that you are for the principles but against the tactics. Or that you support the cause even as it conflicts with your values. And to say “I’m neither with you nor against you” could mean that you aren’t supportive of the tactic in question, but won’t do anything to stop it. Or that you just don’t care.

Feelings, in particular, are a realm where the binary is often insufficient. Watching my children, I know that it's possible to be both happy and sad, a traditional binary, at the same time. And the zero itself defies binary categorization. It is something and nothing simultaneously.

Aczel reflects on a conversation he had with a Buddhist monk. “Everything is not everything — there is always something that lies outside of what you may think covers all creation. It could be a thought, or a kind of void, or a divine aspect. Nothing contains everything inside it.”

He goes on to conclude that “Here was the intellectual source of the number zero. It came from Buddhist meditation. Only this deep introspection could equate absolute nothingness with a number that had not existed until the emergence of this idea.”

Which is to say, certain properties of the zero likely were understood conceptually before the symbol came about — nothingness was a thing that could be represented. This idea fits with how we treat the zero today; it may represent nothing, but that nothing still has properties. And investigating those properties demonstrates that there is power in the void — it has something to teach us about how our universe operates.

Further contemplation might illuminate that the zero has something to teach us about existence as well. If we accept zero, the symbol, as being discovered as part of our realization about the existence of nothingness, then trying to understand the zero can teach us a lot about moving beyond the binary of alive/not alive to explore other ways of conceptualizing what it means to be.

Let Go of the Learning Baggage

We all want to learn better. That means retaining information, processing it, and being able to use it when needed. More knowledge means better instincts and better insights into opportunities for both you and your organization. You will ultimately produce better work if you give yourself the space to learn. Yet often organizations get in the way of learning.

How do we learn how to learn? Usually in school, combined with instruction from our parents, we cobble together an understanding that allows us to move forward through the school years until we graduate into a job. Then, because most initial learning comes from doing, less from books, we switch to an on-the-fly approach.

Which is usually an absolute failure. Why? In part, because we layer our social values on top and end up with a hot mess of guilt and fear that stymies the learning process.

Learning is necessary for our success and personal growth. But we can’t maximize the time we spend learning because our feelings about what we ‘should’ be doing get in the way.

We are trained by our modern world to organize our day into mutually exclusive chunks called ‘work’, ‘play’, and ‘sleep’. One is done at the office; the other two are not. We are not allowed to move fluidly between these chunks, or combine them in our 24-hour day. Lyndon Johnson got to nap at the office in the afternoon, likely because he was President and didn’t have to worry about what his boss was going to think. Most of us don’t have this option. And now, in the open-office debacle, we can’t even have a quiet 10 minutes of rest in our cubicles.

We have become trained to equate working with doing. Thus the ‘doing’ has value. We deserve to get paid for this. And, it seems, only this.

What does this have to do with learning?

It’s this same attitude that we apply to the learning process when we are older, with similarly unsatisfying results.

If we are learning for work, then in our brains learning = work. So we have to do it during the day. At the office. And if we are not learning, then we are not working. We think that walking is not learning. It’s ‘taking a break’. We instinctively believe that reading is learning. Having discussions about what you’ve read, however, is often not considered work; again, it’s ‘taking a break’.

To many, working means sitting at your desk for eight hours a day. Being physically present is what counts; mental engagement is optional. It means pushing out emails and rushing to meetings and generally getting nothing done. We’ve looked at the focus aspect of this before. But what about the learning aspect?

Can we change how we approach learning, letting go of the guilt associated with not being visibly active, and embrace what seems counter-intuitive?

Thinking and talking are useful elements of learning. And what we learn in our ‘play’ time can be valuable to our ‘work’ time, and there’s nothing wrong with moving between the two (or combining them) during our day.

When mastering a subject, our brains actually use different types of processing. Barbara Oakley explains in A Mind for Numbers: How to Excel at Math and Science (even if you flunked algebra) that our brain has two general modes of thinking – ‘focused’ and ‘diffuse’ – and both of these are valuable and required in the learning process.

The focused mode is what we traditionally associate with learning. Read, dive deep, absorb. Eliminate distractions and get into the material. Oakley says “the focused mode involves a direct approach to solving problems using rational, sequential, analytical approaches. … Turn your attention to something and bam – the focused mode is on, like the tight, penetrating beam of a flashlight.”

But the focused mode is not the only one required for learning because we need time to process what we pick up, to get this new information integrated into our existing knowledge. We need time to make new connections. This is where the diffuse mode comes in.

As Oakley describes it: “Diffuse-mode thinking is what happens when you relax your attention and just let your mind wander. This relaxation can allow different areas of the brain to hook up and return valuable insights. … Diffuse-mode insights often flow from preliminary thinking that’s been done in the focused mode.”

Relying solely on the focused mode to learn is a path to burnout. We need the diffuse mode to cement our ideas, put knowledge into memory and free up space for the next round of focused thinking. We need the diffuse mode to build wisdom. So why does diffuse mode thinking at work generally involve feelings of guilt?

Oakley’s recommendations for ‘diffuse-mode activators’ are: go to the gym, walk, play a sport, go for a drive, draw, take a bath, listen to music (especially without words), meditate, sleep. Um, aren’t these all things to do in my ‘play’ time? And sleep? It’s a whole time chunk on its own.

Most organizations do not promote a culture that allows these activities to be integrated into the work day. Go to the gym on your lunch. Sleep at home. Meditate on a break. Essentially, do these things while we are not paying you.

We internalize this way of thinking, associating the value of getting paid with the value of executing our task list. If something doesn’t directly contribute, it’s not valuable. If it’s not valuable, I need to do it in my non-work time or not at all. This is learned behavior from our organizational culture, and it essentially communicates that our leaders would rather see us do less than trust in the potential payoff of pursuits that are less visible or slower to pay off. The ability to see something is often a large component of trust. So if we are doing any of these ‘play’ activities at work, which are invisible in terms of their contribution to the learning process, we feel guilty because we don’t believe we are doing what we get paid to do.

If you aren’t the CEO or the VP of HR, you can’t conjure up a policy that says ‘all employees shall do something meaningful away from their desks each day and won’t be judged for it’, so what can you do to learn better at work? Find a way to let go of the guilt baggage when you invest in proven, effective learning techniques that are out of sync with your corporate culture.

How do you let go of the guilt? How do you not feel it every time you stand up to go for a walk, close your email and put on some headphones, or have a coffee with a colleague to discuss an idea you have? Because sometimes knowing you are doing the right thing doesn’t translate into feeling it, and that’s where guilt comes in.

Guilt is insidious. Not only do we usually feel guilt, but then we feel guilty about feeling guilty. Like, I go to visit my grandmother in her old age home mostly because I feel guilty about not going, and then I feel guilty because I’m primarily motivated by guilt! Like if I were a better person I would be doing it out of love, but I’m not, so that makes me terrible.

Breaking this cycle is hard. Like anything new, it’s going to feel unnatural for a while but it can be done.

How? Be kind to yourself.

This may sound a bit touchy-feely, but it is really just a cognitive-behavioral approach with a bit of mindfulness thrown in. Dennis Tirch has done a lot of research into the positive effects of self-compassion on worry, panic, and fear. And what is guilt but worry that you aren’t doing the right thing, fear that you’re not a good person, and panic about what to do about it?

In his book, The Compassionate-Mind Guide to Overcoming Anxiety, Tirch writes:

the compassion focused model is based on research showing that some of the ways in which we instinctively regulate our response to threats have evolved from the attachment system that operates between infant and mother and from other basic relationships between mutually supportive people. We have specific systems in our brains that are sensitive to the kindness of others, and the experience of this kindness has a major impact on the way we process these threats and the way we process anxiety in particular.

The Dalai Lama defines compassion as “a sensitivity to the suffering of others, with a commitment to do something about it,” and Tirch also explains that we are greatly affected by our compassion toward ourselves.

In order to manage and overcome emotions like guilt that can prevent us from learning and achieving, we need to treat ourselves the same way we would the person we love most in the world. “We can direct our attention to inner images that evoke feelings of kindness, understanding, and support,” writes Tirch.

So the next time you look up from that proposal on the new infrastructure schematics and see that the sun is shining, go for a walk, notice where you are, and give your mind a chance to go into diffuse-mode and process what you’ve been focusing on all morning. And give yourself a hug for doing it.