
Can We Reason Our Way to a Better Morality?

In a 2012 TED talk, NYU professor Rebecca Goldstein, author of Plato at the Googleplex, sat down with her husband, Harvard professor Steven Pinker, for an interesting (and polarizing) conversation: Does pure reason eventually lead us to a better morality?

Goldstein argues yes: moral progress is necessarily reason-based, and this should give us hope. Arguing against that claim would itself require reasoning! Pinker, author of the controversial but well-received The Better Angels of Our Nature, takes the devil's advocate position (though clearly for rhetorical effect). Perhaps reason is overrated? Perhaps morality is indeed a matter of the heart?

The animated (literally) conversation is below. Here's an interesting excerpt, and if you make it to the end, they also speculate on the things we do today that may eventually be judged harshly by history.

Rebecca: Well, you didn't mention what might be one of our most effective better angels: reason. Reason has muscle. It's reason that provides the push to widen that circle of empathy. Every one of the humanitarian developments that you mentioned originated with thinkers who gave reasons for why some practice was indefensible. They demonstrated that the way people treated some particular group of others was logically inconsistent with the way they insisted on being treated themselves.

Steven: Are you saying that reason can actually change people's minds? Don't people just stick with whatever conviction serves their interests or conforms to the culture that they grew up in?

Rebecca: Here's a fascinating fact about us: Contradictions bother us, at least when we're forced to confront them, which is just another way of saying that we are susceptible to reason. And if you look at the history of moral progress, you can trace a direct pathway from reasoned arguments to changes in the way that we actually feel. Time and again, a thinker would lay out an argument as to why some practice was indefensible, irrational, inconsistent with values already held. Their essay would go viral, get translated into many languages, get debated at pubs and coffee houses and salons, and at dinner parties, and influence leaders, legislators, popular opinion. Eventually their conclusions get absorbed into the common sense of decency, erasing the tracks of the original argument that had gotten us there. Few of us today feel any need to put forth a rigorous philosophical argument as to why slavery is wrong or public hangings or beating children. By now, these things just feel wrong. But just those arguments had to be made, and they were, in centuries past.

 

Still Interested? Check out Pinker on how to educate yourself properly and how to improve your professional writing.

A Visual History of Human Knowledge

Infographics expert Manuel Lima, who brought us the amazing The Book of Trees: Visualizing Branches of Knowledge, has a TED talk on how knowledge grows, which ends up being a fascinating history of visualizations as well as an insightful look into our cultural urge to map what we know.

For a long period of time, we believed in a natural ranking order in the world around us, also known as the great chain of being, or “Scala naturae” in Latin, a top-down structure that normally starts with God at the very top, followed by angels, noblemen, common people, animals, and so on. This idea was actually based on Aristotle's ontology, which classified all things known to man in a set of opposing categories, like the ones you see behind me. But over time, interestingly enough, this concept adopted the branching schema of a tree in what became known as the Porphyrian tree, also considered to be the oldest tree of knowledge.

The branching scheme of the tree was, in fact, such a powerful metaphor for conveying information that it became, over time, an important communication tool to map a variety of systems of knowledge. We can see trees being used to map morality, with the popular tree of virtues and tree of vices, … with these beautiful illustrations from medieval Europe. We can see trees being used to map consanguinity, the various blood ties between people. We can also see trees being used to map genealogy, perhaps the most famous archetype of the tree diagram. … We can see trees even mapping systems of law, the various decrees and rulings of kings and rulers. And finally, of course, also a very popular scientific metaphor, we can see trees being used to map all species known to man. And trees ultimately became such a powerful visual metaphor because in many ways, they really embody this human desire for order, for balance, for unity, for symmetry.

However, nowadays we are really facing new complex, intricate challenges that cannot be understood by simply employing a simple tree diagram. And a new metaphor is currently emerging, and it's currently replacing the tree in visualizing various systems of knowledge. It's really providing us with a new lens to understand the world around us. And this new metaphor is the metaphor of the network. And we can see this shift from trees into networks in many domains of knowledge.

We can see this shift in the way we try to understand the brain. While before, we used to think of the brain as a modular, centralized organ, where a given area was responsible for a set of actions and behaviors, the more we know about the brain, the more we think of it as a large music symphony, played by hundreds and thousands of instruments. This is a beautiful snapshot created by the Blue Brain Project, where you can see 10,000 neurons and 30 million connections. And this is only mapping 10 percent of a mammalian neocortex. We can also see this shift in the way we try to conceive of human knowledge.

These are some remarkable trees of knowledge, or trees of science, by Spanish scholar Ramon Llull. And Llull was actually the precursor, the very first one who created the metaphor of science as a tree, a metaphor we use every single day, when we say, “Biology is a branch of science,” when we say, “Genetics is a branch of science.” But perhaps the most beautiful of all trees of knowledge, at least for me, was created for the French encyclopedia by Diderot and d'Alembert in 1751. This was really the bastion of the French Enlightenment, and this gorgeous illustration was featured as a table of contents for the encyclopedia. And it actually maps out all domains of knowledge as separate branches of a tree.

But knowledge is much more intricate than this. These are two maps of Wikipedia showing the inter-linkage of articles — related to history on the left, and mathematics on the right. And I think by looking at these maps and other ones that have been created of Wikipedia — arguably one of the largest rhizomatic structures ever created by man — we can really understand how human knowledge is much more intricate and interdependent, just like a network.

Emilie Wapnick: Why Some of Us Don’t Have One True Calling

What do you want to be when you grow up? Well, if you're not sure you want to do just one thing for the rest of your life, you're not alone. In this illuminating talk, writer and artist Emilie Wapnick describes the kind of people she calls “multipotentialites”: those who pursue a range of interests and jobs over one lifetime. Are you one?

See, the problem wasn't that I didn't have any interests — it's that I had too many. In high school, I liked English and math and art and I built websites and I played guitar in a punk band called Frustrated Telephone Operator. Maybe you've heard of us.

This continued after high school, and at a certain point, I began to notice this pattern in myself where I would become interested in an area and I would dive in, become all-consumed, and I'd get to be pretty good at whatever it was, and then I would hit this point where I'd start to get bored. And usually I would try and persist anyway, because I had already devoted so much time and energy and sometimes money into this field. But eventually this sense of boredom, this feeling of, like, yeah, I got this, this isn't challenging anymore — it would get to be too much. And I would have to let it go.

But then I would become interested in something else, something totally unrelated, and I would dive into that, and become all-consumed, and I'd be like, “Yes! I found my thing,” and then I would hit this point again where I'd start to get bored. And eventually, I would let it go. But then I would discover something new and totally different, and I would dive into that.

This pattern caused me a lot of anxiety, for two reasons. The first was that I wasn't sure how I was going to turn any of this into a career. I thought that I would eventually have to pick one thing, deny all of my other passions, and just resign myself to being bored. The other reason it caused me so much anxiety was a little bit more personal. I worried that there was something wrong with this, and something wrong with me for being unable to stick with anything. I worried that I was afraid of commitment, or that I was scattered, or that I was self-sabotaging, afraid of my own success.

If you can relate to my story and to these feelings, I'd like you to ask yourself a question that I wish I had asked myself back then. Ask yourself where you learned to assign the meaning of wrong or abnormal to doing many things. I'll tell you where you learned it: you learned it from the culture.

Brené Brown on The Difference Between Guilt and Shame

Brené Brown studies vulnerability, courage, authenticity, and shame. She's a researcher-storyteller and author of Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead, a book that argues we should embrace vulnerability and imperfection in order to live wholeheartedly and engage fully with our lives.

In this TED talk, a follow-up to her earlier talk on vulnerability, she engagingly brings us into the “unspoken epidemic” of shame and explores what happens when people confront their shame head-on.

I think the main point of her two TED talks is that we should embrace our vulnerabilities and expose them to others so we can live more meaningful lives.

Shame is a focus on self, guilt is a focus on behavior. Shame is “I am bad.” Guilt is “I did something bad.” How many of you, if you did something that was hurtful to me, would be willing to say, “I'm sorry. I made a mistake?” How many of you would be willing to say that? Guilt: I'm sorry. I made a mistake. Shame: I'm sorry. I am a mistake.

Chimamanda Adichie: The Danger of a Single Story

Our lives, our cultures, are composed of many overlapping stories. Novelist Chimamanda Adichie, author of Americanah, one of The New York Times's ten best books of 2013, tells the story of how she found her authentic cultural voice — and warns that if we hear only a single story about another person or country, we risk a critical misunderstanding.

I was 19. My American roommate was shocked by me. She asked where I had learned to speak English so well, and was confused when I said that Nigeria happened to have English as its official language. She asked if she could listen to what she called my “tribal music,” and was consequently very disappointed when I produced my tape of Mariah Carey. … She assumed that I did not know how to use a stove.

What struck me was this: She had felt sorry for me even before she saw me. Her default position toward me, as an African, was a kind of patronizing, well-meaning, pity. My roommate had a single story of Africa. A single story of catastrophe. In this single story there was no possibility of Africans being similar to her, in any way. No possibility of feelings more complex than pity. No possibility of a connection as human equals.

I must say that before I went to the U.S. I didn't consciously identify as African. But in the U.S. whenever Africa came up people turned to me. Never mind that I knew nothing about places like Namibia. But I did come to embrace this new identity. And in many ways I think of myself now as African. Although I still get quite irritable when Africa is referred to as a country. The most recent example being my otherwise wonderful flight from Lagos two days ago, in which there was an announcement on the Virgin flight about the charity work in “India, Africa and other countries.”

So after I had spent some years in the U.S. as an African, I began to understand my roommate's response to me. If I had not grown up in Nigeria, and if all I knew about Africa were from popular images, I too would think that Africa was a place of beautiful landscapes, beautiful animals, and incomprehensible people, fighting senseless wars, dying of poverty and AIDS, unable to speak for themselves, and waiting to be saved, by a kind, white foreigner. I would see Africans in the same way that I, as a child, had seen Fide's family.

This single story of Africa ultimately comes, I think, from Western literature.

Daniel Kahneman Explains The Machinery of Thought


Israeli-American psychologist and Nobel laureate Daniel Kahneman is one of the founding fathers of modern behavioral economics. His work has influenced how we see thinking, decisions, risk, and even happiness.

In Thinking, Fast and Slow, his “intellectual memoir,” he shows us in his own words some of his enormous body of work.

Part of that body includes a description of the “machinery of … thought,” which divides the mind into two agents, called System 1 and System 2, which “respectively produce fast and slow thinking.” For our purposes these can also be thought of as intuitive and deliberate thought.

The Two Systems

Psychologists have been intensely interested for several decades in the two modes of thinking evoked by the picture of the angry woman and by the multiplication problem, and have offered many labels for them. I adopt terms originally proposed by the psychologists Keith Stanovich and Richard West, and will refer to two systems in the mind, System 1 and System 2.

  • System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
  • System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.

If asked to pick which thinker we are, we pick System 2. However, as Kahneman points out:

The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps. I also describe circumstances in which System 2 takes over, overruling the freewheeling impulses and associations of System 1. You will be invited to think of the two systems as agents with their individual abilities, limitations, and functions.

System One
The abilities of System 1 vary by individual and are often “innate skills that we share with other animals.”

We are born prepared to perceive the world around us, recognize objects, orient attention, avoid losses, and fear spiders. Other mental activities become fast and automatic through prolonged practice. System 1 has learned associations between ideas (the capital of France?); it has also learned skills such as reading and understanding nuances of social situations. Some skills, such as finding strong chess moves, are acquired only by specialized experts. Others are widely shared. Detecting the similarity of a personality sketch to an occupational stereotype requires broad knowledge of the language and the culture, which most of us possess. The knowledge is stored in memory and accessed without intention and without effort.

System Two
System 2 comes into play when we do something that does not come naturally and requires some sort of continuous exertion.

In all these situations you must pay attention, and you will perform less well, or not at all, if you are not ready or if your attention is directed inappropriately.

Paying attention is not really the answer, as attention is mentally expensive and focusing it intensely can make people “effectively blind, even to stimuli that normally attract attention.” This is the point Christopher Chabris and Daniel Simons make in their book The Invisible Gorilla: not only can we be blind to the obvious, we are also blind to our own blindness.

The Division of Labor

Systems 1 and 2 are both active whenever we are awake. System 1 runs automatically and System 2 is normally in a comfortable low-effort mode, in which only a fraction of its capacity is engaged. System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions, and feelings. If endorsed by System 2, impressions and intuitions turn into beliefs, and impulses turn into voluntary actions. When all goes smoothly, which is most of the time, System 2 adopts the suggestions of System 1 with little or no modification. You generally believe your impressions and act on your desires, and that is fine—usually.

When System 1 runs into difficulty, it calls on System 2 to support more detailed and specific processing that may solve the problem of the moment. System 2 is mobilized when a question arises for which System 1 does not offer an answer, as probably happened to you when you encountered the multiplication problem 17 × 24. You can also feel a surge of conscious attention whenever you are surprised. System 2 is activated when an event is detected that violates the model of the world that System 1 maintains. In that world, lamps do not jump, cats do not bark, and gorillas do not cross basketball courts. The gorilla experiment demonstrates that some attention is needed for the surprising stimulus to be detected. Surprise then activates and orients your attention: you will stare, and you will search your memory for a story that makes sense of the surprising event. System 2 is also credited with the continuous monitoring of your own behavior—the control that keeps you polite when you are angry, and alert when you are driving at night. System 2 is mobilized to increased effort when it detects an error about to be made. Remember a time when you almost blurted out an offensive remark and note how hard you worked to restore control. In summary, most of what you (your System 2) think and do originates in your System 1, but System 2 takes over when things get difficult, and it normally has the last word.

The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate. System 1 has biases, however, systematic errors that it is prone to make in specified circumstances. As we shall see, it sometimes answers easier questions than the one it was asked, and it has little understanding of logic and statistics. One further limitation of System 1 is that it cannot be turned off.

[…]

Conflict between an automatic reaction and an intention to control it is common in our lives. We are all familiar with the experience of trying not to stare at the oddly dressed couple at the neighboring table in a restaurant. We also know what it is like to force our attention on a boring book, when we constantly find ourselves returning to the point at which the reading lost its meaning. Where winters are hard, many drivers have memories of their car skidding out of control on the ice and of the struggle to follow well-rehearsed instructions that negate what they would naturally do: “Steer into the skid, and whatever you do, do not touch the brakes!” And every human being has had the experience of not telling someone to go to hell. One of the tasks of System 2 is to overcome the impulses of System 1. In other words, System 2 is in charge of self-control.

[…]

The question that is most often asked about cognitive illusions is whether they can be overcome. The message of these examples is not encouraging. Because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent. Biases cannot always be avoided, because System 2 may have no clue to the error. Even when cues to likely errors are available, errors can be prevented only by the enhanced monitoring and effortful activity of System 2. As a way to live your life, however, continuous vigilance is not necessarily good, and it is certainly impractical. Constantly questioning our own thinking would be impossibly tedious, and System 2 is much too slow and inefficient to serve as a substitute for System 1 in making routine decisions. The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high. The premise of this book is that it is easier to recognize other people’s mistakes than our own.

Still Curious? Thinking, Fast and Slow is a tour de force when it comes to thinking.

