Daniel Kahneman: Thinking, Fast and Slow (2011)

'We have two modes of thinking: System 1 that is quick, automatic and unconscious, and System 2 that is slow, effortful and under conscious control. The operation of System 1 leads to systematic biases that System 2 is commonly too inattentive to correct.' My notes on the book.


In a paragraph 

We have two modes of thinking:  System 1 that is quick, automatic and unconscious, and System 2 that is slow, effortful and under conscious control.  The operation of System 1 leads to systematic biases that System 2 is commonly too inattentive to correct.  Kahneman’s Prospect Theory better describes behaviour than the model of rational economic man.

 

Key points

Introduction and conclusion

  • Worked with Amos Tversky from 1969 to 1996. Amos and I discovered that we enjoyed working together. Amos was always very funny, and in his presence, I became funny as well, so we spent hours of solid work in continuous amusement. We were sufficiently similar to understand each other easily, and sufficiently different to surprise each other. A shared mind that was superior to our individual minds and a relationship that made our work fun as well as productive. 
  • We documented systematic errors in the thinking of normal people, and we traced these errors to the design of the machinery of cognition rather than to the corruption of thought by emotion. 
  • I often cringe when my work with Amos is credited with demonstrating that human choices are irrational, when in fact our research only showed that Humans are not well described by the rational-agent model.
  • Easier to criticise others than self, aiming for better water cooler conversations. A richer language is essential to the skill of constructive criticism. Labels such as “anchoring effects,” “narrow framing,” or “excessive coherence” bring together in memory everything we know about a bias, its causes, its effects, and what can be done about it.

 

Two systems

  • Two modes of thinking. System 1 responds automatically and quickly, without conscious awareness.  System 2 is slower and effortful, and requires conscious control.
  • The attentive System 2 is who we think we are. System 1 is indeed the origin of much that we do wrong, but it is also the origin of most of what we do right — which is most of what we do.
  • System 2 has a natural speed and limited cognitive capacity. Its operation is marked by the brain using energy and pupils dilating.  Self-control is draining, unpleasant and limited – ego depletion.  People vary in the extent to which they engage System 2, as rationality requires. 
  • System 1 works by associating ideas. It can be primed. It takes cognitive ease as a marker of familiarity. It represents categories by stereotypes.
  • System 1 assumes causal patterns, without reasoning about causes. System 2 can employ statistical thinking.
  • System 1 is a machine for jumping to conclusions. It ignores information quality as it creates narratives. System 1 is gullible and biased to believe, System 2 is in charge of doubting and unbelieving.
  • System 1 will answer an easier question. It is rarely stumped.

 

Heuristics and biases

  • The Law of Small Numbers. Extreme outcomes more likely with small samples.
  • Anchoring occurs from System 1 priming and System 2 adjusting.
  • Availability heuristic – judge frequency by ease with which instances come to mind. Availability bias causes partners to believe they are each contributing more than half.
  • Affect heuristic – judge by emotions not analysis. Probability neglect.  Risk policies to combine the experts’ knowledge with the public’s emotions and intuitions.
  • System 1 suggested the incorrect intuition, and System 2 endorsed it and expressed it in a judgment, from either ignorance or laziness. 
  • Bayesian essentials: use a plausible base rate and question the diagnosticity of the evidence. 
  • Judged more likely that Linda is a feminist bank teller than a bank teller. Conjunction fallacy. Plausible story beats probability.
  • Regression to the mean. System 1 tries to give causal explanation for a statistical artifact.

 

Overconfidence

  • The sense-making machinery of System 1 makes us see the world as more tidy, simple, predictable, and coherent than it really is. The illusion that one has understood the past feeds the further illusion that one can predict and control the future. The Illusion of Understanding. Narrative fallacy.  Hindsight bias. 
  • The Illusion of Validity. Errors of prediction are inevitable because the world is unpredictable. High subjective confidence is not to be trusted.
  • In stock picking, the key question is whether information is in the price. Traders cannot answer this, but they appear to be ignorant of their ignorance.
  • Intuitions and Formulas. Apgar scores.  Recruitment by traits.
  • Gary Klein had spent much time with fireground commanders, clinical nurses, and other professionals who have real expertise. I had spent more time thinking about clinicians, stock pickers, and political scientists trying to make unsupportable long-term forecasts. Not surprisingly, his default attitude was trust and respect; mine was scepticism. Pseudo-experts. Intuition cannot be trusted in the absence of stable regularities.
  • The Outside View – Reference Class Forecasting. Planning Fallacy – assume close to best case.
  • Optimistic individuals play a disproportionate role in shaping our lives. Engine of capitalism.

 

Choices

  • Utility a logarithmic function of wealth.  Explanation for risk aversion.
  • Prospect Theory. Three features: evaluation relative to reference point, diminishing sensitivity, loss aversion.
  • Loss aversion causes Endowment Effect, and Reference-dependent Fairness.
  • Overweighting (or outright neglect) of low probabilities – the possibility effect – and underweighting of high probabilities – the certainty effect.
  • Fourfold pattern. Risk seeking for possible gains, risk averse for probable gains (disappointment).  Risk averse for possible losses, risk seeking for probable losses.
  • Think like a trader, you win some you lose some. The outside view and the risk policy are remedies against two distinct biases that affect many decisions: the exaggerated optimism of the planning fallacy and the exaggerated caution induced by loss aversion.
  • An Econ knows how to deal with small print when it matters. In contrast, the recommendations of Nudge require firms to offer contracts that are sufficiently simple to be read and understood by Human customers.
  • The ultimate currency that rewards or punishes is often emotional. Mental accounts as narrow framing for control by compartmentalising.
  • The precautionary principle, which prohibits any action that might cause harm. The dilemma between intensely loss-averse moral attitudes and efficient risk management.
  • We employ a logic for undoing counterfactuals that underlies regret and blame. If only.
  • Framing effects evoke different reactions to the same situation meaning we cannot be fully rational. 90 % survival sounds encouraging whereas 10 % mortality is frightening.

 

Two selves 

  • The experiencing self and the remembering self. Peak-end rule.  Duration neglect.
  • The general conclusion is as clear for well-being as it was for colonoscopies: people’s evaluations of their lives and their actual experience may be related, but they are also different. Life satisfaction is not a flawed measure of their experienced well-being, as I thought some years ago. It is something else entirely.
  • Focusing illusion. Nothing in life is as important as you think it is when you are thinking about it. You get pleasure (or displeasure) from your car when you think about your car, which is probably not very often. Living in California is like having ten toes: nice, but not something one thinks much about.

 

Comments

‘Thinking Fast and Slow’ has been widely influential since it was published in 2011, but re-reading it reminded me just how good it is.  It has much of importance to teach, and it is so well written.

In the book, Kahneman sets out many valuable findings from his long career researching the mind’s heuristics and the resulting systematic biases.  Most of this work was carried out in collaboration with Amos Tversky, who died in 1996, but Kahneman presents the ideas afresh under the unifying theme of dividing the mind between Two Systems: the fast, intuitive and subconscious System 1, and the slow, effortful, consciously controlled System 2. 

Part 1 of the book introduces the two systems, highlighting the general effectiveness but systematic errors of System 1 as a machine for jumping to conclusions, and the limited energies of System 2 in its role as reviewer.  Part 2 considers a series of cognitive biases, and Part 3 discusses the overconfidence that System 1 produces. Part 4 explains Prospect Theory, the behavioural economics model for which Kahneman won the Nobel Prize in Economics. Finally, Part 5 discusses how the moment-by-moment happiness of the Experiencing Self differs from the recollections of the Remembering Self.

The glaring weakness of the book from today’s perspective is its enthusiastic reporting of psychological experiments that have since been shown not to replicate.  It now appears that researchers such as those who thought they had shown that words associated with old age primed students to walk more slowly were deceiving themselves.  The best way to read the book is to assume that any cited psychology experiment that sounds too good to be true probably is.  But this should not discourage reading the book – none of Kahneman’s important lessons have been invalidated by the replication crisis.  The valuable perspective that he sets out remains.

I unhesitatingly recommend this book for everyone interested in ideas.  It should be required reading for psychologists, philosophers, economists and anyone who needs to understand human nature.  It is a model of non-fiction writing: clear, well-structured and compelling, made particularly engaging by the author’s warmth and gentle humour.  Don’t let the replication crisis put you off: this is a book you need to read and will enjoy reading.

 

Links 

Thinking Fast and Slow on Amazon UK.  I recommend the audiobook.

Daniel Kahneman Princeton website.

Daniel Kahneman on Jolly Swagman Podcast.

The Undoing Project: A Friendship That Changed Our Minds by Michael Lewis (2016) on Amazon UK.  The book that tells the story of the partnership between Tversky and Kahneman.

 

EXTRACTS

Introduction

So, this is my aim for watercooler conversations: improve the ability to identify and understand errors of judgment and choice, in others and eventually in ourselves, by providing a richer and more precise language to discuss them. In at least some cases, an accurate diagnosis may suggest an intervention to limit the damage that bad judgments and choices often cause.

I trace the central ideas to the lucky day in 1969 when I asked a colleague to speak as a guest to a seminar I was teaching in the Department of Psychology at the Hebrew University of Jerusalem. Amos Tversky.

Intuitive statistics. Even statisticians were not good intuitive statisticians.

While writing the article that reported these findings, Amos and I discovered that we enjoyed working together. Amos was always very funny, and in his presence, I became funny as well, so we spent hours of solid work in continuous amusement. We were sufficiently similar to understand each other easily, and sufficiently different to surprise each other. “Judgment Under Uncertainty: Heuristics and Biases.”

Social scientists in the 1970s broadly accepted two ideas about human nature. First, people are generally rational, and their thinking is normally sound. Second, emotions such as fear, affection, and hatred explain most of the occasions on which people depart from rationality. Our article challenged both assumptions.

We documented systematic errors in the thinking of normal people, and we traced these errors to the design of the machinery of cognition rather than to the corruption of thought by emotion.

We almost always included in our articles the full text of the questions we had asked ourselves and our respondents. These questions served as demonstrations for the reader, allowing him to recognize how his own thinking was tripped up by cognitive biases.

“Prospect Theory: An Analysis of Decision Under Risk,” 

A shared mind that was superior to our individual minds and a relationship that made our work fun as well as productive.

Our collaboration on judgment and decision making was the reason for the Nobel Prize that I received in 2002, which Amos would have shared had he not died, aged fifty-nine, in 1996.

I present a view of how the mind works that draws on recent developments in cognitive and social psychology. One of the more important developments is that we now understand the marvels as well as the flaws of intuitive thought. We can now draw a richer and more balanced picture, in which skill and heuristics are alternative sources of intuitive judgments and choices.

The fund manager had apparently not considered the one question that an economist would call relevant: Is Ford stock currently under-priced? Instead, he had listened to his intuition; he liked the cars, he liked the company, and he liked the idea of owning its stock. From what we know about the accuracy of stock picking, it is reasonable to believe that he did not know what he was doing.

An important advance is that emotion now looms much larger in our understanding of intuitive judgments and choices than it did in the past. The executive’s decision would today be described as an example of the affect heuristic, where judgments and decisions are guided directly by feelings of liking and disliking, with little deliberation or reasoning.

This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.

The distinction between fast and slow thinking has been explored by many psychologists over the last twenty-five years. Most of this book is about the workings of System 1 and the mutual influences between it and System 2.

The book is divided into five parts. Part 1 presents the basic elements of a two-systems approach. Part 2 updates the study of judgment heuristics and explores a major puzzle: Why is it so difficult for us to think statistically? Part 3 describes a puzzling limitation of our mind: our excessive confidence in what we believe we know. Part 4 is a conversation with the discipline of economics on the nature of decision making. This section of the book provides a current view, informed by the two-system model, of the key concepts of prospect theory, the model of choice that Amos and I published in 1979. Part 5 describes recent research that has introduced a distinction between two selves, the experiencing self and the remembering self, 

Two articles I wrote with Amos are reproduced as appendixes to the book. The first is the review of judgment under uncertainty that I described earlier. The second, published in 1984, summarizes prospect theory as well as our studies of framing effects. The articles present the contributions that were cited by the Nobel committee — and you may be surprised by how simple they are. Reading them will give you a sense of how much we knew a long time ago, and also of how much we have learned in recent decades.

 

Part 1: TWO SYSTEMS

1: The Characters of the Story

Psychologists have been intensely interested for several decades in the two modes of thinking evoked by the picture of the angry woman and by the multiplication problem, and have offered many labels for them. I adopt terms originally proposed by the psychologists Keith Stanovich and Richard West, and will refer to two systems in the mind, System 1 and System 2. The labels of System 1 and System 2 are widely used in psychology, but I go further than most in this book, which you can read as a psychodrama with two characters. “Automatic system” and “effortful system”

Everyone has some awareness of the limited capacity of attention, and our social behaviour makes allowances for these limitations. The gorilla study illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness.

The division of labour between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it does. 

System 2 is much too slow and inefficient to serve as a substitute for System 1 in making routine decisions. The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high. The premise of this book is that it is easier to recognize other people’s mistakes than our own.

 

2: Attention and Effort

During a mental multiplication, the pupil normally dilated to a large size within a few seconds and stayed large as long as the individual kept working on the problem; it contracted immediately when she found a solution or gave up. 

“How did you know?” to which we would reply, “We have a window to your soul.”

The life of System 2 is normally conducted at the pace of a comfortable walk, sometimes interrupted by episodes of jogging and on rare occasions by a frantic sprint.

 

3: The Lazy Controller

System 2 also has a natural speed. You expend some mental energy in random thoughts and in monitoring what goes on around you even when your mind does nothing in particular, but there is little strain.  The law of least effort.

In a state of flow, maintaining focused attention on these absorbing activities requires no exertion of self-control, thereby freeing resources to be directed to the task at hand.

People who are simultaneously challenged by a demanding cognitive task and by a temptation are more likely to yield to the temptation. Ego depletion. The exertion of self-control is depleting and unpleasant. The idea of mental energy is more than a mere metaphor.

Walter Mischel and his students exposed four-year-old children to a cruel dilemma.

Keith Stanovich and his longtime collaborator Richard West originally introduced the terms System 1 and System 2. Stanovich published his conclusions in a book titled Rationality and the Reflective Mind. Stanovich’s concept of a rational person is similar to what I earlier labelled “engaged.” The core of his argument is that rationality should be distinguished from intelligence. In his view, superficial or “lazy” thinking is a flaw in the reflective mind, a failure of rationality.

 

4: The Associative Machine

Associative activation: ideas that have been evoked trigger many other ideas, in a spreading cascade of activity in your brain.  All this happens quickly and all at once, yielding a self-reinforcing pattern of cognitive, emotional, and physical responses that is both diverse and integrated — it has been called associatively coherent.

David Hume reduced the principles of association to three: resemblance, contiguity in time and place, and causality.  Psychologists think of ideas as nodes in a vast network, called associative memory.

The notion that we have limited access to the workings of our minds is difficult to accept because, naturally, it is alien to our experience, but it is true: you know far less about yourself than you feel you do.

Exposure to a word causes immediate and measurable changes in the ease with which many related words can be evoked. Priming effect.  EAT primes the idea of SOUP.

The idea of old age had not come to their conscious awareness, but their actions had changed, nevertheless. This remarkable priming phenomenon — the influencing of an action by the idea — is known as the ideomotor effect. The idea of money primes individualism.  The users of the kitchen contributed almost three times as much in “eye weeks” as they did in “flower weeks.”

 

5: Cognitive Ease

You experience greater cognitive ease in perceiving a word you have seen earlier, and it is this sense of ease that gives you the impression of familiarity. A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Creativity is associative memory that works exceptionally well.  Mood evidently affects the operation of System 1: when we are uncomfortable and unhappy, we lose touch with our intuition.

 

6: Norms, Surprises, and Causes

We are evidently ready from birth to have impressions of causality, which do not depend on reasoning about patterns of causation. Immaterial divinity is the ultimate cause of the physical world, and immortal souls temporarily control our bodies while we live and leave them behind as we die.

 Apply causal thinking inappropriately, to situations that require statistical reasoning.  Statistical thinking derives conclusions about individual cases from properties of categories and ensembles.

 

7: A Machine for Jumping to Conclusions

Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake acceptable, and if the jump saves much time and effort. A definite choice was made, but you did not know it. System 1 is gullible and biased to believe, System 2 is in charge of doubting and unbelieving.  Halo effect.

The representation of the world that System 1 generates is simpler and more coherent than the real thing. System 1 is radically insensitive to both the quality and the quantity of the information that gives rise to impressions and intuitions.  It is the consistency of the information that matters for a good story, not its completeness. Indeed, you will often find that knowing little makes it easier to fit everything you know into a coherent pattern.

 

8: How Judgments Happen

System 1 has been shaped by evolution to provide a continuous assessment of the main problems that an organism must solve to survive: How are things going? Is there a threat or a major opportunity? Is everything normal? Should I approach or avoid?  Good mood and cognitive ease are the human equivalents of assessments of safety and familiarity.

Basic assessment. Evaluate, in a single glance at a stranger’s face, two potentially crucial facts about that person: how dominant (and therefore potentially threatening) he is, and how trustworthy.

Because System 1 represents categories by a prototype or a set of typical exemplars, it deals well with averages but poorly with sums.  Almost complete neglect of quantity in such emotional contexts has been confirmed many times.  An underlying scale of intensity allows matching across diverse dimensions.  Prediction by matching is statistically wrong — although it is perfectly natural to System 1, and for most people except statisticians it is also acceptable to System 2.

 

9: Answering an Easier Question

A remarkable aspect of your mental life is that you are rarely stumped.  I propose a simple account of how we generate intuitive opinions on complex matters. If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it.  My feelings about dying dolphins must be expressed in dollars. Another capability of System 1, intensity matching, is available to solve that problem.

Only visual artists and experienced photographers have developed the skill of seeing the drawing as an object on the page. For the rest of us, substitution occurs: the dominant impression of 3-D size dictates the judgment of 2-D size. The illusion is due to a 3-D heuristic.  It happens so deep in the perceptual system that you simply cannot help it.

The psychologist Paul Slovic has proposed an affect heuristic in which people let their likes and dislikes determine their beliefs about the world.

 

Part 2: HEURISTICS AND BIASES

10: The Law of Small Numbers

Extreme outcomes (both high and low) are more likely to be found in small than in large samples. What scientists call artifacts, observations that are produced entirely by some aspect of the method of research — in this case, by differences in sample size.
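
A minimal simulation makes the artifact concrete (the fair-coin model and the 30–70 % band are my own illustrative choices, not from the book):

    import random

    # A "county rate" is just the proportion of heads in n fair coin flips;
    # only the sample size differs between counties.
    def extreme_share(n, trials=100_000, lo=0.3, hi=0.7):
        """Fraction of samples whose observed rate falls outside [lo, hi]."""
        extreme = 0
        for _ in range(trials):
            rate = sum(random.random() < 0.5 for _ in range(n)) / n
            if rate < lo or rate > hi:
                extreme += 1
        return extreme / trials

    for n in (10, 100, 1000):
        print(f"n={n:4d}: extreme in {extreme_share(n):.3%} of samples")
    # n=10: ~11% of samples look "extreme"; n=100: ~0.004%; n=1000: effectively never.

Nothing about the small samples differs except their size; the extreme outcomes are pure noise.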

Amos and I called our first joint article “Belief in the Law of Small Numbers.”

System 1 is not prone to doubt. It suppresses ambiguity and spontaneously constructs stories that are as coherent as possible. Unless the message is immediately negated, the associations that it evokes will spread as if the message were true. System 2 is capable of doubt, because it can maintain incompatible possibilities at the same time. However, sustaining doubt is harder work than sliding into certainty. The law of small numbers is a manifestation of a general bias that favours certainty over doubt.

The hot hand is entirely in the eye of the beholders, who are consistently too quick to perceive order and causality in randomness.  The truth is that small schools are not better on average; they are simply more variable.

 

11: Anchors

It is now clear that Amos and I were both right. Two different mechanisms produce anchoring effects — one for each system. There is a form of anchoring that occurs in a deliberate process of adjustment, an operation of System 2. And there is anchoring that occurs by a priming effect, an automatic manifestation of System 1.

 

12: The Science of Availability

We defined the availability heuristic as the process of judging frequency by “the ease with which instances come to mind.”

 Substitution of questions inevitably produces systematic errors.  Maintaining one’s vigilance against biases is a chore — but the chance to avoid a costly mistake is sometimes worth the effort.

As expected, the self-assessed contributions added up to more than 100 %. The explanation is a simple availability bias: both spouses remember their own individual efforts and contributions much more clearly.

 The ease with which instances come to mind is a System 1 heuristic, which is replaced by a focus on content when System 2 is more engaged. Multiple lines of evidence converge on the conclusion that people who let themselves be guided by System 1 are more strongly susceptible to availability biases than others who are in a state of higher vigilance.

 

13: Availability, Emotion, and Risk

An affect heuristic, in which people make judgments and decisions by consulting their emotions. The affect heuristic is an instance of substitution, in which the answer to an easy question (How do I feel about it?) serves as an answer to a much harder question (What do I think about it?).

Jonathan Haidt said in another context, “The emotional tail wags the rational dog.” The affect heuristic simplifies our lives by creating a world that is much tidier than reality.

Differences between experts and the public are explained in part by biases in lay judgments, but Slovic draws attention to situations in which the differences reflect a genuine conflict of values. He points out that experts often measure risks by the number of lives (or life-years) lost, while the public draws finer distinctions, for example between “good deaths” and “bad deaths,” or between random accidental fatalities and deaths that occur in the course of voluntary activities.

Cass Sunstein disagrees sharply with Slovic’s stance on the different views of experts and citizens and defends the role of experts as a bulwark against “populist” excesses. Sunstein has not been persuaded by Slovic’s argument that risk and its measurement is subjective. Many aspects of risk assessment are debatable, but he has faith in the objectivity that may be achieved by science, expertise, and careful deliberation. Sunstein came to believe that biased reactions to risks are an important source of erratic and misplaced priorities in public policy.

An availability cascade is a self-sustaining chain of events, which may start from media reports of a relatively minor event and lead up to public panic and large-scale government action. In Love Canal, buried toxic waste was exposed during a rainy season in 1979.  The “Alar scare” of 1989. Alar is a chemical that was sprayed on apples to regulate their growth and improve their appearance. The scare began with press stories that the chemical, when consumed in gigantic doses, caused cancerous tumours in rats and mice. The Alar tale illustrates a basic limitation in the ability of our mind to deal with small risks: we either ignore them altogether or give them far too much weight — nothing in between.

 “Probability neglect.”  Terrorism speaks directly to System 1.  Psychology should inform the design of risk policies that combine the experts’ knowledge with the public’s emotions and intuitions.

 

14: Tom W’s Specialty      

The question about probability (likelihood) was difficult, but the question about similarity was easier, and it was answered instead.

Michael Lewis’s bestselling Moneyball is a story about the inefficiency of this mode of prediction.

When an incorrect intuitive judgment is made, System 1 and System 2 should both be indicted. System 1 suggested the incorrect intuition, and System 2 endorsed it and expressed it in a judgment. However, there are two possible reasons for the failure of System 2 — ignorance or laziness.

 The second sin of representativeness is insensitivity to the quality of evidence.

The essential keys to disciplined Bayesian reasoning can be simply summarized: Anchor your judgment of the probability of an outcome on a plausible base rate. Question the diagnosticity of your evidence.
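
A back-of-envelope Bayes update shows both keys at work (the 3 % base rate and the 4:1 likelihood ratio are invented for illustration):

    def posterior(base_rate, likelihood_ratio):
        """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
        prior_odds = base_rate / (1 - base_rate)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Anchor on a plausible base rate (say 3% of students are in the field),
    # then ask how diagnostic the evidence really is (say the personality
    # sketch is 4 times as likely for such a student as for anyone else).
    print(f"{posterior(0.03, 4):.0%}")  # ~11%: still unlikely, despite the vivid sketch

Even strongly diagnostic evidence cannot overcome a low base rate on its own.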

 

15: Linda: Less is More

The best-known and most controversial of our experiments involved a fictitious lady called Linda. Amos and I made up the Linda problem to provide conclusive evidence of the role of heuristics in judgment and of their incompatibility with logic.

Which alternative is more probable? Linda is a bank teller. Linda is a bank teller and is active in the feminist movement. This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85 % to 90 % of undergraduates at several major universities chose the second option, contrary to logic. Conjunction fallacy. The most coherent stories are not necessarily the most probable, but they are plausible.
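
The logic being violated is just the conjunction rule: adding a condition can only shrink the set of cases, never enlarge it.

    P(bank teller AND feminist) ≤ P(bank teller)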

 

16: Causes Trump Statistics

Stereotypes, both correct and false, are how we think of categories. Resistance to stereotyping is a laudable moral position, but the simplistic idea that the resistance is costless is wrong.  Causal base rates.

“Helping experiment”  Even normal, decent people do not rush to help when they expect others to take on the unpleasantness of dealing with a seizure. And that means you, too.

 

17: Regression to the Mean

Performances he praised were likely to be followed by a disappointing performance, and punishments were typically followed by an improvement. But the inference he had drawn about the efficacy of reward and punishment was completely off the mark. What he had observed is known as regression to the mean.

    success = talent + luck
    great success = a little more talent + a lot of luck

Regression effects are ubiquitous, and so are misguided causal stories to explain them. Regression to the mean was discovered and named late in the nineteenth century by Sir Francis Galton.  Whenever the correlation between two scores is imperfect, there will be regression to the mean. Highly intelligent women tend to marry men who are less intelligent than they are. Our mind is strongly biased toward causal explanations and does not deal well with “mere statistics.”
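
A toy simulation of the success = talent + luck model (the normal distributions and equal weights are my assumptions, not the book’s) reproduces the effect with no causal story at all:

    import random

    random.seed(1)
    talent = [random.gauss(0, 1) for _ in range(100_000)]
    day1 = [t + random.gauss(0, 1) for t in talent]   # talent + luck
    day2 = [t + random.gauss(0, 1) for t in talent]   # same talent, fresh luck

    # Take the top 1% of performers on day 1 and re-measure them on day 2.
    cutoff = sorted(day1)[-1000]
    stars = [(s1, s2) for s1, s2 in zip(day1, day2) if s1 >= cutoff]
    print(sum(s1 for s1, _ in stars) / len(stars))  # ~3.8 on day 1
    print(sum(s2 for _, s2 in stars) / len(stars))  # ~1.9 on day 2: regression

The stars’ scores drop by about half because, here, talent and luck contribute equally. Nobody choked or got complacent; the luck was simply redrawn.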

 

Part 3: OVERCONFIDENCE

19: The Illusion of Understanding

Taleb introduced the notion of a narrative fallacy to describe how flawed stories of the past shape our views of the world. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance. Some people thought well in advance that there would be a crisis, but they did not know it.

The core of the illusion is that we believe we understand the past, which implies that the future also should be knowable, but in fact we understand the past less than we believe we do.  The mind that makes up narratives about the past is a sense-making organ. “I-knew-it-all-along” effect, or hindsight bias, outcome bias. 

Decision makers who expect to have their decisions scrutinized with hindsight are driven to bureaucratic solutions — and to an extreme reluctance to take risks.

The sense-making machinery of System 1 makes us see the world as more tidy, simple, predictable, and coherent than it really is. The illusion that one has understood the past feeds the further illusion that one can predict and control the future.  Stories of success and failure consistently exaggerate the impact of leadership style and management practices on firm outcomes, and thus their message is rarely useful.

 

20: The Illusion of Validity

Our impression of each candidate’s character was as direct and compelling as the colour of the sky. Despite our definite impressions about individual candidates, we knew with certainty that our forecasts were largely useless. The illusion of validity.

Subjective confidence in a judgment is not a reasoned evaluation of the probability that this judgment is correct. Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it. Illusion of skill.

Many individual investors lose consistently by trading, an achievement that a dart-throwing chimp could not match. Although professionals are able to extract a considerable amount of wealth from amateurs, few stock pickers, if any, have the skill needed to beat the market consistently. In highly efficient markets, educated guesses are no more accurate than blind guesses. The consistent correlations that would indicate differences in skill were not to be found. Their own experience of exercising careful judgment on complex problems was far more compelling to them than an obscure statistical fact. The key question is whether the information about the firm is already incorporated in the price of its stock. Traders apparently lack the skill to answer this crucial question, but they appear to be ignorant of their ignorance.

People can maintain an unshakable faith in any proposition, however absurd, when they are sustained by a community of like-minded believers.

The illusion that we understand the past fosters overconfidence in our ability to predict the future. We think that we should be able to explain the past by focusing on either large social movements and cultural and technological developments or the intentions and abilities of a few great men. The idea that large historical events are determined by luck is profoundly shocking, although it is demonstrably true. There was a probability of one-eighth of a twentieth century without any of the three great villains. The illusion of valid prediction.

Isaiah Berlin’s essay on Tolstoy, “The Hedgehog and the Fox.” The foxes recognize that reality emerges from the interactions of many different agents and forces, including blind luck, often producing large and unpredictable outcomes. It was the foxes who scored best in Philip Tetlock’s study, although their performance was still very poor. But they are less likely than hedgehogs to be invited to participate in television debates.

The first lesson is that errors of prediction are inevitable because the world is unpredictable. The second is that high subjective confidence is not to be trusted as an indicator of accuracy (low confidence could be more informative).

 

21: Intuitions vs. Formulas

Dawes showed that marital stability is well predicted by a formula: frequency of lovemaking minus frequency of quarrels.

Apgar jotted down five variables (heart rate, respiration, reflex, muscle tone, and colour) and three scores (0, 1, or 2, depending on the robustness of each sign).  The virtues of checklists and simple rules.
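
As code, the Apgar rule is almost nothing, which is the point (the function and the thresholds in the comment are my paraphrase of standard clinical practice, not the book’s text):

    def apgar_score(heart_rate, respiration, reflex, muscle_tone, colour):
        """Virginia Apgar's checklist: five signs, each scored 0, 1 or 2."""
        signs = (heart_rate, respiration, reflex, muscle_tone, colour)
        assert all(s in (0, 1, 2) for s in signs)
        return sum(signs)  # roughly: 7+ reassuring, 3 or below signals distress

    print(apgar_score(2, 2, 1, 2, 1))  # 8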

For recruitment, start by selecting a few traits that are prerequisites for success in the position.

 

22: Expert Intuition: When Can We Trust it?

Gary Klein.  Naturalistic Decision Making, or NDM. The recognition-primed decision (RPD) model, which applies to firefighters but also describes expertise in other domains, including chess.

Klein had spent much time with fireground commanders, clinical nurses, and other professionals who have real expertise. I had spent more time thinking about clinicians, stock pickers, and political scientists trying to make unsupportable long-term forecasts. Not surprisingly, his default attitude was trust and respect; mine was scepticism.

There are many pseudo-experts who have no idea that they do not know what they are doing. Remember this rule: intuition cannot be trusted in the absence of stable regularities in the environment.

 

23: The Outside View

Surely all of us “knew” that a minimum of seven years and a 40 % chance of failure was a more plausible forecast of the fate of our project than the numbers we had written on our slips of paper a few minutes earlier. But we did not acknowledge what we knew. Irrational perseverance.

The inside view and the outside view.  Planning fallacy.  Plans and forecasts that are unrealistically close to best-case scenarios could be improved by consulting the statistics of similar cases. Reference class forecasting.

 

24: The Engine of Capitalism

Optimistic individuals play a disproportionate role in shaping our lives. Their decisions make a difference; they are the inventors, the entrepreneurs, the political and military leaders.  60 % of new restaurants are out of business after three years.  They told us without irony or self-consciousness that they had been able to buy it cheap, “because six or seven previous owners had failed to make a go of it.” A common thread of boldness and optimism links businesspeople, from motel owners to superstar CEOs.

 “90 % of drivers believe they are better than average”. Financial officers of large corporations had no clue about the short-term future of the stock market. The main benefit of optimism is resilience in the face of setbacks. Gary Klein’s idea of the premortem

 

Part 4: CHOICES

25: Bernoulli’s Errors

To a psychologist, it is self-evident that people are neither fully rational nor completely selfish, and that their tastes are anything but stable. Our two disciplines seemed to be studying different species, which the behavioural economist Richard Thaler later dubbed Econs and Humans.

 Economists adopted expected utility theory in a dual role: as a logic that prescribes how decisions should be made, and as a description of how Econs make choices.

 Five years after we began our study of gambles, we finally completed an essay that we titled “Prospect Theory: An Analysis of Decision under Risk.”

 Fechner’s project was to find the psychophysical laws that relate the subjective quantity in the observer’s mind to the objective quantity in the material world. He proposed that for many dimensions, the function is logarithmic.

 In 1738, the Swiss scientist Daniel Bernoulli anticipated Fechner’s reasoning and applied it to the relationship between the psychological value or desirability of money (now called utility) and the actual amount of money. He argued that a gift of 10 ducats has the same utility to someone who already has 100 ducats as a gift of 20 ducats to someone whose current wealth is 200 ducats. Utility is a logarithmic function of wealth. Bernoulli proposed that the diminishing marginal value of wealth (in the modern jargon) is what explains risk aversion.
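
The ducats example works because equal ratios of wealth give equal differences in logarithmic utility:

    log(110) − log(100) = log(110/100) = log(1.1)
    log(220) − log(200) = log(220/200) = log(1.1)

Both gifts raise wealth by 10 %, so on Bernoulli’s account both add the same utility.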

 

26: Prospect Theory

Three cognitive features at the heart of prospect theory. Evaluation is relative to a neutral reference point, which is sometimes referred to as an “adaptation level.” Diminishing sensitivity. Loss aversion.

The “loss aversion ratio” has been estimated in several experiments and is usually in the range of 1.5 to 2.5.   In the bad case, the bending of the value curve (diminishing sensitivity) causes risk seeking. The pain of losing $900 is more than 90 % of the pain of losing $1,000.
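
All three features show up in a standard parameterization of the value function (the α ≈ 0.88 and λ ≈ 2.25 estimates come from Tversky and Kahneman’s later cumulative prospect theory work, not from this book):

    def value(x, alpha=0.88, lam=2.25):
        """Prospect-theory value of a gain or loss x, relative to the reference point."""
        if x >= 0:
            return x ** alpha              # diminishing sensitivity to gains
        return -lam * (-x) ** alpha        # losses loom larger: loss aversion

    print(value(-900) / value(-1000))      # ~0.91: losing $900 hurts >90% as much as $1,000
    print(-value(-100) / value(100))       # ~2.25: the loss aversion ratio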

Prospect theory cannot deal with disappointment and regret.

 

27: The Endowment Effect

Two aspects of choice that the standard model of indifference curves does not predict. First, tastes are not fixed; they vary with the reference point. Second, the disadvantages of a change loom larger than its advantages, inducing a bias that favours the status quo.

The values were unequal because of loss aversion: giving up a bottle of nice wine is more painful than getting an equally good bottle is pleasurable. The mugs experiment.

 

28: Bad Events

We and our animal cousins are quickly alerted to signs of opportunities to mate or to feed, and advertisers design billboards accordingly. Still, threats are privileged above opportunities, as they should be.  Pleasure indicates the direction of a biologically significant improvement.

Players were more successful when putting for par than for a birdie. People tend to be much more easy-going when they bargain over an expanding pie. Potential losers will be more active and determined than potential winners.

The existing wage, price, or rent sets a reference point, which has the nature of an entitlement that must not be infringed. It is considered unfair for the firm to impose losses on its customers or workers relative to the reference transaction.  Reference-dependent fairness.

 

29: The Fourfold Pattern

The large impact of 0 ➞ 5 % illustrates the possibility effect. The improvement from 95 % to 100 % is another qualitative change that has a large impact, the certainty effect. Overweighting of small probabilities increases the attractiveness of both gambles and insurance policies. 
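
A probability weighting function of the kind Tversky and Kahneman later estimated (the functional form and γ ≈ 0.61 are imported from their 1992 paper, an assumption beyond this book’s text) reproduces both effects:

    def weight(p, gamma=0.61):
        """Decision weight for probability p: small p overweighted, large p underweighted."""
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    for p in (0.01, 0.05, 0.95, 0.99):
        print(f"p={p:.2f} -> w={weight(p):.3f}")
    # p=0.01 -> ~0.055 (possibility effect: 1% feels like 5.5%)
    # p=0.99 -> ~0.912 (certainty effect: 99% feels like 91%)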

 The expectation principle, by which values are weighted by their probability, is poor psychology. Von Neumann and Morgenstern proved that any weighting of uncertain outcomes that is not strictly proportional to probability leads to inconsistencies. Systematic deviations from expected value are costly in the long run.

Distinctive pattern of preferences that we called the fourfold pattern. Risk taking of this kind often turns manageable failures into disasters. The thought of accepting the large sure loss is too painful, and the hope of complete relief too enticing, to make the sensible decision that it is time to cut one’s losses. 

 

30: Rare Events

“Highly unlikely events are either ignored or overweighted.”

The Princeton team argued that the low sensitivity to probability that had been observed for emotional outcomes is normal. Gambles on money are the exception. The sensitivity to probability is relatively high for these gambles, because they have a definite expected value.

If your attention is drawn to the winning marbles, you do not assess the number of nonwinning marbles with the same care. Vivid imagery contributes to denominator neglect.

The prosecutor, of course, will favour the more abstract frame — hoping to fill the jurors’ minds with decimal points.

Choice from description yields a possibility effect — rare outcomes are overweighted relative to their probability. In sharp contrast, overweighting is never observed in choice from experience, and underweighting is common.

 System 1 generates global representations of Adele and Brian, which include an emotional attitude and a tendency to approach or avoid.

Obsessive concerns (the bus in Jerusalem), vivid images (the roses), concrete representations (1 of 1,000), and explicit reminders (as in choice from description) all contribute to overweighting. And when there is no overweighting, there will be neglect. When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news.

 

31: Risk Policies

Because we are susceptible to WYSIATI (what you see is all there is) and averse to mental effort, we tend to make decisions as problems arise.

The aggregation of favourable gambles rapidly reduces the probability of losing, and the impact of loss aversion on his preferences diminishes accordingly. You win a few, you lose a few. The main purpose of the mantra is to control your emotional response when you do lose.  Experienced traders in financial markets live by it every day, shielding themselves from the pain of losses by broad framing. “Think like a trader.”   Closely following daily fluctuations is a losing proposition, because the pain of the frequent small losses exceeds the pleasure of the equally frequent small gains.
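
A quick simulation shows what broad framing buys; the +$200/−$100 coin flip below is my illustrative stand-in for the book’s favourable gambles:

    import random

    def prob_overall_loss(n_gambles, trials=100_000):
        """Chance of ending below zero after aggregating n favourable 50/50 gambles."""
        losses = 0
        for _ in range(trials):
            total = sum(200 if random.random() < 0.5 else -100
                        for _ in range(n_gambles))
            if total < 0:
                losses += 1
        return losses / trials

    for n in (1, 10, 100):
        print(f"{n:3d} gambles: P(overall loss) ~ {prob_overall_loss(n):.2%}")
    # 1 gamble: 50%; 10 gambles: ~17%; 100 gambles: ~0.05%

Evaluated one at a time, each flip risks a painful loss; aggregated, the chance of losing overall almost vanishes.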

A risk policy is a broad frame that embeds a particular risky choice in a set of similar choices.  The outside view and the risk policy are remedies against two distinct biases that affect many decisions: the exaggerated optimism of the planning fallacy and the exaggerated caution induced by loss aversion.

 

32: Keeping Score

The ultimate currency that rewards or punishes is often emotional.

The mental accounts that we use to organize and run our lives.  Mental accounts are a form of narrow framing; they keep things under control and manageable.

The disposition effect is an instance of narrow framing. The investor has set up an account for each share that she bought, and she wants to close every account as a gain. Investors sell more losers in December, when taxes are on their mind. The tax advantage is available all year, of course, but for 11 months of the year mental accounting prevails over financial common sense.

The asymmetry in the risk of regret favours conventional and risk-averse choices. The precautionary principle, which prohibits any action that might cause harm. The dilemma between intensely loss-averse moral attitudes and efficient risk management.

 

33: Reversals

Poignancy (a close cousin of regret) is a counterfactual feeling, which is evoked because the thought “if only he had shopped at his regular store …” comes readily to mind.

The legal system, contrary to psychological common sense, favours single evaluation.  The system of administrative penalties is coherent within agencies but incoherent globally.

 

34: Frames and Reality

The fact that logically equivalent statements evoke different reactions makes it impossible for Humans to be as reliably rational as Econs. Losses evoke stronger negative feelings than costs. Choices are not reality-bound because System 1 is not reality-bound. The rationality indexes.  Neuroeconomics — the study of what a person’s brain does while he makes decisions.

The most “rational” subjects — those who were the least susceptible to framing effects — showed enhanced activity in a frontal area.

90 % survival sounds encouraging whereas 10 % mortality is frightening. The “Asian disease problem” 

Sceptics about rationality are not surprised. They are trained to be sensitive to the power of inconsequential factors as determinants of preference — my hope is that readers of this book have acquired this sensitivity.

 

Part 5: TWO SELVES

35: Two Selves

Experienced utility and decision utility.  Francis Edgeworth speculated about this topic in the nineteenth century and proposed the idea of a “hedonimeter”; total experienced utility would be the “area under the curve.”

“The total amount of pain” Peak-end rule. Duration neglect.

The hedonimeter total and the retrospective assessment are systematically different. The hedonimeter totals are computed by an observer from an individual’s report of the experience of moments. We call these judgments duration-weighted, because the computation of the “area under the curve” assigns equal weights to all moments.
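
The two evaluations are easy to contrast in code; the minute-by-minute pain scores below are invented to mirror the structure of the colonoscopy study:

    # Pain reported each minute on a 0-10 scale (illustrative numbers).
    patient_a = [4, 6, 8]                    # short procedure, ends at its peak
    patient_b = [4, 6, 8, 5, 3, 2, 1, 1]     # longer, but tapers off gently

    def hedonimeter_total(moments):
        """Duration-weighted 'area under the curve': every moment counts equally."""
        return sum(moments)

    def remembered(moments):
        """Peak-end rule: memory averages the worst moment and the final one."""
        return (max(moments) + moments[-1]) / 2

    for name, m in (("A", patient_a), ("B", patient_b)):
        print(name, hedonimeter_total(m), remembered(m))
    # A: total 18, remembered 8.0;  B: total 30, remembered 4.5

Patient B suffers more in total yet remembers the procedure as far less bad: duration neglect in action.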

The experiencing self is the one that answers the question: “Does it hurt now?” The remembering self is the one that answers the question: “How was it, on the whole?” Memories are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self.

Confusing experience with the memory of it is a compelling cognitive illusion — and it is the substitution that makes us believe a past experience can be ruined. The cold-hand situation.  The rules that govern the remembering self of humans have a long evolutionary history.

 

36: Life as a Story

Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.

 

37: Experienced Well-Being

Experience sampling. Day Reconstruction Method. The percentage of time that an individual spends in an unpleasant state is the U-index. Inequality in the distribution of emotional pain.

Our emotional state is largely determined by what we attend to, and we are normally focused on our current activity and immediate environment.

The general conclusion is as clear for well-being as it was for colonoscopies: people’s evaluations of their lives and their actual experience may be related, but they are also different. Life satisfaction is not a flawed measure of their experienced well-being, as I thought some years ago. It is something else entirely.

 

38: Thinking About Life

The score that you quickly assign to your life is determined by a small sample of highly available ideas, not by a careful weighting of the domains of your life.  One recipe for a dissatisfied adulthood is setting goals that are especially difficult to attain. We must accept the complexities of a hybrid view, in which the well-being of both selves is considered.

Focusing illusion.  Nothing in life is as important as you think it is when you are thinking about it. You get pleasure (or displeasure) from your car when you think about your car, which is probably not very often. Living in California is like having ten toes: nice, but not something one thinks much about.

Daniel Gilbert and Timothy Wilson introduced the word miswanting to describe bad choices that arise from errors of affective forecasting.

The value of an episode — I have called it a hedonimeter total — is simply the sum of the values of its moments. But this is not how the mind represents episodes.

 

Conclusions

I began this book by introducing two fictitious characters, spent some time discussing two species, and ended with two selves.

The logic of duration weighting is compelling, but it cannot be considered a complete theory of well-being because individuals identify with their remembering self and care about their story. A theory of well-being that ignores what people want cannot be sustained. On the other hand, a theory that ignores what actually happens in people’s lives and focuses exclusively on what they think about their life is not tenable either. The remembering self and the experiencing self must both be considered, because their interests do not always coincide. Philosophers could struggle with these questions for a long time.

 The definition of rationality as coherence is impossibly restrictive; it demands adherence to rules of logic that a finite mind is not able to implement.

 I often cringe when my work with Amos is credited with demonstrating that human choices are irrational, when in fact our research only showed that Humans are not well described by the rational-agent model.

 Thaler and Sunstein advocate a position of libertarian paternalism, in which the state and other institutions are allowed to nudge people to make decisions that serve their own long-term interests. The designation of joining a pension plan as the default option is an example of a nudge. Choice architecture.  An Econ knows how to deal with small print when it matters. In contrast, the recommendations of Nudge require firms to offer contracts that are sufficiently simple to be read and understood by Human customers.

The attentive System 2 is who we think we are. System 1 is indeed the origin of much that we do wrong, but it is also the origin of most of what we do right — which is most of what we do.

It is much easier to identify a minefield when you observe others wandering into it than when you are about to do so. Observers are less cognitively busy and more open to information than actors. That was my reason for writing a book that is oriented to critics and gossipers rather than to decision makers.

A richer language is essential to the skill of constructive criticism. Labels such as “anchoring effects,” “narrow framing,” or “excessive coherence” bring together in memory everything we know about a bias, its causes, its effects, and what can be done about it.

 

Appendix A: Judgment Under Uncertainty: Heuristics and Biases

This article described three heuristics: representativeness, availability, and adjustment and anchoring. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors.

 

Appendix B: Choices, Values, and Frames

Making decisions is like speaking prose — people do it all the time, knowingly or unknowingly. It is hardly surprising, then, that the topic of decision making is shared by many disciplines, from mathematics and statistics, through economics and political science, to sociology and psychology. The study of decisions addresses both normative and descriptive questions. The normative analysis is concerned with the nature of rationality and the logic of decision making. The descriptive analysis, in contrast, is concerned with people’s beliefs and preferences as they are, not as they should be. The tension between normative and descriptive considerations characterizes much of the study of judgment and choice.

Very low probabilities are either overweighted quite grossly or neglected altogether, making the decision weights highly unstable in that region. People are often risk seeking in dealing with improbable gains and risk averse in dealing with unlikely losses. Thus, the characteristics of decision weights contribute to the attractiveness of both lottery tickets and insurance policies.  For ordinary decision makers, however, the correspondence of decision values to experience values is far from perfect.