Written by Michael Mauboussin, Head of Consilient Research at Counterpoint Global and previously Head of Global Financial Strategies at Credit Suisse, Think Twice is a book about decision making, specifically the cognitive biases and mistakes that get in the way of thinking clearly.

If you’re a decision-making or behavioural economics aficionado, you might already be familiar with the concepts in this book and can safely skip it.

While I recognised quite a few of the biases (e.g. anchoring), a lot of the material was new to me. So, I’d have to admit I learnt a lot from this book.

Here are my notes from Think Twice.

The inside view vs the outside view

  • We have a tendency to favour the inside view over the outside view.
  • An inside view considers a problem by focusing on the specific task and by using information that is close at hand, and makes predictions based on that narrow and unique set of inputs.
  • This is the approach that most people use in building models of the future and is indeed common for all forms of planning.
  • The outside view asks if there are similar situations that can provide a statistical basis for making a decision. Rather than seeing a problem as unique, the outside view wants to know if others have faced comparable problems and, if so, what happened.
  • The outside view is an unnatural way to think, precisely because it forces people to set aside all the cherished information they have gathered.

Why do people tend to embrace the inside view?

  • Social psychologists distinguish three illusions that lead people to the inside view.
  • Illusion of superiority. This suggests that people have an unrealistically positive view of themselves. The least capable people often have the largest gaps between what they think they can do and what they actually achieve. (Dunning-Kruger Effect)
  • Illusion of optimism. Most people see their future as brighter than that of others.
  • Illusion of control. People behave as if chance events are subject to their control.

Insufficient consideration of alternatives

  • In deciding, people often start with a specific piece of information or trait (anchor) and adjust as necessary to come up with a final answer.
  • The bias is for people to make insufficient adjustments from the anchor, leading to off-the-mark responses. Systematically, the final answer leans too close to the anchor, whether or not the anchor is sensible.
  • Anchoring is a symptom of “insufficient consideration of alternatives.”
  • First, people reason from a set of premises and only consider compatible possibilities. As a result, people fail to consider what they believe is false.
  • Second, and related, is the point that how a person sees a problem—how it’s described to him, how he feels about it, and his individual knowledge—shapes how he reasons about it. Since we are poor logicians, a problem’s presentation strongly influences how we choose.
  • Last, a mental model is an internal representation of an external reality, an incomplete representation that trades detail for speed. Once formed, mental models replace more cumbersome reasoning processes, but are only as good as their ability to match reality. An ill-suited mental model will lead to a decision-making fiasco.
  • Anchoring is relevant in high-stakes political or business negotiations. In situations with limited information or uncertainty, anchors can strongly influence the outcome. For instance, studies show that the party that makes the first offer can benefit from a strong anchoring effect in ambiguous situations.
  • Developing and recognising a full range of outcomes is the best protection against the anchoring effect if you are sitting on the other side of the negotiating table.
  • The availability heuristic, judging the frequency or probability of an event based on what is readily available in memory, poses a related challenge. We tend to give too much weight to the probability of something if we have seen it recently or if it is vivid in our mind.
  • Failure to reflect reversion to the mean is the result of extrapolating earlier performance into the future without giving proper weight to the role of chance. Models based on past results forecast in the belief that the future will be characteristically similar to history. In each case, our minds—or the models our minds construct—anticipate without giving suitable consideration to other possibilities.
  • Cognitive dissonance is one facet of our next mistake, the rigidity that comes with the innate human desire to be internally and externally consistent.
  • Many times we resolve the discomfort by figuring out how to justify our actions, for example, the man who recognises that wearing a seat belt improves safety but who doesn’t wear one.
  • While cognitive dissonance is about internal consistency, the confirmation bias is about external consistency. The confirmation bias occurs when an individual seeks information that confirms a prior belief or view and disregards, or disconfirms, evidence that counters it.

The Expert Squeeze

  • As networks harness the wisdom of crowds and computing power grows, the ability of experts to add value in their predictions is steadily declining. (“Expert squeeze”)
  • The expert squeeze means that people stuck in old habits of thinking are failing to use new means to gain insight into the problems they face. Knowing when to look beyond experts requires a totally fresh point of view, and one that does not come naturally.
  • To be sure, the future for experts is not all bleak. Experts retain an advantage in some crucial areas. The challenge is to know when and how to use them.
  • Experts do well with rules-based problems with a wide range of outcomes because they are better than computers at eliminating bad choices and making creative connections between bits of information.
  • Once you have properly classified a problem, turn to the best method for solving it. As we will see, computers and collectives remain under-utilised guides for decision making across a host of realms including medicine, business, and sports.
  • That said, experts remain vital in three capacities. First, experts must create the very systems that replace them.
  • Next, we need experts for strategy. I mean strategy broadly, including not only day-to-day tactics but also the ability to troubleshoot by recognising interconnections as well as the creative process of innovation, which involves combining ideas in novel ways.
  • Finally, we need people to deal with people. A lot of decision making involves psychology as much as it does statistics. A leader must understand others, make good decisions, and encourage others to buy in to the decision.

Mistake: relying on experts instead of the wisdom of crowds.

  • Scott Page, a social scientist who has studied problem solving by groups, offers a very useful approach for understanding collective decision making. He calls it the diversity prediction theorem, which states: Collective error = average individual error − prediction diversity (a worked example follows this list).
  • The diversity prediction theorem tells us that a diverse crowd will always predict more accurately than the average person in the crowd.
  • This suggests that modesty is in order, but most people do not think of themselves as average—and certainly not as below average. Yet in reality, half of all people must be below average, and so you should sort out when you are likely to be one of them.
  • Also important is that collective accuracy is equal parts ability and diversity. You can reduce the collective error either by increasing ability or by increasing diversity. Both ability and diversity are essential.
  • Finally, while not a formal implication of the theorem, the collective is often better than even the best individual. So a diverse collective always beats the average person and frequently beats everyone.
  • With the diversity prediction theorem in hand, we can flesh out when crowds predict well. Three conditions must be in place: diversity, aggregation, and incentives. Each condition clicks into the equation. Diversity reduces the collective error. Aggregation assures that the market considers everyone’s information. Incentives help reduce individual errors by encouraging people to participate only when they think they have an insight.
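To make the arithmetic concrete, here is a minimal sketch of the theorem with made-up numbers (my illustration, not the book’s). Page defines all three terms as squared deviations: the collective error is the squared gap between the crowd’s average guess and the truth, the average individual error averages each member’s squared miss, and prediction diversity averages each member’s squared distance from the crowd’s guess.

```python
# Minimal sketch of Scott Page's diversity prediction theorem
# (illustrative numbers, not from the book):
#   collective error = average individual error - prediction diversity
# where all three terms are squared deviations.

predictions = [10, 14, 18, 22]   # hypothetical individual guesses
actual = 13                      # hypothetical true value

collective = sum(predictions) / len(predictions)  # crowd's average guess = 16

collective_error = (collective - actual) ** 2     # (16 - 13)^2 = 9
avg_individual_error = sum((p - actual) ** 2 for p in predictions) / len(predictions)
#   = (9 + 1 + 25 + 81) / 4 = 29
prediction_diversity = sum((p - collective) ** 2 for p in predictions) / len(predictions)
#   = (36 + 4 + 4 + 36) / 4 = 20

assert collective_error == avg_individual_error - prediction_diversity  # 9 = 29 - 20
```

Because diversity can never be negative, the crowd’s guess can never be worse than its average member’s, which is exactly the theorem’s claim.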

Warning: don’t lean too much on either formula-based approaches or the wisdom of crowds

  • While computers and collectives can be very useful, they do not warrant blind faith.
  • An example of over-reliance on numbers is what Malcolm Gladwell calls the mismatch problem. The problem, which you will immediately recognise, occurs when experts use ostensibly objective measures to anticipate future performance. In many cases, experts rely on measures that have little or no predictive value.
  • Unchecked devotion to the wisdom of crowds is also folly. While free-market devotees argue that prices reflect the most accurate assessments available, markets are extremely fallible. That is because when one or more of the three wisdom-of-crowds conditions are violated, the collective error can swell. Not surprisingly, diversity is the most likely condition to fail because we are inherently social and imitative.

Situational awareness

  • Our situation influences our decisions enormously.
  • Making good decisions in the face of subconscious pressure requires a very high degree of background knowledge and self-awareness.
  • Mistake #1: belief that our decisions are independent of our experiences.
  • Mistake #2: the perception that people decide what is best for them independent of how the choice is framed.
  • In reality, many people simply go with default options. This applies to a wide array of choices, from insignificant issues like the ringtone on a new cell phone to consequential issues like financial savings, educational choice, and medical alternatives.
  • Richard Thaler, an economist, and Cass Sunstein, a law professor, call the relationship between choice presentation and the ultimate decision “choice architecture.” They convincingly argue that we can easily nudge people toward a particular decision based solely on how we arrange the choices for them.
  • Mistake #3: relying on immediate emotional reactions to risk instead of on an impartial judgment of possible future outcomes.
  • The basic concept is that how we feel about something influences how we decide about it. Affective responses occur quickly and automatically, are difficult to manage, and remain beyond our awareness.
  • Affect research reveals two core principles related to probabilities and outcomes. First, when the outcomes of an opportunity are without potent affective meaning, people tend to overweight probabilities.
  • In contrast, when outcomes are vivid, people pay too little attention to the probabilities and too much to the outcomes.
  • Mistake #4: explaining behaviour by focusing on people’s dispositions, rather than considering the situation.

More is different

  • You cannot understand a swarm’s complex behaviour by analysing the decisions of a few key individuals, because a swarm is a complex adaptive system.
  • Think of a complex adaptive system in three parts (a toy simulation follows this list).
  • First, there is a group of heterogeneous agents. These agents can be neurons in your brain, bees in a hive, investors in a market, or people in a city. Heterogeneity means each agent has different and evolving decision rules that both reflect the environment and attempt to anticipate change in it.
  • Second, these agents interact with one another, and their interactions create structure—scientists often call this emergence.
  • Finally, the structure that emerges behaves like a higher-level system and has properties and characteristics that are distinct from those of the underlying agents themselves.
  • Humans have a deep desire to understand cause and effect, as such links probably conferred an evolutionary advantage. In complex adaptive systems, there is no simple method for understanding the whole by studying the parts, so searching for simple agent-level causes of system-level effects is useless.
  • Yet our minds are not beyond making up a cause to relieve the itch of an unexplained effect. When a mind seeking links between cause and effect meets a system that conceals them, accidents will happen.
  • Mistake #1: inappropriately extrapolating individual behaviour to explain collective behaviour
  • Mistake #2: tweaking one part of a system with many interconnected parts without considering the consequences for the whole.
  • Mistake #3: isolating individual performance without proper consideration of the individual’s surrounding system.
  • A star’s performance relies to some degree on the people, structure, and norms around him—the system. Analysing results requires sorting the relative contributions of the individual versus the system, something we are not particularly good at. When we err, we tend to overstate the role of the individual.
  • All three mistakes have the same root: a focus on an isolated part of a complex adaptive system without an appreciation of the system dynamics.
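The point about extrapolating individual behaviour is easier to see with a toy model. The sketch below is my own illustration, not the book’s: Granovetter-style threshold agents, each of whom joins a behaviour once enough others have joined. Two crowds whose members are almost identical one by one can end up in completely different collective states, which is why studying a few agents tells you little about the system.

```python
# Toy illustration (not from the book) of why agent-level detail doesn't
# predict system-level behaviour: threshold agents who join a behaviour
# once the fraction of agents already participating reaches their
# personal threshold.

def cascade(thresholds):
    """Return the final fraction of the crowd participating."""
    n = len(thresholds)
    participating = 0
    while True:
        # An agent joins once the current participation rate reaches its threshold.
        new_total = sum(1 for t in thresholds if t <= participating / n)
        if new_total == participating:
            return participating / n
        participating = new_total

crowd_a = [i / 100 for i in range(100)]               # thresholds 0.00, 0.01, ..., 0.99
crowd_b = [0.01] + [i / 100 for i in range(1, 100)]   # one threshold nudged from 0.00 to 0.01

print(cascade(crowd_a))  # 1.0 -> everyone ends up participating
print(cascade(crowd_b))  # 0.0 -> nobody does, despite a near-identical crowd
```

Nudging a single agent’s threshold changes almost nothing about the average crowd member, yet it flips the emergent outcome entirely.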

Understand context

  • Frequently, people try to cram the lessons or experiences from one situation into a different situation. But that strategy often crashes because the decisions that work in one context often fail miserably in another. The right answer to most questions that professionals face is, “It depends.”
  • Mistake #1: embracing a strategy without fully understanding the conditions under which it succeeds or fails.
  • Mistake #2: the failure to distinguish between correlation and causality. This problem arises when researchers observe a correlation between two variables and assume that one caused the other.
  • Mistake #3: inflexibility in the face of evidence that change is necessary.

Phase transitions

  • Feedback can be negative or positive, and within many systems you see a healthy balance of the two. Negative feedback is a stabilising factor, while positive feedback promotes change. But too much of either type of feedback can leave a system out of balance.
  • Negative feedback resists change by pushing in the opposite direction. Positive feedback reinforces an initial change in the same direction.
  • Phase transitions: where small incremental changes in causes lead to large-scale effects.
  • To be clear, in such systems cause and effect are proportionate most of the time. But they also have critical points, or thresholds, where phase transitions occur. You can think of these points as occurring when one form of feedback overwhelms the other (a small sketch after this list illustrates the idea).
  • The vital insight is the existence of a critical point.
  • In December 2000, Arup engineers enlisted volunteers to walk on London’s Millennium Bridge in order to determine the level at which the unsafe swaying would occur. Their test showed that 156 people could walk on the bridge with little impact. But adding just 10 more pedestrians caused the amplitude to change dramatically, as the positive feedback kicked in.
  • For the first 156 people who crossed the bridge, there was little sway and no sense of any potential hazard, even though the bridge was on the cusp of a phase transition. This shows why critical points are so important for proper counterfactual thinking: considering what might have been. For every phase transition you do see, how many close calls were there?
  • With lots of phenomena, including human heights and athletic results, the outcomes don’t stray too far from average.
  • But there are systems with heavily skewed distributions, where the idea of average holds little or no meaning. These distributions are better described by a power law, which implies that a few of the outcomes are really large (or have a large impact) and most observations are small.
  • Nassim Taleb, an author and former derivatives trader, calls the extreme outcomes within power law distributions black swans. He defines a black swan as an outlier event that has a consequential impact and that humans seek to explain after the fact.
  • Here’s where critical points and phase transitions come in. Positive feedback leads to outcomes that are outliers. And critical points help explain our perpetual surprise at black swan events because we have a hard time understanding how such small incremental perturbations can lead to such large outcomes.
  • The presence of phase transitions invites a few common decision-making mistakes. The first is the problem of induction, or how you should logically go from specific observations to general conclusions.
  • Repeated, good outcomes provide us with confirming evidence that our strategy is good and everything is fine. This illusion lulls us into an unwarranted sense of confidence and sets us up for a (usually negative) surprise. The fact that phase transitions come with sudden change only adds to the confusion.
  • Another mistake that we make when dealing with complex systems is what psychologists call reductive bias, “a tendency for people to treat and interpret complex circumstances and topics as simpler than they really are, leading to misconception.”
  • When asked to decide about a system that’s complex and nonlinear, a person will often revert to thinking about a system that is simple and linear. Our minds naturally offer an answer to a related but easier question, often with costly consequences.
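A stripped-down way to see a critical point (my sketch, not the book’s model): a system whose output each period equals an outside push plus a fraction k of last period’s output. The parameter k stands in for the strength of positive feedback, and the critical point sits at k = 1.

```python
# Toy sketch (my illustration, not the book's) of a critical point: each
# period's output is an external push plus a fraction k of last period's
# output (positive feedback). Below k = 1 the response settles; near k = 1
# the same small push produces a disproportionately large response; at
# k >= 1 the system runs away entirely.

def steady_response(push, k, periods=10_000):
    """Simulate x_next = push + k * x and return the final value."""
    x = 0.0
    for _ in range(periods):
        x = push + k * x
    return x

for k in (0.50, 0.90, 0.99, 1.01):
    print(f"k = {k:4.2f} -> response to a unit push: {steady_response(1.0, k):,.0f}")

# k = 0.50 -> about 2
# k = 0.90 -> about 10
# k = 0.99 -> about 100
# k = 1.01 -> astronomically large (past the critical point, feedback takes over)
```

Below the critical point the response stays proportionate to the push; as k creeps toward 1 the same tiny push produces ever larger responses, and past it the feedback takes over completely, which is the phase-transition surprise described above.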

Sorting luck from skill

  • We have difficulty sorting skill and luck in lots of fields, including business and investing. As a result, we make a host of predictable and natural mistakes, such as failing to appreciate a team’s or an individual’s inevitable reversion to the mean.
  • Reversion to the mean: for many types of systems, an outcome that is not average will be followed by an outcome that has an expected value closer to the average.
  • Francis Galton’s significant insight was that, even as reversion to the mean occurs from one generation to the next, the overall distribution of heights remains stable over time. This combination sets a trap for people because reversion to the mean suggests things become more average over time, while a stable distribution implies things don’t change much. Fully grasping how change and stability go together is the key to understanding reversion to the mean.
  • Consider how a golfer may score on two rounds on different days. If the golfer scores well below his handicap for the first round, how would you expect him to do for the second one? The answer is not as well. The exceptional score on the first round resulted from his being skilful but also very lucky. Even if he is just as skilful while playing the second round, you would not expect the same good luck.
  • Any system that combines skill and luck will revert to the mean over time, as the simulation sketch after this list illustrates.
  • When you ignore the concept of reversion to the mean, you make three types of mistakes. The first mistake is thinking you’re special.
  • A counterintuitive implication of mean reversion is that you get the same result whether you run the data forward or backward. So the parents of tall children tend to be tall, but not as tall as their children. Companies with high returns today had high returns in the past, but not as high as the present.
  • The halo effect is the human proclivity to make specific inferences based on general impressions.
  • In The Halo Effect, Phil Rosenzweig showed that this mistake pervades the business world. Rosenzweig pointed out that we tend to observe financially successful companies, attach attributes (e.g., great leadership, visionary strategy, tight financial controls) to that success, and recommend that others embrace the attributes to achieve their own success. Researchers who study management often follow this formula and rarely recognise the role of luck in business performance. And the substantial data the researchers use to support their claims is all for nothing if they fall into the trap of the halo effect.
  • Here’s a simple test of whether an activity involves skill: ask if you can lose on purpose.
  • You should therefore be careful when you draw conclusions about outcomes in activities that involve luck—especially conclusions about short-term results. We’re not very good at deciding how much weight to give to skill and to luck in any given situation. When something good happens, we tend to think it’s because of skill. When something bad happens, we write it off to chance.
  • The more that luck contributes to the outcomes you observe, the larger the sample you will need to distinguish between skill and luck.
  • In addition, when a large number of people participate in an activity that is influenced by chance, some of them will succeed by sheer luck. So you have to scrutinise even long, successful track records in fields with lots of participants.
  • Streaks, continuous success in a particular activity, require large doses of skill and luck. In fact, a streak is one of the best indicators of skill in a field. Luck alone can’t carry a streak.
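A short simulation makes the golfer logic concrete. It is my sketch with assumed numbers, not the book’s: skill and luck are drawn from the same bell curve and contribute equally to each round’s score.

```python
# A simple simulation (my sketch, not from the book) of reversion to the mean
# when outcomes mix a stable skill component with luck. Pick the best
# performers in round one and watch their round-two results fall back toward
# the average: their skill persists, but their good luck does not.

import random

random.seed(0)

players = 10_000
skills = [random.gauss(0, 1) for _ in range(players)]   # persistent skill
round1 = [s + random.gauss(0, 1) for s in skills]       # skill + luck
round2 = [s + random.gauss(0, 1) for s in skills]       # same skill, fresh luck

# Take the top 10% of round-one performers.
cutoff = sorted(round1, reverse=True)[players // 10]
top = [i for i in range(players) if round1[i] > cutoff]

avg_r1 = sum(round1[i] for i in top) / len(top)
avg_r2 = sum(round2[i] for i in top) / len(top)
print(f"top decile, round 1 average: {avg_r1:.2f}")   # roughly 2.5
print(f"top decile, round 2 average: {avg_r2:.2f}")   # roughly 1.2 -> closer to the mean of 0
```

Under these assumptions roughly half of the top performers’ round-one edge was luck, and that is precisely the part that doesn’t carry over to round two.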

Enjoyed this? Then buy the book or read more book summaries.