Anchoring Bias

What is the Anchoring bias?

Anchoring bias is a cognitive bias that causes us to rely too heavily on the first piece of information we are given about a topic. When we are setting plans or making estimates about something, we interpret newer information from the reference point of our anchor, instead of seeing it objectively. This can skew our judgment, and prevent us from updating our plans or predictions as much as we should.

Where this bias occurs

Imagine you’re out shopping for a present for a friend. You find a pair of earrings that you know they’d love, but they cost $100, way more than you budgeted for. After putting the expensive earrings back, you find a necklace for $75—still more than your budget, but hey, it’s cheaper than the earrings! The $100 price tag has become an anchor: judged against it, $75 feels like a reasonable deal, even though the necklace is still more than you planned to spend.

Individual effects

When we become anchored to a specific figure or plan of action, we end up filtering all new information through the framework we initially drew up in our head, distorting our perception. This makes us reluctant to make significant changes to our plans, even if the situation calls for it.

Systemic effects

Anchoring bias is extremely pervasive, and it’s thought to drive many other cognitive biases, such as the planning fallacy and the spotlight effect. Anchoring can even influence courtroom judgments, where research shows that prison sentences assigned by jurors and judges can be swayed by providing an anchor.2,3

Why it happens

Anchoring bias is one of the most robust effects in psychology. Many studies have confirmed its effects, and shown that we can often become anchored by values that aren’t even relevant to the task at hand. In one study, for example, people were asked for the last two digits of their social security number. Next, they were shown a number of different products, including things like computer equipment, bottles of wine, and boxes of chocolate. For each item, participants indicated whether they would be willing to pay the amount of money formed by their two digits. For example, if somebody’s number ended in 34, they would say whether or not they would pay $34 for each item. After that, the researchers asked participants the maximum amount they would actually be willing to pay for each item.

Even though somebody’s social security number is nothing more than a random series of digits, those numbers had an effect on their decision making. People whose digits amounted to a higher number were willing to pay significantly more for the same products, compared to those with lower numbers.9 Anchoring bias also holds up when anchors are obtained by rolling dice or spinning a wheel, and when researchers remind people that the anchor is irrelevant.4

Given its ubiquity, anchoring appears to be deeply rooted in human cognition. Its causes are still being debated, but recent evidence suggests that we can become anchored to all kinds of values or pieces of information, whether we came up with them ourselves or they were provided to us,4 and that the bias arises through different mechanisms depending on which of these is the case.

When we come up with anchors ourselves: The anchor-and-adjust hypothesis

The original explanation for anchoring bias comes from Amos Tversky and Daniel Kahneman, two of the most influential figures in behavioral economics. In a 1974 paper called “Judgment under Uncertainty: Heuristics and Biases,” Tversky and Kahneman theorized that, when people try to make estimates or predictions, they begin with some initial value, or starting point, and then adjust from there. Anchoring bias happens because the adjustments usually aren’t big enough, leading us to incorrect decisions. This has become known as the anchor-and-adjust hypothesis.

To back up their account of anchoring, Tversky and Kahneman ran a study in which high school students had just five seconds to estimate the answer to a multiplication problem. One group was asked to estimate the product:

8 x 7 x 6 x 5 x 4 x 3 x 2 x 1

Another group was given the same sequence, but in reverse:

1 x 2 x 3 x 4 x 5 x 6 x 7 x 8

The median estimate for the first problem was 2,250, while the median estimate for the second was 512. (The correct answer is 40,320.) Tversky and Kahneman argued that this difference arose because the students were doing partial calculations in their heads, and then trying to adjust these values to get to an answer. The group given the descending sequence was working with larger numbers to start with, so their partial calculations brought them to a larger starting point, which they became anchored to (and vice versa for the other group).5
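To make the partial-calculation idea concrete, here is roughly how the running products compare after the first few steps (the exact intermediate values any individual student reached are, of course, an assumption):

Descending sequence: 8 x 7 = 56, 56 x 6 = 336, 336 x 5 = 1,680, ...

Ascending sequence: 1 x 2 = 2, 2 x 3 = 6, 6 x 4 = 24, 24 x 5 = 120, ...

After just four or five multiplications, the descending group is already working from a figure in the thousands, while the ascending group’s running total is still in the low hundreds. Adjusting upward from either starting point, students fell well short of the true answer of 40,320, but the higher anchor left the first group considerably closer.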

Tversky and Kahneman’s explanation works well to explain anchoring bias in situations where people generate an anchor on their own.6 However, in cases where an anchor is provided by some external source, the anchor-and-adjust hypothesis is not so well supported. In these situations, the literature favors a phenomenon known as selective accessibility.

The selective accessibility hypothesis

This theory relies on priming, another prevalent effect in psychology. When people are exposed to a given concept, it is said to become primed, meaning that the areas of the brain related to that concept remain activated at some level. This makes the concept more easily accessible, and more able to influence people’s behavior without their realizing.

Just like anchoring, priming is a robust and ubiquitous phenomenon that plays a role in many other biases and heuristics—and as it turns out, anchoring might be one of them. According to this theory, when we are first presented with an anchoring piece of information, the first thing we do is mentally test whether it is a plausible value for whatever target object or situation we are considering. We do this by building a mental representation of the target. For example, if I were to ask you whether the Mississippi River is longer or shorter than 3,000 miles, you might try to imagine the north-south extent of the United States, and use that to try to figure out the answer.7

As we’re building our mental model and testing out the anchor on it, we end up activating other pieces of information that are consistent with the anchor. As a result, all of this information becomes primed, and more likely to affect our decision making. However, because the activated information lives within our mental model for a specific concept, anchoring bias should be stronger when the primed information is applicable to the task at hand. So, after you answered my first Mississippi question, if I were to follow it up by asking how wide the river is, the anchor I gave you (3,000 miles) shouldn’t affect your answer as much, because in your mental model, this figure was only related to length.

To test this idea, Strack and Mussweiler (1997) had participants fill out a questionnaire. First, they made a comparative judgment, meaning they were asked to guess whether some value of a target object was higher or lower than an anchor. For example, they might have been asked whether the Brandenburg Gate (the target) is taller or shorter than 150 meters (the anchor). After this, they made an absolute judgment about the target, such as being asked to guess how tall the Brandenburg Gate is. For some participants, however, the absolute judgment involved a different dimension than the comparative judgment—for example, asking about a structure’s width instead of its height.

The results showed that the anchor effect was much stronger if the object dimension was the same for both questions,7 lending support to the theory of selective accessibility. This does not mean that the anchor-and-adjust hypothesis is incorrect, however. Instead, it means that anchoring bias relies on multiple, different mechanisms, and it happens for different reasons depending on the circumstances.

Bad moods weigh us down

The research on anchoring has turned up a number of other factors that influence anchoring bias. One of these is mood: evidence shows that people in sad moods are more susceptible to anchoring, compared to others in good moods. This result is surprising, because usually, experiments have found the opposite to be true: happy moods result in more biased processing, whereas sadness causes people to think things through more carefully.4

This finding makes sense in the context of the selective accessibility theory. If sadness makes people more thorough processors, that would mean that they activate more anchor-consistent information, which would then enhance anchoring bias.8

Why it is important

Anchoring bias is one of the most pervasive cognitive biases. Whether we’re setting a schedule for a project or trying to decide on a reasonable budget, this bias can skew our perspective and cause us to cling to a particular number or value, even when it’s irrational.

Anchoring is so ubiquitous that it is thought to be a driving force behind a number of other biases and heuristics. One example of these is the planning fallacy, a bias that describes how we tend to underestimate the time we’ll need to finish a task, as well as the costs of doing so. Once we set an initial plan for completing a project, we can become anchored to it, which in turn makes us reluctant to update our plan—even if it becomes clear that we will need more time, or a higher budget. This can have significant consequences, especially in the business world, where there could be a lot of money tied up in a venture.1

How to avoid it

Avoiding anchoring bias entirely probably isn’t possible, given how ubiquitous and powerful it is. Like all cognitive biases, anchoring happens subconsciously, and when we aren’t aware that something is happening, it’s difficult to interrupt it. Even more frustrating, some strategies that intuitively sound like good ways to avoid bias might not work with anchoring. For example, it’s usually a good idea to take one’s time with a decision and think it through carefully—but, as discussed above, thinking more about an anchor might actually strengthen the effect, because it activates more anchor-consistent information.

One evidence-based and fairly straightforward strategy for combating anchoring bias is to come up with reasons why the anchor is inappropriate for the situation. In one study, car experts were asked to judge whether the resale price of a certain car (the anchor) was too high or too low, after which they were asked to provide a better estimate. Before giving their own price, however, half of the experts were also asked to come up with arguments against the anchor price. These participants showed a weaker anchoring effect than those who hadn’t generated counterarguments.10

Considering alternative options is a good way to strengthen decision making more generally. This strategy is similar to red teaming, which involves designating people to oppose and challenge the ideas of a group.11 By building a step into the decision making process that is specifically dedicated to exposing the weaknesses of a plan, and to considering alternatives, it might be possible to reduce the influence of an anchor.

How it all started

The first mention of anchoring bias was in a 1958 study by Muzafer Sherif, Daniel Taub, and Carl Hovland. These researchers were running a study in psychophysics, a branch of psychology that investigates how we perceive the physical properties of objects. This particular experiment involved having participants estimate the weights of objects; the researchers used the term “anchor” to describe how the presence of one extreme weight influenced judgments of the other objects.4 The anchoring effect wasn’t conceptualized as a bias affecting decision making until the late 1960s, and it wasn’t until the 1970s that Daniel Kahneman and Amos Tversky introduced the anchor-and-adjust hypothesis to explain this phenomenon.

Example 1 – Anchors in the courtroom

In the criminal justice system, prosecutors and attorneys typically demand a certain length of prison sentence for those convicted of a crime. In other cases, a sentence might be recommended by a probation officer. Technically speaking, the judge in a case still has the freedom to sentence a person as they see fit—but research shows that these demands can serve as anchors, influencing the final judgment.

In one study, criminal judges were given a hypothetical criminal case, including what the prosecutor in the case demanded as a prison sentence. For some of the judges, the recommended sentence was 2 months; for others, it was 34 months. First, the judges rated whether they thought the demand was too low, too high, or adequate. After that, they indicated how long a sentence they would assign if they were presiding over the case.

As the researchers expected, the anchor had a significant effect on the length of the sentence prescribed. On average, the judges who had been given the higher anchor handed down a sentence of 28.7 months, while the group given the lower anchor gave an average sentence of 18.78 months.12 These results show how sentencing demands can color a judge’s perception of a criminal case and seriously skew their judgment. Even people who are seen as experts in their fields aren’t immune to anchoring bias.

Example 2 – The anchoring effect and portion sizes

As most of us know from experience, it’s easier to end up overeating when we are served a large portion, compared to a smaller one. This effect might be due to anchoring. In a study on anchoring and food intake, participants were asked to imagine being served either a small or a large portion of food, and then to indicate whether they would eat more or less than this amount. Next, they specified exactly how much they believed they would eat. The results showed that participants’ estimates of how much they would eat were influenced by the anchor they had been exposed to (the imagined large or small portion). This effect was found even when participants had been told to discount the anchor.13

Summary

What it is

Anchoring bias is a pervasive cognitive bias that causes us to rely too heavily on information that we received early on in the decision making process. Because we use this “anchoring” information as a point of reference, our perception of the situation can become skewed.

Why it happens

There are two dominant theories behind anchoring bias. The first one, the anchor-and-adjust hypothesis, says that when we make decisions under uncertainty, we start by calculating some initial value and adjusting it, but our adjustments are usually insufficient. The second one, the selective accessibility theory, says that anchoring bias happens because we are primed to recall and notice anchor-consistent information.

Example 1 – Anchors in the courtroom

In criminal court cases, prosecutors often demand a certain length of sentence for the accused. Research shows these demands can become anchors that bias the judge’s decision making.

Example 2 – Anchoring and portion sizes

The common tendency to eat more when faced with a larger portion might be explained by anchoring. In one study, participants’ estimates of how much they would eat were influenced by an anchoring portion size (large or small) they had been told to imagine previously.

How to avoid it

The anchoring effect is difficult (if not impossible) to completely avoid, but research shows that it can be reduced by considering reasons why the anchor doesn’t fit the situation well.

Related articles

How Does Anchoring Impact Our Decisions?

This article explores how anchoring can affect our decisions as consumers. Beyond just influencing how we think about an item’s price, evidence shows that it can also affect our perception of its quality. It also explores some other mechanisms that might contribute to anchoring bias.

Does Anchoring Work In The Courtroom?

This article goes into depth about another study on the effects of anchoring in criminal law cases. Rather than using a more naturalistic scenario, like the experiment described above, this study investigated whether anchoring bias still occurred when the anchoring information was arbitrary, or even random (as in a dice throw). The results, dishearteningly, showed that these anchors still had a big effect.
