Category Archives: Heuristics and Biases

Heuristics: Representativeness

What are Heuristics?

People rely on heuristics because they facilitate the task of assessing probabilities and predicting values; they allow us to make decisions quickly and instinctively. Although heuristics, like schemas, are often inaccurate, people look for evidence that the heuristic or schema is true and ignore failures of judgment (Tversky and Kahneman, 1974). Heuristic errors are known as systematic errors, and they occur because the heuristic cannot cope with the complexity of the task. Heuristics simply lack validity.

Representativeness

Representativeness is when the probability that A belongs to class B is evaluated by how closely A resembles B, neglecting the other factors that determine how likely A and B really are to be related (Tversky and Kahneman, 1974). Representativeness heuristics are usually quite accurate, because if A resembles B, there is a good chance that they are somehow related. Unfortunately, similarity can be misleading, as it is influenced by factors that need to be taken into consideration when judging probability. These factors include prior probability outcomes, sample size and chance.

– Insensitivity to Prior Probability Outcomes

A major influence on probability is base-rate frequency. For example, even if Steve's description fits the stereotype of a librarian better than that of a farmer, the fact that there are many more farmers than librarians in his population needs to be taken into account when assessing the likelihood of him having one occupation over the other. If Steve's area has rich soil and is therefore full of farmers, the base-rate frequency suggests that Steve is more likely to be a farmer than a librarian.

In 1973, Kahneman and Tversky conducted an experiment showing how often people overlook base-rate frequency when assessing probability. Participants were shown short personality descriptions of individuals said to be sampled from a group of 100 lawyers and engineers, and the task was to assess which individuals were likely to be lawyers and which engineers. In condition A, participants were told the group contained 70 engineers and 30 lawyers; in condition B, 30 engineers and 70 lawyers. The two conditions produced virtually the same probability judgements despite the significantly different base rates clearly given to the participants. Participants only used the base rates when they were given no personality descriptions.

Goodie and Fantino (1996) also studied base-rate neglect. Participants were asked to determine the probability that a taxi seen by a witness was blue or green. Even though participants were given the base rate of taxi colours in the city, they still judged the probability by the reliability of the witness.
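Bayes' theorem shows what the base rate should contribute. Below is a minimal sketch using the figures from the commonly cited version of the taxi problem (85% green cabs, a witness who is correct 80% of the time); the study's exact numbers are not given above, so treat these as assumptions.

```python
# Bayes' rule for the taxi problem: the base rate should temper the
# witness's reliability. The figures are assumed for illustration.

base_rate_blue = 0.15    # assume 15% of the city's cabs are blue, 85% green
witness_accuracy = 0.80  # assume the witness identifies colours correctly 80% of the time

# P(witness says "blue") = P(blue)P(correct) + P(green)P(mistaken)
p_says_blue = (base_rate_blue * witness_accuracy
               + (1 - base_rate_blue) * (1 - witness_accuracy))

# P(cab is blue | witness says "blue")
p_blue = base_rate_blue * witness_accuracy / p_says_blue
print(f"P(blue | witness says blue) = {p_blue:.2f}")  # ~0.41
```

Even with a witness who is right 80% of the time, the cab is still more likely to have been green; that pull of the base rate is precisely what participants ignored.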

– Insensitivity to Sample Size

Another major influence on probability is sample size. People assume that the similarity of a sample statistic to a population parameter does not depend on the size of the sample; therefore, if probabilities are assessed by representativeness, the judged probability will be independent of sample size. Tversky and Kahneman (1974) conducted an experiment to show evidence of insensitivity to sample size. Participants were given the following information:

– There are two hospitals in a town, one large and one small

– About 45 babies are born per day at the large hospital

– About 15 babies are born per day at the small hospital

– About 50% of the babies born are boys, but the exact figure varies from day to day

Participants were then asked which hospital was more likely to report a day on which 60% of the births were boys. Most participants answered that the two hospitals are equally likely. However, sampling theory entails that the small hospital is more likely to report such a day, because a large sample is less likely to stray from the 50% mean.
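The sampling-theory claim is easy to verify with exact binomial probabilities; here is a minimal sketch, assuming independent births that are each 50% likely to be boys.

```python
from math import comb

def prob_at_least(n: int, k_min: int, p: float = 0.5) -> float:
    """Exact binomial probability of at least k_min boys among n births."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# A day with at least 60% boys: 9 of 15 births, or 27 of 45 births.
print(f"Small hospital (15 births): {prob_at_least(15, 9):.3f}")   # ~0.30
print(f"Large hospital (45 births): {prob_at_least(45, 27):.3f}")  # ~0.12
```

The small hospital is roughly two and a half times as likely to record such a day, exactly because its smaller sample strays from the 50% mean more easily.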


Unfortunately, the general public are not the only ones to fall victim to insensitivity to sample size. In 1971, Tversky and Kahneman surveyed experienced research psychologists and found that the majority stood by the validity of small sample sizes. The researchers simply put too much faith in results from small samples, underestimating how easily a small sample can misrepresent the population. It is likely that the benefits of a large sample size are drilled into psychology students from day one precisely to avoid errors like this.

– Misconceptions of Chance

People expect that a randomly generated sequence of events will represent the essential characteristics of that process even when the sequence is short. In other words, people think that the sequence H-T-H-T-T-H is more likely than H-H-H-H-H-H, even though the two are equally likely. This is because every T or H has to be assessed as an individual probability event: on trial one, you have a 50% chance of getting a T or an H; on the second trial, the result of the first has no impact, so you again have a 50% chance of getting either letter. A closely related misconception of chance is probability matching. Andrade and May (2004) describe a scenario based on real-life misconceptions of chance.

First, participants are given a jar of 100 balls and told that 80% are red and 20% are white. When asked to predict the colour of each ball before it is drawn, the most commonly observed strategy is one that imitates the 80/20 pattern, answering red on about 80% of draws and white on the rest. In reality, the most efficient strategy is to say red on every draw, because the probability, as stated above, needs to be assessed for each individual draw, not for the task as a whole. The implications of probability for gambling are huge, so it is not surprising that probability matching is closely related to the gambler's fallacy: people believe in a "law of averages," whereby an event that has occurred less often than probability suggests becomes more likely to occur in the future.
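A minimal simulation of the two strategies, using the 80% red jar described above, makes the gap concrete.

```python
import random

random.seed(0)
P_RED = 0.8
N = 100_000

draws = [random.random() < P_RED for _ in range(N)]  # True means a red ball

# Probability matching: guess "red" on about 80% of trials
matching_hits = sum(d == (random.random() < P_RED) for d in draws)

# Maximising: guess "red" on every trial
maximising_hits = sum(draws)

print(f"Probability matching: {matching_hits / N:.2%}")   # ~68% (0.8^2 + 0.2^2)
print(f"Always guessing red:  {maximising_hits / N:.2%}")  # ~80%
```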

– Insensitivity to Predictability

Another issue with probability is insensitivity to predictability, which occurs when people ignore the reliability of a description and instead pay attention to unrelated factors. For example, a person might pay more attention to a particular review, and grant it greater reliability, simply because the reviewer shares their name. Another example is ignoring negative reviews and only paying attention to positive ones because they confirm one's existing beliefs. Obviously, doing so means disregarding the reliability of the evidence.


Tversky and Kahneman conducted an experiment in 1973 in which participants were given several descriptions of the performance of a student teacher during a particular lesson. Some participants were asked to evaluate the quality of the lesson described, in percentile scores, while others were asked to predict the standing of the student teacher five years after the practice lesson. Even though the participants were aware of the limited predictability of judging a person's performance five years into the future, their predictions simply matched the first group's evaluations of the present lesson, and they expressed high confidence in them. Sadly, high confidence in the face of poor predictability is common, and is known as the illusion of validity. The confidence people display in their predictions usually depends on representativeness, while other factors are ignored; the illusion persists even when a person is aware of the limited accuracy of prediction (ibid).

– Misconceptions of Regression

People simply do not expect regression to occur, even in contexts where it is common (Tversky and Kahneman, 1974). A good example of regression towards the mean is height: two above-average-height parents will tend to have a child whose height is closer to the average than their own. Despite this, people tend to dismiss regression because it is inconsistent with their beliefs. This failure leads to overestimating the effectiveness of punishment and underestimating the effectiveness of reward: an unusually bad performance will tend to be followed by improvement, and an unusually good one by decline, whether or not anyone is punished or rewarded.
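A minimal simulation, with purely hypothetical numbers, shows why the misattribution happens: if an observed score is stable ability plus random luck, the worst performers on one occasion will improve on the next and the best will decline, with no intervention at all.

```python
import random

random.seed(1)

def observed_score(ability: float) -> float:
    """Observed performance = stable ability + random luck (hypothetical scales)."""
    return ability + random.gauss(0, 10)

abilities = [random.gauss(50, 10) for _ in range(10_000)]
first_round = sorted((observed_score(a), a) for a in abilities)

worst, best = first_round[:1000], first_round[-1000:]
for label, group in [("Worst 10%", worst), ("Best 10%", best)]:
    before = sum(s for s, _ in group) / len(group)
    again = sum(observed_score(a) for _, a in group) / len(group)
    print(f"{label}: first score {before:.1f}, second score {again:.1f}")
```

Both groups drift back towards the population mean of 50 on the second attempt, which is what gets misread as punishment working and reward backfiring.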

– Implicature

Implicature refers to what a sentence suggests rather than what is literally said (Blackburn, 1996). For example, the sentence "I showered and went to bed" implies that first I showered and then I went to bed; taken literally, however, it could also mean that I went to bed and then showered in the morning. Both readings are possible, but given the context it would be strange for me not to mean that I showered before going to sleep. Sometimes a qualification is added, which adds new information to the context; even if the sentence itself is not altered, a qualification clarifies the implication. An example of a qualification would be: "I showered and went to bed, in that exact order."

– The Conjunction Fallacy

The conjunction fallacy, first outlined by Tversky and Kahneman in 1983, refers to the tendency to believe that two events are more likely to occur together than separately. Tversky and Kahneman's famous "Linda the bank teller" question is a good example:

“Linda is 31 years old, single, outspoken, and very bright. She studied philosophy at University. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-war demonstrations.

Which is more likely?

A) Linda is a bank teller.

B) Linda is a bank teller and is active in the feminist movement."

The results showed that people chose B more often than A because they see the description above as indicative of Linda's personality, and B seems like a better fit for her as a person. In truth, B can never be more likely than A: the probability of a conjunction cannot exceed the probability of either of its constituents, since every feminist bank teller is, by definition, a bank teller. Regardless, the general public as well as statistical experts still rely on the representativeness heuristic, ignoring the probabilities at play.
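The arithmetic holds for any numbers whatsoever; in this minimal sketch both probabilities are purely hypothetical.

```python
# P(teller and feminist) = P(teller) * P(feminist | teller) <= P(teller),
# since P(feminist | teller) can never exceed 1. Both values are hypothetical.

p_teller = 0.05                 # P(A): Linda is a bank teller
p_feminist_given_teller = 0.95  # even if we are nearly certain she is a feminist

p_both = p_teller * p_feminist_given_teller  # P(A and B)
print(f"P(A)       = {p_teller}")
print(f"P(A and B) = {p_both}")  # 0.0475, necessarily no larger than P(A)
assert p_both <= p_teller
```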

Overconfidence

People tend to be overly confident in their estimates, even when the irrationality of their thinking is pointed out to them (Fischhoff et al., 1977). Baron (1994) found evidence that one reason for our inappropriately high confidence is our tendency not to search for reasons why we might be wrong. As ridiculous as it may seem, studies consistently confirm that people ignore new information in order to hold on to their original beliefs. Weinstein (1989) reported a study in which racetrack handicappers were gradually given more and more information about the outcomes of races. Despite becoming better informed, the participants held onto their original beliefs with ever more confidence. DeBondt and Thaler (1986), by contrast, propose that when new information arrives, investors revise their beliefs by overweighting the new information and underweighting the earlier information.


Availability Heuristics

Availability

People tend to judge the frequency of an occurrence, or the probability of an event, by the ease with which instances can be brought to mind. Availability is usually an effective heuristic because if you can easily remember an event, there is a good chance that you recently experienced it or were exposed to it. However, just as with similarity, availability is affected by other factors as well, namely biases.

Biases Due to the Retrievability of Instances

When the size of a class is judged by the availability of its instances, a person is displaying a bias due to the retrievability of instances. A class whose instances are difficult to remember is likely to be judged as having fewer members, and salient or recent instances bias judgments largely because of how easily they come to mind. In 1973, Galbraith and Underwood showed that, because it is easier to think of abstract words used across various contexts than concrete words, participants judged the frequency of occurrence of abstract words to be higher than that of concrete words. Even though the concrete words are more common, abstract words are not contextually constrained, so they appear more salient.


Another example comes from Lichtenstein et al.'s 1978 study, in which participants were asked to rate the likelihood of particular causes of death. They found that participants believed accidents caused as many deaths as disease, and that murder was more common than suicide. In reality, diseases cause sixteen times as many deaths as accidents, and suicides are almost twice as common as murders. The availability heuristic suggests why murders and accidents seem common: media coverage of them is far greater. Because the various types of media are far more accessible to us than disease or suicide statistics, the information they spread becomes far more salient, and that salience translates into a false perception of frequency.

Biases of Imaginability

Imaginability biases arise when one has to assess the likelihood of an event not by retrieving instances from memory but by constructing them according to a particular rule or task. For example, consider a group about to embark on an expedition together. As part of their preparation, they must consider all the possible difficulties they might encounter. The more easily they can imagine such difficulties, the more likely the difficulties seem, regardless of their actual probability. Again, discussing instances of difficulty heightens salience to the point of interference.

Illusory Correlation

Chapman and Chapman (1967) described illusory correlation as the overestimation of the frequency with which two events co-occur. The same year, they carried out a study investigating the strength of illusory correlation. Participants were presented with several hypothetical mental patients; each patient came with a clinical diagnosis and a drawing of a person made by that patient (the Draw-a-Person test). The results showed that participants overestimated the frequency of co-occurrence of "natural associates" such as suspiciousness and peculiar eyes, a finding remarkably similar to clinical reports of the same task. Despite being presented with contradictory data, the illusory correlation remained resistant, to the point of preventing participants from detecting relationships that actually were present.

Recognition Heuristic

As the name suggests, with the recognition heuristic, a salience bias is established by how well we recognise an object, name, and so on. Goldstein and Gigerenzer (2002) asked German and American students about four cities. Specifically, which city did they think was larger: San Antonio or San Diego? And Hamburg or Cologne? Contrary to expectations, the results showed that American students were more accurate for the German cities, and German students were more accurate for the American cities. Unlike in the other studies, the heuristic actually benefited the students when they knew little about the other country. San Diego is larger and also features more often in films and the news; the same is true of Hamburg. Guessing that San Diego and Hamburg are the larger cities therefore makes sense for foreign students, who have only a very limited amount of information available to influence salience. For the natives, the question becomes more difficult because both cities are salient, and factors such as their own familiarity or experiences are likely to interfere with their answer.

Biases Due to the Effectiveness of a Search Set

Lastly, Tversky and Kahneman (1974) describe the bias due to the effectiveness of a search set, which is easiest to explain through an example. If participants are asked to list words beginning with "T" and words containing "T", it is easier to think of words starting with the letter than words merely containing it; words beginning with "T" therefore have greater salience. As has been established, greater salience means participants are likely to list far more words beginning with "T."

Anchoring and Adjustment

Anchoring and adjustment is a heuristic whereby people make estimates by starting from an initial value, which is then adjusted, often subconsciously, to produce the final answer. Typically, the adjustments made to the initial value are insufficient, so different starting points consistently yield different estimates (Slovic and Lichtenstein, 1971). Tversky and Kahneman (1974) based an experiment on this hypothesis. Participants were asked to estimate various quantities in percentages (such as the percentage of African countries in the UN). A wheel of fortune was spun to generate an initial value; participants first judged whether the true quantity was higher or lower than the spun number, and then gave their own estimate. The results showed that the arbitrary initial number significantly influenced participants' estimates.

Biases in the Evaluation of Conjunctive and Disjunctive Events

Studies of choice amongst gamblers show that they tend to overestimate the probability of conjunctive events and to underestimate the probability of disjunctive events (Cohen and Chesnick, 1972). In relation to anchoring and adjustment, gamblers anchor on the probability of the elementary event and fail to adjust it to the structure of the overall gamble. For example, participants can be offered a simple task (drawing a red marble from a bag of 50% red and 50% white marbles), a conjunctive task (drawing a red marble seven times in a row from a bag containing 90% red and 10% white marbles) and a disjunctive task (drawing a red marble at least once in seven draws from a bag containing 10% red and 90% white marbles). Participants typically favour the conjunctive task even though it carries the worst odds of the three, and avoid the disjunctive task even though it carries the best, as the calculation below shows. Dawes (1988) found that even when the inconsistencies in their thinking are pointed out, people stick by their original choice.
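Computing the three odds directly shows how stark the gap is; a minimal sketch of the marble tasks just described, assuming each draw replaces the marble.

```python
# Odds of winning each of the three marble tasks (draws with replacement).
p_simple = 0.5                # one red from a 50/50 bag
p_conjunctive = 0.9 ** 7      # red on all 7 draws from a 90%-red bag
p_disjunctive = 1 - 0.9 ** 7  # red at least once in 7 draws from a 10%-red bag

print(f"Simple:      {p_simple:.3f}")       # 0.500
print(f"Conjunctive: {p_conjunctive:.3f}")  # 0.478, the worst odds, yet preferred
print(f"Disjunctive: {p_disjunctive:.3f}")  # 0.522, the best odds, yet avoided
```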


Kahneman and Tversky (1979) devised prospect theory to explain behaviour such as loss aversion. The theory assumes that individuals evaluate outcomes relative to a reference point, and that people are more sensitive to potential losses than to potential gains. As such, people are more willing to accept lower winning odds if it means lowering the possibility of losses. Of course, casinos would have gone bankrupt by now if every person thought this way: distorted judgement, wishful thinking or even sheer desperation can override these natural tendencies when decisions are made (Edwards, 1968).
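A minimal sketch of a prospect-theory value function. The curvature and loss-aversion parameters used here (α = β = 0.88, λ = 2.25) are the median estimates from Tversky and Kahneman's later (1992) work, not figures given in this section, so treat them as assumptions.

```python
def subjective_value(x: float, alpha: float = 0.88, beta: float = 0.88,
                     lam: float = 2.25) -> float:
    """Prospect-theory value of outcome x relative to the reference point (x = 0).
    Parameters are Tversky and Kahneman's (1992) median estimates."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

print(f"Gaining 100 feels like {subjective_value(100):+.1f}")   # ~+57.5
print(f"Losing 100 feels like  {subjective_value(-100):+.1f}")  # ~-129.5
```

The asymmetry, with losses weighted about 2.25 times as heavily as gains, is what makes people accept worse odds to dodge a loss.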

Understanding Risk: Gigerenzer and Edwards, 2003

Single-event probabilities are probability statements about a single event; they leave the class of events to which the probability refers open to interpretation. For example: there is a 50% chance of a pop quiz tomorrow. The reference class could be an area, a time of day, and so on; anything that alters the meaning of the single-event probability statement. Framing the statement appropriately can avoid confusion. Framing is the expression of the same information in multiple ways: as with gambling, information can be framed positively vs. negatively, in terms of gains vs. losses, and also by reference class.

Framing and Single Event Probabilities

In 2002, Gigerenzer worked with a psychiatrist and his patients to observe first-hand the importance of reference classes. The psychiatrist prescribed an antidepressant to his patients and told them that they had a 30-50% chance of developing a sexual problem. The psychiatrist meant that out of every 10 patients, 3 to 5 would experience sexual dysfunction; his patients, however, understood that something would go wrong in 30-50% of their own sexual encounters. Gigerenzer asserted that such misunderstandings could be reduced or avoided by specifying a reference class or by using only frequency statements ("3 out of 10 patients" rather than "30-50%").
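A minimal sketch of the reformulation Gigerenzer recommends; the helper function and its wording are hypothetical, but it shows how a percentage range becomes a frequency statement with an explicit reference class.

```python
def to_frequency_statement(p_low: float, p_high: float, n: int = 10,
                           reference_class: str = "patients who take this drug") -> str:
    """Rephrase a probability range as a natural-frequency statement
    with an explicit reference class (hypothetical helper)."""
    lo, hi = round(p_low * n), round(p_high * n)
    return f"Out of every {n} {reference_class}, {lo} to {hi} will develop the side effect."

print(to_frequency_statement(0.30, 0.50))
# Out of every 10 patients who take this drug, 3 to 5 will develop the side effect.
```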


Framing and Conditional Probabilities

Conditional probability refers to the likelihood of A given B; for example, if someone has low mood and anxiety, then the likelihood that they have depression is high. The probability of A given B should not be confused with the probability of B given A. Conditional probabilities are very important in disease detection. Gigerenzer and Edwards (2003) attest that the framing of probabilities can easily confuse patients; as with the psychiatrist's warnings, giving the information in a clear, alternative format, in this case natural frequencies, can reduce confusion. In 2002, Gigerenzer found that even doctors struggle with reference classes when these are presented as probabilities. In a study, 48 doctors were asked to estimate the probability that a woman with a positive mammogram result actually has breast cancer. Of the doctors who received conditional probabilities, very few gave the correct probability; on the other hand, most doctors who were given natural frequencies gave the correct answer. An example of what the doctors were given is as follows:

CP: The probability that a woman has breast cancer is 0.8%.

NF: Eight out of every 1000 women have breast cancer.

CP: If a woman has breast cancer, the probability that the mammogram will show a positive result is 90%.

NF: Of every eight women with breast cancer, roughly one will get a false negative (the other seven will test positive).
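Putting the numbers together with Bayes' rule shows why the natural-frequency format helps. The false-positive rate is not stated above; the 7% used here comes from the commonly cited version of Gigerenzer's problem, so treat it as an assumption.

```python
# Natural-frequency walk-through of the mammogram problem. Base rate and
# sensitivity are from the text; the 7% false-positive rate is assumed.

women = 1000
base_rate = 0.008      # 8 in 1000 women have breast cancer
sensitivity = 0.90     # P(positive | cancer)
false_positive = 0.07  # P(positive | no cancer), assumed figure

with_cancer = women * base_rate                           # 8 women
true_positives = with_cancer * sensitivity                # ~7 of them test positive
false_positives = (women - with_cancer) * false_positive  # ~69 healthy women test positive

p_cancer = true_positives / (true_positives + false_positives)
print(f"P(cancer | positive mammogram) = {p_cancer:.1%}")  # ~9%
```

Of the roughly 77 women in 1000 who test positive, only about 7 actually have cancer; seeing the problem in these terms is what let the natural-frequency doctors answer correctly.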

Wright (2001) explored why women who have had a mammogram misunderstand relative risks. Observations showed that most women misunderstand relative risk because they think the number relates to women like themselves who take part in screening. In truth, the relative risk relates to a different class of women: women who die of breast cancer without being screened. The confusion can be avoided by using natural frequencies or absolute risks.

Positive vs. Negative Framing

Positive framing is more effective than negative framing in persuading people to take risky treatment options (Edwards et al., 2001; Kuehberger, 1998). For example, a 70% success rate for a surgery sounds more promising than a 30% failure rate. Gain or loss framing is equally important in communicating clinical risk; however, loss framing tends to be more persuasive when promoting action. The most obvious example is loss framing for screening (Edwards and Mulley, 2002): listing the number of deaths among women who did not get screened for breast cancer is scarier than presenting the natural frequency of deaths due to breast cancer in the entire population. Such manipulation can also come in the form of charts and population crowd figures (Shapire et al., 2001).