Anchoring and adjustment is a heuristic whereby people make estimates by starting from an initial value, which is then adjusted to reach a final answer. Typically, the adjustments made to the initial value are insufficient. Studies have presented two groups of participants with different starting points, and different starting points consistently yield different estimates (Slovic and Lichtenstein, 1971). Tversky and Kahneman (1974) based an experiment on this hypothesis. Participants were asked to estimate various quantities as percentages (such as the percentage of African countries in the UN). A wheel of fortune was spun to provide the initial value; participants first judged whether the true quantity was higher or lower than the number spun, and then gave their estimate. The arbitrary starting number significantly influenced participants' estimates.
Biases in the Evaluation of Conjunctive and Disjunctive Events
Studies of choice among gambles show that people tend to overestimate the probability of conjunctive events and to underestimate the probability of disjunctive events (Cohen and Chesnick, 1972). In terms of anchoring and adjustment, people anchor on the probability of the elementary event and fail to adjust sufficiently for the compounding across several events. For example, participants were offered a simple task (drawing a red marble from a bag containing 50% red and 50% white marbles), a conjunctive task (drawing a red marble seven times in succession from a bag containing 90% red and 10% white marbles), and a disjunctive task (drawing a red marble at least once in seven draws from a bag containing 10% red and 90% white marbles). Most participants preferred to bet on the conjunctive event, even though the disjunctive event offers the best odds: about 52%, against roughly 48% for the conjunctive event and 50% for the simple one. People also take more gambling risks to avoid losses than to secure an equivalent gain. Dawes (1988) found that even when the inconsistencies in their thinking are pointed out to them, people still stick by their original choice.
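Under the standard assumption of independent draws with replacement, the odds of the three marble tasks can be checked directly (a minimal sketch):

```python
# Odds of the three marble tasks, assuming independent draws with replacement.
simple = 0.5                 # one red draw from a 50% red / 50% white bag
conjunctive = 0.9 ** 7       # red on all seven draws from a 90% red bag
disjunctive = 1 - 0.9 ** 7   # at least one red in seven draws from a 10% red bag
                             # (= 1 minus the chance of seven white draws in a row)

print(round(simple, 3), round(conjunctive, 3), round(disjunctive, 3))
# → 0.5 0.478 0.522
```

Anchoring on the 90% elementary probability makes the conjunctive bet feel likelier than it is, while anchoring on the 10% makes the disjunctive bet feel less likely than it is.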
Kahneman and Tversky (1979) devised prospect theory to explain the behaviour of loss aversion. The theory assumes that individuals evaluate outcomes relative to a reference point, and that people are more sensitive to potential losses than to potential gains. As such, people are more willing to accept lower winning odds if it means lowering the possibility of losses. Of course, casinos would be bankrupt by now if every person thought this way: distorted judgement, wishful thinking, or even sheer desperation can override these tendencies when we make decisions (Edwards, 1968).
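Loss aversion can be illustrated with prospect theory's value function. The sketch below uses curvature and loss-aversion parameters later estimated by Tversky and Kahneman (1992); those parameter values are an assumption here, not something stated in these notes:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains,
    convex and steeper (loss aversion, lam > 1) for losses.
    Parameter values are illustrative assumptions."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

gain, loss = value(100), value(-100)
print(gain, loss)  # the loss's magnitude is more than twice the gain's
```

The asymmetry (|value(-100)| > 2 × value(100)) is what makes a 50/50 bet on winning or losing 100 feel unattractive, matching the notes' point that potential losses weigh more than equivalent gains.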
Understanding Risk: Gigerenzer and Edwards, 2003
Single-event probabilities are probability statements about a single event; they leave the class of events to which the probability refers open to interpretation. For example: there is a 50% chance of a pop quiz tomorrow. The reference class could be an area, a time of day, and so on: anything that alters the meaning of the single-event probability statement. Appropriate framing can avoid this confusion. Framing is the expression of the same information in multiple ways. As with gambling, information can be framed positively versus negatively, as gains versus losses, and in terms of different reference classes.
Framing and Single Event Probabilities
In 2002, Gigerenzer worked with a psychiatrist and his patients to observe first-hand the importance of reference classes. The psychiatrist prescribed an anti-depressant to his patients and told them that they had a 30-50% chance of developing a sexual problem. He meant that out of every 10 patients, 3 to 5 would experience sexual dysfunction; his patients, however, believed that something would go wrong in 30-50% of their own sexual encounters. Gigerenzer asserted that such misunderstandings could be reduced or avoided by specifying a reference class, or by using only frequency statements (3 out of 10 patients rather than 30-50%).
Framing and Conditional Probabilities
A conditional probability is the likelihood of A given B. For example, if someone has low mood and anxiety, then the likelihood that they have depression is high. The probability of A given B should not be confused with the probability of B given A. Conditional probabilities are very important in disease detection. Gigerenzer and Edwards (2003) attest that the framing of probabilities can easily confuse patients; as with the psychiatrist's warnings, giving information in a clearer, alternative format (in this case, natural frequencies) can reduce confusion. In 2002, Gigerenzer found that even doctors struggle with reference classes when information is presented as probabilities. In a study, 48 doctors were asked to estimate the probability that a woman with a positive mammogram result actually has breast cancer. Of the doctors who received conditional probabilities, very few gave the correct answer; most of those given natural frequencies answered correctly. An example of what the doctors were given is as follows:
CP: The probability that a woman has breast cancer is 0.8%.
NF: Eight out of every 1,000 women have breast cancer.
CP: If a woman has breast cancer, the probability that the mammogram will show a positive result is 90%.
NF: Of every eight women with breast cancer, seven will test positive (one will get a false negative).
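Working the numbers through Bayes' rule shows why the conditional-probability framing misleads. The prevalence (0.8%) and sensitivity (90%) come from the example above; the 7% false-positive rate is an assumed figure (the excerpt does not state it), in line with the values Gigerenzer typically uses:

```python
# P(cancer | positive mammogram) via Bayes' rule.
prevalence = 0.008       # 8 in 1,000 women (from the notes)
sensitivity = 0.90       # P(positive | cancer) (from the notes)
false_positive = 0.07    # P(positive | no cancer) -- assumed, not in the notes

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
posterior = prevalence * sensitivity / p_positive
print(f"{posterior:.0%}")  # → 9%, far below the 90% many doctors guess
```

In natural frequencies: of 1,000 women, about 7 of the 8 with cancer test positive, but so do about 70 of the 992 without it, so only 7 of 77 positives (roughly 9%) actually have cancer. The natural-frequency framing makes this almost impossible to get wrong.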
Wright (2001) explored why women who have had a mammogram misunderstand relative risks. Observations showed that most women misunderstand relative risk because they think the number relates to women like themselves who take part in the screening. In truth, relative risks relate to a different class of women: women who die of breast cancer without being screened. Confusion can be avoided by using natural frequencies or absolute risks.
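The gap between the two framings is easy to quantify. The mortality figures below (4 vs. 3 deaths per 1,000 women over the screening period) are illustrative assumptions, not taken from the text:

```python
# Relative vs. absolute risk reduction from screening (illustrative numbers).
deaths_unscreened = 4 / 1000   # assumed mortality without screening
deaths_screened = 3 / 1000     # assumed mortality with screening

rrr = (deaths_unscreened - deaths_screened) / deaths_unscreened
arr = deaths_unscreened - deaths_screened
print(f"relative risk reduction: {rrr:.0%}")   # → 25%, sounds dramatic
print(f"absolute risk reduction: {arr:.1%}")   # → 0.1%, i.e. 1 woman in 1,000
```

The same intervention can be advertised as "cuts deaths by 25%" or as "saves 1 woman in 1,000"; the absolute (natural-frequency) version is the one that answers the question a woman like the reader is actually asking.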
Positive vs. Negative Framing
Positive framing is more effective than negative framing in persuading people to take risky treatment options (Edwards et al., 2001; Kuehberger, 1998). For example, a 70% success rate for a surgery sounds more promising than a 30% failure rate. Gain or loss framing is equally important in communicating clinical risk; loss framing tends to be more persuasive when promoting action. The most obvious example is the loss framing of screening (Edwards and Mulley, 2002): listing the number of deaths among women who were not screened for breast cancer is scarier than presenting the natural frequency of deaths from breast cancer in the entire population. Manipulation can also take the form of charts and population crowd figures (Shapire et al., 2001).