
The Modal Model of Working Memory

Working memory is quite a difficult concept to pin down, largely because it has traditionally been classified as a branch of short-term memory (STM), and researchers are still not entirely sure whether to separate the two. Working memory is concerned with immediate processing, for example holding a phone number in your head whilst trying to find the phone. Arguably, this rote repetition can be considered a form of short-term learning. However, for the sake of the model we are about to discuss, I will treat working memory as separate from short-term memory.

Atkinson & Shiffrin

The first major working memory model was proposed by Atkinson & Shiffrin in 1968. They argued that information enters our memory from the environment and is first processed by two sensory memory systems: iconic and echoic. Unlike later models, this model states that any form of rehearsal is sufficient for learning; the only thing that determines whether information enters long-term storage is the length of time it spends in short-term storage. Also unlike other models, Atkinson & Shiffrin held that the short-term store itself serves as the working memory store. The implications of this type of model are that long-term memory (LTM) is entirely dependent on short-term memory, and that levels of processing are irrelevant.

Major Criticisms

Today, Atkinson and Shiffrin's model is not widely accepted, because a myriad of studies have shown that major assumptions of the model are implausible. Four major sources of criticism are: neurological evidence that LTM can be preserved even with damage to STM, serial position effects, levels of processing, and Baddeley and Hitch's 1974 experiment.

Shallice and Warrington (1969) studied patient KF, who suffered from poor STM while his LTM remained intact. KF had damage to the left parieto-occipital region of the brain; he showed a very poor digit span (fewer than two items) but normal performance on an LTM task. KF's performance showed that an intact STM is not necessary for a normally functioning LTM. Of course, an intact short-term memory is still important for new information to enter long-term memory, but information already stored in long-term memory is not affected by damage to short-term stores.

Two studies, carried out by Tzeng (1973) and Baddeley and Hitch (1977) respectively, challenged the modal model by testing the serial position effect. The serial position effect suggests that words at the beginning or end of a list are easier to learn than words in the middle. Tzeng (1973) conducted a free-recall test in which participants were given a list of words, with an interpolated task introduced after each word to disrupt recall. Even though the modal model predicts that interpolating after every word should remove the words from STM, Tzeng still observed both primacy and recency effects. Baddeley and Hitch studied rugby players' recall of the names of players they had previously played against and found that the more recent the game, the more names players recalled. They were thus able to argue that the recency effect is unlikely to be due to a limited short-term storage capacity.

Another experiment, carried out by Baddeley and Hitch in 1974, suggested that working memory and short-term memory are in fact separate entities. Participants had to carry out dual tasks: digit span and grammatical reasoning. There was a significant increase in reasoning time, but no impairment in accuracy, suggesting that STM and WM serve separate roles.

Lastly, and most obviously, learning depends on more than just the amount of time information spends in short-term storage. Levels of processing matter for how well we learn: learning depends on how material is processed (Craik and Lockhart, 1972), and deep, meaningful processing produces far more durable memories than shallow, sensory processing. Craik and Lockhart suggest that there are two major forms of rehearsal: maintenance and elaborative. In a test of this theory, Hyde and Jenkins (1973) gave participants a list of words and asked them to complete tasks that differed in the depth of processing involved: the first, rating each word for pleasantness of meaning; the second, detecting the occurrence of particular letters. The results showed significantly higher recall for participants in the elaborative (meaningful) processing condition.

Heuristics: Representativeness

What are Heuristics?

People rely on heuristics because they facilitate the task of assessing probabilities and predicting values; they allow us to make decisions quickly and instinctively. Although heuristics, like schemas, are often inaccurate, people look for evidence that the heuristic or schema is true and ignore failures of judgment (Tversky and Kahneman, 1974). Heuristic errors are known as systematic errors, and they occur because the heuristic cannot cope with the complexity of the task; the heuristic simply lacks validity.

Representativeness

Representativeness is when the probability that B belongs with, or originates from, A is evaluated by how much B resembles A, while the actual degree to which A and B are related is taken for granted (Tversky and Kahneman, 1974). Representativeness heuristics are usually quite accurate, because if A resembles B there is a fair chance that they are somehow related. Unfortunately, similarity can be misleading, because probability also depends on factors that similarity judgments ignore: prior probability outcomes, sample size and chance.

– Insensitivity to Prior Probability Outcomes

A major influence on probability is base-rate frequency. For example, even though Steve's description fits the stereotype of a librarian better than that of a farmer, the fact that there are many more farmers than librarians in his population needs to be taken into account when assessing the likelihood of him having one occupation over the other. If there are a great many farmers in Steve's area because of its rich soil, the base-rate frequency suggests that Steve is more likely to be a farmer than a librarian.

In 1973, Kahneman and Tversky conducted an experiment showing how often people overlook base-rate frequency when assessing probability. Participants were shown short personality descriptions of several individuals said to be sampled from a group of 100 lawyers and engineers, and the task was to assess how likely each person was to be a lawyer or an engineer. In condition A, participants were given a base rate of 70 engineers and 30 lawyers; in condition B, the base rate was 30 engineers and 70 lawyers. The two conditions produced virtually the same probability judgments despite the significantly different base rates clearly given to the participants. Participants only used the base rates when no personality descriptions were provided.
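
To see how the base rates should have pulled the two conditions apart, Bayes' theorem combines the base rate with how diagnostic the description is. The sketch below is a minimal illustration: the likelihoods (how probable such a description is for an engineer versus a lawyer) are hypothetical numbers chosen purely for illustration; only the 70/30 split comes from the study.

```python
# Minimal sketch of how base rates should enter the lawyer/engineer judgment.
# The likelihoods below are hypothetical; only the 70/30 base rates come from the study.

def posterior_engineer(p_engineer, p_desc_given_engineer, p_desc_given_lawyer):
    """P(engineer | description) via Bayes' theorem."""
    p_lawyer = 1 - p_engineer
    numerator = p_desc_given_engineer * p_engineer
    denominator = numerator + p_desc_given_lawyer * p_lawyer
    return numerator / denominator

# Suppose, hypothetically, the description fits an engineer twice as well as a lawyer.
like_eng, like_law = 0.6, 0.3

print(posterior_engineer(0.70, like_eng, like_law))  # condition A: 70 engineers -> ~0.82
print(posterior_engineer(0.30, like_eng, like_law))  # condition B: 30 engineers -> ~0.46
```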

Goodie and Fantino (1996) also studied base-rate neglect. Participants were asked to determine the probability that a taxi seen by a witness was blue or green. Even though participants were given the base rate of taxi colours in the city, they still judged the probability mainly by the reliability of the witness.

– Insensitivity to Sample Size

Another major influence on probability is sample size. People judge the similarity of a sample statistic to a population parameter without regard to the size of the sample; therefore, if probabilities are assessed by representativeness, the judged probability is essentially independent of sample size. Tversky and Kahneman (1974) demonstrated this insensitivity by giving participants the following information:

– There are two hospitals in a town, one small and one large

– About 45 babies are born per day at the large hospital

– About 15 babies are born per day at the small one

– About 50% of the babies born are boys, but this figure differs slightly from day to day

Participants were then asked which hospital is more likely to report a day on which more than 60% of the babies born are boys. Most participants answered that the two hospitals are equally likely to report such a day. However, sampling theory dictates that the small hospital is more likely to do so, because a small sample is far more likely to stray from the 50% mean than a large one.
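
Working the numbers makes this concrete. Below is a minimal sketch, assuming each birth is an independent 50/50 event, of the chance that more than 60% of a day's births are boys at each hospital:

```python
from math import comb

def prob_more_than_60_percent_boys(n_births, p_boy=0.5):
    """P(proportion of boys > 0.6) for a day with n_births independent births."""
    threshold = int(n_births * 0.6)  # need strictly more boys than this
    return sum(comb(n_births, k) * p_boy**k * (1 - p_boy)**(n_births - k)
               for k in range(threshold + 1, n_births + 1))

print(prob_more_than_60_percent_boys(15))  # small hospital: ~0.15
print(prob_more_than_60_percent_boys(45))  # large hospital: ~0.07
```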


Unfortunately, the general public are not the only ones to fall victim to sample-size insensitivity. In 1971, Tversky and Kahneman surveyed experienced research psychologists and found that the majority stood by the validity of results from small samples. The researchers simply put too much faith in findings from small samples, underestimating how easily a small sample can misrepresent the population. It is likely that the benefits of a large sample size are drilled into psychology students from day one precisely to avoid errors like this.

– Misconceptions of Chance

People expect a randomly generated sequence of events to represent the essential characteristics of the generating process even when the sequence is short. In other words, people think that H-T-H-T-T-H is more likely than H-H-H-H-H-H, even though the two sequences are equally likely. This is because every T or H has to be assessed as an individual probability event: on trial one you have a 50% chance of getting a T or an H, and on the second trial the result of the first has no impact, so you again have a 50% chance of either outcome. A related misconception of chance is probability matching. Andrade and May (2004) describe a scenario based on real-life misconceptions of chance.
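
A quick calculation shows why the two sequences are equally likely: each flip contributes an independent factor of one half, whatever the overall pattern looks like. A minimal sketch of that arithmetic:

```python
# Each specific sequence of six fair coin flips has the same probability.
p_flip = 0.5
p_HTHTTH = p_flip ** 6   # 'representative-looking' sequence
p_HHHHHH = p_flip ** 6   # 'streaky' sequence
print(p_HTHTTH, p_HHHHHH)  # both 0.015625
```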

Participants are given a jar of 100 balls and told that 80% are red and 20% are white. When asked to predict the colour of each ball before it is drawn, the most commonly observed strategy is one that imitates the 80/20 split, guessing red about 80% of the time and white about 20% of the time. In reality, the most efficient strategy is to say red on every draw, because the probability, as stated above, needs to be assessed for each individual draw rather than for the task as a whole. The implications for gambling are huge, so it is not surprising that this misconception also underlies the gambler's fallacy: people believe in a "law of averages", thinking that if an event has occurred less often than probability suggests, it is more likely to occur in the future.
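
A short simulation makes the cost of probability matching visible. This is only a sketch, assuming a hypothetical 80/20 red/white jar sampled with replacement; the strategy names are mine, not Andrade and May's.

```python
import random

def accuracy(strategy, p_red=0.8, n_draws=100_000, seed=0):
    """Fraction of correct predictions for a given guessing strategy."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_draws):
        ball = 'red' if rng.random() < p_red else 'white'
        correct += (strategy(rng, p_red) == ball)
    return correct / n_draws

# Always guess the majority colour.
always_majority = lambda rng, p_red: 'red'
# Guess in proportion to the 80/20 split (probability matching).
probability_match = lambda rng, p_red: 'red' if rng.random() < p_red else 'white'

print(accuracy(always_majority))    # ~0.80
print(accuracy(probability_match))  # ~0.68  (0.8*0.8 + 0.2*0.2)
```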

– Insensitivity to Predictability

Another issue with probability judgment is insensitivity to predictability, which occurs when people ignore the reliability of a description and instead pay attention to unrelated factors. For example, a person might pay more attention to a particular review, and grant it greater reliability, simply because the reviewer's name is the same as their own. Another example would be ignoring negative reviews and only paying attention to positive ones because they confirm your existing belief. Doing so obviously means disregarding the reliability of the evidence.


Tversky and Kahneman conducted an experiment in 1973 in which participants were given several descriptions of a student teacher's performance during a particular lesson. Some participants were asked to evaluate the quality of the lesson described, in percentile scores, while others were asked to predict the standing of the student teacher five years after the practice lesson. The judgments of the second group matched the first group's evaluations. Even though participants were aware of the limited predictability of judging a person's performance five years into the future, they confidently predicted that the student teacher's future standing would mirror the present performance. Sadly, high confidence in the face of poor predictability is common and is known as the illusion of validity. The confidence people display in their predictions depends largely on representativeness, while other relevant factors are ignored; the illusion persists even when a person is aware of the limited accuracy of their predictions (ibid).

– Misconceptions of Regression

People simply do not expect regression to occur, even in contexts where it is common (Tversky and Kahneman, 1974). A good example of regression towards the mean is height: two very tall parents will tend to have a child who is taller than average but closer to the mean than they are. Despite this, people tend to dismiss regression because it is inconsistent with their beliefs. Failure to accept it, however, leads to overestimating the effectiveness of punishment and underestimating the effectiveness of reward.
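
Regression toward the mean can be illustrated with a toy model of heights. The numbers below (a 170 cm mean, 7 cm spread and a 0.5 parent-child correlation) are made up purely for illustration, not taken from any study.

```python
import random

random.seed(1)
MEAN, SD, R = 170.0, 7.0, 0.5   # hypothetical population mean, spread, parent-child correlation

def child_height(parent_height):
    """Toy model: child height regresses toward the mean with correlation R."""
    noise = random.gauss(0, SD * (1 - R**2) ** 0.5)
    return MEAN + R * (parent_height - MEAN) + noise

tall_parents = [MEAN + 2 * SD] * 10_000           # parents 2 SD above average (184 cm)
children = [child_height(h) for h in tall_parents]
print(sum(children) / len(children))              # ~177 cm: above average, but closer to 170
```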

– Implicature

Implicature refers to what a sentence suggests rather than what is literally said (Blackburn, 1996). For example, the sentence "I showered and went to bed" implies that first I showered and then I went to bed; taken literally, however, it could mean that I went to bed and then showered in the morning. Both readings are possible, but given the context it would be strange for me not to mean that I showered before going to sleep. Sometimes a qualification is added, which contributes new information to the context; even if the sentence itself is not altered, a qualification clarifies the implication. An example of a qualification would be: "I showered and went to bed, in that exact order."

– The Conjunction Fallacy

The conjunction fallacy, first outlined by Tversky and Kahneman in 1983, refers to the tendency to believe that two events are more likely to occur together than either is to occur alone. Tversky and Kahneman's famous "Linda the bank teller" question is a good example:

“Linda is 31 years old, single, outspoken, and very bright. She studied philosophy at University. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-war demonstrations.

Which is more likely?

A) Linda is a Bank Teller

B) Linda is a Bank Teller and is active in the feminist movement.”

The results showed that people chose B more often than A because they see the description as indicative of Linda's personality, and B seems like a better fit for her as a person. In truth, B can never be more likely than A: every feminist bank teller is also a bank teller, so the conjunction of two events cannot be more probable than either event on its own. Regardless, the general public as well as statistical experts still rely on the representativeness heuristic, ignoring the probabilities at play.
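
The underlying arithmetic is simple: a conjunction is computed by multiplying one probability by another that is at most 1, so it can only shrink. A minimal sketch with hypothetical numbers:

```python
# Why the conjunction fallacy is a fallacy. The probabilities are hypothetical,
# chosen purely for illustration.
p_bank_teller = 0.05            # P(A): Linda is a bank teller
p_feminist_given_teller = 0.40  # P(B|A): she is a feminist, given she is a teller

p_conjunction = p_bank_teller * p_feminist_given_teller  # P(A and B)

# The conjunction can never exceed either of its components.
assert p_conjunction <= p_bank_teller
print(f"P(bank teller)              = {p_bank_teller:.3f}")
print(f"P(bank teller and feminist) = {p_conjunction:.3f}")
```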

Overconfidence

People tend to be overly confident in their estimates even when the irrationality of their thinking is pointed out to them (Fischhoff et al., 1977). Baron (1994) found evidence that one reason for our inappropriately high confidence is our tendency not to search for reasons why we might be wrong. As odd as it may seem, studies consistently confirm that people ignore new information in order to hold on to their original belief. Weinstein (1989) reported a study in which racetrack handicappers were gradually given more and more information about the outcomes of races; despite becoming better informed, the participants held onto their original beliefs with ever greater confidence. By contrast, DeBondt and Thaler (1986) propose that when new information arrives, investors revise their beliefs by overweighting the new information and underweighting the earlier information.

 

Methods Used for Studying Infants’ Perception

Part of getting onto a good master's or Ph.D. programme means having real-life research experience. As only a second-year undergraduate, that can sometimes seem an age away, but time really does fly by. To get some experience in research, I transcribed videos for a developmental researcher in my department. Even though my job was fairly menial in the grand scheme of things, writing down all the speech and movements of infants made me appreciate something substantial: infants are very hard to understand and observe. Their intentions, their desires and even just their knowledge can be difficult to interpret. As such, psychologists use a set of methods to study infant perception, intentions, desires and capabilities.


This post will deal with studying infant perception.

Preference Technique

Basic set-up

1. A researcher presents two stimuli to an infant simultaneously

2. The researcher monitors the infant’s eye movement. Researchers use various techniques for this, one being the ASL Model 504.

3. If the infant looks more at one stimulus than the other, it is inferred that the infant prefers that stimulus over the other.

Provided accurate measures of the eye movements can be made, this technique is quite simple and effective. The infant's preference can be inferred because of habituation, a fancy word for boredom.
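
As a rough illustration of the inference in step 3, the sketch below takes hypothetical looking times (in seconds) for two stimuli and reports which, if either, the infant appears to prefer; the 1.5x threshold is an arbitrary choice for the example, not a standard criterion from the literature.

```python
def inferred_preference(looking_time_a, looking_time_b, min_ratio=1.5):
    """Infer a preference only if one stimulus attracts clearly more looking time."""
    if looking_time_a >= min_ratio * looking_time_b:
        return 'stimulus A'
    if looking_time_b >= min_ratio * looking_time_a:
        return 'stimulus B'
    return 'no clear preference'

# Hypothetical trial: the infant looks 12 s at a face pattern, 5 s at a plain pattern.
print(inferred_preference(12.0, 5.0))  # stimulus A
```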

Habituation

Habituation and dishabituation are another method used to study infant perception and preference. After looking at a stimulus for a certain amount of time, we become bored of it, just as after a while we stop feeling the clothes on our body: the brain tires of the touch sensation and eventually stops informing us of it. On this basis, psychologists infer that babies will stop looking at a stimulus once they get bored of it. If the familiar stimulus is then presented alongside a new stimulus, the infant is likely to prefer looking at the new one it has not seen before. If the infant does prefer the new stimulus, we can infer that the infant is capable of discriminating between the two, and this discrimination allows researchers to gauge the stage of perceptual development an infant has reached.

Conditioning

Classical and operant conditioning are terms you should be familiar with if you have ever taken an introductory psychology course. Conditioning with infants relies on the same learning principles. Fortunately, infant studies usually just involve rewarding the infant with pleasant sounds or images, usually of or from their mother.


Basic set-up

1. Infant is given a dummy or pacifier

2. Researcher waits for the infant to begin sucking on it at their usual rate

3. If the infant begins sucking at a faster rate than usual, they are rewarded with the sound of their mother's voice

4. The infant soon learns that as long as he or she continues sucking at the increased rate, they will hear their mother's voice

5. After a while, habituation sets in as the baby loses interest in the sound and the sucking rate decreases

6. The researcher then proceeds to introduce a new sound

7. If the infant can discriminate the new sound from the old one, they will begin to suck faster again to hear it

All of these tests of perception, as mentioned above, are used to measure the perceptual development of infants.

Low Functioning Autism’s Genetic Roots

I am currently taking a genetics course at Duke University, hence the inspiration for my last post. One of my assignments also looked at a correlation between genetics and autism, so I thought I would share it here! I hope you enjoy reading :)

A new study published in the American Journal of Human Genetics has found evidence suggesting a recessive genetic component to low-functioning autism. Professor Eric Morrow and his colleagues at Brown University analysed the DNA of over 2,100 children with autism born into "simplex" families, families where only one child has autism and no other immediate family member does (Brown University, 2013). Morrow et al. searched the genomes of the autistic children and their siblings for "runs of homozygosity": long stretches of DNA in which the child inherited the same sequence from both parents (ibid). They discovered that children with low-functioning autism had longer runs of homozygosity than their siblings. Humans normally do share large blocks of DNA sequence; nearly all the participants had at least 1 million letters in common, and about a third of them shared roughly 2.5 million letters, which translates to a "shared common ancestor approximately 40 generations or 1,000 years ago" (ibid). However, in the 500 participants whose autism was accompanied by an IQ below 70, the runs of homozygosity surpassed even the 2.5 million mark and exceeded those of their siblings. The original article, titled "Intellectual Disability Is Associated with Increased Runs of Homozygosity in Simplex Autism", stresses that increased runs of homozygosity do not predict low-functioning autism; rather, a greater number increases the likelihood of carrying the shared "recessive variants" necessary for developing it (Morrow et al., 2013). In short, Morrow et al. concluded that longer runs of identical DNA lettering mean a child is far more likely than other children, even their own siblings, to inherit rare recessive traits.

These findings are relevant to the course because they illustrate that a pedigree for low-functioning autism exists even if the carried recessive trait is extremely elusive in simplex families. Unfortunately, the whole picture of the genetics behind autism is still a blur; however, the established correlation between low-functioning autism and runs of homozygosity does suggest that inheritance plays a role and that isolated spontaneous mutations are far less likely to be the only possible culprits.

The role of inheritance in low-functioning autism is a breakthrough considering the elusiveness of the disorder. Currently, most genetic explanations for low-functioning autism focus on "spontaneous mutations and having too many or too few copies of a gene"; however, in some families these hypotheses fail to explain their particular case of autism (Brown University, 2013). Runs of homozygosity at least allow parents to look somewhere new for answers. In fact, if a child is tested at an early age, high runs of homozygosity might help health professionals guide the child's development by focusing treatment on the associated issues, such as problems with speech and social interaction. People seem wary of genetic testing in general; however, given that autism is very difficult to diagnose before symptoms appear, I feel that testing for runs of homozygosity in the early stages of a diagnosis can only benefit parents trying to understand their child's condition. Furthermore, continued genetic testing of children from simplex families can only increase the likelihood of fine-tuning the genetic component of low-functioning autism.

Citation:

Brown University (2013, July 3). DNA markers in low-IQ autism suggest heredity. ScienceDaily. Retrieved July 4, 2013, from http://www.sciencedaily.com/releases/2013/07/130703140236.htm

Assignment: Part of Biology 156 at Duke University

 

Spotlight Study: “Narcissistic Employee of the Month”


An assortment of studies from the United Kingdom has found a positive correlation between "pathological personality and success at work." The studies, conducted on over 5,000 British workers, found that certain personality traits often considered dysfunctional do in fact serve as an advantage in certain roles. A case study describes author Joanne Limburg, a self-confessed perfectionist with OCD traits, who encloses drafts of her poems in transparent sleeves when sending them to the University of Cambridge. This obsessive behaviour may seem completely unnecessary to most people; however, Limburg's compulsive tendencies to "triple-check and organise" make people like her invaluable employees in "auditing and other detail-oriented work."

Scott Barry Kaufman, a cognitive psychologist at New York University, and researcher Adrian Furnham have found that schizotypal people, who exhibit "some combination of social withdrawal, strange beliefs and cognitive disorganisation", display an extraordinary capacity to notice patterns others miss. People with this seemingly dysfunctional trait also show great promise in the arts, as well as in sales, thanks to their "out-of-the-box creativity."


Finally, Professor Peter Harms of the University of Nebraska and Michael Maccoby of Oxford University have concluded that people with a "grandiose self-regard", otherwise known as narcissism, can make excellent leaders. Narcissists believe that they are the personification of an excellent leader, which pushes them to prove they are better than others while motivating those around them to learn and improve as well. This grandiose self-regard can lead narcissists to produce work that goes "beyond successful to world-changing."

Citation:

Yu, Alan. “Narcissistic Employee of the Month.” Psychology Today August 2012: 10. Print.

Attribution Errors

Fundamental Attribution Error 

The fundamental attribution error, as defined by Ross et al. (1977), is the tendency to overestimate the impact of dispositional factors and underestimate the impact of situational factors when making causal attributions for behaviour. In their study, Ross et al. randomly divided a sample into questioners and contestants, and the questioners were asked to come up with the hardest questions they could think of. Even though both the questioners and the contestants knew the roles had been assigned at random, both rated the questioners as more intelligent than the contestants, which clearly illustrates the fundamental attribution error. Other researchers have demonstrated the FAE as well: Barjonet (1980) found that people consistently attribute poor driving to the disposition of the driver rather than to external factors such as road conditions or an emergency situation.


Interestingly enough, although perhaps not all that surprisingly, the fundamental attribution error appears to be more prevalent in Western societies. Miller (1984) found that US adults were far more likely than Indian adults to commit the fundamental attribution error.

Actor-Observer Differences

Jones and Nisbett (1972) described actor-observer differences as the tendency for actors to attribute their own behaviour to external rather than internal causes, while observers attribute the same behaviour to internal causes. In one study, Nisbett (1972) asked male students to write two essays: one about their girlfriend and their choice of course, and the other about their best friend's girlfriend and their friend's choice of course. Nisbett found that when the male students wrote about themselves they made far more situational attributions for their choices, whereas when they wrote about their best friend they made far more dispositional attributions.

Actor-observer differences were also observed by West (1975) during the Watergate scandal involving President Nixon. He found that observers such as the public and the press blamed the scandal on the dispositions of the White House staff, whereas the Nixon administration blamed the circumstances for its behaviour.


Storms et al. (1973) attributed actor-observer differences to perceptual focus, believing that if you change perceptual focus you can also change attribution style. In the study, actors' attributions became less situational and more dispositional when a videotape of the conversation between the participant and another person was viewed from an observer's viewpoint, and more situational and less dispositional when the video was shown from the viewpoint of the person they had been talking to.

False Consensus Effect 

In 1977, Ross et al. also described the false consensus effect as a criticism of the ANOVA model. The false consensus effect suggests that people rely less on actual information about distinctiveness and the stimulus, and more on an assumed consensus: they believe that others would act the same way as they would in a given situation. In the study, Ross et al. asked students if they would walk around campus for 30 minutes wearing a sandwich board. Of the students who agreed to wear it, 62% thought other people would also agree; of the students who refused, 67% thought others would also refuse. Clearly, 62% plus 67% adds up to more than 100%, so at least one group was overestimating how many people shared its choice.


Sometimes the false consensus effect works the other way. When we feel strongly about an issue, or when something is very important to us, we want to believe that others do not share our view or did not do as well as we did. Fenigstein's (1996) study is an excellent example of this: he observed that students who received an A on a test underestimated the number of other students who also received an A. Fenigstein believed that we fall prey to these effects because over- or under-estimating consensus in certain circumstances helps boost our self-esteem.

Self-Serving Bias 

Self-serving bias is the tendency to attribute success to internal causes (the self-enhancing bias) but failure to external causes (the self-protecting bias). It comes in many forms, such as the self-centred bias and self-handicapping.


Unsurprisingly, the self-enhancing bias is far more pervasive than the self-protecting bias. Williams et al. (1979) found that exam success was attributed to intelligence, whereas exam failure was attributed to either poor lecturing or bad luck. The self-centred bias is quite similar, but specifically refers to taking too much (or too little) responsibility for jointly produced outcomes; a classic example is couples blaming their partner for sexual dysfunction in the relationship (Mall and Volpato, 1989). Self-handicapping is also self-enhancing and is something we all tend to do around this time of year: we exaggerate a factor that detrimentally affects performance in order to reduce feelings of guilt and responsibility in case of failure, but also to boost our self-esteem if we do happen to succeed.

Unrealistic Optimism 

Unrealistic optimism is the false belief that you are slightly better than average and that good things are more likely to happen to you. This is consistent with the human tendency to believe in a just world and to disguise any feelings of vulnerability to death. Manstead et al. (1992) found that even after being hospitalised following a car accident, drivers still rated their driving ability as above average. Even more astounding is the survey by Burger and Burns (1998), who found that women who did not use contraception thought they were less at risk of becoming pregnant than other women.
