Working memory is a difficult concept to pin down, largely because it has traditionally been classified as a branch of short-term memory (STM), and researchers still disagree about whether the two should be separated. Working memory is concerned with immediate processing: for example, holding a phone number in your head whilst trying to find the phone. Arguably, this rote repetition could be considered a form of short-term memory learning. However, for the sake of the model we are about to discuss, I will treat working memory as separate from short-term memory.
Atkinson & Shiffrin
The first major working memory model was proposed by Atkinson & Shiffrin in 1968. They argued that information enters memory from the environment and is first processed by two sensory memory systems: iconic (visual) and echoic (auditory). Unlike later models, this model holds that any form of rehearsal is sufficient for learning: the only thing that determines whether information enters long-term storage is the length of time it spends in short-term storage. Also unlike later models, Atkinson & Shiffrin held that the short-term store itself serves as the working memory store. The implication of this type of model is that long-term memory (LTM) is entirely dependent on short-term memory, and that levels of processing are irrelevant.
Today, Atkinson and Shiffrin’s model is not accepted, because numerous studies have shown that major features of the model are implausible. Four major sources of criticism are: neurological evidence that LTM can be sustained even with damage to STM, serial position effects, levels of processing, and Baddeley and Hitch’s 1974 dual-task experiment.
Shallice and Warrington (1969) studied patient KF, who had a severely impaired STM but an intact LTM. KF had suffered damage to the left parieto-occipital region of the brain. He showed a very poor digit span (fewer than two items) but normal performance on an LTM task. KF’s performance demonstrated that an intact STM is not necessary for a normally functioning LTM. Of course, an intact short-term memory may still be important for new information to enter long-term memory; but information already stored in long-term memory is not affected by damage to short-term stores.
Two studies, carried out by Tzeng (1973) and Baddeley and Hitch (1977) respectively, challenged the modal model by testing the serial position effect: the finding that words at the beginning or end of a list are easier to learn than words in the middle. Tzeng (1973) conducted a free-recall test in which participants were given a list of words, with an interpolated task introduced after each word to disrupt recall. Even though, on the modal model’s account, interpolating after every word should have removed each word from STM, Tzeng still observed both primacy and recency effects. Baddeley and Hitch studied rugby players’ recall of the names of teams they had previously played against, and found that the more recent the game, the more names the players recalled. From this they argued that the recency effect is unlikely to be due to limited short-term storage capacity.
Another experiment, carried out by Baddeley and Hitch in 1974, suggested that working memory and short-term memory are in fact separate entities. Participants had to carry out two tasks at once: holding a digit span while performing grammatical reasoning. Reasoning time increased significantly under the digit load, but reasoning accuracy was unimpaired. If reasoning relied entirely on the same store that held the digits, accuracy should have suffered as well; the results therefore suggest that STM and working memory serve separate roles.
Lastly, and most obviously, learning depends on more than just the amount of time material spends in short-term storage: levels of processing also matter. Learning depends on how material is processed (Craik and Lockhart, 1972): deep, meaningful processing produces far more durable memories than shallow, sensory processing. Craik and Lockhart distinguish two major forms of rehearsal: maintenance and elaborative. In a test of this theory, Hyde and Jenkins (1973) gave participants a list of words and asked them to complete one of two tasks differing in the depth of processing involved: rating each word for pleasantness of meaning, or detecting the occurrence of a particular letter. Recall was significantly higher among participants in the elaborative (meaningful) processing condition.