Exploring the Craik and Lockhart Model of Memory Processing: Understanding the Levels of Analysis


The levels of processing framework was presented by Fergus Craik and Robert Lockhart in 1972 as an alternative to theories of memory that postulated separate stages for sensory, working, and long-term memory. The general idea is that incoming stimuli are subjected to a series of analyses, starting with shallow sensory analysis and proceeding to deeper, more complex, abstract, and semantic analysis.

Whether a stimulus is processed at a shallow or deep stage depends on the nature of the stimulus and the time available for processing. An item processed at a deep level is less likely to be forgotten than one processed at a shallow level. At the earliest level, incoming stimuli are subjected to sensory and featural analysis; at a deeper level the item may be recognized employing pattern recognition and extraction of meaning; at a still deeper level, it may engage the subject's long-term associations. With deeper processing, a greater degree of semantic or cognitive analysis is undertaken.

Craik and Lockhart's (1972) main theoretical assumptions were as follows:

  • The level or depth of processing of a stimulus has a large effect on its memorability
  • Deeper levels of analysis produce more elaborate, longer-lasting, and stronger memory traces than do shallow levels of analysis

Therefore, instead of concentrating on the stores or structures involved, this theory concentrates on the processes involved in memory.

According to the model, three levels of processing can occur when information is encoded into memory: shallow processing (sensory), intermediate processing (phonetic), and deep processing (semantic).

  • Shallow processing: the first level of processing is the shallowest and involves structural processing. It is simply a sensory level of processing through which we merely become aware of the incoming information, i.e., when we encode only the physical qualities of something — for example, the typeface of a word or how the letters look.
  • Intermediate processing: phonemic processing occurs at this stage, which is when we encode the sound of a stimulus.

Shallow and intermediate processing involve only maintenance rehearsal and lead to fairly short-term retention of information.

  • Deep processing (semantic processing): to ensure that the input is retained for a longer period, it is important that it gets analyzed and understood in terms of the meaning that it carries. Semantic processing happens when we encode the meaning of a word and relate it to similar words with similar meanings.

Deep levels of processing encourage recall because of two factors: distinctiveness and elaboration. Distinctiveness means that a stimulus is different from other memory traces. The second factor, elaboration, requires rich processing in terms of meaning and interconnected concepts.

Craik and Tulving (1975) compared recognition performance as a function of the task performed at learning.

  • Shallow graphemic task: decide whether each word is uppercase or lowercase
  • Intermediate phonemic task: decide whether each word rhymes with a target word
  • Deep semantic task: decide whether each word fits into a sentence containing a blank

Craik and Tulving (1975) assumed that the semantic task involved deep processing and the uppercase or lowercase task involved shallow processing. They measured both the time taken to make each decision and the later recognition of the words. The data obtained are interpreted as showing that

  • deeper processing takes longer to accomplish and
  • recognition of encoded words increases as a function of the level to which they are processed, with those words engaging semantic aspects better recognized than those engaging only the phonological or structural aspects.

New light was shed on the levels of processing concept when Rogers, Kuiper, and Kirker (1977) showed that self-reference is a powerful factor. According to the self-reference effect, we remember more information if we try to relate that information to ourselves (Burns, 2006; Rogers et al., 1977).

Eysenck (1979) argued that distinctive or unique memory traces are easier to retrieve than those resembling other memory traces.

Morris et al. (1977) argued that stored information is remembered only if it is relevant to the memory test.

Strengths of Craik and Lockhart's levels of processing model of memory:

  • Theoretical elegance: the model is based on a simple and elegant principle, which makes it easy to understand and apply.
  • Empirical support: the model has been supported by a large body of empirical evidence, which lends it considerable validity.
  • Real-world applications: the model has been applied to a wide range of real-world situations, such as explaining why people are better at remembering information that is personally relevant or meaningful to them.
  • Implications for memory improvement: by understanding the factors that influence memory we can develop strategies for encoding information in a way that leads to more durable and accurate memories.
  • Influence on other theories: it has influenced the development of other models of memory, such as the levels of processing theory of semantic memory proposed by Schacter and Tulving (1974).

Limitations of Craik and Lockhart's model of memory

  • The model underestimates the importance of retrieval environments in determining memory performance.
  • It does not take into account individual differences in memory ability; some people may be better at forming and recalling memories than others regardless of how deeply they process information.
  • Long-term memory is influenced by the depth of processing, elaboration of processing, and distinctiveness of processing. However, the relative importance of these factors and how they are interrelated remains unclear.
  • The model does not account for the role of attention in memory formation. A person's ability to pay attention to and encode information can significantly determine the strength of their memory.
  • The model does not explain precisely why deep processing is so effective.

Author: Sayani Banerjee

Hey there, curious minds! I'm Sayani Banerjee, and I'm thrilled to be your companion on the fascinating journey through the realm of psychology. As a dedicated student pursuing my master's in Clinical Psychology at Calcutta University, I'm constantly driven by the desire to unravel the mysteries of the human mind and share my insights with you. My passion for teaching and my love for research come together on my blog, psychologymadeeasy.in, where we explore the world of psychology in the simplest and most engaging way possible.
