A Cognitive Neuroscience Perspective on Memory for Programming Tasks

Chris Parnin1

College of Computing, Georgia Institute of Technology

[email protected]

Abstract. When faced with frequent interruptions and task-switching, programmers have difficulty keeping relevant task knowledge in their minds. An understanding of how programmers actively manage this knowledge provides a foundation for evaluating cognitive theories and building better tools. Recently, advances in cognitive neuroscience and brain imaging technology have provided new insight into the inner workings of the mind; unfortunately, theories such as program understanding have not advanced accordingly. In this paper, we review recent findings in cognitive neuroscience and examine their impact on our theories of how programmers work and on the design of programming environments.

1 Introduction

Researchers have long been perplexed by how programmers can make sense of millions of lines of source code, extract meaningful representations, and then perform complex programming tasks, all within the limited means of human memory and cognition. To perform a programming task, a programmer must be able to read code, investigate connections, formulate goals and hypotheses, and finally distill relevant information into transient representations that are maintained long enough to execute the task. Amazingly, programmers routinely perform these mental feats across several active programming projects and tasks, in fragmented work sessions fraught with interruptions and external workplace demands.

In coping with these demands and limitations, the programmer must have the mental capacity to handle large workloads for short periods of time and cognitive mechanisms for maintaining and coordinating transient representations. As yet, we have no cognitive model that adequately explains how programmers perform difficult programming tasks in the face of constant interruption. As a consequence, we have a limited basis for predicting the effects of interruption or evaluating different tools that may support task-switching for programmers.

Perhaps new perspectives on memory and programmers are needed. Early models of memory, which we review below, identified several key processes and provided many fruitful predictions. However, when pressed with more strenuous tasks, such as dealing with an interruption, these models have difficulty accounting for sustained performance [20]. Further, new results continue to emerge from studies of patients with novel brain lesions (injuries to specific brain regions after a stroke or accident) who display behaviors that undermine many of the assumptions of early memory models [55]. Likewise, early perspectives on programmers now seem dated. Shneiderman, who has published several influential articles on programmer memory and comprehension, once likened programmers to musicians who memorize every note of thousands of songs or long symphonies, and suggested programmers would attain the same ability to commit entire programs to memory in exact detail [50]. Rather, the opposite seems to have occurred: programs are not untouched sacred tomes, but organic and social documents that are understood and navigated with the assistance of abstract memory cues such as search keywords and spatial memory within a tree view of documents or scrollbars [27].

The methods available to researchers have expanded greatly. For example, it is now possible to administer drugs that interfere with memory formation, or to genetically engineer rats (whose basic brain structure for memory is remarkably similar to ours) without the genes for neurotransmitters necessary for consolidating short-term memories into long-term memories. Additionally, fMRI machines can measure changes in blood oxygenation levels associated with increased brain activity within 1-2 seconds and localize them to regions of the brain with 1-3 mm³ precision [62]. These methods, previously unavailable, have led to the founding of a new interdisciplinary field: cognitive neuroscience, a term coined by George Miller and Michael Gazzaniga for “understanding how the functions of the physical brain can yield the thoughts and ideas of an intangible mind” [24]. For researchers studying the cognitive aspects of programmers, never have more opportunities been available to expand our understanding of the inner workings of the programmer’s mind.

In this paper, we review perspectives on memory from the cognitive neuroscience literature to gain insight into how a programmer maintains and remembers knowledge used during a programming task. The perspectives on memory offered by classical psychology have difficulty accounting for programmers in practice, and these difficulties have followed us into our theories of program comprehension. Following our review, we discuss implications for the design of programming environments and comprehension theories, as well as remaining issues.

2 Memory and Theories of Program Comprehension

2.1 Psychological Studies of Memory

Memory research has had a long and rich history in the psychology community. Here, we briefly cover some of the key findings.

One of the earliest contributions to memory research was Miller’s work in 1956 on limitations on information processing. Regardless of what items a participant was asked to memorize, Miller observed that the capacity of short-term memory appeared to be 5-9 items [35]. Recent research has suggested the actual limit is closer to 4 items [16].

In 1968, Atkinson and Shiffrin presented an influential model of memory called the modal model of memory [7]. In the modal model, information is first stored in sensory memory. Attentional processes select items from sensory memory and hold them in short-term storage. With rehearsal, the items can then be moved into long-term storage. The model characterizes the process of forming long-term memory as a serial and intentional process, with many opportunities to lose information along the way via decay or interference from newly formed memories.
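As a sketch, the modal model’s serial flow might be simulated as follows; the attention capacity, rehearsal count, and interference probability are illustrative assumptions, not values from Atkinson and Shiffrin:

```python
import random

def modal_model_trial(stimuli, attention_capacity=4, rehearsals_needed=3):
    """Toy simulation of the modal model's serial flow.

    Items pass: sensory memory -> (attention) -> short-term store
    -> (rehearsal) -> long-term store. Capacity, rehearsal count,
    and interference probability are illustrative assumptions.
    """
    sensory = list(stimuli)                    # everything registers briefly
    short_term = sensory[:attention_capacity]  # attention selects a few items
    long_term = []
    for item in short_term:
        # each rehearsal risks losing the item to decay or interference
        if all(random.random() > 0.2 for _ in range(rehearsals_needed)):
            long_term.append(item)             # survived rehearsal: consolidated
    return short_term, long_term
```

The serial bottleneck is visible in the sketch: nothing reaches the long-term store without first passing through the capacity-limited short-term store.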

Attempting to refine the modal model’s account of short-term memory, in 1974 Baddeley and Hitch introduced the idea of working memory [8] to help explain how items could be manipulated and processed in separate modalities (e.g., visual versus verbal). The original model included separate storage of verbal (phonological loop) and visual-spatial memory, with a central executive process that guided attention and retrieval from the stores. In 2000, Baddeley added an episodic buffer, which allows temporary binding of items.

Chase and Simon proposed that experts such as chess players can manage larger mental workloads by learning how to effectively chunk information after extensive practice and study [14]. The chunking theory proposes that it takes about 8 seconds to learn a new chunk, and that only about seven chunks can be held in short-term memory. For example, a chess master can outmaneuver an expert player because the master can store and recall a larger number of plausible moves and better assess positions on the chess board.
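The theory’s quantitative claims (roughly 8 seconds to learn a chunk, about seven chunks held at once) can be turned into a back-of-the-envelope estimate; the chunk size and grouping scheme below are assumptions for illustration only:

```python
def chunks_recallable(items, chunk_size=3, study_seconds=30,
                      seconds_per_chunk=8, capacity=7):
    """Estimate recall under chunking theory's stated limits.

    Groups items into equal-sized chunks, charges ~8 s to learn each
    new chunk, and caps short-term memory at ~7 chunks. The chunk
    size and uniform grouping are illustrative assumptions.
    """
    n_chunks = -(-len(items) // chunk_size)          # ceiling division
    learnable = study_seconds // seconds_per_chunk   # time-limited learning
    held = min(n_chunks, learnable, capacity)        # capacity-limited store
    return held * chunk_size                         # items recovered by unpacking

# e.g. 24 items studied for 30 s: only 3 chunks (9 items) fit the time budget
```

With a longer study period the capacity limit, not the time budget, becomes the binding constraint, which is exactly the tension the next paragraph raises.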

Several researchers have raised concerns about limitations of the chunking theory. First, information for tasks such as playing chess did not appear to be stored in short-term or working memory (or at least was transferred to long-term memory faster than predicted by chunking theory). Charness found that when chess players interleaved chess with other tasks long enough to clear short-term memory, there was little or no effect on recall [12]. Second, chunking theory has a hard time explaining how people performing everyday tasks [19] or experts [20] could handle unpacking and shifting between multiple chunks with such a limited store.

An important alternative to the chunking theory was articulated over a series of papers by Chase, Ericsson, and Staszewski [13, 21], who observed the mental strategies used by mnemonists and experts. The resulting skilled memory theory identifies two key strategies experts use to achieve their remarkable memory and problem-solving ability: (a) information is encoded with numerous and elaborated cues related to prior knowledge (similar to Tulving’s encoding specificity principle [61]); and (b) experts develop a retrieval structure for indexing information in long-term memory (for example, experts might associate locations within a room with material to memorize; by mentally visiting locations within the room, the expert can retrieve the associated items from those locations).
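The retrieval-structure strategy can be sketched as a stable, well-practiced index bound to transient material; the route and items below are invented examples, not taken from the cited studies:

```python
class RetrievalStructure:
    """Sketch of skilled memory theory's retrieval structure.

    A fixed, well-practiced index (here, locations along a familiar
    route) is bound to new material at encoding; mentally revisiting
    the index in order retrieves the material.
    """
    def __init__(self, loci):
        self.loci = loci          # stable long-term index
        self.bindings = {}        # transient locus -> item associations

    def encode(self, items):
        for locus, item in zip(self.loci, items):
            self.bindings[locus] = item   # elaborated cue: item-at-place

    def recall(self):
        # a mental walk through the route retrieves items in order
        return [self.bindings[l] for l in self.loci if l in self.bindings]

route = RetrievalStructure(["door", "hallway", "kitchen", "window"])
route.encode(["int x", "loop", "null check"])
# route.recall() walks the route and yields the items in encoding order
```

The key property is that capacity scales with the size of the practiced index rather than with a fixed short-term store, which is what lets experts exceed the seven-chunk limit.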

Recently, the skilled memory theory has been extended into the long-term working memory theory, which claims that many of the problems with previous theories can be explained if working memory actually involves immediate storage and activation of long-term memories [20].

2.2 Cognitive Theories in the Psychology of Programmers

In studying the psychology of programmers, many researchers devised theories based on the notions of memory that were available at the time. For example, many theories use the concept of chunking to build cognitive models of programming. Despite the problems noted by other psychologists, many of these notions still persist today. Here, we briefly review current theories of programmer cognition and comprehension.

In top-down comprehension [11], the programmer formulates a hypothesis about the program that is refined by expanding the code hierarchy; the programmer is guided by cues called beacons that are similar to information scent in information foraging theory [43]. In bottom-up comprehension [49, 42], the programmer gradually understands code by chunking the source code into syntactic and semantic knowledge units. In opportunistic and systematic strategies [28], programmers either systematically examine the program behavior or seek boundaries to limit their scope of comprehension on an as-needed basis. Von Mayrhauser and Vans offered an integrated metamodel [63] to situate the different comprehension strategies in one model.

3 Memory in Cognitive Neuroscience

3.1 Building Blocks of Memory: Long-term Potentiation (LTP)

Like physicists who seek to understand the building blocks of atoms in order to understand the world, we seek to understand the building blocks of the brain, especially those that contribute to memory. Nearly a century after scientists recognized the atom as a fundamental unit of matter, neuroscientists recognized that neurons play a similar role. Certainly, when examining the neuron in depth today, the picture has changed much from the simple view of passive integration of incoming signals to a view of a complex interplay of voltage-gated ion channels with local synaptic regulation. Here, we focus on the fundamental aspects of a neuron that explain how a brief stimulus from the world can have long-lasting effects on the brain.

The neurological basis for memory is widely believed to be the long-term potentiation (LTP) of neuron synapses. After a synapse undergoes LTP, subsequent stimulation of the synapse will produce a stronger response than before LTP. In 1973, Bliss and Lomo [10] first observed LTP after repeatedly stimulating rabbit brain cells, finding responses that increased 2-3 times and persisted for several hours. Some researchers consider LTP to be a neurobiological codification of the Hebbian learning process: neurons that fire together, wire together [26].
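The Hebbian principle can be stated as a one-line update rule; the learning rate and saturation ceiling below are illustrative, not physiological, values:

```python
def hebbian_update(weight, pre, post, rate=0.1, ceiling=1.0):
    """Hebbian rule often summarized as 'fire together, wire together'.

    Coincident pre- and postsynaptic activity strengthens the synapse,
    loosely analogous to LTP; rate and ceiling are illustrative values.
    """
    weight += rate * pre * post       # potentiation only when both fire
    return min(weight, ceiling)       # saturation caps the response

w = 0.2
for _ in range(5):                    # repeated paired stimulation ('tetanus')
    w = hebbian_update(w, pre=1.0, post=1.0)
# after repeated pairing, the same input now evokes a stronger response
```

Note that if either side is silent (`pre` or `post` is 0), the weight is unchanged, which is the coincidence requirement at the heart of the Hebbian account.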

An interesting aspect of LTP is its various forms of persistence and its connection with memory consolidation. It is now understood that LTP occurs in at least two stages: early LTP and late LTP. In early LTP, an increased response is achieved for a few hours by temporarily increasing the sensitivity and number of receptors at a given synapse, changes that occur within 1-2 seconds [29]. In late LTP, longer-lasting changes involve the production of proteins that signal changes to the synapse’s surface area and additional dendritic spines associated with stimulation [29].

But how long are these long-lasting changes? In general, synapses undergoing early LTP will return to baseline within three hours. Late LTP, however, has a much longer duration: lasting from several hours or days to months or even years (see Abraham’s review on LTP duration [2] for more in-depth coverage). LTP in the rat hippocampus lasting months, and in one instance a year, has been observed in the laboratory simply after applying four instances of high-frequency stimulation spaced by five minutes [3]. In the human brain, newly formed memories are expected to persist in the hippocampus for only a few months or years, until system memory consolidation into the neocortex is complete. This is consistent with amnesia patients who have difficulty recalling long-term memories formed in the few months or years prior to their accident [55].

Further neurological processes of memory are of interest, such as long-term depression (LTD) and neurogenesis. Whereas LTP increases the efficacy of synaptic transmission, LTD unravels those improvements, making it more difficult for two neurons to fire together. The interactions between LTD and LTP are not yet entirely understood; however, it is known that during the initial phases of LTP, reversal is more easily accomplished but becomes less so as time passes. If LTP is a mechanism for rapid memorization, are there other possible mechanisms for changes in the brain to occur? In short, yes: with neurogenesis it is possible to grow new neurons and form new growths of white matter. Brain cells were once considered to be like teeth: once lost, they could not be regrown. It has since been demonstrated that brain cells routinely die and new ones grow throughout our lives [51]. One of the most striking examples is a study of taxi drivers in London (who need to know very detailed spatial and contextual representations such as street intersections, routes, and traffic conditions of the city): when the size of the hippocampus (an area of the brain responsible for remembering associations and spatial memory) of taxi drivers was compared with that of the general population, a significant increase in size was observed, and the increase was correlated with time on the job [30].

3.2 Role of Hippocampus in Rapid Memorization

Few medical cases have so arrested the imagination and made as profound an impact on memory research as the story of H.M. [48]. H.M. was a man suffering from severe seizures who elected to have most of his medial temporal lobe bilaterally removed in an attempt to reduce the occurrence of the seizures. Although the surgery was successful in reducing the seizures, an unforeseen consequence was that H.M. now suffered from anterograde amnesia, a condition in which a patient cannot recall or form new memories but can otherwise recall past life events and facts and operate normally. H.M., with very few exceptions, could not learn new semantic facts such as new words or remember recent events such as meeting a person or having a meal. For H.M., retention of new memories generally lasted only a few minutes. If a person met H.M. for the first time, left the room, and reentered after a few minutes, H.M. would have no recollection of having met the person or even of having had a conversation. A detailed analysis of the surgery performed on H.M. indicates that virtually all of the entorhinal cortex and perirhinal cortex were removed, that about half of the hippocampal cortex remained although severely atrophied, and that the parahippocampal cortex was largely intact [15]. Since H.M., numerous cases have emerged demonstrating how different lesions result in different losses of memory ability; the case of H.M., however, illustrates the essential role of the hippocampus in forming long-lasting memories.

Morris and Frey postulate that the hippocampus provides the ability for an “automatic recording of attended experience” [38]. They argue that many important events cannot be anticipated and may never occur again, and therefore traces and features of experiences must be recorded in real time as they happen. Further, Morris argues, based on neuroanatomical studies, that the hippocampus does not store sensory stimuli directly, but rather associates indices into other cortical regions [39]. For example, the memory of eating a new food at a restaurant is associated with various stimuli (the visual appearance, aroma, taste), contextual details such as the scuffle and movements of other patrons, and semantic details such as the name of the restaurant. The hippocampus is perfectly situated and equipped for this role of automatic association: it has very plastic neurons able to undergo LTP, connections from numerous regions such as the visual and auditory pathways that have already performed bottom-up processing, and connections with the prefrontal cortex for top-down processing.

Although studies of amnesia patients provide insight into loss of ability, they cannot account for how these systems operate in healthy people. Imaging studies of people performing memorization tasks have provided even more understanding of the hippocampus. In one study, subjects memorized a list of words and then were asked to recall the studied words [18]. What was unique about this study was that fMRI images were taken while the subjects were studying and recalling the words. The researchers found that failure to recall a word was linked to weaker activity in the hippocampus during memorization; in contrast, successful recall was linked to stronger activity. From this study, one could conclude that if a stimulus fails to induce LTP in hippocampal cells at the time of the event, then no conscious memory is likely to remain. Another study found a similar effect in the entorhinal cortex for items judged to be familiar but not recalled [37].

Research has also found evidence suggesting that specific subareas (e.g., the perirhinal or parahippocampal cortices) and specific lateralization (left or right) appear to be associated with different functions (e.g., familiarity recognition or encoding) and different modalities (e.g., spatial vs. verbal). However, it is still not entirely clear how well we can localize function. For example, the parahippocampus has been associated with encoding and recall of spatial memories [44], but activity in the parahippocampus was also found to be highly associated with recognizing objects with unique contextual associations: a hardhat invokes a specific context of dusty construction yards and is therefore associated with a higher parahippocampal response, whereas a book has a less specific context and thus evokes less activity [9]. One view, put forward by Mayes, proposes that rather than operating on specific modalities of a hard-coded domain, such as verbal-specific processing, the hippocampus supports different types of associations (inter-item, within-domain, and between-domain) and requires different computations for these association types [31]. This domain dichotomy view explains why a process such as familiarity recognition may be associated with different regions: recognizing a familiar object requires different processing (and thus different regions) than recognizing a familiar object-and-location association.

3.3 Memory Organization and Architectures

Memory Types As previously mentioned, researchers distinguish between sensory, short-term, working, and long-term memory. For long-term memory, Squire proposed a taxonomy [56] that divides types of long-term memory hierarchically, starting with a distinction between non-declarative (implicit) and declarative (explicit) memories. Non-declarative memory includes priming and muscle memory, whereas declarative memory includes knowledge of facts and events. Tulving, an influential memory researcher publishing for over 50 years, describes semantic memory as knowledge of facts and episodic memory as a recollection of past events. Tulving’s experience with an amnesiac patient, E.P., who could learn new facts but not remember how he came to learn them, led Tulving to distinguish between our ability to know (to recall that the sky is blue) and to remember (to relive a past experience via mental time travel) [59].

Studies of patients with newly acquired amnesia have revealed further subtypes of memories, including familiarity, recency, and source memories. Familiarity memory involves the “feeling of knowing” that an object in a particular context has been encountered before, without necessarily recalling the context (e.g., seeing a face in the crowd that seems familiar but does not trigger a name). Familiarity memory is not to be confused with priming. First, in priming, a person previously exposed to an item is more likely to recall that item in the future, but without any conscious recollection of having been primed; with familiarity, a person is aware that something seems familiar. Second, priming and familiarity have doubly dissociated brain regions: familiarity is supported in the entorhinal cortex, whereas priming is believed to involve modification of the object representations within perceptual memory (for example, H.M. could be primed only for words he had learned prior to his surgery) [45].

Some tasks involve recalling how long ago an event occurred, called recency memory. Milner studied patients who underwent surgery affecting the frontal lobes and found that certain patients had difficulty recalling how recently they had seen a word [36]. This suggests the prefrontal cortex plays a role in maintaining and binding a temporal context to memories. Further research has uncovered the importance of top-down involvement of the prefrontal cortex in episodic memory. Although many associations can be remembered in a bottom-up fashion as part of episodic memory, certain types of memories require top-down control and thus direct involvement of the prefrontal cortex.

Often when we learn a fact we can also recall the initial experience in which we learned it; these memories are called source memories. Activation of the prefrontal cortex is necessary for forming source memories [25].

Memory Systems Since the modal model of memory was proposed in 1968, numerous findings have challenged many of the model’s basic premises and, accordingly, several researchers have sought to put forth their own accounts. Here, we review a few of these models.

In Tulving’s serial-parallel-independent (SPI) model [60], rather than providing a mechanistic model of memory, Tulving provides a few guiding principles or generalizations of memory. Simply, he believes the process of encoding a memory to be serial (the output of one system provides the input for another), the process of storage to be distributed and parallel (traces of a single stimulus exist in multiple regions of the brain with the potential for later access), and the retrieval of memory to be independent (different systems do not depend on others; once a memory is formed, it is available within that system even if others are damaged). This view highlights the importance of viewing memory as a network of coordinating systems rather than a unified store.

In Fuster’s account of memory systems [22], he uses a two-stage model derived from anatomical and neuropsychological studies of the brain. In general, raw senses are processed at progressively higher levels of analysis, starting from the most posterior region of the brain and moving to the most anterior region. Fuster divides this journey into two major components: perceptual memory and executive memory. Within the perceptual region, processing and memory start from phyletic sensory memory, integrate into polysensory memory, form into episodic memory, generalize into semantic memory, and abstract into conceptual memory. After a brief hop over the motor system, the executive memory region involves concept, plan, program, act, and phyletic motor memories. Fuster’s account again emphasizes the specialization and localization of memory, but also highlights the importance of top-down and bottom-up processes in memory and how the processing of a stimulus is collocated with its memory.

In the account of memory by Anderson (of ACT-R fame [6]) and colleagues [5], a cognitive architecture is composed of several modules responsible for specialized processing of information. Modules have access to buffers, which include a goal buffer, a retrieval buffer, a visual buffer, and a motor buffer. Interestingly, the model also includes a production system for learning based on the basal ganglia. The basal ganglia are mainly responsible for motor control; however, they have also been “recruited” by the prefrontal cortex for reward-based learning of rules [34]. As such, the strength of the ACT-R model lies in simulating learning and problem solving; the model is less effective at modeling memory retention (for an exception, see Altmann and Trafton’s work on memory for goals [4]).
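The buffer-and-production organization described above can be sketched in miniature. The buffer names follow the text; the declarative chunk and the two production rules are invented for illustration, and ACT-R itself is a far richer simulation system:

```python
# Minimal sketch of a buffer-and-production architecture in the ACT-R style.
memory = {"capital-of-france": "paris"}      # declarative store

buffers = {"goal": {"task": "answer", "query": "capital-of-france"},
           "retrieval": None, "visual": None, "motor": None}

productions = [
    # rule: the goal poses a query and retrieval is empty -> retrieve
    (lambda b: b["goal"]["task"] == "answer" and b["retrieval"] is None,
     lambda b: b.update(retrieval=memory[b["goal"]["query"]])),
    # rule: retrieval holds an answer and motor is idle -> respond
    (lambda b: b["retrieval"] is not None and b["motor"] is None,
     lambda b: b.update(motor=("say", b["retrieval"]))),
]

while True:
    fired = [act for cond, act in productions if cond(buffers)]
    if not fired:
        break
    fired[0](buffers)                         # one production fires per cycle
```

All communication between modules passes through the small set of buffers, which is the architectural claim that distinguishes ACT-R from a single undifferentiated working memory.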

3.4 PFC: Goal Memory and Executive Processes

Humans fluidly perform and seamlessly switch among different tasks in a day. Routine activities, such as ordering a cup of coffee, can be performed without much cognitive effort. The rules are readily apparent: selecting a cup size, specifying a brew, paying the cashier. Yet routine activities can be highly dynamic and even arbitrary. People have no problem adapting the rule to buy a cup of tea instead, and, given a rule never before encountered (clap if you hear a phone ring), most people would have no problem performing the task. However, if the rule were instead to clap if you hear a phone ring in a coffee shop, then people may fail to remember to apply it. Such forgetting would be a failure of prospective memory: remembering to remember. How does the brain store and manage prospective memories and the other supporting memories needed for performing tasks?

The prefrontal cortex (PFC) is a region situated in the most anterior (toward the forehead) portion of the frontal lobe. The PFC, a recent evolutionary addition, extends from the motor control regions of the frontal lobe to provide cognitive and executive control. E. Miller and Cohen [33] provide a compelling and influential account of the PFC. They argue the PFC provides the ability to bias a particular response from many possible choices. For example, when crossing a street, a person may be accustomed to looking left to check for oncoming traffic. However, if that person were an American tourist visiting London, then top-down control would be required to override the typical response and bias it toward looking right first. In this theory, rules, plans, and representations for tasks are learned via highly plastic PFC neurons (a view also shared by Fuster [22]), but may migrate over time. One apt metaphor offered by Miller and Cohen is a railroad switch:

“The hippocampus is responsible for laying down new tracks and the PFC is responsiblefor flexibly switching between them.”

The PFC also plays an important role in top-down attention. In early studies of monkey brains, when a food reward was shown to a monkey and then hidden for a delay period, persistent firing of neurons in the PFC was sustained during the delay period. Despite distracting stimuli, the monkey could recall the location of the food reward. However, monkeys with PFC lesions could not maintain attention and performed poorly at recalling the food [23]. More recent work has uncovered a possible mechanism for how the PFC can simultaneously maintain several active items in mind. When examining the firing patterns of ensembles of neurons, rhythmic oscillations can be observed. These oscillations are believed to encode attributes of an attended item. Siegel and colleagues [52] observed that when multiple items needed to be attended to, distinct items were maintained at distinct phase orientations of the oscillating signal. Like waves traveling along a string tied to a door knob, our limit on attending to multiple objects may simply be bound by the speed and space available for separating items within a frequency spectrum (a problem well known in telephone and ethernet communications). An interesting benefit emerging from phase coding of items is “free” temporal order of those items. In the same experiment, when the order of items was misremembered, there was a correlation with inadequate phase separation of the encoded items: the signal still preserved enough information to represent the items, but not enough to determine their order. This view offers an interesting alternative to the concept of working memory: the prefrontal cortex can maintain many representations for tasks (especially if the representations refer to associations within the hippocampus), but can only attend to a few at a time.
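The phase-coding idea can be sketched numerically; the cycle length and the minimum separation needed to keep items distinct are illustrative stand-ins, not the measured oscillation parameters from the study:

```python
import math

def phase_code(items, cycle_ms=125.0, min_separation_ms=25.0):
    """Sketch of phase coding of multiple attended items.

    Each item is assigned a distinct phase within one oscillation
    cycle; temporal order falls out of phase order for free. The
    cycle length and minimum separation are illustrative numbers.
    """
    slot = cycle_ms / len(items)
    if slot < min_separation_ms:
        # inadequate phase separation: items blur and order is lost
        raise ValueError("items too close in phase to preserve order")
    # phase (radians) at which each item's ensemble fires
    return {item: 2 * math.pi * i * slot / cycle_ms
            for i, item in enumerate(items)}

# with these numbers, 4 items fit comfortably; a 6th exhausts the spectrum
```

The capacity limit here is not a count of slots but a bandwidth constraint, which mirrors the frequency-division analogy to telephone and ethernet communications.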

Understanding how cognitive control occurs in the prefrontal cortex is still an open research question. However, researchers have been successful in understanding how the prefrontal cortex supports one type of process that is important for the suspension of tasks: prospective memory. Prospective memory is remembering to remember to perform an action in the future under a specific context (e.g., setting up a mental reminder to buy milk on the way home from work) [64]. Often intentions appear to spontaneously pop into mind prior to an important event, or sometimes, unfortunately, later than intended. Researchers have sought to understand the underlying mechanisms of prospective memory. Some researchers believe prospective memory requires some form of attentional resources [54], whereas others believe that if reminder cues are readily available then the process could be automatic [32]. A recent fMRI study found that, depending on the nature of the intention, prospective memory can involve both strategic monitoring and automatic retrieval from cues [46].

4 Task Memory Model

Understanding both brain structures and the associated jargon (e.g., dorsolateral prefrontal cortex) can be a daunting endeavor for anyone. Here, we present the task memory model in part to summarize insights from the cognitive neuroscience literature on memory, but also to abstract away from the intricacies of the brain and its nomenclature. Our model shares similar goals with the stores model [17], where Douce argues that multiple modalities, such as spatial memory, play a crucial role in code cognition. Our model intends to go further, by first accounting for the underlying constraints of different memory systems, and then reconnecting these constraints to the memory requirements of programming tasks and the design of programming environments.

For the purpose of the forthcoming discussion, we introduce the term task memory: the set of constructs (such as goals) and processes (such as suspension) needed to perform tasks. Our goal is to explain how people such as programmers can maintain representations for complex and long-running tasks (over the course of many hours or several days) despite interruptions or task switches. In defining task memory and its corresponding model, not only do we want to avoid the ambiguity of a term such as working memory, we also want to go further by specifying task-related concepts such as suspension or goals and relating them to specific processes and localized function within the brain.

4.1 Memory Pathways

As we perceive sensations from the world, those sensations flow along pathways that actively process and interpret perceptions of our world. The impression of these perceptions is what we understand as memory. Even purely internal events, such as our inner thoughts, will activate the same motor speech areas and auditory comprehension pathways (i.e., subvocalization) as listening to ourselves talking. Therefore, to speak strictly in terms of storage would be to misunderstand memory: the storage of memory is interleaved with the same pathways that process and later recognize past sensations.

We divide the storage pathways of task constructs into three regions: the frontal region, the associative region, and the perceptual region (see Figure 1).

Perceptual Region The perceptual region contains both primitive and salient representations of stimuli. This region is segmented into visual, spatial, and semantic (including auditory) areas. Each area is responsible for interpreting and storing representations of stimuli. These representations are linked so that spreading activation is possible for learned concepts.

There are short-term effects of perceiving a stimulus. Short-term retention occurs locally, allowing, for example, same/different comparisons to be made. In addition, a stimulus will prime representations, but only at the level at which a person has previous experience (e.g., a person cannot be primed for the semantic meaning of a word they have never learned, but under the right conditions they can be primed for the visual perception of the word).

Associative Region The associative region receives inputs upstream from each area of the perceptual region. The associative region has several interesting capabilities. It can receive several distinguishing features (such as visual and semantic features) and create a resulting association. The formation of the association is fast and automatic (an autoassociative encoding of perceptual features), but the features themselves are not stored; rather, what is stored are indices into representation sites in the perceptual region. Associations are not formed for every stimulus but instead are selectively formed based on stimulus strength (attention and novelty detection play a large role).

These associations are formed in such a way that activation of any one of the features will activate the indices of the other associated features, which will in turn activate the representations within the perceptual region. An association can last several hours, but if strengthened can last days and in some cases years. However, associations can be overturned, or may not form in the first place if similar associations already exist.

Finally, the associative region is capable of encoding familiarity. Encoding familiarity allows stimuli to be identified more readily without requiring representations of the stimuli to be well formed. This allows a feature to be recognizable (e.g., a face) but not associated with other features (e.g., a name).
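The pattern-completion behavior described above can be sketched with a minimal Hopfield-style autoassociative network. This is our illustration of the computational idea, not a claim about hippocampal wiring: features are bound by Hebbian learning, and activating only a subset of the features (a partial cue) settles the network back into the full stored association.

```python
# Minimal Hopfield-style autoassociative memory: features are bound by
# Hebbian outer-product learning; a partial cue reactivates the whole
# stored pattern (cued recall via pattern completion).

def train(patterns, n):
    """Hebbian outer-product learning over +1/-1 patterns."""
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, sweeps=5):
    """Settle the network from a partial cue (0 marks unknown features)."""
    s = list(cue)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)) if j != i)
            if h != 0:
                s[i] = 1 if h > 0 else -1
    return s

# One "association": four visual feature bits bound to four semantic bits.
pattern = [1, -1, 1, 1, -1, -1, 1, -1]
w = train([pattern], len(pattern))

# Cue with only the visual half; the semantic half is reactivated.
cue = [1, -1, 1, 1, 0, 0, 0, 0]
print(recall(w, cue) == pattern)  # True
```

As in the text, the cue need not be complete: any sufficiently strong subset of the bound features reactivates the rest.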

Frontal Region The frontal region contains important pathways for interpreting perceptions, selecting responses, forming and attending to representations and goals, and directing learning and memory. Pathways in the frontal region are well connected to the perceptual and associative regions, allowing multiple pathways to access representations and impose top-down influence. Within the frontal region, important pathways exist for managing tasks, such as monitoring and switching tasks.

Memory supported by the frontal region includes prospective, source, and recency memory. The frontal region provides the primary infrastructure for holding a task’s plans, goals, and task-relevant bindings. These task elements do not fade like short-term memory, but persist for hours or days. Task-relevant bindings do not store items directly, but rather refer to long-term memory or stores within the associative and perceptual regions. Finally, the frontal region provides infrastructure for reminders to “pop into the mind” in the presence of appropriate cues.

[Figure 1 depicts the task memory model, with regions labeled Perceptual, Spatial, Indices, Goals, Attention, and Bindings.]

Fig. 1. Task memory model.

5 Considerations

We have presented several possible mechanisms underlying memory and the associated cognitive control processes. In this section, we consider possible ramifications and speculate on the impact on programming environments and theories of program comprehension. Design elements for programming environments have been discussed before [57]; here, we describe elements not previously covered in light of new findings in memory and considerations such as interruptions and multitasking. Remember, these ideas are not claims but a line of inquiry.

5.1 Programming Environment Support

Development tasks typically require coordinating software changes across multiple locations in a program’s source code. Programming environments have provided limited support for managing the active artifacts relevant to the programming task. How a programmer represents these items in memory can inform how to better design programming environments.

Names and Notes Programs are composed almost entirely of names. People forget or have difficulty recalling names on a daily basis. Some names in programs are for common operations such as iteration or familiar concepts such as sorting. But for many program elements, we may know the face, but not the name.

To find or write code, programming environments require knowing a name either precisely or partially, as in name-completion systems or class names. But unlike everyday objects, we do not have other aspects to assist in recall. We cannot call upon temporal or contextual clues (“what was that object I saw yesterday when I was debugging?”). We cannot easily ask spatially (“what was the object near the parsing code?”), nor semantically (“what object was for checking security?”). Programmers still try asking these questions; they just have to find creative (but costly) ways of answering them.

For written notes, we often invent our own names to represent a current thought. We might write down “labels” or “security”. Presumably, we want to record a prospective reminder to perform some action for a programming task. Amazingly, when looking at an old note, we can often recall the purpose of the note and the circumstance in which we wrote it. However, other times we do not even remember writing the note, or the note triggers only a vague recollection. For programmers, although notes have the benefits of low overhead and conciseness, they are deficient at capturing detailed and delocalized knowledge. When notes fail to capture appropriate detail, programmers have to resort to costly information-seeking activities, such as navigating source code or viewing source code history, to rebuild their working context. Ultimately, neither notes nor environmental cues fully utilize program structure or state within programming environments, and, more importantly, neither digitally links to the other. Support for easily attaching notes to cues could create quick and powerful reminders: for example, pinning a virtual sticky note on a code document or on a file within the document treeview.
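As a hypothetical sketch of what digitally linking notes to cues might look like, the structure below anchors a note to a code location and records the context in which it was written, so that later recall has more than the bare shorthand text to go on. All names here (`StickyNote`, `notes_for`) are illustrative, not an existing tool's API.

```python
# Hypothetical sketch: a sticky note anchored to an environmental cue
# (a file and line), carrying the context in which it was written.

from dataclasses import dataclass, field
import time

@dataclass
class StickyNote:
    text: str            # the programmer's own shorthand, e.g. "labels"
    file: str            # cue: which document the note is pinned to
    line: int            # cue: spatial anchor within the document
    context: str = ""    # e.g. the active task or search term when written
    created: float = field(default_factory=time.time)

notes = [StickyNote("check escaping here", "parser.py", 120,
                    context="security audit")]

def notes_for(file, notes):
    """Surface pinned notes when the programmer reopens a document."""
    return [n for n in notes if n.file == file]

print([n.text for n in notes_for("parser.py", notes)])
# ['check escaping here']
```

Because the note carries its anchor and context, reopening `parser.py` can resurface the reminder at the right place rather than relying on the programmer to recall what "labels" meant.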

Levels of Support: Memory and Time In Table 1, we consider different levels of support for interruption recovery based on the decay of task memory. When suspending a task for a few minutes, what is most at risk is the loss of an ensemble of well-crafted thoughts. Humans are limited in their ability to simultaneously maintain attention to mental thoughts. Thus, a short-term interruption may not necessarily erase the memory of those thoughts, but we may never again find that insightful combination of thoughts attended simultaneously with the same active top-down representations.

When suspending a task for a few hours, many newly formed associations and representations may still be intact. Upon return to the task, the programmer may need a brief reminder to reactivate suspended task goals and representations; it is not likely they would have forgotten these yet. In support of this process, programmers may need a quick refresh of the artifacts to help restore the details of the representations. During the task suspension, weak associations may have faded. Programmers may forget a relationship they discovered between code items or not recall where items are located.

Programmers returning to a task after several days require a different level of support. After such a delay, details such as new names of identifiers may have faded, and many representations used for the task may no longer be active. Traces of memories will guide the programmer in returning to work: Some code sections will feel more familiar than others. Further, external cues, such as jotted-down goals, will help guide navigation and jump-start work. Finally, episodic recall of activity will help restore plans and potentially identify what actions to perform next.

Interval  Support

minutes   Support for managing attention.
hours     Brief reminder to restore top-level goals. Support for restoring artifacts. Simple associative cues, such as words from familiar code symbols, are effective.
days      Support for restoring representations. Semantic-based interfaces are less effective; use episodic-based interfaces.
weeks     Most representations have faded. Focus on restoring goals and plans.

Table 1. Different length intervals of task suspension require different types of support from the programming environment when resuming.

Environmental Cues and Beyond Observations of developers suggest they frequently rely on cues for maintaining context during programming. For example, Ko et al. [27] observed programmers using open document tabs and scrollbars as aids for maintaining context during their programming tasks. However, environmental cues often do not provide sufficient context to trigger memories: In studies of developer navigation histories, a common finding is that developers frequently visit many locations in rapid succession, a phenomenon known as navigation jitter [53]. Navigation jitter has commonly been attributed to developers flipping through open tabs and file lists when trying to recall a location [53, 41]. Environmental cues such as open tabs may be insufficient because, when automatically encoding working state, what a developer remembers may be spatial and textual cues within the code document rather than the semantic or structural location of the code element [58].

By enriching environmental cues to take more advantage of the temporal, spatial, and contextual aspects we have previously discussed, we would expect improvements in programmer productivity. Research comparing development interfaces that use names against those that use the content a name refers to has shown that names are slower and less accurate than content [47], and that content is strongly preferred over names when presented temporally [40]. Cues should enhance both an item’s recency and familiarity. The temporal order of visiting an item should be easily discoverable. The context of visiting an element should also be clear: Tabs or files can be more understandable if it is made clear how a programmer visited the item (e.g., indicate whether a file was edited, visited while stepping through a debugging session, or found from a search result [with the search keyword used to find it]). Other artifacts can be important cues for a programming task: events on calendars, meeting notes, check-ins from source control, and emails from colleagues. Exploring how to collect, integrate, and present these various cues offers an exciting research challenge.
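The enriched-cue idea can be sketched as a small data structure. This is a speculative illustration, not an existing IDE feature: each visit records how a location was reached (edited, hit while debugging, found by search, with the keyword kept), and a tab label is built from the most recent, most informative visit.

```python
# Illustrative sketch: record not just *that* a file was visited, but *how*,
# so a tab cue can later reinstate recency, familiarity, and context.

from collections import namedtuple

Visit = namedtuple("Visit", "file via detail timestamp")

history = [
    Visit("auth.py",   "search",   "keyword: token expiry", 1),
    Visit("auth.py",   "edit",     "",                      2),
    Visit("routes.py", "debugger", "breakpoint step",       3),
]

def cue_label(file, history):
    """Build a tab label from the most recent visit to a file."""
    visits = [v for v in history if v.file == file]
    last = max(visits, key=lambda v: v.timestamp)
    tag = {"edit": "[edited]", "debugger": "[debug]", "search": "[search]"}[last.via]
    return f"{tag} {file}" + (f" ({last.detail})" if last.detail else "")

print(cue_label("auth.py", history))    # [edited] auth.py
print(cue_label("routes.py", history))  # [debug] routes.py (breakpoint step)
```

A real implementation would fold in the other cue sources the text lists (calendar events, check-ins, emails); the sketch only shows that the required context is cheap to capture at visit time and cheap to surface later.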

5.2 Theories

Here, we consider some implications for current theories of program comprehension and provide some concepts for developing richer theories.

Visual Chunks In Shneiderman and Mayer’s syntax/semantic model [49], programmers do not retain memories of syntax, but only their meanings. This conclusion was reached based on the programmer’s ability to exactly recreate a program statement: i.e., even changing a symbol from i to x would invalidate that statement. By these measures, programmers tended to perform poorly when exactly reproducing the syntax of statements recently read, but instead retained their meanings.

When the syntax/semantic model was conceived, it was based on a variation of the modal model of memory (items move from sensory memory to short-term memory and then to long-term memory through active rehearsal). For anyone who has read paragraphs of text or lines of code, such a model may seem counter-intuitive. Unlike attempting to rehearse a phone number, when we read text we do not frequently stop to remember the words or their meaning, nor do we pause when engaged in casual conversation.

We propose that semantic meanings of read program statements are retained without intentional rehearsal, but instead with autoassociative support from the hippocampal formation. In contrast to the syntax/semantic model, we suggest memory of syntax is retained, not as an exact memorization of characters of text, but via abstracted perceptual patterns or visual sketches. For example, a certain region of code containing many distinct patterns of for loops and operations on character strings produces a unique signature of text indentation and syntax highlighting that would be recognizable when quickly scanning source code. Such an ability would be advantageous to programmers who need to quickly and frequently switch documents and skim through code without having to deeply process the text in order to recognize relevant bits.

We introduce the concept of visual chunks: regions of code that may not yet have any strong semantic association, but whose perceptual features are familiar and recognizable to a programmer. Visual chunks can be associated with temporal and contextual details, such as a search term or hypothesis used in finding the visual chunk. Visual chunks can also be associated spatially with each other (e.g., above or below another visual chunk). Finally, a visual chunk can be associated with subvocalized inner thought, giving it an internal nickname.
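One way a tool might approximate such a perceptual signature (purely illustrative; not a claim about how the brain encodes it) is to reduce a code region to its indentation profile and a coarse per-line shape, so that a region stays recognizable when skimming even before any semantic understanding exists.

```python
# Illustrative "visual chunk" signature: indentation depth plus a crude
# shape tag per line. Two regions with different identifiers but the same
# perceptual layout produce the same signature.

def visual_signature(code):
    """Reduce a code region to (indent depth, shape) per non-blank line."""
    sig = []
    for line in code.splitlines():
        stripped = line.lstrip()
        if not stripped:
            continue
        depth = len(line) - len(stripped)
        shape = "loop" if stripped.startswith("for") else "stmt"
        sig.append((depth, shape))
    return tuple(sig)

a = """
for s in names:
    for ch in s:
        total += ord(ch)
"""
b = """
for x in words:
    for c in x:
        acc += ord(c)
"""
# Different names, same perceptual shape: the signatures match.
print(visual_signature(a) == visual_signature(b))  # True
```

The signature deliberately discards the names, matching the claim that a visual chunk can be recognized before any strong semantic association is formed.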

Iterative Comprehension The syntax/semantic model suggests that programmers use previously learned schemas (programming plans) to interpret text into semantic chunks in a hierarchical process. Alternatively, top-down theories explain that programmers parse code based on their current level of understanding and goals. Neither theory details the structures or mechanisms necessary for partial understanding of code, or explains how a programmer can maintain these intermediate representations of unfamiliar code when switching between multiple tasks, as observed in our recent experiment [40]. Opportunistic theories do not fare better: programmers do not necessarily, upon reaching a new understanding, revisit every previously encountered item to update its understanding; rather, a programmer must hold some form of intermediate representation in mind. Finally, a failing of all of these theories is their inability to identify exactly when learning occurs.

We also introduce the concept of iterative comprehension. With iterative comprehension, a programmer uses autoassociative memory of processed perceptual events to rapidly record many traces and facts about a program, even without having seen the code before. The programmer can draw upon numerous resources (familiarity, spatial, visual, auditory, autoassociative, and prospective memory, each involving distinct parts of the brain) that collectively allow the programmer to maintain partial representations when solving a problem.

For a new program, a programmer initially gathers numerous visual chunks when exploring the program. As the programmer learns more about the program, she iteratively updates previous visual chunks with knowledge of new events or relates them to top-down concepts and goals. The programmer can take advantage of previously learned schemas to provide strong associations with events and rapidly consolidate new facts. This explains how a programmer can retain memory of the semantic properties of code while also associating other visual and spatial properties.

Here, we have only provided a sketch of what iterative comprehension entails. However, we believe iterative comprehension may provide a more compelling account of how programmers manage programming knowledge, and can explain how programmers are able to explore and keep track of many items beyond traditional accounts of memory while having only partial knowledge of the code.

6 Remaining Issues and Future Questions

Several directions can be taken to move the ideas presented in this paper forward. For the most part, in our discussions of structures within the brain we have omitted detail on how structures differ based on location within the left or right hemispheres, also called lateralization of the brain. Extending models to include lateralization is necessary both for brain imaging studies and for understanding the dual but separate roles that a structure plays (such as differences in encoding and retrieval in the left and right hippocampus).

Computational architectures for cognitive models such as ACT-R [6] or SOAR [1] are steadily improving. Still, these models deal with relatively simple tasks. Recently, Altmann and Trafton’s work on memory for goals [4] has modified the ACT-R architecture to include the ability to model the effect of interruptions on goal memory. Situating our work within these models would provide the mutual benefit of validating these ideas while suggesting modifications to the computational architectures.

Finally, despite new advances in memory research, numerous unresolved issues remain. The biological mechanisms for forming memory are still not fully understood: One striking observation has been that spatial memory appears to use processes distinct from the normal associative processing occurring in the hippocampus. We do not yet understand the impact this has on the encoding and consolidation of spatial memories. A strength of memory research has been the various lines of evidence used to investigate memory. But this is also a weakness: Some findings have only been established in animal studies, and may not hold in the same manner for humans. Further, these approaches have been effective at finding dissociations between brain regions and memory types, but not at explaining how these regions coordinate and what information they carry. Finally, much care must be taken when using the results of fMRI studies; if not carefully guarded, poor statistical designs can allow overly broad interpretations.

7 Conclusion

Nearly 40 years have passed since some of the earliest cognitive models of programmers were proposed. Both the programming landscape and our understanding of the human brain have dramatically changed. Unfortunately, in the time since, the impact on practicing programmers has been negligible, the predictive power nearly non-existent, and our understanding of the mind furthered little beyond common sense.

In this paper, we have outlined the background, tools, concepts, and vocabulary for a challenging but hopefully rewarding trek forward. By understanding how a programmer manages task memory, especially in the context of multitasking and interruptions, we can begin to unravel this mystery.

References

1. Newell A. Unified Theories of Cognition. Harvard University Press, Cambridge, MA, 1990.
2. Wickliffe C. Abraham. How long will long-term potentiation last? Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 358(1432):735–744, April 2003.
3. Wickliffe C. Abraham, Barbara Logan, Jeffrey M. Greenwood, and Michael Dragunow. Induction and experience-dependent consolidation of stable long-term potentiation lasting months in the hippocampus. J. Neurosci., 22(21):9626–9634, 2002.
4. E. M. Altmann and J. G. Trafton. Memory for goals: An activation-based model. Cognitive Science, 26:39–83, 2002.
5. J. R. Anderson, Michael D. Byrne, Scott Douglass, Christian Lebiere, and Yulin Qin. An integrated theory of the mind. Psychological Review, 111(4):1036–1050, 2004.
6. John R. Anderson and Christian Lebiere. The Atomic Components of Thought. Erlbaum, Mahwah, NJ, June 1998.
7. R. C. Atkinson and R. M. Shiffrin. The psychology of learning and motivation (Volume 2), chapter Human memory: A proposed system and its control processes, pages 89–195. Academic Press, 1968.
8. A. D. Baddeley and G. Hitch. The psychology of learning and motivation: Advances in research and theory, chapter Working memory, pages 47–89. Academic Press, New York, 1974.
9. Moshe Bar and Elissa Aminoff. Cortical analysis of visual context. Neuron, 38(2):347–358, April 2003.
10. T. V. Bliss and T. Lomo. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. The Journal of Physiology, 232(2):331–356, July 1973.
11. Ruven Brooks. Towards a theory of the comprehension of computer programs. International Journal of Man-Machine Studies, 18:543–554, 1983.
12. N. Charness. Memory for chess positions: resistance to interference. Journal of Experimental Psychology: Human Learning and Memory, 2:641–653, 1976.
13. W. G. Chase and K. A. Ericsson. The psychology of learning and motivation, volume 16, chapter Skill and working memory, pages 1–58. Academic Press, New York, 1982.
14. W. G. Chase and H. A. Simon. Perception in chess. Cognitive Psychology, 4:55–81, 1973.
15. S. Corkin, D. G. Amaral, R. G. Gonzalez, K. A. Johnson, and B. T. Hyman. H. M.’s medial temporal lobe lesion: findings from magnetic resonance imaging. The Journal of Neuroscience, 17(10):3964–3979, May 1997.
16. N. Cowan. The magical number 4 in short-term memory: a reconsideration of mental storage capacity. The Behavioral and Brain Sciences, 24(1), February 2001.
17. Christopher Douce. The stores model of code cognition. In Programmer Psychology Interest Group, 2008.
18. L. L. Eldridge, B. J. Knowlton, C. S. Furmanski, S. Y. Bookheimer, and S. A. Engel. Remembering episodes: a selective role for the hippocampus during retrieval. Nature Neuroscience, 3(11):1149–1152, November 2000.
19. K. A. Ericsson and P. F. Delaney. Models of Working Memory: Mechanisms of Active Maintenance and Executive Control, chapter Long-term working memory as an alternative to capacity models of working memory in everyday skilled performance, pages 257–297. Cambridge University Press, Cambridge, UK, 1999.
20. K. A. Ericsson and W. Kintsch. Long-term working memory. Psychological Review, 102(2):211–245, 1995.
21. K. A. Ericsson and J. J. Staszewski. Complex information processing: The impact of Herbert A. Simon, chapter Skilled memory and expertise: Mechanisms of exceptional performance, pages 235–267. Lawrence Erlbaum, Hillsdale, NJ, 1989.
22. J. M. Fuster. The prefrontal cortex–an update: time is of the essence. Neuron, 30(2):319–333, May 2001.
23. J. M. Fuster and G. E. Alexander. Neuron activity related to short-term memory. Science, 173(997):652–654, August 1971.
24. Michael S. Gazzaniga, Richard B. Ivry, and George R. Mangun. Cognitive Neuroscience: The Biology of the Mind. Norton, 3rd edition, 2009.
25. E. L. Glisky, M. R. Polster, and B. C. Routhieaux. Double dissociation between item and source memory. Neuropsychology, 9:229–235, 1995.
26. D. O. Hebb. The Organization of Behavior. Wiley, New York, 1949.
27. Andrew J. Ko, Brad A. Myers, Michael J. Coblenz, and Htet Htet Aung. An exploratory study of how developers seek, relate, and collect relevant information during software maintenance tasks. IEEE Trans. Softw. Eng., 32(12):971–987, 2006.
28. David C. Littman, Jeannine Pinto, Stanley Letovsky, and Elliot Soloway. Mental models and software maintenance. In Papers presented at the first workshop on empirical studies of programmers, pages 80–98, Norwood, NJ, USA, 1986. Ablex Publishing Corp.
29. M. A. Lynch. Long-term potentiation and memory. Physiological Reviews, 84(1):87–136, January 2004.
30. E. A. Maguire, D. G. Gadian, I. S. Johnsrude, C. D. Good, J. Ashburner, R. S. Frackowiak, and C. D. Frith. Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences of the United States of America, 97(8):4398–4403, April 2000.
31. Andrew Mayes, Daniela Montaldi, and Ellen Migo. Associative memory and the medial temporal lobes. Trends in Cognitive Sciences, 11(3):126–135, March 2007.
32. M. A. McDaniel and G. O. Einstein. Strategic and automatic processes in prospective memory retrieval: A multiprocess framework. Applied Cognitive Psychology, 14:127–144, 2000.
33. E. K. Miller and J. D. Cohen. An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24(1):167–202, 2001.
34. E. K. Miller and T. J. Buschman. The Neuroscience of Rule-Guided Behavior, chapter Rules through recursion: How interactions between the frontal cortex and basal ganglia may build abstract, complex rules from concrete, simple ones, (in press). Oxford University Press, 2007.
35. G. A. Miller. The magical number seven, plus or minus two: some limits on our capacity for processing information. 1956. Psychological Review, 101(2):343–352, April 1994.
36. B. Milner, P. Corsi, and G. Leonard. Frontal-lobe contribution to recency judgements. Neuropsychologia, 29(6):601–618, 1991.
37. Daniela Montaldi, Tom J. Spencer, Neil Roberts, and Andrew R. Mayes. The neural system that mediates familiarity memory. Hippocampus, 16(5):504–520, 2006.
38. R. G. Morris and U. Frey. Hippocampal synaptic plasticity: role in spatial learning or the automatic recording of attended experience? Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 352(1360):1489–1503, 1997.
39. R. G. M. Morris. Elements of a neurobiological theory of hippocampal function: the role of synaptic plasticity, synaptic tagging and schemas. European Journal of Neuroscience, 23(11):2829–2846, 2006.
40. Chris Parnin and Robert DeLine. Evaluating cues for resuming interrupted programming tasks. In CHI ’10: Proceedings of the 28th international conference on Human factors in computing systems, pages 93–102, New York, NY, USA, 2010. ACM.
41. Chris Parnin and Carsten Gorg. Building usage contexts during program comprehension. In ICPC ’06: Proceedings of the 14th IEEE International Conference on Program Comprehension, pages 13–22, 2006.
42. Nancy Pennington. Stimulus structures and mental representation in expert comprehension of computer programs. Cognitive Psychology, 19:295–341, 1987.
43. Peter Pirolli and Stuart K. Card. Information foraging. Psychological Review, 106:643–675, 1999.
44. C. J. Ploner, B. M. Gaymard, S. Rivaud-Pechoux, M. Baulac, S. Clemenceau, S. Samson, and C. Pierrot-Deseilligny. Lesions affecting the parahippocampal cortex yield spatial memory deficits in humans. Cerebral Cortex, 10(12):1211–1216, December 2000.
45. B. R. Postle and S. Corkin. Impaired word-stem completion priming but intact perceptual identification priming with novel words: evidence from the amnesic patient H.M. Neuropsychologia, 36(5):421–440, May 1998.
46. Jeremy R. Reynolds, Robert West, and Todd Braver. Distinct neural circuits support transient and sustained processes in prospective memory and working memory. Cerebral Cortex, 19(5):1208–1221, May 2009.
47. Izzet Safer and Gail C. Murphy. Comparing episodic and semantic interfaces for task boundary identification. In CASCON ’07: Proceedings of the 2007 conference of the center for advanced studies on Collaborative research, pages 229–243, New York, NY, USA, 2007. ACM.
48. W. B. Scoville and B. Milner. Loss of recent memory after bilateral hippocampal lesions. 1957. The Journal of Neuropsychiatry and Clinical Neurosciences, 12(1):103–113, 2000.
49. B. Shneiderman and R. Mayer. Syntactic semantic interactions in programmer behavior: a model and experimental results. International Journal of Computer and Information Sciences, 8(3):219–238, June 1979.
50. Ben Shneiderman. Software Psychology: Human Factors in Computer and Information Systems. Winthrop Publishers, 1980.
51. T. J. Shors, G. Miesegaes, A. Beylin, M. Zhao, T. Rydel, and E. Gould. Neurogenesis in the adult is involved in the formation of trace memories. Nature, 410(6826):372–376, March 2001.
52. Markus Siegel, Melissa R. Warden, and Earl K. Miller. Phase-dependent neuronal coding of objects in short-term memory. Proceedings of the National Academy of Sciences of the United States of America, 106(50):21341–21346, December 2009.
53. Janice Singer, Robert Elves, and Margaret-Anne Storey. NavTracks: Supporting navigation in software maintenance. In Proceedings of the 21st IEEE International Conference on Software Maintenance, pages 325–334, 2005.
54. Rebekah E. Smith. The cost of remembering to remember in event-based prospective memory: investigating the capacity demands of delayed intention performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(3):347–361, May 2003.
55. H. J. Spiers, E. A. Maguire, and N. Burgess. Hippocampal amnesia. Neurocase, 7:352–382, 2001.
56. Larry R. Squire. Memory systems of the brain: a brief history and current perspective. Neurobiology of Learning and Memory, 82(3):171–177, November 2004.
57. M.-A. D. Storey, F. D. Fracchia, and H. A. Muller. Cognitive design elements to support the construction of a mental model during software exploration. J. Syst. Softw., 44(3):171–185, 1999.
58. J. Gregory Trafton, Erik M. Altmann, Derek P. Brock, and Farilee E. Mintz. Preparing to resume an interrupted task: effects of prospective goal encoding and retrospective rehearsal. International Journal of Human-Computer Studies, 58:583–603, 2003.
59. E. Tulving. Organization of memory, chapter Episodic and semantic memory, pages 381–403. Academic Press, New York, 1972.
60. E. Tulving. The Cognitive Neurosciences, chapter Organization of memory: Quo vadis?, pages 839–847. MIT Press, Cambridge, MA, 1995.
61. E. Tulving and D. M. Thomson. Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80:352–373, 1973.
62. A. Villringer. Functional MRI, chapter Physiological Changes During Brain Activation, pages 3–13. Springer, 2000.
63. A. von Mayrhauser and A. M. Vans. From code understanding needs to reverse engineering tools capabilities. In CASE ’93: The Sixth International Conference on Computer-Aided Software Engineering, pages 230–239, July 1993.
64. E. Winograd. Practical Aspects of Memory: Current Research and Issues, volume 2, chapter Some observations on prospective remembering, pages 348–353. Wiley, Chichester, 1988.