PSY 368 Human Memory
Semantic Memory
Announcements
• Due date changes:
  • Data from Experiment 3 due April 9 (Mon, 1 week from today)
  • Experiment 3 Report due April 16 (2 weeks from today)
• Interaction of Episodic and Semantic Memory (Exp 3) (Download detailed instructions from Blackboard)
• Modification of Anderson, Bjork, & Bjork (1994)
  • (see Blackboard Media Library Optional Readings to download a pdf of this paper if you want to read more)
• Question: Can the retrieval of some items impact the retrieval of others?
  • e.g., Suppose that you are studying for a test. You decide to study half the material. Does studying half the material have an impact on the half that you didn’t study?
Experiment 3
• Interaction of Episodic and Semantic Memory (Exp 3) (Download detailed instructions from Blackboard)
• Stimuli: 4 categories
  • Drinks, Weapons, Fish, Fruits
  • Six exemplars from each category
  • Write out category and exemplar on index cards
Experiment 3
• The full list of 24 items is in the detailed instructions
• Subjects: find 3 willing participants
• Interaction of Episodic and Semantic Memory (Exp 3) (Download detailed instructions from Blackboard)
• Procedure: 4 phases
  • Study phase: subjects will study all categories and exemplars
  • Shuffle all of the cards, read Study phase 1 instructions, present each card to the subject for 3 seconds in random order
Experiment 3
• Interaction of Episodic and Semantic Memory (Exp 3) (Download detailed instructions from Blackboard)
• Procedure: 4 phases
  • Practice phase: subjects will attempt to remember some of the studied items (half from 2 of the categories) by coming up with exemplars from cues (category and first letter)
  • Give the practice phase recall sheet to the subject, read the practice phase instructions, give the subject the category and first letter (see ordered list in detailed instructions), and allow 15 secs to practice each item before moving to the next
  • “drinks – v”, “weapons – s”, “drinks – r”, “weapons – r”, “drinks – g”, “weapons – t”
Experiment 3
• Interaction of Episodic and Semantic Memory (Exp 3) (Download detailed instructions from Blackboard)
• Procedure: 4 phases
  • Distractor phase: complete a city generation task
  • Read the distractor phase instructions, give the distractor US Cities Task sheet
Experiment 3
• Interaction of Episodic and Semantic Memory (Exp 3) (Download detailed instructions from Blackboard)
• Procedure: 4 phases
  • Test phase: free recall of all studied items (by category)
  • Read the test phase instructions, give the recall test response sheets (1 for each of the 4 categories)
  • Give 30 seconds for recall for each category
Experiment 3
• Interaction of Episodic and Semantic Memory (Exp 3) (Download detailed instructions from Blackboard)
• Scoring (Subject #1 worked example; see the sketch below):

  Item set        # recalled   % recalled
  Practiced       3            3/6 = 50%
  Non-practiced   3            3/6 = 50%
  Control         6            6/12 = 50%

• Practice cues: “drinks – v”, “weapons – s”, “drinks – r”, “weapons – r”, “drinks – g”, “weapons – t”
• Don’t count intrusions such as “beer” – it is not on the list
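As a quick check on the arithmetic above, here is a minimal Python sketch (the denominators 6, 6, and 12 come from the design; the recalled counts are just the Subject #1 example numbers):

# Minimal sketch: compute recall percentages for one subject (Exp 3 scoring).
# The denominators (6 practiced, 6 non-practiced, 12 control) follow the
# design above; the recalled counts are the Subject #1 example numbers.

def percent_recalled(n_recalled: int, n_on_list: int) -> float:
    """Return the percentage of list items recalled."""
    return 100.0 * n_recalled / n_on_list

subject_1 = {
    "practiced": (3, 6),
    "non_practiced": (3, 6),
    "control": (6, 12),
}

for item_set, (recalled, total) in subject_1.items():
    print(f"{item_set}: {recalled}/{total} = {percent_recalled(recalled, total):.0f}%")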
Semantic Memory
• What is knowledge?
• Semantic
  • Facts & Knowledge
• Focus is on how this information is organized (rather than on encoding things into semantic memory)
“When Lisa was on her way back from the shop with the balloon, she fell and the balloon floated away”
- Lisa is a child
- She bought the balloon in the shop
- She got a fright and was hurt
- The balloon was on a string
- This was not the outcome she wanted…
• Semantic memory is made up of concepts
• How are these nuggets of knowledge accessed, stored and manipulated?
• Systems view
  • Subsystem of the Declarative system
Semantic Memory
• Semantic
  • Facts & Knowledge
• Focus is on how this information is organized (rather than on encoding things into semantic memory)
Semantic vs. Episodic Memory

Semantic Memory:
• Declarative memory for general knowledge and facts lacking reference to the episodic context in which it was learned.
• Examples: world knowledge, vocabulary, rules, formulae, and algorithms
• “Knowing awareness”

Episodic Memory:
• Memory for specific events in context
• Comes with a sense of reliving the event
  • Called conscious recollection or “mental time-travel”
• “Self-knowing”
Semantic vs. Episodic Memory
• Are they really distinct?
  • Evidence from Neuropsychological Dissociations
• While all anterograde amnesics have profound deficits in episodic memory, most have only minor (if any) semantic impairments:
  • Spiers, Maguire, and Burgess (2001) reviewed 147 cases.
  • Vargha-Khadem’s (1997) patients, Jon and Beth (impaired as children but developed normal semantic memories).
Semantic vs. Episodic Memory
• Are they really distinct?
  • Evidence from Neuropsychological Dissociations
• Patients with retrograde amnesia often have a selective deficit in either episodic or semantic memory
  • Episodic impairment with spared semantic memory:
    • Tulving’s (2002) patient, KC (intact pre-trauma semantic memory)
  • Semantic impairment with spared pre-trauma episodic memory:
    • Yasuda, Watanabe, and Ono’s (1997) patient
Semantic vs. Episodic Memory
• Are they really distinct?
  • Evidence from Neuroimaging Dissociations
• Different brain areas are activated for semantic and episodic memory tasks (Wheeler et al., 1997)
  • During memory encoding:
    • More left prefrontal cortical activity for episodic tasks than semantic.
  • During memory retrieval:
    • More right prefrontal cortical activity during episodic memory retrieval than semantic.
• This also suggests that episodic and semantic memory are different types of memory.
• How is semantic information stored/organized?
  • Network models
    • Propositions
    • Hierarchical networks
    • Spreading activation
  • Exemplar and prototype models
  • List models
    • Smith’s Feature Overlap model
  • Compound Cue models
  • Scripts and Schemas
• How does the organization impact memory behavior?
Models of Semantic Memory
• Representing meaning
  • Proposition = verifiable statement (T/F)
  • Two or more concepts with a relationship between them
  • A mouse bit a cat → bit (mouse, cat)
Propositions
• Representing meaning
  • Proposition = verifiable statement (T/F)
  • Two or more concepts with a relationship between them
• Networks: propositions can be represented as connected nodes
  • Concept nodes – arguments
    • Times, places, people, objects, etc.
  • Linked by relations
    • Verbs, adjectives, etc.
[Diagram: “mouse” is linked as agent, and “cat” as patient, to the relation node “bit”]
Propositions
Propositions
• More complex example:
  • Children who are slow eat bread that is cold
    • Slow children
    • Children eat bread
    • Bread is cold
[Diagram: three linked propositions sharing nodes – relation “Slow” with subject “Children”; relation “Eat” with subject “Children”, object “Bread”, and time “Past”; relation “Cold” with subject “Bread”]
Propositions
• Memory is better for sentences with fewer propositions (Kintsch, 1974)

  Two propositions: “The horse stumbled and broke a leg”
    • horse stumbled
    • horse broke leg

  Three propositions: “The crowded passengers squirmed uncomfortably”
    • passengers crowded
    • passengers squirmed
    • passengers uncomfortable

  (see the sketch below)
Propositions
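A proposition can be written as a relation applied to its arguments, as in bit(mouse, cat) above. A minimal Python sketch (the sentences and their propositional breakdowns are the slide’s examples; representing them as tuples is just an illustration):

# Minimal sketch: represent each proposition as (relation, arguments...) and
# compare the propositional load of the two example sentences (Kintsch, 1974).

sentences = {
    "The horse stumbled and broke a leg": [
        ("stumbled", "horse"),
        ("broke", "horse", "leg"),
    ],
    "The crowded passengers squirmed uncomfortably": [
        ("crowded", "passengers"),
        ("squirmed", "passengers"),
        ("uncomfortable", "passengers"),
    ],
}

for sentence, propositions in sentences.items():
    print(f"{len(propositions)} propositions: {sentence!r}")
    for relation, *arguments in propositions:
        print(f"  {relation}({', '.join(arguments)})")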
• Bransford & Franks (1971)
  • Constructed four-fact sentences and broke them down into smaller sentences:
    • 4 - The ants in the kitchen ate the sweet jelly that was on the table.
    • 3 - The ants in the kitchen ate the sweet jelly.
    • 2 - The ants in the kitchen ate the jelly.
    • 1 - The jelly was sweet.
Propositions
• Bransford & Franks (1971)
  • Study: heard 1-, 2-, and 3-fact sentences only
  • Test: heard 1-, 2-, 3-, and 4-fact sentences (most of which were never presented)
Propositions
• Bransford & Franks (1971)
  • Results: the more facts in a test sentence, the more likely Ss were to judge it as “old”, and with higher confidence
    • Even if they hadn’t actually seen the sentence
  • Constructive Model: we integrate info from individual sentences in order to construct larger ideas
    • Emphasizes the active nature of our cognitive processes
So how might we organize this information in memory?
Hierarchical Network
• Collins and Quillian Hierarchical Network model (1969)
  • Lexical entries stored in a hierarchy, linked by “IS A” relations
  • Semantic features attached to each lexical entry
[Diagram: Animal (has skin, can move around, breathes) IS A-linked to Bird (has feathers, can fly, has wings) and Fish (has fins, can swim, has gills)]
• Representation permits cognitive economy
  • Reduce redundancy of semantic features
• Testing the model
  • Semantic verification task
    • “An A is a B” – True/False (e.g., “An apple has teeth”)
  • Use time on verification tasks to map out the structure of the lexicon.
Hierarchical Network
• Testing the model (see the sketch below)
[Diagram: Animal (has skin, can move around, breathes) → Bird (has feathers, can fly, has wings) → Robin (eats worms, has a red breast)]

  Sentence               Verification time
  Robins eat worms       1310 msecs
  Robins have feathers   1380 msecs
  Robins have skin       1470 msecs

• Participants do an intersection search: the more levels of the hierarchy that must be traversed to link the two concepts, the longer verification takes
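To make the intersection-search idea concrete, here is a minimal Python sketch. The hierarchy is a toy encoding of the network above, not Collins & Quillian’s implementation; the prediction is simply that verification slows as more levels must be searched.

# Minimal sketch of a Collins & Quillian-style hierarchy (toy encoding of the
# network in the slide, not the original model).  Each node stores its parent
# ("IS A" link) and the features attached at that level.

HIERARCHY = {
    "animal": {"parent": None,     "features": {"has skin", "can move around", "breathes"}},
    "bird":   {"parent": "animal", "features": {"has feathers", "can fly", "has wings"}},
    "robin":  {"parent": "bird",   "features": {"eats worms", "has a red breast"}},
}

def levels_to_verify(concept: str, feature: str) -> int:
    """Count how many levels up the hierarchy we must search to find a feature."""
    level = 0
    node = concept
    while node is not None:
        if feature in HIERARCHY[node]["features"]:
            return level
        node = HIERARCHY[node]["parent"]
        level += 1
    raise ValueError(f"{feature!r} not stored anywhere above {concept!r}")

# Predicted ordering matches the verification times in the table above:
for feature in ("eats worms", "has feathers", "has skin"):
    print(f"Robins {feature}: {levels_to_verify('robin', feature)} level(s) traversed")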
Hierarchical Network
• Problems with the model
  • Difficulty representing some relationships
    • How are “truth”, “justice”, and “law” related?
  • No prediction about false sentences
    • “A whale is a fish” vs. “A horse is a fish”
    • Neither whale nor horse is a fish (whale is a mammal), but people are faster to reject horse than whale
Hierarchical Network
• Problems with the model
  • Conrad (1972): the effect may be due to frequency of association
    • For most relationships, organization and conjoint frequency are confounded
    • Subjects generated properties for concepts – the properties weren’t generated according to the levels predictions (e.g., “breathes” was generated for horse, not just for animal)
    • Also had subjects verify statements – verification was faster based on frequency, not level
      • “A robin breathes” is less frequent than “A robin eats worms”
Hierarchical Network
• Problems with the model
  • Smith, Shoben & Rips (1974) showed that there are hierarchies where more distant categories can be faster to categorize than closer ones
    • “A chicken is a bird” was slower to verify than “A chicken is an animal”
[Diagram: Animal → Bird (has feathers, can fly, has wings) → Chicken (lays eggs, clucks)]
Hierarchical Network
• Problems with the model
  • Assumption that all lexical entries at the same level are equal
  • The Typicality Effect (e.g., Katz, 1981)
    • Which is a more typical bird? Ostrich or Robin?
Hierarchical Network
[Diagram: Animal (has skin, can move around, breathes) → Bird (has feathers, can fly, has wings) and Fish (has fins, can swim, has gills); Bird → Robin (eats worms, has a red breast) and Ostrich (has long legs, is fast, can’t fly)]
• Verification times: “a robin is a bird” is faster than “an ostrich is a bird”
• Yet Robin and Ostrich occupy the same relationship with Bird.
Spreading Activation Models
Collins & Loftus (1975)
• Spreading activation
  • Most popular model
  • Recognizes diversity of information in a semantic network
  • Captures complexity of our semantic representation (at least some of it)
  • Consistent with Collins & Quillian’s (1969) results
  • Consistent with results from priming studies
• Spreading activation:
  • Bring back the network model, but make some modifications:
    • The length of the link matters.
      • The less related two concepts are, the longer the link. This captures typicality effects (put CHICKEN farther from BIRD than ROBIN).
    • Search is a process called spreading activation.
      • Activate the two nodes involved in a question and spread that activation along the links. The farther it goes, the weaker it gets. When the two spreading activations intersect, you can decide on the answer to the question.
  • This model gets around a lot of the problems with the earlier network model.
Spreading Activation Models
Collins & Loftus (1975)
Spreading Activation Models
Collins & Loftus (1975)
[Diagram: a web of interconnected nodes (e.g., street, car, bus, vehicle, truck, fire engine, fire, house, red, orange, blue, roses, tulips, flowers, apple, pear, fruit), with shorter links between more closely related concepts]
• Words represented in the lexicon as a network of relationships
• Organization is a web of interconnected nodes in which connections can represent:
  • categorical relations
  • degree of association
  • typicality
Spreading Activation Models
Collins & Loftus (1975)
[Same network diagram as above]
• Retrieval of information: spreading activation
  • Limited amount of activation to spread
  • Verification times depend on the closeness of the two concepts in the network
• Semantic priming
  • A semantically-related word facilitates the processing/identification of a target word
    • e.g., it is faster to say “BUTTER” is a real word if preceded by “BREAD” than by an unrelated word like “NURSE” (Meyer & Schvaneveldt, 1976).
  • In the model: priming is accounted for by the spreading of activation between related concepts (see the sketch below).
Spreading Activation Models
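A minimal Python sketch of the spreading-activation idea. The toy network and the decay rule are illustrative choices, not the Collins & Loftus (1975) implementation: activation spreads outward from a prime and weakens at each link, so a related target starts with residual activation and is verified faster.

# Minimal sketch of spreading activation over a toy semantic network.
# The graph and the decay factor are illustrative, not parameters from
# Collins & Loftus (1975).
from collections import deque

NETWORK = {
    "bread":  ["butter", "food"],
    "butter": ["bread", "food"],
    "food":   ["bread", "butter", "fruit"],
    "fruit":  ["food", "apple"],
    "apple":  ["fruit", "red"],
    "red":    ["apple", "fire engine"],
    "fire engine": ["red"],
    "nurse":  ["doctor", "hospital"],
    "doctor": ["nurse", "hospital"],
    "hospital": ["nurse", "doctor"],
}

def spread(source: str, decay: float = 0.5) -> dict:
    """Breadth-first spread: activation weakens by `decay` at each link."""
    activation = {source: 1.0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in NETWORK.get(node, []):
            new_activation = activation[node] * decay
            if new_activation > activation.get(neighbour, 0.0):
                activation[neighbour] = new_activation
                queue.append(neighbour)
    return activation

# Priming: BUTTER receives much more residual activation from BREAD than from NURSE.
print(spread("bread").get("butter", 0.0))   # high   -> fast lexical decision
print(spread("nurse").get("butter", 0.0))   # ~zero  -> slower lexical decision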
Semantic Feature Lists
• Decomposing concepts into smaller semantic attributes/primitives

  Feature   “father”   “mother”   “daughter”   “son”
  Human     +          +          +            +
  Older     +          +          -            -
  Female    -          +          +            -

• Perhaps there is a set of necessary and sufficient features
  • Necessary – features that have to be present for inclusion
  • Sufficient – if these features are present, no other features are necessary for inclusion
“John is a bachelor.”
• What does bachelor mean?
• What if John:
  • is married?
  • is divorced?
  • has lived with the mother of his children for 10 years but they aren’t married?
  • has lived with his partner Joe for 10 years?
• Suggests that there probably is no set of necessary and sufficient features that make up word meaning
  • (other classic examples: “game”, “chair”)
Semantic Feature Lists
Feature Overlap Model
• Smith’s Feature Overlap Model
  • Used lists of characteristics instead of a network
  • Concepts are clusters of semantic features. There are two kinds:
    • Distinctive features: core parts of the concept. They must be present to be a member of the concept; they’re the defining features. For example, WINGS for BIRD.
    • Characteristic features: typically associated with the concept, but not necessary. For example, CAN FLY for BIRD.
  • These features are stored in a redundant manner
  • The decision of whether one concept is an example of another depends upon the level of overlap
Feature Overlap Model
• Smith’s Feature Overlap Model
  • Some examples:

                    BIRD                     MAMMAL
    Distinctive     Wings, Feathers, …       Nurses young, Warm-blooded, Live birth, …
    Characteristic  Flies, Small, …          Four legs, …

                    ROBIN                    WHALE
    Distinctive     Wings, Feathers, …       Swims, Live birth, Nurses young, …
    Characteristic  Red breast               Large
Feature Overlap Model
• Smith’s Feature Overlap Model
  • Why characteristic features? Various evidence, such as hedges:
    • OK:
      • A robin is a true bird.
      • Technically speaking, a chicken is a bird.
    • Feels wrong:
      • Technically speaking, a robin is a bird.
      • A chicken is a true bird.
  • The answer depends on the kinds of feature overlap.
Feature Overlap Model
• Smith’s Feature Overlap Model
  • Similar concepts are stored together
  • Answering a semantic verification question is a two-step process:
    • Step 1: compare on all features. If there is a lot of overlap, it’s an easy “yes.” If there is almost no overlap, it’s an easy “no.” In the middle, go to step two.
    • Step 2: compare distinctive features only. This involves an extra stage and should take longer.
• Smith’s Feature Overlap Model

  Easy “yes”: A robin is a bird
  Easy “no”:  A robin is a fish
  Hard “yes”: A whale is a mammal
  Hard “no”:  A whale is a fish
• The model can account for (see the sketch below):
  • Typicality effects: one step for more typical members, two steps for less typical members; that explains the time difference.
  • Answering “no”: why are “no” responses different? It depends on the number of steps (feature overlap).
  • Hierarchy effects: since the organization isn’t a hierarchy but similarity, we can understand why different types of decisions take different amounts of time.
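Here is a minimal Python sketch of the two-stage feature comparison. The feature lists and the high/low overlap thresholds are illustrative choices, not the model’s published parameters; the point is only that the four example sentences above fall out as one-stage or two-stage decisions.

# Minimal sketch of a two-stage feature-comparison decision (feature lists and
# thresholds are illustrative, not the model's published parameters).

CONCEPTS = {
    "robin":  {"distinctive": {"wings", "feathers"},
               "characteristic": {"flies", "small", "red breast"}},
    "bird":   {"distinctive": {"wings", "feathers"},
               "characteristic": {"flies", "small"}},
    "whale":  {"distinctive": {"nurses young", "live birth", "warm-blooded"},
               "characteristic": {"swims", "lives in water", "large"}},
    "mammal": {"distinctive": {"nurses young", "live birth", "warm-blooded"},
               "characteristic": {"four legs"}},
    "fish":   {"distinctive": {"gills", "fins", "scales"},
               "characteristic": {"swims", "lives in water", "small"}},
}

def all_features(name):
    concept = CONCEPTS[name]
    return concept["distinctive"] | concept["characteristic"]

def verify(member, category, high=0.6, low=0.15):
    """Stage 1: overall feature overlap; Stage 2 (only if needed): distinctive features."""
    a, b = all_features(member), all_features(category)
    overlap = len(a & b) / len(a | b)
    if overlap >= high:
        return True, 1                 # easy "yes"
    if overlap <= low:
        return False, 1                # easy "no"
    # Stage 2: does the member carry all of the category's distinctive features?
    return CONCEPTS[category]["distinctive"] <= a, 2

for member, category in [("robin", "bird"), ("robin", "fish"),
                         ("whale", "mammal"), ("whale", "fish")]:
    answer, stages = verify(member, category)
    print(f"A {member} is a {category}: {answer} ({stages} stage(s))")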
• Criticisms:
  • No objective way to distinguish defining and characteristic features
  • Many items in a category do not share a defining feature
    • Furniture – do all items share a defining feature? Games?
    • How many of the features of a bird can you lose and still have a bird?
  • Because features are all that’s important in the model, forward and backward associations should be the same
    • Forward vs. backward associations: when asked to do a word association task, people say “insect” for the concept “butterfly”, but rarely say “butterfly” as an example of an “insect”
Feature Overlap Model
Comparing the Models
• The spreading activation model is more flexible than the hierarchical network model.
  • Pros of flexibility:
    • The spreading activation model can account for more empirical findings.
  • Cons of flexibility:
    • The flexibility also reduces the specificity of the model’s predictions, making the spreading activation model more difficult to test.
Semantics as Prototypes
• Prototype theory: store feature information with the most “prototypical” instance (Eleanor Rosch, 1975)
• Rate on a scale of 1 to 7 whether these are good examples of the category Furniture
[Figure: typicality ranking of furniture exemplars, e.g., 1) chair, 1) sofa, 2) couch, 3) table, … 12) desk, 13) bed, … 42) TV, 54) refrigerator]
Semantics as Prototypes
• Prototype theory: store feature information with the most “prototypical” instance (Eleanor Rosch, 1975)
• Prototypes:
  • Some members of a category are better instances of the category than others
    • Fruit: apple vs. pomegranate
  • What makes a prototype?
    • Possibly an abstraction of exemplars (see the sketch below)
    • More central semantic features
    • What type of dog is a prototypical dog? What are its features?
  • We are faster at retrieving prototypes of a category than other members of the category
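One simple way to think about “an abstraction of exemplars” is to average exemplar feature vectors and treat similarity to that average as typicality. A minimal Python sketch (the feature coding for the furniture exemplars is a toy illustration, not Rosch’s stimuli):

# Minimal sketch: a prototype as the average of exemplar feature vectors, with
# typicality measured as similarity to that prototype.  The feature coding is
# an illustrative toy example.
import math

# Features: [has legs, has flat surface, used for sitting, found in living room]
EXEMPLARS = {
    "chair":        [1, 0, 1, 1],
    "couch":        [1, 0, 1, 1],
    "table":        [1, 1, 0, 1],
    "desk":         [1, 1, 0, 0],
    "bed":          [1, 1, 0, 0],
    "refrigerator": [0, 0, 0, 0],
}

def prototype(vectors):
    """Average the exemplar vectors feature by feature."""
    return [sum(values) / len(values) for values in zip(*vectors)]

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

furniture_prototype = prototype(EXEMPLARS.values())
for name, vector in EXEMPLARS.items():
    print(f"{name}: typicality = {similarity(vector, furniture_prototype):.2f}")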
Semantics as Prototypes
• The main criticism of the theory
  • The model fails to provide a rich enough representation of conceptual knowledge
    • How can we think logically if our concepts are so vague?
    • Why do we have concepts which incorporate objects that are clearly dissimilar, and exclude others that are apparently similar (e.g., mammals)?
    • How do our concepts manage to be flexible and adaptive, if they are fixed to the similarity structure of the world?
      • Features have different importance in different contexts
      • What determines the feature weights?
    • If each of us represents the prototype differently, how can we identify when we have the same concept, as opposed to two different concepts with the same label?
Semantics as Exemplars
• Instance theory: each concept is represented as examples of previous experience (e.g., Medin & Schaffer, 1978)
  • Make comparisons to stored instances
  • Typically have a probabilistic component
    • Which instance gets retrieved for comparison
• Info stored with context
• To retrieve info, cues are used to match with stored contexts
• Can also account for episodic memory
• SAM, MINERVA 2, TODAM
• Math models that predict sets of results based on strength of cue associations
• Also popular models among researchers
Compound Cue Models
• A compound-cue model must be combined with a theory of memory
  • Make predictions about performance in memory retrieval tasks
• In SAM (search of associative memory), there is a matrix of associations among cues and memory traces, which are called images
  • Cues are assembled in a short-term store, or probe set, which is matched against all items in memory (see the sketch below)
• In TODAM (theory of distributed associative memory), to-be-remembered items are represented as vectors of features
  • Sum of vectors, convolution
  • The resulting scalar can be mapped onto familiarity and, in turn, onto response time and accuracy
• Examine mechanisms of priming and the extent to which they explain priming effects
Compound Cue Models
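A rough Python sketch of the SAM-style compound-cue idea (the association strengths are toy numbers I made up): the familiarity of a cue pair is computed against all stored images at once, so a related prime raises the familiarity of the compound and speeds the response.

# Rough sketch of a SAM-style compound cue: familiarity of a cue pair is the
# sum, over all stored images, of the product of each cue's association
# strength to that image.  The strengths below are toy numbers.

ASSOCIATIONS = {          # strength of each cue to each stored image
    "bread":  {"bread_img": 0.8, "butter_img": 0.5, "nurse_img": 0.1},
    "butter": {"bread_img": 0.5, "butter_img": 0.8, "nurse_img": 0.1},
    "nurse":  {"bread_img": 0.1, "butter_img": 0.1, "nurse_img": 0.8},
}
IMAGES = ["bread_img", "butter_img", "nurse_img"]

def familiarity(cue_a: str, cue_b: str) -> float:
    """Compound-cue familiarity: product of strengths, summed over images."""
    return sum(ASSOCIATIONS[cue_a][img] * ASSOCIATIONS[cue_b][img] for img in IMAGES)

# A related prime yields higher compound familiarity (-> faster response)
print(familiarity("bread", "butter"))   # related prime + target
print(familiarity("nurse", "butter"))   # unrelated prime + target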
• Scripts and schemas (Bartlett, Schank):
  • Knowledge is packaged in integrated conceptual structures.
    • Scripts: typical action sequences (e.g., going to the restaurant, going to the doctor…)
    • Schemas: organized knowledge structures (e.g., your knowledge of cognitive psychology).
  • It would be possible to describe these with nodes and links.
    • For example, a schema could be a sub-network related to a particular area.
Schema Theory
• Restaurant Schema
  • Enter - seated by maître d’
  • Read menu - order from waiter
  • Waiter brings food
  • Waiter brings check
  • Pay check - leave
Schema Theory
• 73% of respondents reported these common events when going to a restaurant:
  • Sit down
  • Look at menu
  • Order
  • Eat
  • Pay bill
  • Leave
• 48% also included:
  • Enter restaurant
  • Give reservation name
  • Order drinks
  • Discuss menu
  • Talk
  • Eat appetizer
  • Order dessert
  • Eat dessert
  • Leave a tip
Bower, Black, and Turner (1979)
• Bartlett (1932)
  • Read unfamiliar story
  • Remembered differently depending on expectation
Schema Theory
• Scripts and schemas (Bartlett, Schank):
  • Evidence: when people see stories like this:
• Chief Resident Jones adjusted his face mask while anxiously surveying a pale figure secured to the long gleaming table before him. One swift stroke of his small, sharp instrument and a thin red line appeared. Then an eager young assistant carefully extended the opening as another aide pushed aside glistening surface fat so that vital parts were laid bare. Everyone present stared in horror at the ugly growth too large for removal. He now knew it was pointless to continue.
Schema Theory
• Scripts and schemas (Bartlett, Schank):
  • And you ask them to recognize words that might have been part of the story, they tend to recognize material that is script- or schema-typical even if it wasn’t presented. Let’s try:
    • Scalpel?
    • Assistant?
    • Nurse?
    • Doctor?
    • Operation?
    • Hospital?
Schema Theory
• Scripts and schemas (Bartlett, Schank):
  • People also tend to fill in missing details from scripts and schemas if they are not provided (as long as those parts are typical).
  • When people are told the appropriate script or schema before hearing some material, they tend to understand it better than if they are not told it at all or are told it only after the material.
Schema Theory
Rocky slowly got up from the mat, planning his escape. He hesitated a moment and thought. Things were not going well. What bothered him most was being held, especially since the charge against him had been weak. He considered his present situation. The lock that held him was strong but he thought he could break it. He knew, however, that his timing would have to be perfect. Rocky was aware that it was because of his early roughness that he had been penalized so severely - much too severely from his point of view. The situation was becoming frustrating; the pressure had been grinding on him for too long. He was being ridden unmercifully. Rocky was getting angry now. He felt he was ready to make his move. He knew that his success or failure would depend on what he did in the next few seconds.
Prison? Wrestling? Other?
Every Saturday night, four good friends get together. When Jerry, Mike, and Pat arrived, Karen had just finished writing some notes. She quickly arranged the cards and stood up to greet her friends at the door. They followed her into the living room and sat down facing each other. They began to play. Karen's recorder filled the room with soft and pleasant music. Her hand flashed in front of everyone's eyes and they all noticed her diamonds. They continued for many hours until everyone was exhausted and quite silly. Jerry made his friends laugh as he theatrically took a bow, entertaining them all with the wildness of his playing. Finally, Karen's friends went home.
Playing music? Playing cards? Other?
Summary of Semantic Memory
• Semantic memory = knowledge
• Some evidence for a separate system
• Early models suggested hierarchical network - cognitive economy
• Results suggest no strict hierarchy or cognitive economy
• But current network models suggest loosened hierarchy (spreading activation)
• Other ideas: schemas, compound cues
If memory for speech is episodic, what are linguistic symbols?
Reply: “Maybe linguistic symbols (words, phonemes, etc.) are like prototypes.”
1. Many categories have a prototype, an ideal or central (centroid) token that best represents the category (Rosch, 1978). Prototypical members of a category come to mind faster, are recognized more quickly, etc.
2. Categories that are more abstract have fewer features than concrete ones.
   Granny Smith apple > apple > fruit
   Fluffy > tabby cat > housecat > cat > pet
   Bob saying ‘tomato’ > English word ‘tomato’
HOWEVER, mathematical models of memory exhibit the behaviors that support prototypes and abstractions, but do so by storing rich detail and “computing abstractions and prototypes” whenever needed.
Minerva 2: Storing Episodes
Let’s look closer at a specific model.
Minerva 2 Model (Doug Hintzman, 1986): every episode or exemplar is stored as a trace – a long vector of features – added to memory. For words, the features represent many kinds of information. The features can only be +1, -1, or 0 in Minerva 2.
Exemplar Memory: a matrix of feature vectors, one for each exemplar in the experiment.
Example trace: 1 0 1 0 -1 -1 1 1 1 0 -1 1 0 -1 1 1 -1 1 1 0 0 -1 1
(pronunciation features | orthographic features | semantic features | contextual features)
Minerva 2: Probing Memory
Each new episode is a probe into the memory matrix.
The similarity of the probe to all traces is computed.
Traces of the most similar episodes become highly active.
The memory response (or echo) can show greater or lesser activity overall (intensity) and a certain prominent pattern of activity (content).
Echo Intensity: stimulate memory with a probe. The more activation across features and traces, the greater the intensity of the response. So the more similar copies there are, the higher the familiarity of the probe.
Recognition Task: for a new/old recognition task, you set a threshold. If total intensity is above threshold, say ‘old’; if below, say ‘new’.
Prototypes: if the probe is an abstract category (e.g., fruit), the most intense traces are its prototypes.
Minerva 2: Echo Content
The probe activates a subset of traces. The common features across this set are computed. These features specify an abstract pattern similar to the probe but generic – a kind of abstract category for the probe.
The features not shared cancel out, leaving an abstract vector with fewer features – a prototype or schema or abstract object.
Thus hearing the word ‘tomato’ activates the prototype pronunciation and the abstracted meaning of ‘tomato’.
Our intuitions about abstract symbols – words, phonemes, etc. – may reflect integration of content across traces.
(A small computational sketch of the probe/echo cycle follows.)
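A minimal Python sketch of Minerva 2’s probe/echo cycle, following the standard description of the model (Hintzman, 1986): similarity is a normalized dot product, activation is the cubed similarity, intensity is summed activation, and content is the activation-weighted sum of traces. The tiny feature vectors are toy examples, not real stimuli.

# Minimal sketch of Minerva 2 (after Hintzman, 1986): traces are vectors of
# +1/-1/0 features; a probe activates each trace by its cubed similarity; the
# echo has an intensity (summed activation) and a content (activation-weighted
# sum of traces).
import numpy as np

memory = np.array([          # exemplar memory: one trace per row
    [ 1, -1,  1,  1, -1,  0,  1, -1],
    [ 1, -1,  1,  0, -1,  1,  1, -1],
    [ 1, -1,  0,  1, -1,  1,  0, -1],   # three similar exemplars (same "category")
    [-1,  1, -1, -1,  1, -1, -1,  1],   # one unrelated trace
])

def similarity(probe, trace):
    """Normalized dot product over features where either vector is nonzero."""
    relevant = (probe != 0) | (trace != 0)
    n_relevant = max(relevant.sum(), 1)
    return float(probe @ trace) / n_relevant

def echo(probe, memory):
    """Return echo intensity and echo content for a probe."""
    activations = np.array([similarity(probe, trace) ** 3 for trace in memory])
    intensity = activations.sum()
    content = activations @ memory
    return intensity, content

probe = np.array([1, -1, 1, 1, -1, 1, 1, -1])     # category-like probe
intensity, content = echo(probe, memory)
print(f"echo intensity (familiarity): {intensity:.2f}")
print("echo content (abstracted pattern):", np.round(content, 2))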
Other models
• Associative Theories
  • ACT-R, TODAM
• Search Models
  • SAM, REM
• Trace Theories
  • Perturbation Model
• Connectionist Models
  • PDP, EPIC
• Biological-Based Theories
  • HERA, CARA
Priming Propositions
Ratcliff and McKoon (1978)
• Involves two propositions:
  P1 [OVERLOOK, MAUSOLEUM, SQUARE]
  P2 [ENSHRINE, MAUSOLEUM, TSAR]
• “The mausoleum that enshrined the tsar overlooked the square.”
Priming Propositions
Ratcliff and McKoon (1978): results in a cued memory task (how long does it take to verify that “square” was in the sentence):

  Condition                                       Example             RT to target   Priming effect
  Across sentences                                square-clutch       671 msec       None; baseline
  Between two propositions in the same sentence   square-tsar         571 msec       100 msec facilitation
  Within a single proposition                     square-mausoleum    551 msec       120 msec facilitation