An Exemplar-Retrieval Model of
Short-term Memory Search:
Linking Categorization and
Probe Recognition
Robert M. Nosofsky
Indiana University Bloomington, Bloomington, IN, United States
E-mail: nosofsky@indiana.edu
Contents
1. Introduction and Background
1.1 Introduction
1.2 Background
2. The "Core" Version of the Formal Model
3. Short-term Probe Recognition in a Continuous-Dimension Similarity Space
4. Short-term Probe Recognition of Discrete Stimuli
5. A Power Law of Memory Strength
6. Bridging Short-term and Long-term Probe Recognition and Incorporating the Role of Previous Memory Sets
6.1 Review of Empirical Findings
6.2 The Extended EBRW Model: Conceptual Description
6.3 The Extended EBRW Model: Formal Description
6.4 Modeling Application
7. Evidence for a Joint Role of Categorization and Familiarity Processes
8. Summary and Conclusions
Acknowledgments
References
Psychology of Learning and Motivation, Volume 65. ISSN 0079-7421. http://dx.doi.org/10.1016/bs.plm.2016.03.002. © 2016 Elsevier Inc. All rights reserved.
Abstract
Exemplar-retrieval models such as the exemplar-based random walk (EBRW) model have provided good accounts of response time (RT) and choice-probability data in a wide variety of categorization paradigms. In this chapter, I review recent work showing that the model also accounts accurately for RT and choice-probability data in a wide variety of short-term, probe-recognition, memory-search paradigms. According to the model, observers store items from study lists as individual exemplars in memory. When a test probe is presented, it causes the exemplars to be retrieved. The exemplars that are most readily retrieved are those that are highly similar to the test probe and that have the greatest memory strengths. The retrieved exemplars drive a familiarity-based evidence-accumulation process that determines the speed and accuracy of
old-new recognition decisions. The model accounts for effects of memory-set size, old-new status of the test probe, and study-test lag; effects of the detailed similarity structure of the memory set; and the role of the history of previously experienced memory sets on performance. Applications of the model reveal a quantitative law of how memory strength varies with the retention interval. In addition, the model provides a unified account of how probe recognition operates in cases involving short and long study lists. Furthermore, it provides an account of the classic distinction between controlled versus automatic processing, depending on the types of memory-search practice in which observers engage. In short, the model brings together and extends prior research and theory on categorization, attention and automaticity, short- and long-term memory, and evidence-accumulation models of choice RT to move the field closer to a unified account of diverse forms of memory search.
1.2 Background
In the seminal “memory-scanning” paradigm introduced by Sternberg
(1966, 1969), observers maintain short lists of items in memory and are
then presented with a test probe. The observers’ task is to classify the probe
as “old” or “new” as rapidly as possible while minimizing errors. Under
Sternberg’s conditions of testing, the result was that mean RTs for both
old and new probes were linearly increasing functions of the size of the
memory set. Furthermore, the RT functions for the old and new probes
were parallel to one another. These results led Sternberg to formulate his
classic serial-exhaustive model of memory search. Since that time, a wide
variety of other information-processing models have also been developed
to account for performance in the task (for reviews and analyses, see
Reed, 1973; Townsend & Ashby, 1983).
One modern formal model of short-term probe recognition is the exemplar-based random walk (EBRW) model (Nosofsky et al., 2011). According
to this model, short-term probe recognition is governed by the same
principles of global familiarity and exemplar-based similarity that are theo-
rized to underlie long-term recognition and forms of categorization (Clark
& Gronlund, 1996; Gillund & Shiffrin, 1984; Hintzman, 1988; Kahana &
Sekuler, 2002; Medin & Schaffer, 1978; Murdock, 1985; Nosofsky, 1986,
1991; Nosofsky & Palmeri, 1997; Shiffrin & Steyvers, 1997). The model
assumes that each item of a memory set is stored as an individual exemplar
in memory. When a test probe is presented, it causes the individual exem-
plars to be retrieved. The exemplars that are most readily retrieved are those
that are highly similar to the test probe and that have the greatest memory
strengths. The retrieved exemplars drive a familiarity-based evidence-
accumulation process that determines the speed and the accuracy of old-new recognition decisions.
(possibly the same one as on the previous step) and the process continues.
The recognition decision time is determined by the total number of steps
required to complete the random walk. It should be noted that the concept
of a “criterion” appears in two different locations in the model. First, as
explained above, the strength setting of the criterion elements influences
the direction and rate of drift of the random walk. Second, the magnitudes of the Rold and Rnew thresholds determine how much evidence is needed before an old or a new response is made. Again, other well-known sequential-sampling models include analogous criterion-related parameters at these same two locations (for extensive discussion, see, e.g., Ratcliff, 1985).
Given the detailed assumptions in the EBRW model regarding the race
process (see Nosofsky & Palmeri, 1997, p. 268), it turns out that, on each
step of the random walk, the probability (p) that the counter is incremented
toward the Rold threshold is given by
pi = Ai / (Ai + C),   (4)
where Ai is the summed activation of all of the old exemplars (given pre-
sentation of item i), and C is the summed activation of the criterion
elements. (The probability that the random walk steps toward the Rnew threshold is given by qi = 1 - pi.) In general, therefore, test items that match
recently presented exemplars (with high memory strengths) will cause high
exemplar-based activations, leading the random walk to march quickly to
the Rold threshold and resulting in fast OLD RTs. By contrast, test items that
are highly dissimilar to the memory-set items will not activate the stored
exemplars, so only criterion elements will be retrieved. In this case, the
random walk will march quickly to the Rnew threshold, leading to fast NEW
RTs. Through experience in the task, the observer is presumed to learn an
appropriate setting of criterion-element activation (C) such that summed
activation (Ai) tends to exceed C when the test probe is old, but tends to be
less than C when the test probe is new. In this way, the random walk will
tend to drift to the appropriate response thresholds for old versus new lists. In
most applications, for simplicity, I assume the criterion-element activation is
linearly related to memory set size. (Because summed activation of exem-
plars, Ai, tends to increase with memory-set size, the observer needs to adopt
a stricter criterion as memory-set size increases.)
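As a concrete sketch of this criterion-setting idea, the step probability of Eq. (4) can be computed with a criterion term that grows linearly with memory-set size. All numbers below (the intercept, slope, and activation values) are hypothetical illustrations, not parameter estimates from the chapter:

```python
# Hypothetical illustration of Eq. (4): pi = Ai / (Ai + C),
# with criterion-element activation C assumed linear in memory-set size.

def criterion_activation(set_size, intercept=0.5, slope=0.4):
    """Criterion-element activation, assumed to grow linearly with set size."""
    return intercept + slope * set_size

def step_probability(summed_activation, set_size):
    """Probability that the random walk steps toward the Rold threshold."""
    c = criterion_activation(set_size)
    return summed_activation / (summed_activation + c)

# An old probe matching a recent exemplar yields high summed activation Ai,
# so the walk tends to drift toward Rold; a dissimilar lure yields low Ai
# and the walk tends to drift toward Rnew.
p_matching_old = step_probability(summed_activation=5.0, set_size=4)
p_dissimilar_lure = step_probability(summed_activation=0.4, set_size=4)
```

Raising the slope on the criterion captures the idea that the observer adopts a stricter criterion as memory-set size (and hence baseline summed activation) increases.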
Given these processing assumptions and the computed values of pi (Eq.
(4)), it is then straightforward to derive analytic predictions of recognition
choice probabilities and mean RTs for any given test probe and memory
set. The relevant equations are summarized by Nosofsky and Palmeri
(1997, pp. 269-270, 291-292). Simulation methods are used when the
model is applied to predict fine-grained RT distribution data.
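To make the mechanics concrete, here is a minimal Monte Carlo sketch of such a simulation. The step probabilities, threshold settings, and trial counts are made-up illustrations, not the authors' fitting code or fitted values:

```python
import random

def random_walk(p, r_old=5, r_new=5, rng=None):
    """Run one random walk; return the response and the number of steps."""
    rng = rng or random.Random()
    counter, steps = 0, 0
    while -r_new < counter < r_old:
        counter += 1 if rng.random() < p else -1  # step toward Rold or Rnew
        steps += 1
    return ("old" if counter == r_old else "new"), steps

def simulate(p, n_trials=20000, seed=1):
    """Estimate P('old') and mean decision steps for step probability p."""
    rng = random.Random(seed)
    n_old = total_steps = 0
    for _ in range(n_trials):
        resp, steps = random_walk(p, rng=rng)
        n_old += resp == "old"
        total_steps += steps
    return n_old / n_trials, total_steps / n_trials

# A strongly matching old probe (p = .7) should yield mostly "old" responses;
# a dissimilar lure (p = .2) should yield mostly "new" responses.
p_old_strong, steps_strong = simulate(0.7)
p_old_lure, steps_lure = simulate(0.2)
```

The mean number of steps plays the role of decision time; adding a residual-time constant and a scaling parameter would map step counts onto predicted RTs.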
In sum, having outlined the general form of the model, I now review spe-
cific applications of the model to predicting RTs and accuracies in different
variants of the short-term probe-recognition paradigm.
Figure 2 Summary data from the short-term memory experiment of Nosofsky et al.
(2011). (Top) Observed error rates and mean response times (RTs). (Bottom) Predictions
from the exemplar-based random walk model. Reprinted from Nosofsky, R.M., Little, D.R.,
Donkin, C., & Fific, M. (2011). Short-term memory scanning viewed as exemplar-based
categorization. Psychological Review, 118, 288. Copyright 2011 by APA. Reprinted with
permission.
counted backward from the end of the list.) For old probes, there was a big
effect of lag: In general, the more recently a probe appeared on the study list,
the shorter was the mean RT. Indeed, once one takes lag into account, there
is little remaining effect of set size on the RTs for the old probes. That is, as
can be seen, the different set size functions are nearly overlapping (cf.
McElree & Dosher, 1989; Monsell, 1978). The main exception is a persis-
tent primacy effect, in which the mean RT for the item at the longest lag
for each set size is “pulled down.” (The item at the longest lag occupies
the first serial position of the list.) By contrast, for the lures, there is a big
effect of set size, with longer mean RTs as set size increases. The mean
proportions of errors for the different types of lists, shown in the top-left
panel of Fig. 2, mirror the mean RT data just described.
The goal of the EBRW modeling, however, was not simply to account
for these summary trends. Instead, the goal was to predict the choice prob-
abilities and mean RTs observed for each of the individual lists. Because
there were 360 unique lists in the experiment, this goal entailed simulta-
neously predicting 360 choice probabilities and 360 mean RTs. The results
of that model-fitting goal are shown in the top and bottom panels of Fig. 3.
The top panel plots, for each individual list, the observed probability that the
subjects judged the probe to be “old” against the predicted probability from
the model. The bottom panel does the same for the mean RTs. Although
there are a few outliers in the plots, overall the model achieves a good fit
to both data sets, accounting for 96.5% of the variance in the choice prob-
abilities and for 83.4% of the variance in the mean RTs.
The summary-trend predictions that result from these global fits are
shown in the bottom panels of Fig. 2. It is evident from inspection that
the EBRW does a good job of capturing these summary results. For the
old probes, it predicts the big effect of lag on the mean RTs and the nearly
overlapping set-size functions. Likewise, it predicts with good quantitative
accuracy the big effect of set size on the lure RTs. The error-proportion
data (left panels of Fig. 2) are generally also well predicted.
The explanation of these results in terms of the EBRW model is straight-
forward. According to the best-fitting parameters from the model (see
Nosofsky et al., 2011), more recently presented exemplars had greater mem-
ory strengths and sensitivities than did less recently presented exemplars.
From a psychological perspective, this pattern seems highly plausible. For
example, presumably, the more recently an exemplar was presented, the
greater should be its strength in memory. Thus, if an old test probe matches
the recently presented exemplar, it will give rise to greater overall activation,
leading to shorter mean old RTs. In the case of a lure, as set size increases, the
overall summed activation yielded by the lure will also tend to increase. This
pattern arises both because a greater number of exemplars will contribute to
the sum, and because the greater the set size, the higher is the probability that at least one exemplar from the memory set will be highly similar to the
lure. As summed activation yielded by the lures increases, the probability
that the random walk takes correct steps toward the Rnew threshold decreases,
and so mean RTs for the lures get longer.
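This two-part explanation (more exemplars contributing to the sum, and a higher chance that some exemplar lies close to the lure) can be illustrated numerically. The exponential similarity gradient and all parameter values below are illustrative assumptions, not the fitted model:

```python
import math
import random

def summed_lure_activation(set_size, rng, c=1.0):
    """Sum of exponential similarities between a lure and the memory set.

    Exemplar-to-lure distances are drawn at random to mimic lists whose
    similarity structure varies from trial to trial (illustrative only).
    """
    distances = (rng.uniform(0.5, 4.0) for _ in range(set_size))
    return sum(math.exp(-c * d) for d in distances)

rng = random.Random(7)
n_lists = 2000
mean_activation = {
    n: sum(summed_lure_activation(n, rng) for _ in range(n_lists)) / n_lists
    for n in (2, 4, 6)
}
# Mean summed activation grows with set size, so the per-step probability of
# moving toward Rnew shrinks, and predicted lure RTs lengthen accordingly.
```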
Beyond accounting well for these summary trends, inspection of the
detailed scatterplots in Fig. 3 reveals that the model accounts for fine-grained
changes in choice probabilities and mean RTs depending on the fine-
grained similarity structure of the lists. For example, consider the choice-
probability plot (Fig. 3, top panel) and the Lure-Size-4 items (open
diamonds). Whereas performance for those items is summarized by a single
point on the summary-trend figure (Fig. 2), the full scatterplot reveals
was not varied. A key aspect of Monsell’s procedure was that individual
stimulus presentations were fairly rapid, and the test probe was presented
either immediately or with a brief delay. Critically, the purpose of this proce-
dure was to discourage subjects from rehearsing the individual consonants of
the memory set. If rehearsal takes place, then the psychological recency of
the individual memory-set items is unknown, because it will vary depending
on each subject’s rehearsal strategy. By discouraging rehearsal, the psycho-
logical recency of each memory set item should be a systematic function
of its lag. (Another important aspect of Monsell’s design, which I consider
later in this review, is that he varied whether or not lures were presented
on recent lists. The present applications are to data that are collapsed across
this variable.)
The mean RTs and error rates observed by Monsell (1978) in the imme-
diate condition are reproduced in the top panel of Fig. 4. (The results
obtained in the brief-delay condition showed a similar pattern.) Inspection
of Monsell’s RT data reveals a pattern that is very similar to the one we
observed in the previous section after averaging across the individual tokens
of the main types of lists (i.e., compare to the observed-RT panel of Fig. 2). In
particular, the mean old RTs vary systematically as a function of lag, with
shorter RTs associated with more recently presented probes. Once lag is
taken into account, there is little if any remaining influence of memory-
set size on old-item RTs. For new items, however, there is a big effect of
memory-set size on mean RT, with longer RTs associated with larger set
sizes. Because of the nonconfusable nature of the consonant stimuli, error
rates are very low; however, what errors there are tend to mirror the
RTs. Another perspective on the observed data is provided in Fig. 5, which
plots mean RTs for old and new items as a function of memory-set size, with
the old RTs averaged across the differing lags. This plot shows roughly linear
increases in mean RTs as a function of memory-set size, with the positive
and negative functions being roughly parallel to one another. (The main
exception to that overall pattern is the fast mean RT associated with positive
probes to 1-item lists.) This overall pattern shown in Fig. 5 is, of course,
extremely commonly observed in the probe-recognition memory-scanning
paradigm.
Nosofsky et al. (2011) fitted the EBRW model to the Fig. 4 data by using
a weighted least-squares criterion (see the original article for details). The
predicted mean RTs and error probabilities from the model are shown
graphically in the bottom panel of Fig. 4. Comparison of the top and bottom
panels of the figure reveals that the EBRW model does an excellent job of
capturing the performance patterns in Monsell's (1978) study. Mean RTs for old probes get systematically longer with increasing lag, and there is little
further effect of memory-set size once lag is taken into account. Mean
RTs for lures are predicted correctly to get longer with increases in mem-
ory-set size. (The model is also in the right ballpark for the error proportions,
although in most conditions the errors are near floor.) Fig. 5 shows the
EBRW model’s predictions of mean RTs for both old and new probes as
a function of memory-set size (averaged across differing lags), and the model
captures the data from this perspective as well. Beyond accounting for the
major qualitative trends in performance, the EBRW model provides an
excellent quantitative fit to the complete set of data.
The best-fitting parameters from the model (see Nosofsky et al., 2011)
were highly systematic and easy to interpret. As expected, the memory-
strength parameters decreased systematically with lag, reproducing the
pattern seen in the fits to the data from the previous section. The best-fitting
value of the similarity-mismatch parameter (s = 0.050) reflected the low
confusability of the consonant stimuli from Monsell’s experiment. The con-
ceptual explanation of the model’s predictions is essentially the same as
already provided in the previous section.
In sum, without embellishment, the EBRW model appears to provide a
natural account of the major patterns of performance in the standard version
of the probe-recognition paradigm in which discrete alphanumeric charac-
ters are used, at least in cases in which the procedure discourages rehearsal
and where item recency exerts a major impact. In addition, I should note
that although the present chapter focuses on predictions and results at the
level of mean RTs, the exemplar model has also been shown to provide suc-
cessful quantitative accounts of probe-recognition performance at the level
of complete RT distributions. Examples of such applications are provided by
Nosofsky et al. (2011), Donkin and Nosofsky (2012b), and Nosofsky, Cao,
Cox, and Shiffrin (2014).
[Four panels, one per participant (Participants 1-4), each plotting estimated memory strength (0.0-1.0) against lag (1, 3, 5, 7, 9).]
Figure 6 Model-based results from the probe-recognition experiment of Donkin and Nosofsky (2012a). Estimated memory strengths (open circles) are plotted as a function of lag, along with the best-fitting power functions. Reprinted from Donkin, C., & Nosofsky, R. M. (2012). A power-law model of psychological memory strength in short- and long-term recognition. Psychological Science, 23, 625-634. Copyright 2012 by Sage. Reprinted with permission.
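Power functions of the kind shown in Figure 6 can be fit by simple least-squares regression in log-log coordinates, since m = a * lag^(-b) implies log m = log a - b * log(lag). The sketch below uses synthetic strengths generated from an exact power law (not the estimates of Donkin and Nosofsky, 2012a) so that the fit recovers the generating parameters:

```python
import math

def fit_power_law(lags, strengths):
    """Least-squares fit of m = a * lag**(-b) in log-log coordinates."""
    xs = [math.log(l) for l in lags]
    ys = [math.log(m) for m in strengths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope  # (a, b)

# Synthetic memory strengths generated from m = 0.9 * lag**(-0.5).
lags = [1, 3, 5, 7, 9]
strengths = [0.9 * lag ** -0.5 for lag in lags]
a, b = fit_power_law(lags, strengths)
```

With noisy estimates, as in real data, the same regression gives the best-fitting power function rather than an exact recovery.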
[Two panels (Observed, top; Predicted, bottom) plot mean correct RT (ms, 550-1000) against set size (0-16) for six probe types: Varied-Old, Varied-New, Allnew-Old, Allnew-New, Consist-Old, Consist-New.]
Figure 7 Mean correct response times (RTs) for old probes and new probes plotted as a function of set size in the varied-mapping (VM), all-new (AN), and consistent-mapping (CM) conditions. Top panel = observed, bottom panel = predicted. Adapted from Nosofsky, R.M., Cox, G.E., Cao, R., & Shiffrin, R.M. (2014). An exemplar-familiarity model predicts short-term and long-term probe recognition across diverse forms of memory search. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40, 1528. Copyright 2014 by APA. Adapted with permission.
[Two panels (Observed, top; Predicted, bottom) plot P(Error) (0.05-0.3) against set size (0-16) for the same six probe types.]
Figure 8 Mean error proportions for old probes and new probes plotted as a function of set size in the varied-mapping, all-new, and consistent-mapping conditions. Top panel = observed, bottom panel = predicted. Adapted from Nosofsky, R.M., Cox, G.E., Cao, R., & Shiffrin, R.M. (2014). An exemplar-familiarity model predicts short-term and long-term probe recognition across diverse forms of memory search. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40, 1529. Copyright 2014 by APA. Adapted with permission.
distant past. In recent years, the precise assumptions I have made concerning
the nature of these previous-list traces have been evolving (and will likely
continue to evolve as new experiments are conducted). Here I present a
fairly general version of the extended model to date, which is being devel-
oped in collaboration with Rui Cao and Richard Shiffrin (Nosofsky, Cao, &
Shiffrin, in preparation).
The basic idea in the extended model is that when a test probe is pre-
sented, it causes the retrieval of not only the old exemplars on the current
list, but also exemplars from previous lists in the experiment. These
“long-term memory” (LTM) exemplars enter the evidence-accumulation
process of the random walk in the same manner as the current-list exemplars.
The probability with which an LTM exemplar is retrieved depends jointly
on its memory strength, its similarity to the test probe, and the extent to
which any “context” elements associated with the LTM exemplar match
or mismatch the current list context (cf. Howard & Kahana, 2002;
Raaijmakers & Shiffrin, 1981).
An important open question, however, concerns the extent to which
the “old” and “new” labels that are associated with the LTM exemplars
are themselves stored with those exemplars. In the case of what I will
term a “familiarity-only” version of the model, the response labels are not
part of the exemplar traces. Instead, if an LTM exemplar is retrieved, it always
causes the random walk to take a step toward the old threshold.
An alternative version of the extended model, which I will term a
“labeling” model, assumes that the “old” and “new” labels associated with
the test probes on previous trials are stored along with the exemplars them-
selves. So, for example, if test probe T was “old” on a given trial, then a rep-
resentation of T-old would be stored in memory; whereas if T was a new test
probe on that trial, then a representation of T-new would be stored in mem-
ory. In making an old-new decision for the current list, if an LTM exemplar
is retrieved that has an “old” label, then the random walk takes a step in the
direction of the old threshold. Crucially, however, if the LTM exemplar that
is retrieved has a “new” label, then the random walk steps toward the new
response threshold. Note that this “labeling” version of the model is basically
a type of exemplar-based categorization model (as originally formalized by
Nosofsky & Palmeri, 1997), with the categories being “old” versus “new.”
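The contrast between the two variants reduces to a simple step rule, sketched below with hypothetical function names and a +1/-1 coding of step direction (toward the old and new thresholds, respectively):

```python
def ltm_step(retrieved_label, variant="labeling"):
    """Random-walk step direction when an LTM exemplar is retrieved.

    +1 = step toward the old threshold; -1 = step toward the new threshold.
    'familiarity-only': any retrieved LTM exemplar pushes toward "old".
    'labeling': the stored old/new label determines the direction.
    """
    if variant == "familiarity-only":
        return +1  # retrieval itself is evidence of familiarity
    return +1 if retrieved_label == "old" else -1
```

For example, a retrieved T-new trace pushes the walk toward the new threshold only under the labeling variant: ltm_step("new", "labeling") gives -1, whereas ltm_step("new", "familiarity-only") gives +1.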
The most straightforward assumption is that the labeling/categorization
strategy does indeed apply in the CM version of the task. Indeed, one of
the key hypotheses advanced by Shiffrin and Schneider (1977) is that a
major component in the development of “automatic” processing in CM
Furthermore, let Old-O(k) denote the activation of all old LTM exemplars
given presentation of an old (O) test probe in condition k; Old-N(k) denote
the activation of all old LTM exemplars given a new (N) test probe; and
analogously for New-O(k) and New-N(k). Then for an old test probe, the
probability that each individual random-walk step moves toward the
Rold response threshold is given by
pold = (wList · Ai + wLTM · Old-O) / [(wList · Ai + wLTM · Old-O) + (wList · C + wLTM · New-O)].   (5)
For example, the random walk steps toward the Rold threshold anytime an old exemplar from the current memory set is retrieved (measured by wList·Ai), or anytime an old exemplar from LTM is retrieved (measured by wLTM·Old-O). Conversely, the random walk steps toward the Rnew threshold anytime a criterion element is retrieved (measured by wList·C), or anytime an exemplar from LTM is retrieved that is associated with the NEW category. (Note that the probability that the random walk moves toward the Rnew threshold on any given step is simply qold = 1 - pold.)
Analogously, for new test probes, the probability that each individual
random-walk step is toward the ROLD response threshold is given by
pnew = (wList · Ai + wLTM · Old-N) / [(wList · Ai + wLTM · Old-N) + (wList · C + wLTM · New-N)],   (6a)
whereas the probability that the random walk steps toward the Rnew threshold is simply
qnew = 1 - pnew = (wList · C + wLTM · New-N) / [(wList · Ai + wLTM · Old-N) + (wList · C + wLTM · New-N)].   (6b)
[It should be emphasized that, in this notation, the probability of taking
steps toward the Rold and Rnew thresholds is denoted by p and q, respectively;
whereas the type of test probe (old vs new) is denoted by the subscript on p
and q.] Thus, Eq. (6b) formalizes the idea that, for new test probes, the
random walk correctly steps toward the NEW threshold anytime that a cri-
terion element is retrieved or anytime that an LTM exemplar is retrieved
that is associated with the NEW category label.
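Eqs. (5)-(6b) can be sketched directly; the weights and activation values below are made-up illustrations, not estimates from the chapter:

```python
def p_step_toward_old(a_i, c, old_ltm, new_ltm, w_list, w_ltm):
    """Eqs. (5)/(6a): per-step probability of moving toward Rold,
    mixing current-list and LTM evidence via the weights w_list and w_ltm."""
    toward_old = w_list * a_i + w_ltm * old_ltm
    toward_new = w_list * c + w_ltm * new_ltm
    return toward_old / (toward_old + toward_new)

# Old test probe in some condition k (illustrative values): Eq. (5).
p_old = p_step_toward_old(a_i=3.0, c=1.0, old_ltm=0.8, new_ltm=0.2,
                          w_list=0.7, w_ltm=0.3)
q_old = 1 - p_old  # probability of a step toward Rnew

# New test probe: same form with the Old-N and New-N activations, Eqs. (6a)/(6b).
p_new = p_step_toward_old(a_i=0.5, c=1.0, old_ltm=0.3, new_ltm=0.9,
                          w_list=0.7, w_ltm=0.3)
q_new = 1 - p_new
```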
the list-weight parameter wList and the LTM activation parameters Old-O,
Old-N, New-O, and New-N. Based on various model-fitting explorations,
and to achieve greater parsimony and interpretability of the parameter
estimates, I introduced a variety of parameter constraints. In particular, the
residual-time (t0) and scaling (k) parameters were held fixed across all three
conditions; the parameters b, a, u, v, Rold, Rnew, and wList were held fixed
across conditions AN and VM; and the Old-O and Old-N parameters as
well as the New-O and New-N parameters were set equal to one another
in conditions AN and VM. The latter constraints arise because in the VM
and AN conditions, old and new test probes should yield equal matches
to old test probes (and new test probes) from previous lists.
Although Nosofsky, Cao, et al. (2014) modeled the data separately for
each individual subject, all subjects showed the same qualitative patterns
of results. Therefore, in reviewing the summary trends here, I report the
data averaged across the four subjects.
The mean correct RTs are plotted as a function of condition (VM
vs CM), memory-set size, probe type (old vs new), and repeat status of
the probe in Fig. 9. The mean proportions of errors are plotted as a function
of these variables in Fig. 10.
First, note that the results from the standard (no-repeat) conditions are
similar to those I reported in the previous section (Figs. 7 and 8). In the stan-
dard VM condition (solid triangles), mean RTs for both the old and new
probes get longer with increases in memory-set size, and this lengthening
is curvilinear in form. The error proportions in the standard VM condition
show the same pattern. In the standard CM condition (solid squares), the
mean RTs and error proportions for the new probes are a flat function of
memory-set size, whereas the mean RTs and error proportions for the
old probes lengthen curvilinearly with increases in set size. The old-item
set-size functions in the CM condition are not as steep as in the VM
condition.
[Two panels (NEW, left; OLD, right) plot mean correct RT (ms, 300-900) against set size (0-15) for four conditions: CM-repeat, CM-no repeat, VM-repeat, VM-no repeat.]
Figure 9 Mean correct response times (ms) for old and new test probes plotted as a function of condition (VM vs CM), repeat manipulation, and set size. VM, varied mapping; CM, consistent mapping. Reprinted from Nosofsky, R.M., Cao, R., Cox, G.E., & Shiffrin, R.M. (2014). Familiarity and categorization processes in memory search. Cognitive Psychology, 75, 102. Copyright 2014 by Elsevier. Reprinted with permission.
[Two panels (NEW, left; OLD, right) plot probability of error (0-1) against set size (0-15) for the same four conditions.]
Figure 10 Mean probability of error plotted as a function of condition (VM vs CM), old-new status of probe, set size, and repeat manipulation. VM, varied mapping; CM, consistent mapping. Reprinted from Nosofsky, R.M., Cao, R., Cox, G.E., & Shiffrin, R.M. (2014). Familiarity and categorization processes in memory search. Cognitive Psychology, 75, 103. Copyright 2014 by Elsevier. Reprinted with permission.
The data from the VM-repeat condition are symbolized by X’s. Perhaps
the most dramatic results are that, for the new probes, compared to the
standard VM condition (solid triangles), there is a major lengthening in
mean RTs and a major increase in error rates in the VM-repeat condition
(cf. Monsell, 1978). The mean RTs for the new probes in the VM-repeat
condition are not monotonic with set size, but this pattern varied consider-
ably across the different subjects and the irregular plot probably reflects noise
due to the smaller sample sizes in the VM-repeat condition. Note that the
error proportions for the new probes in the VM-repeat condition do
increase in highly regular fashion as set size increases. Regarding the old
probes, there is little change in mean RTs and a slight decrease in error rates
(except at set-size 16) when the probe repeats from the previous trial.
The data from the CM-repeat condition are symbolized by open circles.
Whereas there was a dramatic slowdown for repeat-new probes in the VM
condition, there was no change in RT for the repeat-new probes in the CM
condition (and error rates remain essentially at zero). In addition, mean RTs
got shorter for old probes in the CM-repeat condition and error rates got
even lower than in the CM no-repeat condition.
ACKNOWLEDGMENTS
This work was supported by Grant FA9550-14-1-0357 from the Air Force Office of
Scientific Research to Robert Nosofsky.
REFERENCES
Anderson, J. R., & Schooler, L. J. (1991). Reflections of the environment in memory. Psychological Science, 2, 396-408.
Banks, W. P., & Atkinson, R. C. (1974). Accuracy and speed strategies in scanning active memory. Memory & Cognition, 2, 629-636.
Berman, M. G., Jonides, J., & Lewis, R. L. (2009). In search of decay in verbal short-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(2), 317-333.
Brady, T. F., Konkle, T., Alvarez, G. A., & Oliva, A. (2008). Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences of the United States of America, 105, 14325-14329.
Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice response time: linear ballistic accumulation. Cognitive Psychology, 57(3), 153-178.
Burrows, D., & Okada, R. (1975). Memory retrieval from long and short lists. Science, 188, 1031-1033.
Cheng, P. W. (1985). Restructuring versus automaticity: alternative accounts of skill acquisition. Psychological Review, 92, 414-423.
Clark, S. E., & Gronlund, S. D. (1996). Global matching models of recognition memory: how the models match the data. Psychonomic Bulletin & Review, 3, 37-60.
Donkin, C., & Nosofsky, R. M. (2012a). A power law of psychological memory strength in short- and long-term recognition. Psychological Science, 23, 625-634.
Donkin, C., & Nosofsky, R. M. (2012b). The structure of short-term memory scanning: an investigation using response-time distribution models. Psychonomic Bulletin & Review, 19, 363-394.
Garner, W. R. (1974). The processing of information and structure. Potomac, MD: LEA.
Gillund, G., & Shiffrin, R. M. (1984). A retrieval model for both recognition and recall. Psychological Review, 91, 1-65.
Hintzman, D. L. (1986). "Schema abstraction" in a multiple-trace memory model. Psychological Review, 93, 411-428.
Hintzman, D. L. (1988). Judgments of frequency and recognition memory in a multiple-trace memory model. Psychological Review, 95, 528-551.
Howard, M. W., & Kahana, M. J. (2002). A distributed representation of temporal context. Journal of Mathematical Psychology, 46(3), 269-299.
Kahana, M. J., & Sekuler, R. (2002). Recognizing spatial patterns: a noisy exemplar approach. Vision Research, 42, 2177-2192.
Lamberts, K. (2000). Information-accumulation theory of speeded categorization. Psychological Review, 107(2), 227.
Lamberts, K., Brockdorff, N., & Heit, E. (2003). Feature-sampling and random-walk models of individual-stimulus recognition. Journal of Experimental Psychology: General, 132(3), 351.
Logan, G. D. (1988). Toward an instance theory of automatization. Psychological Review, 95, 492-527.