
Introduction to ACT-R 5.0
Tutorial
24th Annual Conference
Cognitive Science Society

Christian Lebiere
Human Computer Interaction Institute
Carnegie Mellon University
Pittsburgh, PA 15213
cl@cmu.edu

ACT-R Home Page: http://act.psy.cmu.edu


Tutorial Overview
1. Introduction
2. Symbolic ACT-R
Declarative Representation: Chunks
Procedural Representation: Productions
ACT-R 5.0 Buffers: A Complete Model for Sentence Memory
3. Chunk Activation in ACT-R
Activation Calculations
Spreading Activation: The Fan Effect
Partial Matching: Cognitive Arithmetic
Noise: Paper Rocks Scissors
Base-Level Learning: Paired Associate
4. Production Utility in ACT-R
Principles and Building Sticks Example
5. Production Compilation
Principles and Successes
6. Predicting fMRI BOLD response
Principles and Algebra example
Motivations for a Cognitive Architecture

1. Philosophy: Provide a unified understanding of the mind.

2. Psychology: Account for experimental data.

3. Education: Provide cognitive models for intelligent tutoring systems and other learning environments.

4. Human Computer Interaction: Evaluate artifacts and help in their design.

5. Computer Generated Forces: Provide cognitive agents to inhabit training environments and games.

6. Neuroscience: Provide a framework for interpreting data from brain imaging.
Approach: Integrated Cognitive Models

Cognitive model = computational process that thinks/acts like a person

Integrated cognitive models ... [diagram: User Model, Driver Model, User Model, ...]
Study 1: Dialing Times

Total time to complete dialing.

[Bar charts: Model Predictions vs. Human Data; dialing time (0-9 s) for the Full-Manual, Speed-Manual, Full-Voice, and Speed-Voice conditions, each under Baseline and Driving.]

Study 1: Lateral Deviation

Deviation from lane center (RMSE).

[Bar charts: Model Predictions vs. Human Data; lateral deviation (0.00-0.60) for the No Dialing, Full-Manual, Speed-Manual, Full-Voice, and Speed-Voice conditions.]

These Goals for Cognitive Architectures Require

1. Integration, not just of different aspects of higher-level cognition but of cognition, perception, and action.

2. Systems that run in real time.

3. Robust behavior in the face of error, the unexpected, and the unknown.

4. Parameter-free predictions of behavior.

5. Learning.
History of the ACT-framework

Predecessor HAM (Anderson & Bower 1973)

Theory versions ACT-E (Anderson, 1976)


ACT* (Anderson, 1983)
ACT-R (Anderson, 1993)
ACT-R 4.0 (Anderson & Lebiere, 1998)
ACT-R 5.0 (Anderson & Lebiere, 2001)
Implementations GRAPES (Sauers & Farrell, 1982)
PUPS (Anderson & Thompson, 1989)
ACT-R 2.0 (Lebiere & Kushmerick, 1993)
ACT-R 3.0 (Lebiere, 1995)
ACT-R 4.0 (Lebiere, 1998)
ACT-R/PM (Byrne, 1998)
ACT-R 5.0 (Lebiere, 2001)
Windows Environment (Bothell, 2001)
Macintosh Environment (Fincham, 2001)
~ 100 Published Models in ACT-R 1997-2002

I. Perception & Attention
1. Psychophysical Judgements
2. Visual Search
3. Eye Movements
4. Psychological Refractory Period
5. Task Switching
6. Subitizing
7. Stroop
8. Driving Behavior
9. Situational Awareness
10. Graphical User Interfaces

II. Learning & Memory
1. List Memory
2. Fan Effect
3. Implicit Learning
4. Skill Acquisition
5. Cognitive Arithmetic
6. Category Learning
7. Learning by Exploration and Demonstration
8. Updating Memory & Prospective Memory
9. Causal Learning

III. Problem Solving & Decision Making
1. Tower of Hanoi
2. Choice & Strategy Selection
3. Mathematical Problem Solving
4. Spatial Reasoning
5. Dynamic Systems
6. Use and Design of Artifacts
7. Game Playing
8. Insight and Scientific Discovery

IV. Language Processing
1. Parsing
2. Analogy & Metaphor
3. Learning
4. Sentence Memory

V. Other
1. Cognitive Development
2. Individual Differences
3. Emotion
4. Cognitive Workload
5. Computer Generated Forces
6. fMRI
7. Communication, Negotiation, Group Decision Making

Visit http://act.psy.cmu.edu/papers/ACT-R_Models.htm
ACT-R 5.0

[Architecture diagram:]
Intentional Module (not identified) -> Goal Buffer (DLPFC)
Declarative Module (Temporal/Hippocampus) -> Retrieval Buffer (VLPFC)
Productions (Basal Ganglia): Matching (Striatum), Selection (Pallidum), Execution (Thalamus)
Visual Module (Occipital/etc) -> Visual Buffer (Parietal)
Manual Module (Motor/Cerebellum) <- Manual Buffer (Motor)
The Visual and Manual Modules interact with the Environment.
ACT-R: Knowledge Representation
Declarative-Procedural Distinction

Declarative Knowledge: Chunks
Configurations of small numbers of elements, e.g. the addition fact

    Three --addend1--> ADDITION-FACT <--addend2-- Four
                            |
                           sum
                            v
                          Seven

Procedural Knowledge: Production rules for retrieving chunks to solve problems, e.g. the column addition 336 + 848:

IF the goal is to add the numbers in a column       goal buffer
and n1 + n2 are in the column                       visual buffer
THEN retrieve the sum of n1 and n2.                 retrieval buffer

Productions serve to coordinate the retrieval of information from declarative memory and the environment to produce transformations in the goal state.


ACT-R: Assumption Space

Performance
                 Declarative                  Procedural
Symbolic         Retrieval of Chunks          Application of Production Rules
Subsymbolic      Noisy Activations Control    Noisy Utilities Control
                 Speed and Accuracy           Choice

Learning
                 Declarative                  Procedural
Symbolic         Encoding Environment         Production Compilation
                 and Caching Goals
Subsymbolic      Bayesian Learning            Bayesian Learning
Chunks: Example

(CHUNK-TYPE NAME SLOT1 SLOT2 SLOTN)

(FACT3+4
isa ADDITION-FACT
ADDEND1 THREE
ADDEND2 FOUR
SUM SEVEN
)
Chunks: Example

(CLEAR-ALL)
(CHUNK-TYPE addition-fact addend1 addend2 sum)
(CHUNK-TYPE integer value)
(ADD-DM (fact3+4
isa addition-fact
addend1 three
addend2 four
sum seven)
(three
isa integer
value 3)
(four
isa integer
value 4)
(seven
isa integer
value 7))
Chunks: Example

[Network diagram: FACT3+4 is linked by isa to ADDITION-FACT, by ADDEND1 to THREE, by ADDEND2 to FOUR, and by SUM to SEVEN. THREE, FOUR, and SEVEN are each linked by isa to INTEGER and by VALUE to 3, 4, and 7 respectively.]
Chunks: Exercise I

Fact: The cat sits on the mat.

Encoding:

(Chunk-Type proposition agent action object)

(Add-DM
(fact007
isa proposition
agent cat007
action sits_on
object mat)
)

[Network diagram: fact007 is linked by isa to proposition, by agent to cat007, by action to sits_on, and by object to mat.]
Chunks: Exercise II

Fact: The black cat with 5 legs sits on the mat.

Chunks:
(Chunk-Type proposition agent action object)
(Chunk-Type cat legs color)

(Add-DM
(fact007 isa proposition
agent cat007
action sits_on
object mat)
(cat007 isa cat
legs 5
color black)
)

[Network diagram: fact007 is linked by isa to proposition, by agent to cat007, by action to sits_on, and by object to mat; cat007 is linked by isa to cat, by legs to 5, and by color to black.]
Chunks: Exercise III

Fact: The rich young professor buys a beautiful and expensive city house.

Chunks:
(Chunk-Type proposition agent action object)
(Chunk-Type prof money-status age)
(Chunk-Type house kind price status)

(Add-DM
(fact008 isa proposition
agent prof08
action buys
object obj1001)
(prof08 isa prof
money-status rich
age young)
(obj1001 isa house
kind city-house
price expensive
status beautiful)
)

[Network diagram: fact008 is linked by isa to proposition, by agent to prof08, by action to buys, and by object to obj1001; prof08 is linked by isa to prof, by money-status to rich, and by age to young; obj1001 is linked by isa to house, by kind to city-house, by price to expensive, and by status to beautiful.]
A Production is

1. The greatest idea in cognitive science.

2. The least appreciated construct in cognitive science.

3. A 50 millisecond step of cognition.

4. The source of the serial bottleneck in an otherwise parallel system.

5. A condition-action data structure with “variables”.

6. A formal specification of the flow of information from cortex to basal ganglia and back again.
Productions

Key Properties:
• modularity
• abstraction
• goal/buffer factoring
• conditional asymmetry

Structure of productions:

(p name
Buffer Tests             specification of condition part
==>                      delimiter
Buffer Transformations   specification of action part
)
ACT-R 5.0 Buffers
1. Goal Buffer (=goal, +goal)
-represents where one is in the task
-preserves information across production cycles
2. Retrieval Buffer (=retrieval, +retrieval)
-holds information retrieved from declarative memory
-seat of activation computations
3. Visual Buffers
-location (=visual-location, +visual-location)
-visual objects (=visual, +visual)
-attention switch corresponds to buffer transformation
4. Auditory Buffers (=aural, +aural)
-analogous to visual
5. Manual Buffers (=manual, +manual)
-elaborate theory of manual movement including feature
preparation, Fitts' law, and device properties
6. Vocal Buffers (=vocal, +vocal)
-analogous to manual buffers but less well developed
Model for Anderson (1974)

Participants read a story consisting of active and passive sentences.

Subjects are asked to verify either active or passive sentences.

All foils are subject-object reversals.

Predictions of the ACT-R model are “almost” parameter-free.

DATA: Studied-form/Test-form
          Active-active  Active-passive  Passive-active  Passive-passive
Targets:      2.25           2.80            2.30            2.75
Foils:        2.55           2.95            2.55            2.95

Predictions:
          Active-active  Active-passive  Passive-active  Passive-passive
Targets:      2.36           2.86            2.36            2.86
Foils:        2.51           3.01            2.51            3.01

CORRELATION: 0.978
MEAN DEVIATION: 0.072
250 msec in the life of ACT-R:
Reading the Word “The”

Identifying Left-most Location


Time 63.900: Find-Next-Word Selected
Time 63.950: Find-Next-Word Fired
Time 63.950: Module :VISION running command FIND-LOCATION

Attending to Word
Time 63.950: Attend-Next-Word Selected
Time 64.000: Attend-Next-Word Fired
Time 64.000: Module :VISION running command MOVE-ATTENTION
Time 64.050: Module :VISION running command FOCUS-ON

Encoding Word
Time 64.050: Read-Word Selected
Time 64.100: Read-Word Fired
Time 64.100: Failure Retrieved

Skipping The
Time 64.100: Skip-The Selected
Time 64.150: Skip-The Fired
Attending to a Word in Two Productions
(P find-next-word
=goal>  no word currently being processed.
ISA comprehend-sentence
word nil
==>
+visual-location>
ISA visual-location  find left-most unattended location
screen-x lowest
attended nil
=goal>
word looking  update state
)

(P attend-next-word
=goal>
ISA comprehend-sentence  looking for a word
word looking
=visual-location>
ISA visual-location  visual location has been identified
==>
=goal>
word attending  update state
+visual>
ISA visual-object  attend to object in that location
screen-pos =visual-location
)
Processing “The” in Two Productions
(P read-word
=goal>
ISA comprehend-sentence
word attending  attending to a word
=visual>
ISA text
value =word  word has been identified
status nil
==>
=goal>
word =word  hold word in goal buffer
+retrieval>
ISA meaning
word =word  retrieve word’s meaning
)

(P skip-the
=goal>
ISA comprehend-sentence
word "the"  the word is “the”
==>
=goal>
word nil  set to process next word
)
Processing “missionary” in 450 msec.

Identifying left-most unattended Location


Time 64.150: Find-Next-Word Selected
Time 64.200: Find-Next-Word Fired
Time 64.200: Module :VISION running command FIND-LOCATION

Attending to Word
Time 64.200: Attend-Next-Word Selected
Time 64.250: Attend-Next-Word Fired
Time 64.250: Module :VISION running command MOVE-ATTENTION
Time 64.300: Module :VISION running command FOCUS-ON

Encoding Word
Time 64.300: Read-Word Selected
Time 64.350: Read-Word Fired
Time 64.550: Missionary Retrieved

Processing the First Noun


Time 64.550: Process-First-Noun Selected
Time 64.600: Process-First-Noun Fired
Processing the Word “missionary”

Missionary 0.000
isa MEANING
word "missionary"

(P process-first-noun
=goal>
ISA comprehend-sentence
agent nil  neither agent nor action
action nil has been assigned

word =y
=retrieval>
ISA meaning  word meaning has been
retrieved
word =y
==>
=goal>
agent =retrieval  assign meaning to agent
and set to process next word
word nil
)
Three More Words in the life of ACT-R: 950 msec.

Processing “was”
Time 64.600: Find-Next-Word Selected
Time 64.650: Find-Next-Word Fired
Time 64.650: Module :VISION running command FIND-LOCATION
Time 64.650: Attend-Next-Word Selected
Time 64.700: Attend-Next-Word Fired
Time 64.700: Module :VISION running command MOVE-ATTENTION
Time 64.750: Module :VISION running command FOCUS-ON
Time 64.750: Read-Word Selected
Time 64.800: Read-Word Fired
Time 64.800: Failure Retrieved
Time 64.800: Skip-Was Selected
Time 64.850: Skip-Was Fired

Processing “feared”
Time 64.850: Find-Next-Word Selected
Time 64.900: Find-Next-Word Fired
Time 64.900: Module :VISION running command FIND-LOCATION
Time 64.900: Attend-Next-Word Selected
Time 64.950: Attend-Next-Word Fired
Time 64.950: Module :VISION running command MOVE-ATTENTION
Time 65.000: Module :VISION running command FOCUS-ON
Time 65.000: Read-Word Selected
Time 65.050: Read-Word Fired
Time 65.250: Fear Retrieved
Time 65.250: Process-Verb Selected
Time 65.300: Process-Verb Fired

Processing “by”
Time 65.300: Find-Next-Word Selected
Time 65.350: Find-Next-Word Fired
Time 65.350: Module :VISION running command FIND-LOCATION
Time 65.350: Attend-Next-Word Selected
Time 65.400: Attend-Next-Word Fired
Time 65.400: Module :VISION running command MOVE-ATTENTION
Time 65.450: Module :VISION running command FOCUS-ON
Time 65.450: Read-Word Selected
Time 65.500: Read-Word Fired
Time 65.500: Failure Retrieved
Time 65.500: Skip-By Selected
Time 65.550: Skip-By Fired
Reinterpreting the Passive

(P skip-by
=goal>
ISA comprehend-sentence
word "by"
agent =per
==>
=goal>
word nil
object =per
agent nil
)
Two More Words in the life of ACT-R: 700 msec.

Processing “the”
Time 65.550: Find-Next-Word Selected
Time 65.600: Find-Next-Word Fired
Time 65.600: Module :VISION running command FIND-LOCATION
Time 65.600: Attend-Next-Word Selected
Time 65.650: Attend-Next-Word Fired
Time 65.650: Module :VISION running command MOVE-ATTENTION
Time 65.700: Module :VISION running command FOCUS-ON
Time 65.700: Read-Word Selected
Time 65.750: Read-Word Fired
Time 65.750: Failure Retrieved
Time 65.750: Skip-The Selected
Time 65.800: Skip-The Fired
Processing “cannibal”
Time 65.800: Find-Next-Word Selected
Time 65.850: Find-Next-Word Fired
Time 65.850: Module :VISION running command FIND-LOCATION
Time 65.850: Attend-Next-Word Selected
Time 65.900: Attend-Next-Word Fired
Time 65.900: Module :VISION running command MOVE-ATTENTION
Time 65.950: Module :VISION running command FOCUS-ON
Time 65.950: Read-Word Selected
Time 66.000: Read-Word Fired
Time 66.200: Cannibal Retrieved
Time 66.200: Process-Last-Word-Agent Selected
Time 66.250: Process-Last-Word-Agent Fired
Retrieving a Memory: 250 msec

Time 66.250: Retrieve-Answer Selected


Time 66.300: Retrieve-Answer Fired
Time 66.500: Goal123032 Retrieved

(P retrieve-answer
=goal>
ISA comprehend-sentence
agent =agent  sentence processing complete
action =verb
object =object
purpose test
==>
=goal>
purpose retrieve-test  update state
+retrieval>
ISA comprehend-sentence
action =verb  retrieve sentence involving verb
purpose study
)
Generating a Response: 410 ms.
Time 66.500: Answer-No Selected
Time 66.700: Answer-No Fired
Time 66.700: Module :MOTOR running command PRESS-KEY
Time 66.850: Module :MOTOR running command PREPARATION-COMPLETE
Time 66.910: Device running command OUTPUT-KEY

(P answer-no
=goal>
ISA comprehend-sentence
agent =agent  ready to test
action =verb
object =object
purpose retrieve-test
=retrieval>
ISA comprehend-sentence
- agent =agent  retrieved sentence does not
action =verb  match agent or object
- object =object
purpose study
==>
=goal>
purpose done  update state
+manual>
ISA press-key  indicate no
key "d"
)
Subsymbolic Level

The subsymbolic level reflects an analytic characterization of connectionist


computations. These computations have been implemented in ACT-RN
(Lebiere & Anderson, 1993) but this is not a practical modeling system.

1. Production Utilities are responsible for determining which productions get selected
when there is a conflict.

2. Production Utilities have been considerably simplified in ACT-R 5.0 over ACT-R 4.0.

3. Chunk Activations are responsible for determining which chunks (if any) get
retrieved and how long it takes to retrieve them.

4. Chunk Activations have been simplified in ACT-R 5.0 and a major step has been
taken towards the goal of parameter-free predictions by fixing a number of the
parameters.

As with the symbolic level, the subsymbolic level is not a static level, but is changing
in the light of experience. Subsymbolic learning allows the system to adapt to the
statistical structure of the environment.
Activation

A_i = B_i + Σ_j W_j·S_ji + Σ_k MP_k·Sim_kl + N(0,s)

[Diagram: Chunk i is the addition fact with addend1 Three, addend2 Four, and sum Seven. It receives base-level activation B_i and associative strengths S_ji from the goal sources Three and Four; similarities Sim_kl link the values requested by the production to the values in the chunk:]

=Goal>                     +Retrieval>
   isa write                  isa addition-fact
   relation sum               addend1 Three
   arg1 Three                 addend2 Four
   arg2 Four
(Conditions)               (Actions)
Chunk Activation

activation = base activation
           + (source activation × associative strength)
           + (mismatch penalty × similarity value)
           + noise

A_i = B_i + Σ_j W_j·S_ji + Σ_k MP_k·Sim_kl + N(0,s)

Activation makes chunks available to the degree that past experiences indicate that they will be useful at the particular moment:

Base-level: general past usefulness
Associative Activation: relevance to the general context
Matching Penalty: relevance to the specific match required
Noise: stochasticity is useful to avoid getting stuck in local minima
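As a minimal sketch, the activation equation above can be computed directly. All numbers here are hypothetical illustrations, not values from any fitted model:

```python
def activation(base, source_acts, assoc_strengths, mp, similarities, noise=0.0):
    """A_i = B_i + sum_j W_j*S_ji + sum_k MP*Sim_kl + noise."""
    spreading = sum(w * s for w, s in zip(source_acts, assoc_strengths))
    mismatch = sum(mp * sim for sim in similarities)
    return base + spreading + mismatch + noise

# Hypothetical retrieval of FACT3+4 with Three and Four as goal sources:
# W_j = 0.5 each, S_ji = 2.0 each, and a perfect match (similarities all 0).
A = activation(base=0.5, source_acts=[0.5, 0.5],
               assoc_strengths=[2.0, 2.0], mp=1.0, similarities=[0.0, 0.0])
# A = 0.5 + (0.5*2.0 + 0.5*2.0) + 0 = 2.5
```

A mismatching slot (a negative similarity) simply lowers the same sum, which is how partial matching trades off against base-level and spreading activation.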
Activation, Latency and Probability

• Retrieval time for a chunk is a negative exponential function of its activation:

    Time_i = F · e^(-A_i)

• Probability of retrieval of a chunk follows the Boltzmann (softmax) distribution:

    P_i = e^(A_i/t) / Σ_j e^(A_j/t)    with    t = √2·s = √6·σ/π

• The chunk with the highest activation is retrieved, provided that it reaches the retrieval threshold τ.

• For purposes of latency and probability, the threshold can be considered as a virtual chunk.
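These two relations can be sketched as follows; the latency factor F and noise parameter s are hypothetical values, not fitted ones:

```python
import math

F = 1.0   # latency factor (hypothetical value)
s = 0.25  # activation noise parameter (hypothetical value)
t = math.sqrt(2) * s  # temperature: t = sqrt(2) * s

def retrieval_time(activation):
    """Time_i = F * exp(-A_i): higher activation means faster retrieval."""
    return F * math.exp(-activation)

def retrieval_probs(activations):
    """Boltzmann (softmax): P_i = exp(A_i/t) / sum_j exp(A_j/t)."""
    weights = [math.exp(a / t) for a in activations]
    total = sum(weights)
    return [w / total for w in weights]
```

For example, `retrieval_probs([1.2, 0.8, 0.3])` assigns nearly all probability to the first chunk at this temperature, while a larger s flattens the distribution.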
Base-level Activation

activation = base activation

A_i = B_i

The base-level activation B_i of chunk C_i reflects a context-independent estimation of how likely C_i is to match a production, i.e. B_i is an estimate of the log odds that C_i will be used:

    B_i = ln( P(C_i) / (1 - P(C_i)) )

Two factors determine B_i:
• frequency of using C_i
• recency with which C_i was used
Source Activation

+ (source activation × associative strength)

+ Σ_j W_j · S_ji

The source activations W_j reflect the amount of attention given to elements, i.e. fillers, of the current goal. ACT-R assumes a fixed capacity for source activation:

    W = Σ_j W_j

W reflects an individual difference parameter.
Associative Strengths

+ (source activation × associative strength)

+ Σ_j W_j · S_ji

The association strength S_ji between chunks C_j and C_i is a measure of how often C_i was needed (retrieved) when C_j was an element of the goal, i.e. S_ji estimates the log likelihood ratio of C_j being a source of activation if C_i was retrieved:

    S_ji = ln( P(N_i | C_j) / P(N_i) )

In practice this is approximated as S_ji ≈ S - ln(fan_j), where fan_j is the number of facts associated with C_j.
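The fan approximation can be sketched in a few lines; the maximum strength S is a hypothetical value:

```python
import math

S = 2.0  # maximum associative strength (hypothetical value)

def assoc_strength(fan):
    """S_ji ~ S - ln(fan_j): the more facts a source chunk C_j
    participates in, the less evidence it provides for any one of them."""
    return S - math.log(fan)

# A concept appearing in one fact vs. three facts:
strong = assoc_strength(1)  # S - ln(1) = S
weak = assoc_strength(3)    # S - ln(3), lower
```

This falloff with fan is what produces the slower retrievals in the fan-effect application that follows.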
Application: Fan Effect
Partial Matching

+ (mismatch penalty × similarity value)

+ Σ_k MP_k · Sim_kl

• The mismatch penalty is a measure of the amount of control over memory retrieval: MP = 0 is free association; MP very large means perfect matching; intermediate values allow some mismatching in search of a memory match.

• Similarity values compare the desired value k specified by the production with the actual value l present in the retrieved chunk. This provides generalization properties similar to those in neural networks; the similarity value is essentially equivalent to the dot-product between distributed representations.
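The role of the mismatch penalty can be sketched with hypothetical numbers:

```python
def mismatch_activation(base, mp, similarities):
    """Activation with only the partial-matching term:
    A = B + sum_k MP * Sim_kl, where Sim = 0 for a perfect slot match
    and negative for mismatches."""
    return base + sum(mp * sim for sim in similarities)

# Hypothetical values: with MP = 0 a mismatching chunk keeps its base
# activation (free association); a large MP effectively excludes it.
loose = mismatch_activation(1.0, 0.0, [-0.5])    # -> 1.0
strict = mismatch_activation(1.0, 10.0, [-0.5])  # -> -4.0
```

An intermediate MP lets a near-miss chunk (e.g. a neighboring addition fact) occasionally win the retrieval, which is how the model produces the off-by-one errors in the cognitive arithmetic data below.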
Application: Cognitive Arithmetic

Table 3.1
Data from Siegler & Shrager (1984) and ACT-R's Predictions
(proportion of each answer; "Other" includes retrieval failure)

Data
Problem    0    1    2    3    4    5    6    7    8   Other
1+1        -   .05  .86   -   .02   -   .02   -    -   .06
1+2        -   .04  .07  .75  .04   -   .02   -    -   .09
1+3        -   .02   -   .10  .75  .05  .01  .03   -   .06
2+2       .02   -   .04  .05  .80  .04   -   .05   -    -
2+3        -    -   .07  .09  .25  .45  .08  .01  .01  .06
3+3       .04   -    -   .05  .21  .09  .48   -   .02  .11

Predictions
Problem    0    1    2    3    4    5    6    7    8   Other
1+1        -   .10  .75  .10  .01   -    -    -    -   .04
1+2        -   .01  .10  .75  .10   -    -    -    -   .04
1+3        -    -   .01  .10  .78  .06   -    -    -   .04
2+2        -    -   .01  .10  .82  .02   -    -    -   .04
2+3        -    -    -   .03  .32  .45  .06  .01   -   .13
3+3        -    -    -   .04  .04  .08  .61  .08  .01  .18
Noise

+ noise

+N(0,s)

• Noise provides the essential stochasticity of human behavior


• Noise also provides a powerful way of exploring the world
• Activation noise is composed of two noises:
• A permanent noise accounting for encoding variability
• A transient noise for moment-to-moment variation
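A sketch of this two-component noise, assuming the logistic distribution ACT-R uses for activation noise and hypothetical scale parameters:

```python
import math
import random

def logistic_noise(scale, rng=random):
    """Sample from a logistic distribution with the given scale
    parameter via inverse-CDF sampling."""
    p = rng.random()
    return scale * math.log(p / (1.0 - p))

def activation_noise(s_permanent, s_transient, rng=random):
    """Permanent (per-chunk, encoding variability) plus transient
    (per-retrieval, moment-to-moment) noise components."""
    return logistic_noise(s_permanent, rng) + logistic_noise(s_transient, rng)
```

Both components are zero-mean; in a full model the permanent draw would be made once per chunk and the transient draw on every retrieval attempt.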
Application: Paper Rocks Scissors
(Lebiere & West, 1999)
• Too little noise makes the system too deterministic.
• Too much noise makes the system too random.
• This is not limited to game-playing situations!
[Line graphs: "Effect of Noise (Lag2 Against Lag2)" plots the score difference (high noise - low noise) against noise level for Lag2 noise = 0, 0.1, and 0.25; "Effect of Noise (Lag2 Against Lag1)" plots the score difference (Lag2 - Lag1) against the noise level of the Lag1 model.]

Base-Level Learning

Based on the Rational Analysis of the Environment (Schooler & Anderson, 1997)

Base-level activation reflects the log odds that a chunk will be needed. In the environment, the odds that a fact will be needed decay as a power function of how long it has been since it was last used. The effects of multiple uses sum in determining the odds of being used.

Base-Level Learning Equation:

    B_i = ln( Σ_{j=1}^{n} t_j^(-d) )

which can be approximated, keeping the k most recent uses, as

    B_i ≈ ln( Σ_{j=1}^{k} t_j^(-d) + (n-k)·(L^(1-d) - t_k^(1-d)) / ((1-d)·(L - t_k)) )

    ≈ ln( n / (1-d) ) - d·ln(L)

where t_j is the time since the j-th use and L is the lifetime of the chunk.

[Graph: activation level declining from about 1.5 to -1.5 over 0-200 seconds.]

Note: The decay parameter d has been set to .5 in most ACT-R models.
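The exact form of the equation can be sketched directly; the usage times are hypothetical:

```python
import math

d = 0.5  # decay parameter, conventionally fixed at .5

def base_level(use_times, now):
    """Exact base-level learning: B_i = ln( sum_j t_j^-d ),
    where t_j is the time elapsed since the j-th use of the chunk."""
    return math.log(sum((now - t) ** -d for t in use_times))

# A chunk used at 10 s and 60 s, evaluated at 110 s (hypothetical times):
B = base_level([10.0, 60.0], now=110.0)
```

Each use contributes a term that decays as a power function, so both frequency (more terms) and recency (larger terms) raise B_i, exactly the two factors listed above.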
Paired Associate: Study
Time 5.000: Find Selected
Time 5.050: Module :VISION running command FIND-LOCATION
Time 5.050: Find Fired
Time 5.050: Attend Selected
Time 5.100: Module :VISION running command MOVE-ATTENTION
Time 5.100: Attend Fired
Time 5.150: Module :VISION running command FOCUS-ON
Time 5.150: Associate Selected
Time 5.200: Associate Fired

(p associate
=goal>
isa goal
arg1 =stimulus  attending word during study
step attending
state study
=visual>
isa text
value =response  visual buffer holds response
status nil
==>
=goal>
isa goal
arg2 =response  store response in goal with stimulus
step done
+goal>
isa goal
state test  prepare for next trial
step waiting)
Paired Associate: Successful Recall

Time 10.000: Find Selected


Time 10.050: Module :VISION running command FIND-LOCATION
Time 10.050: Find Fired
Time 10.050: Attend Selected
Time 10.100: Module :VISION running command MOVE-ATTENTION
Time 10.100: Attend Fired
Time 10.150: Module :VISION running command FOCUS-ON
Time 10.150: Read-Stimulus Selected
Time 10.200: Read-Stimulus Fired
Time 10.462: Goal Retrieved
Time 10.462: Recall Selected
Time 10.512: Module :MOTOR running command PRESS-KEY
Time 10.512: Recall Fired
Time 10.762: Module :MOTOR running command PREPARATION-COMPLETE
Time 10.912: Device running command OUTPUT-KEY
Paired Associate: Successful Recall (cont.)

(p read-stimulus (p recall
=goal> =goal>
isa goal isa goal
step attending relation associate
state test arg1 =val
=visual> step testing
isa text =retrieval>
value =val isa goal
==> relation associate
+retrieval> arg1 =val
isa goal arg2 =ans
relation associate ==>
arg1 =val +manual>
=goal> isa press-key
isa goal key =ans
relation associate =goal>
arg1 =val step waiting)
step testing)
Paired Associate Example

Trial   Data Accuracy   Data Latency   Predicted Accuracy   Predicted Latency
1       .000            0.000          .000                 0.000
2       .526            2.156          .515                 2.102
3       .667            1.967          .570                 1.730
4       .798            1.762          .740                 1.623
5       .887            1.680          .850                 1.584
6       .924            1.552          .865                 1.508
7       .958            1.467          .895                 1.552
8       .954            1.402          .930                 1.462
? (collect-data 10)  Note simulated runs show random fluctuation.

ACCURACY
(0.0 0.515 0.570 0.740 0.850 0.865 0.895 0.930)
CORRELATION: 0.996
MEAN DEVIATION: 0.053

LATENCY
(0 2.102 1.730 1.623 1.589 1.508 1.552 1.462)
CORRELATION: 0.988
MEAN DEVIATION: 0.112
NIL
Production Utility

Making Choices: Conflict Resolution

Expected Gain = E = PG - C
    P is expected probability of success
    G is value of goal
    C is expected cost

Probability of choosing i = e^(E_i/t) / Σ_j e^(E_j/t)
    t reflects noise in evaluation and is like temperature in the Boltzmann equation

P = Successes / (Successes + Failures)

Successes = α + m     α is prior successes, m is experienced successes
Failures  = β + n     β is prior failures, n is experienced failures
Building Sticks Task (Lovett)

[Diagram: INITIAL STATE shows a desired stick, a current stick, and building sticks a, b, c. The three possible first moves lead to states labeled UNDERSHOOT, OVERSHOOT, and UNDERSHOOT.]

Reinforcement schedule:

                     Undershoot More Successful   Overshoot More Successful
Looks Undershoot:    Undershoot 10                Undershoot 10 (5)
                     Overshoot   0                Overshoot  10 (15)
Looks Overshoot:     Undershoot 10 (15)           Undershoot  0
                     Overshoot  10 (5)            Overshoot  10

Lovett & Anderson, 1996
Lovett & Anderson, 1996
Observed Data

[Line graphs: proportion choice of the more successful operator after 0, 1, or 3 prior test problems, plotted against test problem bias (High Against, Low Against, Neutral, Low Toward, High Toward), for the Biased Condition (2/3) and the Extreme-Biased Condition (5/6).]
Predictions of Decay-Based ACT-R

[Line graphs: predicted proportion choice of the more successful operator for the same conditions and test problem biases as the observed data.]


Building Sticks Demo

Decide-Under
If the goal is to solve the BST task
and the undershoot difference is less than the overshoot difference
Then choose undershoot.

Decide-Over
If the goal is to solve the BST task
and the overshoot difference is less than the undershoot difference
Then choose overshoot.

Force-Under
If the goal is to solve the BST task
Then choose undershoot.

Force-Over
If the goal is to solve the BST task
Then choose overshoot.

Web Addresses:
ACT-R Home Page
Published ACT-R Models
Atomic Components of Thought, Chapter 4
Building Sticks Model
ACT-R model probabilities before and after problem-solving experience in Experiment 3 (Lovett & Anderson, 1996)

Production                                          Prior Probability   Final Value
                                                    of Success          67% Cond.   83% Cond.
Force-Under  (More Successful, Context Free)            .50                .60          .71
Force-Over   (Less Successful, Context Free)            .50                .38          .27
Decide-Under (More Successful, Context Sensitive)       .96                .98          .98
Decide-Over  (Less Successful, Context Sensitive)       .96                .63          .54
Decay of Experience

Success Discounting:

    Successes(t) = Σ_{j=1}^{m} t_j^(-d)

Failure Discounting:

    Failures(t) = Σ_{j=1}^{n} t_j^(-d)

[Line graph: probability of choosing the more successful operator as a function of the outcomes on trials N-2 and N-1 (U,U; O,U; U,O; O,O), comparing the data, a decay model, and a no-decay model.]

Note: Such temporal weighting is critical in the real world.


Production Compilation: The Basic Idea

(p read-stimulus
=goal>
isa goal
step attending
state test
=visual>
isa text
value =val
==>
+retrieval>
isa goal
relation associate
arg1 =val
=goal>
relation associate
arg1 =val
step testing)

(p recall
=goal>
isa goal
relation associate
arg1 =val
step testing
=retrieval>
isa goal
relation associate
arg1 =val
arg2 =ans
==>
+manual>
isa press-key
key =ans
=goal>
step waiting)

These two productions compile into one specific production:

(p recall-vanilla
=goal>
isa goal
step attending
state test
=visual>
isa text
value "vanilla"
==>
+manual>
isa press-key
key "7"
=goal>
relation associate
arg1 "vanilla"
step waiting)
Production Compilation: The Principles

1. Perceptual-Motor Buffers: Avoid compositions that will result in


jamming when one tries to build two operations on the same buffer
into the same production.

2. Retrieval Buffer: Except for failure tests proceduralize out and


build more specific productions.

3. Goal Buffers: Complex Rules describing merging.

4. Safe Productions: Production will not produce any result that the
original productions did not produce.

5. Parameter Setting:
Successes = P * *initial-experience*
Failures = (1-P) * *initial-experience*
Efforts = (Successes + Failures) * (C + *cost-penalty*)
Production Compilation: The Successes

1. Taatgen: Learning of inflection (English past and German plural).


Shows that production compilation can come up with generalizations.

2. Taatgen: Learning of air-traffic control task – shows that production compilation


can deal with complex perceptual motor skill.

3. Anderson: Learning of productions for performing paired associate task from


instructions. Solves mystery of where the productions for doing an experiment
come from.

4. Anderson: Learning to perform an anti-air warfare coordinator task from


instructions. Shows the same as 2 & 3.

5. Anderson: Learning in the fan effect that produces the interaction between fan
and practice. Justifies a major simplification in the parameterization of
productions – no strength separate from utility.

Note that all of these examples involve all forms of learning in ACT-R occurring
simultaneously: acquiring new chunks, acquiring new productions, activation
learning, and utility learning.
Predicting fMRI BOLD Response from
Buffer Activity

Example: Retrieval buffer activity during equation-solving predicts activity in left dorsolateral prefrontal cortex.

    BR(t) = Σ_i .344 · D_i · (t - t_i)² · e^(-(t - t_i)/2)

where D_i is the duration of the ith retrieval and t_i is the time of initiation of the retrieval.
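The prediction can be sketched directly from this equation; the retrieval durations and onsets below are hypothetical, not taken from the trace:

```python
import math

def bold_response(t, retrievals, magnitude=0.344):
    """BR(t) = sum_i m * D_i * (t - t_i)^2 * exp(-(t - t_i)/2),
    where D_i is the duration and t_i the onset of the i-th retrieval.
    Only retrievals that began before time t contribute."""
    total = 0.0
    for duration, onset in retrievals:
        elapsed = t - onset
        if elapsed > 0:
            total += magnitude * duration * elapsed ** 2 * math.exp(-elapsed / 2)
    return total

# Hypothetical trace: retrievals of 0.26 s and 0.6 s starting at 3.0 s
# and 4.4 s into the trial, sampled at the 1.5 s scan times:
curve = [bold_response(scan * 1.5, [(0.26, 3.0), (0.6, 4.4)])
         for scan in range(1, 15)]
```

Each retrieval contributes a gamma-shaped bump peaking about 4 s after its onset, so the predicted scan-by-scan curve lags and smooths the underlying buffer activity.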
21 Second Structure of fMRI Trial

[Timeline: Load (a=18, b=6, c=5) -> Equation (cx+3=a) -> Blank Period, recorded in 1.5 second scans.]


Solving 5x + 3 = 18

Time 3.000: Find-Right-Term Selected
Time 3.050: Find-Right-Term Fired
Time 3.050: Module :VISION running command FIND-LOCATION
Time 3.050: Attend-Next-Term-Equation Selected
Time 3.100: Attend-Next-Term-Equation Fired
Time 3.100: Module :VISION running command MOVE-ATTENTION
Time 3.150: Module :VISION running command FOCUS-ON
Time 3.150: Encode Selected
Time 3.200: Encode Fired
Time 3.281: 18 Retrieved
Time 3.281: Process-Value-Integer Selected
Time 3.331: Process-Value-Integer Fired
Time 3.331: Module :VISION running command FIND-LOCATION
Time 3.331: Attend-Next-Term-Equation Selected
Time 3.381: Attend-Next-Term-Equation Fired
Time 3.381: Module :VISION running command MOVE-ATTENTION
Time 3.431: Module :VISION running command FOCUS-ON
Time 3.431: Encode Selected
Time 3.481: Encode Fired
Time 3.562: 3 Retrieved
Time 3.562: Process-Op1-Integer Selected
Time 3.612: Process-Op1-Integer Fired
Time 3.612: Module :VISION running command FIND-LOCATION
Time 3.612: Attend-Next-Term-Equation Selected
Time 3.662: Attend-Next-Term-Equation Fired
Time 3.662: Module :VISION running command MOVE-ATTENTION
Time 3.712: Module :VISION running command FOCUS-ON
Time 3.712: Encode Selected
Solving 5x + 3 = 18 (cont.)

Time 4.412: Process-Operator Fired
Time 5.012: F318 Retrieved
Time 5.012: Finish-Operation1 Selected
Time 5.062: Finish-Operation1 Fired
Time 5.062: Module :VISION running command FIND-LOCATION
Time 5.062: Attend-Next-Term-Equation Selected
Time 5.112: Attend-Next-Term-Equation Fired
Time 5.112: Module :VISION running command MOVE-ATTENTION
Time 5.162: Module :VISION running command FOCUS-ON
Time 5.162: Encode Selected
Time 5.212: Encode Fired
Time 5.293: 5 Retrieved
Time 5.293: Process-Op2-Integer Selected
Time 5.343: Process-Op2-Integer Fired
Time 5.943: F315 Retrieved
Time 5.943: Finish-Operation2 Selected
Time 5.993: Finish-Operation2 Fired
Time 5.993: Retrieve-Key Selected
Time 6.043: Retrieve-Key Fired
Time 6.124: 3 Retrieved
Time 6.124: Generate-Answer Selected
Time 6.174: Generate-Answer Fired
Time 6.174: Module :MOTOR running command PRESS-KEY
Time 6.424: Module :MOTOR running command PREPARATION-COMPLETE
Time 6.574: Device running command OUTPUT-KEY
("3" 3.574)
Left Dorsolateral Prefrontal Cortex

[Graph: BOLD response for the two equation types, 5x + 3 = 18 and cx + 3 = a, in left dorsolateral prefrontal cortex; percent activation change (0.0 to 0.6) across scans 1-14 (1.5 sec each).]
