Tutorial
24th Annual Conference
Cognitive Science Society
Christian Lebiere
Human Computer Interaction Institute
Carnegie Mellon University
Pittsburgh, PA 15213
cl@cmu.edu
[Figure: two panels comparing Baseline and Driving performance across dialing conditions (No Dialing, Full-, Speed-, Full-Voice, Speed-); left y-axis 0 to 8, right y-axis 0.00 to 0.60]
5. Learning.
History of the ACT-framework
[Diagram: the production cycle mapped onto the basal ganglia: Matching (Striatum), Selection (Pallidum), Execution (Thalamus); Productions interact with the Environment]
ACT-R: Knowledge Representation
Declarative-Procedural Distinction
Performance
  Declarative: Retrieval of Symbolic Chunks; Noisy Activations Control Speed and Accuracy (Subsymbolic)
  Procedural:  Application of Production Rules; Noisy Utilities Control Choice (Subsymbolic)

Learning
  Declarative: Symbolic Encoding of Environment and Caching of Goals; Bayesian Subsymbolic Learning
  Procedural:  Symbolic Production Compilation; Bayesian Subsymbolic Learning
Chunks: Example
(FACT3+4
   isa ADDITION-FACT
   ADDEND1 THREE
   ADDEND2 FOUR
   SUM SEVEN
)
Chunks: Example
(CLEAR-ALL)
(CHUNK-TYPE addition-fact addend1 addend2 sum)
(CHUNK-TYPE integer value)
(ADD-DM (fact3+4
           isa addition-fact
           addend1 three
           addend2 four
           sum seven)
        (three
           isa integer
           value 3)
        (four
           isa integer
           value 4)
        (seven
           isa integer
           value 7))
Chunks: Example
[Diagram: network view of FACT3+4, an ADDITION-FACT chunk whose ADDEND1 slot points to THREE, ADDEND2 to FOUR, and SUM to SEVEN; THREE and SEVEN are INTEGER chunks with values 3 and 7]
Chunks: Exercise I
Encoding a proposition as chunks:

(Chunk-Type proposition agent action object)
(Chunk-Type cat legs color)
(Add-DM
 (fact007 isa proposition
    agent cat007
    action sits_on
    object mat)
 (cat007 isa cat
    legs 5
    color black)
)

[Diagram: network with fact007 linked by agent, action, and object to cat007, sits_on, and mat; cat007 linked by legs and color to 5 and black]
Chunks: Exercise III
Structure of productions
(p name
   <specification of condition part: buffer tests>
==>                          ; delimiter
   <specification of action part: buffer transformations>
)
ACT-R 5.0 Buffers
1. Goal Buffer (=goal, +goal)
-represents where one is in the task
-preserves information across production cycles
2. Retrieval Buffer (=retrieval, +retrieval)
-holds information retrieved from declarative memory
-seat of activation computations
3. Visual Buffers
-location (=visual-location, +visual-location)
-visual objects (=visual, +visual)
-attention switch corresponds to buffer transformation
4. Auditory Buffers (=aural, +aural)
-analogous to visual
5. Manual Buffers (=manual, +manual)
-elaborate theory of manual movement includes feature
preparation, Fitts' Law, and device properties
6. Vocal Buffers (=vocal, +vocal)
-analogous to manual buffers but less well developed
Model for Anderson (1974)
DATA: Studied-form/Test-form
Predictions:
           Active-active   Active-passive   Passive-active   Passive-passive
Targets:   2.36            2.86             2.36             2.86
Foils:     2.51            3.01             2.51             3.01
CORRELATION: 0.978
MEAN DEVIATION: 0.072
250 msec in the life of ACT-R:
Reading the Word “The”
Attending to Word
Time 63.950: Attend-Next-Word Selected
Time 64.000: Attend-Next-Word Fired
Time 64.000: Module :VISION running command MOVE-ATTENTION
Time 64.050: Module :VISION running command FOCUS-ON
Encoding Word
Time 64.050: Read-Word Selected
Time 64.100: Read-Word Fired
Time 64.100: Failure Retrieved
Skipping The
Time 64.100: Skip-The Selected
Time 64.150: Skip-The Fired
Attending to a Word in Two Productions
(P find-next-word
   =goal>
      ISA comprehend-sentence
      word nil                    ; no word currently being processed
==>
   +visual-location>
      ISA visual-location         ; find left-most unattended location
      screen-x lowest
      attended nil
   =goal>
      word looking                ; update state
)
(P attend-next-word
   =goal>
      ISA comprehend-sentence     ; looking for a word
      word looking
   =visual-location>
      ISA visual-location         ; visual location has been identified
==>
   =goal>
      word attending              ; update state
   +visual>
      ISA visual-object           ; attend to object in that location
      screen-pos =visual-location
)
Processing “The” in Two Productions
(P read-word
   =goal>
      ISA comprehend-sentence
      word attending              ; attending to a word
   =visual>
      ISA text
      value =word                 ; word has been identified
      status nil
==>
   =goal>
      word =word                  ; hold word in goal buffer
   +retrieval>
      ISA meaning
      word =word                  ; retrieve word's meaning
)

(P skip-the
   =goal>
      ISA comprehend-sentence
      word "the"                  ; the word is "the"
==>
   =goal>
      word nil                    ; set to process next word
)
Processing “missionary” in 450 msec.
Attending to Word
Time 64.200: Attend-Next-Word Selected
Time 64.250: Attend-Next-Word Fired
Time 64.250: Module :VISION running command MOVE-ATTENTION
Time 64.300: Module :VISION running command FOCUS-ON
Encoding Word
Time 64.300: Read-Word Selected
Time 64.350: Read-Word Fired
Time 64.550: Missionary Retrieved
Missionary 0.000
isa MEANING
word "missionary"
(P process-first-noun
   =goal>
      ISA comprehend-sentence
      agent nil                   ; neither agent nor action
      action nil                  ; has been assigned
      word =y
   =retrieval>
      ISA meaning                 ; word meaning has been retrieved
      word =y
==>
   =goal>
      agent =retrieval            ; assign meaning to agent
      word nil                    ; and set to process next word
)
Three More Words in the life of ACT-R: 950 msec.
Processing “feared”
Time 64.850: Find-Next-Word Selected
Time 64.900: Find-Next-Word Fired
Time 64.900: Module :VISION running command FIND-LOCATION
Time 64.900: Attend-Next-Word Selected
Time 64.950: Attend-Next-Word Fired
Time 64.950: Module :VISION running command MOVE-ATTENTION
Time 65.000: Module :VISION running command FOCUS-ON
Time 65.000: Read-Word Selected
Time 65.050: Read-Word Fired
Time 65.250: Fear Retrieved
Time 65.250: Process-Verb Selected
Time 65.300: Process-Verb Fired
Reinterpreting the Passive
(P skip-by
   =goal>
      ISA comprehend-sentence
      word "by"
      agent =per
==>
   =goal>
      word nil
      object =per
      agent nil
)
Two More Words in the life of ACT-R: 700 msec.
Processing “the”
Time 65.550: Find-Next-Word Selected
Time 65.600: Find-Next-Word Fired
Time 65.600: Module :VISION running command FIND-LOCATION
Time 65.600: Attend-Next-Word Selected
Time 65.650: Attend-Next-Word Fired
Time 65.650: Module :VISION running command MOVE-ATTENTION
Time 65.700: Module :VISION running command FOCUS-ON
Time 65.700: Read-Word Selected
Time 65.750: Read-Word Fired
Time 65.750: Failure Retrieved
Time 65.750: Skip-The Selected
Time 65.800: Skip-The Fired
Processing “cannibal”
Time 65.800: Find-Next-Word Selected
Time 65.850: Find-Next-Word Fired
Time 65.850: Module :VISION running command FIND-LOCATION
Time 65.850: Attend-Next-Word Selected
Time 65.900: Attend-Next-Word Fired
Time 65.900: Module :VISION running command MOVE-ATTENTION
Time 65.950: Module :VISION running command FOCUS-ON
Time 65.950: Read-Word Selected
Time 66.000: Read-Word Fired
Time 66.200: Cannibal Retrieved
Time 66.200: Process-Last-Word-Agent Selected
Time 66.250: Process-Last-Word-Agent Fired
Retrieving a Memory: 250 msec
(P retrieve-answer
   =goal>
      ISA comprehend-sentence
      agent =agent                ; sentence processing complete
      action =verb
      object =object
      purpose test
==>
   =goal>
      purpose retrieve-test       ; update state
   +retrieval>
      ISA comprehend-sentence
      action =verb                ; retrieve sentence involving verb
      purpose study
)
Generating a Response: 410 ms.
Time 66.500: Answer-No Selected
Time 66.700: Answer-No Fired
Time 66.700: Module :MOTOR running command PRESS-KEY
Time 66.850: Module :MOTOR running command PREPARATION-COMPLETE
Time 66.910: Device running command OUTPUT-KEY
(P answer-no
   =goal>
      ISA comprehend-sentence     ; ready to test
      agent =agent
      action =verb
      object =object
      purpose retrieve-test
   =retrieval>
      ISA comprehend-sentence
    - agent =agent                ; retrieved sentence does not
      action =verb                ; match agent or object
    - object =object
      purpose study
==>
   =goal>
      purpose done                ; update state
   +manual>
      ISA press-key               ; indicate no
      key "d"
)
Subsymbolic Level
1. Production Utilities are responsible for determining which productions get selected
when there is a conflict.
2. Production Utilities have been considerably simplified in ACT-R 5.0 over ACT-R 4.0.
3. Chunk Activations are responsible for determining which chunks (if any) get
retrieved and how long it takes to retrieve them.
4. Chunk Activations have been simplified in ACT-R 5.0 and a major step has been
taken towards the goal of parameter-free predictions by fixing a number of the
parameters.
As with the symbolic level, the subsymbolic level is not a static level, but is changing
in the light of experience. Subsymbolic learning allows the system to adapt to the
statistical structure of the environment.
Activation
A_i = B_i + Σ_j W_j · S_ji + Σ_k MP_k · Sim_kl + N(0, s)
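As an illustrative sketch of this equation (plain Python, not ACT-R code; all numeric values below are invented for the example):

```python
import random

def activation(B_i, sources, mismatches, s=0.1):
    """A_i = B_i + sum_j W_j*S_ji + sum_k MP_k*Sim_kl + N(0, s).

    sources:    (W_j, S_ji) pairs for chunks in the goal buffer
    mismatches: (MP_k, Sim_kl) pairs for partially matched slots
                (Sim_kl is 0 for a perfect match, negative otherwise)
    s:          standard deviation of the Gaussian noise
    """
    spreading = sum(w * sji for w, sji in sources)
    penalty = sum(mp * sim for mp, sim in mismatches)
    return B_i + spreading + penalty + random.gauss(0, s)

# Invented numbers: base level 0.5, two goal sources, one mismatched slot
a = activation(0.5, sources=[(0.5, 2.0), (0.5, 1.5)],
               mismatches=[(1.0, -0.9)], s=0.0)
print(round(a, 3))  # 1.35
```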
[Diagram: sources of activation for chunk i (an addition fact with Addend1 Three, Addend2 Four, Sum Seven): base-level B_i; associative strengths S_ji spreading from Three and Four in the goal (=Goal> isa write, relation sum, arg1 Three, arg2 Four); similarities Sim_kl to the retrieval request (+Retrieval> isa additionfact, addend1 Three, addend2 Four)]
Chunk Activation

activation = base activation
           + Σ (source activation × associative strength)
           + Σ (mismatch penalty × similarity value)
           + noise
Base-Level Activation

A_i = B_i
B_i = ln( P(C_i) / P(¬C_i) )     (log prior odds that chunk i is needed)
Source Activation

+ Σ_j W_j · S_ji     (source activation × associative strength)

S_ji = ln( P(N_i | C_j) / P(N_i) ) = S − ln(fan_j)
Application: Fan Effect
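The fan effect follows from the S_ji term: when a concept appears in fan_j facts, its spreading activation is divided among them (P(N_i | C_j) ≈ 1/fan_j), so S_ji = S − ln(fan_j) and retrieval slows with fan. A hypothetical Python sketch (S, F, and W are assumed values; latency = F·e^(−A) is the standard ACT-R latency form):

```python
import math

S = 2.0   # assumed maximum associative strength
F = 1.0   # assumed latency scale factor
W = 0.5   # source activation per goal slot

def s_ji(fan):
    """Associative strength falls with fan: S_ji = S - ln(fan_j)."""
    return S - math.log(fan)

def retrieval_time(fans, B=0.0):
    """ACT-R latency form: Time = F * exp(-A_i), A_i = B + sum_j W * S_ji."""
    A = B + sum(W * s_ji(f) for f in fans)
    return F * math.exp(-A)

# A person-location fact where both concepts appear in 1 fact vs. 3 facts each:
print(retrieval_time([1, 1]) < retrieval_time([3, 3]))  # True: higher fan is slower
```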
Partial Matching
Partial Matching

+ Σ_k MP_k · Sim_kl     (mismatch penalty × similarity value)
Data
                         Answer                                    Other (incl.
Problem    0     1     2     3     4     5     6     7     8      retrieval failure)
1+1        -    .05   .86    -    .02    -    .02    -     -      .06
1+2        -    .04   .07   .75   .04    -    .02    -     -      .09
1+3        -    .02    -    .10   .75   .05   .01   .03    -      .06
2+2       .02    -    .04   .05   .80   .04    -    .05    -       -
2+3        -     -    .07   .09   .25   .45   .08   .01   .01     .06
3+3       .04    -     -    .05   .21   .09   .48    -    .02     .11
Predictions
                         Answer                                    Other (incl.
Problem    0     1     2     3     4     5     6     7     8      retrieval failure)
1+1        -    .10   .75   .10   .01    -     -     -     -      .04
1+2        -    .01   .10   .75   .10    -     -     -     -      .04
1+3        -     -    .01   .10   .78   .06    -     -     -      .04
2+2        -     -    .0    .1    .82   .02    -     -     -      .04
2+3        -     -     -    .03   .32   .45   .06   .01    -      .13
3+3        -     -     -    .04   .04   .08   .61   .08   .01     .18
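The spread of answers around the correct sum in these tables (e.g. 2+3 sometimes yielding 4 or 6) is what partial matching predicts: a near-miss fact such as 2+2=4 pays a mismatch penalty proportional to how dissimilar its addends are to the probe. A toy Python sketch, using a Boltzmann distribution over activations to approximate the effect of noise (the penalty, temperature, and similarity function are all invented):

```python
import math

MP = 1.5   # assumed mismatch penalty
t = 0.5    # assumed noise temperature

def sim(a, b):
    """Assumed similarity: 0 for identical values, more negative when farther apart."""
    return -abs(a - b) / 2.0

def retrieval_probs(a1, a2, facts):
    """Boltzmann approximation: P(i) = exp(A_i/t) / sum_j exp(A_j/t),
    where A_i is the summed mismatch term MP * Sim over both addend slots."""
    acts = [MP * (sim(a1, f1) + sim(a2, f2)) for f1, f2, _ in facts]
    z = sum(math.exp(a / t) for a in acts)
    return [math.exp(a / t) / z for a in acts]

facts = [(2, 2, 4), (2, 3, 5), (3, 3, 6)]
probs = retrieval_probs(2, 3, facts)
print([round(p, 2) for p in probs])  # [0.15, 0.69, 0.15]: the matching fact
                                     # dominates, but neighbors keep some mass
```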
Noise

+ N(0, s)

[Figure: two panels showing the effect of activation noise (Noise = 0, 0.1, 0.25; with and without Lag2) on retrieval; x-axis 0.0 to 1.0]
Base-Level Learning

B_i = ln( Σ_{j=1}^{n} t_j^(−d) )

Optimized approximation for k retained presentation times over a life L:

B_i ≈ ln( Σ_{j=1}^{k} t_j^(−d) + (n − k)(L^(1−d) − t_k^(1−d)) / ((1 − d)(L − t_k)) )

[Figure: activation level (−1.0 to 0.0) decaying over seconds]
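A small Python sketch of the exact base-level equation (decay d = 0.5 is the conventional ACT-R value; the presentation ages are invented):

```python
import math

def base_level(ages, d=0.5):
    """B_i = ln( sum_j t_j^(-d) ): activation grows with frequency of use
    and decays with recency.

    ages: seconds since each presentation of the chunk."""
    return math.log(sum(t ** -d for t in ages))

# A second, older use still raises activation ...
print(base_level([1.0, 10.0]) > base_level([1.0]))           # True
# ... and the whole history decays as it recedes into the past
print(base_level([1.0, 10.0]) > base_level([100.0, 110.0]))  # True
```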
(p associate
   =goal>
      isa goal
      arg1 =stimulus              ; attending word during study
      step attending
      state study
   =visual>
      isa text
      value =response             ; visual buffer holds response
      status nil
==>
   =goal>
      isa goal
      arg2 =response              ; store response in goal with stimulus
      step done
   +goal>
      isa goal
      state test                  ; prepare for next trial
      step waiting)
Paired Associate: Successful Recall
(p read-stimulus
   =goal>
      isa goal
      step attending
      state test
   =visual>
      isa text
      value =val
==>
   +retrieval>
      isa goal
      relation associate
      arg1 =val
   =goal>
      isa goal
      relation associate
      arg1 =val
      step testing)

(p recall
   =goal>
      isa goal
      relation associate
      arg1 =val
      step testing
   =retrieval>
      isa goal
      relation associate
      arg1 =val
      arg2 =ans
==>
   +manual>
      isa press-key
      key =ans
   =goal>
      step waiting)
Paired Associate Example
Data Predictions
Trial Accuracy Latency Trial Accuracy Latency
1 .000 0.000 1 .000 0.000
2 .526 2.156 2 .515 2.102
3 .667 1.967 3 .570 1.730
4 .798 1.762 4 .740 1.623
5 .887 1.680 5 .850 1.584
6 .924 1.552 6 .865 1.508
7 .958 1.467 7 .895 1.552
8 .954 1.402 8 .930 1.462
ACCURACY
(0.0 0.515 0.570 0.740 0.850 0.865 0.895 0.930)
CORRELATION: 0.996
MEAN DEVIATION: 0.053
LATENCY
(0 2.102 1.730 1.623 1.589 1.508 1.552 1.462)
CORRELATION: 0.988
MEAN DEVIATION: 0.112
Production Utility

Probability of choosing i = e^(E_i/t) / Σ_j e^(E_j/t)

(t reflects noise in evaluation and is like temperature in the Boltzmann equation)

P = Successes / (Successes + Failures)

Successes = α + m       α is prior successes, m is experienced successes
Failures = β + n        β is prior failures, n is experienced failures
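An illustrative Python sketch of conflict resolution: expected values feed the Boltzmann choice rule, and each production's probability of success comes from prior plus experienced counts (all numbers invented):

```python
import math

def choice_probs(values, t=0.5):
    """P(i) = exp(E_i/t) / sum_j exp(E_j/t); t plays the role of temperature."""
    exps = [math.exp(v / t) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

def p_success(alpha, m, beta, n):
    """P = Successes/(Successes + Failures),
    with Successes = alpha + m and Failures = beta + n."""
    return (alpha + m) / (alpha + m + beta + n)

# Two competing productions, equal priors (alpha = beta = 1):
# one succeeded 8 of 10 times, the other 3 of 10.
p1 = p_success(1, 8, 1, 2)   # (1+8)/(1+8+1+2) = 0.75
p2 = p_success(1, 3, 1, 7)   # (1+3)/(1+3+1+7) = 1/3
probs = choice_probs([p1, p2], t=0.1)
print(probs[0] > probs[1])   # True: the more successful production usually wins
```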
Building Sticks Task (Lovett)
[Figure: INITIAL STATE of the Building Sticks Task, showing the desired stick, the current stick, and building sticks a, b, c, with the three possible first moves]
                    Undershoot              Overshoot
                    More Successful         More Successful
Looks Undershoot    10 Undershoot           10 (5) Undershoot
                    0 Overshoot             10 (15) Overshoot
Looks Overshoot     10 (15) Undershoot      0 Undershoot
                    10 (5) Overshoot        10 Overshoot
Lovett & Anderson, 1996
Observed Data (5/6)
[Figure: two pairs of panels plotting Proportion Choice More Successful Operator (0 to 1) against bias condition (High, Low, Neutral, Low, High), with separate curves for tests 0, 1, and 3]
Decide-Over
If the goal is to solve the BST task
and the overshoot difference is less
than the undershoot difference
Then choose overshoot.
Force-Under
If the goal is to solve the BST task
Then choose undershoot.
Force-Over
If the goal is to solve the BST task
Then choose overshoot.

Web Address:
ACT-R Home Page → Published ACT-R Models → Atomic Components of Thought, Chapter 4 → Building Sticks Model
ACT-R model probabilities before and after
problem-solving experience in Experiment 3
(Lovett & Anderson, 1996)
Production                             Prior Probability   Final Value
                                       of Success          67% Condition   83% Condition
Force-Under
(More Successful, Context Free)        .50                 .60             .71
Force-Over
(Less Successful, Context Free)        .50                 .38             .27
Decide-Under
(More Successful, Context Sensitive)   .96                 .98             .98
Decide-Over
(Less Successful, Context Sensitive)   .96                 .63             .54
Decay of Experience
Success Discounting:
Successes(t) = Σ_{j=1}^{m} t_j^(−d)

Failure Discounting:
Failures(t) = Σ_{j=1}^{n} t_j^(−d)
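The same power-law discounting used for base-level learning applies here; a one-function Python sketch (the outcome ages are invented):

```python
def discounted(ages, d=0.5):
    """Successes(t) (or Failures(t)) = sum_j t_j^(-d),
    where t_j is the age of outcome j in seconds."""
    return sum(t ** -d for t in ages)

# Recency matters: one success 1 s ago outweighs two successes 100 s ago
print(discounted([1.0]) > discounted([100.0, 100.0]))  # True (1.0 vs. 0.2)
```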
[Figure: proportion choice (0 to 1) for conditions U,U; O,U; U,O; O,O, comparing Data, the Decay Model, and the No Decay Model]
4. Safe Productions: Production will not produce any result that the
original productions did not produce.
5. Parameter Setting:
Successes = P * *initial-experience*
Failures = (1 − P) * *initial-experience*
Efforts = (Successes + Failures) * (C + *cost-penalty*)
Production Compilation: The Successes
5. Anderson: Learning in the fan effect that produces the interaction between fan
and practice. Justifies a major simplification in the parameterization of
productions – no strength separate from utility.
Note that all of these examples involve all forms of learning in ACT-R occurring
simultaneously: acquiring new chunks, acquiring new productions, activation
learning, and utility learning.
Predicting fMRI BOLD Response from Buffer Activity

[Figure: Percent Activation Change (−0.1 to 0.5) over scans 1 to 14 for two equation types: 5x + 3 = 18 and cx + 3 = a]