[Figure 2.25: a single neuron with inputs x_1, …, x_j, …, x_n, weights w_i, activation f(net_i), output o_i, and error signal (d_i − o_i) scaled by the learning constant c]
We then obtain

Δw_i = c (d_i − o_i) f′(net_i) x

or, for the single weight, the adjustment becomes

Δw_ij = c (d_i − o_i) f′(net_i) x_j,  for j = 1, 2, …, n          (2.40b)
Note that the weight adjustments are computed based on minimization of the squared error.
Delta Learning Rule
Figure 2.25
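As a concrete illustration, the single-weight update (2.40b) can be sketched in Python for one neuron (the function name, the choice of a sigmoid for f, and the value of c are illustrative assumptions, not taken from the text):

```python
import numpy as np

def delta_rule_step(w, x, d, c=0.1):
    """One delta-rule update for a single neuron, assuming a sigmoid
    activation f(net) = 1 / (1 + exp(-net)).

    w : weight vector, x : input vector, d : desired output,
    c : learning constant (all names are illustrative).
    """
    net = w @ x
    o = 1.0 / (1.0 + np.exp(-net))   # o = f(net)
    f_prime = o * (1.0 - o)          # f'(net) for the sigmoid
    # Eq. (2.40b): delta_w_j = c (d - o) f'(net) x_j for every j
    return w + c * (d - o) * f_prime * x
```

Repeating this step over a training set drives o_i toward d_i, which is what the squared-error minimization above implies.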
Winner-Take-All Learning Rule
The weight vector

w_m = [w_m1 w_m2 … w_mn]^t

containing the weights highlighted in Fig. 2.25 is the only one adjusted in the given unsupervised learning step. Its increment is computed as follows:
Δw_m = α(x − w_m)

or, the individual weight adjustment becomes

Δw_mj = α(x_j − w_mj),  for j = 1, 2, …, n

where α > 0 is a small learning constant, typically decreasing as learning progresses.
The winner selection is based on the following criterion of maximum activation among all p neurons participating in the competition:

w_m^t x = max(w_i^t x),  i = 1, 2, …, p
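The winner selection and the winner-only update above can be sketched as follows (the (p, n) matrix layout, the function name, and the α value are assumptions for illustration):

```python
import numpy as np

def wta_step(W, x, alpha=0.1):
    """One winner-take-all update.

    W : (p, n) matrix whose rows are the p competing neurons'
        weight vectors; x : input vector; alpha : learning constant.
    Only the winning row m, with maximal activation w_m^t x, is
    adjusted: delta_w_m = alpha * (x - w_m).
    """
    m = int(np.argmax(W @ x))        # winner: max activation w_i^t x
    W = W.copy()
    W[m] += alpha * (x - W[m])       # move winner's weights toward x
    return W, m
```

Because only the winner moves, and it moves toward the current input, repeated presentations cluster the weight vectors around the input patterns.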
Outstar Learning Rule
The weight adjustments in this rule are computed as follows:

Δw_j = β(d − w_j)

or, the individual adjustments are

Δw_mj = β(d_m − w_mj),  for m = 1, 2, …, p
Note that, in contrast to any learning rule discussed so far, the adjusted weights are fanning out of the j-th node in this learning method, and the weight vector is defined accordingly as

w_j = [w_1j w_2j … w_pj]^t
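The fan-out update above can be sketched in a few lines (the function name and the symbol β for the learning constant are assumptions; the text's constant was lost in extraction):

```python
import numpy as np

def outstar_step(w_j, d, beta=0.1):
    """One outstar update of the fan-out weight vector of node j,
    w_j = [w_1j ... w_pj]^t, toward the desired response vector d:

        delta_w_mj = beta * (d_m - w_mj),  m = 1, ..., p

    beta : small positive learning constant (symbol assumed).
    """
    return w_j + beta * (d - w_j)
```

Each step moves w_j a fraction β of the way toward d, so the fan-out weights converge to the desired responses of the p output nodes.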
[Figure 2.26: outstar learning — inputs x_1, …, x_j, …, x_n fan out through weights w_mj to p output neurons with outputs o_1, …, o_p and desired responses d_1, …, d_p]
Figure 2.26