
Delta Learning Rule

 The delta learning rule is valid only for continuous activation
functions and only in the supervised training mode.
 The learning signal for this rule is called delta and is defined as
r = [d_i - f(w_i^T x)] f'(w_i^T x)
 The term f'(w_i^T x) is the derivative of the activation function
f(net) computed for net = w_i^T x.
[Figure: Continuous perceptron trained with the delta learning rule — inputs x_1, ..., x_n, weight vector w_i, activation f(net), output o_i, desired response d_i, and learning constant c.]


Delta Learning Rule

 The learning rule can be derived from the condition of least
squared error between o_i and d_i.
 Calculating the gradient vector with respect to w_i of the squared
error defined as
E = 1/2 (d_i - o_i)^2
 which is equivalent to
E = 1/2 [d_i - f(w_i^T x)]^2
Delta Learning Rule

 We obtain the error gradient vector value
∇E = -(d_i - o_i) f'(w_i^T x) x
 The components of the gradient vector are
∂E/∂w_ij = -(d_i - o_i) f'(w_i^T x) x_j, for j = 1, 2, ..., n
 Since the minimization of the error requires the weight changes to
be in the negative gradient direction, we take
Δw_i = -η ∇E, where η is a positive constant.
Delta Learning Rule

 We then obtain
Δw_i = η (d_i - o_i) f'(net_i) x
 or, for the single weight, the adjustment becomes
Δw_ij = η (d_i - o_i) f'(net_i) x_j, for j = 1, 2, ..., n   (2.40b)
 Note that the weight adjustments are computed based on
minimization of the squared error.
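
As a check on this derivation, the analytical gradient -(d_i - o_i) f'(net_i) x can be compared with a numerically estimated gradient of E = 1/2 [d_i - f(w_i^T x)]^2. A minimal sketch in Python, assuming a tanh activation and made-up values for x, w_i, and d_i:

import numpy as np

f = np.tanh                                     # continuous activation (assumed for illustration)
f_prime = lambda net: 1.0 - np.tanh(net) ** 2   # its derivative

x = np.array([1.0, -0.5, 2.0])                  # hypothetical input
w = np.array([0.2, 0.4, -0.1])                  # hypothetical weights
d = 0.7                                         # hypothetical desired output

def error(w):
    return 0.5 * (d - f(w @ x)) ** 2

net = w @ x
grad_analytic = -(d - f(net)) * f_prime(net) * x

eps = 1e-6
grad_numeric = np.array([(error(w + eps * e) - error(w - eps * e)) / (2 * eps)
                         for e in np.eye(len(w))])

print(grad_analytic, grad_numeric)              # the two vectors should agree closely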
Delta Learning Rule

 Considering the use of the general learning rule and plugging in
the learning signal, the weight adjustment becomes
Δw_i = c (d_i - o_i) f'(net_i) x
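
A minimal sketch of a single delta-rule training step in Python, again assuming a tanh activation and arbitrary values for the learning constant c and the training pair (x, d_i):

import numpy as np

f = np.tanh
f_prime = lambda net: 1.0 - np.tanh(net) ** 2

def delta_rule_step(w, x, d, c=0.1):
    """One delta-rule update: w <- w + c (d - o) f'(net) x."""
    net = w @ x
    o = f(net)
    return w + c * (d - o) * f_prime(net) * x

w = np.array([0.2, 0.4, -0.1])       # hypothetical initial weights
x = np.array([1.0, -0.5, 2.0])       # hypothetical input
for _ in range(20):                  # repeated presentations of the same pair
    w = delta_rule_step(w, x, d=0.7)
print(np.tanh(w @ x))                # output moves toward the desired value 0.7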
Widrow-Hoff Learning Rule
 The Widrow-Hoff learning rule is applicable to the supervised
training of neural networks.
 It is independent of the activation function of the neurons used,
since it minimizes the squared error between the desired output
value d_i and the neuron's activation value
net_i = w_i^T x
Widrow-Hoff Learning Rule

 The learning signal for this rule is defined as
r = d_i - w_i^T x
 The weight vector increment under this learning rule is
Δw_i = c (d_i - w_i^T x) x
or, for the single weight, the adjustment is
Δw_ij = c (d_i - w_i^T x) x_j, for j = 1, 2, ..., n
 This rule can be considered a special case of the delta learning
rule,
Widrow-Hoff Learning Rule
 assuming that f(w_i^T x) = w_i^T x, i.e., the activation function is
simply the identity function: f(net) = net, so f'(net) = 1.

 This rule is sometimes called the LMS (least mean square)
learning rule.
 Weights may be initialized at arbitrary values in this method.
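
A minimal sketch of one Widrow-Hoff (LMS) update in Python, with arbitrary example values for c, w, x, and d_i:

import numpy as np

def lms_step(w, x, d, c=0.05):
    """One Widrow-Hoff (LMS) update: w <- w + c (d - w.x) x."""
    return w + c * (d - w @ x) * x

w = np.zeros(3)                      # weights may start at any values
x = np.array([1.0, -0.5, 2.0])       # hypothetical input
w = lms_step(w, x, d=0.7)            # hypothetical desired value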
Correlation Learning Rule
 By substituting r = d_i into the general learning rule, we obtain
the correlation learning rule.
 The adjustments for the weight vector and the single weights,
respectively, are
Δw_i = c d_i x
Δw_ij = c d_i x_j, for j = 1, 2, ..., n
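
A minimal sketch of the correlation rule in Python, with hypothetical values for c, d_i, and x:

import numpy as np

def correlation_step(w, x, d, c=0.1):
    """Correlation learning rule: w <- w + c d x."""
    return w + c * d * x

w = np.zeros(3)
w = correlation_step(w, np.array([1.0, -0.5, 2.0]), d=1.0)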
Winner-Take-All Learning Rule
 The winner-take-all learning rule is used for learning the
statistical properties of the input.
 The learning is based on the premise that one of the neurons in
the layer, say the m'th, has the maximum response to the input x,
as shown in Figure 2.25.
 This neuron is declared the winner. As a result of this winning
event, the weight vector w_m
[Figure 2.25: Competitive layer of p neurons with inputs x_1, ..., x_n, outputs o_1, ..., o_p, and weights w_ij; the winning neuron (the m'th) and its weights are highlighted.]
Winner-Take-All Learning Rule
 w_m = [w_m1 w_m2 ... w_mn]^T,
containing the weights highlighted in Figure 2.25, is the only one
adjusted in the given unsupervised learning step.
 Its increment is computed as follows:
Δw_m = α (x - w_m)
 or, the individual weight adjustment becomes
Δw_mj = α (x_j - w_mj), for j = 1, 2, ..., n
Winner_take_All Learning Rule
 Where >0 is a small learning
constant,typically decreasing as learning
progresses
 the winner selection is based on the following
criterion of max activation among all p
neurons participating in a competition:
wmt x = max(witx) i=1,2, … n
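
A minimal sketch of one winner-take-all learning step in Python, assuming a weight matrix W with one row of weights per neuron and an arbitrary learning constant α (alpha):

import numpy as np

def winner_take_all_step(W, x, alpha=0.1):
    """Only the winning neuron's weight row is adjusted: Δw_m = alpha (x - w_m)."""
    m = np.argmax(W @ x)             # winner: maximum activation w_i . x
    W[m] += alpha * (x - W[m])
    return W

W = np.random.rand(4, 3)             # p = 4 neurons, n = 3 inputs (hypothetical sizes)
x = np.array([1.0, -0.5, 2.0])
W = winner_take_all_step(W, x)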
Outstar Learning Rule
 The weight adjustments in this rule are computed as follows:
Δw_j = β (d - w_j)
 or, the individual adjustments are
Δw_mj = β (d_m - w_mj), for m = 1, 2, ..., p
 Note that, in contrast to any learning rule discussed so far, the
adjusted weights are fanning out of the j'th node in this learning
Outstar Learning Rule
method, and the weight vector is defined accordingly as
w_j = [w_1j w_2j ... w_pj]^T
[Figure 2.26: Layer of p neurons with inputs x_1, ..., x_n and outputs o_1, ..., o_p; the weights w_1j, ..., w_pj fanning out of node j are adjusted toward the desired values d_1, ..., d_p.]
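
A minimal sketch of one outstar update in Python, adjusting the column of weights fanning out of node j toward the desired output vector d; the constant β (beta), the sizes, and the values are chosen arbitrarily:

import numpy as np

def outstar_step(W, d, j, beta=0.1):
    """Outstar rule: adjust the weights fanning out of node j, Δw_mj = beta (d_m - w_mj)."""
    W[:, j] += beta * (d - W[:, j])   # column j holds w_1j, ..., w_pj
    return W

W = np.random.rand(4, 3)              # p = 4 output neurons, n = 3 input nodes (hypothetical)
d = np.array([0.2, 0.9, 0.4, 0.6])    # desired outputs d_1, ..., d_p
W = outstar_step(W, d, j=1)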