Supervised learning
Introduction, or how the brain works
The neuron as a simple computing element
The perceptron
Multilayer neural networks
Accelerated learning in multilayer neural networks
The Hopfield network
Bidirectional associative memories (BAM)
Summary
Negnevitsky, Pearson Education, 2005
Figure: a biological neural network — soma, dendrites, axon and synapses — and the analogous architecture of an artificial neural network, with input signals entering the input layer, passing through a middle layer, and output signals leaving the output layer.
Figure: diagram of a neuron — input signals x1, x2, …, xn arrive through weighted links w1, w2, …, wn, and the neuron produces the output signal Y.

The neuron computes the weighted sum of the input signals and compares the result with a threshold value θ:

X = Σ (i = 1..n) xi wi

Y = +1, if X ≥ θ
Y = −1, if X < θ
Figure: activation functions of a neuron.

Step function:     Y = 1, if X ≥ 0;  Y = 0, if X < 0
Sign function:     Y = +1, if X ≥ 0;  Y = −1, if X < 0
Sigmoid function:  Y = 1 / (1 + e^(−X))
Linear function:   Y = X
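These four activation functions can be sketched in Python as follows (a minimal illustration; the function names are our own):

```python
import math

def step(x):
    """Step function: 1 if X >= 0, else 0."""
    return 1 if x >= 0 else 0

def sign(x):
    """Sign function: +1 if X >= 0, else -1."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Sigmoid function: squashes X into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def linear(x):
    """Linear function: output equals the neuron input."""
    return x
```

For example, sign(0.3) gives +1, while sigmoid(0) gives exactly 0.5.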
Figure: single-layer two-input perceptron — inputs x1 and x2, with weights w1 and w2, feed a linear combiner followed by a hard limiter with threshold θ, producing output Y.
The Perceptron

The operation of Rosenblatt's perceptron is based on the McCulloch and Pitts neuron model. The model consists of a linear combiner followed by a hard limiter.

The weighted sum of the inputs is applied to the hard limiter, which produces an output equal to +1 if its input is positive and −1 if it is negative.
The perceptron divides the n-dimensional input space into two decision regions separated by the hyperplane defined by:

Σ (i = 1..n) xi wi − θ = 0
If at iteration p the actual output is Y(p) and the desired output is Yd(p), then the error is:

e(p) = Yd(p) − Y(p),   where p = 1, 2, 3, …
The actual output at iteration p is calculated as:

Y(p) = step [ Σ (i = 1..n) xi(p) wi(p) − θ ]
Step 4: Iteration
Increase iteration p by one, go back to Step 2 and
repeat the process until convergence.
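The steps above can be sketched in Python (a minimal illustration using the AND training task from the example that follows, with threshold θ = 0.2, learning rate α = 0.1 and initial weights 0.3 and −0.1; note that floating-point rounding right at the threshold boundary can make the epoch-by-epoch trajectory differ slightly from hand arithmetic):

```python
# Sketch of the perceptron training algorithm (Steps 1-4) for the logical
# operation AND. Threshold and learning rate follow the worked example:
# theta = 0.2, alpha = 0.1, initial weights w1 = 0.3, w2 = -0.1.

def train_perceptron(samples, w, theta=0.2, alpha=0.1, max_epochs=100):
    for epoch in range(max_epochs):
        converged = True
        for (x1, x2), yd in samples:
            # Step 2: activation -- compute the actual output with step()
            y = 1 if x1 * w[0] + x2 * w[1] - theta >= 0 else 0
            e = yd - y  # error at this iteration
            if e != 0:
                # Step 3: weight training with the delta rule
                w[0] += alpha * x1 * e
                w[1] += alpha * x2 * e
                converged = False
        # Step 4: iterate until an epoch passes with no weight changes
        if converged:
            return w, epoch + 1
    return w, max_epochs

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, epochs = train_perceptron(and_samples, [0.3, -0.1])
```

After convergence, the trained weights classify all four AND patterns correctly.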
Example of perceptron learning: the logical operation AND.

Epoch | Inputs x1 x2 | Desired output Yd | Initial weights w1, w2 | Actual output Y | Error e | Final weights w1, w2
  1   |     0  0     |        0          |      0.3  −0.1         |       0         |    0    |     0.3  −0.1
      |     0  1     |        0          |      0.3  −0.1         |       0         |    0    |     0.3  −0.1
      |     1  0     |        0          |      0.3  −0.1         |       1         |   −1    |     0.2  −0.1
      |     1  1     |        1          |      0.2  −0.1         |       0         |    1    |     0.3   0.0
  2   |     0  0     |        0          |      0.3   0.0         |       0         |    0    |     0.3   0.0
      |     0  1     |        0          |      0.3   0.0         |       0         |    0    |     0.3   0.0
      |     1  0     |        0          |      0.3   0.0         |       1         |   −1    |     0.2   0.0
      |     1  1     |        1          |      0.2   0.0         |       1         |    0    |     0.2   0.0
  3   |     0  0     |        0          |      0.2   0.0         |       0         |    0    |     0.2   0.0
      |     0  1     |        0          |      0.2   0.0         |       0         |    0    |     0.2   0.0
      |     1  0     |        0          |      0.2   0.0         |       1         |   −1    |     0.1   0.0
      |     1  1     |        1          |      0.1   0.0         |       0         |    1    |     0.2   0.1
  4   |     0  0     |        0          |      0.2   0.1         |       0         |    0    |     0.2   0.1
      |     0  1     |        0          |      0.2   0.1         |       0         |    0    |     0.2   0.1
      |     1  0     |        0          |      0.2   0.1         |       1         |   −1    |     0.1   0.1
      |     1  1     |        1          |      0.1   0.1         |       1         |    0    |     0.1   0.1
  5   |     0  0     |        0          |      0.1   0.1         |       0         |    0    |     0.1   0.1
      |     0  1     |        0          |      0.1   0.1         |       0         |    0    |     0.1   0.1
      |     1  0     |        0          |      0.1   0.1         |       0         |    0    |     0.1   0.1
      |     1  1     |        1          |      0.1   0.1         |       1         |    0    |     0.1   0.1

Threshold: θ = 0.2; learning rate: α = 0.1.
Figure: two-dimensional plots of the basic logical operations — (a) AND (x1 ∧ x2); (b) OR (x1 ∨ x2); (c) Exclusive-OR (x1 ⊕ x2). A single straight line separates the two classes for AND and OR, but no single line can separate the classes for Exclusive-OR.
Figure: multilayer perceptron with two hidden layers — input signals propagate from the input layer through the first and second hidden layers to the output layer, where the output signals emerge.
Figure: three-layer back-propagation neural network — inputs x1, …, xi, …, xn feed the hidden layer through weights wij; hidden neurons feed the output neurons y1, y2, …, yk, …, yl through weights wjk; error signals propagate backwards from the output layer.
Step 1: Initialisation
Set all the weights and threshold levels of the network to random numbers uniformly distributed inside a small range:

( −2.4 / Fi , +2.4 / Fi )

where Fi is the total number of inputs of neuron i in the network.
Step 2: Activation
Activate the back-propagation neural network by applying inputs x1(p), x2(p), …, xn(p) and desired outputs yd,1(p), yd,2(p), …, yd,n(p).

(a) Calculate the actual outputs of the neurons in the hidden layer:

yj(p) = sigmoid [ Σ (i = 1..n) xi(p) wij(p) − θj ]
(b) Calculate the actual outputs of the neurons in the output layer:

yk(p) = sigmoid [ Σ (j = 1..m) xjk(p) wjk(p) − θk ]
Calculate the error gradient for the neurons in the output layer:

δk(p) = yk(p) · [1 − yk(p)] · ek(p),   where ek(p) = yd,k(p) − yk(p)

Calculate the weight corrections:

Δwjk(p) = α · yj(p) · δk(p)
Calculate the error gradient for the neurons in the hidden layer:

δj(p) = yj(p) · [1 − yj(p)] · Σ (k = 1..l) δk(p) wjk(p)

Calculate the weight corrections:

Δwij(p) = α · xi(p) · δj(p)
Step 4: Iteration
Increase iteration p by one, go back to Step 2 and
repeat the process until the selected error criterion
is satisfied.
As an example, we may consider the three-layer back-propagation network. Suppose that the network is required to perform the logical operation Exclusive-OR. Recall that a single-layer perceptron could not do this operation. Now we will apply the three-layer net.
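Steps 1-4 above can be sketched in Python for this problem (a minimal illustration assuming a 2-2-1 architecture with sigmoid activations; the random seed, learning rate and stopping criterion are our own illustrative choices, and training by plain gradient descent is not guaranteed to reach the error target from every initialisation):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(alpha=0.1, max_epochs=20000, target_sse=0.001, seed=1):
    rng = random.Random(seed)
    # Step 1: initialise weights and thresholds to small random values
    w_ih = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # w[i][j]
    w_ho = [rng.uniform(-1, 1) for _ in range(2)]                      # w[j]
    th_h = [rng.uniform(-1, 1) for _ in range(2)]
    th_o = rng.uniform(-1, 1)
    data = [((1, 1), 0), ((0, 1), 1), ((1, 0), 1), ((0, 0), 0)]
    sse_first = sse = None
    for epoch in range(max_epochs):
        sse = 0.0
        for x, yd in data:
            # Step 2: forward pass through hidden and output layers
            yh = [sigmoid(x[0] * w_ih[0][j] + x[1] * w_ih[1][j] - th_h[j])
                  for j in range(2)]
            y = sigmoid(yh[0] * w_ho[0] + yh[1] * w_ho[1] - th_o)
            e = yd - y
            sse += e * e
            # Step 3: error gradients first, then weight corrections
            dk = y * (1.0 - y) * e
            dj = [yh[j] * (1.0 - yh[j]) * dk * w_ho[j] for j in range(2)]
            for j in range(2):
                w_ho[j] += alpha * yh[j] * dk
                th_h[j] += alpha * (-1.0) * dj[j]
                for i in range(2):
                    w_ih[i][j] += alpha * x[i] * dj[j]
            th_o += alpha * (-1.0) * dk
        if sse_first is None:
            sse_first = sse
        # Step 4: repeat until the sum-squared error criterion is satisfied
        if sse < target_sse:
            break
    return epoch + 1, sse_first, sse

epochs_run, sse_first, sse_last = train_xor()
```

The sum-squared error over an epoch should fall as training proceeds, mirroring the learning curve shown later.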
Figure: three-layer network for solving the Exclusive-OR operation — inputs x1 and x2 feed the hidden neurons through weights w13, w23, w14 and w24; the hidden neurons feed output neuron 5 (output y5) through weights w35 and w45; each neuron also has a threshold input fixed at −1.
The weight and threshold corrections are then calculated:

Δw13 = α · x1 · δ3 = 0.1 · 1 · 0.0381 = 0.0038
Δw23 = α · x2 · δ3 = 0.1 · 1 · 0.0381 = 0.0038
Δθ3 = α · (−1) · δ3 = 0.1 · (−1) · 0.0381 = −0.0038
Δw14 = α · x1 · δ4 = 0.1 · 1 · (−0.0147) = −0.0015
Δw24 = α · x2 · δ4 = 0.1 · 1 · (−0.0147) = −0.0015
Δθ4 = α · (−1) · δ4 = 0.1 · (−1) · (−0.0147) = 0.0015
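The arithmetic in these corrections can be checked directly (δ3 = 0.0381 and δ4 = −0.0147 are the error gradients from the worked example, with α = 0.1):

```python
# Checking the weight-correction arithmetic: delta-w = alpha * input * gradient.
alpha = 0.1
delta3, delta4 = 0.0381, -0.0147   # error gradients from the worked example

dw13 = alpha * 1 * delta3          # correction for weight w13
dtheta3 = alpha * (-1) * delta3    # correction for threshold theta3
dw14 = alpha * 1 * delta4          # correction for weight w14
dtheta4 = alpha * (-1) * delta4    # correction for threshold theta4

print(round(dw13, 4), round(dtheta3, 4), round(dw14, 4), round(dtheta4, 4))
# prints: 0.0038 -0.0038 -0.0015 0.0015
```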
Figure: learning curve for operation Exclusive-OR — sum-squared error (log scale, 10^0 down to 10^-4) versus epoch (0 to 200).
Final results of three-layer network learning: the logical operation Exclusive-OR.

Inputs x1 x2 | Desired output yd | Actual output y5
   1   1     |        0          |     0.0155
   0   1     |        1          |     0.9849
   1   0     |        1          |     0.9849
   0   0     |        0          |     0.0175

Sum of squared errors: 0.0010
Figure: network for solving the Exclusive-OR operation — inputs x1 and x2 connect to the hidden neurons with weights of +1.0; output neuron 5 (output y5) receives weights of −2.0 and +1.0; each neuron has a threshold input fixed at −1 with weights such as +0.5.
Figure: decision boundaries — (a) the boundary x1 + x2 − 1.5 = 0 constructed by one hidden neuron; (b) the boundary x1 + x2 − 0.5 = 0 constructed by the other hidden neuron; (c) the complete decision boundaries formed by the whole network.
A multilayer network learns much faster when the sigmoidal activation function is represented by a hyperbolic tangent:

Y = 2a / (1 + e^(−bX)) − a

where a and b are constants.
Suitable values for a and b are:
a = 1.716 and b = 0.667
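A quick sketch of this activation function with the suggested constants, confirming that the output passes through zero and stays inside (−a, +a):

```python
import math

a, b = 1.716, 0.667  # suggested constants

def tanh_activation(x):
    # Y = 2a / (1 + e^(-bX)) - a : output is bounded in (-a, +a)
    return (2.0 * a) / (1.0 + math.exp(-b * x)) - a
```

At X = 0 the output is exactly 0, and for large |X| it saturates towards ±a.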
Learning can also be accelerated by including a momentum term in the delta rule:

Δwjk(p) = β · Δwjk(p − 1) + α · yj(p) · δk(p)

where β is a positive number (0 ≤ β < 1) called the momentum constant.
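A sketch of this correction rule in Python (the numeric inputs below are illustrative, with β = 0.95 as a typical momentum constant):

```python
# Generalised delta rule with momentum: the current correction blends the
# previous correction (scaled by beta) with the fresh gradient term.
beta, alpha = 0.95, 0.1  # momentum constant and learning rate (illustrative)

def weight_correction(prev_dw, yj, delta_k):
    return beta * prev_dw + alpha * yj * delta_k

# Illustrative values: previous correction 0.002, hidden output 0.525,
# output-layer error gradient 0.038.
dw = weight_correction(prev_dw=0.002, yj=0.525, delta_k=0.038)
```

The momentum term keeps weight updates moving in a consistent direction, which tends to smooth and speed up convergence.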
Figure: learning with momentum for operation Exclusive-OR — sum-squared error (log scale, 10^1 down to 10^-4) versus epoch (0 to about 120), together with the corresponding learning-rate trace (roughly −1 to 1.5) over about 140 epochs.