Introduction
Hebbian learning
Generalised Hebbian learning algorithm
Competitive learning
Self-organising computational map:
Kohonen network
Summary
Introduction
The main property of a neural network is its
ability to learn from its environment and to
improve its performance through learning. So
far we have considered supervised, or active,
learning: learning with an external teacher,
or supervisor, who presents a training set to the
network. But another type of learning also
exists: unsupervised learning.
Hebbian learning
In 1949, Donald Hebb proposed one of the key
ideas in biological learning, commonly known as
Hebb's Law.
Hebb's Law states that if neuron i is near enough
to excite neuron j and repeatedly participates in its
activation, the synaptic connection between these
two neurons is strengthened and neuron j becomes
more sensitive to stimuli from neuron i.
[Figure: Hebbian learning in a neural network — input signals x_i are passed forward to output signals y_j]
y_j(p) = Σ_{i=1}^{n} x_i(p) w_ij(p) − θ_j
Step 3: Learning.
Update the weights in the network:

w_ij(p+1) = w_ij(p) + Δw_ij(p)

where Δw_ij(p) is the weight correction at iteration p, given by the generalised activity product rule with forgetting factor φ:

Δw_ij(p) = φ y_j(p) [λ x_i(p) − w_ij(p)]
[Figure: a fully connected network of five input neurons and five output neurons, showing an input pattern applied to the input layer and the resulting pattern on the output layer, before and after Hebbian training]
The initial weight matrix is the 5 × 5 identity matrix I: each output neuron is connected only to its corresponding input, with weight 1, and all cross-connections are 0.

W(0) =
[ 1 0 0 0 0 ]
[ 0 1 0 0 0 ]
[ 0 0 1 0 0 ]
[ 0 0 0 1 0 ]
[ 0 0 0 0 1 ]
After training, the weight matrix becomes:

W =
[ 0      0      0      0      0      ]
[ 0      2.0204 0      2.0204 0      ]
[ 0      0      1.0200 0      0      ]
[ 0      2.0204 0      2.0204 0      ]
[ 0      0      0      0      0.9996 ]
Testing the network with the probe vector X = [1 0 0 0 1]^T and the threshold vector θ = [0.4940 0.2661 0.0907 0.9478 0.0737]^T:

Y = sign(W X − θ) = [0 0 0 0 1]^T

Only output y5 responds: the self-connection of neuron 1 has decayed away, while the association between input 5 and output 5 has been retained.
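The algorithm above can be sketched in code. Everything concrete here — the training patterns, learning rate α = 0.1, forgetting factor φ = 0.05, and the fixed thresholds — is an assumption for illustration, not taken from the lecture:

```python
import numpy as np

n = 5                         # five input and five output neurons
alpha, phi = 0.1, 0.05        # learning rate and forgetting factor (assumed)
W = np.eye(n)                 # initial weights: the 5x5 identity matrix
theta = np.full(n, 0.5)       # fixed thresholds (assumed, for determinism)

def activate(x, W, theta):
    # binary output: y_j = 1 if sum_i x_i * w_ij - theta_j >= 0, else 0
    return (x @ W - theta >= 0).astype(float)

# assumed training set: inputs 2 and 4 always occur together,
# inputs 3 and 5 occur alone, input 1 is never presented
patterns = [np.array(v, dtype=float)
            for v in ([0, 1, 0, 1, 0], [0, 0, 1, 0, 0], [0, 0, 0, 0, 1])]

for epoch in range(200):
    for x in patterns:
        y = activate(x, W, theta)
        # generalised activity product rule with forgetting:
        # delta_w_ij = alpha * y_j * x_i - phi * y_j * w_ij
        W += alpha * np.outer(x, y) - phi * W * y

# probing with input 2 alone also recalls output 4,
# because the two were repeatedly activated together
print(activate(np.array([0, 1, 0, 0, 0], dtype=float), W, theta))
```

Because inputs 2 and 4 always fire together during training, the cross-weights w24 and w42 grow, so the probe on input 2 lights both outputs 2 and 4.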
An example of Hebbian learning:
http://blog.sina.com.tw/jiing/article.php?pbgid=872&entryid=573223
Competitive learning
In competitive learning, neurons compete among
themselves to be activateda.
While in Hebbian learning, several output neurons
can be activated simultaneously, in competitive
learning, only a single output neuron is active at
any time.
[Figure: the architecture of the Kohonen network — the input layer is fully connected to the Kohonen (output) layer; panels (a) and (b) show the network's response to two different input patterns]
[Figure: the Mexican hat function of lateral connection — connection strength versus distance from the winning neuron, with a short-range excitatory effect flanked by an inhibitory effect]
d = ‖X − W_j‖ = [ Σ_{i=1}^{n} (x_i − w_ij)² ]^{1/2},   j = 1, 2, …, m
For example, suppose the 2-D input vector is X = [0.52 0.12]^T and the three neurons in the Kohonen layer have initial weight vectors W1 = [0.27 0.81]^T, W2 = [0.42 0.70]^T and W3 = [0.43 0.21]^T. Neuron 3 has the smallest Euclidean distance to X, so it wins the competition. With learning rate α = 0.1, its weights are corrected by

Δw13 = 0.1 (0.52 − 0.43) ≈ 0.01
Δw23 = 0.1 (0.12 − 0.21) ≈ −0.01

giving the updated weight vector W3 ≈ [0.44 0.20]^T, which has moved closer to the input X.
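A single competitive-learning step can be sketched as follows; the input and weight values mirror the worked example, reconstructed where the extracted text is incomplete:

```python
import numpy as np

# one competitive-learning step: 2-D input, three output neurons
x = np.array([0.52, 0.12])
W = np.array([[0.27, 0.81],    # W1
              [0.42, 0.70],    # W2
              [0.43, 0.21]])   # W3
alpha = 0.1                    # learning rate

# Euclidean distance from X to each neuron's weight vector
d = np.linalg.norm(W - x, axis=1)
winner = np.argmin(d)          # neuron 3 (index 2) is closest

# only the winning neuron's weights move towards the input
W[winner] += alpha * (x - W[winner])
print(winner, W[winner])       # updated W3 is approximately [0.44, 0.20]
```

Note that only the winner is updated; the losing neurons' weights are left unchanged, which is what makes the learning competitive.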
The winning (best-matching) neuron j_X at iteration p is found by the minimum-distance Euclidean criterion:

‖X(p) − W_{j_X}(p)‖ = min_j ‖X(p) − W_j(p)‖ = min_j [ Σ_{i=1}^{n} (x_i − w_ij(p))² ]^{1/2},   j = 1, 2, …, m
Step 3: Learning.
Update the synaptic weights:

w_ij(p+1) = w_ij(p) + Δw_ij(p)

where the weight correction Δw_ij(p) is

Δw_ij(p) = α [x_i − w_ij(p)],   if j ∈ Λ_j(p)
Δw_ij(p) = 0,                   if j ∉ Λ_j(p)

Here α is the learning rate and Λ_j(p) is the neighbourhood of the winning neuron at iteration p.
Step 4: Iteration.
Increase iteration p by one, go back to Step 2 and
continue until the minimum-distance Euclidean
criterion is satisfied, or no noticeable changes
occur in the feature map.
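The steps above can be put together in a minimal Kohonen sketch; the map size, input distribution, and the learning-rate and neighbourhood schedules below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# minimal Kohonen sketch: m neurons in a 1-D chain, 2-D inputs drawn
# uniformly from the square [-1, 1]^2 (all of this setup is assumed)
m, n = 20, 2
W = rng.uniform(-0.1, 0.1, size=(m, n))  # small random initial weights
alpha, radius = 0.3, 3.0                 # learning rate, neighbourhood radius

for p in range(2000):
    x = rng.uniform(-1.0, 1.0, size=n)
    # Step 2: winner by the minimum-distance Euclidean criterion
    j_win = np.argmin(np.linalg.norm(W - x, axis=1))
    # Step 3: update the winner and its topological neighbourhood Lambda_j
    in_hood = np.abs(np.arange(m) - j_win) <= radius
    W[in_hood] += alpha * (x - W[in_hood])
    # Step 4: iterate, gradually shrinking alpha and the neighbourhood
    alpha = max(0.01, alpha * 0.999)
    radius = max(1.0, radius * 0.999)

# the weight vectors spread out from the origin to cover the input space
```

The rectangular neighbourhood (every neuron within the radius gets the full update) matches the two-case rule above; many implementations instead weight the update by a Gaussian of the distance to the winner.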
[Figure: four snapshots of the Kohonen feature map during training, plotted in the weight plane W(1,j) versus W(2,j), from the initial random weights to the final ordered map]