Architecture
- A simple neural network consists of a layer of input units and a single output unit.
- In pattern classification problems, each input vector (pattern) either belongs or does not belong to a particular class or category.
- For a neural net approach, we assume we have a set of training patterns for which the correct classification is known.
- E.g., pattern classification was used to detect heart abnormalities (Donald Specht, 1963).
Biases and Thresholds
Linear Separability
y_in = b + Σᵢ xᵢwᵢ

- The boundary between a "yes" response (y_in > 0) and a "no" response (y_in < 0) is called the decision boundary and is determined by

  b + Σᵢ xᵢwᵢ = 0

  (bipolar signals)
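Below is a minimal, illustrative sketch of how a single output unit computes y_in and reports which side of the decision boundary an input falls on (function names such as net_input and classify are assumptions, not from the slides):

```python
# Minimal sketch of a single output unit.

def net_input(x, w, b):
    """y_in = b + sum_i x_i * w_i"""
    return b + sum(xi * wi for xi, wi in zip(x, w))

def classify(x, w, b):
    """Bipolar response: which side of the decision boundary x lies on."""
    return 1 if net_input(x, w, b) > 0 else -1

# The decision boundary itself is the set of points with b + sum_i x_i * w_i = 0.
print(net_input((1, 1), (1, 1), -1), classify((1, 1), (1, 1), -1))      # 1  1
print(net_input((-1, -1), (1, 1), -1), classify((-1, -1), (1, 1), -1))  # -3 -1
```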
AND function

Input (x1, x2)   Output (t)
(1, 1)           +1
(1, -1)          -1
(-1, 1)          -1
(-1, -1)         -1

A possible decision boundary: x2 = -x1 + 1, i.e. b = -1, w1 = 1, w2 = 1 (checked in the sketch after the OR function below).
OR function

Input (x1, x2)   Output (t)
(1, 1)           +1
(1, -1)          +1
(-1, 1)          +1
(-1, -1)         -1

A possible decision boundary: x2 = -x1 - 1, i.e. b = 1, w1 = 1, w2 = 1.
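The following small sketch (assuming bipolar inputs and a +1/-1 response based on the sign of y_in) checks that both sets of weights reproduce the AND and OR truth tables:

```python
# Check that the stated weights and biases reproduce the AND and OR truth
# tables with bipolar inputs and targets. Illustrative only.

def response(x, w, b):
    y_in = b + sum(xi * wi for xi, wi in zip(x, w))
    return 1 if y_in > 0 else -1

patterns = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
and_targets = [1, -1, -1, -1]
or_targets = [1, 1, 1, -1]

# AND: b = -1, w1 = w2 = 1  (boundary x2 = -x1 + 1)
print([response(p, (1, 1), -1) for p in patterns] == and_targets)  # True
# OR:  b = 1,  w1 = w2 = 1  (boundary x2 = -x1 - 1)
print([response(p, (1, 1), 1) for p in patterns] == or_targets)    # True
```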
XOR function

Input (x1, x2)   Output (t)
(1, 1)           -1
(1, -1)          +1
(-1, 1)          +1
(-1, -1)         -1

No single straight line can separate the positive responses from the negative ones: the XOR function is not linearly separable.
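As a rough illustration (not a proof), a brute-force search over a small grid of weights and biases finds no line that reproduces the XOR targets; the grid and helper below are illustrative assumptions:

```python
# Brute-force illustration that XOR is not linearly separable: try a small grid
# of (w1, w2, b) values and check whether any of them reproduces the XOR targets.
# This only demonstrates failure over the grid searched.
import itertools

patterns = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
xor_targets = [-1, 1, 1, -1]

def response(x, w1, w2, b):
    y_in = b + x[0] * w1 + x[1] * w2
    return 1 if y_in > 0 else -1

grid = [i / 2 for i in range(-8, 9)]  # values -4.0, -3.5, ..., 4.0
found = any(
    all(response(p, w1, w2, b) == t for p, t in zip(patterns, xor_targets))
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print(found)  # False
```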
Hebb Networks
- Hebb proposed that learning occurs by modification of the synapse strengths (weights) in such a way that if two neurons are both on at the same time, the weight between those neurons should be increased (the Hebb rule).
- Extended Hebb rule: also increase the weight if both neurons are off at the same time.
- A single-layer neural net trained with the Hebb rule is called a Hebb net.

  wi(new) = wi(old) + xi·y

- The Hebb rule has some limitations for binary data, as the example below shows.
- No learning occurs for the second, third, and fourth training inputs, because their binary targets are 0 (reproduced in the sketch after the table).

Starting from the weights (1 1 1) obtained from the first training pair:

Input (x1 x2 1)   Target   Weight Changes (Δw1 Δw2 Δb)   Weights (w1 w2 b)
(1 0 1)           0        (0 0 0)                       (1 1 1)
(0 1 1)           0        (0 0 0)                       (1 1 1)
(0 0 1)           0        (0 0 0)                       (1 1 1)
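A short sketch of the Hebb rule wi(new) = wi(old) + xi·y applied to these binary AND data, with the bias treated as a weight on a constant input of 1:

```python
# Hebb rule on the AND function with binary inputs and binary targets.
# Only the first pattern changes the weights: whenever the target y is 0,
# the update x_i * y is 0, so no learning occurs.

training = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]

w1 = w2 = b = 0
for (x1, x2), y in training:
    w1 += x1 * y   # w_i(new) = w_i(old) + x_i * y
    w2 += x2 * y
    b += 1 * y     # bias update uses the constant input 1
    print((x1, x2), y, (w1, w2, b))
# Weights reach (1, 1, 1) after the first pattern and never change again.
```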
- With bipolar inputs and targets, learning continues for the second, third, and fourth training patterns.
Weight updates for the second input:

Input (x1 x2 1)   Target   Weight Changes (Δw1 Δw2 Δb)   Weights (w1 w2 b)
(weights after the first input)                          (1 1 1)
(1 -1 1)          -1       (-1 1 -1)                     (0 2 0)

Separating line: x2 = 0
The response of the net is now correct for the first two input points.
Weight updates for the fourth input:

Input (x1 x2 1)   Target   Weight Changes (Δw1 Δw2 Δb)   Weights (w1 w2 b)
(weights after the third input)                          (1 1 -1)
(-1 -1 1)         -1       (1 1 -1)                      (2 2 -2)
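With bipolar inputs and targets every pattern contributes a non-zero update; this short sketch reproduces the weight sequence of this example, ending at (2, 2, -2):

```python
# Hebb rule on the AND function with bipolar inputs and targets.
# Reproduces the updates above: (1,1,1) -> (0,2,0) -> (1,1,-1) -> (2,2,-2).

training = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]

w1 = w2 = b = 0
for (x1, x2), y in training:
    w1 += x1 * y
    w2 += x2 * y
    b += 1 * y
    print((x1, x2), y, (w1, w2, b))
```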
Perceptron
- Originally developed by Rosenblatt.
- Has three layers of neurons: sensory units, associator units, and response units.
- Sensory units are connected to the associator units with fixed weights of +1, 0, or -1.
- The activation function for the associator units is a binary step function with a threshold.
- The output of the perceptron is y = f(y_in), where

  f(y_in) =  1  if y_in > θ
             0  if -θ ≤ y_in ≤ θ
            -1  if y_in < -θ
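A direct transcription of this activation function (theta is the threshold):

```python
# Perceptron activation: 1 above the threshold, -1 below the negative threshold,
# 0 in the "undecided" band in between.

def f(y_in, theta):
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

print(f(0.5, 0.2), f(0.0, 0.2), f(-0.5, 0.2))  # 1 0 -1
```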
- Weights from the associator units to the response (output) units are adjusted by the perceptron learning rule.
- If an error occurs, the weights are changed according to the formula

  wi(new) = wi(old) + α·t·xi

  where t is the target value (+1 or -1) and α is the learning rate.
- Training continues until no error occurs.
Perceptron for the AND Function: binary inputs, bipolar targets

We take α = 1, initial weights and bias = 0, and threshold θ = 0.2.

Δw = α·t·(x1, x2, 1) if an error occurs, and 0 otherwise (no error).

Separating lines (for the weights (1, 1, 1) reached after the first update):
x1 + x2 + 1 = 0.2 and x1 + x2 + 1 = -0.2
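A sketch of the full training loop under these settings (α = 1, zero initial weights, θ = 0.2, same pattern order as the table below); epochs repeat until no error occurs:

```python
# Perceptron learning for AND: binary inputs, bipolar targets, alpha = 1,
# initial weights and bias = 0, threshold theta = 0.2. Same pattern order
# as the table below; training stops when a whole epoch produces no error.

alpha, theta = 1, 0.2
training = [((1, 1), 1), ((1, 0), -1), ((0, 1), -1), ((0, 0), -1)]

def f(y_in):
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

w1 = w2 = b = 0
changed = True
while changed:
    changed = False
    for (x1, x2), t in training:
        out = f(b + x1 * w1 + x2 * w2)
        if out != t:               # error: w_i(new) = w_i(old) + alpha * t * x_i
            w1 += alpha * t * x1
            w2 += alpha * t * x2
            b += alpha * t
            changed = True
print(w1, w2, b)                   # (2, 3, -4), as in the final epoch below
```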
First epoch:

INPUT NET OUT TARGET ΔW WEIGHTS
(x1 x2 1) (w1 w2 b)
(1 1 1) 0 0 1 (1 1 1) (1 1 1)
(1 0 1) 2 1 -1 (-1 0 -1) (0 1 0)
(0 1 1) 1 1 -1 (0 -1 -1) (0 0 -1)
(0 0 1) -1 -1 -1 (0 0 0) (0 0 -1)

The response for input (1, 1) is still not correct, so training continues with a second epoch.
INPUT NET OUT TARGET ΔW WEIGHTS
(x1 x2 1) (w1 w2 b)
Second epoch (starting from weights (0 0 -1)):
(1 1 1) -1 -1 1 (1 1 1) (1 1 0)
(1 0 1) 1 1 -1 (-1 0 -1) (0 1 -1)
(0 1 1) 0 0 -1 (0 -1 -1) (0 0 -2)
(0 0 1) -2 -1 -1 (0 0 0) (0 0 -2)
Later epochs:
(1 1 1) -2 -1 1 (1 1 1) (1 1 -1)
(1 0 1) 0 0 -1 (-1 0 -1) (0 1 -2)
(0 1 1) -1 -1 -1 (0 0 0) (0 1 -2)
(0 0 1) -2 -1 -1 (0 0 0) (0 1 -2)
(1 1 1) -1 -1 1 (1 1 1) (1 2 -1)
(1 0 1) 0 0 -1 (-1 0 -1) (0 2 -2)
(0 1 1) 0 0 -1 (0 -1 -1) (0 1 -3)
(0 0 1) -3 -1 -1 (0 0 0) (0 1 -3)
(1 1 1) -2 -1 1 (1 1 1) (1 2 -2)
(1 0 1) -1 -1 -1 (0 0 0) (1 2 -2)
(0 1 1) 0 0 -1 (0 -1 -1) (1 1 -3)
(0 0 1) -3 -1 -1 (0 0 0) (1 1 -3)
(1 1 1) 0 0 1 (1 1 1) (2 3 -2)
(1 0 1) 0 0 -1 (-1 0 -1) (1 3 -3)
(0 1 1) 0 0 -1 (0 -1 -1) (1 2 -4)
(0 0 1) -4 -1 -1 (0 0 0) (1 2 -4)
(1 1 1) -1 -1 1 (1 1 1) (2 3 -3)
(1 0 1) -1 -1 -1 (0 0 0) (2 3 -3)
(0 1 1) 0 0 -1 (0 -1 -1) (2 2 -4)
(0 0 1) -4 -1 -1 (0 0 0) (2 2 -4)
(1 1 1) 0 0 1 (1 1 1) (3 3 -3)
(1 0 1) 0 0 -1 (-1 0 -1) (2 3 -4)
(0 1 1) -1 -1 -1 (0 0 0) (2 3 -4)
(0 0 1) -4 -1 -1 (0 0 0) (2 3 -4)
Final epoch (no further weight changes occur):

INPUT NET OUT TARGET ΔW WEIGHTS
(x1 x2 1) (w1 w2 b)
(1 1 1) 1 1 1 (0 0 0) (2 3 -4)
(1 0 1) -2 -1 -1 (0 0 0) (2 3 -4)
(0 1 1) -1 -1 -1 (0 0 0) (2 3 -4)
(0 0 1) -4 -1 -1 (0 0 0) (2 3 -4)
Positive response:
2x1 + 3x2 - 4 > 0.2
Boundary line: x2 = -(2/3)x1 + 7/5
Negative response:
2x1 + 3x2 - 4 < -0.2
Boundary line: x2 = -(2/3)x1 + 19/15
Perceptron for the AND function with bipolar inputs and targets: with weights (1 1 -1), every training pattern already gives the correct response, so no weight changes occur.

INPUT NET OUT TARGET ΔW WEIGHTS
(x1 x2 1) (w1 w2 b)
(1 1 1) 1 1 1 (0 0 0) (1 1 -1)
(1 -1 1) -1 -1 -1 (0 0 0) (1 1 -1)
(-1 1 1) -1 -1 -1 (0 0 0) (1 1 -1)
(-1 -1 1) -3 -1 -1 (0 0 0) (1 1 -1)
ADALINE
- The Adaptive Linear Neuron (ADALINE) uses bipolar activations for its input signals and target output.
- An ADALINE is trained using the delta rule (also known as the least mean squares (LMS) or Widrow-Hoff rule).
- An ADALINE is a single neuron that receives input from several units; it has only one output unit.
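A minimal sketch of the delta (LMS) rule for a single ADALINE unit; the bipolar AND data, learning rate, and epoch count below are illustrative assumptions, not from the slides:

```python
# ADALINE trained with the delta (LMS / Widrow-Hoff) rule on bipolar AND data.
# Update: w_i(new) = w_i(old) + alpha * (t - y_in) * x_i, where y_in is the
# linear net input (no threshold is applied during training).
# alpha and the number of epochs are illustrative choices.

alpha = 0.1
training = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]

w1 = w2 = b = 0.0
for epoch in range(50):
    for (x1, x2), t in training:
        y_in = b + x1 * w1 + x2 * w2
        err = t - y_in
        w1 += alpha * err * x1
        w2 += alpha * err * x2
        b += alpha * err
print(round(w1, 2), round(w2, 2), round(b, 2))  # close to 0.5 0.5 -0.5
```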