Processing Units
- A processing unit can possess a local memory.
- A unit can carry out localized information processing.
- A processing unit's output can be of any desired mathematical type.
- The information processing in a unit is completely local, i.e., it depends only on the inputs arriving at the unit and the values stored in its local memory.
REPRESENTATION OF A NEURON
Artificial Neuron
The artificial neuron was designed to mimic the first-order characteristics of a biological neuron.
Artificial Neuron
[Figure: an artificial neuron with inputs X1..X4 and weights w1..w4 feeding a summing unit]
NET = X·W = X1*w1 + X2*w2 + X3*w3 + X4*w4
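The weighted sum above is simply a dot product of the input and weight vectors. A minimal sketch in Python (the function name and sample values are illustrative, not from the source):

```python
def net(x, w):
    """Weighted sum NET = x1*w1 + x2*w2 + ... (dot product of inputs and weights)."""
    return sum(xi * wi for xi, wi in zip(x, w))

# Example with four inputs and four weights (hypothetical values)
X = [1.0, 0.5, -1.0, 2.0]
W = [0.2, 0.4, 0.1, 0.3]
print(net(X, W))  # 0.2 + 0.2 - 0.1 + 0.6, approximately 0.9
```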
ACTIVATION FUNCTION
(Transfer Functions)
An activation function sits between NET and the actual output: it processes NET to produce OUT.
The activation function can be:
- a simple linear function
- the threshold function
- the sigmoid function
NET = X·W
OUT = F(NET)
where X and W are vectors.
Simple Linear Function
OUT = K·NET, where K is a constant.

Threshold Function
OUT = 1 if NET > T
OUT = 0 otherwise.

Sigmoid Function
OUT = 1 / (1 + exp(-NET))
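The three activation functions can be sketched directly from their definitions (default constants such as K = 0.01 and T = 0 are illustrative assumptions):

```python
import math

def linear(net_val, k=0.01):
    """Simple linear activation: OUT = K * NET, K a constant."""
    return k * net_val

def threshold(net_val, t=0.0):
    """Threshold activation: OUT = 1 if NET > T, else 0."""
    return 1 if net_val > t else 0

def sigmoid(net_val):
    """Sigmoid activation: OUT = 1 / (1 + exp(-NET))."""
    return 1.0 / (1.0 + math.exp(-net_val))

print(linear(5.0))     # 0.05
print(threshold(0.5))  # 1
print(sigmoid(0.0))    # 0.5
```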
[Plot: simple linear function OUT = K·NET with K = 0.01; f(NET) vs NET for NET in [-10, 10]]
[Plot: threshold function, OUT = 1 if NET > T, OUT = 0 otherwise; f(NET) vs NET]
[Plot: sigmoid function; f(NET) vs NET for NET in [-10, 10]]
General Neuron
a = f(W·p + b)
[Figure: general neuron with inputs X1..X4, weights W1,1..W1,4, bias b, NET = X·W, and transfer function F producing OUT]

Artificial Neuron (abbreviated notation)
[Figure: input vector p (R×1) multiplied by weight matrix W; bias b added to give n = W·p + b (1×1); output a = f(n) (1×1)]
Weight Indices
The first index represents the destination neuron for that weight.
The second index represents the source of the signal fed to the neuron.
The indices in W1,2 say that this weight represents the connection to the first neuron from the second source.
[Figure: a layer of three neurons with inputs p1..p4; weight W1,2 connects source 2 to neuron 1, and W3,3 connects source 3 to neuron 3; each neuron i has bias bi, net input ni, and output ai]
P = [p1; p2; ...; pR]   (R×1)

Weight Matrix
W = [ w1,1  w1,2  ...  w1,R
      w2,1  w2,2  ...  w2,R
      ...
      wS,1  wS,2  ...  wS,R ]
[Figure: a layer of S neurons in full and abbreviated notation — p is R×1, W is S×R, b is S×1, n = W·p + b is S×1, and a = f(n) is S×1]
[Figure: a two-layer network — input p is R×1; layer 1 has W1 (S1×R) and b1 (S1×1) producing a1 (S1×1); layer 2 has W2 (S2×S1) and b2 (S2×1) producing a2 (S2×1)]
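The layer equation a = f(W·p + b) and its two-layer composition can be sketched in pure Python; the sizes (R = 3, S1 = 2, S2 = 1) and all weight values are illustrative assumptions, not from the source:

```python
import math

def matvec(W, p):
    """Multiply an (S x R) weight matrix by an R-element input vector."""
    return [sum(w * x for w, x in zip(row, p)) for row in W]

def layer(W, p, b, f):
    """One layer: a = f(W p + b), with f applied element-wise."""
    return [f(n + bi) for n, bi in zip(matvec(W, p), b)]

def sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

# Hypothetical sizes: R = 3 inputs, S1 = 2 hidden neurons, S2 = 1 output
p  = [1.0, -1.0, 0.5]
W1 = [[0.2, -0.1, 0.4],
      [0.3, 0.5, -0.2]]   # S1 x R
b1 = [0.1, -0.1]
W2 = [[0.7, -0.3]]        # S2 x S1
b2 = [0.05]

a1 = layer(W1, p, b1, sigmoid)   # first-layer output, length S1
a2 = layer(W2, a1, b2, sigmoid)  # second-layer output, length S2
print(a2)
```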
Training
A network is trained by sequentially applying input vectors while adjusting the network weights according to a predetermined procedure.
Types of Training
- Supervised training: each input vector is paired with a target vector; the weights are adjusted to reduce the difference between the network's output and the target.
- Unsupervised training: no target vectors are supplied; the network organizes itself to extract regularities from the input data.
TYPES OF NETWORKS
Perceptron
The perceptron is a feed-forward network. Its summing unit multiplies the input vector by the weight vector and sums the weighted inputs. If this sum is greater than a predetermined threshold value, the output is one; otherwise it is zero (in the case of hardlim, and -1 in the case of hardlims).
[Figure: perceptron representation — inputs X1..X4 with weights w1..w4, NET = X·W, and threshold unit F producing OUT]
Representation & Learning
Representation refers to the ability of the network to
simulate a specified function.
Learning requires the existence of a systematic procedure for
adjusting the weights to produce that function.
Example: Representation
Can we represent an odd/even number discriminating machine by a perceptron?
SENSOR → SORTER → APPLE / ORANGE

P = [shape; texture; weight]

Prototype of orange: P1 = [1; -1; -1]
Prototype of apple:  P2 = [1; 1; -1]

a = hardlims(W·p + b)
[Figure: decision boundary in the (p1, p2) plane, with n > 0 on one side and n < 0 on the other]
Example 2:
Let the above two-input perceptron have w11 = -1 and w12 = 1. Then
a = hardlims([-1 1]·p + b)
If b = -1, then n = [-1 1]·p - 1 = 0 represents a boundary line.
[Figure: the boundary line, with n > 0 on one side and n < 0 on the other]
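The boundary n = [-1 1]·p - 1 = 0 can be checked numerically; a minimal sketch, assuming the common hardlims convention of returning +1 at n = 0 (the test points are illustrative):

```python
def hardlims(n):
    """Symmetric hard limit: +1 if n >= 0, else -1 (convention assumed here)."""
    return 1 if n >= 0 else -1

def perceptron(p, w=(-1.0, 1.0), b=-1.0):
    """a = hardlims(w.p + b) with w = [-1 1] and b = -1, as in the example."""
    n = w[0] * p[0] + w[1] * p[1] + b
    return hardlims(n)

print(perceptron((0.0, 2.0)))  # +1: n = 1, this point lies on the n > 0 side
print(perceptron((0.0, 0.0)))  # -1: n = -1, this point lies on the n < 0 side
```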
Perceptron equation (input p is 3×1):
a = hardlims([w11 w12 w13]·[p1; p2; p3] + b)

Orange = [1; -1; -1]
Apple  = [1; 1; -1]
[Figure: prototype points P1 (orange), P2 (apple), and a test point P3 in input space]

The decision boundary satisfies
[w11 w12 w13]·[p1; p2; p3] + b = 0.
Choosing W = [0 1 0] and b = 0 gives
[0 1 0]·[p1; p2; p3] + 0 = 0,
i.e. the plane p2 = 0, which separates the two prototypes.
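With W = [0 1 0] and b = 0 from the example, the classifier reduces to the sign of the texture component p2. A minimal sketch (function names and the string labels are illustrative):

```python
def hardlims(n):
    # Symmetric hard limit: +1 for n >= 0, -1 otherwise (assumed convention)
    return 1 if n >= 0 else -1

def classify(p, w=(0.0, 1.0, 0.0), b=0.0):
    """a = hardlims([0 1 0].p + 0): +1 -> apple, -1 -> orange."""
    n = sum(wi * pi for wi, pi in zip(w, p)) + b
    return "apple" if hardlims(n) == 1 else "orange"

print(classify([1, -1, -1]))  # orange (prototype P1: n = -1)
print(classify([1, 1, -1]))   # apple  (prototype P2: n = +1)
```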
Example 2
Is the XOR problem representable?
Take a two-input XOR gate with inputs x and y and output Y.
[Figure: the four input points plotted in the x-y plane — A0 and A1 (output 0) and B0 and B1 (output 1); no single line x·w1 + y·w2 = T separates the A points from the B points]
Example 3:
Check whether the AND and OR functions are linearly separable.
Linear Separability
For some classes of functions, the input vectors can be separated geometrically. For the two-input case, the separator is a straight line. For three inputs, the separation can be done with a flat plane cutting the resulting three-dimensional space. For four or more inputs, visualization is difficult; we generalize to a space of n dimensions divided by a HYPERPLANE, which divides the space into two regions.
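Linear separability of two-input Boolean functions can be checked by a brute-force search over candidate lines x·w1 + y·w2 = T. This sketch (the search grid and function names are assumptions) confirms that AND and OR are separable while XOR is not:

```python
from itertools import product

def separable(truth_table):
    """Brute-force search for a line w1*x + w2*y = T separating outputs 0 and 1."""
    steps = [i / 2 for i in range(-4, 5)]   # candidate values -2.0 .. 2.0 in 0.5 steps
    for w1, w2, t in product(steps, steps, steps):
        if all((w1 * x + w2 * y > t) == bool(out)
               for (x, y), out in truth_table.items()):
            return True
    return False

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

print(separable(AND))  # True  (e.g. w1 = w2 = 1, T = 1.5)
print(separable(OR))   # True  (e.g. w1 = w2 = 1, T = 0.5)
print(separable(XOR))  # False (no line works, for any weights)
```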
Perceptron Training Algorithm
The training method used can be summarized as follows:
1. Apply an input pattern and calculate the output.
2. a) If the output is correct, go to step 1.
   b) If the output is incorrect and is zero, add each input to its corresponding weight; or
   c) If the output is incorrect and is one, subtract each input from its corresponding weight.
3. Go to step 1.
δ = (T - A)
  δ = 0 → step 2a
  δ > 0 → step 2b
  δ < 0 → step 2c
Δi = δ·xi
wi(n+1) = wi(n) + Δi
where
Δi      = the correction associated with the i-th input xi
wi(n+1) = the value of weight i after adjustment
wi(n)   = the value of weight i before adjustment
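The rule above (δ = T - A, wi(n+1) = wi(n) + δ·xi) can be sketched as a training loop. Folding the bias in as a constant input of 1 is an assumption of this sketch, not something stated in the slides; the AND training set is illustrative:

```python
def train_perceptron(samples, epochs=20):
    """Perceptron training rule: delta = T - A, w_i(n+1) = w_i(n) + delta * x_i.
    The bias is folded in as a constant input of 1 (an assumed convention)."""
    w = [0.0, 0.0, 0.0]  # [bias weight, w1, w2]
    for _ in range(epochs):
        for x, target in samples:
            xs = [1.0] + list(x)                        # prepend bias input
            net = sum(wi * xi for wi, xi in zip(w, xs))
            out = 1 if net > 0 else 0                   # threshold activation
            delta = target - out                        # delta = T - A
            w = [wi + delta * xi for wi, xi in zip(w, xs)]
    return w

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(AND)
outputs = [1 if sum(wi * xi for wi, xi in zip(w, [1.0] + list(x))) > 0 else 0
           for x, _ in AND]
print(outputs)  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop reaches a correct weight vector in a finite number of updates.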
Module 2
Back Propagation: Training Algorithm - Application - Network Configurations - Network Paralysis - Local Minima - Temporal Instability.
BACK PROPAGATION
Back propagation is a systematic method for training multilayer artificial neural networks.