Seminar on Artificial Neural Networks
[Figure: A two-input unit computing o = x1 AND x2, with inputs x1, x2 ∈ {0, 1} and connection weights w1, w2]

Each input xn is multiplied by a connection weight, wn:

    sum = w1x1 + …… + wnxn

These products are simply summed, fed through the transfer function, f(), to generate a result, and then output.
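The unit described above can be sketched in a few lines. This is a minimal illustration, not code from the slides: the weights (w1 = w2 = 1) and the threshold (1.5) are assumed values chosen so that the unit computes o = x1 AND x2, and the step function stands in for the transfer function f().

```python
def step(total, threshold=1.5):
    """Transfer function f(): fire (1) if the weighted sum exceeds the threshold."""
    return 1 if total > threshold else 0

def perceptron_and(x1, x2, w1=1.0, w2=1.0):
    # sum = w1*x1 + ... + wn*xn, here with n = 2
    total = w1 * x1 + w2 * x2
    return step(total)

# The unit reproduces the AND truth table over x1, x2 in {0, 1}.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, perceptron_and(x1, x2))
```

Any weights and threshold satisfying w1 + w2 > threshold > max(w1, w2) would work equally well for AND; the values above are just one such choice.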
Biological Terminology    Artificial Neural Network Terminology
Neuron                    Node/Unit/Cell/Neurode
Synapse                   Connection/Edge/Link
[Figure: A neural network with an input layer, hidden layers, and an output layer; the connections between neurons carry weights, and the actual output is compared with the desired output]

[Figure: Two layered networks, Layer 0 (input layer) through Layer 3 (output layer), with Layers 1 and 2 as hidden layers]
Fig: Acyclic network
This is the subclass of the layered networks in which there are no intra-layer connections. In other words, a connection may exist between any node in layer i and any node in layer j for i < j, but a connection is not allowed for i = j.

Fig: Feedforward network
This is a subclass of acyclic networks in which a connection is allowed from a node in layer i only to nodes in layer i+1.
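The feedforward structure above can be made concrete with a short forward-pass sketch. Everything here is illustrative: the layer sizes (2-3-3-1 for Layers 0 through 3), the fixed weights, and the sigmoid transfer function are assumptions, not values from the slides. The key point is that each layer's nodes feed only the nodes of the next layer.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def layer_forward(inputs, weights):
    """Each node j of the next layer computes f(sum_i x_i * w_ji)."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row))) for row in weights]

# Illustrative weight matrices; row j holds the weights into node j.
w1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]                    # Layer 0 -> Layer 1
w2 = [[0.2, 0.7, -0.5], [0.6, -0.1, 0.3], [-0.4, 0.2, 0.9]]   # Layer 1 -> Layer 2
w3 = [[0.3, -0.6, 0.5]]                                        # Layer 2 -> Layer 3

x = [1.0, 0.0]                 # input vector at Layer 0
h1 = layer_forward(x, w1)      # hidden Layer 1
h2 = layer_forward(h1, w2)     # hidden Layer 2
out = layer_forward(h2, w3)    # output Layer 3 (one node)
print(out)
```

Because connections only go from layer i to layer i+1, the whole computation is three matrix-vector steps applied in sequence.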
SUPERVISED LEARNING

• A teacher is available to indicate whether a system is performing correctly, or to indicate the amount of error in system performance. Here the teacher is a set of training data.
• The training data consist of pairs of input and desired output values that are traditionally represented in data vectors.
• Supervised learning can also be referred to as classification, where we have a wide range of classifiers (multilayer perceptron, k-nearest neighbor, etc.).

UNSUPERVISED LEARNING

This is learning by doing. In this approach, no sample outputs are provided to the network against which it can measure its predictive performance for a given vector of inputs. One common form of unsupervised learning is clustering, where we try to categorize data in different clusters by their similarity.
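Clustering, as just described, groups data purely by similarity, with no desired outputs. A minimal k-means sketch illustrates this; the one-dimensional data points, the initial centers, and k = 2 are all made-up examples, not from the slides.

```python
def kmeans(points, centers, iters=10):
    """Alternate between assigning points to the nearest center
    and moving each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # nearest center by squared distance (similarity measure)
            j = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            clusters[j].append(p)
        # recompute each center; keep the old one if its cluster is empty
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
print(kmeans(data, centers=[0.0, 5.0]))
```

No labels were given, yet the two centers settle near the two natural groups in the data (around 1 and around 9.5).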
The backpropagation algorithm (Rumelhart and McClelland,
1986) is used in layered feed-forward Artificial Neural
Networks.
Backpropagation is a multi-layer, feed-forward, supervised learning network based on the gradient descent learning rule.

We provide the algorithm with examples of the inputs and outputs we want the network to compute, and then the error (the difference between actual and expected results) is calculated.

The idea of the backpropagation algorithm is to reduce this error until the Artificial Neural Network learns the training data.
The activation function of the artificial neurons in ANNs implementing the backpropagation algorithm is a weighted sum (the sum of the inputs xi multiplied by their respective weights wji):

    Aj(x, w) = Σi xi wji

The most common output function is the sigmoidal function:

    Oj(x, w) = 1 / (1 + e^(-Aj(x, w)))
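The weighted-sum activation and the sigmoidal output just described can be written directly as two small functions. The input vector and weights below are arbitrary illustrative values.

```python
import math

def activation(x, w_j):
    """A_j = sum_i x_i * w_ji: the weighted sum of the inputs."""
    return sum(xi * wji for xi, wji in zip(x, w_j))

def output(x, w_j):
    """O_j = 1 / (1 + e^(-A_j)): the sigmoidal output function."""
    a = activation(x, w_j)
    return 1.0 / (1.0 + math.exp(-a))

print(output([1.0, 0.5], [0.4, -0.2]))  # sigmoid of 0.3, approximately 0.574
```

The sigmoid squashes any activation into (0, 1), which is what makes the derivative Oj(1 - Oj), used in the weight-update step, so convenient.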
The adjustment of each weight (Δwji) will be the negative of a constant eta (η) multiplied by the dependence of the previous weight wji on the error of the network, i.e. the partial derivative of the error with respect to that weight:

    Δwji = -η ∂E/∂wji

First, we need to calculate how much the error depends on the output. Taking the squared error E = (Oj - dj)², where dj is the desired output:

    ∂E/∂Oj = 2(Oj - dj)

Next, how much the output depends on the activation, which in turn depends on the weights:

    ∂Oj/∂wji = (∂Oj/∂Aj)(∂Aj/∂wji) = Oj(1 - Oj) xi

Putting these together, the weight update becomes:

    Δwji = -η · 2(Oj - dj) · Oj(1 - Oj) · xi
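The steps above can be sketched as a single training step for one sigmoid unit. This assumes the squared error E = (Oj - dj)² (a common choice consistent with the 2(Oj - dj) derivative); the input, initial weights, and η = 0.5 are illustrative values, not from the slides.

```python
import math

def train_step(x, w, d, eta=0.5):
    """One backpropagation update for a single sigmoid unit, in place."""
    a = sum(xi * wi for xi, wi in zip(x, w))   # activation A_j
    o = 1.0 / (1.0 + math.exp(-a))             # output O_j
    grad_out = 2.0 * (o - d)                   # dE/dO_j for E = (O_j - d)^2
    for i, xi in enumerate(x):
        # dE/dw_ji = dE/dO_j * O_j(1 - O_j) * x_i; move against the gradient
        w[i] -= eta * grad_out * o * (1.0 - o) * xi
    return o

# Repeated updates drive the output toward the desired value d = 1.
w = [0.0, 0.0]
for _ in range(200):
    train_step([1.0, 1.0], w, d=1.0)
print(train_step([1.0, 1.0], w, d=1.0))  # close to 1 after training
```

Each update is exactly Δwji = -η · 2(Oj - dj) · Oj(1 - Oj) · xi applied to every incoming weight; iterating it reduces the error until the unit has learned the training example.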