
Learning Rules in Neural Networks

By Dr. A. Sharmila, SELECT
• A learning rule is a method or a mathematical logic that improves the performance of an artificial neural network.

• Learning means adapting to change as and when the environment changes. An ANN is a complex system, or more precisely a complex adaptive system, that can change its internal structure based on the information passing through it.

• During ANN learning, to change the input/output behaviour we need to adjust the weights. Hence, a method is required with whose help the weights can be modified. These methods are called learning rules, and they are simply algorithms or equations.
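As an illustration only, here is a minimal Python sketch of that idea; the weights, the correction values, and the name delta_w are made-up assumptions, with delta_w standing in for whatever correction a particular learning rule computes:

# Minimal sketch: a learning rule boils down to a weight update.
# All names and values here are illustrative, not from the slides.

weights = [0.2, -0.5, 0.1]    # current weights of a single neuron
delta_w = [0.05, 0.0, -0.02]  # correction computed by some learning rule

# new weight = old weight + correction
weights = [w + dw for w, dw in zip(weights, delta_w)]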

Different learning rules in the neural network:

• Hebbian learning rule – specifies how to modify the weights of the nodes of a network.
• Perceptron learning rule – the network starts its learning by assigning a random value to each weight.
• Delta learning rule – the modification of a node's synaptic weight is equal to the product of the error and the input (see the sketch after this list).
• Correlation learning rule – the correlation rule is a supervised learning rule.
• Outstar learning rule – used when the nodes or neurons of a network are assumed to be arranged in a layer.
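As a hedged sketch of two of these rules, the snippet below implements one training step of the perceptron rule and of the delta rule for a single neuron. The learning rate value, the function names, and the use of a step activation for the perceptron are assumptions made for illustration:

# One-step sketches of the perceptron and delta rules.
# eta, the function names, and the step activation are assumptions.

eta = 0.1  # learning rate (assumed value)

def perceptron_update(w, x, target):
    # Perceptron rule: threshold the net input, then correct the
    # weights only by the amount of the output error.
    net = sum(wi * xi for wi, xi in zip(w, x))
    y = 1 if net >= 0 else 0
    return [wi + eta * (target - y) * xi for wi, xi in zip(w, x)]

def delta_update(w, x, target):
    # Delta rule: weight change = learning rate * error * input,
    # where the error is taken on the linear (pre-threshold) output.
    net = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * (target - net) * xi for wi, xi in zip(w, x)]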
Hebbian learning
 In 1949, Donald Hebb proposed one of the key ideas in biological
learning, commonly known as Hebb’s Law.
 Hebb’s Law states that if neuron i is near enough to excite neuron j
and repeatedly participates in its activation, the synaptic connection
between these two neurons is strengthened and neuron j becomes
more sensitive to stimuli from neuron i.
Hebb’s Law can be represented in the form of two rules:
1. If two neurons on either side of a connection are activated
synchronously, then the weight of that connection is
increased.
2. If two neurons on either side of a connection are activated
asynchronously, then the weight of that connection is
decreased.
Hebb’s Law provides the basis for learning without a teacher.
Learning here is a local phenomenon occurring without
feedback from the environment.
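A minimal sketch of Hebb's Law as a weight update, assuming bipolar activations and an illustrative learning rate (the names are hypothetical):

# Hebb's Law as a weight update; names and eta are assumptions.
eta = 1.0  # learning rate (assumed)

def hebb_update(w, x, y):
    # w: weight of the connection from neuron i to neuron j
    # x: activation of presynaptic neuron i (+1 or -1)
    # y: activation of postsynaptic neuron j (+1 or -1)
    return w + eta * x * y

With bipolar values, the single product x * y captures both rules at once: it is positive when the two neurons fire synchronously (the weight increases) and negative when they fire asynchronously (the weight decreases).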
 In contrast to supervised learning, unsupervised or self-organised learning does not require an external teacher. During the training session, the neural network receives a number of different input patterns, discovers significant features in these patterns and learns how to classify input data into appropriate categories. Unsupervised learning tends to follow the neurobiological organisation of the brain.

 Unsupervised learning algorithms aim to learn rapidly and can be used in real-time.
• From the above postulate, we can conclude that the connection between two neurons might be strengthened if the neurons fire at the same time and might weaken if they fire at different times.
• Hebb learning is a kind of feed-forward, unsupervised learning.
Architecture of Hebb Net
• The Hebb net includes a bias, which acts exactly like a weight on a connection from a unit whose activation is always 1.
• Increasing the bias increases the net input of the unit.
• The input and output data should be in bipolar form.
• Limitation: if the data are in binary form, the Hebb net cannot learn, because the weight change (input × target) is zero whenever the input or the target is 0.
• Used for training other nets.
• It is a single-layer net: the input layer has many input units and the output layer has one output unit.
• It is a basic architecture that performs pattern classification.
• The bias included in the net is set to '1', which helps in increasing the net input.
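To tie these points together, here is a hedged worked example: training a Hebb net on the AND function with bipolar inputs and targets, treating the bias as a weight on a constant input of 1. The pattern encoding is the standard textbook one; the variable names are illustrative:

# Hebb net trained on bipolar AND; names are illustrative.
patterns = [            # (x1, x2, target) in bipolar form
    ( 1,  1,  1),
    ( 1, -1, -1),
    (-1,  1, -1),
    (-1, -1, -1),
]

w1 = w2 = b = 0.0       # weights and bias start at zero

for x1, x2, t in patterns:
    # Hebb rule: weight change = input * target (learning rate 1)
    w1 += x1 * t
    w2 += x2 * t
    b  += 1 * t         # bias input is always 1

print(w1, w2, b)        # -> 2.0 2.0 -2.0, which classifies AND correctly

If the same patterns were coded in binary (0/1) instead, every update involving a 0 input or a 0 target would be zero, which is exactly the limitation noted above.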
