
ARTIFICIAL NEURAL

NETWORKS
By: Kirti Susan Varghese
Yamini Mohan
Vidya Academy of Science
and Technology
INTRODUCTION
Artificial neural systems, or networks, are physical cellular
systems which can acquire, store, and utilize experiential
knowledge.
ANN - middle ground between engineering and artificial
intelligence.
In contrast to conventional computers, neural networks
must be taught or trained.
NEURONS
The human brain is a collection of more than 10 billion
interconnected neurons.
Dendrites
Axon
Synapses


Artificial neural networks (ANN) have been developed as
generalizations of mathematical models of biological
nervous systems.

NEURONS
The basic processing elements of neural networks are
called artificial neurons, or simply neurons or nodes.
Learning capability
When considered as a signal processing device, the neuron's
inputs are the frequencies of pulses received at its
synapses, and its output is the frequency of the action
potentials it generates. In essence, a neuron is a
pulse-frequency signal processing device.
THRESHOLD GATES

Artificial neural networks are parallel computational
models comprising densely interconnected adaptive
processing units.
The simplest such unit, the linear threshold gate (LTG),
realizes binary mappings.
An LTG maps a vector of input data x into a single binary
output y. The transfer function of an LTG is given by:
y = \begin{cases} 1, & \text{if } \sum_{i=1}^{n} w_i x_i > T \\ 0, & \text{otherwise} \end{cases}
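As a minimal sketch, the LTG transfer function can be coded directly; the weights and threshold below are illustrative values (not from the slides), chosen so the gate realizes AND:

```python
# A minimal sketch of a linear threshold gate (LTG); the example
# weights and threshold are illustrative, chosen to realize AND.

def ltg(x, w, T):
    """Output 1 when the weighted sum of the inputs exceeds T, else 0."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > T else 0

# Weights (1, 1) with threshold 1.5: only x = (1, 1) fires the gate.
print(ltg([1, 1], [1, 1], 1.5))  # -> 1
print(ltg([1, 0], [1, 1], 1.5))  # -> 0
```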
THRESHOLD GATES
PTG
A polynomial threshold gate of order r ≤ n is capable of
realizing a function of n inputs. The transfer function for a
PTG(r) is:
y = \begin{cases} 1, & \text{if } \sum_{i_1=1}^{n} w_{i_1} x_{i_1} + \sum_{i_1=1}^{n} \sum_{i_2=i_1}^{n} w_{i_1 i_2} x_{i_1} x_{i_2} + \cdots + \sum_{i_1=1}^{n} \cdots \sum_{i_r=i_{r-1}}^{n} w_{i_1 \cdots i_r} x_{i_1} x_{i_2} \cdots x_{i_r} > T \\ 0, & \text{otherwise} \end{cases}
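A hedged sketch of a second-order gate, PTG(2): the weight values below are illustrative, chosen to realize XOR, a function that no single LTG can compute:

```python
# A hedged sketch of a second-order polynomial threshold gate, PTG(2);
# the weight values are illustrative, chosen to realize XOR, which no
# single LTG can compute.

def ptg(x, weights, T):
    """weights maps index tuples, e.g. (i,) or (i, j), to coefficients;
    each term is the coefficient times the product of the indexed inputs."""
    s = 0.0
    for idx, w in weights.items():
        term = w
        for i in idx:
            term *= x[i]
        s += term
    return 1 if s > T else 0

# XOR via the quadratic cross-term: x0 + x1 - 2*x0*x1 > 0.5
xor_weights = {(0,): 1.0, (1,): 1.0, (0, 1): -2.0}
for a in (0, 1):
    for b in (0, 1):
        print(a, b, ptg([a, b], xor_weights, 0.5))
```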
THRESHOLD GATES
The separating surface realized by an LTG is linear (a
hyperplane); the polynomial surface of a PTG offers greater
flexibility.
The function counting theorem is also useful in giving an
upper bound on the number of linear dichotomies for
points in arbitrary position.
The theorem states that the number of linearly separable
dichotomies of m points in general position in Euclidean
d-space is
C(m, d) = \begin{cases} 2^m, & \text{if } m \le d + 1 \\ 2 \sum_{i=0}^{d} \binom{m-1}{i}, & \text{if } m > d + 1 \end{cases}

The general position requirement on the m points is a
necessary and sufficient condition.
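The counting function C(m, d) of the theorem can be evaluated directly; the two example values below follow from the formula itself:

```python
# Direct evaluation of the function counting theorem's C(m, d): the
# number of linearly separable dichotomies of m points in general
# position in d-dimensional space.
from math import comb

def count_dichotomies(m, d):
    if m <= d + 1:
        return 2 ** m          # every dichotomy is separable
    return 2 * sum(comb(m - 1, i) for i in range(d + 1))

# In the plane (d = 2): all 8 dichotomies of 3 points are separable,
# but only 14 of the 16 dichotomies of 4 points are.
print(count_dichotomies(3, 2))  # -> 8
print(count_dichotomies(4, 2))  # -> 14
```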
THRESHOLD GATES
The theorem also states that the statistical capacity of a
single LTG is equal to twice the number of its degrees of
freedom.
A threshold gate with d degrees of freedom, designed to
classify m patterns, will on average respond correctly to a
new input pattern as long as m ≤ 2d.
So it is found that the probability of generalization error
approaches zero asymptotically with d/2m.
As d tends to infinity, the average number of patterns
needed for characterization is twice the number of
degrees of freedom d of the threshold gate.
LEARNING RULES
Learning = learning by adaptation
Supervised learning
Unsupervised learning
Reinforcement learning
These rules fit with energy minimization principles.
ENERGY = measure of task performance error
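A hedged sketch of supervised learning by adaptation: the classic perceptron rule lowers the task-performance error (the "energy") by correcting the weights after each misclassification. The training set and learning rate below are illustrative:

```python
# Supervised learning by adaptation (perceptron rule sketch): each
# misclassification drives a weight update that reduces the task
# error. Training data and learning rate are illustrative.

def train_perceptron(samples, n_inputs, rate=1.0, epochs=20):
    w = [0.0] * n_inputs
    T = 0.0                        # threshold, adapted like a bias
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > T else 0
            err = target - y       # supervised error signal
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            T -= rate * err        # raising T acts as a negative bias
    return w, T

# AND is linearly separable, so the rule converges to a correct LTG.
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, T = train_perceptron(and_samples, 2)
```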

McCULLOCH-PITTS NEURON MODEL
Neuron model
The McCulloch-Pitts model:
spikes are interpreted as spike rates;
synaptic strengths are translated as synaptic weights;
excitation means positive product between the
incoming spike rate and the corresponding synaptic
weight;
inhibition means negative product between the
incoming spike rate and the corresponding synaptic
weight;
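The correspondences above can be illustrated with a minimal sketch; the weight and threshold values are illustrative, not from the slides:

```python
# A hedged illustration of the McCulloch-Pitts model: binary inputs,
# fixed weights (positive = excitatory, negative = inhibitory), and a
# firing threshold. The values used here are illustrative.

def mp_neuron(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of incoming spikes
    reaches the threshold; inhibition enters as a negative product."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Two excitatory inputs (weight +1) with threshold 2 realize AND;
# a third, inhibitory input (weight -2) vetoes firing when active.
print(mp_neuron([1, 1, 0], [1, 1, -2], 2))  # excitation wins -> 1
print(mp_neuron([1, 1, 1], [1, 1, -2], 2))  # inhibition vetoes -> 0
```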


McCULLOCH-PITTS NEURON MODEL
The unit-delay property of the McCulloch-Pitts
neuron model makes it possible to build
sequential digital circuitry.


MODELS OF ANN
The neural network
Two main types of neural networks are
1) Feed forward network
2) Feedback network
Recall can be performed in the feedforward mode, i.e.,
from input toward output only; there is no memory, and the
network responds only to its present input.
In feedback operation, the networks can be considered
dynamical systems, and a certain time interval is needed
for their recall to be completed; such networks are also
called recurrent.
Recurrent networks can operate either in a discrete or
continuous time mode.
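The two recall modes can be contrasted in a minimal sketch; the layer weights and the latch example are illustrative, not from the slides:

```python
# Contrasting the two recall modes: one-pass feedforward recall vs.
# discrete-time feedback (recurrent) recall. Weights are illustrative.

def step(s):
    """Hard-threshold activation."""
    return 1 if s > 0 else 0

def feedforward_xor(x0, x1):
    """One-pass recall, input toward output only (no memory):
    a two-layer feedforward net computing XOR."""
    h1 = step(x0 + x1 - 0.5)    # hidden unit: OR
    h2 = step(x0 + x1 - 1.5)    # hidden unit: AND
    return step(h1 - h2 - 0.5)  # output: OR and not AND

def recurrent_latch(inputs, state=0):
    """Discrete-time feedback recall: the past output is fed back,
    so the network remembers whether the input was ever 1."""
    for x in inputs:            # recall unfolds over a time interval
        state = step(x + state - 0.5)
    return state
```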
ADVANTAGES
More like a real nervous system.
Parallel organization permits solutions to problems
where multiple constraints must be satisfied
simultaneously.
Rules are implicit rather than explicit.
APPLICATIONS
Tasks to be solved by artificial neural networks:
controlling the movements of a robot based on self-
perception and other information (e.g., visual
information);
deciding the category of potential food items (e.g.,
edible or non-edible) in an artificial world;
recognizing a visual object (e.g., a familiar face);
predicting where a moving object will go, so that a
robot can catch it.
