
VR Siddhartha Engineering College

TechnoFest-2009
Computer Science & Engineering

A paper presentation on

KOHONEN NEURAL NETWORKS

Presented by:

Vuyyuru Deepu Kumar            Jakka Santhosh
06A81A0567                     06A81A0576
III/IV B.Tech, CSE             III/IV B.Tech, CSE
vuyyurudeepukumar@gmail.com    santhoshmay24@gmail.com
Ph: 9966454405                 Ph: 9490784744

SRI VASAVI ENGINEERING COLLEGE
TADEPALLIGUDEM, WEST GODAVARI DT.
"When the only tool you have is a hammer, every problem you encounter tends to resemble a nail."

ABSTRACT:

Neural networks have grown in importance since the 1950s, following the invention of the computer. For many centuries, one of the goals of humankind has been to develop machines that perform all of our cumbersome and tedious tasks, so that we might enjoy a more fruitful life. People and animals recognize patterns and images far faster than the most advanced computers; although computers outperform both biological and artificial neural systems on tasks based on precise and fast arithmetic operations, artificial neural systems represent a promising new generation of information processing networks. Over the past fifteen years, a view has emerged that computing based on models inspired by our understanding of the structure and function of biological neural networks may hold the key to machines successfully solving intelligent tasks. This new field is called "artificial neural networks", or simply "neural networks".

CONTENTS

1. INTRODUCTION
2. ELEMENTARY NEUROPHYSIOLOGY
3. KOHONEN NETWORKS
4. BENEFITS OF NEURAL NETWORKS
5. FUTURE TRENDS IN NEURAL NETWORKS
1. INTRODUCTION

"The only tool we have is a sequential computer, and so every problem we encounter will be cast in terms of a sequential algorithm." So think computer designers, engineers, and programmers, all of whom are striving to create more "intelligent" computer systems.

Why can't we build a computer that thinks? Why can't we expect machines that can perform 100 million floating-point calculations per second to be able to comprehend the meaning of shapes in visual images, or even to distinguish between different kinds of similar objects? Why can't that same machine learn from experience, rather than repeating forever an explicit set of instructions generated by a human programmer?

There are many applications that we would like to automate but have not automated, owing to the complexities associated with programming a computer to perform the tasks; to a large extent, such tasks resist conventional programming.

Consider the small example shown in the picture below. How is it that we can see the dog in the image at once, yet a computer cannot make the same discrimination?

This question is especially poignant when we consider that the switching time of the components in modern electronic computers is more than seven orders of magnitude faster than that of the cells that comprise our neurobiological systems. It is partially answered by the fact that the human brain is a far more complex system than the computer.

In many real-world applications, we want our computers to perform complex pattern recognition such as the one just described. Since our conventional computers are obviously not suited to this type of problem, we borrow features from the physiology of the brain as the basis for our new processing models. This technology has come to be known as artificial neural systems, or simply neural networks.

We begin by defining a neural network structure as a collection of parallel processors connected together in the form of a directed graph. Referring to a typical network diagram, we can schematically represent each processing element as a node, with connections between units indicated by arcs. In this example we will restrict the number of characters the neural network must recognize to the 10 decimal digits, 0, 1, 2, ..., 9. Our objective is to have the neural network determine which of the 10 digits a particular hand-drawn character is, so we create a network structure that has 10 discrete outputs, one for each character to be identified. If we insist that the output units behave according to a simple on-off strategy, the process of converting an input signal to an output signal becomes a simple majority function.

The schematic diagram represents the character-recognition problem described in the text. In this example, application of an input pattern to the bottom layer of processors can cause many of the second-layer, or hidden-layer, units to activate. The activity on the hidden layer should then cause exactly one of the output units to activate: the one associated with the pattern being identified. We can also observe the large number of connections required even for this small network.

Given these conditions, we can arbitrarily choose to represent the pixel image as a 10x8 matrix, using a 1 to represent a pixel that is "on" and a 0 to represent a pixel that is "off". At this point, all that remains is to choose the number of processing units (called hidden units) to be used internally and to connect them to the input and output units via weighted connections.
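To make this encoding concrete, here is a minimal sketch of the input and output representation just described; the stroke pattern drawn below is illustrative, not a figure from this paper:

```python
import numpy as np

# A hand-drawn digit is captured as a 10x8 binary pixel matrix:
# 1 = pixel "on", 0 = pixel "off".
pixels = np.zeros((10, 8), dtype=int)
pixels[2:8, 3] = 1   # illustrative strokes, roughly the digit "1"
pixels[8, 2:5] = 1

# The network sees the image as a flat 80-element input vector.
input_vector = pixels.flatten()

# Ten discrete on-off outputs, one per digit 0..9; recognizing the
# digit "1" means exactly one output unit is active.
target = np.zeros(10, dtype=int)
target[1] = 1

print(input_vector.shape, target)   # (80,) [0 1 0 0 0 0 0 0 0 0]
```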

2. ELEMENTARY NEUROPHYSIOLOGY

There are many basic concepts in neurophysiology that have a more universal significance. In this regard we look first at individual neurons, and then at the synaptic junctions between neurons.

2.1 SINGLE NEURON PHYSIOLOGY:

The features of biological neural networks are attributed to their structure and function. The fundamental unit of the network is called a neuron, or nerve cell. The major components of a typical nerve cell in the central nervous system are as follows:

• Cell body
• Axon
• Nucleus
• Dendrites
• Synapse
• Myelin sheath

[Figure: Simple pictorial representation of the neuron]

The neuron consists of a cell body, or soma, where the cell nucleus is located. Tree-like nerve fibers called dendrites are associated with the cell body; they connect one nerve cell to another and serve to transmit signals from one neuron to the next. A single long fiber called the axon extends from the cell body and eventually branches into strands and substrands that connect to many other neurons at synaptic junctions, or synapses.

The transmission of a signal from one cell to another at a synapse is a complex chemical process in which specific transmitter ionic compounds carry the signal when they are released. The effect is to raise or lower the electric potential inside the body of the receiving cell. If this potential reaches a threshold, electrical activity in the form of short pulses is generated; when this happens, the cell is said to have fired. Generally the electrical activity is confined to the interior of the neuron, whereas the chemical mechanism operates at the synapse. Dendrites serve as receptors for signals from other neurons; the purpose of the axon is to transmit the generated neural activity to other nerve cells or to muscle fibers.

2.1.1 Process:

In the state of inactivity, the interior of the neuron (the protoplasm) is negatively charged against the surrounding neural liquid, which contains positive sodium ions (Na+). The resulting resting potential of about -70 mV is maintained by the action of the cell membrane, which is impenetrable to the Na+ ions; this causes a deficiency of positive ions in the protoplasm. Signals arriving from the synaptic junctions may result in a temporary depolarization of the resting potential. When the potential is raised to about -60 mV, the membrane loses its impermeability to the Na+ ions, which enter the protoplasm and reduce the potential difference.

[Figure: The presynaptic terminals]

This sudden change in membrane potential causes the neuron to discharge; the neuron is then said to have fired. The intensity of the signal is encoded in the frequency of the sequence of pulses of activity, which can range from about 1 to 100 per second. If the induced polarization potential is positive, the synapse is termed excitatory, because the influence of the synapse tends to activate the post-synaptic neuron. If the polarization potential is negative, the synapse is called inhibitory, since it counteracts excitation of the neuron. All the synaptic endings of an axon are either of an excitatory or an inhibitory nature.

[Figure: The process of transmission of the signal in a neuron]

[Figure: Neurotransmission]

The cell body of a neuron acts as a kind of summing device for the net depolarizing effect of its incoming signals; this net effect decays with a time constant of 5 to 10 milliseconds. When the total magnitude of the depolarization potential in the cell body exceeds a critical threshold (about 10 mV), the neuron fires. The activity of a given synapse depends on the rate of the arriving signals. An active synapse, which repeatedly triggers activation of its post-synaptic neuron, will grow in strength, while others will gradually weaken; thus the strengths of the synaptic connections are modified continuously. This mechanism of synaptic plasticity in the structure of neural connectivity, known as Hebb's rule, appears to play a dominant role in the complex process of learning.
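The summing-and-threshold behavior of the cell body and Hebb's rule can be illustrated with a small sketch. The following is a toy model under assumed values (the learning rate, decay, and threshold are illustrative, and the unit is a crude stand-in for the biology described above):

```python
import numpy as np

eta = 0.1        # illustrative learning rate
decay = 0.01     # slow weakening of unused synapses
threshold = 1.0  # firing threshold, standing in for the ~10 mV in the text

def fire(weights, inputs):
    """The cell body sums weighted inputs; the neuron fires on crossing the threshold."""
    return 1.0 if np.dot(weights, inputs) >= threshold else 0.0

def hebb_update(weights, inputs, output):
    """Hebb's rule: synapses active when the neuron fires grow in strength,
    while the rest gradually weaken."""
    return weights * (1.0 - decay) + eta * inputs * output

weights = np.array([0.6, 0.5, 0.3])
inputs = np.array([1.0, 1.0, 0.0])    # two active pre-synaptic signals

out = fire(weights, inputs)           # 0.6 + 0.5 = 1.1 >= 1.0, so the neuron fires
weights = hebb_update(weights, inputs, out)
print(out, weights)                   # active synapses strengthened, idle one decayed
```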
3. KOHONEN NETWORKS

3.1 PATTERN RECOGNITION BY KOHONEN NETWORKS:

A Kohonen network is a breed of neural network designed to group similar inputs together by having them represented by nearby neurons in the network. After the net is trained, each neuron represents a particular input feature vector, and the domain of inputs has been organized spatially across the field of neurons in a way that reflects the original organization of the input data.

One way to visualize this process is as follows. The input is a set of feature vectors in n-space. These vectors are mapped onto a field of neurons in m dimensions. A particular feature vector is mapped onto the neuron which most closely resembles it; in this way, we can represent all possible inputs with a finite number of neurons. We can think of the output of the net as the discrete vector "address" of the winning neuron.

In my program, input vectors are confined to the unit cube, and each vector is displayed as an RGB triplet color. The neural net is a flat surface (2-space), so m = 2. It is certainly conceivable to construct neural surfaces of greater dimension, but two dimensions illustrate the point quite handily, and are easy to display and visualize.

A Kohonen net has the special property that it does its best to preserve continuity. That is, if two colors are near each other in the input space, they will be near each other in the output space. In fact, this continuous mapping turns out to be simply the topological stretching of a figure in n-space into a corresponding figure in output space.

Of course, the network isn't perfectly successful at this task, for a number of reasons. First, some figures simply cannot be mapped continuously into 2-space. Consider, for example, the solid cube: it just can't be flattened into the plane. Secondly, the network is constructed of a collection of discrete neurons, rather than a continuous surface. Therefore, continuity sometimes appears to get lost: if one color region gets stretched into a long blob on the neuron surface, then two nearby colors may appear distant on the map. There is still a continuous transition between them, but it is lost in the discrete steps of the neurons. That is, Kohonen nets preserve continuity, but they don't care much about preserving distance.

Finally, Kohonen nets go about constructing this mapping in a fairly haphazard way, examining only one input vector at a time. In light of this, they do amazingly well at their task. With badly tuned parameters, they construct rather poor mappings; even with good tuning, the net often builds its way into a corner, causing it to have to break continuity for some regions.
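To illustrate the mapping step, here is a minimal sketch assuming the setup just described: a two-dimensional grid of neurons whose weight vectors are RGB triplets in the unit cube, with the winner chosen by Euclidean distance. The grid size and input are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 12x12 field of neurons; each neuron's weight vector is an RGB
# triplet inside the unit cube (illustrative size and initialization).
grid = rng.uniform(0.0, 1.0, size=(12, 12, 3))

def winning_neuron(grid, x):
    """Map a feature vector onto the neuron that most closely resembles it.
    Returns the discrete (row, col) 'address' of the winner."""
    dist = np.linalg.norm(grid - x, axis=2)   # Euclidean distance per neuron
    return np.unravel_index(np.argmin(dist), dist.shape)

color = np.array([0.9, 0.1, 0.1])             # a reddish input vector
print(winning_neuron(grid, color))            # e.g. (7, 3)
```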
[Figure 1. Left: the edges of the unit cube, as colored in RGB space. Right: a correct mapping into 2-space which preserves continuity.]

[Figure 2. Left: a frustrated Kohonen network can't quite continuously connect two of the original edges. Right: a correct solution discovered by the network relies on the "wrap-around" shape of toroidal 2-space. The arrows indicate where the figure connects with itself.]

Overall, however, self-organizing maps are surprisingly successful. Even humans have to think a little to discover how the edges of a cube can be mapped into 2-space. When the Kohonen net solved this problem, it almost always preserved the continuity of at least ten edges, and once it even came up with an unexpected but correct solution which takes advantage of toroidal 2-space (see Figure 2 above).

3.2 DIFFERENT TYPES OF KOHONEN NETWORKS

3.2.1 VECTOR QUANTIZATION (VQ):

Each competitive unit corresponds to a cluster, the centre of which is called the "codebook vector" (also "the focus"). When an input is presented to the network, the codebook vector nearest the input is found, and that codebook vector is then altered to be more like the input: it is moved a proportion of the distance between the input and the codebook vector. This is very similar to MacQueen's k-means algorithm [21], except that in this case the distance moved is scaled by the reciprocal of the number of cases that have been assigned to the winning cluster. The VQ algorithm is often used for off-line learning, where the training data is stored and the training algorithm makes many passes over the data.
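The update just described fits in a few lines. This is a minimal sketch under the stated assumptions (codebook vectors stored as rows of an array, and a learning rate equal to the reciprocal of the winner's case count); the data is illustrative:

```python
import numpy as np

def vq_update(codebook, counts, x):
    """Move the nearest codebook vector toward input x by a proportion
    equal to 1/(number of cases assigned to the winning cluster)."""
    winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
    counts[winner] += 1
    lr = 1.0 / counts[winner]                   # reciprocal learning rate
    codebook[winner] += lr * (x - codebook[winner])
    return winner

codebook = np.array([[0.2, 0.2], [0.8, 0.8]])   # two illustrative clusters
counts = np.zeros(2, dtype=int)
for x in np.array([[0.1, 0.3], [0.9, 0.7], [0.25, 0.15]]):
    vq_update(codebook, counts, x)
print(codebook)
```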

3.2.2 SELF-ORGANIZING MAPS:

The Kohonen self-organizing map (SOM) is a topological mapping from the input space to the clusters of classes. This is the most typical form of Kohonen network, and it is inspired by the way in which human sensory impressions are neurologically mapped into the brain, such that spatial or other relations among stimuli correspond to spatial relations among neurons. In the SOM, the neurons are arranged in an n-dimensional grid (where n is usually 2). As long as the number of inputs is greater than the dimensionality of the input space, any number of inputs can be used to train the network. A SOM attempts to arrange the clusters so that any two similar clusters are spatially close; however, two inputs that are close to each other in the input space are not always assigned to clusters close to each other. The SOM also performs data compression: high-dimensional data can be represented in a much lower-dimensional space. The SOM is effectively a smooth mapping between input space and grid space.
ALGORITHM REPRESENTING THE KOHONEN NETWORK:

A Kohonen net trains in the following cycle:

1. Select an input vector.
2. Find the neuron which most closely resembles it (winner takes all).
3. Make that neuron even more like the input vector. The alpha training constant (or not-so-constant) determines how much this neuron is changed: an alpha of 1.0 will make the neuron exactly like the input vector, while an alpha of zero will not change it at all.
4. Make all of its nearby neighbors more closely resemble the input vector as well. The shape and size of the neighborhood are also adjustable training parameters.
5. Repeat.

Notice how closely this process resembles the hair-care training process: Lather. Rinse. Repeat.
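This cycle can be sketched in a few lines of Python. The grid size, alpha, radius, and iteration count below are illustrative assumptions, and a simple unweighted square neighborhood is used:

```python
import numpy as np

rng = np.random.default_rng(1)
grid = rng.uniform(0.0, 1.0, size=(12, 12, 3))   # neuron weights: RGB in the unit cube
rows, cols = np.indices(grid.shape[:2])

alpha, radius = 0.5, 4                           # illustrative training parameters

for step in range(2000):
    x = rng.uniform(0.0, 1.0, size=3)            # 1. select an input vector
    dist = np.linalg.norm(grid - x, axis=2)      # 2. find the most similar neuron
    wr, wc = np.unravel_index(np.argmin(dist), dist.shape)
    near = (np.abs(rows - wr) <= radius) & (np.abs(cols - wc) <= radius)
    grid[near] += alpha * (x - grid[near])       # 3-4. move winner and its square
                                                 #      neighborhood toward the input
                                                 # 5. repeat
```

Shrinking alpha and the radius over time, as discussed next, lets the map organize globally first and then fine-tune locally.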
In the SOM program, you have a wide variety of options with regard to just how these steps are performed. First, you may configure the arrangement of the neurons. You may input the desired dimensions of the map, measured in neurons. A hexagonal arrangement gives each neuron six equally near neighbors; a square (Cartesian) arrangement gives each neuron four closest neighbors. Hexagonal maps must have an even number of rows. You may leave the map a simple box figure with an outside boundary, or close the figure into the surface of a torus (donut). The map can be initialized either to zero (black) neurons or to neurons representing a uniform random distribution of inputs. The Euclidean-versus-taxicab option affects the measure of distance used for feature space in step 2.

The neighborhood properties affect step 4. You may input the initial radius of the neighborhood. On Cartesian maps, the shape of the neighborhood may be either square or circular; neighborhood shape is confined to a hexagon for hexagonal maps.

The "Weighted" option, if checked, causes the neighbor neurons modified in step 4 to be affected less as a function of their distance from the center of the neighborhood. The winning neuron is modified according to the full value of alpha as in step 3, and more distant neurons are modified according to alpha*f(distance), where f is a normal bell curve. Without weighted neighborhoods, every neuron in the neighborhood is affected by the same amount as the winning neuron.

The learning parameters include alpha, the alpha goal, and the decay rate. As it happens, training is most effective when alpha and the neighborhood radius are allowed to decrease with time. To accommodate this, the program has a decay feature. Every "decay" times through the training loop, the neighborhood radius is decreased by one, and alpha is moved closer to the alpha goal. The value of alpha is determined by linearly interpolating between its initial value and the goal, so that alpha arrives at the alpha goal just as the neighborhood radius arrives at one. Training stops after training with a radius of one, since not many neurons would be affected in a neighborhood of zero radius.
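The weighted-neighborhood rule and the decay schedule can be sketched as follows; the bell-curve width and the schedule endpoints are assumptions for illustration:

```python
import numpy as np

def neighborhood_weight(distance, radius, weighted=True):
    """Full alpha at the winner; with the 'Weighted' option, influence
    falls off as a bell curve of distance from the neighborhood center."""
    if not weighted:
        return 1.0
    sigma = max(radius / 2.0, 1e-9)    # illustrative curve width
    return np.exp(-distance**2 / (2.0 * sigma**2))

def alpha_schedule(alpha0, alpha_goal, radius0, radius):
    """Linearly interpolate alpha so it reaches alpha_goal just as the
    neighborhood radius reaches one."""
    t = (radius0 - radius) / max(radius0 - 1, 1)
    return alpha0 + t * (alpha_goal - alpha0)

print(neighborhood_weight(0, 4), neighborhood_weight(4, 4))   # 1.0, ~0.135
print(alpha_schedule(0.5, 0.02, radius0=4, radius=1))         # 0.02 at radius one
```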
The map view is updated every "vectors" times through the training loop when the per-vectors option is chosen. Updating the view with each vector makes it convenient to see the effects of varying the neighborhood options, at the expense of speed. Updating with each vector is different from updating per 1 vectors in that the each-vector option only redraws those neurons that appeared in the neighborhood (to save a little time). Try watching the very beginning of a training sequence in each mode: you'll see directly how the weighting and shape of neighborhoods affect the neurons.

The grid checkbox draws the inverse image of the mapping as a grid surface in input space. The window which displays input vectors will have a grid surface superimposed on it; each node on the grid corresponds to the value of a particular neuron. This lets you directly visualize how the map is stretching to fill feature space. I recommend leaving the grid display off, then turning it on to display a particular grid of your choice; otherwise, the program spends a lot of time redrawing.

All of the control parameters affect only the initial settings of the map; they will not change a map in the middle of the calculation process. To invoke the changes, discarding the current map, click Reset. Stop will pause the training so the display can be inspected; Continue will resume calculation without reloading the parameters and initializing the neurons. Read re-reads the input file; it is functionally identical to Reset, but it lends a nice symmetry to the display. Write stores the map to the output file as previously described. (When running in background mode, the map is automatically written after training ceases, and the program terminates.) Quit is fairly self-explanatory; I included a sentence on it for prosaic closure, and because the capital Q's in this typeface are very pretty.
4. BENEFITS OF NEURAL NETWORKS

It is apparent that a neural network derives its computing power from, first, its massively parallel distributed structure and, second, its ability to learn and therefore generalize; generalization refers to the neural network producing reasonable outputs for inputs not encountered during training. These two information-processing capabilities make it possible for neural networks to solve complex (large-scale) problems that are currently intractable.

The use of neural networks offers the following useful properties and capabilities:
4.1 NON-LINEARITY:

A neuron is basically a non-linear device, so a neural network, made up of interconnected neurons, is itself non-linear. Moreover, the non-linearity is of a special kind, in the sense that it is distributed throughout the network. Non-linearity is a highly important property, particularly if the underlying physical mechanism responsible for the generation of the input signal is inherently non-linear.

4.2 INPUT AND OUTPUT MAPPING:

A popular paradigm of learning called supervised learning involves the modification of the synaptic weights of a neural network by applying labeled training samples. The training of the network is repeated for the many examples in the set until the network reaches a steady state where there are no further significant changes in the synaptic weights. Thus the network learns from the examples by constructing an input-output mapping.
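As a deliberately minimal illustration of this paradigm, the sketch below trains a single linear unit on labeled samples with the classic delta rule until the weight changes become insignificant; the rule, data, and tolerance are illustrative choices, not a method specified in this paper:

```python
import numpy as np

# Labeled training samples: inputs X and desired outputs d (illustrative).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
d = np.array([0.0, 1.0, 1.0, 1.0])

w, b, eta = np.zeros(2), 0.0, 0.1

for epoch in range(1000):
    max_change = 0.0
    for x, target in zip(X, d):
        y = w @ x + b                     # network output for this sample
        dw = eta * (target - y) * x       # delta rule: nudge weights toward target
        w, b = w + dw, b + eta * (target - y)
        max_change = max(max_change, np.abs(dw).max())
    if max_change < 1e-6:                 # steady state: weights stop changing
        break

print(w, b)
```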
4.3 ADAPTIVITY:

Neural networks have a built-in capability to adapt their synaptic weights to changes in the surrounding environment; in particular, a neural network can be trained to operate under specific environmental conditions. Moreover, when it is operating in a non-stationary environment, a neural network can be designed to change its synaptic weights in real time. The natural architecture of a neural network for pattern classification, signal processing, and control applications, coupled with the adaptive capability of the network, makes it an ideal tool for adaptive pattern classification, adaptive signal processing, and adaptive control.

4.4 EVIDENTIAL RESPONSE:

In the context of pattern classification, a neural network can be designed to provide information not only about which particular pattern it selects but also about the confidence in the decision made. This latter information may be used to reject ambiguous patterns, thereby improving the classification performance of the network.
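One common way to realize such a rejection scheme (an illustrative choice, not one prescribed here) is to normalize the output scores into confidences and refuse to classify when the top confidence falls below a threshold:

```python
import numpy as np

def classify_with_rejection(scores, threshold=0.6):
    """Turn raw output-unit scores into confidences (softmax) and reject
    the pattern as ambiguous if the best confidence is too low."""
    e = np.exp(scores - scores.max())
    conf = e / e.sum()
    best = int(np.argmax(conf))
    return (best, conf[best]) if conf[best] >= threshold else ("reject", conf[best])

print(classify_with_rejection(np.array([2.5, 0.3, 0.1])))   # confident: class 0
print(classify_with_rejection(np.array([1.0, 0.9, 0.8])))   # ambiguous: reject
```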
4.5 CONTEXTUAL INFORMATION:

Knowledge is represented by the very structure and activation state of a neural network. Every neuron in the network is potentially affected by the global activity of all the other neurons in the network; consequently, contextual information is dealt with naturally by a neural network.

4.6 FAULT TOLERANCE:

A neural network implemented in hardware form has the potential to be inherently fault tolerant, in the sense that its performance degrades gracefully under adverse operating conditions. Moreover, owing to the distributed nature of the information in the network, the damage has to be extensive before the overall response of the network is seriously degraded. Thus, in principle, a neural network exhibits graceful degradation in performance rather than catastrophic failure.

4.7 VLSI IMPLEMENTABILITY:

The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks. This same feature makes a neural network ideally suited for implementation using very large scale integration (VLSI) technology. The particular virtue of VLSI is that it provides a means of capturing truly complex behavior in a highly hierarchical fashion, which makes it possible to use the neural network as a tool for real-time applications involving pattern recognition, signal processing, and control.

4.8 UNIFORMITY OF ANALYSIS AND DESIGN:

Neural networks enjoy universality as information processors, in the sense that the same notation is used in all the domains involving their application. This feature manifests itself in different ways. Neurons, in one form or another, represent an ingredient common to all neural networks; this commonality makes it possible to share theories and learning algorithms across different applications of neural networks. Modular networks can also be built through a seamless integration of modules.

4.9 NEUROBIOLOGICAL ANALOGY:

The design of a neural network is motivated by analogy with the brain, which is living proof that fault-tolerant parallel processing is not only physically possible but also fast and powerful. Neurobiologists look to neural networks as a research tool for the interpretation of neurobiological phenomena. On the other hand, engineers look to neurobiology for new ideas to solve problems more complex than those addressed by conventional hard-wired design techniques. This analogy offers the hope that a physical understanding of neurobiological structures could indeed influence the art of electronics and VLSI.

5. FUTURE TRENDS IN NEURAL NETWORKS:

Neural networks and fuzzy logic underlie many applications, most of which fall into a few main categories. Across these paradigms there are literally thousands of applications of neural networks and fuzzy logic in science, technology, and business, with more applications added as time goes on. Thus neural networks will play an important role in the future development of mankind.
