
ABSTRACT: ARTIFICIAL NEURAL NETWORK

INTRODUCTION
The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided
by the inventor of one of the first neurocomputers, Dr. Robert Hecht-Nielsen. He defines a neural network as:
"...a computing system made up of a number of simple, highly interconnected processing elements, which process
information by their dynamic state response to external inputs.
ANNs are processing devices (algorithms or actual hardware) that are loosely modeled after the neuronal structure of
the mammalian cerebral cortex but on much smaller scales. A large ANN might have hundreds or thousands of processor
units, whereas a mammalian brain has billions of neurons with a corresponding increase in magnitude of their overall
interaction and emergent behavior. Although ANN researchers are generally not concerned with whether their networks
accurately resemble biological systems, some do pursue biological fidelity; for example, researchers have accurately
simulated the function of the retina and modeled the eye rather well.
Although the mathematics involved in neural networks is not trivial, a user can rather easily gain at least an
operational understanding of their structure and function.
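To make that operational picture concrete, here is a minimal sketch of a single processing element in Python (an
assumed choice of language; the inputs, weights, and sigmoid activation are illustrative only, not taken from the
text): it forms a weighted sum of its inputs and passes the result through an activation function, which plays the
role of the "dynamic state response" in the definition above.

import math

def neuron(inputs, weights, bias):
    # One processing element: weighted sum of the inputs plus a bias,
    # squashed by a sigmoid activation into the range (0, 1).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Illustrative values only: three inputs feeding one unit.
print(neuron([0.5, 0.1, 0.9], [0.4, -0.7, 0.2], bias=0.1))

A network is then nothing more than many such units wired together, with the output of one unit serving as an input
to others.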

PROBLEM STATEMENT
Computers are great at solving algorithmic and mathematical problems, but much of the world cannot easily be captured
by a mathematical algorithm. Facial recognition and language processing are two examples of problems that cannot
easily be reduced to an algorithm, yet they are trivial for humans. The key to Artificial Neural Networks is that
their design, inspired by how our nervous system functions, enables them to process information in a way similar to
our biological brains. This makes them useful tools for solving problems like facial recognition, which our brains
can do easily.

METHODS
Neural networks can be hardware-based (neurons are represented by physical components) or software-based (computer
models), and they can use a variety of topologies and learning algorithms; a minimal feedforward example is sketched
after the list below.
1. Feedforward neural network
2. Radial basis function (RBF) network
3. Kohonen self-organizing network
4. Learning vector quantization
5. Recurrent neural network
a. Fully recurrent network
i. Hopfield network
ii. Boltzmann machine
b. Simple recurrent networks
c. Echo state network
d. Long short-term memory network
e. Bi-directional RNN
f. Hierarchical RNN
g. Stochastic neural networks
6. Modular neural networks
a. Committee of machines
i. Associative neural network (ASNN)
7. Physical neural network
8. Other types of networks
a. Holographic associative memory
b. Instantaneously trained networks
c. Spiking neural networks
d. Dynamic neural networks
e. Cascading neural networks
f. Neuro-fuzzy networks
g. Compositional pattern-producing networks
h. One-shot associative memory
i. Hierarchical temporal memory
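As a concrete illustration of the first topology above, the following is a minimal sketch of a feedforward network's
forward pass in Python, assuming NumPy and a single hidden layer with sigmoid activations; the layer sizes and random
weights are illustrative only.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Feedforward topology: information flows input -> hidden -> output,
    # with no recurrent (feedback) connections.
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

# Illustrative shapes: 3 inputs, 4 hidden units, 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
print(forward(np.array([0.5, 0.1, 0.9]), W1, b1, W2, b2))

Recurrent topologies (item 5) differ mainly in that hidden activity is also fed back into the network at the next
time step, giving the network a form of memory.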
APPLICATIONS

Neural networks are universal approximators, and they work best when the system being modeled has a high tolerance
for error. One would therefore not be advised to use a neural network to balance one's cheque book! However, they
work very well in the situations listed below (a toy training example follows the list):
• capturing associations or discovering regularities within a set of patterns;
• problems where the volume, number of variables, or diversity of the data is very great;
• problems where the relationships between variables are only vaguely understood; or
• problems where the relationships are difficult to describe adequately with conventional approaches.
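As a toy illustration of capturing an association purely from examples, the sketch below trains a small network by
plain gradient descent (assuming Python with NumPy; the XOR pattern set, layer sizes, learning rate, and iteration
count are all illustrative choices, not taken from the text):

import numpy as np

# A toy "set of patterns": the XOR association, which has no simple
# linear rule but can be learned from the four examples alone.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # 4 hidden units -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                        # plain gradient descent
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # network predictions
    d_out = (out - y) * out * (1 - out)      # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer error signal
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # approaches [0, 1, 1, 0]: the association has been captured

Note that the network is never given a rule for XOR; it only sees input-output examples, which is exactly the
learning-by-example behaviour described in the conclusion below.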

CONCLUSION
The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible
and powerful. Furthermore, there is no need to devise an algorithm to perform a specific task; that is, there is no
need to understand the internal mechanisms of that task. Neural networks are also well suited to real-time systems
because of their fast response and computation times, which stem from their parallel architecture.
Neural networks also contribute to other areas of research such as neurology and psychology. They are regularly used
to model parts of living organisms and to investigate the internal mechanisms of the brain.
Perhaps the most exciting aspect of neural networks is the possibility that 'conscious' networks might someday be
produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural
networks are a realistic possibility.
Finally, even though neural networks have huge potential, we will only get the best out of them when they are
integrated with conventional computing, AI, fuzzy logic, and related fields.
