In machine learning and cognitive science, artificial neural networks (ANNs) are a family of statistical learning algorithms inspired by biological neural networks (the central nervous systems of animals, in particular the brain) and are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are generally presented as systems of interconnected "neurons" which can compute values from inputs, and are capable of machine learning as well as pattern recognition.
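As a rough sketch of how such a "neuron" computes a value from its inputs, the following shows a single unit taking a weighted sum and passing it through a sigmoid activation. The weights and bias here are arbitrary illustrative values, not taken from any trained network:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of the inputs passed
    through a sigmoid activation function."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A neuron with two inputs; the weights and bias are illustrative.
out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
```

Interconnecting many such units, layer by layer, gives the network architectures described below.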

TYPES:

Feed Forward Neural Network - A simple neural network type where synapses connect an input layer to zero or more hidden layers, and finally to an output layer. The feedforward neural network is one of the most common types of neural network in use. It is suitable for many types of problems. Feedforward neural networks are often trained with simulated annealing, genetic algorithms, or one of the propagation techniques.
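The layer-to-layer flow described above can be sketched as a plain forward pass, here with one hidden layer and hand-picked placeholder weights (a minimal illustration of the architecture only, not of the training techniques mentioned):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, hidden_w, output_w):
    """Forward pass: input layer -> one hidden layer -> output layer.
    Each row of a weight matrix is (bias, weights...) for one neuron."""
    hidden = [sigmoid(row[0] + sum(w * v for w, v in zip(row[1:], x)))
              for row in hidden_w]
    return [sigmoid(row[0] + sum(w * v for w, v in zip(row[1:], hidden)))
            for row in output_w]

# 2 inputs -> 3 hidden neurons -> 1 output; weights are placeholders.
hidden_w = [[0.1, 0.5, -0.4], [0.0, 0.3, 0.8], [-0.2, -0.7, 0.2]]
output_w = [[0.05, 0.6, -0.1, 0.9]]
y = forward([1.0, 0.0], hidden_w, output_w)
```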
Self Organizing Map (SOM) - A neural network that contains two layers and implements a winner-take-all strategy in the output layer. Rather than using the outputs of individual neurons, only the neuron with the highest output is considered the winner. SOMs are typically used for classification, where the output neurons represent the groups into which the inputs are to be classified. SOMs are usually trained with a competitive learning strategy.
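A minimal sketch of the winner-take-all and competitive-learning ideas just described; the two weight vectors, the distance-based winner rule, and the learning rate are illustrative assumptions:

```python
def winner(weights, x):
    """Winner-take-all: the output neuron whose weight vector is
    closest to the input is the single winner."""
    dists = [sum((w - v) ** 2 for w, v in zip(row, x)) for row in weights]
    return dists.index(min(dists))

def train_step(weights, x, rate=0.5):
    """Competitive learning: move only the winner's weights toward
    the presented input."""
    i = winner(weights, x)
    weights[i] = [w + rate * (v - w) for w, v in zip(weights[i], x)]
    return i

# Two output neurons classifying 2-D inputs into two groups.
weights = [[0.0, 0.0], [1.0, 1.0]]
for _ in range(10):
    train_step(weights, [0.9, 0.8])
```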
Hopfield Neural Network - A simple single-layer recurrent neural network. The Hopfield neural network is trained with a special algorithm that teaches it to recognize patterns. The Hopfield network indicates that a pattern is recognized by echoing it back. Hopfield neural networks are typically used for pattern recognition.
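The train-then-echo-back behavior can be sketched with a tiny Hopfield network. Hebbian weights and synchronous updates are simplifying assumptions here, since the text does not specify the exact algorithm:

```python
def train_hopfield(patterns):
    """Hebbian training: weight[i][j] sums the products of bits i and j
    over all stored patterns (no self-connections)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Repeatedly update every unit; a stored pattern is 'recognized'
    when the network echoes it back unchanged."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0
                 else -1 for i in range(n)]
    return state

pattern = [1, -1, 1, -1]
w = train_hopfield([pattern])
noisy = [1, -1, 1, 1]  # last bit flipped
```

Feeding the noisy input to `recall` echoes back the stored pattern, which is the "recognition" the text describes.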

Simple Recurrent Network (SRN) Elman Style - A recurrent neural network that has a context layer. The context layer holds the previous output of the hidden layer and echoes that value back to the hidden layer's input, so the hidden layer always receives input from its own previous iteration's output. Elman neural networks are generally trained using genetic algorithms, simulated annealing, or one of the propagation techniques. Elman neural networks are typically used for prediction.
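The context-layer mechanism can be sketched as follows. Only the forward computation is shown, and all weights are illustrative placeholders (training would use one of the techniques named above):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def elman_step(x, context, w_in, w_ctx, w_out):
    """One Elman step: the hidden layer sees the current input plus the
    context layer, which holds the hidden layer's previous output."""
    hidden = [sigmoid(sum(w * v for w, v in zip(wi, x)) +
                      sum(w * c for w, c in zip(wc, context)))
              for wi, wc in zip(w_in, w_ctx)]
    output = sum(w * h for w, h in zip(w_out, hidden))
    return output, hidden  # hidden becomes the next step's context

# 1 input, 2 hidden neurons; all weights are placeholder assumptions.
w_in, w_ctx, w_out = [[0.5], [-0.3]], [[0.2, 0.1], [0.4, -0.2]], [0.7, 0.6]
context = [0.0, 0.0]
outputs = []
for x in [1.0, 0.0, 1.0]:
    y, context = elman_step([x], context, w_in, w_ctx, w_out)
    outputs.append(y)
```

Note that the identical inputs at steps 1 and 3 produce different outputs, because the context layer carries history between steps.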
Simple Recurrent Network (SRN) Jordan Style - A recurrent neural network that has a context layer. The context layer holds the previous output of the output layer and echoes that value back to the hidden layer's input, so the hidden layer always receives input from the previous iteration's output layer. Jordan neural networks are generally trained using genetic algorithms, simulated annealing, or one of the propagation techniques. Jordan neural networks are typically used for prediction.
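For comparison with the Elman style, here is a minimal Jordan-style step, where the context fed back to the hidden layer is the previous output-layer value; the weights are again placeholder assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def jordan_step(x, prev_output, w_in, w_ctx, w_out):
    """One Jordan step: unlike the Elman style, the context fed to the
    hidden layer is the network's previous output-layer value."""
    hidden = [sigmoid(sum(w * v for w, v in zip(wi, x)) + wc * prev_output)
              for wi, wc in zip(w_in, w_ctx)]
    return sum(w * h for w, h in zip(w_out, hidden))

# 1 input, 2 hidden neurons, 1 output; weights are illustrative.
w_in, w_ctx, w_out = [[0.5], [-0.3]], [0.2, 0.4], [0.7, 0.6]
y = 0.0  # initial context (no previous output yet)
history = []
for x in [1.0, 0.0, 1.0]:
    y = jordan_step([x], y, w_in, w_ctx, w_out)
    history.append(y)
```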
Simple Recurrent Network (SRN) Self Organizing Map - A recurrent self organizing map (RSOM) that has an input and output layer, just like a regular SOM. However, the RSOM has a context layer as well. This context layer echoes the previous iteration's output back to the input layer of the neural network. RSOMs are trained with a competitive learning algorithm, just like a non-recurrent SOM. RSOMs can be used to classify temporal data or to make predictions.
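One way to sketch the context echo is to extend each input with a one-hot record of the previous winning neuron; the text does not fix this detail, so treat the context encoding, weights, and learning rate as illustrative choices:

```python
def winner(weights, x):
    """Winner-take-all over the (input + context) vector."""
    dists = [sum((w - v) ** 2 for w, v in zip(row, x)) for row in weights]
    return dists.index(min(dists))

def rsom_step(weights, x, context, rate=0.3):
    """Recurrent SOM step: the input is extended with a context vector
    echoing the previous iteration's output (one-hot of the last winner),
    then trained with the same competitive rule as a plain SOM."""
    extended = x + context
    i = winner(weights, extended)
    weights[i] = [w + rate * (v - w) for w, v in zip(weights[i], extended)]
    new_context = [1.0 if j == i else 0.0 for j in range(len(weights))]
    return i, new_context

# Two output neurons over a 2-D input plus a 2-D context (one per neuron).
weights = [[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 0.0, 0.0]]
context = [0.0, 0.0]
wins = []
for x in [[0.1, 0.2], [0.9, 0.8], [0.1, 0.2]]:
    i, context = rsom_step(weights, x, context)
    wins.append(i)
```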
Feedforward Radial Basis Function (RBF) Network - A feedforward network with an input layer, an output layer, and a hidden layer. The hidden layer is based on a radial basis function; the RBF generally used is the Gaussian function. Several RBFs in the hidden layer allow the RBF network to approximate a more complex function than a typical feedforward neural network. RBF networks are used for pattern recognition. They can be trained using genetic algorithms, simulated annealing, or one of the propagation techniques. Other means must be employed to determine the structure of the RBFs used in the hidden layer.
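A minimal sketch of a Gaussian RBF hidden layer feeding a weighted-sum output. The centers and widths are placeholder assumptions, chosen by hand here rather than by the "other means" the text refers to:

```python
import math

def rbf_forward(x, centers, widths, out_weights):
    """RBF network: each hidden unit applies a Gaussian to the squared
    distance between the input and its center; the output layer takes
    a weighted sum of the hidden activations."""
    hidden = [math.exp(-sum((v - c) ** 2 for v, c in zip(x, cen))
                       / (2 * s ** 2))
              for cen, s in zip(centers, widths)]
    return sum(w * h for w, h in zip(out_weights, hidden))

# Two Gaussian units in the hidden layer; all parameters illustrative.
centers = [[0.0, 0.0], [1.0, 1.0]]
widths = [0.5, 0.5]
out_weights = [1.0, -1.0]
y_near_first = rbf_forward([0.0, 0.0], centers, widths, out_weights)
y_near_second = rbf_forward([1.0, 1.0], centers, widths, out_weights)
```

Inputs near one center activate that unit strongly and the other weakly, which is what lets a layer of several RBFs shape a more complex response than a single sigmoid layer.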