Michael Scherger
Department of Computer Science
Kent State University
• One should never lose sight of how crude the approximations are, and how
over-simplified our ANNs are compared to real brains.
• They can learn and generalize from training data, so there is no need for
enormous feats of programming.
• They are very noise tolerant, so they can cope with situations where
normal symbolic systems would have difficulty.
• 1949 Hebb published his book The Organization of Behavior, in which the Hebbian learning rule was
proposed.
• 1958 Rosenblatt introduced the simple single-layer networks now called Perceptrons.
• 1969 Minsky and Papert's book Perceptrons demonstrated the limitations of single-layer perceptrons,
and almost the whole field went into hibernation.
• 1982 Kohonen developed the Self-Organizing Maps that now bear his name.
• 1986 The Back-Propagation learning algorithm for Multi-Layer Perceptrons was re-discovered and the
whole field took off again.
• 2000s The power of Ensembles of Neural Networks and Support Vector Machines becomes apparent.
• They can learn from training data and generalize to new situations.
• They are useful for brain modeling and for real-world applications
involving pattern recognition, function approximation, prediction, …
• Brains have been evolving for tens of millions of years; computers have
been evolving for only tens of decades.
• Bipolar sigmoid: $g(x) = 2f(x) - 1 = \dfrac{2}{1 + e^{-x}} - 1 = \dfrac{1 - e^{-x}}{1 + e^{-x}}$,
where $f(x) = \dfrac{1}{1 + e^{-x}}$ is the binary sigmoid.
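As a quick check of the algebra, here is a minimal NumPy sketch of the binary and bipolar sigmoids (the function names are ours, not the slides'):

```python
import numpy as np

def binary_sigmoid(x):
    """Binary sigmoid f(x) = 1 / (1 + e^(-x)); output range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def bipolar_sigmoid(x):
    """Bipolar sigmoid g(x) = 2 f(x) - 1; output range (-1, 1)."""
    return 2.0 * binary_sigmoid(x) - 1.0

# g is just f rescaled: g(0) = 0, and g(x) -> +/-1 as x -> +/-inf
print(bipolar_sigmoid(np.array([-5.0, 0.0, 5.0])))  # approx [-0.987, 0.0, 0.987]
```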
• We shall see explicitly how one can construct simple networks that perform
NOT, AND, and OR.
• It is then a well known result from logic that we can construct any logical
function from these three operations.
• The resulting networks, however, will usually have a much more complex
architecture than a simple Perceptron.
OR gate: weights w1 = 2, w2 = 2, threshold θ = 2 (the unit fires, y = 1, iff w1·x1 + w2·x2 ≥ θ):

x1 x2 | y
 0  0 | 0
 0  1 | 1
 1  0 | 1
 1  1 | 1
AND gate: weights w1 = 1, w2 = 1, threshold θ = 2:

x1 x2 | y
 0  0 | 0
 0  1 | 0
 1  0 | 0
 1  1 | 1
NOT gate: input weight −1, bias unit (input fixed at 1) with weight 2, threshold θ = 2:

x1 | y
 0 | 1
 1 | 0
AND NOT gate (y = x1 AND NOT x2): weights w1 = 2, w2 = −1, threshold θ = 2, as implemented in the sketch below:

x1 x2 | y
 0  0 | 0
 0  1 | 0
 1  0 | 1
 1  1 | 0
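To make these constructions concrete, the following is a small Python sketch of a McCulloch-Pitts threshold unit wired with the weights and thresholds above (helper names such as threshold_unit are ours):

```python
import numpy as np

def threshold_unit(inputs, weights, theta):
    """Fire (output 1) iff the weighted input sum reaches the threshold."""
    return int(np.dot(inputs, weights) >= theta)

def OR(x1, x2):      return threshold_unit([x1, x2], [2, 2], theta=2)
def AND(x1, x2):     return threshold_unit([x1, x2], [1, 1], theta=2)
def NOT(x1):         return threshold_unit([x1, 1], [-1, 2], theta=2)  # second input is the bias unit, clamped to 1
def AND_NOT(x1, x2): return threshold_unit([x1, x2], [2, -1], theta=2)  # x1 AND (NOT x2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", OR(x1, x2), AND(x1, x2), AND_NOT(x1, x2))
```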
XOR gate: weights w1 = ?, w2 = ? No single-layer weights and threshold exist, because XOR is not linearly separable; a two-layer solution is given below.

x1 x2 | y
 0  0 | 0
 0  1 | 1
 1  0 | 1
 1  1 | 0
[Figure: McCulloch-Pitts network for hot/cold perception. Heat receptor x1 and cold receptor x2 feed hidden units z1, z2 and output units y1 (perceive heat) and y2 (perceive cold). Weights: x1→y1 = 2, z1→y1 = 2, x2→z1 = −1, z2→z1 = 2, x2→z2 = 2, x2→y2 = 1, z2→y2 = 1; all thresholds θ = 2.]
[Figure: time-step traces for two cases. Case 1: cold is applied for one step and then removed; two steps later y1 fires, i.e. heat is perceived. Case 2: cold is applied for two consecutive steps; y2 fires, i.e. cold is perceived.]
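A small simulation sketch of this network, assuming the weight assignments listed in the figure placeholder above (the step-by-step bookkeeping and the names are ours):

```python
def fire(net, theta=2):
    """McCulloch-Pitts unit: fire (1) iff the net input reaches the threshold."""
    return int(net >= theta)

def simulate(inputs):
    """Run the hot/cold network on a sequence of (heat, cold) receptor values.
    All units update synchronously, reading the previous time step's values."""
    z1 = z2 = 0
    for t, (x1, x2) in enumerate(inputs):
        y1 = fire(2 * x1 + 2 * z1)   # perceive heat: heat receptor, or cold-then-removed
        y2 = fire(1 * x2 + 1 * z2)   # perceive cold: cold on two consecutive steps
        z1, z2 = fire(-1 * x2 + 2 * z2), fire(2 * x2)  # update hidden state together
        print(f"t={t + 1}: perceive heat={y1}, perceive cold={y2}")

# Case 1: cold applied for one step, then removed -> heat is perceived (at t=3)
simulate([(0, 1), (0, 0), (0, 0)])
# Case 2: cold applied for two consecutive steps -> cold is perceived (at t=2)
simulate([(0, 1), (0, 1)])
```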
• How do we construct a neural network that can classify any type of bomber or fighter?
• 2. Take the simplest form of network you think might be able to solve your problem, e.g. a
simple Perceptron.
• 3. Try to find appropriate connection weights (including neuron thresholds) so that the
network produces the right outputs for each input in its training data.
• 4. Make sure that the network works on its training data, and test its generalization by
checking its performance on new testing data.
• 5. If the network doesn’t perform well enough, go back to stage 3 and try harder.
• 6. If the network still doesn’t perform well enough, go back to stage 2 and try harder.
• 7. If the network still doesn’t perform well enough, go back to stage 1 and try harder.
• Generally we would have one output unit for each class, with
activation 1 for ‘yes’ and 0 for ‘no’.
• With just two classes here, we can have just one output unit, with
activation 1 for ‘fighter’ and 0 for ‘bomber’ (or vice versa).
• y = z1 OR z2, where z1 = x1 AND NOT x2 and z2 = x2 AND NOT x1: a two-layer network that computes XOR.
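Using the same threshold-unit idea, the two-layer XOR can be checked directly (a self-contained sketch; the names are ours):

```python
import numpy as np

def threshold_unit(inputs, weights, theta=2):
    return int(np.dot(inputs, weights) >= theta)

def XOR(x1, x2):
    z1 = threshold_unit([x1, x2], [2, -1])   # z1 = x1 AND NOT x2
    z2 = threshold_unit([x2, x1], [2, -1])   # z2 = x2 AND NOT x1
    return threshold_unit([z1, z2], [2, 2])  # y  = z1 OR z2

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", XOR(x1, x2))  # prints the XOR truth table
```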
• $y_{in} = b + \sum_i x_i w_i$

• $y = \begin{cases} 1 & \text{if } y_{in} > \theta \\ 0 & \text{if } -\theta \le y_{in} \le \theta \\ -1 & \text{if } y_{in} < -\theta \end{cases}$
• The Perceptron Learning Rule will find weights for linearly separable
problems in a finite number of iterations.
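A minimal sketch of perceptron training with the three-valued activation above (on a wrong output, w ← w + α·t·x and b ← b + α·t; the learning rate α, the band θ = 0.2, and the AND training set are our illustrative choices):

```python
import numpy as np

def activation(y_in, theta=0.2):
    """Three-valued perceptron activation with an 'undecided' band of width 2*theta."""
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

def train_perceptron(X, targets, alpha=1.0, theta=0.2, max_epochs=100):
    """Perceptron learning rule: on a wrong output, w += alpha*t*x, b += alpha*t.
    For linearly separable data this terminates in finitely many epochs."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        changed = False
        for x, t in zip(X, targets):
            y = activation(b + np.dot(x, w), theta)
            if y != t:
                w += alpha * t * x
                b += alpha * t
                changed = True
        if not changed:  # every training example is classified correctly
            break
    return w, b

# Example: learn AND with binary inputs and bipolar targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, t)
print(w, b)
```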
• In other words (Hebb's learning rule):
– 1. If two neurons on either side of a synapse (connection) are activated
simultaneously (i.e. synchronously), then the strength of that synapse is
selectively increased.
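This strengthening is usually written as the weight update Δw_i = η·x_i·y; a minimal sketch (the learning rate η and the names are ours):

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """Hebb's rule: strengthen w_i when input x_i and output y are active together."""
    return w + eta * x * y

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])  # pre-synaptic activity
y = 1.0                        # post-synaptic activity
w = hebbian_update(w, x, y)
print(w)  # only the weights of the simultaneously active inputs grew: [0.1, 0.0, 0.1]
```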