
[Figure: model of an artificial neuron. Inputs x1 … xm, weighted by synaptic weights w1 … wm, enter a summing function together with the bias b to produce the local field v; an activation function then produces the output signal y.]

Dr. Meenakshi Sood
Associate Professor, NITTTR Chandigarh
meenkashi@nitttrchd.ac.in
Properties of Artificial Neural Nets
(ANNs)
⚫ Many simple neuron-like threshold switching units
⚫ Many weighted interconnections among units
⚫ Input is high-dimensional discrete or real-valued (e.g. raw
sensor input)
⚫ Output is discrete or real valued
⚫ Output is a vector of values
⚫ Form of target function sometimes unknown
⚫ Highly parallel, distributed processing
⚫ Learning by tuning the connection weights

4/8/2020 M Sood NITTTR CHD 3


Types of Layers
• The input layer.
– Introduces input values into the network.
– No activation function or other processing.

• The hidden layer(s).


– Extract and combine features for classification
– In theory, one hidden layer with enough units can approximate any continuous function; two hidden layers suffice for most practical problems
– Complex features may make additional layers worthwhile

• The output layer.


– Functionally just like the hidden layers
– Outputs are passed on to the world outside
the neural network.



Neural Network Topologies
Single layer feed-forward networks
⚫ Input layer projecting into the output layer

[Figure: a single layer network, with the input layer connected directly to the output layer]



Cont...
Multi-layer feed-forward networks
⚫ One or more hidden layers. Input projects only from
previous layers onto a layer.
[Figure: a 2-layer (1-hidden-layer) fully connected network: input layer, hidden layer, output layer]

⚫ Organized into distinct layers
⚫ Unidirectional connections
⚫ Memory-less: output depends only on the present input
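The layer-by-layer forward pass described above can be sketched in plain Python. All weights and sizes below are made-up illustration values, not taken from the slides:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def layer(inputs, weights, biases):
    """One fully connected layer: each unit applies a non-linearity
    to the weighted sum of the activities in the layer below."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def feed_forward(x, layers):
    """Propagate the input through each layer in turn:
    unidirectional, no cycles, no memory."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# A 2-input, 2-hidden-unit, 1-output network with illustrative weights.
net = [([[0.4, -0.6], [0.7, 0.1]], [0.0, -0.2]),   # hidden layer
       ([[1.2, -0.8]], [0.3])]                     # output layer
print(feed_forward([1.0, 0.5], net))
```

Because the connections only run forward, the same input always produces the same output.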
A Multilayer Neural Network
[Figure: a fully connected multilayer network. Input nodes receive the input record x_i; hidden nodes feed output nodes, which produce the output class. Weights w_ij label the connections.]



Neural Network Architectures

⚫ Feed-forward architecture

⚫ Feedback (recurrent) architecture
⚫ Competitive network



Feed Forward Neural Networks
⚫ The information is propagated from the inputs to the outputs.

⚫ The first layer is the input and the last layer is the output.

⚫ The activities of the neurons in each layer are a non-linear function of the activities in the layer below.

⚫ Time has no role (no cycles between outputs and inputs).

[Figure: inputs x1, x2, …, xn feeding a 1st hidden layer, a 2nd hidden layer, and an output layer]
Recurrent Neural Networks
⚫ A network with feedback, where some of its inputs are
connected to some of its outputs (discrete time).

⚫ Delays are associated with specific weights


⚫ Training is more difficult
⚫ Stable outputs may be more difficult to evaluate
⚫ Unexpected behavior (oscillation, chaos, …)



Architectures:
Feedforward and Feedback

Feedforward vs Feedback: Multilayer Perceptrons

⚫ Possess no dynamics
⚫ Demonstrate powerful properties
⚫ Universal function approximation
⚫ Find widespread applications in pattern classification

Static input-output map: S = f(X), X ∈ Rⁿ
Feedforward vs Feedback:
Recurrent Neural Networks

⚫ Non-linear dynamical systems

⚫ New state of the network is a function of the current input and the present state of the network
⚫ Capable of performing powerful tasks such as
⚫ pattern completion
⚫ topological feature mapping
⚫ pattern recognition
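The state update just described can be written as h_t = f(W x_t + U h_{t-1}). A minimal sketch, with made-up weight matrices and tanh chosen as the non-linearity purely for illustration:

```python
import math

def step(x_t, h_prev, W, U):
    """One recurrent step: new state = f(current input, present state),
    here h_t = tanh(W x_t + U h_{t-1})."""
    n = len(h_prev)
    return [math.tanh(sum(W[i][j] * x_t[j] for j in range(len(x_t))) +
                      sum(U[i][k] * h_prev[k] for k in range(n)))
            for i in range(n)]

# Two-unit recurrent state driven by a one-dimensional input sequence.
W = [[0.5], [-0.3]]           # input-to-state weights (illustrative)
U = [[0.1, 0.4], [0.2, -0.1]] # state-to-state (feedback) weights
h = [0.0, 0.0]
for x in [[1.0], [0.5], [-1.0]]:
    h = step(x, h, W, U)      # the same input gives different outputs over time
print(h)
```

Unlike a feedforward pass, the output here depends on the whole input history through the state h.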

Competitive Networks (later sessions)



SUMMARY
Neural Network types can be classified based on the following attributes:

• Applications
-Classification
-Clustering
-Function approximation
-Prediction
• Connection Type
- Static (feedforward)
- Dynamic (feedback)
• Topology
- Single layer
- Multilayer
- Recurrent
- Self-organized

• Learning Methods
- Supervised
- Unsupervised
TRAINING A NEURAL NETWORK



Overview
Learning is the ability of the neural network (NN) to adapt to its
environment and to improve its performance over time.

- The NN is stimulated by an environment


- The NN undergoes changes in its free parameters
- The NN responds in a new way to the environment



Definition of Learning
⚫ Learning is a process by which the free parameters of a
neural network are adapted through a process of
stimulation by the environment in which the network is
embedded.

⚫ The type of learning is determined by the manner in which the parameter changes take place.



A Simple Artificial Neuron
It receives input from some other units, or perhaps from an external source.
Each input has an associated weight w, which can be modified so as to model
synaptic learning. The unit computes some function f of the weighted sum of its inputs.

• Its output, in turn, can serve as input to other units.

• Note that wij refers to the weight from unit j to unit i (not the other way around).

• The function f is the unit's activation function. In the simplest case, f is the identity function, and the unit's output is just its net input. This is called a linear unit.
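The unit just described can be sketched in a few lines of Python; the weights, inputs and bias below are illustrative values, not from the slides:

```python
import math

def neuron_output(weights, inputs, bias, f=lambda v: v):
    """Compute a unit's output: f applied to the weighted sum of its
    inputs plus the bias. With the identity activation (the default)
    this is a linear unit whose output is just its net input."""
    v = sum(w * x for w, x in zip(weights, inputs)) + bias  # local field
    return f(v)

# Linear unit: output equals the net input.
print(neuron_output([0.5, -0.25], [2.0, 4.0], bias=0.1))  # 1.0 - 1.0 + 0.1 = 0.1

# Same unit with a sigmoid activation instead of the identity.
sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
print(round(neuron_output([0.5, -0.25], [2.0, 4.0], 0.1, sigmoid), 4))
```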
Represents a single-input, single-output function: y = f(x)
Adjust the weights (w) to learn a given target function y = f(x),
given a set of training data X → Y:

Input data set: X

Corresponding output: Y



⚫ Weight update formula: derived by minimizing an error function / cost function
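The equation on this slide did not survive extraction. A common form (an assumption here, not necessarily the slide's exact formula) is gradient descent on a squared-error cost:

```latex
E = \frac{1}{2}\sum_{q}\left(y_q - \hat{y}_q\right)^{2},
\qquad
w_j \leftarrow w_j - \eta\,\frac{\partial E}{\partial w_j}
```

where η is the learning rate, y_q the target output, and ŷ_q the network output for training example q.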



LEARNING PROCESS
⚫ NN is stimulated by environment

⚫ NN undergoes changes in its free parameters (weights)

⚫ NN responds in a new way to the environment

A set of well-defined rules for the solution of the learning problem is called a LEARNING ALGORITHM



Learning Rule
⚫ Learning rule: a procedure (training algorithm) for
modifying the weights and the biases of a network.

⚫ The purpose of the learning rule is to train the network to perform some task.



Supervised learning,
Unsupervised learning and
Reinforcement learning.



Rudimentary Example of classification
for Understanding Supervised and Unsupervised
Learning



Two possible Solutions…



Supervised Learning
⚫ It is based on a labeled
training set.
⚫ The class of each piece of
data in training set is
known.
⚫ Class labels are
pre-determined and
provided in the training
phase.



Supervised Learning

⚫ The learning rule is provided with a set of examples (the training set) of proper network behavior: {x1, y1}, {x2, y2}, …, {xQ, yQ}, where xq is an input to the network and yq is the corresponding correct (target) output.

⚫ As the inputs are applied to the network, the network outputs are compared to the targets. The learning rule is then used to adjust the weights and biases of the network in order to move the network outputs closer to the targets.



Supervised learning
⚫ In supervised training, both the inputs and the outputs are
provided.
⚫ The network then processes the inputs and compares its resulting
outputs against the desired outputs.
⚫ Errors are then propagated back through the system, causing the
system to adjust the weights which control the network.
⚫ This process occurs over and over as the weights are continually
tweaked. The set of data which enables the training is called the
training set.
⚫ During the training of a network the same set of data is processed
many times as the connection weights are ever refined.
⚫ Example architectures: multilayer perceptrons
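The compare-and-adjust cycle described above can be sketched with a single linear unit trained by the delta rule. This is a hypothetical minimal example, not an architecture taken from the slides:

```python
def train_linear_unit(samples, lr=0.05, epochs=500):
    """Supervised training sketch: present each (input, target) pair,
    compare the output against the desired output, and nudge the
    weight and bias so the output moves closer to the target."""
    w, b = 0.0, 0.0
    for _ in range(epochs):          # the same training set, processed many times
        for x, t in samples:
            y = w * x + b            # network output
            err = t - y              # error: target minus output
            w += lr * err * x        # weight tweak driven by the error
            b += lr * err
    return w, b

# Learn y = 2x + 1 from a few labeled examples (the training set).
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (-1.0, -1.0)]
w, b = train_linear_unit(data)
print(round(w, 2), round(b, 2))
```

After repeated passes over the same data the weight and bias settle close to the target function's slope and intercept.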



Learning Paradigms
⚫ Learning with a Teacher (=supervised learning)
⚫ The teacher has knowledge of the environment
⚫ Error-performance surface



Unsupervised learning

⚫ In unsupervised training, the network is provided with inputs but not with desired outputs.

⚫ The system itself must then decide what features it will use to group the input data. This is often referred to as self-organization or adaptation.

⚫ Example architectures : Kohonen, ART



Unsupervised Learning

⚫ The weights and biases are modified in response to network inputs only. There are no target outputs available.
⚫ Most of these algorithms perform some kind of clustering operation. They learn to categorize the input patterns into a finite number of classes. This is especially useful in such applications as vector quantization.



Unsupervised Learning
⚫ There is no external teacher or critic to oversee the
learning process.

⚫ Provision is made for a task-independent measure of the quality of the representation that the network is required to learn.

⚫ The training process extracts the statistical properties of the training set and groups similar vectors into classes.
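Such grouping of similar vectors can be sketched with a winner-take-all rule: the prototype nearest each input moves toward it, with no target outputs involved. The rule and all values below are illustrative assumptions, not a method prescribed by the slides:

```python
def competitive_cluster(data, k=2, lr=0.2, epochs=30):
    """Unsupervised clustering sketch: for each input, the nearest
    prototype (the 'winner') moves a fraction lr toward it.
    No labels or target outputs are ever used."""
    protos = [list(data[i]) for i in range(k)]   # seed with the first k points
    for _ in range(epochs):
        for x in data:
            # Winner: prototype with the smallest squared distance to x.
            win = min(range(k),
                      key=lambda i: sum((p - v) ** 2 for p, v in zip(protos[i], x)))
            # Move only the winner toward the input.
            protos[win] = [p + lr * (v - p) for p, v in zip(protos[win], x)]
    return protos

# Two obvious groups of 2-D points; class labels are never provided.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
print(competitive_cluster(data))
```

The prototypes end up near the two group centers: the network has extracted the statistical structure of the data on its own.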
Reinforcement learning
⚫ The learning of an input-output mapping is performed through continued interaction with the environment in order to minimize a scalar index of performance.

Task independent measure

Delayed Reinforcement

Credit assignment

Minimize a cost function



Reinforcement Learning
⚫ The learning rule is similar to supervised learning,
except that, instead of being provided with the correct
output for each network input, the algorithm is only
given a grade.

⚫ The grade (score) is a measure of the network performance over some sequence of inputs.
⚫ It appears to be most suited to control system
applications.



Supervised Vs Unsupervised
Supervised
⚫ Tasks performed: Classification, Pattern Recognition
⚫ NN models: Perceptron, Feed-forward NN

Unsupervised
⚫ Task performed: Clustering
⚫ NN model: Self-Organizing Maps



Learning algorithm
Epoch: presentation of the entire training set to the neural network. In the case of the AND function an epoch consists of four sets of inputs being presented to the network (i.e. [0,0], [0,1], [1,0], [1,1]).
Error: the amount by which the value output by the network differs from the target value. For example, if we required the network to output 0 and it output a 1, then Error = -1.



Learning algorithm
Target Value, T: when training a network we present it not only with the input but also with a value that we require the network to produce. For example, if we present the network with [1,1] for the AND function, the training value will be 1.
Output, O: the output value from the neuron.
Ij: inputs being presented to the neuron.
Wj: weight from input neuron (Ij) to the output neuron.
LR: the learning rate. This dictates how quickly the network converges. It is set by experimentation; typically 0.1.
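Putting these definitions together, a minimal perceptron-style sketch that learns AND with LR = 0.1. The threshold unit and update rule are assumptions consistent with, but not spelled out on, these slides:

```python
def train_and(lr=0.1, epochs=20):
    """Learn the AND function. One epoch presents all four inputs
    [0,0], [0,1], [1,0], [1,1]; each weight Wj is adjusted by
    LR * Error * Ij, where Error = T - O."""
    w = [0.0, 0.0]
    bias = 0.0
    samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in range(epochs):
        for inputs, target in samples:                # target value T
            net = sum(wj * ij for wj, ij in zip(w, inputs)) + bias
            o = 1 if net > 0 else 0                   # output O of the threshold unit
            error = target - o                        # e.g. T=0, O=1 gives Error = -1
            for j in range(2):
                w[j] += lr * error * inputs[j]        # Wj update scaled by LR
            bias += lr * error
    return w, bias

w, bias = train_and()
for inputs in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(inputs, 1 if sum(wj * ij for wj, ij in zip(w, inputs)) + bias > 0 else 0)
```

Because AND is linearly separable, the errors reach zero after a few epochs and the weights stop changing.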



THANK YOU

