
Multilayer Neural Network

Under the supervision of


Dr. Ghaida Almulla
Prepared by:
1- Fatima Fouad Jasim
2- Ola Abbas Saeed
3- Fatima Aqeel Hassan
4- Duna Muzahim Muhammad
5- Fatima Taha Jwad
6- Saja Hassan Dohan
Introduction:
by: Fatima Fouad

In the past 10 years, the best-performing artificial-intelligence systems —
such as the speech recognizers on smartphones or Google’s latest
automatic translator — have resulted from a technique called “deep
learning.”

Deep learning is in fact a new name for an approach to artificial intelligence
called neural networks, which have been going in and out of fashion for
more than 70 years. Neural networks were first proposed in 1944 by
Warren McCulloch and Walter Pitts, two University of Chicago
researchers who moved to MIT in 1952 as founding members of
what’s sometimes called the first cognitive science department.
Introduction:
by: Fatima Fouad

• Neural nets were a major area of research in both neuroscience and
computer science until 1969, when, according to computer science lore,
they were killed off by the MIT mathematicians Marvin Minsky and
Seymour Papert, who a year later would become co-directors of the new
MIT Artificial Intelligence Laboratory.

• The technique then enjoyed a resurgence in the 1980s, fell into eclipse
again in the first decade of the new century, and has returned like
gangbusters in the second, fueled largely by the increased processing
power of graphics chips.
Introduction:
by: Fatima Fouad

“There’s this idea that ideas in science are a bit like epidemics of viruses,”
says Tomaso Poggio, the Eugene McDermott Professor of Brain and
Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for
Brain Research, and director of MIT’s Center for Brains, Minds, and
Machines. “There are apparently five or six basic strains of flu viruses, and
apparently each one comes back with a period of around 25 years. People
get infected, and they develop an immune response, and so they don’t get
infected for the next 25 years. And then there is a new generation that is
ready to be infected by the same strain of virus. In science, people fall in
love with an idea, get excited about it, hammer it to death, and then get
immunized — they get tired of it. So ideas should have the same kind of
periodicity!”
What is a neural network?
By: Ola Abbas

Neural nets are a means of doing
machine learning, in which a
computer learns to perform some
task by analyzing training examples.
Usually, the examples have been
hand-labeled in advance. An object
recognition system, for instance,
might be fed thousands of labeled
images of cars, houses, coffee cups,
and so on, and it would find visual
patterns in the images that
consistently correlate with particular
labels.
What is a neural network?
By: Ola Abbas

• Modeled loosely on the human brain, a neural net consists of thousands
or even millions of simple processing nodes that are densely
interconnected. Most of today’s neural nets are organized into layers of
nodes, and they’re “feed-forward,” meaning that data moves through
them in only one direction. An individual node might be connected to
several nodes in the layer beneath it, from which it receives data, and
several nodes in the layer above it, to which it sends data.
What is a neural network?
By: Ola Abbas

• To each of its incoming connections, a node
will assign a number known as a “weight.”
When the network is active, the node
receives a different data item — a different
number — over each of its connections and
multiplies it by the associated weight. It then
adds the resulting products together, yielding
a single number. If that number is below a
threshold value, the node passes no data to
the next layer. If the number exceeds the
threshold value, the node “fires,” which in
today’s neural nets generally means sending
the number — the sum of the weighted
inputs — along all its outgoing connections.
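A minimal sketch of this computation in Python (the function name, input values, weights, and threshold below are illustrative assumptions, not values from the text):

```python
import numpy as np

def node_output(inputs, weights, threshold):
    """Weighted sum of incoming data; the node 'fires' only above the threshold."""
    total = np.dot(inputs, weights)             # multiply each input by its weight, then add
    return total if total > threshold else 0.0  # send the sum along, or send nothing

# A node with three incoming connections.
x = np.array([0.5, -1.0, 2.0])            # data items arriving over each connection
w = np.array([0.8, 0.2, 0.4])             # weight assigned to each connection
print(node_output(x, w, threshold=0.5))   # 1.0 -> the sum exceeds the threshold, so the node fires
```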
What is a neural network?
By: Ola Abbas

• When a neural net is being trained,
all of its weights and thresholds are
initially set to random values.
Training data is fed to the bottom
layer — the input layer — and it
passes through the succeeding
layers, getting multiplied and
added together in complex ways,
until it finally arrives, radically
transformed, at the output layer.
During training, the weights and
thresholds are continually adjusted
until training data with the same
labels consistently yield similar
outputs.
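One simple way to realize this adjustment, continuing the single-node sketch above (the error-driven update and the two hand-labeled data pairs are illustrative assumptions; the specific rules appear in the learning-rules section below):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=3)    # weights start out at random values ...
threshold = rng.normal()        # ... and so does the threshold

def fire(x):
    """Forward pass for a single thresholded node."""
    s = np.dot(x, weights)
    return s if s > threshold else 0.0

# Hypothetical hand-labeled examples: (input vector, desired output) pairs.
training_data = [(np.array([0.5, -1.0, 2.0]), 1.0),
                 (np.array([1.0, 1.0, -0.5]), 0.0)]

for epoch in range(100):
    for x, label in training_data:
        error = label - fire(x)
        weights += 0.01 * error * x   # nudge each weight toward the desired output
        threshold -= 0.01 * error     # lower the threshold when the output was too small
```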
Modelling of one neuron:
by: Fatima Aqeel

• A general treatment of models, however, would necessarily lead to the
examination of a number of philosophical questions. Here we simply
discuss some aspects of modelling that in our experience have proved to
be useful in the construction and application of models. These topics are
not usually considered in the neurophysiological modelling literature, but
an understanding of the basic assumptions of modelling, and the
presumed relation between model and reality is essential for constructive
work in computational neuroscience.
Modelling of one neuron:
by: Fatima Aqeel

• Neuronal modelling is the process by which a biological neuron is
represented by a mathematical structure that incorporates its
biophysical and geometrical characteristics. This structure is referred to
as the mathematical model or the model of the neuron. The behavior of
this representation may serve a number of purposes: for example, it
may be used as the basis for estimating the biophysical parameters of
real neurons or it may be used to define the computational and
information processing properties of a neuron. Neuronal modelling
requires not only an understanding of mathematical and computational
techniques, but also an understanding of what the process of
modelling entails.
Modelling of one neuron:
by: Fatima Aqeel

The fundamental building block of every nervous system is the
single neuron. Understanding how these exquisitely structured
elements operate is an integral part of the quest to solve the
mysteries of the brain. Quantitative mathematical models have
proved to be an indispensable tool in pursuing this goal. We
review recent advances and examine how single-cell models on
five levels of complexity, from black-box approaches to detailed
compartmental simulations, address key questions about neural
dynamics and signal processing.
Modelling of one neuron:
by: Fatima Aqeel

• Realistic single-neuron modeling organizes and clarifies physiological
hypotheses. It extends the experimenter's intuition and leads to testable
predictions. A powerful new algorithm, several user-friendly software
packages and the advent of fast, cheap computers have together made
this tool accessible to a broad range of neurobiologists. Equally dramatic
advances in experimental findings have increased the level of
sophistication of the models. Here we provide a guide to single-neuron
modeling, illustrate its power with a few examples and speculate on
possible future directions for this rapidly growing field.
Learning rules for one neuron:
by : Duna Muzahim

• A learning rule, or learning process, is a method or mathematical logic
that improves an artificial neural network's performance; the rule is
applied repeatedly over the network. Learning rules update the weights
and bias levels of a network as the network is simulated in a specific data
environment. Applying a learning rule is an iterative process: it helps a
neural network learn from existing conditions and improve its performance.
Learning rules for one neuron:
by : Duna Muzahim

This definition of the learning process implies the following sequence of events:
1. The neural network is stimulated by an environment.
2. The neural network undergoes changes in its free parameters as a result of
this stimulation.
3. The neural network responds in a new way to the environment because of
the changes which have occurred in its internal structure.
Learning rules for one neuron:
by : Duna Muzahim

• Different learning rules in neural networks:
• Hebbian learning rule – identifies how to modify the weights of the
nodes of a network.
• Perceptron learning rule – the network starts its learning by assigning a
random value to each weight.
• Delta learning rule – the modification in the synaptic weight of a node is
equal to the multiplication of the error and the input.
• Correlation learning rule – a supervised learning rule.
• Outstar learning rule – used when nodes or neurons in a network are
arranged in a layer.
Learning rules for one neuron:
by : Duna Muzahim

Hebbian Learning Rule:

The Hebbian rule was the first learning rule. Donald Hebb developed it in
1949 as a learning algorithm for unsupervised neural networks. We can use
it to identify how to improve the weights of the nodes of a network.
The Hebb learning rule assumes that if two neighboring neurons activate
and deactivate at the same time, the weight connecting these neurons
should increase. For neurons operating in the opposite phase, the weight
between them should decrease. If there is no signal correlation, the
weight should not change.
When the inputs of both nodes are either positive or negative, a strong
positive weight develops between the nodes. If the input of one node is
positive and the other's negative, a strong negative weight develops
between the nodes.
Learning rules for one neuron:
by : Duna Muzahim

At the start, the values of all weights are set to zero. This learning rule can
be used for both soft- and hard-activation functions. Since the desired
responses of neurons are not used in the learning procedure, this is an
unsupervised learning rule. The absolute values of the weights are usually
proportional to the learning time, which is undesirable.
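In mathematical form, a standard statement of the Hebbian update (with η the learning rate, x_i the i-th input, and o_j the output of neuron j) is:

$$\Delta w_{ij} = \eta \, x_i \, o_j$$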
Learning rules for one neuron:
by : Duna Muzahim

Perceptron Learning Rule:

As you know, each connection in a neural network has an associated
weight, which changes in the course of learning. The perceptron rule is an
example of supervised learning: the network starts its learning by
assigning a random value to each weight.
The output value is calculated on the basis of a set of records for which
the expected output value is known; this set of records is called the
learning sample.
The network then compares the calculated output value with the expected
value, and calculates an error function E, which can be the sum of squares
of the errors occurring for each individual record in the learning sample,
computed as follows:
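A standard sum-of-squares form of this error function, with d_j the desired output and o_j the calculated output for the j-th record of the learning sample:

$$E = \sum_{j} (d_j - o_j)^2$$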
Learning rules for one neuron:
by: Duna Muzahim

Delta Learning Rule:

Developed by Widrow and Hoff, the delta rule is one of the most common
learning rules. It depends on supervised learning.
This rule states that the modification in the synaptic weight of a node is
equal to the multiplication of the error and the input.
In mathematical form the delta rule is as follows:
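A standard statement of the rule, with η the learning rate, d_j the desired output, o_j the actual output of node j, and x_i the i-th input:

$$\Delta w_{ij} = \eta \, (d_j - o_j) \, x_i$$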
Learning rules for one neuron:
by: Duna Muzahim

Correlation Learning Rule:

The correlation learning rule is based on a similar principle to the Hebbian
learning rule. It assumes that weights between simultaneously responding
neurons should be more positive, and weights between neurons with
opposite reactions should be more negative.
Contrary to the Hebbian rule, the correlation rule is supervised learning:
instead of the actual response, o_j, the desired response, d_j, is used for
the weight-change calculation.
In mathematical form the correlation learning rule is as follows:
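A standard form, with η the learning rate, x_i the i-th input, and d_j the desired response of node j:

$$\Delta w_{ij} = \eta \, x_i \, d_j$$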
Learning rules for one neuron:
by: Duna Muzahim

Outstar Learning Rule:

We use the outstar learning rule when we assume that the nodes or
neurons in a network are arranged in a layer. Here the weights connected
to a certain node should be equal to the desired outputs for the neurons
connected through those weights.
The outstar rule produces the desired response d for a layer of n nodes.
This type of learning is applied to all nodes in a particular layer, and the
weights for the nodes are updated as in Kohonen neural networks.
In mathematical form, the outstar learning rule is expressed as follows:
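A standard form, with η the learning rate, d_k the desired output of node k, and w_jk the weight being updated:

$$\Delta w_{jk} = \eta \, (d_k - w_{jk})$$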
Layered neural network:
by: Fatima Taha

Layer is a general term that applies to a collection of 'nodes' operating
together at a specific depth within a neural network.
The core building block of neural networks is the layer, a data-processing
module that you can think of as a filter for data. Some data goes in, and it
comes out in a more useful form. Specifically, layers extract
representations out of the data fed into them—hopefully, representations
that are more meaningful for the problem at hand. Most of deep learning
consists of chaining together simple layers that will implement a form of
progressive data distillation. A deep-learning model is like a sieve for data
processing, made of a succession of increasingly refined data filters—the
layers.
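A minimal sketch of such chaining in NumPy (the layer sizes and the tanh activation are illustrative assumptions):

```python
import numpy as np

def dense(x, W, b):
    """One simple layer: weighted sums plus a bias, passed through a nonlinearity."""
    return np.tanh(x @ W + b)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                      # some raw input data
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # first data filter
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)    # second, more refined filter

h = dense(x, W1, b1)    # first representation extracted from the data
y = dense(h, W2, b2)    # chained layers progressively distill the data
```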
Layered neural network:
by: Fatima Taha

• The input layer contains your raw data (you can think of each variable
as a 'node').
• The hidden layer(s) are where the black magic happens in neural
networks. Each layer tries to learn different aspects of the data by
minimizing an error/cost function. The most intuitive way to understand
these layers is in the context of image recognition, such as recognizing a
face: the first layer may learn edge detection, the second may detect eyes,
the third a nose, etc. This is not exactly what happens, but the idea is to
break the problem up into components that different levels of abstraction
can piece together, much like our own brains work (hence the name
'neural networks').
• The output layer is the simplest, usually consisting of a single output for
classification problems. Although it is a single 'node', it is still considered
a layer in a neural network, as it could contain multiple nodes.
Layered neural network:
by: Fatima Taha

• Every layer has a separate activation function and input/output
connection weights.
• The output of the first hidden layer is multiplied by a weight and
processed by an activation function in the next layer, and so on. Single-
layer neural networks are suited only to simple tasks; a deeper NN can
perform far better than a single layer.
• However, do not use more than one hidden layer if your application is
not fairly complex. In conclusion, a single layer of 100 neurons does not
necessarily make a better neural network than 10 layers of 10 neurons,
and 10 layers would be unusual unless you are doing deep learning. Start
with 10 neurons in the hidden layer and try adding layers, or adding more
neurons to the same layer, to see the difference. Learning with more
layers will be easier, but more training time is required.
Learning of multilayer neural networks:
by: Saja Hassan

Multilayer networks solve the classification problem for nonlinearly
separable sets by employing hidden layers, whose neurons are not
directly connected to the output. The additional hidden layers can be
interpreted geometrically as additional hyperplanes, which enhance the
separation capacity of the network.
Learning of multilayer neural networks:
by: Saja Hassan

• The training occurs in a supervised style. The basic idea is to present the
input vector to the network, then calculate in the forward direction the
output of each layer and the final output of the network. For the output
layer the desired values are known, and therefore the weights can be
adjusted as for a single-layer network; in the case of the BP algorithm,
according to the gradient descent rule.
• To calculate the weight changes in the hidden layers, the error in the
output layer is back-propagated to these layers according to the
connecting weights. This process is repeated for each sample in the
training set. One cycle through the training set is called an epoch. The
number of epochs needed to train the network depends on various
parameters, especially on the error calculated in the output layer.
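A minimal sketch of this procedure (the 2-4-1 architecture, sigmoid activations, XOR training set, learning rate, and epoch count are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input vectors
d = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
eta = 0.5                                       # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):      # one cycle through the training set = one epoch
    # Forward direction: output of each layer, then the final output.
    h = sigmoid(X @ W1 + b1)
    o = sigmoid(h @ W2 + b2)

    # Output layer: desired values are known, so its error term is direct.
    delta_o = (o - d) * o * (1 - o)

    # Hidden layer: the output error is back-propagated via the connecting weights.
    delta_h = (delta_o @ W2.T) * h * (1 - h)

    # Gradient-descent adjustments, as for a single-layer network.
    W2 -= eta * h.T @ delta_o
    b2 -= eta * delta_o.sum(axis=0)
    W1 -= eta * X.T @ delta_h
    b1 -= eta * delta_h.sum(axis=0)

print(o.round(2))   # outputs should move toward the desired 0/1 labels
```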

Learning of multilayer neural networks:
by: Saja Hassan

• The most popular class of
multilayer feed-forward ANNs is the
multilayer perceptron, with one or more
layers between the input and output
layer. The multilayer perceptron network
is most commonly used with the back-
propagation algorithm. Multiple layers of
neurons with nonlinear transfer
functions allow the network to learn
nonlinear and linear relationships
between input and output vectors.
Learning of multilayer neural networks:
by: Saja Hassan

• The back-propagation algorithm is composed of two passes through the
different layers of the network. The first is the forward pass, in which the
input vector is applied to the network and its effect propagates through
the network layer by layer; in this pass, the synaptic weights are held
constant. The second is the backward pass, in which, unlike in the
forward pass, the parameters of the network are modified. First the error,
the discrepancy between the output of the network and the expected
output, is propagated backward through the network to update the
synaptic weights. The weights are continuously updated after each piece
of data is processed. The input vector is presented to the network and the
process continues until the output of the network comes close to the
desired output.
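Each such update is a standard gradient-descent step on the error E, with η the learning rate:

$$w_{ij} \leftarrow w_{ij} - \eta \, \frac{\partial E}{\partial w_{ij}}$$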
References:
by: Fatima Fouad

• https://ai.stackexchange.com/questions/3262/1-hidden-layer-with-1000-neurons-vs-10-hidden-layers-with-100-neurons
• https://www.teco.edu/~albrecht/neuro/html/node18.html
• https://www.sciencedirect.com/topics/chemical-engineering/multilayer-neural-networks
• https://link.springer.com/chapter/10.1007/978-3-642-58552-4_8
• https://www.sciencedirect.com/science/article/abs/pii/1044576592900306
• https://www.engineeringenotes.com/artificial-intelligence-2/neural-network-artificial-intelligence-2/learning-neural-networks-and-learning-rules-artificial-intelligence/35478
References:
by: Fatima Fouad

• https://stackoverflow.com/questions/35345191/what-is-a-layer-in-a-neural-network
• https://data-flair.training/blogs/learning-rules-in-neural-network/
Thank You
