• The technique then enjoyed a resurgence in the 1980s, fell into eclipse
again in the first decade of the new century, and has returned like
gangbusters in the second, fueled largely by the increased processing
power of graphics chips.
Introduction:
by: Fatima Fouad
“There’s this idea that ideas in science are a bit like epidemics of viruses,”
says Tomaso Poggio, the Eugene McDermott Professor of Brain and
Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for
Brain Research, and director of MIT’s Center for Brains, Minds, and
Machines. “There are apparently five or six basic strains of flu viruses, and
apparently each one comes back with a period of around 25 years. People
get infected, and they develop an immune response, and so they don’t get
infected for the next 25 years. And then there is a new generation that is
ready to be infected by the same strain of virus. In science, people fall in
love with an idea, get excited about it, hammer it to death, and then get
immunized — they get tired of it. So ideas should have the same kind of
periodicity!”
What is a neural network?
By: Ola Abbas
This definition of the learning process implies the following sequence of events:
1. The neural network is stimulated by an environment.
2. The neural network undergoes changes in its free parameters as a result of
this stimulation.
3. The neural network responds in a new way to the environment because of
the changes that have occurred in its internal structure.
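The three-step sequence above can be sketched for a single neuron. The perceptron-style correction rule and the AND-gate environment below are illustrative assumptions, not part of the text:

```python
# A minimal sketch of the stimulate -> adapt -> respond sequence for one
# neuron with bipolar inputs (first input acts as a bias).

def respond(weights, x):
    # Steps 1 and 3: the network is stimulated by an input from the
    # environment and produces a response (thresholded weighted sum).
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= 0 else -1

def adapt(weights, x, target, eta=0.1):
    # Step 2: the free parameters (weights) change as a result of the
    # stimulation, here via a perceptron-style correction.
    y = respond(weights, x)
    return [w + eta * (target - y) * xi for w, xi in zip(weights, x)]

# Environment: inputs with desired responses (logical AND, as an example).
env = [([1, 1, 1], 1), ([1, 1, -1], -1), ([1, -1, 1], -1), ([1, -1, -1], -1)]

weights = [0.0, 0.0, 0.0]
for _ in range(20):                      # repeated stimulation
    for x, target in env:
        weights = adapt(weights, x, target)

# Step 3: the network now responds in a new way to the same environment.
print([respond(weights, x) for x, _ in env])   # → [1, -1, -1, -1]
```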
Learning rules for one neuron:
by : Duna Muzahim
At the start, the values of all weights are set to zero. This learning rule can be
used for both soft and hard activation functions. Since the desired responses
of the neurons are not used in the learning procedure, this is an unsupervised
learning rule. The absolute values of the weights are usually proportional to
the learning time, which is undesirable.
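The properties described (desired responses unused, weights starting at zero and growing with learning time) match the Hebbian learning rule, Δw = η·o·x, where o = f(wᵀx) is the neuron's own output. A minimal sketch follows; the bipolar hard activation, learning rate, and sample inputs are assumptions for illustration:

```python
# Hebbian learning: no desired response appears anywhere in the update.

def hard_activation(net):
    return 1 if net >= 0 else -1          # bipolar hard-limiting function

def hebbian_step(w, x, eta=0.5):
    o = hard_activation(sum(wi * xi for wi, xi in zip(w, x)))
    return [wi + eta * o * xi for wi, xi in zip(w, x)]   # Δw = η·o·x

w = [0.0, 0.0, 0.0]                       # all weights start at zero
inputs = [[1, -2, 1.5], [1, -0.5, -2], [0, 1, -1]]

for x in inputs * 2:                      # repeated, unsupervised presentation
    w = hebbian_step(w, x)
    print([round(wi, 2) for wi in w])     # |w| keeps growing with learning time
```

Because each update reinforces the neuron's own response, the weight magnitudes never shrink, which is exactly the unbounded growth the text calls undesirable.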
Layered neural network:
by: Fatima Fouad
• The input layer contains your raw data (you can think of each variable
as a 'node').
• The hidden layer(s) are where the 'black magic' happens in neural
networks. Each layer tries to learn different aspects of the data by
minimizing an error/cost function. The most intuitive way to understand
these layers is in the context of image recognition, such as recognizing a
face: the first layer may learn edge detection, the second may detect eyes,
the third a nose, and so on. This is not exactly what happens, but the idea
is to break the problem up into components at different levels of
abstraction that can be pieced together, much like our own brains work
(hence the name 'neural networks').
• The output layer is the simplest, usually consisting of a single output for
classification problems. Although it is a single 'node', it is still considered
a layer in a neural network, since it could contain multiple nodes.
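The three kinds of layers can be sketched as a forward pass. The layer sizes, weight values, and sigmoid activation below are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each row of `weights` feeds one neuron of this layer.
    return [sigmoid(sum(w * v for w, v in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.2, 3.0]   # input layer: the raw data, one 'node' per variable

# hidden layer: 2 neurons, each reading all 3 input nodes
h = layer(x, [[0.1, -0.4, 0.2], [0.3, 0.1, -0.2]], [0.0, 0.1])

# output layer: a single node for classification, still a layer in itself
y = layer(h, [[1.0, -1.0]], [0.0])[0]
print(round(y, 3))     # a score in (0, 1)
```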
Layered neural network:
by: Fatima Fouad, Fatima Taha
Multilayer networks solve the classification problem for nonlinearly
separable sets by employing hidden layers, whose neurons are not directly
connected to the output. The additional hidden layers can be interpreted
geometrically as additional hyperplanes, which enhance the separation
capacity of the network.
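A worked instance of this geometric interpretation: XOR is the classic nonlinearly separable set, which no single hyperplane can separate, but two hidden-neuron hyperplanes can. The weights below are hand-picked for illustration, not learned:

```python
# Two hidden hyperplanes carve the plane into strips; the output neuron
# labels the strip between them as class 1.

def step(net):
    return 1 if net >= 0 else 0

def neuron(x, w, b):
    return step(sum(wi * xi for wi, xi in zip(w, x)) + b)

def xor_net(x):
    h1 = neuron(x, [1, 1], -0.5)   # hyperplane 1: fires when x1 + x2 >= 0.5
    h2 = neuron(x, [1, 1], -1.5)   # hyperplane 2: fires when x1 + x2 >= 1.5
    # output combines the two half-spaces: between the planes -> class 1
    return neuron([h1, h2], [1, -1], -0.5)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(x))           # → 0, 1, 1, 0 (the XOR truth table)
```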
Learning of multilayer neural networks:
by: Saja Hassan
• The training occurs in a supervised style. The basic idea is to present the
input vector to the network and calculate, in the forward direction, the
output of each layer and the final output of the network. For the output
layer the desired values are known, and therefore the weights can be
adjusted as for a single-layer network; in the case of the BP algorithm,
according to the gradient descent rule.
• To calculate the weight changes in the hidden layers, the error in the
output layer is back-propagated to these layers according to the
connecting weights. This process is repeated for each sample in the
training set. One cycle through the training set is called an epoch. The
number of epochs needed to train the network depends on various
parameters, especially on the error calculated in the output layer.
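The procedure in the two bullets above can be sketched end to end. The network size, learning rate, stopping threshold, and XOR training set are assumptions chosen for illustration:

```python
import math
import random

# Forward pass, output-layer delta from the known desired value, error
# back-propagated to the hidden layer through the connecting weights,
# gradient-descent updates; one epoch = one cycle through the set, and
# the number of epochs depends on the error in the output layer.

random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

n_in, n_hid = 2, 3
w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]    # +1 = bias weight
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR samples
eta = 0.5                                                    # learning rate

def forward(x):
    h = [sig(sum(w * v for w, v in zip(row, x + [1]))) for row in w_hid]
    y = sig(sum(w * v for w, v in zip(w_out, h + [1])))
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

err_before = mse()
epochs = 0
while mse() > 0.01 and epochs < 20000:  # stop when the output error is small
    for x, t in data:
        h, y = forward(x)
        d_out = (y - t) * y * (1 - y)                    # output-layer delta
        d_hid = [d_out * w_out[j] * h[j] * (1 - h[j])    # back-propagated deltas
                 for j in range(n_hid)]
        for j in range(n_hid):                           # gradient-descent updates
            w_out[j] -= eta * d_out * h[j]
            for i in range(n_in):
                w_hid[j][i] -= eta * d_hid[j] * x[i]
            w_hid[j][n_in] -= eta * d_hid[j]             # hidden bias
        w_out[n_hid] -= eta * d_out                      # output bias
    epochs += 1

err_after = mse()
print(epochs, err_before, err_after)
```

Rerunning with a different seed or learning rate changes the epoch count, which is the parameter dependence the text describes.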
Learning of multilayer neural networks:
by: Saja Hassan
• https://ai.stackexchange.com/questions/3262/1-hidden-layer-with-1000-neurons-vs-10-hidden-layers-with-100-neurons
• https://www.teco.edu/~albrecht/neuro/html/node18.html
• https://www.sciencedirect.com/topics/chemical-engineering/multilayer-neural-networks
• https://link.springer.com/chapter/10.1007/978-3-642-58552-4_8
• https://www.sciencedirect.com/science/article/abs/pii/1044576592900306
• https://www.engineeringenotes.com/artificial-intelligence-2/neural-network-artificial-intelligence-2/learning-neural-networks-and-learning-rules-artificial-intelligence/35478
References:
by: Fatima Fouad
• https://stackoverflow.com/questions/35345191/what-is-a-layer-in-a-neural-network#:~:text=Layer%20is%20a%20general%20term,magic%20happens%20in%20neural%20networks
• https://data-flair.training/blogs/learning-rules-in-neural-network/?utm_campaign=Submission&utm_medium=Community&utm_source=GrowthHackers.com
Thank You