
Instituto Politécnico Nacional

Unidad Profesional Interdisciplinaria
en Ingeniería y Tecnologías Avanzadas

Neural Networks

Comparison between perceptron and ADALINE

Teacher’s Name: Blanca Tovar Corona

Student’s Name: Hugo Osorio Gayosso

Due Date: March 1st, 2019


Introduction
This document presents a classification problem and its solution using neural networks. The
task is to separate objects into two different groups, for example, rabbits and bears. It can
be solved with two different types of neurons: the perceptron and the Adaline. Each neuron
has its advantages and disadvantages; everything depends on the application.

Background
The ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) is an early single-
layer artificial neural network and the name of the physical device that implemented this
network. The network uses memistors. It was developed by Professor Bernard Widrow and
his graduate student Ted Hoff at Stanford University in 1960. It is based on the McCulloch–
Pitts neuron. It consists of a weight, a bias and a summation function [1].
In turn, the perceptron is an algorithm for supervised learning of binary classifiers. A binary
classifier is a function which can decide whether or not an input, represented by a vector of
numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification
algorithm that makes its predictions based on a linear predictor function combining a set of
weights with the feature vector [2].
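Concretely, a linear predictor of this kind just thresholds a weighted sum. A minimal Python sketch (the weights and bias below are made-up, illustrative values, not taken from the practice):

```python
import numpy as np

# Illustrative (made-up) weights and bias for a two-feature classifier
w = np.array([0.4, -0.7])
b = 0.1

def classify(x):
    # Linear predictor function: weighted sum of the features plus bias,
    # thresholded at zero to give a binary class label
    return 1 if np.dot(w, x) + b >= 0 else 0

# classify(np.array([1.0, 0.2]))  -> 1   (0.4 - 0.14 + 0.1 = 0.36 >= 0)
# classify(np.array([0.0, 1.0]))  -> 0   (-0.7 + 0.1 = -0.6 < 0)
```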

Objective
To learn how to use two different neurons depending on the problem to be solved.
Moreover, it is important that we, as mechatronics engineers, know how to implement
Perceptron and Adaline neurons in software such as MATLAB, because some problems may
require specific characteristics with specific parameters.

Development
The Adaline (Adaptive Linear Element) and the Perceptron are both linear classifiers when
considered as individual units. Both take an input and, based on a threshold, output either
a 0 or a 1.

The main difference between the two is that the Perceptron takes the binarized response
(the classification result) and computes the error used to update the weights, whereas
the Adaline uses the continuous response value, before the binarized output is produced,
to update the weights.
Because the Adaline's updates are computed before thresholding, they are more
representative of the actual error, which in turn allows the model to converge more quickly.
Figure 1 contains all the MATLAB code that trains both the Perceptron and the
Adaline.
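Since that MATLAB listing appears only as an image, the difference between the two update rules can also be sketched in Python/NumPy (a minimal illustration, not the code of the practice; it uses the classic bipolar -1/+1 target convention rather than 0/1):

```python
import numpy as np

def train(X, y, lr=0.1, epochs=50, rule="perceptron", seed=0):
    """Train one linear neuron on patterns X (n_samples x n_features)
    with bipolar targets y in {-1, +1}, using either update rule."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])   # random initial weights
    b = rng.normal()                  # random initial bias
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            net = xi @ w + b              # linear combination (pre-threshold)
            if rule == "perceptron":
                err = ti - np.sign(net)   # error computed AFTER thresholding
            else:                         # "adaline"
                err = ti - net            # error on the continuous output
            w += lr * err * xi
            b += lr * err
    return w, b

def predict(X, w, b):
    # Binarized output: sign of the linear combination
    return np.sign(X @ w + b)
```

With the Perceptron rule the weights only move when a pattern is misclassified, whereas the Adaline rule nudges the weights on every presentation, in proportion to the continuous error.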
Results
In this first results figure, the red line in the axes corresponds to the Perceptron
and the black line corresponds to the Adaline.
The blue points are the patterns: the 'x' points are the rabbits and the 'o' points
are the bears.
It can be seen that the Perceptron finds a solution in the first iteration out of 10.

Figure 1 Training with 10 epochs


This time, the number of epochs is 50. This results in a better calibrated Adaline boundary
line, whilst the Perceptron got stuck near the rabbits' group.

Figure 2 Training with 50 epochs


In Figure 3, the Adaline boundary line is better calibrated with 200 epochs than in the
previous two configurations. Once again, the Perceptron found a solution faster; however, it
may be detrimental that its boundary line lies very close to the patterns.

Figure 3 Training with 200 epochs


In the second subsection of the practice, the number of epochs is set to 100, whilst α varies
from 1/(4·λmax) to 1/(16·λmax).

In Figure 4, α = 1/(4·λmax), and it can be seen that sometimes, with high random initial
values of W and B, the Adaline algorithm could not find a solution.
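Here λmax presumably denotes the largest eigenvalue of the correlation matrix of the input patterns, the quantity that appears in the usual stability condition for the LMS rule that the Adaline uses (roughly 0 < α < 2/λmax). A sketch of how it could be computed, with made-up patterns:

```python
import numpy as np

# Made-up input patterns (one row per pattern), e.g. two features per animal
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 3.5],
              [4.0, 3.0]])

# Correlation matrix of the inputs: R = (1/N) * sum_i x_i x_i^T
R = X.T @ X / X.shape[0]

# lambda_max is R's largest eigenvalue; eigvalsh applies since R is symmetric
lam_max = np.linalg.eigvalsh(R).max()

# The three learning rates tried in this subsection of the practice
alphas = [1 / (4 * lam_max), 1 / (8 * lam_max), 1 / (16 * lam_max)]
```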

Figure 4 Training with alpha/4


With α = 1/(8·λmax) it was unusual not to obtain a solution, but the algorithm did not find
the solution as fast as with the previous configuration.

Figure 5 Training with alpha/8


Finally, as we can see in the error graph, with α = 1/(16·λmax) the solution took longer than
with the previous configurations. However, the Adaline's solution is, in my opinion, the best
calibrated. That means the value of α that fits this case best was found.

Figure 6 Training with alpha/16


Conclusions
The variety of options for solving an engineering problem is quite wide, and it is important
to know when to use each type of neuron. It is also important to know how to vary the
neuron's different parameters.

In addition, we observed how different these two types of neurons are: one is more efficient
but less accurate than the other. The Perceptron found a solution faster, but that solution is
not as smooth as the ADALINE's response.
