Outlier Input Vectors

NEWP creates a perceptron. The first argument specifies the expected ranges of the two inputs. The second argument specifies that there is only one neuron in the layer.

Coding

Add the neuron's initial attempt at classification to the plot.

The initial weights are set to zero, so any input gives the same output and the classification line does
not even appear on the plot.

Coding

Figure
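The behavior described above can be sketched in Python/NumPy (a minimal stand-in for the toolbox's NEWP network object, not the toolbox code itself):

```python
import numpy as np

def hardlim(n):
    # MATLAB-style hard-limit transfer function: 1 for n >= 0, else 0
    return 1 if n >= 0 else 0

class Perceptron:
    """A single hard-limit neuron, a sketch of what NEWP constructs."""
    def __init__(self, n_inputs):
        self.w = np.zeros(n_inputs)   # initial weights are zero
        self.b = 0.0                  # initial bias is zero

    def sim(self, p):
        return hardlim(self.w @ p + self.b)

net = Perceptron(2)
# With zero weights and bias, w'p + b = 0 for every input, so the output
# is hardlim(0) = 1 regardless of p: no classification line exists yet.
print(net.sim([-0.5, -0.5]), net.sim([0.3, 1.0]))  # both print 1
```

This is why the initial classification line cannot be drawn: a line w'p + b = 0 with w = 0 is degenerate.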

ADAPT returns a new network object that performs as a better classifier, the network output, and
the error. This loop allows the network to adapt for 3 passes, plots the classification line, and
continues until the error is zero.

Coding

Figure

Note that it took the perceptron many epochs to train. This is a very long time for such a simple problem. The reason for the long training time is the outlier vector. Despite the long training time, the perceptron still learns properly and can be used to classify other inputs.

Now SIM can be used to classify any other input vector. For example, classify an input vector of [0.7;
1.2].
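The adaption loop and the final classification can be sketched together in Python/NumPy. This is not the toolbox code: ADAPT's update for a perceptron is the learning rule w ← w + e·p, b ← b + e with error e = t − a, and the five data values (including the outlier in the last column) are assumed from the demo's plot:

```python
import numpy as np

# Five 2-element input vectors (columns) and their target categories;
# the last column is the outlier (values assumed from the demo's plot).
P = np.array([[-0.5, -0.5,  0.3, -0.1, -40.0],
              [-0.5,  0.5, -0.5,  1.0,  50.0]])
T = np.array([1, 1, 0, 0, 1])

w, b = np.zeros(2), 0.0
epochs = 0
while True:                              # adapt until the error is zero
    errors = 0
    for p, t in zip(P.T, T):
        a = 1 if w @ p + b >= 0 else 0   # hardlim output
        e = t - a
        w, b = w + e * p, b + e          # perceptron learning rule
        errors += abs(e)
    epochs += 1
    if errors == 0:
        break

print("epochs to converge:", epochs)
# The trained weights now classify new inputs; the point [0.7; 1.2]
# falls on the category-0 side of any line separating this data.
print("class of [0.7, 1.2]:", 1 if w @ np.array([0.7, 1.2]) + b >= 0 else 0)
```

Because the data are linearly separable, the loop is guaranteed to terminate, though the outlier makes it slow.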

A plot of this new point with the original training set shows how the network performs. To distinguish
it from the training set, color it red.

Coding

Figure

Turn on "hold" so the previous plot is not erased. Add the training set and the classification line to the
plot.

Coding

Figure

Finally, zoom into the area of interest.

The perceptron correctly classified our new point (in red) as category "zero" (represented by a circle)
and not a "one" (represented by a plus). Despite the long training time, the perceptron still learns
properly. To see how to reduce training times associated with outlier vectors, see the "Normalized
Perceptron Rule" demo.

Coding

Figure
Normalized Perceptron Rule

A 2-input hard limit neuron is trained to classify 5 input vectors into two categories. Despite the fact
that one input vector is much bigger than the others, training with LEARNPN is quick.

Each of the five column vectors in P defines a 2-element input vector, and the row vector T defines the vectors' target categories. Plot these vectors with PLOTPV.

Coding

Figure

Note that 4 input vectors have much smaller magnitudes than the fifth vector in the upper left of the
plot. The perceptron must properly classify the 5 input vectors in P into the two categories defined by
T.

NEWP creates a perceptron. The first argument specifies the expected ranges of the two inputs. The second argument specifies that there is only one neuron in the layer. LEARNPN is less sensitive to large variations in input vector size than LEARNP (the default).

Coding

Add the neuron's initial attempt at classification to the plot.

The initial weights are set to zero, so any input gives the same output and the classification line does
not even appear on the plot.

Coding

Figure

ADAPT returns a new network object that performs as a better classifier, the network output, and the
error. This loop allows the network to adapt for 3 passes, plots the classification line, and continues
until the error is zero.

Coding

Figure

Note that training with LEARNPN took only 3 epochs, while solving the same problem with LEARNP required 32 epochs. Thus, LEARNPN does a much better job than LEARNP when there are large variations in input vector size.
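The difference between the two rules can be sketched in Python/NumPy. LEARNPN's documented normalization is pn = p / sqrt(1 + p'p); the bias update is scaled the same way here for consistency, and the data values are assumed from the demo's plot — this is an illustrative sketch, not the toolbox internals:

```python
import numpy as np

# The same five training points as the outlier demo (values assumed
# from the demo's plot); the fifth column is the large outlier.
P = np.array([[-0.5, -0.5,  0.3, -0.1, -40.0],
              [-0.5,  0.5, -0.5,  1.0,  50.0]])
T = np.array([1, 1, 0, 0, 1])

def train(P, T, normalized, max_epochs=200000):
    """Perceptron rule; if `normalized`, scale each update by
    1 / sqrt(1 + p'p), the idea behind LEARNPN."""
    w, b = np.zeros(P.shape[0]), 0.0
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for p, t in zip(P.T, T):
            e = t - (1 if w @ p + b >= 0 else 0)
            scale = 1.0 / np.sqrt(1.0 + p @ p) if normalized else 1.0
            w, b = w + e * scale * p, b + e * scale
            errors += abs(e)
        if errors == 0:
            return w, b, epoch   # converged: a clean pass over the data
    return w, b, None

_, _, n_std = train(P, T, normalized=False)
w, b, n_pn = train(P, T, normalized=True)
print("standard rule epochs:", n_std, "| normalized rule epochs:", n_pn)
```

Normalizing shrinks the outlier's update to roughly the same size as the others, so one bad vector no longer drags the decision boundary around on every pass.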

Now SIM can be used to classify any other input vector. For example, classify an input vector of [0.7;
1.2].

A plot of this new point with the original training set shows how the network performs. To distinguish
it from the training set, color it red.

Coding

Figure

Turn on "hold" so the previous plot is not erased. Add the training set and the classification line to the
plot.

Coding
Figure

Finally, zoom into the area of interest.

The perceptron correctly classified our new point (in red) as category "zero" (represented by a circle)
and not a "one" (represented by a plus). The perceptron learns properly in much shorter time in spite
of the outlier (compare with the "Outlier Input Vectors" demo).

Coding

Figure

Linearly Non-Separable Vectors

A 2-input hard limit neuron fails to properly classify 5 input vectors because they are linearly non-separable.

Each of the five column vectors in P defines a 2-element input vector, and the row vector T defines the vectors' target categories. Plot these vectors with PLOTPV.

Coding

Figure

The perceptron must properly classify the input vectors in P into the categories defined by T. Because
the two kinds of input vectors cannot be separated by a straight line, the perceptron will not be able
to do it. NEWP creates a perceptron.

Coding

Add the neuron's initial attempt at classification to the plot. The initial weights are set to zero, so any
input gives the same output and the classification line does not even appear on the plot.

Coding

Figure

ADAPT returns a new network object that performs as a better classifier, the network output, and the error. This loop allows the network to adapt for 3 passes, plots the classification line, and stops after 25 iterations, since the error can never reach zero for linearly non-separable inputs.

Coding
Figure
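A Python/NumPy sketch shows why the capped loop never finishes cleanly: a pass with zero errors would mean the current line separates the classes, which is impossible here. The data values are assumed from the demo's plot:

```python
import numpy as np

# Five points that are NOT linearly separable (values assumed from the plot).
P = np.array([[-0.5, -0.5,  0.3, -0.1, -0.8],
              [-0.5,  0.5, -0.5,  1.0,  0.0]])
T = np.array([1, 1, 0, 0, 0])

w, b = np.zeros(2), 0.0
for _ in range(25):                      # stop after 25 passes
    errors = 0
    for p, t in zip(P.T, T):
        e = t - (1 if w @ p + b >= 0 else 0)
        w, b = w + e * p, b + e          # perceptron learning rule
        errors += abs(e)
print("misclassifications in final pass:", errors)  # never reaches 0
```

However long the cap is raised, some points stay misclassified; a single hard-limit neuron can only draw a straight line.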
