
Week 1

In Keras, the word Dense defines a layer of connected neurons.
Successive layers are defined in sequence, hence the word Sequential.
The loss function measures the error of each guess and hands it to the optimizer (here sgd, the
stochastic gradient descent algorithm), which uses it to make the next guess better than the previous one.
As the guesses become better and better and the accuracy increases towards 100%, the term
convergence is used.
Epochs = the number of times the training loop runs over the data.
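As a minimal sketch of these terms in code (the single-neuron model and the y = 2x - 1 data below are illustrative):

import numpy as np
import tensorflow as tf

# Sequential: layers are defined in sequence. Dense: a layer of
# connected neurons -- here a single neuron taking a single input.
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])

# loss measures the error of each guess; the sgd optimizer (stochastic
# gradient descent) uses it to make the next guess better.
model.compile(optimizer='sgd', loss='mean_squared_error')

# Illustrative data following the rule y = 2x - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# epochs = how many times the training loop runs over the data.
model.fit(xs, ys, epochs=500)

# As training converges, the prediction approaches 19.
print(model.predict(np.array([10.0])))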

Week 2

Fashion MNIST dataset – 70K images, 10 categories. Images are 28x28 greyscale, enough to train
a neural network.
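A sketch of loading the dataset with the built-in Keras loader:

import tensorflow as tf

# 70K images in total: 60K for training, 10K for testing.
(training_images, training_labels), (test_images, test_labels) = \
    tf.keras.datasets.fashion_mnist.load_data()
print(training_images.shape)  # (60000, 28, 28) greyscale pixel matrices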

The last layer has 10 neurons because we have 10 classes in the dataset. The Flatten layer takes the
28x28 matrix and converts it to a simple linear array (1-D array). The middle layer is a dense layer
with 128 neurons. These neurons can be thought of as variables of a function: there exists a
rule that converts the pixel values of the ankle boot image to the number 9, and training finds it.
ReLU effectively means "if X > 0 return X, else return 0" -- so it only passes values 0 or greater
to the next layer in the network. Softmax takes a set of values and effectively picks the biggest
one. For example, if the output of the last layer looks like
[0.1, 0.1, 0.05, 0.1, 9.5, 0.1, 0.05, 0.05, 0.05, 0.05], it saves you from fishing through it looking
for the biggest value and turns it into [0, 0, 0, 0, 1, 0, 0, 0, 0, 0] -- the goal is to save a lot of coding!
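A sketch of the model described above (the compile settings are the usual ones for this dataset, assumed here):

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 matrix -> 1-D array of 784 values
    tf.keras.layers.Dense(128, activation='relu'),    # middle layer: 128 neurons
    tf.keras.layers.Dense(10, activation='softmax')   # last layer: one neuron per class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(training_images / 255.0, training_labels, epochs=5) would then
# train it on the data loaded earlier.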
How can I stop training once I reach the point I want to be at?
This can be addressed by using callback functions.
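The notes refer to a code snippet that isn't reproduced here; below is a sketch of what such a callback can look like (the class name myCallback matches the usage further down, and the 0.4 loss threshold is an assumption):

import tensorflow as tf

class myCallback(tf.keras.callbacks.Callback):
    # Called at the end of every epoch; logs carries training information.
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('loss') < 0.4:   # assumed threshold
            print("\nLoss is low enough, cancelling training!")
            self.model.stop_training = True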
logs contains a set of information about the training, such as the current loss.
In the snippet above, if the loss drops below 0.4, the training is cancelled.

In the main code, we first instantiate the callback class and then pass it to model.fit:

callbacks = myCallback()
model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])

Week 3

Convolutional Neural Networks

In the previous neural network built for Fashion MNIST, a lot of the space in the images was
wasted. It would be useful to condense the image down to its important features. A convolution is
nothing but running a filter over the image, multiplying each pixel value of the image by the
corresponding pixel value in the filter and summing the results. The idea here is that some
convolutions change the image in such a way that certain features in the image get emphasized.
Pooling is a way of compressing an image. The simplest way of pooling is to go through the image
four pixels at a time, i.e. the current pixel and its neighbours underneath and to the right of it.
Of these four, keep just the biggest value. This preserves the features that were highlighted by the
convolution while simultaneously quartering the size of the image.
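As a sketch of both ideas in plain NumPy (the image and filter values are illustrative):

import numpy as np

# Illustrative 6x6 greyscale image: dark left half, bright right half.
image = np.zeros((6, 6))
image[:, 3:] = 255.0

# A 3x3 filter that emphasizes vertical edges.
filt = np.array([[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]], dtype=float)

# Convolution: run the filter over the image, multiplying the pixel values
# by the corresponding filter values and summing the products.
conv = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        conv[i, j] = np.sum(image[i:i+3, j:j+3] * filt)

# 2x2 max pooling: of each pixel and its neighbours underneath and to the
# right, keep just the biggest value -- quartering the image.
pooled = conv.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(conv)    # the vertical edge between the halves is emphasized
print(pooled)  # the emphasized edge survives at a quarter of the size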

Implementing convolutional layers
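The snippet the following lines describe is not reproduced in these notes; below is a sketch of the usual Keras stack it refers to (everything past the first convolution and pooling layers is assumed):

import tensorflow as tf

model = tf.keras.models.Sequential([
    # First line: 64 filters of 3x3, relu activation, input shape (28, 28, 1).
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu',
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),                # 2x2 max pooling
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),                         # then classify as before
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])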

First line – here the first convolution is performed. Keras will generate 64 filters of 3x3 with relu
activation. We specify the input shape as before, (28, 28, 1). The extra 1 means we are tallying
using a single byte for color depth, since the images used here are greyscale.
