
RAJ KUMAR GOEL INSTITUTE OF

TECHNOLOGY AND MANAGEMENT

TOPIC : DEEP LEARNING

Submitted by: Disha Singh (1533310023)
Submitted to: Vinay Sir (Asst. Prof.)
Outline
 Machine Learning basics
 Introduction to Deep Learning
 What is Deep Learning?
 Why is it useful?
 Neural networks
 Training
 Applications
 Limitations
 Future scope
Machine Learning Basics
Machine learning is a field of computer science that gives computers
the ability to learn without being explicitly programmed.

[Diagram: Training — labeled data is fed to a machine learning algorithm, which produces a learned model. Prediction — new data is fed to the learned model, which outputs a prediction.]
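The training/prediction pipeline above (labeled data → algorithm → learned model → prediction) can be sketched in plain Python. The nearest-centroid classifier used here is a hypothetical stand-in, since the slide names no specific algorithm:

```python
# Toy supervised pipeline: train() plays the role of the "machine
# learning algorithm", its return value is the "learned model",
# and predict() produces the "prediction" for new data.

def train(labeled_data):
    # labeled_data: list of (feature_vector, label) pairs
    by_label = {}
    for x, label in labeled_data:
        by_label.setdefault(label, []).append(x)
    # Learned model = mean feature vector (centroid) per class
    return {label: [sum(col) / len(col) for col in zip(*xs)]
            for label, xs in by_label.items()}

def predict(model, x):
    # Assign x to the class whose centroid is closest
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

data = [([1.0, 1.0], "class A"), ([1.2, 0.8], "class A"),
        ([5.0, 5.0], "class B"), ([4.8, 5.2], "class B")]
model = train(data)
print(predict(model, [1.1, 0.9]))  # class A
print(predict(model, [5.1, 4.9]))  # class B
```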
Types of Learning

Supervised, Unsupervised, and Reinforcement learning

[Diagram: supervised learning assigns labeled points to known classes (classification); unsupervised learning groups unlabeled points (clustering).]
ML vs. Deep Learning
Most traditional machine learning methods work well only because of human-designed
representations and input features; learning then reduces to optimizing weights
over those hand-crafted features to make the best final prediction.
What is Deep Learning (DL)?
Deep Learning is the field in which machines learn by themselves,
loosely imitating the human brain: "imitating" in the sense that the
machines can perform tasks that normally require human intelligence.

https://www.xenonstack.com/blog/static/public/uploads/media/machine-learning-vs-deep-learning.png
Now, let's understand how.

The human brain can easily differentiate between a cat and a dog.
But how can we make a machine differentiate between a cat and a dog?

We would feed the machine (a process called training) with a large
number of images of cats and dogs.
Why is DL useful?
o Manually designed features are often over-specified,
incomplete, and take a long time to design and
validate.

o Learned features are easy to adapt and fast to learn.

o Deep learning provides a very flexible, (almost?)
universal framework for representing information.

o It can learn in both unsupervised and supervised settings.

o It enables effective end-to-end joint system learning.

o It can utilize large amounts of training data.

Neural Network Intro
Weights and activation functions:

𝒉 = 𝝈(𝐖𝟏 𝒙 + 𝒃𝟏 )
𝒚 = 𝝈(𝑾𝟐 𝒉 + 𝒃𝟐 )

How do we train?

For a network with a 3-dimensional input 𝒙, a 4-unit hidden layer 𝒉, and a 2-dimensional output 𝒚:

 4 + 2 = 6 neurons (not counting inputs)
 [3 x 4] + [4 x 2] = 20 weights
 4 + 2 = 6 biases
 26 learnable parameters in total
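The two-layer network above can be sketched in plain Python (no ML library assumed); the dimensions here (3 inputs, 4 hidden units, 2 outputs) match the parameter count on the slide:

```python
import math

def sigmoid(z):
    # The activation function sigma applied to one scalar
    return 1.0 / (1.0 + math.exp(-z))

def linear(W, x, b):
    # Computes W x + b; W is a list of rows, x and b are vectors
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

def forward(x, W1, b1, W2, b2):
    # h = sigma(W1 x + b1), y = sigma(W2 h + b2)
    h = [sigmoid(z) for z in linear(W1, x, b1)]
    return [sigmoid(z) for z in linear(W2, h, b2)]

# 3 inputs -> 4 hidden units -> 2 outputs (weights set arbitrarily)
W1 = [[0.1] * 3 for _ in range(4)]   # 4 x 3 = 12 weights
b1 = [0.0] * 4                       # 4 biases
W2 = [[0.1] * 4 for _ in range(2)]   # 2 x 4 = 8 weights
b2 = [0.0] * 2                       # 2 biases

n_weights = sum(len(r) for r in W1) + sum(len(r) for r in W2)
n_biases = len(b1) + len(b2)
print(n_weights, n_biases, n_weights + n_biases)  # 20 6 26
print(forward([1.0, 2.0, 3.0], W1, b1, W2, b2))
```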
Training
1. Sample labeled data (a batch)
2. Forward it through the network to get predictions
3. Back-propagate the errors
4. Update the network weights

Optimize (minimize or maximize) the objective/cost function 𝑱(𝜽).

Generate an error signal that measures the difference
between predictions and target values.

Use the error signal to change the weights and get
more accurate predictions.

Subtracting a fraction of the gradient moves you
towards the (local) minimum of the cost function.
https://medium.com/@ramrajchandradevan/the-evolution-of-gradient-descend-optimization-algorithm-4106a6702d39
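The update rule above (subtract a fraction of the gradient of the cost 𝑱(𝜽)) can be illustrated with a minimal gradient-descent loop in plain Python. The toy task, fitting y = 2x + 1 with a single linear neuron and mean squared error, is an assumption chosen for illustration:

```python
# Toy task: learn y = 2*x + 1 with one linear neuron (w, b),
# minimising the mean squared error J(w, b) by gradient descent.
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

w, b = 0.0, 0.0   # parameters initialised at zero
lr = 0.1          # learning rate: the "fraction of the gradient"

for epoch in range(500):
    # Forward pass: predictions for the whole batch
    preds = [w * x + b for x, _ in data]
    # Error signal: difference between predictions and targets
    errs = [p - y for p, (_, y) in zip(preds, data)]
    # Gradients of J = mean(err^2) with respect to w and b
    grad_w = 2 * sum(e * x for e, (x, _) in zip(errs, data)) / len(data)
    grad_b = 2 * sum(errs) / len(data)
    # Update step: subtract a fraction of the gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```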
Applications
 Virtual assistants
 Translation
 Image recognition
 Voice recognition
 Video surveillance & diagnostics
 Data mining
 Facial recognition
 Personalized shopping & entertainment
 Pharmaceuticals
 Image colorization (B&W to color)
Some limitations:

 Deep learning is a tool, not a complete solution.

 It requires large labeled training sets to learn new concepts.

 The models that are learned lack interpretability.

Future Scope:

 Self-driving cars
 Conversational assistants and chat-bots
 Games
References
 http://web.stanford.edu/class/cs224n

 https://www.coursera.org/specializations/deep-learning

 https://chrisalbon.com/#Deep-Learning

 http://www.asimovinstitute.org/neural-network-zoo

 http://cs231n.github.io/optimization-2

 https://medium.com/@ramrajchandradevan/the-evolution-of-gradient-descend-optimization-algorithm-4106a6702d39

 https://arimo.com/data-science/2016/bayesian-optimization-hyperparameter-tuning

 http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow

 http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp
