
“Classifiers”

R & D project by
Aditya M Joshi
adityaj@cse.iitb.ac.in
IIT Bombay

Under the guidance of
Prof. Pushpak Bhattacharyya
pushpakbh@gmail.com
IIT Bombay
Overview
Introduction to Classification
What is classification?
A machine learning task that deals with identifying the class to which
an instance belongs

A classifier performs classification

Test instance: attributes (a1, a2, ... an)  ->  Classifier  ->  Discrete-valued class label

Examples:
• (Perceptive inputs) -> Steer? {Left, Straight, Right}
• (Age, Marital status, Health status, Salary) -> Issue loan? {Yes, No}
• (Textual features: Ngrams) -> Category of document? {Politics, Movies, Biology}
Classification learning

Training phase: learning the classifier from the available (labeled) data, the 'Training set'
Testing phase: testing how well the classifier performs, on the 'Testing set'
Generating datasets
• Methods:
– Holdout (2/3 for training, 1/3 for testing)
– Cross validation (n-fold)
• Divide the data into n parts
• Train on (n-1) parts, test on the remaining part
• Repeat for different combinations
– Bootstrapping
• Select random samples to form the training set
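
To make these splitting schemes concrete, here is a small sketch in Python; scikit-learn and NumPy are my choice of tools (the slides use Weka), and the placeholder data is illustrative only.

```python
# Sketch: holdout split, n-fold cross validation, and bootstrap sampling.
import numpy as np
from sklearn.model_selection import train_test_split, KFold

X = np.arange(30).reshape(15, 2)   # placeholder feature matrix (15 instances)
y = np.array([0, 1] * 7 + [0])     # placeholder labels

# Holdout: 2/3 training, 1/3 testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)

# n-fold cross validation: train on (n-1) parts, test on the remaining part
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    pass  # fit on X[train_idx], evaluate on X[test_idx]

# Bootstrapping: sample with replacement to form the training set
boot_idx = np.random.default_rng(0).integers(0, len(X), size=len(X))
X_boot, y_boot = X[boot_idx], y[boot_idx]
```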
Evaluating classifiers
• Outcome:
– Accuracy
– Confusion matrix
– If cost-sensitive, the expected cost of classification (attribute test cost + misclassification cost), etc.
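
As an illustration (not part of the slides), accuracy and a confusion matrix can be computed from predictions in a few lines of Python; the labels below are made up.

```python
# Sketch: accuracy and a confusion matrix from true vs. predicted labels.
from collections import Counter

def evaluate(true_labels, predicted_labels):
    accuracy = sum(t == p for t, p in zip(true_labels, predicted_labels)) / len(true_labels)
    confusion = Counter(zip(true_labels, predicted_labels))   # (actual, predicted) -> count
    return accuracy, confusion

acc, conf = evaluate(["yes", "no", "yes", "yes"], ["yes", "yes", "yes", "no"])
print(acc)                   # 0.5
print(conf[("yes", "no")])   # 1 instance of 'yes' misclassified as 'no'
```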
Decision Trees
Example tree
Intermediate nodes : Attributes

Edges : Attribute value tests

Leaf nodes : Class predictions

Example algorithms: ID3, C4.5, SPRINT, CART

Diagram from Han-Kamber


Decision Tree schematic
Training data set with attributes a1 ... a6; the root splits on the values X, Y, Z of the chosen attribute:
• X branch: impure node, select the best attribute and continue
• Y branch: impure node, select the best attribute and continue
• Z branch: pure node, leaf node: Class RED
Decision Tree Issues
How to determine the attribute for the split?
Alternatives (aimed at maximizing the gain / gain ratio):
1. Information Gain:
   Gain(A, S) = Entropy(S) - Σj (|Sj| / |S|) * Entropy(Sj)
   (a small sketch of this computation follows this slide)
2. Other options: gain ratio, etc.

How does the type of attribute affect the split?
• Discrete-valued: each branch corresponds to one value
• Continuous-valued: each branch may be a range of values
  (e.g. splits may be age < 30, 30 < age < 50, age > 50)

How to avoid overfitting?
Problem: the classifier performs well on training data, but fails to give good results on test data.
Example: a split on the primary key gives pure nodes and good accuracy on training, but not for testing.
Alternatives:
1. Pre-prune: halt construction at a certain level of the tree / level of purity
2. Post-prune: remove a node if the error rate remains the same without it; repeat the process for all nodes in the decision tree
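
A minimal sketch of the information-gain computation from the formula above; the toy data and function names are illustrative, not from the slides.

```python
# Sketch: Gain(A, S) = Entropy(S) - sum_j (|Sj|/|S|) * Entropy(Sj)
from collections import Counter
from math import log2

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def information_gain(rows, attribute, label_key="class"):
    labels = [r[label_key] for r in rows]
    gain = entropy(labels)
    for value in set(r[attribute] for r in rows):
        subset = [r[label_key] for r in rows if r[attribute] == value]
        gain -= (len(subset) / len(rows)) * entropy(subset)
    return gain

# Hypothetical toy data: does 'outlook' help predict 'class'?
data = [{"outlook": "sunny", "class": "no"},
        {"outlook": "sunny", "class": "no"},
        {"outlook": "rain",  "class": "yes"},
        {"outlook": "rain",  "class": "yes"}]
print(information_gain(data, "outlook"))  # 1.0: the split yields pure nodes
```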
Lazy learners
Lazy learners
•‘Lazy’: Do not create a model of the training instances in advance

• When an instance arrives for testing, the algorithm is run to get the class prediction

• Example: the K-nearest neighbour classifier


K-NN classifier
"One is known by the company one keeps"
K-NN classifier schematic
For a test instance,
1) Calculate distances from training pts.
2) Find K-nearest neighbours (say, K = 3)
3) Assign class label based on majority
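
A minimal sketch of the procedure above (K = 3, Euclidean distance); the toy points and function names are illustrative.

```python
# Sketch: a minimal K-NN classifier with majority vote among the K nearest neighbours.
from collections import Counter
from math import dist   # Python 3.8+: Euclidean distance between two points

def knn_predict(training, test_point, k=3):
    # training: list of (feature_vector, class_label) pairs
    neighbours = sorted(training, key=lambda ex: dist(ex[0], test_point))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]   # majority class among the K nearest

training = [((1.0, 1.0), "red"), ((1.2, 0.8), "red"),
            ((5.0, 5.0), "blue"), ((5.2, 4.9), "blue"), ((4.8, 5.1), "blue")]
print(knn_predict(training, (5.0, 4.8)))  # 'blue'
```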
K-NN classifier Issues
How good is it?
• Susceptible to noisy values
• Slow because of distance calculation

How to determine distances between values of categorical attributes?
Alternatives:
1. Boolean distance (1 if same, 0 if different)
2. Differential grading (e.g. for weather, 'drizzling' and 'rainy' are closer than 'rainy' and 'sunny')
Assign the distance to missing values as <max>.

How to determine the value of K?
Alternative: determine K experimentally; the K that gives the minimum error is selected.

How to make real-valued predictions?
Alternative: average the values returned by the K nearest neighbours.

Any other modifications?
Alternatives:
1. Weighted attributes to decide the final label
2. Alternate approaches: distances to representative points only; partial distance
3. K = 1 returns the class label of the nearest neighbour
Decision Lists
Decision Lists
• A sequence of boolean functions that lead
to a result

• if h1(y) = 1 then set f(y) = c1
  else if h2(y) = 1 then set f(y) = c2
  ....
  else set f(y) = cn

• Equivalently:
  f(y) = cj, if j = min { i | hi(y) = 1 } exists
         0,  otherwise
Decision List example

A test instance passes through a sequence of units (hi, ci); the first unit whose test hi fires outputs its class label ci.
Decision List learning
R S’ = S - Qk

( h k, 1 / 0 )

If
For each hi, (| Pi| - pn * | Ni |
Select hk, the feature >
Set of candidate Qi = Pi U Ni with |Ni| - pp *|Pi| )
feature functions highest utility
( hi = 1 ) then 1

else 0
U i = max { | Pi| - pn * | Ni | , |Ni| - pp *|Pi| }
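
A rough sketch of this greedy loop, assuming binary features and the utility Ui from the slide; the function names, penalty handling, and tie-breaking below are my own illustrative choices, not the slides' exact algorithm.

```python
# Rough sketch of a greedy decision-list learner (assumptions: binary features
# h(x) in {0, 1}; penalties pn, pp; tie-breaking and stopping cap are mine).
def learn_decision_list(examples, features, pn=1.0, pp=1.0, max_len=10):
    # examples: list of (x, label) pairs with label in {0, 1}
    # features: list of boolean feature functions h with h(x) in {0, 1}
    decision_list, remaining = [], list(examples)
    while remaining and len(decision_list) < max_len:
        best = None                                   # (utility, h, output label)
        for h in features:
            covered = [(x, y) for x, y in remaining if h(x) == 1]   # Q_i = P_i U N_i
            if not covered:
                continue
            P = sum(1 for _, y in covered if y == 1)
            N = len(covered) - P
            utility = max(P - pn * N, N - pp * P)     # U_i from the slide
            label = 1 if P - pn * N >= N - pp * P else 0
            if best is None or utility > best[0]:
                best = (utility, h, label)
        if best is None:
            break
        _, h_k, label = best
        decision_list.append((h_k, label))                          # unit (h_k, 1/0)
        remaining = [(x, y) for x, y in remaining if h_k(x) == 0]   # S' = S - Q_k
    return decision_list

# Tiny usage example with made-up boolean attributes:
features = [lambda x, a=a: int(x.get(a, 0)) for a in ("f1", "f2")]
examples = [({"f1": 1}, 1), ({"f2": 1}, 0), ({}, 0)]
print([lab for _, lab in learn_decision_list(examples, features)])  # [1, 0]
```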
Decision list Issues

What is the terminating condition?
1. Size of R (an upper threshold)
2. Qi = null
3. S' contains examples of the same class

Accuracy / Complexity tradeoff?
• Size of R: complexity (length of the list)
• S' containing examples of both classes: accuracy (purity)

Pruning?
hi is not required if:
1. ci = c(r+1)
2. There is no hj (j > i) such that Qi = Qj
Probabilistic
classifiers
Probabilistic classifiers : NB
• Based on Bayes rule
• Naïve Bayes : Conditional independence
assumption
Naïve Bayes Issues

How are different types of attributes handled?
1. Discrete-valued: P(X | Ci) is computed according to the formula
2. Continuous-valued: assume a Gaussian distribution; plug in the mean and variance for the attribute and assign the result to P(X | Ci)

Problems due to sparsity of data?
Problem: probabilities for some values may be zero
Solution: Laplace smoothing
For each attribute value, update the probability m/n to (m + 1) / (n + k),
where k = the number of values in the attribute's domain
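
A tiny sketch of the Laplace-smoothing update above; the numbers are illustrative.

```python
# Sketch: Laplace smoothing for a discrete attribute.
# m occurrences of the value out of n instances of the class, k values in the domain.
def smoothed_probability(m, n, k):
    return (m + 1) / (n + k)

# Hypothetical example: value never seen with this class (m = 0), n = 10, 3 possible values
print(smoothed_probability(0, 10, 3))   # ~0.077 instead of a zero probability
```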
Probabilistic classifiers : BBN
• Bayesian belief networks : Attributes ARE
dependent
• A directed acyclic graph and conditional probability tables
• An added term for the conditional probability between attributes

Diagram from Han-Kamber


BBN learning
(when network structure known)
• Input : Network topology of BBN
• Output : Calculate the entries in conditional
probability table
(when network structure not known)
• ???
Learning structure of BBN
• Use Naïve Bayes as a basis pattern
Class node 'Loan' with edges to the attributes: Age, Family status, Marital status

• Add edges as required


• Examples of algorithms: TAN, K2
Artificial
Neural Networks
Artificial Neural Networks

• Based on biological concept of neurons


• Structure of a fundamental unit of ANN:
Inputs x1 ... xn with weights w1 ... wn, plus a threshold weight w0
Output: activation function p(v), where
p(v) = sgn(w0 + w1x1 + ... + wnxn)
Perceptron learning algorithm
• Initialize values of weights
• Apply training instances and get output
• Update weights according to the perceptron update rule:
  wi <- wi + η (t - o) xi
  where η : learning rate, t : target output, o : observed output
• Repeat till convergence

• Can represent linearly separable functions only
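
A minimal sketch of the perceptron rule above on a linearly separable toy problem (logical AND); the data, learning rate, and epoch cap are illustrative.

```python
# Sketch: perceptron training with the update rule w_i <- w_i + eta * (t - o) * x_i.
def sgn(v):
    return 1 if v >= 0 else -1

data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]  # AND with -1/+1 targets
w = [0.0, 0.0, 0.0]   # w[0] is the threshold weight (bias)
eta = 0.1

for _ in range(20):                      # repeat till convergence (fixed cap here)
    for (x1, x2), t in data:
        o = sgn(w[0] + w[1] * x1 + w[2] * x2)
        for i, xi in enumerate((1, x1, x2)):
            w[i] += eta * (t - o) * xi   # update rule

print([sgn(w[0] + w[1] * x1 + w[2] * x2) for (x1, x2), _ in data])  # [-1, -1, -1, 1]
```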


Sigmoid perceptron

• Basis for multilayer feedforward networks


Multilayer feedforward networks
• Multilayer? Feedforward?

Input layer Hidden layer Output layer

Diagram from Han-Kamber


Backpropagation
• Apply training
instances as input
and produce output
• Update weights in the
‘reverse’ direction as
follows:
Δwji = η · δj · oi
δj = (tj - oj) · oj · (1 - oj)
Diagram from Han-Kamber
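
A small sketch of one backpropagation step for a single sigmoid output unit, using the formulas above; the numbers and variable names are illustrative.

```python
# Sketch: delta_j = (t_j - o_j) * o_j * (1 - o_j) and delta_w_ji = eta * delta_j * o_i
from math import exp

def sigmoid(v):
    return 1.0 / (1.0 + exp(-v))

eta = 0.5
o_i = [1.0, 0.4, 0.7]          # outputs of the previous layer, including a bias of 1
w_ji = [0.1, -0.2, 0.3]        # weights into output unit j
t_j = 1.0                      # target output

o_j = sigmoid(sum(w * o for w, o in zip(w_ji, o_i)))
delta_j = (t_j - o_j) * o_j * (1 - o_j)
w_ji = [w + eta * delta_j * o for w, o in zip(w_ji, o_i)]   # weight update
print(o_j, delta_j, w_ji)
```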
ANN Issues

Choosing the learning factor?
• A small learning factor means multiple iterations are required
• A large learning factor means the learner may skip the global minimum
• Addition of momentum (but why?)

What are the types of learning approaches?
• Deterministic: update weights after summing up errors over all examples
• Stochastic: update weights per example

Learning the structure of the network?
1. Construct a complete network
2. Prune using heuristics:
   • Remove edges with weights nearly zero
   • Remove edges if the removal does not affect accuracy
Support vector
machines
Support vector machines
• Basic idea: a "maximum separating-margin classifier"
  • Separating hyperplane: wx + b = 0
  • Margin between the +1 and -1 classes
  • Support vectors: the points that lie on the margin
SVM training
• Problem formulation:
  Minimize (1/2) ||w||²
  subject to yi (w xi + b) - 1 >= 0 for all i
• The Lagrangian multipliers are zero for data instances other than support vectors
• The formulation leads to terms Qkl involving the dot product of xk and xl
Focussing on dot product

• For non-linearly separable points, we plan to map them to a higher dimensional (and linearly separable) space
• The product can be time-consuming.
Therefore, we use kernel functions
Kernel functions

• Without having to know the non-linear mapping, apply a kernel function

• Reduces the number of computations required to generate the Qkl values
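
The slides do not fix a particular kernel; as an illustration of the idea (my choice), a degree-2 polynomial kernel K(x, z) = (x·z)² gives the same value as the dot product in an explicitly mapped space, so the mapping itself never has to be computed.

```python
# Illustration: K(x, z) = (x . z)^2 equals the dot product after the explicit
# mapping phi(x1, x2) = (x1^2, sqrt(2)*x1*x2, x2^2).
from math import sqrt, isclose

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def phi(x):                      # explicit mapping to 3 dimensions
    return (x[0] ** 2, sqrt(2) * x[0] * x[1], x[1] ** 2)

def poly_kernel(x, z):           # kernel trick: work in the original space
    return dot(x, z) ** 2

x, z = (1.0, 2.0), (3.0, 0.5)
print(isclose(dot(phi(x), phi(z)), poly_kernel(x, z)))   # True
```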
Testing SVM

Test instance -> SVM -> Class label
SVM Issues

What if n classes are to be predicted?
Problem: SVMs deal with two-class classification
Solution: have multiple SVMs, one for each class

SVMs are immune to the removal of non-support-vector points
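
A sketch of the multiple-SVMs idea above (one binary SVM per class; the class whose SVM gives the largest decision value wins), using scikit-learn rather than Weka; the toy data is illustrative.

```python
# Sketch: one-vs-rest handling of n classes with one binary SVM per class.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [10, 1]], dtype=float)
y = np.array(["politics", "politics", "movies", "movies", "biology", "biology"])

models = {}
for cls in np.unique(y):
    clf = SVC(kernel="linear")
    clf.fit(X, (y == cls).astype(int))      # binary problem: this class vs. the rest
    models[cls] = clf

test = np.array([[9.5, 0.5]])
scores = {cls: clf.decision_function(test)[0] for cls, clf in models.items()}
print(max(scores, key=scores.get))          # expected: 'biology'
```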


Combining
classifiers
Combining Classifiers
• ‘Ensemble’ learning
• Use a combination of models for prediction
– Bagging : Majority votes
– Boosting : Attention to the ‘weak’ instances
• Goal : An improved combined model
Bagging
• From the total training dataset D, draw samples D1 ... Dn at random (may use bootstrap sampling with replacement)
• The classifier learning scheme builds one classifier model Mi from each sample Di
• For each test-set instance, the models M1 ... Mn vote; a majority vote gives the class label
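
A small sketch of bagging with bootstrap samples and a majority vote; the base learner (a scikit-learn decision tree) and the toy data are my own illustrative choices.

```python
# Sketch: bagging = bootstrap samples, one model per sample, majority vote at test time.
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = np.array([[0], [1], [2], [3], [10], [11], [12], [13]], dtype=float)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

models = []
for _ in range(5):                                   # 5 bootstrap samples D1..D5
    idx = rng.integers(0, len(X), size=len(X))       # sample with replacement
    models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

test = np.array([[11.5]])
votes = Counter(int(m.predict(test)[0]) for m in models)
print(votes.most_common(1)[0][0])                    # majority vote -> 1
```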
Boosting (AdaBoost)
• Initialize the weights of the d training instances to 1/d
• Draw a sample Di from the training dataset D, with selection based on instance weights (may use bootstrap sampling with replacement)
• The classifier learning scheme builds a model Mi and its error on the weighted instances is computed (if error > 0.5, the sample is rejected and redrawn)
• Weights of correctly classified instances are multiplied by error / (1 - error), so misclassified instances get relatively more attention in the next round
• For a test-set instance, the models M1 ... Mn cast a weighted vote (based on their errors) to give the class label
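
A tiny sketch of the weight update above; the renormalization step and the numbers are standard details assumed by me, not spelled out on the slide.

```python
# Sketch: AdaBoost-style update - correctly classified instances have their weights
# multiplied by error / (1 - error), then all weights are renormalized.
def update_weights(weights, correct, error):
    new_w = [w * (error / (1 - error)) if ok else w for w, ok in zip(weights, correct)]
    total = sum(new_w)
    return [w / total for w in new_w]

d = 4
weights = [1 / d] * d                       # initialize weights to 1/d
correct = [True, True, False, True]         # which instances model Mi got right
print(update_weights(weights, correct, error=0.25))
# the misclassified instance ends up with a larger relative weight
```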
The last slice
Data preprocessing
• Attribute subset selection
– Select a subset of total attributes to reduce
complexity

• Dimensionality reduction
– Transform instances into ‘smaller’ instances
Attribute subset selection
• Information gain measure for attribute
selection in decision trees

• Stepwise forward / backward elimination of


attributes
Dimensionality reduction
• High dimensions (a large number of attributes per data instance) mean high computational complexity

• s = Wx, where x is an instance in p dimensions, W is a k x p transformation matrix, and s is the instance in k dimensions (k < p)
Principal Component Analysis

• Computes k orthonormal vectors: the principal components

• These essentially provide a new set of axes, in decreasing order of variance

• The eigenvector matrix is p x p; its first k eigenvectors form a k x p matrix that maps the p x n data to a k x n representation
Diagram from Han-Kamber
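
A minimal PCA sketch using NumPy only: the first k eigenvectors of the covariance matrix (in decreasing order of variance) form the k x p transformation W in s = Wx. The random data is illustrative.

```python
# Sketch: PCA via the eigenvectors of the covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 instances, p = 3 attributes
X = X - X.mean(axis=0)                        # centre the data

cov = np.cov(X, rowvar=False)                 # p x p covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]             # decreasing order of variance

k = 2
W = eigvecs[:, order[:k]].T                   # k x p: first k principal components
S = X @ W.T                                   # transformed instances, now k-dimensional
print(W.shape, S.shape)                       # (2, 3) (100, 2)
```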
Weka &
Weka Demo
Weka & Weka Demo
• Collection of ML algorithms
• Get it from :
http://www.cs.waikato.ac.nz/ml/weka/
• ARFF Format
• ‘Weka Explorer’
ARFF file format
@RELATION nursery Name of the relation

@ATTRIBUTE children numeric Attribute definition


@ATTRIBUTE housing {convenient, less_conv, critical}
@ATTRIBUTE finance {convenient, inconv}
@ATTRIBUTE social {nonprob, slightly_prob, problematic}
@ATTRIBUTE health {recommended, priority, not_recom}
@ATTRIBUTE pr_val
{recommend,priority,not_recom,very_recom,spec_prior}

@DATA Data instances : Comma separated, each on a new line


3,less_conv,convenient,slightly_prob,recommended,spec_prior
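
A minimal, illustrative reader for the ARFF snippet above; Weka itself, or libraries such as scipy.io.arff, handle the full format, and the parser below is only a sketch.

```python
# Sketch: a very small ARFF reader (names and simplifications are mine).
def read_arff(lines):
    attributes, data, in_data = [], [], False
    for line in lines:
        line = line.strip()
        if not line or line.startswith("%"):
            continue
        if line.upper().startswith("@ATTRIBUTE"):
            attributes.append(line.split()[1])          # attribute name
        elif line.upper().startswith("@DATA"):
            in_data = True
        elif in_data:
            data.append(dict(zip(attributes, line.split(","))))
    return attributes, data

arff = """@RELATION nursery
@ATTRIBUTE children numeric
@ATTRIBUTE health {recommended, priority, not_recom}
@DATA
3,recommended"""
print(read_arff(arff.splitlines())[1])   # [{'children': '3', 'health': 'recommended'}]
```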
Parts of weka

• Explorer : basic interface to run ML algorithms
• Experimenter : comparing experiments on different algorithms
• Knowledge Flow : similar to Work Flow; 'customized' to one's needs
Weka demo
Key References
• Data Mining: Concepts and Techniques; Han and Kamber; Morgan Kaufmann Publishers, 2006.

• Machine Learning; Tom Mitchell; McGraw Hill Publications.

• Data Mining: Practical Machine Learning Tools and Techniques; Witten and Frank; Morgan Kaufmann Publishers, 2005.
end of slideshow
Extra slides 1

Difference between decision lists and decision trees:


1. Lists are functions tested sequentially (more than one attribute at a time).
   Trees are attributes tested sequentially.

2. Lists may not require 'complete' coverage of the values of an attribute.
   In trees, all values of an attribute correspond to at least one branch of the attribute split.
Learning structure of BBN
• K2 Algorithm :
– Consider nodes in an order
– For each node, calculate utility to add an edge from
previous nodes to this one
• TAN :
– Use Naïve Bayes as the baseline network
– Add different edges to the network based on utility
• Examples of algorithms: TAN, K2
Delta rule

• The delta rule enables convergence to a best-fit approximation even if the points are not linearly separable

• Uses gradient descent to search the hypothesis space
