
Random Ferns for patch description

Pi19404
January 20, 2014

Contents

Random Ferns for patch description
0.1 Introduction
0.2 Implementation Details
0.3 Code

0.1 Introduction
• Let us consider the problem of object recognition, in which we have to decide whether a given image patch contains the desired object or not.
• One way to achieve this is to characterize the texture around key-points across images acquired under widely varying poses and lighting conditions. Due to its robustness to partial occlusions and its computational efficiency, recognition of image patches extracted around detected key-points is crucial for many vision problems.
• Another way to achieve this is to compute a statistical model for the object/patch and then estimate whether a given patch has a high probability of being sampled from the object model.
• Let us consider a mathematical representation of the patch to be classified by representing it in terms of a feature vector. Let f be the set of features computed over the patch.


\hat{c} = \arg\max_{c_i} P(C = c_i \mid f)    (1)

P(C = c_i \mid f) = \frac{P(f \mid C = c_i) \, P(C = c_i)}{P(f)}    (2)

Since P(f) does not depend on the class, and assuming a uniform prior P(C = c_i), this reduces to

\hat{c} = \arg\max_{c_i} P(f \mid C = c_i)    (3)

• For real-time applications we require features that can be computed and encoded easily. The concept of Local Binary Patterns has become popular for texture analysis and patch description due to its small memory and computational requirements.
• In the case of local binary features, each patch is described by a bit string. The value of each bit is derived by performing a binary test on the image patch. A minimal sketch is given below.
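As a minimal illustration (not the library's exact code), a single binary test on a grayscale patch compares the intensities at two pixel locations and emits one bit; the point pair here is an illustrative assumption, since in practice the locations are chosen at random when the ferns are initialized:

#include <opencv2/core.hpp>

// Minimal sketch of one binary intensity test: compare the pixel values at
// two locations inside a grayscale patch and encode the result as 0 or 1.
static int binaryTest(const cv::Mat &patch, cv::Point a, cv::Point b)
{
    return patch.at<uchar>(a) > patch.at<uchar>(b) ? 1 : 0;
}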




• The tests can in principle be any binary test that effectively encodes the information present in the patch as 0 or 1.
• Typically each binary feature is determined by performing intensity comparisons. The comparison can be between two different pixel locations, or between the image and a transformed image at a specified pixel location.
• A set of binary tests constitutes the feature vector.
• However, these features are very simple, and hence a large number of them may be required to describe a patch accurately.
• If we consider the full joint probability distribution we would need to store 2^N entries for each class. If instead we assume independence between features, as in the Naive Bayes model, we need to store only N entries; however, this completely ignores any correlation between features.
• We consider the randomized fern algorithm, which consists of randomly selecting S features out of a total pool of N features into a single group.
• All the features within a group are considered to be dependent, and a joint distribution is constructed over these features.
• Each group is called a fern, and for each fern f_k we can compute P(f_k | C = c_i). Thus we have K random classifiers, each providing the class-conditional probability P(f_k | C = c_i) for its feature vector f_k.
• Here each fern is constructed from a random subset of S features.

• Since each group contains S features and we are required to maintain a joint PDF, we require 2^S entries per fern. And since there are M groups, the total number of entries required is M · 2^S. A worked example follows below.
• F_k denotes the binary feature vector of fern k.
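To make the storage trade-off concrete, suppose (with illustrative numbers) we have N = 320 binary features grouped into M = 32 ferns of S = 10 features each. The full joint distribution would need 2^320 entries per class, and the Naive Bayes model needs only 320, while the fern model needs M · 2^S = 32 · 1024 = 32768 entries per class: a tractable middle ground that still captures the correlations within each fern.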

• The feature points used in the binary tests are selected at random; the grouping of feature points is also performed randomly.
• We will approximate the joint PDF using a histogram. Each fern consists of S binary features.




• Since the features are binary, the total number of bins in the joint histogram is 2^S.
• For example, for a fern with 2 features the histogram bins are indexed as 00, 01, 10, 11. Each bin can be addressed by an integer value, as in the sketch below.
• During the training phase we need to estimate the class-conditional probabilities, which requires the estimation of 2^S · M parameters.
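The mapping from a bit string to a bin index can be sketched as follows (a minimal standalone example, independent of the class in Section 0.3):

#include <vector>

// Pack S binary test outcomes (0/1) into a single integer bin index.
// For S = 2 the outcomes 00, 01, 10, 11 map to bins 0, 1, 2, 3.
int binIndex(const std::vector<int> &bits)
{
    int index = 0;
    for (int bit : bits)
        index = (index << 1) | bit; // shift left and append the new bit
    return index;
}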

An ensemble technique is used to combine the results of the K classifiers by averaging:

\hat{c} = \arg\max_{c_i} \frac{1}{K} \sum_{k=1}^{K} P(f_k \mid C = c_i)    (4)
• An important feature of this modelling scheme is that it does not require us to store the training data.
• We can also perform incremental learning, since we only have to update the counts in the statistical model.

0.2 Implementation Details


• The points where the features are computed are stored in a 2D vector of length numFerns × 2·numFeatures; each element is of type Point2f.
• As mentioned above, to maintain the joint PDF we have to maintain a joint histogram with 2^numFeatures bins for each of the numFerns groups. The data is stored in a single data structure of type std::vector.
• Three such vectors are maintained: the positive, the negative, and the posterior probability distribution.
• Each location in the joint PDF can be addressed using an integer data type, i.e. the feature vector extracted for each fern/group is an integer.
• Whenever we learn a positive/negative sample, we increment the count of the bin indexed by the integer-valued feature. Since the feature vector is of length numFerns, the vectors are indexed as i · numIndices + features[i], where numIndices = 2^numFeatures, i represents the fern, and features[i] represents the integer feature value. A sketch of this layout is given below.
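A minimal sketch of this flat layout (the sizes are illustrative assumptions; the names mirror those used in the code of Section 0.3):

#include <vector>

int main()
{
    const int numFerns = 10, numFeatures = 8;  // illustrative sizes
    const int numIndices = 1 << numFeatures;   // 2^numFeatures bins per fern
    // One flat array holds all fern histograms back to back:
    // fern i occupies the range [i*numIndices, (i+1)*numIndices).
    std::vector<float> positive(numFerns * numIndices, 0.0f);
    int i = 3, feature = 42;                   // example fern index and feature value
    positive[i * numIndices + feature] += 1;   // increment the matching bin
    return 0;
}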




• Thus, to maintain the histogram, the bin index corresponding to the decimal equivalent of the binary feature is incremented.
• The positive vector is updated upon learning a positive sample, and the negative vector upon learning a negative sample. The posterior probability vector is updated after either update and contains the confidence/probability that a binary feature belongs to a positive patch.
• Given a binary feature vector, to compute the probability that it represents a positive sample we simply look up the posterior probability P(F_k | C = c_i) of each group and compute the average over all groups k = 1 … numFerns.

0.3 Code

• The code for the same can be found in the git repo https://github.com/pi19404/OpenVision/ in the files ImgFeatures/randomferns.cpp and ImgFeatures/randomferns.h.
• Some important functions are provided below.
• RandomFerns is the main class of the file.

#include "randomferns.h"

// Given a feature vector of size M (the number of ferns), compute the output
// of the ensemble classifier as the average of the individual classifiers.
float RandomFerns::calcConfidence(vector<int> features)
{
    float conf = 0.0;
    for (int i = 0; i < features.size(); i++)
    {
        // posteriors holds the posterior probabilities of the joint PDFs;
        // i*_numIndices marks the start of the joint PDF of fern i
        conf = conf + posteriors[i * _numIndices + features[i]];
    }
    return conf / (features.size());
}

// Update the posterior probabilities after updating the class histograms.
void RandomFerns::updatePosterior(vector<int> features, bool class1, int amount)
{





    for (int i = 0; i < features.size(); i++)
    {
        int arrayIndex = (i * _numIndices) + features[i];
        // increment the positive or negative count of the indexed bin
        class1 ? positive[arrayIndex] += amount : negatives[arrayIndex] += amount;
        // update the posterior as the fraction of positive counts
        posteriors[arrayIndex] = ((float)positive[arrayIndex]) /
            ((float)positive[arrayIndex] + (float)negatives[arrayIndex]);
    }
    //writeToFile("/home/pi19404/config_oc.txt");
}

// Compute the locations of the points used in the binary tests of the ferns.
// The points lie at random locations in a rectangular ROI of size 1; they are
// selected such that adjacent points will most likely not lie in the same
// quadrant: both the location of a point within a quadrant and the quadrant
// itself are chosen at random.
void RandomFerns::init()
{
    points.resize(_numFerns);
    float toggle = 0;
    for (int i = 0; i < _numFerns; i++)
    {
        vector<Point2f> px = points[i];
        for (int j = 0; j < 2 * _numFeatures; j++)
        {
            Point2f p;
            // random point in the quadrant [0, 0.5) x [0, 0.5)
            p.x = ((float)std::rand()) / (float)RAND_MAX;
            p.y = ((float)std::rand()) / (float)RAND_MAX;
            p.x = p.x / 2;
            p.y = p.y / 2;
            // random value in [0, 1] selecting one of the four quadrants
            toggle = ((float)std::rand()) / (float)RAND_MAX;
            if (toggle < 0.25)
            {
                p.x = 0.5 - p.x;
                p.y = 0.5 - p.y;
            }
            else if (toggle < 0.5)
            {
                p.x = 0.5 + p.x;





                p.y = 0.5 + p.y;
            }
            else if (toggle < 0.75)
            {
                p.x = 0.5 - p.x;
                p.y = 0.5 + p.y;
            }
            else
            {
                p.x = 0.5 + p.x;
                p.y = 0.5 - p.y;
            }
            px.push_back(p);
        }
        points[i] = px;
    }
}

// Compute the fern features for a rectangular region of the image.
vector<int> RandomFerns::computeFeatures(const Rect r, const Mat image)
{

    vector<int> features;
    features.resize(0);
    Mat roi = image(r);
    for (int i = 0; i < points.size(); i++)
    {
        int index = 0;
        vector<Point2f> pp = points[i];
        // points are consumed in pairs; each pair defines one binary test
        for (int j = 0; j < pp.size(); j = j + 2)
        {
            index <<= 1;





            // scale the normalized points to the dimensions of the ROI
            Point2f p = pp[j];
            Point2f p1 = pp[j + 1];
            // Mat::at takes (row, col), i.e. (y, x)
            uchar val1 = roi.at<uchar>((int)(p.y * r.height), (int)(p.x * r.width));
            uchar val2 = roi.at<uchar>((int)(p1.y * r.height), (int)(p1.x * r.width));
            if ((int)val1 > (int)val2)
            {
                index |= 1;
            }
        }
        features.push_back(index);
    }
    return features;
}
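Putting the pieces together, a hypothetical end-to-end usage of the class might look like the sketch below. Only init, computeFeatures, updatePosterior, and calcConfidence appear in the excerpt above; the default construction is an assumption.

#include <opencv2/core.hpp>
#include "randomferns.h"

// Hypothetical usage sketch; the constructor is an assumption, the four
// member functions are the ones shown above.
void example(const cv::Mat &image, const cv::Rect &objectBox)
{
    RandomFerns ferns;                 // construction details not shown in the excerpt
    ferns.init();                      // choose random point pairs for the binary tests

    // Learn one positive sample: extract fern features and update the counts.
    std::vector<int> f = ferns.computeFeatures(objectBox, image);
    ferns.updatePosterior(f, true, 1); // true => positive class, weight 1

    // Score a candidate region by averaging the fern posteriors.
    cv::Rect candidate(0, 0, objectBox.width, objectBox.height);
    std::vector<int> g = ferns.computeFeatures(candidate, image);
    float confidence = ferns.calcConfidence(g);
    (void)confidence;                  // e.g. threshold at 0.5 to accept the detection
}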


