References
http://vis-www.cs.umass.edu/lfw/index.html
http://www.decom.ufop.br/menotti/rp122/sem/
http://faculty.ucmerced.edu/mhyang/course/cse185/lectures/face_detection.ppt
INTRODUCTION
Several algorithms have been developed for face detection; however, they remain
difficult to compare due to the lack of sufficient detail to reproduce the
published results.
This paper presents a new data set of face images with more faces and more
accurate annotations for face regions. It also proposes two rigorous and precise
methods for evaluating the performance of face detection algorithms. Finally,
it reports results of several standard algorithms on the new benchmark.
Near-duplicate detection
Evaluation
A detection corresponds to a contiguous image region.
Any post-processing required to merge overlapping detections has already
been done (one common merging step is sketched below).
Each detection corresponds to exactly one entire face.
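The merging assumption is usually satisfied by a post-processing step such as
non-maximum suppression. A minimal sketch of one common way to merge overlapping
boxes (this is only an illustration; the benchmark does not prescribe a
particular merging method):

    def iou(a, b):
        # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / float(union)

    def merge_overlapping(boxes, scores, iou_threshold=0.5):
        # Greedy non-maximum suppression: keep the highest-scoring box and
        # drop any remaining box that overlaps a kept box too strongly.
        order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
        kept = []
        for i in order:
            if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in kept):
                kept.append(i)
        return [boxes[i] for i in kept], [scores[i] for i in kept]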
The determination of the minimum weight matching in a weighted bipartite graph has an
equivalent dual formulation as finding the solution of the minimum weighted (vertex) cover
problem on a related graph.
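In practice the detection-to-annotation pairing can be computed with a
Hungarian-algorithm solver for the assignment problem. A minimal sketch using
SciPy (my own choice of tool; the benchmark ships its own evaluation code),
taking the overlap ratios as edge weights and negating them so that the
minimum-cost solution maximises total overlap:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_detections(overlap):
        # overlap: (n_detections x n_annotations) matrix of overlap ratios.
        # linear_sum_assignment solves the assignment problem; negating the
        # weights turns the minimum-cost matching into a maximum-overlap one.
        det_idx, ann_idx = linear_sum_assignment(-overlap)
        return [(int(i), int(j)) for i, j in zip(det_idx, ann_idx)]

    # Toy example: three detections against two annotated faces.
    overlap = np.array([[0.80, 0.05],
                        [0.10, 0.65],
                        [0.02, 0.40]])
    print(match_detections(overlap))   # [(0, 0), (1, 1)]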
Evaluation metrics
Experimental Setup
10-fold cross-validation
A 10-fold cross-validation is performed using a fixed partitioning of the
data set into ten folds (a sketch of iterating over this fixed partition is
given below).
Unrestricted training
Data outside the FDDB data set is permitted to be included in the
training set.
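A sketch of iterating over the fixed partition follows. The fold-list file
names (FDDB-fold-01.txt to FDDB-fold-10.txt) are my assumption about how the
partition is distributed, not something stated on these slides:

    import os

    def load_fixed_folds(folds_dir, n_folds=10):
        # Read the fixed partition: one list of image names per fold.
        folds = []
        for k in range(1, n_folds + 1):
            path = os.path.join(folds_dir, "FDDB-fold-%02d.txt" % k)
            with open(path) as f:
                folds.append([line.strip() for line in f if line.strip()])
        return folds

    def run_benchmark(folds, detect):
        # Run a detector on every image of every fold, keeping the results
        # grouped by fold so they can be averaged or pooled afterwards.
        return [[(image, detect(image)) for image in fold] for fold in folds]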
Benchmark
The red one uses primary_up X4 of the Dlib library. The black one uses
primary_up X2 of the Dlib library (the setting we use). The blue one does
not use primary_up.
detectContROC
detectDiscROC
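These curves compare the same Dlib frontal face detector run with different
amounts of input upsampling; I read "primary_up X2" and "X4" as upsampling the
image roughly two or four times before detection, which is an assumption on my
part. A minimal sketch with Dlib's Python API:

    import dlib

    detector = dlib.get_frontal_face_detector()
    img = dlib.load_rgb_image("example.jpg")   # hypothetical input image

    # The second argument is the number of times the image is upsampled
    # before scanning; more upsampling finds smaller faces but is slower.
    no_upsampling = detector(img, 0)   # "not use primary_up"
    upsampled_x2  = detector(img, 1)   # one upsampling step (about 2x)
    upsampled_x4  = detector(img, 2)   # two upsampling steps (about 4x)

    for det in upsampled_x2:
        print(det.left(), det.top(), det.right(), det.bottom())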
The red one uses primary_up X4, the black one uses primary_up X2, and the blue
one does not use primary_up. The primary_up X4 curve changes only slightly.
detectDiscROC
continuous ROC
discrete ROC
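The continuous and discrete curves differ only in how a matched detection is
scored. A small sketch, assuming the usual FDDB convention that the continuous
score of a matched pair is its overlap ratio, while the discrete score counts
the pair as a true positive only when the overlap exceeds 0.5:

    def continuous_score(overlap):
        # Continuous ROC: each matched detection contributes its overlap ratio.
        return overlap

    def discrete_score(overlap, threshold=0.5):
        # Discrete ROC: a matched detection counts as one true positive only
        # if its overlap with the annotation exceeds the threshold.
        return 1.0 if overlap > threshold else 0.0

    matched_overlaps = [0.82, 0.55, 0.31]   # example overlap ratios
    print(sum(continuous_score(o) for o in matched_overlaps))   # ~1.68
    print(sum(discrete_score(o) for o in matched_overlaps))     # 2.0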
I found that FDDB uses 10-fold cross-validation in its evaluation, so I looked
up the following information.
k-fold cross-validation [1]
In k-fold cross-validation, the original sample is randomly partitioned into
k equal sized subsamples. Of the k subsamples, a single subsample is retained
as the validation data for testing the model, and the remaining k - 1
subsamples are used as training data. The cross-validation process is then
repeated k times (the folds), with each of the k subsamples used exactly once
as the validation data. The k results from the folds can then be averaged (or
otherwise combined) to produce a single estimation. The advantage of this
method over repeated random sub-sampling is that all observations are used for
both training and validation, and each observation is used for validation
exactly once. 10-fold cross-validation is commonly used, but in general k
remains an unfixed parameter.
[1] https://en.wikipedia.org/wiki/Cross-validation_(statistics)
https://wipawanblog.files.wordpress.com/2013/08/lab_datamining.pdf
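The quoted procedure can be written down directly. A minimal sketch of a
random k-fold split (illustrative only; the benchmark itself uses a fixed
partition rather than a fresh random one):

    import random

    def k_fold_indices(n_samples, k=10, seed=0):
        # Randomly partition the sample indices into k (nearly) equal folds
        # and yield (train_indices, validation_indices) once per fold.
        idx = list(range(n_samples))
        random.Random(seed).shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for i in range(k):
            validation = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, validation

    # Every sample is used for validation exactly once.
    for train, validation in k_fold_indices(25, k=5):
        print(len(train), len(validation))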
http://image.slidesharecdn.com/processminingchapter03datamining-110510153206phpapp02/95/process-mining-chapter-3-data-mining-29-728.jpg?cb=1305044621