Face Recognition using PCA-based Eigenfaces
by
Sourav Gupta
Hitesh Tikmani
Dipanjan Das
Mayank Shekhar
Manish Chakraborty
Overview
Introduction
Objectives
Problem Statement
Principal Component Analysis
Methodology
Flowchart
Data Flow Diagram
Results
Discussion
Conclusion
Future Scope
References
Introduction
Access Control
Entertainment
Smart Cards
Information Security
Law Enforcement & Surveillance
Objective
Problem Statement
Training Set
EigenFace
Weight Vector
For recognition, the weights of the largest eigenfaces are first
calculated from the training faces. When a new face image is to be
recognized, we calculate its weights with respect to the eigenfaces;
these weights linearly approximate the face and can be used to
reconstruct it. The weights are then compared with the weights of the
known face images, so that the input can be recognized as either a
known face or an unknown face.
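The weight computation above can be sketched as a projection onto the eigenface basis. This is a minimal NumPy sketch (the project itself is written in MATLAB); the dimensions and the random stand-in data are assumptions for illustration.

```python
import numpy as np

# Toy dimensions: N pixels per image, K retained eigenfaces (assumed values).
rng = np.random.default_rng(0)
N, K = 100, 5

# Stand-in orthonormal eigenfaces (columns) and a mean face; in the real
# pipeline these come from PCA on the training set.
eigenfaces, _ = np.linalg.qr(rng.standard_normal((N, K)))
mean_face = rng.standard_normal(N)

face = rng.standard_normal(N)  # a new face image, flattened to a vector

# Weight vector: projection of the mean-subtracted face onto each eigenface.
weights = eigenfaces.T @ (face - mean_face)  # shape (K,)

# The weights linearly approximate the face and can reconstruct it.
reconstruction = mean_face + eigenfaces @ weights
```

Because the eigenface columns are orthonormal, projecting the reconstruction back onto them recovers the same weight vector.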
Euclidean Distance
The Euclidean distance (or Euclidean metric) is the ordinary distance
between two points that one would measure with a ruler, and it is
given by the Pythagorean formula. With this distance, Euclidean space
becomes a metric space. The Euclidean distance between points p and q
is the length of the line segment connecting them. Here we use the
Euclidean distance to compare the training faces with the input face:
we calculate the distance between the input image's weights and those
of each training face. The known face is the one with the minimum
Euclidean distance; the larger the distance, the less likely the
match. The input face is assigned to a class if its minimum Euclidean
distance is below a threshold, in which case the face image is
considered a known face. If the distance is above the threshold, the
face is classified as unknown.
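The threshold rule above can be sketched as follows. This is an illustrative NumPy example, not the project's MATLAB code; the weight vectors and the threshold value are made up for the sketch (the project reports a threshold of 1.2200e+04 for its full training set).

```python
import numpy as np

# Assumed values for illustration only.
THRESHOLD = 10.0
train_weights = np.array([[0.0, 0.0],    # weight vectors of three known faces
                          [5.0, 5.0],    # (K = 2 eigenfaces, for brevity)
                          [20.0, 20.0]])
new_weights = np.array([4.0, 5.0])       # weight vector of the input face

# Euclidean distance from the input to every training face.
distances = np.linalg.norm(train_weights - new_weights, axis=1)
k = distances.min()

# Known face: minimum distance below the threshold; otherwise unknown.
label = int(distances.argmin()) if k < THRESHOLD else None
```

Here the input is closest to training face 1 at distance 1.0, which is below the threshold, so it is recognized as that known face.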
Methodology
1. Loading Database
- The database used in this project is the AT&T Database of Faces.
- The database contains 400 images of 40 different persons, i.e. 10
images per person.
- For some subjects, the images were taken at different times, varying
the lighting, facial expressions (open / closed eyes, smiling / not
smiling) and facial details (glasses / no glasses).
- For later test cases the database is reduced so that only 3 images
per person are kept, and in another test case 5 per person.
- No extra code is needed to locate the database: the images are
stored in the same folder as the project's MATLAB source file, which
saves us another block of code.
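The loading step can be sketched in Python/NumPy (the project itself uses MATLAB). The minimal PGM reader/writer and the tiny synthetic stand-in images below are assumptions so the example is self-contained; the real AT&T images are 92x112 PGM files.

```python
import glob
import os
import tempfile

import numpy as np

H, W = 112, 92  # AT&T image size (rows x columns)

def write_pgm(path, img):
    # Minimal binary PGM (P5) writer, used only to create stand-in files.
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (img.shape[1], img.shape[0]))
        f.write(img.astype(np.uint8).tobytes())

def read_pgm(path):
    # Minimal binary PGM (P5) reader (no comment-line handling).
    with open(path, "rb") as f:
        assert f.readline().strip() == b"P5"
        w, h = map(int, f.readline().split())
        f.readline()  # max grey value, ignored
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(h, w)

# Create a tiny synthetic "database" folder (stand-in for the 400 images).
folder = tempfile.mkdtemp()
rng = np.random.default_rng(0)
for i in range(6):
    write_pgm(os.path.join(folder, "s%d.pgm" % i),
              rng.integers(0, 256, (H, W)))

# Load every image in the folder into one matrix, one column per image.
paths = sorted(glob.glob(os.path.join(folder, "*.pgm")))
T = np.column_stack([read_pgm(p).ravel().astype(np.float64) for p in paths])
```

With the real database, pointing `folder` at the script's own directory reproduces the "no extra loading code" setup described above.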
Figure: the project's training set, shown with one image per person.
2. Normalization of Images
Here we change the mean and standard deviation of all
images, i.e. we normalize every image.
This is done to reduce the error due to
varying lighting conditions.
The transpose of each image is taken to
decrease the dimensionality of the data, and the
images are converted into column vectors, as
shown in the figure.
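The normalization step can be sketched as rescaling every image column to a common mean and standard deviation. This is a NumPy sketch (the project uses MATLAB), and the target mean/std values and matrix sizes are assumptions for illustration.

```python
import numpy as np

# Stand-in data matrix: one flattened image per column (toy sizes).
rng = np.random.default_rng(0)
T = rng.uniform(0, 255, (10304, 6))

um, ustd = 100.0, 80.0        # assumed target mean and standard deviation
m = T.mean(axis=0)            # per-image mean
s = T.std(axis=0)             # per-image standard deviation

# Rescale so every image column has mean um and std ustd,
# reducing differences caused by lighting conditions.
T_norm = (T - m) / s * ustd + um
```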
3. Calculation of Mean
After the images are normalized, the
features common to all images are retrieved
as the mean face. Once the mean face is
calculated, it is subtracted from each and
every image in the database, so that each
image retains only its unique features.
Imagine you merged all the image columns into
one column: you add all the image
columns pixel by pixel (row by row), and
then divide by M, the total number of images. The
result is a single column containing the
mean pixel values. This is what we call the
mean face.
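The mean-face computation above can be sketched directly: average the M image columns pixel by pixel, then subtract the result from every column. The matrix sizes here are toy values for illustration; the sketch is in NumPy rather than the project's MATLAB.

```python
import numpy as np

# Stand-in normalized data matrix: one image per column, M images in total.
rng = np.random.default_rng(0)
M = 6
T = rng.uniform(0, 255, (10304, M))

# Mean face: sum the columns pixel-by-pixel and divide by M,
# giving a single column of mean pixel values.
mean_face = T.mean(axis=1, keepdims=True)

# Subtract the mean face so each column keeps only its unique features.
A = T - mean_face
```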
RESULTS
M is the number of face images in the training set.
1. Training set with 400 images: 40 persons with 10 images each.
In total M = 400, threshold = 1.2200e+04.
* The input image is one of the images in the database itself.
Discussion
Conclusion
The experiment was done in a short period of time, and only one
algorithm was analyzed in this paper, so the results allow only rough
generalization. Since many other issues were ignored to simplify the
research scope, this generalization may not be entirely relevant to a
real-life dataset.
Future Scope
References
https://www.cs.princeton.edu/picasso/mats/PCA-TutorialIntuition_jp.pdf
IOSR Journal of Engineering, e-ISSN: 2250-3021, p-ISSN: 2278-8719, www.iosrphr.org, Vol. 2, Issue 12 (Dec. 2012), V4, pp. 15-23