
AUTOMATED ATTENDANCE SYSTEM BASED ON

FACIAL RECOGNITION

A PROJECT REPORT
Submitted to

VISVESVARAYA TECHNOLOGICAL UNIVERSITY


Jnana Sangama, BELAGAVI- 590018

By
Rakshitha USN: 4MW12EC059
S R Dhanush USN: 4MW12EC065
Shreeraksha Shetty USN: 4MW12EC075
Sushmitha USN: 4MW12EC088

Under the guidance of


Mr. Chetan R
Assistant Professor, Dept. of Electronics & Communication Engineering
In partial fulfillment of the requirements for the award of the degree of
Bachelor of Engineering

Department of Electronics & Communication Engineering


SHRI MADHWA VADIRAJA INSTITUTE OF TECHNOLOGY AND MANAGEMENT
Vishwothama Nagar, BANTAKAL – 574115, Udupi District
SHRI MADHWA VADIRAJA INSTITUTE OF TECHNOLOGY AND MANAGEMENT
(A Unit of Shri Sode Vadiraja Mutt Education Trust ®, Udupi)
Vishwothama Nagar, BANTAKAL – 574 115, Udupi District, Karnataka, INDIA

Department of Electronics & Communication Engineering

CERTIFICATE

Certified that the Project Work titled ‘AUTOMATED ATTENDANCE SYSTEM BASED ON FACIAL RECOGNITION’ is carried out by:

Ms. RAKSHITHA USN: 4MW12EC059


Mr. S R DHANUSH USN: 4MW12EC065
Ms. SHREERAKSHA SHETTY USN: 4MW12EC075
Ms. SUSHMITHA USN: 4MW12EC088

are bonafide students of Shri Madhwa Vadiraja Institute of Technology and Management, in partial fulfillment for the award of the degree of Bachelor of Engineering in Electronics
& Communication Engineering of Visvesvaraya Technological University, Belagavi
during the year 2015-16. It is certified that all the corrections / suggestions indicated during
Internal Assessment have been incorporated in the report. The report has been approved as
it satisfies the academic requirements in respect of Project Work prescribed for the said
degree.

Mr. Chetan R, Asst. Professor & Guide, Dept. of E&C Engineering
Dr. Thirumaleshwara Bhat, Professor & Principal, SMVITM, Bantakal
Dr. Balachandra Achar, Professor & HOD, Dept. of E&C Engineering
Signature with date and seal:

External Viva
Name of the Examiners: Signature with Date
1.
2.
ABSTRACT
________________________________________________
Educational institutions are increasingly concerned about the regularity of student attendance, mainly because a student’s overall academic performance is affected by his or her attendance at the institute. There are two conventional methods of marking attendance: calling out the roll call or collecting students’ signatures on paper. Both are time consuming and cumbersome. Hence, there is a need for a computer-based student attendance management system that assists the faculty in maintaining attendance records automatically.

In this project we have implemented an automated attendance system using MATLAB. Our idea, an “Automated Attendance System Based on Facial Recognition”, has a wide range of applications. The system identifies students by their faces, which saves time and eliminates the possibility of proxy attendance. Hence, this system can be deployed in any field where attendance plays an important role.

The system is designed on the MATLAB platform. The proposed system uses the Principal Component Analysis (PCA) algorithm, which is based on the eigenface approach. The algorithm compares the test image with the training images and determines which students are present and which are absent. The attendance record is maintained in an Excel sheet which is updated automatically by the system.

ACKNOWLEDGEMENT
________________________________________________
It is our pleasure to express our heartfelt thanks to Mr. Chetan R, Assistant Professor,
Department of Electronics and Communication Engineering, for his supervision and
guidance which enabled us to understand and develop this project.

We are indebted to Prof. Dr. Thirumaleshwara Bhat, Principal, Prof. Dr. A Ganesha,
Dean (Academics) and Prof. Dr. H. V. Balachandra Achar, Head of the Department, for
their advice and suggestions at various stages of the work.

Special thanks go to the Management of Shri Madhwa Vadiraja Institute of Technology and Management, Bantakal, Udupi for providing us with a good study environment and laboratory facilities. We also appreciate the support and help rendered by the teaching and non-teaching staff of Electronics and Communication Engineering.

Lastly, we take this opportunity to offer our regards to all of those who have supported us
directly or indirectly in the successful completion of this project work.

Rakshitha USN 4MW12EC059


S R Dhanush USN 4MW12EC065
Shreeraksha Shetty USN 4MW12EC075
Sushmitha USN 4MW12EC088

TABLE OF CONTENTS
________________________________________________
ABSTRACT ......................................................................................................................... i
ACKNOWLEDGEMENT................................................................................................. ii
TABLE OF CONTENTS ..................................................................................................iii
TABLE OF FIGURES....................................................................................................... v
1. INTRODUCTION...................................................................................................... 1
1.1 Motivation and Theoretical overview ................................................................... 1
1.2 Problem Statement ................................................................................................ 1
1.3 Research objective................................................................................................. 2
1.4 Scope of study ....................................................................................................... 2
1.5 Research Methodology.......................................................................................... 2
1.6 Organization of Report .......................................................................................... 3
2. LITERATURE SURVEY.......................................................................................... 4
3. FACE RECOGNITION ............................................................................................ 6
3.1 PCA Approach to Face Recognition .......................................................................... 6
3.2 Viola- Jones algorithm for face detection .................................................................. 9
4. PROPOSED SYSTEM ............................................................................................ 14
4.1 Block diagram .......................................................................................................... 14
5. MATLAB .................................................................................................................. 16
5.1 Image Processing Toolbox....................................................................................... 17
5.2 Computer Vision Toolbox ....................................................................................... 17
5.3 Image Acquisition Toolbox ..................................................................................... 18
5.4 Spreadsheet Link...................................................................................................... 19
6. SYSTEM IMPLEMENTATION ............................................................................ 22
6.1 System Pre- Requisites ............................................................................................. 23
6.2 Image Processing ..................................................................................................... 23
6.3 Update the Attendance sheet in Excel...................................................................... 25
6.4 MATLAB Functions created ................................................................................... 25
7. RESULTS AND ANALYSIS .................................................................................. 27
7.5 Graphical user interface ........................................................... 30
8. ADVANTAGES AND APPLICATIONS ............................................................... 32
9. FUTURE SCOPE ..................................................................................................... 34

10. CONCLUSION ..................................................................................................... 35
REFERENCES................................................................................................................. 36

TABLE OF FIGURES
________________________________________________
Figure 3-1 The integral image.............................................................................................. 9
Figure 3-2 Sum calculation .................................................................................................. 9
Figure 3-3 The different types of features ......................................................................... 10
Figure 3-4 The modified AdaBoost algorithm................................................................... 11
Figure 3-5 The cascade classifier....................................................................................... 12
Figure 4-1 Proposed system ............................................................................................... 14
Figure 5-1 Image Acquisition toolbox components........................................................... 19
Figure 5-2 Spreadsheet Link EX toolbox .......................................................................... 20
Figure 6-1 System Flowchart ............................................................................................. 22
Figure 6-2 Image processing procedure............................................................................. 22
Figure 7-1 Collected set of training images ....................................................................... 27
Figure 7-2 Capturing of the classroom image and face detection ..................................... 28
Figure 7-3 A student is recognized and appropriate message is displayed ........................ 29
Figure 7-4 Output obtained in the excel format (.xlsx) ..................................................... 30


Chapter 1

INTRODUCTION

1.1 Motivation and Theoretical overview

In recent years, image processing, which deals with extracting useful information from a digital image, has played a unique role amid rapid technological advancement. It focuses on two tasks:
• Improvement of pictorial information for human interpretation
• Processing of image data for storage, transmission and representation for
autonomous machine perception.
With the advent of smartphones and closed-circuit television, people use image capturing devices more than ever before. Since the applications of image processing are vast, extensive work and research have been carried out to utilize its potential and to build new, innovative applications.
Facial recognition is one of the earliest applications derived from this technology and is among the most foolproof methods of identifying a person. The face is a typical multidimensional structure and needs good computational analysis for recognition. Biometric methods have been used for the same purpose for a long time now; although effective, they are still not completely reliable for identifying a person.

1.2 Problem Statement


Attendance records of every student are maintained by every school, college and university. Empirical evidence has shown that there is a significant correlation between students’ attendance and their academic performance. It has also been claimed that students with poor attendance records generally show poor retention. Therefore, faculty have to maintain proper attendance records.
The manual attendance record system is inefficient and requires considerable time to organize records and to calculate the average attendance of each student. Hence there is a need for a system that solves the problems of record organization and average attendance calculation. Facial recognition provides one alternative for making the student attendance system automatic.


1.3 Research objective


Face recognition can be applied to a wide variety of problems such as image and film processing, human-computer interaction and criminal identification. This has motivated researchers to develop computational models to identify faces that are relatively simple and easy to implement. Existing systems represent the face space with high dimensionality, which is not effective. The important observation is that although face images have high dimensionality, in reality they span a much lower dimensional space. So instead of considering the whole high-dimensional face space, it is better to consider only a lower dimensional subspace to represent it. The goal is to implement a system (model) that recognizes a particular face and distinguishes it from a large number of stored faces, with some real-time variations as well. The eigenface approach uses the Principal Component Analysis (PCA) algorithm for the recognition of images; it gives an efficient way to find the lower dimensional space.

1.4 Scope of study


This includes
 Face recognition algorithms.

 Image processing using MATLAB.

 Use of MATLAB toolbox such as Image acquisition toolbox and computer vision
toolbox.

 Accessing MS Excel spreadsheet in MATLAB using Spreadsheet Link EX

1.5 Research Methodology

Here we develop a system to mark attendance automatically using image processing techniques. An efficient face recognition algorithm has to be developed that can recognize students reliably. We also need an effective platform on which to implement and test the algorithm. MATLAB provides a rich set of libraries and toolboxes for image processing, along with a user-friendly interface to define functions and create graphical user interfaces.


1.6 Organization of Report


Chapter 2 surveys the related literature and chapter 3 describes the face recognition and face detection algorithms used. The proposed system is explained in chapter 4. A brief discussion of the MATLAB environment is given in chapter 5. System implementation is described in chapter 6. The practical aspects of the project, i.e., the actual results and analysis, are presented along with screenshots of the results obtained in chapter 7.


Chapter 2

LITERATURE SURVEY

For our project we got motivation by the research carried out by the following people and
their published papers:

In “Eigenfaces for recognition” (Matthew Turk and Alex Pentland) [1], the authors developed a near-real-time computer system that can locate and track a subject’s head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. This approach treats the face recognition problem as an intrinsically two-dimensional recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that these faces are normally upright and thus may be described by a small set of two-dimensional characteristic views. Their experiments show that the eigenface technique can be made to perform at very high accuracy, although with a substantial “unknown” rejection rate, and is thus potentially well suited to these applications. The future scope of this work was, in addition to recognizing faces, to use eigenface analysis to determine the gender of the subject and to interpret facial expressions.

In “Fast face recognition using eigenfaces” (Arun Vyas and Rajbala Tokas) [2], the approach treats face recognition as a two-dimensional problem. Face recognition is done by Principal Component Analysis (PCA): face images are projected onto a space that encodes the best variation among known face images. The face space is created by the eigenface method, using eigenvectors of the set of faces, which may not correspond to general facial features such as eyes, nose and lips. The eigenface method uses PCA for recognition of the images. The system works by projecting a pre-extracted face image onto a face space that shows significant variation among known face images. A face is categorized as known or unknown after comparing it with the existing database. From the obtained results, it was concluded that, for recognition, it is sufficient to take about 10% of the eigenfaces with the highest eigenvalues. It is also clear that the recognition rate increases with the number of training images.


In “Face recognition using eigenface approach” (Vinay Hiremath and Ashwini Mayakar) [5], the authors take a step towards developing a face recognition system that can recognize static images. It can be modified to work with dynamic images; in that case the dynamic images received from the camera are first converted into static ones and then the same procedure is applied to them. The scheme is based on an information theory
approach that decomposes face images into a small set of characteristic feature images
called ‘Eigenfaces’, which are actually the principal components of the initial training set
of face images. Recognition is performed by projecting a new image into the subspace
spanned by the Eigenfaces (‘face space’) and then classifying the face by comparing its
position in the face space with the positions of the known individuals. The Eigenface
approach gives us an efficient way to find this lower dimensional space. Eigenfaces are the eigenvectors representing each of the dimensions of this face space, and they can be considered as various face features. Any face can be expressed as a linear combination of the singular vectors of the set of faces, and these singular vectors are eigenvectors of the covariance matrix. The eigenface approach to face recognition is fast and simple and works well under constrained environments. It is one of the best practical solutions to the problem of face recognition. Many applications that require face recognition do not require perfect identification but just a low error rate. So instead of searching a large database of faces, it is better to give a small set of likely matches. By using the eigenface approach, this small set of likely matches for a given image can be easily obtained.

“Face recognition using eigenfaces and artificial neural networks” (Mayank Agarwal,
Nikunj Jain, Mr. Manish Kumar and Himanshu Agrawal) [4], the authors present a methodology for face recognition based on an information theory approach of coding and decoding the face image. The proposed methodology combines two stages: feature extraction using principal component analysis and recognition using a feed-forward back-propagation neural network. The algorithm was tested on 400 images (40 classes). A recognition score for the test lot is calculated by considering almost all the variants of feature extraction. The proposed methods were tested on the Olivetti and Oracle Research Laboratory (ORL) face database. Test results gave a recognition rate of 97.018%.


Chapter 3

FACE RECOGNITION

One of the simplest and most effective PCA approaches used in face recognition systems
is the so-called eigenface approach. This approach transforms faces into a small set of
essential characteristics, eigenfaces, which are the main components of the initial set of
learning images (training set). Recognition is done by projecting a new image in the
eigenface subspace, after which the person is classified by comparing its position in
eigenface space with the position of known individuals. The advantage of this approach
over other face recognition systems lies in its simplicity, speed and insensitivity to small or gradual changes on the face. The main limitation is in the images that can be used for recognition: they must be upright, frontal views of human faces.

3.1 PCA Approach to Face Recognition

Principal component analysis transforms a set of data obtained from possibly correlated
variables into a set of values of uncorrelated variables called principal components. The
number of components can be less than or equal to the number of original variables. The
first principal component has the highest possible variance, and each of the succeeding components has the highest possible variance under the restriction that it is orthogonal to the preceding components. We want to find the principal components, in this case the eigenvectors of the covariance matrix of facial images. The first thing we need to do is to form a training data set. A 2D image I_i can be represented as a 1D vector by concatenating its rows; the image is transformed into a vector of length N = m×n as shown in (1).

I = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix}_{m \times n} \xrightarrow{\text{concatenation}} x = \begin{bmatrix} x_{11} & \cdots & x_{1n} & x_{21} & \cdots & x_{2n} & \cdots & x_{mn} \end{bmatrix}^{T} \qquad (1)


Let M such vectors x_i (i = 1, 2, ..., M) of length N form a matrix of learning images, X. To ensure that the first principal component describes the direction of maximum variance, it is necessary to centre the matrix: first we determine the vector of mean values Ψ, and then subtract that vector from each image vector.

\Psi = \frac{1}{M} \sum_{i=1}^{M} x_i \qquad (2)

\Phi_i = x_i - \Psi \qquad (3)

The mean-subtracted vectors are arranged to form a new training matrix (size N×M):
A = [\Phi_1, \Phi_2, \ldots, \Phi_M] \qquad (4)
The next step is to calculate the covariance matrix C and find its eigenvectors e_i and eigenvalues λ_i, where

C = A A^{T} \qquad (5)

C e_i = \lambda_i e_i \qquad (6)

The covariance matrix C has dimensions N×N, from which we get N eigenvalues and eigenvectors. For an image size of 128×128, we would have to calculate a matrix of dimensions 16,384×16,384 and find 16,384 eigenvectors. This is not very efficient, since we do not need most of these vectors. The rank of the covariance matrix is limited by the number of images in the learning set: if we have M images, we will have M−1 eigenvectors corresponding to non-zero eigenvalues. A theorem in linear algebra states that the eigenvectors e_i and eigenvalues λ_i can be obtained from the eigenvectors and eigenvalues of the matrix A^{T}A (dimensions M×M): if ν_i and μ_i are the eigenvectors and eigenvalues of A^{T}A, then e_i = A ν_i are eigenvectors of C = A A^{T} with the same non-zero eigenvalues. The eigenvector associated with the highest eigenvalue reflects the highest variance, and the one associated with the lowest eigenvalue, the smallest variance. Eigenvalues decrease exponentially, so that about 90% of the total variance is contained in
the first 5% to 10% of the eigenvectors. Therefore, the vectors should be sorted by eigenvalue so that the first vector corresponds to the highest eigenvalue. These vectors are then normalized. They form the new matrix E, in which each vector e_i is a column vector. The dimensions of this matrix are N×D, where D represents the desired number of eigenvectors. It is used for projection of the data matrix A and calculation of the y_i vectors of the matrix Y = [y_1, y_2, y_3, ..., y_M]. The matrix Y is given as:

Y = E^{T} A \qquad (7)

Each original image can be reconstructed by adding the mean image Ψ to a weighted summation of the vectors e_i. The last step is the recognition of faces. The image of the person we want to find in the training set is transformed into a vector P, reduced by the mean value Ψ and projected with the matrix of eigenvectors (eigenfaces):

\omega = E^{T} (P - \Psi) \qquad (8)

Classification is done by determining the distance, ε_i, between ω and each vector y_i of the matrix Y. The most common choice is the Euclidean distance, but other measures may be used; this report presents results for the Euclidean distance. If A and B are two vectors of length D, the Euclidean distance between them is determined as follows:

d(A, B) = \sqrt{\sum_{i=1}^{D} (a_i - b_i)^2} = \lVert A - B \rVert \qquad (9)

If the minimum distance between test face and training faces is higher than a threshold θ,
the test face is considered to be unknown; otherwise it is known and belongs to the person
in the database.
S = \arg\min_i \, \varepsilon_i \qquad (10)

The program always finds some minimum distance between the test image and the images from the training base, so even if the person is not in the database, a face would still be “recognized”. It is therefore necessary to set a threshold that allows us to determine whether a person is in the database. There is no exact formula for determining the threshold. The most common way is
to first calculate, for each image in the training base, its minimum distance to the other training images and place these distances in a vector rast. The threshold is then taken as 0.8 times the maximum value of the vector rast:
θ = 0.8*max (rast) (11)
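A minimal MATLAB sketch of this training-side computation (equations (2)–(7) and (11)) is given below. It is not the project’s exact code: variable names are illustrative and the training images are assumed to be already loaded as the columns of a matrix X.

    % X: N-by-M matrix, each column is one vectorized training image
    Psi = mean(X, 2);                                   % mean face, eq. (2)
    A   = X - repmat(Psi, 1, size(X, 2));               % centred data, eqs. (3)-(4)

    L        = A' * A;                                  % small M-by-M matrix instead of N-by-N
    [V, Dg]  = eig(L);                                  % eigenvectors/eigenvalues of A'*A
    [~, idx] = sort(diag(Dg), 'descend');               % sort by decreasing eigenvalue
    V        = V(:, idx);

    E = A * V;                                          % eigenfaces of C = A*A', up to scale
    E = E ./ repmat(sqrt(sum(E.^2, 1)), size(E, 1), 1); % normalize each column
    E = E(:, 1:min(10, size(E, 2)));                    % keep the leading D eigenfaces

    Y = E' * A;                                         % projections of training images, eq. (7)

    % threshold from pairwise distances between training projections, eq. (11)
    M    = size(Y, 2);
    rast = zeros(1, M);
    for i = 1:M
        d       = sqrt(sum((Y - repmat(Y(:, i), 1, M)).^2, 1));
        d(i)    = inf;                                  % ignore the distance to itself
        rast(i) = min(d);
    end
    theta = 0.8 * max(rast);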

3.2 Viola-Jones algorithm for face detection


In 2004, an article by Paul Viola and Michael J. Jones titled “Robust Real-Time Face Detection” [3] was published in the International Journal of Computer Vision. The algorithm
presented in this article has been so successful that today it is very close to being the de
facto standard for solving face detection tasks. This success is mainly attributed to the
relative simplicity, the fast execution and the remarkable performance of the algorithm.

3.2.1 The scale invariant detector


The first step of the Viola-Jones face detection algorithm is to turn the input image into an
integral image. This is done by making each pixel equal to the entire sum of all pixels above
and to the left of the concerned pixel. This is demonstrated in Figure 3-1.

Figure 3-1 The integral image

This allows for the calculation of the sum of all pixels inside any given rectangle using only
four values. These values are the pixels in the integral image that coincide with the corners
of the rectangle in the input image. This is demonstrated in Figure 3-2.

Figure 3-2 Sum calculation


Since both rectangle B and C include rectangle A, the sum of A has to be added to the
calculation.

It has now been demonstrated how the sum of pixels within rectangles of arbitrary size can
be calculated in constant time. The Viola-Jones face detector analyzes a given sub-window
using features consisting of two or more rectangles. The different types of features are
shown in Figure 3-3.

Figure 3-3 The different types of features

Each feature results in a single value which is calculated by subtracting the sum of the white
rectangle(s) from the sum of the black rectangle(s).
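To illustrate how cheaply these rectangle sums can be evaluated, the following MATLAB sketch builds an integral image with cumsum and evaluates a two-rectangle feature. The image file name and the rectangle coordinates are arbitrary examples, not values used in this project.

    I  = double(rgb2gray(imread('classroom.jpg')));    % any test image (placeholder name)
    ii = cumsum(cumsum(I, 1), 2);                      % integral image

    % sum of pixels inside the rectangle (r1,c1)-(r2,c2), using only four lookups
    rectsum = @(r1, c1, r2, c2) ii(r2, c2) - ii(r1-1, c2) - ii(r2, c1-1) + ii(r1-1, c1-1);

    % value of a two-rectangle (left/right) feature at an example position
    white = rectsum(50, 60, 73, 71);                   % left half of a 24x24 window
    black = rectsum(50, 72, 73, 83);                   % right half
    f     = black - white;                             % feature value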

Viola-Jones empirically found that a detector with a base resolution of 24×24 pixels gives satisfactory results. When allowing for all possible sizes and positions of the features in Figure 3-3, a total of approximately 160,000 different features can then be constructed.
Thus, the amount of possible features vastly outnumbers the 576 pixels contained in the
detector at base resolution. These features may seem overly simple to perform such an
advanced task as face detection, but what the features lack in complexity they most
certainly have in computational efficiency.

One could understand the features as the computer’s way of perceiving an input image, the hope being that some features will yield large values when placed on top of a face. Of course, operations could also be carried out directly on the raw pixels, but the variation due to different pose and individual characteristics would be expected to hamper this approach.
The goal is now to smartly construct a mesh of features capable of detecting faces and this
is the topic of the next section.

3.2.2 The modified AdaBoost algorithm


AdaBoost [3] is a machine learning boosting algorithm capable of constructing a strong
classifier through a weighted combination of weak classifiers. (A weak classifier classifies correctly in only a little bit more than half the cases.) To match this terminology to the
presented theory each feature is considered to be a potential weak classifier.

Figure 3-4 The modified AdaBoost algorithm

An important part of the modified AdaBoost algorithm is the determination of the best
feature, polarity and threshold. There seems to be no smart solution to this problem and
Viola-Jones suggest a simple brute force method. This means that the determination of each
new weak classifier involves evaluating each feature on all the training examples in order
to find the best performing feature. This is expected to be the most time consuming part of
the training procedure.

The best performing feature is chosen based on the weighted error it produces. This
weighted error is a function of the weights belonging to the training examples. As seen in
Figure 3-4 part 4, the weight of a correctly classified example is decreased and the weight
of a misclassified example is kept constant. As a result, it is more ‘expensive’ for the second
feature (in the final classifier) to misclassify an example also misclassified by the first
feature, than an example classified correctly. An alternative interpretation is that the second
feature is forced to focus harder on the examples misclassified by the first. The point being
that the weights are a vital part of the mechanics of the AdaBoost algorithm.


With the integral image, the computationally efficient features and the modified AdaBoost algorithm in place, it seems like the face detector is ready for implementation, but Viola-Jones have one more ace up their sleeve.

3.2.3 The cascaded classifier


The basic principle of the Viola-Jones face detection algorithm is to scan the detector many
times through the same image – each time with a new size. Even if an image contains one or more faces, it is obvious that an excessively large number of the evaluated sub-windows would still be negatives (non-faces). This realization leads to a different formulation of the
problem:

Instead of finding faces, the algorithm should discard non-faces.


The thought behind this statement is that it is faster to discard a non-face than to find a face.
With this in mind a detector consisting of only one (strong) classifier suddenly seems
inefficient since the evaluation time is constant no matter the input. Hence the need for a
cascaded classifier arises.

The cascaded classifier is composed of stages each containing a strong classifier. The job
of each stage is to determine whether a given sub-window is definitely not a face or maybe
a face. When a sub-window is classified to be a non-face by a given stage it is immedia te ly
discarded. Conversely a sub-window classified as a maybe-face is passed on to the next
stage in the cascade. It follows that the more stages a given sub-window passes, the higher
the chance the sub-window actually contains a face. The concept is illustrated with two
stages in Figure 3-5.

Figure 3-5 The cascade classifier

In a single stage classifier one would normally accept false negatives in order to reduce the
false positive rate. However, for the first stages in the staged classifier false positives are
not considered to be a problem since the succeeding stages are expected to sort them out.

Therefore, Viola-Jones prescribe the acceptance of many false positives in the initial stages.
Consequently, the amount of false negatives in the final staged classifier is expected to be
very small.

Viola-Jones also refer to the cascaded classifier as an attentional cascade. This name
implies that more attention (computing power) is directed towards the regions of the image
suspected to contain faces. It follows that when training a given stage, say n, the negative examples should of course be the false positives generated by the preceding stages.


Chapter 4

PROPOSED SYSTEM

The present system of attendance marking, i.e., the faculty manually calling out the roll, has served the purpose quite satisfactorily. With changes in the educational system and the introduction of new technologies in the classroom, such as virtual classrooms, the traditional way of taking attendance may not be viable anymore. Even with the rising number of courses of study offered by universities, processing attendance manually can be time consuming. Hence, in our project we aim at creating a system to take attendance using facial recognition technology in classrooms and creating an efficient database to record it.

4.1 Block diagram

Figure 4-1 Proposed system

The block diagram in figure 4-1 describes the proposed system for Face Recognition based
Classroom attendance system. The system requires a camera installed in the classroom at a
position where it could capture all the students in the classroom and thus capture their
images effectively. This image is processed to get the desired results. The working is
explained in brief below:


 Capturing Camera: A camera is installed in the classroom to capture the faces of the students. It has to be placed such that it captures the faces of all the students effectively. The camera has to be interfaced to a computer system for further processing, either through a wired or a wireless network. In our prototype we use the built-in camera of the laptop.
 Image Processing: The facial recognition algorithm is applied to the captured image. The image is cropped and stored for processing. The module recognizes the faces of the students which have been registered manually with their names and ID codes in the database. We use MATLAB for all the image processing and acquisition operations. The whole process requires the following steps:
a) Train Database: Initially we take facial images of the enrolled students. In our system we have taken three images each. This data is later used in the facial recognition algorithm. This is done using the Image Acquisition Toolbox of MATLAB. Each cropped face image is resized to 240 x 300 pixels.
b) Face Detection and Cropping: The captured image of the classroom is initially scanned to detect faces. This is done using the Computer Vision Toolbox through the function vision.CascadeObjectDetector(). This function works on the basis of the Viola-Jones algorithm, which focuses on speed and reliability. The detected faces are cropped and resized to 240 x 300 pixels, the same as the train database.
c) Face Recognition: For recognition, the feature locations are refined and the face is normalized with the eyes and mouth in fixed locations. Images from the face tracker are used to train a frontal eigenspace, and the leading three eigenvectors are retained. Since the face images have been warped into frontal views, a single eigenspace is enough. Face recognition is then performed using the eigenface approach with additional temporal information added. The projection coefficients of all images of each person are modelled as a Gaussian distribution, and the face is classified based on the probability of a match.
d) Attendance Recording: We use an Excel spreadsheet to store the recorded attendance in an easy-to-use output format; Excel is also the software most familiar to the majority of institution staff. This is done using the Spreadsheet Link EX toolbox. If a student is recognized, the corresponding cell is updated with a ‘1’, else with a ‘0’. Using Excel’s formatting features, we can retrieve the information effectively.


Chapter 5

MATLAB

MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language. A proprietary programming language developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, Fortran and Python.

Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing abilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems.

The basic data structure in MATLAB is the array, an ordered set of real or complex
elements. This object is naturally suited to the representation of images, real-valued ordered
sets of color or intensity data.

MATLAB stores most images as two-dimensional arrays (i.e., matrices), in which each
element of the matrix corresponds to a single pixel in the displayed image. (Pixel is derived
from picture element and usually denotes a single dot on a computer display.) For example,
an image composed of 200 rows and 300 columns of different coloured dots would be
stored in MATLAB as a 200-by-300 matrix. Some images, such as RGB, require a three-
dimensional array, where the first plane in the third dimension represents the red pixel
intensities, the second plane represents the green pixel intensities, and the third plane
represents the blue pixel intensities.

This convention makes working with images in MATLAB similar to working with any
other type of matrix data, and makes the full power of MATLAB available for image
processing applications. For example, you can select a single pixel from an image matrix using normal matrix subscripting.
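For instance, this kind of subscripting looks as follows (the file name is only an illustrative placeholder):

    I = imread('classroom.jpg');   % example RGB image, an m-by-n-by-3 uint8 array
    p = I(100, 200, :);            % red, green and blue values of the pixel at row 100, column 200
    g = rgb2gray(I);               % 2-D grayscale matrix; g(100, 200) is a single intensity value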


5.1 Image Processing Toolbox


The Image Processing Toolbox is a collection of functions that extend the capability of the
MATLAB® numeric computing environment. The toolbox supports a wide range of image
processing operations, including:
•Spatial image transformations
•Morphological operations
•Neighborhood and block operations
•Linear filtering and filter design
•Transforms
•Image analysis and enhancement
•Image registration
•Deblurring
•Region of interest operations

Many of the toolbox functions are MATLAB M-files, a series of MATLAB statements that
implement specialized image processing algorithms. You can extend the capabilities of the
Image Processing Toolbox by writing your own M-files, or by using the toolbox in
combination with other toolboxes, such as the Signal Processing Toolbox and the Wavelet
Toolbox.

5.2 Computer Vision Toolbox


The field of computer vision is important for such varied applications as autonomous vehicles, navigation with the help of images captured by a mounted camera, and high precision measurements using images taken by calibrated cameras. A number of numerical routines implemented in MATLAB are useful in a variety of computer vision applications; this collection of routines is called the Computer Vision Toolbox.
One of the main problems in Computer Vision is to calculate the 3D-structure of the scene
and the motion of the camera from measurements in the images taken from different viewpoints in the scene. This problem is called structure and motion, referring to the fact that
both the structure of the scene and the motion of the camera are calculated from image
measurements only. A number of different sub problems, arising from different knowledge
of the intrinsic properties of the camera appear. Other important problems are to calculate
the structure of the 3D-scene given the motion of the camera and to calculate the motion of

the camera given the structure of the scene. These problems are somewhat simpler than the
general structure and motion problem, but are nevertheless important for navigation and
obstacle avoidance. A related problem is to calibrate a camera, i.e. calculate the focal
distance, the principal point etc., from images of a known scene. This calibration may be
used to make a more precise reconstruction. The problem of estimating structure and motion from image sequences is closely related to the field of close-range
photogrammetry. In this field images are used to make precise measurements of three-
dimensional objects from a number of images, often taken by calibrated cameras. The main
difference is that in computer vision, the focus is not on accuracy, but instead on reliability
and speed of calculation. In all these problems, different kinds of so-called features can be used. The simplest features are points that are easily detected in the sequence, such as corners or painted marks. Other types of features are lines and curves, which are easier to detect and track but more difficult to use in structure and motion algorithms. In order to use the computer vision routines, these features must be detected in advance and the correspondence between different images must be established. The main emphasis of the computer vision routines is to use these detected features to solve for structure and motion.

5.3 Image Acquisition Toolbox


The Image Acquisition Toolbox, shown in figure 5-1, is a collection of functions that extend the capability of the MATLAB® numeric computing environment. The toolbox supports a wide range of image acquisition operations, including:
 Acquiring images through many types of image acquisition devices, from
professional grade frame grabbers to USB-based Webcams
 Viewing a preview of the live video stream
 Triggering acquisitions (including external hardware triggers)
 Configuring callback functions that execute when certain events occur
 Bringing the image data into the MATLAB workspace

Many of the toolbox functions are MATLAB M-files. You can extend the capabilities of
the Image Acquisition Toolbox by writing your own M-files, or by using the toolbox in
combination with other toolboxes, such as the Image Processing Toolbox and the Data
Acquisition Toolbox. The toolbox also includes a Simulink® interface called the Image
Acquisition Blockset. This blockset extends Simulink with a block that lets you bring live
video data into a model.
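The basic acquisition workflow used in this project can be sketched with a few toolbox calls. The adaptor name 'winvideo' and the device ID below are typical defaults for a laptop webcam on Windows and may differ on other machines.

    vid = videoinput('winvideo', 1);        % connect to the first webcam (adaptor/ID may differ)
    set(vid, 'ReturnedColorSpace', 'rgb');  % return RGB frames
    preview(vid);                           % open a live preview window
    pause(2);                               % let the camera adjust
    img = getsnapshot(vid);                 % grab one frame into the MATLAB workspace
    closepreview(vid);
    delete(vid);                            % release the device
    imwrite(img, 'snapshot.jpg');           % save the frame for later processing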


Figure 5-1 Image Acquisition toolbox components

5.4 Spreadsheet Link


The Spreadsheet Link EX software Add-In integrates the Microsoft® Excel® and
MATLAB® products in a computing environment running Microsoft® Windows®. It
connects the Excel® interface to the MATLAB workspace, enabling you to use Excel worksheet and macro programming tools to leverage the numerical, computational, and
graphical power of MATLAB.

You can use Spreadsheet Link EX functions in an Excel worksheet or macro to exchange and synchronize data between Excel and MATLAB, without leaving the Excel environment. With a small number of functions to manage the link and manipulate data,
the Spreadsheet Link EX software is powerful in its simplicity.

The Spreadsheet Link EX software supports MATLAB two-dimensional numeric arrays,


one-dimensional character arrays (strings), and two-dimensional cell arrays. It does not
work with MATLAB multidimensional arrays and structures.

Figure 5-2 Spreadsheet Link EX toolbox

5.5 GUI
MATLAB is built around a programming language, and as such it’s really designed with
tool-building in mind. Guide extends MATLAB’s support for rapid coding into the realm
of building GUIs. Guide is a set of MATLAB tools designed to make building GUIs easier
and faster. Just as writing math in MATLAB is much like writing it on paper, building a
GUI with Guide is much like drawing one on paper. As a result, you can lay out a complex
graphical tool in minutes. Once your buttons and plots are in place, the Guide Callback
Editor lets you set up the MATLAB code that gets executed when a particular button is
pressed.
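As an illustration of the callback mechanism mentioned above, a GUIDE-generated callback for a push button typically looks like the sketch below. The tag pushbutton1 and the handles fields are the kind of auto-generated and user-stored names GUIDE produces, not the exact names used in this project’s GUI.

    % --- Executes on button press in pushbutton1 (tag assigned in GUIDE).
    function pushbutton1_Callback(hObject, eventdata, handles)
    % hObject   handle to the button that was pressed
    % handles   structure with handles to every object in the GUI and shared data
    img = getsnapshot(handles.vid);     % assumes a video input object stored in handles.vid
    axes(handles.axes1);                % make the display axes current
    imshow(img);                        % show the captured frame inside the GUI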
The five tools that together make up Guide are:
•The Property Editor
•The Guide Control Panel
•The Callback Editor
•The Alignment Tool
•The Menu Editor

Simplicity
Simplicity in design is our chief goal. A simple GUI has a clean look and a sense of unity.
It’s very easy to add functionality to the GUI you’re building, but if that functionality really
doesn’t belong, take it out. Avoid screen clutter, and only present users with choices that
advance them toward the completion of the task.

Emphasize Form, not Number


Clutter obscures valuable information. Since visualization is inherently more qualitative
than quantitative, concentrate on the shape and let the labelling vanish. Once you let
yourself remove a piece of the GUI that doesn’t absolutely need to be there, you may find
that you can eliminate a lot of supporting machinery that no longer has any purpose.

Minimize the Area of Interaction


Don’t use two figures when one will do. If you’re demonstrating input-output relationships, put the input right next to the output. Decorations such as grid lines that do not add value to the image should be removed.


Chapter 6

SYSTEM IMPLEMENTATION

System Flowchart

Figure 6-1 System Flowchart

IMAGE PROCESSING

Figure 6-2 Image processing procedure


6.1 System Pre-Requisites


The first step in implementing the system is to create a database of the enrolled students. In an actual implementation this step would be part of the admission process, where the necessary information about the students is collected.

This set of images is referred to as the train database for the algorithm. The facial recognition algorithm (here, the eigenfaces method) then uses this database to calculate the eigenfaces for face recognition.

In our project we have created a function ‘training.m’ for this purpose. This function works as follows:

 Capture the image of the student.

 Using the function ‘vision.CascadeObjectDetector’ of the Computer Vision Toolbox, detect the face in the image. This function works on the basis of the Viola-Jones algorithm.
 The detected faces are cropped and saved in the database.

This function has to be in the folder where the main code is saved. Also the train database
is saved in the folder ‘TrainDatabase’ which is also stored in the same folder for best results.
After this step the system is ready for recording the attendance of the registered students.
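A minimal sketch of what such a training routine might do is shown below. The folder name ‘TrainDatabase’ follows the report, while the camera setup, function signature and file naming are illustrative assumptions, not the project’s exact code.

    function training(studentID, nImages)
    % Capture face images of one student and store them in the train database.
    detector = vision.CascadeObjectDetector();          % Viola-Jones face detector
    cam      = videoinput('winvideo', 1);               % adaptor/ID may differ per machine
    set(cam, 'ReturnedColorSpace', 'rgb');
    for k = 1:nImages
        img  = getsnapshot(cam);                        % capture one frame
        bbox = step(detector, img);                     % detect faces, one [x y w h] row each
        if ~isempty(bbox)
            face = imcrop(img, bbox(1, :));             % keep the first detected face
            face = imresize(face, [300 240]);           % 240 x 300 image, as in the report
            imwrite(face, fullfile('TrainDatabase', ...
                sprintf('%d_%d.jpg', studentID, k)));   % e.g. TrainDatabase/3_1.jpg
        end
        pause(1);                                       % short delay between captures
    end
    delete(cam);
    end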

6.2 Image Processing


This is the most important part of the system, since it is based on the concept of image processing itself. The process is explained in the sequence of its occurrence in the flowchart under the title Image Processing (figure 6-2).

6.2.1 Capture Image

The image of the classroom is captured such that the faces of all the students are captured efficiently. This image is used for further processing by our algorithm. We have used the laptop camera with a resolution of 1366x768, since for the prototype this resolution is sufficient. For more accurate processing of a larger classroom, a camera with higher resolution is needed.

6.2.2 Face Detection and Cropping

The captured image is read into MATLAB. The image is nothing but a matrix of numbers which correspond to the pixel values. The software does not know where in this collection of numbers the faces, which are the input for our algorithm, are present. Face detection performs this task.

We use the function ‘vision.CascadeObjectDetector()’ of the Computer Vision Toolbox for this purpose. This function detects faces based on the Viola-Jones algorithm, described in chapter 3. The sequence of steps is as follows:
 Read the image captured in the previous step.
 The faces are detected in the above image as explained earlier.
 We crop the areas of the image where faces are marked and save them into a folder as individual image files in JPEG format.

The algorithm detects all the faces clearly visible in the captured image of the classroom. Each student needs to be in an upright position so that their presence is not missed by the system.
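The detection and cropping step can be sketched as follows. The folder name ‘TestDatabase’ follows the report’s description, while the input image file name is an illustrative placeholder.

    img      = imread('classroom.jpg');                 % captured classroom image (placeholder name)
    detector = vision.CascadeObjectDetector();          % Viola-Jones frontal face detector
    bboxes   = step(detector, img);                     % one [x y w h] row per detected face

    for k = 1:size(bboxes, 1)
        face = imcrop(img, bboxes(k, :));               % crop the k-th detected face
        face = imresize(face, [300 240]);               % same size as the train database images
        imwrite(face, fullfile('TestDatabase', sprintf('%d.jpg', k)));   % faces named 1.jpg, 2.jpg, ...
    end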

6.2.3 Face Recognition using Eigenfaces

We have used the eigenfaces algorithm for face recognition in this project. It is a fast and cost effective solution for face recognition, giving an appreciable level of accuracy. The two-dimensional images in the training data set are converted into one-dimensional vectors.
 Several such vectors form a matrix of learning images
 We determine the vector of mean values and subtract that vector from each
image vector
 These mean-subtracted vectors are arranged to form a new training matrix.
 We calculate the covariance matrix, from which we get the eigenvalues and eigenvectors.
 Eigenvectors associated with the highest eigenvalues reflect the highest variance, and vice versa.
 Therefore, the vectors are sorted by eigenvalue so that the first vector corresponds to the highest eigenvalue. The vectors are then normalized.


 The image of the person we want to find in the training set is transformed into a vector, reduced by the mean value and projected with the matrix of eigenvectors.
 Classification is done by determining the Euclidean distance between the projected vectors of the training images and that of the test image.
 If the minimum distance between the test face and the training faces is less than the threshold, the face is considered known and belongs to that person in the database; otherwise it is considered unknown.
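The recognition step itself reduces to a projection and a nearest-neighbour search, as in the sketch below. Here E, Psi, Y and theta are assumed to come from the training computation described in chapter 3, and testface is assumed to be a cropped, resized face vectorized in the same way as the training images.

    P     = double(testface(:));                   % vectorize the test face (must match the training vectorization)
    omega = E' * (P - Psi);                        % project into face space, eq. (8)

    dists     = sqrt(sum((Y - repmat(omega, 1, size(Y, 2))).^2, 1));   % Euclidean distances, eq. (9)
    [dmin, S] = min(dists);                        % index of the closest training image, eq. (10)

    if dmin < theta
        fprintf('Recognized as training image %d\n', S);   % mark this student present
    else
        fprintf('Unknown face - ignored\n');
    end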

Whenever a person is successfully recognized the system automatically marks his or her
attendance in the database which is in the MS Excel. This is explained in the following
section.

6.2.4 Store Recognized Entries


Whenever the algorithm finds a match, we update the corresponding field of the person in the Excel sheet with a ‘1’ on that particular date. Otherwise it is marked as ‘0’ by default, which indicates that the person is absent. MS Excel provides a very efficient way of storing the data. This is explained in the next section.

6.3 Update the Attendance sheet in Excel


The MATLAB IDE and the MS Excel sheet are linked using the Spreadsheet Link EX toolbox. Whenever a detected face matches a person in the database, the corresponding value is updated in the Excel sheet. This is carried out through the function xlswrite().
 The candidate’s identity is determined through the index of the training image with which the detected face matches.
 A spreadsheet of the desired format has to be drafted beforehand (attendance.xls in our system).
 Using the index value, the corresponding cell in the sheet is updated with a one, along with the time and date of the class.
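A minimal sketch of such an update is given below. The file name attendance.xls follows the report; the row/column mapping, the sheet name and the studentIndex variable are hypothetical and would depend on how the spreadsheet is laid out.

    studentIndex = 3;                          % example index returned by the recognition step
    file = 'attendance.xls';
    row  = studentIndex + 1;                   % assume row 1 holds headers, one student per row
    xlswrite(file, 1, 'Sheet1', sprintf('C%d', row));                % mark the student present
    xlswrite(file, {datestr(now)}, 'Sheet1', sprintf('D%d', row));   % record the date and time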

6.4 MATLAB Functions created


We have created program modules or functions for each of the blocks we have discussed previously, as standard MATLAB files, i.e., in ‘.m’ format. This helps in clearly defining the different aspects of the program, makes debugging easier and, in the later phase, simplifies the creation of the Graphical User Interface (GUI). The following are the modules we have created and their descriptions.
a. Training.m – Captures a predefined number of images of the candidate (in our case, three), crops the face and stores the image in the TrainDatabase folder automatically.
b. Face_detection.m – After the camera captures the students’ image in the classroom, this function detects the faces of the students, crops them and uses these variables as the input arguments for the function which does the face recognition part. Initially this module also serves as the main program in our code.
c. Taking_Snapshot.m – The laptop camera is initialized and starts capturing the video. It then takes a snapshot of the video and returns it to the calling function.
d. CreateDatabase.m – The eigenface algorithm requires that we create one-dimensional vectors from the database of images. This function performs this task.
e. Eigenface.m – The eigenface algorithm requires various characteristics of the face images for efficient recognition, namely the mean face, the mean-subtracted images and the eigenfaces. This function computes all of these.
f. Recognition.m – The detected faces are compared with the parameters from the training database and the function returns the name of the image it matches satisfactorily, which is a number.
g. Record_attendance.m – Based on the number obtained from the recognition module, we determine to which student the image belongs using a switch case and update the field in the Excel sheet.
h. Delete_test.m – Finally, the images captured during the process have to be deleted to avoid unnecessary wastage of memory. This function does that.
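Under the assumption that these modules expose simple interfaces (the report does not give their exact signatures, so the ones below are hypothetical), the overall flow could be tied together roughly as follows:

    % Hypothetical top-level script tying the modules together.
    Taking_Snapshot();                            % capture a classroom frame from the laptop camera
    Face_detection();                             % detect faces and crop them into the TestDatabase folder
    T = CreateDatabase('TrainDatabase');          % build the 1-D image matrix from the training faces
    [m, A, Eigenfaces] = Eigenface(T);            % mean face, mean-subtracted images and eigenfaces
    nFaces = 4;                                   % assumed number of detected faces (example value)
    for k = 1:nFaces
        idx = Recognition(sprintf('TestDatabase/%d.jpg', k), m, A, Eigenfaces);
        Record_attendance(idx);                   % update the Excel sheet for this student
    end
    Delete_test();                                % clean up the temporary test images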


Chapter 7

RESULTS AND ANALYSIS


Using all the functions we have created, we have tested the output using existing test images as well as in real time. In the following sections, screenshots of the output of the different functions are given. We have tested the system with the help of four volunteers.

7.1 Collect Training Dataset


Using the training function, we create a database of the enrolled students, which is stored in the TrainDatabase folder.

Figure 7-1 Collected set of training images

With the help of four volunteers, three images of each candidate are stored in the database, as shown in figure 7-1. For more accuracy we can increase the number of training images, but with a compromise in the speed of calculation. However, for our application the variation in calculation speed is not a problem, since a class typically lasts at least one hour, which is far more than the computation time taken by the algorithm.

One thing to keep in mind during this phase is to take the pictures in ambient lighting, with the frontal face clearly visible. There should also be a slight variation in the position or expression of the student in each captured image for better results.


7.2 Image capturing, Face Detection and Cropping


An appropriate image of the classroom is taken with the camera. The camera has to be placed such that the faces of all the students are clearly visible.

Figure 7-2 Capturing of the classroom image and face detection

In normal lighting conditions, and provided the students are seated in a proper posture, the faces are efficiently captured. The classroom lighting has to be properly maintained; in case of blackouts, appropriate alternatives have to be arranged.

All the detected faces, which can be seen in figure 7-2, are cropped and saved in the TestDatabase folder. From this location the next algorithm reads the images and further processing is carried out. The path of the folder must be specified exactly. Each of the faces is automatically given a number as its name, which makes reading the images from the folder easier.

7.3 Face Recognition


The cropped facial images are fed into the face recognition algorithm and we get the results. The eigenfaces algorithm is applied to each image and compared with the database. We get the output shown in figure 7-3 after this process.

Figure 7-3 A student is recognized and appropriate message is displayed

If a person is not present in the training database, his or her image is simply ignored. However, proper lighting has to be maintained in order to prevent false recognition.
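A compact sketch of the eigenface recognition step is shown below. It assumes the training images have already been reshaped into the columns of a matrix T (as described for CreateDatabase.m) and that the test image is grayscale with the same pixel dimensions; the variable names are illustrative rather than the project's own.

% Eigenface recognition (sketch): project images onto the face space and
% return the index of the closest training image.
function matchIndex = recognize_face(testImg, T)
    % T is an (h*w)-by-N matrix whose columns are the reshaped training images
    T = double(T);                                   % ensure floating-point arithmetic
    nTrain = size(T, 2);
    m = mean(T, 2);                                  % mean face
    A = T - repmat(m, 1, nTrain);                    % deviation of each image from the mean
    [V, ~] = eig(A' * A);                            % eigenvectors of the small N-by-N matrix
    eigenfaces = A * V;                              % eigenfaces of the training set
    projTrain = eigenfaces' * A;                     % projection of every training image
    testVec = double(reshape(testImg, [], 1)) - m;   % centre the test image
    projTest = eigenfaces' * testVec;                % projection of the test image
    dists = sum((projTrain - repmat(projTest, 1, nTrain)).^2, 1);  % squared Euclidean distances
    [~, matchIndex] = min(dists);                    % closest training image wins
end

In practice the eigenvectors associated with very small eigenvalues are usually discarded before projection, and a distance threshold can be applied so that unknown faces are ignored rather than forced onto the nearest match.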

7.4 Output in MS Excel


The output is obtained as shown in figure 7-4. From this sheet the results can be derived in any suitable format using the spreadsheet's own functions. The attendance data are written to the sheet from MATLAB using the Spreadsheet Link EX toolbox, and the following information is recorded:

• If a student is present, a '1' is entered in that student's field.

• The date and time of the session are also written to the sheet.

Any number of students can be included in this system, provided an image-capturing device of sufficient quality is used.
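The project itself uses the Spreadsheet Link EX toolbox for this step; purely as an illustration, the sketch below performs a similar update with MATLAB's built-in xlswrite. It assumes a hypothetical workbook attendance.xlsx in which each student occupies a fixed row, each session occupies a column, and three training images are stored per student (so recognition indices 1-3 map to the first student, 4-6 to the second, and so on).

% Attendance update (sketch): mark a recognized student present and time-stamp the column
function mark_present(matchIndex, sessionColumn)
    workbook = 'attendance.xlsx';                      % hypothetical attendance workbook
    switch matchIndex                                  % map the recognized image index to a row
        case {1, 2, 3},    studentRow = 2;             % three training images per student
        case {4, 5, 6},    studentRow = 3;
        case {7, 8, 9},    studentRow = 4;
        case {10, 11, 12}, studentRow = 5;
        otherwise, return;                             % unknown face: ignore
    end
    col = char('A' + sessionColumn);                   % column letter for this session (B, C, ...)
    xlswrite(workbook, 1, 'Sheet1', sprintf('%s%d', col, studentRow));    % write '1' for present
    xlswrite(workbook, {datestr(now)}, 'Sheet1', sprintf('%s1', col));    % write the date and time
end

The switch-case mapping mirrors the description of Record_attendance.m in the previous chapter; in the actual system the mapping and cell addresses would follow the layout of the project's own sheet.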

In the next section we describe how all these functions are integrated using a Graphical User Interface (GUI), which gives the users an easy-to-use interface.

Figure 7-4 Output obtained in Excel format (.xlsx)

7.5 Graphical User Interface

After the coding, we created a GUI to provide an easy-to-use interface, since running the code directly from the MATLAB command window is not convenient for an end user. The GUI covers taking attendance as well as collecting the training images, as shown in figure 7-5.

Figure 7-5 Graphical User Interface for the system



In the GUI we have included the following features:
• Push button for capturing training images
• Axes that display the captured image, the detected face and the cropped image
• Push button for the attendance system
• Axes showing the streaming video of the classroom

The working of the GUI is shown in the screenshots below; a sketch of how the attendance push button can drive the processing functions follows them.

Figure 7-6 Capturing training images using the GUI

Figure 7-7 Recording attendance using the GUI
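To indicate how the push buttons of the GUI can be wired to the processing functions, the following is a sketch of a GUIDE-style callback for the attendance button shown in figure 7-7. The helper names and signatures (Taking_Snapshot, detect_and_crop, CreateDatabase, recognize_face, mark_present, Delete_test) follow the sketches given earlier and are assumptions, not the project's exact code.

% Sketch of a GUIDE push-button callback that runs the whole attendance pipeline
function attendanceButton_Callback(hObject, eventdata, handles)
    img = Taking_Snapshot();                      % capture the classroom image
    axes(handles.classroomAxes); imshow(img);     % display it on the GUI axes
    detect_and_crop(img);                         % save cropped faces to the test folder
    T = CreateDatabase('TrainDatabase');          % training images as column vectors (signature assumed)
    faces = dir(fullfile('TestDatabase', '*.jpg'));
    for k = 1:numel(faces)
        testImg = rgb2gray(imread(fullfile('TestDatabase', faces(k).name)));
        idx = recognize_face(testImg, T);         % eigenface recognition (Section 7.3)
        mark_present(idx, 2);                     % update the Excel sheet (Section 7.4)
    end
    Delete_test();                                % remove the temporary test images
end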



Chapter 8

ADVANTAGES AND APPLICATIONS


1. Maintains Overall Records: An automated face recognition attendance system maintains the overall presence record of the students in the institution. Leaves taken by the students and the dates on which they were absent are all stored in the system.
2. Gets Rid of the Pen & Paper System: This technology efficiently replaces the older paper-register method and saves the money that the organization would otherwise spend on paper. A face-recognition time-attendance system also maintains data better, since it uses electronic storage, and it gives a good impression of the organization to business clients and other concerned people.
3. Financial Benefits: The face-recognition time-attendance system saves time, eliminates manual mistakes and controls the overall process. Since every event is handled electronically, the possibility of error is reduced. Because attendance is recorded electronically, it also saves the lecturers' time, which they can use for lecturing.
4. Easy Integration: Integrated biometric facial systems are easy to program into any computer system and will usually work with the existing software that an organization already has in place.
5. High Success Rate: Facial biometric technology today has a high success rate, especially with the emergence of 3D face recognition. It is extremely difficult to fool the system, so one can feel secure about it.
6. Proxy Attendance is Eliminated: Attendance is taken automatically by the camera placed in the classroom, so there is no chance of proxy attendance.
7. Saves Time: In the traditional attendance-marking system, the lecturer calls out each student's name against their ID, which is very time consuming; this system recovers that time by marking attendance automatically.
8. Fewer Mistakes: There is a chance of mistakes when lecturers mark attendance manually; when attendance is taken automatically by the computer-based system, such mistakes are largely eliminated.
9. Virtual Classroom: Virtual classrooms are classrooms without a lecturer, since the students learn online. This system is very useful in virtual classrooms: where there is no lecturer to take attendance, the system manages the students' attendance automatically.
10. Simple Algorithm & Flowchart: This system uses a simple algorithm and flowchart that are easy to understand, with no complicated sections; because few hardware components are used, the information flow is simple and each section is clearly understood.

The system therefore has many advantages. However, as in most systems, some drawbacks have been observed:
• Sensitive to Lighting – If the ambient lighting of the training images differs from that of the images taken during processing, there is a high possibility of incorrect face recognition. Hence the lighting conditions of the classroom have to be kept in mind while collecting the database of the students.


Chapter 9

FUTURE SCOPE

The system we have developed is able to accomplish the task of marking attendance in the classroom automatically, and the output is obtained in an Excel sheet in real time, as desired. However, in order to develop a dedicated system that can be deployed in an educational institution, a more efficient algorithm that is insensitive to the lighting conditions of the classroom has to be developed, and a camera of optimum resolution has to be used. Another important direction is creating an online database of attendance that is updated automatically, keeping in mind the growing popularity of the Internet of Things. This can be done with a standalone module installed in the classroom with access to the internet, preferably over a wireless link. These developments can greatly widen the applications of the project.


Chapter 10

CONCLUSION
In this project we have implemented an attendance system for a lecture, section or laboratory by which the lecturer or teaching assistant can record students' attendance. It saves time and effort, especially in lectures with a large number of students. The Automated Attendance System has been envisioned for the purpose of reducing the drawbacks of the traditional (manual) system. It demonstrates the use of image processing techniques in the classroom, and can not only help with attendance but also improve the goodwill of the institution.




Personal Profile
Mr. Chethan R received his B.E. degree in E&C Engineering from SRSIT, Bangalore in the year 2008 and his M.Tech. in Electricals and Electronics from NMAM Institute of Technology, Nitte, India in the year 2012. He has been an Assistant Professor of E&C Engineering at SMVITM since 2011. His areas of interest include VLSI design, embedded systems and biomedical electronics. His papers have been published in international journals and presented at international conferences.
Mr. Chethan R
Project Guide

Student’s Name: Rakshitha


USN: 4MW12EC059
Address: No.7 Mandavi Plaza, Udupi
E-mail ID: rakshitha.mithu@yahoo.in
Contact Phone No: +91-8861685482

Student’s Name: S R Dhanush


USN: 4MW12EC065
Address: Yakshagana Kendra, Indrali, Udupi, India
E-mail ID: dhanush_sr@yahoo.com
Contact Phone No: +91-9900769702

Student’s Name: Shreeraksha Shetty


USN: 4MW12EC075
Address: Asha Nilaya, Nellikatte Hirebettu Athradi
Udupi, India
E-mail ID: shreeraksha.asha@gmail.com
Contact Phone No: +91-9686427444
Student’s Name: Sushmitha
USN: 4MW12EC088
Address: Near Vyavahar garden 76th Badagabettu
Kukkikatte, Udupi, India
E-mail ID: sushmithaacharya31518.sa@gmail.com
Contact Phone No: +91-9886997878
