
1. Introduction
This paper shows how face detection and recognition algorithms from image processing can be implemented to build a system that detects and recognises the frontal faces of students in a classroom. “A face is the front part of a person’s head from the forehead to the chin, or the corresponding part of an animal” (Oxford Dictionary). In human interaction the face is the most important factor, as it carries much of the information that identifies an individual, and all humans have the ability to recognise individuals from their faces. The proposed solution is to develop a working prototype of a system that facilitates class control for Kingston University lecturers by detecting the frontal faces of students in a picture taken in a classroom. The second part of the system performs facial recognition against a small database. In recent years much research has been carried out and many face detection and recognition systems have been developed, some of which are used on social media platforms, in banking apps and in government offices, e.g. by the Metropolitan Police and by Facebook.

1.1 FACE RECOGNITION:

DIFFERENT APPROACHES OF FACE RECOGNITION:

There are two predominant approaches to the face recognition problem: geometric (feature based) and photometric (view based). As researchers' interest in face recognition continued, many different algorithms were developed, three of which have been well studied in the face recognition literature.

Recognition algorithms can be divided into two main approaches:

1. Geometric: based on the geometrical relationships between facial landmarks, in other words the spatial configuration of facial features. The main geometrical features of the face, such as the eyes, nose and mouth, are first located, and faces are then classified on the basis of various geometrical distances and angles between these features (Figure 1.2).
2. Photometric stereo: used to recover the shape of an object from a number of images taken under different lighting conditions. The shape of the recovered object is defined by a gradient map, which is made up of an array of surface normals (Zhao and Chellappa, 2006) (Figure 1.1).

Popular recognition algorithms include:

1. Principal Component Analysis using eigenfaces,
2. Linear Discriminant Analysis,
3. Elastic Bunch Graph Matching using the Fisherface algorithm.

Fig 1.1: Photometric stereo image

Fig 1.2: Geometric Facial Recognition


1.2 FACE DETECTION:

Face detection involves separating image windows into two classes: one containing faces (targets), the other containing the background (clutter). This is difficult because, although commonalities exist between faces, they can vary considerably in terms of age, skin colour and facial expression. The problem is further complicated by differing lighting conditions, image qualities and geometries, as well as the possibility of partial occlusion and disguise. An ideal face detector would therefore be able to detect the presence of any face under any set of lighting conditions, upon any background. The face detection task can be broken down into two steps. The first step is a classification task that takes some arbitrary image as input and outputs a binary value of yes or no, indicating whether any faces are present in the image. The second step is the face localization task, which takes an image as input and outputs the location of any face or faces within that image as a bounding box (x, y, width, height). The face detection system can be divided into the following steps:

1. Pre-processing: to reduce variability in the faces, the images are processed before they are fed into the network. All positive examples (the face images) are obtained by cropping images with frontal faces to include only the front view, and all cropped images are then corrected for lighting through standard algorithms.
2. Classification: neural networks are implemented to classify the images as faces or non-faces by training on these examples. We use both our own implementation of the neural network and the Matlab neural network toolbox for this task, experimenting with different network configurations to optimize the results.
3. Localization: the trained neural network is then used to search for faces in an image and, if present, localize them in a bounding box. The face attributes on which the work is done are position, scale, orientation and illumination.
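A minimal sketch of the pre-processing step, assuming OpenCV and a hypothetical cropped frontal-face image face.jpg:

import cv2

# Load a cropped frontal-face image (hypothetical file name).
img = cv2.imread("face.jpg")
# Reduce variability: single-channel intensity, lighting correction,
# and a fixed-size window suitable as classifier input.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
equalized = cv2.equalizeHist(gray)
window = cv2.resize(equalized, (19, 19))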

Fig 1.3: Face Detection Algorithm

2. LITERATURE SURVEY

Face detection is a computer technology that determines the location and size of human faces in an arbitrary (digital) image. The facial features are detected, and any other objects, such as trees, buildings and bodies, are ignored. Face detection can be regarded as a specific case of object-class detection, where the task is to find the locations and sizes of all objects in an image that belong to a given class. It can also be regarded as a more general case of face localization, in which the task is to find the locations and sizes of a known number of faces (usually one). There are basically two types of approaches to detecting the facial part of a given image: feature based and image based. The feature-based approach tries to extract features of the image and match them against knowledge of facial features, while the image-based approach tries to obtain the best match between training and testing images.

Fig 2.1: Detection Methods

2.1 FEATURE BASE APPROACH:
Active Shape Model:

Active Shape Models (ASMs) focus on complex non-rigid features, i.e. the actual physical and higher-level appearance of features. ASMs are aimed at automatically locating landmark points that define the shape of any statistically modelled object in an image, for instance facial features such as the eyes, lips, nose, mouth and eyebrows. The training stage of an ASM involves building a statistical facial model from a training set containing images with manually annotated landmarks. ASMs are classified into three groups: snakes, PDMs and deformable templates.

Snakes: the first type uses a generic active contour called a snake, first introduced by Kass et al. in 1987. Snakes are used to identify head boundaries [8,9,10,11,12]. In order to achieve the task, a snake is first initialized in the proximity of a head boundary; it then locks onto nearby edges and subsequently assumes the shape of the head. The evolution of a snake is achieved by minimizing an energy function Esnake (an analogy with physical systems), denoted as Esnake = Einternal + Eexternal, where Einternal and Eexternal are the internal and external energy functions. The internal energy depends on the intrinsic properties of the snake and defines its natural evolution, typically shrinking or expanding. The external energy counteracts the internal energy and enables the contour to deviate from this natural evolution and eventually assume the shape of nearby features (the head boundary) at a state of equilibrium. There are two main considerations in forming snakes: the selection of energy terms and the energy minimization. Elastic energy is commonly used as the internal energy; it varies with the distance between control points on the snake, giving the contour an elastic-band characteristic that causes it to shrink or expand. The external energy, on the other hand, relies on image features. Energy minimization is performed by optimization techniques such as steepest gradient descent, which requires heavy computation; Huang and Chen and Lam and Yan both employ fast iteration methods based on greedy algorithms. Snakes have some demerits: the contour often becomes trapped by false image features, and snakes are not suitable for extracting non-convex features. An energy-minimization sketch is given below.
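As a sketch of snake evolution by energy minimization, the following assumes scikit-image's active_contour function (the greedy and gradient-descent variants mentioned above are not shown); the initial circle is placed roughly around the head in a sample image:

import numpy as np
from skimage import data
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = rgb2gray(data.astronaut())        # sample image containing a face
smoothed = gaussian(img, sigma=3)       # smoothing stabilizes the external energy

# Initialize the snake as a circle in the proximity of the head boundary.
s = np.linspace(0, 2 * np.pi, 400)
rows = 100 + 100 * np.sin(s)
cols = 220 + 100 * np.cos(s)
init = np.array([rows, cols]).T

# alpha and beta weight the internal (elastic and bending) energy terms;
# the contour shrinks onto nearby edges, assuming the head's shape.
snake = active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)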

2.1.1 Deformable Templates:

Deformable templates were then introduced by Yuille et al. to take into account a priori knowledge of facial features and to improve the performance of snakes. Locating a facial feature boundary is not an easy task, because the local evidence of facial edges is difficult to organize into a sensible global entity using generic contours; the low brightness contrast around some of these features also makes edge detection difficult. Yuille et al. took the concept of snakes a step further by incorporating global information about the eye to improve the reliability of the extraction process.

Deformable template approaches were developed to solve this problem. Deformation is based on local valleys, edges, peaks and brightness. Besides the face boundary, extraction of the salient features (eyes, nose, mouth and eyebrows) is a great challenge of face recognition. The template energy is E = Ev + Ee + Ep + Ei + Einternal, where Ev, Ee, Ep and Ei are the external energies due to valleys, edges, peaks and image brightness, and Einternal is the internal energy.

2.1.2 PDM (Point Distribution Model):

Independently of computerized image analysis, and before ASMs were developed, researchers had developed statistical models of shape. The idea is that once you represent shapes as vectors, you can apply standard statistical methods to them just like any other multivariate object. These models learn allowable constellations of shape points from training examples and use principal components to build what is called a Point Distribution Model. These have been used in diverse ways, for example for categorizing Iron Age brooches. An ideal Point Distribution Model can only deform in ways that are characteristic of the object. Cootes and his colleagues were seeking models that do exactly that, so that if a beard, say, covers the chin, the shape model can "override the image" to approximate the position of the chin under the beard. It was therefore natural (but perhaps only in retrospect) to adopt Point Distribution Models. This synthesis of ideas from image processing and statistical shape modelling led to the Active Shape Model. The first parametric statistical shape model for image analysis based on principal components of inter-landmark distances was presented by Cootes and Taylor. Building on this approach, Cootes, Taylor and their colleagues released a series of papers that culminated in what we call the classical Active Shape Model.
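A minimal numeric sketch of a Point Distribution Model, assuming shapes are already aligned (e.g. by Procrustes analysis); the random landmark data here is a placeholder for real annotations:

import numpy as np

# Hypothetical training data: 50 shapes, each a flattened vector
# (x1, y1, ..., xk, yk) of k = 68 landmarks.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(50, 2 * 68))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
P = eigvecs[:, order[:10]]                      # 10 principal modes of variation

# Any allowable shape is x ≈ mean_shape + P @ b; constraining the mode
# weights b (e.g. |b_i| <= 3 * sqrt(eigval_i)) keeps deformations
# characteristic of the object.
b = P.T @ (shapes[0] - mean_shape)
reconstructed = mean_shape + P @ b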

2.2) LOW LEVEL ANALYSIS:

This approach is based on low-level visual features such as color, intensity, edges and motion.

Skin color base: color is a vital feature of human faces. Using skin color as a feature for tracking a face has several advantages. Color processing is much faster than processing other facial features, and under certain lighting conditions color is orientation invariant. This property makes motion estimation much easier, because only a translation model is needed. Tracking human faces using color as a feature does have several problems, e.g. the color representation of a face obtained by a camera is influenced by many factors (ambient light, object movement, etc.).

Fig 2.2: Face Detection

Three different face detection algorithms are available, based on the RGB, YCbCr and HSI color space models. The implementation of these algorithms involves three main steps, viz.
(1) Classify the skin region in the color space,

(2) Apply threshold to mask the skin region and

(3) Draw bounding box to extract the face image.

Crowley and Coutaz suggested one of the simplest skin color algorithms for detecting skin pixels. The perceived human color varies as a function of the relative direction to the illumination. The pixels of a skin region can be detected using a normalized color histogram, which can be normalized for changes in intensity by dividing by luminance: an [R, G, B] vector is converted into a normalized [r, g] vector, which provides a fast means of skin detection. This algorithm fails when there are other skin regions present, such as legs, arms, etc. Chai and Ngan [27] suggested a skin color classification algorithm using the YCbCr color space. Research found that pixels belonging to skin regions have similar Cb and Cr values, so thresholds [Cr1, Cr2] and [Cb1, Cb2] are chosen, and a pixel is classified as skin tone if its [Cr, Cb] values fall within them. The skin color distribution gives the face portion in the color image. This algorithm also has the constraint that the only skin region in the image should be the face. Kjeldsen and Kender defined a color predicate in HSV color space to separate skin regions from the background. Skin color classification in HSI color space is the same as in YCbCr, but here the responsible values are hue (H) and saturation (S): thresholds [H1, S1] and [H2, S2] are chosen, and a pixel is classified as skin tone if its [H, S] values fall within them; this distribution gives the localized face image. This algorithm has the same constraint as the two above.
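The three steps can be sketched in OpenCV as follows; the YCbCr threshold values are illustrative assumptions, not the exact values of Chai and Ngan, and the input file name is hypothetical:

import cv2
import numpy as np

img = cv2.imread("classroom.jpg")
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)     # note OpenCV orders Y, Cr, Cb

# Steps 1-2: classify skin pixels in the color space and mask them.
lower = np.array([0, 133, 77], dtype=np.uint8)     # (Y, Cr, Cb) lower bounds
upper = np.array([255, 173, 127], dtype=np.uint8)  # (Y, Cr, Cb) upper bounds
mask = cv2.inRange(ycrcb, lower, upper)

# Step 3: draw a bounding box around the largest skin region.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)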

2.3 EDGE BASE:

Face detection based on edges was introduced by Sakai et al. This work was based on analysing line drawings of faces from photographs, aiming to locate facial features. Later, Craw et al. proposed a hierarchical framework based on Sakai et al.'s work to trace a human head outline, and remarkable work was subsequently carried out by many researchers in this specific area. The method suggested by Anila and Devarajan is very simple and fast. They proposed a framework consisting of three steps: first, the images are enhanced by applying a median filter for noise removal and histogram equalization for contrast adjustment; second, the edge image is constructed from the enhanced image by applying the Sobel operator; then a novel edge-tracking algorithm is applied to extract sub-windows from the enhanced image based on edges. Finally they used the Back Propagation Neural Network (BPN) algorithm to classify each sub-window as either face or non-face. A sketch of the enhancement and edge steps is given below.
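The enhancement and edge steps can be sketched with OpenCV as follows (the input file name is hypothetical; the edge-tracking and BPN classification stages are omitted):

import cv2

gray = cv2.imread("classroom.jpg", 0)     # read directly as gray scale

denoised = cv2.medianBlur(gray, 5)        # step 1a: median filter for noise removal
enhanced = cv2.equalizeHist(denoised)     # step 1b: histogram equalization

# Step 2: Sobel gradients in x and y, combined into an edge magnitude image.
gx = cv2.Sobel(enhanced, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(enhanced, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))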

2.4 FEATURE ANALYSIS

These algorithms aim to find structural features that exist even when the pose, viewpoint, or
lighting conditions vary, and then use these to locate faces. These methods are designed
mainly for face localization.

2.4.1 Feature Searching

Viola Jones Method:

Paul Viola and Michael Jones [39] presented an approach for object detection that minimizes computation time while achieving high detection accuracy: a fast and robust method for face detection that was 15 times quicker than any technique at the time of release, with 95% accuracy at around 17 fps. The technique relies on the use of simple Haar-like features that are evaluated quickly through the use of a new image representation. Based on the concept of an "Integral Image", it generates a large set of features and uses the boosting algorithm AdaBoost to reduce the over-complete set, and the introduction of a degenerate tree of boosted classifiers provides robust and fast inference. The detector is applied in a scanning fashion on gray-scale images; the scanned window can be scaled, as can the features evaluated.
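The integral image behind this speed-up can be sketched in a few lines of NumPy; rect_sum is a hypothetical helper name:

import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    # ii[y, x] = sum of img[:y, :x]; the extra zero row/column
    # simplifies the corner lookups.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii: np.ndarray, x: int, y: int, w: int, h: int) -> int:
    # Sum of the w x h rectangle with top-left corner (x, y): four lookups,
    # regardless of the rectangle size.
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# A two-rectangle Haar-like feature is then simply
# rect_sum(ii, ...) - rect_sum(ii, ...).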
Gabor Feature Method:

Sharif et al. proposed an Elastic Bunch Graph Map (EBGM) algorithm that successfully implements face detection using Gabor filters. The proposed system applies 40 different Gabor filters to an image, producing 40 filtered images with different angles and orientations. Next, the maximum intensity points in each filtered image are calculated and marked as fiducial points. The system reduces these points according to the distance between them, and the distances between the reduced points are then calculated using the distance formula. Finally, the distances are compared with the database; if a match occurs, the faces in the image are detected. The Gabor filter [40] has the standard form g(x, y) = exp(−(x′² + γ²y′²)/(2σ²)) cos(2πx′/λ + ψ), where x′ = x cos θ + y sin θ and y′ = −x sin θ + y cos θ.
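A sketch of such a 40-filter Gabor bank (5 scales × 8 orientations) using OpenCV's getGaborKernel; all parameter values here are illustrative assumptions:

import cv2
import numpy as np

gray = cv2.imread("face.jpg", 0).astype(np.float32)   # hypothetical input

responses = []
for scale in range(5):
    lambd = 4.0 * (2 ** (scale / 2.0))                # wavelength grows per scale
    for k in range(8):
        theta = k * np.pi / 8                         # 8 orientations
        kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=lambd / 2,
                                    theta=theta, lambd=lambd,
                                    gamma=0.5, psi=0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))

# Fiducial-point candidates: the maximum-intensity location of each
# of the 40 filtered images.
points = [np.unravel_index(np.argmax(r), r.shape) for r in responses]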

2.5 CONSTELLATION METHOD

All the methods discussed so far are able to track faces, but locating faces of various poses against a complex background is still truly difficult. To reduce this difficulty, investigators group facial features into face-like constellations using more robust modelling approaches such as statistical analysis. Various types of face constellations have been proposed by Burl et al., who make use of statistical shape theory on features detected from a multiscale Gaussian derivative filter. Huang et al. also apply a Gaussian filter for pre-processing in a framework based on image feature analysis. The methods that follow take the image-based approach.

2.6 LINEAR SUBSPACE METHOD

Eigenfaces Method:

An early example of employing eigenvectors in face recognition was given by Kohonen, in which a simple neural network was demonstrated to perform face recognition for aligned and normalized face images. Kirby and Sirovich suggested that images of faces can be linearly encoded using a modest number of basis images; the idea was arguably proposed first by Pearson in 1901 and then by Hotelling in 1933. Given a collection of n-by-m-pixel training images represented as vectors of size m × n, basis vectors spanning an optimal subspace are determined such that the mean square error between the projection of the training images onto this subspace and the original images is minimized. They call the set of optimal basis vectors eigenpictures, since these are simply the eigenvectors of the covariance matrix computed from the vectorized face images in the training set. Experiments with a set of 100 images show that a face image of 91 × 50 pixels can be effectively encoded using only 50 eigenpictures. A numeric sketch of this procedure is given after Fig 2.4.

Fig 2.4: A reasonable likeness (i.e. capturing 95 percent of the variance)
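A minimal numeric sketch of the eigenpictures procedure in NumPy; the random training matrix is a placeholder for 100 real face images of 91 × 50 pixels:

import numpy as np

rng = np.random.default_rng(1)
faces = rng.random((100, 91 * 50))          # 100 vectorized training images

mean_face = faces.mean(axis=0)
A = faces - mean_face

# Eigenvectors of the small 100 x 100 matrix A A^T yield those of the
# huge covariance matrix A^T A (the classic eigenfaces trick).
eigvals, eigvecs = np.linalg.eigh(A @ A.T)
order = np.argsort(eigvals)[::-1][:50]      # keep 50 eigenpictures
eigenfaces = (A.T @ eigvecs[:, order]).T    # shape (50, 4550)
eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)

# Encode a face with only 50 coefficients, then reconstruct it.
weights = eigenfaces @ (faces[0] - mean_face)
reconstruction = mean_face + weights @ eigenfaces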

2.7 STATISTICAL APPROACH

Support Vector Machine (SVM):


SVMs were first introduced by Osuna et al. for face detection. SVMs work as a new paradigm for training polynomial function, neural network, or radial basis function (RBF) classifiers. SVMs work on an induction principle, called structural risk minimization, which aims to minimize an upper bound on the expected generalization error. An SVM classifier is a linear classifier whose separating hyperplane is chosen to minimize the expected classification error of the unseen test patterns. Osuna et al. developed an efficient method to train an SVM for large-scale problems and applied it to face detection. Based on two test sets of 10,000,000 test patterns of 19 × 19 pixels, their system has slightly lower error rates and runs approximately 30 times faster than the system by Sung and Poggio. SVMs have also been used to detect faces and pedestrians in the wavelet domain.
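A minimal sketch of such a face/non-face SVM on 19 × 19 windows, assuming scikit-learn (the training data is a random placeholder; the second-degree polynomial kernel follows Osuna et al.):

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.random((200, 19 * 19))              # 200 flattened 19 x 19 patches
y = np.repeat([1, 0], 100)                  # 1 = face, 0 = non-face

clf = SVC(kernel="poly", degree=2, C=1.0)   # polynomial-kernel SVM classifier
clf.fit(X, y)

window = rng.random((1, 19 * 19))           # one candidate window to classify
print("face" if clf.predict(window)[0] == 1 else "non-face")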

2.8 HARDWARE REQUIREMENTS
Processor: Intel Core i3 or above
RAM: 4.0 GB or above
Hard disk: 250 GB or above

2.9 SOFTWARE REQUIREMENTS

Programming Language: Python 3.x

Framework: OpenCV
Operating System: Windows 8 or above

3. DIGITAL IMAGE PROCESSING

3.1 DIGITAL IMAGE PROCESSING

Interest in digital image processing methods stems from two principal application areas:

1. Improvement of pictorial information for human interpretation

2. Processing of scene data for autonomous machine perception

In this second application area, interest focuses on procedures for extracting image information in a form suitable for computer processing. Examples include automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, automatic processing of fingerprints, etc.

Image:

An image refers to a 2D light intensity function f(x, y), where (x, y) denotes spatial coordinates and the value of f at any point (x, y) is proportional to the brightness or gray level of the image at that point. A digital image is an image f(x, y) that has been discretized both in spatial coordinates and in brightness. The elements of such a digital array are called image elements or pixels.

A simple image model:

To be suitable for computer processing, an image f(x, y) must be digitized both spatially and in amplitude. Digitization of the spatial coordinates (x, y) is called image sampling, and amplitude digitization is called gray-level quantization.

The storage and processing requirements increase rapidly with the spatial resolution and the number of gray levels.

Example: a 256 gray-level image of size 256 × 256 occupies 256 × 256 pixels × 1 byte/pixel = 65,536 bytes = 64 KB of memory.

Types of image processing

• Low level processing

• Medium level processing

• High level processing

Low level processing means performing basic operations on images, such as reading an image, resizing, rotating, RGB-to-gray-level conversion, histogram equalization, etc. The output obtained after low level processing is a raw image. Medium level processing means extracting regions of interest from the output of low level processing; it deals with the identification of boundaries, i.e. edges, a process called segmentation. High level processing deals with adding artificial intelligence to the medium level processed signal.

3.2 FUNDAMENTAL STEPS IN IMAGE PROCESSING


The fundamental steps in image processing are:

1. Image acquisition: to acquire a digital image.
2. Image pre-processing: to improve the image in ways that increase the chances of success of the other processes.
3. Image segmentation: to partition an input image into its constituent parts or objects.
4. Image representation: to convert the input data to a form suitable for computer processing.
5. Image description: to extract features that yield some quantitative information of interest, or features that are basic for differentiating one class of objects from another.
6. Image recognition: to assign a label to an object based on the information provided by its description.

A minimal sketch of these steps is given below.
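An illustrative sketch of steps 1-5 with OpenCV; the input file name is a hypothetical placeholder, and contour bounding boxes stand in for representation and description:

import cv2

img = cv2.imread("classroom.jpg")                     # 1. acquisition
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
smooth = cv2.GaussianBlur(gray, (5, 5), 0)            # 2. pre-processing
_, binary = cv2.threshold(smooth, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # 3. segmentation
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]       # 4-5. representation/description
# 6. recognition would assign a label to each described region.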

3.3 ELEMENTS OF DIGITAL IMAGE PROCESSING SYSTEMS

A digital image processing system contains the following blocks as shown in the figure

Fig 3.2: Elements of digital image processing

The basic operations performed in a digital image processing system include

1. Acquisition

2. Storage

3. Processing

4. Communication

5. Display

3.3.1 A simple image formation model

Images are denoted by a two-dimensional function f(x, y), which may be characterized by two components:

1. The amount of source illumination i(x, y) incident on the scene
2. The amount of illumination r(x, y) reflected by the objects of the scene

These combine as f(x, y) = i(x, y) r(x, y), where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1. Typical values of the reflectance r(x, y) (average values):

• 0.01 for black velvet
• 0.65 for stainless steel
• 0.80 for flat white wall paint
• 0.90 for silver-plated metal
• 0.93 for snow

Examples of typical ranges of the illumination i(x, y) for visible light (average values):

• Sun on a clear day: ~90,000 lm/m², down to 10,000 lm/m² on a cloudy day
• Full moon on a clear evening: ~0.1 lm/m²
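As a toy numeric check of the model (pure NumPy; the 2 × 2 scene is made up for illustration):

import numpy as np

# A cloudy-day scene (i = 10,000 lm/m^2) containing snow (r = 0.93)
# next to black velvet (r = 0.01).
i = np.full((2, 2), 10_000.0)                 # illumination component
r = np.array([[0.93, 0.93], [0.01, 0.01]])    # reflectance component
f = i * r                                     # brightness: 9300 vs. 100
print(f)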

Fig 3.3: Table of image types and their descriptions.

4. Project Activities
- Open and run the program AMS_Run.py.

Fig 4.1 Project Folder

Fig 4.2 MainUI

- After running, you need to give your face data to the system, so enter your ID and name in the boxes, then click on the `Take Images` button.

Fig 4.3 Registering Face for Attendance

- It will collect 200 images of your face and save the images in the `TrainingImage` folder.

Fig 4.4 TrainImage Folder

- After that we need to train a model: click on the `Train Image` button.

Fig 4.5 Successful Registration of Face after Training

- Training will take 5-10 minutes (for data from 10 people).

Fig 4.6 Successful Registration of Face

- After training, click on `Automatic Attendance`; it fills in attendance using your face and our trained model (the model is saved in `TrainingImageLabel`).

Fig 4.7 When Facemodel Trained

Fig 4.8 Program Recognising Face from its TrainImage Folder

- It will create a `.csv` attendance file according to time & subject.

Fig 4.9 Successful Attendance Report File Generated Here

- You can store the data in a database (install WampServer); change the DB name in `AMS_Run.py` according to your setup.

- The `Manually Fill Attendance` button in the UI is for filling in attendance manually (without face recognition); it also creates a `.csv` file and stores the data in the database.

5. Modules
A module allows you to logically organize your Python code. Grouping related code into a
module makes the code easier to understand and use. A module is a Python object with arbitrarily
named attributes that you can bind and reference.
Simply, a module is a file consisting of Python code. A module can define functions, classes and
variables. A module can also include runnable code.

Example
The Python code for a module named aname normally resides in a file named aname.py. Here's an example of a simple module, support.py:
def print_func(par):
    print("Hello :", par)
    return

5.1 The import Statement


You can use any Python source file as a module by executing an import statement in some other
Python source file. The import has the following syntax −
import module1[, module2[, ... moduleN]]
When the interpreter encounters an import statement, it imports the module if the module is
present in the search path. A search path is a list of directories that the interpreter searches before
importing a module. For example, to import the module support.py, you need to put the following
command at the top of the script −
#!/usr/bin/python

# Import module support


import support

# Now you can call a function defined in that module, as follows


support.print_func("Zara")
When the above code is executed, it produces the following result −
Hello : Zara
A module is loaded only once, regardless of the number of times it is imported. This prevents the
module execution from happening over and over again if multiple imports occur.

5.2 The from...import Statement


Python's from statement lets you import specific attributes from a module into the current
namespace. The from...import has the following syntax −
from modname import name1[, name2[, ... nameN]]
For example, to import the function fibonacci from the module fib, use the following statement

from fib import fibonacci

This statement does not import the entire module fib into the current namespace; it just introduces
the item fibonacci from the module fib into the global symbol table of the importing module.

The from...import * Statement


It is also possible to import all names from a module into the current namespace by using the
following import statement −
from modname import *
This provides an easy way to import all the items from a module into the current namespace;
however, this statement should be used sparingly.

5.3 Locating Modules


When you import a module, the Python interpreter searches for the module in the following
sequences −
• The current directory.
• If the module isn't found, Python then searches each directory in the shell variable PYTHONPATH.
• If all else fails, Python checks the default path. On UNIX, this default path is normally /usr/local/lib/python/.
The module search path is stored in the system module sys as the sys.path variable. The sys.path
variable contains the current directory, PYTHONPATH, and the installation-dependent default.

The PYTHONPATH Variable


The PYTHONPATH is an environment variable, consisting of a list of directories. The syntax of
PYTHONPATH is the same as that of the shell variable PATH.
Here is a typical PYTHONPATH from a Windows system −
set PYTHONPATH = c:\python20\lib;
And here is a typical PYTHONPATH from a UNIX system −
set PYTHONPATH = /usr/local/lib/python

5.4 Namespaces and Scoping
Variables are names (identifiers) that map to objects. A namespace is a dictionary of variable
names (keys) and their corresponding objects (values).
A Python statement can access variables in a local namespace and in the global namespace. If a
local and a global variable have the same name, the local variable shadows the global variable.
Each function has its own local namespace. Class methods follow the same scoping rule as
ordinary functions.
Python makes educated guesses on whether variables are local or global. It assumes that any
variable assigned a value in a function is local.
Therefore, in order to assign a value to a global variable within a function, you must first use the
global statement.
The statement global VarName tells Python that VarName is a global variable. Python stops
searching the local namespace for the variable.
For example, we define a variable Money in the global namespace. Within a function we then assign Money a value, so Python assumes Money is a local variable. However, we access the value of the local variable Money before setting it, so an UnboundLocalError is the result. Uncommenting the global statement fixes the problem, as the sketch below shows.
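A minimal version of that example; the function name add_money is an assumed placeholder:

Money = 2000

def add_money():
    # Uncomment the following line to fix the UnboundLocalError:
    # global Money
    Money = Money + 1

print(Money)
add_money()
print(Money)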

5.5 The globals() and locals() Functions


The globals() and locals() functions can be used to return the names in the global and local
namespaces depending on the location from where they are called.
If locals() is called from within a function, it will return all the names that can be accessed locally
from that function.
If globals() is called from within a function, it will return all the names that can be accessed
globally from that function.
The return type of both these functions is a dictionary, so names can be extracted using the keys() method.

5.6 The reload() Function


When a module is imported into a script, the code in the top-level portion of the module is executed only once.
Therefore, if you want to re-execute the top-level code in a module, you can use the reload() function (in Python 3 this is provided as importlib.reload()). The reload() function imports a previously imported module again. The syntax of the reload() function is this −
reload(module_name)
Here, module_name is the module you want to reload, not a string containing the module name. For example, to reload the hello module, do the following −
reload(hello)

5.7 Packages in Python
A package is a hierarchical file directory structure that defines a single Python application
environment that consists of modules and subpackages and sub-subpackages, and so on.
Consider a file Pots.py available in the Phone directory. This file has the following lines of source code −

#!/usr/bin/python

def Pots():
    print("I'm Pots Phone")
In a similar way, we have another two files containing different functions with the same names as above −

• Phone/Isdn.py, with function Isdn()
• Phone/G3.py, with function G3()
Now, create one more file, __init__.py, in the Phone directory −

• Phone/__init__.py
To make all of your functions available when you've imported Phone, you need to put explicit
import statements in __init__.py as follows −
from .Pots import Pots
from .Isdn import Isdn
from .G3 import G3
After you add these lines to __init__.py, all of these functions are available when you import the Phone package.
#!/usr/bin/python

# Now import your Phone Package.


import Phone

Phone.Pots()
Phone.Isdn()
Phone.G3()
When the above code is executed, it produces the following result −
I'm Pots Phone
I'm ISDN Phone
I'm 3G Phone

6. Frameworks Used
6.1 OpenCV
OpenCV is a library, usable from Python, which is designed to solve computer vision problems. OpenCV was originally developed in 1999 by Intel and was later supported by Willow Garage.

OpenCV supports a wide variety of programming languages, such as C++, Python and Java, and runs on multiple platforms, including Windows, Linux and macOS.

OpenCV-Python is essentially a wrapper around the original C++ library, to be used with Python. With it, all of the OpenCV array structures get converted to/from NumPy arrays.

This makes it easy to integrate OpenCV with other libraries that use NumPy, such as SciPy and Matplotlib.

Let us now look at some of the basic operations that we can perform with OpenCV.

6.2 Basic Operations with OpenCV


Let us look at various concepts ranging from loading images to resizing them and so on.

6.3 Loading an image using OpenCV:


import cv2

# colored image
img = cv2.imread("Penguins.jpg", 1)

# black and white (gray scale)
img_1 = cv2.imread("Penguins.jpg", 0)

As seen in the above piece of code, the first requirement is to import the OpenCV module. Later we read the image using the imread function. The 1 in the parameters denotes that the image is read in color; if the parameter were 0 instead of 1, the image would be imported as a black and white (gray-scale) image. The name of the image here is 'Penguins'. Pretty straightforward, right?

6.4 Image Shape/Resolution:


We can make use of the shape attribute to print out the shape of the image. Check out the code below:

import cv2

# black and white (gray scale)
img = cv2.imread("Penguins.jpg", 0)

print(img.shape)

By the shape of the image, we mean the shape of the NumPy array. As you can see from executing the code, the matrix consists of 768 rows and 1024 columns.

6.5 Displaying the image:


Displaying an image using OpenCV is pretty simple and straightforward. Consider the code below:

import cv2

# black and white (gray scale)
img = cv2.imread("Penguins.jpg", 0)

cv2.imshow("Penguins", img)
cv2.waitKey(0)
# cv2.waitKey(2000)
cv2.destroyAllWindows()

As you can see, we first import the image using imread. We require a window to display the image, right?

We use the imshow function to display the image by opening a window. There are two parameters to the imshow function: the name of the window and the image object to be displayed.

Later, we wait for a user event: waitKey makes the window static until the user presses a key, and the parameter passed to it is the time in milliseconds (0 means wait indefinitely).

And lastly, we use destroyAllWindows to close the window once waitKey returns.

6.6 Resizing the image:


Similarly, resizing an image is very easy. Here's another code snippet:

import cv2

# black and white (gray scale)
img = cv2.imread("Penguins.jpg", 0)

resized_image = cv2.resize(img, (650, 500))

cv2.imshow("Penguins", resized_image)

cv2.waitKey(0)

cv2.destroyAllWindows()

6.7 Face Detection Using OpenCV
This seems complex at first but it is very easy. Let me walk you through the entire process and you
will feel the same.

Step 1: Considering our prerequisites, we will require an image, to begin with. Later we need to
create a cascade classifier which will eventually give us the features of the face.

Step 2: This step involves making use of OpenCV which will read the image and the features file.
So at this point, there are NumPy arrays at the primary data points.

All we need to do is to search for the row and column values of the face NumPy ndarray. This is
the array with the face rectangle coordinates.

Step 3: This final step involves displaying the image with the rectangular face box.

First, we create a CascadeClassifier object to extract the features of the face as explained earlier.
The path to the XML file which contains the face features is the parameter here.

The next step would be to read an image with a face in it and convert it into a black and white image using COLOR_BGR2GRAY. Following this, we search for the coordinates of the face; this is done using detectMultiScale.

What coordinates, you ask? The coordinates of the face rectangle. The scaleFactor parameter is used to shrink the search window by 5% at each pass until the face is found; on the whole, the smaller the value, the greater the accuracy (at the cost of speed).

Finally, the face is printed on the window.

Fig 6.1 Working of OpenCV

6.8 Adding the rectangular face box:
This logic is very simple, as simple as making use of a for loop. We create the rectangle using cv2.rectangle, passing parameters such as the image object, the RGB values of the box outline and the width of the rectangle.

Let us check out the entire code for face detection:

import cv2

# Create a CascadeClassifier object
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# Reading the image as it is
img = cv2.imread("photo.jpg")

# Reading the image as a gray-scale image
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Search for the coordinates of the face rectangles
faces = face_cascade.detectMultiScale(gray_img, scaleFactor=1.05,
                                      minNeighbors=5)

for x, y, w, h in faces:
    img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3)

resized = cv2.resize(img, (int(img.shape[1] / 7), int(img.shape[0] / 7)))
cv2.imshow("Gray", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()

7. Conclusion and Future Work

7.1 Conclusion
We developed an Attendance System Using Face Recognition that uses various Python frameworks (OpenCV, Pillow, NumPy, etc.). Through this project we can easily take attendance without using pen and paper. The automatic attendance feature needs a webcam to capture the student's image; attendance is marked only if the student is physically present in the class, otherwise the student is marked as absent. Teachers can fill in the attendance of students manually if the webcam is not working properly.

7.2 Future Works

We tried our best to develop the project as per the SRS and within the specified period of time, and it is a matter of satisfaction that we finished the work accordingly. However, this project has a broad scope. Given time and opportunity, we would like to add the following features to this app:

1. Students Registration
2. Fees Management
3. Entry Pass for students
4. Visitors Profile for Campus Visit

References

[1] Stack Overflow, "Learn more about OpenCV", https://stackoverflow.com/questions/tagged/opencv

[2] Stack Overflow, "Image processing with OpenCV in Python", https://stackoverflow.com/questions/58960158/image-processing-with-opencv-in-python

[3] GitHub, "Project Analysis and Idea", https://github.com/vishmaurya456/ams.git

Appendix
Libraries Used

Automatic Management System

GUI for manually filling attendance
def manually_fill():
    global sb
    sb = tk.Tk()
    sb.iconbitmap('AMS.ico')
    sb.title("Enter subject name...")
    sb.geometry('580x320')
    sb.configure(background='snow')

    def err_screen_for_subject():
        def ec_delete():
            ec.destroy()
        global ec
        ec = tk.Tk()
        ec.geometry('300x100')
        ec.iconbitmap('AMS.ico')
        ec.title('Warning!!')
        ec.configure(background='snow')
        Label(ec, text='Please enter your subject name!!!', fg='red',
              bg='white', font=('times', 16, ' bold ')).pack()
        Button(ec, text='OK', command=ec_delete, fg="black", bg="lawn green",
               width=9, height=1, activebackground="Red",
               font=('times', 15, ' bold ')).place(x=90, y=50)

    def fill_attendance():
        ts = time.time()
        Date = datetime.datetime.fromtimestamp(ts).strftime('%Y_%m_%d')
        timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
        Time = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
        Hour, Minute, Second = timeStamp.split(":")

        # Connect to the database
        try:
            global cursor
            connection = pymysql.connect(host='localhost', port=3306,  # port must be an int
                                         user='user', password='',
                                         db='manually_fill_attendance')
            cursor = connection.cursor()
        except Exception as e:
            print(e)

        sql = "CREATE TABLE " + DB_table_name + """
        (ID INT NOT NULL AUTO_INCREMENT,
        ENROLLMENT varchar(100) NOT NULL,
        NAME VARCHAR(50) NOT NULL,
        DATE VARCHAR(20) NOT NULL,
        TIME VARCHAR(20) NOT NULL,
        PRIMARY KEY (ID));
        """

        try:
            cursor.execute(sql)   # create the attendance table for this subject
        except Exception as ex:
            print(ex)

        if subb == '':
            err_screen_for_subject()
        else:
            sb.destroy()
            MFW = tk.Tk()
            MFW.iconbitmap('AMS.ico')
            MFW.title("Manually attendance of " + str(subb))
            MFW.geometry('880x470')
            MFW.configure(background='snow')

            def del_errsc2():
                errsc2.destroy()

            def err_screen1():
                global errsc2
                errsc2 = tk.Tk()
                errsc2.geometry('330x100')
                errsc2.iconbitmap('AMS.ico')
                errsc2.title('Warning!!')
                errsc2.configure(background='snow')
                Label(errsc2, text='Please enter Student & Enrollment!!!',
                      fg='red', bg='white',
                      font=('times', 16, ' bold ')).pack()
                Button(errsc2, text='OK', command=del_errsc2, fg="black",
                       bg="lawn green", width=9, height=1,
                       activebackground="Red",
                       font=('times', 15, ' bold ')).place(x=90, y=50)

            def testVal(inStr, acttyp):
                if acttyp == '1':  # insert
                    if not inStr.isdigit():
                        return False
                return True

            ENR = tk.Label(MFW, text="Enter Enrollment", width=15, height=2,
                           fg="white", bg="blue2",
                           font=('times', 15, ' bold '))
            ENR.place(x=30, y=100)

            STU_NAME = tk.Label(MFW, text="Enter Student name", width=15,
                                height=2, fg="white", bg="blue2",
                                font=('times', 15, ' bold '))
            STU_NAME.place(x=30, y=200)

            global ENR_ENTRY
            ENR_ENTRY = tk.Entry(MFW, width=20, validate='key', bg="yellow",
                                 fg="red", font=('times', 23, ' bold '))
            ENR_ENTRY['validatecommand'] = (ENR_ENTRY.register(testVal),
                                            '%P', '%d')
            ENR_ENTRY.place(x=290, y=105)

            def remove_enr():
                ENR_ENTRY.delete(first=0, last=22)

            STUDENT_ENTRY = tk.Entry(MFW, width=20, bg="yellow", fg="red",
                                     font=('times', 23, ' bold '))
            STUDENT_ENTRY.place(x=290, y=205)

            def remove_student():
                STUDENT_ENTRY.delete(first=0, last=22)

For taking images for the dataset
def take_img():
    l1 = txt.get()
    l2 = txt2.get()
    if l1 == '':
        err_screen()
    elif l2 == '':
        err_screen()
    else:
        try:
            cam = cv2.VideoCapture(0)
            detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
            Enrollment = txt.get()
            Name = txt2.get()
            sampleNum = 0
            while True:
                ret, img = cam.read()
                gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                faces = detector.detectMultiScale(gray, 1.3, 5)
                for (x, y, w, h) in faces:
                    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                    # incrementing sample number
                    sampleNum = sampleNum + 1
                    # saving the captured face in the dataset folder
                    cv2.imwrite("TrainingImage/" + Name + "." + Enrollment +
                                '.' + str(sampleNum) + ".jpg",
                                gray[y:y + h, x:x + w])
                cv2.imshow('Frame', img)
                # wait 1 ms for a key press; quit early with 'q'
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
                # stop once more than 70 samples have been collected
                elif sampleNum > 70:
                    break
            cam.release()
            cv2.destroyAllWindows()
            ts = time.time()
            Date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
            Time = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
            row = [Enrollment, Name, Date, Time]
            with open('StudentDetails\StudentDetails.csv', 'a+') as csvFile:
                writer = csv.writer(csvFile, delimiter=',')
                writer.writerow(row)
            res = "Images Saved for Enrollment : " + Enrollment + " Name : " + Name
            Notification.configure(text=res, bg="SpringGreen3", width=50,
                                   font=('times', 18, 'bold'))
            Notification.place(x=250, y=400)
        except FileExistsError as F:
            f = 'Student Data already exists'
            Notification.configure(text=f, bg="Red", width=21)
            Notification.place(x=450, y=400)
For training the model
def trainimg():
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    global detector
    detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    try:
        global faces, Id
        faces, Id = getImagesAndLabels("TrainingImage")
    except Exception as e:
        l = 'please make "TrainingImage" folder & put Images'
        Notification.configure(text=l, bg="SpringGreen3", width=50,
                               font=('times', 18, 'bold'))
        Notification.place(x=350, y=400)

    recognizer.train(faces, np.array(Id))
    try:
        recognizer.save("TrainingImageLabel\Trainner.yml")
    except Exception as e:
        q = 'Please make "TrainingImageLabel" folder'
        Notification.configure(text=q, bg="SpringGreen3", width=50,
                               font=('times', 18, 'bold'))
        Notification.place(x=350, y=400)

    res = "Model Trained"  # + ",".join(str(f) for f in Id)
    Notification.configure(text=res, bg="SpringGreen3", width=50,
                           font=('times', 18, 'bold'))
    Notification.place(x=250, y=400)

def getImagesAndLabels(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    # create empty face list
    faceSamples = []
    # create empty ID list
    Ids = []
    # now loop through all the image paths, loading the Ids and the images
    for imagePath in imagePaths:
        # load the image and convert it to gray scale
        pilImage = Image.open(imagePath).convert('L')
        # convert the PIL image into a numpy array
        imageNp = np.array(pilImage, 'uint8')
        # get the Id from the image file name
        Id = int(os.path.split(imagePath)[-1].split(".")[1])
        # extract the face from the training image sample
        faces = detector.detectMultiScale(imageNp)
        # if a face is there, append it to the list along with its Id
        for (x, y, w, h) in faces:
            faceSamples.append(imageNp[y:y + h, x:x + w])
            Ids.append(Id)
    return faceSamples, Ids

window.grid_rowconfigure(0, weight=1)
window.grid_columnconfigure(0, weight=1)
window.iconbitmap('AMS.ico')

def on_closing():
    from tkinter import messagebox
    if messagebox.askokcancel("Quit", "Do you want to quit?"):
        window.destroy()

window.protocol("WM_DELETE_WINDOW", on_closing)

message = tk.Label(window, text="Face-Recognition-Based-Attendance-Management-System",
                   bg="cyan", fg="black", width=50,
                   height=3, font=('times', 30, 'italic bold '))
message.place(x=80, y=20)

Notification = tk.Label(window, text="All things good", bg="Green",
                        fg="white", width=15,
                        height=3, font=('times', 17, 'bold'))

lbl = tk.Label(window, text="Enter Enrollment", width=20, height=2,
               fg="black", bg="deep pink", font=('times', 15, ' bold '))
lbl.place(x=200, y=200)

def testVal(inStr, acttyp):
    if acttyp == '1':  # insert
        if not inStr.isdigit():
            return False
    return True

txt = tk.Entry(window, validate="key", width=20, bg="yellow", fg="red",
               font=('times', 25, ' bold '))
txt['validatecommand'] = (txt.register(testVal), '%P', '%d')
txt.place(x=550, y=210)

lbl2 = tk.Label(window, text="Enter Name", width=20, fg="black",
                bg="deep pink", height=2, font=('times', 15, ' bold '))
lbl2.place(x=200, y=300)

txt2 = tk.Entry(window, width=20, bg="yellow", fg="red",
                font=('times', 25, ' bold '))
txt2.place(x=550, y=310)

clearButton = tk.Button(window, text="Clear", command=clear, fg="black",
                        bg="deep pink", width=10, height=1,
                        activebackground="Red", font=('times', 15, ' bold '))
clearButton.place(x=950, y=210)

clearButton1 = tk.Button(window, text="Clear", command=clear1, fg="black",
                         bg="deep pink", width=10, height=1,
                         activebackground="Red", font=('times', 15, ' bold '))
clearButton1.place(x=950, y=310)

AP = tk.Button(window, text="Check Registered students", command=admin_panel,
               fg="black", bg="cyan", width=19, height=1,
               activebackground="Red", font=('times', 15, ' bold '))
AP.place(x=990, y=410)

takeImg = tk.Button(window, text="Take Images", command=take_img, fg="white",
                    bg="blue2", width=20, height=3, activebackground="Red",
                    font=('times', 15, ' bold '))
takeImg.place(x=90, y=500)

trainImg = tk.Button(window, text="Train Images", command=trainimg, fg="black",
                     bg="lawn green", width=20, height=3,
                     activebackground="Red", font=('times', 15, ' bold '))
trainImg.place(x=390, y=500)

FA = tk.Button(window, text="Automatic Attendance", command=subjectchoose,
               fg="white", bg="blue2", width=20, height=3,
               activebackground="Red", font=('times', 15, ' bold '))
FA.place(x=690, y=500)

quitWindow = tk.Button(window, text="Manually Fill Attendance",
                       command=manually_fill, fg="black", bg="lawn green",
                       width=20, height=3, activebackground="Red",
                       font=('times', 15, ' bold '))
quitWindow.place(x=990, y=500)

window.mainloop()
