
NEURAL NETWORKS AND FACE RECOGNITION

Submitted by

V.V.V. PAPARAO                            K. VISWESWARA RAO
III/IV B.Tech                             III/IV B.Tech
J.N.T.U. College of Engg., Anantapur      Regency Institute of Technology, Yanam

Email: papsvvv@yahoo.com, rammy_666@yahoo.com
Contact no.: 9440478726
ABSTRACT

Face recognition is an inherent capability of human
beings; identifying a person by face has been one of the most
fundamental human functions since time immemorial.
Face recognition by computer aims to endow a machine
with the capability to approximate, in some sense, this
human ability. Imparting this basic human capability to a
machine has been a subject of interest over the last few
years. Such a machine would find considerable utility in
many commercial transactions, personnel management,
security and law enforcement applications, especially
criminal identification and authentication in secure systems.
Comparatively little research has been carried out on the
identification of human faces; however, a number of
automated or semi-automated recognition studies have been
reported.

Artificial Neural Networks (ANNs) are an attempt to
simulate the human brain; hence the name. Neural networks,
inspired by studies of biological nervous systems, have
recently been used for various applications because they
distribute computation over a large number of simple
processing units (neurons). These neurons, or nodes, are
simple non-linear computational elements connected by links
with variable weights. The inherent parallelism of these
networks provides high computational rates with a greater
degree of robustness, or fault tolerance, than conventional
computers. The fault tolerance arises because many
processing nodes are present, each responsible for a small
portion of the task; damage to a few nodes or links does not
significantly impair overall performance.
BASIC CONCEPTS OF RECOGNITION:

Recognition is regarded as a basic attribute of human
beings, as well as of other living organisms. A pattern is the
description of an object to be recognized. Recognition of a
concrete pattern by a human being may be considered a
psycho-physiological problem involving a relationship between
a person and a physical stimulus. When a person perceives a
pattern, he makes an inductive inference and associates the
perception with some general concepts derived from his prior
experience. Human recognition is really a question of
estimating the relative odds that the input data can be
associated with one of a known set of statistical objects,
which depend on our past experience and which form the
clues and a priori information for recognition.

Thus the problem of pattern recognition may be
regarded as one of discriminating the input data, not
between individual patterns, but between populations of
patterns, via the search for features or invariant attributes
among members of a population.
The basic aim is to make a machine work like a human
being, i.e., to develop the theory and techniques for the design
of devices capable of performing a given recognition task. This
effort may be traced to the early 1950s, when the digital
computer first became a readily available information-processing tool.
TASKS OF FACE RECOGNITION:
Face recognition is one kind of pattern recognition
system. Pattern recognition can be defined as the
categorization of input data into identifiable classes via the
extraction of significant features or attributes of the data from
a background of irrelevant detail.
WHAT IS AN ARTIFICIAL NEURAL NETWORK?
Artificial neural networks are relatively crude electronic
models based on the neural structure of the brain. An initial
understanding of the natural thinking mechanism shows that the
brain stores information as patterns. This process of storing
information as patterns, utilizing those patterns and then
solving problems is what a neural network encompasses. The
field also uses words very different from those of traditional
computing: behave, react, self-organize, learn, generalize and
forget.
Neural networks can be trained to identify correlative patterns
between input and target values and can subsequently predict
outcomes from new input conditions. Neural networks generally
consist of a number of interconnected processing elements, or
neurons; how the inter-neuron connections are arranged and the
nature of the connections determine the structure of the network.
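A single processing element of the kind just described can be written in a few lines. This is a generic illustration, not code from the paper; the weights, inputs and bias below are made-up values.

```python
# One artificial neuron: weighted inputs summed and passed
# through a non-linear (sigmoid) squashing function.
import math

def neuron(inputs, weights, bias=0.0):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(neuron([1.0, 0.5], [0.4, -0.2]))  # ≈ 0.574
```

The variable link weights are what training adjusts; the sigmoid keeps every output in (0, 1), matching the normalized values used later in the paper.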

RECOGNITION OF HUMAN FACE USING INTERCONNECTED NETWORK

Introduction:
Identifying a person by his face has been one of the most fundamental human
functions since time immemorial. To impart this basic human capability to a
machine has been a subject of interest over the last few years. Such a machine
would find considerable utility in many commercial transactions, personnel
management, security and law enforcement applications, especially criminal
identification and authentication in secure systems. Comparatively little research
has been carried out on the identification of human faces. Recently, however, a
number of automated recognition studies have been reported. The difficulties of
face recognition are mainly two-fold:

• The number of facial patterns is very large (the set of faces to be
recognized is effectively unbounded), contrary to many pattern
recognition problems where the number of pattern classes is finite.

• The dissimilarity amongst facial patterns is inherently very small.

Recent research effort has been directed towards the
extraction of features from frontal facial photographs of
humans and their economical use in machine identification of
human faces. The strategy best suited here is to obtain the
outline of the side profile and extract discrete features from
it. This technique has been used by L. D. Harmon et al. for
the recognition of human faces.

SELECTION OF FACIAL MARKS:

Looking at the side profile of the human face, certain points can
be readily selected on the face profile which, when correctly identified,
may help in extracting characteristic features for that particular
face. Five of these facial marks are independent of each other, while
point no. 3 (the forehead point) is a reflection of point no. 2 (the
chin point) through point no. 1 (the nose point).

Marking this point helps in identifying the start of the profile. It is seen that these
points do not change with age. Therefore, five points have been selected for the
extraction of the various feature measurements used for identification. These points
are named as under:
 Point 1 Nose point
 Point 2 Chin point
 Point 3 Forehead point
 Point 4 Bridge point
 Point 5 Upper lip point
It may be seen that point 5 is a soft-tissue point and is rather difficult to
extract accurately; its position depends on the facial expression of the person
at the time the photograph is taken (i.e., smiling, laughing, frowning).

NETWORK FOR FACE RECOGNITION:


• A feature vector of 12 values is extracted from each facial pattern, and the
neural network is trained with a set of four (4) facial photographs. Thus the
network is configured with 12 input nodes and 4 output nodes.
• Two network topologies are used: the BP net, having input-to-hidden and
hidden-to-output connections, and the IO net, with additional direct
input-to-output connections.
• Instead of training the network with the 12-dimensional feature vector
directly, we have used a differential feature vector.
• This was done because the network was observed to be more stable when
trained with this differential data rather than the absolute values.
• The order of variation in facial features is very small compared to the
absolute values; hence the network cannot differentiate between feature
vectors if absolute values are chosen.
• This clearly shows that the network is dependent on the nature of the input data,
and thus pre-processing is an essential step for neural classification.
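The differential-feature idea in the bullets above can be sketched as follows. The paper does not specify the exact differencing scheme, so subtracting a reference feature vector element-wise is an assumption made here for illustration.

```python
# Differential (relative) features: differences from a reference
# vector, rather than the absolute feature values themselves.
def differential_features(features, reference):
    """Subtract a reference feature vector element-wise."""
    return [f - r for f, r in zip(features, reference)]

face = [0.52, 0.61, 0.48]   # made-up normalized features
ref  = [0.50, 0.60, 0.50]   # made-up reference face
print(differential_features(face, ref))
```

The small differences between faces, which would be swamped by the large absolute values, become the signal itself, which is why the network trains more stably on them.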
ALGORITHM:
1. First, take the BMP image that is to be recognized.

2. Convert the BMP image into a RAW image.

3. From the RAW image, calculate six facial distances and
six facial angles, and normalize these twelve values.

4. This collection of twelve values is called the twelve input parameters.
Training then starts.

5. From these twelve inputs, the weights on the links between the input
layer and the hidden layer are calculated; these are stored in a 12x7
matrix called the random matrix W.

6. From the twelve inputs and these weights, the hidden-layer
parameters are calculated.

7. Similarly, from the hidden-layer parameters, the weights on the links
between the hidden layer and the output layer are calculated. The outputs
are compared with the target matrix using the forward-propagation
algorithm.

8. If the desired accuracy is not reached, feedback is applied using the
back-propagation algorithm; in this process all link weights are
adjusted to reduce the error.

9. The error is compared with the target matrix; if the desired output is
reached, the present values are fixed as the output values. This may
require thousands of iterations.
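The nine steps can be condensed into the following NumPy sketch. The image-loading and feature-extraction stages are replaced by random stand-in vectors, and the learning rate and iteration count are assumptions, not values from the paper.

```python
# End-to-end sketch of the training loop (steps 4-9).
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = rng.uniform(0, 1, (4, 12))        # steps 3-4: 12 normalized inputs per face
T = np.eye(4)                          # target matrix: one-hot row per face
W = rng.uniform(-0.5, 0.5, (12, 7))    # step 5: random 12x7 matrix W
V = rng.uniform(-0.5, 0.5, (7, 4))     # step 7: hidden-to-output weights

lr = 0.5                               # assumed learning rate
for _ in range(5000):                  # steps 8-9: iterate until error is small
    H = sigmoid(X @ W)                 # step 6: hidden-layer parameters
    O = sigmoid(H @ V)                 # step 7: output-layer parameters
    err_out = (T - O) * O * (1 - O)    # step 8: back-propagated output error
    err_hid = (err_out @ V.T) * H * (1 - H)
    V += lr * H.T @ err_out
    W += lr * X.T @ err_hid

print(np.round(O))                     # approaches the identity target matrix
```

After a few thousand iterations the rounded outputs match the one-hot targets, which is the stopping condition described in step 9.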

Based on the output, we determine which person the present
output belongs to; for example, an output of [1 0 0 0] belongs
to the first trained face.

The modules are as follows:

Module 1:

In this module the BMP image is converted into a RAW
image. The BMP image needs to be converted because the
inputs are calculated from the RAW image and then given to
the input layer of the ANN. The RAW image is derived from
the BMP according to intensity values: all dim-coloured pixels
in the colour image are changed to black, and all bright-coloured
pixels are converted into white pixels.
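A minimal sketch of this intensity-based conversion might look as follows; the grayscale weights and the threshold value of 128 are assumptions, since the paper does not give them.

```python
# Hedged sketch: map each pixel to black or white by intensity.
import numpy as np

def bmp_to_raw(rgb, threshold=128):
    """Convert an HxWx3 RGB array to a binary (black/white) image."""
    # Assumed luminance weights; the paper only says "intensity values".
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [200, 200, 200]           # one bright pixel -> white
raw = bmp_to_raw(img)
print(raw[0, 0], raw[1, 1])           # 255 0
```

The resulting two-level image makes the profile outline easy to trace, which is what the feature-extraction step needs.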

Module 2:

In this module the distances and angles are calculated;
these constitute the twelve inputs to the input layer of the
network. These 12 values are normalized. To start training we
first calculate the 12 inputs from the given BMP image: 6 of
them are facial distances and the rest are facial angles. All of
these are calculated from the side profile of the human face,
shown below.
[Figure: side profile of the human face, labelled with the forehead point,
bridge point, nose point, middle lip point and chin curve point.]
The distances and angles are calculated as shown below.

To calculate the six distances:

[Figure: side view of the given human face, with distances D1-D6 measured
between the forehead point, bridge point, nose point, upper lip point and
chin curve point.]

D1, D2, D3, D4, D5 and D6 are the first six fiducial inputs.
TO CALCULATE THE SIX ANGLES:

[Figure: side view of the human face; the first three angles A1, A2 and A3
are measured at the bridge point, nose point and upper lip point.]

[Figure: the remaining three angles are measured similarly on the same
profile.]

A1, A2, A3, A4, A5 and A6 constitute the six angles.
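The six distances and six angles could be computed from the five profile points as sketched below. The exact point pairings are not recoverable from the diagrams, so the coordinates, pairs and triples chosen here are illustrative assumptions.

```python
# Hedged sketch: distances and angles between assumed profile points.
import math

points = {                           # made-up (x, y) profile coordinates
    "forehead": (10, 100),
    "bridge":   (20, 80),
    "nose":     (35, 60),
    "upper_lip": (28, 45),
    "chin":     (25, 20),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def angle(a, vertex, b):
    """Angle in degrees at `vertex`, formed by points a and b."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    cosang = (v1[0]*v2[0] + v1[1]*v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

# Assumed pairings for D1-D6 and triples for A1-A6.
pairs = [("forehead", "bridge"), ("bridge", "nose"), ("nose", "upper_lip"),
         ("upper_lip", "chin"), ("forehead", "nose"), ("forehead", "chin")]
distances = [dist(points[a], points[b]) for a, b in pairs]

triples = [("forehead", "bridge", "nose"), ("bridge", "nose", "upper_lip"),
           ("nose", "upper_lip", "chin"), ("forehead", "nose", "chin"),
           ("bridge", "nose", "chin"), ("forehead", "bridge", "chin")]
angles = [angle(points[a], points[v], points[b]) for a, v, b in triples]

print(len(distances), len(angles))   # 6 6
```

Together the six distances and six angles form the 12-dimensional feature vector fed to the network.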

After calculating these 12 inputs, they need to be normalized, because it is
easier to train the artificial neural network with normalized values.
From these twelve inputs we then calculate the weights and the hidden and
output parameters.

Normalization: normalization is the process of converting a value so that it
falls within the range 0 to 1.
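Two common ways to map a value into the range 0 to 1 are min-max scaling and the sigmoid squashing used later in the forward-propagation formulas; the paper does not state which was used for the raw features, so both are shown here as sketches.

```python
# Two candidate normalization schemes for the 12 inputs.
import math

def min_max_normalize(values):
    """Scale a list linearly so its minimum is 0 and maximum is 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def sigmoid_normalize(values):
    """Squash each value into (0, 1) with the logistic function."""
    return [1.0 / (1.0 + math.exp(-v)) for v in values]

raw = [12.0, 45.0, 30.0, 7.5]        # made-up distance/angle values
print(min_max_normalize(raw))        # every value now lies in [0, 1]
```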

Modules 3 & 4:

In the third module the weights on all links between the input layer and the
hidden layer, together with the hidden-layer parameters, are calculated. In the
fourth module the weights between the hidden and output layers and the four
outputs are calculated. Since a target value is available for each neuron in the
output layer, adjusting the associated weights is easily accomplished using a
modification of the delta rule. Interior layers are referred to as "hidden
layers", as their outputs have no target values for comparison; hence,
training is more complicated.

Consider the training process for a single weight from neuron p
in the hidden layer j to neuron q in the output layer k. The
output of a neuron in layer k is subtracted from its target value
to produce an ERROR signal. This is multiplied by the
derivative of the squashing function [OUT(1 - OUT)] calculated
for that neuron of layer k, thereby producing the value δ:

δ = OUT (1 - OUT) (TARGET - OUT) .......... (1)

This δ is multiplied by OUT from neuron j, the source
neuron for the weight in question. The product is in turn
multiplied by a training-rate coefficient η (typically 0.01) and
the result is added to the weight. An identical process is
performed for each weight from a neuron in the hidden
layer to a neuron in the output layer.

The following equations illustrate these calculations:

ΔW_pq,k = η δ_q,k OUT_p,j .......... (2)

W_pq,k(n+1) = W_pq,k(n) + ΔW_pq,k .......... (3)

where

W_pq,k(n) = the value of the weight from neuron p in the hidden layer to
neuron q in the output layer at step n (before adjustment); note that the
subscript k indicates that the weight is associated with its destination
layer,

W_pq,k(n+1) = the value of the weight at step n+1 (after adjustment),

δ_q,k = the value of δ for neuron q in the output layer k,

OUT_p,j = the value of OUT for neuron p in the hidden layer j.

Note that the subscripts p and q refer to specific neurons,
whereas the subscripts j and k refer to layers.

Adjusting the weights of the hidden layer: hidden layers have
no target vector, so the training process described above cannot
be used. This lack of a training target stymied efforts to train
multi-layered networks until back propagation provided a
workable algorithm. Back propagation trains the hidden layers
by propagating the output error back through the network layer
by layer, adjusting the weights at each layer.

Equations 2 and 3 are used for all layers, both output and
hidden; however, for hidden layers δ must be generated without
the benefit of a target vector. First, δ is calculated for each
neuron in the output layer, as in equation 1. It is used to adjust
the weights feeding into the output layer; then it is propagated
back through the same weights to generate a value of δ for each
neuron in the first hidden layer. These values of δ are used, in
turn, to adjust the weights of this hidden layer and, in a similar
way, are propagated back to all preceding layers.

Consider a single neuron in the hidden layer just before the
output layer. In the forward pass, this neuron propagates its
output value to the neurons in the output layer through the
interconnection weights.

During training the weights operate in reverse, passing the value of δ from
the output layer back to the hidden layer. Each of these weights is multiplied
by the δ value of the neuron to which it connects in the output layer.
Summing all such products and multiplying by the derivative of the squashing
function produces the δ value for the hidden-layer neuron:

δ_p,j = OUT_p,j (1 - OUT_p,j) Σ_q (δ_q,k W_pq,k)

With δ in hand, the weights feeding the first hidden layer can be adjusted
using equations 2 and 3, modified to indicate the correct layers.
For each neuron in a given hidden layer, δ must be
calculated, and all weights associated with that layer must be
adjusted. This is repeated, moving back towards the
input layer, layer by layer, until all the weights are adjusted.

Module 5:

In this module the forward-propagation and back-propagation
algorithms are implemented. Forward propagation is carried out
to calculate the error; back propagation is carried out to
reduce it.

FORWARD PROPAGATION:

1. The normalized inputs are used along with the random weight
matrix W (12x7) to calculate the hidden-layer parameters, as shown
by the formulas below. The hidden-layer parameters are then
normalized.

HM = Σ I · W
HMN = 1 / (1 + exp(-HM))

2. The hidden-layer parameters along with the random weight
matrix V (7x4) are used to calculate the 4 output-layer parameters.
The output-layer parameters are then normalized, as shown in the
formulas below.

OQ = Σ HMN · V
OQN = 1 / (1 + exp(-OQ))
3. If we train the algorithm on four faces then we should get
the outputs:

1 0 0 0 for the 1st face
0 1 0 0 for the 2nd face
0 0 1 0 for the 3rd face
0 0 0 1 for the 4th face

Each set of four outputs corresponds to the trained data.

4. Hence the target matrix is of the form shown below:

1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
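The forward-propagation formulas can be transcribed directly into code. The random weight matrices and the input vector below are stand-ins; their shapes (12x7 and 7x4) follow the paper.

```python
# Direct transcription of HM, HMN, OQ and OQN.
import numpy as np

rng = np.random.default_rng(2)
I = rng.uniform(0, 1, 12)             # twelve normalized inputs (stand-in)
W = rng.uniform(-0.5, 0.5, (12, 7))   # random weight matrix W (12x7)
V = rng.uniform(-0.5, 0.5, (7, 4))    # random weight matrix V (7x4)

HM = I @ W                            # HM  = sum of I * W
HMN = 1.0 / (1.0 + np.exp(-HM))       # HMN = 1 / (1 + exp(-HM))
OQ = HMN @ V                          # OQ  = sum of HMN * V
OQN = 1.0 / (1.0 + np.exp(-OQ))       # OQN = 1 / (1 + exp(-OQ))

print(OQN.shape)                      # (4,) -- one output per trained face
```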

THE BPN TRAINING ALGORITHM:

The back-propagation training algorithm is an iterative gradient algorithm
designed to minimize the mean square error between the actual output of a
multi-layer feed-forward perceptron and the desired output. It requires
continuous, differentiable non-linearities; the following assumes a sigmoid
logistic non-linearity is used.

Step 1: Initialize weights and offsets.
Set all weights and node thresholds to small random values.

Step 2: Present input and desired output.
Present a continuous input vector X0, X1, ..., XN-1
and specify the desired outputs d0, d1, ..., dM-1.
If the net is used as a classifier then all desired outputs are typically set
to zero, except the one corresponding to the class the input is from, which is
set to one. The input should be new on each trial, or samples from a training
set could be presented cyclically until the weights stabilize.

Step 3: Calculate the actual outputs.

Use the sigmoid non-linearity from above and the formulas to calculate the
outputs Y0, Y1, ..., YM-1.

Step 4: Adapt the weights.

Start at the output nodes and work back to the first hidden layer.
Adjust the weights by

Wij(t+1) = Wij(t) + η δj Xi'

In this equation Wij(t) is the weight from hidden node i (or from an input)
to node j at time t, Xi' is either the output of node i or an input, η is a
gain term, and δj is an error term for node j. If node j is an output node,
then

δj = Yj (1 - Yj) (dj - Yj)

where dj is the desired output of node j and Yj is the actual output.
If node j is an internal hidden node, then

δj = Xj' (1 - Xj') Σk δk Wjk

where k ranges over all nodes in the layer above node j. Internal node
thresholds are adapted in a similar manner by assuming they are connection
weights on links from auxiliary constant-valued inputs. Convergence is
sometimes faster if a momentum term α is added and the weight changes are
smoothed by

Wij(t+1) = Wij(t) + η δj Xi' + α (Wij(t) - Wij(t-1))

Step 5: Repeat by going to step 2.

This completes the training of the neural network.
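The step 4 update, together with the momentum variant, can be checked numerically with a small sketch; the values of η and α below are illustrative assumptions, not values from the paper.

```python
# Scalar form of the step 4 delta-rule update with momentum.
def delta_output(y, d):
    """Error term for an output node: delta = y(1-y)(d-y)."""
    return y * (1 - y) * (d - y)

def update_weight(w_t, w_prev, x, delta, eta=0.1, alpha=0.9):
    """w(t+1) = w(t) + eta*delta*x + alpha*(w(t) - w(t-1))."""
    return w_t + eta * delta * x + alpha * (w_t - w_prev)

y, d, x = 0.8, 1.0, 0.5              # made-up output, target, input
delta = delta_output(y, d)           # 0.8 * 0.2 * 0.2 = 0.032
w_new = update_weight(0.3, 0.25, x, delta)
print(round(delta, 3), round(w_new, 4))
```

The momentum term α(w(t) - w(t-1)) reuses the previous weight change, which smooths the trajectory and often speeds convergence, as the text notes.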

Applications of Face Recognition:

1. Performing financial operations (banking transactions).

2. Health care (storing information about patients, identifying new cases).

3. Territory protection (entrance to buildings, warehouses, laboratories, prisons).

4. Government organizations (frontier protection, passport control,
election registration).

5. Law enforcement systems (checking driving licences, criminal identification).

CONCLUSION:

This paper recognizes faces belonging to the same person
very simply; it requires only a side-profile photograph. If a
person commits a crime and then grows a beard or a moustache,
or makes some other change to his face in order to escape, the
method described here can still recognize him, so it can be used
in criminal identification, which is the major application of
this work. The heart of the method is the RAW image, from which
the inputs are calculated. Recognition from the front profile is
very difficult; to achieve it, much more work would be needed,
possibly requiring costly hardware such as scanners and sensors.
This method requires only a side photograph in BMP format.
Front-profile recognition is more difficult than side-profile
recognition because the photograph is two-dimensional, and it is
not possible to calculate the input-layer parameters from the
front profile. In the side profile the face appears as a convex
shape with the nose projecting outwards, so we can easily
calculate the distances and angles by taking the nose as the
origin.

The same concepts can also be used to recognize other
patterns, such as characters or particular shapes, and can be
applied in many areas: commercial transactions, personnel
management, security and law enforcement applications,
especially criminal identification and authentication in secure
systems.

References:

Introduction to Artificial Neural Systems – J. M. Zurada
Elements of Artificial Neural Networks – Kishan Mehrotra, Chilukuri K. Mohan
Neural Computing: Theory and Practice – P. D. Wasserman

Websites:
www.cs.rug.nl/~peterkr/FACE/face.html
www.rst38.org.uk/faces
www.sciencedaily.com/releases/1999/06/990624080203.htm
