
Face Recognition Using Eigenfaces-Fisher Linear Discriminant and Dynamic Fuzzy Neural Network

Huiwen Deng* (Corresponding author)
Institute of Logic and Intelligence
Southwest China University
Chongqing, China
e-mail: huiwend@swu.edu.cn

Tangquan Qi
School of Computer and Information Science
Southwest China University
Chongqing, China
e-mail: qtq2002@swu.edu.cn

regression technique, allowing a nonlinear neural network to acquire an input/output (I/O) association using a limited number of samples chosen from a population of input and output patterns. A crucial problem of back-propagation is its generalization capability: a network successfully trained on given samples is not guaranteed to provide the desired associations for untrained inputs as well. Lawrence S. and Giles C. L. proposed a face recognition system using a CNN in 1997 [11]. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, while the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. Shang-Hung Lin and Sun-Yuan Kung proposed a face recognition system based on probabilistic decision-based neural networks (PDBNN) [12]. The PDBNN face recognition system consists of three modules: first, a face detector finds the location of a human face in an image; then an eye localizer determines the positions of both eyes in order to generate meaningful feature vectors; lastly, the third module is a face recognizer. Tolba A.S. and Abu-Rezq A.N. presented a system for invariant face recognition (LVQ + RBF) [13]. A combined classifier uses the generalization capabilities of both learning vector quantization (LVQ) and radial basis function (RBF) neural networks to build a representative model of a face from a variety of training patterns with different poses, details and facial expressions. Phiasai T. and Arunrungrusmi S. proposed the integration of moment invariants and PCA for varied-pose face recognition (PCA + moment invariant) [14]. First, the global feature is extracted by PCA to determine the minimum error. If the error is less than a threshold, the system accepts the classification result from PCA; otherwise, the system rejects it and moment invariants are used to analyze local facial features such as the nose and eyes.
In this work, a face recognition algorithm based on EFLD + DFNN is presented. The face recognition algorithm consists of three stages: first, the feature extraction and dimension reduction uses the Eigenface and

Abstract - In order to solve the problem of face recognition under natural illumination, a new face recognition algorithm using Eigenface-Fisher Linear Discriminant (EFLD) and Dynamic Fuzzy Neural Network (DFNN) is proposed in this paper, which reduces the dimension of the features and handles the classification problem easily. We use the EFLD model to extract the face feature, which is then taken as the input of the DFNN, and the DFNN is implemented as a classifier to solve the classification problem. The proposed algorithm has been tested on the ORL face database. The experiment results show that the algorithm reduces the dimension of the face feature and finds a best subspace for the classification of human faces, and that optimizing the architecture of the dynamic fuzzy neural network reduces the classification error and raises the correct recognition rate. So the algorithm works well on a face database with different expressions, poses and illumination.

Keywords - face recognition; eigenfaces; fisher linear discriminant; feature extraction; dynamic fuzzy neural network

Weiping Hu
Institute of Logic and Intelligence
Southwest China University
Chongqing, China
e-mail: huwp2000@163.com

I. INTRODUCTION

Face recognition is very hard to solve due to its nonlinearity. In the problem of face recognition, face image data are usually high-dimensional and large-scale, so recognition has to be performed in a high-dimensional space. It is therefore necessary to find a dimensionality reduction technique to cope with the problem in a lower-dimensional space. Until now, many linear/nonlinear projection methods have been presented [1,2], such as Eigenfaces [3], PCA (Principal Component Analysis) [3], LDA (Linear Discriminant Analysis) [4,5], Fisherfaces [6], DLDA (Direct LDA) [4,6], DCV (Discriminant Common Vectors) [7], and ICA (Independent Component Analysis) [8]. In recent work, many people use a hybrid method which combines several linear/nonlinear projection methods, for example combining PCA and LDA [4].
Recently, a face recognition method based on PCA + RBF has been proposed [9]. That paper describes a face identification method using a probabilistic neural net; the system is compared to the method presented at ICCST'99, based on the Karhunen-Loeve transform for feature extraction, with a classifier device performing the identification process. In 1992, Matsuoka K. proposed the face recognition system of Wavelet + RBF [10]. Back-propagation can be considered as a nonlinear
978-1-4244-5540-9/10/$26.00 2010 IEEE


FLD methods. Second, the DFNN is taken as a classifier to classify the face features. The last stage is the process of recognition.
This paper is structured as follows. In Section 2 we briefly introduce the Eigenface and FLD methods and describe the EFLD + DFNN algorithm. In Section 3 we describe the experiment results, and in Section 4 we draw conclusions.
The between-class scatter matrix S_b and the within-class scatter matrix S_w are defined as

S_b = \sum_{i=1}^{c} n_i (\bar{u}_i - \bar{u})(\bar{u}_i - \bar{u})^T    (5)

S_w = \sum_{i=1}^{c} \sum_{u_k \in X_i} (u_k - \bar{u}_i)(u_k - \bar{u}_i)^T    (6)

where \bar{u} = \frac{1}{M} \sum_{i=1}^{M} u_i is the mean of the eigenface images, \bar{u}_i is the mean of the i-th set of eigenfaces, c is the number of classes, and n_i represents the number of samples in the i-th class.

So, the rank of the between-class scatter matrix is

R(S_b) = c - 1    (7)

and the rank of the within-class scatter matrix is

R(S_w) = n - c    (8)

g) The best subspace W* is determined by the formulation below:

W* = \arg\max_W \frac{|W^T S_b W|}{|W^T S_w W|} = [w_1, w_2, w_3, \ldots, w_{c-1}]    (9)

The best subspace W* is computed by Lagrange multipliers. Meanwhile, the subspace W* is taken as the input of the dynamic fuzzy neural network.
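Step g) amounts to a generalized eigenvalue problem. The sketch below illustrates it on synthetic data standing in for the projected ORL features (the class counts, dimension, and the pinv-based solver are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Sketch of step g): find the subspace W* maximizing |W^T Sb W| / |W^T Sw W|.
# Synthetic eigenface-space data stands in for the projected ORL features.
rng = np.random.default_rng(0)
c, n_per_class, dim = 4, 10, 6
X = np.vstack([rng.normal(loc=k, size=(n_per_class, dim)) for k in range(c)])
y = np.repeat(np.arange(c), n_per_class)

overall_mean = X.mean(axis=0)
Sb = np.zeros((dim, dim))
Sw = np.zeros((dim, dim))
for k in range(c):
    Xk = X[y == k]
    mk = Xk.mean(axis=0)
    d = (mk - overall_mean)[:, None]
    Sb += len(Xk) * d @ d.T          # between-class scatter, Eq. (5)
    Sw += (Xk - mk).T @ (Xk - mk)    # within-class scatter, Eq. (6)

# Generalized eigenproblem Sb w = lambda Sw w, solved via pinv(Sw) @ Sb;
# W* keeps the c-1 leading eigenvectors, since rank(Sb) = c-1 (Eq. (7)).
vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
order = np.argsort(-vals.real)
W_star = vecs[:, order[:c - 1]].real
print(W_star.shape)  # (6, 3)
```

The pseudo-inverse guards against a singular S_w, which is common when the feature dimension approaches the sample count.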
B. Dynamic Fuzzy Neural Network Description

The DFNN has a dynamic characteristic: its architecture is not predefined but varies dynamically. Namely, there are no rules before the DFNN starts learning; the rules are generated continuously during the learning process. The architecture of the DFNN is based on extended RBF neural networks realizing the TSK model, and is shown in Fig. 1. The DFNN consists of five layers [15][16].
Layer 1: each node of layer 1 represents an input linguistic variable.
Layer 2: each node represents a membership function in the form of a Gaussian function:

II. RECOGNITION APPROACH

A. Eigenface-Fisher Linear Discriminant Algorithm

Indeed, the Eigenfaces [15] can be considered one of the first approaches in this sense. An NxN image I is linearized into an N^2-dimensional vector, so that it represents a point in an N^2-dimensional space. However, comparisons are not performed in this space; instead, a low-dimensional space is found by means of a dimensionality reduction technique. Peter N. Belhumeur, Joao P. Hespanha and David J. Kriegman developed a face recognition algorithm which is insensitive to large variations in lighting direction and facial expression [7]. Taking a pattern classification approach, they consider each pixel in an image as a coordinate in a high-dimensional space. They take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high-dimensional image space if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, they linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation.
In this section, we describe the process of the EFLD algorithm, which consists of Eigenfaces and FLD. The main idea behind EFLD is to adopt eigenfaces to represent the face images in a lower-dimensional space, and simultaneously to let FLD find a best subspace for classification by maximizing the ratio of the between-class distance to the within-class distance. A basic assumption of EFLD is that the face images are centered and of the same size. Based on this rule, the EFLD procedure is as follows [3,15,17]:

a) Obtain the face images I_1, I_2, I_3, ..., I_M, where each image is I(x, y). Let the training set of face images I_1, I_2, I_3, ..., I_M be F_1, F_2, F_3, ..., F_M.

b) Compute the average face \Psi:

\Psi = \frac{1}{M} \sum_{i=1}^{M} F_i    (1)

where M is the number of training face images and F_i is the i-th image vector.

c) Calculate the mean-subtracted faces \Phi_i:

\Phi_i = F_i - \Psi, \quad i = 1, 2, \ldots, M    (2)

The aim of this step is to represent the faces in a low-dimensional space.

d) Compute the covariance matrix C:

C = \frac{1}{M} \sum_{i=1}^{M} \Phi_i \Phi_i^T = A A^T    (3)

where A = [\Phi_1, \Phi_2, \ldots, \Phi_M].

e) Compute the eigenvectors V_k and eigenvalues \lambda_k of the matrix C. The eigenvectors determine linear combinations of the M mean-subtracted training images that form the eigenfaces u_l [15]:

u_l = \sum_{k=1}^{M} v_{lk} \Phi_k, \quad l = 1, 2, 3, \ldots, M    (4)

This yields the eigenface vectors U = [u_1, u_2, u_3, ..., u_M].

f) In order to find a best subspace for classification, and to maximize the ratio of between-class scatter to within-class scatter, compute the between-class scatter matrix S_b (Eq. (5)) and the within-class scatter matrix S_w (Eq. (6)).
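Steps b) through e) above can be sketched as follows. Random vectors stand in for the vectorized face images, and the image count and pixel count are illustrative assumptions (real ORL images would be 10304-dimensional):

```python
import numpy as np

# Sketch of steps b)-e): mean face, centered faces, covariance, eigenfaces.
rng = np.random.default_rng(1)
M, n_pixels = 20, 64
F = rng.random((M, n_pixels))   # rows are the image vectors F_i

psi = F.mean(axis=0)            # average face Psi, Eq. (1)
Phi = F - psi                   # mean-subtracted faces Phi_i, Eq. (2)
A = Phi.T                       # A = [Phi_1 ... Phi_M], so C = A A^T, Eq. (3)

# Covariance trick: the eigenvectors of the small M x M matrix A^T A give
# the coefficients v_lk, and u_l = sum_k v_lk Phi_k, Eq. (4).
vals, V = np.linalg.eigh(A.T @ A / M)
order = np.argsort(-vals)
U = A @ V[:, order]             # eigenfaces as columns
U /= np.linalg.norm(U, axis=0)  # normalize each eigenface
print(U.shape)  # (64, 20)
```

Working with the M x M matrix instead of the n_pixels x n_pixels covariance is the standard Turk-Pentland shortcut when M is much smaller than the pixel count.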

\mu_{ij}(x_i) = \exp\left(-\frac{(x_i - c_{ij})^2}{\sigma_{ij}^2}\right)    (10)

where i = 1, 2, ..., r and j = 1, 2, ..., u; \mu_{ij} is the j-th membership function of x_i; c_{ij} is the center of the j-th Gaussian membership function of x_i; \sigma_{ij} is the width of the j-th Gaussian membership function of x_i; r is the number of input variables and u is the number of membership functions.
Layer 3: each node of layer 3 represents a possible IF-part of a fuzzy rule. For the j-th rule R_j, its output is given by Eq. (11).

TABLE I. THE SUMMARY OF THE EFLD + DFNN ALGORITHM

Step 1: Obtain the face images.
Step 2: Feature extraction and dimension reduction, using the Eigenface and FLD algorithms:
    1) compute the eigenface space, Eq. (4);
    2) find the best subspace for classification, Eq. (9).
Step 3: Initialize the DFNN; set the pre-parameters of the DFNN.
Step 4: Train the DFNN:
    If the face image set is the training set, train the DFNN;
    else jump to Step 5.
Step 5: Recognition:
    If the face image is a known face, output true;
    else jump to Step 2.
Step 6: Output the recognition result.

\varphi_j = \exp\left(-\sum_{i=1}^{r} \frac{(x_i - c_{ij})^2}{\sigma_j^2}\right), \quad j = 1, 2, \ldots, u    (11)

where X = (x_1, \ldots, x_r) \in R^r and C_j = (c_{1j}, \ldots, c_{rj}) \in R^r is the center of the j-th RBF unit. From Eq. (11) we can see that each node in this layer also represents an RBF unit. In the sequel, RBF nodes are always used to represent rules without interpretation.
Layer 4: these nodes are called N nodes. Obviously, the number of N nodes is equal to that of R nodes. The output of the j-th N node is

\psi_j = \frac{\varphi_j}{\sum_{k=1}^{u} \varphi_k}, \quad j = 1, 2, \ldots, u    (12)

Figure 1. The architecture of the DFNN.

Layer 5: each node of layer 5 represents an output variable that is the summation of its input signals:

y(X) = \sum_{k=1}^{u} w_k \psi_k    (13)

where y is the output variable and w_k is the THEN-part, or connection weight, of the k-th rule. For the TSK model:

w_k = a_{k0} + a_{k1} x_1 + \cdots + a_{kr} x_r    (14)

where k = 1, 2, ..., u.
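The five layers above can be traced in a minimal forward pass. The unit count, centers, widths, and TSK coefficients below are random placeholders; in the paper they are generated and tuned during learning:

```python
import numpy as np

# Minimal forward pass through the five DFNN layers, Eqs. (10)-(14).
rng = np.random.default_rng(2)
r, u = 3, 5                  # input variables, fuzzy rules (RBF units)
x = rng.random(r)            # layer 1: input linguistic variables
C = rng.random((u, r))       # c_ij: Gaussian centers
sigma = 0.5 * np.ones(u)     # widths, one per rule in this sketch
a = rng.random((u, r + 1))   # TSK coefficients a_k0 ... a_kr

# Layers 2-3: Gaussian memberships multiplied into rule firing strengths,
# Eqs. (10)-(11)
phi = np.exp(-np.sum((x - C) ** 2, axis=1) / sigma ** 2)
# Layer 4: normalization, Eq. (12)
psi = phi / phi.sum()
# THEN-part weights of the TSK model, Eq. (14)
w = a[:, 0] + a[:, 1:] @ x
# Layer 5: weighted sum output, Eq. (13)
y = float(np.dot(w, psi))
print(y)
```

Note that the normalized firing strengths psi sum to one, so the output is a convex combination of the rule-wise TSK consequents.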
C. Algorithm Architecture

Figure 2. The flow of the EFLD + DFNN algorithm.

In this work, a face recognition algorithm based on EFLD + DFNN is presented. The face recognition algorithm consists of three stages: first, the feature extraction and dimension reduction use the Eigenface and FLD methods; second, the DFNN is taken as a classifier to classify the representations; third, the last stage is the process of recognition. The architecture of the algorithm is shown in Fig. 2, and the summary of the EFLD + DFNN algorithm is given in Table I.
This algorithm has good generalization capability and can effectively reduce the dimensionality used for classification. Meanwhile, it also reduces the computational complexity, and the DFNN can alleviate the problems of overfitting and overtraining.

Figure 3. The architecture of the DFNN classifier.


Figure 4. Part of the ORL face database images.

The input of the DFNN is the best subspace W*, which is computed during feature extraction by the Eigenfaces + FLD algorithm. Meanwhile, W* is taken as the target output of the DFNN to supervise its learning, and the parameters of the DFNN are initialized. The DFNN classifier can then classify the training face images well, keeping the false recognition rate as low as possible. The number of RBF units is 46, and the number of classes is 19.
The performance of the DFNN can be seen from Fig. 6(a) to (d), which show the actual error of the DFNN, the fuzzy rule generation, the root mean squared error, and the desired and actual outputs together with the input data, respectively.
In the EFLD + DFNN algorithm, the DFNN is taken as the classifier. In the training process, the EFLD feature is used as the input of the DFNN. Fig. 3 illustrates the implementation architecture of the training and testing of the DFNN classifier.
TABLE II. THE VALUES OF THE PARAMETERS

Parameter   Size       Min           Max          Type
Psi         10304x1    50.1800       184.1000     double
U           80x100     -4.2754e+06   3.6830e+06   double
W*          19x100     -1.7105       1.9336       double
V_FLD       80x19      -4.0423e+07   6.3441e+07   double

Figure 5. (a) The mean of all training face images; (b) the output matrix of W*.
A. Experiment Results

The experiment is carried out through Matlab 7.9.0 (R2009b) simulation. Table II shows some parameters of the EFLD algorithm. Psi represents the mean of all training face images. U represents the centered vectors projected onto the eigenspace, in other words the eigenface space. The parameter W* represents the best subspace, and V_FLD represents the largest (c-1) eigenvectors of the matrix W*. Fig. 5(a) shows the feature distribution of the mean of all training face images; Fig. 5(b) displays the isolines of the matrix W*, which show the best subspace distribution.
III. EXPERIMENT

The experiment uses the ORL Database of Faces. The 'Database of Faces', formerly 'The ORL Database of Faces', contains a set of face images taken between April 1992 and April 1994 at the lab. The database was used in the context of a face recognition project carried out in collaboration with the Speech, Vision and Robotics Group of the Cambridge University Engineering Department. There are ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position, with tilting and rotation tolerance of up to 20 degrees, and scale tolerance of up to about 10%. The files are in PGM format. The size of each image is 92x112 pixels, with 256 grey levels per pixel. Fig. 4 shows some samples from the ORL face database.
In this experiment, we randomly select 20 persons and 5 images per person; these 100 images serve as the training set. Meanwhile, we select another 100 images from the same 20 persons for testing.
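The 5-train/5-test split per subject described above can be sketched as follows (the subject and image identifiers are placeholders for the actual ORL PGM files):

```python
import numpy as np

# Sketch of the experimental split: for each of 20 subjects with 10
# images, 5 random images go to training and the other 5 to testing.
rng = np.random.default_rng(3)
n_subjects, n_images = 20, 10

train, test = [], []
for subject in range(n_subjects):
    perm = rng.permutation(n_images)
    train += [(subject, i) for i in perm[:5]]
    test += [(subject, i) for i in perm[5:]]

print(len(train), len(test))  # 100 100
```

Splitting per subject rather than globally guarantees every identity appears in both sets, which the classifier requires.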

"

... .
.
..

111

III

Il

..

.
.

Figure 6. (a) the actual output error, (b) the fuzzy rule generation, (c)the

The Results ofDFNN Classifier

root mean squared error (RMSE), (d) the desired actual output and the input

In the training, the some parameter of DFNN must be

data


TABLE III. THE RESULT OF THE COMPARISON OF PERFORMANCE; '-' MEANS THE NUMBER OF SIMULATIONS IS UNKNOWN

Method                     Number of simulations   Eave /%
PCA + Fisher + RBF [17]    -                       1.92
M-PCA [18]                 10                      2.4
PCA + RBF [9]              -                       4.9
Wavelet + RBF [10]         -                       3.7
CNN [11]                   -                       3.83
EFLD + DFNN                -                       1.80

D. Comparison of the Performance

The EFLD + DFNN algorithm is evaluated on the ORL face database. In the experiment, the performance of the algorithm is judged by the average false error (AFE). The AFE, E_ave, is defined by the formulation below:

E_ave = \frac{\sum_{i=1}^{q} n_{mis}^{i}}{q \cdot n_{tot}}    (15)

where q represents the number of experiments, n_{mis}^{i} is the number of wrong classifications in the i-th epoch, and q \cdot n_{tot} represents the total number of testing samples. Based on the E_ave rule, the result of the comparison of performance on the same ORL face database is shown in Table III. In this paper, the E_ave of the EFLD + DFNN algorithm is 2.42.

IV. CONCLUSION

The EFLD + DFNN algorithm was proposed in this paper. The algorithm consists of three steps: in the first step, dimension reduction uses the Eigenface algorithm and finds the best subspace for classification; in the second step, the DFNN is implemented as a classifier; the last step is recognition. The experiment results show that the EFLD + DFNN algorithm works well on a face database with different expressions, poses and illumination.
To deal with the problem caused by natural illumination variation, some modification of the algorithm is required; this will be future work.

ACKNOWLEDGMENT

This paper is supported by the Key Project of Chongqing (project no. 08jwsk277).

REFERENCES

[1] X.Y. Jing, Y.F. Yao, J.Y. Yang, D. Zhang. "A novel face recognition approach based on kernel discriminative common vectors (KDCV) feature extraction and RBF neural network," Neurocomputing, vol.71, 2008, pp.3044-3048.
[2] A.F. Abate, M. Nappi, D. Riccio, G. Sabatino. "2D and 3D face recognition: A survey," Pattern Recognition Letters, 2007, pp.1885-1906.
[3] M. Kirby, L. Sirovich. "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Trans. Pattern Anal. Machine Intell., vol.12, no.1, 1990, pp.103-108.
[4] J. Lu, K.N. Plataniotis, A.N. Venetsanopoulos. "Face recognition using LDA-based algorithms," IEEE Trans. Neural Networks, vol.14, no.1, 2003, pp.195-200.
[5] A.M. Martinez, A.C. Kak. "PCA versus LDA," IEEE Trans. Pattern Anal. Machine Intell., vol.23, no.2, 2001, pp.228-233.
[6] Z. Liang, P.F. Shi. "Uncorrelated discriminant vectors using a kernel method," Pattern Recognition, vol.38, no.2, 2005, pp.307-310.
[7] P.N. Belhumeur, J.P. Hespanha, D.J. Kriegman. "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Machine Intell., vol.19, no.7, 1997, pp.711-720.
[8] M.S. Bartlett, J.R. Movellan, T.J. Sejnowski. "Face recognition by independent component analysis," IEEE Trans. Neural Networks, vol.13, no.6, 2002, pp.1450-1464.
[9] E.D. Virginia. "Biometric Identification System Using a Radial Basis Function Network," Proc. 34th Annual IEEE Int. Carnahan Conf. on Security Technology, 2000, pp.47-51.
[10] K. Matsuoka. "Noise Injection into Inputs in Back-Propagation Learning," IEEE Trans. Syst., Man, Cybern., vol.22, no.3, 1992, pp.436-440.
[11] S. Lawrence, C.L. Giles, A.C. Tsoi, et al. "Face Recognition: A Convolutional Neural-Network Approach," IEEE Trans. Neural Networks, vol.8, no.1, 1997, pp.98-113.
[12] S.H. Lin, S.Y. Kung, L.J. Lin. "Face recognition/detection by probabilistic decision-based neural network," IEEE Trans. Neural Networks, vol.8, 1997, pp.114-132.
[13] A.S. Tolba, A.N. Abu-Rezq. "Combined Classifiers for Invariant Face Recognition," Proc. Int. Conf. Information Intelligence and Systems, 1999, pp.350-359.
[14] T. Phiasai, S. Arunrungrusmi, K. Chamnongthai. "Face Recognition System with PCA and Moment Invariant Method," Proc. IEEE Int. Symp. Circuits and Systems, vol. II, 2001, pp.165-168.
[15] M. Turk, A. Pentland. "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, vol.3, no.1, 1991, pp.71-86.
[16] S.Q. Wu, M.J. Er. "Dynamic Fuzzy Neural Networks: A Novel Approach to Function Approximation," IEEE Trans. Syst., Man, Cybern. B, vol.30, 2000, pp.358-364.
[17] M.J. Er, S.Q. Wu. "A fast learning algorithm for parsimonious fuzzy neural systems," Fuzzy Sets and Systems, vol.126, 2002, pp.337-351.
[18] V. Brennan, J. Principe. "Face Classification Using a Multiresolution Principle Component Analysis," Proc. IEEE Workshop Neural Networks for Signal Processing, 1998, pp.506-515.