
Eyes Detection in Facial Images using Circular

Hough Transform
W. M. K Wan Mohd Khairosfaizal and A. J. Noraini
Faculty of Electrical Engineering,
Universiti Teknologi Mara,
40450 Shah Alam, Selangor, Malaysia
Abstract-This paper presents an eye detection approach using
Circular Hough transform. Assuming the face region has already
been detected by any of the accurate existing face detection
methods, the search for the eye pair relies primarily on the circular
shape of the eye in a two-dimensional image. The eye detection
process includes a preprocessing stage, in which the face
images are filtered and cropped, after which the Circular Hough
Transform is used to detect the circular shape of the eye and to
mark the eye pair on the image precisely. This eye detection
method was tested on the Face DB database developed by the
Park Lab, University of Illinois at Urbana-
Champaign USA. Most of the faces are frontal with open eyes
and some are tilted upwards or downwards. The detection
accuracy of the proposed method is about 86%.
Keywords- Accumulation array, Gradient magnitude, Gradient
thresholding, Circular Hough Transform
I. INTRODUCTION
Human eyes play an important role in face recognition and
facial expression analysis. In fact, the eyes can be considered a
salient and relatively stable feature of the face in comparison
with other facial features. Eye detection is valuable in
determining the orientation of the face and also the gaze
direction. The position of other facial features can be
estimated using the eye position [1]. In addition, the size, the
location and the image-plane rotation of face in the image can
be normalized by only the position of both eyes. This is also
regarded as one of the most important biometrics
characteristics for personal identification.
The existing work in eye position detection can be
classified into two categories. First, the active infrared (IR)
based approaches and second the image-based passive
approaches. Eye detection based on active remote IR
illumination is a simple yet effective approach [2]. But it relies
on an active IR light source to produce the dark or bright pupil
effects. In other words, this method can only be applied to the
IR illuminated eye images. This method is not widely used,
because in many real applications the face images are not IR
illuminated.
The image-based passive methods can be classified into
three categories: template-based methods [3-6], appearance-based
methods [7-9] and feature-based methods [10-14]. In the template-based method, a generic
eye model, based on the eye shape, is designed first. Template
matching is then used to search the image for the eyes. While
this method can detect eyes accurately, it is normally time-
consuming. The appearance based method detects eyes based
on their photometric appearance. This method usually needs to
collect a large amount of training data, representing the eyes
of different subjects, under different face orientations, and
under different illumination conditions. These data are used to
train a classifier such as a neural network or the support vector
machine, and detection is achieved via classification. Feature-based
methods explore characteristics such as the edges and
intensity of the iris and the colour distributions of the sclera and the
flesh of the eyes to identify distinctive features around
the eyes. Although these methods are usually efficient, they lack
accuracy for images that do not have high contrast.
For example, these techniques may mistake eyebrows
for eyes.
In this paper, eye detection is carried out on an already
identified face region, without performing face detection first, as
the main focus is to detect the eye pair from the face image. A
simple yet robust algorithm to locate the eye pair on grey
intensity face images is proposed. Since many promising
face detection methods [15-18] currently exist, the following
assumptions have been made in this work: (1) a rough
face region has been located, (2) the image contains only
one face, and (3) the eyes in the face image are visible. An image-based
eye detection approach is used to locate the eyes by
exploiting the differences of the eyes in appearance and shape from the
rest of the face. Special characteristics of the eye, such as the
dark pupil, white sclera, circular iris, eye corners and eye shape,
are utilized to distinguish the human eyes from
other objects. The steps involved in the eye detection process
are: cropping the face images to the required face region;
thresholding the gradient magnitude of the face images to obtain
the linear indices in the images; and, since the iris is nearly
circular, applying the Hough transform to detect the circular
shape of the iris of the human eye based on the linear indices.
The pupil of the eye is plotted as the circle centre, and the
circular shape of the iris is located and drawn as the circle
perimeter with its specific radius from the circle centre. The
proposed method is expected to increase the efficiency of
feature-based methods.
II. METHODOLOGY
The block diagram of the proposed approach for the eye
detection is shown in Figure 1.
2009 5th International Colloquium on Signal Processing & Its Applications (CSPA)
978-1-4244-4152-5/09/$25.00 2009 IEEE
The process of detecting the eye pair in the face image
starts with acquiring the grey scale face image from the face
database. The image must be two dimensions with the rough
face region consists of a face and eyes. The algorithm built
can only be used under this situation. The output image is
known as the raw image. Face detection will process first
locate the rough face region.
In the second stage, an efficient feature-based method is
used to locate two rough regions of the eyes in the face, which
is the objective of the study.
A. Preprocessing
In order to obtain a proper segmentation of the image, pre-
processing of the image is carried out. To compensate for
illumination variations and to obtain more image details, a
median filter is used to enhance the brightness and the contrast
of the images [20]. It is also used to eliminate the noise from
the raw image. A median filter is based upon moving a
window over an image and computing the output pixel as the
median value of the brightness within the input image.
A useful variation on the theme of the median filter is the
percentile filter. Here the centre pixel in the window is
replaced not by the 50% (median) brightness value but rather
by the p% brightness value where p% ranges from 0% (the
minimum filter) to 100% (the maximum filter). Values other
than p = 50% do not, in general, correspond to smoothing
filters. This step simultaneously normalizes the brightness
across an image and increases contrast. As a result, the image
is enhanced and corrected from noise.
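The median and percentile filters described above can be sketched as follows (a simplified NumPy illustration rather than the authors' Matlab implementation; the 3 × 3 window size and the reflection border handling are our assumptions):

```python
import numpy as np

def percentile_filter(img, p, size=3):
    """Replace each pixel with the p-th percentile of the brightness
    in its size x size window: p = 50 gives the median filter,
    p = 0 the minimum filter and p = 100 the maximum filter.
    Borders are handled by reflection padding (an assumption)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.percentile(padded[i:i + size, j:j + size], p)
    return out

def median_filter(img, size=3):
    """The p = 50% special case used for noise removal."""
    return percentile_filter(img, 50, size)
```

A single impulse-noise pixel inside an otherwise uniform window is replaced by the window median, which removes the spike without smearing it across the neighbourhood as an averaging filter would.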
The face region from the filtered image is cropped out
from the background. This is done to eliminate the unwanted
region and also to facilitate the process of detecting the eyes.
The output image from this stage is known as the filtered
image.
B. Eye Pair Detection
When the rough face region is detected, the eye pair
detection is sequentially applied to locate the rough regions of
both eyes. Figure 2 shows the process of the proposed method.
C. Validation of Image Parameter.
This step is to validate the filtered image parameters in
order to ensure that the subsequent algorithms used can be
applied. The parameters that need to be considered are as
follows:
i. Dimension (2-D)
ii. Size (minimum 32 × 32)
iii. Type (greyscale image)
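These parameter checks can be written directly (a minimal sketch; the function name and error messages are ours, not from the paper):

```python
import numpy as np

def validate_image(img):
    """Validate the filtered-image parameters listed above:
    the array must be 2-D (greyscale) and at least 32 x 32 pixels."""
    if img.ndim != 2:
        raise ValueError("image must be a 2-D greyscale array")
    if img.shape[0] < 32 or img.shape[1] < 32:
        raise ValueError("image must be at least 32 x 32 pixels")
    return True
```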
D. Building the Accumulation Array
To build the accumulation array, the first step is to
compute the gradient and the gradient magnitude of the rough
face image region. The gradient is the first derivative of the
two-dimensional image. The equations used are as follows:

i. Two-dimensional first derivative:

h_θ = h_x cos θ + h_y sin θ    (1)

where h_x denotes the horizontal derivative filter, h_y denotes the
vertical derivative filter, and h_θ denotes the derivative filter at
an arbitrary angle θ.

ii. Gradient, ∇a[m,n], of an image:

∇a = (h_x ⊗ a) i_x + (h_y ⊗ a) i_y    (2)

where i_x and i_y are unit vectors in the horizontal and vertical
directions, respectively, and ⊗ denotes convolution.

iii. Gradient magnitude:

|∇a| = √((h_x ⊗ a)² + (h_y ⊗ a)²)    (3)
Fig. 1. Block Diagram of the Eye Detection Process (Face Image → Preprocessing: Filter (Median Filter), Crop (Extracting the Face Region) → Eye Pair Detection)

Fig. 2. Block Diagram of the Eye Pair Detection Process (Validation of Parameter → Accumulator Building → Area of Interest → Circular Hough Transform to Detect Eye)
This is approximated by:

|∇a| ≈ |h_x ⊗ a| + |h_y ⊗ a|    (4)

The linear indices of the gradient magnitude are computed using
the following equation:

f(x_k)_i = Σ_{j=1}^{n} a_ij X_j    (5)

where a_ij is the gradient magnitude, X_j is a symmetric square
matrix, and f(x_k)_i is the linear indices of the gradient
magnitude.

The accumulation array of the image consists of the gradient
magnitude of the image and its linear indices as in equation
(6).

Accumulator = (Gradient Magnitude, Linear Indices)    (6)
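The gradient-magnitude and accumulator steps can be sketched as follows (the central-difference derivative kernels and the threshold-based selection of linear indices are our assumptions, since the paper does not specify either):

```python
import numpy as np

def gradient_magnitude(a):
    """Gradient magnitude |grad a| of a 2-D image (cf. equation (3)),
    using central-difference approximations of h_x (*) a and
    h_y (*) a; border rows/columns are left at zero."""
    a = np.asarray(a, dtype=float)
    gx = np.zeros_like(a)
    gy = np.zeros_like(a)
    gx[:, 1:-1] = (a[:, 2:] - a[:, :-2]) / 2.0   # horizontal derivative
    gy[1:-1, :] = (a[2:, :] - a[:-2, :]) / 2.0   # vertical derivative
    return np.sqrt(gx ** 2 + gy ** 2)

def build_accumulator(a, threshold):
    """Cf. equation (6): pair the gradient magnitudes with the linear
    (flattened) indices of pixels whose magnitude passes a threshold."""
    g = gradient_magnitude(a)
    linear_indices = np.flatnonzero(g > threshold)
    return g, linear_indices
```

On a horizontal intensity ramp, for instance, the interior gradient magnitude is constant and only the interior pixels contribute linear indices.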
E. Area of Interest
From the segmentation process of the accumulator, the
segmented accumulator is smoothed with an averaging filter to
obtain a better segmented value. To obtain the area of interest,
a local maxima map of the image face region is generated
by locating every local maximum on the segmented region. A
local maximum filter is built by thresholding the local
maxima map with a lower-bound value. Two steps are then
carried out separately. First, the segmented accumulator
is thresholded against the non-segmented accumulator and filtered
by the local maximum filter. Next, the generated local
maximum map is labelled by eight-connected components, as in
Figure 3, and then thresholded by the gradient component.
This thresholding, referred to here as gradient thresholding, is
essentially an adaptive threshold method in which the threshold
value varies across the image. The equation of the threshold
method is as in Equation (7). The output from the second step
is known as the mask.
Fig. 3. Label 8 Connected Component
T = T(f, f_c)    (7)

where T is the threshold, f is the whole image, and f_c is the
8-labelled image part.
The results from both steps are compared to select the area
of interest in the face image. The comparison of the two values
and the reconstructed data image are shown in Figures 4 and 5,
respectively:
Fig. 4. Comparison Graph of values from steps 1 and 2
Fig. 5. Reconstructed Image
The reconstructed image indicates which areas in the image
can be considered as the area of interest (the location of the eye
pair). The clipping value seen in the graph is thresholded by the
gradient magnitude values at the respective area, and the
respective values in the accumulator array are replaced by
these new threshold values. The process of locating the
local maxima on every thresholded area is then carried out to
detect the eye area in the image. As the accumulator array of the
reconstructed image can be treated as a function f(x), a local
maximum is at x_0 if there exists a > 0 such that f(x) ≤ f(x_0)
for all x ∈ (x_0 − a, x_0 + a). Intuitively, this means that
around x_0 the graph of f lies below f(x_0). The areas of
interest are compiled together, as further processing is done
only among these components. The local maxima candidates in
every area of interest are compiled into a group by selecting a
minimum number of qualified pixels in each group of the
interested area.
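The local-maxima mapping over 8-connected neighbourhoods can be sketched as follows (a simplified illustration that combines the neighbourhood comparison and the lower-bound threshold in one pass; the paper's actual two-step mask construction is more involved):

```python
import numpy as np

def local_maxima_map(acc, lower_bound):
    """Mark every pixel of a 2-D accumulator that is not smaller
    than any of its 8-connected neighbours and exceeds a
    lower-bound threshold."""
    padded = np.pad(acc, 1, mode="constant", constant_values=-np.inf)
    centre = padded[1:-1, 1:-1]
    is_max = np.ones(acc.shape, dtype=bool)
    # Compare the centre pixel against each of its 8 neighbours.
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            neighbour = padded[1 + di: 1 + di + acc.shape[0],
                               1 + dj: 1 + dj + acc.shape[1]]
            is_max &= centre >= neighbour
    return is_max & (acc > lower_bound)
```

Each True pixel in the returned map is a local-maximum candidate for the subsequent grouping step.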
F. Circular Hough Transform to Detect Circle of the Eyes
Hough transform is a technique which can be used to
isolate features of a particular shape within an image. Because
it requires the desired features to be specified in some
parametric form, the classical Hough transform is most
commonly used for the detection of regular curves such as
lines, circles and ellipses [21]. The main advantage
of the Hough transform technique is that it is tolerant of gaps
in feature boundary descriptions and is relatively unaffected
by image noise. To detect the eye, which is circular in shape,
the so-called Circular Hough Transform is used, as in equation
(8).
(x − x_0)² + (y − y_0)² = r²    (8)

where (x_0, y_0) is the coordinate of the circle centre and r is
the radius of the circle.
The detection process starts by assuming each local maximum in
the group of the area of interest to be the centre of a circle. If
the linear indices among the minimum number of qualified pixels
form a circular shape, then that area is the eye region detected
in the image. Every area of interest is tested with this process to
check whether it occurs as an element of a circle component,
which is the eye region identified in the image.
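A minimal sketch of the Circular Hough Transform voting step for a single fixed radius follows (the function name and the angular sampling are our choices; the full detector would sweep a range of radii and restrict the search to the compiled areas of interest):

```python
import numpy as np

def circular_hough(edge_points, shape, radius, n_angles=90):
    """Each edge point (x, y) votes for every candidate centre
    (x0, y0) satisfying (x - x0)^2 + (y - y0)^2 = r^2 (equation (8)).
    The accumulator peak is the best-supported circle centre."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for x, y in edge_points:
        x0 = np.round(x - radius * np.cos(thetas)).astype(int)
        y0 = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (x0 >= 0) & (x0 < shape[0]) & (y0 >= 0) & (y0 < shape[1])
        np.add.at(acc, (x0[ok], y0[ok]), 1)   # accumulate duplicate votes
    return acc
```

Because every edge point on a circle of radius r votes for the true centre, the centre dominates the accumulator even when the boundary has gaps or the image contains isolated noise points.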
III. RESULT AND DISCUSSION
The test of the proposed method was conducted on the
well-known Face DB database [22]. The Face DB database has
been developed by the University of Illinois at Urbana-
Champaign under their Productive Aging Laboratory. The
laboratory is also known as the Park Lab, named after the
founder of the laboratory, Dr Denise C. Park. The database
contains several types of face images, either coloured (RGB) or
intensity (gray-scale) images. The image format also varies from
Windows Bitmap (BMP) to Joint Photographic Experts Group
(JPEG). The images selected for this work are 72 two-dimensional
gray-scale face images of size 646 × 480 in BMP format, with a
constrained background. The images cover three ethnic groups:
Asians, African-Americans and Caucasians. This categorization
provides a wider range of illumination based on skin colour.
Furthermore, each category has been subdivided into age sets
ranging from 19 to 84 years old. Out of the 72 face images, 50
images have well opened eyes while the rest have eyes
partially opened. The developed software is written in Matlab,
and the results from the eye detection process are shown in
Figures 6 to 12.
Fig. 6. Original image
Fig. 7. Filtered and Cropped Face Image Region.
Fig. 8. Generated Map of Local Maxima

Fig. 9. 3-D View of the Accumulation Array after Local Maximum Filtering

Fig. 10. Accumulation Array from the Circular Hough Transform

Fig. 11. 3-D View of the Accumulation Array from the Circular Hough Transform

Fig. 12. The Face Image with the Eye Pair Detected (centre positions and radii marked)
The original face is shown in Figure 6, and Figure 7 shows the
original face cropped to obtain the face region from the facial
image and filtered using the median filter. Besides the reduction
in noise, the filtered image has also been enhanced in terms of
its brightness and contrast, to compensate for illumination
variations and obtain more image details. Figure 8 shows the
generated local maxima map on the accumulation array for
selecting the areas of interest, which are the possible areas of
the eyes in the image.
Figure 9 shows the 3-D view of the accumulation array after
local maximum filtering. Figures 10 and 11 are the results
from the Circular Hough Transform to detect the circular
shape of the eye pair in the image. Finally, Figure 12 shows
the eye pair in the face image which was marked with a cross
(+) at the centre of the circle. From Figure 11, the 3-D view
of the accumulation array shows three local maximum points:
two for the eye regions and one on the hair of the person.
Since the Circular Hough Transform is used to detect the region
of the eye pair, only circular shapes are detected, as in Figure 12.
As the hair point is not circular in shape, the system did not
detect it as an eye region on the face image.
The evaluation of the performance of the proposed
algorithm is carried out on the 50 face images with opened eyes.
Figures 13(a) and (b) are examples of face images for which the
proposed algorithm correctly detects the eye pair, while
Figures 14(a) and (b) show examples for which the proposed
algorithm failed to detect the eye pair. Since the Circular
Hough Transform detects circular shapes, the algorithm can
detect other circular shapes on the face image, such as the
circular shape of the nostrils.
(a) (b)
Fig.13. Detected eye pair
This happens because the face is tilted up and the nostrils are
exposed, which causes the algorithm to wrongly detect the
nostrils as the eye region due to their circular shape. Another
factor could be illumination, since the circular white spot on
the nose in Figure 14(b) was mistaken for the eye region.
(a) (b)
Fig. 14. Wrong Detection of the eye pair
IV. CONCLUSION
The success rate of the eye detection is about 86%, which
corresponds to 43 eye pairs detected from 50 face images. The
filter used could not totally eliminate the effect of illumination
variations for all the images tested, which caused false
detections of the eyes. A better filter able to rectify this
problem could increase the number of eye pairs detected. In
addition, a combination with other techniques can be considered
to eliminate the unwanted details on the face. Nevertheless, the
Circular Hough Transform is a relevant algorithm to be
considered in the eye detection process.
REFERENCES
[1] R. Brunelli and T. Poggio, "Face recognition: features versus
templates," IEEE Transactions on Pattern Analysis and Machine
Intelligence, 15(10) (1993) 1042-1052.
[2] C. Morimoto et al., "Real-Time Detection of Eyes and Faces,"
http:www.cs.ucsb.edu/conferences/PUI/PUIworkshop98/papers/Morimoto.pdf,
1998.
[3] A.L. Yuille, P.W. Hallinan and D.S. Cohen, "Feature extraction from
faces using deformable templates," International Journal of
Computer Vision, 8(2) (1992) 99-111.
[4] X. Xie, R. Sudhakar and H. Zhuang, "On improving eye feature
extraction using deformable templates," Pattern Recognition, 27
(1994) 791-799.
[5] K.M. Lam and H. Yan, "Locating and extracting the eye in human
face images," Pattern Recognition, 29 (1996) 771-779.
[6] M. Nixon, "Eye spacing measurement for facial recognition,"
Proceedings of the Society of Photo-Optical Instrumentation
Engineers, 1985.
[7] A. Pentland, B. Moghaddam and T. Starner, "View-based and
modular eigenspaces for face recognition," Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition (CVPR'94),
Seattle, WA, 1994.
[8] W. Huang and R. Mariani, "Face detection and precise eyes
location," Proceedings of the International Conference on Pattern
Recognition (ICPR'00), 2000.
[9] J. Huang and H. Wechsler, "Eye detection using optimal wavelet
packets and radial basis functions (RBFs)," International Journal of
Pattern Recognition and Artificial Intelligence, 13(7) (1999)
1009-1025.
[10] G.C. Feng and P.C. Yuen, "Variance projection function and its
application to eye detection for human face recognition,"
International Journal of Computer Vision, 19 (1998) 899-906.
[11] G.C. Feng and P.C. Yuen, "Multi-cues eye detection on gray
intensity image," Pattern Recognition, 34 (2001) 1033-1046.
[12] S. Kawato and J. Ohya, "Real-time detection of nodding and
head-shaking by directly detecting and tracking the between-eyes,"
Proceedings of the 4th IEEE International Conference on Automatic
Face and Gesture Recognition, 2000, pp. 40-45.
[13] Y. Tian, T. Kanade and J.F. Cohn, "Dual-state parametric eye
tracking," Proceedings of the 4th IEEE International Conference on
Automatic Face and Gesture Recognition, 2000.
[14] S.A. Sirohey and A. Rosenfeld, "Eye detection in a face image
using linear and nonlinear filters," Pattern Recognition, 34 (2001)
1367-1391.
[15] G. Chow and X. Li, "Towards a system for automatic facial feature
detection," Pattern Recognition, 26 (1993) 1739-1755.
[16] H.A. Rowley, S. Baluja and T. Kanade, "Neural network-based face
detection," IEEE Transactions on Pattern Analysis and Machine
Intelligence, 20(1) (1998) 23-38.
[17] K.K. Sung and T. Poggio, "Example-based learning for view-based
human face detection," IEEE Transactions on Pattern Analysis and
Machine Intelligence, 20(1) (1998) 39-51.
[18] P.C. Yuen, G.C. Feng and J.P. Zhou, "A contour detection method:
initialization and contour model," Pattern Recognition Letters,
20(2) (1999) 141-148.
[19] K. Sobottka and I. Pitas, "A novel method for automatic face
segmentation, facial feature extraction and tracking," Signal
Processing: Image Communication, 12(3) (1998) 263-281.
[20] T.S. Huang, G.J. Yang and G.Y. Tang, "A fast two-dimensional
median filtering algorithm," IEEE Transactions on Acoustics,
Speech, and Signal Processing, ASSP-27 (1979) 13-18.
[21] R. Gonzalez and R. Woods, Digital Image Processing, Prentice
Hall, 2002.
[22] http://agingmind.cns.uiuc.edu/facedb/
