Proceedings of International Conference on Computing Sciences
WILKES100 ICCS 2013
ISBN: 978-93-5107-172-3

A review on facial feature extraction technique

Geetika Singh 1* and Indu Chhabra 2
1 Junior Research Fellow (INSPIRE), Department of Computer Science and Applications, Panjab University, Chandigarh 160014
2 Chairperson and Associate Professor, Department of Computer Science and Applications, Panjab University, Chandigarh 160014

Abstract

Over the past few decades, face recognition has been an active research area as one of the most interesting applications of pattern recognition and image analysis, with countless application possibilities in various domains. It has been found to be suitable for authentication, forensic applications, security, surveillance systems, credit card verification and human-computer interaction. Feature extraction and its interpretation are the crucial factors that eventually govern the recognition accuracy. This paper is an extensive review of the important two-dimensional visual face recognition feature extraction techniques proposed in the literature during the past four decades. The main objective is to present a comprehensive review focusing on the characteristics of the facial feature extraction techniques along with their performance analysis and a comparative assessment through various performance parameters such as invariance to illumination, pose or expression variations, image orientation and rotation, sensitivity to noise, computational cost, searching time, ease of implementation and levels of accuracy achieved. Geometry-based methods have proved to be robust but are not very accurate. Appearance-based methods have been found to be the best performers but involve high computational cost. Template-based methods are easy to implement but suffer from computational complexity. Performance of color-based methods is affected by highlights and shadows. Neural network and fuzzy logic-based methods are resistant to noise but not very successful on low-resolution images. Moments-based methods are insensitive to noise and other variations and have shown promising results.
The technique to be employed therefore needs to be selected on the basis of the required overall performance in regular conditions, ease of implementation and robustness.

© 2013 Elsevier Science. All rights reserved.

Keywords: Face Recognition, Facial Feature Extraction, Geometry-based, Appearance-based, Template-based, Color-based, Neural Network and Fuzzy Logic-based, Moments-based.

1. Introduction

Face is our identity; it makes us unique. It helps us communicate with the world and differentiates us from others. The ability of humans to recognize faces is remarkable. Despite large variations in viewing conditions, aging or expressions, we are able to identify faces uniquely. Automation of this face recognition task can prove to be interesting and challenging. It can provide valuable insights into the incredible identifying ability of the human brain, and also has multiple practical applications in various domains. Over the past few decades, face recognition has drawn considerable interest and attracted researchers from several disciplines. This biometric technique has been found to be suitable for authentication, forensic applications, security, surveillance systems, credit card verification and human-computer interaction. It has proved to be one of the most promising techniques for recognizing individuals since, unlike other human identification methods such as fingerprint or iris recognition, it does not require any cooperation on the behalf of individuals. It can be applied not only for user verification and identification but also to determine the demographic characteristics of a face, such as age and gender, and to recognize emotions from facial expressions.

* Corresponding author. Geetika Singh, e-mail: geetikasingh.09@gmail.com.

Elsevier Publications, 2013
1.1. Face Recognition Process

The entire face recognition process is divided into face detection, alignment, feature extraction and classification. Figure 1 depicts the basic face recognition process. Face detection focuses on segmentation of the face areas from the input image. The segmented face images are then subject to face alignment for normalizing faces with respect to geometrical and photometrical properties. The feature extraction step extracts the relevant information (features) from the normalized faces. The extracted facial features are then matched against each face class (several images and their extracted features) stored in the database for the final classification of the image. Various approaches to the face recognition process have been reported in the literature. Zhao et al. classified these into holistic, geometry-based and hybrid techniques [1]. Holistic techniques consider the whole face region (a holistic representation of the face) as the input to the face recognition system. These recognition approaches can be realized through statistical or artificial intelligence (AI) means [2]. Statistical methods perform the recognition by correlating and comparing the input test image and the face images stored in the database. Some of the methods utilizing this approach are Principal Component Analysis, Independent Component Analysis and Linear Discriminant Analysis. AI approaches recognize faces utilizing tools such as neural networks and machine learning techniques. Geometry-based techniques extract the local features of the face such as eyes, nose and mouth, and use their local statistics as the input to the classifier. These were the first to be investigated in face recognition research. This category includes some important methods such as pure geometry methods [3][4][5] and elastic bunch graph matching [6].
Hybrid techniques employ both the global and the local approaches to achieve face recognition and thus have been motivated by the human perception system [1].

Fig 1. The Face Recognition Process

2. Feature Extraction for Face Recognition

Automatic facial feature extraction is a necessary and one of the most important steps in face recognition. Face images are highly dimensional raw data and thus it becomes necessary to extract only the important and significant information so as to reduce the computational complexity. Research demonstrates the efficiency of face recognition approaches in controlled environments. But real-world scenarios are different, and there are a number of associated challenges that effectively need to be addressed, such as illumination, pose, expression and emotion variations, presence or absence of structural components such as beard, mustaches or glasses and variability among these components, image orientation, aging and occlusion. Hence, much attention is given to the face feature extraction process as it has a significant impact on the performance of the recognition system and greatly affects the efficiency of the classifier. The goal of the feature extraction stage is the precise representation of the face data such that only the important information is highlighted. These features have to possess good discrimination capability for effective face recognition, and should minimize the within-class variations and maximize the between-class variations. But it is very important to consider what types of features are to be used. Features can be classified as holistic features and local features. Holistic features represent the face image by one vector of high dimensionality that contains the pixel values; hence, some dimensionality reduction technique, such as Principal Component Analysis, is required to project the face data to a low-dimensional space while preserving the major data variations.
Local features describe only the local parts of the face image and are concatenated to form one global feature vector. These techniques, such as Gabor filtering, the Discrete Cosine Transform and the Local Binary Pattern Histogram,
generate one global vector for the face image but are considered local since they concatenate the local information obtained from the different parts of the image. Figure 2 explains the basic approach followed for the holistic and the local feature extraction. Holistic feature extraction techniques have been most widely used in the literature. However, local features have proved to be more robust to variations in pose, lighting, orientation and expressions.

Fig 2. (a) Local features approach (b) Holistic features approach (Face image taken from the ORL database)

3. Review of Facial Feature Extraction

Although literature surveys on face recognition already exist, a detailed review that focuses on the facial feature extraction techniques and their comparisons is much required. The main objective of this paper is to provide a comprehensive review of the important facial feature extraction techniques that have been proposed to date for two-dimensional visual face recognition, and to discuss their relative merits and demerits. Based on the studies available, the approaches involved in facial feature extraction can be classified as geometry-based, appearance-based, template-based, color-based, neural network and fuzzy logic-based and moments-based techniques.

3.1. Geometry-based

These methods rely on localizing some geometric points such as the eyes or nose and extracting information from them (Figure 3). They compute the geometrical relationships among the feature points and the relative sizes of the major face components, such as width of the eyes or eyebrow thickness, to form a feature vector representing the face image, which can then be used to perform recognition. Earlier attempts at feature extraction were mostly geometry-based, with the first one being proposed by Bledsoe (1960s).
In such methods, feature extraction was completely manual in the sense that a human operator located the feature points and entered their positions into the computer. One of the significant attempts that marked the automation of geometry-based facial feature extraction was by Kanade in 1973, who utilized simple image processing methods to extract 16 facial parameters such as ratios of distances, areas or angles [3]. The face recognition system achieved a 75% recognition rate. The database used was of 20 people with two images for each person, one used as the model and the other as the test image. Brunelli et al. automatically extracted a set of geometrical features from the face image such as nose width and length, mouth position and chin shape; 35 features were extracted that formed a 35-dimensional vector [7]. A Bayes classifier was used for recognition and, with a database of 188 face images of 47 persons, a 90% recognition rate was obtained. Cox et al. proposed a mixture-distance approach where each face was represented by 30 manually extracted distances, which yielded a 95% recognition rate on a database of 685 individuals [4]. Manjunath et al. used the Gabor wavelet transform to localize 35-50 feature points for each face image [5]. One of the most recent techniques is Elastic Bunch Graph Matching proposed by Wiskott et al. [6]. This approach selects a set of facial points, represents each point as a node of a fully connected graph and labels them with the responses of Gabor filters applied to a window around the point [2]. Each arc is labeled with the distance between the corresponding face points. A set of such graphs combined into a stack-like structure forms the face bunch graph. Graphs for new face images can be generated using the face bunch graph.
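The distance-ratio idea behind these methods can be sketched in a few lines of Python. The sketch below assumes landmark positions are already available (located manually or by a detector); the landmark names and the particular ratios chosen are illustrative, not any standard set from the cited papers:

```python
import numpy as np

def geometric_features(landmarks):
    """Build a scale-invariant feature vector from 2-D landmark positions.

    `landmarks` maps illustrative point names to (x, y) coordinates.
    """
    l_eye = np.array(landmarks["left_eye"], dtype=float)
    r_eye = np.array(landmarks["right_eye"], dtype=float)
    nose = np.array(landmarks["nose_tip"], dtype=float)
    mouth = np.array(landmarks["mouth_center"], dtype=float)

    # Normalise every distance by the inter-ocular distance so the
    # vector is invariant to image scale.
    iod = np.linalg.norm(l_eye - r_eye)
    feats = [
        np.linalg.norm(l_eye - nose) / iod,
        np.linalg.norm(r_eye - nose) / iod,
        np.linalg.norm(nose - mouth) / iod,
        np.linalg.norm((l_eye + r_eye) / 2 - mouth) / iod,
    ]
    return np.array(feats)

def nearest_face(probe, gallery):
    """Return the identity whose stored vector is closest (Euclidean)."""
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - probe))
```

Normalizing by the inter-ocular distance makes the vector invariant to image scale, which is one reason such representations are compact and fast to match.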
Geometry-based techniques are suitable for large databases, provide a high-speed matching capability and a compact facial representation, and are even robust to variations in pose, size, head orientation, rotation and
illumination. However, the currently available techniques are not very accurate, and thus compromise the performance of the recognition system. They also require considerable computing time. In addition, an arbitrary decision on which features are to be considered important is required; the resulting feature set may lack discrimination capability, and no subsequent processing will be able to compensate for that.

3.2. Appearance-based

Appearance-based methods extract the appearance changes of the face. These have proved to be the most popular and the most dominant among all the facial feature extraction techniques, and have gained the utmost attention from researchers. Holistic appearance-based techniques are essentially dimensionality reduction methods that map the training face images to a low-dimensional space; hence, the face space comprising the feature vectors has a lower dimensionality than the image space represented by the number of pixels in the image. These have been categorized into linear and non-linear approaches. Linear methods transform data from a high-dimensional subspace into a low-dimensional subspace by a linear mapping, but they fail to work on non-linear data structures, for which non-linear methods have proved to be beneficial. Non-linear methods have also been found to be more efficient in dealing with images involving variations in illumination. Principal Component Analysis (PCA), also termed the eigenface approach (Figure 3), Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA) are the most well-known linear techniques [8][9][10], and Kernel Principal Component Analysis (KPCA) and Kernel Fisher Discriminant (KFD) are two of the popular techniques in the non-linear category that have shown great potential [11][12]. The literature also suggests combining the two categories for optimized performance.
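As a minimal illustration of the holistic linear idea, the eigenface construction can be sketched with a plain SVD; this is a generic centre-then-project PCA sketch, not the exact procedure of any one cited paper:

```python
import numpy as np

def fit_eigenfaces(X, k):
    """PCA on flattened face images.

    X: (n_images, n_pixels) matrix, one flattened face per row.
    Returns the mean face and the top-k eigenfaces (principal axes).
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data: the rows of Vt are the principal axes
    # (the "eigenfaces"), ordered by decreasing variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, eigenfaces):
    """Map one flattened face into the low-dimensional face space."""
    return eigenfaces @ (face - mean)
```

Each face is then represented by k coefficients instead of one value per pixel, and matching is performed in this reduced face space.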
These methods have been found to be the best performers in facial feature extraction as they keep only the important information of the face image, reject redundant information and reflect the face's global structure, but their performance depends on the proper localization of the face and they also involve high computational costs. These methods consider each pixel as important and do not focus on some limited points of interest and hence may prove to be computationally expensive. Though they are easy to implement and provide better recognition results than their geometry-based counterparts, they are sensitive to variations in pose, scale, expressions and illumination. This category also includes some important local features such as Gabor wavelets and the Local Binary Pattern Histogram (LBP). Gabor wavelets were initially used for iris recognition and were first applied to face recognition by Lades et al. [13]. These provide high performance but are both time and memory intensive. The initial LBP operator was utilized for texture description [14] and has recently been used to address the facial feature extraction problem [15]. The basic LBP operator labels the image pixels by thresholding the 3×3 neighborhood with the center value and considering the result as a binary number. The histogram of these labels can be used as the image descriptor. This operator has also been extended to use neighborhoods of different sizes. For LBP feature extraction, the face image is divided into smaller regions from which the LBP histograms are extracted and finally concatenated into a single feature histogram that can be used as the face descriptor. LBP features are tolerant against illumination variations, possess good discrimination capability and are computationally simpler.
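The basic operator and the region-histogram descriptor described above can be sketched as follows; the 2×2 grid and the neighbour ordering are illustrative choices, not those of any particular cited paper:

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: threshold each pixel's 8 neighbours at the
    centre value and read the result as an 8-bit code."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets, clockwise from top-left; bit i has weight 2**i.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out

def lbp_descriptor(img, grid=(2, 2)):
    """Split the LBP code image into regions and concatenate the
    256-bin histograms into one face descriptor."""
    codes = lbp_image(img)
    gy, gx = grid
    h, w = codes.shape
    hists = []
    for i in range(gy):
        for j in range(gx):
            region = codes[i * h // gy:(i + 1) * h // gy,
                           j * w // gx:(j + 1) * w // gx]
            hists.append(np.bincount(region.ravel(), minlength=256))
    return np.concatenate(hists)
```

Because each histogram summarises one region, the concatenated descriptor keeps coarse spatial layout while staying robust to monotonic illumination changes.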
3.3. Template-based

In the traditional template-based technique for facial feature extraction, an artificial template for a feature, say an eye, was matched with the sub-regions of an input test image, and the region that best matched the template was extracted as the feature. An artificial template for a feature was generated by averaging the intensities of the rectangular regions containing that feature in the images of the face database. This approach to facial feature extraction was utilized by Baron in the early 1980s [16]. The main problem with this approach was its failure when applied to an image that was different from those used to generate the artificial templates. In 1992, a technique for detecting and describing the facial features using deformable templates was proposed by Yuille et al. [17]. The feature, an eye for example, was described by a parameterized template. An energy function was defined which related the edges, peaks and valleys in the image intensity to the corresponding parameter values of the template. The template dynamically interacted with the input test image with the main objective to
minimize the energy function by altering its parameter values, i.e., deforming itself to find the best fit (Figure 3). The final parameter values could be used as descriptors for the feature. An important aspect of this algorithm involves searching for the feature in the input test image, which adds to its computational cost; thus, genetic algorithms have also been proposed for their efficient search times. Various techniques based on this approach have been proposed [18][19]. Brunelli et al. used a set of four feature templates: eyes, nose, mouth and the whole face. Their approach was based on Baron's approach. They also compared the template matching technique with the geometry-based technique and concluded template matching to be superior, with a 100% recognition rate on frontal face images [7]. This approach is quite robust to variations in scale, lighting conditions and the tilt and rotation of the head, but requires prior modeling of the templates representing the features. Describing the templates is a difficult task. In the deformable template matching technique it is also possible, in certain situations, for the energy function to possess a low value with the parameter values still being incorrect. Such situations can arise if, for example, the mouth template gets started on the eye and deforms itself to match the incorrect feature. However, this can be improved by providing the initial position estimates of the features as an input to the deformable template-matching algorithm. In addition, these techniques are computationally more expensive.

Fig 3. Geometry-based, Template-based and Appearance-based approach

3.4. Color-based

First proposed by Thomas C. Chang et al., this approach utilized the techniques of color segmentation and color thresholding to extract the eyes, nostrils and mouth from color images [20]. This approach was the first attempt on color images, the earlier ones being attempted only on grayscale images.
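The segmentation-and-thresholding idea can be illustrated with a simple chrominance rule in YCbCr space. The conversion below follows ITU-R BT.601; the Cb/Cr skin bounds are commonly quoted illustrative values, not the thresholds actually used by Chang et al.:

```python
import numpy as np

# Illustrative YCbCr skin bounds (assumed values, not Chang et al.'s).
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def rgb_to_ycbcr(rgb):
    """Convert an (h, w, 3) RGB image to YCbCr (ITU-R BT.601)."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =        0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def non_skin_mask(rgb):
    """True where a pixel is NOT skin-coloured; after removing hair and
    background, these pixels are candidates for eyes, nostrils and mouth."""
    ycc = rgb_to_ycbcr(rgb)
    cb, cr = ycc[..., 1], ycc[..., 2]
    skin = (CB_RANGE[0] <= cb) & (cb <= CB_RANGE[1]) & \
           (CR_RANGE[0] <= cr) & (cr <= CR_RANGE[1])
    return ~skin
```

Such a mask only produces candidate regions; further processing (as in the methods reviewed below) is needed to reject hair and background and to localize individual features.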
Color segmentation involved segmenting and removing the skin color, thus leaving the non-skin-colored areas of the face as the potential candidates for the eyes, hair, nostrils, mouth and the background. With further processing, the hair and background were removed. With color thresholding, facial features were extracted from the input test face image. Chang's algorithm is robust with respect to skin color, eye color, hair color and facial hair but is quite sensitive to highlights and shadows. Application of this algorithm to facial feature extraction is limited due to the diversity of ethnic backgrounds [20]. However, the locations of the features pointed out by this algorithm can be used as the initial position estimates for the template-matching algorithm to obtain promising results. Sobottka et al. performed face localization based on skin color; facial features were then extracted for the segmented faces by applying morphological operations and minima localization to the intensity images [21]. Hsu et al. analyzed the chrominance components and established that there are high Cb and low Cr values around the eye regions and high Cr and low Cb values around the mouth region. This information was used for eye and mouth localization [22]. Beigzadeh et al. extended Hsu's approach to consider color and luminance as well as edge properties of the image. This method was more robust and accurate in eye and mouth localization in facial images with a maximum of 30 degrees of lateral rotation [23]. Color-based techniques are easy to implement but have limited applicability due to their low accuracy.

3.5. Neural Network and Fuzzy Logic-based

Neural networks have mostly been proposed in the literature for face detection and classification. However, approaches based on them do exist for facial feature extraction as well. Methods based on artificial neural networks prove to be useful for finding solutions when the algorithmic methods are computationally very expensive or cannot be
(a) A set of sample geometrical feature points (b) Deformable template matching approach (c) Sample eigenfaces on the ORL database (Face image taken from the ORL database)
formulated. One of the attempts was by Hines et al., who utilized a multi-layer perceptron network to extract the position of the eyes from head-and-shoulder type low-resolution images [24]. This attempt had the drawback of not being able to locate the eye feature if points similar to the eyes existed in the image. Another attempt was by Hutchinson et al., who compared the neural network-based feature extraction techniques with the template matching techniques [25]. Two methods, one based on a multi-layer perceptron and another using a combination of a Kohonen network and a multi-layer perceptron, were proposed. Results favored the neural network-based techniques over the template-based techniques. To overcome the drawbacks of the PCA-based and LDA-based approaches, a fuzzy linear mapping technique to perform facial feature extraction with better generalization was also proposed by Yu et al. [26]. Duffner et al. proposed a six-layer network architecture for facial feature detection [27]. Park et al. used a fuzzy observer for extracting features of wrinkledness [28]. Methods in this group have proved to be faster in operation than methods in the other groups. They provide more accurate and robust results, can process ambiguous or imprecise data and are quite resistant to noise. However, they have not been found successful on low-resolution images.

3.6. Moments-based

Moments such as Hu moments, geometric invariant central moments, Legendre moments, Zernike moments, pseudo-Zernike moments and Krawtchouk moments have been applied to many pattern recognition problems such as character recognition and palm print verification. Their application as a facial feature extraction technique has gained importance recently. These are essentially global feature extractors and have proved their suitability under geometric transformations. Haddadnia et al. and Pang et al. have proposed the usage of pseudo-Zernike moment features for face recognition [29][30][31].
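As a small worked example of the moment-based idea, the first two Hu invariants can be computed from scale-normalised central moments; this generic sketch illustrates the translation, scale and rotation invariance claimed for such features and is not the feature set of any one cited method:

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D intensity image."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00
    ybar = (ys * img).sum() / m00
    return (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()

def eta(img, p, q):
    """Scale-normalised central moment: mu_pq / mu_00^(1 + (p+q)/2)."""
    return central_moment(img, p, q) / img.sum() ** (1 + (p + q) / 2)

def hu_first_two(img):
    """First two Hu invariants, unchanged under translation, scaling
    and rotation of the pattern."""
    n20, n02, n11 = eta(img, 2, 0), eta(img, 0, 2), eta(img, 1, 1)
    return np.array([n20 + n02, (n20 - n02) ** 2 + 4 * n11 ** 2])
```

Because the moments are taken about the centroid and normalised by a power of the total mass, shifting or rescaling the pattern leaves the invariants unchanged, which is the property these features exploit.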
Foon et al. utilized Zernike moment features that provided a recognition rate of 94.26% on the Essex database [32]. Arnold et al. have also proved Zernike moments to possess good discrimination capabilities [33]. Sheeba Rani proposed Krawtchouk moments for extracting both the local and the global features [34]. The method was tested on the ORL and Yale databases and showed good recognition capability even in the case of images corrupted with noise and possessing changes in facial expression and tilt. Saradha et al. compared the Fourier transform-based and the moment-based feature extraction methods using LDA as the classifier [35]. Their experiment revealed a classification accuracy of 46.8% for Hu moments, 98.2% for Legendre moments, 98.25% for Fourier descriptors and 98.3% for Zernike moments on the ORL database. They also concluded that the concatenation of the Fourier descriptors and the Zernike moment features achieves the highest classification accuracy, of up to 99.5%, for the ORL database. These approaches are less susceptible to information redundancy, invariant to variations in size, pose, scale, illumination, tilt and rotation, and quite insensitive to noise, which makes them superior to the existing methods. When used with the other group methods in a hybrid mode, moment-based features have also provided improved recognition results.

4. Discussion

Table 1 provides the performance comparison of the various facial feature extraction approaches, including the various approaches used, the key features of each of these approaches and their relative merits and demerits.

5. Conclusions and the Associated Challenges

Face recognition has been an actively researched area in recent years. Facial feature extraction is considered one of the most important factors for achieving higher recognition rates.
Several methods for facial feature extraction have been proposed in the literature, including geometry-based, appearance-based, template-based, color-based, neural network and fuzzy logic-based and moments-based methods. Each technique has its own benefits, in terms of ease of implementation, invariance to scale and rotations, insensitivity to noise and robustness, and limitations, such as computational cost, searching time and dependencies on factors such as
illumination. The technique to be employed may be selected on the basis of the required overall performance in regular conditions but with the required ease of implementation and robustness.

Table 1. Performance comparison of the facial feature extraction approaches

Geometry-based
  Key features: Localize some geometric points such as the eyes or nose and extract information from them, such as width of the nose or eyebrow thickness. Result in a compact representation of the features.
  Major contributions: Bledsoe, 1960s; Kanade, 1973; Manjunath et al., 1992; Brunelli et al., 1993; Cox et al., 1996; Wiskott et al., 1997.
  Merits: Suitable for large databases. Provide high-speed matching capability. Robust to variations in scale, size, head orientation, rotation and illumination.
  Demerits: Limited accuracy. Considerable computing time. A decision on which features are to be considered is required.

Appearance-based
  Key features: Extract appearance changes of the face. These can be holistic or local methods. Holistic methods are essentially dimensionality reduction methods and can be linear, non-linear or hybrid.
  Major contributions: Turk et al., 1991; Lades et al., 1993; Belhumeur et al., 1996; Bartlett et al., 2002; Liu et al., 2002; Ahonen et al., 2004; Martin, 2006.
  Merits: Best performers. Reject redundant information. Easy to implement. LBP features are tolerant to illumination, have good discrimination capability and are computationally simpler.
  Demerits: Holistic ones involve high computational costs. Sensitive to variations in pose, scale, expressions and illumination.

Template-based
  Key features: An artificial template is matched with the sub-regions of the input test images and the region that best matches the template is extracted as the feature. Requires prior modeling of the templates.
  Major contributions: Baron, 1981; Yuille et al., 1992; Brunelli et al., 1993; Zhang, 1997; Kuo et al., 2005.
  Merits: Robust to variations in scale, lighting conditions and the tilt and rotation of the head. Superior to geometry-based techniques. Easy to implement.
  Demerits: May not correctly match the feature. Computationally more expensive. Inefficient search.

Color-based
  Key features: Uses color segmentation and color thresholding to extract the facial features. Feature points extracted can be used as initial position estimates for template matching methods.
  Major contributions: Chang et al., 1994; Sobottka et al., 1996; Hsu et al., 2002; Beigzadeh et al., 2008.
  Merits: Robust to skin color, eye color, hair color and facial hair. More robust to lateral rotation. Easy to implement.
  Demerits: Sensitive to highlights and shadows. Low accuracy. Closed eyes pose a problem.

Neural Network and Fuzzy Logic-based
  Key features: Prove to be useful when algorithmic methods are computationally very expensive or cannot be formulated.
  Major contributions: Hines et al., 1989; Hutchinson et al., 1989; Yu et al., 1996; Park et al., 2000; Duffner et al., 2005.
  Merits: Faster in operation than other group methods. More accurate and robust results. Can process ambiguous or imprecise data. Resistant to noise.
  Demerits: Not very successful on low-resolution images.

Moments-based
  Key features: Global feature extractors. Most recently applied to face recognition.
  Major contributions: Haddadnia et al., 2003; Foon et al., 2004; Pang et al., 2004, 2006; Saradha et al., 2005; Arnold et al., 2007; Rani et al., 2012.
  Merits: Good discrimination capability. Less susceptible to information redundancy. Improved recognition results. Insensitive to noise. Invariant to variations in size, pose, illumination, tilt and rotation.
  Demerits: Recognition results depend extensively on the classifier used. The combination technique to be used in hybrid mode has to be appropriately selected.
Literature reports the efficiency of face recognition algorithms in controlled environments. However, many challenges still need to be addressed in real-world scenarios, such as illumination, pose, expression and emotion variations, presence or absence of structural components such as beard, mustaches or glasses and variability among these components, image orientation, aging and occlusion. Hence, much of the focus is now shifting towards the improvement of the facial feature extraction stage, as it can greatly affect the classification accuracy. Though a number of high-quality automatic facial feature extraction algorithms have been proposed, there still does not exist any technique that can tackle all types of variations possible in a face image, is simple, robust, fast and computationally feasible, and is capable of achieving the ultimate goal of designing a system that can match or even exceed the human capability of recognizing faces.

6. Acknowledgement

Financial assistance from the Department of Science and Technology (DST), Govt. of India in the form of an INSPIRE Junior Research Fellowship (JRF) for the first author is duly acknowledged.

References

[1] W. Zhao, R. Chellappa, P. J. Phillips and A. Rosenfeld, "Face Recognition: A Literature Survey," ACM Computing Surveys, Vol. 35, No. 4, pp. 399-458, 2003.
[2] R. Jafri and H. Arabnia, "A Survey of Face Recognition Techniques," Journal of Information Processing Systems, Vol. 5, No. 2, pp. 41-68, 2009.
[3] T. Kanade, "Picture Processing System by Computer Complex and Recognition of Human Faces," Ph.D. Thesis, Department of Information Science, Kyoto University, 1973.
[4] I. J. Cox, J. Ghosn and P. N. Yianilos, "Feature-Based Face Recognition Using Mixture-Distance," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1996.
[5] B. S. Manjunath, R. Chellappa and C. von der Malsburg, "A Feature Based Approach to Face Recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 373-378, 1992.
[6] L. Wiskott, J. M. Fellous, N.
Kruger and C. von der Malsburg, "Face Recognition by Elastic Bunch Graph Matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, pp. 775-779, 1997.
[7] R. Brunelli and T. Poggio, "Face Recognition: Features versus Templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 10, pp. 1042-1052, 1993.
[8] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, Vol. 3, No. 1, pp. 71-86, 1991.
[9] P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," Proceedings of the 4th European Conference on Computer Vision (ECCV), pp. 45-58, 1996.
[10] M. S. Bartlett, J. R. Movellan and T. J. Sejnowski, "Face Recognition by Independent Component Analysis," IEEE Transactions on Neural Networks, Vol. 13, No. 6, pp. 1450-1464, 2002.
[11] S. Martin, "An Approximate Version of Kernel PCA," IEEE International Conference on Machine Learning and Applications, 2006.
[12] Q. Liu, R. Huang, H. Lu and S. Ma, "Face Recognition Using Kernel-based Fisher Discriminant Analysis," IEEE International Conference on Automatic Face and Gesture Recognition, 2002.
[13] M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Wurtz and W. Konen, "Distortion Invariant Object Recognition in the Dynamic Link Architecture," IEEE Transactions on Computers, Vol. 42, No. 3, pp. 300-311, 1993.
[14] T. Ojala, M. Pietikainen and D. Harwood, "A Comparative Study of Texture Measures with Classification Based on Featured Distributions," Pattern Recognition, Vol. 29, pp. 51-59, 1996.
[15] T. Ahonen, A. Hadid and M. Pietikainen, "Face Recognition with Local Binary Patterns," Proceedings of the Eighth European Conference on Computer Vision, pp. 469-481, 2004.
[16] R. J. Baron, "Mechanisms of Human Facial Recognition," International Journal of Man-Machine Studies, Vol. 15, pp. 137-178, 1981.
[17] A. L. Yuille, P. W. Hallinan and D. S.
Cohen, "Feature Extraction from Faces Using Deformable Templates," International Journal of Computer Vision, pp. 99-111, 1992.
[18] L. Zhang, "Estimation of the Mouth Features Using Deformable Templates," Proceedings of the IEEE International Conference on Image Processing, Vol. 3, pp. 328-333, 1997.
[19] P. Kuo and J. Hannah, "An Improved Eye Feature Extraction Algorithm Based on Deformable Templates," Proceedings of the IEEE International Conference on Image Processing, Vol. 2, pp. 1206-1209, 2005.
[20] T. C. Chang, T. S. Huang and C. Novak, "Facial Feature Extraction from Color Images," Proceedings of the 12th IAPR International Conference on Pattern Recognition, Vol. 2, pp. 39-43, 1994.
[21] K. Sobottka and I. Pitas, "Face Localization and Facial Feature Extraction Based on Shape and Color Information," Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 483-486, 1996.
[22] R. L. Hsu, M. Abdel-Mottaleb and A. K. Jain, "Face Detection in Color Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 5, pp. 696-706, 2002.
[23] M. Beigzadeh and M. Vafadoost, "Detection of Face and Facial Features in Digital Images and Video Frames," Proceedings of the Biomedical Engineering Conference, pp. 1-4, 2008.
[24] H. L. Hines and R. A. Hutchinson, "An Application of Multi-Layer Perceptrons to Facial Feature Location," Proceedings of the IEEE International Conference on Image Processing and its Applications, pp. 39-44, 1989.
[25] R. A. Hutchinson and W. J. Welsh, "Comparison of Neural Networks and Conventional Techniques for Feature Location in Facial Images," Proceedings of the IEEE International Conference on Artificial Neural Networks, pp. 201-205, 1989.
[26] G. Yu, H. M. Liao and J. Sheu, "A New Fuzzy Linear Mapping Technique for Facial Feature Extraction and Recognition," Proceedings of the IEEE Conference on Neural Networks, pp. 1179-1184, 1996.
[27] S. Duffner and C. Garcia, "A Connexionist Approach for Robust and Precise Facial Feature Detection in Complex Scenes," Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.
[28] G. Park and Z. Bien, "Neural Network-Based Fuzzy Observer with Application to Facial Analysis," Pattern Recognition Letters, pp. 93-105, 2000.
[29] J. Haddadnia, K. Faez and M. Ahmadi, "An Efficient Human Face Recognition System Using Pseudo-Zernike Moment Invariant and Radial Basis Function Neural Networks," International Journal of Pattern Recognition and Artificial Intelligence, Vol. 17, No. 1, pp. 41-62, 2003.
[30] Y.-H. Pang, A. B. J. Teoh and D. C. L. Ngo, "Face Authentication System Using Pseudo-Zernike Moments on Wavelet Sub-Band," IEICE Electronics Express, Vol. 1, No. 10, pp. 275-280, 2004.
[31] Y.-H. Pang, A. B. J. Teoh and D. C. L. Ngo, "A Discriminant Pseudo-Zernike Moments in Face Recognition," Journal of Research and Practice in Information Technology, Vol. 38, No. 2, pp. 197-211, 2006.
[32] N. H. Foon, Y.-H. Pang, A. T. B. Jin and D. N. C.
Ling, "An Efficient Method for Human Face Recognition Using Wavelet Transform and Zernike Moments," Proceedings of the International Conference on Computer Graphics, Imaging and Visualization, pp. 65-69, 2004.
[33] W. Arnold, V. K. Madasu, W. W. Boles and P. K. Yarlagadda, "A Feature Based Face Recognition Technique Using Zernike Moments," Proceedings of the RNSA Security Technology Conference, pp. 341-345, 2007.
[34] S. Rani, "Face Recognition Using Hybrid Approach," International Journal of Image and Graphics, Vol. 12, No. 1, 2012.
[35] Saradha and S. Annadurai, "A Hybrid Feature Extraction Approach for Face Recognition," ICGST-GVIP Journal, Vol. 5, No. 5, pp. 23-30, 2005.
[36] A. F. Abate, M. Nappi, D. Riccio and G. Sabatino, "2D and 3D Face Recognition: A Survey," Pattern Recognition Letters, Vol. 28, pp. 1885-1906, 2007.

Elsevier Publications, 2013