
SIViP
DOI 10.1007/s11760-011-0278-9

ORIGINAL PAPER

3D facial model reconstruction, expressions synthesis and animation using single frontal face image
Narendra Patel · Mukesh Zaveri

Received: 15 May 2011 / Revised: 10 November 2011 / Accepted: 12 November 2011
© Springer-Verlag London Limited 2011

Abstract With a better understanding of face anatomy and technical advances in computer graphics, 3D face synthesis has become one of the most active research fields for many human-machine applications, ranging from immersive telecommunication to the video games industry. In this paper we propose a method that automatically extracts features such as the eyes, mouth, eyebrows and nose from a given frontal face image. A generic 3D face model is then superimposed onto the face in accordance with the extracted facial features, fitting the input face image by transforming the vertex topology of the generic face model. The 3D-specific face can finally be synthesized by texturing the individualized face model. Once the model is ready, six basic facial expressions are generated with the help of MPEG-4 facial animation parameters. To generate transitions between these facial expressions we use 3D shape morphing between the corresponding face models and blend the corresponding textures. The novelty of our method is the automatic generation of the 3D model and the synthesis of a face with different expressions from a single frontal neutral face image. Our method has the advantage that it is fully automatic, robust and fast, and it can generate various views of the face by rotation of the 3D model. It can be used in a variety of applications for which the accuracy of depth is not critical, such as games, avatars and face recognition. We have tested and evaluated our system using a standard database, namely BU-3DFE.
N. Patel (B) Department of Computer Engineering, BVM Engineering College, V.V.Nagar, Gujarat, India e-mail: bvm_nmp@yahoo.com M. Zaveri Department of Computer Engineering, SVNIT, Surat, Gujarat, India e-mail: mazaveri@gmail.com

Keywords A generic 3D model · Morphing · Animation · MPEG-4 FAPs · Texture

1 Introduction

Facial modeling and animation are among the fastest developing technologies in the field of animation. A model is a description of 3D objects in a strictly defined language or data structure. The face model is a representation of the head that accurately captures the shape of the head and face. The head model may be obtained using photographs, 3D scanning or modeling with special software. Models can use different representations, such as polygon meshes and B-splines. To synthesize a specific 3D face, two basic sets of data have to be extracted: the vertex topology of the 3D wire frame and the texture of the specific face. The vertex topology specifies the structure of the 3D generic model, while the face texture enhances the degree of realism of the face model. The algorithms for 3D face synthesis can be classified into four categories [1]: 3D laser scanner, stereo images, multi-view images and single-view images. For the increasingly growing demand for real-time applications such as video telephony and video conferencing, methods generating a 3D model from multiple images are not feasible because they need user supervision and are computationally complex. Therefore, to achieve a simple and quick system, 3D face synthesis from a single image has come to be an ideal choice. There has been a lot of work on face modeling from images. In [2] researchers suggested the use of two orthogonal views, one frontal view and one side view, to create the 3D model. Such systems require the users to manually specify the face features on the two images. Blanz and Vetter [3] developed a system to create face models from a single image. Their system uses both a geometry database and an image



database. Their system is computationally more expensive. Feng and Yuen [4] synthesized the face from only a single image, but their method needs to estimate head rotation parameters using another reference image. Liu [5] also proposed a system to create face models from a single face image, but they used existing software to detect features. After generation of the 3D face model, the next important issue is synthesizing a human face with a high degree of realism. One way of achieving realism is modeling facial expressions and animation on the synthesized human face. However, the task of modeling all human expressions onto a virtual character is complicated by the richness of human facial expressions and the fact that each individual has a unique way of expressing emotions facially. Approaches to modeling facial expressions and animation are required to manipulate the face mesh's vertices. There are five different approaches to facial animation [1]: interpolation, performance-driven, muscle-based, pseudo-muscle-based and direct parameterization animation. The most commonly used animation approach nowadays is direct parameterization, where facial animation is parameterized by a set of animation parameters [6]. These parameters not only govern global animation of the head but are also able to emulate a variety of facial expressions, overcoming the limitation of expression interpolation. Typical parameters are affine transformation parameters, action units (AUs), facial animation parameters (FAPs), etc. The MPEG-4 standard employs the FAPs operating on a set of facial definition parameters (FDPs) or facial feature points (FPs). The FDPs define the 3D locations of 84 points on a neutral face. FDPs usually correspond to facial features and therefore roughly outline the shape of the face. The FAPs specify FDP displacements which model actual facial feature movements in order to generate various expressions [7,8]. All FAPs involving translational movement are expressed in terms of facial animation parameter units (FAPUs). We have proposed a method that automatically generates a 3D face model from a given frontal image of the face and generates the six universal facial expressions: fear, anger, happiness, surprise, disgust and sadness. In our proposed system these expressions are represented with the help of MPEG-4 facial animation parameters. The MPEG-4 visual standard specifies a set of facial definition parameters (FDPs) and facial animation parameters (FAPs) for facial animation. The FAPs are used to characterize the movements of facial features defined over the jaw, lips, eyes, mouth, nose and cheeks. Facial animation is produced by interpolation between two or more different models using 2D morphing techniques combined with 3D transformations of the geometric model. The novelty of our method is the automatic generation of the 3D model and the synthesis of a face with different expressions from a frontal neutral face image. We have used the 3D facial

expression database BU-3DFE [9] for texture mapping and to determine the values of the FAPs. It is also possible to generate a new expression through the blending of selected expressions. The paper is organized as follows: Sect. 2 describes 3D face model reconstruction. It is followed by expression generation and expression morphing in Sects. 3 and 4, respectively. Section 5 describes animation. The simulation results and conclusions are discussed in Sects. 6 and 7.

2 3D face model reconstruction

The method that reconstructs the 3D face model from a single frontal 2D face image consists of facial feature extraction, face model adaptation and texture mapping [10,11]. Our proposed method first extracts features from the given face image, and then a generic 3D face model is superimposed onto the face in accordance with the extracted facial features, fitting the input face image by transforming the vertex topology of the generic face model. The advantage of using a generic model is that the number of triangles used to represent the model is fixed. These triangles are deformed to fit a specific face. Even in the case of a large image size our approach is able to reconstruct the model properly because this is taken care of in the rendering part.

2.1 Facial feature extraction

Facial feature extraction comprises two phases: face detection and facial feature extraction [12,13]. The face is detected by segmenting skin and non-skin pixels. It is reported that the YCbCr color model is more suitable for face detection than any other color model. It is also reported that the chrominance components Cb and Cr of the skin tone always have values in the ranges 77 <= Cb <= 127 and 133 <= Cr <= 173, respectively [14]. A face detection method using skin tone alone detects the face correctly only when the person has dark hair; light-colored hair is also detected as skin tone. We separate hair from the face by finding the luminance changes which are evident in the hair. To find luminance changes, the standard deviation is calculated by dividing the face region into 4 × 4 blocks. After calculating the standard deviation, the region in which the standard deviation is prominent is found. Finally, the face region is extracted without hair. After detection of the face, features such as the eyes, mouth and eyebrows are detected. We first build two separate eye maps, one from the chrominance components and the other from the luminance component [15]. We use the upper half of the face region for the preparation of the eye maps to detect the eyes. The eye map from the chroma is based on the observation that high Cb and low Cr values are found around the eyes. It is constructed as



Ec = (1/3) { (Cb)^2 + (C̃r)^2 + (Cb/Cr) }        (1)

where (Cb)^2, (C̃r)^2 and Cb/Cr are all normalized to the range [0, 1], and C̃r is the negative of Cr (i.e., 1 − Cr). The eyes usually contain both dark and bright pixels in the luma component, so the grayscale morphological operators dilation (⊕) and erosion (⊖) are used to emphasize the brighter and darker pixels around the eye regions. The luma eye map is constructed using Eq. 2:

El = (Y(x, y) ⊕ G(x, y)) / (Y(x, y) ⊖ G(x, y))        (2)

where Y(x, y) is the luma component of the face region and G(x, y) is the structuring element. The eye map from the chroma is combined with the eye map from the luma by an AND (multiplication) operation. The resulting eye map is dilated with the same structuring element to brighten the eyes and suppress other facial areas. The locations of the eyes are estimated from the eye map. We determine the mean and standard deviation of the eye map, which are used to find the locations of the eyes. After a large number of experiments we have set the threshold to T = mean + 0.3 × variance. The eye feature points, namely the left and right corners and the upper and lower middle points of the eyelids, are extracted from the edge map of the eye using the Sobel gradient operator. After the two eye corners and the two middle points on the eyelids have been located, two parabolas are fitted to the detected eyes. The location and feature points of the eyebrows are found from the edge map of the region of the face above the eyes.

Fig. 1 a Frontal face image. b Pseudo hue. c Log(G/B) of lip area

The lip region is extracted using the observation that lip pixels have a stronger red component while their green and blue components are almost the same. Skin pixels also have a stronger red component, but their green component has a higher value than their blue component. The difference between the red and green components is greater for lip pixels than for skin pixels. Hulbert and Poggio [16] proposed a pseudo-hue definition that calculates the pseudo hue as

H(x, y) = R(x, y) / (R(x, y) + G(x, y))        (3)

where R(x, y) and G(x, y) are the red and green components of the pixel (x, y), respectively. However, for a person with reddish skin, as shown in Fig. 1, the pseudo hue may not give the correct result. The method discussed in [17] also fails when the person has reddish skin. So we combine the pseudo hue H(x, y) with H1(x, y):

H1(x, y) = log( G(x, y) / B(x, y) )        (4)

where G(x, y) and B(x, y) are the green and blue components of the pixel (x, y), respectively. Lip pixels have lower values of the green and blue color components, so the log function is used to enhance contrast. Lip pixels have a higher value of H(x, y) and a lower value of H1(x, y). The location of the mouth is detected by finding the region having a higher value of H(x, y) and a lower value of H1(x, y). We have found that the pseudo-hue value H varies from 0.55 to 0.65 and the value of H1 is less than 0.73 for lip pixels. It is found that the lip corners are in shadow and have a lower intensity value, so the lip corner points are found using the intensity component of the lip region. The pseudo-hue component H(x, y) of the lip region is shown in Fig. 2. It is observed that the hue value H for the middle part of the lip pixels is higher when the mouth is closed. When the mouth is open, the hue value is lower for the teeth but higher for the cavity. This observation is used to check whether the mouth is closed or open. We apply the Canny edge detector on the intensity component of the lip region and determine the edge points corresponding to the upper outer and lower outer lip contours for the middle column. The edge map of the lip region is shown in Fig. 3. When the mouth is closed, the inner upper and inner lower boundary edge points coincide; they are the points with the maximum pseudo-hue value for the middle column, as shown in Fig. 2. We find the points P2, P3 and P4 on the upper boundary of the lip as shown in Fig. 4. To find P2 we traverse the left edge of the upper lip boundary from P4 while the position is decreasing; P2 is the edge point with the lowest position. Similarly, we traverse the right edge of the upper lip boundary to find point P3 (Fig. 4). When the mouth is open, feature points on the inner upper lip boundary (P8) and the inner lower lip boundary (P9) are determined. Teeth and tongue cause problems in the determination of P8 and P9 from the edge map. So, after the determination of P4, we search downward up to P1 for the first point which has the maximum gradient of pseudo hue (H) for the middle column, which is P8. In the same way, after the determination of P6, we search upward up to P1 for the first point which has the maximum gradient of pseudo hue (H) for the middle column, which is P9. After detecting the feature points, the upper lip boundary is modeled using a cubic curve (cardinal spline) [18].
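The feature maps described by Eqs. 1-4 are simple per-pixel computations. The following is a minimal sketch in Python with NumPy and OpenCV (an assumption; the authors report a MATLAB implementation), computing the chroma and luma eye maps and the two lip maps for a cropped face region. The structuring-element size and the small epsilon used to avoid division by zero are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np

def feature_maps(face_bgr):
    """Illustrative computation of the eye maps (Eqs. 1-2) and lip maps (Eqs. 3-4)
    for a cropped face region. Kernel size and epsilon are assumed values."""
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb.astype(np.float64))

    # Normalize the chroma components to [0, 1]
    cb_n = cb / 255.0
    cr_n = cr / 255.0
    cr_neg = 1.0 - cr_n                              # negative of Cr
    eps = 1e-6

    # Eq. 1: chroma eye map
    eye_map_c = (cb_n ** 2 + cr_neg ** 2 + cb_n / (cr_n + eps)) / 3.0

    # Eq. 2: luma eye map (dilation / erosion with the same structuring element)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    dil = cv2.dilate(y, kernel)
    ero = cv2.erode(y, kernel)
    eye_map_l = dil / (ero + 1.0)

    # Combine by multiplication (AND) and dilate to emphasize the eye regions
    eye_map = cv2.dilate(eye_map_c * eye_map_l, kernel)

    # Eqs. 3-4: pseudo hue H and H1 = log(G/B) for lip detection
    b, g, r = cv2.split(face_bgr.astype(np.float64))
    pseudo_hue = r / (r + g + eps)
    h1 = np.log((g + eps) / (b + eps))
    return eye_map, pseudo_hue, h1
```

The eye locations would then be taken where the combined eye map exceeds the threshold T described above, and the lip region where H falls in the reported range and H1 stays below its threshold.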



Fig. 4 Lip model

Fig. 2 a Lip region. b Pseudo hue (H)
Fig. 5 Facial features

Experimentally, it is found that the upper inner, lower inner and lower outer boundaries of the lip can be modeled more accurately using a parabola than a cubic curve, as shown in Fig. 4. The location and feature point of the nose are found using the vertical component of the gradient of the face image between the eyes and the mouth. The detected feature points for a frontal face image are shown in Fig. 5.

2.2 Face model adaptation

This is the process in which the generic 3D face model is deformed to fit a specific face. Our proposed generic model [18], shown in Figs. 6 and 7, is polygon based (a triangle mesh) and consists of 350 triangles and 221 vertices. The model is adapted to the given frontal face image with the help of two geometrical transformations, scaling and translation. Assuming orthographic projection, the translation vector can be derived by calculating the distance between the 3D face model center and the 2D face center. Let Cl denote the center of the left eye, Cr the center of the right eye, Cc the middle point between the two eyes and Cm the center of the mouth in the given face. Similarly, C'l, C'r, C'c and C'm are the corresponding points in the 2D projection of the 3D face model. The model is scaled by the amounts Sx, Sy and Sz using Eq. 5:
Fig. 3 a Lip region. b Edges of lip region
Sx = |Cl Cr| / |C'l C'r|
Sy = |Cc Cm| / |C'c C'm|
Sz = (Sx + Sy) / 2        (5)
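As a concrete reading of Eq. 5, the sketch below (Python/NumPy; function and variable names are ours, not the paper's) computes the scale factors from the detected eye and mouth centers in the image and the corresponding points in the 2D projection of the generic model, then scales the model vertices and translates them so that the eye midpoints coincide, under the orthographic-projection assumption.

```python
import numpy as np

def adapt_model(vertices, face_pts, model_pts):
    """vertices: (N, 3) generic-model vertices.
    face_pts / model_pts: dicts with 2D points 'left_eye', 'right_eye',
    'eye_mid', 'mouth' in the image and in the model projection."""
    dist = lambda a, b: np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))

    # Eq. 5: scale factors from eye-to-eye and eye-midpoint-to-mouth distances
    sx = dist(face_pts['left_eye'], face_pts['right_eye']) / \
         dist(model_pts['left_eye'], model_pts['right_eye'])
    sy = dist(face_pts['eye_mid'], face_pts['mouth']) / \
         dist(model_pts['eye_mid'], model_pts['mouth'])
    sz = (sx + sy) / 2.0

    scaled = vertices * np.array([sx, sy, sz])

    # Orthographic projection: translate so the scaled model's eye midpoint
    # coincides with the detected eye midpoint in the image (x and y only)
    model_mid = np.array(model_pts['eye_mid'], float) * np.array([sx, sy])
    shift = np.array(face_pts['eye_mid'], float) - model_mid
    scaled[:, 0] += shift[0]
    scaled[:, 1] += shift[1]
    return scaled
```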


After global adaptation of the model, we perform local refinement to align the model's eyes, eyebrows and mouth with the corresponding face features. The face boundary is detected using the morphological erosion operation, as shown in Eq. 6:

B(x, y) = F(x, y) − (F(x, y) ⊖ G(x, y))        (6)

where F(x, y) is the face image and G(x, y) is the structuring element. To get the complete 3D model, the model boundary points are aligned with the corresponding face boundary points with the help of translation. In many faces the boundary corresponding to the chin may not be found properly, so for the chin we fit a parabola, as shown in Fig. 8, after finding the points C1, C2 and C3. C1 and C2 are the boundary points corresponding to the mouth corner points. C3 is the bottom boundary point for the middle column, which is the minimum-intensity point between the mouth and the neck, because there is a shadow between the face and the neck. After the 3D face geometry is reconstructed, it is rendered and an appropriate texture is mapped to synthesize the 2D face image.
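A minimal sketch of the boundary extraction in Eq. 6, assuming a binary face mask produced by the skin-segmentation step (Python with OpenCV; the 3 × 3 structuring element is an assumed choice):

```python
import cv2
import numpy as np

def face_boundary(face_mask):
    """Morphological boundary (Eq. 6): B = F - (F eroded by G).
    face_mask: binary (0/255) mask of the detected face region."""
    kernel = np.ones((3, 3), np.uint8)          # structuring element G (assumed size)
    eroded = cv2.erode(face_mask, kernel)
    return cv2.subtract(face_mask, eroded)      # one-pixel-wide boundary
```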



3 Expressions generation

Expressions are represented with the help of MPEG-4 facial animation parameters (FAPs). The FAPs are a set of parameters defined in the MPEG-4 visual standard for the animation of synthetic face models. There are 68 FAPs, including 2 high-level FAPs used for visemes and expressions and 66 low-level FAPs used to characterize facial feature movements over the jaw, lips, eyes, mouth, nose, cheeks, ears, etc. The expression parameter FAP-2 defines the six primary facial expressions as shown in Table 1. We generate the six basic expressions with the help of low-level FAPs as discussed in [17,19]. The FAPs are computed by tracking the set of facial features defined in Fig. 5, and they are measured in facial animation parameter units (FAPUs), which permit us to apply FAPs to any facial model in a consistent way [20]. The FAPUs are defined with respect to the distances between key facial features in their neutral state, such as the eyes (ES0), eyelids (IRISD0), eye-nose (ENS0), mouth-nose (MNS0) and lip corners (MW0), as shown in Fig. 9. Table 2 gives the relation between the expressions and the involved FAPs. Expressions are generated by moving and deforming various control vertices of the face model according to the FAPs. A negative sign with a FAP number indicates motion in the opposite direction. If Vm denotes the neutral coordinate of the mth vertex in a certain dimension of the 3D space, its animated position V'm in the same dimension can be expressed as

V'm = Vm + wn · FAPUn · in        (7)

where wn is the weight of the nth FAP, FAPUn is the FAPU corresponding to the nth FAP, and in is the amplitude of the FAP, ranging in [0, 1]. In fact, the term wn · FAPUn defines the maximum displacement of FAPn, while the coefficient in controls the intensity of FAPn. We have developed a scan-line algorithm that establishes the correspondence between each triangle of the neutral model and the expression model, for each scan line and for each pixel, to generate the expression-specific texture.
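The displacement rule of Eq. 7 is applied per control vertex for each active FAP. The following hedged sketch (Python/NumPy) illustrates one possible data layout; the FAP table structure, vertex indices and weights shown are our own illustrative assumptions, not values prescribed by MPEG-4 or by the paper.

```python
import numpy as np

def apply_faps(neutral_vertices, faps, fapu):
    """Eq. 7: V'_m = V_m + w_n * FAPU_n * i_n along each FAP's direction.
    neutral_vertices: (N, 3) neutral model.
    faps: list of dicts {'vertices': [...], 'axis': 0|1|2, 'sign': +1|-1,
                         'weight': w_n, 'fapu': 'MNS0'|'MW0'|..., 'intensity': i_n}.
    fapu: dict mapping FAPU names to their neutral-face distances."""
    animated = neutral_vertices.copy()
    for fap in faps:
        # Maximum displacement of this FAP, scaled by its intensity in [0, 1]
        disp = fap['sign'] * fap['weight'] * fapu[fap['fapu']] * fap['intensity']
        animated[fap['vertices'], fap['axis']] += disp
    return animated

# Example: a 'raise corner lip' style displacement (hypothetical indices and values)
# verts = apply_faps(verts,
#                    [{'vertices': [120, 131], 'axis': 1, 'sign': -1,
#                      'weight': 0.6, 'fapu': 'MNS0', 'intensity': 0.8}],
#                    {'MNS0': 40.0})
```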
Fig. 8 Chin model

Fig. 6 Generic face model

Fig. 7 Models of a eyebrow, b eyes, c mouth, d left cheek, e right cheek, f nose

4 Expression morphing

Expression morphing means the generation of continuous and realistic transitions between different facial expressions. We achieve these effects by morphing between the corresponding face models. A 3D morphing sequence can be obtained using simple linear interpolation between the geometric coordinates of corresponding vertices in the two face meshes. Together with the geometric interpolation, we need to blend the associated textures. When we morph two different expressions of the same face model, an intermediate face model is first generated by geometric interpolation. The texture for this intermediate model is generated directly from the neutral face by establishing correspondence between each triangle of the neutral model and the intermediate expression model, for each scan line and for each pixel. We have also developed an algorithm which morphs the expressions of any two face models, using a triangle-based warping method [21,22]. The intermediate texture is generated by linear interpolation of the respective source and destination triangles for each scan line.
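A compact sketch of this interpolation step is shown below (Python/NumPy). It assumes the two textures already share the same parameterization; the per-triangle warping algorithm given later handles the general case.

```python
import numpy as np

def morph_step(src_vertices, dst_vertices, src_tex, dst_tex, t):
    """Linear 3D shape morphing with texture blending for factor t in [0, 1]."""
    vertices = (1.0 - t) * src_vertices + t * dst_vertices        # geometric interpolation
    texture = ((1.0 - t) * src_tex.astype(np.float64)
               + t * dst_tex.astype(np.float64)).astype(np.uint8)  # texture blending
    return vertices, texture
```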


Table 1 Facial expressions defined by FAP-2

Happiness: The eyebrows are relaxed. The mouth is open and the mouth corners are pulled back toward the ears.
Sadness: The inner eyebrows are bent upward. The eyes are slightly closed, the mouth is relaxed.
Anger: The inner eyebrows are pulled downward and together. The eyes are wide open. The lips are pressed against each other or opened to expose the teeth.
Fear: The eyebrows are raised and pulled together. The inner eyebrows are bent upward. The eyes are tense and alert.
Disgust: The eyebrows and eyelids are relaxed. The upper lip is raised and curled, often asymmetrically.
Surprise: The eyebrows are raised. The upper eyelids are wide open, the lower relaxed. The jaw is opened.

A new expression can be generated by blending any two of the six basic expressions with the help of morphing. Our proposed morphing algorithm is as follows:

Step 1: For each value of the interpolation factor t (0 <= t <= 1), repeat Steps 2 to 5.
Step 2: For each triangle of the model, repeat Steps 3 to 5 to generate the intermediate frame.
Step 3: Sort the three vertices of the triangle by their y coordinate in ascending order.
Step 4: Interpolate the vertices of the triangle in the source and target models:
interpolated vertices = (1 − t) · source triangle vertices + t · target triangle vertices
MI = M1 · Ms, so that M1 = MI · Inv(Ms)
MI = M2 · Mt, so that M2 = MI · Inv(Mt)

Fig. 9 Neutral face and FAPUs

Table 2 Facial expressions and FAPs

Happiness: raise corner lip, stretch corner lip, lift cheek, mouth open (FAPs 59, 60, 6, 7, 41, 42, 4-, 5-, 51-, 52-)
Sadness: lower corner lip, lower inner eyebrow, close eyelid (FAPs 59-, 60-, 31-, 32-, 19, 20)
Disgust: close eyelid, mouth open, stretch nose, raise corner lip (FAPs 19, 20, 4-, 5, 51-, 52, 61, 62, 59, 60)
Surprise: raise eyebrow, mouth open, open eyelid (FAPs 31, 32, 33, 34, 35, 36, 4-, 5-, 51-, 52-, 19-, 20-)
Anger: open eyelid, lower eyebrow, squeeze eyebrow, mouth open (FAPs 19-, 20-, 31-, 32-, 37, 38, 4-, 5, 51-, 52)
Fear: open eyelid, raise eyebrow, squeeze eyebrow, mouth open (FAPs 19-, 20-, 31, 32, 33, 34, 35, 36, 37, 38, 4-, 5-, 51-, 52-)

where M1 is the affine transformation matrix that relates the source triangle vertices (Ms) to the intermediate triangle vertices (MI), and M2 is the affine transformation matrix that relates the target triangle vertices (Mt) to the intermediate triangle vertices (MI).

Step 5: Generate the texture for the intermediate frame by establishing correspondence between the source model triangle and the destination model triangle, for each scan line and for each pixel:
Image(x, y) = (1 − t) · Image1(x1, y1) + t · Image2(x2, y2)
[x1, y1, 1]^T = Inv(M1) · [x, y, 1]^T
[x2, y2, 1]^T = Inv(M2) · [x, y, 1]^T
where (x, y) is a pixel in the intermediate frame, (x1, y1) is the corresponding pixel in the source image and (x2, y2) is the corresponding pixel in the destination image.



Fig. 10 Synthesized images and 3D models

The starting x value for the next scan line is the previous starting x value + (1/slope). Because of the affine transformations, this algorithm gives better results than our previously proposed algorithm discussed in [17,23].
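The texture generation of Steps 4 and 5 amounts to inverse affine warping of each intermediate triangle back into the source and destination images and blending the two samples. The sketch below (Python with NumPy/OpenCV) is a hedged reconstruction: it obtains the two affine maps from the triangle vertex correspondences with cv2.getAffineTransform, which is equivalent to the explicit Inv(M1) and Inv(M2) products for non-degenerate triangles, and it rasterizes the triangle with a mask rather than an explicit scan-line loop.

```python
import cv2
import numpy as np

def warp_blend_triangle(src_img, dst_img, src_tri, dst_tri, t, out_img):
    """Fill one intermediate-frame triangle by blending inverse-warped source
    and destination pixels (Steps 4-5 of the morphing algorithm)."""
    src_tri = np.float32(src_tri)                        # 3 x 2 vertex arrays
    dst_tri = np.float32(dst_tri)
    mid_tri = np.float32((1.0 - t) * src_tri + t * dst_tri)   # interpolated triangle

    # Affine maps intermediate -> source and intermediate -> destination
    # (playing the role of Inv(M1) and Inv(M2) in the text)
    to_src = cv2.getAffineTransform(mid_tri, src_tri)
    to_dst = cv2.getAffineTransform(mid_tri, dst_tri)

    h, w = out_img.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    cv2.fillConvexPoly(mask, np.int32(mid_tri), 255)     # rasterize the triangle

    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys, np.ones_like(xs)]).astype(np.float64)  # homogeneous coords

    sx, sy = (to_src @ pts).astype(int)                  # corresponding source pixels
    dx, dy = (to_dst @ pts).astype(int)                  # corresponding destination pixels
    sx = np.clip(sx, 0, src_img.shape[1] - 1); sy = np.clip(sy, 0, src_img.shape[0] - 1)
    dx = np.clip(dx, 0, dst_img.shape[1] - 1); dy = np.clip(dy, 0, dst_img.shape[0] - 1)

    out_img[ys, xs] = ((1.0 - t) * src_img[sy, sx]
                       + t * dst_img[dy, dx]).astype(out_img.dtype)
```

Calling this routine for every triangle of the adapted mesh, for each value of t, produces the blended intermediate frames of the morphing sequence.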

5 Animation

Once the expressions are modeled as described above, the character of interest can be animated. As mentioned earlier, we use the key-frame approach to animation. Every facial expression can be stored as a key frame by storing the values of its parameters. After all key frames are defined, the animation is created by generating intermediate frames. Intermediate frames are generated by interpolating the parameter values of successive key frames. Global animation of the face is of great importance in the implementation of facial animation, when an animator manipulates a 3D face in terms of translation, rotation and zooming. We rotate the 3D model about the three primary axes, and our morphing algorithm automatically generates the texture for the rotated model. In the same way, the 3D model is scaled to create the effect of zoom-in and zoom-out.
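A minimal sketch of the key-frame interpolation described above (Python/NumPy): each key frame is taken to be a vector of FAP intensities, and in-between frames are produced by linear interpolation before the FAPs are applied to the model. The number of in-between frames and the data layout are assumptions.

```python
import numpy as np

def interpolate_keyframes(keyframes, frames_between):
    """keyframes: list of 1-D arrays of FAP intensities, one per key frame.
    Returns the full animation sequence including the in-between frames."""
    sequence = []
    for k0, k1 in zip(keyframes[:-1], keyframes[1:]):
        for i in range(frames_between + 1):
            t = i / float(frames_between + 1)
            sequence.append((1.0 - t) * np.asarray(k0) + t * np.asarray(k1))
    sequence.append(np.asarray(keyframes[-1]))
    return sequence
```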

Fig. 11 a Frontal face with hair. b Synthesized face. c Synthesized face after rotation about the y-axis

6 Simulation results

The face images we have used for simulation are mainly from the 3D facial expression database BU-3DFE. The database covers both male and female subjects with different expressions, various nationalities and different illuminations. We have tested our algorithm on lower-resolution face images as well as on higher-resolution face images of size 512 × 512. The results discussed in this paper are visually better than those published in [17,19,23]. Our database consists of images of

different people with different illuminations. We have evaluated our algorithm on many face images, and the result of facial feature extraction for one of the face images is shown in Fig. 5. Some of the results of 3D model construction are shown in Fig. 10. We have also constructed the 3D model of a face image with hair, which is shown in Fig. 11. We have tested the accuracy of our algorithm by comparing the synthesized face image with the original face image at the pixel level. Our algorithm manipulates the vertices of the face model according to the FAPs specified in the MPEG-4 standard and generates the six basic expressions. Figure 12 shows the happy, sad, angry, fear, disgust and surprise expressions on a synthesized human face. The 3D morphing sequence is obtained using linear interpolation between the geometric coordinates of corresponding vertices in the two face meshes and blending the associated textures. In our experiments we have used the linear interpolation parameter t = 0.2, 0.4, 0.6, 0.8 and 1 to generate intermediate frames. We have chosen the value of t in steps of 0.2 to show a smooth change from one frame to the next, which enables our algorithm to run faster without compromising the quality of the reconstruction. In Fig. 13 we have morphed the sad and surprise expressions and generated a new expression. Figure 14 shows the result of morphing between two different face models. The result of 3D rotation about the three principal axes is shown in Fig. 15. The results of zoom-in and zoom-out are shown in



Fig. 16. We have developed our algorithm in MATLAB and tested it on a Pentium IV, 3 GHz computer with 1 GB RAM. The total time, from feature extraction to synthesis of the image, is shown in Table 3. The overall algorithm is slower for large image sizes, since the rendering time is proportional to the number of pixels to be rendered.

7 Conclusion

Our proposed method adapts a generic 3D model into a face-specific model and successfully synthesizes the 3D face, i.e., it generates the 3D topology and texture. Our proposed method does not require any user interaction. We have also implemented facial animation with the synthesized 3D face in various manners, including expression synthesis and the synthesis of different views with the help of rotation of the 3D model. Our proposed morphing algorithm generates new expressions from the existing

Fig. 14 Morphing between Model A and Model B

Fig. 15 a Synthesized face after rotation of the model about the y-axis (45°). b Synthesized face after rotation of the model about the x-axis (45°). c Synthesized face after rotation of the model about the z-axis (30°)

Fig. 12 Synthesized expressions: a happy, b sad, c angry, d fear, e disgust, f surprise

Fig. 16 a Zoom-out (Sx , S y , Sz ) = (0.5, 0.5, 0.5). b Zoom-in (Sx , S y , Sz ) = (2, 2, 2)

Fig. 13 Morphing between sad and surprise expression


Table 3 Time complexity of the 3D model reconstruction algorithm

Feature points (all rows): left eye: 4, right eye: 4, nose: 1, lip: 8, chin: 3, face contour: 20
Generic 3D model (all rows): 350 triangles, 221 vertices

Image size 200 × 200: 1.438 s
Image size 512 × 512: 5.578 s
Image size 688 × 472: 7.125 s

six basic expressions. The morphing results indirectly indicate the accuracy of our algorithm, because smooth results are produced only if the features are properly aligned in the respective models.
Acknowledgments The authors would like to thank the State University of New York at Binghamton for providing the BU-3DFE database.

References
1. Ersotelos, N., Dong, F.: Building highly realistic facial modeling and animation: a survey. Vis. Comput. 24(1), 13-30 (2008)
2. Ip, H.H.S., Yin, L.: Constructing a 3D individualized head model from two orthogonal views. Vis. Comput. 12(5), 254-266 (1996)
3. Blanz, V., Vetter, T.: A morphable model for the synthesis of 3D faces. In: Proceedings of SIGGRAPH '99, pp. 187-194. Los Angeles, CA, USA (1999)
4. Feng, G.C., Yuen, P.C.: Recognition of head and shoulder face image using virtual frontal view image. IEEE Trans. Syst. Man Cybern. Part A 30(6), 871-882 (2000)
5. Liu, Z.: A fully automatic system to model faces from a single image. MSR technical report (2003)
6. Parke, F.I.: Parameterized models for facial animation. IEEE Comput. Graph. Appl. 2(9), 61-68 (1982)
7. Pandzic, I.S., Komiya, R., Forchheimer, R.: MPEG-4 Facial Animation: The Standard, Implementation and Applications. John Wiley & Sons, New York (2002). ISBN 0-470-84465-5
8. Zhang, Y., Zhu, Z., Yi, B.: Dynamic facial expression analysis and synthesis with MPEG-4 facial animation parameters. IEEE Trans. Circuits Syst. Video Technol. 18(10), 1383-1396 (2008)
9. Yin, L., Sun, X., Wang, Y., Rosato, M.J.: A 3D facial expression database for facial behavior research. In: Proceedings of the International Conference on Automatic Face and Gesture Recognition (FGR), pp. 211-216, UK (2006)
10. Guan, Y.: Automatic 3D face reconstruction based on single 2D image. In: International Conference on Multimedia and Ubiquitous Engineering, pp. 1216-1219. Seoul, Korea, 26-28 April (2007)
11. Baek, S., Kim, B., Lee, K.: 3D face model reconstruction from single 2D frontal image. In: VRCAI 2009, pp. 95-101. Yokohama, Japan, 14-15 December (2009). ACM, ISBN 978-1-60558-912-1
12. Yagi, Y.: Facial feature extraction from frontal face image. In: IEEE International Conference on Signal Processing, vol. 2, pp. 1225-1232 (2000)
13. Sheng, Y., Sadka, A.H., Kondoz, A.M.: An automatic algorithm for facial feature extraction in video applications. In: Proceedings of the Fifth International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS 2004), Lisbon, Portugal, 21-23 April (2004)
14. Sheng, Y., Sadka, A.H., Kondoz, A.M.: Automatic single view-based 3D face synthesis for unsupervised multimedia applications. IEEE Trans. Circuits Syst. Video Technol. 18(7), 961-974 (2008)
15. Hsu, R.L., Abdel-Mottaleb, M., Jain, A.K.: Face detection in color images. IEEE Trans. Pattern Anal. Mach. Intell. 24(5), 696-706 (2002)
16. Hulbert, A., Poggio, T.: Synthesizing a color algorithm from examples. Science 239, 482-485 (1988)
17. Patel, N.M., Zaveri, M.: 3D facial model construction and expressions synthesis using a single frontal face image. Int. J. Comput. Graph. SERC 1(1) (2010)
18. Patel, N.M., Patel, P., Zaveri, M.: Parametric model based facial animation synthesis. In: International Conference on Emerging Trends in Computing, pp. 8-10. Kamraj College of Engineering & Technology, Tamilnadu, India (2009)
19. Narendra, P., Mukesh, Z.: 3D model construction and expression synthesis from a single frontal image. In: International Conference on Computer and Communication Technology, MNIT, Allahabad, 17-19 September (2010)
20. Krinidis, S., Pitas, I.: Statistical analysis of human facial expressions. J. Inf. Hiding Multimedia Signal Process. 1(3), 241-260 (2010)
21. Pighin, F., Hecker, J., Lischinski, D., Szeliski, R., Salesin, D.H.: Synthesizing realistic facial expressions from photographs. In: Proceedings of SIGGRAPH, pp. 75-84 (1998)
22. Lee, W., Magnenat-Thalmann, N.: Head modeling from pictures and morphing in 3D with image metamorphosis based on triangulation. Lecture Notes in Computer Science, vol. 1537, pp. 254-267. Springer, Berlin, Heidelberg (1998)
23. Narendra, P., Mukesh, Z.: 3D facial model construction and animation from a single frontal face image. In: International Conference on Communications and Signal Processing, NIT Calicut, 10-12 February (2011)

