
EXPLORING FACE SPACE: A COMPUTATIONAL APPROACH

ZHANG SHENG
B.Sc., Zhejiang University, 1998
M.Sc., Chinese Academy of Sciences, 2001

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, School of Computing, National University of Singapore. © 2006, Zhang Sheng.

To my wonderful wife, Lu Si.

Acknowledgements

I wish to express my sincere gratitude to my supervisor, Dr. Terence Sim, for his valuable guidance on research, his encouragement and enthusiasm, and his pleasant personality. Without him, this thesis would never have been completed. I am grateful to my committee members, Assoc. Prof. Leow Wee Kheng and Dr. Fang Chee Hung. I enjoyed my fruitful discussions with Assoc. Prof. Leow Wee Kheng; his expertise, questions and suggestions have been very useful in improving my PhD work. I also thank Dr. Alan Cheng for sharing with me his broad knowledge of computational geometry, and Dr. Sandeep Kumar at General Motors (GM) for educating me in computer security and English writing.

I had a pleasant stay at the School of Computing (SOC), NUS. I am indebted to my colleagues Guo Rui, Wang Ruixuan, Miao Xiaoping, Janakiraman Rajkumar, Saurabh Garg, Zhang Xiaopeng and others; I really enjoyed my collaborations and discussions with these brilliant people. I also take this special occasion to thank the University and the Singapore government for providing a world-class research environment and financial support.

Finally, I would like to thank my family for their endless love and support, especially my wife Lu Si, to whom this thesis is lovingly dedicated.

Zhang Sheng
National University of Singapore
November 2006

Contents

Dedication
Acknowledgements
Contents
Abstract
List of Tables
List of Figures

1 Introduction
  1.1 Overview
  1.2 Motivation
  1.3 Problem Statement
  1.4 Contributions
  1.5 Thesis Outline
  1.6 Notation

2 Literature Survey
  2.1 Statistical Modeling
    2.1.1 Eigenface
    2.1.2 KPCA
    2.1.3 ICA
    2.1.4 GMM
    2.1.5 Observations
  2.2 Manifold Learning
    2.2.1 Multidimensional Scaling (MDS)
    2.2.2 Isomap
    2.2.3 Locally Linear Embedding (LLE)
    2.2.4 Comparison

3 Theory
  3.1 Basic Ideas
  3.2 Mathematical Modeling
    3.2.1 Face rendering
    3.2.2 Face recognition
  3.3 Special Case: Zero Curvature
  3.4 Visualization
  3.5 Representation
  3.6 Summary

4 Geometric Analysis
  4.1 Distance Metric
  4.2 Space Structure: Geomap
  4.3 Example

5 Application
  5.1 Identity Ambiguity
  5.2 Experiment: Face Recognition

6 Conclusion
  6.1 Summary
  6.2 Contributions
  6.3 Future Directions

Bibliography
Appendix A: Scatter Matrix
Appendix B: PCA vs. Euclidean MDS
Appendix C: Linear Least-Squares
Appendix D: Computing the Jacobian Matrix
Appendix E: Image Rendering
  E.1 Face Models
  E.2 Coordinate System
  E.3 Rendering Parameters

Abstract

Face recognition has received great attention, especially during the past few years. However, even after more than 30 years of active research, face recognition, whether from still images or video, remains a difficult problem. The main difficulty is that the appearance of a face changes dramatically when variations in illumination, pose and expression are present, and attempts to find features invariant to these variations have largely failed. We therefore try to understand how face image and identity are affected by these variations, i.e., pose and illumination.

In this thesis, using image rendering, we present a new approach to studying the face space, which is defined as the set of all images of faces under different viewing conditions. Based on this approach, we further explore some properties of the face space. We also propose a new approach to learning the structure of the face space that combines global and local information. Along the way, we explain some phenomena that have not yet been clarified.
We hope the work in this thesis helps readers understand the face space better, and provides useful insights for robust face recognition.

List of Tables

1.1 Notations
4.1 Comparison of Geomap with Isomap and Euclidean MDS
5.1 Classification accuracy rate (%) of two sets of face images, varying the number of training samples

[...]

(Future Directions, continued:) ...and by measuring the size of the ambiguous regions as more people are added, we can decide whether these observations are true.

2. We plan to explore the face space when pose and illumination vary simultaneously. It has been noted [33] that face recognition degrades further when pose and illumination vary simultaneously. Based on the properties of the face space under varying pose and under varying illumination, we can further explore the face space under simultaneous variations.

3. Besides illumination and pose, we also intend to synthesize different facial expressions, which will add new variability to the face space. We can also synthesize beards, moustaches and eyeglasses, then investigate their effects.

4. We want to enlarge the parameter space, e.g., from [−90°, 90°] to [−120°, 120°]. In [20], Lee et al. raised the problem of face recognition under extreme lighting conditions. By extending the parameter space, we can study this problem within our theory; for example, we can compute the identity ambiguity, which could give insights into face recognition under extreme viewing conditions.

Bibliography

[1] Aapo Hyvärinen and Erkki Oja. Independent Component Analysis: Algorithms and Applications. Neural Networks, 13(4-5):411–430, 2000.
[2] Y. Adini, Y. Moses, and S. Ullman. Face Recognition: The Problem of Compensating for Changes in Illumination Direction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):721–732, 1997.
[3] M. Bartlett, J. Movellan, and T. Sejnowski.
Face Recognition by Independent Component Analysis. IEEE Transactions on Neural Networks, 13(6):1450–1464, 2002.
[4] G. Baudat and F. Anouar. Generalized Discriminant Analysis Using a Kernel Approach. Neural Computation, 12(10):2385–2404, 2000.
[5] P. Belhumeur, J. Hespanha, and D. Kriegman. Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 1997.
[6] V. Blanz and T. Vetter. A Morphable Model for the Synthesis of 3D Faces. In ACM SIGGRAPH, Computer Graphics, 1999.
[7] M. P. Do Carmo. Riemannian Geometry. Birkhäuser, Cambridge, MA, 1992.
[8] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, 2nd Edition. The MIT Press, September 2001.
[9] T. Cox and M. Cox. Multidimensional Scaling. Chapman and Hall, 1994.
[10] J. Daugman. Face and Gesture Recognition: Overview. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):675–676, 1997.
[11] R. Duda, P. Hart, and D. Stork. Pattern Classification, 2nd Edition. John Wiley and Sons, 2000.
[12] M. A. Fischler and R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 24(6):381–395, 1981.
[13] J. Foley, A. van Dam, S. Feiner, and J. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley, 1993.
[14] A.S. Georghiades, D.J. Kriegman, and P.N. Belhumeur. Illumination Cones for Recognition under Variable Lighting: Faces. In IEEE Conference on Computer Vision and Pattern Recognition, pages 52–59, 1998.
[15] G.H. Golub and C.F. Van Loan. Matrix Computations, 3rd Edition. Johns Hopkins Univ. Press, 1996.
[16] Brian Guenter and Richard Parent. Computing the Arc Length of Parametric Curves. IEEE Computer Graphics and Applications, 10(3):72–78, May 1990.
[17] I.T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[18] Takeo Kanade and Akihiko Yamada.
Multi-Subregion Based Probabilistic Approach Toward Pose-Invariant Face Recognition. In Proceedings of the 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), pages 954–959, July 2003.
[19] John M. Lee. Introduction to Smooth Manifolds. Springer, September 2002.
[20] Kuang-Chih Lee, Jeffrey Ho, and David Kriegman. Acquiring Linear Subspaces for Face Recognition under Variable Lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5):684–698, May 2005.
[21] Yanxi Liu and Jeffrey Palmer. A Quantified Study of Facial Asymmetry in 3D Faces. In Proceedings of the 2003 IEEE International Workshop on Analysis and Modeling of Faces and Gestures, in conjunction with ICCV '03, October 2003.
[22] David J.C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[23] James Munkres. Topology, 2nd Edition. Prentice Hall, 1999.
[24] N. Magnenat-Thalmann, P. Kalra, J. Leveque, R. Bazin, D. Batisse, and B. Querleux. A Computational Skin Model: Fold and Wrinkle Formation. 5(4):317–323, December 2002.
[25] Shree K. Nayar, Hiroshi Murase, and Sameer A. Nene. Parametric Appearance Representation. In Shree K. Nayar and T. Poggio, editors, Early Visual Learning, pages 131–160. Oxford University Press, February 1996.
[26] John Oprea. Differential Geometry and Its Applications, 2nd Edition. Prentice Hall, 2003.
[27] A. Pentland, B. Moghaddam, and T. Starner. View-based and Modular Eigenspaces for Face Recognition. In Proceedings of Computer Vision and Pattern Recognition, 1994.
[28] P. Phillips, P. Grother, R. Micheals, and D. Blackburn. Facial Recognition Vendor Test 2002. http://www.frvt.org.
[29] Ravi Ramamoorthi. Analytic PCA Construction for Theoretical Analysis of Lighting Variability in Images of a Lambertian Object. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1322–1333, October 2002.
[30] C. Radhakrishna Rao and M. Bhaskara Rao. Matrix Algebra and Its Applications to Statistics and Econometrics. World Scientific Publishing, July 1998.
[31] Sam Roweis and Lawrence Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, 290:2323–2326, December 2000.
[32] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, 2002.
[33] T. Sim and S. Zhang. Exploring Face Space. In CVPR Workshop on Face Processing in Video, June 2004.
[34] Terence Sim, Simon Baker, and Maan Bsat. The CMU Pose, Illumination, and Expression Database. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1615–1618, December 2003.
[35] Sudeep Sarkar. Human ID Project at University of South Florida, 2001. http://marathon.csee.usf.edu/HumanID/index.html.
[36] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science, 290:2319–2323, 2000.
[37] George B. Thomas, Jr. and Ross L. Finney. Calculus and Analytic Geometry, 9th Edition. Addison-Wesley, 1996.
[38] M. Turk and A. Pentland. Eigenfaces for Recognition. Journal of Cognitive Neuroscience, 3(1), 1991.
[39] T. Vetter. Synthesis of Novel Views from a Single Face Image. International Journal of Computer Vision, 28(2), 1998.
[40] W. Zhao, R. Chellappa, A. Rosenfeld, and P. Phillips. Face Recognition: A Literature Survey. ACM Computing Surveys, 35(4):399–458, 2003.
[41] Jing Wang, Zhenyue Zhang, and Hongyuan Zha. Adaptive Manifold Learning. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1473–1480. MIT Press, Cambridge, MA, 2005.
[42] Alan Watt and M. Watt. Advanced Animation and Rendering Techniques. Addison-Wesley Professional, 1992.
[43] M. Woo, J. Neider, T. Davis, and D. Shreiner. OpenGL(R) Programming Guide: The Official Guide to Learning OpenGL, Version 1.2. Addison-Wesley, 1999.

Appendix A: Scatter Matrix

Proof.
On one hand,

\[
\sum_i \sum_j (x_i - x_j)(x_i - x_j)^\top
= \sum_i \sum_j \left( x_i x_i^\top - x_i x_j^\top - x_j x_i^\top + x_j x_j^\top \right)
= 2N \sum_i x_i x_i^\top - 2\Big(\sum_i x_i\Big)\Big(\sum_j x_j\Big)^\top
= 2N \Big( \sum_i x_i x_i^\top - N\, m m^\top \Big). \tag{A.1}
\]

Here, \(m = \frac{1}{N}\sum_i x_i\) is the total mean of the samples. On the other hand,

\[
S_t = \sum_{i=1}^{N} (x_i - m)(x_i - m)^\top
= \sum_i \left( x_i x_i^\top - x_i m^\top - m x_i^\top + m m^\top \right)
= \sum_i x_i x_i^\top - N\, m m^\top. \tag{A.2}
\]

By comparing Equations (A.1) and (A.2), we can see that

\[
S_t = \frac{1}{2N} \sum_i \sum_j (x_i - x_j)(x_i - x_j)^\top. \tag{A.3}
\]

Appendix B: PCA vs. Euclidean MDS

Proof. If \(D\) is the Euclidean distance matrix between the columns of \(X\), then converting distances to inner products with the centering matrix \(H\) gives

\[ X^\top X = -\tfrac{1}{2} H D H = B. \tag{B.1} \]

From Eq. (2.5),

\[ X^\top X\, v = \lambda_{MDS}\, v, \tag{B.2} \]

where \(v\) is the MDS eigenvector corresponding to the eigenvalue \(\lambda_{MDS}\). PCA applies eigen-decomposition, i.e.,

\[ X X^\top u = \lambda_{PCA}\, u, \tag{B.3} \]

where \(u\) is the PCA eigenvector corresponding to the eigenvalue \(\lambda_{PCA}\). If we multiply both sides of Eq. (B.2) by \(X\),

\[ X X^\top (X v) = \lambda_{MDS}\, (X v). \tag{B.4} \]

By comparing Eqs. (B.3) and (B.4), we can readily obtain

\[ \lambda_{PCA} = \lambda_{MDS}, \tag{B.5} \]
\[ X v = u. \tag{B.6} \]

In matrix form, Eqs. (B.5) and (B.6) can be rewritten as

\[ \Lambda = \Lambda_{PCA} = \Lambda_{MDS}, \tag{B.7} \]
\[ X V = U. \tag{B.8} \]

Since \(Y_{PCA} = U^\top X\) and \(X^\top X = B = V \Lambda V^\top\),

\[ Y_{PCA} = U^\top X = V^\top X^\top X = V^\top V \Lambda V^\top = \Lambda V^\top. \tag{B.9} \]

By comparing with \(Y_{MDS} = \Lambda^{1/2} V^\top\), we can easily see that

\[ Y_{PCA} = \Lambda^{1/2} \Lambda^{1/2} V^\top = \Lambda^{1/2} Y_{MDS}. \tag{B.10} \]

Appendix C: Linear Least-Squares

Given \(A \in \mathbb{R}^{D \times d}\) with \(D \gg d\) and \(b \in \mathbb{R}^{D}\), the task is to find the vector \(x \in \mathbb{R}^{d}\) that minimizes \(\|Ax - b\|\). Let us define \(e = \|Ax - b\|^2\). To minimize \(e\), we compute the derivative of \(e\) with respect to \(x\):

\[ \frac{de}{dx} = \frac{d\,\|Ax - b\|^2}{dx} = 2 A^\top (A x - b). \tag{C.1} \]

Setting this to zero yields the normal equations

\[ A^\top A\, x = A^\top b. \tag{C.2} \]

In the case where \(A\) has rank \(d\), the matrix \(A^\top A\) is invertible, and so \(x\) may be found by \(x = (A^\top A)^{-1} A^\top b\).

Now let us prove that \(x = (A^\top A)^{-1} A^\top b\) minimizes \(e\). This can be proven by
Therefore, the second-order derivative of e is a positive matrix, which can guarantee the minimum of e. 103 Appendix D Computing the Jacobian Matrix Given two matrices A ∈ RD×k and B ∈ Rd×k , J = AB (BB )−1 minimizes the Frobenius norm A − JB F, if BB is invertible. Proof. Let us define e = A − JB F. Based on the formula X F = tr{XX }, we can rewrite e as e = tr{(A − JB)(A − JB) }. (D.1) To minimize e, we compute the derivative of e with respect to J: de d(J) d(tr{(A − JB)(A − JB) }) = d(J) = = 2B(JB − A) (D.2) That is: (JB − A)B JBB = = AB . (D.3) 104 It is easy to see that J = AB (BB )−1 if BB is invertible. Now let us prove that J = AB (BB )−1 minimizes e. This can be proven by examining its second-order derivative d2 e d(BB J − A ) = d(J) d(J) = BB Note that BB (D.4) is a positive definite matrix because it has positive eigenvalues. Therefore, the second-order derivative of e is a positive matrix, which can guarantee the minimum of e. 105 Appendix E Image Rendering E.1 Face Models For the work in this thesis, we have chosen the USF 3D dataset [35] to render face images. This dataset consists of about 200 people with 3D (depth) data of their faces, as well as 2D photographs captured under uniform illumination. These photographs are used as texture maps over the 3D shape to produce the rendered images. The face models are captured by using Cyberware 3D scanner. Then, the surface of each face model has been triangulated into about 16, 000 triangles with 8, 000 vertices. We arbitrarily chose a subset of subjects (Fig. 3.1) and used OpenGL [43] to render the faces under different illuminations and poses. Fig. 3.2 shows a few rendered images under various poses and illuminations. The main reason that we chose to render these faces, instead of acquiring actual photographs, is that we cannot collect so many face images under controlled viewing conditions, i.e., particular pose or illumination angles. 
E.2 Coordinate System

We suppose that we have a rendering algorithm R(θ, φ) that can render (synthesize) a face image of a person under a particular illumination θ and pose φ, using some suitable parametrization of illumination and pose. This is done by centering a face mesh at the origin in R³, and placing a camera and a point light source on a viewing sphere of fixed radius around the origin. The camera and light positions are then each specified by two angles. For instance, a light source s is represented using two angles (α, β), denoting the azimuth and elevation angles respectively. All angles are measured in degrees. The azimuth (longitude) is the left/right angle, i.e., the angle between the z-axis and the projection of s onto the xz-plane. The elevation (latitude) is the up/down angle, i.e., the angle between s and the xz-plane. Since we are only concerned with the direction of a light source, and not its strength, we may constrain s to lie on the surface of a unit hemisphere in front of the face. As such, this two-parameter representation is sufficient. When used in computations, however, s is converted to a 3 × 1 column vector s = [s_x, s_y, s_z]^⊤ using the following equations:

\[
s_x = -\cos(\beta)\,\sin(\alpha), \qquad
s_y = \sin(\beta), \qquad
s_z = \cos(\beta)\,\cos(\alpha).
\]

Although our current analysis assumes a point light source, other more complex lighting is possible using the Spherical Harmonics technique [29] recently proposed in the literature. This technique parameterizes illumination with more numbers than the 2 used here, but does not change the core ideas in this thesis in any way.

Figure E.1: Coordinate axes to measure illumination direction. The origin is in the center of the face.

E.3 Rendering Parameters

The rendered face images are 120 × 120 pixels in size. By vectorization, we consider a D-pixel greyscale face image as a column vector x ∈ R^D, where D = 14,400 = 120 × 120. To make the images more realistic, we render specularities and cast shadows appropriately.
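The azimuth/elevation-to-vector conversion of Section E.2 can be sketched as a small helper (the function name is ours, not the thesis's; only the formulas above are assumed):

```python
import math

def light_direction(azimuth_deg, elevation_deg):
    """Convert azimuth/elevation angles (in degrees) to a unit light vector,
    following the convention of Section E.2: azimuth is measured from the
    z-axis within the xz-plane, elevation from the xz-plane."""
    a = math.radians(azimuth_deg)
    b = math.radians(elevation_deg)
    sx = -math.cos(b) * math.sin(a)
    sy = math.sin(b)
    sz = math.cos(b) * math.cos(a)
    return (sx, sy, sz)

# A frontal light (azimuth 0, elevation 0) points along the +z axis,
# and every returned vector has unit length by construction.
frontal = light_direction(0.0, 0.0)
```

Because sin² + cos² = 1, the returned vector is always unit length, which matches the constraint that s lies on a unit hemisphere in front of the face.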
Cast shadows are rendered using a shadow buffer [42]. During image rendering, we set two sets of parameters, for light and material. (1) Light: we employ one point light source at infinity; the coefficients for ambient and diffuse light are set to 0.3 and […], respectively. (2) Material: there are three parameters, specular, diffuse, and shininess, which we set to 0.5, 1.0, and 50.0. We also add some ambient light to make the images more realistic. There exist quite a few techniques for modeling human skin [24]; in this thesis, we employ the Phong model [13], which is a standard modeling technique in computer graphics.

[...]

(Table 1.1, Notations, lists the symbols used throughout the thesis, including: the dimensionality of the image space; the dimensionality of the parameter space; the threshold of per-pixel error; curvature; normalized curvature; the vector with all ones; a data point in the parameter space; the centroid of the data; the approximation error; Gaussian noise; the data matrix in the high-dimensional space; the data matrix in the low-dimensional space; the total scatter matrix; the eigenvalue matrix; the Jacobian matrix; the graph distance matrix; and the mapping...)

[...] developing a mathematical framework for the face space. In particular, this thesis makes four original contributions:

• Propose a new approach to model and quantitatively analyze the face space, so that we can visualize and represent the face space.
• Demonstrate a new approach to machine learning which can combine global structure with local geometry.
• Explain some phenomena which have not been clarified yet...

[...]

(From the List of Figures:) ...number of squares (mean = circle, standard deviation = vertical bar) needed to cover the face space. (a) Face space under varying illumination and frontal pose; (b) face space under varying pose and frontal illumination. Note that for both curves, means and standard deviations decrease monotonically (almost).
4.1 Computing the geodesic distance in the face space. (a) Two neighboring tangent planes...
[...] the fields of statistical modeling and manifold learning. Chapter 3 introduces the theory of mathematical modeling of the face space; the theory is then applied to visualize and represent the face space. Chapter 4 elaborates on the geometric analysis of the face space; the analysis shows how to calculate distances over the face space, and how to discover the structure of the face space by considering...

[...] how much of the image space the face space occupies. This is motivated by the subspace approach, which has been extensively studied and used to model the face space. In mathematical language, any subspace is an infinite space. However, the face space should be bounded, because the value of each image pixel is constrained, i.e., from 0 to 255 (for each channel). For example, when there is no lighting, the face...

[...] Despite many face recognition techniques, face recognition remains a difficult, unsolved problem. The main difficulty is that the appearance of a face changes dramatically when variations in illumination, pose and expression are present. Attempts to find features invariant to these variations have largely failed [2]. Therefore, we try to understand how face image and identity are affected by these variations, such as...

[...] begin to analyze two basic properties of the face space: its distance metric and its space structure. The performance of many learning and classification algorithms depends on the distance metric over the space, for example face recognition and face detection, which may need to measure between-class and within-class distances [11]. If we understand the distance metric of the face space, we can also explain some
But researchers [14] [2] have observed that the face space is a nonlinear space. Nevertheless, a more systematic study of the face space is needed; for example, we need to quantitatively measure how curved the face space is. That is one of the reasons why we want to explore the face space. In Section 3.3, we will prove that if a high-dimensional space is a plane and the projection matrix between...

[...] variation than for illumination variation, because researchers have observed that the face space under pose variation is more curved than that under illumination variation.

2.1.5 Observations

Although many statistical modeling techniques can be used to model the face space, they work under different assumptions because of insufficient knowledge of the face space. Eigenface models the face space with a linear structure... [...] the face space precisely. This is because of the problem of limited data. What they can do is to (implicitly) assume the distribution of the face space, then employ certain...

(Figure 1.1: Overview of the thesis. Image rendering of a face model via computer graphics produces face images; the face space is then modeled, visualized, represented and analyzed in terms of its distance metric and space structure, with applications to identity ambiguity and face recognition.)
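The Eigenface modeling discussed above can be sketched as PCA on vectorized images. The following is a toy sketch only: the thesis's rendered USF face images are not available here, so random low-rank vectors stand in for face images, and all names are ours:

```python
import numpy as np

# Toy eigenface sketch: PCA on a small set of synthetic "image" vectors.
rng = np.random.default_rng(0)
D, N, k = 64, 12, 3                       # pixels, images, kept eigenfaces
basis = rng.standard_normal((D, k))
X = basis @ rng.standard_normal((k, N))   # images lie near a k-dim subspace
X = X + 0.01 * rng.standard_normal((D, N))

mean_face = X.mean(axis=1, keepdims=True)
Xc = X - mean_face
# Eigenfaces are the leading left singular vectors of the centered data.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfaces = U[:, :k]

# Project an image into the eigenface subspace and reconstruct it.
coords = eigenfaces.T @ Xc[:, [0]]
recon = mean_face + eigenfaces @ coords
err = np.linalg.norm(recon - X[:, [0]]) / np.linalg.norm(X[:, [0]])
print(f"relative reconstruction error: {err:.4f}")
```

Because the synthetic data is nearly rank-k, the reconstruction error is tiny; for real face images under varying pose and illumination the face space is curved, which is exactly why a linear Eigenface model is only an approximation.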
