670 CHAPTER 23 Fingerprint Recognition

FIGURE 23.17 Aligned ridge structures of mated pairs. Note that the best alignment in one part (top left) of the image results in large displacements between the corresponding minutiae in the other regions (bottom right) [22]. © IEEE.

(template) minutiae string. The string representation is obtained by imposing a linear ordering based on radial angles and radii. The resulting input and template minutiae strings are matched using an inexact string matching algorithm to establish the correspondence. The inexact string matching algorithm essentially transforms (edits) the input string into the template string, and the number of edit operations is considered a metric of the (dis)similarity between the strings. While the permitted edit operators model the impression variations in a representation of a finger (deletion of genuine minutiae, insertion of spurious minutiae, and perturbation of minutiae), the penalty associated with each edit operator models the likelihood of that edit. The sum of the penalties of all the edits (edit distance) defines the similarity between the input and template minutiae strings. Among the several possible sets of edits that permit the transformation of the input minutiae string into the reference minutiae string, the string matching algorithm chooses the transform associated with the minimum cost, found by dynamic programming. The algorithm tentatively considers a candidate (aligned) input minutia and a candidate template minutia in the input and template minutiae strings to be a mismatch if their attributes are not within a tolerance window (see Fig. 23.18) and penalizes them for deletion/insertion edits.

FIGURE 23.18 Bounding box and its adjustment [22]. © IEEE.
If the attributes are within the tolerance window, the amount of penalty associated with the tentative match is proportional to the disparity in the values of the minutiae attributes. The algorithm accommodates elastic distortion by adaptively adjusting the parameters of the tolerance window based on the most recent successful tentative match. The tentative matches (and correspondences) are accepted if the edit distance for those correspondences is smaller than that of any other correspondence. Figure 23.19 shows the results of applying the matching algorithm to an input and a template minutiae set pair.

FIGURE 23.19 Results of applying the matching algorithm to an input minutiae set and a template: (a) input minutiae set; (b) template minutiae set; (c) alignment result based on the minutiae marked with green circles; (d) matching result, where template minutiae and their correspondences are connected by green lines [22]. © IEEE.

The outcome of the matching process is a matching score, determined from the number of mated minutiae in the correspondences associated with the minimum cost of matching the input and template minutiae strings. The raw matching score is normalized by the total number of minutiae in the input and template fingerprint representations and is used to decide whether the input and template fingerprints are mates. The higher the normalized score, the greater the likelihood that the test and template fingerprints are scans of the same finger. The results of a performance evaluation of the fingerprint matching algorithm are illustrated in Fig. 23.20 for 1,698 fingerprint images in the NIST 9 database [41] and in Fig. 23.13 for 490 images of 70 individuals in the MSU database. Some sample points on the receiver operating characteristic curve are tabulated in Table 23.2. The accuracy of fingerprint matching algorithms depends heavily on the testing samples.
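The edit-distance computation described above can be sketched in a few lines. The following is a minimal illustration, not the published algorithm: minutiae are reduced to (radius, angle) pairs, the tolerance window is fixed rather than adaptively adjusted, and the tolerances and penalty weights (dr, da, w_del, w_ins) are made-up values.

```python
# Sketch of inexact string matching between minutiae strings via
# dynamic programming. Attribute layout, tolerances, and penalty
# weights are illustrative assumptions, not published parameters.

def edit_distance(inp, tmpl, dr=8.0, da=0.3, w_del=1.0, w_ins=1.0):
    """inp, tmpl: lists of (radius, angle) minutiae, linearly ordered
    by radial angle and radius. Returns (min cost, mated count)."""
    n, m = len(inp), len(tmpl)
    INF = float("inf")
    # dp[i][j] = (cost, mated) of transforming inp[:i] into tmpl[:j]
    dp = [[(INF, 0)] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = (0.0, 0)
    for i in range(n + 1):
        for j in range(m + 1):
            c, k = dp[i][j]
            if c == INF:
                continue
            if i < n and c + w_del < dp[i + 1][j][0]:   # delete spurious input minutia
                dp[i + 1][j] = (c + w_del, k)
            if j < m and c + w_ins < dp[i][j + 1][0]:   # insert missed template minutia
                dp[i][j + 1] = (c + w_ins, k)
            if i < n and j < m:                         # tentative match
                r1, a1 = inp[i]
                r2, a2 = tmpl[j]
                if abs(r1 - r2) <= dr and abs(a1 - a2) <= da:
                    # penalty proportional to the attribute disparity
                    sub = abs(r1 - r2) / dr + abs(a1 - a2) / da
                    if c + sub < dp[i + 1][j + 1][0]:
                        dp[i + 1][j + 1] = (c + sub, k + 1)
    return dp[n][m]

def match_score(inp, tmpl):
    """Normalize the mated-minutiae count by the total number of
    minutiae in both representations (identical sets score 1.0)."""
    _, mated = edit_distance(inp, tmpl)
    return 2.0 * mated / (len(inp) + len(tmpl))
```

A substitution within the tolerance window costs an amount proportional to the attribute disparity, as in the text; an out-of-tolerance pair can only be explained by a deletion plus an insertion.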
For instance, the best matcher in FpVTE2003 [43] achieved a 99.9% true accept rate (TAR) at a 1% false accept rate (FAR), while the best matcher in FVC2006 [12] achieved only 91.8% TAR at 1% FAR on the first database in the test (DB1). Commercial fingerprint matchers are very efficient. For instance, it takes about 32 ms for the best matcher in FVC2006 to extract features and perform matching on a PC with an Intel Pentium IV 3.20 GHz processor.

TABLE 23.2 False acceptance and false reject rates on two data sets with different threshold values [22]. © IEEE.

Threshold   False acceptance   False reject   False acceptance   False reject
value       rate (MSU)         rate (MSU)     rate (NIST 9)      rate (NIST 9)
7           0.07%              7.1%           0.073%             12.4%
8           0.02%              9.4%           0.023%             14.6%
9           0.01%              12.5%          0.012%             16.9%
10          0                  14.3%          0.003%             19.5%

FIGURE 23.20 Receiver operating characteristic curve for NIST 9 (CD No. 1) [22]. © IEEE.

23.12 SUMMARY AND FUTURE PROSPECTS

With recent advances in fingerprint sensing technology and improvements in the accuracy and matching speed of fingerprint matching algorithms, automatic personal identification based on a fingerprint is becoming an attractive alternative/complement to the traditional methods of identification. We have provided an overview of fingerprint-based identification and summarized algorithms for fingerprint feature extraction, enhancement, matching, and classification. We have also presented a performance evaluation of these algorithms.

The critical factor for the widespread use of fingerprints is meeting the performance (e.g., matching speed and accuracy) standards demanded by emerging civilian identification applications. Unlike identification based on passwords or tokens, the accuracy of fingerprint-based identification is not perfect.
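Operating points such as those in Table 23.2 are obtained by sweeping a decision threshold over two score sets: genuine (mated) and impostor (non-mated) comparisons. A minimal sketch, with made-up scores:

```python
def far_frr(genuine, impostor, threshold):
    """False accept rate: fraction of impostor scores at/above the
    threshold; false reject rate: fraction of genuine scores below it."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

genuine = [12, 15, 9, 20, 14]   # made-up mated-pair scores
impostor = [3, 7, 5, 2, 8]      # made-up non-mated scores
operating_points = [far_frr(genuine, impostor, t) for t in (7, 8, 9, 10)]
```

Raising the threshold trades false accepts for false rejects, which is exactly the pattern visible in Table 23.2.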
There is a growing demand for faster and more accurate fingerprint matching algorithms which can (particularly) handle poor-quality images. Some of the emerging applications (e.g., fingerprint-based smartcards) will also benefit from a compact representation of a fingerprint and more efficient algorithms. The design of highly reliable, accurate, and foolproof biometric-based identification systems may warrant effective integration of the discriminatory information contained in several different biometrics and/or technologies. The issues involved in integrating fingerprint-based identification with other biometric or nonbiometric technologies constitute an important research topic [24, 37].

As biometric technology matures, there will be an increasing interaction among the (biometric) market, (biometric) technology, and the (identification) applications. The emerging interaction is expected to be influenced by the added value of the technology, the sensitivities of the population, and the credibility of the service provider. It is too early to predict where, how, and which biometric technology will evolve and be mated with which applications. But it is certain that biometrics-based identification will have a profound influence on the way we conduct our daily business. It is also certain that, as the most mature and well-understood biometric, fingerprints will remain an integral part of the preferred biometrics-based identification solutions in the years to come.

REFERENCES

[1] A. K. Jain, R. Bolle, and S. Pankanti, editors. Biometrics: Personal Identification in Networked Society. Springer-Verlag, New York, 2005.
[2] R. Bahuguna. Fingerprint verification using hologram matched filterings. In Proc. Biometric Consortium Eighth Meeting, San Jose, CA, June 1996.
[3] G. T. Candela, P. J. Grother, C. I. Watson, R. A. Wilkinson, and C. L. Wilson. PCASYS: a pattern-level classification automation system for fingerprints. NIST Tech. Report NISTIR 5647, August 1995.
[4] J. Canny. A computational approach to edge detection. IEEE Trans. PAMI, 8(6):679–698, 1986.
[5] R. Cappelli, D. Maio, D. Maltoni, and L. Nanni. A two-stage fingerprint classification system. In Proc. 2003 ACM SIGMM Workshop on Biometrics Methods and Applications, 95–99, 2003.
[6] CDEFFS: the ANSI/NIST committee to define an extended fingerprint feature set. http://fingerprint.nist.gov/standard/cdeffs/index.html.
[7] L. Coetzee and E. C. Botha. Fingerprint recognition in low quality images. Pattern Recognit., 26(10):1441–1460, 1993.
[8] Walt Disney World fingerprints visitors. http://www.boingboing.net/2006/09/01/walt-disneyworld-fi.html.
[9] L. Lange and G. Leopold. Digital identification: it's now at our fingertips. Electronic Engineering Times, no. 946, March 24, 1997.
[10] Federal Bureau of Investigation. The Science of Fingerprints: Classification and Uses. U.S. Government Printing Office, Washington, DC, 1984.
[11] J. Feng. Combining minutiae descriptors for fingerprint matching. Pattern Recognit., 41(1):342–352, 2008.
[12] FVC2006: the fourth international fingerprint verification competition. http://bias.csr.unibo.it/fvc2006/.
[13] R. Germain, A. Califano, and S. Colville. Fingerprint matching using transformation parameter clustering. IEEE Comput. Sci. Eng., 4(4):42–49, 1997.
[14] L. O'Gorman and J. V. Nickerson. An approach to fingerprint filter design. Pattern Recognit., 22(1):29–38, 1989.
[15] L. Hong, A. K. Jain, S. Pankanti, and R. Bolle. Fingerprint enhancement. In Proc. IEEE Workshop on Applications of Computer Vision, Sarasota, FL, 202–207, 1996.
[16] L. Hong. Automatic personal identification using fingerprints. PhD thesis, Michigan State University, East Lansing, MI, 1998.
[17] L. Hong and A. K. Jain. Classification of fingerprint images. MSU Technical Report MSUCPS:TR98-18, June 1998.
[18] D. C. D. Hung. Enhancement and feature purification of fingerprint images. Pattern Recognit., 26(11):1661–1671, 1993.
[19] A. K. Hrechak and J. A. McHugh. Automated fingerprint recognition using structural matching. Pattern Recognit., 23(8):893–904, 1990.
[20] D. K. Isenor and S. G. Zaky. Fingerprint identification using graph matching. Pattern Recognit., 19(2):113–122, 1986.
[21] A. K. Jain and F. Farrokhnia. Unsupervised texture segmentation using Gabor filters. Pattern Recognit., 24(12):1167–1186, 1991.
[22] A. K. Jain, L. Hong, S. Pankanti, and R. Bolle. On-line identity-authentication system using fingerprints. Proc. IEEE (Special Issue on Automated Biometrics), 85:1365–1388, 1997.
[23] A. K. Jain, S. Prabhakar, and L. Hong. A multichannel approach to fingerprint classification. In Proc. Indian Conf. Comput. Vis., Graphics, and Image Process. (ICVGIP'98), New Delhi, India, December 21–23, 1998.
[24] A. K. Jain, S. C. Dass, and K. Nandakumar. Soft biometric traits for personal recognition systems. In Proc. Int. Conf. Biometric Authentication (ICBA), Hong Kong, LNCS 3072, 731–738, July 2004.
[25] A. K. Jain, Y. Chen, and M. Demirkus. Pores and ridges: high resolution fingerprint matching using level 3 features. IEEE Trans. PAMI, 29(1):15–27, 2007.
[26] T. Kamei and M. Mizoguchi. Image filter design for fingerprint enhancement. In Proc. ISCV '95, Coral Gables, FL, 109–114, 1995.
[27] K. Karu and A. K. Jain. Fingerprint classification. Pattern Recognit., 29(3):389–404, 1996.
[28] M. Kawagoe and A. Tojo. Fingerprint pattern classification. Pattern Recognit., 17(3):295–303, 1984.
[29] H. C. Lee and R. E. Gaensslen. Advances in Fingerprint Technology. CRC Press, Boca Raton, FL, 2001.
[30] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar. Handbook of Fingerprint Recognition. Springer-Verlag, New York, 2003.
[31] B. M. Mehtre and B. Chatterjee. Segmentation of fingerprint images—a composite method. Pattern Recognit., 22(4):381–385, 1989.
[32] N. J. Naccache and R. Shinghal. An investigation into the skeletonization approach of Hilditch. Pattern Recognit., 17(3):279–284, 1984.
[33] G. Parziale, E. Diaz-Santana, and R. Hauke. The Surround Imager™: a multi-camera touchless device to acquire 3D rolled-equivalent fingerprints. In Proc. IAPR Int. Conf. Biometrics, Hong Kong, January 2006.
[34] T. Pavlidis. Algorithms for Graphics and Image Processing. Computer Science Press, New York, 1982.
[35] N. Ratha, K. Karu, S. Chen, and A. K. Jain. A real-time matching system for large fingerprint databases. IEEE Trans. PAMI, 18(8):799–813, 1996.
[36] H. T. F. Rhodes. Alphonse Bertillon: Father of Scientific Detection. Abelard-Schuman, New York, 1956.
[37] A. Ross, K. Nandakumar, and A. K. Jain. Handbook of Multibiometrics. Springer-Verlag, New York, 2006.
[38] R. K. Rowe, U. Uludag, M. Demirkus, S. Parthasaradhi, and A. K. Jain. A multispectral whole-hand biometric authentication system. In Proc. Biometric Symp., Biometric Consortium Conf., Baltimore, September 2007.
[39] M. K. Sparrow and P. J. Sparrow. A topological approach to the matching of single fingerprints: development of algorithms for use of rolled impressions. Tech. Report, National Bureau of Standards, Gaithersburg, MD, May 1985.
[40] US-VISIT. http://www.dhs.gov.
[41] C. I. Watson. NIST Special Database 9, Mated Fingerprint Card Pairs. National Institute of Standards and Technology, Gaithersburg, MD, 1993.
[42] C. L. Wilson, G. T. Candela, and C. I. Watson. Neural-network fingerprint classification. J. Artif. Neural Netw., 1(2):203–228, 1994.
[43] C. Wilson et al. Fingerprint vendor technology evaluation 2003: summary of results and analysis report. NIST Tech. Report NISTIR 7123, 2004.
[44] J. D. Woodward. Biometrics: privacy's foe or privacy's friend? Proc. IEEE (Special Issue on Automated Biometrics), 85:1480–1492, 1997.
[45] N. D. Young, G. Harkin, R. M. Bunn, D. J. McCulloch, R. W. Wilks, and A. G. Knapp. Novel fingerprint scanning arrays using polysilicon TFT's on glass and polymer substrates. IEEE Electron Device Lett., 18(1):19–20, 1997.
CHAPTER 24 Unconstrained Face Recognition from a Single Image

Shaohua Kevin Zhou¹, Rama Chellappa², and Narayanan Ramanathan²
¹Siemens Corporate Research, Princeton; ²University of Maryland

24.1 INTRODUCTION

In most situations, identifying humans using faces is an effortless task for humans. Is this true for computers? This very question defines the field of automatic face recognition [1–3], one of the most active research areas in computer vision, pattern recognition, and image understanding. Over the past decade, the problem of face recognition has attracted substantial attention from various disciplines and has witnessed a skyrocketing growth of the literature. Below, we mainly emphasize some key perspectives of the face recognition problem.

24.1.1 Biometric Perspective

Face is a biometric. As a consequence, face recognition finds wide applications in authentication, security, and so on. One recent application is the US-VISIT system of the Department of Homeland Security (DHS), which collects foreign passengers' fingerprints and face images.

Biometric signatures of a person characterize his or her physiological or behavioral characteristics. Physiological biometrics are innate or naturally occurring, while behavioral biometrics arise from mannerisms or traits that are learned or acquired. Table 24.1 lists commonly used biometrics. Biometric technologies provide the foundation for an extensive array of highly secure identification and personal verification solutions. Compared with conventional identification and verification methods based on personal identification numbers (PINs) or passwords, biometric technologies offer many advantages. First, biometrics are individualized traits, while passwords may be used or stolen by someone other than the authorized user. Also, biometrics are very convenient, since there is nothing to carry or remember. In addition, biometric technologies are becoming more accurate and less expensive.

TABLE 24.1 A list of physiological and behavioral biometrics.

Type                       Examples
Physiological biometrics   DNA, face, fingerprint, hand geometry, iris, pulse, retinal, and body odor
Behavioral biometrics      Face, gait, handwriting, signature, and voice

Among all the biometrics listed in Table 24.1, face is a unique one because it is the only biometric belonging to both the physiological and behavioral categories. While the physiological part of the face has been widely exploited for face recognition, the behavioral part has not yet been fully investigated. In addition, as reported in [4, 5], face enjoys many advantages over other biometrics because it is a natural, nonintrusive, and easy-to-use biometric. For example [4], among the six biometrics of face, finger, hand, voice, eye, and signature, the face biometric ranks first in the compatibility evaluation of a machine readable travel document (MRTD) system in terms of six criteria: enrollment, renewal, machine-assisted identity verification requirements, redundancy, public perception, and storage requirements and performance. Probably the most important feature of acquiring the face biometric signature is that no cooperation is required during data acquisition.

Besides applications related to identification and verification such as access control, law enforcement, ID and licensing, surveillance, etc., face recognition is also useful in human-computer interaction, virtual reality, database retrieval, multimedia, computer entertainment, etc. See [2, 3] for recent summaries on face recognition applications.

24.1.2 Experimental Perspective

Face recognition mainly involves the following three tasks [6]:

■ Verification: The recognition system determines if the query face image and the claimed identity match.
■ Identification: The recognition system determines the identity of the query face image.
■ Watch list: The recognition system first determines if the identity of the query face image is in the watch list and, if yes, then identifies the individual.

Figure 24.1 illustrates the above three tasks and the corresponding metrics used for evaluation. Among these tasks, the watch list task is the most difficult one. This chapter focuses only on the identification task.

FIGURE 24.1 Three face recognition tasks: verification, identification, and watch list (courtesy of P. J. Phillips).

We follow the face recognition test protocol FERET [7], widely used in the face recognition literature. FERET stands for "facial recognition technology." FERET assumes the availability of the following three sets, namely a training set, a gallery set, and a probe set. The training set is provided for the recognition algorithm to learn the features that are capable of characterizing the whole human face space. The gallery and probe sets are used in the testing stage. The gallery set contains images with known identities, and the probe set contains images with unknown identities. The algorithm associates descriptive features with the images in the gallery and probe sets and determines the identities of the probe images by comparing their associated features with the features associated with the gallery images.

24.1.3 Theoretical Perspective

Face recognition is by nature an interdisciplinary research area, involving researchers from pattern recognition, computer vision and graphics, image processing/understanding, statistical computing, and machine learning.
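Returning to the FERET-style protocol of Section 24.1.2, identification amounts to ranking gallery entries by feature distance to the probe. The sketch below uses Euclidean distance and toy two-dimensional features purely for illustration; the feature extractor itself is the subject of the rest of the chapter.

```python
def identify(probe_feat, gallery):
    """Closest-match identification over a gallery of (identity, feature)
    pairs, as in the FERET identification task. Euclidean distance is an
    illustrative choice; real systems use learned features and metrics."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(gallery, key=lambda item: dist(item[1], probe_feat))[0]

# toy gallery with known identities; probe identity is unknown
gallery = [("alice", [0.9, 0.1]), ("bob", [0.2, 0.8])]
best = identify([0.85, 0.2], gallery)   # identity of nearest gallery entry
```

Sorting the gallery by the same distance (rather than taking only the minimum) yields the ranked list used to compute the cumulative match characteristic of Fig. 24.1.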
In addition, automatic face recognition algorithms/systems are often guided by psychophysics and neural studies on how humans perceive faces. A good summary of research on face perception is presented in [8]. We now focus on the theoretical implication of pattern recognition for the task of face recognition.

We present a hierarchical study of the face pattern. There are three levels forming the hierarchy: pattern, visual pattern, and face pattern, each associated with a corresponding theory of recognition. Accordingly, face recognition approaches can be grouped into three categories.

Pattern and pattern recognition: Because a face is first a pattern, any pattern recognition theory [9] can be directly applied to a face recognition problem. In general, a vector [...]

[...] circle. The two angles are used to relate the viewpoint with the radiance from the object. The right image shows the actual light field for the square object. See another illustration in [33].

[...] the light field of focus is nothing but L = [h(v_1)^T, h(v_2)^T, ..., h(v_K)^T]^T, which is a "long" Kd × 1 vector obtained by stacking all the images at all these poses. The introduction of such a "long" vector eases [...]

FIGURE 24.5 The error surfaces for the estimation of the light source direction given a face image of known shape and albedo. The three plots correspond to the three approaches described in the text. The lower the error is for a particular illumination direction, the darker the error sphere looks at the point corresponding to that direction. The true and estimated values of the illumination direction [...]
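The "long vector" construction quoted above, which stacks the K pose images h(v_1), ..., h(v_K), each of d pixels, into a single Kd × 1 vector L, can be sketched as:

```python
import numpy as np

def long_vector(images):
    """Stack K pose images h(v_1)..h(v_K), each flattened to length d,
    into a single Kd-entry light-field vector L."""
    return np.concatenate([np.asarray(h).ravel() for h in images])

# two toy 2x2 "images" at two poses: K = 2, d = 4, so L has 8 entries
h1 = [[1, 2], [3, 4]]
h2 = [[5, 6], [7, 8]]
L = long_vector([h1, h2])
```

Treating all poses as one vector is what lets pose-invariant identity analysis operate with the same linear machinery as single-image analysis.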
[...] For each hypothesized illumination direction s, we compute the cost function defined in (24.5), (24.6), and (24.7); therefore, we have an error surface for each method. The lower the error is for a hypothesized illumination direction s, the darker the surface looks at the corresponding point on the sphere. The global minimum is far from the true value using the first approach, but is correct up to a discretization error for the second and third [...]

[...] linear combination of basis images h_i, i.e.,

    h = Σ_{i=1}^{m} f_i h_i,    (24.1)

where the f_i's are blending coefficients. In other words, the basis images span the image ensemble. Typically, the basis images are learned using images not necessarily illuminated under the same lighting condition. This forces the learned basis images to inadequately cover variations in both identity and illumination. The Lambertian reflectance [...]

[...] required to estimate s. The nonlinear optimization is performed using the lsqnonlin function in MATLAB, which is based on the interior-reflective Newton method. For most faces, the function value did not change much after 4–5 iterations. Therefore, the iterative optimization was always stopped after five iterations. The whole process took about 5–7 seconds per image on a normal desktop.

24.2.4 Recognition in the [...]
[...] differently colored sides resides. The 2D light field L is a function of the two angles defined in Fig. 24.7. The image of the 2D object is just a vertical line. If the camera is allowed to leave the circle, then a curve is traced out in the light field to form the image, i.e., the light field is accordingly sampled. Constructing a light field is a practically difficult task. However, in the context of view-based [...]

[...] carried on for further pixelwise alignment, as opposed to [35]. After the preprocessing step, the cropped-out face image is of size 48 by 40 (i.e., d = 1920). Also, we only process gray images, obtained by averaging the red, green, and blue channels of their color versions. All 68 images under one illumination are used to form a gallery set, and those under another illumination form a probe set. The training set [...]

[...] the T = [p_1 n_1, p_2 n_2, ..., p_d n_d]^T matrix encodes the product of albedos and surface normal vectors for all d pixels. This Lambertian model is specific to the object, and consequently, we call the T matrix an object-specific albedo-shape matrix. The process of combining the above two properties is equivalent to imposing the restriction of the same light source on the basis images, with each basis image [...]

[...] registrations. We randomly divide the 68 subjects into two parts. The first 34 subjects are used in the training set (i.e., m = 34), and the remaining 34 subjects are used in the gallery and probe sets. It is guaranteed that there is no identity overlap between the training set and the gallery and probe sets. To form the L vector, we use images at all available poses. Since the illumination model has generalization [...]

[...] shadows. The main disadvantage of this approach is the lack of generalization from known objects to unknown objects, with the exception of [30, 65]. In [30], Shashua and Raviv used an ideal-class assumption: all objects belonging to the ideal class are assumed to have the same shape. The work of Zhang and Samaras [65] utilized the regularity of the harmonic image exemplars to perform [...]
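A minimal sketch of the object-specific albedo-shape matrix in use: each row of T is an albedo-scaled normal p_i n_i^T, an image under point source s is h = max(Ts, 0) (attached shadows clipped), and the source can be recovered from the lit pixels. The linear least-squares fit below is an illustrative simplification of the iterative lsqnonlin-style optimization described in the text, and the toy normals and source direction are made up.

```python
import numpy as np

def render(T, s):
    """Lambertian image under point source s: h_i = max(p_i n_i . s, 0),
    where T is the d x 3 object-specific albedo-shape matrix."""
    return np.maximum(T @ s, 0.0)

def estimate_light(T, h):
    """Recover the source direction by linear least squares over pixels
    without attached shadows (a simplification of the iterative fit)."""
    lit = h > 0
    s, *_ = np.linalg.lstsq(T[lit], h[lit], rcond=None)
    return s

# toy object: 4 pixels, unit albedo, assumed normals
T = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.6, 0.8, 0.0]])
s_true = np.array([0.3, 0.4, 0.866])   # made-up source direction
h = render(T, s_true)
s_hat = estimate_light(T, h)           # close to s_true on this toy case
```

Because T is fixed per object while s varies per image, comparing a probe against a gallery entry reduces to asking how well one albedo-shape matrix explains the other's image under some light source.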