
The Essential Guide to Image Processing – P24





FIGURE 24.8 The average recognition rates across illumination (top row) and across poses (bottom row) for three cases. Case (a) shows the average recognition rate (averaged over all illuminations/poses and all gallery sets) obtained by the proposed algorithm using the top n matches. Case (b) shows the average recognition rate (averaged over all illuminations/poses for the gallery set (c_27, f_11) only) obtained by the proposed algorithm using the top n matches. Case (c) shows the average recognition rate (averaged over all illuminations/poses and all gallery sets) obtained by the "Eigenface" algorithm using the top n matches.

24.4 FACE MODELING AND VERIFICATION ACROSS AGE PROGRESSION

… robust to aging effects. Researchers from psychophysics laid the foundations for studies related to facial aging effects. D'Arcy Thompson studied morphogenesis by means of geometric transformation functions applied on biological forms. Pittenger and Shaw [80] and Todd et al. [81] identified certain forms of force configurations that, when applied on 2D face profiles, induce facial aging effects. Figure 24.9 illustrates the effect of applying the "revised" cardioidal strain transformation model on profile faces. This transformation model is said to reflect the remodeling of fluid-filled spherical objects under applied pressure. O'Toole et al. [82] studied the effects of facial wrinkles in increasing the perceived age of faces. Ramanathan and Chellappa [59] developed a Bayesian age-difference classifier with the objective of developing systems that could perform face verification across age progression. The results from many such studies highlight the importance of developing computational models that characterize both growth-related shape variations and textural variations, such as wrinkles and other skin artifacts, in developing a facial aging model.

In this section, we shall present computational models that characterize the shape variations that faces undergo during different stages of growth. Facial shape variations due to aging can be observed by means of facial feature drifts and progressive variations in the shape of facial contours across ages. While facial shape variations during the formative years are primarily due to craniofacial growth, during adulthood facial shape variations are predominantly driven by the changing physical properties of facial muscles. Hence, we propose shape variation models for each of the age groups that best account for the factors that induce such variations.
FIGURE 24.9 (a) Remodeling of a fluid-filled spherical object under pressure P ∝ R_0 (1 − cos θ), which maps a point (R_0, θ) to (R_1, θ); (b) facial growth simulated on the profile of a child's face using the "revised" cardioidal strain transformations, shown for k = 0.04, 0.08, 0.12, 0.16, and 0.20.

24.4.1 Shape Transformation Model for Young Individuals [60]

Drawing inspiration from the "revised" cardioidal strain transformation model proposed in psychophysics [81], we propose a craniofacial growth model that assumes the underlying form:

    P ∝ R_0 (1 − cos θ_0),
    R_1 = R_0 + k (R_0 − R_0 cos θ_0),        (24.21)
    θ_1 = θ_0.

The model described above characterizes facial feature drifts as caused by internal pressures that result from craniofacial growth. In Eq. (24.21), P denotes the pressure at a particular point on the object surface acting radially outward, (R_0, θ_0) and (R_1, θ_1) denote the polar coordinates of a point on the surface of the object before and after the transformation, and k denotes a growth-related constant.
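To make the transformation concrete, the following Python sketch applies Eq. (24.21) to a set of 2D contour points. It is a minimal illustration, not the chapter's implementation: the choice of origin, the convention that θ is measured from the vertical axis, and the sample contour are all assumptions made here.

```python
import numpy as np

def cardioidal_strain(points, origin, k):
    """Apply the "revised" cardioidal strain model of Eq. (24.21).

    Each point (x, y) is expressed in polar coordinates (R0, theta0) about
    `origin` and mapped to R1 = R0 + k * (R0 - R0 * cos(theta0)), with the
    angle unchanged (theta1 = theta0).
    """
    dx = points[:, 0] - origin[0]
    dy = points[:, 1] - origin[1]
    r0 = np.hypot(dx, dy)
    theta0 = np.arctan2(dx, dy)   # angle from the vertical axis (assumption)
    r1 = r0 + k * (r0 - r0 * np.cos(theta0))
    return np.column_stack([origin[0] + r1 * np.sin(theta0),
                            origin[1] + r1 * np.cos(theta0)])

# Illustrative use, mirroring the k values shown in Fig. 24.9, with a
# semicircle standing in for a profile contour.
t = np.linspace(0.0, np.pi, 100)
profile = np.column_stack([np.sin(t), np.cos(t)])
for k in (0.04, 0.08, 0.12, 0.16, 0.20):
    deformed = cardioidal_strain(profile, origin=(0.0, 0.0), k=k)
```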
Face anthropometric studies [83] come in handy in providing dense facial measurements extracted across different facial features across ages. Age-based facial measurements extracted across different facial features play a crucial role in developing the proposed growth model. Figure 24.10 illustrates the 24 facial landmarks and some of the important facial measurements that were used in our study.

FIGURE 24.10 Face anthropometry: of the 57 facial landmarks defined in [83], we choose the 24 landmarks illustrated above for our study. We further illustrate some of the key facial measurements that were used to develop the growth model.

Given the pressure model (24.21) and the age-based facial measurements, developing the craniofacial growth model amounts to identifying the growth parameters associated with different facial features. Let the facial growth parameters of the "revised" cardioidal strain transformation model that correspond to the facial landmarks designated by [n, sn, ls, sto, li, sl, gn, en, ex, ps, pi, zy, al, ch, go] be [k_1, k_2, …, k_15]. The facial growth parameters for different age transformations can be computed using anthropometric constraints on facial proportions. The computation of facial growth parameters is formulated as a nonlinear optimization problem. We identified 52 facial proportions that can be reliably estimated using the photogrammetry of frontal face images. Anthropometric constraints based on proportion indices translate into linear and nonlinear constraints on selected facial growth parameters. While constraints based on proportion indices such as the intercanthal index and nasal index result in linear constraints on the growth parameters, constraints based on proportion indices such as the eye fissure index and orbital width index result in nonlinear constraints on the growth parameters. Let the constraints derived using proportion indices be denoted as r_1(k) = β_1, r_2(k) = β_2, …, r_N(k) = β_N. The objective function f(k) that needs to be minimized w.r.t. k is defined as

    f(k) = (1/2) Σ_{i=1}^{N} (r_i(k) − β_i)^2.        (24.22)

The following equations illustrate the constraints that were derived using different facial proportion indices:

    r_1: (n–gn / zy–zy = c_1)  ≡  α^(1)_1 k_1 + α^(1)_2 k_7 + α^(1)_3 k_12 = β_1
    r_2: (al–al / ch–ch = c_2)  ≡  α^(2)_1 k_13 + α^(2)_2 k_14 = β_2
    r_3: (li–sl / sto–sl = c_3)  ≡  α^(3)_1 k_4 + α^(3)_2 k_5 + α^(3)_3 k_6 = β_3
    r_4: (sto–gn / gn–zy = c_4)  ≡  α^(4)_1 k_5 + α^(4)_2 k_7 + α^(4)_3 k_12 + α^(4)_4 k_4^2 + α^(4)_5 k_7^2 + α^(4)_6 k_12^2 + α^(4)_7 k_4 k_7 + α^(4)_8 k_7 k_12 = β_4

(α^(i)_j and β_i are constants; c_i is an age-based proportion index obtained from [83].) We use the Levenberg-Marquardt nonlinear optimization algorithm [84] to compute the growth parameters that minimize the objective function in an iterative fashion. Next, using the growth parameters computed over the selected facial landmarks, we compute the growth parameters over the entire face region. This is formulated as a scattered data interpolation problem [85]. Figure 24.11 shows some of the age transformation results obtained using the proposed model.

FIGURE 24.11 Age transformation results on different individuals: originals at 6, 8, and 10 years, the growth parameters applied across the age gap, and the transformed faces at 12 or 16 years shown alongside the true images at those ages. (The original images shown above were taken from the FG-NET database [86].)
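As a rough sketch of this estimation step, the snippet below minimizes an objective of the form (24.22) with SciPy's Levenberg-Marquardt solver. The residual functions and their coefficients are hypothetical stand-ins: a real implementation would assemble all 52 proportion-index constraints from anthropometric tables such as [83].

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(k, lin_coeffs, quad_coeffs, betas):
    """Residual vector r_i(k) - beta_i for an objective of the form (24.22).

    Each constraint is modeled as r_i(k) = lin_i . k + k^T Quad_i k, which
    covers both the linear and the quadratic (nonlinear) constraint types
    described in the text.
    """
    r = lin_coeffs @ k + np.einsum('i,nij,j->n', k, quad_coeffs, k)
    return r - betas

n_params, n_constraints = 15, 52            # one k per landmark group
rng = np.random.default_rng(0)              # hypothetical coefficients
lin_coeffs = rng.normal(size=(n_constraints, n_params))
quad_coeffs = 0.1 * rng.normal(size=(n_constraints, n_params, n_params))
betas = rng.normal(size=n_constraints)

# least_squares minimizes (1/2) * sum(residuals**2), i.e., f(k) in (24.22);
# method="lm" selects the Levenberg-Marquardt algorithm of [84].
fit = least_squares(residuals, x0=np.zeros(n_params),
                    args=(lin_coeffs, quad_coeffs, betas), method="lm")
growth_params = fit.x
```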
24.4.2 Shape Transformation Model for Adults [61]

We propose a facial shape variation model that represents the facial feature deformations observed during adulthood as driven by the changing physical properties of the underlying facial muscles. The model is based on the assumption that the degrees of freedom associated with facial feature deformations are directly related to the physical properties and geometric orientations of the underlying facial muscles:

    x_t1^(i) = x_t0^(i) + k^(i) [P_t0^(i)]_x,
    y_t1^(i) = y_t0^(i) + k^(i) [P_t0^(i)]_y,        (24.23)

where (x_t0^(i), y_t0^(i)) and (x_t1^(i), y_t1^(i)) correspond to the Cartesian coordinates of the ith facial feature at ages t_0 and t_1, k^(i) corresponds to a facial growth parameter, and [P_t0^(i)]_x, [P_t0^(i)]_y correspond to the orthogonal components of the pressure applied on the ith facial feature at age t_0.

We propose a physically based parametric muscle model for human faces that implicitly accounts for the physical properties, geometric orientations, and functionalities of each of the individual facial muscles. Drawing inspiration from Waters' muscle model [87], we identify three types of facial muscles based on their functionalities, namely linear muscles, sheet muscles, and sphincter muscles. Further, we propose transformation models for each muscle type. The following factors are taken into consideration while developing the pressure models.

(i) Muscle functionality and gravitational forces: The proposed pressure models reflect muscle functionalities such as the "stretch" operation and the "contraction" operation. The direction of applied pressure reflects the effects of gravitational forces.

(ii) Points of origin and insertion for each muscle: The degrees of freedom associated with muscle deformations are minimal at the points of origin (fixed end) and maximal at the points of insertion (free end). Hence, the deformation induced over a facial feature directly depends on the distance of the facial feature from the point of origin of the underlying muscle.

The transformation models proposed for each muscle type are illustrated below; a code sketch of the resulting feature drifts follows the list.

1. Linear muscle (α, φ): Linear muscles correspond to the "stretch" operation. These muscles are described by two attributes, namely the muscle length (α) and the muscle orientation w.r.t. the facial axis (φ). The farther a feature is from the muscle's point of origin, the greater the chance that the feature undergoes deformation. Hence, the pressure is modeled such that P^(i) ∝ α^(i), where α^(i) is the distance of the ith feature from the point of origin. The corresponding shape transformation model is

    x_t1^(i) = x_t0^(i) + k [α^(i) sin φ],
    y_t1^(i) = y_t0^(i) + k [α^(i) cos φ].

2. Sheet muscle (α, φ, θ, ω): Sheet muscles correspond to the "stretch" operation as well. They are described by four attributes (muscle length, angles subtended, etc.). The pressure applied on a fiducial feature is modeled as P^(i) ∝ α^(i) sec θ^(i), the distance of the ith feature from the point(s) of origin of the underlying muscle. The shape transformation model is

    x_t1^(i) = x_t0^(i) + k [α^(i) sec θ^(i) sin(φ + θ^(i))],
    y_t1^(i) = y_t0^(i) + k [α^(i) sec θ^(i) cos(φ + θ^(i))].

3. Sphincter muscle (α, β): The sphincter muscle corresponds to the "contraction/expansion" operation and is described by two attributes. The pressure, modeled as a function of the distance from the point of origin, P^(i) ∝ r^(i)(φ^(i)) cos φ^(i), is directed radially inward/outward:

    x_t1^(i) = x_t0^(i) + k [r^(i)(φ^(i)) cos^2 φ^(i)],
    y_t1^(i) = y_t0^(i) + k [r^(i)(φ^(i)) cos φ^(i) sin φ^(i)].

Figure 24.12 illustrates the muscle-based pressure distributions described above.

FIGURE 24.12 Muscle-based pressure illustration: (i) the overall muscle-based pressure distribution and the pressure modeled on (ii) linear, (iii) sheet, and (iv) sphincter muscles.
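The sketch below collects the three drift models in one place. It is illustrative only: the muscle attributes (α, φ, θ, r) and the growth parameter k are hypothetical values, not measurements from the chapter's training data, and the unused sheet-muscle attribute ω is omitted, as in the displayed equations.

```python
import numpy as np

def linear_drift(alpha, phi, k):
    """Drift (dx, dy) under a linear muscle: P ~ alpha, directed along phi."""
    return k * alpha * np.sin(phi), k * alpha * np.cos(phi)

def sheet_drift(alpha, phi, theta, k):
    """Drift under a sheet muscle: P ~ alpha * sec(theta)."""
    s = alpha / np.cos(theta)
    return k * s * np.sin(phi + theta), k * s * np.cos(phi + theta)

def sphincter_drift(r, phi, k):
    """Drift under a sphincter muscle, directed radially inward/outward."""
    return k * r * np.cos(phi) ** 2, k * r * np.cos(phi) * np.sin(phi)

# Hypothetical example: drift of one feature located 35 px from the point of
# origin of a linear muscle oriented 10 degrees off the facial axis.
x0, y0 = 120.0, 200.0
dx, dy = linear_drift(alpha=35.0, phi=np.deg2rad(10.0), k=0.05)
x1, y1 = x0 + dx, y0 + dy
```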
From a database that comprises 1200 pairs of age-separated face images (predominantly Caucasian), we selected 50 pairs of face images each undergoing the following age transformations (in years): 20s → 30s, 30s → 40s, 40s → 50s, 50s → 60s, and 60s → 70s. We selected 48 facial features from each image pair and extracted 44 projective measurements (21 horizontal and 23 vertical) across the facial features. We analyze the intrapair shape transformations from the perspectives of weight loss, weight gain, and weight retention and select the appropriate training sets for each case. Again, following an approach similar to that described in the previous section, we compute the muscle parameters by studying the transformation of ratios of facial distances across age transformations.

24.4.3 Texture Transformation Model

From a modeling perspective, facial wrinkles and other forms of textural variation observed in aging faces can be characterized on the image domain by means of image gradients. Let (I_t1^(i), I_t2^(i)), 1 ≤ i ≤ N, correspond to pairs of age-separated face images of N individuals undergoing similar age transformations (t_1 → t_2). To study the facial wrinkle variations across an age transformation, we identify four facial regions that have a high propensity toward developing wrinkles, namely the forehead region (W_1), the eyebrow region (W_2), the nasal region (W_3), and the lower chin region (W_4). W_n, 1 ≤ n ≤ 4, corresponds to the facial mask that helps isolate the desired facial region. Let ∇I_t1^(i) and ∇I_t2^(i) correspond to the image gradients of the ith image at t_1 and t_2 years, 1 ≤ i ≤ N. Given a test image J_t1 at t_1 years with image gradient ∇J_t1, we induce textural variations by incorporating the region-based gradient differences learned from the set of training images discussed above:

    ∇J_t2 = ∇J_t1 + (1/N) Σ_{i=1}^{N} Σ_{n=1}^{4} W_n · (∇I_t2^(i) − ∇I_t1^(i)).        (24.24)

The transformed image J_t2 is obtained by solving the Poisson equation corresponding to image reconstruction from gradient fields [88]. Figure 24.13 provides an overview of the proposed facial aging model.

FIGURE 24.13 An overview of the proposed facial aging model: facial shape variations (muscle-based feature drifts applied to an original image at age 54 years) induced for the cases of weight gain and weight loss are illustrated. Further, the effects of gradient transformations in inducing textural variations are illustrated as well.
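A compact sketch of this texture step is given below: it applies the gradient update of Eq. (24.24) and then recovers the image with a Fourier-domain Poisson solver. The periodic boundary assumption and the use of simple central-difference gradients are simplifications made here; [88] treats the reconstruction more carefully.

```python
import numpy as np

def transfer_wrinkles(J_t1, grads_t1, grads_t2, masks):
    """Induce textural variation via Eq. (24.24) and Poisson reconstruction.

    J_t1     : 2D float array, test image at age t1
    grads_t1 : list of (gy, gx) gradients of the N training images at t1
    grads_t2 : list of (gy, gx) gradients of the same images at t2
    masks    : the four region masks W_1..W_4 (2D {0, 1} arrays)
    """
    gy, gx = np.gradient(J_t1)
    N = len(grads_t1)
    for (gy1, gx1), (gy2, gx2) in zip(grads_t1, grads_t2):
        for W in masks:                       # Eq. (24.24), region by region
            gy += W * (gy2 - gy1) / N
            gx += W * (gx2 - gx1) / N
    # Reconstruct J_t2 from the modified gradient field by solving the
    # Poisson equation lap(J) = div(g) in the frequency domain, assuming
    # periodic boundaries.
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    h, w = J_t1.shape
    wy = 2.0 * np.pi * np.fft.fftfreq(h)[:, None]
    wx = 2.0 * np.pi * np.fft.fftfreq(w)[None, :]
    denom = (2.0 * np.cos(wy) - 2.0) + (2.0 * np.cos(wx) - 2.0)
    denom[0, 0] = 1.0                         # avoid dividing the DC term by 0
    J_hat = np.fft.fft2(div) / denom
    J_hat[0, 0] = np.fft.fft2(J_t1)[0, 0]     # keep the mean of the input
    return np.real(np.fft.ifft2(J_hat))
```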
The proposed facial aging models were used to perform face recognition across age transformations on two databases: the first comprised age-separated face images of individuals under 18 years of age, and the second comprised age-separated face images of adults. On a database of 260 age-separated image pairs of adults, compiled from both the Passport database [59] and the FG-NET database [86], we performed face recognition across age progression. We adopt PCA to perform recognition across ages under the following three settings: no transformation in shape and texture, shape transformation only, and both shape and texture transformation. Table 24.4 reports the rank 1 recognition score under the three settings.

TABLE 24.4 Face recognition across ages.

    Experimental Setting                 Rank 1 (%)
    No transformations                   38
    Shape transformations                41
    Shape and texture transformations    51

The experimental results highlight the significance of transforming shape and texture when performing face recognition across ages. A similar performance improvement was observed on the face database that comprises individuals under 18 years of age. For a more detailed account of the experimental results, we refer the readers to our earlier works [60, 61].

24.5 CONCLUSIONS

This chapter presented a hierarchical framework for face pattern and face recognition theory. Current face recognition approaches are classified according to their placement in this framework. We then presented the linear Lambertian object model for face recognition under illumination variation and the illuminating light field algorithm for face recognition under both illumination and pose variations. Finally, we discussed methods for face recognition and modeling across age progression.
