Computer-Aided Intelligent Recognition Techniques and Applications, Part 6


13 Empirical Study on Appearance-based Binary Age Classification

Mohammed Yeasin
Department of Computer Science, State University of New York Institute of Technology, Utica, NY 13504, USA

Rahul Khare, Rajeev Sharma
Computer Science and Engineering Department, Pennsylvania State University, University Park, PA 16802, USA

This chapter presents a systematic approach to designing a binary classifier using Support Vector Machines (SVMs). To exemplify the efficacy of the proposed approach, empirical studies were conducted in designing a classifier to classify people into different age groups using only appearance information from human facial images. Experiments were conducted to understand the effects of various issues that can potentially influence the performance of such a classifier. Linear data projection techniques such as Principal Component Analysis (PCA), Robust PCA (RPCA) and Non-Negative Matrix Factorization (NMF) were tested to find the best representation of the image data for designing the classifier. SVMs were used to learn the underlying model using the features extracted from the examples. Empirical studies were conducted to understand the influence of various factors such as preprocessing, image resolution, pose variation and gender on the classification of age group. The performances of the classifiers were also characterized in the presence of local feature occlusion and brightness gradients across the images. A number of experiments were conducted on a large data set to show the efficacy of the proposed approach.

Computer-Aided Intelligent Recognition Techniques and Applications, edited by M. Sarfraz. © 2005 John Wiley & Sons, Ltd.

1. Introduction

Intelligent systems for monitoring people and collecting valuable demographics in a social environment will play an increasingly important role in enhancing the user's interaction experience with a computer, and can significantly improve the intelligibility of Human–Computer Interaction (HCI) systems. For example, a robust age classification system is expected to provide a basis for the next generation of parental control tools. Robust age classification, along with gender and ethnicity information, can have a profound impact on advertising (i.e. narrowcasting and gathering demographic data for marketing), law enforcement, education, security, electronic commerce, gathering consumer statistics, etc. This would directly impact the consumer electronics and personal computer hardware markets, since they are the key distribution channels for delivering content in today's society. The algorithm discussed in this chapter may be integrated into the VLSI circuit of media devices, along with features to customize the desired type of control. Current methods for gathering vital consumer statistics for marketing involve tedious surveys.
If it were possible to gather data such as age information about the customers within the marketing establishments, such information could prove vital for deciding which sections of society need to be targeted. This could potentially lead to enormous savings in time as well as money. Current methods of advertising follow broadcasting techniques, wherein the advertisement is aimed at all people and not targeted specifically towards the person watching it. In many practical scenarios this may not be the best approach, and automated narrowcasting could be used instead. Such targeted advertisements have a deeper impact, as they are tailored specifically to the tastes of the person watching the advertisement. The tastes, needs and desires of people change with age, and if it were possible to determine the age category of the person, then narrowcasting could be used effectively in such scenarios.

Several different visual cues could be used for determining the age category of a person in an automated manner. Height, skin texture, facial features and hair color are some of the visual features that change with age. Cranio-facial research [1] suggests that as people grow older there is an outgrowing and dropping of the chin and the jaw. This change in the skull with increasing age changes the T-ratios (transverse ratios of the distance between the eyes to the distance from the line connecting the eyes to the nose/chin/top of the head). This theory was validated in a study in [2], where it was found that children up to the age of 18 have different ratios compared to people beyond the age of 18. After the age of 18 the skull stops growing, and so there is no further change in the ratios. Kwon et al. [2] used these results to differentiate between children and adults after applying facial feature detectors to high-resolution images. The presence and absence of wrinkles was also used for differentiating between seniors and young adults (refer to [3] for details). However, these methods are inadequate for use in applications that require real-time performance with low-resolution images. The snakes used for finding wrinkles in seniors are difficult to compute from low-resolution facial images, and it takes time for the snakes to stabilize.

Recent advances in image analysis and pattern recognition open up the possibility of automatic age classification from only facial images. Classifying age with a very high degree of accuracy is a relatively hard problem, even for human experts [1]. Automatic classification of age from only facial images presents a number of difficult challenges. In general, five main steps can be distinguished in tackling the problem in an holistic manner.

1. An automated robust face detection system for segmenting the facial region.
2. Preprocessing of image data to eliminate the variabilities that may occur in image data capture.
3. Suitable data representation techniques that can keep useful information and at the same time reduce the dimensionality of the feature vector.
4. Design of a robust classifier for the classification of age.
5. Characterization of the performance of the classifier.

This chapter provides an holistic approach by taking into account the five steps outlined above to classify humans into age groups of over 50 and under 40 using only visual data. The rest of the chapter is organized as follows. Section 2 describes the state of the art in understanding the factors that may be used to estimate age, and in automatic age classification.
Following this, a brief description of the classifier design is presented in Section 3. Section 4 presents the results of empirical analysis conducted to understand the effects of various factors that may affect the performance of a classifier. Finally, Section 5 ends the chapter with a few concluding remarks and ideas for future work.

2. Related Works

Automated classification of age from appearance information is largely underexplored. In [2], Kwon et al. made an attempt to classify images of people into three age categories: children, adults and senior adults. This work was based on the cardioidal strain transformation to model the growth of a person's head from infancy to adulthood, obtained from cranio-facial research [1]. The revised cardioidal strain transformation describing head growth can be visualized as a series of ever-growing circles all attached at a common tangent 'base' point, in this case the top of the head. According to this transformation, with an increase in age, the growth of the lower parts of the face is more pronounced than that of the upper part. Thus, for example, within the top and bottom margins of the head, the eyes occupy a higher position in an adult than in an infant, due to an outgrowing and dropping of the chin and jaw. This result was used in distinguishing between adults and babies. To do this, the authors used facial feature detectors to locate the eyes, nose and mouth. These feature positions were then used to compute certain ratios for each face; if the ratio was greater than a particular value, the face was classified as belonging to an adult, otherwise as belonging to a baby. To distinguish between young adults and seniors, the authors used snakes to determine the presence of wrinkles on the forehead of the person. The main problem with this method is the need to detect facial features, which is extremely error-prone. The costs and processing time of this approach limit its practicality for use in real applications.

In [1,4] O'Toole et al. used three-dimensional information for building a parametric 3D face model. They used caricature algorithms to exaggerate or de-emphasize the 3D facial features. In the resulting images the perceived age changes based on whether the features are exaggerated or de-emphasized. This result suggested that in older faces the 3D facial features are emphasized. These results were verified by a number of observers.

In a related work, Lanitis et al. [5] proposed a method for simulating aging in facial images, thus allowing them to 'age-normalize' faces before using them for face recognition. For this purpose the authors used statistical model parameters that contain shape and intensity information. Some of these parameters were then used for simulating aging effects. Empirical results reported in that work suggest that face recognition accuracy improved when 'age normalization' was carried out. These results indicate that the appearance of the face contains sufficient information regarding the age of the person, which in turn could be used for classification purposes.

In [2] it was shown that wrinkles could be used for differentiating between senior citizens and adults below a certain age. From this result it could be argued that the presence and absence of wrinkles, as indicated by the skin texture, could be detected even at low resolutions. Also, in [6] it has been stated that after age 45 the skin begins to thin, partially because of hormonal changes, resulting in a loss of volume and smoothness. In [3] it was observed that there is a very noticeable change in the facial skin texture of a person between the ages of 40 and 50. Keeping these points in mind, the two age categories used for classification were the age groups above 50 and below 40, with the ten years in between treated as a buffer zone where the performance of the classifier is unpredictable.
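As an aside, the ratio-based rule described above can be made concrete with a small sketch that computes a geometric ratio from facial landmark positions and thresholds it to separate 'baby' from 'adult'. The landmark coordinates, the particular ratio (eye-line-to-chin distance over inter-eye distance) and the threshold are illustrative assumptions; they are not the ratios or values used in [2].

```python
import math

def eye_chin_ratio(left_eye, right_eye, chin):
    """Distance from the midpoint of the eyes to the chin, divided by the
    inter-eye distance. This grows as the chin drops with age (one of the
    family of ratios discussed in cranio-facial research)."""
    eye_mid = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    return math.dist(eye_mid, chin) / math.dist(left_eye, right_eye)

# Hypothetical landmark positions (in pixels) for two faces.
faces = {
    "baby":  {"left_eye": (30, 40), "right_eye": (70, 40), "chin": (50, 95)},
    "adult": {"left_eye": (30, 40), "right_eye": (70, 40), "chin": (50, 115)},
}

THRESHOLD = 1.6  # illustrative cut-off, not the value used in [2]
for name, lm in faces.items():
    r = eye_chin_ratio(lm["left_eye"], lm["right_eye"], lm["chin"])
    label = "adult" if r > THRESHOLD else "baby"
    print(f"{name}: ratio = {r:.2f} -> classified as {label}")
```

The sketch also makes the main weakness visible: everything hinges on reliable landmark detection, which, as noted above, is error-prone on low-resolution images.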
3. Description of the Proposed Age Classification System

This section provides a detailed description of the proposed system. Figure 13.1 shows the block diagram consisting of the main blocks used in the age category classification system. The proposed system crops the face in the current scene and decides the age category of the person. The output of the face detector passes through the preprocessing algorithms, such as histogram equalization and brightness gradient removal, in order to present images of uniform brightness to the classifier. Before the image is fed to the classifier, it is passed through a feature extraction algorithm. Principal component analysis, robust principal component analysis and non-negative matrix factorization were used for experimentation. This representation of the image is finally fed to the classifier, which decides the age category of the person.

Figure 13.1 Block diagram of the age classification system (video sequence -> face detection -> preprocessing and feature extraction -> single classifier or bank of classifiers -> output).

To design the classifier, the publicly available SVMLight [7] software was used. Bootstrapping techniques were used to learn a useful model based on the available examples (see Figure 13.2 for details). The design of the classifier was undertaken in several stages. First, the training examples were used to train the SVMs to learn the underlying model, which acts as a naïve classifier. Using the naïve classifier, a series of experiments was then performed to pick a representative set of examples to bootstrap the classifier. A series of such new classifiers were tuned and tested on a test data set. The best classifier was chosen for the final testing for a given resolution and all other possible combinations that may influence the output of the classifier.

Figure 13.2 Bootstrapping process to find the best classifier for testing (training -> classifiers -> bootstrapping -> new classifiers -> best classifier).

A four-fold cross validation strategy was employed to report the classification results. For each of the training and testing sets, a radial basis function with a fixed value of gamma was used as the kernel function, and the cost factor was varied with values of 0.1, 0.5, 1, 5, 10, 50 and 100. Thus, for each of the training and testing sets, seven different accuracies were obtained, depending on the parameter settings. The accuracies were averaged over all four sets and the best average accuracies were used as the final results. The output of the classifier was then used in the decision fusion process in the case of the parallel paradigm of classification, or fed to the next level of classifier in the case of the serial paradigm of classification. A detailed description of the proposed approach is presented in the following subsections.
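Before moving on, the cross-validation protocol just described (an RBF kernel with a fixed gamma, the cost factor varied over 0.1, 0.5, 1, 5, 10, 50 and 100, four folds, accuracies averaged per setting) can be sketched as follows. The chapter used SVMLight; the sketch below uses scikit-learn instead, and the feature matrix, labels and gamma value are placeholders rather than data or parameters from the chapter. The subsequent bootstrapping of the winning classifier with misclassified examples is omitted here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: each row is a feature vector for one face (for example,
# 100 projection coefficients); labels are +1 (over 50) / -1 (under 40).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 100))
y = rng.choice([-1, 1], size=400)

cost_factors = [0.1, 0.5, 1, 5, 10, 50, 100]  # values reported in the chapter
gamma = 0.01                                  # fixed gamma; the actual value is not given

mean_accuracy = {}
for C in cost_factors:
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    scores = cross_val_score(clf, X, y, cv=4, scoring="accuracy")  # four folds
    mean_accuracy[C] = scores.mean()          # average over the four sets

best_C = max(mean_accuracy, key=mean_accuracy.get)
print(f"best cost factor C = {best_C}, mean accuracy = {mean_accuracy[best_C]:.3f}")
```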
3.1 Database

A large database was collected using the resources of Advanced Interfaces Inc. The facial portions of the images were segmented automatically using the face detector developed in [8] and originally proposed in [9]. The set-up used for obtaining the facial images is shown in Figure 13.3. The pose variation in the data set was determined by the face detector, which allowed a maximum in-plane and/or out-of-plane rotation of about 30 degrees. The data set consisted of about 4100 grayscale images with a minimum resolution of 29×29, distributed across the age categories of 20 to 40 and 50 to 80. The database was divided into four groups: males and females above 50 years old, and males and females below 40 years old. Table 13.1 summarizes the distribution of the database.

Figure 13.3 Data collection and face normalization.

Table 13.1 Database of images.

Gender    Age category    Number of images
Female    Over 50          631
Male      Over 50         1042
Female    Under 40        1283
Male      Under 40        1214

In the process of data collection, facial images of people of different age groups were collected. All these images were appropriately labeled with the age of the person in the image. These labels were used as ground truths during the training of the classifiers. The data set was divided into three mutually disjoint parts: the training set, the bootstrapping set and the testing set.

3.2 Segmentation of the Facial Region

A biological face detection system developed by Yeasin et al. [8] was used to segment the face from the rest of the image. The following criteria were emphasized while developing the face detector:

1. The algorithm must be robust enough to cope with the intrinsic variabilities in images.
2. It must perform well in an unstructured environment.
3. It should be amenable to real-time implementation and give few or no false alarms.

An example-based learning framework was used to learn the face model from a meticulously created representative set of face and nonface examples. The key to the success of the method was the systematic way of generating positive and negative training examples using various techniques, namely: preprocessing, lighting correction, normalization, the creation of virtual positive examples from a reasonable number of facial images, and the bootstrap technique for collecting negative examples for training. A successful implementation of face detection was made using the retinally connected neural network architecture reported in [9]; this was later refined to make it suitable for real-time applications. The average performance of the system is above 95% on facial images having up to 30–35 degrees of deviation from the frontal facial image.

3.3 Preprocessing

The localized face from the previous stage is preprocessed to normalize the facial patterns. Several techniques, such as illumination gradient correction, were applied to compensate for or reduce the effect of lighting variations within the window where the face was detected. Also, histogram equalization was performed to reduce the nonuniformity in the pixel distributions that may occur due to various imaging situations. Additionally, normalization of the training set was performed to align all facial features (based on manual labeling) with respect to a canonical template, so that they formed a good cluster in the high-dimensional feature space.
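The chapter does not spell out the exact preprocessing algorithms, so the sketch below shows one common way to implement the two operations named above: histogram equalization, and removal of a brightness gradient by fitting and subtracting a least-squares plane. The synthetic test image and the order of the two steps are assumptions, not details taken from the chapter.

```python
import numpy as np

def equalize_histogram(img):
    """Map grayscale values so their cumulative distribution is roughly uniform."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = (cdf * 255).astype(np.uint8)                  # lookup table
    return lut[img]

def remove_brightness_gradient(img):
    """Fit a plane a*x + b*y + c to the pixel intensities (least squares)
    and subtract it, keeping the mean brightness."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.astype(float).ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    corrected = img - plane + plane.mean()
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Synthetic 29x29 face-sized patch with a left-to-right brightness gradient.
rng = np.random.default_rng(0)
face = np.clip(np.linspace(60, 180, 29)[None, :] + rng.normal(0, 15, size=(29, 29)),
               0, 255).astype(np.uint8)
face = remove_brightness_gradient(face)
face = equalize_histogram(face)
print(face.shape, int(face.min()), int(face.max()))
```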
3.4 Feature Extraction

The feature extraction stage is designed to obtain a meaningful representation of the observations and also to reduce the dimension of the feature vector. It is assumed that a classifier that uses a lower dimensional feature vector will run faster and use less memory, which is very desirable for any real-time system. Besides increasing accuracy by removing very specific information about the images, the feature extraction method also improves the computational speed of the classifier – an important criterion for a real-time classifier system. In the proposed method, the following techniques were implemented to construct a feature vector from the observations. Experiments were conducted with the aim of achieving two goals, namely: to select the components of the PCA and NMF; and to compare PCA and NMF and use the more suitable one for the classification of age. A weighted NMF [10] scheme was used in the experimentation to alleviate the limitation of standard NMF in optimizing the local representation of the image. Several experiments were conducted with different numbers of bases to compare the classification results.

3.5 Classifying People into Age Groups

For the training of the classifier, about 50% of the data collected from all the age categories was used. A method of cross validation was used to get the best possible classifier. The different parameters that could be changed were the kernels, the kernel parameters and the cost factor. Once the best classifier was found from the cross validation method, the misclassified examples could be used in the bootstrapping process to further refine the classifier, thus finding the optimum classifier.

In order to improve the performance of the classifier, either the parallel or the serial paradigm, or a combination of the two, could be used. The parallel paradigm is based on the fact that examples misclassified by one classifier could be classified correctly by another, thus giving a better overall accuracy if both classifiers are used. The classifiers can vary either in the type of parameters used or in the type of feature extraction used for them. Another way to improve the accuracy is to exploit the fact that there are big differences in the facial features of different genders, as well as of different ethnicities. For example, the face of an adult female could be misclassified as belonging to a person from a lower age category. Hence, different sets of images could be used for training the classifier for female age categories and for male age categories. The same logic could be extended to people of different ethnicities, leading to the need for an ethnicity classifier before the age category classifier. Using the parallel and the serial paradigms simultaneously would give the best possible performance.

As an example, consider a binary age category classifier built using the serial paradigm. The image fed by the camera is used by the face detector software to detect the face in it. This face is then resized to 20 by 20, and histogram equalization and brightness gradient removal are carried out on the image. Following this image processing, the image is passed through a feature extractor having a set of 100 basis vectors, thus giving a feature vector with 100 values. This is then fed to a gender classifier and, depending on the gender output, it is fed either to a male age classifier or to a female age classifier. The final output of the age classifier gives the age category of the person as belonging either to the adult age category or the minor age category.
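A rough sketch of the serial paradigm just described (project the preprocessed face onto about 100 basis vectors, classify gender, then apply a gender-specific binary age SVM) is given below. It uses scikit-learn with randomly generated stand-in data purely to show the structure; the chapter's actual models were trained with SVMLight on the face database and bootstrapped as described above, and preprocessing (histogram equalization, brightness gradient removal) is assumed to have been applied already.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder training data: flattened 20x20 faces with gender and age labels.
X = rng.random((200, 400))
gender_y = rng.choice([0, 1], size=200)   # 0 = female, 1 = male
age_y = rng.choice([0, 1], size=200)      # 0 = under 40, 1 = over 50

# Feature extractor: project onto 100 basis vectors (PCA here; the chapter
# also experimented with RPCA and NMF).
projector = PCA(n_components=100).fit(X)
F = projector.transform(X)

# Serial paradigm: gender classifier first, then gender-specific age SVMs.
gender_clf = SVC(kernel="rbf").fit(F, gender_y)
age_clf = {
    0: SVC(kernel="rbf").fit(F[gender_y == 0], age_y[gender_y == 0]),
    1: SVC(kernel="rbf").fit(F[gender_y == 1], age_y[gender_y == 1]),
}

def classify(face_20x20):
    """face_20x20: a preprocessed (equalized, gradient-removed) face crop."""
    f = projector.transform(face_20x20.reshape(1, -1))
    g = int(gender_clf.predict(f)[0])
    a = int(age_clf[g].predict(f)[0])
    return ("male" if g else "female", "over 50" if a else "under 40")

print(classify(rng.random((20, 20))))
```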
4. Empirical Analysis

To understand the effects of various factors that may influence the performance of the classifier, a number of experiments were conducted to characterize its performance. The tests conducted were:

1. Performance of dimensionality reduction methods.
2. The effect of preprocessing and image resolution.
3. The effect of pose variation.
4. Characterization of brightness gradients.
5. Characterization of occlusion.
6. The impact of gender on age classification.
7. Classifier accuracies across the age groups.

4.1 Performance of Data Projection Techniques

The problem known as the 'curse of dimensionality' has received a great deal of attention from researchers, and many techniques have been proposed to project data. These techniques are based on mapping high-dimensional data to a lower dimensional space such that the inherent structure of the data is approximately preserved. In this work, two linear projection methods for dimensionality reduction were used to find a suitable representation of the image data. Three different representations, namely Principal Component Analysis (PCA), Robust Principal Component Analysis (RPCA) [11] and Non-Negative Matrix Factorization (NMF) [7], were used for the experimentation. The two forms of PCA are supposed to capture the global features of the facial image, while NMF is supposed to capture the local image features. Classical PCA is very sensitive to noise, so to nullify the effect of noise, RPCA methods were also tested.

PCA is designed to capture the variance in a data set in terms of principal components. In effect, one is trying to reduce the dimensionality of the data to summarize its most important (i.e. defining) parts, whilst simultaneously filtering out noise present in the image data. However, which components of PCA capture the meaningful information for a particular problem is in general not well understood. This poses a problem in experimentation, because it is not known whether the age information is present in the eigenvectors corresponding to the highest eigenvalues or in those with low eigenvalues. To overcome this problem, the experiments involved varying the number of principal components from 10 to 100 in steps of ten. A similar strategy was also used for the RPCA-based representation.

The second method for dimensionality reduction that was used is Non-Negative Matrix Factorization (NMF) [7]. The advantage of this method is that in cases where the age information is present in certain features of the face and not the entire face, NMF would be able to capture that information. However, it has the same problem as PCA, i.e. determining which basis vectors to use, and this was handled in the same way as for PCA. In each case, the basis vectors were obtained using the set of training examples.

From the experiments it was found that all representational schemes, namely PCA, RPCA and NMF, yielded very low accuracies of around 50% for the age classification problem. On the other hand, classification using raw image values for the same set of data gave accuracies of around 70%. This indicated that when the images were projected onto the vector subspaces for PCA, RPCA and NMF, data crucial for age classification was lost. While this may sound counterintuitive, the lower accuracy is believed to have important implications for choosing appropriate data projection techniques. According to the central limit theorem, low-dimensional linear projections tend to become normally distributed as the original dimensionality increases. This would mean that little information can be extracted by linear projection when the original dimensionality of the data is very high. To overcome this problem, nonlinear data projection methods (for example, self-organizing maps [12], and distance-based approaches such as multidimensional scaling [13] and local linear embedding [14]) should be investigated.
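The component sweep described in Section 4.1 (10 to 100 basis vectors in steps of ten, compared against raw pixel values) can be expressed as in the sketch below. The data is a random placeholder, so the printed accuracies are meaningless; the point is only the protocol. Unlike the chapter, this sketch fits the projections on the full set rather than per training fold, and RPCA is omitted because scikit-learn has no drop-in robust PCA.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((400, 841))        # placeholder for 400 flattened 29x29 face crops
y = rng.choice([0, 1], size=400)  # placeholder age-group labels

def cv_accuracy(features, labels):
    """Mean four-fold cross-validated accuracy of an RBF-kernel SVM."""
    return cross_val_score(SVC(kernel="rbf"), features, labels, cv=4).mean()

print(f"raw pixels: {cv_accuracy(X, y):.3f}")   # baseline without projection

for k in range(10, 101, 10):                    # number of basis vectors
    pca_feats = PCA(n_components=k).fit_transform(X)
    nmf_feats = NMF(n_components=k, init="nndsvda", max_iter=500).fit_transform(X)
    print(f"k={k:3d}  PCA: {cv_accuracy(pca_feats, y):.3f}  "
          f"NMF: {cv_accuracy(nmf_feats, y):.3f}")
```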
4.2 The Effect of Preprocessing and Image Resolution

Preprocessing algorithms were designed to remove variations in the lighting conditions of the images. The two preprocessing methods tested were histogram equalization, and histogram equalization followed by brightness gradient removal. Brightness gradient removal cannot be used as a stand-alone preprocessing step, and so it was used in conjunction with histogram equalization to check whether it has any significant impact on the classifier accuracies. Hence, the available data set was preprocessed to obtain three sets: unprocessed images, histogram-equalized images, and images processed through histogram equalization with brightness gradient removal.

In order to find the optimum image resolution to be used for classification, the available image set was downsampled to obtain 25×25 and 20×20 images, along with the existing image resolution of 29×29. The reason for trying lower image resolutions was that, by downsampling, certain high-frequency peculiarities specific to individuals, such as moles, would be lost, potentially leading to better classification accuracies. Figure 13.4 shows the plot of classification accuracy at various image resolutions under different image preprocessing conditions. The result in Figure 13.4 represents the model obtained using male examples only. It may be noted that similar results were obtained for models trained using female examples, as well as for models obtained using combined male and female examples. From the plot it can be seen that there is an improvement in the classification accuracy of about 4–5% for an increase in the facial image resolution from 20×20 to 29×29. This improvement can be seen for all image preprocessing conditions.

4.3 The Effect of Pose Variation

In order to be effective for real-life applications, the age classifier should provide stable classification results in the presence of significant pose variations of the faces. To test the performance of the classifier in the presence of pose variations, the following test was conducted. The faces of 12 distinct people were saved under varying poses, from frontal facial image to profile facial image. The best classifiers obtained for each resolution were used on these sets of faces to check the consistency of classification accuracy. Thus if, for a given track having ten faces, eight faces were classified correctly, the accuracy for that track was tabulated as 80%. In this way, the results were tabulated for all three classifiers.

Figure 13.4 The effect of preprocessing on age classification: age classification accuracy (male-only database) at resolutions of 20×20, 25×25 and 29×29 for unprocessed images, histogram-equalized images, and histogram equalization with brightness gradient removal.
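The 25×25 and 20×20 sets used in Section 4.2 were obtained by downsampling the 29×29 crops; a minimal sketch using Pillow is given below. The interpolation method is an assumption, since the chapter does not state how the downsampling was performed.

```python
import numpy as np
from PIL import Image

def downsample(face, size):
    """Resize a grayscale face crop to size x size pixels."""
    return np.array(Image.fromarray(face).resize((size, size)))

rng = np.random.default_rng(0)
face_29 = rng.integers(0, 256, size=(29, 29), dtype=np.uint8)  # stand-in face crop

resolutions = (29, 25, 20)   # the three resolutions compared in Section 4.2
faces = {s: downsample(face_29, s) for s in resolutions}
for s, img in faces.items():
    print(f"{s}x{s}: shape={img.shape}, dtype={img.dtype}")
```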
