
Computational Color Imaging

Lecture Notes in Computer Science 5646
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany

Alain Trémeau, Raimondo Schettini, Shoji Tominaga (Eds.)

Computational Color Imaging
Second International Workshop, CCIW 2009
Saint-Etienne, France, March 26-27, 2009
Revised Selected Papers
Including 114 colored figures

Volume Editors

Alain Trémeau
Université Jean Monnet, Laboratoire Hubert Curien UMR CNRS 5516
18 rue Benoit Lauras, 42000 Saint-Etienne, France
E-mail: alain.tremeau@univ-st-etienne.fr

Raimondo Schettini
Università degli Studi di Milano-Bicocca
Piazza dell'Ateneo Nuovo 1, 20126 Milano, Italy
E-mail: schettini@disco.unimib.it

Shoji Tominaga
Chiba University
1-33, Yayoi-cho, Inage-ku, Chiba-shi, Chiba, 263-8522, Japan
E-mail: shoji@faculty.chiba-u.jp

Library of Congress Control Number: 2009930845
CR Subject Classification (1998): I.4, I.3, I.5, I.2.10, F.2.2
LNCS Sublibrary: SL – Image Processing, Computer Vision, Pattern Recognition, and Graphics
ISSN: 0302-9743
ISBN-10: 3-642-03264-8 Springer Berlin Heidelberg New York
ISBN-13: 978-3-642-03264-6 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

springer.com

© Springer-Verlag Berlin Heidelberg 2009
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12701643 06/3180 5 4 3 2 1 0

Preface

We would like to welcome you to the proceedings of CCIW 2009, the Computational Color Imaging Workshop, held in Saint-Etienne, France, March 26–27, 2009. This, the second CCIW, was organized by the University Jean Monnet and the Laboratoire Hubert Curien UMR 5516 (Saint-Etienne, France) with the endorsement of the International Association for Pattern Recognition (IAPR), the French Association for Pattern Recognition and Interpretation (AFRIF) affiliated with IAPR, and the "Groupe Français de l'Imagerie Numérique Couleur" (GFINC).

The first CCIW was organized in 2007 in Modena, Italy, with the endorsement of IAPR.
That first workshop was held along with the International Conference on Image Analysis and Processing (ICIAP), the main conference on image processing and pattern recognition organized every two years by the Group of Italian Researchers on Pattern Recognition (GIRPR), which is affiliated with the International Association for Pattern Recognition (IAPR).

Our first goal, since we began planning the workshop, was to bring together engineers and scientists from various imaging companies and from technical communities all over the world to discuss diverse aspects of their latest work, ranging from theoretical developments to practical applications in the field of color imaging, color image processing and analysis. The workshop was therefore intended for researchers and practitioners in the digital imaging, multimedia, visual communications, computer vision, and consumer electronics industries who are interested in the fundamentals of color image processing and its emerging applications.

We received many excellent submissions. Each paper was reviewed by three reviewers, and the general chairs then carefully selected only 23 papers in order to achieve a high scientific level at the workshop. The final decisions were based on the criticisms and recommendations of the reviewers and on the relevance of the papers to the goal of the workshop. Only 58% of the submitted papers were accepted for inclusion in the program.

In order to give an overview of current research directions in computational color imaging, six sessions were organized:

- Computational color vision models
- Color constancy
- Color image/video indexing and retrieval
- Color image filtering and enhancement
- Color reproduction (printing, scanning, and displays)
- Multi-spectral, high-resolution and high dynamic range imaging

In addition to the contributed papers, four distinguished researchers were invited to this second CCIW to deliver keynote speeches on current research directions in hot topics of computational color imaging:

- Hidehiko Komatsu, on Information Processing in Higher Brain Areas
- Qasim Zaidi, on General and Specific Color Strategies for Object Identification
- Theo Gevers, on Color Descriptors for Object Recognition
- Gunther Heidemann, on Visual Attention Models and Color Image Retrieval

There are many organizations and people to thank for their various contributions to the planning of this meeting. We are pleased to acknowledge the generous support of Chiba University, the Dipartimento di Informatica Sistemistica e Comunicazione of the Università degli Studi di Milano-Bicocca, the Région Rhône-Alpes, and Saint-Etienne Métropole. Special thanks also go to all our colleagues on the Conference Committee for their dedication and work, without which this workshop would not have been possible. Finally, we envision the continuation of this unique event, and we are already making plans for organizing the next CCIW workshop in Milan in 2011.

April 2009
Alain Trémeau
Raimondo Schettini
Shoji Tominaga

Organization

Organizing Committee

General Chairs
Alain Trémeau (Université Jean Monnet, Saint-Etienne, France)
Raimondo Schettini (Università di Milano-Bicocca, Milan, Italy)
Shoji Tominaga (Chiba University, Chiba, Japan)

Program Committee
Jesus Angulo, Ecole des Mines de Paris, France
James K. Archibald, Brigham Young University, USA
Sebastiano Battiato, Università di Catania, Italy
Marco Bressan, Xerox, France
Majeb Chambah, Université de Reims, France
Cheng-Chin Chiang, National Dong Hwa University, Taiwan
Bibhas Chandra Dhara, Jadavpur University, India
Francesca Gasparini, Università di Milano-Bicocca, Italy
Takahiko Horiuchi, Chiba University, Japan
Hubert Konik, Université de Saint-Etienne, France
Patrick Lambert, Université de Savoie, France
J. Lee, Brigham Young University, USA
Jianliang Li, Nanjing University, P.R. China
Peihua Li, Heilongjiang University, China
Chiunhsiun Lin, National Taipei University, Taiwan
Ludovic Macaire, Université de Lille, France
Lindsay MacDonald, London College of Communication, UK
Massimo Mancuso, STMicroelectronics, France
Jussi Parkkinen, University of Joensuu, Finland
Steve Sangwine, University of Essex, UK
Gerald Schaefer, Aston University, UK
Ishwar K. Sethi, Oakland University, Rochester, USA
Xiangyang Xue, Fudan University, China
Rong Zhao, Stony Brook University, USA
Silvia Zuffi, CNR, Italy
Local Committee
Eric Dinet, Laboratoire Hubert Curien, Saint-Etienne, France
Damien Muselet, Laboratoire Hubert Curien, Saint-Etienne, France
Frédérique Robert, IM2NP UMR CNRS 6242, Toulon, France
Dro Désiré Sibidé, Laboratoire Hubert Curien, Saint-Etienne, France
Xiaohu Song, Laboratoire Hubert Curien, Saint-Etienne, France

Sponsoring Institutions
Laboratoire Hubert Curien, Saint-Etienne, France
Université Jean Monnet, Saint-Etienne, France
Région Rhône-Alpes, France
Saint-Etienne Métropole, France
Università di Milano-Bicocca, Milan, Italy
Chiba University, Japan

Table of Contents

Invited Talk
Color Information Processing in Higher Brain Areas
Hidehiko Komatsu and Naokazu Goda ... 1

Computational Color Vision Models
Spatio-temporal Tone Mapping Operator Based on a Retina Model
Alexandre Benoit, David Alleysson, Jeanny Herault, and Patrick Le Callet ... 12
Colour Representation in Lateral Geniculate Nucleus and Natural Colour Distributions
Naokazu Goda, Kowa Koida, and Hidehiko Komatsu ... 23

Color Constancy
Color Constancy Algorithm Selection Using CART
Simone Bianco, Gianluigi Ciocca, and Claudio Cusano ... 31
Illuminant Change Estimation via Minimization of Color Histogram Divergence
Michela Lecca and Stefano Messelodi ... 41
Illumination Chromaticity Estimation Based on Dichromatic Reflection Model and Imperfect Segmentation
Johji Tajima ... 51

Color Image/Video Indexing and Retrieval
An Improved Image Re-indexing Technique by Self Organizing Motor Maps
Sebastiano Battiato, Francesco Rundo, and Filippo Stanco ... 62
KANSEI Based Clothing Fabric Image Retrieval
Yen-Wei Chen, Shota Sobue, and Xinyin Huang ... 71
A New Spatial Hue Angle Metric for Perceptual Image Difference
Marius Pedersen and Jon Yngve Hardeberg ... 81
Structure Tensor of Colour Quaternion Image Representations for Invariant Feature Extraction
Jesús Angulo ... 91

Color Image Filtering and Enhancement
Non-linear Filter Response Distributions of Natural Colour Images
Alexander Balinsky and Nassir Mohammad ... 101
Perceptual Color Correction: A Variational Perspective
Edoardo Provenzi ... 109
A Computationally Efficient Technique for Image Colorization
Adrian Pipirigeanu, Vladimir Bochko, and Jussi Parkkinen ... 120
Texture Sensitive Denoising for Single Sensor Color Imaging Devices
Angelo Bosco, Sebastiano Battiato, Arcangelo Bruna, and Rosetta Rizzo ... 130

Color Reproduction (Printing, Scanning, Displays)
Color Reproduction Using Riemann Normal Coordinates
Satoshi Ohshima, Rika Mochizuki, Jinhui Chao, and Reiner Lenz ... 140
Classification of Paper Images to Predict Substrate Parameters Prior to Print
Matthias Scheller Lichtenauer, Safer Mourad, Peter Zolliker, and Klaus Simon ... 150
A Colorimetric Study of Spatial Uniformity in Projection Displays
Jean-Baptiste Thomas and Arne Magnus Bakke ... 160
Color Stereo Matching Cost Applied to CFA Images
Hachem Halawana, Ludovic Macaire, and François Cabestaing ... 170
JBIG for Printer Pipelines: A Compression Test
Daniele Ravì, Tony Meccio, Giuseppe Messina, and Mirko Guarnera ... 180
Synthesis of Facial Images with Foundation Make-Up
Motonori Doi, Rie Ohtsuki, Rie Hikima, Osamu Tanno, and Shoji Tominaga ... 188

Multi-spectral, High-Resolution and High Dynamic Range Imaging
Polynomial Regression Spectra Reconstruction of Arctic Charr's RGB
J. Birgitta Martinkauppi, Yevgeniya Shatilova, Jukka Kekäläinen, and Jussi Parkkinen ... 198
An Adaptive Tone Mapping Algorithm for High Dynamic Range Images
Jian Zhang and Sei-ichro Kamata ... 207
Material Classification for Printed Circuit Boards by Spectral Imaging System
Abdelhameed Ibrahim, Shoji Tominaga, and Takahiko Horiuchi ... 216
Supervised Local Subspace Learning for Region Segmentation and Categorization in High-Resolution Satellite Images
Yen-wei Chen and Xian-hua Han ... 226

Author Index ... 235


Material Classification for Printed Circuit Boards by Spectral Imaging System
Abdelhameed Ibrahim, Shoji Tominaga, and Takahiko Horiuchi

4 Material Classification and Image Segmentation

A material classification algorithm is proposed based on the spectral features of the surface-spectral reflectances. The image segmentation process is divided into two sub-processes: pixel-based classification and region growing.

4.1 Pixel-Based Classification Algorithm

The following algorithm is applied to each pixel independently. Adjacent pixels with close reflectance are gathered into the same region and used as initial segments for the post-processing level.

1. The average spectral reflectance (S̄_1, S̄_2, ..., S̄_31) is calculated for the whole image.
2. Pixels with high reflectance values that satisfy S(λ_k; x, y) > S̄_k (k = 1, 2, ..., 31) over the whole visible range [400-700 nm] are classified into silk-screen print.
3. The peak wavelength of each spectral curve is detected for the remaining pixels (except screen print). If the spectral peak lies in the range [600–700 nm] and the spectral reflectance values satisfy S(λ_k; x, y) > S̄_k (k = 21, 22, ..., 31) and S(λ_k; x, y) < S̄_k (k = 1, 2, ..., 11), the pixel is classified into metal.
4. If the peak wavelength of a remaining pixel lies in the range [510–590 nm] and the relevant spectral reflectance satisfies S(λ_k; x, y) > S̄_k (k = 12, 13, ..., 20) and S(λ_k; x, y) < S̄_k (k = 21, 22, ..., 31), the pixel is classified into resist-coated metal.
5. The other pixels satisfying S(λ_k; x, y) < S̄_k (k = 1, 2, ..., 31) are classified into substrate.
6. Finally, the through holes are determined independently from the observed image by using back-illumination. The back-illuminated image is binarized using a threshold, in which the brighter parts correspond to the holes.

4.2 Region Growing Algorithm

The above algorithm partitions the spectral image of a PCB into different material regions. However, some pixels remain without any label, and isolated regions with only a small number of pixels can be considered noisy. Hence, an algorithm is needed for merging those undetermined pixels into the neighboring regions. The initial segments are provided by the above pixel-based algorithm. Let us consider the following condition of region homogeneity for each region R_j:

  H(R_j) = True, j = 1, 2, ..., N,   (2)

where N is the number of initial segments. The merging process depends on calculating distances between segments S and S'. We define the distance as a spectral difference between the 31-dimensional reflectance vectors, calculated using the Euclidean distance

  D = √( Σ_{i=1}^{K} (S_i − S_i')² ),   (3)

where K is the number of wavelengths. The region growing algorithm is started using a region of 3x3 pixels with overlapping windows. The minimum distance is checked between the current pixel and the surrounding pixels to update the segments. This process continues until the merging of all adjacent regions stops. Finally, a smoothing operation is executed to obtain the final image segmentation result.
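To make the decision rules of Sects. 4.1 and 4.2 concrete, here is a minimal NumPy sketch of the pixel-based classification step and of the spectral distance used for merging. The function names, label codes, and band-index bookkeeping are illustrative assumptions, not the authors' code, and the back-illumination hole detection and the full region-growing loop are omitted.

```python
import numpy as np

# Labels assumed for illustration; the paper's classes are silk-screen print,
# metal, resist-coated metal, and substrate (holes come from back-illumination).
PRINT, METAL, RESIST, SUBSTRATE, UNLABELED = 1, 2, 3, 4, 0

def classify_pixels(refl):
    """refl: (H, W, 31) surface-spectral reflectance, bands at 400..700 nm in 10 nm steps."""
    H, W, K = refl.shape
    mean_refl = refl.reshape(-1, K).mean(axis=0)   # image-average reflectance per band
    above = refl > mean_refl                       # S(lambda_k; x, y) > mean_k, band-wise
    below = refl < mean_refl
    peak = refl.argmax(axis=2)                     # index of the peak wavelength per pixel

    labels = np.full((H, W), UNLABELED, dtype=np.uint8)

    # Silk-screen print: above the average over the whole visible range.
    labels[above.all(axis=-1)] = PRINT

    # Metal: peak in 600-700 nm (band indices 20..30), high long-wave, low short-wave.
    metal = (labels == UNLABELED) & (peak >= 20) \
            & above[..., 20:].all(axis=-1) & below[..., :11].all(axis=-1)
    labels[metal] = METAL

    # Resist-coated metal: peak in 510-590 nm (indices 11..19), high mid-wave, low long-wave.
    resist = (labels == UNLABELED) & (peak >= 11) & (peak <= 19) \
             & above[..., 11:20].all(axis=-1) & below[..., 20:].all(axis=-1)
    labels[resist] = RESIST

    # Substrate: below the average over the whole range; other pixels stay unlabeled
    # and are handled by the region-growing step.
    labels[(labels == UNLABELED) & below.all(axis=-1)] = SUBSTRATE
    return labels

def spectral_distance(s, s_prime):
    """Euclidean distance between two 31-dimensional reflectance vectors (Eq. 3)."""
    return np.sqrt(np.sum((s - s_prime) ** 2))
```

In a merging pass, `spectral_distance` would be evaluated between a still-unlabeled pixel (or small segment) and its neighboring segments, and the pixel assigned to the segment with the minimum distance, repeated until no further merges occur.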
5 Experiments

5.1 Performance of the Proposed Method

The scene of the raw circuit board was captured with the present spectral imaging system under incandescent lamps. The image size was 1280x1024 pixels. Two data sets of surface-spectral reflectances were estimated from the two spectral images taken under two different light sources. We combined these reflectance images into one reflectance image by comparing the corresponding reflectances at the same pixel point and applying the above rules to all pixels. Then the proposed classification algorithm was executed on the spectral reflectance image. The typical spectral reflectances obtained for the PCB are shown in Fig. 4(a), and Fig. 4(b) shows the classification results of the developed method.

Fig. 4. (a) Typical curves of surface-spectral reflectance for print, metal, resist-coated metal, and substrate of the PCB. (b) Material classification results for a part of the raw PCB.

In the figure, the classified regions are painted in different colors: white for silk-screen, yellow for metal, green for resist-coated metal, black for substrate, and grey for holes. It should be noted that the observed PCB image is clearly classified into four material regions plus through-holes.

5.2 Comparison with RGB Reflectance-Based Method

In order to examine the effectiveness of surface reflectance in material classification, the spectral camera system was replaced with a digital still camera. We used a Canon EOS-1Ds Mark II camera to capture color images of the same PCB under the same illumination environment. A Kenko extension ring was inserted between the camera body and the lens to get the required focus from a sufficient distance. RGB images with the same size (1280x1024) as the spectral images were obtained. The normalized color values were calculated as spectral reflectance from Eq. (1) for only the R, G, and B channels by eliminating the illumination effect. Figure 5 shows the captured RGB image, and Figure 6 shows the typical color reflectances obtained for the different PCB materials.

The classification process based on the RGB reflectances was developed as follows:

1. The average color reflectances R̄, Ḡ, B̄ over the whole image are calculated from the red, green, and blue values.
2. High-reflectance pixels satisfying the three conditions R(x, y) > R̄, G(x, y) > Ḡ, and B(x, y) > B̄ are classified into silk-screen print.
3. If the remaining pixels (except screen print) satisfy the conditions R(x, y) > R̄ and R(x, y) > G(x, y) > B(x, y), then the pixels are classified into metal.
4. If the remaining pixels satisfy R(x, y) < B(x, y) and B(x, y) < G(x, y), then the pixels are classified into resist metal.
5. The other pixels are classified into substrate.
6. Finally, the through holes can be determined by using back-illumination.

Fig. 5. Captured color image.

Fig. 6. Typical RGB reflectances of the materials.
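For comparison, the RGB-based rules of Sect. 5.2 reduce to a few vectorized channel comparisons. The sketch below follows the numbered conditions above; the names and label codes match the earlier spectral sketch and are illustrative assumptions, not the authors' implementation, and the hole detection is again omitted.

```python
import numpy as np

# Hypothetical label codes, matching the spectral sketch above.
PRINT, METAL, RESIST, SUBSTRATE, UNLABELED = 1, 2, 3, 4, 0

def classify_rgb(refl_rgb):
    """refl_rgb: (H, W, 3) normalized RGB reflectance image."""
    R, G, B = refl_rgb[..., 0], refl_rgb[..., 1], refl_rgb[..., 2]
    Rm, Gm, Bm = R.mean(), G.mean(), B.mean()           # image-average reflectance per channel

    labels = np.full(R.shape, UNLABELED, dtype=np.uint8)
    labels[(R > Rm) & (G > Gm) & (B > Bm)] = PRINT      # bright in all three channels
    metal = (labels == UNLABELED) & (R > Rm) & (R > G) & (G > B)
    labels[metal] = METAL                                # reddish ordering R > G > B
    resist = (labels == UNLABELED) & (R < B) & (B < G)
    labels[resist] = RESIST                              # greenish ordering G > B > R
    labels[labels == UNLABELED] = SUBSTRATE              # remaining pixels
    return labels
```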
Figure 7(d) presents the RGB-based segmentation results obtained with the above classification algorithm, without hole detection, so that the segmentation results can be compared easily. Comparing it with Fig. 4(b) confirms the accuracy of the proposed reflectance-based classification algorithm. This is clear from the shape of the materials, especially the metal flakes and metal holes, where the RGB-based algorithm produces many misclassified pixels and other wrongly classified pixels, especially in metal parts with specular highlight areas.

5.3 Segmentation Comparison with K-Means and Normalized Cut Algorithms

For comparison with a traditional clustering algorithm and a popular graph-theoretic algorithm, we chose the k-means [8] and the normalized cut [9] algorithms. These algorithms require expensive computation and memory for large images, and the high dimensionality of the spectral images makes it difficult to apply them to the present problem. Therefore, we apply the k-means algorithm to the RGB reflectance image and the normalized cut algorithm to a resized RGB reflectance image to check the performance of our classification method.

The final segmentation results for all algorithms are summarized in Fig. 7, without hole detection, to present the performance of each algorithm in PCB segmentation. Fig. 7(a) shows the ground truth of the segmentation; the ground truth was manually generated as a desired segmentation. Fig. 7(b) shows the image segmentation results of the proposed method. Figs. 7(c)-(f) show the segmentation results of the previous method proposed in [6], the RGB reflectance-based method, the k-means clustering, and the normalized cut algorithm, respectively. We changed the initial seed points for k-means many times but obtained nearly the same result.

Fig. 7. Segmentation results by the different methods, compared with the ground truth: (a) ground truth, (b) proposed spectral-based method, (c) previous method [6], (d) RGB reflectance-based, (e) RGB-based k-means, (f) RGB-based N-cut.

Table 1 lists the accuracy and CPU time of the compared algorithms. The methods were run on an Intel Xeon E5405 2 GHz CPU with 3 GB of memory. The proposed, previous, and RGB-based methods were implemented in C on FreeBSD; k-means and N-cut used Matlab on the same system.

Table 1. Comparison of the accuracy and CPU time for the compared methods.

Method              | Quality rate | CPU time (s)
Proposed method     | 98.72%       | 8.71
Previous method [6] | 96.45%       | 8.22
RGB-based           | 94.01%       | 6.64
RGB-based K-means   | 77.56%       | 3.86
RGB-based N-cut     | 74.37%       | 1321.57

To demonstrate the accuracy of our method, we applied the proposed algorithm to a more complicated PCB with four materials. Figure 8 shows the segmentation result for this different board.

Fig. 8. Segmentation results of a four-material PCB: (a) four-material PCB spectral image, (b) relevant segmentation result.

In the case of a five-material PCB with footprint elements, the proposed algorithm can easily be extended by calculating the average reflectance of the remaining pixels (excluding print, resist, and metal) after the resist-coated-metal step in Section 4.1, then applying the substrate check; the remaining pixels are classified as footprint. The classification of a five-material PCB is presented in Fig. 9.

Fig. 9. Segmentation results of a five-material PCB: (a) five-material PCB spectral image, (b) relevant segmentation result.

We note that the developed method can be used for different PCBs with different numbers of materials. The classification results show high accuracy, with a CPU time of less than 9 s for a 1280x1024x31 spectral PCB image.
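As a point of reference for the baselines in Table 1, a k-means clustering of the RGB reflectance image can be run in a few lines. This generic sketch uses scikit-learn (an assumed dependency), not the authors' Matlab implementation, and the unordered cluster indices it returns still need to be mapped to material labels before any accuracy can be scored.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_baseline(refl_rgb, n_materials=4, seed=0):
    """Cluster an (H, W, 3) RGB reflectance image into n_materials clusters."""
    H, W, C = refl_rgb.shape
    X = refl_rgb.reshape(-1, C)                        # one 3-vector per pixel
    km = KMeans(n_clusters=n_materials, n_init=10, random_state=seed)
    cluster_map = km.fit_predict(X).reshape(H, W)      # unordered cluster indices per pixel
    return cluster_map
```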
Conclusion

This paper has presented an approach to reliable material classification for PCBs by constructing a spectral imaging system. The system works over the whole visible wavelength range [400-700 nm] with the high spectral resolution of narrow-band filtration. An algorithm was presented for effectively classifying the surface material at each pixel into several elements, such as substrate, metal, resist, footprint, and paint, based on the surface-spectral reflectance information estimated from the imaging system. The proposed approach combines spectral reflectance estimation, spectral feature extraction, and image segmentation processes for material classification of raw PCBs. The performance of the proposed method was compared with the previous method, the RGB reflectance-based algorithm, the k-means algorithm, and the normalized cut algorithm. The experimental results showed the effectiveness of the present method in both classification accuracy and computational cost. The algorithm can be applied directly to the material classification problem in a variety of raw PCBs.

References

1. Chang, P.C., Chen, L.Y., Fan, C.Y.: A case-based evolutionary model for defect classification of printed circuit board images. J. Intell. Manuf. 19, 203–214 (2008)
2. Tsai, D.M., Yang, R.H.: An eigenvalue-based similarity measure and its application in defect detection. Image and Vision Computing 23(12), 1094–1101 (2005)
3. Ibrahim, Z., Al-Attas, S.A.R.: Wavelet-based printed circuit board inspection algorithm. Integrated Computer-Aided Engineering 12, 201–213 (2005)
4. Huang, S.Y., Mao, C.W., Cheng, K.S.: Contour-Based Window Extraction Algorithm for Bare Printed Circuit Board Inspection. IEICE Trans. 88-D, 2802–2810 (2005)
5. Leta, F.R., Feliciano, F.F., Martins, F.P.R.: Computer Vision System for Printed Circuit Board Inspection. In: ABCM Symp. Series in Mechatronics, vol. 3, pp. 623–632 (2008)
6. Tominaga, S.: Material Identification via Multi-Spectral Imaging and Its Application to Circuit Boards. In: 10th Color Imaging Conference, Color Science, Systems and Applications, Scottsdale, Arizona, pp. 217–222 (2002)
7. Tominaga, S., Okamoto, S.: Reflectance-Based Material Classification for Printed Circuit Boards. In: 12th Int. Conf. on Image Analysis and Processing, Italy, pp. 238–243 (2003)
8. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification. John Wiley and Sons, New York (2001)
9. Shi, J., Malik, J.: Normalized Cuts and Image Segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence 22(8), 888–905 (2000)
10. Tominaga, S.: Surface Identification using the Dichromatic Reflection Model. IEEE Trans. PAMI 13, 658–670 (1991)


Supervised Local Subspace Learning for Region Segmentation and Categorization in High-Resolution Satellite Images
Yen-wei Chen (1,2) and Xian-hua Han (1,2)
1 Elect. & Inf. Eng. School, Central South Univ. of Forest and Tech., Changsha, China
2 Graduate School of Science and Engineering, Ritsumeikan University, Japan
chen@is.ritsumei.ac.jp

Abstract. We propose a new feature extraction method based on supervised locality preserving projections (SLPP) for region segmentation and categorization in high-resolution satellite images. Compared with other subspace methods such as PCA and ICA, SLPP can preserve the local geometric structure of the data and enhance within-class local information. The generalization of the proposed SLPP-based method is discussed in this paper.

Keywords: supervised locality preserving projections, region segmentation, categorization, high-resolution satellite images, subspace learning, independent component analysis, generalization.

1 Introduction
Recently, several high-resolution satellites such as IKONOS and QuickBird have been launched, and high-resolution images (1 m) have become available. Region segmentation and categorization in high-resolution satellite images are important issues for many applications, such as remote sensing (RS) and geographic information system (GIS) updating. A satellite image is a record of the relative reflectance of particular wavelengths of electromagnetic radiation. A particular target's reflection depends on the surface features of the target and the wavelength of the incoming radiation. Multi-spectral information has been widely used for classification of remotely sensed images [1]. Since the spectra are determined by many factors, such as object reflectance and instrumentation response, there are strong correlations among the spectra. Principal component analysis (PCA) has been proposed to reduce the redundancy among the spectra and find efficient representations for classification or segmentation [2]. In our previous work, we proposed applying independent component analysis (ICA) to learn an efficient spectral representation [3]. Since ICA features are higher-order uncorrelated while PCA features are only second-order uncorrelated, higher classification performance has been achieved with ICA. Although ICA is a powerful method for finding an efficient spectral representation, it is an unsupervised approach and it does not capture the local geometric structure of the data.

Locality preserving projections (LPP) was proposed to approximate the eigenfunctions of the Laplace-Beltrami operator on the image manifold, and it has been applied to face recognition and image indexing [4]. In this paper, we propose a new approach based on supervised locality preserving projections (SLPP) for classification of high-resolution satellite images. The scheme of the proposed method is shown in Fig. 1: the observed multi-spectral images are first transformed by SLPP, and the transformed spectral components are then used as features for classification. A probabilistic neural network (PNN) [5] is used as the classifier. Compared with other subspace methods such as PCA and ICA, SLPP can not only find the manifold of the images but also enhance within-class local information. In our previous work, the proposed method was successfully applied to IKONOS images, and experimental results showed that the proposed SLPP-based method outperforms the ICA-based method [6]. In this paper, we discuss the generalization of the proposed SLPP-based method: we use only one image as the training sample for SLPP subspace learning and classifier (PNN) training, and the trained SLPP subspace and PNN are then used to segment and categorize other test images.

Fig. 1. The proposed method based on SLPP.

The paper is organized as follows: supervised LPP for feature extraction is presented in Sect. 2, the probabilistic neural network for classification is presented in Sect. 3, and the experimental results are shown in Sect. 4. Finally, the conclusion is given in Sect. 5.
2 Supervised Locality Preserving Projections (SLPP)

The problem of subspace learning for feature extraction is the following. Given a set of spectral feature vectors x_1, x_2, ..., x_m in R^n, the goal is to find an efficient representation f_i of each x_i such that ‖f_i − f_j‖ reflects the neighborhood relationship between x_i and x_j: if ‖f_i − f_j‖ is small, then x_i and x_j belong to the same class. Here, we assume that the images reside on a sub-manifold embedded in the ambient space R^n.

LPP seeks a linear transformation P that projects high-dimensional data into a low-dimensional sub-manifold preserving the local structure of the data. Let X = [x_1, x_2, ..., x_m] denote the feature matrix whose columns are the sample feature vectors in R^n. The linear transformation P can be obtained by solving the following minimization problem:

  min_P Σ_{ij} (P^T x_i − P^T x_j)² B_ij,   (1)

where B_ij evaluates the local structure of the image space. In this paper, we use the normalized correlation coefficient of two samples as the penalty weight if the two samples belong to the same class:

  B_ij = x_i^T x_j / ( √(Σ_{l=1}^{n} x_il²) · √(Σ_{l=1}^{n} x_jl²) )  if samples i and j are in the same class,
  B_ij = 0  otherwise.   (2)

By simple algebra, the objective function can be reduced to

  Σ_{ij} (P^T x_i − P^T x_j)² B_ij = Σ_i P^T x_i D_ii x_i^T P − Σ_{ij} P^T x_i B_ij x_j^T P = P^T X (D − B) X^T P = P^T X L X^T P,   (3)

where D is a diagonal matrix whose entries are the column (or row, since B is symmetric) sums of B, D_ii = Σ_j B_ij, and L = D − B is the Laplacian matrix. The linear transformation P is then obtained by minimizing the objective function under a constraint:

  P = arg min_P P^T X (D − B) X^T P  subject to  P^T X D X^T P = 1.   (4)

Finally, the minimization problem can be converted into the following generalized eigenvalue problem:

  X L X^T P = λ X D X^T P.   (5)
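A compact way to read Eqs. (1)-(5) is as a generalized eigenvalue problem built from the label-aware weight matrix B. The sketch below, assuming class labels are available for all training vectors, builds B, D, and L with NumPy and solves Eq. (5) with scipy.linalg.eigh; the names (fit_slpp, n_components), the zeroed diagonal of B, and the small ridge added to the constraint matrix are assumptions for numerical convenience, not details from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def fit_slpp(X, labels, n_components=3):
    """X: (n, m) matrix whose columns are training feature vectors; labels: (m,) class ids.
    Returns P with n_components columns: the smallest generalized eigenvectors of Eq. (5)."""
    n, m = X.shape
    norms = np.linalg.norm(X, axis=0)                  # ||x_i|| for the normalized correlation
    B = (X.T @ X) / np.outer(norms, norms)             # cosine similarity between samples
    B *= (labels[:, None] == labels[None, :])          # keep weights only within the same class (Eq. 2)
    np.fill_diagonal(B, 0.0)                           # no self-weight (assumption)

    D = np.diag(B.sum(axis=1))                         # row sums of B
    L = D - B                                          # graph Laplacian

    A = X @ L @ X.T                                    # left-hand matrix of Eq. (5)
    M = X @ D @ X.T                                    # right-hand (constraint) matrix
    M += 1e-8 * np.eye(n)                              # tiny ridge so M is positive definite (assumption)

    eigvals, eigvecs = eigh(A, M)                      # generalized symmetric eigenproblem, ascending
    return eigvecs[:, :n_components]                   # eigenvectors with the smallest eigenvalues

def project(P, X):
    """Project feature vectors (columns of X) into the SLPP subspace: f = P^T x."""
    return P.T @ X
```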
3 Probabilistic Neural Network (PNN)

The PNN model is based on Parzen's results on probability density function (PDF) estimators [5]. A PNN is a three-layer feedforward network consisting of an input layer, a pattern layer, and a summation (output) layer, as shown in Fig. 2. We wish to form a Parzen estimate based on K patterns, each of which is n-dimensional, randomly sampled from c classes. The PNN for this case consists of n input units comprising the input layer, where each input unit is connected to each of the K pattern units, and each pattern unit is connected to one and only one of the c category units. The connections from the input units to the pattern units represent modifiable weights, which will be trained. Each category unit computes the sum of the pattern units connected to it. A radial basis function with a Gaussian activation is used for the pattern nodes.

Fig. 2. PNN architecture.

The PNN is trained in the following way. First, each pattern (sample feature) f of the training set is normalized to have unit length. The first normalized training pattern is placed on the input units, and the modifiable weights linking the input units and the first pattern unit are set such that w_1 = f_1. Then a single connection is made from the first pattern unit to the category unit corresponding to the known class of that pattern. The process is repeated with each of the remaining training patterns, setting the weights of the successive pattern units such that w_k = f_k for k = 1, 2, ..., K. After such training we have a network that is fully connected between input and pattern units, and sparsely connected from pattern to category units.

The trained network is then used for segmentation and categorization in the following way. A normalized test pattern f is placed at the input units. Each pattern unit computes the inner product to yield the net activation

  y_k = w_k^T f,   (6)

and emits a nonlinear function of y_k; each output unit sums the contributions from all pattern units connected to it. The activation function used is exp(−‖f − w_k‖² / (2δ²)); assuming that both f and w_k are normalized to unit length, this is equivalent to using exp((y_k − 1) / δ²).

4 Experimental Results

The proposed method has been applied to the classification of IKONOS images. IKONOS simultaneously collects one-meter resolution black-and-white (panchromatic) images and four-meter resolution color (multi-spectral) images. The multi-spectral images consist of four bands in the blue (B), green (G), red (R), and near-infrared wavelength regions, and they can be merged with panchromatic images of the same locations to produce "pan-sharpened color" images of 1-m resolution. In our experiments, we use only the RGB spectral images for region segmentation and categorization.

Two typical IKONOS color images, shown in Fig. 4(a) and Fig. 5(a), are used in our experiments. The image in Fig. 4(a) is used as the sample image for learning, and the image in Fig. 5(a) is used as the test image. We define five categories: sea, forest, ground, road, and others. We first randomly selected 100 points from each category. In order to retain some texture information, we use a 3x3 sub-block surrounding each selected point, and the RGB values of the sub-block form the spectral feature vector x with a dimension of 27. The vectors x_i (i = 1, 2, ..., 5 x 100) are used to learn the SLPP subspace for feature extraction and to train the probabilistic neural network for region segmentation and categorization. The learning and training process is shown in Fig. 3. It is a two-step process: we first use x to learn the SLPP subspace P; the projections f = P^T x are then used as inputs for training the PNN, and they also serve as the features for region segmentation and categorization.

Fig. 3. Two-step learning process.

Once the SLPP subspace and PNN are trained, they are used for feature extraction and region segmentation, respectively. The segmentation and categorization process is as shown in Fig. 1. The feature vector x (27 x 1) of each pixel is first projected into the SLPP subspace, and the projection is input to the trained PNN. The output of the PNN is the index of the category; thus the satellite image can be segmented into regions and each region categorized. The region segmentation and categorization results for the sample image (Fig. 4(a)) are shown in Figs. 4(b)-4(f), and the results for the test image (Fig. 5(a)) are shown in Figs. 5(b)-5(f). We obtain a satisfactory segmentation result for the sample image (Fig. 4), while for the test image (Fig. 5) the result is less satisfactory; for example, a part of the sea was categorized into the forest region, as shown in Fig. 5(c). Since only one image is used as the training sample in our experiments, the generalization of the PNN is limited. The segmentation and categorization accuracy would be improved by increasing the number of sample images.

Fig. 4. Region segmentation and categorization results (sample image). (IKONOS image: Copyright (C) 2003 Japan Space Imaging Corporation)

Fig. 5. Region segmentation and categorization results (test image). (IKONOS image: Copyright (C) 2003 Japan Space Imaging Corporation)
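The training and recall procedure of Sect. 3 amounts to storing the normalized training features as weight vectors and summing Gaussian activations per class. Below is a small NumPy sketch of that procedure, written for SLPP-projected features of the 27-dimensional patch vectors described above; the function names, the smoothing value delta, and the usage comments are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def pnn_train(F_train):
    """F_train: (d, K) matrix of training features (columns); returns unit-length weight vectors."""
    return F_train / np.linalg.norm(F_train, axis=0, keepdims=True)   # w_k = f_k / ||f_k||

def pnn_classify(W, train_labels, f, n_classes, delta=0.1):
    """W: (d, K) stored patterns, train_labels: (K,) integer class ids, f: (d,) test feature."""
    f = f / np.linalg.norm(f)                           # normalize the test pattern
    y = W.T @ f                                         # net activations y_k = w_k^T f (Eq. 6)
    g = np.exp((y - 1.0) / delta**2)                    # Gaussian activation for unit-length vectors
    scores = np.bincount(train_labels, weights=g, minlength=n_classes)  # category units sum their patterns
    return int(np.argmax(scores))                       # index of the winning category

# Illustrative use with the setup of Sect. 4 (5 categories, 100 points each, 27-dim patch
# vectors projected by the SLPP sketch above; shapes are assumptions):
# W = pnn_train(project(P, X_train))                            # X_train: (27, 500)
# label = pnn_classify(W, train_labels, project(P, x_pixel).ravel(), n_classes=5)
```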
5 Conclusions

We have proposed a new approach based on supervised locality preserving projections (SLPP) for region segmentation and categorization in high-resolution satellite images. The observed multi-spectral images are first transformed by SLPP, and the transformed spectral components are then used as features for classification; a probabilistic neural network (PNN) is used as the classifier. In this paper, we used only one image as the training sample for SLPP subspace learning and classifier (PNN) training, and we have shown that it is possible to segment and categorize other satellite images by using the trained SLPP subspace and PNN.

Acknowledgments. This work was supported in part by the Strategic Information and Communications R&D Promotion Program (SCOPE) under Grant No. 072311002.

References

1. Avery, T.E., Berlin, G.L.: Fundamentals of Remote Sensing and Airphoto Interpretation. Macmillan Publishing Co., New York (1992)
2. Murai, H., Omatsu, S., Oe, S.: Principal Component Analysis for Remotely Sensed Data Classified by Kohonen's Feature Mapping Preprocessor and Multi-Layered Neural Network Classifier. IEICE Trans. Commun. E78-B(12), 1604–1610 (1995)
3. Zeng, X.-Y., Chen, Y.-W., Nakao, Z.: Classification of remotely sensed images using independent component analysis and spatial consistency. Journal of Advanced Computational Intelligence and Intelligent Informatics 8, 216–222 (2004)
4. He, X., Niyogi, P.: Locality Preserving Projections. In: Advances in Neural Information Processing Systems, Vancouver, Canada, vol. 16 (2003)
5. Specht, D.F.: Enhancements to Probabilistic Neural Networks. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 1992), vol. 1, pp. 761–768 (1992)
6. Chen, Y.-W., Han, X.-H.: Classification of High-Resolution Satellite Images Using Supervised Locality Preserving Projections. In: Lovrek, I., Howlett, R.J., Jain, L.C. (eds.) KES 2008, Part II. LNCS (LNAI), vol. 5178, pp. 149–156. Springer, Heidelberg (2008)


Author Index

Alleysson, David 12
Angulo, Jesús 91
Bakke, Arne Magnus 160
Balinsky, Alexander 101
Battiato, Sebastiano 62, 130
Benoit, Alexandre 12
Bianco, Simone 31
Bochko, Vladimir 120
Bosco, Angelo 130
Bruna, Arcangelo 130
Cabestaing, François 170
Chao, Jinhui 140
Chen, Yen-Wei 71, 226
Ciocca, Gianluigi 31
Cusano, Claudio 31
Doi, Motonori 188
Goda, Naokazu 1, 23
Guarnera, Mirko 180
Halawana, Hachem 170
Han, Xian-hua 226
Hardeberg, Jon Yngve 81
Herault, Jeanny 12
Hikima, Rie 188
Horiuchi, Takahiko 216
Huang, Xinyin 71
Ibrahim, Abdelhameed 216
Kamata, Sei-ichro 207
Kekäläinen, Jukka 198
Koida, Kowa 23
Komatsu, Hidehiko 1, 23
Le Callet, Patrick 12
Lecca, Michela 41
Lenz, Reiner 140
Macaire, Ludovic 170
Martinkauppi, J. Birgitta 198
Meccio, Tony 180
Messelodi, Stefano 41
Messina, Giuseppe 180
Mochizuki, Rika 140
Mohammad, Nassir 101
Mourad, Safer 150
Ohshima, Satoshi 140
Ohtsuki, Rie 188
Parkkinen, Jussi 120, 198
Pedersen, Marius 81
Pipirigeanu, Adrian 120
Provenzi, Edoardo 109
Ravì, Daniele 180
Rizzo, Rosetta 130
Rundo, Francesco 62
Scheller Lichtenauer, Matthias 150
Shatilova, Yevgeniya 198
Simon, Klaus 150
Sobue, Shota 71
Stanco, Filippo 62
Tajima, Johji 51
Tanno, Osamu 188
Thomas, Jean-Baptiste 160
Tominaga, Shoji 188, 216
Zhang, Jian 207
Zolliker, Peter 150

