
Signal Processing Concept Similarities among Sonar, Radar, and Medical Imaging Systems





Stergiopoulos, Stergios. "Frontmatter." Advanced Signal Processing Handbook. Editor: Stergios Stergiopoulos. Boca Raton: CRC Press LLC, 2001.

Library of Congress Cataloging-in-Publication Data

Advanced signal processing handbook : theory and implementation for radar, sonar, and medical imaging real-time systems / edited by Stergios Stergiopoulos.
p. cm. — (Electrical engineering and signal processing series)
Includes bibliographical references and index.
ISBN 0-8493-3691-0 (alk. paper)
1. Signal processing—Digital techniques. 2. Diagnostic imaging—Digital techniques. 3. Image processing—Digital techniques. I. Stergiopoulos, Stergios. II. Series.
TK5102.9 .A383 2000
621.382′2—dc21    00-045432 CIP

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher.

All rights reserved. Authorization to photocopy items for internal or personal use, or the personal or internal use of specific clients, may be granted by CRC Press LLC, provided that $.50 per page photocopied is paid directly to Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923 USA. The fee code for users of the Transactional Reporting Service is ISBN 0-8493-3691-0/01/$0.00+$.50. The fee is subject to change without notice. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. The consent of CRC Press LLC does not extend to copying for general
distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.

© 2001 by CRC Press LLC
No claim to original U.S. Government works
International Standard Book Number 0-8493-3691-0
Library of Congress Card Number 00-045432
Printed in the United States of America
Printed on acid-free paper

Preface

Recent advances in digital signal processing algorithms and computer technology have combined to provide the ability to produce real-time systems that have capabilities far exceeding those of a few years ago. The writing of this handbook was prompted by a desire to bring together some of the recent theoretical developments on advanced signal processing, and to provide a glimpse of how modern technology can be applied to the development of current and next-generation active and passive real-time systems. The handbook is intended to serve as an introduction to the principles and applications of advanced signal processing. It will focus on the development of a generic processing structure that exploits the great degree of processing concept similarities existing among radar, sonar, and medical imaging systems.

A high-level view of the above real-time systems consists of a high-speed Signal Processor to provide mainstream signal processing for detection and initial parameter estimation, a Data Manager which supports the data and information processing functionality of the system, and a Display Sub-System through which the system operator can interact with the data structures in the data manager to make the most effective use of the resources at his command. The Signal Processor normally incorporates a few fundamental operations. For example,
the sonar and radar signal processors include beamforming, "matched" filtering, data normalization, and image processing. The first two processes are used to improve both the signal-to-noise ratio (SNR) and the parameter estimation capability through spatial and temporal processing techniques. Data normalization is required to map the resulting data into the dynamic range of the display devices in a manner which provides a CFAR (constant false alarm rate) capability across the analysis cells.

The processing algorithms for spatial and temporal spectral analysis in real-time systems are based on conventional FFT and vector dot product operations, because they are computationally cheaper and more robust than the modern non-linear high-resolution adaptive methods. However, these non-linear algorithms trade robustness for improved array gain performance. Thus, the challenge is to develop a concept which allows an appropriate mixture of these algorithms to be implemented in practical real-time systems. The non-linear processing schemes are adaptive and synthetic aperture beamformers that have been shown experimentally to provide improvements in array gain for signals embedded in partially correlated noise fields. Using system image outputs, target tracking, and localization results as performance criteria, the impact and merits of these techniques are contrasted with those obtained using the conventional processing schemes. The reported real-data results show that the advanced processing schemes provide improvements in array gain for signals embedded in anisotropic noise fields. However, the same set of results demonstrates that these processing schemes are not adequate enough to be considered as a replacement for conventional processing. This restriction adds an additional element to our generic signal processing structure, in that the conventional and the advanced signal processing schemes should run in parallel in a real-time system in order to achieve optimum use of the advanced
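As a concrete illustration of the conventional FFT-based spatial processing described above, here is a minimal delay-and-sum beamformer sketch for a uniform line array. It is numpy-based and purely illustrative; the function name, array geometry, and parameter values are assumptions of this sketch, not material from the handbook.

```python
import numpy as np

def delay_and_sum(sensor_data, fs, spacing, speed, angle_deg):
    """Conventional delay-and-sum beamformer for a uniform line array.
    sensor_data: (n_sensors, n_samples) array; broadside is 0 degrees."""
    n_sensors, n_samples = sensor_data.shape
    # Per-sensor plane-wave delay for the steering direction
    tau = spacing * np.sin(np.deg2rad(angle_deg)) / speed
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(len(freqs), dtype=complex)
    for m in range(n_sensors):
        # Advance sensor m by m*tau (a linear phase in frequency) to align
        out += np.fft.rfft(sensor_data[m]) * np.exp(2j * np.pi * freqs * m * tau)
    return np.fft.irfft(out, n=n_samples) / n_sensors
```

Steering to the true arrival angle sums the sensor outputs coherently, which is the spatial-processing SNR gain the text refers to; mis-steered directions sum incoherently and are attenuated.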
signal processing schemes of this study.

The handbook also includes a generic concept for successfully implementing adaptive schemes with near-instantaneous convergence in 2-dimensional (2-D) and 3-dimensional (3-D) arrays of sensors, such as planar, circular, cylindrical, and spherical arrays. It will be shown that the basic step is to minimize the number of degrees of freedom associated with the adaptation process. This step will minimize the adaptive scheme's convergence period and achieve near-instantaneous convergence for integrated active and passive sonar applications. The reported results are part of a major research project, which includes the definition of a generic signal processing structure that allows the implementation of adaptive and synthetic aperture signal processing schemes in real-time radar, sonar, and medical tomography (CT, MRI, ultrasound) systems that have 2-D and 3-D arrays of sensors.

The material in the handbook will bridge a number of related fields: detection and estimation theory; filter theory (Finite Impulse Response Filters); 1-D, 2-D, and 3-D sensor array processing that includes conventional, adaptive, and synthetic aperture beamforming and imaging; spatial and temporal spectral analysis; and data normalization. Emphasis will be placed on topics that have been found to be particularly useful in practice. These are several interrelated topics of interest, such as the influence of the medium on array gain system performance, detection and estimation theory, filter theory, space-time processing, conventional and adaptive processing, and model-based signal processing concepts. Moreover, the system concept similarities between sonar and ultrasound problems are identified in order to exploit the use of advanced sonar and model-based signal processing concepts in ultrasound systems. Furthermore, issues of information post-processing functionality supported by the Data Manager and the Display units of the real-time systems of interest are
addressed in the relevant chapters that discuss normalizers, target tracking, target motion analysis, image post-processing, and volume visualization methods.

The presentation of the subject matter has been influenced by the authors' practical experiences, and it is hoped that the volume will be useful to scientists and system engineers as a textbook for a graduate course on sonar, radar, and medical imaging digital signal processing. In particular, a number of chapters summarize the state-of-the-art application of advanced processing concepts in sonar, radar, and medical imaging: X-ray CT scanners, magnetic resonance imaging, and 2-D and 3-D ultrasound systems. The focus of these chapters is to point out their applicability, benefits, and potential in the sonar, radar, and medical environments.

Although an all-encompassing general approach to a subject is mathematically elegant, practical insight and understanding may be sacrificed. To avoid this problem and to keep the handbook to a reasonable size, only a modest introduction is provided. In consequence, the reader is expected to be familiar with the basics of linear and sampled systems and the principles of probability theory. Furthermore, since modern real-time systems entail sampled signals that are digitized at the sensor level, our signals are assumed to be discrete in time and the subsystems that perform the processing are assumed to be digital.

It has been a pleasure for me to edit this book and to have the relevant technical exchanges with so many experts on advanced signal processing. I take this opportunity to thank all the authors for their responses to my invitation to contribute. I am also grateful to CRC Press LLC, and in particular to Bob Stern, Helena Redshaw, Naomi Lynch, and the staff in the production department, for their truly professional cooperation. Finally, the support of the European Commission is acknowledged for awarding Professor Uzunoglu and myself the Fourier Euroworkshop Grant (HPCF-1999-00034) to
organize two workshops that enabled the contributing authors to refine and coherently integrate the material of their chapters as a handbook on advanced signal processing for sonar, radar, and medical imaging system applications.

Stergios Stergiopoulos

Editor

Stergios Stergiopoulos received a B.Sc. degree from the University of Athens in 1976 and the M.S. and Ph.D. degrees in geophysics in 1977 and 1982, respectively, from York University, Toronto, Canada. Presently he is an Adjunct Professor at the Department of Electrical and Computer Engineering of the University of Western Ontario and a Senior Defence Scientist at the Defence and Civil Institute of Environmental Medicine (DCIEM) of the Canadian DND. Prior to this assignment, from 1988 to 1991, he was with the SACLANT Centre in La Spezia, Italy, where he performed both theoretical and experimental research in sonar signal processing. At SACLANTCEN, he developed jointly with Dr. Sullivan from NUWC an acoustic synthetic aperture technique that has been patented by the U.S. Navy and the Hellenic Navy. From 1984 to 1988 he developed an underwater fixed array surveillance system for the Hellenic Navy in Greece, and there he was appointed senior advisor to the Greek Minister of Defence. From 1982 to 1984 he worked as a research associate at York University, in collaboration with the U.S. Army Ballistic Research Lab (BRL), Aberdeen, MD, on projects related to the stability of liquid-filled spin-stabilized projectiles. In 1984 he was awarded a U.S. NRC Research Fellowship for BRL. He was Associate Editor for the IEEE Journal of Oceanic Engineering and has prepared two special issues on Acoustic Synthetic Aperture and Sonar System Technology. His present interests are associated with the implementation of non-conventional processing schemes in multi-dimensional arrays of sensors for sonar and medical tomography (CT, MRI, ultrasound) systems. His research activities are supported by Canadian DND Grants, by Research
and Strategic Grants (NSERC-CANADA) ($300K), and by a NATO Collaborative Research Grant. Recently he has been awarded European Commission ESPRIT/IST Grants as technical manager of two projects entitled "New Roentgen" and "MITTUG." Dr. Stergiopoulos is a Fellow of the Acoustical Society of America and a senior member of the IEEE. He has been a consultant to a number of companies, including Atlas Elektronik in Germany, Hellenic Arms Industry, and Hellenic Aerospace Industry.

Contributors

Dimos Baltas
Konstantinos K. Delibasis
Simon Haykin
Department of Medical Physics and Engineering, Strahlenklinik, Städtische Kliniken Offenbach, Offenbach, Germany
Institute of Communication and Computer Systems, National Technical University of Athens, Athens, Greece
Communications Research Laboratory, McMaster University, Hamilton, Ontario, Canada
Institute of Communication and Computer Systems, National Technical University of Athens, Athens, Greece
Amar Dhanantwari
Klaus Becker
FGAN Research Institute for Communication, Information Processing, and Ergonomics (FKIE), Wachtberg, Germany
James V. Candy
Lawrence Livermore National Laboratory, University of California, Livermore, California, U.S.A.
G. Clifford Carter
Naval Undersea Warfare Center, Newport, Rhode Island, U.S.A.
N. Ross Chapman
Defence and Civil Institute of Environmental Medicine, Toronto, Ontario, Canada
Reza M. Dizaji
School of Earth and Ocean Sciences, University of Victoria, Victoria, British Columbia, Canada
Donal B. Downey
The John P. Robarts Research Institute, University of Western Ontario, London, Ontario, Canada
Geoffrey Edelson
Advanced Systems and Technology, Sanders, A Lockheed Martin Company, Nashua, New Hampshire, U.S.A.
Aaron Fenster
School of Earth and Ocean Sciences, University of Victoria, Victoria, British Columbia, Canada
The John P. Robarts Research Institute, University of Western Ontario, London, Ontario, Canada
Ian Cunningham
Dimitris Hatzinakos
The John P. Robarts Research Institute, University of Western
Ontario, London, Ontario, Canada
Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
Grigorios Karangelis
Department of Cognitive Computing and Medical Imaging, Fraunhofer Institute for Computer Graphics, Darmstadt, Germany
R. Lynn Kirlin
School of Earth and Ocean Sciences, University of Victoria, Victoria, British Columbia, Canada
Wolfgang Koch
FGAN Research Institute for Communication, Information Processing, and Ergonomics (FKIE), Wachtberg, Germany
Christos Kolotas
Department of Medical Physics and Engineering, Strahlenklinik, Städtische Kliniken Offenbach, Offenbach, Germany
Harry E. Martz, Jr.
Lawrence Livermore National Laboratory, University of California, Livermore, California, U.S.A.
George K. Matsopoulos
Arnulf Oppelt
Daniel J. Schneberk
Institute of Communication and Computer Systems, National Technical University of Athens, Athens, Greece
Siemens Medical Engineering Group, Erlangen, Germany
Lawrence Livermore National Laboratory, University of California, Livermore, California, U.S.A.
Charles A. McKenzie
Cardiovascular Division, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, U.S.A.
Bernard E. McTaggart
Naval Undersea Warfare Center (retired), Newport, Rhode Island, U.S.A.
Sanjay K. Mehta
Naval Undersea Warfare Center, Newport, Rhode Island, U.S.A.
Natasa Milickovic
Kostantinos N. Plataniotis
Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
Andreas Pommert
Department of Electrical and Computer Engineering, University of Western Ontario, London, Ontario, Canada
Frank S. Prato
Edmund J. Sullivan
Lawson Research Institute and Department of Medical Biophysics, University of Western Ontario, London, Ontario, Canada
John M. Reid
Gerald R. Moran
Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, U.S.A.
Nikolaos A. Mouravliansky
Institute of Communication and Computer Systems, National Technical University of Athens, Athens,
Greece
Defence and Civil Institute of Environmental Medicine, Toronto, Ontario, Canada
Institute of Mathematics and Computer Science in Medicine, University Hospital Eppendorf, Hamburg, Germany
Department of Medical Physics and Engineering, Strahlenklinik, Städtische Kliniken Offenbach, Offenbach, Germany
Lawson Research Institute and Department of Medical Biophysics, University of Western Ontario, London, Ontario, Canada
Stergios Stergiopoulos
Department of Biomedical Engineering, Drexel University, Philadelphia, Pennsylvania, U.S.A.
Department of Bioengineering, University of Washington, Seattle, Washington, U.S.A.
Georgios Sakas
Department of Cognitive Computing and Medical Imaging, Fraunhofer Institute for Computer Graphics, Darmstadt, Germany
Naval Undersea Warfare Center, Newport, Rhode Island, U.S.A.
Rebecca E. Thornhill
Lawson Research Institute and Department of Medical Biophysics, University of Western Ontario, London, Ontario, Canada
Nikolaos Uzunoglu
Department of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
Nikolaos Zamboglou
Department of Medical Physics and Engineering, Strahlenklinik, Städtische Kliniken Offenbach, Offenbach, Germany
Institute of Communication and Computer Systems, National Technical University of Athens, Athens, Greece

Dedication

To my lifelong companion Vicky, my son Steve, and my daughter Erene.

Contents

1. Signal Processing Concept Similarities among Sonar, Radar, and Medical Imaging Systems (Stergios Stergiopoulos)
   1.1 Introduction
   1.2 Overview of a Real-Time System
   1.3 Signal Processor
   1.4 Data Manager and Display Sub-System

SECTION I: Adaptive Systems for Signal Processing

2. General Topics on Signal Processing (Simon Haykin)
   2.1 The Filtering Problem
   2.2 Adaptive Filters
   2.3 Linear Filter Structures
   2.4 Approaches to the Development of Linear Adaptive Filtering Algorithms
   2.5 Real and Complex Forms of Adaptive Filters
   2.6 Nonlinear Adaptive Systems: Neural Networks
   2.7 Applications
   2.8 Concluding Remarks

3. Gaussian Mixtures and Their Applications to Signal Processing (Kostantinos N. Plataniotis and Dimitris Hatzinakos)
   3.1 Introduction
   3.2 Mathematical Aspects of Gaussian Mixtures
   3.3 Methodologies for Mixture Parameter Estimation
   3.4 Computer Generation of Mixture Variables
   3.5 Mixture Applications
   3.6 Concluding Remarks

4. Matched Field Processing — A Blind System Identification Technique (N. Ross Chapman, Reza M. Dizaji, and R. Lynn Kirlin)
   4.1 Introduction
   4.2 Blind System Identification
   4.3 Cross-Relation Matched Field Processor
   4.4 Time-Frequency Matched Field Processor
   4.5 Higher Order Matched Field Processors
   4.6 Simulation and Experimental Examples

5. Model-Based Ocean Acoustic Signal Processing (James V. Candy and Edmund J. Sullivan)
   5.1 Introduction
   5.2 Model-Based Processing
   5.3 State-Space Ocean Acoustic Forward Propagators
   5.4 Ocean Acoustic Model-Based Processing Applications
   5.5 Summary

6. Advanced Beamformers (Stergios Stergiopoulos)
   6.1 Introduction
   6.2 Background
   6.3 Theoretical Remarks
   6.4 Optimum Estimators for Array Signal Processing
   6.5 Advanced Beamformers
   6.6 Implementation Considerations
   6.7 Concept Demonstration: Simulations and Experimental Results
   6.8 Conclusion

7. Advanced Applications of Volume Visualization Methods in Medicine (Georgios Sakas, Grigorios Karangelis, and Andreas Pommert)
   7.1 Volume Visualization Principles
   7.2 Applications to Medical Data
   Appendix: Principles of Image Processing: Pixel Brightness Transformations, Image Filtering and Image Restoration

8. Target Tracking (Wolfgang Koch)
   8.1 Introduction
   8.2 Discussion of the Problem
   8.3 Statistical Models
   8.4 Bayesian Track Maintenance
   8.5 Suboptimal Realization
   8.6 Selected Applications

9. Target Motion Analysis (TMA) (Klaus Becker)
   9.1 Introduction
   9.2 Features of the TMA Problem
   9.3 Solution of the TMA Problem
   9.4 Conclusion

FIGURE 19.5 Accuracy of the automatic registration: (a) superposition of an area of the registered FL image on the
corresponding RF image; (b) superposition of an area of the registered ICG image on the corresponding RF image; (c) a registered pathological FL image with three boundaries traced by an expert; and (d) superposition of the boundaries on the corresponding RF image.

MR surface point coordinates to compensate for the difference of voxel size between the two modalities. The final step of the preprocessing consisted of the production of the distance map from the CT surface, DM(CT). The distance map is a discrete space in which each voxel holds a value equal to its Euclidean distance from the closest node of the CT skin surface.14 The distance map accelerates the process of matching two surfaces consisting of N nodes each, since it reduces the problem's complexity from O(N²) to O(N). After preprocessing of the CT and MR data, the registration proceeded as an optimization of the function of MOM over the parameters of the selected geometric transformation model. The MOM was defined as the average Euclidean distance between the CT and MR skin surfaces.72

FIGURE 19.6 Magnified areas of the CT bone contours (outer and inner) superimposed on the corresponding MR slice for two image pairs (a and b). The three images on each row, from left to right, correspond to the automatic elastic refinement method, the GAs in conjunction with the affine transformation surface-based method, and the affine-based manual method.

Three registration methods had been assessed: (1) as the surface-based method, GAs in conjunction with the affine transformation; (2) as the automatic registration method, GAs and affine combined with the proposed elastic deformation method (TPS model), also called the elastic registration refinement method; and (3) as a manual method, the affine transformation in conjunction with a number of markers.72 The accuracy of the medical registration methods is visually demonstrated in Figure 19.6, where CT skull contours are superimposed on magnified areas of
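The distance-map idea described above can be sketched in a few lines. This is an illustrative numpy toy on a small 2-D grid (the names `distance_map` and `mean_surface_distance` are mine, and a brute-force transform is used for clarity; a real system would use a fast distance transform). Once DM(CT) is precomputed, scoring a candidate surface costs one lookup per node, which is the O(N²) to O(N) reduction the text mentions.

```python
import numpy as np

def distance_map(shape, surface_points):
    """Precompute, for every grid cell, the Euclidean distance to the
    nearest surface point (a small 2-D stand-in for DM(CT))."""
    ys, xs = np.indices(shape)
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    pts = np.asarray(surface_points, dtype=float)
    # (n_cells, n_points) pairwise distances; fine at this toy scale
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1).reshape(shape)

def mean_surface_distance(dmap, candidate_points):
    """MOM-style score: average distance of a candidate surface to the
    reference surface, one O(1) lookup per node instead of a search."""
    pts = np.asarray(candidate_points)
    return float(dmap[pts[:, 0], pts[:, 1]].mean())
```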
the corresponding MR transverse sections for two CT-MR image pairs (Figures 19.6a and 19.6b). The application of the GAs-affine method and the application of the affine-based manual method result in slight inaccuracies in the placement of the CT skull contours, which often invade the brain and the zygomatic bone in the vicinity of the eye, and which also fail to locate accurately the superior sagittal sinus. These inaccuracies were corrected by the combination of the GAs-affine method and the elastic registration refinement method described in the above section.

The performance of the two automatic registration methods against the manual one, in terms of the MOM, is quantitatively assessed for all CT-MR image pairs, as shown in Figure 19.7. The averaged skin surface distance (MOM) for the seven pairs of CT and MR brain data was obtained in millimeters. The values of MOM for both automatic registration methods were averaged over ten independent executions for all image pairs to compensate for the stochastic (randomized) nature of the optimization method, whereas for the manual method the values of MOM were averaged over three trials for each pair. It can be observed from Figure 19.7 that the values of the surface distance of the elastic registration refinement method were systematically lower than those of the other automatic registration method, which uses the affine transformation with GAs only. These results clearly show a definite registration refinement for all pairs of CT-MR data, due to the application of the elastic deformation method. Furthermore, it can be observed that the manual registration method based on the affine transformation performed significantly worse than the two automatic methods. It is also evident that the performance of the manual method was strongly influenced by the number of markers placed.

FIGURE 19.7 Performance of (a) the automatic elastic-based refinement method, (b) the automatic GAs with affine surface-based method, and (c)
the affine-based manual method, in terms of surface distance (MOM), in millimeters, for all CT and MR image pairs.

19.3 Medical Image Fusion

Imaging the same parts of the human anatomy with different modalities, or with the same modality at different times, provides the expert with a great amount of information which must be combined in order to become diagnostically useful. Medical image fusion is a process that combines information from different images and displays it to the expert so that its diagnostic value is maximized. Although much attention has been drawn to the process of medical image registration, medical image fusion, which presupposes image registration, is not extensively explored in the literature, mainly because it is considered a straightforward step. A generic approach to medical image fusion can be seen in Figure 19.8, where all the necessary steps are highlighted. In this section, the concept of the medical image fusion process, as well as the most representative techniques reported, will be revised and broadened to include several methods to combine diagnostic information.

19.3.1 Fusion at the Image Level

19.3.1.1 Fusion Using Logical Operators

This technique of fusing information from two images is twofold. The reference image, which is not processed, accommodates a segmented region of interest from the second, registered image. The simplest way to combine information from the two images is by using a logical operator, such as the XOR operator, according to the following equation:

I(x, y) = I_A(x, y)(1 − M(x, y)) + I_B(x, y)M(x, y)    (19.25)

where M(x, y) is a Boolean mask that marks with 1s every pixel which is copied from image B to the fused image I(x, y). Moreover, in certain cases it is desirable to simply delineate the outline of the object of interest from the registered image and to position it in the coordinate system of the reference image. An example of this technique is the extraction of the
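The masked combination of Equation 19.25 reduces to a one-line arithmetic blend once the mask is 0/1. A minimal numpy sketch (the function name is illustrative, not from the chapter):

```python
import numpy as np

def fuse_with_mask(img_a, img_b, mask):
    """Equation 19.25: copy the masked region of registered image B into
    reference image A; mask is Boolean (1 where B's pixels are used)."""
    mask = mask.astype(img_a.dtype)
    return img_a * (1 - mask) + img_b * mask
```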
boundary of an object of interest, such as a tumor, from an MR image and its overlay on the coordinate system of the registered CT image of the same patient.

FIGURE 19.8 Generic medical image fusion scheme.

FIGURE 19.9 Fused images using the XOR operator: CT information (bone structures) (a) and (b) superimposed on two registered MR transverse slices.

Figure 19.9 demonstrates the fusion technique using the logical operators on the same patient data, where CT information (bone structures) is superimposed on two registered MR transverse slices.

19.3.1.2 Fusion Using a Pseudocolor Map

According to this fusion technique, the registered image is rendered using a pseudocolor scale and is transparently overlaid on the reference image.73 There are a number of pseudocolor maps available, defined algorithmically or by using psychophysiologic criteria. A pseudocolor map is defined as a correspondence of an (R, G, B) triplet to each distinct pixel value. Usually, the pixel value ranges from 0 to 255, and each of the elements of the triplet varies in the same range, thus producing a true color effect. The color map is called "pseudo" because only one value is acquired for each pixel during the acquisition of the image data.

Two of the pseudocolor maps that are defined by psychophysiologic criteria are the geographic color map and the hot body color map. The (R, G, B) triplet values are defined as a function of the original pixel value according to the following equation:

(R, G, B) = (R(pixel_value), G(pixel_value), B(pixel_value))    (19.26)

The RGB Cube color map is a positional color map, which maps the original values to a set of colors that are determined by traversing the following edges of a cube:

B(0, 0, 1) → (1, 0, 1) → R(1, 0, 0) → (1, 1, 0) → G(0, 1, 0) → (0, 1, 1) → B(0, 0, 1)    (19.27)

This configuration sets up six color ranges, each of which has N/6 steps, where N is the number of colors (distinct pixel values) of the initial image. If N is
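A minimal sketch of the RGB Cube traversal of Equation 19.27, assuming numpy and an illustrative function name: it interpolates linearly along the six cube edges (blue, magenta, red, yellow, green, cyan, and back to blue), and any remainder of N modulo 6 falls on the last edge.

```python
import numpy as np

def rgb_cube_colormap(n):
    """Pseudocolor map traversing the six RGB-cube edges of
    Equation 19.27: B -> magenta -> R -> yellow -> G -> cyan -> B."""
    corners = np.array([
        [0, 0, 1], [1, 0, 1], [1, 0, 0],
        [1, 1, 0], [0, 1, 0], [0, 1, 1], [0, 0, 1],
    ], dtype=float)
    # n pixel values spread over the 6 edges of the traversal
    t = np.linspace(0.0, 6.0, n, endpoint=False)
    edge = np.minimum(t.astype(int), 5)
    frac = (t - edge)[:, None]
    return corners[edge] * (1 - frac) + corners[edge + 1] * frac
```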
not a factor of 6, the difference is made up in the sixth range. The CIE Diagram color map follows the same philosophy, traversing a triangle on the RGB Cube in the following manner:

R(1, 0, 0) → G(0, 1, 0) → B(0, 0, 1)    (19.28)

Another useful color map for biomedical images is the Gray Code. This color map is constructed by a three-bit gray code, where each successive value differs by a single bit. A number of additional color maps, such as HSV Rings and Rainbow, are constructed by manipulating the color in the HSV (Hue/Saturation/Value) coordinate system. The conversion from HSV to RGB is straightforward and well documented in the literature. Figure 19.10 demonstrates the fusion, using the RGB Cube pseudocolor map, of a SPECT image on the corresponding CT image of the same patient.

FIGURE 19.10 Fused image using the RGB Cube pseudocolor map: (a) reference CT image, (b) registered SPECT image in pseudocolor, and (c) fusion result.

19.3.1.3 Clustering Algorithms for Unsupervised Fusion of Registered Images

Fusion of information at the image level can be achieved by processing both registered images in order to produce a fused image with an appropriate pixel classification. The key quantity in this technique is the double histogram P(x, y) of the two registered images, which is defined as the probability of a pixel (i, j) having a value of y in image B, given that the same pixel has a value of x in image A:

P(x, y) = P(I_B(i, j) = y | I_A(i, j) = x)    (19.29)

This quantity is very closely related to the entropy of the two images and can be very useful for effective tissue classification/segmentation, since it utilizes information from both (registered) images rather than one. The concept of the double histogram can be generalized in the R^n space by fusing n registered images. Dynamic studies from single photon emission CT (SPECT) are examples of such cases. In this general case, the n-dimensional histogram is defined as follows:

H_n(v_1, v_2, …, v_n)
= P(I_n(i, j) = v_n | I_1(i, j) = v_1; I_2(i, j) = v_2; …; I_{n−1}(i, j) = v_{n−1})    (19.30)

Fusing the registered images to produce an enhanced or segmented/classified image becomes equivalent to partitioning the n-dimensional histogram into a desired number of classes using a clustering algorithm. The goal of clustering is to reduce the amount of data by categorizing similar data items together. Such grouping is pervasive in the way humans process information. Clustering is hoped to provide an automatic tool for constructing categories in the data feature space. Clustering algorithms can be divided into two basic types: hierarchical and partitional.74 Hierarchical algorithms are initialized by a random definition of clusters and evolve by splitting large inhomogeneous clusters or merging small similar clusters. Partitional algorithms attempt to directly decompose the data into a set of disjoint clusters by minimizing a measure of dissimilarity between data points in the same cluster, while maximizing the dissimilarity between data points in different clusters.

19.3.1.3.1 The K-Means Algorithm

The K-means algorithm is a partitional clustering algorithm, which is used to distribute points in feature space among a predefined number of classes.75 An implementation of the algorithm can be summarized in pseudocode as follows:

    initialize the centroids m_i(0) of the classes, i = 1, …, k, at random
    t = 0
    repeat
        for the centroids m_i(t) of all classes, locate the data points x
            whose Euclidean distance from m_i is minimal
        set m_i(t + 1) equal to the new center of mass of those points:
            m_i(t + 1) = (Σ x_i) / n_i
        t = t + 1
    until |m_i(t − 1) − m_i(t)| ≤ error, for all classes i

This algorithm is applied to the n-dimensional histogram of the images to be fused, as defined in Equation 19.30. The centroids of the classes are selected randomly in the n-dimensional histogram, and the algorithm evolves by moving the positions of the centroids so that the following quantity is minimized:

E = Σ_j (x_j − m_c(x_j))²    (19.31)

where j
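The double histogram of Equation 19.29 can be estimated directly from two registered, quantized images: accumulate the joint 2-D histogram and normalize each row of it. A minimal numpy sketch (the function name is mine; rows for values of x that never occur in image A are left as zeros):

```python
import numpy as np

def double_histogram(img_a, img_b, n_levels):
    """Equation 19.29: P(x, y) = P(I_B(i,j) = y | I_A(i,j) = x),
    estimated from the joint histogram of two registered images."""
    joint = np.zeros((n_levels, n_levels))
    np.add.at(joint, (img_a.ravel(), img_b.ravel()), 1)
    row_sums = joint.sum(axis=1, keepdims=True)
    # Avoid division by zero for values of x absent from image A
    return np.divide(joint, row_sums, out=np.zeros_like(joint),
                     where=row_sums > 0)
```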
is the data point index and m_c(x_j) is the class centroid closest to data point x_j. The above algorithm can be employed to utilize information from pairs of images, such as CT, SPECT, MRI-T1, MRI-T2, and functional MRI, to achieve tissue classification and fusion. For tissue classification, nine different classes are usually sufficient, corresponding to background, cerebrospinal fluid, gray and white matter, bone material, abnormalities, etc. If fusion is required, then 64 or more classes can be defined, providing enhancement of the fine details of the original images. In Figure 19.11, a fused image using the K-means algorithm for different classes of tissue is produced by fusing CT and MR images of the same patient.

19.3.1.3.2 The Fuzzy K-Means Algorithm
The fuzzy K-means (FKM) algorithm is a variation of the K-means algorithm that introduces fuzziness in the form of a membership function. The membership function defines the probability with which each image pixel belongs to a specific class.

©2001 CRC Press LLC

FIGURE 19.11 Fused image using the K-means algorithm for different classes of tissue: (a) reference CT image, (b) registered MRI image of the same patient, and (c) fusion result.

Like the previous method, FKM is applied to the n-dimensional histogram of the images to be fused. The FKM algorithm and its variations have been employed in several types of pattern recognition problems.75,76 Assuming a number of n unlabeled data points that have to be distributed over k clusters, a membership function u can be defined such that

    u: (j, x) → [0, 1]   (19.32)

where 1 ≤ j ≤ k is an integer corresponding to a cluster. The membership function assigns to a data point x a positive probability u_jx, less than or equal to one, which indicates that point x belongs to class j. To meet computational requirements, the membership function is implemented as a 2-D matrix, where the first index indicates the cluster and the second index indicates the
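A compact fuzzy K-means sketch follows (assumptions: standard Bezdek-style updates with the usual membership exponent 2/(p − 1), and memberships normalized per data point, the common fuzzy C-means convention; all names are ours):

```python
import numpy as np

def fuzzy_kmeans(x, k, p=2.0, iters=100, tol=1e-5, seed=1):
    """Fuzzy K-means sketch: x is (N, n) data; u is the (N, k)
    membership matrix, each row a probability distribution over
    the k clusters (so each row sums to 1)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), k))
    u /= u.sum(axis=1, keepdims=True)              # normalize memberships
    m = None
    for _ in range(iters):
        up = u ** p
        m = (up.T @ x) / up.sum(axis=0)[:, None]   # fuzzy-weighted centroids
        # distances from every point to every centroid (small epsilon
        # avoids division by zero when a point sits on a centroid)
        d = np.linalg.norm(x[:, None, :] - m[None, :, :], axis=2) + 1e-12
        w = d ** (-2.0 / (p - 1.0))                # standard membership update
        u_new = w / w.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:          # |u(t-1) - u(t)| <= error
            return m, u_new
        u = u_new
    return m, u
```

The exponent p > 1 plays the same role as in the text: larger p gives softer cluster boundaries.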
value of the data point. If the membership function is to express a mathematical probability, the following constraint applies:

    Σ_{i=1}^{n} u_ij = 1, for every cluster j   (19.33)

The FKM algorithm evolves by minimizing the following quantity:

    E = Σ_{j=1}^{k} Σ_{i=1}^{n} u_ij^p (x_i − m_j)²   (19.34)

where the exponent p is a real number greater than 1 that controls the fuzziness of the clustering process. The FKM algorithm can be described in pseudocode as follows:

    t = 0
    initialize the matrix u randomly
    repeat
      calculate the cluster centroids using the formula
        m_j(t) = (Σ_{i=1}^{n} u_ij^p x_i) / (Σ_{i=1}^{n} u_ij^p),  j = 1, …, k
      calculate the new values of u:
        u_ij(t) = [ Σ_{l=1}^{k} ( ‖x_i − m_j‖ / ‖x_i − m_l‖ )^{2/(p − 1)} ]^{−1}
      t = t + 1
    until |m_i(t − 1) − m_i(t)| ≤ error or |u_ij(t − 1) − u_ij(t)| ≤ error

19.3.1.3.3 Self-Organizing Maps (SOMs)
The SOM is a neural network algorithm that uses a competitive learning technique to train itself in an unsupervised manner; a comprehensive overview can be found in Section 19.2.3.1. According to Reference 66, the Kohonen model comprises a layer of m neurons, usually one dimensional (1-D) or 2-D. The size of the network is defined by the purpose of the fusion operation. If it is desirable simply to fuse information to enhance fine-detail visibility, a large number of neurons is required, typically up to 16 × 16. If it is known a priori that a small number of tissue classes exists within the images, then the number of neurons of the network is set equal to the number of classes. The resulting image is a segmented image, which classifies each pixel into one of these classes; a small number of classes is commonly the most meaningful choice in this case. Each neuron is connected to the input vector (data point) x ∈ R^n with a weight vector w ∈ R^n. In the case of image fusion, the input signal has a dimensionality equal to the number of images to be fused, and the signal itself comprises the values of the images at a given pixel (i, j): x = {I1(i, j), I2(i, j),
…, In(i, j)}   (19.35)

It follows that the proposed method can fuse an arbitrary number of images, assuming that registration has been performed for all of them; in practice, however, small values of n are the most commonly used. The above process achieves spatial coherence within the network of neurons with respect to the input signal. This property is equivalent to clustering, since after self-training the neurons form clusters that reflect the input signal (data points). Figure 19.12 shows two fused images obtained by applying the SOM algorithm to the same patient data imaged by gadolinium (Gd)-enhanced MRI (Figure 19.12a) and MRI-T2 (Figure 19.12c). The upper right fused image (Figure 19.12b) is obtained using 256 classes, whereas the lower right image (Figure 19.12d) is classified using a smaller number of classes.

19.3.1.4 Fusion to Create Parametric Images
It is often necessary to fuse information from a series of images of a dynamic study in order to classify tissues according to a specific parameter. The result of the fusion process is the creation of a parametric image, which visualizes, pixel by pixel, the value of the diagnostically useful parameter.77 The required classification is performed by thresholding the parametric image at an appropriate level. A typical example of this process is gated blood pool imaging, performed for the clinical assessment of ventricular function. A sequence of images — typically 16 per cardiac cycle — is acquired of the cardiac chamber of interest, which usually is the left ventricle. The assessment of ventricular function is performed by producing parametric images that visualize several critical parameters:
• The amplitude image measures the change of volume as a function of time on a pixel-by-pixel basis, calculating the ejection fraction (EF) for every pixel in the sequence of images.
• The phase image visualizes the synchronization of cardiac contraction on a pixel-by-pixel basis.
The first parameter is valuable in assessing dyskinetic cardiac tissue due to
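The amplitude and phase parametric images described above are typically computed from the first temporal Fourier harmonic of the gated sequence; a minimal sketch (the array layout and function name are our assumptions):

```python
import numpy as np

def amplitude_phase_images(frames):
    """frames: (T, H, W) gated sequence, e.g. 16 frames per cardiac
    cycle.  The first Fourier harmonic along the time axis yields,
    per pixel, an amplitude image (strength of contraction) and a
    phase image (timing of contraction)."""
    spectrum = np.fft.fft(frames, axis=0)
    first = spectrum[1]                        # first harmonic, per pixel
    amplitude = np.abs(first) * 2 / frames.shape[0]
    phase = np.angle(first)
    return amplitude, phase
```

Thresholding the resulting amplitude or phase image at an appropriate level then performs the classification discussed in the text.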
infarcts, whereas the second is a strong indicator of fatal conditions such as fibrillation. Both images are calculated by means of Fourier analysis, using only the first few harmonics. In this way, information from a large number of images is fused into a single parametric image, thus enabling the expert to quantify the clinical evaluation beyond any qualitative inaccuracies.

FIGURE 19.12 Fused images obtained by applying the SOM algorithm to the same patient data imaged by (a) Gd-enhanced MRI and (c) MRI-T2. Fused image (b) is obtained using 256 classes, whereas image (d) is classified using a smaller number of classes.

19.3.2 Fusion at the Object Level
Fusion at the object level involves the generation of either a spatio-temporal model or a 3-D textured representation of the object of interest. Segmentation and triangulation algorithms must be applied prior to the fusion process. Fusion at the object level is demonstrated here by fusing temporal information onto the spatial domain. Four-dimensional (4-D) MR cardiac images, such as gradient echo, are obtained, providing both anatomical and functional information about the cardiac muscle.78 The left ventricle in each 3-D image, covering the whole cardiac cycle, is first segmented, and the resulting endocardial surface is then triangulated. The radial displacement between the end-diastolic and end-systolic phases at each point of the triangulated surface is mapped onto the surface and coded as pseudocolor, thus producing a textured object (Figure 19.13) that visualizes myocardial wall motion during the entire cardiac cycle.79

19.4 Conclusions
In this chapter, a comprehensive review of the most widely used techniques for automatic medical image registration and medical image fusion has been presented. In terms of medical image registration, three main categories have been reported: point-based registration, usually rendered as manual, and surface- and volume-based registration, rendered as

FIGURE 19.13
Fusion at the object level: the end-diastolic (a) and end-systolic (b) phases captured from the animated VRML files of a normal left ventricle.

automatic. A generic automatic registration scheme was then presented, which consisted of the selection of the image features used during the matching process, the definition of a measure of match (MOM) that quantified the spatial matching between the reference and the transformed image, and the application of an optimization technique that determined the independent parameters of the employed transformation model according to the MOM. The scheme formulates the problem of medical image registration as an optimization problem, employing SA and GA as two global optimization techniques in conjunction with affine, bilinear, rigid, and projective transformation models. An elastic deformation model based on the use of the Kohonen neural network and the application of an elastic 3-D image warping method, using the TPS method, has been presented to capture local shape differences. This method may be applied sequentially to any combination of transformations and optimization techniques highlighted in the generic medical image registration scheme. The aforementioned registration methods have been successfully applied to 2-D and 3-D medical data. For the 2-D retinal registration, three optimization methods (DSM, SA, and GA) have been evaluated in terms of accuracy and efficiency. Results have been presented as the evolution of the optimization with the number of function evaluations, averaged over a number of independent executions to compensate for the stochastic nature of the techniques. Results have also been presented in the form of the best achieved registration, both numerically and visually, for a sufficient amount of data from three different types of retinal image: RF, FA, and ICG images. The experimental results showed the superiority of GA as a global optimization process in conjunction with the
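The radial-displacement texturing used for object-level fusion (Section 19.3.2) reduces to a simple per-vertex computation; a sketch under the assumption that the segmented end-diastolic and end-systolic surfaces are given as matched point lists (names are ours):

```python
import numpy as np

def radial_displacement_colors(ed_pts, es_pts, center):
    """For matched end-diastolic / end-systolic surface points
    ((N, 3) arrays), compute the radial displacement relative to a
    ventricle center and normalize it to [0, 1] so it can be coded
    as a pseudocolor texture on the surface."""
    r_ed = np.linalg.norm(ed_pts - center, axis=1)
    r_es = np.linalg.norm(es_pts - center, axis=1)
    disp = r_ed - r_es                       # inward wall motion is positive
    lo, hi = disp.min(), disp.max()
    return (disp - lo) / (hi - lo + 1e-12)   # normalized pseudocolor value
```

Mapping these values through a color table and attaching them to the triangulated surface yields the kind of textured object shown in Figure 19.13.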
bilinear transformation model. For the 3-D CT-MRI head registration, a global transformation to account for global misalignment and an elastic deformation method to capture local shape differences have been proposed. Since the registration is a surface-based method, it depends strongly on the initial segmentation step to obtain the CT-MR surfaces. The external skin surface has been selected as the common anatomical structure in both modalities for this specific application. The global transformation has been based on a novel implementation of the affine transformation method in conjunction with GAs as a global optimization process. The proposed elastic deformation method has been applied sequentially after the global transformation. Two novelties have been addressed through this application: (1) the introduction of the Kohonen neural network model to define an automatic one-to-one point correspondence between the transformed and the reference surfaces (the so-called interpolant points) and (2) the extended implementation of the TPS interpolation method in three dimensions. The qualitative and quantitative results for all CT-MR image pairs show the advantageous performance of the proposed automatic methods, in terms of surface distance (MOM), against the manual method.

Medical image fusion techniques have also been presented to increase diagnostic value through the combination of information from medical images of different modalities. The application of a medical image registration scheme is required prior to the fusion process. Two main categories of medical image fusion have been addressed: fusion at the image level and fusion at the object level. The most representative techniques of these categories have been reviewed and broadened throughout the chapter to include several methods, although more extensive research toward medical image fusion is required.

References
1. P. A. Van den Elsen, E. J. Pol, and M. A. Viergever, Medical image matching: a review with
classification, IEEE Eng. Med. Biol., 12(1), 26–39, 1993.
2. C. R. Maurer and J. M. Fitzpatrick, A review of medical image registration, in Interactive Image Guided Neurosurgery, R. J. Maciunas, Ed., Am. Assoc. Neurological Surgeons, Park Ridge, IL, pp. 17–49, 1993.
3. D. J. Hawks, D. L. G. Hill, and E. C. M. L. Bracey, Multi-modal data fusion to combine anatomical and physiological information in the head and the heart, in Cardiovascular Nuclear Medicine and MRI, J. H. C. Reiber and E. E. Van der Wall, Eds., Kluwer Academic Publishers, Dordrecht, the Netherlands, pp. 113–130, 1992.
4. C. R. Maurer, J. M. Fitzpatrick, M. Y. Wang, R. L. Galloway, R. J. Maciunas, and G. G. Allen, Registration of head volume images using implantable fiducial markers, IEEE Trans. Med. Imaging, 16, 447–462, 1997.
5. W. E. L. Grimson, G. J. Ettinger, S. J. White, T. Lozano-Perez, W. M. Wells, and R. Kikinis, An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization, IEEE Trans. Med. Imaging, 15, 129–140, 1996.
6. V. Morgioj, A. Brusa, G. Loi, E. Pignoli, A. Gramanglia, M. Scarcetti, E. Bomburdieri, and R. Marchesini, Accuracy evaluation of fusion of CT, MR, and SPECT images using commercially available software packages (SRS Proto with IFS), Int. J. Radiat. Oncol. Biol. Phys., 43(1), 227–234, 1995.
7. P. Clarysse, D. Gibon, J. Rousseau, S. Blond, C. Vasseur, et al., A computer-assisted system for 3-D frameless localization in stereotaxic MRI, IEEE Trans. Med. Imaging, 10, 523–529, 1991.
8. D. L. G. Hill, D. J. Hawkes, J. E. Crossman, M. J. Gleeson, T. C. S. Cox, et al., Registration of MR and CT images for skull base surgery using point-like anatomical features, Br. J. Radiol., 64, 1030–1035, 1991.
9. C. Evans, T. M. Peters, D. L. Collins, C. J. Henri, S. Murrett, G. S. Pike, and W. Dai, 3-D correlative imaging and segmentation of cerebral anatomy, function and vasculature, Automedica, 14, 65–69, 1992.
10. R. Amdur, D. Gladstone, K. Leopold, and R. D. Hasis, Prostate seed implant quality assessment using MR and CT image fusion, Int. J. Oncol. Biol. Phys., 43, 67–72, 1999.
11. F. L. Bookstein, Principal warps: thin-plate splines and the decomposition of deformations, IEEE Trans. Pattern Anal. Mach. Intell., 11, 567–585, 1989.
12. C. A. Pelizzari, G. T. Y. Chen, D. R. Spelbring, R. R. Weichselbaum, and C. T. Chen, Accurate three-dimensional registration of CT, PET and/or MR images of the brain, J. Comput. Assist. Tomogr., 13, 20–26, 1989.
13. W. Press, B. Flannery, S. Teukolsky, and W. Vetterling, Numerical Recipes in C, 2nd edition, Cambridge University Press, London, 1992.
14. D. Kozinska, O. J. Tretiak, and J. Nissanov, Multidimensional alignment using the Euclidean distance transform, Graphical Models Image Process., 59, 373–387, 1997.
15. G. Borgefors, Multidimensional chamfer matching: a tree edge matching algorithm, IEEE Trans. Pattern Anal. Mach. Intell., 10, 849–865, 1988.
16. M. Van Herk and H. M. Kooy, Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT using chamfer matching, Med. Phys., 21, 1163–1178, 1994.
17. M. Jiang, R. A. Robb, and K. J. Molton, A new approach to 3-D registration of multimodality medical images by surface matching, Visualization in Biomedical Computing, Proc. SPIE-1808, Int. Soc. Opt. Eng., Bellingham, WA, pp. 196–213, 1992.
18. P. J. Besl and N. D. McKay, A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Mach. Intell., 14, 239–256, 1992.
19. A. Collignon, D. Vandermeulen, P. Suetens, and G. Marchal, Registration of 3-D multimodality medical images using surfaces and point landmarks, Pattern Recogn. Lett., 15, 461–467, 1994.
20. C. R. Maurer, G. B. Aboutanos, B. M. Dawant, R. J. Maciunas, and J. M. Fitzpatrick, Registration of 3-D images using weighted geometrical features, IEEE Trans. Med. Imaging, 15, 836–849, 1996.
21. J. L. Herring, B. M. Dawant, C. R. Maurer, D. M. Muratore, R. L. Galloway, and J. M. Fitzpatrick, Surface-based registration of CT images to physical space for image guided surgery of the spine: a sensitivity study, IEEE Trans. Med. Imaging, 17, 743–752, 1998.
22. J. Y. Chiang and B. J. Sullivan, Coincident bit counting — a new criterion for image registration, IEEE Trans. Med. Imaging, 12, 30–38, 1993.
23. T. Radcliffe, R. Rajapekshe, and S. Shaler, Pseudocorrelation: a fast, robust, absolute, gray level image alignment algorithm, Med. Phys., 41, 761–769, 1994.
24. J. B. A. Maintz, P. A. van den Elsen, and M. A. Viergever, Comparison of feature-based matching of CT and MR brain images, in Computer Vision, Virtual Reality, and Robotics in Medicine, N. Ayache, Ed., Springer-Verlag, Berlin, pp. 219–228, 1995.
25. J. B. A. Maintz, P. A. van den Elsen, and M. A. Viergever, Evaluation of ridge seeking operators for multimodality medical image matching, IEEE Trans. Pattern Anal. Mach. Intell., 18, 353–365, 1996.
26. P. A. Van den Elsen, J. B. A. Maintz, E. J. D. Pol, and M. A. Viergever, Automatic registration of CT and MR brain images using correlation of geometrical features, IEEE Trans. Med. Imaging, 14, 384–396, 1995.
27. P. A. Van den Elsen, E. J. D. Pol, T. S. Sumanaweera, P. F. Hemler, S. Napel, and S. R. Adler, Grey value correlation techniques used for automatic matching of CT and MRI brain and spine images, Visualization Biomed. Comput. 1994, 2359, 227–237, 1994.
28. C. Studholme, D. L. G. Hill, and D. J. Hawkes, Automated registration of truncated MR and CT datasets of the head, Proc. Br. Mach. Vision Assoc., 27–36, 1995.
29. R. P. Woods, J. C. Mazziotta, and S. R. Cherry, MRI–PET registration with automated algorithm, J. Comput. Assist. Tomogr., 17, 536–546, 1993.
30. A. Collignon, F. Maes, D. Delaere, D. Vandermeulen, P. Suetens, and G. Marchal, Automated multimodality image registration based on information theory, in Information Processing in Medical Imaging 1995, Y. Bizais, C. Barillot, and R. Di Paola, Eds., Kluwer Academic Publishers, Dordrecht, the Netherlands, pp. 263–274, 1995.
31. F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens, Multimodality image registration by maximization of mutual information, IEEE Trans. Med. Imaging, 16, 187–198, 1997.
32. J. West, J. M. Fitzpatrick, M. Y. Wang, B. M. Dawant, C. R. Maurer, R. M. Kessler, and R. J. Maciunas, Retrospective intermodality registration techniques for images of the head: surface-based versus volume-based, IEEE Trans. Med. Imaging, 18, 147–150, 1999.
33. D. Lemoine, C. Barillot, B. Gibaud, and E. Pasqualini, A 3-D CT stereoscopic deformation method to merge multimodality images and atlas data, in Proc. of Computer Assisted Radiology (CAR), Springer-Verlag, Berlin, pp. 663–668, 1991.
34. J. C. Gee, M. Reivich, and R. Bajcsy, Elastically deforming a 3-D atlas to match anatomical brain images, J. Comput. Assist. Tomogr., 17, 225–236, 1993.
35. F. L. Bookstein, Thin-plate splines and the atlas problem for biomedical images, in Lecture Notes in Computer Science, Information Processing in Medical Imaging, Vol. 511, A. C. F. Colchester and D. J. Hawkes, Eds., Springer-Verlag, Berlin, pp. 326–342, 1991.
36. R. Bajcsy and S. Kovacic, Multiresolution elastic matching, Comput. Vision Graph. Image Process., 46, 1–29, 1989.
37. M. Singh, R. R. Brechner, and V. W. Henderson, Neuromagnetic localization using magnetic resonance images, IEEE Trans. Med. Imaging, 11, 125–134, 1984.
38. M. Moshfeghi, Elastic matching of multimodality medical images, Graphical Models Image Process., 53, 271–282, 1991.
39. C. Davatzikos, J. L. Prince, and R. N. Bryan, Image registration based on boundary mapping, IEEE Trans. Med. Imaging, 15, 112–115, 1996.
40. H. Chang and J. M. Fitzpatrick, A technique for accurate magnetic resonance imaging in the presence of field inhomogeneities, IEEE Trans. Med. Imaging, 11, 319–329, 1992.
41. J. Michiels, H. Bosmans, P. Pelgrims, D. Vandermeulen, J. Gybels, G. Marchal, and P. Suetens, On the problem of geometric distortion of MR images for stereotactic neurosurgery, Magn. Reson. Imaging, 12, 749–764, 1994.
42. J. M. Fitzpatrick, J. B. West, and C. R. Maurer, Predicting error in rigid-body point-based registration, IEEE Trans. Med. Imaging, 17, 694–702, 1998.
43. C. R. Maurer, R. J. Maciunas, and J. M. Fitzpatrick, Registration of head CT images to physical space using a weighted combination of points and surfaces, IEEE Trans. Med. Imaging, 17, 753–761, 1998.
44. C. Barillot, B. Gibaud, J. C. Gee, and D. Lemoine, Segmentation and fusion of multimodality and multisubjects data for the preparation of neurosurgical procedures, in Medical Imaging: Analysis of Multimodality 2-D/3-D Images, L. Beolchi and M. H. Kuhn, Eds., IOS Press, pp. 70–82, 1995.
45. L. Schad, R. Boesecke, W. Schlegel, G. Hartmann, V. Sturm, et al., Three-dimensional image correlation of CT, MR and PET studies in radiotherapy treatment planning of brain tumors, J. Comput. Assist. Tomogr., 11(6), 948–954, 1987.
46. D. Hawks, D. L. G. Hill, and E. C. M. L. Bracey, Multi-modal data fusion to combine anatomical and physiological information in the head and the heart, in Cardiovascular Nuclear Medicine and MRI, J. H. C. Reiber and E. E. Van der Wall, Eds., Kluwer Academic Publishers, Dordrecht, the Netherlands, pp. 113–130, 1992.
47. K. S. Arun, T. S. Huang, and S. Blostein, Least-squares fitting of two 3-D point sets, IEEE Trans. Pattern Anal. Mach. Intell., 9(5), 698–700, 1987.
48. M. Singh, W. Frei, T. Shibata, G. Huth, and N. Telfer, A digital technique for accurate change detection in nuclear medical images — with application to myocardial perfusion studies using thallium-201, IEEE Trans. Nucl. Sci., 26(1), 565–575, 1979.
49. E. Kramer, M. Noz, J. Sanger, A. Megibow, and G. Maguire, CT-SPECT fusion to correlate radiolabeled monoclonal antibody uptake with abdominal CT findings, Radiology, 172(3), 861–865, 1989.
50. K. Toennies, J. Udupa, G. Herman, I. Wornom, and S. Buchman, Registration of 3-D objects and surfaces, IEEE Comput. Graphics Appl., 10(3), 52–62, 1990.
51. E. Peli, R. Augliere, and G. Timberlake, Feature-based registration of retinal images, IEEE Trans. Med. Imaging, 6, 272–278, 1987.
52. L. Junck, J. G. Moen, G. D. Hutchins, M. B. Brown, and D. E. Kuhl, Correlation methods for the centering, rotation and alignment of functional brain images, J. Nucl. Med., 31(7), 1220–1226, 1990.
53. A. Apicella, J. H. Nagel, and R. Duara, Fast multimodality image matching, in Ann. Int. Conf. IEEE Eng. Med. Biol. Soc., Vol. 10, IEEE Comp. Soc., Los Alamitos, CA, pp. 414–415, 1988.
54. A. Venot and V. Leclerc, Automated correction of patient motion and gray values prior to subtraction in digitized angiography, IEEE Trans. Med. Imaging, 3(4), 179–186, 1984.
55. S. L. Jacoby, J. S. Kowalik, and J. T. Pizzo, Iterative Methods for Nonlinear Optimization Problems, Prentice-Hall, Englewood Cliffs, NJ, 1972.
56. C. Fuh and P. Maragos, Motion displacement estimation using an affine model for image matching, Opt. Eng., 30, 881–887, 1991.
57. E. Aarts and P. van Laarhoven, Simulated Annealing: Theory and Practice, John Wiley & Sons, New York, 1987.
58. D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
59. H. Haneishi, T. Masuda, N. Ohyama, T. Honda, and J. Tsujiuchi, Analysis of the cost function used in simulated annealing for CT image reconstruction, Appl. Opt., 29(2), 259–265, 1990.
60. D. E. Palmer, C. Pattaroni, K. Nunami, R. K. Chadha, M. Goodman, T. Wakamiya, K. Fukase, S. Horimoto, M. Kitazawa, H. Fujita, A. Kubo, and T. Shiba, Effects of dehydroalanine on peptide conformations, J. Am. Chem. Soc., 114(14), 5634–5642, 1992.
61. L. Ingber, Statistical mechanics of neocortical interactions: a scaling paradigm applied to electroencephalography, Phys. Rev. A, 44(6), 4017–4060, 1991.
62. M. S. Kim and C. Guest, Simulated annealing algorithm for binary phase only filters in pattern classification, Appl. Opt., 29(8), 1203–1208, 1990.
63. L. Ingber, Very fast simulated re-annealing, J. Math. Comput. Modelling, 12(8), 967–973, 1989.
64. L. Ingber, Simulated annealing: practice versus theory, J. Math. Comput. Modelling, 18(11), 29–57, 1993.
65. N. Arad, N. Dyn, D. Reisfeld, and Y. Yeshurun, Image warping by radial basis functions: application to facial expressions, Comput. Vision Graphics Image Process., 56, 161–172, 1994.
66. T. Kohonen, Self-organized formation of topologically correct feature maps, Biol. Cybernetics, 43, 59–69, 1982.
67. J. Camp, B. Cameron, D. Blezek, and R. Robb, Virtual reality in medicine and biology, Future Generation Comput. Syst., 14, 91–108, 1998.
68. T. Martinetz and K. Schulten, A neural gas network learns topologies, in Artificial Neural Networks, T. Kohonen et al., Eds., North-Holland, Amsterdam, pp. 397–402, 1991.
69. R. A. Robb and D. P. Hanson, The ANALYZE software system for visualization and analysis in surgery simulation, in Computer Integrated Surgery, S. Lavallee et al., Eds., MIT Press, Cambridge, MA, 1993.
70. G. K. Matsopoulos, N. A. Mouravliansky, K. K. Delibasis, and K. S. Nikita, Automatic registration of retinal images with global optimization techniques, IEEE Trans. Inf. Technol. Biomed., 3, 47–60, 1999.
71. K. K. Delibasis, G. K. Matsopoulos, N. A. Mouravliansky, and K. S. Nikita, Efficient implementation of the marching cubes algorithm for rendering medical data, in High Performance Computing and Networking, Proc. 7th Int. Conf. HPCN, Lecture Notes in Computer Science, Vol. 1593, P. Sloot, M. Bubak, A. Hoekstra, and B. Hertzberger, Eds., Springer-Verlag, Berlin, pp. 989–999, 1999.
72. G. K. Matsopoulos, K. K. Delibasis, N. Mouravliansky, and K. S. Nikita, Unsupervised learning for automatic CT-MR image registration with local deformations, European Medical & Biological Engineering Conference, EMBEC'99, Vienna, November 1999.
73. J. Gomes, L. Darsa, B. Costa, and L. Velho, Warping and Morphing of Graphical Objects, Morgan Kaufmann Publishers, Inc., San Francisco, CA, 1998.
74. A. K. Jain and R. C. Dubes, Algorithms for Clustering Data, Prentice-Hall, Englewood Cliffs, NJ, 1988.
75. R. Rezaee, C. Nyqvist, P. van der Zwet, E. Jansen, and J. Reiber, Segmentation of MR images by a fuzzy C-means algorithm, Comput. Cardiol., 21–24, 1995.
76. J. Bezdek, Partition structures: a tutorial, in The Analysis of Fuzzy Information, J. C. Bezdek, Ed., CRC Press, Boca Raton, FL, 1987.
77. P. Sharp, H. Gemmel, and F. Smith, Practical Nuclear Medicine, IRL Press, Oxford, 1989.
78. E. Wall, Magnetic resonance in cardiology: which clinical questions can be answered now and in the near future?, in What's New in Cardiovascular Imaging, J. Reiber and E. Wall, Eds., Kluwer Academic Publishers, Dordrecht, the Netherlands, pp. 197–206, 1998.
79. K. K. Delibasis, N. Mouravliansky, G. K. Matsopoulos, K. S. Nikita, and A. Marsh, MR functional cardiac imaging: segmentation, measurement and WWW based visualization of 4D data, Future Generation Comput. Syst., 15, 185–193, 1999.
