
7 Analysis of Texture

Texture is one of the important characteristics of images, and texture analysis is encountered in several areas [432–442]. We find around us several examples of texture: on wooden furniture, cloth, brick walls, floors, and so on. We may group texture into two general categories: (quasi-) periodic and random. If there is a repetition of a texture element at almost regular or (quasi-) periodic intervals, we may classify the texture as being (quasi-) periodic or ordered; the elements of such a texture are called textons [438] or textels. Brick walls and floors with tiles are examples of periodic texture. On the other hand, if no texton can be identified, such as in clouds and cement-wall surfaces, we can say that the texture is random. Rao [432] gives a more detailed classification, including weakly ordered or oriented texture that takes into account hair, wood grain, and brush strokes in paintings. Texture may also be related to visual and/or tactile sensations such as fineness, coarseness, smoothness, granularity, periodicity, patchiness, being mottled, or having a preferred orientation [441].

A significant amount of work has been done in texture characterization [441, 442, 439, 432, 438] and synthesis [443, 438]; see Haralick [441] and Haralick and Shapiro [440] (Chapter 9) for detailed reviews. According to Haralick et al. [442], texture relates to information about the spatial distribution of gray-level variation; however, this is a general observation. It is important to recognize that, due to the existence of a wide variety of texture, no single method of analysis would be applicable to several different situations. Statistical measures such as gray-level co-occurrence matrices and entropy [442] characterize texture in a stochastic sense; however, they do not convey a physical or perceptual sense of the texture. Although periodic texture may be modeled as repetitions of textons, not many methods have been developed for the structural analysis of texture [444].

In this chapter, we shall explore the nature of texture found in biomedical images, study methods to characterize and analyze such texture, and investigate approaches for the classification of biomedical images based upon texture. We shall concentrate on random texture in this chapter; due to the extensive occurrence of oriented patterns and texture in biomedical images, that topic is treated on its own, in a separate chapter.

© 2005 by CRC Press LLC

7.1 Texture in Biomedical Images

A wide variety of texture is encountered in biomedical images. Oriented texture is common in medical images due to the fibrous nature of muscles and ligaments, as well as the extensive presence of networks of blood vessels, veins, ducts, and nerves. A preferred or dominant orientation is associated with the functional integrity and strength of such structures. Although truly periodic texture is not commonly encountered in biomedical images, ordered texture is often found in images of the skins of reptiles, the retina, the cornea, the compound eyes of insects, and honeycombs.

Organs such as the liver are made up of clusters of parenchyma that are of the order of a millimeter in size. The pixels in CT images have a typical resolution of 1 mm, which is comparable to the size of the parenchymal units. With ultrasonic imaging, the wavelength of the probing radiation is of the order of a millimeter, which is also comparable to the size of the parenchymal clusters. Under these conditions, the liver appears to have a speckled random texture.

Several samples of biomedical images with various types of texture are shown in Figures 7.1, 7.2, and 7.3; see also Figures 1.5, 1.8, 9.18, and 9.20. It is evident from these illustrations that no single approach can succeed in characterizing all types of texture. Several approaches have been proposed for the analysis of texture in medical images for various diagnostic applications; for example, texture measures have been derived from X-ray images for the automatic identification of pulmonary diseases [433], for the analysis of MR images [445], and for the processing of mammograms [165, 275, 446].
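Statistical measures such as the gray-level co-occurrence matrix and its entropy, mentioned above, are straightforward to compute. The following minimal numpy sketch (ours, not the book's code; the function names and the toy image are illustrative) builds a normalized co-occurrence matrix for a single pixel offset and computes its entropy:

```python
import numpy as np

def glcm(img, dr=0, dc=1, levels=8):
    """Gray-level co-occurrence matrix for one offset (dr, dc), normalized
    so that the entries form a joint probability distribution."""
    P = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[img[r, c], img[r + dr, c + dc]] += 1
    return P / P.sum()

def glcm_entropy(P):
    """Entropy of the co-occurrence distribution, a common texture measure."""
    p = P[P > 0]
    return -np.sum(p * np.log2(p))

# Toy 4-level image
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
print(glcm_entropy(P))
```

In practice, several offsets (distances and directions) are used, and the matrix is often symmetrized; features such as energy, inertia, and correlation are then derived from each matrix.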
In this chapter, we shall investigate the nature of texture in a few biomedical images, and study some of the commonly used methods for texture analysis.

7.2 Models for the Generation of Texture

Martins et al. [447], in their work on the auditory display of texture in images (see Section 7.8), outlined the following similarities between speech and texture generation. The sounds produced by the human vocal system may be grouped as voiced, unvoiced, and plosive sounds [31, 176]. The first two types of speech signals may be modeled as the convolution of an input excitation signal with a filter function. The excitation signal is quasi-periodic when we use the vocal cords to create voiced sounds, or random in the case of unvoiced sounds. Figure 7.4 (a) illustrates the basic model for speech generation.

FIGURE 7.1 Examples of texture in CT images: (a) liver; (b) kidney; (c) spine; (d) lung. The true size of each image is 55 × 55 mm. The images represent widely differing ranges of tissue density, and have been enhanced to display the inherent texture. Image data courtesy of Alberta Children's Hospital.

FIGURE 7.2 Examples of texture in mammograms (from the MIAS database [376]): (a)–(c) oriented texture, true image size 60 × 60 mm; (d) random texture, true image size 40 × 40 mm. For more examples of oriented texture, see Figures 9.20 and 1.8.

FIGURE 7.3 Examples of ordered texture: (a) endothelial cells in the cornea, image courtesy of J. Jaroszewski; (b) part of a fly's eye, reproduced with permission from D. Suzuki, "Behavior in Drosophila melanogaster: A geneticist's view", Canadian Journal of Genetics and Cytology, XVI(4): 713–735, 1974, © Genetics Society of Canada; (c) skin on the belly of a cobra snake, image courtesy of Implora, Colonial Heights, VA, http://www.implora.com. See also Figure 1.5.

FIGURE 7.4 (a) Model for speech signal generation: a random (unvoiced) or quasi-periodic (voiced) excitation drives a vocal-tract filter to produce the speech signal. (b) Model for texture synthesis: a random or ordered field of impulses is filtered by a texture element (texton) or spot to produce the textured image. Reproduced with permission from A.C.G. Martins, R.M. Rangayyan, and R.A. Ruschioni, "Audification and sonification of texture in images", Journal of Electronic Imaging, 10(3): 690–705, 2001. © SPIE and IS&T.

Texture may also be modeled as the convolution of an input impulse field with a spot or a texton that would act as a filter. The "spot noise" model of van Wijk [443] for synthesizing random texture uses this approach, in which the Fourier spectrum of the spot acts as a filter that modifies the spectrum of a 2D random-noise field. Ordered texture may be generated by specifying the basic pattern or texton to be used, and a placement rule. The placement rule may be expressed as a field of impulses; texture is then given by the convolution of the impulse field with the texton, which could also be represented as a filter. A one-to-one correspondence may thus be established between speech signals and texture in images. Figure 7.4 (b) illustrates the model for texture synthesis; the correspondence between the speech and image generation models in Figure 7.4 is straightforward.

7.2.1 Random texture

According to the model in Figure 7.4, random texture may be modeled as a filtered version of a field of white noise, where the filter is represented by a spot of a certain shape and size (usually of small spatial extent compared to the size of the image). The 2D spectrum of the noise field, which is essentially a constant, is shaped by the 2D spectrum of the spot.
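The spot-noise model lends itself to a direct implementation: generate a white-noise field and filter it with the spectrum of the spot. A minimal numpy sketch (ours; the circular spot of diameter 20 pixels mirrors the examples discussed in this section):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
noise = rng.standard_normal((N, N))            # 2D white-noise field

# Circular "spot" of diameter 20 pixels, acting as the filter
y, x = np.mgrid[:N, :N]
spot = ((x - N // 2) ** 2 + (y - N // 2) ** 2 <= 10 ** 2).astype(float)

# Spot-noise model: texture = noise convolved with the spot, computed as a
# product of spectra in the Fourier domain (circular convolution)
texture = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(spot)))
```

The spectrum of `texture` is essentially the spectrum of the spot, since the noise spectrum is approximately flat; changing the spot's shape or size changes the coarseness and anisotropy of the resulting texture.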
Figure 7.5 illustrates a random-noise field of size 256 × 256 pixels and its Fourier spectrum. Parts (a)–(d) of Figure 7.6 show two circular spots of diameter 12 and 20 pixels and their spectra; parts (e)–(h) of the figure show the random texture generated by convolving the noise field in Figure 7.5 (a) with the circular spots, and their Fourier spectra. It is readily seen that the spots have filtered the noise, and that the spectra of the textured images are essentially those of the corresponding spots. Figures 7.7 and 7.8 illustrate a square spot and a hash-shaped spot, as well as the corresponding random texture generated by the spot-noise model and the corresponding spectra; the anisotropic nature of the images is clearly seen in their spectra.

7.2.2 Ordered texture

Ordered texture may be modeled as the placement of a basic pattern or texton (which is of a much smaller size than the total image) at positions determined by a 2D field of (quasi-) periodic impulses. The separations between the impulses in the x and y directions determine the periodicity or "pitch" in the two directions. This process may also be modeled as the convolution of the impulse field with the texton; in this sense, the only difference between ordered and random texture lies in the structure of the impulse field: the former uses a (quasi-) periodic field of impulses, whereas the latter uses a random-noise field. Once again, the spectral characteristics of the texton could be seen as a filter that modifies the spectrum of the impulse field (which is essentially a 2D field of impulses as well).

FIGURE 7.5 (a) Image of a random-noise field (256 × 256 pixels). (b) Spectrum of the image in (a). Reproduced with permission from A.C.G. Martins, R.M. Rangayyan, and R.A. Ruschioni, "Audification and sonification of texture in images", Journal of Electronic Imaging, 10(3): 690–705, 2001. © SPIE and IS&T.

Figure 7.9 (a) illustrates a 256 × 256 field of impulses with horizontal periodicity px = 40 pixels and vertical periodicity py = 40 pixels. Figure 7.9 (b) shows the corresponding periodic texture with a circle of diameter 20 pixels as the spot or texton. Figure 7.9 (c) shows a periodic texture with the texton being a square of side 20 pixels, px = 40 pixels, and py = 40 pixels. Figure 7.9 (d) depicts a periodic-textured image with an isosceles triangle of sides 12, 16, and 23 pixels as the spot, and periodicity px = 40 pixels and py = 40 pixels. See Section 7.6 for illustrations of the Fourier spectra of images with ordered texture.

7.2.3 Oriented texture

Images with oriented texture may be generated using the spot-noise model by providing line segments or oriented motifs as the spot. Figure 7.10 shows a spot with a line segment oriented at 135° and the result of convolution of the spot with a random-noise field; the log-magnitude Fourier spectra of the spot and the textured image are also shown. The preferred orientation of the texture and the directional concentration of the energy in the Fourier domain are clearly seen in the figure. See Figure 7.2 for examples of oriented texture in mammograms, and the separate chapter on oriented patterns for detailed discussions and several illustrations.

FIGURE 7.6 (a) Circle of diameter 12 pixels. (b) Circle of diameter 20 pixels. (c) Fourier spectrum of the image in (a). (d) Fourier spectrum of the image in (b). (e) Random texture with the circle of diameter 12 pixels as the spot. (f) Random texture with the circle of diameter 20 pixels as the spot. (g) Fourier spectrum of the image in (e). (h) Fourier spectrum of the image in (f). The size of each image is 256 × 256 pixels. Reproduced with permission from A.C.G. Martins, R.M. Rangayyan, and R.A. Ruschioni, "Audification and sonification of texture in images", Journal of Electronic Imaging, 10(3): 690–705, 2001. © SPIE and IS&T.
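The ordered-texture model described above (a periodic impulse field convolved with a texton) can be sketched in a few lines of numpy; this is our illustration in the style of Figure 7.9 (c), not the book's code:

```python
import numpy as np

N, p = 256, 40                 # image size and pitch (px = py = 40 pixels)
impulses = np.zeros((N, N))
impulses[::p, ::p] = 1.0       # (quasi-)periodic 2D field of impulses

# Texton: a square of side 20 pixels (placed at the origin of its own field)
texton = np.zeros((N, N))
texton[:20, :20] = 1.0

# Ordered texture = impulse field convolved with the texton, again computed
# as a product of spectra (circular convolution via the FFT)
texture = np.real(np.fft.ifft2(np.fft.fft2(impulses) * np.fft.fft2(texton)))
```

Replacing `impulses` with a random-noise field turns this into the random-texture generator of Section 7.2.1, which is exactly the point of the model: only the impulse field differs between the two cases.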
FIGURE 7.7 (a) Square of side 20 pixels. (b) Random texture with the square of side 20 pixels as the spot. (c) Spectrum of the image in (a). (d) Spectrum of the image in (b). The size of each image is 256 × 256 pixels. Reproduced with permission from A.C.G. Martins, R.M. Rangayyan, and R.A. Ruschioni, "Audification and sonification of texture in images", Journal of Electronic Imaging, 10(3): 690–705, 2001. © SPIE and IS&T.

Although homomorphic deconvolution has been shown to successfully extract the basic wavelets or motifs in periodic signals, the extraction of the impulse train or field is made difficult by the presence of noise and artifacts related to the deconvolution procedure [31].

Example: An image of a part of a building with an ordered arrangement of windows is shown in Figure 7.22 (a). A single window section of the image, extracted by homomorphic deconvolution, is shown in part (b) of the figure.

FIGURE 7.22 (a) An image of a part of a building with a periodic arrangement of windows. (b) A single window structure extracted by homomorphic deconvolution. Reproduced with permission from A.C.G. Martins and R.M. Rangayyan, "Texture element extraction via cepstral filtering in the Radon domain", IETE Journal of Research (India), 48(3,4): 143–150, 2002. © IETE.

An image with a periodic arrangement of a textile motif is shown in Figure 7.23 (a). The result of the homomorphic deconvolution procedure of Martins and Rangayyan [444, 505] to extract the texton is shown in part (b) of the same figure. It is evident that a single motif has been extracted, albeit with some blurring and loss of detail. The procedure, however, was not successful with biomedical images, due to the effects of quasi-periodicity as well as significant size and scale variations among the repeated versions of the basic pattern. More research is desirable in this area.
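The idea behind homomorphic (cepstral) deconvolution can be illustrated in 1D. The sketch below is ours and shows only the general technique, not the Radon-domain procedure of Martins and Rangayyan: a periodic signal is modeled as an impulse train convolved with a wavelet; in the real cepstrum, the wavelet concentrates near quefrency zero while the impulse train appears at multiples of its period, so low-quefrency "liftering" separates the two.

```python
import numpy as np

# A periodic signal: an impulse train convolved with a basic wavelet (motif)
impulses = np.zeros(256)
impulses[::32] = 1.0                       # period of 32 samples
wavelet = np.hanning(16)
signal = np.convolve(impulses, wavelet)[:256]

# Real cepstrum: inverse FFT of the log-magnitude spectrum.  Convolution
# becomes addition of log-spectra, so the two components separate.
spectrum = np.fft.fft(signal)
cepstrum = np.fft.ifft(np.log(np.abs(spectrum) + 1e-8)).real

# Liftering: keep only low quefrencies to (approximately) isolate the
# wavelet's log-spectrum; the peak near quefrency 32 belongs to the train.
lifter = np.zeros(256)
lifter[:16] = 1.0
lifter[-15:] = 1.0
wavelet_log_spectrum = np.fft.fft(cepstrum * lifter).real
```

As the text notes, the wavelet is usually recovered well, whereas recovering the impulse field itself is fragile in the presence of noise.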
FIGURE 7.23 (a) An image with a periodic arrangement of a textile motif. (b) A single motif or texton extracted by homomorphic deconvolution. Reproduced with permission from A.C.G. Martins and R.M. Rangayyan, "Texture element extraction via cepstral filtering in the Radon domain", IETE Journal of Research (India), 48(3,4): 143–150, 2002. © IETE.

7.8 Audification and Sonification of Texture in Images

The use of sound in scientific data analysis is rather rare; analysis and presentation of data are done almost exclusively by visual means. Even when the data are the result of vibrations or sounds, such as heart sound signals or phonocardiograms, a Doppler ultrasound exam, or sonar, they are often mapped to a graphical display or an image, and visual analysis is performed. The auditory system has not been used much for image analysis, in spite of the fact that it has several advantages over the visual system. Whereas many interesting methods have been proposed for the auditory display of scientific laboratory data and computer-graphics representations of multidimensional data, not much work has been reported on deriving sounds from visual images.

Chambers et al. [506] published a report on auditory data presentation in the early 1970s. The first international conference on the auditory display of scientific data was held in 1992 [507], with specific interest in the use of sound for the presentation and analysis of information. Meijer [508, 509] proposed a sonification procedure to present image data to the blind. In this method, the frequency of an oscillator is associated with the position of each pixel in the image, and the amplitude is made proportional to the pixel intensity. The image is scanned one column at a time, and the outputs of the associated oscillators are presented as a sum, followed by a click before the presentation of the next column. In essence, the image is treated as a spectrogram or a time-frequency distribution [31, 176].
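A column-scanning scheme of the kind described above can be sketched directly. The following is our minimal illustration of the idea (frequency mapped to row position, amplitude to intensity), not Meijer's implementation; the sampling rate, frequency range, and column duration are arbitrary choices:

```python
import numpy as np

def sonify(image, fs=8000, col_dur=0.05, fmin=200.0, fmax=2000.0):
    """Map each image column to a sum of oscillators: row index determines
    frequency (top rows high-pitched), pixel intensity determines amplitude."""
    rows, cols = image.shape
    freqs = np.linspace(fmax, fmin, rows)          # one oscillator per row
    t = np.arange(int(fs * col_dur)) / fs
    out = []
    for c in range(cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        out.append((image[:, c:c + 1] * tones).sum(axis=0))
    return np.concatenate(out)

# A horizontal line in the image produces a steady tone
img = np.zeros((8, 8))
img[2, :] = 1.0
audio = sonify(img)
```

With simple inputs such as this line, the resulting sound is easy to interpret; as the text notes, complex images produce complicated and potentially confusing sound patterns.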
The sound produced by this method with simple images, such as a line crossing the plane of an image, can be easily analyzed; however, the sound patterns related to complex images could be complicated and confusing. Texture analysis is often confounded by other neighboring or surrounding features.

Martins et al. [447] explored the potential of auditory display procedures, including audification and sonification, for the aural presentation and analysis of texture in images. An analogy was drawn between random texture and unvoiced speech, and between periodic texture and voiced speech, in terms of generation based on the filtering of an excitation function, as shown in Figure 7.4. An audification procedure that played in sequence the projections (Radon transforms) of the given image at several angles was proposed for the auditory analysis of random texture. A linear-prediction model [510, 176, 31] was used to generate the sound signal from the projection data.

Martins et al. also proposed a sonification procedure to convert periodic texture to sound, with the emphasis on displaying the essential features of the texture element and the periodicity in the horizontal and vertical directions. Projections of the texton were used to compose sound signals including pitch, like voiced speech, as well as a rhythmic aspect, with the pitch period and rhythm related to the periodicities in the horizontal and vertical directions in the image. Data-mapping functions were designed to relate image characteristics to sound parameters in such a way that the sounds provided information in microstructure (timbre, individual pitch) and macrostructure (rhythm, melody, pitch organization) related to objective or quantitative measures of texture.

In order to verify the potential of the proposed methods for the aural analysis of texture, a set of pilot experiments was designed and presented to 10 subjects [447]. The results indicated that the methods could facilitate qualitative and comparative analysis of texture.
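The projection-based audification idea can be sketched as follows. This is our simplification: it concatenates raw, normalized projections at a few angles as a 1D "signal", whereas Martins et al. passed the projection data through a linear-prediction model to synthesize the sound; the nearest-neighbour projection routine is also an illustrative stand-in for a proper Radon transform.

```python
import numpy as np

rng = np.random.default_rng(1)
texture = rng.standard_normal((128, 128))       # stand-in random texture

def projection(img, theta_deg):
    """Nearest-neighbour Radon-style projection of a square image: each pixel
    is binned by its signed distance from the line through the centre."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    th = np.deg2rad(theta_deg)
    y, x = np.mgrid[:n, :n]
    s = (x - c) * np.cos(th) + (y - c) * np.sin(th)
    bins = np.clip(np.round(s + c).astype(int), 0, n - 1)
    proj = np.zeros(n)
    np.add.at(proj, bins.ravel(), img.ravel())
    return proj

# Audification: play the projections at several angles in sequence
angles = [0, 45, 90, 135]
signal = np.concatenate([projection(texture, a) for a in angles])
signal = signal / np.max(np.abs(signal))
```

For a texture generated by the spot-noise model, the character of these projections reflects the shape and size of the spot, which is what the listening experiments sought to convey.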
In particular, it was observed that the methods could lead to the possibility of defining a sequence or order in the case of images with random texture, and that sound-to-image association could be achieved in terms of the size and shape of the spot used to synthesize the texture. Furthermore, the proposed mapping of the attributes of periodic texture to sound attributes could permit the analysis of features such as texton size and shape, as well as periodicity, in qualitative and comparative manners. The methods could lead to the use of the auditory display of images as an adjunctive procedure to visualization.

Martins et al. [511] conducted preliminary tests on the audification of MR images using selected areas corresponding to the gray and white matter of the brain, and to normal and infarcted tissues. By using the audification method, differences between the various tissue types were easily perceived by two radiologists; visual discrimination of the same areas, while remaining within their corresponding MR-image contexts, was said to be difficult by the same radiologists. The results need to be confirmed with a larger study.

7.9 Application: Analysis of Breast Masses Using Texture and Gradient Measures

In addition to the textural changes caused by microcalcifications, the presence of spicules arising from malignant tumors causes disturbances in the homogeneity of tissues in the surrounding breast parenchyma. Based upon this observation, several studies have focused on quantifying the textural content in the mass ROI and mass margins to achieve the classification of masses versus normal tissue, as well as benign masses versus malignant tumors.

Petrosian et al. [446] investigated the usefulness of texture features based upon GCMs for the classification of masses and normal tissue. With a dataset of 135 manually segmented ROIs, the methods indicated 89% sensitivity and 76% specificity in the training step, and 76% sensitivity and 64% specificity in the test step, using the leave-one-out method.
Kinoshita et al. [512] used a combination of shape factors and texture features based on GCMs. Using a three-layer feed-forward neural network, they reported 81% accuracy in the classification of benign and malignant breast lesions with a dataset of 38 malignant and 54 benign lesions.

Chan et al. [450], Sahiner et al. [451, 513], and Wei et al. [514, 515] investigated the effectiveness of texture features derived from GCMs for differentiating masses from normal breast tissue in digitized mammograms. One hundred and sixty-eight ROIs with masses and 504 normal ROIs were examined, and eight features, including correlation, entropy, energy, inertia, inverse difference moment, sum average, sum entropy, and difference entropy, were calculated for each region. All of the ROIs were manually segmented by a radiologist. Using linear discriminant analysis, Chan et al. [450] reported an accuracy of 0.84 for the training set and 0.82 for a test set. Wei et al. [514, 515] reported improved classification results with the same dataset by applying multiresolution texture analysis. Sahiner et al. applied a convolutional neural network [513], and later used a genetic algorithm [513, 516], to classify the masses and normal tissue in the same dataset.

Analysis of the gradient or transition information present in the boundaries of masses has been attempted by a few researchers in order to arrive at benign-versus-malignant decisions. Kok et al. [517] used texture features, fractal measures, and edge-strength measures computed from suspicious regions for lesion detection. Huo et al. [518] and Giger et al. [519] extracted mass regions using region-growing methods, and proposed two spiculation measures obtained from an analysis of radial edge-gradient information surrounding the periphery of the extracted regions. Benign-versus-malignant classification studies performed using the features yielded an average efficiency of 0.85. Later on, the group reported to have achieved superior results with their computer-aided classification scheme, as compared to an expert radiologist, by employing a hybrid classifier on a test set of 95 images [520].

Highnam et al. [521] investigated the presence of a "halo", an area around a mass region with a positive Laplacian, to indicate whether a circumscribed mass is benign or malignant. They found that the extent of the halo varies between the CC and MLO views for benign masses, but is similar for malignant tumors. Guliato et al. [276, 277] proposed fuzzy region-growing methods for segmenting breast masses, and further proposed classification of the segmented masses as benign or malignant based on the transition information present around the segmented regions; see Sections 5.5 and 5.11. Rangayyan et al. [163] proposed a region-based edge-profile acutance measure for evaluating the sharpness of mass boundaries; see Sections 2.15 and 7.9.2.

Many studies have focused on transforming the space-domain intensities into other forms for analyzing gradient and texture information. Claridge and Richter [522] developed a Gaussian blur model to characterize the transitional information in the boundaries of mammographic lesions. In order to analyze the blur in the boundaries and to determine the prevailing direction of linear patterns, a polar coordinate transform was applied to map the lesion into polar coordinates. A measure of spiculation, computed from the transformed images as the ratio of the sum of vertical gradient magnitudes to the sum of horizontal gradient magnitudes, was used to discriminate between circumscribed and spiculated lesions.

Sahiner et al. [451, 516, 523] introduced the RBST method to transform a band of pixels surrounding the boundary of a segmented mass onto the Cartesian plane (see Figure 7.26). The band of pixels was extracted in the perpendicular direction from every point on the boundary. Texture features based upon GCMs computed from the RBST images resulted in an average efficiency of 0.94 in the benign-versus-malignant classification of 168 cases.
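A band-straightening transform of the kind just described can be sketched as follows. This is our simplified illustration, not the RBST implementation of Sahiner et al.: normals here are estimated crudely from the boundary's centroid (adequate for a near-circular toy mass), whereas the published method derives them from local boundary geometry.

```python
import numpy as np

def rbst(image, boundary, width=10):
    """Sample `width` pixels along the (centroid-based) normal at each
    boundary point into one column of a rectangular image, so that the
    band straddling the boundary is "straightened" into an array."""
    cy, cx = np.mean(boundary, axis=0)
    out = np.zeros((width, len(boundary)))
    for j, (by, bx) in enumerate(boundary):
        ny, nx = by - cy, bx - cx
        norm = np.hypot(ny, nx)
        ny, nx = ny / norm, nx / norm
        for i in range(width):
            y = int(round(by + (i - width // 2) * ny))
            x = int(round(bx + (i - width // 2) * nx))
            out[i, j] = image[np.clip(y, 0, image.shape[0] - 1),
                              np.clip(x, 0, image.shape[1] - 1)]
    return out

# Toy example: a bright disk (the "mass") on a dark background
n = 64
y, x = np.mgrid[:n, :n]
img = ((x - 32) ** 2 + (y - 32) ** 2 <= 15 ** 2).astype(float)
angles = np.linspace(0, 2 * np.pi, 90, endpoint=False)
boundary = np.stack([32 + 15 * np.sin(angles), 32 + 15 * np.cos(angles)], axis=1)
band = rbst(img, boundary, width=10)
```

In the straightened array, the top rows sample the inside of the mass and the bottom rows the surrounding background; spicules crossing the margin would appear as near-vertical streaks, which is what makes GCM texture features on such arrays effective.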
Sahiner et al. reported that texture analysis of RBST images yielded better benign-versus-malignant discrimination than analysis of the original space-domain images. However, such a transformation is sensitive to the precise extraction of the band of pixels surrounding the ROI; the method may face problems with masses having highly spiculated margins.

Hadjiiski et al. [524] reported on the design of a hybrid classifier (an adaptive resonance theory network cascaded with linear discriminant analysis) to classify masses as benign or malignant. They compared the performance of the hybrid classifier that they designed with a back-propagation neural network and linear discriminant classifiers, using a dataset of 348 manually segmented ROIs (169 benign and 179 malignant). Benign-versus-malignant classification using the hybrid classifier achieved a marginal improvement in performance, with an average efficiency of 0.81. The texture features used in the classifier were based upon GCMs and run-length sequences computed from the RBST images.

Giger et al. [525] classified manually delineated breast mass lesions in ultrasonographic images as benign or malignant using texture features, margin sharpness, and posterior acoustic attenuation. With a dataset of 135 ultrasound images from 39 patients, the posterior acoustic attenuation feature achieved the best benign-versus-malignant classification results, with an average efficiency of 0.84. Giger et al. reported achieving higher sensitivity and specificity levels by combining the features derived from both the mammographic and the ultrasonographic images of mass lesions, as compared to using features computed from only the mammographic mass lesions.

Mudigonda et al. [275, 165] derived measures of texture and gradient using ribbons of pixels around mass boundaries, with the hypothesis that the transitional information in a mass margin, from the inside of the mass to its surrounding tissues, is important in discriminating between benign masses and malignant tumors. The methods and results of this work are described in the following sections. See Sections 6.7, 8.8, 12.11, and 12.12 for more discussion on the detection and analysis of breast masses.

7.9.1 Adaptive normals and ribbons around mass margins

Mudigonda et al. [165, 275] obtained adaptive ribbons around the boundaries of breast masses and tumors, drawn by an expert radiologist, in the following manner. Morphological dilation and erosion operations [526] were applied to the boundary using a circular operator of a specified diameter. Figures 7.24 and 7.25 show the extracted ribbons across the boundaries of a benign mass and a malignant tumor, respectively. The width of the ribbon in each case is 8 mm across the boundary (4 mm or 80 pixels on either side of the boundary, at a resolution of 50 µm per pixel). The ribbon width of 8 mm was determined by a radiologist in order to take into account the possible depth of infiltration or diffusion of masses into the surrounding tissues.

In order to compute gradient-based measures and acutance (see Section 2.15), Mudigonda et al. developed the following procedure to extract pixels from the inside of a mass boundary to the outside, along the perpendicular direction at every point on the boundary. A polygonal model of the mass boundary, computed as described in Section 6.1.4, was used to approximate the mass boundary with a polygon of known parameters. With the known equations of the sides of the polygonal model, it is possible to estimate the normal at every point on the boundary. The length of the normal at any point on the boundary was limited to a maximum of 80 pixels (4 mm) on either side of the boundary, or the depth of the mass at that particular point. This is significant, especially in the case of spiculated tumors possessing sharp spicules or microlobulations, so that the extracted normals do not cross over into adjacent spicules or mass portions.
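The ribbon-extraction step (dilation minus erosion of the mass region with a circular operator) can be sketched with standard morphological routines; this is our toy illustration with a disk-shaped "mass", not the authors' code:

```python
import numpy as np
from scipy import ndimage

# Binary mask of a mass region (toy example: a disk)
n = 128
y, x = np.mgrid[:n, :n]
mask = ((x - 64) ** 2 + (y - 64) ** 2 <= 30 ** 2)

# Circular structuring element whose radius sets the ribbon half-width
r = 8
yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
disk = (xx ** 2 + yy ** 2) <= r ** 2

# Ribbon = dilation minus erosion of the mask: a band of pixels straddling
# the boundary, extending r pixels on either side
ribbon = ndimage.binary_dilation(mask, disk) & ~ndimage.binary_erosion(mask, disk)
```

At the 50 µm resolution cited in the text, a 4 mm half-width corresponds to a structuring-element radius of 80 pixels; the small radius here is only for the toy example.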
The normals obtained as above for a benign mass and a malignant tumor are shown in Figures 7.24 and 7.25.

FIGURE 7.24 (a) A section of a mammogram containing a circumscribed benign mass; pixel size = 50 µm. (b) Ribbon or band of pixels across the boundary of the mass, extracted by using morphological operations. (c) Pixels along the normals to the boundary, shown for every tenth boundary pixel; maximum length of the normals on either side of the boundary = 80 pixels or 4 mm. Images courtesy of N.R. Mudigonda [166]. See also Figure 12.28.

FIGURE 7.25 (a) A 630 × 560 section of a mammogram containing a spiculated malignant tumor; pixel size = 50 µm. (b) Ribbon or band of pixels across the boundary of the tumor, extracted by using morphological operations. (c) Pixels along the normals to the boundary, shown for every tenth boundary pixel; maximum length of the normals on either side of the boundary = 80 pixels or 4 mm. Images courtesy of N.R. Mudigonda [166]. See also Figure 12.28.

With an approach that is different from, but comparable to, the above, Sahiner et al. [451] formulated the RBST method to map ribbons around breast masses in mammograms into rectangular arrays; see Figure 7.26. It was expected that variations in texture due to the spicules that are commonly present around malignant tumors would be enhanced by the transform, and would lead to better discrimination between malignant tumors and benign masses. The rectangular array permitted easier and straightforward computation of texture measures.

7.9.2 Gradient and contrast measures

Due to infiltration into the surrounding tissues, malignant breast lesions often permeate larger areas than are apparent on mammograms. As a result, tumor margins in mammographic images do not present a clear-cut transition or reliable gradient information. Hence, it is difficult for an automated detection procedure to realize precisely the boundaries of mammographic masses, as there cannot be any objective measure of such precision. Furthermore, when manual segmentation is used, there are bound to be large inter-observer variations in the location of mass boundaries, due to subjective differences in notions of edge sharpness. Considering the above, it is appropriate for gradient-based measures to characterize the global gradient phenomenon in the mass margins without being sensitive to the precise location of the mass boundary.

A modified measure of edge sharpness: The subjective impression of sharpness perceived by the HVS is a function of the averaged variations in intensity between the relatively light and dark areas of an ROI. Based upon this, Higgins and Jones [115] proposed a measure of acutance to compute sharpness as the mean-squared gradient along knife-edge spread functions of photographic films. Rangayyan and Elkadiki [116] extended this concept to 2D ROIs in images; see Section 2.15 for details. Rangayyan et al. [163] used the measure to classify mammographic masses as benign or malignant: acutance was computed using directional derivatives along the perpendicular at every boundary point, by considering the inside-to-outside differences of intensities across the boundary, normalized to unit pixel distance.

The method has limitations for the following reasons. Because the derivatives were computed based on the inside-to-outside differences across the boundary, the measure is sensitive to the actual location of the boundary. Furthermore, it is sensitive to the number of differences (pixel pairs) available at a particular boundary point, which could be relatively low in the sharply spiculated portions of a malignant tumor as compared to the well-circumscribed portions of a benign mass; the measure thus becomes sensitive to shape complexity as well, which is not intended. The final acutance value for a mass ROI was obtained by normalizing the mean-squared gradient computed at all the points on the boundary with a factor dependent upon the maximum gray-level range and the maximum number of differences used in the computation of acutance.
computed at all the points on the boundary with a factor dependent upon the maximum gray-level range and the maximum number of differences used in the computation of acutance. For a particular mass under consideration, this type of normalization could result in large differences in acutance values for varying numbers of pixel pairs considered.

FIGURE 7.26: Mapping of a ribbon of pixels around a mass into a rectangular image by the rubber-band straightening transform [428, 451]. Figure courtesy of B. Sahiner, University of Michigan, Ann Arbor, MI. Reproduced with permission from B.S. Sahiner, H.P. Chan, N. Petrick, M.A. Helvie, and M.M. Goodsitt, "Computerized characterization of masses on mammograms: The rubber band straightening transform and texture analysis", Medical Physics, 25(4): 516-526, 1998. © American Association of Medical Physicists. © 2005 by CRC Press LLC.

Mudigonda et al. [165] addressed the above-mentioned drawbacks by developing a consolidated measure of directional gradient strength as follows. Given the boundary of a mass formed by N points, the first step is to compute the RMS gradient in the perpendicular direction at every point on the boundary, with a set of successive pixel pairs as made available by the ribbon-extraction method explained in Section 7.9.1. The RMS gradient d_m at the mth boundary point is obtained as

    d_m = \sqrt{ \frac{1}{p_m} \sum_{n=0}^{p_m - 1} [f_m(n) - f_m(n+1)]^2 },    (7.42)

where f_m(n), n = 0, 1, ..., p_m, are the (p_m + 1) pixels available along the perpendicular at the mth boundary point, including the boundary point. The normal extent p_m is limited to a maximum of 160 pixels (80 pixels on either side of the boundary, with the pixel size being 50 µm). A modified measure of acutance based on the directional gradient strength A_g of the ROI is computed as

    A_g = \frac{1}{N (f_{max} - f_{min})} \sum_{m=1}^{N} d_m,    (7.43)

where f_{max} and f_{min} are the local maximum and the local minimum pixel values in the ribbon of pixels extracted, and N is
the number of pixels along the boundary of the ROI. Because RMS gradients computed over several pixel pairs at each boundary point are used in the computation of A_g, the measure is expected to be stable in the presence of noise and, furthermore, is not expected to be sensitive to the actual location of the boundary. The factor (f_{max} - f_{min}) in the denominator in Equation 7.43 serves as an additional normalization factor in order to account for the changes in the gray-level contrast of images from various databases; it also normalizes the A_g measure to the range [0, 1].

Coefficient of variation of gradient strength: In the presence of objects with fuzzy backgrounds, as is the case in mammographic images, the mean-squared gradient as a measure of sharpness may not result in adequate confidence intervals for the purposes of pattern classification. Hence, statistical measures need to be adopted to characterize the feeble gradient variations across mass margins. Considering this notion, Mudigonda et al. [165] proposed a feature based on the coefficient of variation of the edge-strength values computed at all points on a mass boundary. The stated purpose of this feature was to investigate the variability in the sharpness of a mass around its boundary, in addition to the evaluation of its average sharpness with the measure A_g.

Variance is a statistical measure of signal strength, and can be used as an edge detector because it responds to boundaries between regions of different brightness [527]. In the procedure proposed by Mudigonda et al., the variance (\sigma_w^2) localized in a moving window of an odd number of pixels (M) in the perpendicular direction at a boundary pixel is computed as

    \sigma_w^2 = \frac{1}{M} \sum_{n=-\lfloor M/2 \rfloor}^{\lfloor M/2 \rfloor} [f_m(n) - \mu_w]^2,    (7.44)

where f_m(n), n = 0, 1, ..., p_m, are the pixels considered at the mth boundary point in the perpendicular direction (with n indexed relative to the center of the window in the sum above), and \mu_w is the running mean intensity in the selected window:

    \mu_w = \frac{1}{M} \sum_{n=-\lfloor M/2 \rfloor}^{\lfloor M/2 \rfloor} f_m(n).    (7.45)

The window is
moved over the entire range of pixels made available at a particular boundary point by the ribbon-extraction method described in Section 7.9.1. The maximum of the variance values thus computed is used to represent the edge strength at the boundary point being processed. The coefficient of variation (G_cv) of the edge-strength values for all the points on the boundary is then computed. The measure is not sensitive to the actual location of the boundary within the selected ribbon, and is normalized so as to be applicable to a mixture of images from different databases.

7.9.3 Results of pattern classification

In the work of Mudigonda et al. [165, 166], four GCMs were constructed by scanning each mass ROI or ribbon in the 0°, 45°, 90°, and 135° directions with unit-pixel distance (d = 1). Five of Haralick's texture features, defined as F1, F2, F3, F5, and F9 in Section 7.3.2, were computed for the four GCMs, thus resulting in a total of 20 texture features for each ROI or ribbon. A pixel distance of d = 1 is preferred to ensure large numbers of co-occurrences derived from the ribbons of pixels extracted from mass margins. Texture features computed from GCMs constructed for larger distances (up to d = 10 pixels, with the resolution of the images being 50 or 62 µm) were found to possess a high degree of correlation (0.9 and higher) with the corresponding features computed for unit-pixel distance (d = 1). Hence, pattern classification experiments were not carried out with the GCMs constructed using larger distances.

In addition to the texture features described above, the two gradient-based features A_g and G_cv were computed from adaptive ribbons extracted around the boundaries of 53 mammographic ROIs, including 28 benign masses and 25 malignant tumors. Three leading features (with canonical coefficients greater than 1), including two texture measures of correlation (d = 1 at 90° and 45°) and a measure of inverse difference moment (d = 1 at 0°),
were selected from the 20 texture features computed from the ribbons. The classification accuracy was found to be the maximum with the three features listed above. The two most effective features selected for analyzing the mass ROIs included two measures of correlation (d = 1 at 90° and 135°). Pattern classification experiments with 38 masses from the MIAS database [376] (28 benign and 10 malignant) indicated average accuracies of 68.4% and 78.9% using the texture features computed with the entire mass ROIs and the adaptive ribbons around the boundaries, respectively. This result supports the hypothesis that discriminant information is contained around the margins of breast masses rather than within the masses. With the extended database of 53 masses (28 benign and 25 malignant), and with features computed using the ribbons around the boundaries, the classification accuracies with the gradient and texture features as well as their combination were 66%, 75.5%, and 73.6%, respectively. (The areas under the receiver operating characteristic curves were, respectively, 0.71, 0.80, and 0.81; see Section 12.8.1 for details on this method.)
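The ribbon-based gradient measures described above can be sketched in code. The following is a minimal NumPy illustration of Equations 7.42 through 7.44, assuming the pixels along each boundary normal have already been extracted as 1D arrays (the ribbon-extraction procedure of Section 7.9.1 is not reproduced); the window size M = 5 is an arbitrary choice for this sketch, not a value from the text.

```python
import numpy as np

def rms_gradient(normal_pixels):
    """RMS gradient d_m along one boundary normal (Equation 7.42)."""
    f = np.asarray(normal_pixels, dtype=float)
    p_m = len(f) - 1                      # number of successive pixel pairs
    diffs = f[:-1] - f[1:]
    return np.sqrt(np.sum(diffs ** 2) / p_m)

def acutance_Ag(ribbon_normals):
    """Directional gradient strength A_g (Equation 7.43).

    ribbon_normals: one 1D array of gray levels per boundary point,
    sampled along the perpendicular within the extracted ribbon.
    """
    d = np.array([rms_gradient(f) for f in ribbon_normals])
    pixels = np.concatenate([np.asarray(f, float) for f in ribbon_normals])
    # normalize the mean RMS gradient by the ribbon's gray-level range
    return d.mean() / (pixels.max() - pixels.min())

def gradient_cv(ribbon_normals, M=5):
    """Coefficient of variation G_cv of edge strength: at each boundary
    point, edge strength is the maximum variance (Equation 7.44) over a
    moving window of M pixels along the normal."""
    strengths = []
    for f in ribbon_normals:
        f = np.asarray(f, dtype=float)
        h = M // 2
        windows = [f[c - h:c + h + 1] for c in range(h, len(f) - h)]
        strengths.append(max(np.var(w) for w in windows))
    s = np.array(strengths)
    return s.std() / s.mean()
```

As expected from the discussion above, a step-like (sharp) margin yields a larger A_g than a gradually varying margin, and identical normals all around the boundary give G_cv = 0.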
The gradient features were observed to increase the sensitivity, but reduce the specificity, when combined with the texture features. In a different study, Alto et al. [528, 529] obtained benign-versus-malignant classification accuracies of up to 78.9% with acutance (as in Equation 2.110), 66.7% with Haralick's texture measures, and 98.2% with shape factors, applied to a different database of 57 breast masses and tumors. Although combinations of the features did not result in higher pattern classification accuracy, advantages were observed in experiments on content-based retrieval (see Section 12.12).

In experiments conducted by Sahiner et al. [428] with automatically extracted boundaries of 249 mammographic masses, Haralick's texture measures individually provided classification accuracies of up to only 0.66, whereas the Fourier-descriptor-based shape factor defined in Equation 6.58 gave an accuracy of 0.82 (the highest among 13 shape features, 13 texture features, and five run-length statistics). Each texture feature was computed using the RBST method [451] (see Figure 7.26) in four directions and for 10 distances. However, the full set of the shape factors provided an average accuracy of 0.85, the texture feature set provided the same accuracy, and the combination of the shape and texture feature sets provided an improved accuracy of 0.89. These results indicate the importance of including features from a variety of perspectives and image characteristics in pattern classification.

See Sections 5.5, 5.11, and 8.8 for discussions on the detection of masses in mammograms; Sections 6.7 and 12.11 for details on shape analysis of masses; and Section 12.12 for a discussion on the application of texture measures for content-based retrieval and classification of mammographic masses.

7.10 Remarks

In this chapter, we have examined the nature of texture in biomedical images, and studied several methods to characterize texture. We have also noted numerous
applications of texture analysis in the classification of biomedical images. Depending upon the nature of the images on hand, and the anticipated textural differences between the various categories of interest, one may have to use combinations of several measures of texture and contour roughness (see Chapter 6) in order to obtain acceptable results. Relating statistical and computational representations of texture to visually perceived patterns or expert opinion could be a significant challenge in medical applications. See Ojala et al. [530] for a comparative analysis of several methods for the analysis of texture. See Chapter 12 for examples of pattern classification via texture analysis.

Texture features may also be used to partition or segment multitextured images into their constituent parts, and to derive information regarding the shape, orientation, and perspective of objects. Haralick and Shapiro [440] (Chapter 9) describe methods for the derivation of the shape and orientation of 3D objects or terrains via the analysis of variations in texture. Examples of oriented texture were presented in this chapter. Given the importance of oriented texture and patterns with directional characteristics in biomedical images, Chapter 8 is devoted completely to the analysis of oriented patterns.

7.11 Study Questions and Problems

Selected data files related to some of the problems and exercises are available at the site www.enel.ucalgary.ca/People/Ranga/enel697

1. Explain the manner in which (a) the variance, (b) the entropy, and (c) the skewness of the histogram of an image can represent texture. Discuss the limitations of measures derived from the histogram of an image in the representation of texture.

2. What are the main similarities and differences between the histogram and a gray-level co-occurrence matrix of an image? What are the orders of these two measures in terms of PDFs?
3. Explain why gray-level co-occurrence matrices need to be estimated for several values of displacement (distance) and angle.

4. Explain how shape complexity and texture (gray-level) complexity complement each other. (You may use a tumor as an example.)

5. Sketch two examples of fractals, in the sense of self-similar nested patterns, in biomedical images.

7.12 Laboratory Exercises and Projects

1. Visit a medical imaging facility and a pathology laboratory. Collect examples of images with (a) random texture, (b) oriented texture, and (c) ordered texture. Respect the priority, privacy, and confidentiality of patients. Request a radiologist, a technologist, or a pathologist to explain how he or she interprets the images. Obtain information on the differences between normal and abnormal (disease) patterns in different types of samples and tests. Collect a few sample images for use in image processing experiments, after obtaining the necessary permissions and ensuring that you carry no patient identification out of the laboratory.

2. Compute the log-magnitude Fourier spectra of the images you obtained in Exercise 1. Study the nature of the spectra and relate their characteristics to the nature of the texture observed in the images.

3. Derive the histograms of the images you obtained in Exercise 1. Compute (a) the variance, (b) the entropy, (c) the skewness, and (d) the kurtosis of the histograms. Relate the characteristics of the histograms and the values of the parameters listed above to the nature of the texture observed in the images.

4. Write a program to estimate the fractal dimension of an image using the method given by Equation 7.39. Compute the fractal dimension of the images you obtained in Exercise 1. Interpret the results and relate them to the nature of the texture observed in the images.

[...]
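As a starting point for Exercise 3, the histogram-based features can be computed as follows; a minimal NumPy sketch that treats the normalized histogram as a PDF, with the skewness and kurtosis taken as the normalized third and fourth central moments (Equations 7.4 and 7.5).

```python
import numpy as np

def histogram_texture_features(image, L=256):
    """Variance, entropy (in bits), skewness, and kurtosis of the
    normalized gray-level histogram of an image with L gray levels."""
    img = np.asarray(image).ravel()
    p = np.bincount(img, minlength=L) / img.size   # histogram as a PDF
    levels = np.arange(L)
    mean = np.sum(levels * p)
    m2 = np.sum((levels - mean) ** 2 * p)          # variance
    m3 = np.sum((levels - mean) ** 3 * p)          # third central moment
    m4 = np.sum((levels - mean) ** 4 * p)          # fourth central moment
    nz = p[p > 0]                                  # skip empty bins
    return {
        "variance": m2,
        "entropy": -np.sum(nz * np.log2(nz)),
        "skewness": m3 / m2 ** 1.5,
        "kurtosis": m4 / m2 ** 2,
    }
```

For a two-level image with equal numbers of pixels at gray levels 0 and 255, the sketch gives an entropy of 1 bit and zero skewness, as the symmetric, two-bin PDF suggests.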
[...] counterpart.

The difference entropy measure is defined as

    F_{11} = - \sum_{k=0}^{L-1} p_{x-y}(k) \log_2 [p_{x-y}(k)].    (7.20)

Two information-theoretic measures of correlation are defined as

    F_{12} = \frac{H_{xy} - H_{xy1}}{\max\{H_x, H_y\}}    (7.21)

and

    F_{13} = \left\{ 1 - \exp[-2 (H_{xy2} - H_{xy})] \right\}^{1/2},    (7.22)

where H_{xy} = F_9; H_x and H_y are the entropies of p_x and p_y, respectively; and H_{xy1} = - ... and H_{xy2} = - ...

[...] X-ray images of the calcaneus (heel) bone, and showed that the value was decreased by injury and osteoporosis, indicating reduced complexity of structure (increased gaps) as compared to normal bone. Saparin et al. [479], using symbol dynamics and measures of complexity, found that the complexity of the trabecular structure in bone declines more rapidly ...

[...] The sum variance measure is defined as

    F_7 = \sum_{k=0}^{2(L-1)} (k - F_6)^2 \, p_{x+y}(k).    (7.17)

The sum entropy feature F_8 is given by

    F_8 = - \sum_{k=0}^{2(L-1)} p_{x+y}(k) \log_2 [p_{x+y}(k)].    (7.18)

Entropy, a measure of nonuniformity in the image or the complexity of the texture, is defined as

    F_9 = - \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} p(l_1, l_2) \log_2 [p(l_1, l_2)].    (7.19)

The difference variance measure F_{10} is defined as the variance of p_{x-y}, in a manner similar to that given by Equations 7.16 and ...

[...] The skewness and kurtosis, given by

    skewness = \frac{m_3}{m_2^{3/2}}    (7.4)

and

    kurtosis = \frac{m_4}{m_2^2},    (7.5)

indicate the asymmetry and uniformity (or lack thereof) of the PDF. High-order moments are affected significantly by noise or error in the PDF, and may not be reliable features. The moments of the PDF can only serve as basic representatives of gray-level variation. Byng et al. [448] computed the skewness of the histograms of 24 × 24 pixel (3.12 × 3.12 mm) ...
[...] co-occurrence of gray levels, known as the gray-level co-occurrence matrix (GCM). GCMs are also known as spatial gray-level dependence (SGLD) matrices, and may be computed for various orientations and distances. The GCM P_{(d, \theta)}(l_1, l_2) represents the probability of occurrence of the pair of gray levels (l_1, l_2) separated by a given distance d at angle \theta. GCMs are constructed by mapping the gray-level co-occurrence ...

[...] for the elements along the diagonal corresponding to the gray levels present in the texture elements. A measure of association is the \chi^2 statistic, which may be expressed using the notation above as

    \chi^2 = \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} \frac{[p(l_1, l_2) - p_x(l_1) \, p_y(l_2)]^2}{p_x(l_1) \, p_y(l_2)}.    (7.26)

The measure may be normalized by dividing by L, and expected to possess ...

[...] as

    F_2 = \sum_{k=0}^{L-1} k^2 \underbrace{\sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1}}_{|l_1 - l_2| = k} p(l_1, l_2).    (7.12)

The correlation measure F_3, which represents linear dependencies of gray levels, is defined as

    F_3 = \frac{1}{\sigma_x \sigma_y} \left[ \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} l_1 \, l_2 \, p(l_1, l_2) - \mu_x \mu_y \right],    (7.13)

where \mu_x and \mu_y are the means, and \sigma_x and \sigma_y are the standard deviations, of p_x and p_y, respectively. The sum of squares ...
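The construction of a GCM and the computation of F2, F3, and F9 from it can be sketched as follows; a minimal NumPy illustration for a single displacement, assuming a small number of gray levels L and symmetric counting (both directions of the displacement are tallied), which is one common convention rather than necessarily the one used in this book.

```python
import numpy as np

def gcm(image, dx, dy, L=8):
    """Gray-level co-occurrence matrix P_(d,theta)(l1, l2) for the
    displacement (dx, dy), normalized to a joint PDF."""
    img = np.asarray(image)
    P = np.zeros((L, L))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
                P[img[r2, c2], img[r, c]] += 1   # symmetric entry
    return P / P.sum()

def haralick_f2_f3_f9(P):
    """Contrast F2 (Eq. 7.12), correlation F3 (Eq. 7.13), and
    entropy F9 (Eq. 7.19) from a normalized GCM."""
    L = P.shape[0]
    l1, l2 = np.indices((L, L))
    F2 = np.sum((l1 - l2) ** 2 * P)              # same as Eq. 7.12
    px, py = P.sum(axis=1), P.sum(axis=0)        # marginal PDFs
    mx, my = np.sum(np.arange(L) * px), np.sum(np.arange(L) * py)
    sx = np.sqrt(np.sum((np.arange(L) - mx) ** 2 * px))
    sy = np.sqrt(np.sum((np.arange(L) - my) ** 2 * py))
    F3 = (np.sum(l1 * l2 * P) - mx * my) / (sx * sy)
    nz = P[P > 0]
    F9 = -np.sum(nz * np.log2(nz))
    return F2, F3, F9
```

For a 2 × 2 checkerboard of gray levels 0 and 1 with a horizontal displacement, the sketch gives F2 = 1, F3 = -1 (perfect anti-correlation of neighboring gray levels), and F9 = 1 bit.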
[...] that they have high values for different regions of the original image possessing different types of texture (edges and waves, respectively). Feature vectors composed of the values of various Laws' operators for each pixel may be used for classifying the image into texture categories on a pixel-by-pixel basis. The results may be used for texture segmentation and recognition. In an example provided by Laws ...

[...] is unity, that of a circle or a 2D perfectly planar (sheet-like) object is two, and that of a sphere is three. As the irregularity or complexity of a pattern increases, its fractal dimension increases up to its own Euclidean dimension d_E plus one. The fractal dimension of a jagged, rugged, convoluted, kinky, or crinkly curve will be greater than unity, and reaches the value of two as its complexity increases.
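The relationship between complexity and fractal dimension described above can be illustrated with a simple box-counting estimate; this is a hedged sketch of the general box-counting idea, not necessarily the specific method of Equation 7.39 in the text.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary pattern:
    count occupied boxes N(s) at each box size s, then fit the slope of
    log N(s) versus log(1/s)."""
    img = np.asarray(binary_image, dtype=bool)
    counts = []
    for s in box_sizes:
        # trim so the image tiles exactly into s x s boxes
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)),
                          np.log(counts), 1)
    return slope
```

Consistent with the Euclidean cases cited above, a straight line in a binary image yields a dimension of about 1, and a filled square yields a dimension of about 2.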