The Nature of Biomedical Images (Part 5)

5 Detection of Regions of Interest

Although a physician or a radiologist, of necessity, will carefully examine an image on hand in its entirety, more often than not, diagnostic features of interest manifest themselves in local regions. It is uncommon for a condition or disease to alter an image over its entire spatial extent. In a screening situation, the radiologist scans the entire image and searches for features that could be associated with disease. In a diagnostic situation, the medical expert concentrates on the region of suspected abnormality, and examines its characteristics to decide whether the region exhibits signs related to a particular disease. In the CAD environment, one of the roles of image processing is to detect the region of interest (ROI) for a given, specific, screening or diagnostic application. Once the ROIs have been detected, the subsequent tasks relate to the characterization of the regions and their classification into one of several categories. A few examples of ROIs in different biomedical imaging and image analysis applications are listed below:

- Cells in cervical-smear test images (Papanicolaou or Pap-smear test) [272, 273].
- Calcifications in mammograms [274].
- Tumors and masses in mammograms [275, 276, 277].
- The pectoral muscle in mammograms [278].
- The breast outline or skin-air boundary in mammograms [279].
- The fibroglandular disc in mammograms [280].
- The airway tree in the lungs.
- The arterial tree in the lungs.
- The arterial tree of the left ventricle, and constricted parts of the same due to plaque development.

Segmentation is the process that divides an image into its constituent parts, objects, or ROIs. Segmentation is an essential step before the description, recognition, or classification of an image or its constituents. Two major approaches to image segmentation are based on the detection of the following characteristics:

- Discontinuity: abrupt changes in gray level (corresponding to edges) are detected.
- Similarity: homogeneous parts are detected, based on gray-level thresholding, region growing, and region splitting/merging.

Depending upon the nature of the images and the ROIs, we may attempt to detect the edges of the ROIs (if distinct edges are present), or we may attempt to grow regions to approximate the ROIs. It should be borne in mind that, in some cases, an ROI may be composed of several disjoint component areas (for example, a tumor that has metastasized into neighboring regions, or calcifications in a cluster). Edges that are detected may include disconnected parts that may have to be matched and joined. We shall explore several techniques of this nature in the present chapter.

Notwithstanding the stated interest in local regions, applications exist where entire images need to be analyzed for global changes in patterns: for example, changes in the orientational structure of collagen fibers in ligaments (see Figure 1.8), and bilateral asymmetry in mammograms (see Section 8.9). Furthermore, in the case of clustered calcifications in mammograms, cells in cervical smears, and other examples of images with multicomponent ROIs, analysis may commence with the detection of single units of the pattern of interest, but several such units present in a given image may need to be analyzed, separately and together, in order to reach a decision regarding the case.
5.1 Thresholding and Binarization

If the gray levels of the objects of interest in an image are known from prior knowledge, or can be determined from the histogram of the given image, the image may be thresholded to detect the features of interest and reject other details. For example, if it is known that the objects of interest in the image have gray-level values greater than L1, we could create a binary image for display as

g(m, n) = 255 if f(m, n) >= L1, and g(m, n) = 0 if f(m, n) < L1,   (5.1)

where f(m, n) is the original image, g(m, n) is the thresholded image to be displayed, and the display range is [0, 255]. See also Section 4.4.1. Methods for the derivation of optimal thresholds are described in Sections 5.4.1, 8.3.2, and 8.7.2.

Example: Figure 5.1 (a) shows a TEM image of a ligament sample demonstrating collagen fibers in cross-section; see Section 1.4. Inspection of the histogram of the image (shown in Figure 2.12) shows that the sections of the collagen fibers in the image have gray-level values less than about 180; values greater than this level represent the brighter background in the image. The histogram also indicates that the gray-level ranges of the collagen-fiber regions and the background overlap significantly. Figure 5.1 (b) shows a thresholded version of the image in (a), with all pixels less than 180 appearing in black, and all pixels above this level appearing in white. This operation is the same as the thresholding operation given by Equation 5.1, but in the opposite sense. Most of the collagen fiber sections have been detected by the thresholding operation. However, some of the segmented regions are incomplete or contain holes, whereas some parts that appear to be separate and distinct in the original image have been merged in the result. An optimal threshold, derived using the methods described in Sections 5.4.1, 8.3.2, and 8.7.2, could lead to better results.

FIGURE 5.1 (a) TEM image of collagen fibers in a scar-tissue sample from a rabbit ligament at a magnification of approximately 30,000. See also Figure 1.5. Image courtesy of C.B. Frank, Department of Surgery, University of Calgary. See Figure 2.12 for the histogram of the image. (b) Image in (a) thresholded at the gray level of 180.
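The operation of Equation 5.1 reduces to a single comparison per pixel. A minimal NumPy sketch follows; the function name, the example array, and the threshold of 128 are illustrative only, and for the collagen image of Figure 5.1 the comparison would be applied in the opposite sense (pixels below 180 mapped to black).

```python
import numpy as np

def threshold_for_display(f, l1):
    """Binarize an image for display as in Equation 5.1: pixels with
    gray level >= l1 are set to 255, all others to 0."""
    f = np.asarray(f)
    return np.where(f >= l1, 255, 0).astype(np.uint8)

# Small synthetic example with an arbitrary threshold of 128.
f = np.array([[ 10, 200, 130,  90],
              [ 50, 180, 127, 128],
              [255,   0,  64, 192],
              [ 30, 140, 210,  70]])
print(threshold_for_display(f, 128))
```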
5.2 Detection of Isolated Points and Lines

Isolated points may exist in images due to noise or due to the presence of small particles in the image. The detection of isolated points is useful in noise removal and in the analysis of particles. The following convolution mask may be used to detect isolated points [8]:

  -1  -1  -1
  -1   8  -1
  -1  -1  -1      (5.2)

The operation computes the difference between the current pixel at the center of the mask and the average of its 8-connected neighbors (up to a scale factor). (The mask could also be seen as a generalized version of the Laplacian mask in Equation 2.83.) The result of the mask operation could be thresholded to detect isolated pixels, for which the difference computed would be large.

Straight lines or line segments oriented at 0°, 45°, 90°, and 135° may be detected by using the following 3 × 3 convolution masks [8]:

  0°:            45°:           90°:           135°:
  -1 -1 -1       -1 -1  2       -1  2 -1        2 -1 -1
   2  2  2       -1  2 -1       -1  2 -1       -1  2 -1
  -1 -1 -1        2 -1 -1       -1  2 -1       -1 -1  2      (5.3)

A line may be said to exist in the direction for which the corresponding mask provides the largest response.
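A sketch of how the masks of Equations 5.2 and 5.3 can be applied with SciPy convolution. The masks are written in their commonly used forms; any difference in sign or overall scale relative to the book does not affect detection based on the magnitude of the response. The function names and the threshold parameter are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

POINT_MASK = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)

LINE_MASKS = {
      0: np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], dtype=float),
     45: np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], dtype=float),
     90: np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], dtype=float),
    135: np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], dtype=float),
}

def detect_isolated_points(f, threshold):
    """Mark pixels whose point-mask response magnitude exceeds a threshold."""
    response = convolve(np.asarray(f, dtype=float), POINT_MASK, mode="nearest")
    return np.abs(response) >= threshold

def dominant_line_direction(f):
    """For each pixel, return the orientation (in degrees) whose line mask
    gives the largest response magnitude."""
    f = np.asarray(f, dtype=float)
    responses = np.stack([np.abs(convolve(f, m, mode="nearest"))
                          for m in LINE_MASKS.values()])
    angles = np.array(list(LINE_MASKS.keys()))
    return angles[np.argmax(responses, axis=0)]
```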
5.3 Edge Detection

One of the approaches to the detection of an ROI is to detect its edges. The HVS is particularly sensitive to edges and gradients, and some theories and experiments indicate that the detection of edges plays an important role in the detection of objects and the analysis of scenes [122, 281, 282]. In Section 2.11.1, on the properties of the Fourier transform, we saw that the first-order derivatives and the Laplacian relate to the edges in the image. Furthermore, we saw that these space-domain operators have equivalent formulations in the frequency domain as highpass filters with gain proportional to frequency in a linear or quadratic manner. The enhancement techniques described in Sections 4.6 and 4.7 further strengthen the relationship between edges, gradients, and high-frequency spectral components. We shall now explore how these approaches may be extended to detect the edges or contours of objects or regions. (Note: Some authors consider edge extraction to be a type of image enhancement.)

5.3.1 Convolution mask operators for edge detection

An edge is characterized by a large change in the gray level from one side to the other, in a particular direction dependent upon the orientation of the edge. Gradients or derivatives measure the rate of change, and hence could serve as the basis for the development of methods for edge detection. The first derivatives in the x and y directions, approximated by the first differences, are given by (using matrix notation)

f_yb(m, n) = f(m, n) - f(m-1, n),   f_xb(m, n) = f(m, n) - f(m, n-1),   (5.4)

where the additional subscript b indicates a backward-difference operation. Because causality is usually not a matter of concern in image processing, the differences may also be defined as

f_yf(m, n) = f(m+1, n) - f(m, n),   f_xf(m, n) = f(m, n+1) - f(m, n),   (5.5)

where the additional subscript f indicates a forward-difference operation.

A limitation of the operators as above is that they are based upon the values of only two pixels; this makes the operators susceptible to noise or spurious pixel values. A simple approach to design robust operators and reduce the sensitivity to noise is to incorporate averaging over multiple measurements. Averaging the two definitions of the derivatives in Equations 5.4 and 5.5, we get

f_ya(m, n) = 0.5 [f(m+1, n) - f(m-1, n)],   f_xa(m, n) = 0.5 [f(m, n+1) - f(m, n-1)],   (5.6)

where the additional subscript a indicates the inclusion of averaging.

In image processing, it is also desirable to express operators in terms of odd-sized masks that may be centered upon the pixel being processed. The Prewitt operators take these considerations into account with the following 3 × 3 masks for the horizontal and vertical derivatives Gx and Gy, respectively:

  Gx:   -1   0   1
        -1   0   1
        -1   0   1      (5.7)

  Gy:   -1  -1  -1
         0   0   0
         1   1   1      (5.8)

The Prewitt operators use three differences across pairs of pixels in three rows or columns around the pixel being processed. Due to this fact, and due to the scale factor of 0.5 in Equation 5.6, in order to derive the exact gradient, the results of the Prewitt operators should be divided by 6Δ, where Δ is the sampling interval in x and y; however, this step could be ignored if the result is scaled for display or thresholded to detect edges.

In order to accommodate the orientation of the edge, a vectorial form of the gradient could be composed as

G_f(m, n) = G_fx(m, n) + j G_fy(m, n),   (5.9)

where

||G_f(m, n)|| = sqrt[ G_fx^2(m, n) + G_fy^2(m, n) ],   angle[G_f(m, n)] = arctan[ G_fy(m, n) / G_fx(m, n) ],   (5.10)

and

G_fx(m, n) = (f * Gx)(m, n),   G_fy(m, n) = (f * Gy)(m, n).   (5.11)

If the magnitude is to be scaled for display or thresholded for the detection of edges, the square-root operation may be dropped, or the magnitude may be approximated as |G_fx| + |G_fy| in order to save computation.

The Sobel operators are similar to the Prewitt operators, but include larger weights for the pixels in the row or column of the pixel being processed:

  Gx:   -1   0   1
        -2   0   2
        -1   0   1      (5.12)

  Gy:   -1  -2  -1
         0   0   0
         1   2   1      (5.13)

Edges oriented at 45° and 135° may be detected by using rotated versions of the masks above. The Prewitt operators for the detection of diagonal edges are

  G_45°:    0   1   1
           -1   0   1
           -1  -1   0      (5.14)

  G_135°:   1   1   0
            1   0  -1
            0  -1  -1      (5.15)

Similar masks may be derived for the Sobel operator. (Note: The positive and negative signs of the elements in the masks above may be interchanged to obtain operators that detect gradients in the opposite directions. This step is not necessary if directions are considered in the range 0°-180° only, or if only the magnitudes of the gradients are required.)
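The gradient computation of Equations 5.9 to 5.11 can be sketched as follows with either the Prewitt or the Sobel masks; the `approximate` option implements the |Gfx| + |Gfy| shortcut mentioned above. This is an illustrative sketch, not code from the book.

```python
import numpy as np
from scipy.ndimage import convolve

# Masks of Equations 5.7-5.8 (Prewitt) and 5.12-5.13 (Sobel).
PREWITT_GX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PREWITT_GY = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=float)
SOBEL_GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_GY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def gradient(f, gx=SOBEL_GX, gy=SOBEL_GY, approximate=False):
    """Gradient magnitude and direction as in Equations 5.9-5.11.
    Convolution flips the mask, which only changes the sign of the
    components; the magnitude is unaffected.  If `approximate` is True,
    |Gfx| + |Gfy| is used instead of the root of the sum of squares."""
    f = np.asarray(f, dtype=float)
    gfx = convolve(f, gx, mode="nearest")
    gfy = convolve(f, gy, mode="nearest")
    magnitude = np.abs(gfx) + np.abs(gfy) if approximate else np.hypot(gfx, gfy)
    direction = np.arctan2(gfy, gfx)    # radians, in (-pi, pi]
    return magnitude, direction
```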
Observe that the sum of all of the weights in each of the masks above is zero. This indicates that the operation being performed is a derivative or gradient operation, which leads to zero output values in areas of constant gray level, and to the loss of intensity information.

The Roberts operator uses 2 × 2 neighborhoods to compute cross-differences with the masks

   1   0          0   1
   0  -1         -1   0      (5.16)

The masks are positioned with the upper-left element placed on the pixel being processed. The absolute values of the results of the two operators are added to obtain the net gradient:

g(m, n) = |f(m+1, n+1) - f(m, n)| + |f(m+1, n) - f(m, n+1)|,   (5.17)

with the indices in matrix-indexing notation. The individual differences may also be squared, and the square root of their sum taken to be the net gradient. The advantage of the Roberts operator is that it is a forward-looking operator, as a result of which the result may be written in the same array as the input image; this was advantageous when computer memory was expensive and in short supply.
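A direct NumPy translation of Equation 5.17, using array slicing instead of explicit 2 × 2 masks (an illustrative sketch):

```python
import numpy as np

def roberts(f):
    """Roberts cross-difference operator of Equation 5.17:
    g(m, n) = |f(m+1, n+1) - f(m, n)| + |f(m+1, n) - f(m, n+1)|.
    The output is one row and one column smaller than the input
    because the operator only looks forward."""
    f = np.asarray(f, dtype=float)
    d1 = f[1:, 1:] - f[:-1, :-1]    # f(m+1, n+1) - f(m, n)
    d2 = f[1:, :-1] - f[:-1, 1:]    # f(m+1, n) - f(m, n+1)
    return np.abs(d1) + np.abs(d2)
```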
Examples: Figure 5.2 (a) shows the Shapes test image. Part (b) of the figure shows the gradient magnitude image, obtained by combining, as in Equation 5.9, the horizontal and vertical derivatives shown in parts (c) and (d) of the figure, respectively. The image in part (c) presents high values (positive or negative) at vertical edges only; horizontally oriented edges have been deleted by the horizontal derivative operator. The image in part (d) shows high output at horizontal edges, with the vertically oriented edges having been removed by the vertical derivative operator. The test image has strong edges for most of the objects present, which are clearly depicted in the derivative images; however, the derivative images also show the edges of a few objects that are not readily apparent in the original image. Parts (e) and (f) of the figure show the derivatives at 45° and 135°, respectively; the images indicate the diagonal edges present in the image.

Figures 5.3, 5.4, and 5.5 show similar sets of results for the clock, the knee MR, and the chest X-ray test images, respectively. In the derivatives of the clock image, observe that the numeral "1" has been obliterated by the vertical derivative operator [Figure 5.3 (d)], but gives rise to high output values for the horizontal derivative [Figure 5.3 (c)]. The clock image has the minute hand oriented at approximately 135° with respect to the horizontal; this feature has been completely removed by the 135° derivative operator, as shown in Figure 5.3 (f), but has been enhanced by the 45° derivative operator, as shown in Figure 5.3 (e). The knee MR image contains sharp boundaries that are depicted well in the derivative images in Figure 5.4. The derivative images of the chest X-ray image in Figure 5.5 indicate large values at the boundaries of the image, but depict the internal details with weak derivative values, indicative of the smooth nature of the image.

FIGURE 5.2 (a) Shapes test image. (b) Gradient magnitude, display range [0, 400] out of [0, 765]. (c) Horizontal derivative, display range [-200, 200] out of [-765, 765]. (d) Vertical derivative, display range [-200, 200] out of [-765, 765]. (e) 45° derivative, display range [-200, 200] out of [-765, 765]. (f) 135° derivative, display range [-200, 200] out of [-765, 765].

FIGURE 5.3 (a) Clock test image. (b) Gradient magnitude, display range [0, 100] out of [0, 545]. (c) Horizontal derivative, display range [-100, 100] out of [-538, 519]. (d) Vertical derivative, display range [-100, 100] out of [-446, 545]. (e) 45° derivative, display range [-100, 100] out of [-514, 440]. (f) 135° derivative, display range [-100, 100] out of [-431, 535].

5.3.2 The Laplacian of Gaussian

Although the Laplacian is a gradient operator, it should be recognized that it is a second-order difference operator. As we observed in Sections 2.11.1 and 4.6, this leads to double-edged outputs with positive and negative values at each edge; this property is demonstrated further by the example in Figure 5.6 (see also Figure 4.26). The Laplacian has the advantage of being omnidirectional, that is, of being sensitive to edges in all directions; however, it is not possible to derive the angle of an edge from the result. The operator is also sensitive to noise, because no averaging is included in the operator; its gain in the frequency domain increases quadratically with frequency, causing significant amplification of high-frequency noise components. For these reasons, the Laplacian is not directly useful in edge detection.

The double-edged output of the Laplacian indicates an important property of the operator: the result possesses a zero-crossing between the positive and negative outputs across an edge, and the property holds even when the edge in the original image is significantly blurred. This property is useful in the development of robust edge detectors. The noise sensitivity of the Laplacian [...]
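The zero-crossing property described above is the basis of LoG edge detection and of the multiscale scale-space methods excerpted in the fragments near the end of this document. The following sketch marks the zero-crossings of the LoG response at a single scale; the neighbor-pair sign test and the `min_swing` parameter are choices of this illustration, not prescriptions from the book.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(f, sigma=2.0, min_swing=0.0):
    """Zero-crossings of the LoG of an image at one scale.  A pixel is
    marked when its LoG response and that of a horizontal or vertical
    neighbor have opposite signs; `min_swing` optionally suppresses
    weak crossings caused by noise."""
    log = gaussian_laplace(np.asarray(f, dtype=float), sigma=sigma)
    zc = np.zeros(log.shape, dtype=bool)
    horiz = (log[:, :-1] * log[:, 1:] < 0) & \
            (np.abs(log[:, :-1] - log[:, 1:]) >= min_swing)
    vert = (log[:-1, :] * log[1:, :] < 0) & \
           (np.abs(log[:-1, :] - log[1:, :]) >= min_swing)
    zc[:, :-1] |= horiz
    zc[:-1, :] |= vert
    return zc
```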
[...]

FIGURE 5.78 The regions where the six basic fusion operators are applied are indicated by {h1, h2, h3, h4, h5, h6}. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3):379-389, 2003. © SPIE and IS&T.

- A finite input alphabet {I1, I2, I3, I4}. The range of membership values [0, 1] is partitioned into two classes for each source, {high, low}, chosen as follows: low = [0, 0.5) and high = [0.5, 1.0]. Thus p_ij belongs to high if Γ_Si(p_ij) >= 0.5, and p_ij belongs to low if Γ_Si(p_ij) < 0.5, for j = 1, ..., m, where p_ij, with i in {r, c} and j = 1, ..., m, identifies the jth pixel of the ith source, and Γ_Si(p_ij) is the membership degree of the pixel p_j in S_i. The finite input alphabet is produced by a function that maps each pair of membership classes (in the region-growing result and in the closed-contour result) to an input symbol, as follows (a code sketch of this mapping appears after Figure 5.79 below):

  - (high, low) = I1. The pixel being analyzed presents a high membership degree in the region-growing segmentation result and a low membership degree in the closed-contour result. This input represents a certainty or an uncertainty situation, depending on the spatial position of the pixel being analyzed; see Figure 5.80.

  - (high, high) = I2. The pixel being analyzed presents a high membership degree in the region-growing segmentation result and a high membership degree in the closed-contour result. This indicates an intersection case; see Figure 5.80.

  - (low, high) = I3. The pixel being analyzed presents a low membership degree in the region-growing segmentation result and a high membership degree in the closed-contour result. This indicates an uncertainty case; see Figure 5.80.

  - (low, low) = I4. The pixel being analyzed presents a low membership degree in the region-growing segmentation result and a low membership degree in the closed-contour result. This indicates an uncertainty case if the pixel belongs to the interior of the contour; in the opposite case, it indicates a stopping or limiting condition of the fusion operator. See Figure 5.80.

- A transition diagram of M, as shown in Figure 5.81. The transition diagram illustrates the situations in which each basic fusion operator is executed. The analysis begins with a pixel that belongs to the intersection of the two segmentation results. The first input must be of type I1; the initial state of the automaton is a, which corresponds to the fact that the pixel belongs to the interior of the contour. The analysis procedure is first applied to all of the pixels inside the contour. While the inputs are I1 or I4, the operators h1 or h3 are applied and the automaton remains in state a. When an input of type I2 or I3 arrives, the automaton goes to state b, to inform the analysis process that the pixel being processed is on the boundary given by the contour-detection method. At this stage, all the pixels on the contour are processed. While the inputs are I2 or I3, and the operators h5 or h2 are applied, the automaton remains in state b. If, while in state b, the input I1 or I4 occurs (and the operator h4 or h6 is applied), the automaton goes to state c, indicating that the pixel being analyzed is outside the contour. All the pixels outside the contour are processed at this stage. Observe that, depending upon the state of the automaton, different fusion operators may be applied to the same inputs. As indicated by the transition diagram in Figure 5.81, all of the pixels in the interior of the contour are processed first; all of the pixels on the contour are processed next, followed by the pixels outside the contour.

- The initial state q0 in Q, with q0 = a.

- The set of final states F = {c}, where F is a subset of Q. In the present case, F has only one element.

FIGURE 5.79 The three states of the automaton. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3):379-389, 2003. © SPIE and IS&T.
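A minimal sketch of the input classification and of the state transitions summarized above. The mapping of membership classes to symbols follows the list; the behavior on inputs arriving in the final state c is an assumption (the excerpt does not spell it out), and the scan order of the pixels (interior first, then contour, then exterior) is left to the calling procedure.

```python
from enum import Enum

class Input(Enum):
    I1 = "high, low"
    I2 = "high, high"
    I3 = "low, high"
    I4 = "low, low"

def classify(gamma_r, gamma_c):
    """Map the membership degrees of a pixel in the region-growing result
    (gamma_r) and in the closed-contour result (gamma_c) to an input
    symbol, with low = [0, 0.5) and high = [0.5, 1.0]."""
    high_r, high_c = gamma_r >= 0.5, gamma_c >= 0.5
    if high_r and not high_c:
        return Input.I1
    if high_r and high_c:
        return Input.I2
    if not high_r and high_c:
        return Input.I3
    return Input.I4

# State transitions of the automaton of Figure 5.81; transitions out of the
# final state c are assumed to remain in c.
TRANSITIONS = {
    ("a", Input.I1): "a", ("a", Input.I4): "a",
    ("a", Input.I2): "b", ("a", Input.I3): "b",
    ("b", Input.I2): "b", ("b", Input.I3): "b",
    ("b", Input.I1): "c", ("b", Input.I4): "c",
    ("c", Input.I1): "c", ("c", Input.I2): "c",
    ("c", Input.I3): "c", ("c", Input.I4): "c",
}

def next_state(state, gamma_r, gamma_c):
    return TRANSITIONS[(state, classify(gamma_r, gamma_c))]
```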
FIGURE 5.80 The four possible input values {I1, I2, I3, I4} for the fusion operator. The short line segments with the labels I2 and I3 represent artifacts in the segmentation result. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3):379-389, 2003. © SPIE and IS&T.

FIGURE 5.81 Transition diagram that governs the actions of the fusion operator. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3):379-389, 2003. © SPIE and IS&T.

Behavior of the basic fusion operators: The fusion operator described above can combine several results of segmentation, two at a time. The result yielded by the fusion operator is a fuzzy set that identifies the certainty and uncertainty present in the inputs to the fusion process. It is expected that maximal certainty will be represented by a membership degree equal to 0 or 1 (that is, the pixel being analyzed certainly belongs, or certainly does not belong, to the final segmentation result). When individual segmentation results disagree with respect to a pixel belonging or not belonging to the final result, or when both sources do not present sufficient reliability, the fusion operator yields a membership degree equal to 0.5 to represent a situation with maximal uncertainty. Other situations are represented by membership degrees in the intervals (0, 0.5) and (0.5, 1), depending on the evidence with respect to the membership of the analyzed pixel in the ROI and on the reliability of the sources.

Two illustrative studies on the behavior of the basic fusion operators h1 and h2 are presented below, taking into consideration a limited set of entries (a code sketch of both operators follows the list):

- State a of the automaton (inside the contour), entry (high, low), basic fusion operator h1 = max{Cr Γ_Sr(p_rj), Cc, 0.5}. This is the starting condition of the fusion operator; see Figure 5.81. The starting pixel must lie in the intersection of Sr and Sc (see Figures 5.78 and 5.79). In this case, Γ_Sr(p_rj) >= 0.5 (that is, p belongs to the region-growing result with a high membership degree) and Γ_Sc(p_cj) < 0.5 (that is, p is inside the contour). This situation represents the condition that both sources agree with respect to the pixel p belonging to the ROI. Table 5.2 provides explanatory comments describing the behavior of h1 for several values of the reliability parameters and inputs from the two sources.

- State a of the automaton (inside the contour) or state b (on the contour), entry (low, high), basic fusion operator h2. The operator h2 is applied when the automaton is in state a or b and a transition occurs from a pixel inside the contour to a pixel on the contour, that is, (a, I3) -> b, or from a pixel on the contour to a neighboring pixel on the contour, that is, (b, I3) -> b. In this case, Γ_Sr(p_rj) < 0.5 (that is, p does not belong to the region-growing result, or belongs with a low membership degree), and Γ_Sc(p_cj) > 0.5 (that is, p is on the contour). This situation represents the condition in which the sources disagree with respect to the pixel p belonging to the ROI. The result of h2 is a weighted average of the input membership values. Table 5.3 provides explanatory comments describing the behavior of h2 for several values of the reliability parameters and inputs from the two sources.
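The two basic operators discussed above can be written compactly as follows. h1 is written as max{Cr·Γ_Sr, Cc, 0.5}, which reproduces every row of Table 5.2; the exact form of h2 is not given in this excerpt, so the reliability-weighted average below is an assumption chosen to match its description and the rows of Table 5.3.

```python
def h1(cr, gamma_r, cc):
    """Basic fusion operator h1 (state a, entry (high, low))."""
    return max(cr * gamma_r, cc, 0.5)

def h2(cr, gamma_r, cc, gamma_c):
    """Basic fusion operator h2 (entry (low, high)): a reliability-weighted
    average of the two membership degrees; 0.5 (maximal uncertainty) is
    returned when both reliabilities are zero.  This form is an assumption
    consistent with Table 5.3."""
    if cr + cc == 0.0:
        return 0.5
    return (cr * gamma_r + cc * gamma_c) / (cr + cc)

# Spot checks against Tables 5.2 and 5.3.
assert h1(1.0, 1.0, 1.0) == 1.0
assert h1(0.0, 1.0, 0.0) == 0.5
assert h1(0.9, 1.0, 0.3) == 0.9
assert round(h2(0.8, 0.0, 1.0, 1.0), 2) == 0.56
assert h2(0.0, 0.0, 0.0, 1.0) == 0.5
```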
TABLE 5.2 Behavior of the Basic Fusion Operator h1

  Cr    Γ_Sr(p_rj)   Cc    Γ_Sc(p_cj)   h1    Comments
  1.0   1.0          1.0   0.0          1.0   p belongs to the ROI with maximal certainty
  1.0   1.0          0.0   0.0          1.0   Result depends on the source with the higher reliability
  0.0   1.0          1.0   0.0          1.0   Result depends on the source with the higher reliability
  0.0   1.0          0.0   0.0          0.5   Both sources do not present reliability
  0.8   1.0          1.0   0.0          1.0   Source Sc presents the higher reliability
  0.8   1.0          0.8   0.0          0.8   Result depends on the source with the higher reliability
  0.9   1.0          0.3   0.0          0.9   Result depends on the source with the higher reliability

Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3):379-389, 2003. © SPIE and IS&T.

TABLE 5.3 Behavior of the Basic Fusion Operator h2

  Cr    Γ_Sr(p_rj)   Cc    Γ_Sc(p_cj)   h2     Comments
  1.0   0.0          1.0   1.0          0.5    Weighted averaging
  1.0   0.0          0.0   1.0          0.0    Result depends on the source with the higher reliability
  0.0   0.0          1.0   1.0          1.0    Result depends on the source with the higher reliability
  0.0   0.0          0.0   1.0          0.5    Both sources do not present reliability; maximal uncertainty
  0.8   0.0          1.0   1.0          0.56   Weighted averaging

Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3):379-389, 2003. © SPIE and IS&T.

Application of the fusion operator to the segmentation of breast tumors: Figure 5.82 shows the results of the fusion of the ROIs represented in Figures 5.30 and 5.32. The results have been superimposed with the contours drawn by an expert radiologist, for comparison. Figure 5.83 demonstrates the application of the methods for contour detection, fuzzy region growing, and fusion to a segment of a mammogram with a malignant tumor. Observe that the fusion results reduce the uncertainty present in the interior of the regions, but also reduce the certainty of the boundaries. The features of the results of the individual segmentation procedures contribute to the fusion results, allowing the postponement of a crisp decision (if necessary) on the ROI or its boundary to a higher level of the image analysis system.
Evaluation of the results of fusion using a measure of fuzziness: In order to evaluate the results of the fusion operator, Guliato et al. [277] compared the degree of agreement between the reference contour given by an expert radiologist and each segmentation result: contour segmentation, region-growing segmentation, and the result of fusion. The reference contour and a segmentation result were aggregated using the fusion operator. The fusion operator yields a fuzzy set that represents the certainty and uncertainty identified during the aggregation procedure. The maximal certainty occurs when Γ(p) = 0 or Γ(p) = 1, where Γ(p) is the membership degree of the pixel p; the maximal uncertainty occurs when Γ(p) = 0.5. In the former case, the information sources agree completely with respect to the pixel p; in the latter, the information sources present maximal conflict with respect to the pixel p. Intermediate values of the membership degree represent intermediate degrees of agreement among the information sources. If the uncertainty presented by the fusion result can be quantified, the result could be used to evaluate the degree of agreement between two different information sources.

In order to quantify the uncertainty, Guliato et al. [277] proposed a measure of fuzziness. In general, a measure of fuzziness is a function

f : F(X) -> R+,   (5.89)

where F(X) denotes the set of all fuzzy subsets of X. For each fuzzy set A of X, this function assigns a nonnegative real number f(A) that characterizes the degree of fuzziness of A. The function f must satisfy three requirements: f(A) = 0 if, and only if, A is a crisp set; f(A) assumes its maximal value if, and only if, A is maximally fuzzy, that is, all of the membership values of A are equal to 0.5; and, if set A is undoubtedly sharper than set B, then f(A) <= f(B). There are different ways of measuring fuzziness that satisfy all three essential requirements [351].

Guliato et al. [277] chose to measure fuzziness in terms of the distinctions between a set and its complement, observing that it is the lack of distinction between a set and its complement that distinguishes a fuzzy set from a crisp set. The implementation of this concept depends on the definition of the fuzzy complement; the standard complement of A is defined as 1 - A(x), for all x in X. Choosing the Hamming distance, the local distinction between a given set A and its complement is measured by

|A(x) - {1 - A(x)}| = |2 A(x) - 1|,   (5.90)

and the lack of local distinction is given by

1 - |2 A(x) - 1|.   (5.91)

The measure of fuzziness, f(A), is then obtained by adding the local measurements:

f(A) = Σ_{x in X} [1 - |2 A(x) - 1|].   (5.92)

The range of the function f is [0, |X|]: f(A) = 0 if, and only if, A is a crisp set; f(A) = |X| when A(x) = 0.5 for all x in X.
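Equation 5.92 translates directly into a short computation over an array of membership degrees; a minimal sketch:

```python
import numpy as np

def measure_of_fuzziness(membership):
    """Measure of fuzziness of Equation 5.92: f(A) = sum over x of
    [1 - |2 A(x) - 1|], for membership degrees A(x) in [0, 1]."""
    a = np.asarray(membership, dtype=float)
    return float(np.sum(1.0 - np.abs(2.0 * a - 1.0)))

print(measure_of_fuzziness([0.0, 1.0, 1.0, 0.0]))   # crisp set -> 0.0
print(measure_of_fuzziness([0.5, 0.5, 0.5, 0.5]))   # maximally fuzzy -> 4.0 (= |X|)
```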
In the work reported by Guliato et al. [277], for each mammogram, the reference contour drawn by the expert radiologist was combined, using the fusion operator, with each of the results obtained by contour detection, fuzzy region growing, and fusion, denoted by RSc, RSr, and RFr, respectively. The fusion operator was applied with both reliability measures equal to unity, that is, Cr = Cc = 1.0, for the two information sources being combined in each case. When the result of contour detection was combined with the contour drawn by the radiologist, the former was converted into a region, because the fusion method is designed to accept a contour and a region as the inputs. Considering the results shown in Figure 5.83, the measures of fuzziness obtained were f(RSc) = 14,774, f(RSr) = 14,245, and f(RFr) = 9,710, respectively. The aggregation or fusion of the two segmentation results presents lower uncertainty than either, yielding a better result, as expected.

The methods were tested with 14 mammographic images of biopsy-proven cases; the values of the measure of fuzziness for the cases are shown in Table 5.4. The values of Cc and Cr used to obtain the result of fusion for the 14 mammograms are also listed in the table. Both Cc and Cr were maintained equal to unity when computing the measure of fuzziness with respect to the contour drawn by the radiologist for all the cases. In 11 cases, the fusion operator yielded improvement over the original results. There was no improvement by fusion in three of the cases: in one of these cases, both segmentation results were inaccurate, and in the other two, the fuzzy region segmentation was much better than the result of contour segmentation (based upon visual comparison with the reference contour drawn by the radiologist). The results provide good evidence that the fusion operator obtains regions with a higher degree of certainty than the results of the individual segmentation methods.

The measure of fuzziness may be normalized by division by |X|. However, in the context of the work of Guliato et al., this would lead to very small values, because the number of boundary pixels is far less than the number of pixels inside a mass. The measure of fuzziness without normalization is adequate in the assessment of the results of fusion, because the comparison is made using the measure for each mammogram separately.

FIGURE 5.82 Result of the fusion of the contour and region (a) in Figures 5.30 (c) and (d), for the case with a malignant tumor, and (b) in Figures 5.32 (c) and (d), for the case with a benign mass, with Cr = 1.0, Cc = 1.0. The contours drawn by the radiologist are superimposed for comparison. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3):379-389, 2003. © SPIE and IS&T.

FIGURE 5.83 (a) A 700 × 700-pixel portion of a mammogram with a spiculated malignant tumor. Pixel size = 62.5 µm. (b) Contour extracted (white line) by fuzzy-set-based preprocessing and region growing. The black line represents the boundary drawn by a radiologist (shown for comparison). (c) Result of fuzzy region growing. The contour drawn by the radiologist is superimposed for comparison. (d) Result of the fusion of the contour in (b) and the region in (c), with Cr = 1.0, Cc = 0.9. The contour drawn by the radiologist is superimposed for comparison. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3):379-389, 2003. © SPIE and IS&T.

5.12 Remarks

We have explored several methods to detect the edges of objects or to segment ROIs. We have also studied methods to detect objects of known characteristics, and methods to improve initial estimates of edges, contours, or regions. The class of filters based upon mathematical morphology [8, 192, 220, 221, 222] has not been dealt with in this chapter.

After ROIs have been detected and extracted from a given image, they may be analyzed further in terms of representation, feature extraction, pattern classification, and image understanding. Some of the measures and approaches that could be used for these purposes are listed below. It should be recognized that the accuracy of the measures derived will depend upon the accuracy of the results of detection or segmentation [400].
TABLE 5.4 Measures of Fuzziness for the Results of Segmentation and Fusion for 14 Mammograms

  Mammogram     Cr, Cc     f(RSc)   f(RSr)   f(RFr)   Is the result of fusion better?
  spic-s-1      1.0, 1.0   14,774   14,245    9,711   Yes
  circ-fb-010   1.0, 1.0    8,096    9,223    7,905   Yes
  spx111m       1.0, 1.0    5,130    9,204    4,680   Yes
  spic-fh0      1.0, 0.6   28,938   23,489   21,612   Yes
  circ-x-1      1.0, 0.8    6,877    2,990    3,862   No
  spic-fh2      0.8, 1.0   45,581   38,634   34,969   Yes
  circ-fb-005   1.0, 1.0   26,176   34,296   25,084   Yes
  circ-fb-012   1.0, 0.9   16,170   15,477   12,693   Yes
  spic-db-145   1.0, 0.9    8,306    7,938    7,658   Yes
  circ-fb-025   1.0, 0.6   56,060   44,277   49,093   No
  spic-fb-195   1.0, 1.0   11,423   12,511   10,458   Yes
  spic-s-112    1.0, 0.6   31,413   17,784   12,838   Yes
  spic-s-401    1.0, 0.6   13,225   11,117   11,195   No
  circ-fb-069   1.0, 1.0   46,835   53,321   38,832   Yes

Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3):379-389, 2003. © SPIE and IS&T.

External characteristics:
- boundary or contour morphology,
- boundary roughness,
- boundary complexity.
It is desirable that boundary descriptors be invariant to translation, scaling, and rotation. Methods for the analysis of contours and shape complexity are described in a later chapter.

Internal characteristics:
- gray level,
- color,
- texture,
- statistics of the pixel population.
Methods for the analysis of texture are presented in later chapters.

Description of (dis)similarity:
- distance measures,
- correlation coefficient.
Chapter 12 contains descriptions of several methods based upon measures of similarity and distance; however, the methods are described in the context of pattern classification using vectors of features. Some of the methods may be extended to compare sets of pixels representing segmented regions.

Relational description:
- placement rules,
- string, tree, and web grammars,
- structural description,
- syntactic analysis.
See Gonzalez and Thomason [401] and Duda et al. [402] for details on syntactic pattern recognition and image understanding.

5.13 Study Questions and Problems

Selected data files related to some of the problems and exercises are available at the site www.enel.ucalgary.ca/People/Ranga/enel697.

Give the definition of the 3 × 3 Sobel masks, and explain how they may be used to detect edges of any orientation in an image. What are the limitations of this approach to edge detection? What type of further processing steps could help in improving edge representation?
Prepare a 5 × 5 image with the value 10 in the central 3 × 3 region and the value zero in the remainder of the image. Calculate the results of the application of the 3 × 3 Laplacian operator and of the masks given in Section 5.2 for the detection of isolated points and lines. Evaluate the results in terms of the detection of edges, lines, points, and corners.

5.14 Laboratory Exercises and Projects

Create a test image with objects of various shapes and sizes. Process the image with LoG functions of various scales. Analyze the results in terms of the detection of features of varying size and shape.

Prepare a test image with a few straight lines of various slopes and positions. Apply the Hough transform for the detection of straight lines. Study the effect of varying the number of bins in the Hough space. Analyze the spreading of values in the Hough space and develop strategies for the detection of the straight lines of interest in the image. What are the causes of artifacts with this method?

[...] a point (x0, y0) in a neighborhood of (x, y) is similar in gradient magnitude to the point (x, y) if

||G(x, y) - G(x0, y0)|| <= T,   (5.27)

where G(x, y) is the gradient vector of the given image f(x, y) at (x, y), and T is a threshold. The direction of the gradient: a point (x0, y0) in a neighborhood of (x, y) is similar in gradient direction to the point (x, y) if |θ(x, y) - θ(x0, y0)| <= A, where A is a threshold on the angle and θ(x, y) is the angle of the gradient vector G(x, y), given by the arctangent of the ratio of the partial derivatives of f in the y and x directions. [...]

[...] conducted an extensive study on the problem of false and real zero-crossings, and proposed that zero-crossings may be classified as real or false according to the sign of the quantity ∇[∇²p(x, y)] · ∇p(x, y) (real if negative, false if positive), where · denotes the dot product, p(x, y) is a smoothed version of the given image, such as p(x, y) = g(x, y, σ) * f(x, y), and ∇p(x, y) = [∂p/∂x, ∂p/∂y]^T. [...]

[...] characterization of the edge or boundary pixels may be achieved by using gradient and Laplacian operators as follows [8]:

b(x, y) = 0 if ∇f(x, y) < T; L+ if ∇f(x, y) >= T and ∇²f(x, y) >= 0; L- if ∇f(x, y) >= T and ∇²f(x, y) < 0,   (5.35)

where ∇f(x, y) is a gradient and ∇²f(x, y) is the Laplacian of the given image f(x, y), T is a threshold, and 0, L+, and L- represent three distinct gray levels. In the resulting image, [...]

[...] if nonzero stability-map values exist only along the orientation of a local segment of the stability map that crosses (x, y), then the zero-crossing may be considered to signify a stable edge pixel at (x, y). On the other hand, if many nonzero stability-map values are present at different directions, the zero-crossing indicates an insignificant boundary pixel at (x, y). In other words, a consistent stability indexing method (in the sense [...]

[...] scale-space. The method rapidly gained considerable interest, and has been explored further by several researchers in image processing and analysis [285, 286, 287, 288, 289]. The scale-space of an image f(x, y) is defined as the set of all zero-crossings of its LoG over scale: the set of points (x, y, σ), with σ > 0, at which the LoG-filtered image [∇²g(x, y, σ)] * f(x, y) is zero while its spatial gradient is nonzero (Equations 5.21 and 5.22). [...]

[...] zero-crossing maps {S1(x, y), S2(x, y), ..., SN(x, y)} is obtained, where N is the number of scales. The adjusted zero-crossing maps are used to construct the zero-crossing stability map as the pixel-wise sum of Si(x, y) over i = 1, 2, ..., N (Equation 5.25). The values of the stability map are, in principle, a measure of boundary stability through the filter scales. Marr and Hildreth [281] and Marr and Poggio [300] suggested [...]
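The preceding fragment describes summing zero-crossing maps over scales (Equation 5.25). Below is a sketch under the assumption that plain, unadjusted zero-crossing maps of the LoG are used at each scale; the adjustment step mentioned in the text is omitted, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def zero_crossing_map(f, sigma):
    """Binary zero-crossing map of the LoG of f at one scale (sign change
    between horizontally or vertically adjacent pixels)."""
    log = gaussian_laplace(np.asarray(f, dtype=float), sigma=sigma)
    zc = np.zeros(log.shape, dtype=bool)
    zc[:, :-1] |= log[:, :-1] * log[:, 1:] < 0
    zc[:-1, :] |= log[:-1, :] * log[1:, :] < 0
    return zc

def stability_map(f, sigmas):
    """Pixel-wise sum of zero-crossing maps over N scales, in the spirit
    of Equation 5.25."""
    maps = [zero_crossing_map(f, s).astype(int) for s in sigmas]
    return np.sum(np.stack(maps), axis=0)
```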
[...] prerequisite for most techniques for image analysis. Whereas a human observer may, by merely looking at a displayed image, readily recognize its structural components, computer analysis of an image requires algorithmic analysis of the array of image pixel values before arriving at conclusions about the content of the image. Computer analysis of images usually starts with segmentation, which reduces pixel [...]

[...] edge, those (x0, y0) having the same orientation as that of l0 are not included in the computation of the index. Based upon these requirements, the relative stability index at (x, y) is defined as the ratio

l0 / Σ_{k=0}^{m-1} exp(-d_k²) l_k,   (5.26)

where d_k = sqrt[(x - x_k)² + (y - y_k)²] and (x_k, y_k) are the locations of l_k. It should be noted that the index lies in the interval (0, 1], and that its value is governed by the geometrical [...]

[...] [282]. Consider the Gaussian specified by the function

g(x, y) = -exp[-(x² + y²) / (2σ²)].   (5.18)

The usual normalizing scale factor has been left out. Taking partial derivatives with respect to x and y, we obtain

∂²g/∂x² = -[(x² - σ²)/σ⁴] exp[-(x² + y²)/(2σ²)],   ∂²g/∂y² = -[(y² - σ²)/σ⁴] exp[-(x² + y²)/(2σ²)],

which leads to

∇²g(x, y) = LoG(r) = -[(r² - 2σ²)/σ⁴] exp[-r²/(2σ²)],   (5.20)

where r = sqrt(x² + y²). The LoG function is isotropic and [...]

[...] image. Reproduced with permission from Z.-Q. Liu, R.M. Rangayyan, and C.B. Frank, "Statistical analysis of collagen alignment in ligaments by scale-space analysis", IEEE Transactions on Biomedical Engineering, 38(6):580-588, 1991. © IEEE. [...]

[...] emphasis filter with its gain quadratically proportional to frequency, with a Gaussian lowpass filter. The methods and results [...]
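The closed-form LoG of Equation 5.20 can be sampled directly to build a convolution kernel. The sketch below uses the unnormalized, negated form given above; the half-size rule and the zero-mean correction are choices of this example rather than of the book.

```python
import numpy as np

def log_kernel(sigma, half_size=None):
    """Sample the (unnormalized, negated) LoG of Equation 5.20:
    LoG(r) = -((r^2 - 2 sigma^2) / sigma^4) * exp(-r^2 / (2 sigma^2))."""
    if half_size is None:
        half_size = int(np.ceil(4 * sigma))        # common truncation choice
    y, x = np.mgrid[-half_size:half_size + 1,
                    -half_size:half_size + 1].astype(float)
    r2 = x ** 2 + y ** 2
    kernel = -((r2 - 2.0 * sigma ** 2) / sigma ** 4) * np.exp(-r2 / (2.0 * sigma ** 2))
    # Remove the residual mean so that flat regions give exactly zero response.
    return kernel - kernel.mean()

print(log_kernel(2.0).shape)   # (17, 17)
```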

Ngày đăng: 27/05/2016, 15:42

Mục lục

  • Contents

  • Chapter 5 Detection of Regions of Interest

    • 5.1 Thresholding and Binarization

    • 5.2 Detection of Isolated Points and Lines

    • 5.3 Edge Detection

      • 5.3.1 Convolution mask operators for edge detection

      • 5.3.2 The Laplacian of Gaussian

      • 5.3.3 Scale-space methods for multiscale edge detection

      • 5.3.4 Canny's method for edge detection

      • 5.3.5 Fourier-domain methods for edge detection

      • 5.3.6 Edge linking

      • 5.4 Segmentation and Region Growing

        • 5.4.1 Optimal thresholding

        • 5.4.2 Region-oriented segmentation of images

        • 5.4.3 Splitting and merging of regions

        • 5.4.4 Region growing using an additive tolerance

        • 5.4.5 Region growing using a multiplicative tolerance

        • 5.4.6 Analysis of region growing in the presence of noise

        • 5.4.7 Iterative region growing with multiplicative tolerance

        • 5.4.8 Region growing based upon the human visual system

        • 5.4.9 Application: Detection of calcifications by multitolerance region growing

        • 5.4.10 Application: Detection of calcifications by linear prediction error

        • 5.5 Fuzzy set based Region Growing to Detect Breast Tumors

          • 5.5.1 Preprocessing based upon fuzzy sets

Tài liệu cùng người dùng

  • Đang cập nhật ...

Tài liệu liên quan