Biosignal and Biomedical Image Processing, Part 10

Chapter 12: Image Segmentation

FIGURE 12.15 Isolated portions of the cells shown in Figure 12.14. The upper images were created by thresholding the intensity. The lower left image is a combination (logical OR) of the upper images, and the lower right image adds a thresholded texture-based image. The original and texture images are shown in Figure 12.14.

Note that the texture image has been scaled up, first by a factor of six, then by an additional factor of two, to bring it within a nominal image range. The intensity-thresholded images are shown in Figure 12.15 (upper images; the upper right image has been inverted). These images are combined in the lower left image. The lower right image shows the combination of both intensity-based images with the thresholded texture image. This method of combining images can be extended to any number of different segmentation approaches.

MORPHOLOGICAL OPERATIONS

Morphological operations have to do with processing shapes. In this sense they are continuity-based techniques, but in some applications they also operate on edges, making them useful in edge-based approaches as well. In fact, morphological operations have many image processing applications in addition to segmentation, and they are well represented and supported in the MATLAB Image Processing Toolbox.

The two most common morphological operations are dilation and erosion. In dilation the rich get richer and in erosion the poor get poorer. Specifically, in dilation, the center or active pixel is set to the maximum of its neighbors, and in erosion it is set to the minimum of its neighbors. Since these operations are often performed on binary images, dilation tends to expand edges, borders, or regions, while erosion tends to decrease or even eliminate small regions. Obviously, the size and shape of the neighborhood used will have a very strong influence on the effect produced by either operation.
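The max/min rule above can be sketched in a few lines of plain Python (an illustration only, not the book's MATLAB code; a fixed 3-by-3 square neighborhood is assumed):

```python
# Binary dilation and erosion with a 3x3 square neighborhood:
# dilation sets each pixel to the maximum of its neighborhood,
# erosion to the minimum, exactly the rule described above.

def _neighbors(img, r, c):
    rows, cols = len(img), len(img[0])
    return [img[rr][cc]
            for rr in range(max(0, r - 1), min(rows, r + 2))
            for cc in range(max(0, c - 1), min(cols, c + 2))]

def dilate(img):
    return [[max(_neighbors(img, r, c)) for c in range(len(img[0]))]
            for r in range(len(img))]

def erode(img):
    return [[min(_neighbors(img, r, c)) for c in range(len(img[0]))]
            for r in range(len(img))]

# A lone foreground pixel: dilation grows it to fill this small grid,
# while erosion eliminates it entirely.
lone = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(dilate(lone))  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(erode(lone))   # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

The tiny example makes the asymmetry concrete: dilation expands the single region, while erosion removes it, which is exactly why the neighborhood's size and shape matter so much.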
The two processes can be done in tandem over the same area. Since both erosion and dilation are nonlinear operations, they are not invertible transformations; that is, one followed by the other will not generally result in the original image. If erosion is followed by dilation, the operation is termed opening. If the image is binary, this combined operation will tend to remove small objects without changing the shape and size of larger objects. Basically, the initial erosion tends to reduce all objects, but some of the smaller objects will disappear altogether. The subsequent dilation will restore those objects that were not eliminated by erosion. If the order is reversed and dilation is performed first followed by erosion, the combined process is called closing. Closing connects objects that are close to each other, tends to fill up small holes, and smooths an object's outline by filling small gaps. As with the more fundamental operations of dilation and erosion, the size of objects removed by opening or filled by closing depends on the size and shape of the neighborhood that is selected.

An example of the opening operation, including the erosion and dilation steps, is shown in Figure 12.16. It is applied to the blood cell image after thresholding, the same image shown in Figure 12.3 (left side). Since we wish to eliminate black artifacts in the background, we first invert the image, as shown in Figure 12.16. As can be seen in the final, opened image, the number of artifacts in the background is reduced, but a gap has also been created in one of the cell walls. The opening operation would be more effective on the image in which intermediate values were masked out (Figure 12.3, right side); this is given as a problem at the end of the chapter. Figure 12.17 shows an example of closing applied to the same blood cell image. Again the operation was performed on the inverted image.
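Opening and closing follow directly from composing the two primitives. A minimal plain-Python sketch (hypothetical helper names; the MATLAB equivalents are imopen and imclose):

```python
# Opening (erosion then dilation) and closing (dilation then erosion)
# on a binary grid with a 3x3 square neighborhood. Opening removes
# objects too small to survive the erosion; closing fills small holes.

def _morph(img, op):
    rows, cols = len(img), len(img[0])
    return [[op([img[rr][cc]
                 for rr in range(max(0, r - 1), min(rows, r + 2))
                 for cc in range(max(0, c - 1), min(cols, c + 2))])
             for c in range(cols)]
            for r in range(rows)]

def opening(img):
    return _morph(_morph(img, min), max)   # erode, then dilate

def closing(img):
    return _morph(_morph(img, max), min)   # dilate, then erode

# A one-pixel artifact disappears under opening ...
spot = [[0] * 5 for _ in range(5)]
spot[2][2] = 1
print(opening(spot) == [[0] * 5 for _ in range(5)])  # True

# ... while a one-pixel hole in a solid block is filled by closing.
block = [[1] * 5 for _ in range(5)]
block[2][2] = 0
print(closing(block) == [[1] * 5 for _ in range(5)])  # True
```

Both effects depend on the neighborhood: artifacts (or holes) larger than the structuring element survive, which is why the text stresses choosing its size and shape.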
This operation tends to fill the gaps in the centers of the cells, but it has also filled in gaps between the cells. A much more effective approach to filling holes is to use the imfill routine described in the section on MATLAB implementation. Other MATLAB morphological routines locate local maxima and minima and allow the image's maxima and minima to be manipulated, which implements various fill-in effects.

FIGURE 12.16 Example of the opening operation to remove small artifacts. Note that the final image has fewer background spots, but now one of the cells has a gap in the wall.

MATLAB Implementation

The erosion and dilation operations could be implemented using the nonlinear filter routine nlfilter, although this routine limits the shape of the neighborhood to a rectangle. The MATLAB routines imdilate and imerode provide for a variety of neighborhood shapes and are much faster than nlfilter. As mentioned above, opening consists of erosion followed by dilation, and closing is the reverse. MATLAB also provides routines for implementing these two operations in one statement.

To specify the neighborhood used by all of these routines, MATLAB uses a structuring element.* A structuring element can be defined by a binary array, where the ones represent the neighborhood and the zeros are irrelevant. This allows for easy specification of neighborhoods that are nonrectangular, indeed that can have any arbitrary shape.

*Not to be confused with a similar term, structural unit, used in the beginning of this chapter. A structural unit is the object of interest in the image.

FIGURE 12.17 Example of closing to fill gaps. In the closed image, some of the cells are now filled, but some of the gaps between cells have been erroneously filled in.

In addition, MATLAB makes a number of popular shapes directly available, just as the fspecial routine makes a number
of popular two-dimensional filter functions available.

The routine to specify the structuring element is strel and is called as:

structure = strel(shape, NH, arg);

where shape is the type of shape desired, NH usually specifies the size of the neighborhood, and arg is an argument, frequently optional, that depends on shape. If shape is 'arbitrary', or simply omitted, then NH is an array that specifies the neighborhood in terms of ones, as described above. Prepackaged shapes include:

'disk'       a circle of radius NH (in pixels)
'line'       a line of length NH and angle arg in degrees
'rectangle'  a rectangle where NH is a two-element vector specifying rows and columns
'diamond'    a diamond where NH is the distance from the center to each corner
'square'     a square with linear dimensions NH

For many of these shapes, the routine strel produces a decomposed structure that runs significantly faster. Based on the structure, the statements for dilation, erosion, opening, and closing are:

I1 = imdilate(I, structure);
I1 = imerode(I, structure);
I1 = imopen(I, structure);
I1 = imclose(I, structure);

where I1 is the output image, I is the input image, and structure is the neighborhood specification given by strel, as described above. In all cases, structure can be replaced by an array specifying the neighborhood as ones, bypassing the strel routine. In addition, imdilate and imerode have optional arguments that provide packing and unpacking of the binary input or output images.

Example 12.5 Apply opening and closing to the thresholded blood cell images of Figure 12.3 in an effort to remove small background artifacts and to fill holes. Use a disk-shaped structure with a radius of four pixels.

% Example 12.5 and Figures 12.16 and 12.17
% Demonstration of morphological opening to eliminate small
% artifacts and of morphological closing to fill gaps
% These operations will be applied to the thresholded blood cell
% images of Figure 12.3 (left image).
% Uses a circular (disk-shaped) structure with a radius of 4 pixels
%
clear all; close all;
I = imread('blood1.tif');       % Get image and threshold
I = im2double(I);
BW = ~im2bw(I,graythresh(I));
%
SE = strel('disk',4);           % Define structure: disk of radius
                                %   4 pixels
BW1 = imerode(BW,SE);           % Opening operation: erode
BW2 = imdilate(BW1,SE);         %   image first, then dilate
% ... display images ...
%
BW3 = imdilate(BW,SE);          % Closing operation: dilate image
BW4 = imerode(BW3,SE);          %   first, then erode
% ... display images ...

This example produced the images in Figures 12.16 and 12.17.

Example 12.6 Apply an opening operation to remove the dark patches seen in the thresholded cell image of Figure 12.15.

% Example 12.6 and Figure 12.18
% Use opening to remove the dark patches in the thresholded cell
% image of Figure 12.15
%
close all; clear all;
%
SE = strel('square',5);         % Define structure:
                                %   square, 5 pixels on a side
load fig12_15;                  % Get data of Figure 12.15 (BW2)
BW1 = ~imopen(~BW2,SE);         % Opening operation
% ... display images ...

The result of this operation is shown in Figure 12.18. In this case, the opening operation is able to completely remove the dark patches in the center of the cell image. A 5-by-5 pixel square structural element was used. The size (and shape) of the structural element controls the size of the artifacts removed; no attempt was made to optimize its shape. The size was set here as the minimum that would still remove all of the dark patches. The opening operation in this example used the single statement imopen. Again, the opening operation operates on activated (i.e., white) pixels, so to remove dark artifacts it is necessary to invert the image (using the logical NOT operator, ~) before performing the opening operation. The opened image is then inverted again before display.

FIGURE 12.18 Application of the open operation to remove the dark patches in the binary cell image in Figure 12.15 (lower right).
Using a 5-by-5 square structural element resulted in eliminating all of the dark patches.

MATLAB morphology routines also allow for manipulation of maxima and minima in an image. This is useful for identifying objects and for filling. Of the many other morphological operations supported by MATLAB, only the imfill operation will be described here. This operation begins at a designated pixel and changes connected background pixels (0's) to foreground pixels (1's), stopping only when a boundary is reached. For grayscale images, imfill brings the intensity levels of the dark areas that are surrounded by lighter areas up to the same intensity level as the surrounding pixels. (In effect, imfill removes regional minima that are not connected to the image border.) The initial pixel can be supplied to the routine or obtained interactively. Connectivity can be defined as either four-connected or eight-connected. In four connectivity, only the four pixels bordering the four edges of the pixel are considered, while in eight connectivity all pixels that touch, including those that touch only at the corners, are considered connected. The basic imfill statement is:

I_out = imfill(I, [r c], con);

where I is the input image, I_out is the output image, [r c] is a two-element vector specifying the beginning point, and con is an optional argument that is set to 8 for eight connectivity (four connectivity is the default). (See the help file to use imfill interactively.) A special option of imfill is available specifically for filling holes. If the image is binary, a hole is a set of background pixels that cannot be reached by filling in the background from the edge of the image. If the image is an intensity image, a hole is an area of dark pixels surrounded by lighter pixels. To invoke this option, the argument following the input image should be 'holes'.
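The fill behavior can be illustrated with a small pure-Python flood fill (an illustration of the idea only; imfill itself offers more options). Four-connectivity, the MATLAB default, is used:

```python
# Flood fill from a seed pixel: connected background pixels (0's) are
# changed to foreground (1's), stopping at the boundary. Neighbors are
# visited with four-connectivity (edge-sharing pixels only).

from collections import deque

def flood_fill(img, seed_r, seed_c):
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    if out[seed_r][seed_c] != 0:
        return out                      # seed is already foreground
    out[seed_r][seed_c] = 1
    queue = deque([(seed_r, seed_c)])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and out[nr][nc] == 0:
                out[nr][nc] = 1
                queue.append((nr, nc))
    return out

# A closed ring with a hole inside: seeding inside fills the hole, but
# the fill cannot escape the boundary into the rightmost column.
ring = [[1, 1, 1, 0],
        [1, 0, 1, 0],
        [1, 1, 1, 0]]
filled = flood_fill(ring, 1, 1)
print(filled)  # [[1, 1, 1, 0], [1, 1, 1, 0], [1, 1, 1, 0]]
```

The 'holes' option described above amounts to running this fill from the image border and keeping whatever background was never reached.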
Figure 12.19 shows the operation performed on the blood cell image by the statement:

I_out = imfill(I, 'holes');

EDGE-BASED SEGMENTATION

Historically, edge-based methods were the first set of tools developed for segmentation. To move from edges to segments, it is necessary to group edges into chains that correspond to the sides of structural units, i.e., the structural boundaries. Approaches vary in how much prior information they use, that is, how much of what is known about the possible shape is used. False edges and missed edges are two of the more obvious, and more common, problems associated with this approach.

The first step in edge-based methods is to identify edges, which then become candidates for boundaries. Some of the filters presented in Chapter 11 perform edge enhancement, including the Sobel, Prewitt, and log (Laplacian of Gaussian) filters. In addition, the Laplacian, which takes the spatial second derivative, can be used to find edge candidates. The Canny filter is the most advanced edge detector supported by MATLAB, but it necessarily produces a binary output, while many of the secondary operations require a graded edge image.

FIGURE 12.19 Hole filling operation produced by imfill. Note that neither the edge cell (at the upper image boundary) nor the overlapped cell in the center is filled, since they are not actually holes. (Original image reprinted with permission from The Image Processing Handbook, 2nd edition. Copyright CRC Press, Boca Raton, Florida.)

Edge relaxation is one approach used to build chains from individual edge candidate pixels. This approach takes into account the local neighborhood: weak edges positioned between strong edges are probably part of the edge, while strong edges in isolation are likely spurious. The Canny filter incorporates a type of edge relaxation. Various formal schemes have been devised under this category.
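To make the edge-enhancement step concrete, here is a minimal Sobel gradient-magnitude sketch in plain Python (an illustration only; MATLAB's edge routine adds thresholding and several other kernels):

```python
# Sobel gradient magnitude on a small grayscale grid: a minimal sketch
# of the edge-enhancement step that precedes boundary grouping. Border
# pixels are left at zero for simplicity.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):           # skip the border pixels
        for c in range(1, cols - 1):
            gx = sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(SOBEL_Y[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            out[r][c] = (gx ** 2 + gy ** 2) ** 0.5
    return out

# A vertical step edge: the gradient is strongest along the transition.
step = [[0, 0, 1, 1] for _ in range(4)]
mag = sobel_magnitude(step)
print(mag[1])  # [0.0, 4.0, 4.0, 0.0]
```

The graded output (here 4.0 along the step) is exactly the kind of non-binary edge image that secondary operations such as edge relaxation need.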
A useful method described in Sonka (1995) establishes edges between pixels (so-called crack edges) based on the pixels located at the end points of the edge. Another method for extending edges into chains is termed graph searching. In this approach, the endpoints (which could both be the same point in a closed boundary) are specified, and the edge is determined by minimizing some cost function. Possible pathways between the endpoints are selected from candidate pixels, those that exceed some threshold. The actual path is selected by minimizing the cost function, which could include features such as the strength of an edge pixel, total length, curvature, and proximity of the edge to other candidate borders. This approach allows for a great deal of flexibility. Finally, dynamic programming, which is also based on minimizing a cost function, can be used.

The methods briefly described above use local information to build up the boundaries of the structural elements. Details of these methods can be found in Sonka et al. (1995). Model-based edge detection methods can be used to exploit prior knowledge of the structural unit. For example, if the shape and size of the object are known, then a simple matching approach based on correlation can be used (matched filtering). When the general shape is known, but not the size, the Hough transform can be used. This approach was originally designed for identifying straight lines and curves, but can be expanded to other shapes provided the shape can be described analytically.

The basic idea behind the Hough transform is to transform the image into a parameter space that is constructed specifically to describe the desired shape analytically. Maxima in this parameter space then correspond to the presence of the desired shape in image space.
For example, if the desired object is a straight line (the original application of the Hough transform), one analytic representation for this shape is y = mx + b,* and such shapes can be completely defined by a two-dimensional parameter space of m and b parameters. All straight lines in image space map to points in parameter space (also known as the accumulator array, for reasons that will become obvious). Operating on a binary image of edge pixels, all possible lines through a given pixel are transformed into m,b combinations, which then increment the accumulator array. Hence, the accumulator array accumulates the number of potential lines that could exist in the image. Any active pixel will give rise to a large number of possible line slopes, m, but only a limited number of m,b combinations. If the image actually contains a line, then the accumulator element that corresponds to that particular line's m,b parameters will have accumulated a large number. The accumulator array is searched for maxima, or suprathreshold locations, and these locations identify a line or lines in the image.

This concept can be generalized to any shape that can be described analytically, although the parameter space (i.e., the accumulator) may have to include several dimensions. For example, to search for circles, note that a circle can be defined in terms of three parameters, a, b, and r, by the equation:

(y − a)² + (x − b)² = r²    (1)

where a and b define the center point of the circle and r is the radius. Hence the accumulator space must be three-dimensional to represent a, b, and r.

*This representation of a line cannot represent vertical lines, since m → ∞ for a vertical line. However, lines can also be represented in two dimensions using polar coordinates, r and θ: r = x cos θ + y sin θ.
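The accumulator idea can be sketched directly in plain Python (an illustration only; bin counts and quantization here are arbitrary choices). To sidestep the vertical-line problem noted in the footnote, the sketch uses the r, θ form r = x cos θ + y sin θ rather than m and b:

```python
# A minimal Hough accumulator for straight lines. Each edge point votes,
# for every candidate angle theta, for the distance r = x*cos(theta) +
# y*sin(theta); collinear points pile their votes into the same cell.

import math

def hough_accumulator(points, n_theta=90, r_max=20):
    n_r = 2 * r_max + 1                    # r quantized to integer bins
    acc = [[0] * n_r for _ in range(n_theta)]
    for t in range(n_theta):
        theta = math.pi * t / n_theta      # angles 0, 2, ..., 178 degrees
        for x, y in points:
            r = x * math.cos(theta) + y * math.sin(theta)
            b = int(round(r)) + r_max      # shift so bin indices are >= 0
            if 0 <= b < n_r:
                acc[t][b] += 1
    return acc

# Ten points on the horizontal line y = 3: the peak vote count equals
# the number of points, found at theta = 90 degrees (t = 45), r = 3.
pts = [(x, 3) for x in range(10)]
acc = hough_accumulator(pts)
peak = max(max(row) for row in acc)
print(peak)  # 10
```

Searching the accumulator for such maxima (or suprathreshold cells) is exactly the final step described in the text.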
MATLAB Implementation

Of the techniques described above, only the Hough transform is supported by MATLAB image processing routines, and then only for straight lines. It is supported through the Radon transform, which computes projections of the image along a straight line, where this projection can be taken at any angle.* The result is a projection matrix that is the same as the accumulator array of a straight-line Hough transform expressed in polar coordinates. The Radon transform is implemented by the statement:

[R, xp] = radon(BW, theta);

where BW is a binary input image and theta is the projection angle in degrees, usually a vector of angles. If not specified, theta defaults to 0:179. R is the projection array, where each column is the projection at a specific angle (R is a column vector if theta is a constant). Hence, maxima in R correspond to the positions (encoded as an angle and distance) of lines in the image. An example of the use of radon to perform the Hough transformation is given in Example 12.7.

Example 12.7 Find the strongest line in the image of Saturn in the image file 'saturn.tif'. Plot that line superimposed on the image.

Solution: First convert the image to an edge array using MATLAB's edge routine. Use the Hough transform (implemented for straight lines using radon) to build an accumulator array. Find the maximum point in that array (using max), which will give theta, the angle perpendicular to the line, and the distance along that perpendicular line to the intersection. Convert that line to rectangular coordinates, then plot the line superimposed on the image.

% Example 12.7 Example of the Hough transform
% (implemented using 'radon') to identify lines in an image.
% Use the image of Saturn in 'saturn.tif'
%
clear all; close all;
radians = 2*pi/360;             % Conversion from degrees to radians
I = imread('saturn.tif');       % Get image of Saturn
theta = 0:179;                  % Define projection angles
BW = edge(I,.02);               % Threshold image, threshold set
[R,xp] = radon(BW,theta);       % Hough (Radon) transform
% Convert to indexed image
[X, map] = gray2ind(mat2gray(R));

*The Radon transform is an important concept in computed tomography (CT), as described in a following section.

[...]

... zero. Load the image of the spine ('spine.tif') and filter using the Laplacian filter (use the default constant). Then threshold this image using ...

FIGURE 12.20 Thresholded image of Saturn (from MATLAB's saturn.tif) with the dominant line found by the Hough transform. The right image is the accumulator array with the maximum point indicated by an '*'. (Original image is a public ...)

[...] tomography (PET), single photon emission computed tomography (SPECT), and ultrasound. Other approaches under development include optical imaging* and impedance tomography. Except for simple x-ray images, which provide a shadow of intervening structures, some form of image processing is required to produce a useful image. The algorithms used for image reconstruction depend on the modality. In magnetic resonance ... emission tomography (PET) and computed tomography use projections from collimated beams, and the reconstruction algorithm is critical. The quality of the image is strongly dependent on the image reconstruction algorithm.†

*Of course, optical imaging is used in microscopy, but because of scattering it presents serious problems when deep tissues are imaged. A number of advanced image processing methods are
[...] problems due to scattering and provide useful images using either coherent or noncoherent light.

†CT may be the first instance where the analysis software is an essential component of medical diagnosis and comes between the physician and patient: the physician has no recourse but to trust the software.

Chapter 13: Image Reconstruction

CT, PET, AND SPECT

Reconstructed images from PET, SPECT, and CT all use collimated ...

[...] rotation angles. The former is explored in Figure 13.7, which shows the square pattern of Figure 13.5 sampled with one-half (left-hand image) and one-quarter (right-hand image) the number of parallel beams used in Figure 13.5. The images have been multiplied by a factor of 10 to enhance the faint aliasing artifacts. One of the problems at the end of this chapter explores the influence of undersampling ...

FIGURE 13.5 Image reconstruction of a simple white square against a black background. Back-projection alone produces a smeared image, which can be corrected with a spatial derivative filter. These images were generated using the code given in Example 13.1.

MATLAB Implementation

Radon Transform

The MATLAB Image Processing Toolbox contains routines that perform both the Radon and ...

[...] of back-projection and filtered back-projection. After a simple image of a white square against a dark background is generated, the CT projections are constructed using the forward Radon transform. The original image is reconstructed from these projections using both the filtered and unfiltered back-projection algorithms. The original image, the projections, and the two reconstructed images are displayed ...

[...] Generate CT data by applying the Radon transform to an MRI image of the brain (an unusual example of mixed modalities!).
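The back-projection step referred to above can be miniaturized to two orthogonal projections of a tiny image (plain Python, an illustration only; MATLAB's radon and iradon handle arbitrary angles and the filtering):

```python
# Forward projection at 0 and 90 degrees (row and column sums), then
# unfiltered back-projection: each projection value is smeared back
# across the path it was summed over. The object reappears, but blurred,
# which is the smearing that filtered back-projection corrects.

def project(img):
    row_proj = [sum(row) for row in img]         # beams along rows
    col_proj = [sum(col) for col in zip(*img)]   # beams along columns
    return row_proj, col_proj

def back_project(row_proj, col_proj):
    rows, cols = len(row_proj), len(col_proj)
    return [[row_proj[r] + col_proj[c] for c in range(cols)]
            for r in range(rows)]

# A single bright pixel: the reconstruction peaks at the right spot,
# but nonzero values are smeared along its entire row and column.
img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
recon = back_project(*project(img))
print(recon)  # [[0, 1, 0], [1, 2, 1], [0, 1, 0]]
```

Even in this two-angle toy, the bright pixel's value is correct only up to a smeared background, motivating the derivative (ramp) filtering described in the text.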
Reconstruct the image using the inverse Radon transform with the Ram-Lak (derivative) filter and with the cosine filter with a maximum relative frequency of 0.4. Display the original and reconstructed images.

% Example 13.2 and Figure 13.9  Image reconstruction using
% filtered back-projection ...

[...] profiles, but we want to know the image (or, in the more general case, the system) that produced that output. From the definition of the Radon transform in Eq. (9), the image should result from the application of an inverse Radon transform, ℛ⁻¹, to the projection profiles, p_θ(r):

I(x,y) = ℛ⁻¹[p_θ(r)]    (10)

While the Radon transform (Eqs. (8) and (9)) and the inverse Radon transform (Eq. (10)) are expressed in terms ...

[...] lowpass filters, one having a weak cutoff (for example, Gaussian with an alpha of 0.5) and the other having a strong cutoff (alpha > 4). Threshold the two filtered images using the maximum variance routine (graythresh). Display the original and filtered images along with their histograms. Also display the thresholded images.

2. The Laplacian filter, which calculates the second derivative, can also be used ...

Posted: 06/08/2014, 00:21
