DIGITAL IMAGE PROCESSING, 4th Edition (part 8)


EDGE DETECTION

[...] of optical images of real scenes, generally do not possess step edges, because the anti-aliasing low-pass filtering prior to digitization reduces the edge slope in the digital image caused by any sudden luminance change in the scene. The one-dimensional profile of a line is shown in Figure 15.1-1c. In the limit, as the line width w approaches zero, the resultant amplitude discontinuity is called a roof edge.

Continuous domain, two-dimensional models of edges and lines assume that the amplitude discontinuity remains constant in a small neighborhood orthogonal to the edge or line profile. Figure 15.1-2a is a sketch of a two-dimensional edge. In addition to the parameters of a one-dimensional edge, the orientation of the edge slope with respect to a reference axis is also important. Figure 15.1-2b defines the edge orientation nomenclature for edges of an octagonally shaped object whose amplitude is higher than its background.

FIGURE 15.1-1. One-dimensional, continuous domain edge and line models.

Figure 15.1-3 contains step and unit-width ramp edge models in the discrete domain. The vertical ramp edge model in the figure contains a single transition pixel whose amplitude is at the midvalue of its neighbors. This edge model can be obtained by performing a 2 × 2 pixel moving window average on the vertical step edge model. The figure also contains two versions of a diagonal ramp edge. The single-pixel transition model contains a single midvalue transition pixel between the regions of high and low amplitude; the smoothed transition model is generated by a 2 × 2 pixel moving window average of the diagonal step edge model. Figure 15.1-3 also presents models for a discrete step and ramp corner edge. The edge location for discrete step edges is usually marked at the higher-amplitude side of an edge transition.
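The ramp edge construction just described (a 2 × 2 moving window average of a step edge) is easy to reproduce. The following is a minimal numpy sketch; the function names, array sizes and edge-replicating border treatment are illustrative choices, not from the text:

```python
import numpy as np

def vertical_step_edge(rows=5, cols=8, a=0.0, b=1.0):
    """Vertical step edge model: low amplitude a on the left, high amplitude b on the right."""
    img = np.full((rows, cols), float(a))
    img[:, cols // 2:] = b
    return img

def moving_window_average_2x2(img):
    """2 x 2 pixel moving window average; the lower/right border is edge-replicated
    so the output keeps the input size (a border choice made for this sketch)."""
    padded = np.pad(img, ((0, 1), (0, 1)), mode="edge")
    return (padded[:-1, :-1] + padded[:-1, 1:] +
            padded[1:, :-1] + padded[1:, 1:]) / 4.0

step = vertical_step_edge()
ramp = moving_window_average_2x2(step)
# The averaged model gains a single transition pixel at the midvalue (a + b) / 2.
```

Averaging the step produces exactly the single midvalue transition pixel of the vertical ramp edge model described above.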
For the single-pixel transition model and the smoothed transition vertical and corner edge models, the proper edge location is at the transition pixel. The smoothed transition diagonal ramp edge model has a pair of adjacent pixels in its transition zone; the edge is usually marked at the higher-amplitude pixel of the pair. In Figure 15.1-3, the edge pixels are italicized.

FIGURE 15.1-2. Two-dimensional, continuous domain edge model.

Discrete two-dimensional single-pixel line models are presented in Figure 15.1-4 for step lines and unit-width ramp lines. The single-pixel transition model has a midvalue transition pixel inserted between the high value of the line plateau and the low-value background. The smoothed transition model is obtained by performing a 2 × 2 pixel moving window average on the step line model. A spot, which can only be defined in two dimensions, consists of a plateau of high amplitude against a lower-amplitude background, or vice versa. Figure 15.1-5 presents single-pixel spot models in the discrete domain.

There are two generic approaches to the detection of edges, lines and spots in a luminance image: differential detection and model fitting. With the differential detection approach, as illustrated in Figure 15.1-6, spatial processing is performed on an original image F(j, k) to produce a differential image G(j, k) with accentuated spatial amplitude changes. Next, a differential detection operation is executed to determine the pixel locations of significant differentials.

FIGURE 15.1-3. Two-dimensional, discrete domain edge models.

The second general approach to edge, line or spot detection involves fitting of a local region of pixel values to a model of the edge, line or spot, as represented in Figures 15.1-1 to 15.1-5. If the fit is sufficiently close, an edge, line or spot is said to exist, and its assigned parameters are those of the appropriate model.
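The model-fitting approach can be illustrated with a deliberately simple sketch: fit a 3 × 3 patch against ideal vertical step edge models and declare an edge when the mean-squared fit error is small. The function name, the restriction to two vertical-edge models and the tolerance value are assumptions made for illustration only, not the book's fitting procedure:

```python
import numpy as np

def model_fit_edge(patch, a, b, tolerance):
    """Model-fitting detection on a 3 x 3 patch: compare the patch against ideal
    vertical step edge models (low region a, high region b) and declare an edge
    if the best mean-squared fit error is within the tolerance."""
    models = [
        np.array([[a, a, b]] * 3, dtype=float),  # edge between columns 1 and 2
        np.array([[a, b, b]] * 3, dtype=float),  # edge between columns 0 and 1
    ]
    errors = [float(np.mean((patch - m) ** 2)) for m in models]
    return min(errors) <= tolerance

patch = np.array([[0, 0, 1],
                  [0, 0, 1],
                  [0, 0, 1]], dtype=float)
found = model_fit_edge(patch, a=0.0, b=1.0, tolerance=0.01)
```

A fuller implementation would fit all edge orientations and estimate the model parameters a, b from the data rather than assuming them.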
A binary indicator map E(j, k) is often generated to indicate the position of edges, lines or spots within an image. Typically, edge, line and spot locations are specified by black pixels against a white background.

There are two major classes of differential edge detection: first- and second-order derivative. For the first-order class, some form of spatial first-order differentiation is performed, and the resulting edge gradient is compared to a threshold value. An edge is judged present if the gradient exceeds the threshold. For the second-order derivative class of differential edge detection, an edge is judged present if there is a significant spatial change in the polarity of the second derivative.

FIGURE 15.1-4. Two-dimensional, discrete domain line models.

FIGURE 15.1-5. Two-dimensional, discrete domain single-pixel spot models.

FIGURE 15.1-6. Differential edge, line and spot detection.

Sections 15.2 and 15.3 discuss the first- and second-order derivative forms of edge detection, respectively. Edge fitting methods of edge detection are considered in Section 15.4.

15.2. FIRST-ORDER DERIVATIVE EDGE DETECTION

There are two fundamental methods for generating first-order derivative edge gradients. One method involves generation of gradients in two orthogonal directions in an image; the second utilizes a set of directional derivatives.

15.2.1. Orthogonal Gradient Generation

An edge in a continuous domain edge segment F(x, y), such as the one depicted in Figure 15.1-2a, can be detected by forming the continuous one-dimensional gradient G(x, y) along a line normal to the edge slope, which is at an angle θ with respect to the horizontal axis. If the gradient is sufficiently large (i.e., above some threshold value), an edge is deemed present. The gradient along the line normal to the edge slope can be computed in terms of the derivatives along orthogonal axes according to the following (1, p.
106):

G(x, y) = \frac{\partial F(x, y)}{\partial x} \cos\theta + \frac{\partial F(x, y)}{\partial y} \sin\theta \qquad (15.2-1)

Figure 15.2-1 describes the generation of an edge gradient G(j, k) in the discrete domain in terms of a row gradient G_R(j, k) and a column gradient G_C(j, k).(1) The spatial gradient amplitude is given by

G(j, k) = \left[ G_R(j, k)^2 + G_C(j, k)^2 \right]^{1/2} \qquad (15.2-2)

1. The array nomenclature employed in this chapter places the origin in the upper left corner of the array, with j increasing horizontally and k increasing vertically.

FIGURE 15.2-1. Orthogonal gradient generation.

For computational efficiency, the gradient amplitude is sometimes approximated by the magnitude combination

G(j, k) = |G_R(j, k)| + |G_C(j, k)| \qquad (15.2-3)

The orientation of the spatial gradient with respect to the row axis is

\theta(j, k) = \arctan \left\{ \frac{G_C(j, k)}{G_R(j, k)} \right\} \qquad (15.2-4)

The remaining issue for discrete domain orthogonal gradient generation is to choose a good discrete approximation to the continuous differentials of Eq. 15.2-1.

Small Neighborhood Gradient Operators. The simplest method of discrete gradient generation is to form the running difference of pixels along rows and columns of the image. The row gradient is defined as

G_R(j, k) = F(j, k) - F(j - 1, k) \qquad (15.2-5a)

and the column gradient is(2)

G_C(j, k) = F(j, k) - F(j, k + 1) \qquad (15.2-5b)

2. These definitions of row and column gradients, and subsequent extensions, are chosen such that G_R and G_C are positive for an edge that increases in amplitude from left to right and from bottom to top in an image.

As an example of the response of a pixel difference edge detector, the following is the row gradient along the center row of the vertical step edge model of Figure 15.1-3:

0  0  0  0  h  0  0  0  0

In this sequence, h = b - a is the step edge height. The row gradient for the vertical ramp edge model is

0  0  0  0  h/2  h/2  0  0  0

For ramp edges, the running difference edge detector cannot localize the edge to a single pixel.
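The running-difference gradients and the two amplitude combinations of Eqs. 15.2-2 to 15.2-5 can be sketched in numpy as follows; the helper names, the zeroed borders and the example threshold are illustrative choices, not from the text:

```python
import numpy as np

def pixel_difference_gradients(F):
    """Running-difference gradients (Eqs. 15.2-5a and 15.2-5b). Arrays are indexed
    F[k, j] with j horizontal and k increasing downward; border pixels that lack a
    neighbor are left at zero (a choice made for this sketch)."""
    GR = np.zeros_like(F, dtype=float)
    GC = np.zeros_like(F, dtype=float)
    GR[:, 1:] = F[:, 1:] - F[:, :-1]    # G_R(j,k) = F(j,k) - F(j-1,k)
    GC[:-1, :] = F[:-1, :] - F[1:, :]   # G_C(j,k) = F(j,k) - F(j,k+1)
    return GR, GC

def gradient_amplitude(GR, GC, exact=True):
    """Eq. 15.2-2 (square-root form) or the cheaper Eq. 15.2-3 magnitude combination."""
    return np.hypot(GR, GC) if exact else np.abs(GR) + np.abs(GC)

def gradient_orientation(GR, GC):
    """Eq. 15.2-4: orientation of the spatial gradient with respect to the row axis."""
    return np.arctan2(GC, GR)

def edge_map(F, threshold):
    """First-order differential detection: mark edges where the amplitude exceeds a threshold."""
    GR, GC = pixel_difference_gradients(F)
    return gradient_amplitude(GR, GC) >= threshold

F = np.zeros((5, 9)); F[:, 4:] = 1.0    # vertical step edge of height h = 1
E = edge_map(F, threshold=0.5)          # responds in a single column, as in the text
```

Note that `np.arctan2` is used rather than a plain arctangent so that the full range of orientations is recovered when G_R is negative or zero.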
Figure 15.2-2 provides examples of horizontal and vertical differencing gradients of the monochrome peppers image. In this and subsequent gradient display photographs, the gradient range has been scaled over the full contrast range of the photograph. It is visually apparent from the photograph that the running difference technique is highly susceptible to small fluctuations in image luminance and that the object boundaries are not well delineated.

FIGURE 15.2-2. Horizontal and vertical differencing gradients of the peppers_mon image: (a) original, (b) horizontal magnitude, (c) vertical magnitude.

Diagonal edge gradients can be obtained by forming running differences of diagonal pairs of pixels. This is the basis of the Roberts (2) cross-difference operator, which is defined in magnitude form as

G(j, k) = |G_1(j, k)| + |G_2(j, k)| \qquad (15.2-6a)

and in square-root form as

G(j, k) = \left[ G_1(j, k)^2 + G_2(j, k)^2 \right]^{1/2} \qquad (15.2-6b)

where

G_1(j, k) = F(j, k) - F(j - 1, k + 1) \qquad (15.2-6c)

G_2(j, k) = F(j, k) - F(j + 1, k + 1) \qquad (15.2-6d)

The edge orientation with respect to the row axis is

\theta(j, k) = \frac{\pi}{4} + \arctan \left\{ \frac{G_2(j, k)}{G_1(j, k)} \right\} \qquad (15.2-7)

Figure 15.2-3 presents the edge gradients of the peppers image for the Roberts operators. Visually, the objects in the image appear to be slightly better distinguished with the Roberts square-root gradient than with the magnitude gradient. In Section 15.5, a quantitative evaluation of edge detectors confirms the superiority of the square-root combination technique.

The pixel difference method of gradient generation can be modified to localize the edge center of the ramp edge model of Figure 15.1-3 by forming the pixel difference separated by a null value. The row and column gradients then become

G_R(j, k) = F(j + 1, k) - F(j - 1, k) \qquad (15.2-8a)

G_C(j, k) = F(j, k - 1) - F(j, k + 1) \qquad (15.2-8b)

FIGURE 15.2-3. Roberts gradients of the peppers_mon image: (a) magnitude, (b) square root.
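A sketch of the Roberts cross-difference operator in both forms of Eq. 15.2-6, assuming arrays indexed F[k, j] with the chapter's upper-left origin; the function name and the zeroed border handling are illustrative choices:

```python
import numpy as np

def roberts_gradients(F):
    """Roberts cross-difference gradients (Eqs. 15.2-6a to 15.2-6d). F is indexed
    F[k, j] with the chapter's upper-left origin; borders that lack a diagonal
    neighbor are left at zero (a border choice made for this sketch)."""
    G1 = np.zeros_like(F, dtype=float)
    G2 = np.zeros_like(F, dtype=float)
    G1[:-1, 1:] = F[:-1, 1:] - F[1:, :-1]    # G_1(j,k) = F(j,k) - F(j-1,k+1)
    G2[:-1, :-1] = F[:-1, :-1] - F[1:, 1:]   # G_2(j,k) = F(j,k) - F(j+1,k+1)
    magnitude = np.abs(G1) + np.abs(G2)      # magnitude form, Eq. 15.2-6a
    square_root = np.hypot(G1, G2)           # square-root form, Eq. 15.2-6b
    return magnitude, square_root

F = np.zeros((4, 6)); F[:, 3:] = 1.0         # vertical step edge
magnitude, square_root = roberts_gradients(F)
```

On a unit step edge the two forms agree; they differ on edges where both diagonal differences are nonzero, which is where the square-root combination performs better.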
The row gradient response of the separated pixel difference operator for a vertical ramp edge model is then

0  0  h/2  h  h/2  0  0

Although the ramp edge is properly localized, the separated pixel difference gradient generation method remains highly sensitive to small luminance fluctuations in the image. This problem can be alleviated by using two-dimensional gradient formation operators that perform differentiation in one coordinate direction and spatial averaging in the orthogonal direction simultaneously.

Prewitt (1, p. 108) has introduced a 3 × 3 pixel edge gradient operator described by the pixel numbering convention of Figure 15.2-4. The Prewitt operator square-root edge gradient is defined as

G(j, k) = \left[ G_R(j, k)^2 + G_C(j, k)^2 \right]^{1/2} \qquad (15.2-9a)

with

G_R(j, k) = \frac{1}{K + 2} \left[ (A_2 + K A_3 + A_4) - (A_0 + K A_7 + A_6) \right] \qquad (15.2-9b)

G_C(j, k) = \frac{1}{K + 2} \left[ (A_0 + K A_1 + A_2) - (A_6 + K A_5 + A_4) \right] \qquad (15.2-9c)

where K = 1. In this formulation, the row and column gradients are normalized to provide unit-gain positive and negative weighted averages about a separated edge position.

FIGURE 15.2-4. Numbering convention for 3 × 3 edge detection operators.

[...]
[...] detector. For noise-free images, the threshold can be chosen such that all amplitude discontinuities of a minimum contrast level are detected as edges, and all others are called non-edges. With noisy images, threshold [...]

FIGURE 15.2-11. Nevatia–Babu template gradient impulse response arrays.

FIGURE 15.2-12. Nevatia–Babu gradient of the peppers_mon image.

[...] Laplacian response for a single-transition-pixel diagonal ramp edge model is

0  -h/8  -h/8  0  h/8  h/8  0

and the edge lies at the zero crossing at the center pixel. The Laplacian response for the smoothed transition diagonal ramp edge model of Figure 15.1-3 is

0  -h/16  -h/8  -h/16  h/16  h/8  h/16  0

In this example, the zero crossing does not occur at a pixel location. The edge should [...]

[...] \qquad (15.3-15)

about a candidate edge point (j, k) in the discrete image F(j, k), where the k_n are weighting factors to be determined from the discrete image data. In this notation, the indices -(W - 1)/2 ≤ r, c ≤ (W - 1)/2 are treated as continuous variables in the row (x-coordinate) and column (y-coordinate) directions of the discrete image, but the discrete image [...]

[...] of Table 15.2-1 were conducted with relatively low signal-to-noise ratio images. Section 15.5 provides examples of such images. For high signal-to-noise ratio images, the optimum threshold is much lower. As a rule of thumb, under the condition that P_F = 1 - P_D, the edge detection threshold can be scaled linearly with signal-to-noise ratio. Hence, for an image with SNR = 100, the threshold is about 10% of the peak [...]
[...] version of the separable eight-neighbor Laplacian is given by

H = \frac{1}{8} \begin{bmatrix} -2 & 1 & -2 \\ 1 & 4 & 1 \\ -2 & 1 & -2 \end{bmatrix} \qquad (15.3-8)

It is instructive to examine the Laplacian response to the edge models of Figure 15.1-3. As an example, the separable eight-neighbor Laplacian response corresponding to the center row of the vertical step edge model is

0  -3h/8  3h/8  0

where h = b - a is the edge height. The Laplacian response of the [...]

[...] respectively. Figure 15.2-10 provides a comparison of the edge gradients of the peppers image for the four 3 × 3 template gradient operators.

FIGURE 15.2-10. 3 × 3 template gradients of the peppers_mon image: (a) Prewitt compass gradient, (b) Kirsch, (c) Robinson three-level, (d) Robinson five-level.

Nevatia and Babu (18) have developed an edge detection technique in which the gain-normalized 5 × [...]

TABLE 15.2-1. Threshold Levels and Associated Edge Detection Probabilities for 3 × 3 Edge Detectors.

[...] \qquad (15.3-17)

where P_n(r, c) denotes a set of discrete orthogonal polynomials and the a_n are weighting coefficients. Haralick (28) has used the following set of 3 × 3 Chebyshev orthogonal polynomials:

P_1(r, c) = 1 \qquad (15.3-18a)

P_2(r, c) = r \qquad (15.3-18b)

P_3(r, c) = c \qquad (15.3-18c)
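The separable eight-neighbor Laplacian of Eq. 15.3-8 and its step edge response can be checked numerically. The correlation helper and the zeroed borders below are illustrative choices, not from the text:

```python
import numpy as np

# Separable eight-neighbor Laplacian of Eq. 15.3-8.
H = np.array([[-2, 1, -2],
              [ 1, 4,  1],
              [-2, 1, -2]], dtype=float) / 8.0

def laplacian_response(F):
    """Correlate F with H; border pixels are left at zero (a sketch-only choice)."""
    rows, cols = F.shape
    out = np.zeros_like(F, dtype=float)
    for dk in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out[1:-1, 1:-1] += H[dk + 1, dj + 1] * F[1 + dk:rows - 1 + dk,
                                                     1 + dj:cols - 1 + dj]
    return out

F = np.zeros((5, 8)); F[:, 4:] = 1.0   # vertical step edge, h = 1
lap = laplacian_response(F)
# The center row reproduces the response quoted in the text: 0 ... -3h/8  3h/8 ... 0
```

The sign change between adjacent pixels is the zero crossing that second-order derivative edge detectors look for.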
DETECTION of optical images of real scenes, generally do not possess step edges because the anti- aliasing low-pass filtering prior to digitization reduces the edge slope in the digital image caused. a luminance image: differential detection and model fitting. With the differential detec- tion approach, as illustrated in Figure 15.1-6, spatial processing is performed on an original image to. by a null value. The row and column gradients then become (15.2-8a) (15.2-8b) FIGURE 15.2-3. Roberts gradients of the peppers_mon image. Gjk,() G 1 jk,()[] 2 G 2 jk,()[] 2 +[] 12⁄ = G 1 jk,()Fjk,()Fj
