
Image processing P5


DOCUMENT INFORMATION

Format: 38 pages, 1.17 MB

CONTENTS

Image Processing: The Fundamentals. Maria Petrou and Panagiota Bosdogianni. Copyright 1999 John Wiley & Sons Ltd. Print ISBN 0-471-99883-4; Electronic ISBN 0-470-84190-7.

Chapter 5: Two-Dimensional Filters

What is this chapter about?

Manipulation of images often entails omitting or enhancing details of certain spatial frequencies. This is equivalent to multiplying the Fourier transform of the image by a function that "kills" or modifies certain frequency components. When we do that, we say that we filter the image, and the function we use is called a filter. This chapter explores some of the basic properties of 2D filters and presents some methods by which the operation we wish to apply to the Fourier transform of the image can be converted into a simple convolution operation applied to the image directly, allowing us to avoid using the Fourier transform itself.

How do we define a 2D filter?

A 2D filter is defined in terms of its Fourier transform H(μ, ν), called the system function. By taking the inverse Fourier transform of H(μ, ν) we can calculate the filter in the real domain. This is called the unit sample response of the filter and is denoted by h(k, l).

How are the system function and the unit sample response of the filter related?
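The equivalence just described, between multiplying the Fourier transform of a signal by a system function and convolving the signal directly with the corresponding unit sample response, can be demonstrated numerically. The sketch below is illustrative only: the toy 1D signal, the small smoothing filter, and the naive O(N²) DFT are all invented for the demonstration (a real implementation would use an FFT), and the convolution is circular to match the DFT's periodicity.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse discrete Fourier transform."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circular_convolve(x, h):
    """Circular convolution of two equal-length sequences."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

# A toy 1D "image" and a small averaging filter, zero-padded to the same length.
signal = [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 0.0]
h = [0.25, 0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0]

# Path 1: convolve in the real domain with the unit sample response.
direct = circular_convolve(signal, h)

# Path 2: multiply the Fourier transforms and invert.
via_ft = [c.real for c in idft([a * b for a, b in zip(dft(signal), dft(h))])]

assert all(abs(a - b) < 1e-9 for a, b in zip(direct, via_ft))
```

The two paths agree to rounding error, which is exactly why the chapter can trade frequency-domain multiplication for a direct convolution.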
The system function H(μ, ν) is defined as a continuous function of (μ, ν). The unit sample response h(k, l) is defined as the inverse Fourier transform of H(μ, ν), but since it has to be used for the convolution of a digital image, it is defined at discrete points only. The equations relating the two functions are:

\[ H(\mu,\nu) = \sum_{n=-\infty}^{+\infty}\sum_{m=-\infty}^{+\infty} h(n,m)\,e^{-j(\mu n+\nu m)} \qquad(5.1) \]

\[ h(k,l) = \frac{1}{4\pi^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} H(\mu,\nu)\,e^{j(\mu k+\nu l)}\,d\mu\,d\nu \qquad(5.2) \]

If we are interested in real filters only, these equations can be modified as follows:

\[ h(k,l) = \frac{1}{4\pi^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} H(\mu,\nu)\cos(\mu k+\nu l)\,d\mu\,d\nu \qquad(5.3) \]

\[ H(\mu,\nu) = \sum_{n=-\infty}^{+\infty}\sum_{m=-\infty}^{+\infty} h(n,m)\cos(\mu n+\nu m) \qquad(5.4) \]

Figure 5.1: Top row: a signal and its Fourier transform. Middle row: the unit sample response function of a filter on the left, and the filter's system function on the right. Bottom row: on the left, the filtered signal obtained by convolving the signal at the top with the filter in the middle; on the right, the Fourier transform of the filtered signal, obtained by multiplying the Fourier transform of the signal at the top with the Fourier transform (system function) of the filter in the middle.

Why are we interested in the filter function in the real domain?

We can achieve the desired enhancement of the image by simply convolving it with h(k, l) instead of multiplying its Fourier transform by H(μ, ν). Figure 5.1 shows this schematically in the 1D case. The 2D case is totally analogous.

Are there any conditions which h(k, l) must fulfil so that it can be used as a convolution filter?

Yes: h(k, l) must be zero for |k| > K and |l| > L, for some finite values K and L; i.e. the filter with which we want to convolve the image must be a finite array of numbers. The ideal lowpass, bandpass and highpass filters do not fulfil this condition.

B5.1: What is the unit sample response of the ideal lowpass filter?
The ideal lowpass filter, which cuts to zero all frequencies above a certain frequency R, say, is defined as:

\[ H(\mu,\nu) = \begin{cases} 1 & \text{if } \sqrt{\mu^2+\nu^2} \le R \\ 0 & \text{otherwise} \end{cases} \qquad(5.5) \]

We can use this definition of H(μ, ν) to calculate the corresponding unit sample response from equation (5.3). We introduce polar coordinates (r, θ) in the (μ, ν) frequency space and define the angle ξ by sin ξ = k/√(k²+l²), cos ξ = l/√(k²+l²), together with the new variable t ≡ θ + ξ. Manipulating the limits of integration (shifting the integration variable by 2π and by π and using the periodicity of the integrand) reduces the integral to:

\[ h(k,l) = \frac{1}{4\pi^2}\int_0^{2\pi}\int_0^R \cos\!\left(r\sqrt{k^2+l^2}\,\sin t\right) r\,dr\,dt \qquad(5.7) \]

We know that the Bessel function of the first kind of zeroth order is defined as:

\[ J_0(x) = \frac{1}{2\pi}\int_0^{2\pi} \cos(x\sin t)\,dt \qquad(5.8) \]

If we use definition (5.8) in equation (5.7) we obtain:

\[ h(k,l) = \frac{1}{2\pi}\int_0^R r\,J_0\!\left(r\sqrt{k^2+l^2}\right) dr \]

We define a new variable of integration x ≡ r√(k²+l²), so that dr = dx/√(k²+l²). Then:

\[ h(k,l) = \frac{1}{2\pi\,(k^2+l^2)}\int_0^{R\sqrt{k^2+l^2}} x\,J_0(x)\,dx \qquad(5.9) \]

From the theory of Bessel functions, it is known that:

\[ \int_0^{z} x\,J_0(x)\,dx = z\,J_1(z) \qquad(5.10) \]

so that:

\[ h(k,l) = \frac{R}{2\pi\sqrt{k^2+l^2}}\,J_1\!\left(R\sqrt{k^2+l^2}\right) \qquad(5.11) \]

This function is of infinite extent, defined at each point (k, l) of integer coordinates. It corresponds, therefore, to an array of infinite dimensions. The implication is that this filter cannot be implemented as a linear convolution filter of the image.

Example (B): What is the impulse response of the ideal bandpass filter?
The ideal bandpass filter is defined as:

\[ H(\mu,\nu) = \begin{cases} 1 & \text{if } R_1 \le \sqrt{\mu^2+\nu^2} \le R_2 \\ 0 & \text{otherwise} \end{cases} \]

The only difference, therefore, from the ideal lowpass filter derived in box B5.1 is in the limits of equation (5.9):

\[ h(k,l) = \frac{1}{2\pi\,(k^2+l^2)}\int_{R_1\sqrt{k^2+l^2}}^{R_2\sqrt{k^2+l^2}} x\,J_0(x)\,dx = \frac{R_2\,J_1\!\left(R_2\sqrt{k^2+l^2}\right) - R_1\,J_1\!\left(R_1\sqrt{k^2+l^2}\right)}{2\pi\sqrt{k^2+l^2}} \]

This is a function defined for all values (k, l). Therefore the ideal bandpass filter is an infinite impulse response filter.

Example (B): What is the impulse response of the ideal highpass filter?

The ideal highpass filter is defined as:

\[ H(\mu,\nu) = \begin{cases} 0 & \text{if } \sqrt{\mu^2+\nu^2} < R_1 \\ 1 & \text{otherwise} \end{cases} \]

The only difference, therefore, from the ideal lowpass filter derived in box B5.1 is in the limits of equation (5.9):

\[ h(k,l) = \frac{1}{2\pi\,(k^2+l^2)}\int_{R_1\sqrt{k^2+l^2}}^{\infty} x\,J_0(x)\,dx \qquad(5.12) \]

The Bessel function J₁(x) tends to 0 as x → ∞, but its asymptotic envelope decays only like 1/√x. This means that J₁(x) does not tend to zero fast enough to compensate for the factor x which multiplies it; i.e. x J₁(x) has no limit as x → ∞ (it oscillates with a growing envelope). Therefore, there is no real-domain function that has the ideal highpass filter as its Fourier transform. In practice, of course, the highest frequency we may possibly be interested in is fixed by the number of pixels N in the image, so the issue of the infinite upper limit in equation (5.12) does not arise, and the ideal highpass filter becomes the same as the ideal bandpass filter.

What is the relationship between the 1D and the 2D ideal lowpass filters?
The 1D ideal lowpass filter is given by:

\[ h(k) = \frac{\sin k}{k} \qquad(5.13) \]

The 2D ideal lowpass filter is given by:

\[ h(k,l) = \frac{J_1\!\left(\sqrt{k^2+l^2}\right)}{\sqrt{k^2+l^2}} \]

where J₁(x) is the first-order Bessel function of the first kind (constant factors and the cutoff frequency have been omitted for the comparison). Figure 5.2 shows the plot of h(k) versus k and the plot of h(k, l) versus k for l = 0. It can be seen that although the two filters look similar, they differ in significant details: their zero crossings are at different places, and the amplitudes of their side-lobes are different. This implies that we cannot take an ideal or optimal (according to some criterion) 1D filter, replace its variable by the polar radius (i.e. replace k by √(k²+l²) in equation (5.13)) and create the corresponding "ideal or optimal" filter in 2D. However, although the 2D filter we create this way will not be the ideal or optimal one according to the corresponding criterion in 2D, it will be a good suboptimal filter with qualitatively the same behaviour as the optimal one.

How can we implement a filter of infinite extent?

A filter which is of infinite extent in real space can be implemented in a recursive way, and that is why it is called a recursive filter. Filters which are of finite extent in real space are called non-recursive filters. Filters are usually represented and manipulated with the help of their z-transforms.

How is the z-transform of a digital 1D filter defined?
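The difference in zero crossings between the 1D cross-section sin(k)/k and the 2D cross-section J₁(r)/r, noted above, can be checked numerically. The sketch below is illustrative only: it evaluates J₁ through its integral representation J₁(x) = (1/π)∫₀^π cos(t − x sin t) dt with a simple trapezoidal rule, and the helper names are invented for the demonstration. (The zeros of J₁(r)/r away from the origin are just the zeros of J₁.)

```python
import math

def bessel_j1(x):
    """J1(x) via its integral representation, using a trapezoidal rule."""
    n = 800
    step = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(math.pi))  # endpoint terms t = 0 and t = pi
    for i in range(1, n):
        t = i * step
        s += math.cos(t - x * math.sin(t))
    return s * step / math.pi

def first_zero(f, a, b, steps=600):
    """Locate the first sign change of f in (a, b), then refine by bisection."""
    prev = f(a)
    for i in range(1, steps + 1):
        x = a + (b - a) * i / steps
        cur = f(x)
        if prev * cur < 0:
            lo, hi = x - (b - a) / steps, x
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        prev = cur
    return None

sinc_zero = first_zero(lambda x: math.sin(x) / x, 0.1, 6.0)  # 1D filter cross-section
jinc_zero = first_zero(bessel_j1, 0.1, 6.0)                  # 2D filter cross-section

print(round(sinc_zero, 4), round(jinc_zero, 4))  # ≈ 3.1416 and ≈ 3.8317
```

The first zero of the 1D filter falls at π, while the first zero of the 2D cross-section falls near 3.83 (the first zero of J₁), confirming that the two profiles genuinely differ.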
A filter of finite extent is essentially a finite string of numbers {x₁, x₂, x₃, …, xₙ}. Sometimes an arrow is used to denote the element of the string that corresponds to the zeroth position.

Figure 5.2: The cross-section of the 2D ideal lowpass filter is similar to, but different from, the cross-section of the 1D ideal lowpass filter.

The z-transform of such a string is defined as:

\[ X(z) = \sum_{k=l}^{m} x_k z^{-k} \qquad(5.14) \]

where l and m are defined according to which term of the string of numbers {x_k} is assumed to be the k = 0 term. (For example, if x₃ is the k = 0 term, then l = −2 and m = n − 3 in the above sequence.) If the filter is of infinite extent, the sequence of numbers which represents it is of infinite extent too, and its z-transform is given by an infinite sum. In such a case we can usually write this sum in closed form as the ratio of two polynomials in z, as opposed to writing it as a single polynomial in z, which is the case for the z-transform of the finite filter.

Why do we use z-transforms?

The reason we use z-transforms is that digital filters can easily be realized in hardware in terms of their z-transforms. The z-transform of a sequence, together with its region of convergence, uniquely defines the sequence. Further, it obeys the convolution theorem: the z-transform of the convolution of two sequences is the product of the z-transforms of the two sequences.

How is the z-transform defined in 2D?

For a finite 2D array of dimensions M × N, the z-transform is a finite polynomial in two complex variables z₁ and z₂:

\[ X(z_1,z_2) = \sum_{i=0}^{M}\sum_{j=0}^{N} c_{ij}\, z_1^{-i} z_2^{-j} \qquad(5.15) \]

where c_ij are the elements of the array. For an infinite array, the double summation is of infinite extent and it can be written as a ratio of two finite summations:

\[ H(z_1,z_2) = \frac{\sum_{i=0}^{M_a}\sum_{j=0}^{N_a} a_{ij}\, z_1^{-i} z_2^{-j}}{\sum_{k=0}^{M_b}\sum_{l=0}^{N_b} b_{kl}\, z_1^{-k} z_2^{-l}} \qquad(5.16) \]

where M_a, N_a, M_b and N_b are some integers. Conventionally we choose b₀₀ = 1.

B5.2: Why does the extent of a filter determine whether the filter is recursive or not?
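The convolution property quoted above, that the z-transform of the convolution of two sequences is the product of their z-transforms, is the key fact the following box relies on. It can be checked numerically with toy sequences (both sequences and the evaluation point are invented for the illustration):

```python
def conv(a, b):
    """Linear convolution of two finite sequences (the taps of non-recursive filters)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def z_transform(seq, z):
    """Evaluate X(z) = sum_k x_k z^{-k}, taking the first element as the k = 0 term."""
    return sum(xk * z ** (-k) for k, xk in enumerate(seq))

x = [1.0, -2.0, 3.0]   # a short "image" line
h = [0.5, 0.5]         # a two-tap averaging filter

z = 1.3 - 0.7j  # arbitrary test point inside the region of convergence
lhs = z_transform(conv(x, h), z)                 # z-transform of the convolution
rhs = z_transform(x, z) * z_transform(h, z)      # product of the z-transforms
assert abs(lhs - rhs) < 1e-9
```

Since both sides agree for arbitrary z, convolving in the real domain and multiplying z-transforms are interchangeable descriptions of the same filtering operation.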
When we convolve an image with a digital filter, we essentially multiply the z-transform of the image with the z-transform of the filter:

\[ \underbrace{R(z_1,z_2)}_{\text{z-transform of output image}} = \underbrace{H(z_1,z_2)}_{\text{z-transform of filter}}\;\underbrace{D(z_1,z_2)}_{\text{z-transform of input image}} \qquad(5.17) \]

If we substitute (5.16) into (5.17) and bring the denominator to the left-hand side of the equation, we have:

\[ \left(\sum_{k=0}^{M_b}\sum_{l=0}^{N_b} b_{kl}\, z_1^{-k} z_2^{-l}\right) R(z_1,z_2) = \left(\sum_{i=0}^{M_a}\sum_{j=0}^{N_a} a_{ij}\, z_1^{-i} z_2^{-j}\right) D(z_1,z_2) \qquad(5.18) \]

In the sum on the left-hand side we separate the k = 0, l = 0 term, which by convention has b₀₀ = 1. Remember that R(z₁, z₂) is a sum in z₁⁻ᵐ z₂⁻ⁿ with coefficients, say, r_mn. It is clear from the above equation that the value of r_mn can be calculated in terms of the previously calculated values of r_mn, since the series R(z₁, z₂) appears on the right-hand side of the equation too. That is why such a filter is called recursive. In the case of a finite filter, all b_kl's are zero (except b₀₀, which is 1), and so the coefficients r_mn of R(z₁, z₂) are expressed in terms of the a_ij and the coefficients which appear in D(z₁, z₂) only (i.e. we have no recursion).

Example (B): A 256 × 256 image is to be processed by an infinite impulse response filter. The z-transform of the filter can be written as the ratio of a third-degree polynomial in each of the variables z₁ and z₂ over another third-degree polynomial in the same variables. Calculate the values of the output image in terms of the values of the input image and the filter coefficients.

In equation (5.18) we have M_a = N_a = M_b = N_b = 3. Let us say that the z-transform of the input image is:

\[ D(z_1,z_2) = \sum_{k=0}^{255}\sum_{l=0}^{255} d_{kl}\, z_1^{-k} z_2^{-l} \]

and of the output image is:

\[ R(z_1,z_2) = \sum_{k=0}^{255}\sum_{l=0}^{255} r_{kl}\, z_1^{-k} z_2^{-l} \qquad(5.19) \]

[…]

[The preview resumes in the middle of a derivation showing that a filter with a symmetric unit sample response has a real system function.] In the expression for the imaginary part, the terms with n = 0 or m = 0 have been separated out. Since

\[ h(n,m) = h(-n,-m) \;\Rightarrow\; \begin{cases} h(0,m) = h(0,-m) \\ h(n,0) = h(-n,0) \\ h(n,-m) = h(-n,m) \end{cases} \qquad(5.56) \]

and because sin(−x) = −sin x, the sine terms in (5.55) cancel each other in pairs. Therefore, the imaginary part in equation (5.54) is zero and the Fourier transform of the filter is real, given by:

\[ H(\mu,\nu) = \sum_{n=-N}^{N}\sum_{m=-N}^{N} h(n,m)\cos(\mu n + \nu m) \qquad(5.57) \]

We can further simplify this expression by changing variables for the sums over negative indices (i.e. setting ñ = −n for negative n and m̃ = −m for negative m). Using (5.56) again and the fact that cos(−x) = cos x, terms combine in pairs, and finally, writing the first three sums together, we get:

\[ H(\mu,\nu) = 2\sum_{m=1}^{N}\sum_{n=-N}^{N} h(n,m)\cos(\mu n + \nu m) + 2\sum_{n=1}^{N} h(n,0)\cos(\mu n) + h(0,0) \]

Is there any way by which we can reduce the computational intensity of the linear programming solution?

Yes, by breaking the problem into a series of smaller ones within the framework of an iterative algorithm. At every iteration step of this algorithm, the optimal solution is found within a subset of the set of all the constraint points. The algorithm uses linear programming at each iteration step and at every step narrows the range over which the Chebyshev error is allowed to lie. So, it breaks a big linear programming problem into a series of small, more manageable ones.

What is the philosophy of the iterative approach?
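The recursive computation described in box B5.2 above is easiest to see in one dimension. The sketch below is a hedged illustration (the function name and the example filter H(z) = 1/(1 − 0.5 z⁻¹) are invented for the demonstration): each output sample is computed from the input samples and from previously computed output samples, exactly as the coefficients r_mn are in equation (5.18).

```python
def iir_filter(x, a, b):
    """Direct recursion y[n] = sum_i a[i] x[n-i] - sum_{k>=1} b[k] y[n-k],
    with b[0] assumed to be 1 (the b00 = 1 convention of equation (5.16))."""
    y = []
    for n in range(len(x)):
        acc = sum(a[i] * x[n - i] for i in range(len(a)) if n - i >= 0)
        acc -= sum(b[k] * y[n - k] for k in range(1, len(b)) if n - k >= 0)
        y.append(acc)
    return y

# H(z) = 1 / (1 - 0.5 z^{-1})  ->  unit sample response h(n) = 0.5**n, of infinite extent,
# yet realized with only two coefficients thanks to the recursion.
impulse = [1.0] + [0.0] * 9
response = iir_filter(impulse, a=[1.0], b=[1.0, -0.5])
assert all(abs(r - 0.5 ** n) < 1e-12 for n, r in enumerate(response))
```

When all b_k for k ≥ 1 are zero, the recursion disappears and the same function reduces to a plain (non-recursive) convolution, mirroring the finite-filter case of B5.2.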
Imagine that we have a metal sheet with which we want to clad a curved surface. Every time we try to bend the sheet to pass through some points of the surface, it bulges badly at some other points owing to its rigidity. We find it difficult to hold it down everywhere simultaneously. So, we bend and distort it first so that it fits some points very well (see Figure 5.6). Then we find the point where it deviates most from the desired shape and deform it again so that it fits another subset of points that includes the one with the previously maximum deviation. It is clear that now it will not fit the original set of points all that well, and since we included the most deviating of the remaining points, our new fitting will be a little bit worse than before. In other words, as our fitting progresses, we gradually increase the fitting error. This is the penalty we pay in order to make our metal sheet fit better and better overall the shape we want. Such an algorithm is called maximizing because from one iteration step to the next it increases the lower limit of the fitting error.

Figure 5.6: How to fit a surface gradually. Left: a rigid metal sheet. Right: first bend to match part of the surface.

Are there any algorithms that work by decreasing the upper limit of the fitting error?

Yes, they are called minimizing. There are also algorithms, called mini-max, that work by simultaneously increasing the lower limit of the error and decreasing its upper limit. However, we are not going to discuss them here.

How does the maximizing algorithm work?

It works by making use of the concept of the limiting set of equations and the La Vallée Poussin theorem.

What is a limiting set of equations?

The set of equations (5.48) is called limiting if all Δ_m's are non-zero and their absolute values |Δ_m| cannot all be reduced simultaneously for any choice of the c's we make.

What does the La Vallée Poussin theorem say?
Suppose that equations (5.48) form a limiting set. We choose some c*_i's which give the best approximation to F in the Chebyshev sense. Call this approximation P* ≡ Σᵢ c*ᵢ gᵢ(μ, ν). Best approximation in the Chebyshev sense over the set X of M points (μ₁, ν₁), (μ₂, ν₂), …, (μ_M, ν_M) means that:

max |P* − F| over the set of points X ≤ max |P − F| over the same set of points X

where P is any other approximation, i.e. any other set of cᵢ's. The generalized La Vallée Poussin theorem further states that this best Chebyshev error is greater than the minimum error of any other approximation P. So, the theorem states that:

\[ \min_X |P - F| \;\le\; \max_X |P^* - F| \;\le\; \max_X |P - F| \qquad(5.58) \]

i.e. the error of the best Chebyshev approximation is bounded from above and below by the maximum and the minimum error of any other approximation.

What is the proof of the La Vallée Poussin theorem?

The right-hand side of (5.58) is obvious from the definition of the best Chebyshev approximation. To prove the left inequality, assume first that it does not hold; assume:

max_X |P* − F| < min_X |P − F|, i.e. max_X |Δ*_k| < min_X |Δ_m|

This implies that all Δ*_k's are smaller in absolute value than all Δ_m's (since the maximum of the Δ*_k's is less than the minimum of the Δ_m's). This means that when we change the values of the coefficients from the cᵢ's to the c*ᵢ's, all the absolute values of the Δ_m's reduce simultaneously. This is in contradiction to the assumption that the set of equations over the points in X is limiting. So (5.58) holds.

What are the steps of the iterative algorithm?
Suppose that we choose a subset X_k of the set of points X (these points are the chosen discrete (μ, ν) points over which we want to determine the best approximation). In this subset of points we choose the best approximation

P_k(μ, ν) = Σ_{i=1}^{n} c_i^k g_i(μ, ν), best in X_k

and we define its error in X_k by:

ε_k ≡ max over X_k of |P_k − F|

and its error in the whole set X by:

E_k ≡ max over X of |P_k − F|

Now determine a new set of points X_{k+1}, a subset of X, such that the approximation P_k has error |P_k − F| ≥ ε_k at every point of X_{k+1}, and the set of equations

Σ_{i=1}^{n} c_i g_i(μ, ν) = F(μ, ν) − Δ(μ, ν), for all (μ, ν) in X_{k+1}

is limiting. Then, according to the La Vallée Poussin theorem:

ε_k ≤ max over X_{k+1} of |P_{k+1} − F| ≤ max over X_{k+1} of |P_k − F|, i.e. ε_k ≤ ε_{k+1} ≤ E_k (5.59)

where P_{k+1} is the best approximation in the new set of points X_{k+1} and ε_{k+1} ≡ max over X_{k+1} of |P_{k+1} − F|. From the way we choose, each time, the extra point to include in the subset X_{k+1}, it is obvious that from one step to the next we narrow the double inequality (5.59) by increasing its lower limit.

Can we approximate a filter by working fully in the frequency domain?

Yes, by calculating the values of the system function at some frequencies as functions of the values of the same function at other frequencies. In other words, we express H(μ, ν) in terms of H(k, l), where k, l take specific integer values, and calculate the values of H(k, l) for which max |H(μ, ν) − F(μ, ν)| is minimum. We shall consider certain discrete values of μ, ν. These points will be the constraint points.

How can we express the system function of a filter at some frequencies as a function of its values at other frequencies?
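The iterative steps just described can be sketched for the simplest possible approximation problem: fitting a single constant to a set of values under the Chebyshev norm, for which the best approximation over any point set is simply the midpoint of its range, with error equal to half the range. This toy is invented for illustration; a real filter-design implementation would solve a linear programme at each step instead of the one-line `best_constant`, but the structure of the iteration (solve on a subset, add the worst-fitting point, repeat) is the same.

```python
def best_constant(points):
    """Best Chebyshev approximation of a set of values by a constant: the midrange."""
    return 0.5 * (min(points) + max(points))

def max_error(points, c):
    return max(abs(p - c) for p in points)

def maximizing_fit(values):
    """'Maximizing' scheme: solve on a small subset X_k, then add the worst-fitting
    point of the full set X and repeat.  The subset errors eps_k form a
    non-decreasing sequence bounded above by the full-set Chebyshev error E_k."""
    subset = list(values[:2])            # start from any two constraint points
    history = []
    while True:
        c = best_constant(subset)        # optimal on the current subset X_k
        history.append(max_error(subset, c))          # eps_k
        worst = max(values, key=lambda p: abs(p - c))  # most deviating point of X
        if abs(worst - c) <= max_error(subset, c) + 1e-12:
            return c, history            # subset error already matches full error
        subset.append(worst)             # X_{k+1} includes the worst point

values = [0.2, 1.7, -0.4, 0.9, 2.3, -1.1, 0.5]
c, eps = maximizing_fit(values)
assert all(e1 <= e2 + 1e-12 for e1, e2 in zip(eps, eps[1:]))  # errors never decrease
assert abs(c - 0.5 * (min(values) + max(values))) < 1e-12     # reaches the true optimum
```

The recorded errors ε_k only ever grow, exactly as the La Vallée Poussin bound (5.59) predicts, until the subset solution is optimal over the whole point set.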
Consider the equations that express the relationship between the impulse response of the filter and its system function:

\[ H(\mu,\nu) = \sum_{n=-N}^{N}\sum_{m=-N}^{N} h(n,m)\, e^{-j\frac{2\pi}{2N+1}(\mu n + \nu m)} \qquad(5.60) \]

Note that here (μ, ν) take integer values and no longer signify the angular frequencies, as they did in all other expressions in this chapter so far. The inverse discrete Fourier transform of this system function is:

\[ h(m,n) = \frac{1}{(2N+1)^2}\sum_{k=-N}^{N}\sum_{l=-N}^{N} H(k,l)\, e^{j\frac{2\pi}{2N+1}(km + ln)} \qquad(5.61) \]

Call 2N + 1 ≡ p. Notice that if we assume that H(k, l) = H(−k, −l), the filter will be real (as we want it), because the imaginary terms will cancel out; i.e.

\[ h(m,n) = \frac{1}{p^2}\sum_{k=-N}^{N}\sum_{l=-N}^{N} H(k,l)\cos\!\left(\frac{2\pi}{p}(km+ln)\right) \qquad(5.62) \]

From (5.62) it is obvious that h(m, n) = h(−m, −n), and using that in (5.60) we have:

\[ H(\mu,\nu) = \sum_{n=-N}^{N}\sum_{m=-N}^{N} h(n,m)\cos\!\left(\frac{2\pi}{p}(\mu n + \nu m)\right) \qquad(5.63) \]

Since we want to express H(μ, ν) in terms of other values of H, we must substitute from (5.62) into (5.63):

\[ H(\mu,\nu) = \frac{1}{p^2}\sum_{n=-N}^{N}\sum_{m=-N}^{N}\sum_{k=-N}^{N}\sum_{l=-N}^{N} H(k,l)\cos\!\left(\frac{2\pi}{p}(km+ln)\right)\cos\!\left(\frac{2\pi}{p}(\mu n + \nu m)\right) \qquad(5.64) \]

It can be shown (see box B5.4) that this expression can be written as:

\[ H(\mu,\nu) = \sum_{k=1}^{N}\sum_{l=-N}^{N} H(k,l)\,\Phi(k,l) + \sum_{l=1}^{N} H(0,l)\,\Phi(0,l) + H(0,0)\,\Phi(0,0) \qquad(5.65) \]

where Φ(k, l) is some known function made up of trigonometric functions.

Example 5.4: We want to construct a 3 × 3 real filter, symmetric when both its arguments change sign, with a given frequency response F(μ, ν) [specified at the constraint points; the table is not legible in this copy]. Formulate the problem so that it can be solved by linear programming. State clearly the dimensions and elements of each array involved. (Suggestion: use at most nine constraint points in Fourier space.)
Because of symmetry, h(−1,−1) = h(1,1), h(−1,0) = h(1,0), h(0,−1) = h(0,1) and h(−1,1) = h(1,−1). For a 3 × 3 filter, clearly N = 1, and the system function becomes:

H(μ, ν) = 2h(−1,1) cos(ν − μ) + 2h(0,1) cos ν + 2h(1,0) cos μ + h(0,0) + 2h(1,1) cos(μ + ν)

h(1,1), h(1,0), h(0,1), h(1,−1) and h(0,0) are the unknowns we must specify. Since they can be positive or negative, these must be our free unknowns. Define the error as ε ≡ max over (μ, ν) of |H(μ, ν) − F(μ, ν)|. Then:

ε ≥ |H − F| ⟹ ε − H ≥ −F and ε + H ≥ F

We use two inequalities like the above for every constraint point in frequency space, i.e. a total of 18 inequalities. We want to minimize ε, which must always be non-negative, so it must be our non-negative unknown. So:

Minimize z = X₁ under the constraints A₁₁X₁ + A₁₂X₂ ≥ B₁ (18 inequalities), X₁ ≥ 0 (1 unknown), X₂ free (5 unknowns)

where X₁ = ε and X₂ = (h(1,1), h(1,0), h(0,1), h(1,−1), h(0,0))ᵀ. Considering the points in Fourier space in a fixed order, A₁₁ is the 18 × 1 column with every element equal to 1, B₁ is the 18 × 1 column whose elements are ∓F(μ, ν) at the constraint points, and A₁₂ is the 18 × 5 matrix whose rows contain the corresponding coefficients ±2 cos(ν − μ), ±2 cos ν, ±2 cos μ, ±2 cos(μ + ν) and ±1 evaluated at each constraint point (hence entries of the form ±1, ±2, ±2 cos 1 and ±2 cos 2 for integer-valued constraint points).

B5.4: Prove expression (5.65).

If we use the identity cos α cos β = ½[cos(α + β) + cos(α − β)], equation (5.64) can be written as:

\[ H(\mu,\nu) = \frac{1}{2p^2}\sum_{k=-N}^{N}\sum_{l=-N}^{N} H(k,l)\sum_{n=-N}^{N}\sum_{m=-N}^{N}\bigl[\cos(nx+my)+\cos(nu+mv)\bigr] \qquad(5.66) \]

where, with Δ ≡ 2π/p, we define:

x ≡ Δ(μ + k),  y ≡ Δ(l + ν),  u ≡ Δ(k − μ),  v ≡ Δ(l − ν)

Consider the first of the terms inside the brackets. Expanding cos(nx + my) into products of sines and cosines, the sine terms vanish, since sine is an odd function summed over a symmetric interval of values of n and m, while cosine, being even, allows us to write:

\[ \sum_{n=-N}^{N}\sum_{m=-N}^{N}\cos(nx+my) = \left(1+2\sum_{n=1}^{N}\cos(nx)\right)\left(1+2\sum_{m=1}^{N}\cos(my)\right) \]

We use the identity:

\[ \sum_{k=1}^{n}\cos(kx) = \cos\!\left(\frac{(n+1)x}{2}\right)\frac{\sin\frac{nx}{2}}{\sin\frac{x}{2}} \]

together with cos α sin β = ½[sin(α + β) − sin(α − β)], to obtain (recalling p = 2N + 1):

\[ 1+2\sum_{n=1}^{N}\cos(nx) = \frac{\sin\frac{px}{2}}{\sin\frac{x}{2}} \]

From the definitions of p, x and Δ we have px/2 = π(μ + k) and x/2 = π(μ + k)/p, and similarly for y, u and v. Defining

\[ \phi(\alpha,\beta) \equiv \frac{\sin[\pi(\alpha+\beta)]}{\sin\!\left[\frac{\pi(\alpha+\beta)}{p}\right]} \qquad(5.68) \]

equation (5.66) becomes:

\[ H(\mu,\nu) = \frac{1}{2p^2}\sum_{k=-N}^{N}\sum_{l=-N}^{N} H(k,l)\bigl[\phi(\mu,k)\,\phi(l,\nu)+\phi(k,-\mu)\,\phi(l,-\nu)\bigr] \qquad(5.70) \]

We define Φ(k, l) from the quantity in square brackets (including the 1/2p² factor). From definition (5.68) it is clear that Φ(k, l) = Φ(−k, −l); we also have H(k, l) = H(−k, −l). Splitting the sums over k and l into the zero term, the positive terms and the negative terms, substituting new summation variables for the negative ranges, and combining the pairwise equal terms, we finally obtain:

\[ H(\mu,\nu) = \sum_{k=1}^{N}\sum_{l=-N}^{N} H(k,l)\,\Phi(k,l) + \sum_{l=1}^{N} H(0,l)\,\Phi(0,l) + H(0,0)\,\Phi(0,0) \qquad(5.73) \]

which is expression (5.65), the factors of 2 arising from the combined symmetric terms having been absorbed into Φ.

What exactly are we trying to do when we design the filter in the frequency domain only?

We try to find optimal values for the coefficients H(k, l), for k = 0, 1, …, N and l = 0, 1, …, N (together with their symmetric counterparts), so that the error in approximating F(μ, ν) by H(μ, ν) at a fixed number of constraint points (μᵢ, νᵢ) is minimum. In practice, what people usually do is fix some of the values H(k, l) and try to optimize the choice of the remaining values. For example, in the case of a lowpass filter, we can put H(k, l) = 1 for the frequencies we want to pass and H(k, l) = 0 for the undesired frequencies, and then leave the values of H(k, l) which refer to transitional frequencies to be determined (see Figure 5.7).

Figure 5.7: Fixing some of the filter values in the frequency domain and letting others be free: in the pass band all H(k, l) are set equal to 1; in the stop band all H(k, l) are set equal to 0; in the transition band the optimal values of H(k, l) are calculated. The values are defined on a discrete grid of (k, l), while the underlying frequency is continuous.

How can we solve for the unknown values H(k, l)?

Equation (5.65) is similar to equation (5.50) and can be solved by linear programming.

Does the frequency sampling method yield optimal solutions according to the Chebyshev criterion?
No, because the optimization is not done over all H(k, l), since some of them are fixed.

Example 5.5: Formulate the problem of designing a filter in the frequency domain so that it can be solved by a linear programming package.

If we compare equations (5.65) and (5.50), we see that the problem is very similar to the problem of specifying the values of the filter in the real domain using linear programming. In that case we were seeking to specify h(0,0), h(1,0), …, h(N,0), …, h(−N,m), h(−N+1,m), …, h(N,m) for m = 1, 2, …, N. Now we wish to specify H(0,0), H(0,1), …, H(0,N), H(k,−N), H(k,−N+1), …, H(k,N) for k = 1, 2, …, N. In that case, we had the function cos(μn + νm) appearing in equation (5.50); now we have the function Φ(μ, ν) instead. Φ(μ, ν), however, is also an even function with respect to both its arguments, and therefore it behaves in a similar way to the cosine function as far as the solution of the problem is concerned. The problem therefore can be formulated as follows:

Minimize z = X₁ under the constraints A₁₁X₁ + A₁₂X₂ ≥ B₁, X₁ ≥ 0, X₂ free

where X₁ = ε is the scalar Chebyshev error, X₂ is a (2N² + 2N + 1) × 1 matrix of the unknown values H(k, l), B₁ is a 2p₁ × 1 matrix (p₁ being the number of constraint points), and A₁₂ is a 2p₁ × (2N² + 2N + 1) matrix, the elements of which can easily be found if we substitute equation (5.65) into the inequality constraints.

What is the "take home" message of this chapter?
If we specify exactly the desired frequency response of a filter to be zero outside a finite range, we obtain a filter of infinite extent in the real domain. A digital filter of infinite extent can be realized as a recursive filter with the help of its z-transform. z-transforms in two dimensions are much more difficult to invert than 1-dimensional z-transforms. Also, recursive filters have to fulfil certain criteria to be stable. The theory of 2-dimensional digital filters, their stability criteria and their design problems are beyond the scope of this book. To avoid using infinite impulse response filters we resort to approximations: we specify exactly the frequency response of the filter we require and then try to find the finite impulse response filter which approximates the desired frequency behaviour as well as possible. Sometimes it is important to have exactly the desired frequency response at some frequencies, while we do not care much about some other frequencies. Then we fix the frequency response of the filter at the specific frequencies and try to find the values of the filter at the remaining frequencies in an approximate way. In filter design, the criterion of approximation we use is the Chebyshev norm instead of the least square error. This is because we are interested in the worst artifact we create at a single frequency, rather than in an overall good approximation which may have a very unevenly distributed error among the various frequencies.

Posted: 17/10/2013, 23:15