where $u_k$ is the vector that contains the elements of $u_k(n)$. Here "$\cdot$" indicates pointwise matrix multiplication and, throughout this chapter, pointwise matrix multiplication takes a lower precedence than conventional matrix multiplication. Combining all of the circular convolutions into one matrix equation, we have

$$ \begin{bmatrix} F_{2N} c_0 \\ F_{2N} c_1 \\ \vdots \\ F_{2N} c_{K-1} \end{bmatrix} = \begin{bmatrix} F_{2N} h \\ F_{2N} h \\ \vdots \\ F_{2N} h \end{bmatrix} \cdot \begin{bmatrix} F_{2N} u_0 \\ F_{2N} u_1 \\ \vdots \\ F_{2N} u_{K-1} \end{bmatrix}. \qquad (13) $$

Using equation (7), equation (13) can be written as

$$ H \tilde{c} = H \tilde{u} \cdot H \tilde{h}_r, \qquad (14) $$

where

$$ h_r = \begin{bmatrix} h \\ h \\ \vdots \\ h \end{bmatrix} \qquad (15) $$

and

$$ u = \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{K-1} \end{bmatrix}. \qquad (16) $$

Therefore, the vector of the circular convolutions is given by

$$ c = P H^{-1}\left( H \tilde{u} \cdot H \tilde{h}_r \right). \qquad (17) $$

According to the overlap-save method, only the second half of $c_k$ corresponds to the $k$th section of the linear convolution. Denote the $k$th section of the linear convolution by $y_k$ and the vector that contains the elements of $y(n)$ by $y$. Then $y_k$ can be written as

$$ y_k = \begin{bmatrix} 0_{N\times N} & I_{N\times N} \end{bmatrix} c_k, \qquad (18) $$

and $y$ as

$$ y = G c, \qquad (19) $$

where

$$ G = \begin{bmatrix} \begin{bmatrix} 0_{N\times N} & I_{N\times N} \end{bmatrix} & 0_{N\times 2N} & \cdots & 0_{N\times 2N} \\ 0_{N\times 2N} & \begin{bmatrix} 0_{N\times N} & I_{N\times N} \end{bmatrix} & \cdots & 0_{N\times 2N} \\ \vdots & \vdots & \ddots & \vdots \\ 0_{N\times 2N} & 0_{N\times 2N} & \cdots & \begin{bmatrix} 0_{N\times N} & I_{N\times N} \end{bmatrix} \end{bmatrix}. \qquad (20) $$

Finally, the linear convolution using the HOT is given by

$$ y = G P H^{-1}\left( H \tilde{u} \cdot H \tilde{h}_r \right). \qquad (21) $$

In summary, the convolution between the $(K+1)N$-point input $u(n)$ and the $N$-point impulse response $h(n)$ can be calculated efficiently using the HOT as follows:

1. Divide $u(n)$ into $K$ overlapping sections and combine them into one vector to form $u$.
2. Perform a $K$-band polyphase decomposition of $u$ to form $\tilde{u}$.
3. Take the HOT of $\tilde{u}$.
4. Post-append $h(n)$ with $N$ zeros and then stack the appended $h(n)$ $K$ times into one vector to form $h_r$.
5. Perform a $K$-band polyphase decomposition of $h_r$ to form $\tilde{h}_r$.
6. Take the HOT of $\tilde{h}_r$.
7.
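The procedure above parallels the classical overlap-save method, with the HOT playing the role of a bank of $2N$-point DFTs applied to polyphase components. As a point of reference, a minimal overlap-save linear convolution using ordinary NumPy FFTs (a stand-in for the HOT machinery of equations (13)–(21); the function name and test setup are illustrative only) can be sketched as:

```python
import numpy as np

def overlap_save(u, h, K):
    """Overlap-save linear convolution of a (K+1)*N-point input u with an
    N-point impulse response h, using 2N-point FFTs. Only the second half
    of each 2N-point circular convolution is kept, as in equation (18)."""
    N = len(h)
    assert len(u) == (K + 1) * N
    H = np.fft.fft(np.concatenate([h, np.zeros(N)]))  # h post-appended with N zeros
    y = np.empty(K * N)
    for k in range(K):
        uk = u[k * N:(k + 2) * N]             # k-th overlapping 2N-point section
        ck = np.fft.ifft(np.fft.fft(uk) * H)  # 2N-point circular convolution
        y[k * N:(k + 1) * N] = ck[N:].real    # keep the second half only
    return y

# The first N samples of u act as the "history" preceding the block.
N, K = 4, 3
rng = np.random.default_rng(0)
h = rng.standard_normal(N)
u = rng.standard_normal((K + 1) * N)
ref = np.convolve(u, h)[N:N + K * N]  # steady-state segment of the full convolution
print(np.allclose(overlap_save(u, h, K), ref))  # → True
```

The HOT-based algorithm performs exactly this sectioning and half-discarding, but replaces the per-section FFTs with the polyphase-plus-HOT factorization of equation (7).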
Point-wise multiply the vectors from steps 3 and 6.
8. Take the inverse HOT of the vector from step 7.
9. Perform a $K$-band polyphase decomposition of the result from step 8.
10. Multiply the result of step 9 by $G$.

4. Development of the HOT DFT block LMS algorithm

Recall that the block LMS algorithm requires two convolutions. The first is the convolution between the filter impulse response and the filter input, needed to calculate the output of the filter in each block. The second is the convolution between the filter input and the error, needed to estimate the gradient in the filter weight update equation. If the block length is much larger than the filter length, the fast HOT convolution developed in the previous section can be used to calculate the first convolution. The second convolution, however, is between two signals of the same length, and the fast HOT convolution cannot be used directly without modification.

Let $N$ be the filter length and $L = NK$ the block length, where $N$, $L$, and $K$ are all integers. Let

$$ \hat{w}(k) = \begin{bmatrix} w_0(k) \\ w_1(k) \\ \vdots \\ w_{N-2}(k) \\ w_{N-1}(k) \end{bmatrix} \qquad (22) $$

be the filter tap-weight vector in the $k$th block and

$$ \hat{u}(k) = \begin{bmatrix} u(kL-N) \\ \vdots \\ u(kL) \\ u(kL+1) \\ \vdots \\ u(kL+L-1) \end{bmatrix} \qquad (23) $$

be the vector of input samples needed in the $k$th block. To use the fast HOT convolution described in the previous section, $\hat{u}(k)$ is divided into $K$ overlapping sections. Such sections can be formed by multiplying $\hat{u}(k)$ by the following matrix:

$$ J = \begin{bmatrix} I_{N\times N} & 0 & \cdots & 0 & 0 \\ 0 & I_{N\times N} & \cdots & 0 & 0 \\ 0 & I_{N\times N} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & I_{N\times N} & 0 \\ 0 & 0 & \cdots & I_{N\times N} & 0 \\ 0 & 0 & \cdots & 0 & I_{N\times N} \end{bmatrix}. \qquad (24) $$

Define the extended tap-weight vector (post-appended with $N$ zeros)

$$ w(k) = \begin{bmatrix} \hat{w}(k) \\ 0 \\ \vdots \\ 0 \end{bmatrix}. $$
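The sectioning in equation (24) simply extracts $K$ overlapping $2N$-point windows from the $(L+N)$-point block vector, window $l$ covering samples $lN$ through $lN+2N-1$. A small sketch (the explicit matrix below is one realization of this structure, built in $2N$-blocks rather than the $N\times N$ blocks of (24); in practice the sections would be formed by slicing, and all names are illustrative):

```python
import numpy as np

def sectioning_matrix(N, K):
    """Build a (2KN x (L+N)) matrix whose product with the block input
    vector stacks the K overlapping 2N-point sections, L = K*N."""
    L = K * N
    J = np.zeros((2 * K * N, L + N))
    for l in range(K):
        # Section l selects samples lN .. lN+2N-1 of the block vector.
        J[2 * N * l:2 * N * (l + 1), l * N:l * N + 2 * N] = np.eye(2 * N)
    return J

N, K = 3, 4
u_hat = np.arange((K + 1) * N, dtype=float)   # stand-in for the block vector û(k)
stacked = sectioning_matrix(N, K) @ u_hat
sections = np.array([u_hat[l * N:l * N + 2 * N] for l in range(K)])
print(np.allclose(stacked.reshape(K, 2 * N), sections))  # → True
```

Adjacent sections share $N$ samples, which is exactly the history needed by the overlap-save step.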
(25)

According to the fast HOT convolution (see equation (21)), the output of the adaptive filter in the $k$th block,

$$ y(k) = \begin{bmatrix} y(kL) \\ y(kL+1) \\ \vdots \\ y(kL+L-2) \\ y(kL+L-1) \end{bmatrix}, \qquad (26) $$

is given by

$$ y(k) = G P H^{-1}\left( H P w_r(k) \cdot H P J \hat{u}(k) \right). \qquad (27) $$

The desired signal vector and the filter error in the $k$th block are given by

$$ \hat{d}(k) = \begin{bmatrix} d(kL) \\ d(kL+1) \\ \vdots \\ d(kL+L-2) \\ d(kL+L-1) \end{bmatrix} \qquad (28) $$

and

$$ \hat{e}(k) = \begin{bmatrix} e(kL) \\ e(kL+1) \\ \vdots \\ e(kL+L-2) \\ e(kL+L-1) \end{bmatrix}, \qquad (29) $$

respectively, where

$$ e(n) = d(n) - y(n). \qquad (30) $$

The filter update equation is given by

$$ \hat{w}(k+1) = \hat{w}(k) + \frac{\mu}{L} \sum_{i=0}^{L-1} \begin{bmatrix} u(kL+i) \\ u(kL+i-1) \\ \vdots \\ u(kL+i-N+2) \\ u(kL+i-N+1) \end{bmatrix} e(kL+i). \qquad (31) $$

The sum in equation (31) could be calculated efficiently using $(L+N)$-point DFTs of the error vector $e(n)$ and input vector $u(n)$. However, the $(L+N)$-point DFT of $u(n)$ is not available; only the $2N$-point DFTs of the $K$ sections of $\hat{u}(k)$ are available. Therefore, the sum in equation (31) should be divided into $K$ sections as follows:

$$ \sum_{i=0}^{L-1} \begin{bmatrix} u(kL+i) \\ u(kL+i-1) \\ \vdots \\ u(kL+i-N+1) \end{bmatrix} e(kL+i) = \sum_{l=0}^{K-1} \sum_{j=0}^{N-1} \begin{bmatrix} u(kL+lN+j) \\ u(kL+lN+j-1) \\ \vdots \\ u(kL+lN+j-N+1) \end{bmatrix} e(kL+lN+j). \qquad (32) $$

For each $l$, the sum over $j$ can be calculated as follows. First, form the vectors

$$ u_l(k) = \begin{bmatrix} u(kL+lN-N) \\ \vdots \\ u(kL+lN+N-2) \\ u(kL+lN+N-1) \end{bmatrix}, \qquad (33) $$

$$ e_l(k) = \begin{bmatrix} 0_{N\times 1} \\ e(kL+lN) \\ \vdots \\ e(kL+lN+N-2) \\ e(kL+lN+N-1) \end{bmatrix}. $$
(34)

Then the sum over $j$ is just the first $N$ elements of the circular convolution of $e_l(k)$ with a circularly shifted $u_l(k)$, and it can be computed using the DFT as shown below:

$$ \sum_{j=0}^{N-1} u_l(k)\, e(kL+lN+j) = U_N F_{2N}^{-1}\left( u^*_{lF}(k) \cdot e_{lF}(k) \right), \qquad (35) $$

where

$$ U_N = \begin{bmatrix} I_{N\times N} & 0_{N\times N} \\ 0_{N\times N} & 0_{N\times N} \end{bmatrix}, \qquad (36) $$

$$ u_{lF}(k) = F_{2N}\, u_l(k), \qquad (37) $$

and

$$ e_{lF}(k) = F_{2N}\, e_l(k). \qquad (38) $$

Therefore, the filter update equation for the HOT DFT block LMS algorithm can be written as

$$ w(k+1) = w(k) + \frac{\mu}{L} \sum_{l=0}^{K-1} U_N F^{-1}\left( u^*_{lF}(k) \cdot e_{lF}(k) \right). \qquad (39) $$

Next, we express the sum in equation (39) in terms of the HOT. Form the vectors

$$ u(k) = \begin{bmatrix} u_0(k) \\ u_1(k) \\ \vdots \\ u_{K-1}(k) \end{bmatrix}, \qquad (40) $$

$$ e(k) = \begin{bmatrix} e_0(k) \\ e_1(k) \\ \vdots \\ e_{K-1}(k) \end{bmatrix}. \qquad (41) $$

Then, using equation (7), the filter update equation can be written as

$$ w(k+1) = w(k) + \frac{\mu}{L}\, S P H^{-1}\left( H^* \tilde{u}(k) \cdot H \tilde{e}(k) \right), \qquad (42) $$

where the matrix $S$ is given by

$$ S = \begin{bmatrix} 1_{K\times 1} & 0_{K\times 1} & \cdots & 0_{K\times 1} \\ 0_{K\times 1} & 1_{K\times 1} & \cdots & 0_{K\times 1} \\ \vdots & \vdots & \ddots & \vdots \\ 0_{K\times 1} & 0_{K\times 1} & \cdots & 1_{K\times 1} \\ & 0_{N\times KN} & \end{bmatrix}. \qquad (43) $$

Figure 2 shows the flow block diagram of the HOT DFT block LMS adaptive filter.

5. Computational cost of the HOT DFT block LMS algorithm

Before analyzing the convergence of the new adaptive filter, we examine its computational cost. To calculate the output of the $k$th block, $2K+1$ $2N$-point DFTs are needed. Therefore, $(2K+1)\,2N\log_2 2N + 2NK$ multiplications are needed to calculate the output. To calculate the gradient estimate in the filter update equation, $2K$ $2N$-point DFTs are required. Therefore, $6KN\log_2 2N + 2NK$ multiplications are needed. The total multiplication count of the new algorithm is then $(4K+1)\,2N\log_2 2N + 4NK$. The multiplication count for the DFT block LMS algorithm is $10KN\log_2 2NK + 4NK$. Therefore, as $K$ gets larger the HOT DFT block LMS algorithm becomes more efficient than the DFT block LMS algorithm.
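Equations (33)–(39) say that each gradient section is the first $N$ elements of a circular cross-correlation between the zero-prepended error section and the input section, computable with $2N$-point transforms. A sketch using NumPy FFTs as a stand-in for the HOT (all names illustrative), checked against the direct sum on the left-hand side of (35):

```python
import numpy as np

def gradient_section(u_l, e_tail, N):
    """Sketch of equation (35): the l-th gradient section, i.e. the sum over
    j of the delayed-input vector times e(kL+lN+j), computed as the first N
    elements of IFFT( conj(FFT(u_l)) * FFT(e_l) ). u_l is the 2N-point input
    section; e_tail holds the N error samples, prepended with N zeros as in
    equation (34)."""
    e_l = np.concatenate([np.zeros(N), e_tail])
    g = np.fft.ifft(np.conj(np.fft.fft(u_l)) * np.fft.fft(e_l))
    return g[:N].real  # U_N keeps the first N elements

N = 4
rng = np.random.default_rng(1)
u_l = rng.standard_normal(2 * N)   # u(kL+lN-N) .. u(kL+lN+N-1)
e_tail = rng.standard_normal(N)    # e(kL+lN) .. e(kL+lN+N-1)

# Direct evaluation of the sum: element m is sum_j u(kL+lN+j-m) e(kL+lN+j),
# with u(kL+lN+j-m) = u_l[N+j-m] in section coordinates.
direct = np.array([sum(u_l[N + j - m] * e_tail[j] for j in range(N))
                   for m in range(N)])
print(np.allclose(gradient_section(u_l, e_tail, N), direct))  # → True
```

The $N$ prepended zeros in $e_l(k)$ are what prevent circular wrap-around from corrupting the first $N$ correlation lags.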
For example, for $N = 100$ and $K = 10$ the HOT DFT LMS algorithm is about 30% more efficient, and for $N = 50$ and $K = 20$ it is about 40% more efficient.

Fig. 2. The flow block diagram of the HOT DFT block LMS adaptive filter.

The ratio between the number of multiplications required for the HOT DFT block LMS algorithm and the number of multiplications required for the DFT block LMS algorithm is plotted in Figure 3 for different filter lengths. The HOT DFT block LMS filter is always more efficient than the DFT block LMS filter, and the efficiency increases as the block length increases.

Fig. 3. Ratio between the number of multiplications required for the HOT DFT and the DFT block LMS algorithms (ratio versus block size, for filter lengths N = 5, 10, 15, and 50).

6. Convergence analysis of the HOT DFT LMS algorithm

Now the convergence of the new algorithm is analyzed. The analysis is performed in the DFT domain. The adaptive filter update equation in the DFT domain is given by

$$ w_F(k+1) = w_F(k) + \frac{\mu}{L} \sum_{l=0}^{K-1} F U_N F^{-1}\left( u^*_{lF}(k) \cdot e_{lF}(k) \right). \qquad (44) $$

Let the desired signal be generated using the linear regression model

$$ d(n) = w^o(n) * u(n) + e^o(n), \qquad (45) $$

where $w^o(n)$ is the impulse response of the Wiener optimal filter and $e^o(n)$ is the irreducible estimation error, which is white noise and statistically independent of the adaptive filter input.
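Taking the multiplication counts quoted above at face value, the efficiency ratio for a given $(N, K)$ can be computed directly; the sketch below tabulates the trend of Figure 3 for one filter length (the formulas are reproduced exactly as quoted, so individual values should be checked against the original counts):

```python
import math

# Multiplication counts as quoted in the text (taken at face value):
#   HOT DFT block LMS: (4K + 1) * 2N * log2(2N) + 4NK
#   DFT block LMS:     10KN * log2(2NK) + 4NK
def hot_dft_mults(N, K):
    return (4 * K + 1) * 2 * N * math.log2(2 * N) + 4 * N * K

def dft_block_mults(N, K):
    return 10 * K * N * math.log2(2 * N * K) + 4 * N * K

# For fixed filter length N, the ratio falls as the block length L = KN
# grows, matching the trend plotted in Figure 3.
N = 15
ratios = [hot_dft_mults(N, K) / dft_block_mults(N, K) for K in (2, 5, 10, 20)]
print([round(r, 2) for r in ratios])  # → [0.76, 0.59, 0.51, 0.45]
```

Every ratio is below one, consistent with the claim that the HOT DFT variant is always cheaper, and the ratio decreases monotonically with $K$.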
In the $k$th block, the $l$th section of the desired signal in the DFT domain is given by

$$ \hat{d}_l(k) = \begin{bmatrix} 0_{N\times N} & I_{N\times N} \end{bmatrix} F^{-1}\left( w^o_F(k) \cdot u_{lF}(k) \right) + \hat{e}^o_l(k). \qquad (46) $$

Therefore, the $l$th section of the error is given by

$$ e_l(k) = L_N F^{-1}\left( \epsilon_F(k) \cdot u_{lF}(k) \right) + e^o_l(k), \qquad (47) $$

where

$$ L_N = \begin{bmatrix} 0_{N\times N} & 0_{N\times N} \\ 0_{N\times N} & I_{N\times N} \end{bmatrix} \qquad (48) $$

and $\epsilon_F(k) = w^o_F - w_F(k)$ is the weight-error vector in the DFT domain. Using equation (44), $\epsilon_F(k)$ is updated according to

$$ \epsilon_F(k+1) = \epsilon_F(k) - \frac{\mu}{L} \sum_{l=0}^{K-1} U_{N,F}\left( u^*_{lF}(k) \cdot e_{lF}(k) \right), \qquad (49) $$

where

$$ U_{N,F} = F \begin{bmatrix} I_{N\times N} & 0_{N\times N} \\ 0_{N\times N} & 0_{N\times N} \end{bmatrix} F^{-1}. \qquad (50) $$

Taking the DFT of equation (47), we have

$$ e_{lF}(k) = L_{N,F}\left( \epsilon_F(k) \cdot u_{lF}(k) \right) + e^o_{lF}(k), \qquad (51) $$

where

$$ L_{N,F} = F \begin{bmatrix} 0_{N\times N} & 0_{N\times N} \\ 0_{N\times N} & I_{N\times N} \end{bmatrix} F^{-1}. \qquad (52) $$

Using equation (51), and writing $U_{lF}(k) = \mathrm{Diag}\left[u_{lF}(k)\right]$, we can write

$$ u^*_{lF}(k) \cdot e_{lF}(k) = U^*_{lF}(k)\left[ L_{N,F}\, U_{lF}(k)\, \epsilon_F(k) + e^o_{lF}(k) \right]. \qquad (53) $$

Using

$$ U^*_{lF}(k)\, L_{N,F}\, U_{lF}(k) = \left( u^*_{lF}(k)\, u^T_{lF}(k) \right) \cdot L_{N,F}, \qquad (54) $$

equation (53) can be simplified to

$$ u^*_{lF}(k) \cdot e_{lF}(k) = \left[ \left( u^*_{lF}(k)\, u^T_{lF}(k) \right) \cdot L_{N,F} \right] \epsilon_F(k) + u^*_{lF}(k) \cdot e^o_{lF}(k). \qquad (55) $$

Substituting equation (55) into equation (49), we have

$$ \epsilon_F(k+1) = \left[ I - \frac{\mu}{L}\, U_{N,F} \sum_{l=0}^{K-1} \left( u^*_{lF}(k)\, u^T_{lF}(k) \right) \cdot L_{N,F} \right] \epsilon_F(k) - \frac{\mu}{L}\, U_{N,F} \sum_{l=0}^{K-1} u^*_{lF}(k) \cdot e^o_{lF}(k). \qquad (56) $$

Taking the expectation of the above equation yields

$$ E\,\epsilon_F(k+1) = \left[ I - \frac{\mu}{N}\, U_{N,F}\left( R_{u,F} \cdot L_{N,F} \right) \right] E\,\epsilon_F(k), \qquad (57) $$

where $R_{u,F} = F^H R_u F$ and $R_u$ is the $2N \times 2N$ autocorrelation matrix of $u(n)$. Equation (57) is similar to the corresponding result for the DFT block LMS algorithm (Farhang-Boroujeny & Chan, 2000). Therefore, the convergence characteristics of the HOT DFT block LMS algorithm are similar to those of the DFT block LMS algorithm. The convergence speed of the HOT DFT LMS algorithm can be increased if the convergence modes are normalized using the estimated power of the tap-input vector in the DFT domain.
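One standard way to carry out such a normalization is to keep, per $2N$-point DFT bin, a running estimate of the input power and divide the corresponding gradient bin by it. A sketch of this per-bin power recursion (names and initialization are illustrative; the recursion mirrors the form $\Lambda_l(k+1) = \frac{k-1}{k}\Lambda_l(k) + \frac{1}{kL}|u_{lF}(k)|^2$):

```python
import numpy as np

def update_power(lam, u_F, k, L):
    """Running per-bin power estimate: at block k, blend the old estimate
    with the instantaneous bin powers |u_F|^2 / L."""
    return (k - 1) / k * lam + np.abs(u_F) ** 2 / (k * L)

N, L = 4, 12
rng = np.random.default_rng(2)
lam = np.ones(2 * N)                 # initial per-bin power estimate
for k in range(1, 500):
    u_F = np.fft.fft(rng.standard_normal(2 * N))
    lam = update_power(lam, u_F, k, L)

# A normalized gradient section would then be formed as
#   np.fft.ifft((np.conj(u_F) * e_F) / lam)[:N].real
print(lam.shape, bool(np.all(lam > 0)))  # → (8,) True
```

Because the factor $(k-1)/k$ vanishes at $k=1$, the initialization is forgotten after the first block; the estimate then converges to the average bin power.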
The complete HOT DFT block LMS weight update equation is given by

$$ w(k+1) = w(k) + \frac{\mu}{L} \sum_{l=0}^{K-1} U_N F^{-1} \Lambda_l^{-1}(k)\left( u^*_{lF}(k) \cdot e_{lF}(k) \right) \qquad (58) $$

and

$$ \Lambda_l(k+1) = \frac{k-1}{k}\,\Lambda_l(k) + \frac{1}{kL}\,\mathrm{Diag}\left[ u^*_{lF}(k) \cdot u_{lF}(k) \right]. \qquad (59) $$

7. Misadjustment of the HOT DFT LMS algorithm

In this section, the misadjustment of the HOT DFT block LMS algorithm is derived. The mean square error of the conventional LMS algorithm is given by

$$ J(n) = E\,|e(n)|^2. \qquad (60) $$

For the block LMS algorithm, the mean square error is given by

$$ J(k) = \frac{1}{L}\, E \sum_{i=0}^{L-1} |e(kL+i)|^2, \qquad (61) $$

which is also equivalent to

$$ J(k) = \frac{1}{2NL}\, E \sum_{l=0}^{K-1} e^H_{lF}(k)\, e_{lF}(k). \qquad (62) $$

Using equation (51), the mean square error of the HOT DFT block LMS algorithm is given by

$$ J(k) = J^o + \frac{1}{2NL}\, E \sum_{l=0}^{K-1} \left\| L_{N,F}\, U_{lF}(k)\, \epsilon_F(k) \right\|^2, \qquad (63) $$

where $J^o$ is the mean square of $e^o(n)$. Assuming that $\epsilon_F(k)$ and $\mathrm{Diag}\left[u_{lF}(k)\right]$ are independent, the excess mean square error is given by

$$ J_{ex}(k) = \frac{1}{2NL} \sum_{l=0}^{K-1} E\left[ \epsilon^H_F(k)\, E\!\left( U^H_{lF}(k)\, L_{N,F}\, U_{lF}(k) \right) \epsilon_F(k) \right]. \qquad (64) $$

Using equation (54), the excess mean square error can be written as

$$ J_{ex} = \frac{K}{2NL}\, E\left[ \epsilon^H_F(k)\left( R_{u,F} \cdot L_{N,F} \right) \epsilon_F(k) \right], \qquad (65) $$

or equivalently

$$ J_{ex} = \frac{K}{2NL}\,\mathrm{tr}\left[ \left( R_{u,F} \cdot L_{N,F} \right) E\left( \epsilon_F(k)\, \epsilon^H_F(k) \right) \right]. \qquad (66) $$

8. Simulation of the HOT DFT block LMS algorithm

The learning curves of the HOT DFT block LMS algorithm were simulated. The desired input was generated using the linear model $d(n) = w^o(n) * u(n) + e^o(n)$, where $e^o(n)$ is white Gaussian measurement noise with variance $10^{-8}$. The input was a first-order Markov signal with autocorrelation function $r(k) = \rho^{|k|}$. The filter was lowpass with a cutoff frequency of $\pi/2$ rad. Figure 4 compares the learning curves of the HOT DFT block LMS filter with those of the LMS and DFT block LMS filters for $N = 4$, $K = 3$, and $\rho = 0.9$. Figure 5 shows similar curves for $N = 50$, $K = 10$, and $\rho = 0.9$.
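The flavor of these experiments can be reproduced in outline with a plain time-domain block LMS, equation (31), standing in for the transform-domain variants (which follow the same mean weight trajectory). All parameter values below are illustrative, not those of the chapter's figures:

```python
import numpy as np

# Minimal system identification in the spirit of Section 8: a first-order
# Markov input (autocorrelation rho^|k|) drives an "unknown" FIR system,
# and the block LMS update of equation (31) identifies it.
rng = np.random.default_rng(3)
N, K, rho, mu = 4, 3, 0.9, 0.1
L = N * K
w_o = np.array([0.4, 0.3, 0.2, 0.1])      # stand-in unknown system
w = np.zeros(N)

n_samples = 40000
u = np.empty(n_samples)
u[0] = rng.standard_normal()
for n in range(1, n_samples):             # AR(1): r(k) = rho^|k|, unit variance
    u[n] = rho * u[n - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
d = np.convolve(u, w_o)[:n_samples] + 1e-4 * rng.standard_normal(n_samples)

for k in range(1, n_samples // L):        # block LMS, one update per block
    grad = np.zeros(N)
    for i in range(L):
        n = k * L + i
        u_vec = u[n - N + 1:n + 1][::-1]  # u(n), u(n-1), ..., u(n-N+1)
        grad += u_vec * (d[n] - w @ u_vec)
    w = w + mu / L * grad

print(bool(np.max(np.abs(w - w_o)) < 0.05))  # → True
```

With $\rho = 0.9$ the input correlation matrix has a large eigenvalue spread, so the slowest mode dominates the convergence time, just as the learning curves in Figures 4–6 illustrate.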
Both figures show that the HOT DFT block LMS algorithm converges at the same rate as the DFT block LMS algorithm while being computationally more efficient. Figure 6 shows similar curves for $N = 50$, $K = 10$, and $\rho = 0.8$. As the correlation coefficient decreases, the algorithms converge faster, and the HOT DFT block LMS algorithm again converges at the same rate as the DFT block LMS algorithm.

[...]
The frequency response of the coloring filter is shown in Figure 7. The learning curves are shown in Figure 8. The simulations are again consistent with the theoretical predictions presented in this chapter.

Fig. 6. Learning curves of the LMS, DFT block, and HOT-DFT block LMS algorithms (mean square error in dB versus number of iterations).