W. Kenneth Jenkins et al., "Transform Domain Adaptive Filtering," 2000 CRC Press LLC. <http://www.engnetbase.com>.

Transform Domain Adaptive Filtering

W. Kenneth Jenkins, University of Illinois, Urbana-Champaign
Daniel F. Marshall, MIT Lincoln Laboratory

22.1 LMS Adaptive Filter Theory
22.2 Orthogonalization and Power Normalization
22.3 Convergence of the Transform Domain Adaptive Filter
22.4 Discussion and Examples
22.5 Quasi-Newton Adaptive Algorithms: A Fast Quasi-Newton Algorithm • Examples
22.6 The 2-D Transform Domain Adaptive Filter
22.7 Block-Based Adaptive Filters: Comparison of the Constrained and Unconstrained Frequency Domain Block-LMS Adaptive Algorithms • Examples and Discussion
References

One of the earliest works on transform domain adaptive filtering was published in 1978 by Dentino et al. [4], in which the concept of adaptive filtering in the frequency domain was proposed. Many publications have since appeared that further develop the theory and expand the current understanding of performance characteristics for this class of adaptive filters. In addition to the discrete Fourier transform (DFT), other orthogonal transforms such as the discrete cosine transform (DCT) and the Walsh-Hadamard transform (WHT) can also be used effectively as a means to improve the LMS algorithm without adding too much computational complexity. For this reason, the general term transform domain adaptive filtering is used in the following discussion to mean that the input signal is preprocessed by decomposing the input vector into orthogonal components, which are in turn used as inputs to a parallel bank of simpler adaptive subfilters. With an orthogonal transformation, the adaptation takes place in the transform domain, as it is possible to show that the adjustable parameters are indeed related to an equivalent set of time domain filter coefficients by means of the same transformation that is used for the real time processing [5, 14, 17].

A direct form FIR digital filter structure is shown in Fig. 22.1. The direct form requires N − 1 delays, N multiplications, and N − 1 additions for each output sample that is produced. The amount of hardware (as well as power) required to implement the direct form structure depends on the degree of hardware multiplexing that can be utilized within the speed demands of the application. A fully parallel implementation consisting of N delay registers, N multipliers, and a tree of two-input adders would be needed for very high-frequency applications. At the opposite end of the performance spectrum, a sequential implementation consisting of a length N delay line and a single time-multiplexed multiplier and accumulation adder would provide the cheapest (and slowest) implementation. This latter structure would be characteristic of a filter that is implemented in software on one of the many commercially available DSP chips.

FIGURE 22.1: The direct form adaptive filter structure.

Regardless of the hardware complexity that results from a particular implementation, the computational complexity of the filter is determined by the requirements of the algorithm and, as such, remains invariant with respect to different hardware structures. In particular, the computational complexity of the direct form FIR filter is O[N], since N multiplications and (N − 1) additions must be performed at each iteration. When designing an adaptive filter, it seems reasonable to seek an adaptive algorithm whose order of complexity is no greater than the order of complexity of the basic filter structure itself. This goal is achieved by the LMS algorithm, which is the major contributing factor to the enormous success of that algorithm.
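To make the O[N] per-sample cost concrete, the following minimal sketch (an illustration added here, not code from the chapter; the test data and filter length are arbitrary) computes each output of a length-N direct form FIR filter with N multiplications and N − 1 additions:

```python
import numpy as np

def direct_form_fir_output(w, x_n):
    """One output sample y(n) = w^T x(n) of a length-N direct form FIR filter.

    w   : tap weight vector [w_0, w_1, ..., w_{N-1}]
    x_n : input vector [x(n), x(n-1), ..., x(n-N+1)]
    Cost per output sample: N multiplications and N-1 additions, i.e., O[N].
    """
    return np.dot(w, x_n)

# Sliding-window usage over an arbitrary test signal.
N = 8
rng = np.random.default_rng(0)
w = rng.standard_normal(N)          # fixed (non-adaptive) taps for illustration
x = rng.standard_normal(200)
y = np.array([direct_form_fir_output(w, x[n - N + 1:n + 1][::-1])
              for n in range(N - 1, len(x))])
```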
Extending this principle to 2-D adaptive filters implies that desirable 2-D adaptive algorithms have an order of complexity of O[N^2], since a 2-D FIR direct form filter has O[N^2] complexity inherent in its basic structure [11, 21].

The transform domain adaptive filter is a generalization of the LMS FIR structure, in which a linear transformation is performed on the input signal and each transformed "channel" is power normalized to improve the convergence rate of the adaptation process. The linear transform is characterized throughout the following discussions as a sliding window operator that consists of a transformation matrix multiplying an input vector [14]. At each iteration n the input vector includes one new input sample x(n) and N − 1 past input samples x(n − k), k = 1, ..., N − 1. As the window slides forward sample by sample, filtered outputs are produced continuously at each value of the index n. Since the input transformation is represented by a matrix-vector product, it might appear that the computational complexity of the transform domain filter is at least O[N^2]. However, many transformations can be implemented with fast algorithms whose complexities are less than O[N^2]. For example, the discrete Fourier transform can be implemented by the FFT algorithm, resulting in a complexity of O[N log_2 N] per iteration. Some transformations can be implemented recursively in a bank of parallel filters, resulting in a net complexity of O[N] per iteration. The main point to be made here is that the complexity of the transform domain filter typically falls between O[N] and O[N^2], with the actual complexity depending on the specific algorithm that is used to compute the sliding window transform operator [17].

22.1 LMS Adaptive Filter Theory

The LMS algorithm is derived as an approximation to the steepest descent optimization strategy. The fact that the field of adaptive signal processing is based on an elementary principle from optimization theory suggests that more advanced adaptive algorithms can be developed by incorporating other results from the field of optimization [22]. This point of view recurs throughout this discussion, as concepts are borrowed from the field of optimization and modified for adaptive filtering as needed. In particular, one of the borrowed ideas that appears later is the quasi-Newton optimization strategy. It will be shown that transform domain adaptive filtering algorithms are closely related to quasi-Newton algorithms, but have computational complexity that is closer to the simple requirements of the LMS algorithm.

For a length N FIR filter with the input expressed as a column vector x(n) = [x(n), x(n − 1), ..., x(n − N + 1)]^T, the filter output y(n) is easily expressed as

    y(n) = w^T(n) x(n) ,    (22.1)

where w(n) = [w_0(n), w_1(n), ..., w_{N−1}(n)]^T is the time varying vector of filter coefficients (tap weights), and the superscript "T" denotes vector transpose. The output error is formed as the difference between the filter output and a training signal d(n), i.e., e(n) = d(n) − y(n). Strategies for obtaining an appropriate d(n) vary from one application to another. In many cases the availability of a suitable training signal determines whether an adaptive filtering solution will be successful in a particular application. The ideal cost function is defined by the mean squared error (MSE) criterion, E[|e(n)|^2].
The LMS algorithm is derived by approximating the ideal cost function by the instantaneous squared error, resulting in J_LMS(n) = |e(n)|^2. While this appears to be a rather crude approximation at first glance, it results in an unbiased estimate of the gradient. In many applications the LMS algorithm is quite robust and is able to converge rapidly to a small neighborhood of the optimum Wiener solution. The steepest descent optimization strategy is given by

    w(n + 1) = w(n) − µ ∇E[|e(n)|^2] ,    (22.2)

where ∇E[|e(n)|^2] is the gradient of the cost function with respect to the coefficient vector w(n). When the gradient is formed using the LMS cost function J_LMS(n) = |e(n)|^2, the conventional LMS algorithm results:

    w(n + 1) = w(n) + µ e(n) x(n) ,
    e(n) = d(n) − y(n) ,    (22.3)
    y(n) = x^T(n) w(n) .

(Note: Many sources include a "2" before the µ factor in Eq. (22.3) because this factor arises during the derivation of (22.3) from (22.2). In this discussion we assume this factor is absorbed into µ, so it will not appear explicitly.)

Since the LMS algorithm is treated in considerable detail in other sections of this book, we will not present any further derivation or analysis of it here. However, the following observations will be useful when other algorithms are compared to the LMS as a baseline design [2, 3, 6, 8].

1. Assume that all of the signals and filter variables are real-valued. The filter itself requires N multiplications and N − 1 additions to produce y(n) at each value of n. The coefficient update algorithm requires 2N multiplications and N additions, resulting in a total computational burden of 3N multiplications and 2N − 1 additions per iteration. Since N is generally much larger than the factor of three, the order of complexity of the LMS algorithm is O[N].

2. The cost function given for the LMS algorithm is a simplified form of the one used for the RLS algorithm. This implies that the LMS algorithm is a simplified version of the RLS algorithm, where averages are replaced by single instantaneous terms.

3. The (power normalized) LMS algorithm is also a simplified form of the transform domain adaptive filter, obtained by setting the transform matrix equal to the identity matrix.

4. The LMS algorithm is also a simplified form of the Gauss-Newton optimization strategy, which introduces second order statistics (the input autocorrelation function) to accelerate the rate of convergence. In order to obtain the LMS algorithm from the Gauss-Newton algorithm, two approximations must be made: (i) the gradient must be approximated by the instantaneous error squared, and (ii) the inverse of the input autocorrelation matrix must be crudely approximated by the identity matrix.

These observations suggest that many of the seemingly distinct adaptive filtering algorithms that appear scattered about in the literature are indeed closely related, and can be considered to be members of a family whose hereditary characteristics have their origins in Gauss-Newton optimization theory [15, 16]. The different members of this family inherit their individual characteristics from approximations that are made on the pure Gauss-Newton algorithm at various stages of their derivations. However, after the individual derivations are complete and each algorithm is packaged in its own algorithmic form, the algorithms look considerably different from one another. Unless a conscious effort is made to reveal their commonality, the fact that they have evolved from common roots may be entirely obscured.
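For concreteness, a minimal sketch of the real-valued LMS recursion of Eq. (22.3) is given below. It is illustrative only and assumes a simple system identification setup in which the training signal d(n) is produced by an unknown FIR system plus measurement noise; these particulars are assumptions made here, not details from the chapter.

```python
import numpy as np

def lms(x, d, N, mu):
    """Real-valued LMS: w(n+1) = w(n) + mu * e(n) * x(n), per Eq. (22.3)."""
    w = np.zeros(N)
    e_hist = np.zeros(len(x))
    x_n = np.zeros(N)                            # [x(n), x(n-1), ..., x(n-N+1)]
    for n in range(len(x)):
        x_n = np.concatenate(([x[n]], x_n[:-1]))  # slide the input window
        y_n = w @ x_n                             # filter output, Eq. (22.1)
        e_n = d[n] - y_n                          # output error
        w = w + mu * e_n * x_n                    # coefficient update, Eq. (22.3)
        e_hist[n] = e_n
    return w, e_hist

# Assumed system identification example: identify an unknown 8-tap FIR system.
rng = np.random.default_rng(1)
N = 8
w_true = rng.standard_normal(N)
x = rng.standard_normal(5000)                     # white training input
d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, e = lms(x, d, N, mu=0.02)
```

Replacing the white training input with a strongly colored one slows the convergence markedly, which is the behavior the eigenvalue-spread discussion below quantifies.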
The convergence behavior of the LMS algorithm, as applied to a direct form FIR filter structure, is controlled by the autocorrelation matrix R_x of the input process, where

    R_x ≡ E[x*(n) x^T(n)] .    (22.4)

(The * in Eq. (22.4) denotes complex conjugation to account for the general case of complex input signals, although throughout most of the following discussions it will be assumed that x(n) and d(n) are both real-valued signals.) The autocorrelation matrix R_x is usually positive definite, which is one of the conditions necessary to guarantee convergence to the Wiener solution. Another necessary condition for convergence is 0 < µ < 1/λ_max, where λ_max is the largest eigenvalue of R_x. It is also well established that the convergence of this algorithm is directly related to the eigenvalue spread of R_x. The eigenvalue spread is measured by the condition number of R_x, defined as κ = λ_max/λ_min, where λ_min is the minimum eigenvalue of R_x. Ideal conditioning occurs when κ = 1 (white noise); as this ratio increases, slower convergence results. The eigenvalue spread (condition number) depends on the spectral distribution of the input signal and can be shown to be related to the maximum and minimum values of the input power spectrum (22.4). From this line of reasoning it becomes clear that white noise is the ideal input signal for rapidly training an LMS adaptive filter. The adaptive process becomes slower and requires more computation for input signals that are more severely colored [6].

Convergence properties are reflected in the geometry of the MSE surface, which is simply the mean squared output error E[|e(n)|^2] expressed as a function of the N adaptive filter coefficients in (N + 1)-space. An expression for the error surface of the direct form filter is

    J(z) ≡ E[|e(n)|^2] = J_min + z*^T R_x z ,    (22.5)

with R_x defined in (22.4) and z ≡ w − w_opt, where w_opt is the vector of optimum filter coefficients in the sense of minimizing the mean squared error (w_opt is known as the Wiener solution). An example of an error surface for a simple two-tap filter is shown in Fig. 22.2. In this example x(n) was specified to be a colored noise input signal with an autocorrelation matrix

    R_x = [ 1.0  0.9
            0.9  1.0 ] .

Figure 22.2 shows three equal-error contours on the three dimensional surface. The term z*^T R_x z in Eq. (22.5) is a quadratic form that describes the bowl shape of the FIR error surface. When R_x is positive definite, the equal-error contours of the surface are hyperellipses (N dimensional ellipses) centered at the origin of the coefficient parameter space. Furthermore, the principal axes of these hyperellipses are the eigenvectors of R_x, and their lengths are proportional to the eigenvalues of R_x. Since the convergence rate of the LMS algorithm is inversely related to the ratio of the maximum to the minimum eigenvalues of R_x, large eccentricity of the equal-error contours implies slow convergence of the adaptive system. In the case of an ideal white noise input, R_x has a single eigenvalue of multiplicity N, so that the equal-error contours are hyperspheres [8].

FIGURE 22.2: Example of an error surface for a simple two-tap filter.
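As a quick numerical illustration (a sketch added here, not part of the chapter), the eigenvalue spread of the two-tap autocorrelation matrix quoted above for Fig. 22.2 can be computed directly:

```python
import numpy as np

# Autocorrelation matrix of the colored input used for the Fig. 22.2 example.
R_x = np.array([[1.0, 0.9],
                [0.9, 1.0]])

eigvals, eigvecs = np.linalg.eigh(R_x)     # R_x is symmetric positive definite
lam_min, lam_max = eigvals[0], eigvals[-1]
kappa = lam_max / lam_min                  # condition number (eigenvalue spread)

print(eigvals)        # [0.1 1.9]
print(kappa)          # 19.0 -> eccentric equal-error contours, slow LMS training
print(1.0 / lam_max)  # upper bound on the step size mu for convergence
```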
22.2 Orthogonalization and Power Normalization

The transform domain adaptive filter (TDAF) structure is shown in Fig. 22.3.

FIGURE 22.3: The transform domain adaptive filter structure.

The input x(n) and desired signal d(n) are assumed to be zero mean and jointly stationary. The input to the filter is a vector of N current and past input samples, defined in the previous section and denoted as x(n). This vector is processed by a unitary transform, such as the DFT. Once the filter order N is fixed, the transform is simply an N × N matrix T, which is in general complex, with orthonormal rows. The transformed outputs form a vector v(n) which is given by

    v(n) = [v_0(n), v_1(n), ..., v_{N−1}(n)]^T = T x(n) .    (22.6)

With an adaptive tap vector defined as

    W(n) = [W_0(n), W_1(n), ..., W_{N−1}(n)]^T ,    (22.7)

the filter output is given by

    y(n) = W^T(n) v(n) = W^T(n) T x(n) .    (22.8)

The instantaneous output error

    e(n) = d(n) − y(n)    (22.9)

is then formed and used to update the adaptive filter taps using a modified form of the LMS algorithm:

    W(n + 1) = W(n) + µ e(n) Λ^{−2} v*(n) ,
    Λ^2 ≡ diag[σ_0^2, σ_1^2, ..., σ_{N−1}^2] ,    (22.10)

where σ_i^2 = E[|v_i(n)|^2]. As before, the superscript asterisk in (22.10) indicates complex conjugation to account for the most general case in which the transform is complex. Also, the use of the upper case coefficient vector in Eq. (22.10) denotes that W(n) is a transform domain variable.

The power estimates σ_i^2 can be developed on-line by computing an exponentially weighted average of past samples according to

    σ_i^2(n) = α σ_i^2(n − 1) + |v_i(n)|^2 ,  0 < α < 1 .    (22.11)

If σ_i^2 becomes too small due to an insufficient amount of energy in the i-th channel, the update mechanism becomes ill-conditioned due to a very large effective step size. In some cases the process will become unstable and register overflow will cause the adaptation to fail catastrophically. So the algorithm given by (22.10) should have the update mechanism disabled for the i-th orthogonal channel if σ_i^2 falls below a critical threshold. Alternatively, the transform domain algorithm may be stabilized by adding small positive constants ε to the diagonal elements of Λ^2 according to

    Λ̂^2 = Λ^2 + εI .    (22.12)

Then Λ̂^2 is used in place of Λ^2 in Eq. (22.10). For most input signals σ_i^2 ≫ ε, and the inclusion of the stabilization factors is transparent to the performance of the algorithm. However, whenever σ_i^2 ≈ ε, the stabilization terms begin to have a significant effect. Within this operating region the power in the channels will not be uniformly normalized and the convergence rate of the filter will begin to degrade, but catastrophic failure will be avoided.

The motivation for using the TDAF adaptive system instead of a simpler LMS based system is to achieve rapid convergence of the filter's coefficients when the input signal is not white, while maintaining a reasonably low computational complexity requirement. In the following section this convergence rate improvement of the TDAF will be explained geometrically.
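The update described above maps directly onto a short routine. The following sketch (illustrative only; it assumes an orthonormal DCT-II as the fixed transform T, one of the real transforms mentioned in the introduction) implements a single TDAF iteration, Eqs. (22.6) through (22.12):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix (real, orthonormal rows) used here as T."""
    n = np.arange(N)
    T = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    T[0, :] *= 1.0 / np.sqrt(N)
    T[1:, :] *= np.sqrt(2.0 / N)
    return T

def tdaf_step(W, sigma2, x_n, d_n, T, mu, alpha, eps):
    """One iteration of the power-normalized TDAF, Eqs. (22.6)-(22.12)."""
    v = T @ x_n                                   # transformed input, Eq. (22.6)
    sigma2 = alpha * sigma2 + np.abs(v) ** 2      # channel power estimates, Eq. (22.11)
    y = W @ v                                     # filter output, Eq. (22.8)
    e = d_n - y                                   # output error, Eq. (22.9)
    W = W + mu * e * np.conj(v) / (sigma2 + eps)  # update, Eqs. (22.10) and (22.12)
    return W, sigma2, y, e
```

Driving tdaf_step in a loop over n, with the sliding input vector built as in the earlier LMS sketch, reproduces the structure of Fig. 22.3; setting T to the identity matrix reduces the update to the power-normalized LMS noted in observation 3 of Section 22.1.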
22.3 Convergence of the Transform Domain Adaptive Filter

In this section the convergence rate improvement of the TDAF is described in terms of the mean squared error surface. From Eqs. (22.4) and (22.6) it is found that R_v = T* R_x T^T, so that for the transform structure without power normalization Eq. (22.5) becomes

    J(z) ≡ E[|e(n)|^2] = J_min + z*^T [T* R_x T^T] z .    (22.13)

The difference between (22.5) and (22.13) is the presence of T in the quadratic term of (22.13). When T is a unitary matrix, its presence in (22.13) gives a rotation and/or a reflection of the surface. The eccentricity of the surface is unaffected by the transform, so the convergence rate of the system is unchanged by the transformation alone.

However, the signal power levels at the adaptive coefficients are changed by the transformation. Consider the intersection of the equal-error contours with the rotated axes: letting z = [0 ··· z_i ··· 0]^T, with z_i in the i-th position, Eq. (22.13) becomes

    J(z) − J_min = [T* R_x T^T]_{ii} z_i^2 = σ_i^2 z_i^2 .    (22.14)

If the equal-error contours are hyperspheres (the ideal case), then for a fixed value of the error J(n), (22.14) must give |z_i| = |z_j| for all i and j, since all points on a hypersphere are equidistant from the origin. When the filter input is not white, this will not hold in general. But since the power levels σ_i^2 are easily estimated, the rotated axes can be scaled to have this property. Let Λ^{−1} ẑ = z, where Λ is defined in (22.10). Then the error surface of the TDAF, with transform T and including power normalization, is given by

    J(ẑ) = J_min + ẑ*^T [Λ^{−1} T* R_x T^T Λ^{−1}] ẑ .    (22.15)

The main diagonal entries of Λ^{−1} T* R_x T^T Λ^{−1} are all equal to one, so (22.14) becomes J(ẑ) − J_min = ẑ_i^2, which has the property described above. Thus, the action of the TDAF system is to rotate the axes of the filter coefficient space using a unitary rotation matrix T, and then to scale these axes so that the error surface contours become approximately hyperspherical at the points where they can be easily observed, i.e., the points of intersection with the new (rotated) axes. Usually the actual eccentricity of the error surface contours is reduced by this scaling, and faster convergence is obtained.

As a second example, transform domain processing is now added to the previous example, as illustrated in Figs. 22.4 and 22.5. The error surface of Fig. 22.4 was created by applying the (arbitrary) transform

    T = [ 0.866  0.500
          0.500  0.866 ]

to the error surface shown in Fig. 22.2, which produces a clockwise rotation of the ellipsoidal contours so that the major and minor axes more closely align with the coordinate axes than they did without the transform. Power normalization was then applied using the normalization matrix Λ^{−1}, as shown in Fig. 22.5, which represents the transformed and power normalized error surface. Note that the elliptical contours after transform domain processing are nearly circular in shape, and in fact they would have been perfectly circular if the rotation of Fig. 22.4 had brought the contours into precise alignment with the coordinate axes. Perfect alignment did not occur in this example because T was not able to perfectly diagonalize the input autocorrelation matrix for this particular x(n). Since T is a fixed transform in the TDAF structure, it clearly cannot properly diagonalize R_x for an arbitrary x(n), hence the surface rotation (orthogonalization) will be less than perfect for most input signals.

FIGURE 22.4: Error surface for the TDAF with transform T.

FIGURE 22.5: Error surface with transform and power normalization.
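The geometric argument above can be checked numerically. The sketch below (added for illustration) reuses the R_x of Fig. 22.2 but assumes a 30° rotation matrix as the unitary transform T, rather than the specific matrix quoted above, so that the unit-diagonal property of Eq. (22.15) and the reduction in eigenvalue spread can be verified directly:

```python
import numpy as np

R_x = np.array([[1.0, 0.9],
                [0.9, 1.0]])

theta = np.deg2rad(30.0)                        # assumed rotation angle
T = np.array([[np.cos(theta),  np.sin(theta)],  # orthonormal (unitary) rotation
              [-np.sin(theta), np.cos(theta)]])

R_v = T @ R_x @ T.T                             # transformed autocorrelation
Lam_inv = np.diag(1.0 / np.sqrt(np.diag(R_v)))  # power normalization matrix
R_norm = Lam_inv @ R_v @ Lam_inv                # quadratic-form matrix of Eq. (22.15)

print(np.diag(R_norm))                          # all ones, as stated in the text
print(np.linalg.cond(R_x))                      # 19.0
print(np.linalg.cond(R_v))                      # 19.0 -- rotation alone does not help
print(np.linalg.cond(R_norm))                   # ~6.1 -- reduced, but not 1, since this
                                                # rotation does not diagonalize R_x
```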
It should be noted here that a well-known conventional algorithm called recursive least squares (RLS) is known to achieve near optimum convergence rates by forming an estimate of R_x^{−1}, the inverse of the autocorrelation matrix. This type of algorithm automatically adjusts to whiten any input signal, and it also varies over time if the input signal is a nonstationary process. Unfortunately, the computation required for the RLS algorithm is large and is not easily carried out in real time within the resource limitations of many practical applications. The RLS algorithm falls into the general class of quasi-Newton optimization techniques, which are thoroughly treated in numerous places throughout the literature.

There are two different ways to interpret the mechanism that brings about the improved convergence rates achieved through transform domain processing [16]. The first point of view considers the combined operations of orthogonalization and power normalization to be the effective transformation Λ^{−1}T, an interpretation that is implied by Eq. (22.15). This line of thinking leads to an understanding of the transformed error surfaces as illustrated by example in Figs. 22.4 and 22.5, and to the logical conclusion that the faster learning rate is due to the conventional LMS algorithm operating on an improved error surface that has been rendered more properly oriented and more symmetrical via the transformation. While this point of view is useful in understanding the principles of transform domain processing, it is not generally implementable from a practical point of view. This is because for an arbitrary input signal, the power normalization factors that constitute the Λ^{−1} part of the input transformation are not known a priori, and must be estimated after T is used to decompose the input signal into orthogonal channels.

The second point of view interprets the transform domain equations as operating on the transformed error surface (without power normalization) with a modified LMS algorithm in which the step sizes are adjusted differently in the various channels according to µ(n) = µΛ^{−2}, where µ(n) = diag[µ_i(n)] is a diagonal matrix that contains the step size for the i-th channel at location (i, i). The dependence of the µ_i(n)'s on the iteration (time) index n acknowledges that the step sizes are a function of the power normalization factors, which are updated in real time as part of the on-line algorithm. This suggests that the TDAF should be able to track nonstationary input statistics within the limited abilities of the transformation T to orthogonalize the input and within the accuracy limits of the power normalization factors. Furthermore, when the input signal is white, all of the σ_i^2's are identical and each is equal to the power in the input signal. In this case the TDAF with power normalization becomes the conventional normalized LMS algorithm.

It is straightforward to show mathematically that the above two points of view are indeed compatible [10]. Let v̂(n) ≡ Λ^{−1}T x(n) = Λ^{−1}v(n) and let the filter tap vector be denoted Ŵ(n) when the matrix Λ^{−1}T is treated as the effective transformation. For the resulting filter to have the same response as the filter in Fig. 22.3 we must have

    v^T(n) W(n) = y(n) = v̂^T(n) Ŵ(n) = v^T(n) Λ^{−1} Ŵ(n) ,  for all v(n) ,    (22.16)

which implies that W(n) = Λ^{−1} Ŵ(n). If the tap vector Ŵ(n) is updated using the LMS algorithm, then

    W(n + 1) = Λ^{−1} Ŵ(n + 1)
             = Λ^{−1} [Ŵ(n) + µ e(n) v̂*(n)]
             = Λ^{−1} Ŵ(n) + µ e(n) Λ^{−1} v̂*(n)
             = W(n) + µ e(n) Λ^{−2} v*(n) ,    (22.17)

which is precisely the algorithm (22.10).
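The algebra in (22.16) and (22.17) can also be confirmed with a short numerical check (illustrative only, using arbitrary random data for a single iteration):

```python
import numpy as np

rng = np.random.default_rng(2)
N, mu = 4, 0.1
v = rng.standard_normal(N)                 # transformed input vector v(n)
d = rng.standard_normal()                  # desired sample d(n)
sigma2 = rng.uniform(0.5, 2.0, N)          # channel powers sigma_i^2
Lam_inv = np.diag(1.0 / np.sqrt(sigma2))

# Interpretation 1: LMS applied to the effective transformation Lam^-1 T.
W_hat = rng.standard_normal(N)             # taps of the effective-transform filter
v_hat = Lam_inv @ v
e1 = d - v_hat @ W_hat
W_from_hat = Lam_inv @ (W_hat + mu * e1 * v_hat)

# Interpretation 2: power-normalized update of Eq. (22.10) on W = Lam^-1 W_hat.
W = Lam_inv @ W_hat
e2 = d - v @ W
W_direct = W + mu * e2 * v / sigma2

print(np.allclose(e1, e2), np.allclose(W_from_hat, W_direct))   # True True
```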
This analysis demonstrates that the two interpretations are consistent, and that they are, in fact, alternate ways to explain the fundamentals of transform domain processing.

22.4 Discussion and Examples

It is clear from the above development that the power estimates σ_i^2 are the optimum scale factors, as opposed to |σ_i| or some other statistic. Also, it is significant to note that no convergence rate improvement can be realized without power normalization. This is the same conclusion that was reached in [6], where the frequency domain LMS algorithm was analyzed with a constant convergence factor. From the error surface description of the TDAF's operation, it is seen that an optimal transform rotates the axes of the hyperellipsoidal equal-error contours into alignment with the coordinate axes. The prescribed power normalization scheme then gives the ideal hyperspherical contours, and the convergence rate becomes the same as if the input were white. The optimal transform is composed of the orthonormal eigenvectors of the input autocorrelation matrix and is known in the literature as the Karhunen-Loève transform (KLT). The KLT is signal dependent and usually cannot be easily computed in real time. Note that real signals have real KLTs, suggesting the use of real transforms in the TDAF (in contrast to complex transforms such as the DFT). Since the optimal transform for the TDAF is signal dependent, a universally optimal fixed parameter transform can never be found. It is also clear that once the filter order has been chosen, any unitary ...

[...]

... convergence over that of the fixed transform algorithm. Similar results appear in Fig. 22.10 with the same coloring filter and a fourth-order adaptive filter.

FIGURE 22.8: Two-dimensional transform domain adaptive filter structure.

22.7 Block-Based Adaptive Filters

The block-based LMS (BLMS) algorithm is one of the many efficient adaptive filtering algorithms aimed at increasing convergence ...

    ... = d_{k,c}    (22.29)

    y_{k,c} = [0 ··· 0 | last N elements of F^{−1} Y_k]^T    (22.30)

FIGURE 22.10: Convergence plot for 5 × 5 FIR 2-D LMS, 2-D TDAF, and 2-D FQN adaptive filters in the system identification configuration with low-pass colored inputs.

Then the error signal in the frequency domain is

    E_k = F(d_k − y_k) .    (22.31)

In order to guarantee that there are only N nonzero terms in the impulse response of the adaptive ...

    ...    (22.19)

where R_x^{−1}(n) is an estimate of R_x^{−1} that varies as a function of the index n. Equation (22.19) characterizes the quasi-Newton LMS algorithm. Note that (22.18) is the starting point for the development of many practical adaptive algorithms ...

FIGURE 22.6: Comparison of (smoothed) learning curves for five different transforms operating on a colored noise input signal with condition number 681.

... the transform domain examples shown previously.

22.6 The 2-D Transform Domain Adaptive Filter

Many successful 1-D FIR algorithms have been extended to 2-D filters [7, 10, 19, 21]. Transform domain adaptive algorithms are also well suited to 2-D signal processing. Orthogonal transforms with power normalization can be used to accelerate the convergence of an adaptive filter in the presence of a colored input ...
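Relating back to the Karhunen-Loève transform discussed in Section 22.4 above, the following sketch (added for illustration, not from the chapter) forms the KLT of a given autocorrelation matrix from its orthonormal eigenvectors and confirms that it orthogonalizes R_x exactly, which fixed transforms such as the DFT or DCT can only approximate:

```python
import numpy as np

R_x = np.array([[1.0, 0.9],
                [0.9, 1.0]])

# KLT: rows are the orthonormal eigenvectors of R_x (real for a real-valued signal).
eigvals, eigvecs = np.linalg.eigh(R_x)
T_klt = eigvecs.T

R_v = T_klt @ R_x @ T_klt.T          # exactly diagonal: perfect orthogonalization
Lam_inv = np.diag(1.0 / np.sqrt(np.diag(R_v)))
R_norm = Lam_inv @ R_v @ Lam_inv     # identity matrix after power normalization

print(np.round(R_v, 12))             # diag(0.1, 1.9)
print(np.linalg.cond(R_norm))        # ~1.0: convergence as if the input were white
```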
... techniques. In recent years it has also appeared in various forms in publications on adaptive filtering. In this section a brief introduction to quasi-Newton adaptive filtering methods is presented. When the quasi-Newton concept is integrated into the LMS algorithm, the resulting adaptive strategy is closely related to the transform domain adaptive filter, but where the transform is computed on-line as an approximation ...

... unconstrained linear frequency domain LMS algorithms have been developed (22.2 and 22.9). In this chapter we formulate both the constrained and unconstrained FBLMS algorithms and then present performance comparisons between them.

FIGURE 22.9: Convergence plot for 3 × 3 FIR 2-D LMS, 2-D TDAF, and 2-D FQN adaptive filters in the system identification configuration with low-pass colored ...

... size µ for the FQN algorithm is given by

    µ = 1 / ( 2 [ ε + x^T(n) R_x^{−1}(n − 1) x(n) ] ) .    (22.22)

This step size is used in other quasi-Newton algorithms (22.4), and seems nearly optimal. The parameter ε is intended to be small relative to the average value of x^T(n) R_x^{−1}(n − 1) x(n). Then the normalization term omitted from (22.21), which is a function of α but not of i, cancels out of the coefficient update ...

... input condition number and greatly improve convergence rates, although some transforms are seen to be more effective than others for the coloring chosen for these examples.

22.5 Quasi-Newton Adaptive Algorithms

The dependence of the adaptive system's convergence rate on the input power spectrum can be reduced by using second-order statistics via the Gauss-Newton method [9, 10, 21]. The Gauss-Newton algorithm ...

References

[5] Gitlin, R.D. and Magee, F.R., Jr., Self-orthogonalizing adaptive equalization algorithms, IEEE Trans. Commun., Vol. COM-25(7), 666–672, July 1977.
[6] Haykin, S., Adaptive Filter Theory, Prentice-Hall, Englewood Cliffs, NJ, 1991.
[7] Hadhoud, M.M. and Thomas, D.W., The two-dimensional adaptive LMS (TDLMS) algorithm, IEEE Trans. Circuits Syst., Vol. 35, 485–494, 1988.
[8] Honig, M.L. and Messerschmidt, D.G., Adaptive Filters: Structures, ...
[...] multidimensional adaptive filtering of frequency domain multiplexed video signals, Ph.D. dissertation, M.I.T., Cambridge, MA, 1990.
[20] Shynk, J.J., Frequency-domain and multirate adaptive filtering, IEEE ASSP Mag., 15–37, Jan. 1992.
[21] Strait, J.C., Structures and algorithms for two-dimensional adaptive signal processing, Ph.D. dissertation, University of Illinois at Urbana-Champaign, 1995.
[22] Widrow, B. ...
