Doherty, J.F. "Channel Equalization as a Regularized Inverse Problem," Digital Signal Processing Handbook, Ed. Vijay K. Madisetti and Douglas B. Williams, Boca Raton: CRC Press LLC, 1999.

(c) 1999 by CRC Press LLC

31 Channel Equalization as a Regularized Inverse Problem

John F. Doherty, Pennsylvania State University

31.1 Introduction
31.2 Discrete-Time Intersymbol Interference Channel Model
31.3 Channel Equalization Filtering: Matrix Formulation of the Equalization Problem
31.4 Regularization
31.5 Discrete-Time Adaptive Filtering: Adaptive Algorithm Recapitulation / Regularization Properties of Adaptive Algorithms
31.6 Numerical Results
31.7 Conclusion
References

31.1 Introduction

In this article we examine the problem of communication channel equalization and how it relates to the inversion of a linear system of equations. Channel equalization is the process by which the effect of a band-limited channel is diminished, i.e., equalized, at the sink of a communication system. Although there are many ways to accomplish this, we concentrate on linear filters and adaptive filters; it is through the linear filter approach that the analogy to matrix inversion arises. Regularized inversion refers to a process in which noise-dominated modes of the observed signal are attenuated.

31.2 Discrete-Time Intersymbol Interference Channel Model

Intersymbol interference (ISI) is a phenomenon observed at the equalizer caused by frequency distortion of the transmitted signal. This distortion usually arises from the frequency-selective characteristics of the transmission medium, but it can also be due to deliberate time dispersion of the transmitted pulse used to effect realizable implementations of the transmit filter. In either case, the purpose of the equalizer is to remove the deleterious effects of ISI on symbol detection. The ISI generation mechanism is described next, with a description of equalization techniques to follow.
The information transmitted by a digital communication system is comprised of a set of discrete symbols, and the ultimate form of the received information is likewise cast into discrete form. However, the intermediate components of the digital communication system operate on continuous waveforms that carry the information. The major portions of the communication link are the transmitter pulse-shaping filter, the modulator, the channel, the demodulator, and the receiver filter. It is advantageous to transform the continuous part of the communication system into an equivalent discrete-time channel description for simulation purposes. The discrete formulation should be transparent to both the information source and the equalizer when evaluating performance.

The equivalent discrete-time channel model is obtained by combining the transmit filter, p(t), the channel filter, g(t), and the receive filter, w(t), into a single continuous filter, that is,

    h(t) = w(t) \ast g(t) \ast p(t)    (31.1)

Refer to Fig. 31.1. The effect of the sampler preceding the decision device is to discretize the aggregate filter.

FIGURE 31.1: The signal flow block diagram for the equivalent channel description. The equalizer observes x(nT), a sampled version of the receive filter output x(t).

The equivalent discrete-time channel as a means to simulate the performance of digital communication systems was advanced by Proakis [1] and has found subsequent use throughout the communications literature [2, 3].

It has been shown that a bandpass transmitted pulse train has an equivalent low-pass representation [1]

    s(t) = \sum_{n=0}^{\infty} A_n \, p(t - nT)    (31.2)

where {A_n} is the information-bearing symbol set, p(t) is the equivalent low-pass transmit pulse waveform, and T is the symbol interval.
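The aggregate filter of Eq. (31.1) and its sampled, discrete equivalent can be sketched numerically. The pulse shapes, the oversampling factor, and all variable names below are illustrative assumptions, not values from the chapter:

```python
import numpy as np

# Form h(t) = w(t) * g(t) * p(t) (Eq. 31.1) on a fine grid, then sample at the
# symbol interval T to obtain the discrete equivalent channel taps.
fs = 8                                   # samples per symbol interval (assumed)
t = np.arange(4 * fs) / fs               # four symbol intervals of support

p = np.sinc(t - 2.0)                     # transmit pulse p(t) (assumed shape)
g = np.exp(-t)                           # channel impulse response g(t) (assumed)
w = p[::-1]                              # receive filter matched to p(t)

h = np.convolve(np.convolve(p, g), w) / fs**2   # aggregate response h(t)
h_k = h[::fs]                                   # discrete equivalent channel taps

print(len(h), len(h_k))
```

A simulation built this way operates on the taps h_k alone, so the continuous filters are transparent to both the information source and the equalizer, as the text requires.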
The observed signal at the input of the receiver is

    r(t) = \sum_{n=0}^{\infty} A_n \int_{-\infty}^{+\infty} p(\tau - nT) \, g(t - \tau) \, d\tau + n(t)    (31.3)

where g(t) is the equivalent low-pass band-limited impulse response of the channel and the channel noise, n(t), is modeled as white Gaussian noise. The optimum receiver filter, w(t), is the matched filter, which is designed to give maximum correlation with the received pulse [4]. The output of the receiver filter, that is, the signal seen by the sampler, can be written as

    x(t) = \sum_{n=0}^{\infty} A_n \, h(t - nT) + \nu(t)    (31.4)

    h(t) = \int_{-\infty}^{+\infty} \left[ \int_{-\infty}^{+\infty} p(\lambda) \, g(\tau - \lambda) \, d\lambda \right] w(t - \tau) \, d\tau    (31.5)

    \nu(t) = \int_{-\infty}^{+\infty} n(\tau) \, w(t - \tau) \, d\tau    (31.6)

where h(t), the response of the receiver filter to the received pulse, represents the overall impulse response between the transmitter and the sampler, and \nu(t) is a filtered version of the channel noise. The input to the equalizer is a sampled version of Eq. (31.4); sampling at times t = kT produces

    x(kT) = \sum_{n=0}^{\infty} A_n \, h(kT - nT) + \nu(kT)    (31.7)

as the input to the discrete-time equalizer. By normalizing with respect to the sampling interval and rearranging terms, Eq. (31.7) becomes

    x_k = \underbrace{h_0 A_k}_{\text{desired symbol}} + \underbrace{\sum_{n=0,\; n \neq k}^{\infty} A_n h_{k-n}}_{\text{intersymbol interference}} + \nu_k    (31.8)

31.3 Channel Equalization Filtering

31.3.1 Matrix Formulation of the Equalization Problem

The task of finding the optimum linear equalizer coefficients can be described by casting the problem into a system of linear equations,

    \begin{bmatrix} \tilde{d}_1 \\ \tilde{d}_2 \\ \vdots \\ \tilde{d}_L \end{bmatrix} = \begin{bmatrix} x_1^T \\ x_2^T \\ \vdots \\ x_L^T \end{bmatrix} c + \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_L \end{bmatrix}    (31.9)

    x_k = \left[ x_{k+N-1}, \ldots, x_k \right]^T    (31.10)

where (\cdot)^T denotes the transpose operation. The received sample at time k is x_k, which consists of the channel output corrupted by additive noise. The elements of the N x 1 vector c_k are the coefficients of the equalizer filter at time k.
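The ISI model of Eq. (31.8) and the data matrix of Eqs. (31.9)-(31.10) can be sketched as follows. The channel taps, noise level, and symbol alphabet are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate x_k = sum_n A_n h_{k-n} + nu_k for binary symbols A_n, then stack
# length-N windows of the received samples into the L x N data matrix X.
h = np.array([0.5, 1.0, 0.5])                      # equivalent channel taps (assumed)
L, N = 50, 11                                      # observation rows, filter length
A = rng.choice([-1.0, 1.0], size=L + 2 * N)        # information symbols {A_n}
noise = 0.01 * rng.standard_normal(len(A) + len(h) - 1)
x = np.convolve(A, h) + noise                      # received samples x_k

# Row k of X holds x_k^T = [x_{k+N-1}, ..., x_k], as in Eq. (31.10).
X = np.array([x[k:k + N][::-1] for k in range(L)])
print(X.shape)   # (50, 11)
```

Choosing L sets the number of rows of X, i.e., the observation interval used to estimate the filter coefficients, exactly as discussed in the text.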
The equalizer is said to be in decision-directed mode when \tilde{d}_k is taken as the output of the nonlinear decision device. The equalizer is in training, or reference-directed, mode when \tilde{d}_k is explicitly made identical to the transmitted sequence A_k. In either case, e_k is the error between the desired equalizer output, \tilde{d}_k, and the actual equalizer output, x_k^T c. Assuming \tilde{d}_k = A_{k+N}, the notation in Eq. (31.9) can be written in the compact form

    d = Xc + e    (31.11)

by defining d = [\tilde{d}_1, \ldots, \tilde{d}_L]^T and by making the obvious associations with Eq. (31.9). Note that the parameter L determines the number of rows of the time-varying matrix X. Therefore, choosing L is analogous to choosing an observation interval for the estimation of the filter coefficients.

31.4 Regularization

We seek a solution for the filter coefficients of the form c = Yd, where Y is in some sense an inverse of the data matrix X. The least squares solution requires that

    Y = \left( X^T X \right)^{-1} X^T    (31.12)

where X^{\#} := (X^T X)^{-1} X^T represents the Moore-Penrose (M-P) inverse of X. If one or more of the eigenvalues of the matrix X^T X is zero, then the Moore-Penrose inverse does not exist. To investigate the behavior of the inverse, we decompose the data matrix into the form X = X_S + X_N, where X_S is the signal component and X_N is the noise component. Generally, the noise data matrix is full rank, while the signal data matrix may be nearly rank deficient because of spectral nulls in the transmission channel. This is illustrated by examining the smallest eigenvalue of X_S^T X_S,

    \lambda_{\min} = S_R^{\min} + O\!\left( N^{-k} \right)    (31.13)

where S_R is the continuous PSD of the received data x_k, S_R^{\min} is the minimum value of the PSD, k is the number of non-vanishing derivatives of S_R at S_R^{\min}, and N is the equalizer filter length.
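The least squares solution of Eq. (31.12) can be sketched with synthetic, well-conditioned training data; every name and value here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Solve d = Xc + e in the least squares sense via Y = (X^T X)^{-1} X^T,
# and check that this matches the Moore-Penrose pseudo-inverse.
L, N = 50, 11
X = rng.standard_normal((L, N))                 # synthetic data matrix
c_true = rng.standard_normal(N)                 # "true" equalizer coefficients
d = X @ c_true + 0.001 * rng.standard_normal(L) # training targets, light noise

c_hat = np.linalg.inv(X.T @ X) @ X.T @ d        # Eq. (31.12) applied to d
assert np.allclose(c_hat, np.linalg.pinv(X) @ d)
```

For a well-conditioned X this direct inversion is harmless; the point of the section is precisely that it stops being harmless when X^T X is nearly singular.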
Any spectral loss in the signal caused by the channel translates directly into a corresponding decrease in the minimum eigenvalue of the received signal. If \lambda_{\min} becomes small, but nonzero, the data correlation matrix X^T X becomes ill-conditioned and its inversion becomes sensitive to the noise. The sensitivity is expressed by the quantity

    \delta := \frac{\| \tilde{c} - c \|}{\| c \|} \leq \frac{\sigma_n^2}{\lambda_{\min}} + O\!\left( \sigma_n^4 \right)    (31.14)

where the noiseless least squares filter coefficient solution, c, has been perturbed by adding white noise with variance \sigma_n^2 \ll 1 to the data, producing the least squares solution \tilde{c}. Substituting Eq. (31.13) into Eq. (31.14) yields

    \delta \leq \frac{\sigma_n^2}{S_R^{\min} + O\!\left( N^{-k} \right)} + O\!\left( \sigma_n^4 \right) \approx \frac{\sigma_n^2}{S_R^{\min}}    (31.15)

The relation in Eq. (31.15) indicates the potential numerical problems in solving for the equalizer filter coefficients when the data are spectrally deficient. Direct inversion of the data matrix is not advisable when the channel has severe spectral nulls. This situation is equivalent to stating that the original estimation problem d = Xc is ill-posed: the equalizer is asked to reproduce components of the channel input that are unobservable at the channel output or are obscured by noise. Thus, it is reasonable to identify the modes of the input dominated by noise and give them little weight, relative to the signal-dominated components, when solving for the equalizer filter coefficients. This process of weighting is called regularization.

Regularization can be described through a generalization of the M-P inverse that depends on the singular value decomposition (SVD) of the data matrix,

    X = U \Sigma V^T    (31.16)

where U is an L x N unitary matrix, V is an N x N unitary matrix, and \Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_N) is a diagonal matrix of singular values with \sigma_i \geq 0 and \sigma_1 > \sigma_2 > \cdots > \sigma_N. It is assumed in Eq. (31.16) that L > N, which is typical in the equalization problem.
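The link between a channel spectral null and a small \lambda_{\min}, Eq. (31.13), can be observed numerically. The two channels below are illustrative assumptions: one has a null at the Nyquist frequency, the other does not:

```python
import numpy as np

rng = np.random.default_rng(2)

def data_matrix(h, L=200, N=11):
    """Stack windowed, noise-free channel outputs into an L x N matrix X."""
    A = rng.choice([-1.0, 1.0], size=L + 2 * N)
    x = np.convolve(A, h)
    return np.array([x[k:k + N][::-1] for k in range(L)])

X_null = data_matrix(np.array([1.0, 1.0]))   # H(e^{j*pi}) = 0: spectral null
X_flat = data_matrix(np.array([1.0, 0.1]))   # nearly flat spectrum

lmin_null = np.linalg.eigvalsh(X_null.T @ X_null).min()
lmin_flat = np.linalg.eigvalsh(X_flat.T @ X_flat).min()
print(lmin_null, lmin_flat)   # the null channel yields a much smaller lambda_min
```

Consistent with Eq. (31.15), inverting X^T X for the null channel would amplify any noise riding on its weakest eigenmode.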
We define the generalized pseudo-inverse of X as

    X^{\dagger} = V \Sigma^{\dagger} U^T    (31.17)

where \Sigma^{\dagger} = \mathrm{diag}\!\left( \sigma_1^{\dagger}, \sigma_2^{\dagger}, \ldots, \sigma_N^{\dagger} \right) and

    \sigma_i^{\dagger} = \begin{cases} \sigma_i^{-1}, & \sigma_i \neq 0 \\ 0, & \sigma_i = 0 \end{cases}    (31.18)

The M-P inverse can be reformulated using the SVD as follows:

    X^{\#} = \left( V \Sigma^2 V^T \right)^{-1} V \Sigma U^T = V \Sigma^{-1} U^T    (31.19)

Upon examination of Eq. (31.17) and Eq. (31.19), we note that X^{\#} = X^{\dagger} only if all the singular values of X are nonzero, \sigma_i \neq 0. Note also that V \Sigma^2 V^T is the eigenvalue decomposition of X^T X, which implies that the eigenvalues of X^T X are the squares of the singular values of X.

The generalized pseudo-inverse in Eq. (31.17) provides an eigenvalue spectral weighting, given by Eq. (31.18), which differs from the M-P inverse only when one or more of the eigenvalues of X^T X are identically zero. However, this form of regularization is rather restrictive, since complete annihilation of the spectral components is rarely encountered in practice. A more likely condition for the eigenvalues of X^T X is that a small band of signal eigenmodes is much smaller in magnitude than the corresponding noise modes. Direct inversion of these eigenmodes, although well-defined mathematically, leads to noise enhancement at the equalizer output and to noise sensitivity in the filter coefficient solution. An alternative to the generalized pseudo-inverse is to use a regularized inverse wherein the eigenmodes are weighted prior to inversion [5]. This approach leads to a trade-off between the noise immunity of the equalizer filter weights and the signal fidelity at the equalizer filter output. To demonstrate this trade-off, let

    c = X^{\dagger} d    (31.20)

be the least squares solution.
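The generalized pseudo-inverse of Eqs. (31.17)-(31.18) can be sketched directly from the SVD. The rank-deficient test matrix (its last column duplicates the first) and the threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Build X† = V Σ† U^T, inverting only the nonzero singular values (Eq. 31.18).
L, N = 50, 11
B = rng.standard_normal((L, N - 1))
X = np.column_stack([B, B[:, 0]])            # rank N-1 by construction

U, s, Vt = np.linalg.svd(X, full_matrices=False)
tol = 1e-10 * s[0]                           # numerical stand-in for "sigma_i = 0"
s_dag = np.array([1.0 / si if si > tol else 0.0 for si in s])   # Eq. (31.18)
X_dag = Vt.T @ np.diag(s_dag) @ U.T                             # Eq. (31.17)

# Agrees with NumPy's pseudo-inverse under the same cutoff.
assert np.allclose(X_dag, np.linalg.pinv(X, rcond=1e-10))
```

In floating point an "exactly zero" singular value only appears as a tiny number, which is why a relative threshold replaces the exact test in Eq. (31.18); this restrictiveness is exactly what motivates the smoother regularized inverses discussed next.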
Let the regularized inverse be Y_n such that \lim_{n \to \infty} Y_n = X^{\dagger}. The regularized estimate for an observation perturbed by a random noise vector, n, is

    c_n = Y_n \left( d + n \right)    (31.21)

The effects of the regularized inverse and the noise vector are indicated by

    \| c_n - c \| = \left\| Y_n n + \left( Y_n - X^{\dagger} \right) d \right\| \leq \| Y_n n \| + \left\| Y_n - X^{\dagger} \right\| \| d \|    (31.22)

The term \| Y_n n \| is the part of the coefficient error due to the noise and is likely to increase as n \to \infty. The term \| Y_n - X^{\dagger} \| represents the contribution due to the regularization error in approximating the pseudo-inverse; this error tends to zero as n \to \infty. The trade-off between noise attenuation and regularization error is evident upon inspection of Eq. (31.22), which also points out an idiosyncratic property of the regularization process. At first, the equalizer output error tends to decrease, owing to the decreasing regularization error \| Y_n - X^{\dagger} \|. Then, as n increases further, the output error is likely to increase because of the noise amplification component \| Y_n n \|. This behavior raises the question of the best choice for the parameter n. A widely accepted procedure is to use the discrepancy principle, which states that n^{\ast} should satisfy

    \left\| X c_{n^{\ast}} - \left( d + n \right) \right\| = \| n \|    (31.23)

Letting n > n^{\ast} usually results in noise amplification at the equalizer output.

31.5 Discrete-Time Adaptive Filtering

We next examine three adaptive algorithms in terms of the regularization properties they exhibit in deriving the equalizer filter: the normalized least mean squares (NLMS) algorithm, the recursive least squares (RLS) algorithm, and the block-iterative NLMS (BINLMS) algorithm. These algorithms are representative of the wider classes of adaptive algorithms to which they belong.

31.5.1 Adaptive Algorithm Recapitulation

NLMS

The NLMS algorithm update is given by

    c_n = c_{n-1} + \mu \left( d_n - x_n^T c_{n-1} \right) \frac{x_n}{\| x_n \|^2}    (31.24)

for n = 1, \ldots, L.
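As a concrete sketch, the NLMS recursion of Eq. (31.24) run over one pass of synthetic training pairs; the problem sizes, step size, and noiseless data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# NLMS update: c_n = c_{n-1} + mu * (d_n - x_n^T c_{n-1}) * x_n / ||x_n||^2.
L, N, mu = 500, 11, 0.5
c_true = rng.standard_normal(N)            # coefficients to be learned
X = rng.standard_normal((L, N))            # rows are the vectors x_n^T
d = X @ c_true                             # noiseless training targets d_n

c = np.zeros(N)                            # c_0 = 0
for x_n, d_n in zip(X, d):
    e_n = d_n - x_n @ c                    # a priori error
    c = c + mu * e_n * x_n / (x_n @ x_n)   # Eq. (31.24)

print(np.linalg.norm(c - c_true))          # small after one pass
```

Only L updates are available here, one per training pair, which is the limitation the BINLMS removes by cycling repeatedly over the same block.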
This can be rewritten as

    c_n = \left( I - \mu \frac{x_n x_n^T}{\| x_n \|^2} \right) c_{n-1} + \mu \frac{d_n x_n}{\| x_n \|^2}    (31.25)

Defining P_n := I - \mu x_n x_n^T / \| x_n \|^2 and p_n := \mu d_n x_n / \| x_n \|^2, Eq. (31.25) becomes

    c_L = Q c_0 + q    (31.26)

where

    Q = P_L P_{L-1} \cdots P_1    (31.27)

and

    q = \left[ P_L \cdots P_2 \right] p_1 + \left[ P_L \cdots P_3 \right] p_2 + \cdots + P_L p_{L-1} + p_L    (31.28)

BINLMS

The BINLMS algorithm relies on observing the entire block of filter vectors x_n, 1 \leq n \leq L, in Eq. (31.9). The BINLMS update procedure is

    c_{n+1} = c_n + \mu \left( d_j - x_j^T c_n \right) \frac{x_j}{\| x_j \|^2}    (31.29)

where j = n \bmod L. The update in Eq. (31.29) is related to the NLMS update by considering Eq. (31.26). That is, Eq. (31.29) is equivalent to

    c_{nL} = Q c_{(n-1)L} + q    (31.30)

where L updates of Eq. (31.29) are compacted into a single update in Eq. (31.30). Note that only L updates are possible using Eq. (31.24), compared to an arbitrary number of updates in Eq. (31.29).

RLS

The update procedure for the RLS algorithm is

    g_n = \frac{\lambda^{-1} Y_{n-1} x_n}{1 + \lambda^{-1} x_n^T Y_{n-1} x_n}    (31.31)

    e_n = d_n - c_{n-1}^T x_n    (31.32)

    c_n = c_{n-1} + e_n g_n    (31.33)

    Y_n = \lambda^{-1} \left( Y_{n-1} - g_n x_n^T Y_{n-1} \right)    (31.34)

where g_n is called the gain vector, Y_n is the estimate of \left( X_n^T X_n \right)^{-1} obtained with the matrix inversion lemma, and X_n represents the first n rows of X in Eq. (31.9). The forgetting factor 0 < \lambda \leq 1 allows the RLS algorithm to weight more recent samples, providing a tracking capability for time-varying channels. The matrix inversion recursion is initialized with Y_0 = \delta^{-1} I, where 0 < \delta \ll 1. The initialization constant transforms the data correlation matrix into

    X_n^T \Lambda_n X_n + \lambda^n \delta I    (31.35)

where \Lambda_n = \mathrm{diag}\!\left( 1, \lambda, \ldots, \lambda^{n-1} \right).

31.5.2 Regularization Properties of Adaptive Algorithms

In this section we examine how each of the adaptive algorithms achieves regularization of the equalizer filter solution. We begin with the BINLMS and subsequently treat the NLMS as a special case. The BINLMS update of Eq.
(31.30) is equivalent to

    c_l = Q c_{l-1} + q    (31.36)

where an increment in l is equivalent to L increments of n in Eq. (31.29). The recursion in Eq. (31.36) is also equivalent to

    c_l = B_l d    (31.37)

where \lim_{l \to \infty} B_l = X^{\dagger}. Let \hat{\sigma}_{k,l} represent the singular values of B_l; then the relationship between the singular values of B_l and the singular values of X is [6]

    \hat{\sigma}_{k,l} = \begin{cases} \dfrac{1}{\sigma_k} \left[ 1 - \left( 1 - \dfrac{\mu}{N} \sigma_k^2 \right)^{l+1} \right], & \sigma_k \neq 0 \\ 0, & \sigma_k = 0 \end{cases}    (31.38)

The regularization property of the BINLMS depends on both \mu and l. Since the step size parameter \mu is chosen to guarantee convergence, i.e., 0 < \left( 1 - \frac{\mu}{N} \sigma_1^2 \right) < 1, the regularization is primarily controlled by the iteration index l. The regularization behavior of the BINLMS given by Eq. (31.38) is that the signal-dominant modes are inverted first, followed by the weaker noise-dominant modes, as the index l increases.

The regularization behavior of the NLMS algorithm is obtained directly from the BINLMS by setting l = 1 in Eq. (31.38). We see that the only control over the regularization for the NLMS algorithm is to decrease the step size \mu. However, this leads to a potentially undesirable reduction in the convergence rate of the adaptive equalizer filter.

The RLS algorithm's weighting of the singular values follows from inspection of Eq. (31.35). The RLS equalizer filter coefficient estimate is

    c_{LS} = \left( X^T \Lambda_L X + \lambda^L \delta I \right)^{-1} X^T \left( \Lambda_L^{1/2} \right)^T d    (31.39)

Let \hat{\sigma}_{LS,k} represent the singular values of the effective inverse used in the RLS algorithm; then

    \hat{\sigma}_{LS,k} = \frac{\sqrt{\lambda^k} \, \sigma_k}{\lambda^k \sigma_k^2 + \lambda^L \delta}    (31.40)

There are several points to note about Eq. (31.40). In the absence of the forgetting factor, \lambda = 1, and the initialization constant, \delta = 0, the RLS algorithm provides the exact inverse of the singular values, as expected. The constant \delta prevents the denominator of Eq. (31.40) from becoming too small. However, this regularization is lost if \lambda^L \to 0, which is the case when the observation interval L becomes large.
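The two regularization functions can be evaluated directly. The singular value spectrum, step size, and filter length below follow the numerical example in the text (mu = 0.2, N = 11, sigma_k = 1.0, 0.9, ..., 0.1); the forgetting factor and delta for the RLS curve are illustrative assumptions:

```python
import numpy as np

N, mu = 11, 0.2
sigma = 1.0 - 0.1 * np.arange(10)           # nonzero singular values of X

def binlms_inverse(s, l):
    """Effective inverted singular values after l block iterations, Eq. (31.38)."""
    return (1.0 - (1.0 - (mu / N) * s**2) ** (l + 1)) / s

def rls_inverse(s, k, lam, delta, L=50):
    """Effective inverted singular values of the RLS, Eq. (31.40)."""
    return np.sqrt(lam**k) * s / (lam**k * s**2 + lam**L * delta)

# Strong modes approach 1/sigma_k after few iterations; weak modes lag behind.
print(binlms_inverse(sigma, 5) * sigma)      # fraction of full inversion, l = 5
print(rls_inverse(sigma, np.arange(1, 11), lam=0.96, delta=0.01))
```

Letting l grow without bound recovers the generalized pseudo-inverse, while setting lam = 1 and delta = 0 in the RLS curve reproduces exact inversion of the singular values, matching the limiting cases noted above.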
The behavior of the regularization functions (31.38) and (31.40) is illustrated in Fig. 31.2.

FIGURE 31.2: The regularization functions of the NLMS, BINLMS, and RLS algorithms.

31.6 Numerical Results

A numerical example of the regularization characteristics of the adaptive equalization algorithms discussed above is now presented. A data matrix X is constructed with dimensions L = 50 and N = 11, having the singular value matrix \Sigma = \mathrm{diag}(1.0, 0.9, \ldots, 0.1, 0.0). The step size \mu = 0.2 is chosen. Since the RLS algorithm computes an estimate of \left( X^T X \right)^{-1}, it is sensitive to the eigenvalues of X^T X. A graph similar to Fig. 31.2 is produced, with the exception that the eigenvalue inverses of X^T X are plotted for the RLS algorithm. These results are shown in Fig. 31.3 using the eigenvalues of X^T X given by \sigma_i^2 = \left( 1 - (i-1)/10 \right)^2 for 1 \leq i \leq 10 and \sigma_{11}^2 = 0. The RLS algorithm exhibits a large dynamic range in the eigenvalue inverse obtained with the matrix inversion lemma, which may lead to unstable operation of the adaptive equalizer filter.

FIGURE 31.3: The regularization behavior of the NLMS, BINLMS, and the RLS adaptive algorithms. The BINLMS curves represent block iterations of 5, 10, 15, and 20. The RLS algorithm uses \lambda = 1.0 and \lambda = 0.96.

31.7 Conclusion

A short introduction to the basic concepts of regularization analysis has been presented in this article. Further development of this analysis as applied to decision-feedback equalization may be found in [6]. The choice of adaptive algorithm is application-dependent, and each one comes with its associated advantages and disadvantages. The LMS-type algorithms are low-complexity solutions with relatively slow convergence. The RLS-type algorithms have much faster convergence but are typically plagued by stability problems associated with error propagation and unregularized matrix inversion.
Circumventing these stability problems tends to lead to more complex algorithm implementations. The BINLMS algorithm is a trade-off between the convergence speed of the RLS-type algorithms and the stability of the LMS-type algorithms. A disadvantage of the BINLMS algorithm is that the instantaneous throughput requirement may be high due to the block processing required.

References

[1] Proakis, J., Digital Communications, 2nd ed., McGraw-Hill, New York, 1989.
[2] Hatzinakos, D. and Nikias, C., Estimation of multipath channel response in frequency selective channels, 7, 12-19, Jan. 1989.
[3] Eleftheriou, E. and Falconer, D., Adaptive equalization techniques for HF channels, SAC-5, 238-247, Feb. 1987.
[4] Wozencraft, J. and Jacobs, I., Principles of Communication Engineering, John Wiley & Sons, New York, 1965.
[5] Tikhonov, A. and Arsenin, V., Solutions to Ill-Posed Problems, V.H. Winston and Sons, Washington, D.C., 1977.
[6] Doherty, J. and Mammone, R., An adaptive algorithm for stable decision-feedback filtering, IEEE Trans. Circuits Syst. II: Analog and Digital Signal Processing, 40, Jan. 1993.

