Real-Time Digital Signal Processing, Sen M. Kuo and Bob H. Lee. Copyright © 2001 John Wiley & Sons Ltd. ISBNs: 0-470-84137-0 (Hardback); 0-470-84534-1 (Electronic).

Adaptive Filtering

As discussed in previous chapters, filtering refers to a linear process designed to alter the spectral content of an input signal in a specified manner. In Chapters 5 and 6, we introduced techniques for designing and implementing FIR and IIR filters for given specifications. Conventional FIR and IIR filters are time-invariant: they perform linear operations on an input signal to generate an output signal based on fixed coefficients. Adaptive filters are time varying; filter characteristics such as bandwidth and frequency response change with time, so the filter coefficients cannot be determined when the filter is implemented. Instead, the coefficients of an adaptive filter are adjusted automatically by an adaptive algorithm based on incoming signals. This has the important effect of enabling adaptive filters to be applied in areas where the exact filtering operation required is unknown or is non-stationary.

In Section 8.1, we review the concepts of random processes that are useful in the development and analysis of various adaptive algorithms. The most popular least-mean-square (LMS) algorithm is introduced in Section 8.2, and its important properties are analyzed in Section 8.3. Two widely used modified adaptive algorithms, the normalized and leaky LMS algorithms, are introduced in Section 8.4. In this chapter, we introduce and analyze the LMS algorithm following the derivation and analysis given in [8]. In Section 8.5, we briefly introduce some important applications of adaptive filtering. Implementation considerations are discussed in Section 8.6, and DSP implementations using the TMS320C55x are presented in Section 8.7.

8.1 Introduction to Random Processes

A signal is called a deterministic signal if it can be described precisely and reproduced exactly and repeatedly. However, the signals encountered in practice are not necessarily of this type. A signal that is generated in a random fashion and cannot be described by mathematical expressions or rules is called a random (or stochastic) signal. Signals in the real world are often random in nature; some common examples are speech, music, and noise. These signals cannot be reproduced exactly and need to be modeled and analyzed using statistical techniques. We briefly introduced probability and random variables in Section 3.3. In this section, we review the important properties of random processes and introduce fundamental techniques for processing and analyzing them.

A random process may be defined as a set of random variables. We associate a time function x(n) = x(n, A) with every possible outcome A of an experiment. Each time function is called a realization of the random process, or a random signal. The ensemble of all these time functions (called sample functions) constitutes the random process x(n). If we sample this process at some particular time n_0, we obtain a random variable. Thus a random process is a family of random variables.

We may consider the statistics of a random process in two ways. If we fix the time n at n_0 and consider the random variable x(n_0), we obtain statistics over the ensemble. For example, E[x(n_0)] is the ensemble average, where E[·] is the expectation operation introduced in Chapter 3. If we fix A and consider a particular sample function, we have a time function, and the statistics we obtain are temporal. For example, E[x(n, A_i)] is the time average. If the time average is equal to the ensemble average, we say that the process is ergodic. The property of ergodicity is important because in practice we often have access to only one sample function. Since we generally work only with temporal statistics, it is important to be sure that the temporal statistics we obtain are a true representation of the process as a whole.
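The difference between ensemble and time averages is easy to check numerically. The following MATLAB fragment is a minimal sketch (the variable names and sizes are our own, not from the text): it generates an ensemble of realizations of a zero-mean white Gaussian process, which is ergodic in the mean, and compares the ensemble average at a fixed time n0 against the time average of a single sample function.

M = 1000;                       % number of realizations in the ensemble
N = 1000;                       % samples per realization
x = randn(M, N);                % row i is one sample function x(n, Ai)
n0 = 500;                       % fixed time instant
ensemble_avg = mean(x(:, n0));  % average across the ensemble at n = n0
time_avg = mean(x(1, :));       % average over time for one realization
disp([ensemble_avg time_avg]);  % both values are close to the true mean, 0

Because this process is ergodic, the two printed values agree (both near zero), which is why temporal statistics computed from one long record can stand in for ensemble statistics.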
8.1.1 Correlation Functions

For many applications, one signal is often compared with another in order to determine the similarity between the pair, and to extract additional information based on that similarity. Autocorrelation is used to quantify the similarity between two segments of the same signal. The autocorrelation function of the random process x(n) is defined as

r_{xx}(n, k) = E[x(n) x(k)].    (8.1.1)

This function specifies the statistical relation of two samples at different time indices n and k, and gives the degree of dependence between two random variables (n - k) units apart. For example, consider a digital white noise x(n) as uncorrelated random variables with zero mean and variance \sigma_x^2. Its autocorrelation function is

r_{xx}(n, k) = E[x(n) x(k)] = \begin{cases} E[x(n)] E[x(k)] = 0, & n \neq k \\ \sigma_x^2, & n = k. \end{cases}    (8.1.2)

If we subtract the means in (8.1.1) before taking the expected value, we have the autocovariance function

\gamma_{xx}(n, k) = E\{[x(n) - m_x(n)][x(k) - m_x(k)]\} = r_{xx}(n, k) - m_x(n) m_x(k).    (8.1.3)

The objective in computing the correlation between two different random signals is to measure the degree to which the two signals are similar. The crosscorrelation and crosscovariance functions between two random processes x(n) and y(n) are defined as

r_{xy}(n, k) = E[x(n) y(k)]    (8.1.4)

and

\gamma_{xy}(n, k) = E\{[x(n) - m_x(n)][y(k) - m_y(k)]\} = r_{xy}(n, k) - m_x(n) m_y(k).    (8.1.5)

Correlation is a very useful DSP tool for detecting signals that are corrupted by additive random noise, measuring the time delay between two signals, determining the impulse response of a system (such as obtaining the room impulse response used in Section 4.5.2), and many other tasks. Signal correlation is often used in radar, sonar, digital communications, and other engineering areas. For example, in CDMA digital communications, data symbols are represented with a set of unique key sequences. If one of these sequences is transmitted, the receiver compares the received signal with every possible sequence from the set to determine which sequence has been received. In radar and sonar applications, the received signal reflected from the target is a delayed version of the transmitted signal; by measuring the round-trip delay, one can determine the location of the target.

Both correlation functions and covariance functions are extensively used in analyzing random processes. In general, the statistical properties of a random signal, such as the mean, variance, and autocorrelation and autocovariance functions, are time-varying functions. A random process is said to be stationary if its statistics do not change with time. The most useful and relaxed form of stationarity is the wide-sense stationary (WSS) process. A random process is called WSS if the following two conditions are satisfied:

1. The mean of the process is independent of time. That is,

E[x(n)] = m_x,    (8.1.6)

where m_x is a constant.

2. The autocorrelation function depends only on the time difference. That is,

r_{xx}(k) = E[x(n + k) x(n)].    (8.1.7)

Equation (8.1.7) indicates that the autocorrelation function of a WSS process is independent of the time shift, and r_{xx}(k) denotes the autocorrelation function for a time lag of k samples.
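For the white noise of (8.1.2), the WSS form (8.1.7) reduces to \sigma_x^2 at k = 0 and zero at all other lags. This is easy to verify with a time-average estimate, assuming ergodicity. The MATLAB fragment below is a minimal sketch with made-up parameter values:

N = 10000;
sigma_x = 2;
x = sigma_x * randn(1, N);    % zero-mean white noise with variance 4
r = zeros(1, 6);
for k = 0:5
  r(k+1) = sum(x(1+k:N) .* x(1:N-k)) / (N - k);  % time-average estimate of rxx(k)
end
disp(r);    % r(1) is close to sigma_x^2 = 4; the remaining lags are near zero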
The autocorrelation function r_{xx}(k) of a WSS process has the following important properties:

1. The autocorrelation function is an even function of the time lag k. That is,

r_{xx}(k) = r_{xx}(-k).    (8.1.8)

2. The autocorrelation function is bounded by the mean-squared value of the process, expressed as

|r_{xx}(k)| \leq r_{xx}(0),    (8.1.9)

where r_{xx}(0) = E[x^2(n)] is equal to the mean-squared value, or the power, of the random process. In addition, if x(n) is a zero-mean random process, we have

r_{xx}(0) = E[x^2(n)] = \sigma_x^2.    (8.1.10)

Thus the autocorrelation function of a signal has its maximum value at zero lag. If x(n) has a periodic component, then r_{xx}(k) will contain the same periodic component.

Example 8.1: Given the sequence x(n) = a^n u(n), 0 < a < 1, the autocorrelation function can be computed as

r_{xx}(k) = \sum_n x(n + k) x(n) = \sum_{n=0}^{\infty} a^{n+k} a^n = a^k \sum_{n=0}^{\infty} (a^2)^n.

Since a^2 < 1, we obtain

r_{xx}(k) = \frac{a^k}{1 - a^2}.

Example 8.2: Consider the sinusoidal signal expressed as x(n) = \cos(\omega n); find the mean and the autocorrelation function of x(n).

(a) m_x = E[\cos(\omega n)] = 0.

(b) r_{xx}(k) = E[x(n + k) x(n)] = E[\cos(\omega n + \omega k) \cos(\omega n)]
            = \frac{1}{2} E[\cos(2\omega n + \omega k) + \cos(\omega k)]
            = \frac{1}{2} \cos(\omega k),

since the time average of \cos(2\omega n + \omega k) over n is zero.

The crosscorrelation function of two WSS processes x(n) and y(n) is defined as

r_{xy}(k) = E[x(n + k) y(n)].    (8.1.11)

This crosscorrelation function has the property

r_{xy}(k) = r_{yx}(-k).    (8.1.12)

Therefore r_{yx}(k) is simply the folded version of r_{xy}(k). Hence, r_{yx}(k) provides exactly the same information as r_{xy}(k) with respect to the similarity of x(n) to y(n).

In practice, we have only one sample sequence {x(n)} available for analysis. As discussed earlier, a stationary random process x(n) is ergodic if all its statistics can be determined from a single realization of the process, provided that the realization is long enough. Time averages therefore equal ensemble averages when the record length is infinite. Since we do not have data of infinite length, the averages we compute differ from the true values. In dealing with finite-duration sequences, the sample mean of x(n) is defined as

\bar{m}_x = \frac{1}{N} \sum_{n=0}^{N-1} x(n),    (8.1.13)

where N is the number of samples in the short-time analysis interval. The sample variance is defined as

\bar{\sigma}_x^2 = \frac{1}{N} \sum_{n=0}^{N-1} [x(n) - \bar{m}_x]^2.    (8.1.14)

The sample autocorrelation function is defined as

\bar{r}_{xx}(k) = \frac{1}{N - k} \sum_{n=0}^{N-1-k} x(n + k) x(n),  k = 0, 1, \ldots, N - 1,    (8.1.15)

where N is the length of the sequence x(n). Note that for a given sequence of length N, Equation (8.1.15) generates values for up to N different lags. In practice, we can only expect good results for lags of no more than 5–10 percent of the length of the signal.

The autocorrelation and crosscorrelation functions introduced in this section can be computed using the MATLAB function xcorr in the Signal Processing Toolbox. The crosscorrelation function r_{xy}(k) of the two sequences x(n) and y(n) can be computed using the statement

c = xcorr(x, y);

where x and y are length-N vectors and the crosscorrelation vector c has length 2N - 1. The autocorrelation function r_{xx}(k) of the sequence x(n) can be computed using the statement

c = xcorr(x);

In addition, the crosscovariance function can be estimated using

v = xcov(x, y);

and the autocovariance function can be computed with

v = xcov(x);

See the Signal Processing Toolbox User's Guide for details.
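As a usage example, the time-delay measurement described in Section 8.1.1 can be carried out directly with xcorr: the lag at which the crosscorrelation peaks estimates the delay between a signal and its noisy, delayed copy. The fragment below is a minimal sketch; the signal length, delay D, and noise level are made-up values.

N = 1000;
D = 20;                                         % true delay in samples (assumed)
x = randn(1, N);                                % transmitted signal
y = [zeros(1, D) x(1:N-D)] + 0.1*randn(1, N);   % delayed, noisy received signal
[c, lags] = xcorr(y, x);                        % c has length 2N - 1
[cmax, imax] = max(c);                          % locate the correlation peak
disp(lags(imax));                               % prints a value close to D = 20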
8.1.2 Frequency-Domain Representations

In the study of deterministic digital signals, we use the discrete-time Fourier transform (DTFT) or the z-transform to find the frequency content of the signals. In this section, we use the same transforms for random signals. Consider an ergodic random process x(n). A single sequence cannot really be representative of the random process, because that sequence is only one of infinitely many possible sequences. However, if we consider the autocorrelation function r_{xx}(k), the result is always the same no matter which sample sequence is used to compute it. Therefore we should apply the transform to r_{xx}(k) rather than to x(n).

The correlation functions represent the time-domain description of the statistics of a random process. The frequency-domain statistics are represented by the power density spectrum (PDS), or autopower spectrum. The PDS is the DTFT (or the z-transform) of the autocorrelation function r_{xx}(k) of a WSS signal x(n), defined as

P_{xx}(\omega) = \sum_{k=-\infty}^{\infty} r_{xx}(k) e^{-j\omega k}    (8.1.16)

or

P_{xx}(z) = \sum_{k=-\infty}^{\infty} r_{xx}(k) z^{-k}.    (8.1.17)

A sufficient condition for the existence of the PDS is that r_{xx}(k) be summable. The PDS defined in (7.3.16) is equal to the DFT of the autocorrelation function. The windowing technique introduced in Section 7.3.3 can be used to improve the convergence properties of (7.3.16) and (7.3.17) if the DFT is used in computing the PDS of random signals.

Equation (8.1.16) implies that the autocorrelation function is the inverse DTFT of the PDS, which is expressed as

r_{xx}(k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} P_{xx}(\omega) e^{j\omega k} \, d\omega.    (8.1.18)

From (8.1.10), we have the mean-square value

E[x^2(n)] = r_{xx}(0) = \frac{1}{2\pi} \int_{-\pi}^{\pi} P_{xx}(\omega) \, d\omega.    (8.1.19)

Thus r_{xx}(0) represents the average power in the random signal x(n).

The PDS is a periodic function of the frequency \omega, with period equal to 2\pi. We can show (in the exercise problems) that P_{xx}(\omega) of a WSS signal is a real-valued function of \omega. If x(n) is a real-valued signal, P_{xx}(\omega) is an even function of \omega. That is,

P_{xx}(\omega) = P_{xx}(-\omega)    (8.1.20)

or

P_{xx}(z) = P_{xx}(z^{-1}).    (8.1.21)

The DTFT of the crosscorrelation function of two WSS signals x(n) and y(n) is given by

P_{xy}(\omega) = \sum_{k=-\infty}^{\infty} r_{xy}(k) e^{-j\omega k}    (8.1.22)

or

P_{xy}(z) = \sum_{k=-\infty}^{\infty} r_{xy}(k) z^{-k}.    (8.1.23)

This function is called the cross-power spectrum.

Example 8.3: The autocorrelation function of a WSS white random process can be defined as

r_{xx}(k) = \sigma_x^2 \delta(k) + m_x^2.    (8.1.24)

The corresponding PDS is given by

P_{xx}(\omega) = \sigma_x^2 + 2\pi m_x^2 \delta(\omega),  |\omega| \leq \pi.    (8.1.25)

An important white random signal is white noise, which has zero mean. Its autocorrelation function is therefore

r_{xx}(k) = \sigma_x^2 \delta(k),    (8.1.26)

and the power spectrum is

P_{xx}(\omega) = \sigma_x^2,  |\omega| < \pi,    (8.1.27)

which is constant for all frequencies \omega.

Consider a linear, time-invariant digital filter defined by the impulse response h(n), or the transfer function H(z). The input of the filter is a WSS random signal x(n) with the PDS P_{xx}(\omega). As illustrated in Figure 8.1, the PDS of the filter output y(n) can be expressed as

P_{yy}(\omega) = |H(\omega)|^2 P_{xx}(\omega).
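This input–output relation is easy to check numerically for a white noise input, where P_{xx}(\omega) = \sigma_x^2 and (8.1.19) gives the output power r_{yy}(0) = \sigma_x^2 \sum_n h^2(n). The MATLAB fragment below is a minimal sketch; the FIR coefficients in b are an arbitrary example, not taken from the text.

N = 100000;
x = randn(1, N);           % white noise input with unit variance
b = [1 0.5 0.25];          % example FIR impulse response h(n) (assumed)
y = filter(b, 1, x);       % pass the white noise through the filter
% With Pyy(w) = |H(w)|^2 * Pxx(w) and (8.1.19), the output power ryy(0)
% reduces to sigma_x^2 * sum(h(n)^2) = 1.3125 for this filter.
disp([var(y) sum(b.^2)]);  % the two printed values should be close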