7 Adaptive Filters

Digital Signal Processing: Laboratory Experiments Using C and the TMS320C31 DSK
Rulph Chassaing
Copyright © 1999 John Wiley & Sons, Inc.
Print ISBN 0-471-29362-8; Electronic ISBN 0-471-20065-4

• Adaptive structures
• The least mean square (LMS) algorithm
• Programming examples using C and TMS320C3x code

Adaptive filters are best used in cases where signal conditions or system parameters are slowly changing and the filter is to be adjusted to compensate for this change. The least mean square (LMS) criterion is a search algorithm that can be used to provide the strategy for adjusting the filter coefficients. Programming examples are included to give a basic intuitive understanding of adaptive filters.

7.1 INTRODUCTION

In conventional FIR and IIR digital filters, it is assumed that the process parameters that determine the filter characteristics are known. They may vary with time, but the nature of the variation is assumed to be known. In many practical problems, there may be a large uncertainty in some parameters because of inadequate prior test data about the process. Some parameters might be expected to change with time, but the exact nature of the change is not predictable. In such cases, it is highly desirable to design the filter to be self-learning, so that it can adapt itself to the situation at hand.

The coefficients of an adaptive filter are adjusted to compensate for changes in input signal, output signal, or system parameters. Instead of being rigid, an adaptive system can learn the signal characteristics and track slow changes. An adaptive filter can be very useful when there is uncertainty about the characteristics of a signal or when these characteristics change.

Figure 7.1 shows a basic adaptive filter structure in which the adaptive filter's output y is compared with a desired signal d to yield an error signal e, which is fed back to the adaptive filter. The coefficients of the adaptive filter are adjusted, or optimized, using a least mean square (LMS) algorithm based on the error signal.

We will discuss here only the LMS searching algorithm with a linear combiner (FIR filter), although there are several strategies for performing adaptive filtering. The output of the adaptive filter in Figure 7.1 is

   y(n) = Σ_{k=0}^{N-1} w_k(n)x(n - k)                            (7.1)

where the w_k(n) represent the N weights or coefficients for a specific time n. The convolution equation (7.1) was implemented in Chapter 4 in conjunction with FIR filtering. It is common practice to use the terminology of weights w for the coefficients associated with topics in adaptive filtering and neural networks.

A performance measure is needed to determine how good the filter is. This measure is based on the error signal,

   e(n) = d(n) - y(n)                                             (7.2)

which is the difference between the desired signal d(n) and the adaptive filter's output y(n). The weights or coefficients w_k(n) are adjusted such that a mean squared error function is minimized. This mean squared error function is E[e²(n)], where E represents the expected value. Since there are N weights or coefficients, a gradient of the mean squared error function is required. An estimate can be found instead using the gradient of e²(n), yielding

   w_k(n + 1) = w_k(n) + 2βe(n)x(n - k)    k = 0, 1, . . . , N - 1   (7.3)

which represents the LMS algorithm [1-3]. Equation (7.3) provides a simple but powerful and efficient means of updating the weights, or coefficients, without the need for averaging or differentiating, and will be used for implementing adaptive filters.

FIGURE 7.1 Basic adaptive filter structure.

The input to the adaptive filter is x(n), and the rate of convergence and accuracy of the adaptation process (adaptive step size) is β.
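Equations (7.1)-(7.3) translate into a short per-sample routine. The following C sketch shows one LMS iteration; the filter length and step size here are illustrative values rather than values from the text, and beta stands for the 2β of (7.3) folded into a single constant:

```c
#define N 4   /* illustrative filter length, not a value from the text */

/* One LMS iteration for the structure of Figure 7.1.
   x[0..N-1] holds x(n), x(n-1), ..., x(n-N+1); d is the desired sample;
   beta is the step size.  Returns the error e(n). */
double lms_step(double w[], const double x[], double d, double beta)
{
    double y = 0.0;
    for (int k = 0; k < N; k++)
        y += w[k] * x[k];          /* (7.1): y(n) = sum of w_k(n)x(n-k)   */
    double e = d - y;              /* (7.2): e(n) = d(n) - y(n)           */
    for (int k = 0; k < N; k++)
        w[k] += beta * e * x[k];   /* (7.3): w_k(n+1) = w_k(n) + beta*e*x */
    return e;
}
```

Called once per sample, after a new x(n) has been shifted into the delay line, this carries out the same steps as the program ADAPTC.C listed later in the chapter.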
For each specific time n, each coefficient, or weight, w_k(n) is updated or replaced by a new coefficient, based on (7.3), unless the error signal e(n) is zero. After the filter's output y(n), the error signal e(n), and each of the coefficients w_k(n) are updated for a specific time n, a new sample is acquired (from an ADC) and the adaptation process is repeated for a different time. Note from (7.3) that the weights are not updated when e(n) becomes zero.

The linear adaptive combiner is one of the most useful adaptive filter structures and is an adjustable FIR filter. Whereas the coefficients of the frequency-selective FIR filter discussed in Chapter 4 are fixed, the coefficients, or weights, of the adaptive FIR filter can be adjusted based on a changing environment such as an input signal. Adaptive IIR filters (not discussed here) can also be used. A major problem with an adaptive IIR filter is that its poles may be updated during the adaptation process to values outside the unit circle, making the filter unstable.

The programming examples developed later will make use of equations (7.1)-(7.3). In (7.3), we will simply use the variable β in lieu of 2β.

7.2 ADAPTIVE STRUCTURES

A number of adaptive structures have been used for different applications in adaptive filtering.

1. For noise cancellation. Figure 7.2 shows the adaptive structure in Figure 7.1 modified for a noise cancellation application. The desired signal d is corrupted by uncorrelated additive noise n. The input to the adaptive filter is a noise n′ that is correlated with the noise n. The noise n′ could come from the same source as n but modified by the environment. The adaptive filter's output y is adapted to the noise n. When this happens, the error signal approaches the desired signal d. The overall output is this error signal and not the adaptive filter's output y. This structure will be further illustrated with programming examples using both C and TMS320C3x code.
FIGURE 7.2 Adaptive filter structure for noise cancellation.

2. For system identification. Figure 7.3 shows an adaptive filter structure that can be used for system identification or modeling. The same input is applied to an unknown system in parallel with an adaptive filter. The error signal e is the difference between the response of the unknown system d and the response of the adaptive filter y. This error signal is fed back to the adaptive filter and is used to update the adaptive filter's coefficients, until the overall output y = d. When this happens, the adaptation process is finished, and e approaches zero. In this scheme, the adaptive filter models the unknown system.

3. Additional structures have been implemented, such as:

a) Notch with two weights, which can be used to notch or cancel/reduce a sinusoidal noise signal. This structure has only two weights or coefficients, and is illustrated later with a programming example.

b) Adaptive predictor, which can provide an estimate of an input. This structure is illustrated later with three programming examples.

c) Adaptive channel equalization, used in a modem to reduce channel distortion resulting from the high speed of data transmission over telephone channels.

The LMS is well suited for a number of applications, including adaptive echo and noise cancellation, equalization, and prediction.

Other variants of the LMS algorithm have been employed, such as the sign-error LMS, the sign-data LMS, and the sign-sign LMS.

1. For the sign-error LMS algorithm, (7.3) becomes

   w_k(n + 1) = w_k(n) + β sgn[e(n)]x(n - k)                     (7.4)

where sgn is the signum function,

   sgn(u) =  1 if u ≥ 0
            -1 if u < 0                                          (7.5)

FIGURE 7.3 Adaptive filter structure for system identification.

2. For the sign-data LMS algorithm, (7.3) becomes

   w_k(n + 1) = w_k(n) + βe(n) sgn[x(n - k)]                     (7.6)

3. For the sign-sign LMS algorithm, (7.3) becomes

   w_k(n + 1) = w_k(n) + β sgn[e(n)] sgn[x(n - k)]               (7.7)

which reduces to

   w_k(n + 1) = w_k(n) + β   if sgn[e(n)] = sgn[x(n - k)]
                w_k(n) - β   otherwise                           (7.8)

which is more concise from a mathematical viewpoint, because no multiplication operation is required for this algorithm.

The implementation of these variants does not exploit the pipeline features of the TMS320C3x processor. The execution speed on the TMS320C3x for these variants can be expected to be slower than for the basic LMS algorithm, due to the additional decision-type instructions required for testing conditions involving the sign of the error signal or the data sample.

The LMS algorithm has been quite useful in adaptive equalizers, telephone cancellers, and so forth. Other methods, such as the recursive least squares (RLS) algorithm [4], can offer faster convergence than the basic LMS but at the expense of more computations. The RLS is based on starting with the optimal solution and then using each input sample to update the impulse response in order to maintain that optimality. The right step size and direction are defined over each time sample.

Adaptive algorithms for restoring signal properties can also be found in [4]. Such algorithms become useful when an appropriate reference signal is not available. The filter is adapted in such a way as to restore some property of the signal lost before reaching the adaptive filter. Instead of the desired waveform as a template, as in the LMS or RLS algorithms, this property is used for the adaptation of the filter. When the desired signal is available, the conventional approach such as the LMS can be used; otherwise a priori knowledge about the signal is used.

7.3 PROGRAMMING EXAMPLES USING C AND TMS320C3x CODE

The following programming examples illustrate adaptive filtering using the least mean square (LMS) algorithm.
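Before turning to the examples, note that the sign-sign update (7.8) above can indeed be coded without any multiplications. In this C sketch the filter length NW and step size BETA are arbitrary illustrative values, not values from the text:

```c
#define NW   4     /* illustrative filter length */
#define BETA 0.1   /* illustrative step size     */

/* Sign-sign LMS update, equation (7.8): each weight moves by +BETA or
   -BETA according to whether e(n) and x(n-k) agree in sign, so the
   update itself needs no multiplication. */
void sign_sign_update(double w[], const double x[], double e)
{
    for (int k = 0; k < NW; k++) {
        if ((e >= 0.0) == (x[k] >= 0.0))   /* sgn[e(n)] = sgn[x(n-k)]? */
            w[k] += BETA;                  /* same sign: step up       */
        else
            w[k] -= BETA;                  /* opposite sign: step down */
    }
}
```

On a general-purpose processor the branch replaces the multiply; as noted above, on the pipelined TMS320C3x this trade is not necessarily a win.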
It is instructive to read the first example even if you have only a limited knowledge of C, since it illustrates the steps in the adaptive process.

Example 7.1 Adaptive Filter Using C Code Compiled With Borland C/C++

This example applies the LMS algorithm using a C-coded program compiled with Borland C/C++. It illustrates the following steps for the adaptation process using the adaptive structure in Figure 7.1:

1. Obtain a new sample for each of the desired signal d and the reference input to the adaptive filter x, which represents a noise signal.

2. Calculate the adaptive FIR filter's output y, applying (7.1) as in Chapter 4 with an FIR filter. In the structure of Figure 7.1, the overall output is the same as the adaptive filter's output y.

3. Calculate the error signal applying (7.2).

4. Update/replace each coefficient or weight applying (7.3).

5. Update the input data samples for the next time n, with the data move scheme used in Chapter 4 with the program FIRDMOVE.C. Such a scheme moves the data instead of a pointer.

6. Repeat the entire adaptive process for the next output sample point.

Figure 7.4 shows a listing of the program ADAPTC.C, which implements the LMS algorithm for the adaptive filter structure in Figure 7.1. A desired signal is chosen as 2cos(2πnf/Fs), and a reference noise input to the adaptive filter is chosen as sin(2πnf/Fs), where f is 1 kHz and Fs = 8 kHz. The adaptation rate, filter order, and number of samples are 0.01, 22, and 40, respectively. The overall output is the adaptive filter's output y, which adapts or converges to the desired cosine signal d.

The source file was compiled with Borland's C/C++ compiler. Execute this program. Figure 7.5 shows a plot of the adaptive filter's output (y_out) converging to the desired cosine signal. Change the adaptation or convergence rate β to 0.02 and verify a faster rate of adaptation.
Interactive Adaptation

A version of the program ADAPTC.C in Figure 7.4, with graphics and interactive capabilities to plot the adaptation process for different values of β, is on the accompanying disk as ADAPTIVE.C, to be compiled with Turbo or Borland C/C++. It uses a desired cosine signal with an amplitude of 1 and a filter order of 31. Execute this program, enter a β value of 0.01, and verify the results in Figure 7.6. Note that the output converges to the desired cosine signal. Press F2 to execute this program again with a different beta value.

//ADAPTC.C - ADAPTATION USING LMS WITHOUT THE TI COMPILER
#include <stdio.h>
#include <math.h>
#define beta     0.01                   //convergence rate
#define N        21                     //order of filter
#define NS       40                     //number of samples
#define Fs       8000                   //sampling frequency
#define pi       3.1415926
#define DESIRED  2*cos(2*pi*T*1000/Fs)  //desired signal
#define NOISE    sin(2*pi*T*1000/Fs)    //noise signal

main()
{
  long I, T;
  double D, Y, E;
  double W[N+1] = {0.0};
  double X[N+1] = {0.0};
  FILE *desired, *Y_out, *error;
  desired = fopen("DESIRED", "w++");    //file for desired samples
  Y_out   = fopen("Y_OUT", "w++");      //file for output samples
  error   = fopen("ERROR", "w++");      //file for error samples
  for (T = 0; T < NS; T++)              //start adaptive algorithm
  {
    X[0] = NOISE;                       //new noise sample
    D = DESIRED;                        //desired signal
    Y = 0;                              //filter's output set to zero
    for (I = 0; I <= N; I++)
      Y += (W[I] * X[I]);               //calculate filter output
    E = D - Y;                          //calculate error signal
    for (I = N; I >= 0; I--)
    {
      W[I] = W[I] + (beta*E*X[I]);      //update filter coefficients
      if (I != 0)
        X[I] = X[I-1];                  //update data sample
    }
    fprintf(desired, "\n%10g %10f", (float) T/Fs, D);
    fprintf(Y_out, "\n%10g %10f", (float) T/Fs, Y);
    fprintf(error, "\n%10g %10f", (float) T/Fs, E);
  }
  fclose(desired);
  fclose(Y_out);
  fclose(error);
}

FIGURE 7.4 Adaptive filter program compiled with Borland C/C++ (ADAPTC.C).

FIGURE 7.5 Plot of adaptive filter's output converging to desired cosine signal.
FIGURE 7.6 Plot of adaptive filter's output converging to desired cosine signal using interactive capability with program ADAPTIVE.C.

Example 7.2 Adaptive Filter for Noise Cancellation Using C Code

This example illustrates the adaptive filter structure shown in Figure 7.2 for the cancellation of an additive noise. Figure 7.7 shows a listing of the program ADAPTDMV.C, based on the previous program in Example 7.1. Consider the following from the program:

1. The desired signal specified by DESIRED is a sine function with a frequency of 1 kHz. The desired signal is corrupted/added with a noise signal specified by ADDNOISE. This additive noise is a sine with a frequency of 312 Hz. The addition of these two signals is achieved in the program with DPLUSN for each sample period.

2. The reference input to the adaptive FIR filter is a cosine function with a frequency of 312 Hz specified by REFNOISE. The adaptation step or rate of convergence β is set to 1.5 × 10^-8, the number of coefficients to 30, and the number of output samples to 128.

3. The output of the adaptive FIR filter y is calculated using the convolution equation (7.1), and converges to the additive noise signal with a frequency of 312 Hz. When this happens, the "error" signal e, calculated from (7.2), approaches the desired signal d with a frequency of 1 kHz. This error signal is the overall output of the adaptive filter structure: the difference between the primary input, consisting of the desired signal with additive noise, and the adaptive filter's output y. In the previous example, the overall output was the adaptive filter's output. In that case, the filter's output converged to the desired signal. For the structure in this example, the overall output is the error signal and not the adaptive filter's output.

This program was compiled with the TMS320 assembly language floating-point tools, and the executable COFF file is on the accompanying disk. Download and run it on the DSK.
The output can be saved into the file fname with the debugger command

   save fname,0x809d00,128,L

which saves the 128 output samples stored in memory starting at the address 0x809d00 into the file fname, in ASCII Long format. Note that the desired signal with additive noise samples in DPLUSN are stored in memory starting at the address 0x809d80, and can also be saved into a different file with the debugger save command.

Figure 7.8 shows a plot of the output converging to the 1-kHz desired sine signal, with a convergence rate of β = 1.5 × 10^-8. The upper plot in Figure 7.9 shows the FFT of the 1-kHz desired sine signal and the 312-Hz additive noise signal. The lower plot in Figure 7.9 shows the overall output, which illustrates the reduction of the 312-Hz noise signal.

/*ADAPTDMV.C - ADAPTIVE FILTER FOR NOISE CANCELLATION */
#include "math.h"
#define beta     1.5E-8                    /*rate of convergence      */
#define N        30                        /*# of coefficients        */
#define NS       128                       /*# of output sample points*/
#define Fs       8000                      /*sampling frequency       */
#define pi       3.1415926
#define DESIRED  1000*sin(2*pi*T*1000/Fs)  /*desired signal           */
#define ADDNOISE 1000*sin(2*pi*T*312/Fs)   /*additive noise           */
#define REFNOISE 1000*cos(2*pi*T*312/Fs)   /*reference noise          */

main()
{
  int I, T;
  float Y, E, DPLUSN;
  float W[N+1];
  float Delay[N+1];
  volatile int *IO_OUTPUT = (volatile int*) 0x809d00;
  volatile int *IO_INPUT  = (volatile int*) 0x809d80;
  for (T = 0; T < N; T++)
  {
    W[T] = 0.0;
    Delay[T] = 0.0;
  }
  for (T = 0; T < NS; T++)                 /*# of output samples      */
  {
    Delay[0] = REFNOISE;                   /*adaptive filter's input  */
    DPLUSN = DESIRED + ADDNOISE;           /*desired + noise, d+n     */
    Y = 0;
    for (I = 0; I < N; I++)
      Y += (W[I] * Delay[I]);              /*adaptive filter output   */
    E = DPLUSN - Y;                        /*error signal             */
    for (I = N; I > 0; I--)
    {
      W[I] = W[I] + (beta*E*Delay[I]);     /*update weights           */
      if (I != 0)
        Delay[I] = Delay[I-1];             /*update samples           */
    }
    *IO_OUTPUT++ = E;                      /*overall output E         */
    *IO_INPUT++  = DPLUSN;                 /*store d + n              */
  }
}

FIGURE 7.7 Adaptive filter program for sinusoidal noise cancellation using data move (ADAPTDMV.C).

[...] ≈ 16 kHz, as can be verified using similar calculations made in the exercises in Chapter 3 to calculate a desired sampling frequency. However, [...]

;ADAPTER.ASM - ADAPTIVE STRUCTURE FOR NOISE CANCELLATION, OUTPUT AT e(n)
         .start   ".text",0x809900       ;where text begins
         .start   ".data",0x809C00       ;where data begins
         .include "AICCOM31.ASM"         ;AIC communications routines
[...]

[...] with 0x73 to enable the AIC auxiliary input and bypass the input filter on the AIC, as described in the AIC secondary communication protocol in Chapter 3. With an input sinusoidal signal from pin 3 of the connector JP3, the output (from the RCA jack) is the delayed input. Figure 7.20 shows the program listing ADAPTER.ASM for this example. The AIC configuration data set in AICSEC specifies a sampling rate [...]

[...] application, available on the AIC on board the DSK. While the primary input (PRI IN) is through an RCA jack, a second input to the AIC is available on the DSK board from pin 3 of the 32-pin connector JP3. The secondary or auxiliary input (AUX IN) should be first tested with the loop program (LOOP.ASM) discussed in Chapter 3. Four values are set in AICSEC to configure the AIC in the loop program. Replace [...]
[...] into data section
AICSEC      .word  162Ch,1h,244Ah,73h    ;For AIC, Fs = 16K/2 = 8 kHz
NOISE_ADDR  .word  NOISE+LENGTH-1        ;last address of noise samples
WN_ADDR     .word  COEFF                 ;address of coefficients w(N-1)
ERF_ADDR    .word  ERR_FUNC              ;address of error function
ERR_FUNC    .float 0                     ;initialize error function
BETA        .float 2.5E-12               ;rate of adaptation constant
LENGTH      .set   50                    ;set filter length
COEFF:                                   ;buffer for coefficients
            .loop  [...]

[...] should be exercised to set the input to the adaptive filter to a zero DC level. A shifted signal could also be produced with a phase-shifter circuit.

REFERENCES

1. B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1985.
2. B. Widrow and M. E. Hoff, Jr., "Adaptive Switching Circuits," in IRE WESCON, pp. 96-104, 1960.
3. B. Widrow, J. R. Glover, J. M. McCool, J. Kaunitz, C. S. Williams, [...]

[...] multiply instruction yields R0, which contains the value e(n)x1(n) for a specific time n. The second multiply instruction yields R0, which contains the value e(n)x2(n). This multiply instruction is in parallel with an ADDF3 instruction in order to update the first weight w1(n).

10. The parallel addition instruction

   || ADDF3 R3,R0,R2

adds R3, which contains the first weight w1(n), and R0, which contains e(n)x1(n) [...]

[...] The 50 coefficients or weights of the FIR filter are initialized to zero. The circular buffer XN_BUFF, aligned on a 64-word boundary, is created for the cosine samples. The cosine samples are placed in the circular memory buffer in a similar fashion as was done in Chapter 4 in conjunction with FIR filters. For example, note that the first cosine sample is stored into the last or bottom memory location [...]

[...] are called, eliminating the call and return from subroutine instructions.

3. Before the adaptation routine is called to update the weights or coefficients, the repeat counter register RC is initialized with LENGTH - 2, or RC = 48. As a result, the repeat block of code is executed 49 times (repeated 48 times), including the STF R2,*AR0++ instruction in parallel.

Figure 7.19 shows the overall output y(n) converging [...] graph) converging to the desired 1-kHz input signal (upper graph), yielding the same results as in Figure 7.12.

Example 7.5 Adaptive Notch Filter With Two Weights, Using TMS320C3x Code

The adaptive notch structure shown in Figure 7.15 illustrates the cancellation of a sinusoidal interference, using only two weights or coefficients. This structure is discussed in References 1 and 3. The primary input consists of [...]

[...] samples
COEFF    .float   0, 0                  ;two weights or coefficients
         .brstart "SC_BUFF",8           ;align samples buffer
SC       .sect    "SC_BUFF"             ;circular buffer for sine/cosine
         .loop    LENGTH                ;actual length of 2
         .float   0                     ;init to zero
         .endloop                       ;end of loop
         .entry   BEGIN                 ;start of code
         .text                          ;assemble into text section
BEGIN    LDP      WN_ADDR               ;init to data page 128
         LDI      @DPN_ADDR,AR3         ;sin1000 + sin312 addr -> AR3
         LDI      @COS_ADDR,AR2         ;address of cos312 [...]