Tracking and Kalman Filtering Made Easy. Eli Brookner. Copyright © 1998 John Wiley & Sons, Inc. ISBNs: 0-471-18407-1 (Hardback); 0-471-22419-7 (Electronic)

14 MORE ON VOLTAGE-PROCESSING TECHNIQUES

14.1 COMPARISON OF DIFFERENT VOLTAGE LEAST-SQUARES ALGORITHM TECHNIQUES

Table 14.1-1 compares the computer requirements of the different voltage techniques discussed in the previous chapter. The comparison includes the computer requirements needed when using the normal equations given by (4.1-30) with the optimum least-squares weight W given by (4.1-32). Table 14.1-1 indicates that the normal equations require the smallest number of computations (at least when s > m, the case of interest), followed by the Householder orthonormalization, then the modified Gram–Schmidt, and finally the Givens orthogonalization. However, the Givens computation count does not assume use of the efficient CORDIC algorithm. The counts assume that all elements of the augmented matrix T are real. When complex data are being dealt with, the counts are somewhat higher: a complex multiply requires four real multiplies and two real adds, and a complex add requires two real adds. The Householder algorithm has a slight advantage over the Givens and modified Gram–Schmidt algorithms with respect to computational accuracy. Table 14.1-2 gives a summary of the comparison of the voltage least-squares estimation algorithms.

TABLE 14.1-1 Operation Counts for Various Least-Squares Computational Methods

  Method                             Asymptotic number of operations^a
  Normal equations (power method)    $sm^2/2 + m^3/6$
  Householder orthogonalization      $sm^2 - m^3/3$
  Modified Gram–Schmidt              $sm^2$
  Givens orthogonalization           $2sm^2 - 2m^3/3$

Note: The matrix T is assumed to be an s × m matrix.
Source: From references 80 and 81.
^a An operation is a multiply, or divide, plus an add.

TABLE 14.1-2 Voltage Least-Squares Estimate Algorithms (Orthonormal Transforms): Trade-offs

Householder: Lowest cost on a serial (nonparallel, single-central-processor) machine. Best numerical behavior (by a small margin).

Givens rotations: Introduces one zero at a time and as a result is more costly in the number of computations required. However, it allows parallel implementations: linear and triangular systolic arrays. Rotations can be implemented efficiently in hardware using CORDIC number representations. The square-root-free version requires computations equal to those of Householder, but it is no longer orthogonal and requires a large dynamic range (generally all right if floating point is used).

Modified Gram–Schmidt: Like Givens, MGS is more costly than Householder. Like Givens, it is amenable to systolic implementation. Provides joint order/time recursive updates.

General comment: Where accuracy is an issue, the orthonormal transforms (voltage methods) are the algorithms of choice over the normal equations. With the development of microcomputers having high precision, such as 32- and 64-bit floating point, accuracy is less of an issue. Where high throughput is needed, the systolic architecture offered by the Givens approach can provide the high throughput of parallel processing.

Source: After Steinhardt [138].
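To make the Givens entry of Table 14.1-2 concrete, the following Python/NumPy sketch (an illustration added here, not from the original text) upper-triangularizes a matrix by introducing one zero at a time with 2 × 2 rotations; the CORDIC and systolic refinements mentioned in the table are omitted.

```python
import numpy as np

def givens_triangularize(A):
    """Upper-triangularize A with Givens rotations, introducing
    one zero at a time (no CORDIC, no systolic parallelism)."""
    A = np.array(A, dtype=float)
    s, m = A.shape
    for j in range(min(m, s - 1)):
        for i in range(s - 1, j, -1):         # climb column j from the bottom
            a, b = A[i - 1, j], A[i, j]
            if b == 0.0:
                continue                      # entry already zero, no rotation needed
            r = np.hypot(a, b)                # sqrt(a^2 + b^2), overflow-safe
            c, sn = a / r, b / r
            G = np.array([[c, sn], [-sn, c]]) # rotation acting on rows i-1 and i
            A[[i - 1, i]] = G @ A[[i - 1, i]]
    return A

R = givens_triangularize(np.random.randn(6, 3))
print(np.round(R, 10))                        # zeros below the main diagonal
```

Each rotation zeroes exactly one subdiagonal element, which is why the Givens count in Table 14.1-1 is the largest, and also why the method parallelizes so naturally: rotations touching disjoint row pairs can proceed simultaneously.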
Before leaving this section, another useful least-squares example is given, showing a comparison of the poor results obtained using the normal equations and the excellent results obtained using the modified Gram–Schmidt algorithm. For this example (obtained from reference 82)

$$ T = \begin{bmatrix} 1 & 1 & 1 \\ \varepsilon & 0 & 0 \\ 0 & \varepsilon & 0 \\ 0 & 0 & \varepsilon \end{bmatrix} \tag{14.1-1} $$

and

$$ Y^{(n)} = Y^{(4)} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \tag{14.1-2} $$

for (4.1-11a). We now solve for the least-squares estimate using the normal equations given by (4.1-30) and (4.1-32). We shall first obtain the exact solution. Toward this end, we calculate

$$ T^T T = \begin{bmatrix} 1+\varepsilon^2 & 1 & 1 \\ 1 & 1+\varepsilon^2 & 1 \\ 1 & 1 & 1+\varepsilon^2 \end{bmatrix} \tag{14.1-3} $$

and

$$ T^T Y^{(4)} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \tag{14.1-4} $$

Then from (4.1-30) and (4.1-32) it follows that the exact solution (no round-off errors) is

$$ X^*_{n,n} = \frac{1}{3+\varepsilon^2} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \tag{14.1-5} $$

Now we obtain the normal-equation solution. Assume eight-digit floating-point arithmetic. If $\varepsilon = 10^{-4}$, then $1 + \varepsilon^2 = 1.00000001$, which is rounded off to 1.0000000. The matrix $T^T T$ is then taken to contain all 1's for its entries and becomes singular and noninvertible, so that no least-squares estimate is obtainable using the normal equations (4.1-30) and (4.1-32).

Next let us apply the modified Gram–Schmidt algorithm to this same example. The same eight-digit floating-point arithmetic is assumed. It follows that Q of (13.1-20) becomes

$$ Q = \begin{bmatrix} 1 & 0 & 0 & 0 \\ \varepsilon & -\varepsilon & -\tfrac{1}{2}\varepsilon & -\tfrac{1}{3}\varepsilon \\ 0 & \varepsilon & -\tfrac{1}{2}\varepsilon & -\tfrac{1}{3}\varepsilon \\ 0 & 0 & \varepsilon & -\tfrac{1}{3}\varepsilon \end{bmatrix} \tag{14.1-6} $$

and U' and $Y_1''$ of (13.1-39) become

$$ U' = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & \tfrac{1}{2} \\ 0 & 0 & 1 \end{bmatrix} \tag{14.1-7} $$

and

$$ Y_1'' = \begin{bmatrix} 1 \\ \tfrac{1}{2} \\ \tfrac{1}{3} \end{bmatrix} \tag{14.1-8} $$

Finally, substituting the above into (13.1-42) and solving by the back-substitution method yields

$$ X^*_{n,n} = \frac{1}{3} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \tag{14.1-9} $$

which is close to the exact solution of (14.1-5).

Those who desire to pursue further the voltage techniques described are urged to read references 76, 79, 81 to 83, 89, 91, 101 to 103, 115, 118 to 122, and 139. References 79, 115, 119, 121, and 122 apply the voltage techniques to the Kalman filter; see Section 14.5.
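The failure mode in this example is easy to reproduce numerically. The sketch below (an added illustration, not the book's computation) mimics the eight-digit arithmetic with float32 and uses NumPy's Householder-based QR as a stand-in for the modified Gram–Schmidt orthogonalization; the point being illustrated, squaring the data destroys the small terms while triangularizing T itself preserves them, is the same.

```python
import numpy as np

eps = 1e-4
T = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]], dtype=np.float32)   # ~8 significant digits
Y = np.array([1.0, 0.0, 0.0, 0.0], dtype=np.float32)

# Normal equations: forming T^T T squares the small terms, and
# 1 + eps**2 = 1.00000001 rounds to 1, so T^T T becomes the singular
# all-ones matrix and no estimate can be obtained this way.
TtT = T.T @ T
print(TtT)                                # every entry prints as 1.0

# Orthonormal-transform route: triangularize T itself, then back-substitute.
Q, R = np.linalg.qr(T)
x = np.linalg.solve(R, Q.T @ Y)
print(x)                                  # ~[1/3, 1/3, 1/3], the (14.1-9) answer
```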
14.2 QR DECOMPOSITION

In the literature, what is called the QR decomposition method is described for solving the least-squares estimation problem [81, 89, 102]. This involves the decomposition of the augmented matrix T into a product of matrices designated as QR, as done in (13.1-27) when carrying out the Gram–Schmidt algorithm in Chapter 13. Thus the Gram–Schmidt is a QR decomposition method. It follows that the same is true for the Householder and Givens methods. These give rise to the upper triangular matrix R when the augmented T is multiplied by an orthogonal transformation $Q^T$, that is, $Q^T T = R$; see, for example, (11.1-30) or (12.2-6), where $Q^T = F$ for these equations. Thus, from (10.2-1), $Q Q^T T = T = QR$, and the augmented matrix takes the form QR, as desired.

An additional physical interpretation of the matrices Q and R can be given that is worthwhile for obtaining further insight into this decomposition of the augmented matrix T. When Q is orthonormal, its magnitude is in effect unity, and it can be thought of as containing the phase information of the augmented matrix T. The matrix R then contains the amplitude information of T. The QR factorization can then be thought of as the polar decomposition of the augmented matrix T into its amplitude component R and phase component Q. That this is true can be rigorously proven by the use of Hilbert space [104].

14.3 SEQUENTIAL SQUARE-ROOT (RECURSIVE) PROCESSING

Consider the example augmented matrix T given by (11.1-29). In this matrix, time is represented by the first of the two subscripts of the elements $t_{ij}$, and by the only subscript i of $y_i$. Thus the first row represents the observation obtained first, at time i = 1, and the bottom row the measurement made last. For convenience we shall reverse the order of the rows so that the most recent measurement is contained in the top row. Then, without any loss of information, (11.1-29) becomes

$$ T = \begin{bmatrix} t_{31} & t_{32} & y_3 \\ t_{21} & t_{22} & y_2 \\ t_{11} & t_{12} & y_1 \end{bmatrix} \tag{14.3-1} $$

To solve for the least-squares estimate, we multiply the above augmented matrix T by an orthonormal transformation matrix to obtain the upper triangular matrix given by

$$ T'' = \left[\begin{array}{cc|c} (t_{31}) & (t_{32}) & (y_3) \\ 0 & (t_{22}) & (y_2) \\ \hline 0 & 0 & (y_1) \end{array}\right] \tag{14.3-2} $$

The entries in this matrix, indicated by parentheses, differ from those of (11.1-30) because of the reordering done. Using the back-substitution method, we can solve for the least-squares estimate by applying (14.3-2) to (10.2-16).

Assume now that we have obtained a new measurement at time i = 4. Then (14.3-1) becomes

$$ T' = \begin{bmatrix} t_{41} & t_{42} & y_4 \\ t_{31} & t_{32} & y_3 \\ t_{21} & t_{22} & y_2 \\ t_{11} & t_{12} & y_1 \end{bmatrix} \tag{14.3-3} $$

We now want to solve for the least-squares estimate based on all four measurements obtained at i = 1, 2, 3, 4. A straightforward method would be to repeat the process carried out for (14.3-1), that is, triangularize (14.3-3) by applying an orthonormal transformation and then use the back-substitution procedure. This method has the disadvantage, however, of making no use of the computations performed previously at time i = 3. To make use of the previous computations, one adds the new measurement obtained at time i = 4 to (14.3-2) instead of (14.3-1) to obtain the augmented matrix [79]

$$ T'' = \begin{bmatrix} t_{41} & t_{42} & y_4 \\ (t_{31}) & (t_{32}) & (y_3) \\ 0 & (t_{22}) & (y_2) \\ 0 & 0 & (y_1) \end{bmatrix} \tag{14.3-4} $$

One can now apply an orthonormal transformation to (14.3-4) to upper-triangularize it and use back substitution to obtain the least-squares estimate. This estimate will be based on all the measurements at i = 1, 2, 3, 4. The computations required when starting with (14.3-4) are fewer than when starting with (14.3-3) because of the larger number of zero entries in the former matrix. This is readily apparent when the Givens procedure is used. Furthermore, the systolic array implementation of the Givens algorithm, as represented by Figures 11.3-2 and 11.3-4, is well suited to implementing the sequential algorithm described above.

The sequential procedure outlined above would be used only if least-squares estimate solutions are needed at the intermediate times i = 1, 2, 3, 4, ..., j, .... This is typically the case for the radar filtering problem. If we do not need these intermediate estimates, it is more efficient to obtain only the estimate at the time i = s at which the last measurement is made [79]. This would be done by triangularizing only the final augmented matrix T containing all the measurements from i = 1, ..., s.

Sometimes one desires a discounted least-squares estimate; see Section 1.2.6 and the chapter on discounted least-squares (fading-memory) filtering. To obtain such an estimate using the above sequential procedure, one multiplies all the elements of the upper-triangularized matrix by $\beta$, where $0 < \beta < 1$, before augmenting it to include the new measurement as done in (14.3-4). Thus (14.3-4) becomes instead

$$ T' = \begin{bmatrix} t_{41} & t_{42} & y_4 \\ \beta(t_{31}) & \beta(t_{32}) & \beta(y_3) \\ 0 & \beta(t_{22}) & \beta(y_2) \\ 0 & 0 & \beta(y_1) \end{bmatrix} \tag{14.3-5} $$

The multiplication by $\beta$ is done at each update. Such a discounted (or, equivalently, weighted) least-squares estimate is obtained using the systolic arrays of Figures 11.3-2 and 11.3-4 by multiplying the elements of the systolic array by $\beta$ at each update; see references 83 and 89.

The sequential method described above is sometimes referred to in the literature as the sequential square-root information filter (SRIF) [79]. The reader is referred to reference 79 for the application of the sequential SRIF to the Kalman filter. The reader is also referred to references 78, 81 to 83, 89, 102, and 119.
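A minimal sketch of the sequential update of (14.3-4) and (14.3-5) follows, assuming the previously triangularized augmented matrix [U | y'] is carried from step to step; np.linalg.qr stands in for the Givens rotations that a systolic array would apply, and beta is the discount factor of (14.3-5). The example data (a constant-velocity track with rows [1, t, y]) are invented for illustration.

```python
import numpy as np

def srif_update(R_aug, row, beta=1.0):
    """One sequential square-root (SRIF-style) step: stack the new
    measurement row [t_i1 ... t_im  y_i] on top of the discounted,
    previously triangularized augmented matrix, then re-triangularize.
    beta < 1 gives the fading-memory estimate of (14.3-5)."""
    stacked = np.vstack([np.atleast_2d(row), beta * R_aug])
    _, R_new = np.linalg.qr(stacked)       # Givens rotations would do the same job
    return R_new

def back_substitute(R_aug):
    """Least-squares estimate from the triangularized [U | y'], as in (10.2-16)."""
    m = R_aug.shape[1] - 1
    return np.linalg.solve(R_aug[:m, :m], R_aug[:m, m])

rows = [np.array([1.0, i, 10.0 + 2.0 * i]) for i in range(6)]  # [1, t, y]
R_aug = np.linalg.qr(np.vstack(rows[:2]))[1]    # initialize from the first two rows
for row in rows[2:]:
    R_aug = srif_update(R_aug, row)
    print(back_substitute(R_aug))               # -> [10, 2] at each update
```

Because the retained matrix is already triangular, each update touches only one dense row, which is exactly the saving the zero entries of (14.3-4) provide.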
14.4 EQUIVALENCE BETWEEN VOLTAGE-PROCESSING METHODS AND DISCRETE ORTHOGONAL LEGENDRE POLYNOMIAL APPROACH

Here we will show that the voltage-processing least-squares approach of Section 4.3 and Chapters 10 to 13 becomes the DOLP approach of Section 5.3 when a polynomial of degree m is being fitted to the data, that is, when the target motion as a function of time is assumed to be described by a polynomial of degree m, and when the times between measurements are equal.

Consider again the case where only range $x(r) = x_r$ is measured. Assume [see (5.2-3)] that the range trajectory can be approximated by the polynomial of degree m

$$ x(r) = x_r = p(t) = \sum_{j=0}^{m} a_j (rT)^j \tag{14.4-1} $$

where for simplicity of notation we have dropped the bar over the $a_j$. Alternately, from (5.2-4),

$$ x(r) = x_r = p(t) = \sum_{j=0}^{m} z_j r^j \tag{14.4-2} $$

where

$$ z_j = a_j T^j \tag{14.4-2a} $$

is the scaled jth state derivative; see (5.2-8). The values of $x_r$, r = 0, ..., s, can be represented by the column matrix

$$ X = \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_r \\ \vdots \\ x_s \end{bmatrix} \tag{14.4-3} $$

Note that the column matrix X defined above is different from the column matrix used up until now. The X defined above is physically the matrix of the true ranges at times r = 0, 1, 2, ..., s. It is the range measurements $y_r$ made without the measurement noise error, for example, $N_n$ of (4.1-1) or $N^{(n)}$ of (4.1-11a). The X defined up until now was the column matrix of the process state vector. When this state vector X is multiplied by the observation matrix M or the transition–observation matrix T, it produces the range matrix X defined by (14.4-3). This is the only section where we will use X as defined by (14.4-3); which X we are talking about will be clear from the text. Which of these two X's is being used will also be clear because the state vector $X_n$ always has the lowercase subscript s or n, while the range matrix will usually have no subscript or will have the capital subscript Z or B, as we shall see shortly.

Applying (14.4-2) to (14.4-3) yields

$$ X = \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_r \\ \vdots \\ x_s \end{bmatrix} = \begin{bmatrix} z_0 + 0 + \cdots + 0 \\ z_0 + z_1 + \cdots + z_m \\ z_0 + 2 z_1 + \cdots + 2^m z_m \\ \vdots \\ z_0 + r z_1 + \cdots + r^m z_m \\ \vdots \\ z_0 + s z_1 + \cdots + s^m z_m \end{bmatrix} \tag{14.4-4} $$

or

$$ X = T Z_s \tag{14.4-5} $$

where

$$ Z_s = \begin{bmatrix} z_0 \\ z_1 \\ z_2 \\ \vdots \\ z_m \end{bmatrix} \tag{14.4-5a} $$

and

$$ T = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 2^2 & \cdots & 2^m \\ \vdots & & & & \vdots \\ 1 & r & r^2 & \cdots & r^m \\ \vdots & & & & \vdots \\ 1 & s & s^2 & \cdots & s^m \end{bmatrix} \tag{14.4-5b} $$

Physically, $Z_s$ is the scaled state matrix $Z_n$ of (5.4-12) for time index s instead of n; see also (5.2-8) and (14.4-2a). Here, T is the transition–observation matrix for the scaled state matrix $Z_s$.

Physically, (14.4-5) is the same as (4.1-11) when no measurement noise errors are present, that is, when $N^{(n)} = 0$. As indicated, $Z_s$ is a scaled version of the state matrix $X_n$ of (4.1-11) or (4.1-2), and the matrix T of (14.4-5) is the transition–observation matrix of (4.1-11) or (4.1-11b) for the scaled state matrix $Z_s$, which is different from the transition–observation matrix T for $X_s$. The range trajectory x(r) is modeled by a polynomial of degree m; hence $X_s$, and in turn its scaled form $Z_s$, have dimension m′, where m′ = m + 1; see (4.1-2) and the discussion immediately following it.

Now, from (5.3-1) we know that x(r) can be expressed in terms of the DOLP $\phi_j(r)$, j = 0, ..., m, as

$$ x(r) = x_r = \sum_{j=0}^{m} \beta_j \phi_j(r) \tag{14.4-6} $$

where for simplicity we have dropped the subscript n on $\beta_j$. In turn, using (14.4-6), the column matrix of $x_r$, r = 0, ..., s, can be written as

$$ X = \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_s \end{bmatrix} = \begin{bmatrix} \phi_0(0) & \phi_1(0) & \cdots & \phi_m(0) \\ \phi_0(1) & \phi_1(1) & \cdots & \phi_m(1) \\ \phi_0(2) & \phi_1(2) & \cdots & \phi_m(2) \\ \vdots & & & \vdots \\ \phi_0(s) & \phi_1(s) & \cdots & \phi_m(s) \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_m \end{bmatrix} \tag{14.4-7} $$

or

$$ X = PB \tag{14.4-8} $$

where

$$ P = \begin{bmatrix} \phi_0(0) & \phi_1(0) & \cdots & \phi_m(0) \\ \phi_0(1) & \phi_1(1) & \cdots & \phi_m(1) \\ \phi_0(2) & \phi_1(2) & \cdots & \phi_m(2) \\ \vdots & & & \vdots \\ \phi_0(r) & \phi_1(r) & \cdots & \phi_m(r) \\ \vdots & & & \vdots \\ \phi_0(s) & \phi_1(s) & \cdots & \phi_m(s) \end{bmatrix} \tag{14.4-8a} $$
and

$$ B = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_m \end{bmatrix} \tag{14.4-8b} $$

The DOLP $\phi_j(r)$ can be written as

$$ \phi_0(r) = c_{00} \tag{14.4-9a} $$
$$ \phi_1(r) = c_{10} + c_{11} r \tag{14.4-9b} $$
$$ \phi_2(r) = c_{20} + c_{21} r + c_{22} r^2 \tag{14.4-9c} $$
$$ \phi_m(r) = c_{m0} + c_{m1} r + c_{m2} r^2 + \cdots + c_{mm} r^m \tag{14.4-9d} $$

where the coefficients $c_{ij}$ can be obtained from the equations of Section 5.3. In matrix form, (14.4-9a) to (14.4-9d) are expressed by

$$ P = TC \tag{14.4-10} $$

where T is defined by (14.4-5b) and C is the upper triangular matrix of DOLP coefficients, whose jth column contains the coefficients of $\phi_j$:

$$ C = \begin{bmatrix} c_{00} & c_{10} & c_{20} & \cdots & c_{m-1,0} & c_{m0} \\ 0 & c_{11} & c_{21} & \cdots & c_{m-1,1} & c_{m1} \\ 0 & 0 & c_{22} & \cdots & c_{m-1,2} & c_{m2} \\ \vdots & & & & & \vdots \\ 0 & 0 & 0 & \cdots & c_{m-1,m-1} & c_{m,m-1} \\ 0 & 0 & 0 & \cdots & 0 & c_{mm} \end{bmatrix} \tag{14.4-10a} $$

Substituting (14.4-10) into (14.4-8) yields

$$ X = TCB \tag{14.4-11} $$

for the range matrix X in terms of the DOLP.

Now, $x_r$ is the actual range. What we measure is the noise-corrupted range $y_r$, r = 0, 1, 2, ..., s, given by

$$ y_r = x_r + \nu_r \tag{14.4-12} $$

as in (4.1-1) to (4.1-1c). In matrix form this becomes

$$ Y = X + N \tag{14.4-13} $$

where

$$ N = \begin{bmatrix} \nu_0 \\ \nu_1 \\ \vdots \\ \nu_s \end{bmatrix} \tag{14.4-13a} $$

and Y is an (s + 1)-element column matrix of the $y_r$, r = 0, 1, ..., s. Thus here the number of measurements $y_r$ equals s + 1, not s as in (4.1-10).

What we are looking for is an estimate of the ranges $x_0, \ldots, x_s$, designated as $x_0^*, \ldots, x_s^*$, or $X^*$. Specifically, we are looking for the least-squares estimate based on the range measurements $y_0, \ldots, y_s$, or Y, of (14.4-12) and (14.4-13). If we use our polynomial expression for $x_r$, then we are looking for the least-squares estimate of the polynomial coefficients, that is, of the $a_j$ of (14.4-1), or alternately the least-squares estimate of the scaled $a_j$, that is, the $z_j$ given by (14.4-2a) or (5.2-8), or the column matrix $Z_s$ of (14.4-5a). Designate the least-squares estimate of $Z_s$ as $Z_s^*$. From (14.4-5),

$$ X_Z^* = T Z_s^* \tag{14.4-14} $$

where the subscript Z on the range matrix $X^*$ indicates that, to obtain the estimate of the range matrix X of (14.4-3), we are using the polynomial fit for $x_r$ given by (14.4-2) in terms of the scaled derivatives of $x_r$, that is, the state coordinates $z_j$ of (14.4-2a) and (14.4-5a). Similarly, if we use the DOLP representation for $x_r$, then the least-squares estimate of the range matrix X is obtained from the least-squares estimate of B, designated as $B^*$. Specifically, using (14.4-11) gives

$$ X_B^* = T C B^* \tag{14.4-15} $$

where the subscript B on $X^*$ indicates that the estimate is obtained using the DOLP representation for the range matrix X.

From (4.1-31) we know that the least-squares estimate of $Z_s$ is given by the $Z_s$ that minimizes the magnitude squared of the column error matrix

$$ E_Z = Y - X_Z = Y - T Z_s \tag{14.4-16} $$

Similarly, the least-squares estimate of B is given by the B that minimizes the magnitude squared of the column error matrix

$$ E_B = Y - X_B = Y - T C B \tag{14.4-17} $$

To apply the voltage-processing procedure to (14.4-16) to obtain the least-squares estimate $Z_s^*$ of $Z_s$, we want to apply an orthonormal transformation F to (14.4-16) that transforms it into m′ equations in the m′ unknowns of $Z_s$, with these equations being in the Gauss elimination form, as done in Section 4.3 and Chapters 10 to 13. Note that, because the $\phi_j(r)$ are the DOLP terms, the columns of P are orthonormal. This follows from (5.3-2). Hence the rows of the transpose $P^T$ of P are orthonormal, and $P^T$ is an orthonormal transformation like F of (13.1-33). Strictly speaking, $P^T$ is not an orthonormal transformation, because it is not square, being of dimension m′ × (s + 1).
[This is because P spans only an m′-dimensional subspace of the (s + 1)-dimensional space of X. Specifically, it spans only the column space of T. Here, P could be augmented to span the whole (s + 1)-dimensional space, but this is not necessary. This is the same situation we had for F of (13.1-33).]

Let us try $P^T$ as this orthonormal transformation. Multiplying both sides of (14.4-16) by $P^T$ and reversing the sign of $E_Z$ yields

$$ -P^T E_Z = P^T T Z_s - P^T Y \tag{14.4-18} $$

or

$$ E_{1Z} = P^T T Z_s - Y_1' \tag{14.4-19} $$

where

$$ E_{1Z} = -P^T E_Z \tag{14.4-19a} $$

$$ Y_1' = P^T Y \tag{14.4-19b} $$

Now, applying the transform $P^T$ to (14.4-10) yields

$$ P^T P = P^T T C \tag{14.4-20} $$

But

$$ P^T P = I \tag{14.4-21} $$

where I is the m′ × m′ identity matrix. (Note that, because P is not a square matrix, $P P^T \neq I$.) Thus

$$ P^T T C = I \tag{14.4-22} $$

Postmultiplying both sides of (14.4-22) by $C^{-1}$ yields

$$ P^T T C C^{-1} = C^{-1} \tag{14.4-23} $$

or

$$ C^{-1} = P^T T \tag{14.4-24} $$

Substituting (14.4-24) into (14.4-19) yields

$$ E_{1Z} = C^{-1} Z_s - Y_1' \tag{14.4-25} $$

Because C is upper triangular, it can be shown that its inverse is upper triangular (see Problem 14.4-1). Thus $C^{-1}$ is an upper triangular matrix, equivalent to U of the voltage-processing method. Thus

$$ U = C^{-1} \tag{14.4-26} $$

and (14.4-25) becomes

$$ E_{1Z} = U Z_s - Y_1' \tag{14.4-27} $$

Thus the transformation $P^T$ puts (14.4-16) in the form obtained with the voltage-processing method, and hence $P^T$ is a suitable voltage-processing orthonormal transformation, as we hoped it would be.

Equation (14.4-25) or (14.4-27) consists of m′ = m + 1 equations in the m + 1 unknown $z_j$'s, to be solved for such that $\|E_{1Z}\|$ is minimum. Hence, as was the case for (4.3-31) and (10.2-14), the minimum $\|E_{1Z}\|$ is obtained by setting $E_{1Z}$ equal to zero, to obtain from (14.4-25) and (14.4-27)

$$ Y_1' = C^{-1} Z_s^* \tag{14.4-28} $$

which, using (14.4-26), becomes

$$ Y_1' = U Z_s^* \tag{14.4-29} $$

Solving for $Z_s^*$ in (14.4-28) or (14.4-29) gives the desired least-squares estimate. Equation (14.4-29) is equivalent to (10.2-16) of the voltage-processing method for estimating the unscaled $X_{n,n}^*$. Because $C^{-1}$, and in turn U, is upper triangular, $Z_s^*$ can easily be solved for using the back-substitution method, as discussed relative to solving (10.2-16) for $X_{n,n}^*$. However, because we know C when $x_r$ is modeled as a polynomial in time, it being the matrix of coefficients of the DOLP, we can solve (14.4-28) or (14.4-29) without using back substitution. Instead, we can solve for $Z_s^*$ directly by writing (14.4-28) as

$$ Z_s^* = C Y_1' = C P^T Y \tag{14.4-30} $$

where use was made of (14.4-19b).

[Note that Equation (14.4-27) for estimating $Z_s$ is equivalent to (4.3-45) and (10.2-14), used in the process of estimating $X_s$, the unscaled $Z_s$, with $Y_2'$ not present. The orthonormal transformation $P^T$ here, see (14.4-18), is equivalent to F of (10.2-14), except that there the rows of F were made to span the s-dimensional space of $Y^{(n)}$, while here, as indicated above, the rows of $P^T$ span only the (m + 1) = m′-dimensional space of the columns of T given by (14.4-5b). It is for this reason that $Y_2'$ is not present in (14.4-25).]
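The identities (14.4-21), (14.4-24), and (14.4-30) can be checked numerically. In the sketch below (an added illustration, not the book's construction), P and C are obtained from a QR factorization of the Vandermonde matrix T of (14.4-5b) rather than from the Section 5.3 formulas, so the columns of P may differ from the DOLP by sign; the identities hold regardless.

```python
import numpy as np

s, m = 9, 2                                   # s+1 = 10 samples, quadratic fit
r = np.arange(s + 1.0)
T = np.vander(r, m + 1, increasing=True)      # the Vandermonde T of (14.4-5b)

Q, R = np.linalg.qr(T)                        # T = QR, columns of Q orthonormal
P, C = Q, np.linalg.inv(R)                    # P = TC  <=>  C = R^{-1}, upper triangular

print(np.allclose(P.T @ P, np.eye(m + 1)))    # (14.4-21): P^T P = I
print(np.allclose(np.linalg.inv(C), P.T @ T)) # (14.4-24): C^{-1} = P^T T

y = 2.0 - 0.5 * r + 0.1 * r**2                # noiseless quadratic "measurements"
Z = C @ (P.T @ y)                             # (14.4-30): Z* = C P^T Y, no back substitution
print(np.allclose(Z, np.linalg.lstsq(T, y, rcond=None)[0]))  # matches ordinary LS
```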
Thus, when a polynomial fit of degree m is made to the data, the voltage-processing least-squares solution becomes the straightforward solution given by (14.4-30), with the orthonormal matrix P given by the matrix (14.4-8a) of DOLP values and the upper triangular matrix C given by the matrix (14.4-10a) of DOLP coefficients, both of these matrices being known beforehand. Thus the Gram–Schmidt, Givens, or Householder transformations do not have to be carried out for the voltage-processing method when a polynomial fit is made to the data. In addition, the voltage-processing orthonormal transformation F equals $P^T$ and hence is known beforehand. Also, the upper triangular matrix U equals $C^{-1}$ and is likewise known beforehand. Moreover, U does not have to be calculated, because (14.4-30) can be used to solve for $Z_s^*$ directly.

It remains to relate the resultant voltage-processing solution given by (14.4-30) to the DOLP solution. From (5.3-10) we know that the DOLP least-squares solution for $\beta_j$ is given by

$$ \beta_j^* = \sum_{r=0}^{s} \phi_j(r) y_r \tag{14.4-31} $$

Using (14.4-8a), this can be written as

$$ B^* = P^T Y \tag{14.4-32} $$

On examining (14.4-30) we see that our $Z_s^*$ obtained using the voltage-processing method with $F = P^T$ is given in terms of the DOLP solution for B given above. Specifically, applying (14.4-32) to (14.4-30) yields

$$ Z_s^* = C B^* \tag{14.4-33} $$

Computationally, we see that the voltage-processing solution is identical to the DOLP solution when the data are modeled by a polynomial of degree m and the times between measurements are equal. Finally, the least-squares estimate of X of (14.4-3) can be obtained by applying (14.4-33) to (14.4-5) to yield

$$ X^* = T C B^* \tag{14.4-34} $$

which on applying (14.4-32) becomes

$$ X^* = T C P^T Y \tag{14.4-35} $$

Alternately, we could apply (14.4-32) directly to (14.4-8) to obtain

$$ X^* = P P^T Y \tag{14.4-36} $$

The two solutions (14.4-35) and (14.4-36) are identical, since P = TC from (14.4-10); only the order of the calculations differs. In either case we do not need to compute a matrix inverse or use the back-substitution method, because here we know the matrices C (= $U^{-1}$) and $P^T$ (= F).

Let us recapitulate the important results obtained in this section. The main point is that the DOLP approach is equivalent to the voltage-processing approach. Moreover, when the voltage-processing method is used to provide a least-squares polynomial fit of degree m to s + 1 consecutive data points equally spaced in time, with s + 1 > m′ = m + 1, the voltage-processing matrix U and the transformation matrix F of Chapters 10 to 13 are known in advance. Specifically, U is given by the inverse of the matrix C of DOLP coefficients given by (14.4-10a). Moreover, $C^{-1}$ does not need to be evaluated, because we can obtain the scaled least-squares estimate $Z_s^*$ by using (14.4-30). Thus, for the case where a least-squares polynomial fit is being made to the data, the voltage-processing approach becomes the DOLP approach, with $U^{-1}$ given by C and $F = P^T$. This is indeed a beautiful result.

It is important to point out that U (and in turn $U^{-1}$) and F of the voltage-processing approach are known in advance, as discussed above, only if there are no missing measurements $y_j$, j = 0, 1, 2, ..., s. In the real world, some $y_j$ will be missing due to weak signals. In that case T, and in turn U, are not known in advance, the dropout of data points being random events.
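When samples drop out, P and C must therefore be recomputed for the particular pattern of received samples. A brief hypothetical illustration (the sample times and dropout pattern are invented for the example):

```python
import numpy as np

r_all = np.arange(10.0)
r = np.delete(r_all, [3, 7])            # suppose y_3 and y_7 were lost
T = np.vander(r, 3, increasing=True)    # rows only for the received samples
y = 2.0 - 0.5 * r + 0.1 * r**2          # the received measurements

# The precomputed DOLP shortcut no longer applies: the orthogonalization
# must be redone for this particular dropout pattern.
Q, R = np.linalg.qr(T)
z = np.linalg.solve(R, Q.T @ y)
print(z)                                # -> [2, -0.5, 0.1]
```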
14.5 SQUARE-ROOT KALMAN FILTERS

The Kalman filters discussed up until now, and in Chapter 18, use a power method of computation: they involve calculating the covariance matrices of the predicted and filtered state vectors; see, for example, (9.3-1a) to (9.3-1d) and (2.4-4a) to (2.4-4j). There are square-root Kalman filter algorithms that instead compute the square roots of these covariance matrices and hence are less sensitive to round-off errors [79, 115, 119, 121, 122]. A similar filter is the U-D covariance factorization filter discussed elsewhere [79, 119, 121, 122]. (In Section 10.2.2 we pointed out that $U^{-1}$ is the square root of the covariance matrix $S_{n,n}^*$. Similarly, $U D^{1/2}$ is another form of the square root of $S_{n,n}^*$, where U is an upper triangular matrix and D is a diagonal matrix. Note that the square root of (13.1-39a) is of this form.)

The question arises as to how to tell whether one needs these square-root-type filters or whether the simpler, conventional power-method filters can be used. The answer is to simulate the power-method filter on a general-purpose computer with double or triple precision and determine for what computational round-off one runs into performance and stability problems. If, for the accuracy to be used with the conventional filter, one does not run into a performance or stability problem, then the conventional filters described here can be used. If not, then the square-root filters should be considered.
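The numerical rationale can be seen directly: a covariance matrix has the square of the condition number of its square root, so propagating the square root preserves roughly twice the effective precision. The sketch below (an added illustration, assuming an invented covariance; it builds a U-D factorization from a Cholesky factor and is not Bierman's U-D update algorithm) demonstrates both points, with $S = U D U^T$, U unit upper triangular, and D diagonal.

```python
import numpy as np

A = np.vander(np.linspace(0.0, 1.0, 6), 6)       # a moderately ill-conditioned factor
S = A @ A.T                                      # covariance built by "squaring"
print(np.linalg.cond(A), np.linalg.cond(S))      # cond(S) = cond(A)**2

# U-D factorization of S: reverse-order Cholesky gives an upper-triangular
# factor L with S = L L^T; scaling its columns yields S = U D U^T.
L = np.linalg.cholesky(S[::-1, ::-1])[::-1, ::-1]  # upper triangular, S = L L^T
d = np.diag(L) ** 2                              # diagonal of D
U = L / np.diag(L)                               # unit upper triangular
print(np.allclose(U @ np.diag(d) @ U.T, S))      # True
```

Working with L (or with U and D) rather than with S itself is what keeps the square-root and U-D filters well behaved when the covariance becomes nearly singular.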