8 Kalman Filter Engineering

We now consider the following practical aspects of Kalman filtering applications:

1. how the performance of the Kalman filter can degrade due to computer roundoff errors, and alternative implementation methods with better robustness against roundoff;
2. how to determine computer memory, word length, and throughput requirements for implementing Kalman filters in computers;
3. ways to implement real-time monitoring and analysis of filter performance;
4. the Schmidt-Kalman suboptimal filter, designed for reducing computer requirements;
5. covariance analysis, which uses the Riccati equations for performance-based predictive design of sensor systems; and
6. Kalman filter architectures for GPS/INS integration.

8.1 MORE STABLE IMPLEMENTATION METHODS

8.1.1 Effects of Computer Roundoff

Computer roundoff limits the precision of numerical representation in the implementation of Kalman filters. It has been shown to cause severe degradation of filter performance in many applications, and alternative implementations of the Kalman filter equations (the Riccati equations, in particular) have been shown to improve robustness against roundoff errors.

Computer roundoff for floating-point arithmetic is often characterized by a single parameter $\epsilon_{\rm roundoff}$, which is the largest number such that

    $1 + \epsilon_{\rm roundoff} \equiv 1$ in machine precision.    (8.1)

The following example, due to Dyer and McReynolds [32], shows how a problem that is well conditioned, as posed, can be made ill-conditioned by the filter implementation.

Example 8.1  Let $I_n$ denote the $n \times n$ identity matrix. Consider the filtering problem with measurement sensitivity matrix

    $H = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1+\delta \end{bmatrix}$

and covariance matrices

    $P_0 = I_3$ and $R = \delta^2 I_2$,

where $\delta^2 < \epsilon_{\rm roundoff}$ but $\delta > \epsilon_{\rm roundoff}$. In this case, although $H$ clearly has rank 2 in machine precision, the product $H P_0 H^T$ with roundoff will equal

    $H P_0 H^T = \begin{bmatrix} 3 & 3+\delta \\ 3+\delta & 3+2\delta \end{bmatrix}$,

which is singular. The result is unchanged when $R$ is added to $H P_0 H^T$. In this case, then, the filter observational update fails because the matrix $H P_0 H^T + R$ is not invertible.

8.1.2 Alternative Implementations

The covariance correction process (observational update) in the solution of the Riccati equation was found to be the dominant source of numerical instability in the Kalman filter implementation, with the more common symptoms of failure being asymmetry of the covariance matrix (easily fixed) or, worse by far, negative terms on its diagonal. These implementation problems could be avoided for some problems by using more precision, but they were eventually solved for most applications by using alternatives to the covariance matrix P as the dependent variable in the covariance correction equation. However, each of these methods required a compatible method for covariance prediction. Table 8.1 lists several of these compatible implementation methods for improving the numerical stability of Kalman filters. Figure 8.1 illustrates how these methods perform on the ill-conditioned problem of Example 8.1 as the conditioning parameter $\delta \to 0$.
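The failure mode of Example 8.1 is easy to reproduce. The following MATLAB sketch (our variable names, not from the book's accompanying software) shows the matrix to be inverted losing rank in 64-bit arithmetic:

    % Example 8.1 conditioning test (a sketch; variable names are ours).
    delta = sqrt(eps)/2;         % delta^2 < eps_roundoff, but delta > eps_roundoff
    H  = [1 1 1; 1 1 1+delta];   % rank 2 in exact arithmetic
    P0 = eye(3);
    R  = delta^2*eye(2);
    S  = H*P0*H' + R;            % matrix to be inverted in the update
    % With roundoff, delta^2 is lost when added to terms of order 3,
    % so S is numerically singular:
    disp(rank(S))                % returns 1, not 2
    disp(cond(S))                % condition number on the order of 1/eps

Here eps is MATLAB's machine precision parameter, so delta satisfies the conditions of the example exactly as stated.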
TABLE 8.1  Compatible Methods for Implementing the Riccati Equation

    Covariance Matrix Format          Corrector Method             Predictor Method
    -------------------------------------------------------------------------------------
    Symmetric nonnegative definite    Kalman [71], Joseph [19]     Kalman [71]
    Square Cholesky factor C          Potter [100, 8]              $C_{k+1}(-) = \Phi_k C_k(+)$
    Triangular Cholesky factor C      Carlson [20]                 Kailath-Schmidt(a)
    Triangular Cholesky factor C      Morf-Kailath combined [93] (corrector and predictor combined)
    Modified Cholesky factors U, D    Bierman [10]                 Thornton [116]

    (a) From unpublished sources.

Fig. 8.1  Degradation of numerical solutions with problem conditioning.

For this particular test case, using 64-bit floating-point precision (52-bit mantissa), the accuracy of the Carlson [20] and Bierman [10] implementations degrades more gracefully than that of the others as $\delta \to \epsilon$, the machine precision limit. The Carlson and Bierman solutions still maintain about nine digits (about 30 bits) of accuracy at $\delta \approx \sqrt{\epsilon}$, when the other methods have essentially no bits of accuracy in the computed solution.

These results, by themselves, do not prove the general superiority of the Carlson and Bierman solutions for the Riccati equation. Relative performance of alternative implementation methods may depend upon details of the specific application, and for many applications the standard Kalman filter implementation will suffice. For many other applications, it has been found sufficient to constrain the covariance matrix to remain symmetric.

8.1.3 Serial Measurement Processing

It is shown in [73] that it is more efficient to process the components of a measurement vector serially, one component at a time, than to process them as a vector. This may seem counterintuitive, but it is true even if its implementation requires a transformation of measurement variables to make the associated measurement noise covariance R a diagonal matrix (i.e., with noise uncorrelated from one component to another).

8.1.3.1 Measurement Decorrelation

If the covariance matrix R of measurement noise is not a diagonal matrix, then it can be made so by $U D U^T$ decomposition (Eq. B.22) and changing the measurement variables,

    $R_{\rm corr} = U_R D_R U_R^T$,    (8.2)
    $R_{\rm decorr} \stackrel{\rm def}{=} D_R$ (a diagonal matrix),    (8.3)
    $z_{\rm decorr} \stackrel{\rm def}{=} U_R^{-1} z_{\rm corr}$,    (8.4)
    $H_{\rm decorr} \stackrel{\rm def}{=} U_R^{-1} H_{\rm corr}$,    (8.5)

where $R_{\rm corr}$ is the nondiagonal (i.e., correlated component to component) measurement noise covariance matrix, and the new decorrelated measurement vector $z_{\rm decorr}$ has a diagonal measurement noise covariance matrix $R_{\rm decorr}$ and measurement sensitivity matrix $H_{\rm decorr}$.

8.1.3.2 Serial Processing of Decorrelated Measurements

The components of $z_{\rm decorr}$ can now be processed one component at a time, using the corresponding row of $H_{\rm decorr}$ as its measurement sensitivity matrix and the corresponding diagonal element of $R_{\rm decorr}$ as its measurement noise variance. A MATLAB implementation of this procedure is listed in Table 8.2, where the final line is a "symmetrizing" procedure designed to improve robustness.
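As a sketch of the decorrelation step in Eqs. 8.2-8.5 (our variable names, not the book's), MATLAB's built-in ldl can supply the factorization. Note that ldl returns a unit lower triangular factor, $R = L D L^T$, rather than the unit upper triangular $U_R$ of Eq. 8.2; either factor decorrelates the measurement noise:

    % Measurement decorrelation sketch (names are ours).
    % Inputs: measurement z, sensitivity H, nondiagonal noise covariance R.
    [L, D] = ldl(R);     % R = L*D*L', with D diagonal for positive definite R
    z_d = L \ z;         % decorrelated measurement, noise covariance D
    H_d = L \ H;         % matching measurement sensitivity matrix
    R_d = D;             % diagonal noise covariance for serial processing

Each component z_d(j) can then be processed serially with the row H_d(j,:) and variance R_d(j,j), as in Table 8.2.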
TABLE 8.2  MATLAB Implementation of Serial Measurement Update

    x = xhat_minus;               % a priori state estimate, x-hat_k(-)
    P = P_minus;                  % a priori covariance, P_k(-)
    for j = 1:ell                 % ell = number of measurement components
        z = z_k(j);               % jth (decorrelated) measurement
        H = H_k(j,:);             % jth row of the sensitivity matrix
        R = R_decorr(j,j);        % jth measurement noise variance
        K = P*H'/(H*P*H' + R);    % Kalman gain for a scalar measurement
        x = x + K*(z - H*x);      % state update
        P = P - K*H*P;            % covariance update
    end
    xhat_plus = x;                % a posteriori state estimate, x-hat_k(+)
    P_plus = (P + P')/2;          % symmetrizing step

8.1.4 Joseph Stabilized Implementation

This implementation of the Kalman filter is given in [19], where it is demonstrated that numerical stability of the solution to the Riccati equation can be improved by rearranging the standard formulas for the measurement update into the following formats (given here for scalar measurements):

    $\bar z = z/\sqrt{R}$,    (8.6)
    $\bar H = H/\sqrt{R}$,    (8.7)
    $\bar K = (\bar H P \bar H^T + 1)^{-1} P \bar H^T$,    (8.8)
    $P \leftarrow (I - \bar K \bar H) P (I - \bar K \bar H)^T + \bar K \bar K^T$.    (8.9)

These equations would replace those for K and P within the loop in Table 8.2. The Joseph stabilized implementation and its refinements in [10] and [46] (mostly taking advantage of partial results and the redundancy due to symmetry) are implemented in the MATLAB files Joseph.m, Josephb.m, and Josephdv.m, respectively, on the accompanying diskette.

8.1.5 Factorization Methods

8.1.5.1 Historical Background

Robust implementation methods were introduced first for the covariance correction (measurement update), observed to be the principal source of numerical instability. In [100, 8], the idea of using a Cholesky factor (defined in Section B.8.1) of the covariance matrix P as the dependent variable in the Riccati equation was introduced. Carlson [20] discovered a more robust method using triangular Cholesky factors, which have zeros either above or below their main diagonals. Bierman [10] extended this to modified Cholesky factors (defined in Section B.1.8.1), which are diagonal and unit triangular matrices D and U, respectively, such that

    $U D U^T = P$    (8.10)

and U is triangular with 1's along its main diagonal. Compatible covariance prediction methods were discovered by Thomas Kailath and Stanley F. Schmidt (for Carlson's method) and by Catherine Thornton [116] (for Bierman's method).

8.1.6 Square-Root Filtering Methods

8.1.6.1 Problems with the Riccati Equation

Many early applications of Kalman filtering ran into serious numerical instability problems in solving the ancillary Riccati equation for the Kalman gain. The problem was eventually solved (over the next decade or so) by reformulating the Riccati equation so that its solution was more robust against computer roundoff errors. Some of the more successful of these approaches are collectively called "square-root filtering."

8.1.6.2 Square-Root Filtering

The concept for square-root filtering came from James E. Potter when he was at the MIT Instrumentation Laboratory (later the Charles Stark Draper Laboratory) in the early 1960s, and his concept was implemented successfully in the Kalman filter used for onboard navigation in all the Apollo moon missions. Potter's algorithm is implemented in the MATLAB m-file potter.m on the accompanying diskette. It was originally called square-root filtering because it is based, in part, on an algorithm for taking a symmetric square root of a special form of a symmetric matrix.

The improved robustness of Potter's approach comes from replacing the covariance matrix P with its Cholesky factor(1) as the dependent parameter of the Riccati equation.
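A minimal MATLAB sketch of the Joseph stabilized scalar update of Eqs. 8.6-8.9 (our variable names; the diskette's Joseph.m is the reference implementation):

    % Joseph stabilized scalar measurement update (a sketch; names are ours).
    % Inputs: state x, covariance P, scalar measurement z,
    %         row vector H, measurement noise variance R.
    zbar = z/sqrt(R);                   % Eq. 8.6: normalized measurement
    Hbar = H/sqrt(R);                   % Eq. 8.7: unit noise variance now
    K = (P*Hbar')/(Hbar*P*Hbar' + 1);   % Eq. 8.8: Kalman gain
    x = x + K*(zbar - Hbar*x);          % state update
    A = eye(length(x)) - K*Hbar;
    P = A*P*A' + K*K';                  % Eq. 8.9: Joseph form with Rbar = 1

The symmetric product $A P A^T$ keeps the updated covariance symmetric and nonnegative definite even when K is slightly in error, which is the source of the added stability.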
Some of the observed improvement in numerical stability is attributed to improvement of the condition number cond(C) (ratio of largest to smallest characteristic value) over cond(P), because

    $\text{cond}(C) = \sqrt{\text{cond}(P)}$.    (8.11)

A matrix is considered ill-conditioned for inversion in a particular computer ("machine") if its condition number is close to $1/\epsilon_{\rm machine}$, where $\epsilon_{\rm machine}$ is the largest positive number(2) for which

    $1 + \epsilon_{\rm machine} \equiv 1$    (8.12)

in machine precision. That is, the result of adding $\epsilon_{\rm machine}$ to 1 has no effect in machine precision.

(1) See Section B.1.8.1 for the definition and properties of Cholesky factors.
(2) $\epsilon_{\rm machine}$ has the reserved name eps in MATLAB. Its value is returned when you type eps.

8.1.6.3 Triangularization Methods

The so-called "QR" theorem of linear algebra states that every real $m \times n$ matrix S can be factored in the form S = QR, where Q is an $m \times m$ orthogonal matrix and R is an $m \times n$ upper triangular matrix. Depending on the relative magnitudes of m and n (m < n, m = n, or m > n), the nonzero part of the upper triangular submatrix of R lies in its upper right corner.

There are several algorithms for computing the triangular and orthogonal factors, including some with the order of the factors reversed (effectively "RQ" algorithms). These are also called triangularization methods. They are key to square-root filtering, because they can transform a nontriangular Cholesky factor M into a triangular one T:

    $M M^T = (T \Theta)(T \Theta)^T = T \Theta \Theta^T T^T$    (8.13)
    $= T T^T$,    (8.14)

where $\Theta$ is the orthogonal factor. Algorithms that implement QR decompositions need not compute the orthogonal factor explicitly, if it is not needed. Because the matrix symbols Q (dynamic disturbance noise covariance) and R (measurement noise covariance) are already used for specific parts of the Kalman filter, we will use alternative symbols here.

8.1.6.4 QR Decomposition by Householder Transformations

Householder transformation matrices(3) are orthogonal matrices of the form

    $\Theta(v) = I - \frac{2 v v^T}{v^T v}$,    (8.15)

where v is a column vector and I is the compatibly dimensioned identity matrix. The condition number of an orthogonal matrix is perfect (i.e., 1), making it well suited for robust operations in numerical linear algebra.

The QR decomposition of a matrix M is effectively accomplished by a series of products by Householder transformation matrices in the partitioned form

    $\begin{bmatrix} \Theta(v) & 0 \\ 0 & I \end{bmatrix}$,

with the vector v chosen to annihilate all but the end element of the remaining subrow x of M, until only the upper triangular part remains. It suffices to let

    $v = x^T + \sqrt{x x^T}\,[\,0\ \cdots\ 0\ 1\,]^T$.    (8.16)

However, operations with Householder matrices are typically not implemented by calculating the appropriate Householder matrix and taking a matrix product. They can be implemented quite efficiently as an algorithm operating "in place" on the matrix M, destroying M and leaving only the triangular factor in its place when completed. The MATLAB function housetri.m on the accompanying diskette does just this.

(3) Named for Alston S. Householder (1904-1993), who developed many of the more robust methods used in numerical linear algebra.
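A single annihilation step of Eqs. 8.15 and 8.16 can be checked directly in MATLAB (a sketch with our names; housetri.m on the diskette is the in-place reference):

    % One Householder annihilation step (a sketch; names are ours).
    % Build v per Eq. 8.16 so that x*Theta is zero except its last element.
    x = [3 1 4 1 5];                           % example subrow of M
    v = x' + norm(x)*[zeros(length(x)-1,1); 1];
    Theta = eye(length(x)) - 2*(v*v')/(v'*v);  % Eq. 8.15: orthogonal reflector
    disp(x*Theta)                              % -> [0 0 0 0 -norm(x)], up to roundoff

Applying such reflectors to successive subrows of M, as housetri.m does in place, yields the triangular factor without ever forming the orthogonal factor explicitly.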
8.1.6.5 Triangularization of Cholesky Factors

If A is any Cholesky factor of P, and A = CM is a QR decomposition of A such that M is the orthogonal factor and C is the triangular factor, then C is a triangular Cholesky factor of P. That is,

    $P = A A^T = (C M)(C M)^T = C M M^T C^T = C C^T$    (8.17)

and C is triangular. This is the basis for the following two types of square-root filtering.

8.1.6.6 Morf-Kailath Square-Root Filter

In Morf-Kailath square-root filtering, the entire Riccati equation, including prediction and correction steps, is implemented in a single triangularization procedure. It effectively computes the Cholesky factors of successive covariance matrices of prediction error (required for computing Kalman gain) without ever explicitly computing the intermediate values for corrected estimation errors.

Assume the following: $G_k$ is the dynamic disturbance distribution matrix of the system model; $C_{Q_k}$ is a Cholesky factor of $Q_k$; $\Phi_k$ is the state transition matrix from the previous epoch; $C_{P_k}$ is a Cholesky factor of $P_k$, the covariance matrix of prediction error from the previous epoch; $H_k$ is the measurement sensitivity matrix of the previous epoch; $C_{R_k}$ is a Cholesky factor of the measurement noise covariance matrix of the previous epoch; and $\Theta$ is a triangularizing orthogonal matrix for the partitioned matrix such that

    $\begin{bmatrix} G_k C_{Q_k} & \Phi_k C_{P_k} & 0 \\ 0 & H_k C_{P_k} & C_{R_k} \end{bmatrix} \Theta = \begin{bmatrix} 0 & C_{P_{k+1}} & C_{k+1} \\ 0 & 0 & C_{E_{k+1}} \end{bmatrix}$,    (8.18)

a partitioned upper triangular matrix. Then $C_{P_{k+1}}$ is the square triangular Cholesky factor of $P_{k+1}$, the covariance matrix of prediction error, and the Kalman gain is

    $K_{k+1} = C_{k+1} C_{E_{k+1}}^{-1}$.    (8.19)

8.1.6.7 Carlson-Schmidt Square-Root Filtering

In Carlson-Schmidt square-root filtering, only the temporal update (predictor) of the Riccati equation is implemented using triangularization. The observational update is implemented by an algorithm due to Carlson [20]. The Carlson algorithm is implemented in the MATLAB m-file carlson.m on the accompanying diskette. It calculates the Cholesky factor $C_{P,k+1}(+)$ of the covariance matrix $P_{k+1}(+)$ corrected for the effect of taking the measurement.

The temporal update is implemented as

    $[\,0 \quad C_{P,k}(-)\,] = [\,\Phi_k C_{P,k-1}(+) \quad G_k C_{Q,k}\,]\,\Theta_1 \Theta_2 \Theta_3 \cdots \Theta_n$,    (8.20)

where $C_{P,k}(-)$ is the sought-for triangular Cholesky factor of $P_k(-)$, $C_{Q,k}$ is a Cholesky factor of $Q_k$, $[\,\Phi_k C_{P,k-1}(+) \quad G_k C_{Q,k}\,]$ is a (nontriangular) Cholesky factor of $P_k(-)$, and the sequence of Householder transformation matrices $\Theta_1 \Theta_2 \Theta_3 \cdots \Theta_n$ transforms it into the appropriate triangular form.

It can be shown that the matrix $[\,\Phi_k C_{P,k-1}(+) \quad G_k C_{Q,k}\,]$ is, indeed, a Cholesky factor of $P_k(-)$ by multiplying it out:

    $[\,\Phi_k C_{P,k-1}(+) \quad G_k C_{Q,k}\,][\,\Phi_k C_{P,k-1}(+) \quad G_k C_{Q,k}\,]^T$
    $= \Phi_k C_{P,k-1}(+) C_{P,k-1}^T(+) \Phi_k^T + G_k C_{Q,k} C_{Q,k}^T G_k^T$
    $= \Phi_k P_{k-1}(+) \Phi_k^T + G_k Q_k G_k^T$
    $= P_k(-)$.

The triangularization in Eq. 8.20 is implemented in the MATLAB m-file schmidt.m on the accompanying diskette.

8.1.7 Bierman-Thornton UD Filter

The Bierman-Thornton square-root filter is analogous to the Carlson-Schmidt square-root filter, but with modified Cholesky factors of P in place of ordinary Cholesky factors. It is also called "UD filtering," in reference to the modified Cholesky factors U and D.

The principal differences between Bierman-Thornton UD filtering and Carlson-Schmidt square-root filtering are as follows:

1. The Bierman-Thornton square-root filter uses U and D in place of C.
2. The observational update (due to Bierman [10]) requires no square roots.
3. The temporal update (due to Thornton [116]) uses modified weighted Gram-Schmidt orthogonalization in place of Householder triangularization.
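A minimal sketch of the temporal update of Eq. 8.20, substituting MATLAB's built-in qr for the Householder product in schmidt.m (our names; qr triangularizes by premultiplication, so the transpose is triangularized instead):

    % Carlson-Schmidt temporal update sketch (names are ours).
    % Inputs: Phi (n x n), Cplus (n x n Cholesky factor of P(+)),
    %         G (n x p), CQ (p x p Cholesky factor of Q).
    A = [Phi*Cplus, G*CQ];       % n x (n+p) nontriangular Cholesky factor of P(-)
    [~, Rfac] = qr(A', 0);       % economy-size QR of the transpose
    Cminus = Rfac';              % n x n lower triangular Cholesky factor
    % Check: Cminus*Cminus' = Phi*P(+)*Phi' + G*Q*G' up to roundoff.

The result is a lower triangular factor rather than the upper triangular form of Eq. 8.20; either is a valid triangular Cholesky factor of $P_k(-)$.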
The methods of Carlson and Bierman are "rank 1 modification" algorithms for Cholesky factors and modified Cholesky factors, respectively. A rank 1 modification algorithm for a triangular Cholesky factor, for example, calculates the triangular Cholesky factor $\tilde C$ such that

    $\tilde C \tilde C^T = C C^T - v v^T$,

given the prior Cholesky factor C and the vector v. Rank 1 modification in this case refers to the matrix rank of the modification $v v^T$ of $C C^T$. In this particular application of rank 1 modification, the matrix and vector are

    $P = C C^T$ (predicted covariance),
    $v = \frac{P H^T}{\sqrt{H P H^T + R}}$,

respectively. This only works if the dimension of the measurement equals 1 (i.e., the rank of H is 1), which is the reason that square-root filtering must process measurement vector components one at a time.

The corresponding UD predictor algorithm was discovered by Catherine Thornton and was the subject of her Ph.D. dissertation [116]. It is based on a relatively robust orthogonalization method developed by Åke Björck [11] and called "modified Gram-Schmidt." Bierman [10] refers to it as "modified weighted Gram-Schmidt" (MWGS), which is much longer than its appropriate name, "Thornton's method." A listing of its implementation in MATLAB (from thornton.m on the accompanying diskette) is presented in Table 8.3. The corresponding MATLAB listing of the Bierman corrector algorithm (from bierman.m on the accompanying diskette) is given in Table 8.4.

[...]

[...] $a_R, a_P, a_Y$ – roll-, pitch-, and yaw-axis accelerometer outputs; $\omega_R, \omega_P, \omega_Y$ – roll-, pitch-, and yaw-axis gyroscope outputs; $\rho_i$ – GPS receiver pseudorange output for the ith GPS satellite, if available; and $\dot\rho_i$ – GPS receiver Doppler output for the ith GPS satellite, if available.

Acceleration Sensitivities. The sensitivities of the outputs of vehicle-fixed accelerometers to the [...]

Not-Quite Upper Bounds. The contour plots in Fig. 8.2 are "not-quite-worst-case computational costs" plotted in terms of equivalent multiply-and-accumulate operations per update cycle, as functions of the parameter matrix dimensions and the compatible implementation methods used. These are effectively "not-quite upper bounds" because they assume the worst-case parameter matrix conditions (i.e., all parameter [...]

[...] described in Section 7.8.5.2, and the INS Kalman filter uses the GPS Kalman filter outputs for estimating sensor errors that the INS would be incapable of estimating by itself. An implementation of this type is called "GPS-aided INS" because the INS Kalman filter treats the outputs of the GPS Kalman filter as sensor outputs and does not include the GPS state variables [...]

[...] $n(4n^2 + 3np + p^2 + n - 1)/2 + p(p^2 - 1)/6$ multiply-and-accumulate operations per update cycle. These computational costs include those for diagonalizing Q (by UD factorization), which is on the order of $\frac{1}{6}p(p^2 - 1)$ multiply-and-accumulate operations per update cycle, where Q is $p \times p$. These formulas also include the additional $\frac{1}{2}np(p - 1)$ multiply-and-accumulate [...]
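Returning to the rank 1 modification at the head of this excerpt: MATLAB's built-in cholupdate can stand in for a dedicated routine such as carlson.m (a sketch with our names; note that cholupdate uses the factorization $P = C^T C$ with C upper triangular, whereas the text writes $P = C C^T$):

    % Rank 1 observational-update sketch (names are ours).
    % Inputs: upper triangular C with P = C'*C, row vector H (scalar
    % measurement), measurement noise variance R.
    P = C'*C;                        % predicted covariance from its factor
    v = (P*H')/sqrt(H*P*H' + R);     % rank 1 downdate vector
    C = cholupdate(C, v, '-');       % now C'*C = P - v*v', the corrected P

Because the downdated matrix is the posterior covariance, it remains positive definite for R > 0, so the downdate is well defined.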
[...] covariance analysis.

8.6 GPS/INS INTEGRATION ARCHITECTURES

GPS architecture is likely to change with the addition of more channels and aiding signals, and INSs already have a wide variety of architectures. However, even if there were only one GPS and one possible INS, there would still be many ways to integrate the two. For these reasons, there is no all-encompassing Kalman filter architecture for GPS/INS integration [...]

[...] rotation vector rotating locally level ENU coordinates into vehicle-fixed RPY coordinates; $\dot\rho_E, \dot\rho_N, \dot\rho_U$ – time derivatives of the east, north, and vertical components of the rotation vector rotating locally level ENU coordinates into vehicle-fixed RPY coordinates.

Twelve Accelerometer Parameters. $b_{aR}, b_{aP}, b_{aY}$ – roll-, pitch-, and yaw-axis components of accelerometer bias; $M_a$ – nine elements of the $3 \times 3$ [...]

[...] (equivalent clock drift rate error in seconds per second times c).

Twenty-Four (or More) GPS Timing Errors. $\Delta\rho_{SA_i}$ – pseudorange error for the ith GPS satellite, in meters.

In the minimum configuration (24 satellites active), 26 of the 65 state variables are for GPS, and the remaining 39 are for the INS. The pseudorange and velocity units for the 24 GPS system state variables are chosen to avoid factors of $c \approx 3 \times 10^8$ m/s [...]

[...] potential GPS receiver channels, the outputs of which are pseudoranges and Doppler rates to the satellites in view. The GPS receiver can only track the satellites in view, and we model this by setting to zero the measurement sensitivities of receiver channels to GPS satellites occluded by the earth.

Measurement Variables. The elements of the measurement vector for this model will be $a_R, a_P, a_Y$ – roll-, pitch-, [...]

[...] computational costs for which were approximated as being equivalent to six multiply-and-accumulate operations.) The corresponding Schmidt-Kailath temporal update (using Householder triangularization) has computational cost on the order of $\frac{1}{3}n(2n^2 + 3np + 6n + 12p + 25)$ multiply-and-accumulate operations per update cycle in the worst-case scenario (G and Q time varying and Q nondiagonal). This includes the [...]
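As a sketch of the satellite visibility modeling described above (the spherical-earth horizon test and all names here are our assumptions, not the book's model):

    % Occluded-satellite masking sketch (names and test are our assumptions).
    % r_rcv: receiver ECEF position (m); r_sat: 3 x 24 satellite ECEF
    % positions; H: measurement sensitivity matrix with one pseudorange
    % row per receiver channel, indexed by rho_rows(i).
    for i = 1:24
        los = r_sat(:,i) - r_rcv;          % line-of-sight vector
        visible = dot(los, r_rcv) > 0;     % above the local horizon?
        if ~visible
            H(rho_rows(i), :) = 0;         % occluded: channel contributes nothing
        end
    end

Zeroing the corresponding rows of H drives the Kalman gain for those channels to zero, so untracked satellites simply drop out of the update.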
