
Kalman Filtering: Theory and Practice Using MATLAB (Part 4)


Kalman Filtering: Theory and Practice Using MATLAB, Second Edition
Mohinder S. Grewal, Angus P. Andrews
Copyright © 2001 John Wiley & Sons, Inc.
ISBNs: 0-471-39254-5 (Hardback); 0-471-26638-8 (Electronic)

4 LINEAR OPTIMAL FILTERS AND PREDICTORS

"Prediction is difficult, especially of the future."
Attributed to Niels Henrik David Bohr (1885-1962)

4.1 CHAPTER FOCUS

4.1.1 Estimation Problem

This is the problem of estimating the state of a linear stochastic system by using measurements that are linear functions of the state. We suppose that stochastic systems can be represented by the types of plant and measurement models (for continuous and discrete time) shown as Equations 4.1-4.5 in Table 4.1, with dimensions of the vector and matrix quantities as shown in Table 4.2. The symbols $\Delta(k-\ell)$ and $\delta(t-s)$ stand for the Kronecker delta function and the Dirac delta function (actually, a generalized function), respectively.

TABLE 4.1 Linear Plant and Measurement Models

Plant:
  continuous time: $\dot x(t) = F(t)\,x(t) + w(t)$;  discrete time: $x_k = \Phi_{k-1}x_{k-1} + w_{k-1}$   (4.1)
Measurement:
  $z(t) = H(t)\,x(t) + v(t)$;  $z_k = H_k x_k + v_k$   (4.2)
Plant noise:
  $E\langle w(t)\rangle = 0$, $E\langle w(t)\,w^T(s)\rangle = \delta(t-s)\,Q(t)$;  $E\langle w_k\rangle = 0$, $E\langle w_k w_i^T\rangle = \Delta(k-i)\,Q_k$   (4.3, 4.4)
Observation noise:
  $E\langle v(t)\rangle = 0$, $E\langle v(t)\,v^T(s)\rangle = \delta(t-s)\,R(t)$;  $E\langle v_k\rangle = 0$, $E\langle v_k v_i^T\rangle = \Delta(k-i)\,R_k$   (4.5)

TABLE 4.2 Dimensions of Vectors and Matrices in Linear Model

  $x, w$: $n \times 1$        $\Phi, Q$: $n \times n$
  $z, v$: $\ell \times 1$     $H$: $\ell \times n$
  $R$: $\ell \times \ell$     $\Delta, \delta$: scalar

The measurement and plant noise $v_k$ and $w_k$ are assumed to be zero-mean Gaussian processes, and the initial value $x_0$ is a Gaussian variate with known mean $\hat x_0$ and known covariance matrix $P_0$. Although the noise sequences $w_k$ and $v_k$ are assumed to be uncorrelated, the derivation in Section 4.5 will remove this restriction and modify the estimator equations accordingly.

The objective will be to find an estimate of the n-dimensional state vector $x_k$, represented by $\hat x_k$, a linear function of the measurements $z_1, \ldots, z_k$, that minimizes the weighted mean-squared error

$$E\left\langle [x_k - \hat x_k]^T M [x_k - \hat x_k] \right\rangle, \qquad (4.6)$$

where M is any symmetric nonnegative-definite weighting matrix.
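The discrete-time model of Table 4.1 is straightforward to exercise numerically. The MATLAB sketch below simulates the plant and measurement equations (4.1) and (4.2); all numerical values in it are illustrative assumptions, not values from the text.

  % Simulate x_k = Phi x_{k-1} + w_{k-1} and z_k = H x_k + v_k (Eqs. 4.1, 4.2)
  Phi = [1 1; 0 1];              % assumed state transition matrix (n = 2)
  H   = [1 0];                   % assumed measurement sensitivity (l = 1)
  Q   = 0.01*eye(2);             % assumed plant noise covariance
  R   = 0.25;                    % assumed measurement noise covariance
  N   = 100;
  x   = zeros(2,N);  x(:,1) = [0; 1];
  for k = 2:N
    x(:,k) = Phi*x(:,k-1) + chol(Q,'lower')*randn(2,1);   % plant model (4.1)
  end
  z = H*x + sqrt(R)*randn(1,N);  % measurement model (4.2)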
4.1.2 Main Points to Be Covered

Linear Quadratic Gaussian Estimation Problem. We are now prepared to derive the mathematical forms of optimal linear estimators for the states of linear stochastic systems defined in the previous chapters. This is called the linear quadratic Gaussian (LQG) estimation problem: the dynamic systems are linear, the performance cost functions are quadratic, and the random processes are Gaussian.

Filtering, Prediction, and Smoothing. There are three general types of estimators for the LQG problem:

- Predictors use observations strictly prior to the time that the state of the dynamic system is to be estimated: $t_{\rm obs} < t_{\rm est}$.
- Filters use observations up to and including the time that the state of the dynamic system is to be estimated: $t_{\rm obs} \le t_{\rm est}$.
- Smoothers use observations beyond the time that the state of the dynamic system is to be estimated: $t_{\rm obs} > t_{\rm est}$.

Orthogonality Principle. A straightforward and simple approach using the orthogonality principle is used in the derivation of the estimators. (For more mathematically oriented derivations, consult any of the references such as Anderson and Moore [1], Bozic [9], Brammer and Siffling [10], Brown [11], Bryson and Ho [14], Bucy and Joseph [15], Catlin [16], Chui and Chen [18], Gelb et al. [21], Jazwinski [23], Kailath [24], Maybeck [30, 31], Mendel [34, 35], Nahi [36], Ruymgaart and Soong [42], and Sorenson [47].) These estimators will have minimum variance and be unbiased and consistent.

Unbiased Estimators. The Kalman filter can be characterized as an algorithm for computing the conditional mean and covariance of the probability distribution of the state of a linear stochastic system with uncorrelated Gaussian process and measurement noise. The conditional mean is the unique unbiased estimate. It is propagated in feedback form by a system of linear differential equations or by the corresponding discrete-time equations. The conditional covariance is propagated by a nonlinear differential equation or its discrete-time equivalent. This implementation automatically minimizes the expected risk associated with any quadratic loss function of the estimation error.

Performance Properties of Optimal Estimators. The statistical performance of the estimator can be predicted a priori (that is, before it is actually used) by solving the nonlinear differential (or difference) equations used in computing the optimal feedback gains of the estimator. These are called Riccati equations, named in 1763 by Jean le Rond d'Alembert (1717-1783) for Count Jacopo Francesco Riccati (1676-1754), who had studied a second-order scalar differential equation [213], although not the form that we have here [54, 210]. (Kalman gives credit to Richard S. Bucy for showing him that the Riccati differential equation is analogous to spectral factorization for defining optimal gains. The Riccati equation also arises naturally in the problem of separation of variables in ordinary differential equations and in the transformation of two-point boundary-value problems to initial-value problems [155].) The behavior of their solutions can be shown analytically in the most trivial cases. These equations also provide a means for verifying the proper performance of the actual estimator when it is running.

4.2 KALMAN FILTER

Observational Update Problem for System State Estimator. Suppose that a measurement has been made at time $t_k$ and that the information it provides is to be applied in updating the estimate of the state x of a stochastic system at time $t_k$. It is assumed that the measurement is linearly related to the state by an equation of the form $z_k = Hx_k + v_k$, where H is the measurement sensitivity matrix and $v_k$ is the measurement noise.

Estimator in Linear Form. The optimal linear estimate is equivalent to the general (nonlinear) optimal estimator if the variates x and z are jointly Gaussian (see Section 3.8.1). Therefore, it suffices to seek an updated estimate $\hat x_k(+)$, based on the observation $z_k$, that is a linear function of the a priori estimate and the measurement:

$$\hat x_k(+) = K^1_k\,\hat x_k(-) + \bar K_k\,z_k, \qquad (4.7)$$

where $\hat x_k(-)$ is the a priori estimate of $x_k$ and $\hat x_k(+)$ is the a posteriori value of the estimate.

Optimization Problem. The matrices $K^1_k$ and $\bar K_k$ are as yet unknown. We seek those values of $K^1_k$ and $\bar K_k$ such that the new estimate $\hat x_k(+)$ will satisfy the orthogonality principle of Section 3.8.2. This orthogonality condition can be written in the form

$$E\langle [x_k - \hat x_k(+)]\,z_i^T\rangle = 0, \quad i = 1, 2, \ldots, k-1, \qquad (4.8)$$
$$E\langle [x_k - \hat x_k(+)]\,z_k^T\rangle = 0. \qquad (4.9)$$

If one substitutes the formula for $x_k$ from Equation 4.1 (in Table 4.1) and for $\hat x_k(+)$ from Equation 4.7 into Equation 4.8, then one will observe from Equations 4.1 and 4.2 that the data $z_1, \ldots, z_k$ do not involve the noise term $w_k$. Therefore, because the random sequences $w_k$ and $v_k$ are uncorrelated, it follows that $E\langle w_k z_i^T\rangle = 0$ for $1 \le i \le k$. (See Problem 4.5.)
Using this result, one can obtain the following relation:

$$E\langle [\Phi_{k-1}x_{k-1} + w_{k-1} - K^1_k\hat x_k(-) - \bar K_k z_k]\,z_i^T\rangle = 0, \quad i = 1, \ldots, k-1. \qquad (4.10)$$

But because $z_k = H_k x_k + v_k$, Equation 4.10 can be rewritten as

$$E\langle [\Phi_{k-1}x_{k-1} - K^1_k\hat x_k(-) - \bar K_k H_k x_k - \bar K_k v_k]\,z_i^T\rangle = 0, \quad i = 1, \ldots, k-1. \qquad (4.11)$$

We also know that Equations 4.8 and 4.9 hold at the previous step, that is,

$$E\langle [x_{k-1} - \hat x_{k-1}(+)]\,z_i^T\rangle = 0, \quad i = 1, \ldots, k-1,$$

and

$$E\langle v_k z_i^T\rangle = 0, \quad i = 1, \ldots, k-1.$$

Then Equation 4.11 can be reduced to the form

$$\Phi_{k-1}E\langle x_{k-1}z_i^T\rangle - K^1_k E\langle \hat x_k(-)z_i^T\rangle - \bar K_k H_k\Phi_{k-1}E\langle x_{k-1}z_i^T\rangle - \bar K_k E\langle v_k z_i^T\rangle = 0,$$
$$\Phi_{k-1}E\langle x_{k-1}z_i^T\rangle - K^1_k E\langle \hat x_k(-)z_i^T\rangle - \bar K_k H_k\Phi_{k-1}E\langle x_{k-1}z_i^T\rangle = 0,$$
$$E\langle \{[x_k - \bar K_k H_k x_k - K^1_k x_k] - K^1_k[\hat x_k(-) - x_k]\}\,z_i^T\rangle = 0,$$
$$[I - K^1_k - \bar K_k H_k]\,E\langle x_k z_i^T\rangle = 0. \qquad (4.12)$$

Equation 4.12 can be satisfied for any given $x_k$ if

$$K^1_k = I - \bar K_k H_k. \qquad (4.13)$$

Clearly, this choice of $K^1_k$ causes Equation 4.7 to satisfy a portion of the condition given by Equation 4.8, which was derived in Section 3.8. The choice of $\bar K_k$ is such that Equation 4.9 is satisfied.

Let the errors be

$$\tilde x_k(+) \overset{\rm def}{=} \hat x_k(+) - x_k, \qquad (4.14)$$
$$\tilde x_k(-) \overset{\rm def}{=} \hat x_k(-) - x_k, \qquad (4.15)$$
$$\tilde z_k \overset{\rm def}{=} \hat z_k(-) - z_k = H_k\hat x_k(-) - z_k. \qquad (4.16)$$

The vectors $\tilde x_k(+)$ and $\tilde x_k(-)$ are the estimation errors after and before updates, respectively. (The symbol ~ is officially called a tilde but often called a "squiggle.")

The estimate $\hat z_k(-)$ depends linearly on $\hat x_k(-)$, which depends linearly on $z_1, \ldots, z_{k-1}$. Therefore, from Equation 4.8,

$$E\langle [x_k - \hat x_k(+)]\,\hat z_k^T(-)\rangle = 0, \qquad (4.17)$$

and also (by subtracting Equation 4.9 from Equation 4.17)

$$E\langle [x_k - \hat x_k(+)]\,\tilde z_k^T\rangle = 0. \qquad (4.18)$$

Substitute for $x_k$, $\hat x_k(+)$, and $\tilde z_k$ from Equations 4.1, 4.7, and 4.16, respectively. Then

$$E\langle [\Phi_{k-1}x_{k-1} + w_{k-1} - K^1_k\hat x_k(-) - \bar K_k z_k][H_k\hat x_k(-) - z_k]^T\rangle = 0.$$

However, by the system structure,

$$E\langle w_k z_k^T\rangle = E\langle w_k\hat x_k^T(+)\rangle = 0,$$

so that

$$E\langle [\Phi_{k-1}x_{k-1} - K^1_k\hat x_k(-) - \bar K_k z_k][H_k\hat x_k(-) - z_k]^T\rangle = 0.$$

Substituting for $K^1_k$, $z_k$, and $\tilde x_k(-)$, and using the fact that $E\langle \tilde x_k(-)v_k^T\rangle = 0$, this last result can be modified as follows:

$$0 = E\langle [\Phi_{k-1}x_{k-1} - \hat x_k(-) + \bar K_k H_k\hat x_k(-) - \bar K_k H_k x_k - \bar K_k v_k][H_k\hat x_k(-) - H_k x_k - v_k]^T\rangle$$
$$= E\langle [(x_k - \hat x_k(-)) - \bar K_k H_k(x_k - \hat x_k(-)) - \bar K_k v_k][H_k\tilde x_k(-) - v_k]^T\rangle$$
$$= E\langle [-\tilde x_k(-) + \bar K_k H_k\tilde x_k(-) - \bar K_k v_k][H_k\tilde x_k(-) - v_k]^T\rangle.$$

By definition, the a priori covariance (the error covariance matrix before the update) is

$$P_k(-) = E\langle \tilde x_k(-)\tilde x_k^T(-)\rangle.$$

It satisfies the equation

$$[I - \bar K_k H_k]\,P_k(-)\,H_k^T - \bar K_k R_k = 0,$$

and therefore the gain can be expressed as

$$\bar K_k = P_k(-)H_k^T[H_k P_k(-)H_k^T + R_k]^{-1}, \qquad (4.19)$$

which is the solution we seek for the gain as a function of the a priori covariance.

One can derive a similar formula for the a posteriori covariance (the error covariance matrix after update), which is defined as

$$P_k(+) = E\langle \tilde x_k(+)\tilde x_k^T(+)\rangle. \qquad (4.20)$$

By substituting Equation 4.13 into Equation 4.7, one obtains the equations

$$\hat x_k(+) = (I - \bar K_k H_k)\hat x_k(-) + \bar K_k z_k,$$
$$\hat x_k(+) = \hat x_k(-) + \bar K_k[z_k - H_k\hat x_k(-)]. \qquad (4.21)$$

Subtract $x_k$ from both sides of the latter equation to obtain the equations

$$\hat x_k(+) - x_k = \hat x_k(-) + \bar K_k H_k x_k + \bar K_k v_k - \bar K_k H_k\hat x_k(-) - x_k,$$
$$\tilde x_k(+) = \tilde x_k(-) - \bar K_k H_k\tilde x_k(-) + \bar K_k v_k = (I - \bar K_k H_k)\tilde x_k(-) + \bar K_k v_k. \qquad (4.22)$$

By substituting Equation 4.22 into Equation 4.20 and noting that $E\langle \tilde x_k(-)v_k^T\rangle = 0$, one obtains

$$P_k(+) = E\langle [I - \bar K_k H_k]\tilde x_k(-)\tilde x_k^T(-)[I - \bar K_k H_k]^T + \bar K_k v_k v_k^T\bar K_k^T\rangle = (I - \bar K_k H_k)P_k(-)(I - \bar K_k H_k)^T + \bar K_k R_k\bar K_k^T. \qquad (4.23)$$

This last equation is the so-called "Joseph form" of the covariance update equation, derived by P. D. Joseph [15].
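As a quick numerical check on Equations 4.19 and 4.23, the gain and the Joseph-form update can be evaluated directly. This is an illustrative sketch (the matrices are assumed values, not from the text); note that the Joseph form keeps $P_k(+)$ symmetric by construction:

  % Kalman gain (Eq. 4.19) and Joseph-form covariance update (Eq. 4.23)
  Pm = [4 1; 1 2];                  % P_k(-), assumed a priori covariance
  H  = [1 0];  R = 0.5;             % assumed measurement model
  K  = Pm*H' / (H*Pm*H' + R);       % gain, Eq. 4.19
  Pp = (eye(2) - K*H)*Pm*(eye(2) - K*H)' + K*R*K';   % Joseph form, Eq. 4.23
  disp(norm(Pp - Pp'))              % symmetry preserved up to roundoff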
By substituting for $\bar K_k$ from Equation 4.19, it can be put into the following forms:

$$P_k(+) = P_k(-) - \bar K_k H_k P_k(-) - P_k(-)H_k^T\bar K_k^T + \bar K_k H_k P_k(-)H_k^T\bar K_k^T + \bar K_k R_k\bar K_k^T$$
$$= (I - \bar K_k H_k)P_k(-) - P_k(-)H_k^T\bar K_k^T + \underbrace{\bar K_k(H_k P_k(-)H_k^T + R_k)}_{P_k(-)H_k^T}\,\bar K_k^T$$
$$= (I - \bar K_k H_k)P_k(-), \qquad (4.24)$$

the last of which is the one most often used in computation (the two middle terms of the second line cancel because, by Equation 4.19, the underbraced factor equals $P_k(-)H_k^T$). This implements the effect that conditioning on the measurement has on the covariance matrix of estimation uncertainty.

Error covariance extrapolation models the effects of time on the covariance matrix of estimation uncertainty, which is reflected in the a priori values of the covariance and state estimates,

$$P_k(-) = E\langle \tilde x_k(-)\tilde x_k^T(-)\rangle, \qquad \hat x_k(-) = \Phi_{k-1}\hat x_{k-1}(+), \qquad (4.25)$$

respectively. Subtract $x_k$ from both sides of the last equation to obtain the equations

$$\hat x_k(-) - x_k = \Phi_{k-1}\hat x_{k-1}(+) - x_k,$$
$$\tilde x_k(-) = \Phi_{k-1}[\hat x_{k-1}(+) - x_{k-1}] - w_{k-1} = \Phi_{k-1}\tilde x_{k-1}(+) - w_{k-1}$$

for the propagation of the estimation error $\tilde x$. Postmultiply it by $\tilde x_k^T(-)$ (on both sides of the equation) and take the expected values. Using the fact that $E\langle \tilde x_{k-1}w_{k-1}^T\rangle = 0$, one obtains the result

$$P_k(-) \overset{\rm def}{=} E\langle \tilde x_k(-)\tilde x_k^T(-)\rangle = \Phi_{k-1}E\langle \tilde x_{k-1}(+)\tilde x_{k-1}^T(+)\rangle\Phi_{k-1}^T + E\langle w_{k-1}w_{k-1}^T\rangle = \Phi_{k-1}P_{k-1}(+)\Phi_{k-1}^T + Q_{k-1}, \qquad (4.26)$$

which gives the a priori value of the covariance matrix of estimation uncertainty as a function of the previous a posteriori value.

4.2.1 Summary of Equations for the Discrete-Time Kalman Estimator

The equations derived in the previous section are summarized in Table 4.3. In this formulation of the filter equations, G has been combined with the plant covariance by multiplying by $G_{k-1}$ and $G_{k-1}^T$, for example,

$$Q_{k-1} = G_{k-1}\,E\langle w_{k-1}w_{k-1}^T\rangle\,G_{k-1}^T.$$

The relation of the filter to the system is illustrated in the block diagram of Figure 4.1. The basic steps of the computational procedure for the discrete-time Kalman estimator are as follows:

1. Compute $P_k(-)$ using $P_{k-1}(+)$, $\Phi_{k-1}$, and $Q_{k-1}$.
2. Compute $\bar K_k$ using $P_k(-)$ (computed in step 1), $H_k$, and $R_k$.
3. Compute $P_k(+)$ using $\bar K_k$ (computed in step 2) and $P_k(-)$ (from step 1).
4. Compute successive values of $\hat x_k(+)$ recursively using the computed values of $\bar K_k$ (from step 2), the given initial estimate $\hat x_0$, and the input data $z_k$.

TABLE 4.3 Discrete-Time Kalman Filter Equations

  System dynamic model:      $x_k = \Phi_{k-1}x_{k-1} + w_{k-1}$,  $w_k \sim \mathcal N(0, Q_k)$
  Measurement model:         $z_k = H_k x_k + v_k$,  $v_k \sim \mathcal N(0, R_k)$
  Initial conditions:        $E\langle x_0\rangle = \hat x_0$,  $E\langle \tilde x_0\tilde x_0^T\rangle = P_0$
  Independence assumption:   $E\langle w_k v_j^T\rangle = 0$ for all k and j
  State estimate extrapolation (Equation 4.25):    $\hat x_k(-) = \Phi_{k-1}\hat x_{k-1}(+)$
  Error covariance extrapolation (Equation 4.26):  $P_k(-) = \Phi_{k-1}P_{k-1}(+)\Phi_{k-1}^T + Q_{k-1}$
  State estimate observational update (Equation 4.21):  $\hat x_k(+) = \hat x_k(-) + \bar K_k[z_k - H_k\hat x_k(-)]$
  Error covariance update (Equation 4.24):         $P_k(+) = [I - \bar K_k H_k]\,P_k(-)$
  Kalman gain matrix (Equation 4.19):              $\bar K_k = P_k(-)H_k^T[H_k P_k(-)H_k^T + R_k]^{-1}$

Fig. 4.1 Block diagram of system, measurement model, and discrete-time Kalman filter.
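The four steps map directly onto a few lines of MATLAB. The sketch below (an illustration consistent with Table 4.3, not the book's companion code) runs one predict/update cycle per measurement, reusing the simulated model and data from the sketch in Section 4.1.1; the initial estimate and covariance are assumptions:

  % Discrete-time Kalman filter: one predict/update cycle per measurement
  x0hat = [0; 1];  P0 = eye(2);            % assumed initial estimate and covariance
  xhat = x0hat;  P = P0;
  for k = 1:size(z,2)
    xhat = Phi*xhat;                       % state extrapolation, Eq. 4.25
    P    = Phi*P*Phi' + Q;                 % covariance extrapolation, Eq. 4.26 (step 1)
    K    = P*H'/(H*P*H' + R);              % Kalman gain, Eq. 4.19 (step 2)
    P    = (eye(2) - K*H)*P;               % covariance update, Eq. 4.24 (step 3)
    xhat = xhat + K*(z(:,k) - H*xhat);     % observational update, Eq. 4.21 (step 4)
  end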
Step 4 of the Kalman filter implementation [computation of $\hat x_k(+)$] can be implemented only for state vector propagation where simulator or real data sets are available. An example of this is given in Section 4.12.

In the design trade-offs, the covariance matrix update (steps 1 and 3) should be checked for symmetry and positive definiteness. Failure to attain either condition is a sign that something is wrong: either a program "bug" or an ill-conditioned problem. To overcome ill-conditioning, another equivalent expression for $P_k(+)$, called the "Joseph form" (after Bucy and Joseph [15]), may be used, as shown in Equation 4.23:

$$P_k(+) = [I - \bar K_k H_k]\,P_k(-)\,[I - \bar K_k H_k]^T + \bar K_k R_k\bar K_k^T.$$

Note that the right-hand side of this equation is the sum of two symmetric matrices. The first of these is positive definite and the second is nonnegative definite, thereby making $P_k(+)$ a positive-definite matrix. There are many other forms for $\bar K_k$ and $P_k(+)$ that might not be as useful for robust computation; some of the alternatives can be found in Jazwinski [23], Kailath [24], and Sorenson [46].

It can be shown that the state vector update, Kalman gain, and error covariance equations represent an asymptotically stable system, and therefore the estimate of the state $\hat x_k$ becomes independent of the initial estimate $\hat x_0$, $P_0$ as k is increased.

Figure 4.2 shows a typical time sequence of values assumed by the ith component of the estimated state vector (plotted with solid circles) and its corresponding variance of estimation uncertainty (plotted with open circles). The arrows show the successive values assumed by the variables, with the annotation (in parentheses) on the arrows indicating which input variables define the indicated transitions. Note that each variable assumes two distinct values at each discrete time: its a priori value, corresponding to the value before the information in the measurement is used, and the a posteriori value, corresponding to the value after the information is used.

Fig. 4.2 Representative sequence of values of filter variables in discrete time.

EXAMPLE 4.1 Let the system dynamics and observations be given by the following equations:

$$x_k = x_{k-1} + w_{k-1}, \qquad z_k = x_k + v_k,$$
$$E\langle v_k\rangle = E\langle w_k\rangle = 0, \qquad E\langle v_{k_1}v_{k_2}\rangle = 2\,\Delta(k_2 - k_1), \qquad E\langle w_{k_1}w_{k_2}\rangle = \Delta(k_2 - k_1),$$
$$z_1 = 2, \qquad z_2 = 3,$$
$$E\langle x(0)\rangle = \hat x_0 = 1, \qquad E\langle [x(0) - \hat x_0][x(0) - \hat x_0]^T\rangle = P_0 = 10.$$

The objective is to find $\hat x_3$ and the steady-state covariance matrix $P_\infty$. One can use the equations in Table 4.3 with $\Phi = 1 = H$, $Q = 1$, and $R = 2$.
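In MATLAB, the recursion called for here is only a few lines. The sketch below is an illustration (the book's worked solution is not reproduced in this excerpt); the steady-state remark in the comments follows from the scalar Riccati equation:

  % Example 4.1: scalar Kalman filter with Phi = H = 1, Q = 1, R = 2
  Phi = 1; H = 1; Q = 1; R = 2;         % given model parameters
  z = [2 3];                            % given measurements z1, z2
  xhat = 1;  P = 10;                    % given initial estimate and covariance
  for k = 1:numel(z)
    P = Phi*P*Phi' + Q;                 % P_k(-), Eq. 4.26
    K = P*H/(H*P*H + R);                % gain, Eq. 4.19
    xhat = xhat + K*(z(k) - H*xhat);    % Eq. 4.21
    P = (1 - K*H)*P;                    % P_k(+), Eq. 4.24
  end
  xhat3 = Phi*xhat;                     % with Phi = 1, the prediction x3hat(-) = x2hat(+)
  % Steady state: P(-) = P(+) + Q and P(+) = [1 - K H] P(-) combine into the
  % scalar Riccati equation p^2 + p - 2 = 0, so P_inf(+) = 1 and P_inf(-) = 2.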
In the general time-invariant case,

$$Q_{\Delta t} = e^{F\Delta t}\left(\int_0^{\Delta t} e^{-Ft}\,Q_t\,e^{-F^T t}\,dt\right)e^{F^T\Delta t},$$

where $Q_t$ is the constant covariance of process noise for continuous time and $Q_{\Delta t}$ is its constant counterpart for discrete-time intervals equal to $\Delta t$.

4.10.2 Relationship between R(t) and $R_k$

This depends upon the way that the discrete-time sensor processes the noise. If it is an integrating sensor, then

$$v_k = \int_{t_{k-1}}^{t_k} v(t)\,dt, \qquad (4.97)$$
$$R_k = \bar R = \frac{1}{t_k - t_{k-1}}\int_{t_{k-1}}^{t_k} R(t)\,dt, \qquad (4.98,\ 4.99)$$

where $\bar R$ is the time-averaged value of $R(t)$ over the interval $t_{k-1} < t \le t_k$.

4.11 MODEL EQUATIONS FOR TRANSFORMED STATE VARIABLES

The question to be addressed here is: what happens to the Kalman filter model equations when the state variables and measurement variables are redefined by linear transformations? The answer to this question can be derived as a set of formulas giving the new model equations in terms of the parameters of the "old" model equations and the linear transformations relating the two sets of variables. In Chapter 7, these formulas will be used to simplify the model equations.

4.11.1 Linear Transformations of State Variables

These are changes of variables by which the "new" state and measurement variables are linear combinations of the respective old state and measurement variables. Such transformations can be expressed in the form

$$\bar x_k = A_k x_k, \qquad (4.100)$$
$$\bar z_k = B_k z_k, \qquad (4.101)$$

where x and z are the old variables and $\bar x$ and $\bar z$ are the new state vector and measurement, respectively.

Matrix Constraints. One must further assume that for each discrete-time index k, $A_k$ is a nonsingular $n \times n$ matrix. The requirements on $B_k$ are less stringent. One need only assume that it is conformable for the product $B_k H_k$, that is, that $B_k$ is a matrix with $\ell$ columns. The dimension of $\bar z_k$ is arbitrary and can depend on k.

4.11.2 New Model Equations

With the above assumptions, the corresponding state, measurement, and state uncertainty covariance equations of the Kalman filter model are transformed to the following forms:

$$\bar x_{k+1} = \bar\Phi_k\bar x_k + \bar w_k, \qquad (4.102)$$
$$\bar z_k = \bar H_k\bar x_k + \bar v_k, \qquad (4.103)$$
$$\bar P_k(+) = \bar P_k(-) - \bar P_k(-)\bar H_k^T[\bar H_k\bar P_k(-)\bar H_k^T + \bar R_k]^{-1}\bar H_k\bar P_k(-), \qquad (4.104)$$
$$\bar P_{k+1}(-) = \bar\Phi_k\bar P_k(+)\bar\Phi_k^T + \bar Q_k, \qquad (4.105)$$

where the new model parameters are

$$\bar\Phi_k = A_k\Phi_k A_k^{-1}, \qquad (4.106)$$
$$\bar H_k = B_k H_k A_k^{-1}, \qquad (4.107)$$
$$\bar Q_k = E\langle \bar w_k\bar w_k^T\rangle = A_k Q_k A_k^T, \qquad (4.108,\ 4.109)$$
$$\bar R_k = E\langle \bar v_k\bar v_k^T\rangle = B_k R_k B_k^T, \qquad (4.110,\ 4.111)$$

and the new state estimation uncertainty covariance matrices are

$$\bar P_k(\pm) = A_k P_k(\pm)A_k^T. \qquad (4.112)$$

4.12 APPLICATION OF KALMAN FILTERS

The Kalman filter has been applied to inertial navigation [15, 45, 167], sensor calibration [168], radar tracking [18], manufacturing [47], economics [30], signal processing [47], and freeway traffic modeling [166], to cite a few examples. This section shows some applications of the programs provided on the companion floppy diskette. A simple example of a second-order underdamped oscillator is given here to illustrate the application of the equations in Table 4.3. This harmonic oscillator is an approximation of the short-period longitudinal dynamics of an aircraft [8].

EXAMPLE 4.3 Consider a linear, underdamped, second-order system with displacement $x_1(t)$, rate $x_2(t)$, damping ratio $\zeta$, (undamped) natural frequency $\omega$ (in rad/s), and a constant driving term of 12.0 with additive, normally distributed white noise w(t). The second-order continuous-time dynamic equation

$$\ddot x_1(t) + 2\zeta\omega\,\dot x_1(t) + \omega^2 x_1(t) = 12 + w(t)$$

can be written in state-space form via the state-space techniques of Chapter 2:

$$\begin{bmatrix}\dot x_1(t)\\ \dot x_2(t)\end{bmatrix} = \begin{bmatrix}0 & 1\\ -\omega^2 & -2\zeta\omega\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}w(t) + \begin{bmatrix}0\\ 12\end{bmatrix}.$$

The observation equation is

$$z(t) = x_1(t) + v(t).$$

Generate 100 data points with plant noise and measurement noise equal to zero. Then estimate $\hat x_1(t)$ and $\hat x_2(t)$ with initial conditions $x_1(0)$ (ft) and $x_2(0)$ (ft/s), initial covariance P(0), and the parameter values $Q = 4.47\ (\mathrm{ft/s})^2$, $R = 0.01\ \mathrm{ft}^2$, and $\zeta = 0.2$.

Equations 4.21, 4.24, 4.25, and 4.26 were programmed in MATLAB on a PC (see Appendix A). Figure 4.4 shows the resulting estimates of position and velocity using the noise-free data generated from simulating the above second-order equation. Figure 4.5 shows the corresponding RMS uncertainties in position and velocity (top plot), the correlation coefficient between position and velocity (middle plot), and the Kalman gains (bottom plot). These results were generated from the accompanying MATLAB program exam43.m described in Appendix A.

Fig. 4.4 Estimated position (ft) and velocity (ft/s) versus time (s).
Fig. 4.5 RMS uncertainties in position and velocity, correlation coefficient, and Kalman gains.
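A sketch in the spirit of exam43.m follows. The numerical values of $\omega$, x(0), and P(0) do not survive in this copy, so the values below, along with the sample time, are illustrative assumptions:

  % Example 4.3 sketch: discrete KF for the underdamped oscillator
  zeta = 0.2;  omega = 5;  dt = 0.1;    % omega and dt are assumed values
  F   = [0 1; -omega^2 -2*zeta*omega];  % continuous-time dynamics matrix
  u   = [0; 12];                        % constant driving term
  Phi = expm(F*dt);                     % discrete transition matrix
  ud  = F\(Phi - eye(2))*u;             % zero-order-hold discretized input
  H   = [1 0];  R = 0.01;
  Q   = 4.47*[0 0; 0 1]*dt;             % first-order discretization of Q = 4.47 (ft/s)^2
  x   = [0; 0];                         % assumed initial truth state
  xhat = [0; 0];  P = 2*eye(2);         % assumed initial estimate and covariance
  for k = 1:100
    x  = Phi*x + ud;                    % noise-free truth data, per the example
    zk = H*x;
    xhat = Phi*xhat + ud;  P = Phi*P*Phi' + Q;           % extrapolation (4.25, 4.26)
    K  = P*H'/(H*P*H' + R);                              % gain (4.19)
    xhat = xhat + K*(zk - H*xhat);  P = (eye(2)-K*H)*P;  % update (4.21, 4.24)
  end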
EXAMPLE 4.4 This example is that of a pulsed-radar tracking system. In this system, radar pulses are sent out and return signals are processed by the Kalman filter in order to determine the position of maneuvering airborne objects [137]. This example's equations are drawn from IEEE papers [219, 200].

The difference equations of the dynamics in state-space form are

$$x_k = \begin{bmatrix} 1 & T & 0 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & \rho & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & T & 0\\ 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & \rho \end{bmatrix} x_{k-1} + \begin{bmatrix} 0\\ 0\\ w^1_{k-1}\\ 0\\ 0\\ w^2_{k-1} \end{bmatrix}.$$

The discrete-time observation equation is given by

$$z_k = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix} x_k + \begin{bmatrix} v^1_k\\ v^2_k \end{bmatrix},$$

where

  $x_k^T = [\,r_k\ \ \dot r_k\ \ U_k^1\ \ \theta_k\ \ \dot\theta_k\ \ U_k^2\,]$,
  $r_k$ = range of the vehicle at time k,
  $\dot r_k$ = range rate of the vehicle at time k,
  $U_k^1$ = maneuver-correlated state noise driving the range channel,
  $\theta_k$ = bearing of the vehicle at time k,
  $\dot\theta_k$ = bearing rate of the vehicle at time k,
  $U_k^2$ = maneuver-correlated state noise driving the bearing channel,
  T = sampling period in seconds,
  $w_k^T = [\,w_k^1\ \ w_k^2\,]$ are zero-mean white-noise sequences with covariances $\sigma_1^2$ and $\sigma_2^2$, respectively,
  $v_k^T = [\,v_k^1\ \ v_k^2\,]$ are zero-mean white sensor noise sequences with covariances $\sigma_r^2$ and $\sigma_\theta^2$, respectively,

and $w_k$ and $v_k$ are uncorrelated. The correlation coefficient of the maneuver noise is

$$\rho = \frac{E\langle U_k U_{k-1}\rangle}{\sigma_m^2} = e^{-\lambda T},$$

where $\sigma_m^2$ is the maneuver variance and $\lambda$ the inverse of the average maneuver duration (so that $\rho \approx 0$ for $T \gg 1/\lambda$).

The shaping filter for whitening the maneuver noise is given by

$$U_k^1 = \rho\,U_{k-1}^1 + w_{k-1}^1,$$

which drives the range rate ($\dot r_k$) state of the vehicle, and

$$U_k^2 = \rho\,U_{k-1}^2 + w_{k-1}^2,$$

which drives the bearing rate ($\dot\theta_k$) state of the vehicle. The derivation of the discrete-time shaping filter is given in Section 3.6 with examples. The range, range rate, bearing, and bearing rate equations have been augmented by the shaping filter equations, increasing the dimension of the state vector from 4 to 6.

Covariance and gain plots for this system are produced using the Kalman filter program of Appendix A. The following initial covariance $P_0$, plant noise Q, and measurement noise R are used to generate the covariance results:

$$P_0 = \begin{bmatrix} \sigma_r^2 & \sigma_r^2/T & 0 & 0 & 0 & 0\\ \sigma_r^2/T & 2\sigma_r^2/T^2 + \sigma_1^2 & 0 & 0 & 0 & 0\\ 0 & 0 & \sigma_1^2 & 0 & 0 & 0\\ 0 & 0 & 0 & \sigma_\theta^2 & \sigma_\theta^2/T & 0\\ 0 & 0 & 0 & \sigma_\theta^2/T & 2\sigma_\theta^2/T^2 + \sigma_2^2 & 0\\ 0 & 0 & 0 & 0 & 0 & \sigma_2^2 \end{bmatrix},$$

$$Q = \mathrm{diag}(0,\ 0,\ \sigma_1^2,\ 0,\ 0,\ \sigma_2^2), \qquad R = \begin{bmatrix} \sigma_r^2 & 0\\ 0 & \sigma_\theta^2 \end{bmatrix}.$$

Here $\rho = 0.5$ and T = 5, 10, 15 s, respectively. Also,

$$\sigma_r^2 = (1000\ \mathrm{m})^2, \quad \sigma_1^2 = (103/3)^2, \quad \sigma_\theta^2 = (0.017\ \mathrm{rad})^2, \quad \sigma_2^2 = 1.3 \times 10^{-8}.$$

Some parts of this example are discussed in [100]. Results of the covariance and Kalman gain plots are shown in Figures 4.6-4.8. Convergence of the diagonal elements of the covariance matrix is shown in these figures for the intervals (5, 10, 15 s), and selected Kalman gain values are shown for the various sampling times. These results were generated using the accompanying MATLAB program exam44.m described in Appendix A.

Fig. 4.6 Covariances.
Fig. 4.7 Covariances and Kalman gains.
Fig. 4.8 Kalman gains (bearing, bearing rate, range rate noise, and bearing rate noise gains versus time, for 5- and 15-s sampling).
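A compact sketch of the covariance and gain iteration for this six-state model follows; it simply restates Table 4.3 with the example's parameters. The choice T = 10 s is one of the three stated sampling periods, and the iteration count is an assumption:

  % Example 4.4 sketch: covariance/gain iteration for the radar tracker
  T = 10;  rho = 0.5;
  s_r2 = 1000^2;   s_12 = (103/3)^2;    % sigma_r^2, sigma_1^2
  s_t2 = 0.017^2;  s_22 = 1.3e-8;       % sigma_theta^2, sigma_2^2
  Phi = [1 T 0 0 0 0; 0 1 1 0 0 0; 0 0 rho 0 0 0;
         0 0 0 1 T 0; 0 0 0 0 1 1; 0 0 0 0 0 rho];
  H = [1 0 0 0 0 0; 0 0 0 1 0 0];
  Q = diag([0 0 s_12 0 0 s_22]);
  R = diag([s_r2 s_t2]);
  P = [s_r2   s_r2/T          0     0      0               0;
       s_r2/T 2*s_r2/T^2+s_12 0     0      0               0;
       0      0               s_12  0      0               0;
       0      0               0     s_t2   s_t2/T          0;
       0      0               0     s_t2/T 2*s_t2/T^2+s_22 0;
       0      0               0     0      0               s_22];
  for k = 1:40                          % iterate toward steady state
    P = Phi*P*Phi' + Q;                 % extrapolation
    K = P*H'/(H*P*H' + R);              % Kalman gain
    P = (eye(6) - K*H)*P;               % update
  end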
4.13 SMOOTHERS

A smoother estimates the state of a system at time t using measurements made both before and after time t. The accuracy of a smoother is generally superior to that of a filter, because it uses more measurements for its estimate. Smoothers are usually divided into three types:

- Fixed-interval smoothers use all the measurements over a fixed interval to estimate the system state at all times in the same interval. This type of smoother is most often used for off-line processing, to get the very best estimate of the system state over the entire time interval.
- Fixed-point smoothers estimate the system state at a fixed time in the past, given the measurements up to the current time. This type of smoother is used when the state estimate is needed at only one time in the interval, such as for estimating the miss distance between two objects being tracked by radar.
- Fixed-lag smoothers estimate the system state at a fixed time interval lagging the time of the current measurement. This type of smoother trades estimate latency for more accuracy.

These can all be derived from the Kalman filter model. The general derivation methodology uses the Kalman filter for the measurements up to each time t at which the state is to be estimated, combined with another algorithm derived from the Kalman filter for the measurements beyond that time. This second algorithm can be derived by running the Kalman filter backward from the last measurement to the measurement just past t, then optimally combining the two independent estimates (forward and backward) of the state at time t, based on the two independent sets of measurements. The resulting formulas generally need to be modified for more efficient and robust implementation.

Smoothers derived in this way appeared in the technical literature soon after the introduction of the Kalman filter. We present here a smoother implementation of each type. These are not necessarily in the best forms for implementation; derivations of these, and of more numerically stable implementations (including "square-root" and "information" forms), can be found in many textbooks on the general subject of estimation (e.g., Anderson and Moore [1], Bierman [7], Gelb et al. [21]).

Both the fixed-lag smoother and the fixed-point smoother can be implemented in real time, as the measurements are made. The fixed-interval smoother can be implemented by a forward (filtering) pass through all the measurements, followed by a backward (smoothing) pass.

4.13.1 Rauch-Tung-Striebel Two-Pass Smoother

This fixed-interval smoother implementation was derived by H. Rauch, K. Tung, and C. Striebel and published in 1965 [21]. The first (forward) pass uses a Kalman filter but saves the intermediate results $\hat x_k(-)$, $\hat x_k(+)$, $P_k(-)$, and $P_k(+)$ at each measurement time $t_k$. The second pass runs backward in time from the time $t_N$ of the last measurement, computing the smoothed state estimate from the intermediate results stored on the forward pass. The smoothed estimate (designated by the subscript [s]) is initialized with the value

$$\hat x_{[s]N} = \hat x_N(+), \qquad (4.113)$$

then computed recursively by the formulas

$$\hat x_{[s]k} = \hat x_k(+) + A_k[\hat x_{[s]k+1} - \hat x_{k+1}(-)], \qquad (4.114)$$
$$A_k = P_k(+)\,\Phi_k^T\,P_{k+1}^{-1}(-). \qquad (4.115)$$

The covariance of uncertainty of the smoothed estimate can also be computed on the second pass,

$$P_{[s]k} = P_k(+) + A_k[P_{[s]k+1} - P_{k+1}(-)]A_k^T, \qquad (4.116)$$

although this is not a necessary part of the smoother implementation; it should be computed only if it is of sufficient interest. The MATLAB m-file RTSvsKF.m, described in Appendix A, demonstrates the relative performance of this smoother and the Kalman filter.
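A two-pass sketch of Equations 4.113-4.116 follows. It is illustrative, not the companion RTSvsKF.m; the model, measurement record, and initial values are stand-in assumptions:

  % Rauch-Tung-Striebel smoother: forward filter pass, then backward pass
  Phi = [1 1; 0 1]; H = [1 0]; Q = 0.01*eye(2); R = 0.25;  % stand-in model
  N = 100;  z = randn(1,N);             % stand-in measurement record
  n = 2;  xhat = [0; 1];  P = eye(n);   % stand-in initial estimate
  xm = zeros(n,N); xp = xm; Pm = zeros(n,n,N); Pp = Pm;
  for k = 1:N                           % forward (filtering) pass
    xm(:,k) = Phi*xhat;  Pm(:,:,k) = Phi*P*Phi' + Q;
    K = Pm(:,:,k)*H'/(H*Pm(:,:,k)*H' + R);
    xp(:,k) = xm(:,k) + K*(z(:,k) - H*xm(:,k));
    Pp(:,:,k) = (eye(n) - K*H)*Pm(:,:,k);
    xhat = xp(:,k);  P = Pp(:,:,k);
  end
  xs = xp;  Ps = Pp;                    % initialization, Eq. 4.113
  for k = N-1:-1:1                      % backward (smoothing) pass
    A = Pp(:,:,k)*Phi'/Pm(:,:,k+1);                            % Eq. 4.115
    xs(:,k) = xp(:,k) + A*(xs(:,k+1) - xm(:,k+1));             % Eq. 4.114
    Ps(:,:,k) = Pp(:,:,k) + A*(Ps(:,:,k+1) - Pm(:,:,k+1))*A';  % Eq. 4.116
  end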
4.13.2 A Fixed-Point Smoother

This type of smoother includes a Kalman filter to estimate the state at the current time $t_k$ using the measurements up to time $t_k$, then adds the following equations to obtain a smoothed estimate of the state at a fixed time $t_i < t_k$:

$$\hat x_{[s]i|k} = \hat x_{[s]i|k-1} + B_k\bar K_k[z_k - H_k\hat x_k(-)], \qquad (4.117)$$
$$B_k = B_{k-1}\,P_{k-1}(+)\,\Phi_{k-1}^T\,P_k^{-1}(-), \qquad (4.118)$$

where the subscript notation [s]i|k refers to the smoothed estimate of the state at time $t_i$, given the measurements up to time $t_k$. (A derivation and an application of this technique to the analysis of inertial navigation system test data may be found in [169].) The values of $\hat x_k(-)$, $\bar K_k$, $z_k$, $H_k$, and $P_k$ are computed in the Kalman filter, and the initial value is $B_i = I$, the identity matrix. The covariance of uncertainty of the smoothed estimate can also be computed by the formula

$$P_{[s]i|k} = P_{[s]i|k-1} + B_k[P_k(+) - P_k(-)]B_k^T, \qquad (4.119)$$

although this is not a necessary part of the smoother implementation.
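Equations 4.117-4.119 graft directly onto a forward Kalman filter loop. The sketch below uses the same stand-in model as the smoother sketch above, with an assumed fixed index ti:

  % Fixed-point smoother sketch: smooth the state at fixed time index ti
  Phi = [1 1; 0 1]; H = [1 0]; Q = 0.01*eye(2); R = 0.25;  % stand-in model
  N = 100;  z = randn(1,N);  n = 2;  ti = 10;
  xhat = [0; 1];  P = eye(n);  B = eye(n);
  for k = 1:N
    xm = Phi*xhat;  Pm = Phi*P*Phi' + Q;        % a priori values
    K  = Pm*H'/(H*Pm*H' + R);
    if k == ti                                  % initialize at t_i with the filter output
      xs = xm + K*(z(:,k) - H*xm);
      Ps = (eye(n) - K*H)*Pm;
    elseif k > ti
      B  = B*P*Phi'/Pm;                         % Eq. 4.118 (P still holds P_{k-1}(+))
      xs = xs + B*K*(z(:,k) - H*xm);            % Eq. 4.117
      Ps = Ps + B*((eye(n) - K*H)*Pm - Pm)*B';  % Eq. 4.119
    end
    xhat = xm + K*(z(:,k) - H*xm);              % regular filter update
    P    = (eye(n) - K*H)*Pm;
  end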
In addition to a Kalman ®lter 164 LINEAR OPTIMAL FILTERS AND PREDICTORS implementation for the state estimate at time tk , the following equations must be implemented to obtain the smoothed estimate of the state at time tkÀ` : ^ ^ ^ x‰sŠk‡1À` ˆ FkÀ` x‰sŠkÀ` ‡ QkÀ` FT PkÀ` …‡†…^ ‰sŠkÀ` À xkÀ` …‡†† x kÀ` ^ ‡ Bk‡1 K k‡1 …zk‡1 À Hk‡1 Fk xk …‡††; Bk‡1 ˆ …4:120† À1 Bk Pk …‡†FT Pk‡1 …À†: k …4:121† The ®rst ` steps of a ®xed-lag smoother must be implemented as a ®xed-point smoother, with the ®xed point at the initial time This procedure also initializes Bk For time-invariant systems, the steady-state values of the gainlike expressions ^ QkÀ` FT PkÀ` …‡† and Bk‡1 K k‡1 can be used with the stored values of xk …‡† kÀ` 4.14 4.14.1 SUMMARY Points to Remember The optimal linear estimator is equivalent to the general (nonlinear) optimal estimator if the random processes x and z are jointly normal Therefore, the equations for the discrete-time and continuous-time linear optimal estimators can be derived by using the orthogonality principle of Chapter The discrete-time estimator (Kalman ®lter) has been described, including its implementation equations and block diagram description The continuous-time estimator (Kalman±Bucy ®lter) is also described Prediction is equivalent to ®ltering when measurements (system outputs) are not available Implementation equations for continuous-time and discrete-time predictors have been given, and the problem of missing data has been discussed in detail The estimator equations for the case that there is correlation between plant and measurement noise sources and correlated measurement errors were discussed Relationships between stationary continuous-time and Kalman ®lter and Wiener ®lters were covered Methods for solving matrix Riccati differential equations have been included Examples discussed include the applications of the Kalman ®lter to (1) estimating the state (phase and amplitude) of a harmonic oscillator and (2) a discrete-time Kalman ®lter implementation of a ®ve-dimensional radar tracking problem An estimator for the state of a dynamic system at time t, using measurements made after time t, is called a smoother 4.14.2 Important Equations to Remember Kalman Filter The discrete-time model for a linear stochastic system has the form xk ˆ FkÀ1 xkÀ1 ‡ GkÀ1 wkÀ1 ; zk ˆ Hk xk ‡ vk ; 165 PROBLEMS where the zero-mean uncorrelated Gaussian random processes fwk g and fvk g have covariances Qk and Rk , respectively, at time tk The corresponding Kalman ®lter equations have the form ^ ^ xk …À† ˆ FkÀ1 xkÀ1 …‡†; T Pk …À† ˆ FkÀ1 PkÀ1 …‡†FT ‡ GkÀ1 QkÀ1 GkÀ1 ; kÀ1 ^ ^ ^ xk …‡† ˆ xk …À† ‡ K k …zk À Hk xk …À††; T T K k ˆ Pk …À†Hk …Hk Pk …À†Hk ‡ R†À1 ;  Pk …‡† ˆ Pk …À† À Kk Hk Pk …À†; where the …À† indicates the a priori values of the variables (before the information in the measurement is used) and the …‡† indicates the a posteriori values of the variables (after the information in the measurement is used) The variable K is the Kalman gain Kalman±Bucy Filter The continuous-time model for a linear stochastic system has the form d x…t† ˆ F…t†x…t† ‡ G…t†w…t†; dt z…t† ˆ H…t†x…t† ‡ v…t†; where the zero-mean uncorrelated Gaussian random processes fw…t†g and fv…t†g have covariances Q(t) and R(t), respectively, at time t The corresponding Kalman±Bucy ^ ®lter equations for the estimate x of the state variable x, given the output signal z, has the form d ^ x x…t† ˆ F…t†^ …t† ‡ K…t†‰z…t† À H…t†^ …t†Š; x dt K…t† ˆ P…t†H T …t†RÀ1 …t†; d T P…t† ˆ F…t†P…t† ‡ P…t†F T …t† À K…t†R…t†K …t† ‡ G…t†Q…t†GT 
PROBLEMS

4.1 A scalar discrete-time random sequence $x_k$ is given by

$$x_{k+1} = 0.5x_k + w_k, \qquad E\langle x_0\rangle = 0, \quad E\langle x_0^2\rangle = 1, \quad E\langle w_k\rangle = 0, \quad E\langle w_k^2\rangle = 1,$$

where $w_k$ is white noise. The observation equation is given by

$$z_k = x_k + v_k, \qquad E\langle v_k\rangle = 0, \quad E\langle v_k^2\rangle = 1,$$

and $v_k$ is also white noise. The terms $x_0$, $w_k$, and $v_k$ are all Gaussian. Derive a (nonrecursive) expression for $E[x_1\,|\,z_0, z_1, z_2]$.

4.2 For the system given in Problem 4.1:
(a) Write the discrete-time Kalman filter equations.
(b) Provide the correction necessary if $z_2$ was not received.
(c) Derive the loss in terms of the estimate $\hat x_3$ due to missing $z_2$.
(d) Derive the filter for $k \to \infty$ (steady state).
(e) Repeat (d) when every other observation is missed.

4.3 In a single-dimension example of a radar tracking an object by means of track-while-scan, measurements of the continuous-time target trajectory are made at discrete times. The process and measurement models are given by

$$\dot x(t) = -0.5x(t) + w(t), \qquad z_{kT} = x_{kT} + v_{kT},$$

where T is the intersampling interval (assume 1 s for simplicity), and

$$E\langle v_k\rangle = E\langle w(t)\rangle = 0, \quad E\langle w(t_1)w(t_2)\rangle = 1\,\delta(t_2 - t_1), \quad E\langle v_{k_1 T}v_{k_2 T}\rangle = 1\,\Delta(k_2 - k_1), \quad E\langle v_k w^T\rangle = 0.$$

Derive the minimum mean-squared filter of x(t) for all t.

4.4 In Problem 4.3, the measurements are received at discrete times, and each measurement is spread over some nonzero time interval (nonzero radar beam width). The measurement equation of Problem 4.3 can be modified to

$$z_{kT+\eta} = x_{kT+\eta} + v_{kT+\eta},$$

where $k = 0, 1, 2, \ldots$ and $0 \le \eta \le \eta_0$. Let T = 1 s, $\eta_0 = 0.1$ (radar beam width), and let v(t) be a zero-mean white Gaussian process with unit covariance. Derive the minimum mean-squared filter of x(t) for all t.

4.5 Prove the condition in the discussion following Equation 4.9 that $E\langle w_k z_i^T\rangle = 0$ for $1 \le i \le k$ when $w_k$ and $v_k$ are uncorrelated and white.

4.6 In Example 4.4, use white noise as the driving input to the range rate ($\dot r_k$) and bearing rate ($\dot\theta_k$) equations instead of colored noise. This reduces the dimension of the state vector from 6 to 4. Formulate the new observation equation, and generate the covariance and Kalman gain plots for the same values of $P_0$, Q, R, $\sigma_r^2$, $\sigma_1^2$, $\sigma_\theta^2$, and $\sigma_2^2$.

4.7 For the same problem as Problem 4.6, obtain values of the plant covariance Q for the four-state model such that the associated mean-squared estimation uncertainties for range, range rate, bearing, and bearing rate are within 5-10% of those for the six-state model. (Hint: This should be possible because the plant noise is used to model the effects of linearization errors, discretization errors, and other unmodeled effects or approximations. This type of suboptimal filtering will be discussed further in Chapter 7.)

4.8 For the estimation problem modeled by the equations

$$x_k = x_{k-1} + w_{k-1}, \qquad w_k \sim \mathcal N(0, 30)\ \text{and white},$$
$$z_k = x_k + v_k, \qquad v_k \sim \mathcal N(0, 20)\ \text{and white},$$
$$P_0 = 150,$$

calculate the values of $P_k(+)$, $P_k(-)$, and $\bar K_k$ for $k = 1, 2, 3, \ldots$ and $P_\infty(+)$ (the steady-state value).

4.9 Parameter estimation problem. Let x be a zero-mean Gaussian random variable with covariance $P_0$, and let $z_k = x + v_k$ be an observation of x with noise $v_k \sim \mathcal N(0, R)$.
(a) Write the recursion equations for $P_k(+)$, $P_k(-)$, $\bar K_k$, and $\hat x_k$.
(b) What is the value of $\hat x_1$ if R = 0?
(c) What is the value of $\hat x_1$ if $R = +\infty$?
(d) Explain the results of (b) and (c) in terms of measurement uncertainty.

4.10 Assume a stochastic system in continuous time modeled by the equations

$$\dot x(t) = -x(t) + w(t), \qquad w(t) \sim \mathcal N(0, 30),$$
$$z(t) = x(t) + v(t), \qquad v(t) \sim \mathcal N(0, 20).$$

(a) Derive the values of the mean-squared estimation error P(t) and the Kalman gain $\bar K(t)$ for times $t = 1, 2, 3, \ldots$
(b) Solve for the steady-state value of P.

4.11 Show that the matrices $P_k$ and P(t) of Equations 4.23 and 4.37 are symmetric; that is, $P_k^T = P_k$ and $P^T(t) = P(t)$.

4.12 Derive the observability matrices for Example 4.4 and Problem 4.6, and determine whether these systems are observable.

4.13 A vector discrete-time random sequence $x_k$ is given by

$$x_k = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix} x_{k-1} + w_{k-1}, \qquad w_k \sim \mathcal N(0, 1)\ \text{and white}.$$

The observation equation is given by

$$z_k = [\ \ ]\,x_k + v_k, \qquad v_k \sim \mathcal N[0,\ 2 + (-1)^k]\ \text{and white}.$$

Calculate the values of $P_k(+)$, $P_k(-)$, and $\bar K_k$ for $k = 1, \ldots, 10$ and $P_\infty(+)$ (the steady-state value), with

$$P_0 = \begin{bmatrix} 10 & 0\\ 0 & 10 \end{bmatrix}.$$
