
Document: Tracking and Kalman filtering made easy P17 ppt


DOCUMENT INFORMATION

Basic information

Format
Number of pages: 8
File size: 62.83 KB

Content

17 BAYES ALGORITHM WITH ITERATIVE DIFFERENTIAL CORRECTION FOR NONLINEAR SYSTEMS

17.1 DETERMINATION OF UPDATED ESTIMATES

We are now in a position to obtain the updated estimate for the nonlinear observation and target dynamic model cases [5, pp. 424–443]. We shall use the example of the ballistic projectile traveling through the atmosphere for definiteness in our discussion. Assume that the past measurements have permitted us to obtain the state vector estimate $\bar X(t_-)$ at the time $t_-$, the last time observations were made on the target. As done at the end of Section 16.2, it is convenient to designate this past time $t_-$ with the index $k$ and write $\bar X(t_-)$ as $X^*_{k,k}$. Assume that the measurements are being made in polar coordinates while the projectile is being tracked in rectangular coordinates, that is, the state vector is given in rectangular coordinates, as done in Section 16.2. [Although previously the projectile trajectory plane was assumed to contain the radar, we will no longer make this assumption. We will implicitly assume that the radar is located outside the plane of the trajectory. It is left to the reader to extend (16.3-29) and (16.3-30) to this case. This is rather straightforward, and we shall refer to these equations in the following discussions as if this generalization has been made. This extension is given elsewhere [5, pp. 106–110].]

By numerically integrating the differential equation given by (16.3-29) starting with $\bar X(t_-)$ at time $t_-$ we can determine $\bar X(t)$. As before, we now find it convenient to also refer to $\bar X(t) = \bar X_n$ as $X^*_{n,k}$, it being the estimate of the predicted state vector at time $t$ (or $n$) based on the measurements at time $< t$ (or $k < n$). We can compute the transition matrix $\Phi_{n,k}$ by numerical integration of the differential equation given by (16.3-14) with $A$ given by (16.3-30). In turn, $\Phi_{n,k}$ can be used to determine the covariance matrix of $\bar X(t)$ using [5, p. 431]

$$S^*_{n,k} = \Phi(t_n, t_k, \bar X)\, S^*_{k,k}\, \Phi(t_n, t_k, \bar X)^T \tag{17.1-1}$$

assuming we know $S^*_{k,k}$.

Assume that at time $t$ (or $n$) we get the measurement $Y_n$. Thus at time $n$ we have $Y_n$ and the estimate $X^*_{n,k}$, based on the past data, with their corresponding covariance matrices $R_1$ and $S^*_{n,k}$, respectively. What is desired is to combine these to obtain the updated estimate $X^*_{n,n}$. We would like to use the Bayes filter to do this. We can do this by using the linearized nonlinear observation equations. This is done by replacing the nonlinear observation equation (16.2-1) by its linearized version (16.2-9). The linearized observation matrix $M(X^*_{n,n})$ [which replaces the nonlinear $G$ of (16.2-1)] is then used for $M$ in (9.2-1) of the Bayes filter. Also $Y_n$ and $X^*_{n,k}$ are replaced by their differentials in (9.2-1). They are designated as, respectively, $\delta Y_n$ and $\delta X^*_{n,k}$. These differentials are determined shortly.

One might think at first that the Bayes filter can be applied without the use of the linearization of the nonlinear observation equations. This is because in (9.2-1) for $X^*_{n,n}$, we can replace $(Y_n - M X^*_{n,k})$ by $Y_n - G(\bar X_n)$ without using the linearization of the observations. However, as we see shortly, the calculation of $H_n$ requires the use of $M(\bar X_n)$; see (17.1-5a) and (17.1-5b). Also $M(\bar X)$ is needed to calculate $S^*_{n+1,n}$ for use in the next update calculation, as done using (17.1-5b).
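To make the prediction step just described concrete, the following is a minimal numerical sketch: the prior estimate $X^*_{k,k}$ is integrated forward through a nonlinear dynamics model while the transition matrix is integrated alongside it, and the predicted covariance then follows from (17.1-1). The gravity-only dynamics, its Jacobian, the Euler integrator, and all numerical values are illustrative assumptions, not the book's atmospheric-drag model of (16.3-29) and (16.3-30).

```python
# Minimal sketch of the prediction step, under stand-in assumptions: a
# gravity-only ballistic model f() with Jacobian A() replaces the book's
# atmospheric-drag equations, and a simple Euler integrator is used.
import numpy as np

def f(x):
    # State [x, y, vx, vy]; motion under gravity alone (illustrative assumption).
    g = 9.81
    return np.array([x[2], x[3], 0.0, -g])

def A(x):
    # Jacobian of f with respect to the state (constant here since f is linear in x).
    return np.array([[0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0],
                     [0.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0]])

def predict(x_kk, S_kk, t_k, t_n, steps=100):
    """Return X*_{n,k}, Phi(t_n, t_k) and S*_{n,k} of (17.1-1)."""
    dt = (t_n - t_k) / steps
    x = np.array(x_kk, dtype=float)
    Phi = np.eye(len(x))
    for _ in range(steps):
        Phi = Phi + A(x) @ Phi * dt   # integrate Phi_dot = A(x) Phi, cf. (16.3-14)
        x = x + f(x) * dt             # integrate x_dot = f(x), cf. (16.3-29)
    S_nk = Phi @ S_kk @ Phi.T         # covariance propagation, equation (17.1-1)
    return x, Phi, S_nk

# Example: bring a state estimate and its covariance one second forward.
x_kk = np.array([0.0, 1000.0, 300.0, 50.0])
S_kk = np.diag([25.0, 25.0, 4.0, 4.0])
x_nk, Phi_nk, S_nk = predict(x_kk, S_kk, t_k=0.0, t_n=1.0)
```

In practice a higher-order integrator and the full drag dynamics of (16.3-29) and (16.3-30) would replace these stand-ins.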
Physically we need to calculate $M(\bar X_n)$ to find the minimum variance estimate $X^*_{n,n}$ as the weighted sum of $Y_n$ and $X^*_{n,k}$, because to do this we need both of the variates to be combined in the same coordinate system and we need their variances in this same coordinate system. Calculating $M(\bar X_n)$ allows us to do the latter by using (16.2-9). The Bayes filter of (17.1-5) implicitly chooses the coordinates of $X^*_{n,k}$ for the common coordinate system.

The differentials are now determined. Using (16.2-6) with $G$ given by (1.5-3), we can calculate $\bar Y_n$ in terms of $\bar X$ obtained as described above. Using in turn (16.2-7), we calculate $\delta Y_n$ to be given by

$$\delta Y_n = Y_n - \bar Y_n \tag{17.1-2}$$

where

$$\bar Y_n = G(X^*_{n,k}) \tag{17.1-2a}$$

from (16.2-6). For convenience the differential $\delta X^*_{n,k}$ is referenced relative to $X^*_{n,k}$. If we knew $X_n$, the differential $\delta X^*_{n,k}$ could be given by

$$\delta X^*_{n,k} = X^*_{n,k} - X_n \tag{17.1-3}$$

But we do not know $X_n$, it being what we are trying to estimate. As a result, we do the next best thing. The differential is referenced relative to our best estimate of $X$ based on the past data, which is $X^*_{n,k}$. As a result

$$\delta X^*_{n,k} = X^*_{n,k} - X^*_{n,k} = 0 \tag{17.1-4}$$

This may seem strange, but it is due to our choice of reference for $\delta X^*_{n,k}$. This will become clearer as we proceed.

We are now in a position to apply the Bayes filter. In place of $Y_{n+1}$ and $X^*_{n+1,n}$ in (9.2-1) to (9.2-1d) we use, respectively, $\delta Y_n$ and $\delta X^*_{n,k}$ [= 0, because of (17.1-4)]. Moreover, the Bayes filter of (9.2-1) to (9.2-1d) now becomes, for finding our updated differential estimate $\delta X^*_{n,n}$ [5, pp. 431–432],

$$\delta X^*_{n,n} = \delta X^*_{n,k} + H_n\,[\,\delta Y_n - M(X^*_{n,k})\,\delta X^*_{n,k}\,] \tag{17.1-5}$$

where $\delta X^*_{n,k} = 0$ in the above and

$$H_n = S^*_{n,n}\,[M(X^*_{n,k})]^T R_1^{-1} \tag{17.1-5a}$$

$$S^*_{n,n} = \{\,(S^*_{n,k})^{-1} + [M(X^*_{n,k})]^T R_1^{-1}\, M(X^*_{n,k})\,\}^{-1} \tag{17.1-5b}$$

But

$$\delta X^*_{n,n} = X^*_{n,n} - X^*_{n,k} \tag{17.1-6}$$

all the differential vectors being referenced to $X^*_{n,k}$. The updated estimate $X^*_{n,n}$ is thus obtained from (17.1-6) to be

$$X^*_{n,n} = X^*_{n,k} + \delta X^*_{n,n} \tag{17.1-7}$$

This is our desired updated estimate.

In obtaining the above update $X^*_{n,n}$, we have used a linearization of the nonlinear observation equations. The new updated estimate will have a bias, but it should be less than the bias in the original estimate $\bar X_n$. In fact, having the updated estimate $X^*_{n,n}$, it is now possible to iterate the whole process we just went through a second time to obtain a still better estimate for $X^*_{n,n}$. Let us designate the above updated estimate $X^*_{n,n}$ obtained on the first cycle as $(X^*_{n,n})_1$ and correspondingly designate $\delta X^*_{n,n}$ as $(\delta X^*_{n,n})_1$. Now $(X^*_{n,n})_1$ can be used in place of $X^*_{n,k}$ to obtain a new $\delta Y_n$ and $\delta X^*_{n,k}$, which we shall call $(\delta Y_n)_2$ and $(\delta X^*_{n,k})_2$, given by

$$(\delta Y_n)_2 = Y_n - (\bar Y_n)_2 \tag{17.1-8}$$

where

$$(\bar Y_n)_2 = G[(X^*_{n,n})_1] \tag{17.1-8a}$$

and

$$(\delta X^*_{n,k})_2 = X^*_{n,k} - (X^*_{n,n})_1 \tag{17.1-8b}$$

Note that $(\delta X^*_{n,k})_2$ is now not equal to zero. This is because $(X^*_{n,n})_1$ is no longer $X^*_{n,k}$. The covariance of $(\delta X^*_{n,k})_2$ is still $S^*_{n,k}$ and that of $(\delta Y_n)_2$ is still $R_1$. Applying the Bayes filter again, using the new differential measurement and prediction estimate vectors, yields $(\delta X^*_{n,n})_2$ and in turn $(X^*_{n,n})_2$ from

$$(X^*_{n,n})_2 = (X^*_{n,n})_1 + (\delta X^*_{n,n})_2 \tag{17.1-9}$$

This procedure could be repeated, with still better and better estimates obtained for $X^*_{n,n}$.
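The single-measurement update cycle above can be summarized in a short numerical sketch. The range/bearing observation $G$, its Jacobian $M$, the choice to re-evaluate $M$ at the latest reference estimate on each cycle, the stopping tolerance, and all numerical values are illustrative assumptions rather than the book's projectile model.

```python
# Sketch of the iterated differential-correction update of (17.1-2)-(17.1-9),
# assuming a [x, y, vx, vy] state observed in range and bearing; G(), M(), and
# the numbers below are illustrative stand-ins, not the book's model.
import numpy as np

def G(x):
    # Nonlinear observation: Cartesian position -> (range, bearing), cf. (16.2-1).
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def M(x):
    # Jacobian of G at x (velocity components unobserved), cf. (16.2-10).
    r2 = x[0] ** 2 + x[1] ** 2
    r = np.sqrt(r2)
    return np.array([[ x[0] / r,   x[1] / r,  0.0, 0.0],
                     [-x[1] / r2,  x[0] / r2, 0.0, 0.0]])

def iterated_update(x_nk, S_nk, y_n, R1, eps=1e-9, max_cycles=10):
    """Return X*_{n,n} and S*_{n,n} by cycling through (17.1-5)-(17.1-9)."""
    x_est = np.array(x_nk, dtype=float)          # cycle 1 reference is X*_{n,k}
    R1_inv = np.linalg.inv(R1)
    for _ in range(max_cycles):
        Mn = M(x_est)                            # re-linearize at current reference (assumption)
        dy = y_n - G(x_est)                      # (17.1-2) and (17.1-8)
        dx_pred = x_nk - x_est                   # 0 on cycle 1 (17.1-4), (17.1-8b) afterwards
        S_nn = np.linalg.inv(np.linalg.inv(S_nk) + Mn.T @ R1_inv @ Mn)   # (17.1-5b)
        H_n = S_nn @ Mn.T @ R1_inv               # (17.1-5a)
        dx_upd = dx_pred + H_n @ (dy - Mn @ dx_pred)   # (17.1-5)
        x_est = x_est + dx_upd                   # (17.1-7) and (17.1-9)
        if dx_upd @ dx_upd < eps:                # stop once the correction is negligible
            break
    return x_est, S_nn

# Example: one noisy range/bearing measurement updating a predicted state.
x_nk = np.array([1000.0, 2000.0, 30.0, -10.0])
S_nk = np.diag([100.0, 100.0, 4.0, 4.0])
R1 = np.diag([25.0, (0.5e-3) ** 2])
y_n = G(np.array([1010.0, 1990.0])) + np.array([2.0, 1e-3])
x_nn, S_nn = iterated_update(x_nk, S_nk, y_n, R1)
```

On the first cycle $\delta X^*_{n,k}$ is zero, so the correction reduces to $H_n\,\delta Y_n$; on later cycles it is the offset of the original prediction from the new reference, as in (17.1-8b), and the loop stops once successive estimates differ negligibly, anticipating the convergence test (17.1-10) given next.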
The procedure would be terminated when [5, p. 433]

$$[\,(X^*_{n,n})_{r+1} - (X^*_{n,n})_r\,]^T\,[\,(X^*_{n,n})_{r+1} - (X^*_{n,n})_r\,] < \varepsilon \tag{17.1-10}$$

Generally, the first-cycle estimate $(X^*_{n,n})_1$ is sufficiently accurate. The above use of the Bayes algorithm with no iteration is basically the filter developed by Swerling [123] before Kalman; see the Appendix.

Once the final update $X^*_{n,n}$ has been obtained, the whole process would be repeated when a new observation $Y_{n+m}$ is obtained at a later time $n+m$. The subscript $n+m$ is used here instead of $n+1$ to emphasize that the time intervals between measurements need not be equal. Then $X^*_{n+m,n}$ would be obtained by integrating forward the nonlinear equation of motion given by (16.3-1). This $X^*_{n+m,n}$ would be used to obtain $\delta X^*_{n+m,n}$, $\delta Y_{n+m}$, and $M(X^*_{n+m,n})$ using (17.1-2), (17.1-2a), (17.1-4), and (16.2-10). Integrating (16.3-14), $\Phi(t_{n+m}, t_n, \bar X)$ would be obtained, from which in turn $S^*_{n+m,n}$ would be obtained using (17.1-1). Using the Bayes filter, specifically (17.1-5) to (17.1-5b), $\delta X^*_{n+m,n+m}$ would then be obtained and, in turn, the desired next update state vector $X^*_{n+m,n+m}$.

17.2 EXTENSION TO MULTIPLE MEASUREMENT CASE

We will now extend the results of Section 17.1 to the case where a number of measurements, let us say $L+1$ measurements, are simultaneously used to update the target trajectory estimate, as done in Section 9.5 for the Bayes and Kalman filters when the observation scheme and target dynamics model are linear. For concreteness we will still use the example consisting of a projectile passing through the atmosphere. Assume measurements are made at the $L+1$ time instances $t_{n-L}, t_{n-L+1}, \ldots, t_{n-1}$, and $t_n$, where these times are not necessarily equally spaced. Let these $L+1$ measurements be given by

$$Y_{n-L},\ Y_{n-L+1},\ \ldots,\ Y_{n-1},\ Y_n \tag{17.2-1}$$

where $Y_{n-i}$ is a measurement vector of the projectile position in polar coordinates; see (16.2-4). [This is in contrast to (5.2-1), where the measurement $y_{n-i}$ was not a vector but just the measurement of one target parameter, e.g., range.] Let us put the $L+1$ vector measurements in the form

$$Y_{(n)} = \begin{pmatrix} Y_n \\ Y_{n-1} \\ \vdots \\ Y_{n-L} \end{pmatrix} \tag{17.2-2}$$

For the case of the projectile target being considered, the observation scheme and the target dynamics model are nonlinear. To use the measurement vector (17.2-2) for updating the Bayes filter or Kalman filter, as done at the end of Chapter 9 through the use of $T$, requires the linearization of the $L+1$ observations. Using the development given above for linearizing the nonlinear observation and dynamic model, this becomes a simple procedure [5] and will now be detailed.

As before, let $X^*_{k,k}$ be an estimate of the state vector of the projectile based on measurements prior to the $L+1$ new measurements. Using the dynamic equations given by (16.3-1) we can bring $X^*_{k,k}$ forward to the times of the $L+1$ new measurements to obtain $X^*_{n,k}, X^*_{n-1,k}, \ldots, X^*_{n-L,k}$, from which in turn we obtain $\bar Y_n, \bar Y_{n-1}, \ldots, \bar Y_{n-L}$ through the use of (17.1-2a), and then in turn obtain the differential measurement vector

$$\delta Y_{(n)} = \begin{pmatrix} \delta Y_n \\ \delta Y_{n-1} \\ \vdots \\ \delta Y_{n-L} \end{pmatrix} \tag{17.2-3}$$

by the use of (17.1-2). Having the above $\delta Y_{(n)}$, we wish to obtain an update to the predicted $X$'s given by $X^*_{n,k}, X^*_{n-1,k}, \ldots, X^*_{n-L,k}$. However, rather than update all $L+1$ $X$'s, it is better to update a representative $X$ along the trajectory.
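Before detailing how this representative state is updated, here is a minimal sketch of how the stacked measurements of (17.2-2) and the differential measurement vector of (17.2-3) might be assembled. The constant-velocity propagation (standing in for integrating (16.3-1)), the range/bearing observation, the chronological stacking order, and the numbers are all illustrative assumptions.

```python
# Sketch of assembling the stacked differential measurement vector of
# (17.2-2)-(17.2-3): X*_{k,k} is brought forward to each measurement time and
# the predicted measurements are subtracted from the observed ones.  The
# constant-velocity propagate() and range/bearing G() are stand-ins for the
# book's projectile model; measurements are stacked oldest-first here, which is
# fine as long as T_{c,n} and R_(n) use the same ordering.
import numpy as np

def propagate(x_kk, dt):
    # Constant-velocity motion of a [x, y, vx, vy] state (stand-in for integrating (16.3-1)).
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    return F @ x_kk

def G(x):
    # Nonlinear observation: position -> (range, bearing).
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def stacked_differential(x_kk, t_k, meas_times, measurements):
    """Return the predicted states X*_{n-i,k} and the stacked dY_(n) of (17.2-3)."""
    preds, d_y = [], []
    for t, y in zip(meas_times, measurements):
        x_pred = propagate(x_kk, t - t_k)    # X*_{k,k} brought forward to time t
        preds.append(x_pred)
        d_y.append(y - G(x_pred))            # dY_{n-i} = Y_{n-i} - Ybar_{n-i}, cf. (17.1-2)
    return preds, np.concatenate(d_y)

# Example with L + 1 = 3 measurements (noise-free for brevity).
x_kk = np.array([1000.0, 2000.0, 30.0, -10.0])
meas_times = [1.0, 2.0, 3.0]
measurements = [G(propagate(x_kk, t)) for t in meas_times]
preds, dY = stacked_differential(x_kk, 0.0, meas_times, measurements)
```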
This representative $X$ would be updated using all the $L+1$ measurements contained in $Y_{(n)}$ or, equivalently, $\delta Y_{(n)}$. First, using (16.2-9), we obtain

$$\delta Y_{(n)} = \begin{bmatrix} M(\bar X_n)\,\delta X_n \\ M(\bar X_{n-1})\,\delta X_{n-1} \\ \vdots \\ M(\bar X_{n-L})\,\delta X_{n-L} \end{bmatrix} + \begin{bmatrix} N_n \\ N_{n-1} \\ \vdots \\ N_{n-L} \end{bmatrix} \tag{17.2-4}$$

where $M$ is defined by (16.2-10) and where, for simplicity of notation, $\bar X_{n-i}$ is used in place of $X^*_{n-i,k}$, which is $X^*_{k,k}$ brought forward to time $n-i$. It is the $\delta X_n, \delta X_{n-1}, \ldots, \delta X_{n-L}$ that we want to update based on the differential measurement matrix $\delta Y_{(n)}$. Instead, as mentioned above, we will now reference all the differential state vectors $\delta X_{n-i}$ at time $t_{n-i}$, $i = 0, \ldots, L$, to some reference time $t_{c,n} = t_{cn}$. It is generally best to choose the time $t_{cn}$ to be at or near the center observation of the $L+1$ observations. The transition matrix from the time $t_{cn}$ to the time $t_{n-i}$ of any measurement can be obtained by integrating (16.3-14). Using these transition matrices, (17.2-4) becomes

$$\delta Y_{(n)} = \begin{bmatrix} M(\bar X_n)\,\Phi(t_n, t_{cn}, \bar X_{cn})\,\delta X_{cn} \\ M(\bar X_{n-1})\,\Phi(t_{n-1}, t_{cn}, \bar X_{cn})\,\delta X_{cn} \\ \vdots \\ M(\bar X_{n-L})\,\Phi(t_{n-L}, t_{cn}, \bar X_{cn})\,\delta X_{cn} \end{bmatrix} + \begin{bmatrix} N_n \\ N_{n-1} \\ \vdots \\ N_{n-L} \end{bmatrix} \tag{17.2-5}$$

where $\bar X_{cn}$, also designated as $X^*_{cn,k}$, is the value of $X^*_{k,k}$ brought forward to time $t_{cn}$. Equation (17.2-5) can now be written as

$$\delta Y_{(n)} = T_{c,n}\,\delta X_{cn} + N_{(n)} \tag{17.2-6}$$

where

$$T_{c,n} = \begin{bmatrix} M_n\,\Phi(t_n, t_{cn}, \bar X_{cn}) \\ M_{n-1}\,\Phi(t_{n-1}, t_{cn}, \bar X_{cn}) \\ \vdots \\ M_{n-L}\,\Phi(t_{n-L}, t_{cn}, \bar X_{cn}) \end{bmatrix} \tag{17.2-6a}$$

and

$$N_{(n)} = \begin{bmatrix} N_n \\ N_{n-1} \\ \vdots \\ N_{n-L} \end{bmatrix} \tag{17.2-6b}$$

and

$$M_{n-i} = M(\bar X_{n-i}) \tag{17.2-6c}$$

Referencing $\delta X$ relative to $\bar X_{cn} = X^*_{cn,k}$ yields

$$\delta X^*_{cn,k} = X^*_{cn,k} - X^*_{cn,k} = 0 \tag{17.2-7}$$

The above parallels (17.1-4).

We can now apply the Bayes filter of (17.1-5) to (17.1-5b) or the Kalman filter of (9.3-1) to (9.3-1d) to update the projectile trajectory based on the $L+1$ measurements. This is done for the Bayes filter of (17.1-5) to (17.1-5b) by replacing $M$ by $T_{cn}$ and $R_1$ by $R_{(n)}$, which is the covariance matrix of $Y_{(n)}$; specifically, (17.1-5) to (17.1-5b) become

$$\delta X^*_{cn,cn} = \delta X^*_{cn,k} + H_{cn}\,(\delta Y_{(n)} - T_{cn}\,\delta X^*_{cn,k}) \tag{17.2-8}$$

where

$$H_{cn} = S^*_{cn,cn}\,T_{cn}^T\,R_{(n)}^{-1} \tag{17.2-8a}$$

$$S^*_{cn,cn} = [\,(S^*_{cn,k})^{-1} + T_{cn}^T\,R_{(n)}^{-1}\,T_{cn}\,]^{-1} \tag{17.2-8b}$$

and from (17.1-1)

$$S^*_{cn,k} = \Phi(t_{cn}, t_k, \bar X_{cn})\,S^*_{k,k}\,\Phi(t_{cn}, t_k, \bar X_{cn})^T \tag{17.2-8c}$$

Having $\delta X^*_{cn,cn}$, one can obtain the desired update

$$X^*_{cn,cn} = \bar X_{cn} + \delta X^*_{cn,cn} \tag{17.2-9}$$

Having this new first estimate $X^*_{cn,cn}$ for $X$ at time $t_{cn}$, which we now designate as $(X^*_{cn,cn})_1$, we could obtain an improved estimate designated as $(X^*_{cn,cn})_2$. This is done by iterating the whole process described above with $\delta X$ now referenced relative to $(X^*_{cn,cn})_1$, as done in (17.1-8b) when $Y_{(n)}$ consisted of one measurement.
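A first-cycle version of the multiple-measurement update of (17.2-6) to (17.2-9) can be sketched under the same stand-in assumptions as the previous example: constant-velocity transition matrices replace $\Phi(t_{n-i}, t_{cn}, \bar X_{cn})$, the range/bearing Jacobian replaces $M$, and each measurement is assumed to share the same covariance $R_1$ so that $R_{(n)}$ is block diagonal. Since $\delta X^*_{cn,k} = 0$ on the first cycle, the correction is simply $H_{cn}\,\delta Y_{(n)}$.

```python
# Sketch of the first cycle of (17.2-6)-(17.2-9) under illustrative assumptions:
# constant-velocity Phi(), range/bearing Jacobian M(), identical per-scan
# covariance R1, and dX*_{cn,k} = 0.
import numpy as np

def Phi(dt):
    # Constant-velocity transition matrix for [x, y, vx, vy] (stand-in for integrating (16.3-14)).
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    return F

def M(x):
    # Jacobian of the (range, bearing) observation with respect to [x, y, vx, vy].
    r2 = x[0] ** 2 + x[1] ** 2
    r = np.sqrt(r2)
    return np.array([[ x[0] / r,   x[1] / r,  0.0, 0.0],
                     [-x[1] / r2,  x[0] / r2, 0.0, 0.0]])

def multi_update(x_cn, S_cnk, preds, meas_times, t_cn, dY, R1):
    """Return X*_{cn,cn} and S*_{cn,cn} from (17.2-6a) and (17.2-8)-(17.2-9)."""
    # T_{c,n}: stacked M_{n-i} Phi(t_{n-i}, t_cn) row blocks, cf. (17.2-6a).
    T = np.vstack([M(xp) @ Phi(t - t_cn) for xp, t in zip(preds, meas_times)])
    # Inverse of the block-diagonal R_(n) with identical R1 blocks (assumption).
    R_inv = np.kron(np.eye(len(meas_times)), np.linalg.inv(R1))
    S_cncn = np.linalg.inv(np.linalg.inv(S_cnk) + T.T @ R_inv @ T)   # (17.2-8b)
    H_cn = S_cncn @ T.T @ R_inv                                      # (17.2-8a)
    dx = H_cn @ dY                       # (17.2-8) with dX*_{cn,k} = 0
    return x_cn + dx, S_cncn             # (17.2-9)

# Example: three measurements referenced to the middle time t_cn = 2.0 s.
x_kk = np.array([1000.0, 2000.0, 30.0, -10.0])     # prior estimate at t_k = 0
meas_times = [1.0, 2.0, 3.0]
t_cn = 2.0
preds = [Phi(t) @ x_kk for t in meas_times]         # X*_{k,k} brought forward
x_cn = Phi(t_cn) @ x_kk                             # Xbar_cn = X*_{cn,k}
S_cnk = Phi(t_cn) @ np.diag([100.0, 100.0, 4.0, 4.0]) @ Phi(t_cn).T   # (17.2-8c)
R1 = np.diag([25.0, (0.5e-3) ** 2])
dY = np.zeros(2 * len(meas_times))                  # zero residuals for brevity
x_upd, S_upd = multi_update(x_cn, S_cnk, preds, meas_times, t_cn, dY, R1)
```

Choosing $t_{cn}$ at or near the middle measurement, as the text recommends, keeps the extrapolation intervals $t_{n-i} - t_{cn}$ short.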
If, in applying the above recursive Bayes filter, it was assumed that the variance of the estimate based on the past data was infinite, or at least extremely large, then the recursive relation would actually degenerate into a nonrecursive minimum variance estimate based on the most recent $L+1$ measurements, as given by

$$X^*_{cn,cn} = \bar X_{cn} + (T_{cn}^T\,R_{(n)}^{-1}\,T_{cn})^{-1}\,T_{cn}^T\,R_{(n)}^{-1}\,[\,Y_{(n)} - G(\bar X_{cn})\,] \tag{17.2-10}$$

The above equation follows from (4.1-30) with $W$ given by (4.5-4) for the minimum variance estimate.

17.3 HISTORICAL BACKGROUND

The iterative differential correction procedure described in this chapter was first introduced by Gauss in 1795 [5]. There is an interesting story [5, 122, 124] relating to Gauss's development of his least-squares estimate and the iterative differential correction. At that time the astronomers of the world had been looking for a missing planet for about 30 years. There were Mercury, Venus, Earth, Mars, and then the missing planet or planetoid. It has been theorized that because the planet had fragmented into planetoids (asteroids) it was so difficult to locate. It was finally on January 1, 1801, that an Italian astronomer, Giuseppe Piazzi, spotted for the first time one of these planetoids. There was great rejoicing among the world's astronomers. However, the astronomers soon became concerned because the planetoid was out of view after 41 days, and they feared it would possibly not be found for another 30 years. At this time Gauss, who was then 23 years old, gathered the data that Piazzi had obtained on the planetoid Ceres. Over a period of a few months he applied his weighted least-squares estimate and iterative differential correction techniques to determine the orbit of Ceres. In December of 1801 he sent his results to Piazzi, who was then able to sight it again on the last day of 1801.

Date posted: 21/01/2014, 07:20