Handbook of Multisensor Data Fusion, Part 5 (PDF)


DOCUMENT INFORMATION

Basic information

Format: PDF
Number of pages: 53
File size: 650.44 KB

Content

10.2.2.3 Sensors

There is a set of sensors that report observations at an ordered, discrete sequence of (possibly random) times. These sensors may be of different types and may report different information. The set can include radar, sonar, infrared, visual, and other types of sensors. The sensors may report only when they have a contact or on a regular basis. Observations from sensor j take values in the measurement space H_j. Each sensor may have a different measurement space. The probability distribution of each sensor's response, conditioned on the value of the target state s, is assumed to be known. This relationship is captured in the likelihood function for that sensor. The relationship between the sensor response and the target state s may be linear or nonlinear, and the probability distribution representing measurement error may be Gaussian or non-Gaussian.

10.2.2.4 Likelihood Functions

Suppose that by time t observations have been obtained at the set of times 0 ≤ t_1 ≤ … ≤ t_K ≤ t. To allow for the possibility that more than one sensor observation may be received at a given time, let Y_k be the set of sensor observations received at time t_k. Let y_k denote a value of the random variable Y_k. Assume that the likelihood function can be computed as

L_k(y_k \mid s) = \Pr\{Y_k = y_k \mid X(t_k) = s\} \quad \text{for } s \in S.    (10.1)

The computation in Equation 10.1 can account for correlation among sensor responses. If the distribution of the set of sensor observations at time t_k is independent given the target state, then L_k(y_k | s) is computed by taking the product of the probability (density) functions for each observation. If they are correlated, then one must use the joint density function of the observations conditioned on the target state to compute L_k(y_k | s).

Let Y(t) = (Y_1, Y_2, …, Y_K) and y = (y_1, …, y_K). Define L(y | s_1, …, s_K) = Pr{Y(t) = y | X(t_1) = s_1, …, X(t_K) = s_K}. Assume

\Pr\{Y(t) = y \mid X(u) = s_u,\ 0 \le u \le t\} = L(y \mid s_{t_1}, \ldots, s_{t_K}).    (10.2)

Equation 10.2 means that the likelihood of the data Y(t) received through time t depends only on the target states at the times {t_1, …, t_K} and not on the whole target path.

10.2.2.5 Posterior

Define q(s_1, …, s_K) = Pr{X(t_1) = s_1, …, X(t_K) = s_K} to be the prior probability (density) that the process {X(t); t ≥ 0} passes through the states s_1, …, s_K at times t_1, …, t_K. Let p(t_K, s_K) = Pr{X(t_K) = s_K | Y(t_K) = y}. Note that the dependence of p on y has been suppressed. The function p(t_K, ·) is the posterior distribution on X(t_K) given Y(t_K) = y. In mathematical terms, the problem is to compute this posterior distribution. Recall that, from the point of view of Bayesian inference, the posterior distribution on target state represents our knowledge of the target state. All estimates of target state derive from this posterior.

10.2.3 Computing the Posterior

Compute the posterior by the use of Bayes' theorem as follows:

p(t_K, s_K) = \Pr\{X(t_K) = s_K \mid Y(t_K) = y\}
            = \frac{\int L(y \mid s_1, \ldots, s_K)\, q(s_1, \ldots, s_K)\, ds_1 \cdots ds_{K-1}}{\int L(y \mid s_1, \ldots, s_K)\, q(s_1, \ldots, s_K)\, ds_1 \cdots ds_K}.    (10.3)

Computing p(t_K, s_K) can be quite difficult. The method of computation depends on the functional forms of q and L. The two most common approaches are batch computation and a recursive method.
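As a small illustration of the batch approach, the sketch below evaluates the posterior in Equation 10.3 by brute force on a tiny discrete state space: it enumerates every state history, weights each history by an assumed Markov prior and by per-time likelihoods, and then normalizes. The state space, prior, transition matrix, and likelihood values are all hypothetical, not taken from the handbook.

```python
import numpy as np
from itertools import product

# Tiny discrete illustration of the batch computation in Equation 10.3.
S = range(3)                                   # discrete target states (assumed)
prior1 = np.array([0.5, 0.3, 0.2])             # assumed prior on X(t_1)
Q = np.array([[0.8, 0.1, 0.1],                 # Q[i, j] = Pr{next = j | current = i}
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
# Per-time likelihoods L_k(y_k | s), one row per observation time t_1, t_2, t_3.
L = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.6, 0.1],
              [0.1, 0.3, 0.6]])

K = L.shape[0]
posterior = np.zeros(len(S))                   # p(t_K, s_K) for each s_K
for path in product(S, repeat=K):              # sum over all state histories
    q = prior1[path[0]]
    for k in range(1, K):
        q *= Q[path[k - 1], path[k]]           # Markov prior q(s_1, ..., s_K)
    like = np.prod([L[k, path[k]] for k in range(K)])   # product of per-time terms
    posterior[path[-1]] += like * q            # numerator of Equation 10.3
posterior /= posterior.sum()                   # denominator normalizes
print(posterior)
```

The cost of this enumeration grows exponentially with the number of observation times, which is one reason the recursive method described next is usually preferred.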
10.2.3.1 Recursive Method

Two additional assumptions about q and L permit recursive computation of p(t_K, s_K). First, the stochastic process {X(t); t ≥ 0} must be Markovian on the state space S. Second, for i ≠ j, the distribution of Y(t_i) must be independent of Y(t_j) given (X(t_1) = s_1, …, X(t_K) = s_K), so that

L(y \mid s_1, \ldots, s_K) = \prod_{k=1}^{K} L_k(y_k \mid s_k).    (10.4)

The assumption in Equation 10.4 means that the sensor responses (or observations) at time t_k depend only on the target state at time t_k. This is not automatically true. For example, if the target state space is position only and the observation is a velocity measurement, this observation will depend on the target state over some time interval near t_k. The remedy in this case is to add velocity to the target state space. There are other observations, such as the failure of a sonar sensor to detect an underwater target over a period of time, for which the remedy is not so easy or obvious. This observation may depend on the whole past history of target positions and, perhaps, velocities.

Define the transition function q_k(s_k | s_{k-1}) = Pr{X(t_k) = s_k | X(t_{k-1}) = s_{k-1}} for k ≥ 1, and let q_0 be the probability (density) function for X(0). By the Markov assumption,

q(s_1, \ldots, s_K) = \int_S \prod_{k=1}^{K} q_k(s_k \mid s_{k-1})\, q_0(s_0)\, ds_0.    (10.5)

10.2.3.2 Single-Target Recursion

Applying Equations 10.4 and 10.5 to Equation 10.3 results in the basic recursion for single-target tracking given below.

Basic Recursion for Single-Target Tracking

Initialize Distribution:

p(t_0, s_0) = q_0(s_0) \quad \text{for } s_0 \in S.    (10.6)

For k ≥ 1 and s_k ∈ S:

Perform Motion Update:

p^{-}(t_k, s_k) = \int q_k(s_k \mid s_{k-1})\, p(t_{k-1}, s_{k-1})\, ds_{k-1}.    (10.7)

Compute the likelihood function L_k from the observation Y_k = y_k.

Perform Information Update:

p(t_k, s_k) = \frac{1}{C}\, L_k(y_k \mid s_k)\, p^{-}(t_k, s_k),    (10.8)

where C is a normalizing constant.

The motion update in Equation 10.7 accounts for the transition of the target state from time t_{k-1} to t_k. Transitions can represent not only the physical motion of the target, but also changes in other state variables. The information update in Equation 10.8 is accomplished by pointwise multiplication of p^{-}(t_k, s_k) by the likelihood function L_k(y_k | s_k). Likelihood functions replace and generalize the notion of contacts in this view of tracking as a Bayesian inference process. Likelihood functions can represent sensor information such as detections, no detections, Gaussian contacts, bearing observations, measured signal-to-noise ratios, and observed frequencies of a signal. Likelihood functions can represent and incorporate information in situations where the notion of a contact is not meaningful. Subjective information can also be incorporated by using likelihood functions. Examples of likelihood functions are provided in Section 10.2.4.

If there has been no observation at time t_k, then there is no information update, only a motion update. The above recursion does not require the observations to be linear functions of the target state, nor does it require the measurement errors or the probability distributions on target state to be Gaussian. Except in special circumstances, this recursion must be computed numerically. Today's high-powered scientific workstations can compute and display tracking solutions for complex nonlinear trackers. To do this, discretize the state space and use a Markov chain model for target motion, so that Equation 10.7 is computed through the use of discrete transition probabilities. The likelihood functions are also computed on the discrete state space.
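A minimal numerical sketch of this discretized recursion follows, with a uniform prior for Equation 10.6, an assumed nearest-neighbour random-walk transition matrix for Equation 10.7, and a hypothetical Gaussian-shaped likelihood for Equation 10.8; none of these modeling choices comes from the handbook.

```python
import numpy as np

n = 101                                    # discrete target states 0..100 (assumed)
p = np.full(n, 1.0 / n)                    # Equation 10.6: uniform prior q0

# Assumed nearest-neighbour random-walk transition matrix, Q[j, i] = q_k(j | i).
Q = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            Q[j, i] = 1.0
Q /= Q.sum(axis=0, keepdims=True)          # each column sums to one

def motion_update(p):
    """Equation 10.7: p-(t_k, s_k) = sum over s of q_k(s_k | s) p(t_{k-1}, s)."""
    return Q @ p

def information_update(p_minus, likelihood):
    """Equation 10.8: pointwise multiply by L_k(y_k | s_k), then renormalize."""
    p_post = p_minus * likelihood
    return p_post / p_post.sum()

# One update step with a hypothetical Gaussian-shaped position likelihood.
states = np.arange(n)
L_k = np.exp(-0.5 * ((states - 60.0) / 4.0) ** 2)
p = information_update(motion_update(p), L_k)
```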
A numerical implementation of a discrete Bayesian tracker is described in Section 3.3 of Stone et al.3

10.2.4 Likelihood Functions

The use of likelihood functions to represent information is at the heart of Bayesian tracking. In the classical view of tracking, contacts are obtained from sensors that provide estimates of (some components of) the target state at a given time with a specified measurement error. In the classic Kalman filter formulation, a measurement (contact) Y_k at time t_k satisfies the measurement equation

Y_k = M_k X(t_k) + \varepsilon_k,    (10.9)

where
Y_k is an r-dimensional real column vector,
X(t_k) is an l-dimensional real column vector,
M_k is an r × l matrix, and
ε_k ~ N(0, Σ_k).

Note that ~N(µ, Σ) means "has a normal (Gaussian) distribution with mean µ and covariance Σ." In this case, the measurement is a linear function of the target state and the measurement error is Gaussian. This can be expressed in terms of a likelihood function as follows. Let L_G(y | x) = Pr{Y_k = y | X(t_k) = x}. Then

L_G(y \mid x) = (2\pi)^{-r/2} (\det \Sigma_k)^{-1/2} \exp\!\left(-\tfrac{1}{2}(y - M_k x)^{T} \Sigma_k^{-1} (y - M_k x)\right).    (10.10)

Note that the measurement y is data that is known and fixed. The target state x is unknown and varies, so the likelihood function is a function of the target state variable x. Equation 10.10 looks the same as a standard elliptical contact, or estimate of target state, expressed in the form of a multivariate normal distribution, as commonly used in Kalman filters. There is a difference, but it is obscured by the symmetrical positions of y and M_k x in the Gaussian density in Equation 10.10. A likelihood function does not represent an estimate of the target state. It looks at the situation in reverse: for each value of target state x, it calculates the probability (density) of obtaining the measurement y given that the target is in state x. In most cases, likelihood functions are not probability (density) functions on the target state space, and they need not integrate to one over the target state space. In fact, the likelihood function in Equation 10.10 is a probability density on the target state space only when Y_k is l-dimensional and M_k is an l × l matrix.

Suppose one wants to incorporate into a Kalman filter information such as a bearing measurement, speed measurement, range estimate, or the fact that a sensor did or did not detect the target. Each of these is a nonlinear function of the normal Cartesian target state. Separately, a bearing measurement, speed measurement, and range estimate can be handled by forming linear approximations and assuming Gaussian measurement errors, or by switching to special non-Cartesian coordinate systems in which the measurements are linear and, hopefully, the measurement errors are Gaussian. In combining all this information into one tracker, the approximations and the use of disparate coordinate systems become more problematic and dubious. In contrast, the use of likelihood functions to incorporate all this information (and any other information that can be put into the form of a likelihood function) is quite straightforward, no matter how disparate the sensors or their measurement spaces. Section 10.2.4.1 provides a simple example of this process involving a line of bearing measurement and a detection.
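The Gaussian likelihood in Equation 10.10 can be evaluated directly for any candidate state x. The sketch below does this for an assumed two-dimensional state and a one-dimensional linear measurement; the matrices M_k and Σ_k and the measurement value are illustrative, not taken from the text.

```python
import numpy as np

def gaussian_likelihood(y, x, M, Sigma):
    """L_G(y | x) = N(y; M x, Sigma), the form of Equation 10.10."""
    resid = y - M @ x
    r_dim = y.shape[0]
    norm = np.sqrt(((2.0 * np.pi) ** r_dim) * np.linalg.det(Sigma))
    return float(np.exp(-0.5 * resid @ np.linalg.solve(Sigma, resid)) / norm)

# Hypothetical example: the state is x = (x1, x2) and the sensor measures x1 only.
M = np.array([[1.0, 0.0]])                 # 1 x 2 measurement matrix
Sigma = np.array([[4.0]])                  # measurement error covariance
y = np.array([3.0])                        # the fixed, observed measurement

# The likelihood is evaluated as a function of the unknown state x.
print(gaussian_likelihood(y, np.array([2.0, 7.0]), M, Sigma))   # state consistent with y
print(gaussian_likelihood(y, np.array([9.0, 7.0]), M, Sigma))   # state far from y
```

Evaluating the same fixed measurement at different candidate states, as in the two calls above, is exactly the "reverse" reading of Equation 10.10 described in the text.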
10.2.4.1 Line of Bearing Plus Detection Likelihood Functions

Suppose that there is a sensor located in the plane at (70,0) and that it has produced a detection. For this sensor, the probability of detection is a function, P_d(r), of the range r from the sensor. Take the case of an underwater sensor, such as an array of acoustic hydrophones, in a situation where the propagation conditions produce convergence zones of high detection performance that alternate with ranges of poor detection performance. The observation (measurement) in this case is Y = 1 for a detection and 0 for no detection. The likelihood function for detection is L_d(1 | x) = P_d(r(x)), where r(x) is the range from the state x to the sensor. Figure 10.1 shows the likelihood function for this observation.

FIGURE 10.1 Detection likelihood function for a sensor at (70,0).

Suppose that, in addition to the detection, there is a bearing measurement of 135 degrees (measured counterclockwise from the x_1 axis) with a Gaussian measurement error having mean 0 and standard deviation 15 degrees. Figure 10.2 shows the likelihood function for this observation. Notice that, although the measurement error is Gaussian in bearing, it does not produce a Gaussian likelihood function on the target state space. Furthermore, this likelihood function would integrate to infinity over the whole state space.

FIGURE 10.2 Bearing likelihood function for a sensor at (70,0).

The information from these two likelihood functions is combined by pointwise multiplication. Figure 10.3 shows the likelihood function that results from this combination.

FIGURE 10.3 Combined bearing and detection likelihood function.

10.2.4.2 Combining Information Using Likelihood Functions

Although the example of combining likelihood functions presented in Section 10.2.4.1 is simple, it illustrates the power of using likelihood functions to represent and combine information. A likelihood function converts the information in a measurement to a function on the target state space. Since all information is represented on the same state space, it can easily and correctly be combined, regardless of how disparate the sources of the information are. The only limitation is the ability to compute the likelihood function corresponding to the measurement or information to be incorporated. For example, subjective information can often be put into the form of a likelihood function and incorporated into a tracker if desired.
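The following is a sketch of how the two likelihood functions of Section 10.2.4.1 might be built and combined on a position grid. The sensor location, bearing value, and bearing error standard deviation follow the example in the text; the particular ring-shaped P_d(r) curve is an assumed stand-in for the convergence-zone behaviour, since the text gives no formula for it.

```python
import numpy as np

# 2-D position grid for the target state (x1, x2), as in Figures 10.1 through 10.3.
x1, x2 = np.meshgrid(np.linspace(0, 80, 161), np.linspace(0, 80, 161))
sensor = (70.0, 0.0)

# Detection likelihood L_d(1 | x) = Pd(r(x)); the ring-like Pd curve below is an
# assumed stand-in for alternating convergence zones of good and poor detection.
r = np.hypot(x1 - sensor[0], x2 - sensor[1])
pd = 0.1 + 0.8 * np.exp(-0.5 * ((np.mod(r, 35.0) - 5.0) / 4.0) ** 2)
L_detect = pd

# Bearing likelihood: 135 degrees measured counterclockwise from the x1 axis,
# Gaussian error with 15 degree standard deviation (angle differences wrapped).
bearing = np.arctan2(x2 - sensor[1], x1 - sensor[0])
err = np.angle(np.exp(1j * (bearing - np.deg2rad(135.0))))
L_bearing = np.exp(-0.5 * (err / np.deg2rad(15.0)) ** 2)

# Combined likelihood (as in Figure 10.3): pointwise multiplication.
L_combined = L_detect * L_bearing
```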
10.3 Multiple-Target Tracking without Contacts or Association (Unified Tracking)

In this section, the Bayesian tracking model for a single target is extended to multiple targets in a way that allows multiple-target tracking without calling contacts or performing data association.

10.3.1 Multiple-Target Motion Model

In Section 10.2, the prior knowledge about the single target's state and its motion through the target state space S was represented in terms of a stochastic process {X(t); t ≥ 0}, where X(t) is the target state at time t. This motion model is now generalized to multiple targets.

Begin the multiple-target tracking problem at time t = 0. The total number of targets is unknown but bounded by N, which is known. We assume a known bound on the number of targets because it simplifies the presentation and produces no restriction in practice. Designate a region, ᑬ, which defines the boundary of the tracking problem. Activity outside of ᑬ has no importance. For example, we might be interested only in targets having a certain range of speeds or contained within a certain geographic region.

Add an additional state φ to the target state space S. If a target is not in the region ᑬ, it is considered to be in state φ. Let S+ = S ∪ {φ} be the extended state space for a single target, and let the joint target state space be the product S+ × … × S+, where the product is taken N times.

10.3.1.1 Multiple-Target Motion Process

Prior knowledge about the targets and their "movements" through the state space S+ is expressed as a stochastic process X = {X(t); t ≥ 0}. Specifically, let X(t) = (X_1(t), …, X_N(t)) be the state of the system at time t, where X_n(t) ∈ S+ is the state of target n at time t. The term "state of the system" means the joint state of all of the targets. The value of the random variable X_n(t) indicates whether target n is present in ᑬ and, if so, in what state. The number of components of X(t) with states not equal to φ at time t gives the number of targets present in ᑬ at time t.

Assume that the stochastic process X is Markovian in the joint state space and that the process has an associated transition function. Let q_k(s_k | s_{k-1}) = Pr{X(t_k) = s_k | X(t_{k-1}) = s_{k-1}} for k ≥ 1, and let q_0 be the probability (density) function for X(0). By the Markov assumption,

\Pr\{X(t_1) = s_1, \ldots, X(t_K) = s_K\} = \int \prod_{k=1}^{K} q_k(s_k \mid s_{k-1})\, q_0(s_0)\, ds_0.    (10.11)

The state space of the Markov process X has a measure associated with it. If X is a discrete-space Markov chain, then the measure is discrete and integration becomes summation. If the space is continuous, then functions such as transition functions become densities with respect to that measure. If the space has both continuous and discrete components, then the measure will be the product or mixture of discrete and continuous measures. The symbol ds will be used to indicate integration with respect to this measure, whether it is discrete or not. When the measure is discrete, the integrals become summations. Similarly, the notation Pr indicates either probability or probability density, as appropriate.
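One way to realize the construction in Section 10.3.1.1 numerically is to append the "not present" state φ to a discretized single-target grid and, under the additional assumption that the targets move independently of one another, form the joint transition function as a Kronecker product of single-target transition matrices. The grid size, the transition probabilities, and the choice N = 2 below are all illustrative assumptions.

```python
import numpy as np

m = 5                                       # single-target states 0..4 (assumed grid)
PHI = m                                     # index of the "not present" state phi
n_plus = m + 1                              # size of S+ = S union {phi}

# Assumed single-target transition matrix on S+, Q1[j, i] = Pr{next = j | current = i}:
# a nearest-neighbour random walk inside the region, plus small probabilities of
# leaving (state -> phi) and of a target appearing (phi -> any state).
Q1 = np.zeros((n_plus, n_plus))
for i in range(m):
    for j in (i - 1, i, i + 1):
        if 0 <= j < m:
            Q1[j, i] = 1.0
Q1[:m, :m] /= Q1[:m, :m].sum(axis=0, keepdims=True)
Q1[:m, :m] *= 0.95                          # stay present with probability 0.95
Q1[PHI, :m] = 0.05                          # leave the region
Q1[:m, PHI] = 0.02 / m                      # appear, uniformly over the region
Q1[PHI, PHI] = 0.98                         # remain absent

# Joint transition function for N = 2 targets moving independently: the joint
# kernel on S+ x S+ is the Kronecker product of the single-target kernels.
Q_joint = np.kron(Q1, Q1)

# A joint prior q0: each target initially absent with high probability (assumed).
p1 = np.full(n_plus, 0.1 / m)
p1[PHI] = 0.9
p_joint = np.kron(p1, p1)                   # the motion update is then Q_joint @ p_joint
```

Correlated target motion would instead require specifying the joint transition function directly rather than as a product.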
10.3.2 Multiple-Target Likelihood Functions

There is a set of sensors that report observations at a discrete sequence of possibly random times. These sensors may be of different types and may report different information. The sensors may report only when they have a contact or on a regular basis. Let Z(t, j) be an observation from sensor j at time t. Observations from sensor j take values in the measurement space H_j. Each sensor may have a different measurement space. For each sensor j, assume that one can compute

\Pr\{Z(t, j) = z \mid X(t) = s\} \quad \text{for } s \in S^{+} \times \cdots \times S^{+} \text{ and } z \in H_j.    (10.12)

To compute the probabilities in Equation 10.12, one must know the distribution of the sensor response conditioned on the value of the state s. In contrast to Section 10.2, the likelihood functions in this section can depend on the joint state of all the targets. The relationship between the observation and the state s may be linear or nonlinear, and the probability distribution may be Gaussian or non-Gaussian.

Suppose that by time t, observations have been obtained at the set of discrete times 0 ≤ t_1 ≤ … ≤ t_K ≤ t. To allow for the possibility of receiving more than one sensor observation at a given time, let Y_k be the set of sensor observations received at time t_k. Let y_k denote a value of the random variable Y_k. Extend Equation 10.12 to assume that the following computation can be made:

L_k(y_k \mid s) = \Pr\{Y_k = y_k \mid X(t_k) = s\} \quad \text{for } s \in S^{+} \times \cdots \times S^{+}.    (10.13)

L_k(y_k | ·) is called the likelihood function for the observation Y_k = y_k. The computation in Equation 10.13 can account for correlation among sensor responses if required.

Let Y(t) = (Y_1, Y_2, …, Y_K) and y = (y_1, …, y_K). Define L(y | s_1, …, s_K) = Pr{Y(t) = y | X(t_1) = s_1, …, X(t_K) = s_K}. In parallel with Section 10.2, assume that

\Pr\{Y(t) = y \mid X(u) = s(u),\ 0 \le u \le t\} = L(y \mid s(t_1), \ldots, s(t_K))    (10.14)

and

L(y \mid s_1, \ldots, s_K) = \prod_{k=1}^{K} L_k(y_k \mid s_k).    (10.15)

Equation 10.14 assumes that the distribution of the sensor response at the times {t_k, k = 1, …, K} depends only on the system states at those times. Equation 10.15 assumes independence of the sensor response distributions across the observation times. The effect of both assumptions is that the sensor response at time t_k depends only on the system state at that time.

10.3.3 Posterior Distribution

For unified tracking, the tracking problem is equivalent to computing the posterior distribution on X(t) given Y(t). The posterior distribution of X(t) represents our knowledge of the number of targets present and their states at time t given Y(t). From this distribution, point estimates can be computed when appropriate, such as maximum a posteriori probability estimates or means.

Define q(s_1, …, s_K) = Pr{X(t_1) = s_1, …, X(t_K) = s_K} to be the prior probability (density) that the process X passes through the states s_1, …, s_K at times t_1, …, t_K. Let q_0 be the probability (density) function for X(0). By the Markov assumption,

q(s_1, \ldots, s_K) = \int \prod_{k=1}^{K} q_k(s_k \mid s_{k-1})\, q_0(s_0)\, ds_0.    (10.16)

Let p(t, s) = Pr{X(t) = s | Y(t)}. The function p(t, ·) gives the posterior distribution on X(t) given Y(t). By Bayes' theorem,

p(t_K, s_K) = \Pr\{X(t_K) = s_K \mid Y(t_K) = y\}
            = \frac{\int L(y \mid s_1, \ldots, s_K)\, q(s_1, \ldots, s_K)\, ds_1 \cdots ds_{K-1}}{\int L(y \mid s_1, \ldots, s_K)\, q(s_1, \ldots, s_K)\, ds_1 \cdots ds_K}.    (10.17)

10.3.4 Unified Tracking Recursion

Substituting Equations 10.15 and 10.16 into Equation 10.17 gives

p(t_K, s_K) = \frac{1}{C'} \int \prod_{k=1}^{K} L_k(y_k \mid s_k)\, q_k(s_k \mid s_{k-1})\, q_0(s_0)\, ds_0 \cdots ds_{K-1}

and

p(t_K, s_K) = \frac{1}{C}\, L_K(y_K \mid s_K) \int q_K(s_K \mid s_{K-1})\, p(t_{K-1}, s_{K-1})\, ds_{K-1},    (10.18)

where C and C' normalize p(t_K, ·) to be a probability distribution. Equation 10.18 provides a recursive method of computing p(t_K, ·). Specifically:

Unified Tracking Recursion

Initialize Distribution:

p(t_0, s_0) = q_0(s_0) \quad \text{for } s_0 \in S^{+} \times \cdots \times S^{+}.    (10.19)

For k ≥ 1 and joint states s_k:

Perform Motion Update:

p^{-}(t_k, s_k) = \int q_k(s_k \mid s_{k-1})\, p(t_{k-1}, s_{k-1})\, ds_{k-1}.    (10.20)

Compute the likelihood function L_k from the observation Y_k = y_k.

Perform Information Update:

p(t_k, s_k) = \frac{1}{C}\, L_k(y_k \mid s_k)\, p^{-}(t_k, s_k).    (10.21)

10.3.4.1 Multiple-Target Tracking without Contacts or Association

The unified tracking recursion appears deceptively simple. The difficult part is performing the calculations in the joint state space of the N targets. Having done this, the combination of the likelihood functions defined on the joint state space with the joint distribution function of the targets automatically accounts for all possible association hypotheses, without requiring explicit identification of these hypotheses. Section 10.4 demonstrates that this recursion produces the same joint posterior distribution as multiple-hypothesis tracking (MHT) does when the conditions for MHT are satisfied. However, the unified tracking recursion goes beyond MHT. One can use this recursion to perform multiple-target tracking when the notions of contact and association (notions required by MHT) are not meaningful. Examples of this are given in Section 5.3 of Stone et al.3
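Computationally, one pass of Equations 10.20 and 10.21 looks exactly like the single-target recursion, only with the distribution, transition function, and likelihood defined over joint states. A minimal sketch on a hypothetical discrete joint state space (the transition matrix, prior, and likelihood values below are illustrative):

```python
import numpy as np

def unified_tracking_step(p_prev, Q_joint, L_joint):
    """One pass of Equations 10.20 and 10.21 over a discrete joint state space.

    p_prev  : posterior over joint states at time t_{k-1}
    Q_joint : joint transition matrix, Q_joint[s_k, s_{k-1}]
    L_joint : likelihood L_k(y_k | s) evaluated at every joint state s
    """
    p_minus = Q_joint @ p_prev              # motion update (Equation 10.20)
    p_post = p_minus * L_joint              # information update (Equation 10.21)
    return p_post / p_post.sum()            # the 1/C normalization

# Tiny hypothetical example with four joint states.
Q = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.1, 0.1, 0.1, 0.7]])
p0 = np.full(4, 0.25)                       # Equation 10.19 with a uniform q0
L1 = np.array([0.2, 0.2, 0.9, 0.1])         # assumed likelihood favouring state 2
p1 = unified_tracking_step(p0, Q, L1)
```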
Another example, by Finn,4 applies to tracking two aircraft targets with a monopulse radar when the aircraft become so close together in bearing that their signals become unresolved; they merge inextricably at the radar receiver.

10.3.4.1.1 Merged Measurements

The problem tackled by Finn4 is an example of the difficulties caused by merged measurements. A typical example of merged measurements occurs when a sensor's received signal is the sum of the signals from all the targets present. This can be the case with a passive acoustic sensor. Fortunately, in many cases the signals are separated in space or frequency so that they can be treated as separate signals. In some cases, however, two targets are so close in space (and radiated frequency) that it is impossible to distinguish which component of the received signal is due to which target. This is a case in which the notion of associating a contact to a target is not well defined. Unified tracking will handle this problem correctly, but the computational load may be too onerous. In this case an MHT algorithm with special approximations could be used to provide an approximate but computationally feasible solution. See, for example, Mori et al.5 Section 10.4 presents the assumptions that allow contact association and multiple-target tracking to be performed by using MHT.

10.3.4.2 Summary of Assumptions for Unified Tracking Recursion

In summary, the assumptions required for the validity of the unified tracking recursion are:

1. The number of targets is unknown but bounded by N.
2. S+ = S ∪ {φ} is the extended state space for a single target, where φ indicates that the target is not present. X_n(t) ∈ S+ is the state of the nth target at time t.
3. X(t) = (X_1(t), …, X_N(t)) is the state of the system at time t, and X = {X(t); t ≥ 0} is the stochastic process describing the evolution of the system over time. The process X is Markov in the joint state space S+ × … × S+, where the product is taken N times.
4. Observations occur at discrete (possibly random) times, 0 ≤ t_1 ≤ t_2 ≤ …. Let Y_k = y_k be the observation at time t_k, and let Y(t_K) = y^K = (y_1, …, y_K) be the first K observations. Then the following is true:

\Pr\{Y(t_K) = y^K \mid X(u) = s(u),\ 0 \le u \le t_K\} = \Pr\{Y(t_K) = y^K \mid X(t_k) = s(t_k),\ k = 1, \ldots, K\} = \prod_{k=1}^{K} L_k(y_k \mid s(t_k)).
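The merged-measurement situation of Section 10.3.4.1.1 gives a concrete case in which the likelihood in assumption 4 is defined only on the joint state: the received signal depends on the sum of the targets' contributions, so the likelihood cannot be factored per target. The sketch below is a hypothetical one-dimensional illustration; the signal model, noise level, and observed value are assumptions, not Finn's radar model.

```python
import numpy as np

grid = np.linspace(0.0, 100.0, 51)          # single-target position states (assumed)
sensor_pos = 0.0

def signal(pos):
    """Assumed mean received amplitude from a single target at position pos."""
    return 10.0 / (1.0 + abs(pos - sensor_pos) / 10.0)

def merged_likelihood(y, s1, s2, sigma=1.0):
    """Likelihood of total received amplitude y given the joint state (s1, s2).

    The receiver sees the sum of both contributions, so this likelihood cannot
    be factored into per-target terms; it lives on the joint state space.
    """
    mean = signal(s1) + signal(s2)
    return np.exp(-0.5 * ((y - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Evaluate the joint likelihood surface for an observed amplitude of 12 (assumed).
S1, S2 = np.meshgrid(grid, grid)
L_joint = merged_likelihood(12.0, S1, S2)
```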
10.4 Multiple-Hypothesis Tracking (MHT)

In classical multiple-target tracking, the problem is divided into two steps: (1) association and (2) estimation. Step 1 associates contacts with targets. Step 2 uses the contacts associated with each target to produce an estimate of that target's state. Complications arise when there is more than one reasonable way to associate contacts with targets. The classical approach to this problem is to form association hypotheses and to use MHT, which is the subject of this section. In this approach, alternative hypotheses are formed to explain the source of the observations. Each hypothesis assigns observations to targets or false alarms. For each hypothesis, MHT computes the probability that it is correct. This is also the probability that the target state estimates that result from this hypothesis are correct. Most MHT algorithms display only the estimates of target state associated with the highest-probability hypothesis. The model used for the MHT problem is a generalization of the one given by Reid6 and Mori et al.7

Section 10.4.3.3 presents the recursion for general multiple-hypothesis tracking. This recursion applies to problems that are nonlinear and non-Gaussian as well as to standard linear-Gaussian situations. In this general case, the distributions on target state may fail to be independent of one another (even when conditioned on an association hypothesis) and may require a joint state space representation. This recursion includes a conceptually simple Bayesian method of computing association probabilities. Section 10.4.4 discusses the case where the target distributions (conditioned on an association hypothesis) are independent of one another. Section 10.4.4.2 presents the independent MHT recursion that holds when these independence conditions are satisfied. Note that not all tracking situations satisfy these independence conditions.

Numerous books and articles on multiple-target tracking examine in detail the many variations of and approaches to this problem. Many of these discuss the practical aspects of implementing multiple-target trackers and compare approaches. See, for example, Antony,8 Bar-Shalom and Fortmann,9 Bar-Shalom and Li,10 Blackman,11 Blackman and Popoli,1 Hall,12 Reid,6 Mori et al.,7 and Waltz and Llinas.13 With the exception of Mori et al.,7 these references focus primarily on the linear-Gaussian case.

In addition to the full or classical MHT as defined by Reid6 and Mori et al.,7 a number of approximations are in common use for finding solutions to tracking problems. Examples include joint probabilistic data association (Bar-Shalom and Fortmann9) and probabilistic MHT (Streit14). Rather than solve the full MHT, Poore15 attempts to find the data association hypothesis (or the n hypotheses) with the highest likelihood. The tracks formed from this hypothesis then become the solution. Poore does this by providing a window of scans in which contacts are free to float among hypotheses. The window has a constant width and always includes the latest scan. Eventually, contacts from older scans fall outside the window and become assigned to a single hypothesis. This type of hypothesis management is often combined with a nonlinear extension of Kalman filtering called an interactive multiple model Kalman filter (Yeddanapudi et al.16).

Section 10.4.1 presents a description of general MHT. Note that general MHT requires many more definitions and assumptions than unified tracking.

10.4.1 Contacts, Scans, and Association Hypotheses

This discussion of MHT assumes that sensor responses are limited to contacts.

10.4.1.1 Contacts

A contact is an observation that consists of a called detection and a measurement. In practice, a detection is called when the signal-to-noise ratio at the sensor crosses a predefined threshold. The measurement associated with a detection is often an estimated position for the object generating the contact. Limiting the sensor responses to contacts restricts responses to those in which the signal level of the target, as seen at the sensor, is high enough to call a contact. Section 10.6 demonstrates how tracking can be performed without this assumption being satisfied.
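To make the restriction to contacts concrete, a minimal pair of containers such as the following could hold the data that the MHT discussion operates on; the field names are illustrative, and the scan container anticipates the definition given in Section 10.4.1.2.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contact:
    """A called detection together with its measurement."""
    time: float
    measurement: tuple      # value in the reporting sensor's measurement space H_j
    sensor_id: int

@dataclass
class Scan:
    """A set of contacts from one sensor group at one reporting time.

    Under the scan assumptions, each contact comes from at most one target and
    each target produces at most one contact; some contacts may be false alarms.
    """
    time: float
    contacts: List[Contact] = field(default_factory=list)
```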
10.4.1.2 Scans

This discussion further limits the class of allowable observations to scans. The observation Y_k at time t_k is a scan if it consists of a set ᑝ_k of contacts such that each contact is associated with at most one target, and each target generates at most one contact (i.e., there are no merged or split measurements). Some of these contacts may be false alarms, and some targets in ᑬ might not be detected on a given scan. More than one sensor group can report a scan at the same time. In this case, the contact reports from each sensor group are treated as separate scans with the same reporting time; as a result, t_{k+1} = t_k. A scan can also consist of a single contact report.

10.4.1.3 Data Association Hypotheses

To define a data association hypothesis, h, let

ᑝ_j = the set of contacts in the jth scan, and
ᑢ(k) = the set of all contacts reported in the first k scans.

Note that ᑢ(k) = ᑝ_1 ∪ … ∪ ᑝ_k.
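To make the notion of a data association hypothesis concrete, the sketch below enumerates, for a single scan, every assignment of its contacts to targets or false alarms in which no target receives more than one contact; the numbers of targets and contacts are illustrative.

```python
from itertools import product

def scan_hypotheses(num_contacts, num_targets):
    """Enumerate association hypotheses for one scan.

    Each contact is assigned either to a target (1..num_targets) or to a false
    alarm (0); a hypothesis is feasible if no target receives more than one contact.
    """
    hypotheses = []
    for assignment in product(range(num_targets + 1), repeat=num_contacts):
        targets_used = [a for a in assignment if a != 0]
        if len(targets_used) == len(set(targets_used)):
            hypotheses.append(assignment)
    return hypotheses

# Example: a scan with 2 contacts and 2 known targets yields 7 hypotheses,
# from "both contacts are false alarms" to the two one-to-one pairings.
for h in scan_hypotheses(2, 2):
    print(h)
```

The rapid growth of this enumeration with the numbers of contacts and targets is what motivates the hypothesis management techniques discussed above.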

