Handbook of Multisensor Data Fusion, Part 5

measurements are linear and hopefully the measurement errors are Gaussian. In combining all this information into one tracker, the approximations and the use of disparate coordinate systems become more problematic and dubious. In contrast, the use of likelihood functions to incorporate all this information (and any other information that can be put into the form of a likelihood function) is quite straightforward, no matter how disparate the sensors or their measurement spaces. Section 10.2.4.1 provides a simple example of this process involving a line of bearing measurement and a detection.

10.2.4.1 Line of Bearing Plus Detection Likelihood Functions

Suppose that there is a sensor located in the plane at (70,0) and that it has produced a detection. For this sensor the probability of detection is a function, P_d(r), of the range r from the sensor. Take the case of an underwater sensor such as an array of acoustic hydrophones and a situation where the propagation conditions produce convergence zones of high detection performance that alternate with ranges of poor detection performance. The observation (measurement) in this case is Y = 1 for detection and 0 for no detection. The likelihood function for detection is L_d(1|x) = P_d(r(x)), where r(x) is the range from the state x to the sensor. Figure 10.1 shows the likelihood function for this observation.

Suppose that, in addition to the detection, there is a bearing measurement of 135 degrees (measured counter-clockwise from the x_1 axis) with a Gaussian measurement error having mean 0 and standard deviation 15 degrees. Figure 10.2 shows the likelihood function for this observation. Notice that, although the measurement error is Gaussian in bearing, it does not produce a Gaussian likelihood function on the target state space. Furthermore, this likelihood function would integrate to infinity over the whole state space. The information from these two likelihood functions is combined by point-wise multiplication. Figure 10.3 shows the likelihood function that results from this combination.

10.2.4.2 Combining Information Using Likelihood Functions

Although the example of combining likelihood functions presented in Section 10.2.4.1 is simple, it illustrates the power of using likelihood functions to represent and combine information. A likelihood function converts the information in a measurement to a function on the target state space. Since all information is represented on the same state space, it can easily and correctly be combined, regardless of how disparate the sources of the information. The only limitation is the ability to compute the likelihood function corresponding to the measurement or the information to be incorporated. As an example, subjective information can often be put into the form of a likelihood function and incorporated into a tracker if desired.

FIGURE 10.1 Detection likelihood function for a sensor at (70,0).
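The point-wise multiplication described above is straightforward to sketch numerically. The following Python fragment is a minimal illustration, not the handbook's code: it evaluates a detection likelihood and a bearing likelihood on a grid of candidate target positions and combines them point-wise. The sensor location, measured bearing, and 15-degree bearing error standard deviation come from the text; the specific detection-probability curve pd_of_range, the grid extent, and all variable names are assumptions made for the example.

import numpy as np

# Sketch of the Section 10.2.4.1 example.  The convergence-zone detection
# curve pd_of_range below is an assumed stand-in; the sensor position,
# measured bearing, and bearing error standard deviation follow the text.
sensor = np.array([70.0, 0.0])        # sensor located at (70, 0)
bearing_meas = np.deg2rad(135.0)      # bearing measured CCW from the x_1 axis
sigma_b = np.deg2rad(15.0)            # bearing error standard deviation

def pd_of_range(r):
    """Assumed range-dependent detection probability with alternating zones
    of high and low performance (stand-in for convergence-zone propagation)."""
    return 0.45 + 0.45 * np.cos(2.0 * np.pi * r / 35.0) * np.exp(-r / 150.0)

# Grid of candidate target states x = (x1, x2)
x1, x2 = np.meshgrid(np.linspace(0.0, 100.0, 201), np.linspace(0.0, 100.0, 201))
dx1, dx2 = x1 - sensor[0], x2 - sensor[1]
r = np.hypot(dx1, dx2)                # range r(x) from state x to the sensor

# Detection likelihood: L_d(1 | x) = P_d(r(x))
L_det = pd_of_range(r)

# Bearing likelihood: Gaussian in bearing, but not Gaussian on the state space
bearing = np.arctan2(dx2, dx1)
err = np.angle(np.exp(1j * (bearing - bearing_meas)))   # wrap to (-pi, pi]
L_brg = np.exp(-0.5 * (err / sigma_b) ** 2)

# Combine the two pieces of information by point-wise multiplication
L_combined = L_det * L_brg

Plotting L_det, L_brg, and L_combined over the grid reproduces, qualitatively, the behavior of Figures 10.1 through 10.3: ring-shaped detection zones around the sensor, a wedge along the measured bearing, and their intersection.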
11 Data Association Using Multiple Frame Assignments

Aubrey B. Poore, Colorado State University
Suihua Lu, Colorado State University
Brian J. Suchomel, Numerica, Inc.

11.1 Introduction
11.2 Problem Background
11.3 Assignment Formulation of Some General Data Association Problems
11.4 Multiple Frame Track Initiation and Track Maintenance
    Track Initiation • Track Maintenance Using a Sliding Window
11.5 Algorithms
    Preprocessing • The Lagrangian Relaxation Algorithm for the Assignment Problem • Algorithm Complexity • Improvement Methods
11.6 Future Directions
    Other Data Association Problems and Formulations • Frames of Data • Sliding Windows • Algorithms • Network-Centric Multiple Frame Assignments
Acknowledgments
References

11.1 Introduction

The ever-increasing demand in surveillance is to produce highly accurate target identification and estimation in real time, even for dense target scenarios and in regions of high track contention. Past surveillance sensor systems have relied on individual sensors to solve this problem; however, current and future needs far exceed single sensor capabilities. The use of multiple sensors, through more varied information, has the potential to greatly improve state estimation and track identification. Fusion of information from multiple sensors is part of a much broader subject called data or information fusion, which for surveillance applications is defined as "a multilevel, multifaceted process dealing with the detection, association, correlation, estimation, and combination of data and information from multiple sources to achieve refined state and identity estimation, and complete and timely assessments of situation and threat."1 (A comprehensive discussion can be found in Waltz and Llinas.2) Level 1 deals with single and multisource information involving tracking, correlation, alignment, and association by sampling the external environment with multiple sensors and exploiting other available sources. Numerical processes thus dominate Level 1. Symbolic reasoning involving various techniques from artificial intelligence permeates Levels 2 and 3.

12 General Decentralized Data Fusion with Covariance Intersection (CI)

Simon Julier, IDAK Industries
Jeffrey K. Uhlmann, University of Missouri

12.1 Introduction
12.2 Decentralized Data Fusion
12.3 Covariance Intersection
    Problem Statement • The Covariance Intersection Algorithm
12.4 Using Covariance Intersection for Distributed Data Fusion
12.5 Extended Example
12.6 Incorporating Known Independent Information
    Example Revisited
12.7 Conclusions
Acknowledgments
Appendix 12.A The Consistency of CI
Appendix 12.B MATLAB Source Code
    Conventional CI • Split CI
References

12.1 Introduction

One of the most important areas of research in the field of control and estimation is decentralized (or distributed) data fusion. The motivation for decentralization is that it can provide a degree of scalability and robustness that cannot be achieved with traditional centralized architectures. In industrial applications, decentralization offers the possibility of producing plug-and-play systems in which sensors can be slotted in and out to optimize a tradeoff between price and performance. This has significant implications for military systems as well because it can dramatically reduce the time required to incorporate new computational and sensing components into fighter aircraft, ships, and other types of platforms.
The benefits of decentralization are not limited to sensor fusion onboard a single platform; decentralization also can allow a network of platforms to exchange information and coordinate activities in a flexible and scalable fashion that would be impractical or impossible to achieve with a single, monolithic platform. Interplatform information propagation and fusion form the crux of the network centric warfare (NCW) vision for the U.S. military. The goal of NCW is to equip all battlespace entities — aircraft, ships, and even individual human combatants — with communication and computing capabilities to allow each to represent a node in a vast decentralized command and control network. The idea is that each
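For readers who want a concrete feel for the algorithm named in the chapter outline above before reaching the chapter's own MATLAB listings, the following Python fragment sketches the conventional Covariance Intersection update in its standard form: two consistent estimates (a, A) and (b, B) with unknown cross-correlation are fused as C_inv = w*inv(A) + (1 - w)*inv(B) and c = C*(w*inv(A)*a + (1 - w)*inv(B)*b). The grid search over the weight w, the trace-minimization criterion, and the function name are illustrative assumptions, not the chapter's implementation.

import numpy as np

def covariance_intersection(a, A, b, B, n_w=101):
    """Fuse estimates (a, A) and (b, B) with unknown cross-correlation using
    conventional CI.  The weight w is picked by a coarse grid search that
    minimizes trace(C); this criterion is one common choice, used here
    purely for illustration."""
    Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
    best = None
    for w in np.linspace(0.0, 1.0, n_w):
        C = np.linalg.inv(w * Ai + (1.0 - w) * Bi)
        if best is None or np.trace(C) < best[0]:
            c = C @ (w * Ai @ a + (1.0 - w) * Bi @ b)
            best = (np.trace(C), c, C)
    return best[1], best[2]

# Hypothetical example: two 2-D estimates whose errors may be correlated
a, A = np.array([1.0, 0.0]), np.array([[2.0, 0.0], [0.0, 4.0]])
b, B = np.array([0.0, 1.0]), np.array([[4.0, 0.0], [0.0, 2.0]])
c, C = covariance_intersection(a, A, b, B)
print(c, np.trace(C))

Because the fused covariance C is a convex combination in information space, it remains consistent for any value of w; the choice of w only affects how tight the fused estimate is.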
