
Application of Bayesian filtering and hidden Markov models to the multiple-target trajectory tracking problem (thesis summary, English)


MINISTRY OF EDUCATION AND TRAINING          MINISTRY OF NATIONAL DEFENCE
ACADEMY OF MILITARY SCIENCE AND TECHNOLOGY

NGUYEN THI HANG

MULTIPLE TARGET TRACKING WITH A BAYESIAN APPROACH AND HIDDEN MARKOV MODEL

Major: Probability Theory and Mathematical Statistics
Code: 46 01 06

SUMMARY OF MATHEMATICS DOCTORAL THESIS

Ha Noi - 2021

This thesis has been completed at the Academy of Military Science and Technology, Ministry of National Defence.

Scientific supervisors:
1. Dr Trinh Quoc Anh
2. Dr Nguyen Van Hung

Reviewer 1: Assoc. Prof. Dr Pham Ngoc Phuc, Military Technical Academy
Reviewer 2: Assoc. Prof. Dr Tran Hung Thao, Vietnam Academy of Science and Technology
Reviewer 3: Dr Nguyen Hac Hai, Hanoi National University of Education

The thesis will be defended in front of the thesis examination committee at the Academy of Military Science and Technology at ..... hour on date ....., 2021.

The thesis can be found at:
- Library of the Academy of Military Science and Technology
- Vietnam National Library

INTRODUCTION

The necessity of the thesis
Multiple Target Tracking (MTT) models, sometimes called multi-target trajectory observation models, are among the most important components of many functional systems in practice, in society and especially in national security. In civilian practice we often encounter systems such as air traffic monitoring systems (in aviation), security camera systems, automated control systems for self-driving devices, and automated control systems for robots. In defence and security we find systems such as airspace surveillance radar systems, coastal defence radar systems, ballistic missile defence systems, self-propelled missile control systems, unmanned aerial vehicle (UAV) control systems, the S-400 air defence system (Russia), the THAAD radar system (USA), and submarine control systems.

Depending on the characteristics of each functional system, the MTT model embedded in that system must be built under the corresponding appropriate conditions. Up to now the class of MTT models that has been studied is quite rich and many research results have been published. On the other hand, although results are made public to a certain extent in some fields, in other areas, especially in national security and defence involving state secrets, research results and key algorithms are kept confidential. Because of that, every country has to pay attention to and research MTT models in order to develop its own defence industry. In addition, due to the continuous progress of science and technology, the "tracking tools" are also growing and changing; therefore MTT models change as well, and the algorithms must be studied and adapted accordingly. All of this shows the practical significance, scientific significance, topicality and urgency of studying MTT models.

The objectives of the research
The MTT models in different functional systems have different conditions and structures, but they all share the same basic principles and purposes. The purpose of MTT is to determine the number of targets and their state properties (the trajectory is a consequence of a combination of several state properties) at each time in the surveillance region, under the following basic conditions: targets appear and disappear at random; targets appear, move and disappear independently of each other; observations are made at discrete (usually uniformly spaced) time points and in an environment with noise (both passive and active noise, the latter caused by the enemy).
The data set observed at each time point is a "messy" data set: each datum can be an observation derived from one target or from another, or it can be a measurement produced by a false alarm (caused by noise).

Up to the present time, the popular mathematical method for solving the MTT problem is Bayesian Sequential Estimation (BSE). This method is essentially a recursive update of the posterior distribution of the target states, which yields the "target following" and "trajectory following" algorithms. All algorithms published on this principle so far are nontrivial, because they are tied to very complex probabilistic models. The basic solution method is a combination of data association (DA) algorithms with suitable filters. We review some of the important data association algorithms and filters published and used in this field. Data association algorithms can be grouped into three main families:
- the Global Nearest Neighbors (GNN) algorithm and its variants;
- the Multiple Hypothesis Tracking (MHT) algorithm and its variants;
- the Joint Probabilistic Data Association (JPDA) algorithm and its variants.
The most commonly used filters in this field are the Bayesian filter, the Kalman filter, and the Extended Kalman filter (EKF). A simple sketch of the DA-plus-filter combination is given below.
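As an illustration of how a data association step can be combined with a filter, the following is a minimal sketch (not taken from the thesis) of a global-nearest-neighbour style assignment: each predicted track is matched to the closest measurement in Euclidean distance, subject to a gate, and unmatched measurements are left over as candidate new tracks or false alarms. The function name, the gating threshold and the example numbers are illustrative assumptions.

import numpy as np

def gnn_associate(predicted, measurements, gate=3.0):
    """Greedy nearest-neighbour association (illustrative sketch).

    predicted    : (m, d) array of predicted track positions
    measurements : (n, d) array of received measurements
    gate         : maximum allowed distance for an assignment
    Returns a dict {track_index: measurement_index} and the list of
    measurement indices left unassigned (candidate new tracks / false alarms).
    """
    assignment = {}
    free = set(range(len(measurements)))
    # Consider all track/measurement pairs, closest pairs first.
    pairs = [(np.linalg.norm(predicted[i] - measurements[j]), i, j)
             for i in range(len(predicted)) for j in range(len(measurements))]
    for dist, i, j in sorted(pairs):
        if dist <= gate and i not in assignment and j in free:
            assignment[i] = j
            free.remove(j)
    return assignment, sorted(free)

# Example: two predicted tracks, three measurements (one of them a false alarm).
tracks = np.array([[0.0, 0.0], [10.0, 10.0]])
meas = np.array([[0.2, -0.1], [9.7, 10.3], [25.0, 25.0]])
print(gnn_associate(tracks, meas))   # ({0: 0, 1: 1}, [2])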
In fact, when targets move very close to each other, then due to the limited resolving power of the sensors the targets become indistinguishable, or, for some technical reasons, two or more targets give rise to the same observed data. This phenomenon is called the obscured (hidden) target phenomenon. Under this phenomenon, the "target tracking" and "trajectory tracking" algorithms announced so far fall into a state of losing the target and losing the tracked trajectory. Therefore, the first goal of the thesis is to study an algorithm for the general MTT problem whose solution method overcomes the obscured target phenomenon.

On the other hand, in reality we encounter many functional systems with MTT models in which it is required to determine, at each time in the surveillance region, the number of targets in a certain target class. This target class of interest can be either the class of all targets or a proper subclass of it. The number of targets of this class of interest is a quantity hidden inside the number of all targets estimated by the MTT model (see the examples at the beginning of Chapter 3). Therefore, the second goal of the thesis is to solve such a class of MTT models; more specifically, to study the use of the hidden Markov model (HMM) to solve this MTT model class.

The content of the research
The thesis studies two classes of MTT problems:
- the general MTT problem, which may exhibit the obscured target phenomenon;
- the class of MTT problems in which it is required to estimate, at any time in the surveillance region, the number of targets in a subclass of targets of interest.

The object and scope of the research
The thesis studies MTT models and algorithms; Bayesian statistics and filtering; the Kalman filter; hidden Markov models; and an essential part of stochastic analysis and stochastic processes.

The research method
Methods of Bayesian statistics, the Kalman filter and algorithms for hidden Markov models are used to realize the research objectives of the thesis.

The scientific and practical significance
The results obtained in the thesis not only contribute to enriching the science of the MTT problem and HMM theory, but also have great significance in practical application. More precisely, those contributions include:
- For the general MTT model class that may exhibit the hidden target phenomenon, the thesis proposes a new data association method: a data association strategy based on a recursively determined mapping system. This association strategy overcomes the situation of "losing the target" and "losing the tracking trajectory" when a target is hidden. At the same time, the thesis proves the existence of an optimal data association strategy in the Bayesian sense, and shows how to explicitly build a strategy that satisfies a given general property T as well as the specific "K(ε)-optimal" property commonly used in practice.
- For the MTT model class interested only in a subclass of targets, the thesis uses the HMM approach and obtains the following new results: a "Forward algorithm" and a "Modified Viterbi algorithm" for heterogeneous HMMs are proposed; a compatible HMM is built and the problem of determining the number of targets in the target class of interest of the above MTT model is solved.

Chapter 1
SOME BACKGROUND KNOWLEDGE

This chapter presents some knowledge about Bayesian inference in the discrete and continuous cases, the Kalman filter, the extended Kalman filter, and some knowledge about stochastic processes, as a basis for presenting the main content of the thesis in the following chapters.

1.1 Bayesian Statistics
In this section, the basic concepts, basic formulas and main ideas of the Bayesian inference method are presented as needed for the research content of the thesis.

1.2 Some problems with Bayesian filtering
This section summarizes some results on Bayesian filtering needed for the thesis, such as: the origin and nature of the Bayesian filter, the basic filter equations, the Bayesian filtering and smoothing approach (in the discrete and continuous cases), and applications of Bayesian filtering.

1.3 Kalman Filter and Extended Kalman Filter
The results on the Kalman filter and the extended Kalman filter (in discrete and continuous time) needed for the thesis are cited.

1.4 Some knowledge about stochastic processes
Some concepts and results about two types of stochastic processes, Poisson processes and Markov processes, necessary for presenting the thesis results, are given.

1.5 Conclusion of Chapter 1
The knowledge presented in this chapter consists of cited results; the thesis only introduces them without proof. Citations are given during the presentation.

Chapter 2
MULTIPLE TARGET TRACKING IN THE CASE OF POSSIBLY OBSCURED TARGETS

In this chapter, the thesis presents some research results on the general multiple target tracking (MTT) problem in which targets may be obscured, based on the application of the ideas and methods of Bayesian statistics and the Bayesian filter.

2.1 Introduction
In this section, the thesis synthesizes and analyzes research results on the multiple target tracking (MTT) problem that have been published, domestically and internationally, up to the present time. Thereby the thesis shows the novelty and openness of its results for the general MTT problem in which the obscured target phenomenon may occur.
2.2 Multiple target tracking problem: Mathematical model
In this section, the thesis presents the general MTT problem studied in Chapter 2, specifically as follows.

The surveillance region is denoted by R, R ⊂ R^{n_x}, where R^{n_x} is the state space of the targets. The duration of surveillance is [0, T], T ∈ R+. The surveillance times are t_i, i = 0, 1, ...; t_i ∈ [0, T]. Usually the observation points are evenly spaced, so without loss of generality we can take T ∈ Z+, t_i = i, i = 0, 1, ..., n.

At a time t, t ∈ [0, T], there are several targets in the surveillance region. This number of targets is random and is denoted by M_t = M_t(ω). The k-th target (with numerical order k) is denoted X^k_t, k = 1, 2, ...; t ∈ [t^k_i, t^k_f] ⊂ [0, T]. Target k appears at a random position, uniformly distributed over R, at time t^k_i and disappears at time t^k_f. Targets appear, move and disappear independently of each other. The probability that X^k_t appears is p_k, 0 < p_k < 1.

In the surveillance region R, at any time t ∈ [0, T], there may be false alarms due to noise or to the observation technique. A false alarm is denoted FA (False Alarm). An FA appears with probability q, 0 < q < 1, at a random location uniformly distributed over R. FAs appear and disappear independently of each other and of the targets. The number of FAs in the region R at time t is random and is denoted G_t = G_t(ω). An FA is a special type of target that appears instantaneously for a very short period of time (possibly only at one time point) and has no orbital motion.

The dynamic model of the targets is described as follows:

    X^k_{t+1} = F_k(X^k_t) + V^k_t,    (2.1)

where F_k : R^{n_x} → R^{n_x} is a measurable mapping and V^k_t ∈ R^{n_x} are white noise processes with covariance matrix Q_k; the V^k_t, k = 1, 2, ..., are uncorrelated.

The measurement model is

    Y_t = G(X_t) + W_t,    (2.2)

where G : R^{n_x} → R^{n_y} is a measurable mapping, n_y is the dimension of the observation vector, and W_t ∈ R^{n_y} are white noise processes with covariance matrix R; W_t is uncorrelated with the V^k_t, k = 1, 2, .... Specifically, for target k, from (2.2) we have

    Y^k_t = G(X^k_t) + W_t.    (2.3)

In the model (2.1)-(2.2), V^k_t is called the system noise and W_t the observation noise (also known as measurement error).

We state some assumptions for the MTT model described above. Let d(x, y) be the Euclidean distance in R^n.

Assumption 2.2.1. The surveillance region R is a closed and bounded domain in R^{n_x} (with the metric d(·, ·)).

In practice, owing to the limited resolution of the observation devices, if two targets X1 and X2 are too close together, then the observed data about them, Y1 and Y2 respectively, are the same. We call this the obscured target phenomenon: target X1 is obscured by target X2, or vice versa. On the other hand, a target is an object of positive mass, so the position of a target is not a point (in the mathematical sense) but a domain in the state space. We therefore adopt the following assumption. Denote by O(O; r), r > 0, the open ball with centre O and radius r, and by the corresponding closed ball in R^{n_x} its closure.

Assumption 2.2.2. For every point x in the surveillance region R, x ∈ R, there exists a number r_x > 0 such that, for the model (2.1)-(2.2), if X1 and X2 belong to the same ball O(x; r_x), O(x; r_x) ⊂ R, then their observed data are the same.

The purpose of the MTT: with the model and conditions described above, based on the observed values (data), estimate the number of targets present in R at each time t, t ∈ [0, T], and estimate their trajectories.
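To make the state-space model (2.1)-(2.3) concrete, the following is a minimal sketch (an illustration, not part of the thesis) of the linear-Gaussian special case F_k(x) = F x, G(x) = H x, for which the Kalman filter cited in Section 1.3 gives the exact Bayesian filter. The matrices F, H, Q, R and the constant-velocity structure are illustrative assumptions.

import numpy as np

# Constant-velocity target in 1D: state x = (position, velocity).
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # state transition (F_k in (2.1))
H = np.array([[1.0, 0.0]])          # measurement map (G in (2.2)), position only
Q = 0.01 * np.eye(2)                # system noise covariance Q_k
R = np.array([[0.25]])              # observation noise covariance R

def kalman_step(x, P, y):
    """One predict/update cycle; returns the filtered state and covariance P(t|t)."""
    # Prediction (time update)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update (measurement update)
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Track a simulated target for a few steps.
rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0]); x_est = np.array([0.0, 0.0]); P = np.eye(2)
for t in range(5):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    x_est, P = kalman_step(x_est, P, y)
    print(t, x_est.round(3), np.trace(P).round(4))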
2.3 Data association method, optimal strategy and existence of an optimal strategy
2.3.1 Recursive data association method
We call Y(t) = {Y^j_t | j = 1, 2, ..., n_t} the set of observed values at time t, where n_t is the number of observation results at time t, and Y(0 : t) = ∪_{s=0}^{t} Y(s) the set of observed values up to time t.

Definition 2.3.1. A trajectory of the k-th target, which appears at time t^k_i, t^k_i ∈ [0, T], and disappears at time t^k_f, t^k_f ∈ [0, T], is
    X^k_{[t^k_i, t^k_f]} = {X^k_t | t^k_i ≤ t ≤ t^k_f ; t^k_i ∈ [0, T] ; t^k_f ∈ [0, T]}.

With sets A_s, we use the direct product
    ∏_{s=1}^{n} A_s = {(a_1, a_2, ..., a_n) | a_s ∈ A_s, s = 1, ..., n},
in which a_k, k = 1, 2, ..., n, is called the k-th component of the element (a_1, a_2, ..., a_n).

Definition 2.3.2. A data association chain with starting time t_i and end time t_f, denoted L[t_i, t_f], is the zigzag line connecting the consecutive points Y^{j_s}_t and Y^{j_{s+1}}_{t+1}, t = t_i, t_i + 1, ..., t_f − 1, s = 1, 2, ..., of a certain element (Y^{j_{t_i}}_{t_i}, Y^{j_{t_i+1}}_{t_i+1}, ..., Y^{j_{t_f}}_{t_f}) of the direct product of the t_f − t_i + 1 sets Y(t_i), ..., Y(t_f).

We call L_l[t^-, Y^i_t], 1 ≤ l ≤ Card((f^T_t)^{-1}(Y^i_t)), Y^i_t ∈ Y(t), the l-th chain whose final vertex at time t is Y^i_t. In the case Card((f^T_t)^{-1}(Y^i_t)) = 0 ⇔ l = 0 ⇔ Y^i_t is a new measurement appearing at time t. We call DL_l[t^-, Y^i_t] the vertex set of the chain L_l[t^-, Y^i_t], and write Z^t_l(j) = DL_l[(t − 1)^-, Y^i_{t−1}] ∪ {Y^j_t}. Note that for every Y^i_0 ∈ Y(t_0), Card((f^T_{t_0})^{-1}(Y^i_0)) = 0.

Algorithm to find a T-strategy:

Algorithm 2.1. Algorithm to find a T-strategy
Input:  T; Y(t) = {Y^j_t | 1 ≤ j ≤ n_t}
Output: T-strategy {f^T_t | t = t_1, t_2, ..., t_n}
BEGIN
  Get Input T
  for t = 1 to T
     Get Input(Y(t)); n_t = Card(Y(t));
     for i = 1 to n_{t−1}
        for l = 0 to Card((f^T_{t−1})^{-1}(Y^i_{t−1}))
           for j = 1 to n_t
              Calculate L_l[t^-; Y^j_t];
              IF L_l[t^-; Y^j_t] satisfies (T) THEN
                 Get Output f^T_t(Y^i_{t−1}) = Y^j_t;
              ELSE
                 Get Output f^T_t(Y^i_{t−1}) = ∅;
              END IF
           end for
        end for
     end for
  end for
END

Remark 2.4.1.
a/ It is easy to see that the same measured value y = Y^i_{t−1} can have m + 1 image values f^T_t(y) = f^T_t(Y^i_{t−1}) in Y(t), where m = Card((f^T_{t−1})^{-1}(Y^i_{t−1})).
b/ The value m = Card((f^T_{t−1})^{-1}(Y^i_{t−1})) is the minimum number of targets that are obscured from each other at time (t − 1) and have the same observed value Y^i_{t−1}.

2.5 The "K(ε)-optimal" strategy and the algorithm for finding a "K(ε)-optimal" strategy

Definition 2.5.1. A strategy {f^{K(ε)}_t | t = t_1, ..., t_n} is called an optimal strategy with threshold ε (and is called K(ε)-optimal) if every chain of its data association satisfies the following conditions:
A1 – When the data of the chain are used to estimate the true trajectory of the corresponding target by the Kalman filter, the estimation variance P(t|t) is minimized over the whole time domain t of the chain.
A2 – The estimation variance P(t|t) mentioned in A1 does not exceed ε, ε > 0, for any t in the lifetime domain of the chain.
Here ε > 0, given arbitrarily small, is called the acceptance threshold of the strategy.

Algorithm to find the strategy {f^{K(ε)}_t | t = t_1, t_2, ..., t_n}. We call F = {F_θ | θ ∈ Θ} a set of a priori and predictive information about the possible dynamics in the state transition model of the targets (see (2.1)). We call P^{F_θ}_{lij}(t|t) the filtered variance at step t when the Kalman filter is used for the model (2.1)-(2.3) with F_k = F_θ and the data set Z^t_l(j). Denote
    δ_{li} = min_{1 ≤ j ≤ n_t} { min_{θ∈Θ} P^{F_θ}_{lij}(t|t) },
    (j*, θ*) = arg min_{1 ≤ j ≤ n_t} { min_{θ∈Θ} P^{F_θ}_{lij}(t|t) }.
The algorithm to find the strategy {f^{K(ε)}_t | t = t_1, t_2, ..., t_n} is as follows:

Algorithm 2.2. Algorithm to find a K(ε)-optimal strategy
Input:  T; ε; Y(t) = {Y^j_t | 1 ≤ j ≤ n_t}
Output: K(ε)-optimal strategy {f^{K(ε)}_t | t = t_1, t_2, ..., t_n}
BEGIN
  Get Input T; ε;
  for t = 1 to T
     Get Input(Y(t)); n_t = Card(Y(t));
     for i = 1 to n_{t−1}
        for l = 0 to Card((f_{t−1})^{-1}(Y^i_{t−1}))
           for j = 1 to n_t
              Calculate P^{F_θ}_{lij}(t|t);
              δ_{li} = min_{1≤j≤n_t} min_{θ∈Θ} P^{F_θ}_{lij}(t|t);
              IF δ_{li} > ε THEN
                 Get Output f^{K(ε)}_t(Y^i_{t−1}) = ∅;
              ELSE
                 Get Output f^{K(ε)}_t(Y^i_{t−1}) = Y^{j*}_t;
              END IF
           end for
        end for
     end for
  end for
END
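The core of Algorithm 2.2 is, for each existing chain, to evaluate each candidate measurement under each candidate dynamic model F_θ, keep the pair with the smallest filtering criterion, and accept it only if that criterion does not exceed ε (otherwise the chain gets no image at this time). The following is a minimal sketch of that selection step under simplifying assumptions: linear models, and a data-dependent criterion (normalized innovation plus the size of P(t|t)) standing in for the thesis's adjusted variance P_lij(t|t), whose exact formula is not reproduced in this summary. All names are illustrative.

import numpy as np

def k_eps_select(x, P, measurements, models, H, R, eps):
    """One K(eps)-style association step for a single chain (illustrative sketch).

    x, P         : current filtered state and covariance of the chain
    measurements : list of candidate measurement vectors Y_t^j
    models       : list of (F_theta, Q_theta) candidate dynamics from F
    eps          : acceptance threshold
    Returns (j_star, theta_star, x_new, P_new), or None when even the best
    candidate exceeds the threshold, i.e. the chain gets no image at time t.
    """
    best = None
    for theta, (F, Q) in enumerate(models):
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        P_upd = (np.eye(len(x)) - K @ H) @ P_pred
        for j, y in enumerate(measurements):
            innov = y - H @ x_pred
            # Data-dependent stand-in for P_lij(t|t):
            # normalized innovation plus the filtered covariance size.
            crit = float(innov @ np.linalg.inv(S) @ innov) + np.trace(P_upd)
            if best is None or crit < best[0]:
                x_upd = x_pred + K @ innov
                best = (crit, j, theta, x_upd, P_upd)
    if best is None:
        return None
    crit, j_star, theta_star, x_new, P_new = best
    return None if crit > eps else (j_star, theta_star, x_new, P_new)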
2.6 Conclusion of Chapter 2
The main results obtained in Chapter 2 are:
1/ A new data association strategy is proposed: a data association method based on a recursively built mapping system
    f_t : M[Y(t − 1)] → Y(t),  t = t_1, t_2, ..., t_n.
With a suitable scientific structure of the source set M[Y(t − 1)], it is possible to overcome the loss of the target and the loss of the tracking trajectory when a target is obscured.
2/ The existence of an optimal data association strategy, in the sense of maximizing the posterior probability at each filtering step, is proved.
3/ A data association strategy satisfying a given property T is built.
4/ A data association strategy satisfying the "K(ε)-optimal" property, which is often applied in practice, is built.
The main results of this chapter are published in the scientific works [CT1], [CT2].

Chapter 3
MULTIPLE TARGET TRACKING WITH HIDDEN MARKOV MODEL

In this chapter, the thesis studies the class of MTT models in which the targets of interest are only some of the possible targets of the MTT model. The thesis proposes an approach that uses the hidden Markov model to solve the problem.

3.1 Introduction
The thesis introduces the class of MTT models in which the object of interest is to track and identify not all the targets but only a subclass of them. It also gives some practical examples of this MTT model class and introduces the content and structure of the chapter.

3.2 Mathematical model of the MTT
3.2.1 Mathematical model of the MTT
Suppose we observe some moving objects (also called targets) in a spatial region and over a certain time period. We call R the spatial domain we are interested in, R ⊂ R^{n_x}, where R^{n_x} is the state space of the targets; R is called the surveillance region. We call [1, T], T > 1, T ∈ R+, the time interval of interest; [1, T] is called the duration of surveillance. The surveillance times are t_1, t_2, ..., t_n; 1 = t_1 < t_2 < ... < t_n = T. They are discrete points, and without loss of generality we adopt the convention T ∈ Z+, t_i ∈ Z+, writing t_i = i, i = 1, 2, ..., n, where t_1 = 1 is the first observation and t_n = T is the last observation of the surveillance process.

Targets appear and disappear at random. Targets appear at random locations uniformly distributed in R. Targets appear, move and disappear independently of each other. An FA (false alarm) is considered as a kind of target and follows the same assumptions as the other targets.

We call X^k_t, t ∈ [t^k_i, t^k_f], k = 1, 2, ..., the state of the k-th target at time t (t^k_i and t^k_f being its appearance and disappearance times, respectively). We call M the target class that the MTT model is interested in. The number of targets belonging to the class of interest in R at time t is denoted by M_t, so that
    M_t = M_t(ω) = Card{X^k_t | k ∈ M}.
The number of targets not belonging to the class of interest in R at time t is denoted by G_t, so that
    G_t = G_t(ω) = Card{X^s_t | s ∉ M}.
Logically and naturally, we assume:
+ M_t = M_t(ω) is a Poisson random variable with parameter λ_m, λ_m > 0;
+ X^k_t, k ∈ M, appears with probability p_m, 0 < p_m < 1;
+ G_t = G_t(ω) is a Poisson random variable with parameter λ_g, λ_g > 0;
+ X^s_t, s ∉ M, appears with probability p_g, 0 < p_g < 1.
We call Y(t) = {Y^j_t | j = 1, 2, ..., n_t} the set of observed values at time t, t = t_1, t_2, ..., t_n; n_t is the number of observed targets at time t. It is easy to see that n_t = Card(Y(t)) is a random variable with n_t = n_t(ω) = M_t(ω) + G_t(ω); therefore n_t = n_t(ω) ~ P(λ_m + λ_g).

The requirement of the MTT problem is: determine the number of targets in the class M at each time t of the duration of surveillance in R, that is, determine M_t(ω).
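As a concrete illustration of this setting (not part of the thesis), the sketch below simulates the counts M_t, G_t and the observed total n_t = M_t + G_t: only n_t is available to the tracker, while the quantity of interest M_t stays hidden, which is what motivates the hidden Markov model treatment in the rest of the chapter. The parameter values are arbitrary assumptions, and the counts are drawn independently across time purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
lam_m, lam_g, T = 2.0, 3.0, 10     # illustrative Poisson intensities and horizon

M = rng.poisson(lam_m, size=T)     # hidden: targets of the class of interest
G = rng.poisson(lam_g, size=T)     # hidden: other targets / false alarms
n = M + G                          # observed: total number of measurements, ~ P(lam_m + lam_g)

for t in range(T):
    print(f"t={t+1}: observed n_t={n[t]}  (hidden M_t={M[t]}, G_t={G[t]})")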
3.2.2 Approximation model
Because M_t(ω) ~ P(λ_m) and n_t(ω) ~ P(λ_m + λ_g), for every arbitrarily small ε > 0 there exist M* = M(ε) ∈ N+ and N* = N(ε) ∈ N+ such that
    P[M_t(ω) ≤ M*] ≥ 1 − ε  and  P[n_t ≤ N*] ≥ 1 − ε,  ∀t ∈ [1, T].
We make the following limiting assumption.

Assumption 3.2.1. There exist M*, N* ∈ N+ such that M_t(ω) ≤ M* (mod P), ∀t ∈ [1, T], and n_t(ω) ≤ N* (mod P), ∀t ∈ [1, T].

We will call the MTT problem stated in Section 3.2.1, with the condition that Assumption 3.2.1 is satisfied, the approximation model. This model is the object of study in this chapter.

3.3 Hidden Markov model
We study the HMM whose structure is described as follows:
+ The state index parameter is M, M ∈ N+.
+ The set of distinct states is S = {S_1, S_2, ..., S_M}; S is called the state space. We call q_t the state of the HMM at time t, so q_t takes values in S.
+ The parameter indicating the number of observed values is N, N ∈ N+.
+ The set of all distinct observations is V = {v_1, v_2, ..., v_N}; V is called the space of observed values. We call O_t the observation at time t, so O_t takes values in V.
+ The distribution of the initial state is Π = {π_i : 1 ≤ i ≤ M}, where π_i = P(q_1 = S_i), 1 ≤ i ≤ M.
+ The state transition probability distribution:
   a/ In the homogeneous HMM: A = [a_ij]_{1≤i,j≤M}, where a_ij = P[q_{l+1} = S_j | q_l = S_i], ∀l, 1 ≤ i, j ≤ M.
   b/ In the heterogeneous HMM: A(k) = [a_ij(k)]_{1≤i,j≤M}, where a_ij(k) = P[q_{k+1} = S_j | q_k = S_i], 1 ≤ i, j ≤ M.
   Note: for convenience, a_ij in (a/) is also written a_{q_l q_{l+1}}, and a_ij(k) in (b/) is also written a_{q_k q_{k+1}}(k).
+ The probability distribution of the observations when the HMM is in state S_j, 1 ≤ j ≤ M, is B = [b_j(k)]_{1≤k≤N, 1≤j≤M}, where b_j(k) = P[O_t = v_k | q_t = S_j], 1 ≤ k ≤ N, 1 ≤ j ≤ M.
+ Let 𝒜 = {A(k) : k = 1, 2, ...}.
Once the parameters M, N are identified, the HMM is completely determined when Λ = (A, B, Π) is known in the homogeneous case, and when Λ = (𝒜, B, Π) is known in the heterogeneous case. Therefore the symbol Λ = (A, B, Π) (homogeneous case) or Λ = (𝒜, B, Π) (heterogeneous case) is often used to represent the corresponding HMM.

We are interested in studying HMMs in the time domain [1, T], T > 1, T ∈ N+; the times t are understood as t ∈ [1, T], t ∈ N+. The times considered are 1 = t_1 < t_2 < ... < t_n = T, and without loss of generality we may equivalently take t_k = k, k = 1, 2, ..., n. This HMM is called the discrete HMM. With a sequence of observations in the time domain [1, T], O = O_1 O_2 ... O_T, we are interested in the following two basic problems of the HMM:
- The first basic problem: given the observation sequence O = O_1 O_2 ... O_T and Λ, calculate P(O|Λ).
- The second basic problem: given the observation sequence O = O_1 O_2 ... O_T and Λ, determine the optimal sequence of states Q = q_1 q_2 ... q_T ("optimal" or "best fit" being understood in the sense of maximum probability). This is the problem of determining the hidden part of the HMM from the observation sequence.
In the study of HMMs one is also interested in the third fundamental problem, the HMM tuning problem, related to machine learning and often applied in recognition theory.

In the published works on HMMs so far, the "Forward-Backward" algorithm and the Viterbi algorithm have been introduced to solve the first and second basic problems. However, in the MTT model the observed data run only from the past to the present; observed data from the future are not available, so the "backward variable" does not exist. Therefore those algorithms are not applicable to the HMMs related to MTT (this will be elaborated in Section 3.5).
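Before turning to those algorithms, here is a minimal sketch (an illustration, not from the thesis) of how a heterogeneous discrete HMM Λ = (𝒜, B, Π) can be represented numerically: Π as a length-M vector, 𝒜 as one M×M stochastic matrix per time step, and B as an M×N matrix of observation probabilities. The particular sizes and numbers are arbitrary assumptions.

import numpy as np

M, N, T = 2, 3, 4                         # illustrative sizes: states, observation symbols, horizon

pi = np.array([0.6, 0.4])                 # Pi : initial state distribution
A = [np.array([[0.9, 0.1],                # A(k), k = 1..T-1 : time-dependent transition matrices
               [0.2, 0.8]]) for _ in range(T - 1)]
B = np.array([[0.7, 0.2, 0.1],            # B[j, k] = b_j(v_k) : observation probabilities per state
              [0.1, 0.3, 0.6]])

def is_stochastic(mat, tol=1e-12):
    """Check that rows are nonnegative and sum to one."""
    m = np.atleast_2d(mat)
    return bool(np.all(m >= 0) and np.allclose(m.sum(axis=-1), 1.0, atol=tol))

assert is_stochastic(pi) and all(is_stochastic(Ak) for Ak in A) and is_stochastic(B)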
We therefore build two algorithms, a Forward Algorithm and a Modified Viterbi Algorithm, for the heterogeneous HMM, as follows.

3.4 Forward Algorithm and Modified Viterbi Algorithm
In this section the thesis builds a Forward algorithm and a Modified Viterbi algorithm. The Modified Viterbi algorithm is in fact the Viterbi algorithm, but it uses only forward variables and the Forward algorithm. Both algorithms are built for the heterogeneous HMM; of course, the homogeneous HMM is a special case of the heterogeneous HMM. For the heterogeneous HMM we have the following results.

3.4.1 The first basic problem and the Forward Algorithm
Consider the heterogeneous HMM Λ = (𝒜, B, Π). For t = t_k, the observation O_{t_k} is sometimes also denoted O_k, and the state q_{t_k} is sometimes also denoted q_k. At any time t = t_k, t_k ∈ [1, T], with Λ we have a sequence of observations up to time t,
    O = O_1 O_2 · · · O_t.    (3.4)
The first basic problem is to calculate the probability of the observed sequence (3.4) given the heterogeneous HMM Λ. The solution formula and algorithm are given by the following lemma and algorithm.

Lemma 3.4.1.
    P(O|Λ) = Σ_{(q_1 q_2 ... q_k)} ∏_{s=1}^{k} a_{q_{s−1} q_s}(s − 1) · b_{q_s}(O_s),    (3.5)
where the sum is over all state sequences (q_1 q_2 ... q_k) and we set a_{q_0 q_1}(0) = π_{q_1}.

For the homogeneous HMM, the Forward-Backward Algorithm was introduced to calculate formula (3.5) with t = T. For the HMM applied to the MTT problem that algorithm cannot be used. Here the thesis proposes an algorithm, called the Forward Algorithm, to calculate formula (3.5) for every t ∈ [1, T], as follows.

• Forward Algorithm
We write α_τ(i) = P(O_1 O_2 · · · O_τ ; q_τ = S_i | Λ); that is, α_τ(i) is the probability of the beginning of the observed sequence up to time t_τ together with the event that at time t_τ the state is q_τ = q_{t_τ} = S_i, S_i ∈ S. α_τ(i) is called the forward variable. The probability P(O|Λ) is then calculated from the forward variables α_τ(i) by the following inductive procedure:
1/ Starting step:
    α_1(i) = π_i b_i(O_1),  1 ≤ i ≤ M.
2/ Inductive step:
    α_{τ+1}(j) = [ Σ_{i=1}^{M} α_τ(i) a_{ij}(τ) ] · b_j(O_{τ+1}),  1 ≤ τ ≤ t − 1 = t_k − 1,  1 ≤ j ≤ M.
3/ Finish step:
    P(O|Λ) = Σ_{i=1}^{M} α_t(i) = Σ_{i=1}^{M} α_{t_k}(i).
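The following is a minimal NumPy sketch of this forward recursion for a heterogeneous HMM, under the assumption that the time-dependent transition matrices A(τ) are supplied as a list (one matrix per step); the small example model is an arbitrary illustration, not the Λ_MTT built later.

import numpy as np

def forward(pi, A_list, B, obs):
    """Forward algorithm for a heterogeneous HMM (illustrative sketch).

    pi     : (M,) initial distribution
    A_list : list of (M, M) transition matrices, A_list[tau-1] = A(tau)
    B      : (M, N) observation probabilities, B[j, k] = b_j(v_k)
    obs    : observation indices O_1 ... O_t (0-based indices into V)
    Returns P(O | Lambda) and the table of forward variables alpha.
    """
    M = len(pi)
    alpha = np.zeros((len(obs), M))
    alpha[0] = pi * B[:, obs[0]]                         # starting step
    for tau in range(1, len(obs)):                       # inductive step
        alpha[tau] = (alpha[tau - 1] @ A_list[tau - 1]) * B[:, obs[tau]]
    return alpha[-1].sum(), alpha                        # finish step

# Small illustrative model (M = 2 states, N = 3 symbols).
pi = np.array([0.6, 0.4])
A_list = [np.array([[0.9, 0.1], [0.2, 0.8]]) for _ in range(3)]
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
p, alpha = forward(pi, A_list, B, obs=[0, 2, 1, 0])
print(p)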
3.4.2 The second basic problem and the Modified Viterbi Algorithm for heterogeneous HMM
The second basic problem for the HMM is to discover the hidden part of the model, that is, to find the most reasonable, optimal sequence of states corresponding to the given observations. The first important question is: what is the most reasonable criterion, what is "optimal"? There are two types of requirement:
• Type 1: for an observation sequence O = O_1 O_2 · · · O_t, 1 ≤ t ≤ T, generated by the heterogeneous HMM Λ, find the state q_t = q_t* that is optimal in the sense of maximum probability.
• Type 2: for an observation sequence O = O_1 O_2 · · · O_t, 1 ≤ t ≤ T, generated by the heterogeneous HMM Λ, find the sequence of states Q* = q_1* q_2* · · · q_t* of Λ that is optimal in the sense of maximum probability.

a/ Method for type 1. This is the problem of finding the single optimal state q_t = q_t* at the current time t. We construct the variable
    γ_t(i) = P(q_t = S_i | O, Λ).    (3.10)
It is easy to express γ_t(i) through the forward variable α_t(i) by the formula
    γ_t(i) = α_t(i) / P(O|Λ).    (3.11)
From formula (3.10) we get the solution
    q_t* = arg max_{1≤i≤M} γ_t(i).    (3.12)
• Modified Viterbi Algorithm 1: using the Forward algorithm of Section 3.4.1 we can easily obtain γ_t(i) through formula (3.11) and from there q_t* through formula (3.12).

b/ Method for type 2. To find the best sequence of states Q* = q_1* q_2* · · · q_t* given the observed sequence O = O_1 O_2 ... O_t of Λ, the thesis proposes the following algorithm, called the Modified Viterbi Algorithm for heterogeneous HMM. We define the quantity
    δ_τ(i) = max_{q_1 q_2 ... q_{τ−1}} P(q_1 q_2 ... q_{τ−1}, q_τ = S_i, O_1 O_2 · · · O_τ | Λ),    (3.13)
that is, δ_τ(i) is the greatest probability along a single state sequence up to time τ which ends at time τ in state S_i. From (3.13) we obtain the inductive formula
    δ_τ(j) = [ max_{1≤i≤M} δ_{τ−1}(i) · a_{ij}(τ − 1) ] · b_j(O_τ).    (3.14)
In order to recover the sequence of states during the induction (3.14), we must keep the maximizing argument (state) in (3.14) for each τ and j. So, along with δ_τ(i), we carry out the induction with the quantity ψ_τ(j), as follows:
• Modified Viterbi Algorithm
1/ Starting step:
    δ_1(i) = π_i b_i(O_1),  ψ_1(i) = 0,  1 ≤ i ≤ M.
2/ Inductive step:
    δ_τ(j) = [ max_{1≤i≤M} δ_{τ−1}(i) · a_{ij}(τ − 1) ] · b_j(O_τ),  2 ≤ τ ≤ t,  1 ≤ j ≤ M,
    ψ_τ(j) = arg max_{1≤i≤M} { δ_{τ−1}(i) · a_{ij}(τ − 1) },  2 ≤ τ ≤ t,  1 ≤ j ≤ M.
3/ Finish step:
    P* = max_{1≤i≤M} δ_t(i),  q_t* = arg max_{1≤i≤M} δ_t(i).
4/ Backtracking step:
    q_τ* = ψ_{τ+1}(q_{τ+1}*),  τ = t − 1, t − 2, ..., 1.
At the end of the algorithm we obtain the optimal sequence of states Q* = q_1* q_2* · · · q_t*.
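Below is a minimal sketch of this forward-only Viterbi recursion with backtracking for a heterogeneous HMM; as with the previous sketch, the representation of A(τ) as a list of matrices and the toy model values are assumptions made for illustration.

import numpy as np

def modified_viterbi(pi, A_list, B, obs):
    """Modified Viterbi recursion (forward pass + backtracking), illustrative sketch.

    Returns the best state sequence Q* (0-based indices) and its probability P*.
    """
    M, t = len(pi), len(obs)
    delta = np.zeros((t, M))
    psi = np.zeros((t, M), dtype=int)
    delta[0] = pi * B[:, obs[0]]                          # starting step
    for tau in range(1, t):                               # inductive step
        trans = delta[tau - 1][:, None] * A_list[tau - 1] # delta_{tau-1}(i) * a_ij(tau-1)
        psi[tau] = trans.argmax(axis=0)
        delta[tau] = trans.max(axis=0) * B[:, obs[tau]]
    q = np.zeros(t, dtype=int)                            # finish + backtracking steps
    q[-1] = delta[-1].argmax()
    for tau in range(t - 2, -1, -1):
        q[tau] = psi[tau + 1][q[tau + 1]]
    return q, delta[-1].max()

pi = np.array([0.6, 0.4])
A_list = [np.array([[0.9, 0.1], [0.2, 0.8]]) for _ in range(3)]
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(modified_viterbi(pi, A_list, B, obs=[0, 2, 1, 0]))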
3.5 Application of the HMM model to the MTT
3.5.1 Complementary basic probabilities for building the HMM corresponding to the MTT
This section presents a supplementary method for calculating the basic probabilities used in the next section (Section 3.5.2).

3.5.2 Application of the HMM model to the MTT
In this section we consider the MTT problem stated in Section 3.2.2 and build the HMM as follows.

1/ Parameter M and state space. We take M = M* + 1. The state space is S = {S_0, S_1, ..., S_{M*}}, where S_i is the event "there are exactly i targets of class M in R at the corresponding time of interest", i = 0, 1, ..., M*.

2/ Parameter N and the space of observed values. We take N = N* + 1. The space of observations is V = {v_0, v_1, ..., v_{N*}}, where v_k is the event "there are exactly k observed values at the corresponding time of interest", k = 0, 1, ..., N*.

3/ State transition probability distribution. A = [a_ij], 0 ≤ i, j ≤ M*, where
    a_ij = P[q_{t_k} = S_j | q_{t_{k−1}} = S_i]
         = D_1 · Σ_{l=max{0, i−j}}^{i} D_0 · ((λ_m)^i / i!) e^{−λ_m} · C^{j+l−i}_{M*+l−i} · C^{l}_{i} · (1 − p_m)^l · p_m^{j+l−i},
and the normalizing constants D_0 and D_1 are calculated by
    D_0 = [ Σ_{i=0}^{M*} ((λ_m)^i / i!) e^{−λ_m} ]^{−1},
    D_1 = [ Σ_{j=0}^{M*} Σ_{l=max{0, i−j}}^{i} D_0 · ((λ_m)^i / i!) e^{−λ_m} · C^{j+l−i}_{M*+l−i} · C^{l}_{i} · (1 − p_m)^l · p_m^{j+l−i} ]^{−1}.

4/ The probability distribution of the observations when the system is in state S_j at time t is B = {b_j(v_k)}, 0 ≤ k ≤ N*, 0 ≤ j ≤ M*, where
    b_j(v_k) = P[O_t = v_k | q_t = S_j] = 0  for k < j,
    b_j(v_k) = D_2 · ((λ_m + λ_g)^k / k!) e^{−(λ_m + λ_g)}  for k ≥ j,
and D_2 is the normalizing constant calculated by
    D_2 = [ Σ_{k=j}^{N*} ((λ_m + λ_g)^k / k!) e^{−(λ_m + λ_g)} ]^{−1}.

5/ Distribution of the initial states. Π = {π_i}, 0 ≤ i ≤ M*, where
    π_i = P[q_1 = S_i] = D_0 · ((λ_m)^i / i!) e^{−λ_m}.

Thus we have built an HMM for the MTT problem stated in Section 3.2.2; we denote this HMM by Λ_MTT. The Forward Algorithm and Modified Viterbi Algorithm presented in Section 3.4 are applied to Λ_MTT, with the note that the homogeneous model is only a special case of the heterogeneous one with A(n) ≡ A for all n. When we know the values n_{t_1}, n_{t_2}, ..., n_{t_k} (n_{t_k} = n_t), we can determine by these algorithms the corresponding target numbers m*_{t_1}, m*_{t_2}, ..., m*_{t_k} (m*_{t_k} = m*_t), where t_k = t denotes the current time.
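As an illustration of how Λ_MTT can be assembled and used, here is a hedged sketch that builds Π, A and B from the formulas above (truncated Poisson weights combined with the binomial factors, renormalized row by row, which plays the role of D_1 and D_2) and then estimates the hidden counts from a sequence of observed counts with a Viterbi-style recursion like the one of Section 3.4.2. The values of λ_m, λ_g, p_m, M*, N* and the observed sequence are arbitrary assumptions; this is not the thesis's implementation.

import numpy as np
from math import comb, exp, factorial

lam_m, lam_g, p_m = 2.0, 3.0, 0.8      # illustrative parameters
M_star, N_star = 6, 14                 # truncation levels of Assumption 3.2.1

pois_m = np.array([lam_m**i / factorial(i) * exp(-lam_m) for i in range(M_star + 1)])
D0 = 1.0 / pois_m.sum()
pi = D0 * pois_m                                        # initial distribution Pi

# Transition matrix A following the formula of Section 3.5.2 (row-normalized).
A = np.zeros((M_star + 1, M_star + 1))
for i in range(M_star + 1):
    for j in range(M_star + 1):
        s = 0.0
        for l in range(max(0, i - j), i + 1):
            s += (D0 * pois_m[i] * comb(M_star + l - i, j + l - i)
                  * comb(i, l) * (1 - p_m) ** l * p_m ** (j + l - i))
        A[i, j] = s
    A[i] /= A[i].sum()                                  # D1(i) normalization

# Observation matrix B: b_j(v_k) = 0 for k < j, truncated Poisson(lam_m + lam_g) for k >= j.
pois_n = np.array([(lam_m + lam_g)**k / factorial(k) * exp(-(lam_m + lam_g))
                   for k in range(N_star + 1)])
B = np.zeros((M_star + 1, N_star + 1))
for j in range(M_star + 1):
    B[j, j:] = pois_n[j:] / pois_n[j:].sum()            # D2(j) normalization

# Estimate the hidden counts m*_t from observed counts n_t with a Viterbi-style pass.
def viterbi(pi, A, B, obs):
    t, M = len(obs), len(pi)
    delta = pi * B[:, obs[0]]
    psi = np.zeros((t, M), dtype=int)
    for tau in range(1, t):
        trans = delta[:, None] * A
        psi[tau] = trans.argmax(axis=0)
        delta = trans.max(axis=0) * B[:, obs[tau]]
    q = [int(delta.argmax())]
    for tau in range(t - 1, 0, -1):
        q.append(int(psi[tau][q[-1]]))
    return q[::-1]

n_obs = [4, 5, 3, 6, 5]                                 # observed n_t (hypothetical)
print(viterbi(pi, A, B, n_obs))                         # estimated m*_t at each time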
3.6 Conclusion of Chapter 3
Chapter 3 studies the MTT problem of the form stated in Section 3.2.2. With the HMM method, this chapter achieves the following results:
1/ A "Forward Algorithm" and a "Modified Viterbi Algorithm" for heterogeneous HMMs are proposed.
2/ An HMM compatible with the MTT problem studied in Chapter 3 is built.
3/ The proposed algorithms and the compatible HMM are used to solve the stated MTT problem.
The main results of this chapter are published in the scientific works [CT3], [CT4].

CONCLUSION

The thesis deals with an important research topic in science and engineering: solving the Multiple Target Tracking (MTT) problem. This problem has many civil as well as military applications, so it is still being studied by many countries and many scientists.

Research results of the thesis. The thesis presents research results for two classes of MTT models. For the general MTT model class that may exhibit the hidden target phenomenon, the thesis proposes a new data association method: a data association strategy based on a recursively determined mapping system. This association strategy overcomes the situation of "losing the target" and "losing the tracking trajectory" when a target is hidden. At the same time, the thesis proves the existence of an optimal data association strategy in the Bayesian sense, and shows how to explicitly build a strategy that satisfies a given general property T as well as the specific "K(ε)-optimal" property commonly used in practice. For the MTT model class interested only in a subclass of targets, the thesis uses the HMM approach and obtains the following new results: a "Forward algorithm" and a "Modified Viterbi algorithm" for heterogeneous HMMs are proposed; a compatible HMM is built and the problem of determining the number of targets in the target class of interest of the above MTT model is solved.

New contributions of this thesis:
- A trajectory model of multiple targets is built using a recursive data association method that takes into account the entire orbital history.
- A new method of "data association" is introduced, and a "recursive construction" method based on the ideas of Bayesian inference is built to find a data association satisfying a given property.
- The problem of estimating the number of targets is solved based on the hidden Markov model with two new algorithms, the forward algorithm and the improved (modified) Viterbi algorithm.

Further extensions of this thesis. In the process of researching and completing the thesis, some open problems have appeared which should be further studied for better results:
1. Investigate the general MTT model with an a priori distribution of the multiple exponent of the elements in the original set M[Y(t_0)].
2. Based on the algorithm for finding a T-strategy, find solutions that are optimal in different senses for the MTT problem.
3. Study the HMM model for the general MTT model class.
4. Study the uniqueness of the optimal state q_t* and of Q*, and the relationship between them, i.e. whether q_t* belongs to Q* or not.

LIST OF SCIENTIFIC PUBLICATIONS

[CT1] Nguyen Thi Hang, Nguyen Hai Nam, "The existence of an optimal solution and Kalman algorithm to find a solution acceptable under predetermined threshold for problem of Multi-target Tracking", Journal of Military Science and Technology, ISSN 1859-1043, No. 46 (12-2016), pp. 149–157, 2016.
[CT2] Nguyen Thi Hang, "An Optimal Algorithm for Multi-Target Tracking with Obscured Targets", Journal of Research and Development on Information and Communication Technology, ISSN 1859-3526, Vol. 2019, No. 1, September 2019, pp. 47–55, 2019.
[CT3] Nguyen Thi Hang, "Using Hidden Markov Model to Determine Objective in Multi-target Tracking", Journal of Military Science and Technology, ISSN 1859-1043, No. 68 (8-2020), pp. 178–185, 2020.
[CT4] Nguyen Thi Hang, Le Bich Phuong, Pham Ngoc Anh, "The Modified Viterbi Algorithm in Determining the number of targets in the Multiple Target Tracking", Journal of Military Science and Technology, ISSN 1859-1043, No. 73 (6-2021), pp. 145–152, 2021.
