Vietnam Journal of Mathematics 33:4 (2005) 443–461

Central Limit Theorem for Functional of Jump Markov Processes

Nguyen Van Huu, Vuong Quan Hoang, and Tran Minh Ngoc
Department of Mathematics, Hanoi National University, 334 Nguyen Trai Str., Hanoi, Vietnam

Received February 8, 2005; Revised May 19, 2005

Abstract. In this paper some conditions are given to ensure that, for a jump homogeneous Markov process $\{X(t), t \ge 0\}$, the law of the integral functional of the process, $T^{-1/2}\int_0^T \varphi(X(t))\,dt$, converges to the normal law $N(0, \sigma^2)$ as $T \to \infty$, where $\varphi$ is a mapping from the state space $E$ into $\mathbb{R}$.

1. Introduction

The central limit theorem is a subject investigated intensively by many well-known probabilists such as Lindeberg and Chung. The results concerning central limit theorems, the iterated logarithm law, and the lower and upper bounds of moderate deviations are well understood for sequences of independent random variables and for martingales, but less is known for dependent random variables such as Markov chains and Markov processes.

The first result on the central limit theorem for functionals of a stationary Markov chain with finite state space can be found in the book of Chung [5]. A standard technique for establishing central limit theorems is the regeneration method. The main idea of this method is to analyse a Markov process with arbitrary state space by dividing it into independent and identically distributed random blocks between visits to a fixed state (or atom). This technique has been developed by Athreya - Ney [2], Nummelin [10], Meyn - Tweedie [9] and, more recently, by Chen [4].

The technical method used in this paper is based on the central limit theorem for martingales and on the ergodic theorem. The paper is organized as follows. In Sec. 2 we prove that, for a positive recurrent Markov sequence $\{X_n, n \ge 0\}$ with Borel state space $(E, \mathcal{B})$ and for $\varphi: E \to \mathbb{R}$ such that
$$\varphi(x) = f(x) - Pf(x) = f(x) - \int_E f(y)\,P(x, dy)$$
with $f: E \to \mathbb{R}$ satisfying $\int_E f^2(x)\,\Pi(dx) < \infty$, where $P(x, \cdot)$ is the transition probability and $\Pi(\cdot)$ is the stationary distribution of the process, the distribution of $n^{-1/2}\sum_{i=1}^{n}\varphi(X_i)$ converges to the normal law $N(0, \sigma^2)$ with
$$\sigma^2 = \int_E \big(\varphi^2(x) + 2\varphi(x)Pf(x)\big)\,\Pi(dx).$$
The central limit theorem for the integral functional $T^{-1/2}\int_0^T \varphi(X(t))\,dt$ of a jump Markov process $\{X(t), t \ge 0\}$ is established and proved in Sec. 3. Some examples are given in Sec. 4. It is worth emphasizing that the conditions for asymptotic normality of $n^{-1/2}\sum_{i=1}^{n}\varphi(X_i)$ are the same as in [8], but they are not equivalent to the ones established in [10, 11]. The results on the central limit theorem for jump Markov processes obtained in this paper are new.

2. Central Limit for the Functional of Markov Sequence

Let us consider a Markov sequence $\{X_n, n \ge 0\}$ defined on a basic probability space $(\Omega, \mathcal{F}, P)$ with Borel state space $(E, \mathcal{B})$, where $\mathcal{B}$ is the $\sigma$-algebra generated by a countable family of subsets of $E$. Suppose that $\{X_n, n \ge 0\}$ is homogeneous with transition probability
$$P(x, A) = P(X_{n+1} \in A \mid X_n = x), \quad A \in \mathcal{B}.$$
We have the following definitions.

Definition 2.1. The Markov process $\{X_n, n \ge 0\}$ is said to be irreducible if there exists a $\sigma$-finite measure $\mu$ on $(E, \mathcal{B})$ such that for all $A \in \mathcal{B}$,
$$\mu(A) > 0 \ \text{ implies } \ \sum_{n=1}^{\infty} P^n(x, A) > 0, \quad \forall x \in E,$$
where $P^n(x, A) = P(X_{m+n} \in A \mid X_m = x)$. The measure $\mu$ is called an irreducibility measure. By Proposition 2.4 of Nummelin [10], there exists a maximal irreducibility measure $\mu^*$ with the property that if $\mu$ is any irreducibility measure then $\mu \ll \mu^*$.
Definition 2.2. The Markov process $\{X_n, n \ge 0\}$ is said to be recurrent if
$$\sum_{n=1}^{\infty} P^n(x, A) = \infty, \quad \forall x \in E,\ \forall A \in \mathcal{B} \text{ with } \mu^*(A) > 0.$$
The process is said to be Harris recurrent if $P_x(X_n \in A \text{ i.o.}) = 1$. Let us notice that a Harris recurrent process is also recurrent.

Theorem 2.1. If $\{X_n, n \ge 0\}$ is recurrent then there exists an invariant measure $\Pi(\cdot)$ on $(E, \mathcal{B})$, unique up to constant multiples, in the sense that
$$\Pi(A) = \int_E \Pi(dx)\,P(x, A), \quad \forall A \in \mathcal{B}, \tag{1}$$
or equivalently
$$\Pi(\cdot) = \Pi P(\cdot). \tag{2}$$
(See Theorem 10.4.4 of Meyn - Tweedie [9].)

Definition 2.3. A Markov sequence $\{X_n, n \ge 0\}$ is said to be positive recurrent (null recurrent) if the invariant measure $\Pi$ is finite (infinite).

For a positive recurrent Markov sequence $\{X_n, n \ge 0\}$, its unique invariant probability measure is called the stationary distribution and is denoted by $\Pi$. Hereafter we always denote the stationary distribution of the Markov sequence $\{X_n, n \ge 0\}$ by $\Pi$, and if $\nu$ is the initial distribution of the Markov sequence then $P_\nu(\cdot)$ and $E_\nu(\cdot)$ denote the probability and the expectation operator corresponding to $\nu$. In particular, $P_\nu(\cdot), E_\nu(\cdot)$ are replaced by $P_x(\cdot), E_x(\cdot)$ if $\nu$ is the Dirac measure at $x$.

We have the following ergodic theorem.

Theorem 2.2. If the Markov sequence $\{X_n, n \ge 0\}$ possesses the unique invariant distribution $\Pi$ such that
$$P(x, \cdot) \ll \Pi(\cdot), \quad \forall x \in E, \tag{3}$$
then $\{X_n, n \ge 0\}$ is metrically transitive when the initial distribution is the stationary distribution. Further, for any measurable mapping $\varphi: E \times E \to \mathbb{R}$ such that $E_\Pi|\varphi(X_0, X_1)| < \infty$, with probability one
$$\lim_{n \to \infty} n^{-1}\sum_{k=0}^{n-1} \varphi(X_k, X_{k+1}) = E_\Pi \varphi(X_0, X_1), \tag{4}$$
and the limit does not depend on the initial distribution. (See Theorem 1.1 of Billingsley [3].)

The following notations will be used in this paper. For a measurable mapping $\varphi: E \to \mathbb{R}$ we denote
$$\Pi\varphi = \int_E \varphi(x)\,\Pi(dx), \qquad P\varphi(x) = \int_E \varphi(y)\,P(x, dy) = E(\varphi(X_{n+1}) \mid X_n = x),$$
$$P^n\varphi(x) = \int_E \varphi(y)\,P^n(x, dy) = E(\varphi(X_{n+m}) \mid X_m = x).$$
For the countable state space $E = \{1, 2, \dots\}$ we denote
$$P_{ij} = P(i, \{j\}) = P(X_{n+1} = j \mid X_n = i), \qquad P^{(n)}_{ij} = P^n(i, \{j\}) = P(X_{m+n} = j \mid X_m = i),$$
$$\pi_j = \Pi(\{j\}), \qquad P = [P_{ij},\ i, j \in E], \qquad P^{(n)} = [P^{(n)}_{ij},\ i, j \in E] = P^n.$$
Then
$$\Pi\varphi = \sum_{j \in E}\varphi(j)\pi_j, \qquad P\varphi(j) = \sum_{k \in E}\varphi(k)P_{jk}, \qquad P^n\varphi(j) = \sum_{k \in E}\varphi(k)P^{(n)}_{jk}.$$
If the distribution of a random variable $Y_n$ converges to the normal distribution $N(\mu, \sigma^2)$ then we write $Y_n \xrightarrow{\mathcal{L}} N(\mu, \sigma^2)$. The indicator function of a set $A$ is denoted by $\mathbf{1}_A$, where $\mathbf{1}_A(\omega) = 1$ if $\omega \in A$ and $\mathbf{1}_A(\omega) = 0$ otherwise. Finally, a mapping $\varphi: E = \{1, 2, \dots\} \to \mathbb{R}$ is identified with the column vector $\varphi = (\varphi(1), \varphi(2), \dots)^T$.

The main result of this section is to establish conditions under which $n^{-1/2}\sum_{k=1}^{n}\varphi(X_k) \xrightarrow{\mathcal{L}} N(\mu, \sigma^2)$. We need the following central limit theorem for martingale differences.

Theorem 2.3. (Central limit theorem for martingale differences) Suppose that $\{u_k, k \ge 0\}$ is a sequence of martingale differences defined on a probability space $(\Omega, \mathcal{F}, P)$ with respect to a filtration $\{\mathcal{F}_k, k \ge 0\}$, i.e., $E(u_{k+1} \mid \mathcal{F}_k) = 0$ for $k = 0, 1, 2, \dots$ Further, assume that the following conditions are satisfied:

(A1) $n^{-1}\sum_{k=1}^{n} E(u_k^2 \mid \mathcal{F}_{k-1}) \xrightarrow{P} \sigma^2$,

(A2) $n^{-1}\sum_{k=1}^{n} E\big(u_k^2 \mathbf{1}_{[|u_k| \ge \varepsilon\sqrt{n}]} \mid \mathcal{F}_{k-1}\big) \xrightarrow{P} 0$ for each $\varepsilon > 0$ (the conditional Lindeberg condition).

Then
$$n^{-1/2}\sum_{k=1}^{n} u_k \xrightarrow{\mathcal{L}} N(0, \sigma^2). \tag{5}$$
(See the corollary of Theorem 3.2 in [7].)
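As a purely numerical illustration of Theorem 2.3 (a sketch added here, not taken from the paper), the following Python snippet builds bounded martingale differences $u_k = s(X_{k-1})\varepsilon_k$, where the $\varepsilon_k$ are independent random signs and $X_k$ is an auxiliary ergodic two-state chain; condition (A1) then holds by the ergodic theorem and (A2) is automatic because the $u_k$ are bounded. The chain, the values of `s`, and all parameters are arbitrary choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Auxiliary ergodic two-state chain X_k in {0, 1}; it only serves to make the
# conditional variance E(u_k^2 | F_{k-1}) = s[X_{k-1}]^2 vary along the path.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
s = np.array([1.0, 2.0])          # conditional standard deviation in each state

def normalized_sum(n):
    x, total, cond_var = 0, 0.0, 0.0
    for _ in range(n):
        u = s[x] * rng.choice([-1.0, 1.0])   # E(u_k | F_{k-1}) = 0
        total += u
        cond_var += s[x] ** 2                # accumulates E(u_k^2 | F_{k-1})
        x = rng.choice(2, p=P[x])            # advance the auxiliary chain
    return total / np.sqrt(n), cond_var / n

n, reps = 2000, 500
pairs = [normalized_sum(n) for _ in range(reps)]
print("mean of n^{-1} sum E(u_k^2 | F_{k-1}) :", np.mean([v for _, v in pairs]))
print("empirical variance of n^{-1/2} sum u_k:", np.var([y for y, _ in pairs]))
```

Both printed quantities estimate the same $\sigma^2$ in (5); with the numbers above it is $\Pi s^2 = 16/7 \approx 2.29$, where $\Pi = (4/7, 3/7)$ is the stationary law of the auxiliary chain.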
Remark 1. Theorem 2.3 remains valid when $\{u_k, k \ge 0\}$ is a sequence of $m$-dimensional martingale differences, with condition (A1) replaced by
$$n^{-1}\sum_{k=1}^{n}\operatorname{Var}(u_k \mid \mathcal{F}_{k-1}) \xrightarrow{P} \sigma^2 = [\sigma_{ij},\ i, j = 1, 2, \dots, m],$$
where $\operatorname{Var}(u_k \mid \mathcal{F}_{k-1}) = [E(u_{ik}u_{jk} \mid \mathcal{F}_{k-1}),\ i, j = 1, 2, \dots, m]$.

We shall prove the following theorem.

Theorem 2.4. (Central limit theorem for functionals of a Markov sequence) Suppose that the following conditions hold:

(H1) The Markov sequence $\{X_n, n \ge 0\}$ is positive recurrent with transition probability $P(x, \cdot)$ and unique stationary distribution $\Pi(\cdot)$ satisfying condition (3).

(H2) The mapping $\varphi: E \to \mathbb{R}$ can be represented in the form
$$\varphi(x) = f(x) - Pf(x), \quad x \in E, \tag{6}$$
where $f: E \to \mathbb{R}$ is measurable and $\Pi f^2 < \infty$.

Then
$$n^{-1/2}\sum_{k=1}^{n}\varphi(X_k) \xrightarrow{\mathcal{L}} N(0, \sigma^2) \tag{7}$$
for any initial distribution, where
$$\sigma^2 = \Pi\big(f^2 - (Pf)^2\big) = \Pi\big(\varphi^2 + 2\varphi Pf\big). \tag{8}$$

Proof. We have
$$n^{-1/2}\sum_{k=1}^{n}\varphi(X_k) = n^{-1/2}\sum_{k=1}^{n}[f(X_k) - Pf(X_k)]$$
$$= n^{-1/2}\sum_{k=1}^{n}[f(X_k) - Pf(X_{k-1})] + n^{-1/2}\sum_{k=1}^{n}Pf(X_{k-1}) - n^{-1/2}\sum_{k=1}^{n}Pf(X_k)$$
$$= n^{-1/2}\sum_{k=1}^{n}u_k + n^{-1/2}[Pf(X_0) - Pf(X_n)],$$
where $u_k = f(X_k) - Pf(X_{k-1}) = f(X_k) - E(f(X_k) \mid X_{k-1})$ are martingale differences with respect to $\mathcal{F}_k = \sigma(X_0, X_1, \dots, X_k)$, whereas $n^{-1/2}[Pf(X_0) - Pf(X_n)] \xrightarrow{P} 0$ by Chebyshev's inequality. Thus it suffices to prove that
$$Y_n := n^{-1/2}\sum_{k=1}^{n}u_k \xrightarrow{\mathcal{L}} N(0, \sigma^2)$$
and that the convergence does not depend on the initial distribution. For this purpose we show that the martingale differences $\{u_k, k \ge 1\}$ satisfy conditions (A1) and (A2). According to assumption (H2) we have
$$E_\Pi[E(u_1^2 \mid \mathcal{F}_0)] = E_\Pi(u_1^2) = E_\Pi[f(X_1) - Pf(X_0)]^2 = E_\Pi f^2(X_1) - E_\Pi[Pf(X_0)]^2,$$
thus
$$E_\Pi(u_1^2) = \Pi f^2 - \Pi(Pf)^2 < \infty. \tag{9}$$
Therefore, by the ergodic Theorem 2.2, for any initial distribution, with probability one
$$n^{-1}\sum_{k=1}^{n}E(u_k^2 \mid \mathcal{F}_{k-1}) \longrightarrow E_\Pi u_1^2 = \sigma^2,$$
so condition (A1) of Theorem 2.3 is satisfied. On the other hand, by (9) we have
$$E_\Pi\big(u_1^2\mathbf{1}_{[|u_1| \ge t]}\big) \longrightarrow 0 \tag{10}$$
as $t \uparrow \infty$. Again by the ergodic Theorem 2.2, for any initial distribution, with probability one
$$n^{-1}\sum_{k=1}^{n}E\big(u_k^2\mathbf{1}_{[|u_k| \ge t]} \mid \mathcal{F}_{k-1}\big) \longrightarrow E_\Pi\big(u_1^2\mathbf{1}_{[|u_1| \ge t]}\big) \tag{11}$$
for each $t > 0$. Since $[|u_k| \ge \varepsilon\sqrt{n}] \subset [|u_k| \ge t]$ as soon as $\varepsilon\sqrt{n} \ge t$, it follows from (11) and then (10) that, with probability one,
$$0 \le \limsup_{n \to \infty} n^{-1}\sum_{k=1}^{n}E\big(u_k^2\mathbf{1}_{[|u_k| \ge \varepsilon\sqrt{n}]} \mid \mathcal{F}_{k-1}\big) \le \lim_{n \to \infty} n^{-1}\sum_{k=1}^{n}E\big(u_k^2\mathbf{1}_{[|u_k| \ge t]} \mid \mathcal{F}_{k-1}\big) = E_\Pi\big(u_1^2\mathbf{1}_{[|u_1| \ge t]}\big) \longrightarrow 0 \quad \text{as } t \uparrow \infty.$$
Thus condition (A2) is satisfied, and hence (7) holds by the central limit theorem for the martingale differences $\{u_k, k \ge 1\}$.
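Conditions (6)–(8) are easy to instantiate on a finite state space, where $P$ is a stochastic matrix and $\Pi$ its stationary row vector. The sketch below (an illustration added here under those assumptions, not code from the paper) starts from an arbitrary $f$, sets $\varphi = f - Pf$, evaluates $\sigma^2$ by both expressions in (8), and compares the result with the sample variance of $n^{-1/2}\sum_{k}\varphi(X_k)$ over simulated paths.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary ergodic transition matrix on E = {0, 1, 2} and arbitrary test function f.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.3, 0.3, 0.4]])
f = np.array([1.0, -2.0, 0.5])

# Stationary distribution: normalized left eigenvector of P for the eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

phi = f - P @ f                               # phi = f - Pf, so Pi(phi) = 0 automatically
sigma2_a = pi @ (f**2 - (P @ f)**2)           # first expression in (8)
sigma2_b = pi @ (phi**2 + 2 * phi * (P @ f))  # second expression in (8)

def path_statistic(n, x=0):                   # the limit law does not depend on the start
    total = 0.0
    for _ in range(n):
        x = rng.choice(3, p=P[x])
        total += phi[x]
    return total / np.sqrt(n)

samples = [path_statistic(3000) for _ in range(400)]
print("sigma^2 via Pi(f^2 - (Pf)^2)    :", sigma2_a)
print("sigma^2 via Pi(phi^2 + 2 phi Pf):", sigma2_b)
print("empirical variance of the sums  :", np.var(samples))
```

The two closed-form values coincide because $\varphi^2 + 2\varphi Pf = f^2 - (Pf)^2$ pointwise, which is exactly the identity behind (8).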
Remark 2. If the series
$$\sum_{n=0}^{\infty}P^n\varphi(x) = \sum_{n=0}^{\infty}\int_E \varphi(y)\,P^n(x, dy)$$
converges, then we always have $\varphi(x) = f(x) - Pf(x)$ with $f(x) = \sum_{n=0}^{\infty}P^n\varphi(x)$. In fact, it is obvious that
$$f(x) = \varphi(x) + \sum_{n=1}^{\infty}P^n\varphi(x) = \varphi(x) + P\sum_{n=0}^{\infty}P^n\varphi(x) = \varphi(x) + Pf(x).$$
Furthermore, in this case
$$\sigma^2 = \Pi\Big(\varphi^2 + 2\sum_{n=0}^{\infty}\varphi P^n\varphi\Big).$$

Remark 3. If $\varphi = f - Pf$ holds, then
$$\Pi\varphi = \Pi f - \Pi Pf = 0, \tag{12}$$
so condition (12) is necessary for $\varphi = f - Pf$. Furthermore, if in addition $\lim_{n \to \infty}P^n f(x) = \Pi f$ for all $x \in E$, then $f(x)$ is also given by
$$f(x) = \sum_{n=0}^{\infty}P^n\varphi(x) + \Pi f.$$
In fact, we have
$$\varphi(x) = f(x) - Pf(x), \quad P\varphi(x) = Pf(x) - P^2 f(x), \quad \dots, \quad P^n\varphi(x) = P^n f(x) - P^{n+1}f(x).$$
Summing the above equalities we obtain
$$\sum_{k=0}^{n}P^k\varphi(x) = f(x) - P^{n+1}f(x) \longrightarrow f(x) - \Pi f.$$

Remark 4. The function $f$ given by (6) is defined uniquely up to an additive constant if $\lim_{n \to \infty}P^n g(x) = \Pi g$ for every $\Pi$-integrable $g$. In fact, suppose that $f_1, f_2$ are functions satisfying (6). Then $g = f_1 - f_2$ is a solution of the equations
$$g(x) = Pg(x), \qquad g(x) = P(Pg(x)) = P^2 g(x) = \dots = P^n g(x), \quad \forall x \in E,$$
for all $n = 1, 2, \dots$ Thus the limit $g(x) = \lim_{n \to \infty}P^n g(x) = \Pi g$ (a constant) exists. It also follows from Remark 4 and from (8) that if $f$ satisfies equation (6) then $\sigma^2$ is defined uniquely, i.e., $\sigma^2$ does not change if $f$ is replaced by $f + C$ with $C$ any constant, since
$$\Pi[\varphi^2 + 2\varphi P(f + C)] = \Pi[\varphi^2 + 2\varphi Pf] + 2C\Pi\varphi = \Pi[\varphi^2 + 2\varphi Pf].$$

Remark 5. If $\Pi\varphi \ne 0$ we can replace $\varphi$ by $\varphi^* = \varphi - \Pi\varphi$.

Corollary 2.1. Assume that a Markov chain $\{X_n, n \ge 0\}$ is irreducible and ergodic with countable state space $E = \{1, 2, \dots\}$ and ergodic distribution $\Pi = (\pi_1, \pi_2, \dots)$, and that the following condition is satisfied:

(H3) The mapping $\varphi: E \to \mathbb{R}$ takes the form $\varphi(x) = f(x) - Pf(x)$ for all $x \in E$, with $f: E \to \mathbb{R}$ measurable and $\Pi f^2 < \infty$.

Put $\sigma^2 = \Pi[f^2 - (Pf)^2] = \Pi[\varphi^2 + 2\varphi Pf]$. Then
$$n^{-1/2}\sum_{k=1}^{n}\varphi(X_k) \xrightarrow{\mathcal{L}} N(0, \sigma^2) \quad \text{as } n \to \infty.$$

3. Central Limit for Integral Functional of Jump Markov Process

3.1. Jump Markov Process

Let $\{X(t), t \ge 0\}$ be a random process defined on some probability space $(\Omega, \mathcal{F}, P)$ with measurable state space $(E, \mathcal{B})$.

Definition 3.1. The process $\{X(t), t \ge 0\}$ is called a jump homogeneous Markov process with state space $(E, \mathcal{B})$ if it is a Markov process with transition probability
$$P(t, x, A) = P(X(t + s) \in A \mid X(s) = x), \quad s, t \ge 0,$$
satisfying the condition
$$\lim_{t \to 0}P(t, x, \{x\}) = 1, \quad \forall x \in E. \tag{13}$$

We suppose also that $\{X(t), t \ge 0\}$ is right continuous and that the limit (13) is uniform in $x \in E$. By Theorem 2.4 in [6] the sample functions of $\{X(t), t \ge 0\}$ are step functions with probability one, and there exist two $q$-functions $q(\cdot)$ and $q(\cdot, \cdot)$, both Baire functions, where $q(x, \cdot)$ is a finite measure on the Borel subsets of $E \setminus \{x\}$ and $q(x) = q(x, E \setminus \{x\})$ is bounded. Further,
$$\lim_{t \to 0}\frac{1 - P(t, x, \{x\})}{t} = q(x), \qquad \lim_{t \to 0}\frac{P(t, x, A)}{t} = q(x, A)$$
uniformly in $A \subset E \setminus \{x\}$. If $q(x) > 0$ for all $x \in E$ then the process has no absorbing state. We assume also that $q(x)$ is bounded away from 0.

Since $\{X(t), t \ge 0\}$ is a right continuous step process, the system starts out in some state $Z_1$, stays there for a length of time $\rho_1$, then jumps immediately to a new state $Z_2$, stays there for a length of time $\rho_2$, and so on. Therefore there exist random variables $Z_1, Z_2, \dots$ and $\rho_1, \rho_2, \dots$ such that
$$X(t) = Z_1 \ \text{ if } 0 \le t < \rho_1, \qquad X(t) = Z_n \ \text{ if } \rho_1 + \dots + \rho_{n-1} \le t < \rho_1 + \dots + \rho_n,\ n \ge 2.$$
The $\rho_n$'s are all finite because we have assumed that $q(x) > 0$ for all $x \in E$. Let $\nu(t)$ be the random variable defined by
$$\nu(t) = \max\{k : \rho_1 + \dots + \rho_k < t\};$$
then $\nu(t)$ is the number of jumps which occur up to time $t$.

It follows from the general theory of discontinuous Markov processes (see [6], p. 266) that $\{Z_n, n \ge 1\}$ is a Markov chain with transition probability
$$P(x, A) = \frac{q(x, A)}{q(x)}; \tag{14}$$
furthermore,
$$P(\rho_{n+1} > s \mid \rho_1, \dots, \rho_n, Z_1, \dots, Z_{n+1}) = e^{-q(Z_{n+1})s}, \quad s > 0, \tag{15}$$
$$P(Z_{n+1} \in A \mid \rho_1, \dots, \rho_n, Z_1, \dots, Z_n) = P(Z_n, A). \tag{16}$$
The function $q(\cdot, \cdot)$ is called the transition intensity.

It follows from (15) and (16) that $\{(Z_n, \rho_n), n \ge 1\}$ is a Markov chain on the Cartesian product $E \times \mathbb{R}_+$, where $\mathbb{R}_+ = (0, \infty)$. This chain is called the imbedded chain, with transition probability
$$Q(x, s, A \times B) = P(Z_{n+1} \in A, \rho_{n+1} \in B \mid Z_n = x, \rho_n = s) = \int_A P(x, dy)\int_B q(y)e^{-q(y)u}\,du, \quad A \times B \in \mathcal{B} \times \mathcal{B}(\mathbb{R}_+),$$
where $\mathcal{B}(\mathbb{R}_+)$ denotes the Borel $\sigma$-algebra on $\mathbb{R}_+$. This transition probability does not depend on $s$, so we write it as $Q(x, A \times B)$ or, formally,
$$Q(x, dy \times du) = P(x, dy)\,q(y)\exp(-q(y)u)\,du.$$
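The description above translates directly into a simulation recipe for a finite-state jump process: alternately draw the holding time $\rho_n \sim \mathrm{Exp}(q(Z_n))$ and the next state $Z_{n+1} \sim P(Z_n, \cdot)$, in accordance with (14)–(16). The sketch below is only an illustration; the rates `q`, the kernel `P`, and the helper names are assumptions made for this example, not objects from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative jump intensities q(x) and jump kernel P(x, .) on E = {0, 1, 2}.
q = np.array([1.0, 2.0, 0.5])
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.6, 0.4, 0.0]])     # P(x, {x}) = 0: every jump changes the state

def simulate_path(T, z0=0):
    """Return jump times and visited states of {X(t), 0 <= t <= T}."""
    t, z = 0.0, z0
    times, states = [0.0], [z0]
    while t < T:
        t += rng.exponential(1.0 / q[z])      # holding time rho_n ~ Exp(q(Z_n))
        z = rng.choice(3, p=P[z])             # next state Z_{n+1} ~ P(Z_n, .)
        times.append(min(t, T))
        states.append(z)
    return np.array(times), np.array(states)

def integral_functional(times, states, phi):
    """Integral of phi(X(t)) dt over [0, T], computed from the step path."""
    holds = np.diff(times)                    # time spent in each visited state
    return float(np.sum(phi[states[:-1]] * holds))

phi = np.array([1.0, 0.0, 0.0])               # e.g. occupation time of state 0
times, states = simulate_path(T=100.0)
print("time spent in state 0 on [0, 100]:", integral_functional(times, states, phi))
```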
Definition 3.2. The probability measure $\Pi^*$ on $(E \times \mathbb{R}_+, \mathcal{B} \times \mathcal{B}(\mathbb{R}_+))$ is called the stationary distribution of the imbedded chain $\{(Z_n, \rho_n), n \ge 1\}$ if
$$\Pi^*(A \times B) = \int_{E \times \mathbb{R}_+}\Pi^*(dx \times ds)\,Q(x, A \times B), \quad A \times B \in \mathcal{B} \times \mathcal{B}(\mathbb{R}_+). \tag{17}$$

Letting $B = \mathbb{R}_+$, we see that $\Pi^*$ is the stationary distribution of the imbedded chain if and only if
$$\Pi(\cdot) = \Pi^*(\cdot \times \mathbb{R}_+) \tag{18}$$
is the stationary distribution of $\{Z_n, n \ge 1\}$ with transition probability $P(x, A) = Q(x, A \times \mathbb{R}_+)$ and
$$\Pi^*(A \times B) = \int_E \Pi(dx)\,Q(x, A \times B).$$
Since $\Pi P(\cdot) = \Pi(\cdot)$, we have
$$\Pi^*(A \times B) = \int_E \Pi(dx)\int_A P(x, dy)\int_B q(y)\exp(-q(y)u)\,du = \int_A\Big(\int_E \Pi(dx)P(x, dy)\Big)\int_B q(y)\exp(-q(y)u)\,du,$$
or
$$\Pi^*(A \times B) = \int_A \Pi(dy)\int_B q(y)\exp(-q(y)u)\,du, \tag{19}$$
or, in differential form,
$$\Pi^*(dy \times du) = \Pi(dy)\,q(y)\exp(-q(y)u)\,du. \tag{20}$$
Thus we have the following proposition.

Proposition 3.1. If the Markov chain $\{Z_n, n \ge 1\}$ with transition probability $P(x, A)$ has the stationary distribution $\Pi$, then the imbedded chain also possesses a stationary distribution $\Pi^*$, defined by (19) or (20).

Proposition 3.2. If $P(x, \cdot) \ll \Pi(\cdot)$ for all $x \in E$, where $\Pi$ is the stationary distribution of $\{Z_n, n \ge 1\}$, then the transition probability $Q(x, \cdot)$ of the imbedded chain is also absolutely continuous with respect to the stationary distribution $\Pi^*$, i.e., $Q(x, \cdot) \ll \Pi^*(\cdot)$ for all $x \in E$. (See [3], p. 66.)

Hereafter we denote by $\Pi$ and $\Pi^*$ the stationary distributions of the Markov chain $\{Z_n, n \ge 1\}$ and of the imbedded chain $\{(Z_n, \rho_n), n \ge 1\}$, respectively.

3.2. Functional Central Limit Theorem

We have the following ergodic theorem for the imbedded chain.

Theorem 3.1. (Ergodic theorem for the imbedded process) If the Markov chain $\{Z_n, n \ge 1\}$ with transition probability $P(x, \cdot)$ has the stationary distribution $\Pi$ such that $P(x, \cdot) \ll \Pi(\cdot)$ for all $x \in E$, and if $\varphi(Z_1, \rho_1; Z_2, \rho_2)$ is a random variable possessing finite expectation $\mu$ with respect to the probability measure $P_{\Pi^*}$, then for any initial distribution
$$\lim_{n \to \infty}n^{-1}\sum_{k=1}^{n}\varphi(Z_k, \rho_k; Z_{k+1}, \rho_{k+1}) = \mu \quad \text{a.s.} \tag{21}$$
In particular, [...] These relations remain valid if in the limits $n$ is replaced by $\nu(t)$ and the limits are taken as $t \to \infty$.

Proof. (21) follows from the ergodic theorem for the Markov chain $\{(Z_n, \rho_n), n \ge 1\}$, and (23) follows from (22) by the same argument as in renewal theory.

Applying Theorem 2.4 to the imbedded chain $\{(Z_n, \rho_n), n \ge 1\}$ we obtain the following theorem.

Theorem 3.2. (Central limit theorem for the imbedded chain) [...]

[...] Suppose that $\{u_k\}$ is a sequence of martingale differences such that
$$\sup_{n, m \ge 1}\Big(n^{-1}\sum_{k=m}^{m+n}Eu_k^2\Big) = C < \infty, \tag{29}$$
and that $\{\nu(t), t \ge 0\}$ is a random process valued in $\{1, 2, \dots\}$ such that $\{\nu(t) = k\} \in \mathcal{F}_k$ for all $t \ge 0$ and
$$\lim_{t \to \infty}\frac{\nu(t)}{t} = \alpha > 0 \quad \text{a.s.} \tag{30}$$
Then
$$T^{-1/2}\Big(\sum_{k=1}^{\nu(T)}u_k - \sum_{k=1}^{[\alpha T]}u_k\Big) \xrightarrow{P} 0 \quad \text{as } T \to \infty. \tag{31}$$

Proof. It follows from condition (30) that for all $\varepsilon > 0$ and $T$ sufficiently [...]

Theorem 3.3. Assume that condition (C1) of Theorem 3.2 and the following condition (C3) are satisfied:

(C3) (i) $\Pi\varphi^2 q^{-2} < \infty$, and (ii) the equation
$$(I - P)g(x) = P\varphi q^{-1}(x)$$
has a solution $g$ with $\Pi g^2 < \infty$.

Then
$$T^{-1/2}\int_0^T \varphi(X(t))\,dt \xrightarrow{\mathcal{L}} N(0, \alpha\delta^2)$$
for any initial distribution, where $\delta^2 = 2\Pi(\varphi^2 q^{-2} + \varphi g q^{-1})$.

Proof. The conclusion of Theorem [...]
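On a finite state space the recipe of Theorem 3.3 can be carried out numerically: solve the (singular but consistent) linear system $(I - P)g = P(\varphi q^{-1})$, form $\delta^2 = 2\Pi(\varphi^2 q^{-2} + \varphi g q^{-1})$, and multiply by $\alpha$, taken here as $(\Pi q^{-1})^{-1}$, the almost sure limit of $\nu(t)/t$ in (30) (a value consistent with the two-state example of Sec. 4). The sketch below is an illustration under these assumptions, not code from the paper; $\varphi$ is centred so that $\Pi(\varphi q^{-1}) = 0$, which makes $\delta^2$ insensitive to the additive constant left free in $g$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative data: jump-chain kernel P and rates q on E = {0, 1, 2}.
q = np.array([1.0, 2.0, 0.5])
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.6, 0.4, 0.0]])

# Stationary law pi of the jump chain {Z_n} and time-stationary law of X(t).
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))]); pi = pi / pi.sum()
p_time = (pi / q) / np.sum(pi / q)             # proportional to pi(x)/q(x)

phi = np.array([1.0, 0.0, 0.0]) - p_time[0]    # centred indicator of state 0

alpha = 1.0 / (pi @ (1.0 / q))                 # limit of nu(t)/t (mean jump rate)
rhs = P @ (phi / q)
g, *_ = np.linalg.lstsq(np.eye(3) - P, rhs, rcond=None)   # one solution of (I-P)g = P(phi/q)
delta2 = 2.0 * (pi @ (phi**2 / q**2 + phi * g / q))
print("theoretical variance alpha * delta^2:", alpha * delta2)

# Monte Carlo check of T^{-1/2} * integral_0^T phi(X(t)) dt.
def one_run(T, z=0):                           # the limit does not depend on the start
    t, acc = 0.0, 0.0
    while t < T:
        rho = rng.exponential(1.0 / q[z])
        acc += phi[z] * min(rho, T - t)        # clip the last holding interval at T
        t += rho
        z = rng.choice(3, p=P[z])
    return acc / np.sqrt(T)

samples = [one_run(300.0) for _ in range(400)]
print("empirical variance of the integrals :", np.var(samples))
```

For the two-state example of Sec. 4 the same computation reproduces $\delta^2 = 1/(q_1+q_2)^2$ and the limiting variance $2q_1q_2/(q_1+q_2)^3$.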
[...]
$$\alpha\Pi\varphi^* q^{-1} = \alpha\pi_1 q_1^{-1} = \frac{q_2}{q_1 + q_2}, \qquad \varphi(x) = \mathbf{1}_{\{x = 1\}} - \frac{q_2}{q_1 + q_2},$$
and
$$\frac{1}{\sqrt{T}}\int_0^T\Big(\mathbf{1}_{\{X(t) = 1\}} - \frac{q_2}{q_1 + q_2}\Big)\,dt \xrightarrow{\mathcal{L}} N(0, \alpha\delta^2) \tag{45}$$
for any initial distribution. In order to find $\delta^2$ we have to solve equation (44) for $i = 1$, i.e.,
$$\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}\begin{pmatrix}g_1\\ g_2\end{pmatrix} = \begin{pmatrix}P\varphi q^{-1}(1)\\ P\varphi q^{-1}(2)\end{pmatrix}, \tag{46}$$
noting that
$$\begin{pmatrix}P\varphi q^{-1}(1)\\ P\varphi q^{-1}(2)\end{pmatrix} = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix}\big(1 - \frac{q_2}{q_1+q_2}\big)q_1^{-1}\\ -\frac{q_2}{q_1+q_2}\,q_2^{-1}\end{pmatrix} = \begin{pmatrix}-\frac{1}{q_1+q_2}\\ \frac{1}{q_1+q_2}\end{pmatrix}.$$
Solving equation (46) we obtain $g_1 = -\frac{1}{q_1+q_2}$, $g_2 = 0$. Hence, by Theorem 3.3, we obtain (45) with
$$\delta^2 = 2\Pi\big(\varphi^2 q^{-2} + \varphi q^{-1}g\big) = \frac{1}{(q_1+q_2)^2}.$$
We obtain from (45)
$$\sqrt{T}\Big(\frac{T_1}{T} - \frac{q_2}{q_1+q_2}\Big) \xrightarrow{\mathcal{L}} N\Big(0, \frac{2q_1q_2}{(q_1+q_2)^3}\Big),$$
where $T_1 = \int_0^T\mathbf{1}_{\{X(t)=1\}}\,dt$ is the time spent in state 1 up to time $T$.

Acknowledgement. The authors would like to thank Prof. Dr. Nguyen Huu Du and Prof. Dr. Tran Hung Thao for useful discussions.

References

1. A. de Acosta, Moderate deviations for vector valued functionals of a Markov chain: lower bounds.
2. K. B. Athreya and P. Ney, A new approach to the limit theory of recurrent Markov chains, Trans. Amer. Math. Soc. 245 (1978) 493–501.
3. P. Billingsley, Statistical Inference for Markov Processes, The University of Chicago Press, 1958.
4. X. Chen, Limit theorems for functionals of ergodic Markov chains with general state space, Memoirs of the American Mathematical Society 139 (1999) 1–200.
5. K. L. Chung, Markov Chains with Stationary Transition Probabilities, Springer-Verlag.
7. P. Hall and C. C. Heyde, Martingale Limit Theory and Its Application, Academic Press, 1980.
8. S. Niemi and E. Nummelin, Central Limit Theorems for Markov Random Walks, Commentationes Physico-Mathematicae 54, Societas Scientiarum Fennica, Helsinki, 1982.
9. S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability, Springer-Verlag, London, 1993.
10. E. Nummelin, General Irreducible Markov Chains and Non-negative Operators, Cambridge University Press, 1984.