DSpace at VNU: On stability in distribution of stochastic differential delay equations with Markovian switching


Systems & Control Letters 65 (2014) 43–49
doi: http://dx.doi.org/10.1016/j.sysconle.2013.12.006

On stability in distribution of stochastic differential delay equations with Markovian switching

Nguyen Huu Du (a,*), Nguyen Hai Dang (b), Nguyen Thanh Dieu (c)

(a) Faculty of Mathematics, Mechanics, and Informatics, University of Science-VNU, 334 Nguyen Trai, Thanh Xuan, Hanoi, Viet Nam
(b) Department of Mathematics, Wayne State University, Detroit, MI 48202, USA
(c) Department of Mathematics, Vinh University, 182 Le Duan, Vinh, Nghe An, Viet Nam
(*) Corresponding author. Fax: +84 8588817. E-mail addresses: nhdu2001@yahoo.com, dunh@vnu.edu.vn (N.H. Du), dangnh.maths@gmail.com (N.H. Dang), dieunguyen2008@gmail.com (N.T. Dieu)

Article history: Received 11 December 2012; received in revised form 13 December 2013; accepted 17 December 2013; available online 23 January 2014.

Abstract: This paper provides a new sufficient condition for stability in distribution of stochastic differential delay equations with Markovian switching (SDDEs). It can be considered as an improvement of the result given by C. Yuan et al. [3].

Keywords: Stochastic differential delay equations; Stability in distribution; Itô's formula; Markov switching

1. SDDEs with Markovian switching

Following the development of stochastic differential equations (SDEs), stochastic differential equations with Markovian switching have recently become an active area of stochastic analysis. Taking a further step, it has been realized that, in real systems, the future state usually depends not only on the present state but also on past states, so such systems should be modeled by differential equations with time delays. For this reason, stochastic differential delay equations with Markovian switching and their stability have been studied carefully in many papers; for a systematic and detailed treatment, the reader is referred to the book [1]. Since there are four types of convergence in probability theory, at least four different types of stochastic stability should be studied, namely stability in distribution, stability in probability, moment stability and almost sure stability. For the past decade, almost sure stability and moment stability have received enormous attention [2], while, to the best of our knowledge, very few studies on stability in distribution of stochastic differential equations with Markovian switching can be found. Among them, [3] is a noticeable contribution. In [3], the authors consider the homogeneous SDDE with Markovian switching
\[ dX(t)=f\bigl(X(t),X(t-\tau),r(t)\bigr)\,dt+g\bigl(X(t),X(t-\tau),r(t)\bigr)\,dB(t) \tag{1.1} \]
on a probability space $(\Omega,\mathcal F,(\mathcal F_t)_{t\ge0},\mathbb P)$ satisfying the usual conditions, where $f:\mathbb R^n\times\mathbb R^n\times S\to\mathbb R^n$, $g:\mathbb R^n\times\mathbb R^n\times S\to\mathbb R^{n\times m}$, $B(t)=(B_1(t),\dots,B_m(t))^T$ is an $m$-dimensional Brownian motion, and $r(t)$ is a Markov chain taking values in a finite state space $S=\{1,2,\dots,N\}$ with generator $\Gamma=(\gamma_{ij})_{N\times N}$, $\gamma_{ij}>0$ if $i\ne j$ (see [4]); $r(\cdot)$ is independent of $B(\cdot)$. Moreover, $B(t)$ and $r(t)$ are $\mathcal F_t$-adapted.

In order to prevent possible confusion, the notation in this paper is the same as in [3]; it is reintroduced below for convenience. Denote by $C([-\tau,0];\mathbb R^n)$ the family of continuous functions $\varphi(\cdot)$ from $[-\tau,0]$ to $\mathbb R^n$ with the norm $\|\varphi\|=\sup_{-\tau\le\theta\le0}|\varphi(\theta)|$, and by $C(\mathbb R^n\times S;\mathbb R_+)$ the family of nonnegative functions $V(x,i)$ on $\mathbb R^n\times S$ which are twice continuously differentiable in $x$. For a continuous $\mathbb R^n$-valued stochastic process $X(t)$ defined for $t\in[-\tau,\infty)$, denote $X_t=\{X(t+\theta):-\tau\le\theta\le0\}$ for $t\ge0$, so that $X_t$ is a stochastic process with state space $C([-\tau,0];\mathbb R^n)$.
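The switching component $r(t)$ is a continuous-time Markov chain driven by the generator $\Gamma$: it stays in state $i$ for an exponential holding time with rate $-\gamma_{ii}=\sum_{j\ne i}\gamma_{ij}$ and then jumps to $j\ne i$ with probability $\gamma_{ij}/(-\gamma_{ii})$. The following Python sketch (an added illustration, not part of the paper; the 3-state generator is a made-up example and states are indexed from 0) samples such a path:

```python
import numpy as np

def simulate_markov_chain(Gamma, i0, T, seed=0):
    """Sample a path of the continuous-time Markov chain r(t) on S = {0, ..., N-1}
    with generator Gamma = (gamma_ij) (rows sum to zero), started at i0, up to time T.
    Returns jump times and the visited states (piecewise-constant path)."""
    rng = np.random.default_rng(seed)
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while True:
        rate = -Gamma[i, i]                      # total jump rate out of state i
        if rate <= 0:                            # absorbing state: no further jumps
            break
        t += rng.exponential(1.0 / rate)         # holding time ~ Exp(rate)
        if t > T:
            break
        probs = Gamma[i].copy()
        probs[i] = 0.0
        probs /= rate                            # jump to j != i with probability gamma_ij / rate
        i = rng.choice(len(probs), p=probs)
        times.append(t); states.append(i)
    return np.array(times), np.array(states)

# hypothetical 3-state generator (rows sum to zero), for illustration only
Gamma = np.array([[-1.5, 1.0, 0.5],
                  [0.7, -1.2, 0.5],
                  [0.3, 0.9, -1.2]])
times, states = simulate_markov_chain(Gamma, i0=0, T=10.0)
```

A path produced this way can then drive a numerical scheme for Eq. (1.1); see also the simulation sketch after Example 3.1.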
Moreover, for a subset $A\subset\Omega$, we denote by $1_A$ the indicator function of $A$, i.e. $1_A=1$ if $\omega\in A$ and $1_A=0$ otherwise. For $V\in C(\mathbb R^n\times S;\mathbb R_+)$, we define
\[ LV(x,y,i)=\sum_{j=1}^{N}\gamma_{ij}V(x,j)+V_x(x,i)f(x,y,i)+\frac12\,\mathrm{trace}\bigl[g^T(x,y,i)V_{xx}(x,i)g(x,y,i)\bigr], \]
and
\[ \tilde LV(x,y,z_1,z_2,i)=\sum_{j=1}^{N}\gamma_{ij}V(x-y,j)+V_x(x-y,i)\bigl[f(x,z_1,i)-f(y,z_2,i)\bigr]
+\frac12\,\mathrm{trace}\Bigl[\bigl(g^T(x,z_1,i)-g^T(y,z_2,i)\bigr)V_{xx}(x-y,i)\bigl(g(x,z_1,i)-g(y,z_2,i)\bigr)\Bigr], \]
where
\[ V_x(x,i)=\Bigl(\frac{\partial V(x,i)}{\partial x_1},\dots,\frac{\partial V(x,i)}{\partial x_n}\Bigr),\qquad
V_{xx}(x,i)=\Bigl(\frac{\partial^2V(x,i)}{\partial x_k\,\partial x_j}\Bigr)_{n\times n}. \]
For any two stopping times $0\le\tau_1\le\tau_2<\infty$, it follows from the generalized Itô formula that
\[ \mathbb EV(X(\tau_2),r(\tau_2))=\mathbb EV(X(\tau_1),r(\tau_1))+\mathbb E\int_{\tau_1}^{\tau_2}LV(X(s),X(s-\tau),r(s))\,ds, \]
provided that the integrations involved exist and are finite. It is well known that $(X_t,r(t))$ is a homogeneous Markov process with transition probability $p(t,\xi,i,d\zeta\times\{j\})$.
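As a concrete instance of these operators (an added illustration, not text from the paper), consider the scalar case $n=m=1$ with the test function $V(x,i)=x^2$, the choice used in Section 3. Because $V$ does not depend on $i$ and every row of the generator sums to zero, the switching term drops out:
\[ LV(x,y,i)=\sum_{j=1}^{N}\gamma_{ij}\,x^2+2x\,f(x,y,i)+g^2(x,y,i)=2x\,f(x,y,i)+g^2(x,y,i),
\qquad\text{since }\sum_{j=1}^{N}\gamma_{ij}=0. \]
This is exactly the computation carried out for Eqs. (3.1) and (3.3) in the examples below.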
Definition (see [3]). Eq. (1.1) is said to be stable in distribution if there exists a probability measure $\pi(\cdot\times\cdot)$ on $C([-\tau,0];\mathbb R^n)\times S$ such that the transition probability $p(t,\xi,i,d\zeta\times\{j\})$ of $(X_t,r(t))$ converges weakly to $\pi(d\zeta\times\{j\})$ as $t\to\infty$ for every $(\xi,i)\in C([-\tau,0];\mathbb R^n)\times S$.

In [3], the authors provided a criterion for stability in distribution of Eq. (1.1), which is cited as the following theorem.

Theorem 1.1. Assume that the coefficients of Eq. (1.1) satisfy the local Lipschitz condition and the linear growth condition, that is, there exist $h,h_k>0$ ($k\in\mathbb N$) such that for all $x,y,\bar x,\bar y\in\mathbb R^n$, $i\in S$,
\[ |f(x,y,i)|+|g(x,y,i)|\le h(1+|x|+|y|), \]
and, for all $|x|\vee|y|\vee|\bar x|\vee|\bar y|\le k$, $i\in S$,
\[ |f(x,y,i)-f(\bar x,\bar y,i)|^2+|g(x,y,i)-g(\bar x,\bar y,i)|^2\le h_k\bigl(|x-\bar x|^2+|y-\bar y|^2\bigr). \]
Moreover, the three following hypotheses are satisfied.
(H1) There is $\alpha>0$ such that
\[ 2(x-\bar x)^T\bigl(f(x,y,i)-f(\bar x,\bar y,i)\bigr)+17\,|g(x,y,i)-g(\bar x,\bar y,i)|^2\le\alpha\bigl(|x-\bar x|^2+|y-\bar y|^2\bigr). \]
(H2) There are positive numbers $c_1,\beta$, $\lambda_1>\lambda_2\ge0$, functions $V\in C(\mathbb R^n\times S;\mathbb R_+)$ and $w_1\in C(\mathbb R^n;\mathbb R_+)$ such that
\[ c_1|x|^2\le V(x,i)\le w_1(x) \tag{1.2} \]
for any $(x,i)\in\mathbb R^n\times S$ and
\[ LV(x,y,i)\le-\lambda_1w_1(x)+\lambda_2w_1(y)+\beta \tag{1.3} \]
for any $(x,y,i)\in\mathbb R^n\times\mathbb R^n\times S$.
(H3) There exist numbers $c_2>0$, $\lambda>\kappa\ge0$ and functions $U\in C(\mathbb R^n\times S;\mathbb R_+)$, $w_2\in C(\mathbb R^n;\mathbb R_+)$ such that
\[ c_2|x|^2\le w_2(x)\wedge U(x,i)\quad\forall(x,i)\in\mathbb R^n\times S, \tag{1.4} \]
\[ \tilde LU(x,y,z_1,z_2,i)\le-\lambda w_2(x-y)+\kappa w_2(z_1-z_2)\quad\forall(x,y,z_1,z_2,i)\in\mathbb R^{4n}\times S. \tag{1.5} \]
Then Eq. (1.1) is stable in distribution.

Although this theorem can be applied to many stochastic differential delay equations with Markovian switching, as demonstrated in [3], some of its conditions seem to be restrictive. First of all, the linear growth condition excludes some important functions as coefficients; it should therefore be replaced with a condition of Khasminskii type (see [5,1]). Moreover, $f(x,z_1,i)-f(y,z_2,i)$ and $g(x,z_1,i)-g(y,z_2,i)$ may depend not only on $(x-y)$ and $(z_1-z_2)$ but also on $x,y,z_1,z_2$ themselves, so the hypothesis (H3), in which $\tilde LU$ is required to be uniformly bounded above by a function of $(x-y)$ and $(z_1-z_2)$, may not be satisfied for many equations. Besides, the hypothesis (H1), which is imposed only for technical reasons, should be removed. In addition, the statement in [3, p. 198] that the uniform boundedness of the second moment together with Chebyshev's inequality implies the tightness is not obvious, since relative compactness in a function space does not follow from boundedness; the tightness therefore needs to be proved carefully. Motivated by these comments, the main goal of this paper is to weaken the aforesaid hypotheses. In particular, we will use the following assumptions.

Assumption 1.1 (The Local Lipschitz Condition). For each integer $k\ge1$ there is a positive constant $h_k$ such that
\[ |f(x,y,i)-f(\bar x,\bar y,i)|^2+|g(x,y,i)-g(\bar x,\bar y,i)|^2\le h_k\bigl(|x-\bar x|^2+|y-\bar y|^2\bigr) \tag{1.6} \]
for all $|x|\vee|y|\vee|\bar x|\vee|\bar y|\le k$, $i\in S$.

Assumption 1.2. There are positive numbers $c_1,\beta$, $\lambda_1>\lambda_2\ge0$, functions $V\in C(\mathbb R^n\times S;\mathbb R_+)$ and $w_1\in C(\mathbb R^n;\mathbb R_+)$ such that
\[ \lim_{|x|\to\infty}w_1(x)=\infty,\qquad c_1w_1(x)\le V(x,i)\le w_1(x) \tag{1.7} \]
for any $(x,i)\in\mathbb R^n\times S$, and
\[ LV(x,y,i)\le-\lambda_1w_1(x)+\lambda_2w_1(y)+\beta \tag{1.8} \]
for any $(x,y,i)\in\mathbb R^n\times\mathbb R^n\times S$.

Assumption 1.3. There exist a nonnegative number $\kappa$ and continuous functions $U\in C(\mathbb R^n\times S;\mathbb R_+)$, $w_2\in C(\mathbb R^n;\mathbb R_+)$, $\lambda\in C(\mathbb R^{4n};\mathbb R_+)$ such that $U(\cdot,i)$ and $w_2(\cdot)$ vanish only at $0$ for all $i\in S$, $\lambda(x,y,z_1,z_2)>\kappa$ whenever $x\ne y$, and
\[ \tilde LU(x,y,z_1,z_2,i)\le-\lambda(x,y,z_1,z_2)\,w_2(x-y)+\kappa\,w_2(z_1-z_2)\quad\forall x,y,z_1,z_2\in\mathbb R^n,\ i\in S. \tag{1.9} \]

The idea of our proofs is as follows. First, under Assumptions 1.1 and 1.2, we prove the existence of a unique global solution for any initial value and the tightness of the family of transition probabilities, which is necessary for the convergence of the transition probabilities. Then, in lieu of considering the second moment as in [3], utilizing the tightness, we estimate the amount of time the solution spends inside a suitable compact set and outside it. When the solution is in a compact set, global conditions can be replaced by local ones; for this reason, the hypothesis (H1) is not necessary. Furthermore, in Assumption 1.3 we do not need that $c_2|x|^2\le w_2(x)\wedge U(x,i)$ for all $(x,i)\in\mathbb R^n\times S$, but only that $w_2(x)$ and $U(x,i)$ vanish only at $x=0$ (they might even tend to $0$ at infinity). This means that the class of candidate test functions is broadened considerably. Analogously, in many cases, as demonstrated later by examples, we cannot find a constant $\lambda>\kappa$ such that $\tilde LU(x,y,z_1,z_2,i)\le-\lambda w_2(x-y)+\kappa w_2(z_1-z_2)$ for all $(x,y,z_1,z_2,i)\in\mathbb R^{4n}\times S$, but we do have, for all $(x,y,z_1,z_2,i)\in\mathbb R^{4n}\times S$,
\[ \tilde LU(x,y,z_1,z_2,i)\le-\lambda(x,y,z_1,z_2)\,w_2(x-y)+\kappa\,w_2(z_1-z_2), \]
where $\lambda(x,y,z_1,z_2)>\kappa$ if $x\ne y$, while the stronger inequality with a constant $\lambda$ may fail either at $x=y$ or when one of the variables $x,y,z_1,z_2$ tends to infinity.
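For orientation (an added remark; the full construction appears in Example 3.1), a typical state-dependent rate satisfying Assumption 1.3 when $U(x,i)=x^2$ in the scalar case is
\[ \lambda(x,y,z_1,z_2)=\frac{(x-y)^2}{w_2(x-y)}\qquad(x\ne y), \]
extended across the diagonal $x=y$ by the limit of $u^2/w_2(u)$ as $u\to0$ (which exists for the $w_2$ chosen in Example 3.1). With this choice the term $-(x-y)^2$ is rewritten exactly as $-\lambda(x,y,z_1,z_2)\,w_2(x-y)$, the form required by (1.9), even though no constant $\lambda>\kappa$ works globally.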
2. A sufficient condition for stability in distribution of SDDEs with Markovian switching

Denote by $X^{\xi,i}(t)$ the solution to Eq. (1.1) with initial data $X_0=\xi\in C([-\tau,0];\mathbb R^n)$ and $r(0)=i$, and by $r_i(t)$ the Markov chain starting from $i$. To establish new sufficient conditions for stability in distribution of Eq. (1.1), we first give three lemmas.

Lemma 2.1. Let Assumptions 1.1 and 1.2 hold. For any $\xi\in C([-\tau,0];\mathbb R^n)$ and $i\in S$, there exists a unique global solution $X^{\xi,i}(t)$ to Eq. (1.1) on $[0,\infty)$. Moreover, for any compact set $K\subset C([-\tau,0];\mathbb R^n)$:
(a) there is $M=M(K)$ such that
\[ c_1\,\mathbb Ew_1(X^{\xi,i}(t))\le\mathbb EV(X^{\xi,i}(t),r_i(t))\le M\quad\forall t\ge0,\ (\xi,i)\in K\times S; \]
(b) for any $T>0$ and $\varepsilon>0$, there exists a positive integer $H=H(K,T,\varepsilon)$ such that
\[ \mathbb P\bigl\{\|X_s^{\xi,i}\|\le H\ \forall s\in[t,t+T]\bigr\}\ge1-\varepsilon\quad\forall t\ge0,\ (\xi,i)\in K\times S. \]

Proof. Fix $\lambda_3\in(0,\lambda_1)$ such that $\lambda_1-\lambda_3>\lambda_2e^{\lambda_3\tau}$ (see the remark following this proof), and for each $k\in\mathbb N$ define the stopping time $\sigma_k=\inf\{t\ge0:|X^{\xi,i}(t)|>k\}$. Assumption 1.1 guarantees the existence and uniqueness of a maximal local solution to Eq. (1.1), so to prove the existence of a unique global solution it suffices to show that $\lim_{k\to\infty}\sigma_k=\infty$ w.p.1. Applying the generalized Itô formula to $e^{\lambda_3t}V(X^{\xi,i}(t),r_i(t))$ and using Assumption 1.2 yields
\[
\begin{aligned}
\mathbb E\,e^{\lambda_3(t\wedge\sigma_k)}V(X^{\xi,i}(t\wedge\sigma_k),r_i(t\wedge\sigma_k))
&\le V(\xi(0),i)+\beta\int_0^te^{\lambda_3s}\,ds
+\mathbb E\int_0^{t\wedge\sigma_k}e^{\lambda_3s}\bigl[(\lambda_3-\lambda_1)w_1(X^{\xi,i}(s))+\lambda_2w_1(X^{\xi,i}(s-\tau))\bigr]ds\\
&\le\frac{\beta}{\lambda_3}e^{\lambda_3t}+V(\xi(0),i)+\lambda_2e^{\lambda_3\tau}\int_{-\tau}^0w_1(\xi(s))\,ds,
\end{aligned}\tag{2.1}
\]
where the last inequality follows after the change of variable $s\mapsto s+\tau$ in the delayed integral and the choice $\lambda_3-\lambda_1+\lambda_2e^{\lambda_3\tau}<0$.

If the claim $\lim_{k\to\infty}\sigma_k=\infty$ a.s. were false, we could find $\Omega_1\subset\Omega$ and $t_0>0$ such that $\mathbb P(\Omega_1)>0$ and $\sigma_k\le t_0$ for all $k\in\mathbb N$ on $\Omega_1$. As a result,
\[ \mathbb E\,e^{\lambda_3(t_0\wedge\sigma_k)}V(X^{\xi,i}(t_0\wedge\sigma_k),r_i(t_0\wedge\sigma_k))
\ge\mathbb E\,1_{\Omega_1}V(X^{\xi,i}(\sigma_k),r_i(\sigma_k))
\ge\inf\{V(y,j):|y|\ge k,\ j\in S\}\cdot\mathbb P(\Omega_1)\to\infty \tag{2.2} \]
as $k\to\infty$ (by (1.7), $V(y,j)\ge c_1w_1(y)\to\infty$ as $|y|\to\infty$), which contradicts (2.1). The existence and uniqueness of a global solution is therefore proved.

To prove item (a), letting $k\to\infty$ in (2.1) we obtain
\[ e^{\lambda_3t}\,\mathbb EV(X^{\xi,i}(t),r_i(t))\le\frac{\beta}{\lambda_3}e^{\lambda_3t}+V(\xi(0),i)+\lambda_2e^{\lambda_3\tau}\int_{-\tau}^0w_1(\xi(s))\,ds. \tag{2.3} \]
Putting $M=\sup_{(\xi,i)\in K\times S}\bigl\{\frac{\beta}{\lambda_3}+V(\xi(0),i)+\lambda_2e^{\lambda_3\tau}\int_{-\tau}^0w_1(\xi(s))\,ds\bigr\}<\infty$, we obtain the desired result of item (a), since $c_1\mathbb Ew_1(X^{\xi,i}(t))\le\mathbb EV(X^{\xi,i}(t),r_i(t))$ by (1.7).

We now turn to item (b). For any $(\xi,i)\in K\times S$ and $t\ge0$, define $\sigma_k^t=\inf\{s\ge t:\|X_s^{\xi,i}\|>k\}$. Applying Itô's formula to $e^{\lambda_3(s-t)}V(X^{\xi,i}(s),r_i(s))$ on the interval $[t,(t+T)\wedge\sigma_k^t]$ and arguing as for (2.1), we get
\[ \mathbb E\,e^{\lambda_3((t+T)\wedge\sigma_k^t-t)}V\bigl(X^{\xi,i}((t+T)\wedge\sigma_k^t),r_i((t+T)\wedge\sigma_k^t)\bigr)
\le\frac{\beta}{\lambda_3}e^{\lambda_3T}+\mathbb EV(X^{\xi,i}(t),r_i(t))+\lambda_2e^{\lambda_3\tau}\int_{t-\tau}^t\mathbb Ew_1(X^{\xi,i}(s))\,ds. \tag{2.4} \]
In view of item (a), the right-hand side of (2.4) is bounded by $\frac{\beta}{\lambda_3}e^{\lambda_3T}+M+\lambda_2e^{\lambda_3\tau}\frac{M\tau}{c_1}$. Let $H=H(K,T,\varepsilon)\in\mathbb N$ satisfy
\[ \inf_{|y|\ge H,\,j\in S}V(y,j)\ \ge\ \frac1\varepsilon\Bigl(\frac{\beta}{\lambda_3}e^{\lambda_3T}+M+\lambda_2e^{\lambda_3\tau}\frac{M\tau}{c_1}\Bigr). \tag{2.5} \]
Then, taking $k=H$ in (2.4),
\[ \inf_{|y|\ge H,\,j\in S}V(y,j)\cdot\mathbb P\{\sigma_H^t<t+T\}
\le\mathbb E\,e^{\lambda_3((t+T)\wedge\sigma_H^t-t)}V\bigl(X^{\xi,i}((t+T)\wedge\sigma_H^t),r_i((t+T)\wedge\sigma_H^t)\bigr)
\le\frac{\beta}{\lambda_3}e^{\lambda_3T}+M+\lambda_2e^{\lambda_3\tau}\frac{M\tau}{c_1}. \tag{2.6} \]
This implies $\mathbb P(\sigma_H^t<t+T)\le\varepsilon$, which is item (b). The proof is complete.
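Regarding the constant $\lambda_3$ fixed at the start of the proof (an added remark, not in the original text): its existence follows from continuity, since the function
\[ \phi(\lambda_3)=\lambda_1-\lambda_3-\lambda_2e^{\lambda_3\tau} \]
satisfies $\phi(0)=\lambda_1-\lambda_2>0$ under Assumption 1.2, so $\phi(\lambda_3)>0$ for all sufficiently small $\lambda_3>0$.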
In the next stage, we investigate the tightness of the family of transition probabilities $\{p(t,\xi,i,d\zeta\times\{j\}):t\ge0\}$. Note that, in [3], under the linear growth and local Lipschitz conditions as well as the hypothesis (H2), it is proved that $\sup_{0\le t<\infty}\mathbb E|X^{\xi,i}(t)|^2<\infty$, and the tightness is then deduced from Chebyshev's inequality; as discussed in Section 1, that deduction is incomplete, so we establish the tightness directly. We use the following criterion (see [6]).

Theorem 2.2. A family $\{P_a:a\in A\}$ of probability measures on $C([-\tau,0];\mathbb R^n)$ is tight if and only if the two following conditions hold:
(1) for each positive $\varepsilon$, there exists an $M$ such that
\[ \sup_{a\in A}P_a\bigl[x:|x(-\tau)|\ge M\bigr]\le\varepsilon; \]
(2) for each pair of positive numbers $\varepsilon_1,\varepsilon_2$, there exists $\delta>0$ such that
\[ \sup_{a\in A}P_a\Bigl[x:\sup_{-\tau\le s_1\le s_2\le(s_1+\delta)\wedge0}|x(s_2)-x(s_1)|\ge\varepsilon_1\Bigr]\le\varepsilon_2. \]

Lemma 2.3. Let Assumptions 1.1 and 1.2 hold. Then, for any compact set $K\subset C([-\tau,0];\mathbb R^n)$, the family
\[ \{p(t,\xi,i,d\zeta\times\{j\}):(t,i,\xi)\in\mathbb R_+\times S\times K\} \]
is tight.

Proof. First, we show that for any $\varepsilon_1,\varepsilon_2>0$ there exists $\delta_0=\delta_0(\varepsilon_1,\varepsilon_2,K)>0$ such that
\[ \mathbb P\Bigl\{\sup_{t\le s_1\le s_2\le(s_1+\delta_0)\wedge(t+\tau)}|X^{\xi,i}(s_2)-X^{\xi,i}(s_1)|\ge\varepsilon_1\Bigr\}\le\varepsilon_2
\quad\forall t\ge0,\ (\xi,i)\in K\times S. \tag{2.7} \]
By item (b) of Lemma 2.1 (applied with $T=\tau$) there is $H'=H(K,\tau,\varepsilon_2/2)$ such that $\mathbb P\bigl\{\sup_{t\le s\le t+\tau}\|X_s^{\xi,i}\|\le H'\bigr\}\ge1-\varepsilon_2/2$ for all $t\ge0$, $(\xi,i)\in K\times S$. For $s_1\ge t$ let $\tau_{s_1}=\inf\{s\ge s_1:\|X_s^{\xi,i}\|>H'\}$, and let $0<\delta\le\tau$. By the Burkholder–Davis–Gundy inequality and the elementary inequality $(|x|+|y|)^4\le8(|x|^4+|y|^4)$, for any $s_1\in[t,t+\tau-\delta]$ we have
\[
\begin{aligned}
\mathbb E\sup_{s_2\in[s_1,s_1+\delta]}|X^{\xi,i}(s_2\wedge\tau_{s_1})-X^{\xi,i}(s_1)|^4
&\le8\,\mathbb E\sup_{s_2\in[s_1,s_1+\delta]}\Bigl|\int_{s_1}^{s_2}1_{\{s\le\tau_{s_1}\}}f(X^{\xi,i}(s),X^{\xi,i}(s-\tau),r_i(s))\,ds\Bigr|^4\\
&\quad+8\,\mathbb E\sup_{s_2\in[s_1,s_1+\delta]}\Bigl|\int_{s_1}^{s_2}1_{\{s\le\tau_{s_1}\}}g(X^{\xi,i}(s),X^{\xi,i}(s-\tau),r_i(s))\,dB(s)\Bigr|^4\\
&\le8\delta^3\,\mathbb E\int_{s_1}^{s_1+\delta}1_{\{s\le\tau_{s_1}\}}|f(\cdot)|^4\,ds
+32\,\mathbb E\Bigl(\int_{s_1}^{s_1+\delta}1_{\{s\le\tau_{s_1}\}}|g(\cdot)|^2\,ds\Bigr)^2
\le C_{H'}\,\delta^2,
\end{aligned}\tag{2.8}
\]
where $C_{H'}$ depends only on $H'$ and on the bounds of $f$ and $g$ on the corresponding compact set. Chebyshev's inequality and a chaining argument over a partition of $[t,t+\tau]$ into subintervals of length at most $\delta$ then show that, on the event $\{\sup_{t\le s\le t+\tau}\|X_s^{\xi,i}\|\le H'\}$, the modulus of continuity in (2.7) exceeds $\varepsilon_1$ with probability at most $\varepsilon_2/2$ once $\delta$ is small enough; combining the two exceptional events gives (2.7). Moreover, since $K$ is compact, it is equicontinuous, so there is $\tilde\delta>0$ with $\sup_{\xi\in K}\sup_{|s_2-s_1|\le\tilde\delta}|\xi(s_2)-\xi(s_1)|<\varepsilon_1$; replacing $\delta_0$ by $\delta_0\wedge\tilde\delta$ if necessary, the same modulus estimate holds for the whole segment $X_t^{\xi,i}$ on $[t-\tau,t]$, including the part coming from the initial datum when $t<\tau$. This is exactly the condition (2) of Theorem 2.2 for the family $\{p(t,\xi,i,d\zeta\times S):(t,i,\xi)\in\mathbb R_+\times S\times K\}$. The condition (1) is derived from item (b) of Lemma 2.1. Consequently, this family is tight. Since the transition probabilities of $r_i(t)$, a chain on the finite set $S$, are by themselves tight, the family $\{p(t,\xi,i,d\zeta\times\{j\}):(t,i,\xi)\in\mathbb R_+\times S\times K\}$ is tight. The proof is complete.
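Conditions (1) and (2) of Theorem 2.2 have straightforward empirical counterparts. The following Python sketch (an added illustration only; it plays no role in the proofs) estimates, from an ensemble of simulated path segments, the probability that the segment starts above a level M and the probability that its modulus of continuity at scale delta exceeds eps:

```python
import numpy as np

def tightness_diagnostics(segments, dt, M, eps, delta):
    """Empirical counterparts of the two conditions in the Billingsley-type criterion
    (Theorem 2.2) for an ensemble of path segments on [-tau, 0].
    segments: array of shape (num_paths, num_points), sampled every dt.
    Returns (fraction with |x(-tau)| >= M,
             fraction whose modulus of continuity at scale delta is >= eps)."""
    start_large = np.mean(np.abs(segments[:, 0]) >= M)          # condition (1)
    k = max(1, int(round(delta / dt)))                          # lag in grid points
    mods = np.zeros(segments.shape[0])
    # sup over |s2 - s1| <= delta of |x(s2) - x(s1)|, computed lag by lag on the grid
    for lag in range(1, min(k, segments.shape[1] - 1) + 1):
        diff = np.abs(segments[:, lag:] - segments[:, :-lag]).max(axis=1)
        mods = np.maximum(mods, diff)
    rough = np.mean(mods >= eps)                                 # condition (2)
    return start_large, rough

# usage with synthetic placeholder paths (crude Brownian-like segments)
rng = np.random.default_rng(1)
paths = np.cumsum(rng.normal(0, 0.05, size=(500, 101)), axis=1)
print(tightness_diagnostics(paths, dt=0.01, M=3.0, eps=0.5, delta=0.05))
```

For a tight family both estimated fractions should be small, uniformly over the ensemble, once M is large and delta is small.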
Lemma 2.4. Let Assumptions 1.1–1.3 be satisfied. Then for any $\varepsilon>0$ and any compact set $K\subset C([-\tau,0];\mathbb R^n)$, there exists $T=T(\varepsilon,K)$ such that for all $t>T$,
\[ \mathbb P\bigl\{\|X_t^{\eta,i}-X_t^{\xi,i}\|<\varepsilon\bigr\}\ge1-\varepsilon\quad\forall\xi,\eta\in K,\ i\in S. \]

Proof. Step 1. Since Assumption 1.3 is weaker than the hypothesis (H3), in lieu of proving $\lim_{t\to\infty}\mathbb E|X^{\xi,i}(t)-X^{\eta,i}(t)|^2=0$ as in [3], we will show that for any $\sigma,\bar h>0$,
\[ \lim_{t\to\infty}\mathbb P\{A_t^{\sigma,\bar h}\}=0,\quad\text{where }A_t^{\sigma,\bar h}=\bigl\{\omega:\|X_t^{\xi,i}\|\vee\|X_t^{\eta,i}\|\le\bar h,\ |X^{\xi,i}(t)-X^{\eta,i}(t)|\ge\sigma\bigr\}, \]
uniformly for $\xi,\eta\in K$ and $i\in S$. To simplify the notation, write $\lambda(X_s^{\xi,i},X_s^{\eta,i})=\lambda\bigl(X^{\xi,i}(s),X^{\eta,i}(s),X^{\xi,i}(s-\tau),X^{\eta,i}(s-\tau)\bigr)$ and put
\[ c_2^{\sigma,\bar h}=\min\{w_2(x-y):|x|\vee|y|\le\bar h,\ |x-y|\ge\sigma\},\qquad
\lambda^{\sigma,\bar h}=\min\{\lambda(x,y,z_1,z_2):|x|\vee|y|\vee|z_1|\vee|z_2|\le\bar h,\ |x-y|\ge\sigma\}-\kappa. \]
Note that $c_2^{\sigma,\bar h}$ and $\lambda^{\sigma,\bar h}$ are positive, since $\lambda(\cdot)$ and $w_2(\cdot)$ are continuous, $w_2(x-y)>0$ and $\lambda(x,y,z_1,z_2)>\kappa$ whenever $x\ne y$.

For each $n\in\mathbb N$, define the stopping time $T_n=\inf\{s>0:\|X_s^{\xi,i}\|\vee\|X_s^{\eta,i}\|>n\}\wedge t$. Applying the generalized Itô formula to $U(X^{\xi,i}(\cdot)-X^{\eta,i}(\cdot),r_i(\cdot))$ and using Assumption 1.3 together with the change of variable $s\mapsto s+\tau$ in the delayed term, we obtain
\[ \mathbb E\,U\bigl(X^{\xi,i}(T_n)-X^{\eta,i}(T_n),r_i(T_n)\bigr)
+\mathbb E\int_0^{T_n}\bigl[\lambda(X_s^{\xi,i},X_s^{\eta,i})-\kappa\bigr]w_2\bigl(X^{\xi,i}(s)-X^{\eta,i}(s)\bigr)ds
\le U(\xi(0)-\eta(0),i)+\kappa\int_{-\tau}^0w_2(\xi(s)-\eta(s))\,ds. \]
Since the integrand on the left is nonnegative, restricting the integral to the events $A_s^{\sigma,\bar h}$, letting $n\to\infty$ (so that $T_n\to t$) and then $t\to\infty$ yields, for any $\sigma,\bar h>0$,
\[ \int_0^\infty\mathbb P\{A_s^{\sigma,\bar h}\}\,ds
\le\frac1{c_2^{\sigma,\bar h}\lambda^{\sigma,\bar h}}\Bigl[U(\xi(0)-\eta(0),i)+\kappa\int_{-\tau}^0w_2(\xi(s)-\eta(s))\,ds\Bigr]<\infty. \]

Suppose now that $\mathbb P\{A_t^{\sigma,\bar h}\}$ did not converge to $0$ as $t\to\infty$. Then there would exist a constant $\ell>0$ and a sequence $t_n\uparrow\infty$ such that $\mathbb P\{A_{t_n}^{\sigma,\bar h}\}\ge1/\ell$ for all $n$. Applying (2.7) with $\varepsilon_1=\sigma/3$ and $\varepsilon_2=1/(8\ell)$, there is $0<\delta_0<\tau$ such that, with probability at least $1-1/(8\ell)$ for each of $X^{\xi,i}$ and $X^{\eta,i}$, the process oscillates by less than $\sigma/3$ on $[t_n,t_n+\delta_0]$; and, in view of the tightness established in Lemma 2.3, we can find $H_1=H_1(K,\ell)\ge\bar h$ with $\mathbb P\{\|X_t^{\zeta,i}\|\le H_1\}\ge1-1/(8\ell)$ for all $(\zeta,i)\in K\times S$ and $t\ge0$. Combining these bounds, for every $s\in[t_n,t_n+\delta_0)$,
\[ \mathbb P\bigl\{\|X_s^{\xi,i}\|\vee\|X_s^{\eta,i}\|\le H_1,\ |X^{\xi,i}(s)-X^{\eta,i}(s)|\ge\sigma/3\bigr\}\ge\frac1{2\ell}, \]
and hence $\int_0^\infty\mathbb P\{A_s^{\sigma/3,H_1}\}\,ds\ge\sum_n\delta_0/(2\ell)=\infty$, which contradicts the integrability above applied to the pair $(\sigma/3,H_1)$. Therefore $\lim_{t\to\infty}\mathbb P\{A_t^{\sigma,\bar h}\}=0$ for every fixed $\xi,\eta\in K$ and $i\in S$.

Next, we prove that this limit is uniform for $\xi,\eta\in K$ and $i\in S$. In view of the tightness, let $H_2=H_2(\varepsilon,K)>\bar h$ be such that $\mathbb P\{\|X_t^{\zeta,i}\|>H_2\}<\varepsilon/8$ for all $(\zeta,i)\in K\times S$ and $t\ge0$, and put $m_{\sigma/3,H_2}=\min\{U(x-y,i):|x-y|\ge\sigma/3,\ |x|\vee|y|\le H_2,\ i\in S\}>0$. Since $U(0,i)=w_2(0)=0$ and $U$, $w_2$ are continuous, we can find $\delta_1>0$ such that
\[ U(\xi(0)-\eta(0),i)+\kappa\int_{-\tau}^0w_2(\xi(s)-\eta(s))\,ds<\frac{\varepsilon\,m_{\sigma/3,H_2}}8\quad\text{whenever }\|\xi-\eta\|\le\delta_1, \]
and hence, by the Itô estimate above and Chebyshev's inequality,
\[ \mathbb P\bigl\{\|X_t^{\xi,i}\|\vee\|X_t^{\eta,i}\|\le H_2,\ |X^{\xi,i}(t)-X^{\eta,i}(t)|\ge\sigma/3\bigr\}\le\frac\varepsilon8\quad\forall t\ge0,\ \text{whenever }\|\xi-\eta\|\le\delta_1. \]
By the compactness of $K$, there exist $\zeta_1,\dots,\zeta_{n_0}\in K$ such that every $\zeta\in K$ satisfies $\|\zeta-\zeta_k\|\le\delta_1$ for some $k$. For the finitely many pairs $(\zeta_u,\zeta_v)$, the limit already established (applied with $\sigma$ replaced by $\sigma/3$ and $\bar h$ by $H_2$) provides $T_\varepsilon>0$ such that, for all $t>T_\varepsilon$ and $1\le u,v\le n_0$,
\[ \mathbb P\bigl\{\|X_t^{\zeta_u,i}\|\vee\|X_t^{\zeta_v,i}\|\le H_2,\ |X^{\zeta_u,i}(t)-X^{\zeta_v,i}(t)|\ge\sigma/3\bigr\}<\frac\varepsilon8. \]
For arbitrary $\xi,\eta\in K$, choosing nearby centers $\zeta_u,\zeta_v$ and splitting $|X^{\xi,i}(t)-X^{\eta,i}(t)|$ by the triangle inequality into the three differences $|X^{\xi,i}(t)-X^{\zeta_u,i}(t)|$, $|X^{\zeta_u,i}(t)-X^{\zeta_v,i}(t)|$, $|X^{\zeta_v,i}(t)-X^{\eta,i}(t)|$ (each compared with $\sigma/3$ and controlled by one of the bounds above, up to the events $\{\|X_t^{\zeta_u,i}\|>H_2\}$, $\{\|X_t^{\zeta_v,i}\|>H_2\}$ of probability less than $\varepsilon/8$ each) yields $\mathbb P\{A_t^{\sigma,\bar h}\}\le\varepsilon$ for all $t>T_\varepsilon$, uniformly in $\xi,\eta\in K$ and $i\in S$.

Step 2. Let $\varepsilon>0$ be arbitrary. By virtue of the tightness, there is $H_3=H_3(K,\varepsilon)$ such that $\mathbb P\{\|X_t^{\zeta,i}\|\le H_3\}\ge1-\varepsilon/32$ for all $(\zeta,i,t)\in K\times S\times\mathbb R_+$. For $t\ge0$ put
\[ \tau_t=(t+\tau)\wedge\inf\{s\ge t:\|X_s^{\xi,i}\|\vee\|X_s^{\eta,i}\|>H_3\}. \]
Note that if $\|X_t\|\le H_3$ and $\|X_{t+\tau}\|\le H_3$, then $\|X_s\|\le H_3$ for all $s\in[t,t+\tau]$. Consequently, for any $\xi,\eta\in K$,
\[ \mathbb P\{\tau_t<t+\tau\}\le\mathbb P\bigl\{\|X_t^{\xi,i}\|\vee\|X_t^{\eta,i}\|\vee\|X_{t+\tau}^{\xi,i}\|\vee\|X_{t+\tau}^{\eta,i}\|>H_3\bigr\}\le\frac\varepsilon8. \]
Using the same arguments as in the proof of Lemma 2.3 (see (2.8)), for any $0<\delta<\tau$ and $t\le s_1<s_1+\delta\le t+\tau$ we have
\[ \mathbb E\sup_{s_2\in[s_1,s_1+\delta]}|X^{\xi,i}(s_2\wedge\tau_t)-X^{\xi,i}(s_1\wedge\tau_t)|^4\le C_3\,\delta^2, \]
where $C_3$ is a constant depending on $K$, $H_3$ and $\varepsilon$, constructed like $C_{H'}$ in Lemma 2.3. Let $m_0\in\mathbb N$ be so large that, with $\delta=\tau/m_0$,
\[ \mathbb P\bigl(\{\tau_t\ge t+\tau\}\cap C_t^{\xi,i}\bigr)\le\frac\varepsilon8,\qquad
C_t^{\xi,i}=\Bigl\{\exists\rho\in\{0,\dots,m_0-1\}:\sup_{s_2\in[t+\rho\delta,\,t+(\rho+1)\delta]}|X^{\xi,i}(s_2)-X^{\xi,i}(t+\rho\delta)|\ge\frac\varepsilon3\Bigr\}, \]
and similarly $\mathbb P(\{\tau_t\ge t+\tau\}\cap C_t^{\eta,i})\le\varepsilon/8$; this is possible since, by Chebyshev's inequality, each of the $m_0$ oscillation probabilities is at most $(3/\varepsilon)^4C_3\delta^2$ and $m_0\delta^2=\tau\delta\to0$ as $m_0\to\infty$.

Owing to the uniform convergence shown in Step 1 (applied with $\sigma=\varepsilon/3$ and $\bar h=H_3$), we can find $T=T(K,\varepsilon)$ such that for any $t>T$ and $\rho=0,\dots,m_0-1$,
\[ \mathbb P\bigl\{\|X_{t+\rho\delta}^{\xi,i}\|\vee\|X_{t+\rho\delta}^{\eta,i}\|\le H_3,\ |X^{\xi,i}(t+\rho\delta)-X^{\eta,i}(t+\rho\delta)|\ge\frac\varepsilon3\bigr\}\le\frac{\varepsilon\delta}{8\tau}, \]
so that the event $D_t=\bigl\{\exists\rho\in\{0,\dots,m_0-1\}:\|X_{t+\rho\delta}^{\xi,i}\|\vee\|X_{t+\rho\delta}^{\eta,i}\|\le H_3\text{ and }|X^{\xi,i}(t+\rho\delta)-X^{\eta,i}(t+\rho\delta)|\ge\varepsilon/3\bigr\}$ satisfies $\mathbb P\{D_t\}\le\varepsilon/8$.

Note that if the three events $\{|X^{\xi,i}(t+\rho\delta)-X^{\eta,i}(t+\rho\delta)|<\varepsilon/3\ \forall\rho=0,\dots,m_0-1\}$, $\{\tau_t\ge t+\tau\}\setminus C_t^{\xi,i}$ and $\{\tau_t\ge t+\tau\}\setminus C_t^{\eta,i}$ occur simultaneously, then for every $s\in[t,t+\tau]$, writing $t+\rho\delta$ for the grid point immediately to the left of $s$,
\[ |X^{\xi,i}(s)-X^{\eta,i}(s)|\le|X^{\xi,i}(s)-X^{\xi,i}(t+\rho\delta)|+|X^{\xi,i}(t+\rho\delta)-X^{\eta,i}(t+\rho\delta)|+|X^{\eta,i}(t+\rho\delta)-X^{\eta,i}(s)|<\varepsilon, \]
that is, $\|X_{t+\tau}^{\xi,i}-X_{t+\tau}^{\eta,i}\|<\varepsilon$. Since $\{\tau_t\ge t+\tau\}\setminus D_t$ is contained in the first of these three events, we conclude that for $t>T$,
\[ \mathbb P\bigl\{\|X_{t+\tau}^{\eta,i}-X_{t+\tau}^{\xi,i}\|<\varepsilon\bigr\}
\ge1-\mathbb P\{\tau_t<t+\tau\}-\mathbb P\{D_t\}-\mathbb P(\{\tau_t\ge t+\tau\}\cap C_t^{\xi,i})-\mathbb P(\{\tau_t\ge t+\tau\}\cap C_t^{\eta,i})
\ge1-\frac\varepsilon2>1-\varepsilon, \]
as required. The proof is complete.
In view of Lemma 2.4 and a slight modification of [3, Lemma 3.4], we can employ [3, Theorem 3.1] to obtain the main result.

Theorem 2.5. Eq. (1.1) is stable in distribution under Assumptions 1.1–1.3.

3. Examples

In this section, some examples are presented in order to illustrate our improvements.

Example 3.1. We consider the equation
\[ dX(t)=\Bigl(a(r(t))-\frac{X(t)}2\Bigr)dt+\bigl(b(r(t))+\sin X(t-\tau)\bigr)dB(t). \tag{3.1} \]
Our main purpose is to show that the hypothesis (H3) of Theorem 1.1 is possibly not satisfied, while it is easy to verify Assumption 1.3. Let $V(x,i)=x^2$; we have
\[ LV(x,y,i)=2a(i)x-x^2+(b(i)+\sin y)^2\le-\frac12x^2+\beta, \]
where $\beta=\sup_{(x,y,i)\in\mathbb R\times\mathbb R\times S}\bigl\{2a(i)x-\frac12x^2+(b(i)+\sin y)^2\bigr\}<\infty$. This means that Assumption 1.2 is satisfied (with $w_1(x)=x^2$, $\lambda_1=\frac12$, $\lambda_2=0$). Now, letting $U(x,i)=x^2$ again, it is computed that
\[ \tilde LU(x,y,z_1,z_2,i)=-(x-y)^2+(\sin z_1-\sin z_2)^2=-(x-y)^2+4\sin^2\frac{z_1-z_2}2\,\cos^2\frac{z_1+z_2}2. \tag{3.2} \]
Suppose that there were $\lambda>\kappa>0$ and $w_2(\cdot)$ such that
\[ \tilde LU(x,y,z_1,z_2,i)\le-\lambda w_2(x-y)+\kappa w_2(z_1-z_2). \]
Substituting $z_1=z_2$ and $x=y$ (separately) into this inequality, we obtain
\[ \lambda w_2(x-y)\le(x-y)^2,\qquad(\sin z_1-\sin z_2)^2\le\kappa w_2(z_1-z_2). \]
Thus, for $u\ne v$,
\[ \frac{(\sin u-\sin v)^2}{(u-v)^2}\le\frac{\kappa\,w_2(u-v)}{(u-v)^2}\le\frac\kappa\lambda<1. \]
However, this inequality cannot hold when $u\to v$ with $\cos^2v=1$ (for instance $v=0$), since the left-hand side then tends to $1$. It means that the hypothesis (H3) of Theorem 1.1 is not satisfied for $U(x,i)=x^2$. Nevertheless, define
\[ w_2(u)=\begin{cases}\sin^2\dfrac u2,&|u|\le\pi,\\[2pt]1,&|u|>\pi,\end{cases}
\qquad
\lambda(x,y,z_1,z_2)=\begin{cases}\dfrac{(x-y)^2}{w_2(x-y)},&x\ne y,\\[2pt]4,&x=y.\end{cases} \]
It is easy to verify that $\lambda$ is continuous and that $\lambda(x,y,z_1,z_2)>\kappa:=4$ for all $x\ne y$ (because $|\sin\frac u2|<\frac{|u|}2$ for $u\ne0$). In view of (3.2) and the bound $4\sin^2\frac{z_1-z_2}2\cos^2\frac{z_1+z_2}2\le4\,w_2(z_1-z_2)$, we can estimate
\[ \tilde LU(x,y,z_1,z_2,i)\le-\lambda(x,y,z_1,z_2)\,w_2(x-y)+4\,w_2(z_1-z_2), \]
that is, Assumption 1.3 is satisfied. The equation is therefore stable in distribution.
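Stability in distribution can also be probed numerically. The Python sketch below (an added illustration; the state space, the values of a(i) and b(i), the generator and all numerical parameters are placeholder assumptions, not taken from the paper) simulates Eq. (3.1) by an Euler–Maruyama scheme from two different constant initial segments and compares the empirical laws of X(T) at a large time T; under stability in distribution the two samples should look alike.

```python
import numpy as np

def sample_xT(x0_const, a, b, Gamma, tau=1.0, T=50.0, dt=0.01, n_paths=2000, seed=0):
    """Euler-Maruyama sketch for Eq. (3.1):
        dX = (a(r) - X/2) dt + (b(r) + sin X(t - tau)) dB,
    started from the constant initial segment x0_const on [-tau, 0].
    Returns n_paths independent samples of X(T)."""
    rng = np.random.default_rng(seed)
    d = int(round(tau / dt)); n = int(round(T / dt)); N = Gamma.shape[0]
    X = np.full((n_paths, d + n + 1), float(x0_const))
    r = np.zeros(n_paths, dtype=int)               # all chains start in state 0
    P = np.eye(N) + Gamma * dt                     # one-step transition matrix, first order in dt
    for k in range(n):
        # inverse-CDF sampling of the next chain state for every path
        cum = np.cumsum(P[r], axis=1)
        r = np.minimum((rng.random(n_paths)[:, None] > cum).sum(axis=1), N - 1)
        x_now, x_del = X[:, d + k], X[:, k]        # X(t) and X(t - tau)
        drift = a[r] - 0.5 * x_now
        diffusion = b[r] + np.sin(x_del)
        X[:, d + k + 1] = x_now + drift * dt + diffusion * rng.normal(0.0, np.sqrt(dt), n_paths)
    return X[:, -1]

# hypothetical two-state data (not from the paper)
a = np.array([1.0, -0.5]); b = np.array([0.5, 1.0])
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])
s1 = sample_xT(+5.0, a, b, Gamma, seed=1)
s2 = sample_xT(-5.0, a, b, Gamma, seed=2)
print(np.percentile(s1, [10, 50, 90]))             # the two sets of quantiles should nearly agree
print(np.percentile(s2, [10, 50, 90]))
```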
Example 3.2. We consider another equation
\[ dX(t)=\bigl(a(r(t))-X^3(t)-bX(t)+cX(t-\tau)\bigr)dt+\sqrt{d(r(t))}\,X^2(t)\,dB(t), \tag{3.3} \]
where $b>c>0$, $a(i)>0$ and $\frac2{17}<d(i)<\frac32$ for all $i\in S$. Obviously, the coefficients of this equation do not satisfy the linear growth condition, although the existence and uniqueness of a global solution can easily be verified using [5, Theorem 2.4]. Unfortunately, as shown below, the hypothesis (H1) does not hold either, so the result in [3] cannot be applied. The purpose of this example is to demonstrate that Eq. (3.3) is stable in distribution although the conditions of Theorem 1.1 are not satisfied. Indeed, a direct calculation gives
\[
\begin{aligned}
2(x-\bar x)\bigl(f(x,y,i)-f(\bar x,\bar y,i)\bigr)+17|g(x,y,i)-g(\bar x,\bar y,i)|^2
&=2(x-\bar x)\bigl(-(x^3-\bar x^3)-b(x-\bar x)+c(y-\bar y)\bigr)+17d(i)(x^2-\bar x^2)^2\\
&=(x-\bar x)^2\bigl[(17d(i)-2)(x^2+\bar x^2)+2(17d(i)-1)x\bar x\bigr]-2b(x-\bar x)^2+2c(x-\bar x)(y-\bar y).
\end{aligned}
\]
Let $\bar x=x+l$ with $l\ne0$. Since $17d(i)>2$ for all $i\in S$,
\[ \lim_{x\to+\infty}\bigl[(17d(i)-2)(x^2+\bar x^2)+2(17d(i)-1)x\bar x\bigr]=+\infty, \]
which easily implies that there is no $\alpha>0$ such that the condition (H1) holds. However, we will show that the coefficients of Eq. (3.3) satisfy Assumptions 1.2 and 1.3. Let $V(x,i)=x^2$; we have
\[
\begin{aligned}
LV(x,y,i)&=2a(i)x-2x^4-2bx^2+2cxy+d(i)x^4\\
&\le-(2-d(i))x^4+2a(i)x-b\Bigl(x-\frac cb y\Bigr)^2-bx^2+\frac{c^2}b y^2
\le-bx^2+\frac{c^2}b y^2+\beta,
\end{aligned}
\]
where $\beta=\sup_{(x,y,i)\in\mathbb R\times\mathbb R\times S}\bigl\{-(2-d(i))x^4+2a(i)x-b(x-\frac cby)^2\bigr\}<+\infty$. Hence Assumption 1.2 is satisfied (with $w_1(x)=x^2$, $\lambda_1=b$ and $\lambda_2=c^2/b<b$, since $b>c$). Next, employing $U(x,i)=x^2$ again, we have the estimate
\[
\begin{aligned}
\tilde LU(x,y,z_1,z_2,i)&=2(x-y)\bigl(-(x^3-y^3)-b(x-y)+c(z_1-z_2)\bigr)+d(i)(x^2-y^2)^2\\
&=-(x-y)^2\bigl[(2-d(i))(x^2+y^2)+2(1-d(i))xy\bigr]-2b(x-y)^2+2c(x-y)(z_1-z_2)\\
&=-b(x-y)^2+\frac{c^2}b(z_1-z_2)^2-b\Bigl((x-y)-\frac cb(z_1-z_2)\Bigr)^2-(x-y)^2\bigl[(2-d(i))(x^2+y^2)+2(1-d(i))xy\bigr].
\end{aligned}
\]
Since $d(i)\le\frac32$ for all $i\in S$, we have $(2-d(i))(x^2+y^2)+2(1-d(i))xy\ge0$ for all $x,y$. Therefore
\[ \tilde LU(x,y,z_1,z_2,i)\le-b(x-y)^2+\frac{c^2}b(z_1-z_2)^2, \]
which means that Assumption 1.3 holds (with $w_2(u)=u^2$, $\lambda\equiv b$ and $\kappa=c^2/b<b$). Consequently, Eq. (3.3) is stable in distribution.

Acknowledgment

This work was done under the support of the NAFOSTED grant, No. 101.02-2011.21.

References

[1] X. Mao, C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, 2006.
[2] R.Z. Has'minskii, Stochastic Stability of Differential Equations, Sijthoff & Noordhoff, 1980.
[3] C. Yuan, J. Zou, X. Mao, Stability in distribution of stochastic differential delay equations with Markovian switching, Systems Control Lett. 50 (2003) 195–207.
[4] W.J. Anderson, Continuous-Time Markov Chains, Springer, Berlin, 1991.
[5] X. Mao, M.J. Rassias, Khasminskii-type theorems for stochastic differential delay equations, Stoch. Anal. Appl. 23 (2005) 1045–1069.
[6] P. Billingsley, Convergence of Probability Measures, John Wiley & Sons, New York, 1999.

Date posted: 16/12/2017, 14:58

Contents

  • On stability in distribution of stochastic differential delay equations with Markovian switching

    • SDDEs with Markovian switching

    • A sufficient condition for stability in distribution of SDDEs with Markovian switching

    • Examples

    • Acknowledgment

    • References
