
DSpace at VNU: A note on sufficient conditions for asymptotic stability in distribution of stochastic differential equations with Markovian switching


Nonlinear Analysis 95 (2014) 625–631

Contents lists available at ScienceDirect
Nonlinear Analysis
journal homepage: www.elsevier.com/locate/na
http://dx.doi.org/10.1016/j.na.2013.09.030

A note on sufficient conditions for asymptotic stability in distribution of stochastic differential equations with Markovian switching

Nguyen Hai Dang*
Faculty of Mathematics, Mechanics, and Informatics, VNU, Hanoi University of Science, 334 Nguyen Trai, Thanh Xuan, Hanoi, Viet Nam
* Tel.: +84 986120076. E-mail address: dangnh.maths@gmail.com

Article history: Received February 2013; Accepted 30 September 2013; Communicated by S. Carl.

Abstract. The aim of this paper is to improve the results on stability in distribution of nonlinear stochastic differential equations in Yuan and Mao (2003). Both conditions for the stability in distribution given in that paper are weakened.
© 2013 Elsevier Ltd. All rights reserved.

MSC: 34K50; 34K20; 65C30; 60J10
Keywords: Stochastic differential equations; Stability in distribution; Itô's formula; Markovian switching; SDEs with Markovian switching

1. Introduction

For the past decade, stochastic differential equations with Markovian switching and their stability have drawn much attention. Most papers are concerned with stochastic stability (stability in probability), moment stability and almost sure stability; stability in distribution, however, has not received attention commensurate with its importance. Among the few papers studying this type of stability, [1] is a notable contribution. Its results are summarized as follows.

Denote by $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, \mathbb{P})$ a probability space satisfying the usual conditions and by $B(t) = (B_1(t), B_2(t), \dots, B_m(t))^T$ an $m$-dimensional Brownian motion adapted to the filtration $(\mathcal{F}_t)_{t\ge 0}$. Let $r(t)$, $t \ge 0$, be a Markov chain on this probability space taking values in a finite state space $S = \{1, 2, \dots, N\}$ with generator $\Gamma = (\gamma_{ij})_{N\times N}$ given by
$$
\mathbb{P}\{r(t+\Delta) = j \mid r(t) = i\} =
\begin{cases}
\gamma_{ij}\Delta + o(\Delta) & \text{if } i \ne j,\\
1 + \gamma_{ii}\Delta + o(\Delta) & \text{if } i = j,
\end{cases}
$$
where $\Delta > 0$. Here $\gamma_{ij} > 0$ is the transition rate from $i$ to $j$ if $i \ne j$, while $\gamma_{ii} = -\sum_{j \ne i}\gamma_{ij}$. It is well known that almost every sample path of $r(t)$ is a right-continuous step function and that $r(t)$ is an ergodic Markov chain. We assume further that the Markov chain $r(\cdot)$ is independent of the Brownian motion $B(\cdot)$.

Consider a nonlinear SDE with Markovian switching
$$
dX(t) = f(X(t), r(t))\,dt + g(X(t), r(t))\,dB(t), \tag{1.1}
$$
where $f : \mathbb{R}^n \times S \to \mathbb{R}^n$ and $g : \mathbb{R}^n \times S \to \mathbb{R}^{n\times m}$. For the existence and uniqueness of a global solution with any initial value, the authors of [1] imposed the following assumption.

Assumption 1.1. The coefficients of Eq. (1.1) satisfy the local Lipschitz condition and the linear growth condition; that is, there exist constants $h > 0$ and $h_k > 0$ ($k \in \mathbb{N}$) such that
$$
|f(x,i)| + |g(x,i)| \le h(1 + |x|) \quad \forall (x,i) \in \mathbb{R}^n \times S,
$$
and, for all $|x| \vee |y| \le k$ and $i \in S$,
$$
|f(x,i) - f(y,i)|^2 + |g(x,i) - g(y,i)|^2 \le h_k |x-y|^2 .
$$

It is known that the process $y(t) = (X(t), r(t))$ is a homogeneous Markov process with state space $\mathbb{R}^n \times S$. Stability in distribution is defined as follows.

Definition (see [1]). The process $y(t)$ is said to be asymptotically stable in distribution if there exists a probability measure $\pi(\cdot \times \cdot)$ on $\mathbb{R}^n \times S$ such that the transition probability $p(t, x, i, dy \times \{j\})$ of $y(t)$ converges weakly to $\pi(dy \times \{j\})$ as $t \to \infty$ for every $(x,i) \in \mathbb{R}^n \times S$. Eq. (1.1) is said to be asymptotically stable in distribution if $y(t)$ is asymptotically stable in distribution.
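For illustration only, a trajectory of a system of the form (1.1) can be simulated by combining an Euler–Maruyama step for $X(t)$ with a first-order (order $\Delta t$) approximation of the switching of $r(t)$. The following Python sketch does this; the generator GAMMA and the scalar coefficients f and g are made-up placeholders and are not taken from the paper.

```python
import numpy as np

# Illustrative simulation of Eq. (1.1): Euler-Maruyama for X(t) combined with
# an O(dt) approximation of the Markovian switching r(t).
# GAMMA, f and g below are made-up placeholders, not taken from the paper.

GAMMA = np.array([[-1.0,  1.0],
                  [ 2.0, -2.0]])          # generator of r(t) on S = {0, 1}

def f(x, i):                              # drift in regime i (placeholder)
    return -(i + 1.0) * x

def g(x, i):                              # diffusion in regime i (placeholder)
    return 0.5 * (i + 1.0)

def simulate(x0, i0, T=10.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, r = float(x0), int(i0)
    xs = np.empty(n + 1); xs[0] = x
    for k in range(n):
        # switch the chain over [t, t + dt): jump with probability ~ (-gamma_rr) dt
        rates = GAMMA[r].copy(); rates[r] = 0.0
        if rng.random() < rates.sum() * dt:
            r = int(rng.choice(len(rates), p=rates / rates.sum()))
        # Euler-Maruyama step for X between switches
        x += f(x, r) * dt + g(x, r) * np.sqrt(dt) * rng.normal()
        xs[k + 1] = x
    return xs

print(simulate(1.0, 0)[-5:])
```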
Let $C^2(\mathbb{R}^n \times S; \mathbb{R}_+)$ denote the family of non-negative functions $U(x,i)$ on $\mathbb{R}^n \times S$ which are twice continuously differentiable in $x$. We denote by $\mathcal{K}$ the set of continuous functions $\mu : \mathbb{R}_+ \to \mathbb{R}_+$ vanishing only at $0$; moreover, $\mu$ is said to belong to $\mathcal{K}_\infty$ if $\mu \in \mathcal{K}$ and $\mu(x) \to \infty$ as $x \to \infty$. C. Yuan and X. Mao proved in [1] that Eq. (1.1) is asymptotically stable in distribution if Assumption 1.1 and the two following ones are satisfied.

Assumption 1.2. There exist a function $V(x,i) \in C^2(\mathbb{R}^n \times S; \mathbb{R}_+)$ and positive constants $\alpha, \beta$ such that $V(x,i) \to \infty$ as $|x| \to \infty$ and
$$
LV(x,i) \le -\alpha V(x,i) + \beta \quad \text{for any } (x,i) \in \mathbb{R}^n \times S .
$$

Assumption 1.3. There exist functions $U \in C^2(\mathbb{R}^n \times S; \mathbb{R}_+)$ and $\nu, \mu \in \mathcal{K}$ such that
$$
U(0,i) = 0 \quad \forall i \in S, \tag{1.2}
$$
$$
\nu(|x|) \le U(x,i) \quad \forall (x,i) \in \mathbb{R}^n \times S, \tag{1.3}
$$
$$
LU(x,y,i) \le -\mu(|x-y|) \quad \forall x, y \in \mathbb{R}^n,\ i \in S . \tag{1.4}
$$

The two operators $LV(x,i)$ and $LU(x,y,i)$, which will be used throughout this paper, are defined by
$$
LV(x,i) = \sum_{j=1}^{N} \gamma_{ij} V(x,j) + V_x(x,i) f(x,i) + \frac{1}{2}\operatorname{trace}\big[ g(x,i)^T V_{xx}(x,i)\, g(x,i) \big]
$$
and
$$
LU(x,y,i) = \sum_{j=1}^{N} \gamma_{ij} U(x-y,j) + U_x(x-y,i)\big(f(x,i) - f(y,i)\big) + \frac{1}{2}\operatorname{trace}\big[\big(g(x,i)-g(y,i)\big)^T U_{xx}(x-y,i)\,\big(g(x,i)-g(y,i)\big)\big].
$$

This result is very useful for investigating the stability in distribution of Eq. (1.1), as illustrated by the examples in [1]. Nevertheless, the conditions can be weakened so as to cover a larger collection of stochastic differential equations. A slight and standard improvement is to replace the linear growth condition by a condition of Khasminskii type. Moreover, Assumptions 1.2 and 1.3 should also be replaced with weaker ones. Assumption 1.2 guarantees the tightness of the family of transition probabilities; as we point out later, it can be weakened. Condition (1.4) also seems too restrictive: it demands that $LU(x,y,i)$ be bounded from above, uniformly in $x$ and $y$, by a negative-definite function of the difference $x-y$, and is therefore essentially satisfied only when $f$ and $g$ are globally Lipschitz. For this reason we localize this condition as well. The improvements are presented in the next section and justified by some examples.

2. New sufficient conditions

Instead of using Assumptions 1.1–1.3, we will prove that Eq. (1.1) is still asymptotically stable in distribution under the three following assumptions.

Assumption 2.1. The coefficients of Eq. (1.1) satisfy the local Lipschitz condition; that is, for any $k \in \mathbb{N}$ there exists an $h_k > 0$ such that
$$
|f(x,i) - f(y,i)|^2 + |g(x,i) - g(y,i)|^2 \le h_k |x-y|^2 \quad \forall\, |x| \vee |y| \le k,\ i \in S .
$$

Assumption 2.2. There exist a function $V \in C^2(\mathbb{R}^n \times S; \mathbb{R}_+)$, a function $\nu \in \mathcal{K}_\infty$ and a positive number $\beta$ such that
$$
\lim_{|x|\to\infty} V(x,i) = \infty \quad \forall i \in S \tag{2.1}
$$
and
$$
LV(x,i) \le \beta - \nu(|x|) \quad \forall (x,i) \in \mathbb{R}^n \times S . \tag{2.2}
$$

Assumption 2.3. There exist functions $U \in C^2(\mathbb{R}^n \times S; \mathbb{R}_+)$, $\mu_1 \in \mathcal{K}$ and, for each $M > 0$, $\mu_M \in \mathcal{K}$ satisfying
$$
U(0,i) = 0 \quad \forall i \in S, \tag{2.3}
$$
$$
\mu_1(|x|) \le U(x,i) \quad \forall (x,i) \in \mathbb{R}^n \times S, \tag{2.4}
$$
$$
LU(x,y,i) \le -\mu_M(|x-y|) \quad \forall\, |x| \vee |y| \le M,\ i \in S . \tag{2.5}
$$
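Checking Assumptions 2.2 and 2.3 for a concrete model amounts to computing $LV$ (and $LU$) explicitly. As a sketch of how this can be mechanised in the scalar case $n = 1$, the following SymPy snippet evaluates the operator $LV$ defined above; the two-state generator and the coefficients, which mimic Example 3.2 below with made-up constants $a(i), b(i), c(i)$, are assumptions of the snippet and not part of the theory.

```python
import sympy as sp

# Symbolic evaluation of LV(x, i) in the scalar case n = 1, as a sketch of how
# Assumption 2.2 can be checked for a concrete model. The generator and the
# coefficients (mimicking Example 3.2 with made-up constants) are assumptions
# of this snippet.

x = sp.symbols('x', real=True)
S = [0, 1]
GAMMA = sp.Matrix([[-1, 1], [2, -2]])                 # made-up generator
a = {0: 1, 1: 2}; b = {0: 1, 1: 3}; c = {0: 1, 1: 2}  # made-up constants

f = {i: a[i] - b[i] * sp.atan(x) * sp.log(x**2 + 1) for i in S}   # drift of (3.3)
g = {i: sp.Integer(c[i]) for i in S}                              # diffusion of (3.3)
V = {i: x**2 for i in S}                                          # Lyapunov candidate

def LV(i):
    # LV(x,i) = sum_j gamma_ij V(x,j) + V_x(x,i) f(x,i) + (1/2) g(x,i)^2 V_xx(x,i)
    switching = sum(GAMMA[i, j] * V[j] for j in S)
    drift     = sp.diff(V[i], x) * f[i]
    diffusion = sp.Rational(1, 2) * g[i]**2 * sp.diff(V[i], x, 2)
    return sp.simplify(switching + drift + diffusion)

for i in S:
    # expect 2*a(i)*x + c(i)**2 - 2*b(i)*x*atan(x)*log(x**2 + 1),
    # cf. the computation in Example 3.2
    print(i, LV(i))
```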
Theorem 2.1. Let Assumptions 2.1 and 2.2 hold. Then, for any initial value $(x,i) \in \mathbb{R}^n \times S$, there exists a unique global solution of (1.1), denoted by $X^{x,i}(t)$. Moreover, for every $R > 0$ we can find a $K = K(R) > 0$ such that
$$
\frac{1}{t}\int_0^t \mathbb{E}\,\nu\big(|X^{x,i}(s)|\big)\,ds \le K \quad \forall\, |x| \le R,\ t \ge 1 .
$$

Proof. Under Assumptions 2.1 and 2.2, the existence and uniqueness of the global strong solution follow from the Khasminskii-type theorem (see [2, Theorem 3.19]). Define the stopping times
$$
\rho_k = \inf\big\{ t \ge 0 : |X^{x,i}(t)| \vee |X^{y,i}(t)| > k \big\}, \quad k \in \mathbb{N},
$$
where $X^{y,i}(t)$ denotes the solution starting from another initial value $(y,i)$; these stopping times will be reused in the proof of Theorem 2.2. By the generalized Itô formula,
$$
\mathbb{E}\,V\big(X^{x,i}(t\wedge\rho_k), r(t\wedge\rho_k)\big) \le V(x,i) + \mathbb{E}\int_0^{t\wedge\rho_k} \Big(\beta - \nu\big(|X^{x,i}(s)|\big)\Big)\,ds,
$$
which implies
$$
\mathbb{E}\int_0^{t\wedge\rho_k} \nu\big(|X^{x,i}(s)|\big)\,ds \le V(x,i) + \beta t .
$$
Letting $k \to \infty$ and using the Lebesgue monotone convergence theorem, we conclude that
$$
\frac{1}{t}\,\mathbb{E}\int_0^{t} \nu\big(|X^{x,i}(s)|\big)\,ds \le \beta + \frac{V(x,i)}{t} \le K(R) \quad \forall\, |x| \le R,\ t \ge 1 .
$$

Theorem 2.2. If Assumptions 2.1–2.3 are satisfied, then Eq. (1.1) has the following property: for any positive numbers $R, \varepsilon, \delta$ there exists a $T = T(R,\varepsilon,\delta) > 0$ such that
$$
\mathbb{P}\big\{ |X^{x,i}(t) - X^{y,i}(t)| \le \delta \ \ \forall t \ge T \big\} \ge 1 - \varepsilon \quad \forall\, |x| \vee |y| \le R,\ i \in S .
$$

Proof. Suppose that $|x| \vee |y| \le R$ and let $\delta > 0$ and $\varepsilon > 0$ be arbitrary. We first show that for any bounded stopping times $\tau_1 \le \tau_2$,
$$
\mathbb{E}\,U\big(X^{x,i}(\tau_2) - X^{y,i}(\tau_2),\, r(\tau_2)\big) \le \mathbb{E}\,U\big(X^{x,i}(\tau_1) - X^{y,i}(\tau_1),\, r(\tau_1)\big) \le U(x-y,\, i) \tag{2.6}
$$
and
$$
0 \le \mathbb{E}\int_0^{\tau_1} LU\big(X^{x,i}(s),\, X^{y,i}(s),\, r(s)\big)\,ds + U(x-y,\, i). \tag{2.7}
$$
Indeed, let $\rho_k$ be the stopping times defined above. Applying the generalized Itô formula, we obtain
$$
0 \le \mathbb{E}\,U\big(X^{x,i}(\tau_1\wedge\rho_k) - X^{y,i}(\tau_1\wedge\rho_k),\, r(\tau_1\wedge\rho_k)\big) = U(x-y,i) + \mathbb{E}\int_0^{\tau_1\wedge\rho_k} LU\big(X^{x,i}(s), X^{y,i}(s), r(s)\big)\,ds \le U(x-y,i).
$$
Since $\tau_1\wedge\rho_k \to \tau_1$ as $k \to \infty$, Fatou's lemma gives
$$
\mathbb{E}\,U\big(X^{x,i}(\tau_1) - X^{y,i}(\tau_1),\, r(\tau_1)\big) \le U(x-y,i),
$$
and similarly
$$
\mathbb{E}\,U\big(X^{x,i}(\tau_2) - X^{y,i}(\tau_2),\, r(\tau_2)\big) \le \mathbb{E}\,U\big(X^{x,i}(\tau_1) - X^{y,i}(\tau_1),\, r(\tau_1)\big),
$$
which proves (2.6). It also follows from the generalized Itô formula that
$$
-\,\mathbb{E}\int_0^{\tau_1\wedge\rho_k} LU\big(X^{x,i}(s), X^{y,i}(s), r(s)\big)\,ds \le U(x-y,i).
$$
Since Assumption 2.3 holds for every $M > 0$, we have $LU(x,y,i) \le 0$ for all $x, y, i$; hence
$$
-\int_0^{\tau_1\wedge\rho_k} LU\big(X^{x,i}(s), X^{y,i}(s), r(s)\big)\,ds \nearrow -\int_0^{\tau_1} LU\big(X^{x,i}(s), X^{y,i}(s), r(s)\big)\,ds \quad \text{as } k \to \infty,
$$
and the Lebesgue monotone convergence theorem yields (2.7).

Since $\nu \in \mathcal{K}_\infty$, there exists an $M > 0$ such that $\inf_{\xi > M}\nu(\xi) \ge 2K/\varepsilon^2$, where $K = K(R)$ is the constant from Theorem 2.1. By Chebyshev's inequality,
$$
\mathbb{P}\big\{ |X^{x,i}(t)| \vee |X^{y,i}(t)| > M \big\} \le \mathbb{P}\big\{ |X^{x,i}(t)| > M \big\} + \mathbb{P}\big\{ |X^{y,i}(t)| > M \big\} \le \frac{\varepsilon^2}{2K}\Big( \mathbb{E}\,\nu\big(|X^{x,i}(t)|\big) + \mathbb{E}\,\nu\big(|X^{y,i}(t)|\big) \Big).
$$
On the other hand, since $U$ is continuous and $U(0,i) = 0$ for all $i \in S$, there exists an $\alpha$ with $0 < \alpha < \delta$ such that $\sup_{|z|\le\alpha,\, i\in S} U(z,i)/\mu_1(\delta) \le \varepsilon$. Define the stopping time
$$
\tau_\alpha = \inf\big\{ t \ge 0 : |X^{x,i}(t) - X^{y,i}(t)| \le \alpha \ \text{and} \ |X^{x,i}(t)| \vee |X^{y,i}(t)| \le M \big\}.
$$
Writing $LU(s) = LU\big(X^{x,i}(s), X^{y,i}(s), r(s)\big)$ for brevity, note that
$$
\mathbb{E}\int_0^{\tau_\alpha\wedge t} LU(s)\,ds = \mathbb{E}\int_0^{t} \mathbf{1}_{\{0\le s\le\tau_\alpha\}} LU(s)\,ds \le \mathbb{E}\int_0^{t} \mathbf{1}_{\{|X^{x,i}(s)|\vee|X^{y,i}(s)|\le M\}}\,\mathbf{1}_{\{0\le s\le\tau_\alpha\}} LU(s)\,ds \le -\mu_M(\alpha)\,\mathbb{E}\int_0^{t} \mathbf{1}_{\{|X^{x,i}(s)|\vee|X^{y,i}(s)|\le M\}}\,\mathbf{1}_{\{0\le s\le\tau_\alpha\}}\,ds ;
$$
the first inequality holds because $LU \le 0$, and the second because, for $0 \le s < \tau_\alpha$ with $|X^{x,i}(s)|\vee|X^{y,i}(s)|\le M$, we have $|X^{x,i}(s) - X^{y,i}(s)| > \alpha$ and hence $LU(s) \le -\mu_M(\alpha)$ by (2.5). On the other hand,
$$
\mathbb{E}\int_0^{t} \mathbf{1}_{\{0\le s\le\tau_\alpha\}}\,ds - \mathbb{E}\int_0^{t} \mathbf{1}_{\{|X^{x,i}(s)|\vee|X^{y,i}(s)|\le M\}}\,\mathbf{1}_{\{0\le s\le\tau_\alpha\}}\,ds = \mathbb{E}\int_0^{t} \mathbf{1}_{\{|X^{x,i}(s)|\vee|X^{y,i}(s)| > M\}}\,\mathbf{1}_{\{0\le s\le\tau_\alpha\}}\,ds ,
$$
and, by Hölder's inequality and Theorem 2.1,
$$
\Big( \mathbb{E}\int_0^{t} \mathbf{1}_{\{|X^{x,i}(s)|\vee|X^{y,i}(s)| > M\}}\,\mathbf{1}_{\{0\le s\le\tau_\alpha\}}\,ds \Big)^{2} \le \mathbb{E}\int_0^{t} \mathbf{1}_{\{0\le s\le\tau_\alpha\}}\,ds \cdot \mathbb{E}\int_0^{t} \mathbf{1}_{\{|X^{x,i}(s)|\vee|X^{y,i}(s)| > M\}}\,ds \le t\cdot\frac{\varepsilon^2}{2K}\int_0^{t}\Big( \mathbb{E}\,\nu\big(|X^{x,i}(s)|\big) + \mathbb{E}\,\nu\big(|X^{y,i}(s)|\big) \Big)ds \le (\varepsilon t)^2 \quad \forall t \ge 1 .
$$
Consequently,
$$
\mathbb{E}\int_0^{t} \mathbf{1}_{\{|X^{x,i}(s)|\vee|X^{y,i}(s)|\le M\}}\,\mathbf{1}_{\{0\le s\le\tau_\alpha\}}\,ds \ge \mathbb{E}(\tau_\alpha\wedge t) - \varepsilon t ,
$$
which implies
$$
\mathbb{E}\int_0^{\tau_\alpha\wedge t} LU(s)\,ds \le -\mu_M(\alpha)\big( \mathbb{E}(\tau_\alpha\wedge t) - \varepsilon t \big).
$$
In view of (2.7), applied with $\tau_1 = \tau_\alpha\wedge t$,
$$
0 \le U(x-y,i) + \mathbb{E}\int_0^{\tau_\alpha\wedge t} LU(s)\,ds \le U(x-y,i) + \mu_M(\alpha)\varepsilon t - \mu_M(\alpha)\,\mathbb{E}(\tau_\alpha\wedge t).
$$
Consequently,
$$
t\,\mathbb{P}(\tau_\alpha \ge t) \le \mathbb{E}(\tau_\alpha\wedge t) \le \frac{U(x-y,i)}{\mu_M(\alpha)} + \varepsilon t \quad \forall t \ge 1 .
$$
Since $U$ is continuous, $U(x-y,i)$ is bounded uniformly in $|x|\vee|y| \le R$ and $i \in S$; this implies the existence of a $T = T(R,\varepsilon,\delta) > 0$ satisfying
$$
\mathbb{P}(\tau_\alpha \ge T) \le 2\varepsilon . \tag{2.8}
$$
Define the stopping time $\sigma = \inf\{ t \ge \tau_\alpha : |X^{x,i}(t) - X^{y,i}(t)| \ge \delta \}$; it is easy to verify that
$$
\sigma_\alpha = \begin{cases} \sigma & \text{if } \tau_\alpha \le T,\\ \tau_\alpha & \text{otherwise}, \end{cases}
$$
is also a stopping time. Since $\sigma_\alpha \ge \tau_\alpha$, it follows from (2.6) that
$$
\mathbb{E}\,U\big(X^{x,i}(\sigma_\alpha\wedge t) - X^{y,i}(\sigma_\alpha\wedge t),\, r(\sigma_\alpha\wedge t)\big) \le \mathbb{E}\,U\big(X^{x,i}(\tau_\alpha\wedge t) - X^{y,i}(\tau_\alpha\wedge t),\, r(\tau_\alpha\wedge t)\big).
$$
Moreover, since $\sigma_\alpha = \tau_\alpha$ on $\{\tau_\alpha > T\}$,
$$
\mathbb{E}\,\mathbf{1}_{\{\tau_\alpha > T\}}\,U\big(X^{x,i}(\sigma_\alpha\wedge t) - X^{y,i}(\sigma_\alpha\wedge t),\, r(\sigma_\alpha\wedge t)\big) = \mathbb{E}\,\mathbf{1}_{\{\tau_\alpha > T\}}\,U\big(X^{x,i}(\tau_\alpha\wedge t) - X^{y,i}(\tau_\alpha\wedge t),\, r(\tau_\alpha\wedge t)\big).
$$
Consequently,
$$
\mathbb{E}\,\mathbf{1}_{\{\tau_\alpha \le T\}}\,U\big(X^{x,i}(\sigma_\alpha\wedge t) - X^{y,i}(\sigma_\alpha\wedge t),\, r(\sigma_\alpha\wedge t)\big) \le \mathbb{E}\,\mathbf{1}_{\{\tau_\alpha \le T\}}\,U\big(X^{x,i}(\tau_\alpha\wedge t) - X^{y,i}(\tau_\alpha\wedge t),\, r(\tau_\alpha\wedge t)\big).
$$
Now let $t \ge T$. On $\{\tau_\alpha \le T\}\cap\{\sigma < t\}$ we have $\sigma_\alpha\wedge t = \sigma$ and, by continuity of the paths, $|X^{x,i}(\sigma) - X^{y,i}(\sigma)| = \delta$, so that (2.4) gives $U\big(X^{x,i}(\sigma) - X^{y,i}(\sigma), r(\sigma)\big) \ge \mu_1(\delta)$; on $\{\tau_\alpha \le T\}$ we have $\tau_\alpha\wedge t = \tau_\alpha$ and $|X^{x,i}(\tau_\alpha) - X^{y,i}(\tau_\alpha)| \le \alpha$. Therefore
$$
\mathbb{P}\big(\{\tau_\alpha \le T\}\cap\{\sigma < t\}\big)\,\mu_1(\delta) \le \mathbb{E}\,\mathbf{1}_{\{\tau_\alpha \le T\}}\,U\big(X^{x,i}(\sigma_\alpha\wedge t) - X^{y,i}(\sigma_\alpha\wedge t),\, r(\sigma_\alpha\wedge t)\big) \le \mathbb{E}\,\mathbf{1}_{\{\tau_\alpha \le T\}}\,U\big(X^{x,i}(\tau_\alpha) - X^{y,i}(\tau_\alpha),\, r(\tau_\alpha)\big) \le \sup_{|z|\le\alpha,\, i\in S} U(z,i) \le \varepsilon\,\mu_1(\delta),
$$
so that $\mathbb{P}\big(\{\tau_\alpha \le T\}\cap\{\sigma < t\}\big) \le \varepsilon$ for every $t \ge T$, and, letting $t \to \infty$,
$$
\mathbb{P}\big(\{\tau_\alpha \le T\}\cap\{\sigma < \infty\}\big) \le \varepsilon .
$$
Combining this with (2.8), we obtain, for all $|x|\vee|y| \le R$ and $i \in S$,
$$
\mathbb{P}\big\{ |X^{x,i}(t) - X^{y,i}(t)| \le \delta \ \ \forall t \ge T \big\} \ge \mathbb{P}\big(\{\tau_\alpha \le T\}\cap\{\sigma = \infty\}\big) \ge 1 - 2\varepsilon - \varepsilon = 1 - 3\varepsilon ,
$$
as desired, since $\varepsilon > 0$ is arbitrary.
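The property established in Theorem 2.2 is a pathwise one: two solutions started from different points but driven by the same Brownian motion and the same chain eventually stay close with high probability. The following rough numerical illustration (not a proof) couples two such solutions, using the drift of Eq. (3.3) from Example 3.2 below; the constants and the generator are made up.

```python
import numpy as np

# Rough numerical illustration (not a proof) of the property in Theorem 2.2:
# two solutions driven by the SAME Brownian path and the SAME chain r(t),
# started from different points, tend to stay close for large t.
# Coefficients follow Eq. (3.3) with made-up constants; GAMMA is made up too.

rng = np.random.default_rng(1)
GAMMA = np.array([[-1.0, 1.0], [2.0, -2.0]])
a, b, c = [1.0, 2.0], [1.0, 3.0], [1.0, 2.0]

def drift(x, i):
    return a[i] - b[i] * np.arctan(x) * np.log(x * x + 1.0)

T, dt = 50.0, 1e-3
x, y, r = 5.0, -5.0, 0
for _ in range(int(T / dt)):
    rates = GAMMA[r].copy(); rates[r] = 0.0
    if rng.random() < rates.sum() * dt:
        r = int(rng.choice(len(rates), p=rates / rates.sum()))
    dB = np.sqrt(dt) * rng.normal()      # the same noise increment for both solutions
    x += drift(x, r) * dt + c[r] * dB
    y += drift(y, r) * dt + c[r] * dB
print(abs(x - y))                        # typically much smaller than |5 - (-5)|
```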
To take further steps we need to introduce more notation. Let $\mathcal{P}(\mathbb{R}^n\times S)$ denote the set of all probability measures on $\mathbb{R}^n\times S$. For $P_1, P_2 \in \mathcal{P}(\mathbb{R}^n\times S)$ we define the metric $d_{\mathbb{H}}$ by
$$
d_{\mathbb{H}}[P_1, P_2] = \sup_{f\in\mathbb{H}} \Big| \sum_{i=1}^{N}\int_{\mathbb{R}^n} f(x,i)\,P_1(dx,i) - \sum_{i=1}^{N}\int_{\mathbb{R}^n} f(x,i)\,P_2(dx,i) \Big| ,
$$
where
$$
\mathbb{H} = \big\{ f : \mathbb{R}^n\times S \to \mathbb{R} \ :\ |f(x,i) - f(y,j)| \le |x-y| + |i-j|,\ |f(\cdot,\cdot)| \le 1 \big\} .
$$
It is well known that weak convergence of probability measures is equivalent to convergence in this metric (see [3]).

Lemma 2.1. Let Assumptions 2.1–2.3 hold. Then, for any compact subset $C$ of $\mathbb{R}^n$,
$$
\lim_{t\to\infty} d_{\mathbb{H}}\big[ p(t,x,i,\cdot\times\cdot),\ p(t,y,j,\cdot\times\cdot) \big] = 0 \tag{2.9}
$$
uniformly in $x, y \in C$ and $i, j \in S$.

Proof. Under Assumptions 2.1 and 2.2 it is easy to show that, for any positive numbers $\varepsilon, R$ and $T$, we can find an $\tilde R = \tilde R(R,T,\varepsilon) > 0$ such that
$$
\mathbb{P}\Big\{ \sup_{0\le t\le T} |X^{x,i}(t)| \le \tilde R \Big\} > 1 - \varepsilon \quad \forall\, |x| \le R,\ i \in S .
$$
Combining this property with the conclusion of Theorem 2.2, we obtain the desired assertion by following the proof of [1, Lemma 3.2].

Note that Assumptions 2.1 and 2.2 guarantee the Feller property of the solution. Furthermore, they also imply the existence of an invariant probability measure $\pi$ on $\mathbb{R}^n\times S$, owing to the conclusion of Theorem 2.1 (see [4, Theorem 4.5] or [5] for details). We now prove our main result with simplified notation: we write $p(t,u,dv)$ for the transition probability instead of $p(t,x,i,dy\times\{j\})$ and set $\mathbb{S} = \mathbb{R}^n\times S$.

Theorem 2.3. Let Assumptions 2.1–2.3 hold. Then the transition probabilities $p(t,z,du)$ converge weakly to the invariant probability measure $\pi(du)$ for every $z\in\mathbb{S}$; that is, Eq. (1.1) is asymptotically stable in distribution.

Proof. Since $\pi(\cdot)$ is invariant,
$$
\pi(du) = \int_{\mathbb{S}} p(t,v,du)\,\pi(dv) \quad \forall t \ge 0 ,
$$
which implies, for every $f\in\mathbb{H}$,
$$
\int_{\mathbb{S}} f(u)\,\pi(du) = \int_{\mathbb{S}} \Big( \int_{\mathbb{S}} f(u)\,p(t,v,du) \Big)\pi(dv). \tag{2.10}
$$
For any $z\in\mathbb{S}$ and $f\in\mathbb{H}$ we estimate
$$
\Big| \int_{\mathbb{S}} f(u)\,p(t,z,du) - \int_{\mathbb{S}} f(u)\,\pi(du) \Big| = \Big| \int_{\mathbb{S}} \Big( \int_{\mathbb{S}} f(u)\,p(t,z,du) - \int_{\mathbb{S}} f(u)\,p(t,v,du) \Big)\pi(dv) \Big|
$$
$$
\le \int_{B_k} \Big| \int_{\mathbb{S}} f(u)\,p(t,z,du) - \int_{\mathbb{S}} f(u)\,p(t,v,du) \Big|\,\pi(dv) + \int_{B_k^c} \Big| \int_{\mathbb{S}} f(u)\,p(t,z,du) - \int_{\mathbb{S}} f(u)\,p(t,v,du) \Big|\,\pi(dv)
$$
$$
\le \sup_{v\in B_k}\big\{ d_{\mathbb{H}}\big[p(t,z,\cdot),\, p(t,v,\cdot)\big] \big\}\,\pi(B_k) + 2\,\pi(B_k^c), \tag{2.11}
$$
where $B_k = \{x\in\mathbb{R}^n : |x|\le k\}\times S$, $B_k^c = \mathbb{S}\setminus B_k$, and $k\in\mathbb{N}$ is chosen sufficiently large that $\pi(B_k) \ge 1 - \varepsilon/4$. On the other hand, by Lemma 2.1 there is a $T = T(z, B_k, \varepsilon) > 0$ such that
$$
\sup_{v\in B_k}\big\{ d_{\mathbb{H}}\big[p(t,z,\cdot),\, p(t,v,\cdot)\big] \big\} \le \frac{\varepsilon}{2} \quad \forall t \ge T . \tag{2.12}
$$
Since $f\in\mathbb{H}$ is arbitrary, it follows from (2.11) and (2.12) that
$$
d_{\mathbb{H}}\big[p(t,z,\cdot),\, \pi(\cdot)\big] \le \varepsilon \quad \forall t \ge T , \tag{2.13}
$$
which implies the desired conclusion.

3. Examples

We now illustrate our results with the two following examples.

Example 3.1. We consider stability in distribution of the stochastic logistic model
$$
dX_t = X_t\big( a(r(t)) - b(r(t))X_t \big)\,dt + c(r(t))\,X_t^2\,dB(t), \tag{3.1}
$$
where $a(i), b(i), c(i)$ are positive constants for each $i\in S$. It is known (see [6]) that if $X_0 > 0$ a.s., then $0 < X_t < \infty$ for all $t > 0$ with probability $1$. Putting $Z_t = \ln X_t$, the equation becomes
$$
dZ_t = \Big( a(r(t)) - b(r(t))\,e^{Z_t} - \frac{c^2(r(t))}{2}\,e^{2Z_t} \Big)dt + c(r(t))\,e^{Z_t}\,dB(t). \tag{3.2}
$$
For this equation the local Lipschitz condition (Assumption 2.1) is obviously satisfied. Moreover, consider the function $V(z,i) = e^z + e^{-z} > 0$, $(z,i)\in\mathbb{R}\times S$. We have
$$
LV(z,i) = a(i)\big(e^z - e^{-z}\big) - b(i)\big(e^{2z} - 1\big) + c^2(i)\,e^{z}.
$$
Thus $\beta := \sup_{(z,i)\in\mathbb{R}\times S}\{ LV(z,i) + \varepsilon V(z,i)\} < +\infty$ whenever $\varepsilon < \min_{i\in S}\{a(i)\}$, and it follows that
$$
LV(z,i) \le \beta - \varepsilon V(z,i) \quad \forall (z,i)\in\mathbb{R}\times S,
$$
so Assumption 2.2 holds (with, for instance, $\nu(r) = \varepsilon(e^{r} + e^{-r} - 2) \in \mathcal{K}_\infty$). Put $U(z,i) = z^2$ and consider two solutions of (3.2) with initial values $(u,i)$ and $(v,i)$ in $\mathbb{R}\times S$. We have
$$
LU(u,v,i) = -2(u-v)\Big( b(i)\big(e^u - e^v\big) + \frac{c^2(i)}{2}\big(e^{2u} - e^{2v}\big) \Big) + c^2(i)\big(e^u - e^v\big)^2 = -2b(i)(u-v)\big(e^u - e^v\big) - c^2(i)(u-v)\big(e^u - e^v\big)\Big( e^u + e^v - \frac{e^u - e^v}{u - v} \Big).
$$
It is easy to see that $e^u + e^v - \frac{e^u - e^v}{u - v} > 0$ for all $u\neq v$, hence
$$
LU(u,v,i) \le -2b(i)(u-v)\big(e^u - e^v\big) \quad \forall (u,v,i)\in\mathbb{R}\times\mathbb{R}\times S .
$$
On the other hand, $LU(u,v,i) \to 0$ as $u\to-\infty$ and $v\to-\infty$ with $u-v$ kept fixed. Thus there is no $\mu\in\mathcal{K}$ such that $LU(u,v,i) \le -\mu(|u-v|)$ for all $u, v$; that is, the function $U(z,i) = z^2$ does not satisfy Assumption 1.3. However, for any $M > 0$ and $|u|\vee|v| \le M$, the mean value theorem gives
$$
LU(u,v,i) \le -2b(i)(u-v)\big(e^u - e^v\big) \le -k_M\,(u-v)^2, \quad \text{where } k_M = \min_{i\in S}\{b(i)\}\,e^{-M}.
$$
Consequently Assumption 2.3 holds for this function, and it follows that Eq. (3.2) is asymptotically stable in distribution. As a result, Eq. (3.1) is asymptotically stable in distribution on the state space $(0,+\infty)\times S$ as well.
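A crude Monte Carlo check of this conclusion (again, an illustration rather than a proof) is to simulate Eq. (3.2) from two different initial values and compare the empirical quantiles of $X(T) = e^{Z(T)}$ at a large time $T$; if the law has stabilised they should be close. The constants $a(i), b(i), c(i)$, the two-state generator and all numerical settings in the sketch below are made up.

```python
import numpy as np

# Crude Monte Carlo check (not a proof) that the law of the solution of (3.1)
# forgets its initial value: simulate Z = ln X via Eq. (3.2) from two starting
# points and compare empirical quantiles of X(T). All constants are made up.

rng = np.random.default_rng(2)
GAMMA = np.array([[-1.0, 1.0], [2.0, -2.0]])
a, b, c = np.array([1.0, 1.5]), np.array([1.0, 2.0]), np.array([0.3, 0.2])

def sample_XT(x0, n_paths=1000, T=20.0, dt=1e-3):
    z = np.full(n_paths, np.log(x0))
    r = np.zeros(n_paths, dtype=int)
    for _ in range(int(T / dt)):
        jump = rng.random(n_paths) < (-GAMMA[r, r]) * dt
        r = np.where(jump, 1 - r, r)      # two-state chain: a jump is a flip
        ez = np.exp(z)
        z += (a[r] - b[r] * ez - 0.5 * c[r]**2 * ez**2) * dt \
             + c[r] * ez * np.sqrt(dt) * rng.normal(size=n_paths)
    return np.exp(z)

q = np.linspace(0.1, 0.9, 5)
print(np.quantile(sample_XT(0.1), q))
print(np.quantile(sample_XT(10.0), q))    # should be close if the law has stabilised
```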
Example 3.2. Consider another equation
$$
dX(t) = \Big( a(r(t)) - b(r(t))\arctan\big(X(t)\big)\,\ln\big(X^2(t) + 1\big) \Big)dt + c(r(t))\,dB(t), \tag{3.3}
$$
where $a(i), b(i), c(i)$ are constants for each $i\in S$ and the $b(i)$ are positive. Using $V(x,i) = x^2$, we see that
$$
LV(x,i) = 2a(i)x + c^2(i) - 2b(i)\,x\arctan x\,\ln(x^2+1) = 2a(i)x + c^2(i) - \big(2b(i) - \alpha\big)x\arctan x\,\ln(x^2+1) - \alpha\,x\arctan x\,\ln(x^2+1) \le \beta - \nu(x), \tag{3.4}
$$
where $\alpha = \min\{b(i) : i\in S\}$, $\nu(x) = \alpha\,x\arctan x\,\ln(x^2+1)$ (a function of $|x|$ belonging to $\mathcal{K}_\infty$) and
$$
\beta = \sup_{(x,i)\in\mathbb{R}\times S}\big\{ 2a(i)x + c^2(i) - (2b(i)-\alpha)\,x\arctan x\,\ln(x^2+1) \big\} < \infty .
$$
The estimate (3.4) shows that Assumption 2.2 holds. On the other hand, it is easy to see that Assumption 1.2 cannot be satisfied with $V(x,i) = x^2$. In order to check Assumption 2.3, take $U(x,i) = x^2$ as well and compute
$$
LU(x,y,i) = -2b(i)(x-y)\Big( \arctan x\,\ln(x^2+1) - \arctan y\,\ln(y^2+1) \Big). \tag{3.5}
$$
Arguing as for Eq. (3.2), but using only the strict monotonicity of $x\mapsto \arctan x\,\ln(x^2+1)$, one can show that for any $M > 0$ there is a function $\mu_M\in\mathcal{K}$ such that
$$
LU(x,y,i) \le -\mu_M\big(|x-y|\big) \quad \forall\, |x|\vee|y| \le M,\ i\in S,
$$
although there is no $\mu\in\mathcal{K}$ such that $LU(x,y,i) \le -\mu(|x-y|)$ for all $x, y$ (indeed, $LU(x,y,i) \to 0$ as $x, y \to +\infty$ with $x-y$ kept fixed). We therefore still conclude that Eq. (3.3) is asymptotically stable in distribution.
References

[1] C. Yuan, X. Mao, Asymptotic stability in distribution of stochastic differential equations with Markovian switching, Stochastic Process. Appl. 79 (2003) 45–67.
[2] X. Mao, C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, 2006.
[3] N. Ikeda, S. Watanabe, Stochastic Differential Equations and Diffusion Processes, North-Holland, Amsterdam, 1981.
[4] S.P. Meyn, R.L. Tweedie, Stability of Markovian processes III: Foster–Lyapunov criteria for continuous-time processes, Adv. Appl. Probab. 25 (1993) 518–548.
[5] L. Stettner, On the existence and uniqueness of invariant measure for continuous time Markov processes, LCDS Report No. 86-16, Brown University, Providence, 1986.
[6] Q. Luo, X. Mao, Stochastic population dynamics under regime switching, J. Math. Anal. Appl. 334 (2007) 69–84.
