COMMUNICATIONS ON PURE AND APPLIED ANALYSIS
Volume 13, Number 6, November 2014
doi:10.3934/cpaa.2014.13.2693, pp. 2693–2712

ASYMPTOTIC BEHAVIOR OF KOLMOGOROV SYSTEMS WITH PREDATOR-PREY TYPE IN RANDOM ENVIRONMENT

Nguyen Huu Du
Department of Mathematics, Mechanics and Informatics, Hanoi National University, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam

Nguyen Hai Dang
Department of Mathematics, Wayne State University, Detroit, MI 48202, USA

(Communicated by Wei Feng)

Abstract. The paper is concerned with the asymptotic behavior of a two-species population whose densities are described by Kolmogorov systems of predator-prey type in a random environment. We study the omega-limit set and find conditions ensuring the existence and attractivity of a stationary density. Some applications to the predator-prey model with Beddington-DeAngelis functional response are considered to illustrate our results.

2000 Mathematics Subject Classification. Primary: 34C12, 60H10; Secondary: 92D25.
Key words and phrases. Kolmogorov systems of predator-prey type, telegraph noise, stationary distribution, piecewise deterministic Markov process.
The first author is supported by NAFOSTED No. 101.02-2011.21. The second author is supported in part by the Army Research Office under grant W911NF-12-1-0223.

1. Introduction. In ecology, the development of a species is affected by intrinsic factors as well as by interactions with other species. One of the most important types of interaction between populations is the predator-prey one, which includes such common interactions as plant-herbivore, host-parasitoid, herbivore-carnivore and host-pathogen. To understand the dynamics of populations of predator-prey type more deeply, they are often modeled by Kolmogorov systems, that is, the prey density x and the predator density y satisfy the differential equations

x˙ = xf(x, y),    y˙ = yg(x, y),    (1.1)

where f(x, y), g(x, y) are the growth rates of the two species (see [3, 20]). Moreover, in order to reflect the prey-predator relationship between these species, the following assumptions are imposed:

f(0, 0) > 0;  g(0, 0) < 0;  ∂f(x, y)/∂y < 0;  ∂g(x, y)/∂x > 0  ∀(x, y) ∈ R2+.    (1.2)

It is well recognized that these traditional models are in general not adequate to describe reality, because the growth rates are subject to environmental noise and other random factors. This fact has stimulated studies on population models perturbed by some kind of random noise.

Although over the past decade there have been many papers studying stochastic Kolmogorov systems with white noise and/or colored noise (e.g. [6, 13, 14, 15, 16, 18, 23] and the references therein), these usually focus on special cases such as Lotka-Volterra models and other predator-prey models with special functional responses. Meanwhile, not much attention has been paid to general random Kolmogorov systems. For this reason, this paper considers a Kolmogorov system perturbed by colored noise. The colored noise arises from the assumption that there is random switching between several regimes, such as a hot regime and a cold one, rainy and dry seasons, good or poor protection for the prey. These regimes differ in nutrition, food resources and other factors. Consequently, the switching changes growth rates, carrying capacities, and inter-specific or intra-specific interactions in a way that cannot be captured by a traditional deterministic model. The random switching can be represented by a continuous-time Markov chain, so we consider a hybrid system subject to such a Markov chain in lieu of the Kolmogorov system (1.1). To simplify, we suppose that the Markov chain ξt takes values in a two-element set E = {+, −}, representing telegraph noise. Recently, the papers [2, 4, 8, 15, 23], among others, have dealt with such models. In [2], the authors dealt with the classical prey-predator system with telegraph noise

x˙(t) = x(t)(a(ξt) − b(ξt)x(t) − c(ξt)y(t)),
y˙(t) = y(t)(−d(ξt) + f(ξt)x(t)),    (1.3)

where a, b, c, d, f are positive functions defined on E. The main findings in [2] are to point out some subsets of the ω-limit set of the solutions and to give criteria for the permanence of the system (1.3).

The purpose of this paper is to generalize and improve these results for a general Kolmogorov system

x˙(t) = x a(ξt, x, y),
y˙(t) = y b(ξt, x, y),    (1.4)

where a(i, x, y), b(i, x, y) are continuously differentiable real-valued functions on R2+ = {(x, y) : x ≥ 0, y ≥ 0} for each i ∈ E. In this model, the telegraph noise (ξt) results in switching between the deterministic Kolmogorov systems

x˙(t) = x a(+, x, y),
y˙(t) = y b(+, x, y),    (1.5)

and

x˙(t) = x a(−, x, y),
y˙(t) = y b(−, x, y).    (1.6)

A similar model has been considered in [7] when (1.4) is of competition type. Under some mild assumptions on the functions a(±, x, y), b(±, x, y), all ω-limit sets of positive solutions of equation (1.4) are described there, and it is shown that the ω-limit sets of all positive solutions coincide and absorb all other positive solutions. Moreover, if the threshold values λ1 and λ2 are positive, then lim sup_{t→∞} x(t) > 0 and lim sup_{t→∞} y(t) > 0 a.s. However, there are still two open problems for that model. They are:
• Does the hypothesis λ1 > 0, λ2 > 0 imply the existence of a stationary distribution?
• When either λ1 < 0 or λ2 < 0, we do not know the behavior of x(t), y(t).

The aim of this paper is to study the Kolmogorov equation (1.4) as a model of predator-prey type. We want to obtain results like those in [7] and to answer the above open questions. In order to describe the ω-limit set, we introduce a threshold value λ and consider three concrete cases. We pay particular attention to the case where either the system (1.5) or (1.6) has a unique limit cycle. The threshold value λ plays an important role in practice because it allows us to decide whether the system is persistent by analyzing the coefficients. Since λ is given by an integral formula, it can easily be calculated by numerical methods. One of the distinctive features of this paper is to show that lim_{t→∞} y(t) = 0 if λ < 0, in contrast to the existence of a stationary distribution of the Markov process (ξt, x(t), y(t)) with support in E × int R2+ when λ > 0. We also show that this stationary distribution has a density and that it attracts all other distributions under certain additional assumptions.

The rest of the paper is organized as follows. In Section 2 we describe the path-wise dynamic behavior of positive solutions to systems of predator-prey type under the effect of telegraph noise; it is shown that the ω-limit set absorbs all positive solutions. Section 3 is concerned with the stability of the stationary density, using the Foguel alternative theorem in [21]. The last section gives an application to the predator-prey model with Beddington-DeAngelis functional response.

2. Asymptotic behavior of Kolmogorov prey-predator type systems in random environment. Let (Ω, F, P) be a complete probability space and let (ξt)t≥0 be a Markov process, defined on (Ω, F, P), taking values in a set of two elements, say E = {+, −}. Suppose that (ξt) has transition intensities + → − with rate α and − → + with rate β, where α > 0, β > 0. The process (ξt) has a unique stationary distribution

p = lim_{t→∞} P{ξt = +} = β/(α + β);  q = lim_{t→∞} P{ξt = −} = α/(α + β).
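As a quick illustration, the stationary distribution of the telegraph noise can be checked by direct simulation from its exponential holding times; the rates α = 2, β = 3 in the following minimal Python sketch are arbitrary choices, not values from the paper. The long-run fraction of time spent in state + should be close to p = β/(α + β).

```python
import random

def telegraph_fraction(alpha, beta, t_max, seed=0):
    """Simulate the telegraph noise xi_t on E = {+, -}.

    From state '+' the chain jumps to '-' at rate alpha; from '-' it
    jumps back to '+' at rate beta, so every holding time is exponential.
    Returns the fraction of [0, t_max] spent in state '+'.
    """
    rng = random.Random(seed)
    state, t, time_plus = '+', 0.0, 0.0
    while t < t_max:
        rate = alpha if state == '+' else beta
        sojourn = min(rng.expovariate(rate), t_max - t)  # truncate at horizon
        if state == '+':
            time_plus += sojourn
        t += sojourn
        state = '-' if state == '+' else '+'  # switch regime
    return time_plus / t_max

alpha, beta = 2.0, 3.0
frac = telegraph_fraction(alpha, beta, t_max=20_000.0)
p = beta / (alpha + beta)  # stationary probability of state '+'
print(frac, p)
```

Over a horizon of this length the empirical fraction agrees with p to within a few tenths of a percent.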
The trajectories of (ξt) are piecewise-constant, cadlag functions. Let 0 = τ0 < τ1 < τ2 < ... < τn < ... be its jump times. Put σ1 = τ1 − τ0, σ2 = τ2 − τ1, ..., σn = τn − τn−1. Thus σ1 = τ1 is the first jump time from the initial state, and σ2 is the time the process ξt spends in the state into which it moved from the first state. It is known that the σk's are mutually independent conditionally on the sequence {ξτk}k=1∞. Note that if ξ0 is given, then every ξτn is known, because the process ξt takes only two values. Hence, {σk}k=1∞ is a sequence of conditionally independent R+-valued random variables. Moreover, if ξ0 = + then σ2n+1 has the exponential density α1[0,∞)exp(−αt) and σ2n has the density β1[0,∞)exp(−βt). Conversely, if ξ0 = − then σ2n has the exponential density α1[0,∞)exp(−αt) and σ2n+1 has the density β1[0,∞)exp(−βt) (see [10, vol. 2, pp. 217]). Here 1[0,∞)(t) = 1 for t ≥ 0 (= 0 for t < 0).

Adapting the conditions (1.2) of the prey-predator relation, we suppose that for both systems (1.5) and (1.6) the coefficients a(±, x, y) and b(±, x, y) satisfy an assumption making the system (1.4) look like a prey-predator model.

Assumption 2.1. a(±, ·) and b(±, ·) are continuously differentiable in R2+. Moreover,
1) ∂a(±, x, 0)/∂x < 0 ∀x ≥ 0;
2) a(±, 0, 0) > 0, lim_{x→∞} a(±, x, 0) < 0;
3) b(±, 0, y) < 0 ∀y ≥ 0.

Note that Assumption 2.1 is more relaxed than (1.2), since its conditions are not imposed on all of R2+ as the relations in (1.2) are, but only on the boundary ∂R2+. Throughout this paper, we suppose that the system (1.5) (resp. (1.6)) has a unique solution (x+(t, x0, y0), y+(t, x0, y0)) (resp. (x−(t, x0, y0), y−(t, x0, y0))) starting at (x0, y0) ∈ int R2+, and that this solution is defined on [0, ∞). Likewise, denote by (x(t, x0, y0), y(t, x0, y0)) the solution to (1.4) with initial value (x0, y0). Furthermore, we assume both systems (1.5) and (1.6) to be dissipative with a common set D in the following sense.

Assumption 2.2. For any
(x0, y0) ∈ int R2+, there is a compact set D := D(x0, y0) ⊂ R2+ such that (x0, y0) ∈ D and D is a common invariant set for both systems (1.5) and (1.6).

From now on, we will drop x0, y0 from the notations for solutions to (1.4), (1.5) and (1.6) whenever no ambiguity arises. We need the following lemma.

Lemma 2.1. Suppose that the system x˙(t) = f(x, y), y˙(t) = g(x, y), where f, g : R2 → R, has a globally asymptotically stable equilibrium (x*, y*), i.e., (x*, y*) is stable and every solution (x(t), y(t)) defined on [0, ∞) satisfies lim_{t→∞}(x(t), y(t)) = (x*, y*). Then, for any compact set K ⊂ R2 and any neighborhood U of (x*, y*), there exists a number T* > 0 such that (x(t, x0, y0), y(t, x0, y0)) ∈ U for any t > T*, provided (x0, y0) ∈ K.

Proof. See the proof of Lemma 2.1 in [7].

Adapting the concept in [5], we define the (random) ω-limit set of the trajectories of system (1.4) that start in a closed set B as

Ω(B, ω) = ∩_{T>0} cl ∪_{t>T} (x(t, B, ω), y(t, B, ω)).

In particular, the ω-limit set of the trajectory starting from (x0, y0) is

Ω(x0, y0, ω) = ∩_{T>0} cl ∪_{t>T} (x(t, x0, y0, ω), y(t, x0, y0, ω)).

This concept is different from the one in [9], but it is closest to that of an ω-limit set for a deterministic dynamical system. In the case where Ω(x0, y0, ω) is a.s. constant, it is similar to the concepts of weak attractor and attractor given in [17, 24]. Although, in general, the ω-limit set in this sense does not have the invariance property, the concept is appropriate for our purpose of describing the pathwise asymptotic behavior of the solution with a given initial value. Our task in this section is to show that, under some conditions, Ω(x0, y0, ω) is deterministic, that is, constant almost surely; further, it is also independent of the initial value (x0, y0).

As in the deterministic case, the behavior on the boundary of the solutions of system (1.4) plays an important role. For predator-prey systems, in the absence of the prey
(when x(t) = 0), the predator must die out. Indeed, by item 3) of Assumption 2.1, b(±, 0, y) < 0 ∀y ≥ 0, so we can find an ε > 0 and an M > y(0) > 0 such that b(±, 0, y) < −ε for any 0 < y < M. Hence,

y˙(t) = y(t)b(ξt, 0, y(t)) < −ε y(t) ∀t > 0  ⟹  lim_{t→∞} y(t) = 0.

Therefore, we focus on the system on the boundary R+ × {0}:

u˙(t) = u(t)a(ξt, u(t), 0),  u(0) ∈ [0, ∞).    (2.1)

By Assumption 2.1, there is a unique number u+ > 0 satisfying a(+, u+, 0) = 0 and a unique number u− > 0 with a(−, u−, 0) = 0. In the case u+ ≠ u− we suppose u+ < u− and put h+ = h+(u) = ua(+, u, 0), h− = h−(u) = ua(−, u, 0). It is known that if u(t) is the solution of the system (2.1), then (ξt, u(t)) is a Markov process with infinitesimal operator L given by

Lg(+, u) = −α(g(+, u) − g(−, u)) + h+(u) (d/du) g(+, u),
Lg(−, u) = β(g(+, u) − g(−, u)) + h−(u) (d/du) g(−, u),

with g(i, u) a function defined on E × (0, ∞), continuously differentiable in u. The stationary density (µ+, µ−) of (ξt, u(t)) can be found from the Fokker-Planck equation

−αµ+(u) + βµ−(u) − (d/du)[h+(u)µ+(u)] = 0,
αµ+(u) − βµ−(u) − (d/du)[h−(u)µ−(u)] = 0.    (2.2)

Solving this equation, we obtain a unique positive density given by

µ+(u) = θF(u) / (u|a(+, u, 0)|),  µ−(u) = θF(u) / (u|a(−, u, 0)|),    (2.3)

where, for u ∈ [u+, u−] and u¯ = (u+ + u−)/2,

F(u) = exp( −∫_{u¯}^{u} [ α/(τ a(+, τ, 0)) + β/(τ a(−, τ, 0)) ] dτ ),

and

θ = [ ∫_{u+}^{u−} ( pF(u)/(u|a(+, u, 0)|) + qF(u)/(u|a(−, u, 0)|) ) du ]^{−1}.

Thus, the process (ξt, u(t)) has a unique stationary distribution with density (µ+, µ−) (see [1] for the details). Further, for any continuous function f : E × R → R with

∫_{u+}^{u−} [ p|f(+, u)|µ+(u) + q|f(−, u)|µ−(u) ] du < ∞,

we have

lim_{t→∞} (1/t) ∫_0^t f(ξs, u(s)) ds = ∫_{u+}^{u−} [ pf(+, u)µ+(u) + qf(−, u)µ−(u) ] du.    (2.4)

In the case u+ = u−, the process (ξt, u(t)) has a unique stationary distribution with generalized density µ+(u) = µ−(u) = δ(u − u+), where δ(·) is the Dirac function. Define

λ = ∫_{u+}^{u−} [ pb(+, u, 0)µ+(u) + qb(−, u, 0)µ−(u) ] du.    (2.5)
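Since λ is given by an integral formula, it can be approximated by elementary quadrature. The Python sketch below uses made-up illustrative coefficients, none of which come from the paper: logistic prey growth a(i, u, 0) = r_i(1 − u/K_i), so that u+ = K+ and u− = K−, and a Lotka-Volterra predator rate b(i, u, 0) = −d_i + c_i u. It uses a crude trapezoidal rule that simply stays a small distance away from the integrable singularities of the density at u+ and u−.

```python
import math

# Hypothetical regime data (illustration only, not from the paper).
r = {'+': 1.0, '-': 0.8}
K = {'+': 2.0, '-': 3.0}   # K_plus < K_minus, i.e. u+ < u-
d = {'+': 0.5, '-': 0.6}
c = {'+': 0.4, '-': 0.3}
alpha, beta = 2.0, 3.0      # switching rates + -> - and - -> +
p, q = beta / (alpha + beta), alpha / (alpha + beta)

def a(i, u):
    return r[i] * (1.0 - u / K[i])     # prey per-capita growth a(i, u, 0)

def b(i, u):
    return -d[i] + c[i] * u            # predator per-capita growth b(i, u, 0)

u_plus, u_minus = K['+'], K['-']
u_bar = 0.5 * (u_plus + u_minus)

def integrate(f, lo, hi, n=400):
    """Composite trapezoidal rule (signed if hi < lo)."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for k in range(1, n):
        s += f(lo + k * h)
    return s * h

def F(u):
    # F(u) = exp(-int_{u_bar}^{u} [alpha/(tau a(+,tau)) + beta/(tau a(-,tau))] dtau)
    g = lambda tau: alpha / (tau * a('+', tau)) + beta / (tau * a('-', tau))
    return math.exp(-integrate(g, u_bar, u))

eps = 1e-4                       # keep away from the singular endpoints
lo, hi = u_plus + eps, u_minus - eps

def weight(u):
    f = F(u)
    return p * f / (u * abs(a('+', u))) + q * f / (u * abs(a('-', u)))

theta = 1.0 / integrate(weight, lo, hi)   # normalizing constant of (2.3)

def lam_integrand(u):
    f = F(u)
    mu_p = theta * f / (u * abs(a('+', u)))
    mu_m = theta * f / (u * abs(a('-', u)))
    return p * b('+', u) * mu_p + q * b('-', u) * mu_m

lam = integrate(lam_integrand, lo, hi)    # threshold value (2.5)
print('theta =', theta, 'lambda =', lam)
```

For serious use, an adaptive quadrature that handles the endpoint singularities (e.g. scipy.integrate.quad) would be preferable; with the coefficients chosen here, b(±, u, 0) ≥ 0 on [u+, u−], so λ > 0 and the predator persists.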
Theorem 2.2. For any x0 > 0, y0 > 0:
a) there exists δ1 > 0 such that lim sup_{t→∞} x(t, x0, y0) ≥ δ1 a.s.;
b) if λ > 0, there is δ2 > 0 satisfying lim sup_{t→∞} y(t, x0, y0) ≥ δ2 a.s.

Proof. Let M be a positive number satisfying D ⊂ [0, M] × [0, M]. By the assumptions b(±, 0, y) < 0 and a(±, 0, 0) > 0, there exist δ1 > 0 and ε1 > 0 such that b(±, x, y) < −ε1 for all 0 < x ≤ δ1, 0 < y ≤ M, and a(±, x, y) > ε1 for all 0 ≤ x, y ≤ δ1. Suppose that lim sup_{t→∞} x(t) < δ1 with positive probability. Then there is a T1 > 0 such that x(t) < δ1, y(t) ≤ M for all t ≥ T1, which implies y˙(t) = y(t)b(ξt, x(t), y(t)) < −ε1 y(t). Therefore, for some T2 > T1 we have y(t) < δ1 for all t ≥ T2. Combining this with the inequality x(t) < δ1, we obtain x˙(t) > ε1 x(t) for all t ≥ T2. Hence x(t) ↑ ∞. This contradiction yields the first assertion of the theorem. Item b) can be proved in the same manner as [7, Theorem 2.1].

In the sequel, we always suppose that λ > 0. By the assumptions a(±, 0, 0) > 0, b(±, 0, 0) < 0 and Theorem 2.2, there exists δ > 0 such that lim sup_{t→∞} x(t) > δ, lim sup_{t→∞} y(t) > δ, and a(±, x, y) > 0, b(±, x, y) < 0 whenever 0 < x, y ≤ δ.

Lemma 2.3. With probability 1, there are infinitely many sn = sn(ω) > 0 such that sn > sn−1, lim_{n→∞} sn = ∞ and x(sn) ≥ δ, y(sn) ≥ δ for all n ∈ N.

Proof. The proof is similar to that of Lemma 2.3 in [7] with some slight modifications, so it is omitted here.

We consider some concrete cases with additional assumptions. Firstly, we note that the eigenvalues of the Jacobian matrix of (xa(−, x, y), yb(−, x, y))^T at (u−, 0) are u−(∂a/∂x)(−, u−, 0) and b(−, u−, 0). In view of Assumption 2.1, (∂a/∂x)(−, u−, 0) < 0, so the equilibrium (u−, 0) of the system (1.6) is a saddle point if and only if b(−, u−, 0) > 0. Moreover, the condition b(−, u−, 0) < 0 is sufficient for the local stability of (u−, 0). Set

xn = x(τn, x, y);  yn = y(τn, x, y);  F0n = σ(τk : k ≤ n);  Fn∞ = σ(τk − τn : k > n).

It is clear that (xn, yn) is F0n-measurable and that F0n is independent of Fn∞ if ξ0 is given. For the
sake of simplicity, we suppose ξ0 = + and define, for 0 < ε ≤ N, Hε,N = ([ε, N] × [ε, N]) ∩ D.

2.1. Case 1: One system is stable and the other is permanent.

Assumption 2.3. On the open quadrant int R2+, the system (1.5) has a globally stable positive state (x*+, y*+). Moreover, the equilibrium (u−, 0) of the system (1.6) is a saddle point.

Lemma 2.4. Let Assumption 2.3 be satisfied and let M be as in the proof of Theorem 2.2. Then, for any ε > 0, there exists a σ+(ε) > 0 such that x+(t) > σ+(ε), y+(t) > σ+(ε) for all t > 0, provided (x+(0), y+(0)) ∈ Hε,M.

Proof. See [7, Lemma 2.4].

Lemma 2.5. Let Assumption 2.3 be satisfied and let M be as in the proof of Theorem 2.2. Then, there is a σ− > 0 such that x−(t) > σ−, y−(t) > σ− for all t > 0, provided (x−(0), y−(0)) ∈ Hδ,M.

Proof. It is possible to consider the system (1.6) as a special case of the system (1.4) with a(+, x, y) ≡ a(−, x, y) and b(+, x, y) ≡ b(−, x, y). The value λ in this case is b(−, u−, 0) > 0. Hence, from Theorem 2.2 and Lemma 2.3, we derive the existence of a δ′ with 0 < δ′ < δ such that for any positive initial value (x0, y0) ∈ D there is a sequence sn ↑ ∞ with (x−(sn, x0, y0), y−(sn, x0, y0)) ∈ Hδ′,M. Note that δ′ > 0 can be chosen uniformly for every (x0, y0) ∈ D ∩ int R2+. Therefore, for any z = (x, y) ∈ Hδ′/2,M, there exists sz > 0 satisfying (x−(sz, z), y−(sz, z)) ∈ Hδ′,M. By the continuous dependence of the solution on the initial value, we can find an open set Uz ∋ z such that (x−(sz, w), y−(sz, w)) ∈ Hδ′/2,M for all w ∈ Uz. Since Hδ′/2,M is compact, there exists a finite number of points z1, ..., zn ∈ Hδ′/2,M such that Hδ′/2,M ⊂ ∪_{i=1}^{n} Uzi by the Heine-Borel covering theorem. Put s¯ = max{sz1, ..., szn} and

σ− = (1/2) inf{ min{x−(t, z), y−(t, z)} : z ∈ Hδ′/2,M and 0 ≤ t ≤ s¯ }.

It is clear that σ− > 0. We now show that x−(t, z) > σ−, y−(t, z) > σ− for any initial value z ∈ Hδ′/2,M and t > 0. Suppose, to the contrary, that there exist z0 ∈ Hδ′/2,M and t¯ > 0 such that
either x−(t¯, z0) ≤ σ− or y−(t¯, z0) ≤ σ−. Put t0 = sup{0 ≤ t ≤ t¯ : (x−(t, z0), y−(t, z0)) ∈ Hδ′/2,M}. Since {Uzi} covers Hδ′/2,M, the point z0′ := (x−(t0, z0), y−(t0, z0)) belongs to Uzi for some i. It follows from the property of Uzi that (x−(t0 + szi, z0), y−(t0 + szi, z0)) = (x−(szi, z0′), y−(szi, z0′)) ∈ Hδ′/2,M. By the definition of t0, it is obvious that t0 ≤ t¯ < t0 + szi ≤ t0 + s¯. As a result,

x−(t¯, z0) = x−(t¯ − t0, z0′) ≥ inf{ min{x−(t, z), y−(t, z)} : (z, t) ∈ Hδ′/2,M × [0, s¯] } = 2σ−.

Similarly, y−(t¯, z0) ≥ 2σ−. This is a contradiction. The proof is complete.

Lemma 2.6. Let Assumption 2.3 be satisfied. Then, for δ as above, with probability 1 there are infinitely many k = k(ω) ∈ N such that x2k+1 > min{σ+(δ), σ+(σ−)} and y2k+1 > min{σ+(δ), σ+(σ−)}.

Proof. By Lemma 2.3, there exists a sequence sn ↑ ∞ such that x(sn) ≥ δ, y(sn) ≥ δ for all n ∈ N. Put kn := max{i : τi ≤ sn}. From Lemmas 2.4 and 2.5, it is seen that if kn is even then xkn+1 > σ+(δ), ykn+1 > σ+(δ), and that if kn is odd then xkn+1 > σ−, ykn+1 > σ−. Applying Lemma 2.4 again, we obtain xkn+2 > σ+(σ−), ykn+2 > σ+(σ−), provided that kn is odd. Since at least one of the two sets {n : kn + 1 is odd} and {n : kn + 1 is even} is infinite, the proof of Lemma 2.6 is complete.

2.2. Case 2: One system is stable and the other is not permanent.

Assumption 2.4. The system (1.5) has a globally stable positive state (x*+, y*+). Moreover, (u−, 0) is a locally stable equilibrium of the system (1.6).

Lemma 2.7. Let Assumption 2.4 be satisfied. Then,
a) for any ε > 0 we have x+(t) > σ+(ε), y+(t) > σ+(ε) for all t > 0, provided (x+(0), y+(0)) ∈ Hε,M, where σ+(ε) is as in Lemma 2.4;
b) there exists σ̄− > 0 such that x−(t) ≥ σ̄− for all t > 0 if (x−(0), y−(0)) ∈ D and x−(0) ≥ δ. Moreover, for any 0 < ε ≤ δ, we can find 0 < σ−(ε) < ε such that if y−(0) < σ−(ε) and x−(0) ≥ σ̄−, then y−(t) < ε and x−(t) > σ̄− for all t > 0.

Proof. The assertion in a) has been proved in Lemma 2.4. To prove the first assertion of b),
we employ arguments similar to those in the proof of Lemma 2.5. Note that lim sup_{t→∞} x−(t, x, y) > δ for all x > 0, y ≥ 0. Hence, for any z = (x, y) ∈ D with x ≥ δ/2, there exists sz > 0 such that x−(sz, z) ≥ δ. Thus, using the same method as in the proof of Lemma 2.5, we can show that there exists a σ̄− > 0 such that x−(t) ≥ σ̄−, provided (x−(0), y−(0)) ∈ D and x−(0) ≥ δ. For the proof of the second assertion of item b), we refer to [7, Lemma 2.6].

Lemma 2.8. If Assumption 2.4 is satisfied, then with probability 1 there are infinitely many k = k(ω) ∈ N such that x2k+1 > ϱ, y2k+1 > ϱ, where ϱ = min{σ+(δ), σ+(min{σ̄−, σ−(δ)})}.

Proof. In view of Lemma 2.3, there is a sequence (sn) ↑ ∞ such that x(sn) ≥ δ, y(sn) ≥ δ for all n ∈ N. In case τ2k ≤ sn < τ2k+1, we have x(τ2k+1) > σ+(δ) ≥ ϱ and y(τ2k+1) > σ+(δ) ≥ ϱ by item a) of Lemma 2.7. If τ2k−1 ≤ sn < τ2k, Lemma 2.7 shows that x2k > σ̄−. Further, when y2k ≥ σ−(δ), it yields x2k+1 > σ+(min{σ̄−, σ−(δ)}) ≥ ϱ and y2k+1 > σ+(min{σ̄−, σ−(δ)}) ≥ ϱ. We now consider the case y2k < σ−(δ). If max{y(t) : τ2k ≤ t ≤ τ2k+1} ≥ σ−(δ), we choose t1 = min{t ∈ [τ2k, τ2k+1] : y(t) ≥ σ−(δ)}. Clearly y˙(t1) ≥ 0 and y(t1) = σ−(δ) < δ; since y˙(t) < 0 whenever 0 < x(t), y(t) < δ, we must have x(t1) > δ. Consequently, x2k+1 > σ+(min{σ̄−, σ−(δ)}) ≥ ϱ and y2k+1 > σ+(min{σ̄−, σ−(δ)}) ≥ ϱ. If y(t) < σ−(δ) and x(t) ≥ δ for all τ2k ≤ t ≤ τ2k+1, then y2k+1 < σ−(δ), which implies y(t) < δ for all τ2k+1 ≤ t ≤ τ2k+2. Continuing this process, we can either find an odd number 2m + 1 > n satisfying x2m+1 > ϱ, y2m+1 > ϱ, or show that y(t) < δ for all t > τ2k. The latter contradicts Lemma 2.3. Thus, x2m+1 > ϱ, y2m+1 > ϱ. The proof is complete.

For the purpose of simplifying notation, we denote by πt+(x, y) = (x+(t, x, y), y+(t, x, y)) (resp. πt−(x, y) = (x−(t, x, y), y−(t, x, y))) the solution of the system (1.5) (resp. (1.6)) with initial value (x, y). Put

S = { (x, y) = πtn^ε(n) ··· πt2^+ πt1^− (x*+, y*+) : 0 ≤ t1, t2, ..., tn; n ∈ N },    (2.6)

where ε(k) = (−1)^k, i.e., πt^ε(k) stands for πt^− when k is odd and for πt^+ when k is even. The pathwise dynamic behavior of
the solutions of the system (1.4) is fully described by the following theorem, which is stated and proved in [7].

Theorem 2.9. Suppose that (1.5) has a globally stable positive equilibrium (x*+, y*+) and that there exist ϱ > 0 and M such that P{ϱ < x2n+1, y2n+1 < M i.o. of n} = 1. Then,
a) with probability 1, the closure S̄ of S is a subset of the ω-limit set Ω(x0, y0, ω);
b) if there exists t0 > 0 such that the point (x^0, y^0) = πt0^−(x*+, y*+) satisfies the condition

det [ a(+, x^0, y^0)  a(−, x^0, y^0) ; b(+, x^0, y^0)  b(−, x^0, y^0) ] ≠ 0,    (2.7)

then, with probability 1, the closure S̄ of S is the ω-limit set Ω(x0, y0, ω). Moreover, S̄ absorbs all positive solutions in the sense that for any initial value (x0, y0) ∈ int R2+, the value γ(ω) = inf{t > 0 : (x(s, x0, y0, ω), y(s, x0, y0, ω)) ∈ S̄ ∀s > t} is finite outside a P-null set.

Remark. The existence of a point (x^0, y^0) = πt0^−(x*+, y*+) satisfying (2.7) is equivalent to the existence of a point (x, y) ∈ {πt^−(x*+, y*+) : t ≥ 0} such that the curve {πt^+(x, y) : t ≥ 0} is not contained in {πt^−(x*+, y*+) : t ≥ 0}.

Remark. The assumptions that (1.5) has a globally stable positive equilibrium (x*+, y*+) and that there exist ϱ and M such that P{ϱ < x2n+1, y2n+1 < M i.o. of n} = 1 are satisfied if either Assumption 2.3 or Assumption 2.4 holds.

2.3. Case 3: One system has a unique stable limit cycle.

Assumption 2.5. On the quadrant int R2+, the system (1.5) has a unique equilibrium (x*+, y*+) and a unique stable limit cycle Γ which attracts every solution (x+(t, x, y), y+(t, x, y)) starting at (x, y) ∈ int R2+ \ {(x*+, y*+)}. Moreover, (u−, 0) is either a saddle point or a stable critical point of the system (1.6).

Denote by D′ the domain surrounded by Γ. As in the proof of Lemma 2.1, given a compact set K ⊂ int R2+, we can show that for any neighborhood U of D′ there exists a T > 0 such that (x+(t, x, y), y+(t, x, y)) ∈ U for all (x, y) ∈ K and all t ≥ T. Using this property
and the same arguments as in the proofs of Lemmas 2.3–2.8, we obtain:

Lemma 2.10. If Assumption 2.5 is satisfied, there is a ϱ > 0 such that, almost surely, there are infinitely many k = k(ω) with x2k+1 > ϱ and y2k+1 > ϱ.

To describe the ω-limit set of the solution (x(t, x0, y0), y(t, x0, y0)) in this case, we need the following lemma.

Lemma 2.11. Suppose that Assumption 2.5 is satisfied. Let K* be the ε-neighborhood of (x*+, y*+) and U0 be the ε0-neighborhood of a point (x^0, y^0) ∈ Γ. Then there are t0 = t0(ε0) > 0 and T+ = T+(ε, ε0) > 0 such that

Tx,y+ = inf{t > 0 : |(x+(t, x, y), y+(t, x, y)) − (x^0, y^0)| ≤ ε0/2} ≤ T+

for all (x, y) ∈ Hϱ,M \ K*, and such that (x+(t, x, y), y+(t, x, y)) ∈ U0 for all Tx,y+ < t < Tx,y+ + t0.

Proof. Denote by T the period of the periodic solution Γ. By the continuous dependence of the solution on initial values, for any (u, v) ∈ Γ there exist 0 ≤ tu,v ≤ T and a neighborhood Uu,v of (u, v) such that |(x+(tu,v, u′, v′), y+(tu,v, u′, v′)) − (x^0, y^0)| ≤ ε0/2 for all (u′, v′) ∈ Uu,v. Since Γ is a compact set covered by the family {Uu,v : (u, v) ∈ Γ}, there exist (u1, v1), ..., (un, vn) ∈ Γ such that Γ ⊂ U := ∪_{i=1}^{n} Uui,vi. As in the proof of Lemma 2.1, we can find a T*+ = T*+(ε) > 0 such that (x+(t, u, v), y+(t, u, v)) ∈ U for all (u, v) ∈ Hϱ,M \ K* and all t ≥ T*+. Let (x, y) ∈ Hϱ,M \ K* and Tx,y+ = inf{t > 0 : |(x+(t, x, y), y+(t, x, y)) − (x^0, y^0)| ≤ ε0/2}. Obviously, Tx,y+ ≤ T*+ + T := T+. It again follows from the continuous dependence of the solution on the initial values that there exists a t0 = t0(ε0) > 0 satisfying (x+(t, u, v), y+(t, u, v)) ∈ U0 for all 0 < t < t0, provided |(u, v) − (x^0, y^0)| ≤ ε0/2. Hence, (x+(t, x, y), y+(t, x, y)) ∈ U0 for all Tx,y+ < t < Tx,y+ + t0. Lemma 2.11 is proved.

In analogy with (2.6), for any (u, v) ∈ R2+ we put

S(u, v) = { (x, y) = πtn^ε(n) ··· πt1^ε(1) πt0^+ (u, v) : 0 ≤ t0, t1, t2, ..., tn; n ∈ N },    (2.8)

where ε(k) = (−1)^k.

Lemma 2.12. The set S(x^0, y^0) does not depend on the choice of (x^0, y^0) ∈ Γ; it is therefore denoted by S.

Proof. Let (x^0, y^0) and
(x̆0, y̆0) belong to Γ. Then we can find s1 > 0, s2 > 0 such that πs1^+(x^0, y^0) = (x̆0, y̆0) and πs2^+(x̆0, y̆0) = (x^0, y^0). Hence,

πtn^ε(n) ··· πt1^ε(1) πt0^+ (x^0, y^0) = πtn^ε(n) ··· πt1^ε(1) π(t0+s2)^+ (x̆0, y̆0),

and

πtn^ε(n) ··· πt1^ε(1) πt0^+ (x̆0, y̆0) = πtn^ε(n) ··· πt1^ε(1) π(t0+s1)^+ (x^0, y^0).

Consequently, S(x^0, y^0) = S(x̆0, y̆0). The proof is complete.

Theorem 2.13. Suppose that on the quadrant int R2+ the system (1.5) has a unique stable limit cycle Γ and a unique equilibrium (x*+, y*+) which is not an equilibrium of the system (1.6). Suppose further that, for any (x, y) ∈ int R2+ \ {(x*+, y*+)}, Γ is the ω-limit set of the solution (x+(t, x, y), y+(t, x, y)). Finally, assume that there exist positive numbers ϱ and M such that P{2ϱ < x2n+1, y2n+1 < M i.o. of n} = 1. Then,
a) with probability 1, the closure S̄ of S is a subset of the ω-limit set Ω(x0, y0, ω);
b) if there exists (x^0, y^0) ∈ Γ such that

det [ a(+, x^0, y^0)  a(−, x^0, y^0) ; b(+, x^0, y^0)  b(−, x^0, y^0) ] ≠ 0,    (2.9)

then, with probability 1, the closure S̄ of S is the ω-limit set Ω(x0, y0, ω). Moreover, S̄ absorbs all positive solutions in the sense that for any initial value (x0, y0) ∈ int R2+, the value γ(ω) = inf{t > 0 : (x(s, x0, y0, ω), y(s, x0, y0, ω)) ∈ S̄ ∀s > t} is finite outside a P-null set.

Remark. The assumption that there exists an (x^0, y^0) ∈ Γ satisfying (2.9) is equivalent to the condition that Γ does not contain any orbit of the system (1.6).

Proof of Theorem 2.13. a) We construct a sequence of stopping times

η1 = inf{2k + 1 : (x2k+1, y2k+1) ∈ H2ϱ,M},
η2 = inf{2k + 1 > η1 : (x2k+1, y2k+1) ∈ H2ϱ,M},
···
ηn = inf{2k + 1 > ηn−1 : (x2k+1, y2k+1) ∈ H2ϱ,M}.

It is easy to see that {ηk = n} ∈ F0n for any k, n. Thus, the event {ηk = n} is independent of Fn∞ if ξ0 is given. By the hypothesis, ηn < ∞ a.s. for all n. Let s1 > 0 be so small that (x(t, x0, y0), y(t, x0, y0)) ∈ Hϱ,M for all 0 ≤ t ≤ s1, provided (x0, y0) ∈ H2ϱ,M. Denote by K2ε* the 2ε-neighborhood of
(x*+, y*+), and let ε > 0 be sufficiently small that K2ε* ⊂ D′. Since (x*+, y*+) is not an equilibrium of the system (1.6), there exist 0 < s2 < s3 such that (x−(t, x, y), y−(t, x, y)) ∈ Hϱ,M \ K2ε* provided (x, y) ∈ K2ε* and s2 < t < s3. On the other hand, there exists s4 > 0 such that (x+(t, x, y), y+(t, x, y)) ∉ Kε* if 0 < t < s4, provided (x, y) ∉ K2ε*. Let

Bn = {ω : σηn+1 < s1, s2 < σηn+2 < s3, σηn+3 < s4}.

Using arguments similar to those in the proof of [7, Theorem 2.2], we obtain

P( ∩_{k=1}^{∞} ∪_{i=k}^{∞} Bi ) = P{ω : σηn+1 < s1, s2 < σηn+2 < s3, σηn+3 < s4 i.o. of n} = 1.

Note that if Bn occurs and (xηn+1, yηn+1) ∈ K2ε*, then (xηn+3, yηn+3) ∈ D′ \ Kε* ⊂ Hϱ,M \ Kε*. Hence, if Bn occurs, either (xηn+1, yηn+1) or (xηn+3, yηn+3) belongs to G := Hϱ,M \ Kε*. Since Bn occurs infinitely often with probability 1, we also have ζn < ∞ a.s. for every n ∈ N, where

ζ1 = inf{2k + 1 : (x2k+1, y2k+1) ∈ G},
ζ2 = inf{2k + 1 > ζ1 : (x2k+1, y2k+1) ∈ G},
···
ζn = inf{2k + 1 > ζn−1 : (x2k+1, y2k+1) ∈ G}.

Let (x^0, y^0) ∈ Γ and let U0 be its ε0-neighborhood. We will use the quantities Tx,y+, T+, t0 introduced in Lemma 2.11. Note that for an exponentially distributed random variable X, P{t < X < t + a} ≥ P{s < X < s + a} whenever t ≤ s. Since {ζk = n} ∈ F0n, {ζk = n} is independent of Fn∞. Therefore,

P{σζk+1 ∉ (Txζk,yζk+, Txζk,yζk+ + t0)}
= Σ_{n=0}^{∞} ∫_G P{σζk+1 ∉ (Tx,y+, Tx,y+ + t0) | ζk = 2n + 1, xζk = x, yζk = y} P{ζk = 2n + 1, xζk ∈ dx, yζk ∈ dy}
= Σ_{n=0}^{∞} ∫_G P{σ2n+2 ∉ (Tx,y+, Tx,y+ + t0) | ζk = 2n + 1, x2n+1 = x, y2n+1 = y} P{ζk = 2n + 1, x2n+1 ∈ dx, y2n+1 ∈ dy}
≤ Σ_{n=0}^{∞} P{σ2n+2 ∉ (T+, T+ + t0)} P{ζk = 2n + 1}
= Σ_{n=0}^{∞} P{σ2 ∉ (T+, T+ + t0)} P{ζk = 2n + 1}
= 1 − P{σ2 ∈ (T+, T+ + t0)} := 1 − r < 1.

Continuing in this way, we obtain

P( ∪_{i=k}^{n} {σζi+1 ∈ (Txζi,yζi+, Txζi,yζi+ + t0)} ) ≥ 1 − (1 − r)^{n−k+1}.

Thus,

P( ∩_{k=1}^{∞} ∪_{i=k}^{∞} {σζi+1 ∈ (Txζi,yζi+, Txζi,yζi+ + t0)} ) = 1,

that is, P{ω : σζn+1 ∈ (Txζn,yζn+, Txζn,yζn+ + t0)
i.o. of n} = 1. Hence, with probability 1, there are infinitely many k = k(ω) ∈ N such that (x2k+1, y2k+1) ∈ U0.

The next step is to prove that {πt^−(x^0, y^0) : t ≥ 0} ⊂ Ω(x0, y0, ω) a.s. Let t1 be an arbitrary positive number and put (x̄0, ȳ0) := πt1^−(x^0, y^0). In view of the continuity of the solution in the initial values, for any neighborhood Nδ1 of (x̄0, ȳ0) there are 0 < t2 < t1 < t3 and δ2 > 0 such that if (u, v) ∈ U0δ2 then πt^−(u, v) ∈ Nδ1 for any t2 < t < t3. Redefine

ζ1 = inf{2k + 1 : (x2k+1, y2k+1) ∈ U0δ2},
ζ2 = inf{2k + 1 > ζ1 : (x2k+1, y2k+1) ∈ U0δ2},
···
ζn = inf{2k + 1 > ζn−1 : (x2k+1, y2k+1) ∈ U0δ2}.

From the previous part of this proof, it follows that ζk < ∞ and lim_{k→∞} ζk = ∞ a.s. Since {ζk = n} ∈ F0n, {ζk = n} is independent of Fn∞. Therefore,

P{σζk+1 ∉ (t2, t3)} = Σ_{n=0}^{∞} P{σζk+1 ∉ (t2, t3) | ζk = 2n + 1} P{ζk = 2n + 1}
= Σ_{n=0}^{∞} P{σ2n+2 ∉ (t2, t3) | ζk = 2n + 1} P{ζk = 2n + 1}
= Σ_{n=0}^{∞} P{σ2n+2 ∉ (t2, t3)} P{ζk = 2n + 1}
= Σ_{n=0}^{∞} P{σ2 ∉ (t2, t3)} P{ζk = 2n + 1} = P{σ2 ∉ (t2, t3)}.

Continuing this way yields

P{σζk+1 ∉ (t2, t3), σζk+1+1 ∉ (t2, t3)} = P{σ2 ∉ (t2, t3)}²,

and so on. By the same argument as above, we obtain P{ω : σζn+1 ∈ (t2, t3) i.o. of n} = 1. This relation means that (xζk+1, yζk+1) ∈ Nδ1 for infinitely many k ∈ N, i.e., (x̄0, ȳ0) ∈ Ω(x0, y0, ω) a.s. Similarly, for any t > 0, the orbit {πs^+ πt^−(x^0, y^0) : s > 0} is contained in Ω(x0, y0, ω). By induction, we conclude that S is a subset of Ω(x0, y0, ω). It follows from the closedness of Ω(x0, y0, ω) that S̄ ⊂ Ω(x0, y0, ω) a.s.

b) We now prove the second assertion of the theorem. Suppose (x^0, y^0) ∈ Γ satisfies condition (2.9). By the existence and continuous dependence of solutions on initial values, there exist two numbers a > 0 and b > 0 such that the function ϕ(s, t) = πt^+ πs^−(x^0, y^0) is defined and continuously differentiable on (−a, a) × (−b, b). We note that
det( ∂ϕ/∂s  ∂ϕ/∂t )|_(0,0) = det [ x^0 a(+, x^0, y^0)  x^0 a(−, x^0, y^0) ; y^0 b(+, x^0, y^0)  y^0 b(−, x^0, y^0) ] = x^0 y^0 det [ a(+, x^0, y^0)  a(−, x^0, y^0) ; b(+, x^0, y^0)  b(−, x^0, y^0) ] ≠ 0.

Consequently, the Inverse Function theorem implies the existence of 0 < a1 < a and 0 < b1 < b such that ϕ(s, t) is a diffeomorphism between V = (0, a1) × (0, b1) and U = ϕ(V), which is evidently an open set. Moreover, for every (x, y) ∈ U we can find (s*, t*) ∈ (0, a1) × (0, b1) such that (x, y) = πt*^+ πs*^−(x^0, y^0) ∈ S. Hence, U ⊂ S ⊂ Ω(x0, y0, ω). Define γ = inf{t : (x(t), y(t)) ∈ U}. Clearly, γ < ∞ a.s., since U is an open subset of Ω(x0, y0, ω). Since S is a forward invariant set and U ⊂ S, we have (x(t), y(t)) ∈ S for all t > γ with probability 1. The fact that (x(t), y(t)) ∈ S for all t > γ implies that Ω(x0, y0, ω) ⊂ S̄. Combining this with part a), we obtain S̄ = Ω(x0, y0, ω) a.s. The proof is complete.

3. The semigroup and the stability in distribution. Denote z(t) = (x(t), y(t)). It is known that the pair (ξt, z(t)) is a homogeneous Markov process with state space V := E × int R2+. Let B(V) be the Borel σ-algebra on V and let λ be the Lebesgue measure on int R2+. Denote by m the product measure on (V, B(V)) defined by m(+, A) = pλ(A) and m(−, A) = qλ(A). Suppose that P(t, i, z, B), with (i, z) ∈ V, B ∈ B(V) and t ≥ 0, is the transition probability of the Markov process (ξt, z(t)). Let {P(t)}t≥0 be the semigroup defined on the set P(V) of measures by

P(t)ν(B) = ∫_V P(t, i, z, B) ν(di, dz),  ν ∈ P(V), B ∈ B(V).

It is clear that if ν is the distribution of (ξ0, z(0)), then P(t)ν is the distribution of (ξt, z(t)).

Lemma 3.1 (see [7]). If ν is absolutely continuous with respect to m, then so is P(t)ν.

This lemma enables us to define P(t)f as the density of (ξt, zt) given that f is the density of (ξ0, z0). We denote by D the family of density functions on V and cite from [7] the following theorem.

Theorem 3.2. Let the hypotheses of Theorem 2.9 (including the hypothesis of its item b)) be satisfied. If the semigroup {P(t)}t≥0 has an invariant probability
measure $\nu^*$, then $\nu^*$ has a density $f^*$ with $\operatorname{supp}(f^*)=E\times S$, where $S$ is described by (2.6). Moreover, the semigroup is asymptotically stable in the sense that
$$\lim_{t\to\infty}\|P(t)f-f^*\|=0\quad\forall f\in D.$$

The following theorem is similar to Theorem 3.2; however, some modifications need to be made in the proof to account for the differences between a stable equilibrium and a stable limit cycle.

Theorem 3.3. Let the hypotheses of Theorem 2.13 (including the hypothesis in its item b)) be satisfied. Suppose moreover that there exists a $z_0=(x_0,y_0)\in S$ such that the vectors
$$w^+(z_0)-w^-(z_0),\ [w^+,w^-](z_0),\ [w^+,[w^+,w^-]](z_0),\ \cdots$$
span $\mathbb R^2$, where $w^\pm(x,y)=\big(xa(\pm,x,y),\,yb(\pm,x,y)\big)$ and $[\cdot,\cdot]$ is the Lie bracket. If the semigroup $\{P(t)\}_{t\ge0}$ has an invariant probability measure $\nu^*$, then $\nu^*$ has a density $f^*$ with $\operatorname{supp}(f^*)=E\times S$, where $S$ is described in item a) of Theorem 2.13. Moreover, the semigroup is asymptotically stable in the sense that
$$\lim_{t\to\infty}\|P(t)f-f^*\|=0\quad\forall f\in D.$$

Proof. We show that if $\nu^*$ is a stationary distribution of the process $(\xi_t,z(t))$, i.e., $P(t)\nu^*=\nu^*$ for all $t\ge0$ with $\nu^*(E\times\operatorname{int}\mathbb R^2_+)=1$, then $\nu^*(V)>0$ for any open subset $V$ of $E\times S$. First, we prove that $\nu^*(\{+\}\times U_0^\varepsilon)>0$ for all $\varepsilon>0$, where $U_0^\varepsilon$ is the $\varepsilon$-neighborhood of $\overline z=(x^0,y^0)$. In fact, there is a compact subset $H$ of $\operatorname{int}\mathbb R^2_+$ such that $\nu^*(\{+\}\times(H\setminus K^*))>0$. In view of the continuous dependence of the solution on the initial value, for any $z'\in\Gamma$ there exist $T_{z'}\in[0,T]$ and a neighborhood $U_{z'}$ satisfying $\pi^+_{T_{z'}}(z'')\in U_0^\varepsilon$ for all $z''\in U_{z'}$. By virtue of the compactness of $\Gamma$ we can choose a finite number of points $z_1,\dots,z_n$ for which $U=\cup_{i=1}^n U_{z_i}\supset\Gamma$. Since $H\setminus K^*$ is a compact set, it follows directly from Assumption 2.5 that there exists a $\overline T>0$ such that $\pi^+_{\overline T}(z)\in U$ for all $z\in H\setminus K^*$. Hence, for any $z\in H\setminus K^*$, there is at least one $i\in\{1,\dots,n\}$ satisfying $\pi^+_{\overline T+T_{z_i}}(z)\in U_0^\varepsilon$. For such an $i$, $P(\overline T+T_{z_i},+,z,\{+\}\times U_0^\varepsilon)\ge\mathbb P\{\sigma_1>\overline T+T_{z_i}\}>0$. Therefore,
$$\nu^*(\{+\}\times U_0^\varepsilon)=P(\overline T+T_{z_i})\nu^*(\{+\}\times U_0^\varepsilon)\ge\int_{\{+\}\times H}P(\overline T+T_{z_i},+,z,\{+\}\times U_0^\varepsilon)\,d\nu^*>0.$$
Let $t_0>0$ be so small that $z(t)\in U_0^\varepsilon$ a.s. whenever $z_0\in U_0^{\varepsilon/2}$. We have $\nu^*(\{-\}\times U_0^\varepsilon)\ge\nu^*(\{+\}\times U_0^{\varepsilon/2})\,\mathbb P\{\tau_1<t_0<\tau_2\}>0$. Let $\widetilde z=\pi^-_{t_1}(\overline z)$. By the theorem on continuous dependence on initial conditions, for any $\widetilde\varepsilon$-neighborhood $W_{\widetilde\varepsilon}$ of $\widetilde z$ there exists an $\varepsilon>0$ such that $\pi^-_{t_1}(z)\in W_{\widetilde\varepsilon}$ for all $z\in U_0^\varepsilon$, which implies $P(t_1,-,z,\{-\}\times W_{\widetilde\varepsilon})\ge\mathbb P\{\tau_1>t_1\,|\,\xi_0=-\}>0$. Hence,
$$\nu^*(\{-\}\times W_{\widetilde\varepsilon})=P(t_1)\nu^*(\{-\}\times W_{\widetilde\varepsilon})\ge\int_{\{-\}\times U_0^\varepsilon}P(t_1,-,z,\{-\}\times W_{\widetilde\varepsilon})\,d\nu^*>0.$$
As for $\nu^*(\{-\}\times U_0^\varepsilon)$, we can show that $\nu^*(\{-\}\times W_{\widetilde\varepsilon})>0$ for any $\widetilde\varepsilon$-neighborhood $W_{\widetilde\varepsilon}$ of $\widetilde z$. By induction, for every $z\in S$ and every $\varepsilon$-neighborhood $W_\varepsilon$ of $z$ we have $\nu^*(\{i\}\times W_\varepsilon)>0$ for all $i\in E$. Thus, we have shown that $\nu^*(V)>0$ for all open sets $V\subset E\times S$.

We now prove that $\nu^*$ has a density function. By Lebesgue's decomposition theorem, $\nu^*$ can be uniquely decomposed into absolutely continuous and singular parts, that is, $\nu^*=\theta\nu_s^*+(1-\theta)\nu_a^*$, where $\nu_a^*$ is absolutely continuous with respect to $m$, $\nu_s^*$ is singular and $\theta\in[0,1]$. Since $\nu^*$ is a stationary measure,
$$\nu^*=P(t)\nu^*=\theta P(t)\nu_s^*+(1-\theta)P(t)\nu_a^*=\theta\nu_s^*+(1-\theta)\nu_a^*.\qquad(3.1)$$
The case $\theta=0$ says that $\nu^*=\nu_a^*$, i.e., $\nu^*$ is absolutely continuous with respect to $m$. Suppose $\theta\ne0$. Then $P(t)\nu_a^*$ is absolutely continuous with respect to $m$ by Lemma 3.1. Applying the Lebesgue decomposition again to the measure $P(t)\nu_s^*$, we have
$$P(t)\nu_s^*=k\nu_{1s}^*+(1-k)\nu_{1a}^*\quad\text{for some }0\le k\le1,$$
where $\nu_{1a}^*$ is absolutely continuous and $\nu_{1s}^*$ is singular with respect to $m$. Substituting this decomposition into (3.1) we obtain
$$\nu^*=\theta\big(k\nu_{1s}^*+(1-k)\nu_{1a}^*\big)+(1-\theta)P(t)\nu_a^*=\theta k\nu_{1s}^*+\theta(1-k)\nu_{1a}^*+(1-\theta)P(t)\nu_a^*=\theta\nu_s^*+(1-\theta)\nu_a^*.$$
By the uniqueness of the decomposition, $\theta k\nu_{1s}^*=\theta\nu_s^*$. Since $\nu_{1s}^*$ and $\nu_s^*$ are probability measures, we get $k=1$ and $\nu_{1s}^*=\nu_s^*$; hence $P(t)\nu_s^*=\nu_s^*$. Thus, $\nu_s^*$ is also a stationary distribution, and there exists a measurable subset $K$ of $S$ such that $\lambda(K)=0$ and $\nu_s^*(E\times K)=1$. If $(\xi_t,z(t))$ assumes $\nu_s^*$ as its initial distribution, then its distribution at time $t$ is
also $\nu_s^*$. We define two functions
$$\psi_{(z,t)}(s_1,s_2)=\pi^+_{t-s_1-s_2}\circ\pi^-_{s_2}\circ\pi^+_{s_1}(z)\quad\text{and}\quad\overline\psi_{(z,t)}(s_1,s_2)=\pi^-_{t-s_1-s_2}\circ\pi^+_{s_2}\circ\pi^-_{s_1}(z)$$
with $s_1,s_2>0$ and $s_1+s_2<t$. Since the vectors $w^+(z_0)-w^-(z_0)$, $[w^+,w^-](z_0)$, $[w^+,[w^+,w^-]](z_0),\cdots$ span $\mathbb R^2$, it follows from [22, Remark 7] that there are a $\widetilde T>0$ sufficiently small and $s_1,s_2$ with $0<s_1<\widetilde T$, $0<s_2<\widetilde T-s_1$ such that either
$$\det\Big(\frac{\partial\psi_{z_0,\widetilde T}}{\partial s_1},\frac{\partial\psi_{z_0,\widetilde T}}{\partial s_2}\Big)\Big|_{(s_1,s_2)}\ne0\quad\text{or}\quad\det\Big(\frac{\partial\overline\psi_{z_0,\widetilde T}}{\partial s_1},\frac{\partial\overline\psi_{z_0,\widetilde T}}{\partial s_2}\Big)\Big|_{(s_1,s_2)}\ne0.$$
Without loss of generality, suppose that $\psi_{(z,t)}$ has this property. Then there exists $\varepsilon>0$ such that $\det\big(\frac{\partial\psi_{z,\widetilde T}}{\partial s_1},\frac{\partial\psi_{z,\widetilde T}}{\partial s_2}\big)\big|_{(s_1,s_2)}\ne0$ for all $z$ in the $\varepsilon$-neighborhood, say $U^\varepsilon_{z_0}$, of $z_0$. For each $z\in U^\varepsilon_{z_0}$, there is an open neighborhood $W_z$ of $(s_1,s_2)$ such that $\psi_{z,\widetilde T}$ is a diffeomorphism between $W_z$ and $\widetilde W_z:=\psi_{z,\widetilde T}(W_z)$. It is easy to see that
$$P(\widetilde T,+,z,E\times(\widetilde W_z\setminus K))\ge\mathbb P\{(\sigma_1,\sigma_2)\in W_z\setminus\psi^{-1}_{z,\widetilde T}(K),\ \tau_2<\widetilde T<\tau_3\}.$$
Since $\psi_{z,\widetilde T}$ is a diffeomorphism and $\lambda(K)=0$, we have $\lambda(\psi^{-1}_{z,\widetilde T}(K))=0$. On the other hand, since the distribution of $(\sigma_1,\sigma_2)$ is absolutely continuous w.r.t. $\lambda$, $\mathbb P\{(\sigma_1,\sigma_2)\in\psi^{-1}_{z,\widetilde T}(K)\}=0$, which implies that
$$P(\widetilde T,+,z,E\times(S\setminus K))\ge P(\widetilde T,+,z,E\times(\widetilde W_z\setminus K))\ge\mathbb P\{(\sigma_1,\sigma_2)\in W_z\setminus\psi^{-1}_{z,\widetilde T}(K),\ \tau_2<\widetilde T<\tau_3\}=\mathbb P\{(\sigma_1,\sigma_2)\in W_z,\ \tau_2<\widetilde T<\tau_3\}>0.$$
We have proved that $\nu(V)>0$ for any stationary distribution $\nu$ and any open set $V\subset E\times S$. Consequently, $\nu_s^*(\{+\}\times U^\varepsilon_{z_0})>0$ since $\nu_s^*$ is also a stationary distribution. As a result,
$$\nu_s^*(E\times(S\setminus K))=P(\widetilde T)\nu_s^*(E\times(S\setminus K))\ge\int_{\{+\}\times U^\varepsilon_{z_0}}P(\widetilde T,+,z,E\times(S\setminus K))\,d\nu_s^*>0.$$
This is a contradiction. Thus $\theta=0$ and $\nu^*$ is absolutely continuous w.r.t. $m$, so it has a density function $f^*$. We have also proved that $\int_V f^*\,dm>0$ for every open subset $V$ of $E\times S$. Therefore, $\operatorname{supp}f^*=E\times S$. Moreover, it follows from [21, Proposition 2] that $\{P(t)\}_{t\ge0}$ is asymptotically stable. The proof is complete.

This theorem provides some nice properties of a stationary distribution, but it tells us nothing about
its existence, for which we need some additional assumptions.

Assumption 3.1. Suppose that $b(\pm,x,0)$ is increasing in $x>0$, $a(\pm,x,0)=\max_{y\ge0}\{a(\pm,x,y)\}$, and $b(\pm,x,0)=\max_{y\ge0}\{b(\pm,x,y)\}$.

Remark. Note that well-known predator-prey models satisfy the conditions $\frac{\partial a(\pm,x,y)}{\partial y}\le0$, $\frac{\partial b(\pm,x,y)}{\partial x}\ge0$ and $\frac{\partial b(\pm,x,y)}{\partial y}\le0$ for all $(x,y)\in\mathbb R^2_+$ (see (1.2)). These conditions obviously imply that Assumption 3.1 holds.

Proposition 1. Let Assumptions 2.1, 2.2 and 3.1 be satisfied. If $\lambda>0$, there exists $\delta_3>0$ such that $\liminf_{t\to\infty}\frac1t\int_0^t y(s)\,ds\ge\delta_3$ a.s. If $\lambda<0$, then $\lim_{t\to\infty}y(t)=0$ a.s.

Proof. Let $\lambda>0$ and let $u(t)$ be the solution of the equation $\dot u(t)=u(t)a(\xi_t,u(t),0)$ with $u(0)=x(0)$. By Assumption 3.1, $a(\pm,x,y)\le a(\pm,x,0)$ for all $(x,y)\in\mathbb R^2_+$. In view of the comparison theorem, $u(t)\ge x(t)$ for all $t\ge0$. We find a positive number $J$ such that
$$J>\max_{0\le x,y\le M}\Big\{\Big|\frac{\partial a(\pm,x,y)}{\partial y}\Big|,\ \Big|\frac{\partial b(\pm,x,0)}{\partial x}\Big|,\ \Big|\frac{\partial b(\pm,x,y)}{\partial y}\Big|\Big\}.$$
By the Lagrange theorem, from the assumptions $b(\pm,x,0)=\max_{0\le y\le M}\{b(\pm,x,y)\}$ and $\frac{\partial b(\pm,x,0)}{\partial x}\ge0$ it follows that $b(\pm,x,y)-b(\pm,x,0)\ge-Jy$ and $b(\pm,x,0)-b(\pm,u,0)\ge-J(u-x)$ provided $(x,y),(u,0)\in D$ and $x\le u$. As a result,
$$b(\xi_t,x(t),y(t))=\big[b(\xi_t,x(t),y(t))-b(\xi_t,x(t),0)\big]+\big[b(\xi_t,x(t),0)-b(\xi_t,u(t),0)\big]+b(\xi_t,u(t),0)\ge-J\big(y(t)+u(t)-x(t)\big)+b(\xi_t,u(t),0).$$
Therefore,
$$\frac1t\int_0^t b(\xi_s,x(s),y(s))\,ds\ge-J\Big[\frac1t\int_0^t y(s)\,ds+\frac1t\int_0^t\big(u(s)-x(s)\big)\,ds\Big]+\frac1t\int_0^t b(\xi_s,u(s),0)\,ds.$$
Noting that
$$\limsup_{t\to\infty}\frac1t\int_0^t b(\xi_s,x(s),y(s))\,ds=\limsup_{t\to\infty}\frac1t\int_0^t\frac{\dot y(s)}{y(s)}\,ds=\limsup_{t\to\infty}\frac1t\big(\ln y(t)-\ln y(0)\big)\le0$$
and that $\lim_{t\to\infty}\frac1t\int_0^t b(\xi_s,u(s),0)\,ds=\lambda$ a.s., one has
$$\liminf_{t\to\infty}\frac1t\int_0^t\big[y(s)+(u(s)-x(s))\big]\,ds\ge\frac{\lambda}{J}\ \text{a.s.}\qquad(3.2)$$
By Assumption 2.1, it is seen that $\ell:=\min_{0\le x\le M}\big\{-\frac{\partial a(\pm,x,0)}{\partial x}\big\}>0$. From the assumption $a(\pm,x,0)=\max_{y\ge0}\{a(\pm,x,y)\}$ and the Lagrange theorem it follows that
$$a(\xi_t,x(t),y(t))=\big[a(\xi_t,x(t),y(t))-a(\xi_t,x(t),0)\big]+\big[a(\xi_t,x(t),0)-a(\xi_t,u(t),0)\big]+a(\xi_t,u(t),0)\ge-Jy(t)+\ell\big(u(t)-x(t)\big)+a(\xi_t,u(t),0).$$
Obviously, $\limsup_{t\to\infty}\frac1t\int_0^t a(\xi_s,x(s),y(s))\,ds=\limsup_{t\to\infty}\frac1t(\ln x(t)-\ln x(0))\le0$ and $\lim_{t\to\infty}\frac1t\int_0^t a(\xi_s,u(s),0)\,ds=\lim_{t\to\infty}\frac1t(\ln u(t)-\ln u(0))=0$ a.s., which imply
$$\liminf_{t\to\infty}\frac1t\int_0^t\Big[\frac{J}{\ell}\,y(s)-(u(s)-x(s))\Big]\,ds\ge0\ \text{a.s.}\qquad(3.3)$$
Combining (3.2) and (3.3) gives
$$\liminf_{t\to\infty}\frac1t\int_0^t y(s)\,ds\ge\delta_3:=\frac{\ell\lambda}{(\ell+J)J}\ \text{a.s.}$$
In the case $\lambda<0$ we have
$$\limsup_{t\to\infty}\frac1t\big(\ln y(t)-\ln y(0)\big)=\limsup_{t\to\infty}\frac1t\int_0^t b(\xi_s,x(s),y(s))\,ds\le\limsup_{t\to\infty}\frac1t\int_0^t b(\xi_s,x(s),0)\,ds\le\limsup_{t\to\infty}\frac1t\int_0^t b(\xi_s,u(s),0)\,ds=\lambda<0\ \text{a.s.}$$
Hence, with probability 1, $y(t)\to0$ as $t\to\infty$. The proof is complete.

Theorem 3.4. Assume that Assumptions 2.1, 2.2 and 3.1 are satisfied and $\lambda>0$. Then $(\xi_t,x(t),y(t))$ has a stationary distribution $\nu^*$ concentrated on $E\times\operatorname{int}\mathbb R^2_+$. In addition, if either the hypothesis of Theorem 3.2 or that of Theorem 3.3 is satisfied, then $\nu^*$ is the unique stationary distribution with density $f^*$ and the semigroup is asymptotically stable in the sense that
$$\lim_{t\to\infty}\|P(t)f-f^*\|=0\quad\forall f\in D.$$

Proof. Denote by $\mathbf 1_{\{\cdot\}}$ the indicator function. It follows from the relation
$$\frac1t\int_0^t y(s)\,ds=\frac1t\int_0^t y(s)\mathbf 1_{\{y(s)<\delta_3/2\}}\,ds+\frac1t\int_0^t y(s)\mathbf 1_{\{y(s)\ge\delta_3/2\}}\,ds\le\frac{\delta_3}{2}+\frac{M}{t}\int_0^t\mathbf 1_{\{y(s)\ge\delta_3/2\}}\,ds$$
and Proposition 1 that, with probability 1,
$$\liminf_{t\to\infty}\frac1t\int_0^t\mathbf 1_{\{y(s)\ge\delta_3/2\}}\,ds\ge\frac{\delta_3}{2M}.$$
Applying Fatou's lemma yields
$$\liminf_{t\to\infty}\frac1t\int_0^t\mathbb P\Big\{y(s)\ge\frac{\delta_3}{2}\Big\}\,ds\ge\frac{\delta_3}{2M}.$$
Using [19, Appendix], we see that there is a stationary distribution $\nu$ on $E\times\mathbb R^2_+$ satisfying $\nu(E\times[0,M]\times[\delta_3/2,M])>0$. Since the boundary $A=E\times\{(x,y):x=0,\ y\ge0\}$ is invariant under $P(t)$ and $\lim_{t\to\infty}y(t)=0$ if $x_0=0$, it follows that $\nu(E\times\{0\}\times[\delta_3/2,M])=0$. Thus, $\nu(E\times\operatorname{int}\mathbb R^2_+)\ge\nu(E\times(0,M]\times[\delta_3/2,M])>0$. By virtue of the invariance of $E\times\operatorname{int}\mathbb R^2_+$, the measure $\nu^*$ defined by $\nu^*(B)=\nu\big(B\cap(E\times\operatorname{int}\mathbb R^2_+)\big)/\nu(E\times\operatorname{int}\mathbb R^2_+)$ for any measurable $B\in\mathcal B(V)$ is a stationary distribution of the process $(\xi_t,x(t),y(t))$ on $E\times\operatorname{int}\mathbb R^2_+$. The rest of the proof is then a consequence of Theorem 3.2 or of Theorem 3.3.

4. Applications. In this section, we will apply the results in Sections 2 and 3 to the predator-prey model
with Beddington-DeAngelis functional response
$$\begin{cases}\dot x(t)=r(\xi_t)x\Big(1-\dfrac{x}{K(\xi_t)}\Big)-\dfrac{m(\xi_t)xy}{a(\xi_t)+b(\xi_t)y+c(\xi_t)x},\\[4pt]\dot y(t)=-\mu(\xi_t)y+\dfrac{\varepsilon(\xi_t)m(\xi_t)xy}{a(\xi_t)+b(\xi_t)y+c(\xi_t)x},\end{cases}\qquad(4.1)$$
where $r(\pm),K(\pm),m(\pm),a(\pm),b(\pm),c(\pm),\mu(\pm),\varepsilon(\pm)$ are positive constants. The noise $(\xi_t)$ intervenes in equation (4.1) by causing a switching between the two deterministic systems
$$\begin{cases}\dot x(t)=r(+)x\Big(1-\dfrac{x}{K(+)}\Big)-\dfrac{m(+)xy}{a(+)+b(+)y+c(+)x},\\[4pt]\dot y(t)=-\mu(+)y+\dfrac{\varepsilon(+)m(+)xy}{a(+)+b(+)y+c(+)x},\end{cases}\qquad(4.2)$$
and
$$\begin{cases}\dot x(t)=r(-)x\Big(1-\dfrac{x}{K(-)}\Big)-\dfrac{m(-)xy}{a(-)+b(-)y+c(-)x},\\[4pt]\dot y(t)=-\mu(-)y+\dfrac{\varepsilon(-)m(-)xy}{a(-)+b(-)y+c(-)x}.\end{cases}\qquad(4.3)$$
The deterministic predator-prey model with Beddington-DeAngelis functional response of the form (4.2) or (4.3) has been completely classified (see [11, 12]). Put
$$s(i)=\frac{m(i)\varepsilon(i)}{c(i)r(i)},\quad\delta(i)=\frac{m(i)}{b(i)r(i)},\quad d(i)=\frac{c(i)\mu(i)}{m(i)\varepsilon(i)},\quad A(i)=\frac{a(i)}{c(i)K(i)}.$$
It is shown in [11, 12] that if $d(i)\ge(1+A(i))^{-1}$, then $(K(i),0)$ is globally asymptotically stable, while if $d(i)<(1+A(i))^{-1}$, then $(K(i),0)$ is a saddle point and there is a unique positive equilibrium $(x_i^*,y_i^*)$. Moreover, if $\operatorname{tr}J(x_i^*,y_i^*)\le0$, then $(x_i^*,y_i^*)$ is globally asymptotically stable, while if $\operatorname{tr}J(x_i^*,y_i^*)>0$, there exists exactly one limit cycle, which is stable and attracts all positive solutions starting in $\operatorname{int}\mathbb R^2_+\setminus\{(x_i^*,y_i^*)\}$.

Let $M_1=\max\{K(+),K(-)\}+1$. Since $\frac{\varepsilon(\pm)m(\pm)M_1}{a(\pm)+b(\pm)y+c(\pm)M_1}\to0$ as $y\to+\infty$, there is an $M_2>0$ satisfying $\frac{\varepsilon(\pm)m(\pm)M_1}{a(\pm)+b(\pm)M_2+c(\pm)M_1}<\mu(\pm)$. We can easily check that $D=[0,M_1]\times[0,M_2]$ is a common invariant set of both systems (4.2) and (4.3). Moreover, $\dot x(t)<0$ whenever $x(t)\ge M_1$, and if $x(t)\le M_1$ and $y(t)\ge M_2$ then $\dot y(t)<0$. We therefore claim that all the positive solutions of system (4.1) ultimately enter $D$. For this reason, we suppose without loss of generality that $(x_0,y_0)\in D$. Suppose further that $\lambda>0$.

We first consider the case where the system (4.2) has a globally asymptotically stable equilibrium $(x_+^*,y_+^*)$. Note that the set of all
$(x,y)\in\operatorname{int}\mathbb R^2_+$ satisfying
$$\det\begin{pmatrix}r(+)\Big(1-\dfrac{x}{K(+)}\Big)-\dfrac{m(+)y}{a(+)+b(+)y+c(+)x} & -\mu(+)+\dfrac{\varepsilon(+)m(+)x}{a(+)+b(+)y+c(+)x}\\[6pt] r(-)\Big(1-\dfrac{x}{K(-)}\Big)-\dfrac{m(-)y}{a(-)+b(-)y+c(-)x} & -\mu(-)+\dfrac{\varepsilon(-)m(-)x}{a(-)+b(-)y+c(-)x}\end{pmatrix}=0\qquad(4.4)$$
is the root set of a polynomial in $x$ and $y$. Moreover, if $(x_+^*,y_+^*)\ne(x_-^*,y_-^*)$ (when $(x_-^*,y_-^*)$ exists), this polynomial is nontrivial. We can check that the root set of this polynomial does not contain the whole trajectory $\{\pi_t^-(x_+^*,y_+^*),\ t\ge0\}$. Consequently, there exists a $t_0>0$ such that the point $(x^0,y^0)=\pi_{t_0}^-(x_+^*,y_+^*)$ does not satisfy (4.4). As a result, the $\omega$-limit set of the system (4.1) is $S$, which is described in Theorem 2.9. Moreover, there is a unique stationary distribution with support $E\times S$. We give two examples to illustrate this case.

Example 4.1. $a(+)=3$, $a(-)=2$, $b(+)=2$, $b(-)=1$, $c(+)=1$, $c(-)=1$, $K(+)=6$, $K(-)=5$, $x(0)=1$, $y(0)=2$, $r(+)=1$, $r(-)=1$, $m(+)=2$, $m(-)=0.5$, $\mu(+)=1$, $\mu(-)=4$, $\varepsilon(+)=5$, $\varepsilon(-)=3$; $\alpha=0.6$, $\beta=1$. In this example, all positive solutions of the system (4.2) converge to the interior equilibrium $\big(\tfrac3{10}(1+\sqrt{21}),\ \tfrac3{20}(9\sqrt{21}-1)\big)$, while all positive solutions of the system (4.3) tend to $(K(-),0)=(5,0)$. A sample orbit of (4.1), which provides the image of the $\omega$-limit set, is given in Figure 4(A).

Figure 4. Phase portraits of the system (4.1) with different parameters; number of switchings $n=1000$.

Example 4.2. $a(+)=2$, $a(-)=2$, $b(+)=1$, $b(-)=3$, $c(+)=1$, $c(-)=1$, $K(+)=6$, $K(-)=5$, $x(0)=1$, $y(0)=2$, $r(+)=1$, $r(-)=1$, $m(+)=2$, $m(-)=2$, $\mu(+)=1$, $\mu(-)=4$, $\varepsilon(+)=5$, $\varepsilon(-)=8$; $\alpha=0.6$, $\beta=1$. For these coefficients, all positive solutions of the systems (4.2) and (4.3) converge to their respective positive equilibria $\big(\tfrac25(\sqrt{51}-6),\ \tfrac25(9\sqrt{51}-59)\big)$ and $\big(\tfrac1{12}(15+\sqrt{465}),\ \tfrac1{12}(7+\sqrt{465})\big)$. The dynamics of the solutions of (4.1) is shown in Figure 4(B).

In the case where $d(+)<(1+A(+))^{-1}$ and $\operatorname{tr}J(x_+^*,y_+^*)>0$, there is a unique limit cycle $\gamma_+$. Moreover, we suppose that the positive
equilibrium $(x_-^*,y_-^*)$ (if it exists) does not coincide with $(x_+^*,y_+^*)$ and that the limit cycle $\gamma_-$ (if it exists) does not coincide with $\gamma_+$ either. Using the same arguments as above, we can describe the $\omega$-limit set $S$ as in Theorem 2.13. With this hypothesis, a direct computation shows that the set of $z=(x,y)\in\mathbb R^2_+$ satisfying
$$\operatorname{rank}\{w^+(z)-w^-(z),\ [w^+(z),w^-(z)]\}<2$$
is the root set of a polynomial in the variables $x$ and $y$. From the fact that the positive equilibrium $(x_-^*,y_-^*)$ (if it exists) does not coincide with $(x_+^*,y_+^*)$ and that the limit cycle $\gamma_-$ (if it exists) does not coincide with $\gamma_+$, this polynomial is nontrivial. On the other hand, we note that, under the aforesaid hypothesis, $S$ has an open subset $V$. Obviously, there exists a $z_0\in V$ which is not a root of the polynomial. For such a $z_0$ we derive that $\operatorname{rank}\{w^+(z_0)-w^-(z_0),\ [w^+(z_0),w^-(z_0)]\}=2$. Combining Theorems 2.13 and 3.4, we claim that the process $(\xi_t,x(t),y(t))$ has a unique stationary distribution. We illustrate this case with the following example.

Example 4.3. $a(+)=0.03$, $a(-)=1.56$, $b(+)=0.25$, $b(-)=0.25$, $c(+)=0.25$, $c(-)=0.25$, $K(+)=4$, $K(-)=6$, $x(0)=1$, $y(0)=2$, $r(+)=1$, $r(-)=1$, $m(+)=0.375$, $m(-)=0.5$, $\mu(+)=0.25$, $\mu(-)=0.25$, $\varepsilon(+)=2/3$, $\varepsilon(-)=0.5$; $\alpha=0.2$, $\beta=0.5$. In this example, the system (4.2) has a unique periodic orbit, while all positive solutions of the system (4.3) tend to the equilibrium $(3.079,2.998)$.

Figure. Orbit of the system; number of switchings $n=1000$.

5. Conclusion. This paper deals with the dynamics of a Kolmogorov system of predator-prey type perturbed by telegraph noise, using a threshold $\lambda$ to determine whether the system becomes extinct or survives permanently. In the case $\lambda<0$ it is shown that $\lim_{t\to\infty}y(t)=0$ and the distribution of $(\xi_t,x(t))$ converges weakly to the stationary distribution $(p,q,\mu^+,\mu^-)$. On the other hand, if $\lambda>0$, then with a slight additional assumption we can fully describe the
$\omega$-limit set of the solutions starting in the first quadrant $\operatorname{int}\mathbb R^2_+$ and show that the $\omega$-limit set of every solution is deterministic as well as independent of the initial value. This limit set is also a forward invariant set that absorbs all positive trajectories. We also prove the existence of a unique stationary density $f^*$ with support in $E\times\operatorname{int}\mathbb R^2_+$ to which the distribution of $(\xi_t,x(t),y(t))$ converges in $L^1$. Our results can be applied to a variety of well-known predator-prey models such as the classical Lotka-Volterra model, models of Holling type, or models with ratio-dependent functional response. As an example, we considered a predator-prey model with Beddington-DeAngelis functional response under telegraph noise.

Acknowledgment. The authors are grateful to the anonymous referee for his/her many helpful comments and suggestions. They also would like to thank Professor Ian Morrison, from Fordham University, for his careful reading and valuable comments.

REFERENCES

[1] L. Arnold, Random Dynamical Systems, Springer-Verlag, Berlin, Heidelberg, New York, 1998.
[2] P. Auger, N. H. Du and N. T. Hieu, Evolution of Lotka-Volterra predator-prey systems under telegraph noise, Math. Biosci. Eng., 6 (2009), 683–700.
[3] A. D. Bazykin, Nonlinear Dynamics of Interacting Populations, World Scientific, Singapore, 1998.
[4] A. Bobrowski, T. Lipniacki, K. Pichór and R. Rudnicki, Asymptotic behavior of distributions of mRNA and protein levels in a model of stochastic gene expression, J. Math. Anal. Appl., 333 (2007), 753–769.
[5] Z. Brzeźniak, M. Capiński and F. Flandoli, Pathwise global attractors for stationary random dynamical systems, Probab. Theory Relat. Fields, 95 (1993), 87–102.
[6] N. H. Dang, N. H. Du and T. V. Ton, Asymptotic behavior of predator-prey systems perturbed by white noise, Acta Appl. Math., 115 (2011), 351–370.
[7] N. H. Du and N. H. Dang, Dynamics of Kolmogorov systems of competitive type under the telegraph noise, J. Differential Equations, 250 (2011),
386–409.
[8] N. H. Du, R. Kon, K. Sato and Y. Takeuchi, Dynamical behavior of Lotka-Volterra competition systems: Non autonomous bistable case and the effect of telegraph noise, J. Comput. Appl. Math., 170 (2004), 399–422.
[9] H. Crauel and F. Flandoli, Attractors for random dynamical systems, Probab. Theory Relat. Fields, 100 (1994), 365–393.
[10] I. I. Gihman and A. V. Skorohod, The Theory of Stochastic Processes, Springer-Verlag, Berlin, Heidelberg, New York, 1979.
[11] T. W. Hwang, Global analysis of the predator-prey system with Beddington-DeAngelis functional response, J. Math. Anal. Appl., 281 (2003), 395–401.
[12] T. W. Hwang, Uniqueness of limit cycles of the predator-prey system with Beddington-DeAngelis functional response, J. Math. Anal. Appl., 290 (2004), 113–122.
[13] C. Ji and D. Jiang, Dynamics of a stochastic density dependent predator-prey system with Beddington-DeAngelis functional response, J. Math. Anal. Appl., 381 (2011), 441–453.
[14] C. Ji, D. Jiang and N. Shi, Analysis of a predator-prey model with modified Leslie-Gower and Holling-type II schemes with stochastic perturbation, J. Math. Anal. Appl., 359 (2009), 482–498.
[15] Q. Luo and X. Mao, Stochastic population dynamics under regime switching, J. Math. Anal. Appl., 334 (2007), 69–84.
[16] Q. Luo and X. Mao, Stochastic population dynamics under regime switching II, J. Math. Anal. Appl., 355 (2009), 577–593.
[17] X. Mao, Attraction, stability and boundedness for stochastic differential delay equations, Nonlinear Analysis: Theory, Methods and Applications, 47 (2001), 4795–4806.
[18] X. Mao, S. Sabanis and E. Renshaw, Asymptotic behaviour of the stochastic Lotka-Volterra model, J. Math. Anal. Appl., 287 (2003), 141–156.
[19] L. Michael, Conservative Markov processes on a topological space, Israel J. Math., 8 (1970), 165–186.
[20] J. D. Murray, Mathematical Biology, Springer-Verlag, Berlin, Heidelberg, 2002.
[21] K. Pichór and R. Rudnicki, Continuous Markov semigroups and stability of transport equations, J. Math. Anal. Appl., 249 (2000), 668–685.
[22] R. Rudnicki, K.
Pichór and M. Tyran-Kamińska, Markov semigroups and their applications, in Dynamics of Dissipation (P. Garbaczewski and R. Olkiewicz, eds.), Lecture Notes in Physics, Vol. 587, Springer, Berlin, 2002, 215–238.
[23] Y. Takeuchi, N. H. Du, N. T. Hieu and K. Sato, Evolution of predator-prey systems described by a Lotka-Volterra equation under random environment, J. Math. Anal. Appl., 323 (2006), 938–957.
[24] C. Yuan and X. Mao, Attraction and stochastic asymptotic stability and boundedness of stochastic functional differential equations with respect to semimartingales, Stochastic Analysis and Applications, 24 (2006), 1169–1184.
[25] C. Zhu and G. Yin, On competitive Lotka-Volterra model in random environments, Journal of Mathematical Analysis and Applications, 357 (2009), 154–170.

Received October 2013; revised March 2014.

E-mail address: dunh@vnu.edu.vn
E-mail address: dangnh.maths@gmail.com
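For readers who wish to experiment with the switching dynamics studied in Section 4, the following sketch simulates system (4.1) with the coefficients of Example 4.3. It is our own illustration, not part of the paper: the names `PARAMS`, `bd_rhs` and `simulate` are invented, a plain Euler scheme stands in for a proper ODE solver, and we assume that the constants $\alpha$ and $\beta$ given in the examples are the jump rates of the telegraph noise $\xi_t$ out of the states $+$ and $-$ respectively.

```python
import math
import random

# Coefficients of Example 4.3; the keys +1/-1 stand for the noise states "+"/"-".
PARAMS = {
    +1: dict(r=1.0, K=4.0, m=0.375, a=0.03, b=0.25, c=0.25, mu=0.25, eps=2.0 / 3.0),
    -1: dict(r=1.0, K=6.0, m=0.5, a=1.56, b=0.25, c=0.25, mu=0.25, eps=0.5),
}

def bd_rhs(i, x, y):
    """Right-hand side of (4.1) with the noise frozen at state i."""
    p = PARAMS[i]
    denom = p["a"] + p["b"] * y + p["c"] * x
    dx = p["r"] * x * (1.0 - x / p["K"]) - p["m"] * x * y / denom
    dy = -p["mu"] * y + p["eps"] * p["m"] * x * y / denom
    return dx, dy

def simulate(i0, x0, y0, T, dt=1e-3, rates=None, seed=0):
    """Euler integration; if `rates` maps state -> jump rate, the state
    flips at exponentially distributed times (telegraph noise)."""
    rng = random.Random(seed)
    i, x, y, t = i0, x0, y0, 0.0
    next_switch = t + rng.expovariate(rates[i]) if rates else math.inf
    while t < T:
        if t >= next_switch:
            i = -i
            next_switch = t + rng.expovariate(rates[i])
        dx, dy = bd_rhs(i, x, y)
        x, y = x + dt * dx, y + dt * dy
        t += dt
    return x, y

# Interior equilibrium of the "-" system: the predator nullcline gives
# a + b*y + c*x = eps*m*x/mu, which for these coefficients reduces to
# y = 3x - 6.24, and the prey nullcline then yields x^2 + 3x - 18.72 = 0.
x_eq = (-3.0 + math.sqrt(83.88)) / 2.0  # ~3.079
y_eq = 3.0 * x_eq - 6.24                # ~2.998
```

Freezing the noise at the "$-$" state (leaving `rates=None` with `i0=-1`) shows trajectories approaching the interior equilibrium $(3.079, 2.998)$ of Example 4.3, while a switched run with `rates={+1: 0.2, -1: 0.5}` alternates between excursions toward the limit cycle of the "$+$" system and this equilibrium.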
