
Doctoral dissertation in applied mathematics: Solvability and stability of differential-algebraic equations with random noise.


Structure

  • 1.1.1. Basic notations of probability theory
  • 1.1.2. Martingales
  • 1.1.3. Stochastic integral
  • 1.1.4. Itô's formula
  • 1.2. Stochastic differential equations
    • 1.2.1. Definitions
    • 1.2.2. Unique existence and stability
  • 1.3. Stochastic difference equations
  • 1.4. Index concepts
    • 1.4.1. Implicit difference equations of index-1
    • 1.4.2. Stochastic differential algebraic equations of index-1
    • 1.4.3. The Drazin inverse and index-ν
  • 2.1. Stochastic differential-algebraic equations of index-ν
    • 2.1.1. Solvability of stochastic differential-algebraic equations
    • 2.1.2. Stability of stochastic differential-algebraic equations
  • 2.2. Stability radii of stochastic differential-algebraic equations with respect to stochastic perturbations
  • 2.3. Conclusion of Chapter 2
  • Chapter 3. Stochastic implicit difference equations
    • 3.1. Stochastic implicit difference equations of index-1
      • 3.1.1. Solution of stochastic implicit difference equations
      • 3.1.2. The variation of constants formula for stochastic implicit difference equations
    • 3.2. Stability of stochastic implicit difference equations of index-1
      • 3.2.1. Stability of stochastic implicit difference equations
      • 3.2.2. A comparison theorem for stability of linear stochastic implicit difference equations
    • 3.3. Stochastic implicit difference equations of index-ν
      • 3.3.1. Solvability of stochastic implicit difference equations of index-ν

Contents

Basic notations of probability theory

Probability theory deals with mathematical models of trials whose outcomes depend on chance. All the possible outcomes, the elementary events, are grouped together to form a set Ω with typical element ω ∈ Ω. Not every subset of Ω is in general an observable or interesting event, so we group only the observable or interesting events together as a family F of subsets of Ω. For the purposes of probability theory, such a family F should have the following properties:

a) ∅ ∈ F, where ∅ denotes the empty set;

b) A ∈ F ⇒ A^c ∈ F, where A^c = Ω \ A is the complement of A in Ω;

c) {A_i}_{i≥1} ⊂ F ⇒ ∪_{i≥1} A_i ∈ F.

A family F with these properties is called a σ-algebra.

A probability measure on a measurable space (Ω, F) is a function P : F → [0, 1] such that

a) P(Ω) = 1;

b) for any disjoint sequence {A_i}_{i≥1} ⊂ F (i.e., A_i ∩ A_j = ∅ if i ≠ j),

P(∪_{i≥1} A_i) = Σ_{i≥1} P(A_i).

Such a triple (Ω, F, P) is called a probability space. If (Ω, F, P) is a probability space, we set

F̄ = {A ⊂ Ω : ∃ B, C ∈ F such that B ⊂ A ⊂ C, P(B) = P(C)}.

Then F̄ is again a σ-algebra, called the completion of F. A probability space (Ω, F, P) is complete if F̄ = F. When F and F̄ are not equal, P can be extended to a probability measure P̄ on F̄ by setting P̄(A) = P(B) = P(C) for A ∈ F̄ as above. This results in a complete probability space (Ω, F̄, P̄), known as the completion of (Ω, F, P).

A filtration {F_t}_{t≥0} on a probability space (Ω, F, P) is a family of increasing sub-σ-algebras of F (i.e., F_s ⊂ F_t ⊂ F for 0 ≤ s < t). If F_t = ∩_{s>t} F_s for all t ≥ 0, the filtration is called right continuous. When the probability space is complete, the filtration is said to satisfy the usual conditions if it is right continuous and F_0 contains all P-null sets.

From now on, unless otherwise specified, we shall always work on a given complete probability space (Ω, F, P) with a filtration {F_t}_{t≥0} satisfying the usual conditions. We also define F_∞ = σ(∪_{t≥0} F_t), i.e., the σ-algebra generated by ∪_{t≥0} F_t.

A family {X_t}_{t∈I} of R^d-valued random variables is called a stochastic process with parameter set (or index set) I and state space R^d. The parameter set I is usually the half-line R_+ = [0, ∞), but it may also be an interval [a, b], the nonnegative integers, or even a subset of R^d. Note that for each fixed t ∈ I we have a random variable

Ω ∋ ω ↦ X_t(ω) ∈ R^d.

On the other hand, for each fixed ω ∈ Ω we have a function

I ∋ t ↦ X_t(ω) ∈ R^d,

which is called a sample path of the process, and we shall write X_·(ω) for the path. Sometimes it is convenient to write X(t, ω) instead of X_t(ω), and the stochastic process may be regarded as a function of two variables (t, ω) from I × Ω to R^d. Similarly, one can define matrix-valued stochastic processes, etc. We often write a stochastic process {X_t}_{t≥0} as {X_t}, X_t or X(t).

We introduce some further notions of the stochastic process.

Definition 1.1.1 ([6, 41]). Let {X_t}_{t≥0} be an R^d-valued stochastic process.

(i) It is said to be continuous (resp. right continuous, left continuous) if for almost all ω ∈ Ω the function X_t(ω) is continuous (resp. right continuous, left continuous) in t ≥ 0.

(ii) It is said to be càdlàg (right continuous with left limits) if it is right continuous and, for almost all ω ∈ Ω, the left limit lim_{s↑t} X_s(ω) exists and is finite for all t > 0.

(iii) It is said to be integrable if for every t ≥ 0, X_t is an integrable random variable. It is said to be {F_t}-adapted (or simply, adapted) if for every t, X_t is F_t-measurable.

(iv) It is said to be measurable if the stochastic process, regarded as a function of two variables (t, ω) from R_+ × Ω to R^d, is B(R_+) × F-measurable, where B(R_+) is the family of all Borel subsets of R_+.

(v) The stochastic process is said to be progressively measurable or progressive if for every T > 0, {X_t}_{0≤t≤T}, regarded as a function of (t, ω) from [0, T] × Ω to R^d, is B([0, T]) × F_T-measurable.

An {F_t}-adapted process {A_t}_{t≥0} is called an increasing process if for almost all ω ∈ Ω, A_t(ω) is nonnegative, nondecreasing and right continuous in t ≥ 0. It is called a process of finite variation if A_t = Ā_t − Â_t with {Ā_t} and {Â_t} both increasing processes.

Martingales

Definition 1.1.2 ([41]). A random variable τ : Ω → [0, ∞] is called an {F_t}-stopping time (or simply, stopping time) if {ω : τ(ω) ≤ t} ∈ F_t for any t ≥ 0.

Let τ and ρ be two stopping times with τ ≤ ρ a.s. We define the stochastic interval

[[τ, ρ]] = {(t, ω) ∈ R_+ × Ω : τ(ω) ≤ t ≤ ρ(ω)}.

For a stopping time τ, we set F_τ = {A ∈ F : A ∩ {τ ≤ t} ∈ F_t for all t ≥ 0}, which is a sub-σ-algebra of F. If τ and ρ are two stopping times with τ ≤ ρ a.s., then F_τ ⊂ F_ρ.

Definition 1.1.3 ([41]). An R^d-valued {F_t}-adapted integrable process {M_t}_{t≥0} is called a martingale with respect to {F_t} (or simply, a martingale) if

E(M_t | F_s) = M_s a.s. for all 0 ≤ s < t < ∞.

A real-valued {F_t}-adapted integrable process {M_t}_{t≥0} is called a supermartingale (with respect to {F_t}) if

E(M_t | F_s) ≤ M_s a.s. for all 0 ≤ s < t < ∞.

It is called a submartingale (with respect to {F_t}) if we replace the sign ≤ in the last formula with ≥. For example, if {M_t} is an R^d-valued martingale such that M_t ∈ L²(Ω; R^d), then {||M_t||²} is a nonnegative submartingale.

A square-integrable stochastic process X is one with E||X_t||² < ∞ for every t ≥ 0. For a real-valued square-integrable continuous martingale M, there exists a unique continuous integrable adapted increasing process {⟨M, M⟩_t} such that {M_t² − ⟨M, M⟩_t} is a continuous martingale vanishing at t = 0. The process {⟨M, M⟩_t} is referred to as the quadratic variation of M. Notably, for any finite stopping time τ, E(M_τ²) = E(⟨M, M⟩_τ).

If N = {N_t}_{t≥0} is another real-valued square-integrable continuous martingale, we define

⟨M, N⟩_t = (1/2)(⟨M + N, M + N⟩_t − ⟨M, M⟩_t − ⟨N, N⟩_t),

and call {⟨M, N⟩_t} the joint quadratic variation of M and N. It is useful to know that {⟨M, N⟩_t} is the unique continuous integrable adapted process of finite variation such that {M_t N_t − ⟨M, N⟩_t} is a continuous martingale vanishing at t = 0. In particular, for any finite stopping time τ, E(M_τ N_τ) = E(⟨M, N⟩_τ).

Definition 1.1.4 ([6, 41]). Let (Ω, F, P) be a probability space with a filtration {F_t}_{t≥0}. A (standard) one-dimensional Wiener process (also called Brownian motion) is a real-valued continuous {F_t}-adapted process {w_t}_{t≥0} with the following properties:

(i) w_0 = 0 a.s.;

(ii) for 0 ≤ s < t < ∞, the increment w_t − w_s is normally distributed with mean zero and variance t − s;

(iii) for 0 ≤ s < t < ∞, the increment w_t − w_s is independent of F_s.

A process {w_t = (w_t^1, …, w_t^d)}_{t≥0} is called a d-dimensional Wiener process if every {w_t^i} is a one-dimensional Wiener process and {w^1}, …, {w^d} are independent.

It is easy to see that a d-dimensional Wiener process is a d-dimensional continuous martingale with the joint quadratic variations

⟨w^i, w^j⟩_t = δ_ij t, 1 ≤ i, j ≤ d.

An Itô process is an R^n-valued continuous adapted process x(t) = (x_1(t), …, x_n(t))^T on t ≥ 0 of the form

x(t) = x(0) + ∫_0^t g(s) ds + ∫_0^t f(s) dw(s),

where g = (g_1, …, g_n)^T ∈ L^1(R_+; R^n) and f = (f_ij)_{n×m} ∈ L²(R_+; R^{n×m}). The stochastic differential of x(t) on t ≥ 0 is given by

dx(t) = g(t) dt + f(t) dw(t).
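A small Monte Carlo sanity check (not part of the thesis) of the joint quadratic variation identity ⟨w^i, w^j⟩_t = δ_ij t above; the path count, step size and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 1000, 2000
dt = T / n_steps

# Increments of a 2-dimensional Wiener process: each component is N(0, dt),
# and the two components are independent.
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, 2))

# Discrete joint quadratic variation sum_k dw^i_k dw^j_k over [0, T],
# averaged over paths.
qv = np.einsum('pki,pkj->pij', dw, dw).mean(axis=0)

print(qv)  # approximately [[T, 0], [0, T]], i.e. delta_ij * T
```

The diagonal entries concentrate at T and the off-diagonal entries at 0, matching δ_ij t at t = T.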

Let C^{2,1}(R_+ × R^d; R) denote the family of all real-valued functions V(t, x) defined on R_+ × R^d that are continuously twice differentiable in x and once in t. If V ∈ C^{2,1}(R_+ × R^d; R), we set

V_t = ∂V/∂t, V_x = ∂V/∂x = (∂V/∂x_1, …, ∂V/∂x_d), V_xx = (∂²V/∂x_i ∂x_j)_{d×d}.

Theorem 1.1.17 ([41]). Let x(t) be a d-dimensional Itô process on t ≥ 0 with the stochastic differential

dx(t) = g(t) dt + f(t) dw(t),

where g ∈ L^1(R_+; R^d) and f ∈ L²(R_+; R^{d×m}). Let V ∈ C^{2,1}(R_+ × R^d; R). Then V(t, x(t)) is again an Itô process with the stochastic differential given by

dV(t, x(t)) = [V_t(t, x(t)) + V_x(t, x(t)) g(t) + (1/2) Trace(f^T(t) V_xx(t, x(t)) f(t))] dt + V_x(t, x(t)) f(t) dw(t) a.s.

Now, we formally introduce the multiplication table: dt dt = 0, dw_i dt = 0, dw_i dw_i = dt, dw_i dw_j = 0 if i ≠ j.
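As a numerical illustration (not from the thesis), Itô's formula can be checked for V(t, x) = x² and the scalar Itô process dx = a x dt + b dw: the formula gives d(x²) = (2a x² + b²) dt + 2bx dw, so m(t) = E[x(t)²] solves the ODE m′ = 2a m + b². The parameters a, b, x_0, step size and sample size below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, x0, T = -1.0, 0.5, 1.0, 1.0
n_steps, n_paths = 1000, 20000
dt = T / n_steps

# Euler-Maruyama simulation of dx = a*x dt + b dw.
x = np.full(n_paths, x0)
for _ in range(n_steps):
    x = x + a * x * dt + b * rng.normal(0.0, np.sqrt(dt), n_paths)

# Ito's formula predicts m(t) = E[x(t)^2] with m' = 2a m + b^2, hence
# m(T) = (x0^2 + b^2/(2a)) * exp(2a T) - b^2/(2a).
m_exact = (x0**2 + b**2 / (2 * a)) * np.exp(2 * a * T) - b**2 / (2 * a)
m_mc = np.mean(x**2)
print(m_mc, m_exact)
```

The Monte Carlo estimate of E[x(T)²] agrees with the closed-form value predicted by Itô's formula up to sampling and discretization error.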

Stochastic differential equations

Throughout this subsection, we let w(t) = (w_1(t), …, w_m(t))^T, t ≥ 0, be an m-dimensional Wiener process defined on the space (Ω, F, P). Let 0 ≤ t_0 < T < ∞, and let g : [t_0, T] × R^d → R^d and f : [t_0, T] × R^d → R^{d×m} both be Borel measurable. Consider the d-dimensional stochastic differential equation of Itô type

dx(t) = g(t, x(t)) dt + f(t, x(t)) dw(t).

Assume that the map R : N × R^d → R^d in the stochastic difference equation (1.3.1) is measurable. Then, for any given initial value x(0) = x_0 ∈ R^d, equation (1.3.1) has a unique global solution, denoted by x(n; 0, x_0).

Assume furthermore that R(n, 0) = 0 for all n ∈ N, so that equation (1.3.1) has the solution x(n) ≡ 0 corresponding to the initial value x(0) = 0. This is called the trivial solution or the equilibrium position.

Definition 1.3.1 ([55]). The trivial solution of equation (1.3.1) with the initial condition (1.3.2) is called:

• mean square stable if for each ε > 0 there exists a δ > 0 such that E||x(n)||² < ε for all n ∈ N whenever E||x_0||² < δ;

• asymptotically mean square stable if it is mean square stable and, for every initial value with E||x_0||² < ∞, the solution x(n) satisfies lim_{n→∞} E||x(n)||² = 0.

Consider a function V : N × R^d → R_+ with V(n, 0) = 0. Putting V_n = V(n, x(n)), n ∈ N, the difference operator ΔV_n is defined by

ΔV_n = V_{n+1} − V_n = V(n + 1, x(n + 1)) − V(n, x(n)).

Theorem 1.3.2 ([55]). Assume that there exists a nonnegative function V_n = V(n, x(n)) which satisfies the conditions

E V(n, x(n)) ≤ c_1 E||x(n)||², n ∈ N, (1.3.4)

E ΔV_n ≤ −c_2 E||x(n)||², n ∈ N, (1.3.5)

where c_1, c_2 are positive constants. Then the trivial solution of equation (1.3.1) is asymptotically mean square stable.

Corollary 1.3.3 ([55]). Assume that there exists a nonnegative function V_n = V(n, x(n)) which satisfies condition (1.3.4) and condition (1.3.6). Then c > 0 is a necessary and sufficient condition for asymptotic mean square stability of the trivial solution of (1.3.1).

Example 1.3.4. Consider the following stochastic difference equation

x(n + 1) = (1/2) x(n) + (1/2) sin(x(n)) w_{n+1}, (1.3.7)

with the initial condition x(0) = x_0.

We define the Lyapunov function V_n = x²(n). Since E((1/2) x(n) sin(x(n)) w_{n+1}) = 0 and E((1/4) sin² x(n) w²_{n+1}) = (1/4) E sin² x(n), a simple calculation gives

E ΔV_n = (1/4) E x²(n) + (1/4) E sin² x(n) − E x²(n) ≤ −(1/2) E||x(n)||².

By Theorem 1.3.2, the trivial solution of this equation is asymptotically mean square stable.
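A quick simulation (not part of the thesis) of the recursion of Example 1.3.4; sample size, horizon and seed are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, x0 = 50000, 20, 1.0

x = np.full(n_paths, x0)
for n in range(n_steps):
    w = rng.normal(0.0, 1.0, n_paths)   # E w = 0, E w^2 = 1
    x = 0.5 * x + 0.5 * np.sin(x) * w

# The Lyapunov computation predicts E x(n+1)^2 <= (1/2) E x(n)^2,
# so E x(n)^2 should contract geometrically from x0^2 = 1.
print(np.mean(x**2))
```

The sample mean of x(n)² collapses toward 0, as predicted by asymptotic mean square stability.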

1.4.1 Implicit difference equations of index-1

Let (E_n, E_{n−1}, A_n) ∈ R^{d×d} × R^{d×d} × R^{d×d} be a triple of matrices. Suppose that rank E_n = rank E_{n−1} = r, and let T_n ∈ GL(R^d) be such that T_n|_{ker E_n} is an isomorphism between ker E_n and ker E_{n−1}; put E_{−1} = E_0. Such an operator T_n can be given in the following way: let Q_n (resp. Q_{n−1}) be a projector onto ker E_n (resp. onto ker E_{n−1}); find nonsingular matrices V_n and V_{n−1} such that Q_n = V_n Q^{(0)} V_n^{−1} and Q_{n−1} = V_{n−1} Q^{(0)} V_{n−1}^{−1}, where Q^{(0)} = diag(0, I_{d−r}); and finally obtain T_n by putting T_n = V_{n−1} V_n^{−1}.

Now, we introduce the subspace and matrices

S_n := {z ∈ R^d : A_n z ∈ Im E_n},

G_n := E_n − A_n T_n Q_n, P_n := I − Q_n,

Q̄_{n−1} := −T_n Q_n G_n^{−1} A_n, P̄_{n−1} := I − Q̄_{n−1}.

We have the following lemmas; see, e.g., [3, 4, 20].

Lemma 1.4.1 ([3]). The following assertions are equivalent:

a) S_n ∩ ker E_{n−1} = {0};

b) the matrix G_n = E_n − A_n T_n Q_n is nonsingular.

Lemma 1.4.2 ([3]). Suppose that the matrix G_n is nonsingular. Then the following relations hold:

i) P_n = G_n^{−1} E_n;

ii) −G_n^{−1} A_n T_n Q_n = Q_n;

iii) Q̄_{n−1} is the projector onto ker E_{n−1} along S_n;

iv) P_n G_n^{−1} A_n = P_n G_n^{−1} A_n P̄_{n−1} and Q_n G_n^{−1} A_n = Q_n G_n^{−1} A_n P̄_{n−1} − T_n^{−1} Q̄_{n−1};

v) T_n Q_n G_n^{−1} does not depend on the choice of T_n and Q_n.
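These relations are easy to check numerically. The sketch below (not from the thesis) builds a rank-deficient E and a random A in the constant-coefficient special case E_n ≡ E_{n−1} ≡ E with T_n = I, and verifies items i) to iii); all concrete matrices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 3, 2

# E with rank r < d, a generic A, and T_n = I.
B = rng.normal(size=(d, d))
C = rng.normal(size=(d, d))
E = B @ np.diag([1.0, 1.0, 0.0]) @ C
A = rng.normal(size=(d, d))

# Rank-one projector Q onto ker E: Q = v w^T with E v = 0 and w^T v = 1.
v = np.linalg.solve(C, np.array([0.0, 0.0, 1.0]))
w = v / (v @ v)
Q = np.outer(v, w)
P = np.eye(d) - Q

G = E - A @ Q                # G_n = E_n - A_n T_n Q_n with T_n = I
Ginv = np.linalg.inv(G)
Qbar = -Q @ Ginv @ A         # canonical projector onto ker E along S

print(np.allclose(Ginv @ E, P))                  # i)   P = G^{-1} E
print(np.allclose(-Ginv @ A @ Q, Q))             # ii)  -G^{-1} A T Q = Q
print(np.allclose(Qbar @ Qbar, Qbar))            # iii) Qbar is a projector
print(np.allclose(E @ Qbar, np.zeros((d, d))))   #      with Im Qbar in ker E
```

For a generic A the matrix G is nonsingular, and all four checks succeed at round-off level.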

We now consider the linear implicit difference equation

E_n x(n + 1) = A_n x(n) + q_n, n ∈ N, (1.4.1)

and the homogeneous system associated with (1.4.1),

E_n x(n + 1) = A_n x(n), n ∈ N, (1.4.2)

where E_n, A_n ∈ R^{d×d}, q_n ∈ R^d, and the matrix E_n may be singular. By virtue of Lemma 1.4.1, the index-1 concept for linear implicit difference equations is given in the following definition.

Definition 1.4.3 ([3]). The linear implicit difference equation (1.4.2) is said to be index-1 tractable (index-1 for short) if for all n ∈ N the following conditions hold:

If equation (1.4.2) is of index-1 and the initial condition x(0) = P̄_{−1} x_0 is consistent, then it possesses a unique solution given by an explicit formula. In the time-invariant case E_n ≡ E, A_n ≡ A, the index-1 property of (1.4.2) is equivalent to the pair (E, A) being transformable to Weierstraß-Kronecker canonical form, i.e., there exist nonsingular matrices W, U ∈ R^{d×d} such that the transformed pair takes the Weierstraß-Kronecker form (1.4.5).


Stochastic differential algebraic equations of index-1

In this subsection, we present the index-1 notion of [59] for stochastic differential-algebraic equations (SDAEs) of the form

Here g : [t_0, T] × R^n → R^n is a continuous vector-valued function of dimension n, f : [t_0, T] × R^n → R^{n×m} is a continuous n × m matrix-valued function, E is a constant singular matrix in R^{n×n} with rank E = r < n, and w denotes an m-dimensional Wiener process given on the probability space (Ω, F, P) with a filtration {F_t}_{t≥t_0}. For a mathematical treatment of (1.4.3), we understand it as a stochastic integral equation

E x(s)|_{t_0}^t + ∫_{t_0}^t g(s, x(s)) ds + ∫_{t_0}^t f(s, x(s)) dw(s) = 0, (1.4.4)

where the second integral is an Itô integral. We are interested in strong solutions, defined as follows. A solution x is a vector-valued stochastic process of dimension n that depends both on the time t and on an element ω of the probability space Ω.

The argument ω is omitted in the notations above. The unknown x(t) = x(t, ·) is a vector-valued random variable in L²(Ω, F, P), and the identity in (1.4.4) means identity for all t and almost surely in ω.

Definition 1.4.4 ([59]). A strong solution of (1.4.4) is a process x(·) = {x(t)}_{t∈[t_0,T]} with continuous sample paths that fulfills the following conditions:

• x(·) is adapted to the filtration {F_t}_{t∈[t_0,T]};

• ∫_{t_0}^t |g_i(s, x(s))| ds < ∞ a.s. for all i = 1, …, n and all t ∈ [t_0, T];

• ∫_{t_0}^t f_{ij}(s, x(s))² ds < ∞ a.s. for all i = 1, …, n, j = 1, …, m, and all t ∈ [t_0, T];

• (1.4.4) holds a.s.

Let R be a projector along Im E.

Definition 1.4.5 ([59]). Equation (1.4.3) is called tractable with index-1 (or, for short, of index-1) if E + R g′_x(t, x) is nonsingular and R f = 0.

The Drazin inverse and index-ν

In this subsection we introduce the basic definitions and properties of the Drazin inverse. For details we refer to [35].

Definition 1.4.6 ([35]). A matrix pair (E, A), E, A ∈ K^{n×n}, is called regular if there exists s ∈ C such that det(sE − A) ≠ 0. Otherwise, if det(sE − A) = 0 for all s ∈ C, we say that (E, A) is singular.

If (E, A) is regular, then a complex number λ is called a (generalized finite) eigenvalue of (E, A) if det(λE − A) = 0. The set of all (finite) eigenvalues of (E, A) is called the (finite) spectrum of the pencil (E, A) and is denoted by σ(E, A).

If E is singular and the pair is regular, then we say that (E, A) has the eigenvalue ∞.

Regular pairs (E, A) can be transformed to Weierstraß-Kronecker canonical form, see [11, 35, 36], i.e., there exist nonsingular matrices W, T ∈ K^{n×n} such that

E = W diag(I_r, N) T, A = W diag(J, I_{n−r}) T, (1.4.5)

where I_r, I_{n−r} are identity matrices of the indicated sizes, J ∈ K^{r×r} and N ∈ K^{(n−r)×(n−r)} are matrices in Jordan canonical form, and N is nilpotent. If E is invertible, then r = n, i.e., the second diagonal block does not occur.

Definition 1.4.7 ([59]). Consider a regular pair (E, A) with E, A ∈ K^{n×n} in Weierstraß-Kronecker form (1.4.5). If r < n, the nilpotency index ν of N is called the index of the pair (E, A), denoted Ind(E, A) = ν; if r = n, we set Ind(E, A) = 0.

a.s. for all t ≥ t_0, (2.1.7), and the consistent condition on g and x_0 for the solvability of (2.1.1) is
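Numerically, the Drazin inverse E^D used throughout this chapter can be computed without the canonical form. The sketch below (not from the thesis) relies on the known identity A^D = A^k (A^{2k+1})^+ A^k, valid for any k ≥ ind(A), where ^+ is the Moore-Penrose pseudoinverse; taking k = n is always large enough. The concrete test matrix is an illustrative assumption:

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^k (A^{2k+1})^+ A^k with k = n >= ind(A)."""
    n = A.shape[0]
    Ak = np.linalg.matrix_power(A, n)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Ak

# Example: A is idempotent (A^2 = A, so ind(A) = 1), and then A^D = A.
A = np.array([[1.0, 1.0], [0.0, 0.0]])
AD = drazin(A)

# The three defining properties of the Drazin inverse:
print(np.allclose(AD @ A @ AD, AD))   # A^D A A^D = A^D
print(np.allclose(A @ AD, AD @ A))    # A A^D = A^D A
print(np.allclose(A @ A @ AD, A))     # A^{k+1} A^D = A^k with k = ind(A) = 1
```

For this idempotent example the routine returns A^D = A, and the three Drazin axioms hold at round-off level.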

Proof. By using Theorem 1.4.10, we obtain

A N = A E (I − E^D E) = E (I − E^D E) A = N A,

N (I − E^D E) = E (I − E^D E)(I − E^D E) = E (I − E^D E) = N. (2.1.8)

Since N is a nilpotent matrix of degree ν, applying N^{ν−1} to (2.1.3b) gives

0 = N^{ν−1} (A x_2(t) dt + (I − E^D E) g(t) dt + (I − E^D E) f(t, x(t)) dw(t)),

that is, using (2.1.8),

∫_{t_0}^t (N^{ν−1} A x_2(s) + N^{ν−1} g(s)) ds + ∫_{t_0}^t N^{ν−1} f(s, x(s)) dw(s) = 0.

Define the stochastic process

X(t) = ∫_{t_0}^t (N^{ν−1} A x_2(s) + N^{ν−1} g(s)) ds = −∫_{t_0}^t N^{ν−1} f(s, x(s)) dw(s).

Taking the quadratic variation of X, we obtain

⟨X, X⟩_t = ∫_{t_0}^t ||N^{ν−1} f(s, x(s))||² ds = 0.

This implies that N^{ν−1} f(t, x(t)) = 0 a.s. for all t ≥ t_0, and hence

N^{ν−1} A x_2(t) + N^{ν−1} g(t) = 0, a.s. ∀ t ≥ t_0.

Multiplying this equation by (I − E^D E) A^D, we obtain

(I − E^D E) A^D A N^{ν−1} x_2(t) + (I − E^D E) A^D N^{ν−1} g(t) = 0, a.s. for all t ≥ t_0.

By (1.4.14), (2.1.2) and (2.1.8) we get N^{ν−1} x_2(t) + A^D N^{ν−1} g(t) = 0, and therefore

N^{ν−1} dx_2(t) + A^D N^{ν−1} g′(t) dt = 0, a.s. ∀ t ≥ t_0.

Next, applying N^{ν−2} to (2.1.3b) and using the previous step, we obtain

0 = N^{ν−2} (A x_2(t) dt + (I − E^D E) g(t) dt + (I − E^D E) f(t, x(t)) dw(t))

= (N^{ν−2} A x_2(t) + N^{ν−2} g(t) + A^D N^{ν−1} g′(t)) dt + N^{ν−2} f(t, x(t)) dw(t).

By taking the quadratic variation, similarly to the above argument, we obtain N^{ν−2} f(t, x(t)) = 0 and

A N^{ν−2} x_2(t) + N^{ν−2} g(t) + A^D N^{ν−1} g′(t) = 0, a.s. ∀ t ≥ t_0.

Multiplying this equation by (I − E^D E) A^D again yields an expression for N^{ν−2} x_2(t), which we differentiate as before. Applying this procedure repeatedly yields

N x_2(t) + A^D N g(t) + (A^D N)² g′(t) + ⋯ + (A^D N)^{ν−1} g^{(ν−2)}(t) = 0,

and hence

N dx_2(t) + A^D N g′(t) dt + ⋯ + (A^D N)^{ν−1} g^{(ν−1)}(t) dt = 0,

or equivalently, combining with (2.1.3b),

0 = (A x_2(t) + (I − E^D E) g(t) + A^D N g′(t) + ⋯ + (A^D N)^{ν−1} g^{(ν−1)}(t)) dt + (I − E^D E) f(t, x(t)) dw(t).

By taking the quadratic variation once more, it follows that (I − E^D E) f(t, x(t)) = 0 and

A x_2(t) + (I − E^D E) g(t) + A^D N g′(t) + ⋯ + (A^D N)^{ν−1} g^{(ν−1)}(t) = 0, a.s. for all t ≥ t_0.

Similarly, multiplying this equation by (I − E^D E) A^D, we obtain

(I − E^D E) A^D (A x_2(t) + (I − E^D E) g(t) + A^D N g′(t) + (A^D N)² g″(t) + ⋯ + (A^D N)^{ν−1} g^{(ν−1)}(t)) = 0.

By (1.4.14) and x_2(t) = (I − E^D E) x(t), this implies

x_2(t) + (I − E^D E)(A^D g(t) + A^D (A^D N) g′(t) + A^D (A^D N)² g″(t) + ⋯ + A^D (A^D N)^{ν−1} g^{(ν−1)}(t)) = 0,

or equivalently,

x_2(t) = −(I − E^D E) Σ_{i=0}^{ν−1} A^D (A^D N)^i g^{(i)}(t),

a.s. for all t ≥ t_0. For t = t_0, x_2(t_0) = (I − E^D E) x_0 and we obtain the consistent condition on g and x_0:

(I − E^D E) x_0 = −(I − E^D E) Σ_{i=0}^{ν−1} A^D (A^D N)^i g^{(i)}(t_0).

Finally, since f(t, x) is Lipschitz continuous in x, E^D f(t, x + x_2(t)) is Lipschitz continuous in x as well, and there exists a constant K > 0 such that

||E^D f(t, x_2(t))|| ≤ ||E^D f(t, 0)|| + K ||E^D|| ||x_2(t)||, ∀ t ∈ [t_0, T].

This implies that E^D f(t, x_2(t)) is square integrable on [t_0, T]. Thus, by Remark 2.1.2, equation (2.1.5) has a unique solution x_1(t) on [t_0, T], and hence x(t) = x_1(t) + x_2(t) solves (2.1.1). The proof is complete.

Motivated by the consistent condition on the perturbation f, we now introduce the following notion.

Definition 2.1.6. The SDAE (2.1.1) is called tractable with index-ν (or, for short, of index-ν) if

i) Ind(E, A) = ν;

ii) (I − E^D E) f = 0.

Remark 2.1.7. In the case ν = 1, the condition (I − E^D E) f = 0 is equivalent to Im f ⊂ Im E, and we recover the notion of index-1 in [14, 52, 53, 59]. The natural restriction ii) is the so-called condition that the noise sources do not appear in the constraints, or equivalently, a requirement that the constraint part of the solution process is not directly affected by random noise.

Theorem 2.1.8. Assume that the SDAE (2.1.1) has index-ν and satisfies (2.1.2). Then the solution of equation (2.1.1) is given by the formula

x(t) = e^{E^D A (t−t_0)} E^D E x_0 − (I − E^D E) Σ_{i=0}^{ν−1} A^D (A^D N)^i g^{(i)}(t)

+ ∫_{t_0}^t e^{E^D A (t−s)} E^D g(s) ds + ∫_{t_0}^t e^{E^D A (t−s)} E^D f(s, x(s)) dw(s). (2.1.9)

Proof. We have the decomposition x(t) = x_1(t) + x_2(t) for all t ≥ t_0. Since x_1(t) solves equation (2.1.5), by the variation of constants formula (see, e.g., [41]) we get

x_1(t) = e^{E^D A (t−t_0)} E^D E x_0 + ∫_{t_0}^t e^{E^D A (t−s)} E^D g(s) ds + ∫_{t_0}^t e^{E^D A (t−s)} E^D f(s, x(s)) dw(s).

Since the SDAE (2.1.1) has index-ν and satisfies (2.1.2), by Proposition 2.1.5 we have

x_2(t) = −(I − E^D E) Σ_{i=0}^{ν−1} A^D (A^D N)^i g^{(i)}(t).

Thus we obtain (2.1.9). The proof is complete.

In the general case, E and A may not commute. Since (E, A) is regular, there exists λ_0 ∈ C with det(λ_0 E − A) ≠ 0. Put

Ẽ = (λ_0 E − A)^{−1} E, Ã = (λ_0 E − A)^{−1} A.

Then it is easy to see that Ẽ Ã = Ã Ẽ and equation (2.1.1) is equivalent to

Ẽ dx(t) = Ã x(t) dt + g̃(t) dt + f̃(t, x(t)) dw(t), (2.1.10)

where g̃ = (λ_0 E − A)^{−1} g and f̃ = (λ_0 E − A)^{−1} f.

Thus, by applying Proposition 2.1.5 and Theorem 2.1.8 for equation (2.1.10), we obtain the consistent condition of the perturbation and the formula of solution for the SDAEs (2.1.1).

In what follows, without loss of generality, we will assume that E and A commute.
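This commuting transformation is easy to check numerically. The sketch below (not from the thesis) uses an arbitrary regular pair and the assumed standard choice Ẽ = (λ_0 E − A)^{−1} E, Ã = (λ_0 E − A)^{−1} A; the concrete matrices and λ_0 are illustrative:

```python
import numpy as np

# A regular pair (E, A) that does not commute.
E = np.array([[1.0, 1.0], [0.0, 0.0]])
A = np.array([[0.0, 1.0], [1.0, 0.0]])

lam0 = 2.0                         # det(lam0*E - A) != 0 for this choice
M = np.linalg.inv(lam0 * E - A)
Et, At = M @ E, M @ A              # scaled pair

print(np.allclose(E @ A, A @ E))      # False: original pair does not commute
print(np.allclose(Et @ At, At @ Et))  # True: the transformed pair commutes
```

Since the equation is merely multiplied on the left by the nonsingular matrix (λ_0 E − A)^{−1}, the solution set is unchanged while the coefficient pair becomes commuting.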

Stability of stochastic differential-algebraic equations

In this subsection, we study the L²-stability and the exponential L²-stability for SDAEs by using the method of Lyapunov functions. For defining the stability of the zero solution, in equation (2.1.1) we assume that g(t) = 0 and f(t, 0) = 0 for all t ≥ t_0. Moreover, assume that there exist a_1, a_2 > 0 and a function γ(t) such that

||f(t, x)|| ≤ γ(t) ||x|| and ∫_{t_0}^t γ²(s) ds ≤ a_1 (t − t_0) + a_2

for all t ≥ t_0. Let us consider the equation

E dx(t) = A x(t) dt + f(t, x(t)) dw(t), x(t_0) = x_0, (2.1.11)

where E, A ∈ K^{n×n} are constant matrices and w(t) is an m-dimensional Wiener process. For the solvability of equation (2.1.11), by Proposition 2.1.5, the initial condition x_0 needs to satisfy the consistent condition (I − E^D E) x_0 = x_2(t_0) = 0, or equivalently, x_0 ∈ Im(E^D E).

Now, we define L²-stability and exponential L²-stability for equation (2.1.11).

Definition 2.1.9. Equation (2.1.11) is said to be L²-stable if

∫_{t_0}^∞ E(||x(t, t_0, x_0)||²) dt < ∞ for all x_0 ∈ Im(E^D E).

Equation (2.1.11) is said to be exponentially L²-stable if there exist α, β > 0 such that

E||x(t, t_0, x_0)||² ≤ β e^{−α(t−t_0)} ||x_0||², (2.1.12)

for all t ≥ t_0 ≥ 0 and x_0 ∈ Im(E^D E).

A necessary and sufficient condition for exponential L²-stability of the SDAE (2.1.11) is given in the following theorem.

Theorem 2.1.10. Assume that the SDAE (2.1.11) has index-ν. Then equation (2.1.11) is exponentially L²-stable if and only if there exists η > 0 such that

∫_{t_0}^∞ E(||x(t, t_0, x_0)||²) dt ≤ η ||x_0||², (2.1.13)

for all t_0 ≥ 0 and x_0 ∈ Im(E^D E).

Proof. Necessity: Suppose that (2.1.11) is exponentially L²-stable, i.e., inequality (2.1.12) holds with some constants α, β > 0. Then we obtain

∫_{t_0}^∞ E(||x(t, t_0, x_0)||²) dt ≤ ∫_{t_0}^∞ β e^{−α(t−t_0)} ||x_0||² dt = (β/α) ||x_0||².

Sufficiency: Suppose that (2.1.13) holds. By Lemma 2.1.3 and Proposition 2.1.5, x(t) satisfies the equation

dx(t) = E^D A x(t) dt + E^D f(t, x(t)) dw(t) (2.1.14)

with the constraint (I − E^D E) x(t) = x_2(t) = 0. Let L be the parabolic differential operator associated with system (2.1.14), i.e.,

(Lv)(t, x) = v_t(t, x) + v_x(t, x) E^D A x + (1/2) Trace((E^D f(t, x))^T v_xx(t, x) (E^D f(t, x)))

for all t ≥ t_0, x ∈ Im(E^D E). Taking v(t, x) = ||x||², x = x(t, t_0, x_0), x_0 ∈ Im(E^D E), we easily get

|(Lv)(t, x)| = |(E^D A x)^T 2x + (1/2) Trace((E^D f(t, x))^T 2I (E^D f(t, x)))|

≤ 2|(E^D A x)^T x| + ||E^D||² ||f(t, x)||²

≤ 2||E^D A|| ||x||² + ||E^D||² γ²(t) ||x||²

= (2||E^D A|| + γ²(t) ||E^D||²) ||x||² = a(t) ||x||²,

where a(t) = 2||E^D A|| + γ²(t) ||E^D||², with ||E^D||² = Trace(E^{D*} E^D) and ||f(t, x)||² = Trace(f(t, x)^* f(t, x)). Hence, (Lv)(t, x) ≥ −a(t) ||x||² for all t ≥ t_0, x ∈ Im(E^D E).

By using Itô's formula (see, e.g., [6, 41]), we have

E||x(t, t_0, x_0)||² − ||x_0||² = E ∫_{t_0}^t (Lv)(s, x(s, t_0, x_0)) ds ≥ −∫_{t_0}^t a(s) E||x(s, t_0, x_0)||² ds,

for all t ≥ t_0, x_0 ∈ Im(E^D E). This is equivalent to

(d/dt)(e^{∫_{t_0}^t a(s) ds} E||x(t, t_0, x_0)||²) ≥ 0.

Therefore e^{∫_{t_0}^t a(s) ds} E||x(t, t_0, x_0)||² ≥ E||x(t_0, t_0, x_0)||² = ||x_0||², or equivalently,

E||x(t, t_0, x_0)||² ≥ e^{−∫_{t_0}^t a(s) ds} ||x_0||²,

for all t_0 ≥ 0 and x_0 ∈ Im(E^D E).

By the assumption on γ(t), this implies that

E||x(t, t_0, x_0)||² ≥ M e^{−β(t−t_0)} ||x_0||², (2.1.15)

where M = e^{−a_2 ||E^D||²} and β = 2||E^D A|| + a_1 ||E^D||².

We define V : R_+ × Im(E^D E) → R_+ by V(t, y) = ∫_t^∞ E||x(s, t, y)||² ds. It is obvious that V(t, y) ≤ η ||y||² for t ≥ t_0, y ∈ Im(E^D E). By (2.1.15), V(t, y) ≥ (M/β) ||y||² for t ≥ t_0, y ∈ Im(E^D E).

Now, for a random variable ξ : Ω → Im(E^D E), define V̄(t, ξ) = E V(t, ξ). Then, for y ∈ Im(E^D E), we have (M/β) ||y||² ≤ V(t, y) = V̄(t, y) ≤ η ||y||². Moreover, V̄(t, x(t, t_0, x_0)) = ∫_t^∞ E||x(s, t_0, x_0)||² ds.

Taking expectations in the two-sided bound at ξ = x(t, t_0, x_0), we get

(M/β) E||x(t, t_0, x_0)||² ≤ V̄(t, x(t, t_0, x_0)) ≤ η E||x(t, t_0, x_0)||².

Consequently,

(d/dt) V̄(t, x(t, t_0, x_0)) = −E||x(t, t_0, x_0)||² ≤ −(1/η) V̄(t, x(t, t_0, x_0)), t ≥ t_0,

which is equivalent to (d/dt)(e^{(t−t_0)/η} V̄(t, x(t, t_0, x_0))) ≤ 0. Hence V̄(t, x(t, t_0, x_0)) ≤ e^{−(t−t_0)/η} V̄(t_0, x_0) ≤ η e^{−(t−t_0)/η} ||x_0||². Combining this with the lower bound above yields

E||x(t, t_0, x_0)||² ≤ (βη/M) e^{−(t−t_0)/η} ||x_0||²,

for all t ≥ t_0 ≥ 0 and x_0 ∈ Im(E^D E), i.e., equation (2.1.11) is exponentially L²-stable. The proof is complete.

Stability radii of stochastic differential-algebraic equations with respect to stochastic perturbations

In this section, we develop the approach of [8, 9] to investigate the robust stability of DAEs subject to stochastic perturbations. Consider the regular SDAE

E dx(t) = A x(t) dt + C Δ(B x(t)) dw(t), x(t_0) = x_0, (2.2.1)

where E, A ∈ K^{n×n} are constant matrices, B ∈ K^{q×n} and C ∈ K^{n×l} are structure matrices of the perturbation; w(t), t ≥ t_0 ≥ 0, is an m-dimensional Wiener process; x_0 is independent of w(t), t ≥ t_0 ≥ 0; and the disturbance operator Δ : K^q → K^l is Lipschitz continuous with Δ(0) = 0. The Lipschitz norm of Δ is defined by

||Δ|| = sup_{y ≠ y′} ||Δ(y) − Δ(y′)|| / ||y − y′||.

The equation

E dx(t) = A x(t) dt (2.2.2)

is called the deterministic part of (2.2.1). Assume that σ(E, A) ⊂ C_−, or equivalently, that equation (2.2.2) is exponentially stable (see, e.g., [18]).

It is already known for the case of perturbed DAEs (see, e.g., [19, 42, 57]) that it is necessary to restrict the perturbations in order to get a meaningful concept of the structured stability radius, since a DAE system may lose its regularity, solvability and/or stability under infinitesimal perturbations. We therefore introduce the class of allowable stochastic perturbations satisfying the consistency condition (2.2.3).

Definition 2.2.1. Assume that condition (2.2.3) holds. Then the L²-stability radius and the exponential L²-stability radius of the exponentially stable equation (2.2.2) with respect to stochastic perturbations of the form (2.2.1) are defined by

r_{L²}(E, A; C, B) = inf{||Δ|| : (2.2.1) is not L²-stable},

r_exp(E, A; C, B) = inf{||Δ|| : (2.2.1) is not exponentially L²-stable}.

By Theorem 2.1.8 and the consistent initial condition E^D E x_0 = x_0, the solution of (2.2.1) satisfies the equation

x(t) = e^{E^D A (t−t_0)} x_0 + ∫_{t_0}^t e^{E^D A (t−s)} E^D C Δ(B x(s)) dw(s). (2.2.4)

Define the spaces

V = L²[[t_0, ∞), L²(Ω, K^{l×m})], H_0 = L²[[t_0, ∞), L²(Ω, K^n)],

and let H denote the corresponding space for the output y(·) = B x(·). The spaces V, H_0, H are equipped with the inner product ⟨·, ·⟩ given by

⟨u(·), v(·)⟩ = ∫_{t_0}^∞ ⟨u(t), v(t)⟩ dt = ∫_{t_0}^∞ E Trace(u(t)^* v(t)) dt.

With this inner product, V, H_0, H become Hilbert spaces. We now define the operators M : V → H_0 by

(Mv)(t) = ∫_{t_0}^t e^{E^D A (t−s)} E^D C v(s) dw(s), (2.2.5)

and L : V → H by

(Lv)(t) = B (Mv)(t). (2.2.6)

Using the Weierstraß-Kronecker canonical form for the commuting matrix pair, we have

e^{E^D A (t−s)} E^D = T^{−1} diag(e^{J(t−s)}, 0) T,

where J ∈ K^{r×r} with σ(J) = σ(E, A) ⊂ C_−. Therefore there exist K, α > 0 such that

||e^{E^D A (t−s)} E^D|| ≤ K e^{−α(t−s)}.

This implies that the operators M and L are bounded. Now, we derive an upper bound for the perturbation such that equation (2.2.1) preserves exponential stability.

Theorem 2.2.2. Assume that (2.2.3) holds and ||Δ|| < ||L||^{−1}, where L is defined by (2.2.6). Then equation (2.2.1) is exponentially L²-stable.

Proof. Multiplying (2.2.4) by B gives

B x(t) = B e^{E^D A (t−t_0)} x_0 + ∫_{t_0}^t B e^{E^D A (t−s)} E^D C Δ(B x(s)) dw(s). (2.2.7)

Let y(t) = B x(t) and y_0(t) = B e^{E^D A (t−t_0)} x_0; then (2.2.7) can be rewritten as

y(·) = y_0(·) + L(Δ(y(·))). (2.2.8)

For y(·), ȳ(·) ∈ H,

||L(Δ(y(·))) − L(Δ(ȳ(·)))||_H ≤ ||L|| ||Δ(y(·)) − Δ(ȳ(·))||_V ≤ ||L|| ||Δ|| ||y(·) − ȳ(·)||_H.

Since ||L|| ||Δ|| < 1, (2.2.8) has a unique solution y(·) in H by the contraction principle.

On the other hand, llu(-) — vol) Ihe = JIL(A0)))ÌÌu

< IL(AW)O) = L(A) Ola + IEA (0) ) Ive

< EMAC) = Ao) Oly + HEMAWM Oly

< LMA ly) — o()llz + WENA MI yo) Ile.

This implies that ||y(-) — yo(-)|la < eae and

Ilu(-)| S |lu(-) — yo) Ila + Ilyo(-) Ile

LIA T| 1 Io(-)l[w + llyo() lac = yo(-)|

1— |IL| | AI | 0 li | 0 ln 1— LI) AI I o( ln l * EP A(t—-to) wD 2 "2

UX Ị EP A(t-to) mD we

Let ε > 0. By Proposition 2.2.4, we obtain r_exp(E, A; C, B) − ε ≤ ||L||^{−1}; letting ε → 0 yields r_exp(E, A; C, B) ≤ ||L||^{−1}. The proof is complete.

Remark 2.2.6. The above theorem shows that the L²-stability radius equals the exponential L²-stability radius. In the case E = I, by letting w = (w_i), where each w_i is a one-dimensional Wiener process, we obtain the formulas for the stability radii in [8, 9, 44].

Example 2.2.7. Consider a DAE subject to stochastic perturbation:

E dX = A X dt + C Δ(B X) dw, (2.2.13)

where E, A, C, B are given constant matrices.

Because EA ≠ AE, we pass to a commuting pair (E_1, A_1) as described above; equation (2.2.13) is then equivalent to (2.2.14). Since σ(E_1, A_1) = {−2}, the homogeneous equation of (2.2.14) is exponentially stable. By direct computation of E_1^D and the associated quantities, Theorem 2.2.5 gives

r_{L²}(E, A; C, B) = r_exp(E, A; C, B) = r_{L²}(E_1, A_1; C_1, B) = r_exp(E_1, A_1; C_1, B) = √2.

Now, we construct two perturbations Δ_1 and Δ_2 with Lipschitz norms ρ_1 = 1.5392 and ρ_2 = 1.412. Then it is easy to see that ||Δ_1|| = ρ_1 > ||L||^{−1} > ||Δ_2|| = ρ_2. With the perturbation Δ_1, equation (2.2.1) is exponentially L²-unstable, see Figure 2.1. With the perturbation Δ_2, equation (2.2.1) is exponentially L²-stable, see Figure 2.2.

Chapter 3. Stochastic implicit difference equations

Stochastic implicit difference equations of index-1

Let us consider the stochastic implicit difference equations (SIDEs)

E_n x(n+1) = A_n x(n) + R(n, x(n)) w_{n+1}, n ∈ N, (3.1.1)

with the initial condition x(0) = P̄_{−1} x_0, where E_n, A_n ∈ R^{d×d} with rank E_n = r < d, the function R : N × R^d → R^d is measurable, and {w_n} is a sequence of mutually independent F_n-adapted random variables, independent of F_k for k < n, satisfying E w_n = 0 and E w_n² = 1 for all n ∈ N. The homogeneous equation associated with (3.1.1) is

E_n x(n+1) = A_n x(n), n ∈ N. (3.1.2)

Now, we give a rigorous definition of solution of (3.1.1).

3.1.1 Solution of stochastic implicit difference equations

Definition 3.1.1. A stochastic process {x(n)} is said to be a solution of the SIDE (3.1.1) if, with probability 1, x(n) satisfies (3.1.1) for all n ∈ N and x(n) is F_n-measurable.

Assume that the homogeneous equation (3.1.2) has index-1. Multiplying both sides of equation (3.1.1) by Q_n G_n^{−1}, we obtain

Q_n G_n^{−1} E_n x(n+1) = Q_n G_n^{−1} A_n x(n) + Q_n G_n^{−1} R(n, x(n)) w_{n+1}.

Since G_n^{−1} E_n = P_n, we have Q_n G_n^{−1} E_n = Q_n P_n = 0. It follows that

0 = Q_n G_n^{−1} A_n x(n) + Q_n G_n^{−1} R(n, x(n)) w_{n+1}.

If Q_n G_n^{−1} R(n, x(n)) ≠ 0, then, since Q_n G_n^{−1} A_n x(n) and Q_n G_n^{−1} R(n, x(n)) are F_n-measurable, w_{n+1} would be an F_n-measurable random variable. This contradicts the fact that w_{n+1} is independent of F_n. Therefore, the equality Q_n G_n^{−1} R(n, x(n)) = 0 needs to be satisfied. It is easy to see that this holds if Im R(n, ·) ⊂ Im E_n for all n ∈ N. This natural restriction is the so-called condition that the noise sources do not appear in the constraints, or equivalently, a requirement that the constraint part of the solution process is not directly affected by random noise (see, e.g., [14, 59, 52]). Thus, we derive the following index-1 concept for SIDEs, which is motivated by the index-1 concept for SDAEs.

Definition 3.1.2. The SIDE (3.1.1) is called tractable with index-1 (or, for short, of index-1) if

i) the deterministic part (3.1.2) of (3.1.1) is a linear IDE of index-1;

ii) Im R(n, ·) ⊂ Im E_n for all n ∈ N.

By using the above notion, we solve the problem of existence and uniqueness of solution of (3.1.1) in the following theorem.

Theorem 3.1.3. If equation (3.1.1) is of index-1, then for any n ∈ N and with the initial condition x(0) = P̄_{−1} x_0, it admits a unique solution x(n) given by the formula

x(n) = P̄_{n−1} u(n), (3.1.3)

where {u(n)} is a sequence of F_n-adapted random variables defined by the equation

u(n+1) = P_n G_n^{−1} A_n u(n) + P_n G_n^{−1} R(n, P̄_{n−1} u(n)) w_{n+1}, n ∈ N. (3.1.4)

Proof Since Gy! Ey, = Py, PạG,!E„ = Py and QnG;,!En = 0 Therefore, multi- plying both sides of equation (3.1.1) by P,G; and Q,G;, we get

Since equation (3.1.1) is of index-1, Im R(n,-) C Im E,, and hence Q„Œ;,!R(n, x(n)) =

0 Then the above equation is reduced to

On the other hand, by item iv) of Lemma 1.4.2, we have

PạG,, An — PG AnPn—1; QnG,'An = QnGn! AnPn-1 — T,'Qn-1-

P,x(n + 1) = PrGy'AnPn-iz(n) + PrG;R(n, 2(n))wnsi,

Paz(n + 1) = PaG-LAa,Pa_iz(n) + P„G-1R(n,z(n))0„ +1,

Putting u(n) = P,-12(n), v(n) = Qn-1x(n), we imply that ứ(n) = —Q„_1u(n) and x(n) = Py12(n) + Qn12(n) = u(n) + on)

Therefore, equation (3.1.5) becomes

u(n+1) = P_nG_n^{-1}A_nu(n) + P_nG_n^{-1}R(n, P_{n-1}u(n))w_{n+1},    (3.1.7)
v(n) = -Q_{n-1}u(n).

The first equation of (3.1.7) is an explicit stochastic difference equation. For a given initial condition u(0), it determines a unique solution u(n), which is F_n-measurable. This implies that v(n) = -Q_{n-1}u(n) and x(n) = P_{n-1}u(n) are F_n-measurable as well. Thus, with the consistent initial condition x(0) = P_{-1}x_0, equation (3.1.1) has a unique solution x(n), given by formulas (3.1.6) and (3.1.7). The proof is complete.
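As a quick numerical illustration of Theorem 3.1.3, the decoupled recursion above can be simulated directly. The sketch below uses small hypothetical constant matrices E, A and a linear noise term R(n, x) = Zx with T = I (all matrices are illustrative assumptions, not taken from the text); it checks that the solution x(n) = Pu(n) built from the explicit recursion satisfies the original implicit equation path by path:

```python
import numpy as np

# Hypothetical constant-coefficient index-1 data (illustrative only):
# E x(n+1) = A x(n) + Z x(n) w_{n+1}, with projector Q onto ker E.
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[0.5, 0.0], [0.0, 1.0]])
Z = np.array([[0.1, 0.0], [0.0, 0.0]])   # Im Z ⊂ Im E: noise avoids the constraint
Q = np.array([[0.0, 0.0], [0.0, 1.0]])   # projector onto ker E
P = np.eye(2) - Q
G = E - A @ Q                             # G = E - A T Q with T = I; index-1 iff G invertible
Ginv = np.linalg.inv(G)
assert np.allclose(Ginv @ E, P)           # the identity G^{-1} E = P used throughout

rng = np.random.default_rng(0)
x = P @ np.array([1.0, 1.0])              # consistent initial value x(0) = P x0
max_residual = 0.0
for n in range(50):
    w = rng.standard_normal()
    # decoupled recursion from Theorem 3.1.3, then x(n+1) = P u(n+1)
    u_next = P @ Ginv @ A @ x + P @ Ginv @ (Z @ x) * w
    x_next = P @ u_next
    # the pair must satisfy the original implicit equation exactly
    max_residual = max(max_residual,
                       np.max(np.abs(E @ x_next - A @ x - (Z @ x) * w)))
    x = x_next
print(max_residual)  # ~0 up to floating-point rounding
```

The residual of the implicit equation stays at machine precision along the whole path, which is exactly the content of the solvability theorem for this toy system.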

3.1.2 The variation of constants formula for stochastic implicit difference equations

Now, to construct the variation of constants formula for equation (3.1.1), we need to define the Cauchy operator Φ(n, m) of the corresponding homogeneous equation (3.1.2).

First, we find the Cauchy operator Φ_0(n, m) from the equation

Φ_0(n+1, m) = P_nG_n^{-1}A_nΦ_0(n, m).

It is easy to compute that

Φ_0(n, m) = ∏_{k=m}^{n-1} P_kG_k^{-1}A_k.

Then, the Cauchy operator Φ(n, m) is defined by

Φ(n, m) = P_{n-1}Φ_0(n, m)P_{m-1}.

We have the following proposition.

Proposition 3.1.4. Let Φ(n, m) be the Cauchy operator of equation (3.1.2). Then we have:
i) Φ(m, m) = P_{m-1};
ii) Φ(n, m)Φ(m, k) = Φ(n, k);
iii) Φ(n, m) = ∏_{k=m}^{n-1} P_kG_k^{-1}A_k.

Proof. To prove i), by item iii) in Lemma 1.4.2 we have Φ(m, m) = P_{m-1}P_{m-1} = P_{m-1}.

To prove ii), we have

Φ(n, m)Φ(m, k) = P_{n-1}Φ_0(n, m)P_{m-1}P_{m-1}Φ_0(m, k)P_{k-1}.    (3.1.9)

On the other hand, by item iv) in Lemma 1.4.2 we have

This implies that

Φ_0(n, m)P_{m-1} = (∏_{k=m}^{n-1} P_kG_k^{-1}A_k)P_{m-1} = (∏_{k=m+1}^{n-1} P_kG_k^{-1}A_k)P_mG_m^{-1}A_mP_{m-1}.

To prove iii), since P_kG_k^{-1}A_k = P_kG_k^{-1}A_kP_{k-1} and P_kP_k = P_k, we have

Φ(n, m) = P_{n-1}Φ_0(n, m)P_{m-1} = P_{n-1}(∏_{k=m}^{n-1} P_kG_k^{-1}A_k)P_{m-1}
= ∏_{k=m}^{n-1} P_kP_kG_k^{-1}A_k = ∏_{k=m}^{n-1} P_kG_k^{-1}A_k.
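For constant coefficients the properties in Proposition 3.1.4 can be verified numerically: in the time-invariant case Φ_0(n, m) = (PG^{-1}A)^{n-m} and Φ(n, m) = PΦ_0(n, m)P. The matrices in this minimal sketch are illustrative assumptions, not the thesis examples:

```python
import numpy as np

# Hypothetical constant index-1 coefficients (illustrative only).
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[0.5, 0.0], [0.0, 1.0]])
Q = np.array([[0.0, 0.0], [0.0, 1.0]])   # projector onto ker E
P = np.eye(2) - Q
Ginv = np.linalg.inv(E - A @ Q)          # G = E - A T Q with T = I
M = P @ Ginv @ A                         # one-step transition P G^{-1} A

def Phi(n, m):
    # Cauchy operator Phi(n,m) = P Phi0(n,m) P, time-invariant case
    return P @ np.linalg.matrix_power(M, n - m) @ P

# i) Phi(m,m) = P_{m-1};  ii) cocycle property;  iii) product formula
assert np.allclose(Phi(3, 3), P)
assert np.allclose(Phi(7, 2) @ Phi(2, 0), Phi(7, 0))
assert np.allclose(Phi(5, 1), np.linalg.matrix_power(M, 4))
```

All three identities hold exactly here because M = MP = PM for this choice of projectors, mirroring items iii) and iv) of Lemma 1.4.2.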

Now we derive the variation of constants formula for the solution of equation (3.1.1).

Theorem 3.1.5. The unique solution of equation (3.1.1) can be expressed as

x(n) = Φ(n, m)P_{m-1}x(m) + Σ_{i=m}^{n-1} Φ(n, i+1)P_iG_i^{-1}R(i, x(i))w_{i+1},    (3.1.10)

where Φ(n, m) is the fundamental matrix of equation (3.1.2).

Proof. We see that u(n) is the solution of the explicit equation

u(n+1) = P_nG_n^{-1}A_nu(n) + P_nG_n^{-1}R(n, x(n))w_{n+1},   u(m) = P_{m-1}x(m),

for n ≥ m. Solving this recursion step by step and using the properties of Φ(n, m) in Proposition 3.1.4 yields formula (3.1.10).
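The agreement between the recursive solution and formula (3.1.10) can be checked path by path on a hypothetical constant-coefficient linear SIDE with R(n, x) = Zx (all matrices below are illustrative assumptions, not from the text):

```python
import numpy as np

# Hypothetical constant-coefficient linear SIDE (illustrative only):
# E x(n+1) = A x(n) + Z x(n) w_{n+1}.
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[0.5, 0.0], [0.0, 1.0]])
Z = np.array([[0.1, 0.0], [0.0, 0.0]])
Q = np.array([[0.0, 0.0], [0.0, 1.0]])
P = np.eye(2) - Q
Ginv = np.linalg.inv(E - A @ Q)
M = P @ Ginv @ A

def Phi(n, m):                      # Cauchy operator, time-invariant case
    return P @ np.linalg.matrix_power(M, n - m) @ P

rng = np.random.default_rng(1)
N = 12
w = rng.standard_normal(N + 1)      # w[i+1] plays the role of w_{i+1}
x = [P @ np.array([1.0, -2.0])]     # consistent initial value
for n in range(N):
    u_next = M @ x[n] + P @ Ginv @ (Z @ x[n]) * w[n + 1]
    x.append(P @ u_next)

# Right-hand side of (3.1.10) with m = 0
voc = Phi(N, 0) @ P @ x[0] + sum(
    Phi(N, i + 1) @ P @ Ginv @ (Z @ x[i]) * w[i + 1] for i in range(N))
assert np.allclose(voc, x[N])       # recursion and closed form agree
```

The closed-form sum reproduces the recursively computed solution exactly on every noise path, which is what the variation of constants formula asserts.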

3.2 Stability of stochastic implicit difference equations of index-1

In this section, we study the stability of the index-1 SIDE (3.1.1). It is well known that the method of Lyapunov functions is very useful for investigating the stability of dynamic systems. Thus, we will use this method to derive characterizations of mean square stability for equation (3.1.1). First, we introduce the following stability notion, which generalizes Definition 1.3.1 for stochastic difference equations.

3.2.1 Stability of stochastic implicit difference equations

Definition 3.2.1. The trivial solution of equation (3.1.1) is called:

• mean square stable if for any ε > 0 there exists a δ > 0 such that E||x(n)||² < ε for all n ∈ N whenever E||P_{-1}x(0)||² < δ;

• asymptotically mean square stable if it is mean square stable and every solution x(n) of (3.1.1) with E||P_{-1}x(0)||² < ∞ satisfies lim_{n→∞} E||x(n)||² = 0.

If the trivial solution of equation (3.1.1) is mean square stable (resp. asymptotically mean square stable), then we say that equation (3.1.1) is mean square stable (resp. asymptotically mean square stable).

Theorem 3.2.2. Assume that γ_0 := sup_{n≥0} ||P_{n-1}|| < ∞ and there exists a non-negative function V_n = V(n, P_{n-1}x(n)) which satisfies the conditions

V(0, P_{-1}x(0)) ≤ c_1E||P_{-1}x(0)||²,    (3.2.1)
EΔV_n ≤ -c_2E||P_{n-1}x(n)||²,    (3.2.2)

for some constants c_1, c_2 > 0. Then the trivial solution of equation (3.1.1) is asymptotically mean square stable.

Corollary 3.2.3. Suppose that the conditions of Theorem 3.2.2 hold with EΔV_n = -cE||P_{n-1}x(n)||² for some constant c. Then c > 0 is a necessary and sufficient condition for the asymptotic mean square stability of the trivial solution of (3.1.1).

Proof. Sufficiency follows from Theorem 3.2.2. To prove necessity, it is enough to show that if c ≤ 0 then, via (1.3.3),

Σ_{i=0}^{n-1} EΔV_i = EV_n − EV_0 ≥ 0,

or equivalently, EV_n ≥ EV_0 > 0 for each P_{-1}x(0) ≠ 0. This means that the trivial solution of (3.1.1) cannot be asymptotically mean square stable.

From Theorem 3.2.2, it follows that the stability analysis of SIDEs can be reduced to the construction of appropriate Lyapunov functions. We now derive characterizations of the stability of SIDEs in the form of quadratic Lyapunov equations.

Theorem 3.2.4. Assume that there exist a_n, b_n, c_0 > 0 such that ||R(n, x)|| ≤ b_n||x|| and the matrix equation

A_n^T H_{n+1}A_n − E_{n-1}^T H_nE_{n-1} = −a_n²P_{n-1}^T P_{n-1}    (3.2.4)

has a nonnegative definite solution H_n satisfying, for all n ≥ 0,

b_n²||H_{n+1}|| ||P_{n-1}||² − a_n² ≤ −c_0² < 0.    (3.2.5)

Then the trivial solution of (3.1.1) is asymptotically mean square stable.

Proof. Consider the Lyapunov function

V_n = V(n, P_{n-1}x(n)) = x^T(n)P_{n-1}^T E_{n-1}^T H_nE_{n-1}P_{n-1}x(n),

where H_n is a nonnegative definite solution of (3.2.4). Since equation (3.1.1) has index-1, E_{n-1} = E_{n-1}P_{n-1} and hence V_n = x^T(n)E_{n-1}^T H_nE_{n-1}x(n). Therefore,

ΔV_n = x^T(n+1)E_n^T H_{n+1}E_nx(n+1) − x^T(n)E_{n-1}^T H_nE_{n-1}x(n)
= (E_nx(n+1))^T H_{n+1}(E_nx(n+1)) − x^T(n)E_{n-1}^T H_nE_{n-1}x(n)
= (A_nx(n) + R(n, x(n))w_{n+1})^T H_{n+1}(A_nx(n) + R(n, x(n))w_{n+1}) − x^T(n)E_{n-1}^T H_nE_{n-1}x(n).

Since w_{n+1} is independent of F_n with Ew_{n+1} = 0 and Ew_{n+1}² = 1, the cross terms vanish in expectation, and by (3.2.4) and the bound ||R(n, x)|| ≤ b_n||x|| we obtain

EΔV_n ≤ −a_n²E||P_{n-1}x(n)||² + b_n²||H_{n+1}||E||x(n)||²
= −a_n²E||P_{n-1}x(n)||² + b_n²||H_{n+1}||E||P_{n-1}x(n)||²    (since x(n) = P_{n-1}x(n))
≤ −a_n²E||P_{n-1}x(n)||² + b_n²||H_{n+1}|| ||P_{n-1}||²E||P_{n-1}x(n)||²    (as ||P_{n-1}|| ≥ 1)
= (b_n²||H_{n+1}|| ||P_{n-1}||² − a_n²)E||P_{n-1}x(n)||² ≤ −c_0²E||P_{n-1}x(n)||².

On the other hand, it is easy to see that condition (3.2.1) holds. Thus, by Theorem 3.2.2, the trivial solution of (3.1.1) is asymptotically mean square stable. The theorem is proved.

Example 3.2.5. Consider the stochastic implicit difference equation

E_nx(n+1) = A_nx(n) + R(n, x(n))w_{n+1},  n ∈ N,    (3.2.6)

with x(n) = [x_1(n), x_2(n)]^T ∈ R², where the coefficient matrices E_n, A_n and the perturbation R(n, x(n)) are built from the quantities n+1, 2(n+1) and e^{-3n}. If we choose a_n = 1 for all n ∈ N, then the matrix equation (3.2.4) has the nonnegative definite solution

H_n = H_{n+1} = [1 0; 0 0].

Moreover, it is easy to calculate G_n = E_n − A_nT_nQ_n and its inverse, and to check that b_n²||H_{n+1}|| ||P_{n-1}||² − a_n² is bounded above by a negative constant, so that condition (3.2.5) is fulfilled and, by Theorem 3.2.4, the trivial solution of (3.2.6) is asymptotically mean square stable.

Now we consider the linear SIDE

E_nx(n+1) = A_nx(n) + Z_nx(n)w_{n+1},  n ∈ N.    (3.2.7)

Theorem 3.2.6. Assume that there exist a constant c_0 and a nonnegative definite matrix H_n in R^{d×d} satisfying

A_n^T H_{n+1}A_n − E_{n-1}^T H_nE_{n-1} + Z_n^T H_{n+1}Z_n = −c_0P_{n-1}^T P_{n-1}.    (3.2.9)

The stability of the trivial solution of equation (3.2.7) is determined by the sign of the coefficient c_0. If c_0 > 0, then the trivial solution is asymptotically mean square stable. If c_0 < 0 and equation (3.2.9) possesses a nonnegative definite solution H_n, then the trivial solution is not asymptotically mean square stable.

Proof. We construct a Lyapunov function for (3.2.7) of the form V_n = x^T(n)E_{n-1}^T H_nE_{n-1}x(n). Then

ΔV_n = x^T(n+1)E_n^T H_{n+1}E_nx(n+1) − x^T(n)E_{n-1}^T H_nE_{n-1}x(n)
= (E_nx(n+1))^T H_{n+1}(E_nx(n+1)) − x^T(n)E_{n-1}^T H_nE_{n-1}x(n)
= (A_nx(n) + Z_nx(n)w_{n+1})^T H_{n+1}(A_nx(n) + Z_nx(n)w_{n+1}) − x^T(n)E_{n-1}^T H_nE_{n-1}x(n)
= x^T(n)A_n^T H_{n+1}A_nx(n) − x^T(n)E_{n-1}^T H_nE_{n-1}x(n)
+ (A_nx(n))^T H_{n+1}(Z_nx(n)w_{n+1}) + (Z_nx(n)w_{n+1})^T H_{n+1}(A_nx(n))
+ (Z_nx(n)w_{n+1})^T H_{n+1}(Z_nx(n)w_{n+1}).

Since w_{n+1} is independent of F_n and Ew_{n+1} = 0, it follows that

E[(Z_nx(n)w_{n+1})^T H_{n+1}(A_nx(n))] = E[(Z_nx(n))^T H_{n+1}(A_nx(n))w_{n+1}] = 0,

and similarly E[(A_nx(n))^T H_{n+1}(Z_nx(n)w_{n+1})] = 0. Moreover, since Ew_{n+1}² = 1, we have

E[(Z_nx(n)w_{n+1})^T H_{n+1}(Z_nx(n)w_{n+1})] = E[(Z_nx(n))^T H_{n+1}(Z_nx(n))].

Therefore, by (3.2.9),

EΔV_n = E[x^T(n)(A_n^T H_{n+1}A_n − E_{n-1}^T H_nE_{n-1} + Z_n^T H_{n+1}Z_n)x(n)] = −c_0E||P_{n-1}x(n)||².

On the other hand, it is easy to see that condition (3.2.1) holds. Thus, if c_0 > 0 then, by Theorem 3.2.2, the trivial solution of (3.2.7) is asymptotically mean square stable. If c_0 < 0 and equation (3.2.9) has a nonnegative definite solution H_n, then

EΔV_n = −c_0E||P_{n-1}x(n)||² ≥ 0

and hence, by Corollary 3.2.3, the trivial solution of equation (3.2.7) is not asymptotically mean square stable. The theorem is proved.

Now we consider the linear SIDE with constant coefficient matrices

Ex(n+1) = Ax(n) + Zx(n)w_{n+1},  n ∈ N,    (3.2.10)

with the consistent initial condition

x(0) = Px_0.    (3.2.11)

Corollary 3.2.7. Assume that there exist c_0 > 0 and a nonnegative definite matrix H in R^{d×d} satisfying

A^T HA − E^T HE + Z^T HZ = −c_0P^T P.    (3.2.12)

Then the trivial solution of equation (3.2.10) is asymptotically mean square stable. Moreover, if c_0 < 0 and equation (3.2.12) has a nonnegative definite solution H, then the trivial solution of equation (3.2.10) is not asymptotically mean square stable.
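As a sanity check of Corollary 3.2.7, one can verify equation (3.2.12) directly on hypothetical constant matrices. The sketch below assumes (3.2.12) is the constant-coefficient analogue of (3.2.9), namely A^T HA − E^T HE + Z^T HZ = −c_0P^T P; the matrices E, A, Z and the candidate H are illustrative, not the thesis examples:

```python
import numpy as np

# Hypothetical constant-coefficient data (illustrative only).
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[0.5, 0.0], [0.0, 1.0]])
Z = np.array([[0.1, 0.0], [0.0, 0.0]])
Q = np.array([[0.0, 0.0], [0.0, 1.0]])
P = np.eye(2) - Q
Ginv = np.linalg.inv(E - A @ Q)
H = np.diag([1.0, 0.0])                       # nonnegative definite candidate

# Verify A^T H A - E^T H E + Z^T H Z = -c0 P^T P with c0 > 0
lhs = A.T @ H @ A - E.T @ H @ E + Z.T @ H @ Z
c0 = 1 - 0.5**2 - 0.1**2                      # = 0.74 for these numbers
assert np.allclose(lhs, -c0 * P.T @ P)        # the assumed (3.2.12) holds exactly

# Monte Carlo estimate of E||x(n)||^2: it should be small for moderate n
rng = np.random.default_rng(2)
second_moment = 0.0
for _ in range(200):
    x = P @ np.array([1.0, 1.0])
    for n in range(20):
        w = rng.standard_normal()
        x = P @ (P @ Ginv @ A @ x + P @ Ginv @ (Z @ x) * w)
    second_moment += x @ x / 200
assert second_moment < 1e-6                   # theory predicts 0.26^20 ≈ 1e-12
```

Here the essential dynamics reduce to x_1(n+1) = (0.5 + 0.1w_{n+1})x_1(n), so E||x(n)||² = 0.26^n||x(0)||², in agreement with c_0 = 0.74 > 0 and the corollary's stability conclusion.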

Next, we give two numerical examples to illustrate these results.

Example 3.2.8. Consider the following SIDE

Ex(n+1) = Ax(n) + Zx(n)w_{n+1},  n ∈ N,    (3.2.13)

where E, A and Z are constant matrices.

We can choose the projections P and Q.

Then the matrices G and G^{-1} can be calculated.

Thus, equation (3.2.13) is of index-1. If we choose c_0 = 0.5303, then the matrix equation (3.2.12) has a nonnegative definite solution H. Hence, the trivial solution of (3.2.13) is asymptotically mean square stable.

Example 3.2.9. Consider the following SIDE

EX(n+1) = AX(n) + ZX(n)w_{n+1},  n ∈ N,    (3.2.14)

where E, A and Z are constant matrices.

Here, we can choose the projections P and Q.

It is clear that rank E = 1.

The matrices G and G^{-1} are then easy to calculate.

Thus, equation (3.2.14) is of index-1. If we choose c_0 = 1.2247, then the matrix equation (3.2.12) has a nonnegative definite solution H. Hence, the trivial solution of (3.2.14) is asymptotically mean square stable; see Figure 3.1.

On the other hand, if the perturbation matrix Z is quite large, e.g. containing the row [−2.1213, 2.1213], then the matrix equation (3.2.12) has a nonnegative definite solution H, containing the row [−0.0730, 0.0730], with c_0 = −1.0954. Thus, the trivial solution of (3.2.14) is not asymptotically mean square stable; see Figure 3.2.

3.2.2 A comparison theorem for stability of linear stochastic implicit difference equations of index-1

Theorem 3.2.10. Assume that K_1 := sup_{n≥0} ||P_{n-1}|| < ∞. If there exists a positive sequence {α_n} with K_2 := Σ_{n=0}^∞ α_n < ∞ such that

||P_nG_n^{-1}A_n||² + ||P_nG_n^{-1}Z_nP_{n-1}||² ≤ 1 + α_n,  ∀n ∈ N,

then equation (3.2.7) is mean square stable. If there exists a positive sequence {β_n} with Σ_{n=0}^∞ β_n = ∞ such that

||P_nG_n^{-1}A_n||² + ||P_nG_n^{-1}Z_nP_{n-1}||² ≤ 1 − β_n,  ∀n ∈ N,

then equation (3.2.7) is asymptotically mean square stable.
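The comparison criterion of Theorem 3.2.10 is straightforward to evaluate numerically. The sketch below computes ||P_nG_n^{-1}A_n||² + ||P_nG_n^{-1}Z_nP_{n-1}||² in the spectral norm for hypothetical constant matrices (so the bound is uniform in n and the β-sum diverges trivially); the matrices are illustrative assumptions:

```python
import numpy as np

# Hypothetical constant matrices (illustrative only).
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[0.5, 0.0], [0.0, 1.0]])
Z = np.array([[0.1, 0.0], [0.0, 0.0]])
Q = np.array([[0.0, 0.0], [0.0, 1.0]])
P = np.eye(2) - Q
Ginv = np.linalg.inv(E - A @ Q)

# ||P G^{-1} A||^2 + ||P G^{-1} Z P||^2 in the spectral (2-)norm
lhs = (np.linalg.norm(P @ Ginv @ A, 2) ** 2
       + np.linalg.norm(P @ Ginv @ Z @ P, 2) ** 2)
print(round(lhs, 2))  # 0.26 here: 0.5^2 + 0.1^2
assert lhs < 1.0      # beta_n = 1 - lhs > 0 is constant, so its sum diverges
```

Since the left-hand side is a constant strictly below 1, the divergent-sum condition is satisfied with β_n ≡ 1 − lhs, matching the asymptotic mean square stability already observed for this toy system.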
