$$x^{\Delta} = A(t)x, \qquad (2.16)$$
where $A(t)$ is a regressive, rd-continuous $n\times n$ matrix function with $\|A(t)\|\le M$ for all $t\in\mathbb{T}_{t_0}$.
Definition 2.23. The trivial solution $x(t)\equiv 0$ of Equation (2.16) is said to be exponentially asymptotically stable if every solution $x(t)$ of Equation (2.16) with initial value $x(t_0)$ satisfies
$$\|x(t)\|\le N\|x(t_0)\|\,e_{-\alpha}(t,t_0),\qquad t\in\mathbb{T}_{t_0},$$
for some positive constant $N=N(t_0)$ and some $\alpha>0$ with $-\alpha\in\mathcal{R}^{+}$.
If the constant $N$ can be chosen independently of $t_0$, then this solution is called uniformly exponentially asymptotically stable.
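On the concrete time scale $\mathbb{T}=h\mathbb{Z}$ this definition can be exercised directly, since there $e_{-\alpha}(t,t_0)=(1-h\alpha)^{(t-t_0)/h}$ and every solution of the scalar equation $x^{\Delta}=ax$ is $x(t)=(1+ha)^{(t-t_0)/h}x_0$. A minimal numerical sketch (the values of $h$, $a$, $\alpha$ below are illustrative choices, not taken from the text):

```python
# Scalar dynamic equation x^Delta = a*x on the time scale T = h*Z.
# There x(t) = (1 + h*a)^((t - t0)/h) * x0 and e_{-alpha}(t, t0) = (1 - h*alpha)^((t - t0)/h).

h, a = 0.5, -1.2          # graininess and coefficient; 1 + h*a = 0.4, so solutions decay
x0, t0 = 3.0, 0.0
alpha = 1.0               # decay rate to test; -alpha in R^+ means 1 - h*alpha > 0

def solution(t):
    return (1 + h * a) ** ((t - t0) / h) * x0

def e_minus_alpha(t):
    return (1 - h * alpha) ** ((t - t0) / h)

# Exponential stability estimate |x(t)| <= N * |x0| * e_{-alpha}(t, t0), here with N = 1:
points = [t0 + h * k for k in range(40)]
ok = all(abs(solution(t)) <= 1.0 * abs(x0) * e_minus_alpha(t) + 1e-12 for t in points)
print(ok)
```

Since $|1+ha|=0.4<0.5=1-h\alpha$, the estimate holds with $N=1$ and the script prints `True`.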
Theorem 2.24. Consider Equation (2.16) under the stated conditions on $A(\cdot)$. Then:
i) Equation (2.16) is exponentially asymptotically stable if and only if there exists a constant $\alpha>0$ with $-\alpha\in\mathcal{R}^{+}$ such that for every $t_0\in\mathbb{T}$ there is a number $N=N(t_0)\ge 1$ with
$$\|\Phi_A(t,t_0)\|\le N\,e_{-\alpha}(t,t_0)\quad\text{for all } t\in\mathbb{T}_{t_0}.$$
ii) Equation (2.16) is uniformly exponentially asymptotically stable if and only if there exist constants $\alpha>0$ and $N\ge 1$ with $-\alpha\in\mathcal{R}^{+}$ such that
$$\|\Phi_A(t,t_0)\|\le N\,e_{-\alpha}(t,t_0)\quad\text{for all } t\in\mathbb{T}_{t_0}.$$
Proof. Every solution of Equation (2.16) with initial condition $x(t_0)=x_0$ can be written as $x(t)=\Phi_A(t,t_0)x_0$. Combining this representation with the definition of exponential stability yields the claim.
In the following theorem we give a spectral condition for exponential stability.
Theorem 2.25. Let $\alpha:=\max S$, where $S$ is the Lyapunov spectrum (the set of Lyapunov exponents of nontrivial solutions) of Equation (2.16). Then Equation (2.16) is exponentially asymptotically stable if and only if $\alpha<0$.
Proof. The proof is divided into two parts.
Necessity. Suppose that Equation (2.16) is exponentially asymptotically stable. Then there exist numbers $N\ge 1$ and $\alpha_1>0$ with $-\alpha_1\in\mathcal{R}^{+}$ such that
$$\|x(t)\|\le N\,e_{-\alpha_1}(t,t_0)$$
for any solution $x(t)$ of Equation (2.16). By Lemma 2.4.v) we have
$$\kappa_L[x(\cdot)]\le-\alpha_1.$$
This means that $\alpha=\max S\le-\alpha_1<0$.
Sufficiency. Suppose that $\alpha<0$, and let
$$\big\{x_i(\cdot)=(x_{1i}(\cdot),x_{2i}(\cdot),\dots,x_{ni}(\cdot))^{T}\big\},\qquad i=1,2,\dots,n,$$
be a fundamental system of solutions of Equation (2.16). Hence we have
$$\kappa_L[x_i(\cdot)]\le\alpha<0$$
for all $i=1,2,\dots,n$, which implies that
$$\lim_{t\to\infty}\frac{\|x_i(t)\|}{e_{\alpha/2}(t,t_0)}=0.$$
Therefore, there exists a number $T_0>t_0$ such that
$$\|x_i(t)\|\le e_{\alpha/2}(t,t_0)\quad\text{for all } t\in\mathbb{T},\ t\ge T_0,\ i=1,2,\dots,n.$$
We choose a number $N\ge 1$ such that
$$N\ge\sup_{1\le i\le n,\ t_0\le t\le T_0}\frac{\|x_i(t)\|}{e_{\alpha/2}(t,t_0)},$$
and then obtain
$$\|x_i(t)\|\le N\,e_{\alpha/2}(t,t_0)\quad\text{for all } t\in\mathbb{T}_{t_0},\ i=1,2,\dots,n.$$
If $x(\cdot)$ is an arbitrary nontrivial solution of Equation (2.16), then there are constants $a_1,a_2,\dots,a_n$ such that
$$x(t)=\sum_{i=1}^{n}a_i x_i(t).$$
Since $\{x_1(t_0),x_2(t_0),\dots,x_n(t_0)\}$ forms a basis of $\mathbb{R}^n$ and all norms on $\mathbb{R}^n$ are equivalent, there is a positive constant $K$, independent of $x(t_0)$, such that
$$K\|x(t_0)\|\ge\sum_{i=1}^{n}|a_i|.$$
Hence,
$$\|x(t)\|\le\sum_{i=1}^{n}|a_i|\,\|x_i(t)\|\le N\Big(\sum_{i=1}^{n}|a_i|\Big)e_{\alpha/2}(t,t_0)\le\widetilde{N}\|x(t_0)\|\,e_{\alpha/2}(t,t_0),$$
where $\widetilde{N}:=KN$. Since $\alpha/2<0$, this means that Equation (2.16) is exponentially asymptotically stable. The proof is complete.
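On the particular time scale $\mathbb{T}=\mathbb{Z}$ the spectral condition of Theorem 2.25 becomes very concrete: there $e_{\lambda}(t,t_0)=(1+\lambda)^{t-t_0}$, so the Lyapunov exponent of $e_{\lambda}$ is $|1+\lambda|-1$, and $\alpha<0$ amounts to $|1+\lambda|<1$ for every eigenvalue $\lambda$ of $A$. A toy numerical illustration (the matrix is an arbitrary triangular choice, so its eigenvalues are visible on the diagonal):

```python
# On T = Z the transition matrix of x^Delta = A x is Phi(t, 0) = (I + A)^t, so the
# Lyapunov spectrum is {|1 + lambda| - 1 : lambda an eigenvalue of A}, and its
# maximum is negative iff |1 + lambda| < 1 for every eigenvalue lambda.

def mul(X, Y):  # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[-0.5, 0.3],
     [0.0, -1.5]]                     # triangular: eigenvalues -0.5 and -1.5 (illustrative)
M = [[1 + A[0][0], A[0][1]],
     [A[1][0], 1 + A[1][1]]]          # M = I + A, eigenvalues 0.5 and -0.5

alpha = max(abs(1 + lam) - 1 for lam in (-0.5, -1.5))
print(alpha < 0)                      # the spectral condition alpha < 0 holds

# Consequently ||Phi(t, 0)|| -> 0: iterate M and watch the largest entry shrink.
Pw, sizes = M, []
for t in range(1, 41):
    sizes.append(max(abs(e) for row in Pw for e in row))
    Pw = mul(Pw, M)
print(sizes[-1] < 1e-6)
```

Both checks print `True`: the spectral radius of $I+A$ is $0.5<1$ and the powers of $I+A$ decay to zero.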
We now consider the equation
$$x^{\Delta}=Ax,\qquad(2.17)$$
where $A$ is a regressive constant matrix. Denote by $\sigma(A)$ the set of all eigenvalues of $A$. From the regressivity of $A$ it follows that $\sigma(A)\subset\mathcal{R}$.
Theorem 2.26. i) If Equation (2.17) is exponentially asymptotically stable, then $\kappa_L[e_{\lambda}(\cdot,t_0)]<0$ for all $\lambda\in\sigma(A)$.
ii) Suppose that all eigenvalues of $A$ are uniformly regressive. Then the assumption $\kappa_L[e_{\lambda}(\cdot,t_0)]<0$ for all $\lambda\in\sigma(A)$ implies that Equation (2.17) is exponentially asymptotically stable.
Proof. Suppose that Equation (2.17) is exponentially asymptotically stable. Let $\lambda\in\sigma(A)$ and let $x_0$ be a corresponding eigenvector. Since $x(t;t_0,x_0)=e_{\lambda}(t,t_0)x_0$ is a solution of (2.17), we have
$$|e_{\lambda}(t,t_0)|\,\|x_0\|=\|x(t;t_0,x_0)\|\le N\|x_0\|\,e_{-\alpha}(t,t_0),$$
where $N\ge 1$, $\alpha>0$, $-\alpha\in\mathcal{R}^{+}$. Hence
$$\kappa_L[e_{\lambda}(\cdot,t_0)]\le-\alpha<0.$$
Next, to prove the second assertion, we define the sequence of $\lambda$-polynomials by
$$p_0^{\lambda}(t,s):=1,\qquad p_k^{\lambda}(t,s):=\int_s^t\frac{1}{1+\lambda\mu(\tau)}\,p_{k-1}^{\lambda}(\tau,s)\,\Delta\tau.$$
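On $\mathbb{T}=\mathbb{Z}$ this recursion can be computed explicitly: the $\Delta$-integral reduces to a sum and $\mu\equiv 1$, and one finds the closed form $p_k^{\lambda}(t,s)=\binom{t-s}{k}/(1+\lambda)^{k}$, matching the binomial expansion of $(I+A)^{t-s}$ on a Jordan block. A small numerical check of this closed form (the value of $\lambda$ is an illustrative choice):

```python
from math import comb

lam = 0.7  # any regressive constant works: 1 + lam != 0

def p(k, t, s):
    # lambda-polynomials on T = Z: p_0 = 1, p_k(t, s) = sum_{tau=s}^{t-1} p_{k-1}(tau, s)/(1 + lam)
    if k == 0:
        return 1.0
    return sum(p(k - 1, tau, s) for tau in range(s, t)) / (1 + lam)

# closed form on Z: p_k(t, s) = C(t - s, k) / (1 + lam)^k
ok = all(
    abs(p(k, t, 0) - comb(t, k) / (1 + lam) ** k) < 1e-9
    for k in range(4) for t in range(8)
)
print(ok)
```

The agreement follows from the hockey-stick identity $\sum_{j=0}^{n-1}\binom{j}{k-1}=\binom{n}{k}$.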
By using this notation, we obtain an explicit representation of the matrix exponential function on time scales (see [16]):
$$\Phi_A(t,t_0)=\sum_{i=1}^{m}\sum_{k=1}^{s_i}R_{ik}\,p_{k-1}^{\lambda_i}(t,t_0)\,e_{\lambda_i}(t,t_0),\qquad(2.18)$$
where the $R_{ik}$ are constant matrices and $\lambda_1,\lambda_2,\dots,\lambda_m$ are the distinct eigenvalues of the matrix $A$ with respective multiplicities $s_1,s_2,\dots,s_m$, $m\le n$.
We now assume that every $\lambda\in\sigma(A)$ is uniformly regressive and $\kappa_L[e_{\lambda}(\cdot,t_0)]<0$.
Let $\varepsilon>0$ be an arbitrarily small number. By uniform regressivity there is $d>0$ with $|1+\lambda\mu(t)|\ge d$ for all $t$. Using L'Hôpital's rule, we get
$$\lim_{t\to\infty}\frac{|p_1^{\lambda}(t,t_0)|}{e_{\varepsilon}(t,t_0)}\le\lim_{t\to\infty}\frac{\int_{t_0}^{t}\frac{\Delta\tau}{|1+\lambda\mu(\tau)|}}{e_{\varepsilon}(t,t_0)}=\lim_{t\to\infty}\frac{1}{\varepsilon\,|1+\lambda\mu(t)|\,e_{\varepsilon}(t,t_0)}\le\lim_{t\to\infty}\frac{1}{\varepsilon d\,e_{\varepsilon}(t,t_0)}=0.$$
Since $\varepsilon$ is arbitrarily small, it follows from Lemma 2.4.iv) that $\kappa_L[p_1^{\lambda}(\cdot,t_0)]\le 0$. By induction, we get $\kappa_L[p_k^{\lambda}(\cdot,t_0)]\le 0$ for $k=1,2,\dots,s_i$ and $i=1,2,\dots,m$.
Therefore,
$$\kappa_L\big[p_k^{\lambda}(\cdot,t_0)\,e_{\lambda}(\cdot,t_0)\big]\le\kappa_L\big[e_{\kappa_L[p_k^{\lambda}(\cdot,t_0)]}(\cdot,t_0)\,e_{\kappa_L[e_{\lambda}(\cdot,t_0)]}(\cdot,t_0)\big]\le\kappa_L\big[e_{\kappa_L[e_{\lambda}(\cdot,t_0)]}(\cdot,t_0)\big]=\kappa_L[e_{\lambda}(\cdot,t_0)]<0.$$
Combining the inequality $\kappa_L[p_k^{\lambda}(\cdot,t_0)e_{\lambda}(\cdot,t_0)]<0$ with the representation (2.18) and Theorem 2.25 completes the proof.
Corollary 2.27. If for every eigenvalue $\lambda\in\sigma(A)$ we have $\Im\lambda\ne 0$ and $\kappa_L[e_{\lambda}(\cdot,t_0)]<0$, then Equation (2.17) is exponentially asymptotically stable.
Proof. The proof follows from the fact that if $\Im\lambda\ne 0$ then $\lambda$ is uniformly regressive.
Theorem 2.28. Suppose that $\limsup_{t\to\infty}\widehat{\Re}\lambda(t)<0$ for all $\lambda\in\sigma(A)$. Then Equation (2.17) is exponentially asymptotically stable.
Proof. From the assumption and inequality (2.7), we see that $\kappa_L[e_{\lambda}(\cdot,t_0)]<0$ for all $\lambda\in\sigma(A)$.
Fix $\lambda\in\sigma(A)$ and set
$$\alpha:=\limsup_{t\to\infty}\widehat{\Re}\lambda(t)<0.$$
Choose $\varepsilon>0$ so small that $\alpha+2\varepsilon\le\frac{\alpha}{2}$, i.e. $0<\varepsilon\le-\frac{\alpha}{4}$. Then there exists $T_0\in\mathbb{T}$ such that $\sup_{t\ge T_0}\widehat{\Re}\lambda(t)\le\alpha+\varepsilon$, and since $(\widehat{\Re}\lambda\oplus\varepsilon)(t)\le\widehat{\Re}\lambda(t)+\varepsilon$, this implies that
$$(\widehat{\Re}\lambda\oplus\varepsilon)(t)\le\frac{\alpha}{2}<0\quad\text{for all } t\ge T_0.$$
Hence $\lim_{t\to\infty}e_{\widehat{\Re}\lambda\oplus\varepsilon}(t,t_0)=0$. Since $|e_{\lambda\oplus\varepsilon}(t,t_0)|=e_{\widehat{\Re}\lambda\oplus\varepsilon}(t,t_0)$ and $|p_1^{\lambda}(t,t_0)|\le\int_{t_0}^{t}\frac{\Delta\tau}{|1+\lambda\mu(\tau)|}$, an application of L'Hôpital's rule gives
$$\limsup_{t\to\infty}\big|p_1^{\lambda}(t,t_0)\,e_{\lambda\oplus\varepsilon}(t,t_0)\big|\le\limsup_{t\to\infty}\frac{\int_{t_0}^{t}\frac{\Delta\tau}{|1+\lambda\mu(\tau)|}}{e_{\ominus(\widehat{\Re}\lambda\oplus\varepsilon)}(t,t_0)}=\lim_{t\to\infty}\frac{1}{|1+\lambda\mu(t)|\,\big(\ominus(\widehat{\Re}\lambda\oplus\varepsilon)\big)(t)\,e_{\ominus(\widehat{\Re}\lambda\oplus\varepsilon)}(t,t_0)}=\lim_{t\to\infty}\frac{(1+\varepsilon\mu(t))\,e_{\widehat{\Re}\lambda\oplus\varepsilon}(t,t_0)}{-(\widehat{\Re}\lambda\oplus\varepsilon)(t)}=0,$$
where the last equality uses the identity $1+\mu(t)(\widehat{\Re}\lambda\oplus\varepsilon)(t)=(1+\varepsilon\mu(t))\,|1+\lambda\mu(t)|$, the boundedness of $\mu$, and the bound $-(\widehat{\Re}\lambda\oplus\varepsilon)(t)\ge-\frac{\alpha}{2}>0$.
Therefore, $p_1^{\lambda}(t,t_0)\,e_{\lambda\oplus\varepsilon}(t,t_0)$ is bounded by a certain constant $C$ when $t$ is large enough, which implies that
$$\big|p_1^{\lambda}(t,t_0)\,e_{\lambda}(t,t_0)\big|=\big|p_1^{\lambda}(t,t_0)\,e_{\lambda\oplus\varepsilon}(t,t_0)\big|\,e_{\ominus\varepsilon}(t,t_0)\le C\,e_{\ominus\varepsilon}(t,t_0).$$
Thus,
$$\kappa_L\big[p_1^{\lambda}(\cdot,t_0)\,e_{\lambda}(\cdot,t_0)\big]\le\kappa_L\big[C\,e_{\ominus\varepsilon}(\cdot,t_0)\big]=\sup_{t\in\mathbb{T}}(\ominus\varepsilon)(t)=\sup_{t\in\mathbb{T}}\frac{-\varepsilon}{1+\varepsilon\mu(t)}=\frac{-\varepsilon}{1+\varepsilon\mu^{*}}<0,$$
where $\mu^{*}:=\sup_{t\in\mathbb{T}}\mu(t)$. By induction, we can prove that
$$\kappa_L\big[p_k^{\lambda}(\cdot,t_0)\,e_{\lambda}(\cdot,t_0)\big]<0\quad\text{for all } k=0,1,2,\dots$$
Using the representation (2.18) and Theorem 2.25, we complete the proof.
Note that if $\lambda\in\mathcal{R}^{+}$ (in particular, $\lambda$ is real), then $\widehat{\Re}\lambda(t)=\lambda$ for all $t\in\mathbb{T}$. Therefore, we get the following corollary of Theorem 2.28.
Corollary 2.29. If $\sigma(A)\subset(-\infty,0)\cap\mathcal{R}^{+}$, then Equation (2.17) is exponentially asymptotically stable.
To end Chapter 2, we consider an example.
Example 2.30. Consider the equation $x^{\Delta}(t)=Ax(t)$ on the time scale $\mathbb{T}=\bigcup_{k=0}^{\infty}[2k,2k+1]$, where $A$ is a regressive constant $3\times 3$ matrix with spectrum
$$\sigma(A)=\Big\{-2,\ \frac{-1+i}{2},\ \frac{-1-i}{2}\Big\}.$$
It is clear that
$$\mu(t)=\begin{cases}0 & \text{if } t\in\bigcup_{k=0}^{\infty}[2k,2k+1),\\ 1 & \text{if } t\in\bigcup_{k=0}^{\infty}\{2k+1\},\end{cases}$$
so $\sup_{t\in\mathbb{T}}\mu(t)=1$, and all $\lambda\in\sigma(A)$ are uniformly regressive. We consider the following cases:
i) In the case $\lambda_1=-2$ and $t\in[2k,2k+1]$, we have
$$e_{-2}(t,0)=e^{-2(t-k)}\prod_{j=1}^{k}\big(1-2\mu(2j-1)\big)=(-1)^{k}e^{-2(t-k)},$$
since the continuous part of $[0,t]\cap\mathbb{T}$ has Lebesgue measure $t-k$ and each of the $k$ right-scattered points $1,3,\dots,2k-1$ contributes a factor $1-2\mu=-1$. On the other hand, for all $t\in[2k,2k+1]$,
$$e_{-1/2}(t,0)=e^{-\frac{1}{2}(t-k)}\Big(\frac{1}{2}\Big)^{k}.$$
By comparing these expressions, we see that there exists $c>0$ such that $|e_{-2}(t,0)|\le c\,e_{-1/2}(t,0)$. Hence
$$\kappa_L[e_{-2}(\cdot,0)]\le\kappa_L[e_{-1/2}(\cdot,0)]=-\frac{1}{2}<0.$$
ii) In the case $\lambda_2=\dfrac{-1+i}{2}$, we have
$$\widehat{\Re}\lambda_2(t)=\lim_{s\searrow\mu(t)}\frac{\big|1+s\,\frac{-1+i}{2}\big|-1}{s}=\begin{cases}-\dfrac{1}{2} & \text{if } \mu(t)=0,\\[4pt] \dfrac{1}{\sqrt{2}}-1 & \text{if } \mu(t)=1,\end{cases}$$
thus
$$\kappa_L[e_{\lambda_2}(\cdot,0)]\le\limsup_{t\to\infty}\widehat{\Re}\lambda_2(t)=\frac{1}{\sqrt{2}}-1<0.$$
iii) Similarly, in the case $\lambda_3=\dfrac{-1-i}{2}$, we also get
$$\kappa_L[e_{\lambda_3}(\cdot,0)]\le\limsup_{t\to\infty}\widehat{\Re}\lambda_3(t)=\frac{1}{\sqrt{2}}-1<0.$$
Therefore, by Theorem 2.26, the above equation is exponentially asymptotically stable.
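The comparison in case i) can be checked numerically at the points $t=2k$, where on $\mathbb{T}=\bigcup_k[2k,2k+1]$ one has $e_{-2}(2k,0)=(-1)^k e^{-2k}$ and $e_{-1/2}(2k,0)=2^{-k}e^{-k/2}$, so the ratio $|e_{-2}|/e_{-1/2}=(2e^{-3/2})^{k}$ stays below $1$ and tends to $0$. A quick sketch:

```python
from math import exp

# On T = U_k [2k, 2k+1], for t in [2k, 2k+1]:
#   e_{-2}(t, 0)   = (-1)^k * exp(-2 (t - k)),
#   e_{-1/2}(t, 0) = (1/2)^k * exp(-(t - k)/2).
def e_m2(k, t):
    return (-1) ** k * exp(-2 * (t - k))

def e_mhalf(k, t):
    return 0.5 ** k * exp(-(t - k) / 2)

ratios = [abs(e_m2(k, 2 * k)) / e_mhalf(k, 2 * k) for k in range(1, 15)]
# |e_{-2}(t, 0)| <= c * e_{-1/2}(t, 0) holds with c = 1, and the ratio decreases:
print(all(r <= 1 for r in ratios))
print(all(ratios[i + 1] < ratios[i] for i in range(len(ratios) - 1)))
```

Both lines print `True`, confirming the bound used to conclude $\kappa_L[e_{-2}(\cdot,0)]\le-\frac{1}{2}$.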
Note that the equation $x^{\Delta}(t)=-2x(t)$, $t\in\mathbb{T}=\bigcup_{k=0}^{\infty}[2k,2k+1]$, is exponentially asymptotically stable, while $\limsup_{t\to\infty}\widehat{\Re}(-2)(t)=0$. This indicates that, in general, the converse of Theorem 2.28 is not true.
Conclusions of Chapter 2. By studying the ratio $\dfrac{|f(t)|}{e_{\alpha}(t,t_0)}$ as $t\to\infty$, with $\alpha$ a parameter, we have overcome the difficulty that the logarithm function cannot be defined on time scales, and obtained the following results:
1. Introducing the Lyapunov exponent $\kappa_L[f(\cdot)]$ of a function $f:\mathbb{T}_{t_0}\to\mathbb{K}$, and obtaining a necessary and sufficient condition for the existence of $\kappa_L[f(\cdot)]$ (Lemma 2.2), as well as its basic properties;
2. Establishing sufficient conditions for the boundedness of the Lyapunov exponent $\kappa_L[x(\cdot)]$, where $x(\cdot)$ is a nontrivial solution of the dynamic equation $x^{\Delta}=A(t)x$ (Theorem 2.15); besides that, we also obtain Lyapunov's inequality (Theorem 2.19);
3. Giving necessary and sufficient conditions for the exponential stability of the equation $x^{\Delta}=A(t)x$ when $A(\cdot)$ is bounded (Theorem 2.24), and deriving the spectral characterization of exponential stability (Theorem 2.25), as well as sufficient conditions for asymptotic stability when $A$ is a constant matrix (Theorems 2.26 and 2.28).
The results obtained in Chapter 2 are only preliminary studies of the Lyapunov exponent for homogeneous linear systems. We hope to achieve sharper results for linear dynamic systems and, especially, for linearized systems on time scales.
CHAPTER 3
BOHL EXPONENTS
FOR IMPLICIT DYNAMIC EQUATIONS
Consider linear time-varying implicit dynamic equations (IDEs) of the form
$$E_{\sigma}(t)x^{\Delta}(t)=A(t)x(t),\qquad t\ge 0,$$
where $E_{\sigma}(\cdot)$, $A(\cdot)$ are continuous matrix functions and $E_{\sigma}(\cdot)$ is supposed to be singular. If this equation is subject to an external force $f(t)$, then it becomes
$$E_{\sigma}(t)x^{\Delta}(t)=A(t)x(t)+f(t),\qquad t\ge 0.$$
In this chapter, we define the notion of the Bohl exponent for linear time-varying IDEs of index-1 and investigate the relation between exponential stability and the Bohl exponent, as well as the robustness of the Bohl exponent when this equation is subject to perturbations acting on the right-hand side only or on both sides. The content of Chapter 3 is based on papers No. 2 and No. 3 in the list of the author's scientific works.
3.1 Linear Implicit Dynamic Equations with index-1
Consider the linear time-varying implicit dynamic equation on time scales
$$E_{\sigma}(t)x^{\Delta}(t)=A(t)x(t)+f(t)\quad\text{for all } t\in\mathbb{T}_a,\qquad(3.1)$$
where $A(\cdot)$, $E_{\sigma}(\cdot)$ are in $L^{\infty}_{\mathrm{loc}}(\mathbb{T}_a,\mathbb{K}^{n\times n})$. Assume that $\operatorname{rank}E(t)=r$, $1\le r<n$, for all $t\in\mathbb{T}_a$, and that $\ker E(t)$ is smooth in the sense that there exists a projector $Q(t)$ onto $\ker E(t)$ such that $Q(t)$ is continuously differentiable for all $t\in(a,\infty)$, $Q^{2}(t)=Q(t)$ and $Q^{\Delta}\in L^{\infty}_{\mathrm{loc}}(\mathbb{T}_a,\mathbb{K}^{n\times n})$. Set $P(t)=I-Q(t)$.
It is clear that $P(t)$ is a projector along $\ker E(t)$, $P^{2}(t)=P(t)$, and $EP=E$. Then Equation (3.1) can be rewritten in the form
$$E_{\sigma}(t)(Px)^{\Delta}(t)=\bar{A}(t)x(t)+f(t),\qquad t\ge a,\qquad(3.2)$$
where $\bar{A}:=A+E_{\sigma}P^{\Delta}\in L^{\infty}_{\mathrm{loc}}(\mathbb{T}_a;\mathbb{K}^{n\times n})$.
Let $H$ be a continuous function defined on $\mathbb{T}_a$, taking values in the group $\mathrm{Gl}(\mathbb{R}^{n})$, such that $H|_{\ker E_{\sigma}}$ is an isomorphism between $\ker E_{\sigma}$ and $\ker E$. We define the matrix $G:=E_{\sigma}-\bar{A}HQ_{\sigma}$ and the set $S:=\{x:\bar{A}x\in\operatorname{im}E_{\sigma}\}$.
Lemma 3.1. The following assertions are equivalent.
i) $S\cap\ker E=\{0\}$;
ii) $G$ is a nonsingular matrix;
iii) $\mathbb{R}^{n}=S\oplus\ker E$.
Proof. See [22, Lemma 2.1].
In view of Lemma 3.1.ii), suppose now that the matrix $G$ is nonsingular. Then we have the following lemmas.
Lemma 3.2. The following relations hold.
i) $P_{\sigma}=G^{-1}E_{\sigma}$;
ii) $G^{-1}\bar{A}HQ_{\sigma}=-Q_{\sigma}$;
iii) $\widetilde{Q}:=-HQ_{\sigma}G^{-1}\bar{A}$ is the projector onto $\ker E$ along $S$. We call $\widetilde{Q}$ the canonical projector, and set $\widetilde{P}:=I-\widetilde{Q}$;
iv) Let $\widehat{Q}$ be an arbitrary projector onto $\ker E$, and $\widehat{P}:=I-\widehat{Q}$. Then we have
$$P_{\sigma}G^{-1}\bar{A}=P_{\sigma}G^{-1}\bar{A}\widehat{P},\qquad Q_{\sigma}G^{-1}\bar{A}=Q_{\sigma}G^{-1}\bar{A}\widehat{P}-H^{-1}\widehat{Q}.$$
Proof. See [22, Lemma 2.2].
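For a constant-coefficient toy pair $(E,A)$, chosen below purely for illustration (with $H=I$, which is admissible since $\ker E_{\sigma}=\ker E$ in the constant case, so that $\bar{A}=A$ and the sign conventions are $G^{-1}\bar{A}HQ_{\sigma}=-Q_{\sigma}$ and $\widetilde{Q}=-HQ_{\sigma}G^{-1}\bar{A}$), the identities of Lemma 3.2 can be verified numerically:

```python
def mul(X, Y):  # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(X):    # inverse of a 2x2 matrix
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det], [-X[1][0] / det, X[0][0] / det]]

def close(X, Y):
    return all(abs(X[i][j] - Y[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Constant toy pair: E singular with ker E = span{e2}; then E_sigma = E, A_bar = A, H = I.
E = [[1.0, 0.0], [0.0, 0.0]]
A = [[-1.0, 2.0], [1.0, 1.0]]     # illustrative choice making G = E - A*Q invertible
Q = [[0.0, 0.0], [0.0, 1.0]]      # projector onto ker E
P = [[1.0, 0.0], [0.0, 0.0]]      # P = I - Q

G = [[E[i][j] - mul(A, Q)[i][j] for j in range(2)] for i in range(2)]
Ginv = inv2(G)

print(close(mul(Ginv, E), P))                       # i)   P_sigma = G^{-1} E_sigma
print(close(mul(mul(Ginv, A), Q),
            [[0.0, 0.0], [0.0, -1.0]]))             # ii)  G^{-1} A_bar H Q_sigma = -Q_sigma
Qt = [[-x for x in row] for row in mul(mul(Q, Ginv), A)]   # iii) Qt = -H Q_sigma G^{-1} A_bar
print(close(mul(Qt, Qt), Qt))                       #      Qt is a projector ...
print(close(mul(E, Qt), [[0.0, 0.0], [0.0, 0.0]]))  #      ... with range inside ker E
```

All four checks print `True` for this choice of data.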
Lemma 3.3. The matrices $P_{\sigma}G^{-1}$ and $HQ_{\sigma}G^{-1}$ do not depend on the choice of the operators $H$ and $Q$.
Proof. Let $Q$, $Q'$ be two arbitrary projectors onto $\ker E(t)$, and set $P=I-Q$, $P'=I-Q'$, respectively. Let $H$, $H'$ be two operators in $\mathrm{Gl}(\mathbb{R}^{n})$ such that $H|_{\ker E_{\sigma}}$, $H'|_{\ker E_{\sigma}}$ are isomorphisms between $\ker E_{\sigma}$ and $\ker E$, and let $G':=E_{\sigma}-\bar{A}H'Q'_{\sigma}$. Then we have
$$G^{-1}G'=G^{-1}(E_{\sigma}-\bar{A}H'Q'_{\sigma})=P_{\sigma}-G^{-1}\bar{A}HH^{-1}H'Q'_{\sigma}.$$
Note that $\operatorname{im}(H'Q'_{\sigma})=\ker E$ and $\operatorname{im}(H^{-1}H'Q'_{\sigma})=\ker E_{\sigma}$, so $Q_{\sigma}H^{-1}H'Q'_{\sigma}=H^{-1}H'Q'_{\sigma}$, and then $P_{\sigma}H^{-1}H'Q'_{\sigma}=0$. Hence, by Lemma 3.2.ii),
$$G^{-1}G'=P_{\sigma}-G^{-1}\bar{A}HQ_{\sigma}H^{-1}H'Q'_{\sigma}=P_{\sigma}+Q_{\sigma}H^{-1}H'Q'_{\sigma}=P_{\sigma}+H^{-1}H'Q'_{\sigma},$$
and we obtain
$$G^{-1}=(P_{\sigma}+H^{-1}H'Q'_{\sigma})G'^{-1}.$$
Therefore,
$$P_{\sigma}G^{-1}=P_{\sigma}(P_{\sigma}+H^{-1}H'Q'_{\sigma})G'^{-1}=P_{\sigma}G'^{-1},$$
and
$$HQ_{\sigma}G^{-1}=HQ_{\sigma}(P_{\sigma}+H^{-1}H'Q'_{\sigma})G'^{-1}=HQ_{\sigma}H^{-1}H'Q'_{\sigma}G'^{-1}=H'Q'_{\sigma}G'^{-1}.$$
The proof is complete.
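In the constant-coefficient case ($H=H'=I$, $E_{\sigma}=E$, $\bar{A}=A$), the invariance asserted in Lemma 3.3 can be checked numerically with two different projectors onto $\ker E$; the matrices below are illustrative choices only:

```python
def mul(X, Y):  # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(X):    # inverse of a 2x2 matrix
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det], [-X[1][0] / det, X[0][0] / det]]

def close(X, Y):
    return all(abs(X[i][j] - Y[i][j]) < 1e-12 for i in range(2) for j in range(2))

E = [[1.0, 0.0], [0.0, 0.0]]            # ker E = span{e2}
A = [[-1.0, 2.0], [1.0, 1.0]]           # illustrative; both G's below are invertible
Q1 = [[0.0, 0.0], [0.0, 1.0]]           # orthogonal projector onto ker E
Q2 = [[0.0, 0.0], [3.0, 1.0]]           # oblique projector onto ker E (Q2 Q2 = Q2)
P1 = [[1.0, 0.0], [0.0, 0.0]]           # P1 = I - Q1

G1inv = inv2([[E[i][j] - mul(A, Q1)[i][j] for j in range(2)] for i in range(2)])
G2inv = inv2([[E[i][j] - mul(A, Q2)[i][j] for j in range(2)] for i in range(2)])

# P_sigma G^{-1} is the same whichever admissible pair (H, Q) defines G:
print(close(mul(P1, G1inv), mul(P1, G2inv)))
# ... and the full product H Q_sigma G^{-1} is invariant as well:
print(close(mul(Q1, G1inv), mul(Q2, G2inv)))
```

Both checks print `True` for this data, mirroring the two displayed identities at the end of the proof.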
Definition 3.4. The IDE (3.1) is said to be index-1 tractable on $\mathbb{T}_a$ if $G(t)$ is invertible for almost every $t\in\mathbb{T}_a$ and $G^{-1}\in L^{\infty}_{\mathrm{loc}}(\mathbb{T}_a;\mathbb{K}^{n\times n})$.
Remark 3.5. According to Lemma 3.1, the index-1 property does not depend on the choice of the projector $Q$ and the isomorphism $H$; see also [31, 57].
Let $J\subset\mathbb{T}$ be an interval. We denote
$$C^{1}(J,\mathbb{K}^{n}):=\big\{x(\cdot)\in C_{\mathrm{rd}}(J,\mathbb{K}^{n}): P(t)x(t)\ \text{is delta-differentiable for almost every } t\in J\big\}.$$
Note that $C^{1}(J,\mathbb{K}^{n})$ does not depend on the choice of the projector functions. Since $P(t)$ and $\widehat{P}(t)$ are projectors along $\ker E(t)$, we have $P(t)\widehat{P}(t)=P(t)$ and $\widehat{P}(t)P(t)=\widehat{P}(t)$.
Definition 3.6. The function $x(\cdot)$ is said to be a solution of Equation (3.1) (having index-1) on the interval $J$ if $x(\cdot)\in C^{1}(J,\mathbb{K}^{n})$ and satisfies Equation (3.1) for almost every $t\in J$.
Note that we look for a solution $x(\cdot)$ of Equation (3.1) in the function space $C^{1}(J,\mathbb{K}^{n})$, so $x(\cdot)$ is not necessarily delta-differentiable. Since
$$E_{\sigma}x^{\Delta}=E_{\sigma}P_{\sigma}x^{\Delta}=E_{\sigma}\big(P^{\Delta}x+P_{\sigma}x^{\Delta}-P^{\Delta}x\big)=E_{\sigma}\big((Px)^{\Delta}-P^{\Delta}x\big),$$
we agree that the expression $E_{\sigma}x^{\Delta}$ stands for $E_{\sigma}((Px)^{\Delta}-P^{\Delta}x)$. Multiplying both sides of Equation (3.2) by $P_{\sigma}G^{-1}$ and $Q_{\sigma}G^{-1}$, respectively, we decouple the index-1 Equation (3.2) into the system
$$\begin{cases}(Px)^{\Delta}=(P^{\Delta}+P_{\sigma}G^{-1}\bar{A})Px+P_{\sigma}G^{-1}f,\\ Qx=HQ_{\sigma}G^{-1}\bar{A}Px+HQ_{\sigma}G^{-1}f.\end{cases}$$
Since $x=(P+Q)x=Px+Qx$, the change of variables $u:=Px$, $v:=Qx$ yields
$$u^{\Delta}=(P^{\Delta}+P_{\sigma}G^{-1}\bar{A})u+P_{\sigma}G^{-1}f,\qquad(3.3)$$
$$v=HQ_{\sigma}G^{-1}\bar{A}u+HQ_{\sigma}G^{-1}f.\qquad(3.4)$$
This means that Equation (3.2) is decomposed into two sub-equations: the delta-differential part (3.3) and the algebraic part (3.4). Clearly, we can solve (3.3) for $u$ and then use (3.4) to compute $v$; finally, $x=u+v$. Therefore, we only need to impose an initial condition on the differential component (3.3). Let $t_0\ge a$. Inspired by the above decoupling procedure, we state the initial condition $u(t_0)=P(t_0)x(t_0)$, or equivalently,
$$P(t_0)(x(t_0)-x_0)=0,\qquad x_0\in\mathbb{K}^{n}.\qquad(3.5)$$
Remark 3.7. Multiplying both sides of Equation (3.3) by $Q_{\sigma}$ yields $Q_{\sigma}u^{\Delta}=Q_{\sigma}P^{\Delta}u$. Noting that $(Qu)^{\Delta}=Q^{\Delta}u+Q_{\sigma}u^{\Delta}$ and $0=(QP)^{\Delta}=Q^{\Delta}P+Q_{\sigma}P^{\Delta}$, we arrive at $(Qu)^{\Delta}=Q^{\Delta}Qu$. Thus, if $Q(t_0)u(t_0)=0$, then $Q(t)u(t)=0$ for all $t\in\mathbb{T}_{t_0}$. This means that every solution of (3.3) starting in $\operatorname{im}P(t_0)$ remains in $\operatorname{im}P(t)$ for every $t$.
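On the simplest discrete time scale $\mathbb{T}=\mathbb{N}$ (so $\mu\equiv 1$ and $x^{\Delta}(t)=x(t+1)-x(t)$), the decoupling (3.3)–(3.4) can be exercised numerically on a toy constant-coefficient index-1 pair. All matrices and the forcing below are illustrative choices, with $H=I$ and $P^{\Delta}=0$; the reconstructed $x=u+v$ is checked against the original implicit equation:

```python
# Discrete time scale T = N (mu = 1): E (x_{t+1} - x_t) = A x_t + f_t, with E singular.
# Constant-coefficient decoupling (3.3)-(3.4) (P^Delta = 0, A_bar = A, H = I):
#   u_{t+1} = u_t + P G^{-1} A u_t + P G^{-1} f_t,   v_t = Q G^{-1} (A u_t + f_t).

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mv(X, y):
    return [sum(X[i][k] * y[k] for k in range(2)) for i in range(2)]

def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det], [-X[1][0] / det, X[0][0] / det]]

E = [[1.0, 0.0], [0.0, 0.0]]
A = [[-1.0, 2.0], [1.0, 1.0]]          # illustrative index-1 pair: G = E - A*Q invertible
Q = [[0.0, 0.0], [0.0, 1.0]]
P = [[1.0, 0.0], [0.0, 0.0]]
Ginv = inv2([[E[i][j] - mul(A, Q)[i][j] for j in range(2)] for i in range(2)])

f = [[0.3, -0.1], [0.0, 0.2], [0.1, 0.0], [0.0, 0.0]]   # sample forcing f_0, ..., f_3
u = mv(P, [1.0, 5.0])                  # initial condition u_0 = P x_0
xs = []
for t in range(4):
    rhs = [a + b for a, b in zip(mv(A, u), f[t])]
    v = mv(mul(Q, Ginv), rhs)          # algebraic part (3.4)
    xs.append([u[i] + v[i] for i in range(2)])
    u = [u[i] + mv(mul(P, Ginv), rhs)[i] for i in range(2)]   # differential part (3.3)

# residual of the original implicit equation at t = 0, 1, 2:
res = max(
    abs(mv(E, [xs[t + 1][i] - xs[t][i] for i in range(2)])[j] - (mv(A, xs[t])[j] + f[t][j]))
    for t in range(3) for j in range(2)
)
print(res < 1e-9)
```

The residual vanishes (up to rounding), and $u$ stays in $\operatorname{im}P$, as Remark 3.7 predicts.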
Consider the homogeneous case, i.e., $f(t)=0$:
$$E_{\sigma}(t)x^{\Delta}(t)=A(t)x(t),\qquad(3.6)$$
with the initial condition $P(t_0)(x(t_0)-x_0)=0$. The Cauchy operator $\Phi(t,s)$ generated by Equation (3.6) is defined by
$$\begin{cases}E_{\sigma}(t)\Phi^{\Delta}(t,s)=A(t)\Phi(t,s),\\ P(s)(\Phi(s,s)-I)=0.\end{cases}$$
We can solve for the Cauchy operator $\Phi(t,s)$ by using the canonical projector $\widetilde{Q}(t)=-H(t)Q_{\sigma}(t)G^{-1}(t)\bar{A}(t)$ from Lemma 3.2. Let $\widetilde{P}(t)=I-\widetilde{Q}(t)=I+H(t)Q_{\sigma}(t)G^{-1}(t)\bar{A}(t)$, and let $\Phi_0(t,s)$ denote the Cauchy operator generated by the system
$$\begin{cases}\Phi_0^{\Delta}(t,s)=\big(P^{\Delta}(t)+P_{\sigma}(t)G^{-1}(t)\bar{A}(t)\big)\Phi_0(t,s),\\ \Phi_0(s,s)=I.\end{cases}$$
Then the Cauchy operator of Equation (3.6) is defined as follows:
$$\Phi(t,s)=\widetilde{P}(t)\Phi_0(t,s)P(s).\qquad(3.7)$$
By Lemma 3.2 and Remark 3.7, we see that
$$P(t)\Phi(t,s)=P(t)\widetilde{P}(t)\Phi_0(t,s)P(s)=\Phi_0(t,s)P(s),\qquad(3.8)$$
and hence
$$\Phi(r,t)\Phi(t,s)=\Phi(r,s).$$
By the variation of constants formula, the unique solution of Equation (3.3) is given by
$$u(t)=\Phi_0(t,t_0)u(t_0)+\int_{t_0}^{t}\Phi_0(t,\sigma(s))P_{\sigma}(s)G^{-1}(s)f(s)\,\Delta s.\qquad(3.9)$$
Moreover, by (3.4), (3.7), (3.8), and (3.9) we have
$$\begin{aligned}u(t)+v(t)&=\big(I+H(t)Q_{\sigma}(t)G^{-1}(t)\bar{A}(t)\big)\Phi_0(t,t_0)u(t_0)\\&\quad+\big(I+H(t)Q_{\sigma}(t)G^{-1}(t)\bar{A}(t)\big)\int_{t_0}^{t}\Phi_0(t,\sigma(s))P_{\sigma}(s)G^{-1}(s)f(s)\,\Delta s+H(t)Q_{\sigma}(t)G^{-1}(t)f(t)\\&=\widetilde{P}(t)\Phi_0(t,t_0)P(t_0)x_0+\int_{t_0}^{t}\widetilde{P}(t)\Phi_0(t,\sigma(s))P_{\sigma}(s)G^{-1}(s)f(s)\,\Delta s+H(t)Q_{\sigma}(t)G^{-1}(t)f(t).\end{aligned}$$
Therefore, the unique solution of the initial value problem for the IDE (3.1) is
$$x(t)=\Phi(t,t_0)P(t_0)x_0+\int_{t_0}^{t}\Phi(t,\sigma(s))P_{\sigma}(s)G^{-1}(s)f(s)\,\Delta s+H(t)Q_{\sigma}(t)G^{-1}(t)f(t).\qquad(3.10)$$
From now on, we suppose that the following assumption holds.
Assumption 3.1. There exists a bounded, differentiable projector $Q(\cdot)$ onto $\ker E$. Let us denote $P:=I-Q$ and $K_0:=\sup_{t\ge a}\|P(t)\|$.