Lawrence Perko, Differential Equations and Dynamical Systems (Springer, 1991). OCR scan, 208 pages.


Texts in Applied Mathematics

1. Sirovich: Introduction to Applied Mathematics
2. Wiggins: Introduction to Applied Nonlinear Dynamical Systems and Chaos
3. Hale/Koçak: Differential Equations: An Introduction to Dynamics and Bifurcations
4. Chorin/Marsden: A Mathematical Introduction to Fluid Mechanics, 2nd ed.
5. Hubbard/West: Differential Equations: A Dynamical Systems Approach: Ordinary Differential Equations
6. Sontag: Mathematical Control Theory: Deterministic Finite Dimensional Systems
7. Perko: Differential Equations and Dynamical Systems

Contents

Series Preface
Preface

1 Linear Systems
1.1 Uncoupled Linear Systems
1.2 Diagonalization
1.3 Exponentials of Operators
1.4 The Fundamental Theorem for Linear Systems
1.5 Linear Systems in R^2
1.6 Complex Eigenvalues
1.7 Multiple Eigenvalues
1.8 Jordan Forms
1.9 Stability Theory
1.10 Nonhomogeneous Linear Systems

2 Nonlinear Systems: Local Theory
2.1 Some Preliminary Concepts and Definitions
2.2 The Fundamental Existence-Uniqueness Theorem
2.3 Dependence on Initial Conditions and Parameters
2.4 The Maximal Interval of Existence
2.5 The Flow Defined by a Differential Equation
2.6 Linearization
2.7 The Stable Manifold Theorem
2.8 The Hartman-Grobman Theorem
2.9 Stability and Liapunov Functions
2.10 Saddles, Nodes, Foci and Centers
2.11 Nonhyperbolic Critical Points in R^2


3 Nonlinear Systems: Global Theory
3.1 Dynamical Systems and Global Existence Theorems
3.2 Limit Sets and Attractors
3.3 Periodic Orbits, Limit Cycles and Separatrix Cycles
3.4 The Poincaré Map
3.5 The Stable Manifold Theorem for Periodic Orbits
3.6 Hamiltonian Systems with Two Degrees of Freedom
3.7 The Poincaré-Bendixson Theory in R^2
3.8 Lienard Systems
3.9 Bendixson's Criteria
3.10 The Poincaré Sphere and the Behavior at Infinity
3.11 Global Phase Portraits and Separatrix Configurations
3.12 Index Theory

4 Nonlinear Systems: Bifurcation Theory
4.1 Structural Stability and Peixoto's Theorem
4.2 Bifurcations at Nonhyperbolic Equilibrium Points
4.3 Hopf Bifurcations and Bifurcations of Limit Cycles from a Multiple Focus
4.4 Bifurcations at Nonhyperbolic Periodic Orbits
4.5 One-Parameter Families of Rotated Vector Fields
4.6 The Global Behavior of One-Parameter Families of Periodic Orbits
4.7 Homoclinic Bifurcations
4.8 Melnikov's Method

References
Index

Lawrence Perko
Differential Equations and Dynamical Systems
With 177 Illustrations
Springer-Verlag


Lawrence Perko
Department of Mathematics
Northern Arizona University
Flagstaff, AZ 86015, USA

Editors:

F. John, Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
J.E. Marsden, Department of Mathematics, University of California, Berkeley, CA 94720, USA
L. Sirovich, Division of Applied Mathematics, Brown University, Providence, RI 02912, USA
M. Golubitsky, Department of Mathematics, University of Houston, Houston, TX 77004, USA
W. Jäger, Department of Applied Mathematics, Universität Heidelberg, Im Neuenheimer Feld 294, 6900 Heidelberg, FRG

Mathematics Subject Classification: 34A34, 34C35, 58F21, 58F25, 70K10

Printed on acid-free paper.

© 1994 Springer-Verlag New York, Inc.

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone.

Photocomposed copy prepared using LaTeX.

Printed and bound by R.R. Donnelley and Sons, Harrisonburg, Virginia. Printed in the United States of America.

9 8 7 6 5 4 3 2 1

ISBN 0-387-97443-1 Springer-Verlag New York Berlin Heidelberg
ISBN 3-540-97443-1 Springer-Verlag Berlin Heidelberg New York


Series Preface

Mathematics is playing an ever more important role in the physical and biological sciences, provoking a blurring of boundaries between scientific disciplines and a resurgence of interest in the modern as well as the classical techniques of applied mathematics. This renewal of interest, both in research and teaching, has led to the establishment of the series: Texts in Applied Mathematics (TAM).

The development of new courses is a natural consequence of a high level of excitement on the research frontier as newer techniques, such as numerical and symbolic computer systems, dynamical systems, and chaos, mix with and reinforce the traditional methods of applied mathematics. Thus, the purpose of this textbook series is to meet the current and future needs of these advances and encourage the teaching of new courses.


Preface

This book covers those topics necessary for a clear understanding of the qualitative theory of ordinary differential equations. It is written for upper-division or first-year graduate students. It begins with a study of linear systems of ordinary differential equations, a topic already familiar to the student who has completed a first course in differential equations. An efficient method for solving any linear system of ordinary differential equations is presented in Chapter 1.

The major part of this book is devoted to a study of nonlinear systems of ordinary differential equations. Since most nonlinear differential equations cannot be solved, this book focuses on the qualitative or geometrical theory of nonlinear systems of differential equations originated by Henri Poincaré in his work on differential equations at the end of the nineteenth century. Our primary goal is to describe the qualitative behavior of the solution set of a given system of differential equations. In order to achieve this goal, it is first necessary to develop the local theory for nonlinear systems. This is done in Chapter 2, which includes the fundamental local existence-uniqueness theorem, the Hartman-Grobman Theorem and the Stable Manifold Theorem. These latter two theorems establish that the qualitative behavior of the solution set of a nonlinear system of ordinary differential equations near an equilibrium point is typically the same as the qualitative behavior of the solution set of the corresponding linearized system near the equilibrium point.

After developing the local theory, we turn to the global theory in Chapter 3. This includes a study of limit sets of trajectories and the behavior of trajectories at infinity. Some unsolved problems of current research interest are also presented in Chapter 3. For example, the Poincaré-Bendixson Theorem, established in Chapter 3, describes the limit sets of trajectories of two-dimensional systems; however, the limit sets of trajectories of three-dimensional (and higher dimensional) systems can be much more complicated, and establishing the nature of these limit sets is a topic of current research interest in mathematics. In particular, higher dimensional systems can exhibit strange attractors and chaotic dynamics. All of the preliminary material necessary for studying these more advanced topics is contained in this textbook. This book can therefore serve as a springboard for those students interested in continuing their study of ordinary differential equations and dynamical systems. Chapter 3 ends with a technique


for constructing the global phase portrait of a dynamical system. The global phase portrait describes the qualitative behavior of the solution set. In general, this is as close as we can come to "solving" nonlinear systems.

1 Linear Systems

This chapter presents a study of linear systems of ordinary differential equations:

x' = Ax    (1)

where x ∈ R^n, A is an n x n matrix and x' = dx/dt denotes the column vector of derivatives (dx1/dt, ..., dxn/dt)^T.

It is shown that the solution of the linear system (1) together with the initial condition x(0) = x0 is given by

x(t) = e^{At} x0

where e^{At} is an n x n matrix function defined by its Taylor series. A good portion of this chapter is concerned with the computation of the matrix e^{At} in terms of the eigenvalues and eigenvectors of the square matrix A.

Throughout this book all vectors will be written as column vectors and A^T will denote the transpose of the matrix A.

1.1 Uncoupled Linear Systems

The method of separation of variables can be used to solve the first-order linear differential equation

x' = ax.

The general solution is given by

x(t) = c e^{at}

where the constant c = x(0), the value of the function x(t) at time t = 0.


Consider the uncoupled linear system

x1' = -x1
x2' = 2 x2.

This system can be written in matrix form as

x' = Ax,    A = [-1 0; 0 2].    (1)

Note that in this case A is a diagonal matrix, A = diag[-1, 2], and in general whenever A is a diagonal matrix, the system (1) reduces to an uncoupled linear system. The general solution of the above uncoupled linear system can once again be found by the method of separation of variables. It is given by

x1(t) = c1 e^{-t}
x2(t) = c2 e^{2t}    (2)

or equivalently by

x(t) = [e^{-t} 0; 0 e^{2t}] c    (2')

where c = x(0). Note that the solution curves (2) lie on the algebraic curves y = k/x^2 where the constant k = c1^2 c2. The solution (2) or (2') defines a motion along these curves; i.e., each point c ∈ R^2 moves to the point x(t) ∈ R^2 given by (2') after time t. This motion can be described geometrically by drawing the solution curves (2) in the x1, x2 plane, referred to as the phase plane, and by using arrows to indicate the direction of the motion along these curves with increasing time t; cf. Figure 1. For c1 = c2 = 0, x1(t) = 0 and x2(t) = 0 for all t ∈ R and the origin is referred to as an equilibrium point in this example. Note that solutions starting on the x1-axis approach the origin as t → ∞ and that solutions starting on the x2-axis approach the origin as t → -∞.

The phase portrait of a system of differential equations such as (1) with x ∈ R^n is the set of all solution curves of (1) in the phase space R^n. Figure 1 gives a geometrical representation of the phase portrait of the uncoupled linear system considered above. The dynamical system defined by the linear system (1) in this example is simply the mapping φ: R x R^2 → R^2 defined by the solution x(t, c) given by (2'); i.e., the dynamical system for this example is given by

φ(t, c) = [e^{-t} 0; 0 e^{2t}] c.

Geometrically, the dynamical system describes the motion of the points in phase space along the solution curves defined by the system of differential equations.

Figure 1

The function

f(x) = Ax

on the right-hand side of (1) defines a mapping f: R^2 → R^2 (linear in this case). This mapping (which need not be linear) defines a vector field on R^2; i.e., to each point x ∈ R^2, the mapping f assigns a vector f(x). If we draw each vector f(x) with its initial point at the point x ∈ R^2, we obtain a geometrical representation of the vector field as shown in Figure 2. Note that at each point x in the phase space R^2, the solution curves (2) are tangent to the vectors in the vector field Ax. This follows since at time t = t0, the velocity vector v0 = x'(t0) is tangent to the curve x = x(t) at the point x0 = x(t0) and since x' = Ax along the solution curves.

Consider the following uncoupled linear system in R^3:

x1' = x1
x2' = x2
x3' = -x3.    (3)


Figure 2. The vector field f(x) = Ax. Figure 3. The phase portrait of system (3).

The general solution is given by

x1(t) = c1 e^t
x2(t) = c2 e^t
x3(t) = c3 e^{-t}.

The phase portrait for this system is shown in Figure 3 above. The x1, x2 plane is referred to as the unstable subspace of the system (3) and the x3 axis is called the stable subspace of the system (3). Precise definitions of the stable and unstable subspaces of a linear system will be given in the next section.
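The same kind of check works in R^3 (a small script of ours, not from the text): solutions of (3) starting on the x3-axis decay to the origin, while solutions starting in the x1, x2 plane stay in that plane and grow without bound.

```python
import math

def flow(c, t):
    # General solution of system (3): x1' = x1, x2' = x2, x3' = -x3.
    c1, c2, c3 = c
    return (c1 * math.exp(t), c2 * math.exp(t), c3 * math.exp(-t))

# A point on the stable subspace (the x3-axis) tends to the origin...
print(flow((0.0, 0.0, 1.0), 10.0))
# ...while a point in the unstable subspace (the x1,x2 plane) blows up.
print(flow((1.0, 1.0, 0.0), 10.0))
```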

PROBLEM SET 1

1. Find the general solution and draw the phase portrait for the following linear systems:

(a) x1' = x1, x2' = x2
(b) x1' = x1, x2' = 2 x2
(c) x1' = x1, x2' = 3 x2
(d) x1' = -x2, x2' = x1
(e) x1' = -x1 + x2, x2' = -x2

Hint: Write (d) as a second-order linear differential equation with constant coefficients, solve it by standard methods, and note that x1^2 + x2^2 = constant on the solution curves. In (e), find x2(t) = c2 e^{-t} and then the x1-equation becomes a first-order linear differential equation.

2. Find the general solution and draw the phase portraits for the following three-dimensional linear systems:

(a) x1' = x1, x2' = x2, x3' = x3
(b) x1' = -x1, x2' = -x2, x3' = x3
(c) x1' = -x2, x2' = x1, x3' = -x3

Hint: In (c), show that the solution curves lie on right circular cylinders perpendicular to the x1, x2 plane. Identify the stable and unstable subspaces in (a) and (b). The x3-axis is the stable subspace in (c) and the x1, x2 plane is called the center subspace in (c); cf. Section 1.9.


3. Find the general solution of the linear system

x1' = x1
x2' = a x2

where a is a constant. Sketch the phase portraits for a = -1, a = 0 and a = 1 and notice that the qualitative structure of the phase portrait is the same for all a < 0 as well as for all a > 0, but that it changes at the parameter value a = 0.

4. Find the general solution of the linear system (1) when A is the n x n diagonal matrix A = diag[λ1, λ2, ..., λn]. What condition on the eigenvalues λ1, ..., λn will guarantee that lim_{t→∞} x(t) = 0 for all solutions x(t) of (1)?

5. What is the relationship between the vector fields defined by

x' = Ax and x' = kAx

where k is a non-zero constant? (Describe this relationship both for k positive and k negative.)

6. (a) If u(t) and v(t) are solutions of the linear system (1), prove that for any constants a and b, w(t) = a u(t) + b v(t) is a solution. (b) For the matrix A given in the text [entries illegible in the scan], find solutions u(t) and v(t) of x' = Ax such that every solution is a linear combination of u(t) and v(t).

1.2 Diagonalization

The algebraic technique of diagonalizing a square matrix A can be used to reduce the linear system

x' = Ax    (1)

to an uncoupled linear system. We first consider the case when A has real, distinct eigenvalues. The following theorem from linear algebra then allows us to solve the linear system (1).

Theorem. If the eigenvalues λ1, λ2, ..., λn of an n x n matrix A are real and distinct, then any set of corresponding eigenvectors {v1, v2, ..., vn} forms a basis for R^n, the matrix P = [v1 v2 ... vn] is invertible and

P^{-1} A P = diag[λ1, ..., λn].

This theorem says that if a linear transformation T: R^n → R^n is represented by the n x n matrix A with respect to the standard basis {e1, e2, ..., en} for R^n, then with respect to any basis of eigenvectors {v1, v2, ..., vn}, T is represented by the diagonal matrix of eigenvalues, diag[λ1, λ2, ..., λn]. A proof of this theorem can be found, for example, in Lowenthal [Lo].

In order to reduce the system (1) to an uncoupled linear system using the above theorem, define the linear transformation of coordinates

y = P^{-1} x

where P is the invertible matrix defined in the theorem. Then

x = Py,

y' = P^{-1} x' = P^{-1} A x = P^{-1} A P y

and, according to the above theorem, we obtain the uncoupled linear system

y' = diag[λ1, ..., λn] y.

This uncoupled linear system has the solution

y(t) = diag[e^{λ1 t}, ..., e^{λn t}] y(0).

(Cf. problem 4 in Problem Set 1.) And then since y(0) = P^{-1} x(0) and x(t) = P y(t), it follows that (1) has the solution

x(t) = P E(t) P^{-1} x(0)    (2)

where E(t) is the diagonal matrix

E(t) = diag[e^{λ1 t}, ..., e^{λn t}].

Corollary. Under the hypotheses of the above theorem, the solution of the linear system (1) is given by the function x(t) defined by (2).
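The corollary translates directly into a few lines of NumPy (a sketch of ours, assuming A has real, distinct eigenvalues so that numpy.linalg.eig returns an invertible eigenvector matrix P):

```python
import numpy as np

def solve_linear_system(A, x0, t):
    # x(t) = P E(t) P^{-1} x(0), equation (2) of this section.
    lam, P = np.linalg.eig(A)        # columns of P are eigenvectors of A
    E = np.diag(np.exp(lam * t))     # E(t) = diag[e^{lambda_j t}]
    return P @ E @ np.linalg.inv(P) @ x0

# The diagonal example x1' = -x1, x2' = 2*x2 recovers (c1 e^{-t}, c2 e^{2t}).
A = np.array([[-1.0, 0.0], [0.0, 2.0]])
print(solve_linear_system(A, np.array([1.0, 3.0]), 1.0))
```

Since P E(t) P^{-1} does not depend on how eig orders the eigenvalues, the result is the same for any ordering.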


Example. Consider the linear system

x1' = -x1 - 3 x2
x2' = 2 x2.

The matrix A = [-1 -3; 0 2] has eigenvalues λ1 = -1 and λ2 = 2 with corresponding eigenvectors v1 = (1, 0)^T and v2 = (-1, 1)^T. The matrix P and its inverse are then given by

P = [1 -1; 0 1] and P^{-1} = [1 1; 0 1].

The student should verify that

P^{-1} A P = [-1 0; 0 2].

Then under the coordinate transformation y = P^{-1} x, we obtain the uncoupled linear system

y1' = -y1
y2' = 2 y2

which has the general solution y1(t) = c1 e^{-t}, y2(t) = c2 e^{2t}. The phase portrait for this system is given in Figure 1 in Section 1.1, which is reproduced below. And according to the above corollary, the general solution to the original linear system of this example is given by

x(t) = P [e^{-t} 0; 0 e^{2t}] P^{-1} c

where c = x(0), or equivalently by

x1(t) = c1 e^{-t} + c2 (e^{-t} - e^{2t})
x2(t) = c2 e^{2t}.    (4)

The phase portrait for the linear system of this example can be found by sketching the solution curves defined by (4). It is shown in Figure 2. The phase portrait in Figure 2 can also be obtained from the phase portrait in Figure 1 by applying the linear transformation of coordinates x = Py. Note that the subspaces spanned by the eigenvectors v1 and v2 of the matrix A determine the stable and unstable subspaces of the linear system (1) according to the following definition:

Suppose that the n x n matrix A has k negative eigenvalues λ1, ..., λk and n - k positive eigenvalues λ_{k+1}, ..., λn and that these eigenvalues are distinct. Let {v1, ..., vn} be a corresponding set of eigenvectors. Then the stable and unstable subspaces of the linear system (1), E^s and E^u, are the linear subspaces spanned by {v1, ..., vk} and {v_{k+1}, ..., vn} respectively; i.e.,

E^s = Span{v1, ..., vk}
E^u = Span{v_{k+1}, ..., vn}.

If the matrix A has pure imaginary eigenvalues, then there is also a center subspace E^c; cf. Problem 2(c) in Section 1.1. The stable, unstable and center subspaces are defined for the general case in Section 1.9.
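In NumPy, E^s and E^u can be read off from an eigendecomposition (our helper, again assuming real, distinct, non-zero eigenvalues):

```python
import numpy as np

def stable_unstable(A):
    # Split the eigenvectors of A into spanning sets for the stable
    # subspace E^s (negative eigenvalues) and the unstable subspace
    # E^u (positive eigenvalues).
    lam, V = np.linalg.eig(A)
    Es = [V[:, j] for j in range(len(lam)) if lam[j] < 0]
    Eu = [V[:, j] for j in range(len(lam)) if lam[j] > 0]
    return Es, Eu

# For the example above, E^s is spanned by v1 = (1, 0)^T and
# E^u by v2 = (-1, 1)^T (eig returns them up to normalization).
Es, Eu = stable_unstable(np.array([[-1.0, -3.0], [0.0, 2.0]]))
print(Es, Eu)
```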

PROBLEM SET 2

1. Find the eigenvalues and eigenvectors of the matrix A and show that B = P^{-1} A P is a diagonal matrix. Solve the linear system y' = By and then solve x' = Ax using the above corollary. And then sketch the phase portraits in both the x plane and the y plane, for each of the 2 x 2 matrices (a), (b) and (c) given in the text [entries illegible in the scan].

2. Find the eigenvalues and eigenvectors for the matrix A, solve the linear system x' = Ax, determine the stable and unstable subspaces for the linear system, and sketch the phase portrait for

x' = [1 0 0; 1 2 0; 1 0 -1] x.

3. Write the following linear differential equations with constant coefficients in the form of the linear system (1) and solve them:

(a) x'' + x' - 2x = 0
(b) x'' + x = 0
(c) x''' - 2x'' - x' + 2x = 0

Hint: Let x1 = x, x2 = x', etc.

4. Using the corollary of this section, solve the initial value problem

x' = Ax
x(0) = x0

(a) with A given by 1(a) above and x0 = (1, 2)^T;
(b) with A given in problem 2 above and x0 = (1, 2, 3)^T.

5. Let the n x n matrix A have real, distinct eigenvalues. Find conditions on the eigenvalues that are necessary and sufficient for lim_{t→∞} x(t) = 0, where x(t) is any solution of x' = Ax.

6. Let the n x n matrix A have real, distinct eigenvalues. Let φ(t, x0) be the solution of the initial value problem

x' = Ax
x(0) = x0.

Show that for each fixed t ∈ R,

lim_{y0 → x0} φ(t, y0) = φ(t, x0).

This shows that the solution φ(t, x0) is a continuous function of the initial condition.

7. Let the 2 x 2 matrix A have real, distinct eigenvalues λ and μ. Suppose that an eigenvector of λ is (1, 0)^T and an eigenvector of μ is (1, 1)^T. Sketch the phase portraits of x' = Ax for the following cases:

(a) 0 < λ < μ
(b) 0 < μ < λ
(c) λ < μ < 0
(d) λ < 0 < μ
(e) μ < 0 < λ
(f) λ = 0, μ > 0

1.3 Exponentials of Operators

In order to define the exponential of a linear operator T: R^n → R^n, it is necessary to define the concept of convergence in the linear space L(R^n) of linear operators on R^n. This is done using the operator norm of T defined by

||T|| = max_{|x| ≤ 1} |T(x)|

where |x| denotes the Euclidean norm of x ∈ R^n; i.e., |x| = sqrt(x1^2 + ... + xn^2). The operator norm has all of the usual properties of a norm, namely, for S, T ∈ L(R^n):

(a) ||T|| ≥ 0 and ||T|| = 0 if and only if T = 0
(b) ||kT|| = |k| ||T|| for k ∈ R
(c) ||S + T|| ≤ ||S|| + ||T||.

It follows from the Cauchy-Schwarz inequality that if T ∈ L(R^n) is represented by the matrix A with respect to the standard basis for R^n, then ||A|| ≤ sqrt(n) L, where L is the maximum length of the rows of A.
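Numerically, the operator norm of a matrix is its largest singular value, which numpy.linalg.norm(A, 2) computes; the row-length bound above can then be checked directly (our code, not the book's):

```python
import numpy as np

def operator_norm(A):
    # ||A|| = max over |x| <= 1 of |Ax|; for a matrix this equals the
    # largest singular value, returned by numpy.linalg.norm(A, 2).
    return np.linalg.norm(A, 2)

def row_bound(A):
    # The Cauchy-Schwarz bound: sqrt(n) times the maximum row length.
    n = A.shape[0]
    return np.sqrt(n) * max(np.linalg.norm(A[i]) for i in range(n))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(operator_norm(A), row_bound(A))   # the norm never exceeds the bound
```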

The convergence of a sequence of operators Tk ∈ L(R^n) is then defined in terms of the operator norm as follows:

Definition 1. A sequence of linear operators Tk ∈ L(R^n) is said to converge to a linear operator T ∈ L(R^n) as k → ∞, i.e.,

lim_{k→∞} Tk = T,

if for all ε > 0 there exists an N such that for k ≥ N, ||T - Tk|| < ε.

Lemma. For S, T ∈ L(R^n) and x ∈ R^n,

(1) |T(x)| ≤ ||T|| |x|
(2) ||TS|| ≤ ||T|| ||S||
(3) ||T^k|| ≤ ||T||^k for k = 0, 1, 2, ....


And (3) is an immediate consequence of (2).

Theorem. Given T ∈ L(R^n) and t0 > 0, the series

sum_{k=0}^∞ T^k t^k / k!

is absolutely and uniformly convergent for all |t| ≤ t0.

Proof. Let ||T|| = a. It then follows from the above lemma that for |t| ≤ t0,

||T^k t^k / k!|| ≤ ||T||^k |t|^k / k! ≤ a^k t0^k / k!.

But

sum_{k=0}^∞ a^k t0^k / k! = e^{a t0}.

It therefore follows from the Weierstrass M-Test that the series sum_{k=0}^∞ T^k t^k / k! is absolutely and uniformly convergent for all |t| ≤ t0; cf. [R], p. 148.

The exponential of the linear operator T is then defined by the absolutely convergent series

e^T = sum_{k=0}^∞ T^k / k!.

It follows from properties of limits that e^T is a linear operator on R^n and it follows as in the proof of the above theorem that ||e^T|| ≤ e^{||T||}.

Since our main interest in this chapter is the solution of linear systems of the form

x' = Ax,

we shall assume that the linear transformation T on R^n is represented by the n x n matrix A with respect to the standard basis for R^n and define the exponential e^{At}.

Definition 2. Let A be an n x n matrix. Then for t ∈ R,

e^{At} = sum_{k=0}^∞ A^k t^k / k!.

For an n x n matrix A, e^{At} is an n x n matrix which can be computed in terms of the eigenvalues and eigenvectors of A. This will be carried out in the remainder of this chapter. As in the proof of the above theorem, ||e^{At}|| ≤ e^{||A|| |t|} where ||A|| = ||T|| and T is the linear transformation T(x) = Ax.
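Definition 2 can be implemented verbatim by truncating the series (our sketch; for matrices with large ||A|| |t|, a library routine such as scipy.linalg.expm is the more robust choice):

```python
import numpy as np

def expm_series(A, t, terms=40):
    # e^{At} = sum_{k>=0} A^k t^k / k!, truncated after `terms` terms.
    M = A * t
    term = np.eye(A.shape[0])
    total = term.copy()
    for k in range(1, terms):
        term = term @ M / k          # next term: previous term * (At) / k
        total = total + term
    return total

# For a diagonal matrix the series reduces to exponentials of the entries.
A = np.array([[-1.0, 0.0], [0.0, 2.0]])
print(expm_series(A, 1.0))
```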

We next establish some basic properties of the linear transformation e^T in order to facilitate the computation of e^T or of the n x n matrix e^A.

Proposition 1. If P and T are linear transformations on R^n and S = P T P^{-1}, then e^S = P e^T P^{-1}.

Proof. It follows from the definition of e^S that

e^S = lim_{n→∞} sum_{k=0}^n (P T P^{-1})^k / k! = P ( lim_{n→∞} sum_{k=0}^n T^k / k! ) P^{-1} = P e^T P^{-1}.

The next result follows directly from Proposition 1 and Definition 2.

Corollary 1. If P^{-1} A P = diag[λj], then e^{At} = P diag[e^{λj t}] P^{-1}.

Proposition 2. If S and T are linear transformations on R^n which commute, i.e., which satisfy ST = TS, then e^{S+T} = e^S e^T.

Proof. If ST = TS, then by the binomial theorem

(S + T)^n = n! sum_{j+k=n} S^j T^k / (j! k!).

Therefore,

e^{S+T} = sum_{n=0}^∞ sum_{j+k=n} S^j T^k / (j! k!) = ( sum_{j=0}^∞ S^j / j! ) ( sum_{k=0}^∞ T^k / k! ) = e^S e^T.

We have used the fact that the product of two absolutely convergent series is an absolutely convergent series which is given by its Cauchy product; cf. [R], p. 74.

Upon setting S = -T in Proposition 2, we obtain

Corollary 2. If T is a linear transformation on R^n, the inverse of the linear transformation e^T is given by (e^T)^{-1} = e^{-T}.


Corollary 3. If

A = [a -b; b a],

then

e^A = e^a [cos b -sin b; sin b cos b].

Proof. If λ = a + ib, it follows by induction that

[a -b; b a]^k = [Re(λ^k) -Im(λ^k); Im(λ^k) Re(λ^k)].

Thus,

e^A = sum_{k=0}^∞ [Re(λ^k/k!) -Im(λ^k/k!); Im(λ^k/k!) Re(λ^k/k!)] = [Re(e^λ) -Im(e^λ); Im(e^λ) Re(e^λ)] = e^a [cos b -sin b; sin b cos b].

Note that if a = 0 in Corollary 3, then e^A is simply a rotation through b radians.

Corollary 4. If

A = [a b; 0 a],

then

e^A = e^a [1 b; 0 1].

Proof. Write A = aI + B where

B = [0 b; 0 0].

Then aI commutes with B and by Proposition 2,

e^A = e^{aI} e^B = e^a e^B.

And from the definition,

e^B = I + B + B^2/2! + ... = I + B

since by direct computation B^2 = B^3 = ... = 0.

We can now compute the matrix e^{At} for any 2 x 2 matrix A. In Section 1.8 of this chapter it is shown that there is an invertible 2 x 2 matrix P (whose columns consist of generalized eigenvectors of A) such that the matrix B = P^{-1} A P has one of the following forms:

B = [λ 0; 0 μ],  B = [λ 1; 0 λ]  or  B = [a -b; b a].

It then follows from the above corollaries and Definition 2 that

e^{Bt} = [e^{λt} 0; 0 e^{μt}],  e^{Bt} = e^{λt} [1 t; 0 1]  or  e^{Bt} = e^{at} [cos bt -sin bt; sin bt cos bt]

respectively. And by Proposition 1, the matrix e^{At} is then given by

e^{At} = P e^{Bt} P^{-1}.

As we shall see in Section 1.4, finding the matrix e^{At} is equivalent to solving the linear system (1) in Section 1.1.

PROBLEM SET 3

1. Compute the operator norm of the linear transformation defined by each of the matrices (a), (b) and (c) given in the text [entries illegible in the scan].

Hint: In (c), maximize |Ax|^2 = 26 x1^2 + 10 x1 x2 + x2^2 subject to the constraint x1^2 + x2^2 = 1 and use the result of Problem 2; or use the fact that ||A|| = [max eigenvalue of A^T A]^{1/2}.

2. Show that the operator norm of a linear transformation T on R^n satisfies

||T|| = max_{|x| ≤ 1} |T(x)| = sup_{x ≠ 0} |T(x)| / |x|.

3. Use the lemma in this section to show that if T is an invertible linear transformation, then ||T|| > 0 and

||T^{-1}|| ≥ 1 / ||T||.

4. If T is a linear transformation on R^n with ||T - I|| < 1, prove that T is invertible and that the series sum_{k=0}^∞ (I - T)^k converges absolutely to T^{-1}.

Hint: Use the geometric series.

5. Compute the exponentials of the following matrices:

(a), (b), (c): [matrix entries illegible in the scan].

6. (a) For each matrix in Problem 5, find the eigenvalues of e^A.

(b) Show that if x is an eigenvector of A corresponding to the eigenvalue λ, then x is also an eigenvector of e^A corresponding to the eigenvalue e^λ.

(c) If A = P diag[λj] P^{-1}, use Corollary 1 to show that

det e^A = e^{trace A}.

Also, using the results in the last paragraph of this section, show that this formula holds for any 2 x 2 matrix A.

7. Compute the exponentials of the following matrices:

(a) [1 0 0; 0 2 0; 0 0 3]
(b) [1 0 0; 0 2 1; 0 0 2]
(c) [matrix entries illegible in the scan]

Hint: Write the matrices in (b) and (c) as a diagonal matrix S plus a matrix N. Show that S and N commute and compute e^S as in part (a) and e^N by using the definition.

8. Find 2 x 2 matrices A and B such that e^{A+B} ≠ e^A e^B.

9. Let T be a linear operator on R^n that leaves a subspace E ⊂ R^n invariant; i.e., for all x ∈ E, T(x) ∈ E. Show that e^T also leaves E invariant.

1.4 The Fundamental Theorem for Linear Systems

Let A be an n x n matrix. In this section we establish the fundamental fact that for x0 ∈ R^n the initial value problem

x' = Ax
x(0) = x0    (1)

has a unique solution for all t ∈ R which is given by

x(t) = e^{At} x0.    (2)

Notice the similarity in the form of the solution (2) and the solution x(t) = e^{at} x0 of the elementary first-order differential equation x' = ax with initial condition x(0) = x0.

In order to prove this theorem, we first compute the derivative of the exponential function e^{At} using the basic fact from analysis that two convergent limit processes can be interchanged if one of them converges uniformly. This is referred to as Moore's Theorem; cf. Graves [G], p. 100 or Rudin [R], p. 149.

Lemma. Let A be a square matrix; then

d/dt e^{At} = A e^{At}.

Proof. Since A commutes with itself, it follows from Proposition 2 and Definition 2 in Section 1.3 that

d/dt e^{At} = lim_{h→0} (e^{A(t+h)} - e^{At}) / h
           = e^{At} lim_{h→0} (e^{Ah} - I) / h
           = e^{At} lim_{h→0} lim_{k→∞} (A + A^2 h / 2! + ... + A^k h^{k-1} / k!)
           = A e^{At}.

The last equality follows since by the theorem in Section 1.3 the series defining e^{Ah} converges uniformly for |h| ≤ 1 and we can therefore interchange the two limits.

Theorem (The Fundamental Theorem for Linear Systems). Let A be an n x n matrix. Then for a given x0 ∈ R^n, the initial value problem

x' = Ax
x(0) = x0    (1)

has a unique solution given by

x(t) = e^{At} x0.    (2)

Proof. By the preceding lemma, if x(t) = e^{At} x0, then

x'(t) = (d/dt e^{At}) x0 = A e^{At} x0 = A x(t)

for all t ∈ R. Also, x(0) = I x0 = x0. Thus x(t) = e^{At} x0 is a solution. To see that this is the only solution, let x(t) be any solution of the initial value problem (1) and set

y(t) = e^{-At} x(t).


Then from the above lemma and the fact that x(t) is a solution of (1),

y'(t) = -A e^{-At} x(t) + e^{-At} x'(t) = -A e^{-At} x(t) + e^{-At} A x(t) = 0

for all t ∈ R since e^{-At} and A commute. Thus, y(t) is a constant. Setting t = 0 shows that y(t) = x0 and therefore any solution of the initial value problem (1) is given by x(t) = e^{At} y(t) = e^{At} x0. This completes the proof of the theorem.

Example. Solve the initial value problem

x' = Ax
x(0) = (1, 0)^T

for

A = [-1 -1; 1 -1]

and sketch the solution curve in the phase plane R^2. By the above theorem and the form of e^{At} computed at the end of the last section, the solution is given by

x(t) = e^{At} x0 = e^{-t} [cos t -sin t; sin t cos t] (1, 0)^T = e^{-t} (cos t, sin t)^T.

It follows that |x(t)| = e^{-t} and that the angle θ(t) = tan^{-1}(x2(t)/x1(t)) = t.
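A finite-difference check (our code; the matrix A is the one reconstructed for this example) confirms both that |x(t)| = e^{-t} and that the closed-form x(t) satisfies x' = Ax:

```python
import numpy as np

A = np.array([[-1.0, -1.0], [1.0, -1.0]])

def x(t):
    # The closed-form solution e^{At} x0 with x0 = (1, 0)^T:
    # x(t) = e^{-t} (cos t, sin t)^T.
    return np.exp(-t) * np.array([np.cos(t), np.sin(t)])

t = 0.8
print(np.linalg.norm(x(t)), np.exp(-t))     # |x(t)| equals e^{-t}
h = 1e-6
dx = (x(t + h) - x(t - h)) / (2 * h)        # central-difference derivative
print(dx, A @ x(t))                         # x'(t) agrees with A x(t)
```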

The solution curve therefore spirals into the origin as shown in Figure 1 below.

Figure 1

PROBLEM SET 4

1. Use the forms of the matrix e^{Bt} computed in Section 1.3 and the theorem in this section to solve the linear system x' = Bx for each of the matrices B given in the text [entries illegible in the scan].

2. Solve the following linear system and sketch its phase portrait:

x' = [-1 -1; 1 -1] x.

The origin is called a stable focus for this system.


6. Let T be a linear transformation on R^n that leaves a subspace E ⊂ R^n invariant (i.e., for all x ∈ E, T(x) ∈ E) and let T(x) = Ax with respect to the standard basis for R^n. Show that if x(t) is the solution of the initial value problem

x' = Ax
x(0) = x0

with x0 ∈ E, then x(t) ∈ E for all t ∈ R.

7. Suppose that the square matrix A has a negative eigenvalue. Show that the linear system x' = Ax has at least one nontrivial solution x(t) that satisfies

lim_{t→∞} x(t) = 0.

8. (Continuity with respect to initial conditions.) Let φ(t, x0) be the solution of the initial value problem (1). Use the Fundamental Theorem to show that for each fixed t ∈ R,

lim_{y0 → x0} φ(t, y0) = φ(t, x0).

1.5 Linear Systems in R^2

In this section we discuss the various phase portraits that are possible for the linear system

x' = Ax    (1)

when x ∈ R^2 and A is a 2 x 2 matrix. We begin by describing the phase portraits for the linear system

x' = Bx    (2)

where the matrix B = P^{-1} A P has one of the forms given at the end of Section 1.3. The phase portrait for the linear system (1) above is then obtained from the phase portrait for (2) under the linear transformation of coordinates x = Py as in Figures 1 and 2 in Section 1.2. First of all, if

B = [λ 0; 0 μ],  B = [λ 1; 0 λ]  or  B = [a -b; b a],

it follows from the fundamental theorem in Section 1.4 and the form of the matrix e^{Bt} computed in Section 1.3 that the solution of the initial value problem (2) with x(0) = x0 is given by

x(t) = [e^{λt} 0; 0 e^{μt}] x0,  x(t) = e^{λt} [1 t; 0 1] x0


or

x(t) = e^{at} [cos bt -sin bt; sin bt cos bt] x0

respectively. We now list the various phase portraits that result from these solutions, grouped according to their topological type:

Case I. B = [λ 0; 0 μ] with λ < 0 < μ.

Figure 1. A saddle at the origin.

The phase portrait for the linear system (2) in this case is given in Figure 1. See the first example in Section 1.1. The system (2) is said to have a saddle at the origin in this case. If μ < 0 < λ, the arrows in Figure 1 are reversed. Whenever A has two real eigenvalues of opposite sign, the phase portrait for the linear system (1) is linearly equivalent to the phase portrait shown in Figure 1; i.e., it is obtained from Figure 1 by a linear transformation of coordinates; and the stable and unstable subspaces of (1) are determined by the eigenvectors of A as in the Example in Section 1.2. The four non-zero trajectories or solution curves that approach the equilibrium point at the origin as t → ±∞ are called separatrices of the system.

là 8 | 1| vHhÀ<0,

The phase portraits for the linear system (2) in these cases are given in Figure 2 Cf the phase portraits in Problems 1(a), (b) and (c) of Problem Set 1 respectivelỵ The origin is referred tu as a stabl: ode in each of these cases It is called a proper node in the first.case with A = j: and an improper node in the other two cases Ïf À > > 0 or if ) > 0 in Case II, the arrows

Trang 18

22 1 Linear Systems

in Figure 2 are reversed and the origin is referred to as an unstable nodẹ

Whenever A has two real eigenvalues of the same sign, the phase portrait of the linear system (1) is linearly equivalent to one of the phase portraits shown in Figure 2. The stability of the node is determined by the sign of the eigenvalues: stable if λ ≤ μ < 0 and unstable if λ ≥ μ > 0. Note that each trajectory in Figure 2 approaches the equilibrium point at the origin along a well-defined tangent line as t → ∞.

Figure 2. A stable node at the origin.

Case III. B = [a −b; b a] with a < 0.

Figure 3. A stable focus at the origin (shown for b > 0 and b < 0).

The phase portrait for the linear system (2) in this case is given in Figure 3. Cf. Problem 9. The origin is referred to as a stable focus in this case. If a > 0, the arrows are reversed in Figure 3; i.e., the trajectories spiral away from the origin with increasing t. The origin is called an unstable focus in this case. Whenever A has a pair of complex conjugate eigenvalues with nonzero real part, the phase portrait for the system (1) is linearly equivalent


to one of the phase portraits shown in Figure 3. Note that the trajectories in Figure 3 do not approach the origin along well-defined tangent lines; i.e., the angle θ(t) that the vector x(t) makes with the x₁-axis does not approach a constant θ₀ as t → ∞, but rather |θ(t)| → ∞ as t → ∞ and |x(t)| → 0 as t → ∞ in this case.

Case IV. B = [0 −b; b 0].

The phase portrait for the linear system (2) in this case is given in Figure 4. Cf. Problem 1(d) in Problem Set 1. The system (2) is said to have a center at the origin in this case. Whenever A has a pair of pure imaginary complex conjugate eigenvalues, the phase portrait of the linear system (1) is linearly equivalent to one of the phase portraits shown in Figure 4. Note that the trajectories or solution curves in Figure 4 lie on circles |x(t)| = constant. The trajectories of the system (1) will lie on ellipses and the solution x(t) of (1) will satisfy m ≤ |x(t)| ≤ M for all t ∈ R; cf. the following Example. The angle θ(t) also satisfies |θ(t)| → ∞ as t → ∞ in this case.

Figure 4. A center at the origin (shown for b > 0 and b < 0).

If one of the eigenvalues of A is zero, i.e., if det A = 0, the origin is called a degenerate equilibrium point of (1). The various phase portraits for the linear system (1) are determined in Problem 4 in this case.

Example (A linear system with a center at the origin). The linear system

ẋ = Ax with A = [0 −4; 1 0]

has a center at the origin since the matrix A has eigenvalues λ = ±2i. According to the theorem in Section 1.6, the invertible matrix

P = [2 0; 0 1] with P⁻¹ = [1/2 0; 0 1]


reduces A to the matrix

B = P⁻¹AP = [0 −2; 2 0].

The student should verify the calculation.

The solution to the linear system ẋ = Ax, as determined by Sections 1.3 and 1.4, is then given by

x(t) = P [cos 2t −sin 2t; sin 2t cos 2t] P⁻¹ c = [cos 2t −2 sin 2t; (1/2) sin 2t cos 2t] c

where c = x(0), or equivalently by

x₁(t) = c₁ cos 2t − 2c₂ sin 2t
x₂(t) = (1/2)c₁ sin 2t + c₂ cos 2t.

It is then easily shown that the solutions satisfy

x₁²(t) + 4x₂²(t) = c₁² + 4c₂²

for all t ∈ R; i.e., the trajectories of this system lie on ellipses as shown in Figure 5.

Figure 5 A center at the origin
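As a quick numerical check of the example above, the closed-form solution should keep x₁² + 4x₂² constant along every trajectory. This is a sketch assuming NumPy is available; the helper name `solution` is ours, not the book's:

```python
import numpy as np

def solution(t, c):
    """Closed-form solution x(t) of x' = Ax with A = [0 -4; 1 0] and c = x(0)."""
    M = np.array([[np.cos(2*t),      -2.0*np.sin(2*t)],
                  [0.5*np.sin(2*t),   np.cos(2*t)]])
    return M @ c

c = np.array([1.0, 1.0])
# x1^2 + 4*x2^2 should stay equal to c1^2 + 4*c2^2 = 5 for all t.
invariants = [float(solution(t, c)[0]**2 + 4*solution(t, c)[1]**2)
              for t in np.linspace(0.0, 5.0, 11)]
```

Every entry of `invariants` equals c₁² + 4c₂² up to roundoff, confirming that the trajectories lie on the ellipses x₁² + 4x₂² = constant.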

Definition 1. The linear system (1) is said to have a saddle, a node, a focus or a center at the origin if its phase portrait is linearly equivalent to one of the phase portraits in Figures 1, 2, 3 or 4 respectively; i.e., if the matrix A is similar to one of the matrices B in Cases I, II, III or IV respectively.

Remark. If the matrix A is similar to the matrix B, i.e., if there is a nonsingular matrix P such that P⁻¹AP = B, then the system (1) is transformed into the system (2) by the linear transformation of coordinates x = Py. If


B has the form III, then the phase portrait for the system (2) consists of either a counterclockwise motion (if b > 0) or a clockwise motion (if b < 0) on either circles (if a = 0) or spirals (if a ≠ 0). Furthermore, the phase portrait for the system (1) will be qualitatively the same as the phase portrait for the system (2) if det P > 0 (i.e., if P is orientation preserving), or it will be qualitatively the same as the phase portrait for the system (2) with a counterclockwise motion replaced by the corresponding clockwise motion and vice versa (as in Figures 3 and 4) if det P < 0 (i.e., if P is orientation reversing).

For det A ≠ 0 there is an easy method for determining if the linear system has a saddle, node, focus or center at the origin. This is given in the next theorem. Note that if det A ≠ 0 then Ax = 0 iff x = 0; i.e., the origin is the only equilibrium point of the linear system (1) when det A ≠ 0. If the origin is a focus or a center, the sign σ of ẋ₂ for x₂ = 0 (and for small x₁ > 0) can be used to determine whether the motion is counterclockwise (if σ > 0) or clockwise (if σ < 0).

Theorem. Let δ = det A and τ = trace A and consider the linear system

ẋ = Ax. (1)

(a) If δ < 0 then (1) has a saddle at the origin.

(b) If δ > 0 and τ² − 4δ ≥ 0 then (1) has a node at the origin; it is stable if τ < 0 and unstable if τ > 0.

(c) If δ > 0, τ² − 4δ < 0, and τ ≠ 0 then (1) has a focus at the origin; it is stable if τ < 0 and unstable if τ > 0.

(d) If δ > 0 and τ = 0 then (1) has a center at the origin.

Note that in case (b), τ² ≥ 4δ > 0; i.e., τ ≠ 0.

Proof. The eigenvalues of the matrix A are given by

λ = (τ ± √(τ² − 4δ))/2.

Thus (a) if δ < 0 there are two real eigenvalues of opposite sign;

(b) if δ > 0 and τ² − 4δ ≥ 0 then there are two real eigenvalues of the same sign as τ;

(c) if δ > 0, τ² − 4δ < 0 and τ ≠ 0 then there are two complex conjugate eigenvalues λ = a ± ib and, as will be shown in Section 1.6, A is similar to the matrix B in Case III above with a = τ/2; and

(d) if δ > 0 and τ = 0 then there are two pure imaginary complex conjugate eigenvalues. Thus, cases (a), (b), (c) and (d) correspond to the Cases I, II, III and IV discussed above and we have a saddle, node, focus or center respectively.
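The classification in the theorem is easy to mechanize. The sketch below (assuming NumPy; the function name is ours) applies the criteria on δ = det A and τ = trace A literally, so it presumes det A ≠ 0 and an exact τ = 0 test:

```python
import numpy as np

def classify(A):
    """Classify the origin of x' = Ax using delta = det A and tau = trace A.
    Assumes det A != 0; the tau == 0 comparison is exact, as in the theorem."""
    delta = float(np.linalg.det(A))
    tau = float(np.trace(A))
    if delta < 0:
        return "saddle"                                    # case (a)
    if tau == 0:
        return "center"                                    # case (d)
    stability = "stable" if tau < 0 else "unstable"
    kind = "node" if tau**2 - 4*delta >= 0 else "focus"    # cases (b), (c)
    return f"{stability} {kind}"
```

For instance, the matrix [0 −4; 1 0] of the preceding example has δ = 4 and τ = 0, so `classify` returns "center".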


Definition 2. A stable node or focus of (1) is called a sink of the linear system and an unstable node or focus of (1) is called a source of the linear system.

The above results can be summarized in a "bifurcation diagram," shown in Figure 6, which separates the (τ, δ)-plane into three components in which the solutions of the linear system (1) have the same "qualitative structure" (defined in Section 1.8 of Chapter 2). In describing the topological behavior or qualitative structure of the solution set of a linear system, we do not distinguish between nodes and foci, but only between stable and unstable ones.

Figure 6. A bifurcation diagram for the linear system (1), with regions labeled saddle, sink, source, center and degenerate critical point.

PROBLEM SET 5

1. Use the theorem in this section to determine if the linear system ẋ = Ax has a saddle, node, focus or center at the origin, and determine the stability of each node or focus:

(a) A [ 5 H wall onl wae ft 3] orf a (f) A= [? 3):

2. Solve the linear system ẋ = Ax and sketch the phase portrait for

wa 3 oe emf) (d) A= lo i}

3. For what values of the parameters a and b does the linear system ẋ = Ax have a sink at the origin?

A = [a −b; b a]

4. If det A = 0, then the origin is a degenerate critical point of ẋ = Ax. Determine the solution and the corresponding phase portraits for the linear system with

À 0 & A= l3 g] mwư (@) A= [ DỊ:

Note that the origin is not an isolated equilibrium point in these cases.

5. Write the second-order differential equation

ẍ + aẋ + bx = 0

as a system in R² and determine the nature of the equilibrium point at the origin.

6. Find the general solution and draw the phase portrait for the linear system

ẋ₁ = x₁
ẋ₂ = −x₁ + 2x₂


7. Describe the separatrices for the linear system

ẋ₁ = x₁ + 2x₂
ẋ₂ = 3x₁ + 4x₂

Hint: Find the eigenspaces for A.

8. Determine the functions r(t) = |x(t)| and θ(t) = tan⁻¹(x₂(t)/x₁(t)) for the linear system

ẋ₁ = −x₂
ẋ₂ = x₁

9. (Polar Coordinates) Given the linear system

ẋ₁ = ax₁ − bx₂
ẋ₂ = bx₁ + ax₂.

Differentiate the equations r² = x₁² + x₂² and θ = tan⁻¹(x₂/x₁) with respect to t in order to obtain

ṙ = (x₁ẋ₁ + x₂ẋ₂)/r and θ̇ = (x₁ẋ₂ − x₂ẋ₁)/r²

for r ≠ 0. For the linear system given above, show that these equations reduce to

ṙ = ar and θ̇ = b.

Solve these equations with the initial conditions r(0) = r₀ and θ(0) = θ₀ and show that the phase portraits in Figures 3 and 4 follow immediately from your solution. (Polar coordinates are discussed more thoroughly in Section 1.10 of Chapter 2.)

1.6 Complex Eigenvalues

If the 2n × 2n real matrix A has complex eigenvalues, then they occur in complex conjugate pairs, and if A has 2n distinct complex eigenvalues, the following theorem from linear algebra, proved in Hirsch and Smale [H/S], allows us to solve the linear system

ẋ = Ax.

Theorem. If the 2n × 2n real matrix A has 2n distinct complex eigenvalues λⱼ = aⱼ + ibⱼ and λ̄ⱼ = aⱼ − ibⱼ and corresponding complex eigenvectors wⱼ = uⱼ + ivⱼ and w̄ⱼ = uⱼ − ivⱼ, j = 1, ..., n, then {u₁, v₁, ..., uₙ, vₙ} is a basis for R²ⁿ, the matrix

P = [v₁ u₁ v₂ u₂ ··· vₙ uₙ]

is invertible and

P⁻¹AP = diag[aⱼ −bⱼ; bⱼ aⱼ],

a real 2n × 2n matrix with 2 × 2 blocks along the diagonal.

Remark. Note that if instead of the matrix P we use the invertible matrix

Q = [u₁ v₁ u₂ v₂ ··· uₙ vₙ]

then

Q⁻¹AQ = diag[aⱼ bⱼ; −bⱼ aⱼ].

The next corollary then follows from the above theorem and the fundamental theorem in Section 1.4.

Corollary. Under the hypotheses of the above theorem, the solution of the initial value problem

ẋ = Ax (1)
x(0) = x₀

is given by

x(t) = P diag e^{aⱼt} [cos bⱼt −sin bⱼt; sin bⱼt cos bⱼt] P⁻¹ x₀.

Note that the matrix

R = [cos bt −sin bt; sin bt cos bt]

represents a rotation through bt radians.
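The identity e^{Bt} = e^{at} R(bt) for a 2 × 2 block B = [a −b; b a] can be checked numerically. This is a sketch assuming NumPy; the truncated-series helper `mat_exp` stands in for a library matrix exponential:

```python
import numpy as np

def mat_exp(M, terms=50):
    """Truncated power series for e^M; adequate for the small ||M|| used here."""
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

a, b, t = 0.5, 2.0, 1.3
B = np.array([[a, -b],
              [b,  a]])
R = np.array([[np.cos(b*t), -np.sin(b*t)],   # the rotation matrix R above
              [np.sin(b*t),  np.cos(b*t)]])
err = float(np.max(np.abs(mat_exp(B*t) - np.exp(a*t) * R)))
```

The error `err` is at the level of machine roundoff: the flow of the block B is a spiral, a rotation through bt radians scaled by e^{at}.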

Example. Solve the initial value problem (1) for

A = [1 −1 0 0; 1 1 0 0; 0 0 3 −2; 0 0 1 1].

The matrix A has the complex eigenvalues λ₁ = 1 + i and λ₂ = 2 + i (as well as λ̄₁ = 1 − i and λ̄₂ = 2 − i).


The matrix

P = [1 0 0 0; 0 1 0 0; 0 0 1 1; 0 0 0 1]

is invertible,

P⁻¹ = [1 0 0 0; 0 1 0 0; 0 0 1 −1; 0 0 0 1]

and

P⁻¹AP = [1 −1 0 0; 1 1 0 0; 0 0 2 −1; 0 0 1 2].

The solution to the initial value problem (1) is given by

x(t) = P [e^t cos t, −e^t sin t, 0, 0; e^t sin t, e^t cos t, 0, 0; 0, 0, e^{2t} cos t, −e^{2t} sin t; 0, 0, e^{2t} sin t, e^{2t} cos t] P⁻¹ x₀

= [e^t cos t, −e^t sin t, 0, 0; e^t sin t, e^t cos t, 0, 0; 0, 0, e^{2t}(cos t + sin t), −2e^{2t} sin t; 0, 0, e^{2t} sin t, e^{2t}(cos t − sin t)] x₀.

In case A has both real and complex eigenvalues and they are distinct, we have the following result: If A has distinct real eigenvalues λⱼ and corresponding eigenvectors vⱼ, j = 1, ..., k, and distinct complex eigenvalues λⱼ = aⱼ + ibⱼ and λ̄ⱼ = aⱼ − ibⱼ and corresponding eigenvectors wⱼ = uⱼ + ivⱼ and w̄ⱼ = uⱼ − ivⱼ, j = k + 1, ..., n, then the matrix

P = [v₁ ··· vₖ vₖ₊₁ uₖ₊₁ ··· vₙ uₙ]

is invertible and

P⁻¹AP = diag[λ₁, ..., λₖ, Bₖ₊₁, ..., Bₙ]

where the 2 × 2 blocks

Bⱼ = [aⱼ −bⱼ; bⱼ aⱼ]

for j = k + 1, ..., n. We illustrate this result with an example.

Example. The matrix

A = [−3 0 0; 0 3 −2; 0 1 1]

has eigenvalues λ₁ = −3, λ₂ = 2 + i (and λ̄₂ = 2 − i). The corresponding eigenvectors are

v₁ = (1, 0, 0)ᵀ and w₂ = u₂ + iv₂ = (0, 1 + i, 1)ᵀ.

Thus

P = [1 0 0; 0 1 1; 0 0 1],  P⁻¹ = [1 0 0; 0 1 −1; 0 0 1]

and

P⁻¹AP = [−3 0 0; 0 2 −1; 0 1 2].

The solution of the initial value problem (1) is given by

x(t) = P [e^{−3t}, 0, 0; 0, e^{2t} cos t, −e^{2t} sin t; 0, e^{2t} sin t, e^{2t} cos t] P⁻¹ x₀

= [e^{−3t}, 0, 0; 0, e^{2t}(cos t + sin t), −2e^{2t} sin t; 0, e^{2t} sin t, e^{2t}(cos t − sin t)] x₀.
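As a numerical check of the example above (a sketch assuming NumPy; `mat_exp` is a plain truncated series, not the book's method), the closed-form solution matrix should agree with e^{At}:

```python
import numpy as np

A = np.array([[-3.0, 0.0,  0.0],
              [ 0.0, 3.0, -2.0],
              [ 0.0, 1.0,  1.0]])

def mat_exp(M, terms=60):
    """Truncated power series for e^M; fine for the small ||M|| used here."""
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

t = 0.7
c, s, e2, e3 = np.cos(t), np.sin(t), np.exp(2*t), np.exp(-3*t)
closed_form = np.array([[e3,  0.0,       0.0],
                        [0.0, e2*(c+s), -2*e2*s],
                        [0.0, e2*s,      e2*(c-s)]])
err = float(np.max(np.abs(mat_exp(A*t) - closed_form)))
```

The small `err` confirms that conjugating the 2 × 2 rotation block by P produces exactly the entries e^{2t}(cos t ± sin t) and −2e^{2t} sin t shown above.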


PROBLEM SET 6

1. Solve the initial value problem (1) with

a-[? "I

2. Solve the initial value problem (1) with

A = [0 −2 0; 1 2 0; 0 0 −2].

Determine the stable and unstable subspaces and sketch the phase portrait.

3. Solve the initial value problem (1) with

A = [1 0 0; 0 2 −3; 1 3 2].

4. Solve the initial value problem (1) with

A = [−1 −1 0 0; 1 −1 0 0; 0 0 0 −2; 0 0 1 2].

1.7 Multiple Eigenvalues

The fundamental theorem for linear systems in Section 1.4 tells us that the solution of the linear system

ẋ = Ax (1)

together with the initial condition x(0) = x₀ is given by

x(t) = e^{At}x₀.

We have seen how to find the n × n matrix e^{At} when A has distinct eigenvalues. We now complete the picture by showing how to find e^{At}, i.e., how to solve the linear system (1), when A has multiple eigenvalues.

Definition 1. Let λ be an eigenvalue of the n × n matrix A of multiplicity m ≤ n. Then for k = 1, ..., m, any nonzero solution v of

(A − λI)ᵏv = 0

is called a generalized eigenvector of A.

Definition 2. An n × n matrix N is said to be nilpotent of order k if N^{k−1} ≠ 0 and Nᵏ = 0.

The following theorem is proved, for example, in Appendix III of Hirsch and Smale [H/S].

Theorem 1. Let A be a real n × n matrix with real eigenvalues λ₁, ..., λₙ repeated according to their multiplicity. Then there exists a basis of generalized eigenvectors for Rⁿ. And if {v₁, ..., vₙ} is any basis of generalized eigenvectors for Rⁿ, the matrix P = [v₁ ··· vₙ] is invertible,

A = S + N

where

P⁻¹SP = diag[λⱼ],

the matrix N = A − S is nilpotent of order k ≤ n, and S and N commute, i.e., SN = NS.

This theorem together with the propositions in Section 1.3 and the fundamental theorem in Section 1.4 then lead to the following result:

Corollary 1. Under the hypotheses of the above theorem, the linear system (1), together with the initial condition x(0) = x₀, has the solution

x(t) = P diag[e^{λⱼt}] P⁻¹ [I + Nt + ··· + Nᵏ⁻¹tᵏ⁻¹/(k − 1)!] x₀.

If λ is an eigenvalue of multiplicity n of an n × n matrix A, then the above results are particularly easy to apply since in this case S = diag[λ] with respect to the usual basis for Rⁿ and

N = A − S.

The solution to the initial value problem (1) together with x(0) = x₀ is therefore given by

x(t) = e^{λt} [I + Nt + ··· + Nⁿ⁻¹tⁿ⁻¹/(n − 1)!] x₀.

Let us consider two examples where the n × n matrix A has an eigenvalue of multiplicity n. In these examples, we do not need to compute a basis of generalized eigenvectors to solve the initial value problem.


Example 1. Solve the initial value problem for (1) with

A = [3 1; −1 1].

It is easy to determine that A has an eigenvalue λ = 2 of multiplicity 2; i.e., λ₁ = λ₂ = 2. Thus,

S = [2 0; 0 2] and N = A − S = [1 1; −1 −1].

It is easy to compute N² = 0, and the solution of the initial value problem for (1) is therefore given by

x(t) = e^{At}x₀ = e^{2t}[I + Nt]x₀ = e^{2t} [1 + t, t; −t, 1 − t] x₀.

Example 2. Solve the initial value problem for (1) with

A = [0 −2 −1 −1; 1 2 1 1; 0 1 1 0; 0 0 0 1].

In this case, the matrix A has an eigenvalue λ = 1 of multiplicity 4. Thus S = diag[1] = I,

N = A − S = [−1 −2 −1 −1; 1 1 1 1; 0 1 0 0; 0 0 0 0],

and it is easy to compute

N² = [−1 −1 −1 −1; 0 0 0 0; 1 1 1 1; 0 0 0 0]

and N³ = 0; i.e., N is nilpotent of order 3. The solution of the initial value problem for (1) is therefore given by

x(t) = e^t[I + Nt + N²t²/2]x₀

= e^t [1 − t − t²/2, −2t − t²/2, −t − t²/2, −t − t²/2; t, 1 + t, t, t; t²/2, t + t²/2, 1 + t²/2, t²/2; 0, 0, 0, 1] x₀.
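Example 2 can be verified numerically: with S = I and N = A − I nilpotent of order 3, the terminating series e^t(I + Nt + N²t²/2) must equal e^{At}. A sketch assuming NumPy (`mat_exp` is again a plain truncated power series):

```python
import numpy as np

A = np.array([[0.0, -2.0, -1.0, -1.0],
              [1.0,  2.0,  1.0,  1.0],
              [0.0,  1.0,  1.0,  0.0],
              [0.0,  0.0,  0.0,  1.0]])
N = A - np.eye(4)                                  # S = I since lambda = 1 has multiplicity 4
N3_is_zero = bool(np.max(np.abs(N @ N @ N)) == 0)  # N is nilpotent of order 3

def exp_At(t):
    """e^{At} = e^t (I + N t + N^2 t^2 / 2): the series terminates."""
    return np.exp(t) * (np.eye(4) + N*t + (N @ N) * t**2 / 2)

def mat_exp(M, terms=60):
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

t = 0.9
err = float(np.max(np.abs(exp_At(t) - mat_exp(A*t))))
```

The nilpotency of N is what makes the exponential a polynomial in t times e^t, as the corollary states.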

In the general case, we must first determine a basis of generalized eigenvectors for Rⁿ, then compute S = P diag[λⱼ] P⁻¹ and N = A − S according to the formulas in the above theorem, and then find the solution of the initial value problem for (1) as in the above corollary.

Example 3. Solve the initial value problem for (1) when

A = [1 0 0; −1 2 0; 1 1 2].

It is easy to see that A has the eigenvalues λ₁ = 1, λ₂ = λ₃ = 2. And it is not difficult to find the corresponding eigenvectors

v₁ = (1, 1, −2)ᵀ and v₂ = (0, 0, 1)ᵀ.

Nonzero multiples of these eigenvectors are the only eigenvectors of A corresponding to λ₁ = 1 and λ₂ = λ₃ = 2 respectively. We therefore must find one generalized eigenvector corresponding to λ = 2.



In the case of multiple complex eigenvalues, we have the following theorem, also proved in Appendix III of Hirsch and Smale [H/S]:

Theorem 2. Let A be a real 2n × 2n matrix with complex eigenvalues λⱼ = aⱼ + ibⱼ and λ̄ⱼ = aⱼ − ibⱼ, j = 1, ..., n. Then there exists a basis of generalized complex eigenvectors wⱼ = uⱼ + ivⱼ and w̄ⱼ = uⱼ − ivⱼ, j = 1, ..., n, for C²ⁿ, and {u₁, v₁, ..., uₙ, vₙ} is a basis for R²ⁿ. For any such basis, the matrix P = [v₁ u₁ ··· vₙ uₙ] is invertible,

A = S + N

where

P⁻¹SP = diag[aⱼ −bⱼ; bⱼ aⱼ],

the matrix N = A − S is nilpotent of order k ≤ 2n, and S and N commute.

The next corollary follows from the fundamental theorem in Section 1.4 and the results in Section 1.3:

Corollary 2. Under the hypotheses of the above theorem, the solution of the initial value problem (1), together with x(0) = x₀, is given by

x(t) = P diag e^{aⱼt} [cos bⱼt −sin bⱼt; sin bⱼt cos bⱼt] P⁻¹ [I + Nt + ··· + Nᵏ⁻¹tᵏ⁻¹/(k − 1)!] x₀.

We illustrate these results with an example.

Example 4. Solve the initial value problem for (1) with

A = [0 −1 0 0; 1 0 0 0; 0 0 0 −1; 2 0 1 0].

The matrix A has eigenvalues λ = i and λ̄ = −i, each of multiplicity 2. The equation

(A − λI)w = [−i −1 0 0; 1 −i 0 0; 0 0 −i −1; 2 0 1 −i] [z₁; z₂; z₃; z₄] = 0

is equivalent to z₁ = z₂ = 0 and z₃ = iz₄. Thus, we have one eigenvector w₁ = (0, 0, i, 1)ᵀ. Also, the equation

(A − λI)²w = [−2 2i 0 0; −2i −2 0 0; −2 0 −2 2i; −4i −2 −2i −2] [z₁; z₂; z₃; z₄] = 0

is equivalent to z₁ = iz₂ and z₃ = iz₄ − z₁. We therefore choose the generalized eigenvector w₂ = (i, 1, 0, 1)ᵀ. Then u₁ = (0, 0, 0, 1)ᵀ, v₁ = (0, 0, 1, 0)ᵀ, u₂ = (0, 1, 0, 1)ᵀ and v₂ = (1, 0, 0, 0)ᵀ, and according to the above theorem,

P = [0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 1],  P⁻¹ = [0 0 1 0; 0 −1 0 1; 1 0 0 0; 0 1 0 0],

S = P [0 −1 0 0; 1 0 0 0; 0 0 0 −1; 0 0 1 0] P⁻¹ = [0 −1 0 0; 1 0 0 0; 0 1 0 −1; 1 0 1 0],

N = A − S = [0 0 0 0; 0 0 0 0; 0 −1 0 0; 1 0 0 0]

and N² = 0. Thus, the solution to the initial value problem (1) is given by

x(t) = P [cos t, −sin t, 0, 0; sin t, cos t, 0, 0; 0, 0, cos t, −sin t; 0, 0, sin t, cos t] P⁻¹ [I + Nt] x₀

= [cos t, −sin t, 0, 0; sin t, cos t, 0, 0; −t sin t, sin t − t cos t, cos t, −sin t; sin t + t cos t, −t sin t, sin t, cos t] x₀.

Remark. If A has both real and complex repeated eigenvalues, a combination of the above two theorems can be used as in the result and example of the previous section.


2. Solve the initial value problem (1) with the matrix

(a) A = [1 0 0; 2 1 0; 3 2 1]

(b) -1 1 +2 ma-[a a 4 6 0 1 10

(c) waft) 10 9 2

(d) A = [2 1 1; 0 2 2; 0 0 2]

3. Solve the initial value problem (1) with the matrix

(a) [0 000 =1 001 A= 6) 1 001 10 -1 10

(b) A = [2 0 0 0; 1 2 0 0; 0 1 2 0; 0 0 1 2]

(c) A = [1 1 1 1; 2 2 2 2; 3 3 3 3; 4 4 4 4]

(d) A = [0 1 0 0; 1 0 0 0; 0 0 1 −1; 0 0 1 1]

(e) A = [1 −1 0 0; 1 1 0 0; 0 0 1 −1; 0 0 1 1]

(f) A = [1 −1 1 0; 1 1 0 1; 0 0 1 −1; 0 0 1 1]

4. The "Putzer Algorithm" given below is another method for computing e^{At} when we have multiple eigenvalues; cf. [W], p. 49.

e^{At} = r₁(t)P₁ + r₂(t)P₂ + ··· + rₙ(t)Pₙ

where

P₁ = I, P₂ = (A − λ₁I), P₃ = (A − λ₁I)(A − λ₂I), ..., Pₙ = (A − λ₁I) ··· (A − λₙ₋₁I)

and rⱼ(t), j = 1, ..., n, are the solutions of the first-order linear differential equations and initial conditions

ṙ₁ = λ₁r₁ with r₁(0) = 1,
ṙ₂ = λ₂r₂ + r₁ with r₂(0) = 0,
...
ṙₙ = λₙrₙ + rₙ₋₁ with rₙ(0) = 0.

Use the Putzer Algorithm to compute e^{At} for the matrix A given in

(a) Example 1
(b) Example 3
(c) Problem 2(c)
(d) Problem 3(b)

1.8 Jordan Forms

The Jordan canonical form of a matrix gives some insight into the form of the solution of a linear system of differential equations and it is used in proving some theorems later in the book. Finding the Jordan canonical form of a matrix A is not necessarily the best method for solving the related linear system, since finding a basis of generalized eigenvectors which reduces A to its Jordan canonical form may be difficult. On the other hand, any basis of generalized eigenvectors can be used in the method described in the previous section. The Jordan canonical form, described in the next theorem, does result in a particularly simple form for the nilpotent part N of the matrix A, and it is therefore useful in the theory of ordinary differential equations.

Theorem (The Jordan Canonical Form). Let A be a real matrix with real eigenvalues λⱼ, j = 1, ..., k, and complex eigenvalues λⱼ = aⱼ + ibⱼ and λ̄ⱼ = aⱼ − ibⱼ, j = k + 1, ..., n. Then there exists a basis {v₁, ..., vₖ, vₖ₊₁, uₖ₊₁, ..., vₙ, uₙ} for R^{2n−k},



where vⱼ, j = 1, ..., k, and wⱼ, j = k + 1, ..., n, are generalized eigenvectors of A, uⱼ = Re(wⱼ) and vⱼ = Im(wⱼ) for j = k + 1, ..., n, such that the matrix P = [v₁ ··· vₖ vₖ₊₁ uₖ₊₁ ··· vₙ uₙ] is invertible and

P⁻¹AP = diag[B₁, ..., Bᵣ], (1)

where the elementary Jordan blocks B = Bⱼ, j = 1, ..., r, are either of the form

B = [λ 1 0 ··· 0; 0 λ 1 ··· 0; ···; 0 ··· 0 λ 1; 0 ··· 0 λ] (2)

for λ one of the real eigenvalues of A, or of the form

B = [D I₂ 0 ··· 0; 0 D I₂ ··· 0; ···; 0 ··· 0 D I₂; 0 ··· 0 D] (3)

with

D = [a −b; b a], I₂ = [1 0; 0 1] and 0 = [0 0; 0 0]

for λ = a + ib one of the complex eigenvalues of A.

We shall refer to (1) with the Bⱼ given by (2) or (3) as the upper Jordan canonical form of A.

The Jordan canonical form of A yields explicit information about the form of the solution of the initial value problem

ẋ = Ax, x(0) = x₀, (4)

which, according to the Fundamental Theorem for Linear Systems in Section 1.4, is

x(t) = P diag[e^{Bⱼt}] P⁻¹ x₀. (5)

If Bⱼ = B is an m × m matrix of the form (2) and λ is a real eigenvalue of A, then B = λI + N and

e^{Bt} = e^{λt} e^{Nt} = e^{λt} [1, t, t²/2!, ···, t^{m−1}/(m−1)!; 0, 1, t, ···, t^{m−2}/(m−2)!; 0, 0, 1, ···, t^{m−3}/(m−3)!; ···; 0, ···, 1, t; 0, ···, 0, 1],

since the m × m matrix

N = [0 1 0 ··· 0; 0 0 1 ··· 0; ···; 0 ··· 0 1; 0 ··· 0 0]

is nilpotent of order m: each power N², ..., N^{m−1} shifts the superdiagonal of ones one diagonal further up, and N^{m−1} has a single one in its upper right corner.
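The e^{Bt} formula for a real Jordan block can be checked directly. A sketch assuming NumPy (the helper names are ours): `exp_jordan` builds the triangular matrix of powers t^{j}/j! above, and `mat_exp` is a plain truncated power series used as an independent reference.

```python
import numpy as np
from math import factorial

def jordan_block(lam, m):
    """m x m elementary Jordan block (2): lam on the diagonal, ones above it."""
    return lam * np.eye(m) + np.diag(np.ones(m - 1), 1)

def exp_jordan(lam, m, t):
    """e^{Bt} = e^{lam t} [t^{j-i}/(j-i)!] for i <= j, B = jordan_block(lam, m)."""
    E = np.zeros((m, m))
    for i in range(m):
        for j in range(i, m):
            E[i, j] = t**(j - i) / factorial(j - i)
    return np.exp(lam * t) * E

def mat_exp(M, terms=60):
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

lam, m, t = -0.5, 4, 1.2
err = float(np.max(np.abs(exp_jordan(lam, m, t) - mat_exp(jordan_block(lam, m) * t))))
```

The agreement (to roundoff) reflects that N is nilpotent of order m, so e^{Nt} is the finite polynomial I + Nt + ··· + N^{m−1}t^{m−1}/(m−1)!.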


The above form of the solution (5) of the initial value problem (4) then leads to the following result:

Corollary. Each coordinate in the solution x(t) of the initial value problem (4) is a linear combination of functions of the form

tᵏ e^{at} cos bt or tᵏ e^{at} sin bt,

where λ = a + ib is an eigenvalue of the matrix A and 0 ≤ k ≤ n − 1.

We next describe a method for finding a basis which reduces A to its Jordan canonical form. But first we need the following definitions:

Definition. Let λ be an eigenvalue of the matrix A. The deficiency indices

δₖ = dim Ker(A − λI)ᵏ.

The kernel of a linear operator T: Rⁿ → Rⁿ is

Ker(T) = {x ∈ Rⁿ | T(x) = 0}.

The deficiency indices δₖ can be found by Gaussian reduction; in fact, δₖ is the number of rows of zeros in the reduced row echelon form of (A − λI)ᵏ. Clearly

δ₁ ≤ δ₂ ≤ ··· ≤ δₙ = n.

Let νₖ be the number of elementary Jordan blocks of size k × k in the Jordan canonical form (1) of the matrix A. Then it follows from the above theorem and the definition of δₖ that

δ₁ = ν₁ + ν₂ + ··· + νₙ
δ₂ = ν₁ + 2(ν₂ + ··· + νₙ)
δ₃ = ν₁ + 2ν₂ + 3(ν₃ + ··· + νₙ)
···
δₙ₋₁ = ν₁ + 2ν₂ + ··· + (n − 1)(νₙ₋₁ + νₙ)
δₙ = ν₁ + 2ν₂ + ··· + (n − 1)νₙ₋₁ + nνₙ.

Cf. Hirsch and Smale [H/S], p. 124. These equations can then be solved for

ν₁ = 2δ₁ − δ₂
ν₂ = 2δ₂ − δ₃ − δ₁
νₖ = 2δₖ − δₖ₊₁ − δₖ₋₁ for 1 < k < n
νₙ = δₙ − δₙ₋₁.
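The deficiency indices and the block counts νₖ are easy to compute numerically. A sketch assuming NumPy, with a rank computation standing in for the hand Gaussian reduction:

```python
import numpy as np

def deficiency_indices(A, lam):
    """delta_k = dim Ker (A - lam I)^k = n - rank (A - lam I)^k, k = 1, ..., n."""
    n = A.shape[0]
    M = A - lam * np.eye(n)
    deltas, P = [], np.eye(n)
    for _ in range(n):
        P = P @ M
        deltas.append(int(n - np.linalg.matrix_rank(P)))
    return deltas

def block_counts(deltas):
    """nu_k = number of k x k elementary Jordan blocks, via the formulas above."""
    n = len(deltas)
    d = [0] + deltas                 # d[k] = delta_k, with delta_0 = 0
    return [2*d[k] - d[min(k + 1, n)] - d[k - 1] for k in range(1, n + 1)]
```

For the matrix of Example 5 below (with λ = 1), this gives δ = [2, 3, 4, 4] and ν = [1, 0, 1, 0]: one 1 × 1 block and one 3 × 3 block, as found there by hand.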

Example 1. The only upper Jordan canonical forms for a 2 × 2 matrix with a real eigenvalue λ of multiplicity 2, and the corresponding deficiency indices, are given by

[λ 0; 0 λ] with δ₁ = δ₂ = 2, and [λ 1; 0 λ] with δ₁ = 1, δ₂ = 2.

Example 2. The (upper) Jordan canonical forms for a 3 × 3 matrix with a real eigenvalue λ of multiplicity 3, and the corresponding deficiency indices, are given by

[λ 0 0; 0 λ 0; 0 0 λ] with δ₁ = δ₂ = δ₃ = 3,
[λ 1 0; 0 λ 0; 0 0 λ] with δ₁ = 2, δ₂ = δ₃ = 3, and
[λ 1 0; 0 λ 1; 0 0 λ] with δ₁ = 1, δ₂ = 2, δ₃ = 3.

i orithin for finding a basis B of general eigen

we nh thet tàn matrix A with a real eigenvalue 4 of +4 _ saume its Jordan canonical form J with respect to the basis B; cf [Cu]:

1 Find a basis {v yy for Ker( A—AJ);ịẹ, find a linearly independent ‘ set of eigenvectors of A corresponding to the eigenvalue Ạ

2 If 62 > 61, choose » basis {V!}** , for Ker(A — A) such that (A—-A0)vf? = ví) has &2—6, linearly independent solutions v2, j=l yan về cớ j is for Ker(A — - (v1 = (vfUfft Ụ {vj yy 1 is a basis for ‘ (2)

3 If 63 > 62, choose a basis {V\*)}?2, for Ker(A - Af)? with Vy” € - 3 »

span {v08 for j = 1, ,52 — 6; such that 7 đạm 2 (A= Adve) =v : (3, _ — 69 If has 63 — 62 linearly independent solutions vj", j = 1, ,63 ~ 42 orth) bã81 „ (1)

for j= 1, ,62~61, VO) = S724" ev, tet Vy = Et ý VỆ

and Vi) = Vi) for j = 62 ~ 6, + 1, ,6) Then

- (3)y6s—ða

(vị =(907 60008150021 »



4. Continue this process until the kth step, when δₖ = n, to obtain a basis B = {vⱼ⁽ⁱ⁾} for Rⁿ. The matrix A will then assume its Jordan canonical form with respect to this basis.

The diagonalizing matrix P = [v₁ ··· vₙ] in the above theorem which satisfies P⁻¹AP = J is then obtained by an appropriate ordering of the basis B. The manner in which the matrix P is obtained from the basis B is indicated in the following examples. Roughly speaking, each generalized eigenvector vⱼ⁽ᵏ⁾ satisfying (A − λI)vⱼ⁽ᵏ⁾ = vⱼ⁽ᵏ⁻¹⁾ is listed immediately following the generalized eigenvector vⱼ⁽ᵏ⁻¹⁾.

Example 3. Find a basis for R³ which reduces

A = [2 1 0; 0 2 0; 0 −1 2]

to its Jordan canonical form. It is easy to find that λ = 2 is an eigenvalue of multiplicity 3 and that

A − λI = [0 1 0; 0 0 0; 0 −1 0].

Thus δ₁ = 2, and (A − λI)v = 0 is equivalent to x₂ = 0. We therefore choose

v₁⁽¹⁾ = (1, 0, 0)ᵀ and v₂⁽¹⁾ = (0, 0, 1)ᵀ

as a basis for Ker(A − λI). We next solve

(A − λI)v = c₁v₁⁽¹⁾ + c₂v₂⁽¹⁾.

This is equivalent to x₂ = c₁ and −x₂ = c₂; i.e., c₁ = −c₂. We choose

V₁⁽¹⁾ = (1, 0, −1)ᵀ, v₁⁽²⁾ = (0, 1, 0)ᵀ and V₂⁽¹⁾ = (1, 0, 0)ᵀ.

These three vectors, which we re-label as v₁, v₂ and v₃ respectively, are then a basis for Ker(A − λI)² = R³. (Note that we could also choose v₃ = (0, 0, 1)ᵀ and obtain the same result.) The matrix P = [v₁ v₂ v₃] and its inverse are then given by

P = [1 0 1; 0 1 0; −1 0 0] and P⁻¹ = [0 0 −1; 0 1 0; 1 0 1]

respectively. The student should verify that

P⁻¹AP = [2 1 0; 0 2 0; 0 0 2].

Example 4. Find a basis for R⁴ which reduces

A = [0 −1 −2 −1; 1 2 1 1; 0 0 1 0; 0 0 1 1]

to its Jordan canonical form. We find λ = 1 is an eigenvalue of multiplicity 4 and

A − λI = [−1 −1 −2 −1; 1 1 1 1; 0 0 0 0; 0 0 1 0].

Using Gaussian reduction, we find δ₁ = 2 and that the following two vectors

v₁⁽¹⁾ = (−1, 1, 0, 0)ᵀ and v₂⁽¹⁾ = (−1, 0, 0, 1)ᵀ

span Ker(A − λI). We next solve

(A − λI)v = c₁v₁⁽¹⁾ + c₂v₂⁽¹⁾.

These equations are equivalent to x₃ = c₂ and x₁ + x₂ + x₃ + x₄ = c₁. We can therefore choose c₁ = 1, c₂ = 0, x₁ = 1, x₂ = x₃ = x₄ = 0 and find

v₁⁽²⁾ = (1, 0, 0, 0)ᵀ

(with V₁⁽¹⁾ = (−1, 1, 0, 0)ᵀ); and we can choose c₁ = 0, c₂ = 1 = x₃, x₁ = −1, x₂ = x₄ = 0 and find

v₂⁽²⁾ = (−1, 0, 1, 0)ᵀ


Ordering the basis appropriately, we obtain an invertible matrix P with

P⁻¹AP = [1 1 0 0; 0 1 0 0; 0 0 1 1; 0 0 0 1].

In this case we have δ₁ = 2, δ₂ = δ₃ = δ₄ = 4, ν₁ = 2δ₁ − δ₂ = 0, ν₂ = 2δ₂ − δ₃ − δ₁ = 2 and ν₃ = ν₄ = 0.

Example 5. Find a basis for R⁴ which reduces

A = [0 −2 −1 −1; 1 2 1 1; 0 1 1 0; 0 0 0 1]

to its Jordan canonical form. We find λ = 1 is an eigenvalue of multiplicity 4 and

A − λI = [−1 −2 −1 −1; 1 1 1 1; 0 1 0 0; 0 0 0 0].

Using Gaussian reduction, we find δ₁ = 2 and that the following two vectors

v₁⁽¹⁾ = (−1, 0, 1, 0)ᵀ and v₂⁽¹⁾ = (−1, 0, 0, 1)ᵀ

span Ker(A − λI). We next solve

(A − λI)v = c₁v₁⁽¹⁾ + c₂v₂⁽¹⁾.

The last row implies that c₂ = 0, and the third row implies that x₂ = c₁; we take c₁ = 1, so x₂ = 1. The remaining equations are then equivalent to x₁ + x₂ + x₃ + x₄ = 0. Thus, V₁⁽¹⁾ = v₁⁽¹⁾ and we choose

v₁⁽²⁾ = (−1, 1, 0, 0)ᵀ.

Using Gaussian reduction, we next find that δ₂ = 3 and that {V₁⁽¹⁾, v₂⁽¹⁾, v₁⁽²⁾} with V₁⁽¹⁾ = v₁⁽¹⁾ spans Ker(A − λI)². Similarly we find δ₃ = 4, and we must find δ₃ − δ₂ = 1 solution of

(A − λI)v = V₁⁽²⁾

where V₁⁽²⁾ = v₁⁽²⁾. The third row of this equation implies that x₂ = 0, and the remaining equations are then equivalent to x₁ + x₃ + x₄ = 1. We choose

v₁⁽³⁾ = (1, 0, 0, 0)ᵀ.

Then B = {v₁, v₂, v₃, v₄} = {V₁⁽¹⁾, V₁⁽²⁾, v₁⁽³⁾, v₂⁽¹⁾} is a basis for Ker(A − λI)³ = R⁴. The matrix P = [v₁ ··· v₄], with v₁ = V₁⁽¹⁾, v₂ = V₁⁽²⁾, v₃ = v₁⁽³⁾ and v₄ = v₂⁽¹⁾, and its inverse are then given by

P = [−1 −1 1 −1; 0 1 0 0; 1 0 0 0; 0 0 0 1] and P⁻¹ = [0 0 1 0; 0 1 0 0; 1 1 1 1; 0 0 0 1]

respectively. And then we obtain

P⁻¹AP = [1 1 0 0; 0 1 1 0; 0 0 1 0; 0 0 0 1].

In this case we have δ₁ = 2, δ₂ = 3, δ₃ = δ₄ = 4, ν₁ = 2δ₁ − δ₂ = 1, ν₂ = 2δ₂ − δ₃ − δ₁ = 0, ν₃ = 2δ₃ − δ₄ − δ₂ = 1 and ν₄ = δ₄ − δ₃ = 0.


(g) 11 @ 4=[1 4] (h) 11 ® 4=[_1 1Ì (i) a— [E —H A= 0 1Í"

2. Find the Jordan canonical forms for the following matrices:

(a) A = [1 1 0; 0 1 0; 0 0 1]

(b) A = [1 1 0; 0 1 1; 0 0 1]

(c) A = [1 0 0; 0 0 −1; 0 1 0]

(d) A = [1 1 0; 0 1 0; 0 0 −1]

(e) A = [1 0 0; 0 1 1; 0 0 −1]

(f) A = [1 0 0; 0 0 1; 0 1 0]

sponding deficiency indices in each cage,

(b) What is the form of the solution of theinitial value problem (4) in each of these cases? (a) What are the four upper Jordan canonical forms for a 4 x 4

matrix A having complex eigenvalues?

(b) What is the form of the solution of the in:tial value problem (4) in each of these cases? (a) List the seven upper Jordan canonical forms for a 5 x 5 ma trix A with a real eigenvalue \ of multiplicity 5 and give the corresponding deficiency indices in each casẹ

49 1.8 Jordan Forms

(b) What is the form of the solution of the initial value problem (4) in each of these cases?

6 Find the Jordan canonical forms for the following matrices [100 (a) A=]1 2 0 |1 2 3 [ 100 (b) A=|-L 2 0 | 102 [1 1 2 () A=|0 2 1 |0 0 2 [2 1 2 (d) A=|0 2 1 |o 0 2 1 000 1200 wmlies 1 2 4 4 r1 000 1200 oles l1 102 [2 140 0210 @MA=19 920 lo 00 2 [2 1 4 0 : 021 -1 Œ) A=lo o2 4 judo 2 Find the solution of the initial value problem (4) for each of these matrices

7. Suppose that B is an m × m matrix given by equation (2) and that Q = diag[1, ε, ε², ..., ε^{m−1}]. Note that B can be written in the form

B = λI + N.



This shows that the ones above the diagonal in the upper Jordan canonical form of a matrix can be replaced by any ε > 0. A similar result holds when B is given by equation (3).

8. What are the eigenvalues of a nilpotent matrix N?

9. Show that if all of the eigenvalues of the matrix A have negative real parts, then for all x₀ ∈ Rⁿ,

lim_{t→∞} x(t) = 0,

where x(t) is the solution of the initial value problem (4).

10. Suppose that the elementary blocks B in the Jordan form of the matrix A, given by (2) or (3), have no ones or I₂ blocks off the diagonal. (The matrix A is called semisimple in this case.) Show that if all of the eigenvalues of A have nonpositive real parts, then for all x₀ ∈ Rⁿ there is a positive constant M such that |x(t)| ≤ M for all t ≥ 0, where x(t) is the solution of the initial value problem (4).

11. Show by example that if A is not semisimple, then even if all of the eigenvalues of A have nonpositive real parts, there is an x₀ ∈ Rⁿ such that

lim_{t→∞} |x(t)| = ∞.

Hint: Cf. Example 4 in Section 1.7.

12. For any solution x(t) of the initial value problem (4) with det A ≠ 0 and x₀ ≠ 0, show that exactly one of the following alternatives holds:

(a) lim_{t→∞} x(t) = 0 and lim_{t→−∞} |x(t)| = ∞;

(b) lim_{t→∞} |x(t)| = ∞ and lim_{t→−∞} x(t) = 0;

(c) there are positive constants m and M such that for all t ∈ R, m ≤ |x(t)| ≤ M;

(d) lim_{t→±∞} |x(t)| = ∞;

(e) lim_{t→∞} |x(t)| = ∞ and lim_{t→−∞} x(t) does not exist;

(f) lim_{t→−∞} |x(t)| = ∞ and lim_{t→∞} x(t) does not exist.

Hint: See Problem 5 in Problem Set 9.

1.9 Stability Theory

In this section we define the stable, unstable and center subspaces, Eˢ, Eᵘ and Eᶜ respectively, of a linear system

ẋ = Ax. (1)

Recall that Eˢ and Eᵘ were defined in Section 1.2 in the case when A had distinct eigenvalues. We also establish some important properties of these subspaces in this section.

Let wⱼ = uⱼ + ivⱼ be a generalized eigenvector of the (real) matrix A corresponding to an eigenvalue λⱼ = aⱼ + ibⱼ. Note that if bⱼ = 0 then vⱼ = 0. And let

B = {u₁, ..., uₖ, uₖ₊₁, vₖ₊₁, ..., uₘ, vₘ}

be a basis of Rⁿ (with n = 2m − k) as established by Theorems 1 and 2 and the Remark in Section 1.7.

Definition 1. Let λⱼ = aⱼ + ibⱼ, wⱼ = uⱼ + ivⱼ and B be as described above. Then

Eˢ = Span{uⱼ, vⱼ | aⱼ < 0}
Eᶜ = Span{uⱼ, vⱼ | aⱼ = 0}
Eᵘ = Span{uⱼ, vⱼ | aⱼ > 0};

i.e., Eˢ, Eᶜ and Eᵘ are the subspaces of Rⁿ spanned by the real and imaginary parts of the generalized eigenvectors wⱼ corresponding to eigenvalues λⱼ with negative, zero and positive real parts respectively.
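Numerically, spanning vectors for the three subspaces can be read off from an eigendecomposition. The sketch below (assuming NumPy; the function name is ours) handles only the diagonalizable case, where plain eigenvectors suffice; each complex conjugate pair contributes its real and imaginary parts:

```python
import numpy as np

def subspace_spans(A, tol=1e-9):
    """Collect spanning vectors for E^s, E^c, E^u of x' = Ax by the sign of
    Re(lambda).  Assumes A is diagonalizable; handling generalized
    eigenvectors would require a fuller Jordan computation."""
    vals, vecs = np.linalg.eig(A)
    Es, Ec, Eu = [], [], []
    for lam, w in zip(vals, vecs.T):
        target = Es if lam.real < -tol else (Eu if lam.real > tol else Ec)
        for part in (np.real(w), np.imag(w)):      # u_j and v_j
            if np.linalg.norm(part) > tol:
                target.append(part)
    return Es, Ec, Eu
```

For the matrix of Example 2 below, the collected vectors span a two-dimensional Eᶜ (the x₁, x₂ plane) and a one-dimensional Eᵘ (the x₃-axis).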


The stable subspace Eˢ of (1) is the x₁, x₂ plane and the unstable subspace Eᵘ of (1) is the x₃-axis. The phase portrait for the system (1) is shown in Figure 1 for this example.

Figure 1. The stable and unstable subspaces Eˢ and Eᵘ of the linear system (1).

Example 2. The matrix

A = [0 −1 0; 1 0 0; 0 0 2]

has λ₁ = i, u₁ = (0, 1, 0)ᵀ, v₁ = (1, 0, 0)ᵀ, λ₂ = 2 and u₂ = (0, 0, 1)ᵀ. The center subspace of (1) is the x₁, x₂ plane and the unstable subspace of (1) is the x₃-axis. The phase portrait for the system (1) is shown in Figure 2 for this example. Note that all solutions lie on the cylinders x₁² + x₂² = c².

In these examples we see that all solutions in Eˢ approach the equilibrium point x = 0 as t → ∞ and that all solutions in Eᵘ approach the equilibrium point x = 0 as t → −∞. Also, in the above example the solutions in Eᶜ are bounded, and if x(0) ≠ 0, then they are bounded away from x = 0 for all t ∈ R. We shall see that these statements about Eˢ and Eᵘ are true in general.


54

1 Linear Systems

We have Ay = Ay = 0, yy = (0,1)7 ia an eigenvector and tạ = (1,0)7

is a generalized eigenvector corresponding ‘to 1 = 0 Thus E¢ = R’ The

solution of (1) with x(0) = ¢ = (c1,¢2)7 is easily found to be

x1(t) = c1
x2(t) = c1 t + c2.

The phase portrait for (1) in this case is given in Figure 3. Some solutions (those with c1 = 0) remain bounded while others do not.

We next describe the notion of the flow of a system of differential equations and show that the stable, unstable and center subspaces of (1) are invariant under the flow of (1).

By the fundamental theorem in Section 1.4, the solution to the initial value problem associated with (1) is given by

x(t) = e^{At} x0.

The mapping e^{At}: R^n → R^n may be regarded as describing the motion of points x0 ∈ R^n along trajectories of (1). This mapping is called the flow of the linear system (1). We next define the important concept of a hyperbolic flow:

Definition 2. If all eigenvalues of the n × n matrix A have nonzero real part, then the flow e^{At}: R^n → R^n is called a hyperbolic flow and (1) is called a hyperbolic linear system.
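Definition 2 is easy to test numerically: a flow e^{At} is hyperbolic exactly when no eigenvalue of A lies on the imaginary axis. A minimal sketch, added as an illustration (the tolerance tol is an assumption of the sketch):

```python
import numpy as np

def is_hyperbolic(A, tol=1e-9):
    """Definition 2: every eigenvalue of A has nonzero real part."""
    return bool(np.all(np.abs(np.linalg.eigvals(A).real) > tol))

A_hyp = np.array([[-2.0, -1.0], [1.0, -2.0]])  # eigenvalues -2 +/- i
A_cen = np.array([[0.0, -1.0], [1.0, 0.0]])    # eigenvalues +/- i (not hyperbolic)

print(is_hyperbolic(A_hyp), is_hyperbolic(A_cen))  # True False
```

The second matrix generates the pure rotations of Example 2's x1, x2 block; its flow is not hyperbolic because E^c ≠ {0}.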

Definition 3. A subspace E ⊂ R^n is said to be invariant with respect to the flow e^{At}: R^n → R^n if e^{At} E ⊂ E for all t ∈ R.

We next show that the stable, unstable and center subspaces E^s, E^u and E^c of (1) are invariant under the flow e^{At} of the linear system (1); i.e., any solution starting in E^s, E^u or E^c at time t = 0 remains in E^s, E^u or E^c respectively for all t ∈ R.

Lemma. Let E be the generalized eigenspace of A corresponding to an eigenvalue λ. Then AE ⊂ E.

Proof. Let {v_1, ..., v_k} be a basis of generalized eigenvectors for E. Then given v ∈ E,

v = Σ_{j=1}^{k} c_j v_j

and by linearity

Av = Σ_{j=1}^{k} c_j A v_j.

Now since each v_j satisfies

(A − λI)^{k_j} v_j = 0

for some minimal k_j, we have

(A − λI) v_j = V_j,

i.e., A v_j = λ v_j + V_j, where V_j ∈ Ker(A − λI)^{k_j − 1} ⊂ E. Thus it follows by induction that A v_j ∈ E, and since E is a subspace of R^n, it follows that

Σ_{j=1}^{k} c_j A v_j ∈ E;

i.e., Av ∈ E and therefore AE ⊂ E.

Theorem 1. Let A be a real n × n matrix. Then

R^n = E^s ⊕ E^u ⊕ E^c

where E^s, E^u and E^c are the stable, unstable and center subspaces of (1) respectively; furthermore, E^s, E^u and E^c are invariant with respect to the flow e^{At} of (1).

Proof. Since the set B = {u_1, ..., u_k, u_{k+1}, v_{k+1}, ..., u_m, v_m} described at the beginning of this section is a basis for R^n, it follows from the definition of E^s, E^u and E^c that

R^n = E^s ⊕ E^u ⊕ E^c.

If x0 ∈ E^s then

x0 = Σ_{j=1}^{n_s} c_j V_j

where V_j = v_j or u_j and {V_j}_{j=1}^{n_s} ⊂ B is a basis for the stable subspace E^s as described in Definition 1. Then by the linearity of e^{At}, it follows that

e^{At} x0 = Σ_{j=1}^{n_s} c_j e^{At} V_j.

But

e^{At} V_j = lim_{k→∞} [I + At + ··· + A^k t^k / k!] V_j ∈ E^s

since, for j = 1, ..., n_s, A^k V_j ∈ E^s by the above lemma and since E^s is complete (i.e., closed). Thus, for all t ∈ R, e^{At} x0 ∈ E^s and therefore e^{At} E^s ⊂ E^s; i.e., E^s is invariant under the flow e^{At}. It can be shown similarly that E^u and E^c are invariant under the flow e^{At}.
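The invariance asserted in Theorem 1 can be observed numerically: for a matrix whose stable subspace is the x1, x2 plane, the flow e^{At} never carries a point of that plane out of it. A sketch using SciPy's matrix exponential, added here as an illustration (the sample times and the test point are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

# Eigenvalues -2 +/- i on the x1,x2 block and 3 on the x3-axis,
# so E^s is the x1,x2 plane and E^u is the x3-axis.
A = np.array([[-2.0, -1.0, 0.0],
              [ 1.0, -2.0, 0.0],
              [ 0.0,  0.0, 3.0]])

x0 = np.array([0.7, -1.3, 0.0])        # a point of E^s (third component 0)
for t in (-1.0, 0.5, 1.0, 2.0):
    xt = expm(A * t) @ x0
    assert abs(xt[2]) < 1e-12          # trajectory stays in the x1,x2 plane

# solutions starting in E^s also decay toward the origin as t grows
print(np.linalg.norm(expm(A * 5.0) @ x0))
```

The decay of the printed norm anticipates the corollary to Theorems 2 and 3 below.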


We next generalize the definition of sinks and sources of two-dimensional systems given in Section 1.5.

Definition 4. If all of the eigenvalues of A have negative (positive) real parts, the origin is called a sink (source) for the linear system (1).

Example 4. Consider the linear system (1) with the matrix

A = [ -2  -1   0
       1  -2   0
       0   0  -3 ]

We have eigenvalues λ1 = -2 + i and λ2 = -3 and the same eigenvectors as in Example 1. Thus E^s = R^3 and the origin is a sink for this example. The phase portrait is shown in Figure 4.

Figure 4. A linear system with a sink at the origin.

Theorem 2. The following statements are equivalent:

(a) For all x0 ∈ R^n, lim_{t→∞} e^{At} x0 = 0 and, for x0 ≠ 0, lim_{t→−∞} |e^{At} x0| = ∞.

(b) All eigenvalues of A have negative real part.

(c) There are positive constants a, c, m and M and a constant k ≥ 0 such that for all x0 ∈ R^n and t ∈ R

m |t|^k e^{-at} |x0| ≤ |e^{At} x0| ≤ M e^{-ct} |x0|.

Proof. (a ⇒ b): If one of the eigenvalues λ = a + ib has positive real part, a > 0, then by the theorem and corollary in Section 1.8 there exists an x0 ∈ R^n, x0 ≠ 0, such that |e^{At} x0| ≥ e^{at} |x0|. Therefore |e^{At} x0| → ∞ as t → ∞; i.e.,

lim_{t→∞} e^{At} x0 ≠ 0.

And if one of the eigenvalues of A has zero real part, say λ = ib, then by the corollary in Section 1.8 there exists x0 ∈ R^n, x0 ≠ 0, such that at least one component of the solution is of the form c t^k cos bt or c t^k sin bt with k ≥ 0. And once again

lim_{t→∞} e^{At} x0 ≠ 0.

Thus, if not all of the eigenvalues of A have negative real part, there exists an x0 ∈ R^n such that e^{At} x0 does not approach 0 as t → ∞; i.e., a ⇒ b.

(b ⇒ c): If all of the eigenvalues of A have negative real part, then it follows from the corollary in Section 1.8 that there exist positive constants a, m, M1 and a constant k ≥ 0 such that

m |t|^k e^{-at} |x0| ≤ |e^{At} x0| ≤ M1 (1 + |t|^k) e^{-at} |x0|

for all t ∈ R and x0 ∈ R^n. But the function (1 + |t|^k) e^{-(a-c)t} is bounded for 0 < c < a and therefore for 0 < c < a there exists a positive constant M such that

m |t|^k e^{-at} |x0| ≤ |e^{At} x0| ≤ M e^{-ct} |x0|

for all x0 ∈ R^n and t ∈ R.

(c ⇒ a): If this last pair of inequalities is satisfied for all x0 ∈ R^n, it follows by taking the limit as t → ±∞ on each side of the inequalities that

lim_{t→∞} |e^{At} x0| = 0   and   lim_{t→−∞} |e^{At} x0| = ∞

for x0 ≠ 0. This completes the proof of Theorem 2.
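Condition (b) of Theorem 2 is the easiest to check in practice, and condition (a) can then be observed numerically. A sketch added as an illustration (the sample times and the initial point are arbitrary choices of the sketch):

```python
import numpy as np
from scipy.linalg import expm

# The matrix of Example 4: eigenvalues -2 +/- i and -3, a sink.
A = np.array([[-2.0, -1.0, 0.0],
              [ 1.0, -2.0, 0.0],
              [ 0.0,  0.0, -3.0]])

# condition (b): all eigenvalues have negative real part
assert np.all(np.linalg.eigvals(A).real < 0)

# condition (a): |e^{At} x0| decays to 0 as t grows
x0 = np.array([1.0, 2.0, -1.0])
norms = [np.linalg.norm(expm(A * t) @ x0) for t in (0.0, 1.0, 2.0, 4.0)]
assert all(n2 < n1 for n1, n2 in zip(norms, norms[1:]))  # strictly decreasing
print(norms[-1])   # already very small at t = 4
```

For this A the decay rate is governed by the slowest eigenvalue, real part −2, consistent with the exponential bounds of condition (c).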

The next theorem is proved in exactly the same manner as Theorem 2 above, using the theorem and its corollary in Section 1.8.

Theorem 3. The following statements are equivalent:

(a) For all x0 ∈ R^n, lim_{t→−∞} e^{At} x0 = 0 and, for x0 ≠ 0, lim_{t→∞} |e^{At} x0| = ∞.

(b) All eigenvalues of A have positive real part.

(c) There are positive constants a, c, m and M and a constant k ≥ 0 such that for all x0 ∈ R^n and t ∈ R

m |t|^k e^{at} |x0| ≤ |e^{At} x0| ≤ M e^{ct} |x0|.

Corollary. If x0 ∈ E^s, then e^{At} x0 ∈ E^s for all t ∈ R and

lim_{t→∞} e^{At} x0 = 0.

And if x0 ∈ E^u, then e^{At} x0 ∈ E^u for all t ∈ R and

lim_{t→−∞} e^{At} x0 = 0.

Thus, we see that all solutions of (1) which start in the stable manifold E^s of (1) remain in E^s for all t and approach the origin exponentially fast as t → ∞; and all solutions of (1) which start in the unstable manifold E^u of (1) remain in E^u for all t and approach the origin exponentially fast as t → −∞. As we shall see in Chapter 2, there is an analogous result for nonlinear systems called the Stable Manifold Theorem; cf. Section 2.7.

PROBLEM SET 9

1. Find the stable, unstable and center subspaces E^s, E^u and E^c of the linear system (1) with the matrix

(a) A = …    (b) A = …    (c) A = …

(d) A = …    (e) A = …    (f) A = …

Also, sketch the phase portrait in each of these cases. Which of these matrices define a hyperbolic flow, e^{At}?

2. Same as Problem 1 for the matrices

(a) A = [ -1   0   0
           0  -2   0
           0   0   3 ]

(b) A = …

(c) A = [ 0  -1   0
          1   0   0
          0   0  -1 ]

(d) A = …

3. Solve the system

ẋ = [  0   2   0
      -2   0   0
       2   0   6 ] x

Find the stable, unstable and center subspaces E^s, E^u and E^c for this system and sketch the phase portrait. For x0 ∈ E^s, show that the sequence of points x_n = e^{An} x0 ∈ E^s; similarly, for x0 ∈ E^c or E^u, show that x_n ∈ E^c or E^u respectively.

4. Find the stable, unstable and center subspaces E^s, E^u and E^c for the linear system (1) with the matrix A given by

(a) Problem 2(b) in Problem Set 7;

(b) Problem 2(d) in Problem Set 7.

5. Let A be an n × n nonsingular matrix and let x(t) be the solution of the initial value problem (1) with x(0) = x0. Show that

(a) if x0 ∈ E^s ∼ {0}, then lim_{t→∞} x(t) = 0 and lim_{t→−∞} |x(t)| = ∞;

(b) if x0 ∈ E^u ∼ {0}, then lim_{t→∞} |x(t)| = ∞ and lim_{t→−∞} x(t) = 0;

(c) if x0 ∈ E^c ∼ {0} and A is semisimple (cf. Problem 10 in Section 1.8), then there are positive constants m and M such that for all t ∈ R, m ≤ |x(t)| ≤ M;

(d) if x0 ∈ E^c ∼ {0} and A is not semisimple, then there is an x0 ∈ R^n such that lim_{t→±∞} |x(t)| = ∞;

(e) if E^s ≠ {0}, E^u ≠ {0}, and x0 ∈ E^s ⊕ E^u ∼ (E^s ∪ E^u), then lim_{t→±∞} |x(t)| = ∞;

(f) if E^u ≠ {0}, E^c ≠ {0}, and x0 ∈ E^u ⊕ E^c ∼ (E^u ∪ E^c), then lim_{t→∞} |x(t)| = ∞ and lim_{t→−∞} x(t) does not exist;

(g) if E^s ≠ {0}, E^c ≠ {0}, and x0 ∈ E^s ⊕ E^c ∼ (E^s ∪ E^c), then lim_{t→−∞} |x(t)| = ∞ and lim_{t→∞} x(t) does not exist. Cf. Problem 12 in Problem Set 8.

6. Show that the only invariant lines for the linear system (1) with x ∈ R^2 are the lines a x1 + b x2 = 0 where v = (−b, a)^T is an eigenvector of A.

1.10 Nonhomogeneous Linear Systems

In this section we solve the nonhomogeneous linear system

ẋ = Ax + b(t)    (1)

where A is an n × n matrix and b(t) is a continuous vector valued function.

Definition. A fundamental matrix solution of

ẋ = Ax    (2)

is any nonsingular n × n matrix function Φ(t) that satisfies

Φ′(t) = AΦ(t)   for all t ∈ R.

Note that according to the lemma in Section 1.4, Φ(t) = e^{At} is a fundamental matrix solution which satisfies Φ(0) = I, the n × n identity matrix. Furthermore, any fundamental matrix solution Φ(t) of (2) is given by Φ(t) = e^{At} C for some nonsingular matrix C. Once we have found a fundamental matrix solution of (2), it is easy to solve the nonhomogeneous system (1). The result is given in the following theorem.

Theorem 1. If Φ(t) is any fundamental matrix solution of (2), then the solution of the nonhomogeneous linear system (1) with the initial condition x(0) = x0 is given by

x(t) = Φ(t)Φ^{-1}(0) x0 + ∫_0^t Φ(t)Φ^{-1}(τ) b(τ) dτ.    (3)

Proof. For the function x(t) defined above,

x′(t) = Φ′(t)Φ^{-1}(0) x0 + Φ(t)Φ^{-1}(t) b(t) + ∫_0^t Φ′(t)Φ^{-1}(τ) b(τ) dτ.

And since Φ(t) is a fundamental matrix solution of (2), it follows that

x′(t) = A [ Φ(t)Φ^{-1}(0) x0 + ∫_0^t Φ(t)Φ^{-1}(τ) b(τ) dτ ] + b(t) = A x(t) + b(t)

for all t ∈ R. And this completes the proof of the theorem.

Remark 1. If the matrix A in (1) is time dependent, A = A(t), then exactly the same proof shows that the solution of the nonhomogeneous linear system (1) with the initial condition x(0) = x0 is given by (3), provided Φ(t) is a fundamental matrix solution of (2) with A = A(t). For the most part, we do not consider solutions of (2) with A = A(t) in this book. The reader should consult [C/L], [H] or [W] for a discussion of this topic.

Remark 2. With Φ(t) = e^{At}, the solution of the nonhomogeneous linear system (1), as given in the above theorem, has the form

x(t) = e^{At} x0 + e^{At} ∫_0^t e^{-Aτ} b(τ) dτ.

Example. Solve the forced harmonic oscillator problem

ẍ + x = f(t).

This can be written as the nonhomogeneous system

ẋ1 = x2
ẋ2 = -x1 + f(t)

or equivalently in the form (1) with

A = [  0   1
      -1   0 ]   and   b(t) = [ 0
                                f(t) ].

In this case

e^{At} = [  cos t   sin t
           -sin t   cos t ],

a rotation matrix; and

e^{At} ∫_0^t e^{-Aτ} b(τ) dτ = [ ∫_0^t f(τ) sin(t − τ) dτ
                                 ∫_0^t f(τ) cos(t − τ) dτ ].

It follows that the solution x(t) = x1(t) of the original forced harmonic oscillator problem is given by

x(t) = x(0) cos t + ẋ(0) sin t + ∫_0^t f(τ) sin(t − τ) dτ.
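The closed-form solution x(t) = x(0) cos t + ẋ(0) sin t + ∫_0^t f(τ) sin(t − τ) dτ can be checked numerically by evaluating the convolution integral with quadrature. The sketch below is an added illustration; the constant forcing f ≡ 1 is chosen because the exact answer of ẍ + x = 1 with zero initial data, x(t) = 1 − cos t, is known:

```python
import numpy as np
from scipy.integrate import quad

def forced_oscillator(t, x0, v0, f):
    """Solution of x'' + x = f(t) with x(0) = x0, x'(0) = v0,
    via the variation-of-parameters formula above."""
    integral, _ = quad(lambda tau: f(tau) * np.sin(t - tau), 0.0, t)
    return x0 * np.cos(t) + v0 * np.sin(t) + integral

# constant forcing f = 1: the exact solution is x(t) = 1 - cos(t)
t = 2.0
x = forced_oscillator(t, 0.0, 0.0, lambda tau: 1.0)
print(abs(x - (1.0 - np.cos(t))))   # essentially zero
```

The same function evaluates the solution for any continuous forcing f, which is the practical content of Theorem 1 for this system.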

PROBLEM SET 10

1. Just as the method of variation of parameters can be used to solve a nonhomogeneous linear differential equation, it can also be used to solve the nonhomogeneous linear system (1). To see how this method can be used to obtain the solution in the form (3), assume that the solution x(t) of (1) can be written in the form

x(t) = Φ(t) c(t)

where Φ(t) is a fundamental matrix solution of (2). Differentiate this equation for x(t) and substitute it into (1) to obtain

c′(t) = Φ^{-1}(t) b(t).

Integrate this equation and use the fact that c(0) = Φ^{-1}(0) x0 to obtain

c(t) = Φ^{-1}(0) x0 + ∫_0^t Φ^{-1}(τ) b(τ) dτ.

Finally, substitute the function c(t) into x(t) = Φ(t) c(t) to obtain (3).

2. Use Theorem 1 to solve the nonhomogeneous linear system

ẋ = … x + …

with the initial condition x(0) = ….

3. Show that

Φ(t) = [ e^{-2t} cos t   -sin t
         e^{-2t} sin t    cos t ]

is a fundamental matrix solution of the nonautonomous linear system ẋ = A(t) x with

A(t) = [ -2 cos^2 t     -1 - sin 2t
          1 - sin 2t    -2 sin^2 t ].

Find the inverse of Φ(t) and use Theorem 1 and Remark 1 to solve the nonhomogeneous linear system

ẋ = A(t) x + b(t).

2 Nonlinear Systems: Local Theory

In Chapter 1 we saw that any linear system

ẋ = Ax    (1)

has a unique solution through each point x0 in the phase space R^n; the solution is given by x(t) = e^{At} x0 and it is defined for all t ∈ R. In this chapter we begin our study of nonlinear systems of differential equations

ẋ = f(x)    (2)

where f: E → R^n and E is an open subset of R^n. We show that under certain conditions on the function f, the nonlinear system (2) has a unique solution through each point x0 ∈ E defined on a maximal interval of existence (α, β) ⊂ R. In general, it is not possible to solve the nonlinear system (2); however, a great deal of qualitative information about the local behavior of the solution is determined in this chapter. In particular, we establish the Hartman–Grobman Theorem and the Stable Manifold Theorem, which show that topologically the local behavior of the nonlinear system (2) near an equilibrium point x0 where f(x0) = 0 is typically determined by the behavior of the linear system (1) near the origin when A = Df(x0), the derivative of f at x0. We also discuss some of the ramifications of these theorems for two-dimensional systems when det Df(x0) ≠ 0 and cite some of the local results of Andronov et al. [A–I] for planar systems (2) with det Df(x0) = 0.

2.1 Some Preliminary Concepts and Definitions

Before beginning our discussion of the fundamental theory of nonlinear systems of differential equations, we present some preliminary concepts and definitions. First of all, in this book we shall only consider autonomous systems of ordinary differential equations

ẋ = f(x)    (1)

as opposed to nonautonomous systems

ẋ = f(x, t)    (2)


where the function f can depend on the independent variable t; however, any nonautonomous system (2) with x ∈ R^n can be written as an autonomous system (1) with x ∈ R^{n+1} simply by letting x_{n+1} = t and ẋ_{n+1} = 1. The fundamental theory for (1) and (2) does not differ significantly, although it is possible to obtain the existence and uniqueness of solutions of (2) under slightly weaker hypotheses on f as a function of t; cf., for example, Coddington and Levinson [C/L]. Also, see Problem 3 in Problem Set 2.

Notice that the solution of the elementary differential equation

ẋ = f(t)

is given by

x(t) = x(0) + ∫_0^t f(s) ds

if f(t) is integrable. And in general, the differential equations (1) or (2) will have a solution if the function f is continuous; cf. [C/L], p. 6. However, continuity of the function f in (1) is not sufficient to guarantee uniqueness of the solution, as the next example shows.

Example 1. The initial value problem

ẋ = 3x^{2/3}
x(0) = 0

has two different solutions through the point (0, 0), namely

u(t) = t^3   and   v(t) = 0

for all t ∈ R. Clearly, each of these functions satisfies the differential equation for all t ∈ R as well as the initial condition x(0) = 0. (The first solution u(t) = t^3 can be obtained by the method of separation of variables.) Notice that the function f(x) = 3x^{2/3} is continuous at x = 0 but that it is not differentiable there.
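Both claimed solutions can be verified directly: differentiate each and compare with f(x) = 3x^{2/3}. A small numerical check, added as an illustration (np.cbrt is used so that the real cube root is taken for negative t):

```python
import numpy as np

f = lambda x: 3.0 * np.cbrt(x) ** 2    # f(x) = 3 x^(2/3), real-valued for x < 0

ts = np.linspace(-2.0, 2.0, 9)
u = ts ** 3                            # u(t) = t^3
du = 3.0 * ts ** 2                     # u'(t)
assert np.allclose(du, f(u))           # u satisfies x' = f(x)
assert f(0.0) == 0.0                   # so v(t) = 0 satisfies it as well
print("two distinct solutions through (0, 0)")
```

The check confirms that uniqueness fails precisely because f is not differentiable (indeed, not Lipschitz) at x = 0.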

Another feature of nonlinear systems that differs from linear systems is that even when the function f in (1) is defined and continuous for all x ∈ R^n, the solution x(t) may become unbounded at some finite time t = β; i.e., the solution may only exist on some proper subinterval (α, β) ⊂ R. This is illustrated by the next example.

Example 2. Consider the initial value problem

ẋ = x^2
x(0) = 1.

The solution, which can be found by the method of separation of variables, is given by

x(t) = 1/(1 − t).

This solution is only defined for t ∈ (−∞, 1) and

lim_{t→1⁻} x(t) = ∞.

The interval (−∞, 1) is called the maximal interval of existence of the solution of this initial value problem. Notice that the function x(t) = 1/(1 − t) has another branch defined on the interval (1, ∞); however, this branch is not considered part of the solution of the initial value problem since the initial time t = 0 ∉ (1, ∞). This is made clear in the definition of a solution in Section 2.2.
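The finite-time blow-up can be seen with a standard ODE solver: integrating toward t = 1, the numerical solution tracks 1/(1 − t) and grows without bound. A sketch using SciPy, added as an illustration (the stopping time 0.99 and the tolerances are arbitrary choices of the sketch):

```python
import numpy as np
from scipy.integrate import solve_ivp

# x' = x^2, x(0) = 1; the exact solution x(t) = 1/(1 - t) blows up at t = 1
sol = solve_ivp(lambda t, x: x ** 2, (0.0, 0.99), [1.0],
                rtol=1e-10, atol=1e-12)

t_end, x_end = sol.t[-1], sol.y[0, -1]
print(t_end, x_end)   # x is already near 100 at t = 0.99
```

Pushing the right endpoint closer to 1 makes the solver take ever smaller steps, a practical symptom that the maximal interval of existence ends there.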

Before stating and proving the fundamental existence–uniqueness theorem for the nonlinear system (1), it is first necessary to define some terminology and notation concerning the derivative Df of a function f: R^n → R^n.

Definition 1. The function f: R^n → R^n is differentiable at x0 ∈ R^n if there is a linear transformation Df(x0) ∈ L(R^n) that satisfies

lim_{|h|→0} |f(x0 + h) − f(x0) − Df(x0) h| / |h| = 0.

The linear transformation Df(x0) is called the derivative of f at x0.

gives us a method for computing the derivative in coordinates
