Linear algebra 4e friedburg

Preface

This Instructor's Solutions Manual contains solutions for essentially all of the exercises in the text that are intended to be done by hand. Solutions to MATLAB exercises are not included. The Student's Solutions Manual that accompanies this text contains solutions for only selected odd-numbered exercises, including those exercises whose answers appear in the answer key. The solutions that appear in the student's manual are identical to those provided in this manual, and generally provide a more detailed solution than is available in the answer key. Although no pattern is strictly adhered to throughout the student manual, the solutions provided there are primarily to the computational exercises, whereas solutions that involve proof are generally not included. None of the solutions to the supplementary end-of-chapter exercises are included in the student manual.

Contents

1. Matrices and Systems of Equations
   1.1 Introduction to Matrices and Systems of Linear Equations
   1.2 Echelon Form and Gauss-Jordan Elimination
   1.3 Consistent Systems of Linear Equations
   1.4 Applications
   1.5 Matrix Operations
   1.6 Algebraic Properties of Matrix Operations
   1.7 Linear Independence and Nonsingular Matrices
   1.8 Data Fitting, Numerical Integration
   1.9 Matrix Inverses and Their Properties
   1.10 Supplementary Exercises
   1.11 Conceptual Exercises
2. Vectors in 2-Space and 3-Space
   2.1 Vectors in the Plane
   2.2 Vectors in Space
   2.3 The Dot Product and the Cross Product
   2.4 Lines and Planes in Space
   2.5 Supplementary Exercises
   2.6 Conceptual Exercises
3. The Vector Space Rn
   3.1 Introduction
   3.2 Vector Space Properties of Rn
   3.3 Examples of Subspaces
   3.4 Bases for Subspaces
   3.5 Dimension
   3.6 Orthogonal Bases for Subspaces
   3.7 Linear Transformations from Rn to Rm
   3.8 Least-Squares Solutions to Inconsistent Systems
   3.9 Fitting Data and Least-Squares Solutions
   3.10 Supplementary Exercises
   3.11 Conceptual Exercises
4. The Eigenvalue Problem
   4.1 Introduction
   4.2 Determinants and the Eigenvalue Problem
   4.3 Elementary Operations and Determinants
   4.4 Eigenvalues and the Characteristic Polynomial
   4.5 Eigenvalues and Eigenvectors
   4.6 Complex Eigenvalues and Eigenvectors
   4.7 Similarity Transformations & Diagonalization
   4.8 Applications
   4.9 Supplementary Exercises
   4.10 Conceptual Exercises
5. Vector Spaces and Linear Transformations
   5.1 Introduction (No exercises)
   5.2 Vector Spaces
   5.3 Subspaces
   5.4 Linear Independence, Bases, and Coordinates
   5.5 Dimension
   5.6 Inner Products
   5.7 Linear Transformations
   5.8 Operations with Linear Transformations
   5.9 Matrix Representations for Linear Transformations
   5.10 Change of Basis and Diagonalization
   5.11 Supplementary Exercises
   5.12 Conceptual Exercises
6. Determinants
   6.1 Introduction (No exercises)
   6.2 Cofactor Expansion of Determinants
   6.3 Elementary Operations and Determinants
   6.4 Cramer's Rule
   6.5 Applications of Determinants
   6.6 Supplementary Exercises
   6.7 Conceptual Exercises
7. Eigenvalues and Applications
   7.1 Quadratic Forms
   7.2 Systems of Differential Equations
   7.3 Transformation to Hessenberg Form
   7.4 Eigenvalues of Hessenberg Matrices
   7.5 Householder Transformations
   7.6 QR Factorization & Least-Squares
   7.7 Matrix Polynomials & The Cayley-Hamilton Theorem
   7.8 Generalized Eigenvectors & Differential Equations
   7.9 Supplementary Exercises
   7.10 Conceptual Exercises

Chapter 1. Matrices and Systems of Equations

1.1 Introduction to Matrices and Systems of Linear Equations

1. Linear
2. Nonlinear
3. Linear
4. Nonlinear
5. Nonlinear
6. Linear
7. The system is x1 + 3x2 = 7, 4x1 - x2 = 2; the solution (1, 2) checks, since 1 + 3·2 = 7 and 4·1 - 2 = 2.
8. The system is 6x1 - x2 + x3 = 14, x1 + 2x2 + 4x3 = …
9. The system is x1 + x2 = 0, 3x1 + 4x2 = -1, -x1 + 2x2 = -3; the solution (1, -1) checks, since 1 + (-1) = 0, 3·1 + 4·(-1) = -1, and -1 + 2·(-1) = -3.
10. The system is 3x2 = 9, 4x1 = 8; the solution (2, 3) checks, since 3·3 = 9 and 4·2 = 8.
11. Unique solution
12. No solution
13. Infinitely many
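Classifications like those in Exercises 11-14 (unique solution, no solution, infinitely many solutions) can be checked mechanically for a 2x2 system: the solution is unique exactly when a11*a22 - a12*a21 is nonzero, and otherwise consistency decides between the other two cases. A minimal pure-Python sketch, not from the manual (the function name and example systems are my own):

```python
def classify_2x2(a11, a12, b1, a21, a22, b2):
    """Classify a11*x1 + a12*x2 = b1, a21*x1 + a22*x2 = b2.

    Returns ("unique", (x1, x2)), ("none", None), or ("infinite", None).
    """
    det = a11 * a22 - a12 * a21
    if det != 0:
        # Cramer's rule gives the unique solution
        x1 = (b1 * a22 - b2 * a12) / det
        x2 = (a11 * b2 - a21 * b1) / det
        return "unique", (x1, x2)
    # det == 0: the two lines are parallel or coincident
    if a11 * b2 - a21 * b1 == 0 and a12 * b2 - a22 * b1 == 0:
        return "infinite", None
    return "none", None
```

For example, classify_2x2(1, 3, 7, 4, -1, 2) returns ("unique", (1.0, 2.0)).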
solutions CHAPTER MATRICES AND SYSTEMS OF EQUATIONS 14 No solution 15 (a) The planes not intersect; that is, the planes are parallel (b) The planes intersect in a line or the planes are coincident 16 The planes intersect in the line x = (1 − t)/2, y = 2, z = t 17 The planes intersect in the line x = − 3t, y = 2t − 1, z = t 18 Coincident planes 19 A = 2   −3  21 Q =  20 C = 22 x1 +2x2 +7x3 = 2x1 +2x2 +4x3 = 23 2x1 + x2 = ; x1 + 4x2 = −3 4x1 + 3x2 = 2x1 + x2 = 3x1 + 2x2 = 24 A = −1 1 25 A = 1 −1 −1  26 A =   27 A =   28 A =  , B= −1 −1 1 1 −1 −1    −1 1 −1  , B =  1 1   1 1 −1  , B =  −1 −1 1 −1 1   1 −3 1 −3 −5 −5  , B =  −1 −3 −1 −3 , B=    −1 −2  1.1 INTRODUCTION TO MATRICES AND SYSTEMS OF LINEAR EQUATIONS     1 1 1 , B =   29 A =  −1 −1 30 Elementary operations on equations: E2 − 2E1 Reduced system of equations: 2x1 + 3x2 = −7x2 = −5 Elementary row operations: R2 − 2R1 Reduced augmented matrix: −7 −5 31 Elementary operations on equations: E2 − E1 , E3 + 2E1 Reduced system of equations: x1 + 2x2 − x3 = −x2 + 3x3 = 5x2 − 2x3 = Elementary row operations: R2 − R1 , R3 + 2R1   −1  Reduced augmented matrix:  −1 −2 32 Elementary operations on equations: E1 ↔ E2 , E3 − 2E1 x1 − x2 + 2x3 = x2 + x = Reduced system of equations: 3x2 − 5x3 = Elementary row operations: R1 ↔ R2 , R3 − 2R1   −1 1  Reduced augmented matrix:  0 −5 33 Elementary operations on equations: E2 − E1 , E3 − 3E1 Reduced system of equations: x1 + x = −2x2 = −2 −2x2 = −21 Elementary row operations: R2 − R1 , R3 − 3R1   1 Reduced augmented matrix:  −2 −2  −2 −21 CHAPTER MATRICES AND SYSTEMS OF EQUATIONS 34 Elementary operations on equations: E2 + E1 , E3 + 2E1 Reduced system of equations: x1 + x + x − x = 2x2 = 3x2 + 3x3 − 3x4 = Elementary row operations: R2 + R1 , R3 + 2R1   1 −1  Reduced augmented matrix:  0 3 −3 35 Elementary operations on equations: E2 ↔ E1 , E3 + E1 x1 + 2x2 − x3 + x4 = x2 + x − x = Reduced system of equations: 3x2 + 6x3 = Elementary 
row operations: R2 ↔ R1 , R3 + R1   −1 1 −1  Reduced augmented matrix:  36 Elementary operations on equations: E2 − E1 , E3 − 3E1 Reduced system of equations: x1 + x = −2x2 = −2x2 = Elementary row operations: R2 − R1 ,  1  −2 Reduced augmented matrix: −2 R3 − 3R1  0  37 (b) In each case, the graph of the resulting equation is a line 38 Now if a11 = we easily obtain the equivalent system a21 x1 + a22 x2 = b2 a12 x2 = b1 Thus we may suppose that a11 = Then : a11 x1 + a12 x2 = b1 a21 x1 + a22 x2 = b2 E2 − (a21 /a11 )E1 =⇒ 1.1 INTRODUCTION TO MATRICES AND SYSTEMS OF LINEAR EQUATIONS a11 x1 + a12 x2 = b1 ((−a21 /a11 )a12 + a22 )x2 = (−a21 /a11 )b1 + b2 a11 E2 =⇒ a11 x1 + a12 x2 = b1 (a11 a22 − a12 a21 )x2 = −a21 b1 + a11 b2 Each of a11 and (a11 a22 − a12 a21 ) is non-zero 39 Let A= a11 x1 + a12 x2 = b1 a21 x1 + a22 x2 = b2 and let B= a11 x1 + a12 x2 = b1 ca21 x1 + ca22 x2 = cb2 Suppose that x1 = s1 , x2 = s2 is a solution to A Then a11 s1 + a12 s2 = b1 , and a21 s1 + a22 s2 = b2 But this means that ca21 s1 + ca22 s2 = cb2 and so x1 = s1 , x2 = s2 is also a solution to B Now suppose that x1 = t1 , x2 = t2 is a solution to B Then a11 t1 +a12 t2 = b1 and ca21 t1 + ca22 t2 = cb2 Since c = , a21 x1 + a22 x2 = b2 40 Let A= a11 x1 + a12 x2 = b1 a21 x1 + a22 x2 = b2 and let B= a11 x1 + a12 x2 = b1 (a21 + ca11 )x1 + (a22 + ca12 )x2 = b2 + cb1 Let x1 = s1 and x2 = s2 be a solution to A Then a11 s1 +a12 s2 = b1 and a21 s1 +a22 s2 = b2 so a11 s1 +a12 s2 = b1 and (a21 +ca11 )s1 +(a22 +ca12 )s2 = b2 +cb1 as required Now if x1 = t1 and x2 = t2 is a solution to B then a11 t1 +a12 t2 = b1 and (a21 +ca11 )t1 +(a22 +ca12 )t2 = b2 +cb1 , so a11 t1 + a12 t2 = b1 and a21 t1 + a12 t2 = b2 as required 41 The proof is very similar to that of 45 and 46 42 By adding the two equations we obtain: 2x21 − 2x1 = Then x1 = or x1 = −1 and substituting these values in the√second equation we√find that there are three solutions: x1 = −1, x2 = ; x1 = 2, x2 = 3, ; x1 = 2, x2 = − CHAPTER MATRICES AND 
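The reductions in Exercises 30-36 chain the three elementary row operations (interchange Ri <-> Rj, scaling c*Ri, replacement Rj + c*Ri); carried to completion, the same operations produce the reduced echelon form used in the next section. A plain-Python sketch of that process, under my own naming and not taken from the text:

```python
def rref(M, tol=1e-12):
    """Reduce a matrix (list of row lists) to reduced row echelon form
    using the three elementary row operations: swap, scale, replace."""
    A = [row[:] for row in M]          # work on a copy
    m, n = len(A), len(A[0])
    pivot_row = 0
    for col in range(n):
        # find a row at or below pivot_row with a nonzero entry here
        pr = next((r for r in range(pivot_row, m) if abs(A[r][col]) > tol), None)
        if pr is None:
            continue
        A[pivot_row], A[pr] = A[pr], A[pivot_row]        # Ri <-> Rj
        p = A[pivot_row][col]
        A[pivot_row] = [x / p for x in A[pivot_row]]     # (1/p) * Ri
        for r in range(m):
            if r != pivot_row and abs(A[r][col]) > tol:
                f = A[r][col]                            # Rr - f * Ri
                A[r] = [x - f * y for x, y in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == m:
            break
    return A
```

For a small hypothetical augmented matrix, rref([[2, 3, 1], [4, -1, 2]]) returns [[1, 0, 0.5], [0, 1, 0]], i.e. x1 = 1/2, x2 = 0.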
SYSTEMS OF EQUATIONS 1.2 Echelon Form and Gauss-Jordan Elimination The matrix is in echelon form The row operation R2 − 2R1 transforms the matrix to reduced echelon form Echelon form R2 − 2R1 yields reduced row echelon form −7 3 Not in echelon form (1/2)R1 , R2 − 4R1 , (−1/5)R2 yields echelon form Not in echelon form R1 ↔ R2 yields echelon form Not in echelon form R1 ↔ R2 , (1/2)R1 , (1/2)R2 yields the echelon form Not in echelon form (1/2)R1 yields the echelon form 3/2 1/2 2/5 1 1/2 0 3/2 3/2 1/2 0 echelon form R2 − 4R3 , R1 − 2R3 , R1 − 3R2 yields the reduced echelon form  −2  1   −1/2 3/2 1  Not in echelon form (1/2)R1 , (−1/3)R3 yields the echelon form  0   −1 −2 Not in echelon form (1/2)R2 yields the echelon form  −1 −3/2  0   −4 −4 −6 10 Not in echelon form −R1 , (1/2)R2 yields the echelon form  1/2 −3/2 −3/2  0 Not   0 in 11 x1 = 0, x2 = 12 The system is inconsistent 13 x1 = −2 + 5x3 , x2 = − 3x3 , x3 is arbitrary 14 x1 = − 2x3 , x2 = 202 CHAPTER EIGENVALUES AND APPLICATIONS 16 [e1 , e2 , e3 , e4 ], [e1 , e2 , e4 , e3 ], [e1 , e3 , e2 , e4 ], [e1 , e3 , e4 , e2 ], [e1 , e4 , e2 , e3 ], [e1 , e4 , e3 , e2 ], [e2 , e1 , e3 , e4 ], [e2 , e1 , e4 , e3 ], [e2 , e3 , e1 , e4 ], [e2 , e3 , e4 , e1 ], [e2 , e4 , e1 , e3 ], [e2 , e4 , e3 , e1 ], [e3 , e1 , e2 , e4 ], [e3 , e1 , e4 , e2 ], [e3 , e2 , e1 , e4 ], [e3 , e2 , e4 , e1 ], [e3 , e4 , e1 , e2 ], [e3 , e4 , e2 , e1 ], [e4 , e1 , e2 , e3 ], [e4 , e1 , e3 , e2 ], [e4 , e2 , e1 , e3 ], [e4 , e2 , e3 , e1 ], [e4 , e3 , e1 , e2 ], [e4 , e3 , e2 , e1 ] 17 18 There are n! 
(n x n) permutation matrices Since the columns of P are some ordering of e1 , e2 , , en , they form an orthonormal set 19 AP = A[ei , ej , ek , , er ] = [Aei , Aej , Aek , , Aer ] = [Ai , Aj , Ak , , Ar ]   a1  a2    20 Let A =   where aj is the j th row of A   an 21 22 7.4     By Exercise 19, AT P = [ai T , aj T , ak T , , ar T ] Therefore P T A = (AT P )T =    Apply Exercise 19 aj ak ar        By Exercise 21 each of the matrices P , P , P , is a permutation matrix By Exercise 17 there are n! distinct (n x n) permutation matrices Therefore there exists integers r and s such that r > s and P r = P s Since P is nonsingular this implies that P r−s = I Eigenvalues of Hessenberg Matrices Note that the given matrix H is in unreduced Hessenberg form We have w = e1 = [1, 0]T , w1 = Hw0 = [2, 1]T , and w2 = Hw1 = [4, 3]T The vector equation a0 w0 +a1 w1 = −w2 is equivalent to the system a0 + 2a1 = −4 a1 = −3 The system has solution a0 = 2, a1 = −3 so p(t) = − 3t + t2 7.4 EIGENVALUES OF HESSENBERG MATRICES 203 w0 = e1 = [1, 0]T ; w1 = [0, 3]T ; w2 = [0, 0]T , The vector equation a0 w0 +a1 w1 = −w2 has solution a0 = a1 = 0, so p(t) = t2 Note that the given matrix H is in unreduced Hessenberg form We have w = e1 = [1, 0, 0]T , w1 = Hw0 = [1, 2, 0]T , w2 = Hw1 = [1, 4, 2]T , and w3 = Hw2 = [3, 6, 8]T The vector equation a0 w0 + a1 w1 + a2 w2 = −w3 is equivalent to the system of equations a0 + a1 + a2 = −3 2a1 + 4a2 = −6 2a2 = −8 The system has unique solution a0 = −4, a1 = 5, a2 = −4 so p(t) = −4 + 5t − 4t2 + t3 w0 = e1 = [1, 0, 0]T ; w1 = [1, 1, 0]T ; w2 = [3, 4, 1]T ; w3 = [12, 14, 6]T The vector equation a0 w0 +a1 w1 +a2 w2 = −w3 has solution a0 = −4, a1 = 10, a2 = −6, so p(t) = −4 + 10t − 6t2 + t3 Note that the given matrix H is in unreduced Hessenberg form We have w = e1 = [1, 0, 0]T , w1 = Hw0 = [2, 1, 0]T , w2 = Hw1 = [8, 3, 1]T , and w3 = Hw2 = [29, 14, 8]T The vector equation a0 w0 +a1 w1 +a2 w2 = −w3 is equivalent to the system of equations a0 
+ 2a1 + 8a2 = −29 a1 + 3a2 = −14 a2 = −8 The system has unique solution a0 = 15, a1 = 10, a2 = −8 so p(t) = 15 + 10t − 8t2 + t3 w0 = e1 ; w1 = e2 ; w2 = e3 , w3 = e1 The vector equation a0 w0 + +a1 w1 + a2 w2 = −w3 has solution a0 = −1, a1 = a2 = Therefore p(t) = −1 + t3 Note that the given matrix H is in unreduced Hessenberg form We have w = e1 = [1, 0, 0, 0]T , w1 = Hw0 = [0, 1, 0, 0]T , w2 = Hw1 = [1, 2, 1, 0]T , w3 = Hw2 = [2, 6, 2, 2]T , and w4 = Hw3 = [8, 18, 8, 6]T The vector equation a0 w0 +a1 w1 +a2 w2 +a3 w3 = −w4 is equivalent to the system of equations a1 + a2 + 2a3 a1 + 2a2 + 6a3 a2 + 2a3 2a3 = −8 = −18 = −8 = −6 The system has unique solution a0 = 0, a1 = 4, a2 = −2, a3 = −3, so p(t) = 4t−2t2 −3t3 +t4 204 CHAPTER EIGENVALUES AND APPLICATIONS w0 = e1 = [1, 0, 0, 0]T , w1 = [0, 1, 0, 0]T , w2 = [2, 0, 2, 0]T , w3 = [2, 4, 0, 2]T , and w4 = [12, 0, 12, 2]T The vector equation a0 w0 + a1 w1 +a2 w2 +a3 w3 = −w4 has solution a0 = 2, a1 = 4, a2 = −6, a3 = −1 Therefore p(t) = + 4t − 6t2 − t3 + t4 −1 B11 B12 −1 and B22 = , B12 = where B11 = −1 −2 1 O B22 B11 has eigenvalue λ1 = (with algebraic multiplicity 2) with corresponding eigenvector u1 = [−1, 1]T B22 has eigenvalues λ2 = 1, λ3 = with corresponding eigenvectors v2 = [1, 1]T and v3 = [−1, 1]T , respectively Thus H has eigenvalues λ1 = 2, λ2 = 1, λ3 = u1 = [−1, 1, 0, 0]T is an eigenvector for H corresponding to The vector x1 = θ λ1 = The system of equations (B11 − I)u = −B12 v2 has solution u2 = [−9, 5]T , so u2 x2 = = [−9, 5, 1, 1]T is an eigenvector of H corresponding to λ2 = Similarly v2 u3 = [−3, 9, −1, 1]T is an (B11 − 3I)u = −B12 v3 has solution u3 = [−3, 9]T so x3 = v3 eigenvector of H corresponding to λ3 = H = 1 and B22 = B11 has eigenvalues λ1 = and λ2 = and B22 1 has eigenvalues λ3 = 3, λ4 = The corresponding eigenvectors are x1 = [−1, 1, 0, 0]T , x2 = [1, 1, 0, 0]T , x3 = [0, 1, −1, 1]T and x4 = [3/4, 5/4, 0, 1]T     −2 −2 B11 B12 11 H = where B11 =  −1 −2  , B12 =   , O B22 −1 −2 10 B11 
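The computations in this section repeat one algorithm: starting from w0 = e1, form the Krylov vectors wk = H w_{k-1} and solve a0*w0 + a1*w1 + ... + a_{n-1}*w_{n-1} = -wn; then p(t) = a0 + a1*t + ... + a_{n-1}*t^{n-1} + t^n is the characteristic polynomial. For an unreduced Hessenberg matrix the coefficient matrix [w0 | ... | w_{n-1}] is upper triangular with nonzero diagonal, so back-substitution suffices. A pure-Python sketch (the function name is my own):

```python
def hessenberg_char_poly(H):
    """Coefficients [a0, ..., a_{n-1}, 1] of p(t) for an unreduced
    Hessenberg matrix H, via the Krylov vectors w0 = e1, wk = H w_{k-1}."""
    n = len(H)
    w = [[1.0] + [0.0] * (n - 1)]                      # w0 = e1
    for _ in range(n):
        prev = w[-1]
        w.append([sum(H[i][j] * prev[j] for j in range(n)) for i in range(n)])
    # Solve a0*w0 + ... + a_{n-1}*w_{n-1} = -wn; the columns w0..w_{n-1}
    # form an upper triangular matrix, so back-substitute from the bottom.
    a = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = -w[n][i] - sum(w[k][i] * a[k] for k in range(i + 1, n))
        a[i] = s / w[i][i]
    return a + [1.0]
```

For H = [[2, 0], [1, 1]], a matrix consistent with Exercise 1's vectors w1 = [2, 1] and w2 = [4, 3], this returns [2.0, -3.0, 1.0], i.e. p(t) = 2 - 3t + t^2.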
= and B22 = [2] B11 has eigenvalues λ1 = and λ2 = −1 (algebraic multiplicity 2) with corresponding eigenvectors u1 = [−1, 1, 1]T and u2 = [−2, 0, 1]T B22 has eigenvalue λ3 = with corresponding eigenvector v3 = [1] Thus H has eigenvalues λ1 = 0, λ2 = −1, and u1 u2 = [−1, 1, 1, 0]T and x2 = = [−2, 0, 1, 0]T are λ3 = The vectors x1 = θ θ eigenvectors for H corresponding to λ1 = and λ2 = −1, respectively The system of u3 equations (B11 − 2I)u1 = −B12 v3 has solution u3 = [1/6, 15/6, 1/6]T so x3 = = v3 [1/6, 15/6, 1/6, 1]T is an eigenvector for H corresponding to λ3 = 2 3 and B22 = B11 has eigenvalues λ1 = and λ2 = −1 and 3 B22 has eigenvalue λ3 = The corresponding eigenvectors for H are x1 = [1, 1, 0, 0]T , x2 = [−1, 1, 0, 0]T , and x3 = [−7/8, −13/8, 0, 1]T 12 B11 = 7.4 EIGENVALUES OF HESSENBERG MATRICES 205 13 det(B) = af wz − af yx − ebwz + ebyx = (af − eb)(wz − yx) = det(B11 ) det(B22 ) 14 det(H) = −1 x −1 −1 = (4)(3) = 12 15 P = [e2 , e3 , e1 ] 16 P = [e2 , e3 , e4 , e1 ] 17 P = [e2 , e3 , , en , e1 ] 18 19 Write P = [P1 , P2 , , Pn ] where, as shown in Exercise 17, P1 = e2 , P2 = e3 , , Pn−1 = en , and Pn = e1 Thus w0 = e1 , w1 = P w0 = P e1 = P1 = e2 , w2 = P w1 = P e2 = P2 = e3 , , wn−1 = P wn−2 = P en−1 = Pn−1 = en , and wn = P wn−1 = P en = Pn = e1 Obviously the vector equation a0 w0 + a1 w1 + · · ·+an−1 wn−1 = −wn has solution a0 = −1, a1 = · · · = an−1 = Therefore p(t) = tn −1 Let H = [hij ] and let λ be an eigenvalue for H, Then      H − λI =     h11 − λ h12 ··· h21 h22 − λ h32 0 0 h1,n−1 h2,n−1 h3,n−1 h1n h2n h3n hn−1,n−1 − λ hn−1,n hn,n−1 hnn − λ          Since h21 , h32 , , hn,n−1 are nonzero, the first n − columns of H − λI form a linearly independent set Therefore rank (H − λI) ≥ n − It follows that nullity (H − λI) ≤ Since λ is an eigenvalue for H it follows that nullity (H − λI) = 1; that is, λ has geometric multiplicity 20 Since H is symmetric, it is diagonalizable Therefore the algebraic multiplicity of λ equals the geometric 
multiplicity It follows from Exercise 19 that λ has algebraic multiplicity Therefore H necessarily has n distinct eigenvalues 21 If H is unreduced then b = Thus p(t) = t2 − (a + c)t − b2 The eigenvalues for H are λ = [(a + c) ± (a + c)2 + 4b2 ]/2 Since (a + c)2 + 4b2 > 0, H has two distinct eigenvalues 22 Set u = [u1 , u2 , , un ]T and assume that un = Set H = [h1 , h2 , , hn ] Thus λu = H u = u1 h1 + u2 h2 + · · · + un−1 hn−1 There- 206 CHAPTER EIGENVALUES AND APPLICATIONS fore the nth component of H u is un−1 hn,n−1 Since hn,n−1 = it follows that un−1 = Repetition of this argument yields u1 = u2 = · · · = un = , so u = θ Therefore if u = θ then un = 23 Let k be an integer, ≤ k ≤ n, and suppose we have shown that wk−1 has the form wk−1 = [a1 , ak , 0, , 0]T , where ak = If H = [h1 , , hn ] then wk = H wk−1 = a0 h1 + · · · + ak hk But H is in Hessenberg form so hij = when i > j + Therefore the k + component of wk is ak hk+1,k and is nonzero since H is unreduced Thus wk has the form wk = [b1 , , bk+1 , 0, , 0]T , where bk+1 = 7.5 Householder Transformations Qx = x −γu where γ = 2uTx /uTu = (−2)(2)/4 = −1 Thus Qx = [4, 1, 6, 7]T Qx = [4, −3, 5, 4[T Set γ1 = 2uTA1 /uTu = −1 and γ2 = 2uTA2 /uTu = −2 Then QA1 = A1 − γ1 u = [3, 5, 5, 1]T and QA2 = A2 −γ2 u = [3, 1, 4, 2]T   3    Therefore QA =       0   QA =    3 Set γ = 2uTx /uTu = −1 Then Qx = x −γu = x +u = [4, 1, 3, 4]T Thus xT Q = (Qx )T = [4, 1, 3, 4] xT Q = [2, 2, 3, 1] Set x = [2, 1, 2, 1]T and y = [1, 0, 1, 4]T Then QAT = Q[x , y ] =    −1   Therefore AQ = (QAT )T = [Qx , Qy ] =    2 −1 7.5 HOUSEHOLDER TRANSFORMATIONS 207   1/2 5/2 7/2 5/2  −1 −1   BQ =   −7/2 13/2 11/2 5/2  −7 √ Set u1 = If a = − + + = −3 then u2 = v2 − a = + = Finally take u3 = v3 = and u4 = v4 = Thus u = [0, 5, 2, 1]T 10 a = −2, u = [3, 1, 1, 1]T √ 11 a = − 42 + 32 = −5; u1 = u2 = 0; u3 = v3 − a = + = 9; u4 = v4 = Therefore u = [0, 0, 9, 3]T 12 a = 3; u = [0, 0, −5, 2, 1]T 13 a = (−3)2 + 42 = 5; u1 = u2 = u3 = 
0; u4 = v4 − a = −8; u5 = v5 = Therefore u = [0, 0, 0, −8, 4]T 14 a = −4; u = [0, 0, 8, 0, 0]T 15 √ We want QA1 = [1, a, 0]T Therefore a = − 32 + 42 = −5 and u = [u1 , u2 , u3 ]T where u1 = 0, u2 = − (−5) = 8, and u3 = Then u = [0, 8, 4]T 16 u = [0, −5, 5]T 17   We want QA1 =  a  Therefore a = u = [u1 , u2 , u3 ]T (−4)2 + 32 = and where u1 = 0, u2 = −4 − = −9, and u3 = Thus u = [0, −9, 3]T 18 u = [0, 0, 8, 4]T 19     We want QA2 =   a  so a =  (−3)2 + 42 = u = [u1 , u2 , u3 , u4 ]T where u1 = u2 = 0, u3 = −3 − = −8, and u4 = Thus u = [0, 0, −8, 4]T 20 u = [0, 0, −1, 1]T 21 QT = (I − buuT )T = I T − (buuT )T = I − buTT u T = I − buuT = Q 208 CHAPTER EIGENVALUES AND APPLICATIONS 22 Set b = 2/uTu Then Qu = (I − buuT )u = Iu −(buuT )u = u −bu (uTu ) = u −2u = −u If uTv = then Qv = (I − buuT )v = Iv −bu (uTv ) = v 23 24 Let {u, w2 , , wn } be as given in the hint By Exercise 22, Qu = −u and Qwi = wi for ≤ i ≤ n Thus Rn has a basis consisting of eigenvectors for Q; that is Q is diagonalizable Moreover Q is similar to the (n x n) diagonal matrix D with diagonal entries d11 = 1, d22 = · · · = dnn = −1 Since Q and D have the same eigenvalues, Q has eigenvalues and −1 To prove (a) note that Q−1 = (Qn−2 · · · Q2 Q1 )−1 = −1 −1 T T T T T Q−1 Q2 · · · Qn−2 = Q1 Q2 · · · Qn−2 = (Qn−2 · · · Q2 Q1 ) = Q To prove (b) assume that A is symmetric Thus H T = (QAQT )T = QTT AT QT = QAQT = H and H is symmetric 25 (a) Set B T = [v1 , v2 , v3 , v4 ] Then QB T = [Qv1 , Qv2 , Qv3 , Qv4 ], and for ≤ j ≤ 4, Qvj = vj − γj u , where γj is a constant Since u = [0, a, b, c]T it follows that Qvj and vj have the same first coordinate Thus B T and QB T have the same first row It follows that B and BQ = (QB T )T have the same first column (b) It follows from (a) that x = [b11 , b12 , 0, 0]T is the first column of BQ Thus Qx is the first column of QBQ But Qx = x −γu where γ = 2uTx /uTu = 0; that is Qx = x 7.6 QR Factorization & Least-Squares 1 x∗ is the unique solution to Rx = c , where R = 
c= and Thus x∗ = [1, 1]T x∗ = [2, −1]T   x∗ is the unique solution to Rx = c where R =   and 0    c =  Thus x∗ = [2, 1, 2]T 4 x∗ = [1, 2, 1]T 7.6 QR FACTORIZATION & LEAST-SQUARES We require that SA1 = [a, 0]T Therefore a = ± 209 a211 + a221 = −5, u1 = a11 − a = 8, and u2 = a21 = Consequently u = [8, 4]T and SA1 = A1 − u −5 −11 = [−5, 0]T SA2 = A2 −2u = [−11, 2]T , so SA = R = u = [1, 1]T ; R = −1 −5 −3 We require that SA1 = a so take a = − a211 + a221 = −4, u1 = a11 − a = 4, and u2 = a21 = Thus u = [4, 4]T and SA1 = A1 − u = [−4, 0]T Also SA2 = A2 −2u = [−6, −2]T Therefore SA = R = u = [−22, 4]T ; R = −4 −6 −2 −22 We require that SA2 = [2, a, 0]T so set a = − a222 + a232 = −1; u1 = 0, u2 = a22 − a = 1, T and u3 = a32 = Therefore u = [0, 1, 1]T , SA1 = A1 , SA 2 = A2 − u =[2, −1, 0] , and T  SA3 = A3 −14u = [1, −8, −6] Consequently SA = R = −1 −8  0 −6   10 u = [0, 8, 4]T ; R =  −5 −11  0 11 We first require Q1 such that Q1 A1 = [a, 0, 0, 0]T so take u1 = [6, 2, 2, 4]T Then Q1 A1 = A1 −u1 = [−5, 0, 0, 0]T and Q1 A2 = A2 −3u = [−59/3, 6, 2, 3]T We now require Q2 such that Q2 (Q1 A2 ) = [−59/3, a, 0, 0]T Thus set u2 = [0, 13, 2, 3]T Then Q2 (Q1 A1 ) = Q1 A1 and Q2 (Q1 A2 ) = Q1 A2 −u2 =   −5 −59/3  −7   [−59/3, −7, 0, 0]T Therefore Q2 Q1 A =   0  0   −2 −7  0   u = [0, 3, 0, 3]T and 12 u1 = [3, 1, 1, 1]T and Q1 A =   0  210 CHAPTER EIGENVALUES AND APPLICATIONS   −2 −7  −3   Q2 Q1 A =   0  0 T T 13  We require  a matrix Q such that QA2 = [4, a, 0, 0] Thus u = [0, 8, 0, 4] and QA =  −5     0  0    −3   14 u = [0, 5, 1, 2]T and QA =   0  0 15 Let Q1 , u1 , Q2 , u2 be as in Exercise 11 Then Q1 b = b −u1 = [−5, 8, −2, −3]T and Q2 (Q1 b ) = Q1 b −u2 = [−5, −5, −4, −6]T −5 −59/3 −7 The least-squares solution is the unique solution x∗ to Rx= c where R = and c = [−5, −5]T Thus x∗ = [−38/21, 15/21]T 16 Q2 Q1 b = [−4, 2, −1, 3] x∗ is the unique solution to Rx = c where R = c = [−4, 2]T Thus x∗ = [13/3, −2/3]T 17 −2 −7 −3 
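Once the Householder reflections have reduced the system, each least-squares solution above is obtained by back-substitution on the triangular system Rx = c. A minimal sketch (the helper name is my own):

```python
def solve_upper_triangular(R, c):
    """Back-substitution for Rx = c, with R square upper triangular and
    nonzero diagonal: the final step of the QR least-squares method."""
    n = len(R)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = c[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / R[i][i]
    return x
```

With Exercise 15's data, R = [[-5, -59/3], [0, -7]] and c = [-5, -5], this reproduces x* = [-38/21, 15/21].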
and With Q and u as in Exercise 13, Qb = b −(12/5)u = [2, −56/5, 16, −8/5]T Therefore x∗ is the unique solution to Rx = c where R = −5 and c = [2, −56/5]T Solving yields x∗ = [−87/25, 56/25]T 18 Qb = [5, −10/3, −19/3, −8/15]T x∗ is the unique solution to Rx = c and c = [5, −10/3]T Solving yields −3 where R = x∗ = [−5/27, 10/9]T 19 Write [A1 , A2 , , An ], where {A1 , A2 , , An } is a linearly independent subset of Rn Now SA = [S A1 , S A2 , , S An ] Suppose c1 , c2 , , cn are scalars such that θ=c1 S A1 + c2 S A + · · · + c n S A n 7.7 MATRIX POLYNOMIALS & THE CAYLEY-HAMILTON THEOREM 211 Then θ= S(c1 A1 + c2 A2 + · · · + cn An ) and S is nonsingular There- fore θ= c1 A1 + c2 A2 + · · · + cn An It follows that c1 = c2 = · · · = cn = and hence, the Rj set {S A1 , S A2 , , S An } is linearly independent For each j, ≤ j ≤ n, SAj = θ th m−n where Rj is the j column of R and θ is in R Therefore the set {R1 , R2 , , Rn } is linearly independent in Rn and the matrix R is nonsingular 7.7 Matrix Polynomials & The Cayley-Hamilton Theorem −1 0 −1 q(A) = A2 − 4A + 3I =   15 −2 14 3I =  −2 10  −1 −4 ; q(B) = B − 4B + 3I = 0 0 ; q(C) = C − 4C + (a) p(A) = (A − I)3 = O; p(B) = (B − I)3 = O; p(C) = (C − I)3 = O; p(I) = (I − I)3 = O3 = O (b) Set q(t) = (t − 1)2 = t2 − 2t + (a) q(t) = s(t)p(t) + r(t) where s(t) = t3 + t − and r(t) = t + (b) q(B) = s(B)p(B)+r(B) = r(B) since p(B) = O Thus q(B) = B +2I = −1 −1   Using Algorthim we obtain H11 = , H22 =   and H33 = 7 p1 (t) = t2 − 9t − 1, p2 (t) = t3 − 6t2 − 5t − 38, and p3 (t) = t2 − 9t − 17 Note that H = (SAS −1 )(SAS −1 ) = (SA2 S −1 ) For some positive integer k ≥ suppose we have shown that H k = SAk S −1 Then H k+1 = H k H = (SAk S −1 )(SAS −1 ) = SAk+1 S −1 It follows by induction that H n = SAn S −1 for each positive integer n Now let q(t) = an tn +an−1 tn−1 +· · ·+a1 t+a0 Then q(H) = an H n +an−1 H n−1 +· · ·+a1 H +a0 I = an SAn S −1 + an−1 SAn−1 S −1 + · · · + a1 SAS −1 + a0 SIS −1 = S(an An + an−1 An−1 + · · · + a1 A + 
a0 I)S −1 = Sq(A)S −1 Since u1T Au2 is a (1 x 1) matrix, u1T Au2 = (u1T Au2 )T = u2T AT u1TT = u2T Au1 Now u1T Au2 = λ2 u1Tu2 whereas u2T Au1 = λ1 u2Tu1 = λ1 u1Tu2 Therefore λ1 u1Tu2 = λ2 u1Tu2 Since λ1 = λ2 it follows that u1Tu2 = 212 CHAPTER EIGENVALUES AND APPLICATIONS (a) By assumption, Ax0 is in W For some positive integer k, suppose we have shown that Ak x0 is in W Then Ak+1 x0 = A(Ak x0 ) is in W by the assumed property of A By induction, An x0 is in W for each positive integer n (b) θ= m(A)x0 = (A − rI)s(A)x0 Since s(t) has degree k − 1, S(A)x0 = θ Thus if u = S (A)x0 then (A − rI)u = θ It follows that u is an eigenvector of A corresponding to the eigenvalue r Now if s(t) = bk−1 tk−1 +· · ·+b1 t+b0 then S(A)x0 = bk −1 Ak −1 x0 + · · · + b1 Ax0 + b0 x0 Since r is an eigenvalue for A, r is real It follows that b0 , , bk−1 are real Thus S(A)x0 is a linear combination of the vectors Ak−1 x0 , ., Ax0 , x0 in W Hence S(A)x0 is in W (a) Let x and y be in W ; that is, xui T = and yuiT = for ≤ i ≤ k Therefore (x + y)uiT = xuiT + yui T = + = for ≤ i ≤ k Consequently, x+y is in W Likewise if c is a scalar then (cx)uiT = c(xuiT ) = c0 = , for ≤ i ≤ k, so cx is in W Certainly θ is in W, so W is a subspace of Rn (b) Suppose that Aui = λi ui , ≤ i ≤ k, and let x be in W Thus xTui = for ≤ i ≤ k Now (Ax)T ui = xT AT ui = xT Aui = xT (λi ui ) = λi (xTui ) = for ≤ i ≤ k Therefore Ax is in W It now follows from Exercise that A has an eigenvector uk+1 in W By definition of W, {u1 , u2 , , uk , uk+1 } is an orthogonal set of eigenvectors for A 7.8 Generalized Eigenvectors & Differential Equations (a) The given matrix H has characteristic polynomial p(t) = (t − 2)2 , so λ = is the only eigenvalue and it has algebraic multiplicity The vector v1 = [1, −1]T is an eigenvector corresponding to λ = If we solve the system of equations (H − 2I)x = v1 we see that x = [−1 − a, a]T , where a is arbitrary Taking a = we obtain a generalized eigenvector v2 = [−1, 0]T (b) The given matrix H has 
characteristic polynomial p(t) = t(t + 1)2 The eigenvalue λ = −1 has corresponding eigenvector v1 = [−2, 0, 1]T Solving the system (H − (−1)I)x = v1 yields x = [2 − 2a, 1, a]T where a is arbitrary Thus v2 = [0, 1, 1]T is a generalized eigenvector for λ = −1 The eigenvalue λ = has corresponding eigenvector w1 = [−1, 1, 1]T (c) The given matrix H has characteristic polynomial p(t) = (t − 1)2 (t + 1) The eigenvalue λ = has corresponding eigenvector v1 = [−2, 0, 1]T Solving (H − I)x = v1 yields x = [(5/2) − 2a, 1/2, a]T , where a is arbitrary Thus v2 = [5/2, 1/2, 0]T is a generalized eigenvector of λ = The eigenvalue λ = −1 has corresponding eigenvector w1 = [−9, −1, 1]T 7.8 GENERALIZED EIGENVECTORS & DIFF EQNS 213 For A, λ = is the only eigenvalue Corresponding generalized eigenvectors are v = [0, 0, 0, 1]T , v2 = [0, 0, 1, 0]T , v3 = [0, 1, 0, 0]T , and v4 = [1, 0, 0, 0]T For B generalized eigenvectors are v1 = [−3, −5, −1, 2]T , v2 = [0, 0, 0, 1]T , v3 = [0, 1/2, 0, 1/2]T , and v4 = [1/4, 1/4, 0, 1/4]T     0 0  and H = (a) If Q =   then Q−1 =  0 −3   −69 21 −1  QAQ = −10  is in unreduced Hessenberg form H has characteristic −4 polynomial p(t) = (t+1)2 (t−1) The eigenvalue λ = −1 has corresponding eigenvector v1 = [3, 1, 2]T Solving the system (H − (−1)I)u = v1 yields u = [−(7/2) + (3/2)a, (−1/2) + (1/2)a, a]T , where a is arbitrary Therefore v2 = [−2, 0, 1]T is a generalized eigenvector for λ = −1 The eigenvalue λ = has corresponding eigenvector w1 = [−3, 0, 1]T Set y(t) = Qx(t) and y0 = Qx0 = [−1, −1, −2]T The system y = H y has general solution y(t) = c1 e −t v1 + c2 e −t (v2 + tv1 ) + c3 e t w1 and y0 = y (0) = c1 v1 +c2 v2 +c3 w1 Solving we obtain c = −1, c2 = 2, c3 = −2, so y(t) = 1 −t  e (6t − 7) + 6et  e−t (2t − 1)  Therefore −t t e (4t) − 2e  −t  e (6t − 7) + 6et  x(t) = Q −1 y(t) =  e−t (2t − 1) −t t e (−2t + 3) − 2e     0 0  and (b) If Q =   then Q−1 =  0 −3   −1  is in unreduced Hessenberg form H has characterH = QAQ−1 =  
−3 −4 −1 istic polynomial p(t) = (t+1)3 The eigenvalue λ = −1 has corresponding eigenvector v1 = [1, 0, 3]T The system (H − (−1)I)u = v1 has solution u = [−1 + (1/3)a, 1, a]T , where a is arbitrary Therefore v2 = [0, 1, 3]T is a generalized eigenvector of order for λ = −1 The system (H − (−1)I)u = v2 has solution u = [(−4/3) + (1/3)a, 1, a]T so v3 = [0, 1, 4]T is a generalized eigenvector of order for λ = −1 214 CHAPTER EIGENVALUES AND APPLICATIONS Set y (t) = Qx (t) and y0 = Qx0 = [−1, −1, −2]T The system y = H y has general solution y(t) = c1 e −t v1 + c2 e −t (v1 + tv1 ) + c3 e −t (v3 + tv2 + (t /2)v1 ) and y0 =  y(0) = c1 v1 + c2 v2 + c3 v3 Solving we obtain c1 =  −1, c2 = −5, c3 = 4, so y −t e−t (2t2 − 5t − 1) e (2t − 5t − 1) −t −1   Therefore x (t) = Q y (t) =  e−t (4t − 1) (t) =  e (4t − 1) −t −t e (6t − 15t + 1) e (6t − 3t − 2)     0 0 −1     and H = (c) If Q = then Q = 0 −3   −1 −1   is in unreduced Hessenberg form H has characteristic QAQ = −3 −5 −2 polynomial p(t) = (t + 2)3 The eigenvalue λ = −2 has eigenvector v1 = [1, 0, 3]T , generalized eigenvector v2 = [0, 1, 3]T of order two, and generalized eigenvector v3 = [0, 1, 4]T of order Set y(t) = Qx(t) and y0 = Qx0 = [−1, −1, −2]T The system y = H y has general solution y(t) = e −2t [c1 v1 + c2 (v2 + tv1 ) + c3 (v3 + tv2 + (t /2 )v1 )] and y0 = y(0) = c1 v1 + c2 v2 + c3 v3 Solving yields c1 = −1, c2 = −5, and c3 = 4, so  −2t  e (2t − 5t − 1)  Therefore y(t) =  e−2t (4t − 1) −2t e (6t − 3t − 2)  −2t  e (2t − 5t − 1)  x (t) = Q−1 y (t) =  e−2t (4t − 1) −2t e (6t − 15t + 1)  t  e c4  et (c4 t + c3 )   x (t) =   et (c4 t2 /2 + c3 t + c2 )  t e (c4 t /6 + c3 t /2 + c2 t + c1 ) We see that from part(c) of Exercise that x(t) = c1 et v1 +c2 et (v2 +tv1 ) + c3 e−t w1 =           −2 5/2 −2 −9 c1 et   + c2 et   1/2  + t    + c3 e−t  −1  1 Note that (H − λI)v2 = θ since v1 =θ But (H − λI)2 v2 = (H − λI)v1 =θ, so v2 is a generalized eigenvector of order Suppose 
we have seen that vj is a generalized eigenvector of order j for ≤ j ≤ k where ≤ k < m Then (H − λI)k vk+1 = (H − λI )k −1 vk =θ whereas (H − λI)k+1 vk+1 = (H − λI )k vk =θ 7.8 GENERALIZED EIGENVECTORS & DIFF EQNS 215 Therefore vk+1 is a generalized eigenvector of order k + It follows by induction that vr is a generalized eigenvector of order r for ≤ r ≤ m Clearly the set {v1 } is linearly independent since v1 =θ Suppose we have seen that the set {v1 , , vk } is linearly independent for some k, ≤ k < m Now assume that c v1 + · · · + c k vk + ck +1 vk+1 =θ Note that (H − λI)k vj = θ for ≤ j ≤ k whereas (H − λI)k vk+1 = v1 It follows that θ = (H − λI)k θ=(H − λI)k (c1 v1 + · · · + ck vk + ck +1 vk+1 ) = ck +1 v1 Therefore ck+1 = Since the set {v1 , , vk } is linearly independent, c1 = · · · = ck = This proves that {v1 , , vk , vk+1 } is a linearly independent set It follows by induction that {v1 , , vm } is linearly independent Note that Hvj = λvj + vj−1 for ≤ j ≤ r whereas Hv1 = λv1 It is straightforward to see that xr (t) = Hxr (t) First note that q(H)(H − λ1 I)m1 −1 = (H − λ1 I)m1 −1 q(H) It follows from the equations in (5) that if Eq.(9) is multiplied by (H − λ1 I)m1 −1 then we obtain am1 q(H)v1 =θ Since v1 is an eigenvector corresponding to λ1 , q(H)v1 = q(λ1 )v1 =θ since v1 =θ and q(λ1 ) = Therefore am1 = By a similar argument, multiplication of (9) by (H − λ1 I)m1 −2 shows that am1 −1 = We may continue the process to show that aj = for each j, ≤ j ≤ m1 216 CHAPTER EIGENVALUES AND APPLICATIONS 7.9 Supplementary Exercises A = a , a arbitrary 3−a a = or a = −6 (a) If x = [1, −1]T then q(x) = −2 (b) The matrix B is not symmetric   0 (a) L = (b) L =   1 7.10 Conceptual Exercises Let A have characterstic polynomial p(t) = t3 + ut2 + vt + w Since A is nonsingular, λ = is not an eigenvalue for A Therefore, w = Since A3 + uA2 + vA + wI = O, it follows that [(−1/w)A2 − (u/w)A − (v/w)I]A = I n If B = P −1 AP then B n = (P −1 AP ) = P −1 An P It follows that p(B) = 
p(P −1 AP ) = P −1 p(A)P

3. For 1 ≤ i ≤ n, aii = eiT A ei > 0.
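The matrix-polynomial computations of Section 7.7 and the Cayley-Hamilton argument used in these conceptual exercises can be checked numerically: evaluate the characteristic polynomial at the matrix and confirm that the zero matrix results. A sketch using Horner's rule (helper names are my own):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_at_matrix(coeffs, A):
    """Evaluate q(A) for q(t) = coeffs[0] + coeffs[1]*t + ... by Horner's
    rule: result = ((c_k A + c_{k-1} I) A + ...) down to c_0."""
    n = len(A)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    result = [[0.0] * n for _ in range(n)]
    for c in reversed(coeffs):
        result = mat_mul(result, A)
        result = [[result[i][j] + c * I[i][j] for j in range(n)]
                  for i in range(n)]
    return result
```

For the hypothetical A = [[2, 0], [1, 1]], whose characteristic polynomial is t^2 - 3t + 2, poly_at_matrix([2, -3, 1], A) evaluates to the zero matrix, as Cayley-Hamilton requires.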
