
Linear Algebra: A Geometric Approach by Theodore Shifrin (2nd ed., 2011)



DOCUMENT INFORMATION

Pages: 394
Size: 3.33 MB

Content

LINEAR ALGEBRA
A Geometric Approach
Second Edition

Theodore Shifrin
Malcolm R. Adams
University of Georgia

W. H. Freeman and Company, New York

Publisher: Ruth Baruth
Senior Acquisitions Editor: Terri Ward
Executive Marketing Manager: Jennifer Somerville
Associate Editor: Katrina Wilhelm
Editorial Assistant: Lauren Kimmich
Photo Editor: Bianca Moscatelli
Cover and Text Designer: Blake Logan
Project Editors: Leigh Renhard and Techsetters, Inc.
Illustrations: Techsetters, Inc.
Senior Illustration Coordinator: Bill Page
Production Manager: Ellen Cash
Composition: Techsetters, Inc.
Printing and Binding: RR Donnelley

Library of Congress Control Number: 2010921838
ISBN-13: 978-1-4292-1521-3
ISBN-10: 1-4292-1521-6

© 2011, 2002 by W. H. Freeman and Company. All rights reserved.
Printed in the United States of America. First printing.

W. H. Freeman and Company
41 Madison Avenue, New York, NY 10010
Houndmills, Basingstoke RG21 6XS, England
www.whfreeman.com

CONTENTS

Preface vii
Foreword to the Instructor xiii
Foreword to the Student xvii

Chapter 1. Vectors and Matrices
1. Vectors
2. Dot Product 18
3. Hyperplanes in Rⁿ 28
4. Systems of Linear Equations and Gaussian Elimination 36
5. The Theory of Linear Systems 53
6. Some Applications 64

Chapter 2. Matrix Algebra 81
1. Matrix Operations 81
2. Linear Transformations: An Introduction 91
3. Inverse Matrices 102
4. Elementary Matrices: Rows Get Equal Time 110
5. The Transpose 119

Chapter 3. Vector Spaces 127
1. Subspaces of Rⁿ 127
2. The Four Fundamental Subspaces 136
3. Linear Independence and Basis 143
4. Dimension and Its Consequences 157
5. A Graphic Example 170
6. Abstract Vector Spaces 176

Chapter 4. Projections and Linear Transformations 191
1. Inconsistent Systems and Projection 191
2. Orthogonal Bases 200
3. The Matrix of a Linear Transformation and the Change-of-Basis Formula 208
4. Linear Transformations on Abstract Vector Spaces 224

Chapter 5. Determinants 239
1. Properties of Determinants 239
2. Cofactors and Cramer's Rule 245
3. Signed Area in R² and Signed Volume in R³ 255

Chapter 6. Eigenvalues and Eigenvectors 261
1. The Characteristic Polynomial 261
2. Diagonalizability 270
3. Applications 277
4. The Spectral Theorem 286

Chapter 7. Further Topics 299
1. Complex Eigenvalues and Jordan Canonical Form 299
2. Computer Graphics and Geometry 314
3. Matrix Exponentials and Differential Equations 331

For Further Reading 349
Answers to Selected Exercises 351
List of Blue Boxes 367
Index 369

PREFACE

One of the most enticing aspects of mathematics, we have found, is the interplay of ideas from seemingly disparate disciplines of the subject. Linear algebra provides a beautiful illustration of this, in that it is by nature both algebraic and geometric. Our intuition concerning lines and planes in space acquires an algebraic interpretation that then makes sense more generally in higher dimensions. What's more, in our discussion of the vector space concept, we will see that questions from analysis and differential equations can be approached through linear algebra. Indeed, it is fair to say that linear algebra lies at the foundation of modern mathematics, physics, statistics, and many other disciplines. Linear problems appear in geometry, analysis, and many applied areas. It is this multifaceted aspect of linear algebra that we hope both the instructor and the students will find appealing as they work through this book.

From a pedagogical point of view, linear algebra is an ideal subject for students to learn to think about mathematical concepts
and to write rigorous mathematical arguments. One of our goals in writing this text (aside from presenting the standard computational aspects and some interesting applications) is to guide the student in this endeavor. We hope this book will be a thought-provoking introduction to the subject and its myriad applications, one that will be interesting to the science or engineering student but will also help the mathematics student make the transition to more abstract advanced courses.

We have tried to keep the prerequisites for this book to a minimum. Although many of our students will have had a course in multivariable calculus, we do not presuppose any exposure to vectors or vector algebra. We assume only a passing acquaintance with the derivative and integral in Section 6 of Chapter 3 and Section 4 of Chapter 4. Of course, in the discussion of differential equations in Section 3 of Chapter 7, we expect a bit more, including some familiarity with power series, in order for students to understand the matrix exponential.

In the second edition, we have added approximately 20% more examples (a number of which are sample proofs) and exercises, most of them computational, so that there are now over 210 examples and 545 exercises (many with multiple parts). We have also added solutions to many more exercises at the back of the book, hoping that this will help some of the students; in the case of exercises requiring proofs, these will provide additional worked examples that many students have requested. We continue to believe that good exercises are ultimately what makes a superior mathematics text.

In brief, here are some of the distinctive features of our approach:

• We introduce geometry from the start, using vector algebra to do a bit of analytic geometry in the first section and the dot product in the second.

• We emphasize concepts and understanding why, doing proofs in the text and asking the student to do plenty in the exercises. To help the student adjust to a higher level of mathematical rigor, throughout the early portion of the text we provide "blue boxes" discussing matters of logic and proof technique or advice on formulating problem-solving strategies. A complete list of the blue boxes is included at the end of the book for the instructor's and the students' reference.

• We use rotations, reflections, and projections in R² as a first brush with the notion of a linear transformation when we introduce matrix multiplication; we then treat linear transformations generally in concert with the discussion of projections. Thus, we motivate the change-of-basis formula by starting with a coordinate system in which a geometrically defined linear transformation is clearly understood and asking for its standard matrix.

• We emphasize orthogonal complements and their role in finding a homogeneous system of linear equations that defines a given subspace of Rⁿ.

• In the last chapter we include topics for the advanced student, such as Jordan canonical form, a classification of the motions of R² and R³, and a discussion of how Mathematica draws two-dimensional images of three-dimensional shapes.

The historical notes at the end of each chapter, prepared with the generous assistance of Paul Lorczak for the first edition, have been left as is. We hope that they give readers an idea of how the subject developed and who the key players were.

A few words on miscellaneous symbols that appear in the text: We have marked with an asterisk (∗) the problems for which there are answers or solutions at the back of the text. As a guide for the new
teacher, we have also marked with a sharp (♯) those "theoretical" exercises that are important and to which reference is made later. We indicate the end of a proof by the symbol.

Significant Changes in the Second Edition

• We have added some examples (particularly of proof reasoning) to Chapter 1 and streamlined the discussion in Sections 4 and 5. In particular, we have included a fairly simple proof that the rank of a matrix is well defined and have outlined in an exercise how this simple proof can be extended to show that reduced echelon form is unique. We have also introduced the Leslie matrix and an application to population dynamics in Section 6.

• We have reorganized Chapter 2, adding two new sections: one on linear transformations and one on elementary matrices. This makes our introduction of linear transformations more detailed and more accessible than in the first edition, paving the way for continued exploration in Chapter 4.

• We have combined the sections on linear independence and basis and noticeably streamlined the treatment of the four fundamental subspaces throughout Chapter 3. In particular, we now obtain all the orthogonality relations among these four subspaces in Section 2.

• We have altered one section of Chapter 4 somewhat and have completely reorganized the treatment of the change-of-basis theorem. Now we first treat linear maps T: Rⁿ → Rⁿ in Section 3, and we delay to Section 4 the general case and linear maps on abstract vector spaces.

• We have completely reorganized Chapter 5, moving the geometric interpretation of the determinant from Section 1 to Section 3. Until the end of Section 1, we have tied the computation of determinants to row operations only, proving at the end that this implies multilinearity.

Answers to Selected Exercises

3.6.2 a., c., e. yes; b., d., f. no
3.6.3 a., f., g. yes; b., c., d., e. no
3.6.4 mn
3.6.5 dim U = dim L = ½n(n + 1), dim D = n
3.6.6 a. not a subspace, since 0 does not have this property; c. a one-dimensional subspace with basis {e^(−2t)}; f. a two-dimensional subspace with basis {cos t, sin t}: show that if f lies in this subspace and has the properties f(0) = a and f′(0) = b, then f(t) = a cos t + b sin t. (Hint: Consider g(t) = f(t) − a cos t − b sin t and show that h(t) = (g(t))² + (g′(t))² is a constant; see the sketch below.) g. Guess two linearly independent exponential solutions, and it will follow from Theorem 3.4 of Chapter that these form a basis.
3.6.10 b. {In}
3.6.14 a. f(t) = t gives a basis; b. f(t) = t − …
3.6.15 b. f(t) = t − t + …
3.6.16 f(t) = 19 − 112t + 110t gives a basis
3.6.18 Hint: Use the addition formulas for sin and cos to derive the formulas
  sin kt sin t = ½(cos(k − 1)t − cos(k + 1)t),
  sin kt cos t = ½(sin(k + 1)t + sin(k − 1)t).
3.6.20 … gives a basis
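The claim in 3.6.6f can be checked symbolically. The following sketch is an editorial addition rather than part of the book, and it assumes SymPy is available:

    import sympy as sp

    # Editorial sketch (not from the text).
    t, a, b = sp.symbols("t a b", real=True)
    f = a * sp.cos(t) + b * sp.sin(t)        # the claimed general solution

    # f satisfies f'' = -f with f(0) = a and f'(0) = b, as asserted in 3.6.6f.
    assert sp.simplify(sp.diff(f, t, 2) + f) == 0
    assert f.subs(t, 0) == a
    assert sp.diff(f, t).subs(t, 0) == b

    # For g = f - a cos t - b sin t (identically zero here), the hint's function
    # h = g**2 + (g')**2 is the constant 0, which forces uniqueness.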
4.1.1 a. Suppose x and y are in the given subset. Then there are constants C and D so that |xk| ≤ C and |yk| ≤ D for all k; thus |xk + yk| ≤ |xk| + |yk| ≤ C + D for all k, so x + y is in the subset. (Note we've used the triangle inequality, Exercise 1.2.18.) And |cxk| = |c||xk| ≤ |c|C for all k, so cx is in the subset. Since 0 is obviously in the subset, it must be a subspace. b. (−1, 0, 1, 3)

4.1.3 a. V⊥ is spanned by a = (1, 1, 2), so P_V⊥ = (1/‖a‖²)aaᵀ = (1/6)[1 1 2; 1 1 2; 2 2 4]; so P_V = I − P_V⊥ = (1/6)[5 −1 −2; −1 5 −2; −2 −2 2]. b. Let A = …. Then AᵀA = …, and so P_V = A(AᵀA)⁻¹Aᵀ = ….

4.1.6 …

4.1.9 a. Fitting y = a to the data yields the inconsistent system Aa = b, with A = (1, 1, 1, 1)ᵀ and b = (0, 1, 3, 5)ᵀ. Then AᵀA = [4] and Aᵀb = [9], so a = 9/4. (Notice this is just the average of the given y-values.) As the theory predicts, the sum of the errors is (0 − 9/4) + (1 − 9/4) + (3 − 9/4) + (5 − 9/4) = 0; see the sketch below. c. a = …, b = …, c = …
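Two of the computations above can be replayed numerically. This is a minimal NumPy sketch, an editorial addition rather than the book's own code, using the vectors read off from the reconstructed answers:

    import numpy as np

    # Editorial sketch (not from the text).
    # 4.1.3a: V⊥ = span{a} with a = (1, 1, 2); then P_V = I - a aᵀ/‖a‖².
    a = np.array([1.0, 1.0, 2.0])
    P_perp = np.outer(a, a) / (a @ a)
    P_V = np.eye(3) - P_perp
    assert np.allclose(P_V @ a, 0)           # V is the plane orthogonal to a
    assert np.allclose(P_V @ P_V, P_V)       # projections are idempotent
    assert np.allclose(6 * P_V, [[5, -1, -2], [-1, 5, -2], [-2, -2, 2]])

    # 4.1.9a: fit the constant y = a to the data (0, 1, 3, 5) by least squares.
    A = np.ones((4, 1))
    b = np.array([0.0, 1.0, 3.0, 5.0])
    a_hat = np.linalg.lstsq(A, b, rcond=None)[0][0]
    assert np.isclose(a_hat, 9 / 4)          # the mean of the y-values
    assert np.isclose((b - a_hat).sum(), 0)  # the errors sum to zero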
4.1.11 a ≈ 1.866, k ≈ 0.878

4.1.13 Suppose proj_V x = p and proj_V y = q. Then x − p and y − q are vectors in V⊥. Then x + y = (p + q) + ((x − p) + (y − q)), where p + q ∈ V and (x − p) + (y − q) ∈ V⊥, so proj_V(x + y) = p + q, as required. Similarly, since cx = c(p + (x − p)) = (cp) + c(x − p), where cp ∈ V and c(x − p) ∈ V⊥, we infer that proj_V(cx) = cp, as well.

4.1.17 a. 1/…

4.2.2 c. q1 = (1/√2)(1, 0, 1, 0), q2 = ½(1, 1, −1, 1), q3 = (1/√2)(0, 1, 0, −1)

4.2.4 a. w1 = (1, 3, 1, 1), w2 = ½(1, −1, 1, 1), w3 = (−2, 0, 1, 1); b. proj_V(4, −1, 5, 1) = (4, −1, 3, 3); c. P_V = [1 0 0 0; 0 1 0 0; 0 0 ½ ½; 0 0 ½ ½]

4.2.5 a. w1 = (1, −1, 0, 2), w2 = (1, 1, 2, 0); b. p = (1, −1, 0, 2); c. x = (1, 0)

4.2.7 b. Since rank(A) = 2, we know that C(A) = R². Row reducing the augmented matrix [1 1 1 | b1; 0 1 −1 | b2] yields one solution (of many possible), v = (b1 − b2, b2, 0). The key point is that the unique solution lying in the row space is obtained by projecting an arbitrary solution onto R(A). The rows of A are orthogonal, so to find the solution in the row space, we take
x = proj_R(A) v = (v·(1,1,1)/‖(1,1,1)‖²)(1, 1, 1) + (v·(0,1,−1)/‖(0,1,−1)‖²)(0, 1, −1) = (b1/3)(1, 1, 1) + (b2/2)(0, 1, −1) = (b1/3, b1/3 + b2/2, b1/3 − b2/2).

4.2.8 a. Q = …, R = …; b. Q = …, R = …

4.2.11 b. The i-th row of A⁻¹ is aᵢᵀ/‖aᵢ‖².

4.2.12 a. {1, t}, (proj_V f)(t) = … − t; d. {1, cos t, sin t}, (proj_V f)(t) = … sin t

4.3.1 a. Rotating the vector e1 by −π/4 gives the vector (1/√2)(1, −1); reflecting that vector across the line x1 = x2 gives (1/√2)(−1, 1). Similarly, rotating e2 by −π/4 gives the vector (1/√2)(1, 1), which is left unchanged by the reflection. Thus, the standard matrix for T is (1/√2)[−1 1; 1 1].

4.3.3 a. This symmetry carries e1 to e2, carries e2 to −e1, and leaves e3 fixed. Thus, the standard matrix is [0 −1 0; 1 0 0; 0 0 1]. Since the columns of this matrix form an orthonormal set, the matrix is orthogonal.

4.3.5 a. The change-of-basis matrix is P = …, whose inverse is P⁻¹ = …. Thus, [T]_B = P⁻¹[T]_stand P = ….

4.3.7 …
4.3.11 …
4.3.14 …
4.3.16 …

4.3.24 a. v1 = (1, 2, 1); b. v2 = (1/√2)(−1, 0, 1), v3 = (1/√3)(1, −1, 1); c. [1 0 0; 0 0 1; 0 −1 0]; d. T is a rotation of −π/2 around the line spanned by (1, 2, 1) (as viewed from high above that vector).

4.3.26 a. If we write x = y1v1 + y2v2 + y3v3, the equation of the curve of intersection becomes y1² + (sin²φ)y2² = 1, y3 = 0.

4.4.1 a. …; ker(T) = {O}, image(T) = M2×2; b. …

4.4.2 a. …; ker(T) = {O}, image(T) = M2×2

4.4.3 Let f, g ∈ P3. Then, using the usual differentiation rules,
T(f + g)(t) = (f + g)″(t) + 4(f + g)′(t) − 5(f + g)(t) = (f″(t) + g″(t)) + 4(f′(t) + g′(t)) − 5(f(t) + g(t)) = (f″(t) + 4f′(t) − 5f(t)) + (g″(t) + 4g′(t) − 5g(t)) = T(f)(t) + T(g)(t),
so T(f + g) = T(f) + T(g). For any scalar c, we have
T(cf)(t) = (cf)″(t) + 4(cf)′(t) − 5(cf)(t) = cf″(t) + 4cf′(t) − 5cf(t) = c(f″(t) + 4f′(t) − 5f(t)) = cT(f)(t),
so T(cf) = cT(f). Thus, T is a linear transformation. To compute the matrix A, we need to apply T to each of the basis vectors v1 = 1, v2 = t, v3 = t², v4 = t³: T(v1) = −5 = −5v1, T(v2) = 4 − 5t = 4v1 − 5v2, T(v3) = 2 + 4(2t) − 5t² = 2v1 + 8v2 − 5v3, and T(v4) = 6t + 4(3t²) − 5t³ = 6v2 + 12v3 − 5v4. Thus, the matrix is as given in the text (see also the sketch following these answers).

4.4.4 a. We use the matrix A from the example and apply Theorem 4.2. Since W = W′, we have Q = I. The change-of-basis matrix relating V and V′ is
P = [1 −1 1 −1; 0 1 −2 3; 0 0 1 −3; 0 0 0 1].
So [T]_V′,W′ = Q⁻¹AP = AP = [0 1 −2 3; 0 0 2 −6; 0 0 0 3; 0 0 0 0].
Note that this checks with T(1) = 0, T(t − 1) = 1, T((t − 1)²) = 2(t − 1), and T((t − 1)³) = 3(t − 1)².

4.4.8 a. T(u + t(v − u)) = T(u) + tT(v − u) = T(u) + t(T(v) − T(u))

4.4.14 a. no; b. ker(T) = Span(1 − 2t, 1 − 3t², 1 − 4t³), image(T) = R; d. ker(T) = {0}, image(T) = {g ∈ P : g(0) = 0}

5.1.1 b. −4; d. …

5.1.8 c. i
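The matrix in 4.4.3 can be rebuilt mechanically by applying T to each basis vector of P3 and reading off coordinates. This SymPy sketch is an editorial addition, not code from the book:

    import sympy as sp

    # Editorial sketch (not from the text).
    t = sp.symbols("t")

    def T(f):
        # The operator from 4.4.3: T(f) = f'' + 4 f' - 5 f.
        return sp.diff(f, t, 2) + 4 * sp.diff(f, t) - 5 * f

    basis = [sp.Integer(1), t, t**2, t**3]       # the basis {1, t, t², t³} of P3
    cols = []
    for v in basis:
        c = sp.Poly(T(v), t).all_coeffs()[::-1]  # coefficients, lowest degree first
        cols.append(c + [0] * (4 - len(c)))      # pad up to degree 3
    A = sp.Matrix(4, 4, lambda i, j: cols[j][i])
    print(A)  # columns: (-5,0,0,0), (4,-5,0,0), (2,8,-5,0), (0,6,12,-5)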
Date posted: 06/03/2018, 12:28
