This linear algebra text, written in English by a team of foreign specialists in undergraduate teaching, is very useful for students majoring in Mathematics, students in the natural sciences, lecturers in general mathematics courses, and graduate researchers in the natural sciences, especially those specializing in Mathematics.
Undergraduate Texts in Mathematics

Serge Lang
Linear Algebra
Third Edition

Springer

Undergraduate Texts in Mathematics
Editors: S. Axler, F.W. Gehring, K.A. Ribet

Springer: New York, Berlin, Heidelberg, Hong Kong, London, Milan, Paris, Tokyo

BOOKS OF RELATED INTEREST BY SERGE LANG

Math! Encounters with High School Students, 1995, ISBN 0-387-96129-1
Geometry: A High School Course (with Gene Murrow), 1988, ISBN 0-387-96654-4
The Beauty of Doing Mathematics, 1994, ISBN 0-387-96149-6
Basic Mathematics, 1995, ISBN 0-387-96787-7
A First Course in Calculus, Fifth Edition, 1993, ISBN 0-387-96201-8
Short Calculus, 2002, ISBN 0-387-95327-2
Calculus of Several Variables, Third Edition, 1987, ISBN 0-387-96405-3
Introduction to Linear Algebra, Second Edition, 1997, ISBN 0-387-96205-0
Undergraduate Algebra, Second Edition, 1994, ISBN 0-387-97279-X
Math Talks for Undergraduates, 1999, ISBN 0-387-98749-5
Undergraduate Analysis, Second Edition, 1996, ISBN 0-387-94841-4
Complex Analysis, Fourth Edition, 1998, ISBN 0-387-98592-1
Real and Functional Analysis, Third Edition, 1993, ISBN 0-387-94001-4
Algebraic Number Theory, Second Edition, 1996, ISBN 0-387-94225-4
Introduction to Differentiable Manifolds, Second Edition, 2002, ISBN 0-387-95477-5
Challenges, 1998, ISBN 0-387-94861-9

Serge Lang
Linear Algebra
Third Edition
With 21 Illustrations

Springer

Serge Lang
Department of Mathematics, Yale University
New Haven, CT 06520, USA

Editorial Board
S. Axler, Mathematics Department, San Francisco State University, San Francisco, CA 94132, USA
F.W. Gehring, Mathematics Department, East Hall, University of Michigan, Ann Arbor, MI 48109, USA
K.A. Ribet, Mathematics Department, University of California at Berkeley, Berkeley, CA 94720-3840, USA

Mathematics Subject Classification (2000): 15-01

Library of Congress Cataloging-in-Publication Data
Lang, Serge
Linear algebra.
(Undergraduate texts in mathematics)
Includes bibliographical references and index.
1. Algebras, Linear. I. Title. II. Series.
QA251.L26 1987  512'.5  86-21943

ISBN 0-387-96412-6        Printed on acid-free paper.

The first edition of this book appeared under the title Introduction to Linear Algebra, © 1970 by Addison-Wesley, Reading, MA. The second edition appeared under the title Linear Algebra, © 1971 by Addison-Wesley, Reading, MA.

© 1987 Springer-Verlag New York, Inc.
All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed in the United States of America.
19 18 17 16 15 14 13 12 11        (Corrected printing, 2004)

Springer-Verlag is part of Springer Science+Business Media
springeronline.com

SPIN 10972434

Foreword

The present book is meant as a text for a course in linear algebra, at the undergraduate level in the upper division. My Introduction to Linear Algebra provides a text for beginning students, at the same level as introductory calculus courses. The present book is meant to serve at the next level, essentially for a second course in linear algebra, where the
emphasis is on the various structure theorems: eigenvalues and eigenvectors (which at best could occur only rapidly at the end of the introductory course); symmetric, hermitian and unitary operators, as well as their spectral theorem (diagonalization); triangulation of matrices and linear maps; the Jordan canonical form; convex sets and the Krein-Milman theorem.

One chapter also provides a complete theory of the basic properties of determinants. Only a partial treatment could be given in the introductory text. Of course, some parts of this chapter can still be omitted in a given course.

The chapter on convex sets is included because it contains basic results of linear algebra used in many applications and in "geometric" linear algebra. Because logically it uses results from elementary analysis (like the fact that a continuous function on a closed bounded set has a maximum), I put it at the end. If such results are known to a class, the chapter can be covered much earlier, for instance after knowing the definition of a linear map.

I hope that the present book can be used for a one-term course. The first six chapters review some of the basic notions. I looked for efficiency. Thus the theorem that m homogeneous linear equations in n unknowns has a non-trivial solution if n > m is deduced from the dimension theorem rather than the other way around as in the introductory text. And the proof that two bases have the same number of elements (i.e. that dimension is defined) is done rapidly by the "interchange" method. I have also omitted a discussion of elementary matrices and Gauss elimination, which are thoroughly covered in my Introduction to Linear Algebra. Hence the first part of the present book is not a substitute for the introductory text. It is only meant to make the present book self-contained, with a relatively quick treatment of the more basic material, and with the emphasis on the more advanced chapters. Today's curriculum is set up in such a way that most students, if not all, will have taken an introductory one-term course whose emphasis is on matrix manipulation. Hence a second course must be directed toward the structure theorems.

Appendix I gives the definition and basic properties of the complex numbers. This includes the algebraic closure. The proof of course must take for granted some elementary facts of analysis, but no theory of complex variables is used.

Appendix II treats the Iwasawa decomposition, a topic where the group-theoretic aspects begin to intermingle seriously with the purely linear algebra aspects. This appendix could (should?)
also be treated in the general undergraduate algebra course.

Although from the start I take vector spaces over fields which are subfields of the complex numbers, this is done for convenience, and to avoid drawn-out foundations. Instructors can emphasize as they wish that only the basic properties of addition, multiplication, and division are used throughout, with the important exception, of course, of those theories which depend on a positive definite scalar product. In such cases, the real and complex numbers play an essential role.

New Haven, Connecticut        SERGE LANG

Acknowledgments

I thank Ron Infante and Peter Pappas for assisting with the proofreading and for useful suggestions and corrections. I also thank Gimli Khazad for his corrections.

S.L.

Contents

CHAPTER I. Vector Spaces
§1. Definitions
§2. Bases
§3. Dimension of a Vector Space
§4. Sums and Direct Sums

CHAPTER II. Matrices
§1. The Space of Matrices
§2. Linear Equations
§3. Multiplication of Matrices

CHAPTER III. Linear Mappings
§1. Mappings
§2. Linear Mappings
§3. The Kernel and Image of a Linear Map
§4. Composition and Inverse of Linear Mappings
§5. Geometric Applications

CHAPTER IV. Linear Maps and Matrices
§1. The Linear Map Associated with a Matrix
§2. The Matrix Associated with a Linear Map
§3. Bases, Matrices, and Linear Maps

CHAPTER V. Scalar Products and Orthogonality
§1. Scalar Products
§2. Orthogonal Bases, Positive Definite Case
§3. Application to Linear Equations; the Rank
§4. Bilinear Maps and Matrices
§5. General Orthogonal Bases
§6. The Dual Space and Scalar Products
§7. Quadratic Forms
§8. Sylvester's Theorem

CHAPTER VI. Determinants
§1. Determinants of Order 2
§2. Existence of Determinants
§3. Additional Properties of Determinants
§4. Cramer's Rule
§5. Triangulation of a Matrix by Column Operations
§6. Permutations
§7. Expansion Formula and Uniqueness of Determinants
§8. Inverse of a Matrix
§9. The Rank of a Matrix and Subdeterminants

CHAPTER VII. Symmetric, Hermitian, and Unitary Operators
§1. Symmetric Operators
§2. Hermitian Operators
§3. Unitary Operators

CHAPTER VIII. Eigenvectors and Eigenvalues
§1. Eigenvectors and Eigenvalues
§2. The Characteristic Polynomial
§3. Eigenvalues and Eigenvectors of Symmetric Matrices
§4. Diagonalization of a Symmetric Linear Map
§5. The Hermitian Case
§6. Unitary Operators

CHAPTER IX. Polynomials and Matrices
§1. Polynomials
§2. Polynomials of Matrices and Linear Maps

CHAPTER X. Triangulation of Matrices and Linear Maps
§1. Existence of Triangulation
§2. Theorem of Hamilton-Cayley
§3. Diagonalization of Unitary Maps

CHAPTER XI. Polynomials and Primary Decomposition
§1. The Euclidean Algorithm
§2. Greatest Common Divisor
§3. Unique Factorization
§4. Application to the Decomposition of a Vector Space
§5. Schur's Lemma
§6. The Jordan Normal Form

CHAPTER XII. Convex Sets
§1. Definitions
§2. Separating Hyperplanes
§3. Extreme Points and Supporting Hyperplanes
§4. The Krein-Milman Theorem

APPENDIX I. Complex Numbers

APPENDIX II. Iwasawa Decomposition and Others

Index

APPENDIX II
Iwasawa Decomposition and Others

Let SL_n denote the set of matrices with determinant 1. The purpose of this appendix is to formulate in some general terms results about SL_n. We shall use the language of group theory, which has not been used previously, so we have to start with the definition of a group. Let
G be a set. We are given a mapping G × G → G, which at first we write as a product, i.e. to each pair of elements (x, y) of G we associate an element of G denoted by xy, satisfying the following axioms.

GR 1. The product is associative, namely for all x, y, z ∈ G we have (xy)z = x(yz).

GR 2. There is an element e ∈ G such that ex = xe = x for all x ∈ G.

GR 3. Given x ∈ G, there exists an element x^{-1} ∈ G such that xx^{-1} = x^{-1}x = e.

It is an easy exercise to show that the element e in GR 2 is uniquely determined, and it is called the unit element. The element x^{-1} in GR 3 is also easily shown to be uniquely determined, and is called the inverse of x. A set together with a mapping satisfying the three axioms is called a group.

Example. Let G = SL_n(R). Let the product be the multiplication of matrices. Then SL_n(R) is a group. Similarly, SL_n(C) is a group. The unit element is the unit matrix I.

Example. Let G be a group and let H be a subset which contains the unit element, and is closed under taking products and inverses, i.e. if x, y ∈ H then x^{-1} ∈ H and xy ∈ H. Then H is a group under the "same" product as in G, and is called a subgroup.

We shall now consider some important subgroups. Let G = SL_n(R). Note that the subset consisting of the two elements I, -I is a subgroup. Also note that SL_n(R) is a subgroup of the group GL_n(R) (all real matrices with non-zero determinant).

We shall now express Theorem 2.1 of Chapter V in the context of groups and subgroups. Let:

U = subgroup of upper triangular matrices with 1's on the diagonal (called unipotent):

    u(X) = [ 1  x_12 ... x_1n ]
           [ 0   1   ... x_2n ]
           [ ................ ]
           [ 0   0   ...  1   ]

A = subgroup of diagonal matrices with positive diagonal elements:

    a = diag(a_1, ..., a_n)   with a_i > 0 for all i.

K = subgroup of real unitary matrices k, satisfying ^t k = k^{-1}.

Theorem 1 (Iwasawa decomposition). The product map U × A × K → G given by (u, a, k) ↦ uak is a bijection.

Proof. Let e_1, ..., e_n be the standard unit vectors of R^n (vertical). Let g = (g_ij) ∈ G. Then we have ge_i = g^(i), the i-th column of g. There exists an upper triangular matrix B = (b_ij), so with b_ij = 0 if i > j,

    b_11 g^(1)                         = e_1'
    b_12 g^(1) + b_22 g^(2)            = e_2'
    ..................................
    b_1n g^(1) + ...     + b_nn g^(n)  = e_n'

such that the diagonal elements are positive, that is b_11, ..., b_nn > 0, and such that the vectors e_1', ..., e_n' are mutually perpendicular unit vectors. Getting such a matrix B is merely applying the usual Gram-Schmidt orthogonalization process, subtracting a linear combination of previous vectors to get orthogonality, and then dividing by the norms to get unit vectors. Thus

    e_j' = Σ_i b_ij g^(i).

Let k = gB. Then

    k e_j = Σ_i b_ij g e_i = Σ_i Σ_q g_qi b_ij e_q = e_j',

so k maps the orthogonal unit vectors e_1, ..., e_n to the orthogonal unit vectors e_1', ..., e_n'. Therefore k is unitary, so gB ∈ K, g = kB^{-1} and g^{-1} = Bk^{-1}. Now B = au, where a is the diagonal matrix with a_i = b_ii and u is unipotent, u = a^{-1}B. Since au = (aua^{-1})a and aua^{-1} is again unipotent, g^{-1} lies in UAK; as g runs over G, so does g^{-1}. This proves the surjection G = UAK.

For uniqueness of the decomposition, if g = uak = u'a'k', let u_1 = u^{-1}u'. Using ^t g g you get

    a^2 ^t u_1^{-1} = u_1 a'^2.

These matrices are lower and upper triangular respectively, with diagonals a^2, a'^2, so a = a', and finally u_1 = I, proving uniqueness.
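As a concrete illustration of Theorem 1, here is a small numerical sketch. It is not part of Lang's text: it assumes Python with NumPy, and the helper name iwasawa_uak is ours. It recovers the factors u, a, k of a matrix in SL_3(R) from a QR factorization, which packages the same Gram-Schmidt computation used in the proof.

import numpy as np

def iwasawa_uak(g, tol=1e-9):
    """Split g in SL_n(R) as g = u @ a @ k (Theorem 1) via a QR factorization.

    u : upper triangular with 1's on the diagonal (unipotent)
    a : diagonal with positive entries
    k : real orthogonal with determinant 1
    """
    assert abs(np.linalg.det(g) - 1.0) < tol, "g must have determinant 1"

    # QR of g^{-1}: g^{-1} = q r with r upper triangular, hence g = r^{-1} q^T.
    q, r = np.linalg.qr(np.linalg.inv(g))
    # Normalize so that r has a positive diagonal (absorb the signs into q).
    s = np.diag(np.sign(np.diag(r)))
    q, r = q @ s, s @ r

    t = np.linalg.inv(r)          # upper triangular, positive diagonal: t = ua
    a = np.diag(np.diag(t))       # the positive diagonal part
    u = t @ np.linalg.inv(a)      # unipotent part, 1's on the diagonal
    k = q.T                       # orthogonal, determinant 1
    return u, a, k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m = rng.normal(size=(3, 3))
    if np.linalg.det(m) < 0:
        m[0] *= -1.0                              # make the determinant positive
    g = m / np.linalg.det(m) ** (1 / 3)           # rescale into SL_3(R)
    u, a, k = iwasawa_uak(g)
    print(np.allclose(u @ a @ k, g))              # True: g = uak
    print(np.allclose(k @ k.T, np.eye(3)))        # True: k is orthogonal
    x = u - np.eye(3)                             # u = I + X, X strictly upper triangular
    print(np.allclose(np.linalg.matrix_power(x, 3), 0))   # True: X is nilpotent

The last check anticipates the remark below that the elements of U are of the form I + X with X nilpotent.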
The elements of U are called unipotent because they are of the form u(X) = I + X, where X = u - I is strictly upper triangular, hence nilpotent. Let

    exp Y = I + Y + Y^2/2! + Y^3/3! + ...   and   log(I + X) = X - X^2/2 + X^3/3 - ... .

Let n denote the space of all strictly upper triangular matrices. Then exp: n → U, Y ↦ exp Y, is a bijection, whose inverse is given by the log series, Y = log(I + X). Note that, because of the nilpotency, the exp and log series are actually polynomials, defining inverse polynomial mappings between U and n. The bijection actually holds over any field of characteristic 0. The relations

    exp log(I + X) = I + X   and   log exp Y = Y

hold as identities of formal power series. Cf. my Complex Analysis, Chapter II, §3, Exercise.

Geometric interpretation in dimension 2

Let h_2 be the upper half plane of complex numbers z = x + iy with x, y ∈ R and y > 0, y = y(z). For

    g = [ a  b ]   ∈ G = SL_2(R)
        [ c  d ]

define

    g(z) = (az + b)(cz + d)^{-1}.

Then G acts on h_2, meaning that the following two conditions are satisfied:

1. If I is the unit matrix, then I(z) = z for all z.
2. For g, g' ∈ G we have g(g'(z)) = (gg')(z).

Also note the property: if g(z) = z for all z, then g = ±I.

To see that if z ∈ h_2 then g(z) ∈ h_2 also, you will need to check the transformation formula

    y(g(z)) = y(z) / |cz + d|^2,

proved by direct computation. These statements are proved by (easy) brute force.

In addition, for w ∈ h_2, let G_w be the subset of elements g ∈ G such that g(w) = w. Then G_w is a subgroup of G, called the isotropy group of w. Verify that:

Theorem 2. The isotropy group of i is K, i.e. K is the subgroup of elements k ∈ G such that k(i) = i. This is the group of matrices

    [  cos θ   sin θ ]
    [ -sin θ   cos θ ]

or equivalently, a = d, c = -b, a^2 + b^2 = 1.

For x ∈ R and a_1 > 0, let

    u(x) = [ 1  x ]   and   a = [ a_1   0  ]   with a_1 a_2 = 1.
           [ 0  1 ]             [  0   a_2 ]

If g = uak, then u(x)(z) = z + x, so putting y = a_1/a_2 = a_1^2, we get a(i) = yi and

    g(i) = uak(i) = ua(i) = yi + x = x + iy.

Thus G acts transitively, and we have a description of the action in terms of the Iwasawa decomposition and the coordinates of the upper half plane.

Geometric interpretation in dimension 3

We hope you know the quaternions, whose elements are

    z = x_1 + x_2 i + x_3 j + x_4 k,   x_1, ..., x_4 ∈ R,

with i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j. Define the conjugate

    z̄ = x_1 - x_2 i - x_3 j - x_4 k.

Then

    z z̄ = x_1^2 + x_2^2 + x_3^2 + x_4^2,

and we define |z| = (z z̄)^{1/2}.

Let h_3 be the upper half space consisting of elements z whose k-component is 0 and x_3 > 0, so we write

    z = x + yj   with x = x_1 + x_2 i ∈ C and y > 0.

Let G = SL_2(C), so elements of G are matrices

    g = [ a  b ]   with a, b, c, d ∈ C and ad - bc = 1.
        [ c  d ]

As in the case of h_2, define

    g(z) = (az + b)(cz + d)^{-1}.

Verify by brute force that if z ∈ h_3 then g(z) ∈ h_3, and that G acts on h_3, namely that the two properties listed in the previous example are also satisfied here. Since the quaternions are not commutative, we have to use the quotient as written, (az + b)(cz + d)^{-1}. Also note that the y-coordinate transformation formula for z ∈ h_3 reads the same as for h_2, namely

    y(g(z)) = y(z) / |cz + d|^2.

The group G = SL_2(C) has the Iwasawa decomposition G = UAK, where:

U = group of elements u(x), as above but with x ∈ C;
A = same group as before in the case of SL_2(R);
K = complex unitary group of elements k such that ^t k̄ = k^{-1}.

The previous proof works the same way, BUT you can verify directly:

Theorem 3. The isotropy group G_j is K. If g = uak with u ∈ U, a ∈ A, k ∈ K, u = u(x) and y = y(a), then g(j) = x + yj.

Thus G acts transitively, and the Iwasawa decomposition follows trivially from this group action (see below). Thus the orthogonalization type proof can be completely avoided.
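The dimension-2 picture can be spot-checked numerically. The following sketch is ours, not the book's (it assumes Python with NumPy, and the names act, u, diag_a, k are ad hoc): it verifies the action property, the y-coordinate transformation formula, and the Iwasawa-coordinate formula g(i) = x + iy.

import numpy as np

def act(g, z):
    """Action of g in SL_2(R) on a point z of the upper half plane h_2."""
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

def u(x):
    return np.array([[1.0, x], [0.0, 1.0]])

def diag_a(a1):
    return np.array([[a1, 0.0], [0.0, 1.0 / a1]])   # a_1 a_2 = 1

def k(theta):
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

if __name__ == "__main__":
    x, a1, theta = 0.7, 2.0, 0.3
    g = u(x) @ diag_a(a1) @ k(theta)
    g2 = u(-1.2) @ diag_a(0.5) @ k(1.1)
    z = 0.5 + 2.0j

    # Action axiom: g(g2(z)) = (g g2)(z).
    print(np.isclose(act(g @ g2, z), act(g, act(g2, z))))              # True

    # y-coordinate transformation: y(g(z)) = y(z) / |cz + d|^2.
    c, d = g[1]
    print(np.isclose(act(g, z).imag, z.imag / abs(c * z + d) ** 2))    # True

    # Theorem 2: k fixes i; Iwasawa coordinates: g(i) = x + iy with y = a_1^2.
    print(np.isclose(act(k(theta), 1j), 1j))                           # True
    print(np.isclose(act(g, 1j), x + 1j * a1 ** 2))                    # True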
Proof of the Iwasawa decomposition from the above two properties. Let g ∈ G and g(j) = x + yj. Let u = u(x) and let a be such that y = a_1/a_2 = a_1^2. Let g' = ua. Then by the second property we get g(j) = g'(j), so j = g^{-1}g'(j). By the first property, we get g^{-1}g' = k for some k ∈ K, so g'k^{-1} = uak^{-1} = g, concluding the proof.

The conjugation action

By a homomorphism f: G → G' of a group into another we mean a mapping which satisfies the properties f(e_G) = e_{G'} (where e denotes the unit element), and

    f(g_1 g_2) = f(g_1) f(g_2)   for all g_1, g_2 ∈ G.

A homomorphism is called an isomorphism if it has an inverse homomorphism, i.e. if there exists a homomorphism f': G' → G such that ff' = id_{G'} and f'f = id_G. An isomorphism of G with itself is called an automorphism of G. You can verify at once that the set of automorphisms of G, denoted by Aut(G), is a group. The product in this group is the composition of mappings. Note that a bijective homomorphism is an isomorphism, just as for linear maps.

Let X be a set. A bijective map σ: X → X of X with itself is called a permutation. You can verify at once that the set of permutations of X is a group, denoted by Perm(X). By an action of a group G on X we mean a map

    G × X → X,

denoted by (g, x) ↦ gx, satisfying the two properties:

1. If e is the unit element of G, then ex = x for all x ∈ X.
2. For all g_1, g_2 ∈ G and x ∈ X we have g_1(g_2 x) = (g_1 g_2)x.

This is just a general formulation of action, of which we have seen an example above. Given g ∈ G, the map x ↦ gx of X into itself is a permutation of X. You can verify this directly from the definition, namely the inverse permutation is given by x ↦ g^{-1}x. Let σ(g) denote the permutation associated with g. Then you can also verify directly from the definition that g ↦ σ(g) is a homomorphism of G into the group of permutations of X. Conversely, such a homomorphism gives rise to an action of G on X.

Let G be a group. The conjugation action of G on itself is defined for g, g' ∈ G by

    c(g)g' = gg'g^{-1}.

It is immediately verified that the map g ↦ c(g) is a homomorphism of G into Aut(G) (the group of automorphisms of G). Then G also acts on spaces naturally associated to G.

Consider the special case when G = SL_n(R). Let:

a = vector space of diagonal matrices diag(h_1, ..., h_n) with trace 0, Σ h_i = 0;
n = vector space of strictly upper triangular matrices (h_ij), with h_ij = 0 if i ≥ j;
^t n = vector space of strictly lower triangular matrices;
g = vector space of n × n matrices of trace 0.

Then g is the direct sum a + n + ^t n, and A acts by conjugation. In fact, g is a direct sum of eigenspaces for this action. Indeed, let E_ij (i < j) be the matrix with ij-component 1 and all other components 0. Then

    c(a)E_ij = (a_i/a_j) E_ij = a^{α_ij} E_ij

by direct computation, defining a^{α_ij} = a_i/a_j. Thus α_ij is a homomorphism of A into R^+ (the positive real multiplicative group). The set of such homomorphisms will be called the set of regular characters, denoted by ℜ(n), because n is the direct sum of the 1-dimensional eigenspaces having basis E_ij (i < j). We write

    n = ⊕ n_α   (sum over α ∈ ℜ(n)),

where n_α is the set of elements X ∈ n such that aXa^{-1} = a^α X. We have a similar decomposition for ^t n. Note that a is the 0-eigenspace for the conjugation action of A.

Essentially the same structure holds for SL_n(C), except that the R-dimension of the eigenspaces n_α is 2, because n_α has basis E_ij, iE_ij. The C-dimension is 1.
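The eigenspace relation c(a)E_ij = (a_i/a_j)E_ij is easy to check by machine as well. A small sketch of ours (assuming Python with NumPy; the helper E is ad hoc) for SL_3(R); the second loop checks the additive version on diagonal matrices of trace 0, which is discussed in the next paragraphs.

import numpy as np

def E(i, j, n=3):
    """Matrix with 1 in the (i, j) component and 0 elsewhere (0-based indices)."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

if __name__ == "__main__":
    d = np.array([2.0, 0.5, 1.0])          # positive diagonal, determinant 1
    a = np.diag(d)

    # Multiplicative version: c(a) E_ij = a E_ij a^{-1} = (a_i / a_j) E_ij.
    for i in range(3):
        for j in range(3):
            if i < j:
                conj = a @ E(i, j) @ np.linalg.inv(a)
                print(i, j, np.allclose(conj, (d[i] / d[j]) * E(i, j)))   # True

    # Additive version (see below): [H, E_ij] = (h_i - h_j) E_ij for H of trace 0.
    h = np.array([1.0, -0.4, -0.6])
    H = np.diag(h)
    for i in range(3):
        for j in range(3):
            if i != j:
                bracket = H @ E(i, j) - E(i, j) @ H
                print(np.allclose(bracket, (h[i] - h[j]) * E(i, j)))      # True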
By an algebra we mean a vector space with a bilinear map into itself, called a product. We make g into an algebra by defining the Lie product of X, Y ∈ g to be

    [X, Y] = XY - YX.

It is immediately verified that this product is bilinear but not associative. We call g the Lie algebra of G. Let the space of linear maps L(g, g) be denoted by End(g), whose elements are called endomorphisms of g. By definition the regular representation of g on itself is the map

    g → End(g)

which to each X ∈ g associates the endomorphism L(X) of g such that

    L(X)(Y) = [X, Y].

Note that X ↦ L(X) is a linear map (Chapter XI, §6, Exercise 7).

Exercise. Verify that, denoting L(X) by D_X, we have the derivation property

    D_X[Y, Z] = [D_X Y, Z] + [Y, D_X Z]   for all Y, Z ∈ g.

Using only the bracket notation, this looks like

    [X, [Y, Z]] = [[X, Y], Z] + [Y, [X, Z]].

We use α also to denote the character on a, given on a diagonal matrix H = diag(h_1, ..., h_n) by α_ij(H) = h_i - h_j. This is the additive version of the multiplicative character previously considered on A. Then each n_α is also the α-eigenspace for the additive character α, namely for H ∈ a we have

    [H, E_ij] = α_ij(H) E_ij = (h_i - h_j) E_ij,

which you can verify at once from the definition of multiplication of matrices.

Polar Decompositions

We list here more product decompositions in the notation of groups and subgroups. Let G = SL_n(C). Let U = U(C) be the group of upper triangular matrices with components in C and with 1's on the diagonal. Show that U is a subgroup. Let D be the set of diagonal complex matrices with non-zero diagonal elements. Show that D is a subgroup. Let K be the set of elements k ∈ SL_n(C) such that ^t k̄ = k^{-1}. Then K is a subgroup, the complex unitary group. Cf. Chapter VII, §3, Exercise. Verify that the proof of the Iwasawa decomposition works in the complex case, that is G = UAK, with the same A in the real and complex cases.

The quadratic map. Let g ∈ G. Define g* = ^t ḡ. Show that (g_1 g_2)* = g_2* g_1*. An element g ∈ G is hermitian if and only if g = g*. Cf. Chapter VII, §2. Then gg* is hermitian positive definite, i.e. for every v ∈ C^n we have ⟨gg*v, v⟩ ≥ 0, and = 0 only if v = 0. We denote by SPos_n(C) the set of all hermitian positive definite n × n matrices with determinant 1.

Theorem 4. Let p ∈ SPos_n(C). Then p has a unique square root in SPos_n(C).

Proof. See Chapter VIII, §5, Exercise.

Let H be a subgroup of G. By a (left) coset of H we mean a subset of G of the form gH with some g ∈ G. You can easily verify that two cosets are either equal or they are disjoint. By G/H we mean the set of cosets of H in G.

Theorem 5. The quadratic map g ↦ gg* induces a bijection

    G/K → SPos_n(C).

Proof. Exercise. Show injectivity and surjectivity separately.

Theorem 6. The group G has the decomposition (non-unique)

    G = KAK.

If g ∈ G is written as a product g = k_1 b k_2 with k_1, k_2 ∈ K and b ∈ A, then b is uniquely determined up to a permutation of the diagonal elements.

Proof. Given g ∈ G there exist k_1 ∈ K and b ∈ A such that

    gg* = k_1 b^2 k_1^{-1} = (k_1 b)(k_1 b)*,

by using Chapter VIII, Theorem 4.4. By the bijection of Theorem 5, there exists k_2 ∈ K such that g = k_1 b k_2, which proves the existence of the decomposition. As to the uniqueness, note that b^2 is the diagonal matrix of eigenvalues of gg*, i.e. the diagonal elements are the roots of the characteristic polynomial, and these roots are uniquely determined up to a permutation, thus proving the theorem.
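In numerical terms the KAK decomposition of Theorem 6 is a singular value decomposition. The following sketch is ours and not from the book (it assumes Python with NumPy; the function name kak is ad hoc); a scalar phase is balanced between the two unitary factors so that both have determinant 1, as required by the definition of K.

import numpy as np

def kak(g):
    """Write g in SL_n(C) as g = k1 @ b @ k2 with k1, k2 in the complex unitary
    group K and b positive diagonal (Theorem 6), via a singular value decomposition."""
    w, s, vh = np.linalg.svd(g)            # g = w diag(s) vh, with w, vh unitary
    n = g.shape[0]
    d = np.eye(n, dtype=complex)
    d[0, 0] = 1.0 / np.linalg.det(w)       # a phase, since |det w| = 1
    # d commutes with diag(s), so (w d) diag(s) (d^{-1} vh) still equals g,
    # and both unitary factors now have determinant 1.
    return w @ d, np.diag(s), np.linalg.inv(d) @ vh

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    g = m / np.linalg.det(m) ** (1 / 3)               # rescale into SL_3(C)
    k1, b, k2 = kak(g)
    print(np.allclose(k1 @ b @ k2, g))                       # True: g = k1 b k2
    print(np.allclose(k1.conj().T @ k1, np.eye(3)))          # k1 is unitary
    print(np.isclose(np.linalg.det(k1), 1))                  # and has determinant 1
    # b^2 is the diagonal matrix of eigenvalues of g g*, up to ordering.
    print(np.allclose(np.sort(np.diag(b)) ** 2,
                      np.sort(np.linalg.eigvalsh(g @ g.conj().T))))   # True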
Note that there is another version of the polar decomposition, as follows.

Theorem 7. Abbreviate SPos_n(C) = P. Then G = PK, and the decomposition of an element g = pk with p ∈ P, k ∈ K is unique.

Proof. The existence is a rephrasing of Chapter VIII, §5, Exercise. As to uniqueness, suppose g = pk. The quadratic map gives gg* = pp* = p^2. The uniqueness of the square root in Theorem 4 shows that p is uniquely determined by g, whence so is k, as was to be shown.
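The decomposition of Theorem 7 can be computed directly from the square root of gg*. Again this is an illustrative sketch of ours, not the book's (Python with NumPy assumed; the square root is taken through the spectral theorem, in the spirit of Chapter VIII, §5):

import numpy as np

def sqrt_spos(p):
    """Square root of a hermitian positive definite matrix via the spectral theorem:
    diagonalize by a unitary, take positive square roots of the eigenvalues."""
    w, v = np.linalg.eigh(p)
    return v @ np.diag(np.sqrt(w)) @ v.conj().T

def polar_pk(g):
    """Write g in SL_n(C) as g = p @ k with p hermitian positive definite of
    determinant 1 and k complex unitary (Theorem 7)."""
    p = sqrt_spos(g @ g.conj().T)      # the unique square root of g g* (Theorem 4)
    k = np.linalg.inv(p) @ g           # then k = p^{-1} g is unitary
    return p, k

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    g = m / np.linalg.det(m) ** (1 / 3)                  # rescale into SL_3(C)
    p, k = polar_pk(g)
    print(np.allclose(p @ k, g))                         # True: g = pk
    print(np.allclose(p, p.conj().T))                    # p is hermitian
    print(np.allclose(k @ k.conj().T, np.eye(3)))        # k is unitary
    print(np.isclose(np.linalg.det(p), 1))               # det p = 1, so p lies in P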
Linear Algebra is intended for a one-term course at the junior or senior level. It begins with an exposition of the basic theory of vector spaces and proceeds to explain the fundamental structure theorems for linear maps, including eigenvectors and eigenvalues, quadratic and hermitian forms, diagonalization of symmetric, hermitian, and unitary linear maps and matrices, triangulation, and Jordan canonical form. The book also includes a useful chapter on convex sets and the finite-dimensional Krein-Milman theorem. The presentation is aimed at the student who has already had some exposure to the elementary theory of matrices, determinants, and linear maps. However, the book is logically self-contained. In this new edition, many parts of the book have been rewritten and reorganized, and new exercises have been added.

ISBN 0-387-96412-6
springeronline.com