A Concise Text on Advanced Linear Algebra

This engaging textbook for advanced undergraduate students and beginning graduates covers the core subjects in linear algebra. The author motivates the concepts by drawing clear links to applications and other important areas. The book places particular emphasis on integrating ideas from analysis wherever appropriate and features many novelties in its presentation. For example, the notion of determinant is shown to appear from calculating the index of a vector field, which leads to a self-contained proof of the Fundamental Theorem of Algebra; the Cayley–Hamilton theorem is established by recognizing the fact that the set of complex matrices of distinct eigenvalues is dense; the existence of a real eigenvalue of a self-adjoint map is deduced by the method of calculus; the construction of the Jordan decomposition is seen to boil down to understanding nilpotent maps of degree two; and a lucid and elementary introduction to quantum mechanics based on linear algebra is given. The material is supplemented by a rich collection of over 350 mostly proof-oriented exercises, suitable for readers from a wide variety of backgrounds. Selected solutions are provided at the back of the book, making it ideal for self-study as well as for use as a course text.

A Concise Text on Advanced Linear Algebra
YISONG YANG
Polytechnic School of Engineering, New York University

University Printing House, Cambridge CB2 8BS, United Kingdom

Cambridge University Press is part of the University of Cambridge. It furthers the University's mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107087514

© Yisong Yang 2015

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2015
Printed in the United Kingdom by Clays, St Ives plc

A catalogue record for this publication is available from the British Library.

Library of Congress Cataloguing in Publication data
Yang, Yisong.
A concise text on advanced linear algebra / Yisong Yang, Polytechnic School of Engineering, New York University.
pages cm
Includes bibliographical references and index.
ISBN 978-1-107-08751-4 (Hardback) – ISBN 978-1-107-45681-5 (Paperback)
Algebras, Linear–Textbooks. Algebras, Linear–Study and teaching (Higher) Algebras, Linear–Study and teaching (Graduate) I. Title. II. Title: Advanced linear algebra.
QA184.2.Y36 2015
512'.5–dc23
2014028951

ISBN 978-1-107-08751-4 Hardback
ISBN 978-1-107-45681-5 Paperback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

For Sheng, Peter, Anna, and Julia

Contents

Preface
Notation and convention

1 Vector spaces
1.1 Vector spaces
1.2 Subspaces, span, and linear dependence
1.3 Bases, dimensionality, and coordinates
1.4 Dual spaces
1.5 Constructions of vector spaces
1.6 Quotient spaces
1.7 Normed spaces

2 Linear mappings
2.1 Linear mappings
2.2 Change of basis
2.3 Adjoint mappings
2.4 Quotient mappings
2.5 Linear mappings from a vector space into itself
2.6 Norms of linear mappings

3 Determinants
3.1 Motivational examples
3.2 Definition and properties of determinants
3.3 Adjugate matrices and Cramer's rule
3.4 Characteristic polynomials and Cayley–Hamilton theorem

4 Scalar products
4.1 Scalar products and basic properties
4.2 Non-degenerate scalar products
4.3 Positive definite scalar products
4.4 Orthogonal resolutions of vectors
4.5 Orthogonal and unitary versus isometric mappings

5 Real quadratic forms and self-adjoint mappings
5.1 Bilinear and quadratic forms
5.2 Self-adjoint mappings
5.3 Positive definite quadratic forms, mappings, and matrices
5.4 Alternative characterizations of positive definite matrices
5.5 Commutativity of self-adjoint mappings
5.6 Mappings between two spaces

6 Complex quadratic forms and self-adjoint mappings
6.1 Complex sesquilinear and associated quadratic forms
6.2 Complex self-adjoint mappings
6.3 Positive definiteness
6.4 Commutative self-adjoint mappings and consequences
6.5 Mappings between two spaces via self-adjoint mappings

7 Jordan decomposition
7.1 Some useful facts about polynomials
7.2 Invariant subspaces of linear mappings
7.3 Generalized eigenspaces as invariant subspaces
7.4 Jordan decomposition theorem

8 Selected topics
8.1 Schur decomposition
8.2 Classification of skewsymmetric bilinear forms
8.3 Perron–Frobenius theorem for positive matrices
8.4 Markov matrices

9 Excursion: Quantum mechanics in a nutshell
9.1 Vectors in Cn and Dirac bracket
9.2 Quantum mechanical postulates
9.3 Non-commutativity and uncertainty principle
9.4 Heisenberg picture for quantum mechanics

Solutions to selected exercises
Bibliographic notes
References
Index

Preface

This book is concisely written to provide comprehensive core materials for a year-long course in Linear Algebra for senior undergraduate and beginning graduate students in mathematics, science, and engineering. Students who gain a profound understanding and grasp of the concepts and methods of this course will acquire an essential knowledge foundation to excel in their future academic endeavors.
Throughout the book, methods and ideas of analysis are greatly emphasized and used, along with those of algebra, wherever appropriate, and a delicate balance is cast between abstract formulation and the practical origins of various subject matters.

The book is divided into nine chapters. The first seven chapters embody a traditional course curriculum. An outline of the contents of these chapters is sketched as follows.

In Chapter 1 we cover basic facts and properties of vector spaces. These include definitions of vector spaces and subspaces, concepts of linear dependence, bases, coordinates, dimensionality, dual spaces and dual bases, quotient spaces, normed spaces, and the equivalence of the norms of a finite-dimensional normed space.

In Chapter 2 we cover linear mappings between vector spaces. We start from the definition of linear mappings and discuss how linear mappings may be concretely represented by matrices with respect to given bases. We then introduce the notions of adjoint mappings and quotient mappings. Linear mappings from a vector space into itself comprise a special but important family of mappings and are given a separate treatment later in this chapter. Topics studied there include invariance and reducibility, eigenvalues and eigenvectors, projections, nilpotent mappings, and polynomials of linear mappings. We end the chapter with a discussion of the concept of the norms of linear mappings and use it to show that being invertible is a generic property of a linear mapping and …

Solutions to selected exercises

As a consequence, for any u ∈ U, we have u = v + w, where v = f1(T)g1(T)u and w = f2(T)g2(T)u. Hence g2(T)v = f1(T)pT(T)u = 0 and g1(T)w = f2(T)pT(T)u = 0. This shows v ∈ N(g2(T)) and w ∈ N(g1(T)). Therefore U = N(g1(T)) + N(g2(T)). Pick u ∈ N(g1(T)) ∩ N(g2(T)). Applying (S22) to u we see that u = 0. Hence the problem follows.

7.2.6 We proceed by induction on k. If k = 1, (7.2.16) follows from Exercise 7.2.3.
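The direct-sum decomposition U = N(g1(T)) ⊕ N(g2(T)) obtained above from a co-prime factorization of the characteristic polynomial is easy to check numerically. The following sketch is not from the book: the matrix A and the co-prime factors g1(λ) = (λ − 1)², g2(λ) = λ − 2 of p(λ) = (λ − 1)²(λ − 2) are hypothetical choices made for illustration.

```python
import numpy as np

# Hypothetical 3x3 example: A has characteristic polynomial
# p(lambda) = (lambda - 1)^2 (lambda - 2) = g1(lambda) g2(lambda),
# with co-prime factors g1(lambda) = (lambda - 1)^2, g2(lambda) = lambda - 2.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
I = np.eye(3)

g1A = (A - I) @ (A - I)   # g1(A)
g2A = A - 2 * I           # g2(A)

def nullity(M, tol=1e-9):
    """dim N(M) = n - rank(M)."""
    return M.shape[0] - np.linalg.matrix_rank(M, tol=tol)

# g1(A) g2(A) = p(A) = 0 by the Cayley-Hamilton theorem.
assert np.allclose(g1A @ g2A, 0)

# The nullities add up to dim(U) = 3, consistent with
# U = N(g1(T)) + N(g2(T)) being a direct sum.
assert nullity(g1A) + nullity(g2A) == 3
```

Since g1 and g2 are co-prime, the two null spaces intersect trivially, so their dimensions must add up to the dimension of the whole space, which is what the rank computation confirms.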
Assume that at k − 1 ≥ 1 the relation (7.2.16) holds. That is,

U = R(T1) ⊕ ⋯ ⊕ R(Tk−1) ⊕ W, W = N(T1) ∩ ⋯ ∩ N(Tk−1). (S23)

Now we have U = R(Tk) ⊕ N(Tk). We assert N(Tk) = R(T1) ⊕ ⋯ ⊕ R(Tk−1) ⊕ V. In fact, pick any u ∈ N(Tk). Then (S23) indicates that u = u1 + ⋯ + uk−1 + w for u1 ∈ R(T1), …, uk−1 ∈ R(Tk−1), w ∈ W. Since Tk ∘ Ti = 0 for i = 1, …, k − 1, we see that u1, …, uk−1 ∈ N(Tk). Hence w ∈ N(Tk). So w ∈ V. This establishes the assertion, and the problem follows.

Section 7.3

7.3.2 (ii) From (7.3.41) we see that, if we set p(λ) = λ^n − a_{n−1}λ^{n−1} − ⋯ − a1λ − a0, then p(T)(T^k(u)) = T^k(p(T)(u)) = 0 for k = 0, 1, …, n − 1, where T^0 = I. This establishes p(T) = 0, since {T^{n−1}(u), …, T(u), u} is a basis of U. So pT(λ) = p(λ). It is clear that mT(λ) = pT(λ).

7.3.3 Let u be a cyclic vector of T. Assume that S and T commute. Let a0, a1, …, a_{n−1} ∈ F be such that S(u) = a_{n−1}T^{n−1}(u) + ⋯ + a1T(u) + a0u. Set p(t) = a_{n−1}t^{n−1} + ⋯ + a1t + a0. Hence S(T^k(u)) = T^k(S(u)) = (T^k p(T))(u) = p(T)(T^k(u)) for k = 1, …, n − 1. This proves S = p(T), since u, T(u), …, T^{n−1}(u) form a basis of U.

7.3.4 (i) Let u be a cyclic vector of T. Since T is normal, U has a basis consisting of eigenvectors, say u1, …, un, of T, associated with the corresponding eigenvalues λ1, …, λn. Express u as u = a1u1 + ⋯ + anun for some a1, …, an ∈ C. Hence

T(u) = a1λ1u1 + ⋯ + anλnun, …, T^{n−1}(u) = a1λ1^{n−1}u1 + ⋯ + anλn^{n−1}un.

Inserting the above relations into the equation

x1u + x2T(u) + ⋯ + xnT^{n−1}(u) = 0, (S24)

we obtain

ai(x1 + λi x2 + ⋯ + λi^{n−1} xn) = 0, i = 1, …, n. (S25)

If λ1, …, λn are not all distinct, then (S25) has a solution (x1, …, xn) ∈ C^n which is not the zero vector, for any given a1, …, an, contradicting the linear independence of the vectors u, T(u), …, T^{n−1}(u).

(ii) With (7.3.42), we consider the linear dependence of the vectors u, T(u), …, T^{n−1}(u) as in part (i) and come up with (S24) and (S25). Since λ1, …, λn are distinct, in view of the Vandermonde determinant, we see that the system (S25) has only the zero solution x1 = 0, …, xn = 0 if and only if ai ≠ 0 for every i = 1, …, n.

7.3.5 Assume there is such an S. Then S is nilpotent of degree m, where m satisfies 2(n − 1) < m ≤ 2n, since S^{2(n−1)} = T^{n−1} ≠ 0 and S^{2n} = T^n = 0. By Theorem 2.22, we arrive at 2(n − 1) < m ≤ n, which is false.

Section 7.4

7.4.5 Let the invertible matrix C ∈ C(n, n) be decomposed as C = P + iQ, where P, Q ∈ R(n, n). If Q = 0, there is nothing to show. Assume Q ≠ 0. Then from CA = BC we have PA = BP and QA = BQ. Thus, for any real number λ, we have (P + λQ)A = B(P + λQ). It is clear that there is some λ ∈ R such that det(P + λQ) ≠ 0. For such a λ, set K = P + λQ. Then A = K^{−1}BK.

7.4.9 (i) Let u ∈ C^n \ {0} be an eigenvector associated with the eigenvalue λ. Then Au = λu. Thus A^k u = λ^k u. Since there is an invertible matrix B ∈ C(n, n) such that A^k = B^{−1}AB, we obtain B^{−1}ABu = λ^k u, or A(Bu) = λ^k(Bu), so that the problem follows.

(ii) From (i) we see that if λ ∈ C is an eigenvalue, then λ^k, λ^{k^2}, …, λ^{k^l}, … are all eigenvalues of A, which cannot all be distinct when l is large enough. So there are some integers 0 ≤ l < m such that λ^{k^l} = λ^{k^m}. Since A is nonsingular, λ ≠ 0. Hence λ satisfies λ^{k^m − k^l} = 1, as asserted.

7.4.11 Use (3.4.37) without assuming det(A) ≠ 0, or (S16).

7.4.12 Take u = (1, …, 1)^t ∈ R^n. It is clear that Au = nu. It is also clear that n(A) = n − 1, since r(A) = 1. So n(A − nIn) = 1 and A ∼ diag{n, 0, …, 0}. Take v = (1, b2, …, bn)^t ∈ R^n. Then Bv = nv. Since r(B) = 1, we get n(B) = n − 1. So n(B − nIn) = 1. Consequently, B ∼ diag{n, 0, …, 0}. Thus A ∼ B.

7.4.17 Suppose otherwise that there is an A ∈ R(3, 3) such that m(λ) = λ² + 3λ + ⋯ (a quadratic with no real root) is the minimal polynomial of A. Let pA(λ) be the characteristic polynomial of A. Since pA(λ) ∈ P3 and the coefficients of pA(λ) are all real, pA(λ) has a real root. On the other hand, recall that m(λ) and pA(λ) have the same roots, but m(λ) has no real root. So we arrive at a contradiction.

7.4.18 (i) Let mA(λ) be the minimal polynomial of A. Then mA(λ) | λ² + 1. However, λ² + 1 is prime over R, so mA(λ) = λ² + 1.

(ii) Let pA(λ) be the characteristic polynomial of A. Then the degree of pA(λ) is n. Since mA(λ) = λ² + 1 contains all the roots of pA(λ) in C, which are ±i, and these must appear in conjugate pairs because pA(λ) has real coefficients, we get pA(λ) = (λ² + 1)^m for some integer m ≥ 1. Hence n = 2m.

(iii) Since pA(λ) = (λ − i)^m(λ + i)^m and mA(λ) = (λ − i)(λ + i) (i.e., ±i are simple roots of mA(λ)), we know that

N(A − iIn) = {x ∈ C^n | Ax = ix}, N(A + iIn) = {y ∈ C^n | Ay = −iy}

are both of dimension m in C^n and C^n = N(A − iIn) ⊕ N(A + iIn). Since A is real, we see that if x ∈ N(A − iIn) then x̄ ∈ N(A + iIn), and vice versa. Moreover, if {w1, …, wm} is a basis of N(A − iIn), then {w̄1, …, w̄m} is a basis of N(A + iIn), and vice versa. Thus {w1, …, wm, w̄1, …, w̄m} is a basis of C^n. We now make the decomposition

wi = ui + ivi, ui, vi ∈ R^n, i = 1, …, m. (S26)

Then ui, vi (i = 1, …, m) satisfy (7.4.23). It remains to show that these vectors are linearly independent in R^n. In fact, consider

a1u1 + ⋯ + amum + b1v1 + ⋯ + bmvm = 0, a1, …, am, b1, …, bm ∈ R. (S27)

From (S26) we have

ui = (1/2)(wi + w̄i), vi = (1/2i)(wi − w̄i), i = 1, …, m. (S28)

Inserting (S28) into (S27), we obtain

(1/2) Σ_{i=1}^m (ai − ibi)wi + (1/2) Σ_{i=1}^m (ai + ibi)w̄i = 0,

which leads to ai = bi = 0 for all i = 1, …, m.

(iv) Take the ordered basis B = {u1, …, um, v1, …, vm}. Then it is seen that, with respect to B, the matrix representation of the mapping TA ∈ L(R^n) defined by TA(u) = Au, u ∈ R^n, is simply

C = [ 0, Im ; −Im, 0 ].

More precisely, if a matrix, also called B, is formed by using the vectors in the ordered
basis B as its first, second, …, and nth column vectors, then AB = BC, which establishes (7.4.24).

Section 8.1

8.1.5 If A is normal, there is a unitary matrix P ∈ C(n, n) such that A = P†DP, where D is a diagonal matrix of the form diag{λ1, …, λn}, with λ1, …, λn the eigenvalues of A, which are assumed to be real. Thus A† = A. However, because A is real, we have A = A^t.

8.1.6 We proceed inductively on dim(U). If dim(U) = 1, there is nothing to show. Assume that the problem is true at dim(U) = n − 1 ≥ 1. We prove the conclusion at dim(U) = n ≥ 2. Let λ ∈ C be an eigenvalue of T and Eλ the associated eigenspace of T. Then for u ∈ Eλ we have T(S(u)) = S(T(u)) = λS(u). Hence S(u) ∈ Eλ. So Eλ is invariant under S. As an element of L(Eλ), S has an eigenvalue μ ∈ C. Let u ∈ Eλ be an eigenvector of S associated with μ. Then u is a common eigenvector of S and T. Applying this observation to S† and T†, since S†, T† commute as well, we know that S† and T† also have a common eigenvector, say w, satisfying S†(w) = σw, T†(w) = γw for some σ, γ ∈ C.

Let V = (Span{w})⊥. Then V is invariant under S and T, as can be seen from

(w, S(v)) = (S†(w), v) = (σw, v) = σ(w, v) = 0,
(w, T(v)) = (T†(w), v) = (γw, v) = γ(w, v) = 0, for v ∈ V.

Since dim(V) = n − 1, we may find an orthonormal basis of V, say {u1, …, un−1}, under which the matrix representations of S and T are upper triangular. Let un = w/‖w‖. Then {u1, …, un−1, un} is an orthonormal basis of U under which the matrix representations of S and T are upper triangular.

8.1.9 Let λ̄ ∈ C be any eigenvalue of T† and v ∈ U an associated eigenvector. Then we have

((T − λI)(u), v) = (u, (T† − λ̄I)(v)) = 0, u ∈ U, (S29)

which implies that R(T − λI) ⊂ (Span{v})⊥. Therefore r(T − λI) ≤ n − 1. Thus, in view of the rank equation, we obtain n(T − λI) ≥ 1. In other words, this shows that λ must be an eigenvalue of T.

Section 8.2

8.2.3 Let B = {u1, …, un} be a basis of U and x, y ∈ F^n the coordinate vectors of u, v ∈ U with respect to B. With A = (aij) = (f(ui, uj)) and (8.2.2), we see that u ∈ U0 if and only if x^tAy = 0 for all y ∈ F^n, or Ax = 0. In other words, u ∈ U0 if and only if x ∈ N(A). So dim(U0) = n(A) = n − r(A) = dim(U) − r(A).

8.2.4 (i) As in the previous exercise, we use B = {u1, …, un} to denote a basis of U and x, y ∈ F^n the coordinate vectors of any vectors u, v ∈ U with respect to B. With A = (aij) = (f(ui, uj)) and (8.2.2), we see that u ∈ V⊥ if and only if

(Ax)^t y = 0, v ∈ V. (S30)

Let dim(V) = m. We can find m linearly independent vectors y^(1), …, y^(m) in F^n to replace the condition (S30) by (y^(1))^t(Ax) = 0, …, (y^(m))^t(Ax) = 0. These equations indicate that, if we use B to denote the matrix formed by taking (y^(1))^t, …, (y^(m))^t as its first, …, and mth row vectors, then Ax ∈ N(B) = {z ∈ F^n | Bz = 0}. In other words, the subspace of F^n consisting of the coordinate vectors of the vectors in V⊥ is given by X = {x ∈ F^n | Ax ∈ N(B)}. Since A is invertible, we have dim(X) = dim(N(B)) = n(B) = n − r(B) = n − m. This establishes dim(V⊥) = dim(X) = dim(U) − dim(V).

(ii) For v ∈ V, we have f(u, v) = 0 for any u ∈ V⊥. So V ⊂ (V⊥)⊥. On the other hand, from (i), we get dim(V) + dim(V⊥) = dim(V⊥) + dim((V⊥)⊥). So dim(V) = dim((V⊥)⊥), which implies V = (V⊥)⊥.

8.2.7 If V = V⊥, then V is isotropic and dim(V) = (1/2)dim(U) in view of Exercise 8.2.4. Thus V is Lagrangian. Conversely, if V is Lagrangian, then V is isotropic, so that V ⊂ V⊥ and dim(V) = (1/2)dim(U). From Exercise 8.2.4, we have dim(V⊥) = dim(U) − dim(V) = (1/2)dim(U) = dim(V). So V = V⊥.

Section 8.3

8.3.1 Let u = (ai) ∈ R^n be a positive eigenvector associated to r. Then the ith component of the relation Au = ru reads

r ai = Σ_{j=1}^n aij aj, i = 1, …, n. (S31)

Choose k, l = 1, …, n such that ak = min{ai | i = 1, …, n} and al = max{ai | i = 1, …, n}. Inserting these into (S31), we find

r ak ≥ ak Σ_{j=1}^n akj, r al ≤ al Σ_{j=1}^n alj.

From these we see that the bounds stated in (8.3.27) follow.

8.3.3 Use the notation ΛA = {λ ∈ R | λ ≥ 0, Ax ≥ λx for some x ∈ S}, where S is defined by (8.3.3). Recall the construction (8.3.7). We see that rA = sup{λ ∈ ΛA}. Since A ≤ B implies ΛA ⊂ ΛB, we deduce rA ≤ rB.

Section 8.4

8.4.4 From lim_{m→∞} A^m = K, we obtain lim_{m→∞} (A^t)^m = lim_{m→∞} (A^m)^t = K^t. Since all the row vectors of K and of K^t are identical, we see that all entries of K are identical. By the condition (8.4.13), we deduce (8.4.24).

8.4.5 It is clear that all the entries of A and A^t are non-negative. It remains to show that 1 and u = (1, …, 1)^t ∈ R^n form an eigenvalue–eigenvector pair of both A and A^t. In fact, applying Ai u = u and Ai^t u = u (i = 1, …, k) consecutively, we obtain Au = A1 ⋯ Ak u = u and A^t u = Ak^t ⋯ A1^t u = u, respectively.

Section 9.3

9.3.2 (i) In the uniform state (9.3.30), we have

⟨A⟩ = (1/n) Σ_{i=1}^n λi, ⟨A²⟩ = (1/n) Σ_{i=1}^n λi².

Hence the uncertainty σA of the observable A in the state (9.3.30) is given by the formula

σA² = ⟨A²⟩ − ⟨A⟩² = (1/n) Σ_{i=1}^n λi² − ((1/n) Σ_{i=1}^n λi)². (S32)

(ii) From (9.3.25) and (S32), we obtain the comparison

(1/n) Σ_{i=1}^n λi² − ((1/n) Σ_{i=1}^n λi)² ≤ (1/4)(λmax − λmin)².

Bibliographic notes

We end the book by mentioning a few important but more specialized subjects that are not touched in this book. We point out only some relevant references for the interested reader.

Convex sets. In Lang [23] basic properties and characterizations of convex sets in R^n are presented. For a deeper study of convex sets using advanced tools such as the Hahn–Banach theorem, see Lax [25].

Tensor products and alternating forms. These topics are covered elegantly by Halmos [18]. In particular, there, the determinant is seen to arise as the unique scalar, associated with each linear mapping, defined by the one-dimensional space of top-degree alternating forms over a finite-dimensional vector space.

Minmax principle for computing the eigenvalues of self-adjoint mappings. This is a
classical variational resolution of the eigenvalue problem, known as the method of the Rayleigh–Ritz quotients. For a thorough treatment see Bellman [6], Lancaster and Tismenetsky [22], and Lax [25].

Calculus of matrix-valued functions. These techniques are useful and powerful in applications. For an introduction see Lax [25].

Irreducible matrices. Such a notion is crucial for extending the Perron–Frobenius theorem and for exploring the Markov matrices further under more relaxed conditions. See Berman and Plemmons [7], Horn and Johnson [21], Lancaster and Tismenetsky [22], Meyer [29], and Xu [38] for related studies.

Transformation groups and bilinear forms. Given a non-degenerate bilinear form over a finite-dimensional vector space, the set of all linear mappings on the space which preserve the bilinear form is a group under the operation of composition. With a specific choice of the bilinear form, a particular such transformation group may thus be constructed and investigated. For a concise introduction to this subject in the context of linear algebra, see Hoffman and Kunze [19].

Computational methods for solving linear systems. Practical methods for solving systems of linear equations are well investigated and documented. See Lax [25], Golub and Ortega [13], and Stoer and Bulirsch [32] for a description of some of the methods.

Computing the eigenvalues of symmetric matrices. This is a much developed subject and many nice methods are available. See Lax [25] for methods based on the QR factorization and differential flows. See also Stoer and Bulirsch [32].

Computing the eigenvalues of general matrices. Some nicely formulated iterative methods may be employed to approximate the eigenvalues of a general matrix under certain conditions. These methods include the QR convergence algorithm and the power method. See Golub and Ortega [13] and Stoer and Bulirsch [32].

Random matrices. The study of random matrices, the matrices whose entries are random variables, was pioneered by Wigner [36, 37] to model the spectra of large atoms and has recently become the focus of active mathematical research. See Akemann, Baik, and Di Francesco [1], Anderson, Guionnet, and Zeitouni [2], Mehta [28], and Tao [33] for textbooks, and Beenakker [5], Diaconis [12], and Guhr, Müller-Groeling, and Weidenmüller [17] for survey articles.

Besides, for a rich variety of applications of Linear Algebra and its related studies, see Bai, Fang, and Liang [3], Bapat [4], Bellman [6], Berman and Plemmons [7], Berry, Dumais, and O'Brien [8], Brualdi and Ryser [9], Datta [10], Davis [11], Gomide et al. [14], Graham [15], Graybill [16], Horadam [20], Latouche and Vaidyanathan [24], Leontief [26], Lyubich, Akin, Vulis, and Karpov [27], Meyn and Tweedie [30], Stinson [31], Taubes [34], Van Dooren and Wyman [35], and references therein.

References

[1] G. Akemann, J. Baik, and P. Di Francesco, The Oxford Handbook of Random Matrix Theory, Oxford University Press, Oxford, 2011.
[2] G. W. Anderson, A. Guionnet, and O. Zeitouni, An Introduction to Random Matrices, Cambridge University Press, Cambridge, 2010.
[3] Z. Bai, Z. Fang, and Y.-C. Liang, Spectral Theory of Large Dimensional Random Matrices and Its Applications to Wireless Communications and Finance Statistics, World Scientific, Singapore, 2014.
[4] R. B. Bapat, Linear Algebra and Linear Models, 3rd edn, Universitext, Springer-Verlag and Hindustan Book Agency, New Delhi, 2012.
[5] C. W. J. Beenakker, Random-matrix theory of quantum transport, Reviews of Modern Physics 69 (1997) 731–808.
[6] R. Bellman, Introduction to Matrix Analysis, 2nd edn, Society of Industrial and Applied Mathematics, Philadelphia, 1997.
[7] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Society of Industrial and Applied Mathematics, Philadelphia, 1994.
[8] M. Berry, S. Dumais, and G. O'Brien, Using linear algebra for intelligent information retrieval, SIAM Review 37 (1995) 573–595.
[9] R. A.
Brualdi and H. J. Ryser, Combinatorial Matrix Theory, Encyclopedia of Mathematics and its Applications 39, Cambridge University Press, Cambridge, 1991.
[10] B. N. Datta, Numerical Linear Algebra and Applications, 2nd edn, Society of Industrial and Applied Mathematics, Philadelphia, 2010.
[11] E. Davis, Linear Algebra and Probability for Computer Science Applications, A K Peters/CRC Press, Boca Raton, FL, 2012.
[12] P. Diaconis, Patterns in eigenvalues: the 70th Josiah Willard Gibbs lecture, Bulletin of the American Mathematical Society (New Series) 40 (2003) 155–178.
[13] G. H. Golub and J. M. Ortega, Scientific Computing and Differential Equations, Academic Press, Boston and New York, 1992.
[14] J. Gomide, R. Melo-Minardi, M. A. dos Santos, G. Neshich, W. Meira, Jr., J. C. Lopes, and M. Santoro, Using linear algebra for protein structural comparison and classification, Genetics and Molecular Biology 32 (2009) 645–651.
[15] A. Graham, Nonnegative Matrices and Applicable Topics in Linear Algebra, John Wiley & Sons, New York, 1987.
[16] F. A. Graybill, Introduction to Matrices with Applications in Statistics, Wadsworth Publishing Company, Belmont, CA, 1969.
[17] T. Guhr, A. Müller-Groeling, and H. A. Weidenmüller, Random-matrix theories in quantum physics: common concepts, Physics Reports 299 (1998) 189–425.
[18] P. R. Halmos, Finite-Dimensional Vector Spaces, 2nd edn, Springer-Verlag, New York, 1987.
[19] K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1965.
[20] K. J. Horadam, Hadamard Matrices and Their Applications, Princeton University Press, Princeton, NJ, 2007.
[21] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, New York, and Melbourne, 1985.
[22] P. Lancaster and M. Tismenetsky, The Theory of Matrices, 2nd edn, Academic Press, San Diego, New York, London, Sydney, and Tokyo, 1985.
[23] S. Lang, Linear Algebra, 3rd edn, Springer-Verlag, New York, 1987.
[24] G. Latouche and R. Vaidyanathan, Introduction to Matrix Analytic Methods in Stochastic Modeling, Society of Industrial and Applied Mathematics, Philadelphia, 1999.
[25] P. D. Lax, Linear Algebra and Its Applications, John Wiley & Sons, Hoboken, NJ, 2007.
[26] W. Leontief, Input-Output Economics, Oxford University Press, New York, 1986.
[27] Y. I. Lyubich, E. Akin, D. Vulis, and A. Karpov, Mathematical Structures in Population Genetics, Springer-Verlag, New York, 2011.
[28] M. L. Mehta, Random Matrices, Elsevier Academic Press, Amsterdam, 2004.
[29] C. Meyer, Matrix Analysis and Applied Linear Algebra, Society of Industrial and Applied Mathematics, Philadelphia, 2000.
[30] S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability, Springer-Verlag, London, 1993; 2nd edn, Cambridge University Press, Cambridge, 2009.
[31] D. R. Stinson, Cryptography, Discrete Mathematics and its Applications, Chapman & Hall/CRC Press, Boca Raton, FL, 2005.
[32] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, New York, Heidelberg, and Berlin, 1980.
[33] T. Tao, Topics in Random Matrix Theory, American Mathematical Society, Providence, RI, 2012.
[34] C. H. Taubes, Lecture Notes on Probability, Statistics, and Linear Algebra, Department of Mathematics, Harvard University, Cambridge, MA, 2010.
[35] P. Van Dooren and B. Wyman, Linear Algebra for Control Theory, IMA Volumes in Mathematics and its Applications, Springer-Verlag, New York, 2011.
[36] E. Wigner, Characteristic vectors of bordered matrices with infinite dimensions, Annals of Mathematics 62 (1955) 548–564.
[37] E. Wigner, On the distribution of the roots of certain symmetric matrices, Annals of Mathematics 67 (1958) 325–327.
[38] Y. Xu, Linear Algebra and Matrix Theory (in Chinese), 2nd edn, Higher Education Press, Beijing, 2008.

Index

1-form, 16; characteristic roots, 107; addition; adjoint mapping, 50, 122; adjoint matrix, 103; adjugate matrix, 103; adjunct matrix, 103; algebraic multiplicity, 216; algebraic number, 13; angular frequencies, 255; annihilating
polynomials, 221; annihilator, 19; anti-Hermitian mappings, 195; anti-Hermitian matrices; anti-lower triangular matrix, 99; anti-self-adjoint, 126; anti-self-adjoint mappings, 195; anti-self-dual, 126; anti-symmetric; anti-symmetric forms, 230; anti-upper triangular matrix, 99; basis, 13; basis change matrix, 15; basis orthogonalization, 117; basis transition matrix, 15, 45; Bessel inequality, 140; bilinear form, 147, 180; bilinear forms, 147; boxed diagonal matrix, 57; boxed lower triangular form, 99; boxed upper triangular form, 98; boxed upper triangular matrix, 56; bracket, 248; Cayley–Hamilton theorem, 110; characteristic of a field; characteristic polynomial, 107; characteristic polynomial of linear mapping, 109; Cholesky decomposition theorem, 168; Cholesky relation, 168; co-prime, 207; codimension, 27; cofactor, 89; cofactor expansion, 89; cokernel, 39; column rank of a matrix, 52; commutator, 256; complement, 22; complete set of orthogonal vectors, 139; component; composition of mappings, 37; congruent matrices, 148; contravariant vectors, 18; convergent sequence, 28; convex, 164; coordinate vector, 14; coordinates, 14; covariant vectors, 18; Cramer's formulas, 80, 104; Cramer's rule, 80, 104; cyclic vector, 62, 217; cyclic vector space, 217; Darboux basis, 235; degree, 86; degree of a nilpotent mapping, 62; dense, 72; determinant, 79; determinant of a linear mapping, 105; diagonal matrix; diagonalizable, 222; diagonally dominant condition, 101; dimension, 13; direct product of vector spaces, 22; direct sum, 21; direct sum of mappings, 58; dominant, 237; dot product; doubly Markov matrices, 247; dual basis, 17; dual mapping, 122; dual space, 16

Heisenberg picture, 262; Heisenberg uncertainty principle, 258; Hermitian congruent, 181; Hermitian conjugate, 7, 134; Hermitian matrices; Hermitian matrix, 134; Hermitian scalar product, 127; Hermitian sesquilinear form, 182; Hermitian skewsymmetric forms, 236; Hermitian symmetric matrices; Hilbert–Schmidt norm of a matrix, 137; homogeneous, 148; hyponormal mappings, 199

eigenmodes, 255; eigenspace, 57; eigenvalue, 57; eigenvector, 57; Einstein formula, 255; entry; equivalence of norms, 30; equivalence relation, 48; equivalent polynomials, 207; Euclidean scalar product, 124, 127

ideal, 205; idempotent linear mappings, 60; identity matrix; image, 38; indefinite, 161; index, 83; index of negativity, 119; index of nilpotence, 62; index of nullity, 119; index of positivity, 119; infinite dimensional, 13; injective, 38; invariant subspace, 55; inverse mapping, 42; invertible linear mapping, 41; invertible matrix; irreducible linear mapping, 56; irreducible polynomial, 207; isometric, 142; isometry, 142; isomorphic, 42; isomorphism, 42; isotropic subspace, 235

Fermat's Little Theorem, 267; field; field of characteristic 0; field of characteristic p; finite dimensional, 13; finite dimensionality, 13; finitely generated, 13; form, 16; Fourier coefficients, 139; Fourier expansion, 139; Fredholm alternative, 53, 137, 178; Frobenius inequality, 44; functional, 16; Fundamental Theorem of Algebra, 85; generalized eigenvectors, 216; generalized Schwarz inequality, 194; generated; generic property, 72; geometric multiplicity, 57; Gram matrix, 136; Gram–Schmidt procedure, 117; greatest common divisor, 207; Gronwall inequality, 263; Hamiltonian, 254; hedgehog map, 88; Heisenberg equation, 263

Jordan block, 220; Jordan canonical form, 220; Jordan decomposition theorem, 220; Jordan matrix, 220; kernel, 38; least squares approximation, 176; left inverse, 7, 42; Legendre polynomials, 141; Levy–Desplanques theorem, 101; limit, 28; linear complement, 22; linear function, 16; linear span; linear transformation, 55; linearly dependent, 8; linearly independent, 10; linearly spanned; locally nilpotent mappings, 62; lower triangular matrix; mapping addition, 35; Markov matrices, 243; matrix; matrix exponential, 76; matrix multiplication; matrix-valued initial value problem, 76; maximum uncertainty, 261; maximum uncertainty state, 261; Measurement postulate, 252; metric matrix, 136; minimal polynomial, 67, 221; Minkowski metric, 120; Minkowski scalar product, 120; Minkowski theorem, 101; minor, 89; mutually complementary, 22; negative definite, 161; negative semi-definite, 161; nilpotent mappings, 62; non-definite, 161; non-degenerate, 120; non-negative, 161; non-negative matrices, 237; non-negative vectors, 237; non-positive, 161; nonsingular matrix; norm, 28; norm that is stronger, 29; normal equation, 176; normal mappings, 172, 194, 195; normal matrices, 197; normed space, 28; null vector, 115; null-space, 38; nullity, 39; nullity-rank equation, 40; observable postulate, 252; observables, 252; one-parameter group, 74; one-to-one, 38; onto, 38; open, 73; orthogonal, 115; orthogonal mapping, 123, 132; orthogonal matrices; orthogonal matrix, 134; orthonormal basis, 119, 130; Parseval identity, 140; Pauli matrices, 257; period, 62; permissible column operations, 95; permissible row operations, 92; perpendicular, 115; Perron–Frobenius theorem, 237; photoelectric effect, 255; Planck constant, 254; polar decomposition, 198; polarization identities, 149; polarization identity, 132; positive definite Hermitian mapping, 188; positive definite Hermitian matrix, 188; positive definite quadratic form, 188; positive definite quadratic forms, 158; positive definite scalar product over a complex vector space, 128; positive definite scalar product over a real vector space, 128; positive definite self-adjoint mapping, 188; positive definite self-adjoint mappings, 158; positive definite symmetric matrices, 158; positive diagonally dominant condition, 101; positive matrices, 237; positive semi-definite, 161; positive vectors, 237; preimages, 38; prime polynomial, 207; principal minors, 166; product of matrices; projection, 60; proper subspace; Pythagoras theorem, 129; QR factorization, 134; quadratic form, 148, 181; quotient space, 26; random variable, 252; range, 38; rank, 39; rank equation, 40; rank of a matrix, 52; reducibility, 56; reducible linear mapping, 56; reflective, 19; reflectivity, 19; regular Markov matrices, 243; regular stochastic matrices, 243; relatively prime, 207; Riesz isomorphism, 122; Riesz representation theorem, 121; right inverse, 7, 42; row rank of a matrix, 52; scalar multiple of a functional, 16; scalar multiplication; scalar product, 115; scalar-mapping multiplication, 35; scalars, 1; Schrödinger equation, 254; Schrödinger picture, 262; Schur decomposition theorem, 226; Schwarz inequality, 129; self-adjoint mapping, 122; self-dual mapping, 122; self-dual space, 122; sesquilinear form, 180; shift matrix, 65; signed area, 79; signed volume, 80; similar matrices, 48; singular value decomposition for a mapping, 202; singular value decomposition for a matrix, 203; singular values of a mapping, 202; singular values of a matrix, 203; skewsymmetric; skew-Hermitian forms, 236; skew-Hermitian matrices; skewsymmetric forms, 230; special relativity, 120; square matrix; stable Markov matrices, 247; stable matrix, 246; standard deviation, 259; State postulate, 252; state space, 252; states, 252; stochastic matrices, 243; subspace; sum of functionals, 16; sum of subspaces, 20; surjective, 38; Sylvester inequality, 44; Sylvester theorem, 118; symmetric; symmetric bilinear forms, 149; symplectic basis, 235; symplectic complement, 236; symplectic forms, 235; symplectic vector space, 235; time evolution postulate, 253; topological invariant, 82; transcendental number, 13; transpose; uncertainty, 259; uniform state, 261; unit matrix; unitary mapping, 132; unitary matrices; unitary matrix, 134; upper triangular matrix; Vandermonde determinant, 105; variance, 257; variational principle, 164; vector space; vector space over a field; vectors, 1; wave function, 252; wave–particle duality hypothesis, 255
.A Concise Text on Advanced Linear Algebra This engaging textbook for advanced undergraduate students and beginning graduates covers the core subjects in linear algebra The author motivates... (Paperback) Algebras, Linear? ??Textbooks Algebras, Linear? ??Study and teaching (Higher) Algebras, Linear? ??Study and teaching (Graduate) I Title II Title: Advanced linear algebra QA184.2.Y36 2015 512

Date posted: 20/10/2021, 21:35


Table of contents

    1.2 Subspaces, span, and linear dependence

    1.3 Bases, dimensionality, and coordinates

    1.5 Constructions of vector spaces

    2.5 Linear mappings from a vector space into itself

    2.6 Norms of linear mappings

    3.2 Definition and properties of determinants

    3.3 Adjugate matrices and Cramer’s rule

    3.4 Characteristic polynomials and Cayley–Hamilton theorem

    4.1 Scalar products and basic properties

    4.3 Positive definite scalar products
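Among the topics listed in the contents above, the Cayley–Hamilton theorem (Section 3.4) is easy to check numerically: every square matrix satisfies its own characteristic polynomial, p(A) = 0. The following sketch is not from the book; it is an illustration using NumPy, with an arbitrarily chosen sample matrix.

```python
import numpy as np

# Sample 3x3 matrix (any square matrix works).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

# np.poly on a square matrix returns the coefficients of its
# characteristic polynomial, highest degree first.
coeffs = np.poly(A)

# Evaluate p(A) = A^n + c_1 A^(n-1) + ... + c_n I.
n = len(coeffs) - 1
p_of_A = sum(c * np.linalg.matrix_power(A, n - k)
             for k, c in enumerate(coeffs))

# Cayley–Hamilton: p(A) should be the zero matrix (up to rounding).
print(np.allclose(p_of_A, np.zeros_like(A)))  # → True
```

The same check works for matrices of any size; only floating-point rounding limits the accuracy, which is why `np.allclose` is used rather than an exact comparison.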
