ADVANCED LINEAR ALGEBRA

TEXTBOOKS in MATHEMATICS
Series Editor: Ken Rosen

PUBLISHED TITLES

ABSTRACT ALGEBRA: AN INQUIRY-BASED APPROACH, Jonathan K. Hodge, Steven Schlicker, and Ted Sundstrom
ABSTRACT ALGEBRA: AN INTERACTIVE APPROACH, William Paulsen
ADVANCED CALCULUS: THEORY AND PRACTICE, John Srdjan Petrovic
ADVANCED LINEAR ALGEBRA, Nicholas Loehr
COLLEGE GEOMETRY: A UNIFIED DEVELOPMENT, David C. Kay
COMPLEX VARIABLES: A PHYSICAL APPROACH WITH APPLICATIONS AND MATLAB®, Steven G. Krantz
ESSENTIALS OF TOPOLOGY WITH APPLICATIONS, Steven G. Krantz
INTRODUCTION TO ABSTRACT ALGEBRA, Jonathan D. H. Smith
INTRODUCTION TO MATHEMATICAL PROOFS: A TRANSITION, Charles E. Roberts, Jr.
INTRODUCTION TO PROBABILITY WITH MATHEMATICA®, SECOND EDITION, Kevin J. Hastings
LINEAR ALGEBRA: A FIRST COURSE WITH APPLICATIONS, Larry E. Knop
LINEAR AND NONLINEAR PROGRAMMING WITH MAPLE™: AN INTERACTIVE, APPLICATIONS-BASED APPROACH, Paul E. Fishback
MATHEMATICAL AND EXPERIMENTAL MODELING OF PHYSICAL AND BIOLOGICAL PROCESSES, H. T. Banks and H. T. Tran
ORDINARY DIFFERENTIAL EQUATIONS: APPLICATIONS, MODELS, AND COMPUTING, Charles E. Roberts, Jr.
REAL ANALYSIS AND FOUNDATIONS, THIRD EDITION, Steven G. Krantz

TEXTBOOKS in MATHEMATICS

ADVANCED LINEAR ALGEBRA

NICHOLAS LOEHR
Virginia Polytechnic Institute and State University
Blacksburg, USA

CRC Press, Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2014 by Taylor & Francis Group, LLC. CRC Press is an imprint of Taylor & Francis Group, an Informa business. No claim to original U.S. Government works. Version Date: 20140306. International Standard Book Number-13: 978-1-4665-5902-8 (eBook - PDF).

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

Dedication

This book is dedicated to Nanette and Olivia and Heather and Linda and Zoe.

Contents

Preface
I  Background on Algebraic Structures

1  Overview of Algebraic Systems
   1.1  Groups
   1.2  Rings and Fields
   1.3  Vector Spaces
   1.4  Subsystems
   1.5  Product Systems
   1.6  Quotient Systems
   1.7  Homomorphisms
   1.8  Spanning, Linear Independence, Basis, and Dimension
   1.9  Summary
   1.10 Exercises

2  Permutations
   2.1  Symmetric Groups
   2.2  Representing Functions as Directed Graphs
   2.3  Cycle Decompositions of Permutations
   2.4  Composition of Cycles
   2.5  Factorizations of Permutations
   2.6  Inversions and Sorting
   2.7  Signs of Permutations
   2.8  Summary
   2.9  Exercises

3  Polynomials
   3.1  Intuitive Definition of Polynomials
   3.2  Algebraic Operations on Polynomials
   3.3  Formal Power Series and Polynomials
   3.4  Properties of Degree
   3.5  Evaluating Polynomials
   3.6  Polynomial Division with Remainder
   3.7  Divisibility and Associates
   3.8  Greatest Common Divisors of Polynomials
   3.9  GCDs of Lists of Polynomials
   3.10 Matrix Reduction Algorithm for GCDs
   3.11 Roots of Polynomials
   3.12 Irreducible Polynomials
   3.13 Factorization of Polynomials into Irreducibles
   3.14 Prime Factorizations and Divisibility
   3.15 Irreducible Polynomials in Q[x]
   3.16 Irreducibility in Q[x] via Reduction Mod p
   3.17 Eisenstein's Irreducibility Criterion for Q[x]
   3.18 Kronecker's Algorithm for Factoring in Q[x]
   3.19 Algebraic Elements and Minimal Polynomials
   3.20 Multivariable Polynomials
   3.21 Summary
   3.22 Exercises

II  Matrices

4  Basic Matrix Operations
   4.1  Formal Definition of Matrices and Vectors
   4.2  Vector Spaces of Functions
   4.3  Matrix Operations via Entries
   4.4  Properties of Matrix Multiplication
   4.5  Generalized Associativity
   4.6  Invertible Matrices
   4.7  Matrix Operations via Columns
   4.8  Matrix Operations via Rows
   4.9  Elementary Operations and Elementary Matrices
   4.10 Elementary Matrices and Gaussian Elimination
   4.11 Elementary Matrices and Invertibility
   4.12 Row Rank and Column Rank
   4.13 Conditions for Invertibility of a Matrix
   4.14 Summary
   4.15 Exercises

5  Determinants via Calculations
   5.1  Matrices with Entries in a Ring
   5.2  Explicit Definition of the Determinant
   5.3  Diagonal and Triangular Matrices
   5.4  Changing Variables
   5.5  Transposes and Determinants
   5.6  Multilinearity and the Alternating Property
   5.7  Elementary Row Operations and Determinants
   5.8  Determinant Properties Involving Columns
   5.9  Product Formula via Elementary Matrices
   5.10 Laplace Expansions
   5.11 Classical Adjoints and Inverses
   5.12 Cramer's Rule
   5.13 Product Formula for Determinants
   5.14 Cauchy–Binet Formula
   5.15 Cayley–Hamilton Theorem
   5.16 Permanents
   5.17 Summary
   5.18 Exercises

Appendix: Basic Definitions

This appendix records some general mathematical definitions and notations that occur throughout the text. The word iff is defined to mean "if and only if."

Sets

We first review some definitions from set theory that will be used constantly. All capital letters appearing below denote sets.

• Set Membership: x ∈ S means x is a member of the set S.
• Set Non-membership: x ∉ S means x is not a member of the set S.
• Subsets: A ⊆ B means for all x, if x ∈ A then x ∈ B.
• Binary Union: For all x, x ∈ A ∪ B iff x ∈ A or x ∈ B.
• Binary Intersection: For all x, x ∈ A ∩ B iff x ∈ A and x ∈ B.
• Set Difference: For all x, x ∈ A ∼ B iff x ∈ A and x ∉ B.
• Empty Set: For all x, x ∉ ∅.
• Indexed Unions: For all x, x ∈ ⋃_{i∈I} Ai iff there exists i ∈ I with x ∈ Ai.
• Indexed Intersections: For all x, x ∈ ⋂_{i∈I} Ai iff for all i ∈ I, x ∈ Ai.
• Cartesian Products: A × B is the set of all ordered pairs (a, b) with a ∈ A and b ∈ B. More generally, A1 × ··· × An is the set of all ordered n-tuples (a1, ..., an) with ai ∈ Ai for 1 ≤ i ≤ n.
• Number Systems: We write N = {0, 1, 2, 3, ...} for the set of natural numbers, N+ = {1, 2, 3, ...} for the set of positive integers, Z for the set of integers, Q for the set of rational numbers, R for the set of real numbers, and C for the set of complex numbers. Q+ denotes the set of positive rational numbers; R+ denotes the set of positive real numbers. For all n ∈ N+, we write [n] to denote the finite set {1, 2, ..., n}.
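For readers who like to experiment, the set operations just listed can be tried directly on small finite sets. The following Python sketch is an illustration added here (the particular sets A, B and the indexed family are made-up examples, not taken from the text).

```python
from itertools import product

# Small finite sets to illustrate the operations defined above.
A = {1, 2, 3}
B = {3, 4}

print(2 in A)              # set membership: 2 ∈ A -> True
print(5 not in A)          # non-membership: 5 ∉ A -> True
print(A <= {1, 2, 3, 4})   # subset test: A ⊆ {1, 2, 3, 4} -> True
print(A | B)               # binary union A ∪ B -> {1, 2, 3, 4}
print(A & B)               # binary intersection A ∩ B -> {3}
print(A - B)               # set difference A ∼ B -> {1, 2}

# Indexed unions and intersections over a family {A_i : i ∈ I}, here I = {1, 2, 3}.
family = {1: {1, 2}, 2: {2, 3}, 3: {2, 4}}
print(set().union(*family.values()))          # ⋃_{i∈I} A_i -> {1, 2, 3, 4}
print(set.intersection(*family.values()))     # ⋂_{i∈I} A_i -> {2}

# Cartesian product A × B as a set of ordered pairs (a, b).
AxB = set(product(A, B))
print(len(AxB))            # |A × B| = |A| * |B| = 6
```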
Functions

Formally, a function is an ordered triple f = (X, Y, G), where X is a set called the domain of f, Y is a set called the codomain of f, and G ⊆ X × Y is a set called the graph of f, which is required to satisfy this condition: for all x ∈ X, there exists a unique y ∈ Y with (x, y) ∈ G. For all x ∈ X, we write y = f(x) iff (x, y) ∈ G. The notation f : X → Y means that f is a function with domain X and codomain Y.

One often introduces a new function by a phrase such as: "Let f : X → Y be given by f(x) = · · ·," where · · · is some formula involving x. One must check that for each fixed x ∈ X, this formula always does produce exactly one output, and that this output lies in the claimed codomain Y.

By our definition, two functions f and g are equal iff they have the same domain and the same codomain and the same graph. To check equality of the graphs, one must check that f(x) = g(x) for all x in the common domain of f and g.

Given functions f : X → Y and g : Y → Z, the composite function g ◦ f is the function with domain X, codomain Z, and graph {(x, g(f(x))) : x ∈ X}. Thus, g ◦ f : X → Z satisfies (g ◦ f)(x) = g(f(x)) for all x ∈ X.

Let f : X → Y be any function. We say f is one-to-one (or injective, or an injection) iff for all x1, x2 ∈ X, if f(x1) = f(x2) then x1 = x2. We say f is onto (or surjective, or a surjection) iff for each y ∈ Y, there exists x ∈ X with y = f(x). We say f is bijective (or a bijection) iff f is one-to-one and onto iff for each y ∈ Y, there exists a unique x ∈ X with y = f(x). The composition of two injections is an injection; the composition of two surjections is a surjection; and the composition of two bijections is a bijection.

The identity function on any set X is the function idX : X → X given by idX(x) = x for all x ∈ X. Given f : X → Y, we say that a function g : Y → X is the inverse of f iff f ◦ g = idY and g ◦ f = idX, in which case we write g = f⁻¹. One can show that the inverse of f is unique when it exists; and f⁻¹ exists iff f is a bijection, in which case f⁻¹ is also a bijection and (f⁻¹)⁻¹ = f.

Suppose f : X → Y is a function and Z ⊆ X. We obtain a new function g : Z → Y with domain Z by setting g(z) = f(z) for all z ∈ Z. We call g the restriction of f to Z, denoted g = f|Z.

Suppose f : X → Y is any function. For all A ⊆ X, the direct image of A under f is the set f[A] = {f(a) : a ∈ A} ⊆ Y. For all B ⊆ Y, the inverse image of B under f is the set f⁻¹[B] = {x ∈ X : f(x) ∈ B} ⊆ X. This notation is not meant to suggest that the inverse function f⁻¹ must exist. But, when f⁻¹ does exist, the inverse image of B under f coincides with the direct image of B under f⁻¹, so the notation f⁻¹[B] is not ambiguous. The image of f is the set f[X]; f is a surjection iff f[X] = Y. We use square brackets for direct and inverse images to prevent ambiguity. More precisely, if A is both a member of X and a subset of X, then f(A) is the value of f at the point A in its domain, whereas f[A] is the direct image under f of the subset A of the domain.
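These definitions can be checked mechanically for functions with finite domains. The sketch below is an added illustration (the dict representation and the helper names is_injective, direct_image, and so on are my own choices, not notation from the text); it encodes f : X → Y as a Python dict and tests the properties just defined.

```python
# A function f : X -> Y with finite domain, stored as a dict {x: f(x)}.
X = {1, 2, 3}
Y = {'a', 'b', 'c', 'd'}
f = {1: 'a', 2: 'b', 3: 'c'}

def is_injective(f):
    # one-to-one: distinct inputs give distinct outputs
    return len(set(f.values())) == len(f)

def is_surjective(f, Y):
    # onto: every element of the codomain is hit
    return set(f.values()) == set(Y)

def compose(g, f):
    # (g ∘ f)(x) = g(f(x)); the domain of g ∘ f is the domain of f
    return {x: g[f[x]] for x in f}

def direct_image(f, A):
    # f[A] = {f(a) : a ∈ A}
    return {f[a] for a in A}

def inverse_image(f, B):
    # f⁻¹[B] = {x ∈ X : f(x) ∈ B}
    return {x for x in f if f[x] in B}

g = {'a': 10, 'b': 20, 'c': 30, 'd': 40}   # g : Y -> Z, with Z a set of numbers

print(is_injective(f))               # True
print(is_surjective(f, Y))           # False ('d' is never hit)
print(compose(g, f))                 # {1: 10, 2: 20, 3: 30}
print(direct_image(f, {1, 2}))       # {'a', 'b'}
print(inverse_image(f, {'a', 'd'}))  # {1}
```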
Relations

A relation from X to Y is a subset R of X × Y. For x ∈ X and y ∈ Y, xRy means (x, y) ∈ R. A relation on a set X is a relation R from X to X. R is called reflexive on X iff for all x ∈ X, xRx. R is called symmetric iff for all x, y, xRy implies yRx. R is called antisymmetric iff for all x, y, xRy and yRx implies x = y. R is called transitive iff for all x, y, z, xRy and yRz implies xRz. R is called an equivalence relation on X iff R is reflexive on X, symmetric, and transitive.

Suppose R is an equivalence relation on a set X. For x ∈ X, the equivalence class of x relative to R is the set [x]R = {y ∈ X : xRy}. A given equivalence class typically has many names; more precisely, for all x, z ∈ X, [x]R = [z]R iff xRz. The quotient set X modulo R is the set of all equivalence classes of R, namely X/R = {[x]R : x ∈ X}.

A set partition of a given set X is a collection P of nonempty subsets of X such that for all x ∈ X, there exists a unique S ∈ P with x ∈ S. For every equivalence relation R on a fixed set X, the quotient set X/R is a set partition of X consisting of the equivalence classes [x]R for x ∈ X. Conversely, given any set partition P of X, the relation R defined by "xRy iff there exists S ∈ P with x ∈ S and y ∈ S" is an equivalence relation on X with X/R = P. Formally, letting EQX be the set of all equivalence relations on X and letting SPX be the set of all set partitions of X, the map f : EQX → SPX given by f(R) = X/R for R ∈ EQX is a bijection.
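The correspondence between equivalence relations and set partitions is easy to see on a small finite example. The following sketch is an added illustration (the choice of X and of the relation "x R y iff x ≡ y (mod 3)" is mine); it computes the quotient set X/R and checks that it is a set partition of X.

```python
# Equivalence classes and the quotient set for a finite set X,
# using the equivalence relation "x R y iff x ≡ y (mod 3)".
X = range(10)

def related(x, y):
    return x % 3 == y % 3

def equivalence_class(x):
    # [x]_R = {y ∈ X : x R y}; frozenset so classes can live inside a set
    return frozenset(y for y in X if related(x, y))

# The quotient set X/R = {[x]_R : x ∈ X}; repeated names of a class collapse.
quotient = {equivalence_class(x) for x in X}
print(quotient)
# {frozenset({0, 3, 6, 9}), frozenset({1, 4, 7}), frozenset({2, 5, 8})}

# The classes form a set partition of X: nonempty and covering X exactly once.
assert all(cls for cls in quotient)
assert set().union(*quotient) == set(X)
assert sum(len(cls) for cls in quotient) == len(set(X))
```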
Partially Ordered Sets

A partial ordering on a set X is a relation ≤ on X that is reflexive on X, antisymmetric, and transitive. A partially ordered set or poset is a pair (X, ≤) where ≤ is a partial ordering on X. A poset (X, ≤) is totally ordered iff for all x, y ∈ X, x ≤ y or y ≤ x. More generally, a subset Y of a poset (X, ≤) is called a chain iff for all x, y ∈ Y, x ≤ y or y ≤ x. By definition, y ≥ x means x ≤ y; x < y means x ≤ y and x ≠ y; and x > y means x ≥ y and x ≠ y.

Let S be a subset of a poset (X, ≤). An upper bound for S is an element x ∈ X such that for all y ∈ S, y ≤ x. A greatest element of S is an element x ∈ S such that for all y ∈ S, y ≤ x; x is unique if it exists. A lower bound for S is an element x ∈ X such that for all y ∈ S, x ≤ y. A least element of S is an element x ∈ S such that for all y ∈ S, x ≤ y; x is unique if it exists.

We say x ∈ X is a least upper bound for S iff x is the least element of the set of upper bounds of S in X. In detail, this means y ≤ x for all y ∈ S; and for any z ∈ X such that y ≤ z for all y ∈ S, x ≤ z. The least upper bound of S is unique if it exists; we write x = sup S in this case. For S = {y1, ..., yn}, the notation y1 ∨ y2 ∨ ··· ∨ yn is also used to denote sup S.

We say x ∈ X is a greatest lower bound for S iff x is the greatest element of the set of lower bounds of S in X. In detail, this means x ≤ y for all y ∈ S; and for any z ∈ X such that z ≤ y for all y ∈ S, z ≤ x. The greatest lower bound of S is unique if it exists; we write x = inf S in this case. For S = {y1, ..., yn}, the notation y1 ∧ y2 ∧ ··· ∧ yn is also used to denote inf S.

A lattice is a poset (X, ≤) such that for all a, b ∈ X, the least upper bound a ∨ b and the greatest lower bound a ∧ b exist in X. A complete lattice is a poset (X, ≤) such that for every nonempty subset S of X, sup S and inf S exist in X.

A maximal element in a poset (X, ≤) is an element x ∈ X such that for all y ∈ X, if x ≤ y then y = x. A minimal element of X is an element x ∈ X such that for all y ∈ X, if y ≤ x then y = x. Zorn's lemma states that if (X, ≤) is a poset in which every chain Y ⊆ X has an upper bound in X, then X has a maximal element. Zorn's lemma is discussed in detail in §16.6.
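To make the definitions of sup and inf concrete, here is an added illustration (the poset, the divisors of 36 ordered by divisibility, is my own example); sup and inf are computed by brute force straight from the definitions, and in this lattice they agree with lcm and gcd.

```python
# The divisors of 36 ordered by divisibility form a lattice:
# a ∨ b = lcm(a, b) and a ∧ b = gcd(a, b).
X = [d for d in range(1, 37) if 36 % d == 0]   # [1, 2, 3, 4, 6, 9, 12, 18, 36]

def leq(a, b):
    return b % a == 0          # a ≤ b iff a divides b

def sup(S):
    # least element of the set of upper bounds of S, or None if it fails to exist
    ubs = [x for x in X if all(leq(y, x) for y in S)]
    least = [x for x in ubs if all(leq(x, z) for z in ubs)]
    return least[0] if least else None

def inf(S):
    # greatest element of the set of lower bounds of S, or None if it fails to exist
    lbs = [x for x in X if all(leq(x, y) for y in S)]
    greatest = [x for x in lbs if all(leq(z, x) for z in lbs)]
    return greatest[0] if greatest else None

print(sup({4, 6}))   # 12 = lcm(4, 6)
print(inf({4, 6}))   # 2  = gcd(4, 6)
print(sup(X))        # 36, the greatest element of the whole poset
print(inf(X))        # 1,  the least element of the whole poset
```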
Further Reading

Chapter 1. There are many introductory accounts of modern algebra, including the texts by Durbin [14], Fraleigh [15], Gallian [16], and Rotman [48]. For more advanced treatments of modern algebra, one may consult the textbooks by Dummit and Foote [13], Hungerford [29], Jacobson [30], and Rotman [47]. Introductions to linear algebra at various levels abound; among many others, we mention the books by Larson and Falvo [33], Lay [34], and Strang [56]. Two more advanced linear algebra books that are similar, in some respects, to the present volume are the texts by Halmos [24] and Hoffman and Kunze [27].

Chapter 2. For basic facts on permutations, one may consult any of the abstract algebra texts mentioned above. There is a vast literature on permutations and the symmetric group; we direct the reader to the texts by Bóna [6], Rotman [51], and Sagan [54] for more information.

Chapter 3. Thorough algebraic treatments of polynomials may be found in most texts on abstract algebra, such as those by Dummit and Foote [13] or Hungerford [29]. For more details on formal power series, one may consult [36, Chpt. 7]. The matrix reduction algorithm in §3.10 for computing gcds of polynomials (or integers) comes from an article by W. Blankinship [5]. Cox, Little, and O'Shea have written an excellent book [11] on multivariable polynomials and their role in computational algebraic geometry.

Chapter 4. Three classic texts on matrix theory are the books by Gantmacher [18], Horn and Johnson [28], and Lancaster [32].

Chapter 5. There is a vast mathematical literature on the subject of determinants. Lacking the space to give tribute to all of these, we only mention the text by Turnbull [59], the book of Aitken [2], and the treatise of Muir [41]. Muir has also written an extensive four-volume work chronicling the historical development of the theory of determinants [40].

Chapter 6. This chapter developed a "dictionary" linking abstract concepts defined for vector spaces and linear maps to concrete concepts defined for column vectors and matrices. Many texts on matrix theory, such as Horn and Johnson [28], heavily favor matrix-based descriptions and proofs. Other texts, most notably Bourbaki [7, Chpt. II], prefer a very abstract development that makes almost no mention of matrices. We think it is advisable to gain facility with both languages for discussing linear algebra. For alternative developments of this material, see Halmos [24] and Hoffman and Kunze [27].

Chapter 7. The analogy between complex numbers and complex matrices, including the theorems on the polar decomposition of a matrix, is based on the exposition in Halmos [24]. A wealth of additional material on properties of Hermitian, unitary, positive definite, and normal matrices may be found in Horn and Johnson's text [28].

Chapter 8. One can derive the Jordan canonical form theorem in many different ways. In abstract algebra, one often deduces this theorem from the rational canonical form theorem [27], which in turn is derivable from the classification of finitely generated modules over principal ideal domains. See Chapter 18 or [30] for this approach. Matrix theorists might prefer a more algorithmic construction that triangularizes a complex matrix and then gradually reduces it to Jordan form [12, 28]. Various elementary derivations can be found in [8, 17, 19, 23, 60].

Chapter 9. Further information on QR factorizations can be found in Golub and Van Loan [20], part II of Trefethen and Bau [58], and §5.3 of Kincaid and Cheney [31]. For LU factorizations, see [20, Chpt. 3] or [28, Sec. 3.5]. These references also contain a wealth of information on the numerical stability properties of matrix factorizations and the associated algorithms.

Chapter 10. Our treatment of iterative algorithms for solving linear systems and computing eigenvalues is similar to that found in §4.6 and §5.1 of Kincaid and Cheney [31]. For more information on this topic, one may consult the numerical analysis texts authored by Ackleh, Allen, Kearfott, and Seshaiyer [1, §3.4, Chpt. 5], Cheney and Kincaid [9, §8.2, §8.4], Trefethen and Bau [58, Chpt. VI], and Golub and Van Loan [20].

Chapter 11. For a very detailed treatment of convex sets and convex functions, the reader may consult Rockafellar's text [46]. A wealth of material on convex polytopes can be found in Grünbaum's encyclopedic work [21]. The presentation in §11.17 through §11.21 is similar to [63, Lecture 1].

Chapter 12. Our exposition of ruler and compass constructions is similar to the accounts found in [30, Vol. 1, Chpt. 4] and [49, App. C]. Treatments of Galois theory may be found in these two texts, as well as in the books by Cox [10], Dummit and Foote [13], and Hungerford [29]. See Tignol's book [57] for a very nice historical account of the development of Galois theory. Another good reference for geometric constructions and other problems in field theory is Hadlock [22].

Chapter 13. Another treatment of dual spaces and their relation to complex inner product spaces appears in Halmos [24]. A good discussion of dual spaces in the context of Banach spaces is given in Simmons [55, Chpt. 9]. The book of Cox, Little, and O'Shea [11] contains an excellent exposition of the ideal-variety correspondence and other aspects of affine algebraic geometry.

Chapter 14. Two other introductions to Hilbert spaces at a level similar to ours can be found in Simmons [55, Chpt. 10] and Rudin [53, Chpt. 4]. Halmos' book [25] contains an abundance of problems on Hilbert spaces. For more on metric spaces, see Simmons [55] or Munkres [42].

Chapter 15. A nice treatment of commutative groups, finitely generated or not, appears in Chapter 10 of Rotman's group theory text [51]. Another exposition of the reduction algorithm for integer matrices and its connection to classifying finitely generated commutative groups is given by Munkres [43, §11].

Chapter 16. Our treatment of independence structures is based on [30, Vol. 2]. A very nice introduction to matroids is [62, Sec. 8.2]. The books by Oxley [45] and Welsh [61] contain detailed accounts of matroid theory.

Chapter 17. Four excellent accounts of module theory appear in Anderson and Fuller's text [3], Atiyah and Macdonald's book [4], Jacobson's Basic Algebra I and II [30] (especially the third chapter in each volume), and Rotman's homological algebra book [52]. Bourbaki [7, Chpt. II] provides a very thorough and general, but rather difficult, treatment of modules.

Chapter 18. The classification of finitely generated modules over principal ideal domains is a standard topic covered in advanced abstract algebra texts such as [13, 30]. We hope that our coverage may be more quickly accessible to readers with a little less background in group theory and ring theory.
There are two approaches to proving the rational canonical form for square matrices over a field. The approach adopted here deduces this result from the general theory for PIDs. The other approach avoids the abstraction of PIDs by proving all necessary results at the level of finite-dimensional vector spaces, T-invariant subspaces, and T-cyclic subspaces. See [27] for such a treatment. The author's opinion is that proving the special case of the classification theorem for torsion F[x]-modules is not much simpler than proving the full theorem for all finitely generated modules over all PIDs. In fact, because of all the extra structure of the ring F[x], focusing on this special case might even give the reader less intuition for what the proof is doing. To help the reader build intuition, we chose to cover the much more concrete case of Z-modules in an earlier chapter.

Chapter 19. Two sources that give due emphasis to the central role of universal mapping properties in abstract algebra are Jacobson's two-volume algebra text [30] and Rotman's homological algebra book [52]. The appropriate general context for understanding UMPs is category theory, the basic elements of which are covered in the two references just cited. A more comprehensive introduction to category theory is given in Mac Lane's book [38].

Chapter 20. A nice introduction to multilinear algebra is the text by Northcott [44]. A very thorough account of the subject, including detailed discussions of tensor algebras, exterior algebras, and symmetric algebras, appears in [7, Chpt. III].

Bibliography

[1] Azmy Ackleh, Edward J. Allen, Ralph Kearfott, and Padmanabhan Seshaiyer, Classical and Modern Numerical Analysis: Theory, Methods, and Practice, Chapman and Hall/CRC Press, Boca Raton, FL (2010).
[2] A. C. Aitken, Determinants and Matrices (eighth ed.), Oliver and Boyd Ltd., Edinburgh (1954).
[3] Frank W. Anderson and Kent R. Fuller, Rings and Categories of Modules (Graduate Texts in Mathematics Vol. 13, second ed.), Springer-Verlag, New York (1992).
[4] M. F. Atiyah and I. G. Macdonald, Introduction to Commutative Algebra, Addison-Wesley, Reading, MA (1969).
[5] W. A. Blankinship, "A new version of the Euclidean algorithm," Amer. Math. Monthly 70 #7 (1963), 742–745.
[6] Miklós Bóna, Combinatorics of Permutations, Chapman and Hall/CRC, Boca Raton, FL (2004).
[7] Nicolas Bourbaki, Algebra 1, Springer-Verlag, New York (1989).
[8] R. Brualdi, "The Jordan canonical form: an old proof," Amer. Math. Monthly 94 #3 (1987), 257–267.
[9] E. Ward Cheney and David R. Kincaid, Numerical Mathematics and Computing (sixth ed.), Brooks/Cole, Pacific Grove, CA (2007).
[10] David A. Cox, Galois Theory (second ed.), John Wiley and Sons, New York (2012).
[11] David A. Cox, John Little, and Donal O'Shea, Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra (third ed.), Springer-Verlag, New York (2010).
[12] R. Fletcher and D. Sorenson, "An algorithmic derivation of the Jordan canonical form," Amer. Math. Monthly 90 #1 (1983), 12–16.
[13] David S. Dummit and Richard M. Foote, Abstract Algebra (third ed.), John Wiley and Sons, New York (2003).
[14] John R. Durbin, Modern Algebra: An Introduction (sixth ed.), John Wiley and Sons, New York (2008).
[15] John B. Fraleigh, A First Course in Abstract Algebra (seventh ed.), Addison Wesley, Reading (2002).
[16] Joseph A. Gallian, Contemporary Abstract Algebra (fifth ed.), Houghton Mifflin, Boston (2001).
[17] A. Galperin and Z. Waksman, "An elementary approach to Jordan theory," Amer. Math. Monthly 87 #9 (1980), 728–732.
[18] F. R. Gantmacher, The Theory of Matrices (two volumes), Chelsea Publishing Co., New York (1960).
[19] I. Gohberg and S. Goldberg, "A simple proof of the Jordan decomposition theorem for matrices," Amer. Math. Monthly 103 #2 (1996), 157–159.
[20] Gene Golub and Charles Van Loan, Matrix Computations (third ed.), The Johns Hopkins University Press, Baltimore (1996).
[21] Branko Grünbaum, Convex Polytopes (Graduate Texts in Mathematics Vol. 221, second ed.), Springer-Verlag, New York (2003).
[22] Charles R. Hadlock, Field Theory and Its Classical Problems, Carus Mathematical Monograph no. 19, Mathematical Association of America, Washington, D.C. (1978).
[23] J. Hall, "Another elementary approach to the Jordan form," Amer. Math. Monthly 98 #4 (1991), 336–340.
[24] Paul R. Halmos, Finite-Dimensional Vector Spaces, Springer-Verlag, New York (1974).
[25] Paul R. Halmos, A Hilbert Space Problem Book (Graduate Texts in Mathematics Vol. 19, second ed.), Springer-Verlag, New York (1982).
[26] Paul R. Halmos, Naive Set Theory, Springer-Verlag, New York (1998).
[27] Kenneth Hoffman and Ray Kunze, Linear Algebra (second ed.), Prentice Hall, Upper Saddle River, NJ (1971).
[28] Roger Horn and Charles Johnson, Matrix Analysis (second ed.), Cambridge University Press, Cambridge (2012).
[29] Thomas W. Hungerford, Algebra (Graduate Texts in Mathematics Vol. 73), Springer-Verlag, New York (1980).
[30] Nathan Jacobson, Basic Algebra I and II (second ed.), Dover Publications, Mineola, NY (2009).
[31] David Kincaid and Ward Cheney, Numerical Analysis: Mathematics of Scientific Computing (second ed.), Brooks/Cole, Pacific Grove, CA (1996).
[32] Peter Lancaster, Theory of Matrices, Academic Press, New York (1969).
[33] Ron Larson and David Falvo, Elementary Linear Algebra (sixth ed.), Brooks Cole, Belmont, CA (2009).
[34] David C. Lay, Linear Algebra and Its Applications (fourth ed.), Addison Wesley, Reading, MA (2011).
[35] Hans Liebeck, "A proof of the equality of column and row rank of a matrix," Amer. Math. Monthly 73 #10 (1966), 1114.
[36] Nicholas A. Loehr, Bijective Combinatorics, Chapman and Hall/CRC, Boca Raton, FL (2011).
[37] Nicholas A. Loehr, "A direct proof that row rank equals column rank," College Math. J. 38 #4 (2007), 300–301.
[38] Saunders Mac Lane, Categories for the Working Mathematician (Graduate Texts in Mathematics Vol. 5, second ed.), Springer-Verlag, New York (1998).
[39] J. Donald Monk, Introduction to Set Theory, McGraw-Hill, New York (1969).
[40] Thomas Muir, The Theory of Determinants in the Historical Order of Development (four volumes), Dover Publications, New York (1960).
[41] Thomas Muir, A Treatise on the Theory of Determinants, revised and enlarged by William Metzler, Dover Publications, New York (1960).
[42] James R. Munkres, Topology (second ed.), Prentice Hall, Upper Saddle River, NJ (2000).
[43] James R. Munkres, Elements of Algebraic Topology, Perseus Publishing, Cambridge, MA (1984).
[44] D. G. Northcott, Multilinear Algebra, Cambridge University Press, Cambridge (1984).
[45] James G. Oxley, Matroid Theory (second ed.), Oxford University Press, Oxford (2011).
[46] R. Tyrrell Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ (1972).
[47] Joseph J. Rotman, Advanced Modern Algebra (second ed.), American Mathematical Society, Providence, RI (2010).
[48] Joseph J. Rotman, A First Course in Abstract Algebra (third ed.), Prentice Hall, Upper Saddle River, NJ (2005).
[49] Joseph J. Rotman, Galois Theory (second ed.), Springer-Verlag, New York (1998).
[50] Joseph J. Rotman, An Introduction to Algebraic Topology (Graduate Texts in Mathematics, Vol. 119), Springer-Verlag, New York (1988).
[51] Joseph J. Rotman, An Introduction to the Theory of Groups (fourth ed.), Springer-Verlag, New York (1994).
[52] Joseph J. Rotman, Notes on Homological Algebra, Van Nostrand Reinhold, New York (1970).
[53] Walter Rudin, Real and Complex Analysis (third ed.), McGraw-Hill, Boston (1987).
[54] Bruce E. Sagan, The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric Functions (second ed.), Springer-Verlag, New York (2001).
[55] George F. Simmons, Introduction to Topology and Modern Analysis, Krieger Publishing Co., Malabar, FL (2003).
[56] Gilbert Strang, Introduction to Linear Algebra (fourth ed.), Wellesley Cambridge Press, Wellesley, MA (2009).
[57] Jean-Pierre Tignol, Galois' Theory of Algebraic Equations, World Scientific Publishing, Singapore (2001).
[58] Lloyd N. Trefethen and David Bau III, Numerical Linear Algebra, SIAM, Philadelphia (1997).
[59] Herbert W. Turnbull, The Theory of Determinants, Matrices, and Invariants (second ed.), Blackie and Son Ltd., London (1945).
[60] H. Valiaho, "An elementary approach to the Jordan form of a matrix," Amer. Math. Monthly 93 #9 (1986), 711–714.
[61] D. J. Welsh, Matroid Theory, Academic Press, New York (1976).
[62] Douglas B. West, Introduction to Graph Theory (second ed.), Prentice Hall, Upper Saddle River, NJ (2001).
[63] Günter M. Ziegler, Lectures on Polytopes (Graduate Texts in Mathematics, Vol. 152), Springer-Verlag, New York (1995).

... and Linear Algebra

Chapter 11: Affine Geometry and Convexity. Most introductions to linear algebra include an account of vector spaces, linear subspaces, the linear span of a subset of R^n, linear ... side to linear algebra involving abstract vector spaces, subspaces, linear independence, spanning sets, bases, dimension, and linear transformations. But there is much more to linear algebra than ...

1.3 Vector Spaces

Most introductions to linear algebra study real vector spaces, where the vectors can be multiplied by real numbers called scalars. For more advanced