Applied Linear Algebra and Matrix Analysis - Thomas S. Shores

APPLIED LINEAR ALGEBRA AND MATRIX ANALYSIS

Thomas S. Shores

Copyright May 2000. All rights reserved.

Contents

Preface

Chapter 1. LINEAR SYSTEMS OF EQUATIONS
  1.1 Some Examples
  1.2 Notations and a Review of Numbers
  1.3 Gaussian Elimination: Basic Ideas
  1.4 Gaussian Elimination: General Procedure
  1.5 *Computational Notes and Projects
  Review

Chapter 2. MATRIX ALGEBRA
  2.1 Matrix Addition and Scalar Multiplication
  2.2 Matrix Multiplication
  2.3 Applications of Matrix Arithmetic
  2.4 Special Matrices and Transposes
  2.5 Matrix Inverses
  2.6 Basic Properties of Determinants
  2.7 *Applications and Proofs for Determinants
  2.8 *Tensor Products
  2.9 *Computational Notes and Projects
  Review

Chapter 3. VECTOR SPACES
  3.1 Definitions and Basic Concepts
  3.2 Subspaces
  3.3 Linear Combinations
  3.4 Subspaces Associated with Matrices and Operators
  3.5 Bases and Dimension
  3.6 Linear Systems Revisited
  3.7 *Change of Basis and Linear Operators
  3.8 *Computational Notes and Projects
  Review

Chapter 4. GEOMETRICAL ASPECTS OF STANDARD SPACES
  4.1 Standard Norm and Inner Product
  4.2 Applications of Norms and Inner Products
  4.3 Unitary and Orthogonal Matrices
  4.4 *Computational Notes and Projects
  Review

Chapter 5. THE EIGENVALUE PROBLEM
  5.1 Definitions and Basic Properties
  5.2 Similarity and Diagonalization
  5.3 Applications to Discrete Dynamical Systems
  5.4 Orthogonal Diagonalization
  5.5 *Schur Form and Applications
  5.6 *The Singular Value Decomposition
  5.7 *Computational Notes and Projects
  Review

Chapter 6. GEOMETRICAL ASPECTS OF ABSTRACT SPACES
  6.1 Normed Linear Spaces
  6.2 Inner Product Spaces
  6.3 Gram-Schmidt Algorithm
  6.4 Linear Systems Revisited
  6.5 *Operator Norms
  6.6 *Computational Notes and Projects
  Review

Appendix A. Table of Symbols
Appendix B. Solutions to Selected Exercises
Bibliography
Index
Preface

This book is about matrix and linear algebra, and their applications. For many students the tools of matrix and linear algebra will be as fundamental in their professional work as the tools of calculus; thus it is important to ensure that students appreciate the utility and beauty of these subjects, as well as understand the mechanics. One way to do so is to show how concepts of matrix and linear algebra make concrete problems workable. To this end, applied mathematics and mathematical modeling ought to have an important role in an introductory treatment of linear algebra. One of the features of this book is that we weave significant motivating examples into the fabric of the text. Needless to say, I hope that instructors will not omit this material; that would be a missed opportunity for linear algebra!

The text has a strong orientation towards numerical computation and applied mathematics, which means that matrix analysis plays a central role. All three of the basic components of linear algebra – theory, computation and applications – receive their due. The proper balance of these components will give a diverse audience of physical science, social science, statistics, engineering and math students the tools they need as well as the motivation to acquire these tools. Another feature of this text is an emphasis on linear algebra as an experimental science; this emphasis is to be found in certain examples, computer exercises and projects. Contemporary mathematical software makes an ideal "lab" for mathematical experimentation. At the same time, this text is independent of specific hardware and software platforms. Applications and ideas should play center stage, not software.

This book is designed for an introductory course in matrix and linear algebra. It is assumed that the student has had some exposure to calculus. Here are some of its main goals:

- To provide a balanced blend of applications, theory and computation which emphasizes their interdependence.

- To assist those who wish to incorporate mathematical experimentation through computer technology into the class. Each chapter has an optional section on computational notes and projects, and computer exercises are sprinkled throughout. The student should use the locally available tools to carry out the experiments suggested in the projects and use the word processing capabilities of the computer system to create small reports on his or her results. In this way students gain experience in the use of the computer as a mathematical tool. One can also envision reports on a grander scale as mathematical "term papers." I have made such assignments in some of my own classes with delightful results. A few major report topics are included in the text.

- To help students think precisely and express their thoughts clearly. Requiring written reports is one vehicle for teaching good expression of mathematical ideas. The projects given in this text provide material for such reports.

- To encourage cooperative learning. Mathematics educators are becoming increasingly appreciative of this powerful mode of learning. Team projects and reports are excellent vehicles for cooperative learning.

- To promote individual learning by providing a complete and readable text. I hope that students will find the text worthy of being a permanent part of their reference library, particularly for the basic linear algebra needed for the applied mathematical sciences.

An outline of the book is as follows: Chapter 1 contains a thorough development of Gaussian elimination and an introduction to matrix notation. It would be nice to assume that the student is familiar with complex numbers, but experience has shown that this material is frequently long forgotten by many. Complex numbers and the basic language of sets are therefore reviewed early on in Chapter 1. (The advanced part of the complex number discussion could be deferred until it is needed in Chapter 4.)
In Chapter 2, basic properties of matrix and determinant algebra are developed. Special types of matrices, such as elementary and symmetric, are also introduced. About determinants: some instructors prefer not to spend too much time on them, so I have divided the treatment into two sections, one of which is marked as optional and not used in the rest of the text. Chapter 3 begins by introducing the student to the "standard" Euclidean vector spaces, both real and complex. These are the wellsprings for the more sophisticated ideas of linear algebra. At this point the student is introduced to the general ideas of abstract vector space, subspace and basis, but primarily in the context of the standard spaces. Chapter 4 introduces geometrical aspects of standard vector spaces such as norm, dot product and angle. Chapter 5 provides an introduction to eigenvalues and eigenvectors. Subsequently, general norm and inner product concepts are examined in Chapter 6. Two appendices are devoted to a table of commonly used symbols and solutions to selected exercises.

Each chapter contains a few more "optional" topics, which are independent of the non-optional sections. I say this realizing full well that one instructor's optional is another's mandatory. Optional sections cover tensor products, linear operators, operator norms, the Schur triangularization theorem and the singular value decomposition. In addition, each chapter has an optional section of computational notes and projects. I have employed the convention of marking sections and subsections that I consider optional with an asterisk. Finally, at the end of each chapter is a selection of review exercises.

There is more than enough material in this book for a one-semester course. Tastes vary, so there is ample material in the text to accommodate different interests. One could increase emphasis on any one of the theoretical, applied or computational aspects of linear algebra by the appropriate selection of syllabus topics. The text is well suited to a
course with a three-hour lecture and lab component, but the computer-related material is not mandatory. Every instructor has his or her own idea about how much time to spend on proofs, how much on examples, which sections to skip, etc.; so the amount of material covered will vary considerably. Instructors may mix and match any of the optional sections according to their own interests, since these sections are largely independent of each other. My own opinion is that the ending sections in each chapter on computational notes and projects are only partly optional. While it would be very time consuming to cover them all, every instructor ought to use some part of this material. The unstarred sections form the core of the book; most of this material should be covered. There are 27 unstarred sections and 12 optional sections. I hope the optional sections come in enough flavors to please any pure, applied or computational palate. Of course, no one shoe size fits all, so I will suggest two examples of how one might use this text for a three-hour, one-semester course. Such a course will typically meet three times a week for fifteen weeks, for a total of 45 classes. The material of most of the unstarred sections can be covered at a rate of about one and one half class periods per section. Thus, the core material could be covered in about 40 class periods. This leaves time for extra sections and in-class exams. In a two-semester course, or a semester course of more than three hours, one could expect to cover most, if not all, of the text. If the instructor prefers a course that emphasizes the standard Euclidean spaces, and moves at a more leisurely pace, then the core material of the first five chapters of the text is sufficient. This approach reduces the number of unstarred sections to be covered from 27 to 23.

In addition to the usual complement of pencil and paper exercises (with selected solutions in Appendix B), this text includes a number of computer-related activities and
topics. I employ a taxonomy for these activities which is as follows. At the lowest level are computer exercises. Just as with pencil and paper exercises, this work is intended to develop basic skills. The difference is that some computing equipment (ranging from a programmable scientific calculator to a workstation) is required to complete such exercises. At the next level are computer projects. These assignments involve ideas that extend the standard text material, possibly some experimentation, and some written exposition in the form of brief project papers. These are analogous to lab projects in the physical sciences. Finally, at the top level are reports. These require a more detailed exposition of ideas, considerable experimentation – possibly open ended in scope – and a carefully written report document. Reports are comparable to "scientific term papers." They approximate the kind of activity that many students will be involved in throughout their professional lives. I have included some of my favorite examples of all three activities in this textbook. Exercises that require computing tools contain a statement to that effect. Perhaps the projects and reports I have included will be paradigms for instructors who wish to build their own project/report materials.

In my own classes I expect projects to be prepared with text processing software to which my students have access in a mathematics computer lab. Projects and reports are well suited for team efforts. Instructors should provide background materials to help the students through local system-dependent issues. For example, students in my own course are assigned a computer account in the mathematics lab and required to attend an orientation that contains specific information about the available linear algebra software. When I assign a project, I usually make available a Maple or Mathematica notebook that amounts to a brief background lecture on the subject of the project and contains some of the key commands students will need to carry
out the project. This helps students focus more on the mathematics of the project rather than on computer issues.

Most of the computational computer tools that would be helpful in this course fall into three categories and are available for many operating systems:

- Graphing calculators with built-in matrix algebra capabilities, such as the HP 28 and 48, or the TI 85 and 92. These use floating point arithmetic for system solving and matrix arithmetic. Some do eigenvalues.

- Computer algebra systems (CAS) such as Maple, Mathematica and Macsyma. These software products are fairly rich in linear algebra capabilities. They prefer symbolic calculations and exact arithmetic, but will do floating point calculations, though some coercion may be required.

- Matrix algebra systems (MAS) such as MATLAB or Octave. These software products are specifically designed to do matrix calculations in floating point arithmetic, though limited symbolic capabilities are available in the basic program. They have the most complete set of matrix commands of all the categories.

In a few cases I have included in this text some software-specific information for some projects, for the purpose of illustration. This is not to be construed as an endorsement or requirement of any particular software or computer. Projects may be carried out with different software tools and computer platforms. Each system has its own strengths. In various semesters I have obtained excellent results with all these platforms. Students are open to all sorts of technology in mathematics. This openness, together with the availability of inexpensive high technology tools, is changing how and what we teach in linear algebra.

I would like to thank my colleagues whose encouragement has helped me complete this project, particularly Jamie Radcliffe, Jim Lewis, Dale Mesner and John Bakula. Special thanks also go to Jackie Kohles for her excellent work on solutions to the exercises and to the students in my linear algebra courses for relentlessly
tracking down errors. I would also like to thank my wife, Muriel, for an outstanding job of proofreading and editing the text.

I'm in the process of developing a linear algebra home page of material such as project notebooks, supplementary exercises, etc., that will be useful for instructors and students of this course. This site can be reached through my home page at http://www.math.unl.edu/~tshores/. I welcome suggestions, corrections or comments on the site or book; both are ongoing projects. These may be sent to me at tshores@math.unl.edu.

CHAPTER 1

LINEAR SYSTEMS OF EQUATIONS

There are two central problems about which much of the theory of linear algebra revolves: the problem of finding all solutions to a linear system and that of finding an eigensystem for a square matrix. The latter problem will not be encountered until Chapter 5; it requires some background development and even the motivation for this problem is fairly sophisticated. By contrast, the former problem is easy to understand and motivate. As a matter of fact, simple cases of this problem are a part of the high school algebra background of most of us. This chapter is all about these systems. We will address the problem of determining when a linear system has a solution and how to solve such a system for all of its solutions. Examples of linear systems appear in nearly every scientific discipline; we touch on a few in this chapter.

1.1. Some Examples

Here are a few elementary examples of linear systems:

EXAMPLE 1.1.1. For what values of the unknowns x and y are the following equations satisfied?
    x + 2y = 5
    4x + y = 6

SOLUTION. The first way that we were taught to solve this problem was the geometrical approach: every equation of the form ax + by = c represents the graph of a straight line, and conversely, every line in the xy-plane is so described. Thus, each equation above represents a line. We need only graph each of the lines, then look for the point where they intersect, to find the unique solution to the system (see Figure 1.1.1). Of course, the two equations may represent the same line, in which case there are infinitely many solutions, or distinct parallel lines, in which case there are no solutions. These could be viewed as exceptional or "degenerate" cases. Normally we expect the solution to be unique, which it is in this example.

We also learned how to solve such a system algebraically: in the present case we may use either equation to solve for one variable, say x, and substitute the result into the other equation to obtain an equation which is easily solved for y. For example, the first equation above yields x = 5 - 2y, and substitution into the second yields 4(5 - 2y) + y = 6, i.e., 20 - 7y = 6, so that y = 2. Now substitute 2 for y in the first equation and obtain that x = 5 - 2(2) = 1. The solution to the system is (x, y) = (1, 2).

FIGURE 1.1.1. Graphical solution to Example 1.1.1: the lines x + 2y = 5 and 4x + y = 6 intersect at the point (1, 2).

EXAMPLE 1.1.2. For what values of the unknowns x, y and z are the following equations satisfied?

    x + y + z = 4
    2x + 2y + 5z = 11
    4x + 6y + 8z = 24

SOLUTION. The geometrical approach becomes somewhat impractical as a means of obtaining an explicit solution to our problem: graphing in three dimensions on a flat sheet of paper doesn't lead to very accurate answers!
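Before taking up the three-variable problem, note that the hand computation of Example 1.1.1 is easy to check by machine; any of the CAS or MAS tools discussed in the preface would do. Here is a minimal Python sketch of the same substitution steps (it assumes only the system x + 2y = 5, 4x + y = 6 as stated above):

```python
# Solve Example 1.1.1 by substitution:
#   x + 2y = 5
#   4x +  y = 6
# From the first equation, x = 5 - 2y.  Substituting into the
# second gives 4(5 - 2y) + y = 6, i.e., 20 - 7y = 6.
y = (20 - 6) / 7          # y = 2
x = 5 - 2 * y             # back-substitute: x = 1

# Verify that (x, y) satisfies both of the original equations.
assert x + 2 * y == 5
assert 4 * x + y == 6
print((x, y))             # (1.0, 2.0)
```

The same three lines of arithmetic work for any nondegenerate 2-by-2 system, which is exactly why this method is worth automating.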
Nonetheless, the geometrical point of view is useful, for it gives us an idea of what to expect without actually solving the system of equations. With reference to our system of three equations in three unknowns, the first fact to take note of is that each of the three equations is an instance of the general equation ax + by + cz = d. Now we know from analytical geometry that the graph of this equation is a plane in three dimensions, and conversely every such plane is the graph of some equation of the above form. In general, two planes will intersect in a line, though there are exceptional cases of the two planes being identical or distinct and parallel. Hence the solution set of the first two equations of our system is geometrically a plane, a line or the empty set. Similarly, a line and a plane will intersect in a point or, in the exceptional case that the line and plane are parallel, their intersection will be the line itself or the empty set. Hence, we know that the above system of three equations has a solution set that is either empty, a single point, a line or a plane.

Which outcome occurs with our system of equations?
We need the algebraic point of view to help us calculate the solution. The matter of dealing with three equations and three unknowns is a bit trickier than the problem of two equations and two unknowns. Just as with two equations and unknowns, the key idea is still to use one equation to solve for one unknown. Since we have used up one equation, what remains is two equations in the remaining unknowns. In this problem, subtract 2 times the first equation from the second and 4 times the first equation from the third to obtain the system

    3z = 3
    2y + 4z = 8
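These row operations, followed by back substitution, are mechanical enough to hand to a computer. A minimal NumPy sketch (assuming the system of Example 1.1.2 as stated: x + y + z = 4, 2x + 2y + 5z = 11, 4x + 6y + 8z = 24) that performs the same two eliminations and then back-solves:

```python
import numpy as np

# Coefficient matrix and right-hand side of Example 1.1.2:
#    x +  y +  z =  4
#   2x + 2y + 5z = 11
#   4x + 6y + 8z = 24
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 5.0],
              [4.0, 6.0, 8.0]])
b = np.array([4.0, 11.0, 24.0])

# The two elimination steps from the text: subtract 2 times row 1
# from row 2, and 4 times row 1 from row 3.
A[1] -= 2 * A[0]; b[1] -= 2 * b[0]   # row 2 becomes: 3z = 3
A[2] -= 4 * A[0]; b[2] -= 4 * b[0]   # row 3 becomes: 2y + 4z = 8

# Back substitution on the reduced system.
z = b[1] / A[1, 2]                   # 3z = 3        =>  z = 1
y = (b[2] - A[2, 2] * z) / A[2, 1]   # 2y + 4z = 8   =>  y = 2
x = b[0] - y - z                     # x + y + z = 4 =>  x = 1
print(x, y, z)                       # 1.0 2.0 1.0
```

Of course, numpy.linalg.solve(A, b) performs the entire calculation in one call; spelling out the row operations here simply mirrors the hand computation, which is the point of the example.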
APPENDIX B

SOLUTIONS TO SELECTED EXERCISES

Solutions to selected exercises from Chapters 2 through 6.
Cayley-Hamilton theorem, 231, 239
CBS inequality, 192, 270
Change of basis, 176
Change of basis matrix, 150
Change of variables, 152, 210
Characteristic
  equation, 215
  polynomial, 215
Coefficient matrix, 21
Cofactors, 99
Column space, 153
Companion matrix, 105, 254
Complex number, 11
  plane, 11
  argument, 14
  polar form, 14
Complex numbers, 11
Component, 196
Condition number, 297, 300
Conditions for matrix inverse, 91
Conductivity,
Conformable matrices, 56
Eigenpair, 213
Eigenspace, 216
Eigensystem, 216
  algorithm, 216
Eigenvalue, 213
  dominant, 239, 250
  repeated, 221
  simple, 221
Eigenvector, 213
Elementary
  inverse operations, 31
  matrices, 72
  row operations, 22
  transposes of matrix, 79
Equation
  linear,
  Sylvester, 114
Equivalent linear system, 31
Factorization
  full QR, 303
  LU, 118
  QR, 291, 292
Fibonacci numbers, 230
Field of scalars, 125
Finite dimensional, 161
Flat, 167
Flop, 41
Fourier analysis, 304
Fourier coefficients, 305
Fourier heat law,
Fredholm alternative, 92, 290
Free variable, 25
Frobenius norm, 266, 295
Full column rank, 35
Function
  continuous, 129, 132, 134
  linear, 62
Fundamental Theorem of Algebra, 12
Gauss-Jordan elimination, 23, 29
Gaussian elimination, 29
Gerschgorin circle theorem, 251
Gram-Schmidt algorithm, 276
Graph, 66, 67, 122
  adjacency matrix, 68
  dominance-directed, 66, 67
  loop, 179
  walk, 68
Hermitian matrix, 80
Hessenberg matrix, 182
Householder matrix, 206, 209, 244, 302
Idempotent matrix, 61, 84, 283
Identity matrix, 58
Image, 159
Imaginary part, 11
Induced norm, 270
Inner product, 78, 190
  abstract, 267
  space, 267
  standard, 189
  weighted, 267
Input-output
  matrix,
  model, 29
Input-output model, 6,
Integers, 10
Interpolation, 8, 113
Intersection, 142, 286
  set,
Inverse, 85, 107
Inverse iteration method, 253
Inverse power method, 253
Isomorphic vector spaces, 158
isomorphism, 158
Jordan block, 237
Jordan canonical form, 237, 255, 259
Kernel, 157
Kronecker delta, 106
Kronecker symbol, 58
Leading entry, 21
Least squares,
  197, 283
  solver, 293, 303
Least squares solution, 198
Legendre polynomial, 277
Leslie matrix, 257
Limit vector, 71, 155, 188, 191
Linear
  mapping, 132, 133
  operator, 132, 133
  transformation, 132, 133
Linear combination, 51, 138
  trivial, 143, 156
Linear dependence, 143
Linear function, 62
Linear independence, 143
Linear system
  coefficient matrix, 21
  equivalent, 31
  form of general solution, 167
  right hand side vector, 21
List, 143
Loop, 67, 179
LU factorization, 118
Markov chain, 64, 65
Matrix
  adjoint, 106
  block, 76
  change of basis, 150, 176
  cofactors, 106
  companion, 254
  complex Householder, 208
  condition number, 297
  defective, 221
  definition, 20
  diagonal, 75
  diagonalizable, 228
  difference, 50
  elementary, 72
  entry, 20
  equality, 49
  exponent, 60
  full column rank, 35
  Hermitian, 80
  Householder, 206, 209
  idempotent, 61, 283
  identity, 58
  inverse, 107
  invertible, 85
  leading entry, 21
  minors, 106
  multiplication, 56
  multiplication not commutative, 57
  negative, 50
  nilpotent, 61, 84, 314
  nonsingular, 85
  normal, 124, 245, 259
  of a linear operator, 175
  orthogonal, 204
  permutation, 120
  pivot, 24
  positive definite, 198
  positive semidefinite, 198
  projection, 209, 282
  reflection, 209
  scalar, 74
  scalar multiplication, 51
  similar, 226
  singular, 85
  size, 20
  skew-symmetric, 84, 141
  standard, 176
  strictly diagonally dominant, 258
  sum, 50
  super-augmented, 90
  symmetric, 80
  transition, 224
  triangular, 75
  tridiagonal, 76
  unitary, 204
  upper Hessenberg, 182
  Vandermonde, 105, 113
  vectorizing, 115
Matrix norm, 295
  infinity, 300
Matrix, strictly triangular, 75
Max, 35
Min, 35
Minors, 99
Multiplicity
  algebraic, 221
  geometric, 221
Multipliers, 119
Natural number, 10
Network, 8, 178
Newton method, 93
  formula, 94
Nilpotent matrix, 61, 84, 95, 160, 314
Non-negative definite matrix, 201
Nonsingular matrix, 85
Norm
  complex, 186
  Frobenius, 266, 295
  general, 261
  induced, 270
  infinity, 262
  matrix, 295
  operator, 296
  p-norm, 262
  standard, 185
Normal equations, 198
Normal matrix, 124, 245, 259
Normalization, 187, 192
Normed linear space, 261
Notation for elementary matrices, 22
Null space, 153
Nullity, 35
Number
  complex, 11
  integer, 10
  natural, 10
  rational, 10
  real, 10
One-to-one, 157
Operator, 132
  additive, 133
  domain, 158
  image, 159
  kernel, 157, 158
  linear, 133
  one-to-one, 157
  outative, 133
  range, 158
  target, 158
Orthogonal
  complement, 286
  complements theorem, 289
  coordinates theorem, 202, 273
  matrix, 204
  projection formula, 282
  set, 202, 273
  vectors, 194, 271
Orthogonal coordinates theorem, 202
Orthonormal set, 202, 273
Outer product, 78
Parallel vectors, 195
Parallelogram law, 272
Partial pivoting, 40
Perturbation theorem, 298
Pivot, 24
  strategy, 40
Polar form, 14
Polarization identity, 276
Polynomial, 13
  characteristic, 215
  companion matrix, 105
  Legendre, 277
  monic, 215
Positive definite matrix, 198, 201, 223, 244, 275
Positive semidefinite matrix, 198
Power
  matrix, 60
  vertex, 67
Power method, 252
Principal axes theorem, 241, 245
Probability distribution vector, 65
Product
  inner, 78
  outer, 78
Projection, 196, 281
  column space formula, 284
  formula, 195, 273
  formula for subspaces, 281
  matrix, 282
  problem, 280
  theorem, 281
Projection formula, 195, 273
Projection matrix, 209
Pythagorean theorem, 200, 271, 273, 276
QR algorithm, 294
QR factorization, 291, 292
  full, 303
Quadratic form, 80, 84, 123, 255
Quadratic formula, 14
Range, 158
Rank, 34
  full column, 199
  of matrix product, 82
  theorem, 172
Rational number, 10
Real numbers, 10
Real part, 11
Reduced row echelon form, 32
Reduced row form, 32
Redundancy test, 144
Redundant vector, 143
Reflection matrix, 209
Residual, 197
Resolution of identity, 306
Roots of unity, 14
Rotation matrix, 149, 205
Row operations, 22
Row space, 153
Scalar, 19, 74, 125
Schur triangularization theorem, 244
Set, 9, 143
  closed, 263
  empty,
  equal,
  intersection, 9, 142
  prescribe,
  proper,
  subset,
  union,
Similar matrices, 226
Singular
  values, 248
  vectors, 248
Singular matrix, 85
Singular Value Decomposition, 247
Skew-symmetric, 141
Skew-symmetric matrix, 84, 105
Solution
  general form, 25
  genuine, 198
  least squares, 198
  non-unique, 24
  set, 30
  to zⁿ = d, 15
  vector, 30
Solutions to linear system, 19
Solution to linear system,
Space
  inner product, 267
  normed linear, 261
Span, 138
Spectral radius, 232
Standard
  basis, 146
  coordinates, 146
  inner product, 189
  norm, 185
  vector space, 127
Standard form, 11
States, 224
Steinitz substitution, 162, 164
Strictly diagonally dominant, 258
Subspace
  definition, 135
  intersection, 142
  projection, 281
  sum, 142
  test, 135
  trivial, 137
Sum of subspaces, 142, 286
Super-augmented matrix, 90
SVD, 247
Symmetric matrix, 80
System
  consistent, 26
  equivalent, 29
  homogeneous, 37
  inconsistent, 26
  linear,
  non-homogeneous, 37
  overdetermined, 197
Target, 132, 158
Tensor product, 114
Trace, 222
Transformation, 132
Transition matrix, 224
Transpose, 77
  rank, 81
Triangular, 75
  lower, 75
  strictly, 75, 84
  unit, 119
  upper, 75, 99, 231, 246
Tridiagonal matrix, 76, 223
Tuple notation, 30
Unique reduced row echelon form, 33
Unit vector, 187
Unitary matrix, 204
Vandermonde matrix, 105, 113, 303
Variable
  bound, 25
  free, 25
Vector
  angle between, 193
  convergence, 188
  coordinates, 147
  definition, 20
  direction, 187
  displacement, 127
  limit, 71, 155, 188, 191
  linearly dependent, 143
  linearly independent, 143
  orthogonal, 194, 271
  parallel, 195
  product, 56
  redundant, 143
  unit, 187
Vector space
  abstract, 128
  concrete, 125
  finite dimensional, 161
  geometrical, 126
  infinite dimensional, 161
  laws, 128
  of functions, 129
  polynomials, 137
  standard, 127
Walk, 67, 68
Wronskian, 151

... subjects, as well as understand the mechanics. One way to do so is to show how concepts of matrix and linear algebra make concrete problems workable. To this end, applied mathematics and mathematical modeling ...
... that occurs in the ith row and jth column is called the (i, j)th entry of the matrix. The objects we have just defined are basic "quantities" of linear algebra and matrix analysis, along with scalars ...
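The indexing convention above can be illustrated with a small example (a sketch using NumPy, which is not part of the text; note the text's subscripts are 1-based while NumPy's are 0-based):

```python
import numpy as np

# A 2 x 3 matrix; the (i, j)th entry sits in row i, column j.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

# The (2, 3) entry in the text's 1-based convention is A[1, 2] in NumPy.
print(A[1, 2])  # -> 6
```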
