
Handbook of Linear Algebra



6000 Broken Sound Parkway NW, Suite 300

Boca Raton, FL 33487-2742

© 2007 by Taylor & Francis Group, LLC

Chapman & Hall/CRC is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed in the United States of America on acid-free paper

10 9 8 7 6 5 4 3 2 1

International Standard Book Number-10: 1-58488-510-6 (Hardcover)

International Standard Book Number-13: 978-1-58488-510-8 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

No part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at

http://www.taylorandfrancis.com

and the CRC Press Web site at

http://www.crcpress.com


I dedicate this book to my husband, Mark Hunacek, with gratitude both for his support throughout this project and for our wonderful life together.


I would like to thank Executive Editor Bob Stern of Taylor & Francis Group, who envisioned this project and whose enthusiasm and support has helped carry it to completion. I also want to thank Yolanda Croasdale, Suzanne Lassandro, Jim McGovern, Jessica Vakili, and Mimi Williams, for their expert guidance of this book through the production process.

I would like to thank the many authors whose work appears in this volume for the contributions of their time and expertise to this project, and for their patience with the revisions necessary to produce a unified whole from many parts.

Without the help of the associate editors, Richard Brualdi, Anne Greenbaum, and Roy Mathias, this book would not have been possible. They gave freely of their time, expertise, friendship, and moral support, and I cannot thank them enough.

I thank Iowa State University for providing a collegial and supportive environment in which to work, not only during the preparation of this book, but for more than 25 years.

Leslie Hogben


The Editor

Leslie Hogben, Ph.D., is a professor of mathematics at Iowa State University. She received her B.A. from Swarthmore College in 1974 and her Ph.D. in 1978 from Yale University under the direction of Nathan Jacobson. Although originally working in nonassociative algebra, she changed her focus to linear algebra in the mid-1990s.

Dr. Hogben is a frequent organizer of meetings, workshops, and special sessions in combinatorial linear algebra, including the workshop "Spectra of Families of Matrices Described by Graphs, Digraphs, and Sign Patterns," hosted by the American Institute of Mathematics in 2006, and the Topics in Linear Algebra Conference hosted by Iowa State University in 2002. She is the Assistant Secretary/Treasurer of the International Linear Algebra Society.

An active researcher herself, Dr. Hogben particularly enjoys introducing graduate and undergraduate students to mathematical research. She has three current or former doctoral students and nine master's students, and has worked with many additional graduate students in the Iowa State University Combinatorial Matrix Theory Research Group, which she founded. Dr. Hogben is the co-director of the NSF-sponsored REU "Mathematics and Computing Research Experiences for Undergraduates at Iowa State University" and has served as a research mentor to ten undergraduates.


Contributors

Virginia Polytechnic Institute and State University

Biswa Nath Datta

Northern Illinois University


The College of Charleston, SC

António Leal Duarte

Roy Mathias

University of Birmingham, England

Simo Puntanen

University of Tampere, Finland

Robert Reams

Virginia Commonwealth University

Joachim Rosenthal

University of Zurich, Switzerland

George A. F. Seber

University of Auckland, NZ

Peter Šemrl

University of Ljubljana, Slovenia


Paul Weiner

St. Mary’s University of Minnesota


Preliminaries P-1

Part I Linear Algebra

Basic Linear Algebra

1 Vectors, Matrices, and Systems of Linear Equations

5 Inner Product Spaces, Orthogonal Projection, Least Squares

and Singular Value Decomposition

Lixing Han and Michael Neumann 5-1

Matrices with Special Properties


Advanced Linear Algebra

J. A. Dias da Silva and Armando Machado 13-1

14 Matrix Equalities and Inequalities

Topics in Advanced Linear Algebra

20 Inverse Eigenvalue Problems


26 Matrices Leaving a Cone Invariant

Bit-Shun Tam and Hans Schneider 26-1

Part II Combinatorial Matrix Theory and Graphs

Matrices and Graphs

27 Combinatorial Matrix Theory

Michael G. Neubauer and William Watkins 32-1

33 Sign Pattern Matrices

Frank J Hall and Zhongshan Li 33-1

34 Multiplicity Lists for the Eigenvalues of Symmetric Matrices

with a Given Graph

Charles R. Johnson, António Leal Duarte, and Carlos M. Saiago 34-1

35 Matrix Completion Problems

Leslie Hogben and Amy Wangsness 35-1

36 Algebraic Connectivity

Steve Kirkland 36-1

Part III Numerical Methods

Numerical Methods for Linear Systems

37 Vector and Matrix Norms, Error Analysis, Efficiency and Stability

Ralph Byers and Biswa Nath Datta 37-1

38 Matrix Factorizations, and Direct Solution of Linear Systems

Christopher Beattie 38-1


40 Sparse Matrix Methods

Esmond G. Ng 40-1

41 Iterative Solution Methods for Linear Systems

Anne Greenbaum 41-1

Numerical Methods for Eigenvalues

42 Symmetric Matrix Eigenvalue Techniques

45 Computation of the Singular Value Decomposition

Alan Kaylor Cline and Inderjit S. Dhillon 45-1

46 Computing Eigenvalues and Singular Values to High Relative Accuracy

Zlatko Drmač 46-1

Computational Linear Algebra

47 Fast Matrix Multiplication


Applications to Probability and Statistics

52 Random Vectors and Linear Statistical Models

Simo Puntanen and George P. H. Styan 52-1

53 Multivariate Statistical Analysis

Simo Puntanen, George A. F. Seber, and George P. H. Styan 53-1

54 Markov Chains

Beatrice Meini 54-1

Applications to Analysis

55 Differential Equations and Stability

Volker Mehrmann and Tatjana Stykel 55-1

56 Dynamical Systems and Linear Algebra

Fritz Colonius and Wolfgang Kliemann 56-1

57 Control Theory

Peter Benner 57-1

58 Fourier Analysis

Kenneth Howell 58-1

Applications to Physical and Biological Sciences

59 Linear Algebra and Mathematical Physics

63 Information Retrieval and Web Search

Amy Langville and Carl Meyer 63-1


Part V Computational Software

Interactive Software for Linear Algebra

71 MATLAB

Steven J. Leon 71-1

72 Linear Algebra in Maple

David J. Jeffrey and Robert M. Corless 72-1

Zhaojun Bai, James Demmel, Jack Dongarra, Julien Langou, and Jenny Wang 75-1

76 Use of ARPACK and EIGS

D. C. Sorensen 76-1

77 Summary of Software for Linear Algebra Freely Available on the Web

Jack Dongarra, Victor Eijkhout, and Julien Langou 77-1

Glossary G-1

Notation Index N-1

Index I-1


It is no exaggeration to say that linear algebra is a subject of central importance in both mathematics and a variety of other disciplines. It is used by virtually all mathematicians and by statisticians, physicists, biologists, computer scientists, engineers, and social scientists. Just as the basic idea of first semester differential calculus (approximating the graph of a function by its tangent line) provides information about the function, the process of linearization often allows difficult problems to be approximated by more manageable linear ones. This can provide insight into, and, thanks to ever-more-powerful computers, approximate solutions of, the original problem. For this reason, people working in all the disciplines referred to above should find the Handbook of Linear Algebra an invaluable resource.

The Handbook is the first resource that presents complete coverage of linear algebra, combinatorial linear algebra, and numerical linear algebra, combined with extensive applications to a variety of fields and information on software packages for linear algebra, in an easy-to-use handbook format.

Content

The Handbook covers the major topics of linear algebra at both the graduate and undergraduate level as well as its offshoots (numerical linear algebra and combinatorial linear algebra), its applications, and software packages for linear algebra computations. The Handbook takes the reader from the very elementary aspects of the subject to the frontiers of current research, and its format (consisting of a number of independent chapters each organized in the same standard way) should make this book accessible to readers with divergent backgrounds.

Format

There are five main parts in this book. The first part (Chapter 1 through Chapter 26) covers linear algebra; the second (Chapter 27 through Chapter 36) and third (Chapter 37 through Chapter 49) cover, respectively, combinatorial and numerical linear algebra, two important branches of the subject. Applications of linear algebra to other disciplines, both inside and outside of mathematics, comprise the fourth part of the book (Chapter 50 through Chapter 70). Part five (Chapter 71 through Chapter 77) addresses software packages useful for linear algebra computations.

Each chapter is written by a different author or team of authors, who are experts in the area covered. Each chapter is divided into sections, which are organized into the following uniform format:

• Definitions
• Facts
• Examples


A glossary, covering the terminology of linear algebra, combinatorial linear algebra, and numerical linear algebra, is available at the end of the book to provide definitions of terms that appear in different chapters. In addition to the definition, the Glossary also provides the number of the chapter (and section thereof) where the term is defined. The Notation Index serves the same purpose for symbols.

The Facts (which elsewhere might be called theorems, lemmas, etc.) are presented in list format, which allows the reader to locate desired information quickly. In lieu of proofs, references are provided for all facts. The references will also, of course, supply a source of additional information about the subject of the chapter. In this spirit, we have encouraged the authors to use texts or survey articles on the subject as references, where available.

The Examples illustrate the definitions and facts. Each section is short enough that it is easy to go back and forth between the Definitions/Facts and the Examples to see the illustration of a fact or definition. Some sections also contain brief applications following the Examples (major applications are treated in their own chapters).


Preliminaries

This chapter contains a variety of definitions of terms that are used throughout the rest of the book, but are not part of linear algebra and/or do not fit naturally into another chapter. Since these definitions have little connection with each other, a different organization is followed; the definitions are (loosely) alphabetized and each definition is followed by an example.

Algebra

An (associative) algebra is a vector space A over a field F together with a multiplication (x, y) → xy from A × A to A satisfying two distributive properties and associativity, i.e., for all a, b ∈ F and all x, y, z ∈ A:

(ax + by)z = a(xz) + b(yz),   x(ay + bz) = a(xy) + b(xz),   (xy)z = x(yz).

Except in Chapter 69 and Chapter 70, the term algebra means associative algebra. In these two chapters, associativity is not assumed.

Examples:

The vector space of n × n matrices over a field F with matrix multiplication is an (associative) algebra.
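As an added numerical illustration (not part of the handbook's text), the following NumPy sketch spot-checks these axioms for the example above with random 3 × 3 real matrices; the variable names are arbitrary.

```python
import numpy as np

# Numerical spot-check (not a proof) that n x n real matrices with matrix
# multiplication satisfy the algebra axioms listed above.
rng = np.random.default_rng(0)
n = 3
a, b = 2.0, -1.5                       # scalars from the field F = R
x, y, z = (rng.standard_normal((n, n)) for _ in range(3))

assert np.allclose((a * x + b * y) @ z, a * (x @ z) + b * (y @ z))  # left distributivity
assert np.allclose(x @ (a * y + b * z), a * (x @ y) + b * (x @ z))  # right distributivity
assert np.allclose((x @ y) @ z, x @ (y @ z))                        # associativity
```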

Boundary

The boundary ∂S of a subset S of the real numbers or the complex numbers is the intersection of the closure of S and the closure of the complement of S.

Examples:

The boundary of S = {z ∈ C : |z| ≤ 1} is ∂S = {z ∈ C : |z| = 1}.

Complement

The complement of the set X in universe S, denoted S \ X, is all elements of S that are not in X. When the universe is clear (frequently the universe is {1, . . . , n}), then this can be denoted X^c.

Examples:

For S = {1, 2, 3, 4, 5} and X = {1, 3}, S \ X = {2, 4, 5}.

Complex Numbers

Let a, b ∈ R. The symbol i denotes √−1.

The complex conjugate of a complex number c = a + bi is c̄ = a − bi.

The imaginary part of a + bi is im(a + bi) = b and the real part is re(a + bi) = a.

The absolute value of c = a + bi is |c| = √(a² + b²).


The closed right half plane C₀⁺ is {z ∈ C : re(z) ≥ 0}.

The open left half plane C⁻ is {z ∈ C : re(z) < 0}.

The closed left half plane C₀⁻ is {z ∈ C : re(z) ≤ 0}.
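As a quick illustration added here (not from the text), the terms above correspond directly to Python's built-in complex type.

```python
# Conjugate, real/imaginary parts, and absolute value of a complex number.
c = 3 - 4j
conjugate = c.conjugate()        # complex conjugate: (3+4j)
re, im = c.real, c.imag          # real part 3.0, imaginary part -4.0
absolute_value = abs(c)          # sqrt(3**2 + 4**2) = 5.0

in_closed_right_half_plane = c.real >= 0    # True: c lies in the closed right half plane
in_open_left_half_plane = c.real < 0        # False
print(conjugate, re, im, absolute_value)
```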

Convexity

Let V be a real or complex vector space.

Let v₁, v₂, . . . , v_k ∈ V. A vector of the form a₁v₁ + a₂v₂ + · · · + a_k v_k with all the coefficients a_i nonnegative and a₁ + a₂ + · · · + a_k = 1 is a convex combination of {v₁, v₂, . . . , v_k}.

A set S ⊆ V is convex if any convex combination of vectors in S is in S.

The convex hull of S is the set of all convex combinations of S and is denoted by Con(S).

An extreme point of a closed convex set S is a point v ∈ S that is not a nontrivial convex combination of other points in S, i.e., ax + (1 − a)y = v and 0 ≤ a ≤ 1 implies x = y = v.

A convex polytope is the convex hull of a finite set of vectors in R^n.

Let S ⊆ V be convex. A function f : S → R is convex if for all a ∈ R with 0 < a < 1 and all x, y ∈ S, f(ax + (1 − a)y) ≤ a f(x) + (1 − a) f(y).

Facts:

1. A set S ⊆ V is convex if and only if Con(S) = S.
2. The extreme points of Con(S) are contained in S.
3. [HJ85] Krein–Milman Theorem: A compact convex set is the convex hull of its extreme points.
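The sketch below (an added illustration, not from the text) forms a convex combination of the vertices of a triangle in R²; the triangle is a convex polytope and its vertices are its extreme points.

```python
import numpy as np

# A convex combination of the vertices of the triangle with vertices
# v1 = (0,0), v2 = (1,0), v3 = (0,1) lies inside the triangle.
v1, v2, v3 = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])

a = np.array([0.2, 0.5, 0.3])           # nonnegative coefficients summing to 1
assert np.isclose(a.sum(), 1.0) and (a >= 0).all()
p = a[0] * v1 + a[1] * v2 + a[2] * v3   # a convex combination, hence in Con({v1, v2, v3})
print(p)                                # [0.5 0.3]
```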


Elementary Symmetric Function

The kth elementary symmetric function of α_i, i = 1, . . . , n, is the sum of all products of k distinct α's: S_k(α₁, . . . , α_n) = Σ over 1 ≤ i₁ < i₂ < · · · < i_k ≤ n of α_{i₁} α_{i₂} · · · α_{i_k}.
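Assuming the standard formula stated above, a short Python sketch (added here, not from the text) computes S_k directly from the definition; the function name is illustrative.

```python
from itertools import combinations
from math import prod

def elementary_symmetric(k, alpha):
    """k-th elementary symmetric function S_k of the numbers in alpha:
    the sum, over all k-element subsets, of the product of their entries."""
    return sum(prod(subset) for subset in combinations(alpha, k))

# S_2(a1, a2, a3) = a1*a2 + a1*a3 + a2*a3
print(elementary_symmetric(2, [1, 2, 3]))   # 2 + 3 + 6 = 11
```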

Equivalence Relation

A binary relation ≡ in a nonempty set S is an equivalence relation if it satisfies the following conditions:

1. (Reflexive) For all a ∈ S, a ≡ a.
2. (Symmetric) For all a, b ∈ S, a ≡ b implies b ≡ a.
3. (Transitive) For all a, b, c ∈ S, a ≡ b and b ≡ c imply a ≡ c.

Field

A field is a set F together with a function F × F → F called addition, denoted (a, b) → a + b, and a function F × F → F called multiplication, denoted (a, b) → ab, which satisfy the following axioms:

1. (Commutativity) For each a, b ∈ F, a + b = b + a and ab = ba.
2. (Associativity) For each a, b, c ∈ F, (a + b) + c = a + (b + c) and (ab)c = a(bc).
3. (Identities) There exist two elements 0 and 1 in F such that 0 + a = a and 1a = a for each a ∈ F.
4. (Inverses) For each a ∈ F, there exists an element −a ∈ F such that (−a) + a = 0. For each nonzero a ∈ F, there exists an element a⁻¹ ∈ F such that a⁻¹a = 1.
5. (Distributivity) For each a, b, c ∈ F, a(b + c) = ab + ac.

Examples:

The real numbers, R, the complex numbers, C, and the rational numbers, Q, are all fields. The set of integers, Z, is not a field.

Greatest Integer Function

The greatest integer or floor function ⌊x⌋ (defined on the real numbers) is the greatest integer less than or equal to x.

Examples:

⌊1.5⌋ = 1, ⌊1⌋ = 1, ⌊−1.5⌋ = −2.


Group

(See also Chapter 67 and Chapter 68.)

A group is a nonempty set G with a function G × G → G denoted (a, b) → ab, which satisfies the following axioms:

1. (Associativity) For each a, b, c ∈ G, (ab)c = a(bc).
2. (Identity) There exists an element e ∈ G such that ea = a = ae for each a ∈ G.
3. (Inverses) For each a ∈ G, there exists an element a⁻¹ ∈ G such that a⁻¹a = e = aa⁻¹.

A group is abelian if ab = ba for all a, b ∈ G.

Examples:

1. Any vector space is an abelian group under +.
2. The set of invertible n × n real matrices is a group under matrix multiplication.
3. The set of all permutations of a set is a group under composition.

Interlaces

Let a₁ ≥ a₂ ≥ · · · ≥ a_n and b₁ ≥ b₂ ≥ · · · ≥ b_{n−1} be two sequences of real numbers arranged in decreasing order. Then the sequence {b_i} interlaces the sequence {a_i} if a_n ≤ b_{n−1} ≤ a_{n−1} ≤ · · · ≤ b₁ ≤ a₁. Further, if all of the above inequalities can be taken to be strict, the sequence {b_i} strictly interlaces the sequence {a_i}. Analogous definitions are given when the numbers are in increasing order.
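As an added illustration (not from the text), the following Python sketch tests the interlacing condition for sequences given in decreasing order; the function name is illustrative.

```python
def interlaces(a, b, strict=False):
    """Check whether the length-(n-1) sequence b interlaces the length-n
    sequence a; both are assumed to be arranged in decreasing order."""
    if strict:
        return all(a[i + 1] < b[i] < a[i] for i in range(len(b)))
    return all(a[i + 1] <= b[i] <= a[i] for i in range(len(b)))

# b = (3, 1) interlaces a = (4, 2, 0), and it does so strictly.
print(interlaces([4, 2, 0], [3, 1]))          # True
print(interlaces([4, 2, 0], [3, 1], True))    # True
```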

Metric

A metric on a set S is a function f : S × S → R satisfying the following conditions:

1. For all x, y ∈ S, f(x, y) ≥ 0.
2. For all x ∈ S, f(x, x) = 0.
3. For all x, y ∈ S, f(x, y) = 0 implies x = y.
4. For all x, y ∈ S, f(x, y) = f(y, x).
5. For all x, y, z ∈ S, f(x, y) + f(y, z) ≥ f(x, z).

A metric is intended as a measure of distance between elements of the set.
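The sketch below (added here, not from the text) checks the listed conditions for the familiar distance f(x, y) = |x − y| on a few sample real numbers.

```python
import itertools

def f(x, y):
    return abs(x - y)      # the usual metric on the real numbers

points = [-2.0, 0.0, 1.5, 3.0]
for x, y, z in itertools.product(points, repeat=3):
    assert f(x, y) >= 0                        # nonnegativity
    assert (f(x, y) == 0) == (x == y)          # f(x, y) = 0 exactly when x = y
    assert f(x, y) == f(y, x)                  # symmetry
    assert f(x, y) + f(y, z) >= f(x, z)        # triangle inequality
```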


Let f, g be real-valued functions of N or R, i.e., f, g : N → R or f, g : R → R.

f is O(g) (big-oh of g) if there exist constants C, k such that |f(x)| ≤ C|g(x)| for all x ≥ k.

f is o(g) (little-oh of g) if lim_{x→∞} f(x)/g(x) = 0.

Permutation

A permutation is a one-to-one onto function from a set to itself.

The set of permutations of {1, . . . , n} is denoted S_n. The identity permutation is denoted ε_n. In this book, permutations are generally assumed to be elements of S_n for some n.

A cycle or k-cycle is a permutation τ such that there is a subset {a₁, . . . , a_k} of {1, . . . , n} satisfying τ(a_i) = a_{i+1} and τ(a_k) = a₁; this is denoted τ = (a₁, a₂, . . . , a_k). The length of this cycle is k.

A transposition is a 2-cycle.

A permutation is even (respectively, odd) if it can be written as the product of an even (odd) number of transpositions.

The sign of a permutation τ, denoted sgn τ, is +1 if τ is even and −1 if τ is odd.

Note: Permutations are functions and act from the left (see Examples).

Facts:

3. S_n with the operation of composition is a group.


Examples:

1. If τ = (1523) ∈ S₆, then τ(1) = 5, τ(2) = 3, τ(3) = 1, τ(4) = 4, τ(5) = 2, τ(6) = 6.
2. (123)(12) = (13).
3. sgn(1234) = −1, because (1234) = (14)(13)(12).
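The following Python sketch (an added illustration, not from the text) models permutations of {1, . . . , n} as dictionaries, composes them from the left as in the Note above, and reproduces the three examples; the function names are illustrative only.

```python
def cycle(*elements, n=6):
    """Return the cycle (a1 a2 ... ak) as a permutation of {1, ..., n}."""
    perm = {i: i for i in range(1, n + 1)}
    for a, b in zip(elements, elements[1:] + (elements[0],)):
        perm[a] = b
    return perm

def compose(sigma, tau):
    """(sigma tau)(i) = sigma(tau(i)): apply tau first, then sigma."""
    return {i: sigma[tau[i]] for i in tau}

def sign(perm):
    """Sign of a permutation: +1 if its number of inversions is even, -1 if odd."""
    keys = sorted(perm)
    inversions = sum(perm[keys[i]] > perm[keys[j]]
                     for i in range(len(keys)) for j in range(i + 1, len(keys)))
    return -1 if inversions % 2 else 1

tau = cycle(1, 5, 2, 3)                                  # the cycle (1523) in S_6
print(tau[1], tau[2], tau[3], tau[4])                    # 5 3 1 4, matching Example 1
print(compose(cycle(1, 2, 3, n=3), cycle(1, 2, n=3)))    # {1: 3, 2: 2, 3: 1}, i.e., (13)
print(sign(cycle(1, 2, 3, 4, n=4)))                      # -1, matching Example 3
```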

Ring

(See also Section 23.1)

A ring is a set R together with a function R × R → R called addition, denoted (a, b) → a + b, and a function R × R → R called multiplication, denoted (a, b) → ab, which satisfy the following axioms:

1. (Commutativity of +) For each a, b ∈ R, a + b = b + a.
2. (Associativity) For each a, b, c ∈ R, (a + b) + c = a + (b + c) and (ab)c = a(bc).
3. (+ identity) There exists an element 0 in R such that 0 + a = a.
4. (+ inverse) For each a ∈ R, there exists an element −a ∈ R such that (−a) + a = 0.
5. (Distributivity) For each a, b, c ∈ R, a(b + c) = ab + ac and (a + b)c = ac + bc.

A zero divisor in a ring R is a nonzero element a ∈ R such that there exists a nonzero b ∈ R with ab = 0 or ba = 0.

Examples:

• The set of integers, Z, is a ring.
• Any field is a ring.
• Let F be a field. Then F^{n×n}, with matrix addition and matrix multiplication as the operations, is a ring.

Sign

(For the sign of a permutation, see Permutation.)

The sign of a complex number is defined by:

sign(z) = z/|z| if z ≠ 0, and sign(z) = 1 if z = 0.

If z is a real number, this sign function yields 1 or −1. This sign function is used in numerical linear algebra.

The sign of a real number a (as used in sign patterns) is + if a > 0, 0 if a = 0, and − if a < 0.
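Both conventions are easy to express in code; the sketch below is an added illustration with illustrative function names, not the handbook's notation.

```python
def sign_complex(z):
    """sign(z) = z/|z| for z != 0, and 1 for z = 0 (numerical-linear-algebra use)."""
    return z / abs(z) if z != 0 else 1

def sign_real(a):
    """Sign of a real number as used in sign patterns: '+', '0', or '-'."""
    return '+' if a > 0 else ('0' if a == 0 else '-')

print(sign_complex(-3.0))                           # -1.0
print(sign_complex(3 + 4j))                         # (0.6+0.8j), a unit-modulus number
print(sign_real(-2), sign_real(0), sign_real(7))    # - 0 +
```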


Linear Algebra

Basic Linear Algebra

1 Vectors, Matrices, and Systems of Linear Equations Jane Day 1-1

2 Linear Independence, Span, and Bases Mark Mills 2-1

3 Linear Transformations Francesco Barioli 3-1

4 Determinants and Eigenvalues Luz M. DeAlba 4-1

5 Inner Product Spaces, Orthogonal Projection, Least Squares,

and Singular Value Decomposition Lixing Han and Michael Neumann 5-1

Matrices with Special Properties

6 Canonical Forms Leslie Hogben 6-1

7 Unitary Similarity, Normal Matrices, and Spectral Theory Helene Shapiro 7-1

8 Hermitian and Positive Definite Matrices Wayne Barrett 8-1

9 Nonnegative and Stochastic Matrices Uriel G. Rothblum 9-1

10 Partitioned Matrices Robert Reams 10-1

Advanced Linear Algebra

11 Functions of Matrices Nicholas J. Higham 11-1

12 Quadratic, Bilinear, and Sesquilinear Forms Raphael Loewy 12-1

13 Multilinear Algebra José A. Dias da Silva and Armando Machado 13-1

14 Matrix Equalities and Inequalities Michael Tsatsomeros 14-1

15 Matrix Perturbation Theory Ren-Cang Li 15-1

16 Pseudospectra Mark Embree 16-1

17 Singular Values and Singular Value Inequalities Roy Mathias 17-1

18 Numerical Range Chi-Kwong Li 18-1

19 Matrix Stability and Inertia Daniel Hershkowitz 19-1

Topics in Advanced Linear Algebra

20 Inverse Eigenvalue Problems Alberto Borobia 20-1

21 Totally Positive and Totally Nonnegative Matrices Shaun M. Fallat 21-1

22 Linear Preserver Problems Peter Šemrl 22-1

23 Matrices over Integral Domains Shmuel Friedland 23-1

24 Similarity of Families of Matrices Shmuel Friedland 24-1

25 Max-Plus Algebra Marianne Akian, Ravindra Bapat, Stéphane Gaubert 25-1

26 Matrices Leaving a Cone Invariant Bit-Shun Tam and Hans Schneider 26-1


Basic Linear Algebra

1 Vectors, Matrices, and Systems of Linear Equations Jane Day 1-1

Vector Spaces • Matrices • Gaussian and Gauss–Jordan Elimination • Systems of Linear Equations • Matrix Inverses and Elementary Matrices • LU Factorization

2 Linear Independence, Span, and Bases Mark Mills 2-1

Span and Linear Independence • Basis and Dimension of a Vector Space • Direct Sum

Decompositions • Matrix Range, Null Space, Rank, and the Dimension

Theorem • Nonsingularity Characterizations • Coordinates and Change

of Basis • Idempotence and Nilpotence

3 Linear Transformations Francesco Barioli 3-1

Basic Concepts • The Spaces L (V, W) and L (V, V ) • Matrix of a Linear

Transformation • Change of Basis and Similarity • Kernel and Range • Invariant

Subspaces and Projections • Isomorphism and Nonsingularity Characterization

• Linear Functionals and Annihilator

4 Determinants and Eigenvalues Luz M. DeAlba 4-1

Determinants • Determinants: Advanced Results • Eigenvalues and Eigenvectors

5 Inner Product Spaces, Orthogonal Projection, Least Squares,

and Singular Value Decomposition Lixing Han and Michael Neumann 5-1

Inner Product Spaces • Orthogonality • Adjoints of Linear Operators on Inner

Product Spaces • Orthogonal Projection • Gram–Schmidt Orthogonalization and QR

Factorization • Singular Value Decomposition • Pseudo-Inverse • Least Squares

Problems


1 Vectors, Matrices, and Systems of Linear Equations

Throughout this chapter, F will denote a field. The references [Lay03], [Leo02], and [SIF00] are good sources for more detail about much of the material in this chapter. They discuss primarily the field of real numbers, but the proofs are usually valid for any field.

1.1 Vector Spaces

Vectors are used in many applications. They often represent quantities that have both direction and magnitude, such as velocity or position, and can appear as functions, as n-tuples of scalars, or in other disguises. Whenever objects can be added and multiplied by scalars, they may be elements of some vector space. In this section, we formulate a general definition of vector space and establish its basic properties. An element of a field, such as the real numbers or the complex numbers, is called a scalar to distinguish it from a vector.

Definitions:

A vector space over F is a set V together with a function V × V → V called addition, denoted (x, y) → x + y, and a function F × V → V called scalar multiplication, denoted (c, x) → cx, which satisfy the following axioms:

1. (Commutativity) For each x, y ∈ V, x + y = y + x.
2. (Associativity) For each x, y, z ∈ V, (x + y) + z = x + (y + z).
3. (Additive identity) There exists a zero vector in V, denoted 0, such that 0 + x = x for each x ∈ V.
4. (Additive inverse) For each x ∈ V, there exists −x ∈ V such that (−x) + x = 0.
5. (Distributivity) For each a ∈ F and x, y ∈ V, a(x + y) = ax + ay.
6. (Distributivity) For each a, b ∈ F and x ∈ V, (a + b)x = ax + bx.


7. (Associativity) For each a, b ∈ F and x ∈ V, (ab)x = a(bx).
8. For each x ∈ V, 1x = x.

The properties that for all x, y ∈ V and a ∈ F, x + y ∈ V and ax ∈ V, are called closure under addition and closure under scalar multiplication, respectively. The elements of a vector space V are called vectors. A vector space is called real if F = R, complex if F = C.

If n is a positive integer, F^n denotes the set of all ordered n-tuples (written as columns). These are sometimes written instead as rows [x₁ · · · x_n] or (x₁, . . . , x_n). Let 0 denote the n-tuple of zeros. For x ∈ F^n, x_j is called the jth coordinate of x.

A subspace of a vector space V over a field F is a subset of V which is itself a vector space over F when the addition and scalar multiplication of V are used. If S₁ and S₂ are subsets of a vector space V, define S₁ + S₂ = {x + y : x ∈ S₁ and y ∈ S₂}.

Facts:

Let V be a vector space over F.

1. F^n is a vector space over F.
2. [FIS03, pp. 11–12] (Basic properties of a vector space):
   • The vector 0 is the only additive identity in V.
   • For each x ∈ V, −x is the only additive inverse for x in V.
   • For each x ∈ V, −x = (−1)x.
   • If a ∈ F and x ∈ V, then ax = 0 if and only if a = 0 or x = 0.
   • (Cancellation) If x, y, z ∈ V and x + y = x + z, then y = z.
3. [FIS03, pp. 16–17] Let W be a subset of V. The following are equivalent:
   • W is a subspace of V.
   • W is nonempty and closed under addition and scalar multiplication.
   • 0 ∈ W and for any x, y ∈ W and a, b ∈ F, ax + by ∈ W.
4. For any vector space V, {0} and V itself are subspaces of V.
5. [FIS03, p. 19] The intersection of any nonempty collection of subspaces of V is a subspace of V.
6. [FIS03, p. 22] Let W₁ and W₂ be subspaces of V. Then W₁ + W₂ is a subspace of V containing W₁ and W₂. It is the smallest subspace that contains them in the sense that any subspace that contains both W₁ and W₂ must contain W₁ + W₂.

Examples:

1. The set R^n of all ordered n-tuples of real numbers is a vector space over R, and the set C^n of all ordered n-tuples of complex numbers is a vector space over C.


3. The vector spaces R, R², and R³ are the usual Euclidean spaces of analytic geometry. There are three types of subspaces of R²: {0}, a line through the origin, and R² itself. There are four types of subspaces of R³: {0}, a line through the origin, a plane through the origin, and R³ itself. For instance, let v = (5, −1, −1) and w = (0, 3, −2). The lines W₁ = {sv : s ∈ R} and W₂ = {sw : s ∈ R} are subspaces of R³. The subspace W₁ + W₂ = {sv + tw : s, t ∈ R} is a plane. The set {sv + w : s ∈ R} is a line parallel to W₁, but is not a subspace. (For more information on geometry, see Chapter 65; a numerical check of this example is sketched after Example 9 below.)

4. Let F[x] be the set of all polynomials in the single variable x, with coefficients from F. To add polynomials, add coefficients of like powers; to multiply a polynomial by an element of F, multiply each coefficient by that scalar. With these operations, F[x] is a vector space over F. The zero polynomial z, with all coefficients 0, is the additive identity of F[x]. For f ∈ F[x], the function −f defined by −f(x) = (−1) f(x) is the additive inverse of f.

5. In F[x], the constant polynomials have degree 0. For n > 0, the polynomials with highest power term x^n are said to have degree n. For a nonnegative integer n, let F[x; n] be the subset of F[x] consisting of all polynomials of degree n or less. Then F[x; n] is a subspace of F[x].

6. When n > 0, the set of all polynomials of degree exactly n is not a subspace of F[x] because it is not closed under addition or scalar multiplication. The set of all polynomials in R[x] with rational coefficients is not a subspace of R[x] because it is not closed under scalar multiplication.

7. Let V be the set of all infinite sequences (a₁, a₂, a₃, . . .), where each a_j ∈ F. Define addition and scalar multiplication coordinate-wise. Then V is a vector space over F.

8. Let X be a nonempty set and let F(X, F) be the set of all functions f : X → F. Let f, g ∈ F(X, F) and define f + g and cf pointwise, as (f + g)(x) = f(x) + g(x) and (cf)(x) = cf(x) for all x ∈ X. With these operations, F(X, F) is a vector space over F. The zero function is the additive identity and (−1) f = −f, the additive inverse of f.

9. Let X be a nonempty subset of R^n. The set C(X) of all continuous functions f : X → R is a subspace of F(X, R). The set D(X) of all differentiable functions f : X → R is a subspace of C(X) and also of F(X, R).
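The following NumPy sketch (an added illustration, not part of the text) checks the claims of Example 3 numerically for the vectors v and w given there.

```python
import numpy as np

# W1 + W2 = {s*v + t*w} is closed under addition, while the parallel line
# {s*v + w : s in R} is not.
v = np.array([5.0, -1.0, -1.0])
w = np.array([0.0, 3.0, -2.0])

x = 2.0 * v + 1.0 * w               # an element of W1 + W2
y = -1.0 * v + 3.0 * w              # another element of W1 + W2
assert np.allclose(x + y, 1.0 * v + 4.0 * w)   # again of the form s*v + t*w

p = 1.0 * v + w                     # on the line {s*v + w}
q = 2.0 * v + w                     # on the line {s*v + w}
# p + q = 3*v + 2*w; writing it as s*v + w would force w to be a multiple of v,
# which it is not, so the line {s*v + w} is not closed under addition.
print(p + q)
```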

1.2 Matrices

Matrices are rectangular arrays of scalars that are used in a great variety of ways, such as to solve linear systems, model linear behavior, and approximate nonlinear behavior. They are standard tools in almost every discipline, from sociology to physics and engineering.

Definitions:

An m × p matrix over F is a rectangular array A = [a_ij] with m rows and p columns, with entries a_ij from F. The notation A = [a_ij] that displays a typical entry is also used. The element a_ij of the matrix A is called the (i, j) entry of A and can also be denoted (A)_ij. The shape (or size) of A is m × p, and A is square if m = p; in this case, m is also called the size of A. Two matrices A = [a_ij] and B = [b_ij] are said to be equal if they have the same shape and a_ij = b_ij for all i, j. Let A = [a_ij] and B = [b_ij] be m × p matrices, and let c be a scalar. Define addition and scalar multiplication on the set of all m × p matrices over F entrywise, as A + B = [a_ij + b_ij] and cA = [ca_ij]. The set of all m × p matrices over F with these operations is denoted F^{m×p}.

If A is m × p, row i is [a_i1, . . . , a_ip] and column j is [a_1j, . . . , a_mj]^T. These are called a row vector and a column vector, respectively, and they belong to F^{1×p} and F^{m×1}, respectively. The elements of F^n are identified with the elements of F^{n×1} (or sometimes with the elements of F^{1×n}). Let 0_mp denote the m × p matrix of zeros, often shortened to 0 when the size is clear. Define −A = (−1)A.


If A = [a₁ a₂ . . . a_p] ∈ F^{m×p}, where a_j denotes column j of A, and b = [b₁, . . . , b_p]^T ∈ F^p, the matrix–vector product of A and b is Ab = b₁a₁ + · · · + b_p a_p. Notice Ab is m × 1.

If A ∈ F^{m×p} and C = [c₁ . . . c_n] ∈ F^{p×n}, define the matrix product of A and C as AC = [Ac₁ . . . Ac_n]. Notice AC is m × n.
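A brief NumPy sketch (added here, not part of the text) of these column-oriented descriptions of Ab and AC; the particular matrices are arbitrary.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])           # 2 x 3, so m = 2, p = 3
b = np.array([2.0, -1.0, 0.5])

Ab = A @ b
# Ab is the linear combination of the columns of A with coefficients from b.
assert np.allclose(Ab, 2.0 * A[:, 0] - 1.0 * A[:, 1] + 0.5 * A[:, 2])

C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, -2.0]])                # 3 x 2, so AC is 2 x 2
AC = A @ C
assert np.allclose(AC[:, 0], A @ C[:, 0])  # column j of AC is A times column j of C
assert np.allclose(AC[:, 1], A @ C[:, 1])
print(Ab, AC, sep="\n")
```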

A matrix A = [a_ij] is diagonal if a_ij = 0 whenever i ≠ j, lower triangular if a_ij = 0 whenever i < j, and upper triangular if a_ij = 0 whenever i > j. A unit triangular matrix is a lower or upper triangular matrix in which each diagonal entry is 1.

The identity matrix I_n, often shortened to I when the size is clear, is the n × n matrix with main diagonal entries 1 and other entries 0.

A scalar matrix is a scalar multiple of the identity matrix.

A permutation matrix is one whose rows are some rearrangement of the rows of an identity matrix.

Let A ∈ F^{m×p}. The transpose of A, denoted A^T, is the p × m matrix whose (i, j) entry is the (j, i) entry of A. The square matrix A is symmetric if A^T = A and skew-symmetric if A^T = −A.

When F = C, that is, when A has complex entries, the Hermitian adjoint of A is its conjugate transpose, A* = Ā^T; that is, the (i, j) entry of A* is the complex conjugate of a_ji. Some authors, such as [Leo02], write A^H instead of A*. The square matrix A is Hermitian if A* = A and skew-Hermitian if A* = −A.

Let α be a nonempty set of row indices and β a nonempty set of column indices. A submatrix of A is a matrix A[α, β] obtained by choosing the entries of A which lie in rows α and columns β. A principal submatrix of A is a submatrix of the form A[α, α]. A leading principal submatrix of A is one of the form A[{1, . . . , k}, {1, . . . , k}].

Facts:

3. [SIF00, p. 88] Let c ∈ F, let A and B be matrices over F, let I denote an identity matrix, and assume the shapes allow the following sums and products to be calculated. Then:


4. [SIF00, p. 5 and p. 20] Let c ∈ F, let A and B be matrices over F, and assume the shapes allow the following sums and products to be calculated. Then:

   • (A^T)^T = A
   • (A + B)^T = A^T + B^T
   • (cA)^T = cA^T
   • (AB)^T = B^T A^T

5. [Leo02, pp. 321–323] Let c ∈ C, let A and B be matrices over C, and assume the shapes allow the following sums and products to be calculated. Then:

   • (A*)* = A
   • (A + B)* = A* + B*
   • (cA)* = c̄A*
   • (AB)* = B* A*
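The identities in Facts 4 and 5 are easy to spot-check numerically; the sketch below (an added illustration, not part of the text) does so for random complex matrices, with adj standing in for the Hermitian adjoint.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
c = 2.0 - 0.5j

adj = lambda M: M.conj().T                            # Hermitian adjoint M*

assert np.allclose((A @ B).T, B.T @ A.T)              # (AB)^T = B^T A^T
assert np.allclose(adj(A @ B), adj(B) @ adj(A))       # (AB)* = B* A*
assert np.allclose(adj(c * A), np.conj(c) * adj(A))   # (cA)* = conj(c) A*
assert np.allclose(adj(adj(A)), A)                    # (A*)* = A
```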


Examples:

4. The product of two matrices can be a zero matrix even if neither matrix has any zero entries. For example, if A = [[1, 1], [1, 1]] and B = [[1, −1], [−1, 1]], then AB = 0.
