
W W L CHEN

© W W L Chen, 1982, 2008.

This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 1990. It is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied, with or without permission from the author. However, this document may not be kept on any information storage and retrieval system without permission from the author, unless such system is not accessible to any individuals other than its owners.

Chapter 2

MATRICES

2.1 Introduction

A rectangular array of numbers of the form

( a11 ... a1n )
( ...     ... )                                                  (1)
( am1 ... amn )

is called an m × n matrix, with m rows and n columns. We count rows from the top and columns from the left. Hence

                        ( a1j )
( ai1 ... ain )   and   ( ... )
                        ( amj )

represent respectively the i-th row and the j-th column of the matrix (1), and aij represents the entry in the matrix (1) on the i-th row and j-th column.

Example 2.1.1. Consider the 3 × 4 matrix

(  2 4 3 −1 )
(  3 1 5  2 )
( −1 0 7  6 )

Here

                  ( 3 )
( 3 1 5 2 )  and  ( 5 )
                  ( 7 )

represent respectively the 2-nd row and the 3-rd column of the matrix, and 5 represents the entry in the matrix on the 2-nd row and 3-rd column.

We now consider the question of arithmetic involving matrices. First of all, let us study the problem of addition. A reasonable theory can be derived from the following definition.

Definition. Suppose that the two matrices

A = ( a11 ... a1n )          B = ( b11 ... b1n )
    ( ...     ... )   and        ( ...     ... )
    ( am1 ... amn )              ( bm1 ... bmn )

both have m rows and n columns. Then we write

A + B = ( a11 + b11 ... a1n + b1n )
        ( ...                 ... )
        ( am1 + bm1 ... amn + bmn )

and call this the sum of the two matrices A and B.

Example 2.1.2. Suppose that

A = (  2 4 3 −1 )          B = (  1 2 4 −1 )
    (  3 1 5  2 )   and        (  0 2 7  3 )
    ( −1 0 7  6 )              ( −2 1 3 −2 )

Then

A + B = (  2+1 4+2 3+4 −1−1 )   (  3 6  7 −2 )
        (  3+0 1+2 5+7  2+3 ) = (  3 3 12  5 )
        ( −1−2 0+1 7+3  6−2 )   ( −3 1 10  4 )

Example 2.1.3. We do not have a definition for "adding" two matrices of different sizes, for instance a 2 × 2 matrix and the 3 × 4 matrix

(  2 4 3 −1 )
(  3 1 5  2 )
( −1 0 7  6 )

The sum is defined only when the two matrices have the same numbers of rows and columns.

PROPOSITION 2A. (MATRIX ADDITION) Suppose that A, B, C are m × n matrices. Suppose further that O represents the m × n matrix with all entries zero. Then
(a) A + B = B + A;
(b) A + (B + C) = (A + B) + C;
(c) A + O = A; and
(d) there is an m × n matrix A′ such that A + A′ = O.

Proof. Parts (a)-(c) are easy consequences of ordinary addition, as matrix addition is simply entry-wise addition. For part (d), we can consider the matrix A′ obtained from A by multiplying each entry of A by −1.
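Matrix addition and the matrix A′ of part (d) are both entry-wise operations, so they translate directly into code. A minimal sketch, representing a matrix as a list of rows (the function names here are our own, not from the text):

```python
def mat_add(A, B):
    # The sum is defined only when A and B have the same numbers of rows and columns.
    assert len(A) == len(B) and all(len(ra) == len(rb) for ra, rb in zip(A, B))
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_neg(A):
    # The matrix A' of Proposition 2A(d): multiply each entry of A by -1.
    return [[-a for a in row] for row in A]
```

With A as in Example 2.1.1, `mat_add(A, mat_neg(A))` is the zero matrix O, exactly as part (d) requires.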

The theory of multiplication is rather more complicated, and includes multiplication of a matrix by a scalar as well as multiplication of two matrices.

Definition. Suppose that the matrix

A = ( a11 ... a1n )
    ( ...     ... )
    ( am1 ... amn )

has m rows and n columns, and that c ∈ R. Then we write

cA = ( ca11 ... ca1n )
     ( ...       ... )
     ( cam1 ... camn )

and call this the product of the matrix A by the scalar c.

Example 2.1.4. Suppose that

A = (  2 4 3 −1 )
    (  3 1 5  2 )
    ( −1 0 7  6 )

Then

2A = (  4 8  6 −2 )
     (  6 2 10  4 )
     ( −2 0 14 12 )

PROPOSITION 2B. (MULTIPLICATION BY SCALAR) Suppose that A, B are m × n matrices, and that c, d ∈ R. Suppose further that O represents the m × n matrix with all entries zero. Then
(a) c(A + B) = cA + cB;
(b) (c + d)A = cA + dA;
(c) 0A = O; and
(d) c(dA) = (cd)A.

Proof. These are all easy consequences of ordinary multiplication, as multiplication by the scalar c is simply entry-wise multiplication by the number c.

The question of multiplication of two matrices is rather more complicated. To motivate this, let us consider the representation of a system of linear equations

a11x1 + ... + a1nxn = b1,
...                                                              (2)
am1x1 + ... + amnxn = bm,

in the form Ax = b, where

A = ( a11 ... a1n )            ( b1 )
    ( ...     ... )   and  b = ( .. )                            (3)
    ( am1 ... amn )            ( bm )

represent the coefficients and

x = ( x1 )
    ( .. )                                                       (4)
    ( xn )

represents the variables. This can be written in full matrix notation by

( a11 ... a1n )( x1 )   ( b1 )
( ...     ... )( .. ) = ( .. )
( am1 ... amn )( xn )   ( bm )

Can you work out the meaning of this representation?

Now let us define matrix multiplication more formally.

Definition. Suppose that

A = ( a11 ... a1n )          B = ( b11 ... b1p )
    ( ...     ... )   and        ( ...     ... )
    ( am1 ... amn )              ( bn1 ... bnp )

are respectively an m × n matrix and an n × p matrix. Then the matrix product AB is given by the m × p matrix

AB = ( q11 ... q1p )
     ( ...     ... )
     ( qm1 ... qmp )

where for every i = 1, ..., m and j = 1, ..., p, we have

qij = ai1b1j + ai2b2j + ... + ainbnj,

the sum of the products aikbkj over k = 1, ..., n.

Remark. Note first of all that the number of columns of the first matrix must be equal to the number of rows of the second matrix. On the other hand, for a simple way to work out qij, the entry in the i-th row and j-th column of AB, we observe that the i-th row of A and the j-th column of B are respectively

                        ( b1j )
( ai1 ... ain )   and   ( ... )
                        ( bnj )

We now multiply the corresponding entries, from ai1 with b1j, and so on, until ain with bnj, and then add these products to obtain qij.
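The recipe just described, multiply corresponding entries and then add, is a triple loop in code. A minimal sketch, again with the list-of-rows representation (function name our own):

```python
def mat_mul(A, B):
    # The number of columns of A must equal the number of rows of B.
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A)
    # q_ij = a_i1*b_1j + ... + a_in*b_nj
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]
```

Applied to the matrices of Example 2.1.5 below, this reproduces the product AB computed there entry by entry.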

Example 2.1.5. Consider the matrices

A = (  2 4 3 −1 )          B = ( 1  4 )
    (  3 1 5  2 )   and        ( 2  3 )
    ( −1 0 7  6 )              ( 0 −2 )
                               ( 3  1 )

Note that A is a 3 × 4 matrix and B is a 4 × 2 matrix, so that the product AB is a 3 × 2 matrix. Let us calculate the product

AB = ( q11 q12 )
     ( q21 q22 )
     ( q31 q32 )

Consider first of all q11. To calculate this, we need the 1-st row of A and the 1-st column of B, so let us cover up all unnecessary information, so that

( 2 4 3 −1 )( 1 × )   ( q11 × )
( × × ×  × )( 2 × ) = (  ×  × )
( × × ×  × )( 0 × )   (  ×  × )
            ( 3 × )

From the definition, we have

q11 = 2 · 1 + 4 · 2 + 3 · 0 + (−1) · 3 = 2 + 8 + 0 − 3 = 7.

Consider next q12. To calculate this, we need the 1-st row of A and the 2-nd column of B, so let us cover up all unnecessary information, so that

( 2 4 3 −1 )( ×  4 )   ( × q12 )
( × × ×  × )( ×  3 ) = ( ×  ×  )
( × × ×  × )( × −2 )   ( ×  ×  )
            ( ×  1 )

From the definition, we have

q12 = 2 · 4 + 4 · 3 + 3 · (−2) + (−1) · 1 = 8 + 12 − 6 − 1 = 13.

Consider next q21. To calculate this, we need the 2-nd row of A and the 1-st column of B, so let us cover up all unnecessary information, so that

( × × × × )( 1 × )   (  ×  × )
( 3 1 5 2 )( 2 × ) = ( q21 × )
( × × × × )( 0 × )   (  ×  × )
           ( 3 × )

From the definition, we have

q21 = 3 · 1 + 1 · 2 + 5 · 0 + 2 · 3 = 3 + 2 + 0 + 6 = 11.

Consider next q22. To calculate this, we need the 2-nd row of A and the 2-nd column of B, so let us cover up all unnecessary information, so that

( × × × × )( ×  4 )   ( ×  ×  )
( 3 1 5 2 )( ×  3 ) = ( × q22 )
( × × × × )( × −2 )   ( ×  ×  )
           ( ×  1 )

From the definition, we have

q22 = 3 · 4 + 1 · 3 + 5 · (−2) + 2 · 1 = 12 + 3 − 10 + 2 = 7.

Consider next q31. To calculate this, we need the 3-rd row of A and the 1-st column of B, so let us cover up all unnecessary information, so that

(  × × × × )( 1 × )   (  ×  × )
(  × × × × )( 2 × ) = (  ×  × )
( −1 0 7 6 )( 0 × )   ( q31 × )
            ( 3 × )

From the definition, we have

q31 = (−1) · 1 + 0 · 2 + 7 · 0 + 6 · 3 = −1 + 0 + 0 + 18 = 17.

Consider finally q32. To calculate this, we need the 3-rd row of A and the 2-nd column of B, so let us cover up all unnecessary information, so that

(  × × × × )( ×  4 )   ( ×  ×  )
(  × × × × )( ×  3 ) = ( ×  ×  )
( −1 0 7 6 )( × −2 )   ( × q32 )
            ( ×  1 )

From the definition, we have

q32 = (−1) · 4 + 0 · 3 + 7 · (−2) + 6 · 1 = −4 + 0 − 14 + 6 = −12.

We therefore conclude that

AB = (  2 4 3 −1 )( 1  4 )   (  7  13 )
     (  3 1 5  2 )( 2  3 ) = ( 11   7 )
     ( −1 0 7  6 )( 0 −2 )   ( 17 −12 )
                  ( 3  1 )

Example 2.1.6. Consider again the matrices

A = (  2 4 3 −1 )          B = ( 1  4 )
    (  3 1 5  2 )   and        ( 2  3 )
    ( −1 0 7  6 )              ( 0 −2 )
                               ( 3  1 )

Note that B is a 4 × 2 matrix and A is a 3 × 4 matrix, so that we do not have a definition for the "product" BA.

We leave the proofs of the following results as exercises for the interested reader.

PROPOSITION 2C. (ASSOCIATIVE LAW) Suppose that A is an m × n matrix, B is an n × p matrix and C is a p × r matrix. Then A(BC) = (AB)C.

PROPOSITION 2D. (DISTRIBUTIVE LAWS)
(a) Suppose that A is an m × n matrix and B and C are n × p matrices. Then A(B + C) = AB + AC.
(b) Suppose that A and B are m × n matrices and C is an n × p matrix. Then (A + B)C = AC + BC.

PROPOSITION 2E. Suppose that A is an m × n matrix, B is an n × p matrix, and that c ∈ R. Then c(AB) = (cA)B = A(cB).

2.2 Systems of Linear Equations

Note that the system (2) of linear equations can be written in matrix form as Ax = b, where the matrices A, x and b are given by (3) and (4). In this section, we shall establish the following important result.

PROPOSITION 2F. Every system (2) of linear equations has either no solution, exactly one solution, or infinitely many solutions.

Proof. Clearly the system (2) has either no solution, exactly one solution, or more than one solution. It remains to show that if the system (2) has two distinct solutions, then it must have infinitely many solutions. Suppose that x = u and x = v represent two distinct solutions. Then

Au = b   and   Av = b,

so that

A(u − v) = Au − Av = b − b = 0,

where 0 is the zero m × 1 matrix. It now follows that for every c ∈ R, we have

A(u + c(u − v)) = Au + A(c(u − v)) = Au + c(A(u − v)) = b + c0 = b,

so that x = u + c(u − v) is a solution for every c ∈ R. Clearly we have infinitely many solutions.
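The last step of the proof can be checked numerically. A small sketch with a hypothetical system of one equation in two unknowns, chosen so that it has two distinct solutions u and v:

```python
def mat_vec(A, x):
    # Compute the product Ax for a matrix A and a column vector x.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 1]]           # hypothetical system: x1 + x2 = 2
b = [2]
u, v = [2, 0], [0, 2]  # two distinct solutions

# x = u + c(u - v) is a solution for every c
for c in range(-3, 4):
    w = [ui + c * (ui - vi) for ui, vi in zip(u, v)]
    assert mat_vec(A, w) == b
```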

2.3 Inversion of Matrices

For the remainder of this chapter, we shall deal with square matrices, those where the number of rows equals the number of columns.

Definition. The n × n matrix

In = ( a11 ... a1n )
     ( ...     ... )
     ( an1 ... ann )

where

aij = { 1 if i = j,
      { 0 if i ≠ j,

is called the identity matrix of order n.

Remark. Note that

I1 = ( 1 )   and   I4 = ( 1 0 0 0 )
                        ( 0 1 0 0 )
                        ( 0 0 1 0 )
                        ( 0 0 0 1 )

The following result is relatively easy to check. It shows that the identity matrix In acts as the identity for multiplication of n × n matrices.

PROPOSITION 2G. For every n × n matrix A, we have AIn = InA = A.

This raises the following question: given an n × n matrix A, is it possible to find another n × n matrix B such that AB = BA = In?

Definition. An n × n matrix A is said to be invertible if there exists an n × n matrix B such that AB = BA = In. In this case, we say that B is the inverse of A and write B = A^-1.

PROPOSITION 2H. Suppose that A is an invertible n × n matrix. Then its inverse A^-1 is unique.

Proof. Suppose that B satisfies the requirements for being the inverse of A. Then AB = BA = In. It follows that

A^-1 = A^-1 In = A^-1 (AB) = (A^-1 A)B = In B = B.

Hence the inverse A^-1 is unique.

PROPOSITION 2J. Suppose that A and B are invertible n × n matrices. Then (AB)^-1 = B^-1 A^-1.

Proof. In view of the uniqueness of inverse, it is sufficient to show that B^-1 A^-1 satisfies the requirements for being the inverse of AB. Note that

(AB)(B^-1 A^-1) = A(B(B^-1 A^-1)) = A((BB^-1)A^-1) = A(In A^-1) = AA^-1 = In

and

(B^-1 A^-1)(AB) = B^-1(A^-1(AB)) = B^-1((A^-1 A)B) = B^-1(In B) = B^-1 B = In,

as required.

PROPOSITION 2K. Suppose that A is an invertible n × n matrix. Then (A^-1)^-1 = A.

Proof. Note that both (A^-1)^-1 and A satisfy the requirements for being the inverse of A^-1. Equality follows from the uniqueness of inverse.

2.4 Application to Matrix Multiplication

In this section, we shall discuss an application of invertible matrices. Detailed discussion of the technique involved will be covered in a later chapter.

Definition. An n × n matrix

A = ( a11 ... a1n )
    ( ...     ... )
    ( an1 ... ann )

where aij = 0 whenever i ≠ j, is called a diagonal matrix of order n.

Example 2.4.1. The 3 × 3 matrices

( 1 0 0 )        ( 0 0 0 )
( 0 0 0 )   and  ( 0 0 0 )
( 0 0 0 )        ( 0 0 0 )

are both diagonal.

Given an n × n matrix A, it is usually rather complicated to calculate

A^k = A · · · A,

the product of k copies of A.

Example 2.4.2. Consider the 3 × 3 matrix

A = (  17 −10  −5 )
    (  45 −28 −15 )
    ( −30  20  12 )

Suppose that we wish to calculate A^98. It can be checked that if we take

P = (  1 1 2 )
    (  3 0 3 )
    ( −2 3 0 )

then

P^-1 = ( −3    2  1 )
       ( −2  4/3  1 )
       (  3 −5/3 −1 )

Furthermore, if we write

D = ( −3 0 0 )
    (  0 2 0 )
    (  0 0 2 )

then it can be checked that A = PDP^-1, so that

A^98 = (PDP^-1) · · · (PDP^-1)   (98 copies)

     = PD^98 P^-1 = P ( 3^98   0     0   ) P^-1,
                      (  0    2^98   0   )
                      (  0     0    2^98 )

since the factors P^-1 P in the middle all cancel, and (−3)^98 = 3^98.

This is much simpler than calculating A^98 directly. Note that this example is only an illustration. We have not discussed here how the matrices P and D are found.
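The saving can be imitated in exact arithmetic. A sketch using the matrices P, D and P^-1 of this example, with the power 5 in place of 98 so that the repeated product can be checked directly (`mat_mul` is an ordinary matrix product; `Fraction` keeps the entries 4/3 and −5/3 exact):

```python
from fractions import Fraction as F

def mat_mul(A, B):
    # Ordinary matrix product of list-of-rows matrices.
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(len(B[0]))]
            for i in range(len(A))]

P    = [[1, 1, 2], [3, 0, 3], [-2, 3, 0]]
Pinv = [[-3, 2, 1], [-2, F(4, 3), 1], [3, F(-5, 3), -1]]
D    = [[-3, 0, 0], [0, 2, 0], [0, 0, 2]]

A = mat_mul(mat_mul(P, D), Pinv)      # A = P D P^-1

# A^5 the slow way, by repeated multiplication ...
A5 = A
for _ in range(4):
    A5 = mat_mul(A5, A)

# ... and in one step: D^5 is diagonal with entries (-3)^5, 2^5, 2^5.
D5 = [[(-3) ** 5, 0, 0], [0, 2 ** 5, 0], [0, 0, 2 ** 5]]
A5_fast = mat_mul(mat_mul(P, D5), Pinv)
```

Both routes give the same matrix A^5, while the second needs only two matrix products however large the exponent is.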

2.5 Finding Inverses by Elementary Row Operations

In this section, we shall discuss a technique by which we can find the inverse of a square matrix, if the inverse exists. Before we discuss this technique, let us recall the three elementary row operations we discussed in the previous chapter. These are: (1) interchanging two rows; (2) adding a multiple of one row to another row; and (3) multiplying one row by a non-zero constant.

Let us now consider the following example.

Example 2.5.1. Consider the matrices

A = ( a11 a12 a13 )            I3 = ( 1 0 0 )
    ( a21 a22 a23 )   and           ( 0 1 0 )
    ( a31 a32 a33 )                 ( 0 0 1 )

• Let us interchange rows 1 and 2 of A and likewise for I3. We obtain respectively

( a21 a22 a23 )        ( 0 1 0 )
( a11 a12 a13 )   and  ( 1 0 0 )
( a31 a32 a33 )        ( 0 0 1 )

Note that

( a21 a22 a23 )   ( 0 1 0 )( a11 a12 a13 )
( a11 a12 a13 ) = ( 1 0 0 )( a21 a22 a23 )
( a31 a32 a33 )   ( 0 0 1 )( a31 a32 a33 )

• Let us interchange rows 2 and 3 of A and likewise for I3. We obtain respectively

( a11 a12 a13 )        ( 1 0 0 )
( a31 a32 a33 )   and  ( 0 0 1 )
( a21 a22 a23 )        ( 0 1 0 )

Note that

( a11 a12 a13 )   ( 1 0 0 )( a11 a12 a13 )
( a31 a32 a33 ) = ( 0 0 1 )( a21 a22 a23 )
( a21 a22 a23 )   ( 0 1 0 )( a31 a32 a33 )

• Let us add 3 times row 1 to row 2 of A and likewise for I3. We obtain respectively

(     a11        a12        a13    )        ( 1 0 0 )
( 3a11 + a21 3a12 + a22 3a13 + a23 )   and  ( 3 1 0 )
(     a31        a32        a33    )        ( 0 0 1 )

Note that

(     a11        a12        a13    )   ( 1 0 0 )( a11 a12 a13 )
( 3a11 + a21 3a12 + a22 3a13 + a23 ) = ( 3 1 0 )( a21 a22 a23 )
(     a31        a32        a33    )   ( 0 0 1 )( a31 a32 a33 )

• Let us add −2 times row 3 to row 1 of A and likewise for I3. We obtain respectively

( −2a31 + a11 −2a32 + a12 −2a33 + a13 )        ( 1 0 −2 )
(     a21         a22         a23     )   and  ( 0 1  0 )
(     a31         a32         a33     )        ( 0 0  1 )

Note that

( −2a31 + a11 −2a32 + a12 −2a33 + a13 )   ( 1 0 −2 )( a11 a12 a13 )
(     a21         a22         a23     ) = ( 0 1  0 )( a21 a22 a23 )
(     a31         a32         a33     )   ( 0 0  1 )( a31 a32 a33 )

• Let us multiply row 1 of A by 5 and likewise for I3. We obtain respectively

( 5a11 5a12 5a13 )        ( 5 0 0 )
(  a21  a22  a23 )   and  ( 0 1 0 )
(  a31  a32  a33 )        ( 0 0 1 )

Note that

( 5a11 5a12 5a13 )   ( 5 0 0 )( a11 a12 a13 )
(  a21  a22  a23 ) = ( 0 1 0 )( a21 a22 a23 )
(  a31  a32  a33 )   ( 0 0 1 )( a31 a32 a33 )

• Let us multiply row 3 of A by −1 and likewise for I3. We obtain respectively

(  a11  a12  a13 )        ( 1 0  0 )
(  a21  a22  a23 )   and  ( 0 1  0 )
( −a31 −a32 −a33 )        ( 0 0 −1 )

Note that

(  a11  a12  a13 )   ( 1 0  0 )( a11 a12 a13 )
(  a21  a22  a23 ) = ( 0 1  0 )( a21 a22 a23 )
( −a31 −a32 −a33 )   ( 0 0 −1 )( a31 a32 a33 )

Let us now consider the problem in general.

Definition. By an elementary n × n matrix, we mean an n × n matrix obtained from In by an elementary row operation.

We state without proof the following important result. The interested reader may wish to construct a proof, taking into account the different types of elementary row operations.

PROPOSITION 2L. Suppose that A is an n × n matrix, and suppose that B is obtained from A by an elementary row operation. Suppose further that E is an elementary matrix obtained from In by the same elementary row operation. Then B = EA.
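Proposition 2L can be spot-checked in code. A sketch using the 3 × 4 matrix of Example 2.1.1 and two elementary matrices obtained from I3 (any matrix with three rows would do; `mat_mul` is an ordinary matrix product):

```python
def mat_mul(A, B):
    # Ordinary matrix product of list-of-rows matrices.
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[2, 4, 3, -1], [3, 1, 5, 2], [-1, 0, 7, 6]]

# E1: obtained from I3 by interchanging rows 1 and 2.
E1 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
# E1 A interchanges rows 1 and 2 of A, as Proposition 2L predicts.
assert mat_mul(E1, A) == [[3, 1, 5, 2], [2, 4, 3, -1], [-1, 0, 7, 6]]

# E2: obtained from I3 by adding 3 times row 1 to row 2.
E2 = [[1, 0, 0], [3, 1, 0], [0, 0, 1]]
assert mat_mul(E2, A) == [[2, 4, 3, -1], [9, 13, 14, -1], [-1, 0, 7, 6]]
```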

We now adopt the following strategy. Consider an n × n matrix A. Suppose that it is possible to reduce the matrix A by a sequence α1, α2, ..., αk of elementary row operations to the identity matrix In. If E1, E2, ..., Ek are respectively the elementary n × n matrices obtained from In by the same elementary row operations α1, α2, ..., αk, then

In = Ek · · · E2E1A.

We therefore must have

A^-1 = Ek · · · E2E1 = Ek · · · E2E1In.

It follows that the inverse A^-1 can be obtained from In by performing the same elementary row operations α1, α2, ..., αk. Since we are performing the same elementary row operations on A and In, it makes sense to put them side by side. The process can then be described pictorially by

(A|In) --α1--> (E1A|E1In) --α2--> (E2E1A|E2E1In) --α3--> · · · --αk--> (Ek · · · E2E1A | Ek · · · E2E1In) = (In|A^-1).

In other words, we consider an array with the matrix A on the left and the matrix In on the right. We now perform elementary row operations on the array and try to reduce the left hand half to the matrix In. If we succeed in doing so, then the right hand half of the array gives the inverse A^-1.
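The side-by-side process can be sketched in code using exact rational arithmetic. A minimal implementation of the strategy (our own sketch, not the author's, with a pivot search added so that rows are interchanged whenever a zero pivot is met):

```python
from fractions import Fraction as F

def invert(A):
    # Form the array (A | In) and try to reduce the left half to In;
    # the right half then holds A^-1.  Returns None if A is not invertible.
    n = len(A)
    M = [[F(x) for x in row] + [F(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None                          # left half cannot become In
        M[col], M[pivot] = M[pivot], M[col]      # operation (1): interchange rows
        M[col] = [x / M[col][col] for x in M[col]]   # operation (3): scale the pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:      # operation (2): add a multiple of a row
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]
```

Applied to the matrix of Example 2.5.2 below, this reproduces the inverse found there; applied to a singular matrix, it reports failure instead.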

Example 2.5.2. Consider the matrix

A = (  1 1 2 )
    (  3 0 3 )
    ( −2 3 0 )

To find A^-1, we consider the array

(A|I3) = (  1 1 2 | 1 0 0 )
         (  3 0 3 | 0 1 0 )
         ( −2 3 0 | 0 0 1 )

We now perform elementary row operations on this array and try to reduce the left hand half to the matrix I3. Note that if we succeed, then the final array is clearly in reduced row echelon form. We therefore follow the same procedure as reducing an array to reduced row echelon form. Adding −3 times row 1 to row 2, we obtain

(  1  1  2 |  1 0 0 )
(  0 −3 −3 | −3 1 0 )
( −2  3  0 |  0 0 1 )

Adding 2 times row 1 to row 3, we obtain

( 1  1  2 |  1 0 0 )
( 0 −3 −3 | −3 1 0 )
( 0  5  4 |  2 0 1 )

Multiplying row 3 by 3, we obtain

( 1  1  2 |  1 0 0 )
( 0 −3 −3 | −3 1 0 )
( 0 15 12 |  6 0 3 )

Adding 5 times row 2 to row 3, we obtain

( 1  1  2 |  1 0 0 )
( 0 −3 −3 | −3 1 0 )
( 0  0 −3 | −9 5 3 )

Multiplying row 1 by 3, we obtain

( 3  3  6 |  3 0 0 )
( 0 −3 −3 | −3 1 0 )
( 0  0 −3 | −9 5 3 )

Adding 2 times row 3 to row 1, we obtain

( 3  3  0 | −15 10 6 )
( 0 −3 −3 |  −3  1 0 )
( 0  0 −3 |  −9  5 3 )

Adding −1 times row 3 to row 2, we obtain

( 3  3  0 | −15 10  6 )
( 0 −3  0 |   6 −4 −3 )
( 0  0 −3 |  −9  5  3 )

Adding 1 times row 2 to row 1, we obtain

( 3  0  0 | −9  6  3 )
( 0 −3  0 |  6 −4 −3 )
( 0  0 −3 | −9  5  3 )

Multiplying row 1 by 1/3, we obtain

( 1  0  0 | −3  2  1 )
( 0 −3  0 |  6 −4 −3 )
( 0  0 −3 | −9  5  3 )

Multiplying row 2 by −1/3, we obtain

( 1  0  0 | −3   2  1 )
( 0  1  0 | −2 4/3  1 )
( 0  0 −3 | −9   5  3 )

Multiplying row 3 by −1/3, we obtain

( 1 0 0 | −3    2  1 )
( 0 1 0 | −2  4/3  1 )
( 0 0 1 |  3 −5/3 −1 )

Note now that the array is in reduced row echelon form, and that the left hand half is the identity matrix I3. It follows that the right hand half of the array represents the inverse A^-1. Hence

A^-1 = ( −3    2  1 )
       ( −2  4/3  1 )
       (  3 −5/3 −1 )

Example 2.5.3. Consider next a 4 × 4 matrix A, and form the array (A|I4) as before. We again perform elementary row operations on this array and try to reduce the left hand half to the matrix I4. For some matrices A, we find at some point that it is impossible to do so. For those who remain unconvinced, we can continue the reduction all the way: the final array is then in reduced row echelon form, but its left hand half is not the identity matrix I4. In that case our technique has failed. In fact, such a matrix A is not invertible.

2.6 Criteria for Invertibility

Examples 2.5.2–2.5.3 raise the question of when a given matrix is invertible In this section, we shall obtain some partial answers to this question Our first step here is the following simple observation PROPOSITION 2M Every elementary matrix is invertible

Proof. Consider the three types of elementary row operations described in Section 2.5. These elementary row operations can clearly be reversed by elementary row operations. For (1), we interchange the two rows again. For (2), if we have originally added c times row i to row j, then we can reverse this by adding −c times row i to row j. For (3), if we have multiplied any row by a non-zero constant c, we can reverse this by multiplying the same row by the constant 1/c. Note now that each elementary matrix is obtained from In by an elementary row operation. The inverse of this elementary matrix is clearly the elementary matrix obtained from In by the elementary row operation that reverses the original elementary row operation.

Suppose that an n × n matrix B can be obtained from an n × n matrix A by a finite sequence of elementary row operations. Then since these elementary row operations can be reversed, the matrix A can be obtained from the matrix B by a finite sequence of elementary row operations.

Definition. An n × n matrix A is said to be row equivalent to an n × n matrix B if there exist a finite number of elementary n × n matrices E1, ..., Ek such that B = Ek · · · E1A.

Remark. Note that B = Ek · · · E1A implies that A = E1^-1 · · · Ek^-1 B. It follows that if A is row equivalent to B, then B is row equivalent to A. We usually say that A and B are row equivalent.

The following result gives conditions equivalent to the invertibility of an n × n matrix A.

PROPOSITION 2N. Suppose that

A = ( a11 ... a1n )
    ( ...     ... )
    ( an1 ... ann )

and that

x = ( x1 )            ( 0 )
    ( .. )   and  0 = ( . )
    ( xn )            ( 0 )

are n × 1 matrices, where x1, ..., xn are variables.
(a) Suppose that the matrix A is invertible. Then the system Ax = 0 of linear equations has only the trivial solution.
(b) Suppose that the system Ax = 0 of linear equations has only the trivial solution. Then the matrices A and In are row equivalent.
(c) Suppose that the matrices A and In are row equivalent. Then A is invertible.

Proof. (a) Suppose that x0 is a solution of the system Ax = 0. Then since A is invertible, we have

x0 = In x0 = (A^-1 A)x0 = A^-1(Ax0) = A^-1 0 = 0.

It follows that the trivial solution is the only solution.

(b) Note that if the system Ax = 0 of linear equations has only the trivial solution, then it can be reduced by elementary row operations to the system

x1 = 0, ..., xn = 0.

This is equivalent to saying that the array

( a11 ... a1n | 0 )
( ...     ... | . )
( an1 ... ann | 0 )

can be reduced by elementary row operations to the reduced row echelon form

( 1 ... 0 | 0 )
( ... ... | . )
( 0 ... 1 | 0 )

Hence the matrices A and In are row equivalent.


2.9 Matrix Transformation on the Plane

Let A be a 2 × 2 matrix with real entries. A matrix transformation T : R2 → R2 can be defined as follows: for every x = (x1, x2) ∈ R2, we write T(x) = y, where y = (y1, y2) ∈ R2 satisfies

( y1 ) = A ( x1 )
( y2 )     ( x2 )

Such a transformation is linear, in the sense that T(x′ + x″) = T(x′) + T(x″) for every x′, x″ ∈ R2 and T(cx) = cT(x) for every x ∈ R2 and every c ∈ R. To see this, simply observe that

A ( x1′ + x1″ ) = A ( x1′ ) + A ( x1″ )        and        A ( cx1 ) = cA ( x1 )
  ( x2′ + x2″ )     ( x2′ )     ( x2″ )                     ( cx2 )      ( x2 )

We shall study linear transformations in greater detail in a later chapter. Here we confine ourselves to looking at a few simple matrix transformations on the plane.

Example 2.9.1. The matrix

A = ( 1  0 )   satisfies   A ( x1 ) = ( 1  0 )( x1 ) = (  x1 )
    ( 0 −1 )                 ( x2 )   ( 0 −1 )( x2 )   ( −x2 )

for every (x1, x2) ∈ R2, and so represents reflection across the x1-axis, whereas the matrix

A = ( −1 0 )   satisfies   A ( x1 ) = ( −x1 )
    (  0 1 )                 ( x2 )   (  x2 )

for every (x1, x2) ∈ R2, and so represents reflection across the x2-axis. On the other hand, the matrix

A = ( −1  0 )   satisfies   A ( x1 ) = ( −x1 )
    (  0 −1 )                 ( x2 )   ( −x2 )

for every (x1, x2) ∈ R2, and so represents reflection across the origin, whereas the matrix

A = ( 0 1 )   satisfies   A ( x1 ) = ( x2 )
    ( 1 0 )                 ( x2 )   ( x1 )

for every (x1, x2) ∈ R2, and so represents reflection across the line x1 = x2. We give a summary in the table below:

Transformation                 Equations             Matrix
Reflection across x1-axis      y1 = x1, y2 = −x2     ( 1 0; 0 −1 )
Reflection across x2-axis      y1 = −x1, y2 = x2     ( −1 0; 0 1 )
Reflection across origin       y1 = −x1, y2 = −x2    ( −1 0; 0 −1 )
Reflection across x1 = x2      y1 = x2, y2 = x1      ( 0 1; 1 0 )

Example 2.9.2. Let k be a fixed positive real number. The matrix

A = ( k 0 )   satisfies   A ( x1 ) = ( kx1 )
    ( 0 k )                 ( x2 )   ( kx2 )

for every (x1, x2) ∈ R2, and so represents a dilation if k > 1 and a contraction if 0 < k < 1. On the other hand, the matrix

A = ( k 0 )   satisfies   A ( x1 ) = ( kx1 )
    ( 0 1 )                 ( x2 )   (  x2 )

for every (x1, x2) ∈ R2, and so represents an expansion in the x1-direction if k > 1 and a compression in the x1-direction if 0 < k < 1, whereas the matrix

A = ( 1 0 )   satisfies   A ( x1 ) = (  x1 )
    ( 0 k )                 ( x2 )   ( kx2 )

for every (x1, x2) ∈ R2, and so represents an expansion in the x2-direction if k > 1 and a compression in the x2-direction if 0 < k < 1. We give a summary in the table below:

Transformation                                              Equations             Matrix
Dilation or contraction by factor k > 0                     y1 = kx1, y2 = kx2    ( k 0; 0 k )
Expansion or compression in x1-direction by factor k > 0    y1 = kx1, y2 = x2     ( k 0; 0 1 )
Expansion or compression in x2-direction by factor k > 0    y1 = x1, y2 = kx2     ( 1 0; 0 k )

Example 2.9.3. Let k be a fixed real number. The matrix

A = ( 1 k )   satisfies   A ( x1 ) = ( x1 + kx2 )
    ( 0 1 )                 ( x2 )   (    x2    )

for every (x1, x2) ∈ R2, and so represents a shear in the x1-direction. For the case k = 1, we have the following:


[Diagram: a set of points and their images under the shear T with k = 1.]

For the case k = −1, we have the following:

[Diagram: a set of points and their images under the shear T with k = −1.]


(22)

Similarly, the matrix

A = ( 1 0 )   satisfies   A ( x1 ) = (    x1    )
    ( k 1 )                 ( x2 )   ( kx1 + x2 )

for every (x1, x2) ∈ R2, and so represents a shear in the x2-direction. We give a summary in the table below:

Transformation            Equations                 Matrix
Shear in x1-direction     y1 = x1 + kx2, y2 = x2    ( 1 k; 0 1 )
Shear in x2-direction     y1 = x1, y2 = kx1 + x2    ( 1 0; k 1 )

Example 2.9.4. For anticlockwise rotation by an angle θ, we have T(x1, x2) = (y1, y2), where

y1 + iy2 = (x1 + ix2)(cos θ + i sin θ),

and so

( y1 ) = ( cos θ  −sin θ )( x1 )
( y2 )   ( sin θ   cos θ )( x2 )

It follows that the matrix in question is given by

A = ( cos θ  −sin θ )
    ( sin θ   cos θ )

We give a summary in the table below:

Transformation                        Equations                                            Matrix
Anticlockwise rotation by angle θ     y1 = x1 cos θ − x2 sin θ, y2 = x1 sin θ + x2 cos θ   ( cos θ −sin θ; sin θ cos θ )
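The rotation matrix can be applied numerically; a minimal sketch:

```python
import math

def rotate(theta, x):
    # Anticlockwise rotation: y = A x with A = (cos t, -sin t; sin t, cos t).
    c, s = math.cos(theta), math.sin(theta)
    x1, x2 = x
    return (c * x1 - s * x2, s * x1 + c * x2)
```

For instance, rotating the point (1, 0) anticlockwise by the angle π/2 gives (up to floating-point rounding) the point (0, 1).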

We conclude this section by establishing the following result which reinforces the linearity of matrix transformations on the plane.

PROPOSITION 2T. Suppose that a matrix transformation T : R2 → R2 is given by an invertible matrix A. Then
(a) the image under T of a straight line is a straight line;
(b) the image under T of a straight line through the origin is a straight line through the origin; and
(c) the images under T of parallel straight lines are parallel straight lines.

Proof. Suppose that T(x1, x2) = (y1, y2). Since A is invertible, we have x = A^-1 y, where

x = ( x1 )          y = ( y1 )
    ( x2 )   and        ( y2 )

The equation of a straight line is given by αx1 + βx2 = γ or, in matrix form, by

( α β ) ( x1 ) = ( γ )
        ( x2 )

Hence

( α β ) A^-1 ( y1 ) = ( γ )
             ( y2 )

Let

( α′ β′ ) = ( α β ) A^-1.

Then

( α′ β′ ) ( y1 ) = ( γ )
          ( y2 )

In other words, the image under T of the straight line αx1 + βx2 = γ is α′y1 + β′y2 = γ, clearly another straight line. This proves (a). To prove (b), note that straight lines through the origin correspond to γ = 0. To prove (c), note that parallel straight lines correspond to different values of γ for the same values of α and β.

2.10 Application to Computer Graphics

Example 2.10.1. Consider the letter M in the diagram below:

[Diagram: the letter M on a coordinate grid.]


Following the boundary in the anticlockwise direction starting at the origin, the 12 vertices can be represented by the coordinates

( 0 )  ( 1 )  ( 1 )  ( 4 )  ( 7 )  ( 7 )  ( 8 )  ( 8 )  ( 7 )  ( 4 )  ( 1 )  ( 0 )
( 0 )  ( 0 )  ( 6 )  ( 0 )  ( 6 )  ( 0 )  ( 0 )  ( 8 )  ( 8 )  ( 2 )  ( 8 )  ( 8 )

Let us apply a matrix transformation to these vertices, using the matrix

A = ( 1 1/2 )
    ( 0  1  )

representing a shear in the x1-direction with factor 0.5, so that

A ( x1 ) = ( x1 + (1/2)x2 )
  ( x2 )   (      x2      )

for every (x1, x2) ∈ R2.


Then the images of the 12 vertices are respectively

( 0 )  ( 1 )  ( 4 )  ( 4 )  ( 10 )  ( 7 )  ( 8 )  ( 12 )  ( 11 )  ( 5 )  ( 5 )  ( 4 )
( 0 )  ( 0 )  ( 6 )  ( 0 )  (  6 )  ( 0 )  ( 0 )  (  8 )  (  8 )  ( 2 )  ( 8 )  ( 8 )

noting that

( 1 1/2 )( 0 1 1 4 7 7 8 8 7 4 1 0 ) = ( 0 1 4 4 10 7 8 12 11 5 5 4 )
( 0  1  )( 0 0 6 0 6 0 0 8 8 2 8 8 )   ( 0 0 6 0  6 0 0  8  8 2 8 8 )

In view of Proposition 2T, the image of any line segment that joins two vertices is a line segment that joins the images of the two vertices. Hence the image of the letter M under the shear looks like the following:

[Diagram: the sheared letter M.]


Next, we may wish to translate this image. However, a translation by a vector h = (h1, h2) ∈ R2 is a transformation of the form

( y1 ) = ( x1 ) + ( h1 )     for every (x1, x2) ∈ R2,
( y2 )   ( x2 )   ( h2 )

and this cannot be described by a matrix transformation on the plane. To overcome this deficiency, we introduce homogeneous coordinates. For every point (x1, x2) ∈ R2, we identify it with the point (x1, x2, 1) ∈ R3. Now we wish to translate a point (x1, x2) to (x1, x2) + (h1, h2) = (x1 + h1, x2 + h2), so we attempt to find a 3 × 3 matrix A∗ such that

( x1 + h1 )        ( x1 )
( x2 + h2 ) = A∗ ( x2 )     for every (x1, x2) ∈ R2.
(    1    )        (  1 )

It is easy to check that

( x1 + h1 )   ( 1 0 h1 )( x1 )
( x2 + h2 ) = ( 0 1 h2 )( x2 )     for every (x1, x2) ∈ R2.
(    1    )   ( 0 0  1 )(  1 )

It follows that using homogeneous coordinates, translation by a vector h = (h1, h2) ∈ R2 can be described by the matrix

A∗ = ( 1 0 h1 )
     ( 0 1 h2 )
     ( 0 0  1 )

Next, we may wish to translate this image However, a translation is a transformation by vector h = (h1, h2) ∈ R2 is of the form

 y1 y2  =  x1 x2  +  h1 h2 

for every (x1, x2) ∈ R2,

and this cannot be described by a matrix transformation on the plane To overcome this deficiency, we introduce homogeneous coordinates For every point (x1, x2) ∈ R2, we identify it with the point

(x1, x2, 1) ∈ R3 Now we wish to translate a point (x1, x2) to (x1, x2) + (h1, h2) = (x1+ h1, x2+ h2), so

we attempt to find a × matrix A∗ such that

xx12+ h+ h12

  = A∗

 xx12

1 

 for every (x1, x2) ∈ R2

It is easy to check that 

xx12+ h+ h12

  =

1 h0 h12 0

 

 xx12

1 

 for every (x1, x2) ∈ R2

It follows that using homogeneous coordinates, translation by vector h = (h1, h2) ∈ R2can be described

by the matrix

A∗=

1 h0 h12 0

(25)
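The claim that $A^*$ performs the translation can be checked by multiplying it out numerically. A minimal Python sketch, where the point $(5, 7)$ and the vector $h = (2, 3)$ are arbitrary illustrative choices:

```python
# Multiply a 3x3 matrix by a 3-vector, both stored as plain Python lists.
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def translation_matrix(h1, h2):
    # Homogeneous-coordinate matrix describing translation by (h1, h2).
    return [[1, 0, h1],
            [0, 1, h2],
            [0, 0, 1]]

# Translate the point (x1, x2) = (5, 7) by h = (2, 3) via homogeneous coordinates.
A_star = translation_matrix(2, 3)
image = mat_vec(A_star, [5, 7, 1])   # expect (5 + 2, 7 + 3, 1)
```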

Remark. Consider a matrix transformation $T : \mathbb{R}^2 \to \mathbb{R}^2$ on the plane given by a matrix
\[ A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}. \]
Suppose that $T(x_1, x_2) = (y_1, y_2)$. Then
\[ \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}. \]
Under homogeneous coordinates, the image of the point $(x_1, x_2, 1)$ is now $(y_1, y_2, 1)$. Note that
\[ \begin{pmatrix} y_1 \\ y_2 \\ 1 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & 0 \\ a_{21} & a_{22} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ 1 \end{pmatrix}. \]
It follows that homogeneous coordinates can also be used to study all the matrix transformations we have discussed in Section 2.9. By moving over to homogeneous coordinates, we simply replace the $2 \times 2$ matrix $A$ by the $3 \times 3$ matrix
\[ A^* = \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}. \]
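The embedding in the remark can be sketched directly: the 3×3 matrix acts on $(x_1, x_2, 1)$ exactly as the 2×2 matrix acts on $(x_1, x_2)$. The sample matrix and point below are arbitrary illustrative choices:

```python
def mat_vec(M, v):
    # Multiply a square matrix by a vector, both as plain Python lists.
    n = len(M)
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def embed(A):
    # Replace the 2x2 matrix A by the 3x3 matrix (A 0; 0 1).
    return [[A[0][0], A[0][1], 0],
            [A[1][0], A[1][1], 0],
            [0,       0,       1]]

A = [[2, 1], [0, 3]]                # arbitrary 2x2 example
x = [4, 5]

y = mat_vec(A, x)                   # ordinary coordinates
y_h = mat_vec(embed(A), x + [1])    # homogeneous coordinates

# The first two entries of y_h agree with y, and the last entry stays 1.
```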

Example 2.10.2. Returning to Example 2.10.1 of the letter M, the 12 vertices are now represented by homogeneous coordinates, put in an array in the form of a $3 \times 12$ matrix whose first two rows carry the coordinates of the vertices and whose third row consists entirely of 1's. The $2 \times 2$ shear matrix $A$ of Example 2.10.1 is now replaced by the $3 \times 3$ matrix
\[ A^* = \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}. \]
Note that multiplying the array of homogeneous coordinates on the left by $A^*$ gives an array whose first two rows are precisely the images of the 12 vertices under the shear, computed in Example 2.10.1, and whose third row again consists entirely of 1's. Next, let us consider a translation by the vector $(2, 3)$. The matrix under homogeneous coordinates for this translation is given by
\[ B^* = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{pmatrix}. \]

Note that multiplying the $3 \times 12$ array of homogeneous coordinates of the 12 vertices on the left by the product $B^*A^*$ applies the shear first and the translation second, and yields the homogeneous coordinates of the images of the 12 vertices. Discarding the third row of 1's gives the coordinates of the images in $\mathbb{R}^2$, displayed as a $2 \times 12$ array. Hence the image of the letter M under the shear followed by translation looks like the following:
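The essential point is that composition of transformations corresponds to multiplication of their homogeneous matrices: applying $A^*$ and then $B^*$ to a vertex gives the same result as applying the single matrix $B^*A^*$. A Python sketch, where the shear factor 1/2 and the vertex are illustrative assumptions (the translation by $(2, 3)$ is as in the example):

```python
def mat_mul(P, Q):
    # Product of two 3x3 matrices stored as lists of rows.
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

A_star = [[1, 0.5, 0],   # shear in the x1-direction (factor 1/2 is assumed)
          [0, 1,   0],
          [0, 0,   1]]
B_star = [[1, 0, 2],     # translation by (2, 3)
          [0, 1, 3],
          [0, 0, 1]]

v = [6, 8, 1]            # an illustrative vertex in homogeneous coordinates

step_by_step = mat_vec(B_star, mat_vec(A_star, v))   # shear, then translate
combined = mat_vec(mat_mul(B_star, A_star), v)       # one combined matrix
# Both computations give the same sheared-then-translated point.
```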

Example 2.10.3. Under homogeneous coordinates, the transformation representing a reflection across the $x_1$-axis, followed by a shear by factor 2 in the $x_1$-direction, followed by an anticlockwise rotation by $90^\circ$, and followed by a translation by the vector $(2, -1)$, has matrix
\[ \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 2 \\ 1 & -2 & -1 \\ 0 & 0 & 1 \end{pmatrix}. \]
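The product can be checked by direct multiplication. The following Python fragment multiplies the four homogeneous matrices in the order in which the transformations are applied (reflection first, hence rightmost in the product), with the shear factor taken to be 2, consistent with the resulting matrix:

```python
def mat_mul(P, Q):
    # Product of two 3x3 matrices stored as lists of rows.
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

reflection = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]    # across the x1-axis
shear      = [[1, 2, 0], [0, 1, 0], [0, 0, 1]]     # factor 2 in the x1-direction
rotation   = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]    # anticlockwise by 90 degrees
translate  = [[1, 0, 2], [0, 1, -1], [0, 0, 1]]    # by the vector (2, -1)

# Reflection is applied first, so it sits rightmost in the product.
M = mat_mul(translate, mat_mul(rotation, mat_mul(shear, reflection)))
```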

2.11 Complexity of a Non-Homogeneous System

Consider the problem of solving a system of linear equations of the form Ax = b, where A is an n × n invertible matrix. We are interested in the number of operations required to solve such a system. By an operation, we mean interchanging, adding or multiplying two real numbers.


One way of solving the system $Ax = b$ is to write down the augmented matrix
\[ \begin{pmatrix} a_{11} & \cdots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{n1} & \cdots & a_{nn} & b_n \end{pmatrix}, \tag{7} \]
and then convert it to reduced row echelon form by elementary row operations. The first step is to reduce it to row echelon form:

(I) First of all, we may need to interchange two rows in order to ensure that the top left entry in the array is non-zero. This requires $n + 1$ operations.

(II) Next, we need to multiply the new first row by a constant in order to make the top left pivot entry equal to 1. This requires $n + 1$ operations, and the array now looks like
\[ \begin{pmatrix} 1 & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & b_n \end{pmatrix}. \]
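Steps (I) and (II) can be made concrete in code. The sketch below performs these first two steps on a small augmented matrix while counting operations in the sense above (interchanging or multiplying two real numbers); the bookkeeping convention of one operation per entry touched follows the counts stated in the text:

```python
def first_two_steps(aug):
    # aug: n x (n+1) augmented matrix as a list of rows.
    # Performs step (I), a row interchange if needed, and step (II),
    # normalising the pivot, while counting operations.
    n = len(aug)
    ops = 0

    # Step (I): if the top left entry is zero, interchange with a row below it.
    if aug[0][0] == 0:
        for i in range(1, n):
            if aug[i][0] != 0:
                aug[0], aug[i] = aug[i], aug[0]
                ops += n + 1          # each of the n+1 entries is interchanged
                break

    # Step (II): multiply the first row by a constant to make the pivot 1.
    c = 1 / aug[0][0]
    aug[0] = [c * entry for entry in aug[0]]
    ops += n + 1                      # each of the n+1 entries is multiplied

    return aug, ops

aug = [[0.0, 2.0, 4.0],
       [2.0, 1.0, 5.0]]              # a 2 x 3 augmented matrix, so n = 2
reduced, ops = first_two_steps(aug)
# After the interchange and the normalisation, the first row is (1, 0.5, 2.5),
# and 2(n + 1) = 6 operations have been counted.
```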


Posted: 20/04/2021
