Solution Manual for Applied Linear Algebra by Olver

Solutions — Chapter 1

1.1.1
(a) Reduce the system to x − y = 7, 3y = −4; then use Back Substitution to solve for x = 17/3, y = −4/3.
(b) Reduce the system to u + v = 5, −(5/2)v = 5/2; then use Back Substitution to solve for u = 1, v = −1.
(c) Reduce the system to p + q − r = 0, −3q + r = 3, −r = 6; then solve for p = 5, q = −11, r = −6.
(d) Reduce the system to u − v + w = 2, −(2/3)v + w = 2, −w = 0; then solve for u = 1/3, v = −3/4, w = 0.
(e) Reduce the system to x1 + x2 − x3 = 9, (1/5)x2 − (5/2)x3 = 2/5, x3 = −2; then solve for x1 = 4, x2 = −4, x3 = −1.
(f) Reduce the system to x + z − w = −3, −y + w = 1, −z − (1/6)w = −4, w = 6; then solve for x = 2, y = 2, z = −3, w = 6.
(g) Reduce the system to x1 + x2 = 1, (3/8)x2 + x3 = 3/2, (1/2)x3 + x4 = ⋯, (1/2)x4 = ⋯; then solve for x1, x2, x3, x4, each a fraction with denominator 11.

1.1.2 Plugging the given values of x, y, and z into the equations gives a + 2b − c = 3, a − 2 − c = 1, 1 + 2b + c = ⋯. Solving this system yields a = 4, b = 0, and c = ⋯.

1.1.3
(a) With Forward Substitution, we just start with the top equation and work down. Thus 2x = −6, so x = −3. Plugging this into the second equation gives 12 + 3y = 3, and so y = −3. Plugging the values of x and y into the third equation yields −3 + 4(−3) − z = 7, and so z = −22.
(b) We will get a diagonal system with the same solution.
(c) Start with the last equation and, assuming the coefficient of the last variable is ≠ 0, use the operation to eliminate the last variable in all the preceding equations. Then, again assuming the coefficient of the next-to-last variable is non-zero, eliminate it from all but the last two equations, and so on.
(d) For the systems in Exercise 1.1.1, the method works in all cases except (c) and (f). Solving the reduced system by Forward Substitution reproduces the same solution (as it must): (a) the system reduces to (3/2)x = 17/2, x + y = ⋯; (b) the reduced system is (1/5)u = ⋯, u − v = ⋯; (c) the method doesn't work, since r doesn't appear in the last equation; (d) reduce the system to (3/2)u = 1/2, (7/2)u − v = 5/2, u − w = −1; (e) reduce the system to (3/2)x1 = 8/3, x1 + x2 = 4, x1 + x2 + x3 = −1; (f) doesn't work, since after the first reduction z doesn't occur in the next-to-last equation; (g) reduce the system to (55/21)x1 = ⋯, ⋯ x2 + x3 = ⋯, ⋯ x3 + x4 = ⋯.

1.2.1 (a) ⋯ × 4, (b) 7, (c) 6, (d) ( −2 ⋯ ), (e) the column vector with entries ⋯, −6.

1.2.2 (a) A, (b)–(f) the indicated small matrices and vectors ( ⋯ ).

1.2.3 x = −1/3, y = 4/3, z = −1/3, w = 2/3.

1.2.4 (a)–(g) Each system is rewritten in matrix form A x = b, with the coefficient matrix A, vector of unknowns x, and right-hand side b as displayed ( ⋯ ).

1.2.5
(a) ⋯ x − y = −1, ⋯ x + y = −3; the solution is x = −6/5, y = −1/5.
(b) u + w = −1, u + v = −1, v + w = 2; the solution is u = −2, v = 1, w = 1.
(c) ⋯ x1 − x3 = 1, −2x1 − x2 = 0, x1 + x2 − x3 = ⋯; the solution is x1 = 1/5, x2 = −2/5, x3 = −2/5.
(d) x + y − z − w = 0, −x + z + w = 4, x − y + z = 1, y − z + w = ⋯; the solution is x = 2, y = 1, z = 0, w = ⋯.

1.2.6 (a) I is the 4 × 4 identity matrix and O the 4 × 4 zero matrix, as displayed. (b) I + O = I, and I O = O I = O. No, it does not.
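All of the computations above follow the same two-step pattern: eliminate forward to a triangular system, then apply Back Substitution. A minimal runnable sketch in Python with NumPy (the 3 × 3 system here is illustrative, not one of the exercise systems):

    import numpy as np

    A = np.array([[1.0, 1.0, -1.0],
                  [2.0, -1.0, 1.0],
                  [1.0, 2.0, 3.0]])
    b = np.array([0.0, 3.0, 7.0])

    U, c = A.copy(), b.copy()
    n = len(b)
    for j in range(n):                  # forward elimination (no pivoting)
        for i in range(j + 1, n):
            m = U[i, j] / U[j, j]       # multiplier; assumes a nonzero pivot
            U[i, j:] -= m * U[j, j:]
            c[i] -= m * c[j]

    x = np.zeros(n)
    for i in range(n - 1, -1, -1):      # Back Substitution
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]

    print(x, np.allclose(A @ x, b))     # residual check should print True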
1.2.7 (a) undefined, (b) undefined, (c) ( ⋯ ), (d) undefined, (e) undefined, (f) the 3 × 3 matrix with entries drawn from 3, 11, −12, 1, 9, −2, 14, −17, 12, −3, 28, (g) undefined, (h) the column vector ( −8 ⋯ ), (i) undefined.

1.2.8 Only the third pair commute.

1.2.9 1, 6, 11, 16.

1.2.10 (a) ( ⋯ ), (b) ( ⋯ ).

1.2.11 (a) True, (b) true.

1.2.12
(a) Let A = ( x y ; z w ). Then A D = ( ax by ; az bw ), while D A = ( ax ay ; bz bw ), so if a ≠ b these are equal if and only if y = z = 0.
(b) Every 2 × 2 matrix commutes with a I.
(c) Only diagonal matrices.
(d) Any matrix of the form A = ( x 0 0 ; y ⋯ ⋯ ; z u v ).
(e) Let D = diag(d1, …, dn). The (i, j) entry of A D is aij dj. The (i, j) entry of D A is di aij. If di ≠ dj, this requires aij = 0, and hence, if all the di's are different, then A is diagonal.

1.2.13 We need A of size m × n and B of size n × m for both products to be defined. Further, A B has size m × m while B A has size n × n, so the sizes agree if and only if m = n.

1.2.14 B = ( x y ; ⋯ x ), where x and y are arbitrary.

1.2.15 (a) (A + B)^2 = (A + B)(A + B) = A A + A B + B A + B B = A^2 + 2 A B + B^2, since A B = B A. (b) An example: A = ( ⋯ ), B = ( ⋯ ).

1.2.16 If A B is defined and A is an m × n matrix, then B is an n × p matrix and A B is an m × p matrix; on the other hand, if B A is defined, we must have p = m, and B A is an n × n matrix. Now, since A B = B A, we must have p = m = n.

1.2.17 A O_{n×p} = O_{m×p}, O_{l×m} A = O_{l×n}.

1.2.18 The (i, j) entry of the matrix equation c A = O is c aij = 0. If any aij ≠ 0 then c = 0, so the only possible way that c ≠ 0 is if all aij = 0, and hence A = O.

1.2.19 False: for example, ( 1 0 ; 0 0 ) ( 0 0 ; 0 1 ) = ( 0 0 ; 0 0 ).

1.2.20 False, unless they commute: A B = B A.

1.2.21 Let v be the column vector with 1 in its j-th position and all other entries 0. Then A v is the same as the j-th column of A. Thus, the hypothesis implies all columns of A are 0, and hence A = O.

1.2.22 (a) A must be a square matrix. (b) By associativity, A A^2 = A A A = A^2 A = A^3. (c) The naïve answer is n − 1. A more sophisticated answer is to note that you can compute A^2 = A A, A^4 = A^2 A^2, A^8 = A^4 A^4, and, by induction, A^{2^r} with only r matrix multiplications. More generally, if the binary expansion of n has r + 1 digits, with s nonzero digits, then we need r + s − 1 multiplications. For example, A^13 = A^8 A^4 A since 13 is 1101 in binary, for a total of 5 multiplications: 3 to compute A^2, A^4, and A^8, and 2 more to multiply them together to obtain A^13.

1.2.23 A = ( ⋯ ).

1.2.24 (a) If the i-th row of A has all zero entries, then the (i, j) entry of A B is ai1 b1j + · · · + ain bnj = 0 b1j + · · · + 0 bnj = 0, which holds for all j, so the i-th row of A B will have all 0's. (b) An example: A = ( ⋯ ), B = ( ⋯ ), for which B A = ( ⋯ ).

1.2.25 The same solution X = ( ⋯ ) in both cases.

1.2.26 (a) ( ⋯ ), (b) ( ⋯ ). They are not the same.

1.2.27 (a) X = O. (b) Yes: for instance, A = ( ⋯ ), B = ( ⋯ ), X = ( ⋯ ).
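The repeated-squaring scheme of Exercise 1.2.22(c) is short to implement. A sketch (the helper name mat_pow is ours, not from the text); it computes A^13 as A^8 A^4 A, exactly as in the example:

    import numpy as np

    def mat_pow(A, n):
        # Exercise 1.2.22(c): A**n via the binary expansion of n,
        # using about log2(n) matrix multiplications instead of n - 1.
        result = np.eye(A.shape[0])
        square = A.astype(float)
        while n > 0:
            if n & 1:                  # current binary digit of n is 1
                result = result @ square
            square = square @ square   # A, A^2, A^4, A^8, ...
            n >>= 1
        return result

    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    print(np.allclose(mat_pow(A, 13), np.linalg.matrix_power(A, 13)))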
1.2.28 A = (1/c) I when c ≠ 0. If c = 0 there is no solution.

1.2.29
(a) The i-th entry of A z is 1 ai1 + 1 ai2 + · · · + 1 ain = ai1 + · · · + ain, which is the i-th row sum.
(b) Each row of W has n − 1 entries equal to 1/n and one entry equal to (1 − n)/n, and so its row sums are (n − 1)/n + (1 − n)/n = 0. Therefore, by part (a), W z = 0. Consequently, the row sums of B = A W are the entries of B z = A W z = A 0 = 0, and the result follows.
(c) z = ( 1 ; 1 ; 1 ), and A z, W, B = A W, and B z work out as displayed ( ⋯ ).

1.2.30 Assume A has size m × n, B has size n × p, and C has size p × q. The (k, j) entry of B C is Σ_{l=1}^{p} bkl clj, so the (i, j) entry of A (B C) is Σ_{k=1}^{n} aik ( Σ_{l=1}^{p} bkl clj ) = Σ_{k=1}^{n} Σ_{l=1}^{p} aik bkl clj. On the other hand, the (i, l) entry of A B is Σ_{k=1}^{n} aik bkl, so the (i, j) entry of (A B) C is Σ_{l=1}^{p} ( Σ_{k=1}^{n} aik bkl ) clj = Σ_{k=1}^{n} Σ_{l=1}^{p} aik bkl clj. The two results agree, and so A (B C) = (A B) C. Remark: a more sophisticated, simpler proof can be found in Exercise 7.1.44.

1.2.31
(a) We need A B and B A to have the same size, and so this follows from Exercise 1.2.13.
(b) A B − B A = O if and only if A B = B A.
(c) (i) ( ⋯ ), (ii) ( ⋯ ), (iii) ( ⋯ ).
(d) (i) [ c A + d B, C ] = (c A + d B) C − C (c A + d B) = c (A C − C A) + d (B C − C B) = c [ A, C ] + d [ B, C ]; similarly, [ A, c B + d C ] = A (c B + d C) − (c B + d C) A = c (A B − B A) + d (A C − C A) = c [ A, B ] + d [ A, C ].
(ii) [ A, B ] = A B − B A = − (B A − A B) = − [ B, A ].
(iii) [ [ A, B ], C ] = (A B − B A) C − C (A B − B A) = A B C − B A C − C A B + C B A;
[ [ C, A ], B ] = (C A − A C) B − B (C A − A C) = C A B − A C B − B C A + B A C;
[ [ B, C ], A ] = (B C − C B) A − A (B C − C B) = B C A − C B A − A B C + A C B.
Summing the three expressions produces O.

1.2.32 (a) (i) 4, (ii) 0. (b) tr(A + B) = Σ_{i=1}^{n} (aii + bii) = Σ_{i} aii + Σ_{i} bii = tr A + tr B. (c) The diagonal entries of A B are Σ_{j} aij bji, so tr(A B) = Σ_{i} Σ_{j} aij bji; the diagonal entries of B A are Σ_{i} bji aij, so tr(B A) = Σ_{j} Σ_{i} bji aij. These double summations are clearly equal. (d) tr C = tr(A B − B A) = tr A B − tr B A = 0 by parts (b–c). (e) Yes, by the same proof.

1.2.33 If b = A x, then bi = ai1 x1 + ai2 x2 + · · · + ain xn for each i. On the other hand, cj = ( a1j, a2j, …, anj )^T, and so the i-th entry of the right-hand side of (1.13) is x1 ai1 + x2 ai2 + · · · + xn ain, which agrees with the expression for bi.

1.2.34 (a) This follows by direct computation. (b) (i)–(iii) the displayed products, each expanded as a sum of outer products, column times row ( ⋯ ). (c) If we set B = x, where x is an n × 1 matrix, then we obtain (1.14). (d) The (i, j) entry of A B is Σ_{k=1}^{n} aik bkj. On the other hand, the (i, j) entry of ck rk equals the product of the i-th entry of ck, namely aik, with the j-th entry of rk, namely bkj. Summing these entries, aik bkj, over k yields the usual matrix product formula.
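The column-times-row decomposition in Exercise 1.2.34 is easy to verify numerically. A quick sketch, assuming NumPy; the random test matrices are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-3, 4, size=(3, 4)).astype(float)
    B = rng.integers(-3, 4, size=(4, 2)).astype(float)

    # Sum of outer products: (k-th column of A) times (k-th row of B).
    S = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
    print(np.allclose(S, A @ B))       # True: both formulas give A B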
1.2.35 (a) p(A) = A^3 − 3A + I, q(A) = 2A^2 + I. (b) p(A) = ( ⋯ ), q(A) = ( ⋯ ). (c) p(A) q(A) = (A^3 − 3A + I)(2A^2 + I) = 2A^5 − 5A^3 + 2A^2 − 3A + I, while p(x) q(x) = 2x^5 − 5x^3 + 2x^2 − 3x + 1. (d) True, since powers of A mutually commute. For the particular matrix from (b), p(A) q(A) = q(A) p(A) = ( ⋯ ).

1.2.36 (a) Check that S^2 = A by direct computation. Another example: S = ( ⋯ ); or, more generally, −1 times any of the matrices in part (c). (b) S^2 is only defined if S is square. (c) Any of the matrices ( ±1 0 ; 0 ±1 ), or ( a b ; c −a ), where a is arbitrary and b c = 1 − a^2. (d) Yes: for example ( ⋯ ).

1.2.37 (a) M has size (i + j) × (k + l). (b) M^2 = ( ⋯ ). (c) Since matrix addition is done entry-wise, adding the entries of each block is the same as adding the blocks. (d) X has size k × m, Y has size k × n, Z has size l × m, and W has size l × n. Then A X + B Z will have size i × m. Its (p, q) entry is obtained by multiplying the p-th row of M times the q-th column of P, which is ap1 x1q + · · · + api xiq + bp1 z1q + · · · + bpl zlq, and equals the sum of the (p, q) entries of A X and B Z. A similar argument works for the remaining three blocks. (e) For example, if X = (1), Y = ( 2 0 ), Z = ( ⋯ ), W = ( ⋯ ), then P = ( ⋯ ) and M P = ( ⋯ ); the individual block products are as displayed.

1.3.1
(a) Applying the row operation 2R1 + R2 reduces the system; Back Substitution yields x2 = 2, x1 = −10.
(b) After the row operation − ⋯ R1 + R2, Back Substitution yields w = 2, z = ⋯.
(c) After the row operations 4R1 + R3 and (2/3)R2 + R3, Back Substitution yields z = 3, y = 16, x = 29.
(d) After 2R1 + R2, −3R1 + R3, and one further row operation, Back Substitution yields r = 3, q = 2, p = −1.
(e) The 4 × 4 augmented matrix reduces as displayed; the solution is x4 = −3, x3 = −⋯, x2 = −1, x1 = −4.
(f) The solution is w = 2, z = 0, y = −1, x = ⋯.

1.3.2 (a) ⋯ x + y = 2, − ⋯ x − y = −1; solution: x = 4, y = −5. (b) ⋯ x + y = −3, − ⋯ x + y + z = −6, − ⋯ x − z = 1; solution: x = 1, y = −2, z = −1. (c) ⋯ x − y + z = −3, −2 y − z = −1, ⋯ x − y + z = −3; solution: x = 3/2, y = 3, z = −1. (d) ⋯ x − y = 0, − ⋯ x + y − z = 1, − y + z − w = 1, − z + w = 0; solution: x = 1, y = 2, z = 2, w = ⋯.

1.3.3 (a) x = 17/11, y = −3/4; (b) u = 1, v = −1; (c) u = 2/3, v = −1/3, w = 1/6; (d) x1 = 10/19, x2 = −⋯, x3 = −⋯; (e) p = −⋯, q = ⋯, r = ⋯; (f) a = ⋯, b = 0, c = ⋯, d = −⋯; (g) x = 1/3, y = 6/7, z = −3/8, w = 9/2.

1.3.4 Solving the three equations ⋯ = a + b + c obtained from the data points yields a = −1, b = 1, c = 6, so y = −x^2 + x + 6.

1.3.5 (a) Regular. (b) Not regular. (c) Regular: the reduction to upper triangular form is as displayed ( ⋯ ). (d) Not regular ( ⋯ ). (e) Regular ( ⋯ ).
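The commuting-powers claim in Exercise 1.2.35(d) above is easy to spot-check. A small sketch using Horner's rule to evaluate a polynomial at a matrix (the 2 × 2 test matrix is made up for the demo, not taken from part (b)):

    import numpy as np

    def poly_at(coeffs, A):
        # Horner's rule: coeffs[0]*A^d + ... + coeffs[d]*I, d = len(coeffs)-1.
        P = np.zeros_like(A)
        for c in coeffs:
            P = P @ A + c * np.eye(A.shape[0])
        return P

    A = np.array([[1.0, 2.0], [-1.0, 3.0]])    # illustrative matrix
    p = [1, 0, -3, 1]                          # x^3 - 3x + 1
    q = [2, 0, 1]                              # 2x^2 + 1
    # Powers of a single matrix commute, so p(A) q(A) = q(A) p(A):
    print(np.allclose(poly_at(p, A) @ poly_at(q, A),
                      poly_at(q, A) @ poly_at(p, A)))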
1.3.6
(a) One row operation reduces the augmented matrix; use Back Substitution to obtain the solution y = 1, x = −i.
(b) The 3 × 3 complex system reduces as displayed; solution: z = i, y = −⋯ − (2/3) i, x = ⋯ + i.
(c) Solution: y = −1/4 + (3/4) i, x = 1/2.
(d) Solution: z = −1/2 − (1/2) i, y = −2/5 + (1/2) i, x = 5/2 + i.

1.3.7 (a) x = 3, y = −4, z = 1, u = 6, v = −⋯. (b) x = 2/3, y = −4, z = 1/3, u = 6, v = −⋯. (c) You only have to divide by each coefficient to find the solution.

1.3.8 x = 0 is the (unique) solution, since A 0 = 0.

1.3.9 Back Substitution:

    start
        set x_n = c_n / u_nn
        for i = n − 1 to 1 with increment −1
            set x_i = (1 / u_ii) ( c_i − Σ_{j = i+1}^{n} u_ij x_j )
        next i
    end

1.3.10 Since
( a11 a12 ; 0 a22 ) ( b11 b12 ; 0 b22 ) = ( a11 b11  a11 b12 + a12 b22 ; 0  a22 b22 ),
( b11 b12 ; 0 b22 ) ( a11 a12 ; 0 a22 ) = ( a11 b11  a12 b11 + a22 b12 ; 0  a22 b22 ),
the matrices commute if and only if a11 b12 + a12 b22 = a22 b12 + a12 b11, or (a11 − a22) b12 = a12 (b11 − b22).

1.3.11 Clearly, any diagonal matrix is both lower and upper triangular. Conversely, A being lower triangular requires that aij = 0 for i < j; A upper triangular requires that aij = 0 for i > j. If A is both lower and upper triangular, aij = 0 for all i ≠ j, which implies that A is a diagonal matrix.

1.3.12 (a) Set lij = aij for i > j and lij = 0 for i ≤ j; dij = aij for i = j and dij = 0 for i ≠ j; uij = aij for i < j and uij = 0 for i ≥ j. (b) L = ( ⋯ ), D = ( ⋯ ), U = ( ⋯ ), as displayed.

1.3.13 (a) By direct computation, A^2 = ( ⋯ ), and so A^3 = O. (b) Let A have size n × n. By assumption, aij = 0 whenever i > j − 1. By induction, one proves that the (i, j) entries of A^k are all zero whenever i > j − k. Indeed, to compute the (i, j) entry of A^{k+1} = A A^k, you multiply the i-th row of A, whose first i entries are 0, by the j-th column of A^k, whose only possibly nonzero entries are among its first j − k − 1, all the rest being zero according to the induction hypothesis; therefore, if i > j − k − 1, every term in the sum producing this entry is 0, and the induction is complete. In particular, for k = n, every entry of A^n is zero, and so A^n = O.
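The pseudocode in Exercise 1.3.9 above translates almost line for line into Python. A sketch, assuming NumPy and a nonsingular upper triangular U:

    import numpy as np

    def back_substitute(U, c):
        # Direct transcription of the pseudocode in Exercise 1.3.9.
        n = len(c)
        x = np.zeros(n)
        x[n - 1] = c[n - 1] / U[n - 1, n - 1]
        for i in range(n - 2, -1, -1):      # i = n-1 down to 1 (1-based)
            x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    U = np.array([[2.0, 1.0, -1.0],
                  [0.0, 3.0, 2.0],
                  [0.0, 0.0, 4.0]])
    c = np.array([1.0, 5.0, 8.0])
    x = back_substitute(U, c)
    print(x, np.allclose(U @ x, c))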
(c) The matrix A = ( 1 1 ; −1 −1 ) has A^2 = O.

1.3.14 (a) Add −2 times the second row to the first row of a ⋯ × n matrix. (b) Add ⋯ times the first row to the second row. (c) Add −5 times the third row to the second row. (d) Add ⋯ times the first row to the third row. (e) Add −3 times the fourth row to the second row.

1.3.15 (a)–(d) The corresponding 4 × 4 elementary matrices, as displayed ( ⋯ ).

1.3.16 L3 L2 L1 = ( ⋯ ) and L1 L2 L3 = ( ⋯ ), as displayed.

1.3.17 E3 E2 E1 = ( ⋯ ), E1 E2 E3 = ( ⋯ ). The second is easier to predict, since its entries are the same as the corresponding entries of the Ei.

1.3.18
(a) Suppose that E adds c ≠ 0 times row i to row j ≠ i, while Ê adds d ≠ 0 times row k to row l ≠ k. If r1, …, rn are the rows, then the effect of E Ê is to replace (i) rj by rj + c ri + d rk for j = l; (ii) rj by rj + c ri and rl by rl + (c d) ri + d rj for j = k; (iii) rj by rj + c ri and rl by rl + d rk otherwise.
On the other hand, the effect of Ê E is to replace (i) rj by rj + c ri + d rk for j = l; (ii) rj by rj + c ri + (c d) rk and rl by rl + d rk for i = l; (iii) rj by rj + c ri and rl by rl + d rk otherwise.
Comparing results, we see that E Ê = Ê E whenever i ≠ l and j ≠ k.
(b) E1 E2 = E2 E1, E1 E3 = E3 E1, and E3 E2 = E2 E3.
(c) See the answer to part (a).

1.3.19 (a) Upper triangular; (b) both special upper and special lower triangular; (c) lower triangular; (d) special lower triangular; (e) none of the above.

1.3.20 (a) aij = 0 for all i ≠ j; (b) aij = 0 for all i > j; (c) aij = 0 for all i > j and aii = 1 for all i; (d) aij = 0 for all i < j; (e) aij = 0 for all i < j and aii = 1 for all i.

1.3.21 (a) Consider the product L M of two lower triangular n × n matrices. The last n − i entries in the i-th row of L are zero, while the first j − 1 entries in the j-th column of M are zero. So if i < j, each summand in the product of the i-th row times the j-th column is zero, and so all entries above the diagonal in L M are zero. (b) The i-th diagonal entry of L M is the product of the i-th diagonal entry of L times the i-th diagonal entry of M. (c) Special matrices have all 1's on the diagonal, and so, by part (b), does their product.

1.3.22 (a)–(i) Each matrix factors as A = L U, with L special lower triangular and U upper triangular, as displayed ( ⋯ ).

1.3.23 (a) Add ⋯ times the first row to the second row. (b) Add −2 times the first row to the third row. (c) Add ⋯ times the second row to the third row.

1.3.24 (a) ( ⋯ ). (b) (1) Add −2 times the first row to the second row. (2) Add −3 times the first row to the third row. (3) Add −5 times the first row to the fourth row. (4) Add −4 times the second row to the third row. (5) Add −6 times the second row to the fourth row. (6) Add −7 times the third row to the fourth row. (c) Use the order given in part (b).
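A bare-bones implementation of the A = L U factorization used throughout Exercises 1.3.22–1.3.24, valid for regular matrices (no pivoting); the function name lu_regular is ours:

    import numpy as np

    def lu_regular(A):
        # A = L U: L special lower triangular, U upper triangular.
        # No pivoting, so every pivot encountered must be nonzero.
        n = A.shape[0]
        L, U = np.eye(n), A.astype(float)
        for j in range(n):
            for i in range(j + 1, n):
                L[i, j] = U[i, j] / U[j, j]     # the multiplier l_ij
                U[i, j:] -= L[i, j] * U[j, j:]
        return L, U

    A = np.array([[2.0, 1.0], [4.0, 5.0]])
    L, U = lu_regular(A)
    print(L, U, np.allclose(L @ U, A), sep="\n")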
1.3.25 See equation (4.51) for the general case. For instance, in the 2 × 2 and 3 × 3 cases,

    ( 1 1 ; t1 t2 ) = ( 1 0 ; t1 1 ) ( 1 1 ; 0 t2 − t1 ),

    ( 1 1 1 ; t1 t2 t3 ; t1^2 t2^2 t3^2 )
        = ( 1 0 0 ; t1 1 0 ; t1^2 t1 + t2 1 ) ( 1 1 1 ; 0 t2 − t1 t3 − t1 ; 0 0 (t3 − t1)(t3 − t2) ),

and in the 4 × 4 case the last diagonal entry of U is (t4 − t1)(t4 − t2)(t4 − t3), with L containing entries such as t1 + t2 + t3 and t1^2 + t1 t2 + t2^2.

… see Exercise 1.7.8 and [11] for more sophisticated ways to speed up the computation.

1.7.3 Back Substitution requires about one half the number of arithmetic operations as multiplying a matrix times a vector, and so is twice as fast.

1.7.4 We begin by proving (1.61): 1 + 2 + 3 + · · · + (n − 1) = n(n − 1)/2 for n = 2, 3, …. For n = 2 both sides equal 1. Assume that (1.61) is true for n = k. Then 1 + 2 + 3 + · · · + (k − 1) + k = k(k − 1)/2 + k = k(k + 1)/2, so (1.61) is true for n = k + 1. Now the first equation in (1.62) follows if we note that 1 + 2 + 3 + · · · + (n − 1) + n = n(n + 1)/2. Next we prove the first equation in (1.60), namely 2 + 6 + 12 + · · · + (n − 1)n = (1/3)n^3 − (1/3)n for n = 2, 3, …. For n = 2 both sides equal 2. Assume that the formula is true for n = k. Then 2 + 6 + 12 + · · · + (k − 1)k + k(k + 1) = (1/3)k^3 − (1/3)k + k^2 + k = (1/3)(k + 1)^3 − (1/3)(k + 1), so the formula is true for n = k + 1, which completes the induction step. The proof of the second equation is similar; alternatively, one can use the first equation and (1.61) to show that

    Σ_{j=1}^{n} (n − j)^2 = Σ_{j=1}^{n} (n − j)(n − j + 1) − Σ_{j=1}^{n} (n − j)
        = ( (1/3)n^3 − (1/3)n ) − ( (1/2)n^2 − (1/2)n ) = (1/3)n^3 − (1/2)n^2 + (1/6)n.

1.7.5 We may assume that the matrix is regular, so P = I, since row interchanges have no effect on the number of arithmetic operations.
(a) First, according to (1.60), it takes (1/3)n^3 − (1/3)n multiplications and (1/3)n^3 − (1/2)n^2 + (1/6)n additions to factor A = L U. To solve L cj = ej by Forward Substitution, the first j − 1 entries of c are automatically 0, the j-th entry is 1, and then, for k = j + 1, …, n, we need k − j − 1 multiplications and the same number of additions to compute the k-th entry, for a total of (1/2)(n − j)(n − j − 1) multiplications and additions to find cj. Similarly, to solve U xj = cj for the j-th column of A^{−1} requires (1/2)n^2 + (1/2)n multiplications and, since the first j − 1 entries of cj are 0, also (1/2)n^2 − (1/2)n − j + 1 additions. The grand total is n^3 multiplications and n(n − 1)^2 additions.
(b) Starting with the large augmented matrix M = ( A | I ), it takes (1/2)n^2 (n − 1) multiplications and (1/2)n (n − 1)^2 additions to reduce it to triangular form ( U | C ), with U upper triangular and C lower triangular; then n^2 multiplications to obtain the special upper triangular form ( V | B ); and then (1/2)n^2 (n − 1) multiplications and, since B is upper triangular, (1/2)n (n − 1)^2 additions to produce the final matrix ( I | A^{−1} ). The grand total is n^3 multiplications and n(n − 1)^2 additions. Thus, both methods take the same amount of work.

1.7.6 Combining (1.60–61), we see that it takes (1/3)n^3 + (1/2)n^2 − (5/6)n multiplications and (1/3)n^3 − (1/3)n additions to reduce the augmented matrix to upper triangular form ( U | c ). Dividing the j-th row by its pivot requires n − j + 1 multiplications, for a total of (1/2)n^2 + (1/2)n multiplications, to produce the special upper triangular form ( V | e ). To produce the solved form ( I | d ) requires an additional (1/2)n^2 − (1/2)n multiplications and the same number of additions, for a grand total of (1/3)n^3 + (3/2)n^2 − (5/6)n multiplications and (1/3)n^3 + (1/2)n^2 − (5/6)n additions needed to solve the system.

1.7.7 Less efficient, by roughly a factor of ⋯; it takes ⋯ multiplications and ⋯ additions.
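The count (1/3)n^3 − (1/3)n quoted in Exercises 1.7.4–1.7.5 can be confirmed by literally counting the multiplications and divisions performed during elimination. A small sketch (the counting function is ours, not from the text):

    def elimination_mults(n):
        # One division per multiplier, plus n - j - 1 multiplications
        # to update the rest of row i, for each pair (j, i).
        count = 0
        for j in range(n):
            for i in range(j + 1, n):
                count += 1 + (n - j - 1)
        return count

    for n in (3, 5, 10, 50):
        print(n, elimination_mults(n), (n**3 - n) // 3)   # columns agree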
1.7.8
(a) D1 + D3 − D4 − D6 = (A1 + A4)(B1 + B4) + (A2 − A4)(B3 + B4) − (A1 + A2) B4 − A4 (B1 − B3) = A1 B1 + A2 B3 = C1;
D4 + D7 = (A1 + A2) B4 + A1 (B2 − B4) = A1 B2 + A2 B4 = C2;
D5 − D6 = (A3 + A4) B1 − A4 (B1 − B3) = A3 B1 + A4 B3 = C3;
D1 − D2 − D5 + D7 = (A1 + A4)(B1 + B4) − (A1 − A3)(B1 + B2) − (A3 + A4) B1 + A1 (B2 − B4) = A3 B2 + A4 B4 = C4.
(b) To compute D1, …, D7 requires 7 multiplications and 10 additions; then to compute C1, C2, C3, C4 requires an additional 8 additions, for a total of 7 multiplications and 18 additions. The traditional method for computing the product of two 2 × 2 matrices requires 8 multiplications and 4 additions.
(c) The method requires 7 multiplications and 18 additions of n × n blocks, for a total of 7n^3 multiplications and 7n^2(n − 1) + 18n^2 ≈ 7n^3 additions, versus 8n^3 multiplications and 8n^2(n − 1) ≈ 8n^3 additions for the direct method, so there is a savings by a factor of 8/7.
(d) Let µr denote the number of multiplications and αr the number of additions to compute the product of 2^r × 2^r matrices using Strassen's Algorithm. Then µr = 7 µ_{r−1}, while αr = 7 α_{r−1} + 18 · 2^{2r−2}, where the first term comes from multiplying the blocks, and the second from adding them. Clearly µr = 7^r, while an induction proves the formula αr = 6 (7^{r−1} − 4^{r−1}). Combining the operations, Strassen's Algorithm is faster by a factor of 2^{3r+1} / (µr + αr) = 2^{3r+1} / (13 · 7^{r−1} − 6 · 4^{r−1}), which, for r = 10, equals 4.1059, for r = 25, equals 30.3378, and, for r = 100, equals 678,234, which is a remarkable savings, but bear in mind that the matrices have size around 10^30, which is astronomical!
(e) One way is to use block matrix multiplication in the trivial form ( A O ; O I ) ( B O ; O I ) = ( C O ; O I ), where C = A B. Thus, choosing I to be an identity matrix of the appropriate size, the overall size of the block matrices can be arranged to be a power of 2, and then the reduction algorithm can proceed on the larger matrices. Another approach, trickier to program, is to break the matrix up into blocks of nearly equal size, since the Strassen formulas do not, in fact, require the blocks to have the same size, and even apply to rectangular matrices whose rectangular blocks are of compatible sizes.

1.7.9 (a)–(c) The displayed L U factorizations of the tridiagonal matrices and the resulting solutions ( ⋯ ).

1.7.10 (a)–(b) The displayed factorizations and solutions, including ( 2, 3, 3, ⋯ )^T ( ⋯ ). (c) The subdiagonal entries in L are l_{i+1,i} = −i/(i + 1) → −1, while the diagonal entries in U are u_ii = (i + 1)/i → 1.

1.7.11 (a)–(b) The displayed factorizations and solutions, including ( 13/29, 11/29, 20/29, ⋯ )^T ( ⋯ ).
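The D1–D7 block formulas of Exercise 1.7.8(a) above transcribe directly into a recursive routine. A sketch, assuming square matrices whose size is a power of 2, as discussed in part (e):

    import numpy as np

    def strassen(A, B):
        # Recursive transcription of the formulas in Exercise 1.7.8(a).
        n = A.shape[0]
        if n == 1:
            return A * B
        m = n // 2
        A1, A2, A3, A4 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
        B1, B2, B3, B4 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
        D1 = strassen(A1 + A4, B1 + B4)
        D2 = strassen(A1 - A3, B1 + B2)
        D3 = strassen(A2 - A4, B3 + B4)
        D4 = strassen(A1 + A2, B4)
        D5 = strassen(A3 + A4, B1)
        D6 = strassen(A4, B1 - B3)
        D7 = strassen(A1, B2 - B4)
        return np.block([[D1 + D3 - D4 - D6, D4 + D7],
                         [D5 - D6, D1 - D2 - D5 + D7]])

    rng = np.random.default_rng(1)
    A, B = rng.random((8, 8)), rng.random((8, 8))
    print(np.allclose(strassen(A, B), A @ B))   # True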
(c) The subdiagonal entries in L approach −(√2 − 1) = −.414214, and the diagonal entries in U approach 1 + √2 = 2.414214.

1.7.12 Both false: the displayed counterexamples ( ⋯ ).

1.7.13 The displayed factorizations ( ⋯ ). The pattern is that the only nonzero entries of L lie on the diagonal, the subdiagonal, and the last row, while the only nonzero entries of U lie on its diagonal, its superdiagonal, and its last column.

1.7.14 (a) Assuming regularity, the only row operations required to reduce A to upper triangular form U are, for each j = 1, …, n − 1, to add multiples of the j-th row to the (j + 1)-st and the n-th rows. Thus, the only nonzero entries below the diagonal in L are at positions (j + 1, j) and (n, j). Moreover, these row operations only affect zero entries in the last column, leading to the final form of U. (b) The displayed factorizations ( ⋯ ); the ⋯ × ⋯ case is a singular matrix.

1.7.15
(a) If the matrix A is tridiagonal, then the only nonzero elements in the i-th row are a_{i,i−1}, a_{ii}, a_{i,i+1}, so aij = 0 whenever | i − j | > 1.
(b) For example, the first displayed matrix of 1's has band width 2, while the second has band width 3.
(c) U is the matrix that results from applying row operations of type #1 to A, so the zero entries of A outside the band produce corresponding zero entries in U. On the other hand, if A is of band width k, then for each column of A we need to perform no more than k row replacements to obtain zeros below the diagonal. Thus L, which records these row replacements, will have at most k nonzero entries below the diagonal in each column.
(d) The displayed factorizations ( ⋯ ).
(e) ( 1/3, 1/3, 0, 0, 1/3, 1/3 )^T and ( 2/3, 2/3, −2/3, −2/3, 2/3, 2/3 )^T.
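For the tridiagonal matrices of Exercises 1.7.9–1.7.15, the L U factorization collapses to the O(n) forward/backward sweep below. A sketch, assuming the matrix is regular, so that no pivoting is needed:

    import numpy as np

    def solve_tridiagonal(a, d, c, b):
        # a = subdiagonal (length n-1), d = diagonal (n), c = superdiagonal (n-1).
        n = len(d)
        d, b = d.astype(float), b.astype(float)   # local copies
        for i in range(1, n):
            m = a[i - 1] / d[i - 1]     # subdiagonal entry of L
            d[i] -= m * c[i - 1]        # updated pivot (diagonal of U)
            b[i] -= m * b[i - 1]
        x = np.zeros(n)
        x[-1] = b[-1] / d[-1]
        for i in range(n - 2, -1, -1):  # back substitution
            x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
        return x

    n = 4
    a = c = -np.ones(n - 1)
    d = 2.0 * np.ones(n)
    b = np.array([1.0, 0.0, 0.0, 1.0])
    x = solve_tridiagonal(a, d, c, b)
    A = np.diag(d) + np.diag(a, -1) + np.diag(c, 1)
    print(x, np.allclose(A @ x, b))     # x = (1, 1, 1, 1)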
(f) For A we still need to compute k multipliers at each stage and update at most k entries, so we have fewer than (n − 1)(k + k^2) multiplications and (n − 1)k^2 additions. For the right-hand side we have to update at most k entries at each stage, so we have fewer than (n − 1)k multiplications and (n − 1)k additions. So we can get by with fewer than (n − 1)(2k + k^2) multiplications and (n − 1)(k + k^2) additions in total.
(g) The inverse of a banded matrix is not necessarily banded. For example, the inverse of the displayed tridiagonal matrix is a full matrix ( ⋯ ).

1.7.16 (a) ( −8, ⋯ )^T, (b) ( −10, −4.1 )^T, (c) ( −8.1, −4.1 )^T. (d) Partial pivoting reduces the effect of round-off errors and results in a significantly more accurate answer.

1.7.17 (a) x = 11/7 ≈ 1.57143, y = 1/7 ≈ .142857, z = −1/7 ≈ −.142857; (b) x = 3.357, y = .5, z = −.1429; (c) x = 1.572, y = .1429, z = −.1429.

1.7.18 (a) x = −2, y = 2, z = 3; (b) x = −7.3, y = 3.3, z = 2.9; (c) x = −1.9, y = 2., z = 2.9; (d) partial pivoting works markedly better, especially for the value of x.

1.7.19 (a) x = −220., y = 26, z = .91; (b) x = −190., y = 24, z = .84; (c) x = −210, y = 26, z = ⋯. (d) The exact solution is x = −213.658, y = 25.6537, z = .858586. Full pivoting is the most accurate. Interestingly, partial pivoting fares a little worse than regular elimination.

1.7.20 (a)–(d) The computed solutions, including ( 1.2, −2.6, −1.8 )^T, ( −.0769, .6154 )^T, and ( −.8000, −.5333, −1.2667 )^T ( ⋯ ).

1.7.21 (a)–(d) The computed solutions, including ( .0165, .3141, .2438, −.4628 )^T, ( −.732, −.002, .508 )^T, and ( −.9143, −.5429, −.3429, −2.1714 )^T ( ⋯ ).

1.7.22 The results are the same.

1.7.23 Gaussian Elimination With Full Pivoting:

    start
        for i = 1 to n
            set σ(i) = τ(i) = i
        next i
        for j = 1 to n
            if m_{σ(i), j} = 0 for all i ≥ j, stop; print "A is singular"
            choose i ≥ j and k ≥ j such that m_{σ(i), τ(k)} is maximal
            interchange σ(i) ↔ σ(j)
            interchange τ(k) ↔ τ(j)
            for i = j + 1 to n
                set z = m_{σ(i), τ(j)} / m_{σ(j), τ(j)}
                set m_{σ(i), τ(j)} = 0
                for k = j + 1 to n + 1
                    set m_{σ(i), τ(k)} = m_{σ(i), τ(k)} − z m_{σ(j), τ(k)}
                next k
            next i
        next j
    end

1.7.24 We let x ∈ R^n be generated using a random number generator, compute b = Hn x, and then solve Hn y = b for y. The error is e = x − y, and we use ‖e‖ = max | ei | as a measure of the overall error. Using Matlab, running Gaussian Elimination with pivoting:

    n    10           20        50         100
    e    .00097711    35.5111   318.3845   1771.1

Using Mathematica, running regular Gaussian Elimination:

    n    10            20        50        100
    e    .000309257    19.8964   160.325   404.625

In Mathematica, using the built-in LinearSolve function, which is more accurate since it uses a more sophisticated solution method when confronted with an ill-posed linear system:

    n    10           20        50       100
    e    .00035996    .620536   .65328   .516865

(Of course, the errors vary a bit each time the program is run, due to the randomness of the choice of x.)
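The Hilbert-matrix experiment of Exercise 1.7.24 is easy to reproduce. A sketch with NumPy; np.linalg.solve uses LAPACK's pivoted elimination, so the errors will resemble, but not exactly match, the tables above:

    import numpy as np

    rng = np.random.default_rng(0)
    for n in (10, 20, 50, 100):
        # Hilbert matrix: H[i, j] = 1 / (i + j - 1) with 1-based indices.
        H = 1.0 / (np.arange(1, n + 1)[:, None] + np.arange(n))
        x = rng.random(n)
        y = np.linalg.solve(H, H @ x)
        print(n, np.max(np.abs(x - y)))    # the error grows rapidly with n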
1.7.25
(a) H3^{−1} = ( 9 −36 30 ; −36 192 −180 ; 30 −180 180 ),
H4^{−1} = ( 16 −120 240 −140 ; −120 1200 −2700 1680 ; 240 −2700 6480 −4200 ; −140 1680 −4200 2800 ),
H5^{−1} = ( 25 −300 1050 −1400 630 ; −300 4080 −18900 26880 −12600 ; 1050 −18900 79380 −117600 56700 ; −1400 26880 −117600 179200 −88200 ; 630 −12600 56700 −88200 44100 ).
(b) The same results are obtained when using floating point arithmetic in either Mathematica or Matlab.
(c) The product K̃10 H10, where K̃10 is the computed inverse, is fairly close to the 10 × 10 identity matrix; the largest error is .0000801892 in Mathematica or .000036472 in Matlab. As for K̃20 H20, it is nowhere close to the identity matrix: in Mathematica the diagonal entries range from −1.34937 to 3.03755, while the largest (in absolute value) off-diagonal entry is 4.3505; in Matlab the diagonal entries range from −.4918 to 3.9942, while the largest (in absolute value) off-diagonal entry is −5.1994.

1.8.1 (a) Unique solution: ( −1/2, −3/4 )^T; (b) infinitely many solutions: ( 1 − z, −1 + z, z )^T, where z is arbitrary; (c) no solutions; (d) unique solution: ( 1, −2, 1 )^T; (e) infinitely many solutions: ( 5 − z, 1, z, 0 )^T, where z is arbitrary; (f) infinitely many solutions: ( 1, 0, 1, w )^T, where w is arbitrary; (g) unique solution: ( 2, 1, 3, 1 )^T.

1.8.2 (a) Incompatible; (b) incompatible; (c) ( 1, 0 )^T; (d) ( 1 + x2 − x3, x2, x3 )^T, where x2 and x3 are arbitrary; (e) ( −15, 23, −10 )^T; (f) ( −5 − x4, 19 − x4, −6 − x4, x4 )^T, where x4 is arbitrary; (g) incompatible.

1.8.3 The planes intersect at ( 1, 0, 0 ).

1.8.4 (i) a ≠ b and b ≠ 0; (ii) b = 0, a ≠ −2; (iii) a = b = 0, or a = −2 and b = c = 2.

1.8.5 (a) b = 2, c = −1, or b = ⋯; (b) b = 2, c = 1/2; (c) b = 2, c = −1, or b = 1/2, c = ⋯.

1.8.6 (a) ( 2 + i − (1/2)(1 + i) y, y, −i )^T, where y is arbitrary; (b) ( i z + ⋯ + i, i z + ⋯ − i, z )^T, where z is arbitrary; (c) ( ⋯ + i, −1 + i, i )^T; (d) ( −z − (3 + i) w, −z − (1 + i) w, z, w )^T, where z and w are arbitrary.

1.8.7 (a) 2, (b) 1, (c) 2, (d) 3, (e) 1, (f) 1, (g) 2, (h) 2, (i) ⋯.

1.8.8 (a)–(i) In each case the matrix is factored as displayed ( ⋯ ).

1.8.9 (a) x + y = 1, y + z = 0, x − z = ⋯, with solution x = 1, y = 0, z = ⋯; (b) ( ⋯ ).
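The trichotomy of Exercise 1.8.1 (unique, infinitely many, or no solutions) can be detected mechanically by comparing rank A with the rank of the augmented matrix. A sketch; the sample systems are illustrative:

    import numpy as np

    def classify(A, b):
        # Unique / infinitely many / no solutions, via ranks.
        rA = np.linalg.matrix_rank(A)
        rM = np.linalg.matrix_rank(np.column_stack([A, b]))
        if rM > rA:
            return "no solutions"
        return "unique" if rA == A.shape[1] else "infinitely many"

    A = np.array([[1.0, 1.0], [2.0, 2.0]])
    print(classify(A, np.array([1.0, 2.0])))   # infinitely many
    print(classify(A, np.array([1.0, 3.0])))   # no solutions
    print(classify(np.eye(2), np.ones(2)))     # unique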
1.8.10 (a)–(d) The displayed factorizations ( ⋯ ).

1.8.11 (a) x^2 + y^2 = 1, x^2 − y^2 = 2; (b) y = x^2, x − y + 2 = 0; solutions: x = 2, y = 4 and x = −1, y = 1; (c) y = x^3, x − y = 0; solutions: x = y = 0, x = y = −1, x = y = 1; (d) y = sin x, y = 0; solutions: x = k π, y = 0, for k any integer.

1.8.12 That variable does not appear anywhere in the system, and is automatically free (although it doesn't enter into any of the formulas, and so is, in a sense, irrelevant).

1.8.13 True. For example, take a matrix in row echelon form with r pivots, e.g., the matrix A with aii = 1 for i = 1, …, r, and all other entries equal to 0.

1.8.14 Both false. The zero matrix has no pivots, and hence has rank 0.

1.8.15
(a) Each row of A = v w^T is a scalar multiple, namely vi w, of the vector w. If necessary, we use a row interchange to ensure that the first row is non-zero. We then subtract the appropriate scalar multiple of the first row from all the others. This makes all rows below the first zero, and so the resulting matrix is in row echelon form with a single nonzero row, and hence a single pivot, proving that A has rank 1.
(b) (i) ( ⋯ ), (ii) ( ⋯ ), (iii) ( ⋯ ).
(c) The row echelon form of A must have a single nonzero row, say w^T. Reversing the elementary row operations that led to the row echelon form, at each step we either interchange rows or add multiples of one row to another. Every row of every matrix obtained in such a fashion must be some scalar multiple of w^T, and hence the original matrix A = v w^T, where the entries vi of the vector v are the indicated scalar multiples.

1.8.16, 1.8.17 ( ⋯ ).

1.8.18 Example: A = ( ⋯ ), B = ( ⋯ ), so that A B = ( ⋯ ) has rank 1, but B A = ( ⋯ ) has rank 0.

1.8.19
(a) Under elementary row operations, the reduced form of C will be ( U Z ), where U is the row echelon form of A. Thus, C has at least r pivots, namely the pivots in A. Examples as displayed: in the first, appending a column leaves the rank unchanged; in the second, it increases the rank.
(b) Applying elementary row operations, we can reduce E to ( U ; W ), where U is the row echelon form of A. If we can then use elementary row operations of type #1 to eliminate all entries of W, then the row echelon form of E has the same number of pivots as A, and so rank E = rank A. Otherwise, at least one new pivot appears in the rows below U, and rank E > rank A. Examples as displayed: appending rows can either leave the rank unchanged or increase it.
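A one-line numerical check of Exercise 1.8.15(a), with illustrative vectors:

    import numpy as np

    v = np.array([2.0, -1.0, 3.0])
    w = np.array([1.0, 4.0, 0.0, -2.0])
    A = np.outer(v, w)                  # A = v w^T: every row a multiple of w
    print(np.linalg.matrix_rank(A))     # prints 1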
1.8.20 By Proposition 1.39, A can be reduced to row echelon form U by a sequence of elementary row operations. Therefore, as in the proof of the L U decomposition, A = E1^{−1} E2^{−1} · · · EN^{−1} U, where E1^{−1}, …, EN^{−1} are the elementary matrices representing the row operations. If A is singular, then U = Z must have at least one all-zero row.

1.8.21 After row operations, the augmented matrix becomes N = ( U | c ), where the r = rank A nonzero rows of U contain the pivots of A. If the system is compatible, then the last m − r entries of c are all zero, and hence N is itself a row echelon matrix with r nonzero rows, and hence rank M = rank N = r. If the system is incompatible, then one or more of the last m − r entries of c are nonzero, and hence, by one more set of row operations, N is placed in row echelon form with a final pivot in row r + 1 of the last column. In this case, then, rank M = rank N = r + 1.

1.8.22 (a) x = ⋯ z, y = ⋯ z, where z is arbitrary; (b) x = −(3/2)z, y = (9/7)z, where z is arbitrary; (c) x = y = z = 0; (d) x = (1/3)z − (3/2)w, y = (5/6)z − (1/6)w, where z and w are arbitrary; (e) x = (1/3)z, y = z, w = 0, where z is arbitrary; (f) x = (3/2)w, y = (1/2)w, z = (1/2)w, where w is arbitrary.

1.8.23 (a) ( ⋯ )^T, where y is arbitrary; (b) ( −(6/5)z, (8/5)z, z )^T, where z is arbitrary; (c) ( ⋯ z + w, z − w, z, w )^T, where z and w are arbitrary; (d) ( z, −z, z )^T, where z is arbitrary; (e) ( −4z, z, z )^T, where z is arbitrary; (f) ( 0, 0, 0 )^T; (g) ( z, z, z, ⋯ )^T, where z is arbitrary; (h) ( y − w, y, w, w )^T, where y and w are arbitrary.

1.8.24 If U has only nonzero entries on the diagonal, it must be nonsingular, and so the only solution is x = 0. On the other hand, if there is a zero diagonal entry, then U cannot have n pivots, and so must be singular, and the system will admit nontrivial solutions.

1.8.25 For the homogeneous case x1 = x3, x2 = 0, where x3 is arbitrary. For the inhomogeneous case x1 = x3 + (1/4)(a + b), x2 = (1/2)(a − b), where x3 is arbitrary. The solution to the homogeneous version is a line going through the origin, while the inhomogeneous solution is a parallel line going through the point ( (1/4)(a + b), (1/2)(a − b), 0 )^T. The dependence on the free variable x3 is the same as in the homogeneous case.

1.8.26 For the homogeneous case x1 = −(1/6)x3 − (1/6)x4, x2 = −(2/3)x3 + (3/4)x4, where x3 and x4 are arbitrary. For the inhomogeneous case x1 = −(1/6)x3 − (1/6)x4 + (1/3)a + (1/6)b, x2 = −(2/3)x3 + (3/4)x4 + ⋯ a + ⋯ b, where x3 and x4 are arbitrary. The dependence on the free variables is the same as in the homogeneous case.

1.8.27 (a) k = ⋯ or k = −2; (b) k = ⋯ or k = 2; (c) k = ⋯.

1.9.1
(a) Regular matrix; reduces to upper triangular form U = ( ⋯ ), so the determinant is 2.
(b) Singular matrix; its row echelon form U = ( ⋯ ) has a zero row, so the determinant is 0.
(c) Regular matrix; reduces to upper triangular form U = ( ⋯ ), so the determinant is −3.
(d) Nonsingular matrix; reduces to upper triangular form U = ( ⋯ ) after one row interchange, so the determinant is 6.
(e) Upper triangular matrix, so the determinant is the product of the diagonal entries: −180.
(f) Nonsingular matrix; reduces to upper triangular form U = ( ⋯ ) after one row interchange, so the determinant is 40.
(g) Nonsingular matrix; reduces to upper triangular form U = ( ⋯ ) after one row interchange, so the determinant is 60.

1.9.2 det A = −2, det B = −11, and det A B = det ( ⋯ ) = 22.
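The pivot-based determinant computations of Exercise 1.9.1 follow the scheme below: det A equals (−1) raised to the number of row interchanges, times the product of the pivots. A sketch (partial pivoting added for numerical safety):

    import numpy as np

    def det_by_elimination(A):
        U = A.astype(float)
        n = U.shape[0]
        sign = 1.0
        for j in range(n):
            p = j + np.argmax(np.abs(U[j:, j]))   # choose a pivot row
            if U[p, j] == 0.0:
                return 0.0                         # singular matrix
            if p != j:
                U[[j, p]] = U[[p, j]]
                sign = -sign                       # a row interchange flips the sign
            for i in range(j + 1, n):
                U[i, j:] -= (U[i, j] / U[j, j]) * U[j, j:]
        return sign * np.prod(np.diag(U))

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(det_by_elimination(A), np.linalg.det(A))   # both print -2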
1.9.3 (a) A = ( ⋯ ). (b) By formula (1.82), 1 = det I = det(A^2) = det(A A) = det A det A = (det A)^2, so det A = ±1.

1.9.4 det A^2 = (det A)^2 = det A, and hence det A = 0 or 1.

1.9.5
(a) True. By Theorem 1.52, A is nonsingular, so, by Theorem 1.18, A^{−1} exists.
(b) False. For the displayed 2 × 2 matrix A with det A = −2, det 2A = 4 det A ≠ 2 det A. In general, det(2A) = 2^n det A.
(c) False. For the displayed A and B, det(A + B) = −1 ≠ det A + det B.
(d) True. det A^{−T} = det (A^{−1})^T = det A^{−1} = 1 / det A, where the second equality follows from Proposition 1.56, and the third equality follows from Proposition 1.55.
(e) True. det(A B^{−1}) = det A det B^{−1} = det A / det B, where the first equality follows from formula (1.82) and the second equality follows from Proposition 1.55.
(f) False in general. For the displayed A and B, det [ (A + B)(A − B) ] ≠ det(A^2 − B^2). However, if A B = B A, then det [ (A + B)(A − B) ] = det(A^2 − A B + B A − B^2) = det(A^2 − B^2).
(g) True. Proposition 1.42 says rank A = n if and only if A is nonsingular, while Theorem 1.52 implies that det A ≠ 0.
(h) True. Since det A ≠ 0, Theorem 1.52 implies that A is nonsingular, and so B = A^{−1} O = O.

1.9.6 Never: its determinant is always zero.

1.9.7 By (1.82–83) and commutativity of numeric multiplication, det B = det(S^{−1} A S) = det S^{−1} det A det S = det A.

1.9.8 Multiplying one row of A by c multiplies its determinant by c. To obtain c A, we must multiply all n rows by c, and hence the determinant is multiplied by c a total of n times.

1.9.9 By Proposition 1.56, det L^T = det L. If L is a lower triangular matrix, then L^T is an upper triangular matrix, and by Theorem 1.50, det L^T is the product of its diagonal entries, which are the same as the diagonal entries of L.

1.9.10 (a) See Exercise 1.9.8. (b) If n is odd, det(−A) = −det A. On the other hand, if A^T = −A, then det A = det A^T = −det A, and hence det A = 0. (c) A = ( ⋯ ).

1.9.11 We have
det ( a b ; c + k a  d + k b ) = a (d + k b) − b (c + k a) = a d − b c = det ( a b ; c d );
det ( c d ; a b ) = c b − a d = −(a d − b c) = −det ( a b ; c d );
det ( k a  k b ; c d ) = k a d − k b c = k (a d − b c) = k det ( a b ; c d );
det ( a b ; 0 d ) = a d − b · 0 = a d.

1.9.12 (a) The product formula holds if A is an elementary matrix; this is a consequence of the determinant axioms coupled with the fact that elementary matrices are obtained by applying the corresponding row operation to the identity matrix, with det I = 1. (b) By induction, if A = E1 E2 · · · EN is a product of elementary matrices, then (1.82) also holds. Proposition 1.25 then implies that the product formula is valid whenever A is nonsingular. (c) The first result is in Exercise 1.2.24(a), and so the formula follows by applying Lemma 1.51 to Z and Z B. (d) According to Exercise 1.8.20, every singular matrix can be written as A = E1 E2 · · · EN Z, where the Ei are elementary matrices, while Z, its row echelon form, is a matrix with a row of zeros. But then Z B = W also has a row of zeros, and so A B = E1 E2 · · · EN W is also singular. Thus, both sides of (1.82) are zero in this case.

1.9.13 Indeed, by (1.82), det A det A^{−1} = det(A A^{−1}) = det I = 1.

1.9.14 Exercise 1.6.28 implies that, if A is regular, so is A^T, and they both have the same pivots. Since the determinant of a regular matrix is the product of the pivots, this implies det A = det A^T. If A is nonsingular, then we use the permuted L U decomposition to write A = P^T L U, where P^T = P^{−1} by Exercise 1.6.14. Thus, det A = det P^T det U = ±det U, while det A^T = det(U^T L^T P) = det U det P = ±det U, where det P^{−1} = det P = ±1. Finally, if A is singular, then the same computation holds, with U denoting the row echelon form of A, and so det A = det U = 0 = det A^T.

1.9.15
det ( a11 a12 a13 a14 ; a21 a22 a23 a24 ; a31 a32 a33 a34 ; a41 a42 a43 a44 ) =
a11 a22 a33 a44 − a11 a22 a34 a43 − a11 a23 a32 a44 + a11 a23 a34 a42 − a11 a24 a33 a42 + a11 a24 a32 a43
− a12 a21 a33 a44 + a12 a21 a34 a43 + a12 a23 a31 a44 − a12 a23 a34 a41 + a12 a24 a33 a41 − a12 a24 a31 a43
+ a13 a21 a32 a44 − a13 a21 a34 a42 − a13 a22 a31 a44 + a13 a22 a34 a41 − a13 a24 a32 a41 + a13 a24 a31 a42
− a14 a21 a32 a43 + a14 a21 a33 a42 + a14 a22 a31 a43 − a14 a22 a33 a41 + a14 a23 a32 a41 − a14 a23 a31 a42.

1.9.16
(i) Suppose B is obtained from A by adding c times row k to row l, so that blj = alj + c akj, while bij = aij for i ≠ l. Thus, each summand in the determinantal formula for det B splits into two terms, and we find that det B = det A + c det C, where C is the matrix obtained from A by replacing row l by row k. But rows k and l of C are identical, and so, by axiom (ii), if we interchange the two rows, det C = −det C = 0. Thus, det B = det A.
(ii) Let B be obtained from A by interchanging rows k and l. Then each summand in the formula for det B equals minus the corresponding summand in the formula for det A, since the permutation has changed sign, and so det B = −det A.
(iii) Let B be obtained from A by multiplying row k by c. Then each summand in the formula for det B contains one entry from row k, and so equals c times the corresponding term in det A, hence det B = c det A.
(iv) The only term in det U that does not contain at least one zero entry lying below the diagonal is the one for the identity permutation π(i) = i, and so det U is the product of its diagonal entries.
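The 24-term expansion in Exercise 1.9.15 is an instance of the general permutation-sum formula used in Exercise 1.9.16. A brute-force sketch, practical only for tiny n:

    import numpy as np
    from itertools import permutations

    def det_by_permutations(A):
        # Sum over all n! permutations of sign(pi) * a_{1,pi(1)} ... a_{n,pi(n)}.
        n = A.shape[0]
        total = 0.0
        for perm in permutations(range(n)):
            # sign of the permutation = parity of its inversion count
            sign = (-1) ** sum(perm[i] > perm[j]
                               for i in range(n) for j in range(i + 1, n))
            term = sign
            for i in range(n):
                term = term * A[i, perm[i]]
            total += term
        return total

    A = np.arange(1.0, 17.0).reshape(4, 4) + np.eye(4)   # a generic 4 x 4
    print(det_by_permutations(A), np.linalg.det(A))      # the two values agree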
1.9.17 If U is nonsingular, then, by Gauss–Jordan elimination, it can be reduced to the identity matrix by elementary row operations of types #1 and #3. Each operation of type #1 doesn't change the determinant, while each operation of type #3 multiplies the determinant by the diagonal entry. Thus, det U = u11 u22 · · · unn det I = u11 u22 · · · unn. On the other hand, U is singular if and only if one or more of its diagonal entries are zero, and so det U = 0 = u11 u22 · · · unn.

1.9.18 The determinant of an elementary matrix of type #2 is −1, whereas all elementary matrices of type #1 have determinant +1, and hence so does any product thereof.

1.9.19
(a) Since A is regular, a ≠ 0 and a d − b c ≠ 0. Subtracting c/a times the first row from the second row reduces A to the upper triangular matrix ( a b ; 0 d − b c/a ), and its pivots are a and d − b c/a = (a d − b c)/a = det A / a.
(b) As in part (a), we reduce A = ( a b e ; c d f ; g h j ) to upper triangular form. First, we subtract c/a times the first row from the second row, and g/a times the first row from the third row, resulting in the matrix ( a b e ; 0 (a d − b c)/a (a f − c e)/a ; 0 (a h − b g)/a (a j − e g)/a ). Performing the final row operation reduces the matrix to the upper triangular form U = ( a b e ; 0 (a d − b c)/a (a f − c e)/a ; 0 0 (a j − e g)/a − (a f − c e)(a h − b g)/(a (a d − b c)) ), whose pivots are a, (a d − b c)/a, and (a j − e g)/a − (a f − c e)(a h − b g)/(a (a d − b c)) = (a d j + b f g + e c h − a f h − b c j − e d g)/(a d − b c) = det A / (a d − b c).
(c) If A is a regular n × n matrix, then its first pivot is a11, and its k-th pivot, for k = 2, …, n, is det Ak / det A_{k−1}, where Ak is the k × k upper left submatrix of A, with entries aij for i, j = 1, …, k. A formal proof is done by induction.

1.9.20
(a–c) Applying an elementary column operation to a matrix A is the same as applying the corresponding elementary row operation to its transpose A^T and then taking the transpose of the result. Moreover, Proposition 1.56 implies that taking the transpose does not affect the determinant, and so any elementary column operation has exactly the same effect on the determinant as the corresponding elementary row operation.
(d) Apply the transposed version of the elementary row operations required to reduce A^T to upper triangular form. Thus, if the (1,1) entry is zero, use a column interchange to place a nonzero pivot in the upper left position. Then apply elementary column operations of type #1 to make all entries to the right of the pivot zero. Next, make sure a nonzero pivot is in the (2,2) position by a column interchange if necessary, and then apply elementary column operations of type #1 to make all entries to the right of the pivot zero. Continuing in this fashion, if the matrix is nonsingular, the result is a lower triangular matrix.
(e) We first interchange the first and second columns, and then use elementary column operations of type #1 to reduce the matrix to lower triangular form: det ( ⋯ ) = −det ( ⋯ ) = · · · = −3.

1.9.21 Using the L U factorizations established in Exercise 1.3.25:
(a) det ( 1 1 ; t1 t2 ) = t2 − t1;
(b) det ( 1 1 1 ; t1 t2 t3 ; t1^2 t2^2 t3^2 ) = (t2 − t1)(t3 − t1)(t3 − t2);
(c) det ( 1 1 1 1 ; t1 t2 t3 t4 ; t1^2 t2^2 t3^2 t4^2 ; t1^3 t2^3 t3^3 t4^3 ) = (t2 − t1)(t3 − t1)(t3 − t2)(t4 − t1)(t4 − t2)(t4 − t3).
The general formula is found in Exercise 4.4.29.

1.9.22
(a) By direct substitution:
a x + b y = a (p d − b q)/(a d − b c) + b (a q − p c)/(a d − b c) = p,
c x + d y = c (p d − b q)/(a d − b c) + d (a q − p c)/(a d − b c) = q.
(b) (i) x = −2.6, y = 5.2;
(ii) x = ⋯, y = ⋯, computed by the same two determinant ratios.
(c) Proof by direct substitution, expanding all the determinants.
(d) (i) x = −⋯/9, y = ⋯/9, z = ⋯/9; (ii) x = 0, y = 4, z = 2.
(e) Assuming A is nonsingular, the solution to A x = b is xi = det Ai / det A, where Ai is obtained by replacing the i-th column of A by the right-hand side b. See [60] for a complete justification.

1.9.23
(a) We can individually reduce A and B to upper triangular forms U1 and U2 whose determinants equal the products of their respective diagonal entries. Applying the analogous elementary row operations to D reduces it to the upper triangular form ( U1 O ; O U2 ), and its determinant is equal to the product of its diagonal entries, which are the diagonal entries of both U1 and U2, so det D = det U1 det U2 = det A det B.
(b) The same argument as in part (a) proves the result. The row operations applied to A are also applied to C, but this doesn't affect the final upper triangular form.
(c) (i) det ( ⋯ ) = det ( ⋯ ) det ( ⋯ ) = 7 · (−8) = −56; (ii) det ( ⋯ ) = 3 · 43 = 129; (iii) det ( ⋯ ) = (−5) · (−3) = 15; (iv) det ( ⋯ ) = 27 · (−2) = −54.
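Cramer's rule, as stated in Exercise 1.9.22(e), fits in a few lines; it assumes det A ≠ 0 and is far less efficient than elimination, but is handy for checking small systems:

    import numpy as np

    def cramer_solve(A, b):
        # x_i = det A_i / det A, where A_i is A with column i replaced by b.
        d = np.linalg.det(A)
        x = np.zeros(len(b))
        for i in range(len(b)):
            Ai = A.astype(float)    # fresh copy each pass
            Ai[:, i] = b
            x[i] = np.linalg.det(Ai) / d
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    print(cramer_solve(A, b), np.linalg.solve(A, b))   # identical solutions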
