Answers to Exercises
Jim Hefferon

[Cover art: the three determinant boxes of Cramer's Rule for the system x + 2y = 6, 3x + y = 8; see the Cover note below.]

Notation
  R                        real numbers
  N                        natural numbers: {0, 1, 2, ...}
  C                        complex numbers
  {... | ...}              set of ... such that ...
  ⟨...⟩                    sequence; like a set but order matters
  V, W, U                  vector spaces
  v, w                     vectors
  0, 0_V                   zero vector, zero vector of V
  B, D                     bases
  E_n = ⟨e_1, ..., e_n⟩    standard basis for R^n
  β, δ                     basis vectors
  Rep_B(v)                 matrix representing the vector
  P_n                      set of n-th degree polynomials
  M_n×m                    set of n×m matrices
  [S]                      span of the set S
  M ⊕ N                    direct sum of subspaces
  V ≅ W                    isomorphic spaces
  h, g                     homomorphisms, linear maps
  H, G                     matrices
  t, s                     transformations; maps from a space to itself
  T, S                     square matrices
  Rep_B,D(h)               matrix representing the map h
  h_i,j                    matrix entry from row i, column j
  |T|                      determinant of the matrix T
  R(h), N(h)               rangespace and nullspace of the map h
  R_∞(h), N_∞(h)           generalized rangespace and nullspace

Lower case Greek alphabet
  alpha α      iota ι        rho ρ
  beta β       kappa κ       sigma σ
  gamma γ      lambda λ      tau τ
  delta δ      mu µ          upsilon υ
  epsilon ε    nu ν          phi φ
  zeta ζ       xi ξ          chi χ
  eta η        omicron o     psi ψ
  theta θ      pi π          omega ω

Cover. This is Cramer's Rule for the system x + 2y = 6, 3x + y = 8. The size of the first box is the determinant shown (the absolute value of the size is the area). The size of the second box is x times that, and equals the size of the final box. Hence, x is the final determinant divided by the first determinant.

These are answers to the exercises in Linear Algebra by J. Hefferon. Corrections or comments are very welcome, email to jim@joshua.smcvt.edu. An answer labeled here as, for instance, One.II.3.4, matches the question numbered 4 from the first chapter, second section, and third subsection. The Topics are numbered separately.

Contents
Chapter One: Linear Systems  3
  Subsection One.I.1: Gauss' Method  5
  Subsection One.I.2: Describing the Solution Set  10
  Subsection One.I.3: General = Particular + Homogeneous  14
  Subsection One.II.1: Vectors in Space  17
  Subsection One.II.2: Length and Angle Measures  20
  Subsection One.III.1: Gauss-Jordan Reduction  25
  Subsection One.III.2: Row Equivalence  27
  Topic: Computer Algebra Systems  31
  Topic: Input-Output Analysis  33
  Topic: Accuracy of Computations  33
  Topic: Analyzing Networks  33
Chapter Two: Vector Spaces  36
  Subsection Two.I.1: Definition and Examples  37
  Subsection Two.I.2: Subspaces and Spanning Sets  40
  Subsection Two.II.1: Definition and Examples  46
  Subsection Two.III.1: Basis  53
  Subsection Two.III.2: Dimension  58
  Subsection Two.III.3: Vector Spaces and Linear Systems  61
  Subsection Two.III.4: Combining Subspaces  66
  Topic: Fields  69
  Topic: Crystals  70
  Topic: Voting Paradoxes  71
  Topic: Dimensional Analysis  72
Chapter Three: Maps Between Spaces  74
  Subsection Three.I.1: Definition and Examples  75
  Subsection Three.I.2: Dimension Characterizes Isomorphism  83
  Subsection Three.II.1: Definition  85
  Subsection Three.II.2: Rangespace and Nullspace  90
  Subsection Three.III.1: Representing Linear Maps with Matrices  95
  Subsection Three.III.2: Any Matrix Represents a Linear Map  103
  Subsection Three.IV.1: Sums and Scalar Products  107
  Subsection Three.IV.2: Matrix Multiplication  108
  Subsection Three.IV.3: Mechanics of Matrix Multiplication  112
  Subsection Three.IV.4: Inverses  116
  Subsection Three.V.1: Changing Representations of Vectors  121
  Subsection Three.V.2: Changing Map Representations  124
  Subsection Three.VI.1: Orthogonal Projection Into a Line  128
  Subsection Three.VI.2: Gram-Schmidt Orthogonalization  131
  Subsection Three.VI.3: Projection Into a Subspace  137
  Topic: Line of Best Fit  143
  Topic: Geometry of Linear Maps  147
  Topic: Markov Chains  150
  Topic: Orthonormal Matrices  157
Chapter Four: Determinants  158
  Subsection Four.I.1: Exploration  159
  Subsection Four.I.2: Properties of Determinants  161
  Subsection Four.I.3: The Permutation Expansion  164
  Subsection Four.I.4: Determinants Exist  166
  Subsection Four.II.1: Determinants as Size Functions  168
  Subsection Four.III.1: Laplace's Expansion  171
  Topic: Cramer's Rule  174
  Topic: Speed of Calculating Determinants  175
  Topic: Projective Geometry  176
Chapter Five: Similarity  178
  Subsection Five.II.1: Definition and Examples  179
  Subsection Five.II.2: Diagonalizability  182
  Subsection Five.II.3: Eigenvalues and Eigenvectors  186
  Subsection Five.III.1: Self-Composition  190
  Subsection Five.III.2: Strings  192
  Subsection Five.IV.1: Polynomials of Maps and Matrices  196
  Subsection Five.IV.2: Jordan Canonical Form  203
  Topic: Method of Powers  210
  Topic: Stable Populations  210
  Topic: Linear Recurrences  210

Chapter One: Linear Systems

Subsection One.I.1: Gauss' Method

One.I.1.16  Gauss' method can be performed in different ways, so these simply exhibit one possible way to get the answer.
  (a) Gauss' method, −(1/2)ρ1+ρ2, gives
        2x + 3y = 13
        −(5/2)y = −15/2
      so the solution is y = 3 and x = 2.
  (b) Gauss' method here, −3ρ1+ρ2 and ρ1+ρ3, gives
        x     −  z = 0
            y + 3z = 1
            y      = 4
      and then −ρ2+ρ3 gives
        x     −  z = 0
            y + 3z = 1
               −3z = 3
      so x = −1, y = 4, and z = −1.

One.I.1.17
  (a) Gaussian reduction, −(1/2)ρ1+ρ2, gives
        2x + 2y = 5
            −5y = −5/2
      which shows that y = 1/2 and x = 2 is the unique solution.
  (b) Gauss' method, ρ1+ρ2, gives
        −x + y = 1
            2y = 3
      so y = 3/2 and x = 1/2 is the only solution.
  (c) Row reduction, −ρ1+ρ2, gives
        x − 3y + z = 1
            4y + z = 13
      and shows, because the variable z is not a leading variable in any row, that there are many solutions.
  (d) Row reduction, −3ρ1+ρ2, gives
        −x − y = 1
             0 = −1
      which shows that there is no solution.
  (e) Gauss' method, after the row swap ρ1↔ρ4, gives
         x +  y − z = 10
        2x − 2y + z =  0
         x      + z =  5
            4y + z = 20
      then −2ρ1+ρ2 and −ρ1+ρ3 give
         x +  y −  z = 10
            −4y + 3z = −20
             −y + 2z =  −5
             4y +  z = 20
      and then −(1/4)ρ2+ρ3 and ρ2+ρ4 give
         x +  y −  z = 10
            −4y + 3z = −20
              (5/4)z =   0
                  4z =   0
      so there is the unique solution (x, y, z) = (5, 5, 0).
  (f) Here Gauss' method, −(3/2)ρ1+ρ3 and −2ρ1+ρ4, gives
        2x + z + w = 5
         y     − w = −1
        −(5/2)z − (5/2)w = −15/2
         y     − w = −1
      and then −ρ2+ρ4 gives
        2x + z + w = 5
         y     − w = −1
        −(5/2)z − (5/2)w = −15/2
                 0 = 0
      which shows that there are many solutions.

One.I.1.18
  (a) From x = 1 − 3y we get that 2(1 − 3y) + y = −3, giving y = 1.
  (b) From x = 1 − 3y we get that 2(1 − 3y) + 2y = 0, leading to the conclusion that y = 1/2.
  Users of this method must check any potential solutions by substituting back into all the equations.

One.I.1.19  Do the reduction −3ρ1+ρ2 to get
        x − y = 1
            0 = −3 + k
  and conclude that this system has no solutions if k ≠ 3, and if k = 3 then it has infinitely many solutions. It never has a unique solution.

One.I.1.20  Let x = sin α, y = cos β, and z = tan γ:
        2x −  y + 3z = 3
        4x + 2y − 2z = 10
        6x − 3y +  z = 9
  Then −2ρ1+ρ2 and −3ρ1+ρ3 give
        2x − y + 3z = 3
            4y − 8z = 4
               −8z = 0
  so z = 0, y = 1, and x = 2. Note that no α satisfies the requirement sin α = 2.

One.I.1.21
  (a) Gauss' method, −3ρ1+ρ2, −ρ1+ρ3, and −2ρ1+ρ4, gives
        x − 3y = b1
           10y = −3b1 + b2
           10y =  −b1 + b3
           10y = −2b1 + b4
      and then −ρ2+ρ3 and −ρ2+ρ4 give
        x − 3y = b1
           10y = −3b1 + b2
             0 = 2b1 − b2 + b3
             0 =  b1 − b2 + b4
      which shows that this system is consistent if and only if both b3 = −2b1 + b2 and b4 = −b1 + b2.
  (b) Reduction with −2ρ1+ρ2 and −ρ1+ρ3 gives
        x1 + 2x2 + 3x3 = b1
              x2 − 3x3 = −2b1 + b2
            −2x2 + 5x3 =  −b1 + b3
      and then 2ρ2+ρ3 gives
        x1 + 2x2 + 3x3 = b1
              x2 − 3x3 = −2b1 + b2
                   −x3 = −5b1 + 2b2 + b3
      which shows that each of b1, b2, and b3 can be any real number; this system always has a unique solution.
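The eliminations above are easy to cross-check numerically. The following is a minimal sketch, not part of Hefferon's answers, that uses Python with NumPy to solve the system written out in One.I.1.20 and to confirm the values found by back-substitution.

import numpy as np

# The system from One.I.1.20 (after substituting x = sin a, y = cos b, z = tan c):
#   2x -  y + 3z = 3
#   4x + 2y - 2z = 10
#   6x - 3y +  z = 9
A = np.array([[2.0, -1.0, 3.0],
              [4.0, 2.0, -2.0],
              [6.0, -3.0, 1.0]])
b = np.array([3.0, 10.0, 9.0])

solution = np.linalg.solve(A, b)
print(solution)                      # expected: [2. 1. 0.]
assert np.allclose(solution, [2.0, 1.0, 0.0])
assert np.allclose(A @ solution, b)  # substituting back satisfies every equation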
One.I.1.22  This system with more unknowns than equations
        x + y + z = 0
        x + y + z = 1
has no solution.

One.I.1.23  Yes. For example, the fact that the same reaction can be performed in two different flasks shows that twice any solution is another, different, solution (if a physical reaction occurs then there must be at least one nonzero solution).

One.I.1.24  Because f(1) = 2, f(−1) = 6, and f(2) = 3 we get a linear system.
        1a + 1b + c = 2
        1a − 1b + c = 6
        4a + 2b + c = 3
Gauss' method, −ρ1+ρ2 and −4ρ1+ρ3, gives
        a + b + c = 2
              −2b = 4
        −2b − 3c = −5
and then −ρ2+ρ3 gives
        a + b + c = 2
              −2b = 4
              −3c = −9
which shows that the solution is f(x) = 1x² − 2x + 3.

One.I.1.25
  (a) Yes, by inspection the given equation results from −ρ1 + ρ2.
  (b) No. The given equation is satisfied by the pair (1, 1). However, that pair does not satisfy the first equation in the system.
  (c) Yes. To see if the given row is c1ρ1 + c2ρ2, solve the system of equations relating the coefficients of x, y, z, and the constants:
        2c1 + 6c2 = 6
         c1 − 3c2 = −9
        −c1 +  c2 = 5
        4c1 + 5c2 = −2
      and get c1 = −3 and c2 = 2, so the given row is −3ρ1 + 2ρ2.

One.I.1.26  If a ≠ 0 then the solution set of the first equation is {(x, y) | x = (c − by)/a}. Taking y = 0 gives the solution (c/a, 0), and since the second equation is supposed to have the same solution set, substituting into it gives that a(c/a) + d·0 = e, so c = e. Then taking y = 1 in x = (c − by)/a gives that a((c − b)/a) + d·1 = e, which gives that b = d. Hence they are the same equation.
  When a = 0 the equations can be different and still have the same solution set: e.g., 0x + 3y = 6 and 0x + 6y = 12.

One.I.1.27  We take three cases: first that a ≠ 0, second that a = 0 and c ≠ 0, and third that both a = 0 and c = 0.
  For the first, we assume that a ≠ 0. Then the reduction −(c/a)ρ1+ρ2 gives
        ax + by = j
        (−(cb/a) + d)y = −(cj/a) + k
and this system has a unique solution if and only if −(cb/a) + d ≠ 0; remember that a ≠ 0 so that back substitution yields a unique x (observe, by the way, that j and k play no role in the conclusion that there is a unique solution, although if there is a unique solution then they contribute to its value). But −(cb/a) + d = (ad − bc)/a, and a fraction is not equal to 0 if and only if its numerator is not equal to 0. Thus, in this first case, there is a unique solution if and only if ad − bc ≠ 0.
  In the second case, if a = 0 but c ≠ 0, then we swap to get
        cx + dy = k
             by = j
and conclude that the system has a unique solution if and only if b ≠ 0 (we use the case assumption that c ≠ 0 to get a unique x in back substitution). But, where a = 0 and c ≠ 0, the condition "b ≠ 0" is equivalent to the condition "ad − bc ≠ 0". That finishes the second case.
  Finally, for the third case, if both a and c are 0 then the system
        0x + by = j
        0x + dy = k
might have no solutions (if the second equation is not a multiple of the first) or it might have infinitely many solutions (if the second equation is a multiple of the first then for each y satisfying both equations, any pair (x, y) will do), but it never has a unique solution. Note that a = 0 and c = 0 gives that ad − bc = 0.

One.I.1.28  Recall that if a pair of lines share two distinct points then they are the same line. That's because two points determine a line, so these two points determine each of the two lines, and so they are the same line. Thus the lines can share one point (giving a unique solution), share no points (giving no solutions), or share at least two points (which makes them the same line).
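The case analysis in One.I.1.27 comes down to a single test on the coefficients. Here is a small sketch, again not part of the original answers, that expresses the criterion in Python; the function name is ours.

def has_unique_solution(a: float, b: float, c: float, d: float) -> bool:
    """Per One.I.1.27: ax + by = j, cx + dy = k has exactly one solution,
    whatever j and k are, if and only if ad - bc is not zero."""
    return a * d - b * c != 0

print(has_unique_solution(1, 2, 3, 1))   # True:  ad - bc = 1 - 6 = -5
print(has_unique_solution(0, 3, 0, 6))   # False: the third case, a = c = 0
print(has_unique_solution(2, 4, 1, 2))   # False: ad - bc = 4 - 4 = 0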
One.I.1.29  For the reduction operation of multiplying ρi by a nonzero real number k, we have that (s1, ..., sn) satisfies this system
        a1,1 x1 + a1,2 x2 + ··· + a1,n xn = d1
            ⋮
        k ai,1 x1 + k ai,2 x2 + ··· + k ai,n xn = k di
            ⋮
        am,1 x1 + am,2 x2 + ··· + am,n xn = dm
if and only if
        a1,1 s1 + a1,2 s2 + ··· + a1,n sn = d1
            ⋮
        and  k ai,1 s1 + k ai,2 s2 + ··· + k ai,n sn = k di
            ⋮
        and  am,1 s1 + am,2 s2 + ··· + am,n sn = dm
by the definition of 'satisfies'. But, because k ≠ 0, that's true if and only if
        a1,1 s1 + a1,2 s2 + ··· + a1,n sn = d1
            ⋮
        and  ai,1 s1 + ai,2 s2 + ··· + ai,n sn = di
            ⋮
        and  am,1 s1 + am,2 s2 + ··· + am,n sn = dm
(this is straightforward cancelling on both sides of the i-th equation), which says that (s1, ..., sn) solves
        a1,1 x1 + a1,2 x2 + ··· + a1,n xn = d1
            ⋮
        ai,1 x1 + ai,2 x2 + ··· + ai,n xn = di
            ⋮
        am,1 x1 + am,2 x2 + ··· + am,n xn = dm
as required.
  For the pivot operation kρi + ρj, we have that (s1, ..., sn) satisfies
        a1,1 x1 + ··· + a1,n xn = d1
            ⋮
        ai,1 x1 + ··· + ai,n xn = di
            ⋮
        (k ai,1 + aj,1) x1 + ··· + (k ai,n + aj,n) xn = k di + dj
            ⋮
        am,1 x1 + ··· + am,n xn = dm
if and only if
        a1,1 s1 + ··· + a1,n sn = d1
            ⋮
        and  ai,1 s1 + ··· + ai,n sn = di
            ⋮
        and  (k ai,1 + aj,1) s1 + ··· + (k ai,n + aj,n) sn = k di + dj
            ⋮
        and  am,1 s1 + am,2 s2 + ··· + am,n sn = dm
again by the definition of 'satisfies'. Subtract k times the i-th equation from the j-th equation (remark: here is where i ≠ j is needed; if i = j then the two di's above are not equal) to get that the previous compound statement holds if and only if
        a1,1 s1 + ··· + a1,n sn = d1
            ⋮
        and  ai,1 s1 + ··· + ai,n sn = di
            ⋮
        and  (k ai,1 + aj,1) s1 + ··· + (k ai,n + aj,n) sn − (k ai,1 s1 + ··· + k ai,n sn) = k di + dj − k di
            ⋮
        and  am,1 s1 + ··· + am,n sn = dm
which, after cancellation, says that (s1, ..., sn) solves
        a1,1 x1 + ··· + a1,n xn = d1
            ⋮
        ai,1 x1 + ··· + ai,n xn = di
            ⋮
        aj,1 x1 + ··· + aj,n xn = dj
            ⋮
        am,1 x1 + ··· + am,n xn = dm
as required.

One.I.1.30  Yes, this one-equation system
        0x + 0y = 0
is satisfied by every (x, y) ∈ R².

[...] The obvious conjecture is that row operations do not change linear relationships among columns.
  (c) A case-by-case proof follows the sketch given in the first item.

Topic: Computer Algebra Systems

1  (a) The commands
        > A:=array( [[40,15], [-50,25]] );
        > u:=array([100,50]);
        > linsolve(A,u);
   yield the answer [1, 4].
   (b) Here there is a free variable:
        > A:=array( [[7,0,-7,0], [8,1,-5,2], [...]

[...] row, are left to the reader.) Consider the i = 2 version of the equation that gives each row of B as a linear combination of the rows of D. Focus on the first and second component equations:
        b2,1 = c2,1 δ1,1 + c2,2 δ2,1 + ··· + c2,m δm,1
        b2,2 = c2,1 δ1,2 + c2,2 δ2,2 + ··· + c2,m δm,2
The first of these equations shows that c2,1 is zero because δ1,1 is not zero [...]
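The Maple commands in the Topic: Computer Algebra Systems fragment above can be cross-checked in any other system. This is an equivalent check written in Python with NumPy; choosing NumPy is our assumption of convenience, since the topic itself uses Maple.

import numpy as np

# Same data as the Maple linsolve call above: solve A x = u.
A = np.array([[40.0, 15.0],
              [-50.0, 25.0]])
u = np.array([100.0, 50.0])

x = np.linalg.solve(A, u)
print(x)                          # expected: [1. 4.], matching Maple's answer [1, 4]
assert np.allclose(x, [1.0, 4.0])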
Topic: Analyzing Networks

[...] in the prior item. We can also use linear systems.

    [network diagram: the current i0 splits into branch currents i1 and i2, which recombine as i3]

Using the variables from the diagram we get a linear system
        i0 − i1 − i2      = 0
             i1 + i2 − i3 = 0
            2i1           = 9
                  7i2     = 9
which yields the unique solution i0 = 81/14, i1 = 9/2, i2 = 9/7, and i3 = 81/14. Of course, the first and second paragraphs yield the same answer. Essentially, in the first paragraph we solved the linear system by a method less systematic [...]

Subsection One.I.2: Describing the Solution Set

[...] a² − 1 ≠ 0, the system has the single solution x = a² + 1, y = −a. For a = −1 and a = 1, we obtain the systems
        −x + y = −1          x + y = 1
         x − y =  1          x + y = 1
both of which have an infinite number of solutions.

One.I.2.31  This is how the answer was given in the cited source. Let u, v, x, y, z be the volumes in cm³ of Al, Cu, Pb, Ag, and Au, respectively, contained in the sphere, which [...]

Subsection One.I.3: General = Particular + Homogeneous

[...] method on the associated homogeneous system gives
        1 −1  0  1 | 0                1 −1  0   1 | 0                  1 −1   0    1  | 0
        2  3 −1  0 | 0   −2ρ1+ρ2      0  5 −1  −2 | 0   −(1/5)ρ2+ρ3    0  5  −1   −2  | 0
        0  1  1  1 | 0                0  1  1   1 | 0                  0  0  6/5  7/5 | 0
so this is the solution to the homogeneous problem:
        { (−5/6, 1/6, −7/6, 1) w | w ∈ R }
(a) That vector is indeed a particular solution, so the required general solution is [...]

[...] because setting any parameters to be rationals will produce an all-rational solution.

Subsection One.II.1: Vectors in Space

One.II.1.1
  (a) (2, 1)    (b) (−1, 2)    (c) (4, 0, −3)    (d) (0, 0, 0)

One.II.1.2
  (a) No, their canonical positions are different: (1, −1) and (0, 3).
  (b) Yes, their canonical positions are the same: (1, −1, 3).

One.II.1.3  That line is this set: [...]

[...] which has a parameter twice as large.
  (b) The vector is not the result of adding [...]; instead it is [...], which adds the parameters.

One.II.1.9  The "if" half is straightforward. If b1 − a1 = d1 − c1 [...]

[...] multiple of the other. But that's equivalent to the assertion that one of the two vectors u and v is a scalar multiple of the other, as desired.

Subsection One.II.2: Length and Angle Measures

One.II.2.19  No. These give an example:
        u = (1, 0),  v = (1, 0),  w = (1, 1)

One.II.2.20  We prove that a vector has length zero if and only if all its components are zero. Let u ∈ Rⁿ have components u1, ..., un. Recall that the square of any real number [...]

[...] cos θ = (k u1 v1 + ··· + k un vn) / √((k u1)² + ··· + (k un)²) [...]

One.II.2.37  Let u = (u1, ..., un), v = (v1, ..., vn), and w = (w1, ..., wn) [...] and then
        u · (kv + mw) = (u1, ..., un) · (kv1 + mw1, ..., kvn + mwn)
                      = u1(kv1 + mw1) + ··· + un(kvn + mwn) [...]

Subsection One.III.1: Gauss-Jordan Reduction

[...] then −2ρ3+ρ2 and (1/2)ρ2+ρ1 finish the reduction:
        1 −1/2 0 −1/2              1 0 0 1/2
        0   1  2   5     gives     0 1 0  2
        0   0  1  3/2              0 0 1 3/2

One.III.1.8  Use Gauss-Jordan reduction.
  (a) [...] (1/2)ρ1 [...] (2/5)ρ2 [...] −(1/2)ρ2+ρ1 [...]
  (b) [...] −2ρ1+ρ2 [...] −(1/6)ρ2 [...] (1/3)ρ3+ρ2 [...]
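The network system quoted in the Topic: Analyzing Networks fragment above is small enough to check by machine as well. This sketch, not part of the original answers, solves the four equations with NumPy and compares the result with the stated fractions.

from fractions import Fraction

import numpy as np

# i0 - i1 - i2      = 0
#      i1 + i2 - i3 = 0
#     2*i1          = 9
#           7*i2    = 9
A = np.array([[1.0, -1.0, -1.0, 0.0],
              [0.0, 1.0, 1.0, -1.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 7.0, 0.0]])
b = np.array([0.0, 0.0, 9.0, 9.0])

currents = np.linalg.solve(A, b)   # order: i0, i1, i2, i3
expected = [Fraction(81, 14), Fraction(9, 2), Fraction(9, 7), Fraction(81, 14)]
print(currents)                    # approximately [5.786 4.5 1.286 5.786]
assert np.allclose(currents, [float(f) for f in expected])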