Vibration: Fundamentals and Practice, Appendix C


de Silva, Clarence W. "Appendix C." Vibration: Fundamentals and Practice. Clarence W. de Silva. Boca Raton: CRC Press LLC, 2000.

APPENDIX C: REVIEW OF LINEAR ALGEBRA

Linear algebra, the algebra of sets, vectors, and matrices, is useful in the study of mechanical vibration. In practical vibrating systems, interactions among various components are inevitable, and there are many response variables associated with many excitations. It is thus convenient to consider all excitations (inputs) simultaneously as a single variable, and also all responses (outputs) as a single variable. The use of linear algebra makes the analysis of such a system convenient. The subject of linear algebra is complex and is based on a rigorous mathematical foundation. This appendix reviews the basics of vectors and matrices, which form the foundation of linear algebra.

C.1 VECTORS AND MATRICES

In the analysis of vibrating systems, vectors and matrices are useful in both the time and the frequency domains. First, consider the time-domain formulation of a vibration problem. For a single-degree-of-freedom system with a single forcing excitation $f(t)$ and a corresponding single displacement response $y$, the dynamic equation is

$$m\ddot{y} + c\dot{y} + ky = f(t) \tag{C.1}$$

Note that, in this single-dof case, the quantities $f$, $y$, $m$, $c$, and $k$ are scalars. If the system has $n$ degrees of freedom, with excitation forces $f_1(t), f_2(t), \ldots, f_n(t)$ and associated displacement responses $y_1, y_2, \ldots, y_n$, then the equations of motion can be expressed as

$$M\ddot{y} + C\dot{y} + Ky = f(t) \tag{C.2}$$

in which

$$y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \text{displacement vector ($n$th-order column vector)}$$

$$f = \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_n \end{bmatrix} = \text{forcing excitation vector ($n$th-order column vector)}$$

$$M = \begin{bmatrix} m_{11} & m_{12} & \cdots & m_{1n} \\ m_{21} & m_{22} & \cdots & m_{2n} \\ \vdots & \vdots & & \vdots \\ m_{n1} & m_{n2} & \cdots & m_{nn} \end{bmatrix} = \text{mass matrix ($n \times n$ square matrix)}$$

$$C = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & & \vdots \\ c_{n1} & c_{n2} & \cdots & c_{nn} \end{bmatrix} = \text{damping matrix ($n \times n$ square matrix)}$$

$$K = \begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1n} \\ k_{21} & k_{22} & \cdots & k_{2n} \\ \vdots & \vdots & & \vdots \\ k_{n1} & k_{n2} & \cdots & k_{nn} \end{bmatrix} = \text{stiffness matrix ($n \times n$ square matrix)}$$

In this manner, vectors and matrices are introduced into the formulation of a multi-degree-of-freedom vibration problem. Vector-matrix concepts will also enter the picture in subsequent analysis; for example, in modal analysis, as discussed in Chapters 5 and 11.

Next, consider the frequency-domain formulation. In the single-degree-of-freedom case, the system equation can be given as

$$y = Gu \tag{C.3}$$

where

u = frequency spectrum (Fourier spectrum) of the forcing excitation (input)
y = frequency spectrum (Fourier spectrum) of the response (output)
G = frequency-transfer function (frequency-response function) of the system

The quantities $u$, $y$, and $G$ are scalars because each one is a single quantity, not a collection of several quantities. Next, consider a two-degree-of-freedom system having two excitations $u_1$ and $u_2$ and two responses $y_1$ and $y_2$; each $y_i$ now depends on both $u_1$ and $u_2$. It follows that one needs four transfer functions ($G_{11}$, $G_{12}$, $G_{21}$, and $G_{22}$) to represent all the excitation-response relationships that may exist in this system. For example, the transfer function $G_{12}$ relates the excitation $u_2$ to the response $y_1$. The associated two equations that govern the system are

$$y_1 = G_{11}u_1 + G_{12}u_2$$
$$y_2 = G_{21}u_1 + G_{22}u_2 \tag{C.4}$$
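The link between the time-domain model (C.2) and the frequency-domain relations (C.3) and (C.4) can be made concrete numerically: for harmonic excitation, the frequency-transfer function matrix is $G(\omega) = (K - \omega^2 M + j\omega C)^{-1}$. The following NumPy sketch illustrates this for a hypothetical two-degree-of-freedom mass-spring-damper chain; the parameter values and the function name `G` are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Hypothetical 2-dof chain: mass 1 tied to mass 2 by (k1, c1),
# mass 2 tied to ground by (k2, c2); values are illustrative only.
m1, m2 = 1.0, 2.0
c1, c2 = 0.1, 0.2
k1, k2 = 100.0, 150.0

M = np.diag([m1, m2])
C = np.array([[ c1,      -c1],
              [-c1,  c1 + c2]])
K = np.array([[ k1,      -k1],
              [-k1,  k1 + k2]])

def G(omega):
    """Frequency-transfer matrix of M*y'' + C*y' + K*y = f(t)."""
    return np.linalg.inv(K - omega**2 * M + 1j * omega * C)

u = np.array([1.0, 0.0])   # unit excitation at coordinate 1, none at coordinate 2
y = G(5.0) @ u             # response spectrum at omega = 5 rad/s
print(y)                   # y[0] uses G11, y[1] uses G21, exactly as in (C.4)
```

Each column of $G(\omega)$ collects the transfer functions from one excitation to both responses, so the matrix-vector product reproduces the pair of scalar equations (C.4).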
Instead of considering the two excitations (two inputs) as two separate quantities, one can consider them as a single "vector" $u$ having the two components $u_1$ and $u_2$. As before, one can write this as a "column" vector:

$$u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$$

Alternately, one can write a "row" vector as $u = [u_1, u_2]$. It is common to use the column-vector representation. Similarly, one can express the two outputs $y_1$ and $y_2$ as a vector $y$. The column vector is

$$y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$$

and the row vector is $y = [y_1, y_2]$. It should be kept in mind that the order in which the components (or elements) are given is important: the vector $[u_1, u_2]$ is not equal to the vector $[u_2, u_1]$. In other words, a vector is an "ordered" collection of quantities.

Summarizing, one can express a collection of quantities, in an orderly manner, as a single vector. Each quantity in the vector is known as a component or an element of the vector. What each component means will depend on the particular situation. For example, in a dynamic system it can represent a quantity such as voltage, current, force, velocity, pressure, flow rate, temperature, or heat transfer rate. The number of components (elements) in a vector is called the order, or dimension, of the vector.

Next, the concept of a matrix is introduced using the frequency-domain example given above. Note that one needs four transfer functions to relate the two excitations to the two responses. Instead of considering these four quantities separately, one can express them as a single matrix $G$ having four elements. Specifically, the transfer-function matrix for the present example is

$$G = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}$$

Note that the matrix has two rows and two columns; hence, the size or order of the matrix is $2 \times 2$. Since the number of rows is equal to the number of columns in this example, one has a square matrix. If the number of rows is not equal to the number of columns, one has a rectangular matrix. Actually, a matrix can be interpreted as a collection of vectors. Hence, in the previous example, the matrix $G$ is an assembly of the two column vectors

$$\begin{bmatrix} G_{11} \\ G_{21} \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} G_{12} \\ G_{22} \end{bmatrix}$$

or, alternatively, an assembly of the two row vectors $[G_{11}, G_{12}]$ and $[G_{21}, G_{22}]$.

C.2 VECTOR-MATRIX ALGEBRA

The advantage of representing the excitations and the responses of a vibrating system as the vectors $u$ and $y$, and the transfer functions as the matrix $G$, is clear from the fact that the excitation-response (input-output) equations can be expressed as the single equation

$$y = Gu \tag{C.5}$$

instead of the collection of scalar equations (C.4). Hence, the response vector $y$ is obtained by "premultiplying" the excitation vector $u$ by the transfer-function matrix $G$. Of course, certain rules of vector-matrix multiplication have to be agreed upon in order that this single equation be consistent with the two scalar equations (C.4). One must also agree on rules for the addition of vectors and matrices.

A vector is a special case of a matrix. Specifically, a third-order column vector is a matrix having three rows and one column; hence, it is a $3 \times 1$ matrix. Similarly, a third-order row vector is a matrix having one row and three columns; accordingly, it is a $1 \times 3$ matrix. It follows that one only needs to know matrix algebra, and the vector algebra will follow from the results for matrices.

C.2.1 MATRIX ADDITION AND SUBTRACTION

Only matrices of the same size can be added, and the result (sum) is a matrix of the same size. In matrix addition, one adds the corresponding elements (i.e., the elements at the same position) in the two matrices and writes the results at the corresponding places in the resulting matrix. As an example, consider the $2 \times 3$ matrix

$$A = \begin{bmatrix} -1 & 3 & 5 \\ 2 & 1 & -2 \end{bmatrix}$$

and a second matrix

$$B = \begin{bmatrix} 2 & -3 & -5 \\ 0 & 1 & 0 \end{bmatrix}$$

The sum of these two matrices is given by

$$A + B = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 2 & -2 \end{bmatrix}$$
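Elementwise addition and subtraction map directly onto array arithmetic. A quick NumPy check of the example above, including the commutativity property stated next:

```python
import numpy as np

A = np.array([[-1, 3, 5],
              [ 2, 1, -2]])
B = np.array([[ 2, -3, -5],
              [ 0,  1,  0]])

print(A + B)                          # [[ 1  0  0], [ 2  2 -2]]
print(np.array_equal(A + B, B + A))   # True: matrix addition is commutative
print(A - B)                          # elementwise subtraction works the same way
```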
The order in which the addition is done is immaterial; hence,

$$A + B = B + A \tag{C.6}$$

In other words, matrix addition is commutative. Matrix subtraction is defined just like matrix addition, except that the corresponding elements are subtracted (or sign-changed and added). An example is given below:

$$\begin{bmatrix} -1 & -4 \\ 3 & 2 \end{bmatrix} - \begin{bmatrix} -3 & -5 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 4 & 0 \end{bmatrix}$$

C.2.2 NULL MATRIX

The null matrix is a matrix whose elements are all zeros. Hence, when one adds a null matrix to an arbitrary matrix, the result is equal to the original matrix. One can define a null vector in a similar manner. One can write

$$A + 0 = A \tag{C.7}$$

As an example, the $2 \times 2$ null matrix is

$$\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

C.2.3 MATRIX MULTIPLICATION

Consider the product $AB$ of two matrices $A$ and $B$. One can write this as

$$C = AB \tag{C.8}$$

As such, $B$ is premultiplied by $A$ or, equivalently, $A$ is postmultiplied by $B$. For this multiplication to be possible, the number of columns in $A$ must be equal to the number of rows in $B$. Then, the number of rows of the product matrix $C$ is equal to the number of rows in $A$, and the number of columns in $C$ is equal to the number of columns in $B$. The actual multiplication is done by multiplying the elements in a given row (say, the $i$th row) of $A$ by the corresponding elements in a given column (say, the $j$th column) of $B$ and summing these products. The result is the element $c_{ij}$ of the product matrix $C$. Note that $c_{ij}$ denotes the element common to the $i$th row and the $j$th column of matrix $C$. Thus,

$$c_{ij} = \sum_k a_{ik} b_{kj} \tag{C.9}$$

As an example, suppose

$$A = \begin{bmatrix} 1 & 2 & -1 \\ 3 & -3 & 4 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & -1 & 0 & 7 \\ 2 & 3 & -4 & 9 \\ 5 & -3 & -1 & 3 \end{bmatrix}$$

Note that the number of columns in $A$ is equal to 3, and the number of rows in $B$ is also equal to 3. Hence, one can perform the premultiplication of $B$ by $A$. For example,

$$c_{11} = 1 \times 1 + 2 \times 2 + (-1) \times 5 = 0$$
$$c_{12} = 1 \times (-1) + 2 \times 3 + (-1) \times (-3) = 8$$
$$c_{13} = 1 \times 0 + 2 \times (-4) + (-1) \times (-1) = -7$$
$$c_{14} = 1 \times 7 + 2 \times 9 + (-1) \times 3 = 22$$
$$c_{21} = 3 \times 1 + (-3) \times 2 + 4 \times 5 = 17$$
$$c_{22} = 3 \times (-1) + (-3) \times 3 + 4 \times (-3) = -24$$

etc. The product matrix is

$$C = \begin{bmatrix} 0 & 8 & -7 & 22 \\ 17 & -24 & 8 & 6 \end{bmatrix}$$

It should be noted that the products $AB$ and $BA$ are not always both defined and, even when they are both defined, the two results are not equal in general. Unless both $A$ and $B$ are square matrices of the same order, the two product matrices will not even be of the same order. Summarizing, matrix multiplication is not commutative:

$$AB \neq BA \quad \text{(in general)} \tag{C.10}$$
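The index formula (C.9) is exactly a triple loop. A short sketch (the function name `matmul` is mine) implementing it directly and checking the result against NumPy's built-in product for the example above:

```python
import numpy as np

def matmul(A, B):
    """Matrix product via c_ij = sum_k a_ik * b_kj, as in (C.9)."""
    n, p = A.shape
    p2, m = B.shape
    assert p == p2, "columns of A must equal rows of B"
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(p))
    return C

A = np.array([[1,  2, -1],
              [3, -3,  4]])
B = np.array([[1, -1,  0, 7],
              [2,  3, -4, 9],
              [5, -3, -1, 3]])

print(matmul(A, B))                          # [[ 0   8  -7  22], [17 -24   8   6]]
print(np.array_equal(matmul(A, B), A @ B))   # True
```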
C.2.4 IDENTITY MATRIX

An identity matrix (or unit matrix) is a square matrix whose diagonal elements are all equal to 1 and whose remaining (off-diagonal) elements are all zeros. This matrix is denoted by $I$. For example, the third-order identity matrix is

$$I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

It is easy to see that when any matrix is multiplied by an identity matrix (provided, of course, that the multiplication is possible), the product is equal to the original matrix; thus,

$$AI = IA = A \tag{C.11}$$

C.3 MATRIX INVERSE

An operation similar to scalar division can be defined with regard to the inverse of a matrix. A proper inverse is defined only for a square matrix and, even for a square matrix, an inverse might not exist. The inverse of a matrix is defined as follows: suppose that a square matrix $A$ has the inverse $B$. Then, these must satisfy the equation

$$AB = I \tag{C.12}$$

or, equivalently,

$$BA = I \tag{C.13}$$

where $I$ is the identity matrix, as previously defined. The inverse of $A$ is denoted by $A^{-1}$. The inverse exists for a matrix if and only if the determinant of the matrix is nonzero. Such matrices are termed nonsingular. The determinant is discussed in Section C.3.3; but, before explaining a method for determining the inverse of a matrix, one can verify that

$$\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \quad \text{is the inverse of} \quad \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}$$

To show this, simply multiply the two matrices and confirm that the product is the second-order identity matrix. Specifically,

$$\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$$

C.3.1 MATRIX TRANSPOSE

The transpose of a matrix is obtained by simply interchanging the rows and the columns of the matrix. The transpose of $A$ is denoted by $A^T$. For example, the transpose of the $2 \times 3$ matrix

$$A = \begin{bmatrix} -2 & 1 & 0 \\ 1 & -2 & 3 \end{bmatrix}$$

is the $3 \times 2$ matrix

$$A^T = \begin{bmatrix} -2 & 1 \\ 1 & -2 \\ 0 & 3 \end{bmatrix}$$

Note that the first row of the original matrix has become the first column of the transposed matrix, and the second row of the original matrix has become the second column of the transposed matrix. If $A^T = A$, then the matrix $A$ is symmetric. Another useful result on the matrix transpose is

$$(AB)^T = B^T A^T \tag{C.14}$$

It follows that the transpose of a matrix product is equal to the product of the transposed matrices, taken in the reverse order.

C.3.2 TRACE OF A MATRIX

The trace of a square matrix is given by the sum of its diagonal elements. The trace of matrix $A$ is denoted by $\operatorname{tr}(A)$:

$$\operatorname{tr}(A) = \sum_i a_{ii} \tag{C.15}$$

For example, the trace of the matrix

$$A = \begin{bmatrix} -2 & 1 & 2 \\ 0 & -4 & 1 \\ -1 & 0 & 3 \end{bmatrix}$$

is given by $\operatorname{tr}(A) = (-2) + (-4) + 3 = -3$.

C.3.3 DETERMINANT OF A MATRIX

The determinant is defined only for a square matrix. It is a scalar value computed from the elements of the matrix. The determinant of a matrix $A$ is denoted by $\det(A)$ or $|A|$. Instead of giving a complex mathematical formula for the determinant of a general matrix in terms of its elements, one can compute the determinant as follows. First, consider the $2 \times 2$ matrix

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$$

Its determinant is given by

$$\det(A) = a_{11}a_{22} - a_{12}a_{21}$$

Next, consider the $3 \times 3$ matrix

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

Its determinant can be expressed as

$$\det(A) = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13}$$

where

$$M_{11} = \det \begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix}, \qquad M_{12} = \det \begin{bmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{bmatrix}, \qquad M_{13} = \det \begin{bmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}$$

Note that $M_{ij}$ is the determinant of the matrix obtained by deleting the $i$th row and the $j$th column of the original matrix. The quantity $M_{ij}$ is known as the minor of the element $a_{ij}$ of the matrix $A$. If the proper sign is attached to the minor, depending on the position of the corresponding matrix element, one has a quantity known as the cofactor. Specifically, the cofactor $C_{ij}$ corresponding to the minor $M_{ij}$ is given by

$$C_{ij} = (-1)^{i+j} M_{ij} \tag{C.16}$$

Hence, the determinant of the $3 \times 3$ matrix can also be given by

$$\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}$$

Note that in the two formulas given above for computing the determinant of a $3 \times 3$ matrix, one has expanded along the first row of the matrix. The same answer is obtained, however, if one expands along any row or any column. Specifically, when expanded along the $i$th row, one obtains

$$\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + a_{i3}C_{i3}$$

Similarly, if one expands along the $j$th column, then

$$\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + a_{3j}C_{3j}$$

These ideas of computing a determinant can be extended to $4 \times 4$ and higher-order matrices in a straightforward manner. Hence, one can write

$$\det(A) = \sum_j a_{ij}C_{ij} = \sum_i a_{ij}C_{ij} \tag{C.17}$$
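The cofactor expansion (C.17) is naturally recursive: an $n \times n$ determinant reduces to $n$ determinants of $(n-1) \times (n-1)$ minors. A small sketch (the function name `det` is mine), expanded along the first row and verified against `np.linalg.det`, which uses a factorization and is far cheaper for large matrices:

```python
import numpy as np

def det(A):
    """Determinant by cofactor expansion along the first row, as in (C.17)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor M_1j: delete row 1 and column j+1 (0-based indices 0 and j)
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det(minor)   # cofactor sign (-1)^(1+(j+1))
    return total

# Matrix from the trace example above
A = np.array([[-2.0,  1.0, 2.0],
              [ 0.0, -4.0, 1.0],
              [-1.0,  0.0, 3.0]])
print(det(A), np.linalg.det(A))   # both print 15.0 (up to rounding)
```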
C.3.4 ADJOINT OF A MATRIX

The adjoint of a matrix is the transpose of the matrix whose elements are the cofactors of the corresponding elements of the original matrix. The adjoint of matrix $A$ is denoted by $\operatorname{adj}(A)$. As an example, in the $3 \times 3$ case, one has

$$\operatorname{adj}(A) = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix}^T = \begin{bmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{bmatrix}$$

In particular, it is easily seen that the adjoint of the matrix

$$A = \begin{bmatrix} 1 & 2 & -1 \\ 0 & 3 & 2 \\ 1 & 1 & 1 \end{bmatrix}$$

is given by

$$\operatorname{adj}(A) = \begin{bmatrix} 1 & 2 & -3 \\ -3 & 2 & 1 \\ 7 & -2 & 3 \end{bmatrix}^T = \begin{bmatrix} 1 & -3 & 7 \\ 2 & 2 & -2 \\ -3 & 1 & 3 \end{bmatrix}$$

Hence, in general,

$$\operatorname{adj}(A) = \left[ C_{ij} \right]^T \tag{C.18}$$

C.3.5 INVERSE OF A MATRIX

At this point, one can define the inverse of a square matrix. Specifically,

$$A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)} \tag{C.19}$$

Hence, in the $3 \times 3$ example given above, since the adjoint has already been determined, it remains only to compute the determinant in order to obtain the inverse. Expanding along the first row of the matrix, the determinant is given by

$$\det(A) = 1 \times 1 + 2 \times 2 + (-1) \times (-3) = 8$$

Accordingly, the inverse is given by

$$A^{-1} = \frac{1}{8} \begin{bmatrix} 1 & -3 & 7 \\ 2 & 2 & -2 \\ -3 & 1 & 3 \end{bmatrix}$$

For two square matrices $A$ and $B$,

$$(AB)^{-1} = B^{-1}A^{-1} \tag{C.20}$$

BOX C.1 Summary of Matrix Properties

Addition: $A_{m \times n} + B_{m \times n} = C_{m \times n}$
Multiplication: $A_{m \times n} B_{n \times r} = C_{m \times r}$
Identity: $AI = IA = A$, where $I$ is the identity matrix
Note: $AB = 0$ does not imply $A = 0$ or $B = 0$ in general
Transposition: $C^T = (AB)^T = B^T A^T$
Inverse: $AP = I = PA \Rightarrow A = P^{-1}$ and $P = A^{-1}$; $(AB)^{-1} = B^{-1}A^{-1}$
Commutativity: $AB \neq BA$ in general
Associativity: $(AB)C = A(BC)$
Distributivity: $C(A + B) = CA + CB$ and $(A + B)D = AD + BD$

As a final note, if the determinant of a matrix is 0, the matrix does not have an inverse; such a matrix is singular. Some important matrix properties are summarized in Box C.1.
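Equation (C.19) can be checked numerically. The sketch below (the helper name `adjoint` is mine) builds the adjoint from cofactors and reproduces the inverse of the $3 \times 3$ example above; in practice one would call `np.linalg.inv` or, better, `np.linalg.solve`, since the cofactor route is far more expensive than factorization-based methods.

```python
import numpy as np

def adjoint(A):
    """Adjoint (adjugate): transpose of the cofactor matrix, as in (C.18)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)   # cofactor C_ij
    return C.T

A = np.array([[1.0, 2.0, -1.0],
              [0.0, 3.0,  2.0],
              [1.0, 1.0,  1.0]])

A_inv = adjoint(A) / np.linalg.det(A)        # equation (C.19)
print(np.round(adjoint(A)))                  # [[ 1 -3  7], [ 2  2 -2], [-3  1  3]]
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```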
C.4 VECTOR SPACES

C.4.1 FIELD (F)

Consider a set of scalars. If, for any $\alpha$ and $\beta$ from the set, $\alpha + \beta$ and $\alpha\beta$ are also elements in the set; if

$\alpha + \beta = \beta + \alpha$ and $\alpha\beta = \beta\alpha$ (commutativity)
$(\alpha + \beta) + \gamma = \alpha + (\beta + \gamma)$ and $(\alpha\beta)\gamma = \alpha(\beta\gamma)$ (associativity)
$\alpha(\beta + \gamma) = \alpha\beta + \alpha\gamma$ (distributivity)

are satisfied; if identity elements 0 and 1 exist in the set such that $\alpha + 0 = \alpha$ and $1\alpha = \alpha$; and if inverse elements exist in the set such that $\alpha + (-\alpha) = 0$ and $\alpha \cdot \alpha^{-1} = 1$ (for $\alpha \neq 0$), then the set is a field. For example, the set $\mathbb{R}$ of real numbers is a field.

C.4.2 VECTOR SPACE (L)

Properties:
1. Vector addition ($x + y$) and scalar multiplication ($\alpha x$) are defined.
2. Commutativity ($x + y = y + x$) and associativity ($(x + y) + z = x + (y + z)$) are satisfied.
3. A unique null vector 0 and a negation ($-x$) exist such that $x + 0 = x$ and $x + (-x) = 0$.
4. Scalar multiplication satisfies

$\alpha(\beta x) = (\alpha\beta)x$ (associativity)
$\alpha(x + y) = \alpha x + \alpha y$ and $(\alpha + \beta)x = \alpha x + \beta x$ (distributivity)
$1x = x$ and $0x = 0$

Special case: the vector space $L^n$ has vectors with $n$ elements from the field $F$. Consider

$$x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \qquad y = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}$$

Then,

$$x + y = \begin{bmatrix} x_1 + y_1 \\ \vdots \\ x_n + y_n \end{bmatrix} = y + x \qquad \text{and} \qquad \alpha x = \begin{bmatrix} \alpha x_1 \\ \vdots \\ \alpha x_n \end{bmatrix}$$

C.4.3 SUBSPACE S OF L

If $x$ and $y$ are in $S$, then $x + y$ is also in $S$. If $x$ is in $S$ and $\alpha$ is in $F$, then $\alpha x$ is also in $S$.

C.4.4 LINEAR DEPENDENCE

Consider the set of vectors $x_1, x_2, \ldots, x_n$. They are linearly independent if none of these vectors can be expressed as a linear combination of the remaining vectors. A necessary and sufficient condition for linear independence is that

$$\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n = 0 \tag{C.21}$$

has only the trivial solution $\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0$. For example,

$$x_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \qquad x_2 = \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix}, \qquad x_3 = \begin{bmatrix} 5 \\ 0 \\ 5 \end{bmatrix}$$

are not linearly independent, because $x_1 + 2x_2 = x_3$.

C.4.5 BASIS AND DIMENSION OF A VECTOR SPACE

If a set of vectors can be combined to form any vector in $L$, then that set of vectors is said to span the vector space $L$ (i.e., it is a generating system of vectors). If the spanning vectors are all linearly independent, then this set of vectors is a basis for that vector space. The number of vectors in the basis is the dimension of the vector space.

Note: the dimension of a vector space is not necessarily the order of the vectors. For example, consider two intersecting third-order vectors. They form a basis for the plane (two-dimensional) that contains the two vectors. Hence, the dimension of this vector space is 2, but the order of each vector in the basis is 3.

Note: $L^n$ is spanned by $n$ linearly independent vectors, so $\dim(L^n) = n$. For example, the unit vectors

$$\begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}, \quad \ldots, \quad \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$

span $L^n$ and form a basis for it.

C.4.6 INNER PRODUCT

$$(x, y) = y^H x \tag{C.22}$$

where $H$ denotes the hermitian transpose (i.e., complex conjugate and transpose); hence $y^H = (y^*)^T$, where $(\;)^*$ denotes complex conjugation.

Note:
$(x, x) \geq 0$, and $(x, x) = 0$ if and only if (iff) $x = 0$
$(x, y) = (y, x)^*$
$(\lambda x, y) = \lambda(x, y)$
$(x, \lambda y) = \lambda^*(x, y)$
$(x, y + z) = (x, y) + (x, z)$

C.4.7 NORM

Properties:
$\|x\| \geq 0$, and $\|x\| = 0$ iff $x = 0$
$\|\lambda x\| = |\lambda|\,\|x\|$ for any scalar $\lambda$
$\|x + y\| \leq \|x\| + \|y\|$

For example, the Euclidean norm:

$$\|x\| = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2} \tag{C.23}$$

Unit vector: $\|x\| = 1$. Normalization: $\hat{x} = \dfrac{x}{\|x\|}$.

Angle between vectors:

$$\cos\theta = \frac{(x, y)}{\|x\|\,\|y\|} = (\hat{x}, \hat{y}) \tag{C.24}$$

where $\theta$ is the angle between $x$ and $y$.

Orthogonality: $x$ and $y$ are orthogonal iff

$$(x, y) = 0 \tag{C.25}$$

Note: $n$ orthogonal vectors in $L^n$ are linearly independent and span $L^n$; hence, they form a basis for $L^n$.

C.4.8 GRAM-SCHMIDT ORTHOGONALIZATION

Given a set of vectors $x_1, x_2, \ldots, x_n$ that are linearly independent in $L^n$, one can construct a set of orthonormal (orthogonal and normalized) vectors $\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n$ that are linear combinations of the $x_i$.

Start with

$$\hat{y}_1 = \hat{x}_1 = \frac{x_1}{\|x_1\|}$$

Then, for $i = 2, 3, \ldots, n$, subtract from $x_i$ its projections onto the vectors already constructed, and normalize the result:

$$y_i = x_i - \sum_{j=1}^{i-1} (x_i, \hat{y}_j)\,\hat{y}_j, \qquad \hat{y}_i = \frac{y_i}{\|y_i\|}$$

C.4.9 MODIFIED GRAM-SCHMIDT PROCEDURE

In each step, compute new vectors that are orthogonal to the just-computed vector; this organization of the computation behaves better numerically than the classical procedure.

Step 1: $\hat{y}_1 = \dfrac{x_1}{\|x_1\|}$, as before. Then

$$x_i^{(1)} = x_i - (\hat{y}_1, x_i)\,\hat{y}_1 \qquad \text{for } i = 2, 3, \ldots, n$$

Step 2: $\hat{y}_2 = \dfrac{x_2^{(1)}}{\|x_2^{(1)}\|}$, and

$$x_i^{(2)} = x_i^{(1)} - (\hat{y}_2, x_i^{(1)})\,\hat{y}_2 \qquad \text{for } i = 3, 4, \ldots, n$$

and so on.
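A compact sketch of the modified procedure of Section C.4.9 (real vectors for simplicity; the function name and the sample vectors are mine). The loop peels off one projection at a time from all remaining vectors, which is what distinguishes the modified form from the classical one:

```python
import numpy as np

def modified_gram_schmidt(X):
    """Orthonormalize the columns of X by the modified Gram-Schmidt procedure."""
    X = X.astype(float).copy()
    n = X.shape[1]
    Y = np.zeros_like(X)
    for i in range(n):
        Y[:, i] = X[:, i] / np.linalg.norm(X[:, i])   # normalize current vector
        for k in range(i + 1, n):                     # orthogonalize the rest against it
            X[:, k] -= (Y[:, i] @ X[:, k]) * Y[:, i]
    return Y

# Three linearly independent vectors as columns
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Y = modified_gram_schmidt(X)
print(np.allclose(Y.T @ Y, np.eye(3)))   # True: columns are orthonormal
```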
C.5 DETERMINANTS

Several analytical properties of the determinant of a square matrix can now be addressed. Consider the matrix

$$A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix}$$

The minor of $a_{ij}$ is $M_{ij}$, the determinant of the matrix formed by deleting the $i$th row and the $j$th column of the original matrix. The cofactor of $a_{ij}$ is $C_{ij} = (-1)^{i+j}M_{ij}$. Further, $\operatorname{cof}(A)$ denotes the cofactor matrix of $A$, and $\operatorname{adj}(A) = (\operatorname{cof} A)^T$ is the adjoint of $A$.

C.5.1 PROPERTIES OF THE DETERMINANT OF A MATRIX

1. Interchanging two rows (columns) changes the sign of the determinant.
2. Multiplying one row (column) by $\alpha$ multiplies the determinant by $\alpha$.
3. Adding $\alpha$ times one row (column) to a second row (column) leaves the determinant unchanged.
4. Two identical rows (columns) give a zero determinant.
5. For two square matrices $A$ and $B$, $\det(AB) = \det(A)\det(B)$.

C.5.2 RANK OF A MATRIX

$$\operatorname{rank}(A) = \text{number of linearly independent columns} = \text{number of linearly independent rows} = \dim(\text{column space}) = \dim(\text{row space})$$

Here, "dim" denotes the "dimension of."

C.6 SYSTEM OF LINEAR EQUATIONS

Consider the set of linear algebraic equations

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= c_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= c_2 \\ &\;\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= c_m \end{aligned}$$

One needs to solve for $x_1, x_2, \ldots, x_n$. This problem can be expressed in the vector-matrix form

$$A_{m \times n}\, x_n = c_m$$

A solution exists iff $\operatorname{rank}(A, c) = \operatorname{rank}(A)$, where $B = (A, c)$ is the matrix $A$ augmented with the column $c$. Two cases can be considered:

Case 1: If $m \geq n$ and $\operatorname{rank}(A) = n$, there is a unique solution for $x$.

Case 2: If $m \leq n$ and $\operatorname{rank}(A) = m$, there are infinitely many solutions for $x$; among them,

$$x = A^H\left(AA^H\right)^{-1}c$$

is the minimum-norm solution. Specifically, out of the infinitely many possibilities, this is the solution that minimizes the norm $x^H x$. The superscript $H$ denotes the hermitian transpose, which is the transpose of the complex conjugate of the matrix. For example, if

$$A = \begin{bmatrix} 1+j & 2+3j \\ 3-j & -1-j \end{bmatrix} \quad \text{then} \quad A^H = \begin{bmatrix} 1-j & 3+j \\ 2-3j & -1+j \end{bmatrix}$$

If the matrix is real, its hermitian transpose is simply the ordinary transpose. In general, if $\operatorname{rank}(A) < n$, there are infinitely many solutions. The space formed by the solutions of $Ax = 0$ is called the null space, and $\dim(\text{null space}) = n - k$, where $\operatorname{rank}(A) = k$.
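The minimum-norm formula of Case 2 is easy to exercise. A sketch for a real underdetermined system (the sample matrix is mine), checked against `np.linalg.lstsq`, which also returns the minimum-norm solution; for a complex $A$, `A.conj().T` would replace `A.T`:

```python
import numpy as np

# Underdetermined real system: m = 2 equations, n = 3 unknowns, rank(A) = m
A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  3.0]])
c = np.array([4.0, 5.0])

# Minimum-norm solution x = A^H (A A^H)^(-1) c  (A^H = A.T for real A)
x = A.T @ np.linalg.solve(A @ A.T, c)

print(A @ x)                                    # [4. 5.]: x satisfies the equations
x_lstsq = np.linalg.lstsq(A, c, rcond=None)[0]  # minimum-norm solution as well
print(np.allclose(x, x_lstsq))                  # True
print(np.linalg.matrix_rank(A))                 # 2 = m, as required in Case 2
```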
