
Vibration and Shock Handbook 03


Every so often, a reference book appears that stands apart from all others, destined to become the definitive work in its field. The Vibration and Shock Handbook is just such a reference. From its ambitious scope to its impressive list of contributors, this handbook delivers all of the techniques, tools, instrumentation, and data needed to model, analyze, monitor, modify, and control vibration, shock, noise, and acoustics. Providing convenient, thorough, up-to-date, and authoritative coverage, the editor summarizes important and complex concepts and results into “snapshot” windows to make quick access to this critical information even easier. The Handbook’s nine sections encompass: fundamentals and analytical techniques; computer techniques, tools, and signal analysis; shock and vibration methodologies; instrumentation and testing; vibration suppression, damping, and control; monitoring and diagnosis; seismic vibration and related regulatory issues; system design, application, and control implementation; and acoustics and noise suppression. The book also features an extensive glossary and convenient cross-referencing, plus references at the end of each chapter. Brimming with illustrations, equations, examples, and case studies, the Vibration and Shock Handbook is the most extensive, practical, and comprehensive reference in the field. It is a must-have for anyone, beginner or expert, who is serious about investigating and controlling vibration and acoustics.

3 Modal Analysis 3.1 3.2 Introduction 3-1 Degrees of Freedom and Independent Coordinates 3-2 3.3 System Representation 3-4 The University of British Columbia Stiffness and Flexibility Matrices † Inertia Matrix Approach for Equations of Motion † Direct 3.4 3.5 Modal Vibrations 3-10 Orthogonality of Natural Modes 3-14 3.6 Static Modes and Rigid-Body Modes 3-15 3.7 Other Modal Formulations 3-22 3.8 Clarence W de Silva Nonholonomic Constraints Modal Mass and Normalized Modal Vectors Static Modes † Linear Independence of Modal Vectors † Modal Stiffness and Normalized Modal Vectors † Rigid-Body Modes † Modal Matrix † Configuration Space and State Space Nonsymmetric Modal Formulation Modal Formulation † Transformed Symmetric Forced Vibration 3-28 First Mode (Rigid-Body Mode) (Oscillatory Mode) † Second Mode 3.9 Damped Systems 3-32 3.10 State-Space Approach 3-36 Proportional Damping Modal Analysis † Mode Shapes of Nonoscillatory Systems Mode Shapes of Oscillatory Systems † Appendix 3A Linear Algebra 3-41 Summary This chapter presents the modal analysis of lumped-parameter mechanical vibrating systems In the considered systems, inertia, flexibility, and damping characteristics are lumped at a finite number of discrete points in the system Techniques for determining the natural frequencies and mode shapes of vibration are given The orthogonality of mode shapes is established The existence of natural modes in damped systems is investigated Proportional damping is discussed Both free vibration and forced vibration of multi-degree-of-freedom (multi-DoF) systems are analyzed 3.1 Introduction Complex vibrating systems usually consist of components that possess distributed energy-storage and energy-dissipative characteristics In these systems, inertial, stiffness, and damping properties vary (piecewise) continuously with respect to the spatial location Consequently, partial differential equations, with spatial coordinates (e.g., Cartesian coordinates x; y; z) and time t as independent variables are necessary to represent their vibration response 3-1 © 2005 by Taylor & Francis Group, LLC 3-2 Vibration and Shock Handbook A distributed (continuous) vibrating system may be approximated (modeled) by an appropriate set of lumped masses properly interconnected using discrete spring and damper elements Such a model is termed lumped-parameter model or discrete model An immediate advantage resulting from this lumpedparameter representation is that the system equations become ordinary differential equations Often, linear springs and linear viscous damping elements are used in these models The resulting linear ordinary differential equations can be solved by the modal analysis method The method is based on the fact that these idealized systems (models) have preferred frequencies and geometric configurations (or natural modes) in which they tend to execute free vibration An arbitrary response of the system can be interpreted as a linear combination of these modal vibrations, and as a result its analysis may be conveniently done using modal techniques Modal analysis is an important tool in vibration analysis, diagnosis, design, and control In some systems, mechanical malfunction or failure can be attributed to the excitation of their preferred motion such as modal vibrations and resonances By modal analysis, it is possible to establish the extent and location of severe vibrations in a system For this reason, it is an important diagnostic tool For the same reason, modal analysis is also a useful method for predicting 
impending malfunctions or other mechanical problems Structural modification and substructuring are techniques of vibration analysis and design that are based on modal analysis By sensitivity analysis methods using a modal model, it is possible to determine which degrees of freedom (DoFs) of a mechanical system are most sensitive to addition or removal of mass and stiffness elements In this manner, a convenient and systematic method can be established for making structural modifications to eliminate an existing vibration problem, or to verify the effects of a particular modification A large and complex system can be divided into several subsystems which can be independently analyzed By modal analysis techniques, the dynamic characteristics of the overall system can be determined from the subsystem information This approach has several advantages, including: (1) subsystems can be developed by different methods such as experimentation, finite element method, or other modeling techniques and assembled to obtain the overall model; (2) the analysis of a high order system can be reduced to several lower order analyses; and (3) the design of a complex system can be carried out by designing and developing its subsystems separately These capabilities of structural modification and substructure analysis which are possessed by the modal analysis method make it a useful tool in the design development process of mechanical systems Modal control, a technique that employs modal analysis, is quite effective in the vibration control of complex mechanical systems 3.2 Degrees of Freedom and Independent Coordinates The geometric configuration of a vibrating system can be completely determined by a set of independent coordinates This number of independent coordinates, for most systems, is termed the number of DoFs of the system For example, a particle freely moving on a plane requires two independent coordinates to completely locate it (e.g., x and y Cartesian coordinates or r and u polar coordinates); its motion has two DoF A rigid body that is free to take any orientation in (threedimensional) space needs six independent coordinates to completely define its position For instance, its centroid is positioned using three independent Cartesian coordinates ðx; y; zÞ: Any axis fixed in the body and passing through its centroid can be oriented by two independent angles ðu; fÞ: The orientation of the body about this body axis can be fixed by a third independent angle ðcÞ: Altogether, six independent coordinates have been utilized; the system has six DoF Strictly speaking, the number of DoF is equal to the number of independent, incremental, generalized coordinates that are needed to represent a general motion In other words, it is the number of incremental independent motions that are possible For holonomic systems (i.e., systems possessing holonomic constraints only), the number of independent incremental generalized coordinates is equal to the number of independent generalized coordinates; hence, either definition may be used for the number of DoF If, on the other hand, the system has nonholonomic © 2005 by Taylor & Francis Group, LLC Modal Analysis 3-3 constraints, the definition based on incremental coordinates should be used, because in these systems the number of independent incremental coordinates is in general less than the number of independent coordinates that are required to completely position the system 3.2.1 Nonholonomic Constraints Constraints of a system that cannot be represented by purely algebraic 
equations in its generalized coordinates and time are termed nonholonomic constraints For a nonholonomic system, more coordinates than the number of DoF are required to completely define the position of the system The number of excess coordinates is equal to the number of nonalgebraic relations that define the nonholonomic constraints in the system Examples for nonholonomic systems are afforded by bodies rolling on surfaces and bodies whose velocities are constrained in some manner Example 3.1 A good example for a nonholonomic system is provided by a sphere rolling, without slipping, on a plane surface In Figure 3.1, the point O denotes the center of the sphere at a given instant, and P is an arbitrary point within the sphere The instantaneous point of contact with the plane surface is denoted by Q, so that the radius of the sphere is OQ ¼ a This system requires five independent generalized coordinates to position it For example, the center O is fixed by the Cartesian coordinates x and y: Since the sphere is free to roll along any arbitrary path on the plane and return to the starting point, the line OP can assume any arbitrary orientation for any given position for the center O This line can be oriented by two independent coordinates u and f; defined as in Figure 3.1 Furthermore, since the sphere is free to spin about the z-axis and is also free to roll on any trajectory (and return to its starting point), it follows that the sphere can take any orientation about the line OP (for a specific location of point O and line OP) This position can be oriented by the angle c: These five generalized coordinates x; y; u; f; and c are independent The corresponding incremental coordinates dx; dy; du; df; and dc are, however, not independent, as a result of the constraint of rolling without slipping It can be shown that two independent differential equations can be written for this constraint, and that consequently there exist only three independent incremental coordinates; the system actually has only three DoF To establish the equations for the two nonholonomic constraints note that the incremental displacements dx and dy of the center O about the instantaneous point of contact Q can be written dx ¼ a db; dy ¼ 2a da z ψ φ P y β θ O a α x Q FIGURE 3.1 Rolling sphere on a plane (an example of a nonholonomic system) © 2005 by Taylor & Francis Group, LLC 3-4 Vibration and Shock Handbook in which the rotations of a and b are taken as positive about the positive directions of x and y; respectively (Figure 3.1) Next, we will express da and db in terms of the generalized coordinates Note that du is directed along the z direction and has no components along the x and y directions On the other hand, df has the components df cos u in the positive y direction and df sin u in the negative x direction Furthermore, the horizontal component of dc is dc sin f: This in turn has the components ðdc sin fÞcos u and ðdc sin fÞsin u in the positive x and y directions, respectively It follows that da ¼ 2df sin u ỵ dc sin f cos u db ẳ df cos u ỵ dc sin f sin u Consequently, the two nonholonomic constraint equations are dx ¼ aðdf cos u ỵ dc sin f sin uị dy ẳ adf sin u dc sin f cos uÞ Note that these are differential equations that cannot be directly integrated to give algebraic equations A particular choice for the three independent incremental coordinates associated with the three DoF in the present system of a rolling sphere would be du; df; and dc: The incremental variables da; db; and du will form another choice The 
incremental variables dx; dy; and du will also form a possible choice Once three incremented displacements are chosen in this manner, the remaining two incremental generalized coordinates are not independent and can be expressed in terms of these three incremented variables using the constraint differential equations Example 3.2 A relatively simple example for a nonholonomic system is a single-dimensional rigid body (a straight line) moving on a plane such that its velocity is always along the body axis The idealized motion of a ship in calm water is a practical situation representing such a system This body needs three independent coordinates to completely define all possible configurations that it can take For example, the centroid of the body can be fixed by two Cartesian coordinates x and y on the plane, and the orientation of the axis through the centroid may be fixed by a single angle u: Note that, for a given location ðx; yÞ of the centroid, any arbitrary orientation ðuÞ for the body axis is feasible, because, as in the previous example, any arbitrary trajectory can be followed by this body and return the centroid to the starting point, but with a different orientation of the axis of the body Since the velocity is always directed along the body axis, a nonholonomic constraint exists and it is expressed as dy ¼ tan u dx It follows that there are only two independent incremental variables; the system has only two DoF Some useful definitions and properties that were discussed in this section are summarized in Box 3.1 3.3 System Representation Some damped systems not possess real modes If a system does not possess real modes, modal analysis could still be used, but the results would only be approximately valid In modal analysis it is convenient to first neglect damping and develop the fundamental results, and then subsequently extend the results to damped systems, for example, by assuming a suitable damping model that possesses real modes Since damping is an energy dissipation phenomenon, it is usually possible to determine a model that possesses real modes and also has an energy dissipation capacity equivalent to that of the actual system © 2005 by Taylor & Francis Group, LLC Modal Analysis 3-5 Box 3.1 SOME DEFINITIONS AND PROPERTIES OF MECHANICAL SYSTEMS Holonomic constraints Constraints that can be represented by purely algebraic relations Nonholonomic constraints Constraints that require differential relations for their representation Holonomic system A system that possesses holonomic constraints only Nonholonomic system Number of DoFs A system that possesses one or more nonholonomic constraints The number of independent incremental coordinates that are needed to represent general incremental motion of a system ¼ number of independent incremental motions ¼ £ number of DoF (typically) Order of a system For a holonomic system Number of independent incremental coordinates For a nonholonomic system Number of independent incremental coordinates ¼ Number of independent coordinates ¼ number of DoF ,Number of independent coordinates Consider the three undamped system representations (models) shown in Figure 3.2 The motion of system (a) consists of the translatory displacements y1 and y2 of the lumped masses m1 and m2 : The masses are subjected to the external excitation forces (inputs) f1 ðtÞ and f2 ðtÞ and the restraining forces of the discrete, tensile-compressive stiffness (spring) elements k1 ; k2 ; and k3 : Only two independent y1 (a) Translatory System k1 k1 Flexural System m1 m1 y2 
k2 f1(t) f1t) k2 m2 k1 m1 k2 f2(t) m2 k3 y2 y1 (c) FIGURE 3.2 © 2005 by Taylor & Francis Group, LLC k3 y2 f1(t) Torsional System f2(t) f2(t) y1 (b) k3 m2 Three types of two-DoF systems 3-6 Vibration and Shock Handbook incremental coordinates (dy1 and dy2 ) are required to completely define the incremental motion of the system subject to its inherent constraints It follows that the system has two DoF In system (b), shown in Figure 3.2, the elastic stiffness to the transverse displacements y1 and y2 of the lumped masses is provided by three bending ( flexural) springs that are considered massless This flexural system is very much analogous to the translatory system (a) even though the physical construction and the motion itself are quite different System (c) in Figure 3.2 is the analogous torsional system In this case, the lumped elements m1 and m2 should be interpreted as polar moments of inertia about the shaft axis, and k1 ; k2 ; and k3 as the torsional stiffness in the connecting shafts Furthermore, the motion coordinates y1 and y2 are rotations and the external excitations f1 ðtÞ and f2 ðtÞ are torques applied at the inertia elements Practical examples where these three types of vibration system models may be useful are: (a) a two-car train, (b) a bridge with two separate vehicle loads, and (c) an electric motor and pump combination The three systems shown in Figure 3.2 are analogous to each other in the sense that the dynamics of all three systems can be represented by similar equations of motion For modal analysis, it is convenient to express the system equations as a set of coupled second-order differential equations in terms of the displacement variables (coordinates) of the inertia elements Since in modal analysis we are concerned with linear systems, the system parameters can be given by a mass matrix and a stiffness matrix, or by a flexibility matrix Lagrange’s equations of motion directly yield these matrices; however, we will now present an intuitive method for identifying the stiffness and mass matrices The linear, lumped-parameter, undamped systems shown in Figure 3.2 satisfy the set of dynamic equations " #" # " #" # " # m11 m12 k11 k12 y1 f1 y1 ỵ ẳ m21 m22 k21 k22 y2 f2 y2 or My ỵ Ky ẳ f 3:1ị Here, M is the inertia matrix which is the generalized case of mass matrix, and K is the stiffness matrix There are many ways to derive Equation 3.1 Below, we will describe an approach, termed the influence coefficient method, which accomplishes the task by separately determining K and M 3.3.1 Stiffness and Flexibility Matrices In the systems shown in Figure 3.2 suppose the accelerations y€ and y€ are both zero at a particular instant, so that the inertia effects are absent The stiffness matrix K is given under these circumstances by the constitutive relation for the spring elements: " # " #" # f1 k11 k12 y1 ¼ f2 k21 k22 y2 or f ẳ Ky T 3:2ị T in which f is the force vector ½f1 ; f2 and y is the displacement vector ½y1 ; y2 : Both are column vectors The elements of the stiffness matrix, in this two-DoF case, are explicitly given by " # k11 k12 K¼ k21 k22 Suppose that y1 ¼ and y2 ¼ (i.e., give a unit displacement to m1 while holding m2 at its original position) Then k11 and k21 are the forces needed at location and location 2, respectively, to maintain this static configuration For this condition it is clear that f1 ẳ k1 ỵ k2 and f2 ẳ 2k2 : Accordingly, k11 ẳ k1 ỵ k2 ; © 2005 by Taylor & Francis Group, LLC k21 ¼ 2k2 Modal Analysis 3-7 Similarly, suppose that y1 ¼ and y2 ¼ 1: Then k12 and 
k22 are the forces needed at location and location 2, respectively, to maintain the corresponding static configuration It follows that k12 ẳ 2k2 ; k22 ẳ k2 ỵ k3 Consequently, the complete stiffness matrix can be expressed in terms of the stiffness elements in the system as " # k1 þ k2 2k2 K¼ 2k2 k2 þ k3 From the foregoing development, it should be clear that the stiffness parameter kij represents the force that is needed at the location i to obtain a unit displacement at location j: Hence, these parameters are termed stiffness influence coefficients Observe that the stiffness matrix is symmetric Specifically, kij ¼ kji for i j or KT ẳ K 3:3ị Note, however, that K is not diagonal in general (kij – for at least two values of i – j) This means that the system is statically coupled (or flexibly coupled) Flexibility matrix L is the inverse of the stiffness matrix L ¼ K21 ð3:4Þ To determine the flexibility matrix using the influence coefficient approach, we have to start with a constitutive relation of the form y ẳ Lf 3:5ị Assuming that there are no inertia forces at a particular instant, we then proceed as before For the systems in Figure 3.2, for example, we start with f1 ¼ and f2 ¼ 0: In this manner, we can determine the elements l11 and l21 of the flexibility matrix " # l11 l12 L¼ l21 l22 However, here, the result is not as straightforward as in the previous case For example, to determine l11 , we will have to find the flexibility contributions from either side of m1 : The flexibility of the stiffness element k1 is 1=k1 : The combined flexibility of k2 and k3 ; which are connected in series, is 1=k2 ỵ 1=k3 because the displacements (across variables) are additive in series The two flexibilities on either side of m1 are applied in parallel at m1 : Since the forces (through variables) are additive in parallel, the stiffness will also be additive Consequently, 1 ỵ ẳ l11 1=k1 ị 1=k2 ỵ 1=k3 ị After some algebraic manipulation we get l11 ẳ â 2005 by Taylor & Francis Group, LLC k2 ỵ k3 k1 k2 ỵ k2 k3 þ k3 k1 3-8 Vibration and Shock Handbook TABLE 3.1 Combination Rules for Stiffness and Flexibility Elements Connection Graphical Representation Combined Stiffness Combined Flexibility Series 1=k1 ỵ 1=k2 ị l1 ỵ l2 Parallel k1 ỵ k2 1=l1 ỵ 1=l2 Þ Since there is no external force at m2 in the assumed loading configuration, the deflections at m2 and m1 are proportioned according to the flexibility distribution along the path Accordingly, l21 ẳ 1=k3 l 1=k3 ỵ 1=k2 11 or l21 ẳ k2 k1 k2 ỵ k2 k3 ỵ k3 k1 l12 ẳ k2 k1 k2 ỵ k2 k3 þ k3 k1 l22 ¼ k1 þ k2 k1 k2 þ k2 k3 þ k3 k1 Similarly, we can obtain and Note that these results confirm the symmetry of flexibility matrices lij ¼ lji for i – j or LT ¼ L ð3:6Þ Also, we can verify the fact that L is the inverse of K The series –parallel combination rules for stiffness and flexibility that are useful in the present approach are summarized in Table 3.1 The flexibility parameters lij represent the displacement at the location i when a unit force is applied at location j: Hence, these parameters are termed flexibility influence coefficients 3.3.2 Inertia Matrix The mass matrix, which is used in the case of translatory motions, can be generalized as inertia matrix M in order to include rotatory motions as well To determine M for the systems shown in Figure 3.2, suppose the deflections y1 and y2 are both zero at a particular instant so that the springs are in their static equilibrium configuration Under these conditions, the equation of motion 3.1 becomes f ¼ M€y For the present 
two-DoF case, the elements of M are denoted by " # m11 m12 Mẳ m21 m22 â 2005 by Taylor & Francis Group, LLC ð3:7Þ Modal Analysis 3-9 To identify these elements, first set y€ ¼ and y€ ¼ 0: Then, m11 and m21 are the forces needed at the locations and 2, respectively, to sustain the given accelerations; specifically, f1 ¼ m1 and f2 ¼ 0: It follows that m11 ¼ m1 ; m21 ¼ Similarly, by setting y€1 ¼ and y€ ¼ 1; we get m12 ¼ 0; Then, the mass matrix is obtained as " M¼ m22 ¼ m2 m1 0 m2 # It should be clear now that the inertia parameter mij represents the force that should be applied at the location i in order to produce a unit acceleration at location j: Consequently, these parameters are called inertia influence coefficients Note that the mass matrix is symmetric in general; specifically mij ¼ mji for i j or MT ẳ M 3:8ị Furthermore, when the independent displacements of the lumped inertia elements are chosen as the motion coordinates, as is typical, the inertia matrix becomes diagonal If not, it can be made diagonal by using straightforward algebraic substitutions so that each equation contains the second derivative of just one displacement variable Hence, we may assume mij ẳ for i j 3:9ị Then the system is said to be inertially uncoupled This approach to finding K and M is summarized in Box 3.2 It can be conveniently extended to damped systems for determining the damping matrix C Box 3.2 INFLUENCE COEFFICIENT METHOD OF DETERMINING SYSTEM MATRICES (UNDAMPED CASE ) Stiffness Matrix (K) Mass Matrix (M) Set y€ ¼ f ¼ Ky Set y ¼ f ¼ M€y Set yj ¼ and yi ¼ for all i – j Set y€j ¼ and y€i ¼ for all i – j Determine f from the system diagram, that is needed to main equilibrium ¼ jth column of K Determine f to maintain this condition ¼ jth column of M Repeat for all j Repeat for all j © 2005 by Taylor & Francis Group, LLC 3-10 Vibration and Shock Handbook y2 y1 k1y1 m1 k2 (y2 − y1) f1(t) FIGURE 3.3 3.3.3 k2 (y2 − y1) m2 k3y2 f2(t) Free-body diagram of the two-DoF system Direct Approach for Equations of Motion The influence coefficient approach that was described in the previous section is a rather indirect way of obtaining the equations of motion 3.1 for a multi-DoF system The most straightforward approach, however, is to sketch a free-body diagram for the system, mark the forces or torques on each inertia element, and finally, apply Newton’s Second Law This approach is now illustrated for the system shown in Figure 3.2(a) The equations of motion for the systems in Figures 3.2(b) and (c) will follow analogously The free-body diagram of the system in Figure 3.2(a) is sketched in Figure 3.3 Note that all the forces on each inertia element are marked Application of Newton’s Second Law to the two mass elements separately gives m1 y€1 ẳ 2k1 y1 ỵ k2 y2 y1 ị ỵ f1 tị m2 y ẳ 2k2 y2 y1 ị k3 y2 ỵ f2 tị The terms can be rearranged to obtain the following two coupled, second order, linear, ordinary differential equations: m1 y ỵ k1 ỵ k2 ịy1 k2 y2 ẳ f1 tị m2 y 2 k2 y1 ỵ k2 ỵ k3 ịy2 ẳ f2 ðtÞ which may be expressed in the vector–matrix form as " #" # " #" # " # m1 y k1 ỵ k2 2k2 y1 f1 tị þ ¼ m2 y€ 2k2 k2 þ k3 y2 f2 ðtÞ Observe that this result is identical to what we obtained by the influence coefficient approach Another convenient approach that would provide essentially the same result is the energy method through the application of Lagrange’s equations Two common types of models used in vibration analysis and applications are summarized in Box 3.3 3.4 Modal Vibrations Among the infinite number of relative geometric 
configurations the lumped masses in a multi-DoF system could assume under free motion (i.e., with fðtÞ ¼ 0), when excited by an arbitrary initial state, there is a finite number of configurations that are naturally preferred by the system Each of these configurations will have an associated frequency of motion These motions are termed modal motions By choosing the initial displacement y(0) proportional to a particular modal conguration, with zero initial velocity, y_ 0ị ẳ 0; that particular mode can be excited at the associated natural frequency of motion The displacements of different DoF retain this initial proportion at all times This constant proportion in displacement can be expressed as a vector c for that mode, and represents the mode shape Note that each modal motion is a harmonic motion executed at a specific frequency v known as the natural frequency (undamped) In view of these general properties of modal motions, © 2005 by Taylor & Francis Group, LLC Modal Analysis 3-43 Similarly, we can express the two outputs, y1 and y2 ; as a vector y: Consequently, we have the column vector " # y1 yẳ y2 or the row vector y ẳ ẵy1 ; y2 It should be kept in mind that the order in which the components (or elements) are given is important since the vector ½u1 ; u2 is not equal to the vector ½u2 ; u1 : In other words, a vector is an “ordered” collection of quantities Summarizing, we can express a collection of quantities, in an orderly manner, as a single vector Each quantity in the vector is known as a component or an element of the vector What each component means will depend on the particular situation For example, in a dynamic system it may represent a quantity such as voltage, current, force, velocity, pressure, flow rate, temperature, or heat transfer rate The number of components (elements) in a vector is called the order, or dimension of the vector Next let us introduce the concept of a matrix using the frequency-domain example given above Note that we needed four transfer functions to relate the two excitations to the two responses Instead of considering these four quantities separately we can express them as a single matrix G having four elements Specifically, the transfer function matrix for the present example is " # G11 G12 G¼ G21 G22 Note that our matrix has two rows and two columns Hence the size or order of the matrix is £ Since the number of rows is equal to the number of columns in this example, we have a square matrix If the number of rows is not equal to the number of columns, we have a rectangular matrix Actually, we can interpret a matrix as a collection of vectors Hence, in the previous example, the matrix G is an assembly of the two column vectors " # " # G11 G12 and G21 G22 or alternatively, an assembly of the two row vectors ½G11 ; G12 and ½G21 ; G22 3A.3 Vector– Matrix Algebra The advantage of representing the excitations and the responses of a mechatronic system as the vectors u and y; and the transfer functions as the matrix G is clear from the fact that the excitation – response (input –output) equations can be expressed as the single equation y ẳ Gu 3A:5ị instead of the collection of scalar equations (Equation 3A.4) Hence, the response vector y is obtained by premultiplying the excitation vector u by the transfer function matrix G: Of course, certain rules of vector–matrix multiplication have to be agreed upon in order that this single equation is consistent with the two scalar equations given by Equation 3A.4 Also, we have to agree upon rules for the addition of 
vectors or matrices © 2005 by Taylor & Francis Group, LLC 3-44 Vibration and Shock Handbook A vector is a special case of a matrix Specifically, a third-order column vector is a matrix having three rows and one column Hence, it is a £ matrix Similarly, a third-order row vector is a matrix having one row and three columns Accordingly, it is a £ matrix It follows that we only need to know matrix algebra; the vector algebra will follow from the results for matrices 3A.3.1 Matrix Addition and Subtraction Only matrices of the same size can be added The result (sum) will also be a matrix of the same size In matrix addition, we add the corresponding elements (i.e., the elements at the same position) in the two matrices, and write the results at the corresponding places in the resulting matrix As an example, consider the £ matrix " # 21 A¼ 22 and a second matrix " B¼ # 25 23 1 22 The sum of these two matrices is given by " AỵBẳ # The order in which the addition is done is immaterial Hence AỵBẳBỵA ð3A:6Þ In other words, matrix addition is commutative Matrix subtraction is defined just like matrix addition, except the corresponding elements are subtracted An example is given below: 3 21 25 7 7 21 ¼ 1 5 24 3A.3.2 23 21 Null Matrix The null matrix is a matrix whose elements are all zeros Hence, when we add a null matrix to an arbitrary matrix, the result is equal to the original matrix We can define a null vector in a similar manner We can write Aỵ0ẳA As an example, the £ null matrix is 3A.3.3 " 0 0 ð3A:7Þ # Matrix Multiplication Consider the product AB of the two matrices A and B Let us write this as C ẳ AB â 2005 by Taylor & Francis Group, LLC ð3A:8Þ Modal Analysis 3-45 We say that B is premultiplied by A or, equivalently, A is postmultiplied by B For this multiplication to be possible, the number of columns in A has to be equal to the number of rows in B Then, the number of rows of the product matrix C is equal to the number of rows in A, and the number of columns in C is equal to the number of columns in B The actual multiplication is done by multiplying the elements in a given row (say, the ith row) of A by the corresponding elements in a given column (say, the jth column) of B and summing these products The result is the element cij of the product matrix C Note that cij denotes the element that is common to the ith row and the jth column of matrix C So, we have X cij ẳ aik bkj 3A:9ị k As an example, suppose " A¼ 2 21 23 B¼6 42 21 24 23 # 27 Note that the number of columns in A is equal to three and the number of rows in B is also equal to three Hence, we can perform the premultiplication of B by A For example c11 ¼ Ê ỵ Ê ỵ 21ị Ê ẳ c12 ẳ Ê 21ị ỵ Ê ỵ 21ị Ê 23ị ẳ c13 ẳ Ê ỵ Ê 24ị ỵ 21ị Ê ẳ 27 c14 ẳ Ê ỵ Ê ỵ 21ị Ê ẳ c21 ẳ Ê ỵ 23ị Ê ỵ Ê ẳ 17 c22 ẳ Ê 21ị ỵ 23ị Ê ỵ Ê 23ị ẳ 224 and so on The product matrix is " C¼ 27 17 224 22 # It should be noted that both products AB and BA are not always defined and, even when they are defined, the two results are not equal in general Unless both A and B are square matrices of the same order, the two product matrices will not be of the same order Summarizing, matrix multiplication is not commutative: AB – BA 3A.3.4 ð3A:10Þ Identity Matrix An identity matrix (or unity matrix) is a square matrix whose diagonal elements are all equal to and all the remaining elements are zeros This matrix is denoted by I For example, the third-order identity matrix is 0 7 I¼6 40 05 © 2005 by Taylor & Francis Group, LLC 3-46 Vibration and Shock Handbook It is easy to see that when any 
matrix is multiplied by an identity matrix (provided, of course, that the multiplication is possible) the product is equal to the original matrix; thus AI ¼ IA ¼ A 3A.4 ð3A:11Þ Matrix Inverse An operation similar to scalar division can be defined in terms of the inverse of a matrix A proper inverse is defined only for a square matrix and, even for a square matrix, an inverse may not exist The inverse of a matrix is defined as follows Suppose that a square matrix A has the inverse B Then, these must satisfy the equation AB ¼ I ð3A:12Þ BA ¼ I ð3A:13Þ or equivalently where I is the identity matrix, as defined before The inverse of A is denoted by A 21 The inverse exists for a matrix if and only if the determinant of the matrix is nonzero Such matrices are termed nonsingular We shall discuss the determinant in Section 3A.6 Before explaining a method for determining the inverse of a matrix, let us verify that " # 1 is the inverse of " 1 21 21 # To show this, we simply multiply the two matrices and show that the product is the second-order unity matrix Specifically, " #" # " # 21 1 ¼ 21 1 or " 1 3A.4.1 #" 21 21 # " ¼ 0 # Matrix Transpose The transpose of a matrix is obtained by simply interchanging the rows and the columns of the matrix The transpose of A is denoted by A T For example, the transpose of the £ matrix " # 22 A¼ 22 is the £ matrix 22 AT ¼ 22 © 2005 by Taylor & Francis Group, LLC 7 Modal Analysis 3-47 Note that the first row of the original matrix has become the first column of the transposed matrix, and the second row of the original matrix has become the second column of the transposed matrix If AT ¼ A; then we say that the matrix A is symmetric Another useful result on the matrix transpose is expressed by ABịT ẳ BT AT 3A:14ị It follows that the transpose of a matrix product is equal to the product of the transposed matrices, taken in the reverse order 3A.4.2 Trace of a Matrix The trace of a square matrix is given by the sum of the diagonal elements The trace of matrix A is denoted by trAị: X trAị ẳ aii 3A:15ị i For example, the trace of the matrix 22 A¼6 4 21 24 0 17 is given by trAị ẳ 22ị ỵ 24ị ỵ ¼ 23 3A.4.3 Determinant of a Matrix The determinant is defined only for a square matrix It is a scalar value computed from the elements of the matrix The determinant of a matrix A is denoted by detðAÞ or lAl: Instead of giving a complex mathematical formula for the determinant of a general matrix in terms of the elements of the matrix, we now explain a way to compute the determinant First, consider the £ matrix " # a11 a12 A¼ a21 a22 Its determinant is given by detAị ẳ a11 a22 a12 a21 Next, consider the £ matrix a11 a12 a13 A¼6 a21 a22 a23 a31 a32 a33 Its determinant can be expressed as detAị ẳ a11 M11 a12 M12 ỵ a13 M13 where the minors of the associated matrix elements are defined as " # " # " a22 a23 a21 a22 a21 M11 ¼ det ; M12 ¼ det ; M13 ¼ det a32 a33 a31 a32 a31 © 2005 by Taylor & Francis Group, LLC a22 a32 # 3-48 Vibration and Shock Handbook Note that Mij , the determinant of the matrix, is obtained by deleting the ith row and the jth column of the original matrix The quantity Mij is known as the minor of the element aij of the matrix A If we attach a proper sign to the minor depending on the position of the corresponding matrix element, we have a quantity known as the cofactor Specifically, the cofactor, Cij ; corresponding to the minor, Mij ; is given by Cij ẳ 21ịiỵj Mij 3A:16ị Hence, the determinant of the £ matrix may be given by detðAÞ ẳ a11 C11 ỵ a12 C12 ỵ a13 C13 Note that 
in the two formulas given above for computing the determinant of a £ matrix, we have expanded along the first row of the matrix We get the same answer, however, if we expand along any row or any column Specifically, when expanded along the ith row, we have detAị ẳ ai1 Ci1 ỵ ai2 Ci2 ỵ ai3 Ci3 Similarly, if we expand along the jth column, we have detAị ẳ a1j C1j ỵ a2j C2j ỵ a3j C3j These ideas of computing a determinant can be easily extended to £ and higher-order matrices in a straightforward manner Hence, we can write X X aij Cij 3A:17ị aij Cij ẳ detAị ẳ i j 3A.4.4 Adjoint of a Matrix The adjoint of a matrix is the transpose of the matrix whose elements are the cofactors of the corresponding elements of the original matrix The adjoint of matrix A is denoted by adjðAÞ: As an example, in the £ case, we have 3 C11 C12 C13 T C11 C21 C31 7 7 adjAị ẳ C21 C22 C23 ¼ C12 C22 C32 C31 C32 C33 C13 C23 In particular, it is easily seen that the adjoint of the matrix 21 7 A¼6 40 1 is given by adjAị ẳ 23 Accordingly, we have 2 23 22 23 23 3T 7 adjAị ẳ â 2005 by Taylor & Francis Group, LLC 22 C33 Modal Analysis 3-49 Hence, in general adjAị ẳ ẵCij 3A.4.5 T 3A:18ị Inverse of a Matrix At this juncture, it is appropriate to give a formula for the inverse of a square matrix Specifically, A21 ¼ adjðAÞ detðAÞ ð3A:19Þ Hence, in the £ matrix example given before, since we have already determined the adjoint, it remains only to compute the determinant in order to obtain the inverse Now, expanding along the first row of the matrix, the determinant is given by detAị ẳ Ê ỵ Ê ỵ 21ị Ê 23ị ẳ Accordingly, the inverse is given by A21 ¼ 16 84 23 23 7 22 For two square matrices A and B we have ABị21 ẳ B21 A21 ð3A:20Þ As a final note, if the determinant of a matrix is zero, the matrix does not have an inverse Then we say that the matrix is singular Some important matrix properties are summarized in Box 3A.1 Box 3A.1 SUMMARY OF MATRIX PROPERTIES Addition: AmÊn ỵ BmÊn ẳ CmÊn Multiplication: Am£n Bn£r ¼ Cm£r Identity: AI ¼ IA ¼ A ) I is the identity matrix Note: AB ¼ ) ⁄ A ¼ or B ¼ in general Transposition: CT ẳ ABịT ẳ BT AT Inverse: AP ¼ I ¼ PA ) A ¼ P21 and P ẳ A21 ABị21 ẳ B21 A21 Commutativity: AB BA in general Associativity: ABịC ẳ ABCị Distributivity: CA ỵ Bị ẳ CA ỵ CB Distributivity: A ỵ BịD ẳ AD ỵ BD â 2005 by Taylor & Francis Group, LLC 3-50 Vibration and Shock Handbook 3A.5 Vector Spaces 3A.5.1 Field (F) Consider a set of scalars For any a and b from the set, if a ỵ b and ab are also elements in the set; if a þ b ¼ b þ a and ab ¼ ba a ỵ bị ỵ g ẳ a ỵ b ỵ gị and abịg ẳ abgị ab ỵ gị ẳ ab ỵ ag (commutativity) (associativity) (distributivity) are satised; and if Identity elements and exist in the set such that a ỵ ẳ a and 1a ¼ a Inverse elements exist in the set such that a ỵ 2aị ẳ and aãa21 ẳ then the set is a field For example, the set of real numbers is a field 3A.5.2 Vector Space (L) Properties: Vector addition x ỵ yị and scalar multiplication axị are dened Commutativity: x ỵ y ẳ y ỵ x and associativity: x ỵ yị ỵ z ẳ x ỵ y ỵ zị are satised Unique null vector and negation ð2xÞ exist such that x ỵ ẳ x, x ỵ 2xị ẳ 0: Scalar multiplication satises abxị ẳ abịx associativityị ax ỵ yị ẳ ax ỵ by ) distributivityị a ỵ bịx ẳ ax þ bx 1x ¼ x; 0x ¼ Special case: Vector space Ln has vectors with n elements from the field F: Consider x1 6x 27 x ¼ 7; xn y1 6y 27 y¼6 yn Then 6 xỵy ẳ6 x1 ỵ y1 xn ỵ yn â 2005 by Taylor & Francis Group, LLC 7 7ẳyỵx Modal Analysis 3-51 and ax1 7 ax ¼ axn 3A.5.3 Subspace S of L If x and y are in S then x ỵ y is 
also in S: If x is in S and a is in F then ax is also in S: 3A.5.4 Linear Dependence Consider the set of vectors: x1 ; x2 ; …; xn :They are linearly independent if any one of these vectors cannot be expressed as a linear combination of one or more remaining vectors Necessary and sufficient condition for linear independence: a1 x1 ỵ a2 x2 ỵ ã ã ã ỵ an xn ¼ gives a ¼ (trivial solution) as the only solution For example, 3 7 7 x1 ¼ x2 ¼ 5; 21 5; 3A:21ị x3 ẳ 5 These vectors are not linearly independent because x1 ỵ 2x2 ẳ x3 : 3A.5.5 Bases and Dimension of a Vector Space If a set of vectors can be combined to form any vector in L then that set of vectors is said to span the vector space L (i.e., a generating system of vectors) If the spanning vectors are all linearly independent, then this set of vectors is a basis for that vector space The number of vectors in the basis ¼ the dimension of the vector space Note: The dimension of a vector space is not necessarily the order of the vectors For example, consider two intersecting third-order vectors They will form a basis for the plane (twodimensional) that contains the two vectors Hence, the dimension of the vector space ¼ 2, but the order of each vector in the basis ¼ Note: L n is spanned by n linearly independent vectors ) dimLn ị ẳ n For example, â 2005 by Taylor & Francis Group, LLC 3 0 7 607 617 607 7 7 7 7 7; 7; · · ·; 7 7 6.7 6.7 6.7 6.7 6.7 6.7 607 5 0 3-52 Vibration and Shock Handbook 3A.5.6 Inner Product ðx; yị ẳ yH x 3A:22ị where H denotes the Hermitian transpose (i.e., complex conjugate and transpose) Hence, yH ¼ ðyp ÞT where ( )p denotes complex conjugation Note: x; xị $ and x; xị ẳ if and only if (iff) x ¼ ðx; yị ẳ y; xịp lx; yị ẳ lx; yịx; lyị ẳ lp x; yị x; y ỵ zị ẳ x; yị ỵ x; zị 3A.5.7 Norm Properties kxk $ and kxk ¼ iff x ¼ klxk ¼ lllkxk for any scalar l kx ỵ yk # kxk þ kyk For example, the Euclidean norm: H kxk ¼ x x ẳ n X iẳ1 xi2 !1=2 3A:23ị Unit vector kxk ¼ Normalization x ¼ x^ kxk Angle between vectors We have cos u ẳ x; yị ẳ ð^x; y^ Þ kxkkyk ð3A:24Þ where u is the angle between x and y: Orthogonal vectors iff x; yị ẳ ð3A:25Þ Note: n orthogonal vectors in L n are linearly independent and span Ln ; and form a basis for Ln : 3A.5.8 Gram –Schmidt Orthogonalization Given a set of vectors, x1 ; x2 ; …; xn ; that are linearly independent in Ln ; we construct a set of orthonormal (orthogonal and normalized) vectors, y^ ; y^ ; …; y^ n ; which are linear combinations of x^ i : Start y^ ¼ x^ ¼ © 2005 by Taylor & Francis Group, LLC x1 kx1 k Modal Analysis 3-53 Then y i ¼ xi i21 X jẳ1 xi ; y^ j ị^yj for i ¼ 1; 2; …; n Normalize yi to produce y^ i : 3A.5.9 Modified Gram –Schmidt Procedure In each step, compute new vectors that are orthogonal to the just-computed vector Initialization: x y^ ¼ kx1 k as before Then y1 ; xi ị^y1 x1ị i ẳ xi ^ y^ i ẳ x1ị i kx1ị i k for i ¼ 2; 3; …; n for i ¼ 2; 3; …n and ð1Þ y2 ; y2 ; xð1Þ x2ị i ị^ i ẳ xi ^ i ẳ 3; 4; …; n and so on 3A.6 Determinants Now, let us address several analytical issues of the determinant of a square matrix Consider the matrix a11 · · · a1n 7 A ¼ an1 · · · ann The minor of aij ¼ Mij ¼ the determinant of matrix formed by deleting the ith row and the jth column of the original matrix The cofactor of aij ¼ Cij ẳ 21ịiỵj Mij cof Aị ẳ cofactor matrix of A adjAị ẳ adjoint A ẳ cof AịT 3A.6.1 Properties of Determinant of a Matrix Interchange two rows (columns) ) determinant’s sign changes Multiply one row (column) by a ) a det( ) Add a [a £ row(column)] to a second row(column) ) determinant 
unchanged Identical rows(columns) ) zero determinant For two square matrices A and B; detABị ẳ detAịdetBị: 3A.6.2 Rank of a Martix Rank A ¼ number of linearly independent columns ¼ number of linearly independent rows ¼ dim(column space) ¼ dim(row space) Here “dim” denotes the “dimension of.” © 2005 by Taylor & Francis Group, LLC 3-54 3A.7 Vibration and Shock Handbook System of Linear Equations Consider the following set of linear algebraic equations: a11 x1 ỵ a12 x2 ỵ ã ã ã þ a1n xn ¼ c1 a21 x1 þ a22 x2 þ · · · þ a2n xn ¼ c2 am1 x1 ỵ am2 x2 ỵ ã ã ã ỵ amn xn ¼ cm We need to solve for x1 ; x2 ; …; xn : This problem can be expressed in the vector– matrix form: Am£n xn ¼ cm B ẳ ẵA; c A solution exists iff rankẵA; c ¼ rank½A Two cases can be considered: Case 1: If m $ n and rankẵA ẳ n ) unique solution for x: Case 2: If m # n and rank½A ¼ m ) infinite number of solutions for x x ¼ AH ðAAH Þ21 C ( minimum norm form Specifically, out of the infinite possibilities, this is the solution that minimizes the norm, xH x: Note that the superscript H denotes the Hermitian transpose, which is the transpose of the complex conjugate of the matrix For example, " # þ j þ 3j A¼ 32j 21 2j Then 12j AH ¼ 2 3j 3ỵj 7 21 ỵ 2j If the matrix is real, its Hermitian transpose is simply the ordinary transpose In general, if rank½A # n ) infinite number of solutions The space formed by solutions Ax ¼ ) is called the null space dimnull spaceị ẳ n k where rankẵA ẳ k 3A.8 Quadratic Forms Consider a vector, x; and a square matrix, A: Then the function Qxị ẳ x; Axị is called a quadratic form For a real vector x and a real and symmetric matrix A; Qxị ẳ xT Ax Positive definite matrix: If ðx; AxÞ for all x – 0; then A is said to be a positive definite matrix Also, the corresponding quadratic form is also said to be positive definite © 2005 by Taylor & Francis Group, LLC Modal Analysis 3-55 Positive semidefinite matrix: If ðx; AxÞ $ for all x – 0; then A is said to be a positive semidefinite matrix Note that in this case the quadratic form can assume a zero value for a nonzero x: The corresponding quadratic form is also said to be positive semidefinite Negative definite matrix: If ðx; AxÞ , for all x – 0; then A is said to be a negative definite matrix The corresponding quadratic form is also said to be negative definite Negative semidefinite matrix: If ðx; AxÞ # for all x – 0; then A is said to be a negative semidefinite matrix Note that, in this case, the quadratic form can assume a zero value for a nonzero x: The corresponding quadratic form is also said to be negative semidefinite Note: If A is positive definite, then 2A is negative definite If A is positive semidefinite, then 2A is negative semidefinite Principal minors: Consider the matrix a11 a12 · · · a1n 6a 21 a22 · · · a2n 7 A¼6 an1 an2 · · · ann Its principal minors are the determinants of the various matrices along the principal diagonal, as given by a11 a12 a13 " # a11 a12 7 D1 ¼ a11 ; D2 ¼ det ; D3 ¼ det6 a21 a22 a23 5; and so on a21 a22 a31 a32 a33 Sylvester’s theorem: A matrix is positive if definite if all its principal minors are positive 3A.9 3A.9.1 Matrix Eigenvalue Problem Characteristic Polynomial Consider a square matrix A: The polynomial Dsị ẳ detẵsI A is called the characteristic polynomial of A: 3A.9.2 Characteristic Equation The polynomial equation Dsị ẳ detẵsI A ẳ is called the characteristic equation of the square matrix A: 3A.9.3 Eigenvalues The roots of the characteristic equation of a square matrix A are the eigenvalues of A: For an n £ n matrix, 
there will be n eigenvalues 3A.9.4 Eigenvectors The eigenvalue problem of a square matrix A is given by Av ¼ lv © 2005 by Taylor & Francis Group, LLC 3-56 Vibration and Shock Handbook where the objective is to solve for l and the corresponding nontrivial (i.e., nonzero) solutions for v: The problem can be expressed as lI Aịv ẳ Note: If v is a solution of this equation, then any multiple of it, av; is also a solution Hence, an eigenvector is arbitrary up to a multiplication factor For a nontrivial (i.e., nonzero) solution to be possible for v; one must have detẵlI A ẳ Since this is the characteristic equation of A; as defined above, it is clear that the roots of l are the eigenvalues of A: The corresponding solutions for v are the eigenvectors of A: For an n £ n matrix, there will be n eigenvalues and n corresponding eigenvectors 3A.10 Matrix Transformations 3A.10.1 Similarity Transformation Consider a square matrix, A; and a nonsingular square matrix, T: Then, the matrix obtained according to B ¼ T21 AT is the similarity transformation of A by T: The transformed matrix B has the same eigenvalues as the original matrix A: Also, A and B are said to be similar 3A.10.2 Orthogonal Transformation Consider a square matrix A and another square matrix T: Then, the matrix obtained according to B ¼ TT AT is the orthogonal transformation of A by T: If T21 ¼ TT then the matrix T is said to be an orthogonal matrix In this case, the similarity transformation and the orthogonal transformation become identical 3A.11 Matrix Exponential The matrix exponential is given by the innite series expAtị ẳ I ỵ At þ 2 A t þ ··· 2! ð3A:26Þ expltị ẳ ỵ lt ỵ 2 l t ỵ ããã 2! 3A:27ị exactly like the scalar exponential The matrix exponential may be determined by reducing the infinite series given in Equation 3A.26 into a finite matrix polynomial of order n (where, A is n £ n) by using the Cayley –Hamilton theorem 3A.11.1 Cayley –Hamilton Theorem This theorem states that a matrix satisfies its own characteristic equation The characteristic polynomial of A can be expressed as Dlị ẳ detA lIị ẳ an ln ỵ an21 ln21 ỵ ã ã ã ỵ a0 â 2005 by Taylor & Francis Group, LLC ð3A:28Þ Modal Analysis 3-57 in which det( ) denotes determinant The notation DAị ẳ an An ỵ an21 An21 ỵ ã ã ã ỵ a0 I ð3A:29Þ is used Then, by the Cayley –Hamilton theorem, we have ẳ an An ỵ an21 An21 þ · · · þ a0 I 3A.11.2 ð3A:30Þ Computation of Matrix Exponential Using the Cayley –Hamilton theorem, we can obtain a finite polynomial expansion for expðAtÞ: First, we express Equation 3A.26 and Equation 3A.27 as expAtị ẳ SAịãDAị ỵ an21 An21 ỵ an22 An22 ỵ ã ã ã ỵ a0 I n21 expltị ẳ SlịãDlị ỵ an21 l n22 þ an22 l þ · · · þ a0 ð3A:31Þ ð3A:32Þ in which Sð·Þ is an appropriate infinite series, which is the result of dividing the exponential (infinite) series by the characteristic polynomial Dãị: Next, since DAị ẳ by the Cayley Hamilton theorem, Equation 3A.31 becomes expAtị ẳ an21 An21 ỵ an22 An22 ỵ ã ã ã ỵ a0 I ð3A:33Þ Now it is just a matter of determining the coefficients, a0 ; a1 ; …; an21 ; which are functions of time This is done as follows If l1 ; l2 ; …; ln are the eigenvalues of A; however, then, by denition Dli ị ẳ detA li Iị ẳ for i ẳ 1; 2; …; n ð3A:34Þ Thus, from Equation 3A.32, we obtain expðli tị ẳ an21 ln21 ỵ an22 ln22 ỵ ã ã ã ỵ a0 i i for i ẳ 1; 2; …; n ð3A:35Þ If the eigenvalues are all distinct, Equation 3A.35 represents a set of n independent algebraic equations from which the n unknowns a0 ; a1 ; …; an21 could be determined If some eigenvalues are repeated, the 
derivatives of the corresponding equations (Equation 3A.35) have to be used as well.
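The following short numerical sketches are appended to illustrate, under stated assumptions, a few of the procedures described in this chapter and its appendix. They are written in Python with NumPy and SciPy, and the parameter values used are illustrative assumptions, not values taken from the handbook. This first sketch assembles the inertia and stiffness matrices of the translatory two-DoF system of Figure 3.2(a) by the influence-coefficient method of Section 3.3, solves the eigenvalue problem of Section 3.4 for the undamped natural frequencies and mode shapes, and checks the orthogonality conditions of Section 3.5.

# Minimal sketch: modal analysis of the two-DoF system of Figure 3.2(a).
# Parameter values below are illustrative assumptions only.
import numpy as np
from scipy.linalg import eigh

m1, m2 = 2.0, 1.0                  # lumped masses (kg)
k1, k2, k3 = 100.0, 50.0, 100.0    # spring stiffnesses (N/m)

# Inertia and stiffness matrices from the influence-coefficient method (Section 3.3)
M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2,     k2 + k3]])

# Modal vibration: K psi = omega^2 M psi (generalized symmetric eigenproblem)
lam, Psi = eigh(K, M)        # eigenvalues in ascending order; columns of Psi are mode shapes
omega = np.sqrt(lam)         # undamped natural frequencies (rad/s)

print("natural frequencies:", omega)
print("mode shapes (columns):\n", Psi)

# Orthogonality of natural modes (Section 3.5): Psi^T M Psi and Psi^T K Psi are diagonal
print("modal mass matrix:\n", Psi.T @ M @ Psi)        # approximately the identity (M-normalized modes)
print("modal stiffness matrix:\n", Psi.T @ K @ Psi)   # approximately diag(omega^2)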
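As a second sketch, with the same assumed parameter values, the flexibility matrix of Section 3.3.1 can be formed in two ways: from the flexibility influence coefficients obtained with the series and parallel rules of Table 3.1, and as the inverse of the stiffness matrix (Equation 3.4). The two results should agree.

# Sketch: flexibility matrix L of the system in Figure 3.2(a), computed two ways (Section 3.3.1).
import numpy as np

k1, k2, k3 = 100.0, 50.0, 100.0
K = np.array([[k1 + k2, -k2],
              [-k2,     k2 + k3]])

den = k1*k2 + k2*k3 + k3*k1
L_influence = np.array([[k2 + k3, k2],
                        [k2,      k1 + k2]]) / den   # flexibility influence coefficients

L_inverse = np.linalg.inv(K)                         # L = K^{-1} (Equation 3.4)

print(np.allclose(L_influence, L_inverse))           # True: the two derivations agree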
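The next sketch is a minimal real-valued implementation of the modified Gram-Schmidt procedure of Appendix 3A.5.9. For real vectors the Hermitian transpose reduces to the ordinary transpose, and the input columns are assumed to be linearly independent.

# Sketch: modified Gram-Schmidt orthonormalization (Appendix 3A.5.9), real case.
import numpy as np

def modified_gram_schmidt(X):
    # Columns of X are assumed linearly independent; returns orthonormal columns.
    Q = np.array(X, dtype=float)
    n = Q.shape[1]
    for i in range(n):
        Q[:, i] /= np.linalg.norm(Q[:, i])            # normalize the current vector
        for j in range(i + 1, n):                     # remove its component from the remaining vectors
            Q[:, j] -= (Q[:, i] @ Q[:, j]) * Q[:, i]
    return Q

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])     # three linearly independent column vectors (assumed example)

Q = modified_gram_schmidt(X)
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: the columns form an orthonormal set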
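Finally, the Cayley-Hamilton computation of the matrix exponential outlined in Section 3A.11.2 can be checked numerically for a 2 x 2 matrix with distinct eigenvalues: the coefficients a0 and a1 are obtained from Equation 3A.35, and the finite polynomial of Equation 3A.33 is compared with a series-based routine. The particular matrix A and the time t below are illustrative assumptions.

# Sketch: exp(At) by the Cayley-Hamilton method of Section 3A.11.2 (n = 2, distinct eigenvalues).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # illustrative matrix with distinct eigenvalues (-1 and -2)
t = 0.5

lam = np.linalg.eigvals(A)          # eigenvalues of A (assumed distinct)

# exp(lam_i * t) = a1*lam_i + a0 for each eigenvalue (Equation 3A.35 with n = 2)
V = np.column_stack([np.ones_like(lam), lam])
a0, a1 = np.linalg.solve(V, np.exp(lam * t))

expAt = a1 * A + a0 * np.eye(2)     # finite polynomial form (Equation 3A.33)

print(np.allclose(expAt, expm(A * t)))   # True: matches the series-based result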
