CRC Press - Robotics and Automation Handbook Episode 2 Part 5 ppt

Many questions (e.g., collision detection, kinematic constraints) can be answered directly using kinematic simulation. Including the dynamics becomes important when the engineer is interested in more detailed information such as reaction forces, sizing motors and actuators, sensor selection, control design, estimating power consumption, and determining how contacting parts behave.

We consider two approaches to simulating the robot dynamics. The first is a canned approach in which the computation of the dynamic equations of motion is handled automatically from a CAD model of the system. The second approach is based on a time-domain simulation package such as MATLAB's Simulink or MATRIXx's SystemBuild. The first approach is generally much easier and provides sufficient information to answer fundamental questions; in many cases it is also possible to integrate the dynamic simulation package with a control package such as Simulink or SystemBuild. The second approach requires a deeper understanding of the system's dynamics but enables greater flexibility in modeling the system (including control algorithms, sensory feedback, and arbitrary inputs).

21.3.4.1 Multibody Dynamic Packages

Two examples of multibody dynamic simulation packages are LMS International's Dynamic Analysis and Design System (DADS) and MSC Software's Adams. With both packages, you begin by either building a parameterized model of the robot or importing the model from a CAD system, much like the simulation packages described earlier. The software automatically formulates and solves the dynamic equations of motion. Both Adams and DADS can be integrated with control simulation software such as Simulink and SystemBuild. The engineer can take the geometrically defined models and easily incorporate them within block diagrams created in the control system software. It is then possible to evaluate a wide class of mechanical and control designs and estimate the system's performance before ever cutting metal.

SolidWorks provides an option for integrating the Adams simulation technology directly into the SolidWorks environment. This interface, CosmosMotion, is a physics-based motion simulation package that is fully embedded in the SolidWorks environment. The part geometry and mass properties are defined inside SolidWorks; CosmosMotion reads these characteristics, along with all of the defined constraints. The engineer specifies joint motions or forces, and the software displays the resulting forces and/or motion of the system based on fundamental physical principles. The advantage of this approach is twofold. First, the design and the simulation are embedded in the same interface, so the user is not concerned with file transfers or compatibility issues. Second, the same software package that the engineer uses to design the robot is used to define the geometry and mass properties for the simulation. These advantages come at a cost: at the time of this writing, a professional version of SolidWorks is $4000 and the CosmosMotion package is $6000.

Another option is to import the CAD model into a time-domain simulation package such as MATLAB and Simulink. SimMechanics provides a straightforward way to import SolidWorks CAD assemblies into the Simulink time-domain simulation environment. This approach makes it possible to simulate the behavior of mechanical, control, and other dynamic systems together.
As an example, we constructed a simple three-degree-of-freedom manipulator in the SolidWorks environment. This model, shown in Figure 21.9, consists of four parts: a base and three links. The four parts are combined in an assembly, and the assembly includes the ability to define constraints (what SolidWorks calls mates) between parts. For this robot, which has three revolute joints, we require six constraints: three forcing the joint axes on adjoining parts to be collinear and three forcing the mating faces on adjoining parts to be as close as possible. Now, when one part is moved, the whole assembly moves, illustrating the constrained motion due to the robot's kinematics. It is also possible to include collision detection and to restrict motion when constraints are reached. At this point, SolidWorks is restricted to constraint detection and ranges of motion. However, when defining parts, you can include not only the geometry but also the type of material (specifically, its density). SolidWorks, like many CAD packages, can then compute, from the geometry and materials, the resulting mass properties for each part (mass, location of the center of mass, principal axes of inertia, and moments of inertia).

21.4.1 Graphical Animation

There are a number of different ways to visually display robot motion. In this section, we focus on where to begin if we wish to develop a package similar to the systems covered in the previous section. Many of these packages exploit the OpenGL interface. OpenGL was originally developed by Silicon Graphics, Inc. (SGI) as a multi-purpose, platform-independent graphics API. It is a collection of approximately 150 distinct commands that you can use to specify the objects and operations needed to produce interactive three-dimensional applications. As a starting point, note that the examples here are written in C++ in an object-oriented style. To develop a robot simulation package efficiently, you build a set of classes that construct simple objects such as spheres, cylinders, and boxes, plus whatever other basic shapes you need to define a robot. OpenGL uses a number of vector and matrix operations to define objects in three-dimensional space. As a simple example, the following function draws a cube.

void DrawCube(float x, float y, float z)
{
    glPushMatrix();
    glTranslatef(x, y, z);
    glBegin(GL_QUADS);                     // four vertices per face
        glVertex3f( 0.0f,  0.0f,  0.0f);   // top face (y = 0)
        glVertex3f( 0.0f,  0.0f, -1.0f);
        glVertex3f(-1.0f,  0.0f, -1.0f);
        glVertex3f(-1.0f,  0.0f,  0.0f);
        glVertex3f( 0.0f,  0.0f,  0.0f);   // front face (z = 0)
        glVertex3f(-1.0f,  0.0f,  0.0f);
        glVertex3f(-1.0f, -1.0f,  0.0f);
        glVertex3f( 0.0f, -1.0f,  0.0f);
        glVertex3f( 0.0f,  0.0f,  0.0f);   // right face (x = 0)
        glVertex3f( 0.0f, -1.0f,  0.0f);
        glVertex3f( 0.0f, -1.0f, -1.0f);
        glVertex3f( 0.0f,  0.0f, -1.0f);
        glVertex3f(-1.0f,  0.0f,  0.0f);   // left face (x = -1)
        glVertex3f(-1.0f,  0.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f,  0.0f);
        glVertex3f( 0.0f, -1.0f,  0.0f);   // bottom face (y = -1)
        glVertex3f( 0.0f, -1.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f,  0.0f);
        glVertex3f( 0.0f,  0.0f, -1.0f);   // back face (z = -1)
        glVertex3f(-1.0f,  0.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f, -1.0f);
        glVertex3f( 0.0f, -1.0f, -1.0f);
    glEnd();
    glPopMatrix();
}

OpenGL uses what is called a matrix stack to construct complicated models out of many simple objects.
In the case of the cube, the function first saves the current coordinate frame, which may have been pushed onto the stack prior to this function call. The glTranslatef() call then translates the coordinate frame; likewise, functions such as glRotatef() and glScalef() rotate and scale the object. The glBegin() and glEnd() calls bound the definition of the object's vertices, which are specified with glVertex3f().

Your robot will consist of a number of these primitives, hence the motivation for object-oriented programming, and animating its motion requires coordinating those primitives. It should now be clear that a visual robot simulation program requires a combination of graphics programming (using utilities such as OpenGL) and an understanding of robot kinematics. A number of references, including this handbook, provide excellent guidelines for writing the algorithms needed to produce coordinated motion of a robot (Asada and Slotine, 1989; Craig, 1989; Spong and Vidyasagar, 1989; Yoshikawa, 1990). Basically, you will have some form of interface that provides a desired motion for the robot; this can be a script file, mouse motion, slider bars on a dialog box, or any other basic form of input into your application. The kinematic routines transform the desired motion of the robot into the required transformation for each primitive or link, and the results are passed to functions such as glTranslatef() and glRotatef() to animate the coordinated motion.

Clearly, the above description is not sufficient to let you immediately start writing software for robot animation, but it does provide a good starting point. There are many examples where research groups have used OpenGL to develop their own robot simulation packages.7,8 In addition, there are a number of good references on programming in OpenGL (Hawkings and Astle, 2001; Woo et al., 1999). These references, combined with the robotics references above, provide guidance on how to write software that animates objects such as the robot shown in Figure 21.10.

FIGURE 21.10 OpenSim simulation of vehicle and arm (courtesy of Oak Ridge National Laboratory).

7. ORNL's simulation package: http://marvin.ornl.gov/opensim/
8. Georgia Tech's simulation package: http://www.imdl.gatech.edu/mbsim/index.html

21.4.2 Numerical Simulation

In Section 21.3, we briefly described the utility of programs such as Simulink and SystemBuild for simulating the motion of a robotic system. However, that approach required a secondary software package for computing the dynamic equations of motion. In this final section, we describe a simple procedure for computing the equations of motion of a serial-link manipulator and embedding the dynamics in a Simulink simulation. The basic methodology requires a symbolic software package and a deeper understanding of dynamics. The approach uses homogeneous transforms to describe key points on the robot: the joint locations and the centers of mass. From the transformations we construct linear and angular velocity vectors for each center of mass; the displacement terms are used to construct the potential energy, while the velocity terms are used in the computation of the kinetic energy. We use the Lagrange approach to compute the resulting equations of motion.
We then describe how to include joint and tip forces, and how to partition the equations so that they are suitable for any numerical integration routine. While this approach is presently limited to serial-link manipulators, it should give the reader a better understanding of the mechanics associated with simulating robot dynamics.

The two primary elements of any time-domain simulation are the state equations and an integration routine. The state equations are derived using basic principles of dynamics. Our objective is to formulate the dynamics into the general form given in Equation (21.1), where x̄ represents the vector of system states and ū is the vector of control inputs. In the case of a robot, x̄ could contain the position and velocity of each joint, and ū the joint forces produced by the actuators driving the robot. The equation is derived from the system's dynamic equations of motion.

\dot{\bar{x}} = F(\bar{x}, \bar{u}, t)   (21.1)

During each time step of the simulation, we compute the forces driving the robot, possibly due to a control law, and evaluate the derivative of the state using Equation (21.1). We then use an integration routine to compute the state of the system, x̄, at the next time step. The basic elements are shown in Figure 21.11. Clearly, the computation of the state equations requires quite a bit of effort; the remainder of this section provides a basic procedure for deriving the dynamic equations of motion, in the state-equation format described above, for a general serial-link manipulator.

The approach is based on computing, symbolically, the position and orientation of each joint and center of mass of the robot. While not strictly necessary, we use the Denavit-Hartenberg convention. First, as a review, the homogeneous transform is expressed using the traditional Denavit-Hartenberg (D-H) representation found in most robotics texts, where the four quantities θ_i (angle), α_i (twist), d_i (offset), and a_i (length) are the parameters of link and joint i.

H_i = \begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}   (21.2)

The conventional use of the homogeneous transform treats each subsequent transformation as a body-fixed rotation and translation. We also assume that we can define an additional set of homogeneous transforms from each joint to a point on its link where the associated mass properties (mass and inertia matrix) are known. Our basic methodology therefore uses two sets of homogeneous transformations: one to describe the pose of the arm, and one to identify the displacements and velocities, both translational and rotational, of the center of mass of each link and payload with respect to the manipulator's state (joint positions and velocities). From the transforms we extract the vertical displacement of each link's center of mass to form the total potential energy of the system. Likewise, the system's kinetic energy is computed from the linear and angular velocity of each link's center of gravity with respect to the inertial frame.
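As a concrete illustration of Equation (21.2) and the composition of transforms used below, the following MATLAB sketch builds a symbolic D-H transform and chains it along a hypothetical planar three-joint arm. It assumes the Symbolic Math Toolbox, and the variable names and link parameters are illustrative, not taken from the handbook's listings.

% Minimal sketch: a symbolic D-H transform and its composition from the base
% outward. Assumes MATLAB's Symbolic Math Toolbox; parameters are illustrative.
syms th1 th2 th3 th1d th2d th3d L1 L2 L3 real

% One D-H transform, H(theta, alpha, d, a), per Equation (21.2)
DH = @(th, al, d, a) ...
    [ cos(th), -sin(th)*cos(al),  sin(th)*sin(al), a*cos(th);
      sin(th),  cos(th)*cos(al), -cos(th)*sin(al), a*sin(th);
      0,        sin(al),          cos(al),         d;
      0,        0,                0,               1 ];

% Hypothetical planar arm: all twists and offsets are zero
H1_base = DH(th1, 0, 0, L1);
H2_base = simplify(H1_base * DH(th2, 0, 0, L2));   % post-multiply, Equation (21.3)
H3_base = simplify(H2_base * DH(th3, 0, 0, L3));

% Position of the last frame's origin in base coordinates; the component along
% the gravity direction is what enters the potential energy of Equation (21.4)
x3_base = H3_base(1:3, 4);

% Linear velocity of that point (Equation 21.5): Jacobian times the joint rates
v3 = simplify(jacobian(x3_base, [th1; th2; th3]) * [th1d; th2d; th3d]);

The same pattern, applied at each link's center of mass, produces the displacement and velocity expressions needed for the potential and kinetic energy terms below.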
Once the kinetic and potential energy terms are derived, we simply use the MATLAB jacobian() command to symbolically calculate the mass matrix and the nonlinear dynamic terms following the Lagrange formulation. Symbolic packages such as Maple and Mathematica can do all the rest.

FIGURE 21.11 General simulation block diagram (desired state, controller, state equations, and integrator).

First, the position of the center of mass of each link with respect to the system's inertial coordinate system is computed by iteratively post-multiplying the homogeneous transforms, starting from the base and working toward the tip of the robot.

H^{i}_{base} = H^{i-1}_{base} H^{i}_{i-1}
= \begin{bmatrix} R^{i-1}_{base} & \bar{x}^{\,i-1}_{base} \\ \bar{0} & 1 \end{bmatrix}
  \begin{bmatrix} R^{i}_{i-1} & \bar{x}^{\,i}_{i-1} \\ \bar{0} & 1 \end{bmatrix}
= \begin{bmatrix} R^{i}_{base} & \bar{x}^{\,i}_{base} \\ \bar{0} & 1 \end{bmatrix},
\qquad
\bar{x}^{\,i}_{base} = \begin{bmatrix} x^{i}_{base} \\ y^{i}_{base} \\ z^{i}_{base} \end{bmatrix}   (21.3)

The potential energy due to gravity for link i is the vertical component of x̄^i_base (z^i_base, in the direction of gravity) times the mass of the link.

V_i = m_i g z^{i}_{base}   (21.4)

To compute the kinetic energy, we must first derive expressions for the linear and angular velocity of each link's center of gravity as a function of the manipulator states. Equation (21.3) gives the position x̄^i_base of the center of gravity of link i with respect to the base inertial frame. The velocity vector v̄_i is obtained by multiplying the Jacobian of x̄^i_base with respect to the joint variables, J(x̄^i_base, q̄), by the joint velocity vector. Note from Equation (21.5), which follows from the chain rule, that q̄ here is the vector of joint displacements from the base of the robot (joint q_0) to the ith joint (q_i).

\bar{v}_i = \frac{\partial \bar{x}_i}{\partial t}
= \sum_{j=1}^{i} \frac{\partial \bar{x}_i}{\partial q_j}\,\dot{q}_j
= J(\bar{x}_i, \bar{q})\,\dot{\bar{q}},
\qquad \bar{q} = [\,q_0 \;\; q_1 \;\; \cdots \;\; q_i\,]   (21.5)

The rotational velocity is a little more involved but can also be handled with the homogeneous transforms. Starting at the base of the robot, the net rotational velocity vector is projected forward to the center of mass of each link using the rotation matrix R^i_base. At each subsequent joint, the total angular velocity of the previous joint is projected into the current joint's coordinate system, using the rotational component of that joint's homogeneous transform, and the joint's own angular velocity is added.

\bar{\omega}_i = \dot{\bar{q}}_i + R^{i}_{i-1}\,\bar{\omega}_{i-1}   (21.6)

We now have expressions for the linear and angular velocity of the center of mass of each link. The total kinetic energy of the system is

T = \frac{1}{2}\sum_{i=1}^{N}\left( m_i\,\bar{v}_i^{\,T}\bar{v}_i + \bar{\omega}_i^{\,T} I_i\,\bar{\omega}_i \right)   (21.7)

where m_i is the mass of link i and I_i is the inertia matrix of link i about its center of gravity. As a final step, we add the external forces applied to the system. For now, we assume forces are applied only at the joints and at the tip of the robot, and we use the principle of virtual work to lump these terms together.

\partial W = \sum_{i=1}^{N}\tau_i\,\partial q_i + \bar{F}_{tip}\,\partial\bar{x}_{tip} + \bar{M}_{tip}\,\partial\theta_{tip}
= \left(\bar{\tau} + J^{T}_{tip}(\bar{q})\begin{bmatrix}\bar{F}_{tip}\\ \bar{M}_{tip}\end{bmatrix}\right)\partial\bar{q}
= \bar{Q}\,\partial\bar{q},
\qquad J_{tip}(\bar{q}) = \frac{\partial \bar{x}^{\,tip}_{base}}{\partial\bar{q}}   (21.8)

Equation (21.4) and Equation (21.7) provide expressions for the potential and kinetic energy of the system. We start with the classic form of the Lagrange equations of motion.

\frac{\partial}{\partial t}\left(\frac{\partial T}{\partial\dot{\bar{q}}}\right) - \frac{\partial T}{\partial\bar{q}} + \frac{\partial V}{\partial\bar{q}} = \bar{Q}   (21.9)

The first term in Equation (21.9) can be expanded using the chain rule.
\frac{\partial}{\partial t} = \frac{\partial}{\partial\bar{q}}\,\frac{\partial\bar{q}}{\partial t} + \frac{\partial}{\partial\dot{\bar{q}}}\,\frac{\partial\dot{\bar{q}}}{\partial t}   (21.10)

Substituting Equation (21.10) into Equation (21.9),

\frac{\partial}{\partial\dot{\bar{q}}}\left(\frac{\partial T}{\partial\dot{\bar{q}}}\right)\ddot{\bar{q}}
+ \frac{\partial}{\partial\bar{q}}\left(\frac{\partial T}{\partial\dot{\bar{q}}}\right)\dot{\bar{q}}
- \frac{\partial T}{\partial\bar{q}} + \frac{\partial V}{\partial\bar{q}} = \bar{Q}   (21.11)

As with the velocity computation in Equation (21.5), we can exploit the jacobian() function in MATLAB to evaluate many of the terms in Equation (21.11). First, the term ∂T/∂q̇ is the differential of the scalar kinetic energy with respect to the full state velocity vector defined in Equation (21.5); this produces the vector L_v in the script files shown below. The first term in Equation (21.11) represents the mass matrix and is easily computed by taking the Jacobian of L_v with respect to the full state velocity vector. The remaining elements of Equation (21.11) represent the nonlinear dynamics (Coriolis, centripetal, and gravitational terms) of the system. Thus, once the kinetic and potential energy terms are defined, it is straightforward to evaluate the dynamic equations symbolically. The Jacobian for projecting external forces onto the generalized coordinates can similarly be computed from the robot's tip position with the MATLAB jacobian() function.

This methodology was developed to compute the dynamics of a class of robots operating on the deck of a ship (a moving platform). We provide two examples to verify it: a simple one-degree-of-freedom (dof) system operating on a three-dof moving platform, and a three-dof system experiencing all six degrees of motion from the sea state. The first example is simple enough to verify through hand calculations; the second is more complex, yet practical.

Figure 21.12 shows the basic kinematic model of the one-degree-of-freedom system experiencing three sea states in the X-Y plane. We assume a one-dof system with mass M and rotary inertia I_z located at the tip of a link of length L. The system experiences only three of the six sea states: surge (x_s), heave (y_s), and pitch (θ_s). The only external force applied to the system is a joint torque τ at joint 1. Appendix A lists the MATLAB code used to generate the dynamic equations of motion; the two output variables of interest are the mass matrix (MassMatrix) and the nonlinear terms (NLT). The resulting output is listed in Equation (21.12) and can be verified by hand.

The equations are partitioned so that the robot joint accelerations can be solved for in terms of the applied generalized forces, the nonlinear terms, and the platform motion (displacement, velocity, and acceleration). This expression is in the same form as Equation (21.1).

\ddot{\bar{q}}_r = M_{rr}^{-1}\left\{ \bar{Q}_r + J^{T}(\bar{q}_r)\,\bar{F}_{ext} - NLT_r - M_{rs}\,\ddot{\bar{q}}_s \right\}   (21.14)

While the output of the single-degree-of-freedom case can be listed in Equation (21.12), the dynamic equations of motion for the second system generate 84 pages of C code and would require considerable effort to derive by hand. Fortunately, that code can be imported directly into Simulink through the S-Function Builder. The motivation for computing the dynamic equations of motion is threefold. First, having the dynamics in symbolic form aids the design process, since parameters can be changed to optimize the system. Second, a model of the system dynamics increases the fidelity of the simulation used for control design and analysis. Finally, there is no doubt about how the simulation derived the system's equations of motion.
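To make the procedure concrete before turning to the hydraulic system, the sketch below carries out the steps above for the simplest possible case: a single link of length L rotating about a fixed base in a vertical plane, with a point mass M and rotary inertia I_z at the tip, gravity, and a joint torque as the only external input (the sea-state terms of the shipboard examples are omitted). It is a minimal sketch written in the style of the Appendix A listing, not a reproduction of it; all symbol names are illustrative.

% Single-link example: symbolic mass matrix and nonlinear terms via jacobian().
% Assumes MATLAB's Symbolic Math Toolbox. Sea-state motion is omitted here.
syms th thd M Iz L g tau real

% Tip position in the base frame (gravity acts along -y)
x_tip = [L*cos(th); L*sin(th); 0];

% Linear velocity of the tip (Equation 21.5) and angular velocity of the link
v_tip = jacobian(x_tip, th) * thd;
w_tip = thd;

% Kinetic and potential energy (Equations 21.7 and 21.4)
T = simplify((1/2)*M*(v_tip.'*v_tip) + (1/2)*Iz*w_tip^2);
V = M*g*L*sin(th);

% Lagrange terms of Equation (21.11), evaluated with jacobian()
q = th;  qd = thd;
Lv = jacobian(T, qd).';                 % dT/d(qdot)
MassMatrix = jacobian(Lv, qd);          % multiplies the joint acceleration
NLT = jacobian(Lv, q)*qd ...            % velocity-dependent terms
      - jacobian(T, q).' ...
      + jacobian(V, q).';               % gravity terms

% Generalized force (Equation 21.8): only the joint torque in this case
Q = tau;

% Equations of motion: MassMatrix*thdd + NLT = Q, i.e., the form of (21.14)
thdd_expr = simplify(MassMatrix \ (Q - NLT))
% expected result: (tau - M*g*L*cos(th)) / (M*L^2 + Iz)

% For larger systems, the symbolic results can be exported to C for use in a
% Simulink S-function, e.g., ccode(MassMatrix) and ccode(NLT).

Repeating the same steps for every link, and treating the platform coordinates as prescribed motions, is what produces the partitioned form of Equation (21.14).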
FIGURE 21.13 Force-controlled hydraulic manipulator (courtesy of Oak Ridge National Laboratory).

The system modeled in this investigation is displayed in Figure 21.13. It has a 500-lb payload capacity and three active degrees of freedom. The actuator models include nonlinear dynamic modeling of the hydraulic system (servo-valve orifice equations, asymmetric cylinders, fluid compliance), the controls, and the dynamic equations of motion computed above. The hydraulic actuator models generate force as a function of the servo-valve current and the actuator position and velocity. The Simulink model of the full system, including the sea-state inputs, hydraulic models, controls, and dynamic equations of motion, is shown in Figure 21.14.

FIGURE 21.14 HAT simulation model (courtesy of Oak Ridge National Laboratory).

The nonlinear model of the hydraulic manipulator, shown as the LDRD arm block in Figure 21.14, is expanded in Figure 21.15. The inputs to the model include the desired joint positions, the eighteen elements of the sea state (displacement, velocity, and acceleration of roll, pitch, yaw, heave, surge, and sway), and the external force applied to the robot. The model also includes auxiliary inputs for system identification. There are two primary blocks to note in Figure 21.15. The first is the system dynamics block, which contains the C code generated previously for the forward dynamics of the LDRD arm. The second is the hydraulics block. Within the expanded hydraulics block, each of the three joints has two primary elements: the servo valve and the actuator. For the second and third joints, there is also a transmission associated with coupling the linear actuator to the joint (the two "theta to L" blocks in the upper right). The servo-valve models are based on the full nonlinear orifice equations and regulate the fluid flow to the actuators as a function of the excitation signal (the command to the moving coil of the servo valve), the supply and return pressures, and the pressures on both sides of the actuator. The actuator model generates a force based on the position, the velocity, and the bulk modulus (stiffness) of the fluid. The position and velocity of the actuator come from the dynamic model of the manipulator, and the force from the actuator is the excitation to the dynamic model of the manipulator.

FIGURE 21.15 Details of HAT controller and manipulator model.
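The hydraulics block itself is specific to the ORNL testbed, but the structure it implements, servo-valve orifice equations feeding chamber-pressure dynamics that in turn produce the actuator force, is standard. The sketch below is a generic, textbook-style model of a single-rod cylinder driven by a critically centered valve; the function name and every parameter value are hypothetical and are not taken from the handbook.

% Generic single-rod (asymmetric) hydraulic cylinder with a critically centered
% servo valve. Textbook orifice/compressibility equations; not the ORNL model.
%   xv     - valve spool position (signed)
%   x, xd  - piston position and velocity
%   Pa, Pb - cap-side and rod-side chamber pressures
%   Ps, Pr - supply and return pressures
function [F, Pa_dot, Pb_dot] = hydraulicActuator(xv, x, xd, Pa, Pb, Ps, Pr)
    Cd   = 0.6;       % discharge coefficient              (hypothetical values)
    w    = 0.02;      % valve area gradient, m^2/m
    rho  = 850;       % fluid density, kg/m^3
    beta = 1.4e9;     % bulk modulus, Pa
    Aa   = 2.0e-3;    % cap-side piston area, m^2
    Ab   = 1.2e-3;    % rod-side piston area, m^2 (asymmetric cylinder)
    Ls   = 0.5;       % stroke, m
    Va0  = 1e-4;      % cap-side dead volume, m^3
    Vb0  = 1e-4;      % rod-side dead volume, m^3

    % Orifice flows: xv > 0 meters supply->A and B->return; xv < 0 reverses them
    if xv >= 0
        Qa =  Cd*w*xv*sqrt(max(2*(Ps - Pa)/rho, 0));
        Qb = -Cd*w*xv*sqrt(max(2*(Pb - Pr)/rho, 0));
    else
        Qa =  Cd*w*xv*sqrt(max(2*(Pa - Pr)/rho, 0));
        Qb = -Cd*w*xv*sqrt(max(2*(Ps - Pb)/rho, 0));
    end

    % Chamber pressure dynamics from fluid compressibility (the pressure states
    % would be integrated by the solver)
    Pa_dot = beta/(Va0 + Aa*x)        * (Qa - Aa*xd);
    Pb_dot = beta/(Vb0 + Ab*(Ls - x)) * (Qb + Ab*xd);

    % Net force delivered to the manipulator dynamics
    F = Pa*Aa - Pb*Ab;
end

In a block diagram such as Figure 21.15, a function of this form would sit inside the hydraulics block for each joint, with the valve command coming from the joint controller and the piston position and velocity coming from the manipulator dynamics.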
It should now be clear that the full nonlinear behavior of the manipulator is embodied in the simulation. To complete our example, we consider the impact the sea state has on the performance of a force-controlled system. Inputs to the system consist of a human command regulating about a single point and a sea state with conditions commensurate with a destroyer moving at 20 knots in sea state 5. Figure 21.16 shows the force provided by the human while attempting to regulate the tip position. There is the expected DC bias on the Z-direction force required to offset the gravitational load. The system is used for strength amplification, so the human senses only a fraction of the total payload weight. For this example, the payload weight is 2224 N, which projects to a human force of 22.4 N in static equilibrium with a 100:1 force amplification ratio. However, during the 120 sec simulation run, the maximum perturbation felt by the human ...

FIGURE 21.18 X tip direction.

FIGURE 21.19 Y tip direction.

FIGURE 21.20 Z tip direction.

The primary motivation for this exercise is to show the potential of robot simulation. In the case above, studying the control of a robot on the deck of a moving ship could prove quite costly in terms of either deploying hardware on a ship or designing and constructing a testbed capable of emulating ship motion. Clearly, the final step in the process is validation of the simulation through experimentation. However, a high-fidelity simulation can provide insight into many problems at a fraction of the cost. In addition, simulation reduces the hazards associated with testing immature control concepts: no one gets hurt if the simulation fails.

21.5 Conclusion

This chapter focused on recent developments in robot simulation. The traditional role of robot simulation in the automotive and aerospace industries still exists but has expanded in utility and flexibility. More advanced engineering tools now provide the ability for rapid prototyping, design optimization, and hardware-in-the-loop simulation. With the expanding use of robotics in areas such as medicine and the battlefield, as well as advancements in haptics and man-machine interfaces, it is likely that there will be greater interaction between humans and simulation in the near future.

References

Asada, H. and Slotine, J. (1989). Robot Analysis and Control, Wiley Interscience, New York.
Bridges, M. and Diamond, D. (1999). The financial impact of teaching surgical residents in the operating room, Am. J. Surg., Vol. 177, pp. 28–32.
Craig, J. (1989). Introduction to Robotics: Mechanics and Control, 2nd ed., Addison-Wesley, Reading, MA.
Cushieri, A. (1995). Whither minimal access surgery: tribulations and expectations, Am. J. Surg., Vol. 69, pp. 9–19.
Hawkings, K. and Astle, D. (2001). OpenGL Game Programming, Prima Publishing, Roseville, CA.
Spong, M. and Vidyasagar, M. (1989). Robot Dynamics and Control, John Wiley & Sons, New York.
Van Hoesen, D., Bzorgi, F., Kelsey, A., Wiles, C., and Beeson, K. (1995). Underground radioactive waste tank remote inspection and sampling, DOE EM Fact Sheet, Gunite and Associated Tanks Operable Unit at Waste Area Grouping 1 at Oak Ridge National Laboratory, The Department of Energy Environmental Program.
Woo, M., Neider, J., Davis, T., and Shreiner, D. (1999). OpenGL Programming Guide: The Official Guide to Learning OpenGL, 3rd ed., Addison-Wesley, Reading, MA.
Yong, Y. and Bonney, M. (1999). Offline programming, in: Handbook of Industrial Robotics, S. Nof, ed., John Wiley & Sons, New York, Chapter 19.
Yoshikawa, T. (1990). Foundations of Robotics: Analysis and Control, MIT Press, Cambridge, MA.
