Control of Robot Manipulators in Joint Space – R. Kelly, V. Santibanez and A. Loria, Part 10


• For any selection of the symmetric positive definite matrices K_p and K_v, the origin of the closed-loop equation of robots with the PD control law with compensation, expressed in terms of the state vector [q̃ᵀ q̃̇ᵀ]ᵀ, is globally uniformly asymptotically stable. Therefore, the PD control law with compensation satisfies the motion control objective globally. This implies in particular that for any initial position error q̃(0) ∈ ℝⁿ and any velocity error q̃̇(0) ∈ ℝⁿ, we have lim_{t→∞} q̃(t) = 0.

• For any choice of the symmetric positive definite matrices K_p and K_v, the origin of the closed-loop equation of a robot with the PD+ control law, expressed in terms of the state vector [q̃ᵀ q̃̇ᵀ]ᵀ, is globally uniformly asymptotically stable. Therefore, PD+ control satisfies the motion control objective globally. In particular, for any initial position error q̃(0) ∈ ℝⁿ and velocity error q̃̇(0) ∈ ℝⁿ, we have lim_{t→∞} q̃(t) = 0.

Bibliography

The structure of the PD control law with compensation has been proposed and studied in

• Slotine J. J., Li W., 1987, "On the adaptive control of robot manipulators", The International Journal of Robotics Research, Vol. 6, No. 3, pp. 49–59.
• Slotine J. J., Li W., 1988, "Adaptive manipulator control: A case study", IEEE Transactions on Automatic Control, Vol. AC-33, No. 11, November, pp. 995–1003.
• Slotine J. J., Li W., 1991, "Applied nonlinear control", Prentice-Hall.

The Lyapunov function (11.3) for the analysis of global uniform asymptotic stability for the PD control law with compensation was proposed in

• Spong M., Ortega R., Kelly R., 1990, "Comments on "Adaptive manipulator control: A case study"", IEEE Transactions on Automatic Control, Vol. 35, No. 6, June, pp. 761–762.
• Egeland O., Godhavn J. M., 1994, "A note on Lyapunov stability for an adaptive robot controller", IEEE Transactions on Automatic Control, Vol. 39, No. 8, August, pp. 1671–1673.
The structure of the PD+ control law was proposed in

• Koditschek D. E., 1984, "Natural motion for robot arms", Proceedings of the IEEE 23rd Conference on Decision and Control, Las Vegas, NV, December, pp. 733–735.

PD+ control was originally presented in²

• Paden B., Panja R., 1988, "Globally asymptotically stable PD+ controller for robot manipulators", International Journal of Control, Vol. 47, No. 6, pp. 1697–1712.

The material in Subsection 11.2.1 on the Lyapunov function used to show global uniform asymptotic stability is taken from

• Whitcomb L. L., Rizzi A., Koditschek D. E., 1993, "Comparative experiments with a new adaptive controller for robot arms", IEEE Transactions on Robotics and Automation, Vol. 9, No. 1, February, pp. 59–70.

² This, together with PD control with compensation, were the first controllers with rigorous proofs of global uniform asymptotic stability proposed for the motion control problem.

Problems

1. Consider the model of an ideal pendulum studied in Example 2.2 (see page 30),

   J q̈ + m g l sin(q) = τ.

   Assume that we apply the PD controller with compensation

   τ = k_p q̃ + k_v q̃̇ + J[q̈_d + λ q̃̇] + m g l sin(q)

   where λ = k_p/k_v, and k_p and k_v are positive numbers.

   a) Obtain the closed-loop equation in terms of the state vector [q̃ q̃̇]ᵀ. Verify that the origin is its unique equilibrium point.
   b) Show that the origin [q̃ q̃̇]ᵀ = 0 ∈ ℝ² is globally asymptotically stable. Hint: Use the Lyapunov function candidate

      V(q̃, q̃̇) = (1/2) J (q̃̇ + λ q̃)² + k_p q̃².

2. Consider PD+ control for the ideal pendulum presented in Example 11.2. Propose a Lyapunov function candidate to show that the origin [q̃ q̃̇]ᵀ = [0 0]ᵀ = 0 ∈ ℝ² of the closed-loop equation

   m l² q̃̈ + k_v q̃̇ + k_p q̃ = 0

   is a globally asymptotically stable equilibrium point.

3. Consider the model of the pendulum from Example 3.8, illustrated in Figure 3.13,

   [J_m + J_L/r²] q̈ + [f_m + f_L/r² + K_a K_b/R_a] q̇ + (k_L/r²) sin(q) = (K_a/(r R_a)) v

   where
   • v is the armature voltage (input),
   • q is the angular position of the pendulum with respect to the vertical (output),
   and the rest of the parameters are constants related to the electrical and mechanical parts of the system, all positive and known.

   It is desired to drive the angular position q(t) to a constant value q_d. To this end, we propose the following control law of PD+ type³,

   v = (r R_a/K_a) [ k_p q̃ − k_v q̇ + (k_L/r²) sin(q) ]

   with k_p and k_v positive design constants and q̃(t) = q_d − q(t).

   a) Obtain the closed-loop equation in terms of the state [q̃ q̇]ᵀ.
   b) Verify that the origin is an equilibrium and propose a Lyapunov function to demonstrate its stability.
   c) Could it also be shown that the origin is actually globally asymptotically stable?

4. Consider the control law

   τ = K_p q̃ + K_v q̃̇ + M(q) q̈_d + C(q, q̇_d) q̇ + g(q).

   a) Point out the difference with respect to the PD+ control law given by Equation (11.7).
   b) Show that this controller is in fact equivalent to the PD+ controller. Hint: Use Property 4.2.

5. Verify Equation (11.6) by use of (11.5).

³ Notice that since the task here is position control, the controller in this case is simply of PD type with gravity compensation.

12 Feedforward Control and PD Control plus Feedforward

The practical implementation of controllers for robot manipulators is typically carried out using digital technology. These control systems operate basically in the following stages:

• sampling of the joint position q (and of the velocity q̇);
• computation of the control action τ from the control law;
• the 'order' to apply this control action is sent to the actuators.
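These stages can be sketched as a minimal digital control loop. The skeleton below is our own illustration — the function names and the placeholder PD law are assumptions, not part of the text:

```python
import time

def read_encoders():
    """Placeholder: stage 1, sample the joint position q (and velocity)."""
    return 0.0, 0.0          # hypothetical q, q_dot readings

def send_to_actuators(tau):
    """Placeholder: stage 3, the 'order' to apply tau goes to the drives."""
    pass

def control_law(q, q_dot, qd, qd_dot, kp=20.0, kv=5.0):
    """Placeholder PD law; any controller from these chapters fits here."""
    return kp * (qd - q) + kv * (qd_dot - q_dot)

dt = 0.001                   # sampling period (1 kHz)
for step in range(5):        # a few cycles of the three stages
    q, q_dot = read_encoders()                 # 1) sampling
    tau = control_law(q, q_dot, 1.0, 0.0)      # 2) compute control action
    send_to_actuators(tau)                     # 3) send it to the actuators
    time.sleep(dt)           # wait out the rest of the sampling period
```

In a real implementation the loop would be driven by a hardware timer rather than `sleep`, but the three-stage structure is the same.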
In certain applications where the robot is required to execute repetitive tasks at high velocity, the previous stages must be executed at a high cadence. The bottleneck, in terms of time consumption, is the computation of the control action τ. Naturally, a reduction in the time needed to compute τ has the advantage of a higher processing frequency and hence a larger potential for the execution of 'fast' tasks. This is the main reason for the interest in controllers that require little computing power. In particular, this is the case for controllers that use information based on the desired positions, velocities and accelerations q_d(t), q̇_d(t) and q̈_d(t), respectively. Indeed, in repetitive tasks the desired position q_d(t) and its time derivatives happen to be vectorial periodic functions of time and, moreover, they are known once the task has been specified. Once the processing frequency has been established, the terms in the control law that depend exclusively on the form of these functions may be computed in advance and stored in memory, in a look-up table. During computation of the control action, these precomputed terms are simply retrieved from memory, thereby reducing the computational burden.

In this chapter we consider two control strategies which have been suggested in the literature and which make wide use of precomputed terms in their respective control laws:

• feedforward control;
• PD control plus feedforward.

Each of these controllers is the subject of a section in the present chapter.

12.1 Feedforward Control

Among the conceptually simplest control strategies that may be used to control a dynamic system we find so-called open-loop control, where the controller is simply the inverse dynamics model of the system evaluated along the desired reference trajectories. For the case of linear dynamic systems, this control technique may be roughly sketched as follows.
Consider the linear system described by

   ẋ = Ax + u

where x ∈ ℝⁿ is the state vector and, at the same time, the output of the system, A ∈ ℝⁿˣⁿ is a matrix whose eigenvalues λᵢ{A} have negative real parts, and u ∈ ℝⁿ is the input to the system. Assume that we specify a vectorial function x_d, as well as its time derivative ẋ_d, to be bounded. The control goal is that x(t) → x_d(t) as t → ∞. In other words, defining the error vector x̃ = x_d − x, the control problem consists in designing a controller that allows one to determine the input u to the system so that lim_{t→∞} x̃(t) = 0.

The solution to this control problem using the inverse dynamic model approach consists basically in substituting x and ẋ with x_d and ẋ_d in the equation of the system to be controlled, and then solving for u, i.e.

   u = ẋ_d − A x_d.

In this manner, the system formed by the linear system to be controlled together with the previous controller satisfies

   x̃̇ = A x̃,

which in turn is a linear system in the new state vector x̃. Moreover, we know from linear systems theory that, since the eigenvalues of the matrix A have negative real parts, lim_{t→∞} x̃(t) = 0 for all x̃(0) ∈ ℝⁿ.

In robot control, this strategy provides the supporting arguments for the following reasoning. If we apply a torque τ at the input of the robot, the behavior of its outputs q and q̇ is governed by (III.1), i.e.

   d/dt [ q ; q̇ ] = [ q̇ ; M(q)⁻¹[τ − C(q, q̇) q̇ − g(q)] ].        (12.1)

If we wish the behavior of the outputs q and q̇ to equal that specified by q_d and q̇_d respectively, it seems reasonable to replace q, q̇ and q̈ by q_d, q̇_d and q̈_d in Equation (12.1) and to solve for τ. This reasoning leads to the equation of the feedforward controller, given by

   τ = M(q_d) q̈_d + C(q_d, q̇_d) q̇_d + g(q_d).        (12.2)

Notice that the control action τ depends neither on q nor on q̇; that is, this is an open-loop control.
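The convergence of the open-loop linear scheme is easy to check numerically. The sketch below is our own illustration — the particular matrix A and the trajectory x_d are assumed, not taken from the text:

```python
import numpy as np

# A stable system: eigenvalues -1 and -2 have negative real parts
A = np.array([[-1.0,  0.5],
              [ 0.0, -2.0]])

def xd(t):      # bounded desired trajectory
    return np.array([np.sin(t), np.cos(t)])

def xd_dot(t):  # its (bounded) time derivative
    return np.array([np.cos(t), -np.sin(t)])

dt, T = 1e-3, 20.0
x = np.array([2.0, -1.0])                # initial state far from xd(0)
for k in range(int(T / dt)):
    t = k * dt
    u = xd_dot(t) - A @ xd(t)            # open-loop law: u = xd_dot - A xd
    x = x + dt * (A @ x + u)             # forward-Euler step of xdot = Ax + u

final_error = np.linalg.norm(xd(T) - x)  # x_tilde -> 0 despite no feedback
```

The error dynamics x̃̇ = Ax̃ decay at the slowest eigenvalue, so after 20 s the residual error is dominated by the Euler discretization, not the control scheme.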
Moreover, such a controller does not possess any design parameter. As with any other open-loop control strategy, this approach needs precise knowledge of the dynamic system to be controlled, that is, of the dynamic model of the manipulator and, specifically, of the structure of the matrices M(q), C(q, q̇) and of the vector g(q), as well as knowledge of their parameters (masses, inertias, etc.). For this reason it is said that feedforward control is (robot-)'model-based'.

The interest in a controller of this type resides in the advantages that it offers in implementation. Indeed, having determined q_d, q̇_d and q̈_d (in particular for repetitive tasks), one may determine the terms M(q_d), C(q_d, q̇_d) and g(q_d) off-line and easily compute the control action τ according to Equation (12.2). This motivates the qualifier "feedforward" in the name of this controller. Nonetheless, one should not forget that a controller of this type has the intrinsic disadvantages of open-loop control systems, e.g. lack of robustness with respect to parametric and structural uncertainties, performance degradation in the presence of external perturbations, etc.

In Figure 12.1 we present the block diagram corresponding to a robot under feedforward control.

Figure 12.1. Block-diagram: feedforward control

The behavior of the control system is described by an equation obtained by substituting the equation of the controller (12.2) in the model of the robot (III.1), that is,

   M(q) q̈ + C(q, q̇) q̇ + g(q) = M(q_d) q̈_d + C(q_d, q̇_d) q̇_d + g(q_d).        (12.3)

To avoid cumbersome notation, from now on in this chapter we use, whenever it appears, the following shorthand:

   M = M(q),   M_d = M(q_d),   C = C(q, q̇),   C_d = C(q_d, q̇_d),   g = g(q),   g_d = g(q_d).
Equation (12.3) may be written in terms of the state vector [q̃ᵀ q̃̇ᵀ]ᵀ as

   d/dt [ q̃ ; q̃̇ ] = [ q̃̇ ; −M⁻¹[(M_d − M) q̈_d + C_d q̇_d − C q̇ + g_d − g] ],

which represents an ordinary nonlinear nonautonomous differential equation. The origin [q̃ᵀ q̃̇ᵀ]ᵀ = 0 ∈ ℝ²ⁿ is an equilibrium point of this equation but, in general, it is not the only one. This is illustrated in the following examples.

Example 12.1. Consider the model of an ideal pendulum of length l with mass m concentrated at the tip, subject to the action of gravity g. Assume that a torque τ is applied at the rotating axis:

   m l² q̈ + m g l sin(q) = τ

where we identify M(q) = m l², C(q, q̇) = 0 and g(q) = m g l sin(q). The feedforward controller (12.2) reduces to

   τ = m l² q̈_d + m g l sin(q_d).

The behavior of the system is characterized by Equation (12.3),

   m l² q̈ + m g l sin(q) = m l² q̈_d + m g l sin(q_d)

or, in terms of the state [q̃ q̃̇]ᵀ, by

   d/dt [ q̃ ; q̃̇ ] = [ q̃̇ ; −(g/l)[sin(q_d) − sin(q_d − q̃)] ].

Clearly the origin [q̃ q̃̇]ᵀ = 0 ∈ ℝ² is an equilibrium, but so are the points [q̃ q̃̇]ᵀ = [2nπ 0]ᵀ for any integer value of n. ♦

The following example presents the study of feedforward control of a 3-DOF Cartesian robot. The dynamic model of this manipulator is an innocuous linear system.

Example 12.2. Consider the 3-DOF Cartesian robot studied in Example 3.4 (see page 69) and shown in Figure 3.5. Its dynamic model is given by

   [m₁ + m₂ + m₃] q̈₁ + [m₁ + m₂ + m₃] g = τ₁
   [m₁ + m₂] q̈₂ = τ₂
   m₁ q̈₃ = τ₃,

where we identify

   M(q) = diag(m₁ + m₂ + m₃, m₁ + m₂, m₁),   C(q, q̇) = 0,   g(q) = [(m₁ + m₂ + m₃) g, 0, 0]ᵀ.

Notice that the dynamic model is characterized by a linear differential equation.
The "closed-loop" equation¹ obtained with feedforward control is given by

   d/dt [ q̃ ; q̃̇ ] = [ 0 I ; 0 0 ] [ q̃ ; q̃̇ ],

which has an infinite number of non-isolated equilibria, given by [q̃ᵀ q̃̇ᵀ]ᵀ = [q̃ᵀ 0ᵀ]ᵀ ∈ ℝ²ⁿ, where q̃ is any vector in ℝⁿ. Naturally, the origin is an equilibrium but it is not isolated. Consequently, this equilibrium (and actually any other) cannot be asymptotically stable, even locally. Moreover, due to the linear nature of the equation that characterizes the control system, it may be shown that in this case any equilibrium point is unstable (see Problem 12.2). ♦

¹ Here we write "closed loop" in quotes since, as a matter of fact, the control system is itself an open-loop system.

The previous examples make it clear that multiple equilibria may coexist for the differential equation that characterizes the behavior of the control system. Moreover, due to the lack of design parameters in the controller, it is impossible to modify the location or the number of equilibria, and even less their stability properties, which are determined only by the dynamics of the manipulator. Obviously, a controller whose behavior in robot control has these features is of little utility in real applications. As a matter of fact, its use may yield catastrophic results in certain applications, as we show in the following example.

Figure 12.2. Diagram of the Pelican prototype

Example 12.3. Consider the 2-DOF prototype robot studied in Chapter 5 and shown in Figure 12.2. Consider the feedforward control law (12.2) on this robot. The desired trajectory in joint space is given by q_d(t), which is defined in (5.7) and whose graph is depicted in Figure 5.5 (cf. page 129). The initial conditions for positions and velocities are chosen as

   q₁(0) = 0,   q₂(0) = 0,   q̇₁(0) = 0,   q̇₂(0) = 0.
Figure 12.3 presents experimental results; it shows the components of the position error q̃(t), which tend to a largely oscillatory behavior. Naturally, this behavior is far from satisfactory. ♦
Figure 12.3. Graphs of position errors q̃₁ and q̃₂

So far, we have presented a series of examples that show negative features of the feedforward control given by (12.2). Naturally, these examples might discourage a formal study of the stability of the origin as an equilibrium of the differential equation which models the behavior of this control system. Moreover, a rigorous generic analysis of stability or instability seems to be an impossible task. While we presented in Example 12.2 a case where the origin of the equation which characterizes the control system is unstable, Problem 12.1 addresses a case in which the origin is a stable equilibrium.

The previous observations make it evident that feedforward control, given by (12.2), even with exact knowledge of the model of the robot, may be inadequate to achieve the motion control objective, and even that of position control. Therefore we may conclude that, in spite of the practical motivation for its use, feedforward control (12.2) should not be applied in robot control.

Feedforward control (12.2) may be modified by the addition, for example, of a feedback Proportional-Derivative (PD) term,

   τ = M(q_d) q̈_d + C(q_d, q̇_d) q̇_d + g(q_d) + K_p q̃ + K_v q̃̇        (12.4)

where K_p and K_v are the (n × n) position and velocity gain matrices, respectively. The controller (12.4) is now a closed-loop controller, in view of the explicit feedback of q and q̇ used to compute q̃ and q̃̇. The controller (12.4) is studied in the following section.

12.2 PD Control plus Feedforward

The wide practical interest in incorporating the smallest number of real-time computations to implement a robot controller has been the main motivation for the PD plus feedforward control law, given by [...]
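A small simulation contrasts pure feedforward (12.2) with PD plus feedforward (12.4) on the ideal pendulum of Example 12.1. This is a hedged sketch: the pendulum parameters, gains and desired trajectory below are our own assumptions, not values used in the book's experiments:

```python
import numpy as np

m, l, g = 1.0, 1.0, 9.81          # assumed pendulum parameters
kp, kv = 30.0, 12.0               # assumed PD gains

def qd(t):      return 0.8 * np.sin(t)     # assumed desired trajectory
def qd_dot(t):  return 0.8 * np.cos(t)
def qd_ddot(t): return -0.8 * np.sin(t)

def simulate(use_pd, q0=0.5, dt=1e-3, T=15.0):
    """Euler simulation; returns the worst position error over the last 5 s."""
    q, q_dot, worst = q0, 0.0, 0.0         # start with a 0.5 rad error
    for k in range(int(T / dt)):
        t = k * dt
        tau = m*l**2*qd_ddot(t) + m*g*l*np.sin(qd(t))     # feedforward (12.2)
        if use_pd:                                        # add PD term -> (12.4)
            tau += kp*(qd(t) - q) + kv*(qd_dot(t) - q_dot)
        q_ddot = (tau - m*g*l*np.sin(q)) / (m*l**2)       # pendulum dynamics
        q, q_dot = q + dt*q_dot, q_dot + dt*q_ddot
        if t > T - 5.0:
            worst = max(worst, abs(qd(t) - q))
    return worst

err_ff  = simulate(use_pd=False)  # pure feedforward: the error keeps oscillating
err_pdf = simulate(use_pd=True)   # PD plus feedforward: the error dies out
```

With feedforward alone, the initial error excites an undamped oscillation of the error dynamics; the PD term injects damping and drives the error to zero, mirroring the contrast between Example 12.3 and the behavior expected from (12.4).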
[...] perspective on robot control: The energy Lyapunov function approach", International Journal of Adaptive Control and Signal Processing, Vol. 4, pp. 487–500.
• Kelly R., Salgado R., 1994, "PD control with computed feedforward of robot manipulators: A design procedure", IEEE Transactions on Robotics and Automation, Vol. 10, No. 4, August, pp. 566–571.

The proof of existence of proportional and derivative gains that [...] Measurement, and Control, Vol. 105, pp. 136–142.
• An C., Atkeson C., Hollerbach J., 1988, "Model-based control of a robot manipulator", The MIT Press.
• Khosla P. K., Kanade T., 1988, "Experimental evaluation of nonlinear feedback and feedforward control schemes for manipulators", The International Journal of Robotics Research, Vol. 7, No. 1, pp. 18–28.
• Kokkinis T., Stoughton R., 1991, "Dynamics and control of a closed-chain [...]
• [...] "Introduction to robotics: Mechanics and control", Addison-Wesley, Reading, MA.
• Yoshikawa T., 1990, "Foundations of robotics: Analysis and control", The MIT Press.

(Local) asymptotic stability under PD control plus feedforward has been analyzed in
• Paden B., Riedle B. D., 1988, "A positive-real modification of a class of nonlinear controllers for robot manipulators", Proceedings of the American Control [...]

[...] possibly of poor quality for certain bands of operation. Second, avoiding the use of velocity measurements removes the need for velocity sensors such as tachometers and therefore leads to a reduction in production cost while making the robot lighter. The design of controllers that do not require velocity measurements to control robot manipulators has been a topic of investigation since it was broached in the [...]
[...] advanced issues on robot control. We deal with topics such as control without velocity measurements and control under model uncertainty. We recommend this part of the text for a second course on robot dynamics and control, or for a course on robot control in the first year of graduate level. We assume that the student is familiar with the notion of functional spaces, i.e. the spaces L₂ and L∞. If not, we [...]

[...] for conventional industrial manipulators", Control Engineering Practice, Vol. 2, No. 6, pp. 1039–1050.
• Reyes F., Kelly R., 2001, "Experimental evaluation of model-based controllers on a direct-drive robot arm", Mechatronics, Vol. 11, pp. 267–282.

Problems

1. Consider feedforward control of the ideal pendulum studied in Example 12.1. Assume that the [...]

[...]
• Derivation of the dynamic robot model to be controlled; in particular, computation of M(q), C(q, q̇) and g(q) in closed form.
• Computation of the constants λ_Max{M(q)}, λ_min{M(q)}, k_M, k′_M, k_C1, k_C2, k and k_g. For this, it is suggested that the information given in Table 4.1 (cf. page 109) be used.
• Computation of the bounds ‖q̇_d‖_Max and ‖q̈_d‖_Max from the specification of a given task for the robot.
• Computation of [...]

[...]
• P"D" control with gravity compensation and P"D" control with desired gravity compensation;
• Introduction to adaptive robot control;
• PD control with adaptive gravity compensation;
• PD control with adaptive compensation.

13 P"D" Control with Gravity Compensation and P"D" Control with Desired Gravity Compensation

Robot manipulators are equipped with sensors for the measurement of joint positions and [...]
[...] the measurement of velocities by the described methods may be quite mediocre in accuracy, specifically for certain intervals of velocity. On certain occasions this may have as a consequence an unacceptable degradation of the performance of the control system.

The interest in using controllers for robots that do not explicitly require the measurement of velocity is twofold. First, it is inadequate to feed [...]

Figure 12.5. Graphs of position errors q̃₁ and q̃₂

[...] – usually neglected in the theoretical analysis – such as digital implementation of the continuous-time closed-loop control system (described by an ordinary differential equation), measurement noise and, most important in our experimental setting, friction at the arm joints. Yet, in contrast to Example 12.3, where the controller did [...]
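The 'mediocre accuracy' of velocities reconstructed from position measurements can be illustrated with a toy example. The encoder resolution, trajectory and window size below are our own assumptions; the point is only that one-step differencing amplifies quantization noise by 1/dt, while averaging over a wider window trades that noise for a small lag error:

```python
import numpy as np

dt = 0.001                                   # 1 kHz sampling
t = np.arange(0.0, 4.0, dt)
q_true = 0.5 * np.sin(np.pi * t)             # slow joint motion [rad]
quantum = 1e-3                               # coarse encoder step [rad]
q_meas = np.round(q_true / quantum) * quantum

# One-step finite difference: quantization noise is amplified by 1/dt
v_fd = np.diff(q_meas) / dt
v_true_fd = 0.5*np.pi * np.cos(np.pi * (t[:-1] + dt/2))   # derivative at midpoints
err_fd = np.max(np.abs(v_fd - v_true_fd))

# Wide central difference (a crude low-pass filter): far less noise,
# at the price of a small truncation/lag error over the 0.1 s span
w = 50                                       # half-window: 50 samples
v_cd = (q_meas[2*w:] - q_meas[:-2*w]) / (2*w*dt)
v_true_cd = 0.5*np.pi * np.cos(np.pi * t[w:-w])
err_cd = np.max(np.abs(v_cd - v_true_cd))
```

In practice this trade-off motivates filtered ('dirty') derivatives or, as in the P"D" controllers of the next chapter, avoiding explicit velocity measurement altogether.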
