Consider a simpler example: the system described by the gain

    G(s) = 1/(s + a),

when the input is e^(-at), i.e., s = -a. If we turn back to the differential equation that the gain function represents, we see

    dx/dt + ax = e^(-at).

Since the "complementary function" is now the same as the input function, we must look for a "particular integral" that has an extra factor of t, i.e., the general solution is:

    x = t e^(-at) + A e^(-at).

[Figure 7.6: Bode plot showing stabilization of a²/s² using phase-advance (a + 3jω)/(3a + jω).]

As t becomes large, we see that the ratio of output to input also becomes large, but the output still tends rapidly to zero. Even if the system had a pair of poles representing an undamped oscillation, applying the same frequency at the input would only cause the amplitude to ramp upwards at a steady rate; there would be no sudden infinite output. Let one of the poles stray so that its real part becomes positive, however, and there will be an exponential runaway in the amplitude of the output.

[Figure 7.7: Family of response curves for various damping factors, at ω = a. (Screen grab from www.esscont.com/7/7-7-damping.htm)]

Chapter 8

Discrete Time Systems and Computer Control

8.1 Introduction

There is a belief that discrete time control is somehow more difficult to understand than continuous control. It is true that a complicated approach can be made to the subject, but the very fact that we have already considered digital simulation shows that there need be no hidden mysteries. The concept of differentiation, with its need to take limits of ratios of small increments that tend to zero, is surely more challenging than considering a sequence of discrete values. But first let us look at discrete time control in general.
When dedicated to a real-time control task, a computer measures a number of system variables, computes a control action, and applies a corrective input to the system. It does not do this continuously, but at discrete instants of time. Some processes need frequent correction, such as the attitude of an aircraft, while the pumps and levels of a sewage process might only need attention every five minutes. Provided the corrective action is sufficiently frequent, there seems on the surface to be no reason for insisting that the intervals should be regular. When we look deeper into the analysis, however, we will find that we can take shortcuts in the mathematics if the system is updated at regular intervals. We find ourselves dealing with difference equations that have much in common with the methods we use for differential equations. Since we began with continuous state equations, we should start by relating these to the discrete time behavior.

8.2 State Transition

We have become used to representing a linear continuous system by the state equations:

    dx/dt = Ax + Bu.    (8.1)

Now, let us say that the input of our system is driven by a digital-to-analog converter, controlled by the output of a computer. A value of input u is set up at time t and remains constant until the next output cycle at t + T. If we sample everything at constant intervals of length T and write t = nT, we find that the equivalent discrete time equations are of the form:

    x_{n+1} = M x_n + N u_n,

where x_n denotes the state measured at the nth sampling interval, at t = nT. By way of a proof (which you can skip if you like, going on to Section 8.3), we consider the following question. If x_n is the state at time t, what value will it reach at time t + T? In the working that follows, we will at first simplify matters by taking the initial time, t, to be zero.
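Before the matrix proof, the scalar case gives a preview of the result. For dx/dt = -a x + u with u held constant over each interval T, the exact update is x_{n+1} = e^(-aT) x_n + ((1 - e^(-aT))/a) u_n. A minimal sketch (the values of a, T, and the step count are illustrative):

```python
import math

def discretize_first_order(a, T):
    """Exact discrete-time coefficients for dx/dt = -a*x + u,
    with u held constant over each sampling interval T."""
    m = math.exp(-a * T)     # scalar counterpart of M
    n = (1.0 - m) / a        # scalar counterpart of N
    return m, n

a, T = 2.0, 0.1
m, n = discretize_first_order(a, T)

# Step the difference equation x[n+1] = m*x[n] + n*u[n] with u = 1.
x = 0.0
for _ in range(200):
    x = m * x + n * 1.0

# After many intervals the state settles at the continuous steady state u/a = 0.5.
print(round(x, 4))
```

The same structure, with M and N becoming matrices, is exactly what the derivation below produces.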
In Section 2.2, we considered the solution of a first order equation by the "integrating factor" method. We can use a similar technique to solve the multivariable matrix equation, provided we can find a matrix e^(-At) whose derivative is e^(-At)(-A). An exponential function of a matrix might seem rather strange, but it becomes simple when we consider the power series expansion. For the scalar case we had:

    e^(at) = 1 + at + a²t²/2! + a³t³/3! + …

When we differentiate term by term we see that

    d/dt e^(at) = 0 + a + a²t + a³t²/2! + …

and when we compare the series we see that each power of t in the second series has an extra factor of a. From the series definition it is clear that

    d/dt e^(at) = e^(at) a.

The product At is simply obtained by multiplying each of the coefficients of A by t. For the exponential, we can define

    e^(At) = I + At + A²t²/2! + A³t³/3! + …

where I is the unit matrix. Now,

    d/dt e^(At) = 0 + A + A²t + A³t²/2! + …

so just as in the scalar case we have

    d/dt e^(At) = e^(At) A.

By the same token,

    d/dt e^(-At) = e^(-At)(-A).

There is a good reason to write the A matrix after the exponential. The state equations, 8.1, tell us that:

    dx/dt = Ax + Bu

so

    dx/dt - Ax = Bu

and multiplying through by e^(-At) we have

    e^(-At) dx/dt - e^(-At) Ax = e^(-At) Bu

and just as in the scalar case, the left-hand side can be expressed as the derivative of a product:

    d/dt (e^(-At) x) = e^(-At) Bu.

Integrating, we see that

    [e^(-At) x] from 0 to T = ∫₀ᵀ e^(-At) Bu dt.

When t = 0, the matrix exponential is simply the unit matrix, so

    e^(-AT) x(T) - x(0) = ∫₀ᵀ e^(-At) Bu dt

and we can multiply through by e^(AT) to get

    x(T) - e^(AT) x(0) = e^(AT) ∫₀ᵀ e^(-At) Bu dt

which can be rearranged as

    x(T) = e^(AT) x(0) + e^(AT) ∫₀ᵀ e^(-At) Bu dt.

What does this mean?
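The series definition translates directly into code. A minimal pure-Python sketch (function names are illustrative): for A = [[0, 1], [0, 0]] every power beyond the first is zero, so the series terminates and e^(At) = I + At exactly.

```python
def mat_mul(P, Q):
    """Multiply two square matrices given as lists of rows."""
    size = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(size))
             for j in range(size)] for i in range(size)]

def mat_exp(A, t, terms=20):
    """e^(At) via the truncated power series I + At + (At)^2/2! + ..."""
    size = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(size)] for i in range(size)]
    term = [row[:] for row in result]        # running term (At)^k / k!
    At = [[A[i][j] * t for j in range(size)] for i in range(size)]
    for k in range(1, terms):
        term = mat_mul(term, At)
        term = [[v / k for v in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(size)]
                  for i in range(size)]
    return result

# A is nilpotent (A*A = 0), so the series stops after the linear term:
A = [[0.0, 1.0], [0.0, 0.0]]
E = mat_exp(A, 0.5)
print(E)   # e^(At) = I + At = [[1, 0.5], [0, 1]]
```

For a general A the truncated series converges quickly for modest At, which is all a sampled-data problem usually needs.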
Since u will be constant throughout the integral, the right-hand side can be rearranged to give

    x(T) = e^(AT) x(0) + e^(AT) (∫₀ᵀ e^(-At) dt) B u(0)

which is of the form:

    x(T) = M x(0) + N u(0)

where M and N are constant matrices once we have given a value to T. From this it follows that

    x(t + T) = M x(t) + N u(t)

and when we write nT for t, we arrive at the form:

    x_{n+1} = M x_n + N u_n

where the matrices M and N are calculated from

    M = e^(AT)

and

    N = e^(AT) ∫₀ᵀ e^(-At) dt B

while the system output is still given by

    y_n = C x_n.

The matrix M = e^(AT) is termed the "state transition matrix."

8.3 Discrete Time State Equations and Feedback

As long as there is a risk of confusion between the matrices of the discrete state equations and those of the continuous ones, we will use the notation M and N. (Some authors use A and B in both cases, although the matrices have different values.) Now if our computer is to provide some feedback control action, this must be based on measuring the system output, y_n, taking into account a command input, v_n, and computing an input value u_n with which to drive the digital-to-analog converters. For now we will assume that the computation is performed instantaneously as far as the system is concerned, i.e., the intervals are much longer than the computing time. We see that if the action is linear,

    u_n = F y_n + G v_n    (8.2)

where v_n is a command input. As in the continuous case, we can substitute the expression for u back into the system equations to get

    x_{n+1} = M x_n + N(F y_n + G v_n)

and since y_n = C x_n,

    x_{n+1} = (M + NFC) x_n + NG v_n.    (8.3)

Exactly as in the continuous case, we see that the system matrix has been modified by feedback to describe a different performance. Just as before, we wish to know how to ensure that the feedback changes the performance to represent a "better" system.
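The formulas for M and N can be checked on a concrete case. For the double integrator, A = [[0, 1], [0, 0]] and B = [0, 1]ᵀ, the closed forms are M = [[1, T], [0, 1]] and N = [T²/2, T]ᵀ. A sketch that evaluates the integral for N numerically (midpoint rule; the step count is illustrative):

```python
T = 0.1
steps = 10000

def exp_neg_At(t):
    # For A = [[0,1],[0,0]] the series terminates: e^(-At) = [[1, -t],[0, 1]]
    return [[1.0, -t], [0.0, 1.0]]

B = [0.0, 1.0]

# Integral of e^(-At) B dt from 0 to T, by the midpoint rule.
integral = [0.0, 0.0]
dt = T / steps
for i in range(steps):
    t = (i + 0.5) * dt
    Et = exp_neg_At(t)
    integral[0] += (Et[0][0] * B[0] + Et[0][1] * B[1]) * dt
    integral[1] += (Et[1][0] * B[0] + Et[1][1] * B[1]) * dt

# Multiply by M = e^(AT) = [[1, T],[0, 1]] to obtain N.
N = [integral[0] + T * integral[1], integral[1]]
print(N)   # close to [T*T/2, T] = [0.005, 0.1]
```

So over one interval the sampled double integrator obeys x_{n+1} = x_n + T v_n + (T²/2) u_n, v_{n+1} = v_n + T u_n, exactly as elementary kinematics would suggest.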
But to do this, we need to know how to assess the new state transition matrix M + NFC.

8.4 Solving Discrete Time Equations

When we had a differential equation like

    d²x/dt² + 5 dx/dt + 6x = 0,

we looked for a solution in the form of e^(mt). Suppose that we have the difference equation

    x_{n+2} + 5x_{n+1} + 6x_n = 0;

what "eigenfunction" can we look for? We simply try x_n = kⁿ and the equation becomes

    k^(n+2) + 5k^(n+1) + 6kⁿ = 0,

so we have

    kⁿ(k² + 5k + 6) = 0

and once more we find ourselves solving a quadratic to find k = -2 or k = -3.

The roots are in the left-hand half plane, so will this represent a stable system? Not a bit! From an initial value of one, the sequence of values for x can be 1, -3, 9, -27, 81, … So what is the criterion for stability? x must die away from any initial value, so we need |k| < 1 for all the roots of any such equation. In the cases where the roots are complex, it is the size, the modulus, of k that has to be less than one. If we plot the roots in the complex plane, as we did for the frequency domain, we will see that the roots must lie within the "unit circle."

8.5 Matrices and Eigenvectors

When we multiply a vector by a matrix, we get another vector, probably of a different magnitude and in a different direction. Just suppose, though, that for a special vector the direction was unchanged. Suppose that the new vector was just the old vector, multiplied by a scalar constant k. Such a vector would be called an "eigenvector" of the matrix and the constant k would be the corresponding "eigenvalue." If we repeatedly multiply that vector by the matrix, doing it n times, say, then the resulting vector will be kⁿ times the original vector. But that is just what happens with our state transition matrix. If we keep the command input at zero, the state will be repeatedly multiplied by M as each interval advances.
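The runaway claimed for roots k = -2 and k = -3 is easy to demonstrate: a sketch iterating the recurrence x_{n+2} = -5x_{n+1} - 6x_n from starting values that excite the k = -3 mode alone.

```python
# Difference equation x[n+2] + 5*x[n+1] + 6*x[n] = 0, roots k = -2 and k = -3.
# Starting values 1, -3 excite the k = -3 mode by itself.
x = [1, -3]
for n in range(6):
    x.append(-5 * x[-1] - 6 * x[-2])

print(x)   # 1, -3, 9, -27, 81, ... the sequence diverges

# The discrete-time stability test is |k| < 1 for every root,
# not a negative real part as in continuous time:
roots = [-2, -3]
print(all(abs(k) < 1 for k in roots))   # False: unstable despite negative roots
```

The same check, applied to the eigenvalues of M + NFC, decides the stability of any discrete-time feedback design.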
If our initial state coincides with an eigenvector, we have a formula for every future value: we just multiply every component by kⁿ. So how can we find the eigenvectors of M? Suppose that ξ is an eigenvector. Then

    Mξ = kξ = kIξ.

We can move everything to the left to get

    Mξ - kIξ = 0,
    (M - kI)ξ = 0.

Now, the zero on the right is a vector zero, with all its components zero. Here we have a healthy vector, ξ, which when multiplied by (M - kI) is reduced to a vector of zeros. One way of viewing a product such as Ax is that each column of A is multiplied by the corresponding component of x and the resulting vectors […]

[…] index of the particular component of the state or input vector. The sample time is "now." Assume that all variables have been declared and initialized, and that we are concerned with computing the next value of the state, knowing the input u. The state has n components and there are m components of input. For the coefficients of the discrete state matrix, we will use A[i][j], since there is no risk of confusion, […] of that later. For now, let us see how state space and matrices can help us.

8.10 Controllers with Added Dynamics

When we add dynamics to the controller, it becomes a system in its own right, with state variables, such as xslow, and state equations of its own. These variables can be added to the list of the plant's variables and a single composite matrix equation can be formed. Suppose that the states of […]

[…] problem of balancing a pendulum is fundamentally different. At least one commercial vendor of laboratory experiments has shown a total misunderstanding of the problem. We will analyze the system using the theory that we have met so far and devise software for computer control. We will also construct a simulation that shows all the important properties of the real system. The literature is full of simulated […]
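Returning to Section 8.5: the eigenvector property, that repeated multiplication by M scales the state by k at every interval, can be checked directly. A sketch using an illustrative 2 × 2 transition matrix (not one from the text) whose eigenvalues are 0.5 and 0.25:

```python
M = [[0.5, 0.25],
     [0.0, 0.25]]

def step(M, x):
    """One sampling interval with zero command input: x -> M x."""
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

# xi = [1, 0] satisfies (M - 0.5 I) xi = 0, so it is an eigenvector with k = 0.5.
xi = [1.0, 0.0]
x = xi
for n in range(3):
    x = step(M, x)

print(x)   # k^3 times xi, i.e. [0.125, 0.0]
```

Both eigenvalues lie inside the unit circle, so every component of any initial state dies away, exactly as the stability criterion of Section 8.4 requires.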
[…] the closed-loop state equations for the state (x, v, tilt, tiltrate) become

    d  [ x        ]   [   0       1        0          0     ] [ x        ]
    —  [ v        ] = [  bc      bd       be         bf    ] [ v        ]
    dt [ tilt     ]   [   0       0        0          1     ] [ tilt     ]
       [ tiltrate ]   [ -bc/L   -bd/L   (g-be)/L   -bf/L   ] [ tiltrate ]

and all we have to do is to find some eigenvalues. We can save a lot of typing if we define the length of the stick to be one meter. To find the characteristic equation, we must then find the determinant and equate it to zero:

        [ -λ      1       0       0      ]
    det [  bc    bd-λ     be      bf     ] = 0.
        [  0      0      -λ       1      ]
        [ -bc   -bd     g-be    -bf-λ    ]

[…] added together. We know that in evaluating the determinant of A, we can add combinations of columns together without changing the determinant's value. If we can get a resulting column that is all zeros, then the determinant must clearly be zero too. So in this case, we see that

    det(M - kI) = 0.

This will give us a polynomial in k of the same order […]

[…] the states of the controller are represented by the vector w. In place of Equation 8.2, we will have

    u_n = F y_n + G v_n + H w_n,

while w can be the state of a system that has inputs of both y and v:

    w_{n+1} = K w_n + L y_n + P v_n

so when we combine these with the system equations x_{n+1} = M x_n + N u_n and y_n = C x_n we […]

[…] characteristic equation:

    λ² + (0.36788ak - 1.36788)λ + (0.26424ak + 0.36788) = 0.

The limit of ak for stability has now reduced below 2.4 (otherwise the product of the roots is greater than unity), and for equal roots we have

    (0.36788ak - 1.36788)² = 4(0.26424ak + 0.36788).

Pounding a pocket calculator gives ak = 0.196174, smaller […]

[…] pulley and belt system, just as was shown in Figure 4.4. A pendulum is pivoted on the trolley and a transducer measures the angle of tilt (Figure 9.1). The analysis can be made very much easier if we make a number of assumptions. The first is that the tilt of the pendulum is small, so that we can equate the sine of the tilt to the value of the tilt in radians.
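The small-tilt assumption, replacing sin(tilt) by tilt itself, is very accurate over the angles a balanced pendulum actually sees. A quick check of the approximation error (the sample angles are illustrative):

```python
import math

for degrees in (1, 5, 10):
    tilt = math.radians(degrees)
    error = abs(math.sin(tilt) - tilt)
    # sin(t) = t - t^3/6 + ..., so the error is roughly t^3/6
    print(degrees, "deg:", round(error / tilt * 100, 4), "% relative error")
```

Even at ten degrees the linearization is wrong by well under one percent, so the linear state equations that follow are a faithful model near the balance point.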
[…] acceleration of the tilt angle can be expressed as the difference between the acceleration of the top and the acceleration of the bottom, divided by the length L of the stick. Since we know that the acceleration of the bottom of the stick is the acceleration of the trolley, or just bu, we have two more equations

    d(tilt)/dt = tiltrate

and

    d(tiltrate)/dt = (g tilt - bu)/L.

In matrix form, this looks like

    d  [ x        ]   [ 0   1   0    0 ] [ x        ]   [  0   ]
    —  [ v        ] = [ 0   0   0    0 ] [ v        ] + [  b   ] u
    dt [ tilt     ]   [ 0   0   0    1 ] [ tilt     ]   [  0   ]
       [ tiltrate ]   [ 0   0  g/L   0 ] [ tiltrate ]   [ -b/L ]

[…] framework. Since our motor is a simple two-integrator system with no damping, our state variables x and v will be simulated by

    x = x + v*dt

and

    v = v + u*dt.

We have already seen that we can estimate the velocity with

    vest = k * (x - xslow)
    xslow = xslow + vest * dt

and then we can set

    u = -f*x - d*vest.

Put these last three […]

[…] i.e.,

    (ak)² - 1482(ak) + 361 = 0,

    ak = 741 ± √(741² - 361) = 741 ± 740.756.

Since ak must be less than 20, we must take the smaller value, giving a value of ak = 0.244, very […]
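The three controller lines above drop straight into a simulation loop. A sketch of the two-integrator motor under that control; the gains f, d, k, the step size, and the run length are illustrative assumptions, not values from the text:

```python
dt = 0.01
f, d, k = 1.0, 2.0, 5.0      # position gain, damping gain, estimator gain (assumed)

x, v = 1.0, 0.0              # motor starts displaced by one unit
xslow = x                    # estimator state starts agreeing with x

for step in range(3000):     # 30 seconds of simulated time
    # estimate the velocity from the position signal alone
    vest = k * (x - xslow)
    xslow = xslow + vest * dt
    # feedback law using the estimated velocity
    u = -f * x - d * vest
    # two-integrator motor: position and velocity updates
    x = x + v * dt
    v = v + u * dt

print(abs(x))   # the displacement has been driven close to zero
```

With these gains the slowest closed-loop mode decays with a time constant of a couple of seconds, so the run comfortably settles; raising k makes vest a crisper estimate of v at the cost of amplifying measurement noise.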