
Essentials of Control Techniques and Theory_9 potx


Document information: 27 pages, 1.31 MB

Contents

…it can be rearranged as

\[ \dot{x}_3 + 3x_3 + 4x_2 + 2x_1 = u \]

This will be equivalent to

\[ \dddot{x}_1 + 3\ddot{x}_1 + 4\dot{x}_1 + 2x_1 = u \]

If we take the Laplace transform, we have

\[ (s^3 + 3s^2 + 4s + 2)\,X_1(s) = U(s) \]

That gives us a system with the correct set of poles. In matrix form, the state equations are:

\[ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -4 & -3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u \]

That settles the denominator. How do we arrange the zeros, though? Our output now needs to contain derivatives of x1,

\[ y = x_1 + \dot{x}_1 + 4\ddot{x}_1 \]

But we can use our first two state equations to replace this by

\[ y = x_1 + x_2 + 4x_3 \]

i.e.,

\[ y = \begin{bmatrix} 1 & 1 & 4 \end{bmatrix} \mathbf{x} \]

We can only get away with this form y = Cx if there are more poles than zeros. If they are equal in number, we must first perform one stage of "long division" of the numerator polynomial by the denominator to split off a Du term proportional to the input. The remainder of the numerator will then be of a lower order than the denominator and so will fit into the pattern. If there are more zeros than poles, give up.

Now whether it is a simulation or a filter, the system can be generated in terms of a few lines of software. If we were meticulous, we could find a lot of unanswered questions about the stability of the simulation, about the quality of the approximation, and about the choice of step length. For now let us turn our attention to the computational techniques of convolution.

Q 14.7.1

We wish to synthesize the filter s^2/(s^2 + 2s + 1) in software. Set up the state equations and write a brief segment of program.
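As an illustration of one possible answer to Q 14.7.1 (this sketch is not from the book; the step length and the test input are arbitrary choices), one stage of long division gives s^2/(s^2 + 2s + 1) = 1 - (2s + 1)/(s^2 + 2s + 1), so D = 1 and the remaining numerator 2s + 1 fits the pattern above:

#include <stdio.h>

/* One possible answer to Q 14.7.1 (an illustration, not the book's own).
   With x1' = x2 and x2' = -x1 - 2*x2 + u, the output is
   y = u - x1 - 2*x2, the Du term coming from the long division.        */
int main(void) {
    double x1 = 0.0, x2 = 0.0;   /* state variables                 */
    double dt = 0.01;            /* step length (illustrative)      */
    for (int n = 0; n < 500; n++) {
        double u = 1.0;                      /* test input: a unit step       */
        double y = u - x1 - 2.0 * x2;        /* output, including the Du term */
        double dx1 = x2;                     /* state derivatives             */
        double dx2 = -x1 - 2.0 * x2 + u;
        x1 += dx1 * dt;                      /* simple Euler update           */
        x2 += dx2 * dt;
        printf("%f %f\n", n * dt, y);
    }
    return 0;
}

A shorter step, or a better integration rule, would of course improve the approximation, as the remarks above about step length suggest.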
Chapter 15

Time, Frequency, and Convolution

Although the coming sections might seem something of a mathematician's playground, they are extremely useful for getting an understanding of the underlying principles of functions of time and the way that dynamic systems affect them. In fact, many of the issues of convolution can be much more easily explored in terms of discrete time and sampled systems, but first we will take the more traditional approach of infinite impulses and vanishingly small increments of time.

15.1 Delays and the Unit Impulse

We have already looked into the function of time that has a Laplace transform which is just 1. This is the "delta function" δ(t), an impulse at t = 0. The unit step has Laplace transform 1/s, and so we can think of the delta function as its derivative. Before we go on, we must derive an important property of the Laplace transform, the "shift theorem."

If we have a function of time, x(t), and if we pass this signal through a time delay τ, then the output is the same signal that was input τ seconds earlier, x(t − τ). The bilateral Laplace transform of this output will be

\[ \int_{-\infty}^{\infty} x(t-\tau)\,e^{-st}\,dt \]

If we write T for t − τ, then dt will equal dT, and the integral becomes

\[ \int_{-\infty}^{\infty} x(T)\,e^{-s(T+\tau)}\,dT \;=\; e^{-s\tau}\int_{-\infty}^{\infty} x(T)\,e^{-sT}\,dT \;=\; e^{-s\tau}X(s) \]

where X(s) is the Laplace transform of x(t). If we delay a signal by time τ, its Laplace transform is simply multiplied by e^{−sτ}.

Since we are considering the bilateral Laplace transform, integrated over all time both positive and negative, we could consider time advances as well. Clearly all signals have to be very small for large negative t, otherwise their contribution to the integral would be enormous when multiplied by the exponential.

We can immediately start to put the shift theorem to use. It tells us that the transform of δ(t − τ), the unit impulse shifted to occur at t = τ, is e^{−sτ}. We could of course have worked this out from first principles. We can regard the delta function as a "sampler." When we multiply it by any function of time, x(t), and integrate over all time, we will just get the contribution from the product at the time the delta function is non-zero:

\[ \int_{-\infty}^{\infty} x(t)\,\delta(t-\tau)\,dt = x(\tau) \]   (15.1)

So when we write

\[ \mathcal{L}\{\delta(t-\tau)\} = \int_{-\infty}^{\infty} \delta(t-\tau)\,e^{-st}\,dt \]

we can think of the answer as sampling e^{−st} at the value t = τ.

Let us briefly indulge in a little philosophy about the "meaning" of functions. We could think of x(t) as a simple number, the result of substituting some value of t into a formula for computing x. We can instead expand our vision of the function to consider the whole graph of x(t) plotted against time, as in a step response. In control theory we have to take this broader view, regarding inputs and outputs as time "histories," not just as simple values. This is illustrated in Figure 15.1.

[Figure 15.1 Input and output as time plots.]

Now we can view Equation 15.1 as a sampling process, allowing us to pick one single value of the function out of the time history. But just let us exchange the symbols t and τ in the equation and suddenly the perspective changes. The substitution has no absolute mathematical effect, but it expresses our time history x(t) as the sum of an infinite number of impulses of size x(τ)dτ,

\[ x(t) = \int_{-\infty}^{\infty} x(\tau)\,\delta(t-\tau)\,d\tau \]   (15.2)

This result may not look important, but it opens up a whole new way of looking at the response of a system to an applied input.

15.2 The Convolution Integral

Let us first define the situation. We have a system described by a transfer function G(s), with input function u(t) and output y(t), as in Figure 15.2. If we apply a unit impulse to the system at t = 0, the output will be g(t), where the Laplace transform of g(t) is G(s). This is portrayed in Figure 15.3.

[Figure 15.2 Time-functions and the system.]

[Figure 15.3 For a unit impulse input, G(s) gives an output g(t).]

How do we go about deducing the output function for any general u(t)? Perhaps the most fundamental property of a linear system is the "principle of superposition." If we know the output response to a given input function and also to another function, then if we add the two input functions together and apply them, the output will be the sum of the two corresponding output responses. In mathematical terms, if u1(t) produces the response y1(t) and u2(t) produces response y2(t), then an input of u1(t) + u2(t) will give an output y1(t) + y2(t).

Now an input of the impulse δ(t) to G(s) provokes an output g(t). An impulse applied at time t = τ, u(τ)δ(t − τ), gives the delayed response u(τ)g(t − τ). If we apply several impulses in succession, the output will be the sum of the individual responses, as shown in Figure 15.4.

[Figure 15.4 Superposition of impulse responses.]

Notice that as the time parameter in the u-bracket increases, the time in the g-bracket reduces. At some later time t, the effect of the earliest impulse will have had longest to decay. The latest impulse has an effect that is still fresh. Now we see the significance of Equation 15.2. It allows us to express the input signal u(t) as an infinite train of impulses u(τ)δτ·δ(t − τ). So to calculate the output, we add all the responses to these impulses. As we let δτ tend to zero, this becomes the integral

\[ y(t) = \int_{-\infty}^{\infty} u(\tau)\,g(t-\tau)\,d\tau \]   (15.3)

This is the convolution integral. We do not really need to integrate over all infinite time. If the input does not start until t = 0, the lower limit can be zero. If the system is "causal," meaning that it cannot start to respond to an input before the input happens, then the upper limit can be t.
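As a quick numerical check of Equation 15.3 (not from the book; the small step here simply stands in for dτ), convolving a unit step with g(t) = e^(−t), the impulse response of 1/(s + 1), should reproduce the familiar step response 1 − e^(−t):

#include <stdio.h>
#include <math.h>

/* Numerical check of the convolution integral (Equation 15.3) for
   g(t) = exp(-t), the impulse response of 1/(s+1), driven by a unit
   step applied at t = 0.  The exact answer is 1 - exp(-t).          */
int main(void) {
    double T = 0.001;                 /* small step standing in for d-tau */
    for (int n = 1; n <= 5; n++) {
        double t = n * 1.0;           /* evaluate y at t = 1, 2, ... 5 s  */
        double y = 0.0;
        for (double tau = 0.0; tau <= t; tau += T) {
            double u = 1.0;                   /* unit step input          */
            y += u * exp(-(t - tau)) * T;     /* u(tau) g(t - tau) d-tau  */
        }
        printf("t=%g  convolution=%f  exact=%f\n", t, y, 1.0 - exp(-t));
    }
    return 0;
}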
15.3 Finite Impulse Response (FIR) Filters

We see that instead of simulating a system to generate a filter's response, we could set up an impulse response time function and produce the same result by convolution. With infinite integrals lurking around the corner, this might not seem such a wise way to proceed! In looking at digital simulation, we have already cut corners by taking a finite step-length and accepting the resulting approximation. A digital filter must similarly accept limitations in its performance in exchange for simplification.

Instead of an infinite train of impulses, u(t) is now viewed as a train of samples at finite intervals. The infinitesimal u(τ)dτ has become u(nT)T. Instead of impulses, we have numbers to input into a computational process. The impulse response function g(t) is similarly broken down into a train of sample values, using the same sampling interval. Now the infinitesimal operations of integration are coarsened into the summation

\[ y(nT) = T\sum_{r=-\infty}^{\infty} u(rT)\,g\bigl((n-r)T\bigr) \]   (15.4)

The infinite limits still do not look very attractive. For a causal system, however, we need go no higher than r = n, while if the first signal was applied at r = 0 then this can be the lower limit. Summing from r = 0 to n is a definite improvement, but it means that we have to sum an increasing number of terms as time advances. Can we do any better?

Most filters will have a response which eventually decays after the initial impulse is applied. The one-second lag 1/(s + 1) has an initial response of unity, gives an output of around 0.37 after one second, but after 10 seconds the output has decayed to less than 0.00005. There is a point where g(t) can safely be ignored, where indeed it is less than the resolution of the computation process. Instead of regarding the impulse response as a function of infinite duration, we can cut it short to become a Finite Impulse Response. Why the capital letters? Since this is the basis of the FIR filter.

We can rearrange Equation 15.4 by writing n − r instead of r and vice versa. We get

\[ y(nT) = T\sum_{r=-\infty}^{\infty} u\bigl((n-r)T\bigr)\,g(rT) \]

Now if we can say that g(rT) is zero for all r < 0, and also for all r > N, the summation limits become

\[ y(nT) = T\sum_{r=0}^{N} u\bigl((n-r)T\bigr)\,g(rT) \]

The output now depends on the input u at the time in question, and on its past N values. These values are now multiplied by appropriate fixed coefficients and summed to form the output, and are moved along one place to admit the next input sample value. The method lends itself ideally to a hardware application with a "bucket-brigade" delay line, as shown in Figure 15.5.

[Figure 15.5 A FIR filter can be constructed from a "bucket-brigade" delay line.]
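Before the code fragment that follows, here is a hedged sketch of how the coefficient table g(rT) might be set up for the one-second lag and where it can safely be truncated; the sampling interval and the resolution threshold are arbitrary choices, not values from the book:

#include <stdio.h>
#include <math.h>

/* Build the coefficient table g[r] = g(rT) for the one-second lag
   1/(s+1), whose impulse response is exp(-t), and find the index N
   beyond which the samples drop below a chosen resolution.          */
#define MAXLEN 2000

int main(void) {
    double T = 0.01;            /* sampling interval (illustrative)      */
    double resolution = 5e-5;   /* ignore terms smaller than this        */
    double g[MAXLEN];
    int N = 0;
    for (int r = 0; r < MAXLEN; r++) {
        g[r] = exp(-r * T);     /* sample of the impulse response        */
        if (g[r] < resolution) {
            N = r;              /* first negligible sample               */
            break;
        }
    }
    printf("N = %d samples (%g seconds of response kept)\n", N, N * T);
    return 0;
}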
The following software suggestion can be made much more efficient in time and storage; it concentrates on showing the method. Assume that the impulse response has already been set up in the array g(i), where i ranges from 0 to N. We provide another array u(i) of the same length to hold past values.

//Move up the input samples to make room for a new one
for(i=N;i>0;i--){
  u[i]=u[i-1];
}
//Take in a new sample
u[0]=GetNewInput();
//Now compute the output
y=0;
for(i=0;i<N+1;i++){
  y=y+u[i]*g[i];
}
//y now holds the output value

This still seems more trouble than the simulation method; what are the advantages? Firstly, there is no question of the process becoming unstable. Extremely sharp filters can be made for frequency selection or rejection which would have poles very close to the stability limit. Since the impulse response is defined exactly, stability is assured.

Next, the rules of causality can be bent a little. Of course the output cannot precede the input, but by considering the output signal to be delayed, the impulse response can have a "leading tail." Take the non-causal smoothing filter discussed earlier, for example. This has a bell-shaped impulse response, symmetrical about t = 0, as shown in Figure 15.6. By delaying this function, all the important terms can be contained in a positive range of t. There are many applications, such as offline sound and picture filtering, where the added delay is no embarrassment.

[Figure 15.6 By delaying a non-causal response, it can be made causal.]
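As a rough sketch of the "leading tail" idea (an illustration, not the book's example; the window length and spread below are arbitrary), a bell-shaped kernel nominally centred on t = 0 can be delayed by half the window so that every coefficient lands at a non-negative index:

#include <stdio.h>
#include <math.h>

/* A bell-shaped (Gaussian) impulse response, nominally symmetrical
   about t = 0, made causal by delaying it by half the window length.
   The window length and spread are arbitrary choices.                */
int main(void) {
    enum { N = 20 };            /* coefficients run from 0 to N          */
    double g[N + 1];
    double sigma = 3.0;         /* spread of the bell, in samples        */
    double sum = 0.0;
    for (int i = 0; i <= N; i++) {
        double t = i - N / 2.0;             /* centre the bell at i = N/2 */
        g[i] = exp(-(t * t) / (2.0 * sigma * sigma));
        sum += g[i];
    }
    for (int i = 0; i <= N; i++) {
        g[i] /= sum;      /* normalize so a steady input passes unchanged */
        printf("g[%d] = %f\n", i, g[i]);
    }
    return 0;
}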
15.4 Correlation

This is a good place to give a mention to that close relative of convolution, correlation. You will have noticed that convolution combines two functions of time by running the time parameter forward in the one and backward in the other. In correlation the parameters run in the same direction.

The use of correlation is to compare two time functions and find how one is influenced by the other. The classic example of correlation is found in the satellite global positioning system (GPS). The satellite transmits a pseudo random binary sequence (PRBS) which is picked up by the receiver. [...]
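The details of the GPS receiver lie beyond this preview, but as a rough, invented illustration of the idea (the sequence and the three-sample delay below are made up, not taken from the book), correlating a received copy against the reference at every lag picks out the delay as the lag where the sum of products peaks:

#include <stdio.h>

/* Illustration only: cross-correlate a short reference sequence with a
   delayed copy of itself and report the lag at which the correlation
   peaks.  The sequence and the delay of 3 samples are invented here.   */
#define LEN 16

int main(void) {
    /* a short +1/-1 "pseudo random" reference sequence (made up) */
    int ref[LEN] = { 1,-1, 1, 1,-1,-1, 1,-1, 1, 1, 1,-1,-1, 1,-1,-1 };
    int rx[LEN];
    for (int i = 0; i < LEN; i++)
        rx[i] = ref[(i + LEN - 3) % LEN];   /* received copy, delayed by 3 */

    int bestLag = 0, bestScore = -LEN - 1;
    for (int lag = 0; lag < LEN; lag++) {
        int score = 0;
        for (int i = 0; i < LEN; i++)
            score += ref[i] * rx[(i + lag) % LEN];   /* same-direction product */
        if (score > bestScore) { bestScore = score; bestLag = lag; }
    }
    printf("correlation peaks at lag %d\n", bestLag);
    return 0;
}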
