Numerical Methods for Ordinary Differential Equations, Episode 12


LINEAR MULTISTEP METHODS

Table 461(I): Coefficients γ_0, γ_1, ..., γ_p for Nordsieck methods

           p=2    p=3    p=4     p=5      p=6      p=7          p=8
    γ_0    1/2    5/12   3/8     251/720  95/288   19087/60480  5257/17280
    γ_1    1      1      1       1        1        1            1
    γ_2    1/2    3/4    11/12   25/24    137/120  49/40        363/280
    γ_3           1/6    1/3     35/72    5/8      203/270      469/540
    γ_4                  1/24    5/48     17/96    49/192       967/2880
    γ_5                          1/120    1/40     7/144        7/90
    γ_6                                   1/720    7/1440       23/2160
    γ_7                                            1/5040       1/1260
    γ_8                                                         1/40320

so that the result computed by the Adams–Bashforth predictor will be

    y*_n = η_0^{[n−1]} + η_1^{[n−1]} + ··· + η_p^{[n−1]}.

If an approximation is also required for the scaled derivative at x_n, this can be found from the formula, also based on a Taylor expansion,

    h y'(x_n) ≈ η_1^{[n−1]} + 2 η_2^{[n−1]} + ··· + p η_p^{[n−1]}.    (461d)

To find the Nordsieck equivalent of the Adams–Moulton corrector formula, it is necessary to add β_0 multiplied by the difference between the corrected value of the scaled derivative and the extrapolated value computed by (461d). That is, the corrected value of η_0^{[n]} becomes

    η_0^{[n]} = β_0 Δ_n + η_0^{[n−1]} + η_1^{[n−1]} + ··· + η_p^{[n−1]},

where

    Δ_n = h f(x_n, y*_n) − Σ_{i=1}^{p} i η_i^{[n−1]}.

In this formulation we have assumed a PECE mode but, if further iterations are carried out, the only essential change will be that the second argument of h f(x_n, y*_n) will be modified.

For constant stepsize, the method should be equivalent to the Adams predictor–corrector pair, and this means that all the output values will be modified in one way or another from the result that would have been formed by simple extrapolation from the incoming Nordsieck components. Thus we can write the result computed in a step as

    [ η_0^{[n]}     ]   [ γ_0     ]         [ 1  1  1  ···  1          1        ] [ η_0^{[n−1]}     ]
    [ η_1^{[n]}     ]   [ γ_1     ]         [ 0  1  2  ···  p−1        p        ] [ η_1^{[n−1]}     ]
    [ η_2^{[n]}     ] = [ γ_2     ] Δ_n  +  [ 0  0  1  ···  C(p−1,2)   C(p,2)   ] [ η_2^{[n−1]}     ]    (461e)
    [   ⋮           ]   [  ⋮      ]         [ ⋮  ⋮  ⋮        ⋮          ⋮       ] [    ⋮            ]
    [ η_{p−1}^{[n]} ]   [ γ_{p−1} ]         [ 0  0  0  ···  1          p        ] [ η_{p−1}^{[n−1]} ]
    [ η_p^{[n]}     ]   [ γ_p     ]         [ 0  0  0  ···  0          1        ] [ η_p^{[n−1]}     ]

where C(m, j) denotes the binomial coefficient "m choose j". The quantities γ_i, i = 0, 1, 2, ..., p, have values determined by the equivalence with the standard fixed-stepsize method, and we know at least that

    γ_0 = β_0,    γ_1 = 1.

The value selected for γ_1 ensures that η_1^{[n]} is precisely the result evaluated from η_0^{[n]} using the differential equation. We can arrive at the correct values of γ_2, ..., γ_p by the requirement that the matrix

    [ 1  3  ···  C(p−1,2)  C(p,2) ]     [ γ_2     ]
    [ 0  1  ···  C(p−1,3)  C(p,3) ]     [ γ_3     ]
    [ ⋮  ⋮         ⋮         ⋮    ]  −  [  ⋮      ] [ 2  3  ···  p−1  p ]
    [ 0  0  ···  1         p      ]     [ γ_{p−1} ]
    [ 0  0  ···  0         1      ]     [ γ_p     ]

has zero spectral radius. Values of the coefficients γ_i, i = 0, 1, ..., p, are given in Table 461(I) for p = 2, 3, ..., 8.

Adjustment of stepsize is carried out by multiplying the vector of output approximations formed in (461e), at the completion of step n, by the diagonal matrix D(r) before the results are accepted as input to step n + 1, where

    D(r) = diag(1, r, r², ..., r^p).

It was discovered experimentally by Gear that numerical instabilities can result from using this formulation. This can be seen in the example p = 3, where we find the values γ_2 = 3/4, γ_3 = 1/6. Stability is determined by products of matrices of the form

    [ −(1/2) r²   (3/4) r² ]
    [ −(1/3) r³   (1/2) r³ ],

and for r ≥ 1.69562 this matrix is no longer power-bounded. Gear's pragmatic solution was to prohibit changes for several further steps after a stepsize change had occurred. An alternative to this remedy will be considered in the next subsection.

462 Variable stepsize for Nordsieck methods

The motivation we have presented for the choice of γ_1, γ_2, ..., in the formulation of Nordsieck methods was to require a certain matrix to have zero spectral radius. Denote the vector γ and the matrix V by

    γ = [ γ_1; γ_2; ...; γ_p ],    V = [ 1  2  3  ···  p
                                         0  1  3  ···  p(p−1)/2
                                         0  0  1  ···  p(p−1)(p−2)/6
                                         ⋮  ⋮  ⋮         ⋮
                                         0  0  0  ···  1 ],

and denote by e_1 the basis row vector e_1 = [ 1  0  ···  0 ]. The characteristic property of γ is that the matrix

    (I − γ e_1) V    (462a)

has zero spectral radius. When variable stepsize is introduced, the matrix in (462a) is multiplied by D(r) = diag(r, r², r³, ..., r^p) and, as we have seen, if γ is chosen on the basis of constant h, there is a deterioration in stable behaviour. We consider the alternative of choosing γ as a function of r so that

    ρ( D(r)(I − γ e_1)V ) = 0.

The value of γ_1 still retains the value 1 but, in the only example we consider, p = 3, it is found that

    γ_2 = (1 + 2r) / (2(1 + r)),    γ_3 = r / (3(1 + r)),

and we have

    D(r)(I − γ e_1)V = [ 0                        0                   0
                         −r²(1+2r)/(2(1+r))      −r³/(1+r)           3r²/(2(1+r))
                         −r⁴/(3(1+r))            −2r⁴/(3(1+r))       r³/(1+r)     ].    (462b)

It is obvious that this matrix is power-bounded for all positive values of r. However, if a sequence of n steps is carried out with stepsize changes r_1, r_2, ..., r_n, then the product of matrices of the form given by (462b) for these values of r has to be analysed to determine stability. The spectral radius of such a product is found to be

    (|r_1 − r_n| r_1²)/(1 + r_1) · (|r_2 − r_1| r_2²)/(1 + r_2) · (|r_3 − r_2| r_3²)/(1 + r_3) ··· (|r_n − r_{n−1}| r_n²)/(1 + r_n),

and this will be bounded by 1 as long as r_i ∈ [0, r*], where r* has the property that

    r_1 r_2 |r_2 − r_1| / √((1 + r_1)(1 + r_2)) ≤ 1,    whenever r_1, r_2 ∈ [0, r*].

It is found after some calculations that stability, in the sense of this discussion, is achieved if r* ≈ 2.15954543.

463 Local error estimation

The standard estimator for local truncation error is based on the Milne device. That is, the difference between the predicted and corrected values provides an approximation to some constant multiplied by h^{p+1} y^{(p+1)}(x_n), and the local truncation error can be estimated by multiplying this by a suitable scale factor.
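The two stability claims for the p = 3 example can be checked numerically. The sketch below is an illustration, not from the book (the function names are my own): it confirms that the fixed-γ stepsize-change matrix loses power-boundedness near r ≈ 1.69562, and that the r-dependent γ of Subsection 462 makes D(r)(I − γe_1)V nilpotent, hence of zero spectral radius.

```python
import numpy as np

def gear_matrix(r):
    """Stepsize-change stability matrix for the fixed-gamma p = 3 example."""
    return np.array([[-r**2 / 2, 3 * r**2 / 4],
                     [-r**3 / 3, r**3 / 2]])

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def variable_matrix(r):
    """D(r)(I - gamma e1)V for p = 3 with the r-dependent gamma of Subsection 462."""
    g2 = (1 + 2 * r) / (2 * (1 + r))
    g3 = r / (3 * (1 + r))
    gamma = np.array([1.0, g2, g3])
    V = np.array([[1.0, 2, 3], [0, 1, 3], [0, 0, 1]])
    D = np.diag([r, r**2, r**3])
    e1 = np.array([[1.0, 0.0, 0.0]])
    return D @ (np.eye(3) - gamma[:, None] @ e1) @ V

# gear_matrix(r) is singular (its rows are proportional), so its only nonzero
# eigenvalue is the trace (r^3 - r^2)/2; power-boundedness needs r^3 - r^2 <= 2.
for r in (1.0, 1.5, 1.69, 1.70, 2.0):
    rho = spectral_radius(gear_matrix(r))
    print(f"r = {r:4.2f}  rho = {rho:.4f}  {'bounded' if rho <= 1 else 'unbounded powers'}")

# With the r-dependent gamma, the matrix is nilpotent for every r > 0.
M = variable_matrix(1.8)
print(np.allclose(np.linalg.matrix_power(M, 3), 0.0))  # a 3x3 nilpotent matrix cubes to zero
```

The threshold 1.69562 quoted in the text is precisely the positive root of r³ − r² = 2, which is where the nonzero eigenvalue of the fixed-γ matrix reaches 1 in modulus.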
This procedure has to be interpreted in a different way if, as in some modern codes, the predictor and corrector are accurate to different orders. We no longer have an asymptotically correct approximation to the local truncation error, but to the error in the predictor, assuming this has the lower order. Nevertheless, stepsize control based on this approach often gives reliable and useful performance.

To allow for a possible increase in order, an estimate is also needed for the scaled derivative one order higher than the standard error estimator. It is very difficult to do this reliably, because any approximation will be based on a linear combination of h y'(x) for different x arguments. These quantities in turn will be of the form

    h f(x, y(x) + C h^{p+1} + O(h^{p+2})),

and the terms of the form C h^{p+1} + O(h^{p+2}) will distort the result obtained. However, it is possible to estimate the scaled derivative of order p + 2 reliably, at least if the stepsize has been constant over recent steps, by forming the difference of approximations to the order p + 1 derivative over two successive steps. If the stepsize has varied moderately, this approximation will still be reasonable. In any case, if the criterion for increasing order turns out to be too optimistic for any specific problem, then after the first step with the new order a rejection is likely to occur, and the order will either be reduced again, or else the stepsize will be lowered while still maintaining the higher order.

Exercises 46

46.1 Show how to write y(x_n + rh) in terms of y(x_n), h y'(x_n) and h y'(x_n − h), to within O(h³). Show how this approximation might be used to generalize the order 2 Adams–Bashforth method to variable stepsize.

46.2 How should the formulation of Subsection 461 be modified to represent Adams–Bashforth methods?
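The Milne device of Subsection 463 can be demonstrated concretely. The sketch below is my own illustration, not code from the book: it takes one PECE step with the third order Adams–Bashforth predictor and third order Adams–Moulton corrector on y' = y, starting from exact history values. The error constants 3/8 and −1/24 used for the scale factor are the standard Adams constants, and halving h shows that the predictor–corrector difference behaves like h⁴, as the Milne device requires.

```python
import math

def pece_milne(h):
    """One AB3-predict / AM3-correct step for y' = y from exact history.

    Returns (corrector_minus_predictor, Milne estimate of the corrector error).
    """
    x = [0.0, h, 2 * h]
    y = [math.exp(t) for t in x]          # exact starting values
    f = y[:]                              # f(x, y) = y for this test problem
    # Adams-Bashforth order 3 predictor
    y_pred = y[2] + h / 12 * (23 * f[2] - 16 * f[1] + 5 * f[0])
    # Adams-Moulton order 3 corrector in PECE mode (evaluate f at the predicted value)
    f_pred = y_pred
    y_corr = y[2] + h / 12 * (5 * f_pred + 8 * f[2] - f[1])
    d = y_corr - y_pred                   # approximately (3/8 + 1/24) h^4 y''''
    est = (-1 / 24) / (3 / 8 + 1 / 24) * d  # scale factor applied to the difference
    return d, est

d1, est1 = pece_milne(0.05)
d2, _ = pece_milne(0.025)
print(d1 / d2)   # roughly 2**4 = 16, confirming the h^4 behaviour
```

The ratio is only asymptotically 16; for finite h it is perturbed by O(h) terms, which is exactly why codes apply a safety factor when using such estimates for stepsize control.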
Chapter 5

General Linear Methods

50 Representing Methods in General Linear Form

500 Multivalue–multistage methods

The systematic computation of an approximation to the solution of an initial value problem usually involves just two operations: evaluation of the function f defining the differential equation, and the forming of linear combinations of previously computed vectors. In the case of implicit methods, further complications arise, but these can also be brought into the same general linear formulation.

We consider methods in which a collection of vectors forms the input at the beginning of a step, and a similar collection is passed on as output from the current step and as input into the following step. Thus the method is a multivalue method, and we write r for the number of quantities processed in this way. In the computations that take place in forming the output quantities, there are assumed to be s approximations to the solution at points near the current time step for which the function f needs to be evaluated. As for Runge–Kutta methods, these are known as stages, and we have an s-stage or, in general, multistage method. The intricate set of connections between these quantities makes up what is known as a general linear method.

Following Burrage and Butcher (1980), we represent the method by four matrices, which we will generally denote by A, U, B and V. These can be written together as a partitioned (s + r) × (s + r) matrix

    [ A  U ]
    [ B  V ].

The input vectors available at step n will be denoted by y_1^{[n−1]}, y_2^{[n−1]}, ..., y_r^{[n−1]}. During the computations which constitute the step, stage values Y_1, Y_2, ..., Y_s are computed, and derivative values F_i = f(Y_i), i = 1, 2, ..., s, are computed in terms of these. Finally, the output values are computed and, because these will constitute the input at step n + 1, they will be denoted by y_i^{[n]}, i = 1, 2, ..., r.

[Numerical Methods for Ordinary Differential Equations, Second Edition. J. C. Butcher. © 2008 John Wiley & Sons, Ltd. ISBN: 978-0-470-72335-7]

The relationships between these quantities are defined in terms of the elements of A, U, B and V by the equations

    Y_i = Σ_{j=1}^{s} a_{ij} h F_j + Σ_{j=1}^{r} u_{ij} y_j^{[n−1]},    i = 1, 2, ..., s,    (500a)

    y_i^{[n]} = Σ_{j=1}^{s} b_{ij} h F_j + Σ_{j=1}^{r} v_{ij} y_j^{[n−1]},    i = 1, 2, ..., r.    (500b)

It will be convenient to use a more concise notation, and we start by defining vectors Y, F ∈ R^{sN} and y^{[n−1]}, y^{[n]} ∈ R^{rN} as follows:

    Y = [ Y_1; Y_2; ...; Y_s ],   F = [ F_1; F_2; ...; F_s ],
    y^{[n−1]} = [ y_1^{[n−1]}; y_2^{[n−1]}; ...; y_r^{[n−1]} ],   y^{[n]} = [ y_1^{[n]}; y_2^{[n]}; ...; y_r^{[n]} ].

Using these supervectors, it is possible to write (500a) and (500b) in the form

    [ Y       ]   [ A ⊗ I_N   U ⊗ I_N ] [ hF        ]
    [ y^{[n]} ] = [ B ⊗ I_N   V ⊗ I_N ] [ y^{[n−1]} ].    (500c)

In this formulation, I_N denotes the N × N unit matrix, and the Kronecker product is given by

    A ⊗ I_N = [ a_{11} I_N   a_{12} I_N   ···   a_{1s} I_N
                a_{21} I_N   a_{22} I_N   ···   a_{2s} I_N
                    ⋮            ⋮                  ⋮
                a_{s1} I_N   a_{s2} I_N   ···   a_{ss} I_N ].

When there is no possibility of confusion, we simplify the notation by replacing the Kronecker-product form by

    [ A  U ]
    [ B  V ].

In Subsections 502–505, we illustrate these ideas by showing how some known methods, as well as some new methods, can be formulated in this manner. First, however, we will discuss the possibility of transforming a given method into one using a different arrangement of the data passed from step to step.

501 Transformations of methods

Let T denote a non-singular r × r matrix. Given a general linear method characterized by the matrices (A, U, B, V), we consider the construction of a second method for which the input quantities, and the corresponding output quantities, are replaced by linear combinations of the subvectors in y^{[n−1]} (or in y^{[n]}, respectively).
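Equations (500a) and (500b) translate directly into code. The sketch below is my own illustration (the helper names are not from the book); it performs one step of a general linear method whose A matrix is strictly lower triangular, so the stages can be computed in sequence without solving implicit equations. Euler's method written as a general linear method with s = r = 1 serves as a sanity check.

```python
import numpy as np

def glm_step(A, U, B, V, f, h, y_in):
    """One step of the general linear method (A, U, B, V), explicit stages only.

    Assumes A is strictly lower triangular, so stage i uses only stages j < i.
    y_in is a list of r input vectors; the r output vectors of (500b) are returned.
    """
    s, r = len(A), len(V)
    hF = []
    for i in range(s):
        Yi = sum(A[i][j] * hF[j] for j in range(i)) \
           + sum(U[i][j] * y_in[j] for j in range(r))          # (500a)
        hF.append(h * f(Yi))
    return [sum(B[i][j] * hF[j] for j in range(s))
          + sum(V[i][j] * y_in[j] for j in range(r)) for i in range(r)]  # (500b)

# Euler's method as a general linear method: s = r = 1, A = 0, U = B = V = 1.
f = lambda y: y
out = glm_step([[0.0]], [[1.0]], [[1.0]], [[1.0]], f, 0.1, [np.array([1.0])])
print(out[0])   # 1 + h = 1.1 for one Euler step on y' = y
```

The same function handles any multivalue, multistage method with explicit stages; implicit A blocks would require a nonlinear solve in the stage loop.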
In each case the rows of T supply the coefficients in the linear combinations. These ideas are well known in the case of Adams methods, where it is common practice to represent the data passed between steps in a variety of configurations. For example, the data imported into step n may consist of an approximation to y(x_{n−1}) together with approximations to h y'(x_{n−i}), for i = 1, 2, ..., k. Alternatively it might, as in Bashforth and Adams (1883), be expressed in terms of y(x_{n−1}) and of approximations to a sequence of backward differences of the derivative approximations. It is also possible, as proposed in Nordsieck (1962), to replace the approximations to the derivatives at equally spaced points in the past by linear combinations which approximate scaled first and higher derivatives at x_{n−1}.

Let z_i^{[n−1]}, i = 1, 2, ..., r, denote a component of the transformed input data, where

    z_i^{[n−1]} = Σ_{j=1}^{r} t_{ij} y_j^{[n−1]},    z_i^{[n]} = Σ_{j=1}^{r} t_{ij} y_j^{[n]}.

This transformation can be written more compactly as

    z^{[n−1]} = T y^{[n−1]},    z^{[n]} = T y^{[n]}.

Hence the method which uses the y data and the coefficients (A, U, B, V) could be rewritten to produce formulae for the stages in the form

    Y = hAF + U y^{[n−1]} = hAF + U T^{−1} z^{[n−1]}.    (501a)

The formula y^{[n]} = hBF + V y^{[n−1]}, when transformed to give the value of z^{[n]}, becomes

    z^{[n]} = T( hBF + V y^{[n−1]} ) = h(TB)F + (T V T^{−1}) z^{[n−1]}.    (501b)

Combining (501a) and (501b) into a single formula gives

    [ Y       ]   [ A    U T^{−1}   ] [ hF        ]
    [ z^{[n]} ] = [ TB   T V T^{−1} ] [ z^{[n−1]} ].

Thus, the method with coefficient matrices (A, U T^{−1}, TB, T V T^{−1}) is related to the original method (A, U, B, V) by an equivalence relationship with a natural computational significance. The significance is that a sequence of approximations, using one of these formulations, can be transformed into the sequence that would have been generated using the alternative formulation.
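The equivalence of (A, U, B, V) and (A, UT^{−1}, TB, TVT^{−1}) can be confirmed numerically: stepping the original method and then transforming its output must agree with stepping the transformed method on transformed input. A small sketch (my own illustration; the test uses the scalar problem y' = λy so that the stages can be solved by linear algebra):

```python
import numpy as np

def transform_glm(A, U, B, V, T):
    """Coefficients of the equivalent method using inputs z = T y (Section 501)."""
    Tinv = np.linalg.inv(T)
    return A, U @ Tinv, T @ B, T @ V @ Tinv

def step(A, U, B, V, lam, h, y_in):
    """One step on the scalar test problem y' = lam*y; stages solved exactly."""
    s = A.shape[0]
    Y = np.linalg.solve(np.eye(s) - h * lam * A, U @ y_in)
    return h * lam * (B @ Y) + V @ y_in

rng = np.random.default_rng(0)
A, U = rng.random((2, 2)), rng.random((2, 2))
B, V = rng.random((2, 2)), rng.random((2, 2))
T = np.array([[1.0, 0.5], [0.0, 1.0]])

y0 = np.array([1.0, 0.3])
y1 = step(A, U, B, V, -1.0, 0.1, y0)                          # original method
z1 = step(*transform_glm(A, U, B, V, T), -1.0, 0.1, T @ y0)   # transformed method

print(np.allclose(T @ y1, z1))   # True: the two formulations generate the same sequence
```

Because the stage vector Y is unchanged by the transformation, the agreement is exact up to rounding, for any non-singular T.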
It is important to ensure that any definitions concerning the properties of a generic general linear method transform in an appropriate manner when the coefficient matrices are transformed. Even though there may be many interpretations of the same general linear method, there may well be specific representations which have advantages of one sort or another. Some examples of this will be encountered later in this section.

502 Runge–Kutta methods as general linear methods

Since Runge–Kutta methods have a single input, it is usually convenient to represent them, as general linear methods, with r = 1. Assuming the input vector is an approximation to y(x_{n−1}), it is only necessary to write U = 1, V = 1, write B as the single row b of the Runge–Kutta tableau and, finally, identify A with the s × s matrix of the same name also in this tableau. A very conventional and well-known example is the classical fourth order method

    0    |
    1/2  |  1/2
    1/2  |  0     1/2
    1    |  0     0     1
    -----+----------------------
         |  1/6   1/3   1/3   1/6

which, in general linear formulation, is represented by the partitioned matrix

    [ 0     0     0     0    | 1 ]
    [ 1/2   0     0     0    | 1 ]
    [ 0     1/2   0     0    | 1 ]
    [ 0     0     1     0    | 1 ]
    [ 1/6   1/3   1/3   1/6  | 1 ].

A more interesting example is the Lobatto IIIA method

    0    |  0      0     0
    1/2  |  5/24   1/3   −1/24
    1    |  1/6    2/3   1/6
    -----+--------------------
         |  1/6    2/3   1/6

for which the straightforward representation, with s = 3 and r = 1, is misleading. The reason is that the method has the 'FSAL property', in the sense that the final stage evaluated in a step is identical with the first stage of the following step. It therefore becomes possible, and even appropriate, to use a representation with s = r = 2 which expresses, quite explicitly, that the FSAL property holds. This representation would be

    [ 1/3   −1/24 | 1   5/24 ]
    [ 2/3    1/6  | 1   1/6  ]
    [ 2/3    1/6  | 1   1/6  ]    (502a)
    [ 0      1    | 0   0    ],

and the input quantities are supposed to be approximations

    y_1^{[n−1]} ≈ y(x_{n−1}),    y_2^{[n−1]} ≈ h y'(x_{n−1}).
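The FSAL representation (502a) can be verified on the scalar test problem y' = λy: with input (y_{n−1}, h y'_{n−1}), one step must multiply y by the stability function of the three-stage Lobatto IIIA method, which is the (2,2) Padé approximation to e^z. A sketch (my own check; the entries 5/24 and −1/24 come from rewriting the Lobatto tableau in two-value form):

```python
import numpy as np

# (502a): two-value, two-stage FSAL form of the three-stage Lobatto IIIA method.
A = np.array([[1/3, -1/24], [2/3, 1/6]])
U = np.array([[1.0, 5/24], [1.0, 1/6]])
B = np.array([[2/3, 1/6], [0.0, 1.0]])
V = np.array([[1.0, 1/6], [0.0, 0.0]])

def step(z, y_in):
    """One step on y' = lam*y with z = h*lam; the implicit stages are linear."""
    Y = np.linalg.solve(np.eye(2) - z * A, U @ y_in)
    return z * (B @ Y) + V @ y_in

z = 0.1
y_out = step(z, np.array([1.0, z]))              # consistent input: y = 1, h y' = z
R = (1 + z/2 + z**2/12) / (1 - z/2 + z**2/12)    # Pade (2,2): Lobatto IIIA stability function
print(y_out[0] - R)       # ~0: the FSAL form reproduces the one-step map
print(y_out[1] - z * R)   # ~0: the second output is h y' at the end of the step
```

The second assertion reflects the FSAL property itself: the output derivative is exactly the stage derivative that the next step will reuse as its first stage.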
Finally, we consider a Runge–Kutta method introduced in Subsection 322, with tableau

    0     |
    −1/2  |  −1/2
    1/2   |  3/4   −1/4
    1     |  −2     1     2
    ------+----------------------
          |  1/6    0     2/3   1/6.    (502b)

As we pointed out when the method was introduced, it can be implemented as a two-value method by replacing the computation of the second stage derivative by a quantity already computed in the previous step. The method is now not equivalent to any Runge–Kutta method but, as a general linear method, it has coefficient matrix

    [ 0     0     0    | 1   0    ]
    [ 3/4   0     0    | 1   −1/4 ]
    [ −2    2     0    | 1   1    ]    (502c)
    [ 1/6   2/3   1/6  | 1   0    ]
    [ 0     1     0    | 0   0    ].

503 Linear multistep methods as general linear methods

For a linear k-step method [α, β] of the special form α(z) = 1 − z, the natural way of writing this as a general linear method is to choose r = k + 1, s = 1 and the input approximations as

    y^{[n−1]} ≈ [ y(x_{n−1}); h y'(x_{n−1}); h y'(x_{n−2}); ...; h y'(x_{n−k}) ].

The matrix representing the method now becomes

    [ β_0 | 1   β_1   β_2   β_3   ···   β_{k−1}   β_k ]
    [ β_0 | 1   β_1   β_2   β_3   ···   β_{k−1}   β_k ]
    [ 1   | 0   0     0     0     ···   0         0   ]
    [ 0   | 0   1     0     0     ···   0         0   ]
    [ 0   | 0   0     1     0     ···   0         0   ]
    [ ⋮   | ⋮   ⋮     ⋮     ⋮           ⋮         ⋮   ]
    [ 0   | 0   0     0     0     ···   1         0   ].

Because y_1^{[n−1]} and y_{k+1}^{[n−1]} occur in the combination y_1^{[n−1]} + β_k y_{k+1}^{[n−1]} in each of the two places where these quantities are used, we might try to simplify the method by transforming with the matrix

    T = [ 1   0   0   ···   0   β_k ]
        [ 0   1   0   ···   0   0   ]
        [ 0   0   1   ···   0   0   ]
        [ ⋮   ⋮   ⋮         ⋮   ⋮   ]
        [ 0   0   0   ···   1   0   ]
        [ 0   0   0   ···   0   1   ].

The transformed coefficient matrices become

    [ A    U T^{−1}   ]   [ β_0 | 1   β_1   β_2   β_3   ···   β_{k−1}         0 ]
    [ TB   T V T^{−1} ] = [ β_0 | 1   β_1   β_2   β_3   ···   β_{k−1} + β_k   0 ]
                          [ 1   | 0   0     0     0     ···   0               0 ]
                          [ 0   | 0   1     0     0     ···   0               0 ]
                          [ 0   | 0   0     1     0     ···   0               0 ]
                          [ ⋮   | ⋮   ⋮     ⋮     ⋮           ⋮               ⋮ ]
                          [ 0   | 0   0     0     0     ···   1               0 ],

and we see that it is possible to reduce r from k + 1 to k, because the (k+1)th input vector is never used in the calculation.

The well-known technique of implementing an implicit linear multistep method, by combining it with a related explicit method to form a predictor–corrector pair, fits easily into a general linear formulation. Consider, for example, the PECE method based on the third order Adams–Bashforth and Adams–Moulton predictor–corrector pair. Denote the predicted [...]

[...] Runge–Kutta methods, they have some desirable properties: for (505a) the stage order is 2, and for (505b) the stage order is 3.

Exercises 50

50.1 Write the general linear method given by (503a) in transformed form, using the matrix

    T = [ 1   0     0      0   ]
        [ 0   1     0      0   ]
        [ 0   3/4   −1     1/4 ]
        [ 0   1/6   −1/3   1/6 ].

Note that this converts the method into Nordsieck form. [...]

[...] problem satisfying a Lipschitz condition. For notational convenience, (512a) will usually be abbreviated as u y(x). Formally, we write φ(h) for the starting approximation associated with the method and with a given initial value problem.

Definition 512A A general linear method (A, U, B, V) is 'convergent' if for any initial value problem y'(x) = [...]
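The Section 503 pattern for α(z) = 1 − z can be generated mechanically from the β coefficients. The sketch below is my own illustration (the helper names are not from the book): it builds (A, U, B, V) with s = 1 and r = k + 1, and checks one step of the third order Adams–Moulton method on y' = λy against the formula applied directly.

```python
import numpy as np

def adams_glm(beta):
    """(A, U, B, V) for the k-step method with alpha(z) = 1 - z (Section 503).

    beta = [beta_0, ..., beta_k]; the r = k + 1 inputs are
    y_{n-1}, h y'_{n-1}, ..., h y'_{n-k}.
    """
    k = len(beta) - 1
    A = np.array([[beta[0]]])
    U = np.array([[1.0] + list(beta[1:])])
    B = np.zeros((k + 1, 1))
    B[0, 0], B[1, 0] = beta[0], 1.0   # outputs y_n and h y'_n
    V = np.zeros((k + 1, k + 1))
    V[0, :] = U[0, :]                 # y_n uses the same input combination as the stage
    for i in range(2, k + 1):
        V[i, i - 1] = 1.0             # shift the stored derivative values
    return A, U, B, V

def step(A, U, B, V, z, y_in):
    """One step on y' = lam*y with z = h*lam; the single implicit stage is linear."""
    Y = np.linalg.solve(np.eye(1) - z * A, U @ y_in)
    return z * (B @ Y) + V @ y_in

# Third order Adams-Moulton: y_n = y_{n-1} + h (5 f_n + 8 f_{n-1} - f_{n-2}) / 12.
beta = [5/12, 8/12, -1/12]
A, U, B, V = adams_glm(beta)
z = -0.2
y_in = np.array([1.0, z, z * np.exp(-z)])   # exact history for y = e^{lam x}
y_out = step(A, U, B, V, z, y_in)

y_direct = (1 + beta[1] * z + beta[2] * z * np.exp(-z)) / (1 - beta[0] * z)
print(abs(y_out[0] - y_direct))   # ~0: the GLM reproduces the Adams-Moulton step
```

The remaining outputs implement the bookkeeping rows of the partitioned matrix: the second output is h y'_n and the later ones are the shifted derivatives passed on to the next step.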
[...] linear form, and then rewritten it, using equally simple operations, into a less recognizable form, an obvious question arises. The question is whether it might have been more appropriate to use the general linear formulation from the start, and then explore the existence of suitable methods that have no connection with linear multistep methods. [...]

[...] using the formula

    y_n = Σ_{i=1}^{k} α_i y_{n−i} + ( Σ_{i=0}^{k} β_i ) hF,    (503b)

where

    Y = ( Σ_{i=0}^{k} β_i )^{−1} Σ_{i=0}^{k} β_i y_{n−i}.    (503c)

This does not fit into the standard representation for general linear methods, but it achieves this format when Y and y_n are separated out from the two expressions (503b) and (503c). We find

    Y = β_0 hF + ( Σ_{i=0}^{k} β_i )^{−1} Σ_{i=1}^{k} (β_0 α_i + β_i) y_{n−i},

    y_n = ( Σ_{i=0}^{k} β_i ) hF + Σ_{i=1}^{k} α_i y_{n−i}. [...]

[...] (510c)

Just as for linear multistep methods, we need a concept of stability. In the general linear case this is defined in terms of the power-boundedness of V and, as we shall see, is related to the solvability of the problem y' = 0.

Definition 510C A general linear method (A, U, B, V) is 'stable' if there exists a constant C such that, for all n, ‖V^n‖ ≤ C.
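Stability in the sense of Definition 510C asks for a uniform bound on ‖V^n‖, which is strictly stronger than requiring spectral radius at most 1. A quick numerical probe (my own illustration, not from the book) makes the distinction visible: a matrix with a defective eigenvalue 1 has spectral radius 1 yet linearly growing powers.

```python
import numpy as np

def power_bound(V, n_max=200):
    """Largest infinity-norm of V^n for 1 <= n <= n_max."""
    M, worst = np.eye(len(V)), 0.0
    for _ in range(n_max):
        M = M @ V
        worst = max(worst, np.linalg.norm(M, np.inf))
    return worst

V_stable = np.array([[1.0, 0.5], [0.0, 0.0]])   # eigenvalues 1 and 0, semisimple
V_jordan = np.array([[1.0, 1.0], [0.0, 1.0]])   # eigenvalue 1 in a Jordan block

print(power_bound(V_stable))   # bounded: V^n = V for every n >= 1
print(power_bound(V_jordan))   # grows like n: not power-bounded, hence not stable
```

This is the same phenomenon already met for Nordsieck methods with stepsize change: a zero or unit spectral radius of a single matrix does not by itself control products of many such matrices, so the uniform bound in the definition is the right requirement.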
[...] equivalent to (510b). We have no interest in methods that are not covariant, even though it is possible to construct artificial methods which do not have this property but can still yield satisfactory numerical results. [...]

Figure 511(i): A commutative diagram for covariance

512 Definition of convergence

Just as for linear multistep methods, the necessity of using a starting [...]

[...]

    y^{[n]} − u = (1/n) ( I + V + ··· + V^{n−1} ) (B1 − u).

Because V has bounded powers, it can be written in the form

    V = S^{−1} [ I   0 ] S,
               [ 0   W ]

where I is r' × r' for r' ≤ r, and W is power-bounded and is such that 1 ∉ σ(W). This means that

    y^{[n]} − u = S^{−1} [ I   0                             ] S (B1 − u),
                         [ 0   (1/n)(I − W)^{−1}(I − W^n)    ]

whose limit as n → ∞ is

    S^{−1} [ I   0 ] S (B1 − u).
           [ 0   0 ]

If y^{[n]} − u is to converge to [...]

[...] the terms on the right-hand side, and we find

    ‖E^{[i]}‖ ≤ αhC Σ_{j=0}^{i−1} ‖E^{[j]}‖ + C i β h² + C ‖E^{[0]}‖.

This means that ‖E^{[i]}‖ is bounded by η_i defined by

    η_i = αhC Σ_{j=0}^{i−1} η_j + C i β h² + η_0,    η_0 = C ‖E^{[0]}‖.

To simplify this equation, find the difference of the formulae for η_i and η_{i−1}, to give the difference equation

    η_i − η_{i−1} = αhC η_{i−1} + Cβh²,

with solution

    η_i = (1 + αhC)^i η_0 + (βh/α) ( (1 + αhC)^i − 1 ).
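The difference equation η_i − η_{i−1} = αhC η_{i−1} + Cβh² is a scalar linear one-step recurrence, and its solution follows by standard summation. The sketch below (my own check, with arbitrary illustrative constants) iterates the recurrence and compares it against the closed form; the bound (1 + αhC)^i ≤ e^{αhCi} then yields the usual Gronwall-type error estimate.

```python
def eta_closed(i, alpha, beta, C, h, eta0):
    """Closed form of eta_i = (1 + alpha*h*C) * eta_{i-1} + C*beta*h**2."""
    g = 1 + alpha * h * C
    return g**i * eta0 + (beta * h / alpha) * (g**i - 1)

# Arbitrary illustrative constants: Lipschitz-type data alpha, beta, C and step h.
alpha, beta, C, h, eta0 = 2.0, 0.5, 3.0, 0.01, 1e-4

eta = eta0
for i in range(1, 51):
    eta = (1 + alpha * h * C) * eta + C * beta * h**2   # iterate the recurrence
    assert abs(eta - eta_closed(i, alpha, beta, C, h, eta0)) < 1e-12

print(eta)   # agrees with the closed form after 50 steps
```

Note that Cβh²/(αhC) = βh/α, which is why the inhomogeneous term contributes (βh/α)((1 + αhC)^i − 1): proportional to h, so the bound tends to η_0's contribution alone as h → 0 with i h fixed.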

Posted: 13/08/2014, 05:21
