NUMERICAL METHODS FOR ORDINARY DIFFERENTIAL EQUATIONS

Table 222(I) Errors in the numerical solution of the orbital problem (201d) with zero eccentricity through a half period using (222a)

     n    y₁ error     Ratio     y₂ error     Ratio
    32    0.00295976   7.8987    0.00537347   2.3976
    64    0.00037472   8.0168    0.00224114   3.3219
   128    0.00004674   8.0217    0.00067465   3.6879
   256    0.00000583   8.0136    0.00018294   3.8503
   512    0.00000073   8.0074    0.00004751   3.9267
  1024    0.00000009             0.00001210

     n    y₃ error      Ratio     y₄ error      Ratio
    32    −0.00471581   2.1899    −0.00154957   7.9797
    64    −0.00215339   3.2451    −0.00019419   8.1221
   128    −0.00066358   3.6551    −0.00002391   8.1017
   256    −0.00018155   3.8351    −0.00000295   8.0620
   512    −0.00004734   3.9194    −0.00000037   8.0339
  1024    −0.00001208             −0.00000005

In the experiments we report here, the first step is taken using the Runge–Kutta method introduced in the previous subsection. The errors are shown in Table 222(I) and we see that, for this problem at least, the results are just as good as for the Runge–Kutta method (221a) and (221b), even though only one derivative is computed in each step. In fact, for components 1 and 4, better than second order convergence is observed.

223 Use of higher derivatives

For many practical problems, it is possible to derive formulae for the second and higher derivatives of y, making use of the formula for y′ given by the differential equation. This opens up many computational options, which can be used to enhance the performance of multistage (Runge–Kutta) and multivalue (multistep) methods. If these higher derivatives are available, then the most popular option is to use them to evaluate a number of terms in Taylor's theorem. Even though we consider this idea further in Section 25, we present a simple illustrative example here.
Consider the initial value problem

y′ = xy + y²,   y(0) = 1/2,   (223a)

with solution

y(x) = exp(½x²) / ( 2 − ∫₀ˣ exp(½t²) dt ).

Figure 223(i) Errors in problem (223a) using Taylor series with orders p = 1, 2, 3, 4 (log–log plot of the error |E| against the stepsize h)

By differentiating (223a) once, twice and a third time, it is found that

y″ = (x + 2y)y′ + y,   (223b)
y‴ = (x + 2y)y″ + (2 + 2y′)y′,   (223c)
y⁽⁴⁾ = (x + 2y)y‴ + (3 + 6y′)y″.   (223d)

We illustrate the Taylor series method by solving (223a) with output point x = 1. Using n steps and stepsize h = 1/n, for n = 8, 16, 32, …, 2²⁰, the method was used with orders p = 1, 2, 3 and 4. For example, if p = 4, then

yₙ = yₙ₋₁ + hy′ + (h²/2)y″ + (h³/6)y‴ + (h⁴/24)y⁽⁴⁾,

where y′, y″, y‴ and y⁽⁴⁾ are given by (223a), (223b), (223c) and (223d) with xₙ₋₁ and yₙ₋₁ substituted for x and y, respectively. The results for these experiments are shown in Figure 223(i). In each case the error is plotted, where we note that the exact result is

exp(½) / ( 2 − ∫₀¹ exp(½x²) dx ),

with numerical value 2.04799324543883.

Figure 224(i) Classification of general method types (the Euler method at the base; Runge–Kutta, Taylor series and linear multistep methods along the directions 'more calculations per step', 'use of y derivatives' and 'more use of past values'; Obreshkov, Rosenbrock and general linear methods as combinations, with 'use of f derivatives' at the top)

224 Multistep–multistage–multiderivative methods

While multistep methods, multistage methods and multiderivative methods all exist in their own right, many attempts have been made to combine their attributes so as to obtain new methods of greater power. By introducing higher y derivatives into multistep methods, a new class of methods is found. These are known as Obreshkov methods, after their discoverer Obreshkov (1940). The best-known combination of the use of higher derivatives with Runge–Kutta methods is in Rosenbrock methods (Rosenbrock, 1963).
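The Taylor series computation of Subsection 223 can be sketched in code (an illustrative Python sketch, not part of the text; the function and step names are our own):

```python
def taylor4_step(x, y, h):
    # y', y'', y''', y'''' from (223a)-(223d)
    d1 = x*y + y*y
    d2 = (x + 2*y)*d1 + y
    d3 = (x + 2*y)*d2 + (2 + 2*d1)*d1
    d4 = (x + 2*y)*d3 + (3 + 6*d1)*d2
    # order 4 Taylor update
    return y + h*d1 + h**2/2*d2 + h**3/6*d3 + h**4/24*d4

def solve(n):
    # n steps of size h = 1/n from y(0) = 1/2 to the output point x = 1
    x, y, h = 0.0, 0.5, 1.0/n
    for _ in range(n):
        y = taylor4_step(x, y, h)
        x += h
    return y

print(solve(1024))  # close to the exact value 2.04799324543883
```

Halving the stepsize should reduce the error by a factor close to 2⁴ = 16, consistent with the p = 4 line of Figure 223(i).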
Rosenbrock methods are actually a greater generalization, in the sense that derivatives of f are used. These must be regarded as more general because, in the case of an autonomous problem, y″ can be found as

y″(x) = f′(y(x)) f(y(x)).

On the other hand, it is not possible to compute f′(y(x)) from values of the various y derivatives. Rosenbrock methods have a role in the solution of stiff problems. Other potentially useful combinations certainly exist but, in this book, we mainly confine ourselves to combinations of multistage and multivalue methods. These we refer to as 'general linear methods'.

The various methods that come under the classifications we have discussed here can be seen in a diagrammatic representation in Figure 224(i). The Euler method can be thought of as the infimum of all the method classes, and is shown at the lowest point of this diagram. On the other hand, the class of general linear methods is the supremum of all multistage and multivalue methods. The supremum of all methods, including also those with a multiderivative nature, is represented by the highest point in Figure 224(i).

225 Implicit methods

We have already seen, in Subsection 204, that the implicit Euler method has a role in the solution of stiff problems. Implicitness also exists in the case of linear multistep and Runge–Kutta methods. For example, the second order backward difference formula (also known as BDF2),

yₙ = (2/3)hf(xₙ, yₙ) + (4/3)yₙ₋₁ − (1/3)yₙ₋₂,   (225a)

is also used for stiff problems. There are also implicit Runge–Kutta methods, suitable for the solution of stiff problems. Another example of an implicit method is the 'implicit trapezoidal rule', given by

yₙ = yₙ₋₁ + (h/2)( f(xₙ, yₙ) + f(xₙ₋₁, yₙ₋₁) ).   (225b)

Like the Euler method itself, and its implicit variant, (225b) is, at the same time, a linear multistep method and a Runge–Kutta method.
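For a linear problem y′ = λy, the implicit equation in (225a) can be solved in closed form, which gives a convenient way of observing the second order behaviour (an illustrative Python sketch, not part of the text; for simplicity the single starting step uses the exact solution):

```python
import math

def bdf2_linear(lam, y0, x_end, n):
    # BDF2 (225a) applied to y' = lam*y; the implicit equation is linear, so
    # y_n = (4/3 y_{n-1} - 1/3 y_{n-2}) / (1 - 2*h*lam/3) in closed form.
    h = x_end / n
    ym2 = y0
    ym1 = y0 * math.exp(lam * h)   # exact value used for the starting step
    for _ in range(2, n + 1):
        ym2, ym1 = ym1, (4*ym1/3 - ym2/3) / (1 - 2*h*lam/3)
    return ym1

print(bdf2_linear(-1.0, 1.0, 1.0, 200))  # close to exp(-1) = 0.367879...
```

Doubling n reduces the error by a factor close to 4, as expected of a second order method.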
As a linear multistep method, the implicit trapezoidal rule (225b) can be regarded as a member of the Adams–Moulton family of methods. As a Runge–Kutta method, it can be regarded as a member of the Lobatto IIIA family.

Implicit methods carry with them the need to solve the nonlinear equation on which the solution, at a new step value, depends. For non-stiff problems, this can be conveniently carried out by fixed-point iteration. For example, the solution of the implicit equation (225b) is usually found by evaluating a starting approximation η^[0], given as yₙ in (222a). A sequence of approximations η^[k], k = 1, 2, …, is then formed by inserting η^[k] in place of yₙ on the left-hand side of (225b), and η^[k−1] in place of yₙ on the right-hand side. That is,

η^[k] = yₙ₋₁ + (h/2)( f(xₙ, η^[k−1]) + f(xₙ₋₁, yₙ₋₁) ),   k = 1, 2, … .   (225c)

The value of yₙ actually used for the solution is the numerically computed limit of this sequence. For stiff problems, unless h is chosen abnormally small, this sequence will not converge, and more elaborate schemes are needed to evaluate the solution to the implicit equations. These schemes are generally variants of the Newton–Raphson method, and will be discussed further in reference to the particular methods as they arise.

226 Local error estimates

It is usually regarded as necessary to have, as an accompaniment to any numerical method, a means of assessing its accuracy, in completing each step it takes. The main reason for this is that the work devoted to each step, and the accuracy that is achieved in the step, should be balanced for overall efficiency. If the cost of each step is approximately constant, this means that the error committed in the steps should be approximately equal. A second reason for assessing the accuracy of a method, along with the computation of the solution itself, is that it may be more efficient to change to a higher, or lower, member of the family of methods being used.
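The fixed-point iteration (225c) can be sketched as follows (an illustrative Python sketch, not part of the text; for simplicity the starting approximation η^[0] is taken from an explicit Euler step rather than from (222a)):

```python
import math

def trapezoidal_step(f, x, y, h, tol=1e-13, kmax=50):
    # Solve (225b) for y_n by the fixed-point iteration (225c).
    f_old = f(x, y)
    eta = y + h*f_old                      # eta^[0]: explicit Euler predictor
    for _ in range(kmax):
        eta_new = y + h/2*(f(x + h, eta) + f_old)
        if abs(eta_new - eta) <= tol:      # numerically computed limit reached
            break
        eta = eta_new
    return eta_new

# Demo on the non-stiff problem y' = -y over [0, 1], exact solution exp(-x):
x, y, h = 0.0, 1.0, 0.01
for _ in range(100):
    y = trapezoidal_step(lambda t, u: -u, x, y, h)
    x += h
print(abs(y - math.exp(-1.0)))  # small: second order accuracy
```

The iteration contracts here because h times the Lipschitz constant, divided by 2, is well below 1; for a stiff problem this would fail, as noted above.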
The only way that such a change can really be decided is for the accuracy of the current method to be assessed and, at the same time, for some sort of assessment to be made of the alternative method under consideration. We discuss here only the local error of the current method. It is not known how much a computed answer differs from what would correspond to the exact answer, defined locally. What is often available, instead, is a second approximation to the solution at the end of each step. The difference of these two approximations can sometimes be used to give quantitative information on the error in one of the two solution approximations. We illustrate this idea in a single case. Suppose the method given by (222a) is used to give a starting value for the iterative solution of (225b). It is possible to estimate local errors by using the difference of these two approximations. We discuss this in more detail in the context of predictor–corrector Adams methods.

Exercises 22

22.1 Assuming the function f satisfies a Lipschitz condition and that y, y′, y″ and y‴ are continuous, explain why the method given by (221a) and (221b) has order 2.

22.2 Explain why the method given by (222a) has order 2.

22.3 Find a method similar to (221a) and (221b), except that it is based on the mid-point rule, rather than the trapezoidal rule.

22.4 For a 'quadrature problem', f(x, y) = φ(x), compare the likely accuracies of the methods given in Subsections 221 and 222.

22.5 Verify your conclusion in Exercise 22.4 using the problem y′(x) = cos(x) on the interval [0, π/2].

22.6 Show that the backward difference method (225a) has order 2.

22.7 Calculate the solution to (203a) using the backward difference method (225a). Use n steps with constant stepsize h = π/n for n = 2⁰, 2¹, 2², …, 2¹⁰. Verify that second order behaviour is observed.
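The error-estimation idea of Subsection 226 can be illustrated with a simpler pairing than the one in the text (an illustrative Python sketch, not part of the text): an explicit Euler predictor together with the implicit trapezoidal rule (225b), solved exactly here for a linear test problem. Since the trapezoidal result is one order closer to the exact local solution, the difference of the two approximations estimates the local error of the less accurate (Euler) approximation.

```python
import math

def local_error_estimate(x0, y0, h):
    # Test problem y' = -y; for linear f the trapezoidal equation can be
    # solved in closed form, so no iteration is needed.
    y_pred = y0 + h*(-y0)                  # explicit Euler step
    y_corr = y0*(1 - h/2)/(1 + h/2)        # implicit trapezoidal step
    estimate = y_corr - y_pred             # estimates Euler's local error
    true_err = y0*math.exp(-h) - y_pred    # actual local error of Euler
    return estimate, true_err

est, err = local_error_estimate(0.0, 1.0, 0.01)
print(est, err)  # nearly equal for small h
```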
23 Runge–Kutta Methods

230 Historical introduction

The idea of generalizing the Euler method, by allowing for a number of evaluations of the derivative to take place in a step, is generally attributed to Runge (1895). Further contributions were made by Heun (1900) and Kutta (1901). The latter completely characterized the set of Runge–Kutta methods of order 4, and proposed the first methods of order 5. Special methods for second order differential equations were proposed by Nyström (1925), who also contributed to the development of methods for first order equations. It was not until the work of Huťa (1956, 1957) that sixth order methods were introduced.

Since the advent of digital computers, fresh interest has been focused on Runge–Kutta methods, and a large number of research workers have contributed to recent extensions to the theory, and to the development of particular methods. Although early studies were devoted entirely to explicit Runge–Kutta methods, interest has now moved to include implicit methods, which have become recognized as appropriate for the solution of stiff differential equations.

A number of different approaches have been used in the analysis of Runge–Kutta methods, but the one used in this section, and in the more detailed analysis of Chapter 3, is that developed by the present author (Butcher, 1963), following on from the work of Gill (1951) and Merson (1957).

231 Second order methods

In Subsection 221, a method was introduced based on the trapezoidal rule quadrature formula. It turns out that for any non-zero choice of a parameter θ, it is possible to construct a method with two stages and this same order. All that is required is a first partial step to form an approximation a distance θh into the step.
Using the derivative at this point, together with the derivative at the beginning of the step, the solution at the end of the step is then found using the second order quadrature formula

∫₀¹ φ(x) dx ≈ (1 − 1/(2θ)) φ(0) + (1/(2θ)) φ(θ).

Thus, to advance the solution from xₙ₋₁ to xₙ = xₙ₋₁ + h, the result is found from

Y = yₙ₋₁ + θhf(xₙ₋₁, yₙ₋₁),   (231a)
yₙ = yₙ₋₁ + (1 − 1/(2θ)) hf(xₙ₋₁, yₙ₋₁) + (1/(2θ)) hf(xₙ₋₁ + θh, Y).   (231b)

Note that the intermediate stage value Y is an approximation to the solution at the 'off-step' point xₙ₋₁ + θh, and is equal to y*ₙ, in the special case we have already considered, given by (221a) and (221b), in which θ = 1. The other most commonly used value is θ = 1/2, as in the 'mid-point rule'.

232 The coefficient tableau

It is convenient to represent a Runge–Kutta method by a partitioned tableau, of the form

  c | A
    | bᵀ

in which the vector c indicates the positions, within the step, of the stage values, the matrix A indicates the dependence of the stages on the derivatives found at other stages, and b is a vector of quadrature weights, showing how the final result depends on the derivatives, computed at the various stages. In the case of explicit methods, such as those we have considered so far in this section, the upper triangular components of A are left blank, because they have zero value.

The first two of the following examples of Runge–Kutta tableaux are, respectively, for the Euler method and the general second order method, parameterized by an arbitrary non-zero θ. The special cases, which are also given, are the trapezoidal rule method, designated here as RK21, and the mid-point rule method, RK22, corresponding to θ = 1 and θ = 1/2, respectively:

Euler:
  0 |
    | 1

General second order:
  0 |
  θ | θ
    | 1 − 1/(2θ)   1/(2θ)

RK21:
  0 |
  1 | 1
    | 1/2   1/2             (232a)

RK22:
  0   |
  1/2 | 1/2
      | 0     1             (232b)

233 Third order methods

It is possible to construct methods with three stages, which have order 3 numerical behaviour.
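The family (231a) and (231b) can be implemented generically (an illustrative Python sketch, not part of the text); second order behaviour on problem (223a), whose value at x = 1 was quoted in Subsection 223 as 2.04799324543883, is observed for any non-zero θ:

```python
def rk2_theta_step(f, x, y, h, theta):
    # One step of the general second order method (231a)-(231b)
    k1 = f(x, y)
    Y = y + theta*h*k1                     # internal stage (231a)
    k2 = f(x + theta*h, Y)
    return y + h*((1 - 1/(2*theta))*k1 + k2/(2*theta))

def integrate(f, y0, theta, n):
    x, y, h = 0.0, y0, 1.0/n
    for _ in range(n):
        y = rk2_theta_step(f, x, y, h, theta)
        x += h
    return y

f = lambda x, y: x*y + y*y                 # problem (223a)
for theta in (1.0, 0.5):                   # RK21 and RK22
    print(theta, integrate(f, 0.5, theta, 200))
```

Doubling n reduces the error by a factor close to 4 for either choice of θ.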
Such three-stage methods have the form

  0   |
  c₂  | a₂₁
  c₃  | a₃₁   a₃₂
      | b₁    b₂    b₃

where a₂₁ = c₂ and a₃₁ + a₃₂ = c₃. The conditions for order 3, taken from results that will be summarized in Subsection 234, are

b₁ + b₂ + b₃ = 1,   (233a)
b₂c₂ + b₃c₃ = 1/2,   (233b)
b₂c₂² + b₃c₃² = 1/3,   (233c)
b₃a₃₂c₂ = 1/6.   (233d)

The following tableaux,

RK31:
  0    |
  2/3  | 2/3
  2/3  | 1/3   1/3
       | 1/4   0     3/4          (233e)

and

RK32:
  0    |
  1/2  | 1/2
  1    | −1    2
       | 1/6   2/3   1/6          (233f)

give two possible solutions to (233a)–(233d).

234 Introduction to order conditions

As the order being sought increases, the algebraic conditions on the coefficients of the method become increasingly complicated. The pattern behind these conditions is known and, in this brief introduction to the order conditions, we state the results without any justification and show, by examples, how they are used.

Figure 234(i) Some illustrative rooted trees, with the root and leaves marked in each case

Let T denote the set of all 'rooted trees'. These are simple combinatorial graphs, which have the properties of being connected, having no cycles, and having a specific vertex designated as the root. The 'order' of a tree is the number of vertices in this tree. If the order is greater than 1, then the 'leaves' of a tree are the vertices from which there are no outward-growing arcs; in other words, a leaf is a vertex, other than the root, which has exactly one other vertex joined to it. An assortment of trees of various orders, with leaves and the root indicated in each case, is shown in Figure 234(i). In pictorial representations of particular rooted trees, as in this figure, we use the convention of placing the root at the lowest point in the picture.

For each tree t, a corresponding polynomial in the coefficients of the method can be written down. Denote this by Φ(t).
Also associated with each tree t is an integer γ(t). We now explain how Φ(t) and γ(t) are constructed.

In the case of Φ(t), associate with each vertex of the tree, except the leaves, a label i, j, …, and assume that i is the label attached to the root. Write down a sequence of factors of which the first is bᵢ. For each arc of the tree, other than an arc that terminates in a leaf, write down a factor, say aⱼₖ, where j and k are the beginning and end of the arc (assuming that all directions are in the sense of movement away from the root). Finally, for each arc terminating at a leaf, write down a factor, say cⱼ, where j is the label attached to the beginning of this arc. Having written down this sequence of factors, sum their product for all possible choices of each of the labels, in the set {1, 2, …, s}.

To find the value of γ(t), associate a factor with each vertex of the tree. For the leaves this factor is 1, and for all other vertices it is equal to the sum of the factors attached to all outward-growing neighbours, plus 1. The product of the factors, for all vertices of the tree, is the value of γ(t).

The values of these quantities are shown in Table 234(I), for each of the eight trees with orders up to 4.

Table 234(I) The rooted trees up to order 4

  Order   Φ                     γ
  1       Σᵢ bᵢ                 1
  2       Σᵢ bᵢcᵢ               2
  3       Σᵢ bᵢcᵢ²              3
  3       Σᵢ,ⱼ bᵢaᵢⱼcⱼ          6
  4       Σᵢ bᵢcᵢ³              4
  4       Σᵢ,ⱼ bᵢcᵢaᵢⱼcⱼ        8
  4       Σᵢ,ⱼ bᵢaᵢⱼcⱼ²         12
  4       Σᵢ,ⱼ,ₖ bᵢaᵢⱼaⱼₖcₖ     24

A further illustrative example is given by the order 6 tree t shown in Figure 234(ii), for which

Φ(t) = Σᵢ,ⱼ bᵢcᵢ²aᵢⱼcⱼ²

and γ(t) = 18. Details of the calculation of these quantities are presented in Figure 234(ii). In the left-hand diagram labels i and j are attached to the non-terminal vertices, as used in the formula for Φ(t), using the factors shown in the middle diagram. In the right-hand diagram, the factors are shown whose product gives the value of γ(t).
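The factor attached to each vertex of t equals the order of the subtree rooted at that vertex, so γ(t) can also be computed recursively. An illustrative sketch (Python, not part of the text), representing a tree as the tuple of the subtrees growing from its root:

```python
def order(t):
    # number of vertices: the root plus the vertices of all its subtrees
    return 1 + sum(order(s) for s in t)

def gamma(t):
    # gamma(t) is the product, over all vertices, of the order of the
    # subtree rooted there; recursively: order(t) times gamma of each
    # subtree growing from the root.
    g = order(t)
    for s in t:
        g *= gamma(s)
    return g

tau = ()                               # the single-vertex tree
t = (tau, tau, (tau, tau))             # the order 6 tree of Figure 234(ii)
print(gamma(t))                        # 1*1*3*1*1*6 = 18
```

For the eight trees of Table 234(I) this reproduces the values 1, 2, 3, 6, 4, 8, 12 and 24.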
For the example tree, the labelled factors give

Φ(t) = Σᵢ,ⱼ bᵢcᵢ²aᵢⱼcⱼ²,   γ(t) = 1·1·3·1·1·6 = 18.

Figure 234(ii) Calculation details for Φ(t) and γ(t) for the order 6 example tree t

235 Fourth order methods

Write the order conditions presented in the previous subsection, in the special case s = 4, assuming, because the method will be explicit, that aᵢⱼ = 0 unless i > j. This yields the conditions

b₁ + b₂ + b₃ + b₄ = 1,   (235a)
b₂c₂ + b₃c₃ + b₄c₄ = 1/2,   (235b)
b₂c₂² + b₃c₃² + b₄c₄² = 1/3,   (235c)
b₃a₃₂c₂ + b₄a₄₂c₂ + b₄a₄₃c₃ = 1/6,   (235d)
b₂c₂³ + b₃c₃³ + b₄c₄³ = 1/4,   (235e)
b₃c₃a₃₂c₂ + b₄c₄a₄₂c₂ + b₄c₄a₄₃c₃ = 1/8,   (235f)
b₃a₃₂c₂² + b₄a₄₂c₂² + b₄a₄₃c₃² = 1/12,   (235g)
b₄a₄₃a₃₂c₂ = 1/24.   (235h)

That c₄ = 1 can be shown by solving for b₂, b₃ and b₄ from equations (235b), (235c) and (235e); by then solving for a₃₂, a₄₂ and a₄₃ from (235d), (235f) and […]

[…] similar behaviour for implicit methods. For the initial value problem (201a), with output computed at x = 1, (237a) and (237b) gave slightly worse results than for corresponding explicit methods. However, for the fourth order method (237c), […]

Figure 239(ii) Runge–Kutta methods with cost corrections

[…] Adams–Moulton methods with p = k + 1 for k = 1, 2, 3. For k = 4, the Taylor expansion of (241c) takes the form

hy′(xₙ)(1 − β₀ − β₁ − β₂ − β₃ − β₄)
  + h²y″(xₙ)(−1/2 + β₁ + 2β₂ + 3β₃ + 4β₄)
  + h³y‴(xₙ)(1/6 − (1/2)(β₁ + 4β₂ + 9β₃ + 16β₄))
  + h⁴y⁽⁴⁾(xₙ)(−1/24 + (1/6)(β₁ + 8β₂ + 27β₃ + 64β₄)) + O(h⁵),

so that

C₁ = 1 − β₀ − β₁ − β₂ − β₃ − β₄,
C₂ = −1/2 + β₁ + 2β₂ + 3β₃ + 4β₄,
C₃ = 1/6 − (1/2)(β₁ + 4β₂ + 9β₃ + 16β₄),
C₄ = −1/24 + (1/6)(β₁ + 8β₂ + 27β₃ + 64β₄).

For the Adams–Bashforth methods the value of β₀ is zero; for k = 2 we also have β₃ = β₄ = 0 and we must solve the equations C₁ = C₂ = 0. This gives β₁ = 3/2 and β₂ = −1/2. For k = 3 we allow β₃ to be non-zero and we require that C₁ = C₂ = C₃ = 0. The solution of these equations is β₁ = 23/12, β₂ = −4/3, β₃ = 5/12. For k = 4, we solve C₁ = C₂ = C₃ = C₄ = 0 to find

β₁ = 55/24,   β₂ = −59/24,   β₃ = 37/24,   β₄ = −3/8.

For the Adams–Moulton methods we allow β₀ to be non-zero. For k = 1 (p = 2) we have β₂ = β₃ = β₄ = 0 and C₁ = C₂ = 0; this gives β₀ = β₁ = 1/2. In a similar manner we find for k = 2 (p = 3) that β₀ = 5/12, β₁ = 2/3, β₂ = −1/12; and for k = 3 (p = 4) that β₀ = 3/8, β₁ = 19/24, β₂ = −5/24, β₃ = 1/24.

[…] where the error constant is equal to C = 5/12. The values for the Adams–Bashforth methods are given in Table 244(I) and for the Adams–Moulton methods in Table 244(II).

Table 244(I) Coefficients and error constants for Adams–Bashforth methods

  k   β₁             β₂                β₃              β₄              β₅               β₆              β₇            β₈            C
  1   1                                                                                                                            1/2
  2   3/2            −1/2                                                                                                          5/12
  3   23/12          −4/3              5/12                                                                                        3/8
  4   55/24          −59/24            37/24           −3/8                                                                        251/720
  5   1901/720       −1387/360         109/30          −637/360        251/720                                                     95/288
  6   4277/1440      −2641/480         4991/720        −3649/720       959/480          −95/288                                    19087/60480
  7   198721/60480   −18637/2520       235183/20160    −10754/945      135713/20160     −5603/2520      19087/60480                5257/17280
  8   16083/4480     −1152169/120960   242653/13440    −296053/13440   2102243/120960   −115747/13440   32863/13440   −5257/17280  1070017/3628800

Table 244(II) Coefficients and error constants for Adams–Moulton methods

  k   β₀            β₁              β₂             β₃              β₄              β₅             β₆              β₇          C
  0   1                                                                                                                      −1/2
  1   1/2           1/2                                                                                                      −1/12
  2   5/12          2/3             −1/12                                                                                    −1/24
  3   3/8           19/24           −5/24          1/24                                                                      −19/720
  4   251/720       323/360         −11/30         53/360          −19/720                                                   −3/160
  5   95/288        1427/1440       −133/240       241/720         −173/1440       3/160                                     −863/60480
  6   19087/60480   2713/2520       −15487/20160   586/945         −6737/20160     263/2520       −863/60480                 −275/24192
  7   5257/17280    139849/120960   −4511/4480     123133/120960   −88547/120960   1537/4480      −11351/120960   275/24192  −33953/3628800

[…] severely for PEC methods. For example, the iterative starting procedure that we have used failed to converge for large stepsizes (not shown in the diagrams). This effect persisted for a larger range of stepsizes for PEC methods than was the case for PECE methods.

Figure 247(ii) Orbital calculations for […]
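Conditions (235a)–(235h) can be checked in exact arithmetic. As an illustrative sketch (Python, not part of the text), we verify them for the classical fourth order method, whose tableau is assumed here as a standard example rather than taken from this excerpt:

```python
from fractions import Fraction as F

# Tableau of the classical fourth order Runge-Kutta method
c = [F(0), F(1, 2), F(1, 2), F(1)]
b = [F(1, 6), F(1, 3), F(1, 3), F(1, 6)]
a = [[0] * 4 for _ in range(4)]
a[1][0], a[2][1], a[3][2] = F(1, 2), F(1, 2), F(1)

R = range(4)
checks = [
    sum(b) == 1,                                                        # (235a)
    sum(b[i]*c[i] for i in R) == F(1, 2),                               # (235b)
    sum(b[i]*c[i]**2 for i in R) == F(1, 3),                            # (235c)
    sum(b[i]*a[i][j]*c[j] for i in R for j in R) == F(1, 6),            # (235d)
    sum(b[i]*c[i]**3 for i in R) == F(1, 4),                            # (235e)
    sum(b[i]*c[i]*a[i][j]*c[j] for i in R for j in R) == F(1, 8),       # (235f)
    sum(b[i]*a[i][j]*c[j]**2 for i in R for j in R) == F(1, 12),        # (235g)
    sum(b[i]*a[i][j]*a[j][k]*c[k]
        for i in R for j in R for k in R) == F(1, 24),                  # (235h)
]
print(all(checks))  # True
```

Note also that this tableau has c₄ = 1, in agreement with the argument sketched above.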