BIT Numerical Mathematics (2006) 46: 861–874
DOI: 10.1007/s10543-006-0092-x
Published online: September 2006. © Springer 2006

ON FUNCTIONALLY-FITTED RUNGE–KUTTA METHODS

NGUYEN SI HOANG¹, ROGER B. SIDJE² and NGUYEN HUU CONG³

¹ Mathematics Department, Kansas State University, Manhattan, KS 66502, USA. email: nguyenhs@math.ksu.edu
² Advanced Computational Modelling Centre, Department of Mathematics, University of Queensland, Brisbane QLD 4072, Australia. email: rbs@maths.uq.edu.au
³ School of Graduate Studies, Vietnam National University, Hanoi, Vietnam. email: congnh@vnu.edu.vn

Abstract.

Functionally-fitted methods are generalizations of collocation techniques to integrate an equation exactly if its solution is a linear combination of a chosen set of basis functions. When these basis functions are chosen as the power functions, we recover classical algebraic collocation methods. This paper shows that functionally-fitted methods can be derived with less restrictive conditions than previously stated in the literature, and that other related results can be derived in a much more elegant way. The novelty in our approach is to fully retain the collocation framework without reverting back to derivations based on cumbersome Taylor series expansions.

AMS subject classification (2000): 65L05, 65L06, 65L20, 65L60.

Key words: functional fitting, collocation, Runge-Kutta.

Received September 21, 2005. Accepted May 3, 2006. Communicated by Timo Eirola.

1 Introduction

Consider a system of first-order differential equations of dimension d,

(1.1)  $y'(t) = f(t, y(t)), \qquad y(t_0) = y_0 \quad \text{(initial condition)},$
       $y : R \to R^d, \quad f : R \times R^d \to R^d, \quad t \in [t_0, t_0 + T],$

with the usual assumption that f is continuous and satisfies a Lipschitz condition on the region of interest, i.e., [t_0, t_0 + T] × R^d. Collocation techniques allow us to construct implicit integrators for solving this system. They are purposely designed to integrate any ODE system exactly if its solution is an algebraic polynomial up to a certain degree [12, 15], [1, p. 93]. It is possible to design other collocation methods that are also exact for functions other than algebraic polynomials. Examples include trigonometric polynomials and exponential functions [7, 11, 3, 4, 13, 14], as well as mixed algebraic and trigonometric polynomials [2]. Their general principle is to exactly integrate any ODE problem whose solution can be expressed as a linear combination of the corresponding set of basis functions.

In fact, recent studies [7, 8, 9] have culminated in functionally-fitted methods that allow more general basis functions. Results from these studies have established the existence of such generalized s-stage collocation RK methods for any set of linearly independent basis functions {u_i}_{i=1}^s that satisfy the condition that the Wronskian W(u_1, ..., u_s)(h) ≠ 0 for small h > 0. This condition is satisfied by many particular sets of functions such as those mentioned above. But it only guarantees the existence of the method (i.e., its coefficients) for small enough h > 0. There are very simple sets of functions that do not satisfy the condition for large h. A case in point is {cos(ωt), cos(2ωt)}, for which W(u_1, u_2)(kπ/ω) = 0 for all k ∈ Z. In practice, however, the coefficients can exist for large, specific values of h. It is therefore desirable that the theoretical characterization not be restricted to small h only. This paper extends the results of Ozawa [8] to alleviate this restriction. We establish the existence of the coefficients for a larger range of the stepsize h.
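As a quick illustration (ours, not part of the paper), the vanishing Wronskian claimed above for {cos(ωt), cos(2ωt)} can be checked symbolically. The short SymPy sketch below confirms that W(u_1, u_2)(kπ/ω) = 0 even though W is not identically zero.

```python
# Illustrative check (not from the paper): the Wronskian of {cos(w t), cos(2 w t)}
# vanishes at t = k*pi/w, so a condition requiring it to be nonzero on the whole
# interval cannot hold for this basis.
import sympy as sp

t, w = sp.symbols('t omega', positive=True)
u1, u2 = sp.cos(w*t), sp.cos(2*w*t)

# Wronskian W(u1, u2)(t) = u1*u2' - u2*u1'
W = sp.simplify(u1*sp.diff(u2, t) - u2*sp.diff(u1, t))

for k in range(4):
    print(sp.simplify(W.subs(t, k*sp.pi/w)))     # 0 for k = 0, 1, 2, 3

print(sp.simplify(W.subs(t, sp.pi/(4*w))))       # nonzero, so W is not identically zero
```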
Another significant contribution of this paper is that we fully retain the collocation framework without reverting back to derivations based on cumbersome Taylor series expansions as Ozawa did. As an end-result, we obtain functionally-fitted methods for a larger class of basis functions (Section 2). We establish that the new methods share some common properties with earlier methods, namely that the order of accuracy of a functionally-fitted s-stage RK method is at least s and at most 2s (Section 3). We conclude with some numerical results (Section 4).

2 Functionally-fitted RK methods

We shall briefly summarize previous definitions before subsequently presenting our extensions. Recall that a given s-stage RK method to solve (1.1) is defined by its Butcher tableau

$$\begin{array}{c|c} c & A \\ \hline & b^T \end{array}, \qquad A = [a_{ij}] \in R^{s\times s}, \quad b = (b_1, \dots, b_s)^T, \quad c = (c_1, \dots, c_s)^T, \quad e = (1, \dots, 1)^T,$$

in which, for an explicit RK method, A is strictly lower triangular and c_1 = 0. Using the current value y_n ≈ y(t_n) and taking an appropriate stepsize h, the next iterate y_{n+1} ≈ y(t_n + h) is computed as

$$Y_i = y_n + h \sum_{j=1}^{s} a_{ij} f(t_n + c_j h, Y_j), \qquad i = 1, \dots, s,$$
$$y_{n+1} = y_n + h \sum_{j=1}^{s} b_j f(t_n + c_j h, Y_j).$$

These relations are often represented compactly using a Kronecker tensor product notation. In the scalar case (i.e., d = 1), this becomes

$$Y = e y_n + h A f(e t_n + c h, Y) \in R^s, \qquad y_{n+1} = y_n + h b^T f(e t_n + c h, Y) \in R,$$

where Y = (Y_1, ..., Y_s)^T and f(et_n + ch, Y) = (f(t_n + c_1 h, Y_1), ..., f(t_n + c_s h, Y_s))^T. We shall use this compact notation in the remainder of our presentation.

2.1 The strong existence condition of Ozawa

Ozawa [8] defined functionally-fitted RK methods by first choosing a set of scalar basis functions {u_i(t)}_{i=1}^s and demanding that the method integrate them exactly.

Definition 2.1 (Functionally-fitted RK). An s-stage RK method is a functionally-fitted RK (or a generalized collocation RK) method with respect to the basis functions {u_i(t)}_{i=1}^s if the following relations are satisfied for all i = 1, ..., s:

(2.1)  $u_i(t + h) = u_i(t) + h\, b(t, h)^T u_i'(et + ch),$
       $u_i(et + ch) = e\, u_i(t) + h\, A(t, h)\, u_i'(et + ch).$

This immediately gives a linear system that can be solved for A and b, yielding a method with variable coefficients that generally depend on t and h. We shall refer to methods constructed this way as FRK methods. The parameters (c_i)_{i=1}^s are usually taken in [0, 1] and we assume that they are distinct. By construction, FRK methods are inherently implicit, but semi-implicit or diagonally-implicit methods can be obtained by imposing a lower triangular pattern on the RK matrix [10].

Clearly, not all arbitrary basis functions satisfy (2.1). Ozawa [8] showed that A and b are uniquely determined for small h > 0 and t ∈ [t_0, t_0 + T] if the Wronskian of the first derivatives satisfies W(u_1', ..., u_s')(t) ≠ 0, i.e.,

$$W(u_1', \dots, u_s')(t) = \begin{vmatrix} u_1'(t) & u_2'(t) & \cdots & u_s'(t) \\ u_1''(t) & u_2''(t) & \cdots & u_s''(t) \\ \vdots & & & \vdots \\ u_1^{(s)}(t) & u_2^{(s)}(t) & \cdots & u_s^{(s)}(t) \end{vmatrix} \neq 0.$$

With this existence criterion and some simplifying assumptions, he successfully studied the order of the methods. We pursued his study further in [6] to characterize the stability properties of the methods in the case of trigonometric polynomials. The existence criterion is however restricted to small enough h and, furthermore, the constraint W(u_1', ..., u_s')(t) ≠ 0 for all t ∈ [t_0, t_0 + T] is too strict and is not satisfied by a simple set such as {u_1(t), u_2(t)} = {cos(ωt), cos(2ωt)}, whose Wronskian vanishes at t = kπ/ω for all k ∈ Z. Our starting aim is therefore to lessen these requirements.
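Definition 2.1 can be read as a recipe: for a fixed pair (t, h), stacking the second relation in (2.1) over all basis functions gives a matrix equation for A(t, h), and the first relation gives one linear system for b(t, h). The sketch below is our own illustration of that computation (it is not the authors' code); the derivatives u_i' are supplied by hand for the trigonometric basis {cos(ωt), sin(ωt)} used later in Section 4.

```python
# A sketch (ours) of how (2.1) determines the variable coefficients: with
# E(t,h) and F(t,h) as in Definition 2.2 below, the relations read
#   E = h * A * F   and   u_i(t+h) - u_i(t) = h * b^T * (column i of F),
# so A = E F^{-1} / h and b solves F^T b = d / h.
import numpy as np

def frk_coefficients(t, h, c, basis, dbasis):
    """basis/dbasis: lists of callables u_i and u_i'. Returns (A, b) at (t, h)."""
    nodes = t + np.asarray(c) * h                               # t + c_j h
    E = np.column_stack([u(nodes) - u(t) for u in basis])       # E(t, h)
    F = np.column_stack([du(nodes) for du in dbasis])           # F(t, h)
    A = E @ np.linalg.inv(F) / h
    d = np.array([u(t + h) - u(t) for u in basis])
    b = np.linalg.solve(F.T, d) / h
    return A, b

w = 1.0
basis  = [lambda x: np.cos(w*x), lambda x: np.sin(w*x)]
dbasis = [lambda x: -w*np.sin(w*x), lambda x: w*np.cos(w*x)]
c = [0.5 - np.sqrt(3)/6, 0.5 + np.sqrt(3)/6]                    # Gauss points

A, b = frk_coefficients(t=0.0, h=0.1, c=c, basis=basis, dbasis=dbasis)
print(A)     # close to the classical 2-stage Gauss matrix for this small h
print(b)     # close to [1/2, 1/2]
```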
2.2 A more general condition for functionally-fitted methods

We shall now introduce a very general condition that is less stringent than Ozawa's condition. We refer to our new condition as the collocation condition. It indicates that we can build functionally-fitted methods with a wide class of basis functions.

Definition 2.2 (Collocation condition). A set of sufficiently smooth functions {u_1(t), u_2(t), ..., u_s(t)} is said to satisfy the collocation condition if the matrices

$$E(t, h) = \bigl[\, u_1(et + ch) - u_1(et), \; u_2(et + ch) - u_2(et), \; \dots, \; u_s(et + ch) - u_s(et) \,\bigr],$$
$$F(t, h) = \bigl[\, u_1'(et + ch), \; u_2'(et + ch), \; \dots, \; u_s'(et + ch) \,\bigr]$$

are such that, for any given value t_0, both E(t_0, h) and F(t_0, h) are nonsingular almost everywhere on the interval h ∈ [0, T].

Remark 2.1. We shall see later (cf. the proof of Lemma 2.2) that saying that E(t_0, h) is nonsingular is equivalent to saying that the following matrix is nonsingular:

$$\begin{pmatrix} 1 & u_1(t_0) & u_2(t_0) & \cdots & u_s(t_0) \\ 1 & u_1(t_1) & u_2(t_1) & \cdots & u_s(t_1) \\ \vdots & & & & \vdots \\ 1 & u_1(t_s) & u_2(t_s) & \cdots & u_s(t_s) \end{pmatrix}, \qquad t_i = t_0 + c_i h, \quad i = 1, \dots, s.$$

Thus this more readable matrix could alternatively be used in the definition of the collocation condition.

Remark 2.2. We can write (2.1) at t = t_0 as E(t_0, h) = h A(t_0, h) F(t_0, h). This presumes however that the matrix A(t_0, h) exists, which is only guaranteed if F(t_0, h) is nonsingular as we required. In summary, when the collocation condition is satisfied, F(t_0, h) and E(t_0, h) are nonsingular, and A(t_0, h) exists and is nonsingular too.

Remark 2.3. Using Taylor series, Ozawa showed that W(u_1', ..., u_s')(t) ≠ 0 for all t ∈ [t_0, t_0 + T] is a sufficient condition for det F(t, h) ≠ 0 when h is small enough (cf. [8, Theorem 1]). Using the same approach, we can see that W(u_1', ..., u_s')(t) ≠ 0, ∀t ∈ [t_0, t_0 + T], is also a sufficient (but not necessary) condition for det E(t, h) ≠ 0 when h is small enough. Hence, our condition is more lenient than his. Also, it is easily seen that the set {cos(ωt), cos(2ωt)} does not satisfy Ozawa's condition but satisfies our collocation condition.

The practical implication of the proposed collocation condition is that for any given t_0 we can control the stepsize h to get nonsingular F(t_0, h) and E(t_0, h). For a prescribed set of collocation functions {u_1(t), ..., u_s(t)} and a given t_0, there are at most countably many values of h such that E(t_0, h) or F(t_0, h) is singular (since the functions satisfy the collocation condition). This proves the following result.

Theorem 2.1. The coefficients of an FRK method with respect to a set of basis functions that satisfy the collocation condition are uniquely determined almost everywhere on the integration domain.

As with conventional collocation methods of Radau type, Ozawa showed that an s-stage FRK method has stage order s and an overall order at least s and at most 2s. Superconvergent methods that attain the maximum order 2s can be constructed by specifically choosing the collocation parameters (c_i)_{i=1}^s to satisfy some orthogonality condition, as is the case with Gauss–Legendre points. Furthermore, if we expand the coefficients into their Taylor series,

$$a_{ij} = a_{ij}^{(0)} + a_{ij}^{(1)} h + a_{ij}^{(2)} h^2 + \cdots, \qquad b_i = b_i^{(0)} + b_i^{(1)} h + b_i^{(2)} h^2 + \cdots,$$

the leading terms a_{ij}^{(0)} and b_i^{(0)} are constant and conform to implicit RK schemes (Ozawa [8, Corollary 1]). This shows therefore that when h → 0, the FRK method converges to the corresponding constant-coefficient collocation method defined by the c_i. This is useful in practice because we can directly use these constant coefficients when h → 0. Most of his work is obtained by using Taylor series techniques. Here, by using elegant collocation techniques, we are able to establish the order of accuracy as well as the superconvergence.
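To illustrate Remark 2.3 numerically (this check is ours, not part of the paper), one can tabulate det E(t_0, h) and det F(t_0, h) for the basis {cos(ωt), cos(2ωt)}: the determinants vanish only at isolated values of h, which is all that Theorem 2.1 needs, even though Ozawa's condition fails for this set.

```python
# Illustrative check (ours) of Definition 2.2 for the basis {cos(w t), cos(2 w t)}:
# det E(t0, h) and det F(t0, h) are generically nonzero over h in (0, T].
import numpy as np

w, t0 = 1.0, 0.0
basis  = [lambda x: np.cos(w*x),    lambda x: np.cos(2*w*x)]
dbasis = [lambda x: -w*np.sin(w*x), lambda x: -2*w*np.sin(2*w*x)]
c = np.array([0.5 - np.sqrt(3)/6, 0.5 + np.sqrt(3)/6])          # Gauss points

def detE(h):
    nodes = t0 + c*h
    return np.linalg.det(np.column_stack([u(nodes) - u(t0) for u in basis]))

def detF(h):
    nodes = t0 + c*h
    return np.linalg.det(np.column_stack([du(nodes) for du in dbasis]))

for h in np.linspace(0.5, 10.0, 7):
    print(f"h = {h:6.3f}   det E = {detE(h): .3e}   det F = {detF(h): .3e}")
# Both determinants vanish only at isolated values of h, so the coefficients
# exist almost everywhere, as Theorem 2.1 states.
```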
2.3 The collocation solution

To prove the above properties without resorting to cumbersome Taylor series techniques, we shall establish the existence of a fundamental function reminiscent of the collocation polynomial found in classical algebraic collocation techniques. We shall refer to this fundamental function as the collocation solution. Given the basis functions {u_1, ..., u_s}, let

$$H = \mathrm{Span}\{1, u_1, \dots, u_s\} = \Bigl\{ v \in C[t_0, t_0 + T] : v(t) = a_0 + \sum_{i=1}^{s} a_i u_i(t), \; a_i \in R, \; i = 0, \dots, s \Bigr\}.$$

Choose (c_i)_{i=1}^s distinct and non-zero. We call u(t) the collocation solution if it is an element of H that satisfies the differential equation at the collocation points, i.e.,

(2.2)  $u(t_0) = y_0, \qquad u'(t_0 + c_i h) = f(t_0 + c_i h, u(t_0 + c_i h)), \quad i = 1, \dots, s.$

If u(t) exists, the numerical solution after one step is taken as

(2.3)  $y_1 = u(t_0 + h).$

This indirect way of proceeding is usually referred to as the collocation method and has provided a firm foundation for establishing further properties in classical algebraic collocation techniques. As u(t) is only defined implicitly, its existence cannot be taken for granted. We first show that it can also be assumed in our context.

Lemma 2.2. Suppose that we are given s + 1 values y_0, ..., y_s and that the pair (t_0, h) is such that E(t_0, h) is nonsingular. Then there exists an interpolation function φ ∈ H such that φ(t_0) = y_0 and φ(t_0 + c_i h) = y_i, i = 1, ..., s.

Proof. Any function φ ∈ H can be represented in the form

$$\varphi(t) = a_0 + \sum_{i=1}^{s} a_i u_i(t).$$

Letting t_i = t_0 + c_i h, i = 1, ..., s, the interpolation criteria can therefore be written as

$$\begin{pmatrix} 1 & u_1(t_0) & u_2(t_0) & \cdots & u_s(t_0) \\ 1 & u_1(t_1) & u_2(t_1) & \cdots & u_s(t_1) \\ \vdots & & & & \vdots \\ 1 & u_1(t_s) & u_2(t_s) & \cdots & u_s(t_s) \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_s \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_s \end{pmatrix}.$$

For this equation to have a unique solution, the matrix on the left-hand side must be nonsingular. Subtracting its first row from the other rows, we see that its determinant is

$$\begin{vmatrix} u_1(t_1) - u_1(t_0) & u_2(t_1) - u_2(t_0) & \cdots & u_s(t_1) - u_s(t_0) \\ u_1(t_2) - u_1(t_0) & u_2(t_2) - u_2(t_0) & \cdots & u_s(t_2) - u_s(t_0) \\ \vdots & & & \vdots \\ u_1(t_s) - u_1(t_0) & u_2(t_s) - u_2(t_0) & \cdots & u_s(t_s) - u_s(t_0) \end{vmatrix} = \det E(t_0, h).$$

Hence the solution is uniquely determined under the assumption that E(t_0, h) is nonsingular.

Theorem 2.3. The collocation method (2.3) is equivalent to the s-stage FRK method with coefficients (c, b(t, h), A(t, h)) with respect to the given basis functions.

Proof. Consider the equations we have to solve in an s-stage FRK method,

$$Y_i = y_0 + h \sum_{j=1}^{s} a_{ij} f(t_0 + c_j h, Y_j), \qquad i = 1, \dots, s.$$

Suppose that the solutions of the above equations are Ȳ_i, i = 1, ..., s. Lemma 2.2 ensures that there exists an interpolation function φ(t) such that φ(t_0 + c_i h) = Ȳ_i, i = 1, ..., s, and φ(t_0) = y_0. We can write the interpolation equalities altogether using the compact notation

(2.4)  $\varphi(et_0 + ch) = e \varphi(t_0) + h A(t_0, h) f(et_0 + ch, \varphi(et_0 + ch)).$

Since φ(t) ∈ H, it can be represented in the form

(2.5)  $\varphi(t) = a_0 + \sum_{i=1}^{s} a_i u_i(t).$

From the definition of an FRK method, the coefficients (c, b(t, h), A(t, h)) satisfy the equalities

$$u_i(et_0 + ch) = u_i(et_0) + h A(t_0, h) u_i'(et_0 + ch), \qquad i = 1, \dots, s.$$

Using these relations with the fact that φ(t) is the linear combination (2.5), we are led to

(2.6)  $\varphi(et_0 + ch) = e \varphi(t_0) + h A(t_0, h) \varphi'(et_0 + ch).$

Recall from Remark 2.2 that A(t_0, h) is nonsingular. Equalities (2.4) and (2.6) imply that φ'(t_0 + c_i h) = f(t_0 + c_i h, φ(t_0 + c_i h)). Therefore if we choose u(t) = φ(t), then u(t) satisfies (2.2). The existence of Ȳ_i, i = 1, ..., s, implies the existence of a collocation solution u(t).
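The interpolation step in the proof of Lemma 2.2 is a plain (s+1)×(s+1) linear solve. The following sketch is our own illustration (the function name and the sample data are hypothetical); it builds the interpolant φ ∈ H through prescribed values exactly as in the proof, and is well defined whenever det E(t_0, h) ≠ 0.

```python
# A sketch (ours) of the interpolation behind Lemma 2.2: build phi in
# H = span{1, u_1, ..., u_s} from the s+1 values y_0, ..., y_s.
import numpy as np

def interpolant(t0, h, c, basis, ys):
    """Return phi with phi(t0) = ys[0] and phi(t0 + c_i h) = ys[i]."""
    pts = np.concatenate(([t0], t0 + np.asarray(c) * h))         # t_0, t_1, ..., t_s
    M = np.column_stack([np.ones_like(pts)] + [u(pts) for u in basis])
    a = np.linalg.solve(M, np.asarray(ys, dtype=float))          # unique iff det E(t0, h) != 0
    return lambda t: a[0] + sum(ai * u(t) for ai, u in zip(a[1:], basis))

# Example with the trigonometric basis of Section 4 (sample data is arbitrary):
basis = [np.cos, np.sin]
c = [0.5 - np.sqrt(3)/6, 0.5 + np.sqrt(3)/6]
phi = interpolant(t0=0.0, h=0.1, c=c, basis=basis, ys=[2.0, 2.1, 2.2])
print(phi(0.0), phi(c[0]*0.1), phi(c[1]*0.1))                    # reproduces 2.0, 2.1, 2.2
```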
3 Accuracy properties

With the results above, we can provide different proofs of the accuracy properties first stated by Ozawa in [8].

3.1 Order

Lemma 3.1. Suppose that f(t) ∈ C^n([a, b]), n ≥ 1. Let t_0 ∈ (a, b) and define the function g(t) by

$$g(t) = \begin{cases} \dfrac{f(t) - f(t_0)}{t - t_0} & \text{if } t \neq t_0, \\[1ex] f'(t_0) & \text{if } t = t_0. \end{cases}$$

Then g(t) is an element of C^{n-1}([a, b]).

Proof. Without loss of generality, we can assume that f(t_0) = 0. Since the case n = 1 is obvious, assume n > 1. For a fixed given t ∈ (a, b), t ≠ t_0, and 1 ≤ m < n, taking the m-th partial Taylor series expansion of f(t_0) around t with Lagrange residual, we get

$$f(t_0) = \sum_{i=0}^{m} (t_0 - t)^{m-i} \frac{f^{(m-i)}(t)}{(m-i)!} + (t_0 - t)^{m+1} \frac{f^{(m+1)}(\xi_t)}{(m+1)!},$$

where t_0 < ξ_t < t or t < ξ_t < t_0. Using the Leibniz rule on g(t) = -f(t)/(t_0 - t), we have

$$g^{(m)}(t) = -\sum_{i=0}^{m} \binom{m}{i} \Bigl(\frac{1}{t_0 - t}\Bigr)^{(i)} f^{(m-i)}(t) = -\frac{m!}{(t_0 - t)^{m+1}} \sum_{i=0}^{m} (t_0 - t)^{m-i} \frac{f^{(m-i)}(t)}{(m-i)!} = \frac{f^{(m+1)}(\xi_t)}{m+1}.$$

This implies that

$$\lim_{t \to t_0^{+}} g^{(m)}(t) = \frac{f^{(m+1)}(t_0)}{m+1} = \lim_{t \to t_0^{-}} g^{(m)}(t), \qquad m = 1, \dots, n - 1.$$

Hence g(t) is differentiable n − 1 times at t_0, and so it is differentiable n − 1 times everywhere on (a, b).

Remark 3.1. Suppose that f(t) ∈ C^{m+n}([t_0, t_0 + T]) is such that f(t_i) = 0, i = 1, ..., n. According to Lemma 3.1 above, the function f_1(t) = f(t)/(t − t_1) is a member of C^{m+n-1}([t_0, t_0 + T]). Applying the lemma repeatedly, we conclude that f_n(t) = f(t)/((t − t_1) ⋯ (t − t_n)) is a member of C^m([t_0, t_0 + T]).

Theorem 3.2. An s-stage FRK method has stage order r = s and step order at least p = s.

Proof. Let u(t) be the collocation solution corresponding to an s-stage FRK method (c, b, A). Recall that u(t) satisfies (2.2); hence the error function u'(t_0 + t) − f(t_0 + t, u(t_0 + t)) is zero whenever t = c_i h, i = 1, ..., s. Assume without loss of generality that t_0 = 0, and consider now the following function:

$$g(t) = \frac{u'(t) - f(t, u(t))}{\prod_{i=1}^{s} (t - c_i h)}.$$

This function is well defined as a result of Lemma 3.1, and we can equivalently write

$$u'(t) = f(t, u(t)) + g(t) \prod_{i=1}^{s} (t - c_i h).$$

Let R(t) = u(t) − y(t) denote the difference between the collocation solution and the exact solution. Thus R'(t) = u'(t) − y'(t), and since y'(t) = f(t, y(t)), we have

(3.1)  $R'(t) = f(t, u(t)) - f(t, y(t)) + g(t) \prod_{i=1}^{s} (t - c_i h).$

Define

(3.2)  $L(t) = \begin{cases} \dfrac{f(t, u(t)) - f(t, y(t))}{u(t) - y(t)} & \text{if } u(t) \neq y(t), \\[1ex] \dfrac{\partial f}{\partial y}(t, u(t)) & \text{if } u(t) = y(t). \end{cases}$

Because f satisfies the Lipschitz condition, there exists a constant L such that |L(t)| ≤ L, t ∈ [t_0, t_0 + h] = [0, h]. From (3.1) and (3.2) we have

$$R'(t) = L(t) R(t) + g(t) \prod_{i=1}^{s} (t - c_i h).$$

Solving this standard first-order linear differential equation for R with R(t_0) = 0, we get

(3.3)  $R(t) = e^{\int_0^t L(\xi)\,d\xi} \int_0^t e^{-\int_0^x L(\xi)\,d\xi}\, g(x) \prod_{i=1}^{s} (x - c_i h)\, dx.$

Substituting x := xh in the above equality, we obtain

$$R(t) = h^{s+1} e^{\int_0^t L(\xi)\,d\xi} \int_0^{t/h} e^{-\int_0^{xh} L(\xi)\,d\xi}\, g(xh) \prod_{i=1}^{s} (x - c_i)\, dx.$$

Hence R(t) = h^{s+1} k(t) with k(t) a continuous function (and thus bounded on the time interval). Now taking t = c_j h, we get Y_j − y(c_j h) = u(c_j h) − y(c_j h) ≡ R(c_j h) = O(h^{s+1}), j = 1, ..., s. Finally, y_1 − y(h) = u(h) − y(h) = O(h^{s+1}), which concludes the proof of the theorem.

Remark 3.2. Since

$$\frac{f(t, u) - f(t, y)}{u - y} = \frac{\partial f}{\partial y}(t, y) + O(u - y)$$

and u(t) − y(t) ≡ R(t) = O(h^{s+1}), it follows from (3.2) that

$$L(t) = \frac{\partial f}{\partial y}(t, y(t)) + O(h^{s+1}), \qquad t \in [t_0, t_0 + h] = [0, h].$$

And since ∂f/∂y(t, y) is smooth enough, we can prove the existence of a Taylor expansion

$$e^{-\int_0^t L(\xi)\,d\xi} = \alpha_0 + \alpha_1 t + \dots + \alpha_s t^s + O(t^{s+1}) + O(h^{s+1}).$$
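The limit used in the proof of Lemma 3.1 can be checked symbolically on a concrete function. The snippet below is our own sanity check (the test function f and the choice t_0 = 0 are arbitrary), not part of the paper.

```python
# Sanity check (ours) of the limit in the proof of Lemma 3.1: for
# g(t) = (f(t) - f(t0)) / (t - t0), one has lim_{t->t0} g^(m)(t) = f^(m+1)(t0) / (m+1).
import sympy as sp

t = sp.symbols('t')
f = sp.exp(2*t)                 # any smooth test function; we take t0 = 0
g = (f - f.subs(t, 0)) / t

for m in range(1, 4):
    lhs = sp.limit(sp.diff(g, t, m), t, 0)
    rhs = sp.diff(f, t, m + 1).subs(t, 0) / (m + 1)
    print(m, lhs, rhs)          # the two columns agree: 2^(m+1) / (m+1)
```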
3.2 Superconvergence

Theorem 3.3. If an s-stage FRK method has collocation parameters (c_i)_{i=1}^s that satisfy

(3.4)  $\int_0^1 \xi^j \prod_{i=1}^{s} (\xi - c_i)\, d\xi = 0, \qquad j = 0, \dots, q - 1,$

then the method has step order p = s + q.

Proof. Remarks 3.1 and 3.2 ensure the following Taylor expansion:

$$e^{-\int_0^x L(\xi)\,d\xi}\, g(x) = \sum_{j=0}^{s-1} \alpha_j x^j + O(x^s) + O(h^{s+1}).$$

Taking t = h in (3.3), together with $\bigl|\int_0^h L(\xi)\,d\xi\bigr| \le Lh$, we get

$$|R(h)| \le e^{Lh} \left| \int_0^h \Bigl( \sum_{j=0}^{s-1} \alpha_j x^j + O(x^s) + O(h^{s+1}) \Bigr) \prod_{i=1}^{s} (x - c_i h)\, dx \right|.$$

Assigning x := hξ in the above inequality, we obtain

$$|R(h)| \le e^{Lh} \sum_{j=0}^{s-1} h^{s+j+1} |\alpha_j| \left| \int_0^1 \xi^j \prod_{i=1}^{s} (\xi - c_i)\, d\xi \right| + O(h^{2s+1}).$$

The conclusion of the theorem follows directly from this inequality.

We recall that Gauss points satisfy the orthogonality condition (3.4) with q = s, and so all s-stage FRK methods based on Gauss points attain the maximum order of accuracy 2s.

4 A practical case study

As a complement to our theoretical presentation, we look at an example in detail to illustrate how functionally-fitted methods work in practice. Consider the system (see [14, p. 354])

$$y_1' = -2 y_1 + y_2 + 2 \sin t, \qquad y_2' = -(\beta + 2) y_1 + (\beta + 1)(y_2 + \sin t - \cos t).$$

This system has been used with β = −3 and β = −1000 in order to illustrate the phenomenon of stiffness. If the initial conditions are y_1(0) = 2 and y_2(0) = 3, the exact solution is β-independent and reads

$$y_1(t) = 2 e^{-t} + \sin t, \qquad y_2(t) = 2 e^{-t} + \cos t.$$

As functionally-fitted methods entail variable coefficients and are more costly, they are expected to take advantage, in compensation, of any special properties of the solution that may be known in advance. Hence we shall use the two sets of basis functions {cos(ωt), sin(ωt)} and {cos(ωt), sin(ωt), exp(−t)}.

4.1 Derivation of the method

For the first set of basis functions {cos(ωt), sin(ωt)}, the coefficients of the corresponding method are defined as follows (cf. [11]):

(4.1)  $$\begin{array}{c|cc} c_1 & \dfrac{\cos(c_2\nu) - \cos((c_2 - c_1)\nu)}{\nu \sin((c_1 - c_2)\nu)} & \dfrac{1 - \cos(c_1\nu)}{\nu \sin((c_1 - c_2)\nu)} \\[2ex] c_2 & \dfrac{\cos(c_2\nu) - 1}{\nu \sin((c_1 - c_2)\nu)} & \dfrac{\cos((c_2 - c_1)\nu) - \cos(c_1\nu)}{\nu \sin((c_1 - c_2)\nu)} \\[2ex] \hline & \dfrac{\cos(c_2\nu) - \cos((1 - c_2)\nu)}{\nu \sin((c_1 - c_2)\nu)} & \dfrac{\cos((1 - c_1)\nu) - \cos(c_1\nu)}{\nu \sin((c_1 - c_2)\nu)} \end{array} \qquad \nu \equiv \omega h.$$

We refer to this 2-stage method as FRKCS (also called TIRK2 in [6]).

For the second set of basis functions {cos(ωt), sin(ωt), exp(−t)}, the coefficients A and b of the corresponding method satisfy the equations (letting ν ≡ ωh)

$$\nu\, b^T \begin{pmatrix} \cos(c_1\nu) & \sin(c_1\nu) & e^{-c_1 h} \\ \cos(c_2\nu) & \sin(c_2\nu) & e^{-c_2 h} \\ \cos(c_3\nu) & \sin(c_3\nu) & e^{-c_3 h} \end{pmatrix} = \bigl( \sin\nu, \; 1 - \cos\nu, \; \omega - \omega e^{-h} \bigr)$$

and

$$\nu A \begin{pmatrix} \cos(c_1\nu) & \sin(c_1\nu) & e^{-c_1 h} \\ \cos(c_2\nu) & \sin(c_2\nu) & e^{-c_2 h} \\ \cos(c_3\nu) & \sin(c_3\nu) & e^{-c_3 h} \end{pmatrix} = \begin{pmatrix} \sin(c_1\nu) & 1 - \cos(c_1\nu) & \omega - \omega e^{-c_1 h} \\ \sin(c_2\nu) & 1 - \cos(c_2\nu) & \omega - \omega e^{-c_2 h} \\ \sin(c_3\nu) & 1 - \cos(c_3\nu) & \omega - \omega e^{-c_3 h} \end{pmatrix}.$$

It is possible to apply Cramer's rule to solve these systems and get the coefficients explicitly as in (4.1); however, their closed-form expressions are quite cumbersome. We chose to solve for them numerically on the fly. We refer to this 3-stage method as FRKCSE.
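As stated above, the FRKCSE coefficients are obtained by solving these 3×3 systems numerically on the fly. A minimal sketch of that step is given below; it is our own illustration (the function name and the chosen stepsize are ours), assuming the two displays above with ν = ωh.

```python
# A sketch (ours) of solving the FRKCSE coefficient systems with NumPy.
import numpy as np

def frkcse_coefficients(c, w, h):
    """A, b of the 3-stage method fitted to {cos(wt), sin(wt), exp(-t)}."""
    c = np.asarray(c, dtype=float)
    nu = w * h
    # M has rows (cos(c_i nu), sin(c_i nu), exp(-c_i h)), as in the displays above.
    M = np.column_stack([np.cos(c*nu), np.sin(c*nu), np.exp(-c*h)])
    rhs_b = np.array([np.sin(nu), 1.0 - np.cos(nu), w - w*np.exp(-h)])
    rhs_A = np.column_stack([np.sin(c*nu), 1.0 - np.cos(c*nu), w - w*np.exp(-c*h)])
    # Solve  nu * b^T M = rhs_b  and  nu * A M = rhs_A:
    b = np.linalg.solve(M.T, rhs_b) / nu
    A = np.linalg.solve(M.T, rhs_A.T).T / nu
    return A, b

# Gauss points and w = 1 as in Section 4.2:
c = [0.5 - np.sqrt(15)/10, 0.5, 0.5 + np.sqrt(15)/10]
A, b = frkcse_coefficients(c, w=1.0, h=0.25)
print(b)    # for h -> 0 these approach the classical 3-stage Gauss weights (5/18, 4/9, 5/18)
```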
4.2 Numerical experiments

Using MATLAB in double precision (machine precision ≈ 0.2 × 10^-15), we implemented FRKCS and FRKCSE with ω = 1 and Gauss points, i.e., (c_1, c_2) = (1/2 − √3/6, 1/2 + √3/6) and (c_1, c_2, c_3) = (1/2 − √15/10, 1/2, 1/2 + √15/10) respectively (cf. [5, p. 76]). In [6], we showed that those symmetric Gauss points make FRKCS A-stable for all ν ∈ (0, π]. Numerical results are presented in Table 4.1 for the interval [t_0, t_0 + T] = [0, 10]. For ease of comparison, we also reproduce the numerical results for the Optimal implicit exponentially-fitted RK (EFRK) method reported in [14]. Since EFRK used Radau points (c_1, c_2) = (1/3, 1) rather than Gauss points, we further include a corresponding variant of FRKCS that uses the same Radau points. For clarity we call it FRKCS(1/3, 1). Unlike Gauss points, these points satisfy the orthogonality condition (3.4) only with q = 1, and so both EFRK and FRKCS(1/3, 1) are of order p = s + q = 3. Using a similar technique as in [6], it can be shown that FRKCS(1/3, 1) is A-stable for all ν ∈ (0, π].

Table 4.1: Errors Δy_i = |y_i(T) − y_i^comput(T)| for the two components i = 1, 2 at T = 10.

                                      β = −3                        β = −1000
                        h       Δy_1          Δy_2           Δy_1          Δy_2
EFRK(1/3, 1)          0.100   0.021×10^-04  0.105×10^-04   0.603×10^-05  0.457×10^-05
[14, p. 354]          0.050   0.034×10^-05  0.135×10^-05   0.666×10^-06  0.191×10^-06
                      0.025   0.046×10^-06  0.169×10^-06   0.800×10^-07  0.596×10^-07

FRKCS(1/3, 1)         0.100   0.247×10^-07  0.247×10^-07   0.247×10^-07  0.247×10^-07
                      0.050   0.319×10^-08  0.319×10^-08   0.319×10^-08  0.319×10^-08
                      0.025   0.392×10^-09  0.392×10^-09   0.392×10^-09  0.392×10^-09

FRKCS/Gauss           0.500   0.134×10^-06  0.134×10^-06   0.134×10^-06  0.134×10^-06
                      0.250   0.825×10^-08  0.825×10^-08   0.825×10^-08  0.825×10^-08
                      0.125   0.514×10^-09  0.514×10^-09   0.514×10^-09  0.514×10^-09

FRKCSE/Gauss          0.500   0.000×10^-15  0.111×10^-15   0.040×10^-12  0.121×10^-12
                      0.250   0.111×10^-15  0.111×10^-15   0.229×10^-13  0.157×10^-13

Looking at the results, we see that FRKCSE is almost accurate to machine precision (particularly when β = −3). This corroborates the theory: indeed, since the closed-form expression of the solution can be represented in terms of the basis functions, FRKCSE can integrate the test equation exactly. Roundoff and approximation errors arise in practice, however, due for example to the nonlinear Newton scheme used to retrieve the stage values. Also from the table, the numerical results confirm that the 2-stage FRKCS with Gauss points has the order of accuracy p = 2s = 4 stated in Theorem 3.3. Either variant of FRKCS is not as accurate as FRKCSE, but their results are better than those of the Optimal implicit EFRK method given in [14, p. 354]. In fact, the Optimal EFRK method requires a stepsize about five times smaller than that of FRKCS(1/3, 1), and nearly ten times smaller than that of FRKCS/Gauss, to achieve the same accuracy. Intuitively this may come from the fact that exp(−t) damps as soon as t > 1, making the solution behave mostly in terms of sin(t) and cos(t), which FRKCS captures best. Figure 4.1 supports this intuition: we see that the scaled errors ||y_n − y(t_n)||_∞ / h^3 decrease as t grows, whereas similar plots in [14, p. 355] showed that the scaled errors of EFRK vary in a nearly periodic manner.

[Figure 4.1: Scaled errors ||y_n − y(t_n)||_∞ / h^3 obtained from FRKCS(1/3, 1), which has the same third-order accuracy as EFRK(1/3, 1) [14, p. 355]. There are three plots with h = 0.1, 0.05, 0.025, but these are indistinguishable. The damping suggests that y_n is a better approximation of y(t_n) over time.]

Intriguingly, observe also in the table that FRKCS produces Δy_1 = Δy_2 regardless of β. This curious coincidence is specific to the test problem and the basis functions of FRKCS. Indeed, an explicit derivation of the collocation solution for this problem showed that it can be written in the form

$$u(t) = \begin{pmatrix} a \cos t + (b + 1) \sin t + c \\ (a + 1) \cos t + b \sin t + c \end{pmatrix},$$

where the scalars a, b, and c can be obtained by writing that u(t) satisfies the problem at the collocation points. The resulting system to solve for them turns out to be independent of β, so that

$$\Delta y_n = |y(t_n) - y_n| = |y(t_n) - u(t_n)| = |a_n \cos t_n + b_n \sin t_n + c_n - 2 e^{-t_n}|$$

yields Δy_1 = Δy_2, independent of β.
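For completeness, here is a sketch (ours, not the authors' MATLAB code) of how the FRKCS/Gauss rows of Table 4.1 can be reproduced. Because the test problem is linear, the implicit stage equations reduce to one linear solve per step, written with the Kronecker-product notation of Section 2; the paper instead mentions a Newton scheme, which also covers nonlinear problems.

```python
# A sketch (ours) of one run of FRKCS/Gauss on the test problem of Section 4.
import numpy as np

beta, w = -3.0, 1.0
c = np.array([0.5 - np.sqrt(3)/6, 0.5 + np.sqrt(3)/6])      # Gauss points

# The test problem is linear: y' = J y + g(t).
J = np.array([[-2.0, 1.0], [-(beta + 2), beta + 1]])
def g(t):
    return np.array([2*np.sin(t), (beta + 1)*(np.sin(t) - np.cos(t))])

def frkcs_tableau(nu):
    """Coefficients (4.1); for this basis they depend only on nu = w*h, not on t."""
    D = nu * np.sin((c[0] - c[1]) * nu)
    A = np.array([[np.cos(c[1]*nu) - np.cos((c[1]-c[0])*nu), 1 - np.cos(c[0]*nu)],
                  [np.cos(c[1]*nu) - 1, np.cos((c[1]-c[0])*nu) - np.cos(c[0]*nu)]]) / D
    b = np.array([np.cos(c[1]*nu) - np.cos((1-c[1])*nu),
                  np.cos((1-c[0])*nu) - np.cos(c[0]*nu)]) / D
    return A, b

def step(tn, yn, h, A, b):
    # Linear stage equations: (I - h A (x) J) Y = e (x) yn + h (A (x) I) G.
    G = np.concatenate([g(tn + ci*h) for ci in c])
    M = np.eye(4) - h*np.kron(A, J)
    Y = np.linalg.solve(M, np.kron(np.ones(2), yn) + h*np.kron(A, np.eye(2)) @ G)
    FY = np.array([J @ Y[2*i:2*i+2] + g(tn + c[i]*h) for i in range(2)])
    return yn + h * b @ FY

h = 0.25
A, b = frkcs_tableau(w*h)
t, y = 0.0, np.array([2.0, 3.0])
while t < 10.0 - 1e-12:
    y = step(t, y, h, A, b)
    t += h

exact = np.array([2*np.exp(-10) + np.sin(10), 2*np.exp(-10) + np.cos(10)])
print(np.abs(y - exact))     # errors of roughly the size reported for FRKCS/Gauss in Table 4.1
```

Since the stage system is solved exactly here, the same sketch also runs for the stiff case β = −1000.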
5 Concluding remarks

This paper has studied functionally-fitted methods (also known as generalized collocation methods) by using a different approach from that of Ozawa [8]. We were able to establish accuracy properties by revealing the existence of a fundamental collocation solution. Our results apply to a larger class of basis functions, instead of being restricted only to basis functions satisfying the property that their Wronskian is nonsingular on the whole computation interval. We established the existence of the coefficients for almost all values of the stepsize h. Numerical results for a representative test problem were presented to highlight some practical aspects.

Acknowledgments.

The authors are thankful to the anonymous referees for their constructive comments that improved the paper.

REFERENCES

1. K. Burrage, Parallel and Sequential Methods for Ordinary Differential Equations, Oxford University Press, Oxford, 1995.
2. J. P. Coleman and S. C. Duxbury, Mixed collocation methods for y'' = f(t, y), J. Comput. Appl. Math., 126 (2000), pp. 47–75.
3. J. M. Franco, An embedded pair of exponentially fitted explicit Runge–Kutta methods, J. Comput. Appl. Math., 149 (2002), pp. 407–414.
4. J. M. Franco, Exponentially fitted explicit Runge–Kutta–Nyström methods, J. Comput. Appl. Math., 167 (2004), pp. 1–19.
5. E. Hairer and G. Wanner, Solving Ordinary Differential Equations II, Stiff and Differential-Algebraic Problems, Springer, Berlin, 1991.
6. N. S. Hoang, R. B. Sidje, and N. H. Cong, Analysis of trigonometric implicit Runge–Kutta methods, J. Comput. Appl. Math., to appear, 2006.
7. K. Ozawa, A four-stage implicit Runge–Kutta–Nyström method with variable coefficients for solving periodic initial value problems, Japan J. Indust. Appl. Math., 16 (1999), pp. 25–46.
8. K. Ozawa, Functional fitting Runge–Kutta method with variable coefficients, Japan J. Indust. Appl. Math., 18 (2001), pp. 105–128.
9. K. Ozawa, Functional fitting Runge–Kutta–Nyström method with variable coefficients, Japan J. Indust. Appl. Math., 19 (2002), pp. 55–85.
10. K. Ozawa, A functionally fitted three-stage explicit singly diagonally implicit Runge–Kutta method, Japan J. Indust. Appl. Math., 22 (2005), pp. 403–427.
11. B. Paternoster, Runge–Kutta(–Nyström) methods for ODEs with periodic solutions based on trigonometric polynomials, Appl. Numer. Math., 28 (1998), pp. 401–412.
12. P. J. van der Houwen, B. P. Sommeijer, and N. H. Cong, Stability of collocation-based Runge–Kutta–Nyström methods, BIT, 31 (1991), pp. 469–481.
13. G. Vanden Berghe, L. G. Ixaru, and H. De Meyer, Frequency determination and step-length control for exponentially-fitted Runge–Kutta methods, J. Comput. Appl. Math., 132 (2001), pp. 95–105.
14. G. Vanden Berghe, L. G. Ixaru, and M. Van Daele, Optimal implicit exponentially-fitted Runge–Kutta methods, Comput. Phys. Comm., 140 (2001), pp. 346–357.
15. K. Wright, Some relationships between implicit Runge–Kutta, collocation and Lanczos τ methods, and their stability properties, BIT, 10 (1970), pp. 217–227.