$$P_n(x) = \frac{a_0}{2} + \sum_{i=1}^{n} a_i T_i(x) \qquad (1)$$

is very nearly a solution to the problem, i.e.,

$$\max_{-1 \le x \le 1}\left| f(x) - P_n(x) \right| \approx \min_{b_0,\ldots,b_n}\ \max_{-1 \le x \le 1}\left| f(x) - \sum_{i=0}^{n} b_i x^i \right|,$$

so the partial sum (1) is very nearly the best approximation to f(x).

FIG. 5.2. Chebyshev polynomials T_0(x) through T_6(x). Note that T_j has j roots in the interval (-1, 1) and that all the polynomials are bounded between -1 and +1.

(iii) Economization of power series: To describe the process of economization, which is essentially due to Lanczos, we first express the given function as a power series in x. Let the power series expansion of f(x) be

$$f(x) = A_0 + A_1 x + A_2 x^2 + \cdots + A_n x^n + \cdots, \qquad -1 \le x \le 1 \qquad (1)$$

Now convert each term of the power series into Chebyshev polynomials. We thus obtain the Chebyshev series expansion of the given continuous function f(x) on the interval [-1, 1], i.e.,

$$P_n(x) = \sum_{i=0}^{n} B_i T_i(x) \qquad (2)$$

or P_n(x) = B_0 T_0(x) + B_1 T_1(x) + B_2 T_2(x) + ... + B_n T_n(x).

Now, if the Chebyshev expansion is truncated as in (2), then

$$\max_{-1 \le x \le 1}\left| f(x) - P_n(x) \right| \le |B_{n+1}| + |B_{n+2}| + \cdots \le \varepsilon.$$

Hence P_n(x) is a good uniform approximation to f(x), in which the number of terms retained depends on the given tolerance ε. Moreover, for a large number of functions an expansion such as (2) converges more rapidly than the initial power series of the given function. Replacing each Chebyshev polynomial T_i(x) by its polynomial form and rearranging the terms, we get the required economized polynomial approximation. This process is known as 'economization of the power series', and is essentially due to Lanczos. We have thus economized the initial power series in the sense of using fewer terms to achieve almost the same accuracy.

Example 5. Economize the power series

$$\sin x = x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{x^7}{5040} + \cdots$$

to 3 significant digit accuracy.

Sol. Here, we have

$$\sin x = x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{x^7}{5040} + \cdots$$

It is required to compute sin x correct to 3 significant digits, so we truncate after 3 terms, since the truncation error after 3 terms of the given series is at most 1/5040 = 0.000198. Thus

$$\sin x \approx x - \frac{x^3}{6} + \frac{x^5}{120}$$

Now converting the powers of x into Chebyshev polynomials,

$$\sin x \approx T_1(x) - \frac{1}{24}\left[ 3T_1(x) + T_3(x) \right] + \frac{1}{1920}\left[ 10T_1(x) + 5T_3(x) + T_5(x) \right]$$

$$\Rightarrow\quad \sin x \approx \frac{169}{192} T_1(x) - \frac{5}{128} T_3(x) + \frac{1}{1920} T_5(x)$$

Again, since the truncation error after two terms of this series is at most 1/1920 = 0.00052, we have

$$\sin x \approx \frac{169}{192} T_1(x) - \frac{5}{128} T_3(x)$$

Now, to get the economized series, we put in the basic values T_1(x) = x and T_3(x) = 4x^3 - 3x:

$$\sin x \approx \frac{169}{192} x - \frac{5}{128}\left( 4x^3 - 3x \right)$$

$$\Rightarrow\quad \sin x = \frac{383}{384} x - \frac{5}{32} x^3 \quad\Rightarrow\quad \sin x \approx 0.9974x - 0.1562x^3,$$

which gives sin x to 3 significant digit accuracy and is therefore the economized series.
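The economization carried out in Example 5 can also be reproduced numerically with the Chebyshev-basis conversion routines in NumPy. The following is a minimal sketch, assuming NumPy is available; the variable names are illustrative and not part of the text.

import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

# Truncated Maclaurin series of sin x: x - x^3/6 + x^5/120 (coefficients, lowest power first)
power_coeffs = [0.0, 1.0, 0.0, -1.0 / 6, 0.0, 1.0 / 120]

# The same polynomial expressed in the Chebyshev basis T_0 .. T_5
cheb_coeffs = C.poly2cheb(power_coeffs)
# cheb_coeffs is approximately [0, 169/192, 0, -5/128, 0, 1/1920]

# Economize: drop the T_5 term, whose coefficient (about 0.00052) is within the tolerance
econ_cheb = cheb_coeffs[:4]

# Convert back to ordinary powers of x: about 0.9974*x - 0.1562*x^3
econ_poly = C.cheb2poly(econ_cheb)
print(econ_poly)

# Maximum error of the economized polynomial against sin x on [-1, 1]
xs = np.linspace(-1.0, 1.0, 2001)
print(np.max(np.abs(np.sin(xs) - P.polyval(xs, econ_poly))))   # roughly 6e-4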
Example 6. Prove that

$$\sqrt{1 - x^2}\; T_n(x) = U_{n+1}(x) - x\, U_n(x).$$

Sol. If x = cos θ, we get T_n(cos θ) = cos nθ and U_n(cos θ) = sin nθ, where U_n is the Chebyshev polynomial of the second kind of degree n over the interval [-1, 1]. Then we have to prove

sin θ cos nθ = sin (n + 1)θ - cos θ sin nθ

Now, R.H.S. = sin nθ cos θ + cos nθ sin θ - cos θ sin nθ = sin θ cos nθ = L.H.S.

Example 7. Find a uniform polynomial approximation of degree four or less to sin^(-1) x on [-1, 1], using Lanczos economization with an error tolerance of 0.05.

Sol. We have

$$\sin^{-1} x = x + \frac{x^3}{6} + \frac{3x^5}{40} + \frac{15x^7}{336} + \cdots$$

Since the error is required to be less than 0.05, and the truncation error after three terms of the given series is

$$\left| \frac{15x^7}{336} \right| \le \frac{15}{336} = 0.044643 \quad \text{on } [-1, 1],$$

we retain three terms and write

$$\sin^{-1} x \approx x + \frac{x^3}{6} + \frac{3x^5}{40} = T_1 + \frac{1}{24}\left[ 3T_1 + T_3 \right] + \frac{3}{640}\left[ 10T_1 + 5T_3 + T_5 \right] = \frac{75}{64} T_1 + \frac{25}{384} T_3 + \frac{3}{640} T_5$$

Now, the coefficient of T_5 is 3/640 = 0.0046875, and since |T_5(x)| ≤ 1 for all x ∈ [-1, 1], we have

$$\left| \frac{3}{640} T_5(x) \right| \le 0.0046875.$$

Therefore we omit this term, and this omission will not affect the desired accuracy, because the total error = 0.044642857 + 0.0046875 = 0.04933 < 0.05. Hence the required expansion for sin^(-1) x is

$$\sin^{-1} x \approx \frac{75}{64} T_1 + \frac{25}{384} T_3 = \frac{125}{128} x + \frac{25}{96} x^3.$$

Example 8. Find a uniform polynomial approximation of degree 4 or less to e^x in [-1, 1], using Lanczos economization with a tolerance of ε = 0.02.

Sol. Since

$$f(x) = e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{120} + \cdots$$

and 1/120 = 0.00833, we take f(x) up to the term x^4/24 within the tolerance ε = 0.02, so that

$$f(x) = e^x \approx 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} \qquad (1)$$

Changing each power of x in (1) in terms of Chebyshev polynomials, we get

$$e^x \approx \frac{81}{64} T_0 + \frac{9}{8} T_1 + \frac{13}{48} T_2 + \frac{1}{24} T_3 + \frac{1}{192} T_4$$

Neglecting the last term, because its magnitude 1/192 = 0.005 is less than 0.02, the required economized polynomial approximation for e^x is given by

$$e^x \approx \frac{81}{64} T_0 + \frac{9}{8} T_1 + \frac{13}{48} T_2 + \frac{1}{24} T_3 = \frac{191}{192} + x + \frac{13}{24} x^2 + \frac{1}{6} x^3$$

(iv) Least square approximation: To obtain a polynomial approximation to the given function f(x) on the interval [a, b] using least square approximation with weight function w(x), let

$$P_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \qquad (1)$$

be a polynomial of degree n, where a_0, a_1, a_2, ..., a_n are arbitrary constants. We then have

$$S(a_0, a_1, \ldots, a_n) = \int_a^b w(x)\left[ f(x) - \sum_{i=0}^{n} a_i x^i \right]^2 dx \qquad (2)$$

where w(x) > 0 is a weight function. The necessary conditions for S to be minimum are given by

$$\frac{\partial S}{\partial a_0} = -2\int_a^b w(x)\left[ f(x) - \sum_{i=0}^{n} a_i x^i \right] dx = 0,$$

$$\frac{\partial S}{\partial a_1} = -2\int_a^b w(x)\left[ f(x) - \sum_{i=0}^{n} a_i x^i \right] x\, dx = 0,$$

$$\frac{\partial S}{\partial a_2} = -2\int_a^b w(x)\left[ f(x) - \sum_{i=0}^{n} a_i x^i \right] x^2\, dx = 0,$$

...

$$\frac{\partial S}{\partial a_n} = -2\int_a^b w(x)\left[ f(x) - \sum_{i=0}^{n} a_i x^i \right] x^n\, dx = 0.$$

After simplification, we get

$$a_0 \int_a^b w(x)\, dx + a_1 \int_a^b x\, w(x)\, dx + \cdots + a_n \int_a^b x^n w(x)\, dx = \int_a^b w(x) f(x)\, dx$$

$$a_0 \int_a^b x\, w(x)\, dx + a_1 \int_a^b x^2 w(x)\, dx + \cdots + a_n \int_a^b x^{n+1} w(x)\, dx = \int_a^b x\, w(x) f(x)\, dx$$

...

$$a_0 \int_a^b x^n w(x)\, dx + a_1 \int_a^b x^{n+1} w(x)\, dx + \cdots + a_n \int_a^b x^{2n} w(x)\, dx = \int_a^b x^n w(x) f(x)\, dx$$

which are the normal equations for P_n(x). These are (n + 1) equations in the (n + 1) unknowns and are solved to obtain a_0, a_1, a_2, ..., a_n.
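The normal equations above can be assembled and solved numerically for any f, w, and degree n. The following is a minimal sketch, assuming NumPy and SciPy are available; the function name least_squares_poly and its arguments are illustrative only. The final call reproduces the quadratic fit to √x worked out in Example 9 below.

import numpy as np
from scipy.integrate import quad

def least_squares_poly(f, a, b, n, w=lambda x: 1.0):
    """Solve the normal equations for the degree-n least-squares polynomial
    approximation to f on [a, b] with weight function w."""
    # G[j, k] = integral of x^(j+k) w(x) over [a, b]
    # rhs[j]  = integral of x^j w(x) f(x) over [a, b]
    G = np.empty((n + 1, n + 1))
    rhs = np.empty(n + 1)
    for j in range(n + 1):
        rhs[j] = quad(lambda x: x ** j * w(x) * f(x), a, b)[0]
        for k in range(n + 1):
            G[j, k] = quad(lambda x: x ** (j + k) * w(x), a, b)[0]
    return np.linalg.solve(G, rhs)   # coefficients a_0, a_1, ..., a_n

# Quadratic fit to sqrt(x) on [0, 1] with w(x) = 1 (Example 9 below):
print(least_squares_poly(np.sqrt, 0.0, 1.0, 2))   # about [6/35, 48/35, -20/35]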
Example 9. Obtain a least-square quadratic approximation to the function y(x) = √x on [0, 1] w.r.t. the weight function w(x) = 1.

Sol. Let y = a_0 + a_1 x + a_2 x^2 be the required quadratic approximation; then

$$S(a_0, a_1, a_2) = \int_0^1 \left[ x^{1/2} - a_0 - a_1 x - a_2 x^2 \right]^2 dx = \text{minimum}$$

The normal equations are

$$\frac{\partial S}{\partial a_0} = -2\int_0^1 \left[ x^{1/2} - a_0 - a_1 x - a_2 x^2 \right] dx = 0$$

$$\frac{\partial S}{\partial a_1} = -2\int_0^1 \left[ x^{1/2} - a_0 - a_1 x - a_2 x^2 \right] x\, dx = 0$$

$$\frac{\partial S}{\partial a_2} = -2\int_0^1 \left[ x^{1/2} - a_0 - a_1 x - a_2 x^2 \right] x^2\, dx = 0$$

or

$$\int_0^1 x^{1/2}\, dx = a_0 \int_0^1 dx + a_1 \int_0^1 x\, dx + a_2 \int_0^1 x^2\, dx$$

$$\int_0^1 x^{3/2}\, dx = a_0 \int_0^1 x\, dx + a_1 \int_0^1 x^2\, dx + a_2 \int_0^1 x^3\, dx$$

$$\int_0^1 x^{5/2}\, dx = a_0 \int_0^1 x^2\, dx + a_1 \int_0^1 x^3\, dx + a_2 \int_0^1 x^4\, dx$$

Simplifying the above equations, we get

$$a_0 + \frac{a_1}{2} + \frac{a_2}{3} = \frac{2}{3}, \qquad \frac{a_0}{2} + \frac{a_1}{3} + \frac{a_2}{4} = \frac{2}{5}, \qquad \frac{a_0}{3} + \frac{a_1}{4} + \frac{a_2}{5} = \frac{2}{7}$$

On solving these equations, we get

$$a_0 = \frac{6}{35}, \quad a_1 = \frac{48}{35}, \quad a_2 = -\frac{20}{35}$$

Hence the required quadratic approximation to y = √x on [0, 1] is

$$y = \frac{6}{35} + \frac{48}{35} x - \frac{20}{35} x^2 \quad \text{or} \quad y = \frac{1}{35}\left( 6 + 48x - 20x^2 \right). \quad \text{Ans.}$$

Example 10. Using the Chebyshev polynomials, obtain the least square approximation of second degree for f(x) = x^4 on [-1, 1].

Sol. Let f(x) ≈ P(x) = C_0 T_0(x) + C_1 T_1(x) + C_2 T_2(x). We have

$$S(C_0, C_1, C_2) = \int_{-1}^{1} \frac{1}{\sqrt{1 - x^2}}\left[ x^4 - C_0 T_0(x) - C_1 T_1(x) - C_2 T_2(x) \right]^2 dx$$

which is to be minimum when

$$\frac{\partial S}{\partial C_0} = \frac{\partial S}{\partial C_1} = \frac{\partial S}{\partial C_2} = 0.$$

Now,

$$\frac{\partial S}{\partial C_0} = 0 \;\Rightarrow\; \int_{-1}^{1} \left[ x^4 - C_0 T_0(x) - C_1 T_1(x) - C_2 T_2(x) \right] \frac{T_0(x)}{\sqrt{1 - x^2}}\, dx = 0 \;\Rightarrow\; C_0 = \frac{1}{\pi}\int_{-1}^{1} \frac{x^4 T_0(x)}{\sqrt{1 - x^2}}\, dx = \frac{3}{8}$$

Similarly,

$$\frac{\partial S}{\partial C_1} = 0 \;\Rightarrow\; C_1 = \frac{2}{\pi}\int_{-1}^{1} \frac{x^4 T_1(x)}{\sqrt{1 - x^2}}\, dx = 0$$

and

$$\frac{\partial S}{\partial C_2} = 0 \;\Rightarrow\; C_2 = \frac{2}{\pi}\int_{-1}^{1} \frac{x^4 T_2(x)}{\sqrt{1 - x^2}}\, dx = \frac{1}{2}$$

Hence the required approximation is

$$f(x) \approx \frac{3}{8} T_0 + \frac{1}{2} T_2.$$

Example 11. The function f is defined by

$$f(x) = \frac{1}{x}\int_0^x \frac{1 - e^{-t^2}}{t^2}\, dt$$

Approximate f by a polynomial P(x) = a + bx + cx^2 such that

$$\left| f(x) - P(x) \right| \le 5 \times 10^{-3} \quad \text{for } |x| \le 1.$$

Sol. The given function is

$$f(x) = \frac{1}{x}\int_0^x \left[ 1 - \frac{t^2}{2} + \frac{t^4}{6} - \frac{t^6}{24} + \frac{t^8}{120} - \frac{t^{10}}{720} + \cdots \right] dt = 1 - \frac{x^2}{6} + \frac{x^4}{30} - \frac{x^6}{168} + \frac{x^8}{1080} - \frac{x^{10}}{7920} + \cdots \qquad (1)$$

given that ε = 5 × 10^(-3) = 0.005. Now, truncating the series (1) at x^8, we have

$$P(x) = 1 - \frac{x^2}{6} + \frac{x^4}{30} - \frac{x^6}{168} + \frac{x^8}{1080}$$

$$= T_0 - \frac{1}{12}\left( T_2 + T_0 \right) + \frac{1}{240}\left( T_4 + 4T_2 + 3T_0 \right) - \frac{1}{5376}\left( T_6 + 6T_4 + 15T_2 + 10T_0 \right) + \frac{1}{138240}\left( T_8 + 8T_6 + 28T_4 + 56T_2 + 35T_0 \right)$$

$$= 0.92755973\, T_0 - 0.06905175\, T_2 + 0.003253\, T_4 - 0.000128\, T_6 + 0.000007\, T_8 \qquad (2)$$

Truncating equation (2) at T_2, we get the required polynomial

$$P(x) = 0.92755973\, T_0 - 0.06905175\, T_2 = 0.99661148 - 0.13810350\, x^2$$

or P(x) = 0.9966 - 0.1381x^2. Ans.

Example 12. Obtain a linear polynomial approximation to the function y(x) = x^3 on [0, 1] using the least squares approximation with respect to the weight function w(x) = 1.

Sol. Let y = a_0 + a_1 x be the required linear approximation. Then

$$S(a_0, a_1) = \int_0^1 \left[ x^3 - a_0 - a_1 x \right]^2 dx = \text{minimum}$$

$$\Rightarrow\; \frac{\partial S}{\partial a_0} = -2\int_0^1 \left[ x^3 - a_0 - a_1 x \right] dx = 0 \;\Rightarrow\; a_0 \int_0^1 dx + a_1 \int_0^1 x\, dx = \int_0^1 x^3\, dx \qquad (1)$$

Similarly,

$$\frac{\partial S}{\partial a_1} = -2\int_0^1 \left[ x^3 - a_0 - a_1 x \right] x\, dx = 0 \;\Rightarrow\; a_0 \int_0^1 x\, dx + a_1 \int_0^1 x^2\, dx = \int_0^1 x^4\, dx \qquad (2)$$

From (1) and (2),

$$a_0 + \frac{a_1}{2} = \frac{1}{4}, \qquad \frac{a_0}{2} + \frac{a_1}{3} = \frac{1}{5} \;\Rightarrow\; a_0 = -\frac{1}{5}, \quad a_1 = \frac{9}{10}.$$

Hence the required linear approximation to y(x) = x^3 on [0, 1] is

$$y = \frac{9}{10} x - \frac{1}{5}.$$
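The weighted integrals that give the Chebyshev coefficients in Example 10 (and in Example 13 below) can be evaluated numerically using the substitution x = cos t, which removes the 1/√(1 - x²) singularity at the endpoints. The following is a minimal sketch, assuming SciPy is available; the function name chebyshev_ls_coeffs is illustrative only.

import numpy as np
from scipy.integrate import quad

def chebyshev_ls_coeffs(f, n):
    """Least-squares Chebyshev coefficients c_0 .. c_n of f on [-1, 1] with
    weight 1/sqrt(1 - x^2), using the substitution x = cos(t):
    c_i = (2/pi) * integral_0^pi f(cos t) cos(i t) dt, halved for i = 0."""
    coeffs = []
    for i in range(n + 1):
        val, _ = quad(lambda t: f(np.cos(t)) * np.cos(i * t), 0.0, np.pi)
        coeffs.append((1.0 if i == 0 else 2.0) * val / np.pi)
    return coeffs

# Example 10: f(x) = x^4 gives c_0 = 3/8, c_1 = 0, c_2 = 1/2
print(chebyshev_ls_coeffs(lambda x: x ** 4, 2))   # about [0.375, 0.0, 0.5]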
Example 13. Using the Chebyshev polynomials, obtain the least square approximation of second degree for x^3 + x^2 + 3 on the interval [-1, 1].

Sol. Let f(x) ≈ a_0 T_0(x) + a_1 T_1(x) + a_2 T_2(x). So

$$S(a_0, a_1, a_2) = \int_{-1}^{1} \frac{1}{\sqrt{1 - x^2}}\left[ x^3 + x^2 + 3 - a_0 T_0(x) - a_1 T_1(x) - a_2 T_2(x) \right]^2 dx$$

For S to be minimum,

$$\frac{\partial S}{\partial a_0} = \frac{\partial S}{\partial a_1} = \frac{\partial S}{\partial a_2} = 0.$$

Therefore, we have

$$\int_{-1}^{1} \left[ x^3 + x^2 + 3 - a_0 T_0(x) - a_1 T_1(x) - a_2 T_2(x) \right] \frac{T_0(x)}{\sqrt{1 - x^2}}\, dx = 0$$

$$\int_{-1}^{1} \left[ x^3 + x^2 + 3 - a_0 T_0(x) - a_1 T_1(x) - a_2 T_2(x) \right] \frac{T_1(x)}{\sqrt{1 - x^2}}\, dx = 0$$

$$\int_{-1}^{1} \left[ x^3 + x^2 + 3 - a_0 T_0(x) - a_1 T_1(x) - a_2 T_2(x) \right] \frac{T_2(x)}{\sqrt{1 - x^2}}\, dx = 0$$

Using the orthogonality conditions, we have

$$a_0 = \frac{1}{\pi}\int_{-1}^{1} \frac{(x^3 + x^2 + 3)\, T_0(x)}{\sqrt{1 - x^2}}\, dx = \frac{7}{2}$$

$$a_1 = \frac{2}{\pi}\int_{-1}^{1} \frac{(x^3 + x^2 + 3)\, T_1(x)}{\sqrt{1 - x^2}}\, dx = \frac{3}{4}$$

$$a_2 = \frac{2}{\pi}\int_{-1}^{1} \frac{(x^3 + x^2 + 3)\, T_2(x)}{\sqrt{1 - x^2}}\, dx = \frac{1}{2}$$

Hence the required least-square approximation is

$$f(x) \approx \frac{7}{2} T_0(x) + \frac{3}{4} T_1(x) + \frac{1}{2} T_2(x).$$

Minimax polynomial approximation: Let f(x) be continuous on [a, b] and let it be approximated by the polynomial P_n(x) = a_0 + a_1 x + ... + a_n x^n. Then the minimax polynomial approximation problem is to determine the constants a_0, a_1, a_2, ..., a_n such that

$$\max_{a \le x \le b} \left| \varepsilon(x) \right| \text{ is a minimum,} \qquad (1)$$

where

$$\varepsilon(x) = f(x) - P_n(x). \qquad (2)$$

If P_n(x) is the best uniform approximation in the sense of eqn. (1) and

$$E_n = \max_{a \le x \le b} \left| f(x) - P_n(x) \right|,$$

then there are at least (n + 2) points a = x_0 < x_1 < x_2 < ... < x_n < x_{n+1} = b at which the error alternates in sign, and

(i) ε(x_i) = ± E_n, i = 0, 1, 2, ..., n + 1
(ii) ε(x_i) = -ε(x_{i+1}), i = 0, 1, 2, ..., n
(iii) ε'(x_i) = 0 for i = 1, 2, ..., n.

Example 14. Obtain the Chebyshev linear polynomial approximation (uniform approximation) to the function f(x) = x^2 on [0, 1].

Sol. Let P_1(x) = a_0 + a_1 x and x_0 = 0, x_1 = α, x_2 = 1. Therefore

ε(x) = x^2 - a_0 - a_1 x.

Thus ε(x_0) = -ε(x_1), i.e. ε(0) + ε(α) = 0    (1)

and ε(x_1) = -ε(x_2), i.e. ε(α) + ε(1) = 0    (2)

and ε'(x_1) = 2x_1 - a_1 = 0    (3)

Hence from (1), -a_0 + α^2 - a_0 - a_1 α = 0

⇒ α^2 - a_1 α - 2a_0 = 0    (4)

Similarly, from (2), α^2 - a_0 - a_1 α + 1 - a_0 - a_1 = 0    (5)

⇒ α^2 - a_1 (1 + α) - 2a_0 + 1 = 0    (6)

From (3), 2α - a_1 = 0.

Solving (3), (4), and (6), we get a_0 = -1/8, α = 1/2, a_1 = 1. Therefore the required Chebyshev linear approximation is

P(x) = -1/8 + x. Ans.

Example 15. Determine the best minimax approximation to f(x) = 1/x^2 on [1, 2] with a straight line y = a_0 + a_1 x. Calculate the constants a_0 and a_1 correct to two decimals.

Sol. Given y = a_0 + a_1 x. Therefore

ε(x) = 1/x^2 - a_0 - a_1 x, with x_0 = 1, x_1 = α, x_2 = 2.

We have

ε(1) + ε(α) = 0,  ε(α) + ε(2) = 0,  and ε'(α) = 0, where ε'(x) = -2/x^3 - a_1.    (1)

Thus, from (1), we have

$$1 + \frac{1}{\alpha^2} - 2a_0 - a_1(1 + \alpha) = 0$$

$$\frac{1}{\alpha^2} + \frac{1}{4} - 2a_0 - a_1(\alpha + 2) = 0$$

$$a_1 + \frac{2}{\alpha^3} = 0$$

On solving these equations, we get a_0 = 1.66 and a_1 = -0.75. Hence the best minimax approximation is

y = 1.66 - 0.75x.
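The equioscillation conditions used in Examples 14 and 15 are easy to verify numerically. The short check below is a minimal sketch, assuming NumPy is available; nothing in it is from the text beyond the two fitted lines.

import numpy as np

# Example 14: the minimax line to x^2 on [0, 1] is P(x) = x - 1/8.
# Its error e(x) = x^2 - (x - 1/8) should equioscillate between +1/8 and -1/8
# at the three critical points x = 0, 1/2, 1 and never exceed 1/8 in magnitude.
def e(x):
    return x ** 2 - (x - 0.125)

print([e(x) for x in (0.0, 0.5, 1.0)])            # [0.125, -0.125, 0.125]
xs = np.linspace(0.0, 1.0, 10001)
print(np.max(np.abs(e(xs))))                      # 0.125

# Example 15: for 1/x^2 on [1, 2] the rounded line 1.66 - 0.75x gives errors of
# about +0.09, -0.10, +0.09 at x = 1, (8/3)^(1/3), 2; the slight mismatch comes
# from rounding a_0 to two decimals.
def err(x):
    return 1.0 / x ** 2 - (1.66 - 0.75 * x)

print(err(1.0), err((8.0 / 3.0) ** (1.0 / 3.0)), err(2.0))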
5.7.3 Spline Interpolation

Sometimes the problem of interpolation can be solved by dividing the given range of points into subintervals and using a low-order polynomial to interpolate over each subinterval. Such polynomials are called piecewise polynomials.

FIG. 5.3. A piecewise polynomial.

In the figure above, the piecewise polynomial exhibits discontinuities at some points. It is possible to construct piecewise polynomials that prevent these discontinuities at the connecting points; such piecewise polynomials are called spline functions. According to the idea of the draftsman's spline, it is required that both dy/dx and the curvature d²y/dx² are the same for the pair of cubics that join at each point. The spline must possess the given properties.