Approximation theory

3 Approximation Theory

In this chapter, we deal with the problem of approximation of functions. A prototype problem can be described as follows: for some function $f$, known exactly or approximately, find an approximation that has a more simply computable form, with the error of the approximation within a given error tolerance. Often the function $f$ is not known exactly. For example, if the function comes from a physical experiment, we usually have a table of function values only. Even when a closed-form expression is available, it may happen that the expression is not easily computable, for example,
$$f(x) = \int_0^x e^{-t^2}\,dt.$$
The approximating functions need to be of simple form so that it is easy to make calculations with them. The most commonly used classes of approximating functions are the polynomials, piecewise polynomial functions, and trigonometric polynomials.

We begin with a review of some important theorems on the uniform approximation of continuous functions by polynomials. We then discuss several approaches to the construction of approximating functions. In Section 3.2, we define and analyze the use of interpolation functions. In Section 3.3 we discuss best approximation in general normed spaces, and in Section 3.4 we look at best approximation in inner product spaces. Section 3.5 is on the important special case of approximations using orthogonal polynomials, and Section 3.6 introduces approximations through projection operators. The chapter concludes with a discussion in Section 3.7 of the uniform error bounds in polynomial and trigonometric approximations of smooth functions.

3.1 Approximation of continuous functions by polynomials

The classical Weierstrass Theorem is a fundamental result in the approximation of continuous functions by polynomials.

Theorem 3.1.1 (Weierstrass) Let $f \in C[a,b]$ and let $\varepsilon > 0$. Then there exists a polynomial $p(x)$ for which
$$\|f - p\|_\infty \le \varepsilon.$$

The theorem states that any continuous function $f$ can be approximated uniformly by polynomials, no matter how badly behaved $f$ may be on $[a,b]$. Several proofs of this seminal result are given in [52, Chap. 6], including an interesting constructive result using Bernstein polynomials.
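The constructive Bernstein-polynomial proof can be turned directly into a computation. The following sketch (Python with NumPy; the test function and the degrees are arbitrary choices, not taken from the text) evaluates the degree-$n$ Bernstein polynomial $B_n f(x) = \sum_{k=0}^{n} f(k/n) \binom{n}{k} x^k (1-x)^{n-k}$ on $[0,1]$ and reports the uniform error, which decreases as $n$ grows, in accordance with Theorem 3.1.1.

```python
import numpy as np
from math import comb

def bernstein_approx(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f on [0, 1] at the points x."""
    x = np.asarray(x, dtype=float)
    k = np.arange(n + 1)
    coeffs = np.array([f(i / n) for i in k])                       # f(k/n)
    basis = np.array([comb(n, i) * x**i * (1 - x)**(n - i) for i in k])
    return coeffs @ basis

# A continuous but non-differentiable test function; the uniform error
# decreases (slowly) as n grows, illustrating the Weierstrass theorem.
f = lambda x: np.abs(x - 0.5)
xs = np.linspace(0.0, 1.0, 2001)
for n in (5, 20, 80):
    err = np.max(np.abs(f(xs) - bernstein_approx(f, n, xs)))
    print(f"n = {n:3d}  uniform error = {err:.4f}")
```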
Various generalizations of this classical result (Theorem 3.1.1) can be found in the literature. The following is one such general result. Its proof can be found in several textbooks on analysis or functional analysis (e.g., [35, pp. 420–422]).

Theorem 3.1.2 (Stone–Weierstrass) Let $D \subset \mathbb{R}^d$ be a compact set. Suppose $S$ is a subspace of $C(D)$, the space of continuous functions on $D$, with the following three properties.
(a) $S$ contains all constant functions.
(b) $u, v \in S \Rightarrow uv \in S$.
(c) For each pair of points $x, y \in D$, $x \ne y$, there exists $v \in S$ such that $v(x) \ne v(y)$.
Then $S$ is dense in $C(D)$, i.e., for any $v \in C(D)$, there is a sequence $\{v_n\} \subset S$ such that $\|v - v_n\|_{C(D)} \to 0$ as $n \to \infty$.

As simple consequences of Theorem 3.1.2, we have the next two results.

Corollary 3.1.3 Let $D$ be a compact set in $\mathbb{R}^d$. Then the set of all polynomials on $D$ is dense in $C(D)$.

Corollary 3.1.4 The set of all trigonometric polynomials is dense in the space $C_p([-\pi,\pi])$ of $2\pi$-periodic continuous functions on $\mathbb{R}$.

Obviously, Theorem 3.1.1 is a particular case of Corollary 3.1.3.

Exercise 3.1.1 Prove Corollary 3.1.3 by applying Theorem 3.1.2.

Exercise 3.1.2 Prove Corollary 3.1.4 by applying Theorem 3.1.2.

Exercise 3.1.3 Prove Corollary 3.1.4 by applying Corollary 3.1.3.
Hint: Let $D$ be the unit circle in $\mathbb{R}^2$, and consider the trigonometric polynomials as the restrictions to $D$ of polynomials in two variables.

Exercise 3.1.4 Show that every continuous function $f$ defined on $[0,\infty)$ with the property $\lim_{x\to\infty} f(x) = 0$ can be approximated by a sequence of functions of the form
$$q_n(x) = \sum_{j=1}^{n} c_{n,j}\, e^{-jax},$$
where $a > 0$ is any fixed number and $\{c_{n,j}\}$ are constants.
Hint: Apply Theorem 3.1.1 to the function
$$\begin{cases} f(-\log t / a), & 0 < t \le 1, \\ 0, & t = 0. \end{cases}$$

Exercise 3.1.5 Assume $f \in C([-1,1])$ is an even function. Show that $f(x)$ can be uniformly approximated on $[-1,1]$ by a sequence of polynomials of the form $p_n(x^2)$.
Hint: Consider $f(\sqrt{x})$ for $x \in [0,1]$.

Exercise 3.1.6 Let $f \in C^m[a,b]$, $m \ge 0$ an integer. Show that there is a sequence of polynomials $\{p_n\}_{n\ge 1}$ such that $\|f - p_n\|_{C^m[a,b]} \to 0$ as $n \to \infty$.
Hint: Apply Theorem 3.1.1.

Exercise 3.1.7 Let $\Omega \subset \mathbb{R}^d$ be a domain, and $f \in C^m(\Omega)$, $m \ge 0$ an integer. Show that there is a sequence of polynomials $\{p_n\}_{n\ge 1}$ such that $\|f - p_n\|_{C^m(\Omega)} \to 0$ as $n \to \infty$.
Hint: Apply Corollary 3.1.3.

3.2 Interpolation theory

We begin by discussing the interpolation problem in an abstract setting. Let $V$ be a normed space over a field $\mathbb{K}$ of numbers ($\mathbb{R}$ or $\mathbb{C}$). Recall that the space of all linear continuous functionals on $V$ is called the dual space of $V$ and is denoted by $V'$ (see Section 2.5).

An abstract interpolation problem can be stated in the following form. Suppose $V_n$ is an $n$-dimensional subspace of $V$, with a basis $\{v_1, \dots, v_n\}$. Let $L_i \in V'$, $1 \le i \le n$, be $n$ linear continuous functionals. Given $n$ numbers $b_i \in \mathbb{K}$, $1 \le i \le n$, find $u_n \in V_n$ such that the interpolation conditions
$$L_i u_n = b_i, \quad 1 \le i \le n,$$
are satisfied.

Some questions arise naturally: Does the interpolation problem have a solution? If so, is it unique? If the interpolation function is used to approximate a given function $f(x)$, what can be said about the error in the approximation?

Definition 3.2.1 We say that the functionals $L_i$, $1 \le i \le n$, are linearly independent over $V_n$ if
$$\sum_{i=1}^{n} a_i L_i(v) = 0 \quad \forall\, v \in V_n \implies a_i = 0, \ 1 \le i \le n.$$

Lemma 3.2.2 The linear functionals $L_1, \dots, L_n$ are linearly independent over $V_n$ if and only if
$$\det(L_i v_j) = \det \begin{bmatrix} L_1 v_1 & \cdots & L_1 v_n \\ \vdots & \ddots & \vdots \\ L_n v_1 & \cdots & L_n v_n \end{bmatrix} \ne 0.$$

Proof. By definition,
$$L_1, \dots, L_n \text{ linearly independent over } V_n \iff \Bigl[ \sum_{i=1}^{n} a_i L_i(v_j) = 0, \ 1 \le j \le n \implies a_i = 0, \ 1 \le i \le n \Bigr] \iff \det(L_i v_j) \ne 0. \qquad \square$$

Theorem 3.2.3 The following statements are equivalent:
1. The interpolation problem has a unique solution.
2. The functionals $L_1, \dots, L_n$ are linearly independent over $V_n$.
3. The only element $u_n \in V_n$ satisfying $L_i u_n = 0$, $1 \le i \le n$, is $u_n = 0$.
4. For any data $\{b_i\}_{i=1}^{n}$, there exists $u_n \in V_n$ such that $L_i u_n = b_i$, $1 \le i \le n$.

Proof. From linear algebra, for a square matrix $A \in \mathbb{K}^{n\times n}$, the following statements are equivalent:
1. The system $Ax = b$ has a unique solution $x \in \mathbb{K}^n$ for any $b \in \mathbb{K}^n$.
2. $\det(A) \ne 0$.
3. If $Ax = 0$, then $x = 0$.
4. For any $b \in \mathbb{K}^n$, the system $Ax = b$ has a solution $x \in \mathbb{K}^n$.
The results of the theorem now follow from these statements and Lemma 3.2.2. $\square$

Now given $u \in V$, its interpolant $u_n = \sum_{i=1}^{n} a_i v_i$ in $V_n$ is defined by the interpolation conditions
$$L_i u_n = L_i u, \quad 1 \le i \le n.$$
The coefficients $\{a_i\}_{i=1}^{n}$ can be found from the linear system
$$\begin{pmatrix} L_1 v_1 & \cdots & L_1 v_n \\ \vdots & \ddots & \vdots \\ L_n v_1 & \cdots & L_n v_n \end{pmatrix} \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} L_1 u \\ \vdots \\ L_n u \end{pmatrix},$$
which has a unique solution if the functionals $L_1, \dots, L_n$ are linearly independent over $V_n$.

The question of an error analysis in the abstract framework is difficult. For a general discussion of such error analysis, see Davis [52, Chap. 3]. Here we only give error analysis results for certain concrete situations.
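Returning to the linear system for the coefficients $\{a_i\}$, the abstract framework is easy to try out numerically. The following sketch (Python with NumPy) uses a hypothetical instance chosen purely for illustration and not taken from the text: the subspace $V_3 = \mathrm{span}\{1, x, x^2\}$ of $C[0,1]$, the functionals $L_1 u = u(0)$, $L_2 u = u(1)$, $L_3 u = \int_0^1 u(x)\,dx$ (the integral is approximated by a trapezoid rule), and the target $u(x) = e^x$.

```python
import numpy as np

# Hypothetical instance of the abstract interpolation problem.
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]

def point_eval(t):
    return lambda v: float(v(np.asarray(t, dtype=float)))

def integral_on_01(v):
    xs = np.linspace(0.0, 1.0, 10001)
    ys = v(xs)
    return float(np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs)))  # trapezoid rule

functionals = [point_eval(0.0), point_eval(1.0), integral_on_01]

u = np.exp                                                     # function to interpolate
M = np.array([[L(v) for v in basis] for L in functionals])     # matrix (L_i v_j)
b = np.array([L(u) for L in functionals])                      # right-hand side (L_i u)
a = np.linalg.solve(M, b)   # solvable exactly when det(L_i v_j) != 0 (Lemma 3.2.2)

print("coefficients of u_3 = a1*1 + a2*x + a3*x^2:", a)
```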
3.2.1 Lagrange polynomial interpolation

Let $f$ be a continuous function defined on a finite closed interval $[a,b]$. Let
$$\Delta : a \le x_0 < x_1 < \cdots < x_n \le b$$
be a partition of the interval $[a,b]$. Choose $V = C[a,b]$, the space of continuous functions $f : [a,b] \to \mathbb{K}$, and choose $V_{n+1}$ to be $\mathcal{P}_n$, the space of polynomials of degree less than or equal to $n$. Then the Lagrange interpolant of degree $n$ of $f$ is defined by the conditions
$$p_n(x_i) = f(x_i), \quad 0 \le i \le n, \qquad p_n \in \mathcal{P}_n. \tag{3.2.1}$$
Here the interpolation linear functionals are
$$L_i f = f(x_i), \quad 0 \le i \le n.$$
If we choose the monomials $v_j(x) = x^j$ ($0 \le j \le n$) as the basis for $\mathcal{P}_n$, then it can be shown that
$$\det\bigl( (L_i v_j)_{(n+1)\times(n+1)} \bigr) = \prod_{j>i} (x_j - x_i) \ne 0. \tag{3.2.2}$$
Thus there exists a unique Lagrange interpolation polynomial. Furthermore, we have the representation formula
$$p_n(x) = \sum_{i=0}^{n} f(x_i)\, \phi_i(x), \qquad \phi_i(x) \equiv \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}, \tag{3.2.3}$$
called Lagrange's formula for the interpolation polynomial. The functions $\phi_i$ satisfy the special interpolation conditions
$$\phi_i(x_j) = \delta_{ij} = \begin{cases} 0, & i \ne j, \\ 1, & i = j. \end{cases}$$
The functions $\{\phi_i\}_{i=0}^{n}$ form a basis for $\mathcal{P}_n$, and they are often called Lagrange basis functions. See Figure 3.1 for graphs of $\{\phi_i(x)\}_{i=0}^{3}$ for $n = 3$, the case of cubic interpolation, with even spacing on the interval $[1,4]$.

[Figure 3.1. The Lagrange basis functions for $n = 3$, with nodes $\{1, 2, 3, 4\}$.]

Outside of the framework of Theorem 3.2.3, the formula (3.2.3) shows directly the existence of a solution to the Lagrange interpolation problem (3.2.1). The uniqueness result can also be proven by showing that the interpolant corresponding to the homogeneous data is zero. Let us show this. Let $p_n \in \mathcal{P}_n$ with $p_n(x_i) = 0$, $0 \le i \le n$. Then the polynomial $p_n$ must contain the factors $(x - x_i)$, $1 \le i \le n$. Since $\deg(p_n) \le n$ and $\deg \prod_{i=1}^{n} (x - x_i) = n$, we have
$$p_n(x) = c \prod_{i=1}^{n} (x - x_i)$$
for some constant $c$. Using the condition $p_n(x_0) = 0$, we see that $c = 0$ and therefore $p_n \equiv 0$. We note that by Theorem 3.2.3, this result on the unique solvability of the homogeneous problem also implies the existence of a solution.

In the above, we have indicated three methods for showing the existence and uniqueness of a solution to the interpolation problem (3.2.1). The method based on showing that the determinant of the coefficient matrix is non-zero, as in (3.2.2), can be carried out easily only in simple situations such as Lagrange polynomial interpolation. Usually it is simpler to show that the interpolant corresponding to the homogeneous data is zero, even for complicated interpolation conditions. For practical calculations, it is also useful to have a representation formula that is the analogue of (3.2.3), but such a formula is sometimes difficult to find.

These results on the existence and uniqueness of polynomial interpolation extend to the case where $\{x_0, \dots, x_n\}$ are any $n+1$ distinct points in the complex plane $\mathbb{C}$. The proofs remain the same.
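As a quick numerical check of (3.2.3), the following sketch (Python with NumPy; the interpolated function $\log x$ is an arbitrary choice, not taken from the text) evaluates the Lagrange form with the nodes $\{1, 2, 3, 4\}$ of Figure 3.1 and compares the cubic interpolant against the function itself.

```python
import numpy as np

def lagrange_eval(nodes, fvals, x):
    """Evaluate the Lagrange form (3.2.3) of the interpolant at the points x."""
    nodes = np.asarray(nodes, dtype=float)
    x = np.asarray(x, dtype=float)
    p = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        # phi_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        phi = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != i:
                phi *= (x - xj) / (xi - xj)
        p += fvals[i] * phi
    return p

# cubic interpolation (n = 3) with the evenly spaced nodes of Figure 3.1
nodes = np.array([1.0, 2.0, 3.0, 4.0])
f = np.log
xs = np.linspace(1.0, 4.0, 7)
print(lagrange_eval(nodes, f(nodes), xs))   # interpolant values
print(f(xs))                                # exact values for comparison
```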
For the interpolation error in Lagrange polynomial interpolation, we have the following result.

Proposition 3.2.4 Assume $f \in C^{n+1}[a,b]$. Then there exists a point $\xi_x$ between $\min_i\{x_i, x\}$ and $\max_i\{x_i, x\}$ such that
$$f(x) - p_n(x) = \frac{\omega_n(x)}{(n+1)!}\, f^{(n+1)}(\xi_x), \qquad \omega_n(x) = \prod_{i=0}^{n} (x - x_i). \tag{3.2.4}$$

Proof. The result is obvious if $x = x_i$, $0 \le i \le n$. Suppose $x \ne x_i$, $0 \le i \le n$, and denote $E(x) = f(x) - p_n(x)$. Consider the function
$$g(t) = E(t) - \frac{\omega_n(t)}{\omega_n(x)}\, E(x).$$
We see that $g(t)$ has $n+2$ distinct roots, namely $t = x$ and $t = x_i$, $0 \le i \le n$. By the Mean Value Theorem, $g'(t)$ has $n+1$ distinct roots. Applying the Mean Value Theorem repeatedly to derivatives of $g$, we conclude that $g^{(n+1)}(t)$ has a root $\xi_x \in (\min_i\{x_i, x\}, \max_i\{x_i, x\})$. Then
$$0 = g^{(n+1)}(\xi_x) = f^{(n+1)}(\xi_x) - \frac{(n+1)!}{\omega_n(x)}\, E(x),$$
and the result is proved. $\square$

[Figure 3.2. Examples of the polynomials $\omega_n(x)$, $n = 3, 6, 9, 12$, occurring in the interpolation error formulas (3.2.4) and (3.2.5).]

There are other ways of looking at the polynomial interpolation error. Using Newton divided differences, we can show
$$f(x) - p_n(x) = \omega_n(x)\, f[x_0, x_1, \dots, x_n, x] \tag{3.2.5}$$
with $f[x_0, x_1, \dots, x_n, x]$ a divided difference of $f$ of order $n+1$. See [13, Section 3.2] for a development of this approach, together with a general discussion of divided differences and their use in interpolation.

We should note that high degree polynomial interpolation with a uniform mesh is likely to lead to problems. Figure 3.2 contains graphs of $\omega_n(x)$ for various degrees $n$. From these graphs, it is clear that the error behaviour is worse near the endpoint nodes than near the center node points. This leads to $p_n(x)$ failing to converge for such simple functions as $f(x) = (1 + x^2)^{-1}$ on $[-5,5]$, a famous example due to Carl Runge. A further discussion of this can be found in [13, Section 3.5]. In contrast, interpolation using the zeros of Chebyshev polynomials leads to excellent results. This is discussed in Section 3.7 of this chapter; a further discussion is given in [13, p. 228].
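The Runge example is easy to reproduce numerically. The following sketch (Python with NumPy; it re-implements the Lagrange form (3.2.3) so as to be self-contained, and the chosen degrees are arbitrary) compares the maximum error of interpolation to $f(x) = (1 + x^2)^{-1}$ on $[-5,5]$ using evenly spaced nodes and using Chebyshev points: the former grows with $n$, while the latter decreases, in line with the discussion above and in Section 3.7.

```python
import numpy as np

def interp_eval(nodes, fvals, x):
    """Evaluate the Lagrange form (3.2.3) at the points x."""
    p = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        phi = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != i:
                phi *= (x - xj) / (xi - xj)
        p += fvals[i] * phi
    return p

f = lambda x: 1.0 / (1.0 + x**2)             # Runge's example
xs = np.linspace(-5.0, 5.0, 2001)
for n in (5, 10, 20):
    uniform = np.linspace(-5.0, 5.0, n + 1)
    # Chebyshev points of the first kind, mapped from [-1, 1] to [-5, 5]
    cheb = 5.0 * np.cos((2.0 * np.arange(n + 1) + 1.0) * np.pi / (2.0 * (n + 1)))
    for name, nodes in (("uniform  ", uniform), ("Chebyshev", cheb)):
        err = np.max(np.abs(f(xs) - interp_eval(nodes, f(nodes), xs)))
        print(f"n = {n:2d}  {name}  max error = {err:.3e}")
```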
3.2.2 Hermite polynomial interpolation

The main idea is to use values of both $f(x)$ and $f'(x)$ as interpolation conditions. Assume $f$ is a continuously differentiable function on a finite interval $[a,b]$. Let
$$\Delta : a \le x_1 < \cdots < x_n \le b$$
be a partition of the interval $[a,b]$. Then the Hermite interpolant $p_{2n-1} \in \mathcal{P}_{2n-1}$ of degree less than or equal to $2n-1$ of $f$ is chosen to satisfy
$$p_{2n-1}(x_i) = f(x_i), \quad p'_{2n-1}(x_i) = f'(x_i), \quad 1 \le i \le n. \tag{3.2.6}$$
We have results on Hermite interpolation similar to those for Lagrange interpolation, as given in Exercise 3.2.6.

More generally, for a given set of non-negative integers $\{m_i\}_{i=1}^{n}$, one can define a general Hermite interpolation problem as follows. Find $p_N \in \mathcal{P}_N(a,b)$, $N = \sum_{i=1}^{n} (m_i + 1) - 1$, to satisfy the interpolation conditions
$$p_N^{(j)}(x_i) = f^{(j)}(x_i), \quad 0 \le j \le m_i, \ 1 \le i \le n.$$
Again it can be shown that the interpolant with the homogeneous data is zero, so that the interpolation problem has a unique solution. Also, if $f \in C^{N+1}[a,b]$, then the error satisfies
$$f(x) - p_N(x) = \frac{1}{(N+1)!} \prod_{i=1}^{n} (x - x_i)^{m_i+1}\, f^{(N+1)}(\xi_x)$$
for some $\xi_x \in [a,b]$. For an illustration of an alternative error formula for the Hermite interpolation problem (3.2.6) that involves only the Newton divided difference of $f$, see [13, p. 161].

3.2.3 Piecewise polynomial interpolation

For simplicity, we focus our discussion on piecewise linear interpolation. Let $f \in C[a,b]$, and let
$$\Delta : a = x_0 < x_1 < \cdots < x_n = b$$
be a partition of the interval $[a,b]$. Denote $h_i = x_i - x_{i-1}$, $1 \le i \le n$, and $h = \max_{1 \le i \le n} h_i$. The piecewise linear interpolant $\Pi_\Delta f$ of $f$ is defined through the following two requirements:
• For each $i = 1, \dots, n$, $\Pi_\Delta f|_{[x_{i-1},x_i]}$ is linear.
• For $i = 0, 1, \dots, n$, $\Pi_\Delta f(x_i) = f(x_i)$.
It is easy to see that $\Pi_\Delta f$ exists and is unique, and
$$\Pi_\Delta f(x) = \frac{x_i - x}{h_i}\, f(x_{i-1}) + \frac{x - x_{i-1}}{h_i}\, f(x_i), \quad x \in [x_{i-1}, x_i], \tag{3.2.7}$$
for $1 \le i \le n$.

For a general $f \in C[a,b]$, it is relatively straightforward to show
$$\max_{x\in[a,b]} |f(x) - \Pi_\Delta f(x)| \le \omega(f, h) \tag{3.2.8}$$
with $\omega(f,h)$ the modulus of continuity of $f$ on $[a,b]$:
$$\omega(f,h) = \max_{\substack{|x-y|\le h \\ a \le x, y \le b}} |f(x) - f(y)|.$$
Suppose $f \in C^2[a,b]$. By using (3.2.4) and (3.2.7), it is straightforward to show that
$$\max_{x\in[a,b]} |f(x) - \Pi_\Delta f(x)| \le \frac{h^2}{8} \max_{x\in[a,b]} |f''(x)|. \tag{3.2.9}$$

Now instead of $f \in C^2[a,b]$, assume $f \in H^2(a,b)$, so that
$$\|f\|_{H^2(a,b)}^2 = \int_a^b \bigl( |f(x)|^2 + |f'(x)|^2 + |f''(x)|^2 \bigr)\, dx < \infty.$$
Here $H^2(a,b)$ is an example of a Sobolev space. An introductory discussion was given in Examples 1.2.27 and 1.3.7. The space $H^2(a,b)$ consists of continuously differentiable functions $f$ whose second derivative exists a.e. and belongs to $L^2(a,b)$. A detailed discussion of Sobolev spaces is given in Chapter 7. We are interested in estimating the error in the piecewise linear interpolant $\Pi_\Delta f$ and its derivative $(\Pi_\Delta f)'$ under the assumption $f \in H^2(a,b)$. We consider the error in the $L^2$ norm,
$$\|f - \Pi_\Delta f\|_{L^2(a,b)}^2 = \int_a^b |f(x) - \Pi_\Delta f(x)|^2\, dx = \sum_{i=1}^{n} \int_{x_{i-1}}^{x_i} |f(x) - \Pi_\Delta f(x)|^2\, dx.$$
For a function $\hat{f} \in H^2(0,1)$, let $\hat{\Pi}\hat{f}$ be its linear interpolant:
$$\hat{\Pi}\hat{f}(\xi) = \hat{f}(0)\,(1-\xi) + \hat{f}(1)\,\xi, \quad 0 \le \xi \le 1.$$
[...] A similar argument shows
$$\|f' - (\Pi_\Delta f)'\|_{L^2(a,b)} \le c\, h\, \|f''\|_{L^2(a,b)} \tag{3.2.12}$$
for another constant $c > 0$. In the theory of finite element interpolation, the above argument is generalized to the error analysis of piecewise polynomial interpolation of any degree in higher spatial dimensions.
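A small numerical experiment can illustrate the bound (3.2.9): halving $h$ should reduce the maximum error by roughly a factor of four. The sketch below (Python with NumPy; the test function $\sin x$ on $[0,\pi]$ is an arbitrary choice, not taken from the text) builds the piecewise linear interpolant on a uniform partition with np.interp and compares the observed error with the bound.

```python
import numpy as np

f = np.sin
a, b = 0.0, np.pi
xs = np.linspace(a, b, 20001)            # fine grid for measuring the error

for n in (4, 8, 16, 32):
    nodes = np.linspace(a, b, n + 1)     # uniform partition, h = (b - a) / n
    h = (b - a) / n
    pl = np.interp(xs, nodes, f(nodes))  # piecewise linear interpolant (3.2.7)
    err = np.max(np.abs(f(xs) - pl))
    bound = h**2 / 8.0                   # (3.2.9) with max|f''| = 1 for f = sin
    print(f"n = {n:3d}  h = {h:.4f}  error = {err:.2e}  bound = {bound:.2e}")
```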
3.2.4 Trigonometric interpolation

Another important and widely used class of approximating functions [...]

[...] unique. A subset is said to be finite-dimensional if it is a subset of a finite-dimensional subspace.

3.3.3 Existence of best approximation

Let us apply the above results to a best approximation problem. Let $u \in V$. We are interested in finding elements from $K \subset V$ which are closest to $u$ among the elements in $K$. More precisely, we are interested in the minimization problem
$$\inf_{v \in K} \|u - v\|. \tag{3.3.3}$$
[...] there exists a polynomial $f_n \in \mathcal{P}_n$ such that
$$\|f - f_n\|_{L^p(a,b)} = \inf_{q_n \in \mathcal{P}_n} \|f - q_n\|_{L^p(a,b)}.$$
Certainly, for different values of $p$, we have different best approximations $f_n$. When $p = \infty$, $f_n$ is called a "best uniform approximation of $f$".

The existence of a best approximation from a finite-dimensional subspace can also be proven directly. To do so, reformulate the minimization problem as a problem of minimizing a continuous real-valued function over a closed bounded subset of $\mathbb{R}^n$ or $\mathbb{C}^n$, and then appeal to the Heine–Borel Theorem from elementary analysis (see Theorem 1.6.2). This is left as Exercise 3.3.7.

3.3.4 Uniqueness of best approximation

Showing uniqueness requires greater attention to the properties of the norm or to the characteristics of the approximating subset $K$. Arguing as in the proof for the [...] Therefore, in $L^p(\Omega)$, $1 < p < \infty$, there can be at most one best approximation. Notice that the strict convexity of the norm is a sufficient condition for the uniqueness of a best approximation, but the condition is not necessary. For example, the norm $\|\cdot\|_{L^\infty(a,b)}$ is not strictly convex, yet there are classical results stating that a best uniform approximation is unique for important classes of approximating [...] there is a $\lambda \ge 0$ with
$$\tfrac{1}{2}(u - u_1) = \lambda\, \tfrac{1}{2}(u - u_0).$$
From this, $\|u - u_1\| = \lambda\, \|u - u_0\|$, i.e., $d = \lambda\, d$. So $\lambda = 1$, and $u_1 = u_0$.

It can be shown that an inner product space is strictly normed (see Exercise 3.3.9), and for $p \in (1,\infty)$, the space $L^p(\Omega)$ is also strictly normed. Thus, we can again conclude the uniqueness of a best approximation both in an inner product space and in $L^p(a,b)$ [...] Schwarz inequality.

3.4 Best approximations in inner product spaces, projection on closed convex sets

In an inner product space $V$, the norm $\|\cdot\|$ is induced by an associated inner product. From the discussions in the previous section, Theorem 3.3.18 and Exercise 3.3.8, or Theorem 3.3.21 and Exercise 3.3.9, a best approximation is unique. Alternatively, the uniqueness of the best approximation can be verified [...]

Lemma 3.4.1 [...] $\bar{u} \in K$ is its best approximation in $K$ if and only if it satisfies
$$(u - \bar{u}, v - \bar{u}) \le 0 \quad \forall\, v \in K. \tag{3.4.1}$$

Proof. Suppose $\bar{u} \in K$ is a best approximation of $u$. Let $v \in K$ be arbitrary. Then, since $K$ is convex, $\bar{u} + \lambda\,(v - \bar{u}) \in K$ for $\lambda \in [0,1]$. Hence the function
$$\varphi(\lambda) = \|u - [\bar{u} + \lambda\,(v - \bar{u})]\|^2, \quad \lambda \in [0,1],$$
has its minimum at $\lambda = 0$. We then have
$$0 \le \varphi'(0) = -2\,(u - \bar{u}, v - \bar{u}),$$
i.e., (3.4.1) holds. Conversely, [...]
$$\cdots + \|\bar{u} - v\|^2 \ge \|u - \bar{u}\|^2,$$
i.e., $\bar{u}$ is a best approximation of $u$ in $K$. $\square$

The geometric interpretation of (3.4.1) is that for any $v \in K$, the angle between the two vectors $u - \bar{u}$ and $v - \bar{u}$ is in the range $[\pi/2, \pi]$.

Corollary 3.4.2 Let $K$ be a convex subset of an inner product space $V$. Then for any $u \in V$, its best approximation is unique.

Proof. Assume both $u_1, u_2 \in K$ are best approximations. Then from Lemma 3.4.1, [...]

[...] in the sense of the $L^2(-1,1)$ norm, $\sum_{i=0}^{n} (u, L_i)_{L^2(-1,1)}\, L_i$ [...]

Example 3.4.9 An equally important example is the least squares approximation of a function $f \in L^2(0, 2\pi)$ by trigonometric polynomials (see (3.2.13)). Let $V_n = \mathcal{T}_n$, the set of all trigonometric polynomials of degree $\le n$. Then the least squares approximation is given by
$$p_n(x) = \tfrac{1}{2}\, a_0 + \sum_{j=1}^{n} \bigl[ a_j \cos(jx) + b_j \sin(jx) \bigr],$$
with coefficients $a_0, a_j, b_j$ [...]
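To illustrate Example 3.4.9 numerically, the sketch below (Python with NumPy) computes the Fourier coefficients of a $2\pi$-periodic function and evaluates the least squares trigonometric approximation $p_n$. The target function and the use of a simple trapezoid rule for the coefficient integrals are my own choices for illustration; the exact coefficient formulas in the text are cut off in this preview, so the standard Fourier least squares coefficients on $L^2(0,2\pi)$ are assumed here.

```python
import numpy as np

def fourier_least_squares(f, n, x, m=4096):
    """Least squares approximation of f in L^2(0, 2*pi) by a trigonometric
    polynomial of degree n, evaluated at the points x. The Fourier
    coefficients are computed with a composite trapezoid rule on m panels."""
    t = np.linspace(0.0, 2.0 * np.pi, m + 1)
    ft = f(t)
    dt = np.diff(t)

    def integral(values):
        return float(np.sum(0.5 * (values[1:] + values[:-1]) * dt))

    x = np.asarray(x, dtype=float)
    p = np.full_like(x, integral(ft) / (2.0 * np.pi))   # the a_0/2 term
    for j in range(1, n + 1):
        aj = integral(ft * np.cos(j * t)) / np.pi
        bj = integral(ft * np.sin(j * t)) / np.pi
        p += aj * np.cos(j * x) + bj * np.sin(j * x)
    return p

# A 2*pi-periodic but non-smooth target; the error decreases as n grows.
f = lambda x: np.abs(np.sin(x))
xs = np.linspace(0.0, 2.0 * np.pi, 2001)
for n in (2, 4, 8, 16):
    err = np.max(np.abs(f(xs) - fourier_least_squares(f, n, xs)))
    print(f"n = {n:2d}  max error = {err:.3e}")
```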

Ngày đăng: 16/02/2015, 19:32

TỪ KHÓA LIÊN QUAN

TÀI LIỆU CÙNG NGƯỜI DÙNG

  • Đang cập nhật ...

TÀI LIỆU LIÊN QUAN

w