Sampling and cubature on sparse grids based on a B-spline quasi-interpolation

Dinh Dũng
Vietnam National University, Hanoi, Information Technology Institute
144 Xuan Thuy, Cau Giay, Hanoi, Vietnam
dinhzung@gmail.vn

April 22, 2014 -- Version 3.1

Abstract

Let $X_n = \{x^j\}_{j=1}^n$ be a set of $n$ points in the $d$-cube $\mathbb{I}^d := [0,1]^d$, and $\Phi_n = \{\varphi_j\}_{j=1}^n$ a family of $n$ functions on $\mathbb{I}^d$. We consider the approximate recovery of functions $f$ on $\mathbb{I}^d$ from the sampled values $f(x^1), \dots, f(x^n)$ by the linear sampling algorithm
$$ L_n(X_n, \Phi_n, f) := \sum_{j=1}^n f(x^j)\,\varphi_j. $$
The error of sampling recovery is measured in the $L_q(\mathbb{I}^d)$-norm or in the energy norm of the isotropic Sobolev space $W^\gamma_q(\mathbb{I}^d)$ for $0 < q \le \infty$ and $\gamma > 0$. The functions $f$ to be recovered are from the unit ball of Besov type spaces of anisotropic smoothness, in particular, the spaces $B^{a}_{p,\theta}$ of a nonuniform mixed smoothness $a \in \mathbb{R}^d_+$ and the spaces $B^{\alpha,\beta}_{p,\theta}$ of a "hybrid" of mixed smoothness $\alpha > 0$ and isotropic smoothness $\beta \in \mathbb{R}$. We constructed optimal linear sampling algorithms $L_n(X_n^*, \Phi_n^*, \cdot)$ on special sparse grids $X_n^*$ with a family $\Phi_n^*$ of linear combinations of integer or half-integer translated dilations of tensor products of B-splines, and computed the asymptotic order of the error of optimal recovery. This construction is based on B-spline quasi-interpolation representations of functions in $B^{a}_{p,\theta}$ and $B^{\alpha,\beta}_{p,\theta}$. As consequences we obtained the asymptotic order of optimal cubature formulas for numerical integration of functions from the unit ball of these Besov type spaces.

Keywords and Phrases: Linear sampling algorithm; Cubature formula; Sparse grid; Optimal sampling recovery; Optimal cubature; Besov type space of anisotropic smoothness; B-spline quasi-interpolation.

Mathematics Subject Classifications (2010): 41A15; 41A05; 41A25; 41A58; 41A63.

1 Introduction

The aim of the present paper is to investigate linear sampling algorithms and cubature formulas on sparse grids based on a B-spline quasi-interpolation, and their optimality for functions on the unit $d$-cube $\mathbb{I}^d := [0,1]^d$ having an anisotropic smoothness. The error of sampling recovery is measured in the norm of the space $L_q(\mathbb{I}^d)$ or in the (generalized) energy norm of the isotropic Sobolev space $W^\gamma_q(\mathbb{I}^d)$ for $0 < q \le \infty$ and $\gamma > 0$. For convenience, we occasionally use the notation $W^0_q(\mathbb{I}^d) := L_q(\mathbb{I}^d)$.

Let $X_n = \{x^j\}_{j=1}^n$ be a set of $n$ points in $\mathbb{I}^d$, and $\Phi_n = \{\varphi_j\}_{j=1}^n$ a family of $n$ functions on $\mathbb{I}^d$. If $f$ is a function on $\mathbb{I}^d$, for approximately recovering $f$ from the sampled values $f(x^1), \dots, f(x^n)$, we define the linear sampling algorithm $L_n(X_n, \Phi_n, \cdot)$ by
$$ L_n(X_n, \Phi_n, f) := \sum_{j=1}^n f(x^j)\,\varphi_j. \qquad (1.1) $$

Let $B$ be a quasi-normed space of functions on $\mathbb{I}^d$, equipped with the quasi-norm $\|\cdot\|_B$. For $f \in B$, we measure the recovery error by $\|f - L_n(X_n, \Phi_n, f)\|_B$. Let $W \subset B$. To study the optimality of linear sampling algorithms of the form (1.1) for recovering $f \in W$ from $n$ of their values, we will use the quantity
$$ r_n(W, B) := \inf_{X_n, \Phi_n} \sup_{f \in W} \|f - L_n(X_n, \Phi_n, f)\|_B. $$

A general nonlinear sampling algorithm of recovery can be defined as
$$ R_n(X_n, P_n, f) := P_n\bigl(f(x^1), \dots, f(x^n)\bigr), $$
where $P_n : \mathbb{R}^n \to B$ is a given mapping. To study optimal nonlinear sampling algorithms of recovery for $f \in W$ from $n$ of their values, we can use the quantity
$$ \varrho_n(W, B) := \inf_{P_n, X_n} \sup_{f \in W} \|f - R_n(X_n, P_n, f)\|_B. $$

Further, let $\Lambda_n = \{\lambda_j\}_{j=1}^n$ be a sequence of $n$ numbers. For a function $f \in C(\mathbb{I}^d)$, we want to approximately compute the integral
$$ I(f) := \int_{\mathbb{I}^d} f(x)\, dx $$
by the cubature formula
$$ I_n(X_n, \Lambda_n, f) := \sum_{j=1}^n \lambda_j f(x^j). $$
To study the optimality of cubature formulas for $f \in W$, we use the quantity
$$ i_n(W) := \inf_{X_n, \Lambda_n} \sup_{f \in W} \bigl|I(f) - I_n(X_n, \Lambda_n, f)\bigr|. $$

Recently, there has been increasing interest in solving approximation and numerical problems that involve functions depending on a large number $d$ of variables. The computation time typically grows exponentially in $d$, and the problems become intractable already for mild dimensions $d$ without further assumptions. This is the so-called curse of dimensionality [2]. In sampling recovery and numerical integration, a classical model for attempting to overcome it, which has been widely studied, is to impose certain mixed smoothness or more general anisotropic smoothness conditions on the function to be approximated, and to employ sparse grids for the construction of approximation algorithms for sampling recovery or integration. We refer the reader to [6, 31, 39, 40] for surveys of various aspects of this direction and the references therein.

Sparse grids for sampling recovery and numerical integration were first considered by Smolyak [43]. He constructed the following grid of dyadic points
$$ \Gamma(m) := \{2^{-k}s : k \in D(m),\ s \in I^d(k)\}, $$
where $D(m) := \{k \in \mathbb{Z}^d_+ : |k|_1 \le m\}$ and $I^d(k) := \{s \in \mathbb{Z}^d_+ : 0 \le s_i \le 2^{k_i},\ i \in [d]\}$. Here and in what follows, we use the notations: $xy := (x_1y_1, \dots, x_dy_d)$; $2^x := (2^{x_1}, \dots, 2^{x_d})$; $|x|_1 := \sum_{i=1}^d |x_i|$ for $x, y \in \mathbb{R}^d$; $[d]$ denotes the set of all natural numbers from $1$ to $d$; $x_i$ denotes the $i$th coordinate of $x \in \mathbb{R}^d$, i.e., $x := (x_1, \dots, x_d)$. Observe that $\Gamma(m)$ is a sparse grid of size $2^m m^{d-1}$, compared with the standard full grid of size $2^{dm}$.

In Approximation Theory, Temlyakov [44]--[46] and the author of the present paper [16]--[18] developed Smolyak's construction for studying the asymptotic order of $r_n(W, L_q(\mathbb{T}^d))$ for periodic Sobolev classes $W^{\alpha 1}_p$ and Nikol'skii classes $H^{\alpha 1}_p$ having mixed smoothness $\alpha$, where $1 := (1, 1, \dots, 1) \in \mathbb{R}^d$ and $\mathbb{T}^d$ denotes the $d$-dimensional torus. Recently, Sickel and Ullrich [41] have investigated $r_n(U^{\alpha 1}_{p,\theta}, L_q(\mathbb{T}^d))$ for periodic Besov classes. For non-periodic functions of mixed smoothness, linear sampling algorithms have recently been studied by Triebel [48] ($d = 2$), Dinh Dũng [23], and Sickel and Ullrich [42], using the mixed tensor product of B-splines and Smolyak grids $\Gamma(m)$. In [24], we constructed methods of approximation by arbitrary linear combinations of translates of the Korobov kernel $\kappa_{r,d}$ on Smolyak grids for functions from the Korobov space $K^r_2(\mathbb{T}^d)$, which is a reproducing kernel Hilbert space with the associated kernel $\kappa_{r,d}$. This approximation can have applications in Machine Learning. Smolyak grids are a counterpart of hyperbolic crosses, which are frequency domains of trigonometric polynomials widely used for approximations of functions with a bounded mixed smoothness. These hyperbolic cross trigonometric approximations were initiated by Babenko [1]. For further surveys and references on the topic see [15, 46], the references given there, and the more recent contributions [41, 49].

In Computational Mathematics, the sparse grid approach was first considered by Zenger [53] in parallel algorithms for the numerical solution of PDEs. Numerical integration was investigated in [30].
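To make the size comparison above concrete, here is a minimal Python sketch (not part of the paper; the helper name `smolyak_grid` and the parameter values are my own choices) that enumerates the distinct points of $\Gamma(m)$ and compares their number with the full grid of the same resolution.

```python
from itertools import product
from fractions import Fraction

def smolyak_grid(m, d):
    """Distinct points of Gamma(m) = {2^{-k} s : |k|_1 <= m, 0 <= s_i <= 2^{k_i}}."""
    pts = set()
    for k in product(range(m + 1), repeat=d):
        if sum(k) > m:
            continue
        for s in product(*(range(2**ki + 1) for ki in k)):
            pts.add(tuple(Fraction(si, 2**ki) for si, ki in zip(s, k)))
    return pts

d, m = 3, 6
# sparse grid: ~2^m m^{d-1} points, versus ~2^{dm} points for the full dyadic grid
print(len(smolyak_grid(m, d)), (2**m + 1)**d)
```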
For non-periodic functions of mixed smoothness of integer order, linear sampling algorithms on sparse grids have been investigated by Bungartz and Griebel [6], employing a multilevel basis of hierarchical Lagrangian polynomials and measuring the approximation error in the $L_2$-norm and the energy $H^1$-norm. There is a very large number of papers on sparse grids in various problems of approximation, sampling recovery and integration, with applications in data mining, mathematical finance, learning theory, the numerical solution of PDEs and stochastic PDEs, etc., too many to mention all of them. The reader can see the surveys in [6, 36, 31] and the references therein. For recent further developments and results see [34, 33, 35, 29, 4].

In the recent paper [23], we studied the problem of sampling recovery of functions on $\mathbb{I}^d$ from the non-periodic Besov class $U^{\alpha 1}_{p,\theta}$, which is defined as the unit ball of the Besov space $B^{\alpha 1}_{p,\theta}$ of functions on $\mathbb{I}^d$ having mixed smoothness $\alpha$. For various $0 < p, \theta, q \le \infty$ and $\alpha > 1/p$, we proved upper bounds for $r_n(U^{\alpha 1}_{p,\theta}, L_q(\mathbb{I}^d))$ which in some cases coincide with the asymptotic order
$$ r_n(U^{\alpha 1}_{p,\theta}, L_q(\mathbb{I}^d)) \asymp n^{-\alpha + (1/p - 1/q)_+}\, \log_2^{(d-1)b} n, \qquad (1.2) $$
where $b = b(\alpha, p, \theta, q) > 0$ and $x_+ := \max(0, x)$ for $x \in \mathbb{R}$. By using a quasi-interpolation representation of functions $f \in B^{\alpha 1}_{p,\theta}$ by mixed B-spline series, we constructed optimal linear sampling algorithms on Smolyak grids $\Gamma(m)$.

In the paper [26], we obtained the asymptotic order of optimal sampling recovery on Smolyak grids in the $L_q(\mathbb{I}^d)$-quasi-norm of functions from $U^{\alpha 1}_{p,\theta}$ for $0 < p, \theta, q \le \infty$ and $\alpha > 1/p$. It is necessary to emphasize that any sampling algorithm on Smolyak grids always gives a lower bound for the recovery error of the form of the right-hand side of (1.2), with the logarithmic term $\log_2^{(d-1)b} n$, $b > 0$. Unfortunately, when the dimension $d$ is very large and the number $n$ of samples is rather mild, the main term becomes $\log_2^{(d-1)b} n$, which grows exponentially fast in $d$. To avoid this exponential growth we impose other anisotropic smoothness conditions on the functions and construct appropriate sparse grids adapted to them. Namely, we extend the above study to functions on $\mathbb{I}^d$ from the classes $U^{a}_{p,\theta}$ for $a \in \mathbb{R}^d_+$ and $U^{\alpha,\beta}_{p,\theta}$ for $\alpha > 0$, $\beta \in \mathbb{R}$, which are defined as the unit balls of the Besov type spaces $B^{a}_{p,\theta}$ and $B^{\alpha,\beta}_{p,\theta}$. The spaces $B^{a}_{p,\theta}$ and $B^{\alpha,\beta}_{p,\theta}$ are certain sets of functions with bounded mixed modulus of smoothness. Both of them are generalizations, in different ways, of the space $B^{\alpha 1}_{p,\theta}$ of mixed smoothness $\alpha$. The space $B^{a}_{p,\theta}$ is $B^{\alpha 1}_{p,\theta}$ for $a = \alpha 1$. The space $B^{\alpha,\beta}_{p,\theta}$ is a "hybrid" of the space $B^{\alpha 1}_{p,\theta}$ and the classical isotropic Besov space $B^{\beta}_{p,\theta}$ of smoothness $\beta$ (see Section 2).

Hyperbolic cross approximations and sparse grid sampling recovery of functions from a space $B^{a}_{p,\theta}$ with uniform and nonuniform mixed smoothness $a$ were studied in a large number of works. We refer the reader to [15, 46], as well as to the recent papers [23, 25], for surveys and bibliography. These problems were extended to functions from an intersection of spaces $B^{a}_{p,\theta}$; see [14, 15, 19, 27, 28].

The space $B^{\alpha,\beta}_{p,\theta}$ is a Besov type generalization of the Sobolev type space $H^{\alpha,\beta} = B^{\alpha,\beta}_{2,2}$. The latter space has been introduced in [36] for solutions of the following elliptic variational problems: $a(u, v) = (f, v)$ for all $v \in H^\gamma$, where $f \in H^{-\gamma}$ and $a : H^\gamma \times H^\gamma \to \mathbb{R}$ is a bilinear symmetric form satisfying the conditions $a(u, v) \le \lambda \|u\|_{H^\gamma} \|v\|_{H^\gamma}$ and $a(u, u) \ge \mu \|u\|^2_{H^\gamma}$.
By use of tensor-product biorthogonal wavelet bases, the authors of these papers constructed so-called optimized sparse grid subspaces for finite element approximations of the solution having H α,β -regularity, whereas the approximation error is measured in the energy norm of isotropic Sobolev space H γ . They generalized the construction of [5] for a hyperbolic cross approximation of the solution of Poisson’s equation to elliptic variational problems. The necessary dimension nε of the optimized sparse grid space for the finite element approximation of the solution with accuracy ε does not exceed C(d, α, γ, β) ε−(α+β−γ) if α > γ−β > 0. A generalization H α,β (R3 )N of the space H α,β of functions on (R3 )N , based on isotropic 4 Sobolev smoothness of the space H 1 (R3 ), has been considered by Yserentant [50]–[52] for solutions u : (R3 )N → R : (x1 , ..., xN ) → u(x1 , ..., xN ) of the electronic Schr¨odinger equation Hu = λu for eigenvalue problem where H is the Hamilton operator. He proved that the eigenfunctions are contained in the intersection of spaces H 1,0 (R3 )N ∩ ∩ϑ 1/p and α > (γ − β)/d if β > γ, and α > γ − β if β < γ. Then we have α,β rn (Up,θ , Wqγ (Id )) n−α−(β−γ)/d+(1/p−1/q)+ , n−α−β+γ+(1/p−1/q)+ , α,β γ d n (Up,θ , Wq (I )) α,β ) in (Up,θ n−α−β/d+(1/p−1)+ , n−α−β+(1/p−1)+ , β > 0, β < 0. β > γ, β < γ; (1.5) (1.6) It is remarkable that the asymptotic orders in (1.3), (1.4) and in (1.5) for β < γ, (1.6) for β < 0, do not contain any exponent in d and moreover, do not depend on d. For a set ∆ ⊂ Zd+ , we define the grid G(∆) of points in Id by G(∆) := {2−k s : k ∈ ∆, s ∈ I d (k)}. For the quantities of optimal recovery in (1.5) and (1.3), asymptotically optimal linear sampling algorithms of the form Ln (Xn∗ , Φ∗n , f ) = f (2−k j)ψk,j (1.7) k∈∆n j∈I d (k) are constructed where Xn∗ := G(∆n ), Φ∗n := {ψk,j }k∈∆n , j∈I d (k) and ψk,j are explicitly constructed (r) as linear combinations of at most at most N B-splines Mk,s for some N ∈ N which is independent 5 (r) of k, j, m and f , Mk,s are tensor products of either integer or half integer translated dilations of a and the centered B-spline of order r. The set ∆n is specially constructed for each class of Up,θ α,β Up,θ , depending on the relationship between between 0 < p, θ, q ≤ ∞ and a or 0 < p, θ, q, τ ≤ ∞ and α, β respectively. The grids G(∆n ) are sparse and have much smaller number of sample points than the corresponding standard full grids and the Smolyak grids, but give the same error of the sampling recovery on the both latter ones. The asymptotically optimal linear sampling algorithms Ln (Xn∗ , Φ∗n , ·) are based on quasi-interpolation representations by B-spline series of functions in a and B α,β . Moreover, if the error of sampling recovery is measured in the L -norm, spaces Bp,θ 1 p,θ Ln (Xn∗ , Φ∗n , ·) generates an asymptotically optimal cubature formula (see Section 6 for detalis). We are restricted to compute the asymptotic order of rn and n with respect only to n when n → ∞, not analyzing the dependence on the number of variables d. Recently, in [25] Kolmogorov n-widths dn (U, H γ ) and ε-dimensions nε (U, H γ ) in space H γ of periodic multivariate function classes U have been investigated in high-dimensional settings, where U is the unit ball in H α,β or its subsets. We computed the accurate dependence of dn (U, H γ ) and nε (U, H γ ) as a function of two variables n, d or ε, d. 
Although n is the main parameter in the study of convergence rate with respect to n when n → ∞, the parameter d may affect this rate when d is large. It is interesting and important to investigate optimal sampling recovery and cubature in terms of rn , n and in in such high-dimensional settings. We will discuss this problem in a forthcoming paper. The present paper is organized as follows. Ω of functions with bounded mixed modIn Section 2, we give definitions of Besov type spaces Bp,θ a and B α,β , and prove theorems on quasi-interpolation ulus of smoothness, in particular, spaces Bp,θ p,θ representation by B-spline series, with relevant discrete equivalent quasi-norms. In Section 3, we a and construct linear sampling algorithms on sparse grids of the form (1.7) for function classes Up,θ α,β , and prove upper bounds for the error of recovery by these algorithms. In Section 4, we prove Up,θ the sparsity and asymptotic optimality of the linear sampling algorithms constructed in Section α,β α,β d d a , L (Id )), r (U a , L (Id )) and 3, for the quantities n (Up,θ q n (Up,θ , Lq (I )), rn (Up,θ , Lq (I )), and q n p,θ establish their asymptotic orders. In Section 5, we extend the investigations of Sections 3 and 4 α,β α,β , Wqγ (Id )) for γ > 0. In Section 6, we discuss the , Wqγ (Id )) and n (Up,θ to the quantities rn (Up,θ a ) and i (U α,β ). problem of optimal cubature formulas for numerical integration in terms of in (Up,θ n p,θ 2 Function spaces and B-spline quasi-interpolation representations Ω of functions with bounded mixed Let us first introduce fractal Sobolev space Wqγ (Id ), spaces Bp,θ a and B α,β of functions with anisotropic smoothmodulus of smoothness and Besov type spaces Bp,θ p,θ ness and give necessary knowledge of them, especially B-spline quasi-interpolation representations in Besov type spaces. Let G be a domain in R. For univariate functions f on G the rth difference operator ∆rh is 6 defined by r ∆rh (f, x) (−1)r−j := j=0 r f (x + jh). j If e is any subset of [d], for multivariate functions on Gd the mixed (r, e)th difference operator ∆r,e h is defined by ∆r,e ∆rhi , ∆r,∅ h := h = I, i∈e ∆rhi where the univariate operator is applied to the univariate function f by considering f as a function of variable xi with the other variables held fixed. Denote by Lp (Gd ) the quasi-normed space of functions on Gd with the pth integral quasi-norm · p,Gd for 0 < p < ∞, and the sup norm · ∞,Gd for p = ∞. Let ωre (f, t)p,Gd := sup |hi | 0, t > 0, t ∈ Rd+ , (2.1) Ω(t) ≤ CΩ(t ), t ≤ t , t, t ∈ Rd+ , (2.2) and for a fixed γ ∈ Rd+ , γ ≥ 1, there is a constant C = C (γ) such that for every λ ∈ Rd+ with λ ≤ γ, Ω(λ t) ≤ C Ω(t), t ∈ Rd+ . (2.3) For e ⊂ [d], we define the function Ωe : Rd+ → R+ by Ωe (t) := Ω(te ), where te ∈ Rd+ is given by tej = tj if j ∈ e, and tej = 1 otherwise. If 0 < p, θ ≤ ∞, we introduce the quasi-semi-norm |f |B Ω p,θ (e) |f |B Ω p,θ (e) (in particular, |f |B Ω p,θ (∅) := 1/θ Id = f for functions f ∈ Lp (Gd ) by t−1 i dt {ωre (f, t)p,Gd /Ωe (t)}θ , θ < ∞, (2.4) i∈e sup ωre (f, t)p,Gd /Ωe (t), t∈Id p,Gd ). 7 θ = ∞, Another alternative definition of the quasi-semi-norm |f |B Ω p,θ (e) is obtained by replacing the integral or supremum over Id in (2.4) by one over Rd+ . In what follows, we preliminarily assume that the function Ω satisfies the conditions (2.1)–(2.3). Ω (Gd ) is defined as the set of functions f ∈ For 0 < p, θ ≤ ∞, the Besov type space Bp,θ Lp (Id )(Gd ) for which the quasi-norm f Ω (Gd ) Bp,θ |f |B Ω := p,θ (e) e⊂[d] is finite. 
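For concreteness, the following short Python sketch (my own illustration, not part of the paper; coordinates are indexed from 0 in the code) evaluates the mixed $(r,e)$-th difference $\Delta^{r,e}_h(f,x)$ by applying the univariate $r$-th difference coordinate-wise. As a sanity check, it vanishes on a function with no genuine joint dependence on the coordinates in $e$, while for $f(x) = x_1^2 x_2^2$ and $r = 2$ it returns $4h_1^2h_2^2$.

```python
import numpy as np
from math import comb

def mixed_diff(f, x, h, r, e):
    """Mixed (r, e)-th difference Delta^{r,e}_h f(x): apply the univariate r-th
    difference with step h[i] in every coordinate i of e (identity for e = [])."""
    if not e:
        return f(x)
    i, rest = e[0], e[1:]
    total = 0.0
    for j in range(r + 1):
        xs = np.array(x, dtype=float)      # copy of the evaluation point
        xs[i] += j * h[i]                  # shift in coordinate i
        total += (-1) ** (r - j) * comb(r, j) * mixed_diff(f, xs, h, r, rest)
    return total

x, h, r = np.array([0.3, 0.4]), [0.01, 0.02], 2
print(mixed_diff(lambda z: z[0]**2 + z[1]**2, x, h, r, e=[0, 1]))  # ~0: no joint dependence
print(mixed_diff(lambda z: z[0]**2 * z[1]**2, x, h, r, e=[0, 1]))  # 4*h1^2*h2^2 = 1.6e-07
```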
Since in the present paper we consider only functions defined on Id , for simplicity we somewhere drop the symbol Id in the above notations. We use the notations: An (f ) Bn (f ) if An (f ) ≤ CBn (f ) with C an absolute constant not depending on n and/or f ∈ W, and An (f ) Bn (f ) if An (f ) Bn (f ) and Bn (f ) An (f ). Put d d Z+ := {s ∈ Z : s ≥ 0} and Z+ (e) := {s ∈ Z+ : si = 0, i ∈ / e} for a set e ⊂ [d]. Lemma 2.1 Let 0 < p, θ ≤ ∞. Then we have the following quasi-norm equivalence f Ω Bp,θ ωre (f, 2−k )p /Ω(2−k ) B1 (f ) := e⊂[d] θ 1/θ , k∈Zd+ (e) with the corresponding change to sup when θ = ∞. Proof. This lemma follows from properties of mixed modulus of smoothness ωre (f, t)p and the properties (2.1)–(2.3) of the function Ω. a and B α,β of functions with anisotropic smoothness as Let us define the Besov type spaces Bp,θ p,θ Ω . particular cases of Bp,θ a of mixed smoothness a as follows. For a ∈ Rd+ , we define the space Bp,θ d tai i , t ∈ Rd+ . a Ω Bp,θ := Bp,θ , where Ω(t) = (2.5) i=1 α,β Let α ∈ R+ and β ∈ R with α + β > 0. We define the space Bp,θ as follows. α,β Ω Bp,θ := Bp,θ , where Ω(t) = 8 d tαi inf tβj , i=1 β ≥ 0, j∈[d] (2.6) d tαi sup tβj , i=1 j∈[d] β < 0. The definition (2.6) seems different for β > 0 and β < 0. However, it can be well interpreted in terms of the equivalent discrete quasi-norm B1 (f ) in Lemma 2.1. Indeed, the function Ω in (2.6) for both β ≥ 0 and β < 0 satisfies the assumptions (2.1)–(2.3) and moreover, 1/Ω(2−x ) = 2α|x|1 +β|x|∞ , x ∈ Rd+ , where |x|∞ := maxj∈[d] |xj | for x ∈ Rd . Hence, by Lemma 2.1 we have the following quasi-norm equivalence f 2α|k|1 +β|k|∞ ωre (f, 2−k )p α,β Bp,θ e⊂[d] θ 1/θ (2.7) k∈Zd+ (e) α,β with the corresponding change to sup when θ = ∞. The notation Bp,θ becomes explicitly reasonα,β able if we take the right side of (2.7) as a definition of the quasi-norm of the space Bp,θ . α,β The definition of Bp,θ includes the well known classical isotropic Besov space and its mixed α,β α1 for β = 0, and B α,β = B β for α = 0, where smoothness modifications. Thus, we have Bp,θ = Bp,θ p,θ p,θ β Bp,θ is the classical isotropic Besov space of smoothness β. From Lemma 2.1 and (2.7) we derive that for α ≥ 0 and β ≥ 0, d α,β Bp,θ j a Bp,θ , = j=1 and for α + β ≥ 0 and β < 0, d j α,β = Bp,θ a Bp,θ , (2.8) j=1 where aj = α1 + βej , ej is the jth unit vector in Rd and d d j j a Bp,θ := {f ∈ Lp (Id ) : f = j=1 a fj , fj ∈ Bp,θ }. j=1 Let us recall a notion of fractal isotropic Sobolev space (Bessel potential space) Wqγ (Id ) for γ > 0 and 0 < q ≤ ∞. We refer the reader to the books [3, 47] for details on this space. Denote by F the Fourier transform in distributional sense for local integrable functions on Rd . The space Wqγ (Rd ) is defined as Wqγ (Rd ) := f ∈ Lq (Rd ) : F −1 1 + |y|2 with the quasi-norm f Wqγ (Rd ) := F −1 1 + |y|2 γ 2 γ 2 Ff ∈ Lq (Rd ) Ff q,Rd . The space Wqγ (Id ) is the set of restrictions of functions from Wqγ (Rd ) to Id equipped with the quasi-norm f Wqγ (Id ) := inf g Wqγ (Rd ) : g ∈ Wqγ (Rd ), g|Id = f . For q = 2 and γ ∈ N, the space Wqγ (Id ) coincides with the classical isotropic Sobolev space H γ (Id ). 9 Next, we introduce quasi-interpolation operators for functions on Id . For a given natural number r, let M be the centered B-spline of order r with support [−r/2, r/2] and knots at the points −r/2, −r/2 + 1, ..., r/2 − 1, r/2. Let Λ = {λ(s)}j∈P (µ) be a given finite even sequence, i.e., λ(−j) = λ(j), where P (µ) := {j ∈ Z : |j| ≤ µ} and µ ≥ r/2 − 1. 
We define the linear operator Q for functions f on R by Q(f, x) := Λ(f, s)M (x − s), (2.9) s∈Z where λ(j)f (s − j). Λ(f, s) := (2.10) j∈P (µ) The operator Q is local and bounded in C(R) (see [8, p. 100–109]), where C(G) denotes the normed space of bounded continuous functions on G with sup-norm · C(G) . Moreover, Q(f ) C(R) ≤ Λ f C(R) for each f ∈ C(R), where Λ = j∈P (µ) |λ(j)|. An operator Q of the form (2.9)–(2.10) reproducing Pr−1 , is called a quasi-interpolation operator in C(R). There are many ways to construct quasi-interpolation operators. A method of construction via Neumann series was suggested by Chui and Diamond [9] (see also [8, p. 100–109]). A necessary and sufficient condition of reproducing Pr−1 for operators Q of the form (2.9)–(2.10) with even r and µ ≥ r/2, was established in [7]. De Bore and Fix [10] introduced another quasi-interpolation operator based on the values of derivatives. We give some examples of quasi-interpolation operator. The simplest example is a piecewise constant quasi-interpolation operator f (s)M (x − s), Q(f, x) := s∈Z where M is the symmetric piecewise constant B-spline with support [−1/2, 1/2] and knots at the half integer points −1/2, 1/2. A piecewise linear quasi-interpolation operator is defined as f (s)M (x − s), Q(f, x) := s∈Z where M is the symmetric piecewise linear B-spline with support [−1, 1] and knots at the integer points −1, 0, 1. It is related to the classical Faber-Schauder basis of the hat functions (see, e.g., [23], [48], for details). A quadric quasi-interpolation operator is defined by Q(f, x) := s∈Z 1 {−f (s − 1) + 10f (s) − f (s + 1)}M (x − s), 8 where M is the symmetric quadric B-spline with support [−3/2, 3/2] and knots at the half integer points −3/2, −1/2, 1/2, 3/2. Another example is the cubic quasi-interpolation operator Q(f, x) := s∈Z 1 {−f (s − 1) + 8f (s) − f (s + 1)}M (x − s), 6 10 where M is the symmetric cubic B-spline with support [−2, 2] and knots at the integer points −2, −1, 0, 1, 2. If Q is a quasi-interpolation operator of the form (2.9)–(2.10), for h > 0 and a function f on R, we define the operator Q(·; h) by Q(f ; h) := σh ◦ Q ◦ σ1/h (f ), where σh (f, x) = f (x/h). From the definition it is easy to see that Λ(f, k; h)M (h−1 x − k), Q(f, x; h) = k where λ(j)f (h(k − j)). Λ(f, k; h) := j∈P (µ) The operator Q(·; h) has the same properties as Q: it is a local bounded linear operator in C(R) and reproduces the polynomials from Pr−1 . Moreover, it gives a good approximation for smooth functions [11, p. 63–65]. We will also call it a quasi-interpolation operator for C(R). However, the quasi-interpolation operator Q(·; h) is not defined for a function f on I, and therefore, not appropriate for an approximate sampling recovery of f from its sampled values at points in I. An approach to construct a quasi-interpolation operator for functions on I is to extend it by interpolation Lagrange polynomials. This approach has been proposed in [21] for the univariate case. Let us recall it. For a non-negative integer k, we put xj = j2−k , j ∈ Z. If f is a function on I, let Uk (f ) and Vk (f ) be the (r −1)th Lagrange polynomials interpolating f at the r left end points x0 , x1 , ..., xr−1 , and r right end points x2k −r+1 , x2k −r+3 , ..., x2k , of the interval I, respectively. The function f¯k is defined as an extension of f on R by the formula Uk (f, x), x < 0, ¯ fk (x) := f (x), 0 ≤ x ≤ 1, Vk (f, x), x > 1. If f is continuous on I, then f¯k is a continuous function on R. 
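To see the cubic quasi-interpolation operator above in action, here is a minimal Python sketch (my own illustration: the use of scipy's `BSpline.basis_element` to realize the centered cubic B-spline, the test polynomial, and the translate range are my choices). It confirms numerically that this $Q$ reproduces $\mathcal{P}_{r-1}$, i.e. cubic polynomials for $r = 4$.

```python
import numpy as np
from scipy.interpolate import BSpline

# Centered cubic B-spline M of order r = 4: support [-2, 2], integer knots, as in the text.
M = BSpline.basis_element([-2, -1, 0, 1, 2], extrapolate=False)

def Q(f, x, s_range):
    """Cubic quasi-interpolant Q(f, x) = sum_s (1/6)(-f(s-1) + 8 f(s) - f(s+1)) M(x - s)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for s in s_range:
        lam = (-f(s - 1) + 8.0 * f(s) - f(s + 1)) / 6.0
        out += lam * np.nan_to_num(M(x - s))   # M vanishes outside [-2, 2]
    return out

f = lambda t: 2 * t**3 - t**2 + 5 * t - 1        # a cubic polynomial, i.e. f in P_{r-1}
x = np.linspace(0.0, 10.0, 201)
err = np.max(np.abs(Q(f, x, s_range=range(-3, 14)) - f(x)))
print(err)        # of the order of machine rounding: Q reproduces cubic polynomials exactly
```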
Let Q be a quasi-interpolation operator of the form (2.9)–(2.10) in C(R). If k ∈ Z+ , we introduce the operator Qk by Qk (f, x) := Q(f¯k , x; 2−k ), x ∈ I, for a function f on I. We define the integer translated dilation Mk,s of M by Mk,s (x) := M (2k x − s), k ∈ Z+ , s ∈ Z. Then we have for k ∈ Z+ , ak,s (f )Mk,s (x), ∀x ∈ I, Qk (f, x) = s∈J(k) 11 where J(k) := {s ∈ Z : −r/2 < s < 2k + r/2} is the set of s for which Mk,s do not vanish identically on I, and the coefficient functional ak,s is defined by λ(j)f¯k (2−k (s − j)). ak,s (f ) := Λ(f¯k , s; 2−k ) = |j|≤µ For k ∈ Zd+ , let the mixed operator Qk be defined by d Qk := Qki , (2.11) i=1 where the univariate operator Qki is applied to the univariate function f by considering f as a function of variable xi with the other variables held fixed. We define the d-variable B-spline Mk,s by d Mki ,si (xi ), k ∈ Zd+ , s ∈ Zd . Mk,s (x) := (2.12) i=1 Then we have Qk (f, x) = ak,s (f )Mk,s (x), ∀x ∈ Id , s∈J d (k) where Mk,s is the mixed B-spline defined in (2.12), J d (k) := {s ∈ Zd : −r/2 < si < 2ki + r/2, i ∈ [d]} is the set of s for which Mk,s do not vanish identically on Id , ak,s (f ) := ak1 ,s1 (ak2 ,s2 (...akd ,sd (f ))), (2.13) and the univariate coefficient functional aki ,si is applied to the univariate function f by considering f as a function of variable xi with the other variables held fixed. The operator Qk is a local bounded linear mapping in C(Id ) for r ≥ 2 and in L∞ (Id ) for r = 1, d and reproducing Pr−1 the space of polynomials of order at most r − 1 in each variable xi . In particular, we have for every f ∈ C(Id ), Qk (f ) ∞ ≤C Λ d f C(Id ) . (2.14) ωre (f, 2−k )∞ , (2.15) For k ∈ Zd+ , we write k → ∞ if ki → ∞ for i ∈ [d]). Lemma 2.2 We have for every f ∈ C(Id ), f − Qk (f ) ∞ ≤ C e∈[d], e=∅ and, consequently, f − Qk (f ) ∞ → 0, k → ∞. 12 (2.16) Proof. For d = 1, the inequality (2.15) is of the form f − Qk (f ) ∞ ≤ Cωr (f, 2−k )∞ . (2.17) This inequality is derived from the inequalities (2.29)–(2.31) in [22] and the inequality (2.14). For simplicity, let us prove the the inequality (2.15) for d = 2 and r ≥ 2. The general case can be proven in a similar way. Let I be the identity operator and k = (k1 , k2 ). From the equation I − Qk = (I − Qk1 ) + (I − Qk2 ) − (I − Qk1 )(I − Qk2 ) and the inequality (2.17) applied to f as an univariate in each variable, we obtain f − Qk (f ) ∞ ≤ (I − Qk1 )(f ) ωr{1} (f, 2−k )∞ ∞ + (I − Qk2 )(f ) + ωr{2} (f, 2−k )∞ + ∞ + (I − Qk1 )(I − Qk2 )(f ) ∞ ωr[2] (f, 2−k )∞ . If τ is a number such that 0 < τ ≤ min(p, 1), then for any sequence of functions {gk } there is the inequality τ gk ≤ gk τp . (2.18) p ∗ of M by Further, we define the half integer translated dilation Mk,s ∗ (x) := M (2k x − s/2), k ∈ Z+ , s ∈ Z, Mk,s ∗ by and the d-variable B-spline Mk,s d ∗ (x) Mk,s Mk∗i ,si (xi ), k ∈ Zd+ , s ∈ Zd . := i=1 (r) In what follows, the B-spline M will be fixed. We will denote Mk,s := Mk,s if the order r of M is (r) ∗ if the order r of M is odd. Let J d (k) := J d (k) if r is even, and even, and Mk,s := Mk,s r Jrd (k) := {s ∈ Zd : −r < si < 2ki +1 + r, i ∈ [d]} (r) if r is odd. Notice that Jrd (k) is the set of s for which Mk,s do not vanish identically on Id . Denote (r) by Σdr (k) the span of the B-splines Mk,s , s ∈ Jrd (k). If 0 < p ≤ ∞, for all k ∈ Zd+ and all g ∈ Σdr (k) such that (r) g= as Mk,s , (2.19) s∈Jrd (k) there is the quasi-norm equivalence g p 2−|k|1 /p {as } 13 p,k , (2.20) where 1/p {as } p,k |as |p := s∈Jrd (k) with the corresponding change when p = ∞. 
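Taking the cubic B-spline of the previous sketch as the fixed univariate factor, the following minimal sketch (again my own illustration; the helper name `M_ks` and the chosen levels and shifts are not from the paper) evaluates the tensor-product dyadic B-splines $M_{k,s}$ of (2.12).

```python
import numpy as np
from scipy.interpolate import BSpline

# Centered cubic B-spline (order r = 4) as the univariate factor; zero outside [-2, 2].
M = BSpline.basis_element([-2, -1, 0, 1, 2], extrapolate=False)

def M_ks(x, k, s):
    """Tensor-product dyadic B-spline M_{k,s}(x) = prod_i M(2^{k_i} x_i - s_i), cf. (2.12).
    `x` is an array of evaluation points of shape (n, d)."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    vals = np.ones(x.shape[0])
    for i, (ki, si) in enumerate(zip(k, s)):
        vals *= np.nan_to_num(M(2.0**ki * x[:, i] - si))  # univariate factor in coordinate i
    return vals

# Level k = (2, 1), shift s = (3, 1): the support is the box 2^{-2}[1, 5] x 2^{-1}[-1, 3],
# intersected with I^2.
pts = np.array([[0.75, 0.5], [0.1, 0.9]])
print(M_ks(pts, k=(2, 1), s=(3, 1)))   # [4/9, 0]: the second point lies outside the support
```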
For convenience we define the univariate operator Q−1 by putting Q−1 (f ) = 0 for all f on I. Let the operator qk , k ∈ Zd+ , be defined in the manner of the definition (2.11) by d (Qki − Qki −1 ) . qk := (2.21) i=1 We have Qk = qk . (2.22) k ≤k From (2.22) and (2.16) it is easy to see that a continuous function f has the decomposition qk (f ) f = k∈Zd+ with the convergence in the norm of L∞ (Id ). From the definition of (2.21) and the refinement equation for the B-spline M , we can represent the component functions qk (f ) as (r) qk (f ) = (r) ck,s (f )Mk,s , (2.23) s∈Jrd (k) (r) where ck,s are certain coefficient functionals of f, which are defined as follows (see [23] for details). (r) We first define ck,s for univariate functions (d = 1). If the order r of the B-spline M is even, (r) ck,s (f ) := ak,s (f ) − ak,s (f ), k ≥ 0, where ak,s (f ) := 2−r+1 (m,j)∈Cr (k,s) (2.24) r ak−1,m (f ), k > 0, a0,s (f ) := 0. j and Cr (k, s) := {(m, j) : 2m + j − r/2 = s, m ∈ J(k − 1), 0 ≤ j ≤ r}, k > 0, Cr (0, s) := {0}. If the order r of the B-spline M is odd, 0, (r) ck,s (f ) := ak,s/2 (f ), −r+1 2 (m,j)∈Cr (k,s) 14 r j ak−1,m (f ), k = 0, k > 0, s even, k > 0, s odd, where Cr (k, s) := {(m, j) : 4m + 2j − r = s, m ∈ J(k − 1), 0 ≤ j ≤ r}, k > 0, Cr (0, s) := {0}. (r) In the multivariate case, the representation (2.23) holds true with the ck,s which are defined in the manner of the definition of (2.13) by (r) (r) (r) (r) ck,s (f ) = ck1 ,s1 ((ck2 ,s2 (...ckd ,sd (f ))). (2.25) Thus, we have proven the following Lemma 2.3 Every continuous function f on Id is represented as B-spline series f = (r) k∈Zd+ (r) ck,s (f )Mk,s , qk (f ) = (2.26) k∈Zd+ s∈Jrd (k) (r) converging in the norm of L∞ (Id ), where the coefficient functionals ck,s (f ) are explicitly constructed by formula (2.24)–(2.25) as linear combinations of at most N function values of f for some N ∈ N which is independent of k, s and f . Ω and B a , We now prove theorems on quasi-interpolation representation of functions from Bp,θ p,θ α,β Bp,θ by series (2.26) satisfying a discrete equivalent quasi-norm. We need some auxiliary lemmas. Let us use the notations: x+ := ((x1 )+ , ..., (xd )+ ) for x ∈ Rd . Lemma 2.4 ([23]) Let 0 < p ≤ ∞ and τ ≤ min(p, 1). Then for any f ∈ C(Id ) and k ∈ Nd (e), we have 1/τ qk (f ) p ≤ C v⊃e 2|s−k|1 /p ωrv (f, 2−s )p τ s∈Zd+ (v), s≥k with some constant C depending at most on r, µ, p, d and Λ , whenever the sum in the right-hand side is finite. Lemma 2.5 Let 0 < p ≤ ∞, 0 < τ ≤ min(p, 1), δ = min(r, r − 1 + 1/p). Let g ∈ Lp (Id ) be represented by the series g = gk , gk ∈ Σdr (k) k∈Zd+ converging in the norm of L∞ (Id ). Then for any k ∈ Zd+ (e), there holds the inequality 1/τ ωre (g, 2−k )p ≤ C 2−δ|(k−s)+ |1 gs p τ s∈Zd+ with some constant C depending at most on r, µ, p, d and Λ , whenever the sum on the right-hand side is finite. 15 Proof. This lemma can be proven in a way similar to the proof of [23, Lemma 2.3]. Let 0 < p, θ ≤ ∞ and ψ : Zd+ → R. If {gk }k∈Zd is a sequence whose component functions gk + are in Lp (Id ), we define the “quasi-norm” {gk } {gk } bψ p,θ bψ p,θ by 2ψ(k) gk := θ 1/θ p k∈Zd+ with the usual change to a supremum when θ = ∞. When {gk }k∈Zd is a positive sequence, we + replace gk p by |gk | and denote the corresponding quasi-norm by {gk } bψ . θ We will need the following generalized discrete Hardy inequality (see, e.g, [12] for the univariate case with ψ(k) = αk, α > 0). 
Lemma 2.6 Let {ak }k∈Zd and {bk }k∈Zd be two positive sequences and let for some A > 0, τ > + + 0, δ > 0 1/τ 2−δ|(k−s)+ |1 as bk ≤ A τ . (2.27) s∈Zd+ Let the function ψ : Zd+ → R satisfy the following. There are numbers c1 , c2 ∈ R, 0 < ζ < δ such that > 0 and ψ(k) − |k|1 ≤ ψ(k ) − |k |1 + c1 , k ≤ k , k, k ∈ Zd+ , (2.28) ψ(k) − ζ|k|1 ≥ ψ(k ) − ζ|k |1 + c2 , k ≤ k , k, k ∈ Zd+ , (2.29) and Then for 0 < θ ≤ ∞, we have {bk } bψ θ ≤ CA {ak } (2.30) bψ θ with C = C(c1 , c2 , , δ, θ, d) > 0. Proof. Because the right side of (2.27) becomes larger when τ becomes smaller, we can assume τ < θ. For e ⊂ [d] and s ∈ Zd , let e¯ := [d] \ e and s(e) ∈ Zd be defined by s(e)j = sj if j ∈ e, and s(e)j = 0 if j ∈ e¯. From (2.27) we have bk Bk (e), k ∈ Zd+ , A (2.31) e⊂[d] where 1/τ Bk (e) := 2−δ|k(e)|1 2δ|s(e)|1 as s∈Z(e,k) 16 τ and Z(e, k) := {s ∈ Zd+ : sj ≤ kj , j ∈ e; sj > kj , j ∈ e}. Take numbers , ζ , θ with the conditions 0 < < , ζ < ζ < δ and τ /θ + τ /θ = 1, respectively. Applying H¨ older’s inequality with exponents θ/τ, θ /τ , we obtain 1/θ Bk (e) ≤ 2−δ|k(e)|1 2ζ |s(e)|1 + |s(¯ e)|1 1/θ θ as 2(δ−ζ )|s(e)|1 − s∈Z(e,k) |s(¯ e)|1 θ s∈Z(e,k) 1/θ 2−δ|k(e)|1 2ζ |s(e)|1 + |s(¯ e)|1 θ as 2(δ−ζ )|k(e)|1 − |k(¯ e)|1 s∈Z(e,k) 1/θ 2−ζ |k(e)|1 − |k(¯ e)|1 2ζ |s(e)|1 + |s(¯ e)|1 θ as . s∈Z(e,k) Hence, {Bk (e)} θ bψ θ 2θ(ψ(k)−ζ |k(e)|1 − |k(¯ e)|1 ) k∈Zd+ 2ζ |s(e)|1 + |s(¯ e)|1 θ as s∈Z(e,k) 2θ(ζ |s(e)|1 + |s(¯ e)|1 ) θ as s∈Zd+ 2θ(ψ(k)−ζ |k(e)|1 − |k(¯ e)|1 ) (2.32) , k∈X(e,s) where X(e, s) := {k ∈ Zd+ : kj ≥ sj , j ∈ e; kj < sj , j ∈ e}. By (2.28) and (2.29) we have for k ∈ X(e, s), ψ(k) = ψ(k) − ζ|k|1 + ζ(|k(e)|1 + |k(¯ e)|1 ) ≤ ψ(s(e) + k(¯ e)) − ζ(|s(e)|1 + |k(¯ e)|1 ) + ζ(|k(e)|1 + |k(¯ e)|1 ) = ψ(s(e) + k(¯ e)) − ζ|s(e)|1 + ζ|k(e)|1 , and ψ(s(e) + k(¯ e)) = ψ(s(e) + k(¯ e)) − (|s(e)|1 + |k(¯ e)|1 ) + (|s(e)|1 + |k(¯ e)|1 ) ≤ ψ(s(e) + s(¯ e)) − (|s(e)|1 + |s(¯ e)|1 ) + (|s(e)|1 + |k(¯ e)|1 ) = ψ(s) − |s(¯ e)|1 + |k(¯ e)|1 . Consequently, ψ(k) − ζ |k(e)|1 − |k(¯ e)|1 ≤ ψ(s) − ζ|s(e)|1 − |s(¯ e)|1 − (ζ − ζ)|k(e)|1 + ( − )|k(¯ e)|1 , 17 and therefore, we can continue the estimation (2.32) as {Bk (e)} 2θ(ψ(s)+(ζ −ζ)|s(e)|1 −( θ bψ θ 2θ(−(ζ −ζ)|k(e)|1 +( − )|s(¯ e)|1 θ as s∈Zd+ − )|k(¯ e)|1 k∈X(e,s) 2θ(ψ(s)+(ζ −ζ)|s(e)|1 −( − )|s(¯ e)|1 θ as 2θ(−(ζ −ζ)|s(e)|1 +( − )|s(¯ e)|1 s∈Zd+ 2θψ(s) aθs = = {ak } s∈Zd+ θ . bψ θ Hence, by (2.31) we prove (2.30). We now are able to prove quasi-interpolation B-spline representation theorems for functions Ω and B α,β , B a . For functions f on Id , we introduce the following quasi-norms: from Bp,θ p,θ p,θ 1/θ θ qk (f ) p /Ω(2−k ) B2 (f ) := ; k∈Zd+ (r) 2−|k|1 /p {ck,s (f )} B3 (f ) := −k p,k /Ω(2 ) θ 1/θ . k∈Zd+ Observe that by (2.20) the quasi-norms B2 (f ) and B3 (f ) are equivalent. Theorem 2.1 Let 0 < p, θ ≤ ∞ and Ω satisfy the additional conditions: there are numbers µ, ρ > 0 and C1 , C2 > 0 such that d d t−µ i Ω(t) −µ t i , t ≤ t , t, t ∈ Id , ≤ C1 Ω(t ) i=1 d d −ρ t−ρ ≥ C2 Ω(t ) i Ω(t) (2.33) i=1 t i , t ≤ t , t, t ∈ Id . i=1 (2.34) i=1 Then we have the following. Ω can be represented by the B-spline series (i) If µ > 1/p and ρ < r, then a function f ∈ Bp,θ (2.26) satisfying the convergence condition B2 (f ) f Ω . Bp,θ (2.35) (ii) If ρ < min(r, r − 1 + 1/p), then a function g on Id represented by a series g = (r) gk = k∈Zd+ ck,s Mk,s , k∈Zd+ s∈Jrd (k) 18 (2.36) satisfying the condition B4 (g) := −k gk p /Ω(2 θ 1/θ < ∞, ) k∈Zd+ Ω . Moreover, belongs the space Bp,θ g B4 (g). 
Ω Bp,θ Ω (iii) If µ > 1/p and ρ < min(r, r − 1 + 1/p), then a function f on Id belongs to the space Bp,θ if and only if f can be represented by the series (2.26) satisfying the convergence condition (2.35). Moreover, the quasi-norm f B Ω is equivalent to the quasi-norm B2 (f ). p,θ Proof. Put φ(x) := log2 [1/Ω(2−x )]. Due to (2.33)–(2.34), the function φ satisfies the following conditions φ(x) − µ|x|1 ≤ φ(x ) − µ|x |1 + log2 C1 , x ≤ x , x, x ∈ Rd+ , (2.37) and φ(x) − ρ|x|1 ≥ φ(x ) − ρ|x |1 + log2 C2 , x ≤ x , x, x ∈ Rd+ . We also have φ(2k )ωre (f, 2−k )p ) B1 (f ) = e⊂[d] θ (2.38) 1/θ , (2.39) k∈Zd+ (e) with the corresponding change to sup when θ = ∞. Fix a number 0 < τ ≤ min(p, 1). Let Nd (e) := {s ∈ Zd+ : si > 0, i ∈ e, si = 0, i ∈ / e} for e ⊂ [d] (in particular, Nd (∅) = {0} and d d d d N ([d]) = N ). We have N (u) ∩ N (v) = ∅ if u = v, and the following decomposition of Zd+ : Zd+ = Nd (e). e⊂[d] Assertion (i): From (2.37) we derive µ|k|1 ≤ φ(k) + c, k ∈ Zd+ , for some constant c. Hence, by Lemma (2.1) and (2.39) we have f Bpµ1 ≤ C f Ω , Bp,θ Ω f ∈ Bp,θ , for some constant C. Since for µ > 1/p, Bpµ1 is compactly embedded into C(Id ), by the last Ω . Take an arbitrary f ∈ B Ω . Then f can be treated as an element in C(Id ). inequality so is Bp,θ p,θ By Lemma 2.3 f is represented as B-spline series (2.26) converging in the norm of L∞ (Id ). For k ∈ Zd+ , put |k|1 /p bk := 2 2|k|1 /p ωrv (f, 2−k )p qk (f ) p , ak := v⊃e 19 τ 1/τ if k ∈ Nd (e). By Lemma 2.4 we have for k ∈ Zd+ , aτs bk ≤ C 1/τ 1/τ ∞ τ 2−δ|(k−s)+ |1 as ≤ C , k ∈ Zd+ , s∈Zd+ s≥k for a fixed δ > ρ + 1/p. Let the function ψ be defined by ψ(k) = φ(k) − |k|1 /p, k ∈ Zd+ . By the inequality µ > 1/p, (2.37) and (2.38), it is easy to see that ψ(k) − |k|1 ≤ ψ(k ) − |k |1 + log2 C1 , k ≤ k , k, k ∈ Zd+ , and ψ(k) − ζ|k|1 ≥ ψ(k ) − ζ|k |1 + log2 C2 , k ≤ k , k, k ∈ Zd+ , < µ − 1/p and ζ = ρ + 1/p. Hence, applying Lemma 2.6 gives for B2 (f ) = {bk } bψ θ ≤ C {ak } bψ θ B1 (f ) f Ω . Bp,θ Assertion (ii): For k ∈ Zd+ , define τ ωrv (g, 2−k )p bk := 1/τ , ak := gk p v⊃e if k ∈ Nd (e). By Lemma 2.5 we have for any k ∈ Zd+ (e), 1/τ 2−δ|(k−s)+ |1 gs ωre (g, 2−k )p ≤ C3 p τ , s∈Zd+ where δ = min(r, r − 1 + 1/p). Therefore, 1/τ 2−δ|(k−s)+ |1 as bk ≤ C4 τ , k ∈ Zd+ . s∈Zd+ Taking ζ = ρ and 0 < < µ, we obtain by (2.37) and (2.38) φ(k) − |k|1 ≤ φ(k ) − |k |1 + log2 C1 , k ≤ k , k, k ∈ Zd+ , and φ(k) − ζ|k|1 ≥ φ(k ) − ζ|k |1 + log2 C2 , k ≤ k , k, k ∈ Zd+ . Applying Lemma 2.6 we get g Ω Bp,θ B1 (g) {bk } bφ θ ≤ C {ak } bφ θ Assertion (ii) is proven. Assertion (iii): This assertion follows from Assertions (i) and (ii). From Assertion (ii) in Theorem 2.1 we obtain 20 = B4 (g). Corollary 2.1 Let 0 < p, θ ≤ ∞ and Ω satisfy the assumptions of Assertion (ii) in Theorem 2.1. Then for every k ∈ Zd+ , we have g g p /Ω(2−k ), g ∈ Σdr (k). Ω Bp,θ Theorem 2.2 Let 0 < p, θ ≤ ∞ and a ∈ Rd+ . Then we have the following. a can be represented by the (i) If 1/p < minj∈[d] aj ≤ maxj∈[d] aj < r, then a function f ∈ Bp,θ mixed B-spline series (2.26) satisfying the convergence condition 1/θ {2(a,k) qk (f ) p }θ B2 (f ) = f a . Bp,θ (2.40) k∈Zd+ (ii) If 0 < minj∈[d] aj ≤ maxj∈[d] aj < min(r, r − 1 + 1/p), then a function g on Id represented by a series (2.36) satisfying the condition 2(a,k) gk B4 (g) := θ 1/θ < ∞, p k∈Zd+ a . Moreover, belongs the space Bp,θ g a Bp,θ B4 (g). 
(iii) If 1/p < minj∈[d] aj ≤ maxj∈[d] aj < min(r, r −1+1/p), then a function f on Id belongs to the a if and only if f can be represented by the series (2.26) satisfying the convergence space Bp,θ a condition (2.40). Moreover, the quasi-norm f Bp,θ is equivalent to the quasi-norms B2 (f ). Proof. For Ω as in (2.5), we have 1/Ω(2−x ) = 2(a,x) , x ∈ Rd+ . One can directly verify the conditions (2.1)–(2.3) and the conditions (2.33)–(2.34) with 1/p < µ < minj∈[d] aj and ρ = maxj∈[d] aj , for Ω defined in (2.5). Applying Theorem 2.1(i), we obtain the assertion (i). The assertion (ii) can be proven in a similar way. The assertion (iii) follows from the assertions (i) and (iii). Theorem 2.3 Let 0 < p, θ ≤ ∞ and α ∈ R+ , β ∈ R. Then we have the following. α,β (i) If 1/p < min(α, α + β) ≤ max(α, α + β) < r, then a function f ∈ Bp,θ can be represented by the mixed B-spline series (2.26) satisfying the convergence condition 1/θ B2 (f ) = {2α|k|1 +β|k|∞ qk (f ) p }θ k∈Zd+ 21 f α,β Bp,θ . (2.41) (ii) If 0 < min(α, α+β) ≤ max(α, α+β) < min(r, r−1+1/p), then a function g on Id represented by a series (2.36) satisfying the condition 2α|k|1 +β|k|∞ gk B4 (g) := θ p 1/θ < ∞, k∈Zd+ α,β belongs the space Bp,θ . Moreover, g α,β Bp,θ B4 (g). (iii) If 1/p < min(α, α + β) ≤ max(α, α + β) < min(r, r − 1 + 1/p), then a function f on Id α,β belongs to the space Bp,θ if and only if f can be represented by the series (2.26) satisfying the convergence condition (2.41). Moreover, the quasi-norm f B α,β is equivalent to the p,θ quasi-norms B2 (f ). Proof. As mentioned above, for Ω as in (2.6), we have 1/Ω(2−x ) = 2α|x|1 +β|x|∞ , x ∈ Rd+ . By Theorem 2.1, the assertion (i) of the theorem is proven if the conditions (2.1)–(2.3) and (2.33)– (2.34) with some µ > 1/p and ρ < r, are verified. The condition (2.1) is obvious. Put φ(x) := log2 {1/Ω(2−x )} = α|x|1 + β|x|∞ , x ∈ Rd+ . Then the conditions (2.2)–(2.3) and (2.33)–(2.34) are equivalent to the following conditions for the function φ, φ(x) ≤ φ(x ) + log2 C, x ≤ x , x, x ∈ Rd+ ; (2.42) for every b ≤ log2 γ := (log2 γ1 , ..., log2 γd ), φ(x + b) ≤ φ(x) + log2 C , x, x + b ∈ Rd+ ; (2.43) φ(x) − µ|x|1 ≤ φ(x ) − µ|x |1 + log2 C1 , x ≤ x , x, x ∈ Rd+ ; (2.44) φ(x) − ρ|x|1 ≥ φ(x ) − ρ|x |1 + log2 C2 , x ≤ x , x, x ∈ Rd+ . (2.45) We first consider the case β ≥ 0. Take µ and ρ with the conditions 1/p < µ < α and ρ = α + β. The conditions (2.42)–(2.43) can be easily verified. From the inequality α−µ > 0 and the equation φ(x) − µ|x|1 = (α − µ)|x|1 + β|x|∞ , x ∈ Rd+ . (2.46) follows (2.44). We have xi , x ∈ Rd+ . φ(x) − ρ|x|1 = β(|x|∞ − |x|1 ) = −β min j∈[d] i=j Hence, we deduce (2.45). Let us next consider the case β < 0. The condition (2.43) is obvious. Take µ and ρ with the conditions 1/p < µ < α + β and α < ρ < r. Let x ≤ x , x, x ∈ Rd+ . Assume that |x|∞ = xj and 22 |x |∞ = xj . By using the inequalities xj ≤ xj ≤ xj ≤ xj and α − µ > α + β − µ > 0 from (2.46) we get φ(x) − µ|x|1 = (α − µ) xi + (α + β − µ)xj + (α − µ)xj i=j,j ≤ (α − µ) xi + (α + β − µ)xj + (α − µ)xj i=j,j = φ(x ) − µ|x |1 . The inequality (2.44) is proven. The inequality (2.42) and (2.45) can be proven analogously. Instead the inequalities α − µ > α + β − µ > 0, in the proof we should use α > α + β > 0 and α + β − ρ < α − ρ < 0, respectively. Thus, the assertion (i) is proven. The assertion (ii) can be proven in a similar way. The assertion (iii) follows from the assertions (i) and (iii). Remark Theorem 2.2 for a = α1 and Theorem 2.3 for β = 0 coincide. 
This particular case Ω were introduced in [15] where has been proven in [23]. Some modifications of the space Bp,θ approximations by trigonometric polynomials with frequencies from hyperbolic cross, n-widths were investigated from functions from these spaces. Ω for which Theorem 2.1 is true with There are many examples of function Ω for the space Bp,θ some light natural restrictions and to which it is interesting to extend the results of the present paper. Let us give some important ones of them. For bounded sets A, B with B ⊂ A ⊂ Rd+ , d x∈A d txi i sup Ω(t) = inf y∈B i=1 i=1 tyi i . For univariate functions Ωj , j ∈ [d], satisfying the conditions (2.1)–(2.3), Ω(t) = Ωj (tj ). j∈[d] For a ∈ Rd+ and a univariate function Ω∗ satisfying the conditions (2.1)–(2.3), d tai i ). Ω(t) = Ω∗ ( i=1 23 3 Sampling recovery Let ∆ ⊂ Zd+ be given. Put K(∆) := {(k, s) : k ∈ ∆, s ∈ I d (k)} and denote by Mrd (∆) the set of (r) B-spines Mk,s , k ∈ ∆, s ∈ Jrd (k). We define the operator R∆ for functions f on Id by R∆ (f ) := (r) (r) ck,s (f )Mk,s , qk (f ) = k∈∆ s∈Jrd (k) k∈∆ and the grid G(∆) of points in Id by G(∆) := {2−k s : (k, s) ∈ K(∆)}. Lemma 3.1 The operator R∆ defines a linear sampling algorithm of the form (1.1) on the grid G(∆). More precisely, f (2−k j)ψk,s , R∆ (f ) = Ln (Xn , Φn , f ) = (k,s)∈K(∆) where Xn := G(∆) = {2−k s}(k,s)∈K(∆) , Φn := {ψk,j }(k,s)∈K(∆) , d (2kj + 1), n := |G(∆)| = k∈∆ j=1 (r) and ψk,j are explicitly constructed as linear combinations of at most N B-splines Mk,s ∈ Mrd (∆) for some N ∈ N which is independent of k, j, ∆ and f . Proof. This lemma can be proven in a way similar to the proof of [23, Lemma 3.1]. {ψ} Let ψ : Zd+ → R+ . Denote by Bp,θ the space of all functions f on Id for which the following quasi-norm is finite 1/θ f {ψ} Bp,θ {2ψ(k) qk (f ) p }θ := . k∈Zd+ {ψ} Lemma 3.2 Let 0 < p, θ, q ≤ ∞ and ψ : Zd+ → R+ . Then for every f ∈ Bp,θ , we have the following. (i) For p ≥ q, f − R∆ (f ) where θ∗ := q f 2−ψ(k) , sup d {ψ} Bp,θ θ ≤ min(q, 1), k∈Z+ \∆ k∈Zd+ \∆ 1 . 1/ min(q, 1) − 1/θ 24 {2−ψ(k) }θ ∗ 1/θ∗ , θ > min(q, 1), (ii) For p < q < ∞, f − R∆ (f ) where q ∗ := f q 2−ψ(k)+(1/p−1/q)|k|1 , sup d {ψ} Bp,θ θ ≤ q, k∈Z+ \∆ 1/q ∗ −ψ(k)+(1/p−1/q)|k|1 }q ∗ k∈Zd+ \∆ {2 , θ > q, 1 . 1/q − 1/θ (iii) For p < q = ∞, f − R∆ (f ) where θ := 2−ψ(k)+|k|1 /p , sup d f ∞ {ψ} Bp,θ θ≤1 k∈Z+ \∆ k∈Zd+ \∆ {2 −ψ(k)+|k|1 /p }θ 1/θ , θ > 1, 1 . 1 − 1/θ Proof. {ψ} Case (i): p ≥ q. For an arbitrary f ∈ Bp,θ , by the representation (2.26) and (2.18) we have f − R∆ (f ) τ q qk (f ) τ q k∈Zd+ \∆ with any τ ≤ min(q, 1). Therefore, if θ ≤ min(q, 1), then by Theorem 2.3 and the inequality qk (f ) q ≤ qk (f ) p we get 1/θ f − R∆ (f ) qk (f ) θq q k∈Zd+ \∆ 1/θ ≤ sup 2−ψ(k) k∈Zd+ \∆ ≤ f k∈Zd+ \∆ sup 2−ψ(k) . {ψ} Bp,θ {2ψ(k) qk (f ) p }θ k∈Zd+ \∆ If θ > min(q, 1), then f − R∆ (f ) ν q qk (f ) ν q {2ψ(k) qk (f ) q }ν {2−ψ(k) }ν , = k∈Zd+ \∆ k∈Zd+ \∆ 25 (3.1) where ν = min(q, 1). Since ν/θ + ν/θ∗ = 1, by H¨older’s inequality with exponents θ/ν, θ∗ /ν, the inequality qk (f ) q ≤ qk (f ) p and Theorem 2.3 we obtain f − R∆ (f ) {2ψ(k) qk (f ) q }θ q 1/θ∗ 1/θ k∈Zd+ \∆ k∈Zd+ \∆ {ψ} Bp,θ (3.2) 1/θ∗ f ∗ {2−ψ(k) }θ ∗ {2−ψ(k) }θ . k∈Zd+ \∆ This and (3.1) prove Case (i). {ψ} Case (ii): p < q < ∞. For an arbitrary f ∈ Bp,θ , by the representation (2.26) and [23, Lemma 5.3] have {2(1/p−1/q)|k|1 qk (f ) p }q . f − R∆ (f ) qq k∈Zd+ \∆ Therefore, if θ ≤ q, then 1/θ f − R∆ (f ) {2(1/p−1/q)|k|1 qk (f ) p }θ q k∈Zd+ \∆ 1/θ sup 2−ψ(k)+(1/p−1/q)|k|1 k∈Zd+ \∆ f {ψ} Bp,θ {2ψ(k) qk (f ) p }θ k∈Zd+ \∆ sup 2−ψ(k)+(1/p−1/q)|k|1 . 
k∈Zd+ \∆ If θ > q, then f − R∆ (f ) q q {2(1/p−1/q)|k|1 qk (f ) p }q k∈Zd+ \∆ {2ψ(k) qk (f ) p }q {2−ψ(k)+(1/p−1/q)|k|1 }q . = k∈Zd+ \∆ Hence, similarly to (3.2), we get 1/q∗ f − R∆ (f ) q f {ψ} Bp,θ ∗ {2−ψ(k)+(1/p−1/q)|k|1 }q k∈Zd+ \∆ This completes the proof of Case (ii). 26 . Case (iii): p < q = ∞. Case (iii) can be proven analogously to Case (ii) by using the inequality f − R∆ (f ) 2|k|1 /p qk (f ) p . ∞ k∈Zd+ \∆ ¯ 3 the set of triples (p, θ, q) such that 0 < p, θ, q ≤ ∞. According to Lemma 3.2, Denote by R + ¯ 3 , the error f − R∆ (f ) q of the depending on the relationship between p, θ, q for (p, θ, q) ∈ R + {ψ} approximation of f ∈ Bp,θ has an upper bound of two different forms: either f − R∆ (f ) f q sup 2−ψ(k)+(1/p−1/q)+ |k|1 , {ψ} Bp,θ (3.3) k∈Zd+ \∆ or for some 0 < τ < ∞, 1/τ f − R∆ (f ) q f {ψ} Bp,θ {2−ψ(k)+(1/p−1/q)+ |k|1 }τ . (3.4) k∈Zd+ \∆ ¯ 3 into two sets A and B with A ∩ B = ∅ as follows. A triple (p, θ, q) ∈ R ¯3 Let us decompose R + + belongs to A if and only if for (p, θ, q) there holds (3.3), and belongs to B if and only if for (p, θ, q) ¯ 3 satisfying one of the following there holds (3.4). By Lemma 3.2, A consists of all (p, θ, q) ∈ R + conditions • p ≥ q, θ ≤ min(q, 1); • p < q, θ ≤ q; • p < q = ∞, θ ≤ 1, ¯ 3 satisfying one of the following conditions and B consists of all (p, θ, q) ∈ R + • p ≥ q, θ > min(q, 1); • p < q, θ > q; • p < q = ∞, θ > 1. α,β We construct special sets ∆(ξ) parametrized by ξ > 0, for the recovery of functions f ∈ Up,θ by R∆(ξ) (f ). Let 0 < p, θ, q ≤ ∞ and α ∈ R+ , β ∈ R be given. We fix a number ε so that 0 < ε < min(α − (1/p − 1/q)+ , |β|), 27 and define the set ∆(ξ) for ξ > 0 by d {k ∈ Z+ : (α − (1/p − 1/q)+ )|k|1 + β|k|∞ ≤ ξ}, ∆(ξ) := {k ∈ Zd+ : (α − (1/p − 1/q)+ + ε/d)|k|1 + (β − ε)|k|∞ ≤ ξ}, {k ∈ Zd+ : (α − (1/p − 1/q)+ − ε)|k|1 + (β + ε)|k|∞ ≤ ξ}, (p, θ, q) ∈ A, (p, θ, q) ∈ B, β > 0, (p, θ, q) ∈ B, β < 0. Preliminarily note that for (p, θ, q) ∈ A, ∆(ξ) is defined as the set {k ∈ Zd+ : (α − (1/p − 1/q)+ )|k|1 + β|k|∞ ≤ ξ}, but for (p, θ, q) ∈ B, ∆(ξ) is defined as an extension of the last one parametrized by ε. We will give a detailed comment on this substantial difference in a remark at the end of the next section where the optimality and sparsity are investigated. Theorem 3.1 Let 0 < p, θ, q ≤ ∞ and α ∈ R+ , β ∈ R, β = 0, such that 1/p < min(α, α + β) ≤ max(α, α + β) < r. Then we have the following upper bound f − R∆(ξ) (f ) sup q 2−ξ . (3.5) α,β f ∈Up,θ Proof. If (p, θ, q) ∈ A, by Lemma 3.2, we have f − R∆(ξ) (f ) sup 2−(α−(1/p−1/q)+ )|k|1 +β|k|∞ ) sup q 2−ξ . k∈Zd+ \∆(ξ) α,β f ∈Up,θ We next consider the case (p, θ, q) ∈ B. In this case, by Lemma 3.2, we have sup f − R∆(ξ) α,β f ∈Up,θ 2−τ (α−(1/p−1/q)+ )|k|1 +β|k|∞ ) τ q k∈Zd+ \∆(ξ) for τ = θ∗ , q ∗ , θ . For simplicity we prove the case (p, θ, q) ∈ B for τ = 1 and p ≥ q, the general case can be proven similarly. In this particular case, we get sup f − R∆(ξ) 2−α|k|1 −β|k|∞ =: Σ(ξ). q α,β f ∈Up,θ (3.6) k∈Zd+ \∆(ξ) We first assume that β > 0. It is easy to verify that for every ξ > 0, 2−(α1,x)−βM (x) dx, Σ(ξ) W (ξ) where M (x) := maxj∈[d] xj for x ∈ Rd , and W (ξ) := {x ∈ Rd+ : (α + ε/d)(1, x) + (β − ε)M (x) > ξ}. 28 (3.7) Putting V (ξ, s) := {x ∈ W (ξ) : ξ + s − 1 ≤ α(1, x) + βM (x) < ξ + s}, s ∈ N, from (3.7) we have ∞ 2−ξ Σ(ξ) 2−s |V (ξ, s)|. (3.8) s=1 Let us estimate |V (ξ, s)|. Put V ∗ (ξ, s) := V (ξ, s) − x∗ , where x∗ := (νd)−1 ξ1 and ν := α + β/d. 
For every y = x − x∗ ∈ V ∗ (ξ, s), from the equation (1, x∗ ) = ξ/ν and the inequality α(1, x) + βM (x) < ξ + s we get α(1, y) + βM (y) < s. (3.9) On the other hand, for every x ∈ V (ξ, s), from the inequality α(1, x) + βM (x) < ξ + s and (α + ε/d)(1, x) + (β − ε)M (x) > ξ we get M (x) − (1, x)/d < ε−1 s. This inequality together with the inequality α(1, x)+βM (x) ≥ s−1 gives (1, x) ≥ ξ/ν +((1−ε−1 β)s+1)/ν for every x ∈ V (ξ, s). Hence, for every y = x − x∗ ∈ V ∗ (ξ, s), (1, y) ≥ ((1 − ε−1 β)s + 1)/ν. (3.10) This means that V ∗ (ξ, s) ⊂ V (s) for every ξ > 0, where V (s) ⊂ Rd is the set of all y ∈ Rd given by the conditions (3.9) and (3.10). Since V (s) is a bounded polyhedron and consequently, |V (ξ, s)| = |V ∗ (ξ, s)| ≤ |V (s)| sd , combining (3.6) and (3.8), we obtain ∞ sup f − R∆(ξ) −ξ 2−s sd 2 q α,β f ∈Up,θ 2−ξ . s=1 If β < 0, similarly to (3.6) and (3.7), we have for every ξ > 0, 2−(α1,x)−βM (x) dx, Σ(ξ) W (ξ) where W (ξ) := {x ∈ Rd+ : (α − ε)(1, x) + (β + ε)M (x) > ξ}. From the last relation, similarly to the proof for the case β > 0, we prove (3.5) for the case β < 0. a We construct special sets ∆ (ξ) parametrized by ξ > 0, for the recovery of functions f ∈ Up,θ by R∆ (ξ) (f ). Let 0 < p, q, θ ≤ ∞ and a ∈ Rd+ be given. In what follows, we assume the following a : restriction on the smoothness a of Bp,θ 1/p < a1 < a2 ≤ ... ≤ ad < r. 29 (3.11) We fix a number ε so that 0 < ε < a2 − a1 , and define the set ∆ (ξ) for ξ > 0, by {k ∈ Zd+ : (a, k) − (1/p − 1/q)+ |k|1 ≤ ξ}, {k ∈ Zd+ : (a(ε), k) − (1/p − 1/q)+ |k|1 ≤ ξ}, ∆ (ξ) := (p, θ, q) ∈ A, (p, θ, q) ∈ B, where a(ε) = (a1 , a2 − ε, ..., ad − ε). Theorem 3.2 Let 0 < p, θ, q ≤ ∞ and a ∈ Rd+ satisfying the condition (3.11) and 1/p < a1 < ad < r. Then we have the following upper bound sup a f ∈Up,θ f − R∆ (ξ) (f ) 2−ξ . q (3.12) Proof. Let us first consider the case (p, θ, q) ∈ A. In this case, by Lemma 3.2, we have sup a f ∈Up,θ f − R∆ (ξ) (f ) 2−((a,k)−(1/p−1/q)+ |k|1 ) sup q 2−ξ . k∈Zd+ \∆ (ξ) We next treat the case (p, θ, q) ∈ B. In this case, by Lemma 3.2, we have sup a f ∈Up,θ f − R∆ (ξ) (f ) 2−τ ((a,k)−(1/p−1/q)+ |k|1 ) , τ q k∈Zd+ \∆ (ξ) for τ = θ∗ , q ∗ , θ . For simplicity we prove the case (p, θ, q) ∈ B for τ = 1 and (1/p − 1/q)+ ) = 0, the general case can be proven similarly. In this particular case, we get sup a f ∈Up,θ f − R∆ (ξ) (f ) 2−(a,k) =: Σ(ξ). q k∈Zd+ \∆ (3.13) (ξ) It is easy to verify that for every ξ > 0, 2−(a,x) dx, Σ(ξ) (3.14) W (ξ) where W (ξ) := {x ∈ Rd+ : (a , x) > ξ}. We put V (ξ, s) := {x ∈ W (ξ) : ξ + s − 1 ≤ (a, x) < ξ + s}, s ∈ N. then from (3.14) we have ∞ Σ(ξ) 2 −ξ 2−s |V (ξ, s)|. s=1 30 (3.15) Let us estimate |V (ξ, s)|. Put V ∗ (ξ, s) := V (ξ, s) − x∗ , where x∗ := (a1 )−1 ξe1 . For every y = x − x∗ ∈ V ∗ (ξ, s), from the equation (a, x∗ ) = ξ and the inequality (a, x) < ξ + s we get (a, y) < s and therefore, yj < s/aj , j ∈ [d]. (3.16) On the other hand, for every x ∈ V (ξ, s), from the inequality (a, x) < ξ + s and (a, x) − ε(1 , x) = (a , x) > ξ we get (1 , x) < ε−1 s, where 1 := (0, 1, 1, ..., 1) ∈ Rd . This inequality together with the inequality a1 x1 + ad (1 , x) ≥ (a, x) ≥ ξ + s − 1 gives x1 ≥ ξ/a1 + ((1 − ε−1 ad )s + 1)/a1 for every x ∈ V (ξ, s). Hence, for every y = x − x∗ ∈ V ∗ (ξ, s), y1 ≥ ((1 − ε−1 ad )s + 1)/a1 , yj ≥ 0, j = 2, ..., d. (3.17) This means that V ∗ (ξ, s) ⊂ V (s) for every ξ > 0, where V (s) ⊂ Rd is the box of all y ∈ Rd given by the conditions (3.16) and (3.17). 
Since |V (ξ, s)| = |V ∗ (ξ, s)| ≤ |V (s)| sd , by (3.13) and (3.15), we obtain ∞ f − R∆(ξ) sup a f ∈Up,θ 2 q −ξ 2−s sd 2−ξ . s=1 Remark The grids G(∆(ξ)) and G(∆ (ξ)) defined for (p, θ, q) ∈ A or (p, θ, q) ∈ B with β > 0, were employed in [16, 17, 18] for sampling recovery of periodic functions from an intersection of spaces of different mixed smoothness. The grids G(∆(ξ)) defined for β = −1, θ = 1, p = q ≥ 1, were used in [6] for sampling recovery of non-periodic functions based on a hierarchical Lagrangian basis polynomials representation, with the approximation error measured in the energy H 1 -norm. 4 Sparsity and optimality Lemma 4.1 Let 0 < p, θ, q ≤ ∞ and α ∈ R+ , β ∈ R, β = 0, such that 1/p < min(α, α + β) ≤ max(α, α + β) < r. Then we have 2|k|1 |G(∆(ξ))| 2ξ/ν , (4.1) k∈∆(ξ) where ν := α + β/d − (1/p − 1/q)+ , α + β − (1/p − 1/q)+ , 31 β > 0, β < 0. (4.2) Proof. The first asymptotic equivalence in (4.1) follows from the definitions. Let us prove the second one. For simplicity we prove it for the case where p ≥ q, the general case can be proven similarly. Let us first consider the case (p, θ, q) ∈ B, β > 0. It is easy to verify that for every ξ > 0, 2|k|1 2(1,x) dx, (4.3) W (ξ) k∈∆(ξ) where W (ξ) := {x ∈ Rd+ : (α + ε/d)(1, x) + (β − ε)M (x) ≤ ξ} and M (x) := maxj∈[d] xj for x ∈ Rd . We put V (ξ, s) := {x ∈ W (ξ) : ξ/ν + s − 1 ≤ (1, x) < ξ/ν + s}, s ∈ Z+ . From the inequalities β > ε and M (x) − (1, x)/d ≥ 0, x ∈ Rd+ , one can verify that for every x ∈ W (ξ), (1, x) ≤ ξ/ν. Hence, we have ξ/ν 2(1,x) dx 2−s |V (ξ, s)|. 2ξ/ν W (ξ) (4.4) s=0 Let us estimate |V (ξ, s)|. Put V ∗ (ξ, s) := V (ξ, s) − x∗ , where x∗ := (νd)−1 ξ1. From the equation (1, x∗ ) = ξ/ν, we get for every y = x − x∗ ∈ V ∗ (ξ, s), s − 1 ≤ (1, y) < s. (4.5) (α + ε/d)(1, y) + (β − ε)M (y) ≤ 0. (4.6) and This means that V ∗ (ξ, s) ⊂ V (s) for every ξ > 0, where V (s) ⊂ Rd is the set of all y ∈ Rd given by the conditions (4.5) and (4.6). Notice that V (s) is a bounded polyhedron and |V (s)| sd−1 . Hence, by the inequality |V (ξ, s)| = |V ∗ (ξ, s)| ≤ |V (s)|, (4.3) and (4.4), we prove the upper bound in (4.1): ∞ 2 |k|1 k∈∆(ξ) 2 2−s sd−1 ξ/ν 2ξ/ν . s=0 To prove the lower bound for this case, we take k ∗ := ξ/dν 1 ∈ Zd+ . It is easy to check k ∗ ∈ ∆(ξ) and consequently, ∗ 2|k|1 ≥ 2|k |1 2ξ/ν . k∈∆(ξ) The case (p, θ, q) ∈ B, β < 0 can be proven similarly with a slight modification. To prove the case (p, θ, q) ∈ A it is enough to put ε = 0 in the proof of the case (p, θ, q) ∈ B. 32 Lemma 4.2 Let 0 < p, θ, q ≤ ∞ and a ∈ Rd+ satisfying the condition (3.11) and 1/p < a1 < ad < r. Then we have 2|k|1 |G(∆ (ξ))| 2ξ/(a1 −(1/p−1/q)+ ) . (4.7) k∈∆ (ξ) Proof. The first asymptotic equivalence in (4.7) follows from the definitions. Let us prove the second one. For simplicity we prove it for the case where p ≥ q, the general case can be proven similarly. Let us first consider the case (p, θ, q) ∈ B. It is easy to verify that for every ξ > 0, 2|k|1 2(1,x) dx, (4.8) W (ξ) k∈∆ (ξ) where W (ξ) := {x ∈ Rd+ : (a , x) ≤ ξ}. We put V (ξ, s) := {x ∈ W (ξ) : ξ/a1 + s − 1 ≤ (1, x) < ξ/a1 + s}, s ∈ Z+ . One can verify that for every x ∈ W (ξ), (1, x) ≤ ξ/a1 . Hence, we have ξ/a1 2 (1,x) dx 2 2−s |V (ξ, s)|. ξ/a1 W (ξ) (4.9) s=0 Let us estimate |V (ξ, s)|. Put V ∗ (ξ, s) := V (ξ, s) − x∗ , where x∗ := (a1 )−1 ξe1 . From the equation (1, x∗ ) = ξ/a1 , we get for every y = x − x∗ ∈ V ∗ (ξ, s), s − 1 ≤ (1, y) < s. (4.10) (a , y) ≤ 0. 
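To give a numerical impression of the cardinality bound (4.1), the following Python sketch (illustrative parameters of my own choosing, for a triple $(p,\theta,q)\in A$ with $\beta<0$ and $p\ge q$) enumerates $\Delta(\xi)$ and counts the distinct points of $G(\Delta(\xi))$, together with the Smolyak and full dyadic index sets $\Delta_2(\xi)$, $\Delta_1(\xi)$ compared in the remark after Lemma 4.2 below.

```python
from itertools import product
from fractions import Fraction

def grid_size(Delta):
    """Number of distinct points in G(Delta) = {2^{-k} s : k in Delta, 0 <= s_i <= 2^{k_i}}."""
    pts = set()
    for k in Delta:
        for s in product(*(range(2**ki + 1) for ki in k)):
            pts.add(tuple(Fraction(si, 2**ki) for si, ki in zip(s, k)))
    return len(pts)

# Illustrative parameters: d = 2, alpha = 2, beta = -1, p >= q (so (1/p - 1/q)_+ = 0)
# and (p, theta, q) in A; then nu = alpha + beta = 1 by (4.2).
d, alpha, beta, xi = 2, 2.0, -1.0, 6.0
nu = alpha + beta
cands = list(product(range(int(xi / nu) + 1), repeat=d))
Delta   = [k for k in cands if alpha * sum(k) + beta * max(k) <= xi]  # Delta(xi), case A
Smolyak = [k for k in cands if nu * sum(k) <= xi]                     # Delta_2(xi) of the remark below
Full    = [k for k in cands if nu * max(k) <= xi]                     # Delta_1(xi) of the remark below
# Observed ordering |G(Delta(xi))| <= |G(Delta_2(xi))| << |G(Delta_1(xi))|, while
# Lemma 4.1 gives |G(Delta(xi))| of order 2^{xi/nu} (up to constants).
print(grid_size(Delta), grid_size(Smolyak), grid_size(Full), 2**(xi / nu))
```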
(4.11) and This means that V ∗ (ξ, s) ⊂ V (s) for every ξ > 0, where V (s) ⊂ Rd is the set of all y ∈ Rd given by the conditions (4.10) and (4.11). Notice that V (s) is a bounded polyhedron and |V (s)| sd−1 . Hence, by the inequality |V (ξ, s)| = |V ∗ (ξ, s)| ≤ |V (s)|, (4.8) and (4.9), we obtain the upper bound in (4.7): ∞ 2|k|1 k∈∆ (ξ) 2−s sd−1 2ξ/a1 s=0 33 2ξ/a1 . To prove the lower bound, we take k ∗ := ξ/a1 e1 ∈ Zd+ . It is easy to check k ∗ ∈ ∆ (ξ) and consequently, ∗ 2|k|1 ≥ 2|k |1 2ξ/a1 . k∈∆ (ξ) Remark The grids of sample points G(∆(ξ)) and G(∆ (ξ)) are sparse and have much less elements than the standard dyadic full grids G(∆1 (ξ)) and Smolyak grids G(∆2 (ξ)) which give the same recovery error, where ∆1 (ξ) := {k ∈ Zd+ : λ|k|∞ ≤ ξ} and ∆2 (ξ) := {k ∈ Zd+ : λ|k|1 ≤ ξ} and the number λ := ν is as in (4.2) for G(∆(ξ)) and λ := a1 − (1/p − 1/q)+ for G(∆ (ξ)). For instance, the linear sampling algorithms R∆i (ξ) , i = 1, 2, on the grids G(∆i (ξ)) gives the worst case error sup α,β f ∈Up,θ f − R∆i (ξ) (f ) q 2−ξ . The number of sample points in G(∆1 (ξ)) is |G(∆1 (ξ))| 2dξ/ν and in G(∆2 (ξ)) is |G(∆2 (ξ))| 2ξ/ν ξ d−1 . Whereas, due to Theorem 3.1 and Lemma 4.1 we can get the same error by the linear sampling algorithm R∆(ξ) on the grids G(∆(ξ)) with the number of sample points |G(∆(ξ))| 2ξ/ν . The following two theorems show that the linear sampling sampling algorithms R∆(ξ) on sparse grids G(∆(ξ)), and R∆ (ξ) on sparse grids G(∆ (ξ)) are asymptotically optimal in the sense of the quantities rn and n . Theorem 4.1 Let 0 < p, θ, q ≤ ∞ and α ∈ R+ , β ∈ R, β = 0, such that 1/p < min(α, α + β) ≤ max(α, α + β) < r. Assume that for a given n ∈ Z+ , ξn is the largest nonnegative number such that |G(∆(ξn ))| ≤ n. (4.12) α,β Then R∆(ξn ) defines an asymptotically optimal linear sampling algorithm for rn := rn (Up,θ , Lq ) and n := α,β n (Up,θ , Lq ) by R∆(ξn ) (f ) = Ln (Xn∗ , Φ∗n , f ) = f (2−k s)ψk,s , (4.13) (k,s)∈K(∆(ξn )) where Xn∗ := G(∆(ξn )) = {2−k s}(k,s)∈K(∆(ξn )) , Φ∗n := {ψk,s }(k,s)∈K(∆(ξn )) , and we have the following asymptotic orders sup α,β f ∈Up,θ f − R∆(ξn ) (f ) q rn n−α−β/d+(1/p−1/q)+ , n−α−β+(1/p−1/q)+ , n 34 β > 0, β < 0. (4.14) Proof. Upper bounds. Due to Lemma 4.1 we have 2ξn /ν n |G(∆(ξn ))| ≤ n, where ν is as in (4.2). Hence, we find n−α+β/d−(1/p−1/q)+ , n−α+β−(1/p−1/q)+ , 2−ξn β > 0, β < 0. (4.15) By Lemma 3.1 and (4.12), R∆(ξn ) is a linear sampling algorithm of the form (1.1) as in (4.13) and consequently, from Theorem 3.5 we get n ≤ rn ≤ f − R∆(ξn ) (f ) sup α,β f ∈Up,θ 2−ξn . q These relations together with (4.15) proves the upper bounds of (4.14). Lower bounds. We need the following auxiliary result. If W ⊂ Lq , then we have rn (W, Lq (Id )) sup inf d Xn ={xj }n j=1 ⊂I f q. (4.16) f ∈W : f (xj )=0, j=1,...,n For the proof of this inequality see [38, Proposition 19]. Since f q ≥ f p for p ≥ q, it is sufficient to prove the lower bound for the case p ≤ q. Fix a number r = 2m with integer m so that max(α, α + β) < min(r , r − 1 + 1/p). We first treat the case β > 0. Put k ∗ = k ∗ (η) := η1 for integer η > m. Consider the boxes J(s) ⊂ Id J(s) := {x ∈ Id : 2−η+m sj ≤ xj < 2−η+m (sj + 1), j ∈ [d]}, s ∈ Z(η), where Z(η) := {s ∈ Zd+ : 0 ≤ sj ≤ 2η−m − 1, j ∈ [d]}. For a given n, we find η satisfying the relations 2|k n ∗| 2d(η−m) = |Z(η)| ≥ 2n. 1 (4.17) Let Xn = {xj }nj=1 be an arbitrary subset of n points in Id . Since J(s) ∩ J(s ) = ∅ for s = s , and |Z(η)| ≥ 2n, there is Z ∗ (η) ⊂ Z(η) such that |Z ∗ (η)| ≥ n and Xn ∩ {∪s∈Z ∗ (η) J(s)} = ∅. 
The following two theorems show that the linear sampling algorithms $R_{\Delta(\xi)}$ on the sparse grids $G(\Delta(\xi))$, and $R_{\Delta'(\xi)}$ on the sparse grids $G(\Delta'(\xi))$, are asymptotically optimal in the sense of the quantities $r_n$ and $\varrho_n$, where $\varrho_n$ denotes the analogous quantity for general (nonlinear) sampling algorithms defined in the Introduction.

Theorem 4.1. Let $0<p,\theta,q\le\infty$ and $\alpha\in\mathbb{R}_+$, $\beta\in\mathbb{R}$, $\beta\ne 0$, be such that $1/p<\min(\alpha,\alpha+\beta)\le\max(\alpha,\alpha+\beta)<r$. Assume that for a given $n\in\mathbb{Z}_+$, $\xi_n$ is the largest nonnegative number such that
\[
|G(\Delta(\xi_n))|\le n. \qquad (4.12)
\]
Then $R_{\Delta(\xi_n)}$ defines an asymptotically optimal linear sampling algorithm for $r_n:=r_n(U^{\alpha,\beta}_{p,\theta},L_q)$ and $\varrho_n:=\varrho_n(U^{\alpha,\beta}_{p,\theta},L_q)$ by
\[
R_{\Delta(\xi_n)}(f)\ =\ L_n(X^*_n,\Phi^*_n,f)\ =\ \sum_{(k,s)\in K(\Delta(\xi_n))} f(2^{-k}s)\,\psi_{k,s}, \qquad (4.13)
\]
where $X^*_n:=G(\Delta(\xi_n))=\{2^{-k}s\}_{(k,s)\in K(\Delta(\xi_n))}$ and $\Phi^*_n:=\{\psi_{k,s}\}_{(k,s)\in K(\Delta(\xi_n))}$, and we have the following asymptotic orders:
\[
\sup_{f\in U^{\alpha,\beta}_{p,\theta}} \|f-R_{\Delta(\xi_n)}(f)\|_q\ \asymp\ r_n\ \asymp\ \varrho_n\ \asymp\
\begin{cases}
n^{-\alpha-\beta/d+(1/p-1/q)_+}, & \beta>0,\\
n^{-\alpha-\beta+(1/p-1/q)_+}, & \beta<0.
\end{cases} \qquad (4.14)
\]

Proof. Upper bounds. Due to Lemma 4.1 we have $2^{\xi_n/\nu}\asymp|G(\Delta(\xi_n))|\asymp n$, where $\nu$ is as in (4.2). Hence, we find
\[
2^{-\xi_n}\ \asymp\
\begin{cases}
n^{-\alpha-\beta/d+(1/p-1/q)_+}, & \beta>0,\\
n^{-\alpha-\beta+(1/p-1/q)_+}, & \beta<0.
\end{cases} \qquad (4.15)
\]
By Lemma 3.1 and (4.12), $R_{\Delta(\xi_n)}$ is a linear sampling algorithm of the form (1.1) as in (4.13), and consequently, from Theorem 3.5 we get
\[
\varrho_n\ \le\ r_n\ \le\ \sup_{f\in U^{\alpha,\beta}_{p,\theta}} \|f-R_{\Delta(\xi_n)}(f)\|_q\ \ll\ 2^{-\xi_n}.
\]
These relations together with (4.15) prove the upper bounds in (4.14).

Lower bounds. We need the following auxiliary result: if $W\subset L_q$, then we have
\[
\varrho_n(W,L_q(I^d))\ \gg\ \inf_{X_n=\{x^j\}_{j=1}^n\subset I^d}\ \sup_{f\in W:\ f(x^j)=0,\ j=1,\dots,n} \|f\|_q. \qquad (4.16)
\]
For the proof of this inequality see [38, Proposition 19]. Since $\|f\|_q\ge\|f\|_p$ for $p\ge q$, it is sufficient to prove the lower bound for the case $p\le q$. Fix a number $r'=2m$ with integer $m$ such that $\max(\alpha,\alpha+\beta)<\min(r',r'-1+1/p)$.

We first treat the case $\beta>0$. Put $k^*=k^*(\eta):=\eta\mathbf{1}$ for integer $\eta>m$. Consider the boxes $J(s)\subset I^d$,
\[
J(s):=\{x\in I^d:\ 2^{-\eta+m}s_j\le x_j<2^{-\eta+m}(s_j+1),\ j\in[d]\},\quad s\in Z(\eta),
\]
where $Z(\eta):=\{s\in\mathbb{Z}^d_+:\ 0\le s_j\le 2^{\eta-m}-1,\ j\in[d]\}$. For a given $n$, we choose $\eta$ satisfying the relations
\[
2^{|k^*|_1}\ \asymp\ n,\qquad 2^{d(\eta-m)}\ =\ |Z(\eta)|\ \ge\ 2n. \qquad (4.17)
\]
Let $X_n=\{x^j\}_{j=1}^n$ be an arbitrary subset of $n$ points in $I^d$. Since $J(s)\cap J(s')=\emptyset$ for $s\ne s'$ and $|Z(\eta)|\ge 2n$, there is $Z^*(\eta)\subset Z(\eta)$ such that $|Z^*(\eta)|\ge n$ and
\[
X_n\cap\Big\{\bigcup_{s\in Z^*(\eta)} J(s)\Big\}=\emptyset. \qquad (4.18)
\]
Consider the function $g^*\in\Sigma^d_{r'}(k^*)$ defined by
\[
g^*\ :=\ \lambda\,2^{-\alpha|k^*|_1-\beta|k^*|_\infty+|k^*|_1/p}\sum_{s\in Z^*(\eta)} M_{k^*,s+r'/2}, \qquad (4.19)
\]
where the $M_{k^*,s+r'/2}$ are B-splines of order $r'$. Since $|Z^*(\eta)|\asymp 2^{|k^*|_1}$, by (2.20) we have
\[
\|g^*\|_q\ \asymp\ \lambda\,2^{-\alpha|k^*|_1-\beta|k^*|_\infty+(1/p-1/q)|k^*|_1}, \qquad (4.20)
\]
and
\[
\|g^*\|_p\ \asymp\ \lambda\,2^{-\alpha|k^*|_1-\beta|k^*|_\infty}.
\]
Hence, by Corollary 2.1 there is $\lambda>0$ independent of $\eta$ and $n$ such that $g^*\in U^{\alpha,\beta}_{p,\theta}$. Notice that for every $s\in Z^*(\eta)$ the B-spline $M_{k^*,s+r'/2}$ is supported in $J(s)$; consequently, by (4.18), $g^*(x^j)=0$, $j=1,\dots,n$. From the inequalities (4.16), (4.20) and (4.17) we obtain
\[
\varrho_n\ \gg\ \|g^*\|_q\ \gg\ n^{-\alpha-\beta/d+1/p-1/q}.
\]
This proves the lower bound of (4.14) for the case $\beta>0$.

We now consider the case $\beta<0$. We will use some notation coinciding with that in the proof of the case $\beta>0$. Put $k^*=k^*(\eta):=(\eta,m,\dots,m)$ for integer $\eta>m$. Consider the boxes $J(s)\subset I^d$,
\[
J(s):=\{x\in I^d:\ 2^{-\eta+m}s_1\le x_1<2^{-\eta+m}(s_1+1)\},\quad s\in Z(\eta),
\]
where $Z(\eta):=\{s\in\mathbb{Z}^d_+:\ 0\le s_1\le 2^{\eta-m}-1,\ s_j=0,\ j=2,\dots,d\}$. For a given $n$, we choose $\eta$ satisfying the relations
\[
2^{k^*_1}\ \asymp\ n,\qquad 2^{\eta-m}\ =\ |Z(\eta)|\ \ge\ 2n. \qquad (4.21)
\]
Let $X_n=\{x^j\}_{j=1}^n$ be an arbitrary subset of $n$ points in $I^d$. Since $J(s)\cap J(s')=\emptyset$ for $s\ne s'$ and $|Z(\eta)|\ge 2n$, there is $Z^*(\eta)\subset Z(\eta)$ such that $|Z^*(\eta)|\ge n$ and
\[
X_n\cap\Big\{\bigcup_{s\in Z^*(\eta)} J(s)\Big\}=\emptyset. \qquad (4.22)
\]
Consider the function $g^*\in\Sigma^d_{r'}(k^*)$ defined by
\[
g^*\ :=\ \lambda\,2^{-(\alpha+\beta-1/p)k^*_1}\sum_{s\in Z^*(\eta)} M_{k^*,s+r'/2}, \qquad (4.23)
\]
where the $M_{k^*,s+r'/2}$ are B-splines of order $r'$. Since $|Z^*(\eta)|\asymp 2^{k^*_1}$, by (2.20) we have
\[
\|g^*\|_q\ \asymp\ \lambda\,2^{-(\alpha+\beta-1/p+1/q)k^*_1}, \qquad (4.24)
\]
and
\[
\|g^*\|_p\ \asymp\ \lambda\,2^{-(\alpha+\beta)k^*_1}.
\]
Hence, by Corollary 2.1 there is $\lambda>0$ independent of $\eta$ and $n$ such that $g^*\in U^{\alpha,\beta}_{p,\theta}$. Notice that for every $s\in Z^*(\eta)$ the B-spline $M_{k^*,s+r'/2}$ is supported in $J(s)$; consequently, by (4.22), $g^*(x^j)=0$, $j=1,\dots,n$. From the inequalities (4.16), (4.24) and (4.21) we obtain
\[
\varrho_n(U^{\alpha,\beta}_{p,\theta},L_q)\ \gg\ \|g^*\|_q\ \gg\ n^{-\alpha-\beta+1/p-1/q}.
\]
This proves the lower bound of (4.14) for the case $\beta<0$.

Theorem 4.2. Let $0<p,\theta,q\le\infty$ and let $a\in\mathbb{R}^d_+$ satisfy the condition (3.11) and $1/p<a_1<a_2\le\dots\le a_d<r$. Assume that for a given $n\in\mathbb{Z}_+$, $\xi_n$ is the largest nonnegative number such that
\[
|G(\Delta'(\xi_n))|\le n. \qquad (4.25)
\]
Then $R_{\Delta'(\xi_n)}$ defines an asymptotically optimal linear sampling algorithm for $r_n:=r_n(U^{a}_{p,\theta},L_q(I^d))$ and $\varrho_n:=\varrho_n(U^{a}_{p,\theta},L_q(I^d))$ by
\[
R_{\Delta'(\xi_n)}(f)\ =\ L_n(X^*_n,\Phi^*_n,f)\ =\ \sum_{(k,s)\in K(\Delta'(\xi_n))} f(2^{-k}s)\,\psi_{k,s},
\]
where $X^*_n:=G(\Delta'(\xi_n))=\{2^{-k}s\}_{(k,s)\in K(\Delta'(\xi_n))}$ and $\Phi^*_n:=\{\psi_{k,s}\}_{(k,s)\in K(\Delta'(\xi_n))}$, and we have the following asymptotic order:
\[
\sup_{f\in U^{a}_{p,\theta}} \|f-R_{\Delta'(\xi_n)}(f)\|_q\ \asymp\ r_n\ \asymp\ \varrho_n\ \asymp\ n^{-a_1+(1/p-1/q)_+}. \qquad (4.26)
\]

Proof. Upper bounds. For a given $n\in\mathbb{Z}_+$ (large enough), due to Lemma 4.2 we have, for $\xi_n$ as in (4.25),
\[
n\ \asymp\ 2^{\xi_n/(a_1-(1/p-1/q)_+)}\ \asymp\ |G(\Delta'(\xi_n))|\ \le\ n.
\]
Hence, we find
\[
2^{-\xi_n}\ \asymp\ n^{-a_1+(1/p-1/q)_+}. \qquad (4.27)
\]
By Lemma 3.1 and (4.25), $R_{\Delta'(\xi_n)}$ is a linear sampling algorithm of the form (1.1), and consequently, from Theorem 3.12 we get
\[
\varrho_n\ \le\ r_n\ \le\ \sup_{f\in U^{a}_{p,\theta}} \|f-R_{\Delta'(\xi_n)}(f)\|_q\ \ll\ 2^{-\xi_n}.
\]
These relations together with (4.27) prove the upper bounds in (4.26).

Lower bounds. As in the proof of Theorem 4.1, it is sufficient to prove the lower bound for the case $p\le q$. Fix a number $r'=2m$ with integer $m$ such that $a_d<\min(r',r'-1+1/p)$. The remaining steps are similar to the proof of the lower bound for the case $\beta<0$ in Theorem 4.1; indeed, we can repeat almost all the details there, replacing $\alpha+\beta$ by $a_1$.
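In Theorems 4.1 and 4.2 the threshold $\xi_n$ is determined implicitly by the budget $n$ of sample points. Since $|G(\Delta(\xi))|$ is a nondecreasing function of $\xi$, it can be located by bisection. The sketch below illustrates this; the per-level point count used there is a hypothetical placeholder (the exact count depends on the construction of $G(\Delta(\xi))$ in Sections 2 and 3), and the set $\Delta(\xi)$ is again the one for $\beta<0$ with illustrative parameters.

```python
# Locate xi_n, approximately the largest xi with |G(Delta(xi))| <= n, by
# bisection on the nondecreasing map xi -> |G(Delta(xi))|.  The function
# points_at_level() is a placeholder assumption; only its order 2^{|k|_1}
# matters for the asymptotics (cf. (4.1)).
from itertools import product

d, alpha, beta, eps = 2, 2.0, -0.5, 0.1
nu = alpha + beta

def points_at_level(k):
    """Hypothetical number of grid points contributed by dyadic level k."""
    out = 1
    for kj in k:
        out *= 2 ** kj + 1
    return out

def grid_size(xi):
    K = int(max(xi, 0.0) / nu) + 1
    return sum(points_at_level(k)
               for k in product(range(K + 1), repeat=d)
               if (alpha - eps) * sum(k) + (beta + eps) * max(k) <= xi)

def largest_xi(n, hi=100.0, iters=60):
    lo = 0.0
    if grid_size(lo) > n:
        return None                    # even the coarsest grid exceeds the budget
    while grid_size(hi) <= n:
        hi *= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if grid_size(mid) <= n else (lo, mid)
    return lo

for n in (100, 1000, 10000):
    xi_n = largest_xi(n)
    print(f"n = {n:6d}   xi_n ~ {xi_n:6.2f}   |G| = {grid_size(xi_n)}   "
          f"2^-xi_n = {2 ** -xi_n:.2e}")
```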
Remark. Concerning the asymptotically optimal sparse grids of sampling points $G(\Delta(\xi_n))$ and $G(\Delta'(\xi_n))$ for $r_n(U^{\alpha,\beta}_{p,\theta},L_q(I^d))$, $\varrho_n(U^{\alpha,\beta}_{p,\theta},L_q(I^d))$ and $r_n(U^{a}_{p,\theta},L_q(I^d))$, $\varrho_n(U^{a}_{p,\theta},L_q(I^d))$, it is worth noticing the following. Let $A$ and $B$ be the sets of triples $(p,\theta,q)$ introduced in Section 3. For every triple $(p,\theta,q)\in A$ we can define the best choice of a family of asymptotically optimal sparse grids $G(\Delta(\xi_n))$ and $G(\Delta'(\xi_n))$. Whereas, for a triple $(p,\theta,q)\in B$, there are many families of asymptotically optimal sparse grids $G(\Delta(\xi_n))$ and $G(\Delta'(\xi_n))$, depending on the parameter $\varepsilon>0$, for $r_n(U^{\alpha,\beta}_{p,\theta},L_q(I^d))$, $\varrho_n(U^{\alpha,\beta}_{p,\theta},L_q(I^d))$ and $r_n(U^{a}_{p,\theta},L_q(I^d))$, $\varrho_n(U^{a}_{p,\theta},L_q(I^d))$, respectively. Moreover, the parameter $\varepsilon>0$ plays a crucial role in the construction of asymptotically optimal sparse grids for $(p,\theta,q)\in B$. Indeed, to understand the substance of the matter let us consider, for instance, the problem of asymptotically optimal sparse grids for even the simplest case $r_n(U^{\alpha,\beta}_{2,2},L_2(I^d))$ and $\varrho_n(U^{\alpha,\beta}_{2,2},L_2(I^d))$ with $\beta<0$. Suppose that for this case, instead of the set
\[
\Delta(\xi):=\{k\in\mathbb{Z}^d_+:\ (\alpha-\varepsilon)|k|_1+(\beta+\varepsilon)|k|_\infty\le\xi\},
\]
we take the set
\[
\tilde\Delta(\xi):=\{k\in\mathbb{Z}^d_+:\ \alpha|k|_1+\beta|k|_\infty\le\xi\}.
\]
Then $\tilde\Delta(\xi)$ is a proper subset of $\Delta(\xi)$, i.e., the grid $G(\Delta(\xi))$ is essentially extended from $G(\tilde\Delta(\xi))$ by the parameter $\varepsilon$. However, $|G(\tilde\Delta(\xi))|\asymp|G(\Delta(\xi))|$. On the other hand, the grid $G(\tilde\Delta(\xi))$ cannot be asymptotically optimal for $r_n(U^{\alpha,\beta}_{2,2},L_2(I^d))$ and $\varrho_n(U^{\alpha,\beta}_{2,2},L_2(I^d))$, because for this grid (3.5) is replaced by
\[
\sup_{f\in U^{\alpha,\beta}_{2,2}} \|f-R_{\tilde\Delta(\xi)}(f)\|_2\ \asymp\ 2^{-\xi}\xi^{d-1}.
\]
A similar optimality property of the grid $G(\Delta''(\xi))$ holds for $r_n(U^{\alpha,\beta}_{2,2},W^\gamma_2(I^d))$ and $\varrho_n(U^{\alpha,\beta}_{2,2},W^\gamma_2(I^d))$ with $\gamma>\beta$ (see the next section).

5 Sampling recovery in energy norm

In this section we extend the results on sampling recovery in the space $L_q(I^d)$ of functions from $B^{\alpha,\beta}_{p,\theta}$ in Sections 3 and 4 to sampling recovery in the energy norm of the isotropic Sobolev space $W^\gamma_q(I^d)$ with $\gamma>0$. We preliminarily study the sampling recovery in the norm of $B^\gamma_{q,\tau}$, and then derive the results on sampling recovery in the norm of $W^\gamma_q(I^d)$ as consequences of those in the norm of $B^\gamma_{q,\tau}$. Put $\tau^*:=\min(\tau,1)$ and $\theta^*:=(1/\tau^*-1/\theta)^{-1}$.

Lemma 5.1. Let $0<p,\theta,q,\tau\le\infty$, $0<\gamma<\min(r,r-1+1/p)$ and $\psi:\mathbb{Z}^d_+\to\mathbb{R}_+$. Then for every $f\in B^{\{\psi\}}_{p,\theta}$ we have
\[
\|f-R_\Delta(f)\|_{B^\gamma_{q,\tau}}\ \ll\ \|f\|_{B^{\{\psi\}}_{p,\theta}}\cdot
\begin{cases}
\displaystyle\sup_{k\in\mathbb{Z}^d_+\setminus\Delta} 2^{-\psi(k)+\gamma|k|_\infty+(1/p-1/q)_+|k|_1}, & \theta\le\tau^*,\\[2ex]
\displaystyle\Big(\sum_{k\in\mathbb{Z}^d_+\setminus\Delta}\big\{2^{-\psi(k)+\gamma|k|_\infty+(1/p-1/q)_+|k|_1}\big\}^{\theta^*}\Big)^{1/\theta^*}, & \theta>\tau^*.
\end{cases}
\]

Proof. Let $g$ be a function of the form (2.19). We have the Bernstein-type inequality
\[
\|g\|_{B^\gamma_{q,\tau}}\ \ll\ 2^{\gamma|k|_\infty}\|g\|_q,
\]
which can be proven in a way similar to the proof of [13, Corollary 5.2]. This inequality together with (2.20) gives
\[
\|g\|_{B^\gamma_{q,\tau}}\ \ll\ 2^{\gamma|k|_\infty+(1/p-1/q)_+|k|_1}\|g\|_p.
\]
Hence, we obtain for every $f\in B^{\{\psi\}}_{p,\theta}$,
\[
\|f-R_\Delta(f)\|^{\tau^*}_{B^\gamma_{q,\tau}}\ \ll\ \sum_{k\in\mathbb{Z}^d_+\setminus\Delta}\big\{2^{\gamma|k|_\infty+(1/p-1/q)_+|k|_1}\,\|q_k(f)\|_p\big\}^{\tau^*}.
\]
By use of this inequality, in a way similar to the proof of Lemma 3.2(i), we prove the lemma.

Let $0<p,\theta,q,\tau\le\infty$ and $\alpha,\gamma\in\mathbb{R}_+$, $\beta\in\mathbb{R}$ be given. We fix a number $\varepsilon$ such that $0<\varepsilon<\min(\alpha-(1/p-1/q)_+,\,|\gamma-\beta|)$, and define the set $\Delta''(\xi)$ for $\xi>0$ by
\[
\Delta''(\xi):=
\begin{cases}
\{k\in\mathbb{Z}^d_+:\ (\alpha-(1/p-1/q)_+)|k|_1-(\gamma-\beta)|k|_\infty\le\xi\}, & \theta\le\tau^*,\\
\{k\in\mathbb{Z}^d_+:\ (\alpha-(1/p-1/q)_++\varepsilon/d)|k|_1-(\gamma-\beta-\varepsilon)|k|_\infty\le\xi\}, & \theta>\tau^*,\ \beta>\gamma,\\
\{k\in\mathbb{Z}^d_+:\ (\alpha-(1/p-1/q)_+-\varepsilon)|k|_1-(\gamma-\beta+\varepsilon)|k|_\infty\le\xi\}, & \theta>\tau^*,\ \beta<\gamma.
\end{cases}
\]
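To see how Lemma 5.1 and the definition of $\Delta''(\xi)$ fit together, consider the first case $\theta\le\tau^*$ and assume, as is natural for the space $B^{\alpha,\beta}_{p,\theta}$ although not restated in this section, that the weight is $\psi(k)=\alpha|k|_1+\beta|k|_\infty$. Then for every $k\notin\Delta''(\xi)$,
\[
-\psi(k)+\gamma|k|_\infty+(1/p-1/q)_+|k|_1\ =\ -\big[(\alpha-(1/p-1/q)_+)|k|_1-(\gamma-\beta)|k|_\infty\big]\ \le\ -\xi,
\]
so the supremum in Lemma 5.1 is at most $2^{-\xi}$, which is exactly the upper bound claimed for this case in Theorem 5.1 below.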
The following theorems and lemma are counterparts of the corresponding results on sampling recovery in the space $L_q(I^d)$ of functions from $B^{\alpha,\beta}_{p,\theta}$ in Sections 3 and 4. They can be proven in a similar way with slight modifications.

Theorem 5.1. Let $0<p,\theta,q,\tau\le\infty$, $\alpha,\gamma\in\mathbb{R}_+$ and $\beta\in\mathbb{R}$, $\beta\ne\gamma$, satisfy the conditions
\[
\alpha>
\begin{cases}
(\gamma-\beta)/d, & \beta>\gamma,\\
\gamma-\beta, & \beta<\gamma,
\end{cases}
\]
and $1/p<\min(\alpha,\alpha+\beta)\le\max(\alpha,\alpha+\beta)<r$, $0<\gamma<\min(r,r-1+1/p)$. Then we have the following upper bound:
\[
\sup_{f\in U^{\alpha,\beta}_{p,\theta}} \|f-R_{\Delta''(\xi)}(f)\|_{B^\gamma_{q,\tau}}\ \ll\ 2^{-\xi}.
\]

Lemma 5.2. Under the assumptions of Theorem 5.1 we have
\[
\sum_{k\in\Delta''(\xi)} 2^{|k|_1}\ \asymp\ |G(\Delta''(\xi))|\ \asymp\ 2^{\xi/\nu},
\]
where
\[
\nu:=
\begin{cases}
\alpha-(\gamma-\beta)/d-(1/p-1/q)_+, & \beta>\gamma,\\
\alpha-(\gamma-\beta)-(1/p-1/q)_+, & \beta<\gamma.
\end{cases}
\]

Theorem 5.2. Under the assumptions of Theorem 5.1, let, for a given $n\in\mathbb{Z}_+$, $\xi_n$ be the largest nonnegative number such that $|G(\Delta''(\xi_n))|\le n$. Then $R_{\Delta''(\xi_n)}$ defines an asymptotically optimal linear sampling algorithm for $r_n:=r_n(U^{\alpha,\beta}_{p,\theta},B^\gamma_{q,\tau})$ and $\varrho_n:=\varrho_n(U^{\alpha,\beta}_{p,\theta},B^\gamma_{q,\tau})$ by
\[
R_{\Delta''(\xi_n)}(f)\ =\ L_n(X^*_n,\Phi^*_n,f)\ =\ \sum_{(k,s)\in K(\Delta''(\xi_n))} f(2^{-k}s)\,\psi_{k,s},
\]
where $X^*_n:=G(\Delta''(\xi_n))=\{2^{-k}s\}_{(k,s)\in K(\Delta''(\xi_n))}$ and $\Phi^*_n:=\{\psi_{k,s}\}_{(k,s)\in K(\Delta''(\xi_n))}$, and we have the following asymptotic orders:
\[
\sup_{f\in U^{\alpha,\beta}_{p,\theta}} \|f-R_{\Delta''(\xi_n)}(f)\|_{B^\gamma_{q,\tau}}\ \asymp\ r_n\ \asymp\ \varrho_n\ \asymp\
\begin{cases}
n^{-\alpha-(\beta-\gamma)/d+(1/p-1/q)_+}, & \beta>\gamma,\\
n^{-\alpha-\beta+\gamma+(1/p-1/q)_+}, & \beta<\gamma.
\end{cases}
\]

Theorem 5.3. Under the assumptions of Theorem 5.1, we have the following asymptotic orders for $1<q<\infty$:
\[
r_n(U^{\alpha,\beta}_{p,\theta},W^\gamma_q(I^d))\ \asymp\ \varrho_n(U^{\alpha,\beta}_{p,\theta},W^\gamma_q(I^d))\ \asymp\
\begin{cases}
n^{-\alpha-(\beta-\gamma)/d+(1/p-1/q)_+}, & \beta>\gamma,\\
n^{-\alpha-\beta+\gamma+(1/p-1/q)_+}, & \beta<\gamma.
\end{cases}
\]

Proof. The theorem follows from Theorem 5.2 and the inequality, valid for $f\in B^\gamma_{q,\min(p,2)}$,
\[
\|f\|_{W^\gamma_q(I^d)}\ \le\ C\,\|f\|_{B^\gamma_{q,\min(p,2)}}.
\]
The last inequality can be proven in a way similar to the proof of the inequality [20, (14)], on the basis of a generalization of the well-known Littlewood–Paley theorem for the norm $\|\cdot\|_{W^\gamma_q(\mathbb{R}^d)}$.

Remark. Asymptotically optimal linear sampling algorithms for $r_n(U^{\alpha,\beta}_{p,\theta},W^\gamma_q(I^d))$ and $\varrho_n(U^{\alpha,\beta}_{p,\theta},W^\gamma_q(I^d))$ are the same as for $r_n(U^{\alpha,\beta}_{p,\theta},B^\gamma_{q,\min(p,2)})$ and $\varrho_n(U^{\alpha,\beta}_{p,\theta},B^\gamma_{q,\min(p,2)})$. Theorem 5.3 is true also for $\gamma\in\mathbb{N}$, $0<q\le\infty$.

6 Optimal cubature

Every linear sampling algorithm $L_n(X_n,\Phi_n,\cdot)$ of the form (1.1) generates the cubature formula $I_n(X_n,\Lambda_n,f)$ with $\Lambda_n=\{\lambda_j\}_{j=1}^n$, $\lambda_j=\int_{I^d}\varphi_j(x)\,dx$. Hence, it is easy to see that
\[
|I(f)-I_n(X_n,\Lambda_n,f)|\ \le\ \|f-L_n(X_n,\Phi_n,f)\|_1,
\]
and consequently, from the definitions we have the inequality
\[
i_n(W)\ \le\ r_n(W,L_1(I^d)). \qquad (6.1)
\]
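As a simple illustration of (6.1) and of how a sampling algorithm induces a cubature formula, the following univariate sketch uses a nodal piecewise-linear (hat) basis in place of the B-spline functions $\psi_{k,s}$ of the present construction; the weights are the exact integrals of the basis functions, and the resulting rule is the classical trapezoidal rule.

```python
# A univariate toy example of the cubature formula induced by a linear sampling
# algorithm: the weights lambda_j are the integrals of the basis functions phi_j.
# Here the phi_j are piecewise-linear "hat" functions on [0,1] -- a stand-in for
# the B-spline functions psi_{k,s} of the paper -- so the induced rule is the
# trapezoidal rule, and |I(f) - I_m(f)| <= ||f - L_m f||_1 as in (6.1).
import math

def induced_cubature(f, m):
    h = 1.0 / m
    nodes = [j * h for j in range(m + 1)]
    # exact integrals of the hat functions over [0,1]: h/2 at the two ends, h inside
    weights = [h / 2 if j in (0, m) else h for j in range(m + 1)]
    return sum(w * f(x) for w, x in zip(weights, nodes))

f = math.exp
exact = math.e - 1.0
for m in (4, 8, 16, 32, 64):
    print(f"m = {m:3d}   |I(f) - I_m(f)| = {abs(exact - induced_cubature(f, m)):.3e}")
```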
Theorem 6.1. Let $0<p,\theta\le\infty$ and $\alpha\in\mathbb{R}_+$, $\beta\in\mathbb{R}$ be such that $1/p<\min(\alpha,\alpha+\beta)\le\max(\alpha,\alpha+\beta)<r$. Assume that for a given $n\in\mathbb{Z}_+$, $\xi_n$ is the largest nonnegative number such that $|G(\Delta(\xi_n))|\le n$. Then $R_{\Delta(\xi_n)}$ defines an asymptotically optimal cubature formula for $i_n(U^{\alpha,\beta}_{p,\theta})$ by
\[
I_n(X^*_n,\Lambda^*_n,f)\ =\ \sum_{(k,s)\in K(\Delta(\xi_n))} \lambda_{k,s}\,f(2^{-k}s),
\]
where $X^*_n:=G(\Delta(\xi_n))=\{2^{-k}s\}_{(k,s)\in K(\Delta(\xi_n))}$, $\Lambda^*_n:=\{\lambda_{k,s}\}_{(k,s)\in K(\Delta(\xi_n))}$,
\[
\lambda_{k,s}:=\int_{I^d}\psi_{k,s}(x)\,dx,
\]
and we have the following asymptotic orders:
\[
\sup_{f\in U^{\alpha,\beta}_{p,\theta}} |I(f)-I_n(X^*_n,\Lambda^*_n,f)|\ \asymp\ i_n(U^{\alpha,\beta}_{p,\theta})\ \asymp\
\begin{cases}
n^{-\alpha-\beta/d+(1/p-1)_+}, & \beta>0,\\
n^{-\alpha-\beta+(1/p-1)_+}, & \beta<0.
\end{cases} \qquad (6.2)
\]

Proof. The upper bound of (6.2) follows from (6.1) and Theorem 4.1. To prove the lower bound of (6.2) we observe that
\[
i_n(W)\ \ge\ \inf_{X_n=\{x^j\}_{j=1}^n\subset I^d}\ \sup_{f\in W:\ f(x^j)=0,\ j=1,\dots,n} |I(f)|,
\]
and that for the functions $g^*$ given in (4.19) and (4.23) we have $I(g^*)=\|g^*\|_1$. Hence, the lower bound is derived from the proof of the lower bound of Theorem 4.1.

In a similar way we can prove the following theorem.

Theorem 6.2. Let $0<p,\theta\le\infty$ and let $a\in\mathbb{R}^d_+$ satisfy the condition (3.11) and $a_1>1/p$. Assume that for a given $n\in\mathbb{Z}_+$, $\xi_n$ is the largest nonnegative number such that $|G(\Delta'(\xi_n))|\le n$. Then $R_{\Delta'(\xi_n)}$ defines an asymptotically optimal cubature formula for $i_n(U^{a}_{p,\theta})$ by
\[
I_n(X^*_n,\Lambda^*_n,f)\ =\ \sum_{(k,s)\in K(\Delta'(\xi_n))} \lambda_{k,s}\,f(2^{-k}s),
\]
where $X^*_n:=G(\Delta'(\xi_n))=\{2^{-k}s\}_{(k,s)\in K(\Delta'(\xi_n))}$, $\Lambda^*_n:=\{\lambda_{k,s}\}_{(k,s)\in K(\Delta'(\xi_n))}$,
\[
\lambda_{k,s}:=\int_{I^d}\psi_{k,s}(x)\,dx,
\]
and we have the following asymptotic order:
\[
\sup_{f\in U^{a}_{p,\theta}} |I(f)-I_n(X^*_n,\Lambda^*_n,f)|\ \asymp\ i_n(U^{a}_{p,\theta})\ \asymp\ n^{-a_1+(1/p-1)_+}.
\]

Remark. If in Theorems 6.1 and 6.2 we assume $1\le p\le\infty$, then
\[
i_n(U^{\alpha,\beta}_{p,\theta})\ \asymp\
\begin{cases}
n^{-\alpha-\beta/d}, & \beta>0,\\
n^{-\alpha-\beta}, & \beta<0,
\end{cases}
\qquad\text{and}\qquad
i_n(U^{a}_{p,\theta})\ \asymp\ n^{-a_1}.
\]

Acknowledgments. This work is funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant No. 102.01-2014.02. A part of this work was done when the author was working as a research professor at the Vietnam Institute for Advanced Study in Mathematics (VIASM). He would like to thank the VIASM for providing a fruitful research environment and working conditions. The author would like to thank Dr. Tino Ullrich for his valuable remarks and suggestions, in particular for pointing out the equation (2.8).

References

[1] K.I. Babenko, On the approximation of periodic functions of several variables by trigonometric polynomials, Dokl. Akad. Nauk USSR 132 (1960), 247–250; English transl. in Soviet Math. Dokl. 1 (1960).
[2] R. Bellman, Dynamic Programming, Princeton University Press, Princeton, 1957.
[3] J. Bergh and J. Löfström, Interpolation Spaces, An Introduction, Grundlehren der Mathematischen Wissenschaften 223, Springer-Verlag, 1976.
[4] O. Bokanowski, J. Garcke, M. Griebel, and I. Klompmaker, An adaptive sparse grid semi-Lagrangian scheme for first order Hamilton–Jacobi–Bellman equations, Journal of Scientific Computing 55(3) (2013), 575–605.
[5] H.-J. Bungartz and M. Griebel, A note on the complexity of solving Poisson's equation for spaces of bounded mixed derivatives, J. Complexity 15 (1999), 167–199.
[6] H.-J. Bungartz and M. Griebel, Sparse grids, Acta Numer. 13 (2004), 147–269.
[7] P.L. Butzer, M. Schmidt, E.L. Stark and L. Voigt, Central factorial numbers; their main properties and some applications, Numer. Funct. Anal. Optimiz. 10(5&6) (1989), 419–488.
[8] C.K. Chui, An Introduction to Wavelets, Academic Press, New York, 1992.
[9] C.K. Chui and H. Diamond, A natural formulation of quasi-interpolation by multivariate splines, Proc. Amer. Math. Soc. 99 (1987), 643–646.
[10] C. de Boor and G.J. Fix, Spline approximation by quasiinterpolants, J. Approx. Theory 8 (1973), 19–45.
[11] C. de Boor, K. Höllig and S. Riemenschneider, Box Splines, Springer-Verlag, Berlin, 1993.
[12] R.A. DeVore and G.G. Lorentz, Constructive Approximation, Springer-Verlag, New York, 1993.
[13] R.A. DeVore and V.A. Popov, Interpolation of Besov spaces, Trans. Amer. Math. Soc. 305 (1988), 397–413.
[14] Dinh Dũng, The number of integral points in some sets and approximation of functions of several variables, Mat. Zametki 36 (1984), 479–491.
[15] Dinh Dũng, Approximation of functions of several variables on a torus by trigonometric polynomials, Mat. Sb. (N.S.) 131(173)(2) (1986), 251–271.
[16] Dinh Dũng, On recovery and one-sided approximation of periodic functions of several variables, Dokl. Akad. Nauk SSSR 313 (1990), 787–790.
[17] Dinh Dũng, On optimal recovery of multivariate periodic functions, in: Harmonic Analysis (Conference Proceedings, ed. S. Igari), Springer-Verlag, Tokyo–Berlin, 1991, pp. 96–105.
[18] Dinh Dũng, Optimal recovery of functions of a certain mixed smoothness, Vietnam J. Math. 20(2) (1992), 18–32.
[19] Dinh Dũng, Continuous algorithms in n-term approximation and non-linear widths, J. Approx. Theory 102 (2000), 217–242.
[20] Dinh Dũng, Non-linear approximation using sets of finite cardinality or finite pseudo-dimension, J. Complexity 17 (2001), 467–492.
[21] Dinh Dũng, Non-linear sampling recovery based on quasi-interpolant wavelet representations, Adv. Comput. Math. 30 (2009), 375–401.
[22] Dinh Dũng, Optimal adaptive sampling recovery, Adv. Comput. Math. 34 (2011), 1–41.
[23] Dinh Dũng, B-spline quasi-interpolant representations and sampling recovery of functions with mixed smoothness, J. Complexity 27 (2011), 541–567.
[24] Dinh Dũng and C. Micchelli, Multivariate approximation by translates of the Korobov function on Smolyak grids, J. Complexity 29 (2013), 424–437.
[25] Dinh Dũng and T. Ullrich, N-widths and ε-dimensions for high-dimensional approximations, Found. Comput. Math. 13 (2013), 965–1003.
[26] Dinh Dũng and T. Ullrich, Lower and upper bounds for the error of optimal cubature in Besov spaces of mixed smoothness, http://arxiv.org/abs/1311.1563.
[27] E.M. Galeev, Kolmogorov widths in the space $\widetilde{L}_q$ of the classes $\widetilde{W}^{\bar\alpha}_p$ and $\widetilde{H}^{\bar\alpha}_p$ of periodic functions of several variables, Izv. Akad. Nauk SSSR, Ser. Mat. 49 (1985), 916–934.
[28] E.M. Galeev, Approximation of classes of periodic functions of several variables by nuclear operators, Mat. Zametki 47 (1990), 32–41.
[29] J. Garcke and M. Hegland, Fitting multidimensional data using gradient penalties and the sparse grid combination technique, Computing 84(1–2) (2009), 1–25.
[30] T. Gerstner and M. Griebel, Numerical integration using sparse grids, Numer. Algorithms 18 (1998), 209–232.
[31] T. Gerstner and M. Griebel, Sparse grids, in: R. Cont (ed.), Encyclopedia of Quantitative Finance, John Wiley and Sons, 2010.
[32] M. Griebel and J. Hamaekers, Tensor product multiscale many-particle spaces with finite-order weights for the electronic Schrödinger equation, Zeitschrift für Physikalische Chemie 224 (2010), 527–543.
[33] M. Griebel and H. Harbrecht, A note on the construction of L-fold sparse tensor product spaces, Constructive Approximation 38(2) (2013), 235–251.
[34] M. Griebel and M. Holtz, Dimension-wise integration of high-dimensional functions with applications to finance, J. Complexity 26 (2010), 455–489.
[35] M. Griebel and H. Harbrecht, On the construction of sparse tensor product spaces, Mathematics of Computation 82(282) (2013), 975–994.
[36] M. Griebel and S. Knapek, Optimized general sparse grid approximation spaces for operator equations, Math. Comp. 78(268) (2009), 2223–2257.
[37] H.-C. Kreusler and H. Yserentant, The mixed regularity of electronic wave functions in fractional order and weighted Sobolev spaces, Numer. Math. 121(4) (2012), 781–802.
[38] E. Novak and H. Triebel, Function spaces in Lipschitz domains and optimal rates of convergence for sampling, Constr. Approx. 23 (2006), 325–350.
[39] E. Novak and H. Woźniakowski, Tractability of Multivariate Problems, Volume I: Linear Information, EMS Tracts in Mathematics, Vol. 6, Eur. Math. Soc. Publ. House, Zürich, 2008.
[40] E. Novak and H. Woźniakowski, Tractability of Multivariate Problems, Volume II: Standard Information for Functionals, EMS Tracts in Mathematics, Vol. 12, Eur. Math. Soc. Publ. House, Zürich, 2010.
[41] W. Sickel and T. Ullrich, The Smolyak algorithm, sampling on sparse grids and function spaces of dominating mixed smoothness, East J. Approx. 13 (2007), 387–425.
[42] W. Sickel and T. Ullrich, Spline interpolation on sparse grids, Applicable Analysis 90 (2011), 337–383.
[43] S.A. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, Dokl. Akad. Nauk 148 (1963), 1042–1045.
[44] V. Temlyakov, Approximation recovery of periodic functions of several variables, Mat. Sb. 128 (1985), 256–268.
[45] V. Temlyakov, On approximate recovery of functions with bounded mixed derivative, J. Complexity 9 (1993), 41–59.
[46] V. Temlyakov, Approximation of Periodic Functions, Nova Science Publishers, Inc., New York, 1993.
[47] H. Triebel, Interpolation Theory, Function Spaces, Differential Operators, Johann Ambrosius Barth, Heidelberg, 1995.
[48] H. Triebel, Bases in Function Spaces, Sampling, Discrepancy, Numerical Integration, European Math. Soc. Publishing House, Zürich, 2010.
[49] T. Ullrich, Smolyak's algorithm, sampling on sparse grids and Sobolev spaces of dominating mixed smoothness, East J. Approx. 14 (2008), 1–38.
[50] H. Yserentant, The hyperbolic cross space approximation of electronic wavefunctions, Numer. Math. 105(4) (2007), 659–690.
[51] H. Yserentant, Regularity and Approximability of Electronic Wave Functions, Lecture Notes in Mathematics, vol. 2000, Springer-Verlag, Berlin, 2010.
[52] H. Yserentant, The mixed regularity of electronic wave functions multiplied by explicit correlation factors, ESAIM Math. Model. Numer. Anal. 45(5) (2011), 803–824.
[53] C. Zenger, Sparse grids, in: W. Hackbusch (ed.), Parallel Algorithms for Partial Differential Equations, Vol. 31 of Notes on Numerical Fluid Mechanics, Vieweg, Braunschweig/Wiesbaden, 1991.