Computers and Mathematics with Applications 86 (2021) 16–32

A two-dimensional sideways problem with random discrete data

Dang Duc Trong a,b, Tran Quoc Viet c,d, Vo Dang Khoa e, Nguyen Thi Hong Nhung a,b,∗

a Faculty of Mathematics and Computer Sciences, University of Science, Ho Chi Minh City, Viet Nam
b Vietnam National University Ho Chi Minh, Viet Nam
c Institute of Fundamental and Applied Sciences, Duy Tan University, Ho Chi Minh City 700000, Viet Nam
d Faculty of Natural Science, Duy Tan University, Danang City 550000, Viet Nam
e Ho Chi Minh City Medicine and Pharmacy University, Viet Nam

ARTICLE INFO

Keywords: Heat equation; Heat distribution; Ill-posed problem; Regularization; Statistical inverse problems; Nonparametric regression

ABSTRACT

The heat distribution on the surface of a layer inside a heat-conducting body can be recovered from two interior temperature measurements. It is reasonable to take discrete measured data with random noise into account. In this paper, approximations of the interior measurements are constructed from the discrete data. First, a one-dimensional Sinc series is applied to represent the approximating function in the variable 𝑥, which stands for the length of the object, and then trigonometric estimators in the time variable 𝑡 are obtained. We also construct an estimator for the heat distribution which converges in the sense of the mean integrated squared error (MISE). Some numerical experiments demonstrate that the proposed regularization can be implemented numerically.

Introduction

In heat conduction theory, the temperature history of a body is often determined from the temperature on the whole surface of the body. In many applications, however, part of the surface is inaccessible, for example because it is too hot or too cold. In such situations (see, e.g., [1]), instead of attaching a temperature sensor to the surface of the body, we can only measure the temperature at interior points or on the accessible part of the surface in order to recover the heating history inside the body. The problem of determining the surface temperature or the heat flux history from temperatures measured in the interior, or on the accessible part of the surface, is called the sideways problem.

Many types of interior measurements appear in the literature. For the one-dimensional case the heat body is modeled as the interval 𝛺 = [0, 𝐿], and several kinds of interior measurements can be listed. In [2,3], the temperature is recovered from the temperature history 𝑢(𝑥₀, 𝑡) and the flux 𝑢ₓ(𝑥₀, 𝑡), where 𝑥₀ ∈ [0, 𝐿] is the 𝑥-coordinate of an interior point of the body. In [4,5], the authors considered the model 𝛺 = (0, ∞) and used the data 𝑢(𝑥₀, 𝑡) (𝑥₀ > 0), together with a decay assumption on 𝑢(𝑥, 𝑡) as 𝑥 → ∞, to recover 𝑢(𝑥, 𝑡). For the higher-dimensional case there are also several papers. In [6], the authors consider the body 𝛺 = (0, 1) × (0, ∞) with the inaccessible surface {𝑥 = 1}; the interior data are 𝑢(0, 𝑦, 𝑡) = 𝜙(𝑦, 𝑡) together with the flux 𝑢ₓ(0, 𝑦, 𝑡). In [7], the authors considered 𝛺 = ℝ × (0, 2) and recovered the function 𝑢(𝑥, 𝑡) from the measurements 𝑢(𝑥, 1, 𝑡) and 𝑢(𝑥, 2, 𝑡).

In this paper, the body is modeled by the strip ℝ × (0, 2), with the line ℝ × {𝑦 = 0} representing the inaccessible surface of the body. The temperature is measured at the two accessible lines ℝ × {𝑦 = 1} and ℝ × {𝑦 = 2}. We consider the problem of determining the heat distribution

𝑣(𝑥, 𝑡) = 𝑢(𝑥, 𝑦₀, 𝑡), 0 ≤ 𝑦₀ < 1,  (1)

where 𝑢 satisfies the heat equation

𝜅∇²𝑢 − ∂𝑢/∂𝑡 = 0  (2)

for some constant 𝜅 > 0, and is subject to
the interior conditions 𝑢 (𝑥, 2, 𝑡) = 𝑔(𝑥, 𝑡), 𝑥 ∈ R, 𝑡 > 0, (3) 𝑢 (𝑥, 1, 𝑡) = 𝑓 (𝑥, 𝑡), 𝑥 ∈ R, 𝑡 > 0, (4) and the initial condition 𝑢(𝑥, 𝑦, 0) = 0, 𝑥 ∈ R, < 𝑦 < (5) In fact, some results dealing with deterministic data have been studied carefully so far First of all, it is sensible to have an interpretation of previous examined result in [7] The authors have studied the problem with attention paid on deterministic data For the uniqueness of solution, we shall have to measure the temperature history at two interior lines R × {𝑦 = 1} and R × {𝑦 = 2} which enable us to identify uniquely the heating history inside of the layer (see, e.g., [1]) Additionally, it is worth insisting that the problem is ill-posed if we consider the problem over the whole time interval R+ with respect to the 𝐿2 -norm In particular, the authors regularized the function 𝑢 (𝑥, 0, 𝑡) directly from measured data of 𝑢 (𝑥, 1, 𝑡) and 𝑢 (𝑥, 2, 𝑡) To be more specific, they ∗ Corresponding author at: Faculty of Mathematics and Computer Sciences, University of Science, Ho Chi Minh City, Viet Nam E-mail addresses: ddtrong@hcmus.edu.vn (D.D Trong), tranquocviet5@duytan.edu.vn (T.Q Viet), vdkhoa@ump.edu.vn (V.D Khoa), nthnhung@hcmus.edu.vn (N.T.H Nhung) https://doi.org/10.1016/j.camwa.2021.01.013 Received August 2020; Received in revised form December 2020; Accepted 22 January 2021 Available online xxxx 0898-1221/© 2021 Elsevier Ltd All rights reserved D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 where i2 = −1, 𝑝 = (𝑝1 , … , 𝑝𝑘 ), 𝑥 = (𝑥1 , … , 𝑥𝑘 ), 𝑝 ⋅ 𝑥 = Moreover, 𝜙f t ∈ 𝐿2 (R𝑘 ) and changed the problem into an integral equation of convolution type as well as represented the solution by an expansion of two-dimensional Sinc series In the present paper, we develop the ideas of paper [7] Concentrating on the numerical aspect of the sideways problem, we focus on the problem of recovering the heat distribution from measured data along with the effect of random noise In this circumstance, the problem is still ill-posed Moreover, we are also concerned with random noise data which is new from previous viewpoint Indeed, measurements are often to contain some errors all the time In practice, if we measure the func{ } tions 𝑓 (𝑥, 𝑡), 𝑔(𝑥, 𝑡) at discrete points 𝑥𝑚 = 𝑚ℎ, ℎ > 0, −𝑀 ≤ 𝑚 ≤ 𝑀 and discrete points of time { < 𝑡1 ≤ 𝑡2 ≤ ⋯ ≤ 𝑡𝑛 ≤ 𝑇 , then we} obtain a set of measured values (𝜏𝑗,𝑚 , 𝜂𝑗,𝑚 ) ∶ −𝑀 ≤ 𝑚 ≤ 𝑀, ≤ 𝑗 ≤ 𝑛 where ( ) ( ) 𝜏𝑗,𝑚 ≈ 𝑓 𝑚ℎ, 𝑡𝑗 , 𝜂𝑗,𝑚 ≈ 𝑔 𝑚ℎ, 𝑡𝑗 The points 𝑥𝑚 , for 𝑚 = −𝑀, 𝑀, are called (non-random) design points The actual measurements are always observed with errors i.e ( ) 𝜏𝑗,𝑚 = 𝑓 𝑚ℎ, 𝑡𝑗 + 𝜖𝑗,𝑚 , (6) ) ( (7) 𝜂𝑗,𝑚 = 𝑔 𝑚ℎ, 𝑡𝑗 + 𝜀𝑗,𝑚 𝜙(𝑥) = 𝑗=1 𝑝𝑗 𝑥𝑗 𝑥 ∈ R𝑘 a.e In addition, we have the Parseval equality ‖𝜙‖2𝐿2 (R𝑘 ) = ‖ f t ‖2 ‖𝜙 ‖ (2𝜋)𝑘 ‖ ‖𝐿2 (R𝑘 ) For 𝜃 > 0, we define { 𝐻 𝜃 (R𝑘 ) = 𝜙 ∈ 𝐿2 (R𝑘 ) ∶ ‖𝜙‖2𝐻 𝜃 = ∫R𝑘 } (1 + |𝑝|2 )𝜃 |𝜙f t (𝑝)| d𝑝 < ∞ For 𝜙 ∈ 𝐿2 (R𝑘 ), we denote the Fourier transform of function 𝜙(𝑥1 , … , 𝑥𝑘 ) with respect to the 𝑗th variable 𝑥𝑗 (for 𝑗 = 1, 𝑘) by ( ) 𝜙f𝑗 t 𝑥1 , … , 𝑝𝑗 , … , 𝑥𝑘 = ∞ ∫−∞ 𝜙(𝑥1 , … , 𝑥𝑗 , … , 𝑥𝑘 ) e−i𝑥𝑗 𝑝𝑗 d𝑥𝑗 , 𝑝𝑗 ∈ R Similarly, 𝜙f𝑗,𝑙t denotes the Fourier transform of 𝜙f𝑗 t with respect to 𝑥𝑙 for 𝑙 = 1, 𝑘 and 𝑙 ≠ 𝑗 We define a class of functions used in the Sinc approximation of our paper In statistics literature, the unknown errors 𝜖𝑗,𝑚 , 𝜀𝑗,𝑚 , 𝑚 = −𝑀, 𝑀 and 𝑗 = 1, 𝑛 are often assumed to be mutually independent There are a lot of reasons why these errors arise, such as instrument or environment for instance In general, when 
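As an aside on the measurement model (6)–(7): the minimal Python sketch below generates synthetic discrete data on the design points 𝑥ₘ = 𝑚ℎ and times 𝑡ⱼ = 𝑗𝑇∕𝑛 with independent noise of bounded variance. It is an illustration only; the smooth functions f, g and the Gaussian noise law used here are placeholders (any noise with Var ≤ 𝜎² fits the model), and the paper's actual implementation is written in Fortran.

```python
import numpy as np

def sample_measurements(f, g, h, M, T, n, sigma1, sigma2, rng=None):
    """Generate the discrete noisy data of (6)-(7):
    tau[j, m] = f(m*h, t_j) + eps_{j,m},  eta[j, m] = g(m*h, t_j) + veps_{j,m},
    with independent noise whose variances are bounded by sigma1^2, sigma2^2."""
    rng = np.random.default_rng(rng)
    x = h * np.arange(-M, M + 1)            # design points x_m = m*h, m = -M..M
    t = T * np.arange(1, n + 1) / n          # times t_j = j*T/n, j = 1..n
    X, Tt = np.meshgrid(x, t)                # both of shape (n, 2M+1)
    tau = f(X, Tt) + sigma1 * rng.standard_normal(X.shape)
    eta = g(X, Tt) + sigma2 * rng.standard_normal(X.shape)
    return x, t, tau, eta

if __name__ == "__main__":
    # toy smooth data, just to exercise the sampler
    f = lambda x, t: t * np.exp(-x**2 - t)
    g = lambda x, t: 0.5 * t * np.exp(-x**2 - t)
    x, t, tau, eta = sample_measurements(f, g, h=0.1, M=30, T=20.0, n=500,
                                         sigma1=0.05, sigma2=0.05, rng=0)
    print(tau.shape, eta.shape)              # (500, 61) (500, 61)
```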
the effects of the instrument on measurements are considered, the errors seem to have the greatest possible bound It can be noted that this model is more or less similar to the one for deterministic case and can be considered in the deterministic setting Otherwise, if the errors come from the environment, then their magnitude may be not uniformly bounded Therefore, we will only examine a model in which the errors’ variance is uniformly bounded and call it the bounded variance model, i.e., there are 𝜎1 , 𝜎2 > such that Var(𝜖𝑗,𝑚 ) ≤ 𝜎12 , Var(𝜀𝑗,𝑚 ) ≤ 𝜎22 𝜙f t (𝑝) ei𝑝⋅𝑥 d𝑝, (2𝜋)𝑘 ∫R𝑘 ∑𝑘 Definition 2.1 Let 𝐶 > 0, 𝑞, 𝑟0 , 𝑇0 , 𝜌, 𝜚 > We denote by 𝑉 (𝑞, 𝐶) the set of functions 𝜙 ∈ 𝐿2 (R × [0, ∞)) such that ∬R×[0,∞) + (1 + 𝑟2 )𝑞 |𝜙f1t (𝑟, 𝑡)| 𝑑𝑟𝑑𝑡 |2 | 𝜕𝜙f t | | (1 + 𝑟2 )𝑞 || (𝑟, 𝑡)|| 𝑑𝑟𝑑𝑡 ≤ 𝐶 ∬(R⧵[𝑟0 ,𝑟0 ])×[0,∞) 𝜕𝑟 | | | | We also denote by 𝑉trunc (𝜌, 𝐶) the set of functions 𝜙 ∈ 𝐿2 (R × [0, ∞)) such that 𝐶 |𝜙(𝑥, 𝑡)|2 d𝑥d𝑡 ≤ 𝜌 for all 𝑇 > 𝑇0 ∬R×(𝑇 ,∞) 𝑇 (8) Finally, we define { } 𝑊 (𝜚) = 𝜙 ∈ 𝐿2 (R) ∶ supp 𝜙f t ⊂ [−𝜚, 𝜚] In this case, the errors can be non-identically distributed The random data model was studied in some recent papers (see, e.g., [8–13]) Different to the studies [8–12], in the present paper, we consider the numerical problem on an unbounded domain (𝑥, 𝑦, 𝑡) ∈ R × (0, 2) × R+ and using the Sinc expansion The problem is of finding the heat distribution 𝑢 = 𝑢(𝑥, 𝑦, 𝑡) satisfying (2)–(5) from the discrete data in (6), (7) The one-dimensional similar problem has been considered recently in [13] In the present paper, we combine the Sinc expansion with the fractional discrete Fourier transform (fDFT) to construct numerically a regularization for the problem From our knowledge, this combination is new The remaining part of present paper is divided into five sections Section is devoted to set up some necessary definitions and transform our problem to the integral form In Section 3, we state four main results of our paper More explicitly, we first present a result which can be applied to define computation parameters Next, we state approximations of functions 𝑓 (𝑥, 𝑡), 𝑔(𝑥, 𝑡) by a combination of truncated Sinc expansion and truncated Fourier expansion with respect to the 𝑥-variable and the 𝑡-variable respectively Then, we propose an estimator for 𝑢(𝑥, 𝑦0 , 𝑡) which converges in the sense of integrated mean squared error (MISE) We also give a specific way to define regularization parameters In Section 4, some numerical experiments are given as demonstrations of the ability of numerical implementation for our regularization Finally, in Section 5, we present the proofs of the main results As in [14] we define the Cardinal function [ ] sin 𝜋(𝑥 − 𝑚ℎ)∕ℎ 𝑆(𝑚, ℎ)(𝑚ℎ) = 1, 𝑆(𝑚, ℎ)(𝑥) = , 𝜋(𝑥 − 𝑚ℎ)∕ℎ 𝑚 ∈ Z, ℎ > 0, 𝑥 ≠ 𝑚ℎ which has the orthogonal property (see [14], Chapter 1, Section 1.10, pages 91–92) { −1∕2 } Lemma 2.2 Let ℎ > 0, then the family of(functions ℎ 𝑆(𝑚, ℎ) is a ) complete orthonormal basis in the space 𝑊 𝜋ℎ We have { ∞ ℎ, 𝑚 = 𝑙, 𝑆(𝑚, ℎ)(𝑥)𝑆(𝑙, ℎ)(𝑥)d𝑥 = ∫−∞ 0, otherwise We also give here some definitions which will be used in Fourier expansions(in 𝐿2 (0, 𝑇)) for 𝑇 > Recall that the system {𝜙𝑝 } with 𝜙𝑝 (𝑡) = sin (𝑝 − 21 ) 𝜋𝑡 for every 𝑝 = 1, 2, …, is an orthogonal basis of 𝑇 𝑇 𝐿2 [0, 𝑇 ] and denote ⟨𝜑, 𝜓⟩𝑇 = ∫0 𝜑(𝑡)𝜓(𝑡)𝑑𝑡 Definition 2.3 Denote { F = 𝜓 ∶ R × [0, ∞) → R ∣ 𝜓(𝑥, 0) = 0, 𝜓(𝑥, ⋅) ∈ 𝐿2 (0, 𝑇 ) for every 𝑇 > 0, 𝑥 ∈ R} Preliminary results For 𝛼, 𝛽 > 0, ℎ ∈ (0, 1) and a function 𝛬 ∶ [0, ∞) × [0, 1] → (0, ∞), 𝛬 = 𝛬(𝑇 , ℎ), we define the ellipsoid 𝐶𝛼,𝛽,𝛬 the set of 𝜓 ∈ F such 
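To fix ideas, here is a small numerical check of the two building blocks just introduced: the Cardinal function 𝑆(𝑚, ℎ) of Lemma 2.2 and the sine system 𝜙ₚ(𝑡) = sin((𝑝 − 1∕2)𝜋𝑡∕𝑇). The finite integration window, grid sizes and the plain trapezoidal quadrature are arbitrary choices made only for this illustration and are not part of the paper's method.

```python
import numpy as np

def S(m, h, x):
    """Cardinal function S(m, h)(x) = sin(pi(x - m h)/h) / (pi(x - m h)/h), with S(m, h)(m h) = 1."""
    return np.sinc((x - m * h) / h)          # np.sinc(z) = sin(pi z)/(pi z)

def phi(p, t, T):
    """Sine system phi_p(t) = sin((p - 1/2) pi t / T), p = 1, 2, ..."""
    return np.sin((p - 0.5) * np.pi * t / T)

def trap(y, x):
    """Plain trapezoidal rule, used here only to check orthogonality numerically."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

if __name__ == "__main__":
    # Lemma 2.2: int_R S(m,h) S(l,h) dx = h if m == l, 0 otherwise (checked on a finite window)
    h = 0.5
    x = np.linspace(-300.0, 300.0, 1_200_001)
    for m, l in [(0, 0), (0, 1), (2, -3)]:
        print(m, l, round(trap(S(m, h, x) * S(l, h, x), x), 3))   # ~0.5, ~0.0, ~0.0
    # the phi_p are orthogonal on [0, T] with <phi_p, phi_p>_T = T/2
    T = 20.0
    t = np.linspace(0.0, T, 200_001)
    print(round(trap(phi(3, t, T) ** 2, t), 3),
          round(trap(phi(3, t, T) * phi(7, t, T), t), 3))          # ~10.0, ~0.0
```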
that [ ] ∑ ∑ ∑ 𝑝2𝛼 |⟨𝜓(0, ⋅), 𝜙𝑝 ⟩𝑇 |2 + 𝑝2𝛼 |𝑚|2𝛽 |⟨𝜓(𝑚ℎ, ⋅), 𝜙𝑝 ⟩𝑇 |2 2.1 The Fourier transform and Fourier expansion Before going to the main part of the paper, we set up some notations For 𝑘 ∈ N, 𝜙 ∈ 𝐿2 (R𝑘 ), the Fourier transform of 𝜙 is defined by 𝜙f t (𝑝) = ∫R𝑘 𝜙(𝑥) e−i𝑝⋅𝑥 d𝑥, 𝑝 ∈ R𝑘 , 𝑝≥1 |𝑚|≥1 𝑝≥1 ≤ 𝛬2 ∀(𝑇 , ℎ) ∈ [0, ∞) × [0, 1] (9) We will give an example of the ellipsoid 17 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 Lemma 2.4 Let 𝐶Fou > 0, < 𝛼 < 3∕2, 𝛽1′ , 𝛽2′ > 0, < 𝛽 < (2𝛽 ′ − 1)∕2 with 𝛽 ′ = min{(𝛽1′ + 𝛽2′ )∕2; 𝛽2′ } Assume that 𝜓 ∈ F, 𝜓 = 𝜓(𝑥, 𝑡) has the second derivative with respect to the variable 𝑡 and ‖𝜓𝑡 (𝑥, ⋅)‖𝐿2 (0,∞) ≤ ′ Note that )) ( ( | e𝜆(2−𝑦0 ) − e−𝜆(2−𝑦0 ) |2 e2𝐴0 (2−𝑦0 ) + e−2𝐴0 (2−𝑦0 ) − cos 2𝐵0 − 𝑦0 | = | , | | e𝜆 − e−𝜆 e2𝐴0 + e−2𝐴0 − cos(2𝐵0 ) | | ′ 𝐶Fou (1 + |𝑥|)−𝛽1 , ‖𝜓𝑡𝑡 (𝑥, ⋅)‖𝐿2 (0,∞) ≤ 𝐶Fou (1 + |𝑥|)−𝛽2 Then 𝜓 ∈ 𝐶𝛼,𝛽,𝛬 with 𝐶𝛬 𝑇 ′ ℎ2𝛽 𝛬2 = 𝛬2 (𝑇 , ℎ) = 𝐶𝛬 = 96𝐶Fou 𝜋 ( − 2𝛼 − 2𝛼 and )( ( 1+2 2𝛽 ′ − 2𝛽 2𝛽 ′ − 2𝛽 − where 𝐴0 and 𝐵0 are defined in (11) The increasing √√first quantity is √ 2 exponentially for < 𝑦0 < as 𝐴0 = 𝑟 + 𝑠 ∕𝜅 + 𝑟 ∕ → ∞ Therefore, a small disturbance for the data 𝑓 (𝑥, 𝑡) will be amplified infinitely by this factor, hence the problem of recovering the temperature 𝑣(𝑥, 𝑡) from the measured data is severely ill-posed )) 2.2 The Fourier transform of the solution We shall find solutions of Eq (2) by using the Fourier transform Putting 𝑢(𝑥, 𝑦, 𝑡) = for 𝑡 < 0, 𝑥 ∈ R2 , 𝑦 ∈ (0, 2), and applying the Fourier transform with respect to 𝑥, 𝑡, we obtain t −𝑟2 𝑢f1,3 (𝑟, 𝑦, 𝑠) + 𝑠 t 𝜕2 f t 𝑢 (𝑟, 𝑦, 𝑠) = i 𝑢f1,3 (𝑟, 𝑦, 𝑠) 𝜅 𝜕𝑦2 1,3 Main results of the paper 3.1 Assumptions (10) The latter subsection shows that the problem is ill-posed, hence a regularization is in order We first state assumptions for 𝑓 , 𝑔 From now on, we assume 𝜆2 −𝑟2 −i𝑠∕𝜅 The characteristic equation = of the differential equation (10) has two solutions 𝜆1,2 = ±𝜆(𝑟, 𝑠), where √√ 𝑟4 + 𝑠2 ∕𝜅 + 𝑟2 , 𝜆 = 𝐴0 + i𝐵0 , 𝐴0 = √ (11) √√ 𝐵0 = sgn(𝑠) √ 𝑟4 + 𝑠2 ∕𝜅 − 𝑟2 t 𝐶1 e𝜆 + 𝐶2 e−𝜆 = 𝑢f1,3 (𝑟, 1, 𝑠) = ∫R 𝑔1f t (𝑟, 𝑡) e−i𝑠𝑡 d𝑡 ∶= 𝐺(𝑟, 𝑠), 𝐺e−𝜆 − 𝐹 e−2𝜆 𝐹 e2𝜆 − 𝐺e𝜆 , 𝐶2 = 𝜆 −𝜆 e −e e𝜆 − e−𝜆 The Fourier transform of the solution of (1)–(5) with respect to the variable (𝑥, 𝑡) is (15) ′ ∃ 𝛼, 𝛽 > 1∕2, 𝛽 > 0, 𝐶𝛬 , 𝜏 > 0, ℎ0 , 𝑇0 > 0, s.t 𝑓 , 𝑔 ∈ 𝐶𝛼,𝛽,𝛬 √ ′ for all 𝛬 = 𝐶𝛬 𝑇 𝜏 ∕ℎ𝛽 , ℎ ∈ (0, ℎ0 ), 𝑇 ∈ (𝑇0 , ∞) Theorem 3.1 𝐺(𝑟, 𝑠) e−𝜆 − 𝐹 (𝑟, 𝑠) e−2𝜆 𝜆𝑦0 𝐹 (𝑟, 𝑠) e2𝜆 − 𝐺(𝑟, 𝑠) e𝜆 −𝜆𝑦0 e + e e𝜆 − e−𝜆 e𝜆 − e−𝜆 = 𝐹 (𝑟, 𝑠) 𝐶𝜆 (𝑟, 𝑠) − 𝐺(𝑟, 𝑠) 𝐷𝜆 (𝑟, 𝑠), 𝛼 = 𝛽 ′ = 1, 𝜌 = 2, 𝜏 = 2, arbitrary 𝑞 > 0, arbitrary 𝛽 ∈ (0, 1∕2) sinh 𝜆(1 − 𝑦0 ) 𝐷𝜆 (𝑟, 𝑠) = sinh 𝜆 (12) We can also choose 𝐶 ≥ lim (𝑟,𝑠)→(0,0) 𝐷𝜆 (𝑟, 𝑠) = − 𝑦0 t (𝑟, 𝑦 , 𝑠) Let us remind that 𝑣(𝑥, 𝑡) = 𝑢(𝑥, 𝑦0 , 𝑡), hence 𝑣f t (𝑟, 𝑠) = 𝑢f1,3 Consequently, we have the following formula of the exact solution 𝑢f t (𝑟, 𝑦0 , 𝑠)ei(𝑟𝑥+𝑠𝑡) d𝑟d𝑠 4𝜋 ∬R2 1,3 ( ) e𝜆(2−𝑦0 ) − e−𝜆(2−𝑦0 ) e𝜆(1−𝑦0 ) − e−𝜆(1−𝑦0 ) = 𝐹 (𝑟, 𝑠) − 𝐺(𝑟, 𝑠) 𝜆 −𝜆 𝜆 −𝜆 e −e e −e 4𝜋 ∬R2 × ei(𝑟𝑥+𝑠𝑡) d𝑟d𝑠 𝐶Fou t 𝑣f t (𝑟, 𝑠) = 𝑢f1,3 (𝑟, 𝑦0 , 𝑠) = 𝐹 (𝑟, 𝑠) e𝜆 − e−𝜆 − 𝐺(𝑟, 𝑠) e𝜆 − e−𝜆 | 𝜕𝑣f t |2 | 𝑦 | | | d𝑟d𝑠, (𝑟, 𝑠) | 4𝜋 ∬R2 || 𝜕𝑠 | | | { } ≥ sup𝑥∈R (1 + |𝑥|) ‖𝑣𝑦,𝑡 (𝑥, )‖𝐿2 (0,∞) + ‖𝑣𝑦,𝑡𝑡 (𝑥, )‖𝐿2 (0,∞) Remark The condition (1 + |𝑥| + |𝑡|)𝑤 ∈ 𝐿2 (R2 ) is equivalent to 𝑤f t ∈ 𝐻 (R2 ) which is quite natural In the following parts, we use 𝑓 (𝑥, 𝑡) = 𝑣1 (𝑥, 𝑡) = 𝑢(𝑥, 1, 𝑡) and 𝑔(𝑥, 𝑡) = 𝑣2 (𝑥, 𝑡) = 𝑢(𝑥, 2, 𝑡) Hence, 𝐶, 𝐸, 𝐶𝛬 corresponding to 𝑓 , 𝑔 can be calculated from the discrete data As mentioned above, the solution of (1)–(5) can be obtained as 
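The severe ill-posedness discussed above comes from the exponential growth of the kernels 𝐶_𝜆 and 𝐷_𝜆 in (12)–(13). The sketch below evaluates them with 𝜆(𝑟, 𝑠) taken as the root of 𝜆² = 𝑟² + i𝑠∕𝜅 with nonnegative real part, which is equivalent to the representation 𝜆 = 𝐴₀ + i𝐵₀ of (11); the diffusivity value and the sample frequencies are illustrative choices, not values from the paper.

```python
import numpy as np

KAPPA = 1.0   # diffusivity; illustrative value

def lam(r, s, kappa=KAPPA):
    """Root of the characteristic equation lambda^2 = r^2 + i*s/kappa
    with nonnegative real part (equivalent to A0 + i*B0 in (11))."""
    return np.sqrt(r**2 + 1j * s / kappa)    # principal branch has Re >= 0

def C_lam(r, s, y0, kappa=KAPPA):
    """Kernel C_lambda(r, s) = sinh(lambda (2 - y0)) / sinh(lambda), cf. (12)."""
    L = lam(r, s, kappa)
    return np.sinh(L * (2.0 - y0)) / np.sinh(L)

def D_lam(r, s, y0, kappa=KAPPA):
    """Kernel D_lambda(r, s) = sinh(lambda (1 - y0)) / sinh(lambda), cf. (12)."""
    L = lam(r, s, kappa)
    return np.sinh(L * (1.0 - y0)) / np.sinh(L)

if __name__ == "__main__":
    y0 = 0.1
    for r in [1.0, 5.0, 20.0, 50.0]:
        # at s = 0 the factor grows roughly like exp((1 - y0) r): the instability factor
        print(r, abs(C_lam(r, 0.0, y0)), abs(D_lam(r, 0.0, y0)))
```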
the following form − e−𝜆(1−𝑦0 ) |2 | 𝜕𝑣f t | | 𝑦,1 (𝑟, 𝑡)|| 𝑑𝑟𝑑𝑡, (1 + 𝑟2 )𝑞 || ∬(R⧵[𝑟0 ,𝑟0 ])×[0,∞) 𝜕𝑟 | | | | and 𝐶𝛬 be as in Lemma 2.4 2.3 The ill-posedness of the problem e𝜆(1−𝑦0 ) t (1 + 𝑟2 )𝑞 |𝑣f𝑦,1 (𝑟, 𝑡)| 𝑑𝑟𝑑𝑡 𝐸 ≥ (13) − e−𝜆(2−𝑦0 ) ∬R×[0,∞) + 𝑣(𝑥, 𝑡) = e𝜆(2−𝑦0 ) Let 𝑌 > 2, < 𝑦 < 𝑌 , 𝑞 > 0, < 𝛽 < 1∕2 Assume Then we can find 𝐶, 𝐸, 𝐶𝛬 > such that 𝑣𝑦 ∈ 𝑉 (𝑞, 𝐶) ∩ 𝑉trunc (2, 𝐸) ∩ 𝐶1,𝛽,𝛬 with 𝛬2 = 𝐶𝛬 𝑇 ∕ℎ2 Hence, with 𝑢 satisfying (i), (ii) we can choose where the Fourier transform 𝑣f t is defined by (9), and 𝐶𝜆 (𝑟, 𝑠) = − 𝑦0 , (16) (i) 𝑢(𝑥, 𝑦, 𝑡) defined on R × [0, 𝑌 ] × [0, ∞) and satisfies (2), (5) (ii) (1 + |𝑥| + |𝑡|)𝑣0 , (1 + |𝑥| + |𝑡|)𝑣𝑌 ∈ 𝐿2 (R2 ) where 𝑣𝑦 (𝑥, 𝑡) ∶= 𝑢(𝑥, 𝑦, 𝑡) for 𝑡 > and 𝑣𝑦 (𝑥, 𝑡) for 𝑡 ≤ 𝑣f t (𝑟, 𝑠) = lim ∃ 𝐸, 𝜌 > s.t.𝑓 , 𝑔 ∈ 𝑉trunc (𝜌, 𝐸) Since there are so many parameters, it is better to find a way of specifying them 𝐶1 = (𝑟,𝑠)→(0,0) (A2) 3.2 Main results 𝑓1f t (𝑟, 𝑡) e−i𝑠𝑡 d𝑡 ∶= 𝐹 (𝑟, 𝑠), which gives sinh 𝜆(2 − 𝑦0 ) 𝐶𝜆 (𝑟, 𝑠) = , sinh 𝜆 In fact (14) Herein 𝑉 and 𝑉trunc are defined in Definition 2.1, 𝐶𝛼,𝛽,𝛬 is in Definition 2.3 We denote by I𝐴 the indicator of the set A, i.e., I𝐴 (𝑥) = for 𝑥 ∈ 𝐴 and I𝐴 (𝑥) = for 𝑥 ∉ 𝐴 depend on 𝑦 We shall find 𝐶1 , 𝐶2 In view of (3) and (4), we obtain directly ∫R ∃𝑞 > 1∕2, 𝐶sinc > s.t 𝑓 , 𝑔 ∈ 𝑉 (𝑞, 𝐶sinc ) (A3) t (𝑟, 𝑦, 𝑠) = 𝐶 e𝜆𝑦 + 𝐶 e−𝜆𝑦 , where 𝐶 , 𝐶 not for 𝑟, 𝑠 ∈ R Hence 𝑢f1,3 2 t 𝐶1 e2𝜆 + 𝐶2 e−2𝜆 = 𝑢f1,3 (𝑟, 2, 𝑠) = (A1) From Eq (13), we can divide the regularization into two steps ̂ for 𝐹 and 𝐺, In the first step, we will find two estimators 𝐹̂ and 𝐺 respectively From the data 𝜏𝑗,𝑚 , 𝜂𝑗,𝑚 as in (6), (7), we are going to 18 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 build 𝑓̃ and 𝑔̃ which are concrete estimators of the functions 𝑓 and 𝑔, respectively Before recalling the Sinc series representation of 𝑓 as well as 𝑔, we need some conditions on them First, 𝑓 (⋅, 𝑡) and 𝑔(⋅, 𝑡) have to be (at least) continuous in order for 𝑓 (𝑚ℎ, 𝑡) and 𝑔(𝑚ℎ, 𝑡) to be defined for any 𝑡 > In the present paper, we choose ft 𝐹̂ = 𝐹̂𝑁,𝑀 = 𝑓̂𝑁,𝑀 , and √ lim𝜖→0+ 𝑏𝜖 = ∞ Due to technical reasons, we choose 𝑏𝜖 = ln(4∕𝜖)∕ √ √ ( 2 + 1) In fact, such the quantity 𝑏𝜖 implies 𝑒2𝐴0 ≤ 4𝜖 −1 for (𝑟, 𝑠) ∈ 𝐷𝜖 Combining the idea, we will approximate the function 𝑣 by the function ( e𝜆(2−𝑦0 ) − e−𝜆(2−𝑦0 ) 𝑣̂𝜖 (𝑥, 𝑡) = 𝐹̂𝑁,𝑀 (𝑟, 𝑠) e𝜆 − e−𝜆 4𝜋 ∬𝐷𝜖 (25) ) 𝜆(1−𝑦0 ) − e−𝜆(1−𝑦0 ) i(𝑟𝑥+𝑠𝑡) ̂𝑁,𝑀 (𝑟, 𝑠) e −𝐺 e d𝑟d𝑠 e𝜆 − e−𝜆 ̂=𝐺 ̂𝑁,𝑀 = 𝑔̂f t 𝐺 𝑁,𝑀 with 𝑓̂𝑁,𝑀 and 𝑔̂𝑁,𝑀 defined by Definition Let the model (6) hold According to Lemma 5.3, we define the estimators for the coefficients 𝑐𝑝 (𝑚ℎ) as follows: 𝑐̂𝑝,𝑚 = 𝑛−1 2∑ 𝜏 𝜙 (𝑡 ) + 𝜏𝑛,𝑚 𝜙𝑝 (𝑡𝑛 ), 𝑛 𝑗=1 𝑗,𝑚 𝑝 𝑗 𝑛 𝑝 = 1, 𝑛 In conclusion, we give the main result that will lead to the convergence of our estimators in associating conditions (17) Theorem 3.3 Let Assumptions (A1)–(A3) in (14)–(16) hold, 𝜃 ≥ 0, < ′ 𝛽 < 2𝛽 2−1 and let 𝑣 ∈ 𝐻 𝜃 (R2 ) Then we have We estimate the function 𝑓 (𝑥, 𝑡) on R2 by 𝑓̂𝑁,𝑀 (𝑥, 𝑡) = I[0,𝑇 ] (𝑡) 𝑀 ∑ 𝑁 ∑ E 𝑐̂𝑝,𝑚 𝜙𝑝 (𝑡)𝑆(𝑚, ℎ)(𝑥) (18) 𝑚=−𝑀 𝑝=1 + Definition Let the model (7) hold According to Lemma 5.4, we define the estimators for the coefficients 𝑑𝑝 (𝑚ℎ) as follows: 𝑛−1 2∑ 𝜂 𝜙 (𝑡 ) + 𝜂𝑛,𝑚 𝜙𝑝 (𝑡𝑛 ), 𝑑̂𝑝,𝑚 = 𝑛 𝑗=1 𝑗,𝑚 𝑝 𝑗 𝑛 𝑝 = 1, 𝑛 𝑁 𝑀 ∑ ∑ 𝑑̂𝑝,𝑚 𝜙𝑝 (𝑡)𝑆(𝑚, ℎ)(𝑥) (19) (20) Note that positive numbers ℎ, 𝑇 > and positive integers 𝑁, 𝑀 are called regularization parameters For 𝜎1 and 𝜎2 defined in (8), 𝐶sinc and 𝑞 in (14), put ( ) √ 1 𝜎 = max{𝜎1 , 𝜎2 }, 𝐶sinc,𝑞 = 𝐶sinc √ (21) + √ 2𝜋 4𝑞+1 (4𝑞 − 1)𝜋 4𝑞−1 𝑝1 = 2𝛼𝜇 −1 , 𝑝2 = 2𝛽𝜇−1 , 𝑝3 = 𝜇 −1 , ( ′ )−1 2𝛽 − 
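For concreteness, the next sketch assembles the estimator 𝑓̂_{𝑁,𝑀} of (17)–(18) (the construction (19)–(20) of 𝑔̂_{𝑁,𝑀} is identical) from a data matrix whose rows correspond to the times 𝑡ⱼ = 𝑗𝑇∕𝑛 and whose columns to the design points 𝑚ℎ, 𝑚 = −𝑀, …, 𝑀. The vectorized layout, the function names and the toy data in the usage example are our own illustrative choices; the paper's code evaluates these sums with the fast transforms of Section 4 rather than dense matrix products.

```python
import numpy as np

def phi(p, t, T):
    """phi_p(t) = sin((p - 1/2) pi t / T)."""
    return np.sin((p - 0.5) * np.pi * t / T)

def sinc_S(m, h, x):
    """Cardinal function S(m, h)(x)."""
    return np.sinc((x - m * h) / h)

def coefficients(data, T):
    """Coefficient estimators (17)/(19): for each column m,
    c_hat[p-1, m] = (2/n) sum_{j<n} data[j-1, m] phi_p(t_j) + (1/n) data[n-1, m] phi_p(t_n)."""
    n = data.shape[0]
    t = T * np.arange(1, n + 1) / n
    Phi = phi(np.arange(1, n + 1)[:, None], t[None, :], T)    # Phi[p-1, j-1] = phi_p(t_j)
    w = np.full(n, 2.0 / n)
    w[-1] = 1.0 / n                                           # the last node carries weight 1/n
    return Phi @ (w[:, None] * data)                          # shape (n, 2M+1)

def f_hat(c_hat, N, M, h, T, x, t):
    """Truncated Sinc-Fourier estimator (18)/(20) on the grid x (space) times t (time)."""
    Sx = sinc_S(np.arange(-M, M + 1)[None, :], h, x[:, None])    # (len(x), 2M+1)
    Pt = phi(np.arange(1, N + 1)[:, None], t[None, :], T)        # (N, len(t))
    vals = Sx @ c_hat[:N, :].T @ Pt                               # sum over m and p
    return vals * ((t >= 0) & (t <= T))[None, :]                  # indicator I_[0,T](t)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, h, M, n = 20.0, 0.1, 30, 500
    tgrid = T * np.arange(1, n + 1) / n
    xgrid = h * np.arange(-M, M + 1)
    f = lambda x, t: t * np.exp(-x**2 - t)
    data = f(xgrid[None, :], tgrid[:, None]) + 0.01 * rng.standard_normal((n, 2 * M + 1))
    ch = coefficients(data, T)
    approx = f_hat(ch, N=60, M=M, h=h, T=T, x=xgrid, t=tgrid)
    print(float(np.max(np.abs(approx - f(xgrid[:, None], tgrid[None, :])))))   # small (noise + truncation)
```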
2𝛽 ′ − 𝑝4 = (4𝑞 − 2)𝜇 −1 + −1 , 2𝛼 2𝛽 ( )−1 2𝜏 + 2𝜏 + 𝑝5 = 𝜌𝜇 −1 + +1 2𝛼 2𝛽 𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁) ( ) 𝛼2−2𝛼+2 4𝑀ℎ𝑇 𝑁𝜎 = 1+ ℎ𝑇 𝑁 −2𝛼 𝛬2 + 2ℎ𝑇 𝑀 −2𝛽 𝛬2 + 2𝛼 − 𝑛 Theorem 3.4 2𝛽 ′ −1 Put (22) 𝑇 2𝜏+1 𝑇 2𝜏+1 𝑀ℎ𝑇 𝑁 𝐸 + 𝐵𝜀 + 𝐶𝜀 + 𝐷𝜀 ℎ4𝑞−2 + 𝜌 𝑛 𝑇 𝑁 2𝛼 ℎ2𝛽 ′ −1 ℎ2𝛽 ′ −1 𝑀 2𝛽 where ( ) 𝛼2−2𝛼+2 𝐴𝜀 = 2𝐶𝛬 + , 𝐵𝜀 = 2𝐶𝛬 , 𝐶𝜀 = 4𝜎 , 𝐷𝜀 = 4𝐶sinc,𝑞 2𝛼 − (28) (29) Let Assumptions (A1)–(A3) in (14)–(16) hold, < 𝛽 < Then 𝜀0 (ℎ𝑛 , 𝑇𝑛 , 𝑀𝑛 , 𝑁𝑛 ) ( )1 ( )1 ( )1 ( )1 ( ) ( )𝜇 = 𝑝1 𝐴𝜀 𝑝1 𝑝2 𝐵𝜀 𝑝2 𝑝3 𝐶𝜀 𝑝3 𝑝4 𝐷𝜀 𝑝4 𝑝5 𝐸 𝑝5 𝑛 (23) where 𝐴𝜀 , 𝐵𝜀 , 𝐶𝜀 , 𝐷𝜀 , 𝐸 and 𝑝𝑖 , 𝑖 = 1, 5, are defined as in (23) and (16), (21), (26)–(29), respectively Moreover we have Then, one obtains an upper bound for two main parts in the Fourier transform of solution 𝑣 as presenting is the following two theorems ( 𝑇𝑛 = Theorem 3.2 Let 𝑓 , 𝑔 satisfy Assumptions (A1)–(A3) in (14)–(16) Then { } ‖ ‖2 ‖ ̂𝑁,𝑀 ‖ max E ‖𝐹 − 𝐹̂𝑁,𝑀 ‖ 2 , E ‖𝐺 − 𝐺 ≤ 4𝜋 𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁) ‖ 2 ‖ ‖𝐿 (R ) ‖ ‖𝐿 (R ) ( 𝑁𝑛 = In the second step, for 𝜆 defined in (11), we have to replace the terms (e𝜆(2−𝑦0 ) − e−𝜆(2−𝑦0 ) )∕(e𝜆 − e−𝜆 ) and (e𝜆(1−𝑦0 ) − e−𝜆(1−𝑦0 ) )∕(e𝜆 − e−𝜆 ) in (13) by stable terms For 𝜖 > 0, since the instability terms tend to infinity as 𝑟, 𝑠 → ∞, we replace the terms by ( 𝑀𝑛 = 𝑝5 𝐸 𝑝4 𝐷𝜀 𝑝1 𝐴𝜀 𝑝4 𝐷𝜀 𝑝2 𝐵 𝜀 𝑝4 𝐷𝜀 )1 4𝑞−2 − 𝜌 𝜌 ℎ𝑛 ) ) ( 2𝛼 ( 2𝛽 , 𝑝5 𝐸 𝑝4 𝐷𝜀 𝑝5 𝐸 𝑝4 𝐷𝜀 ) 2𝜏+1 2𝛼𝜌 − ℎ𝑛 ) 2𝜏+1 2𝛽𝜌 − ℎ𝑛 (2𝛽 ′ −1)𝜌+(4𝑞−2)(𝜌+2𝜏+1) 2𝛼𝜌 (2𝛽 ′ −1)𝜌+(4𝑞−2)(𝜌+2𝜏+1) 2𝛽𝜌 , , and e𝜆(1−𝑦0 ) − e−𝜆(1−𝑦0 ) I𝐷𝜖 (𝑟, 𝑠), e𝜆 − e−𝜆 𝐷𝜖 = {(𝑟, 𝑠) ∈ R2 ∶ |𝑟| ≤ 𝑏𝜖 , |𝑠| ≤ 𝜅𝑏2𝜖 } (27) (ℎ𝑛 , 𝑇𝑛 , 𝑀𝑛 , 𝑁𝑛 ) = argmin 𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁) where ℎ, 𝑇 , 𝑀, 𝑁 > = 𝐴𝜀 e𝜆(2−𝑦0 ) − e−𝜆(2−𝑦0 ) I𝐷𝜖 (𝑟, 𝑠), e𝜆 − e−𝜆 where (𝑟2 + 𝑠2 )𝜃 |𝑣f t (𝑟, 𝑠)| 𝑑𝑟𝑑𝑠 ∬R2 ⧵𝐷𝜖 4𝜋 min{1, 𝜅 𝜃 }𝑏2𝜃 𝜖 and Using the parameters of Assumptions (A1)–(A3) in (14)–(16), we denote 𝐸 𝑇𝜌 ( )3−𝑦0 𝜖 In fact, to obtain the convergence of the estimators, there are infinitely many ways to define the regularization parameters ℎ, 𝑇 , 𝑀, 𝑁 In this theoretical part, we will give an instance of regularization parameters selection We first set up some necessary notations Put ( ( ) 1 2𝜏 + 2𝜏 + 𝜇= + +1+ + +1 2𝛼 2𝛽 𝜌 2𝛼 2𝛽 (26) ( ′ )) −1 2𝛽 − 2𝛽 ′ − 1 + + −1 4𝑞 − 2𝛼 2𝛽 𝑚=−𝑀 𝑝=1 +4𝐶sinc,𝑞 ℎ4𝑞−2 + |𝑣𝜖 − 𝑣̂|2 d𝑟d𝑠 ≤ 8𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁) where 𝜀0 is defined in (22) We estimate the function 𝑔(𝑥, 𝑡) on R2 by 𝑔̂𝑁,𝑀 (𝑥, 𝑡) = I[0,𝑇 ] (𝑡) ∬R2 ℎ𝑛 = 1 𝑛𝜗 ( 𝑝3 𝐶𝜀 𝑝4 𝐷𝜀 )1 ( 𝜗 where 𝜗 = 4𝑞 − + (24) 19 𝑝5 𝐸 𝑝4 𝐷𝜀 4𝑞−2 𝜌 + ) + 2𝜏+1 + 2𝜏+1 𝜌𝜗 2𝛼𝜌𝜗 2𝛽𝜌𝜗 ( 𝑝1 𝐴𝜀 𝑝4 𝐷𝜀 ( (2𝛽 ′ −1)𝜌+(4𝑞−2)(𝜌+2𝜏+1) 2𝜌 ) 𝛼 2𝛼𝜗 + 𝛽 ( ) 𝑝2 𝐵 𝜀 𝑝4 𝐷𝜀 ) 2𝛽𝜗 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 In addition, assume that there are 𝜃, 𝐾𝑣,𝜃 > satisfying 𝑣 ∈ 𝐻 𝜃 (R2 ) and ‖𝑣‖𝐻 𝜃 ≤ 𝐾𝑣,𝜃 Let } { ( )3−𝑦0 𝐾𝑣,𝜃 + 𝜖𝑛 = argmin 32𝜋 𝜀0 (ℎ𝑛 , 𝑇𝑛 , 𝑀𝑛 , 𝑁𝑛 ) 𝜖 max{1, 𝜅 𝜃 }𝑏2𝜃 0 independent of 𝑛 such that ‖2 E‖ ‖𝑣 − 𝑣̂𝑛 ‖𝐿2 (R2 ) ≤ 𝐶′ ln2𝜃 (𝑛) 𝐴 𝑣𝜖 (𝑥, 𝑡) ∶= We first present the algorithm for the problem (1)–(7) • Step 1: For 𝑛 ∈ √ N, choose the parameters ℎ𝑛 , 𝑇𝑛 , 𝑀𝑛 , 𝑁𝑛 , 𝜖𝑛 Put √ ( ) 𝑏𝑛 = log 4∕𝜖𝑛 ∕ 2( + 1), 𝑎𝑛 = 𝑏2𝑛 • Step 2: Compute 𝑓̂𝑁 ,𝑀 (𝑥, 𝑡) and 𝑔̂𝑁 ,𝑀 (𝑥, 𝑡) as in (18) and (20) 𝑛 𝑛 𝑗 𝑛 4.1 Numerical implementation 𝑥𝑗 = (𝑗 − 1)𝛿𝑥 − 𝑎, Let us fix the values of 𝑛, 𝑇 , ℎ, 𝑀 and 𝑁 𝑔𝑁,𝑀 (𝑥, 𝑡) = I[0,𝑇 ] (𝑡) 𝐵 𝑣𝜖 (𝑥𝑗 , 𝑡𝑘 ) = where 𝑐̂𝑝,𝑚 𝑛−1 ⎛ (𝑝 − )𝜋𝑗 ⎞ 𝑛+1 2∑ ⎟ + (−1) 𝜏𝑛,𝑚 , = 𝜏𝑗,𝑚 sin ⎜ ⎜ ⎟ 𝑛 𝑗=1 𝑛 𝑛 ⎝ ⎠ (31) 𝑑̂𝑝,𝑚 𝑛−1 ⎛ (𝑝 − )𝜋𝑗 ⎞ 𝑛+1 2∑ ⎟ + (−1) 𝜂𝑛,𝑚 = 𝜂𝑗,𝑚 sin ⎜ ⎜ ⎟ 𝑛 𝑗=1 𝑛 𝑛 ⎝ ⎠ (32) ≃ 𝛿𝑟 𝛿𝑠 ft 𝑓𝑁,𝑀 (𝑟, 𝑠) = 𝛿𝑟 = 𝛿𝑡 = 𝑏 , 𝐾 −1 (38) 2𝐴 , 𝐽 −1 𝑠𝑞 = (𝑞 − 1)𝛿𝑠 − 𝐵, 𝛿𝑠 = 2𝐵 , (39) 𝐾 −1 𝐴 ∫−𝐵 ∫−𝐴 𝐾 𝐽 ∑ ∑ 𝜓(𝑟, 𝑠)ei(𝑟𝑥𝑗 +𝑠𝑡𝑘 ) d𝑟d𝑠 (40) 𝜔𝑝 𝜔𝑞 𝜓𝑝,𝑞 ei(𝑥𝑗 
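The fractional discrete Fourier transform in (37) is the basic primitive of the evaluation step. A direct O(𝐽²) version is written out below simply to pin down the convention; the paper computes it in O(𝐽 log 𝐽) operations with the chirp-z (Bluestein) technique of Bailey and Swarztrauber [15] on top of the FFT routines of [16], which is not reproduced here.

```python
import numpy as np

def fdft(phi, alpha):
    """Fractional DFT of (37): returns the vector G with
    G[j-1] = sum_{p=1}^{J} phi[p-1] * exp(2*alpha*1j*(j-1)*(p-1)),  j = 1, ..., J.
    Direct O(J^2) evaluation; see [15] for the O(J log J) chirp-z version."""
    phi = np.asarray(phi, dtype=complex)
    J = phi.size
    jj = np.arange(J)[:, None]
    pp = np.arange(J)[None, :]
    return np.exp(2j * alpha * jj * pp) @ phi

if __name__ == "__main__":
    # sanity check: alpha = -pi/J reduces the fDFT to the ordinary DFT
    x = np.random.default_rng(0).standard_normal(8)
    print(np.allclose(fdft(x, -np.pi / 8), np.fft.fft(x)))   # True
```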
𝑟𝑝 +𝑡𝑘 𝑠𝑞 ) ≡ d (𝜓)𝑗,𝑘 , where 𝜔𝑝 are some weight coefficients Thus, 𝑣𝜖 (𝑥𝑗 , 𝑡𝑘 ) ≃ d (𝜓)𝑗,𝑘 Hence, we calculate d (𝜓)𝑗,𝑘 , as follows Substituting 𝑥𝑗 , 𝑟𝑝 , 𝑡𝑘 and 𝑠𝑞 from (38) and (39) to d (𝜓)𝑗,𝑘 , we obtain d (𝜓)𝑗,𝑘 (𝛼 ) ( )f t 𝑐̂𝑝,𝑚 I[0,𝑇 ] 𝜙𝑝 (𝑠) (𝑆(𝑚, ℎ))f t (𝑟), 𝑡𝑘 = (𝑘 − 1)𝛿𝑡 , 𝑝=1 𝑞=1 for any 𝑝 = 1, 𝑛 and 𝑚 = −𝑀, 𝑀 Herein 𝜏𝑗,𝑚 and 𝜂𝑗,𝑚 are given in (6) and (7) Thus, we obtain 𝑁 ∑ 2𝑎 , 𝐽 −1 for 𝑝 = 1, 𝐽 , 𝑞 = 1, 𝐾 Denote 𝜓𝑝,𝑞 ∶= 𝑤(𝑟𝑝 , 𝑠𝑞 ) Using the Newton–Cotes method we have 𝑑̂𝑝,𝑚 𝜙𝑝 (𝑡)𝑆(𝑚, ℎ)(𝑥), 𝑚=−𝑀 𝑝=1 𝑀 ∑ 𝛿𝑥 = 𝑟𝑝 = (𝑝 − 1)𝛿𝑟 − 𝐴, (30) 𝑚=−𝑀 𝑝=1 𝑁 𝑀 ∑ ∑ = 𝛿𝑟 𝛿𝑠 e−i(𝐴𝑥𝑗 +𝐵𝑡𝑘 ) 𝐅𝐾 𝑠 ({ (𝛼 ) 𝜔 𝑞 𝐅𝐽 𝑟 ({ 𝜔𝑝 e−i(𝑝−1)𝑎𝛿𝑟 𝜓𝑝,𝑞 ) )} } 𝑝=1,𝐽 (33) 𝑗 , 𝑞=1,𝐾 𝑚=−𝑀 𝑝=1 ft 𝑔𝑁,𝑀 (𝑟, 𝑠) = 𝑀 ∑ 𝑁 ∑ (37) for 𝑗 = 1, 𝐽 , 𝑘 = 1, 𝐾 Similarly, let us define the mesh for the domain [−𝐴, 𝐴] × [−𝐵, 𝐵]: Step 2: For 𝑀, 𝑁 ∈ N such that < 𝑁 ≤ 𝑛, 𝑐̂𝑝,𝑚 𝜙𝑝 (𝑡)𝑆(𝑚, ℎ)(𝑥), 𝑗 = 1, 𝐽 , 𝑝=1 for some parameter 𝛼, where {𝜑𝑗 }𝑗=1,𝐽 is some array of complex numbers Let us define the mesh points (𝑥𝑗 , 𝑡𝑘 ) ∈ 𝛺 = [−𝑎, 𝑎] × [0, 𝑏]: for 𝐽 , 𝐾 ∈ N, Next, we explain numerical methods for the steps above 𝑁 𝑀 ∑ ∑ (𝑥, 𝑡) ∈ 𝛺, 𝐽 ( ) ∑ {𝜑𝑝 }𝑝=1,𝐽 ↦ 𝐅(𝛼) {𝜑𝑝 }𝑝=1,𝐽 ∶= 𝜑𝑝 e2𝛼i(𝑗−1)(𝑝−1) , 𝐽 • Step 3: Calculate 𝑣̂𝑛 = 𝑣̂𝜖𝑛 as in (25) 𝑓𝑁,𝑀 (𝑥, 𝑡) = I[0,𝑇 ] (𝑡) 𝜓(𝑟, 𝑠)ei(𝑟𝑥+𝑠𝑡) d𝑟d𝑠, where 𝜓 stands for the integrand in (36), 𝐴 and 𝐵 stand for 𝑏𝜖 and 𝑏2𝜖 , respectively Next we will approximate 𝑣𝜖 for (𝑥𝑗 , 𝑡𝑘 ) given mesh points in 𝛺 Based on the idea in [15], we define the fractional discrete Fourier transform (fDFT) Numerical results 𝑛 𝐵 ∫−𝐴 ∫−𝐵 𝑘 (41) )f t ( 𝑑̂𝑝,𝑚 I[0,𝑇 ] 𝜙𝑝 (𝑠) (𝑆(𝑚, ℎ))f t (𝑟), (𝛼 ) 𝐅𝐽 𝑟 where we can calculate 𝜙f𝑝t (𝑠) directly from the definition (9) Herein, since ∞ e−i𝑚ℎ𝑟 I(−𝜋∕ℎ,𝜋∕ℎ) (𝑟) ei𝑟𝑥 d𝑟 = ℎ−1 𝑆(𝑚, ℎ)(𝑥), 2𝜋 ∫−∞ • Step 3.1: Looping for 𝑞 = 1, 𝐾 and 𝑝 = 1, 𝐽 , assign we can use the inverse Fourier transform to deduce that (𝑆(𝑚, ℎ))f t (𝑟) = ℎ e−i𝑚ℎ𝑟 I(−𝜋∕ℎ,𝜋∕ℎ) (𝑟), 𝑟 ∈ R (𝛼 ) where 𝛼𝑟 = 𝛿𝑥 𝛿𝑟 ∕2 and 𝛼𝑠 = 𝛿𝑡 𝛿𝑠 ∕2 Herein the transforms and 𝐅𝐾 𝑠 are defined in (37), which can be calculated fast by the method in [15] For the transformation 𝜓𝑝,𝑞 ↦ d (𝜓)𝑗,𝑘 , we can use only one storage, i.e the array 𝜓, through Step to save computer memory Let us summarize the procedure in three following steps For 𝜓 an array of complex numbers with the length of 𝐽 𝐾, we perform (41) as follows: (34) 𝑚=−𝑀 𝑝=1 𝜓𝑝,𝑞 ∶= 𝜔𝑝 𝜓𝑝,𝑞 e−i(𝑝−1)𝑎𝛿𝑟 (35) Looping for 𝑞 = 1, 𝐾, perform the fDFT ( ) (𝛼 ) {𝜓𝑝,𝑞 }𝑝=1,𝐽 ↦ 𝜓𝑗,𝑞 ∶= 𝐅𝐽 𝑟 {𝜓𝑝,𝑞 }𝑝=1,𝐽 , Computing (33) and (34) directly for various mesh points are very time consuming Therefore, we need an efficient method to perform the tasks fast We shall explain it later 𝑗 𝑗 = 1, 𝐽 • Step 3.2: Looping for 𝑞 = 1, 𝐾 and 𝑗 = 1, 𝐽 , adjust Step 3: From (25), we aim to compute numerically the regularized solution: ( e𝜆(2−𝑦0 ) − e−𝜆(2−𝑦0 ) ft 𝑣𝜖 (𝑥, 𝑡) = 𝑓𝑁,𝑀 (𝑟, 𝑠) e𝜆 − e−𝜆 4𝜋 ∬𝐷𝜖 (36) ) e𝜆(1−𝑦0 ) − e−𝜆(1−𝑦0 ) ft i(𝑟𝑥+𝑠𝑡) −𝑔𝑁,𝑀 (𝑟, 𝑠) e d𝑟d𝑠, e𝜆 − e−𝜆 𝜓𝑗,𝑞 ∶= 𝜔𝑞 𝜓𝑗,𝑞 Looping for 𝑗 = 1, 𝐽 , perform the fDFT ( ) (𝛼 ) {𝜓𝑗,𝑞 }𝑞=1,𝐾 ↦ 𝜓𝑗,𝑘 ∶= 𝐅𝐾 𝑠 {𝜓𝑗,𝑞 }𝑞=1,𝐾 , 𝑘 20 𝑘 = 1, 𝐾 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 ( (𝛽𝑠 ) { • Step 3.3: Looping for 𝑘 = 1, 𝐾 and 𝑗 = 1, 𝐽 , assign d (𝜓)𝑗,𝑘 ∶= 𝛿𝑟 𝛿𝑠 e −i(𝐴𝑥𝑗 +𝐵𝑡𝑘 ) { = 𝛿𝜚 𝐅𝐾 𝜓𝑗,𝑘 Remark on the fDFT: To accelerate the calculation of (37), we apply the fast Fourier transform (FFT) algorithm, e.g subroutines CFFTB (for 𝛼 > 0) and CFFTF (for 𝛼 < 0) obtained from [16] Details of the calculation are referred to [15] ∫ 𝑟1 𝜑(𝑟)d𝑟 = 𝛿𝑟 𝜔𝑝 𝜑(𝑟𝑝 ) + (|𝛿𝑟 | ), 𝑀 ⎧ ∑ ⎪ℎ 𝐶̃𝑞,𝑚 e−i𝑚ℎ𝑟𝑝 , =⎨ 𝑚=−𝑀 ⎪0, ⎩ (42) 𝑀 ∑ 
where 𝛿𝑟 = (𝑟𝐽 −𝑟1 )∕(𝐽 −1) and 𝑟𝑝 = 𝑟1 +(𝑝−1)𝛿𝑟 , and the weight numbers in this implementation are 𝜔1 = 𝜔𝐽 = 17∕48, 𝜔2 = 𝜔𝐽 −1 = 59∕48, 𝜔3 = 𝜔𝐽 −2 = 43∕48, 𝜔4 = 𝜔𝐽 −3 = 49∕48, and 𝜔5 = ⋯ = 𝜔𝐽 −4 = for 𝐽 ≥ and 𝐽 odd ̃′ 𝑞,𝑚 𝐶 { 𝐶̃𝑞,𝑚 , = 0, if if 𝐶𝑚 (𝜚)e−i𝜚𝑠 d𝜚, ∫0 𝐶𝑚 (𝜚) = 𝐽2 ∑ 𝑇 , 𝐾 −1 𝑚 = −𝑀, 𝑀 𝑁 ∑ 𝑝=1 (45) 𝑘 = 1, 𝐾, 𝐶̃𝑞,𝑚 = 𝛿𝜚 𝑘=1 ⎛ (𝑝 − )𝜋(𝑘 − 1) ⎞ ⎟, 𝑐̃𝑝,𝑚 sin ⎜ ⎜ ⎟ 𝐾 −1 𝑝=1 ⎠ ⎝ 𝐾−1 ∑ 𝐶𝑚 (𝜚)e−i𝜚𝑠𝑞 d𝜚 ≃ 𝛿𝜚 (53) 𝑝 for 𝑝 = 1, 𝐽 , 𝑞 = 1, 𝐾, and 𝑟𝑝 ∈ (−𝜋∕ℎ, 𝜋∕ℎ) If 𝑟𝑝 ∉ (−𝜋∕ℎ, 𝜋∕ℎ), we f t (𝑟 , 𝑠 ) = assign 𝑓𝑁,𝑀 𝑝 𝑞 Note that we must impose 𝐽 > 2(𝑀 + 1) and 𝐾 ≥ 𝑛 in advance Let us summarize the calculation procedure for Step as follows Starting with the data {𝜏𝑘,𝑚 } (Eq (6)) and {𝜂𝑘,𝑚 } (Eq (7)), for 𝑘 = 1, 𝑛 and 𝑚 = −𝑀, 𝑀, we perform stepwise: 𝚂𝙸𝙽𝚀𝙵 𝑘 = 2, 𝐾 {𝜏𝑘,𝑚 }𝑘=1,𝑛 ⟼ {̂ 𝑐𝑞,𝑚 }𝑞=1,𝑛 , (47) 𝐾 ∑ 𝚂𝙸𝙽𝚀𝙵 {𝜂𝑘,𝑚 }𝑘=1,𝑛 ⟼ {𝑑̂𝑞,𝑚 }𝑞=1,𝑛 • Step 2.2: To define (46) For the sequences {𝑐̃𝑞,𝑚 } and {𝑑̃𝑞,𝑚 }, where 𝑞 = 1, 𝐾 − and 𝑚 = −𝑀, 𝑀 Looping for 𝑚 = −𝑀, 𝑀, we define 𝑐̃𝑞,𝑚 = 𝑐̂𝑞,𝑚 , 𝑑̃𝑞,𝑚 = 𝑑̂𝑞,𝑚 , ∀𝑞 = 1, 𝑁, and we assign 𝑐̃𝑞,𝑚 = 𝑑̃𝑞,𝑚 = for all 𝑞 = 𝑁 + 1, 𝐾 − 𝑤𝑘 𝐶𝑘,𝑚 e−i𝜚𝑘 𝑠𝑞 = 𝐶̃𝑞,𝑚 , • Step 2.3: To compute (47) For the sequences {𝐶𝑘,𝑚 } and {𝐶𝑘,𝑚 }, (48) 𝑘=1 𝑤𝑘 𝐶𝑘,𝑚 e−i𝜚𝑘 𝑠𝑞 𝐽 ( ) ∑ 𝐶̃′′ 𝑞,𝑗 eiℎ𝐴(𝑗−1) e2𝛽𝑥 i(𝑝−1)(𝑗−1) , 𝑗=1,𝐽 where 𝑘 = 1, 𝐾 and 𝑚 = −𝑀, 𝑀 Looping for 𝑚 = −𝑀, 𝑀, we perform where the weights 𝑤𝑘 are given in (42) Due to (45) and (39), we have 𝜚𝑘 𝑠𝑞 = −2𝛽𝑠 (𝑘 − 1)(𝑞 − 1) − 𝐵𝜚𝑘 , where 𝛽𝑠 = −𝛿𝜚 𝛿𝑠 ∕2, and we deduce 𝐾 ∑ 𝐶̃′′ 𝑞,𝑗 e−iℎ((𝑝−1)𝛿𝑟 −𝐴)(𝑗−1) e−iℎ((𝑝−1)𝛿𝑟 −𝐴)𝐽1 • Step 2.1: To compute (31) and (32) Looping for 𝑚 = −𝑀, 𝑀, we perform 𝑇 ∫0 𝐽 ∑ ̃′ 𝑞,𝑗+𝐽 −1 where 𝑟𝑝 and 𝛿𝑟 are defined in (39), 𝛽𝑥 = −ℎ𝛿𝑟 ∕2, and 𝐶̃′′ 𝑞,𝑗 = 𝐶 due to (52) Thus, combining (50), (51) and (53) we obtain ) ({ } (𝛽 ) ft (54) 𝑓𝑁,𝑀 (𝑟𝑝 , 𝑠𝑞 ) ≃ ℎe−i𝐽1 ℎ𝑟𝑝 𝐅𝐽 𝑥 𝐶̃′′ 𝑞,𝑗 eiℎ𝐴(𝑗−1) Here we apply the inverse of quarter-sine transform (60) to obtain {𝐶𝑘,𝑚 } rapidly for 𝑘 = 1, 𝐾 and 𝑚 = −𝑀, 𝑀, e.g subroutine SINQB in [16] can be applied For 𝑠𝑞 and 𝛿𝑠 in (39), we approximate 𝐶̂𝑚 (𝑠𝑞 ) in (44) by 𝐶̃𝑞,𝑚 as follows 𝐶̂𝑚 (𝑠𝑞 ) = ̃′ 𝑞,𝐽 +𝑗−1 e−iℎ((𝑝−1)𝛿𝑟 −𝐴)(𝑗+𝐽1 −1) 𝐶 𝑗=1 𝑐̂𝑝,𝑚 𝜙𝑝 (0) = 0, 𝑐̂𝑝,𝑚 𝜙𝑝 (𝜚𝑘 ) = 𝐽 ∑ (44) 𝑝=1 𝐶𝑘,𝑚 = 𝐶̃′′ 𝑞,𝑚−𝐽1 +1 𝑗=1 Denoting 𝐶𝑘,𝑚 = 𝐶𝑚 (𝜚𝑘 ) from (44), we have 𝑁 ∑ = 𝑗=1 = and we define the sequence {𝑐̃𝑝,𝑚 }, for 𝑝 = 1, 𝐾 and 𝑚 = −𝑀, 𝑀, such that { 𝑐̂𝑝,𝑚 , ≤ 𝑝 ≤ 𝑁, 𝑐̃𝑝,𝑚 = (46) 0, 𝑁 + ≤ 𝑝 < 𝐾 𝐶1,𝑚 = −𝑀 ≤ 𝑚 ≤ 𝑀, (𝐽1 ≤ 𝑚 < −𝑀) ∨ (𝑀 < 𝑚 ≤ 𝐽2 ), ̃′ 𝑞,𝑚 e−i𝑚ℎ𝑟𝑝 = 𝐶 (43) 𝑝=1 𝛿𝜚 = (51) for all 𝑚 = 𝐽1 , 𝐽2 and 𝑞 = 1, 𝐾 Now, putting 𝑗 = 𝑚 − 𝐽1 + 1, we can see that 𝑗 = 1, 𝐽 as 𝑚 = 𝐽1 , 𝐽2 and the right-hand side of (51) becomes For 𝐾 as in (38), to approximate the integral in (44), we define 𝛿𝜚 and 𝜚𝑘 𝜚𝑘 = (𝑘 − 1)𝛿𝜚 , ̃′ 𝑞,𝑚 e−i𝑚ℎ𝑟𝑝 , 𝐶 = e−i𝐽1 ℎ𝑟𝑝 𝑐̂𝑝,𝑚 𝜙𝑝 (𝜚), otherwise (52) otherwise 𝑇 (50) −𝜋∕ℎ < 𝑟𝑝 < 𝜋∕ℎ, in which we define where 𝐶̂𝑚 (𝑠) = (49) 𝑞 = 1, 𝐾 𝑚=𝐽1 𝑚=𝐽1 𝑁 ∑ 𝐽2 ∑ 𝐶̃𝑞,𝑚 e−i𝑚ℎ𝑟𝑝 = 𝑚=−𝑀 Remark on Step 2: The summations (31) and (32) can be seen in view of the quarter-sine transform (59) When the size 𝑛 in Eqs (6) and (7) is large, we use subroutine SINQF in [16] to perform these summations rapidly Furthermore, we can apply the FFT technique to approximate (33) and (34) fast Indeed, we will provide the numerical procedure to approximate f t (𝑟 , 𝑠 ) in (33) for all the mesh points (𝑟 , 𝑠 ) defined in (39) only 𝑓𝑁,𝑀 𝑝 𝑞 𝑝 𝑞 ft After that 𝑔𝑁,𝑀 (𝑟𝑝 , 𝑠𝑞 ) in (34) can be approximated by the same method Firstly, applying (9) and (35) to (30), we can see that −𝜋∕ℎ < 𝑟 < 𝜋∕ℎ, , where 𝑟𝑝 and 𝑠𝑞 are defined in (38) For 𝐽 > 2𝑀 + 2, we put 𝐽1 = −𝐽 ∕2 and 𝐽2 = 𝐽 ∕2 − for 𝐽 even, or 𝐽1 = −(𝐽 − 1)∕2 
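The end-corrected composite rule behind (42) can be spelled out as follows. The weights 17∕48, 59∕48, 43∕48, 49∕48, 1, …, 1 are the ones quoted above; the minimal number of nodes and the order of the error term are partly illegible in this copy, so the bound used in the sketch (J ≥ 9, fourth-order accuracy) is the standard one for this rule and should be read as an assumption. The text additionally takes J odd, which the usage example respects.

```python
import numpy as np

def newton_cotes_weights(J):
    """End-corrected composite weights used in (42):
    w = [17/48, 59/48, 43/48, 49/48, 1, ..., 1, 49/48, 43/48, 59/48, 17/48]."""
    if J < 9:
        raise ValueError("J >= 9 assumed here so that the two end corrections do not overlap")
    w = np.ones(J)
    w[:4] = [17/48, 59/48, 43/48, 49/48]
    w[-4:] = [49/48, 43/48, 59/48, 17/48]
    return w

def integrate(f, a, b, J):
    """Approximate int_a^b f(r) dr by dr * sum_p w_p f(r_p) with r_p = a + (p-1)*dr, cf. (42)."""
    r = np.linspace(a, b, J)
    dr = (b - a) / (J - 1)
    return dr * float(np.sum(newton_cotes_weights(J) * f(r)))

if __name__ == "__main__":
    print(integrate(np.sin, 0.0, np.pi, 1001))   # ~2.0 (exact value is 2)
```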
and 𝐽2 = (𝐽 − 1)∕2 for 𝐽 odd Then we have 𝑝=1 𝑀 ⎧ ∑ ⎪ℎ 𝐶̂𝑚 (𝑠)e−i𝑚ℎ𝑟 , ft 𝑓𝑁,𝑀 (𝑟, 𝑠) = ⎨ 𝑚=−𝑀 ⎪0, ⎩ 𝑘=1,𝐾 𝑞 ft ft 𝑓𝑁,𝑀 (𝑟𝑝 , 𝑠𝑞 ) ≃ 𝑓̃𝑁,𝑀 (𝑟𝑝 , 𝑠𝑞 ) Remark on the Newton–Cotes method: In addition, combining the Simpson’s 1/8 and 3/8 rules, we derive 𝐽 ∑ ) } Here we can see that the vector {𝐶̃𝑞,𝑚 }𝑞=1,𝐾 can be calculated fast by (𝛽 ) 𝐅𝐾 𝑠 , for every 𝑚 Therefore 𝐶̂𝑚 (𝑠𝑞 ) is approximated efficiently by the FFT technique Secondly, based on the previous work, we deduce from (43) that } Finally, 𝑣𝜖 (𝑥𝑗 , 𝑡𝑘 ) ≃ Re d (𝜓)𝑗,𝑘 for all the mesh points (𝑥𝑗 , 𝑡𝑘 ) ∈ [−𝑎, 𝑎] × [0, 𝑏] 𝑟𝐽 𝑤𝑘 𝐶𝑘,𝑚 ei𝐵𝜚𝑘 𝐾 ( ) ∑ = 𝛿𝜚 𝑤𝑘 𝐶𝑘,𝑚 ei𝐵𝜚𝑘 e2𝛽𝑠 i(𝑘−1)(𝑞−1) 𝑘=1 21 𝚂𝙸𝙽𝚀𝙱 𝐶1,𝑚 = 0, {𝑐̃𝑞,𝑚 }𝑞=1,𝐾−1 ⟼ {𝐶𝑘,𝑚 }𝑘=2,𝐾 𝐷1,𝑚 = 0, {𝑑̃𝑞,𝑚 }𝑞=1,𝐾−1 ⟼ {𝐷𝑘,𝑚 }𝑘=2,𝐾 𝚂𝙸𝙽𝚀𝙱 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 • Step 2.4: To compute (49) Put 𝛽𝑠 = −𝛿𝜚 𝛿𝑠 ∕2 Looping for 𝑚 = −𝑀, 𝑀, we perform ( ) } (𝛽 ) { 𝑤𝑘 ei𝐵𝜚𝑘 𝐶𝑘,𝑚 𝑘=1,𝐾 , 𝐶̃𝑞,𝑚 = 𝛿𝜚 𝐅𝐾 𝑠 However, as defined (23), the constants 𝐴𝜀 , 𝐵𝜀 , 𝐶𝜀 , 𝐷𝜀 , 𝐸 are calculated from the a priori constants 𝛼, 𝛽, 𝛽 ′ , 𝜏, 𝜌, 𝐶sinc , 𝐶Fou To overcome this obstacle partly, we can use Theorem 3.1 to obtain parameters 𝜌 = 2, 𝛽 ′ = 1, 𝛼 = 1, 𝜏 = Since < 𝛽 < 𝛽 ′ − 1∕2 we can choose 𝛽 = 3∕8 In practice we take a number 𝑇 in advance such that 𝑇 −𝜌 is small enough, e.g 𝑇 = 20 Let us fix 𝑇 from now In the experiment example, to verify the convergence of the algorithm, the role of a priori constants 𝐶sinc , 𝐶Fou , 𝐾𝑣,𝜃 is dimmed out to simplify our analysis Hence, to choose these parameters, we should have another scheme Put 𝑞 ̃𝑞,𝑚 = 𝐷 (𝛽 ) 𝛿𝜚 𝐅𝐾 𝑠 ({ i𝐵𝜚𝑘 𝑤𝑘 e 𝐷𝑘,𝑚 ) } 𝑘=1,𝐾 𝑞 ∀𝑞 = 1, 𝐾 , • Step 2.5: To define (52) If 𝐽 is even, we put 𝐽1 = −𝐽 ∕2 and 𝐽2 = 𝐽 ∕2−1, otherwise, we define 𝐽1 = −(𝐽 −1)∕2 and 𝐽2 = (𝐽 −1)∕2 For ̃′′ 𝑞,𝑗 }, where 𝑞 = 1, 𝐾 and 𝑗 = 1, 𝐽 the sequences {𝐶̃′′ 𝑞,𝑗 } and {𝐷 Looping for 𝑚 = −𝑀, 𝑀, using the mapping 𝑗 = 𝑚 − 𝐽1 + 1, we define 𝐶̃′′ 𝑞,𝑗 = 𝐶̃𝑞,𝑚 , ̃′′ 𝑞,𝑗 = 𝐷 ̃𝑞,𝑚 , 𝐷 ∀𝑞 = 1, 𝐾 vided empirically Then we can see that ℎ𝑛2𝛽 12 𝐺𝑝,𝑞 = (𝛽 ) ℎe−i𝐽1 ℎ𝑟𝑝 𝐅𝐽 𝑥 𝑗=1,𝐽 e iℎ𝐴(𝑗−1) ̃′′ 𝑞,𝑗 𝐷 𝑝 , 𝑗=1,𝐽 − 12 − 61 ′ −1 𝑁𝑛2𝛼 ∼ 𝑛 , ℎ2𝛽 𝑛 ′ −1 𝑀𝑛2𝛽 ∼ 4.2 Simulation and comments ) } (55) 𝑛 , 𝑛−1 𝑀𝑛 ℎ𝑛 𝑁𝑛 ∼ 𝑛 and ℎ𝑛 ∼ 𝑛 as 𝑛 → ∞, therefore in this process 𝜀0 (ℎ𝑛 , 𝑇 , 𝑀𝑛 , 𝑁𝑛 ) tends to 𝑇 −𝜌 , which plays the role of numerical tolerance • Step 2.6: To compute (54) Put 𝛽𝑥 = −ℎ𝛿𝑟 ∕2 Looping for 𝑞 = 1, 𝐾, we perform ({ ) } (𝛽 ) , 𝐹𝑝,𝑞 = ℎe−i𝐽1 ℎ𝑟𝑝 𝐅 𝑥 eiℎ𝐴(𝑗−1) 𝐶̃′′ 𝑞,𝑗 ({ where 𝑆𝑁 , 𝑆𝑀 , 𝑆ℎ , 𝑆𝜖 are some positive constants which will be pro- Looping for 𝑚 = 𝐽1 , −𝑀 − and for 𝑚 = 𝑀 + 1, 𝐽2 , we assign ̃′′ 𝑞,𝑗 = for all 𝑞 = 1, 𝐾 𝐶̃′′ 𝑞,𝑗 = 𝐷 𝐽 𝑁𝑛 = 𝑆𝑁 𝑛 , 𝑀𝑛 = 𝑆𝑀 𝑛 , ℎ𝑛 = 𝑆ℎ 𝑛− , 𝜖𝑛 = 𝑆𝜖 𝑛−1 , ) ( log 4∕𝜖𝑛 𝑏𝑛 = √ , 𝑎𝑛 = 𝑏2𝑛 , √ 2( + 1) ∀𝑝 = 1, 𝐽 Example To obtain the ‘‘exact’’ Cauchy data for the problem (2)–(5) we apply the finite difference method (FDM) in which the implicit Crank–Nicolson method and the central difference scheme are adopted for the temporal and the spatial approximation, respectively The solution of Eq (2) is solved numerically in a fine 2D mesh with the initial condition (5) and the boundary conditions √ 𝑢(𝑎∞ , 𝑦, 𝑡) = 𝑢(−𝑎∞ , 𝑦, 𝑡) = 𝑢(𝑥, 𝑏∞ , 𝑡) = 0, 𝑢(𝑥, 0, 𝑡) = 𝑡e−(𝑥−𝑡∕2) −𝑡 𝑝 For 𝑞 = 1, 𝐾 and 𝑝 = 1, 𝐽 , if 𝑟𝑝 ∉ (−𝜋∕ℎ, 𝜋∕ℎ) then we adjust 𝐹𝑝,𝑞 = and 𝐺𝑝,𝑞 = f t (𝑟 , 𝑠 ) and 𝑔 f t Finally, we approximate 𝑓𝑁,𝑀 (𝑟 , 𝑠 ) by 𝐹𝑝,𝑞 and 𝐺𝑝,𝑞 , 𝑝 𝑞 𝑁,𝑀 𝑝 𝑞 respectively, for 𝑝 = 1, 𝐽 and 𝑞 = 1, 𝐾 In practice, we use Fortran programming language [17] to perform the calculation in computer To save computer memory, we use only two 
arrays, namely U and V, to get through the whole of Steps and ̂ (𝑐, ̃ Precisely, we use the couple (U, V) to store values of (𝜏, 𝜂), (̂ 𝑐 , 𝑑), ̃ 𝑑), ′′ ′′ ̃ ̃ ̃ ̃ (𝐶, 𝐷), (𝐶, 𝐷), (𝐶 , 𝐷 ) and (𝐹 , 𝐺), stepwise, with the aid of the index mapping 𝑗 = 𝑚 − 𝐽1 + Based on the column-major order of array model in Fortran, 𝑈 and 𝑉 should be processed with the dimension 𝐾 × 𝐽 from Step 2.1 to Step 2.5 However, before Step 2.6, the arrays should be transposed so as the dimension is turned from 𝐾 ×𝐽 to 𝐽 ×𝐾 Thus, the 𝑞-loop of Step 2.6 should be modified, e.g., as follows ( ) } (𝛽 ) { iℎ𝐴(𝑗−1) U(p, q) ∶= ℎe−i𝐽1 ℎ𝑟𝑝 𝐅𝐽 𝑥 e U(j, q) 𝑗=1,𝐽 , ∀𝑝 = 1, 𝐽 After that we use subroutine RGSF3P in [18] to interpolate the numerical result from this mesh in order to obtain the ‘‘exact’’ Cauchy data 𝑓 , 𝑔 in (3), (4), and the ‘‘exact’’ solution 𝑢(⋅, 𝑦0 , ⋅) for 𝑦0 > In our practice, we choose 𝑎∞ = 15 and 𝑏∞ = 30, and perform the FDM on the mesh with the resolution 3001 × 3001 for the spatial discretization, and with the time step 10−2 A similar implementation using FDM to obtain data input can be found in [19, Appendix] For 𝑛 given, we compute 𝑣𝑛 = 𝑢(𝑥, 0.1, 𝑡) (i.e 𝑦0 = 0.1) by the procedure in Steps 1–3 with the Cauchy data (Eqs (6) and (7)) given in two cases: without noise, and with Gaussian noise In this test case we aim to observe the estimation of the expectation of the relative error of ‖𝑣𝑛 − 𝑢(⋅, 𝑦0 , ⋅)‖, that is √ ∑𝐽 ∑𝐾 | |2 𝐿 𝑗=1 𝑘=1 ||𝑣𝑛,(𝑙),𝑗,𝑘 − 𝑢(𝑥𝑗 , 𝑦0 , 𝑡𝑘 )|| ∑ ERE ∶= , (56) √ 𝐿 𝑙=1 ∑𝐽 ∑𝐾 | |2 𝑢(𝑥 , 𝑦 , 𝑡 ) | | 𝑗 𝑘 | 𝑗=1 𝑘=1 | 𝑝 This transposition is not only to speed up the computing at Step 2.6 due to the column-major order of the array model, but also to ensure that the dimension of the outcome (i.e 𝐽 × 𝐾) is consistent with that of the input for Step Remark on the mesh resolutions: In order to improve the accuracy of the Newton–Cotes procedure (42) as well as (40) and (48), we set the mesh resolutions 𝐽 and 𝐾 to be large enough At least, they should satisfy 𝐽 > 2𝑀 + and 𝐾 ≥ 𝑛 where 𝐿 is the number of replication, 𝑥𝑗 and 𝑡𝑘 were defined in (38), and 𝑣𝑛,(𝑙),𝑗,𝑘 denotes the value of the 𝑙th solution 𝑣𝑛,(𝑙) at the mesh point (𝑥𝑗 , 𝑡𝑘 ), for 𝑙 = 1, 𝐿 For every 𝑛, we repeat generating the data randomly as in (6)–(7) and performing the calculation of 𝑣𝑛 in 𝐿 times in order to obtain the above error estimate In this example, the diffusivity is 𝜅 = We set 𝑎 = 𝑏 = in (38), 𝐽 = max{4096, 2𝑀𝑛 + 3} and 𝐾 = max{1025, 𝑛 + 1} for every 𝑛 Letting 𝑇 = 20, we set the values of constants in (55) as follows: 𝑆𝑁 = 5, 𝑆𝑀 = 30, 𝑆ℎ = 10−1 and 𝑆𝜖 = We approximate 𝑣𝑛 = 𝑣𝜖𝑛 (Eq (36)) by the method in Steps 1–3 Table shows ERE in (56) for 𝑦0 = 0.1, for various values of 𝑛 and for different noise levels Therein we calculate ERE with 𝐿 = for the data without noise, and with 𝐿 = 100 for the data disturbed by Gaussian noise (0, 𝜎), where 𝜎 = 5%, 1% From the left to right columns of the table we can see the convergence of 𝑣𝑛 to 𝑢(𝑦0 ) stably, In practice, we determine the sizes of our problem, i.e 𝐽 and 𝐾, in advance of Step It should be large as much as possible so that we can achieve the desired accuracy However, there is a trade-off between doing the calculation accurately and doing it quickly Now we explain calculation procedure for Step Step 1: Theoretically, for 𝑛 ∈ N, we need to choose the parameters ℎ𝑛 , 𝑇𝑛 , 𝑀𝑛 , 𝑁𝑛 , 𝜖𝑛 in such a way that 𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁) becomes small as 𝑛 ′ becomes large Herein 𝜀0 is given with the setting 𝛬2 = 𝐶𝛬 𝑇 2𝜏 ℎ−2𝛽 for ′ ′ 𝐶𝛬 constant, 𝛽 > 1∕2, < 𝛼 < 3∕2 and < 𝛽 < 𝛽 − 1∕2 as in (16), 𝜏 
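To make Step 1 concrete: the constants 𝑆_𝑁 = 5, 𝑆_𝑀 = 30, 𝑆_ℎ = 0.1 and 𝑆_𝜖 = 1 quoted for Example 1 fix the prefactors in (55), but the power-law exponents attached to 𝑛 are not legible in this copy, so the sketch below exposes them as inputs with placeholder values that are our assumption, not the paper's. The cut-off 𝑏_𝑛 is computed as ln(4∕𝜖_𝑛)∕√(2(√2 + 1)), which is how we read the garbled denominator, and 𝑎_𝑛 = 𝑏_𝑛² so that 𝐷_𝜖 = {|𝑟| ≤ 𝑏_𝑛, |𝑠| ≤ 𝜅𝑏_𝑛²}.

```python
import numpy as np

# Prefactors quoted for Example 1.
S_N, S_M, S_h, S_eps = 5.0, 30.0, 0.1, 1.0
# Power-law exponents: placeholders only; the printed values in (55) are not recoverable here.
P_N, P_M, P_h = 1.0 / 12.0, 1.0 / 12.0, -1.0 / 6.0

def parameters(n, kappa=1.0):
    """Regularization parameters as functions of the sample size n, in the spirit of (55)."""
    N = max(1, int(S_N * n ** P_N))
    M = max(1, int(S_M * n ** P_M))
    h = S_h * n ** P_h
    eps = S_eps / n
    b = np.log(4.0 / eps) / np.sqrt(2.0 * (np.sqrt(2.0) + 1.0))   # cut-off b_n
    a = b ** 2                                                     # a_n = b_n^2
    return dict(N=N, M=M, h=h, eps=eps, r_max=b, s_max=kappa * a)  # bounds of D_eps

if __name__ == "__main__":
    for n in (100, 1000, 8000):
        print(n, parameters(n))
```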
= as in Lemma 2.4, i.e ( ) 𝐴𝜀 𝐵𝜀 𝑇 2𝜏+1 𝑀ℎ𝑇 𝑁 𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁) = + + 𝐶𝜀 + 𝐷𝜀 ℎ4𝑞−2 + 𝐸𝑇 −𝜌 𝑛 𝑀 2𝛽 ℎ2𝛽 ′ −1 𝑁 2𝛼 22 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 Table Error estimate ERR defined in (56) No noise 𝜎 = 1% 𝜎 = 5% n = 100 500 1000 2000 4000 6000 8000 1.48E−01 2.37E−01 9.25E−01 1.01E−01 2.13E−01 9.28E−01 9.94E−02 2.12E−01 9.45E−01 9.66E−02 2.07E−01 9.17E−01 9.09E−02 1.99E−01 8.90E−01 8.50E−02 1.95E−01 8.82E−01 8.15E−02 1.92E−01 8.75E−01 results randomly to display From the top to bottom of the figure, we can see that Dirac delta distribution 𝑣(𝑥, 0) was well detected at the origin (𝑥, 𝑡) = (0, 0) It is improved clearly when 𝑛 becomes large, i.e 𝑛 = 1000, 4000, 8000 From the left to right of the figure, we can see that the graph of 𝑣𝑛 with the noise 𝜎 = 5% has less wiggles than the one with 𝜎 = 10% This phenomenon appears from the top to bottom of the figure It confirms the fact that the quality of data not only depended on the sample size (i.e 𝑛), but it also is influenced by the deviation of the noise As experimented in Examples and 2, we conclude that the proposed regularization method works well and it is feasible to perform in practice However, let us emphasize that there is a trade-off between stability and accuracy, and that the empirical constants 𝑆𝑁 , 𝑆𝑀 , 𝑆ℎ , 𝑆𝜖 play an important role in the accuracy and stability of the regularized solution For instance, if we decrease 𝑆𝜖 , e.g from 𝑆𝜖 = to 10−1 in Example 1, then the accuracy can be achieved at a higher precision but the stability is less confirmed, and reversely Let us reserve this problem of parameter selection for a future study this trend appears in every row of the table Therefore it confirms the reliability of the proposed method The calculation speed of Steps and is moderately fast For the mesh resolution 𝐽 × 𝐾, where 𝐽 = 4096, the following table shows average time that the calculation takes in the computer with CPU 3.00 GHz Intel Core i7-9700F, RAM 16G, and with the compiler GNU Fortran 8.3.0 in Debian/Linux 1025 1.49 𝐾 CPU time (in second) 2001 3.36 4001 6.68 6001 10.59 8001 14.47 The computer program was not optimized since we focus only on illustrating the method implementation Example Let us consider the heat transfer in metal with the thermal diffusivity of the medium 𝜅 = 0.23 (cm2 ∕s) The problems (2) and (5) are defined with an instantaneous source located initially at the origin (see, e.g., [20, p 28]), and the exact solution is given by 𝑄 − 𝑥2 +𝑦2 e 4𝜅𝑡 , (57) 𝑡 where 𝑄 > is some constant Let us take from the formula (57) the functions 𝑢(𝑥, 𝑦, 𝑡) = 𝑓 (𝑥, 𝑡) = 𝑄 e 𝑡 +1 − 𝑥4𝜅𝑡 , 𝑔(𝑥, 𝑡) = 𝑥2 𝑄 − e 𝑡 Proofs 5.1 The Fourier series and Sinc series 𝑥2 +4 4𝜅𝑡 5.1.1 The Fourier series To present the proofs, we rewrite the following property of the trigonometric basis, which are given by Tsybakov, [21] , (58) 𝑄 − e 4𝜅𝑡 , 𝑡 > 0, 𝑡 which then stand for our exact data (i.e 𝑓 and 𝑔) and exact solution (i.e 𝑣) of (1)–(7) To verify that 𝑓 and 𝑔 satisfy the assumptions (14)–(16), readers may be interested in Section 5.6 𝑣(𝑥, 𝑡) = 𝑢(𝑥, 0, 𝑡) = Lemma 5.1 For 𝑛 ∈ N, let 𝑡𝑗 = 𝑗𝑇 ∕𝑛 for 𝑗 = 1, 𝑛 (a) For every 𝑝, 𝑞 ∈ N, we have 𝑛−1 2∑ 𝜙 (𝑡 )𝜙 (𝑡 ) + 𝜙𝑝 (𝑡𝑛 )𝜙𝑞 (𝑡𝑛 ) 𝑛 𝑙=1 𝑝 𝑙 𝑞 𝑙 𝑛 Herein 𝑣(0, 𝑡) is the instantaneous source of the problem We aim to detect 𝑣(𝑥, 𝑡) from the Cauchy data 𝑓 , 𝑔 given by the models (6) and (7) We have a note on (57) Since lim𝑡↓0 𝑢(𝑥, 𝑦, 𝑡) = for every (𝑥, 𝑦) ≠ (0, 0), and lim𝑡↓0 𝑢(0, 0, 𝑡) = ∞, the exact solution 𝑣(𝑥, 𝑡) gets singularity at 𝑥 = 
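The error measure (56) averages, over 𝐿 independent replications, the relative ℓ² distance between the regularized solution and the exact one on the evaluation mesh. The sketch below evaluates that formula directly; the arrays in the usage example are synthetic stand-ins, not results from the paper.

```python
import numpy as np

def ere(v_runs, u_exact):
    """Expected relative error (56): average over L replications of
    ||v_{n,(l)} - u(., y0, .)||_F / ||u(., y0, .)||_F on the evaluation mesh.
    v_runs has shape (L, J, K); u_exact has shape (J, K)."""
    num = np.sqrt(np.sum((v_runs - u_exact[None, :, :]) ** 2, axis=(1, 2)))
    den = np.sqrt(np.sum(u_exact ** 2))
    return float(np.mean(num / den))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    u = rng.standard_normal((64, 33))
    runs = u[None] + 0.05 * rng.standard_normal((100, 64, 33))   # synthetic replications
    print(round(ere(runs, u), 3))                                 # ~0.05
```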
as 𝑡 tends to zero We need to approximate 𝑢(𝑥, 𝑦, 0) Based on ∞ ∞ the fact that is ∫−∞ ∫−∞ 𝑢(𝑥, 𝑦, 𝑡)d𝑥d𝑦 = 4𝜋𝜅𝑄 for all 𝑡 > 0, to eliminate the singularity, we approximate 𝑢(𝑥, 𝑦, 𝑡) ∶= 𝛿𝑑 (𝑥, 𝑦) for 𝑡 ∈ [0, (4𝜅𝑑)−1 ], 2 where 𝛿𝑑 (𝑥, 𝑦) = 4𝜅𝑄𝑑 e−𝑑(𝑥 +𝑦 ) for some 𝑑 > large enough Herein we can check that 𝛿𝑑 is an approximation of the Dirac delta at (𝑥, 𝑦) = (0, 0) ∞ ∞ with the scale factor 4𝜋𝜅𝑄, i.e ∫−∞ ∫−∞ 𝛿𝑑 (𝑥, 𝑦)d𝑥d𝑦 = 4𝜋𝜅𝑄 for all 𝑑 > 0, lim𝑑→∞ 𝛿𝑑 (𝑥, 𝑦) = for every (𝑥, 𝑦) ≠ (0, 0), and lim𝑑→∞ 𝛿𝑑 (0, 0) = ∞ Therefore, we approximate 𝑣(𝑥, 𝑡) ≃ 𝛿𝑑 (𝑥, 0), 𝑓 (𝑥, 𝑡) ≃ 𝛿𝑑 (𝑥, 1) and 𝑔(𝑥, 𝑡) ≃ 𝛿𝑑 (𝑥, 2) for 𝑡 ∈ [0, (4𝜅𝑑)−1 ] Otherwise, we use the exact form (58) Let 𝑄 = 2, 𝑑 = 100, 𝑇 = 20 Let 𝑎 = 𝑏 = in (38) Let 𝑆𝑁 = 5, 𝑆𝑀 = 30, 𝑆ℎ = 10−1 , 𝑆𝜖 = 1∕80 in (55) For 𝑛, 𝐽 and 𝐾 given as in Example 1, we perform Steps 1–3 with the Cauchy data in (6) and (7) are given in two cases of the white noise: 𝜖𝑗,𝑚 , 𝜀𝑗,𝑚 ∼ (0, 𝜎) for 𝜎 = 10% and 5%, respectively Fig shows the Cauchy data that is defined by the model (6) and (7), where 𝜖𝑗,𝑚 and 𝜀𝑗,𝑚 is from Gaussian noise (0, 𝜎), for 𝜎 = 10%, 5% Herein the data were chosen randomly In the sub figures 1(c)–1(f), we can see that the data quality was mostly defaced by the noise However, the data in Figs 1(c)–1(d) looks better than the one shown in Figs 1(e)– 1(f) We can see how this difference influences the numerical solutions in Fig Fig shows the regularized solution 𝑣𝑛 (𝑥, 𝑡) calculated from the data (𝜏𝑗,𝑚 , 𝜂𝑗,𝑚 ) by the method in Steps 1–3 Therein we selected the ⎧ 1, ⎪ = ⎨−1, ⎪ 0, ⎩ if 𝑝 − 𝑞 = 2𝑚𝑛, if 𝑝 + 𝑞 = 2𝑘𝑛 + 1, otherwise 𝑚 = 0, 1, 2, … , 𝑘 = 1, 2, 3, … , (b) For {𝑢𝑗 }𝑗=1,𝑛 and {̂ 𝑢𝑝 }𝑝=1,𝑛 in C, if ̂ 𝑢𝑝 = 𝑛−1 2∑ 𝑢 𝜙 (𝑡 ) + 𝑢𝑛 𝜙𝑝 (𝑡𝑛 ) 𝑛 𝑗=1 𝑗 𝑝 𝑗 𝑛 (59) then we have 𝑢𝑗 = 𝑛 ∑ (60) ̂ 𝑢𝑝 𝜙𝑝 (𝑡𝑗 ), 𝑝=1 and vice versa We call (59) and (60) the quarter-sine transform and its inverse, respectively Moreover we have 𝑛 ∑ |̂ 𝑢𝑝 |2 = 𝑝=1 𝑛−1 2∑ |𝑢 |2 + |𝑢𝑛 |2 , 𝑛 𝑗=1 𝑗 𝑛 which is called the discrete Parseval identity (c) Let 𝑢 ∈ 𝐿2 (0, 𝑇 ) and 𝑢 be piecewise-continuous on (0, 𝑇 ) with ∑ 𝜃𝑗 = (2∕𝑇 )⟨𝑢, 𝜙𝑗 ⟩𝑇 for 𝑗 = 1, 𝑛 Put 𝛼𝑗 = 2𝑛 𝑛−1 𝑘=1 𝑢(𝑡𝑘 )𝜙𝑗 (𝑡𝑘 ) + 𝑢(𝑡 )𝜙 (𝑡 ) − 𝜃 Then we have 𝑛 𝑗 𝑛 𝑗 𝑛 𝛼𝑗 = ∞ ∑ ( ) 𝜃𝑗+2𝑙𝑛 − 𝜃−𝑗+2𝑙𝑛+1 , 𝑙=1 23 𝑗 = 1, 𝑛 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 Fig Example The Cauchy data defined by (6) and (7) with the white noise 𝜖𝑗,𝑚 , 𝜀𝑗,𝑚 ∼ (0, 𝜎), for 𝜎 = 10%, 5% Herein the data were chosen randomly Proof To prove (a), we apply the Lagrange’s trigonometric identities, i.e cos 𝑥 + cos 2𝑥 + ⋯ + cos 𝑛𝑥 = − + sin 𝑥 + sin 2𝑥 + ⋯ + sin 𝑛𝑥 = sin((𝑛 + 12 )𝑥) , sin( 𝑥2 ) cos( 𝑥2 ) sin( 𝑥2 ) − cos((𝑛 + 12 )𝑥) sin( 𝑥2 ) for 𝑛 ∈ N and for any 𝑥 ≠ 2𝑘𝜋, 𝑘 ∈ Z The rest of this proof is elementary, therefore we omit it 24 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 Fig Example Regularized solution 𝑣𝑛 (𝑥, 𝑡) chosen randomly from many calculations as detailed in Steps 1–3 with the Cauchy data influenced by the white noise (0, 𝜎) The proof of (b) can be derived directly from the orthonormal property in (a) So we also omit it Now, let us give the proof of (c) ∑ Since 𝑢(𝑡𝑙 ) = ∞ 𝑝=1 𝜃𝑝 𝜙𝑝 (𝑡𝑙 ), we have = ∞ ∑ 𝑝=1 ( 𝜃𝑝 ) 𝑛−1 2∑ 𝜙𝑝 (𝑡𝑙 )𝜙𝑞 (𝑡𝑙 ) + 𝜙𝑝 (𝑡𝑛 )𝜙𝑞 (𝑡𝑛 ) − 𝜃𝑞 𝑛 𝑙=1 𝑛 Now we apply (a) For every 𝑞 = 1, 𝑛, we define 𝑛−1 2∑ 𝛼𝑞 = 𝑢(𝑡 )𝜙 (𝑡 ) + 𝑢(𝑡𝑛 )𝜙𝑞 (𝑡𝑛 ) − 𝜃𝑞 , 𝑞 = 1, 𝑛 𝑛 𝑙=1 𝑙 𝑞 𝑙 𝑛 (∞ ) (∞ ) 𝑛−1 2∑ ∑ ∑ = 𝜃 𝜙 (𝑡 ) 𝜙𝑞 (𝑡𝑙 ) + 𝜃 𝜙 (𝑡 ) 𝜙𝑞 (𝑡𝑛 ) − 𝜃𝑞 𝑛 𝑙=1 𝑝=1 𝑝 𝑝 𝑙 𝑛 𝑝=1 𝑝 𝑝 𝑛 𝐴𝑞 = {𝑞 + 2𝑚𝑛 ∶ 𝑚 = 0, 1, 2, …} , 𝐵𝑞 = {−𝑞 + 2𝑘𝑛 + ∶ 𝑘 = 1, 2, 
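For Example 2, the following sketch evaluates the instantaneous-source solution (57), the data (58) at the lines 𝑦 = 1 and 𝑦 = 2, and the mollified delta 𝛿_𝑑 used to remove the singularity at 𝑡 = 0. The values 𝑄 = 2, 𝜅 = 0.23 and 𝑑 = 100 are the ones stated in the example, and the early-time switch at 𝑡 ≤ 1∕(4𝜅𝑑) follows the text; everything else (function names, the evaluation grid) is illustrative. This reproduces only the exact data, not the FDM data generation used in Example 1.

```python
import numpy as np

Q, KAPPA, D = 2.0, 0.23, 100.0     # constants quoted in Example 2

def u_exact(x, y, t):
    """Instantaneous point source (57): u = (Q/t) * exp(-(x^2 + y^2) / (4 kappa t))."""
    return Q / t * np.exp(-(x**2 + y**2) / (4.0 * KAPPA * t))

def delta_d(x, y):
    """Mollified Dirac delta with total mass 4*pi*kappa*Q, used near t = 0."""
    return 4.0 * KAPPA * Q * D * np.exp(-D * (x**2 + y**2))

def data_at_line(x, t, y):
    """Exact data at the horizontal line y (y = 1 gives f, y = 2 gives g, y = 0 gives v);
    the singular layer t <= 1/(4 kappa d) is replaced by delta_d, as described in Example 2."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(t, dtype=float)
    t_cut = 1.0 / (4.0 * KAPPA * D)
    safe_t = np.maximum(t, t_cut)                    # avoid dividing by t ~ 0
    smooth = u_exact(x, y, safe_t)
    return np.where(t <= t_cut, delta_d(x, y), smooth)

if __name__ == "__main__":
    x = np.linspace(-2.0, 2.0, 5)
    print(data_at_line(x, 0.5, 1.0))     # f(x, 0.5)
    print(data_at_line(x, 0.5, 2.0))     # g(x, 0.5)
```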
3, …} , where we can see that 𝐴𝑞 ⊂ N and 𝐵𝑞 ⊂ N Particularly, we have 𝐴𝑞 ∩𝐵𝑞 = ∅ Indeed, if 𝑙 ∈ 𝐴𝑞 and 𝑙 ∈ 𝐵𝑞 then 𝑙 = 𝑞+2𝑚𝑛 and 𝑙 = −𝑞+2𝑘𝑛+1, 25 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 We verify that there is a constant 𝐶 > such that 𝑣𝑦 ∈ 𝑉𝑞,𝐶 In fact, which implies 2𝑙 = 2𝑚𝑛 + 2𝑘𝑛 + This is contradictory since the RHS is an odd integer while the LHS is even So far, for every 𝑞 = 1, 𝑛, using (a) we have ) ( 𝑛−1 ∞ ∑ 2∑ 𝜃𝑝 𝜙 (𝑡 )𝜙 (𝑡 ) + 𝜙𝑝 (𝑡𝑛 )𝜙𝑞 (𝑡𝑛 ) 𝑛 𝑙=1 𝑝 𝑙 𝑞 𝑙 𝑛 𝑝=1 ∑ ∑ ∑ ∑ ∑ = + + = + 𝑝∈𝐴𝑞 𝑝∈𝐵𝑞 𝑝∈𝐴𝑞 𝑝∈N⧵(𝐴𝑞 ∪𝐵𝑞 ) Using the Parseval equality yields ∫R 𝑝∈𝐵𝑞 ∞ ∑ 𝜃𝑞+2𝑚𝑛 − 𝑚=0 ∞ ∑ 𝜃−𝑞+2𝑘𝑛+1 − 𝜃𝑞 = 𝑘=1 ∞ ∑ ) ( 𝜃𝑞+2𝑚𝑛 − 𝜃−𝑞+2𝑚𝑛+1 condition (1+𝑟2 )𝑞∕2 𝑚=1 The proof is completed (1 + 𝑟2 )𝑞∕2 Proof of Lemma 2.4 We have | | ) ]⟩ | ⟨ [( | 𝜋𝑡 𝑇 | | | | 𝜓𝑡 (𝑚ℎ, ⋅), cos 𝑝 − | |⟨𝜓(𝑚ℎ, ⋅), 𝜙𝑝 ⟩𝑇 | = | ( ) | | | 𝑇 𝑇 || | 𝑝− 𝜋 | | ) 4𝑇 ( |𝜓𝑡 (𝑚ℎ, 𝑇 )| + |⟨𝜓𝑡𝑡 (𝑚ℎ, ⋅), 𝜙𝑝 ⟩𝑇 | ≤ 𝑝2 𝜋 ′ ∑ ≤ ≤ |⟨𝜓(𝑚ℎ, ⋅), 𝜙𝑝 ⟩𝑇 |2 < −1, the latter inequalities yield ( ) ∑ 𝑝2𝛼 |⟨𝜓(0, ⋅), 𝜙𝑝 ⟩𝑇 |2 + 𝑝2𝛼 |𝑚|2𝛽 |⟨𝜓(𝑚ℎ, ⋅), 𝜙𝑝 ⟩𝑇 |2 𝑇4 ∑ 96𝐶Fou 𝜋4 𝑝2𝛼−4 + 𝑇4 ∑ 96𝐶Fou 𝑝≥1 𝑇4 96𝐶Fou [ 𝜋 ℎ2𝛽 ′ − 2𝛼 +2 − 2𝛼 ℎ2𝛽 ′ 𝜋 ( Hence, we can choose 𝛬2 (𝑇 , ℎ) = 𝐶𝛬 = |𝑚|≥1 2𝛽 ′ − 2𝛽 2𝛽 ′ − 2𝛽 − 𝐶𝛬 𝑇 ′ ℎ2𝛽 ( 𝜕𝑣f0t ∑ 𝑝2𝛼−4 |𝑚|2𝛽−2𝛽 𝑝≥1 )( − 2𝛼 − 2𝛼 √ | | √ ≤ 2, || 𝐴𝑟 || ≤ and computing directly, | 0| | 𝜕𝜆 | | 𝜕𝜆 | |𝜆 | + |𝜆 | ≤ 𝐶1 (1 + |𝑠| + |𝑟|) | 𝜕𝑟 | | 𝜕𝑠 | | | | | ′ (65) Combining (63), (64), (65) deduces that 𝐾3 ∈ 𝐿2 (R2 ) Similarly, 𝐾4 ∈ )] 𝜕𝑣f𝑦t 𝐿2 (R2 ) From these arguments, we obtain (1 + 𝑟2 )𝑞∕2 ■ that, with the same calculation, we have (1 + 𝑟2 )𝑞∕2 𝜕𝑠 ∈ 𝐿2 (R2 ) which is used in the next proof We verify that there is a constant 𝐸 > such that 𝑣𝑦 ∈ 𝑉trunc (2, 𝐸) In fact, 𝜕𝑟 𝜕𝑣f𝑦t with ∬R2 Proof Put 𝑎,𝑏 (𝑧) = sinh(𝑎𝑧)∕ sinh(𝑏𝑧) for 𝑎, 𝑏 ∈ R, 𝑎, 𝑏 > 0, 𝑅𝑒(𝑧) > We can find a constant 𝐶 > such that | 𝑑𝑎,𝑏 | | | (𝑧)| ≤ 𝐶 𝑒−(𝑏−𝑎)𝑅𝑒(𝑧) for 𝑅𝑒(𝑧) > 0, 𝑎 < 𝑏 | | 𝑑𝑧 | | | Using the Fourier forms (10) and (11) yields |𝑧| 𝑣f𝑦t (𝑟, 𝑠) = 𝑣f0t (𝑟, 𝑠)𝑌 −𝑦,𝑌 (𝜆(𝑟, 𝑠)) + 𝑣f𝑌t (𝑟, 𝑠)𝑦,𝑌 (𝜆(𝑟, 𝑠)) ∬R×(𝑇 ,∞) (61) evaluate (62) |𝑣𝑦 (𝑥, 𝑡)|2 d𝑥d𝑡 ≤ 𝑇 −2 ∬R2 |𝑡𝑣𝑦 (𝑥, 𝑡)|2 d𝑥d𝑡 ≤ 𝐸𝑇 −2 𝜕𝑣𝑦 𝜕𝑡 ∶= 𝑣𝑦,𝑡 , 𝜕 𝑣𝑦 𝜕𝑡2 ∶= 𝑣𝑦,𝑡𝑡 We have to prove ‖𝑣𝑦,𝑡 (𝑥, )‖𝐿2 (0,∞) + |𝑥|‖𝑣𝑦,𝑡 (𝑥, )‖𝐿2 (0,∞) + ‖𝑣𝑦,𝑡𝑡 (𝑥, )‖𝐿2 (0,∞) + |𝑥|‖𝑣𝑦,𝑡𝑡 (𝑥, )‖𝐿2 (0,∞) ≤ 𝐶 The proof of the latter inequality is long and tedious We choose the most complex term to prove and omit the proof for other terms In fact, we have ( ) 𝜕𝑣f𝑦t 1 𝑖𝑟𝑥 −𝑖𝑥𝑣𝑦,𝑡𝑡 (𝑥, 𝑡) = (𝑟, 𝑠)𝑒 d𝑥 (𝑖𝑠)2 𝑒𝑖𝑠𝑡 d𝑠 2𝜋 ∫R 2𝜋 ∫R 𝜕𝑟 𝑑𝑧 | 𝜕𝑣f t |2 | 𝑦 | | | d𝑟d𝑠 ∶= 𝐸 < ∞ (𝑟, 𝑠) | 4𝜋 ∫R2 || 𝜕𝑠 | | | Finally, we verify condition 𝑣𝑦 ∈ 𝐶𝛼,𝛽,𝛬 Using Lemma 2.4, we will every 𝜉 ∈ (0, 𝑌 ), 𝜙 ∈ 𝐿2 (R2 ) we obtain 𝑑𝜉,𝑌 |𝑡𝑣𝑦 (𝑥, 𝑡)|2 d𝑥d𝑡 = ∈ 𝐿2 (R2 ) Note Hence where (𝑟, 𝑠) + 𝑖𝐵0 (𝑟, 𝑠) and 𝐴0 (𝑟, 𝑠) = √√we recall that 𝜆(𝑟, 𝑠) = 𝐴√ √ 1 2 √ 𝑟 + 𝑠 ∕𝜅 + 𝑟 , 𝐵0 (𝑟, 𝑠) = √ 𝑟4 + 𝑠2 ∕𝜅 − 𝑟2 From (61), for (1 + 𝑟2 + 𝑠2 )𝑞∕2 𝜙(.)𝜉,𝑌 (𝜆(.)) and (1 + 𝑟2 + 𝑠2 )𝑞∕2 𝜙(.) 
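A numerical check of Lemma 5.1(b): the pair (59)–(60) really is a transform–inverse pair, and the discrete Parseval identity holds. The dense matrix implementation below is for verification only (cost O(n²)); the paper performs these sums with the fast quarter-sine routines SINQF/SINQB of [16].

```python
import numpy as np

def phi(p, t, T):
    """phi_p(t) = sin((p - 1/2) pi t / T)."""
    return np.sin((p - 0.5) * np.pi * t / T)

def quarter_sine(u, T):
    """Forward transform (59): u_hat_p = (2/n) sum_{j<n} u_j phi_p(t_j) + (1/n) u_n phi_p(t_n)."""
    n = u.size
    t = T * np.arange(1, n + 1) / n
    w = np.full(n, 2.0 / n)
    w[-1] = 1.0 / n
    Phi = phi(np.arange(1, n + 1)[:, None], t[None, :], T)
    return Phi @ (w * u)

def inverse_quarter_sine(u_hat, T):
    """Inverse transform (60): u_j = sum_{p=1}^{n} u_hat_p phi_p(t_j)."""
    n = u_hat.size
    t = T * np.arange(1, n + 1) / n
    Phi = phi(np.arange(1, n + 1)[:, None], t[None, :], T)
    return Phi.T @ u_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, T = 64, 20.0
    u = rng.standard_normal(n)
    u_hat = quarter_sine(u, T)
    print(np.allclose(inverse_quarter_sine(u_hat, T), u))                    # True: (59)-(60) invert each other
    lhs = np.sum(u_hat**2)
    rhs = (2.0 / n) * np.sum(u[:-1]**2) + (1.0 / n) * u[-1]**2
    print(np.isclose(lhs, rhs))                                              # True: discrete Parseval identity
```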
(64) | √𝑠∕𝜅 | | | | 𝐴0 | | | we can find 𝐶1 > such that 5.2 Proof of Theorem 3.1 (𝑟, 𝑠) = 𝐾1 + 𝐾2 + 𝐾3 + 𝐾4 Using the inequalities ) ( )) ( )( 96𝐶Fou 2𝛽 ′ − 2𝛽 − 2𝛼 1+2 ′ − 2𝛼 2𝛽 − 2𝛽 − 𝜋 |𝑎,𝑏 (𝑧)| + ∈ 𝐿2 (R2 ) From (62) we have 𝑣f0t , 𝑣f𝑌t ∈ 𝐻 (R2 ) ∑ 𝑝≥1 𝜕𝑟 From the condition (ii), we obtain − 2𝛽 ′ |𝑚|≥1 𝜕𝑟 𝜕𝑣f𝑦t ( ) | | 𝜕𝜆 𝑑𝑌 −𝑦,𝑌 | | |𝐾3 | = (1 + 𝑟2 )𝑞∕2 |𝑣f0t (𝑟, 𝑠) 𝜆 (𝑟, 𝑠) (𝜆(𝑟, 𝑠))| | | 𝜕𝑟 𝜆(𝑟, 𝑠) 𝑑𝑧 | | 𝑇4 for 𝑚 = 1, 2, … ′ 2𝛽 ′ 2𝛽 𝑚 ℎ 𝑝 𝑝≥1 t , ∈ 𝐿2 (R2 ) By the same technic for (1+𝑟2 )𝑞∕2 𝑣f𝑦,1 t ∈ 𝐻 (R2 ), we obtain in view of (63) that 𝐾 , 𝐾 ∈ 𝐿2 (R2 ) Since 𝑣f𝑦,1 For 𝐾3 we have ′ Hence For 2𝛼 − < −1, 2𝛽 𝜕𝑟 (𝜆(𝑟, 𝑠)), 𝜕𝑟 𝑌 −𝑦,𝑌 f t 𝜕𝑣 𝐾2 = (1 + 𝑟2 )𝑞∕2 𝑌 (𝑟, 𝑠)𝑦,𝑌 (𝜆(𝑟, 𝑠)), 𝜕𝑟 𝑑𝑌 −𝑦,𝑌 𝑞∕2 f t 𝜕𝜆 (𝑟, 𝑠) (𝜆(𝑟, 𝑠)), 𝐾3 = (1 + 𝑟 ) 𝑣0 𝜕𝑟 𝑑𝑧 𝑑𝑦,𝑌 𝜕𝜆 𝐾4 = (1 + 𝑟2 )𝑞∕2 𝑣f𝑌t (𝑟, 𝑠) (𝜆(𝑟, 𝑠)) 𝜕𝑟 𝑑𝑧 ≤ 2𝐶Fou (1 + |𝑚ℎ|)−2𝛽 , |⟨𝜓𝑡𝑡 (𝑚ℎ, ⋅), 𝜙𝑝 ⟩𝑇 | ≤ ‖𝜓𝑡𝑡 (𝑚ℎ, ⋅)‖𝐿2 (0,𝑇 ) ≤ 𝐶Fou (1 + |𝑚ℎ|)−𝛽 𝜕𝑣f𝑦t 𝐾1 = (1 + 𝑟2 )𝑞∕2 | ∞ | |𝜓𝑡 (𝑚ℎ, 𝑇 )|2 = || 𝜓𝑡 (𝑚ℎ, 𝑡)𝜓𝑡𝑡 (𝑚ℎ, 𝑡)𝑑𝑡|| ∫ | 𝑇 | ≤ 2‖𝜓𝑡 (𝑚ℎ, ⋅)‖𝐿2 (0,∞) ‖𝜓𝑡𝑡 (𝑚ℎ, ⋅)‖𝐿2 (0,∞) ≤ 96𝐶Fou t 𝜕𝑣f𝑦,1 where We note that 𝑇4 , 𝑝4 𝜋 (1 + 𝑟2 )𝑞 |𝑣f𝑦t (𝑟, 𝑠)| 𝑑𝑠 2𝜋 ∫R it is sufficient to prove that (1 + 𝑟2 )𝑞∕2 ■ |⟨𝜓(0, ⋅), 𝜙𝑝 ⟩𝑇 |2 ≤ 96𝐶Fou t (1 + 𝑟2 )𝑞 |𝑣f𝑦,1 (𝑟, 𝑡)| 𝑑𝑡 = From (62), we obtain (1 + 𝑟2 )𝑞∕2 𝑣f𝑦t ∈ 𝐿2 (R2 ) Combining the properties t ∈ 𝐿2 (R2 ) We verify the with the latter equality yields (1 + 𝑟2 )𝑞∕2 𝑣f𝑦,1 which gives 𝛼𝑞 = (1 + 𝑟2 )𝑞∕2 𝑣f𝑦t (𝑟, 𝑠)𝑒𝑖𝑡𝑠 𝑑𝑠 2𝜋 ∫R t (1 + 𝑟2 )𝑞∕2 𝑣f𝑦,1 (𝑟, 𝑡) = (𝜆(.)) ∈ 𝐿2 (R2 ) (63) 26 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 5.3 Proof of Theorem 3.2 Using the Parseval equality yields ∫R | |2 𝜕𝑣f𝑦t | | | (𝑟, 𝑠)𝑒𝑖𝑟𝑥 d𝑥|| 𝑠4 d𝑠 | 2𝜋 ∫R | 2𝜋 ∫R 𝜕𝑟 | | | |𝑥𝑣𝑦,𝑡𝑡 (𝑥, 𝑡)|2 d𝑡 = 5.3.1 Cardinal series approximations of 𝑓 (⋅, 𝑡) and 𝑔(⋅, 𝑡) We define From (62) we have 𝑠 𝜕𝑣f𝑦t 𝜕𝑟 = 𝑠 Then it is reasonable for us to consider the truncated series of (66) with respect to any natural number 𝑀 𝜕𝑣f0t 𝜕𝑟 𝜕𝑣f𝑌t 𝜕𝑟 𝑌 −𝑦,𝑌 (𝜆(𝑟, 𝑠)), 𝑓̃𝑀 (𝑥, 𝑡) = 𝑀 ∑ 𝑓 (𝑚ℎ, 𝑡) 𝑆(𝑚, ℎ)(𝑥), (𝑟, 𝑠)𝑦,𝑌 (𝜆(𝑟, 𝑠)), Now we verify that the functions 𝑓̃ and 𝑔̃ are approximations of 𝑓 and 𝑔, respectively (see also [22], Chap 5) Lemma 5.2 Let ℎ ∈ (0, 1) and let 𝑞 > 1∕2, 𝐶sinc > as in (15) Suppose that 𝑓 , 𝑔 ∈ 𝑉 (𝑞, 𝐶sinc ) For 𝑓̃ and 𝑔̃ defined in (66), we obtain ∑ | |2 | 𝐾 ′ (𝑟, 𝑠)𝑒𝑖𝑟𝑥 d𝑥| d𝑠 | | 𝑗 ∫ ∫ 2𝜋 R 𝑗=1 | R | Proof To prove this lemma, we can refer to Theorem 5.6 in [22] ■ 5.3.2 Estimating 𝑓 (𝑚ℎ, 𝑡) and 𝑔(𝑚ℎ, 𝑡) for |𝑚| ≤ 𝑀 From the latter section, we see that the approximation 𝑓̃, 𝑔̃ of 𝑓 , 𝑔 are defined from 𝑓 (𝑚ℎ, 𝑡) and 𝑔(𝑚ℎ, 𝑡) Hence, approximations of 𝑓 (𝑚ℎ, ⋅) and 𝑔(𝑚ℎ, ⋅) are in order To this end, we will use the model (6), (7) with 𝑡𝑗 = 𝑗𝑇 ∕𝑛, 𝑗 = 1, 𝑛 Notice that some properties of 𝑓 (𝑚ℎ, ⋅) and 𝑔(𝑚ℎ, ⋅) are similar (in fact, this will be realized in the sequel), so it is sufficient just to give the proof of approximation results relating to the function 𝑓 (𝑚ℎ, ⋅) Since 𝑓 (𝑚ℎ, 𝑡) and 𝑔(𝑚ℎ, 𝑡) ∈ 𝐿2 [0, 𝑇 ], one can write the sine-Fourier expansion of 𝑓 (𝑚ℎ, ⋅) and 𝑔 (𝑚ℎ, ⋅) as: | |2 | 𝐾 ′ (𝑟, 𝑠)𝑒𝑖𝑟𝑥 d𝑟| d𝑠 | ∫R ||∫R | |2 | 𝜕𝑣f t | | | d𝑟 | 𝑠4 |𝑌 −𝑦,𝑌 (𝜆(𝑟, 𝑠))|2 d𝑟d𝑠 (𝑟, 𝑠) | ∫R ∬R2 || 𝜕𝑟 | | | | 𝜕𝑣f t |2 | | | ≤ (𝑟, 𝑠)|| d𝑟 𝑠4 𝐶𝑒−2𝑦𝐴0 (𝑟,𝑠) d𝑟d𝑠 ∬R2 || 𝜕𝑟 ∫R | | | | 𝜕𝑣f t |2 √ √ | | | | d𝑟 𝑠4 𝐶𝑒− 2𝑦( |𝑠|+|𝑟|) d𝑟d𝑠 ≤ (𝑟, 𝑠) | ∫ ∬R2 || 𝜕𝑟 | R | | ≤ 𝑓 (𝑚ℎ, 𝑡) = ≤ 𝐶1,Fou 𝑔(𝑚ℎ, 𝑡) = 𝑛 ∑ 𝑝=1 𝑛 ∑ 𝑐𝑝 (𝑚ℎ)𝜙𝑝 (𝑡) + 𝑑𝑝 (𝑚ℎ)𝜙𝑝 (𝑡) + where ∞ ∑ 𝑘=𝑛+1 ∞ ∑ 𝑐𝑘 (𝑚ℎ)𝜙𝑘 (𝑡), 𝑑𝑘 (𝑚ℎ)𝜙𝑘 (𝑡) 𝑘=𝑛+1 𝑝=1 where 𝑡 ∈ (0, 𝑇 ) and 𝐶1,Fou = 𝐶sup𝑠 𝑠4 𝐶𝑒 √ √ − 2𝑦 |𝑠| ∫R 𝑒 √ − 2𝑦|𝑟| | 𝜕𝑣f t |2 | | | d𝑟 (𝑟, 𝑠)|| d𝑟d𝑠 ∬R2 
|| 𝜕𝑟 | | | 𝑐𝑝 (𝑚ℎ) = | |2 | 𝐾 ′ (𝑟, 𝑠)𝑒𝑖𝑟𝑥 d𝑟| d𝑠 | | ∫R |∫R | | |2 𝑑 𝑌 −𝑦,𝑌 𝜕𝜆 | | ≤ (𝜆(𝑟, 𝑠))𝑒𝑖𝑟𝑥 d𝑟| d𝑠 | 𝑠2 𝑣f0t | ∫R |∫R 𝜕𝑟 𝑑𝑧 | | ( ) | |2 | | 𝑑𝑌 −𝑦,𝑌 | 𝜕𝜆 | (𝜆(𝑟, 𝑠))| d𝑟 d𝑠 ≤ |𝑣f0t | d𝑟 𝑠2 ||𝑠 (𝑟, 𝑠)|| | | ∬R2 ∫R | 𝜕𝑟 | || 𝑑𝑧 | |𝑣f0t | d𝑟 ∫R |𝑣f0t | d𝑟 ∫R √ √ 2𝑦( |𝑠|+|𝑟|) d𝑟d𝑠 ≤ 𝐶3,Fou √ √ 2𝑦 |𝑠| √ 2𝑦|𝑟| ∫R 𝑒− d𝑟 ∬R2 |𝑣f0t | d𝑟d𝑠 ⟨𝑔(𝑚ℎ, ⋅), 𝜙𝑝 ⟩𝑇 𝑇 𝑐𝑝 (𝑚ℎ) ≈ 𝑛−1 2∑ 𝑓 (𝑚ℎ, 𝑡𝑗 )𝜙𝑝 (𝑡𝑗 ) + 𝑓 (𝑚ℎ, 𝑡𝑛 )𝜙𝑝 (𝑡𝑛 ), 𝑛 𝑗=1 𝑛 𝑑𝑝 (𝑚ℎ) ≈ 𝑛 2∑ 𝑔(𝑚ℎ, 𝑡𝑗 )𝜙𝑝 (𝑡𝑗 ) + 𝑔(𝑚ℎ, 𝑡𝑛 )𝜙𝑝 (𝑡𝑛 ) 𝑛 𝑗=1 𝑛 𝛾𝑛,𝑝 (𝑚ℎ) = 𝑐𝑝 (𝑚ℎ) − 𝑛−1 2∑ 𝑓 (𝑚ℎ, 𝑡𝑗 )𝜙𝑝 (𝑡𝑗 ) − 𝑓 (𝑚ℎ, 𝑡𝑛 )𝜙𝑝 (𝑡𝑛 ), 𝑛 𝑗=1 𝑛 𝜈𝑛,𝑝 (𝑚ℎ) = 𝑑𝑝 (𝑚ℎ) − 𝑛 2∑ 𝑔(𝑚ℎ, 𝑡𝑗 )𝜙𝑝 (𝑡𝑗 ) − 𝑔(𝑚ℎ, 𝑡𝑛 )𝜙𝑝 (𝑡𝑛 ) 𝑛 𝑗=1 𝑛 for 𝑝 = 1, 𝑛 With the above notations, we have the results on the error of the approximations where 𝐶3,Fou = sup𝑠 𝑠2 𝐶12 (1 + |𝑠|)2 𝐶 𝑒− 𝑑𝑝 (𝑚ℎ) = We have to evaluate the error quantities 𝑠2 𝐶12 (1 + |𝑠|)2 𝐶 𝑒−2𝑦𝐴0 (𝑟,𝑠) d𝑟d𝑠 𝑠2 𝐶12 (1 + |𝑠|)2 𝐶 𝑒− ⟨𝑓 (𝑚ℎ, ⋅), 𝜙𝑝 ⟩𝑇 , 𝑇 for every 𝑝 ∈ N Using the trapezoidal rule yields Now, we estimate 𝐾3′ We have in view of (61) and (65) ∬R2 ‖𝑔 − 𝑔̃‖𝐿2 (R2 ) ≤ 𝐶sinc,𝑞 ℎ2𝑞−1 Herein 𝐶sinc,𝑞 is defined in (21) We find a bound for the right hand side of the latter inequality Since 𝐾1′ , 𝐾2′ have the same form we can verify only one term From (61) we obtain ∬R2 𝑔(𝑚ℎ, 𝑡) 𝑆(𝑚, ℎ)(𝑥) 𝑚=−𝑀 ‖𝑓 − 𝑓̃‖𝐿2 (R2 ) ≤ 𝐶sinc,𝑞 ℎ2𝑞−1 , |𝑥𝑣𝑦,𝑡𝑡 (𝑥, 𝑡)|2 d𝑡 ≤ 𝑀 ∑ 𝑔̃𝑀 (𝑥, 𝑡) = 𝑚=−𝑀 Hence ≤ 𝑔(𝑚ℎ, 𝑡) 𝑆(𝑚, ℎ)(𝑥) 𝑚=−∞ (66) 𝑑𝑌 −𝑦,𝑌 𝜕𝜆 (𝜆(𝑟, 𝑠)), 𝐾3′ = 𝑠2 𝑣f0t (𝑟, 𝑠) 𝜕𝑟 𝑑𝑧 𝑑𝑦,𝑌 𝜕𝜆 𝐾4′ = 𝑠2 𝑣f𝑌t (𝑟, 𝑠) (𝜆(𝑟, 𝑠)) 𝜕𝑟 𝑑𝑧 ≤ ∞ ∑ 𝑔̃(𝑥, 𝑡) = (𝑟, 𝑠) = 𝐾1′ + 𝐾2′ + 𝐾3′ + 𝐾4′ 𝐾1′ = 𝑠2 ∫R 𝑓 (𝑚ℎ, 𝑡) 𝑆(𝑚, ℎ)(𝑥), 𝑚=−∞ where 𝐾2′ ∞ ∑ 𝑓̃(𝑥, 𝑡) = Lemma 5.3 Let 𝛼 > 1∕2 and assume that the function 𝑓 (𝑚ℎ, ⋅) is piecewise 𝐶 on [0, 𝑇 ] and belongs to the functional class 𝐶𝛼,𝛽,𝛬 Then, we ■ 27 D.D Trong, T.Q Viet, V.D Khoa et al have |𝛾𝑛,𝑝 (𝑚ℎ) |2 ≤ (∞ ∑ 𝑙−2𝛼 )( ∞ ∑ 𝑙=1 𝑙2𝛼 Computers and Mathematics with Applications 86 (2021) 16–32 [ where 𝜈𝑚𝑝 are as in Lemma 5.4, ) ] 2 2𝑐𝑝+2𝑙𝑛 (𝑚ℎ) + 2𝑐−𝑝+2𝑙𝑛+1 (𝑚ℎ) Bias(̂ 𝑔𝑁,𝑀 (𝑥, 𝑡)) {𝑁 } 𝑀 ∞ ∑ ∑ ∑ = 𝜈𝑛,𝑝 (𝑚ℎ)𝜙𝑝 (𝑡) + 𝑑𝑘 (𝑚ℎ)𝜙𝑘 (𝑡) 𝑆(𝑚, ℎ)(𝑥), 𝑙=1 for 𝑝 = 1, 𝑛 𝑚=−𝑀 Similarly 𝑙=1 𝑀 ∑ 𝑙=1 for 𝑝 = 1, 𝑛 Proof Since Lemma 5.4 is similar to Lemma 5.3, it is sufficient just to give the proof of Lemma 5.3 Using Lemma 5.1, we have the inequality [∞ ]2 ∑[ ] 𝑐𝑝+2𝑙𝑛 (𝑚ℎ) − 𝑐−𝑝+2𝑙𝑛+1 (𝑚ℎ) 𝛾𝑛,𝑝 (𝑚ℎ) = 𝑙=1 (∞ ∑ 𝑙 −2𝛼 𝑙=1 ≤ 𝑘=𝑁+1 )2 ⎧ 𝑛−1 ( 𝑁 ⎪4 ∑ ∑ 𝜙𝑝 (𝑡𝑗 )𝜙𝑝 (𝑡) Var𝜀𝑗,𝑚 Var(̂ 𝑔𝑁,𝑀 (𝑥, 𝑡)) = 𝑆 (𝑚, ℎ)(𝑥) ⎨ ⎪ 𝑛 𝑗=1 𝑝=1 𝑚=−𝑀 ⎩ (𝑁 )2 ⎫ ⎪ ∑ 𝜙𝑝 (𝑡𝑛 )𝜙𝑝 (𝑡) Var𝜀𝑛,𝑚 ⎬ + 𝑛2 𝑝=1 ⎪ ⎭ Lemma 5.4 Let 𝛼 > 1∕2 and assume that the function 𝑔(𝑚ℎ, ⋅) is piecewise 𝐶 on [0, 𝑇 ] and belongs to the functional class 𝐶𝛼,𝛽,𝛬 Then, we have (∞ )( ∞ ) [ ] ∑ ∑ −2𝛼 2𝛼 2 |𝜈𝑛,𝑝 (𝑚ℎ)| ≤ 𝑙 𝑙 2𝑑𝑝+2𝑙𝑛 (𝑚ℎ) + 2𝑑−𝑝+2𝑙𝑛+1 (𝑚ℎ) ≤ 𝑝=1 and (∞ ∑ )( ∞ ∑ 𝑙=1 𝑙−2𝛼 )( ∞ ∑ 𝑙=1 𝑙 2𝛼 [ ]2 𝑐𝑝+2𝑙𝑛 (𝑚ℎ) − 𝑐−𝑝+2𝑙𝑛+1 (𝑚ℎ) Lemma 5.7 Let 𝑡 ∈ (0, 𝑇 ) and let random variables 𝜖𝑚,1 , … , 𝜖𝑚,𝑛 with 𝑚 = −𝑀, 𝑀 be mutually independent and 𝑓̂ as in (17)–(18) Denoting ( ) MSE 𝑓̂𝑁,𝑀 (𝑥, 𝑡) = E|𝑓̂𝑁,𝑀 (𝑥, 𝑡) − 𝑓̃𝑀 (𝑥, 𝑡)| , we have ) ( ) ( ) ( MSE 𝑓̂𝑁,𝑀 (𝑥, 𝑡) = Bias2 𝑓̂𝑁,𝑀 (𝑥, 𝑡) + Var 𝑓̂𝑁,𝑀 (𝑥, 𝑡) , ) ) [ ] 2𝛼 2 𝑙 2𝑐𝑝+2𝑙𝑛 (𝑚ℎ) + 2𝑐−𝑝+2𝑙𝑛+1 (𝑚ℎ) where 𝛾𝑛,𝑝 are as in Lemma 5.3, Bias(𝑓̂𝑁,𝑀 (𝑥, 𝑡)) } {𝑁 ∞ 𝑀 ∑ ∑ ∑ 𝑐𝑘 (𝑚ℎ)𝜙𝑘 (𝑡) 𝑆(𝑚, ℎ)(𝑥), 𝛾𝑛,𝑝 (𝑚ℎ)𝜙𝑝 (𝑡) + = ■ 𝑙=1 5.3.3 Constructing the approximations of 𝑓 (𝑚ℎ, 𝑡), 𝑔(𝑛ℎ, 𝑡) from the discrete random noise data 𝑚=−𝑀 Var(𝑓̂𝑁,𝑀 (𝑥, 𝑡)) = ‖2 ‖ ℎ4𝑞−2 , ‖I[0,𝑇 ] (𝑓̃𝑀 − 𝑓 )‖ 2 ≤ ℎ𝑇 𝛬2 (𝑀 + 1)−2𝛽 + 2𝐶sinc,𝑞 ‖𝐿 (R ) ‖ ‖2 ‖ 𝑔𝑀 − 𝑔)‖ 2 ≤ ℎ𝑇 𝛬2 (𝑀 + 1)−2𝛽 + 2𝐶sinc,𝑞 ℎ4𝑞−2 , ‖I[0,𝑇 ] (̃ ‖𝐿 (R ) ‖ where I[0,𝑇 ] = I[0,𝑇 ] (𝑡), 𝐶sinc,𝑞 is the constant in Lemma 5.2 and ∑ ∑ 𝑓̃𝑀 = 𝑓 (𝑚ℎ, 𝑡) 
𝑆(𝑚, ℎ)(𝑥), 𝑔̃𝑀 = 𝑔(𝑚ℎ, 𝑡) 𝑆(𝑚, ℎ)(𝑥) Bias(𝑓̂𝑁,𝑀 (𝑥, 𝑡)) [ ] = E 𝑓̂𝑁,𝑀 (𝑥, 𝑡) − 𝑓̃𝑀 (𝑥, 𝑡) } {𝑁 ∞ 𝑀 ∑[ ∑ ∑ ] 𝑐𝑘 (𝑚ℎ)𝜙𝑘 (𝑡) 𝑆(𝑚, ℎ)(𝑥) E(̂ 𝑐𝑝,𝑚 ) − 𝑐𝑝 (𝑚ℎ) 𝜙𝑝 (𝑡) + = Proof We first have ‖2 ‖ ‖2 ‖2 ‖ ‖ ‖I[0,𝑇 ] (𝑓̃𝑀 − 𝑓 )‖ 2 ≤ ‖I[0,𝑇 ] (𝑓̃𝑀 − 𝑓̃)‖ 2 + ‖I[0,𝑇 ] (𝑓̃ − 𝑓 )‖ 2 ‖𝐿 (R ) ‖ ‖𝐿 (R ) ‖𝐿 (R ) ‖ ‖ |𝑚|≥𝑀+1 ∑ 𝑚=−𝑀 𝑔(𝑚ℎ, 𝑡) 𝑆(𝑚, ℎ)(𝑥) |𝑚|≥𝑀+1 = 𝑀 ∑ { 𝑝=1 − 𝑚=−𝑀 As a result ‖ ‖2 ‖I[0,𝑇 ] (𝑓̃𝑀 − 𝑓̃)‖ 2 ‖ ‖𝐿 (R ) 𝑇 ∑ ℎ𝑇 =ℎ [𝑓 (𝑚ℎ, 𝑡)]2 d𝑡 = ∫0 |𝑚|≥𝑀+1 )2 ⎧ 𝑛−1 ( 𝑁 ⎪4 ∑ ∑ 𝑆 (𝑚, ℎ)(𝑥) ⎨ 𝜙𝑝 (𝑡𝑗 )𝜙𝑝 (𝑡) Var𝜖𝑗,𝑚 ⎪ 𝑛 𝑗=1 𝑝=1 𝑚=−𝑀 ⎩ )2 (𝑁 ⎫ ⎪ ∑ + 𝜙𝑝 (𝑡𝑛 )𝜙𝑝 (𝑡) Var𝜖𝑛,𝑚 ⎬ 𝑛2 𝑝=1 ⎪ ⎭ 𝑀 ∑ Proof Direct computation gives E(̂ 𝑐𝑝,𝑚 ) = 𝑐𝑝 (𝑚ℎ) − 𝛾𝑛,𝑝 (𝑚ℎ) and |𝑚|≤𝑀 The definition of 𝑓̃𝑀 , 𝑔̃𝑀 yields ∑ 𝑓̃− 𝑓̃𝑀 = 𝑓 (𝑚ℎ, 𝑡) 𝑆(𝑚, ℎ)(𝑥), 𝑔̃−̃ 𝑔𝑀 = 𝑘=𝑁+1 𝑝=1 and Lemma 5.5 Let 𝑓 and 𝑔 belong to 𝐶𝛼,𝛽,𝛬 as well as 𝑓 and 𝑔 satisfy (14) then |𝑚|≤𝑀 𝑘=𝑁+1 𝑁 ∑ ∞ ∑ 𝛾𝑛,𝑝 (𝑚ℎ)𝜙𝑝 (𝑡) + 𝑝=1 } 𝑐𝑘 (𝑚ℎ)𝜙𝑝 (𝑡) 𝑆(𝑚, ℎ)(𝑥) 𝑘=𝑁+1 To evaluate MISE, we compute Var(𝑓̂𝑁,𝑀 (𝑥, 𝑡)) Put ∑ |𝑚|≥𝑀+1 [ ∑ ] 𝑛−1 2∑ 𝜖 𝜙 (𝑡 ) + 𝜖𝑛,𝑚 𝜙𝑝 (𝑡𝑛 ) 𝑛 𝑗=1 𝑗,𝑚 𝑝 𝑗 𝑛 𝐻𝑝,𝑚 = 𝑐𝑝2 (𝑚ℎ) 𝑝≥1 We have Since 𝑓 ∈ 𝐶𝛼,𝛽,𝛬 as in Definition 2.3 we obtain ‖ ‖2 ‖I[0,𝑇 ] (𝑓̃𝑀 − 𝑓̃)‖ 2 ≤ ℎ𝑇 𝛬2 (𝑀 + 1)−2𝛽 ‖ ‖𝐿 (R ) On the other hand, Lemma 5.2 gives Var 𝐻𝑝,𝑚 𝜙𝑝 (𝑡) = 𝑝=1 ‖ ‖ ‖ ‖ ‖I[0,𝑇 ] (𝑓̃ − 𝑓 )‖ 2 ≤ ‖𝑓̃ − 𝑓 ‖ 2 ≤ 𝐶sinc,𝑞 ℎ2𝑞−1 ‖ ‖𝐿 (R ) ‖ ‖𝐿 (R ) Combining two latter inequalities completes the proof of Lemma 𝑁 ∑ + ■ 𝑛2 (𝑁 ∑ (𝑁 )2 𝑛−1 ∑ ∑ 𝜙𝑝 (𝑡𝑗 )𝜙𝑝 (𝑡) Var𝜖𝑗,𝑚 𝑛2 𝑗=1 𝑝=1 )2 𝜙𝑝 (𝑡𝑛 )𝜙𝑝 (𝑡) Var𝜖𝑛,𝑚 𝑝=1 Using the equality gives { Lemma 5.6 Let 𝑡 ∈ (0, 𝑇 ) and let random variables 𝜀𝑚,1 , … , 𝜀𝑚,𝑛 with 𝑚 = −𝑀, 𝑀 be mutually independent and 𝑔̂ as in definition (19) Denoting ( ) MSE 𝑔̂𝑁,𝑀 (𝑥, 𝑡) = E|̂ 𝑔𝑁,𝑀 (𝑥, 𝑡) − 𝑔̃𝑀 (𝑥, 𝑡)|2 , we have ( ) ( ) ( ) MSE 𝑔̂𝑁,𝑀 (𝑥, 𝑡) = Bias2 𝑔̂𝑁,𝑀 (𝑥, 𝑡) + Var 𝑔̂𝑁,𝑀 (𝑥, 𝑡) , Var(𝑓̂𝑁,𝑀 (𝑥, 𝑡)) = Var 𝑀 ∑ 𝑚=−𝑀 = 𝑀 ∑ 𝑚=−𝑀 28 [𝑁 ∑ ] } 𝐻𝑝,𝑚 𝜙𝑝 (𝑡) 𝑆(𝑚, ℎ)(𝑥) 𝑝=1 𝑆 (𝑚, ℎ)(𝑥)Var (𝑁 ∑ 𝑝=1 ) 𝐻𝑝,𝑚 𝜙𝑝 (𝑡) D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 { )2 ⎧ 𝑛−1 ( 𝑁 ⎪4 ∑ ∑ = 𝑆 (𝑚, ℎ)(𝑥) ⎨ 𝜙𝑝 (𝑡𝑗 )𝜙𝑝 (𝑡) Var𝜖𝑗,𝑚 ⎪ 𝑛 𝑗=1 𝑝=1 𝑚=−𝑀 ⎩ (𝑁 )2 ⎫ ⎪ ∑ 𝜙𝑝 (𝑡𝑛 )𝜙𝑝 (𝑡) Var𝜖𝑛,𝑚 ⎬ ■ + 𝑛2 𝑝=1 ⎪ ⎭ 𝑀 ∑ ∑ × 𝑁 ∑ ∞ ∑ |𝑚|−2𝛽 |𝑚|2𝛽 (𝑝 + 2𝑙𝑛)2𝛼 𝑐𝑝+2𝑙𝑛 (𝑚ℎ) 1≤|𝑚|≤𝑀 𝑝=1 𝑙=1 + 𝑁 ∑ ∞ ∑ (𝑝 + 2𝑙𝑛)2𝛼 𝑐𝑝+2𝑙𝑛 (0) } 𝑝=1 𝑙=1 + ℎ𝑇 𝑁 −2𝛼 { So,( Lemmas ) 5.7 and 5.6 give ( explicit) expressions for MSE Denoting MISE 𝑓̂𝑁,𝑀 = ∫R×(0,𝑇 ) MSE 𝑓̂𝑁,𝑀 (𝑥, 𝑡) d𝑥d𝑡 we obtain ( ) MISE 𝑓̂𝑁,𝑀 × 𝑀 ∑ ∞ ∑ (2𝑙)−2𝛼 𝑙=1 𝑁 ∑ ∞ ∑ |𝑚|−2𝛽 |𝑚|2𝛽 (−𝑝 + 2𝑙𝑛)2𝛼 𝑐−𝑝+2𝑙𝑛 (𝑚ℎ) 𝑚=−𝑀 𝑝=1 𝑙=1 } 𝑁 ∑ ∞ ∑ 2𝛼 + (−𝑝 + 2𝑙𝑛) 𝑐−𝑝+2𝑙𝑛 (0) √ ( )‖2 ( )‖2 ‖ ‖ ‖ ‖ ‖ ‖ = ‖I[0,𝑇 ] (𝑡)Bias 𝑓̂𝑁,𝑀 ‖ + ‖I[0,𝑇 ] (𝑡) Var 𝑓̂𝑁,𝑀 ‖ ‖ 2 ‖ ‖ 2 ‖ ‖𝐿 (R ) ‖ ‖𝐿 (R ) ‖ 𝑝=1 𝑙=1 { + ℎ𝑇 and ( ) MISE 𝑔̂𝑁,𝑀 ≤ ℎ𝑇 𝑁 √ ( )‖2 ( )‖2 ‖ ‖ ‖ ‖ ‖ ‖ = ‖I[0,𝑇 ] (𝑡)Bias 𝑔̂𝑁,𝑀 ‖ + ‖I[0,𝑇 ] (𝑡) Var 𝑔̂𝑁,𝑀 ‖ ‖ 2 ‖ ‖ 2 ‖ ‖𝐿 (R ) ‖ ‖𝐿 (R ) ‖ ∑ ∞ ∑ 2𝛽 |𝑚| 1≤|𝑚|≤𝑀 𝑘=𝑁 ∞ ∑ −2𝛼 −2𝛼 (2𝑙) 𝛬 ∞ ∑ |𝑚|−2𝛽 𝑘2𝛼 𝑘−2𝛼 𝑐𝑘2 (𝑚ℎ) + } 𝑐𝑘2 (0) 𝑘=𝑁+1 + ℎ𝑇 𝑁 −2𝛼 𝛬2 𝑙=1 ∞ ∑ (2𝑙)−2𝛼 + ℎ𝑁 −2𝛼 𝛬2 𝑙=1 Evaluating directly gives Therefore, we have two following lemmas useful for estimator of the MISE ‖ ‖2 ‖I[0,𝑇 ] Bias(𝑓̂𝑁,𝑀 )‖ 2 ≤ ℎ𝑇 𝑁 −2𝛼 𝛬2 ‖ ‖𝐿 (R ) ( ∞ ∑ ) (2𝑙)−2𝛼 + 𝑙=1 ( ( )) ∞ ≤ ℎ𝑇 𝑁 −2𝛼 𝛬2 + 2−2𝛼+1 + 𝑥−2𝛼 𝑑𝑥 ∫1 ) ( 𝛼2−2𝛼+2 −2𝛼 = ℎ𝑇 𝑁 𝛬 + 2𝛼 − ( ) Lemma 5.8 If Var 𝜀𝑗,𝑚 ≤ 𝜎22 , then ) ( ‖ ‖2 𝛼2−2𝛼+2 −2𝛼 ‖I[0,𝑇 ] Bias(̂ 𝑔𝑁,𝑀 )‖ 𝛬 1+ ‖ ‖ 2 ≤ ℎ𝑇 𝑁 2𝛼 − ‖ ‖𝐿 (R ) and And from Lemma 5.7 we have √ ‖ ‖2 ‖I[0,𝑇 ] Var(𝑓̂𝑁,𝑀 )‖ ̂ ‖ ‖ 2 = ∬ Var(𝑓𝑁,𝑀 )d𝑥d𝑡 ‖ ‖𝐿 (R ) R [ ] 𝑀 𝑛−1 2𝑀𝑇 ℎ𝜎12 𝑁 ∑ ∑ ℎ𝑇 𝜎1 𝜙𝑝 (𝑡𝑗 ) + 𝜙𝑝 (𝑡𝑛 ) ≤ ≤ 𝑚=−𝑀 𝑛 𝑛2 𝑗=1 𝑛2 √ 2𝑀ℎ𝜎22 𝑇 𝑁 ‖ ‖2 ‖I[0,𝑇 ] Var(̂ 𝑔𝑁,𝑀 )‖ ‖ ‖ 2 ≤ 𝑛 ‖ ‖𝐿 (R ) ( ) Lemma 5.9 If Var 𝜖𝑗,𝑚 ≤ 𝜎12 , then ( ) ‖ ‖2 𝛼2−2𝛼+2 −2𝛼 
‖I[0,𝑇 ] Bias(𝑓̂𝑁,𝑀 )‖ 𝛬 1+ ‖ ‖ 2 ≤ ℎ𝑇 𝑁 2𝛼 − ‖ ‖𝐿 (R ) ■ Combining Lemmas 5.6, 5.7 gives directly and √ 2𝑀ℎ𝜎12 𝑇 𝑁 ‖ ‖2 ‖I[0,𝑇 ] Var(𝑓̂𝑁,𝑀 )‖ ‖ ‖ 2 ≤ 𝑛 ‖ ‖𝐿 (R ) Lemma 5.10 Proof We have ‖2 ‖ ‖I[0,𝑇 ] Bias(𝑓̂𝑁,𝑀 )‖ 2 ‖𝐿 (R ) ‖ ]2 [𝑁 ∞ 𝑀 𝑇 ∑ ∑ ∑ 𝑐𝑘 (𝑚ℎ)𝜙𝑘 (𝑡) d𝑡 𝛾𝑛,𝑝 𝜙𝑝 (𝑡) + =ℎ ∫0 𝑚=−𝑀 𝑝=1 𝑘=𝑁+1 [ ] 𝑀 𝑁 ∞ ∑ ℎ𝑇 ∑ ∑ = 𝛾𝑛,𝑝 + 𝑐𝑘2 (𝑚ℎ) 𝑚=−𝑀 𝑝=1 𝑘=𝑁+1 and ‖ E ‖I[0,𝑇 ] (𝑓̂𝑁,𝑀 ‖ ‖2 ‖ 𝑔𝑁,𝑀 − 𝑔̃𝑀 )‖ 2 ≤ E ‖I[0,𝑇 ] (̂ ‖𝐿 (R ) ‖ ‖ ‖2 ‖ ‖2 E ‖𝐹 − 𝐹̂𝑁,𝑀 ‖ 2 = E ‖𝑓 − 𝑓̂𝑁,𝑀 ‖ 2 ‖𝐿 (R ) ‖ ‖𝐿 (R ) 4𝜋 ‖ ‖ ‖2 ‖ ‖2 ̂ = E ‖I[0,𝑇 ] (𝑓 − 𝑓𝑁,𝑀 )‖ 2 + E ‖I(𝑇 ,∞) 𝑓 ‖ 2 ‖ ‖𝐿 (R ) ‖ ‖𝐿 (R ) Hence, the theorem is deduced directly from (15), Lemmas 5.10 and 5.5 ■ ‖ ‖2 ‖I[0,𝑇 ] Bias(𝑓̂𝑁,𝑀 )‖ 2 ‖ ‖𝐿 (R ) ] {𝑁 [∞ 𝑀 ∞ ∑ ∑ ∑ ∑ ℎ𝑇 2𝑙 −2𝛼 2𝛼 ≤ (−1) (𝑝 + 2𝑙𝑛) (𝑝 + 2𝑙𝑛) 2𝑐𝑝+2𝑙𝑛 (𝑚ℎ) 𝑚=−𝑀 𝑝=1 𝑙=1 𝑙=1 [∞ ]} 𝑁 ∞ ∑ ℎ𝑇 ∑ ∑ + (−1)2𝑙 (−𝑝 + 2𝑙𝑛)−2𝛼 (−𝑝 + 2𝑙𝑛)2𝛼 2𝑐−𝑝+2𝑙𝑛 (𝑚ℎ) 𝑝=1 𝑙=1 𝑙=1 ℎ𝑇 + ( ) 𝛼2−2𝛼+2 2𝑀ℎ𝑇 𝑁𝜎 1+ ℎ𝑇 𝑁 −2𝛼 𝛬2 + 2𝛼 − 𝑛 Proof of Theorem 3.2 We have From Lemma 5.3 and assuming that 𝑀 < 𝑛, 𝛼 > 12 , we have ∞ ∑ Let Assumptions (8) and (16) hold Then ) ( 2𝑀ℎ𝑇 𝑁𝜎 𝛼2−2𝛼+2 ‖2 ℎ𝑇 𝑁 −2𝛼 𝛬2 + , − 𝑓̃𝑀 )‖ 2 ≤ + ‖𝐿 (R ) 2𝛼 − 𝑛 5.4 Proof of Theorem 3.3 We denote ( e𝜆(2−𝑦0 ) − e−𝜆(2−𝑦0 ) e𝜆 − e−𝜆 ) e𝜆(1−𝑦0 ) − e−𝜆(1−𝑦0 ) −𝐺(𝑟, 𝑠) ei(𝑟𝑥+𝑠𝑡) d𝑟d𝑠 e𝜆 − e−𝜆 𝑣𝜖 (𝑥, 𝑡) = 𝑐𝑘2 (𝑚ℎ) 𝑘=𝑁+1 Hence ‖ ‖2 ‖I[0,𝑇 ] Bias(𝑓̂𝑁,𝑀 )‖ 2 ‖ ‖𝐿 (R ) ∞ ∑ ≤ ℎ𝑇 𝑁 −2𝛼 (2𝑙)−2𝛼 4𝜋 ∬𝐷𝜖 𝐹 (𝑟, 𝑠) (67) 𝑓𝑡 ̂𝑁,𝑀 = 𝑔̂𝑓 𝑡 be the approximation for 𝐹 Let 𝐹̂𝑁,𝑀 = 𝑓̂𝑁,𝑀 and 𝐺 𝑁,𝑀 and 𝐺, where 𝑓̂𝑁,𝑀 and 𝑔̂𝑁,𝑀 are defined in (18) and (20) Using the estimators, we can obtain the approximation 𝑣̂f t of 𝑣f t Suggested from 𝑙=1 29 D.D Trong, T.Q Viet, V.D Khoa et al Computers and Mathematics with Applications 86 (2021) 16–32 Using the lemma, we can finish the proof of the main theorem As a result, (71) becomes the formula (25), we define ( e𝜆(2−𝑦0 ) − e−𝜆(2−𝑦0 ) 𝑣̂(𝑥, 𝑡) = 𝐹̂𝑁,𝑀 (𝑟, 𝑠) e𝜆 − e−𝜆 4𝜋 ∬𝐷𝜖 ) 𝜆(1−𝑦 ) −𝜆(1−𝑦 ) 0 −e ̂𝑁,𝑀 (𝑟, 𝑠) e −𝐺 ei(𝑟𝑥+𝑠𝑡) d𝑟d𝑠 e𝜆 − e−𝜆 E ∬𝐷𝜖 ‖̂ ‖2 + 2𝐵𝜖 E ‖𝐺 − 𝐺‖ 2 , ‖ 𝑁,𝑀 ‖𝐿 (R ) We have Using the MISE bound in Theorem 3.2 gives ‖2 ‖ ‖2 ‖ ‖2 MISE = E ‖ ‖𝑣 − 𝑣̂‖ ≤ ‖𝑣 − 𝑣𝜖 ‖ + 2E ‖𝑣𝜖 − 𝑣̂‖ (68) E By Parseval’s equality, the first term can be rewritten as ∬R2 ∖𝐷𝜖 (𝑟2 + 𝑠2 )𝜃 |𝑣f t (𝑟, 𝑠)| d𝑟d𝑠 (𝑟2 + 𝑠2 )𝜃 E ∬𝐷𝜖 |𝑣f𝜖t − 𝑣̂f t | d𝑟d𝑠 ≤ 𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁)32𝜋 ( )3−𝑦0 , 𝜖 Combining the latter inequality with (68), (69), (70), we can obtain the desired inequality (70) ‖ ‖2 ≤ ‖(𝑟 + 𝑠2 )𝜃∕2 𝑣f t ‖ ‖ ‖ min{1, 𝜅 𝜃 }𝑏2𝜃 𝜖 ‖ ‖2 |𝑣f𝜖t − 𝑣̂f t | d𝑟d𝑠 ≤ 2𝐴𝜖 E ‖𝐹 − 𝐹̂𝑁,𝑀 ‖ 2 ‖𝐿 (R ) ‖ where 𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁) is defined in (22) Notice that if 𝜖 is sufficiently ( )3−𝑦0 Hence small, then Lemma 5.11 implies 𝐴𝜖 , 𝐵𝜖 ≤ 4𝜖 2 |𝑣f t (𝑟, 𝑠)| d𝑟d𝑠 ≤ ∬𝐷𝜖 ‖̂ ‖2 + 2𝐵𝜖 E ‖𝐺 − 𝐺‖ 2 ‖ 𝑁,𝑀 ‖𝐿 (R ) ( ) ≤ 8𝜋 𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁) 𝐴𝜖 + 𝐵𝜖 , ‖2 ‖𝑣 − 𝑣𝜖 ‖2 = ‖ |𝑣f t (𝑟, 𝑠)| d𝑟d𝑠 (69) ‖𝑣f t − 𝑣f𝜖t ‖ = ‖ ‖ ‖ ‖ 4𝜋 4𝜋 ∬R2 ∖𝐷𝜖 ( )2 Note that if 𝑏𝜖 > then 𝑟2 + 𝑠2 ≥ 𝑟4 +𝑠2 for (𝑟, 𝑠) ∈ R2 ∖𝐷𝜖 which gives ( )𝜃 𝜃∕2 𝑟 +𝑠 ≥ (𝑟 + 𝑠 ) Moreover, we have (𝑟4 + 𝑠2 )𝜃∕2 > min{1, 𝜅 𝜃 }𝑏2𝜃 𝜖 2 ft 𝜃 on R ⧵𝐷𝜖 Hence (𝑟2 +𝑠2 )𝜃 ≥ min{1, 𝜅 𝜃 }𝑏2𝜃 𝜖 on R ⧵𝐷𝜖 Since 𝑣 ∈ 𝐻 (R ) where 𝑣 is as in (67), we obtain ∬R2 ∖𝐷𝜖 ‖ ‖2 |𝑣f𝜖t − 𝑣̂f t | d𝑟d𝑠 ≤ 2𝐴𝜖 E ‖𝐹 − 𝐹̂𝑁,𝑀 ‖ 2 ‖ ‖𝐿 (R ) 5.5 Proof of Theorem 3.4 For the other term in (68), we note that We first prove the stated minimum value of 𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁) ‖2 E‖ ‖𝑣𝜖 − 𝑣̂‖𝐿2 (R2 ) E |𝑣f t − 𝑣̂f t | d𝑟d𝑠 4𝜋 ∬𝐷𝜖 𝜖 ( ) ̂𝑁,𝑀 − 𝐺|2 |𝐷𝜆 |2 d𝑟d𝑠 E |𝐹 − 𝐹̂𝑁,𝑀 | |𝐶𝜆 |2 + |𝐺 ≤ ∬ 2𝜋 𝐷𝜖 Proof We have = 𝜀0 (ℎ, 𝑇 , 𝑀, 𝑁) = 𝐴𝜀 (71) |𝐷𝜆 (𝑟, 𝑠)|2 ≤ 𝐵𝜖 for all (𝑟, 𝑠) ∈ 𝐷𝜖 , where The equality holds for 𝑎11 = ⋯ = 𝑎55 Hence, for 𝑏𝑖 ≥ 0, 𝑖 = 1, … , 5, we |𝐶𝜆 (𝑟, 𝑠)|2 ∶ (𝑟, 𝑠) ∈ 𝐷𝜖 , √√ put 𝑎𝑖 = (𝑝𝑖 𝑏𝑖 ) }} 
5.4 Proof of Theorem 3.3

We denote
$$v_\epsilon(x,t)=\frac{1}{4\pi^2}\iint_{D_\epsilon}\Bigl[F(r,s)\,\frac{e^{\lambda(2-y_0)}-e^{-\lambda(2-y_0)}}{e^{\lambda}-e^{-\lambda}}-G(r,s)\,\frac{e^{\lambda(1-y_0)}-e^{-\lambda(1-y_0)}}{e^{\lambda}-e^{-\lambda}}\Bigr]e^{\mathrm{i}(rx+st)}\,\mathrm{d}r\,\mathrm{d}s. \qquad(67)$$
Let $\hat F_{N,M}=\hat f^{\mathrm{ft}}_{N,M}$ and $\hat G_{N,M}=\hat g^{\mathrm{ft}}_{N,M}$ be the approximations for $F$ and $G$, where $\hat f_{N,M}$ and $\hat g_{N,M}$ are defined in (18) and (20). Using the estimators, we can obtain the approximation $\hat v^{\mathrm{ft}}$ of $v^{\mathrm{ft}}$. Suggested from the formula (25), we define
$$\hat v(x,t)=\frac{1}{4\pi^2}\iint_{D_\epsilon}\Bigl[\hat F_{N,M}(r,s)\,\frac{e^{\lambda(2-y_0)}-e^{-\lambda(2-y_0)}}{e^{\lambda}-e^{-\lambda}}-\hat G_{N,M}(r,s)\,\frac{e^{\lambda(1-y_0)}-e^{-\lambda(1-y_0)}}{e^{\lambda}-e^{-\lambda}}\Bigr]e^{\mathrm{i}(rx+st)}\,\mathrm{d}r\,\mathrm{d}s.$$
We have
$$\mathrm{MISE}=\mathbb{E}\|v-\hat v\|^2\le 2\|v-v_\epsilon\|^2+2\,\mathbb{E}\|v_\epsilon-\hat v\|^2. \qquad(68)$$
By Parseval's equality, the first term can be rewritten as
$$\|v-v_\epsilon\|^2=\frac{1}{4\pi^2}\bigl\|v^{\mathrm{ft}}-v_\epsilon^{\mathrm{ft}}\bigr\|^2=\frac{1}{4\pi^2}\iint_{\mathbb{R}^2\setminus D_\epsilon}\bigl|v^{\mathrm{ft}}(r,s)\bigr|^2\,\mathrm{d}r\,\mathrm{d}s. \qquad(69)$$
Note that if $b_\epsilon>1$ then $(r^2+s^2)^2\ge r^4+s^2$ for $(r,s)\in\mathbb{R}^2\setminus D_\epsilon$, which gives $(r^2+s^2)^{\theta}\ge(r^4+s^2)^{\theta/2}$. Moreover, we have $(r^4+s^2)^{\theta/2}>\min\{1,\kappa^{\theta}\}\,b_\epsilon^{2\theta}$ on $\mathbb{R}^2\setminus D_\epsilon$. Hence $(r^2+s^2)^{\theta}\ge\min\{1,\kappa^{\theta}\}\,b_\epsilon^{2\theta}$ on $\mathbb{R}^2\setminus D_\epsilon$. Since $v\in H^{\theta}(\mathbb{R}^2)$ and $v_\epsilon$ is as in (67), we obtain
$$\iint_{\mathbb{R}^2\setminus D_\epsilon}\bigl|v^{\mathrm{ft}}(r,s)\bigr|^2\,\mathrm{d}r\,\mathrm{d}s\le\frac{\bigl\|(r^2+s^2)^{\theta/2}v^{\mathrm{ft}}\bigr\|^2}{\min\{1,\kappa^{\theta}\}\,b_\epsilon^{2\theta}}. \qquad(70)$$
For the other term in (68), we note that
$$\mathbb{E}\|v_\epsilon-\hat v\|^2_{L^2(\mathbb{R}^2)}=\frac{1}{4\pi^2}\,\mathbb{E}\iint_{D_\epsilon}\bigl|v_\epsilon^{\mathrm{ft}}-\hat v^{\mathrm{ft}}\bigr|^2\,\mathrm{d}r\,\mathrm{d}s
\le\frac{1}{2\pi^2}\iint_{D_\epsilon}\Bigl(\mathbb{E}\bigl|F-\hat F_{N,M}\bigr|^2|C_\lambda|^2+\mathbb{E}\bigl|\hat G_{N,M}-G\bigr|^2|D_\lambda|^2\Bigr)\mathrm{d}r\,\mathrm{d}s. \qquad(71)$$
To find a bound for the right-hand side of the latter inequality we need the following lemma.

Lemma 5.11. Letting $C_\lambda$ and $D_\lambda$ be defined as in (11) and (12), we have the following upper bounds for $C_\lambda$ and $D_\lambda$ on $D_\epsilon$:
$$|C_\lambda(r,s)|^2\le A_\epsilon,\qquad |D_\lambda(r,s)|^2\le B_\epsilon\quad\text{for all }(r,s)\in D_\epsilon,$$
where
$$A_\epsilon=\max\Bigl\{2\Bigl(\frac{4}{\epsilon}\Bigr)^{3-y_0},\ \sup\bigl\{|C_\lambda(r,s)|^2:\ (r,s)\in D_\epsilon,\ A_0\le 1\bigr\}\Bigr\},\qquad
B_\epsilon=\max\Bigl\{2\Bigl(\frac{4}{\epsilon}\Bigr)^{3-y_0},\ \sup\bigl\{|D_\lambda(r,s)|^2:\ (r,s)\in D_\epsilon,\ A_0\le 1\bigr\}\Bigr\},$$
and $A_0=\frac{1}{\sqrt{2}}\sqrt{\sqrt{r^4+s^2/\kappa^2}+r^2}$.

Proof. Writing $\lambda=A_0+\mathrm{i}B_0$, for $(r,s)\in D_\epsilon\cap\{(r,s)\in\mathbb{R}^2:\ A_0>1\}$ we have
$$|C_\lambda(r,s)|^2=\frac{e^{2A_0(2-y_0)}+e^{-2A_0(2-y_0)}-2\cos\bigl(2B_0(2-y_0)\bigr)}{e^{2A_0}+e^{-2A_0}-2\cos(2B_0)}
\le 2e^{2A_0(2-y_0)}e^{-2A_0}\le 2e^{2A_0(3-y_0)}=2e^{\sqrt{2}\,(3-y_0)\sqrt{\sqrt{r^4+s^2/\kappa^2}+r^2}}\le 2\Bigl(\frac{4}{\epsilon}\Bigr)^{3-y_0},$$
where the last inequality follows from the definition of $D_\epsilon$. Otherwise, as $(r,s)\in D_\epsilon\cap\{(r,s)\in\mathbb{R}^2:\ A_0\le 1\}$, the definition of $A_\epsilon$ implies $|C_\lambda(r,s)|^2\le A_\epsilon$. Similarly, we also obtain an upper bound for $|D_\lambda(r,s)|^2$ on $D_\epsilon$. $\blacksquare$

Using the lemma, we can finish the proof of the main theorem. As a result, (71) becomes
$$\mathbb{E}\iint_{D_\epsilon}\bigl|v_\epsilon^{\mathrm{ft}}-\hat v^{\mathrm{ft}}\bigr|^2\,\mathrm{d}r\,\mathrm{d}s\le 2A_\epsilon\,\mathbb{E}\bigl\|F-\hat F_{N,M}\bigr\|^2_{L^2(\mathbb{R}^2)}+2B_\epsilon\,\mathbb{E}\bigl\|\hat G_{N,M}-G\bigr\|^2_{L^2(\mathbb{R}^2)}.$$
Using the MISE bound in Theorem 3.2 gives
$$\mathbb{E}\iint_{D_\epsilon}\bigl|v_\epsilon^{\mathrm{ft}}-\hat v^{\mathrm{ft}}\bigr|^2\,\mathrm{d}r\,\mathrm{d}s\le 8\pi^2\,\varepsilon_0(h,T,M,N)\bigl(A_\epsilon+B_\epsilon\bigr),$$
where $\varepsilon_0(h,T,M,N)$ is defined in (22). Notice that if $\epsilon$ is sufficiently small, then Lemma 5.11 implies $A_\epsilon,B_\epsilon\le 2\bigl(\tfrac{4}{\epsilon}\bigr)^{3-y_0}$. Hence
$$\mathbb{E}\iint_{D_\epsilon}\bigl|v_\epsilon^{\mathrm{ft}}-\hat v^{\mathrm{ft}}\bigr|^2\,\mathrm{d}r\,\mathrm{d}s\le 32\pi^2\,\varepsilon_0(h,T,M,N)\Bigl(\frac{4}{\epsilon}\Bigr)^{3-y_0}.$$
Combining the latter inequality with (68), (69) and (70), we can obtain the desired inequality. $\blacksquare$

5.5 Proof of Theorem 3.4

We first prove the stated minimum value of $\varepsilon_0(h,T,M,N)$.

Proof. We have
$$\varepsilon_0(h,T,M,N)=A_\varepsilon\frac{T^{2\tau+1}}{N^{2\alpha}h^{2\beta'-1}}+B_\varepsilon\frac{T^{2\tau+1}}{M^{2\beta}h^{2\beta'-1}}+C_\varepsilon\frac{hTMN}{n}+D_\varepsilon h^{4q-2}+\frac{E}{T^{\rho}},$$
where $A_\varepsilon$, $B_\varepsilon$, $C_\varepsilon$, $D_\varepsilon$, $E$ are defined in (15) and (23). To prove the theorem, a Young-type inequality is necessary. For $p_i>1$, $i=1,\dots,5$, satisfying $\sum_{i=1}^{5}\frac{1}{p_i}=1$, and $a_i\ge 0$, using the Jensen inequality gives
$$\sum_{i=1}^{5}\frac{a_i^{p_i}}{p_i}=\sum_{i=1}^{5}\frac{\exp(p_i\ln a_i)}{p_i}\ge\exp\Bigl(\sum_{i=1}^{5}\frac{p_i\ln a_i}{p_i}\Bigr)=\prod_{i=1}^{5}a_i.$$
The equality holds for $a_1^{p_1}=\dots=a_5^{p_5}$. Hence, for $b_i\ge 0$, $i=1,\dots,5$, we put $a_i=(p_ib_i)^{1/p_i}$ and obtain
$$\sum_{i=1}^{5}b_i\ge\prod_{i=1}^{5}(p_ib_i)^{1/p_i}.$$
The inequality will be used in the following part of the proof. We will find $p_i$'s such that $\varepsilon_0(h,T,M,N)$ attains its minimum. Using the inequality gives
$$\varepsilon_0\ge\Bigl(p_1A_\varepsilon\frac{T^{2\tau+1}}{N^{2\alpha}h^{2\beta'-1}}\Bigr)^{\frac{1}{p_1}}\Bigl(p_2B_\varepsilon\frac{T^{2\tau+1}}{M^{2\beta}h^{2\beta'-1}}\Bigr)^{\frac{1}{p_2}}\Bigl(p_3C_\varepsilon\frac{hTMN}{n}\Bigr)^{\frac{1}{p_3}}\bigl(p_4D_\varepsilon h^{4q-2}\bigr)^{\frac{1}{p_4}}\Bigl(p_5\frac{E}{T^{\rho}}\Bigr)^{\frac{1}{p_5}}.$$
We choose $p_i>1$ such that the variables $h$, $M$, $N$, $T$ are deleted in the latter quantity, i.e.,
$$\sum_{i=1}^{5}\frac{1}{p_i}=1,\qquad \frac{2\alpha}{p_1}=\frac{1}{p_3},\qquad \frac{2\beta}{p_2}=\frac{1}{p_3}, \qquad(72)$$
$$(2\tau+1)\Bigl(\frac{1}{p_1}+\frac{1}{p_2}\Bigr)+\frac{1}{p_3}=\frac{\rho}{p_5}, \qquad(73)$$
$$(2\beta'-1)\Bigl(\frac{1}{p_1}+\frac{1}{p_2}\Bigr)-\frac{1}{p_3}=\frac{4q-2}{p_4}. \qquad(74)$$
With these $p_i$, we obtain
$$\varepsilon_0\ge\bigl(p_1A_\varepsilon\bigr)^{\frac{1}{p_1}}\bigl(p_2B_\varepsilon\bigr)^{\frac{1}{p_2}}\Bigl(\frac{p_3C_\varepsilon}{n}\Bigr)^{\frac{1}{p_3}}\bigl(p_4D_\varepsilon\bigr)^{\frac{1}{p_4}}\bigl(p_5E\bigr)^{\frac{1}{p_5}}. \qquad\blacksquare$$
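The lower bound above is an instance of the weighted Young (arithmetic-geometric mean) inequality $\sum_{i=1}^{5}b_i\ge\prod_{i=1}^{5}(p_ib_i)^{1/p_i}$ for $\sum_i 1/p_i=1$, with equality exactly when the products $p_ib_i$ coincide; this is what the balancing choice of the regularization parameters enforces. The short Python check below illustrates the inequality and its equality case; the weights and the values of $b_i$ are arbitrary illustrative choices, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def young_sides(b, p):
    """Left and right side of  sum_i b_i >= prod_i (p_i*b_i)**(1/p_i)  for sum_i 1/p_i = 1."""
    return float(np.sum(b)), float(np.prod((p * b) ** (1.0 / p)))

# Conjugate-type weights: the reciprocals 1/p_i sum to one.
w = np.array([0.3, 0.25, 0.2, 0.15, 0.1])
p = 1.0 / w

# For random nonnegative b_i the inequality holds, in general with a strict gap.
for _ in range(3):
    b = rng.uniform(0.1, 5.0, size=5)
    lhs, rhs = young_sides(b, p)
    print(f"sum = {lhs:.4f}  >=  product = {rhs:.4f}")

# Equality case: choose b_i so that every product p_i*b_i equals the same constant.
c = 2.0
b_balanced = c / p
lhs, rhs = young_sides(b_balanced, p)
print(f"balanced: sum = {lhs:.4f}, product = {rhs:.4f}")   # both equal c
```

In the proof above, the role of the $b_i$ is played by the five terms of $\varepsilon_0(h,T,M,N)$, so forcing the products $p_ib_i$ to coincide is exactly the balancing system used below to determine $T$, $N$, $M$ and $h$.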
We calculate $p_i$. From (72) we obtain
$$\frac{1}{p_1}=\frac{1}{2\alpha p_3},\qquad \frac{1}{p_2}=\frac{1}{2\beta p_3}. \qquad(75)$$
Substituting (75) into (73) yields
$$\frac{1}{p_5}=\frac{1}{p_3\rho}\Bigl(\frac{2\tau+1}{2\alpha}+\frac{2\tau+1}{2\beta}+1\Bigr). \qquad(76)$$
Similarly, using (74) gives
$$\frac{1}{p_4}=\frac{1}{p_3(4q-2)}\Bigl(\frac{2\beta'-1}{2\alpha}+\frac{2\beta'-1}{2\beta}-1\Bigr). \qquad(77)$$
Combining (75), (76), (77) and (72) yields
$$p_3=\frac{1}{2\alpha}+\frac{1}{2\beta}+1+\frac{1}{\rho}\Bigl(\frac{2\tau+1}{2\alpha}+\frac{2\tau+1}{2\beta}+1\Bigr)+\frac{1}{4q-2}\Bigl(\frac{2\beta'-1}{2\alpha}+\frac{2\beta'-1}{2\beta}-1\Bigr).$$
Finally, we calculate the regularization parameters. We note that
$$p_1A_\varepsilon\frac{T^{2\tau+1}}{N^{2\alpha}h^{2\beta'-1}}=p_2B_\varepsilon\frac{T^{2\tau+1}}{M^{2\beta}h^{2\beta'-1}}=p_3C_\varepsilon\frac{hTMN}{n}=p_4D_\varepsilon h^{4q-2}=p_5\frac{E}{T^{\rho}}.$$
It follows that
$$T=\Bigl(\frac{p_5E}{p_4D_\varepsilon}\Bigr)^{\frac{1}{\rho}}h^{-\frac{4q-2}{\rho}},\qquad
N=\Bigl(\frac{p_1A_\varepsilon}{p_4D_\varepsilon}\Bigr)^{\frac{1}{2\alpha}}\Bigl(\frac{p_5E}{p_4D_\varepsilon}\Bigr)^{\frac{2\tau+1}{2\alpha\rho}}h^{-\frac{(2\beta'-1)\rho+(4q-2)(\rho+2\tau+1)}{2\alpha\rho}},\qquad
M=\Bigl(\frac{p_2B_\varepsilon}{p_4D_\varepsilon}\Bigr)^{\frac{1}{2\beta}}\Bigl(\frac{p_5E}{p_4D_\varepsilon}\Bigr)^{\frac{2\tau+1}{2\beta\rho}}h^{-\frac{(2\beta'-1)\rho+(4q-2)(\rho+2\tau+1)}{2\beta\rho}}.$$
Using the equality $p_3C_\varepsilon\frac{hTMN}{n}=p_4D_\varepsilon h^{4q-2}$, we obtain, in view of the formulas of $T$, $N$, $M$,
$$n=\frac{p_3C_\varepsilon}{p_4D_\varepsilon}\Bigl(\frac{p_1A_\varepsilon}{p_4D_\varepsilon}\Bigr)^{\frac{1}{2\alpha}}\Bigl(\frac{p_2B_\varepsilon}{p_4D_\varepsilon}\Bigr)^{\frac{1}{2\beta}}\Bigl(\frac{p_5E}{p_4D_\varepsilon}\Bigr)^{\frac{1}{\rho}+\frac{2\tau+1}{2\rho}\bigl(\frac{1}{\alpha}+\frac{1}{\beta}\bigr)}h^{-\bigl[4q-3+\frac{4q-2}{\rho}+\frac{(2\beta'-1)\rho+(4q-2)(\rho+2\tau+1)}{2\rho}\bigl(\frac{1}{\alpha}+\frac{1}{\beta}\bigr)\bigr]},$$
which determines $h$ (and hence $T$, $N$, $M$) in terms of $n$.

Proof of Theorem 3.4. We consider the case $\Lambda^2=C_\Lambda T^{2\tau}/h^{2\beta'}$ for $C_\Lambda,\tau>0$. As proved,
$$\varepsilon_0(h_n,T_n,M_n,N_n)=O(n^{-\mu}).$$
Actually, we can see that $\iint_{\mathbb{R}^2\setminus D_\epsilon}|v^{\mathrm{ft}}(r,s)|^2\,\mathrm{d}r\,\mathrm{d}s\to 0$ as $\epsilon\to 0$. Combining Theorem 3.3 and (70), we obtain
$$\mathbb{E}\iint_{\mathbb{R}^2}\bigl|v^{\mathrm{ft}}-\hat v^{\mathrm{ft}}\bigr|^2\,\mathrm{d}r\,\mathrm{d}s\le 32\pi^2\Bigl(\frac{4}{\epsilon}\Bigr)^{3-y_0}\varepsilon_0(h,T,M,N)+\frac{\bigl\|(r^2+s^2)^{\theta/2}v^{\mathrm{ft}}\bigr\|^2}{\min\{1,\kappa^{\theta}\}\,b_\epsilon^{2\theta}}
\le 32\pi^2\Bigl(\frac{4}{\epsilon}\Bigr)^{3-y_0}\varepsilon_0(h,T,M,N)+\frac{K_{v,\theta}}{\min\{1,\kappa^{\theta}\}\,b_\epsilon^{2\theta}},$$
where we recall that $b_\epsilon$ is defined in terms of $\ln(4/\epsilon)$. Now we choose $\epsilon=n^{-\mu/[2(3-y_0)]}$ and obtain
$$32\pi^2\Bigl(\frac{4}{\epsilon}\Bigr)^{3-y_0}\varepsilon_0(h_n,T_n,M_n,N_n)+\frac{K_{v,\theta}}{\min\{1,\kappa^{\theta}\}\,b_{\epsilon_n}^{2\theta}}\le C\Bigl(n^{-\mu/2}+\frac{C'}{\ln^{2\theta}n}\Bigr)\le\frac{C''}{\ln^{2\theta}n}.$$
Hence, we have the desired upper bound of the error. $\blacksquare$

5.6 Verifying the conditions (14)–(16) for the numerical example

To guarantee that $f$ and $g$ in (58) are given properly, we have to check if they satisfy the conditions (14)–(16). For simplicity of presentation, we consider $f$ and $g$ for the case $Q=\kappa=1$, in which we have
$$f_1^{\mathrm{ft}}(r,t)=\int_{-\infty}^{\infty}f(x,t)e^{-\mathrm{i}rx}\,\mathrm{d}x=\frac{2\sqrt{\pi}}{\sqrt{t}}\,e^{-\frac{1}{4t}-r^2t},\qquad
g_1^{\mathrm{ft}}(r,t)=\int_{-\infty}^{\infty}g(x,t)e^{-\mathrm{i}rx}\,\mathrm{d}x=\frac{2\sqrt{\pi}}{\sqrt{t}}\,e^{-\frac{1}{t}-r^2t}.$$
Since $f_1^{\mathrm{ft}}$ and $g_1^{\mathrm{ft}}$ are similar, only the proof for $f_1^{\mathrm{ft}}$ is presented. We first consider the condition (14) for an arbitrary admissible $q$. We evaluate $(1+r^2)^{q}f_1^{\mathrm{ft}}(r,t)$ in $L^2(\mathbb{R}^2)$. Indeed,
$$\int_0^{\infty}\int_{-\infty}^{\infty}(1+r^2)^{2q}\,\frac{4\pi}{t}\,e^{-\frac{1}{2t}-2r^2t}\,\mathrm{d}r\,\mathrm{d}t
=8\pi\int_0^{\infty}\int_0^{\infty}(1+r^2)^{2q}\,\frac{1}{t}\,e^{-\frac{1}{2t}-2r^2t}\,\mathrm{d}r\,\mathrm{d}t
=8\pi\int_0^{\infty}\int_0^{\infty}\Bigl(1+\frac{\tau^2}{t}\Bigr)^{2q}\frac{1}{t\sqrt{t}}\,e^{-\frac{1}{2t}-2\tau^2}\,\mathrm{d}\tau\,\mathrm{d}t$$
$$\le 8\pi\int_0^{\infty}\int_0^{\infty}\Bigl(1+\frac{1}{t}\Bigr)^{2q}\bigl(1+\tau^2\bigr)^{2q}\frac{1}{t\sqrt{t}}\,e^{-\frac{1}{2t}-2\tau^2}\,\mathrm{d}\tau\,\mathrm{d}t
=8\pi\Bigl(\int_0^{\infty}\Bigl(1+\frac{1}{t}\Bigr)^{2q}\frac{e^{-\frac{1}{2t}}}{t\sqrt{t}}\,\mathrm{d}t\Bigr)\Bigl(\int_0^{\infty}e^{-2\tau^2}\bigl(1+\tau^2\bigr)^{2q}\,\mathrm{d}\tau\Bigr)<\infty.$$
We verify the condition (14) for $\partial f_1^{\mathrm{ft}}/\partial r$. We have $\partial f_1^{\mathrm{ft}}/\partial r=-4r\sqrt{\pi t}\,e^{-\frac{1}{4t}-r^2t}$. Hence, for every $p>0$, putting $C_p=\sup_{z\ge 0}z^{p}e^{-z}$ we obtain
$$\int_0^{\infty}\int_{|r|\ge 1}(1+r^2)^{2q}\Bigl|\frac{\partial f_1^{\mathrm{ft}}}{\partial r}\Bigr|^2\mathrm{d}r\,\mathrm{d}t
=\int_0^{\infty}\int_{|r|\ge 1}16\pi r^2t\,(1+r^2)^{2q}\,e^{-\frac{1}{2t}-2r^2t}\,\mathrm{d}r\,\mathrm{d}t
\le 16\pi C_{2q+2}\int_0^{\infty}\int_{|r|\ge 1}r^2t\,(1+r^2)^{2q}\,e^{-\frac{1}{2t}}\bigl(2r^2t\bigr)^{-(2q+2)}\,\mathrm{d}r\,\mathrm{d}t$$
$$=16\cdot 2^{-(2q+2)}\pi C_{2q+2}\int_0^{\infty}e^{-\frac{1}{2t}}\,t^{-(2q+1)}\,\mathrm{d}t\int_{|r|\ge 1}r^2(1+r^2)^{2q}\bigl(r^2\bigr)^{-(2q+2)}\,\mathrm{d}r<\infty.$$
We verify the condition (15). We have
$$\int_{\mathbb{R}}\int_T^{\infty}|f(x,t)|^2\,\mathrm{d}x\,\mathrm{d}t
=\int_{\mathbb{R}}\int_T^{\infty}\frac{1}{t^2}\,e^{\frac{-2x^2-2}{4t}}\,\mathrm{d}x\,\mathrm{d}t
=\int_T^{\infty}\frac{e^{-\frac{1}{2t}}}{t^2}\int_{\mathbb{R}}e^{-\frac{u^2}{2}}\sqrt{t}\,\mathrm{d}u\,\mathrm{d}t
\le\sqrt{2\pi}\int_T^{\infty}t^{-3/2}\,\mathrm{d}t=2\sqrt{2\pi}\,T^{-1/2}.$$
We verify the condition (16). Using Lemma 2.4, we verify that the functions $f$, $g$ are in $C_{\alpha,\beta,\Lambda}$ with appropriate $\alpha$, $\beta$, $\Lambda$. We consider the function $\psi_k(t)=\frac{1}{t}e^{-k^2/t}$, where $k\ge\alpha_0>0$. We have $\psi_k'(t)=\frac{1}{t^2}e^{-k^2/t}\bigl(-1+\frac{k^2}{t}\bigr)$. In this case
$$\|\psi_k'\|^2_{L^2(0,\infty)}=\int_0^{\infty}\frac{e^{-2k^2/t}}{t^4}\Bigl(-1+\frac{k^2}{t}\Bigr)^2\mathrm{d}t\le C_0k^{-6}=C_0k^{-2\beta_1'}$$
with $\beta_1'=3$. Similarly, $\|\psi_k''\|^2_{L^2(0,\infty)}\le C_0k^{-8}=C_0k^{-2\beta_2'}$ with $\beta_2'=4$. So we can choose $\beta'=\min\{\beta_2',(\beta_1'+\beta_2')/2\}=7/2$. It follows that $f,g\in C_{\alpha,\beta,\Lambda}$ with $1/2<\alpha<3/2$, $1/2<\beta<(2\beta'-1)/2=3$, $\Lambda=CT^{\tau}/h^{\beta'}$.

Acknowledgments

The authors would like to thank the editor and the anonymous referees for their valuable comments and helpful suggestions that deeply improve the quality of our paper. This research was supported by Vietnam National Foundation for Science and Technology Development (NAFOSTED) [grant number 101.02-2019.321].