DSpace at VNU: Local Convergence of the Lavrentiev Method for the Cauchy Problem via a Carleman Inequality
J Sci Comput
DOI 10.1007/s10915-011-9571-6

Local Convergence of the Lavrentiev Method for the Cauchy Problem via a Carleman Inequality

Faker Ben Belgacem · Duc Thang Du · Faten Jelassi

Received: 13 July 2011 / Revised: 22 November 2011 / Accepted: 21 December 2011
© Springer Science+Business Media, LLC 2012

Abstract The purpose is to perform a sharp analysis of the Lavrentiev method applied to the regularization of the ill-posed Cauchy problem, set in the Steklov–Poincaré variational framework. Global approximation results stated earlier demonstrate that the Lavrentiev procedure yields a convergent strategy. However, no convergence rates are available unless a source condition is assumed on the exact Cauchy solution. We pursue here bounds on the approximation (bias) and the noise propagation (variance) errors away from the incomplete boundary, where the instabilities are located. The investigation relies on a Carleman inequality that enables enhanced local convergence rates for both bias and variance errors, without any particular smoothness assumption on the exact solution. These improved results allow a new insight into the behavior of the Lavrentiev solution; they look similar to those established for the Quasi-Reversibility method in [Inverse Problems 25, 035005, 2009]. There is a case for saying that this sort of 'super-convergence' is inherent to the nature of the Cauchy problem, and that any reasonable regularization procedure would enjoy the same locally super-convergent behavior.

Keywords Cauchy problem · Lavrentiev regularization · Bias and variance bounds · Carleman estimate

F. Ben Belgacem, LMAC, EA 2222, Université de Technologie de Compiègne, BP 20529, 60205 Compiègne Cedex, France. e-mail: fbenbelg@utc.fr
D.T. Du, MIM, Hanoi University of Science, Hanoi, Vietnam. e-mail: thangdd@vnu.vn
F. Jelassi, LAMSIN, Faculté des Sciences de Bizerte, 7021 Jarzouna, Tunisia. e-mail: faten.jelassi@lamsin.rnu.tn

1 Introduction

The data completion problem consists in recovering data on an inaccessible part of the boundary from abundant measurements on an accessible part of it. It is also known under the terminology of Cauchy's problem, and the over-specified boundary conditions are called Cauchy data. It arises in many engineering processes; we refer to [10, 12, 16, 21] for some applications. The main feature of this problem is its ill-posedness; using some regularization strategy is thus mandatory in the numerical computations. In recent work in [8, 9], and particularly in [11], the quasi-reversibility method has been sharply analyzed when used for the regularization of the Cauchy problem. The salient results stated there show that, although no convergence rates are expected globally (in the whole computational domain) without additional smoothness on the exact Cauchy solution, exhibiting such rates turns out to be possible away from the incomplete boundary where data are missing. The proofs of these estimates are elaborated without need of source conditions.¹ The approach proposed in [8, 11] enables one to derive convergence rates on shrunken sub-domains that do not meet the regions stretched along the incomplete boundary where the instabilities are located. Things happen as if the error were born at the over-determined boundary, then propagated while growing, eventually reaching its maximum on the incomplete boundary. In both references, the authors benefit from a suitable Carleman inequality to derive the desired results. The purpose of this contribution is to show that, compared to the quasi-reversibility procedure, the Lavrentiev method enjoys similar advantages and shows analogous convergence rates when applied in the variational framework first settled in [5]. Sharp estimates with Hölder rates may be stated away from the zone of instabilities. This testifies that the super-convergence is inherent to the nature of the Cauchy problem, and that any reasonable regularization method is entitled to share this behavior. The
tool indicated for our analysis is a well-fitted Carleman estimate. Compared to the one used in [11], the inequality we use incorporates the boundary and is harder to state; it has been proved by D. Tataru in [26]. Notice that we restrict ourselves to the improvement of the bounds on the bias and the variance that one can obtain in any sub-domain complementary to a vicinity of the incomplete boundary. We do not address, in the analysis, the issue of selecting the regularization parameter with respect to the noise magnitude, in spite of its importance. Cautious users have of course to deal with this question to secure their computations. The discrepancy principle, for instance, seeks a trade-off between the accuracy of the Lavrentiev method and the control of the unstable effects of noisy data; it has been studied in detail in [7] and is the one we use in our simulations. The contents of the paper are as follows. Section 2 is a presentation of the reduced variational formulation of the Cauchy problem, introduced in [5], to which we apply the Lavrentiev regularization. We then recall some former convergence results, available on the global domain, that are useful for our study; they are mainly proved in [7]. In Sect. 3, we provide the Carleman inequality proved in [26]. We show afterwards how to use it to solve the problem we are concerned with and to obtain convergence rates for the bias and the variance on any truncated domain that does not see the incomplete boundary. Section 4 is dedicated to the illustration of the theoretical predictions on an example where Fourier calculations are possible. Then, we discuss some numerical simulations realized on more complex geometries. The results we obtain are in agreement with the predictions. We also investigate the reliability of the discrepancy principle to select the Lavrentiev regularization parameter according to the noise intensity.

¹ Let us mention that such source assumptions are controversial. Some authors are reluctant to make use of them, because of the difficulty of figuring out whether the condition is satisfied or not. We refer for instance to [19, 23] to know more about that condition.

Fig. 1 The domain Ω; Γ_C is accessible to measurement and Γ_I is unreachable.

Some Notations

Let x denote the generic point of Ω. The Lebesgue space of square-integrable functions L²(Ω) is endowed with the natural inner product (·, ·)_{L²(Ω)}; the associated norm is ‖·‖_{L²(Ω)}. The Sobolev space H¹(Ω) involves all the functions that are in L²(Ω), as well as their first-order derivatives. It is provided with the norm ‖·‖_{H¹(Ω)}, and the semi-norm is denoted by |·|_{H¹(Ω)}. Let ϒ ⊂ ∂Ω be a connected component of the boundary; the space H^{1/2}(ϒ) is the set of the traces over ϒ of all the functions of H¹(Ω), and we adopt the notation H^{−1/2}(ϒ) for the dual space of H^{1/2}(ϒ). We refer to [1] for a detailed study of the fractional Sobolev spaces. Finally, we shall use throughout the notation p⁺ to indicate any real number greater than p.

2 Cauchy's Problem and Lavrentiev's Regularization

Let Ω be a bounded domain in R^d (d = 2, 3). We assume that the boundary Γ = ∂Ω is divided into two disconnected components Γ_C and Γ_I, as indicated in Fig. 1; we have then Γ̄_C ∩ Γ̄_I = ∅. They are supposed sufficiently smooth, to avoid their being the cause of any limitation on the elliptic regularity we will use; this is simply to alleviate the technicalities mandatory to handle geometrical singularities, which are far from being central in this work. Let a be sufficiently smooth and b ∈ L^∞(Ω) be given; a(·) is bounded away from zero and b(·) is non-negative and not identically zero, that is,

    a(x) ≥ a_* > 0,    b(x) ≥ 0,    ∀x ∈ Ω.

The norm of H¹(Ω) well adapted to our problem reads as follows:

    ‖v‖_{H¹(Ω)} = ( ‖√a ∇v‖²_{L²(Ω)} + ‖√b v‖²_{L²(Ω)} )^{1/2}.    (1)

Unless explicitly indicated, we use this norm throughout for the Sobolev space H¹(Ω). Now, assume that we are provided a source f ∈ L²(Ω) and a Cauchy boundary condition (g, ϕ) ∈ H^{1/2}(Γ_C) × H^{−1/2}(Γ_C). The data
completion problem we are concerned with then reads as follows: find u such that

    −div(a∇u) + bu = f    in Ω,    (2)
    a∂_n u = ϕ    on Γ_C,    (3)
    u = g on Γ_C,    u = ?    on Γ_I.    (4)

The exponential ill-posedness of this problem is widely known. To be safely solved numerically, it requires an effective regularization strategy; the one we are specifically involved with here is the Lavrentiev method. The methodology rests on a variational formulation written on the incomplete boundary Γ_I. It has been introduced in [5] and analyzed in [2, 7]. The approach proceeds by a duplication argument, suggested in an earlier work by Kohn and Vogelius (see [21]). Let μ ∈ H^{1/2}(Γ_I) be a given function on Γ_I, and define u_D(μ, g) ∈ H¹(Ω) to be in charge of the Dirichlet condition on Γ_C, that is, the solution of the elliptic problem

    −div(a∇u_D(μ, g)) + b u_D(μ, g) = f in Ω,    u_D(μ, g) = g on Γ_C,    u_D(μ, g) = μ on Γ_I.

Next, u_N(μ, ϕ) ∈ H¹(Ω) is involved with the Neumann condition on Γ_C; it is such that

    −div(a∇u_N(μ, ϕ)) + b u_N(μ, ϕ) = f in Ω,    a∂_n u_N(μ, ϕ) = ϕ on Γ_C,    u_N(μ, ϕ) = μ on Γ_I.

Obviously, the functions u_D(μ, g) and u_N(μ, ϕ) depend on the data f. This dependence does not appear in their expressions, since f has an insignificant effect on the subsequent analysis; it may even be set to zero (f = 0) without any change to the exposition or to the final convergence results. This will be assumed henceforward. The missing boundary datum on Γ_I, denoted by λ ∈ H^{1/2}(Γ_I), is therefore obtained when the equation u_D(λ, g) = u_N(λ, ϕ) (= u) is fulfilled. Owing to the Holmgren uniqueness theorem, that equality may be condensed on Γ_I and translated onto the fluxes. We are left therefore with the reduced flux equation

    a∂_n u_D(λ, g) = a∂_n u_N(λ, ϕ)    on Γ_I.

Proceeding as in [5], the above equation may be put under a variational form. The weak problem to solve is: find λ ∈ H^{1/2}(Γ_I) such that

    s(λ, μ) = ℓ(μ),    ∀μ ∈ H^{1/2}(Γ_I).    (5)

The bilinear form s(·, ·) and the linear form ℓ(·) are defined by: ∀χ, μ ∈ H^{1/2}(Γ_I),

    s(χ, μ) = ∫_Ω (a∇u_D(χ)·∇u_D(μ) + b u_D(χ)u_D(μ)) dx − ∫_Ω (a∇u_N(χ)·∇u_N(μ) + b u_N(χ)u_N(μ)) dx,

    ℓ(μ) = −∫_Ω (a∇ŭ_D(g)·∇u_D(μ) + b ŭ_D(g)u_D(μ)) dx − ⟨ϕ, u_N(μ)⟩_{1/2,Γ_C}.

The notations used here are those of [5]: u_N(μ) is used instead of u_N(μ, 0), and ŭ_N(ϕ) replaces u_N(0, ϕ); similar notational abuses are adopted for u_D. The forms s(·, ·) and ℓ(·) are each made of two contributions; we call them (s_D(·, ·), s_N(·, ·)) and (ℓ_D(·), ℓ_N(·)), with no ambiguity about their definitions. By ellipticity, s_D(·, ·) determines an inner product on H^{1/2}(Γ_I); the corresponding norm ‖·‖_{s_D} is equivalent to the natural norm ‖·‖_{H^{1/2}(Γ_I)}. Henceforth, we will rather use ‖·‖_{s_D}. Moreover, the bilinear form s(·, ·) is obviously symmetric; it is also non-negative definite (see [5]). As a result, it defines an inner product, and the related norm is provided by

    ‖μ‖_s = (s(μ, μ))^{1/2},    ∀μ ∈ H^{1/2}(Γ_I).

This norm is very weak, because the form s(·, ·) is compact with a very high compactness degree (see [4]): the eigenvalues of this form decrease towards zero very fast. We begin the analysis by stating an alternative expression of the norm ‖·‖_s. Recall that the norm ‖·‖_{H¹(Ω)} is the one defined in (1).

Lemma 2.1 There holds that, ∀μ ∈ H^{1/2}(Γ_I),

    ‖μ‖_s = ‖u_D(μ) − u_N(μ)‖_{H¹(Ω)}.

Proof The proof is achieved by direct computation. We have that

    ‖u_D(μ) − u_N(μ)‖²_{H¹(Ω)} = ∫_Ω [a∇(u_D(μ) − u_N(μ))·∇u_D(μ) + b(u_D(μ) − u_N(μ))u_D(μ)] dx
        − ∫_Ω [a∇(u_D(μ) − u_N(μ))·∇u_N(μ) + b(u_D(μ) − u_N(μ))u_N(μ)] dx.

In view of (u_D(μ) − u_N(μ)) = 0 on Γ_I and by the construction of u_N(μ), the second integral vanishes. We obtain the new formula

    ‖u_D(μ) − u_N(μ)‖²_{H¹(Ω)} = ∫_Ω (a(∇u_D(μ))² + b u_D(μ)²) dx − ∫_Ω [a∇u_N(μ)·∇u_D(μ) + b u_N(μ)u_D(μ)] dx
        = ∫_Ω (a(∇u_D(μ))² + b u_D(μ)²) dx − ∫_Ω (a(∇u_N(μ))² + b u_N(μ)²) dx
        + ∫_Ω [a∇u_N(μ)·∇(u_N(μ) − u_D(μ)) + b u_N(μ)(u_N(μ) − u_D(μ))] dx.

Arguing again as above, the last integral vanishes, and the right-hand side reduces to s(μ, μ), which completes the result of the lemma.

The stability of the linear form ℓ(·) in (5), with respect to the norm ‖·‖_s, will play a preponderant role
in the analysis we aim at.

Lemma 2.2 Let (g, ϕ) be arbitrarily given in H^{1/2}(Γ_C) × H^{−1/2}(Γ_C). The linear form ℓ(·) is such that

    ℓ(μ) ≤ m ‖μ‖_s,    ∀μ ∈ H^{1/2}(Γ_I).

The optimal value of the boundedness modulus is m = ‖ŭ_D(g) − ŭ_N(ϕ)‖_{H¹(Ω)}.

Proof We emphasize that the lemma holds true even when there is no solution to problem (5). However, the proof is easier to check when (5) has a solution λ ∈ H^{1/2}(Γ_I). Indeed, we have in this case

    ℓ(μ) = s(λ, μ) ≤ ‖λ‖_s ‖μ‖_s = m ‖μ‖_s.

Moreover, by Lemma 2.1, there holds that

    m = ‖λ‖_s = (s(λ, λ))^{1/2} = ‖u_D(λ) − u_N(λ)‖_{H¹(Ω)}.

Then, the equality u_D(λ, g) = u_N(λ, ϕ) yields in particular that ‖u_D(λ) − u_N(λ)‖_{H¹(Ω)} = ‖ŭ_D(g) − ŭ_N(ϕ)‖_{H¹(Ω)}. Replacing in the formula of m achieves the result. When problem (5) has no solution, the proof can be conducted following the lines of [7, Lemma 2.1]; we skip it.

Now, to dampen the pollution that may substantially damage the solution λ when the Cauchy data suffer from some noise, the simplest method is the Lavrentiev method, which turns out to be well suited to the symmetric, non-negative definite problem (5). Let ε be a small positive real parameter and consider the regularized problem: find λ_ε ∈ H^{1/2}(Γ_I) such that

    ε s_D(λ_ε, μ) + s(λ_ε, μ) = ℓ(μ),    ∀μ ∈ H^{1/2}(Γ_I).    (6)

The term ε s_D(·, ·) ensures the ellipticity of the problem; λ_ε exists in H^{1/2}(Γ_I) and is unique.

Remark 2.1 Let us introduce the bounded linear operator S, defined on H^{1/2}(Γ_I) by

    s_D(Sλ, μ) = s(λ, μ),    ∀μ ∈ H^{1/2}(Γ_I),

and construct the datum f ∈ H^{1/2}(Γ_I) through the formula

    s_D(f, μ) = ℓ(μ),    ∀μ ∈ H^{1/2}(Γ_I).

Recall that s_D(·, ·) determines a Hilbertian inner product on the space H^{1/2}(Γ_I). The operator S is symmetric with respect to s_D(·, ·) and inherits the properties of s(·, ·); it is therefore non-negative definite, and its spectrum is a positive sequence that decays exponentially fast towards zero. With these new notations, the ill-posed Cauchy problem (5) may be written as

    Sλ = f,    in H^{1/2}(Γ_I),

while its regularized counterpart (6) may be put under the following form:

    ελ_ε + Sλ_ε = f,    in H^{1/2}(Γ_I).

The regularization method we are interested in is thus the Lavrentiev procedure applied to the Cauchy problem (see [22]). Several results about the Lavrentiev solution are listed in [2], complemented by a full study in [7]. The proof of the following convergence result can be found in [7, Lemma 3.1].

Lemma 2.3 Assume that problem (5) has a solution λ ∈ H^{1/2}(Γ_I). Then, we have that

    lim_{ε→0} ‖λ_ε − λ‖_{s_D} = 0.

Moreover, there holds that

    ‖λ_ε − λ‖_s ≤ √ε ‖λ‖_{s_D}.    (7)

Remark 2.2 The analysis exposed in [7] yields that both λ_ε and (λ − λ_ε) are controlled, in the sense that

    ‖λ_ε‖_{s_D} ≤ ‖λ‖_{s_D}    and    ‖λ − λ_ε‖_{s_D} ≤ ‖λ‖_{s_D}.

The convergence in Lemma 2.3 of the bias (λ − λ_ε), the error purely due to the Lavrentiev method, is concerned with the whole domain. That bound cannot be improved without introducing more smoothness on the exact solution λ. However, the gap (λ − λ_ε) seems to be concentrated in the vicinity of the incomplete boundary, and one may hope for a better convergence rate far away from Γ_I. This is the subject of the coming section. Similar issues for the error caused by noisy Cauchy data, the variance, will be investigated later on.

3 Carleman Estimate and Local Convergence

We pursue a Hölderian convergence rate of (λ_ε) toward λ in the subregions of Ω that do not intersect Γ_I, without introducing further smoothness on the solution λ. Basically, the methodology we use consists in a suitable interpolation estimate deduced from a Carleman inequality in domains with boundary, proved by D. Tataru (see [26]).

3.1 Carleman's Inequality

That Carleman inequality, tuned to the analysis of the Lavrentiev regularization, relies on some suitable weight functions. One of them can be constructed as follows. Let θ be a smooth function, defined on the closure of Ω, such that

    |∇θ(x)| > 0, ∀x ∈ Ω̄,    θ(x) > 0, ∀x ∈ Ω̄ \ Γ_I,    θ(x) = 0, ∀x ∈ Γ_I.

Such a function does exist, by [17, Lemma 1.1]. Set now the weight function ψ(x) =
e^{θ(x)}, ∀x ∈ Ω̄. Notice that

    |∇ψ(x)| > 0, ∀x ∈ Ω̄,    ψ(x) > 1, ∀x ∈ Ω̄ \ Γ_I,    ψ(x) = 1, ∀x ∈ Γ_I.

Then, for sufficiently large ζ ≥ 1, the following one-parameter Carleman estimate, including boundary conditions, is provided in [26]:

    ∫_Ω (a(∇v)² + ζ b v²) e^{2ζψ} dx ≤ C ( (1/ζ) ∫_Ω (−div(a∇v) + bv)² e^{2ζψ} dx
        + ∫_Γ ((a∂_n v)² + ζ v²) e^{2ζψ} dγ ),    ∀v ∈ H²(Ω).    (8)

The constant C is independent of ζ.

3.2 Local Estimate on the Bias

We first conduct the analysis in the noise-free setting, that is, when the existence of λ, a solution of (5), is ensured. We are thus concerned with bounding the bias. Recall that the solution λ of the reduced problem (5) is such that u_D(λ, g) = u_N(λ, ϕ); they coincide with u, the solution of the Cauchy problem (2)–(4). For convenience, we introduce the symbol δ_ε = (λ_ε − λ). Define the gap functions as follows:

    (w_{D,ε}, w_{N,ε}) = (u_D(λ_ε, g) − u, u_N(λ_ε, ϕ) − u) = (u_D(δ_ε), u_N(δ_ε)).

The purpose is to exhibit a local (super-)convergence rate on the function w_{N,ε}, which is the solution of the Laplace problem²

    −div(a∇w_{N,ε}) + b w_{N,ε} = 0    in Ω,    (9)
    a∂_n w_{N,ε} = 0    on Γ_C,    (10)
    w_{N,ε} = δ_ε    on Γ_I.    (11)

² The motivation for keeping the index N will naturally appear later.

Remark 3.1 Before stepping further, we need to know more about the regularity of the gap function w_{N,ε}. The smoothness assumed on a, together with the assumptions on Γ introduced in Sect. 2, results in more regularity than expected on w_{N,ε}. Actually, the only factor that limits that regularity to H¹(Ω) is the Dirichlet condition (11) on the incomplete boundary Γ_I. Then, away from Γ_I, elliptic regularity theory applies, which stipulates that w_{N,ε} belongs to H²(ω) for any sub-domain ω such that ω̄ ∩ Γ_I = ∅ (see [13]).

On account of Lemma 2.2, both errors (w_{D,ε}, w_{N,ε}) go to zero in H¹(Ω) for small ε. Taking profit from estimate (7) is liable to provide sharper information in the vicinity of Γ_C.

Lemma 3.1 We have that

    ‖w_{N,ε}‖_{H^{1/2}(Γ_C)} + ‖a∂_n w_{D,ε}‖_{H^{−1/2}(Γ_C)} ≤ C √ε ‖λ‖_{s_D}.

Proof The trace theorem applied to the harmonic function (w_{D,ε} − w_{N,ε})³ yields that

    ‖w_{N,ε}‖_{H^{1/2}(Γ_C)} + ‖a∂_n w_{D,ε}‖_{H^{−1/2}(Γ_C)} ≤ C ‖w_{D,ε} − w_{N,ε}‖_{H¹(Ω)}.

By means of Lemma 2.1, we state that

    ‖w_{N,ε}‖_{H^{1/2}(Γ_C)} + ‖a∂_n w_{D,ε}‖_{H^{−1/2}(Γ_C)} ≤ C ‖δ_ε‖_s.

Estimate (7) in Lemma 2.3 completes the proof.

³ The function w_{D,ε} is such that −div(a∇w_{D,ε}) + b w_{D,ε} = 0 in Ω, w_{D,ε} = 0 on Γ_C, w_{D,ε} = δ_ε on Γ_I.

Fig. 2 The sub-domain Ω_τ.

Prior to the application of the Carleman estimate, we introduce some additional geometrical and functional notations. For a given small parameter τ, we define the subset

    Ω_τ = { x ∈ Ω, ψ(x) ≥ 1 + τ }.

The domain Ω_τ is depicted in Fig. 2. Observe first that Ω̄_τ ∩ Γ_I = ∅; the distance d(Ω_τ, Γ_I) is then positive. In addition, τ may be selected sufficiently small so that Ω \ Ω_τ is concentrated in the vicinity of Γ_I. In fact, we have that

    ∪_{τ>0} Ω_τ = Ω \ Γ_I.

Next, for a given couple of small positive real numbers (τ, η) with τ > η > 0, we consider a smooth cut-off function ξ_{τ,η}, supported in Ω̄_η and defined as

    0 ≤ ξ_{τ,η}(x) ≤ 1 ∀x ∈ Ω,    ξ_{τ,η}(x) = 1 ∀x ∈ Ω_τ,    ξ_{τ,η}(x) = 0 ∀x ∈ Ω \ Ω_η.    (12)

The following local estimate holds.

Theorem 3.2 Let β > 0 be a small parameter. There exist q = q(β) ∈ ]0, 1/2[ and a constant C = C(β) such that

    ‖u_N(λ_ε, ϕ) − u‖_{H¹(Ω_β)} ≤ C ε^q ‖λ‖_{s_D}.

Proof Let (τ, η) be two positive parameters with β > τ > η > 0, and fix ξ_{τ,η} to be the cut-off function defined in (12). We are going to apply the Carleman estimate to the function w_{N,ε} ξ_{τ,η}. To avoid heavy writing, we drop some of the indices N and τ,η, and use w_ε ξ instead of w_{N,ε} ξ_{τ,η}. Now, we are allowed to choose v = w_ε ξ in formula (8); this is legitimated by Remark 3.1. Indeed, v ∈ H²(Ω), because the support Ω̄_η of ξ does not meet Γ_I. There comes out

    ∫_{Ω_β} (a(∇(w_ε ξ))² + ζ b (w_ε ξ)²) e^{2ζψ} dx ≤ C ( (1/ζ) ∫_Ω (−div(a∇(w_ε ξ)) + b w_ε ξ)² e^{2ζψ} dx
        + ∫_{Γ_C} ((a∂_n (w_ε ξ))² + ζ (w_ε ξ)²) e^{2ζψ} dγ ).

On account of (9), the boundary condition (10), and the fact that ξ(x) = 1 for all x in Ω_τ, this yields

    ∫_{Ω_β} (a(∇w_ε)² + ζ b w_ε²) e^{2ζψ} dx ≤ C ( (1/ζ) ∫_{Ω_η\Ω_τ} (a(∇w_ε)² + b w_ε²) e^{2ζψ} dx + ζ ∫_{Γ_C} w_ε² e^{2ζψ} dγ ).

The new constant C depends on the functions (a, b) and on the cut-off ξ, hence on the parameters (τ, η). Now, due to the specification of the subregions Ω_β, Ω_τ
and Ω_η, and after setting σ = max_{x∈Γ_C} ψ(x) − 1 > 0, we obtain that

    e^{2ζ(1+β)} ∫_{Ω_β} (a(∇w_ε)² + ζ b w_ε²) dx ≤ C ( (e^{2ζ(1+τ)}/ζ) ∫_{Ω_η\Ω_τ} (a(∇w_ε)² + b w_ε²) dx
        + ζ e^{2ζ(1+σ)} ∫_{Γ_C} w_ε² dγ ).

Observe that σ > β necessarily holds true, owing to the choice of the weight function ψ. Due to obvious simplifications, we have that

    ∫_{Ω_β} (a(∇w_ε)² + ζ b w_ε²) dx ≤ C ( (e^{2ζ(τ−β)}/ζ) ∫_{Ω_η\Ω_τ} (a(∇w_ε)² + b w_ε²) dx + ζ e^{2ζ(σ−β)} ∫_{Γ_C} w_ε² dγ ).

The stability of w_ε (= w_{N,ε}) with respect to δ_ε, together with Lemma 3.1, provides

    ∫_{Ω_β} (a(∇w_ε)² + ζ b w_ε²) dx ≤ C ( (e^{−2ζ(β−τ)}/ζ) ‖δ_ε‖²_{s_D} + ζ e^{2ζ(σ−β)} ε ‖λ‖²_{s_D} ).

Now, we introduce the parameter ρ = (1/√ζ) e^{−ζ(β−τ)}, bound to decay towards zero for large ζ. Recalling that ζ ≥ 1, the above estimate may be transformed into

    ‖w_ε‖_{H¹(Ω_β)} ≤ C ( ρ² ‖δ_ε‖²_{s_D} + ρ^{−2s} ε ‖λ‖²_{s_D} )^{1/2},

where s coincides with [(σ − β)/(β − τ)]⁺ and the constant C = C(β, τ, η) > 0. Selecting ρ to minimize the right-hand side ends up with

    ‖w_ε‖_{H¹(Ω_β)} ≤ C ε^{1/(2(1+s⁺))} ( ‖δ_ε‖_{s_D} )^{s/(1+s)} ( ‖λ‖_{s_D} )^{1/(1+s)}.    (13)

The last inequality is worked out after recalling that ‖δ_ε‖_{s_D} ≤ ‖λ‖_{s_D} (Remark 2.2), which gives

    ‖w_ε‖_{H¹(Ω_β)} ≤ C ε^{1/(2(1+s))} ‖λ‖_{s_D}.

The proof is therefore complete, with q = 1/(2(1+s)).

Remark 3.2 We have that

    ‖u_D(λ_ε, g) − u‖_{H¹(Ω_β)} ≤ C ε^q.

The derivation of this bound is direct from Theorem 3.2 together with estimate (7).

Remark 3.3 It may occur that the Cauchy solution enjoys sufficient smoothness, liable to improve (7). One may then have

    ‖δ_ε‖_{s_D} ≤ C ε^p,    (14)

for some p ∈ ]0, 1/2[. Returning to the estimate (13) enables a better convergence rate than Theorem 3.2. Indeed, we have that

    ‖u_N(λ_ε, ϕ) − u‖_{H¹(Ω_β)} ≤ C ε^{(1−μ)p+q} = C ε^{(1−μ)p+μ/2}.    (15)

The real number μ ∈ ]0, 1[ depends on the weight function ψ, and thereby on the sub-domain Ω_β; it is equal to 1/(s+1). This sounds as if the estimate were derived by Hilbertian interpolation between the convergence rate in the vicinity of Γ_C and the one related to the whole domain. As a matter of fact, it can be checked that if β decays to zero, which means that Ω_β comes close to the whole domain Ω, then the parameter μ goes towards zero and we recover the global convergence rate. Oppositely, when β grows and the sub-domain Ω_β is reduced to a thin band concentrated around Γ_C, the parameter μ grows towards one and the convergence rate comes close to the one on Γ_C given in Lemma 3.1.

Remark 3.4 Things happen here as if the solution u satisfied a 'General Source Condition' on the restricted domain Ω_β. This is related to the issue discussed in [6], where Proposition 3.1 suggests a connection between that 'abstract' smoothness assumption and the possibility for the Cauchy solution to be 'harmonically' extended to a larger domain. The solution u, defined in the global domain Ω, may be accounted for as an extension of u|_{Ω_β}. This may offer an explanation of the super-convergence results of Theorem 3.2.

3.3 Local Estimate of the Variance

Real-life configurations show that data are noisy: Cauchy boundary conditions are only known approximately. Instead of the exact (g, ϕ), we dispose of the deviated data (g_ϑ, ϕ_ϑ), which may be mismatching. They suffer from inaccuracy and have, in general, a dramatic impact on the solution of problem (5); the numerical computations unavoidably produce irrelevant results unless a regularization strategy is adopted. The Lavrentiev method is there to dampen that effect, provided that the parameter ε is judiciously selected. We start by specifying the perturbation on the Cauchy data. Denote the noise by (δg, δϕ), which coincides with (g − g_ϑ, ϕ − ϕ_ϑ). Assume it is such that

    ‖δg‖_{H^{1/2}(Γ_C)} + ‖δϕ‖_{H^{−1/2}(Γ_C)} ≤ ϑ.    (16)

A direct consequence is the following useful estimate:

    ‖ŭ_D(δg) − ŭ_N(δϕ)‖_{H¹(Ω)} ≤ C ϑ.    (17)

This bound can be viewed as a direct consequence of the fact that (δg, δϕ) is small with respect to the natural norms of H^{1/2}(Γ_C) × H^{−1/2}(Γ_C). The linear form in problem (6) is denoted by ℓ_ϑ(·) when defined by (g_ϑ, ϕ_ϑ), and δℓ(·) (= (ℓ − ℓ_ϑ)(·)) is related to the deviation (δg, δϕ). The regularized solution for noisy data is henceforth λ_{ε,ϑ} ∈ H^{1/2}(Γ_I), such that

    ε s_D(λ_{ε,ϑ}, μ) + s(λ_{ε,ϑ}, μ) = ℓ_ϑ(μ),    ∀μ ∈ H^{1/2}(Γ_I).    (18)

The important issue that
remains to be investigated is the way the parameter ε = ε(ϑ) should be selected, so as to guarantee that (λ_{ε,ϑ})_{ε>0} converges toward the exact solution λ for small ϑ. That error obeys the standard bias–variance decomposition

    ‖λ − λ_{ε,ϑ}‖_{s_D} ≤ ‖λ − λ_ε‖_{s_D} + ‖λ_ε − λ_{ε,ϑ}‖_{s_D}.

Local convergence results on the bias are elaborated above; to be complete, we now aim to conduct a similar analysis on the variance. We start with the global bound obtained in [7, Lemma 3.2].

Lemma 3.3 There holds that

    ‖λ_ε − λ_{ε,ϑ}‖_{s_D} ≤ C ϑ/√ε.

As a result, if ε = ε(ϑ) is chosen so that ϑ/√ε decays towards zero for small ϑ, then λ_{ε,ϑ} converges towards λ in H^{1/2}(Γ_I).

Remark 3.5 The bound on the variance for the Lavrentiev method is similar to that predicted by the general theory for the Tikhonov procedure. This suggests that the variational problem (5) is implicitly a least-squares problem, which turns out to be exactly the case; we refer to [3] (see also [14]) for the justification.

Now, we pursue a sharp analysis of that variance away from the incomplete boundary, to bring out a better behavior than expected. To realize this target, we first need to adapt the notations. The symbol δ_{ε,ϑ} stands for (λ_ε − λ_{ε,ϑ}). The following gap functions are also needed:

    (w_{D,ϑ}, w_{N,ϑ}) = (u_D(λ_ε, g) − u_D(λ_{ε,ϑ}, g_ϑ), u_N(λ_ε, ϕ) − u_N(λ_{ε,ϑ}, ϕ_ϑ)) = (u_D(δ_{ε,ϑ}, δg), u_N(δ_{ε,ϑ}, δϕ)).

The function w_{N,ϑ} is therefore the solution of the Laplace problem

    −div(a∇w_{N,ϑ}) + b w_{N,ϑ} = 0 in Ω,    a∂_n w_{N,ϑ} = δϕ on Γ_C,    w_{N,ϑ} = δ_{ε,ϑ} on Γ_I.

Similar to the study of the bias, the coming analysis is based on a preliminary result.

Lemma 3.4 There holds that

    ‖w_{N,ϑ}‖_{H^{1/2}(Γ_C)} + ‖a∂_n w_{D,ϑ}‖_{H^{−1/2}(Γ_C)} ≤ C ϑ.

Proof The proof takes two steps. Choose first μ = δ_{ε,ϑ} and subtract (18) from (6); we obtain that

    ε ‖δ_{ε,ϑ}‖²_{s_D} + ‖δ_{ε,ϑ}‖²_s = (δℓ)(δ_{ε,ϑ}).

Owing to the stability with respect to the norm ‖·‖_s, given by Lemma 2.2, we derive that

    ε ‖δ_{ε,ϑ}‖²_{s_D} + ‖δ_{ε,ϑ}‖²_s ≤ (δm) ‖δ_{ε,ϑ}‖_s.

The continuity modulus is given by

    δm = ‖ŭ_D(δg) − ŭ_N(δϕ)‖_{H¹(Ω)} ≤ C ϑ;

the right bound comes from (17). As a result, we have the inequality

    ‖δ_{ε,ϑ}‖_s ≤ C ϑ.    (19)

The second part of the proof is similar to the one developed for Lemma 3.1. First, by the trace theorem, we have that

    ‖u_N(δ_{ε,ϑ})‖_{H^{1/2}(Γ_C)} + ‖a∂_n u_D(δ_{ε,ϑ})‖_{H^{−1/2}(Γ_C)} ≤ C ‖u_D(δ_{ε,ϑ}) − u_N(δ_{ε,ϑ})‖_{H¹(Ω)} ≤ C ‖δ_{ε,ϑ}‖_s.    (20)

Using the triangle inequality yields that

    ‖w_{N,ϑ}‖_{H^{1/2}(Γ_C)} + ‖a∂_n w_{D,ϑ}‖_{H^{−1/2}(Γ_C)} ≤ ‖u_N(δ_{ε,ϑ})‖_{H^{1/2}(Γ_C)} + ‖ŭ_N(δϕ)‖_{H^{1/2}(Γ_C)}
        + ‖a∂_n u_D(δ_{ε,ϑ})‖_{H^{−1/2}(Γ_C)} + ‖a∂_n ŭ_D(δg)‖_{H^{−1/2}(Γ_C)}.

Another application of the trace theorem, together with inequality (20), provides

    ‖w_{N,ϑ}‖_{H^{1/2}(Γ_C)} + ‖a∂_n w_{D,ϑ}‖_{H^{−1/2}(Γ_C)} ≤ C ( ‖δ_{ε,ϑ}‖_s + ‖δϕ‖_{H^{−1/2}(Γ_C)} + ‖δg‖_{H^{1/2}(Γ_C)} ).

In view of the bounds (19) and (16), there comes out

    ‖w_{N,ϑ}‖_{H^{1/2}(Γ_C)} + ‖a∂_n w_{D,ϑ}‖_{H^{−1/2}(Γ_C)} ≤ C ϑ.

The proof is complete.

Remark 3.6 Be aware that the bound Cϑ in (19) on δ_{ε,ϑ} is obtained for the weak norm ‖·‖_s. On the other side, the bound in the strong norm ‖·‖_{s_D} is C ϑ/√ε, according to Lemma 3.3.

We now dispose of the technical tools that serve to exhibit a local convergence of the variance. We have the following theorem.

Theorem 3.5 Let β > 0 be a small parameter. There exist q = q(β) ∈ ]0, 1/2[ and a constant C = C(β) such that the variance is bounded as follows:

    ‖u_N(λ_{ε,ϑ}, ϕ_ϑ) − u_N(λ_ε, ϕ)‖_{H¹(Ω_β)} ≤ C ϑ ε^{−1/2+q}.

Proof The proof is largely inspired from that of Theorem 3.2. We apply the Carleman estimate to w_{N,ϑ} ξ_{τ,η}. Again, we drop the indices N and τ,η. Taking v = w_ϑ ξ (∈ H²(Ω)) in formula (8), and working it out as previously (proof of Theorem 3.2), we end up with

    ∫_{Ω_β} (a(∇w_ϑ)² + ζ b w_ϑ²) dx ≤ C ( (e^{2ζ(τ−β)}/ζ) ∫_{Ω_η\Ω_τ} (a(∇w_ϑ)² + b w_ϑ²) dx
        + e^{2ζ(σ−β)} ∫_{Γ_C} ((a∂_n w_ϑ)² + ζ w_ϑ²) dγ ).

The stability of w_ϑ (= w_{N,ϑ}) with respect to (δ_{ε,ϑ}, δϕ), together with Lemma 3.3, yields a bound for the first integral on the right-hand side. On the other hand, since a∂_n w_ϑ = δϕ on Γ_C, using estimate (16) and Lemma 3.4 gives a bound for the second integral. There holds then

    ∫_{Ω_β} (a(∇w_ϑ)² + b w_ϑ²) dx ≤ C ( (e^{−2ζ(β−τ)}/ζ) (ϑ²/ε) + (1 + ζ) e^{2ζ(σ−β)} ϑ² ).

After setting ρ = (1/√ζ) e^{−ζ(β−τ)}, we state that

    ‖w_ϑ‖_{H¹(Ω_β)} ≤ C ϑ ( ρ²/ε + ρ^{−2s} )^{1/2}.    (21)

The estimate being valid for any small ρ, it may be chosen to obtain the best bound possible.
There holds then w According to q = C − 12 +q , 2(1+s) H 1( β ) ≤C s − 2(s+1) as defined in Theorem 3.2, the final bound is expressed by The proof is complete Remark 3.7 The bound on the variance confirms the intuition the numericists have since long, that the largest fraction of the error due to noise is located at the vicinity of the incom√ is bounded from below, let say plete boundary For illustration, assume that the ratio / √ = O(1) Looking now at the bound by Theorem 3.5, we have that / uN (λ , ϕ ) − uN (λ , ϕ) H 1( β ) This tells that the variance decays to zero away from ≤C I, q ≤C 2q for small noise magnitude 3.4 Local Convergence Rates for the Lavrentiev Method According to the analysis achieved above, the bias and the variance errors decay toward zero with some convergence rates This makes it possible to obtain a local estimate of the total error, from the bias-variance decomposition principle We have that Theorem 3.6 Let β > be a small parameter There exists q = q(β) ∈ [0, 1/2[ and a constant C = C(β) such that the following bound holds uN (λ , ϕ ) − u H 1( β ) ≤C q λ sD + − 12 Remark 3.8 Selecting the parameter = ( ), to obtain the best bound in Theorem 3.6, is the key issue for a performing activation of the Lavrentiev strategy A wide literature is dedicated to a posteriori rules such that the Discrepancy principle or the balancing principle which are a reliable means to realize a trade-off between the convergence rate and the stability, necessary to an efficient computed solution (see [7, 11]) The discrepancy principle is the criterion adopted in our computations Analytical and Numerical Examples The convergence rates of Theorems 3.2 and 3.5 and can be checked out through explicit computations for the Laplace operator Achieving analytical calculations becomes possible and accurate convergence rates may be obtained, to compare with the theoretical statements proved here We give also some numerical simulations to confirm the predictions for more 
complicated domains J Sci Comput Fig The circular configuration 4.1 Analytical Example Assume that the (diffusion, reaction) parameters (a, b) are equal to (1, 0) and the volume data is such that f = The problem to consider is hence the Laplace equation (2)–(4) with Cauchy data Consider then the annular domain with double radii (rC , rI ) (see Fig 3) The internal circle is C and the external one plays the role of I Then, let us define the truncated T as the annular sub-region characterized by both radii (rC , rT ), with rC < rT < rI Fourier computations may be realized We refer to [6] for the details of the calculations Following the lines of [6, Lemma 4.4], we obtain a bound on the bias similar in its form to Theorem 3.2 Indeed there holds that uN (λ , ϕ) − u H 1( T ) ≤C q λ sD (22) The convergence rate q can be calculated accurately In the case where the assumption rI rC ≤ (rT )2 is fulfilled q is given by q = , 2(1 + s ) s = log(rT ) − log(rC ) log(rI ) − log(rT ) Next, we try to find out whether the rates obtained by the Carleman estimate are close or not This result is tightly dependent on the weight function ψ(·) used in (8) The one that seems suitable is the following radial weight ψ(x) = ν(r) = + log rI , r r = |x| To fit the configuration handled in Sect 3.2, we choose β = log(rI /rT ) It is immediately checked out that T = x∈ , ψ(x) ≥ + β = β As proved in Theorem 3.2, owing to Carleman’s estimate, the bias bound (22) holds true with a convergence rate q The point now is to figure out whether q coincides with q or not In fact, we will verify whether s is equal to s To bring the answer we shall have a closer look at the proof of Theorem 3.2 where s is computed from a couple (τ, η) of non-negative real-numbers where β > τ > η related to the cut-off function ξτ,η used in the Carleman inequality Notice also that we may construct ξτ,η with a support that coincides with the whole due to the smoothness of ∂ Indeed, it is possible to construct it so that (ξτ , ∂n 
It is sufficient to select z(r), a non-increasing function on [rC, rI] with z(rI) = z′(rI) = 0, and to set the cut-off ξτ,η(x) = z(|x|). Consequently, η may be fixed to zero, η = 0. Returning to the exponent s̄ in the proof of Theorem 3.2, it is therefore given by

    s̄ = (σ − β)/(β − τ) = (ν(rC) − ν(rT))/(ν(rT) − ν(rτ)) = (log(rT) − log(rC))/(log(rτ) − log(rT)).

Taking τ sufficiently close to zero yields rτ ≈ rI, and s̄ comes close to s. Let us draw attention to the following fact: so far, nothing tells us that, in a general geometry, the constant C = C(β, τ, η) in the estimate (22) derived from the Carleman inequality remains bounded when (τ, η) goes toward (0, 0). Actually, the one written in (8) predicts that C would blow up. For the elementary example we are dealing with here, it does not.

Similar observations are readily made on the variance. In the circular geometry we are involved in, an accurate bound of it is stated in [6, Lemma 4.2],

    ‖uN(λε, ϕ) − uN(λε^δ, ϕ^δ)‖H1(Ωβ) ≤ C δ ε^(−1/2+q).

Since q turns out to be close to q̄, as illustrated above, this estimate of the variance looks like the one proved in Theorem 3.5.

4.2 Numerical Examples

The aim pursued is to confirm numerically the observations made during the analytical discussion, before assessing the Lavrentiev solution in more complex domains. The computations are realized by means of a Fortran program implemented for the Lavrentiev solution of the Cauchy problem. It is based on linear finite elements built on triangular meshes, used to solve the discrete counterpart of (6). Details on the implementation, as well as many tips to proceed efficiently, can be found in [3]; the construction of the stiffness matrices and the Conjugate Gradient (CG) iterative solver we use are thoroughly described there. The examples presented all have a closed form w of the Cauchy solution, thus u = w. The Cauchy data (g, ϕ) are constructed as (w, ∂n w)|ΓC, obtained from the knowledge of w. To derive the noisy data (g^δ, ϕ^δ), the polluted counterparts of (g, ϕ), we add an artificial multiplicative noise of a known level.
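The multiplicative noise model just described can be sketched as follows; this is a plain Python stand-in for the Fortran rand, and the helper name pollute is ours, not the paper's:

```python
import random

def pollute(values, level, rng=None):
    """Return values * (1 + level * u), with u drawn uniformly in [-1, 1]."""
    rng = rng or random.Random(0)        # seeded for reproducibility
    return [v * (1.0 + level * (2.0 * rng.random() - 1.0)) for v in values]

# illustrative samples of the Cauchy datum g on Gamma_C
g = [0.3, -1.2, 0.7, 2.5]
g_noisy = pollute(g, level=0.05)         # 5% multiplicative noise
# each entry deviates from the exact one by at most 5% in relative value
```

Because the noise is multiplicative, its relative magnitude is controlled pointwise, which is the "known level" used below when feeding the discrepancy principle.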
The noise is created by the Fortran function rand. The deviated data (g^δ, ϕ^δ) are used as they are; they are neither filtered nor smoothed. We thus consider the worst situation one can be confronted with.

About the Bias. We begin the experiments with the investigation of the bias in the annular domain centered at the origin with double radii (0.5, 1). The internal circle is the Cauchy boundary ΓC and the external one is ΓI. We investigate the case of the polynomial-dipolar potential

    u(x, y) = x³ − 3xy² + (x − 0.85)/((x − 0.85)² + (y − 0.85)²).

The singular contribution to u is related to the potential created by a dipole located at the point S = (0.85, 0.85). Computations are achieved by feeding the program with the interpolation of the unnoisy data (g, ϕ). Actually, in the pre-processing stage, the very data vector involved in the algebraic system is obtained after some numerical manipulations, especially due to the quadrature formulas used for the evaluation of the integrals in (5); it is hence affected by some inaccuracy. Nevertheless, that noise seems to be hardly excited by the CG iterations and it has no significant effect on the final results, except for very small values of the Lavrentiev parameter ε.

Table: The predicted and observed convergence rates for the bias in different truncated domains

    rT                         0.6    0.75   0.80   1.00
    Theoretical rates (ΩT)     0.40   0.28   0.25   0.13
    Computational rates (ΩT)   0.41   0.32   0.27   0.12

Fig.: Convergence curves of the bias (u − uN(λε, ϕ)) for the dipolar potential

We investigate the behavior of the computed solutions with respect to ε within the circular sub-domains ΩT where rT = 0.6, 0.75 and 0.8. Predicting the theoretical convergence rate turns out to be feasible. On account of the result in [6, Proposition 3.1], the analyticity of the exact solution u beyond the incomplete boundary ΓI allocates to the exact trace λ = u|ΓI some smoothness, which yields a Hölderian convergence rate of the Lavrentiev solution.⁴
Actually, that effective smoothness is narrowly dependent on the size of the analyticity domain, and thereby on the location of the singular point S = (0.85, 0.85). As a result, the estimate (14) holds true and the convergence rate may be computed following the rules given in [6, Remark 3.2],

    p = (log(rS) − log(rI))/(2(log(rI) − log(rC))) = log(0.85√2)/(2 log 2) ≈ 0.132.

The symbol rS indicates the length of the radial vector pointing at the singularity location S. Next, focusing on the sub-region ΩT (= Ωβ), the new convergence rate is derived as in estimate (15). The point is to compute the real number μ, evaluated as 1/(1 + s̄), where s̄ is provided in the foregoing section. These overall computations achieved, the method is expected to display the convergence rates on the bias collected in the table above. The error curves are displayed in logarithmic scales in the figure above; the slopes of the linear regressions of those curves are evaluated and provided in the last row of the table, and they come close to the theoretical predictions recorded in the middle row of that same table.

The purpose of the second example is to observe similar trends in a more complex geometry. The one we consider is obtained by the assembly of four layers: the internal one recalls the shape of a brain layer, and it is extended by adding the three layers depicted in the figure below. We pursue the confirmation of the observations made in the circular geometry; here, predicting accurate convergence rates is a hard point. The potential considered is created by a dipole located at S = (0.25, −0.1), represented by the cross in the figure. The sub-region coinciding with the brain layer is denoted by (a); the three elliptic extensions are (b), (c) and (d), moving from the smallest to the largest.

⁴ Hölderian or Sobolev regularities on u presumably lead to logarithmic convergence rates.

Fig.: The structure of the domain

Table: The computed convergence rates (left). Convergence curves of the bias (right)

    ΩT                     (a)    (b)    (c)    (d)
    Computational rates    0.43   0.29   0.21   0.13
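Returning to the annular example, the theoretical rates of the bias table can be reproduced by a short script. We assume here, consistently with the reported values, that the local rate of estimate (15) is the interpolation (1 − μ)p + μ/2 between the global rate p and the ideal 1/2, with μ = 1/(1 + s̄):

```python
import math

r_C, r_I = 0.5, 1.0
r_S = 0.85 * math.sqrt(2.0)              # distance of the dipole S = (0.85, 0.85) from the origin
p = (math.log(r_S) - math.log(r_I)) / (2.0 * (math.log(r_I) - math.log(r_C)))

def local_rate(r_T):
    """Predicted bias rate in the truncated annulus of external radius r_T."""
    if r_T >= r_I:
        return p                         # no truncation: the global rate only
    s_bar = math.log(r_T / r_C) / math.log(r_I / r_T)
    mu = 1.0 / (1.0 + s_bar)
    return (1.0 - mu) * p + mu / 2.0     # interpolation between p and 1/2

for r_T in (0.6, 0.75, 0.80, 1.00):
    print(r_T, round(local_rate(r_T), 2))
# close to the theoretical row of the table: 0.40, 0.28, 0.25, 0.13
```

The interpolated form is our reading of the garbled estimate, retained only because it matches the four tabulated predictions; the paper's estimate (15) remains the authoritative statement.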
The complete boundary ΓC is the internal brain-shaped boundary and the incomplete one, ΓI, is the most external ellipse. The convergence curves are provided in the right panel of the figure above. The slopes reported in the table appear in accordance with the expectations and in agreement with the previous trends.

About the Variance and the Discrepancy Principle. If the data are damaged by noise, the issue of choosing the regularization parameter ε becomes fundamental. The Discrepancy Principle is known to be a tractable methodology to conclusively operate (see [20, 24, 25]); the only information at our disposal is the magnitude δ of the noise (δg, δϕ). We proceed here following the approach exposed and studied in [7]. The selection rule is naturally based on the Kohn-Vogelius functional defined by

    V(μ) = ‖uD(μ, g^δ) − uN(μ, ϕ^δ)‖H1(Ω).

The idea underlying the approximation is the following. The exact solution λ of (5) is such that

    V(λ) = ‖uD(λ, g^δ) − uN(λ, ϕ^δ)‖H1(Ω) = ‖ŭD(δg) − ŭN(δϕ)‖H1(Ω) ≈ Cδ.

This estimate is issued from (17). In the sequel we prefer to use this deviation, δ* = Cδ; we refer to [6, 15] for the description of the process used for its practical evaluation. Let now τ > 1 be fixed, close to unity. The discrepancy principle consists in choosing ε = ε(δ) by solving the algebraic equation

    V(λε^δ) = τ δ*.  (23)

Recall that λε^δ is the solution of the regularized problem (6) where the exact data (g, ϕ) are replaced by the inexact (g^δ, ϕ^δ). It is therefore dependent on δ, and (23) makes sense. That ε(δ) exists and is unique is derived from the continuity and the monotonicity of ε ↦ V(λε^δ) (see [7]).

Fig.: The domain Ω, the mesh being only indicative (left). The truncated domain ΩT (right)

Table: Accuracy of the Lavrentiev solution for different noise levels. The parameter ε, obtained from the discrepancy equation (23), is given between parentheses in the third row

    δ∞                 0.025               0.05                0.1                 0.25
    δ                  0.3196              0.6601              1.285               3.601
    ‖u − uN‖∞,Ω        0.313 (0.75×10⁻³)   0.325 (0.1×10⁻²)    0.357 (0.5×10⁻²)    0.368 (0.1×10⁻¹)
    ‖u − uN‖∞,ΩT       0.0655              0.0691              0.0859              0.0581
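The algebraic equation (23) lends itself to a dichotomy solve, relying on the monotonicity of ε ↦ V(λε^δ). A generic sketch follows; the functional V below is a monotone stand-in of our own, not the actual Kohn-Vogelius functional:

```python
def solve_discrepancy(V, target, eps_lo=1e-12, eps_hi=1.0, tol=1e-10):
    """Bisection on the monotone map eps -> V(eps), solving V(eps) = target."""
    assert V(eps_lo) < target < V(eps_hi), "the target must be bracketed"
    while eps_hi - eps_lo > tol:
        mid = 0.5 * (eps_lo + eps_hi)
        if V(mid) < target:
            eps_lo = mid                 # V too small: increase eps
        else:
            eps_hi = mid                 # V too large: decrease eps
    return 0.5 * (eps_lo + eps_hi)

V = lambda eps: eps ** 0.5               # increasing stand-in for eps -> V(lambda_eps^delta)
eps_star = solve_discrepancy(V, target=0.2)
# eps_star is close to 0.04, since sqrt(0.04) = 0.2
```

Existence and uniqueness of the bracketed root are exactly what the continuity and monotonicity quoted from [7] guarantee for the true functional.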
When δ decays toward zero, the convergence of λε^δ towards λ, where ε = ε(δ) solves (23), is guaranteed. In particular, it is proven in [7, Corollary 5.2] that

    lim(δ→0) ε(δ)/√δ = 0.

This shows not only that the Lavrentiev method coupled with the Discrepancy Principle yields a convergent strategy, but also that it displays a behavior similar to that of the Tikhonov method. Let us emphasize that the general theory does not predict that the Lavrentiev regularization associated with an a-posteriori rule for choosing ε results in a convergent strategy (see [18]).

The point we are going to investigate now is how the discrepancy rule affects the local convergence of the computed solution. The Cauchy problem we investigate in what follows is set on the domain depicted in the left panel of the figure above. It looks like a brain diminished by two circular regions. Cauchy data are prescribed on the external portion, while no information is available on the two internal circular boundaries. The exact solution we hope to recover is obtained by the superposition of a polynomial and two dipolar potentials,

    u(x, y) = x³ − 3xy² + (x − 0.025)/((x − 0.025)² + (y − 0.09)²) + (y − 0.02)/((x + 0.08)² + (y − 0.02)²).

The dipoles are located within the circles; they are materialized by crosses in the figure. We draw attention to the fact that the moments of the two dipoles are orthogonal. Computations with different noise levels are realized; the parameter ε is selected by the discrepancy equation (23), solved by a dichotomy method. We then investigate the errors within the whole domain Ω and in the truncated one ΩT, represented in the right panel of the figure. We assume that the Cauchy data are contaminated by a random noise of magnitude δ∞ with respect to the maximum norm. The quantitative observations can be seen in the table above, where the notation ‖u − uN‖∞,Ω stands for the relative maximum norm of the gap (u − uN), with uN = uN(λε^δ, ϕ^δ). The computed solutions are depicted in the figure below and the errors are reported in the table.
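As a sanity check on the closed form above, u is harmonic away from the two dipole locations, so a five-point finite-difference Laplacian (a quick test of our own, not part of the paper's program) should return a value close to zero at any regular point:

```python
def u(x, y):
    # harmonic polynomial Re((x + iy)^3) plus the two dipolar potentials
    return (x**3 - 3.0 * x * y**2
            + (x - 0.025) / ((x - 0.025)**2 + (y - 0.09)**2)
            + (y - 0.02) / ((x + 0.08)**2 + (y - 0.02)**2))

def laplacian(f, x, y, h=1e-3):
    # standard five-point stencil approximation of the Laplacian
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4.0 * f(x, y)) / h**2

print(laplacian(u, 0.3, 0.2))            # small: u is harmonic away from the dipoles
```

The residual is of order h² times fourth derivatives of u, so it grows as the evaluation point approaches either dipole, reflecting the singularities that make this reconstruction hard.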
Fig.: The exact potential u (left) and the Lavrentiev computed potential uN = uN(λε^δ, ϕ^δ) where the data suffer from a noise of 25% (right). Upper panels are for the whole domain and lower panels for the truncated domain ΩT

The main remarks to make here are, first, that zooming away from the incomplete boundary yields the expected improvement in accuracy. The second fact to point out is that, for this example, the real difficulty in recovering the dipolar potential is rather inherent to the singularities generated by the dipoles; the noise level has a less significant effect on the numerical results, provided that the parameter ε is suitably adjusted, by using the discrepancy principle for instance. To gain a better insight into the accuracy of the approximated solution away from the dipoles, we depict in the figure below the potential curve on the internal boundary of ΩT. That curve may be particularly sought after in some applications, such as the reconstruction of the electrical activity at the interface of the brain layers. The numerical evaluation of that potential seems very satisfactory.

Fig.: Potentials at the internal boundary of ΩT. The reconstructed potential is pretty close to the exact one

Conclusion

The aim was to revisit the Lavrentiev regularization applied to the data completion problem in the variational setting of [5]. The analysis elaborated in [7] results in convergence estimates in the global computational domain. Here, we investigate that same method, but at a local level, to demonstrate interesting convergence rates far away from the incomplete boundary, the portion where data are missing. No additional smoothness is assumed on the exact Cauchy solution, and the tool we use is a suitable Carleman inequality picked up in [26].

Acknowledgements We would like to thank M. Azaïez, from Laboratoire TREFLE, for interesting discussions on the subject. This work was partially supported by la région Picardie, programme Appui à l'émergence, for
TDD, and by le Ministère de l'Enseignement Supérieur, de la Recherche Scientifique et de la Technologie (MESRST, Tunisia) under the LR99ES-20 contract, for F.J.

References

1. Adams, D.A.: Sobolev Spaces. Academic Press, New York (1975)
2. Azaïez, M., Ben Belgacem, F., El Fekih, H.: On Cauchy's problem II. Completion, regularization and approximation. Inverse Probl. 22, 1307-1336 (2006)
3. Azaïez, M., Ben Belgacem, F., Du, D.T., Jelassi, F.: A finite element model for the data completion problem: analysis and assessment. Inverse Probl. Sci. Eng. 19, 1063-1086 (2011)
4. Ben Belgacem, F.: Why is the Cauchy's problem severely ill-posed? Inverse Probl. 23, 823-836 (2007)
5. Ben Belgacem, F., El Fekih, H.: On Cauchy's problem I. A variational Steklov-Poincaré theory. Inverse Probl. 21, 1915-1936 (2005)
6. Ben Belgacem, F., Du, D.T., Jelassi, F.: Extended-domain-Lavrentiev's regularization for the Cauchy problem. Inverse Probl. 27, 045005 (2011)
7. Ben Belgacem, F., El Fekih, H., Jelassi, F.: The Lavrentiev regularization of the data completion problem. Inverse Probl. 24, 045009 (2008)
8. Bourgeois, L.: Convergence rates for the quasi-reversibility method to solve the Cauchy problem for Laplace's equation. Inverse Probl. 22, 413-430 (2006)
9. Bourgeois, L.: About stability and regularization of ill-posed elliptic Cauchy problems: the case of C^{1,1} domains. Modél. Math. Anal. Numér. 44, 715-735 (2010)
10. Brühl, M., Hanke, M., Pidcock, M.: Crack detection using electrostatic measurements. Modél. Math. Anal. Numér. 35, 595-605 (2001)
11. Cao, H., Klibanov, M.V., Pereverzev, S.V.: A Carleman estimate and the balancing principle in the quasi-reversibility method for solving the Cauchy problem for the Laplace equation. Inverse Probl. 25, 035005 (2009)
12. Colli Franzone, P., Magenes, E.: On the inverse potential problem of electrocardiology. Calcolo 16, 459-538 (1979)
13. Dauge, M.: Elliptic Boundary Value Problems in Corner Domains. Lecture Notes in Mathematics, vol. 1341. Springer, Berlin (1988)
14. Du, D.T.: A Lavrentiev finite element model for the Cauchy problem of data completion: analysis and numerical assessment. Ph.D. Thesis, Université de Technologie de Compiègne (March 2011)
15. Du, D.T., Jelassi, F.: A preconditioned Richardson regularization for the data completion problem and the Kozlov-Maz'ya-Fomin method. ARIMA 13, 17-32 (2010)
16. Friedman, A., Vogelius, M.S.: Determining cracks by boundary measurements. Indiana Univ. Math. J. 38, 527-556 (1989)
17. Fursikov, A., Imanuvilov, O.Y.: Controllability of Evolution Equations. Lecture Notes Series, vol. 34. RIM-GARC, Seoul National University, Seoul (1996)
18. Groetsch, C.W.: Comments on Morozov's discrepancy principle. In: Hämmerlin, G., Hoffmann, K.H. (eds.) Improperly Posed Problems and Their Numerical Treatment, pp. 97-104. Birkhäuser, Basel (1983)
19. Hofmann, B., Mathé, P., von Weizsäcker, H.: Regularization in Hilbert space under unbounded operators and general source conditions. Inverse Probl. 25, 115013 (2009)
20. Janno, J., Tautenhahn, U.: On Lavrentiev regularization for ill-posed problems in Hilbert scales. Numer. Funct. Anal. Optim. 24, 531-555 (2003)
21. Kohn, R.V., Vogelius, M.S.: Determining conductivity by boundary measurements II. Interior results. Commun. Pure Appl. Math. 38, 643-667 (1985)
22. Lavrentiev, M.M.: Some Improperly Posed Problems of Mathematical Physics. Springer, New York (1967)
23. Mathé, P., Hofmann, B.: How general are general source conditions? Inverse Probl. 24, 015009 (2008)
24. Morozov, V.A.: On the solution of functional equations by the method of regularization. Sov. Math. Dokl. 7, 414-417 (1966)
25. Nair, M.T., Tautenhahn, U.: Lavrentiev regularization for linear ill-posed problems under general source conditions. J. Anal. Appl. 23, 167-185 (2004)
26. Tataru, D.: A-priori estimates of Carleman's type in domains with boundary. J. Math. Pures Appl. 73, 355-387 (1994)