Direct problem

Let Ω be an open bounded domain in R^n, n ≥ 1, with boundary ∂Ω. Denote Q = Ω × (0, T], S = ∂Ω × (0, T]. Let

a_ij, b ∈ L^∞(Q), i, j ∈ {1, 2, ..., n}, (1.1)
a_ij = a_ji, i, j ∈ {1, 2, ..., n}, (1.2)
λǁξǁ² ≤ Σ_{i,j=1}^n a_ij(x, t)ξ_iξ_j ≤ λ^{−1}ǁξǁ², ∀ξ ∈ R^n, ∀(x, t) ∈ Q, (1.3)
0 ≤ b(x, t) ≤ μ a.e. in Q, (1.4)
v ∈ L²(Ω), g ∈ L²(S), f ∈ L²(Q), (1.5)

where λ is a positive constant and μ ≥ 0. (1.6)
Consider the initial value problem

u_t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂u/∂x_j) + b(x, t)u = f in Q, (1.7)
u(x, 0) = v(x) in Ω, (1.8)

with either Dirichlet boundary condition

u = 0 on S, (1.9)

or Neumann boundary condition

Σ_{i,j=1}^n a_ij(x, t) (∂u/∂x_j) ν_i = g on S, (1.10)

where ν is the outer unit normal vector to S.

When the coefficients of (1.7), v, g and f are given, the problem of uniquely solving u(x, t) from the system (1.7)-(1.9) or (1.7), (1.8), (1.10) is called the direct problem [24, 82].

To study these problems, we introduce the following standard Sobolev spaces (see [24, 29]).

Definition 1.1.1 The space H¹(Ω) is the set of all elements u(x) ∈ L²(Ω) having generalized derivatives ∂u/∂x_i ∈ L²(Ω), i = 1, ..., n, with scalar product
(u, w)_{H¹(Ω)} = ∫_Ω (uw + Σ_{i=1}^n (∂u/∂x_i)(∂w/∂x_i)) dx.

Definition 1.1.2 The space H¹_0(Ω) is the completion of C¹_0(Ω) in the norm of H¹(Ω). In case ∂Ω is smooth, we have
H¹_0(Ω) = {u ∈ H¹(Ω) : u = 0 on ∂Ω}.

Definition 1.1.3 The space H^{1,0}(Q) is the set of all elements u(x, t) ∈ L²(Q) having generalized derivatives ∂u/∂x_i ∈ L²(Q), i = 1, ..., n, with scalar product
(u, w)_{H^{1,0}(Q)} = ∫_Q (uw + Σ_{i=1}^n (∂u/∂x_i)(∂w/∂x_i)) dx dt.

Definition 1.1.4 The space H^{1,1}(Q) is the set of all elements u(x, t) ∈ L²(Q) having generalized derivatives ∂u/∂x_i ∈ L²(Q), i = 1, ..., n, and ∂u/∂t ∈ L²(Q), with scalar product
(u, w)_{H^{1,1}(Q)} = ∫_Q (uw + Σ_{i=1}^n (∂u/∂x_i)(∂w/∂x_i) + (∂u/∂t)(∂w/∂t)) dx dt.

Definition 1.1.5 The space H^{1,0}_0(Q) is the set of all elements u(x, t) ∈ H^{1,0}(Q) vanishing on S.

Definition 1.1.6 The space H^{1,1}_0(Q) is the set of all elements u(x, t) ∈ H^{1,1}(Q) vanishing on S.
Let B be a Banach space. We define L²(0, T; B) as the space of measurable functions u : (0, T) → B with
ǁuǁ_{L²(0,T;B)} = (∫_0^T ǁu(t)ǁ²_B dt)^{1/2} < ∞.
We also define
W(0, T; H¹(Ω)) = {u : u ∈ L²(0, T; H¹(Ω)), u_t ∈ L²(0, T; (H¹(Ω))′)},
with norm
ǁuǁ²_{W(0,T;H¹(Ω))} = ǁuǁ²_{L²(0,T;H¹(Ω))} + ǁu_tǁ²_{L²(0,T;(H¹(Ω))′)}.
The space W(0, T; H¹_0(Ω)) is defined similarly, with the note that (H¹_0(Ω))′ = H^{−1}(Ω).
The solutions of the Dirichlet problem (1.7)-(1.9) and the Neumann problem (1.7), (1.8), (1.10) are understood in the weak sense as follows:

Definition 1.1.7 A weak solution in W(0, T; H¹_0(Ω)) of the problem (1.7)-(1.9) is a function u(x, t) ∈ W(0, T; H¹_0(Ω)) satisfying the identity
∫_0^T ⟨u_t, η⟩ dt + ∫_Q (Σ_{i,j=1}^n a_ij u_{x_j} η_{x_i} + buη) dx dt = ∫_Q fη dx dt, ∀η ∈ L²(0, T; H¹_0(Ω)),
and u(x, 0) = v.

Definition 1.1.8 A weak solution in W(0, T; H¹(Ω)) of the problem (1.7), (1.8), (1.10) is a function u(x, t) ∈ W(0, T; H¹(Ω)) satisfying the identity
∫_0^T ⟨u_t, η⟩ dt + ∫_Q (Σ_{i,j=1}^n a_ij u_{x_j} η_{x_i} + buη) dx dt = ∫_Q fη dx dt + ∫_S gη dS dt, ∀η ∈ L²(0, T; H¹(Ω)),
and u(x, 0) = v.
Due to [24, pp. 35-46], [41], [82, pp. 141-152] and [83, Chapter IV], we have the following results about the well-posedness of the Dirichlet and Neumann problems.
Theorem 1.1.1 Let the conditions (1.1)-(1.6) be satisfied. The following statements hold:
1) There exists a unique solution u ∈ W(0, T; H¹_0(Ω)) to the Dirichlet problem (1.7)-(1.9). Furthermore, there exists a positive constant c_D independent of the initial condition v and the right-hand side f (it depends only on a_ij, b and Ω) such that
ǁuǁ_{W(0,T;H¹_0(Ω))} ≤ c_D (ǁfǁ_{L²(Q)} + ǁvǁ_{L²(Ω)}). (1.15)
2) If v ∈ H¹_0(Ω), a_ij, b ∈ C¹([0, T]; L^∞(Ω)), i, j = 1, ..., n, and there exists a constant μ_1 such that |∂a_ij/∂t|, |∂b/∂t| ≤ μ_1, then u ∈ H^{1,1}(Q).
Theorem 1.1.2 Let the conditions (1.1)-(1.6) be satisfied. The following statements hold:
1) There exists a unique solution u ∈ W(0, T; H¹(Ω)) to the Neumann problem (1.7), (1.8), (1.10). Furthermore, there exists a positive constant c_N independent of the initial condition v, the boundary condition g and the right-hand side f (it depends only on a_ij, b and Ω) such that
ǁuǁ_{W(0,T;H¹(Ω))} ≤ c_N (ǁfǁ_{L²(Q)} + ǁgǁ_{L²(S)} + ǁvǁ_{L²(Ω)}). (1.16)
2) If v ∈ H¹(Ω), g ∈ H^{0,1}(S), a_ij, b ∈ C¹([0, T]; L^∞(Ω)), i, j = 1, ..., n, and there exists a constant μ_1 such that |∂a_ij/∂t|, |∂b/∂t| ≤ μ_1, then u ∈ H^{1,1}(Q). In this case, there exists a constant, denoted again by c_N, such that
ǁuǁ_{H^{1,1}(Q)} ≤ c_N (ǁfǁ_{L²(Q)} + ǁgǁ_{H^{0,1}(S)} + ǁvǁ_{H¹(Ω)}).
In addition to Definition 1.1.7 and Definition 1.1.8, we introduce the following definitions. The weak solutions in H^{1,0}(Q) to the Dirichlet problem (1.7)-(1.9) and the Neumann problem (1.7), (1.8), (1.10) are understood as follows:

Definition 1.1.9 A weak solution in H^{1,0}(Q) to the problem (1.7)-(1.9) is a function u ∈ H^{1,0}(Q) satisfying the identity
∫_Q (−uη_t + Σ_{i,j=1}^n a_ij u_{x_j} η_{x_i} + buη) dx dt = ∫_Ω vη(x, 0) dx + ∫_Q fη dx dt (1.18)
for all η ∈ H^{1,1}_0(Q) vanishing at t = T.

Definition 1.1.10 A weak solution in H^{1,0}(Q) to the problem (1.7), (1.8), (1.10) is a function u ∈ H^{1,0}(Q) satisfying the identity
∫_Q (−uη_t + Σ_{i,j=1}^n a_ij u_{x_j} η_{x_i} + buη) dx dt = ∫_Ω vη(x, 0) dx + ∫_Q fη dx dt + ∫_S gη dS dt (1.19)
for all η ∈ H^{1,1}(Q) vanishing at t = T.

It has been shown that the solutions belonging to H^{1,0}(Q) in the two above definitions are in fact in W(0, T; H¹(Ω)).
Adjoint problem

To study the variational problem for data assimilation in heat conduction, we need some results related to the adjoint problems and Green's formula. The following results can be proved in the same way as in [82, §3.6.1, pp. 156-158].

Consider the adjoint problem to (1.7)-(1.9):
−p_t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂p/∂x_j) + b(x, t)p = a_Q in Q,
p = 0 on S,
p(x, T) = a_Ω in Ω, (1.20)
where a_Q ∈ L²(Q) and a_Ω ∈ L²(Ω). We define the solution of this problem as a function p ∈ W(0, T; H¹_0(Ω)) satisfying the corresponding variational identity. By changing the time direction t → T − t and using the result of Theorem 1.1.1, we see that there exists a unique solution p ∈ W(0, T; H¹_0(Ω)), and p also satisfies an a priori inequality similar to (1.15). We have the following result:
Theorem 1.2.1 Suppose that the conditions (1.1)-(1.4) hold. Let y ∈ W(0, T; H¹_0(Ω)) be the solution to the problem
y_t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂y/∂x_j) + b(x, t)y = b_Q in Q,
y = 0 on S,
y(x, 0) = b_Ω in Ω, (1.21)
with b_Q ∈ L²(Q) and b_Ω ∈ L²(Ω). Assume that a_Q ∈ L²(Q), a_Ω ∈ L²(Ω) and p ∈ W(0, T; H¹_0(Ω)) is the weak solution to the adjoint problem (1.20). Then we have Green's formula
∫_Ω a_Ω y(·, T) dx + ∫_Q a_Q y dx dt = ∫_Ω b_Ω p(·, 0) dx + ∫_Q b_Q p dx dt. (1.22)
Similarly, consider the adjoint problem to the Neumann problem (1.7), (1.8), (1.10):
−p_t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂p/∂x_j) + b(x, t)p = a_Q in Q,
∂_N p = a_S on S,
p(x, T) = a_Ω in Ω, (1.23)
where ∂_N p := Σ_{i,j=1}^n a_ij(x, t) (∂p/∂x_j) ν_i, a_Q ∈ L²(Q), a_S ∈ L²(S), and a_Ω ∈ L²(Ω). We define the solution to this problem as a function p ∈ W(0, T; H¹(Ω)) satisfying the corresponding variational identity. We can also prove that there exists a unique solution p ∈ W(0, T; H¹(Ω)) to this problem and that it satisfies an a priori inequality similar to (1.16).

Theorem 1.2.2 Let y ∈ W(0, T; H¹(Ω)) be the solution to the problem
y_t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂y/∂x_j) + b(x, t)y = b_Q in Q,
∂_N y = b_S on S,
y(x, 0) = b_Ω in Ω,
with b_Q ∈ L²(Q), b_S ∈ L²(S) and b_Ω ∈ L²(Ω). Assume that a_Q ∈ L²(Q), a_S ∈ L²(S), a_Ω ∈ L²(Ω) and p ∈ W(0, T; H¹(Ω)) is the weak solution to the adjoint problem (1.23). Then we have Green's formula
∫_Ω a_Ω y(·, T) dx + ∫_Q a_Q y dx dt + ∫_S a_S y dS dt = ∫_Ω b_Ω p(·, 0) dx + ∫_Q b_Q p dx dt + ∫_S b_S p dS dt.
In the next two sections, we present the finite difference method for solving the direct problems. Note that the solutions to the Dirichlet and Neumann problems studied in this thesis are understood in the weak sense; hence, the convergence results of the finite difference method for them are not trivial. For clarity of presentation, we describe the finite difference method separately for one-dimensional and multi-dimensional problems.
Finite difference method for one-dimensional direct problems

Discretization in the space variable
We approximate the integrals in the first equations of the systems (1.27) and (1.29) as in (1.30)-(1.33), using the averaged coefficients and data
a_i(t) = (1/h) ∫_{x_i}^{x_{i+1}} a(x, t) dx, b_i(t) = (1/h) ∫_{x_i}^{x_{i+1}} b(x, t) dx, f_i(t) = (1/h) ∫_{x_i}^{x_{i+1}} f(x, t) dx, (1.34)
so that terms of the form ∫_0^T b_i(t)u_i(t)η_i(t) dt and ∫_0^T f_i(t)η_i(t) dt appear in place of the corresponding integrals. Similarly, the one-dimensional Neumann problem (1.7), (1.8), (1.10) is rewritten in this averaged form.
Putting the approximations (1.30)-(1.33) into (1.27), we obtain the identity (1.35). Since η in (1.35) is arbitrary, it follows that
du¯(t)/dt + Λ(t)u¯(t) = F¯(t), u¯(0) = v¯. (1.37)
The coefficient matrix Λ is tridiagonal and defined by formula (1.38): up to the scaling factor 1/h², its first row is (a_0 + (h²/2)b_0, −a_0, 0, ..., 0), and the subsequent rows are built analogously from a_i and b_i.
Similarly to the Dirichlet problem, putting the approximations (1.30)-(1.33) in (1.29), we obtain the identity with right-hand side
∫_0^T (Σ_i f_i η_i − g_0 η_0 + g_{N_x} η_{N_x}) dt, (1.39)
together with u_i(0) = v_i, i = 0, ..., N_x, where v_i, a_i and f_i are given by formulas (1.34) and (1.36). Thus, we get the following system:
du¯(t)/dt + Λ(t)u¯(t) = F¯(t), u¯(0) = v¯, (1.40)
where the coefficient matrix Λ is given by formula (1.38) and the right-hand side F¯(t) is given accordingly, now containing the boundary data g.
The positive semi-definiteness of Λ in (1.37) and (1.40) is proved as follows.

Lemma 1.3.1 For each t, the coefficient matrix Λ defined in the systems (1.37) and (1.40) is positive semi-definite.

Proof. Put U = (U_0, U_1, ..., U_{N_x})′. It follows from formula (1.38) that the quadratic form of Λ can be rewritten in the compact form
⟨ΛU, U⟩ = (1/h²) Σ_{i=0}^{N_x−1} a_i (U_{i+1} − U_i)² + Σ_{i=0}^{N_x} b_i U_i² ≥ 0,
since a_i ≥ λ > 0 by (1.3) and b_i ≥ 0 by (1.4). Consequently, Λ is positive semi-definite. The proof is complete.
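The compact form of the quadratic form in the proof can be checked numerically. The sketch below uses hypothetical grid data and assembles the form Σ a_i(U_{i+1} − U_i)²/h² + Σ b_i U_i² directly (it may differ from (1.38) in the weighting of the boundary terms), then verifies nonnegativity on random vectors:

```python
import random

def quad_form(a, b, h, U):
    """Quadratic form <Lambda U, U> as in the proof of Lemma 1.3.1:
    sum_i a_i (U_{i+1} - U_i)^2 / h^2  +  sum_i b_i U_i^2."""
    diff = sum(a[i] * (U[i + 1] - U[i]) ** 2 for i in range(len(U) - 1)) / h ** 2
    react = sum(b[i] * U[i] ** 2 for i in range(len(U)))
    return diff + react

random.seed(0)
Nx, h, lam = 8, 0.125, 0.5
a = [lam + random.random() for _ in range(Nx)]   # a_i >= lambda > 0, cf. (1.3)
b = [random.random() for _ in range(Nx + 1)]    # b_i >= 0, cf. (1.4)

# positive semi-definiteness: the form is nonnegative for every vector U
for _ in range(100):
    U = [random.uniform(-1, 1) for _ in range(Nx + 1)]
    assert quad_form(a, b, h, U) >= 0.0
print("quadratic form nonnegative on all samples")
```

The check mirrors the proof exactly: each summand is a square weighted by a nonnegative coefficient.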
Discretization in time

Now we discretize (1.37) and (1.40) in time by the Crank-Nicolson method. We subdivide the interval (0, T) into N_t uniform subintervals by the grid 0 = t_0 < t_1 < ··· < t_{N_t} = T with t_{m+1} − t_m = ∆t = T/N_t, m being the time index. Denoting u^m = u¯(t_m), Λ^m = Λ(t_m), F^m = F¯(t_m), m = 0, 1, ..., N_t, and Λ^{m+1/2} = Λ(t_m + ∆t/2), we discretize (1.37) and (1.40) by ([56])
(u^{m+1} − u^m)/∆t + Λ^{m+1/2} (u^{m+1} + u^m)/2 = F^{m+1/2}, u^0 = v¯. (1.43)

On the other hand, since Λ^m is positive semi-definite, it follows from Kellogg's Lemma that
ǁ(E + (∆t/2)Λ^{m+1/2})^{−1}(E − (∆t/2)Λ^{m+1/2})ǁ ≤ 1. (1.44)

Thus, from (1.44) we obtain
ǁu^{m+1}ǁ ≤ ǁu^mǁ + ∆tǁF^{m+1/2}ǁ,
ǁu^mǁ ≤ ǁu^{m−1}ǁ + ∆tǁF^{m−1/2}ǁ,
···
ǁu^1ǁ ≤ ǁu^0ǁ + ∆tǁF^{1/2}ǁ.

Putting ǁvǁ = ǁu^0ǁ and ǁf ǁ = max_m ǁF^{m+1/2}ǁ, we obtain
ǁu^{m+1}ǁ ≤ ǁvǁ + (m + 1)∆tǁf ǁ. (1.45)
Consequently, the finite difference scheme (1.43) is stable.
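A minimal sketch of the scheme (1.43) for a tridiagonal Λ, with hypothetical constant coefficients a_i ≡ 1, b_i ≡ 0 on a uniform grid. The Thomas algorithm solves the tridiagonal system at each step; with F ≡ 0 the norms ǁu^mǁ are non-increasing, as predicted by (1.44):

```python
import math

def thomas(lo, dg, up, rhs):
    """Solve a tridiagonal system (forward elimination + back substitution)."""
    n = len(dg)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = up[0] / dg[0], rhs[0] / dg[0]
    for i in range(1, n):
        m = dg[i] - lo[i] * c[i - 1]
        c[i] = up[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lo[i] * d[i - 1]) / m
    x = d[:]
    for i in range(n - 2, -1, -1):
        x[i] -= c[i] * x[i + 1]
    return x

def cn_step(u, lam, dt):
    """One Crank-Nicolson step (E + dt/2 L) u_new = (E - dt/2 L) u_old, F = 0.
    lam = (lower, diag, upper) of the tridiagonal matrix Lambda."""
    lo, dg, up = lam
    n = len(u)
    rhs = [u[i] - 0.5 * dt * (dg[i] * u[i]
           + (lo[i] * u[i - 1] if i > 0 else 0.0)
           + (up[i] * u[i + 1] if i < n - 1 else 0.0)) for i in range(n)]
    A_lo = [0.5 * dt * v for v in lo]
    A_dg = [1.0 + 0.5 * dt * v for v in dg]
    A_up = [0.5 * dt * v for v in up]
    return thomas(A_lo, A_dg, A_up, rhs)

# hypothetical Dirichlet-type Lambda with a_i = 1, h = 0.1
n, h, dt = 9, 0.1, 0.01
lo = [-1.0 / h**2] * n
up = [-1.0 / h**2] * n
dg = [2.0 / h**2] * n
u = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
norms = []
for _ in range(20):
    norms.append(math.sqrt(sum(x * x for x in u)))
    u = cn_step(u, (lo, dg, up), dt)
norms.append(math.sqrt(sum(x * x for x in u)))
assert all(norms[m + 1] <= norms[m] + 1e-12 for m in range(20))
print("norms non-increasing:", round(norms[0], 4), "->", round(norms[-1], 4))
```

Because Λ here is symmetric positive semi-definite, the Crank-Nicolson transition operator is non-expansive in the Euclidean norm, which is exactly the content of (1.44).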
Finite difference method for multi-dimensional direct problems

Interpolations of grid functions

We suppose that Ω = (0, L_1) × (0, L_2) × ··· × (0, L_n) and subdivide Ω into small cells by a uniform rectangular grid. Here h_i = L_i/N_i is the grid size in the x_i-direction, i = 1, ..., n. The Neumann problem (1.7), (1.8), (1.10) is rewritten in an analogous integral form. We denote by:
• h := (h_1, ..., h_n) the vector of spatial grid sizes;
• e_i, i = 1, ..., n, the unit vector in the x_i-direction, i.e. e_1 = (1, 0, ..., 0), etc.
Around each grid point, we define the following subsets of Ω:
ω^+(k) := {x ∈ Ω : k_i h_i < x_i < (k_i + 1)h_i, ∀i = 1, ..., n}, (1.50)
ω(k) := {x ∈ Ω : (k_i − 0.5)h_i < x_i < (k_i + 0.5)h_i, ∀i = 1, ..., n}, (1.51)
ω_i^+(k) := {x ∈ Ω : k_i h_i ≤ x_i ≤ (k_i + 1)h_i, (k_j − 0.5)h_j ≤ x_j ≤ (k_j + 0.5)h_j, ∀j ≠ i}. (1.52)

The set of the indices of all grid points belonging to Ω̄ is denoted by Ω̄_h, that is,
Ω̄_h := {k = (k_1, ..., k_n) : 0 ≤ k_i ≤ N_i, ∀i = 1, ..., n}. (1.53)
The set of the indices of all interior grid points is denoted by Ω_h, that is,
Ω_h := {k = (k_1, ..., k_n) : 1 ≤ k_i ≤ N_i − 1, ∀i = 1, ..., n}. (1.54)
The set of the indices of all boundary grid points is denoted by Π_h, that is, Π_h = Ω̄_h \ Ω_h. (1.55)

Moreover, we use the following notation for subsets of Π_h:
Π_il := {k = (k_1, k_2, ..., k_n) : k_i = 0}, i = 1, ..., n, (1.56)
Π_ir := {k = (k_1, k_2, ..., k_n) : k_i = N_i}, i = 1, ..., n. (1.57)

The points x^k with k_i = 0 or N_i for all i = 1, ..., n are called the corner points, and the set of the indices of all corner points is denoted by Π_0.

We also make use of the following sets:
Ω̄_i := {k = (k_1, ..., k_n) : 0 ≤ k_i ≤ N_i − 1, 0 ≤ k_j ≤ N_j, ∀j ≠ i}, i = 1, ..., n. (1.58)
For a function u(x, t) defined in Q, we denote by u¯^k(t) (or u^k if there is no confusion), or u(k, t), its approximate value at (x^k, t). Suppose that u¯ = {u^k, k ∈ Ω̄_h} is a grid function defined in Q_hT := Ω̄_h × (0, T) which has a first weak derivative with respect to t. We define
ǁu¯ǁ²_{H^{1,0}(Q_hT)} := ∫_0^T (∆h Σ_{k∈Ω̄_h} |u¯^k(t)|² + ∆h Σ_{i=1}^n Σ_{k∈Ω̄_i} |u¯^k_{x_i}(t)|²) dt, ∆h := h_1 ··· h_n,
with the forward and backward difference quotients defined as
u¯^k_{x_i} := (u¯^{k+e_i} − u¯^k)/h_i, u¯^k_{x̄_i} := (u¯^k − u¯^{k−e_i})/h_i.
When u^h is a grid function, we denote (u^h)_{x_i} by u^h_{x_i}.

For a grid function u¯ we define two interpolations in Q: the multi-linear interpolation uˆ¯(x, t) (1.62) and the piecewise-constant interpolation u˜¯(x, t). From these formulas, with n = 2, we have, for x ∈ ω^+(k),
uˆ¯(x, t) := u¯^k(t) + u¯^k_{x_1}(t)(x_1 − k_1h_1) + u¯^k_{x_2}(t)(x_2 − k_2h_2) + u¯^k_{x_1x_2}(t)(x_1 − k_1h_1)(x_2 − k_2h_2).
From Theorem 3.2 and Theorem 3.3 of Chapter 6 in [40] and the comments therein, we have the following results.

Lemma 1.4.1 Suppose that the grid function u¯ satisfies the inequality ǁu¯ǁ_{H^{1,0}(Q_hT)} ≤ C, (1.65) with C being a constant independent of h. Then the multi-linear interpolation (1.62) of u¯ is bounded in H^{1,0}(Q).

Lemma 1.4.2 Suppose that the grid function u¯ satisfies the inequality ǁu¯ǁ_{H^{1,1}(Q_hT)} ≤ C, (1.66) with C being a constant independent of h. Then the multi-linear interpolation (1.62) of u¯ is bounded in H^{1,1}(Q).

Asymptotic relationships between the two interpolations as the grid size h tends to zero are stated in the following lemmas.

Lemma 1.4.3 Suppose that the hypothesis of Lemma 1.4.1 is fulfilled. Then if {uˆ¯(x, t)}_h converges weakly to a function u(x, t) in L²(Q) as the grid size h tends to zero, the sequence {u˜¯(x, t)}_h also converges weakly to u(x, t) in L²(Q). Moreover, if {uˆ¯|_S}_h converges weakly to u|_S in L²(S), then {u˜¯|_S}_h also converges weakly to u|_S in L²(S).

Lemma 1.4.4 Suppose that the hypothesis of Lemma 1.4.2 is fulfilled. Then if {uˆ¯(x, t)}_h converges strongly to a function u(x, t) in L²(Q) as the grid size h tends to zero, the sequence {u˜¯(x, t)}_h also converges strongly to u(x, t) in L²(Q). Moreover, if {uˆ¯|_S}_h converges to u|_S in L²(S), then {u˜¯|_S}_h also converges to u|_S in L²(S).

Lemma 1.4.5 Suppose that the hypothesis of Lemma 1.4.2 is fulfilled and i ∈ {1, ..., n}. Then if the sequence of derivatives {uˆ¯_{x_i}(x, t)}_h converges weakly to a function u(x, t) in L²(Q) as h tends to zero, the sequence {u˜¯_{x_i}(x, t)}_h also converges to u(x, t) in L²(Q), where u˜¯_{x_i} denotes the piecewise-constant interpolation of the difference quotient u¯_{x_i}.
Discretization in space variables and the convergence of the finite difference scheme

For a function z defined in Ω, we define its average on ω(k) by
z^k = (1/|ω(k)|) ∫_{ω(k)} z(x) dx.
Here, if k belongs to the boundary of Ω, then we understand ω(k) to be the grid box of Ω containing x^k.

We approximate the integrals in (1.47) and (1.49) as follows.
a) The Dirichlet problem (1.46)

Substituting the integrals (1.68)-(1.71) into the first equation of (1.47), we obtain the equality (1.73) (dropping the time variable t for a moment).

Using the summation-by-parts formula together with the condition η¯^k = 0 when k_i = 0 or k_i = N_i, we have
Σ_{k∈Ω̄_i} a¯_i^k u¯^k_{x_i} η¯^k_{x_i} = −Σ_{k∈Ω_h} (a¯_i^k u¯^k_{x_i})_{x̄_i} η¯^k. (1.74)
Replacing this into (1.73) and approximating the initial condition by u¯^k(0) = v¯^k, k ∈ Ω̄_h, we obtain the following system approximating the original problem (1.47):
du¯/dt + Σ_{i=1}^n Λ_i u¯ − F¯ = 0, u¯(0) = v¯, (1.75)
with u¯ = {u^k, k ∈ Ω̄_h}, v¯ = {v^k, k ∈ Ω̄_h} a grid function approximating the initial condition v, and F¯ = {f¯^k, k ∈ Ω_h}, where f¯^k is defined in formula (1.67). The coefficient matrix Λ_i has the form (1.76).

We note that the coefficient matrices Λ_i defined by (1.76) are positive semi-definite; the proof of this assertion is similar to the proof of Lemma 1.4.6 for the Neumann problem in the following part.

b) The Neumann problem (1.48)
Putting (1.68)-(1.72) in the first equation of (1.49) and, for ease of notation, dropping the time variable t for a moment, we obtain the equality (1.77).

For the term in the x_1-direction, with arbitrary and fixed indices k_2, ..., k_n, using summation by parts we obtain the equality (1.78). The similar terms in the x_2, ..., x_n directions are treated in the same way. Then, replacing these equalities into (1.77) and approximating the initial condition in the second equation of (1.49), we obtain the following system approximating the original problem (1.49):
du¯/dt + Σ_{i=1}^n Λ_i u¯ − F¯ = 0, u¯(0) = v¯, (1.79)
with u¯ = {u^k, k ∈ Ω̄_h} and v¯ = {v^k, k ∈ Ω̄_h} a grid function approximating the initial condition v. The matrices Λ_i are defined by (1.80)-(1.81). Moreover, the right-hand side F¯ consists of f¯^k at the interior points, with additional terms involving the boundary data g¯^k weighted by the factors 2/h_i on the boundary faces k ∈ Π_il ∪ Π_ir, and corresponding combinations at the corner points k ∈ Π_0.

Next, we will prove that the coefficient matrices Λ_i defined by (1.80) are positive semi-definite.
Lemma 1.4.6 For any t, the coefficient matrices Λ_i, i = 1, 2, ..., n, are positive semi-definite.

Proof. Without loss of generality, we assume n = 2 and i = 1. Setting U_{k_2} = (U^{(0,k_2)}, U^{(1,k_2)}, ..., U^{(N_1,k_2)})′ and taking into account that b̄^k ≥ 0, we see that the quadratic form of Λ_1 is a sum of nonnegative terms. Hence Λ_1 is positive semi-definite. The proof is complete.
Next, we prove the boundedness of the solutions of (1.75) and (1.79).

Lemma 1.4.7 Let u¯ be a solution of the Cauchy problem (1.75).
a) If v ∈ L²(Ω), then there exists a constant c independent of h and of the coefficients of the equation such that
max_{t∈[0,T]} ∆h Σ_{k∈Ω̄_h} |u¯^k(t)|² + Σ_{i=1}^n ∫_0^T ∆h Σ_{k∈Ω̄_i} |u¯^k_{x_i}(t)|² dt ≤ c (∫_0^T ∆h Σ_{k∈Ω_h} |f¯^k(t)|² dt + ∆h Σ_{k∈Ω̄_h} |v¯^k|²). (1.83)
b) Moreover, if a_i, b ∈ C¹([0, T], L^∞(Ω)) with |∂a_i/∂t|, |∂b/∂t| ≤ μ_2 < ∞, and v ∈ H¹(Ω), then the time derivative of u¯ admits the analogous discrete bound (1.84).
Proof. a) For arbitrary t* ∈ (0, T], set y(t*) = ∆h Σ_{k∈Ω̄_h} |u¯^k(t*)|². Taking η¯ = u¯ in (1.73) and noting that ∆h Σ_k |u¯^k(0)|² = ∆h Σ_k |v¯^k|² since u¯^k(0) = v¯^k, we obtain the equality (1.85). Multiplying both sides of the equality (1.85) by 2, applying Cauchy's inequality to the first term on the right-hand side and noting that b̄^k ≥ 0, we obtain a differential inequality for y. Applying Gronwall's inequality, we obtain
y(t*) ≤ c (∆h ∫_0^{t*} Σ_{k∈Ω_h} |f¯^k|² dt + ∆h Σ_{k∈Ω̄_h} |v¯^k|²). (1.86)
From the conditions (1.1)-(1.3) on the coefficients a_i and the inequalities (1.86) and (1.87), we obtain a bound for the difference quotients. Combining the two inequalities, we obtain the inequality (1.83).
b) In order to obtain a bound for ∫_0^T ∆h Σ_k |du¯^k/dt|² dt, we replace η¯^k in (1.73) by du¯^k/dt and obtain the equality (1.90). Multiplying both sides of (1.90) by 2, integrating by parts in the second and third terms on the left-hand side and using Cauchy's inequality for the right-hand side, we obtain an estimate for the time derivative. From this inequality, since b ≥ 0, using the conditions (1.1)-(1.3) on the coefficients a_i, the inequalities (1.86), (1.87) and the hypothesis of the lemma, we obtain a bound in terms of the data. Using this inequality and (1.89), and then the inequality (1.83), we obtain (1.84).
If v ∈ L²(Ω), the right-hand side of (1.83) is bounded by c(ǁfǁ²_{L²(Q)} + ǁvǁ²_{L²(Ω)}). On the other hand, if v ∈ H¹(Ω), the right-hand side of (1.84) is bounded by c(ǁfǁ²_{L²(Q)} + ǁvǁ²_{H¹(Ω)}). Consequently, it follows from Lemmas 1.4.7, 1.4.2 and 1.4.4 that the multi-linear interpolation of u¯ converges to the solution of the problem (1.46) as the grid size h tends to zero. This is stated in the following theorem. To emphasize the dependence of the solution of the discretized problem on the grid size, we now use the notation u^h instead of u¯.
Theorem 1.4.8 1) If v ∈ L²(Ω), then the multi-linear interpolation (1.62) uˆ^h of the solution of the difference-differential problem (1.73) converges weakly in L²(Q) to the solution u ∈ H^{1,0}(Q) of the original problem (1.46), and its derivatives with respect to x_i, i = 1, ..., n, converge weakly in L²(Q) to u_{x_i}.
2) If a_i, b ∈ C¹([0, T], L^∞(Ω)) with |∂a_i/∂t|, |∂b/∂t| ≤ μ_2 < ∞, and v ∈ H¹(Ω), then uˆ^h converges strongly in L²(Q) to u. Furthermore, uˆ^h|_S converges to u|_S in L²(S).

The proof of this theorem is similar to that of Theorem 1.4.11 below; therefore we omit it.
We now prove the boundedness of the finite difference approximation to the solution of the Neumann problem (1.49). We have the following results.

Lemma 1.4.9 Let u¯ be a solution of the Cauchy problem (1.79).
a) If v ∈ L²(Ω), then there exists a constant c independent of h and of the coefficients of the equation (1.7) such that
max_{t∈[0,T]} ∆h Σ_{k∈Ω̄_h} |u¯^k(t)|² + Σ_{i=1}^n ∫_0^T ∆h Σ_{k∈Ω̄_i} |u¯^k_{x_i}(t)|² dt ≤ c (∫_0^T (∆h Σ_{k∈Ω_h} |f¯^k(t)|² + Σ_{i=1}^n (∆h/h_i) Σ_{k∈Π_il∪Π_ir} |g¯^k(t)|²) dt + ∆h Σ_{k∈Ω̄_h} |v¯^k|²). (1.93)
b) If a_i, b ∈ C¹([0, T], L^∞(Ω)) with |∂a_i/∂t|, |∂b/∂t| ≤ μ_2 < ∞ and if g ∈ H^{0,1}(S), then max_{t∈[0,T]} ∆h Σ_{k∈Ω̄_h} |u¯^k(t)|² together with the time derivative of u¯ admits an analogous bound, with ǁgǁ_{H^{0,1}(S)} and ǁvǁ_{H¹(Ω)} in place of the L²-norms. (1.94)
Proof. The proof of this lemma is similar to that of Lemma 1.4.7, but differs in the estimation of the sum over the boundary.
a) Similarly to the proof of Lemma 1.4.7, for arbitrary t* ∈ (0, T], we take η¯ = u¯ in (1.77). Using the ǫ-Cauchy inequality for the terms on the right-hand side, taking into account that b̄^k ≥ 0 and the condition (1.3), we obtain an inequality in which the positive constants ǫ_1 and ǫ_2 will be chosen later.

We estimate the last sum in this inequality, the sum over the boundary grid points. In the continuous case, the L²-norm of the trace of a function ϕ ∈ H¹(Ω) on the boundary ∂Ω can be estimated by its H¹(Ω)-norm. However, a similar inequality is not valid for a grid function in Ω_h because of its corners, so we estimate the boundary sum directly, using the equality h_1 = L_1/N_1 and the assumption N_1 ≥ 2. Since h_1(N_1 − 1) = L_1(N_1 − 1)/N_1 ≥ L_1/2, the resulting inequality takes the form (1.98). On the other boundary faces, we have similar estimates.
We now estimate u¯^0(t); the values of u¯(t) at the other corner points are estimated similarly. From the equations (1.80) and (1.81) we obtain an ordinary differential equation for u¯^0(t); together with the initial condition u¯^0(0) = v¯^0, this yields the bound (1.100), with c a well-defined constant.

Thus, from (1.96), (1.98) and (1.100) we obtain the inequality (1.101), whose right-hand side is
c (∫_0^{t*} (∆h Σ_{k∈Ω_h} |f¯^k(t)|² + Σ_{i=1}^n (∆h/h_i) Σ_{k∈Π_il∪Π_ir} |g¯^k(t)|²) dt + ∆h Σ_{k∈Ω̄_h} |v¯^k|²),
with c a well-defined constant. Choosing ǫ_2 such that 2nǫ_2 max_{i∈{1,...,n}} L_i = λ, we can eliminate the term containing the difference quotients from both sides of this inequality. Then, choosing ǫ_1 = 1/2 and applying Gronwall's inequality with respect to ∆h Σ_{k∈Ω̄_h} |u¯^k(t*)|², we obtain the estimate (1.102) for max_{t∈[0,T]} ∆h Σ_{k∈Ω̄_h} |u¯^k(t)|². In the inequality (1.101), choosing ǫ_2 instead such that 2nǫ_2 max_{i∈{1,...,n}} L_i = λ/2 and then applying the inequality (1.102), we obtain the estimate (1.103) for the difference quotients of u¯.
Combining (1.102) and (1.103), we obtain the estimate (1.93), as claimed in the lemma.
b) Next, to estimate ∫_0^T ∆h Σ_k |du¯^k/dt|² dt, we replace η¯^k in (1.77) by du¯^k/dt to obtain the equality (1.104). Multiplying both sides of (1.104) by 2 and applying integration by parts to the second and third terms of the right-hand side, as well as to all terms of the left-hand side except the first one, we obtain an identity whose terms we estimate separately.

To proceed, we need an auxiliary result, Lemma 1.4.10, which is proved by taking the integral of both sides of the relevant pointwise inequality with respect to t from 0 to T and dividing the result by T.

Applying Cauchy's inequality, we estimate the first term; applying Cauchy's inequality and (1.98), we estimate the second term; applying Cauchy's inequality, Lemma 1.4.10 and the inequalities (1.98), (1.100), we estimate the third term; and using Cauchy's inequality, Lemma 1.4.10 and (1.102), we estimate the term (IV).

Since the coefficient b is nonnegative, the coefficients a_i satisfy the conditions (1.1)-(1.3) and, moreover, these coefficients satisfy the second set of conditions of the lemma, from the inequalities (1.105), (1.107)-(1.110), with the use of (1.93), we obtain (1.94).
From Lemma 1.4.9 we obtain the following result, which asserts the convergence of the finite difference scheme (1.79) for the Neumann problem (1.49).

Theorem 1.4.11 1) If v ∈ L²(Ω), then the multi-linear interpolation (1.62) uˆ^h of the solution of the difference-differential problem (1.79) converges weakly in L²(Q) to the solution u ∈ H^{1,0}(Q) of the Neumann problem (1.48), and its derivatives with respect to x_i, i = 1, ..., n, converge weakly in L²(Q) to u_{x_i}.
2) If a_i, b ∈ C¹([0, T], L^∞(Ω)) with |∂a_i/∂t|, |∂b/∂t| ≤ μ_2 < ∞, and v ∈ H¹(Ω), then uˆ^h converges strongly in L²(Q) to u. Furthermore, uˆ^h|_S converges to u|_S in L²(S).
Proof. We follow Ladyzhenskaya [40, Chapter 6] (see also [76, 77]) to prove these convergence results. Convergence rates could be proved as in [35, 70]; however, we do not pursue this in this thesis.

1) From the estimate (1.93), the right-hand side of (1.93) is bounded by c(ǁfǁ_{L²(Q)} + ǁgǁ_{L²(S)} + ǁvǁ_{L²(Ω)}). Therefore, the sequence {uˆ¯}_h of multi-linear interpolations is bounded in H^{1,0}(Q), due to Lemma 1.4.1. Hence there is a subsequence {uˆ¯}_{h_κ} converging weakly to some function u = u(x, t) ∈ H^{1,0}(Q), and the sequence of derivatives {uˆ¯_{x_i}}_{h_κ} converges weakly to the corresponding derivative u_{x_i} in L²(Q) as κ tends to infinity (equivalently, h_κ tends to zero), for i = 1, 2, ..., n. Due to Lemma 1.4.3, the subsequence {u˜¯}_{h_κ} converges weakly in L²(Q) to u, and the subsequence of derivatives {u˜¯_{x_i}}_{h_κ} converges weakly in L²(Q) to u_{x_i} as κ tends to infinity. To finish the proof, we need to show that each term in the discrete equation (1.77) converges to the corresponding one in (1.19). Indeed, since C^{2,1}(Q) is dense in H^{1,1}(Q), it is enough to consider test functions η in this space. The values of the grid function {η¯}_h are set to be the values of η at the grid points. It is clear that {η¯}_h converges uniformly to η as h tends to zero. This implies that all the terms in (1.77) (the terms on the boundary in the right-hand side being considered as boundary terms) converge to the corresponding terms in (1.19). This means that u is a weak solution of the Neumann problem (1.48), which is unique. Thus, every subsequence of {uˆ¯}_h converges to the same function u; hence the whole sequence converges to u.

2) When the conditions of the second part of the theorem are fulfilled, we have the a priori estimate (1.94). Therefore, by the first part of the theorem, the sequence {uˆ¯}_h converges to the solution u ∈ H^{1,0}(Q), which in fact belongs to H^{1,1}(Q), due to Theorem 1.1.2. On the other hand, from (1.94), the sequence {uˆ¯}_h of multi-linear interpolations is bounded in H^{1,1}(Q), due to Lemma 1.4.2. Hence {uˆ¯}_h converges weakly to u = u(x, t) in H^{1,1}(Q). It follows that {uˆ¯}_h converges strongly to u in L²(Q). Furthermore, uˆ^h|_S converges to u|_S in L²(S).
Discretization in time and splitting difference scheme

Next, we discretize the time variable t. We subdivide [0, T] into M subintervals by the points t_m, m = 0, ..., M, t_0 = 0, t_1 = ∆t, ..., t_M = M∆t = T. To simplify the notation, we set u^{k,m} := u^k(t_m); in this part, if there is no confusion, we drop the superscript k.

In order to obtain the splitting difference scheme for the Cauchy problems (1.75) and (1.79), we set u^{m+δ} := u¯(t_m + δ∆t) and Λ_i^m := Λ_i(t_m + ∆t/2). We introduce the two-cycle component-by-component splitting scheme [54]: over each time step, the one-dimensional problems with the matrices Λ_i are solved successively for i = 1, ..., n, the source term is added, and the one-dimensional problems are then solved again in the reverse order i = n, ..., 1 (1.112), where E_i is the identity matrix corresponding to Λ_i, i = 1, ..., n. The splitting scheme (1.112) can be rewritten in the compact form
u^{m+1} = A^m u^m + ∆t B^m F^m, A^m = A_1^m ··· A_n^m A_n^m ··· A_1^m, B^m = A_1^m ··· A_n^m, (1.113)
where A_i^m = (E_i + (∆t/4)Λ_i^m)^{−1}(E_i − (∆t/4)Λ_i^m). The stability of the splitting scheme (1.113) is given in the following theorem.
Theorem 1.4.12 The splitting scheme (1.113) is stable.

Proof. Indeed, from the first equation of (1.113), we have
ǁu^{m+1}ǁ ≤ ǁA^m u^mǁ + ∆tǁB^m F^mǁ ≤ ǁA_1^mǁ ··· ǁA_n^mǁǁA_n^mǁ ··· ǁA_1^mǁǁu^mǁ + ∆tǁA_1^mǁ ··· ǁA_n^mǁǁF^mǁ.
Moreover, applying Kellogg's Lemma [54, Theorem 2.1, p. 220], we obtain
ǁA_i^mǁ = ǁ(E_i + (∆t/4)Λ_i^m)^{−1}(E_i − (∆t/4)Λ_i^m)ǁ ≤ 1.
Hence, we get
ǁu^{m+1}ǁ ≤ ǁu^mǁ + ∆tǁF^mǁ, ǁu^mǁ ≤ ǁu^{m−1}ǁ + ∆tǁF^{m−1}ǁ, ..., ǁu^1ǁ ≤ ǁu^0ǁ + ∆tǁF^0ǁ.
Putting ǁvǁ = ǁu^0ǁ and ǁf ǁ = max_m ǁF^mǁ, we have ǁu^{m+1}ǁ ≤ ǁvǁ + (m + 1)∆tǁf ǁ. Consequently, the scheme is stable.
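A minimal sketch of the compact form (1.113), assuming symmetric positive semi-definite matrices Λ_i built as GᵀG from hypothetical data; the placement of the source term follows one common convention and may differ in detail from (1.112). With symmetric Λ_i, each factor A_i is non-expansive in the Euclidean norm, so the per-step bound of Theorem 1.4.12 can be asserted directly:

```python
import math, random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def half_sweep(L, u, dt):
    """Apply A_i = (E + dt/4 L_i)^(-1)(E - dt/4 L_i); Kellogg: ||A_i|| <= 1 for PSD L_i."""
    n = len(u)
    plus = [[(1.0 if i == j else 0.0) + 0.25 * dt * L[i][j] for j in range(n)] for i in range(n)]
    minus_u = [u[i] - 0.25 * dt * sum(L[i][j] * u[j] for j in range(n)) for i in range(n)]
    return solve(plus, minus_u)

def split_step(Lams, u, F, dt):
    """Two-cycle splitting: u_new = A_1...A_n A_n...A_1 u + dt * A_1...A_n F."""
    w = u[:]
    for L in Lams:
        w = half_sweep(L, w, dt)
    for L in reversed(Lams):
        w = half_sweep(L, w, dt)
    s = F[:]
    for L in Lams:
        s = half_sweep(L, s, dt)
    return [a + dt * b for a, b in zip(w, s)]

def norm(x):
    return math.sqrt(sum(v * v for v in x))

random.seed(3)
def psd(n):  # symmetric positive semi-definite Lambda_i = G'G (hypothetical data)
    G = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    return [[sum(G[k][i] * G[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

Lams = [psd(3), psd(3)]
u, F, dt = [1.0, -2.0, 0.5], [0.1, 0.0, -0.1], 0.05
for _ in range(10):
    u_new = split_step(Lams, u, F, dt)
    assert norm(u_new) <= norm(u) + dt * norm(F) + 1e-9  # bound of Theorem 1.4.12
    u = u_new
print("stability bound holds over all steps")
```

Each half-sweep only requires a one-dimensional (here, small dense) solve, which is the point of the splitting: the multi-dimensional problem is reduced to a sequence of cheap one-dimensional ones.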
Next, we introduce the discretized Green's formula for the direct and adjoint problems.

Theorem 1.4.13 Let u be a solution of the difference scheme
u^{m+1} = A^m u^m + F^m, m = 0, 1, ..., M − 1, u^0 = v, (1.114)
and η a solution of the adjoint problem (1.115). Then the discretized Green's formula (1.116) holds.
Proof. Multiplying both sides of the first equation of (1.114) by η^m, m = 0, 1, ..., M − 1, summing the results over m, and multiplying both sides of the second equation of (1.114) by η^M, we obtain from (1.114) the identity (1.117). Multiplying both sides of the first equation of (1.115) by u^{m+1}, m = 0, 1, ..., M − 1, summing the results over m, and multiplying both sides of the second equation of (1.115) by u^0, we obtain from the adjoint problem (1.115) the identity (1.118). Comparing (1.117) and (1.118), we arrive at the discretized Green's formula (1.116). The proof is complete.
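The discretized Green's formula can be checked numerically for the plain transpose form of the adjoint recursion (an assumption: the thesis's (1.115) may also carry source terms). With u^{m+1} = A^m u^m + F^m and η^m = (A^m)′η^{m+1}, telescoping the inner products gives ⟨u^M, η^M⟩ = ⟨u^0, η^0⟩ + Σ_m ⟨F^m, η^{m+1}⟩:

```python
import random

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

random.seed(1)
n, M = 3, 5
As = [[[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)] for _ in range(M)]
Fs = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(M)]

# direct sweep: u^{m+1} = A^m u^m + F^m
u = [[1.0, 2.0, -1.0]]
for m in range(M):
    u.append([a + b for a, b in zip(matvec(As[m], u[m]), Fs[m])])

# adjoint sweep: eta^m = (A^m)' eta^{m+1}, with the final value eta^M given
eta = [None] * (M + 1)
eta[M] = [0.5, -1.0, 2.0]
for m in range(M - 1, -1, -1):
    eta[m] = matvec(transpose(As[m]), eta[m + 1])

# discretized Green's formula (duality identity)
lhs = dot(u[M], eta[M])
rhs = dot(u[0], eta[0]) + sum(dot(Fs[m], eta[m + 1]) for m in range(M))
assert abs(lhs - rhs) < 1e-10
print("duality identity holds")
```

The identity follows step by step from ⟨u^{m+1}, η^{m+1}⟩ = ⟨u^m, (A^m)′η^{m+1}⟩ + ⟨F^m, η^{m+1}⟩, which is the discrete analogue of the integration by parts behind Theorems 1.2.1 and 1.2.2.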
Approximation of the variational problems

As stated in the Introduction, the inverse problems considered in this thesis are those of determining the initial condition v 1) in the Dirichlet problem (0.8)-(0.10) from either the observation of its solution u at the final time or from integral observations, and 2) in the Neumann problem (0.8), (0.9), (0.11) from the observation of its solution u on a part of the boundary S. The solution u to the Dirichlet and Neumann problems is understood in the weak sense, as in Definitions 1.1.7 and 1.1.8.

Thus, the observation operator C is either
i) the final time observation operator Cu(v) = u(x, T; v), mapping v ∈ L²(Ω) to Cu(v) ∈ H := L²(Ω);
ii) the integral observation operator Cu(v) = (l_1u(x, t; v), l_2u(x, t; v), ..., l_Nu(x, t; v)), where the l_i are defined by (0.22); in this case, C maps v ∈ L²(Ω) to Cu(v) ∈ H := (L²(0, T))^N;
iii) or the trace of the solution u on a part of the boundary, Cu(v) = u(x, t; v)|_Γ; in this case, C maps v ∈ L²(Ω) to Cu(v) ∈ H := L²(Γ).
Our data assimilation problem is that of reconstructing the initial condition v when the observation Cu(v) is given by some z ∈ H.

Denoting the solution to (0.8)-(0.10) (or (0.8), (0.9), (0.11)) with v ≡ 0 by u⁰, we see that the operator mapping v ∈ L²(Ω) to Cv := Cu(v) − Cu⁰ ∈ H is bounded and linear. Thus, the above inverse problems lead to the linear operator equation
Cv = z˜ := z − Cu⁰. (1.119)
To get a stable approximation to v, we minimize the Tikhonov functional
J_γ(v) = (1/2)ǁCv − z˜ǁ²_H + (γ/2)ǁv − v*ǁ²_{L²(Ω)} (1.120)
with respect to v ∈ L²(Ω). Here, v* is an estimate of v. The minimizer of this optimization problem is characterized by the optimality condition
J′_γ(v) = C*(Cv − z˜) + γ(v − v*) = 0, (1.121)
where C* is the adjoint operator of C. We denote the solution to this problem by v^γ.
Suppose that C_h is an approximation to C such that
ǁ(C − C_h)vǁ_H ≤ Φ_1(h)ǁvǁ_{L²(Ω)} ∀v ∈ L²(Ω), (1.122)
with Φ_1(h) → 0 as h → 0. Also suppose that Ĉ*_h is an approximation to C* such that
ǁC*ω − Ĉ*_hωǁ_{L²(Ω)} ≤ Φ_2(h)ǁωǁ_H ∀ω ∈ H, (1.123)
with Φ_2(h) → 0 as h → 0. We approximate the problem (1.120) by minimizing
J_γ^h(v) = (1/2)ǁC_h v − z˜ǁ²_H + (γ/2)ǁv − v*_hǁ²_{L²(Ω)}, (1.124)
whose minimizer v_h^γ is characterized by the first-order optimality condition
Ĉ*_h(C_h v_h^γ − z˜) + γ(v_h^γ − v*_h) = 0. (1.125)
Here, v*_h ∈ L²(Ω) is an approximation to v* such that ǁv* − v*_hǁ_{L²(Ω)} ≤ Φ_3(h), with Φ_3(h) → 0 as h → 0. (1.126)

Let v̂_h^γ be the solution to the perturbed variational problem
Ĉ*_h(C_h v̂_h^γ − z^δ) + γ(v̂_h^γ − v*_h) = 0, (1.127)
where z^δ is a perturbation of z˜ satisfying ǁz^δ − z˜ǁ_H ≤ δ. (1.128)
Following [28, 30], we have the following result.

Theorem 1.5.1 Let v^γ be the solution of the variational problem (1.121) and γ > 0. Then the following error estimate holds:
ǁv^γ − v̂_h^γǁ ≤ Φ(h) + cδ/γ,
where Φ(h) → 0 as h tends to zero.
Proof. In the proof, c_1, c_2, ... are generic positive constants; ⟨·, ·⟩ and ǁ·ǁ denote the scalar product and the norm in L²(Ω), respectively. From the equation (1.125) we have
γǁv_h^γǁ² = ⟨Ĉ*_h(z˜ − C_h v_h^γ) + γv*_h, v_h^γ⟩,
which yields the a priori bound (1.129). From the equations (1.125) and (1.127) we have
γ(v_h^γ − v̂_h^γ) = Ĉ*_h(C_h v̂_h^γ − z^δ) − Ĉ*_h(C_h v_h^γ − z˜).
Using the inequalities (1.123) and (1.129), we estimate the first term on the right-hand side of the last equality by (c_7Φ_2(h) + c_8δ + c_9Φ_1(h))-type quantities; similarly, using the inequalities (1.122) and (1.129), we estimate the second term. Further, using the inequalities (1.122), (1.123) and (1.128), we obtain the bound (1.130). Similarly, from (1.121), (1.122), (1.123), (1.126), (1.127) and (1.129), we have
γǁv^γ − v̂_h^γǁ² = ⟨Ĉ*_hC_h v̂_h^γ − C*C_h v̂_h^γ + C*C_h v̂_h^γ − C*Cv^γ − Ĉ*_h z^δ + C*z˜, v^γ − v̂_h^γ⟩ + γ⟨v* − v*_h, v^γ − v̂_h^γ⟩
≤ c_10(Φ_1(h) + Φ_2(h))ǁv^γ − v̂_h^γǁ + Φ_2(h)ǁz^δǁ_H ǁv^γ − v̂_h^γǁ + c_12δǁv^γ − v̂_h^γǁ + Φ_3(h)ǁv*ǁǁv^γ − v̂_h^γǁ.
Combining this inequality with (1.130), we arrive at the statement of the theorem.
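In finite dimensions, the δ/γ part of the estimate in Theorem 1.5.1 can be checked numerically. The sketch below uses a hypothetical small matrix for C_h (taken equal to C, so Φ(h) = 0 and only the data-perturbation term remains): it solves the optimality conditions (1.125) and (1.127) via the normal equations and compares the two regularized solutions against the spectral bound ǁ(C′C + γI)^{−1}C′ǁ ≤ 1/(2√γ):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

# hypothetical discrete observation operator C_h and data
C = [[1.0, 0.5, 0.0],
     [0.2, 1.0, 0.3],
     [0.0, 0.4, 1.0]]
gamma, delta = 1e-2, 1e-3
vstar = [0.0, 0.0, 0.0]
z = [1.0, -0.5, 0.25]
zd = [z[0] + delta, z[1], z[2]]   # perturbed data with ||z_delta - z|| = delta
n = 3
Ct = [[C[j][i] for j in range(n)] for i in range(n)]

def tikhonov(data):
    """Solve the optimality condition: (C'C + gamma I) v = C' data + gamma v*."""
    CtC = [[sum(Ct[i][k] * C[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    A = [[CtC[i][j] + (gamma if i == j else 0.0) for j in range(n)] for i in range(n)]
    rhs = [sum(Ct[i][k] * data[k] for k in range(n)) + gamma * vstar[i] for i in range(n)]
    return solve(A, rhs)

v_gamma, v_hat = tikhonov(z), tikhonov(zd)
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(v_gamma, v_hat)))
# sigma/(sigma^2 + gamma) <= 1/(2 sqrt(gamma)), hence err <= delta/(2 sqrt(gamma))
assert err <= delta / (2.0 * math.sqrt(gamma)) + 1e-12
print("perturbation error within delta/(2*sqrt(gamma))")
```

This makes the trade-off behind the theorem concrete: decreasing γ sharpens the fit but amplifies the data-noise term δ/γ in the error bound.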
In §1.6 we present Lanczos' algorithm [81] to estimate the eigenvalues of Ĉ*_hC_h based only on its action v ↦ Ĉ*_hC_hv. This approach does not require an explicit matrix representation of the operator.
Data assimilation by the final time observations

Problem setting and the variational method
For ease of reading, we rewrite the Dirichlet problem (1.7)-(1.9) with the new indexing (2.1). The coefficients a_ij, b and the data f and v satisfy the conditions (1.1)-(1.6). Under these conditions, (2.1) has a unique weak solution u ∈ W(0, T; H¹_0(Ω)) in the sense of Definition 1.1.7.
Data assimilation: Suppose that v is not given and we have to reconstruct it from the observation of the solution u to (2.1) at the final time, Cu := u(·, T) = ξ(·) ∈ L²(Ω).

From now on, to emphasize the dependence of the solution u on the initial condition v, we denote it by u(v) or u(x, t; v).

Remark 2.1.1 The operator C mapping v ∈ L²(Ω) to u(x, T; v) is compact from L²(Ω) to L²(Ω). Hence the problem of reconstructing v from u(·, T; v) = ξ(·) ∈ L²(Ω) is ill-posed.

Proof. Since u ∈ W(0, T; H¹_0(Ω)), we have u(·, T; v) ∈ H¹_0(Ω). The space H¹_0(Ω) is compactly embedded in L²(Ω); therefore the operator C from L²(Ω) to L²(Ω) is compact.
To analyse the degree of the ill-posedness of the problem we have to study behaviour of the singular values of "the linear part" C of the a ne operator mapping v ∈ L 2 (Ω) to u(x, T ; v) ∈ L 2 (Ω) whi h is de ned as presented in Introdu tion and in Ÿ1.5 as follows: denote by o the solution to (2.1) with v ≡ 0, then o o
Cv := Cu − Cu = u(x, T ; v) − u(x, T ) is bounded and linear from L 2 (Ω) to L 2 (Ω) It is lear that C is ompa t Sin e there is no expli it form for C and the analysis of its singular values is not trivial, we therefore suggest a numeri al s heme to approximate the singular values of C Before doing so, we des ribe the variational method for nding the initial ondition v in from u(x, T ; v) by minimizing the mis t fun tional
J_0(v) = (1/2)‖u(·, T; v) − ξ‖²_{L^2(Ω)} (2.2)
over L^2(Ω), subject to the system (2.1). As the inverse problem is ill-posed, this variational problem is ill-posed as well; therefore, we use the Tikhonov regularization method to stabilize it by minimizing the Tikhonov regularization functional
J_γ(v) = J_0(v) + (γ/2)‖v − v*‖²_{L^2(Ω)} (2.3)
over L^2(Ω), with γ > 0 being the regularization parameter and v* an approximation of the initial condition v, which can be set to zero.
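Numerically, the two terms of (2.3) are simple grid sums. The sketch below is illustrative only: `forward` is an assumed placeholder for a solver returning u(·, T; v) on a grid (not the thesis's splitting scheme), and all names are hypothetical.

```python
import numpy as np

# Illustrative sketch of the Tikhonov functional (2.3) on a uniform grid.
# `forward` is an assumed placeholder for a solver v -> u(., T; v).
def tikhonov_functional(forward, v, xi, gamma, v_star, dx):
    residual = forward(v) - xi                          # u(., T; v) - xi
    misfit = 0.5 * dx * np.sum(residual ** 2)           # (1/2)||.||^2 term of (2.2)
    penalty = 0.5 * gamma * dx * np.sum((v - v_star) ** 2)
    return misfit + penalty
```

For γ = 0 this reduces to the misfit functional (2.2); increasing γ trades data fit against closeness to the a priori guess v*.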
Now we prove that J_γ(v) is Fréchet differentiable and derive a formula for its gradient ∇J_γ. In doing so, we introduce the adjoint problem
p_t + Σ_{i,j=1}^n (a_ij(x, t) p_{x_j})_{x_i} − b(x, t) p = 0 in Q,
p = 0 on S,
p(x, T) = u(x, T; v) − ξ(x) in Ω. (2.4)
Since u(·, T; v) − ξ(·) ∈ L^2(Ω), due to Theorem 1.1.1, there exists a unique solution p ∈ W(0, T; H^1(Ω)) to (2.4).
Theorem 2.1.1. The functional J_γ is Fréchet differentiable and its gradient is given by
∇J_γ(v) = p(x, 0) + γ(v − v*), (2.5)
where p(x, t) is the solution to the adjoint problem (2.4).
Proof. For a small variation δv, we have
J_0(v + δv) − J_0(v) = ∫_Ω δu(x, T; v)(u(x, T; v) − ξ(x)) dx + o(‖δv‖_{L^2(Ω)}), (2.6)
where δu(v) = u(v + δv) − u(v) is the solution to the problem (2.7).
Applying the a priori estimate (1.15) of Theorem 1.1.1 to the solution of the problem (2.7), we obtain that there exists a constant c_D such that ‖δu‖_{W(0,T;H^1(Ω))} ≤ c_D ‖δv‖_{L^2(Ω)}. Hence
Furthermore, using Green's formula (1.22) of Theorem 1.2.1 for (2.4) and (2.7), we obtain
Combining it with (2.6), we have
J_0(v + δv) − J_0(v) = ∫_Ω δv p(x, 0) dx + o(‖δv‖_{L^2(Ω)}) = ⟨δv, p(x, 0)⟩_{L^2(Ω)} + o(‖δv‖_{L^2(Ω)}).
Thus, J_0 is Fréchet differentiable and ∇J_0(v) = p(x, 0), where p(x, t) is the solution to the adjoint problem (2.4). The formula (2.5) for the gradient of the functional J_γ(v) follows. The proof is complete.
To approximate the singular values of C, we discretize the problem to obtain the system of ordinary differential equations (1.75). Denote ξ̃ = ξ − u^0(x, T). Then J_0(v) = (1/2)‖Cv − ξ̃‖²_{L^2(Ω)} and ∇J_0(v) = C*(Cv − ξ̃), where C* is the adjoint operator of C. On the other hand, from the above theorem, if we take ξ = u^0(x, T), that is ξ̃ = 0, then
∇J_0(v) = C*Cv = p(x, 0),
where p = p(x, t; v) is the solution to the adjoint problem (2.4).
Thus, for any v ∈ L^2(Ω) we can always evaluate C*Cv via a direct problem for calculating u(v) and an adjoint problem for obtaining p(x, 0; v). When we discretize the problem, C has the form of a matrix; therefore we can apply Lanczos' algorithm (§1.6) to approximate the eigenvalues of C*C. The numerical results will be presented in §2.4.
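Because C*C is available only through its action v ↦ C*Cv (one direct solve followed by one adjoint solve), a matrix-free Lanczos iteration fits naturally. The following sketch is illustrative, with a small random matrix standing in for the discretized solution operator; it is not the thesis's code, and all names are assumptions.

```python
import numpy as np

def lanczos_ritz_values(matvec, n, m, rng):
    # m-step Lanczos with full reorthogonalization for a symmetric operator
    # known only through matvec; the extreme Ritz values approximate the
    # extreme eigenvalues of the operator.
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for j in range(m):
        Q[:, j] = q
        w = matvec(q)
        alpha[j] = q @ w
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# Stand-in for the discretized operator C; in the real problem, matvec would
# run one direct solve and one adjoint solve instead of matrix products.
rng = np.random.default_rng(0)
C = rng.standard_normal((30, 30)) / 30
ritz = lanczos_ritz_values(lambda v: C.T @ (C @ v), 30, 30, rng)
sigma_max = np.sqrt(ritz.max())        # largest singular value of C
```

The square roots of the Ritz values approximate the singular values of C, since the eigenvalues of C*C are the squared singular values of C.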
Discretization of the variational problem in space variables
In this section we suppose that the first equation of the Dirichlet problem (2.1) has no mixed derivatives, that is a_ij = 0 if i ≠ j; we denote a_ii by a_i as in §1.4 and get the system (1.46). Furthermore, we assume Ω is the open parallelepiped defined in §1.4.1. We discretize this problem by the finite difference method in the space variables and get the discretized Tikhonov functional
J_h^γ(v̄) = (1/2) Σ_{k∈Ω_h} |ū^k(T; v̄) − ξ̄^k|² + (γ/2) Σ_{k∈Ω_h} |v̄^k − v̄*^k|² (2.9)
and minimize this functional over all grid functions v̄. Set C_h v̄ = û̄(x, T; v̄), where û̄ is the piecewise linear interpolation of ū defined in Subsection 1.4.1. Then, if condition 2) of Theorem 1.4.8 is satisfied, all conditions of Theorem 1.5.1 in Subsection 1.5 are satisfied. As we do not yet allow noise in the data ξ, we have the following result on the convergence of the solution of the discretized optimization problem (2.9).
Theorem 2.2.1. Let condition 2) of Theorem 1.4.8 be satisfied. Then the linear interpolation of the solution v̄_γ of the problem (2.9) converges in L^2(Ω) to the solution v_γ of the problem (2.3) as h tends to zero.
Full discretization of the variational problem
To solve the variational problem numerically we need to discretize the system (1.46) in time.
We use the splitting method in Subsection 1.4.3 to get the scheme (1.111) or (1.112). Now,
we discretize the objective functional J_0(v) as follows:
J_h(v̄) = (1/2) Σ_{k∈Ω_h} |u^{k,M}(v̄) − ξ^k|². (2.10)
Here we use the notation u^{k,M}(v̄) to indicate the dependence of the solution on the initial condition v̄, and M represents the index of the final time. We drop the multiplier h as it does not play any role here. Furthermore, we use the same notation J_h as in the previous section, although it depends also on the time mesh size Δt.
2.3.1 The gradient of the objective functional
We will solve the minimization problem (2.10) by the conjugate gradient method. For this purpose, we need to calculate the gradient of the objective function J_h. In this subsection, we use the following inner product of two grid functions u := {u^k, k ∈ Ω_h} and v := {v^k, k ∈ Ω_h}: (u, v) = Σ_{k∈Ω_h} u^k v^k.
The following theorem gives a formula for the gradient of the objective functional (2.10).
Theorem 2.3.1. The gradient ∇J_h(v̄) of the objective functional J_h at v̄ is given by
∇J_h(v̄) = (A^0)* η^0, (2.12)
where η = (η^0, ..., η^M) satisfies the adjoint problem
η^{M−1} = ψ, η^m = (A^{m+1})* η^{m+1}, m = M − 2, M − 3, ..., 0. (2.13)
Here, (A^m)* is the adjoint of the matrix A^m of the splitting scheme (1.111).
Proof. For a small variation δv̄ of v̄, it follows from (2.10) that
J_h(v̄ + δv̄) − J_h(v̄) = (w^M, ψ) + (1/2)‖w^M‖², (2.15)
where w^{k,m} := u^{k,m}(v̄ + δv̄) − u^{k,m}(v̄), k ∈ Ω_h, m = 0, ..., M, w^m := {w^{k,m}, k ∈ Ω_h} and ψ = u^M(v̄) − ξ. It follows from (1.111) that w is the solution to the problem
Taking the inner product of both sides of the mth equation of (2.16) with an arbitrary vector η^m ∈ R^{N_1×···×N_n}, then summing the results over m = 0, ..., M − 1, we obtain
Here (A^m)* is the adjoint matrix of A^m. Consider the adjoint problem
Taking the inner product of both sides of the first equation of (2.18) with an arbitrary vector w^{m+1}, then summing the results over m = 0, ..., M − 2, we obtain
Σ_{m=0}^{M−2} (η^m, w^{m+1}) = Σ_{m=0}^{M−2} (w^{m+1}, (A^{m+1})* η^{m+1}). (2.19)
Taking the inner product of both sides of the second equation of (2.18) with the vector w^M, we have
It follows from (2.19) and (2.20) that
On the other hand, it can be proved by induction that Σ_{k∈Ω_h} |w^{k,M}|² = o(‖δv̄‖). Hence, it follows from (2.15) and (2.22) that
Consequently, the gradient of the objective functional J_h can be written as
Note that, since the matrices Λ_i^m, i = 1, ..., n, are symmetric, we have for m = 0, ..., M − 1
Denote by ε the noise level of the measured data. Using the well-known conjugate gradient method with the a posteriori stopping rule introduced by Nemirovskii [59], we reconstruct the initial condition v̄ from the measured final state u^M = ξ by the following steps:
Step 1 (initialization): Given an initial guess v^0 and a scalar α > 1, calculate the residual r̂^0 = u(v^0) − ξ by solving the splitting scheme (1.111) with the initial condition v̄ replaced by the initial guess v^0.
If ‖r̂^0‖ ≤ αε, stop the algorithm. Otherwise, set i = 0, d^{−1} = (0, ..., 0) and go to Step 2.
Step 2. Calculate the gradient r^i = −∇J_h(v^i) given in (2.12) by solving the adjoint problem (2.13). Then set d^i = r^i + β_i d^{i−1}, where β_i = ‖r^i‖²/‖r^{i−1}‖² and β_0 = 0.
Step 3. Calculate the solution ū^i of the splitting scheme (1.111) with v̄ replaced by d^i and with zero right-hand side, and put α_i = ‖r^i‖²/‖(ū^i)^M‖². Then update
v^{i+1} = v^i + α_i d^i. (2.28)
The residual can be calculated by
r̂^{i+1} = r̂^i + α_i (ū^i)^M. (2.29)
Step 4. If ‖r̂^{i+1}‖ ≤ αε, stop the algorithm (Nemirovskii's stopping rule). Otherwise, set i := i + 1 and go back to Step 2.
We note that (2.29) can be derived from the equality r̂^{i+1} = u^M(v^{i+1}) − ξ = u^M(v^i + α_i d^i) − ξ = r̂^i + α_i (ū^i)^M, by the linearity of the scheme with zero right-hand side.
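Under the simplifying assumption that the discretized solution operator is available as a matrix A (in the thesis it is applied via the splitting scheme (1.111)), Steps 1–4 above can be sketched as follows; all names are illustrative.

```python
import numpy as np

def cg_nemirovskii(A, xi, eps, alpha=1.5, max_iter=200):
    # Conjugate gradient for J(v) = 0.5*||A v - xi||^2 with Nemirovskii's
    # discrepancy stopping rule ||residual|| <= alpha*eps.  The matrix A is a
    # hypothetical stand-in for the map v -> u(., T; v) with zero source.
    n = A.shape[1]
    v = np.zeros(n)                       # initial guess v^0
    r_hat = A @ v - xi                    # Step 1: residual
    d = np.zeros(n)
    r_old = None
    for _ in range(max_iter):
        if np.linalg.norm(r_hat) <= alpha * eps:
            break                         # Step 4: Nemirovskii's rule
        r = -A.T @ r_hat                  # Step 2: r = -grad J(v)
        beta = 0.0 if r_old is None else (r @ r) / (r_old @ r_old)
        d = r + beta * d
        Ad = A @ d                        # Step 3: one forward solve for d
        step = (r @ r) / (Ad @ Ad)        # alpha_i = ||r||^2 / ||A d||^2
        v = v + step * d
        r_hat = r_hat + step * Ad         # residual update as in (2.29)
        r_old = r
    return v
```

The residual is updated by linearity exactly as in (2.29), so each iteration costs one forward and one adjoint application.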
Data assimilation by the integral observations
Problem setting and the variational method
As in the previous chapter, for ease of reading, we rewrite the Dirichlet problem (1.7)–(1.9) with new indexing:
We assume that the coefficients a_ij, b and the data f and v satisfy the conditions (1.1)–(1.6). The solution to this problem is understood in the weak sense of Definition 1.1.7. It has been proved that under these conditions there exists a unique solution u ∈ W(0, T; H^1(Ω)).
Data assimilation: Suppose that v is not given and we have to reconstruct it from N integral observations of the solution u to (3.1): l_i u = h_i(t), t ∈ (τ, T), τ ≥ 0, i = 1, 2, ..., N, with
l_i u(x, t) = ∫_Ω ω_i(x) u(x, t) dx = h_i(t), t ∈ (τ, T), i = 1, ..., N. (3.2)
Here ω_i ∈ L^1(Ω) with ∫_Ω ω_i(x) dx > 0 are N given nonnegative weight functions.
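On a grid, the observation operator (3.2) is just a weighted quadrature. A minimal one-dimensional sketch follows; the indicator weight and all names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def integral_observation(u_level, omega, dx):
    # Rectangle-rule approximation of l_i u(t) = int_Omega omega_i(x) u(x,t) dx
    # from one time level of the grid solution.
    return dx * np.sum(omega * u_level)

x = np.linspace(0.0, 1.0, 101)
omega = np.zeros_like(x)
omega[25:76] = 1.0                 # nonnegative weight supported on [0.25, 0.75]
u = np.ones_like(x)                # one time level of the grid solution
h = integral_observation(u, omega, x[1] - x[0])
```

For the constant state u ≡ 1 this returns approximately the measure of the support of the weight, as expected from (3.2).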
Remark 3.1.1. If v ∈ H^1(Ω), a_ij, b ∈ C^1([0, T]; L^∞(Ω)), i, j = 1, ..., n, and there exists a constant μ_1 such that |∂a_ij/∂t|, |∂b/∂t| ≤ μ_1, then the operator C mapping v ∈ L^2(Ω) to (l_1 u(x, t; v), ..., l_N u(x, t; v)) is compact from L^2(Ω) to (L^2(τ, T))^N. Hence the problem of reconstructing v from Cu(v) = (h_1, ..., h_N) ∈ (L^2(τ, T))^N is ill-posed.
Proof. If condition 2) of Theorem 1.1.1 is satisfied, then u ∈ H^{1,1}(Q). Therefore,
(l_1 u(x, t; v), ..., l_N u(x, t; v)) ∈ (H^1(τ, T))^N, which is compactly embedded in (L^2(τ, T))^N.
Hence, the operator C from L^2(Ω) to (L^2(τ, T))^N is compact.
Now we analyze the degree of ill-posedness of this problem. We denote by u^0 the solution of the problem (3.1) with v ≡ 0, and by u[v] the solution of the problem (3.1) with f ≡ 0. Then
l_i u = l_i u[v] + l_i u^0 = C_i v + l_i u^0, (3.5)
where C_i v, i = 1, ..., N, mapping v to l_i u[v], are linear bounded operators from L^2(Ω) to L^2(τ, T).
Define the linear operator
Cv = (C_1 v, ..., C_N v) for v ∈ L^2(Ω),
where the scalar product in (L^2(τ, T))^N is defined as follows: if k^1 = (k^1_1, ..., k^1_N) and k^2 = (k^2_1, ..., k^2_N), then ⟨k^1, k^2⟩ = Σ_{i=1}^N ⟨k^1_i, k^2_i⟩_{L^2(τ,T)}. Then the problem of reconstructing the initial condition v in (3.1) from the observations (3.2) has the form
Cv = (h_1 − l_1 u^0, ..., h_N − l_N u^0).
We now introduce a variational method for solving the reconstruction problem.
We denote the solution u(x, t) to (3.1) by u(x, t; v) or u(v), if there is no confusion, to emphasise the dependence of the solution on the initial condition v. A natural method for reconstructing v from the observations l_i u, i = 1, 2, ..., N, in (3.2) is to minimize the misfit functional
J_0(v) = (1/2) Σ_{i=1}^N ‖l_i u(v) − h_i‖²_{L^2(τ,T)}
with respect to v ∈ L^2(Ω).
Let v* be an a priori guess of v. We now combine the above least squares problem with Tikhonov regularization as follows: minimize the functional
J_γ(v) = J_0(v) + (γ/2)‖v − v*‖²_{L^2(Ω)} (3.8)
with respect to v ∈ L^2(Ω), with γ a positive Tikhonov regularization parameter.
Since γ > 0, it is easily seen that the problem of minimizing J_γ(v) over L^2(Ω) has a unique solution. Now we prove that J_γ(v) is Fréchet differentiable and derive a formula for its gradient. In doing so, consider the adjoint problem:
The norm in (L^2(τ, T))^N is defined by ‖k‖² = Σ_{i=1}^N ‖k_i‖²_{L^2(τ,T)} for k = (k_1, ..., k_N). Here χ_(τ,T)(t) = 1 if t ∈ (τ, T) and χ_(τ,T)(t) = 0 otherwise. The solution to (3.9) is understood in the weak sense of §1.2. Since Σ_{i=1}^N ω_i (l_i u − h_i) χ_(τ,T)(t) ∈ L^2(Q), there exists a unique solution p ∈ W(0, T; H^1(Ω)) to (3.9).
Theorem 3.1.1. The gradient ∇J_0(v) of the objective functional J_0(v) at v is given by
∇J_0(v) = p(x, 0), (3.10)
where p(x, t) is the solution to the adjoint problem (3.9).
Proof. For a small variation δv of v, we have
J_0(v + δv) − J_0(v) = Σ_{i=1}^N ⟨l_i δu(v), l_i u(v) − h_i⟩_{L^2(τ,T)} + o(‖δv‖_{L^2(Ω)}), (3.12)
where δu(v) is the solution to the problem
Using Green's formula (1.22) (Theorem 1.2.1) for the systems (3.9) and (3.11), we obtain
It follows from (3.12) and (3.13) that
Consequently, J_0 is Fréchet differentiable and ∇J_0(v) = p(x, 0).
From this result, we see that the functional J_γ(v) is also Fréchet differentiable and its gradient ∇J_γ(v) has the form
∇J_γ(v) = p(x, 0) + γ(v − v*), (3.14)
where p(x, t) is the solution to the adjoint problem (3.9).
Now we turn to the question of estimating the degree of ill-posedness of our reconstruction problem. Since C_i, i = 1, ..., N, are defined by (3.5), from the above theorem we have
C_i* g = p_i^†(x, 0), where p_i^† is the solution to the adjoint problem (3.15), which has the same form as (3.9) with the source term replaced by ω_i g̃ and with p_i^† = 0 on S, p_i^†(x, T) = 0. Here g ∈ L^2(τ, T), and g̃(t) = g(t) for t ∈ (τ, T), g̃(t) = 0 otherwise. From (3.5), we have
C*Cv = p^0(x, 0), where p^0 is the solution of the adjoint problem
Thus, if v ∈ L^2(Ω) is given, we can evaluate C*Cv = p^0(x, 0). However, an explicit form of C*C is not available for analyzing its eigenvalues and eigenfunctions. Despite this, when we discretize the problem, the finite-dimensional approximations C_h of C are matrices, so we can apply Lanczos' algorithm [81] to estimate the eigenvalues of C_h* C_h based on the values C_h* C_h v_h. The numerical simulation of this approach will be given in §3.4.
Since the gradient of the functional J_γ(v) can be calculated via the adjoint problem (3.9), the conjugate gradient algorithm is applicable to approximating the initial condition v as follows [60]:
v^{k+1} = v^k + α_k d^k, d^k = −∇J_γ(v^k) + β_k d^{k−1}, β_k = ‖∇J_γ(v^k)‖² / ‖∇J_γ(v^{k−1})‖², β_0 = 0,
where α_k is chosen to minimize J_γ(v^k + α d^k) over α. It follows from (3.17) that α_k can be rewritten as
α_k = ‖∇J_γ(v^k)‖² / (Σ_{i=1}^N ‖l_i u[d^k]‖²_{L^2(τ,T)} + γ‖d^k‖²).
Discretization of the variational problem in space variables
Suppose that the first equation of the Dirichlet problem (3.1) has no mixed derivatives, that is a_ij = 0 if i ≠ j; we denote a_ii by a_i as in §1.4 and get the system (1.46). Furthermore, we assume Ω is the open parallelepiped defined in §1.4.1. We discretize this problem by the finite difference method in the space variables and get the system of ordinary differential equations (1.75).
Using the representation in Subsection 3.1, we have the first-order optimality condition for this problem as follows.
We approximate the functional J_γ as follows. First, we approximate the observation l_i u(v) by l_{ih} ū(v̄) = Δh Σ_{k∈Ω_h} ω̄_i^k ū^k(v̄), i = 1, ..., N.
As in the continuous problem, we define ū^0 as the solution to the Cauchy problem
dū/dt + (Λ_1 + ··· + Λ_n)ū − F̄ = 0, ū(0) = 0,
where F̄ is defined as in (1.75), and ū[v̄] as the solution to the Cauchy problem
dū/dt + (Λ_1 + ··· + Λ_n)ū = 0, ū(0) = v̄.
Hence the operator C_{ih} v̄ = l_{ih} ū[v̄] is linear and bounded from L^2(Ω) into L^2(τ, T) for i = 1, ..., N. Furthermore, if p̄^† is a solution to the Cauchy problem
dp̄^†/dt − (Λ_1 + ··· + Λ_n) p̄^† + Ḡ = 0, p̄^†(T) = 0, (3.27)
with Ḡ = {ω̄_i^k g̃(t), k ∈ Ω_h}, where ω̄_i^k is defined in formula (1.67), then C_{ih}* g = p̄^†(x, 0).
Thus, the discretized version of C has the form C_h := (C_{1h}, ..., C_{Nh}) and the functional J_γ is now approximated as follows:
Here, for simplicity of notation, we again set k_i = h_i − l_{ih} ū^0. For this discretized optimization problem we have the first-order optimality condition
If we suppose that condition 2) of Theorem 1.4.11 is satisfied, then ‖C_h v̄ − Cv‖_{(L^2(τ,T))^N} and ‖C_h* w − C* w‖_{L^2(Ω)} tend to zero as h tends to zero. Following Theorem 1.5.1 of Subsection 1.5, we have the following result.
Theorem 3.2.1. Let condition 2) of Theorem 1.4.11 be satisfied. Then the interpolation v̂_γ of the solution v̄_γ of the problem (3.28) converges to the solution v_γ of the problem (3.8) in L^2(Ω) as h tends to zero.
Full discretization of the variational problem
To solve the variational problem numerically we need to discretize the system (1.46) in time.
We use the Crank–Nicolson method in Section 1.3 and the splitting method in Subsection 1.4.3 to get the scheme (1.111) or (1.112). Now, we discretize the objective functional
J_{h,Δt}(v̄) = (1/2) Σ_{i=1}^N Σ_{m=ℓ}^M [Σ_{k∈Ω_h} ω_i^k u^{k,m}(v̄) − h_i^m]², (3.31)
where ℓ is the first index for which ℓΔt > τ, u^{k,m}(v̄) shows its dependence on the initial condition v̄, and m is the index of the grid points on the time axis. We denote by ω_i^k = ω_i(x^k) the approximation of the function ω_i(x) in Ω_h at the points x^k, as defined by (1.67). We note that in the definition of J_{h,Δt} the multiplier Δh has been dropped as it plays no role.
To minimize J_{h,Δt}(v̄) by the conjugate gradient method, we first calculate its gradient.
Theorem 3.3.1. The gradient of J_{h,Δt} at v̄ is given by
∇J_{h,Δt}(v̄) = (A^0)* ··· (A^{ℓ−1})* η^{ℓ−1}, (3.32)
where η satisfies the adjoint problem
(3.33). Here the matrix (A^m)* is given by
Proof. For a small variation δv̄ of v̄, we have from (3.31) that
J_{h,Δt}(v̄ + δv̄) − J_{h,Δt}(v̄) = Σ_{i=1}^N Σ_{m=ℓ}^M (ω_i w^m, ψ_i^m) + o(‖δv̄‖), (3.35)
where ψ_i^m := Σ_{k∈Ω_h} ω_i^k u^{k,m} − h_i^m, and the inner product is that of R^{N_1×···×N_n}.
It follows from (1.111) that w is the solution to the problem
Taking the inner product of both sides of the mth equation of (3.36) with an arbitrary vector η^m ∈ R^{N_1×···×N_n} and summing the results over m = ℓ − 1, ..., M − 1, we obtain
Here (A^m)* is the adjoint matrix of A^m. Consider the adjoint problem
Taking the inner product of both sides of the first equation of (3.38) with an arbitrary vector w^{m+1} and summing the results over m = ℓ − 1, ..., M − 1, we obtain
On the other hand, it can be proved by induction that Σ_{i=1}^N Σ_{m=ℓ}^M [Σ_{k∈Ω_h} ω_i^k w^{k,m}]² = o(‖δv̄‖).
Hence, it follows from the condition η^M = 0, (3.35) and (3.40) that
J_{h,Δt}(v̄ + δv̄) − J_{h,Δt}(v̄) = (δv̄, (A^0)* ··· (A^{ℓ−1})* η^{ℓ−1}) + o(‖δv̄‖). (3.41)
Consequently, the gradient of the objective functional J_{h,Δt} can be written as
Note that, since the matrices Λ_i^m, i = 1, ..., n, are symmetric, we have for m = ℓ − 1, ..., M − 1
The conjugate gradient method for the discretized functional (3.31) consists of the following steps:
Step 1. Choose an initial approximation v^0 and calculate the residual r̂^0 = (l_1 u(v^0) − h_1, ..., l_N u(v^0) − h_N) by solving the splitting scheme (1.111) with v̄ replaced by the initial approximation v^0, and set k = 0.
Step 2. Calculate the gradient r^0 = −∇J_γ(v^0) given in (3.32) by solving the adjoint problem (3.33). Then set d^0 = r^0.
Step 3. Calculate α_0 = ‖r^0‖² / (Σ_{i=1}^N ‖l_i d^0‖² + γ‖d^0‖²), where l_i d^0 can be calculated from the splitting scheme (1.111) with v̄ replaced by d^0 and F = 0. Then set v^1 = v^0 + α_0 d^0.
Step 4. For k = 1, 2, ..., calculate r^k = −∇J_γ(v^k), d^k = r^k + β_k d^{k−1}, where β_k = ‖r^k‖²/‖r^{k−1}‖², and then α_k = ‖r^k‖² / (Σ_{i=1}^N ‖l_i d^k‖² + γ‖d^k‖²), where l_i d^k can be calculated from the splitting scheme (1.111) with v̄ replaced by d^k and F = 0. Then set v^{k+1} = v^k + α_k d^k.
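The step size in Steps 3 and 4 only needs the observation time series of the scheme applied to the search direction. A hypothetical numerical sketch (discrete norms by the rectangle rule; all names illustrative):

```python
import numpy as np

def cg_step_size(r, l_d_list, d, gamma, dt, dx):
    # alpha_k = ||r_k||^2 / (sum_i ||l_i d_k||^2 + gamma*||d_k||^2), where
    # l_d_list holds the observation time series l_i d_k of the splitting
    # scheme applied to d_k with F = 0 (an assumed stand-in, not thesis code).
    num = dx * np.sum(r ** 2)
    den = dt * sum(np.sum(ld ** 2) for ld in l_d_list) + gamma * dx * np.sum(d ** 2)
    return num / den
```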
Data assimilation by the boundary observations
Problem setting and the variational method
We recall the Neumann problem (1.7), (1.8), (1.10) in this chapter with new indexing for ease of reading:
Here ∂/∂N denotes the conormal derivative in (1.10) and ν is the outer unit normal to S.
The solution to this problem is a function u ∈ W(0, T; H^1(Ω)) satisfying Definition 1.1.8.
If the conditions (1.1)–(1.6) are satisfied, then Theorem 1.1.2 shows that there exists a unique solution u ∈ W(0, T; H^1(Ω)) to the Neumann problem (4.1)–(4.3).
Data assimilation: Reconstruct the initial condition v(x) in (4.1)–(4.3) from observations of the solution u on a part of the boundary S. Namely, let Γ ⊂ ∂Ω and denote Σ = Γ × (0, T). Our aim is to reconstruct the initial condition v from the imprecise measurement ϕ ∈ L^2(Σ) of the solution u on Σ:
‖u|_Σ − ϕ‖_{L^2(Σ)} ≤ ε. (4.4)
From now on, as in the previous chapters, to emphasize the dependence of the solution u of (4.1)–(4.3) on the initial condition v, we denote it by u(v) or u(x, t; v). Denoting Cu(v) = u(v)|_Σ, we thus have to solve the operator equation Cu(v) = ϕ. (4.5)
Remark 4.1.1. To see the ill-posedness of the problem, note that if condition 2) of Theorem 1.1.2 is satisfied, then u ∈ H^{1,1}(Q). Hence the operator mapping v to u|_Σ is compact from L^2(Ω) to L^2(Σ). Thus, the problem of reconstructing v from u|_Σ is ill-posed.
To characterize the degree of ill-posedness of this problem, denote the solution to the problem (4.1)–(4.3) with v ≡ 0 by u^0, and denote the solution to the problem (4.1)–(4.3) with f ≡ 0, g ≡ 0 by u_0. Then, the operator
Cv = Cu − Cu^0 (4.6)
from L^2(Ω) to L^2(Σ) is linear, and the problem (4.5) is reduced to solving the linear equation Cv = ϕ − u^0|_Σ. We thus have to analyze the singular values of C. In doing so, let us introduce the variational method for solving the problem (4.5).
We reformulate the reconstruction problem as the least squares problem of minimizing the functional
J_0(v) = (1/2)‖u(v)|_Σ − ϕ‖²_{L^2(Σ)} (4.7)
over L^2(Ω). As this minimization problem is unstable, we minimize its Tikhonov-regularized functional
J_γ(v) = J_0(v) + (γ/2)‖v − v*‖²_{L^2(Ω)} (4.8)
over L^2(Ω), with γ > 0 the regularization parameter and v* an estimate of v which can be set to zero.
Now we prove that J_γ is Fréchet differentiable and derive a formula for its gradient. In doing so, we introduce the adjoint problem (4.9), where χ_Σ(ξ, t) = 1 if (ξ, t) ∈ Σ and zero otherwise.
The solution of this problem is understood in the weak sense of §1.2. Since u(v)|_Σ − ϕ ∈ L^2(Σ), there exists a unique solution in W(0, T; H^1(Ω)) to (4.9).
Lemma 4.1.1. The functional J_γ is Fréchet differentiable and
∇J_γ(v) = p(x, 0) + γ(v(x) − v*(x)), (4.10)
where p(x, t) is the solution to the adjoint problem (4.9).
Proof. For a small variation δv of v, we have
J_0(v + δv) − J_0(v) = ∫∫_Σ δu(v)(u(v) − ϕ) ds dt + o(‖δv‖_{L^2(Ω)}),
where δu is the solution to the problem
Using Green's formula (1.25) (Theorem 1.2.2) for (4.9) and (4.11), we have
Consequently, the functional J_0 is Fréchet differentiable and
From this result, we see that the functional J_γ(v) is also Fréchet differentiable and its gradient ∇J_γ(v) has the form (4.10). The proof is complete.
Now we return to characterizing the degree of ill-posedness of the problem (4.5). Since ∇J_0(v) = p(x, 0), if in this formula we take ϕ = u^0|_Σ, then, due to Proposition 4.1.1, we have C*Cv = p^†(x, 0), where p^† is the solution to the adjoint problem
with boundary condition ∂p^†/∂N = u_0(v) χ_Σ on S. Thus, for any v ∈ L^2(Ω) we can evaluate C*Cv by solving the direct problem (4.1)–(4.3) and the adjoint problem (4.13). However, the explicit form of C*C is not available. As in the previous chapters, when we discretize the problem, the finite-dimensional approximations C_h of C are matrices, and so we can apply Lanczos' algorithm [81] to estimate the eigenvalues of C_h* C_h based on its values C_h* C_h v_h. We will present some numerical results in §4.4.
We note that when Σ ≡ S, Lions [50, pp. 216–219] suggested the following variational method. Consider the two boundary value problems
then minimize the functional
(4.20) over L^2(Ω). As this variational problem inherits the ill-posed nature of the original problem, we regularize it by minimizing the Tikhonov functional
In this setting, the solution of the Dirichlet problem (4.14)–(4.16) is understood in the common sense: choose a function Φ ∈ H^{1,1}(Q) such that Φ|_S = h; then ũ_1 = u_1 − Φ satisfies a new homogeneous Dirichlet problem with a new right-hand side f̃ and initial condition ṽ. The function ũ_1 ∈ W(0, T; H^1(Ω)) is said to be a weak solution to this homogeneous Dirichlet problem.
If h is regular enough, there exists a unique solution ũ_1 to the homogeneous Dirichlet problem, and thus there exists a unique solution u_1 ∈ W(0, T; H^1(Ω)) to (4.14)–(4.16).
Since u_1 and u_2 belong to W(0, T; H^1(Ω)), we can modify Lions' method as follows: minimize the functional
(4.23) with λ_1 and λ_2 being non-negative and λ_1 + λ_2 > 0.
The functional MJL_γ is Fréchet differentiable and its gradient can be represented via two adjoint problems.
Lemma 4.1.2. The functional MJL_0 is Fréchet differentiable and its gradient has the form
MJL_0'(v) = p_1(x, 0) − p_2(x, 0). (4.26)
Lions' method is the subject of separate, independent research; we therefore do not pursue it in this thesis.
Discretization of the variational method in space variables
We now turn to approximating the minimization problem (4.8). Due to the previous section,
∇J_γ(v) = C*(Cv − (ϕ − u^0|_Σ)) + γ(v − v*) = p(x, 0) + γ(v − v*),
where Cv = u_0(v)|_Σ and p is the solution to the adjoint problem (4.13). Thus, the optimality condition is
p(x, 0) + γ(v − v*) = 0. (4.27)
Denote C_h v = û_0|_Σ; then ‖Cv − C_h v‖_{L^2(Σ)} tends to zero as h tends to zero. The discrete version of the functional (4.8) is
J_h^γ(v̂_h) = (1/2)‖C_h v̂_h − (ϕ − u^0|_Σ)‖²_{L^2(Σ)} + (γ/2)‖v̂_h − v̂_h*‖²_{L^2(Ω)},
for which we have the first-order optimality condition
We note that to evaluate C_h* we have to solve the corresponding discretized adjoint problem; but the Neumann condition in the adjoint problem (4.13) does not belong to H^{1,1}(S), therefore p is not in H^{1,1}(Q), and hence we do not have the strong convergence of C_h* z to C* z in L^2(Ω). However, when we discretize (4.13) we mollify the Neumann data by convolution with Steklov's kernel [40]; therefore we have new approximate data in H^{1,1}(S). Since the solution of the adjoint problem (4.13) is stable with respect to the data, the solution p̄ of the adjoint problem with mollified data approximates the solution p of (4.13). Now we apply the above finite difference scheme to the adjoint problem with mollified data to get its multi-linear interpolation p̂_h such that p̂_h → p̄ in L^2([0, T]; L^2(Ω)) and p̂_h(t) → p̄(t) weakly in H^1(Ω) for all t ∈ [0, T]. Thus, in this way, instead of the adjoint operator C_h*, we define an approximation Ĉ_h* of C_h* for which ‖Ĉ_h* z − C* z‖_{L^2(Ω)} tends to zero for all z being multi-linear interpolations on Ω_h.
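Mollification of the noisy Neumann data by a Steklov (moving-average) kernel can be sketched as follows on a sampled time signal; the width parameter `delta` and all names are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def steklov_mollify(phi, dt, delta):
    # Convolve the sampled signal phi with the averaging kernel of width
    # 2*delta (Steklov mean); edge padding keeps the time grid unchanged.
    half = max(1, int(round(delta / dt)))
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    padded = np.pad(phi, half, mode="edge")
    return np.convolve(padded, kernel, mode="valid")
```

The averaging reproduces constants exactly and damps high-frequency noise, which is the effect needed before discretizing the adjoint problem.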
Let v̂_γ^h be the solution of the variational problem
Ĉ_h*(C_h v̂ − (ϕ − u^0|_Σ)) + γ(v̂ − v̂*) = 0. (4.29)
Following Section 1.5, we can prove the following result.
Proposition 4.2.1. Let v_γ be the solution of the variational problem (4.27) and γ > 0. Then v̂_γ^h converges to v_γ in L^2(Ω) as h tends to zero.
Full discretization of the variational problem and the conjugate gradient method
In this section, we consider the problem of estimating the discrete initial condition v̄ from the discrete measurement of the solution on the boundary of the domain. The fully discretized version of J_γ has the form (4.30).
To minimize (4.30) by the conjugate gradient method, we first calculate the gradient of the objective function J_{h,Δt}(v̄); it is given by the following theorem.
Theorem 4.3.1. The gradient of J_{h,Δt} at v̄ is given by
∇J_{h,Δt}(v̄) = (A^0)* η^0, (4.31)
where η = (η^0, ..., η^M) satisfies the adjoint problem, with η^{M−1} = ψ^M,
(4.32) and the matrices (A^m)* and (B^m)* given by
Proof. For a small variation δv̄ of v̄, we have from (4.30) that
It follows from (1.111) that w is the solution to the problem
Taking the inner product of both sides of the mth equation of (4.35) with an arbitrary vector η^m ∈ R^{N_1×···×N_n} and then summing the results over m = 0, ..., M − 1, we obtain
Here, ⟨·, ·⟩ is the inner product in R^{N_1×···×N_n} and (A^m)* is the adjoint matrix of A^m.
Taking the inner product of both sides of the first equation of (4.32) with an arbitrary vector w^{m+1} and summing the results over m = 0, ..., M − 2, we have
Taking the inner product of both sides of the second equation of (4.32) with an arbitrary vector w^M, we have
It follows from (4.37) and (4.38) that
On the other hand, we can prove that Σ_m Σ_{k∈Ω_h} |w^{k,m}|² = o(‖δv̄‖). Hence, it follows from the above equalities that
Consequently, J_{h,Δt} is differentiable and its gradient has the form (4.31).
Note that, since the matrices Λ_i, i = 1, ..., n, are symmetric, we have for m = 0, ..., M − 1
The conjugate gradient method for the discretized functional (4.30) consists of the following steps:
Step 1. Choose an initial approximation v^0 and calculate the residual r̂^0 = u(v^0)|_Σ − ϕ by solving the splitting scheme (1.111) with v̄ replaced by the initial approximation v^0, and set k = 0.
Step 2. Calculate the gradient r^0 = −∇J_γ(v^0) given in (4.31) by solving the adjoint problem (4.32). Then set d^0 = r^0.
Step 3. Calculate α_0 = ‖r^0‖² / (‖u(d^0)|_Σ‖² + γ‖d^0‖²), where u(d^0) can be calculated from the splitting scheme (1.111) with v̄ replaced by d^0 and g = 0, F = 0. Then set v^1 = v^0 + α_0 d^0.
Step 4. For k = 1, 2, ..., calculate r^k = −∇J_γ(v^k), d^k = r^k + β_k d^{k−1}, where β_k = ‖r^k‖² / ‖r^{k−1}‖², and then α_k = ‖r^k‖² / (‖u(d^k)|_Σ‖² + γ‖d^k‖²), where u(d^k) can be calculated from the splitting scheme (1.111) with v̄ replaced by d^k and g = 0, F = 0. Then set v^{k+1} = v^k + α_k d^k. The simulation of this algorithm on the computer will be given in the next section.
Numerical examples
In this section we present numerical simulations for one- and two-dimensional problems. As in the previous chapters, we test different kinds of initial conditions:
1) very smooth, 2) continuous but not smooth (the hat function), 3) discontinuous (step functions). The degree of difficulty increases from test 1) to test 3). In the one-dimensional case we also present numerical calculations of the singular values by the method described in Section 4.1.
4.4.1 Numerical examples in the one-dimensional case
Set Ω = (0, 1), T = 1. Consider the system
u_t − (a u_x)_x = f in Q, t ∈ (0, T], u|_{t=0} = v in Ω,
where the coefficient a = 2xt + x²t + 1. The observations are taken at x = 0 and x = 1. The noise level is 10^{−2}.
Example 1. We approximate the singular values for the cases when the coefficient a is increased by factors a_0 = 5 and a_0 = 10. It appears that the larger the coefficient of the equation, the smaller the singular values. This can be seen in Figure 4.1, which shows the singular values evaluated by the method presented in Section 4.1.
Figure 4.1: Example 1: Singular values for the 1D problem.
Now we present numerical results for the different initial conditions explained above.
Example 2. Smooth initial condition: v = sin(2πx).
Example 3. Continuous but not smooth initial condition: v = 2x if x ≤ 0.5, and v = 2(1 − x) otherwise.
Example 4. Discontinuous initial condition: v = 1 if 0.25 ≤ x ≤ 0.75, and v = 0 otherwise.
Figure 4.2: Examples 2, 3, 4: 1D problem: reconstruction results for smooth, continuous and discontinuous initial conditions.
4.4.2 Numerical examples in the multi-dimensional case
Set Ω := (0, 1) × (0, 1), T = 1. Consider the equation u_t − (a_1 u_{x_1})_{x_1} − (a_2 u_{x_2})_{x_2} = f. As in the one-dimensional case, we choose the initial condition v and let u = v × (1 − t). Putting u into the equation, we obtain the boundary data and the right-hand side f. The observation is taken on the whole boundary S and the noise level is set to 10^{−2}. In all examples we take a_1(x_1, x_2, t) = a_2(x_1, x_2, t) = 10(1 + 10 cos(πx_1 t) cos(πx_2)).
Example 5. Smooth initial condition: v = sin(πx_1) sin(πx_2).
Figure 4.3: Example 5: Exact initial condition (left) and its reconstruction (right).
Figure 4.4: Example 5 (continued): Error (left) and the vertical slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
Example 6. Continuous but not smooth initial condition:
v = 2x_2, if x_2 ≤ 0.5 and x_2 ≤ x_1 and x_1 ≤ 1 − x_2,
v = 2(1 − x_2), if x_2 ≥ 0.5 and x_2 ≥ x_1 and x_1 ≥ 1 − x_2,
Figure 4.5: Example 6: Exact initial condition (left) and its reconstruction (right).
Figure 4.6: Example 6 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
Example 7. Discontinuous initial condition: v is a step function.
Figure 4.7: Example 7: Exact initial condition (left) and its reconstruction (right).
Figure 4.8: Example 7 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
In all examples we see that the numerical reconstructions are quite good. However, when the coefficients are large, the ill-posedness of the problem is more severe and the method is less effective.
This chapter is written on the basis of the paper
[26] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from boundary observations, Journal of Inverse and Ill-Posed Problems 24 (2016), no. 2, 195–220.
In this thesis we study data assimilation in heat conduction: reconstructing the initial condition in a heat transfer process from either 1) the observation of the temperature at the final time moment, 2) interior integral observations, which are regarded as interior measurements, or 3) boundary observations. The first problem is new in the sense that the coefficients of the equation describing the heat transfer process depend on time, and up to now there have been very few studies devoted to it. The second problem is a new setting for this kind of problem in data assimilation: interior observations are important, but related studies are devoted to the case of pointwise observations, which are not realistic in practice; the use of integral observations is more practical. The third problem is very hard, as the observation is available only on the boundary, and up to now there have been very few studies for this case.
We reformulate these problems as variational problems of minimizing a misfit functional in the least squares sense. We prove that the functional is Fréchet differentiable and derive a formula for its gradient via an adjoint problem; as a by-product of the method, we propose a very natural and easy method for estimating the degree of ill-posedness of the reconstruction problem. For numerically solving the problems, we discretize the direct and adjoint problems by the splitting finite difference method to obtain the gradient of the discretized variational problems, and then apply the conjugate gradient method to solve them. We note that, since the solutions in the thesis are understood in the weak sense, the finite difference method for them is not trivial. With respect to the discretization in the space variables, we prove convergence results for the discretization methods. We test our method on the computer for various numerical examples to show the efficiency of our approach.
The author's publications related to the thesis
[1] Nguyen Thi Ngoc Oanh, A splitting method for a backward parabolic equation with time-dependent coefficients, Computers & Mathematics with Applications 65 (2013), 17–28.
[2] Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition in parabolic equations from integral observations, Inverse Problems in Science and Engineering (to appear), doi: 10.1080/17415977.2016.1229778.
[3] Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition in parabolic equations from boundary observations, Journal of Inverse and Ill-Posed Problems 24 (2016), no. 2, 195–220.
[1] Agmon S. and Nirenberg L., Properties of solutions of ordinary differential equations in Banach spaces, Comm. Pure Appl. Math. 16(1963), 121–239.
[2] Agoshkov V.I., Optimal Control Methods and the Method of Adjoint Equations in Problems of Mathematical Physics, Russian Academy of Sciences, Institute for Numerical Mathematics, Moscow, 2003 (in Russian).
[3] Agoshkov V.I., On some inverse problems for distributed parameter systems, Russian J. Numer. Anal. Math. Modelling 18(2003), no. 6, 455–465.
[4] Alifanov O.M., Inverse Heat Transfer Problems, Springer, New York, 1994.
[5] Alifanov O.M., Artyukhin E.A., and Rumyantsev S.V., Extreme Methods for Solving Ill-Posed Problems with Applications to Inverse Heat Transfer Problems, Begell House.
[6] Aubert G. and Kornprobst P., Mathematical Problems in Image Processing, Springer, New York, 2006.
[7] Banks H.T. and Kunisch K., Estimation Techniques for Distributed Parameter Systems, Birkhäuser, Boston, MA, 1989.
[8] Baumeister J., Stable Solution of Inverse Problems, Friedr. Vieweg & Sohn, Braunschweig, 1987.
[9] Beck J.V., Blackwell B., and St. Clair C.R., Inverse Heat Conduction: Ill-Posed Problems, Wiley, New York, 1985.
[10] Bengtsson L., Ghil M., and Källén E., Dynamic Meteorology: Data Assimilation Methods, Springer-Verlag, New York, 1981.
[11] Bennett A.F., Inverse Methods in Physical Oceanography, Cambridge University Press, Cambridge, 1992.
[12] Boussetila N. and Rebbani F., Optimal regularization method for ill-posed Cauchy problems, Electron. J. Differential Equations 147(2006), 1–15.
[13] Bulychëv E.V., Glasko V.B., and Fëdorov S.M., Reconstruction of an initial temperature from its measurements on a surface, Zh. Vychisl. Mat. i Mat. Fiz. 23(1983).
[14] Chavent G., Nonlinear Least Squares for Inverse Problems: Theoretical Foundations and Step-by-Step Guide for Applications, Springer, New York, 2009.
[15] Courtier P. and Talagrand O., Variational assimilation of meteorological observations with the direct and adjoint shallow water equations, Tellus 42A(1990), 531–549.
[16] Courtier P. and Talagrand O., Variational assimilation of meteorological observations with the adjoint vorticity equations, Part II: Numerical results, Quart. J. Roy. Meteor. Soc.
[17] Courtier P., Derber J., Errico R.M., Louis J.F., and Vukićević T., Review of the use of adjoint, variational methods and Kalman filters in meteorology, Tellus 45A(1993).
[18] Duc N.V., Parabolic Equations Backwards in Time, PhD Thesis, Vinh University.
[19] Engl H.W., Hanke M., and Neubauer A., Regularization of Inverse Problems, Dordrecht, Boston, London, 1996.
[20] Hadamard J., Lectures on the Cauchy Problem in Linear Partial Differential Equations, Yale University Press, New Haven, 1923.
[21] Hào D.N., A noncharacteristic Cauchy problem for linear parabolic equations II: A variational method, Numer. Funct. Anal. Optim. 13(5&6)(1992), 541–564.
[22] Hào D.N., A noncharacteristic Cauchy problem for linear parabolic equations III: A variational method and its approximation schemes, Numer. Funct. Anal. Optim. 13(5&6)(1992), 565–583.
[23] Hào D.N., A mollification method for ill-posed problems, Numer. Math. 68(1994).
[24] Hào D.N., Methods for Inverse Heat Conduction Problems, Peter Lang Verlag, Frankfurt/Main, Bern, New York, Paris, 1998.
[25] Hào D.N. and Duc N.V., Stability results for backward parabolic equations with time-dependent coefficients, Inverse Problems 27(2011), no. 2, 025003, 20 pp.
[26] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from boundary observations, J. Inverse Ill-Posed Probl. 24(2016), 195–220.
[27] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from integral observations, Inverse Probl. Sci. Eng., doi: 10.1080/17415977.2016.1229778.
[28] Hào D.N., Thanh P.X., Lesnic D., and Johansson B.T., A boundary element method for a multi-dimensional inverse heat conduction problem, Int. J. Comput. Math. 89(2012).
[29] Hào D.N., Thành N.T., and Sahli H., Splitting-based gradient method for multi-dimensional inverse conduction problems, J. Comput. Appl. Math. 232(2009), 361–377.
[30] Hinze M., A variational discretization concept in control constrained optimization: the linear-quadratic case, Comput. Optim. Appl. 30(2005), 45–61.
[31] Isakov V., Inverse Problems for Partial Differential Equations, Second edition, Springer, New York, 2006.
[32] Ivanov V.K., On linear problems which are not well-posed, Dokl. Akad. Nauk SSSR 145(1962), no. 2, 270–272 (in Russian).
[33] Ivanov V.K., Vasin V.V., and Tanana V.P., Theory of Linear Ill-Posed Problems and its Applications, VSP, Utrecht, 2002.
[34] John F., Numerical solution of the equation of heat conduction for preceding times, Ann. Mat. Pura Appl. 40(1955), 129–142.
[35] Jovanović B.S. and Süli E., Analysis of Finite Difference Schemes for Linear Partial Differential Equations with Generalized Solutions, Springer, London, 2014.
[36] Kabanikhin S.I., Inverse and Ill-Posed Problems: Theory and Applications, De Gruyter, Germany, 2011.
[37] Kalnay E., Atmospheric Modeling, Data Assimilation and Predictability, Cambridge University Press, Cambridge, 2002.
[38] Klibanov M.V., Estimates of initial conditions of parabolic equations and inequalities via lateral Cauchy data, Inverse Problems 22(2006), 495–514.
[39] Klibanov M.V. and Tikhonravov A.V., Estimates of initial conditions of parabolic equations and inequalities in infinite domains via lateral Cauchy data, J. Differential Equations 237(2007), 198–224.
[40] Ladyzhenskaya O.A., The Boundary Value Problems of Mathematical Physics, Springer-Verlag, New York, 1985.
[41] Ladyzhenskaya O.A., Solonnikov V.A., and Ural'tseva N.N., Linear and Quasilinear Equations of Parabolic Type, American Mathematical Society, 1968.
[42] Lavrent'ev M.M., On Cauchy's problem for Laplace's equation, Dokl. Akad. Nauk SSSR 102(1955), no. 2, 205–206 (in Russian).
[43] Lavrent'ev M.M., Integral equations of the first kind, Dokl. Akad. Nauk SSSR.
[44] Lavrent'ev M.M., Ill-Posed Problems of Mathematical Physics, Siberian Branch of the Russian Academy Publishers, 1962 (in Russian).
[45] Lavrent'ev M.M., Romanov V.G., and Shishatskii S.P., Ill-Posed Problems in Mathematical Physics and Analysis, Amer. Math. Soc., Providence, RI, 1986.
[46] Le Dimet F.-X. and Shutyaev V.P., On Newton methods in data assimilation, Russian J. Numer. Anal. Math. Modelling 15(2000), no. 5, 419–434.
[47] Le Dimet F.-X. and Shutyaev V.P., On data assimilation for quasilinear parabolic problems, Russian J. Numer. Anal. Math. Modelling 16(2001), no. 3, 247–259.
[48] Le Dimet F.-X. and Talagrand O., Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects, Tellus 38A(1986), 97–110.
[49] Li J., Yamamoto M., and Zou J., Conditional stability and numerical reconstruction of initial temperature, Commun. Pure Appl. Anal. 8(2009), 361–382.
[50] Lions J.-L., Optimal Control of Systems Governed by Partial Differential Equations, Springer, Berlin, 1971.
[51] Louis A.K., Inverse und schlecht gestellte Probleme, B.G. Teubner, Stuttgart, 1989.
[52] Lundvall J., Kozlov V., and Weinerfelt P., Iterative methods for data assimilation for Burgers' equation, J. Inverse Ill-Posed Probl. 14(2006), 505–535.
[53] Manselli P. and Miller K., Dimensionality reduction methods for efficient numerical solution, backward in time, of parabolic equations with variable coefficients, SIAM J. Math. Anal. 11(1980), 147–159.
[54] Marchuk G.I., Methods of Numerical Mathematics, Springer-Verlag, New York, 1975.
[55] Marchuk G.I., Mathematical Modeling in the Problem of the Environment, Nauka, Moscow, 1982 (in Russian).
[56] Marchuk G.I., Splitting and alternating direction methods. In: Ciarlet P.G. and Lions J.-L. (eds.), Handbook of Numerical Analysis, Volume 1: Finite Difference Methods, Elsevier Science Publishers B.V., North-Holland, Amsterdam, 1990.
[57] Marchuk G.I., Adjoint Equations and Analysis of Complex Systems, Springer, New York, 1995.
[58] Mizohata S., Unicité du prolongement des solutions pour quelques opérateurs différentiels paraboliques, Mem. Coll. Sci. Univ. Kyoto Ser. A Math. 31(1958), 219–239.
[59] Nemirovskii A.S., The regularizing properties of the adjoint gradient method in ill-posed problems, Zh. Vychisl. Mat. i Mat. Fiz. 26(1986), 332–347; English transl. in U.S.S.R. Comput. Maths. Math. Phys. 26(1986), no. 2, 7–16.
[60] Nocedal J. and Wright S.J., Numerical Optimization, Second edition, Springer, New York, 2006.
[61] Oanh N.T.N., A splitting method for a backward parabolic equation with time-dependent coefficients, Comput. Math. Appl. 65(2013), 17–28.
[62] Oanh N.T.N. and Huong B.V., Determination of a time-dependent term in the right-hand side of linear parabolic equations, Acta Math. Vietnam. 41(2016), 313–335.
[63] Okubo A., Diffusion and Ecological Problems: Modern Perspectives, Springer Science+Business Media, New York, 2001.
[64] Parmuzin E.I., Le Dimet F.-X., and Shutyaev V.P., On error analysis in variational data assimilation problem for a nonlinear convection-diffusion model, Russian J. Numer. Anal. Math. Modelling 21(2006), no. 2, 169–183.
[65] Parmuzin E.I. and Shutyaev V.P., Numerical solution of the problem on reconstructing the initial condition for a semilinear parabolic equation, Russian J. Numer. Anal. Math. Modelling 21(2006), no. 4, 375–393.
[66] Parmuzin E.I. and Shutyaev V.P., Variational data assimilation for a nonstationary heat conduction problem with nonlinear diffusion, Russian J. Numer. Anal. Math. Modelling 20(2005), no. 1, 81–95.
[67] Parmuzin E.I. and Shutyaev V.P., Numerical algorithms for solving a problem of data assimilation, Zh. Vychisl. Mat. i Mat. Fiz. 37(1997), no. 7, 816–827 (in Russian); translation in Comput. Math. Math. Phys. 37(1997), no. 7, 792–803.
[68] Payne L., Improperly Posed Problems in Partial Differential Equations, SIAM, Philadelphia, 1975.
[69] Pucci C., Sui problemi di Cauchy non "ben posti", Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Nat. 18(1955), no. 8, 473–477 (in Italian).
[70] Samarskii A.A., Lazarov R.D., and Makarov V.L., Finite Difference Schemes for Differential Equations with Weak Solutions, Visshaya Shkola Publ., Moscow, 1987 (in Russian).
[71] Shutyaev V.P., Control Operators and Iterative Algorithms in Variational Data Assimilation Problems, Nauka, Moscow, 2001 (in Russian).
[72] Sun N.-Z., Inverse Problems in Groundwater Modeling, Kluwer Acad. Publishers, Dordrecht, Boston, London, 1994.
[73] Talagrand O., A study of the dynamics of four-dimensional data assimilation.
[74] Talagrand O., Assimilation of observations, an introduction, J. Met. Soc. Japan 75(1997), 1B, 191–209.
[75] Talagrand O. and Courtier P., Variational assimilation of meteorological observations with the adjoint vorticity equations, Part I: Theory, Quart. J. Roy. Meteor. Soc. 113(1987), 1311–1328.
[76] Thành N.T., Infrared Thermography for the Detection and Characterization of Buried Objects, PhD thesis, Vrije Universiteit Brussel, Brussels, Belgium, 2007.
[77] Thành N.T., Hào D.N., and Sahli H., Thermal infrared technique for landmine detection: mathematical formulation and methods, Acta Math. Vietnam. 36(2011), 469–504.
[78] Tikhonov A.N., On the stability of inverse problems, Doklady Acad. Sci. USSR.
[79] Tikhonov A.N., On the solution of ill-posed problems and the method of regularization, Dokl. Akad. Nauk SSSR 151(1963), 501–504 (in Russian).
[80] Tikhonov A.N. and Arsenin V.Y., Solutions of Ill-Posed Problems, Winston, Washington, 1977.
[81] Trefethen L.N. and Bau D. III, Numerical Linear Algebra, SIAM, Philadelphia, 1997.
[82] Tröltzsch F., Optimal Control of Partial Differential Equations, Graduate Studies in Mathematics, American Mathematical Society, Providence, Rhode Island, 2010.
[83] Wloka J., Partial Differential Equations, Cambridge University Press, 1987.
[84] Yanenko N.N., The Method of Fractional Steps, Springer-Verlag, Berlin, Heidelberg, New York, 1971.