I. ELEMENTS OF FUNCTIONAL ANALYSIS

In this chapter we recall some fundamental results of functional analysis. We first present the notion of a Hilbert space and discuss some basic properties of the orthogonal projection operator. We then introduce the concepts of closedness and completeness of a system of elements belonging to a Hilbert space. The completeness of the system of elementary sources is a necessary condition for the solution of scattering problems in the framework of the discrete sources method. After this discussion, we briefly present the notions of Schauder and Riesz bases. We will use these concepts when we analyze the convergence of the null-field method. We then consider projection methods for the operator equation $Au = f$, where $A$ is a linear, bounded and boundedly invertible operator from a Hilbert space $H$ onto itself. We will consider the equivalent variational problem $B(u, x) = \mathcal{F}^*(x)$ for all $x \in H$, where $B$ is a bounded and strictly coercive sesquilinear form and $\mathcal{F}$ is a linear and continuous functional. Convergent projection schemes will be constructed by appealing to the fundamental theorem of discrete approximation. Later, we will particularize these results for the space of square integrable tangential vector functions. We conclude this chapter by analyzing projection methods for a linear operator $A$ acting from a Hilbert space $H$ onto a Hilbert space $G$, and for the operator equation $(A + B)u = f$, where $B$ is a compact operator.

1.1 HILBERT SPACES. ORTHOGONAL PROJECTION OPERATOR

Let $H$ be a complex vector space (linear space). The function $(\cdot,\cdot)_H : H \times H \to \mathbb{C}$ is called a Hermitian form if

(a) $(\alpha u + \beta v, w)_H = \alpha (u, w)_H + \beta (v, w)_H$, (linearity)
(b) $(u, v)_H = (v, u)_H^*$, (symmetry)

for all $u, v, w \in H$ and all $\alpha, \beta \in \mathbb{C}$. Here, $a^*$ denotes the complex conjugate of $a$. A Hermitian form with the properties

(a) $(u, u)_H \geq 0$, (positivity)
(b) $(u, u)_H = 0$ if and only if $u = \theta_H$, (definiteness)

where $\theta_H$ stands for the zero element of $H$, is called a scalar product or an inner product. A vector space with a scalar product specified is called an inner product space or a pre-Hilbert space. In terms of the scalar product in $H$, a norm

$\|u\|_H = \sqrt{(u, u)_H}$   (1.1)

can be introduced, after which $H$ becomes a normed space. The following important inequality is the basis for the statement that inner product spaces contain all the elements of Euclidean geometry, while normed spaces have length, but nothing corresponding to angle. It is the Cauchy-Schwarz inequality and is given by

$|(u, v)_H| \leq \|u\|_H \|v\|_H$ for all $u, v \in H$.   (1.2)

The Cauchy-Schwarz inequality and the definition of the scalar product imply that the norm properties:

(a) $\|u\|_H \geq 0$, (positivity)
(b) $\|u\|_H = 0$ if and only if $u = \theta_H$, (definiteness)
(c) $\|\alpha u\|_H = |\alpha| \, \|u\|_H$, (homogeneity)
(d) $\|u + v\|_H \leq \|u\|_H + \|v\|_H$, (triangle inequality)

for all $u, v \in H$ and all $\alpha \in \mathbb{C}$ are satisfied. Therefore any scalar product induces a norm, but in general, a norm $\|\cdot\|_H$ is generated by a scalar product if and only if the parallelogram identity

$\|u + v\|_H^2 + \|u - v\|_H^2 = 2\left(\|u\|_H^2 + \|v\|_H^2\right)$   (1.3)

holds. Given a sequence $(u_n)$ of elements of a normed space $X$, we say that $u_n$ converges to an element $u$ of $X$ if $\|u_n - u\|_X \to 0$ as $n \to \infty$. A sequence $(u_n)$ of elements in a normed space $X$ is called a Cauchy sequence if $\|u_n - u_m\|_X \to 0$ as $n, m \to \infty$. A subset $M$ of a normed space $X$ is called complete if every Cauchy sequence of elements in $M$ converges to an element in $M$. A normed space is called a Banach space if it is complete. An inner product space is called a Hilbert space if it is complete.
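The two inequalities above lend themselves to a quick numerical check. The following sketch (not part of the original text; it assumes Python with numpy and uses the inner product space $\mathbb{C}^n$ with $(u, v) = \sum_i u_i v_i^*$) verifies the Cauchy-Schwarz inequality (1.2) and the parallelogram identity (1.3) for random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

def inner(u, v):
    # scalar product on C^n: (u, v) = sum_i u_i * conj(v_i)
    return np.vdot(v, u)            # np.vdot conjugates its first argument

def norm(u):
    return np.sqrt(inner(u, u).real)   # (u, u) is real and non-negative

u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Cauchy-Schwarz inequality (1.2)
assert abs(inner(u, v)) <= norm(u) * norm(v) + 1e-12

# parallelogram identity (1.3)
lhs = norm(u + v) ** 2 + norm(u - v) ** 2
rhs = 2.0 * (norm(u) ** 2 + norm(v) ** 2)
assert abs(lhs - rhs) < 1e-9
print("Cauchy-Schwarz and parallelogram identity verified")
```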
A sequence $(u_n)$ in a Hilbert space $H$ converges weakly to $u \in H$ if for any $v \in H$, $(u_n, v)_H \to (u, v)_H$ as $n \to \infty$. Ordinary (norm) convergence is often called strong convergence, to distinguish it from weak convergence. The terms 'strong' and 'weak' convergence are justified by the fact that strong convergence implies weak convergence, while, in general, the converse implication does not hold. If a sequence is contained in a compact set, then weak convergence implies strong convergence. Note that every weakly convergent sequence in a Hilbert space is bounded and every bounded sequence in a Hilbert space has a weakly convergent subsequence.

Two elements $u$ and $v$ of an inner product space $H$ are called orthogonal if $(u, v)_H = 0$; we then write $u \perp v$. If an element $u$ is orthogonal to each element of a set $M$, we call it orthogonal to the set $M$ and write $u \perp M$. Similarly, if each element of a set $M$ is orthogonal to each element of the set $K$, we call these sets orthogonal, and write $M \perp K$. The Pythagorean theorem states that

$\|u + v\|_H^2 = \|u\|_H^2 + \|v\|_H^2$   (1.4)

for any orthogonal elements $u$ and $v$. A set in a Hilbert space is called orthogonal if any two elements of the set are orthogonal. If, moreover, the norm of any element is one, the set is called orthonormal.

A subset $M$ of a normed space is said to be closed if it contains all its limit points. For any set $M$ in a normed space, the closure of $M$ is the union of $M$ with the set of all its limit points. The closure of $M$ is written $\overline{M}$. Obviously, $M$ is contained in $\overline{M}$, and $M = \overline{M}$ if $M$ is closed. Note the following properties of the closure:

(a) For any set $M$, $\overline{M}$ is closed.
(b) If $M \subset K$, then $\overline{M} \subset \overline{K}$.
(c) $\overline{M}$ is the smallest closed set containing $M$; that is, if $M \subset K$ and $K$ is closed, then $\overline{M} \subset K$.

Let now $M$ be a complete vector subspace of an inner product space $H$ and let $u \in H$. Since $\|u - v\|_H \geq 0$, we see that the set $\{\|u - v\|_H \, / \, v \in M\}$ possesses an infimum. Let $d = \inf_{v \in M} \|u - v\|_H$ and let $(v_n)$ be a minimizing sequence, i.e. $(v_n) \subset M$ and $\|u - v_n\|_H \to d$ as $n \to \infty$. Since $M$ is a vector subspace, $\frac{1}{2}(v_n + v_m) \in M$, whence $\|u - \frac{1}{2}(v_n + v_m)\|_H \geq d$. Using this and the parallelogram identity

$\|v_n - v_m\|_H^2 = 2\left(\|u - v_n\|_H^2 + \|u - v_m\|_H^2\right) - 4\left\|u - \frac{v_n + v_m}{2}\right\|_H^2$   (1.5)

gives

$\|v_n - v_m\|_H^2 \leq 2\left(\|u - v_n\|_H^2 + \|u - v_m\|_H^2\right) - 4d^2,$   (1.6)

whence, by letting $n, m \to \infty$, $\|v_n - v_m\|_H \to 0$ follows. Thus, $(v_n)$ is a Cauchy sequence and since $M$ is complete, there exists $w \in M$ such that $\|v_n - w\|_H \to 0$ as $n \to \infty$; moreover $\|u - v_n\|_H \to \|u - w\|_H = d$ as $n \to \infty$. Suppose now that there exists another element $w'$ for which the function $\|u - v\|_H$ attains its minimum; then $d = \|u - w\|_H = \|u - w'\|_H$. Clearly, $\frac{1}{2}(w + w') \in M$ and we have

$d = \inf_{v \in M} \|u - v\|_H \leq \left\|u - \frac{w + w'}{2}\right\|_H.$   (1.7)

Thus, $\left\|u - \frac{1}{2}(w + w')\right\|_H = d$, and by the parallelogram identity

$\|w - w'\|_H^2 = 2\left(\|u - w\|_H^2 + \|u - w'\|_H^2\right) - 4\left\|u - \frac{w + w'}{2}\right\|_H^2 = 0,$   (1.8)

we find $w = w'$. The vector $w$ gives the best approximation of $u$ among all the vectors of $M$. Note that $d$ is called the distance from $u$ to $M$ and is also denoted by $\rho(u, M)$. The operator $P : H \to M$ mapping $u$ onto its best approximation, i.e.

$Pu = w,$   (1.9)

where $\|u - w\|_H = d = \inf_{v \in M} \|u - v\|_H$, is a bounded linear operator with the properties $P^2 = P$ and $(Pu, v)_H = (u, Pv)_H$ for any $u, v \in H$. It is called the orthogonal projection operator from $H$ onto $M$, and $w$ is called the projection of $u$ onto $M$. The following statements characterizing the projection are equivalent:

(a) $\|u - w\|_H \leq \|u - v\|_H$,
(b) $\operatorname{Re}(u - w, v - w)_H \leq 0$,
(c) $\operatorname{Re}(u - v, w - v)_H \geq 0$,

for $u \in H$, $w = Pu \in M$ and any $v \in M$.
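The best-approximation property and the characterizations (a) and (b) can be illustrated numerically. A minimal sketch, assuming numpy and a subspace $M$ of $\mathbb{C}^n$ spanned by a few given vectors (an illustrative choice, not from the text); the projection $w = Pu$ is computed from the Gram (normal) system.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 12, 4

def inner(u, v):
    return np.vdot(v, u)            # (u, v) = sum_i u_i * conj(v_i)

def norm(u):
    return np.sqrt(inner(u, u).real)

# subspace M = span{psi_1, ..., psi_k} in C^n
Psi = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# best approximation w = Pu: solve the Gram system G a = b,
# with G_ij = (psi_j, psi_i) and b_i = (u, psi_i)
G = Psi.conj().T @ Psi
b = Psi.conj().T @ u
a = np.linalg.solve(G, b)
w = Psi @ a

# characterization (a): ||u - w|| <= ||u - v|| for any v in M
# characterization (b): Re(u - w, v - w) <= 0 for any v in M
for _ in range(100):
    v = Psi @ (rng.standard_normal(k) + 1j * rng.standard_normal(k))
    assert norm(u - w) <= norm(u - v) + 1e-8
    assert inner(u - w, v - w).real <= 1e-8
print("best approximation verified, distance d =", norm(u - w))
```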
Let $M$ be a subset of a Hilbert space $H$. The set of all elements orthogonal to $M$ is called the orthogonal complement of $M$,

$M^\perp = \{ u \in H \, / \, u \perp M \}.$

Clearly, $M^\perp$ is a subspace of $H$. To show this we first observe that $M^\perp$ is a vector subspace, since for any scalars $\alpha$ and $\beta$ and any $u, v \in M^\perp$, $(\alpha u + \beta v, \varphi)_H = 0$ for all $\varphi \in M$; whence $\alpha u + \beta v \in M^\perp$ follows. To prove that $M^\perp$ is complete, let us choose a Cauchy sequence $(u_n) \subset M^\perp$; it converges to some $u \in H$ because $H$ is complete. We must show that $u \in M^\perp$. Since for any $v \in H$, and in particular for any $v \in M$, we have $(u_n, v)_H \to (u, v)_H$ as $n \to \infty$ and $(u_n, v)_H = 0$, $n = 1, 2, \ldots$, it follows that $(u, v)_H = 0$ for any $v \in M$. Hence, $u \in M^\perp$ and so $M^\perp$ is complete.

Now, let $H$ be a Hilbert space, $M$ a subspace of $H$, and $P$ the orthogonal projection operator of $H$ onto $M$. Let $u \in H$. From the properties of the projection we see that $\operatorname{Re}(u - Pu, v - Pu)_H \leq 0$ for any $v \in M$. Choose $v = Pu \pm \varphi$ with $\varphi$ being an arbitrary element of $M$. Then $\operatorname{Re}(u - Pu, \pm\varphi)_H \leq 0$, whence $\operatorname{Re}(u - Pu, \varphi)_H = 0$. Replacing in the last relation $\varphi$ by $j\varphi$ ($j^2 = -1$) we get

$\operatorname{Re}(u - Pu, j\varphi)_H = \operatorname{Re}\left[-j (u - Pu, \varphi)_H\right] = \operatorname{Im}(u - Pu, \varphi)_H = 0.$   (1.10)

Thus, for a given $u \in H$ the projection $w = Pu$ satisfies $u - w \perp M$. Therefore, any element $u \in H$ can be uniquely decomposed as

$u = w + w^\perp,$   (1.11)

where $w \in M$ and $w^\perp \in M^\perp$. This result is known as the theorem of orthogonal projection. The operator $Q : H \to M^\perp$ given by

$Qu = u - Pu$   (1.12)

is the orthogonal projection operator from $H$ onto $M^\perp$.

1.2 CLOSED AND COMPLETE SYSTEMS IN HILBERT SPACES. BASES

Let $X$ be a normed space and $M$ a subset of $X$. $M$ is dense in $X$ if for any $u \in X$ and any $\varepsilon > 0$ there exists $u_\varepsilon \in M$ such that $\|u - u_\varepsilon\|_X < \varepsilon$. Equivalently, $M$ is dense in $X$ if and only if for any $u \in X$ there exists a sequence $(u_n) \subset M$ such that $\|u_n - u\|_X \to 0$ as $n \to \infty$. Every set is dense in its closure, i.e. $M$ is dense in $\overline{M}$. $\overline{M}$ is the largest set in which $M$ is dense; that is, if $M$ is dense in $K$, then $K \subset \overline{M}$. If $M$ is dense in a Hilbert space $H$, then $\overline{M} = H$. Conversely, if $\overline{M} = H$, then $M$ is dense in $H$.

Let $H$ be a Hilbert space. If $M$ is dense in $H$ and $u$ is orthogonal to $M$, then $u = \theta_H$. Indeed, let $u \perp M$ and choose an arbitrary $v \in H$. Since $\overline{M} = H$ there exists a sequence $(v_n) \subset M$ such that $\|v_n - v\|_H \to 0$ as $n \to \infty$. Consequently, $(u, v_n)_H \to (u, v)_H$ as $n \to \infty$. From $(u, v_n)_H = 0$, $n = 1, 2, \ldots$, it follows that $(u, v)_H = 0$ for any $v \in H$. Thus, $u \perp H$. The element $u$ is orthogonal to any element of $H$ and in particular is orthogonal to itself, i.e. $(u, u)_H = \|u\|_H^2 = 0$. Hence, $u = \theta_H$.

Elements $\psi_1, \psi_2, \ldots, \psi_N$ of a vector space $X$ are called linearly dependent if there exists a vanishing linear combination $\sum_{i=1}^N \alpha_i \psi_i = 0$ in which the coefficients do not all vanish, i.e. $\sum_{i=1}^N |\alpha_i| > 0$. The vectors are called linearly independent if they are not linearly dependent, or equivalently, if there exists no non-trivial vanishing linear combination. If any finite number of elements of an infinite set $\{\psi_i\}_{i=1}^\infty$ is linearly independent, the set $\{\psi_i\}_{i=1}^\infty$ is called linearly independent.

A system of elements $\{\psi_i\}_{i=1}^\infty$ is called closed in $H$ if there is no element in $H$, except the zero element, orthogonal to all elements of the set; that means $(u, \psi_i)_H = 0$, $i = 1, 2, \ldots$, implies $u = \theta_H$. A system of elements $\{\psi_i\}_{i=1}^\infty$ is called complete in $H$ if the linear span of $\{\psi_i\}_{i=1}^\infty$, i.e. the set of all finite linear combinations of $\{\psi_i\}_{i=1}^\infty$,

$\operatorname{Sp}\{\psi_1, \psi_2, \ldots\} = \left\{ u = \sum_{i=1}^N \alpha_i \psi_i \, / \, \alpha_i \in \mathbb{C}, \; N = 1, 2, \ldots \right\},$

is dense in $H$, i.e. $\overline{\operatorname{Sp}\{\psi_1, \psi_2, \ldots\}} = H$. Equivalently, if $\{\psi_i\}_{i=1}^\infty$ is complete in $H$ then for any $u \in H$ and any $\varepsilon > 0$ there exist an integer $N = N(\varepsilon)$ and a set of coefficients $\{\alpha_i\}_{i=1}^N$ such that $\|u - \sum_{i=1}^N \alpha_i \psi_i\|_H < \varepsilon$.
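Returning to the theorem of orthogonal projection (1.11), (1.12), the following minimal numerical sketch (assuming numpy; the subspace $M$ of $\mathbb{C}^n$ is spanned by random vectors and is purely illustrative) decomposes a vector into $w = Pu$ and $w^\perp = Qu$ and checks the Pythagorean relation (1.4).

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 10, 3

# orthonormal basis of a subspace M of C^n (via a QR factorization)
A = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
Qm, _ = np.linalg.qr(A)            # columns of Qm span M and are orthonormal

P = Qm @ Qm.conj().T               # orthogonal projection onto M
Q = np.eye(n) - P                  # orthogonal projection onto M-perp, cf. (1.12)

u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w, w_perp = P @ u, Q @ u           # decomposition u = w + w_perp, cf. (1.11)

assert np.allclose(u, w + w_perp)
assert abs(np.vdot(w, w_perp)) < 1e-10                          # w is orthogonal to w_perp
assert np.allclose(P @ P, P) and np.allclose(P.conj().T, P)     # P^2 = P and P self-adjoint
# Pythagorean theorem (1.4): ||u||^2 = ||w||^2 + ||w_perp||^2
assert np.isclose(np.vdot(u, u).real,
                  np.vdot(w, w).real + np.vdot(w_perp, w_perp).real)
print("orthogonal decomposition verified")
```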
Let us observe that the closure of the linear span of any set $\{\psi_i\}_{i=1}^\infty$ is a subspace of $H$: it is a vector subspace by its very definition and it is also complete as a closed subset of a complete set. Obviously, if the system $\{\psi_i\}_{i=1}^\infty$ is complete in $H$, then the only element orthogonal to $\{\psi_i\}_{i=1}^\infty$ is the zero element of $H$; thus the set $\{\psi_i\}_{i=1}^\infty$ is closed in $H$. The converse result is also true. To show this let $\{\psi_i\}_{i=1}^\infty$ be a closed system in $H$ and let us denote by $W$ the linear span of $\{\psi_i\}_{i=1}^\infty$. Then any element $u \in H$ can be uniquely represented as $u = Pu + Qu$, where $P$ is the orthogonal projection operator from $H$ onto $\overline{W}$ and $Q$ is the orthogonal projection operator from $H$ onto $\overline{W}^\perp$. Since $Qu \in \overline{W}^\perp$ and $\psi_i \in W$, $i = 1, 2, \ldots$, we get $(Qu, \psi_i)_H = 0$, $i = 1, 2, \ldots$. The closedness of $\{\psi_i\}_{i=1}^\infty$ in $H$ implies $Qu = \theta_H$, and therefore for any element $u \in H$ we have $u = Pu \in \overline{W}$. Thus, $H \subset \overline{W}$, and since $\overline{W} \subset H$ we get $\overline{W} = H$; therefore $W$ is dense in $H$. We summarize this result in the following theorem.

THEOREM 2.1: Let $H$ be a Hilbert space. A system of elements $\{\psi_i\}_{i=1}^\infty$ is complete in $H$ if and only if it is closed in $H$.

A set $\{\psi_i\}_{i=1}^n$ is called a finite basis for the vector space $X$ if it is linearly independent and it spans $X$. A vector space is said to be $n$-dimensional if it has a finite basis consisting of $n$ elements. A vector space with no finite basis is said to be infinite-dimensional. Let $H_N$ be a finite-dimensional vector subspace of a Hilbert space $H$ with orthonormal basis $\{\psi_i\}_{i=1}^N$. Then the orthogonal projection operator from $H$ onto $H_N$ is given by

$P_N u = \sum_{i=1}^N (u, \psi_i)_H \, \psi_i, \quad u \in H.$

For the time being we note a simple but important result characterizing the convergence of the projections. Let $\{\psi_i\}_{i=1}^\infty$ be a complete and linearly independent system in a Hilbert space $H$, let $H_N$ stand for the linear span of $\{\psi_i\}_{i=1}^N$, and let us denote by $P_N$ the orthogonal projection operator from $H$ onto $H_N$. We have

$\|u - P_{N+1} u\|_H = \inf_{v \in H_{N+1}} \|u - v\|_H \leq \inf_{v \in H_N} \|u - v\|_H = \|u - P_N u\|_H$   (1.13)

for any $u \in H$; thus the sequence $\|u - P_N u\|_H$ is convergent. Since $\{\psi_i\}_{i=1}^\infty$ is complete in $H$ we find a sequence $(u_{N_n}) \subset H_{N_n}$ such that $\|u - u_{N_n}\|_H \to 0$ as $n \to \infty$. Then, from $0 \leq \|u - P_{N_n} u\|_H \leq \|u - u_{N_n}\|_H$ we get $\|u - P_{N_n} u\|_H \to 0$ as $n \to \infty$; thus the convergent sequence $\|u - P_N u\|_H$ possesses a subsequence which converges to zero. Therefore, for any $u \in H$ we have

$\|u - P_N u\|_H \to 0 \text{ as } N \to \infty.$   (1.14)
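A sketch illustrating the monotonicity (1.13) and the convergence (1.14) of the projection errors, under the assumption that $H = L^2([-\pi, \pi])$ is replaced by a fine quadrature grid and the orthonormal trigonometric system is used (both are illustrative choices, not prescribed by the text).

```python
import numpy as np

# H = L2([-pi, pi]) approximated on a fine grid; inner product via the trapezoidal rule
t = np.linspace(-np.pi, np.pi, 4001)
dt = t[1] - t[0]

def inner(f, g):
    h = f * np.conj(g)
    return dt * (np.sum(h) - 0.5 * (h[0] + h[-1]))

def norm(f):
    return np.sqrt(inner(f, f).real)

def basis(N):
    # first N elements of the orthonormal trigonometric system in L2([-pi, pi])
    fns = [np.ones_like(t) / np.sqrt(2 * np.pi)]
    k = 1
    while len(fns) < N:
        fns.append(np.sin(k * t) / np.sqrt(np.pi))
        fns.append(np.cos(k * t) / np.sqrt(np.pi))
        k += 1
    return fns[:N]

u = np.abs(t)                        # the target element u(t) = |t|

errors = []
for N in range(1, 30):
    PNu = sum(inner(u, p) * p for p in basis(N))    # P_N u = sum_i (u, psi_i) psi_i
    errors.append(norm(u - PNu))

# (1.13): the error sequence does not increase; (1.14): it keeps decreasing towards zero
assert all(e2 <= e1 + 1e-6 for e1, e2 in zip(errors, errors[1:]))
print("||u - P_N u|| for N = 1, 10, 29:", errors[0], errors[9], errors[-1])
```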
A map $A$ of a vector space $X$ into a vector space $Y$ is called linear if $A$ transforms linear combinations of elements into the same linear combinations of their images, i.e. if $A(\alpha_1 u_1 + \alpha_2 u_2 + \cdots) = \alpha_1 A(u_1) + \alpha_2 A(u_2) + \cdots$. Linear maps are also called linear operators. As in linear algebra, one usually writes arguments without brackets, $A(u) = Au$. Continuity of a linear map is, for normed spaces, a very strong condition, which is shown by the following equivalent statements:

(a) $A$ transforms sequences converging to zero into bounded sequences,
(b) $A$ is continuous at one point (for instance at $u = \theta_X$),
(c) $A$ satisfies the Lipschitz condition $\|Au\|_Y \leq c \|u\|_X$ for all $u \in X$ and $c$ independent of $u$,
(d) $A$ is continuous at every point.

Each number $c$ for which the inequality (c) holds is called a bound for the operator $A$. Let $\mathcal{L}(X, Y)$ be the linear space of all linear continuous maps of a normed space $X$ into a normed space $Y$. The norm of an operator,

$\|A\| = \sup_{u \in X, \; u \neq \theta_X} \frac{\|Au\|_Y}{\|u\|_X} = \sup_{\|u\|_X = 1} \|Au\|_Y,$

satisfies all the axioms of the norm in a normed space, whence the linear space $\mathcal{L}(X, Y)$ is a normed space. Note that the number $\|A\|$ is the smallest bound for $A$. It is not difficult to prove that the space $\mathcal{L}(X, Y)$ is complete if the space $Y$ is complete. A map of a vector space into the space $\mathbb{C}$ of scalars is called a functional. The above statements are valid for linear functionals. The space $\mathcal{L}(X, \mathbb{C})$ is called the conjugate space of $X$ and is denoted by $X^*$. It is always a Banach space.

A system $\{\psi_i\}_{i=1}^\infty$ is called minimal if no element of this system belongs to the closure of the linear span of the remaining elements. In order that the system $\{\psi_i\}_{i=1}^\infty$ be minimal in a Banach space $X$, it is necessary and sufficient that there exist a system of linear and continuous functionals defined on $X$ forming with the given system a biorthogonal system; that is, a system of linear and continuous functionals $\{\mathcal{F}_j\}_{j=1}^\infty$ such that $\mathcal{F}_j(\psi_i) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker symbol. If the system $\{\psi_i\}_{i=1}^\infty$ is complete and minimal, then the system of functionals $\{\mathcal{F}_j\}_{j=1}^\infty$ is defined in a unique manner. In a Hilbert space $H$, by the Riesz theorem (see section 1.3), there exists $\varphi_j$ such that $\mathcal{F}_j(u) = (u, \varphi_j)_H$ for any $u \in H$; therefore $(\psi_i, \varphi_j)_H = \delta_{ij}$. In this case the system $\{\varphi_j\}_{j=1}^\infty$ is called biorthogonal to the system $\{\psi_j\}_{j=1}^\infty$.

A system $\{\psi_i\}_{i=1}^\infty$ is called a Schauder basis of a Banach space $X$ if any element $u \in X$ can be uniquely represented as $u = \sum_{n=1}^\infty \alpha_n \psi_n$, where the convergence of the series is in the norm of $X$. Every basis is a complete minimal system. However, a complete minimal system may not be a basis in the space. For example, the trigonometric system $\psi_0(t) = 1/2$, $\psi_{2n-1}(t) = \sin(nt)$, $\psi_{2n}(t) = \cos(nt)$, $n = 1, 2, \ldots$, is a complete minimal system in the space $C([-\pi, \pi])$ but it does not form a basis in it. In a separable Hilbert space $H$, every complete orthogonal system of elements forms a basis. Thus, the trigonometric system of functions forms a basis in $L^2([-\pi, \pi])$.

The system $\{\psi_i\}_{i=1}^\infty$ is called an unconditional basis in the Banach space $X$ if it remains a basis for an arbitrary rearrangement of its elements. Let $T : X \to X$ be a bounded linear operator with a bounded inverse. If the system $\{\psi_i\}_{i=1}^\infty$ is a basis, then the system $\{T\psi_i\}_{i=1}^\infty$ is a basis. If $\{\psi_i\}_{i=1}^\infty$ is an unconditional basis, then $\{T\psi_i\}_{i=1}^\infty$ is an unconditional basis. In a Hilbert space, every orthonormal basis is unconditional. It can be shown that an arbitrary unconditional basis in a Hilbert space is representable in the form $\{T\varphi_i\}_{i=1}^\infty$, where $\{\varphi_i\}_{i=1}^\infty$ is an orthonormal basis of $H$. Such bases are called Riesz bases. If $\{\psi_i\}_{i=1}^\infty$ is a Riesz basis then the biorthogonal system $\{\varphi_i\}_{i=1}^\infty$ is also a Riesz basis. A complete system $\{\psi_i\}_{i=1}^\infty$ forms a Riesz basis of $H$ if the Gram matrix $\mathsf{G} = [G_{ij}]$, $G_{ij} = (\psi_i, \psi_j)_H$, generates an isomorphism on $l^2$. The system $\{\psi_i\}_{i=1}^\infty$ forms a Riesz basis of $H$ if the inequalities

$c_1 \sum_{i=1}^N |\alpha_i|^2 \leq \left\| \sum_{i=1}^N \alpha_i \psi_i \right\|_H^2 \leq c_2 \sum_{i=1}^N |\alpha_i|^2$

hold for some positive constants $c_1$ and $c_2$, any coefficients $\alpha_i \in \mathbb{C}$ and any $N$.

1.3 PROJECTION METHODS

Let $H$ be a Hilbert space. A map $B : H \times H \to \mathbb{C}$ which is linear in the first argument and antilinear in the second is called a sesquilinear form. The form $B$ is called bounded if there exists a constant $C > 0$ such that $|B(x, y)| \leq C \|x\|_H \|y\|_H$ for all $x, y \in H$, and strictly coercive if there exists a constant $c > 0$ such that

$\operatorname{Re} B(x, x) \geq c \|x\|_H^2 \text{ for all } x \in H.$   (1.21)

Let $H$ be a Hilbert space and $B$ a bounded sesquilinear form on $H$. With $x$ being an element of $H$ we define the functional $\mathcal{F} : H \to \mathbb{C}$ by $\mathcal{F}(y) = B^*(x, y)$. Then $\mathcal{F} \in H^*$. According to the Riesz theorem there exists a unique element $x_f \in H$ such that $\mathcal{F}(y) = (y, x_f)_H$ for all $y \in H$. We define the operator $A : H \to H$ by $Ax = x_f$. Then $B^*(x, y) = (y, Ax)_H$, and further $B(x, y) = (Ax, y)_H$ for all $x, y \in H$. Let us now prove that $A \in \mathcal{L}(H, H)$. The linearity of $A$ follows from the linearity of $B$ in its first argument, while the boundedness of $A$ follows from $\|Ax\|_H^2 = |B(x, Ax)| \leq C \|x\|_H \|Ax\|_H$. If, in addition, $B$ is strictly coercive, then $\operatorname{Re}(Ax, x)_H = \operatorname{Re} B(x, x) \geq c \|x\|_H^2$ for all $x \in H$ and $c > 0$; a bounded operator with this property is called strictly coercive. The Lax-Milgram lemma states that if $B$ is a bounded and strictly coercive sesquilinear form on a Hilbert space $H$, then the strictly coercive bounded operator $A : H \to H$ generated by $B$ has a bounded inverse $A^{-1} : H \to H$.
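In finite dimensions the Lax-Milgram estimate becomes concrete: if $B(x, y) = (Ax, y)$ is bounded and strictly coercive with constant $c$, then $A$ is invertible and $\|A^{-1}\| \leq 1/c$. A minimal sketch (assuming numpy; the matrix is an arbitrary illustration, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8

# a non-Hermitian matrix whose Hermitian part is positive definite, so that
# B(x, y) = (A x, y) is a bounded, strictly coercive sesquilinear form on C^n
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M @ M.conj().T + 2.0 * np.eye(n) + (M - M.conj().T)   # Hermitian PD part + skew part

H_part = 0.5 * (A + A.conj().T)                 # Re(Ax, x) = x^H H_part x
c = np.linalg.eigvalsh(H_part).min()            # coercivity constant in (1.21)
assert c > 0

# Lax-Milgram in finite dimensions: A is invertible and ||A^{-1}|| <= 1/c
A_inv_norm = np.linalg.svd(np.linalg.inv(A), compute_uv=False).max()
assert A_inv_norm <= 1.0 / c + 1e-10

# spot-check the coercivity inequality (1.21) for random vectors
for _ in range(100):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert np.vdot(x, A @ x).real >= c * np.vdot(x, x).real - 1e-8
print("coercivity constant c =", c, " ||A^{-1}|| =", A_inv_norm, " 1/c =", 1.0 / c)
```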
As a consequence of the Riesz theorem and the Lax-Milgram lemma, if $B$ is a bounded and strictly coercive sesquilinear form and $\mathcal{F}$ a bounded linear functional on a Hilbert space $H$, then the variational problem

$B(u, x) = \mathcal{F}^*(x) \text{ for all } x \in H,$   (1.25)

is uniquely solvable and the solution solves the operator equation

$Au = f,$   (1.26)

where $A$ is the operator generated by $B$ and $f$ is the uniquely determined element corresponding to $\mathcal{F}$.
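The equivalence of the operator equation (1.26) and the variational problem (1.25) is easy to check numerically. A minimal sketch on $H = \mathbb{C}^n$ (assuming numpy; the matrix and the right-hand side are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6

def inner(u, v):
    return np.vdot(v, u)            # (u, v) = sum_i u_i * conj(v_i)

# strictly coercive bounded form B(x, y) = (A x, y) on H = C^n
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M @ M.conj().T + np.eye(n) + (M - M.conj().T)   # Hermitian PD part + skew part

f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
# F(x) = (x, f), so F*(x) = (f, x); the variational problem (1.25) reads (Au, x) = (f, x) for all x
u = np.linalg.solve(A, f)           # solution of the operator equation (1.26)

# check that u also solves the variational problem (1.25) for arbitrary test elements x
for _ in range(50):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    B_ux = inner(A @ u, x)          # B(u, x)
    F_star_x = inner(f, x)          # F*(x) = (f, x)
    assert abs(B_ux - F_star_x) < 1e-9
print("the solution of Au = f satisfies the variational problem B(u, x) = F*(x)")
```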
We are now well prepared to present the main result of this chapter, namely the fundamental theorem of discrete approximation. This theorem is frequently used in the finite-element method for the solution of various boundary-value problems by discrete schemes.

THEOREM 3.2: (fundamental theorem of discrete approximation) Let $H$ be a Hilbert space and $B$ a bounded sesquilinear form on $H$ satisfying

$|B(x, x)| \geq c \|x\|_H^2 \text{ for all } x \in H.$   (1.27)

Let $\mathcal{F}$ be a linear and continuous functional on $H$ and $\{\psi_i\}_{i=1}^\infty$ a complete and linearly independent system in $H$. Then

(a) the algebraic system of equations

$\sum_{i=1}^N B(\psi_i, \psi_j) \, a_i^N = \mathcal{F}^*(\psi_j), \quad j = 1, \ldots, N,$   (1.28)

possesses a unique solution;

(b) the sequence

$u_N = \sum_{i=1}^N a_i^N \psi_i$   (1.29)

is convergent; if $\|u_N - u\|_H \to 0$ as $N \to \infty$, then $u$ is the unique solution to the variational problem

$\mathcal{F}^*(x) = B(u, x) \text{ for all } x \in H.$   (1.30)

Proof: Before we present the proof we note that condition (1.27) is weaker than the coerciveness condition (1.21). Coming now to the proof of (a) we define the matrix $\mathsf{B} = [B_{ij}]$ by $B_{ij} = B(\psi_i, \psi_j)$, $i, j = 1, 2, \ldots, N$. Let $H_N = \operatorname{Sp}\{\psi_1, \ldots, \psi_N\}$ and let $P_N$ be the orthogonal projection operator from $H$ onto $H_N$. With $A$ standing for the operator generated by the sesquilinear form $B$, i.e. $B(x, y) = (Ax, y)_H$, we have

$B_{ij} = (A\psi_i, \psi_j)_H = (A\psi_i, P_N \psi_j)_H = (P_N A \psi_i, \psi_j)_H.$   (1.31)

For any $x \in H_N$ we use

$c \|x\|_H^2 \leq |B(x, x)| = |(Ax, x)_H| = |(Ax, P_N x)_H| = |(P_N A x, x)_H| \leq \|P_N A x\|_H \|x\|_H$   (1.32)

to obtain $\|P_N A x\|_H \geq c \|x\|_H$. Consequently, the operator $P_N A : H_N \to H_N$ is invertible. Since $\{\psi_i\}_{i=1}^N$ form a basis of $H_N$ we see that the vectors $\varphi_i = P_N A \psi_i$, $i = 1, \ldots, N$, form a basis of $H_N$. Let us denote by $\mathsf{T} = [T_{ij}]$, $i, j = 1, 2, \ldots, N$, the nonsingular transition matrix passing from the basis $\{\psi_i\}_{i=1}^N$ to the basis $\{\varphi_i\}_{i=1}^N$, i.e. $\varphi_i = \sum_{k=1}^N T_{ik} \psi_k$, $i = 1, \ldots, N$. Then, we have

$B_{ij} = (\varphi_i, \psi_j)_H = \sum_{k=1}^N T_{ik} (\psi_k, \psi_j)_H = \sum_{k=1}^N T_{ik} \Phi_{kj},$   (1.33)

where $\mathsf{\Phi} = [\Phi_{ij}]$, $\Phi_{ij} = (\psi_i, \psi_j)_H$, $i, j = 1, 2, \ldots, N$, is the Gram matrix of the linearly independent system $\{\psi_i\}_{i=1}^N$. The matrix $\mathsf{B} = \mathsf{T}\mathsf{\Phi}$ is thus expressed as a product of two nonsingular matrices. Hence, $\mathsf{B}$ is nonsingular.

For proving (b) we rewrite (1.28) as

$B(u_N, \psi_j) = \mathcal{F}^*(\psi_j), \quad j = 1, \ldots, N.$   (1.34)

Multiplying the above relations by $a_j^{N*}$ and summing over $j$ we obtain $B(u_N, u_N) = \mathcal{F}^*(u_N)$. Then from

$c \|u_N\|_H^2 \leq |B(u_N, u_N)| = |\mathcal{F}^*(u_N)| \leq \|\mathcal{F}\|_{H^*} \|u_N\|_H$   (1.35)

we deduce that $(u_N)$ is bounded; thus we can pick a weakly convergent subsequence $(u_{N_k})$. Let $u$ be the weak limit of this subsequence. From (1.34) we get $B(u_{N_k}, \psi_j) = \mathcal{F}^*(\psi_j)$, $j = 1, \ldots, N_k$. Since for any fixed $j$ the mapping $x \in H$, $x \mapsto B(x, \psi_j)$ is a linear and continuous functional on $H$, and since $u_{N_k} \to u$ weakly as $k \to \infty$, we obtain $B(u, \psi_j) = \mathcal{F}^*(\psi_j)$ for any $j = 1, 2, \ldots$. Next, the completeness of the system $\{\psi_i\}_{i=1}^\infty$ gives $B(u, x) = \mathcal{F}^*(x)$ for any $x \in H$. Let us prove that $u$ is unique. Assume that there exists $u' \neq u$ such that $B(u', x) = \mathcal{F}^*(x)$ for any $x \in H$. Then $B(u - u', x) = 0$ for any $x \in H$ and from

$0 = |B(u - u', u - u')| \geq c \|u - u'\|_H^2 > 0$   (1.36)

the conclusion readily follows. Thus, all weakly convergent subsequences have the same weak limit; whence $u_N \to u$ weakly as $N \to \infty$ (cf. Dinca [43]). Let us now prove the stronger result, namely that $\|u_N - u\|_H \to 0$ as $N \to \infty$. Using

$c \|u_N - u\|_H^2 \leq |B(u_N - u, u_N - u)| = |B(u_N, u_N) - B(u_N, u) - B(u, u_N) + B(u, u)|$   (1.37)

and the identities $B(u, u_N) = \mathcal{F}^*(u_N)$ and $B(u_N, u_N) = \mathcal{F}^*(u_N)$ we get

$c \|u_N - u\|_H^2 \leq |B(u_N, u) - B(u, u)|.$   (1.38)

Since $u_N \to u$ weakly as $N \to \infty$ and the mapping $x \in H$, $x \mapsto B(x, u)$ defines a linear and continuous functional on $H$, we obtain $B(u_N, u) \to B(u, u)$ as $N \to \infty$, and the conclusion readily follows.

Evidently, the fundamental theorem of discrete approximation is also valid for a strictly coercive sesquilinear form $B$. In this context, the unique solution to the variational problem (1.30) coincides with the unique solution to the operator equation (1.26). The projection relations (1.28) may be written as

$(A u_N - f, \psi_i)_H = 0, \quad i = 1, \ldots, N,$   (1.39)

or equivalently as

$P_N A P_N u_N = P_N f,$   (1.40)

where $P_N$ is the orthogonal projection operator from $H$ onto $H_N$ and $H_N = \operatorname{Sp}\{\psi_1, \ldots, \psi_N\}$. The above projection method is also called the Galerkin method. The strongest condition which guarantees the convergence of the projection scheme is the strict coercivity of the sesquilinear form $B$. According to (1.32) we see that this condition implies

$\|P_N A P_N u\|_H \geq c \|P_N u\|_H \text{ for all } u \in H.$   (1.41)
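A minimal sketch of the Galerkin system (1.28), (1.29) for a concrete strictly coercive form, namely the one-dimensional model form $B(u, v) = \int_0^1 (u'v' + uv)\,dt$ on $H_0^1(0, 1)$; the basis, the quadrature grid and the manufactured right-hand side are illustrative assumptions, not taken from the text. The computed errors decrease as $N$ grows, in agreement with Theorem 3.2.

```python
import numpy as np

# model variational problem on H = H^1_0(0, 1):
#   B(u, v) = int_0^1 (u' v' + u v) dt,   F(v) = int_0^1 g v dt,
# with g chosen so that the exact solution is u(t) = sin(pi t).
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def integrate(h):
    return dt * (np.sum(h) - 0.5 * (h[0] + h[-1]))   # trapezoidal rule

g = (np.pi ** 2 + 1.0) * np.sin(np.pi * t)
u_exact = np.sin(np.pi * t)

def psi(i):          # basis psi_i(t) = t^i (1 - t), i = 1, 2, ...
    return t ** i * (1.0 - t)

def dpsi(i):         # psi_i'(t) = i t^(i-1) - (i+1) t^i
    return i * t ** (i - 1) - (i + 1) * t ** i

for N in (2, 4, 8):
    # assemble the algebraic system (1.28): sum_i B(psi_i, psi_j) a_i = F(psi_j)
    Bmat = np.array([[integrate(dpsi(i) * dpsi(j) + psi(i) * psi(j))
                      for i in range(1, N + 1)] for j in range(1, N + 1)])
    rhs = np.array([integrate(g * psi(j)) for j in range(1, N + 1)])
    a = np.linalg.solve(Bmat, rhs)
    u_N = sum(a[i - 1] * psi(i) for i in range(1, N + 1))   # the Galerkin solution (1.29)
    err = np.sqrt(integrate((u_N - u_exact) ** 2))
    print(f"N = {N}:  L2 error = {err:.2e}")
```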
Let us generalize the above results to the case when $A$ is a linear, bounded and boundedly invertible operator from a Hilbert space $H$ onto a Hilbert space $G$. Let $H_N \subset H_{N+1}$ with $\dim H_N = N$ be a sequence of subspaces limit dense in $H$, i.e. for any $u \in H$, $\rho(u, H_N) \to 0$ as $N \to \infty$, and let $P_N$ stand for the orthogonal projection operator onto $H_N$. Analogously, let $G_N \subset G_{N+1}$ with $\dim G_N = N$ be a sequence of subspaces limit dense in $G$ and let $Q_N$ stand for the orthogonal projection operator onto $G_N$. The projection method giving the approximate solution $u_N$ of (1.26) is

$Q_N A P_N u_N = Q_N f.$   (1.42)

Then we can formulate the following result (cf. Ramm [128]).

THEOREM 3.3: Let $A : H \to G$ be a linear, bounded and boundedly invertible operator. Equation (1.42) is uniquely solvable for all sufficiently large $N$, and

$\|u - u_N\|_H \to 0 \text{ as } N \to \infty,$   (1.43)

if and only if

$\|Q_N A P_N u\|_G \geq c \|P_N u\|_H \text{ for all } u \in H \text{ and } N \geq N_0,$   (1.44)

where $N_0$ is some integer and $c > 0$ does not depend on $N$ and $u$.

Proof: Let us prove the necessity. Assume that (1.43) holds and (1.42) is uniquely solvable. Then for $f \in G$ we have $\|u_N - u\|_H \to 0$ as $N \to \infty$, where $u_N = (Q_N A P_N)^{-1} Q_N f$ and $u = A^{-1} f$. Thus the operators $(Q_N A P_N)^{-1} Q_N$ converge pointwise, and by the uniform boundedness principle $\sup_N \|(Q_N A P_N)^{-1} Q_N\| < \infty$; estimating $\|P_N u\|_H = \|(Q_N A P_N)^{-1} Q_N A P_N u\|_H$ in terms of $\|Q_N A P_N u\|_G$ then yields (1.44). To prove the sufficiency, assume that (1.44) holds for $N \geq N_0$. Then the operators $Q_N A P_N : H_N \to G_N$ are invertible, i.e. equation (1.42) is uniquely solvable for all sufficiently large $N$. Since $Q_N A P_N u_N = Q_N f = Q_N A u$, we have $Q_N A P_N (u_N - P_N u) = Q_N A (I - P_N) u$. Therefore,

$\|u_N - P_N u\|_H = \|P_N (u_N - P_N u)\|_H \leq \frac{1}{c} \|Q_N A P_N (u_N - P_N u)\|_G = \frac{1}{c} \|Q_N A (I - P_N) u\|_G \leq \frac{1}{c} \|A (I - P_N) u\|_G,$

and the right-hand side tends to zero as $N \to \infty$ because the subspaces $H_N$ are limit dense in $H$. Consequently, $\|u_N - u\|_H \leq \|u_N - P_N u\|_H + \|P_N u - u\|_H \to 0$ as $N \to \infty$. This finishes the proof of the theorem.

The following theorem will also be used many times in the sequel.

THEOREM 3.4: Let $A : H \to G$ be a linear, bounded and boundedly invertible operator satisfying (1.44). Let $B : H \to G$ be a compact operator and let $A + B$ be boundedly invertible. Then,

$\|Q_N (A + B) P_N u\|_G \geq c \|P_N u\|_H \text{ for all } u \in H \text{ and } N \geq N_0,$   (1.51)

where $N_0$ is some integer and $c > 0$ does not depend on $N$ and $u$.

Proof: For the proof we refer to Ramm [128].

Theorems 3.3 and 3.4 show that the equation

$Q_N (A + B) P_N u_N = Q_N f$   (1.52)

is uniquely solvable for all sufficiently large $N$ and $\|u - u_N\|_H \to 0$ as $N \to \infty$, where $u$ is the exact solution to the operator equation $(A + B) u = f$.
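To illustrate the closing remark, here is a minimal sketch of the projection scheme (1.52) for a second-kind equation $(A + B)u = f$ with $A = I$ and $B$ a compact integral operator on $L^2(0, 1)$; the kernel, the grid and the polynomial basis are illustrative assumptions, not taken from the text. The Galerkin errors decrease as $N$ grows.

```python
import numpy as np

# second-kind equation (A + B)u = f with A = I and B a compact integral operator on L2(0, 1):
#   u(t) + int_0^1 k(t, s) u(s) ds = f(t),   k(t, s) = 0.5 * exp(t s).
# f is manufactured so that the exact solution is u(t) = cos(3 t).
t = np.linspace(0.0, 1.0, 1201)
dt = t[1] - t[0]
w = np.full_like(t, dt); w[0] = w[-1] = 0.5 * dt     # trapezoidal weights

def inner(g, h):
    return np.sum(w * g * h)

K = 0.5 * np.exp(np.outer(t, t))                     # kernel values k(t_i, s_j)
u_exact = np.cos(3.0 * t)
f = u_exact + K @ (w * u_exact)                      # f = (A + B) u_exact on the grid

def psi(i):                                          # trial/test basis psi_i(t) = t^(i-1)
    return t ** (i - 1)

for N in (2, 4, 8):
    ABpsi = [psi(i) + K @ (w * psi(i)) for i in range(1, N + 1)]   # (A + B) psi_i
    # Galerkin system: sum_i ((A + B) psi_i, psi_j) a_i = (f, psi_j), cf. (1.52)
    mat = np.array([[inner(ABpsi[i], psi(j + 1)) for i in range(N)] for j in range(N)])
    rhs = np.array([inner(f, psi(j + 1)) for j in range(N)])
    a = np.linalg.solve(mat, rhs)
    u_N = sum(a[i] * psi(i + 1) for i in range(N))
    print(f"N = {N}:  L2 error = {np.sqrt(inner(u_N - u_exact, u_N - u_exact)):.2e}")
```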