APPLICATIONS OF THE POINCARÉ INEQUALITY TO EXTENDED KANTOROVICH METHOD

DER-CHEN CHANG, TRISTAN NGUYEN, GANG WANG, AND NORMAN M. WERELEY

Received 3 February 2005; Revised 2 March 2005; Accepted 18 April 2005

We apply the Poincaré inequality to study the extended Kantorovich method that was used to construct a closed-form solution for two coupled partial differential equations with mixed boundary conditions.

Copyright © 2006 Der-Chen Chang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Journal of Inequalities and Applications, Hindawi Publishing Corporation, Volume 2006, Article ID 32356, Pages 1–21, DOI 10.1155/JIA/2006/32356.

1. Introduction

Let $\Omega \subset \mathbb{R}^n$ be a Lipschitz domain. Consider the Dirichlet space $H^1_0(\Omega)$, the collection of all functions in the Sobolev space $L^2_1(\Omega)$ such that
$$H^1_0(\Omega)=\Bigl\{u\in L^2(\Omega): u|_{\partial\Omega}=0,\ \|u\|_{L^2}+\sum_{k=1}^n\Bigl\|\frac{\partial u}{\partial x_k}\Bigr\|_{L^2}<\infty\Bigr\}.\tag{1.1}$$
The famous Poincaré inequality can be stated as follows: for $u\in H^1_0(\Omega)$ there exists a universal constant $C$ such that
$$\int_\Omega u^2(x)\,dx\le C\sum_{k=1}^n\int_\Omega\Bigl|\frac{\partial u}{\partial x_k}\Bigr|^2dx.\tag{1.2}$$
One of the applications of this inequality is to solve the modified version of the Dirichlet problem (see John [5, page 97]): find $v\in H^1_0(\Omega)$ such that
$$(u,v)=\int_\Omega\sum_{k=1}^n\frac{\partial u}{\partial x_k}\frac{\partial v}{\partial x_k}\,dx=\int_\Omega u(x)f(x)\,dx,\tag{1.3}$$
where $x=(x_1,\dots,x_n)$, with a fixed $f\in C(\bar\Omega)$. Then the function $v$ in (1.3) satisfies the boundary value problem
$$\Delta v=-f\ \text{in }\Omega,\qquad v=0\ \text{on }\partial\Omega.\tag{1.4}$$
In this paper, we will use the Poincaré inequality to study the extended Kantorovich method; see [6]. This method has been used extensively in many engineering problems; for example, readers can consult the papers [4, 7, 8, 11, 12] and the references therein. Let us start with a model problem; see [8]. For a clamped rectangular box $\Omega=\prod_{k=1}^n[-a_k,a_k]$, subjected to a lateral distributed load $\mathcal{P}(x)=\mathcal{P}(x_1,\dots,x_n)$, the principle of virtual displacements yields
$$\int_{-a_n}^{a_n}\!\!\cdots\int_{-a_1}^{a_1}\bigl(\eta\nabla^4\Phi-\mathcal{P}\bigr)\,\delta\Phi\,dx_1\cdots dx_n=0,\tag{1.5}$$
where $\Phi$ is the lateral deflection, which satisfies the boundary conditions, $\eta$ is the flexural rigidity of the box, and
$$\nabla^4=\sum_{k=1}^n\frac{\partial^4}{\partial x_k^4}+\sum_{j\ne k}2\frac{\partial^4}{\partial x_j^2\,\partial x_k^2}.\tag{1.6}$$
Since the domain $\Omega$ is a rectangular box, it is natural to assume the deflection has the form
$$\Phi(x)=\Phi_{k_1\cdots k_n}(x)=\prod_{\ell=1}^n f_{k_\ell}(x_\ell).\tag{1.7}$$
It follows that when $f_{k_2}(x_2)\cdots f_{k_n}(x_n)$ is prescribed a priori, (1.5) can be rewritten as
$$\int_{-a_1}^{a_1}\biggl\{\prod_{\ell=2}^n\int_{-a_\ell}^{a_\ell}\bigl(\eta\nabla^4\Phi_{k_1\cdots k_n}-\mathcal{P}\bigr)f_{k_\ell}(x_\ell)\,dx_\ell\biggr\}\,\delta f_{k_1}(x_1)\,dx_1=0.\tag{1.8}$$
Equation (1.8) is satisfied when
$$\prod_{\ell=2}^n\int_{-a_\ell}^{a_\ell}\bigl(\eta\nabla^4\Phi_{k_1\cdots k_n}-\mathcal{P}\bigr)f_{k_\ell}(x_\ell)\,dx_\ell=0.\tag{1.9}$$
Similarly, when $\prod_{\ell=1,\,\ell\ne m}^n f_{k_\ell}(x_\ell)$ is prescribed a priori, (1.5) can be rewritten as
$$\int_{-a_m}^{a_m}\biggl\{\prod_{\ell=1,\,\ell\ne m}^n\int_{-a_\ell}^{a_\ell}\bigl(\eta\nabla^4\Phi_{k_1\cdots k_n}-\mathcal{P}\bigr)f_{k_\ell}(x_\ell)\,dx_\ell\biggr\}\,\delta f_{k_m}(x_m)\,dx_m=0.\tag{1.10}$$
It is satisfied when
$$\prod_{\ell=1,\,\ell\ne m}^n\int_{-a_\ell}^{a_\ell}\bigl(\eta\nabla^4\Phi_{k_1\cdots k_n}-\mathcal{P}\bigr)f_{k_\ell}(x_\ell)\,dx_\ell=0.\tag{1.11}$$
Equations (1.9) and (1.11) are called the Galerkin equations of the extended Kantorovich method. Now we may first choose
$$f_{20}(x_2)\cdots f_{n0}(x_n)=\prod_{\ell=2}^n c_\ell\Bigl(\frac{x_\ell^2}{a_\ell^2}-1\Bigr)^2.\tag{1.12}$$
Then $\Phi_{10\cdots0}(x)=f_{11}(x_1)f_{20}(x_2)\cdots f_{n0}(x_n)$ satisfies the boundary conditions
$$\Phi_{10\cdots0}=0,\qquad\frac{\partial\Phi_{10\cdots0}}{\partial x_\ell}=0\quad\text{at }x_\ell=\pm a_\ell,\ x_1\in[-a_1,a_1],\tag{1.13}$$
for $\ell=2,\dots,n$. Now (1.9) becomes
$$\prod_{\ell=2}^n c_\ell\int_{-a_\ell}^{a_\ell}\Bigl(\nabla^4\Phi_{10\cdots0}-\frac{\mathcal{P}}{\eta}\Bigr)\Bigl(\frac{x_\ell^2}{a_\ell^2}-1\Bigr)^2dx_\ell=0,\tag{1.14}$$
which yields
$$C_4\frac{d^4f_{11}}{dx_1^4}+C_2\frac{d^2f_{11}}{dx_1^2}+C_0f_{11}=B.\tag{1.15}$$
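For concreteness, here is a brief sketch of our own (the paper does not spell it out at this point) of how the constants in (1.15) arise when $n=2$. Writing $w(x_2)=c_2\bigl(x_2^2/a_2^2-1\bigr)^2$, so that $\Phi_{10}=f_{11}(x_1)\,w(x_2)$, expanding $\nabla^4\Phi_{10}=f_{11}''''\,w+2f_{11}''\,w''+f_{11}\,w''''$, and integrating by parts twice using $w(\pm a_2)=w'(\pm a_2)=0$, one convenient normalization of the constants (ours) is
$$C_4=\int_{-a_2}^{a_2}w^2\,dx_2,\qquad C_2=2\int_{-a_2}^{a_2}w\,w''\,dx_2=-2\int_{-a_2}^{a_2}(w')^2\,dx_2,$$
$$C_0=\int_{-a_2}^{a_2}w\,w''''\,dx_2=\int_{-a_2}^{a_2}(w'')^2\,dx_2,\qquad B=\frac{1}{\eta}\int_{-a_2}^{a_2}\mathcal{P}\,w\,dx_2.$$
These are, up to the harmless overall factor $c_2$ in (1.14), the same coefficients that reappear in the one-dimensional functional derived in Section 3 for $\Omega=[-1,1]^2$.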
After solving the above ODE, we can use $f_{11}(x_1)\prod_{\ell=3}^n f_{\ell 0}(x_\ell)$ as a priori data and plug it into (1.10) to find $f_{21}(x_2)$. Then we obtain the function
$$\Phi_{110\cdots0}(x)=f_{11}(x_1)f_{21}(x_2)f_{30}(x_3)\cdots f_{n0}(x_n).\tag{1.16}$$
Continuing this process until we obtain $\Phi_{1\cdots1}(x)=f_{11}(x_1)f_{21}(x_2)\cdots f_{n1}(x_n)$ completes the first cycle. Next, we use $f_{21}(x_2)\cdots f_{n1}(x_n)$ as our a priori data and find $f_{12}(x_1)$. We continue this process and expect to obtain a sequence of "approximate solutions." The problem reduces to investigating the convergence of this sequence; therefore, it is crucial to analyze (1.15). Moreover, from the numerical point of view, it is known that this sequence converges rapidly (see [1, 2]). Hence, it is necessary to give a rigorous mathematical proof of this method.

2. A convex linear functional on $H^2_0(\Omega)$

Denote
$$I[\phi]=\int_\Omega\bigl(|\Delta\phi|^2-2\mathcal{P}(x)\phi(x)\bigr)\,dx\tag{2.1}$$
for $\Omega\subset\mathbb{R}^n$ a bounded Lipschitz domain. Here $x=(x_1,\dots,x_n)$. As usual, denote
$$D^2\phi=\begin{bmatrix}\partial^2\phi/\partial x^2 & \partial^2\phi/\partial x\,\partial y\\ \partial^2\phi/\partial y\,\partial x & \partial^2\phi/\partial y^2\end{bmatrix}.\tag{2.2}$$
For $\Omega\subset\mathbb{R}^2$, we define the Lagrangian function $L$ associated to $I[\phi]$ as follows:
$$L:\Omega\times\mathbb{R}\times\mathbb{R}^2\times\mathbb{R}^4\longrightarrow\mathbb{R},\qquad(x,y;z;X,Y;U,V,S,W)\longmapsto(U+V)^2-2\mathcal{P}(x,y)z,\tag{2.3}$$
where $\mathcal{P}(x,y)$ is a fixed function on $\Omega$ which appears in the integrand of $I[\phi]$. With the above definitions, we have
$$L\bigl(x,y;\phi;\nabla\phi;D^2\phi\bigr)=|\Delta\phi|^2-2\mathcal{P}(x,y)\phi(x,y),\tag{2.4}$$
where we have identified
$$z\leftrightarrow\phi(x,y),\quad X\leftrightarrow\frac{\partial\phi}{\partial x},\quad Y\leftrightarrow\frac{\partial\phi}{\partial y},\quad U\leftrightarrow\frac{\partial^2\phi}{\partial x^2},\quad V\leftrightarrow\frac{\partial^2\phi}{\partial y^2},\quad S\leftrightarrow\frac{\partial^2\phi}{\partial y\,\partial x},\quad W\leftrightarrow\frac{\partial^2\phi}{\partial x\,\partial y}.\tag{2.5}$$
We also set $H^2_0(\Omega)$ to be the class of all square integrable functions such that
$$H^2_0(\Omega)=\Bigl\{\psi\in L^2(\Omega):\sum_{|k|\le2}\bigl\|\partial^k\psi\bigr\|_{L^2}<\infty,\ \psi|_{\partial\Omega}=0,\ \nabla\psi|_{\partial\Omega}=0\Bigr\}.\tag{2.6}$$
Fix $(x,y)\in\Omega$. We know that
$$\nabla L(x,y;z;X,Y;U,V,S,W)=\bigl(-2\mathcal{P}(x,y),\,0,\,0,\,2(U+V),\,2(U+V),\,0,\,0\bigr)^T.\tag{2.7}$$
Because of the convexity of the function $L$ in the remaining variables, for all $(\tilde z;\tilde X,\tilde Y;\tilde U,\tilde V,\tilde S,\tilde W)\in\mathbb{R}\times\mathbb{R}^2\times\mathbb{R}^4$ one has
$$L\bigl(x,y;\tilde z;\tilde X,\tilde Y;\tilde U,\tilde V,\tilde S,\tilde W\bigr)\ge L(x,y;z;X,Y;U,V,S,W)-2\mathcal{P}(x,y)(\tilde z-z)+2(U+V)\bigl(\tilde U-U+\tilde V-V\bigr).\tag{2.8}$$
In particular, with $\tilde z=\tilde\phi(x,y)$, one has
$$L\bigl(x,y;\tilde\phi;\nabla\tilde\phi;D^2\tilde\phi\bigr)\ge L\bigl(x,y;\phi;\nabla\phi;D^2\phi\bigr)+2\Delta\phi\bigl(\Delta\tilde\phi-\Delta\phi\bigr)-2\mathcal{P}(x,y)(\tilde\phi-\phi).\tag{2.9}$$
This implies that
$$|\Delta\tilde\phi|^2-2\mathcal{P}(x,y)\tilde\phi\ge|\Delta\phi|^2-2\mathcal{P}(x,y)\phi+2\Delta\phi\bigl(\Delta\tilde\phi-\Delta\phi\bigr)-2\mathcal{P}(x,y)\bigl(\tilde\phi-\phi\bigr).\tag{2.10}$$
If instead we fix $(x,y;z)\in\Omega\times\mathbb{R}$, then
$$L\bigl(x,y;z;\tilde X,\tilde Y;\tilde U,\tilde V,\tilde S,\tilde W\bigr)\ge L(x,y;z;X,Y;U,V,S,W)+2(U+V)\bigl(\tilde U-U+\tilde V-V\bigr).\tag{2.11}$$
This implies that
$$L\bigl(x,y;\tilde\phi;\nabla\tilde\phi;D^2\tilde\phi\bigr)\ge L\bigl(x,y;\tilde\phi;\nabla\phi;D^2\phi\bigr)+2\Delta\phi\bigl[\Delta\tilde\phi-\Delta\phi\bigr].\tag{2.12}$$
Therefore,
$$|\Delta\tilde\phi|^2-2\mathcal{P}(x,y)\tilde\phi\ge|\Delta\phi|^2-2\mathcal{P}(x,y)\tilde\phi+2\Delta\phi\bigl[\Delta\tilde\phi-\Delta\phi\bigr].\tag{2.13}$$

Lemma 2.1. Suppose either (1) $\phi\in H^2_0(\Omega)\cap C^4(\Omega)$ and $\eta\in C^1_c(\Omega)$; or (2) $\phi\in H^2_0(\Omega)\cap C^3(\bar\Omega)\cap C^4(\Omega)$ and $\eta\in H^2_0(\Omega)$. Let $\delta I[\phi;\eta]$ denote the first variation of $I$ at $\phi$ in the direction $\eta$, that is,
$$\delta I[\phi;\eta]=\lim_{\varepsilon\to0}\frac{I[\phi+\varepsilon\eta]-I[\phi]}{\varepsilon}.\tag{2.14}$$
Then
$$\delta I[\phi;\eta]=2\int_\Omega\bigl(\Delta^2\phi-\mathcal{P}(x,y)\bigr)\eta\,dx\,dy.\tag{2.15}$$
Proof. We know that
$$I[\phi+\varepsilon\eta]-I[\phi]=2\varepsilon\int_\Omega[\Delta\phi\,\Delta\eta-\mathcal{P}\eta]\,dx\,dy+\varepsilon^2\int_\Omega(\Delta\eta)^2\,dx\,dy.\tag{2.16}$$
Hence,
$$\delta I[\phi;\eta]=2\int_\Omega[\Delta\phi\,\Delta\eta-\mathcal{P}\eta]\,dx\,dy.\tag{2.17}$$
If either assumption (1) or (2) holds, we can apply Green's formula on the Lipschitz domain $\Omega$ to obtain
$$\int_\Omega\Delta\phi\,\Delta\eta\,dx\,dy=\int_\Omega\eta\,\Delta^2\phi\,dx\,dy+\int_{\partial\Omega}\Bigl(\frac{\partial\eta}{\partial\vec n}\,\Delta\phi-\eta\,\frac{\partial}{\partial\vec n}\Delta\phi\Bigr)\,d\sigma,\tag{2.18}$$
where $\partial/\partial\vec n$ is the derivative in the direction normal to $\partial\Omega$. Since either $\eta\in C^1_c(\Omega)$ or $\eta\in H^2_0(\Omega)$, the boundary term vanishes, which proves the lemma.

Lemma 2.2. Let $\phi\in H^2_0(\Omega)$. Then
$$\|\phi\|_{H^2_0(\Omega)}\approx\|\Delta\phi\|_{L^2(\Omega)}.\tag{2.19}$$
Proof. Since $\phi\in H^2_0(\Omega)$, there exists a sequence $\{\phi_k\}\subset C^\infty_c(\Omega)$ such that $\lim_{k\to\infty}\phi_k=\phi$ in the $H^2_0$-norm.
From a well-known result for Calderón–Zygmund operators (see Stein [10, page 77]), one has
$$\Bigl\|\frac{\partial^2 f}{\partial x_j\,\partial x_\ell}\Bigr\|_{L^p}\le C\|\Delta f\|_{L^p},\qquad j,\ell=1,\dots,n,\tag{2.20}$$
for all $f\in C^2_c(\mathbb{R}^n)$ and $1<p<\infty$. Here $C$ is a constant that depends only on $n$. Applying this result to each $\phi_k$, we obtain
$$\Bigl\|\frac{\partial^2\phi_k}{\partial x^2}\Bigr\|_{L^2(\Omega)},\ \Bigl\|\frac{\partial^2\phi_k}{\partial x\,\partial y}\Bigr\|_{L^2(\Omega)},\ \Bigl\|\frac{\partial^2\phi_k}{\partial y^2}\Bigr\|_{L^2(\Omega)}\le C\|\Delta\phi_k\|_{L^2(\Omega)}.\tag{2.21}$$
Taking the limit, we conclude that
$$\Bigl\|\frac{\partial^2\phi}{\partial x^2}\Bigr\|_{L^2(\Omega)},\ \Bigl\|\frac{\partial^2\phi}{\partial x\,\partial y}\Bigr\|_{L^2(\Omega)},\ \Bigl\|\frac{\partial^2\phi}{\partial y^2}\Bigr\|_{L^2(\Omega)}\le C\|\Delta\phi\|_{L^2(\Omega)}.\tag{2.22}$$
Applying the Poincaré inequality twice to the function $\phi\in H^2_0(\Omega)$, we have
$$\|\phi\|_{L^2(\Omega)}\le C_1\|\nabla\phi\|_{L^2(\Omega)}\le C_2\Bigl(\Bigl\|\frac{\partial^2\phi}{\partial x^2}\Bigr\|_{L^2(\Omega)}+\Bigl\|\frac{\partial^2\phi}{\partial x\,\partial y}\Bigr\|_{L^2(\Omega)}+\Bigl\|\frac{\partial^2\phi}{\partial y^2}\Bigr\|_{L^2(\Omega)}\Bigr)\le C\|\Delta\phi\|_{L^2(\Omega)}.\tag{2.23}$$
Hence $\|\phi\|_{L^2(\Omega)}\le C\|\Delta\phi\|_{L^2(\Omega)}$, and together with (2.22) this yields $\|\phi\|_{H^2_0(\Omega)}\le C\|\Delta\phi\|_{L^2(\Omega)}$. The reverse inequality is trivial. The proof of this lemma is therefore complete.

Lemma 2.3. Let $\{\phi_k\}$ be a bounded sequence in $H^2_0(\Omega)$. Then there exist $\phi\in H^2_0(\Omega)$ and a subsequence $\{\phi_{k_j}\}$ such that
$$I[\phi]\le\liminf_j I\bigl[\phi_{k_j}\bigr].\tag{2.24}$$
Proof. By the weak compactness theorem for reflexive Banach spaces, and hence for Hilbert spaces, there exist a subsequence $\{\phi_{k_j}\}$ of $\{\phi_k\}$ and $\phi\in H^2_0(\Omega)$ such that $\phi_{k_j}\to\phi$ weakly in $H^2_0(\Omega)$. Since
$$H^2_0(\Omega)\subset H^1_0(\Omega)\subset\subset L^2(\Omega)\tag{2.25}$$
by the Sobolev embedding theorem, we have
$$\phi_{k_j}\longrightarrow\phi\quad\text{in }L^2(\Omega)\tag{2.26}$$
after passing to yet another subsequence if necessary. Now fix $(x,y,\phi_{k_j}(x,y))\in\mathbb{R}^2\times\mathbb{R}$ and apply inequality (2.13); we have
$$\bigl|\Delta\phi_{k_j}\bigr|^2-2\mathcal{P}(x,y)\phi_{k_j}(x,y)\ge|\Delta\phi|^2-2\mathcal{P}(x,y)\phi_{k_j}(x,y)+2\Delta\phi\bigl(\Delta\phi_{k_j}-\Delta\phi\bigr).\tag{2.27}$$
This implies that
$$I\bigl[\phi_{k_j}\bigr]\ge\int_\Omega\bigl(|\Delta\phi|^2-2\mathcal{P}(x,y)\phi_{k_j}\bigr)\,dx\,dy+2\int_\Omega\Delta\phi\cdot\bigl(\Delta\phi_{k_j}-\Delta\phi\bigr)\,dx\,dy.\tag{2.28}$$
But $\phi_{k_j}\to\phi$ in $L^2(\Omega)$; hence
$$\int_\Omega\bigl(|\Delta\phi|^2-2\mathcal{P}(x,y)\phi_{k_j}\bigr)\,dx\,dy\longrightarrow\int_\Omega\bigl(|\Delta\phi|^2-2\mathcal{P}(x,y)\phi\bigr)\,dx\,dy=I[\phi].\tag{2.29}$$
Moreover, $\phi_{k_j}\to\phi$ weakly in $H^2_0(\Omega)$ implies that
$$\int_\Omega\Delta\phi\cdot\bigl(\Delta\phi_{k_j}-\Delta\phi\bigr)\,dx\,dy\longrightarrow0.\tag{2.30}$$
Taking the limit, it follows that
$$I[\phi]\le\liminf_j I\bigl[\phi_{k_j}\bigr].\tag{2.31}$$
This completes the proof of the lemma.

Remark 2.4. The above proof uses the convexity of $L(x,y;z;X,Y;U,V,S,W)$ when $(x,y;z)$ is fixed. We already remarked at the beginning of this section that when $(x,y)$ is fixed, $L(x,y;z;X,Y;U,V,S,W)$ is convex in the remaining variables, including the $z$-variable. That is, we are not required to utilize the full strength of the convexity of $L$ here.

3. The extended Kantorovich method

Now we shift our focus to the extended Kantorovich method for finding an approximate solution to the minimization problem
$$\min_{\phi\in H^2_0(\Omega)}I[\phi]\tag{3.1}$$
when $\Omega=[-a,a]\times[-b,b]$ is a rectangular region in $\mathbb{R}^2$. In the sequel, we will write $\phi(x,y)$ (resp., $\phi_k(x,y)$) as $f(x)g(y)$ (resp., $f_k(x)g_k(y)$) interchangeably, as notated in Kerr and Alexander [8]. More specifically, we will study the extended Kantorovich method for the case $n=2$, which has been used extensively in the analysis of stress on rectangular plates. Equivalently, we will seek an approximate solution of the above minimization problem in the form $\phi(x,y)=f(x)g(y)$, where $f\in H^2_0([-a,a])$ and $g\in H^2_0([-b,b])$. To phrase this differently, we will search for an approximate solution in the tensor product of Hilbert spaces $H^2_0([-a,a])\otimes H^2_0([-b,b])$, and all sequences $\{\phi_k\}$, $\{\phi_{k_j}\}$ involved hereinafter reside in this Hilbert space. Without loss of generality, we may assume that $\Omega=[-1,1]\times[-1,1]$, since all subsequent results remain valid for the general case $\Omega=[-a,a]\times[-b,b]$ by appropriate scaling and normalization of the $x$ and $y$ variables.
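Purely as an illustration, and not part of the original paper, the following is a minimal numerical sketch (ours) of the alternating scheme just described, for the special case treated below: uniform load $\mathcal{P}\equiv\gamma$, flexural rigidity $\eta=1$, and $\Omega=[-1,1]^2$. It solves the one-dimensional Euler–Lagrange equation (3.18) of Section 3 in weak (Galerkin) form over a small polynomial basis satisfying the clamped conditions, and, for simplicity, uses the right-hand side $\gamma\int g$ in both half-steps of a cycle. All identifiers (gamma, n_basis, minimize_given, and so on) are our own.

```python
import numpy as np
from numpy.polynomial import Polynomial as P
from numpy.polynomial.legendre import leggauss

gamma, n_basis, n_cycles = 1.0, 6, 5
nodes, weights = leggauss(60)          # Gauss-Legendre quadrature on [-1, 1]

def integrate(p):
    """Integral of a numpy Polynomial over [-1, 1]."""
    return np.dot(weights, p(nodes))

# Galerkin basis phi_k(t) = (1 - t^2)^2 * t^k: each phi_k satisfies the
# clamped conditions phi_k(+-1) = 0 and phi_k'(+-1) = 0.
basis = [P([1, 0, -2, 0, 1]) * P([0] * k + [1]) for k in range(n_basis)]

def minimize_given(g):
    """Fix one factor g and return the other factor: weak form of
       ||g||^2 f'''' - 2||g'||^2 f'' + ||g''||^2 f = gamma * int g   (cf. (3.18))."""
    c1 = integrate(g * g)
    c2 = 2.0 * integrate(g.deriv() * g.deriv())
    c3 = integrate(g.deriv(2) * g.deriv(2))
    rhs = gamma * integrate(g)
    A = np.empty((n_basis, n_basis))
    b = np.empty(n_basis)
    for i, u in enumerate(basis):
        b[i] = rhs * integrate(u)
        for j, v in enumerate(basis):
            A[i, j] = (c1 * integrate(u.deriv(2) * v.deriv(2))
                       + c2 * integrate(u.deriv() * v.deriv())
                       + c3 * integrate(u * v))
    coeff = np.linalg.solve(A, b)
    return sum(c * p for c, p in zip(coeff, basis))

g = P([1, 0, -2, 0, 1])                # g0(y) = (y^2 - 1)^2: the clamped quartic profile of (1.12)
for _ in range(n_cycles):
    f = minimize_given(g)              # fix g, minimize over f
    g = minimize_given(f)              # fix f, minimize over g (roles reversed)
print("approximate center deflection f(0)*g(0):", f(0) * g(0))
```

For instance, for the quartic profile $g_0(y)=(y^2-1)^2$ of (1.12) one has $\|g_0\|_{L^2}^2=256/315$, $\|g_0'\|_{L^2}^2=256/105$, $\|g_0''\|_{L^2}^2=128/5$, and $\int_{-1}^1g_0\,dy=16/15$, so the first ODE of the cycle reads $(256/315)f''''-(512/105)f''+(128/5)f=16\gamma/15$.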
As in [8], we will treat the special case $\mathcal{P}(x,y)=\gamma$; that is, we assume that the load $\mathcal{P}(x,y)$ is distributed uniformly over the given rectangular plate. To start the extended Kantorovich scheme, we first choose $g_0(y)\in H^2_0([-1,1])\cap C^\infty_c(-1,1)$ and find the minimizer $f_1(x)\in H^2_0([-1,1])$ of the functional
$$I[fg_0]=\int_\Omega\bigl[\bigl(\Delta(fg_0)\bigr)^2-2\gamma f(x)g_0(y)\bigr]\,dx\,dy=\int_\Omega\bigl[g_0^2(f'')^2+2ff''g_0g_0''+f^2(g_0'')^2-2\gamma fg_0\bigr]\,dx\,dy$$
$$=\int_{-1}^1(f'')^2dx\int_{-1}^1g_0^2\,dy+2\int_{-1}^1(g_0')^2dy\int_{-1}^1(f')^2dx+\int_{-1}^1(g_0'')^2dy\int_{-1}^1f^2dx-2\gamma\int_{-1}^1g_0\,dy\int_{-1}^1f\,dx,\tag{3.2}$$
where the last equality was obtained via integration by parts of $ff''$ and $g_0g_0''$. Since $g_0$ has been chosen a priori, we can rewrite the functional $I$ as
$$J[f]=\|g_0\|_{L^2}^2\int_{-1}^1(f'')^2dx+2\|g_0'\|_{L^2}^2\int_{-1}^1(f')^2dx+\|g_0''\|_{L^2}^2\int_{-1}^1f^2dx-2\gamma\int_{-1}^1g_0(y)\,dy\int_{-1}^1f\,dx\tag{3.3}$$
for all $f\in H^2_0([-1,1])$. Now we may rewrite (3.3) in the following form:
$$J[f]=\int_{-1}^1\bigl[C_1(f'')^2+C_2(f')^2+C_3f^2+C_4f\bigr]\,dx\equiv\int_{-1}^1K(x,f,f',f'')\,dx\tag{3.4}$$
with $K:\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ given by
$$(x;z;V;W)\longmapsto C_1W^2+C_2V^2+C_3z^2+C_4z,\tag{3.5}$$
where
$$C_1=\|g_0\|_{L^2}^2,\qquad C_2=2\|g_0'\|_{L^2}^2,\qquad C_3=\|g_0''\|_{L^2}^2,\qquad C_4=-2\gamma\int_{-1}^1g_0(y)\,dy.\tag{3.6}$$
As long as $g_0\not\equiv0$, as we have implicitly assumed, the Poincaré inequality implies that
$$0<C_1\le\alpha C_2\le\beta C_3\tag{3.7}$$
for some positive constants $\alpha$ and $\beta$, independent of $g_0$. Consequently, $K(x;z;V;W)$ is a strictly convex function in the variables $z$, $V$, $W$ when $x$ is fixed. In other words, $K$ satisfies
$$K(x;\tilde z;\tilde V;\tilde W)-K(x;z;V;W)\ge\frac{\partial K}{\partial z}(x;z;V;W)(\tilde z-z)+\frac{\partial K}{\partial V}(x;z;V;W)(\tilde V-V)+\frac{\partial K}{\partial W}(x;z;V;W)(\tilde W-W)\tag{3.8}$$
for all $(x;z;V;W)$ and $(x;\tilde z;\tilde V;\tilde W)$ in $\mathbb{R}^4$, and the inequality becomes an equality at $(x;z;V;W)$ only if $\tilde z=z$, or $\tilde V=V$, or $\tilde W=W$.

Proposition 3.1. Let $\mathcal{L}:\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ be a $C^\infty$ function satisfying the following convexity condition:
$$\mathcal{L}(x;z+z';V+V';W+W')-\mathcal{L}(x;z;V;W)\ge\frac{\partial\mathcal{L}}{\partial z}(x;z;V;W)\,z'+\frac{\partial\mathcal{L}}{\partial V}(x;z;V;W)\,V'+\frac{\partial\mathcal{L}}{\partial W}(x;z;V;W)\,W'\tag{3.9}$$
for all $(x;z;V;W)$ and $(x;z+z';V+V';W+W')\in\mathbb{R}^4$, with equality at $(x;z;V;W)$ only if $z'=0$, or $V'=0$, or $W'=0$. Also, let
$$J[f]=\int_\alpha^\beta\mathcal{L}\bigl(x,f(x),f'(x),f''(x)\bigr)\,dx,\qquad\forall f\in H^2_0(\alpha,\beta).\tag{3.10}$$
Then
$$J[f+\eta]-J[f]\ge\delta J[f,\eta],\qquad\forall\eta\in C^\infty_c(\alpha,\beta),\tag{3.11}$$
and equality holds only if $\eta\equiv0$. Here $\delta J[f,\eta]$ is the first variation of $J$ at $f$ in the direction $\eta$.

Proof. Condition (3.9) means that at each $x$,
$$\mathcal{L}(x;f+\eta;f'+\eta';f''+\eta'')-\mathcal{L}(x;f;f';f'')\ge\frac{\partial\mathcal{L}}{\partial z}(x;f;f';f'')\,\eta(x)+\frac{\partial\mathcal{L}}{\partial V}(x;f;f';f'')\,\eta'(x)+\frac{\partial\mathcal{L}}{\partial W}(x;f;f';f'')\,\eta''(x)\tag{3.12}$$
for all $\eta\in C^\infty_c(\alpha,\beta)$, with equality only if $\eta(x)=0$, or $\eta'(x)=0$, or $\eta''(x)=0$. Equivalently, equality holds in (3.12) at $x$ only if $\eta(x)\eta'(x)=0$ or $\eta''(x)=0$; in other words,
$$\eta''(x)\,\frac{d}{dx}\eta^2(x)=0.\tag{3.13}$$
Integrating (3.12) gives
$$J[f+\eta]-J[f]\ge\int_\alpha^\beta\Bigl(\frac{\partial\mathcal{L}}{\partial z}\eta+\frac{\partial\mathcal{L}}{\partial V}\eta'+\frac{\partial\mathcal{L}}{\partial W}\eta''\Bigr)\,dx=\delta J[f,\eta].\tag{3.14}$$
Now suppose there exists $\eta\in C^\infty_c(\alpha,\beta)$ such that (3.14) is an equality. Since $\mathcal{L}$ is a smooth function, this equality forces (3.12) to be a pointwise equality, which implies, in view of (3.13), that
$$\eta''(x)\,\frac{d}{dx}\eta^2(x)=0,\qquad\forall x.\tag{3.15}$$
If $\eta''(x)\equiv0$, then $\eta'(x)$ is constant, which implies that $\eta'(x)\equiv0$ (since $\eta\in C^\infty_c(\alpha,\beta)$). This tells us that $\eta$ is constant, and we conclude that $\eta\equiv0$ on the interval $(\alpha,\beta)$. If $\eta''(x)\not\equiv0$, set $U=\{x\in(\alpha,\beta):\eta''(x)\ne0\}$. Then $U$ is a non-empty open set, so there exist $x_0\in U$ and an open neighborhood $\mathcal{O}_{x_0}$ of $x_0$ contained in $U$. Then $\eta''(\xi)\ne0$ for all $\xi\in\mathcal{O}_{x_0}\subset U$. Thus
$$\frac{d}{dx}\eta^2=0\quad\text{on }\mathcal{O}_{x_0}.\tag{3.16}$$
Hence $\eta(\xi)$ is constant on $\mathcal{O}_{x_0}$. But this creates a contradiction, because $\eta''(\xi)\ne0$ on $\mathcal{O}_{x_0}$.
Therefore,
$$J[f+\eta]-J[f]=\delta J[f,\eta]\tag{3.17}$$
only if $\eta(x)\equiv0$, as desired. This completes the proof of the proposition.

Corollary 3.2. Let $J[f]$ be as in (3.4). Then $f_1\in H^2_0([-1,1])$ is the unique minimizer of $J[f]$ if and only if $f_1$ solves the following ODE:
$$\|g_0\|_{L^2}^2\frac{d^4f}{dx^4}-2\|g_0'\|_{L^2}^2\frac{d^2f}{dx^2}+\|g_0''\|_{L^2}^2f=\gamma\int_{-1}^1g_0\,dy.\tag{3.18}$$
Proof. Suppose $f_1$ is the unique minimizer. Then $f_1$ is a local extremum of $J[f]$. This implies that $\delta J[f,\eta]=0$ for all $\eta\in H^2_0([-1,1])$. Using the notation of (3.4), we have
$$0=\delta J[f,\eta]=\int_{-1}^1\Bigl(\frac{\partial K}{\partial z}\eta+\frac{\partial K}{\partial V}\eta'+\frac{\partial K}{\partial W}\eta''\Bigr)dx=\int_{-1}^1\Bigl(\frac{\partial K}{\partial z}-\frac{d}{dx}\frac{\partial K}{\partial V}+\frac{d^2}{dx^2}\frac{\partial K}{\partial W}\Bigr)\eta(x)\,dx\tag{3.19}$$
for all $\eta\in H^2_0([-1,1])$. This implies that
$$\frac{\partial K}{\partial z}-\frac{d}{dx}\frac{\partial K}{\partial V}+\frac{d^2}{dx^2}\frac{\partial K}{\partial W}=0,\tag{3.20}$$
which is the Euler–Lagrange equation (3.18). This also follows from Lemma 2.1 directly.

Conversely, assume $f_1$ solves (3.18). Then the above argument shows that $\delta J[f,\eta]=0$ for all $\eta\in H^2_0([-1,1])$. Since $K$ satisfies condition (3.9) in Proposition 3.1, we conclude that
$$J[f_1+\eta]-J[f_1]\ge\delta J[f_1,\eta],\qquad\forall\eta\in C^\infty_c([-1,1]).\tag{3.21}$$
This tells us that $J[f_1+\eta]\ge J[f_1]$ for all $\eta\in C^\infty_c([-1,1])$, and $J[f_1+\eta]>J[f_1]$ if $\eta\not\equiv0$. Observe that $J:H^2_0([-1,1])\to\mathbb{R}$ as given in (3.4) is continuous in the $H^2_0$-norm. This fact, combined with the density of $C^\infty_c([-1,1])$ in $H^2_0([-1,1])$ (in the $H^2_0$-norm), implies that
$$J[f_1+\eta]\ge J[f_1],\qquad\forall\eta\in H^2_0([-1,1]).\tag{3.22}$$
This means that for all $\varphi\in H^2_0([-1,1])$ we have $J[\varphi]\ge J[f_1]$, and if $\varphi\not\equiv f_1$ (almost everywhere), then $\varphi-f_1\not\equiv0$ and hence $J[\varphi]>J[f_1]$. Thus $f_1$ is the unique minimizer of $J$.

Reversing the roles of $f$ and $g$, that is, fixing $f_0$ and finding $g_1\in H^2_0([-1,1])$ to minimize $I[f_0g]$ over $g\in H^2_0([-1,1])$, we obtain the same conclusion by the same arguments.

Corollary 3.3. Fix $f_0\in H^2_0([-1,1])$. Then $g_1\in H^2_0([-1,1])$ is the unique minimizer of
$$J[g]=I[f_0g]=\|f_0\|_{L^2}^2\int_{-1}^1(g'')^2dy+2\|f_0'\|_{L^2}^2\int_{-1}^1(g')^2dy+\|f_0''\|_{L^2}^2\int_{-1}^1g^2dy-2\gamma\|f_0\|_{L^1}\int_{-1}^1g\,dy\tag{3.23}$$
[...]

$$\dots\tag{3.32}$$
where $g\in H^2_0(\Omega)$. But the only solution of this PDE is $g\equiv0$ (see Evans [3, pages 300–302]). This completes the proof of the lemma.

Remark 3.6. If $n=1$, one can solve $g''-\lambda^{-1}g=0$ directly, without having to appeal to the theory of elliptic PDEs.

Proposition 3.7. The solutions of (3.18) and (3.24) have the same form.

Proof. Using either Lemma 3.5 in the case $n=1$ or the above remark, we see that $\dots$ Hence the characteristic polynomial associated to (3.26) has two pairs of complex conjugate roots as long as $g_0\not\equiv0$. Applying the same arguments to the ODE in (3.24), the proposition is proved.

Remark 3.8. The statement in Proposition 3.7 was claimed in [8] without verification. Indeed, the authors stated therein that the solutions of (3.18) and (3.24) are of the same form because of the positivity of the coefficients on the left-hand side of (3.18) and (3.24).

[...]

5. Convergence of the solutions

In order to discuss the convergence of the extended Kantorovich method, let us start with the following auxiliary lemma.

Lemma 5.1. Let $\phi_n(x,y)=f_n(x)g_n(y)$ and $\psi_n(x,y)=f_{n+1}(x)g_n(y)$. Then these two sequences are bounded in $H^2_0(\Omega)$.

Proof. We will verify the boundedness of $\{\psi_n\}$; the argument for the sequence $\{\phi_n\}$ is identical. Fix an integer ...
Lemma 2.2 then yields
$$\|\psi_n\|_{H^2_0(\Omega)}^2<C\gamma\|\psi_n\|_{L^2(\Omega)}<C\gamma\|\psi_n\|_{H^2_0(\Omega)}.\tag{5.4}$$
Therefore, $\|\psi_n\|_{H^2_0(\Omega)}<C\gamma$, as desired.

Now we are in a position to prove the main theorem of this section.

Theorem 5.2. There exist subsequences $\{\phi_{n_j}\}_j$ and $\{\psi_{n_j}\}_j$ of $\{\phi_n\}$ and $\{\psi_n\}$ which converge in $L^2(\Omega)$ to some functions $\phi,\psi\in H^2_0(\Omega)$. Furthermore, if
$$\mathcal{Z}=\Bigl\{g\in H^2_0([-1,1]):\int_{-1}^1g(y)\,dy=0\Bigr\}\tag{5.5}$$
and if $g_0\notin\mathcal{Z}$, then $\lim\dots$

Proof. ... We conclude that, after passing to another subsequence if necessary,
$$\phi_{n_j}\longrightarrow\phi,\qquad\psi_{n_j}\longrightarrow\psi\quad\text{in }L^2(\Omega).\tag{5.8}$$
From (4.19), we see that $g_0\in\mathcal{Z}$ if and only if $f_1\equiv0$. Hence, if $g_0\in\mathcal{Z}$, the iteration process of the extended Kantorovich method stops and we have $\psi_1(x,y)=f_1(x)g_0(y)\equiv0$. Now suppose $g_0\notin\mathcal{Z}$, that is, $f_1\not\equiv0$. As in the proof of Lemma 5.1, Corollary 3.2 ...

... Thus, after further extracting subsequences of $\{f_{n_j}\}$ and $\{g_{n_j}\}$, we may conclude that the following limits exist and are non-zero:
$$\lim_j\frac{\|f_{n_j}''\|_{L^2}}{\|f_{n_j}\|_{L^2}},\qquad\lim_j\frac{\|f_{n_j}'\|_{L^2}}{\|f_{n_j}\|_{L^2}},\qquad\lim_j\frac{\|g_{n_j}''\|_{L^2}}{\|g_{n_j}\|_{L^2}},\qquad\lim_j\frac{\|g_{n_j}'\|_{L^2}}{\|g_{n_j}\|_{L^2}}.\tag{5.20}$$
This completes the proof of the corollary.

Corollary 5.4. If $g_0\notin\mathcal{Z}$, then there exists a subsequence $\{f_{n_j}g_{n_j}\}_j$ that converges pointwise to a function of the form $\Theta(x,y)\dots$ Furthermore, the derivatives of all orders of $\{f_{n_j}g_{n_j}\}_j$ also converge pointwise to those of $F(x)G(y)$.

Proof. Let us observe the expression of $\phi_n(x,y)=f_n(x)g_n(y)$ in (4.23). Applying Corollary 5.3 to the constants on the right-hand side of (4.23), we can find convergent subsequences
$$\{K_{0n_j}\},\ \{K_{1n_j}\},\ \{K_{2n_j}\},\ \{K_{0n_j}'\},\ \{K_{1n_j}'\},\ \{K_{2n_j}'\},\tag{5.22}$$
and $\{\rho_{n_j}\}$, $\{\kappa_{n_j}\}$, $\{\rho_{n_j}'\}$, $\{\kappa_{n_j}'\}$. In addition, the ...
$$\dots\tag{5.23}$$
hence Theorem 5.2 and Corollary 5.3 guarantee the convergence of the subsequence $\{c_{n_j-1}\}$. Altogether, after replacing all sequences on the right-hand side of (4.23) with their convergent subsequences, we get
$$\Theta(x,y)=\lim_jf_{n_j}g_{n_j}=C\,K_{1\infty}K_{1\infty}'\cosh\rho_\infty x\,\cosh\rho_\infty'\dots$$
... If we differentiate $f_ng_n$ a finite number of times, then from (4.23) each summand is scaled by integral powers of $\rho_n$, $\rho_n'$, $\kappa_n$, and $\kappa_n'$. But we just argued above that these sequences have convergent subsequences. Hence, when $x$, $y$ are fixed, we conclude that all derivatives of $f_{n_j}g_{n_j}$ at $(x,y)$ converge to those of $\Theta(x,y)$ as $j\to\infty$. The proof of the corollary is therefore complete.

Remark 5.5. Corollary ...

References

[6] L. V. Kantorovich and V. I. Krylov, Approximate Methods of Higher Analysis, Noordhoff, Groningen, 1964.
[7] A. D. Kerr, An extension of the Kantorovich method, Quarterly of Applied Mathematics 26 (1968), no. 2, 219–229.
[8] A. D. Kerr and H. Alexander, An application of the extended Kantorovich method to the stress analysis of a clamped rectangular plate, Acta Mechanica 6 (1968), 180–196.
[9] E. H. Lieb and M. Loss, Analysis, Graduate Studies in Mathematics, ...