FINITE ELEMENTS

This definitive introduction to finite element methods has been thoroughly updated for this third edition, which features important new material for both research and application of the finite element method. The discussion of saddle point problems is a highlight of the book and has been elaborated to include many more nonstandard applications. The chapter on applications in elasticity now contains a complete discussion of locking phenomena. The numerical solution of elliptic partial differential equations is an important application of finite elements and the author discusses this subject comprehensively. These equations are treated as variational problems for which the Sobolev spaces are the right framework. Graduate students who do not necessarily have any particular background in differential equations but require an introduction to finite element methods will find this text invaluable. Specifically, the chapter on finite elements in solid mechanics provides a bridge between mathematics and engineering.

DIETRICH BRAESS is Professor of Mathematics at Ruhr University Bochum, Germany.

FINITE ELEMENTS
Theory, Fast Solvers, and Applications in Elasticity Theory

DIETRICH BRAESS
Translated from the German by Larry L. Schumaker

CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press, The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521705189

© D. Braess 2007

This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2007

ISBN-13 978-0-511-27910-2  eBook (NetLibrary)
ISBN-10 0-511-27910-8  eBook (NetLibrary)
ISBN-13 978-0-521-70518-9  paperback
ISBN-10 0-521-70518-5  paperback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Contents

Preface to the Third English Edition  page x
Preface to the First English Edition  xi
Preface to the German Edition  xii
Notation  xiv

Chapter I  Introduction
§ 1. Examples and Classification of PDE's
    Examples — Classification of PDE's — Well-posed problems — Problems 10
§ 2. The Maximum Principle  12
    Examples 13 — Corollaries 14 — Problem 15
§ 3. Finite Difference Methods  16
    Discretization 16 — Discrete maximum principle 19 — Problem 21
§ 4. A Convergence Theory for Difference Methods  22
    Consistency 22 — Local and global error 22 — Limits of the convergence theory 24 — Problems 26

Chapter II  Conforming Finite Elements  27
§ 1. Sobolev Spaces  28
    Introduction to Sobolev spaces 29 — Friedrichs' inequality 30 — Possible singularities of H¹ functions 31 — Compact imbeddings 32 — Problems 33
§ 2. Variational Formulation of Elliptic Boundary-Value Problems of Second Order  34
    Variational formulation 35 — Reduction to homogeneous boundary conditions 36 — Existence of solutions 38 — Inhomogeneous boundary conditions 42 — Problems 42
§ 3. The Neumann Boundary-Value Problem. A Trace Theorem  44
    Ellipticity in H¹ 44 — Boundary-value problems with natural boundary conditions 45 — Neumann boundary conditions 46 — Mixed boundary conditions 47 — Proof of the trace theorem 48 — Practical consequences of the trace theorem 50 — Problems 52
§ 4. The Ritz–Galerkin Method and Some Finite Elements  53
    Model problem 56 — Problems 58
§ 5. Some Standard Finite Elements  60
    Requirements on the meshes 61 — Significance of the differentiability properties 62 — Triangular elements with complete polynomials 64 — Remarks on C¹ elements 67 — Bilinear elements 68 — Quadratic rectangular elements 69 — Affine families 70 — Choice of an element 74 — Problems 74
§ 6. Approximation Properties  76
    The Bramble–Hilbert lemma 77 — Triangular elements with complete polynomials 78 — Bilinear quadrilateral elements 81 — Inverse estimates 83 — Clément's interpolation 84 — Appendix: On the optimality of the estimates 85 — Problems 87
§ 7. Error Bounds for Elliptic Problems of Second Order  89
    Remarks on regularity 89 — Error bounds in the energy norm 90 — L₂ estimates 91 — A simple L∞ estimate 93 — The L₂-projector 94 — Problems 95
§ 8. Computational Considerations  97
    Assembling the stiffness matrix 97 — Static condensation 99 — Complexity of setting up the matrix 100 — Effect on the choice of a grid 100 — Local mesh refinement 100 — Implementation of the Neumann boundary-value problem 102 — Problems 103

Chapter III  Nonconforming and Other Methods  105
§ 1. Abstract Lemmas and a Simple Boundary Approximation  106
    Generalizations of Céa's lemma 106 — Duality methods 108 — The Crouzeix–Raviart element 109 — A simple approximation to curved boundaries 112 — Modifications of the duality argument 114 — Problems 116
§ 2. Isoparametric Elements  117
    Isoparametric triangular elements 117 — Isoparametric quadrilateral elements 119 — Problems 121
§ 3. Further Tools from Functional Analysis  122
    Negative norms 122 — Adjoint operators 124 — An abstract existence theorem 124 — An abstract convergence theorem 126 — Proof of Theorem 3.4 127 — Problems 128
§ 4. Saddle Point Problems  129
    Saddle points and minima 129 — The inf-sup condition 130 — Mixed finite element methods 134 — Fortin interpolation 136 — Saddle point problems with penalty term 138 — Typical applications 141 — Problems 142
§ 5. Mixed Methods for the Poisson Equation  145
    The Poisson equation as a mixed problem 145 — The Raviart–Thomas element 148 — Interpolation by Raviart–Thomas elements 149 — Implementation and postprocessing 152 — Mesh-dependent norms for the Raviart–Thomas element 153 — The softening behaviour of mixed methods 154 — Problems 156
§ 6. The Stokes Equation  157
    Variational formulation 158 — The inf-sup condition 159 — Nearly incompressible flows 161 — Problems 161
§ 7. Finite Elements for the Stokes Problem  162
    An instable element 162 — The Taylor–Hood element 167 — The MINI element 168 — The divergence-free nonconforming P1 element 170 — Problems 171
§ 8. A Posteriori Error Estimates  172
    Residual estimators 174 — Lower estimates 176 — Remark on other estimators 179 — Local mesh refinement and convergence 179
§ 9. A Posteriori Error Estimates via the Hypercircle Method  181
Chapter IV  The Conjugate Gradient Method  186
§ 1. Classical Iterative Methods for Solving Linear Systems  187
    Stationary linear processes 187 — The Jacobi and Gauss–Seidel methods 189 — The model problem 192 — Overrelaxation 193 — Problems 195
§ 2. Gradient Methods  196
    The general gradient method 196 — Gradient methods and quadratic functions 197 — Convergence behavior in the case of large condition numbers 199 — Problems 200
§ 3. Conjugate Gradient and the Minimal Residual Method  201
    The CG algorithm 203 — Analysis of the CG method as an optimal method — The minimal residual method 207 — Indefinite and unsymmetric matrices 208 — Problems 209
§ 4. Preconditioning  210
    Preconditioning by SSOR 213 — Preconditioning by ILU 214 — Remarks on parallelization 216 — Nonlinear problems 217 — Problems 218
§ 5. Saddle Point Problems  221
    The Uzawa algorithm and its variants 221 — An alternative 223 — Problems 224

Chapter V  Multigrid Methods  225
§ 1. Multigrid Methods for Variational Problems  226
    Smoothing properties of classical iterative methods 226 — The multigrid idea 227 — The algorithm 228 — Transfer between grids 232 — Problems 235
§ 2. Convergence of Multigrid Methods  237
    Discrete norms 238 — Connection with the Sobolev norm 240 — Approximation property 242 — Convergence proof for the two-grid method 244 — An alternative short proof 245 — Some variants 245 — Problems 246
§ 3. Convergence for Several Levels  248
    A recurrence formula for the W-cycle 248 — An improvement for the energy norm 249 — The convergence proof for the V-cycle 251 — Problems 254
§ 4. Nested Iteration  255
    Computation of starting values 255 — Complexity 257 — Multigrid methods with a small number of levels 258 — The CASCADE algorithm 259 — Problems 260
§ 5. Multigrid Analysis via Space Decomposition  261
    Schwarz alternating method 262 — Assumptions 265 — Direct consequences 266 — Convergence of multiplicative methods 267 — Verification of A1 269 — Local mesh refinements 270 — Problems 271
§ 6. Nonlinear Problems  272
    The multigrid-Newton method 273 — The nonlinear multigrid method 274 — Starting values 276 — Problems 277

Chapter VI  Finite Elements in Solid Mechanics  278
§ 1. Introduction to Elasticity Theory  279
    Kinematics 279 — The equilibrium equations 281 — The Piola transform 283 — Constitutive Equations 284 — Linear material laws 288
§ 2. Hyperelastic Materials  290

§ 5. Multigrid Analysis via Space Decomposition

Fig. 56. Schwarz alternating iteration with one-dimensional subspaces $V$ and $W$ in Euclidean 2-space. The iterates $u_1, u_3, u_5, \ldots$ lie in $V^\perp$ and $u_2, u_4, \ldots$ in $W^\perp$. The angle between $V^\perp$ and $W^\perp$ is the same as between $V$ and $W$.

5.2 Convergence Theorem. Assume that there is a constant $\gamma < 1$ such that for the inner product in $H$
$$|a(v, w)| \le \gamma\, \|v\|\, \|w\| \quad \text{for } v \in V,\ w \in W. \tag{5.3}$$
Then we have for the iteration with the Schwarz alternating method the error reduction
$$\|u_{k+1} - u\| \le \gamma\, \|u_k - u\| \quad \text{for } k \ge 1. \tag{5.4}$$

Proof. Because of the symmetry of the problem we may confine ourselves to even $k$. Since $u_k$ is constructed by a minimization in the subspace $W$, we have for $w \in W$
$$a(u_k - u, w) = 0. \tag{5.5}$$
We decompose $u_k - u = \hat v + \hat w$ with $\hat v \in V$, $\hat w \in W$. From (5.5) it follows with $w = \hat w$ that
$$a(\hat v, \hat w) = -\|\hat w\|^2. \tag{5.6}$$
By the strengthened Cauchy inequality (5.3) we have $a(\hat v, \hat w) = -\alpha_k \|\hat v\|\, \|\hat w\|$ with some $\alpha_k \le \gamma$. Without loss of generality let $\alpha_k \ne 0$. It follows from (5.6) that $\|\hat v\| = \alpha_k^{-1} \|\hat w\|$ and
$$\|u_k - u\|^2 = \|\hat v + \hat w\|^2 = \|\hat v\|^2 - 2\|\hat w\|^2 + \|\hat w\|^2 = (\alpha_k^{-2} - 1)\, \|\hat w\|^2.$$
Since $u_{k+1}$ is the result of an optimization in $V$, we obtain an upper estimate from the simple test function $u_k + (\alpha_k^2 - 1)\hat v$:
$$\|u_{k+1} - u\|^2 \le \|u_k - u + (\alpha_k^2 - 1)\hat v\|^2 = \|\alpha_k^2 \hat v + \hat w\|^2 = (1 - \alpha_k^2)\, \|\hat w\|^2 = \alpha_k^2\, \|u_k - u\|^2.$$
Noting that $\alpha_k \le \gamma$, the proof is complete.

The bound in (5.4) is sharp. This becomes obvious from an example with one-dimensional spaces $V$ and $W$ depicted in Fig. 56 and also from Problem 5.8.
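The alternating minimization in Theorem 5.2 is easy to reproduce numerically. The following sketch is not taken from the book; the matrix, the right-hand side and the two subspaces are arbitrary illustrative choices. It performs the alternating subspace corrections for the energy functional of a symmetric positive definite system and also computes the constant $\gamma$ of the strengthened Cauchy inequality (5.3), so that the error reduction (5.4) can be checked in the energy norm.

```python
# Minimal sketch of the Schwarz alternating method of Theorem 5.2.
# Illustrative data only; a(v, w) = v' A w is the energy inner product.
import numpy as np

def a_orthonormalize(A, B):
    """Return an a(.,.)-orthonormal basis for the column span of B."""
    L = np.linalg.cholesky(B.T @ A @ B)
    return B @ np.linalg.inv(L).T

def subspace_correction(A, b, x, B):
    """Exact minimization of 1/2 x'Ax - b'x over the affine set x + span(B)."""
    y = np.linalg.solve(B.T @ A @ B, B.T @ (b - A @ x))
    return x + B @ y

rng = np.random.default_rng(0)
n = 6
C = rng.standard_normal((n, n))
A = C @ C.T + n * np.eye(n)          # symmetric positive definite matrix
b = rng.standard_normal(n)
u = np.linalg.solve(A, b)            # exact solution

V = rng.standard_normal((n, 3))      # basis of the subspace V
W = rng.standard_normal((n, 3))      # basis of the subspace W (V + W = R^n here)

# constant of the strengthened Cauchy inequality (5.3):
Vo, Wo = a_orthonormalize(A, V), a_orthonormalize(A, W)
gamma = np.linalg.norm(Vo.T @ A @ Wo, 2)

x = np.zeros(n)
err = lambda x: np.sqrt((x - u) @ A @ (x - u))   # energy norm of the error
for k in range(8):
    x = subspace_correction(A, b, x, V if k % 2 == 0 else W)
    print(k, err(x))
# After the first step the error is reduced at least by the factor gamma, (5.4).
```

For one-dimensional subspaces as in Fig. 56 the printed reduction factor coincides with $\gamma$, which is the sharpness statement above.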
Algebraic Description of Space Decomposition Algorithms

The finite element spaces $S_\ell$ may be recursively constructed:
$$S_0 = W_0, \qquad S_\ell = S_{\ell-1} \oplus W_\ell, \quad \ell \ge 1, \qquad S = S_L. \tag{5.7}$$
The finite element solution on the level $\ell$ is related to the operator $A_\ell : S_\ell \to S_\ell$ defined by
$$(A_\ell u, w) = a(u, w) \quad \text{for all } w \in S_\ell. \tag{5.8}$$
Moreover $A := A_L$. The corresponding Ritz projector $P_\ell : S \to S_\ell$ satisfies
$$a(P_\ell u, w) = a(u, w) \quad \text{for all } w \in S_\ell. \tag{5.9}$$

We note that the discussion below holds for any inner product $(\cdot, \cdot)$ in the Hilbert space $S$, but we will refer to the $L_2$ inner product or the $\ell_2$ inner product when we deal with concrete examples. We recall that the $L_2$-norm is equivalent to the $\ell_2$-norm of the associated vector representations, and the smoothing procedures refer to $L_2$-like operators. Therefore, we will also use the $L_2$-orthogonal projectors $Q_\ell : S \to S_\ell$,
$$(Q_\ell u, w) = (u, w) \quad \text{for all } w \in S_\ell. \tag{5.10}$$
It follows that
$$A_\ell P_\ell = Q_\ell A. \tag{5.11}$$
Indeed, for all $w \in S_\ell$ we obtain from (5.8)–(5.10) the equations $(A_\ell P_\ell u, w) = a(P_\ell u, w) = a(u, w) = a(u, Q_\ell w) = (Au, Q_\ell w) = (Q_\ell Au, w)$. Since $A_\ell P_\ell$ and $Q_\ell A$ are mappings into $S_\ell$, this proves (5.11).

Assume that $\tilde u$ is an approximate solution of the variational problem in $S$, and let $A\tilde u - f$ be the residue. The solution of the variational problem in the subset $\tilde u + S_\ell$ is $\tilde u + A_\ell^{-1} Q_\ell (f - A\tilde u)$. Therefore, the correction by the exact solution of the subproblem for the level $\ell$ is $A_\ell^{-1} Q_\ell (f - A\tilde u)$. Since its computation is too expensive in general, the actual correction will be obtained from a computation with an approximate inverse $B_\ell^{-1}$, i.e., the real correction will be
$$B_\ell^{-1} Q_\ell (f - A\tilde u). \tag{5.12}$$
The correction turns $\tilde u$ into $\tilde u + B_\ell^{-1} Q_\ell (f - A\tilde u)$. For convenience, we will assume that
$$A_\ell \le B_\ell, \tag{5.13}$$
i.e., $B_\ell - A_\ell$ is assumed to be positive semidefinite, and tacitly $B_\ell$ is assumed to be symmetric. Often only the weaker condition $A_\ell \le \omega B_\ell$ with $\omega < 2$ is required, but we prefer to have the assumption without an extra factor in order to avoid some inconvenient factors in the estimates. Some standard techniques for dealing with the approximate solution above are found in Problems IV.4.14–17.

We recall (5.11), and following the standard notation, we define the linear mapping
$$T_\ell := B_\ell^{-1} Q_\ell A = B_\ell^{-1} A_\ell P_\ell. \tag{5.14}$$
From (5.12) we know that the correction of $\tilde u$ in the subspace $S_\ell$ yields the new iterate $\tilde u + T_\ell (u - \tilde u)$, and its error is $(I - T_\ell)(u - \tilde u)$.

We consider the multigrid V-cycle with post-smoothing only. Consequently the error propagation operator for one complete cycle is $E := E_L$, where
$$E_\ell := (I - T_\ell)(I - T_{\ell-1}) \cdots (I - T_0), \quad \ell = 0, 1, \ldots, L, \tag{5.15}$$
and $E_{-1} := I$. This representation elucidates that the subspace corrections are applied in a multiplicative way.
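In matrix terms, the multiplicative structure of (5.12)–(5.15) can be written down in a few lines. The sketch below is not the book's algorithm but one admissible realization under simplifying assumptions: the $\ell_2$ inner product is used, each space $S_\ell$ is given as the range of a prolongation matrix, and the approximate inverses $B_\ell$ are handed over as matrices on the subspaces; all names are ours.

```python
# Matrix-level sketch of one multiplicative cycle and of the operator (5.15).
# Assumptions: l2 inner product, P[l] has the basis of S_l as columns, and
# B[l] is the approximate inverse (smoother) on level l; names are illustrative.
import numpy as np

def multiplicative_cycle(A, f, u, P, B):
    """One sweep u <- u + B_l^{-1} Q_l (f - A u) over the levels l = 0,...,L,
    cf. (5.12) and T_l = B_l^{-1} A_l P_l in (5.14)."""
    for Pl, Bl in zip(P, B):
        r = Pl.T @ (f - A @ u)              # restricted residual
        u = u + Pl @ np.linalg.solve(Bl, r) # subspace correction on level l
    return u

def error_propagation(A, P, B):
    """The operator E = (I - T_L) ... (I - T_0) of (5.15), assembled densely
    for illustration only."""
    N = A.shape[0]
    E = np.eye(N)
    for Pl, Bl in zip(P, B):
        Tl = Pl @ np.linalg.solve(Bl, Pl.T @ A)
        E = (np.eye(N) - Tl) @ E            # corrections act multiplicatively
    return E
```

With the exact level-0 matrix in the role of $B_0$ and a Richardson or Jacobi-type smoother for $\ell \ge 1$, one sweep of `multiplicative_cycle` corresponds to the V-cycle with post-smoothing only whose error propagation operator $E$ is analysed in this section.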
Assumptions

The assumptions refer to the family of finite element spaces $S_\ell$ and the complementary spaces $W_\ell$ specified in (5.7).

Assumption A1. There exists a constant $K_1$ such that for all $v_\ell \in W_\ell$, $\ell = 0, 1, \ldots, L$,
$$\sum_{\ell=0}^L (B_\ell v_\ell, v_\ell) \le K_1 \Bigl\| \sum_{\ell=0}^L v_\ell \Bigr\|^2. \tag{5.16}$$

Assumption A2 (Strengthened Cauchy–Schwarz Inequality). There exist constants $\gamma_{k\ell} = \gamma_{\ell k}$ with
$$a(v_k, w_\ell) \le \gamma_{k\ell}\, (B_k v_k, v_k)^{1/2} (B_\ell w_\ell, w_\ell)^{1/2} \quad \text{for all } v_k \in S_k,\ w_\ell \in W_\ell \tag{5.17}$$
if $k \le \ell$. Moreover, there is a constant $K_2$ such that
$$\sum_{k,\ell=0}^L \gamma_{k\ell}\, x_k y_\ell \le K_2 \Bigl( \sum_{k=0}^L x_k^2 \Bigr)^{1/2} \Bigl( \sum_{\ell=0}^L y_\ell^2 \Bigr)^{1/2} \quad \text{for } x, y \in \mathbb{R}^{L+1}. \tag{5.18}$$

We postpone the verification of A1. – The verification of A2 with a constant $K_2$ that is independent of the number of levels is not trivial. Therefore we provide a short proof of an estimate with a bound that increases only logarithmically. The standard Cauchy–Schwarz inequality and $A_\ell \le B_\ell$ imply that $\gamma_{k\ell} \le 1$ for all $k, \ell$. Hence,
$$\sum_{k,\ell} \gamma_{k\ell}\, x_k y_\ell \le \sum_{k,\ell} |x_k|\, |y_\ell| \le (L+1) \Bigl( \sum_k x_k^2 \Bigr)^{1/2} \Bigl( \sum_\ell y_\ell^2 \Bigr)^{1/2},$$
and (5.18) is obvious for
$$K_2 \le L + 1 \le c\, |\log h_L|. \tag{5.19}$$
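Both steps used here are elementary and can be written out. For $v_k \in S_k$ and $w_\ell \in W_\ell \subset S_\ell$, the ordinary Cauchy–Schwarz inequality for $a(\cdot,\cdot)$, the definition (5.8) and the assumption (5.13) give
$$a(v_k, w_\ell) \le a(v_k, v_k)^{1/2}\, a(w_\ell, w_\ell)^{1/2} = (A_k v_k, v_k)^{1/2} (A_\ell w_\ell, w_\ell)^{1/2} \le (B_k v_k, v_k)^{1/2} (B_\ell w_\ell, w_\ell)^{1/2},$$
i.e., $\gamma_{k\ell} \le 1$, while the factor $L+1$ is the Cauchy–Schwarz inequality in $\mathbb{R}^{L+1}$ applied to $(|x_k|)_k$ and $(1, \ldots, 1)$, and likewise to $(|y_\ell|)_\ell$:
$$\sum_{k,\ell=0}^{L} |x_k|\, |y_\ell| = \Bigl( \sum_{k=0}^{L} |x_k| \Bigr) \Bigl( \sum_{\ell=0}^{L} |y_\ell| \Bigr) \le (L+1) \Bigl( \sum_{k=0}^{L} x_k^2 \Bigr)^{1/2} \Bigl( \sum_{\ell=0}^{L} y_\ell^2 \Bigr)^{1/2}.$$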
Direct Consequences

From A2 we conclude immediately that
$$\|v\|^2 = \sum_{k,\ell} a(v_k, v_\ell) \le \sum_{k,\ell} \gamma_{k\ell}\, (B_k v_k, v_k)^{1/2} (B_\ell v_\ell, v_\ell)^{1/2} \le K_2 \sum_{\ell=0}^L (B_\ell v_\ell, v_\ell). \tag{5.20}$$
Hence, the norms encountered in (5.16) and (5.20) are equivalent provided that A1 and A2 hold.

A direct consequence of A1 is an analogue of an inequality which we considered in §2 in connection with logarithmic convexity. Note the asymmetry in the occurrence of the spaces in (5.21).

5.3 Lemma. Let $w_\ell \in W_\ell$ and $u_\ell \in S = S_L$ for $\ell = 0, 1, \ldots, L$. Then we have
$$\sum_{\ell=0}^L a(w_\ell, u_\ell) \le K_1^{1/2} \Bigl\| \sum_{\ell=0}^L w_\ell \Bigr\| \Bigl[ \sum_{\ell=0}^L a(T_\ell u_\ell, u_\ell) \Bigr]^{1/2}. \tag{5.21}$$

Proof. Since $P_\ell w_\ell = w_\ell$, it follows from the Cauchy–Schwarz inequality in Euclidean space that
$$\sum_{\ell=0}^L a(w_\ell, u_\ell) = \sum_{\ell=0}^L a(w_\ell, P_\ell u_\ell) = \sum_{\ell=0}^L (B_\ell^{1/2} w_\ell, B_\ell^{-1/2} A_\ell P_\ell u_\ell) \le \Bigl[ \sum_{\ell=0}^L (B_\ell w_\ell, w_\ell) \Bigr]^{1/2} \Bigl[ \sum_{\ell=0}^L (A_\ell P_\ell u_\ell, B_\ell^{-1} A_\ell P_\ell u_\ell) \Bigr]^{1/2}. \tag{5.22}$$
Next we derive an equality that is useful also in other contexts:
$$(B_\ell T_\ell w, T_\ell w) = (T_\ell w, B_\ell B_\ell^{-1} A_\ell P_\ell w) = (T_\ell w, A_\ell P_\ell w) = a(T_\ell w, P_\ell w) = a(T_\ell w, w). \tag{5.23}$$
The first factor on the right-hand side of (5.22) can be estimated by A1. Since $T_\ell = B_\ell^{-1} A_\ell P_\ell$, we can insert (5.23) into the summands of the second factor, and the proof of the lemma is complete.

Convergence of Multiplicative Methods

It is more than a coincidence that $a(T_\ell w, w)$ is a multiple of the discrete norm $|||P_\ell w|||^2$ that we encountered in §2 if $B_\ell$ is a multiple of the identity on the subspace $S_\ell$.

First we estimate the reduction of the error by the multigrid algorithm on the level $\ell$ from below.

5.4 Lemma. Let $\ell \ge 0$. Then
$$\|v\|^2 - \|(I - T_\ell)v\|^2 \ge a(T_\ell v, v). \tag{5.24}$$

Proof. From the binomial formula we obtain that the left-hand side of (5.24) equals
$$2a(T_\ell v, v) - a(T_\ell v, T_\ell v). \tag{5.25}$$
Next, we consider the second term using $A_\ell \le B_\ell$ and (5.23):
$$a(T_\ell v, T_\ell v) \le (B_\ell T_\ell v, T_\ell v) = a(T_\ell v, v).$$
Therefore the negative term in (5.25) can be absorbed by the term $2a(T_\ell v, v)$ by subtracting 1 from the factor 2, and the proof is complete.

Now we turn to the central result of this §. It yields the convergence rate of the multigrid iteration in terms of the constants in the assumptions A1 and A2.

5.5 Theorem. Assume that A1 and A2 hold. Then the energy norm of the error propagation operator $E$ of the multigrid iteration satisfies
$$\|E\|^2 \le 1 - \frac{1}{K_1 (1 + K_2)^2}.$$

Proof. By applying Lemma 5.4 to $E_{\ell-1} v$ and noting that $E_\ell = (I - T_\ell) E_{\ell-1}$ we obtain
$$\|E_{\ell-1} v\|^2 - \|E_\ell v\|^2 \ge a(T_\ell E_{\ell-1} v, E_{\ell-1} v).$$
A summation over all levels is performed with telescoping:
$$\|v\|^2 - \|Ev\|^2 \ge \sum_{\ell=0}^L a(T_\ell E_{\ell-1} v, E_{\ell-1} v). \tag{5.26}$$
Therefore the statement of the theorem will be clear if we verify
$$\|v\|^2 \le K_1 (1 + K_2)^2 \sum_{\ell=0}^L a(T_\ell E_{\ell-1} v, E_{\ell-1} v). \tag{5.27}$$
Indeed, (5.26) and (5.27) yield $\|v\|^2 \le K_1 (1 + K_2)^2 (\|v\|^2 - \|Ev\|^2)$, and the rest of the proof is concerned with establishing this inequality.

To this end, let $v = \sum_{\ell=0}^L v_\ell$, $v_\ell \in W_\ell$, be a (stable) decomposition. Obviously,
$$\|v\|^2 = \sum_{\ell=0}^L a(E_{\ell-1} v, v_\ell) + \sum_{\ell=1}^L a((I - E_{\ell-1}) v, v_\ell). \tag{5.28}$$
Lemma 5.3 is used to deal with the first term:
$$\sum_{\ell=0}^L a(E_{\ell-1} v, v_\ell) \le K_1^{1/2} \|v\| \Bigl[ \sum_{\ell=0}^L a(T_\ell E_{\ell-1} v, E_{\ell-1} v) \Bigr]^{1/2}. \tag{5.29}$$
Next, from $E_\ell - E_{\ell-1} = -T_\ell E_{\ell-1}$ it follows by induction that
$$I - E_{\ell-1} = \sum_{k=0}^{\ell-1} T_k E_{k-1}.$$
With the bound of the second term on the right-hand side of (5.28), we verify that the conditions for obtaining an improvement on the level $\ell$ are not affected much by the corrections in the previous steps. Here A2 enters, and
$$\sum_{\ell=1}^L a((I - E_{\ell-1}) v, v_\ell) = \sum_{\ell=1}^L \sum_{k=0}^{\ell-1} a(T_k E_{k-1} v, v_\ell) \le \sum_{\ell=1}^L \sum_{k=0}^{\ell-1} \gamma_{k\ell}\, (B_k T_k E_{k-1} v, T_k E_{k-1} v)^{1/2} (B_\ell v_\ell, v_\ell)^{1/2} \le K_2 \Bigl[ \sum_{k=0}^L (B_k T_k E_{k-1} v, T_k E_{k-1} v) \Bigr]^{1/2} \Bigl[ \sum_{\ell=0}^L (B_\ell v_\ell, v_\ell) \Bigr]^{1/2}.$$
From (5.23) and A1 it follows that
$$\sum_{\ell=1}^L a((I - E_{\ell-1}) v, v_\ell) \le K_1^{1/2} K_2 \|v\| \Bigl[ \sum_{k=0}^L a(T_k E_{k-1} v, E_{k-1} v) \Bigr]^{1/2}. \tag{5.30}$$
Adding (5.29) and (5.30) and dividing by $\|v\|$ we obtain (5.27), completing the proof.

Verification of A1

In the case of full $H^2$ regularity and quasi-uniform triangulations optimal estimates are easily derived. We have an ideal case. Given $v \in S$, let $u_\ell$ be the finite element solution of $v$ in $S_\ell$, i.e., $u_\ell = P_\ell v$. Set
$$v = \sum_{\ell=0}^L v_\ell, \qquad v_0 = P_0 v, \quad v_\ell = P_\ell v - P_{\ell-1} v = u_\ell - u_{\ell-1} \ \text{ for } \ell = 1, 2, \ldots, L. \tag{5.31}$$
From the Galerkin orthogonality of finite element solutions we conclude that
$$\|v\|^2 = \sum_{\ell=0}^L \|v_\ell\|^2. \tag{5.32}$$
Since $u_{\ell-1}$ is also the finite element solution to $u_\ell$ in $S_{\ell-1}$ and $v_\ell = u_\ell - u_{\ell-1}$, it follows from the Aubin–Nitsche lemma that for $\ell = 1, 2, \ldots, L$
$$\|v_\ell\|_0 \le c\, h_{\ell-1} \|v_\ell\|. \tag{5.33}$$
The approximate inverses for a multigrid algorithm with the Richardson iteration as a smoother are given by
$$B_0 := A_0, \qquad B_\ell := c\, \lambda_{\max}(A_\ell)\, I, \quad \ell = 1, 2, \ldots, L. \tag{5.34}$$
The inverse estimates yield $\lambda_{\max}(A_\ell) \le c\, h_\ell^{-2}$. Combining these facts and noting $h_{\ell-1} \le c\, h_\ell$ we obtain
$$\sum_{\ell=0}^L (B_\ell v_\ell, v_\ell) \le (A_0 v_0, v_0) + \sum_{\ell=1}^L c\, h_\ell^{-2} (v_\ell, v_\ell) = \|v_0\|^2 + c \sum_{\ell=1}^L h_\ell^{-2} \|v_\ell\|_0^2 \le c \sum_{\ell=0}^L \|v_\ell\|^2 = c\, \|v\|^2. \tag{5.35}$$
This proves A1 with a constant $K_1 = c$ that is independent of the number of levels.

In the cases with less regularity we perform the decomposition by applying the $L_2$-orthogonal projectors $Q_\ell$ instead of $P_\ell$:
$$v = \sum_{\ell=0}^L v_\ell, \qquad v_0 = Q_0 v, \quad v_\ell = Q_\ell v - Q_{\ell-1} v \ \text{ for } \ell = 1, 2, \ldots, L. \tag{5.36}$$
From Lemma II.7.9 we have $\|Q_0 v\|_1 \le c\, \|v\|_1$. Next, from (II.7.15) it follows that $\|v_\ell\|_0 \le \|v - Q_\ell v\|_0 + \|v - Q_{\ell-1} v\|_0 \le c\, h_\ell \|v\|_1$. Recalling the approximate solvers from (5.34), we proceed as in the derivation of (5.35):
$$\sum_{\ell=0}^L (B_\ell v_\ell, v_\ell) \le \|v_0\|_1^2 + c \sum_{\ell=1}^L h_\ell^{-2} \|v_\ell\|_0^2 \le c\, (L+1)\, \|v\|_1^2. \tag{5.37}$$
This proves A1 with a constant $K_1 \le c\,(L+1) \le c\, |\log h_L|$. Although this result is only suboptimal, it has the advantage that no regularity assumptions are required. As mentioned above, the logarithmic factor arises since we stay in the framework of Sobolev spaces. An analysis with the theory of Besov spaces shows that the factor can be dropped; see Oswald [1994].
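The decomposition (5.31) and the estimate (5.35) can be tried out numerically. The following self-contained sketch is ours, not the book's: the model is the one-dimensional Dirichlet problem $-u'' = f$ on $(0,1)$ with uniformly refined P1 grids, the constants of (5.34) are set to 1, and $\lambda_{\max}(A_\ell)$ is replaced by $h_\ell^{-2}$ via the inverse estimate. It computes $v_\ell = P_\ell v - P_{\ell-1} v$ with the Ritz projectors and prints an estimate of $K_1$; by (5.35) the printed ratio stays bounded when $L$ is increased.

```python
# 1-D illustration of the decomposition (5.31) and the bound (5.35).
import numpy as np

def stiffness(n, h):
    """P1 stiffness matrix for n interior nodes, mesh size h."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h

def mass(n, h):
    """P1 mass matrix (for the L2 inner product)."""
    return h * (np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                + np.diag(np.ones(n - 1), -1)) / 6.0

def prolongation(n_coarse):
    """Linear interpolation from n_coarse to 2*n_coarse+1 interior nodes."""
    P = np.zeros((2 * n_coarse + 1, n_coarse))
    for j in range(n_coarse):
        P[2 * j, j] += 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] += 0.5
    return P

L = 6
n = [2 ** (l + 1) - 1 for l in range(L + 1)]
h = [2.0 ** -(l + 1) for l in range(L + 1)]
A = stiffness(n[L], h[L])            # stiffness matrix on the finest level
M = mass(n[L], h[L])                 # mass matrix on the finest level

# prolongations from each level to the finest level
I_to_fine = [np.eye(n[L])]
for l in range(L - 1, -1, -1):
    I_to_fine.insert(0, I_to_fine[0] @ prolongation(n[l]))

def ritz_projection(v, I):
    """Fine-grid coefficients of the Ritz projection onto range(I), cf. (5.9)."""
    return I @ np.linalg.solve(I.T @ A @ I, I.T @ A @ v)

rng = np.random.default_rng(1)
v = rng.standard_normal(n[L])        # a generic finite element function
Pv = [ritz_projection(v, I_to_fine[l]) for l in range(L)] + [v]

lhs = Pv[0] @ A @ Pv[0]              # (B_0 v_0, v_0) with B_0 = A_0
for l in range(1, L + 1):
    vl = Pv[l] - Pv[l - 1]           # v_l = P_l v - P_{l-1} v, cf. (5.31)
    lhs += h[l] ** -2 * (vl @ M @ vl)   # (B_l v_l, v_l) with B_l ~ h_l^{-2} I
print("K_1 estimate:", lhs / (v @ A @ v))   # bounded independently of L, cf. (5.35)
```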
Local Mesh Refinements

An inspection of the proof of Lemma II.7.9 shows that the estimate (5.37) remains true if the orthogonal projector $Q_\ell$ is replaced by an operator of Clément type, e.g., we may choose $I_\ell = I_h$ from (II.6.19). That interpolation operator is nearly local.

This has a big advantage when we consider finite element spaces which arise from local mesh refinements. Assume that the refinement of the triangulation on the level $\ell$ is restricted to a subdomain $\Omega_\ell \subset \Omega$ and that
$$\Omega_L \subset \Omega_{L-1} \subset \cdots \subset \Omega_0 = \Omega. \tag{5.38}$$
Given $v \in S_L$, its restriction to $\Omega \setminus \Omega_\ell$ coincides there with some finite element function in $S_\ell$. Now we modify $I_\ell v$ at the nodes outside $\Omega_\ell$ and set $(I_\ell v)(x_j) := v(x_j)$ if $x_j \in \Omega \setminus \Omega_\ell$. Specifically, when defining $I_\ell$, the construction of the operator $\tilde Q_j$ in (II.6.17) is augmented by the rule (II.6.23). We have
$$(I_\ell v)(x) = v(x) \tag{5.39}$$
for $x$ outside a neighborhood of $\Omega_\ell$, and from Problem II.6.17 we know that the modification changes only the constants in the estimates of the $L_2$-error. The strip of $\Omega \setminus \Omega_\ell$ in which (5.39) does not hold is small if rule II.8.1(1) is observed during the refinement process. Hence,
$$\|v - I_\ell v\|_0 \le c\, h_\ell\, \|v\|_1. \tag{5.40}$$
We note that an estimate of this kind cannot be guaranteed for the finite element solution in $S_\ell$. So by using interpolation of Clément type we also obtain multigrid convergence in cases with local mesh refinements.

There is also a consequence for computational aspects of the multigrid method. Since
$$v_{\ell+1} = I_{\ell+1} v - I_\ell v = 0 \quad \text{outside a neighborhood of } \Omega_\ell,$$
the smoothing procedure on the levels $\ell+1, \ell+2, \ldots, L$ may be restricted to the nodes in a neighborhood of $\Omega_\ell$. It is not necessary to perform the smoothing iteration at each level on the whole domain. For this reason the computing effort only increases linearly with the dimension of $S_L$. As was pointed out by Xu [1992] and Yserentant [1993], local refinements would otherwise induce a faster increase of the computational complexity.

Problems

5.7 Let $V$, $W$ be subspaces of a Hilbert space $H$. Denote the projectors onto $V$ and $W$ by $P_V$, $P_W$, respectively. Show that the following properties are equivalent:
(1) A strengthened Cauchy inequality (5.3) holds with $\gamma < 1$.
(2) $\|P_W v\| \le \gamma\, \|v\|$ holds for all $v \in V$.
(3) $\|P_V w\| \le \gamma\, \|w\|$ holds for all $w \in W$.
(4) $\|v + w\| \ge \sqrt{1 - \gamma^2}\; \|v\|$ holds for all $v \in V$, $w \in W$.
(5) $\|v + w\|^2 \ge (1 - \gamma)\, (\|v\|^2 + \|w\|^2)$ holds for all $v \in V$, $w \in W$.

5.8 Consider a sequence obtained by the Schwarz alternating method. Let $\alpha_k$ be the factor in the Cauchy inequality for the decomposition of the error in the iteration step $k$, as in the proof of Theorem 5.2. Show that $(\alpha_k)$ is a nondecreasing sequence.

§ 6. Nonlinear Problems

Multigrid methods are also very useful for the numerical solution of nonlinear differential equations. We need only make some changes in the multigrid method for linear equations. These changes are typical for the efficient treatment of nonlinear problems. However, there is one essential idea involved which we might not otherwise encounter: we have to correct the right-hand side of the nonlinear equation on the coarse grid in order to compensate for the error which arises in moving between grids.

As an example of an important nonlinear differential equation, consider the Navier–Stokes equation
$$-\Delta u + \mathrm{Re}\,(u\nabla)u - \operatorname{grad} p = f \quad \text{in } \Omega, \qquad \operatorname{div} u = 0 \quad \text{in } \Omega, \qquad u = u_0 \quad \text{on } \partial\Omega. \tag{6.1}$$
If we drop the quadratic term in the first equation, we get the Stokes problem (III.6.1). Another typical nonlinear differential equation is
$$-\Delta u = e^{\lambda u} \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega. \tag{6.2}$$
It arises in the analysis of explosive processes. The parameter $\lambda$ specifies the relation between the reaction heat and the diffusion constant. – Nonlinear boundary conditions are also of interest, in particular for problems in (nonlinear) elasticity.

We write a nonlinear boundary-value problem as an equation of the form $L(u) = 0$. Suppose that for each $\ell = 0, 1, \ldots, \ell_{\max}$, the discretization at level $\ell$ leads to the nonlinear equation
$$L_\ell(u_\ell) = 0 \tag{6.3}$$
with $N_\ell := \dim S_\ell$ unknowns. In the sequel it is often more convenient to consider the formally more general equation
$$L_\ell(u_\ell) = f_\ell \tag{6.4}$$
with given $f_\ell \in \mathbb{R}^{N_\ell}$. Within the framework of multigrid methods, there are two fundamentally different approaches:
1. The multigrid Newton method (MGNM), which solves the linearized equation using the multigrid method.
2. The nonlinear multigrid method (NMGM), which applies the multigrid method directly to the given nonlinear equation.
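To fix ideas for (6.3) and (6.4), here is a discrete nonlinear operator for the model problem (6.2) in one space dimension. It is only an illustration written by us: the book discretizes by finite elements, whereas this sketch uses finite differences on a uniform grid; the Jacobian is included because a Newton-type method needs the derivative $DL_\ell$.

```python
# Illustrative 1-D finite difference discretization of (6.2):
# L_h(u) = 0 is the discrete nonlinear system, DL_h(u) its Jacobian.
import numpy as np

def L_h(u, lam, h):
    """Discrete operator of -u'' - exp(lam*u) = 0 on a uniform grid with
    zero boundary values; u holds the interior nodal values."""
    up = np.concatenate(([0.0], u, [0.0]))          # pad with boundary values
    return (2.0 * u - up[:-2] - up[2:]) / h**2 - np.exp(lam * u)

def DL_h(u, lam, h):
    """Jacobian of L_h at u: tridiagonal -Laplacian minus lam*diag(exp(lam*u))."""
    n = u.size
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return A - lam * np.diag(np.exp(lam * u))
```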
The Multigrid Newton Method

Newton iteration requires solving a linear system of equations for every step of the iteration. However, it suffices to compute an approximate solution in each step. The following algorithm is a variant of the damped Newton method. We denote the derivative of the (nonlinear) mapping $L_\ell$ by $DL_\ell$.

6.1 Multigrid Newton Method. Let $u^{\ell,0}$ be an approximation to the solution of the equation $L_\ell(u_\ell) = f_\ell$. For $k = 0, 1, 2, \ldots$, carry out the following calculation:
1. (Determine the direction) Set $d^k = f_\ell - L_\ell(u^{\ell,k})$. Perform one cycle of the algorithm MGM to solve
$$DL_\ell(u^{\ell,k})\, v = d^k$$
with the starting value $v^{\ell,0} = 0$. Call the result $v^{\ell,1}$.
2. (Line search) For $\lambda = 1, \tfrac12, \tfrac14, \ldots$, test if
$$\|L_\ell(u^{\ell,k} + \lambda v^{\ell,1}) - f_\ell\| \le \bigl(1 - \tfrac{\lambda}{2}\bigr)\, \|L_\ell(u^{\ell,k}) - f_\ell\|. \tag{6.5}$$
As soon as (6.5) is satisfied, stop testing and set
$$u^{\ell,k+1} = u^{\ell,k} + \lambda v^{\ell,1}.$$

The direction to the next approximation is determined in the first step, and the distance to go in that direction is determined in the second step. If the approximations are sufficiently close to the solution, then we get $\lambda = 1$. In this case, step 2 can be replaced by the simpler classical method:
2′. Set $u^{\ell,k+1} = u^{\ell,k} + v^{\ell,1}$.

The introduction of the damping parameter $\lambda$ and the associated test results in a stabilization; see Hackbusch and Reusken [1989]. Thus, the method is less sensitive to the choice of the starting value $u^{\ell,0}$.

It is known that the classical Newton method converges quadratically for sufficiently good starting values, provided the derivative $DL_\ell$ is invertible at the solution. In Algorithm 6.1 we also have an extra linear error term, and the error $e_k := \|u^{\ell,k} - u_\ell\|$ satisfies the following recurrence formula:
$$e_{k+1} \le \rho\, e_k + c\, e_k^2.$$
Here $\rho$ is the convergence rate of the multigrid algorithm. This implies only linear convergence. At first glance this is a disadvantage, but quadratic convergence only happens in a neighborhood of the solution, and in particular only when the error $e_k$ is smaller than the discretization error. In view of the discussion in the previous section, this is no essential disadvantage.
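A compact version of Algorithm 6.1 is sketched below. This is our paraphrase, not the book's code: the inner multigrid cycle for the linearized equation $DL_\ell(u^{\ell,k}) v = d^k$ is abstracted into a callable `mg_cycle` (any approximate solver with this interface will do, e.g. an exact solve for testing), and a small lower bound on $\lambda$ guards the line search.

```python
# Sketch of the damped multigrid-Newton iteration of Algorithm 6.1.
import numpy as np

def multigrid_newton(L_l, DL_l, f_l, u, mg_cycle, steps=20, tol=1e-10):
    """One (approximate) linear solve per step, then a line search with
    lambda = 1, 1/2, 1/4, ... accepted by the test (6.5)."""
    for _ in range(steps):
        d = f_l - L_l(u)                              # step 1: residual
        if np.linalg.norm(d) < tol:
            break
        v = mg_cycle(DL_l(u), d, np.zeros_like(d))    # one MGM cycle, start v = 0
        lam = 1.0
        while lam > 1e-8 and (np.linalg.norm(L_l(u + lam * v) - f_l)
                              > (1.0 - 0.5 * lam) * np.linalg.norm(L_l(u) - f_l)):
            lam *= 0.5                                # step 2: halve until (6.5) holds
        u = u + lam * v
    return u

# Illustrative use with the Bratu discretization sketched above and an exact
# solve standing in for the multigrid cycle (hypothetical setup):
#   n, h = 63, 1.0 / 64
#   u = multigrid_newton(lambda u: L_h(u, 1.0, h), lambda u: DL_h(u, 1.0, h),
#                        np.zeros(n), np.zeros(n),
#                        lambda A, b, v0: np.linalg.solve(A, b))
```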