DOCUMENT INFORMATION

Pages: 743
Size: 19.07 MB

CONTENTS

Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany

4818

Ivan Lirkov, Svetozar Margenov, Jerzy Waśniewski (Eds.)
Large-Scale Scientific Computing
6th International Conference, LSSC 2007
Sozopol, Bulgaria, June 5-9, 2007
Revised Papers

Volume Editors

Ivan Lirkov
Bulgarian Academy of Sciences, Institute for Parallel Processing
1113 Sofia, Bulgaria
E-mail: ivan@parallel.bas.bg

Svetozar Margenov
Bulgarian Academy of Sciences, Institute for Parallel Processing
1113 Sofia, Bulgaria
E-mail: margenov@parallel.bas.bg

Jerzy Waśniewski
Technical University of Denmark, Department of Informatics and Mathematical Modelling
2800 Kongens Lyngby, Denmark
E-mail: jw@imm.dtu.dk

Library of Congress Control Number: 2008923854
CR Subject Classification (1998): G.1, D.1, D.4, F.2, I.6, J.2, J.6
LNCS Sublibrary: SL – Theoretical Computer Science and General Issues
ISSN: 0302-9743
ISBN-10: 3-540-78825-5 Springer Berlin Heidelberg New York
ISBN-13: 978-3-540-78825-6 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media (springer.com).

© Springer-Verlag Berlin Heidelberg 2008
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12246786 06/3180 543210

Preface

The 6th International Conference on Large-Scale Scientific Computations (LSSC 2007) was held in Sozopol, Bulgaria, June 5-9, 2007. The conference was organized by the Institute for Parallel Processing at the Bulgarian Academy of Sciences in
cooperation with SIAM (Society for Industrial and Applied Mathematics). Partial support was also provided from project BIS-21++ funded by the European Commission in FP6 INCO via grant 016639/2005.

The conference was devoted to the 60th anniversary of Richard E. Ewing. Professor Ewing was awarded the medal of the Bulgarian Academy of Sciences for his contributions to the Bulgarian mathematical community and to the Academy of Sciences. His career spanned 33 years, primarily in academia, but also included industry. From 1992 he worked at Texas A&M University, serving as Dean of Science and Vice President of Research, as well as director of the Institute for Scientific Computation (ISC), which he founded in 1992. Professor Ewing is internationally known for his contributions to applied mathematics, mathematical modeling, and large-scale scientific computations. He inspired a generation of researchers with creative enthusiasm for doing science on scientific computations. The preparatory work on this volume was almost done when the sad news came to us: Richard E. Ewing passed away on December 5, 2007 of an apparent heart attack while driving home from the office.

Plenary Invited Speakers and Lectures:
– O. Axelsson, Mesh-Independent Superlinear PCG Rates for Elliptic Problems
– R. Ewing, Mathematical Modeling and Scientific Computation in Energy and Environmental Applications
– L. Grüne, Numerical Optimization-Based Stabilization: From Hamilton-Jacobi-Bellman PDEs to Receding Horizon Control
– M. Gunzburger, Bridging Methods for Coupling Atomistic and Continuum Models
– B. Philippe, Domain Decomposition and Convergence of GMRES
– P. Vassilevski, Exact de Rham Sequences of Finite Element Spaces on Agglomerated Elements
– Z. Zlatev, Parallelization of Data Assimilation Modules

The success of the conference and the present volume in particular are the outcome of the joint efforts of many colleagues from various institutions and organizations. First, thanks to all the members of the
Scientific Committee for their valuable contribution forming the scientific face of the conference, as well as for their help in reviewing contributed papers. We especially thank the organizers of the special sessions. We are also grateful to the staff involved in the local organization.

Traditionally, the purpose of the conference is to bring together scientists working with large-scale computational models of environmental and industrial problems and specialists in the field of numerical methods and algorithms for modern high-speed computers. The key lectures reviewed some of the advanced achievements in the field of numerical methods and their efficient applications. The conference lectures were presented by university researchers and practicing industry engineers, including applied mathematicians, numerical analysts and computer experts. The general theme for LSSC 2007 was "Large-Scale Scientific Computing" with a particular focus on the organized special sessions.

Special Sessions and Organizers:
– Robust Multilevel and Hierarchical Preconditioning Methods — J. Kraus, S. Margenov, M. Neytcheva
– Domain Decomposition Methods — U. Langer
– Monte Carlo: Tools, Applications, Distributed Computing — I. Dimov, H. Kosina, M. Nedjalkov
– Operator Splittings, Their Application and Realization — I. Farago
– Large-Scale Computations in Coupled Engineering Phenomena with Multiple Scales — R. Ewing, O. Iliev, R. Lazarov
– Advances in Optimization, Control and Reduced Order Modeling — P. Bochev, M. Gunzburger
– Control Systems — M. Krastanov, V. Veliov
– Environmental Modelling — A. Ebel, K. Georgiev, Z. Zlatev
– Computational Grid and Large-Scale Problems — T. Gurov, A. Karaivanova, K. Skala
– Application of Metaheuristics to Large-Scale Problems — E. Alba, S. Fidanova

More than 150 participants from all over the world attended the conference, representing some of the strongest research groups in the field of advanced large-scale scientific computing. This volume contains 86 papers submitted
by authors from over 20 countries. The 7th International Conference LSSC 2009 will be organized in June 2009.

December 2007

Ivan Lirkov
Svetozar Margenov
Jerzy Waśniewski

Table of Contents

I. Plenary and Invited Papers

Mesh Independent Convergence Rates Via Differential Operator Pairs
   Owe Axelsson and János Karátson

Bridging Methods for Coupling Atomistic and Continuum Models
   Santiago Badia, Pavel Bochev, Max Gunzburger, Richard Lehoucq, and Michael Parks   16

Parallelization of Advection-Diffusion-Chemistry Modules
   István Faragó, Krassimir Georgiev, and Zahari Zlatev   28

Comments on the GMRES Convergence for Preconditioned Systems
   Nabil Gmati and Bernard Philippe   40

Optimization Based Stabilization of Nonlinear Control Systems
   Lars Grüne   52

II. Robust Multilevel and Hierarchical Preconditioning Methods

On Smoothing Surfaces in Voxel Based Finite Element Analysis of Trabecular Bone
   Peter Arbenz and Cyril Flaig   69

Application of Hierarchical Decomposition: Preconditioners and Error Estimates for Conforming and Nonconforming FEM
   Radim Blaheta   78

Multilevel Preconditioning of Rotated Trilinear Non-conforming Finite Element Problems
   Ivan Georgiev, Johannes Kraus, and Svetozar Margenov   86

A Fixed-Grid Finite Element Algebraic Multigrid Approach for Interface Shape Optimization Governed by 2-Dimensional Magnetostatics
   Dalibor Lukáš and Johannes Kraus   96

The Effect of a Minimum Angle Condition on the Preconditioning of the Pivot Block Arising from 2-Level-Splittings of Crouzeix-Raviart FE-Spaces
   Josef Synka   105

III. Monte Carlo: Tools, Applications, Distributed Computing

Development of a 3D Parallel Finite Element Monte Carlo Simulator for Nano-MOSFETs
   Manuel Aldegunde, Antonio J. García-Loureiro, and Karol Kalna   115

Numerical Study of Algebraic Problems Using Stochastic Arithmetic
   René Alt, Jean-Luc Lamotte, and Svetoslav Markov   123

Monte Carlo Simulation of GaN Diode Including Intercarrier Interactions
   A. Ashok, D. Vasileska, O. Hartin, and
S.M. Goodnick   131

Wigner Ensemble Monte Carlo: Challenges of 2D Nano-Device Simulation
   M. Nedjalkov, H. Kosina, and D. Vasileska   139

Monte Carlo Simulation for Reliability Centered Maintenance Management
   Cornel Resteanu, Ion Vaduva, and Marin Andreica   148

Monte Carlo Algorithm for Mobility Calculations in Thin Body Field Effect Transistors: Role of Degeneracy and Intersubband Scattering
   V. Sverdlov, E. Ungersboeck, and H. Kosina   157

IV. Operator Splittings, Their Application and Realization

A Parallel Combustion Solver within an Operator Splitting Context for Engine Simulations on Grids
   Laura Antonelli, Pasqua D'Ambra, Francesco Gregoretti, Gennaro Oliva, and Paola Belardini   167

Identifying the Stationary Viscous Flows Around a Circular Cylinder at High Reynolds Numbers
   Christo I. Christov, Rossitza S. Marinova, and Tchavdar T. Marinov   175

On the Richardson Extrapolation as Applied to the Sequential Splitting Method
   István Faragó and Ágnes Havasi   184

A Penalty-Projection Method Using Staggered Grids for Incompressible Flows
   C. Févrière, Ph. Angot, and P. Poullet   192

Qualitatively Correct Discretizations in an Air Pollution Model
   K. Georgiev and M. Mincsovics   201

Limit Cycles and Bifurcations in a Biological Clock Model
   Bálint Nagy   209

Large Matrices Arising in Traveling Wave Bifurcations
   Peter L. Simon   217

V. Recent Advances in Methods and Applications for Large Scale Computations and Optimization of Coupled Engineering Problems

Parallel Implementation of LQG Balanced Truncation for Large-Scale Systems
   Jose M. Badía, Peter Benner, Rafael Mayo, Enrique S. Quintana-Ortí, Gregorio Quintana-Ortí, and Alfredo Remón   227

Finite Element Solution of Optimal Control Problems Arising in Semiconductor Modeling
   Pavel Bochev and Denis Ridzal   235

Orthogonality Measures and Applications in Systems Theory in One and More Variables
   Adhemar Bultheel, Annie Cuyt, and Brigitte Verdonk   243

DNS and LES of Scalar Transport in a Turbulent Plane Channel Flow at Low Reynolds
Number
   Jordan A. Denev, Jochen Fröhlich, Henning Bockhorn, Florian Schwertfirm, and Michael Manhart   251

Adaptive Path Following Primal Dual Interior Point Methods for Shape Optimization of Linear and Nonlinear Stokes Flow Problems
   Ronald H.W. Hoppe, Christopher Linsenmann, and Harbir Antil   259

Analytical Effective Coefficient and First-Order Approximation to Linear Darcy's Law through Block Inclusions
   Rosangela F. Sviercoski and Bryan J. Travis   267

VI. Control Systems

Optimal Control for Lotka-Volterra Systems with a Hunter Population
   Narcisa Apreutesei and Gabriel Dimitriu   277

Modeling Supply Shocks in Optimal Control Models of Illicit Drug Consumption
   Roswitha Bultmann, Jonathan P. Caulkins, Gustav Feichtinger, and Gernot Tragler   285

Multicriteria Optimal Control and Vectorial Hamilton-Jacobi Equation
   Nathalie Caroff   293

Descent-Penalty Methods for Relaxed Nonlinear Elliptic Optimal Control Problems
   Ion Chryssoverghi and Juergen Geiser   300

Approximation of the Solution Set of Impulsive Systems
   Tzanko Donchev   309

Lipschitz Stability of Broken Extremals in Bang-Bang Control Problems
   Ursula Felgenhauer   317

On State Estimation Approaches for Uncertain Dynamical Systems with Quadratic Nonlinearity: Theory and Computer Simulations
   Tatiana F. Filippova and Elena V. Berezina   326

Using the Escalator Boxcar Train to Determine the Optimal Management of a Size-Distributed Forest When Carbon Sequestration Is Taken into Account
   Renan Goetz, Natali Hritonenko, Angels Xabadia, and Yuri Yatsenko   334

On Optimal Redistributive Capital Income Taxation
   Mikhail I. Krastanov and Rossen Rozenov   342

Numerical Methods for Robust Control
   P.Hr. Petkov, A.S. Yonchev, N.D. Christov, and M.M. Konstantinov   350

Runge-Kutta Schemes in Control Constrained Optimal Control
   Nedka V. Pulova   358

Optimal Control of a Class of Size-Structured Systems
   Oana Carmen Tarniceriu and Vladimir M. Veliov   366

VII. Environmental Modelling

Modelling Evaluation of Emission Scenario Impact
in Northern Italy
   Claudio Carnevale, Giovanna Finzi, Enrico Pisoni, and Marialuisa Volta   377

Modelling of Airborne Primary and Secondary Particulate Matter with the EUROS-Model
   Felix Deutsch, Clemens Mensink, Jean Vankerkom, and Liliane Janssen   385

Comparative Study with Data Assimilation Experiments Using Proper Orthogonal Decomposition Method
   Gabriel Dimitriu and Narcisa Apreutesei   393

Effective Indices for Emissions from Road Transport
   Kostadin G. Ganev, Dimiter E. Syrakov, and Zahari Zlatev   401

On the Numerical Solution of the Heat Transfer Equation in the Process of Freeze Drying
   K. Georgiev, N. Kosturski, and S. Margenov   410

Results Obtained with a Semi-Lagrangian Mass-Integrating Transport Algorithm by Using the GME Grid
   Wolfgang Joppich and Sabine Pott   417

The Evaluation of the Thermal Behaviour of an Underground Repository of the Spent Nuclear Fuel
   Roman Kohut, Jiří Starý, and Alexej Kolcun   425

Study of the Pollution Exchange between Romania, Bulgaria, and Greece
   Maria Prodanova, Dimiter Syrakov, Kostadin Ganev, and Nikolai Miloshev   433

A Collaborative Working Environment for a Large Scale Environmental Model
   Cihan Sahin, Christian Weihrauch, Ivan T. Dimov, and Vassil N. Alexandrov   442

Advances on Real-Time Air Quality Forecasting Systems for Industrial Plants and Urban Areas by Using the MM5-CMAQ-EMIMO
   Roberto San José, Juan L. Pérez, José L. Morant, and Rosa M. González   450

VIII. Computational Grid and Large-Scale Problems

Ultra-fast Semiconductor Carrier Transport Simulation on the Grid
   Emanouil Atanassov, Todor Gurov, and Aneta Karaivanova   461

Improving Triangular Preconditioner Updates

  z_i = y_i − Σ_{j>i} (D_U)_{ij} z_j,    z_i = z_i − Σ_{j>i} b_{ij} z_j,    (11)

were used, followed by putting

  z_i = z_i / ((D_U)_{ii} + b_{ii}).    (12)

A first advantage of this implementation is that the solution process is straightforward. The sparsity patterns of D_U and D_B + U_B, which are immediately available, do not need to be further processed. Another advantage of this implementation is that the
difference matrix B may be sparsified in a different way for different matrices of a sequence. It was mentioned repeatedly in [5], however, that merging the two matrices D_U and D_B + U_B may yield better timings. Here we present results of experiments with merged factors which formed the sum D_U − D_B − U_B, or its lower triangular counterpart, explicitly. This sum needs to be formed only once at the beginning of the solve process of the linear system, that is, in our case, before the preconditioned iterations start. Every time the preconditioner is applied, the backward solve step with the merged factors may be significantly cheaper than with (11)-(12) if the sparsity patterns of D_U and D_B + U_B are close enough. In our experiments we confirm this.

Numerical Experiments

Our model problem is a two-dimensional nonlinear convection-diffusion model problem. It has the form (see, e.g., [6])

  −Δu + R u (∂u/∂x + ∂u/∂y) = 2000 x(1 − x) y(1 − y),    (13)

on the unit square, discretized by 5-point finite differences on a uniform grid. The initial approximation is the discretization of u_0(x, y) = 0. In contrast with [5] we use here R = 100 and different grid sizes. We solve the resulting linear systems with the BiCGSTAB [11] iterative method with right preconditioning. Iterations were stopped when the Euclidean norm of the residual was decreased by seven orders of magnitude. Other stopping criteria yield qualitatively the same results.

In Table 1 we consider a 70 × 70 grid, yielding a sequence of 13 matrices of dimension 4900 with 24220 nonzeros each. We precondition with ILU(0), which has the same sparsity pattern as the matrix it preconditions. This experiment was performed in Matlab 7.1. We display the number of BiCGSTAB iterations for the individual systems and the overall time to solve the whole sequence. The first column determines the matrix of the sequence which is preconditioned. The second column gives the results when ILU(0) is recomputed for every system of the sequence. In the third column ILU(0) is
computed only for the first system and reused (frozen) for the whole sequence. In the remaining columns this first factorization is updated. 'Triang' stands for the triangular updates from [5], that is, for the adaptive choice between (5) and (6). The last column presents results for the Gauss-Seidel (GS) updates (3) and (4). The abbreviation 'psize' gives the average number of nonzeros of the preconditioners.

Table 1. Nonlinear convection-diffusion model problem with n=4900, ILU(0), psize ≈ 24000

  Matrix        Recomp  Freeze  Triang  GS
  A(0)          40      40      40      40
  A(1)          25      37      37      27
  A(2)          24      41      27      27
  A(3)          20      48      26      19
  A(4)          17      56      30      21
  A(5)          16      85      32      25
  A(6)          15      97      35      29
  A(7)          14      106     43      31
  A(8)          13      97      44      40
  A(9)          13      108     45      38
  A(10)         13      94      50      44
  A(11)         15      104     45      35
  A(12)         13      156     49      42
  overall time  13 s    13 s    7.5 s   6.5 s

As expected from [5], freezing yields much higher iteration counts than any updated preconditioning. On the other hand, recomputation gives low iteration counts but it is time inefficient. The new GS strategy improves the power of the original triangular update. Table 2 displays the accuracies of (4) (here denoted by M_GS) and (6) (denoted by M_TR) in the Frobenius norm and the values of (7)-(10). These values reflect the efficiencies of the two updates and confirm the remarks made after (7)-(10). Note that the first update in this sequence is based on (3), resp. (5), and thus the values (7)-(10) do not apply here.

Table 2. Nonlinear convection-diffusion problem with n=4900: accuracies and values (7)-(10)

  i    ||A(i) − M_GS(i)||_F  ||A(i) − M_TR(i)||_F  value of (7)  value of (9)  value of (8)  value of (10)
  1    852                   857                   ∗             ∗             ∗             ∗
  2    938                   1785                  377           679           560           1105
  3    1102                  2506                  373           843           729           1663
  4    1252                  3033                  383           957           869           2076
  5    1581                  3975                  432           1155          1149          2820
  6    1844                  4699                  496           1303          1388          3395
  7    2316                  5590                  610           1484          1706          4106
  8    2731                  6326                  738           1631          1993          4695
  9    2736                  6372                  735           1642          2002          4731
  10   2760                  6413                  742           1650          2018          4763
  11   2760                  6415                  742           1650          2018          4765
  12   2760                  6415                  742           1650          2018          4765

In Table 3 we use the grid size 223 and obtain a sequence of 11 linear systems with matrices of dimension 49729 and with 247753 nonzeros. The preconditioner is ILUT(0.2,5), that is, incomplete LU decomposition with drop tolerance 0.2 and 5 additional nonzeros per row. This experiment was implemented in Fortran 90 in order to show improvements in timings for the alternative implementation strategy discussed above. The columns contain the BiCGSTAB iteration counts, followed by the time to solve the linear system, including the time to compute the (updated or new) factorization. In the column 'Triang' the last number corresponds to the implementation with merged factors as explained above, and 'ptime' denotes the average time to recompute preconditioners.

Table 3. Nonlinear convection-diffusion model problem with n=49729, ILUT(0.2/5), psize ≈ 475000, ptime ≈ 0.05

  Matrix  Recomp     Freeze     Triang           GS
  A(0)    113/2.02   113/2.02   113/2.02/2.02    113/2.02
  A(1)    119/2.06   112/1.94   104/1.95/1.81    122/2.26
  A(2)    111/1.94   111/1.95   104/1.91/1.78    100/1.84
  A(3)    94/1.66    115/2.00   92/1.64/1.45     96/1.77
  A(4)    85/1.44    116/2.00   92/1.77/1.55     90/1.67
  A(5)    81/1.45    138/2.44   93/1.73/1.47     83/1.55
  A(6)    72/1.28    158/2.75   101/1.89/1.63    85/1.59
  A(7)    72/1.28    163/2.86   101/1.91/1.59    92/1.69
  A(8)    78/1.36    161/2.84   94/1.77/1.53     82/1.48
  A(9)    72/1.23    159/2.72   92/1.72/1.73     80/1.55
  A(10)   73/1.27    153/2.66   97/1.91/1.61     82/1.48

The benefit of merging is considerable. Still, even with this improved implementation, the Gauss-Seidel type of updates happens to be faster than the standard triangular updates for several systems of the sequence. As for the BiCGSTAB iteration counts, for the majority of the linear systems Gauss-Seidel updates are more efficient. We have included in this table the results based on recomputation as well. In contrast to the results of the previous example, the decomposition routines are very efficient and exhibit typically in-cache behaviour. Then they often provide the best overall
timings. This does not need to be the case in other environments like matrix-free or parallel implementations, or in cases where preconditioners are computed directly on grids.

Conclusion

In this paper we considered new ways for improving triangular updates of factorized preconditioners introduced in [5]. We proposed a Gauss-Seidel type of approach to replace the triangular strategy, and we introduced a more efficient implementation of adaptive triangular updates. We showed on a model nonlinear problem that both techniques may be beneficial. As a logical consequence, it seems worthwhile to combine the two improvements by adapting the new implementation strategy for Gauss-Seidel updates. We expect this to yield even more efficient updates. For conciseness, we did not present some promising results with the Gauss-Seidel approach generalized by adding a relaxation parameter.

Acknowledgement. This work was supported by the project 1ET400300415 within the National Program of Research "Information Society". The work of the first author is also supported by project number KJB100300703 of the Grant Agency of the Academy of Sciences of the Czech Republic.

References
1. Baglama, J., et al.: Adaptively preconditioned GMRES algorithms. SIAM J. Sci. Comput. 20, 243-269 (1998)
2. Benzi, M., Bertaccini, D.: Approximate inverse preconditioning for shifted linear systems. BIT 43, 231-244 (2003)
3. Bergamaschi, L., et al.: Quasi-Newton preconditioners for the inexact Newton method. ETNA 23, 76-87 (2006)
4. Bertaccini, D.: Efficient preconditioning for sequences of parametric complex symmetric linear systems. ETNA 18, 49-64 (2004)
5. Duintjer Tebbens, J., Tůma, M.: Preconditioner updates for solving sequences of large and sparse nonsymmetric linear systems. SIAM J. Sci. Comput. 29, 1918-1941 (2007)
6. Kelley, C.T.: Iterative Methods for Linear and Nonlinear Equations. SIAM, Philadelphia (1995)
7. Loghin, D., Ruiz, D., Touhami, A.: Adaptive preconditioners for nonlinear systems of equations. J.
Comput. Appl. Math. 189, 326-374 (2006)
8. Meurant, G.: On the incomplete Cholesky decomposition of a class of perturbed matrices. SIAM J. Sci. Comput. 23, 419-429 (2001)
9. Morales, J.L., Nocedal, J.: Automatic preconditioning by limited-memory quasi-Newton updates. SIAM J. Opt. 10, 1079-1096 (2000)
10. Parks, M.L., et al.: Recycling Krylov subspaces for sequences of linear systems. SIAM J. Sci. Comput. 28, 1651-1674 (2006)
11. van der Vorst, H.A.: Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of non-symmetric linear systems. SIAM J. Sci. Stat. Comput. 12, 631-644 (1992)

Parallel DD-MIC(0) Preconditioning of Nonconforming Rotated Trilinear FEM Elasticity Systems

Yavor Vutov
Institute for Parallel Processing, Bulgarian Academy of Sciences
Acad. G. Bonchev, Bl. 25A, 1113 Sofia, Bulgaria
yavor.vutov@gmail.com

Abstract. A new parallel preconditioning algorithm for 3D nonconforming FEM elasticity systems is presented. The preconditioner is constructed in two steps. First, displacement decomposition of the stiffness matrix is used. Then MIC(0) factorization is applied to a proper auxiliary M-matrix to get an approximate factorization of the obtained block-diagonal matrix. The auxiliary matrix has a special block structure: its diagonal blocks are diagonal matrices themselves. This allows the solution of the preconditioning system to be performed efficiently in parallel. Estimates for the parallel times, speedups and efficiencies are derived. The performed parallel tests are in total agreement with them. The robustness of the proposed algorithm is confirmed by the presented experiments solving problems with strong coefficient jumps.

Keywords: nonconforming finite element method, preconditioned conjugate gradient method, MIC(0), parallel algorithms.

Introduction

We consider the weak formulation of the linear elasticity problem in the form: find u ∈ [H_E^1(Ω)]^3 = {v ∈ [H^1(Ω)]^3 : v|_ΓD = u_S} such that

  ∫_Ω [ 2μ ε(u) : ε(v) + λ div u div v ] dΩ = ∫_Ω f^t v dΩ + ∫_{ΓN} g^t v dΓ,    (1)

∀v ∈ [H_0^1(Ω)]^3 = {v ∈
[H^1(Ω)]^3 : v|_ΓD = 0}, with the positive Lamé constants λ and μ, the symmetric strains ε(u) := 0.5(∇u + (∇u)^t), the volume forces f, and the boundary tractions g, Γ̄N ∪ Γ̄D = ∂Ω. Nonconforming rotated trilinear elements of Rannacher-Turek [6] are used for the discretization of (1). To obtain a stable saddle-point system one usually uses a mixed formulation for u and div u. By the choice of non-continuous finite elements for the dual variable, it can be eliminated at the macroelement level, and we get a symmetric positive definite finite element system in displacement variables. This approach is known as the reduced and selective integration (RSI) technique, see [5].

Let Ω^H = ω_1^H × ω_2^H × ω_3^H be a regular coarser decomposition of the domain Ω ⊂ R^3 into hexahedrons, and let the finer decomposition Ω^h = ω_1^h × ω_2^h × ω_3^h be obtained by a regular refinement of each macro element E ∈ Ω^H into eight similar hexahedrons.

I. Lirkov, S. Margenov, and J. Waśniewski (Eds.): LSSC 2007, LNCS 4818, pp. 745-752, 2008.
© Springer-Verlag Berlin Heidelberg 2008

The cube ê = [−1, 1]^3 is used as a reference element in the parametric definition of the rotated trilinear elements. For each e ∈ Ω^h, let ψ_e : ê → e be the trilinear 1-1 transformation. Then the nodal basis functions are defined by the relations {φ_i}_{i=1}^6 = {φ̂_i ∘ ψ_e^{−1}}_{i=1}^6, where φ̂_i ∈ span{1, ξ_j, ξ_j^2 − ξ_{j+1}^2, j = 1, 2, 3}. Mid-point (MP) and integral mid-value (MV) interpolation conditions can be used for determining the reference element basis functions {φ̂_i}_{i=1}^6. This leads to two different finite element spaces V^h, referred to as Algorithm MP and Algorithm MV. The RSI finite element method (FEM) discretization reads as follows: find u_h ∈ V_E^h such that

  Σ_{e∈Ω^h} ∫_e [ 2μ ε*(u_h) : ε*(v_h) + λ div u_h div v_h ] de = ∫_Ω f^t v_h dΩ + ∫_{ΓN} g^t v_h dΓ,    (2)

∀v_h ∈ V_0^h, where ε*(u) := ∇u − 0.5 I_L^{Q^H}[∇u − (∇u)^t], V_0^h is the FEM space satisfying (in nodalwise sense) homogeneous boundary conditions on ΓD, and the operator I_L^{Q^H} denotes the L^2-orthogonal projection onto Q^H, the space of piecewise constant functions on the coarser decomposition Ω^H of Ω. Then a standard computational procedure leads to a system of linear equations

  ⎡ K11 K12 K13 ⎤ ⎡ u_h^1 ⎤   ⎡ f_h^1 ⎤
  ⎢ K21 K22 K23 ⎥ ⎢ u_h^2 ⎥ = ⎢ f_h^2 ⎥    (3)
  ⎣ K31 K32 K33 ⎦ ⎣ u_h^3 ⎦   ⎣ f_h^3 ⎦

Here the stiffness matrix K is written in block form corresponding to an ordering of the vector of nodal unknowns by separate displacement components. Since K is sparse, symmetric and positive definite, we use the preconditioned conjugate gradient (PCG) method to solve the system (3). PCG is known to be the best solution method for such systems [2].

DD MIC(0) Preconditioning

Let us first recall some well known facts about the modified incomplete factorization MIC(0). Let us split the real N × N matrix A = (a_ij) in the form A = D − L − L^T, where D is the diagonal and (−L) is the strictly lower triangular part of A. Then we consider the approximate factorization of A which has the following form: C_MIC(0)(A) = (X − L)X^{−1}(X − L)^T, where X = diag(x_1, ..., x_N) is a diagonal matrix determined such that A and C_MIC(0) have equal row sums. For the purpose of preconditioning we restrict ourselves to the case when X > 0, i.e., when C_MIC(0) is positive definite. In this case, the MIC(0) factorization is called stable. Concerning the stability of the MIC(0) factorization, we have the following theorem [4].
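As a small aside (not part of the paper, whose implementation is a parallel Fortran code), the MIC(0) construction just described — the diagonal X fixed by the equal-row-sums condition, and the application of C_MIC(0)(A) = (X − L)X^{−1}(X − L)^T by forward substitution, diagonal scaling and backward substitution — can be sketched with dense NumPy arrays; the function names are ours:

```python
import numpy as np

def mic0_diagonal(A):
    # Diagonal X of the MIC(0) factorization C = (X - L) X^{-1} (X - L)^T
    # of A = D - L - L^T, chosen so that C and A have equal row sums:
    #   x_i = a_ii - sum_{k<i} (a_ik / x_k) * sum_{j>k} a_kj.
    n = A.shape[0]
    x = np.zeros(n)
    for i in range(n):
        s = A[i, i]
        for k in range(i):
            if A[i, k] != 0.0:
                s -= (A[i, k] / x[k]) * A[k, k + 1:].sum()
        if s <= 0.0:
            raise ValueError("MIC(0) factorization is not stable for this matrix")
        x[i] = s
    return x

def mic0_apply(A, x, v):
    # Solve C z = v with C = (X - L) X^{-1} (X - L)^T:
    # forward substitution, diagonal scaling, backward substitution.
    n = A.shape[0]
    L = -np.tril(A, -1)            # A = D - L - L^T, so L >= 0 for an M-matrix
    w = np.zeros(n)
    for i in range(n):             # (X - L) w = v
        w[i] = (v[i] + L[i, :i] @ w[:i]) / x[i]
    u = x * w                      # undo the X^{-1} factor
    z = np.zeros(n)
    for i in reversed(range(n)):   # (X - L)^T z = u
        z[i] = (u[i] + L[i + 1:, i] @ z[i + 1:]) / x[i]
    return z
```

For a weakly diagonally dominant M-matrix such as tridiag(−1, 2, −1) the recurrence yields X > 0, the row sums of C and A coincide, and applying C and then mic0_apply returns the original vector.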
–orthogonal projection onto QH , the space of piecewise constant functions on the coarser decomposition Ω H of Ω Then a standard computational procedure leads to a system of linear equations ⎡ ⎤⎡ ⎤ ⎡ 1⎤ K11 K12 K13 uh fh ⎣ K21 K22 K23 ⎦ ⎣ u2h ⎦ = ⎣ fh2 ⎦ (3) K31 K32 K33 u3h fh3 Here the stiffness matrix K is written in block form corresponding to a separate displacements components ordering of the vector of nodal unknowns Since K is sparse, symmetric and positive definite, we use the preconditioned conjugate gradient (PCG) method to solve the system (3) PCG is known to be the best solution method for such systems [2] DD MIC(0) Preconditioning Let us first recall some well known facts about the modified incomplete factorization MIC(0) Let us split the real N × N matrix A = (aij ) in the form A = D − L − LT , where D is the diagonal and (−L) is the strictly lower triangular part of A Then we consider the approximate factorization of A which has the following form: CMIC(0) (A) = (X − L)X −1 (X − L)T , where X = diag(x1 , , xN ) is a diagonal matrix determined such that A and CMIC(0) have equal row sums For the purpose of preconditioning we restrict ourselves to the case when X > 0, i.e., when CMIC(0) is positive definite In this case, the MIC(0) factorization is called stable Concerning the stability of the MIC(0) factorization, we have the following theorem [4] Parallel DD-MIC(0) Preconditioning of Nonconforming Rotated Trilinear 747 Theorem Let A = (aij ) be a symmetric real N × N matrix and let A = D − L − LT be a splitting of A Let us assume that (in an elementwise sense) L ≥ 0, Ae ≥ 0, Ae + LT e > 0, e = (1, · · · , 1)T ∈ RN , i.e., that A is a weakly diagonally dominant matrix with nonpositive offdiagonal entries and that A + LT = D − L is strictly diagonally dominant Then the relation i−1 N aik xi = aii − akj > (4) xk k=1 j=k+1 holds and the diagonal matrix X = diag(x1 , · · · , xN ) defines a stable MIC(0) factorization of A Remark The numerical tests 
presented in this work are performed using the perturbed version of the MIC(0) algorithm, where the incomplete factorization is applied to the matrix Ã = A + D̃. The diagonal perturbation D̃ = D̃(ξ) = diag(d̃_1, ..., d̃_N) is defined as follows: d̃_i = ξ a_ii if a_ii ≥ 2w_i, and d̃_i = ξ^{1/2} a_ii otherwise, where 0 < ξ < 1 is a constant and w_i = −Σ_{j>i} a_ij.

We use PCG with an isotropic displacement decomposition (DD) MIC(0) factorization preconditioner in the form:

                   ⎡ C_MIC(0)(B)                           ⎤
  C_DDMIC(0)(K) =  ⎢             C_MIC(0)(B)               ⎥
                   ⎣                         C_MIC(0)(B)   ⎦

The matrix B is a modification of the stiffness matrix A corresponding to the bilinear form

  a(u^h, v^h) = Σ_{e∈Ω^h} ∫_e E Σ_{i=1}^{3} (∂u^h/∂x_i)(∂v^h/∂x_i) de.

Here E is the modulus of elasticity. Such DD preconditioning for the coupled matrix K is theoretically motivated by the Korn inequality, which holds for the RSI FEM discretization under consideration [3]. The auxiliary matrix B is constructed element-by-element: following the standard FEM assembling procedure we write A in the form A = Σ_{e∈Ω^h} L_e^T A_e L_e, where L_e stands for the restriction mapping of the global vector of unknowns to the local one corresponding to the current element e, and A_e = {a_ij}_{i,j=1}^6 is the element stiffness matrix. The local node numbering and connectivity pattern is displayed in Fig. 1(a). Now we will introduce the structure of two variants for the local approximations B_e. They will later be referred to as Variant B1 and Variant B2.

Variant B1:
       ⎡ b11   0   a13  a14  a15  a16 ⎤
       ⎢  0   b22  a23  a24  a25  a26 ⎥
  Be = ⎢ a31  a32  b33   0   a35  a36 ⎥
       ⎢ a41  a42   0   b44  a45  a46 ⎥
       ⎢ a51  a52  a53  a54  b55   0  ⎥
       ⎣ a61  a62  a63  a64   0   b66 ⎦

Variant B2:
       ⎡ b11   0   a13  a14  a15  a16 ⎤
       ⎢  0   b22  a23  a24  a25  a26 ⎥
  Be = ⎢ a31  a32  b33   0    0    0  ⎥
       ⎢ a41  a42   0   b44   0    0  ⎥
       ⎢ a51  a52   0    0   b55   0  ⎥
       ⎣ a61  a62   0    0    0   b66 ⎦

Fig. 1. (a) Local node numbering and connectivity pattern; (b) sparsity pattern of the matrices A and B (both variants) for a division of Ω into 2x2x6 hexahedrons. Non-zero elements are drawn with boxes: non-zero in A and B (both variants), non-zero in A and B Variant
B1, non-zero only in A. The blocks in the matrix B, Variant B2, are bordered with thicker lines.

The matrices B_e are symmetric and positive semidefinite, with nonpositive offdiagonal entries, such that B_e e = A_e e, e^T = (1, 1, 1, 1, 1, 1). Then we construct the global matrix B = Σ_{e∈Ω^h} λ_e^{(1)} L_e^T B_e L_e, where {λ_e^{(i)}} are the nontrivial eigenvalues of B_e^{−1} A_e in ascending order. The matrix B is an M-matrix and has a special block structure with diagonal blocks being diagonal matrices, see Fig. 1(b). These blocks correspond to nodal lines and planes for Variants B1 and B2, respectively. Lexicographic node numbering is used. This allows a stable MIC(0) factorization and an efficient parallel implementation. It is important that A and B are spectrally equivalent, and the relative condition number κ(B^{−1}A) is uniformly bounded [1].

3.1 Parallel Algorithm Description

The PCG algorithm is used for the solution of the linear system (3). Let us assume that the parallelepiped domain Ω is decomposed into n × n × n equal nonconforming hexahedral elements. The size of the resulting nonconforming FEM system is N = 9n^2(n + 1). To handle the systems with the preconditioner one has to solve three times the systems L̃y ≡ (X − L)y = v, X^{−1}z = y, and L̃^T w = z, where L is the strictly lower triangular part of the matrix B. The triangular systems are solved using standard forward or backward recurrences. This can be done in k_B1 = 2n^2 + 2n and k_B2 = 2n + … stages for Variants B1 and B2, respectively. Within stage i the block y_i is computed. Since the blocks L̃_ii are diagonal, the computations of each component of y_i can be performed in parallel.

Let the p ≤ n/2 processors be denoted by P_1, P_2, ..., P_p. We distribute the entries of the vectors corresponding to each diagonal block of B among the processors. Each processor P_j receives a strip of the computational domain. These strips have almost equal size. Elements of all vectors and rows
of all matrices that participate in the PCG algorithm are distributed in the same manner. Thus processor P_j takes care of the local computations on the j-th strip.

3.2 Parallel Times

On each iteration of the PCG algorithm one matrix-vector multiplication Kx, one solution of the preconditioner system Cx = y, two inner products, and three linked triads of the form x = y + αz are computed. The matrix-vector multiplication can be performed on the macroelement level. In the case of a rectangular brick mesh, the number of non-zero elements in the macroelement stiffness matrix is 1740. The number of operations per PCG iteration is

    N^it = N(Kx) + N(C^{-1}x) + 2 N(<·,·>) + 3 N(x = y + αz),
    N^it ≈ 24N + N(C^{-1}x) + 2N + 3N,    N^it_B1 ≈ 40N,    N^it_B2 ≈ 38N.

An operation is assumed to consist of one addition and one multiplication. Estimates of the parallel execution times are derived under the following assumptions: (a) executing M arithmetical operations on one processor takes T = M t_a; (b) the time to transfer M data items between two neighboring processors can be approximated by T^comm = t_s + M t_c, where t_s is the startup time and t_c is the incremental time for each of the M transferred elements; (c) send and receive operations between each pair of neighboring processors can be done in parallel. We get the following expressions for the communication times:

    T^comm(Kx) ≈ 2 t_s + N^{2/3} t_c,
    T^comm(C_B1^{-1} x) ≈ N^{2/3} t_s + 3 N^{2/3} t_c,    T^comm(C_B2^{-1} x) ≈ N^{1/3} t_s + 3 N^{2/3} t_c.

Two communication steps are performed for the matrix-vector multiplication to avoid duplication of computations or extra logic. For the solution of the triangular systems, some vector components must be exchanged after each nodal column (variant B1) or each nodal plane (variant B2) of unknowns is computed. The three systems of the preconditioner (one for each displacement) are solved simultaneously, so no extra communication steps for the different displacements are required. The above communications are
completely local and do not depend on the number of processors. The inner product needs one broadcast and one gather global communication, but these do not contribute to the leading terms of the total parallel time.

750     Y. Vutov

The parallel properties of the algorithm do not depend on the number of iterations, so it is enough to evaluate the parallel time per iteration and to use it in the speedup and efficiency analysis. As the computations are almost equally distributed among the processors, and assuming no overlap of communications and computations, one can write the following estimates for the total time per iteration on p processors:

    T^it_B1(p) = (40N/p) t_a + N^{2/3} t_s + 4 N^{2/3} t_c,    T^it_B2(p) = (38N/p) t_a + N^{1/3} t_s + 4 N^{2/3} t_c.

The relative speedup S(p) = T(1)/T(p) and the efficiency E(p) = S(p)/p grow with n in both variants up to their theoretical limits S(p) = p and E(p) = 1. Since on a real computer t_s ≫ t_c and t_s ≫ t_a, good efficiencies can be expected only when n ≫ p t_s/t_a. The efficiency of Variant B2 is expected to be much better than that of Variant B1, because about 3n times fewer messages are sent.

4 Benchmarking

4.1 Convergence Tests

The presented numerical tests illustrate the PCG convergence rate of the studied displacement decomposition algorithms when the size of the discrete problem and the coefficient jumps are varied. The computational domain is Ω = [0, 1]^3, where homogeneous Dirichlet boundary conditions are assumed at the bottom face. A uniform mesh is used. The number of intervals in each coordinate direction for the finer grid is n. A relative stopping criterion (C^{-1} r_i, r_i)/(C^{-1} r_0, r_0) < ε^2 is used in the PCG algorithm, where r_i stands for the residual at the i-th iteration step, and ε = 10^{-6}. The interaction between a soil medium and a foundation element with varying elasticity modulus is considered. The foundation domain is Ω_f = [3/8, 5/8] × [3/8, 5/8] × [1/2, 1]. The mechanical characteristics are E_s = 10 MPa, ν_s = 0.2 and E_f = 10^J MPa, ν_f
= 0.2 for the soil and the foundation, respectively. Experiments with J = 0, 1, 2, 3 are performed. The force acting on the top of the foundation is 1 MN. In Tables 1 and 2 the numbers of iterations are collected for both variants B1 and B2, for Algorithms MP and MV, respectively. In Table 1, Variant B0 is also included, corresponding to the application of the MIC(0) factorization directly to the matrix A. Note that this is possible only for Algorithm MP (because of the positive offdiagonal entries in A in Algorithm MV) and only in a sequential program.

Table 1. Algorithm MP, number of iterations

          n =       32         64        128          256
          N =  304 128  2 396 160  19 021 824  151 584 768
  J = 0   B0       161        264        367
          B1       147        223        331          486
          B2       113        162        230          327
  J = 1   B0       186        284        424
          B1       173        262        389          570
          B2       130        186        264          377
  J = 2   B0       227        428        638
          B1       253        391        581          852
          B2       189        271        385          542
  J = 3   B0       361        565        843
          B1       343        523        780        1 148
          B2       247        357        509          725

Table 2. Algorithm MV, number of iterations

          n =       32         64        128          256
          N =  304 128  2 396 160  19 021 824  151 584 768
  J = 0   B1       173        295        471          730
          B2       255        648        916        1 282
  J = 1   B1       197        310        536          857
          B2       280        744      1 053        1 486
  J = 2   B1       313        486        778        1 198
          B2       348        904      1 281        1 813
  J = 3   B1       405        630      1 013        1 600
          B2       411      1 069      1 517        2 154

One can clearly see the robustness of the proposed preconditioners: the number of iterations is of order O(n^{1/2}) = O(N^{1/6}). It is remarkable that for Algorithm MP the number of iterations for Variant B2 is smaller than that for Variant B1, and even smaller than the number of iterations obtained without the modification of the matrix A.

4.2 Parallel Tests

Here we present execution times, speedups, and efficiencies from experiments performed on three parallel computing platforms, referred to further as C1, C2, and C3. Platform C1 is an "IBM SP Cluster 1600" consisting of 64 p5-575 nodes interconnected with a pair of connections to the Federation HPS (High Performance Switch). Each p5-575 node contains Power5 SMP processors at 1.9 GHz and 16 GB of RAM. The network bandwidth is 16 Gb/s. Platform C2 is an IBM
Linux Cluster 1350, made of 512 dual-core IBM X335 nodes. Each node contains Xeon Pentium IV processors and GB of RAM. Nodes are interconnected with a Gb Myrinet network. Platform C3 is a "Cray XD1" cabinet, fully equipped with 72 2-way nodes, totaling 144 AMD Opteron processors at 2.4 GHz. Each node has GB of memory. The CPUs are interconnected with the Cray RapidArray network with a bandwidth of 5.6 Gb/s.

Since the parallel properties of the algorithm do not depend on the discretization type and the number of iterations, experiments are performed only for Algorithm MP and for the case with the strongest coefficient jumps. In Table 3 the sequential execution times T(1) are shown in seconds. The relative speedups S(p) and efficiencies E(p) for various values of n and numbers of processors p are collected in Table 4. Results for both variants B1 and B2 are included. For a fixed number of processors, the speedup and efficiency grow with the problem size. Conversely, for fixed n, the efficiency decreases with the number of processors. This is true for all platforms and confirms our analysis. For Variant B1, reasonable efficiencies are obtained only when n/p is sufficiently large. And again, as expected, for given p and n Variant B2 performs far better, even for smaller ratios n/p. It is clearly seen how reducing the number of communication steps in the solution of the preconditioner improves the parallel performance.

Table 3. Sequential times (in seconds)

            Variant B1                Variant B2
    n      C1      C2      C3       C1      C2      C3
   32   52.18   30.87   29.47    28.16   18.61   21.18
   64   578.4   336.8   347.6    336.1   228.4   224.2
  128    6596    3793    3556     3887    2556    2610

Table 4. Parallel speedups and efficiencies

                        Variant B1                              Variant B2
                 C1          C2          C3           C1          C2          C3
    n    p   S(p)  E(p)  S(p)  E(p)  S(p)  E(p)   S(p)  E(p)  S(p)  E(p)  S(p)  E(p)
   32    2   1.49  0.74  1.31  0.66  1.77  0.88   1.93  0.96  1.33  0.66  1.97  0.99
         4   1.83  0.45  1.49  0.37  2.40  0.60   3.53  0.88  2.08  0.51  3.25  0.81
         8   2.11  0.26  1.22  0.15  3.34  0.42   5.78  0.72  3.07  0.38  5.20  0.65
        16   1.61  0.10  0.92  0.06  3.22  0.20   9.45  0.59  3.93  0.25  7.63  0.48
   64    2   1.68  0.84  1.38  0.69  2.02  1.01   2.02  1.01  1.35  0.68  1.77  0.88
         4   2.46  0.61  1.98  0.49  3.17  0.79   3.92  0.98  2.49  0.62  3.50  0.87
         8   3.27  0.41  1.93  0.24  4.26  0.53   7.38  0.92  4.21  0.52  5.91  0.73
        16   3.78  0.23  2.06  0.13  6.03  0.38  12.83  0.81  6.53  0.40  8.64  0.54
  128    2   1.82  0.91  1.51  0.76  1.56  0.78   2.00  1.00  1.49  0.74  1.93  0.96
         4   2.96  0.74  2.40  0.60  2.73  0.68   3.90  0.98  2.54  0.63  3.72  0.93
         8   4.50  0.56  2.70  0.34  5.34  0.67   7.33  0.92  4.59  0.57  7.30  0.91
        16   5.83  0.36  3.64  0.23  7.64  0.48  12.73  0.80  7.51  0.47 12.21  0.76

Acknowledgments

The numerical tests are supported via EC Project HPC-EUROPA RII3-CT2003-506079. The author also gratefully acknowledges the support provided via EC INCO Grant BIS-21++ 016639/2005.

References

1. Arbenz, P., Margenov, S., Vutov, Y.: Parallel MIC(0) preconditioning of 3D elliptic problems discretized by Rannacher–Turek finite elements. Comput. Math. Appl. (to appear)
2. Axelsson, O.: Iterative Solution Methods. Cambridge University Press, Cambridge (1994)
3. Axelsson, O., Gustafsson, I.: Iterative methods for the Navier equation of elasticity. Comp. Math. Appl. Mech. Engin. 15, 241–258 (1978)
4. Blaheta, R.: Displacement decomposition – incomplete factorization preconditioning techniques for linear elasticity problems. Numer. Lin. Alg. Appl. 1, 107–126 (1994)
5. Malkus, D., Hughes, T.: Mixed finite element methods. Reduced and selective integration techniques: a unification of concepts. Comp. Meth. Appl. Mech. Eng. 15, 63–81 (1978)
6. Rannacher, R., Turek, S.: Simple nonconforming quadrilateral Stokes element. Numer. Methods Partial Differential Equations 8(2), 97–112 (1992)
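The factored application of the preconditioner inside the PCG iteration of Section 3.1 (forward recurrence, diagonal scaling, backward recurrence for C = (X − L)X^{-1}(X − L)^T) can be sketched in a few lines. This is an illustrative serial sketch, not the paper's parallel implementation: the function names are mine, dense NumPy triangular solves stand in for the block recurrences, and X is taken simply as the diagonal of A (a symmetric Gauss–Seidel-type choice) rather than the diagonal produced by the MIC(0) factorization of B.

```python
import numpy as np

def make_factored_preconditioner(A):
    """Return a routine applying C^{-1} for C = (X - L) X^{-1} (X - L)^T.

    For illustration X is taken as diag(A); the paper instead computes X
    from the MIC(0) factorization.  L is the strictly lower triangle.
    """
    XmL = np.tril(A)               # X - L: lower triangle of A incl. diagonal
    X = np.diag(np.diag(A))
    def apply_Cinv(v):
        y = np.linalg.solve(XmL, v)       # forward recurrence: (X - L) y = v
        z = X @ y                         # diagonal scaling:    z = X y
        return np.linalg.solve(XmL.T, z)  # backward recurrence: (X - L)^T w = z
    return apply_Cinv

def pcg(A, b, apply_Cinv, eps=1e-6, maxit=500):
    """PCG with the relative stopping criterion
    (C^{-1} r_i, r_i) / (C^{-1} r_0, r_0) < eps^2 of Section 4.1."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Cinv(r)
    d = z.copy()
    rz = rz0 = r @ z
    for it in range(1, maxit + 1):
        Ad = A @ d
        alpha = rz / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        z = apply_Cinv(r)
        rz_new = r @ z
        if rz_new / rz0 < eps ** 2:
            return x, it
        d = z + (rz_new / rz) * d
        rz = rz_new
    return x, maxit
```

With ε = 10^{-6} this mirrors the stopping rule used in the convergence tests; replacing the diagonal of A by the MIC(0) diagonal and the dense solves by the per-block recurrences recovers the serial form of the algorithm analyzed above.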
