Petascale Computing: Algorithms and Applications (Bader, 2007-12-22) - Data Structures and Algorithms


Document information

Petascale Computing: Algorithms and Applications

Chapman & Hall/CRC Computational Science Series

SERIES EDITOR
Horst Simon
Associate Laboratory Director, Computing Sciences
Lawrence Berkeley National Laboratory
Berkeley, California, U.S.A.

AIMS AND SCOPE
This series aims to capture new developments and applications in the field of computational science through the publication of a broad range of textbooks, reference works, and handbooks. Books in this series will provide introductory as well as advanced material on mathematical, statistical, and computational methods and techniques, and will present researchers with the latest theories and experimentation. The scope of the series includes, but is not limited to, titles in the areas of scientific computing, parallel and distributed computing, high performance computing, grid computing, cluster computing, heterogeneous computing, quantum computing, and their applications in scientific disciplines such as astrophysics, aeronautics, biology, chemistry, climate modeling, combustion, cosmology, earthquake prediction, imaging, materials, neuroscience, oil exploration, and weather forecasting.

PUBLISHED TITLES
PETASCALE COMPUTING: Algorithms and Applications
Edited by David A. Bader

Petascale Computing: Algorithms and Applications
Edited by David A. Bader, Georgia Institute of Technology, Atlanta, U.S.A.

Chapman & Hall/CRC
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2008 by Taylor & Francis Group, LLC. Chapman & Hall/CRC is an imprint of Taylor & Francis Group, an Informa business.

No claim to original U.S. Government works.
Printed in the United States of America on acid-free paper.
International Standard Book Number-13: 978-1-58488-909-0 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Petascale computing : algorithms and applications / editor, David A. Bader.
  p. cm. -- (Computational science series)
Includes bibliographical references and index.
ISBN 978-1-58488-909-0 (hardback : alk. paper)
1. High performance computing. 2. Petaflops computers. 3. Parallel processing (Electronic computers). I. Bader, David A. II. Title. III. Series.
QA76.88.P475 2007
004'.35--dc22
2007044024

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

To Sadie Rose

Chapman & Hall/CRC Computational Science Series

Computational science, the scientific investigation of physical processes through modelling and simulation on computers, has become generally accepted as the third pillar of science, complementing and extending theory and experimentation. This view was probably first expressed in the mid-1980s. It grew out of an impressive list of accomplishments in such diverse areas as astrophysics, aeronautics, chemistry, climate modelling, combustion, cosmology, earthquake prediction, imaging, materials, neuroscience, oil exploration, and weather forecasting.

Today, in the middle of the first decade of the 21st century, the pace of innovation in information technology is accelerating, and consequently the opportunities for computational science and engineering abound. Computational science and engineering (CSE) today serves to advance all of science and engineering, and many areas of research in the future will be only accessible to those with access to advanced computational technology and platforms. Progress in research using high performance computing platforms has been tightly linked to progress in computer hardware on one side and progress in software and algorithms on the other, with both sides generally acknowledged to contribute equally to the advances made by researchers using these technologies.

With the arrival of highly parallel compute platforms in the mid-1990s, several subtle changes occurred that changed the face of CSE in the last decade. Because of the complexities of large-scale hardware systems and the increasing sophistication of modelling software, including multi-physics and multiscale simulation, CSE increasingly became a team science. The most successful practitioners of CSE today are multidisciplinary teams that include mathematicians and computer scientists. These teams have set up a software infrastructure, including a support infrastructure, for large codes that are well maintained and extensible beyond the set of original developers.

The importance of CSE for the future of research accomplishments and economic growth has been well established. "Computational science is now indispensable to the solution of complex problems in every sector, from traditional science and engineering domains to such key areas as national security, public health, and economic innovation," is the principal finding of the recent report of the President's Information Technology Advisory Committee (PITAC) in the U.S. (President's Information Technology Advisory Committee, Computational Science: Ensuring America's Competitiveness, Arlington, Virginia: National Coordination Office for Information Technology Research and Development, 2005, p. 2.)

As advances in computational science and engineering continue to grow at a rapid pace, it becomes increasingly important to present the latest research and applications to professionals working in the field. Therefore I welcomed the invitation by Chapman & Hall/CRC Press to become series editor and start this new series of books on computational science and engineering. The series aims to capture new developments and applications in the field of computational science, through the publication of a broad range of textbooks, reference works, and handbooks. By integrating mathematical, statistical, and computational methods and techniques, the titles included in the series are meant to appeal to students, researchers, and professionals, as well as interdisciplinary researchers and practitioners who are users of computing technology and practitioners of computational science. The inclusion of concrete examples and applications is highly encouraged. The scope of the series includes, but is not limited to, titles in the areas of scientific computing, parallel and distributed computing, high performance computing, grid computing, cluster computing, heterogeneous computing, quantum computing, and their application in scientific areas such as astrophysics, aeronautics, biology, chemistry, climate modelling, combustion, cosmology, earthquake prediction, imaging, materials, neuroscience, oil exploration, and weather forecasting, and others.

With this goal in mind I am very pleased to introduce the first book in the series, Petascale Computing: Algorithms and Applications, edited by my good colleague and friend David Bader. This book grew out of a workshop at Schloss Dagstuhl in February 2006, and is a perfect start for the series. It is probably the first book on real petascale computing. At the beginning of an exciting new phase in high performance computing, just as we are about to enter the age of petascale performance, the chapters in the book will form an ideal starting point for further investigations. They summarize the state of knowledge in algorithms and applications in 2007, just before the first petascale systems will become available. In the same way as petascale computing will open up new and unprecedented opportunities for research in computational science, I expect this current book to lead the new series to a deeper understanding and appreciation of research in computational science and engineering.

Berkeley, May 2007
Dr. Horst Simon
Series Editor
Associate Laboratory Director, Computing Sciences
Lawrence Berkeley National Laboratory

Foreword

Over the last few decades, there have been innumerable science, engineering and societal breakthroughs enabled by the development of high performance computing (HPC) applications, algorithms, and architectures. These powerful tools have provided researchers, educators, and practitioners the ability to computationally translate and harness data gathered from around the globe into solutions for some of society's most challenging questions and problems.

An important force which has continued to drive HPC has been a community articulation of "frontier milestones," i.e., technical goals which symbolize the next level of progress within the field. In the 1990s, the HPC community sought to achieve computing at the teraflop (10^12 floating point operations per second) level. Teraflop computing resulted in important new discoveries such as the design of new drugs to combat HIV and other diseases; simulations at unprecedented accuracy of natural phenomena such as earthquakes and hurricanes; and greater understanding of systems as large as the universe and smaller than the cell. Currently, we are about to compute on the first architectures at the petaflop (10^15 floating point operations per second) level. Some communities are already in the early stages of thinking about what computing at the exaflop (10^18 floating point operations per second) level will be like.

In driving towards the "-flop," the assumption is that achieving the next frontier in HPC architectures will provide immense new capacity and capability, will directly benefit the most resource-hungry users, and will provide longer-term benefits to many others. However, large-scale HPC users know that the ability to use frontier-level systems effectively is at least as important as their increased capacity and capability, and that considerable time and human, software, and hardware infrastructure are generally required to get the most out of these extraordinary systems. Experience indicates that the development of scalable algorithms, models, simulations, analyses, libraries, and application components which can take full advantage of frontier system capacities and capabilities can be as challenging as building and deploying the frontier system itself.

For application codes to sustain a petaflop in the next few years, hundreds of thousands of processor cores or more will be needed, regardless of processor technology. Currently, few real existing HPC codes easily scale to this regime, and major code development efforts are critical to achieve the potential of the new petaflop systems. Scaling to a petaflop will involve improving ...

[Further preview fragments from the book's list of figures and table of contents: Figure 12.4, "Δ-stepping execution time and relative speedup on the MTA-2 for (a) a Rand directed-UWD-2^28-2^28 graph instance and (b) an R-MAT directed-UWD-2^28-2^28 instance"; a GRAPE figure caption noting that the even-numbered systems (GRAPE-2, 2A, 4, ..., dashed line) are the high-precision machines and the odd-numbered systems (GRAPE-1, 1A, 3, ..., dotted line) are the low-precision machines; 13.2 Application-Level Diskless Checkpointing, 267; 13.2.1 Neighbor-based checkpointing, 269; 13.2.2 Checksum-based checkpointing, 271; 13.2.3 Weighted-checksum-based checkpointing.]
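The foreword's claim above, that sustaining a petaflop will require "hundreds of thousands of processor cores or more," can be checked with simple arithmetic. As a back-of-envelope illustration (assuming a sustained rate of roughly 4 GFLOPS per core, an illustrative figure for processors of that era rather than a number taken from the book):

$$
\frac{10^{15}\ \text{flop/s}}{4 \times 10^{9}\ \text{flop/s per core}} \approx 2.5 \times 10^{5}\ \text{cores},
$$

i.e., on the order of hundreds of thousands of cores; a lower sustained per-core rate pushes the required count higher still.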


Table of Contents

    Chapman & Hall/CRC Computational Science Series

    Chapter 1: Performance Characteristics of Potential Petascale Scientific Applications

    Chapter 2: Petascale Computing: Impact on Future NASA Missions

    Chapter 3: Multiphysics Simulations and Petascale Computing

    Chapter 4: Scalable Parallel AMR for the Uintah Multi-Physics Code

    Chapter 5: Simulating Cosmological Evolution with Enzo

    Chapter 6: Numerical Prediction of High-Impact Local Weather: A Driver for Petascale Computing

    Chapter 7: Software Design for Petascale Climate Science

    Chapter 8: Towards Distributed Petascale Computing

    Chapter 9: Biomolecular Modeling in the Era of Petascale Computing
