
What is MPI (Fortran)




DOCUMENT INFORMATION

Pages: 44
Size: 171.5 KB

CONTENTS

1 What is MPI?

- MPI = Message Passing Interface
- A specification of message passing libraries, for developers and users
- Not a library by itself; it specifies what such a library should be
- Specifies the application programming interface (API) for such libraries
- Many libraries implement this API on different platforms: the MPI libraries
- Goal: provide a standard for writing message passing programs that is portable, efficient, and flexible
- Language bindings: C, C++, FORTRAN

2 History & Evolution

- 1980s-1990s: incompatible libraries and software tools; need for a standard
- 1994: MPI 1.0
- 1995: MPI 1.1, a revision and clarification of MPI 1.0
  - Major milestone; C and FORTRAN bindings
  - Fully implemented in all MPI libraries
- 1997: MPI 1.2, corrections and clarifications to MPI 1.1
- 1997: MPI 2, a major extension (and clarification) of MPI 1.1
  - C++, C, and FORTRAN bindings
  - Partially implemented in most libraries; a few full implementations (e.g. ANL MPICH2)

3 Why Use MPI?

- Standardization: the de facto standard for parallel computing
  - Not an IEEE or ISO standard, but an "industry standard"
  - Has practically replaced all previous message passing libraries
- Portability: supported on virtually all HPC platforms
  - No need to modify source code when migrating to a different machine
- Performance: the best so far; high performance and high scalability
- Rich functionality: MPI 1.1 has 125 functions and MPI 2 has 152, yet if you know 6 MPI functions, you can do almost everything in parallel

4 Programming Model

- Message passing model: data exchange through explicit communication
- For distributed-memory as well as shared-memory parallel machines
- The user has full control (data partitioning and distribution) and must identify the parallelism and implement the parallel algorithm using MPI function calls; a minimal sketch of this style follows below
- The number of CPUs in a computation is static
  - New tasks cannot be dynamically spawned at run time (MPI 1.1)
  - MPI 2 specifies dynamic process creation and management, but it is not available in most implementations
  - Not necessarily a disadvantage
- General assumption: a one-to-one mapping of MPI processes to processors (although not necessarily always true)
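To make the programming model concrete, here is a minimal sketch (my own illustration, not code from the slides) of the SPMD style just described: every process runs the same program, uses its rank to pick its share of the work, and combines results through explicit communication (MPI_Reduce(), one of the collective operations listed in the slide 5 overview):

/* Sketch: each rank sums its own slice of 1..N; the partitioning is
 * decided explicitly by the programmer, not by the library. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv)
{
    int rank, ncpus;
    long local_sum = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ncpus);

    /* Static, rank-based partitioning; the last rank absorbs the remainder. */
    long chunk = N / ncpus;
    long lo = rank * chunk + 1;
    long hi = (rank == ncpus - 1) ? N : lo + chunk - 1;
    for (long i = lo; i <= hi; i++)
        local_sum += i;

    /* Explicit communication: combine the partial sums on rank 0. */
    MPI_Reduce(&local_sum, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum(1..%d) = %ld\n", N, total);

    MPI_Finalize();
    return 0;
}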
5 MPI 1.1 Overview

- Point-to-point communications
- Collective communications
- Process groups and communicators
- Process topologies
- MPI environment management

6 MPI 2 Overview

- Dynamic process creation and management
- One-sided communications
- MPI Input/Output (parallel I/O)
- Extended collective communications
- C++ binding

7 MPI Resources

- The MPI standard: http://www.mpi-forum.org/
- MPI web sites, tutorials, etc.: see the class web site
- Public domain (free) MPI implementations
  - MPICH and MPICH2 (from ANL)
  - LAM MPI

8 General MPI Program Structure

(Shown as a diagram in the original slides.)

9 Example

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int my_rank, num_cpus;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_cpus);
    printf("Hello, I am process %d among %d processes\n", my_rank, num_cpus);
    MPI_Finalize();
    return 0;
}

Output on 4 processors:

Hello, I am process 1 among 4 processes
Hello, I am process 2 among 4 processes
Hello, I am process 0 among 4 processes
Hello, I am process 3 among 4 processes

10 Example

program hello
  implicit none
  include 'mpif.h'
  integer :: ierr, my_rank, num_cpus
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, num_cpus, ierr)
  write(*,*) "Hello, I am process ", my_rank, " among ", &
             num_cpus, " processes"
  call MPI_FINALIZE(ierr)
end program hello

Output on 4 processors:

Hello, I am process 1 among 4 processes
Hello, I am process 2 among 4 processes
Hello, I am process 0 among 4 processes
Hello, I am process 3 among 4 processes

[...]

… tag1 = 1002;
MPI_Status status;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
…
if (rank == 0)
    MPI_Send(A, 10, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD);
else if (rank == 1) {
    MPI_Recv(B, 15, MPI_DOUBLE, 0, tag,  MPI_COMM_WORLD, &status);   // ok
    // MPI_Recv(B, 15, MPI_FLOAT,  0, tag,  MPI_COMM_WORLD, &status);   <- wrong
    // MPI_Recv(B, 15, MPI_DOUBLE, 0, tag1, MPI_COMM_WORLD, &status);   <- un-matched tag
    // MPI_Recv(B, 15, MPI_DOUBLE, 1, tag,  MPI_COMM_WORLD, &status);   …

… rank, ncpus;
MPI_Status status;
…
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
…
// set num_students, grade, note on the rank 0 CPU
if (rank == 0) {
    MPI_Send(&num_students, 1, MPI_INT, 1, tag1, MPI_COMM_WORLD);
    MPI_Send(grade, 10, MPI_DOUBLE, 2, tag1, MPI_COMM_WORLD);
    MPI_Send(note, strlen(note)+1, MPI_CHAR, 1, tag2, MPI_COMM_WORLD);
}
if (rank == 1) {
    MPI_Recv(&num_students, 1, MPI_INT, 0, tag1, MPI_COMM_WORLD, &status);
    MPI_Recv(note, 1024, MPI_CHAR, 0, tag2, MPI_COMM_WORLD, &status);
    …

… MPI_COMM_SIZE(comm, size, ierr)

Compiling, Running

- The MPI standard does not specify how to start up the program
- Compiling and running MPI code is implementation dependent
- MPI implementations provide utilities/commands for compiling and running MPI codes
- Compile: mpicc, mpiCC, mpif77, mpif90, mpCC, mpxlf, …

  mpiCC -o myprog myfile.C                        (cluster)
  mpif90 -o myprog myfile.f90                     (cluster)
  CC -Ipath_mpi_include -o myprog myfile.C -lmpi  (SGI) …

… no other MPI routine can be called after this call, not even MPI_INIT()

- Exceptions: MPI_Initialized() (and MPI_Get_version(), MPI_Finalized())
- Abnormal termination: MPI_Abort()
  - Makes a best attempt to abort all tasks (see the sketch after this excerpt)

int MPI_Abort(MPI_Comm comm, int errorcode)

int MPI_Finalize(void)

MPI_FINALIZE(IERR)
integer IERR

MPI_ABORT(COMM, ERRORCODE, IERR)
integer COMM, ERRORCODE, IERR

14 MPI Processes

- MPI is process-oriented: …

[...]
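The slides above give the prototypes of MPI_Finalize() and MPI_Abort() but do not show them in context. Below is a minimal, self-contained sketch (my own, not code from the slides) combining them with the ierr/MPI_SUCCESS return-code convention; the input_ok flag stands in for a hypothetical application-level check:

/* Sketch: return-code checking plus best-effort abnormal termination.
 * Note: with the default error handler (MPI_ERRORS_ARE_FATAL) most
 * failures abort before returning, so the explicit checks here mainly
 * illustrate the calling convention. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int ierr, rank;

    ierr = MPI_Init(&argc, &argv);
    if (ierr != MPI_SUCCESS) {
        /* MPI never started, so MPI_Abort cannot be used here. */
        fprintf(stderr, "MPI_Init failed\n");
        return 1;
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int input_ok = 1;   /* hypothetical application-level condition */
    if (!input_ok) {
        /* Best attempt to abort all tasks in the job; the error code
         * is returned to the invoking environment. */
        MPI_Abort(MPI_COMM_WORLD, 2);
    }

    MPI_Finalize();     /* last MPI call; only MPI_Initialized(),
                           MPI_Finalized(), MPI_Get_version() may follow */
    return 0;
}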
… if ierr == MPI_SUCCESS, everything is OK; otherwise, something is wrong:

ierr = MPI_Xxxx(arg1, arg2, …);
ierr = MPI_Xxxx_xxx(arg1, arg2, …);

(In FORTRAN the error code comes back through the last argument: call MPI_XXXX_XXXX(arg1, arg2, …, ierr).)

- MPI constants are all uppercase: MPI_COMM_WORLD, MPI_SUCCESS, MPI_DOUBLE, MPI_SUM, …

12 Initialization

- Initialization: MPI_Init() initializes the MPI environment (MPI_Init_thread() if using multiple threads)
- Must be called before any other MPI routine (so put it at the beginning of the code), except MPI_Initialized() …

[...]

… the address buf

- MPI data types:
  - Basic data types: one for each data type in the hosting languages C/C++ and FORTRAN
  - Derived data types: covered later

26 Basic MPI Data Types

  MPI datatype          C datatype          MPI datatype          FORTRAN datatype
  MPI_CHAR              signed char         MPI_INTEGER           INTEGER
  MPI_SHORT             signed short        MPI_REAL              REAL
  MPI_INT               signed int          MPI_DOUBLE_PRECISION  DOUBLE PRECISION
  MPI_LONG              signed long         MPI_COMPLEX           COMPLEX
  MPI_UNSIGNED_CHAR     unsigned char       MPI_LOGICAL           LOGICAL
  MPI_UNSIGNED_SHORT    unsigned short      MPI_CHARACTER         CHARACTER(1)
  MPI_UNSIGNED          unsigned int        MPI_BYTE
  MPI_UNSIGNED_LONG     unsigned long int   MPI_PACKED
  MPI_FLOAT             float
  MPI_DOUBLE            double
  MPI_LONG_DOUBLE       long double
  MPI_BYTE
  MPI_PACKED

27 Example

int num_students;      // num_students: rank 0 -> rank 1
double grade[10];
…

[...]

… MPI_Comm_rank(), MPI_Comm_size(), MPI_Send(), MPI_Recv()

…
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
if (my_rank == 0) {
    strcpy(message, "Hello, there!");
    MPI_Send(message, strlen(message)+1, MPI_CHAR, 1, 99, MPI_COMM_WORLD);
} else if (my_rank == 1) {
    MPI_Recv(message, 256, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
    printf("Process %d received: %s\n", my_rank, message);
}
MPI_Finalize();
return 0;
}

19 MPI Communications

[...]

… Receive:

…
MPI_Send(message, strlen(message)+1, MPI_CHAR, 1, 99, MPI_COMM_WORLD);
MPI_Recv(message, 256, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
…

- Message data: what to send/receive?
  - Where is the message? Where should it be put?
  - What kind of data is it? What is its size?
- Message envelope: where to send/receive?
  - Sender and receiver
  - Communication context
  - Message tag

22 Send

int MPI_Send(void *buf, int count, MPI_Datatype …

[...]

… the received message: MPI_Get_count()

int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)

MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)
integer STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR

MPI_Status status;
int count;
…
MPI_Recv(message, 256, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_CHAR, &count);  // count contains the actual length

(A fuller, self-contained sketch of this pattern follows at the end of this excerpt.)

25 Message Data

- Consists of count …
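The MPI_Get_count() fragment above is the core of the standard pattern for receiving a message whose length the receiver does not know in advance. Here is a minimal sketch of that pattern (my own illustration, not code from the slides), using MPI_Probe() to wait for the message and inspect its status before allocating the buffer:

/* Sketch: receive a message of unknown length.
 * Probe first, query the element count, allocate, then receive. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        char msg[] = "hello from rank 0";
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        int count;

        /* Block until a matching message is pending, without receiving it. */
        MPI_Probe(0, 99, MPI_COMM_WORLD, &status);
        /* The actual element count lives in the status object. */
        MPI_Get_count(&status, MPI_CHAR, &count);

        char *buf = malloc(count);
        MPI_Recv(buf, count, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
        printf("received %d chars: %s\n", count, buf);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}

Run it with at least two processes, e.g. mpirun -np 2 ./myprog (launcher names vary by implementation, as the compiling/running slide notes).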

Posted: 24/10/2014, 21:28
