MPI tutorial
DEMOCRITOS/ICTP course in TOOLS FOR COMPUTATIONAL PHYSICS 2005
Stefano Cozzini (cozzini@democritos.it)
Democritos/INFM + SISSA

Parallel programming models
• Shared memory (load, store, lock, unlock)
• Message passing (send, receive, broadcast, ...)
• Transparent (compiler works magic)
• Directive-based (compiler needs help)
• Others (BSP, OpenMP, ...)

The message-passing model
• Parallel programs consist of separate processes, each with its own address space
  – Programmer manages memory by placing data in a particular process
• Data sent explicitly between processes
  – Programmer manages memory motion
• Collective operations
  – On an arbitrary set of processes
• Data distribution
  – Also managed by the programmer

Types of parallelism
• Data parallel: the same instructions are carried out simultaneously on multiple data items (SIMD)
• Task parallel: different instructions on different data (MIMD)
• SPMD (single program, multiple data): not synchronized at the individual operation level
• SPMD is equivalent to MIMD, since each MIMD program can be made SPMD (similarly for SIMD, but not in a practical sense)
• Message passing is for MIMD/SPMD parallelism; HPF is an example of SIMD

What is MPI?
• A message-passing library specification
  – extended message-passing model
  – not a language or compiler specification
  – not a specific implementation or product
• For parallel computers, clusters, and heterogeneous networks
• Full-featured
• Designed to provide access to advanced parallel hardware for end users, library writers, and tool developers

MPI is a STANDARD
• The actual implementation of the standard is left to the software developers of the different systems
• In all systems MPI has been implemented as a library of subroutines on top of the network drivers and primitives
• There are many different implementations:
  – LAM/MPI (today's toy): www.lam-mpi.org
  – MPICH

Goals of MPI
MPI's prime goals are:
• To provide source-code portability
• To allow efficient implementations
MPI also offers:
• A great deal of functionality
• Support for heterogeneous parallel architectures

Where to find information
• The Standard itself:
  – at http://www.mpi-forum.org
  – All MPI official releases, in both postscript and HTML
• Other information on the Web:
  – at http://www.mcs.anl.gov/mpi
  – pointers to lots of material, including talks and tutorials, a FAQ, and other MPI pages

MPI is a library
• All operations are performed with routine calls
• Basic definitions are in:
  – mpi.h for C
  – mpif.h for Fortran 77 and 90
  – the MPI module for Fortran 90 (optional)

[...]

C - MPI Basic Datatypes

MPI Data type        C Data type
MPI_CHAR             signed char
MPI_SHORT            signed short int
MPI_INT              signed int
MPI_LONG             signed long int
MPI_UNSIGNED_CHAR    unsigned char
MPI_UNSIGNED_SHORT   unsigned short int
MPI_UNSIGNED         unsigned int
MPI_UNSIGNED_LONG    unsigned long int
MPI_FLOAT            float
MPI_DOUBLE           double
MPI_LONG_DOUBLE      long double
MPI_BYTE             -
MPI_PACKED           -
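As a quick sanity check of the table above (an illustrative sketch, not part of the original slides), a short C program can ask MPI how many bytes it associates with a basic datatype via the standard MPI_Type_size call and compare that with sizeof for the corresponding C type:

    #include <stdio.h>
    #include <mpi.h>

    /* Sketch: compare MPI's byte count for a basic datatype with the C type size. */
    int main( int argc, char *argv[] )
    {
        int n_int, n_double;

        MPI_Init( &argc, &argv );

        /* MPI_Type_size returns the number of bytes MPI uses for a datatype */
        MPI_Type_size( MPI_INT, &n_int );
        MPI_Type_size( MPI_DOUBLE, &n_double );

        printf( "MPI_INT: %d bytes, C int: %d bytes\n", n_int, (int) sizeof(int) );
        printf( "MPI_DOUBLE: %d bytes, C double: %d bytes\n", n_double, (int) sizeof(double) );

        MPI_Finalize();
        return 0;
    }

On any conforming implementation the two numbers agree for each row of the table, which is exactly why the MPI datatype must match the declared C type of the buffer.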
[...] MPI functions to construct custom datatypes, in particular ones for subarrays.

Fortran - MPI Basic Datatypes

MPI Data type          Fortran Data type
MPI_INTEGER            INTEGER
MPI_REAL               REAL
MPI_DOUBLE_PRECISION   DOUBLE PRECISION
MPI_COMPLEX            COMPLEX
MPI_DOUBLE_COMPLEX     DOUBLE COMPLEX
MPI_LOGICAL            LOGICAL
MPI_CHARACTER          CHARACTER(1)
MPI_PACKED             -
MPI_BYTE               -

MPI basic functions (subroutines)
• MPI_INIT: initialize MPI
• MPI_COMM_SIZE: how many PEs?
• MPI_COMM_RANK: identify the PE
• MPI_SEND: send a message
• MPI_RECV: receive a message
• MPI_FINALIZE: close MPI
All you need to know is these 6 calls.

A First Program: Hello World!

C:

    #include <stdio.h>
    #include <mpi.h>

    int main( int argc, char *argv[] )
    {
        int rank, size;
        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        MPI_Comm_size( MPI_COMM_WORLD, &size );
        printf( "I am %d of %d\n", rank, size );
        MPI_Finalize();
        return 0;
    }

Fortran:

    PROGRAM hello
    INCLUDE 'mpif.h'
    INTEGER err, rank, size
    CALL MPI_INIT(err)
    CALL MPI_COMM_RANK( MPI_COMM_WORLD, rank, err )
    CALL MPI_COMM_SIZE( MPI_COMM_WORLD, size, err )
    print *, 'I am ', rank, ' of ', size
    CALL MPI_FINALIZE(err)
    END PROGRAM hello

Compiling MPI programs
• You should specify the appropriate include directory (e.g. -I/mpidir/include)
• You should specify the MPI library (e.g. -L/mpidir/lib -lmpi)
• Usually MPI compiler wrappers do this job for you (e.g. mpif77). Check on your machine.

Running MPI programs
• The MPI-1 Standard does not specify how to run an MPI program, just as the Fortran standard ...

Basic structure of MPI programs
• Header files
• MPI Communicator
• MPI Function format
• Communicator Size and Process Rank
• Initializing and Exiting MPI

Header files
All subprograms that contain calls to MPI subroutines must include the MPI header file:
• C: #include <mpi.h>
• Fortran: INCLUDE 'mpif.h'

Initializing and exiting MPI

    CALL MPI_FINALIZE(IERR)

These two subprograms (MPI_INIT and MPI_FINALIZE) should be called by all processes, and no other MPI calls are allowed before MPI_INIT or after MPI_FINALIZE.

C and Fortran: a note
• C and Fortran bindings correspond closely
• In C:
  – mpi.h must be #included
  – MPI functions return error codes or MPI_SUCCESS
• In Fortran:
  – mpif.h must be included [...]

[...] in case of error
• MPI_RECV is blocking: it returns only when all the data are in BUFFER

MPI: a Fortran example

    Program MPI
      Implicit None
      Include 'mpif.h'
      Integer :: rank
      Integer :: buffer
      Integer, Dimension( 1:MPI_status_size ) :: status
      Integer :: error

      Call MPI_init( error )
      Call MPI_comm_rank( MPI_comm_world, rank, error )

      ! rank 0 sends the value 33 to rank 1 with tag 10
      If( rank == 0 ) Then
         buffer = 33
         Call MPI_send( buffer, 1, MPI_integer, 1, 10, &
                        MPI_comm_world, error )
      End If

      ! rank 1 receives the message and checks its content
      If( rank == 1 ) Then
         Call MPI_recv( buffer, 1, MPI_integer, 0, 10, &
                        MPI_comm_world, status, error )
         Print*, 'Rank ', rank, ' buffer=', buffer
         If( buffer /= 33 ) Print*, 'fail'
      End If

      Call MPI_finalize( error )
    End Program MPI

Summary: MPI send/receive [...]
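For comparison with the Fortran example above, the same two-process exchange can be written with the standard C bindings. This is an illustrative sketch rather than a slide from the course, keeping the same message value (33) and tag (10):

    #include <stdio.h>
    #include <mpi.h>

    int main( int argc, char *argv[] )
    {
        int rank, buffer;
        MPI_Status status;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );

        if ( rank == 0 ) {
            buffer = 33;
            /* send one MPI_INT to rank 1 with tag 10 */
            MPI_Send( &buffer, 1, MPI_INT, 1, 10, MPI_COMM_WORLD );
        }
        if ( rank == 1 ) {
            /* blocking receive: returns only when the data is in buffer */
            MPI_Recv( &buffer, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status );
            printf( "Rank %d buffer=%d\n", rank, buffer );
            if ( buffer != 33 ) printf( "fail\n" );
        }

        MPI_Finalize();
        return 0;
    }

Note how the C version passes the address of the buffer and receives the status through a pointer, while in Fortran the status is an integer array of size MPI_STATUS_SIZE and the error code comes back in the last argument.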
Notes on hello
• All MPI programs begin with MPI_Init and end with MPI_Finalize
• MPI_COMM_WORLD is defined by mpi.h (in C) or mpif.h (in Fortran) and designates all processes in the MPI "job"
• Each statement executes independently in each process
  – including the printf/print statements
• I/O is not part of MPI-1
  – print and write go to standard output
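Because each statement executes in every process and standard output is not managed by MPI-1, prints from different ranks may appear interleaved in an arbitrary order. A common idiom, shown here as a small illustrative C sketch (not taken from the slides), is to restrict such output to rank 0:

    #include <stdio.h>
    #include <mpi.h>

    int main( int argc, char *argv[] )
    {
        int rank, size;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        MPI_Comm_size( MPI_COMM_WORLD, &size );

        /* every process reaches this point, but only rank 0 prints */
        if ( rank == 0 )
            printf( "Running with %d processes\n", size );

        MPI_Finalize();
        return 0;
    }

The guard on rank works because every process knows its own rank after MPI_Comm_rank, even though all of them execute the same program text.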