PART 1: PARALLEL COMPUTING
  Chapter 1  ARCHITECTURE AND TYPES OF PARALLEL COMPUTERS
  Chapter 2  COMPONENTS OF PARALLEL COMPUTERS
  Chapter 3  INTRODUCTION TO PARALLEL PROGRAMMING
  Chapter 4  PARALLEL PROGRAMMING MODELS
  Chapter 5  PARALLEL ALGORITHMS
PART 2: PARALLEL DATABASE PROCESSING (supplementary reading)
  Chapter 6  OVERVIEW OF PARALLEL DATABASES
  Chapter 7  PARALLEL QUERY OPTIMIZATION
  Chapter 8  OPTIMAL SCHEDULING OF PARALLEL QUERIES
THOAI NAM
Khoa Công Nghệ Thông Tin, Đại Học Bách Khoa Tp.HCM

Outline
- Communication modes
- MPI: the Message Passing Interface standard

TERMs (1)
- Blocking: return from the procedure indicates that the user is allowed to reuse the resources specified in the call.
- Non-blocking: the procedure may return before the operation completes, and before the user is allowed to reuse the resources specified in the call.
- Collective: all processes in a process group need to invoke the procedure.
- Message envelope: the information used to distinguish messages and selectively receive them: <source, destination, tag, communicator>.

TERMs (2)
- Communicator: the communication context for a communication operation. Messages are always received within the context in which they were sent; messages sent in different contexts do not interfere. The predefined default is MPI_COMM_WORLD.
- Process group: the communicator specifies the set of processes that share the communication context. This group is ordered, and processes are identified by their rank within it.

MPI Environment
- Point-to-point communication
- Collective communication
- Derived data types
- Group management

[Figure: an MPI job of five processes P0-P4, shown communicating directly and through per-node daemons]

MPI Implementations
- LAM: http://www.lam-mpi.org/
- MPICH: http://www-unix.mcs.anl.gov/mpi/mpich/
- Others
- Documents: http://www.mpi.org/ and http://www.mpi-forum.org/

Booting LAM
  % cat lamhosts
  # a 2-node LAM
  node1.cluster.example.com
  node2.cluster.example.com
The lamboot tool actually starts LAM on the specified cluster:
  % lamboot -v lamhosts
  LAM 7.0.4 - Indiana University
  Executing hboot on n0 (node1.cluster.example.com - 1 CPU)
  Executing hboot on n1 (node2.cluster.example.com - 1 CPU)
lamboot then returns to the UNIX shell prompt; LAM does not force a canned environment or a "LAM shell". The tping command builds user confidence that the cluster and LAM are running.

Compiling MPI Programs
Refer to "MPI: It's Easy to Get Started" to see a simple MPI program. mpicc (and mpiCC and mpif77) is a wrapper for the C (C++, and F77) compiler that passes the underlying compiler all the command-line switches needed to find the LAM include files, the relevant LAM libraries, etc.
  shell$ mpicc -o foo foo.c
  shell$ mpif77 -o foo foo.f

Running MPI Programs
An MPI application is started by one invocation of the mpirun command. An SPMD application can be started on the mpirun command line:
  shell$ mpirun -v -np 2 foo
  2445 foo running on n0 (o)
  361 foo running on n1
An application with multiple programs must be described in an application schema, a file that lists each program and its target node(s).
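Putting the compile-and-run workflow together, here is a minimal sketch of a complete first program (our own illustration; the file name hello.c and the printed text are assumptions, not taken from the slides). Each process initializes MPI, queries its rank and the size of MPI_COMM_WORLD, prints one line, and shuts down:

  /* hello.c - every process reports its rank and the group size */
  #include <mpi.h>
  #include <stdio.h>

  int main( int argc, char* argv[] ) {
      int rank, nproc;
      MPI_Init( &argc, &argv );                 /* must precede any other MPI call */
      MPI_Comm_size( MPI_COMM_WORLD, &nproc );  /* number of processes in the job */
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );   /* this process's position in the group */
      printf( "Hello from rank %d of %d\n", rank, nproc );
      MPI_Finalize();                           /* no MPI calls allowed after this */
      return 0;
  }

It is built and launched exactly like foo above:
  shell$ mpicc -o hello hello.c
  shell$ mpirun -v -np 2 hello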
Environment Functions
- MPI_INIT
- MPI_COMM_SIZE
- MPI_COMM_RANK
- MPI_FINALIZE
- MPI_ABORT

MPI_Init
Usage:
  int MPI_Init( int* argc_ptr,        /* in */
                char** argv_ptr[] );  /* in */
Description: Initializes MPI. All MPI programs must call this routine once and only once, before any other MPI routine.

MPI_Finalize
Usage:
  int MPI_Finalize( void );
Description: Terminates all MPI processing. No other MPI routine may be called after it.

MPI_Abort
Usage:
  int MPI_Abort( MPI_Comm comm,    /* in */
                 int errorcode );  /* in */
Description: Forces all processes of an MPI job to terminate.

Simple Program
  #include <mpi.h>
  int main( int argc, char* argv[] ) {
      int rank;
      int nproc;
      MPI_Init( &argc, &argv );
      MPI_Comm_size( MPI_COMM_WORLD, &nproc );
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );
      /* write your code here */
      MPI_Finalize();
  }

Point-to-Point Communication
- MPI_SEND
- MPI_RECV
- MPI_ISEND
- MPI_IRECV
- MPI_WAIT
- MPI_GET_COUNT

Communication Modes in MPI (1)
- Standard mode: it is up to MPI to decide whether outgoing messages will be buffered, so the send is a non-local operation. Buffered (asynchronous): the send can complete before a matching receive has been posted. Synchronous: the send completes only after the matching receive has started.

MPI_Send
Usage:
  int MPI_Send( void* buf,              /* in */
                int count,              /* in */
                MPI_Datatype datatype,  /* in */
                int dest,               /* in */
                int tag,                /* in */
                MPI_Comm comm );        /* in */
Description: Performs a blocking standard-mode send operation. The message can be received by either MPI_RECV or MPI_IRECV.

MPI_Recv
Usage:
  int MPI_Recv( void* buf,              /* out */
                int count,              /* in */
                MPI_Datatype datatype,  /* in */
                int source,             /* in */
                int tag,                /* in */
                MPI_Comm comm,          /* in */
                MPI_Status* status );   /* out */
Description: Performs a blocking receive operation. The message received must be less than or equal to the length of the receive buffer. MPI_RECV can receive a message sent by either MPI_SEND or MPI_ISEND.

Sample Program for Blocking Operations
  #include <mpi.h>
  #include <stdio.h>
  #define TAG 1  /* message tag; the slides do not show its value */

  int main( int argc, char* argv[] ) {
      int rank, nproc;
      int isbuf, irbuf;
      MPI_Status status;
      MPI_Init( &argc, &argv );
      MPI_Comm_size( MPI_COMM_WORLD, &nproc );
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );
      if (rank == 0) {
          isbuf = 9;
          MPI_Send( &isbuf, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD );
      } else if (rank == 1) {
          MPI_Recv( &irbuf, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &status );
          printf( "%d\n", irbuf );
      }
      MPI_Finalize();
  }
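The message envelope <source, destination, tag, communicator> from TERMs (1) is what makes selective receiving possible. The sketch below is our own illustration (TAG_A and TAG_B are arbitrary values we chose, not from the slides): rank 0 sends two tagged messages, and rank 1 receives with the wildcards MPI_ANY_SOURCE and MPI_ANY_TAG, then reads the actual source and tag back out of the status argument. Run it with at least two processes.

  #include <mpi.h>
  #include <stdio.h>
  #define TAG_A 10  /* arbitrary tags chosen for this illustration */
  #define TAG_B 20

  int main( int argc, char* argv[] ) {
      int rank, i, val;
      int a = 1, b = 2;
      MPI_Status status;
      MPI_Init( &argc, &argv );
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );
      if (rank == 0) {
          MPI_Send( &a, 1, MPI_INT, 1, TAG_A, MPI_COMM_WORLD );
          MPI_Send( &b, 1, MPI_INT, 1, TAG_B, MPI_COMM_WORLD );
      } else if (rank == 1) {
          for (i = 0; i < 2; i++) {
              /* wildcards match any envelope; status reports what actually arrived */
              MPI_Recv( &val, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                        MPI_COMM_WORLD, &status );
              printf( "got %d from rank %d, tag %d\n",
                      val, status.MPI_SOURCE, status.MPI_TAG );
          }
      }
      MPI_Finalize();
      return 0;
  }

Messages between one sender-receiver pair in the same communicator are non-overtaking, so rank 1 sees TAG_A before TAG_B even though it never asked for a specific tag.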
MPI_Isend
Usage:
  int MPI_Isend( void* buf,               /* in */
                 int count,               /* in */
                 MPI_Datatype datatype,   /* in */
                 int dest,                /* in */
                 int tag,                 /* in */
                 MPI_Comm comm,           /* in */
                 MPI_Request* request );  /* out */
Description: Performs a nonblocking standard-mode send operation. The send buffer may not be modified until the request has been completed by MPI_WAIT or MPI_TEST. The message can be received by either MPI_RECV or MPI_IRECV.

MPI_Irecv
Usage:
  int MPI_Irecv( void* buf,               /* out */
                 int count,               /* in */
                 MPI_Datatype datatype,   /* in */
                 int source,              /* in */
                 int tag,                 /* in */
                 MPI_Comm comm,           /* in */
                 MPI_Request* request );  /* out */
Description: Performs a nonblocking receive operation. Do not access any part of the receive buffer until the receive is complete. The message received must be less than or equal to the length of the receive buffer. MPI_IRECV can receive a message sent by either MPI_SEND or MPI_ISEND.

MPI_Wait
Usage:
  int MPI_Wait( MPI_Request* request,  /* in/out */
                MPI_Status* status );  /* out */
Description: Waits for the nonblocking operation identified by request to complete, and returns information about the completed operation in status.

Sample Program for Non-Blocking Operations
  #include <mpi.h>
  #include <stdio.h>
  #define TAG 1  /* message tag; the slides do not show its value */

  int main( int argc, char* argv[] ) {
      int rank, nproc;
      int isbuf, irbuf, count;
      MPI_Request request;
      MPI_Status status;
      MPI_Init( &argc, &argv );
      MPI_Comm_size( MPI_COMM_WORLD, &nproc );
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );
      if (rank == 0) {
          isbuf = 9;
          MPI_Isend( &isbuf, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD, &request );
          MPI_Wait( &request, &status );  /* complete the send before finalizing */
      } else if (rank == 1) {
          MPI_Irecv( &irbuf, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &request );
          MPI_Wait( &request, &status );
          MPI_Get_count( &status, MPI_INT, &count );
          printf( "irbuf = %d source = %d tag = %d count = %d\n",
                  irbuf, status.MPI_SOURCE, status.MPI_TAG, count );
      }
      MPI_Finalize();
  }

Collective Communication
- MPI_BCAST
- MPI_SCATTER / MPI_SCATTERV
- MPI_GATHER / MPI_GATHERV
- MPI_ALLGATHER / MPI_ALLGATHERV
- MPI_ALLTOALL
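Each routine in the list above is collective in the sense of TERMs (1): every process in the communicator must invoke it. As a sketch of the two most common collectives (our own example, not from the slides; the seed value 7 is arbitrary), rank 0 broadcasts a value with MPI_Bcast, every rank derives a result from it, and MPI_Gather assembles the per-rank results, in rank order, at rank 0:

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main( int argc, char* argv[] ) {
      int rank, nproc, seed = 0, result;
      int* all = NULL;
      MPI_Init( &argc, &argv );
      MPI_Comm_size( MPI_COMM_WORLD, &nproc );
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );
      if (rank == 0) seed = 7;                  /* value to distribute (arbitrary) */
      /* collective: every rank calls it; the root's buffer is copied to all others */
      MPI_Bcast( &seed, 1, MPI_INT, 0, MPI_COMM_WORLD );
      result = (seed + rank) * (seed + rank);   /* each rank computes its own piece */
      if (rank == 0) all = malloc( nproc * sizeof(int) );
      /* collective: one int per rank lands in the root's array, ordered by rank */
      MPI_Gather( &result, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD );
      if (rank == 0) {
          for (int i = 0; i < nproc; i++)
              printf( "rank %d contributed %d\n", i, all[i] );
          free( all );
      }
      MPI_Finalize();
      return 0;
  }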