
4 PP: Speedup in parallel and distributed processing


DOCUMENT INFORMATION

Pages: 19
Size: 276.23 KB

Contents

PART 1: PARALLEL COMPUTING
Chapter 1. Architecture and types of parallel computers
Chapter 2. Components of parallel computers
Chapter 3. Introduction to parallel programming
Chapter 4. Parallel programming models
Chapter 5. Parallel algorithms

PART 2: PARALLEL PROCESSING OF DATABASES (supplementary reading)
Chapter 6. Overview of parallel databases
Chapter 7. Parallel query optimization
Chapter 8. Optimal scheduling of parallel queries

Thoai Nam
Khoa Công Nghệ Thông Tin, Đại Học Bách Khoa Tp.HCM (Faculty of Computer Science and Engineering, HCMC University of Technology)

Topics: Speedup & Efficiency, Amdahl's Law, Gustafson's Law, Sun & Ni's Law

Speedup & Efficiency

Speedup: S = Time(the most efficient sequential algorithm) / Time(parallel algorithm)
Efficiency: E = S / N, where N is the number of processors

Amdahl's Law: Fixed Problem Size (1)

The main objective is to produce the results as soon as possible (examples: video compression, computer graphics, VLSI routing).
Implications:
- The upper bound on speedup is 1/α, where α is the sequential fraction of the work.
- Make the sequential bottleneck as small as possible.
- Optimize the common case.
A modified Amdahl's law for fixed problem size also includes the overhead.

Amdahl's Law: Fixed Problem Size (2)

(Figure: the sequential part runs on a single processor, the parallel part is split across processors P0..P9.)

Ts = α·T(1)
Tp = (1 − α)·T(1)
T(N) = α·T(1) + (1 − α)·T(1)/N, where N is the number of processors

Amdahl's Law: Fixed Problem Size (3)

Speedup = Time(1)/Time(N)
        = T(1) / (α·T(1) + (1 − α)·T(1)/N)
        = 1 / (α + (1 − α)/N)
        → 1/α as N → ∞

Amdahl's Law: Fixed Problem Size with overhead

Speedup = T(1) / (α·T(1) + (1 − α)·T(1)/N + T_overhead)
        = 1 / (α + (1 − α)/N + T_overhead/T(1))
        → 1 / (α + T_overhead/T(1)) as N → ∞

The overhead includes parallelism and interaction overheads.

Gustafson's Law: Fixed Time (1)

The user wants more accurate results within a time limit; execution time stays fixed as the system scales (examples: FEM for structural analysis, FDM for fluid dynamics).
Properties of a work metric:
- Easy to measure
- Architecture independent
- Easy to model with an analytical expression
- No additional experiment needed to measure the work
- The measure of work should scale linearly with the sequential time complexity of the algorithm
Time-constrained scaling seems to be the most generally viable model.

Gustafson's Law: Fixed Time (2)

(Figure: the scaled workload W(N) runs in parallel on processors P0..P9; executed sequentially, its parallel portion would take N times as long.)

α = Ws / W(N)
W(N) = α·W(N) + (1 − α)·W(N)
W(1) = α·W(N) + (1 − α)·W(N)·N

Gustafson's Law: Fixed Time without overhead

Time = Work · k, and writing W = W(N):

Speedup = T(1)/T(N) = W(1)·k / (W(N)·k) = (α·W + (1 − α)·N·W) / W = α + (1 − α)·N

Gustafson's Law: Fixed Time with overhead

W(N) = W + W0

Speedup = T(1)/T(N) = W(1)·k / (W(N)·k) = (α·W + (1 − α)·N·W) / (W + W0) = (α + (1 − α)·N) / (1 + W0/W)
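The two formulas above can be checked numerically. The following is a minimal sketch, not part of the original slides; the function names and the example values α = 0.05 and N = 16 are illustrative assumptions. It evaluates the fixed-size (Amdahl) and fixed-time (Gustafson) speedups, with the overhead terms optional.

def amdahl_speedup(alpha, n, overhead_ratio=0.0):
    # Fixed problem size: Speedup = 1 / (alpha + (1 - alpha)/N + T_overhead/T(1)).
    return 1.0 / (alpha + (1.0 - alpha) / n + overhead_ratio)

def gustafson_speedup(alpha, n, overhead_ratio=0.0):
    # Fixed time: Speedup = (alpha + (1 - alpha)*N) / (1 + W0/W).
    return (alpha + (1.0 - alpha) * n) / (1.0 + overhead_ratio)

alpha, n = 0.05, 16  # assumed: 5% sequential fraction, 16 processors
print("Amdahl    S(16) =", round(amdahl_speedup(alpha, n), 2))     # about 9.14, bounded by 1/alpha = 20
print("Gustafson S(16) =", round(gustafson_speedup(alpha, n), 2))  # 15.25, keeps growing with N

With a 5% sequential fraction, the fixed-size speedup on 16 processors is roughly 9.1 and can never exceed 1/α = 20, while the fixed-time (scaled) speedup is 15.25 and continues to grow with N.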
Sun and Ni's Law: Fixed Memory (1)

Scale to the largest problem that fits in memory, or fix the memory usage per processor. Speedup defined as Time(1)/Time(N) for the scaled-up problem is not appropriate. For a simple profile, G(N) is the increase of the parallel workload as the memory capacity increases N times. [...]

Sun and Ni's Law: Fixed Memory (2)

W = α·W + (1 − α)·W
Let M be the memory capacity of a single node; with N nodes the available memory increases to N·M.
The scaled work: W* = α·W + (1 − α)·G(N)·W

Speedup_MC = (α + (1 − α)·G(N)) / (α + (1 − α)·G(N)/N)

Sun and Ni's Law: Fixed Memory (3)

Definition: a function g is a homomorphism if there exists a function ḡ such that for any real number c and variable x, g(c·x) = ḡ(c)·g(x).
If W = g(M) for some homomorphism g, then, with all data being shared by all available processors, the simplified memory-bounded speedup is

S*_N = (W1 + ḡ(N)·WN) / (W1 + ḡ(N)·WN/N) = (α + (1 − α)·G(N)) / (α + (1 − α)·G(N)/N)

Sun and Ni's Law: Fixed Memory (4)

Proof: let the memory requirement of WN be M, so WN = g(M); M is the memory requirement when one node is available. [...]

Special cases: with G(N) = 1 the memory-bounded speedup reduces to Amdahl's law, and with G(N) = N it reduces to Gustafson's Law. For most scientific and engineering applications, the computation requirement increases faster than the memory requirement, so G(N) > N.

(Figure: speedup versus number of processors, comparing linear speedup S(Linear) with a typical measured curve S(Normal).)

Remarks

Parallelizing a code does not always result in a speedup; sometimes it actually slows the code down. This can be due to a poor choice of algorithm or to poor coding. The best possible speedup is linear, i.e. proportional to the number of processors: T(N) = T(1)/N, where N is the number of processors and T(1) is the time for the serial run. A code that continues to speed up reasonably close to linearly [...]
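To close, the three models can be compared through the memory-bounded formula, since Amdahl's and Gustafson's laws are its G(N) = 1 and G(N) = N special cases. The sketch below is illustrative and not from the slides; α = 0.05, N = 16 and the choices of G(N) are assumptions.

def memory_bounded_speedup(alpha, n, g_of_n):
    # Sun and Ni: S*_N = (alpha + (1 - alpha)*G(N)) / (alpha + (1 - alpha)*G(N)/N).
    scaled = (1.0 - alpha) * g_of_n
    return (alpha + scaled) / (alpha + scaled / n)

alpha, n = 0.05, 16  # assumed values, as before
print(round(memory_bounded_speedup(alpha, n, 1.0), 2))       # G(N) = 1  -> Amdahl, about 9.14
print(round(memory_bounded_speedup(alpha, n, float(n)), 2))  # G(N) = N  -> Gustafson, 15.25
print(round(memory_bounded_speedup(alpha, n, n ** 1.5), 2))  # G(N) > N  -> larger still (illustrative choice)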

