
Advanced Computer Architecture slides: Memory Hierarchy Design


DOCUMENT INFORMATION

Basic information

Format
Pages: 16
Size: 190.36 KB

Contents

ADVANCED COMPUTER ARCHITECTURE
Faculty of Computer Science and Engineering, Department of Computer Engineering, BK TP.HCM
Trần Ngọc Thịnh, http://www.cse.hcmut.edu.vn/~tnthinh, ©2013

Memory Hierarchy Design

Since 1980, CPU has outpaced DRAM
• A four-issue 2 GHz superscalar accessing 100 ns DRAM could execute 800 instructions during the time for one memory access!
• [Chart: Performance (1/latency) vs. year, 1980-2000. CPU improves about 60% per year (2x in 1.5 years), DRAM about 9% per year (2x in 10 years); the gap grew about 50% per year.]

Processor-DRAM Performance Gap Impact
• To illustrate the performance impact, assume a single-issue pipelined CPU with CPI = 1 using non-ideal memory.
• Ignoring other factors, the minimum cost of a full memory access, in wasted CPU cycles (instructions wasted):

Year | CPU speed (MHz) | CPU cycle (ns) | Memory access (ns) | Minimum CPU memory stall cycles
1986 | 8    | 125   | 190 | 190/125 - 1 = 0.5
1989 | 33   | 30    | 165 | 165/30 - 1 = 4.5
1992 | 60   | 16.6  | 120 | 120/16.6 - 1 = 6.2
1996 | 200  | 5     | 110 | 110/5 - 1 = 21
1998 | 300  | 3.33  | 100 | 100/3.33 - 1 = 29
2000 | 1000 | 1     | 90  | 90/1 - 1 = 89
2002 | 2000 | 0.5   | 80  | 80/0.5 - 1 = 159
2004 | 3000 | 0.333 | 60  | 60/0.333 - 1 = 179

Levels of the Memory Hierarchy
• [Figure: the memory-hierarchy pyramid, from the upper level (CPU registers, 100s of bytes, fastest, most expensive) downward, comparing capacity, access time and cost per level. The preview then skips ahead; of slides 5-27 only the closing fragment "... more conflicts, can waste bandwidth" (slide 27) survives.]

Q3: Which block should be replaced on a miss? (slide 28)
• Easy for direct mapped
• Set associative or fully associative:
  – Random
  – Least Recently Used (LRU)
    • LRU cache state must be updated on every access
    • true implementation only feasible for small sets (2-way, 4-way)
    • pseudo-LRU binary tree often used for 4-8 way
  – First In, First Out (FIFO), a.k.a. round-robin
    • used in highly associative caches
• Replacement policy has a second-order effect, since replacement only happens on misses

Q4: What happens on a write? (slide 29)
• Cache hit:
  – write through: write both cache and memory
    • generally higher traffic, but simplifies cache coherence
  – write back: write cache only (memory is written only when the entry is evicted)
    • a dirty bit per block can further reduce the traffic
• Cache miss:
  – no write allocate: only write to main memory
  – write allocate (aka fetch on write): fetch the block into the cache
• Common combinations:
  – write through and no write allocate (below example)
  – write back with write allocate (above example)

Reading assignment (slide 30)
• Cache coherence problem in multicore systems
  – Identify the problem
  – Algorithms for multicore architectures
• Reference
  – eecs.wsu.edu/~cs460/cs550/cachecoherence.pdf
  – ... more on the internet

Reading assignment (slide 31)
• Cache performance
  – Replacement policy (algorithms)
  – Optimization (miss rate, penalty, ...)
• Reference
  – Hennessy & Patterson, Computer Architecture: A Quantitative Approach
  – www2.lns.mit.edu/~avinatan/research/cache.pdf
  – ... more on the internet
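To make the arithmetic in the Processor-DRAM gap table above concrete, here is a minimal C sketch (my own illustration, not part of the slides) that recomputes the last column: stall cycles per access = memory access time / CPU cycle time - 1.

```c
/* Recomputes the "minimum CPU memory stall cycles" column of the
 * Processor-DRAM gap table. Illustration only; the year/cycle/latency
 * values are copied from the table above. */
#include <stdio.h>

int main(void) {
    const int    year[]   = {1986, 1989, 1992, 1996, 1998, 2000, 2002, 2004};
    const double cycle[]  = {125, 30, 16.6, 5, 3.33, 1, 0.5, 0.333};  /* CPU cycle time, ns     */
    const double mem_ns[] = {190, 165, 120, 110, 100, 90, 80, 60};    /* memory access time, ns */

    for (int i = 0; i < 8; i++) {
        /* Cycles the CPU stalls waiting for one full memory access. */
        double stall = mem_ns[i] / cycle[i] - 1.0;
        printf("%d: %6.1f wasted cycles per memory access\n", year[i], stall);
    }
    return 0;
}
```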
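The "pseudo-LRU binary tree" mentioned under Q3 can be illustrated for one 4-way set with three bits. The sketch below is an assumed implementation of that general scheme, not code from the course; the names plru4_touch and plru4_victim are hypothetical.

```c
/* Tree pseudo-LRU for a 4-way set: 3 bits per set approximate LRU.
 * b[0] selects the half holding the next victim; b[1]/b[2] select the
 * way within the left/right half. Sketch under assumed conventions. */
#include <stdio.h>

typedef struct {
    unsigned char b[3];
} plru4_t;

/* On every hit or fill, flip the bits on the path so the tree points
 * away from the way that was just used. */
static void plru4_touch(plru4_t *s, int way) {
    if (way < 2) {               /* used left half (ways 0-1)                    */
        s->b[0] = 1;             /* next victim should come from the right half  */
        s->b[1] = (way == 0);    /* within the left half, point at the other way */
    } else {                     /* used right half (ways 2-3)                   */
        s->b[0] = 0;
        s->b[2] = (way == 2);
    }
}

/* On a miss, walk the bits to the (approximately) least recently used way. */
static int plru4_victim(const plru4_t *s) {
    if (s->b[0] == 0)
        return s->b[1] == 0 ? 0 : 1;
    return s->b[2] == 0 ? 2 : 3;
}

int main(void) {
    plru4_t set = {{0, 0, 0}};
    int accesses[] = {0, 1, 2, 3};                /* ways touched, oldest first */
    for (int i = 0; i < 4; i++)
        plru4_touch(&set, accesses[i]);
    printf("victim on next miss: way %d\n", plru4_victim(&set));   /* prints way 0 */
    return 0;
}
```

For this access sequence the pseudo-LRU victim coincides with true LRU (way 0); in general the tree only approximates LRU, trading accuracy for far less state than the true LRU the slide says is only feasible for small sets.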
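To contrast the two common write-policy combinations named under Q4, here is a deliberately simplified C sketch (one direct-mapped block holding a single word and one memory word; my own simplification, not from the slides).

```c
/* Write-through + no-write-allocate vs. write-back + write-allocate,
 * reduced to a single one-word cache block and one memory word. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool valid, dirty;
    unsigned tag;
    unsigned data;
} line_t;

/* Write-through, no-write-allocate: memory is always updated; a write
 * miss does not bring the block into the cache. */
static void write_through(line_t *l, unsigned tag, unsigned value, unsigned *memory) {
    *memory = value;                    /* always write memory            */
    if (l->valid && l->tag == tag)      /* update the cache only on a hit */
        l->data = value;
}

/* Write-back, write-allocate: a write miss first writes back a dirty
 * victim and fetches the block; then only the cache is written and the
 * dirty bit is set, so memory is updated on eviction. */
static void write_back(line_t *l, unsigned tag, unsigned value, unsigned *memory) {
    if (!(l->valid && l->tag == tag)) { /* write miss: allocate         */
        if (l->valid && l->dirty)
            *memory = l->data;          /* write back the dirty victim  */
        l->valid = true;
        l->dirty = false;
        l->tag   = tag;
        l->data  = *memory;             /* fetch on write               */
    }
    l->data  = value;                   /* write the cache only         */
    l->dirty = true;
}

int main(void) {
    unsigned mem = 0;
    line_t a = {0}, b = {0};

    write_through(&a, 0x1, 42, &mem);   /* write miss                   */
    printf("write-through: memory=%u, block valid=%d\n", mem, a.valid);

    mem = 0;
    write_back(&b, 0x1, 42, &mem);      /* write miss                   */
    printf("write-back:    memory=%u, block data=%u, dirty=%d\n", mem, b.data, b.dirty);
    return 0;
}
```

On a write miss the first version updates only memory (higher traffic, no allocation), while the second allocates the block and defers the memory update until the dirty block is evicted.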

Posted: 16/12/2021, 13:04
