
Advanced Computer Architecture - Lecture 26: Memory hierarchy design


DOCUMENT INFORMATION

Pages: 44
Size: 1.32 MB

Contents

Advanced Computer Architecture - Lecture 26: Memory hierarchy design. This lecture covers the following: the concept of caching and the principle of locality; the concept of cache memory; cache addressing techniques; RAM vs. cache transactions; temporal locality; spatial locality; and more.

CS 704 Advanced Computer Architecture
Lecture 26: Memory Hierarchy Design (Concept of Caching and Principle of Locality)
Prof. Dr. M. Ashraf Chughtai
MAC/VU - Advanced Computer Architecture, Lecture 26: Memory Hierarchy (2)

Today's Topics
- Recap: storage trends and the memory hierarchy
- Concept of cache memory
- Principle of locality
- Cache addressing techniques
- RAM vs. cache transactions
- Summary

Recap: Storage Devices
Design features of semiconductor memories (SRAM and DRAM) and of magnetic disk storage.

Recap: Speed and Cost per Byte
- DRAM is slow but cheap relative to SRAM; it serves as the processor's main memory, holding a moderately large amount of data and instructions.
- Disk storage is the slowest and cheapest; it serves as secondary storage, holding the bulk of data and instructions.

Recap: CPU-Memory Access Time
The gap between the speeds of DRAM and disk and the speed of the processor, as compared to that of SRAM, is widening very fast with time.

[Chart: disk seek time, DRAM access time, SRAM access time, and CPU cycle time plotted on a logarithmic scale (ns) against the years 1980-2000, showing the widening CPU-memory gap.]

Memory Hierarchy Principles
The speed and cost characteristics of these devices complement one another, so memory is organized in a hierarchy based on:
- the concept of caching; and
- the principle of locality.

1: Concept of Caching
A cache is a staging area, or temporary place, used to:
- store a frequently used subset of the data or instructions from the relatively cheaper, larger, and slower memory; and
- avoid having to go to the main memory every time this information is needed.

Caching and the Memory Hierarchy
Memory devices of
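The principle of locality can be made concrete with a small sketch. This toy example (not from the lecture; block size and addresses are assumptions) shows spatial locality: a loop that sums an array touches consecutive addresses, so many accesses fall into the same cache block and only the first touch of each block can miss.

```python
# Toy illustration of spatial locality (assumed: 32-byte blocks,
# 4-byte elements, arbitrary base address 0x1000).

BLOCK_SIZE = 32  # bytes per cache block

def access_trace(n_elems, elem_size=4, base=0x1000):
    """Byte addresses touched by a simple sequential summation loop."""
    return [base + i * elem_size for i in range(n_elems)]

def compulsory_misses(trace, block_size=BLOCK_SIZE):
    """Count the first touch of each block -- the unavoidable misses."""
    return len({addr // block_size for addr in trace})

trace = access_trace(16)                 # 16 four-byte elements = 64 bytes
print(compulsory_misses(trace), "misses for", len(trace), "accesses")
# -> 2 misses for 16 accesses: spatial locality at work
```

Temporal locality is the complementary effect: the loop's running total is reused on every iteration, so it stays in the fastest level.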
different types are used at each level k of the hierarchy: the faster, smaller device at level k serves as a cache for the larger, slower device at level k+1. Programs tend to access the data or instructions at level k more often than they access the data at level k+1.

Caching and the Memory Hierarchy (continued)
Storage at level k+1 can be slower, but larger and cheaper per bit. As a result, a large pool of memory that costs as much as the cheap storage at the highest-numbered level (near the bottom of the hierarchy) serves data or instructions at the rate of the fast storage at the lowest-numbered level (near the top).

Memory Hierarchy Terminology
[Diagram: the processor exchanges data with the upper-level memory (holding block X), which in turn exchanges blocks with the lower-level memory (holding block Y).]

Hit: the data the processor wants to access appears in some block in the upper level (example: block X).
- Hit rate: the fraction of memory accesses that are found in the upper level (i.e., hits).
- Hit time: the time to access the upper level, which consists of (i) the RAM access time and (ii) the time to determine whether the access is a hit or a miss.

Miss: the data needed by the processor is not found in the upper level and has to be retrieved from a block in the lower level (block Y).
- Miss rate = 1 - hit rate.
- Miss penalty: the sum of the time (i) to replace a block in the upper level and (ii) to deliver the block to the processor.

Recommendation: the hit time must be much, much smaller than the miss penalty; otherwise there is no need for a memory hierarchy.

Cache Hit
The CPU needs object d, which is stored in some block b, say block 14, of the level k+1 memory and in the corresponding block of the cache at level k.
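The terms hit time, miss rate, and miss penalty combine into the standard average-memory-access-time formula. The formula itself is not spelled out on these slides, and the numbers below are purely illustrative, but it makes the recommendation concrete: the larger the miss penalty, the more a small miss rate hurts.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time: every access pays the hit time, and a
    fraction `miss_rate` of accesses additionally pays the miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical numbers: 1 ns hit time, 5% miss rate, 100 ns miss penalty.
print(amat(1.0, 0.05, 100.0))   # about 6 ns on average

# Miss rate is just 1 - hit rate:
hit_rate = 0.95
miss_rate = 1 - hit_rate
```

With these assumed numbers, a 100 ns penalty turns a 5% miss rate into a 6x slowdown over the 1 ns hit time, which is why the hit time must be much smaller than the miss penalty.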
[Diagram: the CPU requests block 14; the level k cache holds a copy of block 14, while the level k+1 memory holds blocks 10-15.]
Cache hit: the program finds block b (14) in the cache at level k, and object d is transferred to the CPU.

Cache Miss
The program needs object A, which is stored in some block c, say block 12, at level k+1.
[Diagram: the CPU requests block 12; it is not in the level k cache, so the cache in turn requests it from the level k+1 memory, which holds blocks 10-15.]
Cache miss: block c (block 12 from level k+1) is not at level k. Hence, the level k cache must fetch it from level k+1 and then transfer object A to the CPU.

Placement and Replacement Policies
If the level k cache is full, then some current block must be replaced (evicted). Which one is the "victim"? It depends upon:
- the cache design, which defines the relationship between cache addresses and the higher-level memory addresses;
- the placement policy, which determines where the new block can go; and
- the replacement policy, which defines which block should be evicted.
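One common replacement policy is least-recently-used (LRU). The lecture names the replacement-policy question but does not prescribe LRU here; this is a hedged sketch of how a small fully associative cache might pick its victim, with a made-up three-block capacity.

```python
# Sketch of LRU replacement for a tiny fully associative cache
# (assumed capacity of 3 blocks; not from the lecture).
from collections import OrderedDict

def lru_trace(refs, capacity=3):
    cache = OrderedDict()             # insertion/refresh order = recency
    events = []
    for block in refs:
        if block in cache:
            cache.move_to_end(block)  # refresh recency on a hit
            events.append(("hit", block))
        else:
            if len(cache) >= capacity:
                victim, _ = cache.popitem(last=False)  # evict the LRU block
                events.append(("evict", victim))
            cache[block] = True
            events.append(("miss", block))
    return events

# Block 1 is re-referenced, so block 2 becomes the least recently used
# and is the victim when block 4 arrives:
print(lru_trace([1, 2, 3, 1, 4]))
```

The design choice here is that recency order doubles as the eviction order, which is exactly why an ordered map is a natural fit for modeling LRU.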
Types of Misses
- Cold (compulsory) miss: occurs when the cache is empty, at the beginning of cache access.
- Capacity miss: occurs when the set of active cache blocks (the working set) is larger than the cache.
- Conflict miss: occurs when the level k cache is large enough, but multiple data objects all map to the same level k block.

Conflict Miss: Example (continued)
If the placement policy is based on direct addressing, then block n at level k+1 must be placed in block (n mod 4) at level k. In this case, referencing blocks 0, 8, 0, 8, 0, 8, ... would miss every time: since 0 mod 4 = 0 and 8 mod 4 = 0, both blocks 0 and 8 of level k+1 are placed at location 0 at level k.

Cache Design
We have observed that more than one block from the level k+1 memory (say the main memory), having N blocks, may be placed at the same location (given by n mod M) in the level k memory (say the cache) having M blocks. Hence, a tag must be associated with each block in the level k (cache) memory to identify its position in the level k+1 (main) memory.

Direct Mapping Example
A 16 MB main memory has a 24-bit address bus and is organized in 32-bit (4-byte) blocks. A 16K-word (64 KB) cache requires a 16-bit address and an 8-bit tag.

Direct Mapping Address Structure
Tag (s-r): 8 bits | Line or slot (r): 14 bits | Word (w): 2 bits
- 24-bit address
- 2-bit word identifier (4-byte block)
- 22-bit block identifier for the main memory
- 8-bit tag (= 22 - 14)
- 14-bit slot, line, or index value for the cache

No two blocks in the same line have the same tag field, so the cache is checked by finding the line and comparing the tag contents.

Direct Mapping Cache Organization
[Diagram: direct-mapped cache organization.]
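The conflict-miss example above can be reproduced with a minimal simulation. This is a sketch under the slide's assumptions (a 4-line direct-mapped cache, accesses at block granularity); the function name is my own.

```python
# Minimal direct-mapped cache simulation: block n maps to line n mod n_lines.

def simulate_direct_mapped(refs, n_lines=4):
    lines = [None] * n_lines          # which block each cache line holds
    hits = misses = 0
    for block in refs:
        line = block % n_lines        # direct-mapped placement policy
        if lines[line] == block:
            hits += 1
        else:
            misses += 1               # cold or conflict miss
            lines[line] = block       # evict whatever was there
    return hits, misses

# Blocks 0 and 8 both map to line 0, so they keep evicting each other:
print(simulate_direct_mapped([0, 8, 0, 8, 0, 8]))  # (0, 6): a miss every time
```

Note the pathology: the cache has four lines but only one is ever used, so every reference misses even though the working set is just two blocks. This is exactly what distinguishes a conflict miss from a capacity miss.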
Cache Design: Another Example
Consider another example with realistic numbers. Assume we have a 1 KB direct-mapped cache with a block size of 32 bytes; in other words, each block associated with a cache tag holds 32 bytes.

[Diagram: the cache data array organized as 32-byte rows (byte 0 through byte 31 in row 0, byte 32 through byte 63 in row 1, and so on up to byte 1023), each row carrying a valid bit, a 22-bit cache tag, and a line number (index).]

Address Translation: Direct-Mapped Cache
Assume a level k+1 main memory of 4 GB, a block size of 32 bytes, and a level k cache of 1 KB.

[Diagram: a 32-bit address split into a cache tag (stored as part of the cache "state"), a cache index (example: 0x01), and a byte select (example: 0x00).]

Cache Design (continued)
With a block size of 32 bytes, the least significant 5 bits of the address are used as the byte select within the cache block. Since the cache size is 1 KB, the upper 32 - 10 = 22 bits of the address are stored as the cache tag. The rest of the address bits in the middle, bits 5 through 9, are used as the cache index to select the proper cache block entry.

Hierarchy List: register file, L1 cache, L2 cache, main memory, disk cache, disk, optical, tape.
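The address split described above is easy to express in code. This sketch follows the slide's parameters (32-bit address, 1 KB direct-mapped cache, 32-byte blocks, giving 5 offset bits, 5 index bits, and 22 tag bits); the function name is my own.

```python
# Split a 32-bit address for a 1 KB direct-mapped cache with 32-byte blocks.

BLOCK_BITS = 5    # 32-byte block      -> bits 0-4 are the byte select
INDEX_BITS = 5    # 1 KB / 32 B = 32 lines -> bits 5-9 are the cache index
                  # remaining 22 bits (10-31) are the tag

def split_address(addr):
    byte_select = addr & ((1 << BLOCK_BITS) - 1)
    index = (addr >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (BLOCK_BITS + INDEX_BITS)
    return tag, index, byte_select

# Address 0x20 lands at index 0x01, byte select 0x00, tag 0 --
# matching the sample values shown in the diagram:
print(split_address(0x20))  # (0, 1, 0)
```

On a lookup, the index selects the cache line, the stored tag is compared against the address tag (together with the valid bit), and the byte select picks the byte within the 32-byte block.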

Posted: 05/07/2022, 11:53
