Cache Optimizations

Joel Emer
Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Based on material prepared by Krste Asanovic and Arvind (6.823 Lecture 8, October 5, 2005)

CPU-Cache Interaction (5-stage pipeline)

[Figure: a 5-stage pipeline with a primary instruction cache feeding instruction fetch and a primary data cache in the memory stage. On a data cache miss the entire CPU stalls while the cache is refilled with data from the lower levels of the memory hierarchy.]

What about an instruction miss, or writes to the i-stream?

Write Performance

[Figure: direct-mapped cache write path. The address splits into a tag (t bits), an index (k bits), and a block offset (b bits); the index selects one of 2^k lines, the stored tag is compared against the address tag, and a hit (HIT) enables the write (WE) of the selected data word or byte.]

Reducing Write Hit Time

Problem: writes take two cycles in the memory stage, one cycle for the tag check plus one cycle for the data write if it hits.

Solutions:
• Design a data RAM that can perform a read and a write in one cycle, restoring the old value after a tag miss
• CAM-tag caches: the word line is enabled only on a hit
• Pipelined writes: hold the write data for a store in a single buffer ahead of the cache, and write the cache data during the next store's tag check

Pipelining Cache Writes

[Figure: the address and store data arrive from the CPU; delayed-write address and data registers hold the previous store. Data from a store hit is written into the data portion of the cache during the tag access of the subsequent store.]

The resulting write pipeline: I-Fetch (instruction memory), Decode/Reg Read (register file), Address Calc (ALU), Tag Read (data memory), Mem Data Write (data memory).

What hazard has been introduced in this pipeline?

Write Policy

• Cache hit:
 – Write-through: write both the cache and memory
  • Generally higher traffic, but simplifies cache coherence
 – Write-back: write the cache only (memory is written only when the entry is evicted)
  • A dirty bit per block can further reduce the traffic
• Cache miss:
 – No-write-allocate: write only to main memory
 – Write-allocate (a.k.a. fetch-on-write): fetch the block into the cache
• Common combinations:
 – Write-through with no-write-allocate
 – Write-back with write-allocate (a code sketch of this combination appears below)

Average Cache Read Latency

α is the HIT RATIO: the fraction of references that hit in the cache. 1 - α is the MISS RATIO: the remaining references. Let t_c be the cache access time and t_m the main-memory access time.

Average access time for serial search (probe the cache first; go to memory only on a miss):

 t_avg = t_c + (1 - α) · t_m

Average access time for parallel search (start the memory access in parallel with the cache probe):

 t_avg = α · t_c + (1 - α) · t_m

For which type of cache is t_c smallest?

Improving Cache Performance

Average memory access time = Hit time + Miss rate × Miss penalty

To improve performance:
• reduce the miss rate (e.g., a larger cache)
• reduce the miss penalty (e.g., an L2 cache)
• reduce the hit time

What is the simplest design strategy? Design the largest primary cache that does not slow down the clock or add pipeline stages (though the decisions are more complex with out-of-order or highly pipelined CPUs).
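To make these formulas concrete, here is a small C sketch that evaluates the serial and parallel forms together with the AMAT expression. The sample values (α = 0.95, t_c = 1 cycle, t_m = 50 cycles) are invented for illustration; they are not from the lecture.

```c
#include <stdio.h>

/* Serial search: probe the cache first; go to memory only on a miss. */
static double serial_avg(double alpha, double t_c, double t_m) {
    return t_c + (1.0 - alpha) * t_m;
}

/* Parallel search: the memory access starts alongside the cache probe,
 * so a hit costs t_c and a miss costs the full t_m. */
static double parallel_avg(double alpha, double t_c, double t_m) {
    return alpha * t_c + (1.0 - alpha) * t_m;
}

/* AMAT form from the slides: hit time + miss rate * miss penalty. */
static double amat(double hit_time, double miss_rate, double miss_penalty) {
    return hit_time + miss_rate * miss_penalty;
}

int main(void) {
    double alpha = 0.95;  /* hit ratio (assumed) */
    double t_c   = 1.0;   /* cache access time in cycles (assumed) */
    double t_m   = 50.0;  /* memory access time in cycles (assumed) */

    printf("serial:   %.2f cycles\n", serial_avg(alpha, t_c, t_m));
    printf("parallel: %.2f cycles\n", parallel_avg(alpha, t_c, t_m));
    printf("AMAT:     %.2f cycles\n", amat(t_c, 1.0 - alpha, t_m));
    return 0;
}
```

Note that the serial form and the AMAT form coincide: the hit time is paid on every access, and the miss penalty on the (1 - α) fraction of accesses that miss.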
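Returning to the write-policy combinations above, the sketch below walks a store through write-back with write-allocate for a direct-mapped cache. The layout, sizes, and the mem_read_block/mem_write_block stubs are hypothetical stand-ins for the memory system, not a real design.

```c
#include <stdint.h>
#include <stdbool.h>

#define NLINES 256                 /* 2^k lines (assumed) */
#define WORDS  8                   /* 8-word blocks (assumed) */

typedef struct {
    bool     valid;
    bool     dirty;                /* dirty bit per block cuts writeback traffic */
    uint32_t tag;
    uint32_t data[WORDS];
} Line;

static Line cache[NLINES];

/* Stubs standing in for the memory system. */
static void mem_write_block(uint32_t tag, uint32_t index, const uint32_t *d) {
    (void)tag; (void)index; (void)d;           /* would write the block to DRAM */
}
static void mem_read_block(uint32_t tag, uint32_t index, uint32_t *d) {
    (void)tag; (void)index; (void)d;           /* would fetch the block from DRAM */
}

void store_word(uint32_t tag, uint32_t index, uint32_t offset, uint32_t val) {
    Line *l = &cache[index];

    if (!(l->valid && l->tag == tag)) {        /* write miss */
        if (l->valid && l->dirty)              /* evict: write back only if dirty */
            mem_write_block(l->tag, index, l->data);
        mem_read_block(tag, index, l->data);   /* write-allocate: fetch on write */
        l->valid = true;
        l->tag   = tag;
        l->dirty = false;
    }
    l->data[offset] = val;                     /* write the cache only; memory */
    l->dirty = true;                           /* is updated at eviction time  */
}
```

Under the other common combination, write-through with no-write-allocate, the miss path would instead send the word straight to memory and leave the cache untouched.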
Causes for Cache Misses

• Compulsory: first reference to a block, a.k.a. cold-start misses; misses that would occur even with an infinite cache
• Capacity: the cache is too small to hold all the data needed by the program; misses that would occur even under a perfect placement and replacement policy
• Conflict: misses that occur because of collisions due to the block-placement strategy; misses that would not occur with full associativity

(A toy classifier for these three categories appears later in these notes.)

Effect of Cache Parameters on Performance

• Larger cache size
 + reduces capacity and conflict misses
 - hit time will increase
• Higher associativity
 + reduces conflict misses (up to around 4-8 way)
 - may increase access time
• Larger block size (see the next section)

Block Size and Spatial Locality

The block is the unit of transfer between the cache and memory.

[Figure: a 4-word block (Word0 to Word3) with a b = 2 offset; the CPU address splits into a block address (32 - b bits) and an offset (b bits); 2^b = block size, a.k.a. line size, in bytes.]

A larger block size has distinct hardware advantages:
• less tag overhead
• exploits fast burst transfers from DRAM
• exploits fast burst transfers over wide busses

What are the disadvantages of increasing block size? (A sketch of the address split appears below.)

Block-level Optimizations

• Tags are too large, i.e., too much overhead
 – Simple solution: larger blocks, but the miss penalty could be large
• Sub-block placement (a.k.a. sector cache)
 – A valid bit is added to units smaller than the full block, called sub-blocks
 – Only read a sub-block on a miss
 – If a tag matches, is the word in the cache?

Example tag store with four valid bits per line, one per sub-block:

 Tag  Sub-block valid bits
 100  1 1 1 1
 300  1 1 0 0
 204  0 1 0 1

Set-Associative RAM-Tag Cache

[Figure: a two-way RAM-tag array, each way holding tag, status, and data, with one comparator (=?) per way; the address splits into tag, index, and offset.]

• Not energy-efficient: a tag and a data word are read from every way
• Two-phase approach: first read the tags, then read data only from the selected way
 – More energy-efficient
 – Doubles the latency: a problem in the L1, but OK for L2 and above. Why?

Highly-Associative CAM-Tag Caches

• For high associativity (e.g., 32-way), use a content-addressable memory (CAM) for the tags (as in the Intel XScale)
• Overhead: a tag + comparator bit takes 2-4x the area of a plain RAM-tag bit

[Figure: the address splits into tag (t bits), set index (i bits), and offset (b bits); each set holds a row of tag/data blocks, each tag with its own comparator. Only one set is enabled, and only the hit data is accessed, which saves energy.]

Way-Predicting Caches (MIPS R10000 L2 cache)

• Use the processor address to index into a way-prediction table
• Look in the predicted way at the given index, then:
 – HIT: return a copy of the data from the cache
 – MISS: look in the other way
  • SLOW HIT: change the entry in the prediction table
  • MISS: read the block of data from the next level of cache

Way-Predicting Instruction Cache (Alpha 21264-like)

[Figure: the PC, a +4 incrementer, and jump-target control feed the primary instruction cache; the next fetch uses a predicted way, with separate predictions for the sequential way and the branch-target way.]
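The lookup flow just described can be sketched in software for a 2-way cache. The names here (pred_table, next_level_fill) are hypothetical; the point is the three outcomes, a fast hit in the predicted way, a slow hit in the other way that retrains the predictor, and a true miss that refills from the next level.

```c
#include <stdint.h>
#include <stdbool.h>

#define NSETS 128

typedef struct { bool valid; uint32_t tag; uint32_t data; } Way;

static Way     ways[NSETS][2];
static uint8_t pred_table[NSETS];        /* predicted way per index */

static uint32_t next_level_fill(uint32_t tag, uint32_t index, int *way_out);

uint32_t read_word(uint32_t tag, uint32_t index) {
    int p = pred_table[index];

    /* 1. Probe only the predicted way (fast hit: one tag compare). */
    if (ways[index][p].valid && ways[index][p].tag == tag)
        return ways[index][p].data;

    /* 2. Mispredict: probe the other way (slow hit) and retrain the table. */
    int q = 1 - p;
    if (ways[index][q].valid && ways[index][q].tag == tag) {
        pred_table[index] = (uint8_t)q;
        return ways[index][q].data;
    }

    /* 3. True miss: refill from the next level and predict the filled way. */
    int w;
    uint32_t v = next_level_fill(tag, index, &w);
    pred_table[index] = (uint8_t)w;
    return v;
}

/* Stub standing in for an L2/memory refill; always fills way 0 here. */
static uint32_t next_level_fill(uint32_t tag, uint32_t index, int *way_out) {
    ways[index][0] = (Way){ true, tag, 0 };
    *way_out = 0;
    return ways[index][0].data;
}
```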
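Looking back at the sub-block question above, a hit needs more than a tag match: the addressed sub-block's valid bit must also be set (in the example table, tag 300 matches but only its first two sub-blocks are present). The structure below is a hypothetical sketch of that check.

```c
#include <stdint.h>
#include <stdbool.h>

#define SUBBLOCKS 4

typedef struct {
    uint32_t tag;
    bool     valid[SUBBLOCKS];   /* one valid bit per sub-block */
} SectorLine;

/* A word is present only if the tag matches AND its sub-block is valid. */
bool subblock_hit(const SectorLine *l, uint32_t tag, unsigned sub) {
    return l->tag == tag && l->valid[sub];
}
```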
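Going back to the block-size slide, the tag/index/offset split can be shown directly. The parameters (b = 5, so 32-byte blocks, and k = 8, so 256 lines) are assumptions for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define B 5   /* 2^5 = 32-byte blocks (assumed) */
#define K 8   /* 2^8 = 256 lines (assumed) */

int main(void) {
    uint32_t addr   = 0x12345678;                      /* arbitrary example */
    uint32_t offset = addr & ((1u << B) - 1);          /* low b bits */
    uint32_t index  = (addr >> B) & ((1u << K) - 1);   /* next k bits */
    uint32_t tag    = addr >> (B + K);                 /* remaining t bits */

    /* Doubling the block size adds one offset bit and, at fixed capacity,
     * halves the number of lines, so total tag storage shrinks. */
    printf("tag=0x%x index=0x%x offset=0x%x\n",
           (unsigned)tag, (unsigned)index, (unsigned)offset);
    return 0;
}
```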
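Finally, returning to the three C's at the start of this half: the toy classifier below runs a short block-address trace against a tiny direct-mapped cache and labels each miss using the definitions above; a miss is compulsory if the block was never referenced before, capacity if a fully-associative LRU cache of the same total size would also have missed, and conflict otherwise. The cache size and trace are invented for illustration.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define NLINES 4                     /* tiny 4-block cache (assumed) */

static uint32_t dm_tag[NLINES];      /* direct-mapped contents (block addrs) */
static int      dm_valid[NLINES];
static uint32_t fa[NLINES];          /* fully-assoc LRU stack, MRU first */
static int      fa_used = 0;
static uint32_t seen[1024];          /* every block ever touched */
static int      nseen = 0;

/* Probe the fully associative LRU cache; returns 1 on hit, updates LRU. */
static int fa_access(uint32_t blk) {
    for (int i = 0; i < fa_used; i++)
        if (fa[i] == blk) {
            memmove(&fa[1], &fa[0], i * sizeof fa[0]);
            fa[0] = blk;
            return 1;
        }
    if (fa_used < NLINES) fa_used++;
    memmove(&fa[1], &fa[0], (fa_used - 1) * sizeof fa[0]);
    fa[0] = blk;
    return 0;
}

int main(void) {
    /* Blocks 0 and 4 collide in set 0 of the direct-mapped cache. */
    uint32_t trace[] = { 0, 4, 0, 4, 1, 2, 3, 0 };

    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        uint32_t blk = trace[i], set = blk % NLINES;
        int dm_hit  = dm_valid[set] && dm_tag[set] == blk;
        int fa_hit  = fa_access(blk);
        int new_blk = 1;
        for (int j = 0; j < nseen; j++)
            if (seen[j] == blk) new_blk = 0;
        if (new_blk) seen[nseen++] = blk;

        if (dm_hit)       printf("blk %u: hit\n", (unsigned)blk);
        else if (new_blk) printf("blk %u: compulsory miss\n", (unsigned)blk);
        else if (!fa_hit) printf("blk %u: capacity miss\n", (unsigned)blk);
        else              printf("blk %u: conflict miss\n", (unsigned)blk);

        dm_tag[set] = blk;
        dm_valid[set] = 1;
    }
    return 0;
}
```

On this trace the repeated 0/4 references show up as conflict misses (full associativity would have kept both), while the final reference to block 0 is a capacity miss: by then four other blocks have pushed it out of even the fully-associative cache.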
Victim Caches (HP 7200)

[Figure: the CPU and register file sit above an L1 data cache; a small, 4-block, fully-associative victim cache sits between the L1 and the unified L2 cache. Data evicted from the L1 goes into the victim cache; hit data comes back from the victim cache on an L1 miss; and data evicted from the victim cache goes... where?]

A victim cache is a small associative backup cache, added to a direct-mapped cache, which holds recently evicted lines:
• First look up in the direct-mapped cache
• If that misses, look in the victim cache
• If the victim cache hits, swap the hit line with the line now being evicted from L1
• If the victim cache misses, the L1 victim goes to the VC, and the VC victim goes to... ?

The result is the fast hit time of a direct-mapped cache, but with reduced conflict misses.
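The four steps in the sketch below mirror the four bullets above. The 4-entry size follows the slide's figure; the naive replacement choice and the l2_fill stub are assumptions, not the HP 7200's actual policy.

```c
#include <stdint.h>
#include <stdbool.h>

#define L1_LINES 256                /* direct-mapped L1 (assumed size) */
#define VC_LINES 4                  /* small fully-associative victim cache */

typedef struct { bool valid; uint32_t blk; uint32_t data; } Line;

static Line l1[L1_LINES];
static Line vc[VC_LINES];

/* Stub for an L2 refill; it also receives the displaced VC victim. */
static uint32_t l2_fill(uint32_t blk, Line vc_victim) {
    (void)vc_victim;                /* the VC victim would go to L2/memory */
    return blk;                     /* fake data */
}

uint32_t read_block(uint32_t blk) {
    uint32_t set = blk % L1_LINES;

    /* 1. First look up in the direct-mapped L1 (fast hit path). */
    if (l1[set].valid && l1[set].blk == blk)
        return l1[set].data;

    /* 2. On an L1 miss, search all victim-cache entries. */
    for (int i = 0; i < VC_LINES; i++)
        if (vc[i].valid && vc[i].blk == blk) {
            Line tmp = vc[i];       /* 3. VC hit: swap with the L1 victim. */
            vc[i]   = l1[set];
            l1[set] = tmp;
            return tmp.data;
        }

    /* 4. Miss in both: L1 victim -> VC; the displaced VC entry -> L2. */
    Line displaced = vc[0];         /* naive slot choice (assumed policy) */
    vc[0]   = l1[set];
    l1[set] = (Line){ true, blk, l2_fill(blk, displaced) };
    return l1[set].data;
}
```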