Cache Optimizations

Joel Emer
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Based on material prepared by Krste Asanovic and Arvind
6.823, Lecture 8, October 5, 2005


CPU-Cache Interaction (5-stage pipeline)

[Figure: five-stage pipeline datapath; the fetch stage reads the primary instruction cache and the memory stage reads/writes the primary data cache, with refill data arriving from the lower levels of the memory hierarchy]

Stall the entire CPU on a data cache miss.

What about an instruction miss, or writes to the instruction stream?


Write Performance

[Figure: direct-mapped cache with a t-bit tag, k-bit index, and b-bit block offset; 2^k lines; the tag comparison produces HIT, which gates the write enable (WE) of the data array]


Reducing Write Hit Time

Problem: writes take two cycles in the memory stage, one cycle for the tag check plus one cycle for the data write if hit.

Solutions:
• Design a data RAM that can perform read and write in one cycle; restore the old value after a tag miss
• CAM-tag caches: word line only enabled if hit
• Pipelined writes: hold write data for a store in a single buffer ahead of the cache; write the cache data during the next store's tag check


Pipelining Cache Writes

[Figure: address and store data from the CPU; the store address and data are held in delayed-write registers; tags and data are accessed in parallel, and a mux selects between delayed write data and load data]

Data from a store hit is written into the data portion of the cache during the tag access of the subsequent store.


Write Pipeline

[Figure: pipeline stages IFetch, Decode/Reg Read, Address Calc (ALU), Tag Read, Data Read/Write, Reg Write]

What hazard has been introduced in this pipeline?


Write Policy

• Cache hit:
  – write-through: write both cache and memory
    • generally higher traffic, but simplifies cache coherence
  – write-back: write cache only (memory is written only when the entry is evicted)
    • a dirty bit per block can further reduce the traffic
• Cache miss:
  – no write allocate: only write to main memory
  – write allocate (aka fetch on write): fetch into cache
• Common combinations:
  – write-through and no write allocate
  – write-back with write allocate


Average Cache Read Latency

α is the HIT RATIO: the fraction of references found in the cache.
1 − α is the MISS RATIO: the remaining references.

Average access time for serial search (access the cache first, then memory on a miss):
  t_avg = t_c + (1 − α) × t_m

Average access time for parallel search (access cache and memory concurrently):
  t_avg = α × t_c + (1 − α) × t_m

t_c is smallest for which type of cache?


Improving Cache Performance

Average memory access time = Hit time + Miss rate × Miss penalty

To improve performance:
• reduce the miss rate (e.g., larger cache)
• reduce the miss penalty (e.g., L2 cache)
• reduce the hit time

What is the simplest design strategy? The simplest strategy is to design the largest primary cache that does not slow down the clock or add pipeline stages (but design decisions are more complex with out-of-order or highly pipelined CPUs).
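To make the formula concrete, the following minimal C sketch applies the AMAT equation to a two-level hierarchy. All latencies and miss rates here are assumed, illustrative values, not figures from the lecture:

#include <stdio.h>

/* AMAT = hit_time + miss_rate * miss_penalty.
   With an L2 cache, the L1 miss penalty is itself an AMAT expression. */
int main(void) {
    /* Assumed, illustrative parameters (cycles and fractions): */
    double l1_hit = 1.0,  l1_miss_rate = 0.05;
    double l2_hit = 10.0, l2_local_miss_rate = 0.25;
    double mem_latency = 100.0;

    double l1_miss_penalty = l2_hit + l2_local_miss_rate * mem_latency;
    double amat = l1_hit + l1_miss_rate * l1_miss_penalty;

    printf("L1 miss penalty = %.1f cycles\n", l1_miss_penalty);
    printf("AMAT            = %.2f cycles\n", amat);
    return 0;
}

Note how each of the three improvement levers (miss rate, miss penalty, hit time) appears as a separate term in the computation.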
Causes for Cache Misses

• Compulsory: first reference to a block, a.k.a. cold-start misses
  – misses that would occur even with an infinite cache
• Capacity: cache is too small to hold all data needed by the program
  – misses that would occur even under a perfect replacement policy
• Conflict: misses that occur because of collisions due to the block-placement strategy
  – misses that would not occur with full associativity


Effect of Cache Parameters on Performance

• Larger cache size
  + reduces capacity and conflict misses
  − hit time will increase
• Higher associativity
  + reduces conflict misses (up to around 4-8 way)
  − may increase access time
• Larger block size (tradeoffs on the next slide)


Block Size and Spatial Locality

The block is the unit of transfer between the cache and memory.

[Figure: a 4-word block (Word0-Word3); the address is split into a block address (tag, 32−b bits) and a b-bit offset; 2^b = block size, a.k.a. line size, in bytes]

Larger block size has distinct hardware advantages:
• less tag overhead
• exploits fast burst transfers from DRAM
• exploits fast burst transfers over wide busses

What are the disadvantages of increasing block size?


Block-Level Optimizations

• Tags are too large, i.e., too much overhead
  – Simple solution: larger blocks, but the miss penalty could be large
• Sub-block placement (aka sector cache)
  – A valid bit is added to units smaller than the full block, called sub-blocks
  – Only read a sub-block on a miss
  – If a tag matches, is the word in the cache?

  Tag   Sub-block valid bits
  100   1 1 1 1
  300   1 1 0 0
  204   0 1 0 1


Set-Associative RAM-Tag Cache

[Figure: two-way set-associative cache; the index selects a Tag/Status/Data entry in each way, and both tags are compared against the address tag in parallel]

• Not energy-efficient
  – A tag and data word is read from every way
• Two-phase approach
  – First read the tags, then read data only from the selected way
  – More energy-efficient
  – Doubles latency in L1
  – OK for L2 and above. Why?


Highly-Associative CAM-Tag Caches

• For high associativity (e.g., 32-way), use a content-addressable memory (CAM) for the tags (Intel XScale)
• Overhead: a tag + comparator bit is 2-4x the area of a plain RAM-tag bit

[Figure: the address is split into tag, set index, and offset; each set holds CAM tags with a comparator per line; only one set is enabled, and only the hit data is accessed, which saves energy]


Way-Predicting Caches (MIPS R10000 L2 cache)

• Use the processor address to index into a way-prediction table
• Look in the predicted way at the given index, then:
  – HIT: return the copy of the data from the cache
  – MISS: look in the other way
    • HIT: slow hit (change the entry in the prediction table)
    • MISS: read the block of data from the next level of cache


Way-Predicting Instruction Cache (Alpha 21264-like)

[Figure: the PC indexes the primary instruction cache, which also supplies the predicted way for the next access; jump control selects between PC+4 (sequential way) and the branch-target way]


Five-minute break to stretch your legs


Victim Caches (HP 7200)

[Figure: the L1 data cache is backed by a small fully associative victim cache (4 blocks) in front of the unified L2; lines evicted from L1 go to the victim cache, and victim-cache hits return data to L1]

A victim cache is a small associative backup cache, added to a direct-mapped cache, that holds recently evicted lines:
• First look up in the direct-mapped cache
• If miss, look in the victim cache
• If hit in the victim cache, swap the hit line with the line now being evicted from L1
• If miss in the victim cache, the L1 victim moves to the VC, and the VC victim is evicted to the next level

Fast hit time of direct-mapped, but with reduced conflict misses (see the sketch below).
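The following minimal C sketch models the victim-cache lookup and swap just described. The geometry (64-set direct-mapped L1, 4-entry VC, 32B lines) and the FIFO replacement in the VC are assumed for illustration; only tags are modeled, not data:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define L1_SETS 64          /* direct-mapped: one line per set */
#define VC_ENTRIES 4
#define OFFSET_BITS 5       /* 32-byte lines */

typedef struct { bool valid; uint32_t tag; } Line;

static Line l1[L1_SETS];
static Line vc[VC_ENTRIES]; /* VC tag field holds the full block address */
static int vc_next = 0;     /* FIFO replacement pointer for the VC */

/* Returns true on a hit (in L1 or the victim cache). */
bool access_cache(uint32_t addr) {
    uint32_t block = addr >> OFFSET_BITS;
    uint32_t set = block % L1_SETS;
    uint32_t tag = block / L1_SETS;

    if (l1[set].valid && l1[set].tag == tag)
        return true;                      /* fast direct-mapped hit */

    for (int i = 0; i < VC_ENTRIES; i++) {
        if (vc[i].valid && vc[i].tag == block) {
            /* VC hit: swap the hit line with the line evicted from L1 */
            Line evicted = l1[set];
            l1[set].valid = true;
            l1[set].tag = tag;
            if (evicted.valid)
                vc[i].tag = evicted.tag * L1_SETS + set; /* full block addr */
            else
                vc[i].valid = false;
            return true;                  /* slow hit via the victim cache */
        }
    }

    /* Miss in both: L1 victim -> VC, fetch the new block into L1 */
    if (l1[set].valid) {
        vc[vc_next].valid = true;
        vc[vc_next].tag = l1[set].tag * L1_SETS + set;
        vc_next = (vc_next + 1) % VC_ENTRIES;
    }
    l1[set].valid = true;
    l1[set].tag = tag;
    return false;
}

int main(void) {
    /* Two addresses that conflict in the direct-mapped L1: */
    uint32_t a = 0x0000, b = L1_SETS * (1 << OFFSET_BITS);
    access_cache(a); access_cache(b);     /* both miss; a is now in the VC */
    printf("re-access a: %s\n", access_cache(a) ? "hit (via VC)" : "miss");
    return 0;
}

The re-access of address a hits in the VC rather than going to L2, which is exactly the conflict-miss case the victim cache is meant to absorb.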
Multilevel Caches

• A memory cannot be large and fast
• Increase the size of the cache at each level: CPU, L1, L2, DRAM

Local miss rate = misses in cache / accesses to that cache
Global miss rate = misses in cache / CPU memory accesses
Misses per instruction = misses in cache / number of instructions


Inclusion Policy

• Inclusive multilevel cache:
  – Inner cache holds copies of data in the outer cache
  – External accesses (from outside the CPU) need only check the outer cache
  – Most common case
• Exclusive multilevel caches:
  – Inner cache may hold data not in the outer cache
  – Swap lines between the inner and outer caches on a miss
  – Used in the Athlon, with a 64KB primary and a 256KB secondary cache

Why choose one type over the other?


Itanium-2 On-Chip Caches (Intel/HP, 2002)

• Level 1: 16KB, 4-way set-associative, 64B line, quad-port (2 load + 2 store), single-cycle latency
• Level 2: 256KB, 4-way set-associative, 128B line, quad-port (4 load or 4 store), five-cycle latency
• Level 3: 3MB, 12-way set-associative, 128B line, single 32B port, twelve-cycle latency

[Die photo removed due to copyright restrictions. To view the image, visit http://www-vlsi.stanford.edu/group/chipsmicroprobody.html]


Reducing Read Miss Penalty

[Figure: a write buffer sits between the L1 data cache and the unified L2; it holds evicted dirty lines for a write-back cache, or all writes for a write-through cache]

• The write buffer may hold the updated value of a location needed by a read miss
• Simple scheme: on a read miss, wait for the write buffer to go empty
• Faster scheme: check the write buffer addresses against the read miss address; if there is no match, allow the read miss to go ahead of the writes, else return the value in the write buffer


Prefetching

• Speculate on future instruction and data accesses and fetch them into the cache(s)
  – Instruction accesses are easier to predict than data accesses
• Varieties of prefetching:
  – Hardware prefetching
  – Software prefetching
  – Mixed schemes
• What types of misses does prefetching affect?


Issues in Prefetching

• Usefulness: it should produce hits
• Timeliness: not late and not too early
• Cache and bandwidth pollution

[Figure: prefetched data enters the unified L2 behind the L1 instruction and data caches]


Hardware Instruction Prefetching

• Instruction prefetch in the Alpha AXP 21064:
  – Fetch two blocks on a miss: the requested block and the next consecutive block
  – The requested block is placed in the cache, and the next block in an instruction stream buffer (4 blocks)

[Figure: on a miss, the unified L2 returns the requested block to the L1 instruction cache and the prefetched block to the stream buffer]


Hardware Data Prefetching

• Prefetch-on-miss:
  – Prefetch block b + 1 upon a miss on block b
• One-Block Lookahead (OBL) scheme:
  – Initiate a prefetch for block b + 1 when block b is accessed
  – Why is this different from doubling the block size?
  – Can be extended to N-block lookahead
• Strided prefetch (see the sketch below):
  – If there is a sequence of accesses to blocks b, b+N, b+2N, then prefetch b+3N, etc.
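Here is a minimal C sketch of the stride-detection logic behind a strided prefetcher. The single stream entry and the confirm-after-one-repeat policy are assumed simplifications; real designs typically use a reference prediction table with one such entry per load PC:

#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t last_block;  /* block address of the previous access */
    int32_t  stride;      /* last observed stride */
    int      confirmed;   /* same stride seen twice in a row? */
} StrideEntry;

/* Called on every demand access; returns the block to prefetch,
   or -1 if no prediction is confident yet. */
int64_t observe(StrideEntry *e, uint32_t block) {
    int32_t stride = (int32_t)(block - e->last_block);
    if (stride != 0 && stride == e->stride)
        e->confirmed = 1;        /* same stride again: predict b + N */
    else
        e->confirmed = 0;        /* new pattern: start over */
    e->stride = stride;
    e->last_block = block;
    return e->confirmed ? (int64_t)block + stride : -1;
}

int main(void) {
    StrideEntry e = {0, 0, 0};
    uint32_t accesses[] = {10, 14, 18, 22};   /* blocks b, b+N, ... (N = 4) */
    for (int i = 0; i < 4; i++) {
        int64_t p = observe(&e, accesses[i]);
        if (p >= 0)
            printf("access %u -> prefetch block %lld\n",
                   accesses[i], (long long)p);
    }
    return 0;
}

After two accesses establish the stride, each further access triggers a prefetch one stride ahead; a deeper lookahead would prefetch block b + kN to improve timeliness.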
Software Prefetching

for(i = 0; i < N; i++) {
  prefetch( &a[i + 1] );
  prefetch( &b[i + 1] );
  SUM = SUM + a[i] * b[i];
}

• What property do we require of the cache for prefetching to work?


Software Prefetching Issues

• Timing is the biggest issue, not predictability
  – If you prefetch very close to when the data is required, you might be too late
  – Prefetch too early, and you cause pollution
  – Estimate how long it will take for the data to come into L1, so that we can set P appropriately. Why is this hard to do?

for(i = 0; i < N; i++) {
  prefetch( &a[i + P] );
  prefetch( &b[i + P] );
  SUM = SUM + a[i] * b[i];
}

• Must also consider the cost of the prefetch instructions


Compiler Optimizations

• Restructuring code affects the data block access sequence
  – Group data accesses together to improve spatial locality
  – Reorder data accesses to improve temporal locality
• Prevent data from entering the cache
  – Useful for variables that will only be accessed once before being replaced
  – Needs a mechanism for software to tell the hardware not to cache data (instruction hints or page-table bits)
• Kill data that will never be used again
  – Streaming data exploits spatial locality but not temporal locality
  – Replace into dead cache locations


Loop Interchange

for(j = 0; j < N; j++)
  for(i = 0; i < M; i++)
    x[i][j] = 2 * x[i][j];

becomes

for(i = 0; i < M; i++)
  for(j = 0; j < N; j++)
    x[i][j] = 2 * x[i][j];

What type of locality does this improve?


Loop Fusion

for(i = 0; i < N; i++)
  for(j = 0; j < M; j++)
    a[i][j] = b[i][j] * c[i][j];

for(i = 0; i < N; i++)
  for(j = 0; j < M; j++)
    d[i][j] = a[i][j] * c[i][j];

becomes

for(i = 0; i < N; i++)
  for(j = 0; j < M; j++) {
    a[i][j] = b[i][j] * c[i][j];
    d[i][j] = a[i][j] * c[i][j];
  }

What type of locality does this improve?


Blocking

for(i = 0; i < N; i++)
  for(j = 0; j < N; j++) {
    r = 0;
    for(k = 0; k < N; k++)
      r = r + y[i][k] * z[k][j];
    x[i][j] = r;
  }

[Figure: access patterns for x (one row), y (one row), and z (one full column per element of x)]

Blocked version (a runnable version appears at the end of the Extras):

for(jj = 0; jj < N; jj = jj + B)
  for(kk = 0; kk < N; kk = kk + B)
    for(i = 0; i < N; i++)
      for(j = jj; j < min(jj + B, N); j++) {
        r = 0;
        for(k = kk; k < min(kk + B, N); k++)
          r = r + y[i][k] * z[k][j];
        x[i][j] = x[i][j] + r;
      }

What type of locality does this improve?


Thank you


Extras


Memory Hierarchy Example

• AlphaStation 600/5 desktop workstation
  – Alpha 21164, 333 MHz
  – On-chip L1 and L2 caches
  – L1 instruction cache: 8KB, direct-mapped, 32B lines, fetches four instructions/cycle (16B)
  – The instruction stream prefetches up to 4 cache lines ahead
  – L1 data cache: 8KB, direct-mapped, 32B lines, write-through, loads two 8B words or stores one 8B word per cycle (2-cycle latency)
  – Up to 21 outstanding loads, 6 x 32B lines of outstanding writes
  – L2 unified cache: 96KB, 3-way set-associative, 64B blocks / 32B sub-blocks, write-back, 16B/cycle bandwidth (7-cycle latency)
  – Off-chip L3 unified cache: 8MB, direct-mapped, 64B blocks, peak bandwidth 16B every 7 cycles (15-cycle latency)
  – DRAM: peak bandwidth 16B every 10 cycles (60-cycle latency)


Further Issues

There are several other factors that are intimately connected with cache design:
• Virtual memory and the associated address translation
• Multiprocessors and the associated memory model issues (cache coherence): stay tuned
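As a final extra, here is a complete, runnable version of the blocked matrix multiply from the Blocking slide. N, B, and the initialization values are illustrative choices; in practice B is picked so that a BxB tile of z plus a strip of y fits in the cache being targeted:

#include <stdio.h>

#define N 256
#define B 32
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Static arrays are zero-initialized, so x starts at 0 as required. */
static double x[N][N], y[N][N], z[N][N];

int main(void) {
    /* Initialize the inputs with arbitrary values. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            y[i][j] = (double)(i + j);
            z[i][j] = (double)(i - j);
        }

    /* Blocked loop nest: x = y * z, accumulated one BxB tile of z
       at a time so the tile stays resident in the cache. */
    for (int jj = 0; jj < N; jj += B)
        for (int kk = 0; kk < N; kk += B)
            for (int i = 0; i < N; i++)
                for (int j = jj; j < MIN(jj + B, N); j++) {
                    double r = 0.0;
                    for (int k = kk; k < MIN(kk + B, N); k++)
                        r += y[i][k] * z[k][j];
                    x[i][j] += r;
                }

    printf("x[N-1][N-1] = %f\n", x[N - 1][N - 1]);
    return 0;
}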