



Multilevel Memories

Joel Emer
Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Based on material prepared by Krste Asanovic and Arvind
6.823 Lecture 7, October 3, 2005

CPU-Memory Bottleneck

The performance of high-speed computers is usually limited by memory bandwidth and latency.
• Latency (time for a single access): memory access time >> processor cycle time.
• Bandwidth (number of accesses per unit time): if a fraction m of instructions access memory, there are 1 + m memory references per instruction, so sustaining CPI = 1 requires 1 + m memory references per cycle.

Core Memory

• Core memory was the first large-scale reliable main memory
  – invented by Forrester in the late 1940s at MIT for the Whirlwind project
• Bits stored as magnetization polarity on small ferrite cores threaded onto a two-dimensional grid of wires
• Coincident current pulses on the X and Y wires would write a cell and also sense its original state (destructive reads)
• Robust, non-volatile storage; used on Space Shuttle computers until recently
• Cores were threaded onto the wires by hand (25 billion a year at peak production)
• Core access time was about 1 µs
[Image removed due to copyright restrictions: DEC PDP-8/E core memory board, 4K words x 12 bits (1968)]

Semiconductor Memory, DRAM

• Semiconductor memory began to be competitive in the early 1970s
  – Intel was formed to exploit the market for semiconductor memory
• The first commercial DRAM was the Intel 1103
  – 1 Kbit of storage on a single chip
  – charge on a capacitor used to hold each value
• Semiconductor memory quickly replaced core in the 1970s

One-Transistor Dynamic RAM

A 1T DRAM cell consists of an access FET, gated by the word line, that connects the bit line to an explicit storage capacitor (a FET gate, trench, or stacked capacitor). One example structure is a TiN/Ta2O5/W capacitor: TiN top electrode at V_REF, Ta2O5 dielectric, poly-W bottom electrode.
[Image removed due to copyright restrictions]

Processor-DRAM Gap (latency)

[Plot from David Patterson, UC Berkeley: relative performance vs. year, 1980-2000. Processor performance grows about 60% per year ("Moore's Law") while DRAM latency improves about 7% per year, so the processor-memory performance gap grows about 50% per year.]
A four-issue superscalar could execute 800 instructions during one cache miss.

Little's Law

Throughput (T) = Number in Flight (N) / Latency (L)

Applied to the table of misses in flight between the CPU and memory:
Example: assume infinite memory bandwidth, 100 cycles per memory reference, and 1 + 0.2 memory references per instruction.
⇒ Table size = 1.2 x 100 = 120 entries, i.e., 120 independent memory operations in flight.
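Little's Law can be turned into a quick back-of-the-envelope check in code. The sketch below reproduces the example above (100-cycle memory latency, 1.2 memory references per instruction at CPI = 1); the function and variable names are illustrative, not from the lecture.

#include <stdio.h>

/* Little's Law: N (in flight) = T (throughput) * L (latency).
 * Here T is memory references per cycle and L is memory latency in cycles. */
static double misses_in_flight(double refs_per_instr, double instrs_per_cycle,
                               double mem_latency_cycles)
{
    double refs_per_cycle = refs_per_instr * instrs_per_cycle;  /* T */
    return refs_per_cycle * mem_latency_cycles;                 /* N = T * L */
}

int main(void)
{
    /* Slide example: 1 + 0.2 memory references per instruction,
     * CPI = 1, 100 cycles per memory reference. */
    double n = misses_in_flight(1.2, 1.0, 100.0);
    printf("Independent memory operations in flight: %.0f\n", n);  /* 120 */
    return 0;
}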
DRAM Architecture

[Diagram: a DRAM bank is a two-dimensional array of one-bit memory cells with word lines and bit lines. An N-bit row address feeds the row address decoder, which enables one of the word lines; sense amplifiers latch the selected row; an M-bit column address feeds the column decoder, which selects the data bits D. The full address is N + M bits.]
• Bits are stored in two-dimensional arrays on the chip
• Modern chips have around 4 logical banks on each chip
  – each logical bank is physically implemented as many smaller arrays

DRAM Operation

Three steps in a read/write access to a given bank:
• Row access (RAS)
  – decode the row address and enable the addressed row (often multiple Kb in a row)
  – bit lines share charge with the storage cells
  – the small change in voltage is detected by sense amplifiers, which latch the whole row of bits
  – the sense amplifiers drive the bit lines full rail to recharge the storage cells
• Column access (CAS)
  – decode the column address to select a small number of sense amplifier latches (4, 8, 16, or 32 bits depending on the DRAM package)
  – on a read, send the latched bits out to the chip pins
  – on a write, change the sense amplifier latches, which then charge the storage cells to the required value
  – multiple column accesses can be performed on the same row without another row access (burst mode)
• Precharge
  – charges the bit lines to a known value; required before the next row access
Each step has a latency of around 20 ns in modern DRAMs. Various DRAM standards (DDR, RDRAM) have different ways of encoding the signals for transmission to the DRAM, but all share the same core architecture.

Multilevel Memory

Strategy: hide latency using small, fast memories called caches. Caches are a mechanism to hide memory latency, based on the empirical observation that the patterns of memory references made by a processor are often highly predictable:

  PC    Instruction
   96   …
  100   loop: ADD  r2, r1, r1
  104         SUBI r3, r3, 1
  108         BNEZ r3, loop
  112   …

What is the pattern of instruction memory addresses?

Typical Memory Reference Patterns

[Figure: address vs. time. Instruction fetches follow a linear sequence that repeats over n loop iterations; stack accesses and data accesses form their own regular patterns.]

Common Predictable Patterns

Two predictable properties of memory references:
– Temporal locality: if a location is referenced, it is likely to be referenced again in the near future.
– Spatial locality: if a location is referenced, it is likely that locations near it will be referenced in the near future.

Caches

Caches exploit both types of predictability:
– They exploit temporal locality by remembering the contents of recently accessed locations.
– They exploit spatial locality by fetching blocks of data around recently accessed locations.
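Both kinds of locality are easy to see in ordinary code. In the sketch below (illustrative, not from the lecture), the accumulator sum is touched on every iteration (temporal locality), and walking the array in address order touches neighboring words that share a cache block (spatial locality); traversing the same array column by column strides through memory and defeats spatial locality.

#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int a[ROWS][COLS];

int main(void)
{
    long sum = 0;   /* 'sum' is reused every iteration: temporal locality */

    /* Row-major traversal: consecutive accesses hit consecutive addresses,
     * so several array elements share one cache block: spatial locality.  */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum += a[i][j];

    /* Column-major traversal of the same data strides by COLS*sizeof(int)
     * bytes per access, touching a new cache block almost every time.     */
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            sum += a[i][j];

    printf("%ld\n", sum);
    return 0;
}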
Memory Hierarchy

The CPU is served by a small, fast memory (register file, SRAM) holding frequently used data, which in turn is backed by a big, slow memory (DRAM). Call the CPU-to-fast-memory path A and the fast-memory-to-DRAM path B.
• size:      registers << SRAM << DRAM  — why?
• latency:   registers << SRAM << DRAM  — why?
• bandwidth: on-chip >> off-chip        — why?
On a data access:
  hit (data ∈ fast memory)  ⇒ low-latency access
  miss (data ∉ fast memory) ⇒ long-latency access (DRAM)
The fast memory is effective only if the bandwidth required at B is much less than the bandwidth required at A.

Management of Memory Hierarchy

• Small/fast storage, e.g., registers
  – address usually specified in the instruction
  – generally implemented directly as a register file
    • but hardware might do things behind software's back, e.g., stack management, register renaming
• Large/slower storage, e.g., memory
  – address usually computed from values in registers
  – generally implemented as a cache hierarchy
    • hardware decides what is kept in the fast memory
    • but software may provide "hints", e.g., don't-cache or prefetch

A Typical Memory Hierarchy, c. 2003

[Diagram: the CPU, with its multiported register file, has split primary L1 instruction and data caches (on-chip SRAM), backed by a large unified L2 cache (on-chip SRAM), which is backed by multiple interleaved main-memory banks (DRAM).]

Workstation Memory System (Apple PowerMac G5, 2003)

[Image removed due to copyright restrictions; see http://www.apple.com/powermac/pciexpress.html]
• Dual 2 GHz processors, each with a 64 KB I-cache, a 32 KB D-cache, and a 512 KB unified L2 cache
• 1 GHz, 2 x 32-bit processor bus, 16 GB/s
• North Bridge chip
• Up to 8 GB DRAM, 400 MHz, 128-bit bus, 6.4 GB/s
• AGP graphics card, 533 MHz, 32-bit bus, 2.1 GB/s
• PCI-X expansion, 133 MHz, 64-bit bus, 1 GB/s

(Five-minute break to stretch your legs)

Inside a Cache

[Diagram: the cache sits between the processor and main memory on the address and data paths. Each cache line holds an address tag (e.g., 100, 304, 6848) and a data block; the line tagged 100 holds copies of main-memory locations 100 and 101, one data byte each.]

Cache Algorithm (Read)

Look at the processor address and search the cache tags for a match. Then either:
• Found in cache, a.k.a. HIT: return the copy of the data from the cache.
• Not in cache, a.k.a. MISS: read the block of data from main memory, wait …, then return the data to the processor and update the cache.
Q: Which line do we replace?

Placement Policy

Memory blocks are numbered 0-31; the cache holds 8 blocks, organized either as 8 direct-mapped blocks, as 4 two-way sets, or as one fully associative pool. Where can memory block 12 be placed?
• Fully associative: anywhere in the cache
• (2-way) set associative: anywhere in set 0 (12 mod 4)
• Direct mapped: only into block 4 (12 mod 8)

Direct-Mapped Cache

[Diagram: the address is split into a tag (t bits), an index (k bits), and a block offset (b bits). The index selects one of 2^k cache lines, each holding a valid bit, a tag, and a data block. The stored tag is compared with the address tag; a match signals HIT, and the offset selects the data word or byte within the block.]

Direct-Map Address Selection

Higher-order vs. lower-order address bits: the index can instead be taken from the higher-order address bits (address split as index, tag, offset), with the same 2^k-line array, tag comparison, HIT signal, and word/byte selection.

2-Way Set-Associative Cache

[Diagram: the address is split into tag, index, and block offset. The index selects one set containing two lines (each with a valid bit, tag, and data block); both stored tags are compared with the address tag in parallel, a match in either way signals HIT, and the matching way supplies the data word or byte.]

Fully Associative Cache

[Diagram: there is no index field; the address is split into a tag and a block offset. Every line's tag is compared with the address tag in parallel; any match signals HIT and selects the data word or byte from that line.]

Replacement Policy

In an associative cache, which block from a set should be evicted when the set becomes full?
• Random
• Least Recently Used (LRU)
  – LRU cache state must be updated on every access
  – a true implementation is only feasible for small sets (2-way); a pseudo-LRU binary tree is often used for 4- to 8-way
• First In, First Out (FIFO), a.k.a. round-robin
  – used in highly associative caches
• Not Least Recently Used (NLRU)
  – FIFO with an exception for the most recently used block
Replacement is a second-order effect. Why?
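As a concrete illustration of the tag/index/offset split and the LRU policy discussed above, here is a minimal 2-way set-associative lookup sketch. All sizes and names (32-byte blocks, 128 sets, access_cache) are assumptions chosen for the example, not parameters from the lecture.

#include <stdint.h>
#include <stdbool.h>

#define OFFSET_BITS 5                  /* 32-byte blocks  (b = 5) */
#define INDEX_BITS  7                  /* 128 sets        (k = 7) */
#define NUM_SETS    (1u << INDEX_BITS)
#define NUM_WAYS    2                  /* 2-way set associative   */

struct line {
    bool     valid;
    uint32_t tag;
    uint8_t  lru;      /* higher value = less recently used */
};

static struct line cache[NUM_SETS][NUM_WAYS];

/* Returns true on a hit; on a miss, fills the LRU way of the set. */
bool access_cache(uint32_t addr)
{
    uint32_t index = (addr >> OFFSET_BITS) & (NUM_SETS - 1);
    uint32_t tag   =  addr >> (OFFSET_BITS + INDEX_BITS);
    struct line *set = cache[index];

    /* Compare the address tag against both ways of the selected set. */
    for (int w = 0; w < NUM_WAYS; w++) {
        if (set[w].valid && set[w].tag == tag) {
            set[w].lru     = 0;        /* mark most recently used */
            set[1 - w].lru = 1;        /* the other way becomes LRU */
            return true;               /* HIT */
        }
    }

    /* MISS: evict the least recently used way and install the new block. */
    int victim = (set[0].lru >= set[1].lru) ? 0 : 1;
    set[victim].valid   = true;
    set[victim].tag     = tag;
    set[victim].lru     = 0;
    set[1 - victim].lru = 1;
    return false;
}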
Block Size and Spatial Locality

A block is the unit of transfer between the cache and memory.
[Diagram: a four-word block (Word0-Word3) with its tag; here the offset is b = 2 bits. The CPU address is split into a block address (32 - b bits) and an offset (b bits); 2^b is the block size, a.k.a. line size, in bytes.]
A larger block size has distinct hardware advantages:
• less tag overhead
• exploits fast burst transfers from DRAM
• exploits fast burst transfers over wide busses
What are the disadvantages of increasing block size?

Average Cache Read Latency

α is the HIT RATIO, the fraction of references found in the cache; 1 - α is the MISS RATIO, the remaining references. Let t_c be the cache access time and t_m the main-memory access time.
Average access time for serial search (consult the cache first, go to main memory only on a miss):
  t_c + (1 - α) t_m
Average access time for parallel search (start the cache and main-memory accesses together):
  α t_c + (1 - α) t_m
t_c is smallest for which type of cache? (A small numeric sketch of these formulas appears after the backup slides.)

Improving Cache Performance

Average memory access time = Hit time + Miss rate x Miss penalty
To improve performance:
• reduce the miss rate (e.g., a larger cache)
• reduce the miss penalty (e.g., an L2 cache)
• reduce the hit time
What is the simplest design strategy?

Write Performance

[Diagram: the address is split into tag, index, and block offset; the index selects one of 2^k lines (valid bit, tag, data). The tag comparison produces the HIT signal, which gates the write enable (WE) for the selected data word or byte.]

Write Policy

• Cache hit:
  – write-through: write both cache and memory
    • generally higher traffic, but simplifies cache coherence
  – write-back: write the cache only (memory is written only when the entry is evicted)
    • a dirty bit per block can further reduce the traffic
• Cache miss:
  – no-write-allocate: write only to main memory
  – write-allocate (a.k.a. fetch-on-write): fetch the block into the cache
• Common combinations:
  – write-through with no-write-allocate
  – write-back with write-allocate

Thank you

Backup

DRAM Packaging

[Diagram: a DRAM chip receives about 7 clock and control signals and about 12 multiplexed row/column address lines, and has a 4-, 8-, 16-, or 32-bit data bus.]
• A DIMM (Dual Inline Memory Module) contains multiple chips with their clock/control/address signals connected in parallel (sometimes buffers are needed to drive the signals to all chips)
• The data pins work together to return a wide word (e.g., a 64-bit data bus using sixteen 4-bit parts)
[Images of a 168-pin DIMM and a 72-pin SO-DIMM removed due to copyright restrictions]

Double-Data-Rate (DDR2) DRAM

[Figure removed for copyright reasons. Source: Micron 256 Mb DDR2 SDRAM datasheet, Bank Read mode, p. 44 of the Micron synchronous DRAM specification.]
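As promised in the Average Cache Read Latency section, here is a small numeric sketch of the serial- and parallel-search formulas and of average memory access time. The parameter values (t_c = 1 cycle, t_m = 100 cycles, hit ratio α = 0.95) are illustrative assumptions, not numbers from the lecture.

#include <stdio.h>

int main(void)
{
    double alpha = 0.95;   /* hit ratio (assumed)                        */
    double t_c   = 1.0;    /* cache access time in cycles (assumed)      */
    double t_m   = 100.0;  /* main-memory access time in cycles (assumed)*/

    /* Serial search: always pay t_c, plus t_m on the (1 - alpha) misses. */
    double serial   = t_c + (1.0 - alpha) * t_m;

    /* Parallel search: cache and memory are started together, so a hit
     * costs t_c and a miss costs t_m.                                    */
    double parallel = alpha * t_c + (1.0 - alpha) * t_m;

    /* Same structure as AMAT = hit time + miss rate * miss penalty.      */
    printf("serial   = %.2f cycles\n", serial);    /* 6.00 */
    printf("parallel = %.2f cycles\n", parallel);  /* 5.95 */
    return 0;
}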