



Multithreading Architectures

Joel Emer
December 5, 2005
6.823, Lecture 23
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Based on material prepared by Krste Asanovic and Arvind

Pipeline Hazards

                     t0  t1  t2  t3  t4  t5  t6  t7  t8  t9  t10 t11 t12 t13 t14
  LW   r1, 0(r2)     F   D   X   M   W
  LW   r5, 12(r1)        F   D   D   D   D   X   M   W
  ADDI r5, r5, 12            F   F   F   F   D   D   D   D   X   M   W
  SW   12(r1), r5                            F   F   F   F   D   D   D   D

• Each instruction may depend on the one before it. What can be done to cope with this?
• Even bypassing does not eliminate all delays.

Multithreading

How can we guarantee no dependencies between instructions in a pipeline? One way is to interleave the execution of instructions from different program threads on the same pipeline.

Interleave 4 threads, T1–T4, on a non-bypassed 5-stage pipe:

                      t0  t1  t2  t3  t4  t5  t6  t7  t8  t9
  T1: LW   r1, 0(r2)  F   D   X   M   W
  T2: ADD  r7, r1, r4     F   D   X   M   W
  T3: XORI r5, r4, 12         F   D   X   M   W
  T4: SW   0(r7), r5              F   D   X   M   W
  T1: LW   r5, 12(r1)                 F   D   X   M   W

The prior instruction in a thread always completes its writeback before the next instruction in the same thread reads the register file.

CDC 6600 Peripheral Processors (Cray, 1964)

[Image removed due to copyright restrictions. To view it, visit http://www.bambi.net/computermuseum/cdc6600andconsole.jpg]

• First multithreaded hardware
• 10 "virtual" I/O processors
• Fixed interleave on a simple pipeline
• Pipeline has a 100 ns cycle time
• Each virtual processor executes one instruction every 1000 ns
• Accumulator-based instruction set to reduce processor state

Simple Multithreaded Pipeline

[Figure: a 5-stage pipeline with one PC and one GPR file per thread (four of each); a thread-select counter, advanced each cycle, chooses which PC fetches and which GPR file is read and written.]

The thread select has to be carried down the pipeline to ensure that the correct state bits are read and written at each pipe stage.

Multithreading Costs

• Each thread requires its own user state
  – PC
  – GPRs
• Each thread also needs its own system state
  – virtual-memory page-table base register
  – exception-handling registers
• Other costs?
• Appears to software (including the OS) as multiple, albeit slower, CPUs

Thread Scheduling Policies

• Fixed interleave (CDC 6600 PPUs, 1965)
  – each of N threads executes one instruction every N cycles
  – if a thread is not ready to go in its slot, insert a pipeline bubble
• Software-controlled interleave (TI ASC PPUs, 1971)
  – OS allocates S pipeline slots amongst N threads
  – hardware performs a fixed interleave over the S slots, executing whichever thread is in a given slot
• Hardware-controlled thread scheduling (HEP, 1982)
  – hardware keeps track of which threads are ready to go
  – picks the next thread to execute based on a hardware priority scheme
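A minimal Python sketch (an illustration, not from the lecture) of why the fixed-interleave scheme above removes same-thread RAW hazards on the non-bypassed 5-stage pipe: with N threads rotating every cycle, consecutive instructions from one thread enter the pipeline N cycles apart.

    def same_thread_hazard_free(num_threads: int) -> bool:
        """True if a producer's writeback always finishes before the same
        thread's next instruction reads the register file."""
        producer_writeback_cycle = 4              # F=0, D=1, X=2, M=3, W=4
        consumer_regread_cycle = num_threads + 1  # next fetch is N cycles later
        return consumer_regread_cycle > producer_writeback_cycle

    for n in (1, 2, 3, 4):
        print(f"{n} thread(s): hazard-free = {same_thread_hazard_free(n)}")
    # Only with 4 (or more) interleaved threads does the pipe need no
    # interlocks or bypassing, matching the T1-T4 example above.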
Denelcor HEP (Burton Smith, 1982)

[Image removed due to copyright restrictions. To view it, visit http://ftp.arl.mil/ftp/historic-computers/png/hep2.png]

First commercial machine to use hardware threading in the main CPU:
– 120 threads per processor
– 10 MHz clock rate
– up to 8 processors
– precursor to the Tera MTA (Multithreaded Architecture)

Tera MTA (1990–97)

[Image removed due to copyright restrictions. To view it, visit http://www.npaci.edu/online/v2.1/mta.html]

• Up to 256 processors
• Up to 128 active threads per processor
• Processors and memory modules populate a sparse 3D torus interconnection fabric
• Flat, shared main memory
  – No data cache
  – Sustains one main-memory access per cycle per processor
• GaAs logic in prototype, 1 kW/processor at 260 MHz
  – CMOS version, MTA-2, 50 W/processor

MTA Architecture

• Each processor supports 128 active hardware threads
  – 1 x 128 = 128 stream status word (SSW) registers,
  – 8 x 128 = 1024 branch-target registers,
  – 32 x 128 = 4096 general-purpose registers
• Three operations packed into a 64-bit instruction (short VLIW)
  – one memory operation,
  – one arithmetic operation, plus
  – one arithmetic or branch operation
• Thread creation and termination instructions
• An explicit 3-bit "lookahead" field in each instruction gives the number of subsequent instructions (0–7) that are independent of this one
  – c.f. instruction grouping in VLIW
  – allows fewer threads to fill the machine pipeline
  – used for variable-sized branch delay slots

MTA Pipeline

[Figure: each cycle, instruction fetch launches one instruction from the issue pool into a 21-stage pipeline of W/M/A/C units; memory operations travel through the interconnection network to the memory pipeline, with write, memory, and retry pools feeding results back.]

• Every cycle, one instruction from one active thread is launched into the pipeline
• The instruction pipeline is 21 cycles long
• Memory operations incur 150 cycles of latency

Assuming a single thread issues one instruction every 21 cycles, and the clock rate is 260 MHz, what is the performance? The effective single-thread issue rate is 260/21 = 12.4 MIPS.
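The performance question above can be turned into a small worked example. This sketch uses only the numbers quoted on the slides (260 MHz clock, 21-cycle pipeline, 150-cycle memory latency, 3-bit lookahead); the thread counts it prints are back-of-the-envelope estimates, not published MTA figures.

    import math

    clock_mhz = 260        # MTA clock rate
    pipeline_depth = 21    # instruction pipeline length in cycles
    mem_latency = 150      # memory operation latency in cycles
    lookahead_max = 7      # 3-bit lookahead: up to 7 independent successors

    # One instruction in flight per thread means one issue every 21 cycles:
    print(f"single-thread issue rate: {clock_mhz / pipeline_depth:.1f} MIPS")  # ~12.4

    # Threads needed to launch one instruction every cycle, one per thread:
    print(f"threads to fill the pipeline: {pipeline_depth}")

    # With full lookahead, a thread can keep 8 instructions in flight,
    # so fewer threads suffice (this is why the lookahead field exists):
    print(f"... with full lookahead: {math.ceil(pipeline_depth / (lookahead_max + 1))}")  # 3

    # If every instruction were a memory reference, hiding the 150-cycle
    # latency would take on the order of:
    print(f"threads to hide memory latency: {math.ceil(mem_latency / (lookahead_max + 1))}")  # 19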
Multithreading Design Choices

• Fine-grained multithreading
  – context switch among threads every cycle
• Coarse-grained multithreading
  – context switch among threads every few cycles, e.g., on:
    » a function-unit data hazard,
    » an L1 miss,
    » an L2 miss…
• Why choose one style over another?
• The choice depends on
  – context-switch overhead
  – number of threads supported (due to per-thread state)
  – expected application-level parallelism…

Coarse-Grain Multithreading

The Tera MTA was designed for supercomputing applications with large data sets and low locality:
– no data cache
– many parallel threads needed to hide large memory latency

Other applications are more cache-friendly:
– few pipeline bubbles when the cache is getting hits
– just add a few threads to hide occasional cache-miss latencies
– swap threads on cache misses

MIT Alewife (1990)

[Image removed due to copyright restrictions. To view it, visit http://www.cag.lcs.mit.edu/alewife/pictures/jpg/16extender.jpg]

• Modified SPARC chips
  – register windows hold different thread contexts
• Up to four threads per node
• Thread switch on local cache miss

IBM Power RS64-IV (2000)

• Commercial coarse-grain multithreading CPU
• Based on PowerPC with a quad-issue, in-order, five-stage pipeline
• Each physical CPU supports two virtual CPUs
• On an L2 cache miss, the pipeline is flushed and execution switches to the second thread
  – the short pipeline minimizes the flush penalty (4 cycles), which is small compared to the memory access latency
  – flushing the pipeline simplifies exception handling

Superscalar Machine Efficiency

[Figure: instruction issue slots plotted with issue width across and time down; some cycles are completely idle (vertical waste), others are only partially filled, i.e., IPC < 4 (horizontal waste).]

• Why horizontal waste?
• Why vertical waste?

Vertical Multithreading

[Figure: a second thread interleaved cycle-by-cycle fills the formerly idle cycles; partially filled cycles, i.e., IPC < 4 (horizontal waste), remain.]

• What is the effect of cycle-by-cycle interleaving?
  – removes vertical waste, but leaves some horizontal waste

Chip Multiprocessing

[Figure: the issue width is split between two narrower processors.]

• What is the effect of splitting into multiple processors?
  – eliminates horizontal waste,
  – leaves some vertical waste, and
  – caps the peak throughput of each thread.

Ideal Superscalar Multithreading (Tullsen, Eggers, Levy, UW, 1995)

[Figure: issue slots filled each cycle by instructions drawn from several threads.]

• Interleave multiple threads into multiple issue slots with no restrictions

Out-of-Order Simultaneous Multithreading (Tullsen, Eggers, Emer, Levy, Stamm, Lo, DEC/UW, 1996)

• Add multiple contexts and fetch engines and allow instructions fetched from different threads to issue simultaneously
• Utilize a wide out-of-order superscalar processor's issue queue to find instructions to issue from multiple threads
• The out-of-order instruction window already has most of the circuitry required to schedule from multiple threads
• Any single thread can utilize the whole machine

Basic Out-of-Order Pipeline (EV8 – Microprocessor Forum, Oct 1999)

[Figure: Fetch → Decode/Map → Queue → Reg Read → Execute → Dcache/Store Buffer → Reg Write → Retire, with a single PC and a single register map; everything past fetch is thread-blind.]

SMT Pipeline (EV8 – Microprocessor Forum, Oct 1999)

[Figure: the same pipeline, but with one PC per thread and per-thread register maps; the physical registers, issue queue, and caches remain shared.]

Icount Choosing Policy

Fetch from the thread with the fewest instructions in flight. Why does this enhance throughput?

Why Does Icount Make Sense?

By Little's law, throughput is T = N / L, where N is the number of instructions in flight and L is their latency. Assuming latency (L) is unchanged with the addition of threading, each of 4 threads i with original throughput T_i = N_i / L still achieves T_i / 4 = (N_i / 4) / L: a quarter of its in-flight instructions buys each thread a quarter of its original throughput, so sharing the window evenly loses no total throughput.
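A minimal sketch of an Icount-style fetch chooser; the Thread structure and names here are assumptions for illustration, not the EV8 implementation.

    from dataclasses import dataclass

    @dataclass
    class Thread:
        tid: int
        in_flight: int = 0  # instructions fetched but not yet issued

    def icount_pick(threads):
        """Fetch from the thread with the fewest instructions in flight."""
        return min(threads, key=lambda t: t.in_flight)

    threads = [Thread(0, in_flight=12), Thread(1, in_flight=3), Thread(2, in_flight=7)]
    print(f"fetch from thread {icount_pick(threads).tid}")  # thread 1

A thread accumulating instructions in the shared issue queue is probably stalled; favoring low-count threads keeps the queue stocked with instructions that can actually issue, which is why Icount enhances throughput.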
SMT Fetch Policies (Locks)

• Problem: a spin-looping thread consumes resources
• Solution: provide a quiescing operation that allows a thread to sleep until a memory location changes

  loop:  ARM     r1, 0(r2)   ; load and start watching 0(r2)
         BEQ     r1, gotit
         QUIESCE             ; inhibit scheduling of this thread until
         BR      loop        ;   activity is observed on 0(r2)
  gotit:

Adaptation to Parallelism Type

[Figure: two issue-slot diagrams. In regions with low thread-level parallelism (TLP), the entire machine width is available for instruction-level parallelism (ILP); in regions with high TLP, the entire machine width is shared by all threads.]

Pentium-4 Hyperthreading (2002)

• First commercial SMT design (2-way SMT)
  – "Hyperthreading" == SMT
• Logical processors share nearly all resources of the physical processor
  – caches, execution units, branch predictors
• Die-area overhead of hyperthreading is under 5%
• When one logical processor is stalled, the other can make progress
  – no logical processor can use all entries in the queues when two threads are active
• A processor running only one active software thread runs at approximately the same speed with or without hyperthreading

Pentium-4 Hyperthreading Front End

[Figure removed due to copyright restrictions. Refer to Figure 5a in Marr, D., et al. "Hyper-Threading Technology Architecture and Microarchitecture." Intel Technology Journal 6, no. 1 (2002): 8. http://www.intel.com/technology/itj/2002/volume06issue01/vol6iss1hyperthreadingtechnology.pdf]

Pentium-4 Branch Predictor

• Separate return-address stacks per thread. Why?
• Separate first-level global branch history table. Why?
• Shared second-level branch history table, tagged with logical-processor IDs

Pentium-4 Hyperthreading Execution Pipeline

[Figure removed due to copyright restrictions. Refer to Figure 6 in Marr, D., et al. "Hyper-Threading Technology Architecture and Microarchitecture." Intel Technology Journal 6, no. 1 (2002): 10. http://www.intel.com/technology/itj/2002/volume06issue01/vol6iss1hyperthreadingtechnology.pdf]

Extras

Speculative, Out-of-Order Superscalar Processor

[Figure: an in-order front end (PC, Fetch, Decode, Rename) feeds an out-of-order execute stage (branch unit, ALU, MEM, store buffer, physical register file), with a reorder buffer committing results in order.]

[Figure removed due to copyright considerations.]

Granularity of Multithreading

[Figure: CPU with register file, L1 instruction and data caches, a unified L2 cache, and main memory.]

So far we have assumed fine-grained multithreading:
– the CPU switches to a different thread every cycle
– when does this make sense?

Coarse-grained multithreading:
– the CPU switches to a different thread every few cycles
– when does this make sense?
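To make the closing questions concrete, here is a minimal sketch (assumed policy code, not from the lecture) contrasting the two switch policies:

    def next_thread_fine_grained(current: int, num_threads: int) -> int:
        """Fine-grained: rotate to a different thread every cycle."""
        return (current + 1) % num_threads

    def next_thread_coarse_grained(current: int, num_threads: int,
                                   long_latency_event: bool) -> int:
        """Coarse-grained: stay on one thread until it hits a long-latency
        event such as an L2 miss, then switch."""
        return (current + 1) % num_threads if long_latency_event else current

Fine-grained switching makes sense when threads are plentiful and per-thread state is cheap, as in the Tera MTA; coarse-grained switching makes sense when a single thread usually hits in the cache and only occasional long misses need hiding, as in Alewife and the RS64-IV.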