Advantages of virtual memory

Modern Virtual Memory Systems
Arvind
Computer Science and Artificial Intelligence Laboratory, M.I.T.
Based on material prepared by Arvind and Krste Asanovic
6.823, October 17, 2005

Address Translation: Putting It All Together
[Figure: translation flow. A virtual address is looked up in the TLB (hardware). On a hit, a protection check follows: if the access is permitted, the physical address goes to the cache; if denied, a protection fault (SEGFAULT) is raised. On a TLB miss, the page table is walked (in hardware or software): if the page is in memory, the TLB is updated and the instruction restarts; if not, a page fault is raised and the OS loads the page.]

Topics
• Interrupts
• Speeding up the common case: TLB & cache organization
• Speeding up page table walks
• Modern usage

Interrupts: Altering the Normal Flow of Control
[Figure: the program executes instructions I_1 ... I_{i-1}, I_i, I_{i+1} ... I_n; an interrupt at I_i transfers control to an interrupt handler HI_1 ... HI_n, then back to the program.]
An interrupt is an external or internal event that needs to be processed by another (system) program. The event is usually unexpected or rare from the program's point of view.

Causes of Interrupts
An interrupt is an event that requests the attention of the processor.
• Asynchronous: an external event
– input/output device service request
– timer expiration
– power disruptions, hardware failure
• Synchronous: an internal event (a.k.a. exceptions)
– undefined opcode, privileged instruction
– arithmetic overflow, FPU exception
– misaligned memory access
– virtual memory exceptions: page faults, TLB misses, protection violations
– traps: system calls, e.g., jumps into the kernel

Asynchronous Interrupts: Invoking the Interrupt Handler
• An I/O device requests attention by asserting one of the prioritized interrupt request lines
• When the processor decides to process the interrupt:
– it stops the current program at instruction I_i, completing all instructions up to I_{i-1} (a precise interrupt)
– it saves the PC of instruction I_i in a special register (EPC)
– it disables interrupts and transfers control to a designated interrupt handler running in kernel mode

Interrupt Handler
• Saves EPC before enabling interrupts, to allow nested interrupts ⇒
– need an instruction to move EPC into GPRs
– need a way to mask further interrupts, at least until EPC can be saved
• Needs to read a status register that indicates the cause of the interrupt
• Uses a special indirect jump instruction, RFE (return from exception), which
– enables interrupts
– restores the processor to user mode
– restores hardware status and control state

Synchronous Interrupts
• A synchronous interrupt (exception) is caused by a particular instruction
• In general, the instruction cannot be completed and must be restarted after the exception has been handled
– this requires undoing the effect of one or more partially executed instructions
• In the case of a trap (system call), the instruction is considered to have completed
– a special jump instruction involving a change to privileged kernel mode

Exception Handling in the 5-Stage Pipeline
[Figure: the pipeline PC → Decode → E → M → W with instruction and data memories; exceptions can arise at several stages: PC address exception, illegal opcode, overflow, data address exception, plus asynchronous interrupts.]
• How to handle multiple simultaneous exceptions in different pipeline stages?
• How and where to handle external asynchronous interrupts?
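The translation flow at the start of the lecture (hardware TLB lookup, protection check on a hit, page-table walk and TLB refill on a miss) can be sketched as a small simulation. This is an illustrative sketch only: the names (`TLB`, `PAGE_TABLE`, `translate`) and the example mappings are invented for the demonstration, not taken from the lecture.

```python
# Minimal sketch of the address-translation flow, assuming 4 KB pages.
# All names and mappings here are illustrative.

PAGE_SIZE = 4096

# TLB and page table map a virtual page number (VPN) to a
# (physical page number, permissions) pair.
TLB = {}
PAGE_TABLE = {0x10: (0x7F, "rw"), 0x11: (0x80, "r")}

class PageFault(Exception): pass
class ProtectionFault(Exception): pass   # the SEGFAULT case

def translate(vaddr, access):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in TLB:                   # TLB miss: walk the page table
        if vpn not in PAGE_TABLE:        # page not in memory
            raise PageFault(vpn)         # OS would load the page, then restart
        TLB[vpn] = PAGE_TABLE[vpn]       # refill the TLB and retry
    ppn, perms = TLB[vpn]
    if access not in perms:              # protection check after the hit
        raise ProtectionFault(vaddr)
    return ppn * PAGE_SIZE + offset      # physical address, sent to the cache

print(hex(translate(0x10ABC, "r")))      # TLB miss, refill, then translate
print(hex(translate(0x10ABC, "w")))      # now a TLB hit
```

Note how the faulting cases mirror the figure: a missing page raises a page fault for the OS, while a permission mismatch after a successful lookup raises a protection fault.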
Exception Handling in the 5-Stage Pipeline (cont.)
[Figure: the same pipeline with the commit point at the M stage; per-stage exception flags (Exc D, Exc E, Exc M) and PCs travel down the pipe with each instruction; the selected exception kills the F, D, and E stages, records EPC and Cause, and injects the handler PC into fetch.]
• Hold exception flags in the pipeline until the commit point (M stage)
• Exceptions in earlier pipe stages override later exceptions for a given instruction
• Inject external interrupts at the commit point (they override the others)
• If there is an exception at commit: update the Cause and EPC registers, kill all stages, and inject the handler PC into the fetch stage

Address Translation in the CPU Pipeline
[Figure: the 5-stage pipeline with an Inst. TLB in front of the Inst. Cache and a Data TLB in front of the Data Cache; either TLB can raise a TLB miss, a page fault, or a protection violation.]
• Software handlers need a restartable exception on a page fault or protection violation
• Handling a TLB miss needs a hardware or software mechanism to refill the TLB
• Need mechanisms to cope with the additional latency of a TLB:
– slow down the clock
– pipeline the TLB and cache access
– virtual address caches
– parallel TLB/cache access

Virtual Address Caches
[Figure: conventional organization: CPU → TLB (VA to PA) → physical primary cache → primary memory. Alternative (e.g., StrongARM): place the cache before the TLB, so the CPU accesses a virtual cache directly.]
• one-step process in case of a hit (+)
• the cache needs to be flushed on a context switch unless address-space identifiers (ASIDs) are included in the tags (−)
• aliasing problems due to the sharing of pages (−)

Aliasing in Virtual-Address Caches
[Figure: the page table maps two virtual pages, VA1 and VA2, to the same physical page PA; the virtual cache then holds a first copy of the data at PA tagged with VA1 and a second copy tagged with VA2.]
A virtual cache can hold two copies of the same physical data.
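The aliasing hazard can be demonstrated with a tiny simulation of a virtually-tagged, write-back cache. Everything here (`vcache`, `page_table`, the values) is hypothetical scaffolding for the illustration:

```python
# Sketch of virtual-cache aliasing: a cache keyed by virtual address
# ends up holding two independent copies of one physical location.

page_table = {"VA1": "PA", "VA2": "PA"}   # two virtual pages share one physical page
memory = {"PA": 0}
vcache = {}                                # virtual cache: VA -> cached copy

def read(va):
    if va not in vcache:                   # miss: fill from memory via translation
        vcache[va] = memory[page_table[va]]
    return vcache[va]

def write(va, value):                      # write-back: update only this VA's copy
    read(va)                               # fill on a write miss
    vcache[va] = value

read("VA1"); read("VA2")                   # both copies now cached
write("VA1", 42)                           # update the first copy
print(read("VA2"))                         # stale read: prints 0, not 42
```

The write to VA1 never reaches the copy tagged VA2, which is exactly the "writes to one copy not visible to reads of the other" problem described above.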
Two virtual pages share one physical page, so writes to one copy are not visible to reads of the other.
General solution: disallow aliases from coexisting in the cache.
Software (i.e., OS) solution for a direct-mapped cache: the VAs of shared pages must agree in the cache index bits; this ensures that all VAs accessing the same PA conflict in a direct-mapped cache (early SPARCs).

Concurrent Access to TLB & Cache
[Figure: the VA splits into a VPN and a k-bit page offset; the low L index bits select one of 2^L blocks (2^b bytes each) in a direct-mapped cache, while the TLB translates the VPN to a PPN; the cache's physical tag is then compared with the PPN.]
The index L is available without consulting the TLB ⇒ the cache and TLB accesses can begin simultaneously. The tag comparison is made after both accesses are completed.
Cases: L + b = k, L + b < k, L + b > k.

Virtual-Index Physical-Tag Caches: Associative Organization
[Figure: a 2^a-way organization with virtual index L = k − b; 2^a direct-mapped banks of 2^L blocks are probed in parallel with the TLB lookup; after the PPN is known, the 2^a physical tags are compared.]
Is this scheme realistic?

Concurrent Access to TLB & a Large L1
The problem with L1 > page size:
[Figure: the virtual index of a large direct-mapped L1 uses a bit (a) above the page offset, so VA1 and VA2 that map to the same PA can reside in different L1 entries, each holding a copy of the data at PA.]
Can VA1 and VA2 both map to PA?

A Solution via a Second-Level Cache
[Figure: CPU with register file and split L1 instruction and data caches, backed by a unified L2 cache in front of memory.]
Usually a common L2 cache backs up both the instruction and data L1 caches, and L2 is "inclusive" of both.

Anti-Aliasing Using L2: MIPS R10000
[Figure: a direct-mapped L1 with a virtual index; the extra virtual-index bit a is stored into the L2 tag, alongside the physical tag from the TLB's PPN.]
• Suppose VA1 and VA2 both map to PA, and VA1 is already in L1 and L2 (VA1 ≠ VA2).
• After VA2 is resolved to PA, a collision will be detected in L2.
[Figure: the direct-mapped L2 entry for PA records the VA1 index bits in its tag.]
• VA1 will be purged from L1 and L2, and VA2 will be loaded ⇒ no aliasing
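The R10000-style anti-aliasing scheme can be sketched as follows, assuming a 12-bit page offset and a 13-bit L1 virtual index (one extra bit a above the page boundary); the names, masks, and addresses are all illustrative, not the real R10000 parameters:

```python
# Sketch of L2-based anti-aliasing: the L2 entry for a physical address
# remembers which virtual index currently holds the line in L1, and a
# mismatch purges the old alias before the new one is loaded.

l1 = {}          # L1: virtual index -> data
l2 = {}          # L2: physical address -> (virtual index stored in L2 tag, data)

def l1_index(va):
    return va & 0x1FFF                        # 13-bit virtual index (> 12-bit page offset)

def access(va, pa):
    idx = l1_index(va)
    if pa in l2:
        old_idx, data = l2[pa]
        if old_idx != idx:                    # collision detected in L2
            l1.pop(old_idx, None)             # purge the old alias from L1
    else:
        data = 0                              # fetch from memory (stub)
    l1[idx] = data
    l2[pa] = (idx, data)                      # record who now holds it in L1

access(0x00345, 0x90345)                      # VA1 -> PA, loaded into L1 and L2
access(0x01345, 0x90345)                      # VA2 -> same PA: VA1 is purged
print(len(l1))                                # only one copy remains: prints 1
```

Because at most one virtual index per physical line is allowed in L1, the two-copies problem of the earlier aliasing slide cannot arise.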
