
Memory Management: From Absolute Addresses to Demand Paging
Memory Management: From Absolute Addresses to Demand Paging
Joel Emer, Computer Science and Artificial Intelligence Laboratory, M.I.T.
Based on material prepared by Arvind and Krste Asanovic (6.823, October 12, 2005)

Outline
• The Fifties
  – Absolute addresses
  – Dynamic address translation
• The Sixties
  – Paged memory systems and TLBs
  – Atlas' demand paging
• Modern virtual memory systems

Names for Memory Locations
[Figure: machine language address --(ISA)--> virtual address --(mapping)--> physical address (DRAM)]
• Machine language address – as specified in the machine code
• Virtual address – the ISA specifies the translation of a machine code address into the virtual address of a program variable (sometimes called the effective address)
• Physical address – the operating system specifies the mapping of a virtual address into the name of a physical memory location

Absolute Addresses
EDSAC, early 50s: virtual address = physical memory address
• Only one program ran at a time, with unrestricted access to the entire machine (RAM + I/O devices)
• Addresses in a program depended upon where the program was to be loaded in memory
• But it was more convenient for programmers to write location-independent subroutines
How could location independence be achieved?

Dynamic Address Translation
Motivation: in the early machines, I/O operations were slow and each word transferred involved the CPU. Throughput is higher if the CPU and I/O of two or more programs are overlapped. How? ⇒ multiprogramming
• Location-independent programs: ease of programming and storage management ⇒ need for a base register
• Protection: independent programs should not affect each other inadvertently ⇒ need for a bound register
[Figure: prog1 and prog2 loaded at different locations in physical memory]

Simple Base and Bound Translation
[Figure: the effective address is compared (≤) against the bound register (the segment length) to detect a bounds violation, and added to the base register to form the physical address within the current segment in main memory]
Base and bound registers are visible/accessible only when the processor is running in supervisor mode.

Separate Areas for Program and Data
[Figure: a data bound/base register pair translates load/store effective addresses into the data segment, while a program bound/base register pair translates the program counter into the program segment of main memory]
What is an advantage of this separation? (The scheme is still used today on Cray vector supercomputers.)

Memory Fragmentation
[Figure: as users 4 and 5 arrive and users 2 and 5 later leave, the 16K, 24K, 8K and 32K regions between users 1–3 and the OS space become a scattering of used and free blocks]
As users come and go, the storage becomes "fragmented". Therefore, at some stage programs have to be moved around to compact the storage.

Paged Memory Systems
• A processor-generated address can be interpreted as a pair (page number, offset)
• A page table contains the physical address of the base of each page
[Figure: pages 0–3 of User-1's address space are mapped through User-1's page table to non-adjacent physical pages]
Page tables make it possible to store the pages of a program non-contiguously.
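The two translation schemes above, base-and-bound and per-page translation through a page table, can be illustrated with a small software model. The sketch below is not from the lecture; the 4 KB page size, the function names (base_bound_translate, paged_translate) and the example register and table contents are all assumptions chosen for illustration.

/* Minimal software sketch of the two translation schemes above.
 * All names and sizes here are illustrative; the slides describe the
 * hardware mechanism, not this code. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* --- Simple base-and-bound translation --- */
typedef struct {
    uint32_t base;   /* start of the segment in physical memory        */
    uint32_t bound;  /* segment length; loaded only in supervisor mode */
} seg_regs_t;

uint32_t base_bound_translate(seg_regs_t r, uint32_t eff_addr) {
    if (eff_addr >= r.bound) {            /* bounds violation? */
        fprintf(stderr, "bounds violation at 0x%x\n", eff_addr);
        exit(1);
    }
    return r.base + eff_addr;             /* physical address */
}

/* --- Paged translation: address = (page number, offset) --- */
#define PAGE_BITS 12u                     /* assume 4 KB pages */
#define PAGE_SIZE (1u << PAGE_BITS)

uint32_t paged_translate(const uint32_t *page_table, uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_BITS;        /* page number       */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* offset in page    */
    uint32_t frame  = page_table[vpn];           /* base of the page  */
    return frame + offset;
}

int main(void) {
    seg_regs_t r = { .base = 0x40000, .bound = 0x8000 };
    printf("base+bound: 0x%x\n", base_bound_translate(r, 0x1234));

    /* pages of one user stored non-contiguously in physical memory */
    uint32_t pt[4] = { 0x7000, 0x3000, 0xA000, 0x1000 };
    printf("paged:      0x%x\n", paged_translate(pt, (2u << PAGE_BITS) | 0x10));
    return 0;
}

Note that paged_translate reads page_table[vpn] before it can touch the data, which is exactly the extra memory reference discussed on the page-table slides that follow.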
Private Address Space per User
[Figure: users 1, 2 and 3 each have their own page table mapping VA1 into physical memory, alongside OS pages and free pages]
• Each user has a page table
• The page table contains an entry for each user page

Where Should Page Tables Reside?
• The space required by the page tables (PTs) is proportional to the address space, the number of users, ...
  ⇒ the space requirement is large
  ⇒ too expensive to keep in registers
• Idea: keep the PT of the current user in special registers
  – may not be feasible for large page tables
  – increases the cost of a context swap
• Idea: keep the PTs in main memory
  – needs one reference to retrieve the page base address and another to access the data word
  ⇒ doubles the number of memory references

Page Tables in Physical Memory
[Figure: the page tables of User 1 and User 2 reside in physical memory alongside the pages they map]

A Problem in the Early Sixties
• There were many applications whose data could not fit in main memory, e.g., payroll
  – Paged memory systems reduced fragmentation but still required the whole program to be resident in main memory
• Programmers moved the data back and forth between the secondary store and the primary store by overlaying it repeatedly ⇒ tricky programming

Manual Overlays
Ferranti Mercury, 1956: 40k bits of central store, 640k bits of drum; assume an instruction can address all the storage on the drum.
• Method 1: the programmer keeps track of addresses in main memory and initiates an I/O transfer when required
• Method 2: automatic initiation of I/O transfers by software address translation (Brooker's interpretive coding, 1960)
Problems? Method 1 is difficult and error prone; Method 2 is inefficient.

Demand Paging in Atlas (1962)
"A page from secondary storage is brought into the primary storage whenever it is (implicitly) demanded by the processor." – Tom Kilburn
Primary memory is used as a cache for secondary memory.
Primary (central memory): 32 pages of 512 words each. Secondary (drum): 32 x 6 pages. The user sees 32 x 6 x 512 words of storage.

Hardware Organization of Atlas
[Figure: the effective address goes through an initial address decode to the Page Address Registers (PARs). Storage: 16 ROM pages of system code (not swapped, 0.4–1 µsec), 2 subsidiary pages of system data (not swapped, 1.4 µsec), main memory of 32 pages of 512 48-bit words (1.4 µsec), drum (4 units) holding 192 pages, and 8 tape decks (88 sec/word)]
One Page Address Register (PAR) per page frame, holding (effective PN, status). The effective page address is compared against all 32 PARs:
  match ⇒ normal access
  no match ⇒ page fault; save the state of the partially executed instruction

Atlas Demand Paging Scheme
On a page fault:
• An input transfer into a free page is initiated
• The Page Address Register (PAR) is updated
• If no free page is left, a page is selected to be replaced (based on usage)
• The replaced page is written to the drum
  – to minimize the effect of drum latency, the first empty page on the drum was selected
• The page table is updated to point to the new location of the page on the drum
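The PAR lookup and the page-fault steps on the last two slides can be sketched as a toy simulation. This is not Atlas's actual mechanism: drum I/O is stubbed out, the usage-based replacement is simplified to round-robin, and every identifier below is invented for the example.

/* Toy model of the Atlas PAR lookup and page-fault handling described above.
 * Drum I/O is a stub and the "based on usage" replacement policy is
 * simplified to round-robin; names are illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define NUM_FRAMES 32            /* 32 primary pages, one PAR per frame */

typedef struct {
    uint16_t pn;                 /* effective page number in this frame */
    int      in_use;             /* status bit                          */
} par_t;

static par_t par[NUM_FRAMES];
static int   next_victim = 0;    /* stand-in for Atlas's usage-based choice */

static void drum_read(uint16_t pn, int frame)  { (void)pn; (void)frame; /* stub */ }
static void drum_write(uint16_t pn, int frame) { (void)pn; (void)frame; /* stub */ }

/* Compare the effective page number against all 32 PARs. */
static int par_lookup(uint16_t pn) {
    for (int f = 0; f < NUM_FRAMES; f++)
        if (par[f].in_use && par[f].pn == pn)
            return f;                         /* match   => normal access */
    return -1;                                /* no match => page fault   */
}

/* Page-fault steps from the "Atlas Demand Paging Scheme" slide. */
static int handle_page_fault(uint16_t pn) {
    int frame = -1;
    for (int f = 0; f < NUM_FRAMES; f++)      /* look for a free frame    */
        if (!par[f].in_use) { frame = f; break; }
    if (frame < 0) {                          /* none free: pick a victim */
        frame = next_victim;
        next_victim = (next_victim + 1) % NUM_FRAMES;
        drum_write(par[frame].pn, frame);     /* replaced page -> drum    */
    }
    drum_read(pn, frame);                     /* input transfer into frame */
    par[frame].pn = pn;                       /* update the PAR            */
    par[frame].in_use = 1;
    return frame;
}

int main(void) {
    uint16_t refs[] = { 3, 7, 3, 42, 7 };
    for (int i = 0; i < (int)(sizeof refs / sizeof refs[0]); i++) {
        int f = par_lookup(refs[i]);
        if (f < 0) f = handle_page_fault(refs[i]);
        printf("page %2d -> frame %2d\n", refs[i], f);
    }
    return 0;
}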
Caching vs. Demand Paging
[Figure: caching places a cache between the CPU and primary memory; demand paging places primary memory between the CPU and secondary memory]

  Caching                           Demand paging
  cache entry                       page frame
  cache block (32 bytes)            page (4K bytes)
  cache miss (1% to 20%)            page miss (0.001%)
  cache hit (1 cycle)               page hit (100 cycles)
  cache miss (100 cycles)           page miss (5M cycles)
  a miss is handled in hardware     a miss is handled mostly in software

Five-minute break to stretch your legs

Modern Virtual Memory Systems
Illusion of a large, private, uniform store.
• Protection & privacy: several users, each with their private address space and one or more shared address spaces; a page table ≡ a name space
• Demand paging: provides the ability to run programs larger than primary memory; hides differences in machine configurations
• The price is address translation on each memory reference: VA → mapping (TLB) → PA
[Figure: the OS and each user's pages in primary memory are backed by a swapping store]
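The "address translation on each memory reference" point can be made concrete with a small sketch in which a TLB is checked before the in-memory page table, so only a TLB miss pays for the extra memory reference. The TLB size, the direct-mapped organization and all names below are illustrative assumptions, not something the slides specify.

/* Sketch of translate-on-every-reference with a small TLB in front of the
 * in-memory page table.  Sizes, names and the direct-mapped TLB are
 * assumptions made for the example. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS   12u
#define TLB_ENTRIES 8
#define PT_PAGES    64

typedef struct { uint32_t vpn, pfn; int valid; } tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static uint32_t    page_table[PT_PAGES];   /* vpn -> pfn, kept in memory */

static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_BITS;
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);
    tlb_entry_t *e  = &tlb[vpn % TLB_ENTRIES];      /* direct-mapped TLB   */

    if (!(e->valid && e->vpn == vpn)) {             /* TLB miss            */
        e->vpn   = vpn;                             /* walk the page table */
        e->pfn   = page_table[vpn];                 /* (extra memory ref)  */
        e->valid = 1;
    }
    return (e->pfn << PAGE_BITS) | offset;          /* PA = frame | offset */
}

int main(void) {
    for (uint32_t vpn = 0; vpn < PT_PAGES; vpn++)
        page_table[vpn] = PT_PAGES - 1 - vpn;       /* arbitrary mapping   */
    printf("VA 0x%05x -> PA 0x%05x\n", 0x03ABCu, translate(0x03ABCu));
    printf("VA 0x%05x -> PA 0x%05x\n", 0x03DEFu, translate(0x03DEFu)); /* TLB hit */
    return 0;
}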
