Operating Systems Lecture Notes

Advanced Topics in Operating Systems
Lecture Notes

Dr. Warren Toomey
School of Information Technology
Bond University, Queensland, Australia

With quotes from `The New Hacker's Dictionary'

Third Session, 2003
© 1992-2003, Warren Toomey

1 Introduction to Operating Systems

1.1 What is an Operating System?

Textbook reference: Stallings ppg 53 – 100; Tanenbaum & Woodhull ppg 1 – 5

Without software, a computer is effectively useless. Computer software controls the use of the hardware (CPU, memory, disks etc.), and makes the computer into a useful tool for its users.

In most computers, the software can be regarded as a set of layers, as shown in the following diagram. Each layer hides much of the complexity of the layer below, and provides a set of abstract services and concepts to the layer above.

[Diagram: four layers, from most abstract to least abstract — Users, Application Programs, System Software, Computer Hardware]

For example, the computer's hard disk allows data to be stored on it in a set of fixed-sized blocks. The system software hides this complexity, and provides the concept of files to the application software. In turn, an application program such as a word processor hides the idea of a file, and allows the user to work with documents instead.

Computer software can thus be divided into two categories:

- system software, which manages the computer's operation, and
- applications software, which allows users to do useful things.

The most fundamental of all system software is the operating system. It has three main tasks to perform.

- The operating system must shield the details of the hardware from the application programs, and thus from the user.
- The operating system has to provide a set of abstract services to the application programs instead. When applications use these abstract services, the operations must be translated into real hardware operations.
- Finally, the resources in a computer (CPU, memory, disk space) are limited.
The operating system must act as a resource manager, optimising the use of the resources, and protecting them against misuse and abuse. When a system provides multiuser or multitasking capabilities, resources must be allocated fairly and equitably amongst a number of competing requests.

operating system: (Often abbreviated `OS') The foundation software of a machine, of course; that which schedules tasks, allocates storage, and presents a default interface to the user between applications. The facilities an operating system provides and its general design philosophy exert an extremely strong influence on programming style and on the technical cultures that grow up around its host machines.

1.2 Kernel Mode and User Mode

Textbook reference: Tanenbaum & Woodhull pg 3

Because an operating system must hide the computer's hardware, and manage the hardware resources, it needs to prevent the application software from accessing the hardware directly. Without this sort of protection, the operating system would not be able to do its job.

The computer's CPU provides two modes of operation which enforce this protection. The operating system runs in kernel mode, also known as supervisor mode or privileged mode. In kernel mode, the software has complete access to all of the computer's hardware, and can control the switching between the CPU modes.

The rest of the software runs in user mode. In this mode, direct access to the hardware is prohibited, and so is any arbitrary switching to kernel mode. Any attempts to violate these restrictions are reported to the kernel mode software: in other words, to the operating system itself.

By having two modes of operation which are enforced by the computer's own hardware, the operating system can force application programs to use the operating system's abstract services, instead of circumventing any resource allocations by direct hardware access.
1.3 Other System Software

Textbook reference: Tanenbaum & Woodhull pg 2

Before we go on with our introduction to operating systems, we should look at what other system software there is.

[Diagram: software layers from most abstract to least — Users; Application Programs, User Programs and System Programs (user mode); Library routines; System calls; Operating System (kernel mode); Computer Hardware]

At the top of the operating system are the system calls. These are the set of abstract operations that the operating system provides to the applications programs, and thus are also known as the application program interface, or API. This interface is generally constant: users cannot change what is in the operating system.

Above the system calls are a set of library routines which come with the operating system. These are functions and subroutines which are useful for many programs.

User programs do the work for the user. System programs do operating system-related things: copy or move files, delete files, make directories, etc. Other, non-system software are the application programs installed to make the computer useful. Applications like Netscape Navigator, Microsoft Word or Excel are examples of non-system software. These are usually purchased separately from the operating system. Of course, in many cases software must be written for a special application, by the users themselves or by programmers in an organisation.

Regardless of type, all programs can use the library routines and the system calls that come with an operating system.

1.4 Types of Operating Systems

Every operating system is different, and each is designed to meet a set of goals. However, we can generally classify operating systems into the following categories.

- A simple monitor provides few services to the user, and leaves much of the control of the hardware to the user's own programs. A good example here is MS-DOS.
- A batch system takes users' jobs, and segregates them into batches, with similar requirements.
Each batch is given to the computer to run. When jobs with similar system requirements are batched together, this helps to streamline their processing. User interaction is generally lacking in batch systems: jobs are entered, are processed, and the output from each job comes out at a later time. The emphasis here is on the computer's utilisation. An example batch system is IBM's VM.

- An embedded system usually has the operating system built into the computer, and is used to control external hardware. There is little or no application software in an embedded system. Examples here are the Palm Pilot, the electronic diaries that everybody seems to have, and of course the computers built into VCRs, microwaves, and into most cars.
- A real-time system is designed to respond to input within certain time constraints. This input usually comes from external sensors, and not from humans. Thus, there is usually little or no user interaction. Many embedded systems are also real-time systems. An example real-time system is the QNX operating system.
- Finally, a multiprogramming system appears to be running many jobs at the same time, each with user interaction. The emphasis here is on user response time as well as computer utilisation. Multiprogramming systems are usually more general-purpose than the other types of operating systems. Example multiprogramming systems are Unix and Windows NT.

In this course, we will concentrate on multiprogramming systems: these are much more sophisticated and complex than the other operating system types, and will give us a lot more to look at. We will also concentrate on multi-user systems: these are systems which support multiple users at the same time.
2 Design Principles & Concepts

Textbook reference: Stallings ppg 53 – 100; Tanenbaum & Woodhull ppg 15 – 20

The services provided by an operating system depend on the concepts around which the operating system was created; this gives each operating system a certain `feel' to the programmers who write programs for it. We are talking here not about the `look & feel' of the user interface, but the `look & feel' of the programmer's interface, i.e. the services provided by the API.

Although each operating system provides its own unique set of services, most operating systems share a few common concepts. Let's briefly take a look at each now. We will examine most of these concepts in detail in later topics.

2.1 The Process

Most operating systems provide the concept of a process. Here, we need to distinguish between a program and a process.

- A program is a collection of computer instructions plus some data that resides on a storage medium, waiting to be called into action.
- A process is a program during execution. It has been loaded into the computer's main memory, and is taking input, manipulating the input, and producing output.

Specifically, a process is an environment for a program to run in. This environment protects the running program against other processes, and also provides the running program with access to the operating system's services via the system calls.

2.2 Memory

Part of every computer's hardware is its main memory. This is a set of temporary storage locations which can hold machine code instructions and data. Memory is volatile: when the power is turned off, the contents of main memory are lost. In current computers, there are usually several megabytes of memory (i.e. millions of 8-bit storage areas).

Memory contents can be accessed by reading or writing a memory location, which has an integer address, just like the numbers on the letter boxes in a street. Memory locations often have hardware protection, allowing or preventing reads and writes.
Usually, a process can only read or write to a specific set of locations that have been given to it by the operating system. The operating system allocates memory to processes as they are created, and reclaims the memory once they finish. As well, processes can usually request more memory, and also relinquish this extra memory if they no longer require it.

2.3 Files

Files are storage areas for programs, source code, data, documents etc. They can be accessed by processes, but don't disappear when processes die, or when the machine is turned off. They are thus persistent objects. Operating systems provide mechanisms for file manipulation, such as open, close, create, read and write.

As part of the job of hiding the hardware and providing abstract services, the operating system must map files onto areas on disks and tapes. The operating system must also deal with files that grow or shrink in size.

Some operating systems don't enforce any structure to files, or enforce particular file types. Others distinguish between file types and structures, e.g. Pascal source files, text documents, machine code files, data files etc. Most operating systems allow files to have permissions, allowing certain types of file access to authorised users only.

Directories may exist to allow related files to be collected. The main reason for the existence of directories is to make file organisation easier and more flexible for the user.

2.4 Windows

Nearly all operating systems these days provide some form of graphical user interface, although in many cases a command-line interface is also available. In these operating systems, there are services available to allow processes to do graphical work.

Although there are primitive services such as line and rectangle drawing, most GUI interfaces provide an abstract concept known as the window. The window is a logical, rectangular, drawing area. Processes can create one or more windows, of any size.
The operating system may decorate each window with borders, and these may include icons which allow the window to be destroyed, resized, or hidden. The operating system must map these logical windows onto the physical display area provided by the video card and computer monitor. As well, the operating system must direct the input from the user (in the form of keyboard input, and mouse operations) to the appropriate window: this is known as changing the input focus.

2.5 Operating System Services

Textbook reference: Tanenbaum & Woodhull ppg 21 – 26

From a programmer's point of view, an operating system is defined mainly by the Application Program Interface that it provides, and to a lesser extent what library routines are available. It follows, therefore, that a number of different operating system products may provide exactly the same Application Program Interface, and thus appear to be the same operating system to the programmer.

The most obvious example of this is Unix. Unix is really not a single operating system, but rather a collection of operating systems that share a common Application Program Interface. This API has now been standardised, and is known as the POSIX standard. Solaris, HP-UX, SCO Unix, Digital Unix, System V, Linux, Minix and FreeBSD are all examples of Unix operating systems. What this means is that a program written to run on one Unix platform can be recompiled and will run on another Unix system. As long as the set of system calls is the same on both systems, the program will run on both.

Another group of operating systems which share a common API are the Windows systems from Microsoft: Windows CE, Windows 95 or 98, and Windows NT. Although structurally different, a program can be written to run on all three.

2.6 Unix and Laboratory Work

The aim of this course is not to examine the implementation of a particular operating system.
Instead, we will be looking at the abstractions provided by operating systems, and the design tradeoffs that must be made when constructing an operating system. In the laboratory work in this course, we will be using the Unix operating system to look at some of its abstract concepts, and to see some of their implementation details. It is in your best interests to learn a bit about Unix and what it provides to the programmer and the user.

Section 1.3 of Tanenbaum's textbook gives a good overview of the main Unix concepts. For the programmers who are interested in Unix's system calls, an overview of these is given in Section 1.4.

Note that Sections 1.3 and 1.4 cover the Minix system. As noted above, Minix is a particular implementation of Unix. Sections 1.3 and 1.4 cover the core concepts and system calls that are available in all Unix systems.

2.7 Operating System Structure

Textbook reference: Tanenbaum & Woodhull ppg 37 – 44

The implementation of an operating system is completely up to its designers, and throughout the course we will look at some of the design decisions that must be made when creating an operating system. In general, none of the implementation details of an operating system are visible to the programmer or user: these details are hidden behind the operating system's Application Program Interface. The API fixes the “look” of the operating system, as seen by the programmer.

This API, however, can be implemented by very different operating system designs. For example, Solaris, Linux and Minix all provide a POSIX API, yet all three systems have a very different operating system architecture. We will examine the two most common operating system designs, the monolithic model and the client-server model.

2.8 The Monolithic Model

In the monolithic model, the operating system is written as a collection of routines, each of which can call any of the other routines as required.
At build-time, each routine is compiled, and then they are all linked together to create a single program called the operating system kernel. When the operating system is started, this kernel is loaded into the computer's memory, and runs in kernel mode. Most versions of Unix, including Linux, use the monolithic design model.

The monolithic design model suffers from the fact that every part of the operating system can see all the other parts; thus, a bug in one part may destroy the data that another part is using. Recompilation of the operating system can also be slow and painful. To reduce this shortcoming, most designers place some overriding structure on their operating system design. Many of the routines and data structures are `hidden' in some way, and are visible only to the other routines that need them.

An abstract map of Unix's architecture is shown in the diagram on the following page. As you can see, the functionality provided by the kernel is broken up into a number of sections. Each section provides a small number of interface routines, and it is these routines which can be used by the other sections. Because Unix is monolithic, nothing stops one section of the operating system from calling another with function calls, or using another section's data. Each box is a set of C source files.
[Diagram: Overview of the Unix Kernel Architecture. In user mode sit high-level language programs above the system call interface. The upper half of the kernel performs system call argument checking and contains a file type switch, file and stream management, the socket/callout interface, terminal management, memory management, process management, the Internet network stack, the buffer cache, the file system and the libraries, sharing the upper-half kernel data structures. The lower half of the kernel contains the device drivers, syscall trap handling, low-level process scheduling, context switching, interrupt and exception handling, the dispatch table and callout code, with data structures shared between the upper and lower halves.]

2.9 Client-Server Model

An alternative method of operating system design, called the client-server model, tries to minimise the chance of a bug in one part of the operating system corrupting another part. In this model, most of the operating system services are implemented as privileged processes called servers. Remember, each process is protected against interference by other processes. These servers have some ability to access the computer's hardware, which ordinary processes cannot.

Ordinary processes are known as clients. These send requests in the form of messages to the servers, which do the work on their behalf and return a reply. The set of services that the servers provide to the user processes thus forms the operating system's Application Program Interface.

[Diagram: client processes and the process, terminal, file and memory servers all run in user mode above the kernel; a client obtains service by sending messages to the server processes.]

The messages sent between the clients and servers are well-defined `lumps' of data. These must be copied between the client and the server. This copying can slow the overall system down, when compared to a monolithic system where no such copying is required. The servers themselves also may need to intercommunicate. There must be a layer in the operating system that does message passing.
This model can be implemented on top of a single machine, where messages are copied from a client's memory area into the server's memory area. The client-server model can also be adapted to work over a network or distributed system where the processes run on several machines.

[Diagram: a client on Machine 1 sends a message over the network to the file, process and terminal servers on Machines 2 – 4; each machine runs its own kernel.]

Windows NT uses the client-server model, as shown in the diagram below. Most of the subsystems are privileged processes. The NT executive, which provides the message-passing capability, is known as a monolithic microkernel. Other client-server based operating systems are Minix and Plan 9.

3 The OS/Machine Interface

Textbook reference: Stallings ppg 9 – 38

The operating system must hide the physical resources of the computer from the user's programs, and fairly allocate these resources. In order to explore how this is achieved, we must first consider how the main components of a computer work.

There are several viewpoints on how a computer works: how its electronic gates operate, how it executes machine code etc. We will examine the main functional components of a computer and their abstract interconnection. We will ignore complications such as caches, pipelines etc.

3.1 The CPU

The CPU is the part of the computer where instructions are executed, as shown below. We won't delve too much into the operation of the CPU in this course. However, you should note that the CPU contains a small amount of extremely fast storage space in the form of a number of registers.

In order to execute an instruction, the CPU must fetch a word (usually) from main memory and decode the instruction; then the instruction can be performed. The Program Counter, or PC, in the CPU indicates from which memory location the next instruction will be fetched. The PC is an example of a register.
Some instructions may cause the CPU to read more data from main memory, or to write data back to main memory. This occurs when the data needed to perform the operation must be loaded from the main memory into the CPU's registers. Of course, if the CPU already has the correct data in its registers, then no main memory access is required, except to fetch the instruction.

As the number of internal registers is limited, data currently in a register often has to be discarded so that it can be replaced by new data that is required to perform an instruction. Such a discard is known as a register spill.

3.2 Main Memory

The main memory is the storage place for the instructions which are being executed by the CPU, and also the data which is required by the running program. As we have seen, the CPU fetches instructions, and sometimes data, from main memory as part of its normal operation.

Main memory is organised as a number of locations, each with a unique address, and each holding a particular value. Main memory is typically composed of Random Access Memory (RAM), which means that the CPU can read from a memory location, or the CPU can overwrite the contents of a memory location with a new value. When registers are spilled, the CPU often saves the old register value into a location in the main memory. Main memory is also used to hold buffers of data which will be written out to permanent storage on the disk.

Parts of main memory may be occupied by Read Only Memory (ROM). Write operations to ROM are electrically impossible, and so a write has no effect on its contents.

3.3 Buses

The CPU and main memory are connected via three buses:

- The address bus, which carries the address in main memory which the CPU is accessing. Its size indicates how many memory locations the CPU can access. A 32-bit address bus allows 2^32 address locations, giving 4 gigabytes of addressable memory.
- The data bus, which carries the data being read to/from that address.
Its size indicates the natural data size for the machine. A 32-bit machine means that its data bus is 32 bits wide.

- The status bus, which carries information about the memory access and other special system events to all devices connected to the three buses.

[Diagram: the CPU, main memory, and the tape drive, disk drive and UART controllers, each with its own address decoder, all connected to the address, data and status bus lines. The status bus carries lines such as `valid address', the read/write bit, `CPU halted', reset, assert halt, several interrupt levels, and DMA.]

Here is how an instruction is fetched from main memory:

- The CPU places the value of the program counter on the address bus.
- It asserts a `read' signal on the read/write line (part of the status bus).
- Main memory receives both the address request and the type of request (read).
- Main memory retrieves the value required from its hardware, and places the value on the data bus. It then asserts the `valid address' line on the status bus.
- Meanwhile, the CPU waits a period of time watching the `valid address' line.
- If a `valid address' signal is returned, the value (i.e. the next instruction) is loaded off the data bus and into the CPU.
- If no `valid address' signal is returned, there is an error. The CPU will perform some exceptional operation instead.

Read accesses for data, and the various write requests, are performed in a similar fashion. Note that main memory needs an address decoder to work out which addresses it should respond to, and which it should ignore. Most computers don't have their entire address space full of main memory. This implies that reads or writes to certain memory locations will always fail.
Here are some example computers and their address & data bus sizes:

    Computer       Address Bus   Data Bus
    IBM XT         20-bit        8-bit
    IBM AT         24-bit        16-bit
    486/Pentium    32-bit        32-bit
    68030/040      32-bit        32-bit
    Sparc ELC      32-bit        32-bit
    DEC Alpha      64-bit        64-bit

3.4 Peripheral Devices

Textbook reference: Tanenbaum & Woodhull ppg 154 – 157

The computer must be able to do I/O, so as to store data on long-term storage devices, and to communicate with the outside world. However, we don't want the CPU to be burdened with the whole task of doing I/O, i.e. controlling every electrical & mechanical aspect of every peripheral. Therefore, most devices have a device controller which takes device commands from the CPU, performs them on the actual device, and reports the results (and any data) back to the CPU.

The CPU communicates with the device controllers via the three buses. Therefore, the controllers usually appear to be memory locations from the CPU's point of view. Each device controller has a decoder which tells the device if the asserted address belongs to that device. If so, parts of the address and the data are written to/from the device.

Usually, this means that the device controller is mapped into the computer's address space. And because main memory has its own decoder, we can say that the locations in main memory are also mapped into the computer's address space:

[Diagram: I/O decoding addresses. The address space from 0 up to 1,000,000 holds RAM (0 – 730,000), Video (730,000 – 800,000), the UART (800,000 – 800,032), the Disk (800,100 – 800,200) and ROM (900,000 – 1,000,000); the remaining regions are unmapped.]

In the diagram above, the UART (serial I/O) controller decodes addresses 800,000 to 800,031, which is 32 addresses. It ignores addresses outside this region, and the decoder passes the values 0 to 31 to the controller when the address is inside the region.
Assume this UART uses the following addresses:

    Decoded    Real       Use of this location     Format of this location
    Location   Location
    0          800,000    Output format            Speed (4), Parity (2), Stop bits (2)
    1          800,001    Output status register
    2          800,002    Output character
    3          800,003    Input format             Speed (4), Parity (2), Stop bits (2)
    4          800,004    Input status register
    5          800,005    Input character

These special addresses are known as device registers, and are similar to the registers inside a CPU. To output a character, first the operating system must set up the output characteristics:

- The CPU asserts the address 800,000 on the address bus.
- It places a word of data onto the data bus. This describes the format of output to be used by the UART.
- It asserts `write' on the read/write line.
- It waits a period of time.
- If no `valid address' signal is returned, there is an error.

Then, to output a character, the character is sent to 800,002 as above. The UART latches the character, and it is transmitted over the serial line at the bit rate set in the output format.

Input from a device is more complicated. There are three types: polling, interrupts, and direct memory access (DMA). We will leave DMA until later.

With polling, the UART leaves the input character at the address 800,005 and an indicator that a character has arrived at the address 800,004. The CPU must periodically scan (i.e. read) this address to determine if a character has arrived. Because of this periodic checking, polling makes multitasking difficult if not impossible: the frequent reading cannot be performed by the operating system if a running program has sole use of the CPU.

poll: v.,n. 1. techspeak The action of checking the status of an input line, sensor, or memory location to see if a particular external event has been registered. 2.
To repeatedly call or check with someone: “I keep polling him, but he's not answering his phone; he must be swapped out.”

3.5 Interrupts

An alternative way for the operating system to find out when input has arrived, or when output has been completed, is to use interrupts. If a computer uses interrupts for I/O operations, a device will assert an interrupt line on the status bus when an I/O operation has been completed. Each device has its own interrupt line.

For example, when a character arrives, the UART described above asserts its interrupt line. This sends a signal in to the CPU along the status bus. If the interrupt has priority greater than any other asserted interrupt line, the CPU will stop what it is doing, and jump to an interrupt handler for that line. This interrupt handler is a section of machine code placed at a fixed location in main memory. Here, the interrupt handler will collect the character, do something with it and then return the CPU to what it was doing before the handler started, i.e. the program running before the interrupt came in. Generally speaking, interrupt handlers are a part of the operating system.

Interrupts are prioritised. The CPU is either running the current program, or dealing with the highest interrupt sent in from devices along the status bus. If an interrupt's priority is too low, then the interrupt will remain asserted until the other interrupts finish, and the CPU can handle it. Alternatively, if a new interrupt has a priority higher than the one currently being handled by the CPU, then the CPU diverts to the new interrupt's handler, just as it did when it left the running program. The CPU has an internal status register which holds the value of the current interrupt being handled. Normal programs run at a level below the lowest interrupt priority.

3.6 Interrupt Vectors

To ensure that the CPU goes back to what it was doing, old values of the program counter are stacked in interrupt-level order somewhere.
Each time an interrupt handler is called, the program counter's value is stacked, and the PC is set to the address of the first instruction in the interrupt handler. The last instruction in an interrupt handler must unstack an old PC value, and put it back into the program counter. All CPUs have a special instruction (often known as ReTurn from Interrupt, or RTI) which does the unstacking.

Each interrupt level has its own interrupt handler. How does the CPU know where each handler is stored in main memory? A table of vectors is kept in main memory, one for each interrupt level. Each vector holds the address of the first instruction in the appropriate interrupt handler.

    Address   Holds Address Of For     Example
    0         Reset Handler            1,000,870
    1         IRQ 1 - Keyboard         1,217,306
    2         IRQ 2 - Mouse            1,564,988
    15        IRQ 15 - Disk            1,550,530
    16        Zero Divide              1,019,640
    17        Illegal instruction      1,384,200
    18        Bad mem access           1,223,904
    19        TRAP                     1,758,873

The above fictitious table shows where the vectors might be stored in main memory, their value (i.e. where the first address of each interrupt handler is), and what interrupt is associated with each.

Most CPUs keep vectors for other `abnormal' events, such as the attempt to execute an illegal instruction, to access a memory location which doesn't exist etc. These events are known as exceptions. If any of these exceptions occur, the CPU starts running the appropriate handler for the error. All vectors should point to interrupt handlers within the operating system, and not to handlers written by users. Why?

3.7 The OS vs The User

The operating system must hide the actual computer from the users and their programs, and present an abstract interface to the user instead. The operating system must also ensure fair resource allocation to users and programs. The operating system must shield each user and her programs from all other users and programs. Therefore, the operating system must prevent all access to devices by user programs.
It must also limit each program's access to main memory, to only that program's memory locations. These restrictions are typically built into the CPU (i.e. into unchangeable hardware) as two operating modes: user and kernel mode. In kernel mode, all memory is visible, all devices are visible, all instructions can be executed. The operating system must run in kernel mode; why?

In user mode, all devices are hidden, and most of main memory is hidden. This is performed by the Memory Management Unit, of which we will learn more later. Instructions relating to device access, interrupt handling and mode changing cannot be executed either. When user programs run, the operating system forces them to run in user mode. Any attempt to violate the user mode will cause an exception, which starts an interrupt handler running. Because the interrupt handler is part of the operating system, the operating system can thus determine when user mode violations have been attempted.

Every interrupt or exception causes the CPU to switch from its current mode into kernel mode. Why? The previous mode is stacked so that the correct mode is restored when an interrupt handler finishes.

Finally, because a user program runs in user mode and can only see its own memory, it cannot see the operating system's instructions or data. This prevents nosy user programs from subverting the working of the operating system.

3.8 Traps and System Calls

Textbook reference: Tanenbaum & Woodhull ppg 37 – 38

If the operating system is protected, how does a program ask for services from the OS? User programs can't call functions within the operating system's memory, because they can't see those areas of memory. A special user-mode machine instruction, known as a TRAP instruction, causes an exception, switches the CPU mode to kernel mode, and starts the handler for the TRAP instruction.
To ask for a particular service from the operating system, the user program puts values in machine registers to indicate what service it requires. Then it executes the TRAP instruction, which changes the CPU mode to privileged mode, and moves execution to the TRAP handler in the operating system's memory. The operating system checks the request, and performs it, using a dispatch table to pass control to one of a set of operating system service routines.

Figure 1-16. How a system call can be made: (1) User program traps to the kernel. (2) Operating system determines service number required. (3) Operating system calls service procedure. (4) Control is returned to user program.

When the service has been performed, the operating system returns control to the program, lowering the privileges back to user mode. Thus, the job only has access to the privileged operating system via a single, well-protected entry point. This mechanism for obtaining operating system services is known as a system call. The set of available system calls is known as that operating system's Application Program Interface or API.

trap: 1. n. A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the OS performs some action, then returns control to the program. 2. vi. To cause a trap. “These instructions trap to the monitor.” Also used transitively to indicate the cause of the trap. “The monitor traps all input/output instructions.”

4 Operating System History and Evolution

Textbook reference: Stallings ppg 58 – 68; Tanenbaum & Woodhull ppg 5 – 13

The history and development of operating systems is described in some detail in the textbook. We will only cover the highlights here.

4.1 1st Generation: 1945 – 1955

The first computers were built around the mid-1940s, using vacuum tubes.
On these computers, a program's instructions were hard-wired. The computer needed to be manually rewired to change programs. The MTBF for these machines was on the order of hours.

These 1st generation machines were programmed in machine code by individuals who knew the hardware intimately. Later, assembly code was developed to make the programming slightly easier. There were no mode distinctions; effectively, all instructions ran in privileged mode.

At the time, machines had no operating system: you wrote all the code yourself, or used other programmers' routines. Eventually, groups of people developed libraries of routines to help with the task of programming.

You usually had to book time slots to use the computer. Often the slot was too short. Sometimes (if your program worked), the slot was too long. This of course led to CPU wastage. Most early programs were numerical calculations, and very CPU-intensive. The early 1950s saw the introduction of punched cards to speed programming.

bare metal: n. 1. New computer hardware, unadorned with such snares and delusions as an operating system, a high-level language, or even an assembler. Commonly used in the phrase `programming on the bare metal', which refers to the arduous work needed to create these basic tools for a new machine. Real bare-metal programming involves things like building boot proms and BIOS chips, implementing basic monitors used to test device drivers, and writing the assemblers that will be used to write the compiler back ends that will give the new machine a real development environment.

Stone Age: n., adj. 1. In computer folklore, an ill-defined period from ENIAC (ca. 1943) to the mid-1950s; the great age of electromechanical dinosaurs, characterised by hardware such as mercury delay lines and/or relays.

4.2 2nd Generation: 1955 – 1965

The introduction of the transistor made computers more reliable. It was now possible for companies to sell/lease the computers they built to 3rd parties.
Computers were used more efficiently by employing people to run the machines for the customer. High-level languages such as FORTRAN and COBOL were invented, which made programming much easier and somewhat more portable.

To run a program, a user would punch their code/data onto cards and give the deck of cards to operators, who would feed them to the computer, and return printout/new cards to the user. Each program run was known as a job. Doing things this way made programs hard to debug due to the slow turnaround, but meant that the CPU was utilised more. However, the CPU still sat idle between jobs.

The next idea was to batch similar jobs to make the execution faster, e.g. all FORTRAN jobs. Similar jobs were batched and copied from card to magnetic tape. The tape was then fed to the computer, and the output was also sent to tape, then converted to printout.

Figure 1-2. An early batch system. (a) Programmers bring cards to 1401. (b) 1401 reads batch of jobs onto tape. (c) Operator carries input tape to 7094. (d) 7094 does computing. (e) Operator carries output tape to 1401. (f) 1401 prints output.

The CPU was thus less idle, as the tape could be read/written faster than the punched cards. However, the CPU was still mostly idle when reading from or writing to the tape. Why is this? Reading a piece of data from main memory is very quick, because it is completely electronic. Reading the same piece of data from tape is much slower, as the tape is a mechanical device. Punched cards are even slower.

The first basic operating system performed batch operations: for each job on the input tape, load the job, run it, send any output to a second tape, and move onto the next job. Because the operating system must keep its instructions in main memory to work, it had to be protected to prevent itself from being destroyed by the jobs that it was loading and running.
In this generation, the jobs were mainly scientific and engineering calculations.

Bronze Age: n. 1. Era of transistor-logic, pre-ferrite-core machines with drum or CRT mass storage.

4.3 3rd Generation: 1965 – 1980

In the 3rd Generation, integrated circuits (ICs) made machines smaller and more reliable, although they were initially more expensive. Companies found they outgrew their machines, but each model had different batch systems. This meant that each change of computer system involved a recoding of jobs and retraining of operators.

To alleviate these problems, IBM decided to create a whole family of machines, each with a similar hardware architecture, with a single operating system that ran on every family member. This was the System/360 family, and OS/360. OS/360 ran to millions of lines of code, with a constant number of bugs, even in new system releases.

Computer usage moved away from purely scientific work to business work, e.g. inventories. These types of jobs were more I/O intensive (lots of reading/writing on tape). The CPU became idle waiting for the tape while processing these I/O intensive jobs, and so CPU utilisation dropped again. The solution to the problem of CPU utilisation on I/O jobs was multiprogramming:

 Have more than one job in memory, each protected from the others.

 As one job goes idle waiting for I/O, the operating system can switch to another job which is waiting for the CPU. Alternatively, the operating system could start up another job if no current jobs are waiting for the CPU.

This could give over 90% CPU utilisation, but with some overhead caused by the switching between jobs. To improve performance further, disks were used to cache/spool jobs (i.e. both the programs to execute and their associated data). Disks were faster to access than tape, especially for random access, where the data is accessed in no particular order from the disk.
These systems still suffered from slow job turnaround: the users had to wait for a job to run to termination (or crash) before they could do any reprogramming.

Iron Age: n. In the history of computing, 1961–1971 — the formative era of commercial mainframe technology, when big iron dinosaurs ruled the earth. These began with the delivery of the first PDP-1, coincided with the dominance of ferrite core, and ended with the introduction of the first commercial microprocessor (the Intel 4004) in 1971.

4.4 Timesharing

A method of overcoming the slow job turnaround was introduced at this point: timesharing. On a timesharing system, the operating system swapped very quickly between jobs (even if the current job was still using the CPU), allowing input/output to come from users on terminals instead of tape or punched cards. This switching away from a job using the CPU is known as pre-emption, and is the hallmark of an interactive multiprogramming operating system.

The Multics operating system was designed at this time, to support hundreds of users simultaneously. Its design was good, and introduced many new ideas, but it was very expensive hardware-wise, and fizzled out with the introduction of minicomputers.

Multics: n. from “MULTiplexed Information and Computing Service” A late 1960s timesharing operating system co-designed by a consortium including MIT, GE, and Bell Laboratories. Very innovative for its time — among other things, it introduced the idea of treating all devices uniformly as special files. All the members but GE eventually pulled out after determining that second-system effect had bloated Multics to the point of practical unusability. One of the developers left in the lurch by the project's breakup was Ken Thompson, a circumstance which led directly to the birth of UNIX.

4.5 3rd Generation – Part 2

Minicomputers arrived, introduced with the PDP-1 in 1961. These machines were only 5% the cost of mainframes, but gave about 10% – 20% of their performance.
These made minicomputers affordable to individual departments, not just to large companies. Although Multics died, many of its ideas were passed on to Unix. Unix was mostly written in a high-level language called `C', thus aiding ports to new hardware. In fact, it was one of the first portable operating systems. Both minicomputers and mainframes got faster/cheaper, and minis picked up more mainframe operating system ideas as time went on.

4.6 4th Generation: 1980 onwards

Microcomputers brought computers to individuals. They began by using the 1st generation operating system ideas, but have been catching up ever since. Networking was introduced, allowing machines to be connected over small/large distances. Operating systems had functionality added to allow the files (or other services) of machines to be accessible by other machines over the network. Such systems are known as network operating systems.

In another approach to using the connectivity provided by a network, distributed operating systems were created. These make all the machines on a network appear to be part of one single machine. Because of the immense power of the new machines, the emphasis on software design shifted away from system performance and efficiency to user interface and applications.

killer micro: n. A microprocessor-based machine that infringes on mini, mainframe, or supercomputer performance turf. Often heard in “No one will survive the attack of the killer micros”, the battle cry of the downsizers. Used esp. of RISC architectures.
