Operating Systems (CS604)
Delivered by Dr. Syed Mansoor Sarwar
Virtual University of Pakistan
© Copyright Virtual University of Pakistan

Operating Systems CS-604, Lecture No. 1

Reading Material
- Operating Systems Concepts, Chapter 1
- PowerPoint slides for Lecture 1

Summary
- Introduction and purpose of the course
- Organization of a computer system
- Purpose of a computer system
- Requirements for achieving the purpose: setting the stage for OS concepts and principles
- Outline of topics to be discussed
- What is an operating system?

Organization of a Computer System
As shown in Figure 1.1, the major high-level components of a computer system are:
1. Hardware, which provides the basic computing resources (CPU, memory, I/O devices).
2. Operating system, which manages the use of the hardware among the various application programs for the various users and provides the user a relatively simple machine to use.
3. Application programs, which define the ways in which system resources are used to solve the computing problems of the users (compilers, database systems, video games, business programs).
4. Users, which include people, machines, and other computers.

Figure 1.1 High-level components of a computer system

Purpose of a Computer: Setting the Stage for OS Concepts and Principles
Computer systems consist of software and hardware that are combined to provide a tool for implementing solutions to specific problems efficiently and for executing programs. Figure 1.2 shows the general organization of a contemporary computer system (a processor with integer, floating-point, and control units and a cache; the system bus; a memory bus with RAM/ROM; and I/O devices such as mouse, keyboard, CD, hard disk, printer, and monitor) and how these components are interconnected.

Figure 1.2 Organization of a computer system

A closer look reveals that the primary purpose of a computer system is to generate executable programs and execute them. The following are some of the main issues involved in performing these tasks:
1. Storing an executable on a secondary storage device such as a hard disk
2. Loading the executable from disk into main memory
3. Setting the CPU state appropriately so that program execution can begin
4. Creating multiple cooperating processes, synchronizing their access to shared data, and allowing them to communicate with each other

The above issues require the operating system to provide the following services, and much more:
- Manage secondary storage devices
  - Allocate an appropriate amount of disk space when files are created
  - Deallocate space when files are removed
  - Ensure that a new file does not overwrite an existing file
  - Schedule disk requests
- Manage primary storage
  - Allocate an appropriate amount of memory space when programs are to be loaded into memory for execution
  - Deallocate space when processes terminate
  - Ensure that a new process is not loaded on top of an existing process
  - Ensure that a process does not access memory space that does not belong to it
  - Minimize the amount of unused memory space
  - Allow execution of programs larger in size than the available main memory
- Manage processes
  - Allow simultaneous execution of processes by scheduling the CPU(s)
  - Prevent deadlocks between processes
  - Ensure integrity of shared data
  - Synchronize executions of cooperating processes
- Allow a user to manage his/her files and directories properly
  - Provide a user view of the directory structure
  - Provide a mechanism that allows users to protect their files and directories

In this course, we will discuss these operating system services (and more) in detail, with a particular emphasis on the UNIX and Linux operating systems. See the course outline for details of topics and the lecture schedule.

What is an Operating System?
There are two views about this. The top-down view is that an operating system is a program that acts as an intermediary between a user of a computer and the computer hardware, and makes the computer system convenient to use. It is because of the operating system that users of a computer system don't have to deal with the computer's hardware to get their work done. Users can use simple commands to perform various tasks and let the operating system do the difficult work of interacting with the hardware. Thus, you can use a command like copy file1 file2 to copy 'file1' to 'file2' and let the operating system communicate with the controller(s) of the disk(s) that contain the two files.

A computer system has many hardware and software resources that may be required to solve a problem: CPU time, memory space, file storage space, I/O devices, etc. The operating system acts as the manager of these resources. Facing numerous and possibly conflicting requests for resources, the operating system must decide how (and when) to allocate (and deallocate) them to specific programs and users so that it can operate the computer system efficiently, fairly, and securely. So, the bottom-up view is that the operating system is a resource manager that manages the hardware and software resources in the computer system.

A slightly different view of an operating system emphasizes the need to control the various I/O devices and programs. In this view, an operating system is a control program that manages the execution of user programs to prevent errors and improper use of the computer.
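To make the top-down view a bit more concrete, the sketch below shows roughly what a command like copy file1 file2 asks the operating system to do on a UNIX/Linux system. It is a minimal illustration, not the implementation of any real copy utility: it simply issues open, read, write, and close requests and lets the kernel deal with the disk controllers.

    /* mycopy.c -- a minimal sketch of what a "copy file1 file2" command
     * ultimately asks the operating system to do (illustrative only). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        char buf[4096];
        ssize_t n;

        if (argc != 3) {
            fprintf(stderr, "Usage: %s source destination\n", argv[0]);
            exit(1);
        }

        int in = open(argv[1], O_RDONLY);           /* ask the OS to open the source */
        if (in == -1) { perror("open source"); exit(1); }

        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);  /* create destination */
        if (out == -1) { perror("open destination"); exit(1); }

        /* The user program never touches the disk controller; it just issues
         * read and write system calls and the kernel performs the device I/O. */
        while ((n = read(in, buf, sizeof(buf))) > 0)
            if (write(out, buf, n) != n) { perror("write"); exit(1); }

        close(in);
        close(out);
        return 0;
    }

Note that nothing in this program talks to the disk hardware directly; the open, read, and write requests are handed to the operating system, which drives the disk controllers on the program's behalf.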
Operating Systems CS-604, Lecture No. 2

Reading Material
- Operating Systems Concepts, Chapter 1
- PowerPoint slides for Lecture 2

Summary
- Single-user systems
- Batch systems
- Multi-programmed systems
- Time-sharing systems
- Real-time systems
- Interrupts, traps, and software interrupts (UNIX signals)
- Hardware protection

Single-user Systems
A computer system that allows only one user to use the computer at a given time is known as a single-user system. The goals of such systems are maximizing user convenience and responsiveness, instead of maximizing the utilization of the CPU and peripheral devices. Single-user systems use I/O devices such as keyboards, mice, display screens, scanners, and small printers. They can adopt technology developed for larger operating systems. Often individuals have sole use of the computer and do not need advanced CPU utilization and hardware protection features. They may run different types of operating systems, including DOS, Windows, and MacOS. Linux and UNIX operating systems can also be run in single-user mode.

Batch Systems
Early computers were large machines run from a console, with card readers and tape drives as input devices and line printers, tape drives, and card punches as output devices. The user did not interact directly with the system; instead, the user prepared a job (which consisted of the program, data, and some control information about the nature of the job in the form of control cards) and submitted it to the computer operator. The job was in the form of punch cards, and at some later time the output was generated by the system; the user did not get to interact with his/her job. The output consisted of the result of the program, as well as a dump of the final memory and register contents for debugging.

To speed up processing, operators batched together jobs with similar needs and ran them through the computer as a group. For example, all FORTRAN programs were compiled one after the other. The major task of such an operating system was to transfer control automatically from one job to the next. In this execution environment, the CPU is often idle because the speeds of the mechanical I/O devices, such as a tape drive, are slower than those of electronic devices. Such systems, in which the user does not get to interact with his/her jobs and jobs with similar needs are executed in a "batch", one after the other, are known as batch systems. Digital Equipment Corporation's VMS is an example of a batch operating system.

Figure 2.1 shows the memory layout of a typical computer system, with the system space containing operating system code and data currently in use and the user space containing user programs (processes). In the case of a batch system, the user space contains one process at a time because only one process is executing at a given time.

Figure 2.1 Memory partitioned into user and system spaces

Multi-programmed Systems
Multi-programming increases CPU utilization by organizing jobs so that the CPU always has one to execute. The operating system keeps several jobs in memory simultaneously, as shown in Figure 2.2. This set of jobs is a subset of the jobs kept in the job pool on disk, that is, jobs that are ready to run but cannot all be loaded into memory due to lack of space. Since the number of jobs that can be kept simultaneously in memory is usually much smaller than the number of jobs that can be in the job pool, the operating system picks and executes one of the jobs in memory. Eventually that job has to wait for some task, such as an I/O operation, to complete. In a non-multi-programmed system, the CPU would sit idle. In a multi-programmed system, the operating system simply switches to, and executes, another job. When that job needs to wait, the CPU switches to another job, and so on.

Figure 2.2 Memory layout for a multi-programmed batch system

Figure 2.3 illustrates the concept of multiprogramming by using an example system with two processes, P1 and P2.
The CPU is switched from P1 to P2 when P1 finishes its CPU burst and needs to wait for an event, and vice versa when P2 finishes its CPU burst and has to wait for an event. This means that when one process is using the CPU, the other is waiting for an event (such as I/O to complete). This increases the utilization of the CPU and I/O devices as well as the throughput of the system. In the example of Figure 2.3, where each CPU burst and each I/O burst is one time unit, P1 and P2 would finish their execution in 10 time units if no multiprogramming is used and in six time units if multiprogramming is used.

Figure 2.3 Illustration of the multiprogramming concept

All jobs that enter the system are kept in the job pool. This pool consists of all processes residing on disk awaiting allocation of main memory. If several jobs are ready to be brought into memory and there is not enough room for all of them, then the system must choose among them; this decision is called job scheduling. In addition, if several jobs are ready to run at the same time, the system must choose among them. We will discuss CPU scheduling in Chapter 6.

Time-sharing Systems
A time-sharing system is a multi-user, multi-process, and interactive system. This means that it allows multiple users to use the computer simultaneously. A user can run one or more processes at the same time and interact with his/her processes. A time-shared system uses multiprogramming and CPU scheduling to provide each user with a small portion of a time-shared computer. Each user has at least one separate program in memory. To obtain a reasonable response time, jobs may have to be swapped in and out of main memory. UNIX, Linux, Windows NT Server, and Windows 2000 Server are time-sharing systems. We will discuss various elements of time-sharing systems throughout the course.

Real-time Systems
Real-time systems are used when rigid time requirements are placed on the operation of a processor or the flow of data; thus a real-time system is often used as a control device in a dedicated application. Examples are systems that control scientific experiments, medical imaging systems, industrial control systems, and certain display systems.

A real-time system has well-defined, fixed time constraints, and if the system does not produce output for an input within the time constraints, the system will fail. For instance, it would not do for a robot arm to be instructed to halt after it had smashed into the car it was building. Real-time systems come in two flavors: hard and soft. A hard real-time system guarantees that critical tasks be completed on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it. Secondary storage of any sort is usually limited or missing, with data instead being stored in short-term memory or in read-only memory. Most advanced operating system features are absent too, since they tend to separate the user from the hardware, and that separation results in uncertainty about the amount of time an operation will take. A less restrictive type of real-time system is a soft real-time system, where a critical real-time task gets priority over other tasks and retains that priority until it completes. As in hard real-time systems, operating system kernel delays need to be bounded.
Soft real time is an achievable goal that can be mixed with other types of systems, whereas hard real-time systems conflict with the operation of other systems, such as time-sharing systems, and the two cannot be mixed.

Interrupts, Traps, and Software Interrupts
An interrupt is a signal generated by a hardware device (usually an I/O device) to get the CPU's attention. An interrupt transfers control to the interrupt service routine (ISR), generally through the interrupt vector table, which contains the addresses of all the service routines. The interrupt service routine executes; on completion, the CPU resumes the interrupted computation. The interrupt architecture must save the address of the interrupted instruction. Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt. An operating system is interrupt-driven software.

A trap (or an exception) is a software-generated interrupt caused either by an error (such as division by zero or an invalid memory access) or by a user request for an operating system service.

A signal is an event generated to get the attention of a process. An example of a signal is the event that is generated when you run a program and then press Ctrl-C. The signal generated in this case is called SIGINT (the interrupt signal). Three actions are possible on a signal:
1. The kernel-defined default action, which usually results in process termination and, in some cases, generation of a 'core' file that can be used by the programmer/user to know the state of the process at the time of its termination.
2. The process can intercept the signal and ignore it.
3. The process can intercept the signal and take a programmer-defined action.
We will discuss signals in detail in some of the subsequent lectures; a small example of the second and third actions is sketched below.
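The following sketch (an illustrative assumption about how such a program might be written on UNIX/Linux, using the standard signal interface) shows a process that intercepts SIGINT and takes its own action instead of accepting the default termination.

    /* sigint_demo.c -- intercept SIGINT (Ctrl-C) instead of taking the
     * kernel-defined default action of terminating the process.
     * Illustrative sketch only. */
    #include <signal.h>
    #include <unistd.h>

    void handler(int signo)
    {
        /* Programmer-defined action: just report that the signal arrived. */
        write(STDOUT_FILENO, "Caught SIGINT; still running\n", 29);
    }

    int main(void)
    {
        signal(SIGINT, handler);      /* take our own action on Ctrl-C         */
        /* signal(SIGINT, SIG_IGN);      alternatively, ignore the signal      */

        while (1)
            pause();                  /* wait for signals to arrive            */
        return 0;
    }

Compile and run it, then press Ctrl-C: instead of terminating (the default action), the process prints the message and keeps running.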
Hardware Protection
Multi-programming puts several programs in memory at the same time; while this increases system utilization, it also increases problems. With sharing, many processes could be adversely affected by a bug in one program. One erroneous program could also modify the program or data of another program, or even the resident part of the operating system. A file may overwrite another file or folder on disk. A process may get the CPU and never relinquish it. So the issues of hardware protection are: I/O protection, memory protection, and CPU protection. We will discuss them one by one, but first we talk about the dual-mode operation of a CPU.

a) Dual Mode Operation
To ensure proper operation, we must protect the operating system and all other programs and their data from any malfunctioning program. Protection is needed for any shared resource. The instruction set of a modern CPU has two kinds of instructions: privileged instructions and non-privileged instructions. Privileged instructions can be used to perform hardware operations that a normal user process should not be able to perform, such as communicating with I/O devices. If a user process tries to execute a privileged instruction, a trap should be generated and the process should be terminated prematurely. At the same time, operating system code should be allowed to execute privileged instructions. In order for the CPU to be able to differentiate between a user process and operating system code, we need two separate modes of operation: user mode and monitor mode (also called supervisor mode, system mode, or privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: monitor mode (0) or user mode (1). With the mode bit, we are able to distinguish between a task that is executed on behalf of the operating system and one that is executed on behalf of the user.

The concept of privileged instructions also provides us with the means for the user to interact with the operating system by asking it to perform designated tasks that only the operating system should do. A user process can request the operating system to perform such tasks for it by executing a system call. Whenever a system call is made or an interrupt, trap, or signal is generated, the CPU mode is switched to system mode before the relevant kernel code executes. The CPU mode is switched back to user mode before control is transferred back to the user process. This is illustrated by the diagram in Figure 2.4.

Figure 2.4 The dual-mode operation of the CPU

b) I/O Protection
A user process may disrupt the normal operation of the system by issuing illegal I/O instructions, by accessing memory locations within the operating system itself, or by refusing to relinquish the CPU. We can use various mechanisms to ensure that such disruptions cannot take place in the system. To prevent users from performing illegal I/O, we define all I/O instructions to be privileged instructions. Thus users cannot issue I/O instructions directly; they must do it through the operating system. For I/O protection to be complete, we must be sure that a user program can never gain control of the computer in monitor mode. If it could, I/O protection could be compromised. Consider a computer executing in user mode. It will switch to monitor mode whenever an interrupt or trap occurs, jumping to the address determined from the interrupt vector. If a user program, as part of its execution, could store a new address in the interrupt vector, it could overwrite the previous address with an address in the user program. Then, when a corresponding trap or interrupt occurred, the hardware would switch to monitor mode and transfer control through the modified interrupt vector table to the user program, causing it to gain control of the computer in monitor mode. Hence we need all I/O instructions, and instructions for changing the contents of the system space in memory, to be protected. A user process can request a privileged operation by executing a system call such as read (for reading a file).

Operating Systems CS-604, Lecture No. 3

Reading Material
- Computer System Structures, Chapter 2
- Operating Systems Structures, Chapter 3
- PowerPoint slides for Lecture 3

Summary
- Memory and CPU protection
- Operating system components and services
- System calls
- Operating system structures

Memory Protection
The region in memory that a process is allowed to access is known as the process address space. To ensure correct operation of a computer system, we need to ensure that a process cannot access memory outside its address space. If we don't do this, then a process may, accidentally or deliberately, overwrite the address space of another process or memory space belonging to the operating system (e.g., the interrupt vector table). Two CPU registers, specifically designed for this purpose, can provide memory protection:
- Base register: holds the smallest legal physical memory address for a process
- Limit register: contains the size of the process
When a process is loaded into memory, the base register is initialized with the starting address of the process and the limit register is initialized with its size. Memory outside the defined range is protected because the CPU checks that every address generated by the process falls within the memory range defined by the values stored in the base and limit registers, as shown in Figure 3.1.

Figure 3.1 Hardware address protection with base and limit registers

In Figure 3.2, we use an example to illustrate how the concept outlined above works. The base and limit registers are initialized to define the address space of a process. The process starts at memory location 300040 and its size is 120900 bytes (assuming that memory is byte addressable). During the execution of this process, the CPU ensures (by using the logic outlined in Figure 3.1) that all the addresses generated by this process are greater than or equal to 300040 and less than (300040 + 120900), thereby preventing this process from accessing any memory area outside its address space. Loading the base and limit registers are privileged instructions.

Figure 3.2 Use of base and limit registers
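The hardware check of Figure 3.1 is simple enough to express in a few lines. The following is a minimal sketch in C of the comparison the CPU performs on every memory reference, using the numbers from the Figure 3.2 example; it is purely illustrative, since on a real machine this check is done in hardware.

    /* base_limit.c -- software sketch of the hardware base/limit check of
     * Figure 3.1, using the numbers from the Figure 3.2 example. */
    #include <stdio.h>

    #define BASE  300040UL   /* smallest legal address for the process */
    #define LIMIT 120900UL   /* size of the process in bytes           */

    /* Returns 1 if the address is legal, 0 if the hardware would trap. */
    int address_ok(unsigned long addr)
    {
        return addr >= BASE && addr < BASE + LIMIT;
    }

    int main(void)
    {
        unsigned long a1 = 350000UL;   /* inside [300040, 420940): legal     */
        unsigned long a2 = 420940UL;   /* first address past the range       */

        printf("%lu -> %s\n", a1, address_ok(a1) ? "legal" : "trap to OS (addressing error)");
        printf("%lu -> %s\n", a2, address_ok(a2) ? "legal" : "trap to OS (addressing error)");
        return 0;
    }

On real hardware the failing case does not return a value; the reference traps to the operating system, which typically terminates the offending process.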
CPU Protection
In addition to protecting I/O and memory, we must ensure that the operating system maintains control. We must prevent a user program from getting stuck in an infinite loop, or from not calling system services and never returning control to the operating system. To accomplish this we can use a timer, which interrupts the CPU after a specified period to ensure that the operating system maintains control. The timer period may be variable or fixed. A fixed-rate clock and a counter are used to implement a variable timer. The OS initializes the counter with a positive value. The counter is decremented on every clock tick by the clock interrupt service routine. When the counter reaches the value 0, a timer interrupt is generated that transfers control from the current process to the next scheduled process. Thus we can use the timer to prevent a program from running too long. In the most straightforward case, the timer could be set to interrupt every N milliseconds, where N is the time slice that each process is allowed to execute before the next process gets control of the CPU. The OS is invoked at the end of each time slice to perform various housekeeping tasks. This issue is discussed in detail under CPU scheduling in Chapter 7.

Another use of the timer is to compute the current time. A timer interrupt signals the passage of some period, allowing the OS to compute the current time in reference to some initial time. Loading the timer is a privileged instruction.
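The variable-timer scheme described above (a fixed-rate clock plus a counter) can be simulated at user level. The sketch below is only an illustration of the idea, not code from any real kernel; the time-slice length, tick count, and process numbering are all invented for the example.

    /* timer_sim.c -- user-level simulation of CPU protection with a timer:
     * a fixed-rate clock decrements a counter; when the counter reaches 0,
     * a timer interrupt would return control to the operating system, which
     * then dispatches the next process. All names and numbers are illustrative. */
    #include <stdio.h>

    #define TIME_SLICE_TICKS 10          /* N: clock ticks per time slice */

    static long counter;

    static void load_timer(void)         /* privileged on real hardware   */
    {
        counter = TIME_SLICE_TICKS;
    }

    int main(void)
    {
        int current = 1;                 /* pretend process P1 runs first */

        load_timer();
        for (int tick = 1; tick <= 35; tick++) {   /* simulate the clock */
            if (--counter == 0) {
                /* "Timer interrupt": OS regains control, picks the next process. */
                printf("tick %2d: time slice of P%d expired, dispatching P%d\n",
                       tick, current, current + 1);
                current++;
                load_timer();
            }
        }
        return 0;
    }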
OS Components
An operating system has many components that manage all the resources in a computer system, ensuring proper execution of programs. We briefly describe these components in this section.

Process management
A process can be thought of as a program in execution. It needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its tasks. The operating system is responsible for:
- Creating and terminating both user and system processes
- Suspending and resuming processes
- Providing mechanisms for process synchronization
- Providing mechanisms for process communication
- Providing mechanisms for deadlock handling

Main memory management
Main memory is a large array of words or bytes (called memory locations), ranging in size from hundreds of thousands to billions. Every word or byte has its own address. Main memory is a repository of quickly accessible data shared by the CPU and I/O devices. It contains the code, data, stack, and other parts of a process. The central processor reads instructions of a process from main memory during the fetch-decode-execute machine cycle. The OS is responsible for the following activities in connection with memory management:
- Keeping track of free memory space
- Keeping track of which parts of memory are currently being used and by whom
- Deciding which processes are to be loaded into memory when memory space becomes available
- Deciding how much memory is to be allocated to a process
- Allocating and deallocating memory space as needed
- Ensuring that a process is not overwritten by another process

Secondary storage management
The main purpose of a computer system is to execute programs. The programs, along with the data they access, must be in main memory or primary storage during their execution. Since main memory is too small to accommodate all data and programs, and because the data it holds are lost when power is lost, the computer system must provide secondary storage to back up main memory. Most programs are stored on a disk until loaded into memory, and then use the disk as both the source and destination of their processing. Like all other resources in a computer system, proper management of disk storage is important. The operating system is responsible for the following activities in connection with disk management:
- Free-space management
- Storage allocation and deallocation
- Disk scheduling

I/O system management
The I/O subsystem consists of:
- A memory management component that includes buffering, caching, and spooling
- A general device-driver interface
- Drivers for specific hardware devices

File management
Computers can store information on several types of physical media, e.g., magnetic tape, magnetic disk, and optical disk. The OS maps files onto physical media and accesses these media via the storage devices. The OS is responsible for the following activities with respect to file management:
- Creating and deleting files
- Creating and deleting directories
- Supporting primitives (operations) for manipulating files and directories
- Mapping files onto secondary storage
- Backing up files on stable (nonvolatile) storage media
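As a concrete, purely illustrative example of the file and directory primitives just listed, the fragment below uses standard UNIX/Linux calls to create a directory and a file and then delete both; the path names are made up for the example.

    /* fileops.c -- illustrative use of file/directory manipulation primitives
     * provided by the operating system (paths are made up for the example). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        if (mkdir("demo_dir", 0755) == -1)            /* create a directory   */
            perror("mkdir");

        int fd = open("demo_dir/notes.txt",           /* create a file        */
                      O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) {
            perror("open");
            return 1;
        }
        write(fd, "CS604 demo\n", 11);                /* manipulate the file  */
        close(fd);

        unlink("demo_dir/notes.txt");                 /* delete the file      */
        rmdir("demo_dir");                            /* delete the directory */
        return 0;
    }

Each of these library calls results in a system call that asks the kernel to perform the corresponding file-management operation.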
Protection system
If a computer system has multiple users and allows the concurrent execution of multiple processes, then the various processes must be protected from each other's activities. Protection is any mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system.

Networking
A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock. Instead, each processor has its own local memory and clock, and the processors communicate with each other through various communication lines, such as high-speed buses or networks. The processors in a distributed system are connected through a communication network. The design of the communication network must consider message routing and connection strategies, and the problems of contention and security. A distributed system collects physically separate, possibly heterogeneous, systems into a single coherent system, providing the user with access to the various resources that the system maintains.

Command-line interpreter (shells)
One of the most important system programs for an operating system is the command interpreter, which is the interface between the user and the operating system. Its purpose is to read user commands and try to execute them. Some operating systems include the command interpreter in the kernel. Other operating systems (e.g., UNIX, Linux, and DOS) treat it as a special program that runs when a job is initiated or when a user first logs on (on time-sharing systems). This program is sometimes called the command-line interpreter and is often known as the shell. Its function is simple: to get the next command statement and execute it. Some of the well-known shells for UNIX and Linux are the Bourne shell (sh), C shell (csh), Bourne Again shell (bash), TC shell (tcsh), and Korn shell (ksh). You can use any of these shells by running the corresponding command, listed in parentheses for each shell. So, you can run the Bourne Again shell by running the bash or /usr/bin/bash command.
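To show how simple the basic job of a shell is ("get the next command statement and execute it"), here is a minimal, illustrative shell loop in C. It is a bare sketch, not how bash or any real shell is implemented: it reads one word per line as the command name (no arguments, pipes, or built-ins) and uses the UNIX fork, execlp, and wait calls. The prompt name mysh is made up.

    /* minishell.c -- a bare-bones illustrative shell: read a command name,
     * run it, wait for it to finish. Real shells (bash, csh, ...) do far more. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];

        while (1) {
            printf("mysh> ");                      /* prompt the user            */
            fflush(stdout);
            if (fgets(line, sizeof(line), stdin) == NULL)
                break;                             /* end of input: exit shell   */
            line[strcspn(line, "\n")] = '\0';      /* strip the trailing newline */
            if (line[0] == '\0')
                continue;                          /* empty command line         */

            pid_t pid = fork();                    /* create a new process       */
            if (pid == 0) {                        /* child: run the command     */
                execlp(line, line, (char *)NULL);
                perror("execlp");                  /* reached only on failure    */
                exit(1);
            } else if (pid > 0) {
                wait(NULL);                        /* parent: wait for the child */
            } else {
                perror("fork");
            }
        }
        return 0;
    }

Typing ls at the mysh> prompt runs the ls program in a child process; pressing Ctrl-D (end of input) exits the loop.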
Operating System Services
An operating system provides the environment within which programs are executed. It provides certain services to programs and to the users of those programs, which vary from operating system to operating system. Some of the common ones are:
- Program execution: The system must be able to load a program into memory and to run that program. The program must be able to end its execution.
- I/O operations: A running program may require I/O, which may involve a file or an I/O device. For efficiency and protection, users usually cannot control I/O devices directly. The OS provides a means to do I/O.
- File system manipulation: Programs need to read and write files. They should also be able to create and delete files by name.
- Communications: There are cases in which one program needs to exchange information with another process. This can occur between processes that are executing on the same computer or between processes that are executing on different computer systems tied together by a computer network. Communication may be implemented via shared memory or message passing.
- Error detection: The OS constantly needs to be aware of possible errors. Errors may occur in the CPU and memory hardware, in I/O devices, and in the user program. For each type of error, the OS should take appropriate action to ensure correct and consistent computing.

In order to assist the efficient operation of the system itself, the system also provides the following functions:
- Resource allocation: When multiple users are logged on the system or multiple jobs are running at the same time, resources must be allocated to each of them. There are various routines to schedule jobs and to allocate plotters, modems, and other peripheral devices.
- Accounting: We want to keep track of which users use how many and which kinds of computer resources. This record keeping may be used for accounting or simply for accumulating usage statistics.
- Protection: The owners of information stored in a multi-user computer system may want to control the use of that information. When several disjoint processes execute concurrently, it should not be possible for one process to interfere with the others or with the operating system itself. Protection involves ensuring that all access to system resources is controlled.

Entry Points into Kernel
As shown in Figure 3.3, there are four events that cause execution of a piece of code in the kernel: interrupt, trap, system call, and signal. In the case of all of these events, some kernel code is executed to service the corresponding event. You have discussed interrupts and traps in the computer organization or computer architecture course. We will discuss system call execution in this lecture and signals in subsequent lectures. We will talk about many UNIX and Linux system calls and signals throughout the course.

Figure 3.3 Entry points into the operating system kernel

System Calls
System calls provide the interface between a process and the OS. These calls are generally available as assembly language instructions. The system call interface layer contains the entry points into the kernel code. Because all system resources are managed by the kernel, any user or application request that involves access to any system resource must be handled by the kernel code; but the user process must not be given open access to the kernel code, for security reasons. So that user processes can invoke the execution of kernel code, several openings into the kernel code, also called system calls, are provided. System calls allow processes and users to manipulate system resources such as files and processes. System calls can be categorized into the following groups:
- Process control
- File management
- Device management
- Information maintenance
- Communications

Semantics of System Call Execution
The following sequence of events takes place when a process invokes a system call:
- The user process makes a call to a library function.
- The library routine puts the appropriate parameters at a well-known place, like a register or on the stack. These parameters include the arguments for the system call, the return address, and the call number. Three general methods are used to pass parameters between a running program and the operating system:
  - Pass the parameters in registers.
  - Store the parameters in a table in main memory and pass the table address as a parameter in a register.
  - Push (store) the parameters onto the stack by the program, and pop them off the stack by the operating system.
- A trap instruction is executed to change the mode from user to kernel and give control to the operating system.
- The operating system determines which system call is to be carried out by examining one of the parameters (the call number) passed to it by the library routine. The kernel uses the call number to index a kernel table (the dispatch table), which contains pointers to the service routines for all system calls.
- The service routine is executed and control is given back to the user program via a return-from-trap instruction; this instruction also changes the mode from system to user.
- The library function executes the instruction following the trap, interprets the return values from the kernel, and returns to the user process.
Figure 3.4 gives a pictorial view of the above steps.

Figure 3.4 Pictorial view of the steps needed for execution of a system call
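The role of the call number and the dispatch table can be shown with a toy user-level sketch. Nothing below is real kernel code; the call numbers, table, and service routines are invented purely to show how indexing a table of function pointers by call number selects the right service routine.

    /* dispatch_sketch.c -- toy illustration of a system call dispatch table:
     * the "kernel" indexes a table of function pointers by call number.
     * All names and numbers are invented for this sketch. */
    #include <stdio.h>

    #define SYS_READ  0
    #define SYS_WRITE 1
    #define NCALLS    2

    static long sys_read(long arg)  { printf("service routine for read(%ld)\n", arg);  return 0; }
    static long sys_write(long arg) { printf("service routine for write(%ld)\n", arg); return 0; }

    /* The dispatch table: one pointer to a service routine per call number. */
    static long (*dispatch_table[NCALLS])(long) = { sys_read, sys_write };

    /* Stand-in for the kernel code reached by the trap instruction. */
    static long trap(int call_number, long arg)
    {
        if (call_number < 0 || call_number >= NCALLS)
            return -1;                               /* unknown system call          */
        return dispatch_table[call_number](arg);     /* index the table, run routine */
    }

    /* Stand-in for a library wrapper such as read() in the C library. */
    static long library_read(long fd)
    {
        return trap(SYS_READ, fd);   /* place call number and argument, then trap */
    }

    int main(void)
    {
        library_read(3);             /* user process calls the library function */
        trap(SYS_WRITE, 4);
        return 0;
    }

Real kernels do essentially this on a larger scale: the trap handler reads the call number left by the library wrapper and jumps through the dispatch table to the corresponding service routine.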
Operating Systems Structures
Just like any other software, the operating system code can be structured in different ways. The following are some of the commonly used structures.

Simple/Monolithic Structure
In this case, the operating system code has no structure. It is written for functionality and efficiency (in terms of time and space). DOS and UNIX are examples of such systems, as shown in Figures 3.5 and 3.6. UNIX consists of two separable parts, the kernel and the system programs. The kernel is further separated into a series of interfaces and device drivers, which were added and expanded over the years. Everything below the system call interface and above the physical hardware is the kernel, which provides the file system, CPU scheduling, memory management, and other OS functions through system calls. Since this is an enormous amount of functionality combined in one level, UNIX is difficult to enhance, as changes in one section could adversely affect other areas. We will discuss the various components of the UNIX kernel throughout the course.

Figure 3.5 Logical structure of DOS

Figure 3.6 Logical structure of UNIX

Operating Systems CS-604, Lecture No. 4

Reading Material
- Operating Systems Structures, Chapter 3
- PowerPoint slides for Lecture 3

Summary
- Operating system structures
- Operating system design and implementation
- UNIX/Linux directory structure
- Browsing the UNIX/Linux directory structure

Operating Systems Structures (continued)

Layered Approach
The modularization of a system can be done in many ways. As shown in Figure 4.1, in the layered approach the OS is broken up into a number of layers or levels, each built on top of a lower layer. The bottom layer is the hardware; the highest layer (layer N) is the user interface. A typical OS layer (layer M) consists of data structures and a set of routines that can be invoked by higher-level layers. Layer M in turn can invoke operations on lower-level layers.

Figure 4.1 The layered structure

The main advantage of the layered approach is modularity. The layers are selected such that each uses the functions and services of only lower layers. This approach simplifies debugging and system verification. The major difficulty with the layered approach is the careful definition of the layers, because a layer can only use the layers below it. It also tends to be less efficient than other approaches. Each layer adds overhead to a system call (which is trapped when the program executes an I/O operation, for instance). This results in a system call that takes longer than one on a non-layered system. THE operating system by Dijkstra and IBM's OS/2 are examples of layered operating systems.

Microkernels
This method structures the operating system by removing all non-essential components from the kernel and implementing them as system- and user-level programs. The result is a smaller kernel. Microkernels typically provide minimal process and memory management in addition to a communication facility. The main function of the microkernel is to provide a communication facility between the client program and the various services that are also running in user space. The benefits of the microkernel approach include the ease of extending the OS: all new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer because the microkernel is a smaller kernel. The resulting OS is easier to port from one hardware design to another. It also provides more security and reliability, since most services run as user rather than kernel processes. Mach, MacOS X Server, QNX, OS/2, and Windows NT are examples of microkernel-based operating systems.
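Message passing between a client and a user-space service is the heart of the microkernel approach. The sketch below is only a user-level analogy, not microkernel code: it passes a made-up request from a "client" process to a "service" process created with fork, using an ordinary UNIX pipe in place of a microkernel's own IPC primitives.

    /* msg_sketch.c -- user-level analogy for microkernel-style message passing:
     * a client process sends a request message to a service process over a pipe.
     * Purely illustrative; real microkernels use their own IPC mechanisms. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char request[] = "open file demo.txt";   /* made-up request message */

        if (pipe(fd) == -1) { perror("pipe"); exit(1); }

        pid_t pid = fork();
        if (pid == 0) {                          /* child acts as the "service" */
            char buf[64];
            close(fd[1]);                        /* service only reads          */
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("service received request: %s\n", buf);
            }
            exit(0);
        }

        close(fd[0]);                            /* client only writes          */
        write(fd[1], request, strlen(request));  /* send the message            */
        close(fd[1]);
        wait(NULL);                              /* wait for the service        */
        return 0;
    }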
As shown in Figure 4.2, various types of services can be run on top of the Windows NT microkernel, thereby allowing applications developed for different platforms to run under Windows NT.

Figure 4.2 Windows NT client-server structure

Virtual Machines
Conceptually, a computer system is made up of layers. The hardware is the lowest level in all such systems. The kernel running at the next level uses the hardware instructions to create a set of system calls for use by outer layers. The system programs above the kernel are therefore able to use either system calls or hardware instructions, and in some ways these programs do not differentiate between the two. System programs in turn treat the hardware and the system calls as though they were both at the same level. In some systems, the application programs can call the system programs. The application programs view everything under them in the hierarchy as though the latter were part of the machine itself. This layered approach is taken to its logical conclusion in the concept of a virtual machine (VM). The VM operating system for IBM systems is the best example of the VM concept. By using CPU scheduling and virtual memory techniques, an operating system can create the illusion that a process has its own processor with its own (virtual) memory.
