Principles of Operating Systems
Lecture 3-5: Processes and Threads
Ardalan Amiri Sani (ardalan@uci.edu)

[These lecture slides contain some content adapted from: previous slides by Prof. Nalini Venkatasubramanian; http://www-inst.eecs.berkeley.edu/~cs162/, Copyright © 2010 UCB; and course text slides © Silberschatz.]

Outline
■ Process Concept
■ Process Scheduling
■ Operations on Processes
■ Cooperating Processes
■ Threads
■ Interprocess Communication

Process Concept
■ An operating system executes a variety of programs:
❑ batch systems: jobs
❑ time-shared systems: user programs or tasks
❑ "job" and "program" are used interchangeably
■ Process: a program in execution (with limited rights)
❑ process execution proceeds in a sequential fashion
■ A process contains:
❑ a program counter, stack, and data section

Process = Program
[Figure: the program text defines main() and A() once; the process additionally has a heap and a stack with frames for main and A]
■ More to a process than just a program:
❑ the program is just part of the process state
❑ I run Vim on lectures.txt, you run it on homework.java: same program, different processes
■ Less to a process than a program:
❑ a program can invoke more than one process
❑ cc starts up processes to handle different stages of the compilation process: cpp, cc1, cc2, as, and ld

Process State
■ A process changes state as it executes:
[State diagram: new -(admitted)-> ready; ready -(scheduler dispatch)-> running; running -(interrupt)-> ready; running -(I/O or event wait)-> waiting; waiting -(I/O or event completion)-> ready; running -(exit)-> terminated]

Process States
■ New: the process is being created
■ Running: instructions are being executed
■ Waiting: the process is waiting for some event to occur
■ Ready: the process is waiting to be assigned to a processor
■ Terminated: the process has finished execution

Process Control Block (PCB)
■ Contains information associated with each process:
❑ process state: e.g. new, ready, running, etc.
❑ process number: the process ID
❑ program counter: address of the next instruction to be executed
❑ CPU registers: general-purpose registers, stack pointer, etc.
❑ CPU scheduling information: process priority
❑ memory-management information: base/limit information
❑ accounting information: time limits, I/O resources
❑ I/O status information: list of I/O devices allocated
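To make the PCB contents above concrete, here is a minimal C sketch of what a PCB might contain. The field names and types are illustrative assumptions for this lecture, not the layout of any real kernel (Linux's counterpart, for instance, is struct task_struct):

/* Illustrative PCB sketch; fields are assumptions, not a real kernel's layout. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    enum proc_state state;        /* process state */
    int             pid;          /* process number (process ID) */
    unsigned long   pc;           /* saved program counter */
    unsigned long   regs[16];     /* saved CPU registers, incl. stack pointer */
    int             priority;     /* CPU scheduling information */
    unsigned long   base, limit;  /* memory-management information */
    unsigned long   cpu_time;     /* accounting information */
    int             open_fds[16]; /* I/O status: devices/files allocated */
    struct pcb     *next;         /* link for the ready/device queues */
};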
Process Scheduling
■ A process (its PCB) moves from queue to queue as it executes
■ When does it move, and to where? Each move is a scheduling decision

Process Scheduling Queues
■ Job queue: set of all processes in the system
■ Ready queue: set of all processes residing in main memory, ready and waiting to execute
■ Device queues: sets of processes waiting for an I/O device
■ Processes migrate between the various queues
■ Queue structures: typically a linked list, circular list, etc.

Process Queues
[Figure: the ready queue and the device queues, each a linked list of PCBs]

Enabling Concurrency and Protection: Multiplexing Processes
■ Only one process (PCB) is active at a time
❑ the current state of the process is held in its PCB: a "snapshot" of the execution and protection environment
❑ a process needs the CPU and other resources
■ Give out CPU time to different processes (scheduling):
❑ only one process "running" at a time
❑ give more time to important processes
■ Give pieces of resources to different processes (protection):
❑ controlled access to non-CPU resources
▪ e.g. memory mapping: give each process its own address space

Enabling Concurrency: Context Switch
■ The task of switching the CPU from one process to another
❑ the CPU must save the PCB state of the old process and load the saved PCB state of the new process
■ Context-switch time is overhead:
❑ the system does no useful work while switching
❑ overhead sets the minimum practical switching time and can become a bottleneck
■ The time for a context switch depends on hardware support (roughly 1-1000 microseconds)

CPU Switch From Process to Process
[Figure: processes P0 and P1 alternate; on each switch the kernel saves state into one PCB and reloads state from the other]
■ The code executed in the kernel during the switch is overhead
❑ overhead sets the minimum practical switching time

Schedulers
■ Long-term scheduler (or job scheduler):
❑ selects which processes should be brought into the ready queue
❑ invoked very infrequently (seconds, minutes); may be slow
❑ controls the degree of multiprogramming
■ Short-term scheduler (or CPU scheduler):
❑ selects which process should execute next and allocates the CPU
❑ invoked very frequently (milliseconds), so it must be very fast
■ Medium-term scheduler:
❑ swaps processes out temporarily
❑ balances load for better throughput

Medium-Term (Time-Sharing) Scheduler
[Figure: partially executed, swapped-out processes re-enter the ready queue alongside newly admitted jobs]

Process Profiles
■ I/O-bound process:
❑ spends more time on I/O; short CPU bursts; CPU underutilized
■ CPU-bound process:
❑ spends more time doing computations; few, very long CPU bursts; I/O underutilized
■ The right job mix:
❑ the long-term scheduler admits jobs to keep the load balanced between I/O-bound and CPU-bound processes
❑ the medium-term scheduler ensures the right mix (by sometimes swapping out jobs and resuming them later)

Process Creation
■ Processes are created and deleted dynamically
■ The process that creates another process is called the parent process; the created process is the child process
■ The result is a tree of processes
❑ e.g. UNIX processes have dependencies and form a hierarchy
■ Resources required when creating a process: CPU time, files, memory, I/O devices, etc.

UNIX Process Hierarchy
[Figure: tree of UNIX processes rooted at init]

What does it take to create a process?
■ Must construct a new PCB
❑ inexpensive
■ Must set up new page tables for the address space
❑ more expensive
■ Copy data from the parent process (UNIX fork())
❑ the semantics of UNIX fork() are that the child process gets a complete copy of the parent's memory and I/O state
❑ originally very expensive
❑ much less expensive with "copy on write"
■ Copy I/O state (file handles, etc.)
❑ medium expense

Process Creation (Cont.)
■ Resource sharing options:
❑ parent and children share all resources
❑ children share a subset of the parent's resources: prevents many processes from overloading the system
❑ parent and children share no resources
■ Execution options:
❑ parent and child execute concurrently
❑ parent waits until the child has terminated
■ Address space options:
❑ child process is a duplicate of the parent process
❑ child process has a new program loaded into it

UNIX Process Creation
■ The fork() system call creates new processes
■ The execve() system call is used after a fork to replace the process's memory space with a new program (see the sketch below)
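As a concrete illustration of the fork()/execve() pattern, here is a minimal sketch. It is a generic example, not from the course materials: /bin/ls is an arbitrary program to run, and execv() is used as a convenience wrapper around execve():

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* create a child: a copy of this process */
    if (pid < 0) {
        perror("fork");              /* fork failed */
        exit(1);
    } else if (pid == 0) {
        /* child: replace the (copied) memory space with a new program */
        char *argv[] = { "ls", "-l", NULL };
        execv("/bin/ls", argv);
        perror("execv");             /* reached only if exec failed */
        exit(1);
    } else {
        wait(NULL);                  /* parent: collect the child's exit status */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}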
Process Termination
■ The process executes its last statement and asks the operating system to delete it (exit):
❑ output data from child to parent (via wait)
❑ the process's resources are deallocated by the operating system
■ A parent may terminate the execution of child processes when:
❑ the child has exceeded its allocated resources
❑ the task assigned to the child is no longer required
❑ the parent is exiting
▪ the OS does not allow the child to continue if its parent terminates
▪ cascading termination

Threads
■ Processes do not share resources well
❑ high context-switching overhead
■ Idea: separate concurrency from protection
■ Multithreading: a single program made up of a number of different concurrent activities
■ A thread (or lightweight process):
❑ the basic unit of CPU utilization; it consists of:
▪ a program counter, a register set, and stack space
■ A thread shares the following with its peer threads:
▪ code section, data section, and OS resources (open files, signals)
▪ no protection between threads
■ Collectively, the threads are called a task
■ A heavyweight process is a task with one thread

Single and Multithreaded Processes
■ Threads encapsulate concurrency: the "active" component
■ Address spaces encapsulate protection: the "passive" part
❑ keeps a buggy program from trashing the system

Benefits
■ Responsiveness
■ Resource sharing
■ Economy
■ Utilization of multiprocessor architectures

Threads (Cont.)
■ In a multithreaded task, while one server thread is blocked and waiting, a second thread in the same task can run
■ Cooperation of multiple threads in the same job results in higher throughput and improved performance
■ Applications that require sharing a common buffer (i.e. producer-consumer) benefit from thread utilization
■ Threads provide a mechanism that allows sequential processes to make blocking system calls while also achieving parallelism

Thread State
■ State shared by all threads in a process/address space:
❑ contents of memory (global variables, heap)
❑ I/O state (file system, network connections, etc.)
■ State "private" to each thread:
❑ kept in the TCB (Thread Control Block)
❑ CPU registers (including the program counter)
❑ execution stack
▪ parameters, temporary variables
▪ return PCs are kept while called procedures are executing

Threads (Cont.)
■ A thread context switch still requires a register-set switch, but no memory-management-related work
■ Thread states: ready, blocked, running, terminated
■ Threads share the CPU, and only one thread can run at a time
■ No protection among threads

Examples: Multithreaded Programs
■ Embedded systems
❑ elevators, planes, medical systems, wristwatches
❑ single program, concurrent operations
■ Most modern OS kernels
❑ internally concurrent because they have to deal with concurrent requests by multiple users
❑ but no protection needed within the kernel
■ Database servers
❑ access to shared data by many concurrent users
❑ background utility processing must also be done

More Examples: Multithreaded Programs
■ Network servers
❑ concurrent requests from the network
❑ again: single program, multiple concurrent operations
❑ file servers, web servers, and airline reservation systems
■ Parallel programming (more than one physical CPU)
❑ split a program into multiple threads for parallelism
❑ this is called multiprocessing

Threads and Address Spaces
■ Real operating systems have either one or many address spaces, and one or many threads per address space:
❑ one address space, one thread: MS/DOS, early Macintosh
❑ one address space, many threads: embedded systems (Geoworks, VxWorks, JavaOS, etc.), JavaOS, Pilot (PC)
❑ many address spaces, one thread per address space: traditional UNIX
❑ many address spaces, many threads per address space: Mach, OS/2, Linux, Windows 9x(?), Windows NT to XP, Solaris, HP-UX, OS X
Types of Threads
■ Kernel-supported threads
■ User-level threads
■ Hybrid approach: implements both user-level and kernel-supported threads (Solaris 2)

Kernel Threads
■ Supported by the kernel:
❑ native threads supported directly by the kernel
❑ every thread can run or block independently
❑ one process may have several threads waiting on different things
■ Downside of kernel threads: a bit expensive
❑ scheduling requires a crossing into kernel mode
■ Examples: Windows XP/2000, Solaris, Linux, Tru64 UNIX, Mac OS X, Mach, OS/2

User Threads
■ Supported above the kernel, via a set of library calls at the user level
■ Thread management is done by a user-level threads library
❑ the user program provides the scheduler and thread package
■ May have several user threads per kernel thread
■ User threads may be scheduled non-preemptively relative to each other (only switch on yield())
■ Advantages: cheap, fast
❑ threads do not need to call the OS and cause interrupts to the kernel
■ Disadvantage: if the kernel is single-threaded, a system call from any thread can block the entire task
■ Example thread libraries: POSIX Pthreads, Win32 threads, Java threads (see the Pthreads sketch below)
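As a concrete taste of the Pthreads library named above, here is a minimal sketch that creates one thread and waits for it to finish. The worker function is made up for illustration; compile with gcc -pthread:

#include <stdio.h>
#include <pthread.h>

/* hypothetical worker: runs concurrently with main on its own stack */
static void *worker(void *arg)
{
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    /* create a new thread in this task, sharing code, data, and open files */
    if (pthread_create(&tid, NULL, worker, (void *)1L) != 0)
        return 1;
    pthread_join(tid, NULL);   /* block until the thread terminates */
    return 0;
}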
Multithreading Models
■ Many-to-one
■ One-to-one
■ Many-to-many

Many-to-One
■ Many user-level threads mapped to a single kernel thread
■ Examples:
❑ Solaris Green Threads
❑ GNU Portable Threads

One-to-One
■ Each user-level thread maps to a kernel thread
■ Examples: Windows NT/XP/2000; Linux; Solaris 9 and later

Many-to-Many Model
■ Allows many user-level threads to be mapped to many kernel threads
■ Allows the operating system to create a sufficient number of kernel threads
■ Examples: Solaris prior to version 9; Windows NT/2000 with the ThreadFiber package

Thread Support in Solaris 2
■ Solaris 2 is a version of UNIX with support for:
❑ kernel and user-level threads, symmetric multiprocessing, and real-time scheduling
■ Lightweight processes (LWPs):
❑ intermediate between user-level and kernel-level threads
❑ each LWP is connected to exactly one kernel thread

Threads in Solaris 2
[Figure: user-level threads multiplexed onto LWPs, each LWP backed by a kernel thread]

Two-Level Model
■ Similar to many-to-many, except that it also allows a user thread to be bound to a kernel thread
■ Examples: IRIX, HP-UX, Tru64 UNIX, Solaris 9 and earlier

Threading Issues
■ Semantics of the fork() and exec() system calls
■ Thread cancellation
■ Signal handling
■ Thread pools
■ Thread-specific data

Semantics of fork() and exec()
● Does fork() duplicate only the calling thread or all threads?
● Some UNIX systems have two versions of fork
● exec() usually works as normal: it replaces the running process, including all threads

Signal Handling
● Signals are used in UNIX systems to notify a process that a particular event has occurred
● A signal handler is used to process signals:
1. the signal is generated by a particular event
2. the signal is delivered to a process
3. the signal is handled by one of two signal handlers: default or user-defined
● Every signal has a default handler that the kernel runs when handling the signal
● A user-defined signal handler can override the default
● For a single-threaded process, the signal is delivered to the process

Signal Handling (Cont.)
● Where should a signal be delivered for a multithreaded process?
● deliver the signal to the thread to which the signal applies
● deliver the signal to every thread in the process
● deliver the signal to certain threads in the process
● assign a specific thread to receive all signals for the process

Thread Cancellation
● Terminating a thread before it has finished
● The thread to be canceled is the target thread
● Two general approaches:
● asynchronous cancellation: terminates the target thread immediately
● deferred cancellation: allows the target thread to periodically check whether it should be cancelled
● Pthread code to create and cancel a thread: (the slide's code listing is not reproduced in this extract; see the sketch below)
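Since the slide's listing is missing from this extract, here is a sketch of the standard create-and-cancel pattern using pthread_create()/pthread_cancel(). The loop body is illustrative; pthread_testcancel() marks an explicit cancellation point for deferred cancellation:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

static void *worker(void *arg)
{
    while (1) {
        pthread_testcancel();   /* deferred cancellation: check for a pending request */
        /* ... do work ... */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);  /* create the target thread */
    sleep(1);                                  /* let it run briefly */
    pthread_cancel(tid);                       /* request cancellation */
    pthread_join(tid, NULL);                   /* thread exits at a cancellation point */
    return 0;
}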
Thread Pools
● Create a number of threads in a pool, where they await work
● Advantages:
● usually slightly faster to service a request with an existing thread than to create a new thread
● allows the number of threads in the application(s) to be bound to the size of the pool
● separating the task to be performed from the mechanics of creating the task allows different strategies for running the task, i.e. tasks could be scheduled to run periodically
● The Windows API supports thread pools

Thread-Local Storage
● Thread-local storage (TLS) allows each thread to have its own copy of data
● Different from local variables:
● local variables are visible only during a single function invocation
● TLS is visible across function invocations
● Similar to static data, except that TLS is unique to each thread
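A minimal sketch of TLS in C, using the C11 _Thread_local storage class (Pthreads offers the same idea through pthread_key_create(); the counter here is an illustrative assumption). Compile with gcc -pthread:

#include <stdio.h>
#include <pthread.h>

/* each thread gets its own copy, visible across all functions that thread runs */
static _Thread_local int tls_counter = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 3; i++)
        tls_counter++;                        /* updates this thread's copy only */
    printf("thread %ld: tls_counter = %d\n", (long)arg, tls_counter);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("main: tls_counter = %d\n", tls_counter);  /* still 0 in main's copy */
    return 0;
}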
Multi-(processing, programming, threading)
■ Definitions:
❑ multiprocessing ≡ multiple CPUs
❑ multiprogramming ≡ multiple jobs or processes
❑ multithreading ≡ multiple threads per process
■ What does it mean to run two threads "concurrently"?
❑ the scheduler is free to run threads in any order and interleaving: FIFO, random, ...
❑ the dispatcher can choose to run each thread to completion, or time-slice in big chunks or small chunks
[Figure: multiprocessing runs A, B, and C simultaneously on separate CPUs; multiprogramming interleaves A, B, and C on one CPU]

Interprocess Communication
● Processes within a system may be independent or cooperating
● A cooperating process can affect or be affected by other processes, including sharing data
● Reasons for cooperating processes:
● information sharing
● computation speedup
● modularity
● convenience
● Cooperating processes need interprocess communication (IPC)
● Two models of IPC:
● shared memory
● message passing

Interprocess Communication: Shared Memory
● An area of memory shared among the processes that wish to communicate
● The communication is under the control of the processes, not the operating system
● A major issue is providing a mechanism that allows the user processes to synchronize their actions when they access shared memory
● Synchronization is discussed in great detail in Chapter 5

Producer-Consumer Problem
● Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process
● unbounded buffer: places no practical limit on the size of the buffer
● bounded buffer: assumes a fixed buffer size

Bounded Buffer: Shared-Memory Solution
● Shared data:

#define BUFFER_SIZE 10

typedef struct {
    . . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

Bounded Buffer: Producer

item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

Bounded Buffer: Consumer

item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}

Bounded Buffer: Shared-Memory Solution (Cont.)
● How many elements of the buffer can be in use at most at a given time?
● The solution is correct, but can only use BUFFER_SIZE - 1 elements
● the full test (in + 1) % BUFFER_SIZE == out must be distinguishable from the empty test in == out, so one slot is always left unused

Interprocess Communication: Message Passing
● A mechanism for processes to communicate and to synchronize their actions
● Message system: processes communicate with each other without resorting to shared variables
● The IPC facility provides two operations:
● send(message)
● receive(message)
● The message size is either fixed or variable

Message Passing (Cont.)
● If processes P and Q wish to communicate, they need to:
● establish a communication link between them
● exchange messages via send/receive
● Implementation issues:
● How are links established?
● Can a link be associated with more than two processes?
● How many links can there be between every pair of communicating processes?
● What is the capacity of a link?
● Is the size of a message that the link can accommodate fixed or variable?
● Is a link unidirectional or bidirectional?

Message Passing (Cont.)
● Implementation of the communication link:
● physical: shared memory, hardware bus, network
● logical: direct or indirect, synchronous or asynchronous, automatic or explicit buffering

Direct Communication
● Processes must name each other explicitly:
● send(P, message): send a message to process P
● receive(Q, message): receive a message from process Q
● Properties of the communication link:
● links are established automatically
● a link is associated with exactly one pair of communicating processes
● between each pair there exists exactly one link
● the link may be unidirectional, but is usually bidirectional

Indirect Communication
● Messages are directed to and received from mailboxes (also referred to as ports)
● each mailbox has a unique ID
● processes can communicate only if they share a mailbox
● Properties of the communication link:
● a link is established only if processes share a common mailbox
● a link may be associated with many processes
● each pair of processes may share several communication links
● a link may be unidirectional or bidirectional

Indirect Communication (Cont.)
● Operations:
● create a new mailbox (port)
● send and receive messages through the mailbox
● destroy a mailbox
● Primitives are defined as:
● send(A, message): send a message to mailbox A
● receive(A, message): receive a message from mailbox A

Indirect Communication (Cont.)
● Mailbox sharing:
● P1, P2, and P3 share mailbox A
● P1 sends; P2 and P3 receive
● Who gets the message?
● Solutions:
● allow a link to be associated with at most two processes
● allow only one process at a time to execute a receive operation
● allow the system to select the receiver arbitrarily; the sender is notified who the receiver was

Synchronization
● Message passing may be either blocking or non-blocking
● Blocking is considered synchronous:
● blocking send: the sender is blocked until the message is received
● blocking receive: the receiver is blocked until a message is available
● Non-blocking is considered asynchronous:
● non-blocking send: the sender sends the message and continues
● non-blocking receive: the receiver receives either a valid message or a null message
● Different combinations are possible
● if both send and receive are blocking, we have a rendezvous

Message Passing (Cont.)
● The producer-consumer problem becomes trivial:

Producer:
message next_produced;
while (true) {
    /* produce an item in next_produced */
    send(next_produced);
}

Consumer:
message next_consumed;
while (true) {
    receive(next_consumed);
    /* consume the item in next_consumed */
}
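The send()/receive() operations above are abstract. As one concrete realization, not covered on the slides themselves, here is a sketch using POSIX message queues; the queue name /demo_mq and the message sizes are arbitrary choices for this example (on Linux, link with -lrt):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <mqueue.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0666, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    char out_msg[64] = "hello";
    mq_send(mq, out_msg, strlen(out_msg) + 1, 0);   /* send(message) */

    char in_msg[64];
    mq_receive(mq, in_msg, sizeof(in_msg), NULL);   /* receive(message) */
    printf("received: %s\n", in_msg);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}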
Buffering
● A queue of messages is attached to the link, implemented in one of three ways:
1. zero capacity: no messages are queued on the link; the sender must wait for the receiver (rendezvous)
2. bounded capacity: finite length of n messages; the sender must wait if the link is full
3. unbounded capacity: infinite length; the sender never waits

Examples of IPC Systems: POSIX
● POSIX shared memory
● A process first creates a shared memory segment:
    shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
● also used to open an existing segment to share it
● Set the size of the object:
    ftruncate(shm_fd, 4096);
● Now the process can write to the shared memory:
    sprintf(shared_memory, "Writing to shared memory");

IPC POSIX: Producer / IPC POSIX: Consumer
(The code listings for these two slides are not reproduced in this extract; a sketch of the producer side follows.)
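Here is a sketch of what the producer side might look like, assembled from the calls shown on the POSIX slide plus mmap() to obtain the pointer that sprintf() writes through; the segment name /demo_shm is an arbitrary choice for this example (on Linux, link with -lrt):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    const char *name = "/demo_shm";       /* illustrative segment name */
    const int size = 4096;

    /* create the shared memory segment and set its size */
    int shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    if (shm_fd == -1) { perror("shm_open"); return 1; }
    ftruncate(shm_fd, size);

    /* map the segment into this process's address space */
    char *shared_memory = mmap(NULL, size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, shm_fd, 0);
    if (shared_memory == MAP_FAILED) { perror("mmap"); return 1; }

    sprintf(shared_memory, "Writing to shared memory");

    /* a consumer would shm_open(name, O_RDONLY, 0666), mmap with PROT_READ,
       print the string, and finally shm_unlink(name) */
    return 0;
}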
Examples of IPC Systems: Mach
● Mach communication is message based
● even system calls are messages
● each task gets two mailboxes at creation: Kernel and Notify
● only three system calls are needed for message transfer: msg_send(), msg_receive(), msg_rpc()
● mailboxes are needed for communication; created via port_allocate()
● send and receive are flexible; for example, there are four options if the mailbox is full:
● wait indefinitely
● wait at most n milliseconds
● return immediately
● temporarily cache a message

Examples of IPC Systems: Windows
● Message-passing centric, via the advanced local procedure call (LPC) facility
● only works between processes on the same system
● uses ports (like mailboxes) to establish and maintain communication channels
● Communication works as follows:
● the client opens a handle to the subsystem's connection port object
● the client sends a connection request
● the server creates two private communication ports and returns the handle to one of them to the client
● the client and server use the corresponding port handle to send messages or callbacks and to listen for replies

Local Procedure Calls in Windows
[Figure: client and server connected through a connection port and two private communication ports]

Communications in Client-Server Systems
● Sockets
● Remote procedure calls
● Pipes
● Remote method invocation (Java)

Sockets
● A socket is defined as an endpoint for communication
● Concatenation of an IP address and a port: a number included at the start of a message packet to differentiate network services on a host
● the socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
● Communication takes place between a pair of sockets
● All ports below 1024 are well known, used for standard services
● The special IP address 127.0.0.1 (loopback) refers to the system on which the process is running

Socket Communication
[Figure: client socket and server socket communicating over the network]

Sockets in Java
● Three types of sockets:
● connection-oriented (TCP)
● connectionless (UDP)
● the MulticastSocket class: data can be sent to multiple recipients
● The textbook walks through a "Date" server as an example (the code listing is not reproduced in this extract)

Remote Procedure Calls
● A remote procedure call (RPC) abstracts procedure calls between processes on networked systems
● again uses ports for service differentiation
● Stubs: client-side proxy for the actual procedure on the server
● the client-side stub locates the server and marshalls the parameters
● the server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server
● On Windows, stub code is compiled from a specification written in the Microsoft Interface Definition Language (MIDL)

Remote Procedure Calls (Cont.)
● Data representation is handled via the External Data Representation (XDR) format to account for different architectures
● big-endian and little-endian
● Remote communication has more failure scenarios than local communication
● messages can be delivered exactly once rather than at most once
● The OS typically provides a rendezvous (or matchmaker) service to connect client and server

Execution of RPC
[Figure: the client's kernel asks the matchmaker for the server's port, then sends the RPC to that port and receives the reply]

Pipes
● A pipe acts as a conduit allowing two processes to communicate
● Ordinary pipes: cannot be accessed from outside the process that created them; typically, a parent process creates a pipe and uses it to communicate with a child process that it created
● Named pipes: can be accessed without a parent-child relationship

Ordinary Pipes
● Ordinary pipes allow communication in standard producer-consumer style:
● the producer writes to one end (the write end of the pipe)
● the consumer reads from the other end (the read end of the pipe)
● Ordinary pipes are therefore unidirectional
● Require a parent-child relationship between the communicating processes
● Windows calls these anonymous pipes
● See the Unix and Windows code samples in the textbook

Ordinary Pipes (Cont.)

#define READ_END  0
#define WRITE_END 1

/* (see the full example in the book) */
int main(void)
{
    char write_msg[BUFFER_SIZE] = "Greetings";
    char read_msg[BUFFER_SIZE];
    int fd[2];
    pid_t pid;

    if (pipe(fd) == -1) {
        /* handle error */
    }
    pid = fork();
    if (pid < 0) {
        /* handle error */
    }
    if (pid > 0) {               /* parent process */
        close(fd[READ_END]);
        write(fd[WRITE_END], write_msg, strlen(write_msg) + 1);
        close(fd[WRITE_END]);
    } else {                     /* child process */
        close(fd[WRITE_END]);
        read(fd[READ_END], read_msg, BUFFER_SIZE);
        printf("read %s", read_msg);
        close(fd[READ_END]);
    }
    return 0;
}

Named Pipes
● Named pipes are more powerful than ordinary pipes:
● communication is bidirectional
● no parent-child relationship is necessary between the communicating processes
● several processes can use the named pipe for communication
● Provided on both UNIX and Windows systems
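To complement the ordinary-pipe example above, here is a minimal sketch of the writer side of a UNIX named pipe, created with mkfifo(); the path /tmp/demo_fifo is an arbitrary choice, and an unrelated reader process would simply open the same path:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    /* create a FIFO in the filesystem; any process that can open this
       path may use it, so no parent-child relationship is required */
    if (mkfifo("/tmp/demo_fifo", 0666) == -1)
        perror("mkfifo");                       /* may already exist */

    int fd = open("/tmp/demo_fifo", O_WRONLY);  /* blocks until a reader opens */
    if (fd == -1) { perror("open"); return 1; }

    write(fd, "Greetings\n", 10);
    close(fd);

    /* reader side (another process):
       fd = open("/tmp/demo_fifo", O_RDONLY); read(fd, buf, sizeof(buf)); */
    return 0;
}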