Lecture Notes of Modeling and Simulation

7th Sem IT  BCS-408  MODELING & SIMULATION (3-1-0)  Cr.-4

Module I (10 Lectures)
Inventory Concept: The technique of Simulation. : 1 class
Major application areas, concept of a System. : 1 class
Environment. : 1 class
Continuous and discrete systems. : 1 class
Systems modeling, types of models. : 1 class
Progress of a Simulation Study. : 1 class
Monte Carlo Method. : 1 class
Comparison of Simulation and Analytical Methods. : 1 class
Numerical Computation Technique for discrete and continuous models. : 1 class
Continuous System Simulation. : 1 class
Revision

Module II (12 Lectures)
Probability Concepts in Simulation. : 2 classes
Stochastic variables, Discrete and Continuous Probability Functions. : 2 classes
Numerical evaluation of continuous probability functions, continuous uniformly distributed random numbers. : 2 classes
Random Number Generators: Linear Congruential Generator, Mid-Square Method, Multiplicative Congruential Generator, Rejection Method. : 2 classes
Testing of Random Numbers. : 2 classes
Generation of Stochastic Variates. : 1 class
Arrival Patterns, Service Times. : 1 class
Revision

Module III (10 Lectures) Discrete System Simulation and GPSS:
Discrete Events, Representation of Time, generation of arrival patterns. : 2 classes
Fixed time step versus next event simulation, Simulation of a Telephone System, Delayed calls. : 2 classes
Introduction to GPSS: Creating and moving transactions, queues. : 2 classes
Facilities and storages, gathering statistics, conditional transfers, program control statements, priorities and parameters. : 2 classes
Standard numerical attributes, functions, gates, logic switches and tests, Variables, Select and Count. : 2 classes
Revision

Module IV (10 Lectures)
Simulation Languages and Practical Systems. : 1 class
Continuous and discrete systems languages, factors in the selection of a discrete systems simulation language. : 2 classes
Computer models of queuing, inventory and scheduling systems. : 2 classes
Design and Evaluation of Simulation Experiments: Length of simulation runs, validation, variance reduction techniques. : 2 classes
Experimental layout, analysis of simulation output, Recent trends and developments. : 1 class
Revision

Books:
1. System Simulation – Geoffrey Gordon, 2nd Edition, PHI
2. System Simulation with Digital Computer – Narsingh Deo, PHI

Module-I

Objectives:
• To give an overview of the course (Modeling & Simulation).
• Define important terminologies.
• Classify systems/models.

System: any set of interrelated components acting together to achieve a common objective.

Examples:
1. Battery
• Consists of anode, cathode, acid and other components.
• These components act together to achieve one objective, such as storing electricity.
2. University
• Consists of professors, students and employees.
• These objects act together to achieve the objective of the teaching & learning process.

A system consists of:
• Inputs: elements that cause changes in the system's variables.
• Outputs: the response of the system.
• System (process): defines the relationship between the inputs and outputs.

Some possible inputs (e.g., for a tank system):
• Inlet flow rate
• Temperature of entering material
• Concentration of entering material

Some possible outputs:
• Level in the tank
• Temperature of material in the tank
• Outlet flow rate
• Concentration of material in the tank

Qn: What inputs and outputs are needed when we want to model an Inventory Control System?

Model: A model describes the mathematical relationship between inputs and outputs.

Simulation: the process of using the mathematical model to determine the response of the system in different situations on a computer.
Classification of Systems

Systems can be classified based on different criteria:
• Spatial characteristics: lumped & distributed
• Continuity of the time variable: continuous-time & discrete-time
• Quantization of the dependent variable: quantized & non-quantized
• Parameter variation: time-varying & fixed (time-invariant)
• Superposition principle: linear & nonlinear

Continuous-time System:
• The signal is defined for all t in an interval [ti, tf].

Discrete-time System:
• The signal is defined for a finite number of time points t0, t1, …

A system is linear:
• if it satisfies the superposition principle.
• A system satisfies the superposition principle if the following conditions are satisfied:
1. Multiplying the input by any constant multiplies the output by the same constant.
2. The response to several inputs applied simultaneously is the sum of the individual responses to each input applied separately.

Quantized-variable System:
• The variable is restricted to a finite or countable number of distinct values.

Non-quantized-variable System:
• The variable can assume any value within a continuous range.

Characteristics of Lumped Systems:
• Only one independent variable (t)
• No dependence on the spatial coordinates
• Modeled by ordinary differential equations
• Needs a finite number of state variables

Distributed System:
• More than one independent variable
• Depends on the spatial coordinates or some of them
• Modeled by partial differential equations
• Needs an infinite number of state variables

Time-variant Systems:
• The change of state of a variable is observed as it occurs, and the related information is recorded at that point of change.

Fixed-time Event Systems:
• Changes of the state of the variable occur at some constant interval.

Models and types: Models are replicas of systems, which can be represented physically or mathematically. All physical and mathematical models can further be divided into categories such as static and dynamic.
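The two superposition conditions above can be checked numerically. Here is a minimal sketch; the example systems f_linear and f_nonlinear are illustrative choices of mine, not taken from the notes:

```python
# Check the superposition principle for two single-input example systems.

def f_linear(x):
    return 3 * x          # scaling and additivity both hold

def f_nonlinear(x):
    return x * x          # violates both conditions

def satisfies_superposition(f, x1=2.0, x2=5.0, k=4.0):
    homogeneous = f(k * x1) == k * f(x1)       # condition 1: scaling by a constant
    additive = f(x1 + x2) == f(x1) + f(x2)     # condition 2: sum of individual responses
    return homogeneous and additive

print(satisfies_superposition(f_linear))       # True  -> linear
print(satisfies_superposition(f_nonlinear))    # False -> nonlinear
```

A system must pass both checks for every choice of inputs and constants; failing them for even one choice, as f_nonlinear does here, is enough to show nonlinearity.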
Simulation models may be either deterministic or stochastic (meaning probabilistic). In a deterministic simulation, all of the events and relationships among the variables are governed entirely by a combination of known, but possibly complicated, rules. The advantage of simulation is that you can still answer the question even if the model is too complicated to solve analytically. In a stochastic simulation, "random variables" are included in the model to represent the influence of factors that are unpredictable, unknown, or beyond the scope of the model we use in the simulation.

In many applications, such as a model of the tellers in a bank, it makes sense to incorporate random variables into the model. In the case of a bank, we might wish to assume that there is a stream of anonymous customers coming in the door at unpredictable times, rather than explicitly modeling the behavior of each actual customer to determine when they plan to go to the bank. It is worth noting here that it is well known in statistics that when we combine the actions of a large population of more-or-less independently operating entities (customers, etc.), the resulting behavior appears to have been randomly produced, and the patterns of activity followed by the individual entities within that population are unimportant. For example, all of the telephone systems designed in the last 60 years are based on the (quite justified) assumption that the number of calls made in fixed-length time intervals obeys the Poisson distribution. Thus the generation and use of random variables is an important topic in simulation.

Static Versus Dynamic Simulation Models

Another dimension along which simulation models can be classified is that of time. If the model is used to simulate the operation of a system over a period of time, it is dynamic. The baseball example above uses dynamic simulation. On the other hand, if no time is involved in a model, it is static.
Many gambling situations (e.g., dice, roulette) can be simulated to determine the odds of winning or losing. Since only the number of bets made, rather than the duration of gambling, matters, static simulation models are appropriate for them.

Monte Carlo Simulation (named after a famous casino town in Europe) refers to the type of simulation in which a static, approximate, and stochastic model is used for a deterministic system. Let us now look at an example of Monte Carlo simulation. Consider estimating the value of π by finding the approximate area of a circle with a unit radius. The first quadrant of the circle is enclosed by a unit square. If pairs of uniformly distributed pseudo-random values for the x and y coordinates in [0, 1] are generated, the probability of the corresponding points falling in the quarter circle is simply the ratio of the area of the quarter circle to that of the unit square.

Such a program is an example of Monte Carlo integration, by which definite integrals of arbitrary but finite functions can be estimated. Consider the integral of f(x) over the interval from x = a to x = b. Such an integral can be partitioned into segments above or below the horizontal axis. Without loss of generality, let us assume f(x) ≥ 0 for a ≤ x ≤ b. We can then bound the curve with a rectangle with borders of x = a, x = b, y = 0, and y = ymax, where ymax is the maximum value of f(x) in the interval [a, b]. By generating random points uniformly distributed over this rectangular area, and deciding whether they fall above or below the curve f(x), we can estimate the integral.

Dynamic Simulation

In the remainder of the chapter, dynamic stochastic simulation models will be emphasized.
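Before moving on, the Monte Carlo π estimate described above can be sketched in a few lines. This is an illustrative reconstruction (the notes' own program was not reproduced here), with function and variable names of my choosing:

```python
import random

def estimate_pi(n=1_000_000, seed=42):
    """Estimate pi by sampling points uniformly in the unit square."""
    rng = random.Random(seed)   # fixed seed makes the run reproducible
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:       # point falls inside the quarter circle
            inside += 1
    # (area of quarter circle) / (area of unit square) = pi/4
    return 4.0 * inside / n

print(estimate_pi())   # approximately 3.14
```

The same scheme gives Monte Carlo integration of f(x): sample points uniformly in the bounding rectangle and count the fraction that falls below the curve.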
A dynamic simulation program that is written in a general-purpose programming language (such as Turing) must:
i) keep track of "simulated" time,
ii) schedule events at "simulated" times in the future, and
iii) cause appropriate changes in state to occur when "simulated" time reaches the time at which one or more events take place.

The structure of a simulation program may be either time-driven or event-driven, depending on the nature of the basic loop of the program. In time-driven models (see Figure 2a), each time through the basic loop, simulated time is advanced by some "small" (in relation to the rates of change of the variables in the program) fixed amount, and then each possible event type is checked to see which, if any, events have occurred at that point in simulated time. In event-driven models (see Figure 2b), events of various types are scheduled at future points in simulated time. The basic program loop determines when the next scheduled event should occur, as the minimum of the scheduled times of each possible event. Simulated time is then advanced to exactly that event time, and the corresponding event-handling routine is executed to reflect the change of state that occurs as a result of that event.

Constructing a Simulation Model

1. Identification of Components
Our first task is to identify those system components that should be included in the model. This choice depends not only on the real system, but also on the aspects of its behavior that we intend to investigate. The only complete model of a system is the system itself. Any other model includes assumptions or simplifications. Some components of a system may be left out of the system model if their absence is not expected to alter the aspects of behavior that we wish to observe.
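The event-driven basic loop described above can be sketched with a priority queue of pending events. This is a simplified illustration with assumed names, not the notes' own code:

```python
import heapq

def run_event_driven(end_time, initial_events):
    """Minimal event-driven simulation loop.

    initial_events: list of (time, label) pairs scheduled at the start.
    Returns the log of (time, label) pairs in the order they are handled.
    """
    future = list(initial_events)
    heapq.heapify(future)                  # earliest scheduled event pops first
    log = []
    while future:
        t, label = heapq.heappop(future)   # minimum of the scheduled times
        if t > end_time:                   # end-of-simulation pseudo-event
            break
        clock = t                          # advance simulated time exactly to it
        log.append((clock, label))
        # a real handler would change state here and may schedule new events
    return log

events = [(5.0, "arrival"), (2.0, "arrival"), (9.0, "progress-report")]
print(run_event_driven(10.0, events))
```

A time-driven loop would instead advance the clock by a fixed small step on every iteration and check each event type at each step.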
We assume that the service station problem specification includes the following information:
i) the times between successive arrivals of customers, expressed as a probability distribution,
ii) the distribution of the number of liters of gasoline needed by customers,
iii) the distribution of how long service at the pump takes, as a function of the number of liters needed,
iv) the probability that an arriving customer will balk, as a function of the number of cars already waiting at the service station and the number of liters of gasoline he needs,
v) the profit per liter of gasoline sold, and
vi) the cost per day of running a pump (including an attendant's salary, etc.).

2. Entities
Customers, resources, service facilities, materials, service personnel, etc., are entities. Each type of entity has a set of relevant attributes. In our service station example, the entities are cars, with the attributes 'arrival time' and 'number of liters needed', and pumps, with the attribute 'time to the completion of service to the current customer'.

3. Events
Events are occurrences that alter the system state. Here the events are the arrival of a customer at the station, the start-of-service to a customer at a pump, and the completion-of-service to a customer at a pump. The first arrival event must be scheduled in the initialization routine; the remaining arrivals are handled by letting each invocation of the arrival routine schedule the next arrival event. The scheduling of a start-of-service event takes place either in the arrival routine, if there are no other cars waiting and a pump is available, or in the completion-of-service routine, otherwise. Each time the start-of-service routine is invoked, the completion-of-service at that pump is scheduled. Besides the types of events identified in the real system, there are two other pseudo-events that should be included in every simulation program.
End-of-simulation halts the simulation after a certain amount of simulated time has elapsed, and initiates the final output of the statistics and measurements gathered in the run. End-of-simulation should be scheduled during the initialization of an event-driven simulation; for time-driven simulations, it is determined by the upper bound of the basic program loop. A progress-report event allows a summary of statistics and measurements to be printed after specified intervals of simulated time have elapsed. These progress-report summaries can be used to check the validity of the program during its development, and also to show whether or not the system is settling down to some sort of "equilibrium" (i.e., stable) behavior.

4. Groupings
Similar entities are often grouped in meaningful ways. Sometimes an ordering of the entities within a group is relevant. In the service station example, the groupings are available pumps (that are not currently serving any auto), busy pumps (ordered by the service completion time for the customer in service), and autos awaiting service (ordered by time of arrival). In a Turing program, such groupings can be represented as linked lists of records, as long as a suitable link field is included in the records for that entity type.

5. Relationships
Pairs of non-similar entities may be related. For example, a busy pump is related to the auto that is being served. Relationships can be indicated by including, in the record for one of the entities in a pair, a link to the other entity. In some cases, it may be desirable for each of the entities in the pair to have a link to the other entity. For example, the relationship between a busy pump and the auto being served can be represented by a link from the pump record to the corresponding auto record.

6. Stochastic Simulation
In a stochastic simulation, certain attributes of entities or events must be chosen "at random" from some probability distribution.
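Such "at random" choices can be sketched with Python's standard library generators. The particular distributions and parameters below are illustrative assumptions of mine, not figures from the notes:

```python
import random

rng = random.Random(1)   # fixed seed for a reproducible run

# Hypothetical service-station attributes drawn from probability distributions:
interarrival = rng.expovariate(1 / 4.0)   # exponential, mean 4 minutes between arrivals
liters = rng.randint(10, 60)              # liters needed, uniform on 10..60
service_time = 1.0 + 0.05 * liters        # service time grows with liters needed

print(interarrival, liters, service_time)
```

Seeding the generator makes the whole sequence of "random" attributes reproducible, which matters later when comparing strategies under identical customer streams.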
In the service station example, these include the customer interarrival times (i.e., the times between successive arrivals), the number of liters needed, the service time (based on the number of liters needed), and whether the customer decides to balk (based on the length of the waiting queue and the number of liters needed).

7. Strategies
Typically, a simulation experiment consists of comparing several alternative approaches to running the system to find out which one(s) maximize some measure of system "performance". In the case of the service station, the strategies consist of keeping different numbers of pumps in service.

8. Measurements
The activity of the system will be reflected in measurements associated with entities, events, groups, or relationships. Such measurements are used in specifying the performance of the system. If the measurement is associated with an entity, it should be recorded in a field contained within the record for that entity. For measurements associated with events, groupings or relationships, additional variables or records must be declared to record the measured quantities. In our example system, we might wish to measure the numbers of customers served, liters of gasoline sold, customers who balk, and liters of gasoline not sold to balking customers, the total waiting time (from which we can calculate average waiting time), the total service time (to get pump utilization), and the total time that the waiting queue is empty. Care must also be taken that conditions due to customers left in the system when end-of-simulation is reached are handled properly.

Advantages and disadvantages of simulation

Competition in the computer industry has led to technological breakthroughs that are allowing hardware companies to continually produce better products. It seems that every week another company announces its latest release, each with more options, memory, graphics capability, and power.
What is unique about new developments in the computer industry is that they often act as a springboard for other related industries to follow. One industry in particular is the simulation software industry. As computer hardware becomes more powerful, more accurate, faster, and easier to use, simulation software does too. The number of businesses using simulation is rapidly increasing. Many managers are realizing the benefits of utilizing simulation for more than just the one-time remodeling of a facility. Rather, due to advances in software, managers are incorporating simulation in their daily operations on an increasingly regular basis.

Advantages:

For most companies, the benefits of using simulation go beyond just providing a look into the future. These benefits are mentioned by many authors (Banks, Carson, Nelson, and Nicol, 2000; Law and Kelton, 2000; Pegden, Shannon and Sadowski, 1995; Schriber, 1991) and include the following:

Choose Correctly. Simulation lets you test every aspect of a proposed change or addition without committing resources to its acquisition. This is critical, because once the hard decisions have been made, the bricks have been laid, or the material-handling systems have been installed, changes and corrections can be extremely expensive.

Time Compression and Expansion. By compressing or expanding time, simulation allows you to speed up or slow down phenomena so that you can thoroughly investigate them. You can examine an entire shift in a matter of minutes if you desire, or you can spend two hours examining all the events that occurred during one minute of simulated activity.

Understand "Why?". Managers often want to know why certain phenomena occur in a real system.
With simulation, you determine the answer to the "why" questions by reconstructing the scene and taking a microscopic examination of the system to determine why the phenomenon occurs. You cannot accomplish this with a real system because you cannot see or control it in its entirety.

Explore Possibilities. One of the greatest advantages of using simulation software is that once you have developed a valid simulation model, you can explore new policies, operating procedures, or methods without the expense and disruption of experimenting with the real system. Modifications are incorporated in the model, and you observe the effects of those changes on the computer rather than on the real system.

Diagnose Problems. The modern factory floor or service organization is very complex, so complex that it is impossible to consider all the interactions taking place at any given moment. Simulation allows you to better understand the interactions among the variables that make up such complex systems. Diagnosing problems and gaining insight into the importance of these variables increases your understanding of their important effects on the performance of the overall system. The last three claims can be made for virtually all modeling activities (queuing, linear programming, etc.). However, with simulation the models can become very complex and thus have a higher fidelity, i.e., they are more valid representations of reality.

Identify Constraints. Production bottlenecks give manufacturers headaches. It is easy to forget that bottlenecks are an effect rather than a cause. However, by using simulation to perform bottleneck analysis, you can discover the cause of the delays in work-in-process, information, materials, or other processes.

Develop Understanding. Many people operate with the philosophy that talking loudly, using computerized layouts, and writing complex reports convinces others that a manufacturing or service system design is valid.
In many cases these designs are based on someone's thoughts about the way the system operates rather than on analysis. Simulation studies aid in providing understanding about how a system really operates, rather than indicating an individual's predictions about how a system will operate.

Visualize the Plan. Taking your designs beyond CAD drawings by using the animation features offered by many simulation packages allows you to see your facility or organization actually running. Depending on the software used, you may be able to view your operations from various angles and levels of magnification, even in 3-D. This allows you to detect design flaws that appear credible when seen just on paper or in a 2-D CAD drawing.

Build Consensus. Using simulation to present design changes creates an objective opinion. You avoid having inferences made when you approve or disapprove of designs, because you simply select the designs and modifications that provided the most desirable results, whether it be increasing production or reducing the waiting time for service. In addition, it is much easier to accept reliable simulation results, which have been modeled, tested, validated, and visually represented, than one person's opinion of the results that will occur from a proposed design.

Prepare for Change. We all know that the future will bring change. Answering all of the "what-if" questions is useful for both designing new systems and redesigning existing systems. Interacting with all those involved in a project during the problem-formulation stage gives you an idea of the scenarios that are of interest. Then you construct the model so that it answers questions pertaining to those scenarios. What if an automated guided vehicle (AGV) is removed from service for an extended period of time? What if demand for service increases by 10 percent? What if....? The options are unlimited.

Wise Investment.
The typical cost of a simulation study is substantially less than 1% of the total amount being expended for the implementation of a design or redesign. Since the cost of a change or modification to a system after installation is so great, simulation is a wise investment.

Train the Team. Simulation models can provide excellent training when designed for that purpose. Used in this manner, the team provides decision inputs to the simulation model as it progresses. The team, and individual members of the team, can learn from their mistakes and learn to operate better. This is much less expensive and less disruptive than on-the-job learning.

Specify Requirements. Simulation can be used to specify requirements for a system design. For example, the specifications for a particular type of machine in a complex system to achieve a desired goal may be unknown. By simulating different capabilities for the machine, the requirements can be established.

Disadvantages:

The disadvantages of simulation include the following:

Model Building Requires Special Training. It is an art that is learned over time and through experience. Furthermore, if two models of the same system are constructed by two competent individuals, they may have similarities, but it is highly unlikely that they will be the same.

Simulation Results May Be Difficult to Interpret. Since most simulation outputs are essentially random variables (they are usually based on random inputs), it may be hard to determine whether an observation is a result of system interrelationships or of randomness.

Simulation Modeling and Analysis Can Be Time-Consuming and Expensive. Skimping on resources for modeling and analysis may result in a simulation model and/or analysis that is not sufficient for the task.

Simulation May Be Used Inappropriately. Simulation is used in some cases when an analytical solution is possible, or even preferable.
This is particularly true in the simulation of some waiting lines where closed-form queueing models are available, at least for long-run evaluation.

Pseudo-Random Number Generation

Probability is used to express our confidence in the outcome of some random event as a real number between 0 and 1. An outcome that is impossible has probability 0; one that is inevitable has probability 1. Sometimes, the probability of an outcome is calculated by recording the outcome for a very large number of occurrences of that random event (the more the better), and then taking the ratio of the number of events in which the given outcome occurred to the total number of events. It is also possible to determine the probability of an event in a non-experimental way, by listing all possible non-overlapping outcomes for the experiment and then using some insight to assign a probability to each outcome.

For example, we can show that the probability of getting "heads" twice during three coin tosses should be 3/8 from the following argument. First, we list all eight possible outcomes for the experiment (TTT, TTH, THT, THH, HTT, HTH, HHT, HHH). Next, we assign equal probability to each outcome (i.e., 1/8), because a "fair" coin should come up heads half the time, and the outcome of one coin toss should have no influence on another. (In general, when we have no other information to go on, each possible outcome for an experiment is assigned the same probability.) And finally, we observe that the desired event (namely, exactly two H's) includes three of these outcomes, so its probability should be the sum of the probabilities for those outcomes, namely 3/8.

A random variable is a variable, say X, whose value, x, depends on the outcome of some random event. For example, we can define a random variable that takes on the value 1 whenever "heads" occurs and 0 otherwise.
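The 3/8 result above can be verified by enumerating the eight equally likely outcomes directly; a quick illustrative check:

```python
from itertools import product

# All 2^3 equally likely outcomes of three coin tosses
outcomes = list(product("HT", repeat=3))
favorable = [o for o in outcomes if o.count("H") == 2]

print(len(favorable), "/", len(outcomes))   # 3 / 8
```

Exhaustive enumeration like this is only feasible for small sample spaces; for larger ones, the experimental (frequency-ratio) approach described above is used instead.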
Generating Uniform Pseudo-Random Numbers

Since the result of executing any computer program is in general both predictable and repeatable, the idea that a computer can be used to generate a sequence of random numbers seems to be a contradiction. There is no contradiction, however, because the sequence of numbers that is actually generated by a computer is entirely predictable once the algorithm is known. Such algorithmically generated sequences are called pseudo-random sequences because they appear to be random, in the sense that a good pseudo-random number sequence can pass most statistical tests designed to check that its distribution is the same as the intended distribution. On the other hand, to call them "random" numbers is no worse than it is to label floating-point numbers as "real" numbers in a programming language.

It is worth noting that the availability of pseudo-random (rather than truly random) numbers is actually an advantage in the design of simulation experiments, because it means that our "random" numbers are reproducible. Thus, to compare two strategies for operating our service station, we can run two experiments with different numbers of pumps but with exactly the same sequence of customers. Were we to attempt the same comparison using an actual service station, we would have to try the two alternatives one after the other, with a different sequence of customers (e.g., those that came on Tuesday instead of those that came on Monday), making it difficult to claim that a change in our profits was due to the change in strategy rather than to the differences in traffic patterns between Mondays and Tuesdays.

In general, most pseudo-random number generators produce uniform values. (This does not lead to a loss of generality because, as we shall see in the next section, uniform values can be transformed into other distributions.) Since only finite accuracy is possible on a real computer, we cannot generate continuous random variables.
However, it is possible to generate pseudo-random integers, xk, uniformly between 0 and some very large number (say m), which we use directly as a discrete uniform distribution, or transform into the fraction xk/m to be used as an approximation to a uniform continuous distribution between 0 and 1.

The most popular method of generating uniform pseudo-random integers is as follows. First, an initial value x0, the seed, is chosen. Thereafter, each number, xk, is used to generate the next number in the sequence, xk+1, from the relation

    xk+1 := (a * xk + c) mod m

The only difficult part is choosing the constants a, c and m. These must be chosen both to ensure that the period of the random number generator (i.e., the number of distinct values before the sequence repeats) is large, and also to ensure that consecutive numbers appear to be independent. Note that the period can be at most m, because there are only m distinct values between 0 and m - 1.

Progress in a simulation study

1. Problem formulation. Every simulation study begins with a statement of the problem. If the statement is provided by those that have the problem (the client), the simulation analyst must take extreme care to ensure that the problem is clearly understood. If a problem statement is prepared by the simulation analyst, it is important that the client understand and agree with the formulation. It is suggested that a set of assumptions be prepared by the simulation analyst and agreed to by the client. Even with all of these precautions, it is possible that the problem will need to be reformulated as the simulation study progresses.

2. Setting of objectives and overall project plan. Another way to state this step is "prepare a proposal." This step should be accomplished regardless of the location of the analyst and client, viz., as an external or internal consultant. The objectives indicate the questions that are to be answered by the simulation study.
The project plan should include a statement of the various scenarios that will be investigated. The plans for the study should be indicated in terms of the time that will be required, the personnel that will be used, hardware and software requirements if the client wants to run the model and conduct the analysis, stages in the investigation, output at each stage, the cost of the study, and billing procedures, if any.

3. Model conceptualization. The real-world system under investigation is abstracted by a conceptual model, a series of mathematical and logical relationships concerning the components and the structure of the system. It is recommended that modeling begin simply and that the model grow until a model of appropriate complexity has been developed. For example, consider the model of a manufacturing and material-handling system. The basic model, with the arrivals, queues and servers, is constructed. Then add the failures and shift schedules. Next, add the material-handling capabilities. Finally, add the special features. Constructing an unduly complex model will add to the cost of the study and the time for its completion without increasing the quality of the output. Maintaining client involvement will enhance the quality of the resulting model and increase the client's confidence in its use.

4. Data collection. Shortly after the proposal is "accepted", a schedule of data requirements should be submitted to the client. In the best of circumstances, the client has been collecting the kind of data needed in the format required, and can submit these data to the simulation analyst in electronic format. Oftentimes, the client indicates that the required data are indeed available. However, when the data are delivered they are found to be quite different from what was anticipated. For example, in the simulation of an airline-reservation system, the simulation analyst was told "we have every bit of data that you want over the last five years."
When the study commenced, the data delivered were the average "talk time" of the reservationist for each of the years. Individual values were needed, not summary measures. Model building and data collection are shown as contemporaneous steps. This is to indicate that the simulation analyst can readily construct the model while the data collection is progressing.

5. Model translation. The conceptual model constructed in Step 3 is coded into a computer-recognizable form, an operational model.

6. Verified? Verification concerns the operational model. Is it performing properly? Even with small, textbook-sized models, it is quite possible that they have verification difficulties. These models are orders of magnitude smaller than real models (say, 50 lines of computer code versus 2,000 lines of computer code). It is highly advisable that verification take place as a continuing process. It is ill-advised for the simulation analyst to wait until the entire model is complete to begin the verification process. Also, use of an interactive run controller, or debugger, is highly encouraged as an aid to the verification process.

7. Validated? Validation is the determination that the conceptual model is an accurate representation of the real system. Can the model be substituted for the real system for the purposes of experimentation? If there is an existing system, call it the base system, then an ideal way to validate the model is to compare its output to that of the base system. Unfortunately, there is not always a base system. There are many methods for performing validation.

8. Experimental design. For each scenario that is to be simulated, decisions need to be made concerning the length of the simulation run, the number of runs (also called replications), and the manner of initialization, as required.

9. Production runs and analysis. Production runs, and their subsequent analysis, are used to estimate measures of performance for the scenarios that are being simulated.

10.
More runs? Based on the analysis of runs that have been completed, the simulation analyst determines if additional runs are needed and if any additional scenarios need to be simulated.

11. Documentation and reporting. Documentation is necessary for numerous reasons. If the simulation model is going to be used again by the same or different analysts, it may be necessary to understand how the simulation model operates. This will enable confidence in the simulation model so that the client can make decisions based on the analysis. Also, if the model is to be modified, this can be greatly facilitated by adequate documentation. The results of all the analysis should be reported clearly and concisely. This will enable the client to review the final formulation, the alternatives that were addressed, the criterion by which the alternative systems were compared, the results of the experiments, and analyst recommendations, if any.

12. Implementation. The simulation analyst acts as a reporter rather than an advocate. The report prepared in step 11 stands on its merits, and is just additional information that the client uses to make a decision. If the client has been involved throughout the study period, and the simulation analyst has followed all of the steps rigorously, then the likelihood of a successful implementation is increased.

Example: An inventory control system is a process for managing and locating objects or materials. In common usage, the term may also refer to just the software components. Modern inventory control systems often rely upon barcodes and radio-frequency identification (RFID) tags to provide automatic identification of inventory objects. Inventory objects could include any kind of physical asset: merchandise, consumables, fixed assets, circulating tools, library books, or capital equipment.
To record an inventory transaction, the system uses a barcode scanner or RFID reader to automatically identify the inventory object, and then collects additional information from the operators via fixed terminals (workstations) or mobile computers. The new trend in inventory management is to label inventory and assets with a QR code, and use smartphones to keep track of inventory count and movement. These new systems are especially useful for field service operations, where an employee needs to record inventory transactions or look up inventory stock in the field, away from the computers and hand-held scanners.

Module-II (12 Lectures)

Objective: To use the Probability concepts in Simulation

Events that cannot be predicted precisely are often called random. Many, if not most, of the inputs to, and processes that occur in, systems are to some extent random. Hence, so too are the outputs or predicted impacts, and even people's reactions to those outputs or impacts.

Probability Concepts and Methods

The basic concept in probability theory is that of the random variable. By definition, the value of a random variable cannot be predicted with certainty. It depends, at least in part, on the outcome of a chance event. Examples are: (1) the number of years until the flood stage of a river washes away a small bridge; (2) the number of times during a reservoir's life that the level of the pool will drop below a specified level; (3) the rainfall depth next month; and (4) next year's maximum flow at a gauge site on an unregulated stream. The values of all of these random events or variables are not knowable before the event has occurred. Probability can be used to describe the likelihood that these random variables will equal specific values or be within a given range of specific values.

Distributions of Random Events

Given a set of observations to which a distribution is to be fit, one first selects a distribution function to serve as a model of the distribution of the data.
The choice of a distribution may be based on experience with data of that type, some understanding of the mechanisms giving rise to the data, and/or examination of the observations themselves. One can then estimate the parameters of the chosen distribution and determine whether the fitted distribution provides an acceptable model of the data.

Stochastic Processes

Historical records of rainfall or stream flow at a particular site are a sequence of observations called a time series. In a time series, the observations are ordered by time, and it is generally the case that the observed value of the random variable at one time influences one's assessment of the distribution of the random variable at later times. This means that the observations are not independent. Time series are conceptualized as being a single observation of a stochastic process, which is a generalization of the concept of a random variable.

Stochastic Variables, Discrete and Continuous Probability Functions

The variables that can change with certain probability are called stochastic variables (random variables). In discrete system simulation we use the probability mass function. If a random variable is a discrete variable, its probability distribution is called a discrete probability distribution. An example will make this clear. Suppose you flip a coin two times. This simple statistical experiment can have four possible outcomes: HH, HT, TH, and TT. Now, let the random variable X represent the number of heads that result from this experiment. The random variable X can only take on the values 0, 1, or 2, so it is a discrete random variable. The probability distribution for this statistical experiment appears below.

Number of heads    Probability
0                  0.25
1                  0.50
2                  0.25

The above table represents a discrete probability distribution because it relates each value of a discrete random variable with its probability of occurrence. The binomial and Poisson distributions will also be covered here.
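The coin-flip distribution above can be checked by enumerating the four equally likely outcomes:

```python
from itertools import product
from collections import Counter

# Enumerate the four outcomes HH, HT, TH, TT and count heads in each.
outcomes = ["".join(p) for p in product("HT", repeat=2)]
counts = Counter(o.count("H") for o in outcomes)

# Probability of each value of X = number of heads.
pmf = {heads: c / len(outcomes) for heads, c in counts.items()}
# pmf == {0: 0.25, 1: 0.5, 2: 0.25}, matching the table above
```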
When the variable being observed is continuous, an infinite number of possible values can be assumed, and we describe the variable by a probability density function. Integrating it over the range gives the cumulative distribution function.

Numerical evaluation of Continuous Probability functions and continuous uniformly distributed random numbers: The purpose of introducing a probability function into a simulation is to generate random numbers with that particular distribution. The customary way of organizing data derived from observations is to display them as a frequency distribution. A relative frequency distribution is a better approach. By a continuous uniform distribution we mean that the probability of a variable, X, falling in any interval within a certain range of values is proportional to the ratio of the interval size to the range; that is, every point in the range is equally likely to be chosen.

Random number generators: Linear Congruential Generator, Mid Square Method, Multiplicative Congruential Generator, Rejection Method:

A linear congruential generator (LCG) is an algorithm that yields a sequence of pseudo-randomized numbers calculated with a discontinuous piecewise linear equation. The method represents one of the oldest and best-known pseudorandom number generator algorithms. The theory behind them is relatively easy to understand, and they are easily implemented and fast, especially on computer hardware which can provide modulo arithmetic by storage-bit truncation. The generator is defined by the recurrence relation:

X_{n+1} = (a * X_n + c) mod m

where X is the sequence of pseudorandom values, and

m, m > 0 – the "modulus"
a, 0 < a < m – the "multiplier"
c, 0 ≤ c < m – the "increment"
X_0, 0 ≤ X_0 < m – the "seed" or "start value"

are integer constants that specify the generator. If c = 0, the generator is often called a multiplicative congruential generator (MCG), or Lehmer RNG. If c ≠ 0, the method is called a mixed congruential generator.

The mid-square method is a method of generating pseudorandom numbers.
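The congruential recurrence just defined, and the mid-square method just named, can both be sketched in a few lines. This is a minimal illustration, not production code: the LCG constants (a = 1103515245, c = 12345, m = 2^31, values used by some C-library rand implementations) and the four-digit mid-square width are assumptions made for the example.

```python
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    """n values of the mixed congruential generator X_{k+1} = (a*X_k + c) mod m."""
    values, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        values.append(x)
    return values

def mid_square(seed, n, digits=4):
    """n values of the mid-square method: square, zero-pad, keep the middle digits."""
    values, x = [], seed
    for _ in range(n):
        squared = str(x * x).zfill(2 * digits)   # e.g. 5735^2 -> "32890225"
        mid = len(squared) // 2
        x = int(squared[mid - digits // 2 : mid + digits // 2])
        values.append(x)
    return values

uniforms = [v / 2**31 for v in lcg(seed=12345, n=5)]   # fractions in [0, 1)
midsq = mid_square(seed=5735, n=3)                     # [8902, 2456, 319]
```

Dividing each LCG integer by m turns the discrete sequence into an approximation of the continuous uniform distribution on [0, 1), exactly as described earlier in these notes.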
In practice the mid-square method is not a good one, since its period is usually very short and it has some severe weaknesses, such as the output sequence almost always converging to zero. Here we square the seed value and take the middle digits of the result as the next random number, continuing in this way until we have obtained the desired set of random numbers.

The rejection sampling method generates sampling values from an arbitrary probability distribution function f(x) by using an instrumental distribution g(x), under the only restriction that f(x) ≤ M g(x), where M is an appropriate bound on f(x)/g(x). Rejection sampling is usually used in cases where the form of f(x) makes sampling difficult. Instead of sampling directly from the distribution f(x), we use an envelope distribution g(x) where sampling is easier. These samples from g(x) are probabilistically accepted or rejected.

Testing of Random Numbers:

Chi-square is a statistical test commonly used to compare observed data with data we would expect to obtain according to a specific hypothesis. For example, if, according to Mendel's laws, you expected 10 of 20 offspring from a cross to be male and the actual observed number was 8 males, then you might want to know about the "goodness of fit" between the observed and the expected. Were the deviations (differences between observed and expected) the result of chance, or were they due to other factors? How much deviation can occur before you, the investigator, must conclude that something other than chance is at work, causing the observed to differ from the expected? The chi-square test is always testing what scientists call the null hypothesis, which states that there is no significant difference between the expected and observed result.

The formula for calculating chi-square (χ²) is:

χ² = Σ (o − e)² / e

That is, chi-square is the sum of the squared difference between observed (o) and expected (e) data (or the deviation, d), divided by the expected data, over all possible categories.
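The Mendel example above works out as follows; the second category (females) is implied by the first, with 12 observed against an expected 10.

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (o - e)^2 / e over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Observed 8 males and 12 females against an expected 10 of each.
x2 = chi_square(observed=[8, 12], expected=[10, 10])
# x2 = (8 - 10)^2 / 10 + (12 - 10)^2 / 10 = 0.8
```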
[Figure: the chi-squared distribution, showing χ² on the x-axis and the P-value on the y-axis.]

A chi-squared test, also referred to as a χ² test (or chi-square test), is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true. Also considered a chi-squared test is a test in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-squared distribution as closely as desired by making the sample size large enough.

The chi-squared (χ²) test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories. Does the number of individuals or objects that fall in each category differ significantly from the number you would expect? Is this difference between the expected and observed due to sampling variation, or is it a real difference?

Exact chi-squared distribution

One case where the distribution of the test statistic is an exact chi-squared distribution is the test that the variance of a normally distributed population has a given value, based on a sample variance. Such a test is uncommon in practice because values of variances to test against are seldom known exactly.

Chi-squared test for variance in a normal population

If a sample of size n is taken from a population having a normal distribution, then there is a result which allows a test to be made of whether the variance of the population has a pre-determined value. For example, a manufacturing process might have been in stable condition for a long period, allowing a value for the variance to be determined essentially without error. Suppose that a variant of the process is being tested, giving rise to a small sample of n product items whose variation is to be tested.
The test statistic T in this instance could be set to be the sum of squares about the sample mean, divided by the nominal value for the variance (i.e. the value to be tested as holding). Then T has a chi-squared distribution with n − 1 degrees of freedom. For example, if the sample size is 21, the acceptance region for T for a significance level of 5% is the interval 9.59 to 34.17.

Chi-squared test for independence and homogeneity in tables

Suppose a random sample of 650 of the 1 million residents of a city is taken, in which every resident of each of four neighborhoods, A, B, C, and D, is equally likely to be chosen. A null hypothesis says the randomly chosen person's neighborhood of residence is independent of the person's occupational classification, which is either "blue collar", "white collar", or "service". The data are tabulated in a 3 × 4 contingency table (occupational class by neighborhood). Let us take the sample proportion living in neighborhood A, 150/650, to estimate what proportion of the whole 1 million people live in neighborhood A. Similarly, we take 349/650 to estimate what proportion of the 1 million people are blue-collar workers. Then the null hypothesis of independence tells us that we should "expect" the number of blue-collar workers in neighborhood A to be

650 × (150/650) × (349/650) ≈ 80.54

Then in that "cell" of the table, we have

(observed − expected)² / expected

The sum of these quantities over all of the cells is the test statistic. Under the null hypothesis, it has approximately a chi-squared distribution whose number of degrees of freedom is

(number of rows − 1)(number of columns − 1) = (3 − 1)(4 − 1) = 6

If the test statistic is improbably large according to that chi-squared distribution, then one rejects the null hypothesis of independence.

A related issue is a test of homogeneity. Suppose that instead of giving every resident of each of the four neighborhoods an equal chance of inclusion in the sample, we decide in advance how many residents of each neighborhood to include.
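The expected-count arithmetic from the neighborhood example can be reproduced directly. Only the one cell worked in the text (blue-collar workers in neighborhood A) is computed; the full table of observed counts is not given here.

```python
def expected_count(row_total, col_total, grand_total):
    """Expected cell count under independence: row total * column total / n."""
    return row_total * col_total / grand_total

def cell_term(observed, expected):
    """One cell's (observed - expected)^2 / expected contribution to the statistic."""
    return (observed - expected) ** 2 / expected

# Blue-collar workers (row total 349) in neighborhood A (column total 150)
# out of a sample of 650.
e_blue_A = expected_count(349, 150, 650)   # about 80.54
df = (3 - 1) * (4 - 1)                     # 3 occupations, 4 neighborhoods -> 6
```

Summing `cell_term` over all twelve cells, once the observed counts are in hand, gives the test statistic to compare against the chi-squared distribution with 6 degrees of freedom.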
Then each resident has the same chance of being chosen as do all residents of the same neighborhood, but residents of different neighborhoods would have different probabilities of being chosen if the four sample sizes are not proportional to the populations of the four neighborhoods. In such a case, we would be testing "homogeneity" rather than "independence". The question is whether the proportions of blue-collar, white-collar, and service workers in the four neighborhoods are the same. However, the test is done in the same way.

The Kolmogorov-Smirnov Test

The Kolmogorov-Smirnov test is designed to test the hypothesis that a given data set could have been drawn from a given distribution. Unlike the chi-square test, it is primarily intended for use with continuous distributions and is independent of arbitrary computational choices such as bin width.

Suppose that we had collected four data points and sorted them into increasing order to get the data set 1.2, 3.1, 5.1, 6.7. From the pattern of our data alone, we might guess that, if we continued to collect data from this process, 0-25% of our observations would be less than or equal to 1.2, 25-50% would be less than or equal to 3.1, etc. Perhaps we would like to compare this empirical pattern to the pattern we would expect to observe if the data points were drawn from a given theoretical distribution; say, an exponential distribution with a mean of 5 (i.e., λ = 1/5 = 0.2). If data points were drawn from this exponential distribution, what fraction would we expect to see below 1.2? Below 3.1? These figures can be computed from the cumulative distribution function for the exponential distribution:

F(1.2) = 1 − e^(−(0.2)(1.2)) = 0.21
F(3.1) = 1 − e^(−(0.2)(3.1)) = 0.46
F(5.1) = 1 − e^(−(0.2)(5.1)) = 0.64
F(6.7) = 1 − e^(−(0.2)(6.7)) = 0.74

We compare this to our empirical pattern in Figure 1.
The first three rows contain the data points and our highest and lowest estimates of the fraction of the data that would fall below each point. The fourth row contains the result of plugging the data points into the theoretical distribution under consideration (in this case, the exponential distribution with a mean of 5). These values are the theoretical estimate of what fraction should fall below each data point. The fifth row is obtained by comparing the fourth row to the second and third rows. Is 0.21 near 0? Near 0.25? We take the absolute value of the larger of the two deviations. For example, in the first column, we get

|0 − 0.21| = 0.21
|0.25 − 0.21| = 0.04

so the larger deviation is 0.21. This gives an idea of how far our empirical pattern is from our theoretical pattern.

FIGURE 1: Computing D for the Kolmogorov-Smirnov test.

Row 1: Data point x_i                            1.2    3.1    5.1    6.7
Row 2: Empirical fraction below (low estimate)   0      0.25   0.50   0.75
Row 3: Empirical fraction below (high estimate)  0.25   0.50   0.75   1.00
Row 4: F(x_i)                                    0.21   0.46   0.64   0.74
Row 5: Largest deviation                         0.21   0.21   0.14   0.26
Row 6: Overall largest deviation (D)             0.26

Next, we look over the fifth row to find the largest overall deviation (D). The largest deviation, 0.26, is the value of our test statistic. Is this measure of "error" large or small for this situation? To make this judgment, we compare our computed value of the test statistic to a critical value from the table in Appendix A. Setting α = 0.10 and noting that our sample size is n = 4, we get a critical value of D_{4,0.10} = 0.565. Since our test statistic, D = 0.26, is less than 0.565, we do not reject the hypothesis that our data set was drawn from the exponential distribution with a mean of 5.

In general, we use the Kolmogorov-Smirnov test to compare a data set to a given theoretical distribution by filling in a table as follows:

• Row 1: Data set sorted into increasing order and denoted as x_i, where i = 1, ..., n.
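The Figure 1 computation above can be reproduced in a few lines; λ = 0.2 and the four data points are taken from the worked example.

```python
import math

data = [1.2, 3.1, 5.1, 6.7]        # already sorted into increasing order
lam = 0.2                          # exponential rate; mean = 1/lam = 5
n = len(data)

D = 0.0
for i, x in enumerate(data):
    Fx = 1.0 - math.exp(-lam * x)  # theoretical CDF at the data point (row 4)
    low, high = i / n, (i + 1) / n # empirical fraction bounds (rows 2 and 3)
    D = max(D, abs(Fx - low), abs(Fx - high))
# D is about 0.26, below the n = 4, alpha = 0.10 critical value of 0.565,
# so the exponential hypothesis is not rejected.
```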
