MA30087/50087: Optimisation methods of operational research

Brief notes to accompany the lectures.[1]

Prepared by A. E. Kyprianou[2]
Department of Mathematical Sciences
The University of Bath
Claverton Down
BA2 7AY

Suggested further reading: Elementary Linear Programming With Applications (Second Edition), B. Kolman and R. E. Beck, Academic Press 1980. ISBN 0-12-417910-X. This book goes at rather a slow pace but spells the theory and practice out very clearly.

Other books:
- J. Matoušek and B. Gärtner, Understanding and Using Linear Programming, Universitext, Springer (2007).
- B. D. Bunday, Basic Optimization Methods, Hodder Arnold (1984).
- M. S. Bazaraa, J. J. Jarvis and H. D. Sherali, Linear Programming and Network Flows, Wiley (2005).
- V. Chvátal, Linear Programming, Freeman (1983).
- D. G. Luenberger, Linear and Nonlinear Programming, Addison-Wesley (1984).
- F. S. Hillier and G. J. Lieberman, Introduction to Operations Research, McGraw-Hill (2001), 7th Edition.
- A. M. Glicksman, An Introduction to Linear Programming and the Theory of Games, Dover.

A very interesting source of material: the internet. Try googling 'linear programming' and look for lecture notes given at other universities.

[1] Corrections are strongly encouraged to be sent to a.kyprianou@bath.ac.uk.
[2] Also partly based on the older notes of M. E. Brigden. Thanks also to M. E. Brigden for supplying me with a comprehensive set of notes with additional examples and exercises.

Contents

1 Introduction: aims of the course  4
2 The linear programming problem  5
  2.1 A Simple Example  5
  2.2 Another example  5
  2.3 A third example  6
  2.4 General statement of the linear programming problem  7
  2.5 Slack variables and canonical form  8
3 Geometrical considerations  11
  3.1 Constraints, objective functions and hyperplanes  11
  3.2 Feasible solutions and convexity  18
  3.3 Extreme points  20
4 The fundamental theorem of linear programming  21
  4.1 Basic solutions  21
  4.2 The fundamental theorem  23
5 The simplex method  26
  5.1 Simplex algorithm in action: example 1  26
  5.2 Simplex algorithm in action: example 2  30
  5.3 Simplex algorithm in action: example 3  32
  5.4 The theory behind the simplex method  34
  5.5 Degeneracy and cycling  36
  5.6 The two-phase method  37
  5.7 An example of the two-phase method  39
6 Duality  42
  6.1 Introduction  42
  6.2 Symmetric dual  43
  6.3 The asymmetric dual  44
  6.4 Primal to the Dual via the diet problem  45
  6.5 The Duality Theorem  46
  6.6 Complementary slackness  54
7 The transportation problem  59
  7.1 Introduction  59
  7.2 The canonical form  59
  7.3 Dual of the transportation problem  60
  7.4 Properties of the solution to the transportation problem  63
  7.5 Solving the transportation problem  65
  7.6 Initial feasible solutions  67
  7.7 A worked example  68
  7.8 Improvement at each iteration  70
  7.9 Degeneracy  71
  7.10 Pricing out  73
8 Optimisation over Networks  77
  8.1 Capacitated Networks  77
  8.2 Flows in Networks  78
  8.3 Maximal flow problem and labelling algorithm  79
  8.4 A worked example  81
  8.5 Maximal flow and minimal cuts  83
  8.6 Example revisited  86
  8.7 A second worked example  87

1 Introduction: aims of the course

The principal aim of this course will be to cover the basic ideas and techniques which lie behind modern-day linear programming. One may think of the latter as a field of applied mathematics which concerns itself with resource allocation, making use of classical elements of Linear Algebra. Specifically the reader, who should be well versed in basic Linear Algebra, will see that many of the results and techniques presented boil down to an application of concepts such as linear independence and matrix inversion, convexity in Euclidean spaces and the geometric interpretation of linear equations.

The course is divided into essentially two parts. First we consider the classical linear programming problem and show that there exists a generic method for analysing such problems: the so-called simplex method. In the second part of the course we look at a more complicated class of linear programming problems which come under the heading of the transportation problem and optimisation on networks. We consider methods of solution to the latter class of problems which can be thought of as variants of the simplex method.

Some of the proofs in this course are rather lengthy, but none are particularly demanding in view of the volume of mathematics that 3rd year students in their first semester will have seen thus far.
Although you may be expecting this course to be more 'applied' and therefore 'easier', please remember that even applications require some degree of intellectual justification for them to be of any value. Therefore the algorithms alone will be meaningless to you unless you make the effort to understand the mathematics behind them.

2 The linear programming problem

2.1 A Simple Example

Let us begin with an example. Betta Machine Products plc makes two different products. Each requires casting, machining and assembly time. Products are made at a fixed cost and likewise sold at a fixed price. This information is given in the table below.

                Casting   Machining   Assembly   Cost   Selling Price
    Product 1      5          5           1       25         40
    Product 2      8          4           3       30         50

Times are given in minutes per unit. Costs and prices are in pounds. Each week there are 16,000 mins of casting time, 14,000 mins of machining time and 5,000 mins of assembly time available, and there is no limit to the numbers that can be sold. The objective of this company is to maximise the difference between total revenue and total cost.

We may now formulate this optimisation problem in a purely mathematical context. Suppose that x_1 units of product 1 and x_2 units of product 2 are made each week. Our objective is to:

    maximise z = 15x_1 + 20x_2
    subject to:
        5x_1 + 8x_2 ≤ 16,000
        5x_1 + 4x_2 ≤ 14,000
         x_1 + 3x_2 ≤ 5,000
        x_1 ≥ 0, x_2 ≥ 0.

This is a linear programming problem with two variables and three constraints. (Strictly speaking there are in fact five constraints but, as we shall shortly see, positivity of the variables is a usual requirement.) Many practical problems have literally thousands of variables and constraints.

2.2 Another example

We wish to feed cattle, meeting nutritional requirements at minimum cost. A farmer has two feeds A and B at her disposal. There are certain nutritional requirements which stipulate that cattle should receive minimum quantities of carbohydrate, vitamins and protein. The table below gives the number of units per kilo of the nutrients.

    Food                Carbo.   Vit.   Pro.   Cost (pence/kilo)
    A                     10       2      2          40
    B                      5       3      8          80
    Daily Requirement     10       3      4

The mathematical formulation of the above problem is as follows. Use x_1 kilos of food A and x_2 kilos of food B per day. Our objective is to:

    minimise z = 40x_1 + 80x_2
    subject to:
        10x_1 + 5x_2 ≥ 10
         2x_1 + 3x_2 ≥ 3
         2x_1 + 8x_2 ≥ 4
        x_1, x_2 ≥ 0.

This is an example of a diet problem. Note that whereas in the previous example we were asked to maximise a linear function, in this problem we are asked to minimise a linear function. Nonetheless, as we see in the next section, both problems fit into the same framework.

2.3 A third example

The R.H. Lawn Products Co. has available 80 metric tons of nitrate and 50 metric tons of phosphate to use in producing its three types of fertiliser during the coming week. The mixture ratios and profit figures are given in the table below. Determine how the current inventory should be used to maximise profit.

                    Metric tons / 1000 bags        Profit
                    Nitrate      Phosphate    (dollars / 1000 bags)
    Regular lawn       4             2               300
    Super lawn         4             3               500
    Garden             2             2               400

The above problem is again a maximisation problem. In this case we let x_1, x_2, x_3 denote the quantities, in thousands of bags, of the three products in the left hand column of the given table respectively. Our objective is to:

    maximise z = 300x_1 + 500x_2 + 400x_3
    subject to:
        4x_1 + 4x_2 + 2x_3 ≤ 80
        2x_1 + 3x_2 + 2x_3 ≤ 50
        x_1, x_2, x_3 ≥ 0.

Thus we reach a linear programming problem with three variables and two constraints.

2.4 General statement of the linear programming problem

In general we may identify a linear programming problem to take the following standard form. Suppose that A is a matrix belonging to R^{m×n} where m, n ≥ 1, b is a vector belonging to R^m, c is a vector belonging to R^n and 0_n is an n-dimensional vector filled with zeros. Our objective is to find a vector x ∈ R^n that will:

    maximise z := c·x
    subject to:                                             (1)
        Ax ≤ b
        x ≥ 0_n.

This is a compact way of saying:

    maximise z := c_1 x_1 + c_2 x_2 + ··· + c_n x_n
    subject to:
        A_11 x_1 + A_12 x_2 + ··· + A_1n x_n ≤ b_1
        A_21 x_1 + A_22 x_2 + ··· + A_2n x_n ≤ b_2
        ...
        A_m1 x_1 + A_m2 x_2 + ··· + A_mn x_n ≤ b_m
        x_j ≥ 0,  j = 1, 2, ..., n.

Notes.

1. In general there are no restrictions on the dimensions of the matrix A, so that either m ≤ n or n ≤ m.

2. A 'minimisation' problem (i.e. a problem which requires one to minimise a linear function of the form c·x) can easily be converted to a maximisation problem by changing c to −c.

3. A problem which appears to require Ax ≥ b (i.e. the inequality goes the wrong way) can be reduced to the standard form by writing it in the form −Ax ≤ −b.

4. A problem which appears to require that Ax = b (i.e. an equality instead of an inequality) can be re-expressed in terms of the requirements that Ax ≤ b and −Ax ≤ −b. However, as we shall see later when we define the canonical form of a linear programming problem, having an equality in place of an inequality is an advantage.

5. Sometimes one may find that instead of x ≥ 0, one has unconstrained values for the vector x. In that case, one may always write x = x^+ − x^- where x^+, x^- ≥ 0 and then one may re-write the conditions on x as conditions on the concatenation of x^+ and x^-.

We conclude this section with a definition of two types of solutions.

Definition 2.1 Consider the standard form of the linear programming problem (1). We say that x ∈ R^n is a feasible solution if it respects all the constraints of the specified problem. If x is feasible and maximises the value of z then we call it an optimal solution. (We will also refer to it on occasion as an optimal feasible solution.[3])

Note, we have no reason to believe at this point in time that any given linear programming problem has a feasible solution, let alone an optimal feasible solution. Moreover, even if it does have an optimal solution, it may not necessarily be unique.
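The conversions in Notes 2 and 3 above can be sketched in a few lines of code. The following is an illustrative sketch, not part of the original notes, which applies both conversions to the diet problem of Section 2.2; the point x = (1, 1) is an arbitrary choice of mine used only to check that the two formulations agree.

```python
# A sketch of Notes 2 and 3: converting the diet problem of Section 2.2,
# "minimise c.x subject to Ax >= b, x >= 0", into the standard form
# "maximise c'.x subject to A'x <= b', x >= 0" with c' = -c, A' = -A, b' = -b.

A = [[10, 5], [2, 3], [2, 8]]   # nutrient units per kilo of feeds A and B
b = [10, 3, 4]                  # daily requirements
c = [40, 80]                    # cost in pence per kilo

# Standard-form data obtained by negation
A_std = [[-a for a in row] for row in A]
b_std = [-bi for bi in b]
c_std = [-ci for ci in c]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def feasible(A, b, x):
    """Check Ax <= b and x >= 0 componentwise."""
    return all(dot(row, x) <= bi for row, bi in zip(A, b)) and all(xi >= 0 for xi in x)

# x = (1, 1) kilos of each feed meets every nutritional requirement...
x = [1.0, 1.0]
print(feasible(A_std, b_std, x))        # True: (1, 1) is feasible
# ...and maximising c_std.x is the same as minimising the cost c.x:
print(dot(c_std, x) == -dot(c, x))      # True by construction
```

Note the sign trick does all the work: a point is feasible for Ax ≥ b exactly when it is feasible for −Ax ≤ −b, and the maximiser of −c·x is the minimiser of c·x.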
[3] Sometimes we will drop the word 'feasible', as clearly an optimal solution must necessarily be feasible.

2.5 Slack variables and canonical form

If instead of Ax ≤ b we have Ax = b in the formulation of the linear programming problem then we say that it is in canonical form. In other words, a linear programming problem in canonical form requires one to find a vector x ∈ R^n that will:

    maximise z := c·x
    subject to:                                             (2)
        Ax = b
        x ≥ 0_n.

The way to convert a problem from standard form into canonical form is to introduce slack variables. That is to say, if A is a matrix belonging to R^{m×n} then we introduce m new variables, say u_1, ..., u_m, so that the i-th constraint becomes

    A_i1 x_1 + ··· + A_in x_n + u_i = b_i

for i = 1, ..., m. We now introduce the new vector belonging to R^{n+m} which is the concatenation of x and u = (u_1, ..., u_m)^T, say

    x' = (x, u)^T.

Similarly we introduce the vector c', which is the concatenation of c and 0_m,

    c' = (c, 0_m)^T,

and define a new R^{m×(m+n)} matrix

    A' = (A  I_m),

where I_m is the m-dimensional identity matrix. Then the linear programming problem in standard form can be re-expressed as a linear programming problem in canonical form by identifying it as having the objective to find a vector x' in R^{n+m} that will:

    maximise z' := c'·x'
    subject to:
        A'x' = b
        x' ≥ 0_{m+n}.

As with the standard form of the linear programming problem, we call a feasible solution for the canonical form any solution which satisfies Ax = b and x ≥ 0_n. Further, an optimal solution for the canonical form is any feasible solution which optimises the value of the objective function.

An important observation is that if one is presented with a linear programming problem in standard form, solving the associated linear programming problem in canonical form will produce the same optimal value of the objective. Further, restricting the optimal solution of the canonical problem to the original variables will also give an optimal solution to the problem in standard form.
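The slack-variable construction can be sketched numerically. The code below is my own illustration, not part of the notes, using the data of the Betta Machine example from Section 2.1; the production plan (2400, 500) is simply a feasible point chosen for the check.

```python
# A sketch of the slack-variable construction: augment A with an identity
# block and pad c with zeros, so that Ax <= b becomes A'x' = b with
# x' = (x, u) and u = b - Ax >= 0.

A = [[5, 8], [5, 4], [1, 3]]          # Example 2.1 constraint matrix
b = [16000, 14000, 5000]
c = [15, 20]
m, n = len(A), len(A[0])

# A' = (A | I_m): one slack column per constraint
A_aug = [row + [1 if i == j else 0 for j in range(m)] for i, row in enumerate(A)]
c_aug = c + [0] * m                    # c' = (c, 0_m)

def slacks(A, b, x):
    """u_i = b_i - (Ax)_i; all u_i >= 0 exactly when x is feasible."""
    return [bi - sum(a * xi for a, xi in zip(row, x)) for row, bi in zip(A, b)]

x = [2400, 500]                        # a feasible production plan
u = slacks(A, b, x)
x_aug = x + u                          # x' = (x, u)

# A'x' recovers b exactly:
print(all(sum(a * xi for a, xi in zip(row, x_aug)) == bi
          for row, bi in zip(A_aug, b)))                      # True
# and the objective value is unchanged, since the slacks carry zero cost:
print(sum(ci * xi for ci, xi in zip(c_aug, x_aug)) ==
      sum(ci * xi for ci, xi in zip(c, x)))                   # True
```

Observe that at (2400, 500) the first two slacks are zero: the casting and machining constraints are met with equality, while 1,100 minutes of assembly time go unused.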
In principle this is all intuitively obvious; however, for a proof, see the accompanying exercise sheet.

Let us conclude this chapter with a note on notation. In the above discussion on the introduction of slack variables, we started with n variables and m constraints, which meant that we needed to introduce m slack variables. Hence we ended up with a matrix of dimension m×(m+n). However, more often than not, when introducing a linear programming problem already in canonical form for the purpose of theoretical discussion, we shall write the constraints simply as Ax = b where A is a matrix of dimension m×n. That is to say, we shall not bother indicating the number of slack variables that have been introduced (if at all) and use m and n as generic terms for the dimensions of the matrix A as it appears in the given linear programming problem.

3 Geometrical considerations

3.1 Constraints, objective functions and hyperplanes

There is a very natural geometrical interpretation of a system of linear inequalities presented in the form Ax ≤ b, where A is a prespecified R^{m×n} matrix, x is an R^n vector and b is a prespecified R^m vector. Suppose for i = 1, ..., m we write r_i for the vector corresponding to the i-th row of A; then the system Ax ≤ b may otherwise be written

    r_i · x ≤ b_i    for i = 1, ..., m,

where b_i is the i-th entry of b.

At this point it is instructive to recall that in n-dimensional Euclidean space one may describe any hyperplane in the form

    n·x = d,

where n is a vector of unit length orthogonal to the hyperplane and d is the orthogonal distance of the hyperplane from the origin. Indeed the intuition behind this formula is that any vector x which belongs to such a hyperplane can be decomposed into the sum of two vectors. The first vector moves from the origin onto the hyperplane at the closest position to the origin and the second vector moves from the aforementioned point on the hyperplane to x.
That is to say

    x = dn + y,

where obviously the first vector on the right hand side is the shortest straight line from the origin to the hyperplane and the vector y is the relative position of x to the closest point on the hyperplane to the origin and must be orthogonal to n. See Figure 1. Hence

    n·x = d n·n + n·y = d + 0.

Note in particular that when each of the vectors is taken in two dimensional Euclidean space then

    n·x = (n_1, n_2)·(x_1, x_2) = n_1 x_1 + n_2 x_2 = d,

which is the equation of a line. If we are working in three-dimensional Euclidean space then

    n·x = (n_1, n_2, n_3)·(x_1, x_2, x_3) = n_1 x_1 + n_2 x_2 + n_3 x_3 = d,

which is the equation of a plane.

[Figure 1: A visual representation of the plane n·x = d.]

If instead we now consider the equation

    n·x ≤ d,

then in light of the above intuitive explanation one sees that the latter equation describes a closed half space which lies 'to one side' of the hyperplane n·x = d. Indeed, if a vector x satisfies n·x < d then we can imagine that it lies on another hyperplane with the same normal n which is at some other orthogonal distance, say D, from the origin; i.e. n·x = D. Then if, for example, d > 0 and D < d, then x lies on a hyperplane parallel to n·x = d which is contained in the same half-space as the origin and which is an orthogonal distance D from the origin.

Returning to the linear programming problem in standard form (1), we now see that if the vector x satisfies Ax ≤ b then it must lie within the intersection of a number of closed half spaces whose boundaries are described by the hyperplanes r_i·x = b_i for i = 1, ..., m.

Indeed, even the additional positivity constraint x ≥ 0_n is also a requirement that x lies to one side of a hyperplane. It says that x lies in the positive orthant of R^n, which is a quick way of saying that

    (−1, 0, ..., 0)·x ≤ 0,  (0, −1, ..., 0)·x ≤ 0,  ···  and  (0, ..., 0, −1)·x ≤ 0.

This interpretation also has meaning when we look at the objective function of a linear programming problem too. Indeed in the standard form we are required to maximise the function c·x. If we write z = c·x then, given the usual constraints Ax ≤ b and x ≥ 0_n, we are required to:

find a vector x which belongs to a hyperplane which is orthogonal to c and whose distance from the origin is as large as possible[4] so that x is still contained in the region which lies in the intersection of the closed half spaces specified by the constraints.

We call the hyperplane given by the objective equation z = c·x the objective hyperplane or objective function.

To some extent it is easier to visualise what is being said above with a diagram. This can be done however only in dimensions 2 and 3 for obvious reasons.[5] Below are several examples of systems of constraints in either two or three variables together with a plot of the region which lies in the intersection of the closed half spaces specified by the constraints.

[4] Even if this is negative.
[5] I have not yet found a way of drawing diagrams in 4 or more dimensional space which can be put effectively on paper.

Example 3.1 This is a system of constraints in two dimensions:

    x_1 + x_2 ≤ 5
    2x_1 + x_2 ≤ 8
    x_1 ≥ 0, x_2 ≥ 0.

As mentioned earlier, a hyperplane in two dimensions is nothing more than a line. If we take the first constraint and change the inequality to an equality then it reads

    x_2 = 5 − x_1.

[Figure 2: The region described by the equations x_1 + x_2 ≤ 5, 2x_1 + x_2 ≤ 8 and x_1, x_2 ≥ 0.]

That is to say, it describes the unique line which passes through the points (x_1, x_2) = (0, 5) and (x_1, x_2) = (5, 0) (and therefore has gradient −1). Or, said another way, it describes the line which is an orthogonal distance 5/√2 from the origin with orthogonal unit vector

    (1/√2, 1/√2).

When we put the inequality back in, so that it reads

    x_2 ≤ 5 − x_1,

it describes all pairs (x_1, x_2) which are 'below' the line x_2 = 5 − x_1. A similar analysis of the second constraint requires us to consider all points which are 'below' the points described by the line x_2 = 8 − 2x_1. Finally, taking account of the fact that both variables must be positive, we should only consider points which are 'to the right' of the line described by the vertical axis x_1 = 0 and 'above' the line described by the horizontal axis x_2 = 0. We thus arrive at the representation in Figure 2 for points satisfying the given constraints (shaded in grey).

[Figure 3: The region described by the equations x_1 + x_2 ≥ 3, −3x_1 + 2x_2 ≤ 6 and x_1, x_2 ≥ 0.]

Example 3.2 A similar analysis of the constraints

    x_1 + x_2 ≥ 3
    −3x_1 + 2x_2 ≤ 6
    x_1 ≥ 0, x_2 ≥ 0

yields Figure 3 for the points satisfying the constraints (shaded in grey).

Example 3.3 Now consider the following system of constraints:

    6x_1 + 4x_2 + 9x_3 ≤ 36
    2x_1 + 5x_2 + 4x_3 ≤ 20
    x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0.

Take for example the first constraint. Changing the inequality to an equality, it may be written in the form 6x_1 + 4x_2 + 9x_3 = 36, or equivalently in the form

    (6/√133, 4/√133, 9/√133)·(x_1, x_2, x_3) = 36/√133,

suggesting that it is a plane whose orthogonal distance from the origin is 36/√133 and which has a unit normal vector equal to the first vector in the inner product above. It might seem difficult to imagine how one can easily draw this plane; however, looking again at the equation of this plane in the form 6x_1 + 4x_2 + 9x_3 = 36, one easily sees that it passes through the points (x_1, x_2, x_3) = (6, 0, 0), (0, 9, 0) and (0, 0, 4), which is enough information to see how this plane appears in the positive orthant x_1, x_2, x_3 ≥ 0. This appears as one of the planes in Figure 4.
Returning to the original constraint 6x_1 + 4x_2 + 9x_3 ≤ 36, we are required to look for points (x_1, x_2, x_3) which are on the 'same side' of the plane 6x_1 + 4x_2 + 9x_3 = 36 as the origin.

The other plane in the diagram in Figure 4 represents the second constraint. Note that, for example, the constraint x_1 ≥ 0 corresponds to the requirement that points (x_1, x_2, x_3) lie 'on the positive side' of the (x_2, x_3) plane. All points satisfying the constraints are again shaded in grey.

[Figure 4: The region described by the equations 6x_1 + 4x_2 + 9x_3 ≤ 36, 2x_1 + 5x_2 + 4x_3 ≤ 20 and x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0.]

Example 3.4 In our final example we leave the reader to examine the following system of constraints:

    x_1 + x_2 + x_3 ≤ 1
    2x_1 + 5x_2 + 3x_3 ≤ 4
    4x_1 + x_2 + 3x_3 ≤ 2
    x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0.

We leave it as an exercise to associate them with the sketch in Figure 5.

[Figure 5: The region described by the equations x_1 + x_2 + x_3 ≤ 1, 2x_1 + 5x_2 + 3x_3 ≤ 4, 4x_1 + x_2 + 3x_3 ≤ 2 and x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0.]

Examples 3.1 and 3.2 above, in the context of a linear programming problem, would require an objective function of the form z = c_1 x_1 + c_2 x_2. For a fixed objective value z we thus have the equation of a straight line which passes through the points (x_1, x_2) = (0, z/c_2) and (z/c_1, 0). Maximising the value of z constitutes moving the line in the direction of the vector

    (c_1, c_2)

until the value z* beyond which there is no intersection between the line and the region described by the constraints, i.e. no intersection for all z > z*.

The second two examples, in the context of a linear programming problem, would require an objective function taking the form z = c_1 x_1 + c_2 x_2 + c_3 x_3, which is the equation of a plane.
Analogously to the two dimensional case, maximising the value of z requires us to move the plane in the direction of its orthogonal such that z increases, until we reach a value z* beyond which there is no longer contact with the region described by the constraints.

How should one understand the canonical form of the linear programming problem in the above geometrical context? The introduction of slack variables increases the dimension of the problem and one looks for a vector x which solves a system of m linear equations in the positive orthant. Therefore any solution, written in terms of the original and slack variables, will lie at the intersection of the m specified hyperplanes. In general, as there are fewer equations than variables, in dimension three and above there must be an infinite number of intersection points, whose intersection with the positive orthant describes the feasible region. This is difficult to see in higher dimensions, but note that two planes in three dimensional space with distinct normal vectors intersect in a line; that is to say, common solutions to both planes are points on a line.

Alternatively, since an equation of the form r·x = d may be written as the two constraints r·x ≤ d and r·x ≥ d, the feasible region of a linear programming problem in canonical form is still the intersection of a finite number of closed half spaces. There is a special definition for the latter.

Definition 3.5 The intersection of a finite number of closed half spaces is a closed polyhedron.

Note that the intersection of a finite number of closed sets is still closed,[6] which qualifies the use of the word 'closed' in the definition.

3.2 Feasible solutions and convexity

For linear programming problems in either canonical or standard form, let us define F, the feasible domain, to be the set of feasible solutions. From the previous section we argued that F consists of the intersection of a finite number of closed half spaces and hence constitutes a closed polyhedron.
[6] Suppose that C_1, ..., C_n are closed sets and that (x_k : k ≥ 1) is a sequence of points in ∩_{i=1}^n C_i with an accumulation point, say x. Then for each i = 1, ..., n, (x_k : k ≥ 1) is a sequence of points in C_i. Since each C_i is closed, it follows that x ∈ C_i. Hence x ∈ ∩_{i=1}^n C_i, and thus ∩_{i=1}^n C_i is also closed.

In this section our objective is to give a mathematical description of the set F. Indeed we shall see that F is a closed convex polyhedron. To this end, let us recall the notion of a convex set in R^n.

Definition 3.6 A non-empty set S in R^n is a convex set if for all x_1, x_2 ∈ S and λ ∈ (0, 1) it holds that λx_1 + (1−λ)x_2 ∈ S.

Roughly speaking, this definition means that S is a convex set if, when choosing any two points in that set, the line joining those points is also contained in S. Here are some relevant examples of convex sets.

Example 3.7 Suppose that for some given vector n in R^n and constant d ≥ 0, S = {x ∈ R^n : n·x = d} (i.e. S is a hyperplane in R^n). Then S is a convex set. To see why, note that if x_1, x_2 ∈ S then, since for λ ∈ (0, 1)

    n·(λx_1 + (1−λ)x_2) = λ n·x_1 + (1−λ) n·x_2
                        = λd + (1−λ)d
                        = d,

it follows that λx_1 + (1−λ)x_2 ∈ S.

Example 3.8 Suppose now that we take S = {x ∈ R^n : n·x ≤ d}, where the quantities n and d are as in the previous example (i.e. S is the closed half space with boundary given by the hyperplane in the previous example). Then again S is convex. In this case the proof of this fact is almost the same as before. Indeed as before we may write

    n·(λx_1 + (1−λ)x_2) = λ n·x_1 + (1−λ) n·x_2
                        ≤ λd + (1−λ)d
                        = d,

showing that λx_1 + (1−λ)x_2 ∈ S whenever x_1, x_2 ∈ S and λ ∈ (0, 1).

Example 3.9 Suppose that S_1, ..., S_m are convex sets in R^n such that ∩_{i=1}^m S_i ≠ ∅. Then ∩_{i=1}^m S_i is also a convex set. In this case, one supposes that x_1, x_2 ∈ ∩_{i=1}^m S_i; then in particular x_1, x_2 ∈ S_i for each i = 1, 2, ..., m and hence, by convexity of each of the latter sets, λx_1 + (1−λ)x_2 ∈ S_i for any λ ∈ (0, 1) and i = 1, 2, ..., m. Consequently, λx_1 + (1−λ)x_2 ∈ ∩_{i=1}^m S_i, as required.

The last three examples can be used to prove in particular the following result.

Theorem 3.10 Consider either the standard or the canonical form of the linear programming problem, (1) and (2). Then the set F of feasible solutions is a closed convex polyhedron.

Proof. By the examples above and the earlier observation that F is the intersection of a finite number of closed half spaces, the statement of the theorem follows.

3.3 Extreme points

Let us conclude this chapter on the geometry of the linear programming problem by hinting at where we should expect to find optimal solutions in the feasible region F.

Definition 3.11 Given a non-empty convex set S, we say that x ∈ S is an extreme point of S if x is not an interior point of any line segment in S. That is to say, there exist no two vectors x_1, x_2 ∈ S and λ ∈ (0, 1) such that x = λx_1 + (1−λ)x_2.

A good example to visualise extreme points is to think of the vertices of any Platonic solid or indeed any n-dimensional simplex.[7] The following result is intuitively obvious and we exclude its proof.

Lemma 3.12 Any closed convex polyhedron generated by the intersection of a finite number of half spaces (and hence any feasible region) has at most a finite number of extreme points.

The importance of extreme points is that one should look for optimal solutions to linear programming problems by considering the extreme points of the feasible region, as the following theorem confirms. The proof is omitted.

Theorem 3.13 Consider a linear programming problem either in standard or canonical form. Let F be the space of feasible solutions. Then one of the following three scenarios necessarily holds.

(i) F ≠ ∅ and is bounded,[8] and there is an optimal solution which occurs at an extreme point.

(ii) F ≠ ∅ and is not bounded, but an optimal solution exists which occurs at an extreme point.

(iii) There exists no optimal solution, in which case F is either unbounded or F = ∅.
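Theorem 3.13(i) can be illustrated numerically on the bounded region of Example 3.1. The sketch below is my own, not from the notes: the objective z = 3x_1 + 2x_2 is an arbitrary sample choice, and the extreme points are found by intersecting pairs of boundary lines and keeping the feasible intersections.

```python
# Enumerate the extreme points of the Example 3.1 region
#   x_1 + x_2 <= 5,  2x_1 + x_2 <= 8,  x_1 >= 0,  x_2 >= 0
# and check that a sample objective is maximised at one of them.
from itertools import combinations

# Each constraint written as r.x <= d, including the positivity constraints.
halfspaces = [((1, 1), 5), ((2, 1), 8), ((-1, 0), 0), ((0, -1), 0)]

def feasible(p, tol=1e-9):
    return all(r[0]*p[0] + r[1]*p[1] <= d + tol for r, d in halfspaces)

def intersect(h1, h2):
    """Solve the 2x2 system r1.x = d1, r2.x = d2 by Cramer's rule."""
    (a, b), d1 = h1
    (c, e), d2 = h2
    det = a*e - b*c
    if abs(det) < 1e-12:
        return None                      # parallel boundary lines
    return ((d1*e - b*d2) / det, (a*d2 - d1*c) / det)

# Extreme points: feasible intersections of pairs of boundary lines.
vertices = [p for h1, h2 in combinations(halfspaces, 2)
            if (p := intersect(h1, h2)) is not None and feasible(p)]

z = lambda p: 3*p[0] + 2*p[1]            # an arbitrary linear objective
best = max(vertices, key=z)
print(sorted(vertices))                  # the extreme points of the region
print(best, z(best))                     # the optimum lies at a vertex
```

Here the vertices are (0, 0), (4, 0), (3, 2) and (0, 5); for the chosen objective the maximum 13 occurs at the extreme point (3, 2), in line with the theorem. The same enumeration idea (feasible intersections of constraint boundaries) is what the simplex method of Chapter 5 organises efficiently.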
[7] Look them up on Google.
[8] One may understand bounded in this context to mean that F ⊂ B(r), where B(r) = {x ∈ R^n : |x| < r} for some 0 < r < ∞. In other words, there exists a big enough hypersphere of radius r, centred at the origin, such that F fits inside this sphere.
