
Operations Research

Chapter 3 (ii)

Simplex Method for Standard Minimization Problem
Sometimes we need to minimize the objective function. There are two types of
minimization problems. The first type, the standard minimization problem, is one in which:
1. the objective function is to be minimized,
2. all variables involved in the problem are nonnegative, and
3. all other linear constraints may be written so that the expression involving the
variables is less than or equal to a nonnegative constant.
The process for solving this type of minimization problem is very similar to the process
for solving the standard maximization problem. Given this type of standard
minimization problem with some objective function z, we solve the problem by
maximizing - z.
Example:
Minimize z = −2x1 + 3x2
subject to
3x1 + 4x2 ≤ 24,
7x1 − 4x2 ≤ 16, and x1, x2 ≥ 0
Solution:
Let p = - z. Then p = 2x1 − 3x2.
We seek to maximize p subject to the above given constraints.
The equations are rewritten as,
−2x1 + 3x2 + p = 0
3x1 + 4x2 + s1 = 24
7x1 − 4x2 + s2 = 16

Initial tableau is formed as shown:

Basic variable   x1   x2   s1   s2   RHS
s1                3    4    1    0    24
s2                7   -4    0    1    16
p                -2    3    0    0     0
The standard simplex method is followed from this point.
Here, x1 becomes the entering variable (most negative coefficient in the p-row) and s2 is the leaving variable (minimum ratio 16/7 < 24/3).
The optimal solution obtained is:
p is maximized at 32/7 when x1 = 16/7 and x2 = 0.

Therefore, z is minimized at −32/7 when x1 = 16/7 and x2 = 0.
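
As a minimal illustrative sketch in Python (the function name and tableau layout are our own choices here, not a standard library routine), the pivots above can be carried out as follows:

```python
import numpy as np

def simplex_max(T, basis):
    """Standard maximization on a tableau whose last row is the objective row
    (objective coefficients with flipped signs) and last column is the RHS."""
    while np.any(T[-1, :-1] < -1e-9):
        col = int(np.argmin(T[-1, :-1]))          # entering: most negative coefficient
        ratios = np.full(len(basis), np.inf)
        pos = T[:-1, col] > 1e-9                  # strictly positive denominators only
        ratios[pos] = T[:-1, -1][pos] / T[:-1, col][pos]
        row = int(np.argmin(ratios))              # leaving: smallest nonnegative ratio
        basis[row] = col
        T[row] /= T[row, col]                     # pivot
        for i in range(T.shape[0]):
            if i != row:
                T[i] -= T[i, col] * T[row]
    return T, basis

# Columns: x1, x2, s1, s2 | RHS; last row is the p-row (-2x1 + 3x2 + p = 0)
T = np.array([[ 3.,  4., 1., 0., 24.],
              [ 7., -4., 0., 1., 16.],
              [-2.,  3., 0., 0.,  0.]])
T, basis = simplex_max(T, basis=[2, 3])   # s1, s2 start as the basic variables
print(T[-1, -1], T[1, -1])                # p = 32/7 at x1 = 16/7, so min z = -32/7
```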


Alternate Simplex Method for Standard Minimization Problem
In this method, the equation for z is not replaced by the equation for −z, but the entering variable is chosen differently from the maximization case.
• The entering variable is the nonbasic variable with the most positive objective coefficient in the objective equation, the "exact opposite" of the rule for the maximization case. This logic is based on the fact that max z is equivalent to min(−z). The optimum is reached at the iteration where all the z-row coefficients are nonpositive.
• The leaving variable is the basic variable associated with the smallest nonnegative ratio (with strictly positive denominator). This is the same as in the maximization problem.
Example:
Minimize z = −2x1 + 3x2
subject to
3x1 + 4x2 ≤ 24,
7x1 − 4x2 ≤ 16, and x1, x2 ≥ 0

The equations are rewritten as,
2x1 - 3x2 + z = 0
3x1 + 4x2 + s1 = 24
7x1 − 4x2 + s2 = 16

Initial tableau is formed as shown:

Basic variable   x1   x2   s1   s2   RHS
s1                3    4    1    0    24
s2                7   -4    0    1    16
z                 2   -3    0    0     0

So x1 is determined as the entering variable from the most positive objective coefficient '2' in the z-row.
Other rules are unchanged.
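
The sketch shown earlier can be adapted to these direct minimization rules: the entering variable is the most positive z-row coefficient, and iteration stops once every z-row coefficient is nonpositive. Again, this is an illustrative sketch, not library code:

```python
import numpy as np

def simplex_min(T, basis):
    """Direct minimization on a tableau whose last row is the z-row and last
    column is the RHS: the most positive z-row coefficient enters; stop when all
    z-row coefficients are nonpositive; the ratio test is unchanged."""
    while np.any(T[-1, :-1] > 1e-9):
        col = int(np.argmax(T[-1, :-1]))          # entering: most positive coefficient
        ratios = np.full(len(basis), np.inf)
        pos = T[:-1, col] > 1e-9
        ratios[pos] = T[:-1, -1][pos] / T[:-1, col][pos]
        row = int(np.argmin(ratios))              # leaving: smallest nonnegative ratio
        basis[row] = col
        T[row] /= T[row, col]
        for i in range(T.shape[0]):
            if i != row:
                T[i] -= T[i, col] * T[row]
    return T, basis

# Columns: x1, x2, s1, s2 | RHS, with the z-row taken from 2x1 - 3x2 + z = 0
T = np.array([[3.,  4., 1., 0., 24.],
              [7., -4., 0., 1., 16.],
              [2., -3., 0., 0.,  0.]])
T, basis = simplex_min(T, basis=[2, 3])
print(T[-1, -1], T[1, -1])   # z = -32/7 at x1 = 16/7, matching the earlier result
```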

3.3 Artificial Starting Solution
M-Method

Two-Phase Method

LPs in which all the constraints are (≤) with non-negative right-hand sides offer a
convenient all-slack starting basic feasible solution.
Models involving (=) and/or (≥) constraints do not.
The procedure for starting "ill-behaved" LPs with (=) and (≥) constraints is to
use artificial variables that play the role of slacks at the first iteration, and then dispose
of them legitimately at a later iteration.

If equation i does not have a slack (or a variable that can play the role of a slack), an
artificial variable, Ri, is added to form a starting solution similar to the convenient all-
slack basic solution.
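
To make this bookkeeping concrete, here is a minimal Python sketch (an illustration, not part of the slides) that appends slack, surplus, and artificial columns for a mix of constraint types; the numeric coefficients below are illustrative placeholders:

```python
import numpy as np

def augment(A, senses):
    """Append slack (+1), surplus (-1), and artificial (+1) columns so every row
    gets a variable that can serve as its starting basic variable: '<=' rows get
    a slack, '>=' rows get a surplus plus an artificial Ri, '=' rows get Ri only."""
    extra = []                                  # (kind, row, sign) of each new column
    for i, s in enumerate(senses):
        if s == "<=":
            extra.append(("slack", i, +1.0))
        elif s == ">=":
            extra.append(("surplus", i, -1.0))
            extra.append(("artificial", i, +1.0))
        else:                                   # "="
            extra.append(("artificial", i, +1.0))
    E = np.zeros((len(senses), len(extra)))
    for j, (_, i, sign) in enumerate(extra):
        E[i, j] = sign
    return np.hstack([np.asarray(A, float), E]), extra

# Illustrative coefficients; the senses mirror the example that follows
# (an '=' constraint, a '>=' constraint, and a '<=' constraint).
A = [[3., 1.], [4., 3.], [1., 2.]]
A_aug, extra = augment(A, ["=", ">=", "<="])
print([kind for kind, _, _ in extra])   # ['artificial', 'surplus', 'artificial', 'slack']
```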

Example

Solution:
Using x3 as a surplus in the second constraint and x4 as a slack in the third constraint,
the equation form of the problem is given as

The third equation has its slack variable, x4, but the first and second equations do not.
Thus, we add the artificial variables R1 and R2 in the first two equations.

Minimize z = 4x1 + x2 + MR1 + MR2

Phases of Two-Phase Solution:
Phase I. Put the problem in equation form, and add the necessary artificial variables
to the constraints to secure a starting basic solution. Next, find a basic solution of the
resulting equations that, regardless of whether the LP is maximization or
minimization, always minimizes the sum of the artificial variables. If the minimum
value of the sum is positive, the LP problem has no feasible solution. Otherwise,
proceed to Phase II.

Phase II. Use the feasible solution from Phase I as a starting basic feasible solution
for the original problem.


Phase I: Minimize r = R1 + R2

The associated tableau is given by

R1 and R2 are substituted out in the r-row by using the following computations:
New r-row = Old r-row + (1 × R1-row + 1 × R2-row)
The new r-row is used to solve Phase I of the problem, which yields the following
optimum tableau

Because minimum r = 0, Phase I produces the basic feasible solution x1 = 3/5, x2 = 6/5,
and x4 = 1. At this point, the artificial variables have completed their mission, and we can
eliminate their columns altogether from the tableau and move on to Phase II.
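
To make the r-row substitution and the Phase I result concrete, here is an illustrative Python sketch; the constraint data (3x1 + x2 = 3, 4x1 + 3x2 ≥ 6, x1 + 2x2 ≤ 4) is an assumption used only for this sketch:

```python
import numpy as np

# Columns: x1, x2, x3(surplus), R1, R2, x4(slack) | RHS. The constraint data
# (3x1 + x2 = 3, 4x1 + 3x2 >= 6, x1 + 2x2 <= 4) is assumed for this sketch.
rows = np.array([[3., 1.,  0., 1., 0., 0., 3.],
                 [4., 3., -1., 0., 1., 0., 6.],
                 [1., 2.,  0., 0., 0., 1., 4.]])
r_row = np.array([0., 0., 0., -1., -1., 0., 0.])   # from r - R1 - R2 = 0

# Substitute out the basic artificials, exactly as described above:
# new r-row = old r-row + 1*R1-row + 1*R2-row
r_row += rows[0] + rows[1]
print(r_row)                                       # [ 7.  4. -1.  0.  0.  0.  9.]

T = np.vstack([rows, r_row])
basis = [3, 4, 5]                                  # R1, R2, x4 start as basic
while np.any(T[-1, :-1] > 1e-9):                   # minimize r: stop when all <= 0
    col = int(np.argmax(T[-1, :-1]))
    ratios = np.full(len(basis), np.inf)
    pos = T[:-1, col] > 1e-9
    ratios[pos] = T[:-1, -1][pos] / T[:-1, col][pos]
    row = int(np.argmin(ratios))
    basis[row] = col
    T[row] /= T[row, col]
    for i in range(T.shape[0]):
        if i != row:
            T[i] -= T[i, col] * T[row]

print(T[-1, -1])   # ~0: minimum r = 0, so a basic feasible solution exists
print({b: round(float(v), 3) for b, v in zip(basis, T[:-1, -1])})
# {0: 0.6, 1: 1.2, 5: 1.0}, i.e. x1 = 3/5, x2 = 6/5, x4 = 1 for Phase II
```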

Phase II:
After deleting the artificial columns, we write the original problem as

Essentially, Phase I is a procedure that transforms the original constraint equations into a
form that provides a starting basic feasible solution for the problem, if one exists.

The tableau associated with the Phase II problem is thus given as

Again, because the basic variables x1 and x2 have nonzero coefficients in the z-row, they
must be substituted out using the following computations,

The initial tableau of Phase II is thus given as shown here.
The optimum solution is obtained in one iteration.
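
Continuing the same sketch (and the same assumed constraint data), Phase II drops the artificial columns, installs the original objective z = 4x1 + x2, substitutes the basic variables x1 and x2 out of the z-row, and then needs a single pivot:

```python
import numpy as np

# Phase I optimum with the artificial columns removed (values produced by the
# Phase I sketch above, so they inherit its assumed constraint data).
# Columns: x1, x2, x3, x4 | RHS; last row is the z-row for minimize z = 4x1 + x2.
T = np.array([[ 1.,  0.,  0.2, 0., 0.6],   # x1-row
              [ 0.,  1., -0.6, 0., 1.2],   # x2-row
              [ 0.,  0.,  1. , 1., 1. ],   # x4-row
              [-4., -1.,  0. , 0., 0. ]])  # z - 4x1 - x2 = 0
basis = [0, 1, 3]

# Basic x1 and x2 have nonzero z-row coefficients, so substitute them out:
T[-1] += 4 * T[0] + 1 * T[1]
print(T[-1])                               # [0. 0. 0.2 0. 3.6] -> current z = 18/5

# One minimization pivot: x3 enters (most positive coefficient), x4 leaves.
col = int(np.argmax(T[-1, :-1]))
ratios = np.full(len(basis), np.inf)
pos = T[:-1, col] > 1e-9
ratios[pos] = T[:-1, -1][pos] / T[:-1, col][pos]
row = int(np.argmin(ratios))
basis[row] = col
T[row] /= T[row, col]
for i in range(T.shape[0]):
    if i != row:
        T[i] -= T[i, col] * T[row]

print(T[-1, -1])   # ~3.4 -> optimum z = 17/5 with x1 = 2/5, x2 = 9/5
```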

Special cases of Linear Programming

(I) Degenerate solution

In the application of the feasibility condition of the simplex method, a tie for the
minimum ratio may occur and can be broken arbitrarily. When this happens, at least one
basic variable will be zero in the next iteration, and the new solution is said to be
degenerate.
Degeneracy can cause the simplex iterations to cycle indefinitely, thus never terminating
the algorithm.
A repetitive sequence of iterations → never improving the objective value → never
satisfying the optimality condition.
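
As a tiny numerical illustration (the LP below is chosen here for illustration only), a tie in the minimum-ratio test forces a basic variable to zero after the pivot:

```python
import numpy as np

# Illustrative LP (assumed): maximize z = 3x1 + 9x2
# subject to x1 + 4x2 <= 8, x1 + 2x2 <= 4, x1, x2 >= 0.
# Columns: x1, x2, s1, s2 | RHS; last row is the z-row.
T = np.array([[ 1.,  4., 1., 0., 8.],
              [ 1.,  2., 0., 1., 4.],
              [-3., -9., 0., 0., 0.]])

col = 1                               # x2 enters (most negative z-row coefficient)
ratios = T[:-1, -1] / T[:-1, col]
print(ratios)                         # [2. 2.] -- a tie in the minimum-ratio test
# Whichever row is chosen, the other basic variable drops to 0 in the next
# tableau, i.e. the new basic solution is degenerate.
```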

Example:

(II) Multiple solutions (infinitely many solutions)

When the objective function is parallel to a nonredundant binding constraint (i.e., a
constraint that is satisfied as an equation at the optimal solution), the objective function
can assume the same optimal value at more than one solution point, thus giving rise to
alternative optima. The next example shows that there is an infinite number of such
solutions. It also demonstrates the practical significance of encountering such solutions.

The figure below demonstrates how alternative optima can arise in the LP model when the
objective function is parallel to a binding constraint. Any point on the line
segment BC represents an alternative optimum with the same objective value z = 10.

Iteration 1 gives the optimum solution x1 = 0, x2 = 5/2, and z = 10, which coincides with
point B in the figure.
How do we know from this tableau that alternative optima exist?
Look at the z-equation coefficients of the nonbasic variables in iteration 1. The coefficient
of nonbasic x1 is zero, indicating that x1 can enter the basic solution without changing the
value of z, but causing a change in the values of the variables. Iteration 2 does just that:
letting x1 enter the basic solution and forcing x4 to leave. The new solution point occurs at
C (x1 = 3, x2 = 1, z = 10).

The simplex method determines only the two corner points B and C. Mathematically, we
can determine all the points (x1, x2) on the line segment BC as a nonnegative weighted
average of points B and C.

Thus, given B = (0, 5/2) and C = (3, 1), every point on the segment BC is a nonnegative
weighted average of B and C, as worked out below.
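
As a quick worked form of this weighted average (the weight symbol α is introduced here just for illustration), any point on BC can be written, for 0 ≤ α ≤ 1, as
x1 = α(0) + (1 − α)(3) = 3 − 3α
x2 = α(5/2) + (1 − α)(1) = 1 + (3/2)α,
and by the observation above every such point attains the same optimal value z = 10.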

Remarks. In practice, alternative optima are useful because we can choose from many
solutions without experiencing deterioration in the objective value. For instance, in the
present example, the solution at B shows that only activity 2 is at a positive level,
whereas at C both activities are positive. If the example represents a product-mix
situation, there may be advantages in producing two products rather than one to
meet market competition. In this case, the solution at C may be more appealing.

(III) Unbounded solution

In some LP models, the values of the variables may be increased indefinitely without
violating any of the constraints, meaning that the solution space is unbounded in at least
one variable. As a result, the objective value may increase (maximization case) or
decrease (minimization case) indefinitely. In this case, both the solution space and the
optimum objective value are unbounded.
Unboundedness points to the possibility that the model is poorly constructed. The
most likely irregularity in such models is that one or more nonredundant constraints
have not been accounted for, and the parameters (constants) of some constraints may not
have been estimated correctly.
The following examples show how unboundedness, in both the solution space and the
objective value, can be recognized in the simplex tableau.

In the starting tableau, both x1 and x2 have negative z-equation coefficients. Hence either
one can improve the solution. Because x1 has the most negative coefficient, it is normally
selected as the entering variable. However, all the constraint coefficients under x2 (i.e., the
denominators of the ratios of the feasibility condition) are negative or zero. This means
that there is no leaving variable and that x2 can be increased indefinitely without violating
any of the constraints. Because each unit increase in x2 will increase z by 1, an infinite
increase in x2 leads to an infinite increase in z. Thus, the problem has no bounded solution.
This result can be seen in the figure: the solution space is unbounded in the direction of x2,
and the value of z can be increased indefinitely.
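
A minimal numerical check of this pattern (the LP data below is assumed for illustration and chosen only to reproduce the situation described above):

```python
import numpy as np

# Assumed illustrative LP: maximize z = 2x1 + x2
# subject to x1 - x2 <= 10, 2x1 <= 40, x1, x2 >= 0.
# Columns: x1, x2, s1, s2 | RHS; last row is the z-row.
T = np.array([[ 1., -1., 1., 0., 10.],
              [ 2.,  0., 0., 1., 40.],
              [-2., -1., 0., 0.,  0.]])

col = 1                                   # examine nonbasic x2
if np.all(T[:-1, col] <= 0):              # no strictly positive ratio denominator
    print("x2 has no leaving variable -> the LP is unbounded in the x2 direction")
```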
(IV) Infeasible solution
LP models with inconsistent constraints have no feasible solution. This situation can never
occur if all the constraints are of the type ≤ with nonnegative right-hand sides because the
slacks provide a feasible solution. For other types of constraints, we use artificial
variables. Although the artificial variables are penalized in the objective function to force
them to zero at the optimum, this can occur only if the model has a feasible space.
Otherwise, at least one artificial variable will be positive in the optimum iteration. From
the practical standpoint, an infeasible space points to the possibility that the model is not
formulated correctly.
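
As a minimal generic diagnostic (assuming the final solution vector and the indices of the artificial columns are available; the function name is ours, not a library routine):

```python
def artificials_positive(solution, artificial_indices, tol=1e-8):
    """Big-M / two-phase diagnostic: if any artificial variable is still positive
    at the final iteration, the original model has no feasible solution."""
    return [j for j in artificial_indices if solution[j] > tol]

# Illustrative call: an artificial variable (index 3) ends at value 4 -> infeasible.
print(artificials_positive([0.0, 2.0, 0.0, 4.0], artificial_indices=[3]))   # [3]
```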

Using the penalty M = 100 for the artificial variable R, the following tableaux provide the
simplex iterations of the model.

Optimum iteration 1 shows that the artificial variable R is positive (= 4), which indicates
that the problem is infeasible.
