
IEOR E4007 Handout 5

Prof. Iyengar September 18, 2002

Lecture 3
Linear Programming

Standard form LP

Review of Basic Solutions and Basic Feasible solutions

Simplex Algorithm

Big-M method

Geometry of linear programs


How does one solve LPs ? The answer lies in geometry !

one-dimensional LP
        maximize    c x
        subject to  a1 ≤ x ≤ a2
Optimal solution x*:
c > 0: x* = a2
c = 0: any x ∈ [a1, a2] optimal ... in particular x* = a1 or x* = a2
c < 0: x* = a1
Moral: an optimal solution x* always lies at the boundary.

General LP
        minimize    c^T x,
        subject to  x ∈ P (a polytope)
The set of points x with cost c^T x = z is a plane perpendicular to c (a
hyperplane).
Slide the hyperplane in the direction of decreasing cost (the -c direction)
until it is about to leave the feasible region.

Example
        minimize    c1 x1 + c2 x2,
        such that   -x1 + x2 ≤ 1,
                    x ∈ R^2_+
i.e.
        P = { x : -x1 + x2 ≤ 1, x1 ≥ 0, x2 ≥ 0 }

Algorithm: slide the hyperplane along -c (decreasing cost) until one can go no further

Results:

c = (1, 1)^T:   x = (0, 0) is optimal

c = (1, 0)^T:   all x = (0, x2), 0 ≤ x2 ≤ 1, are optimal
c = (0, 1)^T:   all x = (x1, 0), x1 ≥ 0, are optimal
c = (-1, -1)^T: unbounded
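
These four cases can also be verified numerically. The sketch below is illustrative (it is not part of the original notes) and assumes the feasible region P = { x : -x1 + x2 ≤ 1, x ≥ 0 } written above; it feeds each cost vector to scipy.optimize.linprog.

```python
# Hedged sketch: check the four cost vectors on P = {-x1 + x2 <= 1, x >= 0}.
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[-1.0, 1.0]])     # -x1 + x2 <= 1
b_ub = np.array([1.0])
bounds = [(0, None), (0, None)]    # x1 >= 0, x2 >= 0

for c in ([1, 1], [1, 0], [0, 1], [-1, -1]):
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if res.status == 3:            # HiGHS reports status 3 for unbounded LPs
        print(c, "-> unbounded")
    else:
        print(c, "-> an optimal x =", res.x, ", value =", res.fun)
```

For c = (1, 0) and c = (0, 1) the optimal set is a face, and the solver simply returns one point from it.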

Possible outcomes:

1. The optimal point always lies on the boundary of P
2. If the optimal point is unique, then it is a corner
3. If there is an optimal solution, then there is an optimal corner
4. The problem might be unbounded ... it might also be infeasible

Algebraic definition of corners

corner or extreme point: x ∈ P is a corner if it cannot be written as

        x = λ y + (1 - λ) z

for some 0 ≤ λ ≤ 1 and y, z ∈ P with y, z ≠ x.

vertex: x ∈ P is a vertex if there exists a c ∈ R^n such that

        c^T x < c^T y,   for all y ∈ P, y ≠ x

basic feasible solution (BFS): Suppose the polyhedron is

        P = { x ∈ R^n : Ax ≤ b },   A ∈ R^{m×n}
          = { x ∈ R^n : a_i^T x ≤ b_i, i = 1, ..., m }

Let x ∈ P and let I = { i : a_i^T x = b_i } = active constraints at x.

x is a BFS if x is the unique point satisfying a_i^T x = b_i for all i ∈ I.

Theorem: x corner ⟺ x vertex ⟺ x BFS
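
The BFS test is easy to state computationally: the constraints active at x must pin x down uniquely, i.e. the active rows of A must have rank n. The sketch below is an illustrative helper (the function name is ours, not from the notes); the polyhedron used is the small example max x1 + x2 studied a few pages below.

```python
import numpy as np

# P = {x : A x <= b} for the example 2x1 + x2 <= 3, x1 + 2x2 <= 3, x >= 0
A = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([3.0, 3.0, 0.0, 0.0])

def is_bfs(x, A, b, tol=1e-9):
    """True if x is a basic feasible solution of {x : A x <= b}."""
    x = np.asarray(x, dtype=float)
    if np.any(A @ x > b + tol):                 # must be feasible
        return False
    active = np.isclose(A @ x, b, atol=tol)     # I = {i : a_i^T x = b_i}
    # x is the unique solution of the active constraints  <=>  rank(A_I) = n
    return np.linalg.matrix_rank(A[active]) == len(x)

print(is_bfs([1.0, 1.0], A, b))    # a corner -> True
print(is_bfs([1.5, 0.0], A, b))    # another corner -> True
print(is_bfs([0.5, 0.5], A, b))    # interior point -> False
```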



Simple example

Consider the following LP:


max   x1 + x2
s.t.  2x1 + x2 ≤ 3
      x1 + 2x2 ≤ 3
      x1 ≥ 0
      x2 ≥ 0

The feasible region is shown below

[Figure 1: Feasible region for the example, bounded by the lines 2x1 + x2 = 3,
x1 + 2x2 = 3 and the coordinate axes, with corners labelled a, b, c, d.]

corner ⟺ unique solution of the binding constraints: there are 2 unknowns
(x1, x2), therefore we need 2 equations.
1. corner a: unique solution of x1 = 0 and x2 = 0
2. corner b: unique solution of 2x1 + x2 = 3 and x2 = 0
3. corner c: unique solution of 2x1 + x2 = 3 and x1 + 2x2 = 3
4. corner d: unique solution of x1 = 0 and x1 + 2x2 = 3

Optimization strategy: start from one corner and move to a better
neighbouring corner

Start at corner a
1. move towards corner b: Then dx1 > 0 and dx2 = 0, so
   df = d(x1 + x2) = dx1 > 0.
   The objective improves (this is a maximization problem) so move to
   corner b

At corner b
1. move to corner a: This cannot improve matters
2. move to corner c: Then dx1 < 0 and 2dx1 + dx2 = 0, i.e.
   dx2 = -2dx1, so
   df = d(x1 + x2) = -dx1 > 0
   The objective improves (this is a maximization problem) so move to
   corner c

At corner c
1. move to corner b: This cannot improve matters
2. move to corner d: Then dx1 < 0 and dx1 + 2dx2 = 0, i.e.
   dx2 = -(1/2) dx1, so
   df = d(x1 + x2) = (1/2) dx1 < 0
   The objective does not improve so do not move to d

Does this mean c is optimal? Answer: yes, but why?

To get a legitimate algorithm one must answer the following questions:

1. How does one find the initial corner?
2. How does one move to a better neighbouring corner?
3. How does one decide whether the current corner is optimal?
4. How does one decide that the program is unbounded and quit?

Simplex skeleton:

1. Choose an initial corner


2. Move to a better neighbouring corner
3. If no better neighbour, current corner optimal; stop
4. Check to see if the problem is unbounded; If so, stop
5. Return to step 2

Easier to work with the following form of LP:

        min/max     c^T x,
        such that   Ax = b,
                    x ≥ 0,
where A ∈ R^{m×n}, b ∈ R^m and x, c ∈ R^n.
This is called the standard form LP.

Standard form LP

All LPs can be converted to this form (see Chap. 5 p. 180)


Example

maximize    x1 + x2,
such that   x1 + 2x2 ≤ 3,
            2x1 + x2 ≤ 3,
            x1 ≥ 0,
            x2 ≥ 0.

Standard form: introduce two new slack variables

maximize    x1 + x2,
such that   x1 + 2x2 + s1 = 3,
            2x1 + x2 + s2 = 3,
            x1 ≥ 0, x2 ≥ 0, s1 ≥ 0, s2 ≥ 0.
In matrix notation:

maximize    (1 1 0 0) z

such that   [ 1 2 1 0 ]       [ 3 ]
            [ 2 1 0 1 ] z  =  [ 3 ],
            z ≥ 0,

where z = (x1, x2, s1, s2)^T
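
The conversion above is mechanical, so it is easy to script. A minimal sketch (illustrative, not from the notes): append one slack variable per inequality, which turns A into [A I] and pads the cost vector with zeros.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([3.0, 3.0])
c = np.array([1.0, 1.0])                   # objective of the original problem

m, n = A.shape
A_std = np.hstack([A, np.eye(m)])          # [A | I]: columns for x1, x2, s1, s2
c_std = np.concatenate([c, np.zeros(m)])   # slack variables carry zero cost

print(A_std)     # [[1. 2. 1. 0.]
                 #  [2. 1. 0. 1.]]
print(c_std)     # [1. 1. 0. 0.]
# standard form: optimize c_std^T z subject to A_std z = b, z >= 0
```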

Basic solutions in standard form LP

Assumption: m ≤ n and the rows of A are LI ... why?

x is a BFS ⟺ x is the unique solution of the constraints tight at x:

  Ax = b gives m linearly independent equations;
  set n - m of the inequalities x_i ≥ 0 to equality
  such that the rest of the system has a unique solution.

Alternatively:

1. Choose m LI columns of A: a^{i1}, ..., a^{im}.
2. Set x_i = 0 for all i ∉ B = {i1, ..., im}.
3. Solve the system of equations

        A_B x_B = b,   where A_B = [ a^{i1}  a^{i2}  ...  a^{im} ]
                       and   x_B = ( x_{i1}, x_{i2}, ..., x_{im} )^T

Unique solution since A_B has rank m (by construction)

vector generated by this algorithm is called a basic solution

If vector feasible for LP, then basic feasible solution

the variables in B are called basic variables

and the rest are called non-basic variables.
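
A minimal sketch of this recipe (the helper name is ours, purely illustrative): pick an index set B of m linearly independent columns, set the other variables to zero, and solve A_B x_B = b.

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([3.0, 3.0])

def basic_solution(A, b, B):
    """Basic solution for the (0-based) index set B of basic variables."""
    x = np.zeros(A.shape[1])
    x[list(B)] = np.linalg.solve(A[:, list(B)], b)   # A_B x_B = b
    return x

# B = {1, 4} in the 1-based notation of the notes -> columns 0 and 3 here
z = basic_solution(A, b, B=(0, 3))
print(z, "feasible" if np.all(z >= 0) else "infeasible")   # infeasible, cf. Case (i) below
```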



Example (contd.):

maximize    (1 1 0 0) z

subject to  [ 1 2 1 0 ]       [ 3 ]
            [ 2 1 0 1 ] z  =  [ 3 ]
            z ≥ 0

Case (i): Choose B = {1, 4}

        A_B = [ 1 0 ]
              [ 2 1 ]   is non-singular

  set z2 = z3 = 0 and solve to get z = [3 0 0 -3]^T
  the solution is not feasible for the linear program

Case (ii): Choose B = {1, 3}

        A_B = [ 1 1 ]
              [ 2 0 ]   is non-singular

  set z2 = z4 = 0 and solve to get z = [1.5 0 1.5 0]^T
  the solution is feasible for the linear program

For this example ... n = 4, m = 2 and any two columns are LI, therefore
there are (n choose m) = 6 basic solutions ...

z1 = 0, z2 = 0   ⟹   z = [0, 0, 3, 3]^T       BFS
z1 = 0, z3 = 0   ⟹   z = [0, 1.5, 0, 1.5]^T   BFS
z1 = 0, z4 = 0   ⟹   z = [0, 3, -3, 0]^T      Infeasible
z2 = 0, z3 = 0   ⟹   z = [3, 0, 0, -3]^T      Infeasible
z2 = 0, z4 = 0   ⟹   z = [1.5, 0, 1.5, 0]^T   BFS
z3 = 0, z4 = 0   ⟹   z = [1, 1, 0, 0]^T       BFS

Can you associate each BFS with the appropriate corner of the feasible
region ?

Naive algorithm: check all the extreme points and choose the best ...
in general the number of BFS is exponential in n: impossible to list them all
(for this tiny example the full list can be generated, as sketched below)
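
An illustrative enumeration for this 4-variable example: loop over all C(4,2) = 6 choices of basic columns and classify each basic solution.

```python
import itertools
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([3.0, 3.0])

for B in itertools.combinations(range(4), 2):
    A_B = A[:, list(B)]
    if np.linalg.matrix_rank(A_B) < 2:
        continue                              # columns not linearly independent
    z = np.zeros(4)
    z[list(B)] = np.linalg.solve(A_B, b)
    status = "BFS" if np.all(z >= -1e-12) else "infeasible"
    print("basis", B, "-> z =", z, status)
# in general there are (n choose m) such systems: exponentially many
```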

Simplex algorithm

Suppose we start at the corner with B = {3, 4}, i.e.

        z0 = (0, 0, 3, 3)^T

Want to
  move to a new adjacent BFS with higher value
  check if the current BFS is optimal
  check if the problem is unbounded
Attack the first problem and the others will be solved along the way.

Strategy: Go along a feasible direction d to an adjacent BFS.

  d is feasible at z if z + θd is feasible for all small enough θ > 0
  an adjacent BFS has only one different basic variable

Have to find feasible directions that change (i.e. increase) only one
non-basic variable, keeping the rest set at zero.

At z0 the non-basic variables are NB = {1, 2}.

Increase z1 keeping z2 = 0, i.e. move along the vector

        d1 = ( 1, 0, d1_3, d1_4 )^T

Want z0 + θd feasible, so at the very least

        A(z0 + θd1) = b,   i.e.   Ad1 = 0

        [ 1 ]   [ 1 0 ] [ d1_3 ]              [ d1_3 ]   [ -1 ]
        [ 2 ] + [ 0 1 ] [ d1_4 ]  =  0,  or   [ d1_4 ] = [ -2 ]

The objective at the new point z = z0 + θd1 is given by

        c^T z = c^T z0 + θ (c^T d1)
              = c^T z0 + θ (1 1 0 0)(1, 0, -1, -2)^T
              = c^T z0 + θ

c̄1 = c^T d1 = 1 is called the reduced cost with respect to the non-basic
variable 1.
Should one move along this vector d? Why?

Increase z2 keeping z1 constant. Then

        d2_B = ( d2_3 )  =  -A_B^{-1} a2  =  ( -2 )
               ( d2_4 )                      ( -1 )

and the reduced cost with respect to z2 is

        c̄2 = c2 + c_B^T d2_B = 1
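
The two reduced-cost computations above amount to a few lines of linear algebra. A sketch (illustrative, 0-based indices):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([1.0, 1.0, 0.0, 0.0])      # maximize c^T z
basic, nonbasic = [2, 3], [0, 1]        # B = {3, 4}, NB = {1, 2} in 1-based terms

A_B = A[:, basic]
for j in nonbasic:
    d = np.zeros(4)
    d[j] = 1.0
    d[basic] = -np.linalg.solve(A_B, A[:, j])    # keeps A d = 0
    print("z%d: d =" % (j + 1), d, " reduced cost c^T d =", c @ d)
# both z1 and z2 have reduced cost 1, as computed above
```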

Let c̄ denote the vector of reduced costs of all non-basic variables.

  Suppose c̄ ≤ 0, i.e. every neighbouring point is worse.
  Does this mean it is the best? Yes. (why? see the Technical appendix)
  This answers our second question ...

Suppose there exists j ∈ NB such that c̄_j > 0; then the objective improves as
one moves along the vector dj ...

Since c̄1 = c̄2 = 1, arbitrarily choose z1.

How far can one march along d1 = (1, 0, -1, -2)^T until one leaves the
feasible region?
  Since Az = A(z0 + θd1) = b, we only need to ensure z0 + θd1 ≥ 0
  d1_1 = 1 ≥ 0 and d1_k = 0 for all other k ∈ NB, therefore z_NB ≥ 0
  Have only to ensure that z_B = z0_B + θ d1_B ≥ 0.

If dj_B ≥ 0 then θ can be increased indefinitely ... we have answered the
third question ...

Theorem: Suppose there exists a non-basic variable j with reduced
cost c̄_j > 0 and dj_B ≥ 0. Then the problem is unbounded.

Have to choose θ such that

        z3 = z3^0 + θ d1_3 = 3 - θ  ≥ 0   ⟹   θ ≤ 3
        z4 = z4^0 + θ d1_4 = 3 - 2θ ≥ 0   ⟹   θ ≤ 1.5

Choose θ = min{3, 1.5} = 1.5

            [ 0 ]         [  1 ]     [ 1.5 ]
        z = [ 0 ] + 1.5 · [  0 ]  =  [  0  ]
            [ 3 ]         [ -1 ]     [ 1.5 ]
            [ 3 ]         [ -2 ]     [  0  ]

New adjacent BFS ... B = {1, 3} and NB = {2, 4}! Now repeat this until the
optimum is reached or the problem is found to be unbounded.
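
The step-size choice above is the usual ratio test; a small sketch (illustrative):

```python
import numpy as np

z0 = np.array([0.0, 0.0, 3.0, 3.0])
d = np.array([1.0, 0.0, -1.0, -2.0])      # direction that increases z1

decreasing = d < 0                         # only decreasing components limit theta
theta = np.min(z0[decreasing] / -d[decreasing])    # min{3/1, 3/2} = 1.5
z_new = z0 + theta * d
print("theta =", theta, " new BFS z =", z_new)      # 1.5, (1.5, 0, 1.5, 0)
```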

Canonical formulation and revised simplex

Let B be a given basis; then Ax = b is equivalent to

        A_B x_B + A_N x_N = b   ⟺   x_B + B̄ x_N = b̄,

where B̄ = A_B^{-1} A_N and b̄ = A_B^{-1} b.

In the new formulation, the columns corresponding to the basic variables have
a single 1 and all other entries 0.
Continuing further,

        c^T x = c_B^T (b̄ - B̄ x_N) + c_N^T x_N
              = c_B^T b̄ + (c_N - B̄^T c_B)^T x_N
              = c_B^T b̄ + c̄_N^T x_N,   where c̄_N = c_N - B̄^T c_B.

In the new formulation, the basic components of the objective vector are all
0s. We will write this canonical formulation in the following manner

         0     c̄_N^T  |  c_B^T b̄
         I      B̄     |     b̄
         B      NB

For the example, choose B = {3, 4}. Then A_B = I and the canonical
formulation is given by

        z1   z2   z3   z4 |
         1    1    0    0 |  0
         1    2    1    0 |  3
         2    1    0    1 |  3
        NB   NB    B    B

Simplex step (also called pivot) in canonical formulation

1. Select the maximum non-zero component k of the objective vector c̄. If
   c̄_k ≤ 0 ... declare the current basis optimal.
2. Let d denote the column of the constraint matrix corresponding to k. If
   max_{l=1,...,m} d_l ≤ 0, declare the problem unbounded.
3. Let row j attain the smallest positive value of b̄_l / d_l over l = 1, ..., m.
4. Pivot on the (j, k) position, i.e.
   (a) Divide row j of the constraint matrix by B̄_jk
   (b) For l ≠ j, subtract B̄_lk / B̄_jk times row j from row l
5. k is now a basic variable and the basic variable corresponding to row j is
   no longer basic.

Working out the first pivot for the example.

1. Select column: c̄ = (1 1 0 0). Set k = 1.
2. Select row: { b̄_l / d_l } = { 3/1, 3/2 }. Set j = 2.
3. Pivot on the (j = 2, k = 1) position. New canonical form

        z1    z2    z3    z4 |
         0   0.5     0  -0.5 | 1.5
         0   1.5     1  -0.5 | 1.5
         1   0.5     0   0.5 | 1.5
         B    NB     B    NB
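
A sketch of the pivot step applied to this tableau (illustrative; the function name is ours). One convention note: applying the same row operation to the objective row makes its last entry the negative of the current objective value, whereas the tableaus in these notes record the positive value.

```python
import numpy as np

def pivot(T, j, k):
    """Pivot on constraint row j and variable column k (both 1-based, as in the notes)."""
    T = T.astype(float).copy()
    T[j] /= T[j, k - 1]                    # step 4(a): scale the pivot row
    for l in range(T.shape[0]):
        if l != j:
            T[l] -= T[l, k - 1] * T[j]     # step 4(b), applied to row 0 as well
    return T

# row 0: reduced costs and objective cell; rows 1..m: [A_bar | b_bar]
T0 = np.array([[1.0, 1.0, 0.0, 0.0, 0.0],
               [1.0, 2.0, 1.0, 0.0, 3.0],
               [2.0, 1.0, 0.0, 1.0, 3.0]])
print(pivot(T0, j=2, k=1))
# reduced-cost row becomes [0, 0.5, 0, -0.5, -1.5]; the constraint rows match
# the tableau above
```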

Continuing with the example

Iteration 1:

1. c̄ = (1 1 0 0). Set k = 1.
2. { b̄_l / d_l } = { 3/1, 3/2 }. Set j = 2.
3. Pivot on the (j = 2, k = 1) position. New canonical form

        z1    z2    z3    z4 |
         0   0.5     0  -0.5 | 1.5
         0   1.5     1  -0.5 | 1.5
         1   0.5     0   0.5 | 1.5
         B    NB     B    NB

[Figure: feasible region with the iterates z0 and z1 marked.]

Iteration 2

1. c̄ = (0 0.5 0 -0.5). Set k = 2.
2. { b̄_l / d_l } = { 1.5/1.5, 1.5/0.5 }. Set j = 1.
3. Pivot on the (j = 1, k = 2) position. New canonical form

        z1   z2     z3     z4 |
         0    0  -0.33  -0.33 | 2.0
         0    1   0.67  -0.33 | 1.0
         1    0  -0.33   0.67 | 1.0
         B    B    NB     NB

[Figure: feasible region with the iterates z0, z1, z2 marked.]

Iteration 3

1. c̄ = (0 0 -0.33 -0.33) ≤ 0. Basis optimal.
2. Optimal value = 2.0
3. Optimal basis B = {1, 2}
4. Optimal value of the basic variables:

        ( z1 )     ( 1 )
        ( z2 )  =  ( 1 )
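
The result can be cross-checked with an off-the-shelf solver (a sketch; scipy minimizes, so the maximization is passed in as min -x1 - x2):

```python
from scipy.optimize import linprog

res = linprog(c=[-1.0, -1.0],
              A_ub=[[1.0, 2.0], [2.0, 1.0]],
              b_ub=[3.0, 3.0],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)    # x = (1, 1), optimal value 2
```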

Simplex: remaining issues

Does this algorithm ever stop?

  There are at most (n choose m) basic feasible solutions (why?).
  If θ > 0 then the objective strictly improves at every iteration, so one
  never returns to the same basis. So the algorithm stops after at most
  (n choose m) steps ... exponential in (n, m)!
  But θ = 0 means x_i = 0 for some basic variable, i.e. a degenerate BFS. If
  there are degenerate BFSs the algorithm can keep cycling through bases
  and never stop.
  Bland's rule:
  1. The entering variable is the smallest j for which c̄_j > 0
  2. The pivot row is the smallest-index row among those tied for the leaving
     variable

How does one find the initial BFS?

  By suitably multiplying rows of A by -1, assume that b ≥ 0. Construct an
  auxiliary LP as follows:

        min         1^T y,
        subject to  Ax + y = b,
                    x, y ≥ 0

  For this problem an initial BFS is B = {n + 1, ..., n + m}.
  Solve this LP to optimality. If the optimal cost is not 0 then declare the
  original LP infeasible and stop. Else take the optimal basis as the
  starting basis for solving our LP. (not quite ... there are some delicate
  details)
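
A sketch of this two-phase idea (illustrative; the helper name is ours, and the auxiliary LP is handed to scipy.optimize.linprog rather than to a hand-rolled simplex):

```python
import numpy as np
from scipy.optimize import linprog

def phase_one_feasible(A, b):
    """Build the auxiliary LP min 1^T y s.t. Ax + y = b, x, y >= 0 and solve it."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    flip = b < 0
    A[flip] *= -1.0                      # multiply rows by -1 so that b >= 0
    b[flip] *= -1.0
    m, n = A.shape
    A_aux = np.hstack([A, np.eye(m)])    # [A | I], artificial variables y
    c_aux = np.concatenate([np.zeros(n), np.ones(m)])
    res = linprog(c_aux, A_eq=A_aux, b_eq=b, bounds=[(0, None)] * (n + m),
                  method="highs")
    return res.fun < 1e-9                # optimal cost 0  <=>  original LP feasible

A = [[1, 2, 1, 0], [2, 1, 0, 1]]
b = [3, 3]
print(phase_one_feasible(A, b))          # True: the standard-form example is feasible
```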

Big-M method

Combines the two phases into one: again assume that b ≥ 0.

        min   c^T x                 min   c^T x + M Σ_{i=1}^m y_i
        s.t.  Ax = b,       ⟶      s.t.  Ax + y = b,
              x ≥ 0                       x, y ≥ 0

  An initial basis for the transformed problem is easy ... just take the
  columns corresponding to y.
  If M is large enough and the original problem is feasible ... the optimal
  solution will not contain any of the y's.
  Treat M as a parameter larger than everything else.
Example

        min   x1 + x2 + x3
        s.t.   x1 + 2x2 + 3x3      = 3
              -x1 + 2x2 + 6x3      = 2
                    4x2 + 9x3      = 5
                          3x3 + x4 = 1
              x ≥ 0

x4 appears only in the last equation, so it can be taken as a basis element.
Introduce new (artificial) variables for the rest to get

        min   x1 + x2 + x3 + M x5 + M x6 + M x7
        s.t.   x1 + 2x2 + 3x3      + x5           = 3
              -x1 + 2x2 + 6x3           + x6      = 2
                    4x2 + 9x3                + x7 = 5
                          3x3 + x4                = 1
              x ≥ 0

Initial basis B = {x5, x6, x7, x4} and so c_B^T = ( M  M  M  0 ). The
reduced costs c̄_i = c_i - c_B^T A_B^{-1} a^i are:

        c̄1 = 1,   c̄2 = -8M + 1,   c̄3 = -18M + 1,

and c_B^T x_B = 10M.

Initial Tableau

     x1      x2        x3     x4   x5   x6   x7 |
      1   -8M + 1   -18M + 1   0    0    0    0 |  10M
      1       2         3      0    1    0    0 |   3
     -1       2         6      0    0    1    0 |   2
      0       4         9      0    0    0    1 |   5
      0       0         3      1    0    0    0 |   1
      N       N         N      B    B    B    B

x3 has the lowest reduced cost, so it enters and x4 leaves the basis.

Step 2

     x1      x2    x3      x4      x5   x6   x7 |
      1   -8M + 1   0   6M - 1/3    0    0    0 |  4M + 1/3
      1       2     0      -1       1    0    0 |   2
     -1       2     0      -2       0    1    0 |   0
      0       4     0      -3       0    0    1 |   2
      0       0     1      1/3      0    0    0 |  1/3
      N       N     B       N       B    B    B

c̄2 < 0 so x2 enters and x6 leaves (degenerate pivot)

Step 3

        x1       x2   x3      x4      x5      x6     x7 |
    -4M + 3/2     0    0   -2M + 2/3   0   4M - 1/2   0 |  4M + 1/3
        2         0    0       1       1      -1      0 |   2
      -1/2        1    0      -1       0      1/2     0 |   0
        2         0    0       1       0      -2      1 |   2
        0         0    1      1/3      0       0      0 |  1/3
        N         B    B       N       B       N      B

c̄1 < 0 so x1 enters and x5 leaves

Step 4

     x1   x2   x3     x4       x5         x6      x7 |
      0    0    0   -1/12   2M - 3/4   2M + 1/4    0 |  11/6
      1    0    0    1/2       1/2       -1/2      0 |   1
      0    1    0   -3/4       1/4        1/4      0 |  1/2
      0    0    0     0        -1         -1       1 |   0
      0    0    1    1/3        0          0       0 |  1/3
      B    B    B     N         N          N       B

c̄4 < 0, so x4 enters and x3 leaves.

Step 5

     x1   x2    x3    x4      x5         x6      x7 |
      0    0   1/4     0   2M - 3/4   2M + 1/4    0 |  7/4
      1    0  -3/2     0      1/2       -1/2      0 |  1/2
      0    1   9/4     0      1/4        1/4      0 |  5/4
      0    0    0      0      -1         -1       1 |   0
      0    0    3      1       0          0       0 |   1
      B    B    N      B       N          N       B
All reduced costs are non-negative ... optimal point: x = (1/2, 5/4, 0, 1),
with cost 7/4.
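
A numerical sanity check of this example (illustrative): with a concrete large M the artificial variables are driven to zero and the optimal cost 7/4 of the original LP is recovered.

```python
import numpy as np
from scipy.optimize import linprog

M = 1e4
A_eq = np.array([[ 1.0, 2.0, 3.0, 0.0, 1.0, 0.0, 0.0],
                 [-1.0, 2.0, 6.0, 0.0, 0.0, 1.0, 0.0],
                 [ 0.0, 4.0, 9.0, 0.0, 0.0, 0.0, 1.0],
                 [ 0.0, 0.0, 3.0, 1.0, 0.0, 0.0, 0.0]])
b_eq = np.array([3.0, 2.0, 5.0, 1.0])
c = np.array([1.0, 1.0, 1.0, 0.0, M, M, M])   # original costs plus M on x5, x6, x7

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 7, method="highs")
print(res.x)      # approx (0.5, 1.25, 0, 1, 0, 0, 0)
print(res.fun)    # approx 1.75 = 7/4
```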

Readings

Rardin
Chapter 5: Section 5.3, 5.4
Chapter 7: Section 7.1

Bertsimas
Chapter 3 : Sections 3.1, 3.2, 3.3, 3.4
Chapter 4 : Sections 4.1, 4.2, 4.3

Luenberger
Chapter 4: Sections 4.1, 4.2, 4.3

Technical appendix
Assume the rows of the matrix [A b] are linearly independent.
Proof of Theorem: x corner ⟺ x vertex ⟺ x BFS

x BFS ⟹ x vertex
  Let I = {i : a_i^T x = b_i} and set c = Σ_{i∈I} λ_i a_i, where λ_i < 0.
  For all y ∈ P,  c^T y = Σ_{i∈I} λ_i a_i^T y ≥ Σ_{i∈I} λ_i b_i = c^T x.

  Suppose x is not a vertex, i.e. there exists y ∈ P, y ≠ x, such that
  c^T y = c^T x. Then

        0 = c^T (y - x) = Σ_{i∈I} λ_i (a_i^T y - b_i)

  Since λ_i < 0 and a_i^T y ≤ b_i, the above equation implies a_i^T y = b_i
  for all i ∈ I. Thus x is not the unique solution, i.e. x is not a BFS. A
  contradiction; therefore, x is a vertex.

x vertex ⟹ x corner
  Let c be such that c^T x < c^T y for all y ∈ P, y ≠ x.
  Suppose x is not a corner, i.e. there exist y, z ∈ P, both different from
  x, and 0 < λ < 1 such that x = λy + (1 - λ)z. Then

        c^T x = c^T (λy + (1 - λ)z) = λ c^T y + (1 - λ) c^T z
              > λ c^T x + (1 - λ) c^T x = c^T x.

  A contradiction; therefore, x is a corner.

x corner ⟹ x BFS
  Let I = {i : a_i^T x = b_i}. We must show that x is the unique solution of
  A_I z = b_I, i.e. Rank(A_I) = n.
  Suppose not. Then there exists y ≠ 0 such that A_I y = 0.
  Since a_i^T x < b_i for all i ∉ I, there exists ε > 0 such that

        a_i^T (x ± εy) < b_i,   for all i ∉ I,

  so both x + εy and x - εy lie in P. Then x = (1/2)(x + εy) + (1/2)(x - εy).
  A contradiction; therefore x is a BFS.

We are done since we completed the circle.



Neighbours bad ⟹ optimal point

Theorem: Let B be the basic variables at a BFS z and let c̄ be the reduced
costs at B. Then c̄ ≤ 0 implies that z is optimal.
Proof: Suppose z is not optimal, i.e. there exists a feasible y such that
c^T y > c^T z. Let d = y - z; then Ad = 0, i.e.

        A_B d_B = - Σ_{j∈NB} d_j a^j

i.e. d_B = - Σ_{j∈NB} d_j A_B^{-1} a^j. Therefore

        c^T (y - z) = c^T d
                    = c_B^T d_B + c_NB^T d_NB
                    = Σ_{j∈NB} ( c_j - c_B^T A_B^{-1} a^j ) d_j
                    = Σ_{j∈NB} c̄_j d_j

Since y is feasible we have d_j = y_j - z_j ≥ 0 for all j ∈ NB, and c̄_j ≤ 0,
so c^T (y - z) ≤ 0.

A contradiction.
