# MATHEMATICAL PROGRAMMING I

Books
There is no single course text, but there are many useful books, some more
mathematical, others written at a more applied level. A selection is as follows:

Bazaraa, Jarvis and Sherali. Linear Programming and Network Flows. Wiley, 2nd Ed., 1990.
(A solid reference text.)

Papadimitriou and Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Dover, 1998.
(Recommended - good value.)

Gass, Saul I. Linear Programming: Methods and Applications, 5th edition. Thomson, 1985.

Dantzig, George B. Linear Programming and Extensions. Princeton University Press, 1963.
(The most widely cited early textbook in the field.)

Chvatal, V. Linear Programming. Freeman, 1983.

Luenberger, D. Introduction to Linear and Nonlinear Programming. Addison Wesley, 1984.

Wolsey, Laurence A. Integer Programming. Wiley, 1998.

Taha, H. Operations Research: An Introduction. Prentice-Hall, 7th Ed., 2003.
(More applied, many examples.)

Winston, Wayne. Operations Research: Applications & Algorithms. Duxbury Press, 1997.
(Totally applied.)

Useful websites
1. FAQ page at the Optimization Technology Center,
Northwestern University and Argonne National Laboratory:
http://www-unix.mcs.anl.gov/otc/Guide/faq/linear-programming-faq.html
2. My notes are currently at:
http://www.maths.man.ac.uk/~mkt/new_teaching.htm
1. Introduction
Definition
A linear programming problem (or LP) is the optimization (maximization
or minimization) of a linear function of n real variables subject to a set of linear
constraints.
Example 1.1
The following is an LP problem in n = 2 non-negative variables x_1, x_2:

maximize    x_1 + 3x_2          O.F.
subject to  x_1 + x_2  <= 6     Constraint 1
            x_1 + 2x_2 <= 8     Constraint 2
            x_1, x_2   >= 0     Non-negativity

The variables x_1, x_2 are the decision variables, which can be represented
as a vector x in the positive quadrant of a real 2D space R^2. The function
f(x_1, x_2) = x_1 + 3x_2 we wish to maximize is known as the objective function
(OF) and represents the value of a particular choice of x_1 and x_2.
The two inequalities that have to be satisfied by a feasible solution to our
problem are known as the LP constraints. Finally, the constraints x_1, x_2 >= 0
represent non-negativity of the problem variables. The set of x-values, i.e. all
pairs (x_1, x_2), satisfying all the constraints is a subset S of R^2 known as the
LP's feasible region.
For minimization problems, the value of the OF is required to be as small as
possible and f(x_1, x_2) = f(x) is often referred to as a cost function. Sometimes
we denote the objective function by z(x).

Notes
- Graphical solution of this example (which will be covered in lectures) is
  only possible for problems in two variables.
- Finding the maximum of z(x) is equivalent to finding the minimum of
  -z(x), so we can, for theoretical purposes and without loss of generality
  (w.l.o.g.), consider either max or min problems only. Any additive
  constant in z(x) can also be ignored.
- A problem with a variable x that can take positive or negative values
  (known as a free or unrestricted in sign (u.r.s.) variable) can easily be
  incorporated into an LP by defining x = u - v with u, v >= 0.
- LP problems are commonly formulated with a mixture of <=, >= and =
  constraints.
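Example 1.1 can also be solved numerically. The sketch below uses SciPy's `linprog` (assuming SciPy is available; it is not part of the original notes). Since `linprog` minimizes, we negate the OF coefficients, exactly as the max/min equivalence in the Notes suggests.

```python
# Example 1.1 with scipy.optimize.linprog (a sketch; assumes SciPy is installed).
# linprog minimizes, so maximizing x1 + 3x2 becomes minimizing -x1 - 3x2.
from scipy.optimize import linprog

c = [-1, -3]                 # negated objective coefficients
A_ub = [[1, 1], [1, 2]]      # Constraint 1 and Constraint 2
b_ub = [6, 8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)       # optimal decision variables and maximal OF value
```

The solver reports the vertex (0, 4) with OF value 12, which is what the graphical method also finds.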
Example 1.2
A firm manufactures two products A and B.
To produce each product requires a certain amount of processing on each of
three machines I, II, III.
The processing times (hours) per unit production of A, B are as given in the
table:

       I      II     III
A      0.5    0.4    0.2
B      0.25   0.3    0.4

The total available production time of the machines I, II, III is 40 hours, 36
hours and 30 hours respectively, each week.
If the unit profit from A and B is $5 and $3 respectively, determine the
weekly production of A and B which will maximize the firm's profit.
Formulation:
Let x_1 be the number of items of A to produce per week.
Let x_2 be the number of items of B to produce per week.
Producing x_1 units of Product A consumes 0.5 x_1 hours on machine I and
contributes 5 x_1 towards profit. Producing x_2 items of Product B requires in
addition 0.25 x_2 hours on machine I and contributes 3 x_2 towards profit.
The following formulation seeks to maximize profit:

Maximize    5x_1 + 3x_2               (Objective Function)
subject to  0.5x_1 + 0.25x_2 <= 40    (machine I)
            0.4x_1 + 0.3x_2  <= 36    (machine II)
            0.2x_1 + 0.4x_2  <= 30    (machine III)
            x_1, x_2 >= 0             (Non-negativity)

This is an optimization problem in 2 non-negative decision variables x_1, x_2
(the unknowns) and 3 constraints (not counting the non-negativity constraints).
More generally, notice that each constraint row can be regarded as a resource
constraint. The solution to the LP in this case tells us how best to use
scarce resources. Examples of resources that often vary linearly with amounts
of production are manpower, materials and time.
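As a check on the formulation (not part of the original notes), the model can be handed to an LP solver; a sketch assuming SciPy is available:

```python
# Example 1.2: maximize 5x1 + 3x2 subject to the three machine-time constraints.
from scipy.optimize import linprog

c = [-5, -3]                 # negated unit profits (linprog minimizes)
A_ub = [[0.5, 0.25],         # machine I hours per unit of A, B
        [0.4, 0.3],          # machine II hours
        [0.2, 0.4]]          # machine III hours
b_ub = [40, 36, 30]          # weekly machine availability

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)       # weekly production plan and profit
```

The optimal plan is x_1 = 60 units of A and x_2 = 40 units of B, for a weekly profit of $420; machines I and II are then used to capacity while machine III has 2 hours of slack.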
Example 1.3 (The diet problem)
How to optimize the choice of n foods (e.g. animal feed) when each food has
some of each of m nutrients?
Suppose

a_ij = amount of the i-th nutrient in a unit of the j-th food,  i = 1,...,m; j = 1,...,n
r_i  = yearly requirement of the i-th nutrient,  i = 1,...,m
x_j  = yearly consumption of the j-th food,  j = 1,...,n
c_j  = cost per unit of the j-th food,  j = 1,...,n.

We seek the "best" yearly diet, represented by a vector x >= 0 that satisfies
the nutritional requirements

Ax >= r

and interpret "best" as least cost:

min   c^T x
s.t.  Ax >= r
      x >= 0
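A tiny numerical instance shows the mechanics. The data below (m = 2 nutrients, n = 3 foods) are invented purely for illustration; any real diet model would use measured values. Again SciPy is assumed available, and the >= constraints are negated to fit `linprog`'s <= convention.

```python
# A hypothetical diet problem instance (all numbers are made up).
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0, 0.0],    # nutrient 1 per unit of foods 1..3
              [1.0, 3.0, 2.0]])   # nutrient 2 per unit of foods 1..3
r = np.array([8.0, 12.0])         # yearly nutrient requirements
c = np.array([1.5, 2.0, 1.0])     # unit costs of the foods

# linprog expects A_ub x <= b_ub, so Ax >= r becomes -Ax <= -r
res = linprog(c, A_ub=-A, b_ub=-r, bounds=[(0, None)] * 3)
print(res.x, res.fun)             # a least-cost diet and its cost
```

The returned diet meets both nutrient requirements; this instance happens to have alternative optima, all with the same minimal cost.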
1.1 Standard Form
For an LP in standard form, all the constraints are equalities
(apart from non-negativity constraints).
Suppose there are m such equality constraints.
The LP can be a maximization (MAX) or a minimization (MIN) problem.
Let

x = (x_1, ..., x_n)^T be n non-negative real variables,
c^T = (c_1, c_2, ..., c_n) be a set of real (OF) coefficients,
A = (a_ij) be an m x n matrix of real coefficients,
b = (b_1, ..., b_m)^T be a non-negative real r.h.s. vector
(sometimes called the requirements vector).

The general LP in standard form with n variables and m constraints
(MINimization form) is
Minimize    c_1 x_1 + c_2 x_2 + ... + c_n x_n   ( = Sum_{j=1}^{n} c_j x_j )
subject to  a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
            a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
            ...
            a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m
and         x_1, x_2, ..., x_n >= 0
For mathematical convenience, note that
- b_i >= 0 for each i (as mentioned above),
- the rows of A will be assumed to be linearly independent.
The last condition (a technicality) ensures for m <= n that a set of m linearly
independent columns of A can be found (known as a basis of R^m).
Example 1.1 (contd.)
To convert this problem to standard form, we introduce two non-negative
slack variables s_1, s_2 and rewrite the set of constraints

x_1 + x_2  <= 6
x_1 + 2x_2 <= 8

as

x_1 + x_2  + s_1 = 6
x_1 + 2x_2 + s_2 = 8

which are equivalent since s_1, s_2 >= 0. Notice that the problem dimensions are
changed to m = 2, n = 4.
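A quick numerical check of this slack-variable conversion (a NumPy sketch, not part of the original notes): for any feasible x, the slacks s = b - Ax are non-negative, and the augmented system [A | I](x, s)^T = b recovers the equality form.

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([6.0, 8.0])
x = np.array([2.0, 3.0])                 # a feasible point: 2+3 <= 6, 2+6 <= 8

s = b - A @ x                            # slack values: s1 = 1, s2 = 0
A_std = np.hstack([A, np.eye(2)])        # standard-form matrix [A | I], m=2, n=4
assert np.all(s >= 0)                    # feasibility of x
assert np.allclose(A_std @ np.concatenate([x, s]), b)  # equalities hold
print(s)                                 # [1. 0.]
```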
1.2 Vector-matrix notation
We can write the LP (standard min/maximization form) concisely as

Min/max     c^T x
subject to  Ax = b          (SF)
            x >= 0

Note that x >= 0 is to be interpreted component-wise as each x_j >= 0.
Equivalently,

Min/max { c^T x | Ax = b, x >= 0 }

where
x = (x_1, ..., x_n)^T is a column vector,
c^T = (c_1, ..., c_n) is a conformable row-vector.

Note: In the subsequent notes we will not always adhere strictly (pedantically)
to bold face for matrices and vectors. Books also adopt different conventions.
Where confusion is unlikely we may also write x (the vector x) as a
row vector with or without a transpose sign, e.g. x = (1, 0, 3, 5) rather than
x^T. Usually vectors are in lower case, the exception being A_j to denote the
j-th column of the matrix A:
    ( a_11  a_12  ...  a_1n )            ( b_1 )
A = ( a_21  a_22  ...  a_2n )    and b = ( b_2 )
    (  ...             ...  )            ( ... )
    ( a_m1   ...  ...  a_mn )            ( b_m )
Assumptions
We suppose that m <= n; in fact the rank of A is m (full row rank).
=> The rows of A are linearly independent (no redundant constraints).
=> It is possible to choose (usually in many ways) a subset of m linearly
independent columns of A to form a basis

B = { A_j(1), A_j(2), ..., A_j(m) }

The matrix formed from these columns is called the basis matrix B.
1.3 Canonical form
In Example 1.1, the constraints are all in the same direction and the original
formulation may be written briefly in canonical maximization form

maximize    c^T x
subject to  Ax <= b         (CF1)
            x >= 0

where

x = ( x_1 ),  c^T = ( 1  3 ),  A = ( 1  1 ),  b = ( 6 )
    ( x_2 )                        ( 1  2 )       ( 8 )

The problem

minimize    c^T x
subject to  Ax >= b         (CF2)
            x >= 0

(cf. the diet problem) is said to be in canonical minimization form.
Notice the direction of the constraint inequalities is determined by whether
we have a MAX or a MIN problem. (Intuitively) When maximizing, remember
that we have a "ceiling"-type constraint and, when minimizing, a "floor"-type
constraint.
1.4 General LP problems
Any LP problem may be structured into either standard form (SF) or one of
the canonical forms (CF1), (CF2).
Example 1.4

minimize    x_1 - 2x_2 + 3x_3
subject to  x_1 + 2x_2 - x_3  <= 14    (Constraint 1)
            x_1 + 2x_2 - 4x_3 >= 12    (Constraint 2)
            x_1 - x_2 - x_3   = 2      (Constraint 3)
            x_1, x_2 u.r.s.
            x_3 >= 3

a) Convert the LP to standard form
Let x_1 = u_1 - v_1, x_2 = u_2 - v_2, x'_3 = x_3 - 3, with x'_3 >= 0 and
u_j, v_j >= 0 (j = 1, 2).
1. Introduce a slack variable s_1 into Constraint 1.
2. Introduce a surplus variable s_2 into Constraint 2.
This results in

minimize    u_1 - v_1 - 2u_2 + 2v_2 + 3x'_3   (+9)
subject to  u_1 - v_1 + 2u_2 - 2v_2 - x'_3 + s_1  = 17
            u_1 - v_1 + 2u_2 - 2v_2 - 4x'_3 - s_2 = 24
            u_1 - v_1 - u_2 + v_2 - x'_3          = 5
            u_1, v_1, u_2, v_2, x'_3, s_1, s_2 >= 0
b) Obtain the canonical minimization form
To reverse the inequality in Constraint 1 we multiply by -1.
Replace the equality a_3^T x = b_3 in Constraint 3 by a_3^T x >= b_3 and
a_3^T x <= b_3, then reverse the latter constraint by a sign change:

minimize    u_1 - v_1 - 2u_2 + 2v_2 + 3x'_3
subject to  -u_1 + v_1 - 2u_2 + 2v_2 + x'_3  >= -17
             u_1 - v_1 + 2u_2 - 2v_2 - 4x'_3 >= 24
             u_1 - v_1 - u_2 + v_2 - x'_3    >= 5
            -u_1 + v_1 + u_2 - v_2 + x'_3    >= -5
             u_1, v_1, u_2, v_2, x'_3 >= 0

c) Convert the problem into a maximization
Change the objective function (OF) to:

maximize -u_1 + v_1 + 2u_2 - 2v_2 - 3x'_3
2. Basic solutions and extreme points
2.1 Basic solutions
The constraints of an LP in standard form are an underdetermined linear
equation system

A x = b          (2.1)

with A (m x n), x (n x 1), b (m x 1) and m < n. There are fewer equations than
unknowns => an infinite number of solutions.
Definition
A solution x to (2.1) corresponding to some basis matrix B, obtained
by setting the n - m remaining components of x to zero and solving for the
remaining m variables, is known as a basic solution.
If, in addition, x >= 0, such a solution is said to be feasible for the LP.
If we assume (w.l.o.g.) that the entries of A, x and b are integers, we can
bound from above the absolute value of the components of any basic solution.
Lemma
Let x = (x_1, ..., x_n) be a basic solution. Then

|x_j| <= m! a^{m-1} B

where
a = max_{i,j} |a_ij|
B = max_{i=1,...,m} |b_i|

Proof
The result is trivial if x_j is non-basic, since then x_j = 0.
For x_j a basic variable, its value is a sum of products

x_j = Sum_{i=1}^{m} (B^{-1})_{ji} b_i

of elements of B^{-1} multiplied by elements of b. Now

B^{-1} = Adj B / det B

and |det B| is a non-zero integer, so the denominator is >= 1 in modulus.
Adj B is the transpose of the matrix of cofactors. Each cofactor is the
determinant of an (m-1) x (m-1) matrix, i.e. the sum of (m-1)! products of
m - 1 elements of A. Therefore each element of B^{-1} is bounded in modulus by

(m-1)! a^{m-1}

Because each x_j is the sum of m elements of B^{-1} multiplied by elements
of b, we have

|x_j| <= m! a^{m-1} B

as required.
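The bound can be probed empirically on random integer data. A NumPy sketch (it checks the inequality on sampled bases, it does not replace the proof):

```python
# Empirical check of |x_j| <= m! * a^(m-1) * B for basic solutions of
# integer systems (a = max |a_ij|, B = max |b_i|).
import math
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
checked = 0
for _ in range(200):
    A = rng.integers(-5, 6, size=(m, n))
    b = rng.integers(-5, 6, size=m)
    cols = rng.permutation(n)[:m]          # candidate basis columns
    B = A[:, cols].astype(float)
    if abs(round(np.linalg.det(B))) < 1:   # integer det; 0 means no basis
        continue
    x_B = np.linalg.solve(B, b)            # basic variables of the basic solution
    alpha = np.abs(A).max()
    beta = np.abs(b).max()
    bound = math.factorial(m) * alpha ** (m - 1) * beta
    assert np.all(np.abs(x_B) <= bound + 1e-9)
    checked += 1
print(checked, "bases checked, bound never violated")
```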
Example 2.1
Consider the LP

min 2x_2 + x_4 + 5x_7
subject to
 x_1 + x_2 + x_3 + x_4 = 4
 x_1 + x_5             = 2
 x_3 + x_6             = 3
 3x_2 + x_3 + x_7      = 6
 x_1, x_2, x_3, x_4, x_5, x_6, x_7 >= 0

One basis is B = {A_4, A_5, A_6, A_7}, which corresponds to the matrix B = I;
the corresponding basic solution is

x = (0, 0, 0, 4, 2, 3, 6).

Another basis corresponds to B' = {A_2, A_5, A_6, A_7}, with basic solution

x' = (0, 4, 0, 0, 2, 3, -6).

Note that x' is not a feasible solution, since x'_7 = -6 < 0.
Remark: The basic feasible solutions (BFSs) of an LP are precisely the
vertices or extreme points (EPs) of the feasible region. We will show that the
optimum (if it exists) is achieved at a vertex.
Let B be an m x m non-singular submatrix of A (m columns of A).
Let x_B denote the components of x corresponding to B and x_N denote the
remaining n - m (zero) components. For convenience of notation we may reorder
the columns of A so that the first m columns relate to B and the remaining
columns to an m x (n - m) submatrix N.
Then

Ax = [ B | N ] ( x_B )  = B x_B + N x_N = b
               ( x_N )

Since x_N = 0 for this basic solution x, we obtain

B x_B = b
x_B = B^{-1} b          (2.2)
Definition:
A BFS (and the corresponding vertex) is called degenerate if it contains more
than n - m zeros,
i.e. some component of x_B is zero <=> the basic solution is degenerate.
Lemma
If two distinct bases correspond to the same BFS x, then x is degenerate.
Proof
Suppose that B and B' both determine the same BFS x. Then x has zeros
in all the n - m columns not in B. Some such column must belong to B', so x
is degenerate.
Example 2.2
Determine all the basic solutions of the system

x_1 + x_2 <= 6
x_2 <= 3
x_1, x_2 >= 0

Solution
Introduce slack variables s_1, s_2 >= 0 to write the system in standard form

x_1 + x_2 + s_1 = 6
x_2 + s_2 = 3

or in matrix form (with m = 2, n = 4)

( 1 1 1 0 ) ( x_1 )   ( 6 )
( 0 1 0 1 ) ( x_2 ) = ( 3 )
            ( s_1 )
            ( s_2 )

i.e. A x = b with A (2 x 4), x (4 x 1), b (2 x 1).
Set n - m = 2 variables to zero to obtain a basic solution if the resulting
B-matrix is invertible (so that the columns of B form a basis, or minimal
spanning set, of R^m).
1. Set s_1 = s_2 = 0. Then

B = ( 1 1 )   and   B^{-1} = ( 1 -1 )
    ( 0 1 )                  ( 0  1 )

x_B = B^{-1} b = ( 1 -1 )( 6 ) = ( 3 ) >= 0
                 ( 0  1 )( 3 )   ( 3 )

x = (x_B^T, x_N^T)^T = (3, 3, 0, 0)^T is a BFS.

2. Set x_2 = s_1 = 0. Then

B = ( 1 0 ) = I_2 = B^{-1}
    ( 0 1 )

x_B = B^{-1} b = b = ( 6 ) >= 0,   so x = (6, 0, 0, 3)^T is a BFS.
                     ( 3 )

Continue to examine a total of C(4,2) = 4!/(2! 2!) = 6 selections of basic
variables. We obtain (Ex.) the four BFSs

x^(1) = (3, 3, 0, 0)^T
x^(2) = (6, 0, 0, 3)^T
x^(3) = (0, 3, 3, 0)^T
x^(4) = (0, 0, 6, 3)^T

Ex. The corners or vertices of the feasible region in (x_1, x_2) space are
(0, 0), (0, 3), (6, 0), (3, 3).
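The case-by-case enumeration can be automated. A NumPy sketch that examines all C(4,2) column choices for this system:

```python
# Enumerate the basic solutions of Example 2.2 and keep the feasible ones.
from itertools import combinations
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])
m, n = A.shape

bfs = []
for cols in combinations(range(n), m):     # all 6 choices of basic variables
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:      # skip: columns do not form a basis
        continue
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)  # non-basic variables stay at 0
    if np.all(x >= 0):                     # basic AND feasible
        bfs.append(tuple(float(v) for v in x))

print(sorted(bfs))                         # the four BFSs found above
```

Of the six selections, one fails to give a basis (singular B) and one gives a basic but infeasible solution, leaving the four BFSs.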
Theorem 1 (Existence of a Basic Feasible Solution)
Given an LP in standard form where A is (m x n) of rank m:
i) If there is a feasible solution, there is a BFS.
ii) If the LP has an optimal solution, there is an optimal BFS.
Proof
i) Let A be partitioned by columns as (A_1 | A_2 | ... | A_n),
i.e. A_j denotes the j-th column of A (an m-vector).
Suppose that x = (x_1, x_2, ..., x_n)^T is a feasible solution. Then

Ax = x_1 A_1 + x_2 A_2 + ... + x_n A_n = b

where x_j >= 0 for each j. Let x have p strictly positive components and
renumber the columns of A so these are the first p components x_1, x_2, ..., x_p.
Then

Ax = x_1 A_1 + x_2 A_2 + ... + x_p A_p = b     (1)

Case 1
A_1, ..., A_p are linearly independent. Then p <= m.
If p = m then A_1, ..., A_m form a basis, i.e. they span R^m.
If p < m we can add additional columns from A to complete a basis.
Assigning the value zero to the corresponding variables x_{p+1}, ..., x_m results
in a (degenerate) BFS.
Case 2
A_1, ..., A_p are linearly dependent.
By definition, there is a non-trivial linear combination of the A_j's summing
to zero, i.e.

y_1 A_1 + y_2 A_2 + ... + y_p A_p = 0     (2)

where some y_j > 0 can be assumed.
Eq. (1) - e * Eq. (2) gives

(x_1 - e y_1) A_1 + (x_2 - e y_2) A_2 + ... + (x_p - e y_p) A_p = b     (3)

which is true for any e.
Let y^T = (y_1, y_2, ..., y_p, 0, ..., 0).
The vector x - e y satisfies (2.1). Consider e >= 0, i.e. increasing from a
value of zero, and let

e = min { x_j / y_j : y_j > 0 }

be the minimum ratio over positive components y_j.
For this value of e, at least one coefficient in (3) is zero and x - e y has at
most p - 1 strictly positive coefficients.
Repeating this process as necessary, we eventually obtain a set of linearly
independent columns A_j. We are thus back to Case 1 and conclude that
we can construct a BFS from a given feasible solution.
ii) Let x^T = (x_1, x_2, ..., x_n) be an optimal (and hence feasible) solution to
the LP, with strictly positive components x_1, ..., x_p (after reordering).
Consider the same two cases as before.
Case 1
(A_1, ..., A_p linearly independent.) If p < m, the procedure described
before results in an optimal BFS whose OF value Sum c_j x_j is unchanged
through the addition of components with value x_j = 0.
Case 2
(A_1, ..., A_p linearly dependent.) The value of the solution x - e y is

c^T (x - e y) = c^T x - e c^T y     (4)

For e sufficiently small in modulus (of either sign), x - e y is a feasible
solution (all components >= 0) of value c^T x - e c^T y. However, because x is
optimal, the value of (4) is not permitted to be less than c^T x for either sign
of e (for minimization). Therefore c^T y = 0, and (4) does not change in value,
though the number of strictly positive components of x is reduced.
Example 2.3 (illustrating the fundamental theorem)
Consider the following LP in standard form:

Maximize 80x_1 + 60x_2
s.t.   x_1 +   x_2 + s_1 = 100
      2x_1 +   x_2 + s_2 = 150
      5x_1 + 10x_2 + s_3 = 800
      x_j >= 0 (j = 1, 2),  s_i >= 0 (i = 1, 2, 3)

1. Identify x and the constants A, b, c for this problem.
2. Construct a BFS from the given feasible solution

x^T = (x_1, x_2, s_1, s_2, s_3) = (30, 65, 5, 25, 0)

with value 6300.
Let y^T = (y_1, y_2, y_3, y_4, 0) and seek y such that Ay = 0, or

 y_1 +  y_2 + y_3 = 0
2y_1 +  y_2 + y_4 = 0
5y_1 + 10y_2      = 0

With 3 equations and 4 unknowns, there are an infinite number of possible
choices, e.g. let y^T = (-2, 1, 1, 3, 0) and note that c^T y = -100 < 0.

x - e y = (30 + 2e, 65 - e, 5 - e, 25 - 3e, 0)^T

The minimum ratio over the positive y's is

min { 65/1, 5/1, 25/3 } = 5

Let x' = x - 5y = (40, 60, 0, 10, 0)^T with value 6300 - 5(-100) = 6800.
The columns of A corresponding to x_1, x_2, s_2 form the basis matrix

B = ( 1  1  0 )
    ( 2  1  1 )
    ( 5 10  0 )

which is invertible (verify e.g. |B| != 0). The term basis refers to the vectors
A_1, A_2, A_4, which span R^3 (in general R^m), the space of the columns of A.
Note: Some books refer to B simply as the basis.
=> x' = (40, 60, 0, 10, 0)^T is a BFS.
Ex. Draw the feasible region S and show that x' is a corner of S.
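The construction above can be scripted directly. A NumPy sketch following the minimum-ratio step of Theorem 1:

```python
# Constructing a BFS from the given feasible solution of Example 2.3.
import numpy as np

A = np.array([[1.0,  1.0, 1.0, 0.0, 0.0],
              [2.0,  1.0, 0.0, 1.0, 0.0],
              [5.0, 10.0, 0.0, 0.0, 1.0]])
b = np.array([100.0, 150.0, 800.0])
c = np.array([80.0, 60.0, 0.0, 0.0, 0.0])

x = np.array([30.0, 65.0, 5.0, 25.0, 0.0])   # given feasible solution, value 6300
y = np.array([-2.0, 1.0, 1.0, 3.0, 0.0])     # satisfies Ay = 0, with c.y = -100
assert np.allclose(A @ x, b) and np.allclose(A @ y, 0.0)

eps = min(x[j] / y[j] for j in range(len(y)) if y[j] > 0)  # minimum ratio
x_new = x - eps * y                          # one positive component is driven to 0
print(eps, x_new, c @ x_new)                 # 5.0 [40. 60. 0. 10. 0.] 6800.0
```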
2.2 Geometry of LP (Extreme points)
Regarding the vector x as a point in n-dimensional space R^n provides an
alternative geometric view and further insight into the solution of LP problems.
Convex sets
Let p, q be points of R^n. The line segment PQ consists of all points
L p + (1 - L) q where 0 < L < 1.
{Such points are termed convex linear combinations of p and q. More
generally, a convex linear combination of p_1, p_2, ..., p_k is
Sum_{i=1}^{k} L_i p_i with L_i >= 0 and Sum_{i=1}^{k} L_i = 1.}
Definition
A set K in R^n is convex if, for x_1, x_2 in K and for every 0 < L < 1, the
point L x_1 + (1 - L) x_2 belongs to K.
Result
The feasible region (FR) of an LP in standard form

F = { x | Ax = b, x >= 0 }

is convex.
Proof
Let x_1, x_2 be in F. Consider x_0 = L x_1 + (1 - L) x_2 for 0 < L < 1.

A x_0 = A [L x_1 + (1 - L) x_2]
      = L A x_1 + (1 - L) A x_2
      = L b + (1 - L) b = b

so x_0 is a solution of Ax = b.
Also 0 < L < 1 and x_1, x_2 >= 0 imply L x_1 + (1 - L) x_2 >= 0.
=> x_0 is a feasible solution of the system Ax = b, i.e. x_0 is in F.
Some further definitions useful in understanding the geometric nature of an
LP are as follows:
- The region to one side of an inequality, { x in R^n | a^T x <= b }, is a
  (closed) halfspace.
- The region { x in R^n | a^T x = b } is a hyperplane
  [an (n - 1)-dimensional region, a subspace if b = 0].
- A polyhedral set or polyhedron is the intersection of a finite number of
  halfspaces.
A bounded polyhedron (one that doesn't extend to infinity in any direction)
is termed a polytope.
Result
The FR of an LP containing a mixture of equality and inequality constraints
is also a polyhedron.
Proof
Observe that Ax = b can be written as Ax <= b and Ax >= b.
The extreme points (EPs) or vertices of a polyhedron play a very important
part in LP because, if an LP has a finite optimal solution, it is achieved at a
vertex.
Definition
An extreme point of a convex set K is a point which cannot be expressed
as a convex linear combination of two distinct points of K,
i.e. x in K is an extreme point if and only if there do not exist y, z in K
(y != z) and 0 < L < 1 such that x = L y + (1 - L) z.
Theorem 2 (Equivalence of EPs and BFSs)
For an LP in standard form we show that i) every BFS is an EP, and
ii) every EP is a BFS.
Proof
i) Let x be a BFS of the LP in standard form. Suppose (w.l.o.g.) that the
first p components x_1, ..., x_p are strictly positive and x_j = 0 for j > p.
Then Ax = b reduces to

x_1 A_1 + x_2 A_2 + ... + x_p A_p = b

where the A_j are linearly independent.
Suppose (for contradiction) that x is not an extreme point. Then there are
two distinct points y, z in F and 0 < L < 1 such that x = L y + (1 - L) z.
For i > p,

x_i = 0 = L y_i + (1 - L) z_i

so y_i = z_i = 0 (since y_i, z_i >= 0 because y, z are in F, and L, 1 - L > 0).
Therefore y, z have at most p non-zero components, and

y_1 A_1 + y_2 A_2 + ... + y_p A_p = b
z_1 A_1 + z_2 A_2 + ... + z_p A_p = b

Therefore

(y_1 - z_1) A_1 + (y_2 - z_2) A_2 + ... + (y_p - z_p) A_p = 0

with not all coefficients zero (because y != z). This contradicts our assumption
that the A_j are linearly independent.
ii) Let x be an extreme point of F with precisely p non-zero components, so
that

x_1 A_1 + x_2 A_2 + ... + x_p A_p = b

(w.l.o.g.) with x_1, x_2, ..., x_p > 0 and x_i = 0 (i > p).
Suppose (for contradiction) that x is not a BFS, i.e. the columns
A_1, ..., A_p are linearly dependent:

y_1 A_1 + y_2 A_2 + ... + y_p A_p = 0

for some coefficients y_1, ..., y_p not all zero.
Define the n-vector y = (y_1, y_2, ..., y_p, 0, ..., 0)^T, so that Ay = 0. We can
find e sufficiently small so that x^(1) = x + e y >= 0 and x^(2) = x - e y >= 0.
[NB x^(1) != x^(2) because y != 0.] Now x^(1) and x^(2) belong to F because

A x^(1) = A (x + e y)
        = Ax + e Ay
        = Ax
        = b

and similarly for x^(2). Since

x = (1/2) (x^(1) + x^(2))

x can be written as a convex linear combination of distinct points of F,
contradicting our assumption that x is an EP of F.
Consequence
We can re-phrase the fundamental theorem of LP in terms of extreme points:
1. If the feasible region F is non-empty, it has at least one EP.
2. If the LP has a finite optimal solution (always true if F is bounded), it
has an optimal solution which is an EP of F.
Representation of convex polytopes
Any point in a convex polytope (i.e. a bounded polyhedron) can be represented
as a convex linear combination of its extreme points. This enables an
alternative proof of the fundamental theorem.
Note
S has a finite number of extreme points, since there are at most
C(n, m) = n!/(m!(n - m)!) sets of basic variables.
Theorem 3 (Fundamental Theorem restated)
A linear objective function c^T x achieves its minimum over a convex polytope
S (bounded polyhedron) at an extreme point of S.
Proof
Let x^(1), x^(2), ..., x^(k) be the set of EPs of S. Any x in S has the
representation

x = L_1 x^(1) + L_2 x^(2) + ... + L_k x^(k)

for some set of coefficients L_i with L_i >= 0 for each i and
Sum_{i=1}^{k} L_i = 1, and

c^T x = L_1 c^T x^(1) + L_2 c^T x^(2) + ... + L_k c^T x^(k)
      = L_1 z_1 + L_2 z_2 + ... + L_k z_k, say.

Let z_0 = min { z_i : i = 1, ..., k } be the minimum OF value over the
vertices. Then z_i >= z_0 for each i, giving

c^T x >= L_1 z_0 + L_2 z_0 + ... + L_k z_0 = (L_1 + L_2 + ... + L_k) z_0 = z_0

If x is optimal, c^T x <= z_0, so c^T x = z_0, showing that the optimal value of
the LP is achieved at a vertex with minimum value z_0.
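A small numerical illustration of this argument (not part of the original notes), using the vertices of Example 2.2 and an arbitrary illustrative cost vector: the OF value at any convex combination of the vertices never falls below the vertex minimum z_0.

```python
import numpy as np

# vertices of the feasible region of Example 2.2 in (x1, x2) space
verts = np.array([[0.0, 0.0], [6.0, 0.0], [3.0, 3.0], [0.0, 3.0]])
c = np.array([1.0, -2.0])            # an arbitrary cost vector (illustrative)

z = verts @ c                        # OF value z_i at each vertex
z0 = z.min()                         # minimum vertex value

rng = np.random.default_rng(1)
for _ in range(1000):
    lam = rng.random(len(verts))
    lam /= lam.sum()                 # convex-combination weights, summing to 1
    x = lam @ verts                  # a point of the polytope
    assert c @ x >= z0 - 1e-9        # c^T x >= z0, as the theorem asserts
print(z0)                            # -6.0, attained at the vertex (0, 3)
```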