
Linear Programming Notes I:

Introduction and Problem Formulation


1 Introduction to Operations Research
Economics 172 is a two quarter sequence in Operations Research. Management
Science majors are required to take the course. I do not know what Management
Science is. Most of you picked the major. I assume that you either know what
it is or do not care. You may not know what Operations Research is. I am
going to tell you, but it will leave you disappointed.
Operations Research is research into operations. The field began during
the Second World War. The military needed to solve a lot of different kinds
of resource allocation problems. A prototypical problem was a form of the
transportation problem that we’ll study later in the course. In this problem
the military had supplies available in several different locations (ammunition
factories) and several different locations that needed the supplies (battle
fronts). It knew how much it cost to ship supplies from any factory to any front,
how much was produced at each factory, and how much was needed at each
front. It wanted to figure out how to minimize the cost of shipping the
supplies to the various locations while meeting two types of constraints (that
you do not send more ammunition from a factory than is available and that
you send as much as necessary to each battle front). Many
other resource allocation problems arose in the planning of military operations.
Operations Research was a field of study that tried to come up with practical
solutions to these problems.
People need to allocate resources even in peacetime. Economics is a discipline
devoted to the study of methods to allocate scarce resources. It is natural
to study the methods of Operations Research in an economics class. Some
of the methods developed have direct relevance to decision making. Courses
in Operations Research are therefore traditional parts of undergraduate and
graduate business programs.
Operations Research as a discipline involves several different things. First,
there is the identification of real world situations that lend themselves to for-
mulation as mathematical optimization problems. Second, there is a process of
translating these problems into mathematical language. Third, there is the de-
velopment of mathematics that explains the general structure of the mathemat-
ical problems that arise in the second stage. Fourth, there is the development
of methods for solving these problems.
The Operations Research sequence introduces some of the basic mathemat-
ical techniques for describing and solving problems (steps 3 and 4 above). It
provides practice in the formulation of problems (steps 1 and 2 above).
A mathematical programming problem is an optimization problem subject
to constraints. In the general problem, you are given a function f and a set S.
You are asked to find a solution to the problem:

max f (x) subject to x ∈ S. (1)

A linear programming problem is a mathematical programming problem in
which the function f is linear and the set S is described using linear inequalities
or equations. It turns out that lots of interesting problems can be described as
linear programming problems; that there is an algorithm that solves linear
programming problems efficiently and exactly; and that the solutions to linear
programming problems provide interesting economic information. Economics
172A concentrates on these problems.
Economics 172B primarily studies non-linear programming, that is, prob-
lems in which the function f is non-linear and the set S is described using
non-linear inequalities or equations. This theory uses calculus techniques.
In the Economics 172 sequence, the word “programming” has nothing to
do with computer programming (although it is true that there are computer
programs that can be used to solve mathematical programming problems). This
terminology is confusing, but it is standard.

2 Introduction to Linear Programming


Economics 172A studies linear programming. So you need to know what a linear
function is. The function f of n variables x = (x1 , . . . , xn ) is linear if there are
constants a1 , . . . , an such that

f (x) = a1 x1 + . . . + an xn . (2)

This expression is also written:


f (x) = ∑_{i=1}^{n} ai xi = a · x,    (3)

where a = (a1 , . . . , an ).
Two properties characterize linear functions: additivity and constant returns
to scale. Additivity means that f (x + y) = f (x) + f (y). Constant returns to
scale means that f (cx) = cf (x) for any constant c. These properties make sense
sometimes. Other times they are silly. Suppose f (z) is how much it costs you
to buy z. Remember, z has n components, so you can think of z as a list. The
ith entry in the list, zi , tells you the amount of good i that you are buying.
Additivity says that if you first buy x and then buy y, it costs the same amount
as if you bought x + y at one time. Constant returns to scale says that buying
half as much costs half as much and buying twice as much costs twice as much.
Provided that there are no specials (“buy two, get one free”), most of what you
buy at the grocery store satisfies these properties. It certainly describes how
you pay for gasoline at the pump. On the other hand, linearity does not hold in

milk prices: a gallon container costs less than two half gallon containers. Still,
one of the most basic linear functions that we deal with is the one that assigns
value to lists of goods. If pi is the price per unit of good i, then the linear
function f (x) = p · x gives the cost of buying the ‘bundle’ x consisting of x1
units of good 1, x2 units of good 2, and so on.
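
These two properties are easy to check numerically. Here is a minimal sketch (the prices and quantities are invented for illustration, and numpy is assumed to be available):

```python
import numpy as np

# Hypothetical prices for n = 3 goods and two bundles.
p = np.array([2.50, 1.75, 4.00])   # price per unit of each good
x = np.array([2.0, 4.0, 1.0])      # units of each good in bundle x
y = np.array([1.0, 0.0, 3.0])      # units of each good in bundle y

print(p @ x)   # f(x) = p . x = 5.00 + 7.00 + 4.00 = 16.00

# The two properties that characterize linear functions:
assert np.isclose(p @ (x + y), p @ x + p @ y)   # additivity
assert np.isclose(p @ (3 * x), 3 * (p @ x))     # constant returns to scale
```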
Linearity is usually not a very good assumption for utility functions. If
f (z) represents the utility (loosely, happiness) you get from having z, then both
additivity and constant returns to scale are likely to fail. For example, if x
represents having a CD player (and nothing else), while y represents having a
CD of your favorite music, then presumably f (x + y) > f (x) + f (y) and also
2f (x+y) > f (2(x+y)). The first inequality says that having both the CD player
and the CD is better than the sum of utilities available from having exactly one.
(You could argue that having only the CD player or only the CD is worthless.)
The second inequality says that having twice the utility from both CD player
and CD is better than having the utility of two CD players and two copies of the
CD. (You could argue that the second copy of the disk and the second player
is useless.) In economics it is typical to assume diminishing marginal utility.
In our context that is just a fancy way of saying that doubling what you have
does less than double your utility (the second $100,000,000 does not generate
as much additional utility as the first $100,000,000.)
The linearity assumption does not apply to production processes that have
fixed costs (the first unit costs much more than subsequent units) or capacity
constraints. It does not apply to situations in which ‘units’ are not perfectly di-
visible (that is, the components of x are assumed to measure continuous quantities,
not discrete counts such as numbers of people). Divisibility is a standard simplifying assumption.
The point is that linearity is an assumption. You should reflect on whether
it is a reasonable assumption in the applications that arise during the quarter.
Now return to (1). It is time to get a better understanding of what a
mathematical programming problem is. The next few paragraphs will contain
several really important definitions. You’ll hear them over and over again.
S is your constraint set or feasible set. Maybe it is the different combina-
tions of things that you can afford to buy. Maybe it is the different combinations
of things that you have the available raw materials to manufacture. In any
event, it is what keeps you from doing whatever you want. The function f
is your objective function. It is what you are trying to optimize (optimize
means either minimize or maximize). It is possible that the set S is empty.
If this is true, then your problem is infeasible. You can’t solve it. This is
a perfectly reasonable mathematical possibility. Economically, it means that
your constraints are inconsistent. You will see examples soon enough. If S is
not empty, then the problem is feasible.
What does it mean to solve a mathematical programming problem? A so-
lution to (1) is a special value x∗ that has two properties:
1. Feasibility. x∗ ∈ S.
2. Optimality. If x ∈ S, then f (x∗ ) ≥ f (x).

That is, a solution must satisfy the constraints of the problem and, among all
things that satisfy the constraints, yield the highest objective function value. If
x∗ is a solution to (1), then f (x∗ ) is called the optimal value (or sometimes just
value) of the problem. Not all problems have solutions (for example, infeasible
problems have no solution). Problems may have more than one solution. (There
may be two different ways to solve the problem.) If a problem has a solution,
then the value must be unique (otherwise the lower number can’t be the value).
Our problems will turn out to fall into one of three categories. They will
either be infeasible or they will have a solution or they will be unbounded. A
problem is unbounded if it is possible to make the objective function arbitrarily
large. In symbols, this means that for any M , there exists an x ∈ S such that
f (x) > M . In words, a problem is unbounded if for any target value of the
objective function (M ) it is possible to find a way to make f even bigger than
M using a feasible point x.
These definitions apply to any problem like (1). The course restricts at-
tention to linear programming problems. A linear programming problem is a
mathematical programming problem in which f is linear and the set S is de-
scribed by linear inequalities or equations. There is a standard form for writing
linear programming problems (LPs).

max c · x subject to Ax ≤ b and x ≥ 0. (4)

In this formulation, c = (c1 , . . . , cn ), b = (b1 , . . . , bm ), 0 denotes an n dimen-
sional list of zeros, and A is a matrix with m rows and n columns (an m × n
matrix); the entry of A in row i and column j is aij . In this basic problem, the
given data are c (the coefficients of the variables in the objective function), b
(the resources constraints), and A (the technology). In order to formulate the
problem, you must know these things. The problem that I have described has n
variables (the components of x) and m + n constraints. The first m constraints
come from the set of inequalities summarized by Ax ≤ b. The remaining n
constraints are the non-negativity constraints on the components of x. The
notation Ax ≤ b is short hand for the system of m inequalities. A representative
inequality (the ith inequality) takes the form
∑_{j=1}^{n} aij xj ≤ bi .

The objective function and the constraints in the problem are all linear. In
principle, the objective in a linear programming problem can be to maximize
or to minimize; the constraints can be written in the form of equations or
inequalities of either direction; and non-negativity constraints need be present
for only some (or none) of the variables. It turns out that any linear programming
problem can be written in the standard form above. I’ll say more about that
later. At this point, note only that (4) describes the set of problems we will
study.
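
To see what solving an instance of (4) looks like in practice, here is a minimal sketch using scipy's linprog routine on made-up data (scipy is one solver among many; the numbers are purely illustrative). linprog minimizes, so we maximize c · x by minimizing −c · x; its default variable bounds already impose x ≥ 0.

```python
import numpy as np
from scipy.optimize import linprog

# A made-up instance of (4): n = 2 variables, m = 2 constraints.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([14.0, 12.0])

# Maximize c.x subject to Ax <= b and x >= 0 (the default bounds).
res = linprog(-c, A_ub=A, b_ub=b)
print(res.x, -res.fun)   # -> x = [2. 6.] with optimal value 36.0
```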

Now I can comment on the contents of the course outline. The first topic
is problem formulation. This is the process of taking a situation described in
words and translating it into a mathematical problem in the form (4). This
process probably represents the most likely application you might make of the
techniques of the class in the “real world.” The classroom is not the real world.
You will see rather contrived examples. During the first week of the class, I will
describe possible linear programming problems and formulate a couple slowly.
There will be formulations throughout the class. My experience is that students
have trouble formulating problems. You might find that the first topic is the
most challenging part of the course.
When there are only two variables, it is possible to solve linear programming
problems graphically. The second topic shows you how to do this. Graphical
solution is easy and illustrates most of the basic ideas about solutions of linear
programming problems. The problem is that most problems involve more than
two variables and graphical methods do not apply.
Algorithms exist that can solve any linear programming problem. These
algorithms are widely used in industry. The oldest and still most widely used
algorithm is the simplex algorithm. Versions of the algorithms are available as
part of common spreadsheet programs. Since your computer already knows the
algorithm and can do computations more easily than you can, it makes no sense
to teach you the entire procedure. Still, the essentials of the simplex algorithm
are straightforward and instructive. Knowing how the algorithm works is useful
on its own and also helps you interpret solutions provided by computers. I will
spend some time teaching you a bit about the algorithm in Topic 3.
The fourth topic is the heart of the course. It turns out that when you solve
a linear programming problem you automatically solve another linear program-
ming problem (called the dual of the original problem). The theory of duality is
beautiful and interesting (to the mathematically inclined). It also provides truly
important economic information about solutions to linear programming prob-
lems. Sensitivity Analysis refers to the study of what happens to the solution to
a linear programming problem when one changes the problem (by varying the
objective function or the resource constraints). There is a lot to say here. We
will say some things theoretically. Other things we will illustrate using solutions
to problems obtained by the computer.
Game Theory is a big topic (there is an entire undergraduate course devoted
to it). It is a mathematical theory of strategic interaction. Zero-sum games are
a special class of game that includes most of the things called games by normal
people (chess, poker, tic-tac-toe) and generally situations where players have
completely opposed interests. It turns out that there is an intimate relationship
between zero-sum games and linear programming. I will tell you about it. (I
should warn you that game theory rarely provides practical advice on how to
play a game.)
The final topic covers a special class of linear programming problem. This
problem has special structure. It provides a useful way to introduce integer
linear programming (that is, linear programming problems with the additional
restriction that all variables must be whole numbers).

3 Introduction to Problem Formulations
Problem formulation is the most important part of an Operations Research course
for a Management Science major. When you are the boss, you’ll hire a geeky
engineer to do some basic math and write software. You’ll earn big bucks by
identifying the important problems and translating them from a verbal identifi-
cation to a mathematical form. The engineer will then solve the mathematical
problem. You will interpret the solution and put it into practice. It is impor-
tant for you to know enough about the basic mathematics for you to be able
to frame questions that the engineer might be able to answer and to be able to
judge whether the answers provided are sensible. Formulation, however, is key.
Unfortunately, I have little useful to say on the topic. In order to formulate
problems, you need to be able to understand symbols, you need common sense,
and you need practice. I am not aware of a mechanical series of steps you can
take in order to complete a formulation.
Now I will go through a particular (and standard) linear programming
problem and formulate it.
The problem is called the Diet Problem. Here is the story.
You run a small institution (prison, junior high school, third world country).
People work in your institution and you must feed them. Your job is to meet
their basic requirements for nutrients at minimum cost. In order to do this, you
need to know several things. You must know what foods are available and the
cost of each food. You must also know which nutrients are necessary and the
nutritional content of each of the foods. With this information you can figure out
how much any combination of food costs and you can figure out the nutritional
content of any combination of food. You can decide which combinations of food
are sufficient to meet the nutritional requirements and then pick the cheapest
combination that meets the nutritional requirement. (Perhaps the story makes
more sense if you imagine that your job is to feed the animals on your farm.)
So that is the basic verbal story. It has a surface plausibility. That is, you
can imagine someone wanting to find cheap ways to feed people. It is a bit
bizarre because it contains no mention of what the foods taste like. The story
does not place restrictions on food (for example, not too much salt, sugar, or fat;
or no meat), although these restrictions can be included without much trouble.
In order to formulate the problem as a linear programming problem, we need
notation to describe the given data. This information typically is given to you
in the statement of a formulation problem. Assume that there are n different
kinds of food. The price per unit of the jth food is pj . Assume that there are
m different nutrients. The nutritional requirement of nutrient i is ci . Finally,
let aij be the amount of the ith nutrient in one unit of the jth food.
Let me repeat this information less abstractly. The n different foods could
be things like lettuce, hamburger, potatoes, oranges, pizza, and so on. When I
talk about the jth food, I mean one of these (maybe I list all available foods in
alphabetical order and number them 1 through n). pj is the unit price of food j.
So, p1 might be the price of a head of lettuce; p2 might be the price of a pound
of hamburger; and so on. The m different nutrients could be things like vitamin

C, iron, niacin, and so on. ci is the daily minimum requirement of nutrient
i. The units of these things are weird (I think that Vitamin E is measured in
“International Units,” other nutrients are measured in grams). This does not
matter, as long as you can figure out how much of each nutrient you can find
in each food. That is where the aij comes in. Suppose that there are .5 grams
of niacin in a head of lettuce. If lettuce is food 1 and niacin is nutrient 5, then
this means that a51 = .5.
Now we have a description of the problem in words and a description of the
basic data of the problem. Notice that you (as the manager of the institution)
could find out the data. You look up food prices at the grocery store. You
consult a nutritionist to figure out the entries in the matrix A. You check
government standards to figure out the nutritional requirements. Your problem
is to figure out what to buy. In order to formulate this as a mathematical
problem, you need to invent a name for what you are looking for.
Step 1: Identify Variables.
You are looking for amounts of food. Therefore, your variables are quantities
of each of the n foods. These are unknowns and need names. Let xj be the
number of units of food j purchased. You want to find x = (x1 , . . . , xn ).
Now you need to use this notation to figure out the objective function and
the constraints of the problem.
Step 2: Write Down the Objective Function.
The objective is to minimize the cost of the food that you buy. If you buy x
how much will it cost? Break it down. Buying x means that you buy x1 units of
the first food, x2 units of the second food, and so on. How much do you spend
on the first food? It costs p1 per unit. Therefore you spend p1 x1 on the first
food. How much do you spend in total? You just add up what you spend on
each of the foods. This quantity is:

p1 x1 + · · · + pj xj + · · · + pn xn = ∑_{j=1}^{n} pj xj = p · x.    (5)

(5) is the objective function. That is, you want to find x to min p · x.
I invoked linearity assumptions to write the objective function. I assumed
constant returns to scale when I asserted that if pj is the price of one unit, then
pj xj is the price of xj units. This is an assumption. Maybe it is impossible to
buy goods in tiny quantities. Maybe it is possible to get large purchases at lower
costs per unit. If so, then the linearity assumption is not appropriate (although
it may be a reasonable approximation). I also invoked additivity when I claimed
that the cost of the entire purchase is just the sum of the amount spent on each
food. This assumption is reasonable, but you can imagine settings where people
get discounts for buying large quantities.
If the problem was simply to minimize costs, then the answer would be easy.
Buy no food. After all, that costs you nothing. The problem with that is that
the people in your institution will die. You want to minimize expenditures, but

only after you have met the nutritional requirements. You need a way to decide
whether the food you buy actually satisfies nutritional requirements.
Step 3: Write Down the Constraints.
The constraints are that you satisfy nutritional requirements. You need to
buy enough food to supply all nutrients in (at least) the recommended amounts.
How much nutrient i do you need? ci . How much of this nutrient is supplied
when you have x? Again, take it one food at a time. You have x1 units of the
first food. This means that you obtain ai1 x1 units of the ith nutrient coming
from the first food. (The units of x1 might be pounds (of hamburger); the units
of aij might be grams (of iron) per pound (of hamburger)). Hence the product
gives you a quantity of grams (of iron). How much nutrient i do you get from
x? Add up the amount of nutrient i you get from each food.

ai1 x1 + · · · + aij xj + · · · + ain xn = ∑_{j=1}^{n} aij xj .    (6)

Therefore, to supply enough of nutrient i you must satisfy the constraint
that (6) be greater than or equal to ci . The constraint:

ai1 x1 + · · · + aij xj + · · · + ain xn = ∑_{j=1}^{n} aij xj ≥ ci    (7)

(7) describes the ith nutritional constraint. The entire problem imposes such
a constraint for each nutrient. That is, (7) must hold for i = 1, . . . , m.
I can lump the constraints together using matrix notation: the m constraints
described by (7) are: Ax ≥ c. Once again notice that I made linearity assump-
tions to formulate the constraints. If the Vitamin C you get from oranges
detracts from the Vitamin C you get from kiwis, then additivity fails. If your
body cannot process more than 3 potatoes in a day (causing them to pass from
your system without supplying nutrients), then the constant returns to scale
assumption fails.
It is also natural to add the restriction that you cannot buy negative quan-
tities of food. In symbols: x ≥ 0.
Step 4: Write Down the Entire Problem.
The work is over. Now just summarize it. The problem is to find x to solve:

min p · x subject to Ax ≥ c and x ≥ 0.
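
If you want to experiment, here is a hedged numerical sketch of this formulation (every price, requirement, and nutrient content below is invented for illustration). Because scipy's linprog expects ≤ constraints, the system Ax ≥ c is rewritten as (−A)x ≤ −c.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: m = 2 nutrients and n = 3 foods.
p = np.array([1.5, 0.8, 2.0])     # p_j: price per unit of food j
c = np.array([10.0, 8.0])         # c_i: required amount of nutrient i
A = np.array([[3.0, 1.0, 4.0],    # a_ij: amount of nutrient i
              [1.0, 2.0, 1.0]])   #       in one unit of food j

# min p.x subject to Ax >= c and x >= 0 (the default bounds).
res = linprog(p, A_ub=-A, b_ub=-c)
print(res.x, res.fun)             # the cheapest diet and its cost
```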

In practice, you will be given values for the parameters of the problem (A,
p, and c) and then would go ahead and try to find a numerical solution. In fact,
you can amuse yourself by going to:
http://www-fp.mcs.anl.gov/otc/Guide/CaseStudies/diet/
and finding out the costs of sample diets (I am not sure where the prices or
requirements came from). This program allows you to select the foods that you

are willing to eat. I selected about thirty foods and was told that I should limit
my diet to carrots, peanut butter, potatoes, and skim milk. With these, I could
meet my nutritional requirements for 99 cents per day. This diet was heavy on
the peanut butter. I decided that maybe I didn’t want to survive by spreading it
on carrots so I ruled out peanut butter. When I did, my optimal diet cost $4.32
and involved nine different foods. I am grateful that my enormous university
salary provides me the luxury of spending even more than this on food every
day.
During the formulation problem, it is useful to think about what the solu-
tion of the problem might look like. Would you expect the problem to have a
solution? In theory two things can go wrong. Maybe the problem is infeasible.
That would mean that it is impossible to find any foods that would satisfy the
nutritional requirements. This could happen if the government decided that
everyone needed to consume positive quantities of Vitamin X, but there was no
food that contained Vitamin X. (My son eats chicken soup, pasta, corn bread,
chicken nuggets, and chocolate desserts. It is possible that this would not be
enough to satisfy nutritional requirements without vitamin supplements.) On
the other hand, if you could find every nutrient in some food, then (by buying
enough) you could guarantee that you satisfy all of the requirements. That is,
it is sensible to assume that the problem is feasible. Could it be unbounded?
For a minimization problem, this would mean that you could make the cost of
the optimal diet arbitrarily small - not close to zero, but smaller than any num-
ber. It does not make sense that the diet would cost less than, say, −$100. (I
would interpret this as meaning that the store paid you $100 to take the food.)
Indeed, if you assume that prices are all non-negative (true everywhere but in
Mom’s kitchen), then any bundle of food you purchase will cost a non-negative
amount, so the cost of diets are bounded below. In summary, it is sensible to
assume that the diet problem has a solution.
Before leaving the diet problem, I want to describe another problem that is
based on precisely the same data as the diet problem. Here is the story. You
still run the institution. Someone approaches you and says: “Why bother with
food? All you care about is that your animals get nutrients. I sell pills (one
kind of pill for each nutrient). I have set my prices so that you can get nutrients
more cheaply from me than through food. I will sell you exactly the nutrients
you need and you will be better off.” You think about this and decide that it
sounds reasonable. The new problem is to figure out how the pill seller should
behave. Her problem is to set prices of pills that maximize the amount she can
get selling you the necessary nutrients subject to the constraint that the pills
provide nutrients more cheaply than food.
Step 1: Variables.
The pill seller wants to find prices for each nutrient pill. That is, she is
looking for y = (y1 , . . . , yi , . . . , ym ), where yi is the price charged for a pill that
supplies one unit of nutrient i.
Step 2: Objective.
The pill seller wants to maximize her profit. She sells the required amounts
of the nutrients, c. If she can charge the prices y, then she earns c · y.

Step 3: Constraints.
What does it mean for the pills to be cheaper than food? Consider the first
food. You don’t care what it looks like or what it tastes like. You only care
what nutrients it provides. Suppose you try to replace food one with pills. What
kinds of pill would you need? Food one supplies (in theory) amounts of all m
nutrients. If you wanted to replace the nutrient i found in one unit of food one
with nutrient i pills, you would need ai1 pills. This means that replacing the
nutrient i in food one with pills would cost ai1 yi . The total amount you would
need to replace the nutrients in food one with pills is therefore ∑_{i=1}^{m} ai1 yi .
In order for the (nutrients in the) pills to be cheaper than (the nutrients in)
food one, it must be that
∑_{i=1}^{m} ai1 yi ≤ p1 .

I want to impose this kind of constraint for each food. That is, each food
is at least as expensive as the cost of its nutrients. This leads to, for each
j = 1, . . . , n,
∑_{i=1}^{m} aij yi ≤ pj .

In concise notation, this becomes A^t y ≤ p (where A^t is the transpose of
A: the matrix you get when you interchange rows and columns). Throughout
this course I will write this expression yA ≤ p. Those comfortable with linear
algebra will know that this notation confuses row vectors with column vectors,
but it is convenient and should lead to no confusion.
Also, I add a non-negativity constraint (that states that the pill seller does
not give people money to take her pills): y ≥ 0.
Step 4: Conclusion.
Put the constraints together and we have the pill seller’s problem:
Find y = (y1 , . . . , ym ) to solve:

max c · y subject to yA ≤ p and y ≥ 0.

On one hand, the pill seller’s problem is just a contrived way to practice
problem formulation. It turns out, however, that it illustrates an important
idea that will appear later in the course. At this stage, I want to point out
several things.
Both the diet problem and the pill seller’s problem use the same basic data
(A, c, and p). I constructed the pill seller’s problem so that there will be a
relationship between its value and the value of the diet problem. Specifically,
the optimal value of the diet problem (the minimum cost) will be greater than
or equal to the optimal value of the pill problem (the maximum earnings of
the seller). Why? The constraints in the pill problem guarantee that pills
are cheaper than food. What the pill seller earns is what you would need to
pay to buy all of the necessary nutrients. Since these nutrients cost less when

purchased in pill form (by the construction of the prices) than when purchased
in food form, it must be that the cost of the pills is cheaper than the cost of the
food. You can prove this. Suppose that x is feasible for the diet problem and y
is feasible for the pill problem. That means that x satisfies Ax ≥ c and x ≥ 0
and y satisfies yA ≤ p and y ≥ 0. It follows that

yAx ≥ y · c

(This follows because Ax ≥ c and y ≥ 0. All you are doing is multiplying
m separate inequalities by non-negative numbers and then adding them up.
Note that yAx and y · c are both numbers.) and also

yAx ≤ p · x.

Combining these two inequalities yields y · c ≤ yAx ≤ p · x and, in particu-
lar, y · c ≤ p · x. This inequality says in symbols what I said in words earlier:
The cost of the pills (priced so that pills are cheaper than food) is no greater
than the cost of a feasible diet.
The general property of linear programming problems that you’ll learn is
that when you actually solve these problems, the values are equal. That is,
when you find the minimum cost diet, it will cost exactly the same amount as
you would pay a profit-maximizing pill seller for the pills. This relationship (and
consequences of it) allows us to interpret the prices obtained when you solve the
pill seller’s problem as interesting economic quantities. It turns out that they
actually provide the economic value of nutrients as seen by you (in your role as
institutional menu planner). These prices give simple ways to answer questions
of the form: how much extra would it cost to satisfy the diet problem if the
nutritional requirement of the first nutrient went up by one unit?
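
You can watch this equality happen numerically. The sketch below (with the same invented data as the earlier diet sketch) solves both problems and prints the two optimal values; they coincide, as the duality theory described above predicts.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 nutrients and 3 foods.
A = np.array([[3.0, 1.0, 4.0],
              [1.0, 2.0, 1.0]])
p = np.array([1.5, 0.8, 2.0])
c = np.array([10.0, 8.0])

# Diet problem: min p.x subject to Ax >= c and x >= 0.
diet = linprog(p, A_ub=-A, b_ub=-c)

# Pill seller's problem: max c.y subject to yA <= p and y >= 0.
pills = linprog(-c, A_ub=A.T, b_ub=p)

print(diet.fun, -pills.fun)   # the two optimal values are equal
```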

Linear Programming Notes II:

Graphical Solutions

1 Graphing Linear Inequalities in the Plane


You can solve linear programming problems involving just two variables by
drawing a picture. The method works for problems with more than two vari-
ables, but it is hard to visualize the higher dimensional problems.
There are essentially two things you need to know in order to find graphical
solutions to linear programming problems. First, you need to be able to graph
the solution to linear inequalities in the plane. Second, you need to be able to
see the relationship between these points and the value of the objective
function. I will discuss the first topic in this section. The next section will
discuss the second topic.
As a warm up, remember how to graph a line. You find two points that are
on the line and then connect them. For example, if the line is described by the
equation 2x1 + x2 = 2, then you can observe that the points (x1 , x2 ) = (1, 0)
and (x1 , x2 ) = (0, 2) are on the line. Connecting them leads to a straight line.
The inequality 2x1 + x2 ≥ 2 consists of all of the points above and to the
right of the straight line. (In general, inequalities are satisfied by points on
one side of the line. In order to determine which set consists of the points
that satisfy the inequality, I test by checking an arbitrary point not on the
line. For example, (x1 , x2 ) = (0, 0) does not satisfy the inequality 2x1 + x2 ≥ 2.
Consequently the set of points that satisfies the inequality consists of the points
on the side of the line 2x1 + x2 = 2 that does not contain (0, 0).) Now you can
figure out how to graph one inequality. If your linear programming problem had
only one constraint, you would be in business. LPs can have many constraints
(and typically do). In order to complete the process, you graph the constraints
one at a time. The feasible region is the intersection. Consider, for example,
the set determined by the five inequalities

2x1 + x2 ≥ 2
−2x1 + x2 ≤ 2
4x1 + x2 ≤ 8
x ≥ 0.

This is the region bounded by the quadrilateral pictured. (The four corners are
(0, 2), (1, 0), (2, 0), and (1, 4).) (I only have so much patience for doing the work
needed to include graphs. Please accept the humble offerings at the end of the
notes.)
There are several things to note. Why did I say that there were five in-
equalities? The first three lines describe one inequality each. The fourth line
describes two: x1 ≥ 0 and x2 ≥ 0.
If you have five inequalities, you would expect the feasible set to have five
sides. This set has only four. The reason is that the constraint that x1 ≥ 0
is redundant. If you satisfy the other four constraints, then you automatically
satisfy x1 ≥ 0. (In fact, if the first two constraints hold, then x1 must be
non-negative.)
Most of you have an intuition from high school algebra (or college linear
algebra) that you should have as many variables as equations to have a system
that makes sense. In this example there are two variables. There are five
constraints. What is wrong with your intuition? One problem is that it is
not clear what it means to make sense. In linear algebra courses, you want
to have as many equations as unknowns because then (and only then) should
you expect the system of equations to have a unique solution. I am dealing
with inequalities, not equations. You can convince yourself that when you have
two (or more) variables, it is possible to have solutions to an arbitrarily large
number of distinct linear inequalities. (You describe a polygon having n sides
as the set of points that satisfies n linear inequalities.) Furthermore, I do not
want a unique solution to the constraints. This would mean that the feasible
set had only one point. So the optimization problem would be simple. (In the
diet problem, if the feasible set had only one point, that would mean that there
was only one possible way in which you could meet the nutritional constraints.
Of course this is impossible - you could meet the constraints by eating more of
everything - but the point is that you should expect feasible sets to be large.)
In the example, the feasible set has four corners. These corners are deter-
mined by the intersection of pairs of constraints, solved as equations. That is,
(0, 2) is the solution to

2x1 + x2 = 2
−2x1 + x2 = 2;

(1, 4) is the solution to

−2x1 + x2 = 2
4x1 + x2 = 8;

(1, 0) is the solution to 2x1 + x2 = 2 and x2 = 0, and (2, 0) is the solution
to 4x1 + x2 = 8 and x2 = 0. This is what typically happens. That is, the
feasible region of a linear programming problem has corners determined by
solving subsets of the constraints as equations (here you do want to use as
many constraints as you have variables). Once you have these corners, you get
the feasible set by connecting the dots and identifying the region that satisfies
all of the constraints.
Warnings: The feasible set may be empty. (Imagine that you replaced the
constraint that x1 ≥ 0 with one that said that x1 ≤ −1.) There is nothing
mathematically mysterious about this. It means that you need to be careful
about which side of a constraint line is in the feasible set. The feasible set may
be unbounded. That is, it may go out forever in one or more directions. (After
all, having no constraints is perfectly ok.) The only way to have a problem that
has an unbounded solution is to have an unbounded feasible set.

2 Graphical Solutions
Now you know how to graph the feasible set. To solve a linear programming
problem graphically, that is the first thing you do. If the feasible set is empty,
then stop. It does not matter what the objective function is; the linear pro-
gramming problem is not feasible.
If the feasible set is non-empty, then you must decide whether the linear
programming problem has a solution or is unbounded. If the problem has a
solution, then you would like to find it (and, if the problem has more than one
solution, you would like to find all of them). If the problem is unbounded, then
you would like to be able to explain why it is unbounded. It should be clear that
the linear program cannot be unbounded if the feasible set is bounded.
It is possible for the LP to have a solution even if the feasible set is unbounded.
I will now discuss graphical solutions using the feasible region described in
the first section:

2x1 + x2 ≥ 2
−2x1 + x2 ≤ 2
4x1 + x2 ≤ 8
x ≥ 0.

For a start, assume that the objective function (I call it x0 ) is x1 + 2x2 . (I will
discuss other possible objective functions later.)
The next step is to graph a level set of the objective function. A level set
of a function is the set of points at which the function takes on the same value.
For the example, a level set is a set of points for which x1 + 2x2 = c for some
constant c. Level sets of linear functions of two variables are straight lines (level
sets of linear functions of three variables are planes; level sets of linear functions
of more than two variables are flat things called hyperplanes). Concretely, the
points at which x1 + 2x2 = 2 consist of a straight line through (2, 0) and (0, 1).
Superimpose a level set of the objective function (which is a line) on the
graph of the feasible region. When you do so, either the line intersects with
the graph or it does not. For example, the line x1 + 2x2 = 2 does intersect the
feasible region (at the point (2, 0) for example). The line x1 + 2x2 = −2 does
not intersect the feasible region (it lies below the region). Neither does the line
x1 + 2x2 = 20 (it lies above the region).

If the level set x1 + 2x2 = c intersects the feasible set, that means that there
exist feasible points that make the objective function's value equal to c. If the
level set does not intersect the feasible set, then it is not possible to make the
objective function's value equal to c. So it is possible to make the objective
function's value equal to 2, while it is not possible to make the value equal −2
or 20. These observations are important. They tell us that the value of the
optimization problem is at least 2 but no more than 20. Our job is to find the
highest value of the objective function subject to staying within the feasible set.
Geometrically, we want to find a level set of x0 that intersects the feasible set,
with the additional property that no higher level set intersects the feasible set.
The level set x1 + 2x2 = 2 intersects the interior of the feasible set. (Interior is a
technical term, but I hope that its intuitive meaning is clear.) This means that
level sets for higher values still intersect the feasible region. Now consider the
level set x1 + 2x2 = 9. This level set describes points that make x0 = 9. This
level set is parallel to the others. It intersects the feasible region. It intersects
the region in just one place, however (the point (1, 4)). We can conclude that it
is possible to satisfy the constraints and make the objective function's value be (at
least) 9. We can further conclude that it is not possible to make the objective
function any higher and still satisfy the constraints. It follows that the solution
to the problem is (x1 , x2 ) = (1, 4). It leads to the optimal value 9. This is the
solution to the linear programming problem.
You should experiment with this procedure using alternative objective func-
tions. You should be able to construct objective functions that yield solutions
at any of the corners of the feasible set.
In brief, here is the process.
1. Graph the feasible set. If the feasible set is empty, then stop. The problem is
infeasible. Otherwise continue.
2. Graph a level set of the objective function.
3. Shift the level set (parallel movement) until it intersects the feasible region.
4. Continue to shift the level set until it reaches the highest value at which it
still intersects the feasible region.
You follow the same steps for a minimization problem, taking care to move
the objective function in the opposite direction. You know which direction in-
creases the objective function value by drawing two level sets and comparing (the
direction of increase never changes). In the example, the level set x1 + 2x2 = 9
lies above and to the right of the level set x1 + 2x2 = 2; you always increase the
objective function (in this example) by moving up and to the right.
If your feasible set is unbounded, then it may be that the linear programming
problem is unbounded. You will be able to see this graphically if level sets for
arbitrarily large values of the objective function continue to intersect the feasible
region.
The graphical method illustrates an important, and general, property of
solutions to linear programming problems. If a linear programming problem has
a solution, then it has a solution at a "corner" of the feasible set. Graphically,
this fact follows from the observation that if the level set of the objective function
does not intersect a corner of the feasible set, then you can move the level set a
little bit in either direction and still intersect the feasible region. You cannot be
at either a minimum or a maximum. It is a really important fact. It means that
if you want to solve an LP and you know the corners of the feasible set, then all
you need to do is plug the corners into the objective function and pick the best
one. (In the example, the feasible set has only four corners: (0, 2), (1, 0), (2, 0),
and (1, 4).) This realization turns out to be the idea behind the simplex algorithm
that can be used to solve any linear programming problem.
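
The corner-checking idea is easy to automate. The sketch below (my own illustration, not part of the original notes) writes the example's constraints uniformly as Gx ≤ h, intersects every pair of constraint lines, discards infeasible intersection points, and evaluates the objective x1 + 2x2 at the corners that remain.

```python
import itertools
import numpy as np

# The example's constraints as Gx <= h (each ">=" row negated):
# 2x1 + x2 >= 2, -2x1 + x2 <= 2, 4x1 + x2 <= 8, x1 >= 0, x2 >= 0.
G = np.array([[-2.0, -1.0],
              [-2.0, 1.0],
              [4.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])
h = np.array([-2.0, 2.0, 8.0, 0.0, 0.0])
obj = np.array([1.0, 2.0])          # maximize x1 + 2*x2

best, best_val = None, -np.inf
for i, j in itertools.combinations(range(len(h)), 2):
    try:
        pt = np.linalg.solve(G[[i, j]], h[[i, j]])   # intersect two lines
    except np.linalg.LinAlgError:
        continue                                     # parallel lines
    if np.all(G @ pt <= h + 1e-9):                   # keep feasible corners
        if obj @ pt > best_val:
            best, best_val = pt, obj @ pt

print(best, best_val)   # (1, 4) with value 9, as found graphically
```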
Linear programming problems may have solutions that are not at corners.
For example, if x0 = 4x1 + x2 , then all points on the segment connecting (1, 4)
to (2, 0) solve the LP. Here the level sets of the objective function are parallel to
the upper right boundary of the feasible set. This example does not contradict
the fact. Points (1, 4) and (2, 0) are corners. (It is generally true that if a
linear programming problem has two solutions, then everything on the segment
connecting the two solutions is also a solution.)
If the feasible set is unbounded, then the "trick" of testing only at the corners
of the feasible region will tell you the solution to the problem if the LP has a
solution. The problem may be unbounded.
Experiment with unbounded problems. For example, take the feasible set
that I have used throughout these notes and omit the constraint that 4x1 +
x2 ≤ 8. You obtain an unbounded feasible set. Even though the feasible set is
unbounded, certain problems still have a solution. For example, the solution to
min x1 + x2 subject to:

2x1 + x2 ≥ 2
−2x1 + x2 ≤ 2
x ≥ 0

is (1, 0). It should be clear, however, that if you tried to solve max x1 + x2
subject to these constraints you cannot find a solution. For example, since the
point (z, 0) is in the feasible set for all z ≥ 1, it is possible to make the objective
function larger than anything (just let z > M ). You can see this graphically
by noting that the level sets of x1 + x2 intersect the feasible region at the point
(z, 0).
If I ask you to solve a linear programming problem graphically, then you
should give one of three answers. If the problem is not feasible, then you demon-
strate that the graph of the constraint set is empty (and state that the problem
is infeasible). If the problem has a solution, then you should state the solution
(or solutions, if there is more than one); show a level set of the objective function
that intersects the feasible region at the solution; and show that if you increase
the objective function the corresponding level set would no longer intersect the
feasible region. If the problem is unbounded, then you should show how you
can find a feasible point that makes the objective function larger (or, in the case
of a minimization problem, smaller) than any arbitrarily chosen value M .
[Figure: the feasible region is the quadrilateral with corners (0, 2), (1, 0), (2, 0), and (1, 4); two level sets of the objective function, x0 = 2 and x0 = 9, are drawn, with x0 = 9 touching the region only at the corner (1, 4).]
Linear Programming Notes III:

A Simplex Algorithm Example


These notes will take you through a computation using the simplex algo-
rithm. The example will give you a general idea of how the algorithm works.
Except for a few exercises that I give you, you will never do simplex algorithm
computations by hand. Why should I bother telling you about it? The basic
idea is simple, intuitive, and powerful. That makes it a good idea, one worth
knowing something about. The technique is straightforward. In principle, you
can learn how to solve any linear programming problem by hand. Algorithms
are a central concept in Operations Research. You get an understanding of the
general concept by seeing a particular example.
I will start by pushing through a specific example. When I am done, I will
make remarks about what must be done to come up with a general algorithm.
max 2x1 + 4x2 + 3x3 + x4    (0)
subject to 3x1 + x2 + x3 + 4x4 ≤ 12    (1)
x1 − 3x2 + 2x3 + 3x4 ≤ 7    (2)
2x1 + x2 + 3x3 − x4 ≤ 10    (3)
x ≥ 0
The problem is feasible (since all constraints hold if you set each variable
equal to zero). The problem is bounded (inequality (1) implies that x1 ≤ 4,
x2 ≤ 12, x3 ≤ 12, and x4 ≤ 3, for example); hence it is impossible to make the
objective function larger than 2(4) + 4(12) + 3(12) + 1(3) = 95. Of course, we
cannot set x1 = 4, x2 = 12, x3 = 12, and x4 = 3 and still satisfy constraint (1),
but 95 is certainly an upper bound to the value of x0 .
These observations indicate that the problem has a solution. Finding it does
not look easy. In particular, we have too many variables to graph.
Here is the essential approach. First introduce slack variables and write the
problem as a system of equations involving nonnegative variables.
max x0
subject to:
x0 − 2x1 − 4x2 − 3x3 − x4 = 0    (0)
3x1 + x2 + x3 + 4x4 + x5 = 12    (1)
x1 − 3x2 + 2x3 + 3x4 + x6 = 7    (2)
2x1 + x2 + 3x3 − x4 + x7 = 10    (3)
x ≥ 0
This system of equations is equivalent to the original system of inequalities.
The simplex algorithm is a systematic way of solving the system of equations in
a way that:
1. Preserves non-negativity of the variables.

2. Assigns positive values to only a few (one for each equation) variables (the
basis).
3. Identifies whether the value of x0 is as large as possible and, if it is not,
4. Describes how to increase the value of x0 .
Looking at the system above, it becomes easy to "guess" values of x0 , . . . , x7
that satisfy the equations. Simply set the variables that appear in more than
one equation equal to zero (x1 = x2 = x3 = x4 = 0) and solve for the remain-
ing variables (x0 = 0, x5 = 12, x6 = 7, x7 = 10). This "guess" satisfies the con-
straints of the original problem. We can increase x0 however. Look at (0). This
equation gives x0 in terms of x1 , x2 , x3 , and x4 . Because the coefficients of
x1 through x4 are negative, increasing the value of any one of these variables
from 0 to a positive number will increase x0 . It follows that the guess in which
x1 = x2 = x3 = x4 = 0 is not a good way to maximize x0 . Fix one of these
variables, say x1 . How far can we increase x1 without violating the constraints
of the problem? That is, how far can we increase x1 without violating con-
straints and while still maintaining x2 = x3 = x4 = 0? Since x5 ≥ 0, (1) says
that x1 ≤ 4; since x6 ≥ 0, (2) says that x1 ≤ 7; since x7 ≥ 0, (3) says that
x1 ≤ 5. Hence, I can make x1 as large as 4, but no larger (without changing
the values of the variables x2 = x3 = x4 = 0 or violating one of the constraints
of the problem). The next step of the procedure involves using (1) (the equation
that bounds the value of x1 ) to eliminate x1 from the other equations. That
bit of algebraic manipulation will yield a system of equations equivalent to the
original system (and hence the original problem) that can be solved easily for
x1 . First, fix equation (1) so that the coefficient of x1 is 1. You can do this by
dividing the equation by 3 to get equation


(1)′ : x1 + (1/3)x2 + (1/3)x3 + (4/3)x4 + (1/3)x5 = 4.
Next use (1)′ to rewrite the other equations in terms of variables other than x1 .
That is, write

(0)′ = (0) + 2(1)′ ,
(2)′ = (2) + (−1)(1)′ ,
(3)′ = (3) + (−2)(1)′ .

This transformation yields the second representation of the original problem
as a system of equations:
Row     Basis   x0   x1   x2      x3      x4      x5     x6   x7   Value
(0)′    x0      1    0    −10/3   −7/3    5/3     2/3    0    0    8
(1)′    x1      0    1    1/3     1/3     4/3     1/3    0    0    4
(2)′    x6      0    0    −10/3   5/3     5/3     −1/3   1    0    3
(3)′    x7      0    0    1/3     7/3     −11/3   −2/3   0    1    2
I have introduced a shorthand notation for systems of equations. Rather than
repeat the variables that appear in each equation, I wrote them along the top.
You should interpret each line (0)′ through (3)′ as an equation (the equation
(1)′ is expanded above). The first column in the table indicates the row number.
The second column indicates the basis. Notice that the basis variable appears
in its equation with coefficient one and in no other equation (or with coefficient
zero in the other equations, for you purists). The value column tells you the
right hand sides of each of the equations. This second system of equations
can be solved easily for x0 , x1 , x6 , and x7 if we set the remaining variables
equal to zero. We obtain (by setting basis variables equal to corresponding
values): x0 = 8, x1 = 4, x6 = 3, x7 = 2. In terms of the original problem
all that we have done is said that we can increase x0 from 0 to 8 if we set
x1 = 4, x2 = x3 = x4 = 0. The good news is that we made this observation
in a systematic way. We can repeat the process. As x2 appears in (0)′ with
a negative coefficient, increasing x2 (from 0) increases x0 . How great can this
increase be (if we insist that the variables that appear in several equations - the
non basic variables - be set equal to zero)? (1)′ allows x2 = 12 (since we need
(1/3)x2 ≤ 4); (2)′ places no restriction on x2 , as x2 's coefficient in that constraint
is negative (if we increase x2 , the basic variable x6 simply increases to take up
the slack); (3)′ requires that (1/3)x2 ≤ 2 or that x2 ≤ 6. Since the (3)′ constraint
places the strictest restriction on x2 , we use (3)′ to eliminate x2 from the other
equations and come up with a solution to the equations in which x0 goes up
(from 8) and x2 is set equal to 6. The transformed equations satisfy:

(3)′′ = 3(3)′

and then

(0)′′ = (0)′ + (10/3)(3)′′ ,
(1)′′ = (1)′ − (1/3)(3)′′ ,
(2)′′ = (2)′ + (10/3)(3)′′ .
The following array summarizes the computation.
Row     Basis   x0   x1   x2   x3    x4     x5    x6   x7    Value
(0)′′   x0      1    0    0    21    −35    −6    0    10    28
(1)′′   x1      0    1    0    −2    5      1     0    −1    2
(2)′′   x6      0    0    0    25    −35    −7    1    10    23
(3)′′   x2      0    0    1    7     −11    −2    0    3     6

Here we can take x3 = x4 = x5 = x7 = 0 and solve for the rest of the
variables:

x0 = 28, x1 = 2, x6 = 23, x2 = 6.
Again we have an equivalent representation of the original system. Again we
have increased x0 (this time to 28). Again we can see that it is possible to

increase x0 further by increasing x4 (at this point increasing x3 would actually
lower the objective function value). Arguing as before, it is (1)′′ that restricts
the increase of x4 , while the other equations do not restrict x4 . Hence, I use
(1)′′ to eliminate x4 and form a new system:

(1)′′′ = (1/5)(1)′′ ,
(0)′′′ = (0)′′ + 35(1)′′′ ,
(2)′′′ = (2)′′ + 35(1)′′′ ,
(3)′′′ = (3)′′ + 11(1)′′′ .

This new system looks like:
Row      Basis   x0   x1    x2   x3     x4   x5    x6   x7     Value
(0)′′′   x0      1    7     0    7      0    1     0    3      42
(1)′′′   x4      0    0.2   0    −0.4   1    0.2   0    −0.2   0.4
(2)′′′   x6      0    7     0    11     0    0     1    3      37
(3)′′′   x2      0    2.2   1    2.6    0    0.2   0    0.8    10.4
Now observe that as before we can read off a feasible guess to the original prob-
lem: x0 = 42, x1 = 0, x2 = 10.4, x3 = 0, x4 = .4, x5 = 0, x6 = 37, and x7 = 0.
This time we have actually solved the original problem. Here is why. Equation
(0)′′′ expresses x0 in terms of x1 , x3 , x5 , and x7 . Further, the coefficients of
these variables are all non-negative in (0)′′′ . Since these variables must take on
non-negative values, the best value that they can take on (for the purpose of
maximizing x0 ) is zero. If you set x1 = x3 = x5 = x7 = 0, then you get 42.
More mathematically, rewriting (0)′′′ yields

x0 = 42 − (7x1 + 7x3 + x5 + 3x7 ).

The stuff in parentheses is always nonnegative since x1 , x3 , x5 , x7 ≥ 0. It follows
that x0 = 42 − blah, where blah is a non-negative number. Therefore, x0 ≤ 42 in
any possible solution involving a feasible vector for the original problem. Since I
have found a way to make x0 = 42, I must have found a solution to the original
problem. That solution is: x1 = 0, x2 = 10.4, x3 = 0, and x4 = .4.
have found a way to make x0 = 42, I must have found a solution to the original
problem. That solution is: x1 = 0; x2 = 10:4; x3 = 0; and x4 = :4.
This example explains the essentials of the simplex algorithm. Here are
some comments. First, although I began by increasing x1 's value, I ended with
x1 = 0. Thus, what looked like a good idea at first did not turn out to be a
good idea. The procedure that I have described always (step by step) increases
x0 , but there is not much you can say about how it changes the values of other
variables. Second, I always increase the value of some variable that appears with
a negative coefficient in row 0. The variable that I select will have no influence
on the final outcome. I could have started by increasing x2 , x3 , or x4 instead
of x1 . In fact, if I had started with either x2 or x4 it would have been possible
to obtain the solution in two steps instead of three. Third, I decided which row
to use in a "pivot" in a systematic way. The values (right-hand side constants)
are positive at every step of the computation. This is no accident. Indeed,
this property is what guarantees that your "guess" is feasible at each step of the
algorithm. Here is a general outline of how the simplex algorithm works. It takes
a linear programming problem and, in a finite number of steps (bounded by the
number of constraints and variables in the original problem), it stops. When
it stops, the algorithm either proves that the problem is infeasible; provides a
solution to the problem; or demonstrates that the problem is unbounded.

You start with a description of the feasible region using variables that are
constrained to be non-negative. It turns out that it is always possible to do
this. That is, given any LP, you can transform it to one in which variables are
non-negative and constraints are equations. In the example, we transformed
inequalities to equations by adding slack variables. Second, you find a feasible
basis for the system of equations. A basis is a subset of variables, one for each
equation. You try to find a guess for the problem by solving the constraints
using only these variables (setting the other variables equal to zero). In the
example, the initial feasible basis consisted of the slack variables and x0 . If you
describe your feasible set using n equations, then you should only need n of
your variables to solve the equations. This amounts to finding a corner of the
feasible set. You may not be able to find a feasible basis for the problem. (After
all, the problem may be infeasible.) It turns out that if the problem is feasible,
then there exists a feasible basis. The standard way to find a feasible basis is
to solve an "artificial" LP derived from your problem.
Here is what you should know about finding feasible bases. Not all LPs have
feasible bases. You can find a feasible basis (if one exists) using the simplex
algorithm. It is easy to find feasible bases for some problems. For example, in
the example, the slack variables were a feasible basis. In general, if the feasible
set can be written Ax ≤ b and x ≥ 0 for b ≥ 0, then x = 0 always satisfies the
constraints and a good starting feasible basis is the slack variables.
Once you have written the problem in basic form (as in any of the arrays
above), you conduct a pivot operation. You do this by looking for a negative
number in Row 0. (If there are none, then you have solved the problem.) If
there is more than one negative number, pick one. Let us say that it is in column
j . You can increase the value of the objective function by "pivoting" xj into
the basis. You must decide where to pivot. Look in the xj column. If there are
no positive numbers in the column, then stop. The problem is unbounded (you
can increase xj indefinitely without violating the constraints of the problem).
If there is at least one positive number in the column, then pick the one that
minimizes the ratio between the value column and the entry in the xj column.
This is precisely what I did in the computation above. Doing this identifies a
pivot element. Now you do a computation that generates a new basis. If the
pivot is done correctly, then, when you are done, you still have a basis; the value
of the objective function goes up; and the value column remains non-negative.
Now you can repeat the pivoting process. The process must end in a finite
number of steps because there are only a finite number of bases and you never
return to a basis once you have pivoted away from it (because at each step you
increase the value of the objective function). This is essentially the entire story
of the theory of the simplex algorithm. The story had a few loose ends, but
only a few. For the sake of truth, here are the loose ends.
1. Finding an initial feasible basis.
It requires a bit of cleverness to realize that you can do this (when it is
possible) by solving another linear programming problem.

2. Proving that you have an unbounded problem when you have found a
negative number in row zero with no positive number in the corresponding
column.
This only requires thinking about what the simplex array means.
3. Proving that if you pivot correctly you obtain a new feasible basis.
You need to describe the simplex pivoting rules precisely and verify that
they do what they are supposed to do. The precise rules are exact trans-
lations of what I have said in words. Verifying that everything works is
easy.
4. Proving that the objective function goes up after every pivot (thereby
guaranteeing that the algorithm stops in a finite number of steps).
It turns out that the objective function need not go up every step. There
are "degenerate" cases (when there is a tie in the minimum ratio rule that
determines the pivot row). In these cases, a basis variable may take on a
zero value and subsequent pivots may lead to a new basis that leaves the
value of x0 unchanged. These ties are rare. In practice, the implementa-
tion of the algorithm breaks ties randomly. In theory, there is a refined
method of picking where to pivot. If you follow the refined rule, you are
guaranteed to never repeat a basis even in degenerate problems.
5. Finding all solutions.
The algorithm finds one basic solution to an LP (provided that a solution
exists). When there are many solutions, you can use the algorithm to find
all basic solutions. This is not hard, but I won't say more about it.
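
To make the pivot rules concrete, here is a minimal sketch of a single pivot
step in Python. This is my own illustration, not part of the course software;
it assumes numpy and a tableau stored as a float array whose row 0 is the
objective row and whose last column is the value column.

    import numpy as np

    def pivot_step(T):
        """One simplex pivot on tableau T (row 0 = objective, last column =
        values). Returns "optimal", "unbounded", or "pivoted"; T is modified
        in place."""
        cols = np.where(T[0, :-1] < 0)[0]       # negative entries in row 0
        if cols.size == 0:
            return "optimal"                    # nothing left to improve
        j = cols[0]                             # pick one negative column
        rows = np.where(T[1:, j] > 0)[0] + 1    # positive entries in column j
        if rows.size == 0:
            return "unbounded"                  # xj can grow without limit
        i = rows[np.argmin(T[rows, -1] / T[rows, j])]   # minimum ratio rule
        T[i] /= T[i, j]                         # scale so the pivot entry is 1
        for r in range(T.shape[0]):             # clear column j elsewhere
            if r != i:
                T[r] -= T[r, j] * T[i]
        return "pivoted"

    # The example from these notes: columns are x0, x1..x4, three slacks, values.
    T = np.array([[1., -2, -4, -3, -1, 0, 0, 0,  0],
                  [0.,  3,  1,  1,  4, 1, 0, 0, 12],
                  [0.,  1, -3,  2,  3, 0, 1, 0,  7],
                  [0.,  2,  1,  3, -1, 0, 0, 1, 10]])
    while pivot_step(T) == "pivoted":
        pass
    print(T[0, -1])   # 42.0, the value found in the hand computation

With the pick-the-first-negative-column rule used here, the loop happens to
take three pivots starting from x1, just as in the computation above; the tie-
breaking refinements from loose end 4 are not implemented.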

Linear Programming Notes IV:

Solving Linear Programming Problems

Using Excel

1 Introduction
Software that solves moderately large linear programming problems is readily
available. You can find programs on the internet. Software comes with many
textbooks. Standard spreadsheet programs often can solve linear programming
problems. In this course, we will use the solver tool found on Excel to solve
problems. I request that you use this program. If you would strongly prefer to
use other software, then talk to me.
You can find Excel on the machines in the computation lab in Econ 100. You
may have access to Excel (it comes bundled with Microsoft Office). If you do,
then use it. The capacity to solve linear programs is only available if you install
the solver tool. If you find that the solver is not available on your machine, you
must either re-install Excel or use the computation lab.
These notes explain how to use Excel to solve linear programming problems.
I will assume that you know the basics of using a computer in the computation
lab. That is, you must know how to log on, locate and start Excel, save your
work, print your work, and log out. (One pointer: If you use the machines in
Econ 100, you must know that your account is on iacs5 in order to log on.)

2 Creating a Spreadsheet
This section explains how to set up a spreadsheet to solve LPs. On the class
webpage there are companion .pdf files that contain an Excel spreadsheet that
I have created to illustrate the procedure. You should refer to that spreadsheet
when you read these notes. The webpage also contains the same spreadsheet
in .xls format. You can copy this and use it as a template spreadsheet for
class assignments or your own experimentation. In any event, you need not
prepare a new starting spreadsheet for each program. (When you do a homework
assignment, make sure to type in the assignment number, the problem you are
solving, and your name at the top of the page. If you use my template, it would
be a good idea to erase my name before handing in the assignment.)
I will describe how to set up and solve the basic linear programming example
that I solved in class. This problem has four variables. The variables are
constrained to be non-negative and there are three additional constraints. The

problem is to maximize 2x1 + 4x2 + 3x3 + x4 subject to:
3x1 + x2 + x3 + 4x4 ≤ 12
x1 − 3x2 + 2x3 + 3x4 ≤ 7
2x1 + x2 + 3x3 − x4 ≤ 10
x ≥ 0.

I set up the sample template to deal with a problem of this size (four variables
and three resource constraints). You would need to make adjustments to deal
with smaller or larger problems. The boxes that have dark borders are the ones
in which you enter numbers. The coefficients of the objective function (the cj )
go in cells E5 through H5. The resource constraints (the bj ) go in K13 through
K15. The technological coefficients (the matrix A) go in the block from E to
H and 13 to 15. In addition, the cells E8 through H8 are reserved for values of
the variables. The program will find the best values for these xj and put them
in these cells. B18 contains a formula. It is the formula for the value of the
objective function. If you did not have four variables you would need to adjust
the range of entries in the formula so that it would properly compute the sum
c1 x1 + · · · + cn xn .
For reference, I wrote the formula that appears in cell B18 on the spreadsheet
(starting in A20). A20 does not enter into any of the computations; it is just a
comment. Similarly, I13 through I15 contain formulas. These formulas compute
the left hand side of a resource constraint. I wrote the formula that appears
in I13 for reference in A21. I14 and I15 contain similar formulas. You need
not retype them. If you copy I13 by dragging downward, clever Excel makes
the necessary changes in the formula. (The dollar signs in this formula tell
Excel not to adjust the cell number when it copies.) If you had additional
resource constraints, you would just use more rows in the spreadsheet. If you
had a different number of variables, you would need to adjust the range of
the variables that appear in the formulas. Finally, L13 through L15 contain a
formula that gives you the difference (in absolute value) between the right and
left hand side of each constraint. When you solve the problem, this difference
must be zero for equality constraints. Otherwise, a strictly positive number
corresponds to a non-binding constraint. I have not discussed J13 through J15. This
column reports the way in which you compare the left-hand side of the resource
constraint to the right-hand side (=, ≤, or ≥). The computer program lets you
enter any of the three comparison operators (and different constraints can use
different comparisons).
Once you have a template, you can create copies of it. Go to the pull down
Edit menu at the top of the Excel page. Click on Move or Copy Sheet . . . . Under
To Book: Book1 (or some other destination). Under Before Sheet: Highlight
Template (or whatever you call your template). Put a check mark in Copy.
Click OK. This creates a copy of the template. Excel calls it Template (2). The
name of the sheet is on a tab at the bottom of the sheet. You can change the
name by putting the cursor on the tab, right clicking, clicking on Rename . . . ,
and entering the new name.

I renamed the copy Example 1. The spreadsheet is now in perfect shape to
solve a problem with 4 variables and 3 resource constraints. If your problem
has more constraints, you can enter them below the ones in the problem (after
line 15). Make sure to copy the formulas for the left-hand side and the slack. If
your problem has more variables, then you need to adjust all of the formulas to
include extra terms in the sums.
A comment about spreadsheets. Cells in Excel spreadsheets can contain
words (like A1). I use these as comments. The cells can contain numbers. Some
of these cells contain the basic data of the problem (the A, b, and c). Some of
the cells contain formulas. You enter the formula; Excel inserts a value into the
cell using the formula. To enter a formula in a cell put the cursor on the cell
and then type something that starts with an equal sign. The formulas that you
use will compute linear functions - products of given coefficients and variables.
Excel does this using the "sumproduct" function. For example, when you write
= SUMPRODUCT(E5:H5,E8:H8)
Excel computes:
(E5)(E8) + (F5)(F8) + (G5)(G8) + (H5)(H8),
where (E5) is the number in cell E5 of the spreadsheet. Excel will find some
way to whine at you if the formula is improperly configured (for example, if
there are four cells to the left of the comma and three cells to the right; or if
the entry in any one of the cells is not a number). Numbers should appear in
cells that contain formulas. These numbers are the value obtained if you plug
in current referenced values from other parts of the spreadsheet.
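
If it helps to see the same computation outside Excel, here is a small Python
analogue (my sketch, with numpy assumed; the cell names appear only in the
comments):

    import numpy as np

    # The Python analogue of =SUMPRODUCT(E5:H5,E8:H8): pair the objective
    # coefficients with the variable values and add up the products.
    c = np.array([2, 4, 3, 1])       # contents of E5:H5
    x = np.array([0, 10.4, 0, 0.4])  # contents of E8:H8 after solving
    print(np.dot(c, x))              # 42.0, the number that appears in B18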

3 Solving the Problem


On this spreadsheet I entered the data from the example. You may also enter
guesses for the variables in E8 through H8; if you do not put anything there,
Excel will assign the initial value of zero to each variable.
Once you have entered the data (and adjusted the formulas, if necessary),
you are ready to solve.
Go to the Tools column on the menu bar at the top of the page. Click once
to bring down the menu. Click on Solver . . . . A Solver Parameter box should
appear.
In the Set Target Cell box: Insert B18. (You can do this by clicking on cell
B18 of the spreadsheet.) In general, what goes into the Set Target Cell box is
the cell in your spreadsheet that has the formula for your objective function.
Put the bullet in the circle to the left of max (of course, if you were solving
a minimization problem, you would put the bullet to the left of min).
By changing cells: enter E8:H8 (in general, indicate the cells that you have
reserved for the variables).
To the right of the \Subject to the Constraints" box, click on Add . . . . An
Add Constraint Box should appear. Enter each resource constraint. For this

problem, I typed "I13:I15" in the left hand box (under cell reference); I made sure
that the middle cell showed the operator "<=" (that is, ≤); I typed "K13:K15" in
the right hand box (below and to the right of "Constraint:"). I then clicked OK.
I then clicked Add again and entered the non-negativity constraints by putting
"E8:H8" in the left; ">=" in the center; and "0" in the right. I clicked OK
again. There are other ways to enter this information. The important thing to
note is that you should enter each constraint and that the Add Constraint Box
lets you designate the constraint as an equation or either kind of inequality.
You need to tell Excel to treat the program as a linear model. To do this,
click on Options . . . . Put a check mark next to "Assume Linear Model" (you
can ignore the rest of the box). Click OK.
Now you can try to get the solution to your problem by clicking on Solve.
The Solver Results box now appears. You should place a bullet to the left of
"Keep Solver Solution" and highlight "Sensitivity" in the Reports list of options
on the right. Click on OK.
You now have a spreadsheet that contains the answer to the problem and
another sheet that has sensitivity results. You should be able to interpret the
answer page: the spreadsheet now tells you the solution (x in E8 through H8)
and the value of the problem (in B18). The spreadsheet also tells you which
of the constraints were binding and the amount of slack in the non-binding
constraints. (This information is in L13:L15.) The sensitivity report contains a
lot of useful information that you'll learn about later.
At this point you can print out and save your results. Sometimes the output
looks nicer if you print it using the landscape option rather than the portrait
option. Click: File, then Page Setup, then highlight the bullet to the left of
Landscape if you think that your results would look better in that format.
Similarly by going through: File, Page Setup, Sheet, you can check boxes that
will (or will not) print the grid lines and the row and column headings.
If all has gone well, then you can reproduce the answer to the Example that
we found using direct computation. My spreadsheet (Example 1) does contain
the data from the problem and its solution. You can read off the values of the
variables (in the dark bordered boxes under x1 , . . . , x4 ). You can read off the value
and the slack as well. Note that the computer decided that the slack in the
first and third constraints was not quite zero. This is due to rounding error.
2.17E−12 means 2.17 × 10^(−12), which is closer to zero than most of us will ever
get.
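
If you want an independent check on the spreadsheet, the same example can be
fed to a solver outside Excel. Here is a sketch using scipy.optimize.linprog
(scipy is my assumption, not course software; linprog minimizes, so the
objective coefficients are negated):

    from scipy.optimize import linprog

    c = [-2, -4, -3, -1]      # negated coefficients of 2x1 + 4x2 + 3x3 + x4
    A = [[3,  1, 1,  4],
         [1, -3, 2,  3],
         [2,  1, 3, -1]]
    b = [12, 7, 10]
    res = linprog(c, A_ub=A, b_ub=b)   # x >= 0 is linprog's default bound
    print(res.x, -res.fun)             # about (0, 10.4, 0, 0.4) and 42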

4 Loose Ends
I have described how to use Excel to solve linear programming problems. There
are a couple of details.

4.1 Infeasible Problems

If your feasible set is empty, then Excel will tell you. After you ask Excel to
find a solution, the Solver Results Box appears with the statement that "Solver
could not find a feasible solution." Provided that there are no mistakes, this
means that the problem is not feasible. There is no solution to nd and there
is nothing else to do.

4.2 Unbounded Problems

If your problem is unbounded, then Excel will tell you. The Solver Results Box
appears with the statement that "The Set Target Cell Values do not converge."
The spreadsheet then provides you with a finite guess for all of the variables,
but the guess will have the property that you can increase (at least) one of the
variables without bound; maintain feasibility by varying only the variables that
have positive values; and increase the objective function without bound.

4.3 Mysterious Failures

Almost certainly something will go wrong at some point. Maybe it is the hard-
ware. Maybe it is the software. Probably you made an incorrect assumption
about how the program interprets the data you give it or you just typed a for-
mula or cell address incorrectly. Identifying the problem is frustrating and often
time consuming. If you obtain a numerical answer, then I urge you to examine
it to see whether it looks sensible. It could be the answer to the wrong problem.
If Solver does not work, then go over the data and formulas slowly and carefully.
Usually you will find a mistake.

4.4 Sensitivity Analysis

At this point you can generate a report on the sensitivity of your solution, but
you cannot interpret this report. Sensitivity analysis is a major topic of the
course. I will teach you what this report means soon.

Linear Programming Notes V
Problem Transformations
1 Introduction
Any linear programming problem can be rewritten in either of two standard
forms. In the first form, the objective is to maximize, the material constraints
are all of the form: “linear expression ≤ constant” (ai · x ≤ bi ), and all variables
are constrained to be non-negative. In symbols, this form is:

max c · x subject to Ax ≤ b, x ≥ 0.

In the second form, the objective is to maximize, the material constraints


are all of the form: “linear expression = constant” (ai · x = bi ), and all variables
are constrained to be non-negative. In symbols, this form is:

max c · x subject to Ax = b, x ≥ 0.

In this formulation (but not the first) we can take b ≥ 0.


Note: The c, A, b in the first problem are not the same as the c, A, b in the
second problem.
In order to rewrite the problem, I need to introduce a small number of
transformations. I’ll explain them in these notes.

2 Equivalent Representations
When I say that I can rewrite a linear programming problem, I mean that I can
find a representation that contains exactly the same information. For example,
the expression 2x = 8 is equivalent to the expression x = 4. They both describe
the same fact about x. In general, an equivalent representation of a linear
programming problem will be one that contains exactly the same information
as the original problem. Solving one will immediately give you the solution to
the other.
When I claim that I can write any linear programming problem in a standard
form, I need to demonstrate that I can make several kinds of transformation:
change a minimization problem to a maximization problem; replace a constraint
of the form (ai · x ≤ bi ) by an equation or equations; replace a constraint of the
form (ai · x ≥ bi ) by an equation or equations; replace a constraint of the form
(ai · x = bi ) by an inequality or inequalities; replace a variable that is not ex-
plicitly constrained to be non-negative by a variable or variables so constrained.
If I can do all that, then I can write any problem in either of the desired forms.

3 Rewriting the Objective Function
The objective will be either to maximize or to minimize. If you start with
a maximization problem, then there is nothing to change. If you start with a
minimization problem, say min f (x) subject to x ∈ S , then an equivalent maxi-
mization problem is max −f (x) subject to x ∈ S. That is, minimizing f is the
same as maximizing −f . This trick is completely general (that is, it is not limited
to LPs). Any solution to the minimization problem will be a solution to the
maximization problem and conversely. (Note that the value of the maximization
problem will be −1 times the value of the minimization problem.)
In summary: to change a min problem to a max problem (or the reverse), just
multiply the objective function by −1.

4 Rewriting a constraint of the form (ai · x ≤ bi )


To transform this constraint into an equation, add a non-negative slack variable:

ai · x ≤ bi
is equivalent to

ai · x + si = bi and si ≥ 0.
We have seen this trick before. If x satisfies the inequality, then si = bi − ai · x ≥ 0.
Conversely, if x and si satisfy the expressions in the second line, then the first
line must be true. Hence the two expressions are equivalent. Note that by
multiplying both sides of the expression ai · x + si = bi by −1 we can guarantee
that the right-hand side is non-negative.
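
As a small illustration of this bookkeeping (my sketch, with numpy assumed),
the following helper appends one slack variable per ≤ constraint, turning
Ax ≤ b, x ≥ 0 into [A I](x, s) = b with (x, s) ≥ 0:

    import numpy as np

    def to_equality_form(A, b):
        """Append one slack column per row: Ax <= b, x >= 0 becomes
        [A I](x, s) = b with x, s >= 0."""
        A = np.asarray(A, dtype=float)
        return np.hstack([A, np.eye(A.shape[0])]), np.asarray(b, dtype=float)

    A_eq, b_eq = to_equality_form([[3, 1, 1, 4], [1, -3, 2, 3], [2, 1, 3, -1]],
                                  [12, 7, 10])
    print(A_eq)   # the original columns followed by a 3-by-3 identity block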

5 Rewriting a constraint of the form (ai · x ≥ bi )


To transform this constraint into an equation, subtract a non-negative surplus
variable:
ai · x ≥ bi
is equivalent to

ai · x − si = bi and si ≥ 0.
The reasoning is exactly like the case of the slack variable.
To transform this constraint into an inequality pointed in the other direction,
multiply both sides by −1.

ai · x ≥ bi
is equivalent to

−ai · x ≤ −bi .

6 Rewriting a constraint of the form (ai · x = bi )
To transform an equation into inequalities, note that w = z is exactly the same
as w ≥ z and w ≤ z. That is, the only way for two numbers to be equal is for
one to be both less than or equal to and greater than or equal to the other. It
follows that

ai · x = bi

is equivalent to

ai · x ≤ bi and ai · x ≥ bi .

By the last section, the second line is equivalent to:

ai · x ≤ bi and − ai · x ≤ −bi .

7 Guaranteeing that All Variables are Explicitly
Constrained to be Non-Negative
Most of the problems that we look at require the variables to be non-negative.
The constraint arises naturally in many applications, but it is not essential. The
standard way of writing linear programming problems imposes this condition.
This section shows that there is no loss of generality in imposing the restriction.
That is, if you are thinking about a linear programming problem, then I can
think of a mathematically equivalent problem in which all of the variables must
be non-negative.
The transformation uses a simple trick. You replace an unconstrained vari-
able xj by two variables uj and vj . Whenever you see xj in the problem, you
replace it with uj − vj . Furthermore, you impose the constraint that uj , vj ≥ 0.
When you carry out the substitution, you replace xj by non-negative variables.
You don’t change the problem. Any value that xj can take, can be expressed as
a difference (in fact, there are infinitely many ways to express it). Specifically,
if xj ≥ 0, then you can let uj = xj and vj = 0; if xj < 0, then you can let
uj = 0 and vj = −xj .
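
A tiny sketch of this substitution (numpy assumed; the function name is mine):

    import numpy as np

    def split_free(x):
        """One choice of u, v >= 0 with x = u - v for possibly-negative x."""
        x = np.asarray(x, dtype=float)
        return np.maximum(x, 0.0), np.maximum(-x, 0.0)

    u, v = split_free([3, -2])
    print(u, v, u - v)   # [3. 0.] [0. 2.] and u - v recovers [ 3. -2.]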

8 What Is the Point?


The previous sections simply introduce accounting tricks. There is no substance
to the transformations. If you put the tricks together, they support the claim
that I made in the beginning of the notes. Who cares? The form of the problem
with equality constraints and non-negative variables is the form that the simplex
algorithm uses. The inequality constraint form (with non-negative variables) is
the form used for the duality theorem.

Warnings: These transformations really are transformations. If you start
with a problem in which x1 is not constrained to be non-negative, but act as
if it is so constrained, then you may not get the right answer (you’ll be wrong
if the solution requires that x1 take a negative value). If you treat an equality
constraint like an inequality constraint, then you’ll get the wrong answer (unless
the constraint binds at the solution). Similarly, you can’t treat an inequality
constraint as an equation in general. The transformations involve creating a new
variable or constraint to compensate for changing inequalities to equations,
equations to inequalities, or whatever it is you do.

9 Example
You know all the ideas. Let me show you how they work. Start with the
problem:

min 4x1 + x2
subject to −2x1 + x2 ≥ 6
x2 + x3 = 4
x1 ≥ −4
x2 , x3 ≥ 0.

and let’s write it in either of the two standard forms.


First, to get it into the form:

max c · x subject to Ax ≤ b, x ≥ 0

change the objective to maximize by multiplying by −1:

max −4x1 − x2 .

Next, change the constraints. Multiply the first constraint by −1:

2x1 − x2 ≤ −6;

replace the second constraint by two inequalities:

x2 + x3 ≤ 4 and − x2 − x3 ≤ −4;

and replace the third constraint by the inequality:

−x1 ≤ 4.

Finally, replace the unconstrained variable x1 everywhere by u1 −v1 and add


the constraints that u1 , v1 ≥ 0. Putting these together leads to the reformulated
problem

max −4u1 + 4v1 − x2
subject to 2u1 − 2v1 − x2 ≤ −6
x2 + x3 ≤ 4
− x2 − x3 ≤ −4
−u1 + v1 ≤ 4
u1 , v1 , x2 , x3 ≥ 0.

In notation, this problem is in the form: max c · x subject to Ax ≤ b, x ≥ 0


with c = (−4, 4, −1, 0), b = (−6, 4, −4, 4) and

A = [  2  −2  −1   0
       0   0   1   1
       0   0  −1  −1
      −1   1   0   0 ].

Next, to put the problem into the form: max c · x subject to Ax = b, x ≥ 0,


change the objective function to max as above; replace x1 by u1 − v1 as above;
replace the first constraint with

−2u1 + 2v1 + x2 − s1 = 6 and s1 ≥ 0;

leave the second constraint alone; and replace the third constraint with

−u1 + v1 + s3 = 4.

The problem then becomes

max −4u1 + 4v1 − x2
subject to −2u1 + 2v1 + x2 − s1 = 6
x2 + x3 = 4
−u1 + v1 + s3 = 4
u1 , v1 , x2 , x3 , s1 , s3 ≥ 0.

In notation, this problem is in the form: max c · x subject to Ax = b, x ≥ 0


with c = (−4, 4, −1, 0, 0, 0), b = (6, 4, 4) and

A = [ −2   2   1   0  −1   0
       0   0   1   1   0   0
      −1   1   0   0   0   1 ].

In the two different transformations, the A, b, and c used in the representa-


tion differ. Indeed, the two descriptions of the problem have different numbers
of variables and different numbers of constraints. It does not matter. If you
solve either problem, you can substitute back to find values for the original
variables, x1 , x2 , x3 , and the original objective function.
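
As a sanity check of this claim (a sketch, assuming scipy; linprog minimizes,
so both maximization objectives are negated, and x ≥ 0 is its default bound),
you can solve both standard forms and confirm that their values agree:

    import numpy as np
    from scipy.optimize import linprog

    # Inequality form, variables (u1, v1, x2, x3):
    c1 = np.array([-4, 4, -1, 0])
    A1 = np.array([[ 2, -2, -1,  0],
                   [ 0,  0,  1,  1],
                   [ 0,  0, -1, -1],
                   [-1,  1,  0,  0]])
    b1 = np.array([-6, 4, -4, 4])
    form1 = linprog(-c1, A_ub=A1, b_ub=b1)

    # Equality form, variables (u1, v1, x2, x3, s1, s3):
    c2 = np.array([-4, 4, -1, 0, 0, 0])
    A2 = np.array([[-2, 2, 1, 0, -1, 0],
                   [ 0, 0, 1, 1,  0, 0],
                   [-1, 1, 0, 0,  0, 1]])
    b2 = np.array([6, 4, 4])
    form2 = linprog(-c2, A_eq=A2, b_eq=b2)

    print(-form1.fun, -form2.fun)   # both 16; the original min problem's
                                    # value is therefore -16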

Computer programs that solve linear programming problems (like Excel) are
smart enough to perform these transformations automatically. That is, you need
not perform any transformations of this sort in order to enter an LP into Excel.
The program asks you whether you are minimizing or maximizing, whether
each constraint is an inequality or an equation, and whether the variables are
constrained to be nonnegative.

Linear Programming Notes VI
Duality and Complementary Slackness
1 Introduction
It turns out that linear programming problems come in pairs. That is, if you
have one linear programming problem, then there is automatically another one,
derived from the same data.
Start with an LP written in the form:

max c · x subject to Ax ≤ b, x ≥ 0.

(We know from the study of problem transformations that you can write any LP
in this form.) I will call this the Primal. It is useful to keep track of dimensions.
Assume that there are n variables (components of x) and m constraints. That
means that c is n−dimensional, b is m−dimensional, and A is a matrix with m
rows and n columns. The data of the problem are b, c, and A. Using the same
information, we can write down a new LP. This one has n constraints and m
variables. The new problem, called the Dual has the form:

min b · y subject to yA ≥ c, y ≥ 0.

You get the dual by “switching around” the parts of the Primal. The ob-
jective switches from max to min. The constraints switch from ≤ to ≥. The
number of constraints changes from m to n. The number of variables changes
from n to m. The objective function coefficients switch roles with the right-hand
side constants.
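
Because forming the dual is purely this kind of symbol shuffling, it is easy to
mechanize. Here is a sketch (numpy assumed; the function name is mine) that
produces the dual's data from the primal's:

    import numpy as np

    def dual_of(c, A, b):
        """From max c.x s.t. Ax <= b, x >= 0, return the data of the dual
        min b.y s.t. yA >= c, y >= 0 (note yA >= c is A-transpose y >= c)."""
        return np.asarray(b), np.asarray(A).T, np.asarray(c)

    # An m-constraint, n-variable primal yields an m-variable, n-constraint dual.
    obj, M, rhs = dual_of([2, 4, 3, 1],
                          [[3, 1, 1, 4], [1, -3, 2, 3], [2, 1, 3, -1]],
                          [12, 7, 10])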
All we have done is switch symbols around. Here are two unrelated questions
that might have come to mind. What’s the point? (This is a good, all-purpose,
question.) Why stop at taking the dual of the primal, why not take the dual
of the dual and the dual of the dual of the dual and so on? (This is not an
all purpose question and probably would not occur to you unless you are really
interested in logical manipulation.)
The real answer to the first question is that you will see. At least, I hope
you will see. The Primal and the Dual are not just two linear programming
problems formed using the same data. They are intimately related. Knowing
something about one problem tells you something about the other. The mathe-
matical relationship is described in what is called the Duality Theorem of Linear
Programming. I will have a lot to say about this theorem.
The answer to the second question is simple. Before I give the answer, let
me explain the question in more detail. I gave you a definition of the dual of a
linear programming problem. Since any LP can be written in the standard form
above, any LP has a dual. Since the dual of a LP is itself a LP, it has a dual.
So we could keep on taking duals forever. Except that if you take the trouble to
find the dual of the dual (in order to use the definition of duality you would need

to change the objective function from min to max and reverse the inequalities
in the constraints), you would find that you get right back to the Primal. If you
“operate” on a LP twice by taking duals you get right back where you started
from. Verifying this is a simple exercise in problem transformations. Try it.

2 Example
In this section I will take a Linear Programming problem and write its dual.
This simple exercise builds on the section on problem transformations.
Earlier (in the section on Problem Transformations) we started with the
problem:

min 4x1 + x2
subject to −2x1 + x2 ≥ 6
x2 + x3 = 4
x1 ≥ −4
x2 x3 ≥ 0
.

Suppose you want to find the dual of this problem. The first thing to do is
write it in the form:

max c · x subject to Ax ≤ b, x ≥ 0

Once the problem is in this form, you can apply the definition of the dual.
We already rewrote the problem (at the end of the previous section) c =
(−4, 4, −1, 0), b = (−6, 4, −4, 4) and

A = [  2  −2  −1   0
       0   0   1   1
       0   0  −1  −1
      −1   1   0   0 ].

Hence the dual of the problem is

min b · y subject to yA ≥ c, y ≥ 0.

Expanding this out, we have that the dual is to find y to solve:

min −6y1 + 4y2 − 4y3 + 4y4
subject to 2y1 − y4 ≥ −4
−2y1 + y4 ≥ 4
−y1 + y2 − y3 ≥ −1
y2 − y3 ≥ 0
y ≥ 0.

3 The Duality Theorem
3.1 Statement
In this subsection, I will state the theorem and try to explain what it implies.

Theorem 1 If problem (P) has a solution x∗ , then problem (D) also has a
solution (call it y ∗ ). Furthermore, the values of the problems are equal: c · x∗ =
b · y ∗ . If problem (P) is unbounded, then problem (D) is not feasible.
Similarly, if problem (D) has a solution y ∗ , then problem (P) also has a
solution (call it x∗ ). Furthermore, the values of the problems are equal: c · x∗ =
b · y ∗ . If problem (D) is unbounded, then problem (P) is not feasible.

The Duality Theorem states that the problems (P) and (D) are intimately re-
lated. One way to think about the relationship is to create a table of possibilities.

P\D             unbounded    has solution    not feasible
unbounded       no           no              possible
has solution    no           same values     no
not feasible    possible     no              possible

Each of the three rows represents one of three (exclusive and exhaustive)
properties of the Primal. That is, (P) is either unbounded, has a solution, or is
infeasible. The columns represent the same features of the Dual. If I grabbed
two LPs at random, any one of the nine cells could happen. For example, both
problems could be unbounded. If the two problems are related by duality,
then five of the nine boxes are impossible. Since one box must be true in each
row and in each column, at least three of the boxes must be possible. What
duality does, therefore, is rule out all but one possibility. If you are told that
(P) is unbounded, then you know that (D) can’t be feasible. If you are told
that (P) has a solution, then you know that (D) has one too. Only when (P)
is not feasible are you uncertain. Maybe (D) is infeasible too or maybe (D) is
unbounded.
Much of this information can be summarized in a smaller table. Remember
that every LP is either feasible or not feasible and that feasible problems either
are unbounded or have a solution. The next table just divides problems into
feasible and infeasible.

P\D            feasible                              not feasible
feasible       both have solutions; values equal     P unbounded
not feasible   D unbounded                           possible

This table shows that if you know that both problems are feasible, then
you know that neither problem is unbounded or, equivalently, that both have
solutions. More than that, the values of the solutions are equal.

3.2 Why is the Duality Theorem True?
The Duality Theorem is a piece of mathematics. It requires a mathematical
proof. I will spare you the details. You do not need to know the proof. One
way to prove the theorem is to examine the simplex algorithm really carefully.
It turns out that the algorithm solves (P) and (D) simultaneously. We will
see that Excel spits out the solution to the dual whenever it solves a problem.
This “proof” requires little more than high-school algebra and a willingness to
tolerate a logical argument. That is, it is “elementary.” The other proof uses a
tool from convex analysis called the separating hyperplane theorem. It is within
the grasp of an undergraduate math major. Although it is not relevant for a
Management Science major, ask and I’ll give you references.
Even though I am not going to prove the theorem, I should make an attempt
to tell you why it is true. On the surface it is really amazing. Why should the
two problems be so closely related? A lazy answer is that they both use the
same data, so they must be related somehow. Here is a slightly better answer.
By a feasible value for (P) I mean a number v with the property that
there is some x with Ax ≤ b and x ≥ 0 such that v = c · x. In words, if v
is a feasible value that means that there is some x that satisfies the constraints
that yields objective function value v. I define a feasible value for (D) similarly.
I claim that any feasible value for (P) is less than or equal to any feasible
value for (D). In symbols:

if Ax ≤ b, x ≥ 0, yA ≥ c and y ≥ 0, then c · x ≤ b · y.

You have seen the proof of this already (in the discussion of the diet problem).
All you do is multiply Ax ≤ b on both sides by y and multiply yA ≥ c on both
sides by x to get
c · x ≤ yAx ≤ b · y.
For the time being, forget about the yAx in the middle (we’ll get back to it in
the next section). We can conclude,

c · x ≤ b · y.

This is just what I claimed.


This inequality tells you much of the duality theorem. Suppose that (P) was
feasible. That means that there is an x that satisfies the constraints of (P). So
the value of (D) is bounded below. That is, (D), a minimization problem, cannot
be unbounded. Similarly, if (D) is feasible, then (P) cannot be unbounded. The
Duality Theorem goes on to say two things. First, if one problem has a solution,
then the other one does. Second, if the problems have solutions, then the values
are equal. These facts are mysterious, although you may gain a greater intuition
for them after you have interpreted some duals.
The purpose of this subsection was to provide a bit of insight into why the
Duality Theorem is true. I did this by proving an easy fact, which takes you
part of the way to the conclusion of the Duality Theorem.

3.3 Using the Duality Theorem
The Duality Theorem tells you that the behavior of one LP is related to the
behavior of another LP. One useful way to employ the theorem is to conclude
that since both primal and dual are feasible, both must have solutions. For
example, take the diet problem. You can see that the diet problem is feasible
without computation. Provided that it is possible to supply every nutrient (in
symbols, this means for each nutrient i there exists a food j such that aij > 0),
you can satisfy all nutritional constraints simply by buying enough food. (In
symbols: for every nutrient i select a food j that supplies i - so that aij > 0 - and
buy bi /aij units of food j.) Similarly, when the prices of food are positive, it is clear
that the dual problem (the Pill Problem) is feasible: the pill seller can satisfy
all constraints by setting all nutrient prices to zero. From this the Duality
Theorem tells you that both problems have solutions. (This kind of reasoning
does not compute the solution, of course, but it gives you a strong clue about
whether the problem is well posed.) The logic is general too. It depends on
some intuitive general properties of the nutritional requirements and of prices,
rather than specific information about the data of the problem.
The Duality Theorem can also be a useful way to identify whether a problem
is unbounded or infeasible. Consider the following pair of problems:

max x1 + x2
subject to −3x1 + 2x2 ≤ −1
x1 − x2 ≤ 2
x ≥ 0.

The dual is:


min −y1 + 2y2
subject to −3y1 + y2 ≥ 1
2y1 − y2 ≥ 1
y ≥ 0.
Naturally, you could graph each of these problems and easily determine their
solution. You could solve them using the simplex algorithm (either by hand or
with Excel). Let me point out what common sense and the Duality Theorem
tell you.
First note that the primal is feasible. For example, the point (x1 , x2 ) = (1, 0)
satisfies the constraints. (How did I know this? I set x2 = 0 and observed that
the constraints reduced to 1/3 ≤ x1 ≤ 2. There are a lot of other feasible things.
You could find them all by graphing the constraints.) The point is that for small
problems it is often easy to figure out a point in the feasible set by common
sense. In “abstract” problems formulated from economic principles (like the
diet problem), it is also possible to determine feasibility from the logic of the
problem. Now that we know that the primal is feasible, what do we do next?
Since this is an exercise in using the Duality Theorem, I propose that you
look at the Dual. I claim that the Dual is infeasible. In general, it is painful

to confirm it. In this example, one bit of cleverness (or a graph) confirms this.
Suppose that y = (y1 , y2 ) actually satisfies the constraints of the dual. It would
necessarily satisfy the constraints if I added them together. Do it. If you add the
two resource constraints in the dual together you get: −y1 ≥ 2. This inequality
implies that y1 ≤ −2, which is inconsistent with the non-negativity constraint.
Hence the dual is infeasible. From this we can immediately conclude that the
Primal is unbounded. (Why? The Duality Theorem says that if one problem
is infeasible, the other problem is either infeasible or unbounded. We already
checked that the primal was feasible, so it must be unbounded.)
Did you think that the trick of adding the constraints in the dual was too
magical? Maybe you are right. You have options: you can solve problems
directly (graphing, Excel, . . . ); you can practice (sometimes magic is what you
call the unfamiliar); or, in this case, you can try to use common sense in a
different way.
Go back to the primal. You have decided that it is feasible. Could it be
unbounded (remember: we are assuming that you have not already solved the
problem or figured out that the dual is infeasible)? Yes, if you can make either
one or both of the variables in the objective function arbitrarily large. Can you
do that? Maybe. Although x1 enters the first constraint with a positive sign
and x2 enters the second constraint with a positive sign, in both constraints
you can subtract the other variable. That is, maybe you can make x1 large if
you make x2 large at the same time. Indeed, this is true. There are many ways
to see this, but one is to imagine that x1 = x2 . Under this assumption, the
objective function is max 2x1 and the constraints simplify to −x1 ≤ −1 and
x1 ≥ 0 (the second resource constraint becomes 0 ≤ 2, which is always true).
The conclusion is that the primal is unbounded. You can make the value of
the problem equal to 2K by setting x1 = x2 = K. This choice of x is feasible
(provided K ≥ 1) and there is no limit to the value of the objective function you
can get. The discussion in this paragraph said nothing about the dual. It was a
direct, “common sense” explanation of why the original problem is unbounded.
On the basis of this discussion we can conclude (using the Duality Theorem)
that the dual is infeasible.
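
If you would rather not rely on cleverness, the two problems can be handed to
a solver and the diagnosis read off the status codes (a sketch assuming scipy,
whose linprog reports status 2 for infeasible and 3 for unbounded; both
problems are rewritten in ≤ form for A_ub):

    from scipy.optimize import linprog

    # Primal: max x1 + x2 (negated for minimization) with the constraints above.
    primal = linprog([-1, -1], A_ub=[[-3, 2], [1, -1]], b_ub=[-1, 2])
    # Dual: min -y1 + 2y2 with its >= constraints multiplied by -1.
    dual = linprog([-1, 2], A_ub=[[3, -1], [-2, 1]], b_ub=[-1, -1])
    print(primal.status, dual.status)   # expect 3 (unbounded) and 2 (infeasible)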
The insight is that the Duality Theorem allows you to infer something that
may not be obvious. There are three kinds of inference.
1. Observe that both primal and dual are feasible. Conclude: both have
solution.
2. Observe that primal is feasible and dual is not. Conclude: primal is
unbounded.
3. Observe that primal is unbounded. Conclude: dual is infeasible.
(The second and third observations remain true if you interchange the words
primal and dual.)

4 Complementary Slackness
The Duality Theorem implies a relationship between the primal and dual that
is known as complementary slackness. I will try to explain the term first. Recall
that the number of variables in the dual is equal to the number of constraints
in the primal and the number of constraints in the dual is equal to the number
of variables in the primal. This correspondence suggests that variables in one
problem are complementary to constraints in the other. We talk about a
constraint having slack if it is not binding. For an inequality constraint, the
constraint has slack if the slack variable is positive. For a variable constrained
to be non-negative, there is slack if the variable is positive. The term comple-
mentary slackness refers to a relationship between the slackness in a primal
constraint and the slackness (positivity) of the associated dual variable. (Re-
member, this paragraph was only designed to give you an idea of where the
terminology comes from.)
Theorem 2 Complementary Slackness Assume problem (P) has a solution
x∗ and problem (D) has a solution y ∗ .
1. If x∗j > 0, then the jth constraint in (D) is binding.
2. If the jth constraint in (D) is not binding, then x∗j = 0.
3. If yi∗ > 0, then the ith constraint in (P) is binding.
4. If the ith constraint in (P) is not binding, then yi∗ = 0.
The theorem identifies a relationship between variables in one problem and
associated constraints in the other problem. Specifically, it says that if a variable
is positive, then the associated dual constraint must be binding. It also says
that if a constraint fails to bind, then the associated variable must be zero.
The statement really is about “complementary slackness” in the sense that it
asserts that there cannot be slack in both a constraint and the associated dual
variable. I will outline a proof of this statement in the next section; once you
have the duality theorem it is really easy to prove. It is extremely important
to note that the result says that you cannot have slack in two associated places
at the same time (primal variable, dual constraint) or (primal constraint, dual
variable). So: it is possible for a primal constraint to be binding while the
associated dual variable is equal to zero (that is, no slack in two places), but it
is not possible for the primal constraint to have slack (to be non-binding) and
the associated dual variable be positive. While I listed four statements in the
theorem, there really are only two. The second two statements have precisely
the same content as the first two statements, they just switch around the roles
of primal and dual. (Remember, if you started with the Dual, then its dual is
the original Primal.)
The theorem on Complementary Slackness is useful because it helps you
interpret dual problems and dual variables, because it enables you to solve
(easily) the dual of an LP knowing the solution to the primal, and because it
enables you to check whether a feasible "guess" is a solution to a LP.

4.1 Why the Complementary Slackness Condition is True
The theorem on Complementary Slackness is really the Duality Theorem in
disguise. The math behind the result is simple. You do not need to know the
details, but some of you may find this subsection useful.
Earlier I showed that if x is feasible for (P) and y is feasible for (D), then

c · x ≤ yAx ≤ b · y.

Furthermore, if x∗ solves (P) and y ∗ solves (D), then the first and last terms
are equal, so both inequalities must really be equations:

c · x∗ = y ∗ Ax∗ = b · y ∗ .

You can write the first equation as (y ∗ A − c) · x∗ = 0. This expression can
be written in more detail as (y ∗ A − c)1 x∗1 + · · · + (y ∗ A − c)n x∗n = 0, where I
use (y ∗ A − c)j to represent the jth component of y ∗ A − c. For each j, we know
that (y ∗ A − c)j is a nonnegative number (this follows because y ∗ is feasible for
the Dual) and x∗j is nonnegative. So when we multiply them, we must get a
non-negative number. The only way that you can add up a bunch of non-negative
numbers and get zero is if each one of the non-negative numbers is zero.
Consequently, for each j = 1, . . . , n, (y ∗ A − c)j x∗j = 0. This expression says
that when you multiply two numbers ((y ∗ A − c)j and x∗j ) you get zero. This
can only happen if at least one of the numbers is zero. And this is precisely
what the complementary slackness condition says: If x∗j > 0, then the jth dual
constraint binds (that is, (y ∗ A − c)j = 0) and if the jth dual constraint does not
bind (that is, (y ∗ A − c)j > 0), then x∗j = 0. You can derive the other CS
conditions by applying the same kind of reasoning to the equation
y ∗ Ax∗ = b · y ∗ .

4.2 Complementary Slackness and Interpretation of Dual


The most important part about the dual problem is that the dual variables
provide information about quantities relevant to the original problem. You get
different kinds of information. I will discuss the relationship between Comple-
mentary Slackness and the dual in this subsection. Later in these notes I will
have more to say about interpretations of duals.
We have formulated a problem and its dual already. In the first week of
class I presented the diet problem. The problem was to minimize the amount
spent on food subject to meeting nutritional constraints. The given information
for the problem were the costs of food, the nutritional requirements, and the
nutrient content in each of the foods. For this problem, we also formulated
the dual. I presented a rather artificial story in which the problem was to find
prices of nutrient pills to maximize the cost of the nutrients needed subject
to the constraints that nutrients supplied in any food can be purchased more
cheaply by buying pills than by buying the food. I want you to think about
the solution to the dual as providing prices of nutrients in the sense that they

represent how much the food buyer (the person in charge of the cafeteria) is
willing to pay to get the nutrient directly.
Consider what the complementary slackness conditions mean in this context.
Suppose that when you solve the diet problem you find that it is part of the
solution to buy a positive amount of the first food. The complementary slackness
condition then implies that the first dual constraint must be binding. That
means that the price of (one unit of) the first food is exactly equal to the cost
of (one unit of) the nutrients supplied by the food. Why should this be true?
Imagine that the pill seller is really there, offering nutrients at the prices that
solve the dual. If you really buy a positive amount of the first food, then it
must be no more expensive to do that than to get the nutrients through pills.
What if the first dual constraint is not binding? The first dual constraint states
that the price of one unit of the first food should be at least as great as the cost
of the nutrients inside the food. If the constraint does not bind, then it means
that the food is actually more expensive than the nutrients inside it. That is,
the food is strictly more expensive than its nutrient content. If this happens,
then it makes no sense to buy the food: x∗1 = 0.
If a constraint in the primal does not bind, then there is some nutrient that is
oversupplied when you solve the diet problem. Surely this can happen. Imagine
- just for a theoretical example - that every time that you find Vitamin C in
a food you also find an equal amount of Vitamin E (this is not really true!),
but that you only need half as much Vitamin E as Vitamin C. In this case, if
you satisfy the Vitamin C constraint, then you automatically more than satisfy
(or satisfy with slack) the Vitamin E constraint. You cannot possibly satisfy all
nutrient constraints as equations. Now if you found yourself in this situation
(where you get more than enough Vitamin E just meeting your Vitamin C
requirement), how much would you be willing to pay for a Vitamin E pill?
Complementary Slackness says that the answer is zero. If your food supplies
more of a nutrient than you need, then the price of that nutrient - that is, how
much you are willing to pay for the nutrient - is zero. Now suppose that a
dual variable is positive. In that case, you are willing to pay a strictly positive
amount for nutrient supplied by the pill. Complementary Slackness says that
(at a solution) it must be the case that you are supplying exactly the amount
of the nutrient you need (not anything extra).
The complementary slackness conditions guarantee that the values of the
primal and dual are the same. In the diet problem, the pill seller guarantees
that pills are no more expensive than food. CS guarantees that when you solve
the problem, pills cost exactly the same as food for those foods that you actually
buy. (By the way, nothing rules out the possibility that there is a food that costs
exactly as much as its nutritional content, but that you don’t buy the food. If
this happens, there is no slack in either the primal variable or the dual constraint.)

4.3 Using Complementary Slackness to Solve Duals


Let’s take a familiar example (the problem we solved using the simplex algorithm
in the third set of lecture notes).

max 2x1 + 4x2 + 3x3 + x4
subject to 3x1 + x2 + x3 + 4x4 ≤ 12
x1 − 3x2 + 2x3 + 3x4 ≤ 7
2x1 + x2 + 3x3 − x4 ≤ 10
x ≥ 0

We know that the solution to this problem is x0 = 42, x1 = 0, x2 = 10.4,
x3 = 0, x4 = 0.4. We can use this information (and Complementary Slackness
conditions) to solve the dual. In order to do this, we first need to write the dual:

min 12y1 + 7y2 + 10y3


subject to 3y1 + y2 + 2y3 ≥ 2
y1 − 3y2 + y3 ≥ 4
y1 + 2y2 + 3y3 ≥ 3
4y1 + 3y2 − y3 ≥ 1
y ≥ 0

The Complementary Slackness conditions typically allow you to reduce solv-


ing the dual to solving a system of equations with one unknown per equation.
The equations are the constraints that correspond to positive primal variables.
In this example, x2 and x4 are positive. That means that in the solution to the
dual the second and fourth constraints hold as equations. That leaves us two
equations and three unknowns, y1 , y2 , y3 . But there is another CS condition: if a
primal constraint is not binding, then the associated dual variable must be zero.
In this example, the first and third primal constraints bind, but the second one
does not. Hence y2 = 0. So let’s solve the second and fourth dual constraints
using y1 and y3 :
y1 + y3 = 4 and 4y1 − y3 = 1.
It follows that y1 = 1 and y3 = 3. At this point you can check that the
vector y we have constructed, y = (1, 0, 3), is actually feasible for the dual (that
is, y ≥ 0 and the first and third constraints, which we have ignored up until
now, are really satisfied). Moreover, the value of the dual is 42, which is equal
to the value of the primal. Hence we can conclude that (1, 0, 3) is the solution
to the dual.
This exercise required knowing a solution to the primal. Using the solution,
we wrote down the dual and then used complementary slackness to obtain an
easy problem (solving two equations and two unknowns) that led to the dual’s
solution. On one hand, this is an unimportant exercise: We could have solved
the dual directly. On the other hand, it underlines the power of duality. Know-
ing the solution to the primal makes it easier to find a solution to the dual.
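
The arithmetic here is small enough to script. Below is a sketch (numpy
assumed) that solves the two binding dual constraints and confirms the value:

    import numpy as np

    # CS: x2, x4 > 0 means dual constraints 2 and 4 bind; the slack second
    # primal constraint means y2 = 0, leaving two equations in y1 and y3.
    M = np.array([[1.0,  1.0],    #   y1 + y3 = 4
                  [4.0, -1.0]])   # 4 y1 - y3 = 1
    y1, y3 = np.linalg.solve(M, np.array([4.0, 1.0]))
    y = np.array([y1, 0.0, y3])
    print(y, 12 * y[0] + 7 * y[1] + 10 * y[2])   # (1, 0, 3) and 42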

4.4 Using Complementary Slackness to Check Optimality


In the previous section we started with a solved problem and used Complemen-
tary Slackness to find the solution to the dual. If you have a linear program,

you can use the same kind of reasoning to test whether “guesses” really do solve
the problem.
Take the LP:
max x1 − x2
subject to −2x1 + x2 ≤ 2
x1 − 2x2 ≤ 2
x1 + x2 ≤ 5
x ≥ 0.

The dual is:


min 2y1 + 2y2 + 5y3
subject to −2y1 + y2 + y3 ≥ 1
y1 − 2y2 + y3 ≥ −1
y ≥ 0

Suppose I claimed that (1, 4) solved the primal. How could you check this?
Of course, you could solve the problem (by graphing, using the simplex algo-
rithm, or using Excel). You could also use complementary slackness. First you
check feasibility of (1, 4) in the primal. If it is not feasible, then it can’t be
a solution. You observe that both components are positive, so (1, 4) satisfies
the non-negativity constraint. You observe also that this guess also satisfies the
resource constraints: the first and third constraints are satisfied as equations,
while the second constraint does not bind. Now go to the dual. If (1, 4) solves
the primal, then CS implies that both dual constraints hold as equations in
the solution to the dual (since both primal variables are strictly positive) and
y2 = 0 (since the second primal constraint is not binding). Hence, solving the
dual involves solving the dual constraints (as equations) using y1 and y3 . When
you do so, you find y1 = −2/3 and y3 = −1/3. So, assuming that (1, 4) solves
the primal leads to the conclusion that y = (−2/3, 0, −1/3) solves the dual. This
choice of y is not feasible for the dual (it violates the non-negativity constraints).
The only conclusion is that (1, 4) did not solve the primal after all. Using the
same reasoning, you can check that the solution to the primal is (4, 1). The dual
solution is y = (0, 2/3, 1/3). The value of both problems is 3.
This procedure is general. Start with a “guess” for the primal. Check feasi-
bility. If infeasible, then it cannot be a solution. If feasible, use CS conditions
to reduce dual to a system of equations. Solve these equations to get guesses
for dual variables. Check feasibility of the dual guess. If the dual guess is feasi-
ble, then both the original primal guess and the dual guess actually solve their
problems. If the dual guess is not feasible, then neither guess is a solution. One
wrinkle: in the example, we concluded that y was not feasible because it vio-
lated the non-negativity constraints. It is possible that the guess you generate
for the dual is infeasible because it violates one of the resource constraints in
the dual. This couldn’t happen in the example because both primal
variables were positive (so we used all dual constraints to compute the guess).

It could happen if one of the primal guesses was zero. For example, if I had
guessed that x = (2, 0) solved the primal, then I would take y1 = y3 = 0 (first
and third primal constraints not binding) and solve the first dual constraint as
an equation (since x1 > 0) to conclude y2 = 1. Hence I obtain a non-negative
guess for the dual, y = (0, 1, 0). This guess is not feasible, however, because it
does not satisfy the second dual constraint.
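
The whole checking procedure is mechanical, so it can be automated. Below is
a hedged sketch (numpy assumed; the function name is mine) that tests a
primal guess together with a dual guess by checking feasibility of both and the
complementary slackness products:

    import numpy as np

    def verify_by_cs(x, y, c, A, b, tol=1e-9):
        """For max c.x s.t. Ax <= b, x >= 0: return True when x and y are
        feasible for primal and dual and satisfy complementary slackness,
        which certifies that both are optimal."""
        x, y, c, A, b = (np.asarray(v, dtype=float) for v in (x, y, c, A, b))
        primal_ok = np.all(A @ x <= b + tol) and np.all(x >= -tol)
        dual_ok = np.all(y @ A >= c - tol) and np.all(y >= -tol)
        cs_vars = np.all(np.abs((y @ A - c) * x) <= tol)  # x_j > 0 => dual binds
        cs_cons = np.all(np.abs((b - A @ x) * y) <= tol)  # y_i > 0 => primal binds
        return bool(primal_ok and dual_ok and cs_vars and cs_cons)

    c = [1, -1]
    A = [[-2, 1], [1, -2], [1, 1]]
    b = [2, 2, 5]
    print(verify_by_cs([4, 1], [0, 2/3, 1/3], c, A, b))   # True: both optimal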

5 Interpretation of the Dual


The Duality Theorem and the associated Theorem on Complementary Slackness
give us strong reason to believe that the primal and dual are related problems.
The story that I told about the diet problem and the pill problem suggest that
you can construct a story that goes along with the dual that relates it the primal.
I will call such a story an interpretation of the dual.
Finding interpretations of dual programs is an art rather than a science in
that I cannot provide unambiguous mechanical rules for performing the inter-
pretation. It is, however, a valuable art, and clear guidelines are available.
The main reason why interpreting the dual is useful is that dual variables
provide valuable information about the primal. The information is largely what
I have already described in the discussions of the Duality Theorem and Com-
plementary Slackness, but one other insight is necessary.
In this section, I will explain the additional insight, give some general advice
about how to go about interpreting the dual, and then interpret the dual of a
general problem. In order to have a sense of how to interpret a dual, you need
to practice. An ambitious exercise would be to take every formulation problem
you have seen in the course and write and interpret the associated dual problem.

5.1 Dual Variables as Prices


Dual Variables are so important that they have many names. Mathematicians
will call them dual variables. In more general contexts, they may call them La-
grange multipliers (or just multipliers). Economists may call them dual prices,
shadow prices, or implicit prices. These terms all have “price” in them. Let me
try to explain why.
In the diet problem, I interpreted dual variables as prices. Specifically, the
ith dual variable was the price of the ith nutrient. It is a price in a rather
peculiar way. yi represented how much the pill seller would charge for one unit
of the ith nutrient. This price depends only on the context of the problem. It
may not be a market price. (The pill seller may not exist, so that the market
for nutrients may not really exist.) So the prices are implicit because they may
describe transactions that you cannot really make. In the problem, nutrients
are important because you must satisfy nutritional constraints. You can only
satisfy them using available foods. The cost of satisfying them depends on the
price of the different foods. The value of nutrients would change if the price
of a nutrient rich food rose (this would tend to raise nutrient prices); if a new

nutrient rich food were discovered (this might lower nutrient prices); or if the
nutritional requirements changed (this is not enough information to predict the
direction of change in the value of nutrients).
In general, treat dual variables as answers to the question: If the amount
of the constant on the right of some constraint changes, how does the value
of my objective function change? In the diet problem, the question becomes:
If the nutritional requirements change, so that people are required to consume
one more unit of Vitamin E each day, how does the cost of the cost minimizing
diet change? This question is interesting and it does not require a story about
pill sellers to ask. The answer to the question is given by the dual variable
associated with the Vitamin E constraint. It gives the cost of Vitamin E from
the context of the rest of the information in the diet problem.
This interpretation of the dual stems from the “surprising” part of the Du-
ality Theorem: The fact that if you have a solution to both primal and dual,
then values are equal. Suppose that you start with an LP in standard form.
You find its solution and the solution to its dual. Call this the old problem and
denote the solutions x^old and y^old and the associated value V^old. It follows that

V^old = c^old · x^old = b^old · y^old.

Create a new problem and find the solution to it. Call this the new problem
and denote the solutions xnew and y new and the associated value V new . Again
you have
V new = cnew · xnew = bnew · y new .
Now make two assumptions. First, assume that you get from the old problem
to the new problem by changing one of the resource constants, bi , by adding ∆
to it (∆ could be positive or negative). Second, assume that when you make
this change, the solution to the dual does not change, that is, y^new = y^old. You
are free to invent any new problem that you wish. So the first assumption is not
really an assumption, it is a description of how you changed the problem. The
second assumption really is an assumption. It turns out that when you change
the resource constants in the primal, you usually do not change the solution to
the dual (but you do typically change the value of the solution). On the other
hand, changing the resource constraints typically does change the solution to the
primal. For example, if you learned that the government requires that you eat
less Vitamin C, you may buy fewer oranges.
Finally, go back to the expressions above. The change in the value of the
problems, V^new − V^old, is equal to

b^new · y^new − b^old · y^old.

We have assumed that the dual variables do not change, so we can write y^new = y^old = y.
Therefore,

V^new − V^old = (b^new − b^old) · y = ∆yi .

The second equation above comes from the assumption that you get b^new from
b^old by adding ∆ to the ith component of b^old, while leaving the other
components unchanged. The last equation is the algebraic expression of my inter-
pretation of dual variables. The difference in value as you go from old to new
problem is equal to the change in the resource constant times the associated
dual variable. Mathematically, what is neat about this is that you can figure
out the change in value without solving the new LP. Mathematically, the impor-
tant assumption is that when you change the LP, you do not change the solution
to the dual. One expects that “small” changes in bi do not change the solution
to the dual. Hence the restriction that the dual solution does not change means
that the interpretation of dual variables remains valid only for “small” changes
in the level of resources available. When Excel solves an LP, it will tell you just
how much you can change the amounts of available resources without changing
the solution to the dual.
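
To see this numerically, here is a small sketch in Python (the numbers are invented; scipy's linprog with the HiGHS solver is assumed, and HiGHS reports dual values as constraint "marginals"):

```python
# Check the formula V^new - V^old = Delta * y_i on an invented LP.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])                  # objective coefficients (invented)
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])           # resource constants (invented)

def value_and_duals(rhs):
    # linprog minimizes, so negate c; x >= 0 is the default bound
    res = linprog(-c, A_ub=A, b_ub=rhs, method="highs")
    return -res.fun, -res.ineqlin.marginals   # flip signs back for a max problem

V_old, y = value_and_duals(b)
delta = 0.5                               # a "small" change in b_3
b_new = b.copy()
b_new[2] += delta
V_new, _ = value_and_duals(b_new)
print(V_new - V_old, delta * y[2])        # the two numbers should agree
```

As long as ∆ is small enough that the dual solution does not change, the two printed numbers coincide; for a large ∆ the formula can fail, which is exactly the "small changes only" caveat above.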

5.2 Interpreting the Dual: Watch your Units


I cannot give you precise rules for interpreting a dual LP. One feature is common
to all problems. Since the values of primal and dual are equal, they have the
same units. Usually, values come in units of money. In the diet problem, the
value of the primal is the amount you spend on food. The value of the dual
is the amount the pill seller can earn selling pills. In both cases, the units are
monetary units. You arrive at the units differently. You are typically given
the data of the problem (A, b, and c). This means that you know the units of
these quantities. You define the primal variables, x. These variables must have
units that are compatible with the rest of the problem. In the diet problem,
the objective function coefficients are in the units of price per quantity of food
(for example, hamburger sells for $2.00 per pound). In order to translate this
into monetary units, you must multiply it by a quantity of food (so that x1
might be a number of pounds of hamburger). When you formulate a problem,
you define the primal variables, but the problem itself “forces” you to choose
the fundamental unit. The same insight applies to the dual variables. You pick
them, but the problem itself has determined their units. In the diet problem
you know that the nutrient constraints involve requirements bi that are given
in units like “grams of Vitamin C.” Since bi yi must be an amount of money (as
it is a term in the objective function), it must be the case that yi is in units
of “dollars per gram of Vitamin C.” Keeping this in mind does not provide
a description of the dual, but it does give you a start. It also allows you to
recognize nonsense (inappropriate units) when you see them.

5.3 An Example: Production Problem


A standard kind of LP is a problem that asks you to find a production plan that
maximizes profit from a variety of processes given available resources. You have
seen versions of this general problem. One of the practice formulation problems
asks you to mix together varieties of nuts to make different blends that can
be sold at different prices. On the first homework assignment you figured out
how many coffins of each variety to make. In general, imagine that there

are n different production activities. If you operate activity j at unit level (for
example, you make one can of the deluxe mixture of nuts), then you can sell
it for cj . Your production processes use m different basic resources. You have
the amount bi of resource i. When you operate activity j at unit level, you
use up some of the basic resources. The technology matrix A describes this
information. The entry aij of the matrix A is the amount of basic resource i
needed to operate the jth activity at unit level. The problem is to find a production
plan that maximizes profit using only the available resources. Assuming that
the technology exhibits constant returns to scale and additivity, this becomes a
standard kind of LP. Find x = (x1 , . . . , xj , . . . , xn ) to solve:

max c · x subject to Ax ≤ b, x ≥ 0.
The objective function just adds up the profits earned from operating each
of the activities. That is, if you operate activity 1 at level x1 , then you earn
c1 x1 from it. Total profit comes from adding the profit of each of the activities.
The resources constraints state that the production plan does not use up more
of any resource than is available. If you follow the production plan x, then

∑_{j=1}^n aij xj

is the amount of the ith resource consumed. So you need to have

∑_{j=1}^n aij xj ≤ bi

for all i = 1, . . . , m. The non-negativity constraint states that you cannot
operate a production process at a negative level.
Once you have formulated the problem, you can ask whether it is feasible
and, if so, whether the LP has a solution. In this case, feasibility is satisfied if
(as seems sensible) all resources are available in non-negative quantities (that
is, b ≥ 0). In this case, x = 0 is an element of the feasible set. (The feasible set
may be non-empty even if one or more of the bi are negative. Feasibility would
be harder to check in this case. Besides, this case doesn’t fit with the story.)
The LP will have a solution unless it is unbounded. In order for the LP to be
unbounded, you would have to be able to produce something in arbitrarily large
amounts (that is, one of the components of x would be able to grow to infinity).
A simple way to rule this out is to assume that every entry in the matrix A
is strictly positive. There are weaker assumptions that would be sufficient to
guarantee that the problem is bounded.
Since the production problem is an LP in standard form, you know that the
mathematical form of the dual is
min b · y subject to yA ≥ c, y ≥ 0.
The point of this section is to give this problem an interpretation. Imagine
that you are the owner of a firm that can operate the n production processes,

can sell output at prices c, and has available the resources b. If you choose to
use your production processes, then you should pick a production plan x to solve
the original LP. Now imagine that someone offers to buy out your inventory of
basic resources (b). If you sell, then you can’t operate your production process.
How might this outsider convince you to sell out? One way to do it is if the
potential buyer sets prices for each of these resources. Call the unit price of the
ith resource yi . The buyout artist comes to you and says: “I want to buy your
complete inventory. I will pay you yi for each unit of the ith resource. Moreover,
I will set my prices high enough so that you earn at least as much selling your
resources to me as you would turning your resources into final products (using
your production technology) and then selling them at the prices c.” The buyout
artist then argues that her prices for resources are sufficiently high that the
price of a unit of final output, cj , is never more than the price she’ll pay for the
the ingredients used in that output. Algebraically, she guarantees that for each
j = 1, . . . , n,
Xm
aij yi ≥ cj .
i=1

Of course, this constraint is just a representative constraint of the dual.


Hence the dual of the problem can be interpreted as the buyout artist’s problem.
Set prices (of basic resources) y = (y1 , . . . , yi , . . . , ym ) to minimize what the
buyout artist pays to acquire the resources (y · b) subject to the constraint that
the buyout artist always pays at least as much for the resources as you could
get from transforming the resources into a final product.
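
To make the story concrete, here is a sketch in Python (numbers invented; scipy's linprog assumed) that solves a tiny production problem and the buyout artist's problem side by side:

```python
# A tiny production problem (invented data): 3 activities, 2 resources.
import numpy as np
from scipy.optimize import linprog

c = np.array([5.0, 4.0, 3.0])      # revenue per unit level of each activity
A = np.array([[2.0, 3.0, 1.0],     # aij: units of resource i per unit of activity j
              [4.0, 1.0, 2.0]])
b = np.array([10.0, 8.0])          # available stock of each resource

# The producer's problem: max c.x s.t. Ax <= b, x >= 0 (negate c to minimize).
primal = linprog(-c, A_ub=A, b_ub=b, method="highs")

# The buyout artist's problem: min b.y s.t. (A^T)y >= c, y >= 0.
# Multiply the >= constraints by -1 to fit linprog's <= convention.
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")

print(-primal.fun, dual.fun)       # equal, as the Duality Theorem promises
print(dual.x)                      # the buyout artist's resource prices y
```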
This kind of story should sound familiar. The tale I told about the dual of the
diet problem was almost the same. The nature of the story should tell you things
that you know from the Duality Theorem. The value of the original problem
must be no greater than the value of the dual. The surprising implication of the
Duality Theorem is that when you solve these problems the values are equal.
The dual variables in this formulation are prices of the basic inputs. Al-
though they are prices, they may have nothing to do with market supply and
demand. They depend only on the basic information of the problem (A, b, c).
The sense in which they are prices is that they value the inputs to the produc-
tion process. That is, they provide answers to questions like: How much is the
first ingredient worth to you (if you have access to the technology that turns
inputs into outputs according to the matrix A, b is your list of available inputs,
and c are the prices at which you can sell final outputs). If A, b, or c changes,
then you would expect the “price” of ingredients to change too.
The buyout story is contrived, but it is accurate. If someone offers to buy a
little bit of the first input, then you would accept if you are offered y1 or more
(per unit). Similarly, if you could buy more of the first input at less than y1 ,
it would be profitable for you to do so. The identity between the value of the
primal and the value of the dual tells you that the ingredients, when evaluated
according to the prices y, are worth exactly as much as the final product.
Let me repeat the Complementary Slackness exercise. If, when you solve the
production problem, you find that you don’t use all of your supply of the first

resource, that means that you would gladly sell a bit of that resource at any
positive price. On the other hand, you would not pay a positive amount for
more of this resource. Hence y1 = 0. Notice that this doesn’t mean that you
would never place a positive value on the first resource. If you have 100 units
of the resource and your production plan uses 90, then you would be willing to
give away (sell for the price y1 = 0) ten units, but further sales would interfere
with your production plan. When the first input becomes scarce enough, its
price becomes positive. Similarly, you would expect that if prices of outputs
change, then the value of the inputs would change as well. Suppose that the
first input is the primary ingredient of the nth output. If the price of the nth
output goes up, you might want to change your production plan and make
more of the nth output. This would increase your demand for the first input.
If demand increases enough to make the first resource constraint bind, then y1
would become positive.
You can tell similar stories for the other Complementary Slackness condi-
tions. For example, suppose that you produce positive amounts of the nth good
when you solve the problem (xn > 0). The dual constraint says that the amount
you can sell the nth good for is no greater than the value of its ingredients. If
you really could sell the ingredients, then you would do so (instead of setting
xn > 0) unless the nth constraint of the dual is binding. Conversely, if a dual
constraint fails to bind, then the corresponding production level in the primal
must be zero.
It is useful to think of the dual variables as prices. The prices
tell you the value of the resources from the point of view of the technology and
output prices that describe the primal. Knowing the dual variables tells you how
much you gain (lose) from an increase (decrease) in one of the basic resources.
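
The Complementary Slackness conditions described above are easy to verify numerically. The sketch below re-solves the invented production problem from the previous snippet and checks both sets of conditions (each printed product should be zero, up to rounding):

```python
# Numerical check of Complementary Slackness on the invented production LP.
import numpy as np
from scipy.optimize import linprog

c = np.array([5.0, 4.0, 3.0])
A = np.array([[2.0, 3.0, 1.0], [4.0, 1.0, 2.0]])
b = np.array([10.0, 8.0])

x = linprog(-c, A_ub=A, b_ub=b, method="highs").x
y = linprog(b, A_ub=-A.T, b_ub=-c, method="highs").x

print(y * (b - A @ x))      # y_i = 0 whenever resource i is not fully used
print(x * (A.T @ y - c))    # x_j = 0 whenever the jth dual constraint is slack
```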

6 Hints on Writing Duals


This section is optional. Knowing its contents gives you insight into duality, but
you can do everything I want you to be able to do without knowing its contents.
(The next paragraph explains why.)
You know how to write the dual of any LP written in the form:

max c · x subject to Ax ≤ b, x ≥ 0.

Furthermore, you know that you can write any LP in this form. Hence, in
theory, there is no problem: you can find the dual of any LP. These general
tricks are good to know, and they enable you to write the dual of any problem,
but in practice it is useful to recognize a few short cuts.
Imagine that you have a linear programming problem in which the objective
is to maximize something and the constraints are either equations or “less than
or equal to a constant (≤ bi ).” Some variables may be explicitly constrained
to be non-negative. Others may be free (unconstrained). In this case, you’ll
be able to write the dual using one variable for each resource constraint in the

primal. The dual objective will be to minimize ∑_{i=1}^m bi yi (where there are m
resource constraints in the primal; the bi are the right-hand side constants; and
the yi are the dual variables). The constraints of the dual will look the same
as usual, except that unconstrained variables in the primal give rise to equality
constraints in the dual (constrained primal variables give rise to ≥ cj constraints
as before). Further, the dual variables corresponding to inequality constraints in
the primal are explicitly constrained to be non-negative, while the dual variables
corresponding to equality constraints in the primal are not constrained at all.

Primal Dual
max min
equality constraint unconstrained variable
≤ constraint non-negative variable
non-negative variable ≥ constraint
unconstrained variable equality constraint

You can confirm these claims by taking the original problem, transforming
it to the standard form, taking the dual of that problem, and then simplifying
what you get. These steps are tedious, but a useful exercise in writing the dual.
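
For instance (with made-up numbers), take the primal

max 3x1 + 2x2 subject to x1 + x2 ≤ 4, x1 − x2 = 1, x1 ≥ 0, x2 unconstrained.

Reading off the table: the ≤ constraint gets a non-negative dual variable y1 , the equality constraint gets an unconstrained y2 , the non-negative x1 produces a ≥ constraint, and the free x2 produces an equality. The dual is therefore

min 4y1 + y2 subject to y1 + y2 ≥ 3, y1 − y2 = 2, y1 ≥ 0, y2 unconstrained.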
It is useful to try to understand the transformation knowing what we know
about dual variables and complementary slackness. Take a constraint of the
form “blah, blah, blah ≤ bi ” in a maximization problem. You know several
things about the associated dual variable, yi . First, yi ≥ 0. Second, yi measures
the increase in value of the objective function in the primal associated with an
increase in the right-hand side constant bi . Third, by complementary slackness,
if yi > 0, then the constraint binds and if the constraint does not bind, then
yi = 0. These properties are related. The second one is really the most powerful.
If you accept that yi measures how the primal objective function will change if
bi increases, then you know that yi must be nonnegative. Increasing bi (when
the ith constraint is of the form “blah, blah, blah ≤ bi ”) must not lower the
value of the objective function (you can use the original solution). If changing
bi changes your solution, then your value goes up. If changing bi does not change
your solution, then your value stays the same. Going down is not possible. So
yi ≥ 0. Now consider the third property. If when you solve the original problem,
the ith constraint does not bind, then increasing bi more is not going to help
you. Hence yi = 0. On the other hand, if yi > 0, then it must be the case that
a little bit more bi would help you and a little bit less bi would hurt you. So,
you must be using all of the ith resource.
Consider these statements when the ith constraint is an equation. Continue
to assume that yi measures how the primal objective function will change if bi
increases. Now you have no way of knowing whether an increase in bi is good or
bad. That is, there is nothing in the problem that tells you what sign yi should
have. Hence it makes sense that the dual variable associated with an equality
constraint should be unconstrained. Also, you know that the complementary

slackness condition always holds no matter what yi is equal to (the primal
constraint must be binding).

Linear Programming Notes VII
Sensitivity Analysis
1 Introduction
When you use a mathematical model to describe reality you must make ap-
proximations. The world is more complicated than the kinds of optimization
problems that we are able to solve. Linearity assumptions usually are significant
approximations. Another important approximation comes because you cannot
be sure of the data that you put into the model. Your knowledge of the relevant
technology may be imprecise, forcing you to approximate values in A, b, or c.
Moreover, information may change. Sensitivity analysis is a systematic study
of how sensitive (duh) solutions are to (small) changes in the data. The basic
idea is to be able to give answers to questions of the form:

1. If the objective function changes, how does the solution change?


2. If resources available change, how does the solution change?
3. If a constraint is added to the problem, how does the solution change?
One approach to these questions is to solve lots of linear programming problems.
For example, if you think that the price of your primary output will be between
$100 and $120 per unit, you can solve twenty different problems (one for each
whole number between $100 and $120).1 This method would work, but it is
inelegant and (for large problems) would involve a large amount of computation
time. (In fact, the computation time is cheap, and computing solutions to
similar problems is a standard technique for studying sensitivity in practice.)
The approach that I will describe in these notes takes full advantage of the
structure of LP programming problems and their solution. It turns out that you
can often figure out what happens in “nearby” linear programming problems
just by thinking and by examining the information provided by the simplex
algorithm. In this section, I will describe the sensitivity analysis information
provided in Excel computations. I will also try to give an intuition for the
results.

2 Intuition and Overview


Throughout these notes you should imagine that you must solve a linear pro-
gramming problem, but then you want to see how the answer changes if the
problem is changed. In every case, the results assume that only one thing
about the problem changes. That is, in sensitivity analysis you evaluate
what happens when only one parameter of the problem changes.
1 OK, there are really 21 problems, but who is counting?

To fix ideas, you may think about a particular LP, say the familiar example:

max 2x1 + 4x2 + 3x3 + x4


subject to 3x1 + x2 + x3 + 4x4 ≤ 12
x1 − 3x2 + 2x3 + 3x4 ≤ 7
2x1 + x2 + 3x3 − x4 ≤ 10
x ≥ 0

We know that the solution to this problem has value x0 = 42, with x1 = 0,
x2 = 10.4, x3 = 0, and x4 = 0.4.

2.1 Changing Objective Function


Suppose that you solve an LP and then wish to solve another problem with the
same constraints but a slightly different objective function. (I will always make
only one change in the problem at a time. So if I change the objective function,
not only will I hold the constraints fixed, but I will change only one coefficient
in the objective function.)
When you change the objective function it turns out that there are two cases
to consider. The first case is the change in a non-basic variable (a variable that
takes on the value zero in the solution). In the example, the relevant non-basic
variables are x1 and x3 .
What happens to your solution if the coefficient of a non-basic variable
decreases? For example, suppose that the coefficient of x1 in the objective
function above was reduced from 2 to 1 (so that the objective function is:
max x1 + 4x2 + 3x3 + x4 ). What has happened is this: You have taken a
variable that you didn’t want to use in the first place (you set x1 = 0) and then
made it less profitable (lowered its coefficient in the objective function). You
are still not going to use it. The solution does not change.

Observation: If you lower the objective function coefficient of a non-basic
variable, then the solution does not change.
What if you raise the coefficient? Intuitively, raising it just a little bit should
not matter, but raising the coefficient a lot might induce you to change the value
of x in a way that makes x1 > 0. So, for a non-basic variable, you should expect
a solution to continue to be valid for a range of values for coefficients of non-
basic variables. The range should include all lower values for the coefficient and
some higher values. If the coefficient increases enough (and putting the variable
into the basis is feasible), then the solution changes.
What happens to your solution if the coefficient of a basic variable (like x2 or
x4 in the example) decreases? This situation differs from the previous one in that
you are using the basis variable in the first place. The change makes the variable
contribute less to profit. You should expect that a sufficiently large reduction
makes you want to change your solution (and lower the value the associated
variable). For example, if the coefficient of x2 in the objective function in the
example were 2 instead of 4 (so that the objective was max 2x1 +2x2 +3x3 +x4 ),

maybe you would want to set x2 = 0 instead of x2 = 10.4. On the other hand, a
small reduction in x2 ’s objective function coefficient would typically not cause
you to change your solution. In contrast to the case of the non-basic variable,
such a change will change the value of your objective function. You compute
the value by plugging x into the objective function: if x2 = 10.4 and the
coefficient of x2 goes down from 4 to 2, then the contribution of the x2 term to
the value goes down from 41.6 to 20.8 (assuming that the solution remains the
same).
If the coefficient of a basic variable goes up, then your value goes up and you
still want to use the variable, but if it goes up enough, you may want to adjust x
so that x2 is even larger. In many cases, this is possible by finding another
basis (and therefore another solution). So, intuitively, there should be a range
of values of the coefficient of the objective function (a range that includes the
original value) in which the solution of the problem does not change. Outside of
this range, the solution will change (to lower the value of the basic variable for
reductions and to increase its value for increases in its objective function coefficient).
The value of the problem always changes when you change the coefficient of a
basic variable.
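
A brute-force way to see these ranges, in the spirit of the "solve lots of LPs" approach from the introduction, is to re-solve the example for several values of x2 's coefficient. A sketch (scipy assumed):

```python
# Sweep the objective coefficient of x2 in the example LP and re-solve.
import numpy as np
from scipy.optimize import linprog

A = np.array([[3, 1, 1, 4], [1, -3, 2, 3], [2, 1, 3, -1]])
b = np.array([12, 7, 10])

for c2 in [1.0, 2.0, 4.0, 8.0]:
    c = np.array([2.0, c2, 3.0, 1.0])
    res = linprog(-c, A_ub=A, b_ub=b, method="highs")
    # The solution stays x = (0, 10.4, 0, 0.4) until c2 falls far enough;
    # the value changes with every change in a basic variable's coefficient.
    print(c2, np.round(res.x, 2), round(-res.fun, 2))
```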

2.2 Changing a Right-Hand Side Constant


We discussed this topic when we talked about duality. I argued that dual prices
capture the effect of a change in the amounts of available resources. When
you changed the amount of resource in a non-binding constraint, then increases
never changed your solution. Small decreases also did not change anything, but
if you decreased the amount of resource enough to make the constraint binding,
your solution could change. (Note the similarity between this analysis and the
case of changing the coefficient of a non-basic variable in the objective function.)
Changes in the right-hand side of binding constraints always change the
solution (the value of x must adjust to the new constraints). We saw earlier
that the dual variable associated with the constraint measures how much the
objective function will be influenced by the change.

2.3 Adding a Constraint


If you add a constraint to a problem, two things can happen. Your original
solution satisfies the constraint or it doesn’t. If it does, then you are finished. If
you had a solution before and the solution is still feasible for the new problem,
then you must still have a solution. If the original solution does not satisfy the
new constraint, then possibly the new problem is infeasible. If not, then there
is another solution. The value must go down. (Adding a constraint makes the
problem harder to satisfy, so you cannot possibly do better than before). If your
original solution satisfies your new constraint, then you can do as well as before.
If not, then you will do worse.2
2 There is a rare case in which originally your problem has multiple solutions, but only
some of them satisfy the added constraint. In this case, which you need not worry about,
your value will stay the same.
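
In code, adding a constraint is just one more row of A and one more entry of b. A sketch with an invented extra constraint (scipy assumed):

```python
# Add an invented constraint x1 + x2 + x3 + x4 <= 10 to the example and re-solve.
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 4.0, 3.0, 1.0])
A = np.array([[3, 1, 1, 4], [1, -3, 2, 3], [2, 1, 3, -1]])
b = np.array([12, 7, 10])

before = linprog(-c, A_ub=A, b_ub=b, method="highs")

A_new = np.vstack([A, np.ones(4)])   # the added row
b_new = np.append(b, 10.0)
after = linprog(-c, A_ub=A_new, b_ub=b_new, method="highs")

# The old solution has x2 + x4 = 10.8 > 10, so it violates the new
# constraint and the value must fall below 42.
print(-before.fun, -after.fun)
```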

2.4 Relationship to the Dual
The objective function coefficients correspond to the right-hand side constants
of resource constraints in the dual. The primal’s right-hand side constants
correspond to objective function coefficients in the dual. Hence the exercise of
changing the objective function’s coefficients is really the same as changing the
resource constraints in the dual. It is extremely useful to become comfortable
switching back and forth between primal and dual relationships.

3 Understanding Sensitivity Information Provided by Excel
Excel permits you to create a sensitivity report with any solved LP. The report
contains two tables, one associated with the variables and the other associated
with the constraints. In reading these notes, keep the information in the sensi-
tivity tables associated with the first simplex algorithm example nearby.

3.1 Sensitivity Information on Changing (or Adjustable) Cells
The top table in the sensitivity report refers to the variables in the problem. The
first column (Cell) tells you the location of the variable in your spreadsheet; the
second column tells you its name (if you named the variable); the third column
tells you the final value; the fourth column is called the reduced cost; the fifth
column tells you the coefficient in the problem; the final two columns are labeled
“allowable increase” and “allowable decrease.” Reduced cost, allowable increase,
and allowable decrease are new terms. They need definitions.
The allowable increases and decreases are easier. I will discuss them first.
The allowable increase is the amount by which you can increase the coefficient
of the objective function without causing the optimal basis to change. The
allowable decrease is the amount by which you can decrease the coefficient of
the objective function without causing the optimal basis to change.
Take the first row of the table for the example. This row describes the
variable x1 . The coefficient of x1 in the objective function is 2. The allowable
increase is 7, the allowable decrease is “1.00E+30,” which means 10^30 , which
really means ∞. This means that provided that the coefficient of x1 in the ob-
jective function is less than 9 = 2 + 7 = original value + allowable increase, the
basis does not change. Moreover, since x1 is a non-basic variable, when the basis
stays the same, the value of the problem stays the same too. The information in
this line confirms the intuition provided earlier and adds something new. What
is confirmed is that if you lower the objective coefficient of a non-basic variable,
then your solution does not change. (This means that the allowable decrease
will always be infinite for a non-basic variable.) The example also demonstrates
that increasing the coefficient of a non-basic variable may lead to a change in
basis. In the example, if you increase the coefficient of x1 from 2 to anything
greater than 9 (that is, if you add more than the allowable increase of 7 to the
coefficient), then you change the solution. The sensitivity table does not tell
you how the solution changes, but common sense suggests that x1 will take on a
positive value. Notice that the line associated with the other non-basic variable
of the example, x3 , is remarkably similar. The objective function coefficient is
different (3 rather than 2), but the allowable increase and decrease are the same
as in the row for x1 . It is a coincidence that the allowable increases are the same.
It is no coincidence that the allowable decrease is the same. We can conclude
that the solution of the problem does not change as long as the coefficient of x3
in the objective function is less than or equal to 10.
Consider now the basic variables. For x2 the allowable increase is infinite
while the allowable decrease is 2.69 (it is 2 9/13 to be exact). This means that
the solution won’t change if you increase the coefficient of x2 , but it will
change if you decrease the coefficient enough (that is, by more than 2.7). The
fact that your solution does not change no matter how much you increase x2 ’s
coefficient means that there is no way to make x2 > 10.4 and still satisfy the
constraints of the problem. The fact that your solution does change when you
decrease x2 ’s coefficient by enough means that there is a feasible basis in which
x2 takes on a value lower than 10.4. (You knew that. Examine the original
basis for the problem.) The range for x4 is different. Line four of the sensitivity
table says that the solution of the problem does not change provided that the
coefficient of x4 in the objective function stays between 16 (allowable increase
15 plus objective function coefficient 1) and -4 (objective function coefficient
minus allowable decrease). That is, if you make x4 sufficiently more attractive,
then your solution will change to permit you to use more x4 . If you make x4
sufficiently less attractive the solution will also change. This time to use less
x4 . Even when the solution of the problem does not change, when you change
the coefficient of a basic variable the value of the problem will change. It will
change in a predictable way. Specifically, you can use the table to tell you
the value of the new LP: if you take the original constraints and replace the
original objective function by

max 2x1 + 6x2 + 3x3 + x4

(that is, you change the coefficient of x2 from 4 to 6), then the solution to the
problem remains the same. The value of the solution changes because now you
multiply the 10.4 units of x2 by 6 instead of 4. The objective function therefore
goes up by 20.8.
The reduced cost of a variable is the smallest change in the objective func-
tion coefficient needed to arrive at a solution in which the variable takes on a
positive value when you solve the problem. This is a mouthful. Fortunately,
reduced costs are redundant information. The reduced cost is the negative of
the allowable increase for non-basic variables (that is, if you increase the coeffi-
cient of x1 by 7, then you arrive at a problem in which x1 takes on a positive

value in the solution). This is the same as saying that the allowable increase in
the coefficient is 7. The reduced cost of a basic variable is always zero (because
you need not change the objective function at all to make the variable positive).
Neglecting rare cases in which a basis variable takes on the value 0 in a solution,
you can figure out reduced costs from the other information in the table: If the
final value is positive, then the reduced cost is zero. If the final value is zero,
then the reduced cost is negative one times the allowable increase. Remarkably,
the reduced cost of a variable is also the amount of slack in the dual constraint
associated with the variable. With this interpretation, complementary slackness
implies that if a variable takes on a positive value in the solution, then its
reduced cost is zero.
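
If you solve LPs in code rather than in Excel, the same numbers are available from the solver. A sketch using scipy's HiGHS interface (treat the sign bookkeeping as an assumption to verify against your installation):

```python
# Read reduced costs and shadow prices off the example LP with scipy.
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 4.0, 3.0, 1.0])
A = np.array([[3, 1, 1, 4], [1, -3, 2, 3], [2, 1, 3, -1]])
res = linprog(-c, A_ub=A, b_ub=[12, 7, 10], method="highs")

# Signs are flipped because we minimized -c.x in order to maximize c.x.
print(-res.lower.marginals)     # reduced costs: -7 for x1 and x3, 0 for x2, x4
print(-res.ineqlin.marginals)   # shadow prices of the three constraints
```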

3.2 Sensitivity Information on Constraints


The second sensitivity table discusses the constraints. The cell column identifies
the location of the left-hand side of a constraint; the name column gives its name
(if any); the final value is the value of the left-hand side when you plug in the final
values for the variables; the shadow price is the dual variable associated with
the constraint; the constraint R.H. side is the right hand side of the constraint;
allowable increase tells you by how much you can increase the right-hand side
of the constraint without changing the basis; the allowable decrease tells you
by how much you can decrease the right-hand side of the constraint without
changing the basis.
Complementary Slackness guarantees a relationship between the columns in
the constraint table. The difference between the “Constraint Right-Hand Side”
column and the “Final Value” column is the slack. (So, from the table, the slack
for the three constraints is 0 (= 12 − 12), 37 (= 7 − (−30)), and 0 (= 10 − 10),
respectively.) We know from Complementary Slackness that if there is slack in
the constraint then the associated dual variable is zero. Hence CS tells us that
the second dual variable must be zero.
Like the case of changes in the variables, you can figure out information on
allowable changes from other information in the table. The allowable increase
and decrease of non-binding constraints can be computed knowing the final value and
right-hand side constant. If a constraint is not binding, then adding more of
the resource is not going to change your solution. Hence the allowable increase
of a resource is infinite for a non-binding constraint. (A nearly equivalent, and
also true, statement is that the allowable increase of a resource is infinite for a
constraint with slack.) In the example, this explains why the allowable increase
of the second constraint is infinite. One other quantity is also no surprise.
The allowable decrease of a non-binding constraint is equal to the slack in the
constraint. Hence the allowable decrease in the second constraint is 37. This
means that if you decrease the right-hand side of the second constraint from its
original value (7) to anything greater than −30 you do not change the optimal
basis. In fact, the only part of the solution that changes when you do this is that
the value of the slack variable for this constraint changes. In this paragraph, the
point is only this: If you solve an LP and find that a constraint is not binding,

then you can remove all of the unused (slack) portion of the resource associated
with this constraint and not change the solution to the problem.
The allowable increases and decreases for constraints that have no slack are
more complicated. Consider the first constraint. The information in the table
says that if the right-hand side of the first constraint is between 10 (original
value 12 minus allowable decrease 2) and infinity, then the basis of the problem
does not change. What these columns do not say is that the solution of the
problem does change. Saying that the basis does not change means that the
variables that were zero in the original solution continue to be zero in the new
problem (with the right-hand side of the constraint changed). However, when
the amount of available resource changes, necessarily the values of the other
variables change. (You can think about this in many ways. Go back to a
standard example like the diet problem. Suppose your diet provides exactly the right
amount of Vitamin C, but then for some reason you learn that you need more
Vitamin C. You will certainly change what you eat, and (if you aren’t getting
your Vitamin C through pills supplying pure Vitamin C) in order to do so you
probably will need to change the composition of your diet: a little more of some
foods and perhaps less of others. I am saying that (within the allowable range)
you will not change the foods that you eat in positive amounts. That is, if you
ate only spinach and oranges and bagels before, then you will only eat these
things (but in different quantities) after the change. Another thing that you
can do is simply re-solve the LP with a different right-hand side constant and
compare the results.)
To finish the discussion, consider the third constraint in the example. The
values for the allowable increase and allowable decrease guarantee that the basis
that is optimal for the original problem (when the right-hand side of the third
constraint is equal to 10) remains optimal provided that the right-hand side
constant in this constraint is between -2.3333 and 12. Here is a way to think
about this range. Suppose that your LP involves four production processes
and uses three basic ingredients. Call the ingredients land, labor, and capital.
The outputs use different combinations of the ingredients. Maybe they
are growing fruit (using lots of land and labor), cleaning bathrooms (using
lots of labor), making cars (using lots of labor and a bit of capital), and
making computers (using lots of capital). For the initial specification of available
resources, you find that you want to grow fruit and make cars. If you get an
increase in the amount of capital, you may wish to shift into building computers
instead of cars. If you experience a decrease in the amount of capital, you may
wish to shift away from building cars and into cleaning bathrooms instead.
As always when dealing with duality relationships, the “Adjustable
Cells” table and the “Constraints” table really provide the same information.
Dual variables correspond to primal constraints. Primal variables correspond
to dual constraints. Hence, the “Adjustable Cells” table tells you how sensi-
tive primal variables and dual constraints are to changes in the primal objective
function. The “Constraints” table tells you how sensitive dual variables and pri-
mal constraints are to changes in the dual objective function (right-hand side
constants in the primal).

4 Example
In this section I will present another formulation example and discuss the solu-
tion and sensitivity results.
Imagine a furniture company that makes tables and chairs. A table requires
40 board feet of wood and a chair requires 30 board feet of wood. Wood costs
$1 per board foot and 40,000 board feet of wood are available. It takes 2 hours
of skilled labor to make an unfinished table or an unfinished chair. Three more
hours of labor will turn an unfinished table into a finished table; two more hours
of skilled labor will turn an unfinished chair into a finished chair. There are 6000
hours of skilled labor available. (Assume that you do not need to pay for this
labor.) The prices of output are given in the table below:

Product Price
Unfinished Table $70
Finished Table $140
Unfinished Chair $60
Finished Chair $110

We want to formulate an LP that describes the production plans that the firm
can use to maximize its profits.
The relevant variables are the number of finished and unfinished tables, I
will call them TF and TU , and the number of finished and unfinished chairs, CF
and CU . The revenue is (using the table):

70TU + 140TF + 60CU + 110CF ,

while the cost is 40TU + 40TF + 30CU + 30CF (because lumber costs $1 per
board foot).
The constraints are:
1. 40TU + 40TF + 30CU + 30CF ≤ 40000.
2. 2TU + 5TF + 2CU + 4CF ≤ 6000.
The first constraint says that the amount of lumber used is no more than what
is available. The second constraint states that the amount of labor used is no
more than what is available.
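
The same formulation in code (scipy assumed), with the objective coefficients written as price minus the cost of the lumber each product uses:

```python
# The furniture LP: maximize profit from tables and chairs.
import numpy as np
from scipy.optimize import linprog

profit = np.array([70 - 40, 140 - 40, 60 - 30, 110 - 30])   # TU, TF, CU, CF
A = np.array([[40, 40, 30, 30],    # board feet of lumber per unit
              [ 2,  5,  2,  4]])   # hours of skilled labor per unit
b = np.array([40000, 6000])

res = linprog(-profit, A_ub=A, b_ub=b, method="highs")
print(np.round(res.x, 3), round(-res.fun, 2))   # 1333.333 finished chairs, 106666.67
```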
Excel finds the answer to the problem to be to construct only finished chairs
(1333.333; I’m not sure what it means to make and sell 1/3 of a chair, but let’s assume
that this is possible). The profit is $106,666.67.
Here are some sensitivity questions.
1. What would happen if the price of unfinished chairs went up? Currently
they sell for $60. Because the allowable increase in the coefficient is $50,
it would not be profitable to produce them even if they sold for the same
amount as finished chairs. If the price of unfinished chairs went down,
then certainly you wouldn’t change your solution.

2. What would happen if the price of unfinished tables went up?
Here something apparently absurd happens. The allowable increase is
greater than 70. That is, even if you could sell unfinished tables for more
than finished tables, you would not want to sell them. How could this
be? The answer is that at current prices you don’t want to sell finished
tables. Hence it is not enough to make unfinished tables more profitable
than finished tables; you must make them more profitable than finished
chairs. Doing so requires an even greater increase in the price.
3. What if the price of finished chairs fell to $100? This change would alter
your production plan, since this would involve a $10 decrease in the price
of finished chairs and the allowable decrease is only $5. In order to figure
out what happens, you need to re-solve the problem. It turns out that the
best thing to do is specialize in finished tables, producing 1000 and earning
$100,000. Notice that if you continued with the old production plan your
profit would be 70 × 1333 1/3 = 93,333 1/3, so the change in production plan
was worth more than $6,000.
4. How would profit change if lumber supplies changed? The shadow price
of the lumber constraint is $2.67. The range of values for which the basis
remains unchanged is 0 to 45,000. This means that if the lumber supply
went up by 5000, then you would continue to specialize in finished chairs,
and your profit would go up by $2.67 × 5000 = $13,333. At this point you
presumably run out of labor and want to reoptimize. If lumber supply
decreased, then your profit would decrease, but you would still specialize
in finished chairs.
5. How much would you be willing to pay an additional carpenter? Skilled
labor is not worth anything to you. You are not using all of the labor that you
have. Hence, you would pay nothing for additional workers.
6. Suppose that industrial regulations complicate the finishing process, so
that it takes one extra hour per chair or table to turn an unfinished product
into a finished one. How would this change your plans?
You cannot read your answer off the sensitivity table, but a bit of common
sense tells you something. The change cannot make you better off. On
the other hand, to produce 1,333.33 finished chairs you’ll need 1,333.33
extra hours of labor. You do not have that available. So the change will
change your profit. Using Excel, it turns out that it becomes optimal to
specialize in finished tables, producing 1000 of them and earning $100,000.
(This problem differs from the original one because the amount of labor
to create a finished product increases by one unit.)
7. The owner of the firm comes up with a design for a beautiful hand-crafted
cabinet. Each cabinet requires 250 hours of labor (this is 6 weeks of full
time work) and uses 50 board feet of lumber. Suppose that the company
can sell a cabinet for $200, would it be worthwhile? You could solve this

problem by changing the problem and adding an additional variable and
an additional constraint. Note that the coefficient of cabinets in the objec-
tive function is 150, which reflects the sale price minus the cost of lumber.
I did the computation. The final value increased to 106,802.7211. The
solution involved reducing the output of finished chairs to 1319.727891
and increasing the output of cabinets to 8.163265306. (Again, please tol-
erate the fractions.) You could not have guessed these figures in advance,
but you could figure out that making cabinets was a good idea. The way
to do this is to value the inputs to the production of cabinets. Cabinets
require labor, but labor has a shadow price of zero. They also require lum-
ber. The shadow price of lumber is $2.67, which means that each unit of
lumber adds $2.67 to profit. Hence 50 board feet of lumber would reduce
profit by $133.50. Since this is less than the price at which you can sell
cabinets (minus the cost of lumber), you are better off using your resources
to build cabinets. (You can check that the increase in profit associated
with making cabinets is $16.50, the added profit per unit, times the num-
ber of cabinets that you actually produce.) I attached a sheet where I did
the same computation assuming that the price of cabinets was $150. In
this case, the additional option does not lead to cabinet production.
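
The pricing-out computation in the cabinet question can be reproduced directly from the solver's shadow prices. A sketch (same caveat about scipy's sign conventions as before):

```python
# Price out the proposed cabinet using shadow prices from the furniture LP.
import numpy as np
from scipy.optimize import linprog

profit = np.array([30, 100, 30, 80])
A = np.array([[40, 40, 30, 30], [2, 5, 2, 4]])
res = linprog(-profit, A_ub=A, b_ub=[40000, 6000], method="highs")

y_lumber, y_labor = -res.ineqlin.marginals    # shadow prices of the resources
input_value = 50 * y_lumber + 250 * y_labor   # what a cabinet's inputs are worth
cabinet_profit = 200 - 50 * 1                 # sale price minus lumber at $1/foot
print(input_value, cabinet_profit)            # roughly 133.33 versus 150
# Since the cabinet's profit exceeds the value of the inputs it absorbs,
# shifting some resources into cabinets raises total profit.
```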

Linear Programming Notes VIII:

The Transportation Problem

1 Introduction
Several examples during the quarter came with stories in which the variables
described quantities that came in discrete units. It makes sense that you can
produce coffins in only whole number units. It is hard to imagine selling 2/3 of a
chair or 1/2 of a table. So far we have ignored these constraints. In applications,
one must take integer constraints seriously. An intelligent, but naive, way to deal
with the constraints is to solve the problem assuming that the constraints are
not present and then round your solution to the nearest integer value. In many
situations this technique is not only sensible, but gives good answers. If you
round down the number of items you produce in a production problem, then you
are likely to maintain feasibility and you may arrive at the true solution to the
problem. In general, the method won't work. Rounding (even rounding down)
may destroy feasibility or the true solution may not be close to the solution
of the problem solved without imposing integer constraints. The general topic
of Integer Programming confronts the problem directly. The theory of
Integer Programming (or Linear Integer Programming) is not as complete as
the theory of Linear Programming. Integer Programming problems are more
difficult to solve than LPs. Econ 172B describes some general approaches.
In this section I introduce problems that have a special property. In these
problems, it is especially natural to impose the constraint that the variables
take on integer values. Hence the problems are, strictly speaking, not linear
programming problems. Nevertheless, aside from the integer constraint, the
problems are linear. Moreover, the problems are so special that when you solve
them as LPs, the solutions you get automatically satisfy the integer constraint.
(More precisely, if the data of the problem is integral, then the solution to the
associated LP will be integral as well.)

2 The Transportation Problem


2.1 Formulation

The Transportation Problem was one of the original applications of linear pro-
gramming models. The story goes like this. A firm produces goods at m different
supply centers. Label these i = 1, . . . , m. The supply produced at supply center
i is Si . The demand for the good is spread out at n different demand centers.
Label these j = 1, . . . , n. The demand at the jth demand center is Dj . The
problem of the firm is to get goods from supply centers to demand centers at
minimum cost. Assume that the cost of shipping one unit from supply center
i to demand center j is cij and that shipping cost is linear. That means that

if you shipped xij units from supply center i to demand center j , then the cost
would be cij xij .
I have already done one of the steps of formulating the problem: I have
introduced variables. Let me be explicit. Define xij to be the number of units
shipped from supply center i to demand center j . The problem is to identify the
minimum cost shipping schedule. The constraints are that you must (at least)
meet demand at each demand center and cannot exceed supply at each supply
center.
The cost of the schedule, by the linearity assumption, is given by

min ∑_{i=1}^m ∑_{j=1}^n xij cij .

Now let’s figure out the constraints. Consider supply center i. The total
amount shipped out of supply center i is ∑_{j=1}^n xij . Think about this expression.
xij is what you ship from i to j . From i you can ship to any demand center
(j = 1, . . . , n). The sum above just adds up the total shipment from supply
center i. This quantity cannot exceed the supply available. Hence we have the
constraint

∑_{j=1}^n xij ≤ Si for all i = 1, . . . , m.

Similarly, the constraints that guarantee that you meet the demand at each of
the demand centers look like:

∑_{i=1}^m xij ≥ Dj for all j = 1, . . . , n.

Consider the feasibility of the problem. The only way that the problem can
be feasible is if total supply is at least total demand (∑_{j=1}^n Dj ≤ ∑_{i=1}^m Si ). If this
inequality did not hold, then there would be excess demand. There would be no
way to meet all of the demand with available supply. If there is enough supply,
then you should be able to convince yourself that you can satisfy the constraints
of the problem. That is, the problem is feasible unless there is excess demand.
It is conventional to assume that the total supply is equal to the total demand.
If so, that is, if

∑_{j=1}^n Dj = ∑_{i=1}^m Si ,

then all of the constraints in the problem must hold as equations (that is, when
total supply equals total demand, a feasible transportation plan exactly
meets demand at each demand center and uses up all of the supply at each
supply center). (In cases where there is excess supply, you can transform the
problem into one in which supply is equal to demand by assuming that you can
freely dispose of the extra supply.)
After making the simplification that total supply equals total demand, we
arrive at the standard formulation of the transportation problem. The problem
provides m supplies Si for i = 1, . . . , m, n demands Dj for j = 1, . . . , n that
satisfy ∑_{j=1}^n Dj = ∑_{i=1}^m Si , and costs cij . The objective is to find a transportation
plan denoted by xij to solve:

min ∑_{i=1}^m ∑_{j=1}^n xij cij

subject to

∑_{j=1}^n xij = Si for all i = 1, . . . , m

and

∑_{i=1}^m xij = Dj for all j = 1, . . . , n.

In this problem it is natural to assume that the variables xij take on integer
values (and non-negative ones). That is, you can only ship items in whole
number batches.
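
Written in code, the standard form above is an LP with one equality row per supply center and one per demand center. A sketch with a tiny invented instance (scipy assumed); note that the solution comes out in whole numbers, which is the special property discussed next:

```python
# A tiny 2x3 transportation problem with invented integral data.
import numpy as np
from scipy.optimize import linprog

S = [6, 9]                        # supplies (sum to 15)
D = [4, 5, 6]                     # demands (sum to 15)
cost = np.array([[2, 3, 1],
                 [5, 4, 7]])      # cij
m, n = cost.shape

# Flatten x_ij row by row; one equality row per supply and per demand.
A_eq = np.vstack([np.kron(np.eye(m), np.ones(n)),    # sum_j x_ij = S_i
                  np.kron(np.ones(m), np.eye(n))])   # sum_i x_ij = D_j
b_eq = np.concatenate([S, D])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, method="highs")
print(res.x.reshape(m, n))        # an integral shipping plan
print(res.fun)                    # its total cost
```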

2.2 Discussion

The transportation problem is an optimization problem with a linear objective


function and linear constraints. If we ignore the restriction that the variables
take on integer values, then it would fall into our standard framework. We can
solve the transportation problem using Excel.
The transportation problem has a lot of special structure. For example,
each variable appears in exactly two constraints (with a non-zero coefficient).
When a variable has a non-zero coefficient, the coefficient is either plus or minus
1. Because of this special structure, two things turn out to be true. First,
there are alternative methods of solving transportation problems that are more
efficient than the standard simplex algorithm. This turns out to be important
in practice, because real-world transportation problems have enormous numbers
of variables. Second, because of the special structure, it is possible to solve the
transportation problem in whole numbers. That is, if the data of the problem
(supplies, demands, and costs) are all whole numbers, then there is a whole
number solution. The significance of this property is that you do not need to
impose the difficult to handle integer constraints in order to get a solution that
satisfies the constraints.
I will not explain completely why you can always find integer solutions to
transportation problems. Several things are worth noting. It is not true in
general. For example, the recurring example of the course (the problem that we
used to illustrate the simplex algorithm) started with whole number data, but
its solution involved fractions.
There are two intuitions about why transportation problems have integer
solutions. One intuition is that corners of the feasible sets of transportation
problems must have whole number coordinates. That is, if you solve a subset

of k constraints using only k variables, the solution will involve whole numbers.
This intuition is geometric. You know that solutions to LPs arise at corners. If
you can see that corners of the feasible set have whole number coordinates, you
are in business. (Note: I have just claimed that this property holds. I have not
proved it. The proof is complicated, so you have a right to be skeptical about
the claim.)
The other intuition is algebraic. In the simplex algorithm, you get fractions
because you must divide by the element you pivot on. In the transportation
problem pivot elements will always be 1, so there is no need to divide.

2.3 Short History

People thought the transportation problem up early in the Second World War.
It was used to determine how to move troops (located, for example, at training
bases in different parts of the United States) to battlegrounds in Europe and
Asia.

2.4 The Dual of the Transportation Problem

Every LP has a dual. The neatest way to write the dual of the transportation
problem is: Find ui for i = 1, . . . , m and vj for j = 1, . . . , n to solve:

max −∑_{i=1}^m ui Si + ∑_{j=1}^n vj Dj

subject to

−ui + vj ≤ cij for all i = 1, . . . , m and j = 1, . . . , n.
This paragraph gives an explanation of how I arrived at the dual. It is only
an outline. I will not ask you to write the dual of a problem as complicated as
the transportation problem. So the material in this paragraph is optional. On
the other hand, it is extremely useful to know where duals come from. Arriving
at the formulation of the dual takes a bit of care. The tedious method is to
transform the original transportation problem so that it is in standard form, take
the dual of that, and simplify. The clever method is to notice that the transportation
problem was written as a minimization (so that the dual will be a maximization);
it had equality constraints (so that the dual variables will be unconstrained); its
variables were constrained (so that in the dual the constraints are inequalities).
I did one other tricky thing. I let −ui be the name of the variable for the
ith constraint in the transportation problem. This created the negative signs in the
objective function and constraints in the dual. This definition is mathematically
irrelevant (since the variable is unconstrained in sign), but leads to a form that
is consistent with the story I tell in the next paragraph.
Let me now try to interpret the dual. Here is one way to think about it.
In the original transportation problem, the seller faces the problem of getting
goods from the supply centers to the demand centers. The only way to do this

is by using conventional shipping lines and paying costs described by cij . Now,
for the purpose of the dual, imagine that someone offers to transport the goods
for the supplier. This mysterious shipper offers to buy goods from the supplier
at each supply center (at the price ui at supply center i) and resell them at
demand center j at the price vj . Somehow the mysterious shipper manages to
get the goods where they belong. The original seller doesn't care how the goods
get where they should be, as long as shipping cost is not too great. cij is the
amount it would cost to move an item from supply center i to demand center j
using conventional methods. Using the mysterious shipper it would cost vj − ui
(because the seller must pay vj to get the good back, but receives ui when he
sells it). Therefore, if the constraints in the dual are satisfied, then it is no more
expensive to use the shipper than to use conventional shipping methods. The
dual objective function is the amount that the mystery shipper earns by buying
all of the supply and then reselling it at demand centers. This discussion leads
to the interpretation of the dual. The mystery shipper sets prices at each supply
and demand location so that the net cost of shipping an item is no greater than
the direct (cij ) cost, and does so to maximize net revenue.
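
The mystery shipper's prices can be read off the solver as the duals of the equality constraints. A self-contained sketch on the tiny instance from the earlier snippet (the sign bookkeeping follows the text's −ui convention and should be treated as an assumption to verify):

```python
# Recover the shipper's buy prices u and sell prices v for the tiny instance.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[2, 3, 1], [5, 4, 7]])
m, n = cost.shape
S, D = [6, 9], [4, 5, 6]
A_eq = np.vstack([np.kron(np.eye(m), np.ones(n)),
                  np.kron(np.ones(m), np.eye(n))])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=S + D, method="highs")
marg = res.eqlin.marginals
u, v = -marg[:m], marg[m:]                    # the text's -u_i, v_j convention

print(np.all(v[None, :] - u[:, None] <= cost + 1e-9))   # dual feasibility
print(v @ D - u @ S, res.fun)                 # dual value equals primal value
```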

2.5 Example

Here is an example that is inspired by a similar problem in Hillier and Lieberman.
A lumber company has three sources of wood and five markets where wood is
demanded. The annual quantity of wood available at the three sources of supply
is 15, 20, and 15 million board feet, respectively. The amount that can be sold
at the five markets is 11, 12, 9, 10, and 8 million board feet, respectively. The
company currently transports all of the wood by train. It wishes to evaluate
its transportation schedule, possibly shifting some or all of its transportation
to ships. The unit cost of shipment (in $10,000s) along the various routes using
both methods is described in the tables below.
Supply Market 1 Market 2 Market 3 Market 4 Market 5
A 51 62 35 45 56
B 59 68 50 39 46
C 49 56 53 51 37
Cost per Unit of Rail Transport

Supply Market 1 Market 2 Market 3 Market 4 Market 5


A 48 68 48 none 54
B 66 75 55 49 57
C none 61 64 59 50
Cost per Unit of Ship Transport
The management needs to decide to what extent to continue to rely on rail
transportation. Evaluate the following options and make a recommendation
about what to do.

1. How much does it cost to use rail transport exclusively?
2. How much does it cost to use ships exclusively?
3. How much does it cost to use the cheapest available mode of transportation
on each route?
4. Suppose that there is an annual cost of $100,000 to operate any ships (but
that this cost does not vary with the number of shipping lines kept open).
What is the optimal transportation plan?
5. How would your answer change if you learned that the supply at Center B
and the demand at market 3 were both expected to increase by 10 million
board feet?
I wrote a spreadsheet that describes the problem. It is available. On the spread-
sheet I wrote three cost arrays. One represents the costs of train transportation;
the second, of ship transportation; and the third, the minimum (route by route).
For the routes for which shipping was not feasible (the "none" entries in
the cost table), I substituted a large number.
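If you would rather not use a spreadsheet, the same computation takes a few lines of Python. This is only a sketch (it assumes numpy and scipy are installed); it solves the rail-only problem, and you can swap in the ship or minimum-cost array in place of rail.

import numpy as np
from scipy.optimize import linprog

# Rail costs: rows are supplies A, B, C; columns are markets 1-5.
rail = np.array([[51, 62, 35, 45, 56],
                 [59, 68, 50, 39, 46],
                 [49, 56, 53, 51, 37]])
supply = [15, 20, 15]
demand = [11, 12, 9, 10, 8]

m, n = rail.shape
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1     # center i ships out exactly supply[i]
for j in range(n):
    A_eq[m + j, j::n] = 1              # market j receives exactly demand[j]
b_eq = supply + demand

res = linprog(rail.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.fun)                          # should match the 2316 reported below
print(res.x.reshape(m, n))              # an optimal plan (ties may differ)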
I first solved the problem using rail transportation. I obtained
Supply Market 1 Market 2 Market 3 Market 4 Market 5
A 6 0 9 0 0
B 2 0 0 10 8
C 3 12 0 0 0
Solution Using only Rail Transit
The cost of this transportation plan is 2316. (I also computed the cost of
this same schedule if ships were used instead of trains: 5530. Using the minimum
cost method on each route, the schedule would cost 2298.) There is no a priori
reason why the ship cost should be more than the rail cost, but the minimum
cost must be lower than (or equal to) both. The fact that it is strictly lower
means that a positive amount of the lumber was transported over routes on
which ships are cheaper than trains.
Next I solved the problem using only ship transit. (I just copied the original
spreadsheet and changed the objective function from train value to ship value.)
This is the solution.
Supply Market 1 Market 2 Market 3 Market 4 Market 5
A 11 0 4 0 0
B 0 0 5 10 5
C 0 12 0 0 3
Solution Using only Ship Transit
The cost of this plan (using ships) is 2652. It would be 2354 using trains and
2321 using the minimum cost method. Note that even though the transportation
plan is optimal for ships, it costs more to use ships than trains (but if you were
going to use trains, it would be cheaper still to use the first transportation plan).
Finally, as before, if you can use the minimum cost method, you would have an
even lower cost.
At this point I have answered the first question (2316) and the second ques-
tion (2652). If you must use only one mode of transportation, it does not pay
to switch. We can also conclude that using shipping selectively is profitable.
We do not know how profitable until we solve the third problem. I did and
obtained:
Supply Market 1 Market 2 Market 3 Market 4 Market 5
A 6 0 9 0 0
B 2 0 0 10 8
C 3 12 0 0 0
Solution Using Min Cost Transit
The cost of this transportation plan is 2298 (2316 if all were transported by
train and 5530 if all were transported by ship). This schedule is identical to the
first one. The solution is less expensive than using only ships or only trains. In
fact, we can conclude that it is worth $180,000 (remember, units are $10,000)
per year to have the option to use ships. Provided that the cost of operating
ships is less than $180,000, it is worth operating the minimum cost plan. Ships
are used for only one route: connecting A to 1.
The last question asks you to redo the problem under the assumption that
the supply at B is 30 (instead of 20) and the demand at 3 is 19 (instead of 9). I
re-solved the problem and obtained a cost of 2774 using only trains, with these
routes:
Supply Market 1 Market 2 Market 3 Market 4 Market 5
A 0 0 15 0 0
B 8 0 4 10 8
C 3 12 0 0 0
Solution 2 Using only Rail Transit
The basis changed (it is no longer profitable to use the A to 1 route). Using
only ships, the solution becomes:
Supply Market 1 Market 2 Market 3 Market 4 Market 5
A 11 0 4 0 0
B 0 0 15 10 5
C 0 12 0 0 3
Solution 2 Using only Ship Transit
with cost equal to 3202. The only difference between this transportation plan
and the original solution to the problem using ships is that now ten extra
units are shipped from B to 3. The cost went up by 550. On the other
hand, going from the first to the second train plan made the cost go up by less
(458 = 2774 - 2316) than 550. That is, although the direct transportation cost
from B to 3 (by ship) is 55 per unit, by using other routes (at least for some of
the units) the extra demand can be transported at a smaller price.
Finally, when you solve the min problem you again find that the solution
agrees with the solution from the train transportation problem. Now, however,
you don't use any ships. That is, the additional demand makes it optimal to
use only rail transportation.

3 The Assignment Problem


3.1 Introduction

The Assignment Problem is a special case of the transportation problem in
which there are equal numbers of supply and demand centers, and all demands
and supplies are equal to one. Sometimes you interpret the "costs" (c_ij) as
benefits, and solve a maximization problem instead of a minimization problem.
This change of interpretation adds no theoretical problems.
The Assignment Problem deserves special attention because it is an inter-
esting special case. The usual story that comes with it goes like this. You are
the manager of a little league baseball team. After carefully watching the nine
children on your team, you can assign the value of having player i play position
j. (I am assuming that there are nine positions on a baseball team. This is still
true in the National League.) Denote this value a_ij. The objective is to find
an assignment (that is, a position for each player on the team) such that each
player plays only one position and each position has only one player (that is,
there is only one pitcher, and even the best player can play only one position)
that maximizes the total value. If we let x_ij be equal to 1 if player i is
assigned to position j and equal to zero otherwise, then the problem is to find
x_ij to solve:

max sum_{i=1}^n sum_{j=1}^n x_ij a_ij

subject to

sum_{i=1}^n x_ij = 1 for j = 1, ..., n

and

sum_{j=1}^n x_ij = 1 for i = 1, ..., n.

Also, the variables xij must take on the values 0 or 1 (otherwise your assignment
would involve cutting people into pieces. This is very messy and usually does
not improve the performance of the baseball team.)

The assignment model has a wide range of applications. You can imagine
matching women to men; workers to jobs; and so on. Variations of the model
are used to assign medical residents to hospital training programs. Complicated
versions of the model are used for scheduling (classes to classrooms or teams in
professional sports leagues).

3.2 The Hungarian Method

The assignment problem is a linear programming problem (with the additional
constraint that the variables take on the values zero and one). In general,
the additional constraint makes the problem quite difficult. However, like the
transportation problem, the assignment problem has the property that when
you solve the problem ignoring the integer constraints you still get integer so-
lutions. This means that the simplex algorithm solves assignment problems.
Assignment problems have so much special structure that there are simpler al-
gorithms available for solving them. In this section, I will describe one of these
algorithms, called the Hungarian method. I suspect that it is politically incor-
rect now to name a method after a country. I believe (but I did not verify)
that the name is a tribute to the Hungarian mathematicians who originally
discovered the algorithm.
I will illustrate the algorithm with an example. Consider the assignment
problem with the costs given in the array below.
1 2 3 4
A 10 7 8 2
B 1 5 6 3
C 2 10 3 9
D 4 3 2 3
This array describes an assignment problem with four people (labeled A, B,
C, and D) and four jobs (1, 2, 3, 4). The first person has a cost 10 if assigned
to the first job; a cost 7 if assigned to the second job; etc. The goal is to assign
people to jobs in a way that minimizes total cost.
The algorithm uses a simple observation and one trick. The observation is
that you can subtract a constant from any row or column without changing the
solution to the problem. Take the first row (the costs associated with A). All
of these numbers are at least two. Since you must assign person A to some job,
you must pay at least two no matter what. If you'd like, think of that as a fixed
cost and further costs as variable costs depending on the job assigned to the
first person. Hence if I reduce all of the entries in the first row by two, I do not
change the optimal assignment (I lower the total cost by two). Doing so leaves
this table:
1 2 3 4
A 8 5 6 0
B 1 5 6 3
C 2 10 3 9
D 4 3 2 3

Again, the solution to the problem described by the second table is exactly
the same as the solution to the first problem. Continuing in this way I can
subtract the "fixed cost" for the other three people (rows) so that there is
guaranteed to be at least one zero in each row. I obtain:
1 2 3 4
A 8 5 6 0
B 0 4 5 2
C 0 8 1 7
D 2 1 0 1
I'm not done using this observation yet. Just as I can subtract a constant
from any row, I can subtract a constant from any column. Take the second
column. It says that no matter who you assign to the second job, it will cost at
least 1. Treat the 1 as a fixed cost and subtract it. Since it cannot be avoided,
it does not influence your solution (it does influence the value of the solution).
Once you make this reduction you get:
1 2 3 4
A 8 4 6 0
B 0 3 5 2
C 0 7 1 7
D 2 0 0 1
This is the end of what we can do with the simple observation. Now it is
time to put it to work. The last table is simpler than the original one. It
has the property that there is a zero in every row and in every column. All of the
entries are non-negative. Since you want to find an assignment that minimizes
total cost, it would be ideal if you could find an assignment that only pairs
people to jobs when the associated cost is zero. Keep this in mind: The goal
of the computation is to write the table in a way that is equivalent to (has the
same solution as) the original problem and has a zero-cost assignment. I have
just finished the step in which you reduce the costs so that there is at least one
zero in every row and every column. The example demonstrates that this is not
enough.
If you think about the table, you will see that a zero-cost assignment is not
possible. If you try to come up with one, you must assign A to 4 (the only zero
in the row for A is in the 4 column) and you must assign B to 1. However, the
only way to get a zero cost from C is to assign it to 1 as well. I can't do this,
because I have already assigned B to 1. If you have followed up until now, you
will be able to conclude that you should do the next best thing: assign C to job
3 (at the cost 1) and then D to 2. This yields the solution to the problem (A to
4; B to 1; C to 3; D to 2). It is not, however, an algorithm. We made the final
assignments by guessing. (You should be sure that this is the solution. I argued
that it is impossible to solve the problem at cost zero, but then demonstrated
that it is possible to solve the problem at the next best cost, one.)

To turn the intuition into an algorithm, we need a trick. When I subtracted
a constant from each row, I did so in order to make the smallest element of each
row 0. What I would like to do is to continue to create new cheap assignments
without changing the essence of the problem. The trick is to cover up the zeroes
in the table with lines and then try to reduce the remaining values.
Here I repeat the past table, now with lines drawn through rows A and D
and through column 1:
1 2 3 4
A 8 4 6 0
B 0 3 5 2
C 0 7 1 7
D 2 0 0 1
I have crossed out two rows and one column. Doing so "covers up" all of
the zeros. Now look at the uncovered cells and find the smallest number (it
turns out to be one). If I subtracted one from each cell in the entire matrix,
then I would leave the basic problem unchanged (that is, I would not change the
optimal assignment) and I would "create" a new low cost route (C to 3). That
is the good news. The bad news is that some entries (covered by lines) would
become negative. This is bad news because if there are negative entries, there
is no guarantee that a zero-cost assignment really minimizes cost. So reverse
the process by adding the same constant you subtracted from every entry (1) to
each row and column with a line through it. Doing so creates this cost matrix:
1 2 3 4
A 9 4 6 0
B 0 2 4 1
C 0 6 0 6
D 3 0 0 1
The beauty of this table is that it again is non-negative. It turns out that
using this matrix it is possible to make another minimum cost assignment. In
fact, using this table, we can come up with an optimal assignment with cost
zero. It agrees with our intuition (A to 4; B to 1; C to 3; D to 2). You can
go back to the original matrix of costs to figure out what the total cost is:
9 = 2 + 1 + 3 + 3. Mechanically:
1. Subtract the minimum number in each row from that row, leaving at least
one zero element in each row.
2. Subtract the minimum number in each column from that column, leaving
at least one zero element in each column.
3. Find the minimum number of lines that cross out all of the zeroes in the
table.
4. From all of the entries that are not crossed out, find the minimum number
(it should be positive). If the minimum is zero, then you haven't crossed
out enough entries. If all of the entries are crossed out, then you already
should be able to find a zero cost assignment.1
5. Subtract the number that you found in Step 4 from all of the entries that
haven't been crossed out. Do not change the entry in any cell that has
one line through it. Add the number to those entries that have two lines
through them.
6. Return to Step 1.
The first two steps are simple. They make the problem more transparent. The
third and fourth steps are general versions of the first two steps. What you do in
these steps is redistribute the costs in a way that does not change the solution.
Step 3 is the mysterious step. I ask you to cross out all of the zeroes in the
table using the minimum number of lines. I recommend that you do this by
finding the row or column that has the most zeroes; cross that one out. Next,
cross out the row or column that has the most remaining uncrossed zeroes.
(There may be more than one way to do this.) Continue until you are done.
In Step 5 you do two things. First, you subtract the number you found in
Step 4 from every element of the table. As you know, this does not change
the solution. It does, however, create negative numbers. Hence you must do
something to restore non-negativity in the cost table (otherwise you cannot
apply the rule that you want to find a zero-cost assignment to solve the problem).
You do this by adding the constant back to every row or column that you drew
a line through. When all is done, you are left with a table that satisfies the
properties in Step 5. All entries that are not "lined" go down; the ones that
have one line through them stay the same (go down and then go up by the same
amount); the ones that have two lines (none will have three) go up (they go
down, but then they go up twice).
You are done when you reach a stage in which you can find a zero-cost
assignment. I won't provide a general procedure for doing this. It is natural to
start by looking to see if any row or column has exactly one zero in it. If it does,
you must include the assignment corresponding to that cell. Do so, cross out
the corresponding row and column, and solve the remaining (smaller) problem.
If each row and column contains at least two zeroes, make one assignment using
an arbitrary row and column (with a zero cell) and continue. The problems that
I ask you to solve will be small enough to solve by trial and error.
There is one other loose end. I have not demonstrated that the algorithm
must give you a solution in a finite number of steps. The basic idea is that
each step lowers the cost of your assignment. Verifying this requires a small
argument. I will spare you.
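Incidentally, if you want to check an answer (or solve an instance too large for hand computation), SciPy includes a solver for the assignment problem. A minimal sketch, assuming numpy and scipy are installed, applied to the cost array from the example above:

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[10,  7, 8, 2],
                 [ 1,  5, 6, 3],
                 [ 2, 10, 3, 9],
                 [ 4,  3, 2, 3]])
people, jobs = linear_sum_assignment(cost)      # minimizes total cost
print(list(zip("ABCD", (jobs + 1).tolist())))   # [('A', 4), ('B', 1), ('C', 3), ('D', 2)]
print(cost[people, jobs].sum())                 # 9, as computed by hand

The same two calls check the 5 by 5 example that follows.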
Here is another example.
1 You are done if you need to draw as many lines as there are rows or columns in the cost
table.

1 2 3 4 5
A 81 14 36 40 31
B 20 31 25 26 81
C 30 87 19 70 65
D 23 56 60 18 45
E 12 15 18 21 100
I will first subtract the minimum element in each row:
1 2 3 4 5
A 67 0 22 26 17
B 0 11 5 6 61
C 11 68 0 51 46
D 5 38 42 0 27
E 0 3 6 9 88
Next, I subtract the minimum element from each column (only the fifth
column has no zero in it).
I then cross out all of the zeroes with four lines: one through row A and one
through each of columns 1, 3, and 4.
1 2 3 4 5
A 67 0 22 26 0
B 0 11 5 6 44
C 11 68 0 51 29
D 5 38 42 0 10
E 0 3 6 9 71
This array does not permit a zero-cost solution (both 2 and 5 must be matched
with A). Hence we need to change it. The smallest uncovered entry is 3, so I
subtract 3 from the uncovered entries and add 3 to the entries with two lines
through them:
1 2 3 4 5
A 70 0 25 29 0
B 0 8 5 6 41
C 11 65 0 51 26
D 5 35 42 0 7
E 0 0 6 9 68
From this array we can find a zero-cost assignment. The solution is A to
5; B to 1; C to 3; D to 4; and E to 2. Using the costs from the original table,
the cost of this plan is:
31 + 20 + 19 + 18 + 15 = 103.

Linear Programming Notes IX:
Two-Person Zero-Sum Game Theory

1 Introduction

Economists use the word rational in a narrow way. To an economist, a rational
actor is someone who makes decisions that maximize her (or his) preferences
subject to constraints imposed by the environment. So, this actor knows her
preferences and knows how to go about optimizing. It is a powerful approach,
but it probably is only distantly related to what you mean when you think of
yourself as rational.
Decision theory describes the behavior of a rational actor when her actions
do not influence the behavior of the people around her. Game theory describes
the behavior of a rational actor in a strategic situation. Here the decisions of
other actors determine how well you do. Deciding where to go to dinner can be
thought of as a decision problem if all you care about is what you eat and where
you eat it. It is a strategic problem if you also want to meet a friend at the
restaurant. (In the first case, you go to the restaurant that serves the food you
like best. In the second case, the restaurant that you prefer depends not only
on the food served, but also on where your friend goes.)

2 Zero-Sum Games

These notes describe a simple class of games called two-player zero-sum games.
You can probably figure out what a two-player game is. Zero-sum games refer
to games of pure conflict. The payoff of one player is the negative of the payoff
of the other player. This formulation is probably appropriate for most parlor
games, where the outcomes are either win, lose, or draw (and there is at most
one winner or loser). Maybe it describes war. It is a restrictive assumption
and is not appropriate to most economic applications, where there is a strong
component of common interests mixed with the conflict. For example, in a
bargaining situation, the conflict is clear: the buyer wants to pay a low price
and the seller wants to receive a high price. The cooperative element arises
because it is frequently the case that making a transaction at an intermediate
price is better for both sides than a failure to reach an agreement. Concretely,
if something is worth $10 to the seller and $15 to the (potential) buyer, then
making a sale at the price $12 (or any price between $10 and $15) is better for
both buyer and seller than making no sale at all. Problems that describe aspects
of firm competition (models of Cournot duopoly that you may have seen in a
micro class) have non-zero-sum aspects.
Why limit attention to zero-sum games? They are simpler. There is a
beautiful theory that is more compelling than the general theory of games.
Predicting outcomes in these games uses linear programming in ways that do
not generalize to other kinds of games.
The general structure of a game involves a list of players; a set of strategies
for each of the players; and a payoff for each vector of strategies. I will assume
that the game has only two players.

3 Strategies

The intuition behind a strategy is that it tells you how you are going to play the
game. In examples, it will be just a choice from one of a finite list of possible
things you can do.
This story might help you understand the notion of a strategy. You made
an arrangement to talk to a friend about what you were going to do together,
but you unexpectedly cannot be home when the friend is supposed to call. Your
roommate will be home and promises to talk to your friend. You want to give
your roommate instructions about what kind of arrangements to make. You
would like to walk on the beach, but not if it is going to rain. You would like
to go to the Belly Up, but only if you can dance. You would like to see a
movie, but only if Leonardo DiCaprio isn't in it. Most of all, you would like
to do something that your friend also wants to do. What kind of instructions
do you give your roommate? Complete instructions will account for all possible
contingencies. You won't say: "Tell my friend that I'll do whatever he or she
wants to do." Instead, you'll say something like: "If she wants to go to a movie,
find out if DiCaprio is in it. If he isn't, tell her OK. If he is, tell her no." And
so on. In game theory, a strategy is a complete set of instructions. It allows
your roommate to "negotiate" for you no matter what your friend on the phone
says.
When you specify a strategy for each player, you determine the outcome
of the game. Payoffs associate to each outcome a number for each player. You
can therefore describe two-player games using a payoff matrix. The rows of
the matrix represent the strategies of one player. The columns of the matrix
represent the strategies of the other player. The cells of the matrix represent
outcomes. In these cells, you place payoff numbers. In general, each cell should
have a payoff for each player in it. In zero-sum games, you need only have one
number in each cell. This number represents the payoff to the player who picks
rows. The negative of this number is the payoff to the player who picks columns.
Take the game of matching pennies. Two players simultaneously place a
penny on the table. If the pennies 'match' (both heads up or both heads down),
then the Row player wins the Column player's penny. If the pennies do not
'match' (exactly one head), then the Column player wins the Row player's
penny. The payoff matrix is below.
Heads Tails
Heads 1 -1
Tails -1 1

In matching pennies, each player has two strategies. The player can either
play heads or play tails. Now consider a variant of matching pennies that I play
with my son. First, I decide whether to play heads or tails. Next, he looks at
what I did. Finally, he decides whether to play heads or tails. I win if the coins
match. He wins if they do not. In this game, both players must decide whether
to play heads or tails. So you might think that we both have two strategies.
This is not correct. I have two strategies, but my son can make his decision
based on what I did. He therefore has four strategies:
HH: Play heads no matter what I do.
TT: Play tails no matter what I do.
HT: Play heads if I play heads and tails if I play tails (match).
TH: Play tails if I play heads and heads if I play tails (mismatch).
Therefore the payoff matrix for this version of matching pennies is:
HH TT HT TH
Heads 1 -1 1 -1
Tails -1 1 1 -1
Naturally, my son plays TH and I always lose. The point is that even though
my son ends up either playing heads or playing tails, in order to describe how
he makes his decision, you need four strategies. Using the four strategies he
could give instructions to my wife on how to play the game, go to his room and
listen to music, and still always win the game.
Strategies are complicated objects in general. Examples simplify and obscure
the complexity of the idea of a strategy. For example, chess is a zero-sum, two-
player game. A strategy for chess (to a game theorist) is a complete plan for
playing that game. If you are white (and move first), your strategy should
include an opening move; a response to all possible first moves of your opponent;
a response to all possible positions after two moves by your opponent; and so
on. There are an enormous number of such strategies (no, not on the order
of the number of pennies in Bill Gates's bank account; more like the number
of water molecules in the universe). The idea is that if you could specify a
strategy, then you can tell the strategy to an agent and the agent will be able
to play the game for you without ever consulting you again. Once you have
a strategy for both white and black, you can actually play out a game. From
the play of the game, you can decide who won (or whether it was a draw) and
assign payoffs. Conceptually, this process is easy (at least for someone who is
comfortable with game theory). In practice, it does not tell you how to play a
game. Tic-tac-toe is a simpler example of a two-player zero-sum game. To a
game theorist, a strategy for the first player describes the first move and where
to move on future opportunities under all possible circumstances. This leads
to an enormous number of strategies. You have been able to play tic-tac-toe
optimally for more than fifteen years. You can probably even describe it (move
in the center first; after that, block your opponent when necessary; move to an
open corner if you can). Here the point is that describing all of the strategies
even for tic-tac-toe is an enormous task and it is not directly related to what
you think about when you actually play the game.

Game theory does provide advice about how to play simple zero-sum games.
The first advice is about which strategies to avoid. In the payoff matrix below,
the Row player always does better picking UP than DOWN. That is, the entries
in each column of row one are bigger than the corresponding numbers in the
second row. No matter what the Column player selects, player one is better off
picking Row 1 than Row 2. If Row wants to maximize his return, he will avoid
the DOWN row. We say that DOWN is a dominated strategy.
A B C D
UP 1 2 3 4
DOWN -1 -2 -3 -4

4 Examples

In this section I will describe some fairly simple games. The goal is to use the
notion of a strategy to describe the games. After I have presented the theory,
we will return to the games.

4.1 Colonel Blotto

Several standard examples of games have charming names like "The No-Left-
Turn Missile" and "Search and Destroy." These names suggest that hot and cold
warriors used game theory to think about military strategy. They did. This
is a simple example of a class of games that describe some aspect of military
strategy.
Colonel Blotto has three divisions to defend two mountain passes. He will
defend successfully against equal or smaller strength, but lose against superior
forces. The enemy has two divisions. The battle is lost if either pass is captured.
Neither side has advance information on the disposition of the opponent's
divisions. What are the optimal dispositions?
Colonel Blotto has to decide how many divisions to allocate to the first
mountain pass (he'll allocate the remaining ones to the other pass). You can
describe a strategy with a pair of numbers like (x, 3 - x), where x = 0, 1, 2, or
3. x represents the troops allocated to the first pass; 3 - x the troops allocated
to the second pass. Similarly, the enemy's strategy is a pair, but since it has
only two divisions, it has only three strategies. Hence I obtain the payoff matrix
below.
(2,0) (1,1) (0,2)
(3,0) 1 -1 -1
(2,1) 1 1 -1
(1,2) -1 1 1
(0,3) -1 -1 1
Consider the first row. Colonel Blotto allocates three divisions to the first
pass. Therefore he always defeats the enemy there, but he only defeats the
enemy on the second pass when the enemy also allocates all of its troops to the
first pass. Since Blotto loses the war unless he can defend both passes, his payoff
is negative one when the enemy uses either (1,1) or (0,2). If Blotto allocates
two units to the first pass (the second row), then he successfully defends the first
pass and will also defend the second pass if the enemy allocates fewer than two
divisions to the second pass. Hence Blotto wins unless his enemy plays (0,2).
Similar reasoning explains the rest of the table.

4.2 Morra

Each player shows either one or two fingers and announces a number between
2 and 4. If a player's number is equal to the sum of the number of fingers
shown, then his opponent must pay him that many dollars. The payoff is the
net transfer (so that both players earn zero if both or neither guess the correct
number of fingers shown).
In this game each player has 6 strategies: he may show one finger and guess
2; he may show one finger and guess 3; he may show one finger and guess 4;
or he may show two fingers and guess one of the three numbers. Of these 6
strategies, two are stupid and I will ignore them. It never pays to put out one
finger and guess that the total number of fingers will be 4 (because the other
player cannot put out more than two fingers, so the sum cannot reach 4). It
never pays to put out two fingers and guess that the sum will be 2 (because the
other player must put out at least one finger). Therefore, a four by four matrix
describes the payoffs.
12 13 23 24
12 0 2 -3 0
13 -2 0 0 3
23 3 0 0 -4
24 0 -3 4 0
In the payoff matrix, "12" describes the strategy of putting out one finger
and guessing the sum is two. In general, the first number in the strategy is the
number of fingers and the second number is the (guessed) sum. The payoffs
come from playing out the game. Suppose that both players use 12. Then both
put out one finger. The sum is equal to two. So each player pays $2 to his
opponent. They break even. This explains why the payoff associated with both
players playing 12 is equal to zero. Moving to the second entry in the first row
(Row plays 12; Column plays 13): here both players put out one finger; the Row
player correctly guesses the sum is 2; Column must pay Row this amount.
When Row plays 12 and Column 23, the sum is 3; Column guesses it correctly
(but Row's guess is incorrect); so Column receives $3 from Row.

4.3 Goofspiel

Each player begins with an n card "hand," with cards numbered 1, 2, ..., n.
On the first move of the game, each player picks a card from his hand. The
cards are compared. The player with the higher card earns a_1 dollars. The
player with the lower card earns 0. If the cards are equal, then each player wins
a_1/2. On the next move, each player picks one of the cards remaining in his hand.
As before, the cards are compared. The player who put out the higher card
earns a_2 dollars; the player with the lower card earns 0. If the cards are equal,
then each player wins a_2/2. The play continues until all n cards have been played.
The possible winnings in the ith round are a_i. The total payoff is equal to the
sum of the winnings in the individual moves.
While it did not take long to describe this game, the strategy space is enor-
mous. A player does not just select an order in which to play his cards (and there
are n! possible orders). Instead a strategy allows the player to decide which card
to play on the basis of what his opponent has done. Only when n = 2 is the
strategy set small. Here the player really need only decide what to play on his
first move. On the second move he must play his remaining card. When there
are three cards, there are 24 strategies. You can describe them as a list. The
list contains which card you play first (3 possible choices); what card you play
second if your opponent plays 1 in the first round (2 possible choices because
you have already played a card); what card you play second if your opponent
plays 2 in the first round (2 possible choices); and what card you play second if
your opponent plays 3 in the first round (2 possible choices). Hence each player
has 3 x 2 x 2 x 2 = 24 possible strategies. Observe the complexity of the notion
of strategy. Your opponent is going to play one card on his first move. Nev-
ertheless, your strategy describes how you respond to all potential first moves.
The reason for this complexity is that you pick your strategy in advance. That
is, a strategy will typically specify how you would behave in contingencies that
do not actually take place. A tiny aspect of the strategy is simple. Once the
strategy describes how the first two cards are played, it does not need to say
anything about the third card. On the third move (when n = 3) a player must
play his one remaining card.

5 Security Level

Imagine now that your opponent can read your mind and guess how you will
play before she makes her move. What should you do? Since your opponent gains
when you lose, you should expect your opponent to pick the strategy that makes
you worse off. Take a look at the next example.
Left Center Right
Top 1 2 -1
Middle -5 0 20
Bottom 1 1 1
You are Row. Consider playing the first strategy (top). If your opponent
could read your mind, then she would play her third strategy (right). She would
win 1 and you would lose 1. What about playing your second strategy (middle)?
If your opponent knew, then your payoff would be -5 since she would pick her
first strategy (left) as the response. Finally, the third strategy (bottom) pays
you 1 no matter what your opponent does. Therefore, if you are conservative
(or paranoid) or really playing against an omniscient opponent, then you would
play the third strategy. The third strategy establishes your pure-strategy
security level. Informally, the pure-strategy security level is the amount that
you can guarantee for yourself no matter what your opponent does. The reason
for the modifier "pure strategy" will become clear soon. Formally, your security
level is max_i min_j u(i, j).
Take a moment to analyze this expression. Define an intermediate function:
f(i) = min_j u(i, j). f(i) is what you would get if you played your ith strategy
and your opponent made the response that was the worst for you (and the best
for her). Your security level is max_i f(i). That is, it is the maximum payoff you
get assuming that your opponent will observe your strategy choice and take full
advantage of this information.
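These definitions are easy to check numerically. Here is a small numpy sketch, using the payoff matrix from the example above:

import numpy as np

U = np.array([[ 1, 2, -1],
              [-5, 0, 20],
              [ 1, 1,  1]])       # Row's payoffs

f = U.min(axis=1)                  # f(i) = min_j u(i, j)
print(f, f.max())                  # [-1 -5  1] and 1: Row's security level

# Column's payoff is -u(i, j), so her pure-strategy security level is
print((-U).min(axis=0).max())      # -1 (she attains it with the first column)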
A security level gives a lower bound to your payoff in the game. Surely you
should expect to do no worse than this when you play the game. Can you expect
to do better?
One way to check is to put yourself in the position of the column player. She
too can try to guarantee her security level. In the example, when she plays left,
the worst that can happen is that she gets -1 (if Row plays top or bottom);
when she plays center, the worst that can happen is that she gets -2; when she
plays right, the worst that can happen is that she gets -20. (Remember
that the payoff that Column gets is the negative of the payoff that Row gets.)
So, Column's pure-strategy security level is -1, which she'll get if she plays
her first strategy. In this game, at least, it appears that the security level is a
good prediction of the value of the game. The Row player can play in such a way
that guarantees that he will win 1. The Column player can play in such a way that
guarantees that she will lose no more than 1. Since it is a zero-sum game,
everything that Row wins must come from Column. Hence if Column plays to
guarantee her security level, then Row cannot win more than 1. If Row plays
to guarantee his security level, then Column must lose at least 1. There is no
room for either player to do better than their security level.
If the result of the example were general, then we would have a good theory
of how to play zero-sum games. The Row player should play to ensure that he
obtains his security level because, if his opponent plays sensibly, Row can do
no better than obtaining his security level. The same statement holds for the
Column player. So, for the second time, is this general? The answer is yes and
no.
First, here is the reason why the answer is no. Take matching pennies. The
pure-strategy security level for both players is -1 (and the players can attain
this payoff by using either strategy). I hope that the reason for this is easy to
understand. If your opponent could figure out how you were going to play this
game, then she would always win. So it is too conservative to play as if your
opponent can outguess you.
Now imagine you were going to play matching pennies repeatedly with
the same person. If you were to play the same strategy each time you played
the game, what would you do? You probably would not want to be predictable.
That is, you probably would not want to play heads every time. If you did, then
there is a chance that your opponent could figure that out and take advantage
of you. The notion of a mixed strategy is a way to describe the idea of being
unpredictable. For example, suppose that instead of deciding whether to play
heads or tails, you simply flip the coin and play whatever side lands face up. In
this way, you end up playing heads half of the time and tails half of the time
(I am assuming that your penny is a fair coin that lands heads half the time).
You would like to know what your payoff would be if you followed this strategy.
In order to figure this out, you need to know two things. First, you need to
understand that you must be content to compute your expected payoff (if half
of the time you play heads and the other half you play tails, then you won't
always win or always lose). Second, you need to make some assumption about
your opponent's play in order to figure out your payoff.
You answer the first question by computing expected payoffs. Doing so
requires that you interpret the numbers in the payoff matrix as utilities and
that these utilities satisfy the expected utility property. You either learned all
about this in Econ 171, will learn all about this in Econ 171, or will live an
empty, unhappy existence. Here I will say that there is a well-developed theory
of decision making under uncertainty that gives conditions under which using
expected utilities is justified. This theory is a bit controversial, but is still the
standard way of treating payoffs in games.
Once you interpret the numbers in the payoff matrix as expected utility, you
must remind yourself that they need not be monetary payoffs. A player need not
be indifferent between winning nothing and a 50-50 gamble that pays 1 when
it wins and costs 1 when it loses. In fact, someone who is risk averse strictly
prefers to avoid the gamble. However, a player must be indifferent between
getting zero utility for sure and a 50-50 gamble that pays utility of 1 or utility
of -1.
The second issue is to decide how to evaluate the payoff associated with
playing the random (or mixed) strategy of playing heads and tails with equal
probability. The answer is to do what we did with pure strategies. Suppose
Column knows that Row is going to play heads and tails with equal probability.
What is the worst thing that she can do (from Row's point of view)? The answer
is that it does not matter what Column does. Row's expected payoff is always
zero. Of course, a symmetric argument establishes that by playing heads and
tails with equal probability Column can also guarantee herself a payoff
of zero.
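The computation itself is two lines of numpy, if you want to see it:

import numpy as np

U = np.array([[ 1, -1],
              [-1,  1]])      # matching pennies, Row's payoffs
p = np.array([0.5, 0.5])      # flip the coin

print(p @ U)                   # [0. 0.]: expected payoff is 0 against
                               # either pure strategy of Column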
The example illustrates the idea of the mixed-strategy security level. A
mixed strategy is a probability distribution over pure strategies. In games
with two pure strategies, like matching pennies, a probability distribution can
be described by a number p between zero and one (interpret p as the probability
that the player plays his first strategy, so that 1 - p is the probability that he
plays his remaining strategy). In general, if the player has n pure strategies, a
mixed strategy is a vector p = (p_1, ..., p_n) such that p ≥ 0 and sum_{i=1}^n p_i = 1,
where you interpret p_i as the probability that the player picks his ith pure
strategy. The mixed-strategy security level of the Row player is defined as

max_p min_q sum_{i=1}^n sum_{j=1}^m p_i q_j u(i, j).

In this expression, I assume that Row has n pure strategies, Column has m
pure strategies, and that p and q are mixed strategies for Row and Column
respectively. It is convenient to let U be a matrix with n rows and m columns
(typical entry u_ij). In that case the security level is

max_p min_q pUq.

The mixed-strategy security level of the Column player has a similar definition:

max_q min_p - sum_{j=1}^m sum_{i=1}^n p_i q_j u(i, j).

We can write this as

max_q min_p (-pUq) = -min_q max_p pUq.

This definition reverses the order of p and q and puts a minus sign in front of
u(i, j) (because the payoff of Column is equal to -1 times the payoff of Row).

You should be able to convince yourself that the mixed-strategy security level
is at least as great as the pure-strategy security level. (In matching pennies,
Row's pure-strategy security level is -1 while his mixed-strategy security level
is zero.) The intuition for this is that in figuring out what to do, the Row player
has the choice of using a "degenerate" mixed strategy that places probability
one on a pure strategy. Having the extra option of randomizing couldn't make
him worse off.
Warning: From now on, when I say security level I will mean mixed-strategy
security level.
Only slightly less obvious is the assertion that if you add Row's security level
to Column's security level you get something that is less than or equal to zero.
In symbols:

max_p min_q pUq - min_q max_p pUq ≤ 0.

This inequality merely expresses the idea that it must be possible for both Row
and Column to attain their security levels (since the payoffs must sum to zero,
if the sum of the security levels were positive it would be impossible
for both players to get their security level). U is the payoff matrix.
The fundamental theorem of two-player zero-sum games is that the inequal-
ity above must actually hold as an equation. In symbols, the fact is that

max_p min_q pUq = min_q max_p pUq.
This fact is called the Minimax Theorem. In words it says that if Row
plays in such a way that guarantees his security level, then Column cannot
get more than her security level. Also, if Column plays in such a way that
guarantees her security level, then Row cannot get more than his security level.
Hence the Minimax Theorem tells you how you should play zero-sum games (at
least against a "sensible" opponent): Each player should play to maximize his
or her security level. Why? One answer is that it maximizes your minimum
expected payoff. That is, there is a sense in which it is safe. This answer is
not compelling on its own. It becomes compelling in zero-sum games because
the Minimax Theorem says that you can only expect more than your security
level if your opponent gets less than her security level. One should not expect
a sensible player to settle for less than what she could guarantee for herself. If
you do assume that your opponent is sensible in this way, then you cannot hope
to do better than your security level. Hence playing a strategy that guarantees
your security level is the right way to play the game.
Remember: Mixed-strategy security levels are expected utilities. In match-
ing pennies you never get a payoff of exactly zero. Each time you play the
game you either win or you lose. However, if you play heads and tails with
equal probability, then your expected payoff is zero. Once again, it is essential
to remember that the payoffs are utilities. Although you are unhappy when
you lose, before you play you are indifferent between actually playing matching
pennies or not playing at all.
The recommendation that you play a strategy that guarantees your security
level is appropriate if your opponent is sensible. If your opponent does not
play a sensible strategy, then it might be appropriate to play something besides
your minimax strategy to exploit his stupidity. Specifically, if your opponent
(Row) always plays heads in matching pennies, then it would be silly for you to
randomize. You should play tails and win for sure.

6 Linear Programming and Zero-Sum Game Theory

I haven't forgotten that this is a course in Linear Programming. You figured
that there was just some extra time to kill and so I threw in an unrelated topic.
But no.
The Minimax Theorem is a simple consequence of the Duality Theorem of
Linear Programming. Seeing the relationship allows you to use Linear Program-
ming techniques to solve zero-sum games.
You should not be surprised to find that there is a relationship. Aside from
the cynical reasons (you figured that I wouldn't stray too far from the main topic
of the course), there is obviously something linear going on in the problems that
define security levels. Furthermore, the maxmin objective pUq looks a lot like
the yAx object that appeared in our discussions of duality and complementary
slackness.

Consider the following pair of LPs:

max w subject to pU - we ≥ 0, p · e = 1, p ≥ 0.

min v subject to Uq - ve ≤ 0, q · e = 1, q ≥ 0.
In these problems, e is a vector of ones (be careful: sometimes e has n ones,
sometimes m ones, depending on context; to test your understanding, figure out
which is which). In the first problem, the variables are w (a real number) and p
(an n-dimensional vector). The first constraint says that each component of pU
(there are m of them) should be greater than or equal to w. The second and
third constraints state that p should be a probability distribution for the Row
player (a mixed strategy). Suppose that the Row player uses the mixed strategy
p. If the Column player could observe this choice, then she could compute her
payoff for any strategy. pU is an m vector, the jth component of which gives the
expected payoff to Row if Row plays p and Column picks her jth pure strategy
(Column gets -1 times this). Hence if pU - we ≥ 0, then Row gets at least w (no
matter what Column does) when he uses p. It follows that the solution (p*, w*)
to the first LP above gives Row's security level (w*) and a strategy that attains
the security level (p*). Similarly, the second problem gives Column's security
level. Careful: The second problem does describe how to find Column's security
level, but the value of the problem actually gives the payoff to the Row player.
That is, if (q*, v*) is the solution to the second problem, then the security level
of the Column player is -v*.
Verifying that the problems are dual and confirming the relationship they
have to security levels is a bit confusing. It is a worthwhile exercise to verify
the relationship carefully.
A bit of careful accounting (no thinking, just remembering the definition
of dual linear programming problems) confirms that the second problem is the
dual of the first problem. Since both problems are feasible (for example, in the
first problem let p be anything that satisfies the second and third constraints
and let w be the smallest element in U), the Duality Theorem states that both
problems have solutions and that the values are the same.
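The first LP translates directly into code. Here is a sketch using scipy.optimize.linprog (the function name solve_zero_sum and the variable layout are my own choices, not standard notation); linprog minimizes, so we maximize w by minimizing -w:

import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(U):
    """Row's maxmin mixed strategy and security level for payoff matrix U."""
    n, m = U.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                 # variables (p_1,...,p_n, w); min -w
    A_ub = np.hstack([-U.T, np.ones((m, 1))])    # -pU + we <= 0, one row per column
    b_ub = np.zeros(m)
    A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)   # p sums to one
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)]    # p >= 0, w free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

p, w = solve_zero_sum(np.array([[1, -1], [-1, 1]]))   # matching pennies
print(p, w)                                           # about [0.5 0.5] and 0.0

Applying the same function to the matrix -U transposed gives Column's maxmin strategy and her security level, which is one way to see the two problems side by side.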

7 Examples Revisited

7.1 Colonel Blotto

The first thing to notice about Colonel Blotto is that the Colonel has two dom-
inated strategies: it is never in his interest to allocate all of his troops to one
mountain pass. He can successfully defend a pass with only two divisions. This
intuitive conclusion also follows from an examination of the payoff matrix. The
payoffs in the first row are all lower than (or equal to) the numbers in the same
column of the second row (and, symmetrically, the fourth row is dominated by
the third). Hence we can reduce the game to:
(2,0) (1,1) (0,2)
(2,1) 1 1 -1
(1,2) -1 1 1
If you study this game, you will see that the enemy's middle strategy (1,1)
is now dominated. The enemy will never win by sending only 1 division to a
location because Colonel Blotto has at least one division at both passes. Deleting
this strategy simplifies the game even further:
(2,0) (0,2)
(2,1) 1 -1
(1,2) -1 1
This game looks like matching pennies, so we know that the best strategy is
for both players to randomize 50-50 over their (remaining) strategies.
It should not surprise you that there are many variations of the game (de-
pending on the number of divisions the two sides have; what it takes to win a
battle; and what it takes to win the war).

7.2 Morra

To compute the pure-strategy security level in Morra, note that if your opponent
knew you were playing 12, she'd play 23; if she knew that you were playing 13,
then she'd play 12; if she knew you were playing 23, then she'd play 24; and if she
knew that you were playing 24, then she'd play 13. In each case, she'd win. Her
winnings would be at least $2 (if you played 13), so your pure-strategy security
level is -$2. The game is symmetric, so the Column player can guarantee that
she loses no more than $2 as well, but cannot do better. Hence there is a gap
between the pure-strategy security levels. The equilibrium strategy must be
mixed. This conclusion is not surprising. Mixed strategies appear in situations
where you do not want your opponent to be able to predict your behavior. There
are infinitely many mixed strategies that lead to an expected payoff of at least
zero. One possibility is to play 13 with probability .6 and 23 with probability .4
(and the other strategies with probability 0). If the other player plays 12, then
you lose $2 with probability .6 and you win $3 with probability .4. Your
expected payoff is zero. If the other player plays 13, then you always break even
(you both guess right when you play 23 and you both guess wrong when you play
13). Similarly, if your opponent plays 23, then you always break even. Finally, if
your opponent plays 24, you have an expected gain of $3(.6) - $4(.4) = $0.20. It
follows that if you play the indicated mixture, then you guarantee a non-negative
expected payoff. Your expected payoff will be positive if your opponent plays
24. Game theoretic analysis recommends that you mix between your first three
strategies (there are mixed strategies that guarantee an expected payoff of zero
that use 12 with positive probability). It is not surprising that your expected
payoff is zero. It is not surprising that you play randomly. It may be surprising
to learn that you should avoid using the strategy 24 (even though this is the
only strategy that gives you a chance of winning $4). Of course, if you think
that your opponent is likely to play 23 with high probability, then you should
not reject 24. The analysis implicitly demonstrates that it is not prudent to
play 23 with high probability, however.
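A quick numpy check of the mixture just described, against each of Column's four strategies:

import numpy as np

U = np.array([[ 0,  2, -3,  0],
              [-2,  0,  0,  3],
              [ 3,  0,  0, -4],
              [ 0, -3,  4,  0]])   # Morra, payoffs to Row
p = np.array([0, 0.6, 0.4, 0])     # 13 with probability .6, 23 with .4

print(p @ U)   # [0. 0. 0. 0.2] (up to rounding): never negative,
               # and strictly positive against 24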

7.3 Goofspiel

I do not have the energy to write down a 24 x 24 payoff matrix for goofspiel with
n = 3. We can confirm that a particular strategy is optimal for the n = 3 game
assuming a_i = i for i = 1, 2, 3. I claim that in this game an equilibrium strategy
is to play card i at move i. Suppose you play this strategy. If your opponent
plays 3 on the last move, then you are guaranteed a payoff of 0 (you break even
in the last round and you either tie in the first two rounds or lose the first and
win the more valuable second round). If your opponent does not play 3 on the
final round, then you win 3 on the final round and even if you lose the first two
rounds you still break even for the entire game. Hence your security level is at
least zero. Your opponent can play in the same way, however. Therefore, her
security level is also zero. It must be that the value of the game is zero and
(against a rational opponent) you can do no better than to play the strategy
that I described. The strategy of playing card i at round i would continue to
be optimal if you increased a_3 while keeping a_1 and a_2 constant. On the other
hand, if you decreased the relative importance of the third round, say by having
a_i = i + 3 for i = 1, 2, 3, then it would not be rational to always play 3 on the
third move.

8 More Examples

1. Calculate the optimal strategy.


1 2 3
1 16 -8 4
2 -24 -16 3
3 1 1 2
Here by playing the third strategy, Row can guarantee a payoff of one.
On the other hand, Column can hold player one to this payoff by playing
Column 2. Hence the game has a pure-strategy equilibrium point. The
value of the game (to the Row player) is 1.
2. Each of two players has an Ace, King, Queen, and Jack. They each si-
multaneously show a card. Player 1 wins if they both show an Ace, or if
neither shows an Ace and the cards do not match. Player 2 wins if exactly
one shows an Ace or if neither shows an Ace and the cards match. (The
winner receives a payment of $1 from the loser.)

I will treat Player 1 as the row player and Player 2 as the column player.
The payoff matrix looks like this:

ACE KING QUEEN JACK


ACE 1 -1 -1 -1
KING -1 -1 1 1
QUEEN -1 1 -1 1
JACK -1 1 1 -1

Using pure strategies, neither player can guarantee a win. The pure-strat-
egy security level for each player is therefore -1. Figuring out the mixed-
strategy security level "by hand" for a game with four strategies for each
player is tedious and hard. In this example, three of the strategies are
symmetric. This suggests a simplification.
Assume that Player 1 always plays Jack, Queen, and King with the same
probability. (This probability can be any number between 0 and 1/3; if the
probability is 0, then Player 1 always plays ACE; if the probability is 1/3,
then Player 1 never plays ACE, and plays each of the remaining cards
with the same probability.) You can view the game as having two pure
strategies for Player 1 (either he plays ACE or he doesn't) and the payoff
matrix becomes:
ACE KING QUEEN JACK
ACE 1 -1 -1 -1
NOT ACE -1 1/3 1/3 1/3

If you impose the same symmetry condition on Player 2, the game reduces
to:
ACE NOT ACE
ACE 1 -1
NOT ACE -1 1/3

It is now possible to find the mixed-strategy equilibrium for this 2 x 2 (that
is, two strategies for each player) game. Player 1's strategy will equalize
the payoff he gets from either strategy choice of Player 2. That is, the
probability of ACE, call it a, will satisfy:

a + (1 - a)(-1) = a(-1) + (1 - a)(1/3).

The solution to this equation is a = 2/5. When Player 1 mixes between
ACE and NOT ACE with probabilities 2/5 and 3/5 he gets a payoff of -1/5
whatever Player 2 does. Similarly, you can solve for Player 2's strategy
by equalizing Player 1's payoff. If b is the probability that Player 2 plays
ACE, then b should satisfy:

b + (1 - b)(-1) = b(-1) + (1 - b)(1/3),

or b = 2/5.
Now you can go back and check that the symmetry assumption that I
imposed is really appropriate. Both players are playing ACE with prob-
ability 2/5 and the other strategies with total probability 3/5. That means
that they play each of the non-ACE cards with probability 1/5. Under
this condition Player 1 expects to earn -1/5 from each of his original pure
strategies, and Player 2 can hold Player 1 to this amount by playing ACE
with probability 2/5 and the other three cards with probability 1/5 each
(the sketch below verifies this numerically).
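Here is the promised numpy check of the symmetric mixture in the full 4 x 4 game:

import numpy as np

U = np.array([[ 1, -1, -1, -1],
              [-1, -1,  1,  1],
              [-1,  1, -1,  1],
              [-1,  1,  1, -1]])       # order: Ace, King, Queen, Jack
p = np.array([0.4, 0.2, 0.2, 0.2])      # ACE w.p. 2/5, each other card 1/5

print(p @ U)   # about [-0.2 -0.2 -0.2 -0.2]: Player 1 earns -1/5
print(U @ p)   # about [-0.2 -0.2 -0.2 -0.2]: and Player 2 holds him there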
3. Player I's payoff matrix in a zero-sum game is:
TOP 1 2 3 4 5
BOTTOM 9 7 5 3 1

Find the pure and mixed strategy security levels of each player and the
equilibrium.
Using pure strategies, Player I can guarantee a payoff of 1 (using either
strategy). Player 2 can guarantee a loss of no more than 4 (by playing
the fourth column). This means that the game must have only a mixed-
strategy equilibrium. You have seen formulas that determine the best
mixed strategy (and you could find it using Excel), but when one of the
players has only two strategies there is a graphical way of finding the
solution. I will describe the process (come to lecture to see the picture).
On the x axis you denote the probability that the Row player plays TOP.
This number goes from zero to one. Next graph the payoff associated
with each of the Column player's pure strategies. Column one will be a
line segment that starts at (0, 9) (the Row player gets 9 if he plays TOP
with probability zero) and goes to (1, 1) (the Row player gets 1 if he plays
TOP with probability one). Do this for all of the strategies of the Column
player. So you get five line segments. For each x, the worst that Row
can do is the lowest of the five segments. Form a curve determined by
the minimum of the segments. The highest point on this curve is Row's
security level. For this example, all five segments intersect at the same
point, x_1 = 2/3. This is the point that leads to the highest value for Row.
Hence Row's security level is 11/3. There are many ways in which Column
can hold Row to this level. One way is for Column to play column 3 with
probability 1/3 and column 4 with probability 2/3. In this example, it is an
accident that all payoffs are equal at the same mixed strategy.
Algebraically, you can solve the problem like this. Notice that the higher
the probability that Row plays TOP, the more attractive it is for Column
to play "left" columns. If Column was sure that Row would play TOP, then
Column would play Column 1. As the probability of playing TOP drops, the
second column becomes a more attractive strategy. At some point, call it
x_1, Columns 1 and 2 yield the same payoff. The defining condition is

1·x_1 + 9(1 - x_1) = 2x_1 + 7(1 - x_1),

which implies that x_1 = 2/3. You can check that this mixture guarantees
Row a payoff of 11/3. (Explicitly check Column's other strategies.) Since
Row does worse with any other mixture, this must be his optimal strat-
egy. It is an accident that all five of Column's strategies work equally well
against Row's optimal strategy. It is not an accident that the mixture that
attains Row's security level makes Column indifferent between at least two
strategies. (The short computation below reproduces the graphical argument.)
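The graphical method is easy to reproduce on a grid; a small numpy sketch (the grid makes the answer approximate, but it lands on x_1 = 2/3 and value 11/3):

import numpy as np

U = np.array([[1, 2, 3, 4, 5],
              [9, 7, 5, 3, 1]])                        # TOP and BOTTOM

x = np.linspace(0, 1, 3001)                            # probability Row plays TOP
segments = np.outer(x, U[0]) + np.outer(1 - x, U[1])   # one column per segment
lower_envelope = segments.min(axis=1)                  # worst response at each x
best = lower_envelope.argmax()
print(x[best], lower_envelope[best])                   # about 0.6667 and 3.6667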
4. Consider the following game.

LEFT CENTER RIGHT


TOP 1 2 4
BOTTOM 9 5 1

This game also has no equilibrium in pure strategies. Row's (pure-strategy)
security level is 1, while Column can hold Row to 4 by playing the right
column. Figure out the probability x_1 of playing UP that equalizes the
payoffs of the first two columns in the table:

x_1 + 9(1 - x_1) = 2x_1 + 5(1 - x_1),

or x_1 = 4/5. When Row uses this mixture, the third column is strictly better
for Row (payoff 17/5) than either of the first two columns. Hence Row can
get at least 13/5 if he plays UP with probability 4/5. Row would not be guaranteed
to do better if he played UP with higher probability. If Column knows
that Row will play UP with higher probability, then Column would play
LEFT, leading to a payoff for Row of less than 13/5. So we have ruled out
mixed strategies with probability greater than 4/5 on UP. What about other
mixtures? For these, Column is likely to respond with either the Center or
Right column. Figure out the probability x_2 of playing UP that equalizes
the payoffs of the last two columns:

2x_2 + 5(1 - x_2) = 4x_2 + (1 - x_2),

or x_2 = 2/3. If Row uses this mixture, then he is guaranteed a payoff of 3,
which he will get if Column plays either his second or third strategy. (Row
will do even better if Column plays the left column.) Furthermore, if Row
places less weight on UP, Column will be able to reduce Row's payoff by

playing RIGHT, and if Row plays UP with a probability between x_2 and
x_1, then Column will hold Row's payoff below 3 by playing CENTER.
Since 3 > 13/5, Row's security level must be 3; his equilibrium strategy is
to play UP with probability 2/3. Column can hold player one to this payoff
by mixing between CENTER and RIGHT, playing each with probability
1/2.

