
Math Review: Week 3 [1]

Yoshifumi Konishi
Ph.D. in Applied Economics
COB 337j
E-mail: konis005@umn.edu
12. Quadratic Forms and Definite Matrices

Finally, we are going to study optimization problems. At least up until the 1970s, economics was
all about the study of rational optimizing agents. Though we often study cases of incomplete
information and imperfect competition, we still assume that agents intend to optimize their behaviors,
given the information they have. It is important to note that, when we say agents are rational,
we mean that agents make optimal choices according to their objectives. In economics, rationality
is never a statement about whether or not their objectives are reasonable or rational. We simply
take their objectives as given and solve their optimization problems. So, it is critical for us
to be very familiar with optimization techniques. To be more precise, we need to learn at least
three things: (i) When are the optimization problems of our interest well-defined? When do they
have a solution? (ii) Are the solutions unique? Are they global or local? (iii) How can we solve
them correctly and efficiently? In Week 1, we learned how to solve unconstrained one-variable
objective functions. This section discusses the case of more than one variable, but with a very
simple functional form: a quadratic form.
Before proceeding further, recall how to solve a one-variable optimization problem. Consider:

    min f(x) = x^2

FONC gives us a solution:

    2x = 0  ⟹  x = 0

We need to check whether this is a minimum or a maximum. The SOC at x = 0 is:

    2 > 0

So, this is at least a local minimum. But we can easily see that f(x) > 0 for all x ≠ 0. This
means that x = 0 is a global minimum. Now, a question arises naturally: Can we extend this sort
of simple analysis to cases of more than one variable? A natural extension of quadratic functions to
more than one variable is a quadratic form. Recall that a quadratic form on R^n is a real-valued
function Q : R^n → R such that:

    Q(x) = Σ_{i,j} a_ij x_i x_j

[1] This lecture note is adapted from the following sources: Simon & Blume (1994), W. Rudin (1976), A. Takayama
(1985), M. Wadachi (2000), and Toda & Asano (2000).
Note that a quadratic form takes the value of zero at x = 0. Recall again that we can represent
every quadratic form in matrix form, using a symmetric matrix A:

    Q(x) = x^T A x

For example, on R^2,

    a_11 x_1^2 + a_22 x_2^2 + a_12 x_1 x_2 = (x_1  x_2) | a_11       (1/2)a_12 | | x_1 |
                                                        | (1/2)a_12  a_22      | | x_2 |

The definiteness of the symmetric matrix A is defined by the characteristics of the original
function Q.
Definition 12-1 (Definiteness): Let A be an (n × n) symmetric matrix. Then, A is said to be:
(i) positive definite iff x^T A x > 0 for all x ≠ 0 in R^n;
(ii) positive semidefinite iff x^T A x ≥ 0 for all x ≠ 0 in R^n;
(iii) negative definite iff x^T A x < 0 for all x ≠ 0 in R^n;
(iv) negative semidefinite iff x^T A x ≤ 0 for all x ≠ 0 in R^n;
(v) indefinite iff x^T A x > 0 for some x and < 0 for some other x in R^n.
Note that if A is positive (negative) definite, then A is positive (negative) semidefinite. Now,
what's the point of this? Remember that x^T A x = 0 at x = 0. So, if A is positive definite,
then x = 0 is a unique global minimum of Q. If A is positive semidefinite, then x = 0 is a global
minimum (which may not be unique). And so on. So, if we are solving a maximization problem of a
quadratic form, what we want is negative semidefiniteness of A. In fact, as we will see later, we can
generalize this test to more general functional forms, where we look for negative semidefiniteness
of a Hessian matrix. To get a more geometric idea, take a look at Figures 16.2-16.6 in S&B.
Obviously, these definitions by themselves are of no help to us unless there is a convenient
method for identifying the definiteness of matrices. Luckily for us, there is one relatively easy
method.
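(Aside: if you recall eigenvalues from the linear-algebra review, definiteness of a symmetric matrix can equivalently be read off the signs of its eigenvalues: all positive means positive definite, all negative means negative definite, and the semidefinite and indefinite cases follow the same pattern. A minimal numerical sketch in Python, using numpy; the helper name classify and the tolerance are my own choices, not from the text:

    import numpy as np

    def classify(A, tol=1e-12):
        # Classify a symmetric matrix by the signs of its eigenvalues.
        w = np.linalg.eigvalsh(A)        # real eigenvalues of a symmetric matrix
        if np.all(w > tol):
            return "positive definite"
        if np.all(w < -tol):
            return "negative definite"
        if np.all(w >= -tol):
            return "positive semidefinite"
        if np.all(w <= tol):
            return "negative semidefinite"
        return "indefinite"

    print(classify(np.array([[2.0, 0.0], [0.0, 3.0]])))   # positive definite
    print(classify(np.array([[0.0, 9.0], [9.0, 0.0]])))   # indefinite

The minor-based method below is what you should use by hand; the eigenvalue test is handy as a numerical cross-check.)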
Definition 12-2 (Principal Submatrix and Minor): Let A be an (n × n) matrix. A (k × k)
submatrix of A obtained by deleting n − k columns, i_1, i_2, ..., i_{n-k}, and the n − k rows of the same
indices i_1, i_2, ..., i_{n-k} is called a k-th order principal submatrix of A. The determinant of a
(k × k) principal submatrix is called a k-th order principal minor of A.
Example 1. Consider a (3 × 3) matrix:

        | a_11  a_12  a_13 |
    A = | a_21  a_22  a_23 |
        | a_31  a_32  a_33 |

Question: How many principal submatrices of the third order are there? Answer: There is only one, A itself.
Question: How many second order submatrices are there (not necessarily principal submatrices)?
Answer: We need to find how many combinations of a row and a column (i, j) can be deleted. The possible combinations
are (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3). Note that deleting the 1st row and 2nd
column is different from deleting the 2nd row and 1st column. So, there are nine possible second
order submatrices of A. How about principal submatrices? There are only three, corresponding to (1, 1), (2, 2), (3, 3),
because we have to delete rows and columns of the same indices:

    Â_33 = | a_11  a_12 |     Â_22 = | a_11  a_13 |     Â_11 = | a_22  a_23 |
           | a_21  a_22 |            | a_31  a_33 |            | a_32  a_33 |
To determine the definiteness of a matrix, however, we mainly use one special family of principal minors, called
leading principal minors.

Definition 12-3 (Leading Principal Submatrix and Minor): Let A be an (n × n) matrix. The
k-th order principal submatrix of A obtained by deleting the last n − k rows and the last n − k
columns is called the k-th order leading principal submatrix of A. We denote it by A_k. Its
determinant is called the k-th order leading principal minor of A.
Example 2. For a (3 × 3) matrix:

        | a_11  a_12  a_13 |
    A = | a_21  a_22  a_23 |
        | a_31  a_32  a_33 |

the leading principal minors are:

    det(A_1) = det(a_11)

    det(A_2) = det | a_11  a_12 |
                   | a_21  a_22 |

    det(A_3) = det | a_11  a_12  a_13 |
                   | a_21  a_22  a_23 |
                   | a_31  a_32  a_33 |
Now, we are ready to state the main theorem of this section.

Theorem 12-1 (Definiteness): Let A be an (n × n) symmetric matrix. Then,
(i) A is positive definite if and only if all its leading principal minors are strictly positive (> 0);
(ii) A is positive semidefinite if and only if all its principal minors are nonnegative (≥ 0);
(iii) A is negative definite if and only if its leading principal minors alternate signs as follows:
det(A_1) < 0, det(A_2) > 0, det(A_3) < 0, etc.;
(iv) A is negative semidefinite if and only if all its principal minors of odd order are ≤ 0 and of
even order are ≥ 0;
(v) If some k-th order leading principal minor of A is nonzero but the sign pattern of the nonzero terms
does not fit either case (i) or case (iii), then A is indefinite.

Note that for positive or negative definiteness of A, we only need to check the leading principal
minors. But, if A is neither positive (negative) definite nor indefinite, then we must check all of
its principal minors. Furthermore, it is important to note that when some of the leading principal
minors are zero, A may not be indefinite and can be positive or negative semidefinite if the sign
pattern of its nonzero terms still obeys the patterns of (i) or (iii).
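Conditions (i) and (iii) of Theorem 12-1 are mechanical to check. A minimal sketch in Python (numpy; the function name leading_principal_minors and the example matrix are my own):

    import numpy as np

    def leading_principal_minors(A):
        # Determinants of the k-th order leading principal submatrices, k = 1..n.
        n = A.shape[0]
        return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

    A = np.array([[2.0, -1.0], [-1.0, 2.0]])
    m = leading_principal_minors(A)                      # [2.0, 3.0]
    if all(d > 0 for d in m):
        print("positive definite")                       # this branch fires here
    elif all((-1) ** k * d > 0 for k, d in enumerate(m, start=1)):
        print("negative definite")                       # signs alternate, starting < 0
    else:
        print("not definite: check all principal minors for semidefiniteness")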
Example 3. Consider a (4 × 4) matrix:

        | a_11  a_12  a_13  a_14 |
    A = | a_21  a_22  a_23  a_24 |
        | a_31  a_32  a_33  a_34 |
        | a_41  a_42  a_43  a_44 |

Question: What are the leading principal submatrices? Question: Consider each of the following
cases. Are they positive definite, negative definite, or indefinite?
(a) det(A_1) > 0, det(A_2) > 0, det(A_3) > 0, det(A_4) > 0  ⟹  A is positive definite.
(b) det(A_1) < 0, det(A_2) > 0, det(A_3) < 0, det(A_4) > 0  ⟹  A is negative definite.
(c) det(A_1) > 0, det(A_2) < 0, det(A_3) > 0, det(A_4) < 0  ⟹  A is indefinite.
(d) det(A_1) > 0, det(A_2) > 0, det(A_3) = 0, det(A_4) > 0  ⟹  A is not positive definite, but may
be positive semidefinite. If it is not positive semidefinite, then it is indefinite.
(e) det(A_1) = 0, det(A_2) < 0, det(A_3) > 0, det(A_4) > 0  ⟹  A is indefinite, because of det(A_2).
If a symmetric matrix is a diagonal matrix, then things become very easy. Recall that the deter-
minant of a diagonal matrix is simply the product of the diagonal terms. So, its leading principal
minors are:

    det(A_1) = a_11,  det(A_2) = a_11 a_22,  ...,  det(A_n) = a_11 a_22 ... a_nn

Thus, we have the following convenient theorem.

Theorem 12-2 (Definiteness of Diagonal Matrices): Let A be an (n × n) diagonal matrix.
Then,
(i) A is positive definite if and only if all the a_ii are strictly positive;
(ii) A is negative definite if and only if all the a_ii are strictly negative;
(iii) A is positive semidefinite if and only if all the a_ii are nonnegative (≥ 0);
(iv) A is negative semidefinite if and only if all the a_ii are nonpositive (≤ 0);
(v) A is indefinite if two of the a_ii have opposite signs.
In summary, x = 0 is the unique solution to max Q(x) = x^T A x if A is N.D., and to
min Q(x) = x^T A x if A is P.D.
Lastly, in economic applications, we often have a linear constraint of the form:

    max Q(x)  s.t.  c·x = 0, where c = (c_1, ..., c_n)

In this case, we need "A is negative definite on the constraint set {x : c·x = 0}". To check
this, we form a bordered matrix:

    H_{n+1} = | 0    c |                                            (1)
              | c^T  A |

            = | 0    c_1   c_2   ...  c_n  |
              | c_1  a_11  a_12  ...  a_1n |
              | c_2  a_21  a_22  ...  a_2n |
              | ⋮    ⋮     ⋮     ⋱    ⋮    |
              | c_n  a_n1  a_n2  ...  a_nn |
Theorem 12-3: Consider a bordered matrix of the form in (1). Suppose that c_1 ≠ 0.
(i) If the last n leading principal minors of H_{n+1} have the same sign, then the quadratic form Q is
positive definite on the constraint set {x : c·x = 0} (so that x = 0 is a unique global minimum).
(ii) If the last n leading principal minors of H_{n+1} alternate in sign, then the quadratic form Q is
negative definite on the constraint set {x : c·x = 0} (so that x = 0 is a unique global maximum).

We can generalize the result of this theorem to more than one constraint. However, we rarely
see it used. So, I believe it is enough that you know where to find it in your textbook
when you need to refer to it. (But you will see relatively more frequent use of a similar technique,
the bordered Hessian, later.)
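As a numerical sketch of how Theorem 12-3 is applied for a single linear constraint (numpy; the helper name bordered and the example matrix are my own, hypothetical choices):

    import numpy as np

    def bordered(A, c):
        # H = [[0, c], [c^T, A]] for one linear constraint c.x = 0.
        n = len(c)
        H = np.zeros((n + 1, n + 1))
        H[0, 1:] = c
        H[1:, 0] = c
        H[1:, 1:] = A
        return H

    A = np.array([[1.0, 0.0], [0.0, 1.0]])   # Q(x) = x_1^2 + x_2^2
    c = np.array([1.0, 1.0])                 # constraint x_1 + x_2 = 0
    H = bordered(A, c)
    n = len(c)
    minors = [np.linalg.det(H[:k, :k]) for k in range(2, n + 2)]
    print(minors)   # [-1.0, -2.0]: same sign -> Q is PD on {x : c.x = 0}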
13. Unconstrained Optimization

In this section, we deal with unconstrained optimization. The analysis of this case is impor-
tant, because it corresponds to the interior solutions of any (constrained or unconstrained)
optimization problem. We have learned that for functions of one variable, a necessary condition
for an interior maximum is that the derivative of the objective must be equal to zero. We also had
a sufficient condition for an interior maximum, which is that the second derivative must be negative.
The main results for functions of several variables are analogous to these. First, let's review a key
term.

Definition 3-11 (Interior): For any set E ⊆ X, a point x is an interior point of E if there exists
a neighborhood N of x such that N ⊆ E. We denote the set of all interior points of E by int(E).
Example 1. Consider the set E = {(x, y) : x^2 + y^2 ≤ 1}. Question: What does this set look like in
R^2? Answer: It is the area inside (and including) the unit circle. Consider the point (0, 1). Is it an interior point? No,
it is a boundary point. How about (0, 1/2)? 0^2 + (1/2)^2 = 1/4 < 1, so it is an interior point. Now,
consider the set E = {(x, y) : x ≥ 0, y > 0}. Is the point (0, 1) a boundary or an interior point? Answer:
It is a boundary point. To convince yourself, try to construct an open ball around (0, 1) and see if
any open ball can be contained in E.
Theorem 13-1 (FONC for Local Max/Min): Let F : U ⊆ R^n → R be a C^1 function. Suppose
that x* ∈ U is a local maximum or minimum of F in U. Suppose also that
(i) U is open; or
(ii) x* ∈ int(U).
Then,

    DF(x*) = 0^T, i.e. ∂F/∂x_i (x*) = 0 for all i

It is important to keep in mind that (i) this condition is a necessary condition (DF(x*) = 0^T
does not imply that x* is a local min or max; it simply means that x* is a critical point of F) and
(ii) this necessary condition only works for interior points. (See the graph.)
What about second order conditions? Recall that the second order derivatives of F can be
summarized in the Hessian of F:

    H = D^2 F = | ∂^2 f/∂x_1^2     ∂^2 f/∂x_2 ∂x_1  ...  ∂^2 f/∂x_n ∂x_1 |
                | ∂^2 f/∂x_1 ∂x_2  ∂^2 f/∂x_2^2     ...  ∂^2 f/∂x_n ∂x_2 |
                | ⋮                ⋮                ⋱    ⋮                |
                | ∂^2 f/∂x_1 ∂x_n  ∂^2 f/∂x_2 ∂x_n  ...  ∂^2 f/∂x_n^2    |
Recall also that the Hessian is symmetric. We have the following second-order conditions.

Theorem 13-2 (SOSC for Local Max/Min): Let F : U ⊆ R^n → R be a C^2 function. Suppose
that U is open and that DF(x*) = 0^T.
(i) If the Hessian D^2 F(x*) is negative definite, then x* is a (strict) local maximum of F;
(ii) If the Hessian D^2 F(x*) is positive definite, then x* is a (strict) local minimum of F;
(iii) If the Hessian D^2 F(x*) is indefinite, then x* is neither a local max nor a local min of F.
Definition 13-1 (Saddle Point): A critical point x* of F for which the Hessian D^2 F(x*) is
indefinite is called a saddle point of F.

A saddle point is a minimum of F in some directions and a maximum in other directions. It
looks like a saddle, just like in Figure 16.4.
To prove Theorem 13-2, we utilize two of the important results learned in Weeks 1 and 2. Let's
see a sketch of the proof, as it is a good review of the previous material. In Week 1, we
learned Taylor approximation. Let's write a second-order approximation of F around the critical
point x*. Let h be a change in x, so that x* + h represents an arbitrary point around x*. In a
sufficiently small neighborhood of x*, therefore,

    F(x* + h) ≈ F(x*) + DF(x*)h + (1/2) h^T D^2 F(x*) h

Because DF(x*) = 0, DF(x*)h = 0. So, we can rewrite this as:

    F(x* + h) − F(x*) ≈ (1/2) h^T D^2 F(x*) h

We learned that, if D^2 F(x*) is negative definite, then h^T D^2 F(x*) h < 0 for all h ≠ 0. This means that,
for all h ≠ 0,

    F(x* + h) − F(x*) < 0, or F(x* + h) < F(x*)

This means that, among all points around x*, F(x*) gives the highest value. In other words, F(x*) is
a local maximum. We can easily adapt the argument for the positive definite case.
Example 2. Let's work through a concrete problem. Consider the function:

    F(x, y) = x^3 + y^3 − 9xy

Let's find the critical points first, using the FOCs:

    F_x = 3x^2 − 9y = 0    (1)
    F_y = 3y^2 − 9x = 0    (2)

Now, from (1), y = (1/3)x^2. Substitute this into (2) and manipulate:

    3[(1/3)x^2]^2 − 9x = (1/3)x^4 − 9x = 0
    or x^4 − 27x = 0
    or x(x^3 − 27) = 0

So, x = 0 or 3. Thus, the solutions to this system are (x, y) = (0, 0) and (3, 3). Now, to determine
which one is a local maximum or minimum, let's check the Hessian:

    H = | F_xx  F_xy | = | 6x  −9 |
        | F_xy  F_yy |   | −9  6y |

At (x, y) = (0, 0),

    H = |  0  −9 |
        | −9   0 |

To check the definiteness of the Hessian, compute the leading principal minors:

    det(H_1) = det(0) = 0

    det(H_2) = det |  0  −9 | = −81
                   | −9   0 |

This is indefinite. So, (0, 0) is neither a local maximum nor a local minimum. It is a saddle point. How
about (3, 3)?

    H = | 18  −9 |
        | −9  18 |

    det(H_1) = det(18) = 18 > 0
    det(H_2) = 18·(18) − 81 = 243 > 0

Thus, it is positive definite, and (3, 3) is a local minimum.
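Example 2 can also be verified symbolically; a minimal sketch with sympy (the filtering to real solutions is my own bookkeeping):

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    F = x**3 + y**3 - 9*x*y

    # real critical points of F
    crit = [s for s in sp.solve([sp.diff(F, x), sp.diff(F, y)], [x, y], dict=True)
            if all(v.is_real for v in s.values())]
    H = sp.hessian(F, (x, y))
    for pt in crit:
        Hp = H.subs(pt)
        print(pt, 'det(H1) =', Hp[0, 0], 'det(H2) =', Hp.det())
    # {x: 0, y: 0}: det(H1) = 0,  det(H2) = -81 -> indefinite (saddle point)
    # {x: 3, y: 3}: det(H1) = 18, det(H2) = 243 -> positive definite (local min)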
It is important to realize that the above second-order conditions are sufficient conditions for
local minima and maxima. The converse is not true. That is, local minima (maxima)
do not imply positive (negative) definiteness of the Hessian. To see why, consider the following
example.

Example 3. Consider the minimization problem:

    min f(x) = x^4

As you can see from the graph, its strict local minimum is x = 0. But check the first-order and
second-order conditions: f'(x) = 4x^3 and f''(x) = 12x^2, so that f'(0) = 0 and f''(0) = 0. The Hessian of f
is simply the scalar H = 0:

    det(H) = 0

Thus, the Hessian is positive semidefinite (it can equally be called negative semidefinite in this case). So,
we need second order necessary conditions (SONC).
Theorem 13-3 (SONC for Local Max/Min): Let F : U ⊆ R^n → R be a C^2 function.
(i) If x* ∈ int(U) is a local maximum of F, then D^2 F(x*) is negative semidefinite;
(ii) If x* ∈ int(U) is a local minimum of F, then D^2 F(x*) is positive semidefinite.
Up to now, we have discussed local minima and local maxima. But what we are really after are
global maxima and minima. In Week 1, we studied conditions for global maxima and minima
for functions of one variable. Recall that we had:

Theorem 4-15 (Global max/min): Suppose that f : I → R^1 is twice-differentiable and I is a
connected interval. Suppose further that:
(i) x is a local maximum (minimum) of f;
(ii) x is the only critical point of f on I.
Then, x is the global maximum (minimum) of f.

Theorem 4-16 (Global max/min): Suppose that f : [a, b] → R^1 is twice-differentiable on (a, b).
If f'' is never zero on (a, b), then f has at most one critical point in (a, b). This critical point is a
global minimum if f'' > 0 and a global maximum if f'' < 0.

In summary, we need either one of the conditions:
(a) x is a local maximum (minimum) of f and x is the only critical point of f on a connected
interval I; or
(b) f'' ≥ 0 (for a minimum) or f'' ≤ 0 (for a maximum) for all x.
As we will see in a moment, we have analogous conditions for global maxima for functions of
several variables. Before discussing them further, we need to define several related concepts.

Definition 13-2 (Concave & Convex Functions): A function F : U ⊆ R^n → R is concave on
U if and only if for all x, y ∈ U and for all λ ∈ [0, 1], we have:

    F[λx + (1 − λ)y] ≥ λF(x) + (1 − λ)F(y)

A function F : U ⊆ R^n → R is convex on U if and only if for all x, y ∈ U and for all λ ∈ [0, 1], we
have:

    F[λx + (1 − λ)y] ≤ λF(x) + (1 − λ)F(y)

A function F : U ⊆ R^n → R is strictly concave on U if and only if for all x ≠ y ∈ U and for all
λ ∈ (0, 1), we have:

    F[λx + (1 − λ)y] > λF(x) + (1 − λ)F(y)

A function F : U ⊆ R^n → R is strictly convex on U if and only if for all x ≠ y ∈ U and for all
λ ∈ (0, 1), we have:

    F[λx + (1 − λ)y] < λF(x) + (1 − λ)F(y)
For functions of one variable, we can easily see what concavity and convexity mean graphically.
(See the graphs.) Note that a linear function is both convex and concave. Now, we have a closely
related concept for sets. You may find them a bit confusing, but it is very important that you are
able to use them correctly. Believe me, you will see them everywhere in your prelim and coursework.

Definition 13-3 (Convex Sets): A set U ⊆ R^n is convex if and only if for all x, y ∈ U and for all
λ ∈ [0, 1],

    λx + (1 − λ)y ∈ U

There is no such thing as a concave set. We say that sets that are not convex are nonconvex.
Moreover, note that, for functions of one variable, f'' ≤ 0 defines a concave function and
f'' ≥ 0 defines a convex function. Thus, condition (b) above translates into the statement:
If a function is concave, then a critical point is a global maximum. If a function is convex, then a
critical point is a global minimum. Now, we are ready to state the main theorem:
Theorem 13-4 (Sufficiency for Global Min/Max): Let F : U ⊆ R^n → R be a C^2 function,
where U is a convex open subset of R^n.
(a) The following three conditions are equivalent:
(i) F is a concave function on U;
(ii) F(y) − F(x) ≤ DF(x)(y − x) for all x, y ∈ U;
(iii) D^2 F(x) is negative semidefinite for all x ∈ U.
(b) The following three conditions are equivalent:
(i) F is a convex function on U;
(ii) F(y) − F(x) ≥ DF(x)(y − x) for all x, y ∈ U;
(iii) D^2 F(x) is positive semidefinite for all x ∈ U.
(c) If F is a concave function on U and DF(x*) = 0 for some x* ∈ U, then x* is a global maximum
of F on U.
(d) If F is a convex function on U and DF(x*) = 0 for some x* ∈ U, then x* is a global minimum
of F on U.

Question: What do (ii) and (iii) mean for one-variable functions? Answer: For (ii), draw a
picture: the graph of a concave function lies below each of its tangent lines. For (iii), D^2 F(x) = F''(x), so
F is concave if and only if F''(x) ≤ 0 for all x.
Note the difference between Theorems 13-2, 13-3 and 13-4.
For local maxima,

    [H(x*) is ND and DF(x*) = 0]  ⟹  [x* is a local maximum]
    [x* is a local maximum]  ⟹  [H(x*) is NSD and DF(x*) = 0]

For global maxima,

    [H(x) is NSD for all x and DF(x*) = 0]  ⟹  [x* is a global maximum]
Example 4. Recall the problem in Example 2:

    F(x, y) = x^3 + y^3 − 9xy

We had two critical points: (x, y) = (0, 0) and (3, 3). We concluded that at (0, 0) the Hessian is
indefinite, so that it is neither a local maximum nor a local minimum. On the other hand, we found that at
(3, 3) the Hessian is positive definite, so that it is a local minimum. Now, the question is whether
or not this local minimum is also a global minimum. To find out, we need to check the Hessian at
arbitrary points (x, y):

    H = | F_xx  F_xy | = | 6x  −9 |
        | F_xy  F_yy |   | −9  6y |

We need to check all leading principal minors first:

    det(H_1) = 6x  ?  0
    det(H_2) = 36xy − 81  ?  0

Neither can be signed at arbitrary points. So, we cannot conclude from this test whether (3, 3) is a global minimum
or not. (Indeed it is not: F(x, 0) = x^3 → −∞ as x → −∞, so F has no global minimum.)
Example 5. Consider an additively-separable utility function:

    F(x, y) = u(x) + v(y)

Suppose that both u(·) and v(·) are concave functions, so that u'' ≤ 0 and v'' ≤ 0. Question:
Is the original utility function F concave? Answer: To answer this, we need to check the
Hessian at arbitrary points. F_xx = u''; F_xy = F_yx = 0; F_yy = v''. So, the Hessian is:

    H = | F_xx  F_xy | = | u''  0   |
        | F_xy  F_yy |   | 0    v'' |

For NSD, we need to check all principal minors. We have two 1st-order principal minors and one
2nd-order principal minor:

    det(u'') = u'' ≤ 0,  det(v'') = v'' ≤ 0,  det | u''  0   | = u'' v'' ≥ 0
                                                  | 0    v'' |

So, they are consistent with NSD, and F is a concave function. Question: What can you say if u(·)
and v(·) are strictly concave? Answer: F is a strictly concave function.
Lastly, as a corollary to Theorem 13-4, we have the following result.

Theorem 13-5 (Uniqueness of Global Min/Max): Let F : U ⊆ R^n → R be a C^2 function,
where U is a convex open subset of R^n.
(c) If F is a strictly concave function on U and DF(x*) = 0 for some x* ∈ U, then x* is the unique
global maximum of F on U.
(d) If F is a strictly convex function on U and DF(x*) = 0 for some x* ∈ U, then x* is the unique
global minimum of F on U.
To check strict concavity (convexity) of a function, we have the following characterization
theorem (see Takayama A., Mathematical Economics, 1985, pp. 125-126).

Theorem 13-6: Let F : U ⊆ R^n → R be a C^2 function, where U is a convex open subset of R^n.
(a) F is a strictly concave function on U if D^2 F(x) is negative definite for all x ∈ U.
(b) F is a strictly convex function on U if D^2 F(x) is positive definite for all x ∈ U.
Note that the converse may not be true (recall Example 3: f(x) = x^4 is strictly convex, yet f''(0) = 0).
14. Constrained Optimization I: First-Order Conditions

The objective of this and the next section is for us to be able to solve optimization problems
of the following form:

    max_{x ∈ R^n} F(x_1, x_2, ..., x_n)

    subject to g_1(x_1, x_2, ..., x_n) ≤ b_1
               ...
               g_k(x_1, x_2, ..., x_n) ≤ b_k

    and        h_1(x_1, x_2, ..., x_n) = c_1
               ...
               h_m(x_1, x_2, ..., x_n) = c_m

That is, an optimization problem of n decision variables with k inequality constraints and m equality
constraints. Note that this form of optimization arises very naturally in economics. For example,
a utility maximization problem is generally of the form:

    max_{x ∈ R^n} U(x_1, x_2, ..., x_n)  s.t.  p·x ≤ M;  x_1, ..., x_n ≥ 0

The last n constraints are usually called nonnegativity constraints and are assumed to be there,
whether explicitly stated or not, because a consumer can never consume a negative amount of a
good (except in some special cases). Other common economic problems include profit maximization:

    max_{x ∈ R^n} p·y − w·x  s.t.  f_1(x) ≥ y_1, ..., f_m(x) ≥ y_m;  x_1, ..., x_n ≥ 0

and cost minimization:

    min_{x ∈ R^n} p·x  s.t.  U(x_1, x_2, ..., x_n) ≥ Ū;  x_1, ..., x_n ≥ 0
14-1. Equality Constraints without Inequality Constraints

Let's first consider an optimization problem with equality constraints only. The standard approach
is called the Lagrangian method; you may have learned it somewhere in your calculus classes. Consider an objective
function of two decision variables with one equality constraint:

    max_{x ∈ R^2} f(x_1, x_2)  s.t.  h(x_1, x_2) = c    (1)

Then, we have the following result.
Then, we have the following result.
Result 14-1 (FONC for (1)): Suppose that in Problem (1) ) and / are C
1
function and that
(r

1
, r

2
) is the solution. If 1
1
/(r

1
, r

2
) ,= 0 (that is, (r

1
, r

2
) is not a critical point of /), then there
exists a real number `

such that (r

1
, r

2
, `

) is a critical point of the following function:


/(r
1
, r
2
, `) = )(r
1
, r
2
) +`[c /(r
1
, r
2
)]
Thus, by FONC, we must have:
1
1
1(r

1
, r

2
, `

) = 0
12
In other words,
01
0r
1
(r

1
, r

2
, `

) = 0,
01
0r
2
(r

1
, r

2
, `

) = 0,
01
0`
(r

1
, r

2
, `

) = 0
This is a very surprising result (at least to me). It basically says that we can turn constrained
optimization into unconstrained optimization, and just working through the FONCs can give you (can-
didate) solutions. As we will see, this method works (with some modification) for inequality con-
straints as well. The function L is called a Lagrange function and the miraculous variable λ is
called a Lagrange multiplier. The condition that (x_1*, x_2*) is not a critical point of h is called
a constraint qualification (CQ). As we get into more complicated problems, we will see various con-
straint qualifications. But, in most economic problems, this condition is automatically satisfied,
because the constraint is likely to be a linear combination of the variables:

    a_1 x_1 + ... + a_n x_n ≤ M

in which case, we only need a_i ≠ 0 for some i.
Now, let's see geometrically why this result obtains. Consider Problem (1) again. Note
first that the equality constraint forms a level set:

    C = {(x_1, x_2) : h(x_1, x_2) = c}

In two-dimensional space, this set is represented by some curve. Now, consider the objective, f(x_1, x_2).
We can also draw its level sets for each and every real number a:

    F(a) = {(x_1, x_2) : f(x_1, x_2) = a}

Suppose that an increase in a is represented by moves of the level sets in the northeast direction.
Then, the maximum occurs exactly at a tangency of F(a) and C. Recall the Implicit Function
Theorem, which says that the slope of the constraint set is given by:

    dx_2/dx_1 = − (∂h/∂x_1)/(∂h/∂x_2)

This must be equal to the slope of the level set F(a) at the optimum, which is also given by the Implicit
Function Theorem:

    dx_2/dx_1 = − (∂f/∂x_1)/(∂f/∂x_2)

Thus, we have:

    (∂h/∂x_1)/(∂h/∂x_2) (x*) = (∂f/∂x_1)/(∂f/∂x_2) (x*)

Or,

    (∂f/∂x_1)/(∂h/∂x_1) (x*) = (∂f/∂x_2)/(∂h/∂x_2) (x*)

So, let λ* be this common number:

    λ* = (∂f/∂x_1)/(∂h/∂x_1) (x*) = (∂f/∂x_2)/(∂h/∂x_2) (x*)
Rewrite these as:

    λ* = (∂f/∂x_1)/(∂h/∂x_1) (x*)  ⟹  ∂f/∂x_1 (x*) − λ* ∂h/∂x_1 (x*) = 0
    λ* = (∂f/∂x_2)/(∂h/∂x_2) (x*)  ⟹  ∂f/∂x_2 (x*) − λ* ∂h/∂x_2 (x*) = 0

Along with the original constraint,

    c − h(x*) = 0

we have a system of three equations in three unknowns. Note that this system is exactly
the set of FONCs for the Lagrangian:

    ∂L/∂x_1 (x_1*, x_2*, λ*) = ∂f/∂x_1 (x*) − λ* ∂h/∂x_1 (x*) = 0
    ∂L/∂x_2 (x_1*, x_2*, λ*) = ∂f/∂x_2 (x*) − λ* ∂h/∂x_2 (x*) = 0
    ∂L/∂λ (x_1*, x_2*, λ*) = c − h(x*) = 0
A couple of caveats apply in using this result. First, λ* cannot be defined at x* if ∂h/∂x_1 (x*) = 0 or
∂h/∂x_2 (x*) = 0, because the right-hand sides of the equations:

    λ* = (∂f/∂x_1)/(∂h/∂x_1) (x*) = (∂f/∂x_2)/(∂h/∂x_2) (x*)

are not well defined in such a case. Let's see why this can be a problem in a concrete example.
Example 1. Consider the problem:

    max_{x ∈ R^2} f(x_1, x_2) = x_1 x_2  s.t.  h(x_1, x_2) = x_1 = 1

Note that ∂h/∂x_1 = 1 but ∂h/∂x_2 = 0. Let's draw a picture. In this case, x_1 is fixed at 1, but
there is no constraint on x_2 (in general, if ∂h/∂x_2 = 0 for all x_2, there is no constraint on x_2).
Thus, we can take any value of x_2, the function f(x_1, x_2) = x_1 x_2 is unbounded under this
constraint, and there can be no solution.

In general, when CQ is violated, there may or may not be a solution. Caution: This ex-
ample is not meant as an example of CQ being violated. In fact, CQ is still satisfied, because
(∂h/∂x_1, ∂h/∂x_2) = (1, 0) ≠ (0, 0). Rather, it is an example illustrating what kind of problems you
might encounter when CQ is violated. The following example is a case in which this constraint
works just fine.
Example 2. Consider the problem:

    min_{x ∈ R^2} f(x_1, x_2) = x_1^2 + x_2^2  s.t.  h(x_1, x_2) = x_1 = 1

The level sets of f(x_1, x_2) are obviously circles. The bigger the value of f, the bigger the circle
gets. So, the minimum obviously occurs at x_1 = 1 and x_2 = 0, where we have a tangency.
(Note that if this were a maximization problem, then there would be no solution.)
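A minimal symbolic sketch of Example 2's FONCs (sympy):

    import sympy as sp

    x1, x2, lam = sp.symbols('x1 x2 lam', real=True)
    f = x1**2 + x2**2
    L = f + lam * (1 - x1)      # Lagrangian f + lam*[c - h(x)], with h(x) = x1, c = 1

    sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)
    print(sol)                  # [{x1: 1, x2: 0, lam: 2}]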
Another caveat is about how to set up the Lagrangian function. As you may be aware, (i) we
can have + or − before the Lagrange multiplier, and (ii) we can have h(x_1, x_2) − c or c − h(x_1, x_2).
Mathematically, to obtain a solution, it does not matter which combination of these (four in total)
you have in your Lagrangian setup. However, to get the right interpretation of the
value of the Lagrange multiplier, it is sometimes important to set it up in the right manner. We will
discuss this point later. For this reason, I advise that you use one of the following setups
and make it a habit:

    L(x_1, x_2, λ) = f(x_1, x_2) + λ[c − h(x_1, x_2)]
    L(x_1, x_2, λ) = f(x_1, x_2) − λ[h(x_1, x_2) − c]
Lastly, we generalize this Lagrangian method to n variables and m equality constraints.

Theorem 14-2 (FONC for Equality Constraints): Consider the maximization or minimization
problem of the form:

    max_{x ∈ R^n} F(x_1, x_2, ..., x_n)

    subject to h_1(x_1, x_2, ..., x_n) = c_1
               ...
               h_m(x_1, x_2, ..., x_n) = c_m

Suppose that F, h_1, ..., h_m are C^1 functions and that x* = (x_1*, x_2*, ..., x_n*) is a local maximum (or
minimum) of F on the constraint set. If x* is not a critical point of h = (h_1, ..., h_m), then there
exists a vector of Lagrange multipliers λ* = (λ_1*, λ_2*, ..., λ_m*) such that (x*, λ*) is a critical point of
the Lagrangian function:

    L(x, λ) = F(x) + λ·[c − h]
            = F(x) + λ_1[c_1 − h_1(x)] + ... + λ_m[c_m − h_m(x)]

Note that we haven't defined what it means to be a critical point of a vector-valued multivariate
function h.
Definition 14-1 (Critical Point): Let h : R^n → R^m be a C^1 function. A point x* is said to be
a critical point of h if and only if the rank of its Jacobian matrix, rank(Dh(x*)), is < m.
14-2. Inequality Constraints without Equality Constraints

What if the constraints are inequality constraints? Things get a bit more complicated. To develop
our intuition, let's consider a simple case of two variables and one inequality constraint:

    max_{x ∈ R^2} f(x_1, x_2)  s.t.  g(x_1, x_2) ≤ b    (2)

For this problem, two cases can happen. Case 1: The constraint is binding at the optimum, i.e.
g(x_1*, x_2*) = b. Case 2: The constraint is not binding, i.e. g(x_1*, x_2*) < b. In other words, the
optimum happens in the interior of the constraint set. Let's see this graphically. (See the Figure.)

Case 1. The maximum would have occurred outside the constraint set if there were no constraint.
In this case, we could set it up as a problem with an equality constraint. Thus, the constrained
problem is exactly the same as the one we saw in (1) before, and the FONCs would be:

    ∂f/∂x_1 (x*) − λ* ∂g/∂x_1 (x*) = 0    (3)
    ∂f/∂x_2 (x*) − λ* ∂g/∂x_2 (x*) = 0

Case 2. The maximum would be exactly the same as if there were no constraint. In this case, we
could set it up as an unconstrained optimization. The FONCs would be just:

    ∂f/∂x_1 (x*) = 0    (4)
    ∂f/∂x_2 (x*) = 0
So, we just need a convenient device that lets us deal with both cases. The following
trick, called a complementary slackness condition, does the job:

    λ*[b − g(x_1*, x_2*)] = 0

Let's examine what this does for us. First, note that (3) becomes (4) if λ* = 0. Suppose that the
constraint is binding. Then, b − g(x_1*, x_2*) = 0, so λ* can take any value. Suppose that the constraint
is not binding. Then, b − g(x_1*, x_2*) > 0, so λ* must be zero. So, we have an ideal relationship:

    Binding constraint  ⟹  ∂f/∂x_1 (x*) − λ* ∂g/∂x_1 (x*) = 0,  ∂f/∂x_2 (x*) − λ* ∂g/∂x_2 (x*) = 0
    Non-binding constraint  ⟹  ∂f/∂x_1 (x*) = 0,  ∂f/∂x_2 (x*) = 0

Question: Can it happen that λ* = 0 and b − g(x_1*, x_2*) = 0? Answer: Yes. It is called a
degenerate case. It rarely happens. But when it does, it means that the unconstrained optimum
occurs exactly at the boundary of the constraint set. (See the Figure.) These conditions can be
summarized as the following result.
Result 14-3 (FONC for (2)): Suppose that in Problem (2) f and g are C^1 functions and that
(x_1*, x_2*) is the solution. Suppose that, if g(x_1*, x_2*) = b, then Dg(x_1*, x_2*) ≠ 0 (that is, (x_1*, x_2*) is not a
critical point of g). Then, there exists a real number λ* such that, for the Lagrangian function
defined by:

    L(x_1, x_2, λ) = f(x_1, x_2) + λ[b − g(x_1, x_2)]

we have:
(i) ∂L/∂x_1 (x_1*, x_2*, λ*) = 0;
(ii) ∂L/∂x_2 (x_1*, x_2*, λ*) = 0;
(iii) g(x_1*, x_2*) ≤ b,  λ* ≥ 0,  λ*[b − g(x_1*, x_2*)] = 0.

I advise that you write the last condition (iii) as a whole set. This way, you are less likely to
forget parts of it when there are many mixed constraints. Note that nonnegativity of the Lagrange multiplier
comes only if you set up the Lagrangian function right. For this problem, there are only two ways
we can set it up right:

    L(x_1, x_2, λ) = f(x_1, x_2) + λ[b − g(x_1, x_2)]
    L(x_1, x_2, λ) = f(x_1, x_2) − λ[g(x_1, x_2) − b]
Use + in front of λ if you put the RHS of the less-than-or-equal constraint first (i.e. b − g(x_1, x_2)), and use −
in front of λ if you put the RHS of the less-than-or-equal constraint second (i.e. g(x_1, x_2) − b). Question:
What if we have a resource constraint of the form b ≤ g(x_1, x_2)? Answer: The same rule applies,
so the signs change. Use + in front of λ if you put the RHS of the less-than-or-equal constraint first (i.e.
g(x_1, x_2) − b). That is, f(x_1, x_2) + λ[g(x_1, x_2) − b] or f(x_1, x_2) − λ[b − g(x_1, x_2)]. In any case, make
a habit of using the correct rule, so that you don't have to think while you are writing a Lagrangian
in exams.
Now, we generalize this to an optimization problem with n decision variables and m inequality
constraints.

Theorem 14-4 (FONC for Inequality Constraints): Consider the maximization problem of
the form:

    max_{x ∈ R^n} F(x_1, x_2, ..., x_n)

    subject to g_1(x_1, x_2, ..., x_n) ≤ b_1
               ...
               g_m(x_1, x_2, ..., x_n) ≤ b_m

Suppose that F, g_1, ..., g_m are C^1 functions and that x* = (x_1*, x_2*, ..., x_n*) is a local maximum of F
on the constraint set. Suppose that, where the first k constraints are binding at x*, x* is not a
critical point of g_k = (g_1, ..., g_k). That is, the rank of the Jacobian:

    Dg_k(x*) = | ∂g_1/∂x_1 (x*)  ...  ∂g_1/∂x_n (x*) |
               | ⋮                    ⋮              |
               | ∂g_k/∂x_1 (x*)  ...  ∂g_k/∂x_n (x*) |

is k. Then, there exists a vector of Lagrange multipliers λ* = (λ_1*, λ_2*, ..., λ_m*) such that, for the
Lagrangian function defined by:

    L(x, λ) = F(x) + λ·[b − g]
            = F(x) + λ_1[b_1 − g_1(x)] + ... + λ_m[b_m − g_m(x)]

we have:
(i) ∂L(x*)/∂x_i = 0 for all i = 1, 2, ..., n;
(ii) g_j(x*) ≤ b_j,  λ_j* ≥ 0,  λ_j*[b_j − g_j(x*)] = 0 for all j = 1, 2, ..., m.
Example 3. Let's consider a standard utility maximization problem:

    max_{x ∈ R^2} U(x_1, x_2)  s.t.  p_1 x_1 + p_2 x_2 ≤ I

Assume that p_1, p_2 > 0, so that CQ is satisfied. The Lagrangian is:

    L = U(x_1, x_2) + λ[I − p_1 x_1 − p_2 x_2]

The FONCs are:

    ∂L/∂x_1 = ∂U/∂x_1 − λp_1 = 0
    ∂L/∂x_2 = ∂U/∂x_2 − λp_2 = 0
    ∂L/∂λ = I − p_1 x_1 − p_2 x_2 ≥ 0,  λ ≥ 0,  λ[I − p_1 x_1 − p_2 x_2] = 0

Now, let's impose another assumption on utility, called monotonicity. That is, ∂U/∂x_1 > 0 or
∂U/∂x_2 > 0.
This says that increasing consumption of at least one good would strictly increase her utility.
Under such an assumption, λ cannot be zero, for otherwise the FONCs would become ∂U/∂x_1 = λp_1 = 0
or ∂U/∂x_2 = λp_2 = 0. Thus, λ > 0. But then, from the slackness condition, I − p_1 x_1 − p_2 x_2 = 0.

This means that the consumer spends all of her income if her utility is monotonically increasing. This
phenomenon is called Walras' Law.
In the above formulation, we omitted the non-negativity constraints x_1, x_2 ≥ 0. We should
have the following problem:

    max_{x ∈ R^2} U(x_1, x_2)  s.t.  p_1 x_1 + p_2 x_2 ≤ I,  x_1 ≥ 0,  x_2 ≥ 0

How can we incorporate these constraints? Notice that we can rewrite the non-negativity constraints
as:

    −x_1 ≤ 0,  −x_2 ≤ 0

Then, in terms of Theorem 14-4, b_1 = 0, g_1 = −x_1, b_2 = 0, g_2 = −x_2. Thus, the Lagrangian is:

    L = U(x_1, x_2) + λ_1[b_1 − g_1] + λ_2[b_2 − g_2] + λ_3[I − p_1 x_1 − p_2 x_2]
      = U(x_1, x_2) + λ_1 x_1 + λ_2 x_2 + λ_3[I − p_1 x_1 − p_2 x_2]

The FONCs are:

    ∂L/∂x_1 = ∂U/∂x_1 + λ_1 − λ_3 p_1 = 0,  x_1 ≥ 0,  λ_1 ≥ 0,  λ_1 x_1 = 0    (5)
    ∂L/∂x_2 = ∂U/∂x_2 + λ_2 − λ_3 p_2 = 0,  x_2 ≥ 0,  λ_2 ≥ 0,  λ_2 x_2 = 0    (6)
    ∂L/∂λ_3 = I − p_1 x_1 − p_2 x_2 ≥ 0,  λ_3 ≥ 0,  λ_3[I − p_1 x_1 − p_2 x_2] = 0    (7)

Now, note that in (5), if we have x_1 > 0, then from the slackness condition λ_1 = 0, which implies that
∂U/∂x_1 − λ_3 p_1 = 0. If x_1 = 0, then λ_1 may not be zero (λ_1 ≥ 0), which implies:

    ∂U/∂x_1 − λ_3 p_1 = −λ_1 ≤ 0

We can do the same analysis for (6). In summary, we can simplify our FONCs using only one
Lagrange multiplier:

    ∂U/∂x_1 − λp_1 ≤ 0,  x_1 ≥ 0,  and ∂U/∂x_1 − λp_1 = 0 if x_1 > 0
    ∂U/∂x_2 − λp_2 ≤ 0,  x_2 ≥ 0,  and ∂U/∂x_2 − λp_2 = 0 if x_2 > 0
    I − p_1 x_1 − p_2 x_2 ≥ 0,  λ ≥ 0,  λ[I − p_1 x_1 − p_2 x_2] = 0

These are called the Kuhn-Tucker conditions. In essence, the Kuhn-Tucker formulation is a special case
of Theorem 14-4, in which there are non-negativity constraints but the Lagrangian is formulated
without the non-negativity constraints.
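To see the conditions at work on a concrete specification, take the hypothetical utility U(x_1, x_2) = sqrt(x_1) + sqrt(x_2) with p_1 = 1, p_2 = 2, I = 6 (my own example, not from the text). Since U is monotone, the budget binds, and the interior case can be solved directly (sympy):

    import sympy as sp

    x1, x2, lam = sp.symbols('x1 x2 lam', positive=True)
    p1, p2, I = 1, 2, 6
    U = sp.sqrt(x1) + sp.sqrt(x2)
    L = U + lam * (I - p1*x1 - p2*x2)

    # interior case x1 > 0, x2 > 0: both partials vanish and the budget binds
    sol = sp.solve([sp.diff(L, x1), sp.diff(L, x2), I - p1*x1 - p2*x2],
                   [x1, x2, lam], dict=True)
    print(sol)      # [{x1: 4, x2: 1, lam: 1/4}]

With interior demands like these, λ_1 = λ_2 = 0 and the corner branches of the conditions never activate; corner cases would be handled by enumerating which of x_1 = 0, x_2 = 0 hold, exactly as in the hand derivation above.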
Theorem 14-5 (Kuhn-Tucker Theorem): Consider the maximization problem of the form:

    max_{x ∈ R^n} F(x_1, x_2, ..., x_n)

    subject to g_1(x_1, x_2, ..., x_n) ≤ b_1
               ...
               g_m(x_1, x_2, ..., x_n) ≤ b_m
               x_1 ≥ 0
               ...
               x_n ≥ 0

Suppose that F, g_1, ..., g_m are C^1 functions and that x* = (x_1*, x_2*, ..., x_n*) is a local maximum of F
on the constraint set. Suppose that the modified Jacobian matrix:

    (∂g_j/∂x_i)_{ji}

has maximal rank, where the j's run over the indices of the g_j that are binding at x*, and the i's
range over the indices i for which x_i > 0. Then, there exists a vector of Lagrange multipliers λ* such
that, for the Kuhn-Tucker Lagrangian defined by:

    L(x, λ) = F(x) + λ·[b − g]
            = F(x) + λ_1[b_1 − g_1(x)] + ... + λ_m[b_m − g_m(x)]

we have:
(i) ∂L(x*)/∂x_i ≤ 0,  x_i ≥ 0,  x_i · ∂L(x*)/∂x_i = 0 for all i = 1, 2, ..., n;
(ii) g_j(x*) ≤ b_j,  λ_j* ≥ 0,  λ_j*[b_j − g_j(x*)] = 0 for all j = 1, 2, ..., m.

So, we have n slackness conditions for the non-negativity constraints and m slackness conditions
for the regular constraints.
Lastly, I will state the theorem for the case of mixed constraints. It is a straightforward
adaptation of Theorems 14-2 and 14-4.

Theorem 14-6 (FONC for Mixed Constraints): Consider the maximization problem of the
form:

    max_{x ∈ R^n} F(x_1, x_2, ..., x_n)

    subject to g_1(x_1, x_2, ..., x_n) ≤ b_1
               ...
               g_k(x_1, x_2, ..., x_n) ≤ b_k
               h_1(x_1, x_2, ..., x_n) = c_1
               ...
               h_m(x_1, x_2, ..., x_n) = c_m

Suppose that F, g_1, ..., g_k, h_1, ..., h_m are C^1 functions and that x* = (x_1*, x_2*, ..., x_n*) is a local max-
imum of F on the constraint set. Without loss of generality, assume that the first k_0 inequality
constraints are binding at x*. Suppose that the rank of the Jacobian matrix of the equality con-
straints plus the binding inequality constraints:

    | ∂g_1/∂x_1 (x*)    ...  ∂g_1/∂x_n (x*)    |
    | ⋮                      ⋮                 |
    | ∂g_k0/∂x_1 (x*)   ...  ∂g_k0/∂x_n (x*)   |
    | ∂h_1/∂x_1 (x*)    ...  ∂h_1/∂x_n (x*)    |
    | ⋮                      ⋮                 |
    | ∂h_m/∂x_1 (x*)    ...  ∂h_m/∂x_n (x*)    |

is k_0 + m. Then, there exist vectors of Lagrange multipliers λ* = (λ_1*, λ_2*, ..., λ_k*), μ* = (μ_1*, μ_2*, ..., μ_m*)
such that, for the Lagrangian function defined by:

    L(x, λ, μ) = F(x) + λ·[b − g] + μ·[c − h]
               = F(x) + λ_1[b_1 − g_1(x)] + ... + λ_k[b_k − g_k(x)] + μ_1[c_1 − h_1(x)] + ... + μ_m[c_m − h_m(x)]

we have:
(i) ∂L(x*)/∂x_i = 0 for all i = 1, 2, ..., n;
(ii) g_j(x*) ≤ b_j,  λ_j* ≥ 0,  λ_j*[b_j − g_j(x*)] = 0 for all j = 1, 2, ..., k;
(iii) h_l(x*) = c_l for all l = 1, 2, ..., m.
Example 4. Let's conclude this section with a couple of concrete examples. Consider an opti-
mization problem with mixed constraints:

    max F(x, y) = 3xy − x^2

    subject to 2x − y = 5     (1)
               5x + 2y ≥ 8    (2)
               x ≥ 0          (3)
               y ≥ 0          (4)

Let us use a Kuhn-Tucker formulation. So, the Lagrangian can be written:

    L = 3xy − x^2 + λ_1[5 − 2x + y] + λ_2[5x + 2y − 8]

First, we need to check a constraint qualification. The Jacobian of the system (1)-(2) (for Kuhn-
Tucker) is:

    | ∂g_1/∂x  ∂g_1/∂y | = | 2  −1 |
    | ∂g_2/∂x  ∂g_2/∂y |   | 5   2 |

The two rows are linearly independent, so it has maximal rank 2, regardless of what the solutions
are. Thus, CQ is satisfied. Now, from the Kuhn-Tucker necessary conditions, we have:

    ∂L/∂x = 3y − 2x − 2λ_1 + 5λ_2 ≤ 0,  x ≥ 0,  and x[3y − 2x − 2λ_1 + 5λ_2] = 0    (5)
    ∂L/∂y = 3x + λ_1 + 2λ_2 ≤ 0,  y ≥ 0,  and y[3x + λ_1 + 2λ_2] = 0    (6)
    ∂L/∂λ_1 = 5 − 2x + y = 0  (this must always be binding)    (7)
    ∂L/∂λ_2 = 5x + 2y − 8 ≥ 0,  λ_2 ≥ 0,  and λ_2[5x + 2y − 8] = 0    (8)

There may be several possible cases, but let's eliminate the impossible ones. Suppose x = 0.
Then, from (7), we have y = −5 < 0, which contradicts y ≥ 0. So, x > 0. If y = 0, then from (7)
we have x = 5/2, and we are done. Now, suppose that both x > 0 and y > 0. By the slackness conditions,
we know that:

    3y − 2x − 2λ_1 + 5λ_2 = 0    (9)
    3x + λ_1 + 2λ_2 = 0          (10)

Combining these with (7), we have three equations in four unknowns. We need to eliminate at
least one more variable. Fortunately, we can eliminate λ_2. To see this, suppose by contradiction
that λ_2 > 0. Then, it must be the case that 5x + 2y = 8 by the slackness condition. Combining with
(7),

    5x + 2y = 8
    4x − 2y = 10

This gives us x = 2, y = −1 < 0. So, this cannot be a solution. Thus, we can set λ_2 = 0. Then,
the system (7), (9), and (10) reduces to a system of 3 equations in 3 unknowns:

    2x − y = 5
    3y − 2x − 2λ_1 = 0
    3x + λ_1 = 0

With appropriate algebra, we should be able to solve this system. But, in fact, solving it leads to
y = −2, which contradicts y > 0. Thus, the unique (candidate) solution for this problem is x = 5/2
and y = 0.
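The interior case of Example 4 can be checked symbolically; a sketch with λ_2 = 0 already imposed (sympy):

    import sympy as sp

    x, y, lam1 = sp.symbols('x y lam1', real=True)
    L = 3*x*y - x**2 + lam1 * (5 - 2*x + y)   # lam2 = 0 imposed

    # case x > 0, y > 0: both partials vanish and the equality constraint holds
    sol = sp.solve([sp.diff(L, x), sp.diff(L, y), 5 - 2*x + y],
                   [x, y, lam1], dict=True)
    print(sol)    # [{x: 3/2, y: -2, lam1: -9/2}] -> y < 0, so this case drops out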
Example 5. Lastly, let's consider the following utility maximization problem:

    max U(x, y) = x^2 + y^2

    subject to 2x + y ≤ 4    (1)
               x ≥ 0         (2)
               y ≥ 0         (3)

For this problem, it might be helpful to use a diagram. Draw several level sets of the objective
function. Each level set is a quarter of a circle in the positive orthant of R^2. So, from the
diagram, it is clear that the solution is on the y-axis. The question is: Can we confirm this using the
Kuhn-Tucker theorem? (By the way, you will often be asked to do both diagrammatic and mathematical
analyses in a micro sequence.) The Lagrangian is:

    L = x^2 + y^2 + λ[4 − 2x − y]

By the FONCs, we have:

    L_x = 2x − 2λ   { = 0 if x > 0;  ≤ 0 if x = 0 }    (4)
    L_y = 2y − λ    { = 0 if y > 0;  ≤ 0 if y = 0 }    (5)
    L_λ = 4 − 2x − y   { = 0 if λ > 0;  ≥ 0 if λ = 0 }    (6)

Note that U_x = 2x ≥ 0 and U_y = 2y ≥ 0, and it cannot happen that x* = y* = 0 (for example,
(1, 1) is within the budget and U(1, 1) > U(0, 0)). So, we have U_x = 2x > 0 or U_y = 2y > 0. Thus,
by Walras' law, the consumer must spend all of her resources. We can consider three possible cases
of solutions (see the diagram):

Case 1: x* > 0, y* > 0.
Case 2: x* > 0, y* = 0.
Case 3: x* = 0, y* > 0.

Consider each case.
Case 1: Suppose x* > 0, y* > 0. From (4), 2x − 2λ = 0, so that x = λ. From (5), 2y − λ = 0,
so that 2y = λ. Thus, x = 2y. By assumption, x > 0 and y > 0, so that λ > 0. This implies that
4 − 2x − y = 0. So, combining with x = 2y, we have 4 − 4y − y = 0. So, y = 4/5, x = 8/5.
Case 2: Suppose x* > 0, y* = 0. From (4), 2x − 2λ = 0, so that x = λ. This implies λ > 0. So,
from (6), 4 − 2x − y = 0. Substituting y = 0, we have x = 2.
Case 3: Suppose x* = 0, y* > 0. From (5), 2y − λ = 0, so that 2y = λ. This implies λ > 0. So,
again from (6), 4 − 2x − y = 0. Substituting x = 0, we have y = 4.

So, we have obtained three candidate solutions directly from the KT conditions. But substitute those solutions
into the objective:

Case 1: U(x*, y*) = (8/5)^2 + (4/5)^2 = 64/25 + 16/25 = 80/25 = 16/5.
Case 2: U(x*, y*) = (2)^2 + (0)^2 = 4 = 20/5.
Case 3: U(x*, y*) = (0)^2 + (4)^2 = 16 = 80/5.

Obviously, x = 0, y = 4 gives the highest value of the objective, so it is the solution. The lesson
here is: although the KT conditions are useful, they are still only necessary conditions for optimality. We
will learn second order sufficient conditions in the next section.
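The final comparison of the three KT candidates is a two-liner (Python):

    U = lambda x, y: x**2 + y**2
    for x, y in [(8/5, 4/5), (2, 0), (0, 4)]:   # the three KT candidates
        print((x, y), U(x, y))                  # 3.2, 4, 16 -> (0, 4) wins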
15. Constrained Optimization II: Comparative Statics and SOCs

In this section, we will learn (i) comparative statics and (ii) second order conditions that distin-
guish maxima from minima.

15-1. Comparative Statics and Envelope Theorem

By comparative statics, we mean the sensitivity of (i) the optimal value of the objective and
(ii) the optimal values of the decision variables, with respect to changes in the primitive parameters of
the problem. For example, in a standard utility maximization,

    max U(x, y)
    subject to px + y ≤ I
               x ≥ 0
               y ≥ 0

we have two primitive parameters, i.e. the relative price of a good, p, and income, I. Thus, in general,
the solution to this problem will be written as:

    x*(p, I)
    y*(p, I)
    v(p, I) = U(x*(p, I), y*(p, I))

as functions of the primitive parameters. As many of you know, v(p, I) is called an indirect utility
function and x*(p, I), y*(p, I) are (individual) demand functions. Our interest here is: how
would a change in these parameters change v(p, I), x*(p, I), y*(p, I)? More explicitly, what are:

    ∂x*(p, I)/∂I,  ∂x*(p, I)/∂p,  ∂v(p, I)/∂I,  ∂v(p, I)/∂p

In many economic applications, we have to work very hard to get these comparative statics.
However, there are several useful theorems that we can employ in such an endeavor: a theorem on
the shadow price and the Envelope Theorem. To get the idea, let's turn to a simple problem: one primitive
parameter on one equality constraint.
    max f(x, y)                    (1)
    subject to g(x, y) = a

We can write the optimal value of this problem as:

    f(x*(a), y*(a))

We can prove that:

    λ*(a) = (d/da) f(x*(a), y*(a))

where λ*(a) is the Lagrange multiplier on the equality constraint. This says that the Lagrange
multiplier measures the rate of change of the optimal value of f with respect to a marginal change
in the resource constraint. Let's prove this result.

Proof: The Lagrangian for problem (1) is:

    L = f(x, y) + λ[a − g(x, y)]

By the FONCs, we have:

    ∂L/∂x = ∂f(x*, y*)/∂x − λ* ∂g(x*, y*)/∂x = 0  ⟹  ∂f(x*, y*)/∂x = λ* ∂g(x*, y*)/∂x    (2)
    ∂L/∂y = ∂f(x*, y*)/∂y − λ* ∂g(x*, y*)/∂y = 0  ⟹  ∂f(x*, y*)/∂y = λ* ∂g(x*, y*)/∂y    (3)

Now, because g(x*(a), y*(a)) = a for all a, we can treat this as an identity. So, let's take the
derivative of both sides of the identity w.r.t. a:

    d g(x*(a), y*(a))/da = da/da  ⟹  (∂g(x*, y*)/∂x)(dx*(a)/da) + (∂g(x*, y*)/∂y)(dy*(a)/da) = 1    (4)

Using the chain rule, we can compute:

    (d/da) f(x*(a), y*(a)) = (∂f(x*, y*)/∂x)(dx*(a)/da) + (∂f(x*, y*)/∂y)(dy*(a)/da)

Substituting (2) and (3) first and then (4),

    (d/da) f(x*(a), y*(a)) = (∂f(x*, y*)/∂x)(dx*(a)/da) + (∂f(x*, y*)/∂y)(dy*(a)/da)
                           = λ*(a)(∂g(x*, y*)/∂x)(dx*(a)/da) + λ*(a)(∂g(x*, y*)/∂y)(dy*(a)/da)
                           = λ*(a)[(∂g(x*, y*)/∂x)(dx*(a)/da) + (∂g(x*, y*)/∂y)(dy*(a)/da)]
                           = λ*(a)

Thus, we have obtained the desired result. Q.E.D.
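A quick symbolic check of the shadow-price result on a hypothetical instance, max xy s.t. x + y = a (sympy; the instance is my own, not from the text):

    import sympy as sp

    x, y, lam, a = sp.symbols('x y lam a', positive=True)
    L = x*y + lam * (a - x - y)
    sol = sp.solve([sp.diff(L, x), sp.diff(L, y), a - x - y],
                   [x, y, lam], dict=True)[0]          # x = y = lam = a/2
    V = (x*y).subs(sol)                                # optimal value: a**2/4
    print(sp.simplify(sp.diff(V, a) - sol[lam]))       # 0, i.e. dV/da = lam*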
It is relatively straightforward to generalize this result to the case of several equality constraints
and inequality constraints. I will state it without proof.

Theorem 15-1 (Shadow Price): Consider the following maximization problem:

    max_{x ∈ R^n} F(x_1, x_2, ..., x_n)

    subject to g_1(x_1, x_2, ..., x_n) ≤ b_1
               ...
               g_k(x_1, x_2, ..., x_n) ≤ b_k
               h_1(x_1, x_2, ..., x_n) = c_1
               ...
               h_m(x_1, x_2, ..., x_n) = c_m

Suppose that F, g_1, ..., g_k, h_1, ..., h_m are C^1 functions and that x* = (x_1*, x_2*, ..., x_n*) is a local max-
imum of F on the constraint set. Let λ*(b, c) = (λ_1*, λ_2*, ..., λ_k*), μ*(b, c) = (μ_1*, μ_2*, ..., μ_m*) be the
corresponding Lagrange multipliers for the Lagrangian:

    L(x, λ, μ) = F(x) + λ·[b − g] + μ·[c − h]
               = F(x) + λ_1[b_1 − g_1(x)] + ... + λ_k[b_k − g_k(x)] + μ_1[c_1 − h_1(x)] + ... + μ_m[c_m − h_m(x)]

Suppose that x*(b, c), λ*(b, c), μ*(b, c) are differentiable w.r.t. (b, c) and that the relevant CQ holds
at x*(b, c). Then,

    λ_j*(b, c) = ∂F(x*(b, c))/∂b_j  for all j = 1, 2, ..., k
    μ_l*(b, c) = ∂F(x*(b, c))/∂c_l  for all l = 1, 2, ..., m

Thus, a Lagrange multiplier λ_j* measures the effect of a marginal change in input j on the
objective value. In this view, economists often call λ_j* the shadow price (or imputed value) of
input j.
Example 1.

    max U(x, y)
    subject to px + y ≤ I
               x ≥ 0
               y ≥ 0

Let's treat the price p as fixed, so we write the objective value as U(x*(I), y*(I)). Then, by the above
theorem,

    λ* = dU(x*(I), y*(I))/dI

Thus, the Lagrange multiplier on the budget constraint measures the shadow price of income.
Now, I will state and prove an easy version of the Envelope Theorem. Although some people may
find it obvious, this is a very useful result and, yet, it is sometimes not understood properly. The
confusion comes mainly from the notation. For this reason, I deviate slightly from Simon & Blume's
notation.

Theorem 15-2 (Envelope Theorem I): Let F : R^n × R^m → R be a continuously differentiable
function. For each fixed vector of parameters α ∈ R^m, consider the maximization problem:

    max_{x ∈ R^n} F(x; α)

Let x*(α) be the solution and v(α) = max_{x ∈ R^n} F(x; α) = F(x*(α); α) be the value function of
this problem. If x*(α) is a C^1 function, then:

    ∂v(α)/∂α_j = ∂F(x; α)/∂α_j |_{x = x*(α)}    (#)

That is, the derivative of the value function w.r.t. a parameter is equal to the derivative of the
objective function w.r.t. that parameter, evaluated at the optimum x*(α).
Proof: We can prove this using the Chain Rule and the total differential:

    ∂v(α)/∂α_j = ∂F(x*(α); α)/∂α_j
               = [ Σ_{i=1}^{n} (∂F(x; α)/∂x_i)(∂x_i*(α)/∂α_j) + ∂F(x; α)/∂α_j ]_{x = x*(α)}
               = ∂F(x; α)/∂α_j |_{x = x*(α)}

where the last equality follows because ∂F(x; α)/∂x_i = 0 at the optimum by FONC. Q.E.D.

Question: What's the point of this theorem? Answer: It is sometimes cumbersome to com-
pute the LHS of (#). But, by this theorem, we can simply compute the RHS of (#). To see how it works,
consider the following example.
Example 2.

    max_x  −x^2 + 2ax + 4a^2

where a is the primitive parameter of this problem. We are interested in the effect of a change in a
on the optimal value. The direct approach is to compute the LHS of (#). Let's first derive the solution.
By FONC,

    F_x = −2x + 2a = 0
    ⟹ x* = a

Then, the value function is:

    v(a) = −a^2 + 2a·a + 4a^2 = 5a^2

Thus, its derivative is:

    dv(a)/da = 10a

Now, let's use the Envelope Theorem instead. We can simply compute:

    ∂F/∂a = 2x + 8a

Evaluating this at the optimum x* = a, we obtain 2x* + 8a = 10a, which is exactly the same as before.
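The two routes in Example 2 can be compared mechanically (sympy):

    import sympy as sp

    x, a = sp.symbols('x a', real=True)
    F = -x**2 + 2*a*x + 4*a**2
    xstar = sp.solve(sp.diff(F, x), x)[0]    # x* = a
    V = F.subs(x, xstar)                     # value function: 5*a**2
    print(sp.diff(V, a))                     # 10*a  (direct route)
    print(sp.diff(F, a).subs(x, xstar))      # 10*a  (envelope route)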
A more surprising result is that we can generalize this to parameters appearing in the constraints.
That is,

Theorem 15-3 (Envelope Theorem II): Let F, G : R^n × R^m → R be continuously differentiable
functions. For each fixed vector of parameters α ∈ R^m, consider the maximization problem:

    max_{x ∈ R^n} F(x; α)  subject to  G(x; α) ≤ 0

(Note that some of the parameters may be in the objective while others may be in the constraint.)
Let x*(α) be the solution and v(α) = F(x*(α); α) be the value function of this problem. Write
the Lagrangian:

    L = F(x; α) − λ G(x; α)

If x*(α) and λ*(α) are C^1 functions and the CQ conditions are satisfied, then

    ∂v(α)/∂α_j = ∂L(x, λ; α)/∂α_j |_{x = x*(α), λ = λ*(α)}
In fact, this theorem generalizes to the case of several constraints. Let's see how it works
with a couple of examples.

Example 3. Consider a standard utility maximization again:

    max U(x, y)
    subject to px + y ≤ I
               x ≥ 0
               y ≥ 0

Let v(p, I) = U(x*(p, I), y*(p, I)) be its indirect utility function and L = U(x, y) + λ[I − px − y].
Then, using the Envelope Theorem,

    ∂v(p, I)/∂I = ∂L/∂I |_{x*(p,I), y*(p,I), λ*(p,I)} = λ*(p, I)

This shows that the shadow price theorem is in fact a special case of the Envelope Theorem. The advantage
of the Envelope Theorem, however, goes beyond that of the shadow price theorem. Let's take the derivative
w.r.t. the price:

    ∂v(p, I)/∂p = ∂L/∂p |_{x*(p,I), y*(p,I), λ*(p,I)} = −λ*(p, I) x*(p, I)

Substituting the result above for λ*(p, I), then

    − (∂v(p, I)/∂p) / (∂v(p, I)/∂I) = x*(p, I)

We can generalize this to the n-goods case:

    − (∂v(x*)/∂p_i) / (∂v(x*)/∂I) = x_i*  for i = 1, 2, ..., n

This is called Roy's Identity, which says that the (Marshallian) demand for good i can be obtained
as a quotient of two partials of the value function.
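As a sketch, Roy's Identity can be verified on a hypothetical Cobb-Douglas specification U = sqrt(xy), whose demands under the budget px + y = I are x* = I/(2p) and y* = I/2 (my own example; sympy):

    import sympy as sp

    p, I = sp.symbols('p I', positive=True)
    xstar = I / (2*p)                    # Marshallian demands for U = sqrt(x*y)
    ystar = I / 2
    v = sp.sqrt(xstar * ystar)           # indirect utility v(p, I) = I/(2*sqrt(p))
    roy = -sp.diff(v, p) / sp.diff(v, I)
    print(sp.simplify(roy - xstar))      # 0: Roy's Identity recovers x*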
Example 4. Consider a firm's cost minimization problem:

    min w·x
    subject to f(x) ≥ y

where w is a vector of input prices and f is a production function. Let C(w, y) = w·x* and
L = w·x + λ[y − f(x)]. (Note that min w·x = −max −w·x.) By the Envelope Theorem,

    ∂C(w, y)/∂w_i = ∂L/∂w_i |_* = x_i*  for i = 1, 2, ..., n

This is called Shephard's Lemma, which says that the input demand for good i can be obtained as
the derivative of the cost function w.r.t. w_i, evaluated at the optimum.
15-2. Second Order Conditions

Up to now, we have only looked at first order necessary conditions. But, as we know, FONCs
do not guarantee a solution. We need second order conditions. Recall from Section 13 that we
check the definiteness of the Hessian matrix for sufficient conditions:

Theorem 13-2 (SOSC for Local Max/Min): Let F : U ⊆ R^n → R be a C^2 function. Suppose
that U is open and that DF(x*) = 0^T.
(i) If the Hessian D^2 F(x*) is negative definite, then x* is a (strict) local maximum of F;
(ii) If the Hessian D^2 F(x*) is positive definite, then x* is a (strict) local minimum of F;
(iii) If the Hessian D^2 F(x*) is indefinite, then x* is neither a local max nor a local min of F.

But this theorem is for unconstrained optimization. What if we have constraints? Intuitively,
what we need is to study the definiteness of the Hessian on the constraint space. How can we
do that? Recall that we studied a very simple case of constrained optimization with a
quadratic function in Section 12. We constructed a bordered Hessian of the form:

    H_{n+1} = | 0    c |
              | c^T  A |

where c is the vector of coefficients of a linear constraint c·x = 0. We basically combine these
methods together.
Theorem 15-4 (SOSC for Constrained Local Max/Min): Let F, h_1, ..., h_m be C^2 functions
on R^n. Consider the problem:

    max_{x ∈ R^n} F(x_1, x_2, ..., x_n)

    subject to h_1(x_1, x_2, ..., x_n) = c_1
               ...
               h_m(x_1, x_2, ..., x_n) = c_m

Form the Lagrangian as usual, L = F(x) + μ·[c − h], and let x* satisfy the FONCs. Suppose that
"the Hessian of L w.r.t. x at x*, D^2_x L(x*), is negative definite on the linear constraint set
{v : Dh(x*)v = 0}". Then x* is a strict local constrained maximum of F.

What does the condition in quotation marks mean? It means that the following bordered Hessian
matrix

    H_{m+n} = | 0 (m×m)         Dh(x*) (m×n)       |
              | Dh(x*)^T (n×m)  D^2_x L(x*) (n×n)  |

            = | 0 ... 0             ∂h_1/∂x_1  ...  ∂h_1/∂x_n        |
              | ⋮     ⋮             ⋮               ⋮                |
              | 0 ... 0             ∂h_m/∂x_1  ...  ∂h_m/∂x_n        |
              | ∂h_1/∂x_1 ... ∂h_m/∂x_1   ∂^2 L/∂x_1^2     ...  ∂^2 L/∂x_n ∂x_1 |
              | ⋮              ⋮          ⋮                      ⋮               |
              | ∂h_1/∂x_n ... ∂h_m/∂x_n   ∂^2 L/∂x_1 ∂x_n  ...  ∂^2 L/∂x_n^2    |

satisfies two conditions: (i) the last (n − m) leading principal minors alternate in sign, and (ii)
det(H) > 0 if n is even, or det(H) < 0 if n is odd.
There are related theorems in Simon & Blume, pp. 460-467. As you can see, it is very cum-
bersome to check second order conditions (even with a computer, writing the code is cumbersome).
Moreover, we cannot guarantee a global solution from this second order condition. So, what we
usually do is study characteristics of the objective and constraint functions under which global solutions
are guaranteed. The method is called concave programming.
Example 5. Consider a standard utility maximization again:

    max U(x, y)
    subject to p_1 x + p_2 y = h(x, y) = I

Let's construct the bordered Hessian:

    H = | 0    h_x   h_y  |        | 0    p_1   p_2  |
        | h_x  L_xx  L_yx |  ⟹  H = | p_1  U_xx  U_yx |
        | h_y  L_xy  L_yy |        | p_2  U_xy  U_yy |

(the second form follows because the constraint is linear, so the second derivatives of L and U coincide). We want to have:

    det(H) = det | 0    p_1   p_2  | > 0
                 | p_1  U_xx  U_yx |
                 | p_2  U_xy  U_yy |

It turns out that det(H) > 0 if U is quasiconcave, a concept we study in the next section.
16. Concave and Quasiconcave Functions

In economic applications, you will often encounter concave and quasiconcave functions. It is
essential that you familiarize yourself with the properties of these functions. In general,
(i) For a global maximizer of an unconstrained maximization problem, we want the objective
function to be concave.
(ii) For a global maximizer of a constrained maximization problem, we want the objective to
be quasiconcave and the constraint functions to be quasiconvex.

The objective of this section is three-fold:
(a) To learn properties of concave and convex functions;
(b) To learn properties of quasiconcave and quasiconvex functions;
(c) To learn concave programming.

16-1. Concave and Convex Functions

First, let's recall the definitions of concave and convex functions.

Definition 13-2 (Concave & Convex Functions): A function F : U ⊆ R^n → R is concave on
U if and only if for all x, y ∈ U and for all λ ∈ [0, 1], we have:

    F[λx + (1 − λ)y] ≥ λF(x) + (1 − λ)F(y)

A function F : U ⊆ R^n → R is convex on U if and only if for all x, y ∈ U and for all λ ∈ [0, 1], we
have:

    F[λx + (1 − λ)y] ≤ λF(x) + (1 − λ)F(y)

A function F : U ⊆ R^n → R is strictly concave on U if and only if for all x ≠ y ∈ U and for all
λ ∈ (0, 1), we have:

    F[λx + (1 − λ)y] > λF(x) + (1 − λ)F(y)

A function F : U ⊆ R^n → R is strictly convex on U if and only if for all x ≠ y ∈ U and for all
λ ∈ (0, 1), we have:

    F[λx + (1 − λ)y] < λF(x) + (1 − λ)F(y)
We have the following characterizations of concave (convex) functions.
Theorem 16-1 (Properties of Concave Functions): Let $F : U \subset \mathbb{R}^n \to \mathbb{R}$ be a $C^2$ function and $U$ a convex open subset of $\mathbb{R}^n$.
(a) $F$ is a concave function on $U$ if and only if:
(i) $F(y) - F(x) \leq DF(x)(y - x)$ for all $x, y \in U$;
(ii) $D^2 F(x)$ is negative semidefinite for all $x \in U$;
(iii) $-F$ is convex on $U$;
(iv) $cF$ is concave for all $c \geq 0$;
(v) for $i = 1, \ldots, n$, $F(\bar{x}_1, \ldots, x_i, \ldots, \bar{x}_n)$ is concave in $x_i$ for each fixed $(\bar{x}_1, \ldots, \bar{x}_{i-1}, \bar{x}_{i+1}, \ldots, \bar{x}_n)$;
(vi) its restriction to every line segment in $U$ is a concave function.
(b) Let $F_1, F_2, \ldots, F_m$ be concave functions and let $c_1, c_2, \ldots, c_m$ be positive numbers. Then, the linear combination $\sum_{j=1}^m c_j F_j$ is a concave function.
(c) If $F$ is a concave function on $U$, then for every $x^* \in U$, the upper contour set
$$C^+(x^*) = \{x \in U : F(x) \geq F(x^*)\}$$
is a convex set. (If $F$ is convex, then the lower contour set is a convex set.) The converse is not true in general.
We saw (a)-(i), -(ii), and -(iii) already. (a)-(iv) is obvious. The implication of (v) and (vi) is very important: it says that we can learn about the concavity of a function just by looking at the geometric features of its graph. (A word of caution on (v): concavity in each variable separately is only a necessary condition; $F(x, y) = xy$ is linear, hence concave, in each variable separately, yet it is not concave.) In particular, let $F$ be a function from $\mathbb{R}^2$ to $\mathbb{R}$. Then, (vi) means that if one slices the graph of $F$ along any line segment in $\mathbb{R}^2$, one should see the graph of a one-variable concave function. Question: Can you write down the analogous theorem for convex functions?
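Property (vi) also yields a quick numerical refutation device. The sketch below is my own construction (not from the notes): restrict the function to a segment and check the second differences. Applied to $F(x, y) = xy$, it shows concretely why the coordinate-wise slices in (v) are not enough.

```python
import numpy as np

def concave_on_segment(F, x, y, grid=201, tol=1e-9):
    """Test concavity of t -> F((1-t)*x + t*y) on a grid over [0, 1].

    By Theorem 16-1(vi), F is concave iff every such restriction is
    concave; a single failing segment therefore refutes concavity.
    """
    t = np.linspace(0.0, 1.0, grid)
    vals = np.array([F((1 - ti) * x + ti * y) for ti in t])
    second_diff = vals[2:] - 2 * vals[1:-1] + vals[:-2]   # <= 0 iff the slice is concave
    return bool(np.all(second_diff <= tol))

F = lambda v: v[0] * v[1]   # linear (hence concave) in each variable separately...
x, y = np.array([0.5, 0.5]), np.array([2.0, 2.0])
print(concave_on_segment(F, x, y))  # False: along the 45-degree line the slice is convex
```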
Example 1. In dynamic programming, we often see the following time-separable utility function for an infinite sequence of consumption $c = \{c_t\}_{t=0}^{\infty}$:
$$U(c) = \sum_{t=0}^{\infty} \beta^t u_t(c_t)$$
where $\beta$ is a discount factor. If each $u_t(\cdot)$ is concave, then $U(c)$ is also concave by (b), as long as the infinite sum is well-defined.
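A minimal numerical sketch of this (my own; the discount factor, horizon, and log utility are illustrative choices, with the sum truncated so it is computable):

```python
import numpy as np

BETA, T = 0.95, 50                    # illustrative values, not from the notes

def U(c):
    """Truncated discounted utility: sum_t BETA**t * log(c_t)."""
    return np.sum(BETA ** np.arange(T) * np.log(c))

rng = np.random.default_rng(1)
x, y = rng.uniform(0.1, 5.0, T), rng.uniform(0.1, 5.0, T)
lam = 0.3
# log is concave and the weights BETA**t are positive, so by (b) U is concave:
print(U(lam * x + (1 - lam) * y) >= lam * U(x) + (1 - lam) * U(y))  # True
```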
Example 2. Consider a Cobb-Douglas function:
$$F(x, y) = A x^a y^b \quad \text{where } A, a, b > 0, \; a + b \leq 1$$
Question: Is this concave or convex? Answer: It is a concave function on $\mathbb{R}^2_{++}$. To see this, we only need to prove that $x^a y^b$ is concave. Let's construct a Hessian:
$$
D^2 F = H = \begin{pmatrix} a(a-1)x^{a-2}y^b & ab\,x^{a-1}y^{b-1} \\ ab\,x^{a-1}y^{b-1} & b(b-1)x^a y^{b-2} \end{pmatrix}
$$
$$\det(H_1) = a(a-1)x^{a-2}y^b < 0, \text{ because } a - 1 < 0$$
$$\det(H) = ab(1 - a - b)x^{2a-2}y^{2b-2} \geq 0, \text{ because } a + b \leq 1$$
So, under the assumption, the Hessian is negative semidefinite and $F$ is concave. In fact, if $a + b = 1$, $F$ is concave; if $a + b < 1$, $F$ is strictly concave.
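A quick numerical cross-check of the negative semidefiniteness claim (my own sketch; $a = 0.3$, $b = 0.6$ are arbitrary admissible values): both eigenvalues of the Hessian should be nonpositive at any point of the positive orthant.

```python
import numpy as np

a, b = 0.3, 0.6                          # arbitrary values with a + b < 1

def hessian(x, y):
    """Hessian of F(x, y) = x**a * y**b, using the formulas derived above."""
    return np.array([
        [a * (a - 1) * x ** (a - 2) * y ** b, a * b * x ** (a - 1) * y ** (b - 1)],
        [a * b * x ** (a - 1) * y ** (b - 1), b * (b - 1) * x ** a * y ** (b - 2)],
    ])

rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = rng.uniform(0.1, 10.0, 2)
    # eigvalsh is for symmetric matrices; both eigenvalues should be <= 0:
    assert np.all(np.linalg.eigvalsh(hessian(x, y)) <= 1e-12)
print("negative semidefinite at all sampled points")
```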
Example 3. We can show that an expenditure function is concave in prices. Consider a standard expenditure minimization problem:
$$\min_{x \in \mathbb{R}^n_+} p \cdot x \quad \text{subject to } U(x) \geq \bar{U}$$
Let $e(p, \bar{U}) = \min\{p \cdot x : U(x) \geq \bar{U}\}$ be an expenditure function. We want to show $e(\cdot, \bar{U})$ is a concave function of $p$ for each fixed $\bar{U}$. Now, pick arbitrary points $p^1, p^2$ and $\lambda \in [0, 1]$. Let $p^* = \lambda p^1 + (1 - \lambda)p^2$ and let $x(p^1)$, $x(p^2)$, $x(p^*)$ be the solutions to each minimization problem. Now, by definition,
$$e(p^1, \bar{U}) = \min\{p^1 \cdot x : U(x) \geq \bar{U}\} = p^1 \cdot x(p^1) \leq p^1 \cdot x(p^*) \quad (1)$$
because $x(p^*)$ is feasible but need not be the minimizer at prices $p^1$. Similarly,
$$e(p^2, \bar{U}) = p^2 \cdot x(p^2) \leq p^2 \cdot x(p^*) \quad (2)$$
Taking the convex combination,
$$\lambda e(p^1, \bar{U}) + (1 - \lambda)e(p^2, \bar{U}) = \lambda p^1 \cdot x(p^1) + (1 - \lambda)p^2 \cdot x(p^2)$$
$$\leq \lambda p^1 \cdot x(p^*) + (1 - \lambda)p^2 \cdot x(p^*) \quad \text{by (1) and (2)}$$
$$= [\lambda p^1 + (1 - \lambda)p^2] \cdot x(p^*) = p^* \cdot x(p^*) = e(p^*, \bar{U}) \quad \text{by definition}$$
Using a similar argument, we can show that a profit function is convex in prices. Try it yourself.
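For a concrete check, recall the textbook closed form for the Cobb-Douglas case (an assumption brought in here, not derived in the notes): with $U(x_1, x_2) = \sqrt{x_1 x_2}$, the expenditure function is $e(p, \bar{U}) = 2\bar{U}\sqrt{p_1 p_2}$. A small sketch of my own tests its concavity in $p$ at random price pairs.

```python
import numpy as np

def e(p, ubar=1.0):
    """Expenditure function for U(x1, x2) = sqrt(x1*x2): e = 2*ubar*sqrt(p1*p2)."""
    return 2.0 * ubar * np.sqrt(p[0] * p[1])

rng = np.random.default_rng(3)
for _ in range(1000):
    p1, p2 = rng.uniform(0.1, 10.0, 2), rng.uniform(0.1, 10.0, 2)
    lam = rng.uniform()
    p_star = lam * p1 + (1 - lam) * p2
    # Concavity in prices: e(p*) >= lam*e(p1) + (1-lam)*e(p2).
    assert e(p_star) >= lam * e(p1) + (1 - lam) * e(p2) - 1e-9
print("e(., ubar) passed the concavity check at all sampled price pairs")
```

Notice that $\sqrt{p_1 p_2}$ is itself Cobb-Douglas with $a = b = 1/2$, so its concavity also follows directly from Example 2.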
16-2. Quasiconcave and Quasiconvex Functions
Let's first define what we mean by quasiconcave/quasiconvex functions.
Definition 16-1 (Quasiconcave & Quasiconvex Functions):
(i) A function $F : U \subset \mathbb{R}^n \to \mathbb{R}$ is quasiconcave on $U$, where $U$ is a convex set in $\mathbb{R}^n$, if and only if for all $x, y \in U$, for all $\lambda \in [0, 1]$, we have:
$$F(x) \geq F(y) \implies F(\lambda x + (1 - \lambda)y) \geq F(y)$$
(ii) A function $F : U \subset \mathbb{R}^n \to \mathbb{R}$ is quasiconvex on $U$, where $U$ is a convex set in $\mathbb{R}^n$, if and only if for all $x, y \in U$, for all $\lambda \in [0, 1]$, we have:
$$F(x) \leq F(y) \implies F(\lambda x + (1 - \lambda)y) \leq F(y)$$
(iii) A function $F : U \subset \mathbb{R}^n \to \mathbb{R}$ is strictly quasiconcave on $U$, where $U$ is a convex set in $\mathbb{R}^n$, if and only if for all $x \neq y \in U$, for all $\lambda \in (0, 1)$, we have:
$$F(x) \geq F(y) \implies F(\lambda x + (1 - \lambda)y) > F(y)$$
(iv) A function $F : U \subset \mathbb{R}^n \to \mathbb{R}$ is strictly quasiconvex on $U$, where $U$ is a convex set in $\mathbb{R}^n$, if and only if for all $x \neq y \in U$, for all $\lambda \in (0, 1)$, we have:
$$F(x) \leq F(y) \implies F(\lambda x + (1 - \lambda)y) < F(y)$$
What does a quasiconcave (quasiconvex) function look like? (See the graphs.) Are these functions quasiconcave?
Obviously, quasiconcavity is a geometric characterization. So, we have the following equivalence theorem.
Theorem 16-2 (Properties of Quasiconcave Functions): Let $F : U \subset \mathbb{R}^n \to \mathbb{R}$ be defined on a convex set $U$. Then, the following are equivalent.
(a) $F$ is quasiconcave on $U$;
(b) For every real number $a$, the upper contour set defined by
$$C^+(a) = \{x \in U : F(x) \geq a\}$$
is a convex set in $U$;
(c) For all $x, y \in U$, for all $\lambda \in [0, 1]$, we have:
$$F(\lambda x + (1 - \lambda)y) \geq \min\{F(x), F(y)\}$$
(d) $-F$ is quasiconvex on $U$.
Moreover, in view of Theorem 16-1(c), we have:
$$F \text{ is concave} \implies F \text{ is quasiconcave (the converse does not hold)}$$
$$F \text{ is strictly concave} \implies F \text{ is strictly quasiconcave (the converse does not hold)}$$
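Condition (c) of Theorem 16-2 again lends itself to a randomized check (a sketch of my own; names and bounds are arbitrary). The example also illustrates the one-way implication above: $x^3$ is quasiconcave on $\mathbb{R}$ because it is monotone, but it is not concave.

```python
import numpy as np

def looks_quasiconcave(F, dim, lo=-5.0, hi=5.0, trials=10_000, seed=4):
    """Randomized test of F(lam*x + (1-lam)*y) >= min(F(x), F(y))."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.uniform(lo, hi, dim), rng.uniform(lo, hi, dim)
        lam = rng.uniform()
        if F(lam * x + (1 - lam) * y) < min(F(x), F(y)) - 1e-9:
            return False
    return True

cube = lambda v: v[0] ** 3
print(looks_quasiconcave(cube, dim=1))   # True: monotone, hence quasiconcave
# ...but cube is not concave; the concavity inequality fails between 1 and 3:
lam = 0.5
print(2.0 ** 3 >= lam * 1.0 ** 3 + (1 - lam) * 3.0 ** 3)  # False: 8 < 14
```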
Theorem 16-3 (Cobb-Douglas Functions): Any Cobb-Douglas function $F(x, y) = A x^a y^b$ is quasiconcave for $A, a, b > 0$.
Question: How can we check if a function $F$ is quasiconcave on $\mathbb{R}$ or $\mathbb{R}^2$? Answer: If $F : \mathbb{R} \to \mathbb{R}$ is monotone increasing, it is quasiconcave. To see this, suppose $F(x) \geq F(y)$. Then, it must be that $x \geq y$, for otherwise $x < y$ would imply $F(x) < F(y)$, a contradiction. But then the convex combination satisfies $\lambda x + (1 - \lambda)y \geq y$. So, by monotonicity, $F(\lambda x + (1 - \lambda)y) \geq F(y)$. Now, if $F : \mathbb{R}^2 \to \mathbb{R}$, we use the idea that the level curves of a quasiconcave function must bound convex upper contour sets. So, follow these steps:
(i) Set $F(x, y) = c$ for some arbitrary constant $c$.
(ii) Solve for $y$ to get $y = g(x, c)$.
(iii) Check if $g$ is convex in $x$. Recall that $g$ is convex iff $g'' \geq 0$.
Example 4. Let's do this for the Cobb-Douglas function. Let $F(x, y) = A x^a y^b = c$. Solving for $y$,
$$y = (c / (A x^a))^{1/b} = (c/A)^{1/b} x^{-a/b}$$
Now, take the second derivative:
$$y' = -(a/b)(c/A)^{1/b} x^{-a/b - 1} \leq 0$$
$$y'' = (a/b)(a/b + 1)(c/A)^{1/b} x^{-a/b - 2} \geq 0 \quad \text{if } A, a, b > 0$$
(Note $F$ cannot take negative values for $(x, y) \in \mathbb{R}^2_+$, so that $c \geq 0$.) So the level curve is convex; hence the upper contour sets are convex and $F$ is quasiconcave.
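The differentiation can also be delegated to a computer algebra system. A sketch using sympy (the symbol names are my own; the printed expression may differ in form from, but is equal to, the formula above):

```python
import sympy as sp

x, c, A, a, b = sp.symbols('x c A a b', positive=True)
y = (c / (A * x**a)) ** (1 / b)      # level curve of A*x**a*y**b = c, solved for y

ypp = sp.simplify(sp.diff(y, x, 2))
print(ypp)   # equals (a/b)*(a/b + 1)*(c/A)**(1/b)*x**(-a/b - 2), which is positive
```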
If $F : \mathbb{R}^n \to \mathbb{R}$ where $n \geq 2$, then we need to invoke the following theorem to check the quasiconcavity of $F$.
Theorem 16-4 (Bordered Hessian Test): Let $F : U \subset \mathbb{R}^n \to \mathbb{R}$ be a $C^2$ function on a convex set $U$. Consider the following bordered Hessian:
$$
H = \begin{pmatrix} 0 & DF(x) \\ DF(x)^T & D^2 F(x) \end{pmatrix}_{(n+1) \times (n+1)}
= \begin{pmatrix}
0 & F_{x_1} & \cdots & F_{x_n} \\
F_{x_1} & F_{x_1 x_1} & \cdots & F_{x_1 x_n} \\
\vdots & \vdots & & \vdots \\
F_{x_n} & F_{x_n x_1} & \cdots & F_{x_n x_n}
\end{pmatrix}
$$
(a) Starting with the 3rd leading principal minor, if the $(n - 1)$ leading principal minors of $H$ (orders 3 through $n + 1$) alternate in sign, beginning with a positive sign, for all $x \in U$, then $F$ is quasiconcave.*
(b) Starting with the 3rd leading principal minor, if the $(n - 1)$ leading principal minors of $H$ are all negative in sign for all $x \in U$, then $F$ is quasiconvex.
*Remark: Some textbooks start with the 2nd leading principal minor, instead of the 3rd, and its sign must be negative.
Example 5. Let's use this for $F(x, y) = xy$, a special case of Cobb-Douglas. $F_x = y$, $F_y = x$, $F_{xx} = 0$, $F_{xy} = 1$, $F_{yy} = 0$. So, the bordered Hessian is:
$$H = \begin{pmatrix} 0 & y & x \\ y & 0 & 1 \\ x & 1 & 0 \end{pmatrix}$$
We start with the third leading principal minor. That means $\det(H)$:
$$
\det(H) = 0 \begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix}
- y \begin{vmatrix} y & 1 \\ x & 0 \end{vmatrix}
+ x \begin{vmatrix} y & 0 \\ x & 1 \end{vmatrix}
= xy + xy = 2xy > 0 \quad \text{for all } x, y > 0
$$
Thus, $F(x, y) = xy$ is quasiconcave on $\mathbb{R}^2_{++}$.
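The determinant computation is easy to replicate numerically (a small sketch of my own):

```python
import numpy as np

def bordered_det(x, y):
    """det of the bordered Hessian of F(x, y) = x*y at the point (x, y)."""
    H = np.array([[0.0, y, x],
                  [y, 0.0, 1.0],
                  [x, 1.0, 0.0]])
    return np.linalg.det(H)

print(np.isclose(bordered_det(2.0, 3.0), 2 * 2.0 * 3.0))  # True: det(H) = 2xy
```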
16-3. Concave Programming
Concave programming is the name given to a class of techniques/theorems for solving programming problems with concave/quasiconcave functions. Although it is a very rich field with many analytically important theorems, as applied economists we probably need to know just two of them, at least for this lecture. We have already seen the first one.
Theorem 13-4 (Sufficiency for Unconstrained Global Min/Max): Let $F : U \subset \mathbb{R}^n \to \mathbb{R}$ be a $C^1$ function and $U$ a convex and open subset of $\mathbb{R}^n$.
(c) If $F$ is a concave function on $U$ and $DF(x^*) = 0$ for some $x^* \in U$, then $x^*$ is a global maximum of $F$ on $U$.
(d) If $F$ is a convex function on $U$ and $DF(x^*) = 0$ for some $x^* \in U$, then $x^*$ is a global minimum of $F$ on $U$.
Question: Why is it essential to have $U$ convex and open? Answer: If $U$ is not convex, then $F$ is not well-defined as a concave or convex function (because the convex combination $\lambda x + (1 - \lambda)y$ may fall outside $U$). If $U$ is not open, then there may be a boundary solution at which the FONCs may fail.
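A minimal illustration (my own example, not from the notes): for a concave quadratic, the unique stationary point is the global maximum, and random sampling never finds a higher value.

```python
import numpy as np

F = lambda v: -(v[0] - 1.0) ** 2 - (v[1] - 2.0) ** 2  # concave (Hessian = -2I)
x_star = np.array([1.0, 2.0])                         # the unique stationary point

rng = np.random.default_rng(5)
samples = rng.uniform(-100.0, 100.0, size=(100_000, 2))
print(all(F(v) <= F(x_star) for v in samples))        # True: a global maximum
```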
Theorem 16-5 (Sufficiency for Constrained Global Min/Max): Let $F, g_1, \ldots, g_m : U \subset \mathbb{R}^n \to \mathbb{R}$ be $C^1$ functions and $U$ a convex and open subset of $\mathbb{R}^n$. Consider the maximization problem:
$$\max_{x \in \mathbb{R}^n} F(x_1, x_2, \ldots, x_n)$$
$$\text{subject to } g_1(x_1, x_2, \ldots, x_n) \leq b_1$$
$$\vdots$$
$$g_m(x_1, x_2, \ldots, x_n) \leq b_m$$
Suppose that $F$ is quasiconcave on $U$ and $g_1, \ldots, g_m$ are quasiconvex on $U$. If all the necessary FOCs and the CQ (as stated in Theorem 14-4) are satisfied at $x^*$, then $x^*$ is a global maximum of $F$ on the constraint set.
What does this theorem mean intuitively? Let's consider a standard utility maximization problem:
$$\max U(x, y) \quad \text{subject to } px + y \leq I$$
Suppose the domain is $\mathbb{R}^2_{++}$. Note that the budget function $g(x, y) = px + y$ is a quasiconvex (actually also quasiconcave) function, because its level sets are linear. Suppose $U(x, y)$ is (strictly) quasiconcave. Then, there will be a tangency point that satisfies the FONCs. Suppose on the contrary that $U(x, y)$ is strictly quasiconvex. Then, the tangency does not mean a maximum. There is also the case where $U(x, y)$ is not even quasiconcave; then a tangency does not guarantee a global maximum. Consider, furthermore, the case where $U(x, y)$ is quasiconcave but $g(x, y)$ is not quasiconvex. Then, a tangency (FONC) does not imply a global solution. (See the graphs.)