
LECTURE 15: CONSTRAINED OPTIMISATION

Reference: Pemberton & Rau, Sections 17.1 and 17.2

Lagrange multipliers
Let f(x, y) and g(x, y) be functions of two variables. We wish to
maximise f(x, y) subject to the constraint g(x, y) = 0.
The first-order conditions for this problem turn out to be
∂f/∂x − λ ∂g/∂x = 0 and ∂f/∂y − λ ∂g/∂y = 0, where λ = (∂f/∂y)/(∂g/∂y).
The Lagrangian
Define a function L of three variables by setting
L(x, y, λ) = f(x, y) − λ g(x, y).
L is called the Lagrangian function for the problem, and its third argument λ is called the
Lagrange multiplier. The relationship between the function L and the constrained
maximization problem is as follows:
If (x*, y*) is a solution of the constrained maximisation problem
maximise f(x, y) subject to g(x, y) = 0,
then there is a real number λ* such that the Lagrangian L has a critical point at
(x*, y*, λ*).
This gives us a method of solving the problem. We introduce a new unknown quantity λ
(the Lagrange multiplier) and form the Lagrangian function L. The critical points of this
function are then investigated: the first two coordinates of one of these points will be a
constrained maximum of the original problem. If there is more than one critical point of
L, some of these points may not correspond to constrained maxima, and ad hoc methods
are typically employed to find the ones which do.
The whole procedure is known as Lagrange's method of undetermined multipliers, or
the Lagrange multiplier rule. The usual way to find the critical points of L is to solve
the equations L / x  0, L / y  0 for x and y in terms of λ and then to use L /   0,
which is the constraint g ( x, y )  0 , to solve for λ.
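As a concrete illustration of the rule (the example below, maximise f(x, y) = xy subject to x + y = 2, is chosen here for illustration and is not taken from Pemberton & Rau), the following sympy sketch carries out exactly these steps:

    # Lagrange multiplier rule, sketched with sympy on an illustrative example:
    # maximise f(x, y) = x*y subject to g(x, y) = x + y - 2 = 0.
    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)

    f = x * y                      # objective
    g = x + y - 2                  # constraint, written so that g(x, y) = 0
    L = f - lam * g                # the Lagrangian L(x, y, lambda)

    # Critical points of L: all three first-order partial derivatives vanish.
    eqs = [sp.diff(L, v) for v in (x, y, lam)]
    print(sp.solve(eqs, (x, y, lam), dict=True))
    # [{x: 1, y: 1, lambda: 1}]  ->  candidate constrained maximum at (1, 1)

Solving ∂L/∂x = 0 and ∂L/∂y = 0 by hand gives x = y = λ, and the constraint then forces λ = 1, in line with the procedure described above.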

Local and global maxima


Since our emphasis is now on optimisation rather than the geometry of critical points, in
this Lecture and the next, the term ‘constrained maximum’ will mean ‘global constrained
maximum’ unless otherwise stated.

Constrained minimisation
Since the method is concerned only with first-order conditions, exactly the same algebra
applies: to
minimise f(x, y) subject to g(x, y) = 0,
we may define the Lagrangian L and find its critical points as before.
In general, it is better to distinguish between maxima and minima by ad hoc methods, for example by comparing the values of f at the candidate points.
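For instance (again an illustrative example, not from the textbook), maximising and minimising f(x, y) = x + y on the circle x² + y² = 2 lead to the same first-order conditions; comparing the value of f at the critical points of L is the ad hoc step:

    # Both candidates come from the same Lagrangian; comparing f at each
    # candidate identifies the maximum and the minimum.
    # Illustrative example: optimise f(x, y) = x + y subject to x**2 + y**2 = 2.
    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)

    f = x + y
    g = x**2 + y**2 - 2
    L = f - lam * g

    eqs = [sp.diff(L, v) for v in (x, y, lam)]
    for s in sp.solve(eqs, (x, y, lam), dict=True):
        print(s, '  f =', f.subs(s))
    # (1, 1) gives f = 2 (the constrained maximum);
    # (-1, -1) gives f = -2 (the constrained minimum).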

Higher dimensions
Lagrange's method can be applied to functions of any number of variables and any
number of constraints so long as the number of constraints does not exceed the number of
variables.
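A sketch of the many-variable case, on an example invented here (three variables, two constraints), shows the general pattern: one multiplier per constraint, and the Lagrangian is the objective minus the multiplier-weighted constraints.

    # Illustrative example with n = 3 variables and m = 2 constraints:
    # maximise f(x, y, z) = x*y*z subject to x + y = 2 and y + z = 2.
    import sympy as sp

    x, y, z, l1, l2 = sp.symbols('x y z lambda1 lambda2', real=True)

    f = x * y * z
    g1, g2 = x + y - 2, y + z - 2
    L = f - l1 * g1 - l2 * g2          # one multiplier per constraint

    eqs = [sp.diff(L, v) for v in (x, y, z, l1, l2)]
    for s in sp.solve(eqs, (x, y, z, l1, l2), dict=True):
        print(s, '  f =', f.subs(s))
    # candidates (0, 2, 0) with f = 0 and (4/3, 2/3, 4/3) with f = 32/27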

Two warnings
(a) Behaviour of the Lagrangian
The essence of Lagrange's method is that a constrained maximum is a critical point of the
Lagrangian; it is not necessarily a local maximum of the Lagrangian.
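On the illustrative problem used above (f(x, y) = xy subject to x + y = 2, not from the textbook), this can be checked directly: the Hessian of L at its critical point has eigenvalues of both signs, so (1, 1, 1) is a saddle point of L even though (1, 1) is the constrained maximum.

    # The constrained maximiser is a critical point of L but only a saddle
    # point of L (illustrative example: f = x*y subject to x + y = 2).
    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)
    L = x * y - lam * (x + y - 2)

    H = sp.hessian(L, (x, y, lam)).subs({x: 1, y: 1, lam: 1})
    print(H.eigenvals())   # {-1: 2, 2: 1}: mixed signs, so a saddle point of L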
(b) Constraint qualifications
Since Lagrange's method does not work in all conceivable cases, we have to look for
conditions under which we can be sure that it does work. The easiest conditions to apply
consist of restrictions on the constraint functions and are known as constraint
qualifications.
When there is only one constraint, the proposition justifying Lagrange's method may be
stated as follows:
Proposition If there is only one constraint, and the gradient of the constraint function at
the constrained maximum is not the zero vector, then the constrained maximum is a critical
point of the Lagrangian.
When there is more than one constraint the corresponding constraint qualification is that
the gradients of the constraint functions at the optimum are linearly independent.
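A standard illustration of what can go wrong when the qualification fails (sketched here with an example that is not in the lecture): minimise f(x, y) = x subject to x³ − y² = 0. The constraint forces x ≥ 0, so the minimum is at the origin, where the gradient of the constraint vanishes; the Lagrangian then has no critical point at all, and the method never finds the minimiser.

    # Failing constraint qualification: minimise f(x, y) = x subject to
    # g(x, y) = x**3 - y**2 = 0.  The minimum is at (0, 0), where grad g = (0, 0).
    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)

    f, g = x, x**3 - y**2
    L = f - lam * g

    eqs = [sp.diff(L, v) for v in (x, y, lam)]
    print(sp.solve(eqs, (x, y, lam), dict=True))   # []  -- L has no critical points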

Exercises: Pemberton & Rau 17.1.1-17.1.5, 17.2.1-17.2.4
