
MATLAB Optimization Toolbox
Presentation Outline
Introduction
Function Optimization
Optimization Toolbox
Routines / Algorithms available
Optimization Problems
Unconstrained
Constrained
Example
Algorithm Descriptions

Function Optimization
Optimization concerns the
minimization or maximization of
functions
Standard Optimization Problem

$$\min_{x} f(x)$$

Subject to:
$g_j(x) \le 0$ (Inequality Constraints)
$h_i(x) = 0$ (Equality Constraints)
$x_k^L \le x_k \le x_k^U$ (Side Constraints)
Function Optimization
$f(x)$ is the objective function, which measures and evaluates the performance of a system. In a standard problem, we are minimizing the function.
Maximization is equivalent to minimizing the negative ($-f$) of the objective function.
$x$ is a column vector of design variables, which can affect the performance of the system.
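As a minimal sketch (the quadratic objective below is made up purely for illustration), a maximum of f is found by minimizing its negative:

f = @(x) -(x - 2).^2;     % toy function we actually want to MAXIMIZE
negf = @(x) -f(x);        % negate it for the minimizer
xmax = fminunc(negf, 0)   % returns xmax = 2, the maximizer of f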
Function Optimization
Constraints: limitations on the design space.
They can be linear or nonlinear, explicit or implicit functions:
$g_j(x) \le 0$ (Inequality Constraints)
$h_i(x) = 0$ (Equality Constraints)
$x_k^L \le x_k \le x_k^U$ (Side Constraints)
Most algorithms require inequalities in "less than or equal to" form!!!
Optimization Toolbox
A collection of functions that extends the capability of MATLAB. The toolbox includes routines for:
Unconstrained optimization
Constrained nonlinear optimization, including goal attainment problems, minimax problems, and semi-infinite minimization problems
Quadratic and linear programming
Nonlinear least squares and curve fitting (see the sketch after this list)
Solving nonlinear systems of equations
Constrained linear least squares
Specialized algorithms for large-scale problems
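For instance, a minimal curve-fitting sketch with lsqcurvefit, one of the least-squares routines above (the exponential model and synthetic data here are assumptions for illustration):

t = (0:9)';  y = 2*exp(0.3*t);           % synthetic data, made up here
model = @(p, t) p(1)*exp(p(2)*t);        % fit y = a*exp(b*t), with p = [a; b]
p = lsqcurvefit(model, [1; 0.1], t, y)   % returns p close to [2; 0.3]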
Routine categories:
Minimization Algorithms
Equation Solving Algorithms
Least-Squares Algorithms
Implementing Opt. Toolbox
Most of these optimization routines require the definition of an M-file containing the function, f, to be minimized.
Maximization is achieved by supplying the routines with -f.
Optimization options passed to the routines change the optimization parameters.
Default optimization parameters can be changed through an options structure.
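A minimal sketch of this (the function name myObjective and the guess x0 are placeholders, not part of the toolbox):

options = optimset('Display', 'iter', 'TolFun', 1e-8);   % override two defaults
[xmin, feval] = fminunc(@myObjective, x0, options);      % pass options to the routine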
Unconstrained Minimization
Consider the problem of finding the vector $x = [x_1\; x_2]^T$ that solves
$$\min_{x} f(x) = e^{x_1}\left(4x_1^2 + 2x_2^2 + 4x_1 x_2 + 2x_2 + 1\right)$$
Steps
Create an M-file that returns the
function value (Objective Function)
Call it objfun.m
Then, invoke the unconstrained
minimization routine
Use fminunc
Step 1: Obj. Function
function f = objfun(x)

f=exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);

Objective function, where $x = [x_1\; x_2]^T$ is the vector of design variables.
Step 2: Invoke Routine
x0 = [-1, 1];                              % starting with a guess
options = optimset('LargeScale', 'off');   % optimization parameter settings
[xmin, feval, exitflag, output] = fminunc(@objfun, x0, options);
Input arguments: the objective function handle, the starting guess, and the options. Output arguments: described on the next slides.
Results
xmin =
0.5000 -1.0000
feval =
1.3028e-010
exitflag =
1
output =
iterations: 7
funcCount: 40
stepsize: 1
firstorderopt: 8.1998e-004
algorithm: 'medium-scale: Quasi-Newton line search'
xmin: minimum point of the design variables
feval: objective function value at xmin
exitflag: tells whether the algorithm converged; if exitflag > 0, a local minimum was found
output: some other information about the run
More on fminunc Input
[xmin,feval,exitflag,output,grad,hessian] = fminunc(fun,x0,options,P1,P2,...)
fun: A handle to (or name of) the objective function.
x0: The initial guess; a vector whose length equals the number of design variables.
options: Sets some of the optimization parameters. (More after a few slides.)
P1, P2, ...: Additional parameters passed through to the objective function.
More on fminunc Output
[xmin,feval,exitflag,output,grad,hessian] = fminunc(fun,x0,options,P1,P2,...)
xmin: Vector of the minimum (optimal) point. Its size is the number of design variables.
feval: The objective function value at the optimal point.
exitflag: Indicates whether the optimization routine terminated successfully. (Converged if > 0.)
output: A structure giving more details about the optimization run.
grad: The gradient at the optimal point.
hessian: The Hessian at the optimal point.
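A minimal sketch requesting all of these outputs for the earlier objfun example:

options = optimset('LargeScale', 'off');
[xmin, feval, exitflag, output, grad, hessian] = fminunc(@objfun, [-1, 1], options);
% at the optimum, grad is close to [0; 0] and hessian is positive definite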
Options Setting optimset
options = optimset('param1',value1,'param2',value2,...)
The routines in the Optimization Toolbox have a set of default optimization parameters.
However, the toolbox allows you to alter some of those parameters, for example: the tolerances, the step size, user-supplied gradient or Hessian values, the maximum number of iterations, etc.
There is also a list of features available, for example: displaying the values at each iteration, checking user-supplied gradients or Hessians, etc.
You can also choose the algorithm you wish to use.
Options Setting (Cont.)
options = optimset('param1',value1,'param2',value2,...)
Type help optimset in the command window and a list of the available option settings will be displayed.
How to read the list? For example:
LargeScale - Use large-scale algorithm if possible [ {on} | off ]
The default value is the one in { }.
LargeScale is the parameter (param1); on and off are its possible values (value1).
Options Setting (Cont.)
LargeScale - Use large-scale algorithm if possible [ {on} | off ]
Since the default is on, to turn it off we just type:
options = optimset('LargeScale', 'off')
and pass options to the input of fminunc.
Useful Option Settings
Display - Level of display [ off | iter |
notify | final ]
MaxIter - Maximum number of iterations
allowed [ positive integer ]
TolCon - Termination tolerance on the
constraint violation [ positive scalar ]
TolFun - Termination tolerance on the
function value [ positive scalar ]
TolX - Termination tolerance on X [ positive
scalar ]
Highly recommended to use!!!
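A minimal sketch combining these settings into one options structure (the values chosen here are arbitrary examples, not defaults):

options = optimset('Display', 'iter', ...   % show progress at each iteration
                   'MaxIter', 200, ...      % cap the number of iterations
                   'TolFun', 1e-8, ...      % tolerance on the function value
                   'TolX', 1e-8);           % tolerance on X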
fminunc and fminsearch
fminunc uses algorithms that exploit gradient and Hessian information.
Two modes:
Large-Scale: interior-reflective Newton
Medium-Scale: quasi-Newton (BFGS)
Not preferred for solving highly discontinuous functions.
This function may only give local solutions.
fminunc and fminsearch
fminsearch is generally less efficient than
fminunc for problems of order greater than
two. However, when the problem is highly
discontinuous, fminsearch may be more
robust.
This is a direct search method that does not
use numerical or analytic gradients as in
fminunc.
This function may only give local solutions.
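A minimal sketch running fminsearch on the same objfun (note that no gradient information is supplied):

[xmin, feval] = fminsearch(@objfun, [-1, 1])
% reaches roughly the same minimum as fminunc, usually with more function evaluations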
Constrained Minimization
[xmin,feval,exitflag,output,lambda,grad,hessian] = fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,...)
lambda: vector of Lagrange multipliers at the optimal point
Example
$$\min_{x} f(x) = -x_1 x_2 x_3$$
Subject to:
$2x_1^2 + x_2 \le 0$
$-x_1 - 2x_2 - 2x_3 \le 0$
$x_1 + 2x_2 + 2x_3 \le 72$
$0 \le x_1, x_2, x_3 \le 30$
In matrix form, the linear inequalities and bounds become
$$A = \begin{bmatrix} -1 & -2 & -2 \\ 1 & 2 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 72 \end{bmatrix}, \quad LB = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \quad UB = \begin{bmatrix} 30 \\ 30 \\ 30 \end{bmatrix}$$
function f = myfun(x)
f=-x(1)*x(2)*x(3);
Example (Cont.)
For the nonlinear constraint $2x_1^2 + x_2 \le 0$:
Create a function called nonlcon which returns the two constraint vectors [C, Ceq].
function [C, Ceq] = nonlcon(x)
C = 2*x(1)^2 + x(2);   % nonlinear inequality, enforced as C <= 0
Ceq = [];              % no nonlinear equality constraints
Remember to return an empty matrix if a constraint type does not apply.
Example (Cont.)
x0 = [10; 10; 10];     % initial guess (3 design variables)
A = [-1 -2 -2; 1 2 2];
B = [0 72]';
LB = [0 0 0]';
UB = [30 30 30]';
[x, feval] = fmincon(@myfun, x0, A, B, [], [], LB, UB, @nonlcon)

CAREFUL!!! The argument sequence is
fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,...)
Pass [] for Aeq and Beq, since this problem has no linear equality constraints.
Example (Cont.)
Warning: Large-scale (trust region) method does not currently solve this type of problem, switching to
medium-scale (line search).
> In D:\Programs\MATLAB6p1\toolbox\optim\fmincon.m at line 213
In D:\usr\CHINTANG\OptToolbox\min_con.m at line 6
Optimization terminated successfully:
Magnitude of directional derivative in search direction less than 2*options.TolFun and maximum constraint
violation is less than options.TolCon
Active Constraints:
2
9
x =
0.00050378663220
0.00000000000000
30.00000000000000

feval =
-4.657237250542452e-035
Constraint numbering (sequence: A, B, Aeq, Beq, LB, UB, C, Ceq):
Const. 1: $-x_1 - 2x_2 - 2x_3 \le 0$
Const. 2: $x_1 + 2x_2 + 2x_3 \le 72$
Const. 3-5: lower bounds $0 \le x_1$, $0 \le x_2$, $0 \le x_3$
Const. 6-8: upper bounds $x_1 \le 30$, $x_2 \le 30$, $x_3 \le 30$
Const. 9: $2x_1^2 + x_2 \le 0$
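As a minimal sketch, the lambda output can also be requested to confirm which constraints bind (the field names below are fmincon's documented multiplier fields):

[x, feval, exitflag, output, lambda] = fmincon(@myfun, x0, A, B, [], [], LB, UB, @nonlcon);
lambda.ineqnonlin   % multiplier of the nonlinear constraint
lambda.upper        % multipliers of the upper bounds; a nonzero entry marks an active bound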
Solving Linear Programs:
MATLAB uses the following format for linear programs:
$$\min_{x} f^T x \quad \text{subject to} \quad Ax \le b, \quad A_{eq}x = b_{eq}, \quad l \le x \le u$$
A linear program in the above format is solved using the command:
x = linprog(f,A,b,Aeq,beq,l,u)
Simple LP Example:
Suppose we want to solve the following linear program using MATLAB:
$$\max\ 4x_1 + 2x_2 + x_3$$
subject to
$2x_1 + x_2 \le 1$
$x_1 + 2x_3 \le 2$
$x_1 + x_2 + x_3 = 1$
$0 \le x_1 \le 1, \quad 0 \le x_2 \le 1, \quad 0 \le x_3 \le 2$
Convert the LP into MATLAB format: linprog minimizes, so negate the objective and minimize $f^T x$ with $f = -[4\; 2\; 1]^T$.
Simple LP Example cont
Input the variables into Matlab:
>> f = -[4;2;1];
>> A = [2 1 0;1 0 2];
>> b = [1;2];
>> Aeq = [1 1 1];
>> beq = [1];
>> l = [0;0;0];
>> u = [1;1;2];

Simple LP Example cont
Solve the linear program using MATLAB:
>> x = linprog(f,A,b,Aeq,beq,l,u)
And you should get:
x =
0.5000
0.0000
0.5000

Simple LP Example cont
What do we do when some of the input arguments are missing?
For example, suppose there are no lower bounds on the variables. In this case, define l to be the empty matrix using the MATLAB command:
>> l = [ ];
Doing this and resolving the LP gives:
x =
0.6667
-0.3333
0.6667
Simple LP Example cont
Define the other matrices to be empty matrices if they do not appear in the problem formulation. For example, if there are no equality constraints, define Aeq and beq as empty matrices, i.e.
>> Aeq = [ ];
>> beq = [ ];
Solution becomes:
x =
0.0000
1.0000
1.0000
Sensitivity Analysis:
How sensitive are optimal solutions to
errors in data, model parameters, and
changes to inputs in the linear
programming problem?
Sensitivity analysis is important
because it enables flexibility and
adaptation to changing conditions
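As a minimal sketch (reusing the LP variables from the example above), linprog's lambda output exposes the dual values, i.e. the shadow prices used in sensitivity analysis:

[x, fval, exitflag, output, lambda] = linprog(f, A, b, Aeq, beq, l, u);
lambda.ineqlin   % shadow prices of the inequalities A*x <= b
lambda.eqlin     % shadow prices of the equalities Aeq*x = beq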
Conclusion
Easy to use! But we do not know what is happening behind each routine, so it is still important to understand the limitations of each routine.
Basic steps:
Recognize the class of optimization problem
Define the design variables
Create the objective function
Recognize the constraints
Start with an initial guess
Invoke a suitable routine
Analyze the results (they might not make sense)
