
WEEK 12 & 13

OPTIMIZATION
One-dimensional unconstrained optimization: golden-section search, quadratic interpolation, Newton's method. Multidimensional constrained optimization: linear programming.

LESSON OUTCOMES
At the end of this topic, the students will be able:

To apply the methods outlined to solve one-dimensional unconstrained optimization problems.

To apply the methods outlined to solve constrained optimization problems.

OPTIMIZATION
Root finding and optimization are related: both involve guessing and searching for a point on a function. The fundamental difference is:
Root finding searches for the zeros of a function or functions.
Optimization finds the minimum or the maximum of a function of one or several variables.

(Figure: a function of a single variable illustrating the difference between roots and optima.)

Mathematical Background
An optimization or mathematical programming problem can generally be stated as: find x, which minimizes or maximizes f(x), subject to

di(x) ≤ ai,   i = 1, 2, ..., m        Eq. (a)
ei(x) = bi,   i = 1, 2, ..., p        Eq. (b)

where: x = n-dimensional design vector, f(x) = objective function, di(x) = inequality constraints, ei(x) = equality constraints, ai and bi = constants.

Optimization problems can be classified on the basis of the form of f(x):


If f(x) and the constraints are linear, we have linear programming.
If f(x) is quadratic and the constraints are linear, we have quadratic programming.
If f(x) is not linear or quadratic and/or the constraints are nonlinear, we have nonlinear programming.

If constraints are included, we have a constrained optimization problem; if constraints are not included, it is an unconstrained optimization problem.

For constrained problems:

Degrees of freedom: DOF = n − p − m

Hence, if p + m ≤ n, solutions exist; if p + m > n, the problem is overconstrained.

Another way of classifying optimization problems:

One dimensional and multi-dimensional problems


Note:
If x* minimizes f(x), then x* also maximizes −f(x).
If x* maximizes f(x), then x* also minimizes −f(x).

One dimensional Unconstrained Optimization


In multimodal functions, both local and global optima can occur. In almost all cases, we are interested in finding the absolute highest or lowest value of a function.

How do we distinguish a global optimum from a local one?

By graphing, to gain insight into the behavior of the function (only viable for low-dimensional functions).
By using randomly generated starting guesses and picking the largest of the optima as the global one.
By perturbing the starting point to see if the routine returns a better point or the same local optimum.
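A minimal sketch of the random-restart strategy, assuming SciPy is available; the multimodal test function and the search interval are illustrative choices, not from the slides:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    x = float(np.atleast_1d(x)[0])  # minimize passes a length-1 array
    return np.sin(x) + np.sin(10.0 / 3.0 * x)  # several local minima

# Launch a local minimizer from many random starting points and keep the
# best local minimum found as the candidate global minimum.
rng = np.random.default_rng(0)
best = None
for x0 in rng.uniform(2.7, 7.5, size=20):
    res = minimize(f, x0)
    if best is None or res.fun < best.fun:
        best = res

print(f"candidate global minimum: x = {best.x[0]:.4f}, f = {best.fun:.4f}")
```

The same idea applies to maximization by minimizing −f(x).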

Optimization in one dimension can be divided into bracketing and open methods.

Bracketing methods:
1. Golden-section search method
2. Quadratic interpolation method

Open methods:
1. Newton's method

Bracketing Method
~ Golden-Section search method
~ Quadratic interpolation method


Golden Section Search


A unimodal function has a single maximum or minimum in a given interval. For a unimodal function:
1. First, pick two initial guesses, xl and xu, that bracket the extremum point of f(x).
2. Next, two interior points x1 and x2 are chosen according to the golden ratio:

d = ((√5 − 1)/2)(xu − xl)
x1 = xl + d
x2 = xu − d

3. Two results can occur:


If f(x1) > f(x2), then the domain of x to the left of x2, from xl to x2, can be eliminated because it does not contain the maximum. Then x2 becomes the new xl for the next round. {f(x1) > f(x2): xl = x2}
If f(x2) > f(x1), then the domain of x to the right of x1, from x1 to xu, is eliminated. In this case, x1 becomes the new xu for the next round. {f(x1) < f(x2): xu = x1}

4. Step 3 is repeated until the extremum is found (within the specified tolerance).

New x1 is determined as before:

x1 = xl + ((√5 − 1)/2)(xu − xl)

The real benefit of using the golden ratio is that, because the original x1 and x2 were chosen with it, we do not need to recalculate all the function values for the next iteration: only one new point has to be evaluated.
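A minimal code sketch of steps 1-4, written for a maximum; the test function 2 sin x − x²/10 and the bracket [0, 4] are illustrative choices:

```python
import math

def golden_max(f, xl, xu, tol=1e-5, max_iter=100):
    """Golden-section search for the maximum of a unimodal f on [xl, xu]."""
    R = (math.sqrt(5) - 1) / 2           # golden ratio, about 0.618
    d = R * (xu - xl)
    x1, x2 = xl + d, xu - d
    f1, f2 = f(x1), f(x2)
    for _ in range(max_iter):
        if f1 > f2:                      # maximum lies in [x2, xu]
            xl, x2, f2 = x2, x1, f1      # x2 becomes the new xl; reuse old x1
            x1 = xl + R * (xu - xl)      # only one new evaluation per iteration
            f1 = f(x1)
        else:                            # maximum lies in [xl, x1]
            xu, x1, f1 = x1, x2, f2      # x1 becomes the new xu; reuse old x2
            x2 = xu - R * (xu - xl)
            f2 = f(x2)
        if abs(xu - xl) < tol:
            break
    return (xl + xu) / 2

# The maximum of 2 sin(x) - x^2/10 on [0, 4] is near x = 1.4276.
print(golden_max(lambda x: 2 * math.sin(x) - x**2 / 10, 0, 4))
```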

Quadratic Interpolation
1. Start with three initial guesses (x0, x1 and x2) that jointly bracket the extremum point of f(x).
2. Calculate the next point x3:

x3 = [f(x0)(x1² − x2²) + f(x1)(x2² − x0²) + f(x2)(x0² − x1²)] / [2f(x0)(x1 − x2) + 2f(x1)(x2 − x0) + 2f(x2)(x0 − x1)]

3. If f(x3) > f(x1): discard x0, then set x0 = x1 and x1 = x3. If f(x3) < f(x1): set x2 = x3. (One must observe on which side of x1 the new point x3 falls; the assignments above assume x3 lies between x1 and x2, and mirror when it lies between x0 and x1.)
4. Step 3 is repeated until the extremum is found (within the specified tolerance); see the sketch below.
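A minimal sketch of steps 1-4 for a maximum, with the side-of-x1 bookkeeping made explicit; the test function and starting points are illustrative choices:

```python
import math

def quad_interp_max(f, x0, x1, x2, tol=1e-5, max_iter=100):
    """Quadratic interpolation for the maximum of f bracketed by x0 < x1 < x2."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        num = (f0 * (x1**2 - x2**2) + f1 * (x2**2 - x0**2)
               + f2 * (x0**2 - x1**2))
        den = (2 * f0 * (x1 - x2) + 2 * f1 * (x2 - x0)
               + 2 * f2 * (x0 - x1))
        x3 = num / den                   # vertex of the parabola through the 3 points
        if abs(x3 - x1) < tol:
            return x3
        f3 = f(x3)
        if x3 > x1:                      # x3 falls between x1 and x2
            if f3 > f1:
                x0, x1 = x1, x3          # discard x0
            else:
                x2 = x3                  # discard x2
        else:                            # x3 falls between x0 and x1
            if f3 > f1:
                x2, x1 = x1, x3          # discard x2
            else:
                x0 = x3                  # discard x0
    return x1

# Same test function as above: maximum near x = 1.4276.
print(quad_interp_max(lambda x: 2 * math.sin(x) - x**2 / 10, 0, 1, 4))
```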

Open Method
~ Newton's method
An approach similar to the Newton-Raphson root-finding method can be used to find an optimum of f(x) by defining a new function, g(x) = f′(x). The same optimal value x* then satisfies both

f′(x*) = g(x*) = 0


We can therefore use the following technique to find the extremum of f(x):
1. Start with one initial guess (xi).
2. Calculate the next iteration point:

xi+1 = xi − f′(xi) / f″(xi)
3. Step 2 is repeated until the extremum is found (within the specified tolerance)
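A minimal sketch, assuming the first and second derivatives of f are available in closed form; the test function 2 sin x − x²/10 is an illustrative choice:

```python
import math

def newton_optimize(df, d2f, x0, tol=1e-6, max_iter=50):
    """Newton's method for an extremum: iterate x = x - f'(x)/f''(x)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = 2 sin(x) - x^2/10, so f'(x) = 2 cos(x) - x/5 and f''(x) = -2 sin(x) - 0.2.
x_star = newton_optimize(lambda x: 2 * math.cos(x) - x / 5,
                         lambda x: -2 * math.sin(x) - 0.2,
                         x0=2.5)
print(x_star)  # about 1.4276; a maximum, since f''(x*) < 0
```

As with Newton-Raphson root finding, the iteration may diverge for a poor initial guess, and it converges to whichever extremum (maximum or minimum) is nearby.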

Multidimensional Constrained Optimization


The simplest type of multidimensional constrained optimization is so-called linear programming.

LINEAR PROGRAMMING
An optimization approach that deals with meeting a desired objective, such as maximizing profit or minimizing cost, in the presence of constraints such as limited resources. The mathematical functions representing both the objective and the constraints are linear. The method that we are going to use is the simplex method.

Standard Form
A basic linear programming problem consists of two major parts:
The objective function
A set of constraints

The first step in linear programming is to convert the problem into standard form if it is not already in that form.


The Simplex Method


Assumes that the optimal solution will be an extreme point. The approach must be able to discern whether an extreme point occurs during problem solution. To do this, the constraint inequalities are reformulated as equalities by introducing slack variables. For example, the constraint x + 2y ≤ 12 becomes the equality x + 2y + s1 = 12, where the slack variable s1 ≥ 0 represents the unused portion of the resource.


Standard Maximization Problem


A maximization problem is in standard form if the objective function, generally expressed as

Z = c1x1 + c2x2 + ... + cnxn

is to be maximized, subject to the constraints:

a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
an1x1 + an2x2 + ... + annxn ≤ bn

With the additional constraints:

xi ≥ 0 and bi ≥ 0

Characteristics of standard maximization problem:

1. The objective function is to be maximized.
2. All variables are non-negative (xi ≥ 0).
3. Each linear constraint is of the form ≤ a non-negative constant (bi ≥ 0).


Simplex method for Standard Maximization Problem


Example: Maximize P = 10x + 12y subject to the constraints:

x + 2y ≤ 12
3x + 2y ≤ 24
x ≥ 0, y ≥ 0
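As a quick check of this example, assuming SciPy is available: scipy.optimize.linprog is a library LP solver rather than the hand tableau simplex shown here, but it reaches the same optimum. Since linprog minimizes, we maximize P by minimizing −P:

```python
from scipy.optimize import linprog

# linprog minimizes, so maximize P = 10x + 12y by minimizing -P.
c = [-10, -12]
A_ub = [[1, 2],   # x + 2y <= 12
        [3, 2]]   # 3x + 2y <= 24
b_ub = [12, 24]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum (x, y) = (6, 3), maximum P = 96
```

The optimum sits at the intersection of the two constraint lines, an extreme point of the feasible region, as the simplex method assumes.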


Standard Minimization Problem


A minimization problem is in standard form if the objective function, generally expressed as

Z = c1x1 + c2x2 + ... + cnxn

is to be minimized, subject to the constraints:

a11x1 + a12x2 + ... + a1nxn ≥ b1
a21x1 + a22x2 + ... + a2nxn ≥ b2
...
an1x1 + an2x2 + ... + annxn ≥ bn

With the additional constraints:

xi ≥ 0 and bi ≥ 0

Characteristics of standard minimization problem:

1. The objective function is to be minimized.
2. All variables are non-negative (xi ≥ 0).
3. Each linear constraint is of the form ≥ a non-negative constant (bi ≥ 0).

The basic procedure used to solve the standard minimization problem is to convert it to a standard maximization problem and then solve it using the techniques for the standard maximization problem. The maximization problem transformed from the minimization problem is called the dual maximization problem.

Simplex method for Standard Minimization Problem


Example: Minimize P = 0.12x + 0.15y subject to the constraints:

60x + 60y ≥ 300
12x + 6y ≥ 36
10x + 30y ≥ 90
x ≥ 0, y ≥ 0
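The same library check for this example, assuming SciPy: each ≥ constraint is rewritten as a ≤ constraint by multiplying both sides by −1 to fit linprog's convention:

```python
import numpy as np
from scipy.optimize import linprog

c = [0.12, 0.15]
# Rewrite each ">=" row as "<=" by multiplying both sides by -1.
A_ub = -np.array([[60, 60],   # 60x + 60y >= 300
                  [12,  6],   # 12x +  6y >= 36
                  [10, 30]])  # 10x + 30y >= 90
b_ub = -np.array([300, 36, 90])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimum (x, y) = (3, 2), minimum P = 0.66
```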



Simplex method for Non-Standard Problem


There are many cases of linear programming that do not fall neatly into the standard maximization or standard minimization form.
Some adjustment is then required, i.e., the equations need to be transformed into a standard maximization problem before the methods for the standard maximization problem can be used.

Any minimization objective function can be transformed into a maximization objective function by multiplying it by −1.

Example: Minimize 12x + 4y − 6z can be converted to: Maximize −12x − 4y + 6z.

Any less-than inequality (<) must be converted to a less-than-or-equal inequality (≤) by adding a slack variable.


Any greater-than-or-equal inequality (≥) in the constraints must be converted to a less-than-or-equal inequality (≤) by subtracting a surplus variable.

The standard form of a linear program requires all variables to be positive. If some variables are not restricted to be positive, these variables have to be substituted by the differences of two positive variables. For example, if variable y is unrestricted in sign, it is replaced by two new positive variables y1 and y2 with y = y1 − y2.

Example:
Minimize f = 2x + 3y + 4z
subject to
x − 2z ≤ 40
x − y ≤ 5
x, y ≥ 0 (z is unrestricted)

Substituting z = z1 − z2:

Minimize f = 2x + 3y + 4z1 − 4z2
subject to
x − 2z1 + 2z2 ≤ 40
x − y ≤ 5
x, y, z1, z2 ≥ 0
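A sketch verifying the substituted problem numerically, assuming SciPy and the inequality directions written above:

```python
from scipy.optimize import linprog

# Substituted problem: all four variables are non-negative.
c = [2, 3, 4, -4]        # coefficients of x, y, z1, z2
A_ub = [[1, 0, -2, 2],   # x - 2z1 + 2z2 <= 40
        [1, -1, 0, 0]]   # x - y <= 5
b_ub = [40, 5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
x, y, z1, z2 = res.x
print(f"x={x:.0f}, y={y:.0f}, z={z1 - z2:.0f}, f={res.fun:.0f}")
# Recover z = z1 - z2. (linprog can also take the free variable directly
# with bounds=(None, None), which yields the same optimum.)
```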

Example:
If a variable has a negative lower bound, another method is applied. Assume variable y ≥ L, where L = −20. Then all linear program constraints containing y have to be modified, as follows:

x + 2y ≤ 40
x ≥ 0, y ≥ L (where L is a negative number)

Introduce a variable y2 = y − L (so that y = y2 + L) and rearrange the constraint:

x + 2(y2 + L) ≤ 40

Finally: x + 2y2 ≤ 40 − 2L, where x, y2 ≥ 0.
