OPTIMIZATION
One-Dimensional Unconstrained Optimization
- Golden-section search
- Quadratic interpolation
- Newton's method
Multidimensional Constrained Optimization
- Linear Programming
LESSON OUTCOMES
At the end of this topic, the students will be able to apply the methods outlined to solve one-dimensional unconstrained optimization problems.
OPTIMIZATION
Root finding and optimization are related: both involve guessing and searching for a point on a function. The fundamental difference is that root finding searches for the zeros of a function or functions, whereas optimization finds the minimum or the maximum of a function of one or several variables.
A function of a single variable illustrating the difference between roots and optima.
Mathematical Background
An optimization or mathematical programming problem can generally be stated as: Find x, which minimizes or maximizes f(x), subject to

di(x) ≤ ai,  i = 1, 2, ..., m
ei(x) = bi,  i = 1, 2, ..., p

where x = n-dimensional design vector, f(x) = objective function, di(x) = inequality constraints, ei(x) = equality constraints, ai and bi = constants.
If constraints are included, we have a constrained optimization problem; if not, it is an unconstrained optimization problem.
Optimization in one dimension can be divided into bracketing and open methods.

Bracketing methods:
1. Golden-section search method
2. Quadratic interpolation method

Open methods:
1. Newton's method
Bracketing Method
~ Golden-Section search method
~ Quadratic method
d = ((√5 − 1) / 2)(xu − xl)
x1 = xl + d
x2 = xu − d
4. Step 3 is repeated until the extremum is found (within the specified tolerance).
x1 = xl + ((√5 − 1) / 2)(xu − xl)
The real benefit of using the golden ratio is that, because the original x1 and x2 were chosen using the golden ratio, we do not need to recalculate all the function values for the next iteration: one of the two interior points (and its function value) can be reused.
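The golden-section update above can be sketched in Python. This is a minimal illustration under our own naming (the function `golden_section_max` and the test function are not from the source), assuming we search for a maximum on [xl, xu]:

```python
import math

def golden_section_max(f, xl, xu, tol=1e-6):
    """Golden-section search for a maximum of f on [xl, xu].

    Because the interior points sit a golden-ratio fraction from each
    end, one interior point (and its function value) is reused at every
    iteration, so only one new function evaluation is needed per step.
    """
    R = (math.sqrt(5) - 1) / 2          # golden ratio conjugate, ~0.618
    d = R * (xu - xl)
    x1, x2 = xl + d, xu - d             # note x2 < x1
    f1, f2 = f(x1), f(x2)
    while (xu - xl) > tol:
        if f1 > f2:                     # maximum lies in [x2, xu]
            xl, x2, f2 = x2, x1, f1     # old x1 is reused as the new x2
            x1 = xl + R * (xu - xl)
            f1 = f(x1)
        else:                           # maximum lies in [xl, x1]
            xu, x1, f1 = x1, x2, f2     # old x2 is reused as the new x1
            x2 = xu - R * (xu - xl)
            f2 = f(x2)
    return (xl + xu) / 2

# Example: f(x) = 2 sin(x) - x^2/10 has a maximum near x = 1.4276.
x_star = golden_section_max(lambda x: 2 * math.sin(x) - x**2 / 10, 0, 4)
```

Each pass shrinks the bracket by the factor 0.618, so convergence is linear but needs only one new evaluation per iteration.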
Quadratic Interpolation
1. Start with three initial guesses (x0, x1, and x2) that jointly bracket the extremum of f(x).
2. Calculate the next point x3:

x3 = [f(x0)(x1² − x2²) + f(x1)(x2² − x0²) + f(x2)(x0² − x1²)] / [2f(x0)(x1 − x2) + 2f(x1)(x2 − x0) + 2f(x2)(x0 − x1)]
3. If f(x3) > f(x1), set x0 = x1 and then x1 = x3 (the old x0 is discarded); if f(x3) < f(x1), set x2 = x3. (You must observe on which side of x1 the point x3 falls; the rule above assumes x3 lies between x1 and x2.)
4. Steps 2 and 3 are repeated until the extremum is found (within the specified tolerance).
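The four steps above can be sketched as follows. This is a minimal teaching sketch under our own naming; the update rule handles both sides on which x3 may fall, and the collinear-points guard is our own addition:

```python
import math

def quadratic_interp_max(f, x0, x1, x2, tol=1e-6, max_iter=100):
    """Successive parabolic interpolation for a maximum of f.

    x0, x1, x2 must bracket the maximum with f(x1) >= f(x0), f(x2).
    """
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        num = (f0 * (x1**2 - x2**2) + f1 * (x2**2 - x0**2)
               + f2 * (x0**2 - x1**2))
        den = (2 * f0 * (x1 - x2) + 2 * f1 * (x2 - x0)
               + 2 * f2 * (x0 - x1))
        if den == 0:                # points collinear: no parabola to fit
            return x1
        x3 = num / den              # vertex of the fitted parabola
        if abs(x3 - x1) < tol:
            return x3
        if x3 > x1:                 # x3 fell between x1 and x2
            if f(x3) > f1:
                x0, x1 = x1, x3     # discard x0
            else:
                x2 = x3             # discard x2
        else:                       # x3 fell between x0 and x1
            if f(x3) > f1:
                x2, x1 = x1, x3     # discard x2
            else:
                x0 = x3             # discard x0
    return x1

# Example: same test function as for golden-section search.
x_star = quadratic_interp_max(lambda x: 2 * math.sin(x) - x**2 / 10,
                              0.0, 1.0, 4.0)
```

For a true parabola the fit is exact, so the vertex is found in a single step; for general smooth functions convergence near the optimum is much faster than golden-section search.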
Open Method
~ Newton's method
A similar approach to the Newton-Raphson method can be used to find an optimum of f(x) by defining a new function g(x) = f′(x). Because the same optimal value x* satisfies both functions,

f′(x*) = g(x*) = 0
We can use the following technique to find the extremum of f(x).
1. Start with one initial guess (xi).
2. Calculate the next iteration point:
xi+1 = xi − f′(xi) / f″(xi)
3. Step 2 is repeated until the extremum is found (within the specified tolerance)
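The iteration above can be sketched as follows (a minimal illustration with our own naming, taking the first and second derivatives as supplied functions):

```python
import math

def newton_optimum(df, d2f, x, tol=1e-8, max_iter=50):
    """Newton's method for an extremum of f: iterate
    x_{i+1} = x_i - f'(x_i) / f''(x_i).

    Converges fast near the optimum but needs a good initial guess;
    the sign of f'' at the result tells maximum (negative) from
    minimum (positive).
    """
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = 2 sin(x) - x^2/10, so f'(x) = 2 cos(x) - x/5 and
# f''(x) = -2 sin(x) - 1/5; the maximum is near x = 1.4276.
x_star = newton_optimum(lambda x: 2 * math.cos(x) - x / 5,
                        lambda x: -2 * math.sin(x) - 0.2,
                        x=2.5)
```

Unlike the bracketing methods, this open method can diverge or land on a minimum instead of a maximum if the starting guess is poor.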
LINEAR PROGRAMMING
An optimization approach that deals with meeting a desired objective, such as maximizing profit or minimizing cost, in the presence of constraints such as limited resources. The mathematical functions representing both the objective and the constraints are linear. The method that we are going to use is the Simplex method.
Standard Form
A basic linear programming problem consists of two major parts:
1. The objective function
2. A set of constraints

The first step in linear programming is to convert the problem into standard form if it is not already in that form.
A linear program is in standard maximization form when:
1. The objective function is to be maximized.
2. All variables are non-negative (xi ≥ 0).
3. Each linear constraint is of the form ≤ a non-negative constant (bi ≥ 0).
x + 2y ≤ 12
3x + 2y ≤ 24
x ≥ 0, y ≥ 0
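No objective function accompanies these constraints in the source. Assuming, purely for illustration, that we maximize P = 3x + 5y, the Simplex method mentioned above can be sketched as a small tableau implementation (a teaching sketch for standard maximization form only; it handles neither degeneracy nor unbounded problems):

```python
def simplex_max(c, A, b):
    """Tableau simplex: maximize c·x subject to A x <= b, x >= 0 (b >= 0)."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; the last row is the objective [-c | 0 | 0].
    T = [list(A[i]) + [float(i == j) for j in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-float(ci) for ci in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))       # slack variables start in the basis
    while True:
        # Entering variable: most negative coefficient in the objective row.
        pc = min(range(n + m), key=lambda j: T[m][j])
        if T[m][pc] >= -1e-9:
            break                       # no improving direction: optimal
        # Leaving variable: minimum-ratio test (assumes a bounded problem).
        pr = min((i for i in range(m) if T[i][pc] > 1e-9),
                 key=lambda i: T[i][-1] / T[i][pc])
        basis[pr] = pc
        piv = T[pr][pc]
        T[pr] = [v / piv for v in T[pr]]
        for i in range(m + 1):          # eliminate the pivot column elsewhere
            if i != pr:
                f = T[i][pc]
                T[i] = [v - f * w for v, w in zip(T[i], T[pr])]
    x = [0.0] * n
    for i, bv in enumerate(basis):      # read the decision variables off
        if bv < n:                      # the final basis
            x[bv] = T[i][-1]
    return x, T[m][-1]

# Hypothetical objective P = 3x + 5y with the constraints from the text.
x, P = simplex_max(c=[3, 5], A=[[1, 2], [3, 2]], b=[12, 24])
```

With these data the optimum lands at the corner point x = 6, y = 3 of the feasible region, where the hypothetical objective equals 33.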
A linear program is in standard minimization form when:
1. The objective function is to be minimized.
2. All variables are non-negative (xi ≥ 0).
3. Each linear constraint is of the form ≥ a non-negative constant (bi ≥ 0).
The basic procedure used to solve the standard minimization problem is to convert it to a standard maximization problem and then solve it using the techniques for standard maximization problems. The maximization problem obtained from the minimization problem is called the dual maximization problem.
Any minimization objective function can be transformed into a maximization objective function by multiplying the objective function by −1.

Example: Minimize 12x + 4y − 6z can be converted to: Maximize −12x − 4y + 6z.
Any less-than-or-equal inequality (≤) in the constraints must be converted to an equation (=) by adding a slack variable.
Any greater-than-or-equal inequality (≥) in the constraints must be converted to an equation (=) by subtracting a surplus variable. The standard form of a linear program also requires all variables to be non-negative. If some variables are not restricted to be non-negative, these variables have to be substituted by the difference of two non-negative variables. For example, if the variable y is unrestricted in sign, it is replaced by two new non-negative variables y1 and y2 with y = y1 − y2.
Example:
Minimize f(x) = 2x + 3y + 4z
Subject to: x − 2z ≥ 40
            x − y ≥ 5
            x, y ≥ 0 (z is unrestricted)

Substituting z = z1 − z2:

Minimize f(x) = 2x + 3y + 4z1 − 4z2
Subject to: x − 2z1 + 2z2 ≥ 40
            x − y ≥ 5
            x, y, z1, z2 ≥ 0
Example:
If a variable has a negative lower bound, another method is applied. Assume variable y ≥ L where L = −20. Then all linear program constraints containing y have to be modified as follows:

x + 2y ≥ 40,  x ≥ 0,  y ≥ L (where L is a negative number)

Introduce a variable y2 = y − L (so that y = y2 + L) and rearrange the constraint:

x + 2(y2 + L) ≥ 40

Finally: x + 2y2 ≥ 40 − 2L, where x, y2 ≥ 0.