Ingredients
Objective function
Variables
Constraints

min f(x)
 x
subject to: g(x) ≤ 0
            h(x) = 0
            x ∈ X ⊆ ℝⁿ
            x_lower ≤ x ≤ x_upper
Summary
Multi-objective problems
Optimization in 1-D
Strategy: evaluate the function at some x_new
Case 1: the new “bracket” points are x_new, x_mid, x_right
Case 2: the new “bracket” points are x_left, x_new, x_mid
[Figure: bracket points (x_left, f(x_left)), (x_mid, f(x_mid)), (x_new, f(x_new)), (x_right, f(x_right)) on the function curve for each case.]
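The bracket-shrinking strategy above can be sketched as follows; golden-section search is one standard rule for placing x_new (the test function (x − 2)² and the interval are illustrative assumptions, not from the slides):

```python
def golden_section_minimize(f, x_left, x_right, tol=1e-8):
    """Minimize a unimodal f on [x_left, x_right] by shrinking the bracket."""
    phi = (5 ** 0.5 - 1) / 2          # golden-ratio factor, about 0.618
    x1 = x_right - phi * (x_right - x_left)
    x2 = x_left + phi * (x_right - x_left)
    f1, f2 = f(x1), f(x2)
    while x_right - x_left > tol:
        if f1 < f2:                    # minimum lies in [x_left, x2]
            x_right, x2, f2 = x2, x1, f1
            x1 = x_right - phi * (x_right - x_left)
            f1 = f(x1)
        else:                          # minimum lies in [x1, x_right]
            x_left, x1, f1 = x1, x2, f2
            x2 = x_left + phi * (x_right - x_left)
            f2 = f(x2)
    return 0.5 * (x_left + x_right)

# Hypothetical example: minimum of (x - 2)^2 on [0, 5]
x_star = golden_section_minimize(lambda x: (x - 2) ** 2, 0.0, 5.0)
```

Each iteration discards the sub-interval that cannot contain the minimum, so only one new function evaluation is needed per step.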
Newton's method in 1-D:
x_{k+1} = x_k − f′(x_k) / f″(x_k)
Requires 1st and 2nd derivatives
Quadratic convergence
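The 1-D Newton iteration above, applied to the hypothetical function f(x) = x² − 4x (so f′(x) = 2x − 4, f″(x) = 2, minimum at x = 2), can be sketched as:

```python
def newton_1d(df, d2f, x0, iters=20):
    """Iterate x <- x - f'(x)/f''(x) starting from x0."""
    x = x0
    for _ in range(iters):
        x = x - df(x) / d2f(x)        # Newton step on the derivative
    return x

x_min = newton_1d(lambda x: 2 * x - 4, lambda x: 2.0, x0=10.0)
```

For a quadratic the iteration reaches the minimizer in a single step; for general smooth functions convergence is quadratic near the solution.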
Multi-Dimensional Optimization
Hard in general
Weird shapes: multiple extrema, saddles,
curved or elongated valleys, etc.
Can’t bracket
For f(x, y):
∇f = [∂f/∂x, ∂f/∂y]ᵀ
H = ⎡ ∂²f/∂x²    ∂²f/∂x∂y ⎤
    ⎣ ∂²f/∂x∂y  ∂²f/∂y²  ⎦
Newton’s Method in
Multiple Dimensions
Replace 1st derivative with gradient,
2nd derivative with Hessian
So,
x_{k+1} = x_k − H(x_k)⁻¹ ∇f(x_k)
Tends to be extremely fragile unless the function is very smooth and the starting point is close to the minimum
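For two variables the Newton step can be written out with the 2×2 Hessian inverted in closed form; the convex quadratic f(x, y) = x² + 3y² + xy below is an illustrative assumption:

```python
def newton_step_2d(grad, hess, x, y):
    """One Newton step (x, y) <- (x, y) - H^{-1} grad f, with H = [[a, b], [c, d]]."""
    gx, gy = grad(x, y)
    a, b, c, d = hess(x, y)
    det = a * d - b * c
    # Solve H * s = grad via the explicit 2x2 inverse, then step backwards
    sx = (d * gx - b * gy) / det
    sy = (-c * gx + a * gy) / det
    return x - sx, y - sy

grad = lambda x, y: (2 * x + y, 6 * y + x)      # gradient of x^2 + 3y^2 + xy
hess = lambda x, y: (2.0, 1.0, 1.0, 6.0)        # constant Hessian entries

x, y = 5.0, -3.0
for _ in range(3):                              # one step suffices for a quadratic
    x, y = newton_step_2d(grad, hess, x, y)
```

On this quadratic the first step already lands on the minimizer (0, 0); the fragility mentioned above appears only for non-quadratic functions or indefinite Hessians.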
Optimality Conditions
Unconstrained optimization is a multivariate calculus problem. For Y = f(X), an optimum occurs at a point where f'(X) = 0 and f''(X) satisfies second-order conditions.
A relative minimum occurs where f'(X) = 0 and f''(X) > 0
A relative maximum occurs where f'(X) = 0 and f''(X) < 0
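A quick numeric illustration of these conditions, using the hypothetical function f(X) = X³ − 3X: f'(X) = 3X² − 3 vanishes at X = ±1, and f''(X) = 6X classifies each critical point.

```python
def fprime(x):
    return 3 * x ** 2 - 3        # derivative of x^3 - 3x

def fsecond(x):
    return 6 * x                 # second derivative

critical_points = [-1.0, 1.0]
kinds = ["min" if fsecond(x) > 0 else "max"
         for x in critical_points if abs(fprime(x)) < 1e-12]
# X = -1 is a relative maximum (f'' = -6 < 0); X = 1 a relative minimum (f'' = 6 > 0)
```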
Concavity and Second Derivative
[Figure: a curve with a local max and a separate local-and-global max; at each maximum the second derivative f″ < 0, i.e. the curve is concave down.]
Global Optimum
A univariate function with a negative second derivative everywhere
guarantees a global maximum at the point (if there is one) where f’(X)=0.
These functions are called “concave down” or sometimes just “concave.”
Maximize f(X)
s.t. gi(X) = bi
Hooke and Jeeves Pattern
Search
1. Define:
Starting point x0
Increments Δi for all variables (i = 1, .., n)
Step reduction factor α
Termination parameter ε
2. Perform exploratory search
3. If exploratory move successful, go to 5. If not, continue
4. Check for termination: Is ||Δ|| < ε?
• Yes: Stop. Current point is x*
• No: Reduce increments Δi = Δi / α for i = 1, .., n. Go to 2
5. Perform pattern move: xp^(k+1) = x^k + (x^k – x^(k-1))
6. Perform exploratory move with xp as the base point. Let the result
be x^(k+1).
7. Is f(x^(k+1)) < f(x^k)?
• Yes: Set x^(k-1) = x^k and x^k = x^(k+1). Go to 5.
• No: Go to 4.
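The steps above can be sketched compactly; the test function and the parameter values below are illustrative assumptions, and the termination/shrink bookkeeping is slightly simplified:

```python
def explore(f, base, delta):
    """Exploratory move: probe +/- delta along each axis, keep improvements."""
    x = list(base)
    for i in range(len(x)):
        for step in (delta, -delta):
            trial = list(x)
            trial[i] += step
            if f(trial) < f(x):
                x = trial
                break
    return x

def hooke_jeeves(f, x0, delta=0.5, alpha=2.0, eps=1e-6):
    x_prev, x = list(x0), explore(f, x0, delta)
    while delta > eps:
        if f(x) < f(x_prev):
            # Pattern move x_p = x_k + (x_k - x_{k-1}), then re-explore from x_p
            xp = [2 * a - b for a, b in zip(x, x_prev)]
            x_new = explore(f, xp, delta)
            if f(x_new) < f(x):
                x_prev, x = x, x_new
                continue
        delta /= alpha                  # shrink increments and retry
        x_prev, x = x, explore(f, x, delta)
    return x

# Hypothetical test: minimum of (x - 1)^2 + (y + 2)^2 at (1, -2)
x_star = hooke_jeeves(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```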
Cyclic Coordinate Algorithm
Let x1 be the starting point for the cycle, set i = 1, x= x1 , f= f1
Given f 100 x2 x1 1 x
2 2
1
2
f f x o d o
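A sketch of cyclic-coordinate passes on the Rosenbrock function above, with a crude dense-sampling line search standing in for an exact one-dimensional minimization (step range and cycle count are illustrative assumptions):

```python
def f(x):
    """Rosenbrock function from the slide."""
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

def line_min(f, x, i, lo=-2.0, hi=2.0, steps=4001):
    """Minimize f along coordinate i by dense sampling of the step length."""
    best_t, best_f = 0.0, f(x)
    for k in range(steps):
        t = lo + (hi - lo) * k / (steps - 1)
        trial = list(x)
        trial[i] += t
        ft = f(trial)
        if ft < best_f:
            best_t, best_f = t, ft
    x[i] += best_t
    return x

x = [-1.2, 1.0]                     # a common Rosenbrock starting point
for cycle in range(50):
    for i in range(2):              # cycle through the coordinate directions
        x = line_min(f, x, i)
```

Even after many cycles the iterate is still crawling along the curved valley toward the optimum at (1, 1), which illustrates why cyclic coordinate search is computationally intensive.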
Pattern Search
Graphically
[Figure: sequence of base points (x_base), temporary points (x_temp), and accepted points (x_acc), connected by exploratory moves (e) and pattern moves (p); arrows distinguish moves that yield improvement from those that do not.]
Pattern Search
Advantages:
Simple
Robust (relatively)
No gradients necessary.
Disadvantages:
Can get stuck; special "tricks" are needed to get the search
going again.
May require many function evaluations
Exploratory
Search
[Figure: two grids of candidate designs in the design space of BEAM (meters) versus LBP (meters), LBP from 116 to 134 m and BEAM from 10 to 18 m, showing the sample points visited by an exploratory search.]
Exploratory Search
Algorithms
Two options currently exist to identify the vector of design variables, x.
1) Randomly select n points in the bounded space. A uniform
distribution is assumed for the variables between the defined bounds.
(An enhancement could be to allow for alternative distributions.)
Monte Carlo methods are based upon this.
2) Use a systematic search algorithm such as from Aird and Rice
(1977). While they claim that their systematic algorithm provides a
better way of searching a region (and provides better starting points)
than a random algorithm, their method uses a small number of
variable values repeatedly on each axis.
This can be a problem when considering a large number of variables
and attempting to “visualize” the effect of changes. Considering a
problem in 15 variables, 2 to the power 15 or 32,768 candidate
designs would need to be evaluated to ensure just two sample values
on each axis.
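Option (1) above can be sketched in a few lines; the bounds below (LBP and BEAM ranges as in the preceding figure) and the sample size are illustrative assumptions:

```python
import random

def random_designs(n, bounds, seed=0):
    """Draw n design vectors uniformly between the per-variable bounds."""
    rng = random.Random(seed)
    return [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]

bounds = [(116.0, 134.0), (10.0, 18.0)]   # (LBP, BEAM) in meters
points = random_designs(100, bounds)
```

Unlike the systematic scheme, the number of samples here is independent of the number of variables, which is what makes Monte Carlo sampling attractive in high dimensions.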
Cyclic coordinate search
● Computationally intensive
Genetic programming
A candidate design is encoded as a binary string (chromosome); segments of the string encode the variables x1 and x2, e.g.:
1 1 0 1 0 0 1 0 1 1 0 0 1 0 1
[Flowchart: create initial population → evaluate fitness of all individuals → select individuals for reproduction → repeat until termination, then quit.]
GA population operators
Reproduction:
Exact copy/copies of individual
Crossover:
Randomly exchange genes of different parents
Many possibilities: how many genes, parents,
children …
Mutation:
Randomly flip some bits of a gene string
Used sparingly, but important to explore new
designs
Population operators
Crossover (single-point, after bit 4):
Parent 1: 1 1 0 1 0 0 1 0 1 1 0 0 1 0 1
Parent 2: 0 1 1 0 1 0 0 1 0 1 1 0 0 0 1
Child 1:  0 1 1 0 0 0 1 0 1 1 0 0 1 0 1
Child 2:  1 1 0 1 1 0 0 1 0 1 1 0 0 0 1
● Mutation (flip bit 6):
1 1 0 1 0 0 1 0 1 1 0 0 1 0 1
1 1 0 1 0 1 1 0 1 1 0 0 1 0 1
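The crossover and mutation examples above can be reproduced with a few lines (the crossover point 4 and mutated position match the bit strings shown):

```python
def crossover(p1, p2, point):
    """Single-point crossover: exchange the prefixes of two parents."""
    return p2[:point] + p1[point:], p1[:point] + p2[point:]

def mutate(chrom, pos):
    """Flip the bit at the given (0-based) position."""
    return chrom[:pos] + [1 - chrom[pos]] + chrom[pos + 1:]

parent1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1]
parent2 = [0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1]
child1, child2 = crossover(parent1, parent2, 4)   # exchange first 4 bits
mutant = mutate(parent1, 5)                       # flips the 6th bit
```

In a full GA the crossover point and mutated bit would be chosen at random; they are fixed here only to match the slide's example.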
Unconstrained problems
Transformation methods
Existence of solutions, optimality conditions
Global optimality
Unconstrained Optimization
Why?
Elimination of active constraints → unconstrained problem
Develop basic understanding useful for constrained optimization
Transformation of constrained problems into unconstrained problems
Relevant engineering problems
(e.g. potential energy minimization)
Transforming constrained
problem
Reformulation through barrier functions. Example problem:
min f(x) = x
s.t. g(x) = x² − 1 ≤ 0
Transformation:
f̃ = f − (1/r) ln(−g)
f̃(x) = x − (1/r) ln(1 − x²)
Transformed problem
[Figure: the barrier-transformed function f̃ plotted for several values of r, together with f and g.]
Transformed problem
Barrier functions result in a feasible, interior
optimum:
[Figure: minima of f̃ for r = 100, 200, 400, 800 approaching the constraint boundary from the interior.]
Penalization
Same example constraint: g(x) = x² − 1 ≤ 0
Transformation:
f̃ = f + p · max(0, g)²
f̃(x) = x + p · max(0, x² − 1)²
Penalization (2)
Penalty functions result in an infeasible, exterior
optimum:
[Figure: minima of f̃ for p = 4, 10, 20, 40 approaching the constraint boundary from outside the feasible region.]
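A numeric sketch of the penalty transformation f̃(x) = x + p·max(0, x² − 1)² from the previous slide, minimized here by dense sampling (an illustrative stand-in for a proper solver): as p grows, the minimizer approaches the constrained optimum x = −1 from outside the feasible interval [−1, 1].

```python
def penalized(x, p):
    """Penalty-transformed objective for min x s.t. x^2 - 1 <= 0."""
    return x + p * max(0.0, x ** 2 - 1.0) ** 2

def argmin_on_grid(fun, lo=-3.0, hi=3.0, steps=60001):
    """Crude global minimization by dense sampling on [lo, hi]."""
    xs = [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
    return min(xs, key=fun)

minima = [argmin_on_grid(lambda x, p=p: penalized(x, p)) for p in (4, 10, 20, 40)]
# Each minimizer lies slightly below -1 (infeasible, exterior) and
# creeps toward -1 as the penalty weight p increases.
```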
Problem transformation
summary
[Figure: side-by-side comparison of the barrier function and penalty function transformations.]
Unconstrained engineering problem
Two springs with stiffnesses k₁ = 8 N/cm and k₂ = 1 N/cm, each of natural length 10 cm; loads Fx = 5 N and Fy = 5 N applied at the joint, whose displacements are x₁, x₂.
Potential energy:
Π = ½ k₁ u₁² + ½ k₂ u₂² − Fx x₁ − Fy x₂
Π(x₁, x₂) = 4(√(x₁² + (10 − x₂)²) − 10)² + 0.5(√(x₁² + (10 + x₂)²) − 10)² − 5x₁ − 5x₂
● Equilibrium: min of Π over x₁, x₂
[Figure: two-spring system geometry and contours of Π(x₁, x₂).]
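A numeric check of the two-spring potential energy above (using the reconstructed expression; the sign conventions in the spring geometry are an assumption). A coarse grid search over the displacements locates an approximate equilibrium:

```python
def potential(x1, x2):
    """Potential energy of the two-spring system, reconstructed from the slide."""
    u1 = (x1 ** 2 + (10.0 - x2) ** 2) ** 0.5 - 10.0   # stretch of spring 1
    u2 = (x1 ** 2 + (10.0 + x2) ** 2) ** 0.5 - 10.0   # stretch of spring 2
    return 4.0 * u1 ** 2 + 0.5 * u2 ** 2 - 5.0 * x1 - 5.0 * x2

# Coarse 0.1 cm grid over 0 <= x1, x2 <= 10 cm (search range is an assumption)
best = min(
    ((potential(i / 10.0, j / 10.0), i / 10.0, j / 10.0)
     for i in range(0, 101) for j in range(0, 101)),
    key=lambda t: t[0],
)
# best[0] is the approximate minimum potential energy; best[1:] the displacements
```

At equilibrium the potential energy is below its value at the undeformed state Π(0, 0) = 0, consistent with the applied loads doing positive work.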
Contents
Unconstrained problems
Transformation methods
Existence of solutions, optimality conditions
Global optimality
Theory for solving unc.
problems
Assumptions:
Objective continuous and differentiable (C¹)
Domain closed and bounded (compact)
[Figure: a function f on an open interval ]a, b[ need not attain its minimum.]