CONSTRAINED MINIMIZATION II
Functions of One Variable
Functions of N Variables
Genetic Search
Penalty Function Methods

Ranjith Dissanayake
Structures Laboratory
Dept. of Civil Engineering
Faculty of Engineering
University of Peradeniya
VR&D
CONSTRAINED MINIMIZATION
Find the Set of Design Variables that will

Minimize F(X)                                  (Objective Function)

Subject to:
  g_j(X) ≤ 0,   j = 1, M                       (Inequality Constraints)
  h_k(X) = 0,   k = 1, L                       (Equality Constraints)
  X_i^L ≤ X_i ≤ X_i^U,   i = 1, N              (Side Constraints)
Kuhn-Tucker Conditions
1. X* is Feasible
2. λ_j g_j(X*) = 0,   λ_j ≥ 0,   j = 1, M
3. ∇F(X*) + Σ_{j=1}^{M} λ_j ∇g_j(X*) + Σ_{k=M+1}^{M+L} λ_k ∇h_{k−M}(X*) = 0
   with λ_j ≥ 0 for the Inequality Constraints and λ_k Unrestricted in Sign for the Equality Constraints
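As a quick numerical illustration of these conditions, the following sketch checks feasibility, complementary slackness, and stationarity at the known optimum of a made-up toy problem (the problem and its multiplier are assumptions for this example, not from the slides):

```python
# Hypothetical example (not from the slides): check the Kuhn-Tucker
# conditions at the known optimum of
#   Minimize F(X) = x1² + x2²   subject to   g(X) = 1 - x1 ≤ 0,
# whose solution is X* = (1, 0) with multiplier λ = 2.

x1, x2 = 1.0, 0.0
lam = 2.0

g = 1.0 - x1                       # constraint value at X*
grad_F = (2.0 * x1, 2.0 * x2)      # ∇F = (2x1, 2x2)
grad_g = (-1.0, 0.0)               # ∇g

feasible = g <= 1e-12                                 # condition 1
complementary = abs(lam * g) < 1e-12 and lam >= 0.0   # condition 2
residual = tuple(f + lam * gg for f, gg in zip(grad_F, grad_g))  # condition 3

print(feasible, complementary, residual)  # → True True (0.0, 0.0)
```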
EXAMPLE
A Simple Cantilevered Beam
P = 2250 N (tip load)
L = 500 cm
Rectangular Cross Section of Width B and Height H

Problem Statement
Find B and H to Minimize V = BHL
Subject to:
  σ = Mc/I ≤ 700                 (Bending Stress)
  δ = PL³/(3EI) ≤ 2.54           (Tip Deflection)
  H/B ≤ 12                       (Aspect Ratio)
  1.0 ≤ B ≤ 20.0
  1.0 ≤ H ≤ 50.0
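A minimal sketch of evaluating this design's constraints in Python. P, L, and the limits are from the slide; Young's modulus E is not given there, so E = 2.07e7 N/cm² (steel) is an assumed value for illustration only, and the side-constraint bounds are as read from the garbled slide:

```python
# Constraint evaluation for the cantilevered beam example (B, H in cm).
P = 2250.0   # tip load, N (from the slide)
L = 500.0    # length, cm (from the slide)
E = 2.07e7   # ASSUMED Young's modulus, N/cm² (not given on the slide)

def responses(B, H):
    I = B * H**3 / 12.0               # second moment of area
    sigma = (P * L) * (H / 2.0) / I   # max bending stress, Mc/I = 6PL/(BH²)
    delta = P * L**3 / (3.0 * E * I)  # tip deflection, PL³/(3EI)
    return sigma, delta

def is_feasible(B, H):
    sigma, delta = responses(B, H)
    return (sigma <= 700.0 and delta <= 2.54 and H <= 12.0 * B
            and 1.0 <= B <= 20.0 and 1.0 <= H <= 50.0)

sigma, delta = responses(5.0, 50.0)   # a sample candidate design
print(sigma, delta, is_feasible(5.0, 50.0))
```

For B = 5, H = 50 the stress works out to 6PL/(BH²) = 540 ≤ 700, so the candidate satisfies the stress constraint under the assumed E.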
Design Space
[Figure: contours of V = 5,000 through 20,000 in the B-H plane, with the constraint boundaries H/B = 12, σ = 700, and H = 50. The OPTIMUM lies where the stress boundary meets H = 50.]
One-Dimensional Search
Find α to Minimize F(X + αS)
Subject to:
  g_j(X + αS) ≤ 0,   j = 1, M
  h_k(X + αS) = 0,   k = 1, L
  X_i^L ≤ X_i + αS_i ≤ X_i^U,   i = 1, N
Polynomial Interpolation
Constrained Minimum
We Approximate all Constraints as Polynomials in α, Along with the Objective
For the Objective, We Seek the α that Minimizes F(α)
For Constraints, We Seek the α where g_j(α) = 0
Of all Possible α's Calculated This Way, We Choose the Smallest One
Polynomial Interpolation
Several Possible Cases
Initially Feasible, No Active Constraints
Initially Feasible, One or More Active Constraints
Initially Infeasible, Active and Violated Constraints
Initially Infeasible, No Feasible Solution
Polynomial Interpolation
Initially Feasible, No Active Constraints
[Figure: F, g1, and g2 versus α, showing the unconstrained minimum of F and the constrained minimum where the first constraint boundary (g1 = 0) is crossed.]
Polynomial Interpolation
Initially Feasible, Active Constraints
[Figure: F, g1, g2, and g3 versus α; with a constraint active at α = 0, the constrained minimum occurs before the unconstrained minimum of F.]
Polynomial Interpolation
Initially Infeasible, Active and Violated
Constraints
[Figure: F and g_j versus α from an initially infeasible point; the constrained minimum is the α that first re-enters the feasible region, whether F is increasing or decreasing there.]
Polynomial Interpolation
Initially Infeasible, No Feasible Solution
[Figure: g_j versus α when no feasible solution exists along the search direction; g1 remains positive for all α.]
Key Issues
Estimating an Initial α
If Too Large, the Quality of the Polynomial Fit will be Poor
If Too Small, Many Steps will be Needed to Bracket the Minimum
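The constraint-boundary step of the interpolation can be sketched as follows: fit a quadratic through three samples of g(α), solve for its roots, and keep the smallest positive one as the candidate step length. The sample constraint g(α) = α² − 4 is an assumption for illustration:

```python
# Sketch (not the slides' code): fit a quadratic to three samples of a
# constraint g(α) and solve g(α) = 0 for a step-length candidate.
import math

def quadratic_fit(a, ga):
    """Coefficients (c0, c1, c2) of c0 + c1*α + c2*α² through (a[i], ga[i])."""
    (a0, a1, a2), (g0, g1, g2) = a, ga
    d01 = (g1 - g0) / (a1 - a0)          # divided differences
    d12 = (g2 - g1) / (a2 - a1)
    c2 = (d12 - d01) / (a2 - a0)
    c1 = d01 - c2 * (a0 + a1)
    c0 = g0 - c1 * a0 - c2 * a0**2
    return c0, c1, c2

def smallest_positive_root(c0, c1, c2):
    if abs(c2) < 1e-12:                  # essentially linear
        return -c0 / c1
    disc = c1 * c1 - 4.0 * c2 * c0
    if disc < 0.0:
        return None                      # constraint never reaches zero
    r = math.sqrt(disc)
    roots = [(-c1 - r) / (2 * c2), (-c1 + r) / (2 * c2)]
    pos = [x for x in roots if x > 0.0]
    return min(pos) if pos else None

# Example: g(α) = α² - 4 sampled at α = 0, 1, 3 → boundary at α = 2
g = lambda alpha: alpha**2 - 4.0
alphas = (0.0, 1.0, 3.0)
c = quadratic_fit(alphas, tuple(g(x) for x in alphas))
alpha_star = smallest_positive_root(*c)
print(alpha_star)  # → 2.0
```

Doing this for every constraint (and for the objective's minimum) and taking the smallest resulting α is exactly the selection rule described above.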
Genetic Search
Basically a Random Search Method
Uses Function Values Only
Treats Variables as Discrete
Basic Concept
Represent Numbers as a Binary String
X_i = 1011101001 = 1·2⁰ + 0·2¹ + 0·2² + 1·2³ + 0·2⁴ + 1·2⁵ + 1·2⁶ + 1·2⁷ + 0·2⁸ + 1·2⁹ = 745
(the rightmost bit is the least significant)
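A one-line check of this decoding convention (rightmost bit = 2⁰):

```python
# Decode a binary string with the rightmost bit as least significant.
def decode(bits: str) -> int:
    return sum(int(b) << k for k, b in enumerate(reversed(bits)))

print(decode("1011101001"))  # → 745  (same as int("1011101001", 2))
```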
Genetic Search
For a Candidate Design, the Objective Function (Fitness) is Defined using an Exterior Penalty Function
  Φ(X) = F(X) + R Σ_{j=1}^{M} Max[0, g_j(X)]
Example Members of the Population:
  1011101001
  0101100110
Genetic Search
Basic Operations
Reproduction
  Bias the Offspring in Favor of the Most Fit Parents
Crossover
  Allow Members of a Population to Exchange Characteristics
  With a Typical Probability of Pc = 0.6 to 0.8
Mutation
  Randomly Switch Zeros and Ones
  With a Typical Probability of Pm = 0.01 to 0.02
Genetic Search
Algorithm
Create a Random Population
Calculate all Fitnesses, Φ_i(X)
Get Their Sum, Φ_sum = Σ_i Φ_i(X)
Genetic Search
Algorithm
Perform Crossover
  Use a Weighted Coin Toss (Probability Pc) to Decide Whether Crossover Occurs
  If Crossover is Dictated, Pick an Integer Between 1 and the String Length to Establish the Crossover Location
  Exchange the Bits Beyond That Location Between the Two Parents
Genetic Search
Features
Uses Function Values Only
Naturally Handles Discrete Variables
Easy to Program
Requires a Very Large Number of Function
Evaluations
Improved Probability of Finding a Global Optimum
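The operations above can be sketched as a minimal GA. Everything problem-specific here is an assumption for illustration: the objective F(x) = (x − 300)², the constraint g(x) = 100 − x ≤ 0, and the constants; selection is done by tournament rather than the fitness-weighted roulette the slides imply:

```python
# Minimal GA sketch (illustrative, not the slides' implementation):
# minimize F(x) = (x - 300)² subject to g(x) = 100 - x ≤ 0, with x
# encoded as a 10-bit string in [0, 1023]. Fitness is the exterior
# penalty Φ = F + R·max(0, g); lower Φ is better.
import random

BITS, POP, GENS = 10, 30, 60
PC, PM, R = 0.7, 0.02, 1000.0   # crossover/mutation probabilities, penalty

def decode(bits):
    return int("".join(map(str, bits)), 2)

def phi(bits):
    x = decode(bits)
    return (x - 300.0) ** 2 + R * max(0.0, 100.0 - x)

def select(pop):
    # tournament selection: bias reproduction toward the fitter parent
    a, b = random.sample(pop, 2)
    return a if phi(a) <= phi(b) else b

def crossover(p1, p2):
    if random.random() < PC:
        cut = random.randint(1, BITS - 1)   # crossover location
        return p1[:cut] + p2[cut:]          # exchange bits beyond the cut
    return p1[:]

def mutate(bits):
    return [b ^ 1 if random.random() < PM else b for b in bits]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
best = min(pop, key=phi)
for _ in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]
    best = min(pop + [best], key=phi)       # elitism: keep best so far
print(decode(best), phi(best))
```

Even this toy version shows the cost issue noted above: 30 × 60 = 1,800 fitness evaluations for a single one-variable problem.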
Exterior Penalty Function
Pseudo-Objective:
  Φ(X, R) = F(X) + R·P(X)
where
  P(X) = Σ_{j=1}^{M} Max[0, g_j(X)]² + Σ_{k=1}^{L} [h_k(X)]²
Algorithm
1. Begin with a Small Value of R
2. Minimize Φ(X, R)
3. Increase R (Say by a Factor of 10)
4. If Converged, Exit. Else go to Step 2
Features
  Easy to Program
  Approaches the Optimum from the Infeasible Region
Exterior Penalty
Function of One Variable
Minimize F(X) = (X² + 2X + 8)/16
Subject to:
  g₁ = (1 − X)/2 ≤ 0
  g₂ = (X − 2)/2 ≤ 0
[Figure: F, g₁, and g₂ versus X; the FEASIBLE REGION is 1 ≤ X ≤ 2, and the unconstrained minimum of F lies at X = −1.]
Exterior Penalty
Function of One Variable
[Figure: pseudo-objective F̃ versus X for R = 1, 10, and 100. As R increases, the minimum of F̃ approaches the constrained minimum from outside the feasible region.]
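The exterior penalty iteration on this one-variable example (using the problem as reconstructed above: F = (x² + 2x + 8)/16, feasible region 1 ≤ x ≤ 2) can be sketched with a simple golden-section line minimizer:

```python
# Exterior penalty sweep: minimize Φ(x, R) = F + R·(max(0,g1)² + max(0,g2)²)
# for increasing R; the minimizer approaches x* = 1 from the infeasible side.
import math

def F(x):  return (x * x + 2 * x + 8) / 16.0
def g1(x): return (1 - x) / 2.0
def g2(x): return (x - 2) / 2.0

def phi(x, R):
    return F(x) + R * (max(0.0, g1(x)) ** 2 + max(0.0, g2(x)) ** 2)

def golden_min(f, lo, hi, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [lo, hi]."""
    inv = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - inv * (b - a), a + inv * (b - a)
        if f(c) < f(d): b = d
        else:           a = c
    return (a + b) / 2

x = 0.0
for R in (1.0, 10.0, 100.0):
    x = golden_min(lambda t: phi(t, R), -3.0, 3.0)
    print(R, x)   # minimizer creeps toward x* = 1 as R grows
```

For this quadratic case the minimizer can be found in closed form as x = (4R − 1)/(4R + 1), i.e. 0.6, ≈0.951, ≈0.995 for R = 1, 10, 100, which is exactly the approach-from-infeasible behavior the figure shows.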
Exterior Penalty
Function of Two Variables
Minimize F = X₁ + X₂
Subject to:
  g₁ = 2 − X₁ − X₂ ≤ 0
  g₂ = 8 − 6X₁ + X₁² − X₂ ≤ 0
[Figure: design space in the X₁-X₂ plane showing the g₁ = 0 line, the g₂ = 0 parabola, and the F = 10 contour.]
[Figures: contours of the pseudo-objective Φ̃ in the X₁-X₂ plane for successively larger R; the minimum of Φ̃ moves toward the constrained optimum from the infeasible region.]
Interior Penalty Function
  Φ(X, R′, R) = F(X) + R′ Σ_{j=1}^{M} [−1/g_j(X)] + R Σ_{k=1}^{L} [h_k(X)]²
Interior Penalty
Function of One Variable
Minimize F(X) = (X² + 2X + 8)/16
Subject to:
  g₁ = (1 − X)/2 ≤ 0
  g₂ = (X − 2)/2 ≤ 0
[Figure: F, g₁, and g₂ versus X; the FEASIBLE REGION is 1 ≤ X ≤ 2.]
Interior Penalty
Function of One Variable
[Figure: pseudo-objective F̃ versus X for R′ = 0.5, 0.1, and 0.01, plotted with F(X). As R′ decreases, the minimum of F̃ approaches the constrained minimum from inside the feasible region.]
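The same one-variable example (as reconstructed above) under the reciprocal interior penalty, minimized over the open feasible interval while R′ is driven toward zero:

```python
# Interior (reciprocal) penalty: Φ = F + R′·Σ(-1/g_j), minimized inside
# the feasible region; the minimizer approaches x* = 1 from the inside.
import math

def F(x): return (x * x + 2 * x + 8) / 16.0

def phi(x, Rp):
    g1 = (1 - x) / 2.0     # negative inside the feasible region
    g2 = (x - 2) / 2.0
    return F(x) + Rp * (-1.0 / g1 - 1.0 / g2)

def golden_min(f, lo, hi, tol=1e-10):
    inv = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - inv * (b - a), a + inv * (b - a)
        if f(c) < f(d): b = d
        else:           a = c
    return (a + b) / 2

for Rp in (0.5, 0.1, 0.01, 0.001):
    # the barrier blows up at the boundaries, so search strictly inside
    x = golden_min(lambda t: phi(t, Rp), 1.0 + 1e-9, 2.0 - 1e-9)
    print(Rp, x)   # stays feasible, approaching x* = 1 as R′ → 0
```

Note the contrast with the exterior penalty sketch: here every iterate is feasible, which is why the method must start from a feasible point.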
Interior Penalty
Alternative Forms
Log Barrier:
  P(X) = −R′ Σ_{j=1}^{M} log[−g_j(X)]
Polyak's Modified Log Barrier:
  P(X) = −R′ Σ_{j=1}^{M} log[1 − g_j(X)/R′]
Interior Penalty
Alternative Forms
Linear Extended Penalty:
  P(X) = R′ Σ_{j=1}^{M} g̃_j(X)
where
  g̃_j(X) = −1/g_j(X)               if g_j(X) ≤ ε
  g̃_j(X) = −[2ε − g_j(X)]/ε²       if g_j(X) > ε
and ε < 0 is a small transition value
Interior Penalty
Algorithm
1. Start from a Feasible Point with a Large Value of R′
2. Minimize Φ(X, R′)
3. Decrease R′ (Say by a Factor of 10)
4. If Converged, Exit. Else go to Step 2
Linear Extended Penalty
Function of One Variable
Minimize F(X) = (X² + 2X + 8)/16
Subject to:
  g₁ = (1 − X)/2 ≤ 0
  g₂ = (X − 2)/2 ≤ 0
[Figure: F, g₁, and g₂ versus X; the FEASIBLE REGION is 1 ≤ X ≤ 2.]
[Figure: pseudo-objective F̃ versus X for R′ = 0.5, 0.1, and 0.01, plotted with F(X) and the feasible region.]
Linear Extended Penalty
Features
  Easy to Program
  Approaches the Optimum from the Feasible Region
  No Good Rules for Choosing the Transition Point
  Has the Best Features of the Exterior and Interior Penalty Function Methods
[Figure: behavior of the INTERIOR PENALTY, LINEAR EXTENDED PENALTY, and VARIABLE PENALTY for a single constraint as g_j(X) crosses from the FEASIBLE REGION into the INFEASIBLE REGION.]
Augmented Lagrange
Multiplier Method
Minimize the Augmented Lagrangian
  A(X, λ, R) = F(X) + Σ_{j=1}^{M} [λ_j ψ_j + R ψ_j²] + Σ_{k=1}^{L} [λ_{k+M} h_k(X) + R h_k(X)²]
where
  ψ_j = Max[ g_j(X), −λ_j/(2R) ]
Augmented Lagrange
Multiplier Method
Algorithm
1. Start with a Small Value of R and all λ_j = 0
2. Minimize A(X, λ, R)
3. Update the Lagrange Multipliers:
   λ_j^New = λ_j^Old + 2R·Max[ g_j(X), −λ_j^Old/(2R) ],   j = 1, M
   λ_{k+M}^New = λ_{k+M}^Old + 2R·h_k(X),   k = 1, L
4. Increase R if Needed and Go to Step 2 Until Converged
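On the one-variable example as reconstructed earlier (F = (x² + 2x + 8)/16, feasible region 1 ≤ x ≤ 2), the ALM iteration looks like this; R = 10 is an assumed, fixed penalty value:

```python
# ALM sketch: A(x, λ, R) = F + Σ[λ_j ψ_j + R ψ_j²], ψ_j = max(g_j, -λ_j/(2R));
# after each minimization, λ_j ← λ_j + 2R·max(g_j, -λ_j/(2R)).
import math

def F(x):  return (x * x + 2 * x + 8) / 16.0
def g1(x): return (1 - x) / 2.0
def g2(x): return (x - 2) / 2.0

def A(x, lams, R):
    total = F(x)
    for lam, g in ((lams[0], g1(x)), (lams[1], g2(x))):
        psi = max(g, -lam / (2 * R))
        total += lam * psi + R * psi * psi
    return total

def golden_min(f, lo, hi, tol=1e-10):
    inv = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - inv * (b - a), a + inv * (b - a)
        if f(c) < f(d): b = d
        else:           a = c
    return (a + b) / 2

R, lams = 10.0, [0.0, 0.0]
for _ in range(30):
    x = golden_min(lambda t: A(t, lams, R), -3.0, 3.0)
    lams = [lam + 2 * R * max(g, -lam / (2 * R))
            for lam, g in zip(lams, (g1(x), g2(x)))]
print(x, lams)  # x → 1, λ1 → 0.5 (the Kuhn-Tucker multiplier), λ2 → 0
```

Unlike the plain exterior penalty, the multiplier update lets x converge to the boundary point x* = 1 without R growing without bound, which is exactly the feature claimed below.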
Augmented Lagrange
Multiplier Method
Features
Easy to Program
Approaches the Optimum from Either the Feasible
or Infeasible Region, Depending on Initial Values of
the Lagrange Multipliers
It is Not Necessary to Continually Increase R
Beyond a Reasonable Value
Considered the Best Among the Penalty Function
Methods
Summary of Penalty
Function Methods
1. Exterior Penalty
2. Interior Penalty, Reciprocal
3. Interior Penalty, Original Log Barrier
4. Interior Penalty, Polyak's Log Barrier
5. Interior Penalty, Polyak's Log-Sigmoid
6. Interior Penalty, Linear Extended
7. Interior Penalty, Quadratic Extended
8. Interior Penalty, Variable Extended
9. Augmented Lagrange Multiplier Method
10. Duality Theory
Pseudo Objective
  Φ(X, r_p) = F(X) + r_p Σ_{j=1}^{M} r_j^p Max[0, g_j(X)]²
The Individual Penalties r_j^p are Proprietary, but Similar to Lagrange Multipliers
We Need the Gradient of Φ:
  ∇Φ(X, r_p) = ∇F(X) + 2 r_p Σ_{j=1}^{M} r_j^p Max[0, g_j(X)] ∇g_j(X)
Key Points
Method      100               1,000                10,000
DOT-MMFD    53,000            5,000,000            5×10⁸
DOT-SLP     113,000           11,000,000           11×10⁸
DOT-SQP     119,000           11,500,000           12×10⁸
BIGDOT      1,600 to 11,000   16,000 to 1,000,000  160,000 to 10×10⁷
Discrete Variables
Add a Sinusoidal Penalty that Vanishes at the Allowed Discrete Values:
  P(X) = Σ_{i=1}^{N} 0.5 [ 1 − sin( 2π (X_i − 0.25(X_i^L + 3X_i^U)) / (X_i^U − X_i^L) ) ]
where X_i^L and X_i^U are the Discrete Values Bracketing X_i
The Gradient ∂P/∂X_i is Available Analytically, Along with ∂F/∂X_i
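A quick check of this sinusoidal penalty as reconstructed above: it should vanish at the two bracketing discrete values and peak midway between them. The bracket values 3.0 and 4.0 are an assumption for illustration:

```python
# One term of the sinusoidal discrete-variable penalty:
# P_i = 0.5·(1 - sin(2π (x - 0.25(xl + 3·xu)) / (xu - xl))),
# where xl, xu are the adjacent allowed discrete values bracketing x.
import math

def discrete_penalty(x, xl, xu):
    arg = 2 * math.pi * (x - 0.25 * (xl + 3 * xu)) / (xu - xl)
    return 0.5 * (1 - math.sin(arg))

print(round(discrete_penalty(3.0, 3.0, 4.0), 12))  # → 0.0 at a discrete value
print(round(discrete_penalty(4.0, 3.0, 4.0), 12))  # → 0.0 at the next value
print(round(discrete_penalty(3.5, 3.0, 4.0), 12))  # → 1.0 at the midpoint
```

Because the penalty is smooth, its gradient can be added to ∇F and the combined function minimized with the same gradient-based machinery as the continuous problem.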
Cantilevered Beam
                      10,000        50,000         100,000        250,000
CONTINUOUS OPTIMUM    53,744        53,744         53,720         53,755
                      (233/43)      (243/46)       (209/38)       (262/49)
                      [9,995/12]    [49,979/46]    [99,927/150]   [249,919/211]
DISCRETE OPTIMUM      54,864        54,864         54,848         54,887
                      (80/14)       (92/38)        (96/25)        (143/24)
Example
[Figure: DESIGN REGION with applied LOAD, P.]
Example
Method    Optimum    Time     Time in the Approximate Optimization
DOT       955.07     2,870    2,643
BIGDOT    953.98     345      147