
DESIGN OPTIMIZATION

CONSTRAINED MINIMIZATION II
Functions of One Variable
Functions of N Variables
Genetic Search
Penalty Function Methods

Ranjith Dissanayake
Structures Laboratory
Dept. of Civil Engineering
Faculty of Engineering
University of Peradeniya

CONSTRAINED MINIMIZATION
Find the Set of Design Variables that will
Minimize $F(X)$  (Objective Function)
Subject to;
$g_j(X) \le 0, \quad j = 1, M$  (Inequality Constraints)
$h_k(X) = 0, \quad k = 1, L$  (Equality Constraints)
$X_i^L \le X_i \le X_i^U, \quad i = 1, N$  (Side Constraints)

Kuhn-Tucker Conditions
$X^*$ is Feasible
$\lambda_j \, g_j(X^*) = 0, \quad j = 1, M$
$\nabla F(X^*) + \sum_{j=1}^{M} \lambda_j \nabla g_j(X^*) + \sum_{k=M+1}^{M+L} \lambda_k \nabla h_k(X^*) = 0$
$\lambda_j \ge 0, \quad j = 1, M$
$\lambda_k$ Unrestricted in Sign, $k = M+1, M+L$
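A minimal sketch (mine, not from the slides) of checking these conditions numerically at a candidate point, assuming finite-difference gradients and user-supplied callables f and gs:

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Forward-difference gradient (assumes f is smooth near x)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy(); xp[i] += h
        g[i] = (f(xp) - f(x)) / h
    return g

def kkt_check(f, gs, x, tol=1e-4):
    """Check the Kuhn-Tucker conditions at x for constraints g_j(x) <= 0.
    Returns (stationarity residual, multipliers, signs_ok) for the active set."""
    active = [g for g in gs if abs(g(x)) < tol]          # lambda_j g_j = 0
    gf = grad(f, x)
    if not active:
        return np.linalg.norm(gf), np.array([]), True
    A = np.column_stack([grad(g, x) for g in active])
    lam, *_ = np.linalg.lstsq(A, -gf, rcond=None)        # grad F + sum lam_j grad g_j = 0
    signs_ok = bool(np.all(lam >= -tol))                 # lambda_j >= 0 required
    return np.linalg.norm(gf + A @ lam), lam, signs_ok
```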

EXAMPLE
A Simple Cantilevered Beam
P = 2250 N, L = 500 cm
[Figure: cantilevered beam with tip load P and rectangular cross section of width B and height H.]

Problem Statement
Find B and H to Minimize $V = BHL$
Subject to;
$\sigma = \dfrac{Mc}{I} \le 700$
$\delta = \dfrac{PL^3}{3EI} \le 2.54$
$\dfrac{H}{B} \le 12$
$B \ge 1.0, \quad 20.0 \le H \le 50$

Design Space
[Figure: contours of V = 5,000 to 20,000 in the (B, H) plane, with constraint boundaries H/B = 12, sigma = 700, and H = 50; the optimum lies where the stress boundary meets H = 50.]
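For illustration, a sketch of this two-variable problem solved directly with SciPy (not part of the original slides). With $I = BH^3/12$, $c = H/2$ and $M = PL$, the constraints become $6PL/(BH^2) \le 700$ and $4PL^3/(EBH^3) \le 2.54$; the modulus E = 2.1e7 N/cm² is my assumption:

```python
import numpy as np
from scipy.optimize import minimize

P, L, E = 2250.0, 500.0, 2.1e7   # load [N], length [cm]; E is an assumed steel modulus [N/cm^2]

def volume(x):
    B, H = x
    return B * H * L

cons = [
    {"type": "ineq", "fun": lambda x: 700.0 - 6*P*L/(x[0]*x[1]**2)},            # bending stress <= 700
    {"type": "ineq", "fun": lambda x: 2.54 - 4*P*L**3/(E*x[0]*x[1]**3)},        # tip deflection <= 2.54 cm
    {"type": "ineq", "fun": lambda x: 12.0 - x[1]/x[0]},                        # H/B <= 12
]
res = minimize(volume, x0=[5.0, 40.0], method="SLSQP",
               bounds=[(1.0, None), (20.0, 50.0)], constraints=cons)
print(res.x, res.fun)   # optimum B, H and the minimum volume
```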

The One-Dimensional Search
Minimize $F(X + \alpha S)$
Subject to;
$g_j(X + \alpha S) \le 0, \quad j = 1, M$
$h_k(X + \alpha S) = 0, \quad k = 1, L$
$X_i^L \le X_i + \alpha S_i \le X_i^U, \quad i = 1, N$

The One-Dimensional Search
Polynomial Search or a Modified Version of the Golden Section Method is Useful Here
If $F(X^0)$ is Infeasible (One or More Constraints are Violated), the Search is Directed Toward the Feasible Region if Possible, with Minimal Increase in the Objective
If no Feasible Solution can be Found, it is Desirable to Find the Point where the Constraint Violations are Minimized

Polynomial Interpolation
Constrained Minimum
We Approximate all Constraints as Polynomials in $\alpha$, Along with the Objective
For the Objective, We Seek the $\alpha$ that Minimizes $F(\alpha)$
For Constraints, we Seek the $\alpha$ where $g_j(\alpha) = 0$
For all Possible $\alpha$'s Calculated This Way, We Choose the Smallest One (a code sketch follows the list of cases on the next slide)

Polynomial Interpolation
Several Possible Cases
Initially Feasible, No Active Constraints
Initially Feasible, One or More Active Constraints
Initially Infeasible, Active and Violated Constraints
Initially Infeasible, No Feasible Solution
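A minimal sketch of the interpolation idea (my construction, not code from the slides): fit a quadratic through three trial steps, then take the smallest positive $\alpha$ among the objective's minimizer and each constraint's zero crossing.

```python
import numpy as np

def quad_fit(alphas, values):
    """Coefficients (c0, c1, c2) of c0 + c1*a + c2*a^2 through three points."""
    return np.polyfit(alphas, values, 2)[::-1]

def candidate_alpha(alphas, f_vals, g_vals_per_constraint):
    """Smallest positive alpha among the F minimizer and all g_j = 0 crossings."""
    cands = []
    c0, c1, c2 = quad_fit(alphas, f_vals)
    if c2 > 0:                                 # quadratic in F has a minimum
        cands.append(-c1 / (2 * c2))
    for gv in g_vals_per_constraint:
        c0, c1, c2 = quad_fit(alphas, gv)
        for r in np.roots([c2, c1, c0]):       # alpha where g_j(alpha) = 0
            if np.isreal(r):
                cands.append(float(np.real(r)))
    cands = [a for a in cands if a > 0]
    return min(cands) if cands else None
```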

Polynomial Interpolation
Initially Feasible, No Active Constraints
[Figure: F, g1 and g2 versus $\alpha$, showing the unconstrained minimum of F and the constrained minimum where the first constraint boundary is met.]

Polynomial Interpolation
Initially Feasible, Active Constraints
[Figure: F, g1, g2 and g3 versus $\alpha$ with one constraint active at $\alpha = 0$; the constrained minimum occurs before the unconstrained minimum of F.]

Polynomial Interpolation
Initially Infeasible, Active and Violated Constraints
[Figure: F, g1, g2 and g3 versus $\alpha$ starting from a violated constraint; the constrained minimum lies where the violated constraint returns to zero, whether F is increasing or decreasing along S.]

Polynomial Interpolation
Initially Infeasible, No Feasible Solution
[Figure: g1 and g2 versus $\alpha$; g1 decreases toward zero but g2 becomes violated first, so the minimum total violation lies between the two crossings.]
Before We Overcome the Constraint Violation, Another Constraint Becomes Violated
Therefore, Interpolate for the Minimum Violation

Key Issues
Estimating an Initial $\alpha$
If Too Large, the Quality of the Polynomial Fit will be Poor
If Too Small, Many Steps will be Needed to Bracket the Minimum

Use as Much Information as Possible
If Bounds are Found with the First Step, then Use a Linear Fit to the Constraints to Estimate the $\alpha$ that Overcomes Violations or Hits a New Constraint, and Use a Quadratic Fit on the Objective to Estimate its Minimum
Increase the Polynomial Order as Information is Accumulated

Genetic Search
Basically a Random Search Method
Uses Function Values Only
Treats Variables as Discrete

Basic Concept
Represent Numbers as a Binary String
$X_i$ = 1011101001 = $1 \cdot 2^0 + 0 \cdot 2^1 + 0 \cdot 2^2 + 1 \cdot 2^3 + 0 \cdot 2^4 + 1 \cdot 2^5 + 1 \cdot 2^6 + 1 \cdot 2^7 + 0 \cdot 2^8 + 1 \cdot 2^9$ = 745

NOTE: We could Create Variables with Two Decimal Places by Dividing by 100
$XR_i$ = Float($X_i$)/100 = 7.45
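A small sketch of this decoding (mine, not from the slides):

```python
def decode(bits: str, scale: float = 100.0) -> float:
    """Binary string -> integer -> real value with two decimal places."""
    return int(bits, 2) / scale

print(decode("1011101001"))   # 745 / 100 = 7.45
```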

Genetic Search
For a Candidate Design, the Objective Function (Fitness) is Defined using an Exterior Penalty Function
$\tilde{F}(X) = F(X) + R \sum_{j=1}^{M} \mathrm{Max}\left[0, g_j(X)\right]$

Consider a Two-Variable Design
$X$ = 1011101001, 0101100110
We can Think of the Design as One Long String of Zeros and Ones
X = 10111010010101100110

Genetic Search
Basic Operations
Reproduction
Bias the Offspring in Favor of the Most Fit Parents
Crossover
Allow Members of a Population to Exchange Characteristics
With a Typical Probability of $P_c$ = 0.6 - 0.8
Mutation
Randomly Switch Zeros and Ones
With a Typical Probability of $P_m$ = 0.01 - 0.02

Genetic Search
Algorithm
Create a Random Population
Calculate all Fitnesses, $\tilde{F}_i(X)$
Get Their Sum $\tilde{F}_{sum} = \sum \tilde{F}_i(X)$
Construct a Roulette Wheel, With Each String Occupying an Area on the Wheel in Proportion to the Ratio $\tilde{F}_i / \tilde{F}_{sum}$
Use a Random Number $0 \le r \le 1$ to Pick Pairs on the Wheel as Mating Pairs that will Reproduce

Genetic Search
Algorithm (continued; a code sketch follows the steps)
Perform Crossover
Use a Weighted Coin Toss to Decide Whether Crossover Occurs
If Crossover is Dictated, Pick an Integer Between 1 and the String Length to Establish the Crossover Location
Exchange Values in the String Between Parents
Perform Mutation on the Child
Use a Weighted Coin Toss to Decide Whether Mutation Occurs
If Mutation is Dictated, Pick an Integer Between 1 and the String Length to Establish the Mutation Location
Exchange 0 and 1
Repeat the Process to Convergence
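A compact sketch of one generation of this algorithm (my illustration). The fitness callable and the probabilities Pc, Pm are placeholders; since the slides minimize $\tilde{F}$, a maximizing fitness such as $\tilde{F}_{max} - \tilde{F}$ is assumed:

```python
import random

def select(pop, fit):
    """Roulette-wheel selection: area proportional to fitness share.
    Assumes fitness is non-negative and to be maximized."""
    total = sum(fit)
    r, acc = random.uniform(0, total), 0.0
    for s, f in zip(pop, fit):
        acc += f
        if acc >= r:
            return s
    return pop[-1]

def crossover(a, b, pc=0.7):
    if random.random() < pc:
        cut = random.randint(1, len(a) - 1)   # crossover location
        return a[:cut] + b[cut:]
    return a

def mutate(s, pm=0.01):
    """Flip each bit with probability pm."""
    return "".join(('1' if c == '0' else '0') if random.random() < pm else c
                   for c in s)

def next_generation(pop, fitness):
    fit = [fitness(s) for s in pop]
    return [mutate(crossover(select(pop, fit), select(pop, fit)))
            for _ in pop]
```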

Genetic Search
Features
Uses Function Values Only
Naturally Handles Discrete Variables
Easy to Program
Requires a Very Large Number of Function Evaluations
Improved Probability of Finding a Global Optimum
No Method Can Guarantee a Global Optimum, Regardless of What Proponents Claim

Penalty Function Methods
Basic Concept
Create a Pseudo-Objective Function that will Penalize Constraint Violations
$\tilde{F}(X) = F(X) + P(X)$
Use Well-Established Unconstrained Minimization to Minimize $\tilde{F}(X)$
Common Methods
Exterior Penalty Function
Interior Penalty Function
Extended Interior Penalty Function
Augmented Lagrange Multiplier Method
Log-Sigmoid Method

Exterior Penalty Function
Penalty for Violated Constraints
$P(X) = R \sum_{j=1}^{M} \left\{\mathrm{Max}[0, g_j(X)]\right\}^2 + R \sum_{k=1}^{L} \left[h_k(X)\right]^2$

Algorithm
1. Start with a Small Value of R
2. Minimize $\tilde{F}(X)$
3. Increase R (Say by a Factor of 10)
4. If Converged, Exit. Else go to Step 2

Features
Easy to Program
Approaches the Optimum from the Infeasible Region
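A minimal sketch of this sequential loop (my construction), using SciPy's BFGS as the unconstrained minimizer; the convergence test on successive optima is an assumption:

```python
import numpy as np
from scipy.optimize import minimize

def exterior_penalty_sumt(f, gs, x0, R=1.0, factor=10.0, tol=1e-6, iters=20):
    """Minimize f subject to g_j(x) <= 0 via the exterior penalty method."""
    x = np.asarray(x0, float)
    prev = np.inf
    for _ in range(iters):
        pseudo = lambda x: f(x) + R * sum(max(0.0, g(x))**2 for g in gs)
        x = minimize(pseudo, x, method="BFGS").x      # step 2
        if abs(f(x) - prev) < tol:                     # step 4
            return x
        prev = f(x)
        R *= factor                                    # step 3
    return x

# Usage on the one-variable example that follows:
f = lambda x: (x[0]**2 + 2*x[0] + 8) / 16
gs = [lambda x: (1 - x[0]) / 2, lambda x: (x[0] - 2) / 2]
print(exterior_penalty_sumt(f, gs, [0.0]))   # approaches X = 1 from the infeasible side
```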

Example: One-Variable Function
Minimize
$F = \dfrac{X^2 + 2X + 8}{16}$
Subject to;
$g_1 = \dfrac{1 - X}{2} \le 0$
$g_2 = \dfrac{X - 2}{2} \le 0$
[Figure: F, g1 and g2 versus X, with the feasible region between the two constraint boundaries.]

Exterior Penalty
Function of One Variable
[Figure: pseudo-objective $\tilde{F}$ versus X for R = 1, 10 and 100; as R grows, the minimum of $\tilde{F}$ approaches the feasible region boundary from outside.]

Example: Two-Variable Function
Minimize
$F = X_1 + X_2$
Subject to;
$g_1 = 2 - X_1 - X_2 \le 0$
$g_2 = 8 - 6X_1 + X_1^2 - X_2 \le 0$
[Figure: design space in $(X_1, X_2)$ with the boundaries $g_1 = 0$ and $g_2 = 0$ and the contour F = 10.]

Pseudo-Objective for R = 0.05
[Figure: contours of the exterior penalty pseudo-objective in the $(X_1, X_2)$ plane for R = 0.05.]

Pseudo-Objective for R = 1.0
[Figure: contours of the exterior penalty pseudo-objective in the $(X_1, X_2)$ plane for R = 1.0; the larger R sharpens the penalty near the constraint boundaries.]

Interior Penalty Function
Penalty Function
$P(X) = R' \sum_{j=1}^{M} \dfrac{-1}{g_j(X)} + R \sum_{k=1}^{L} \left[h_k(X)\right]^2$

Algorithm
1. Start with a Large Value of R' and a Small Value of R
2. Minimize $\tilde{F}(X)$
3. Decrease R' and Increase R (Say by a Factor of 10)
4. If Converged, Exit. Else go to Step 2

Note: Always uses an Exterior Penalty for Equality Constraints
Hereafter, we will not Include Equality Constraints

Interior Penalty Function
Features
Difficult to Program due to the Discontinuity at Constraint Boundaries
The Penalty Approaches Infinity as Constraints Approach Zero
Polynomial Interpolation Must Guard Against Crossing Constraint Boundaries
With the Golden Section Method, Any Time a Constraint Becomes Violated, this Defines a New Upper Bound
Approaches the Optimum From the Feasible Region

Example: One-Variable Function
Minimize
$F = \dfrac{X^2 + 2X + 8}{16}$
Subject to;
$g_1 = \dfrac{1 - X}{2} \le 0$
$g_2 = \dfrac{X - 2}{2} \le 0$
[Figure: F, g1 and g2 versus X, with the feasible region between the two constraint boundaries.]

Interior Penalty
Function of One Variable
[Figure: pseudo-objective $\tilde{F}$ versus X for R' = 0.5, 0.1 and 0.01; as R' shrinks, the barrier narrows and the minimum of $\tilde{F}$ approaches the constrained optimum from inside the feasible region.]

Interior Penalty
Alternative Forms
Log Function
$P(X) = -R' \sum_{j=1}^{M} \log\left[-g_j(X)\right]$
Shifted Log Function
$P(X) = -R' \sum_{j=1}^{M} \log\left[1 - \dfrac{g_j(X)}{R'}\right]$
Theoretically, the Shifted Log Function is Considered Best
Numerically, This has Not Been Verified

Interior Penalty
Alternative Forms
Linear Extended Penalty
$P(X) = R' \sum_{j=1}^{M} \tilde{g}_j(X)$
where, with $\varepsilon < 0$ the transition value,
$\tilde{g}_j(X) = \dfrac{-1}{g_j(X)}$ if $g_j(X) \le \varepsilon$
$\tilde{g}_j(X) = -\dfrac{2\varepsilon - g_j(X)}{\varepsilon^2}$ if $g_j(X) > \varepsilon$
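A small sketch of this piecewise term (my illustration; eps is the negative transition value, and the two branches match in value and slope at g = eps):

```python
def g_tilde(g, eps=-0.1):
    """Linear extended penalty term; eps < 0 is the transition point."""
    if g <= eps:
        return -1.0 / g                  # ordinary reciprocal barrier
    return -(2.0 * eps - g) / eps**2     # linear extension through the boundary

def P(x, gs, Rp=0.1, eps=-0.1):
    """Extended penalty: defined even for slightly violated constraints."""
    return Rp * sum(g_tilde(g(x), eps) for g in gs)
```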

Interior Penalty
Algorithm
1. Set a Large Value of R' and a Small Value of R
2. Minimize $\tilde{F}(X) = F(X) + P(X)$
3. Decrease R' and Increase R (Say by a Factor of 10)
4. If Converged, Exit. Else go to Step 2

Example: One-Variable Function
Minimize
$F = \dfrac{X^2 + 2X + 8}{16}$
Subject to;
$g_1 = \dfrac{1 - X}{2} \le 0$
$g_2 = \dfrac{X - 2}{2} \le 0$
[Figure: F, g1 and g2 versus X, with the feasible region between the two constraint boundaries.]

Linear Extended Interior Penalty
[Figure: pseudo-objective $\tilde{F}$ versus X for R' = 0.5, 0.1 and 0.01; unlike the pure barrier, $\tilde{F}$ remains defined outside the feasible region.]

Linear Extended Penalty Function
Features
Easy to Program
Approaches the Optimum from the Feasible Region
No Good Rules for Choosing the Transition Point
Has the Best Features of the Exterior and Interior Penalty Function Methods

Comparison of Penalty Functions
[Figure: penalty term versus constraint value across the feasible/infeasible boundary, comparing the interior penalty, linear extended penalty, quadratic extended penalty, and variable penalty.]

Augmented Lagrange Multiplier Method
Minimize the Augmented Lagrangian
$A(X, \lambda, R) = F(X) + \sum_{j=1}^{M} \left[\lambda_j \psi_j + R \psi_j^2\right] + \sum_{k=1}^{L} \left[\lambda_{k+M} h_k(X) + R h_k(X)^2\right]$
where
$\psi_j = \mathrm{Max}\left[g_j(X), \dfrac{-\lambda_j}{2R}\right]$

Augmented Lagrange Multiplier Method
Algorithm (a code sketch follows the steps)
1. Start with a Small Value of R and all $\lambda_j = 0$
2. Minimize $A(X, \lambda, R)$
3. Update the Lagrange Multipliers
$\lambda_j = \lambda_j^{old} + 2R \, \mathrm{Max}\left[g_j(X), \dfrac{-\lambda_j^{old}}{2R}\right]$
$\lambda_{k+M} = \lambda_{k+M}^{old} + 2R \, h_k(X)$
4. Increase R (Say R = Min(10R, 1000))
5. If Converged, Exit. Else go to Step 2
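A sketch of this loop for inequality constraints only (my construction on top of SciPy's BFGS; the iteration cap and convergence tolerance are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def alm(f, gs, x0, R=1.0, iters=15, tol=1e-6):
    """Augmented Lagrange multiplier method for g_j(x) <= 0."""
    x = np.asarray(x0, float)
    lam = np.zeros(len(gs))                              # step 1
    for _ in range(iters):
        def A(x, lam=lam.copy(), R=R):                   # augmented Lagrangian
            psi = [max(g(x), -l / (2*R)) for g, l in zip(gs, lam)]
            return f(x) + sum(l*p + R*p*p for l, p in zip(lam, psi))
        x_new = minimize(A, x, method="BFGS").x          # step 2
        lam = lam + 2*R*np.array([max(g(x_new), -l/(2*R))  # step 3
                                  for g, l in zip(gs, lam)])
        R = min(10*R, 1000.0)                            # step 4
        if np.linalg.norm(x_new - x) < tol:              # step 5
            return x_new, lam
        x = x_new
    return x, lam
```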

Augmented Lagrange Multiplier Method
Features
Easy to Program
Approaches the Optimum from Either the Feasible or Infeasible Region, Depending on Initial Values of the Lagrange Multipliers
It is Not Necessary to Continually Increase R Beyond a Reasonable Value
Considered the Best Among the Penalty Function Methods

Summary of Penalty Function Methods
Normally Useful Only if Function Values are Very Cheap
The Exterior Method has the Best Chance of Finding the True Optimum if Relative Minima Exist
Today, the Shifted Log Function Method is Considered Promising by Theoreticians
Very Popular in the 1960s. Today, These Methods are Receiving Considerable Attention for Very Large Problems

New Interest in SUMT
SLP, SQP, MFD, GRG and Similar Methods (to be Discussed Later) all Require Solution of a Sub-Problem
The Sub-Problem Dimension is Typically on the Order of the Number of Critical or Potentially Critical Constraints
This can be Very Large and Time Consuming
For Very Large Problems with Many Active Constraints, SUMT May be More Efficient
Especially if Approximations are Used, as in Structural Optimization

Very Large Scale Optimization
Recent Method Developed at VR&D
Methods used by DOT
Modified Method of Feasible Directions (MMFD)
Sequential Linear Programming (SLP)
Sequential Quadratic Programming (SQP)
Key Difficulties with Increased Problem Size
Exponential Growth in Memory Requirements
Out-of-Core Operations are Inefficient and Complicated
Direction Finding Requires Solving a Large Sub-Problem
CPU Time Grows Exponentially

SUMT Methods Considered
Ten Methods Considered
1. Exterior Penalty
2. Interior Penalty, Reciprocal
3. Interior Penalty, Original Log Barrier
4. Interior Penalty, Polyak's Log Barrier
5. Interior Penalty, Polyak's Log-Sigmoid
6. Interior Penalty, Linear Extended
7. Interior Penalty, Quadratic Extended
8. Interior Penalty, Variable Extended
9. Augmented Lagrange Multiplier Method
10. Duality Theory

Exterior Penalty Function Chosen
Pseudo-Objective
$\Phi(X, r_p) = F(X) + r_p \sum_{j=1}^{M} r_j^p \left\{\mathrm{Max}[0, g_j(X)]\right\}^2$
The Individual Penalties $r_j^p$ are Proprietary, but Similar to Lagrange Multipliers
We Need the Gradient of $\Phi$
$\nabla \Phi(X, r_p) = \nabla F(X) + 2 r_p \sum_{j=1}^{M} r_j^p \, \mathrm{Max}[0, g_j(X)] \, \nabla g_j(X)$

Key Points
We only need Gradients of the Objective and the Violated Constraints
$\nabla \Phi$ is a Summation of Vectors
Equality Constraints and Redundant Constraints are Easily Handled
Use Fletcher-Reeves to Solve the Unconstrained Sub-Problem (a sketch follows)
We can Provide Gradients one at a time
Requires Very Little Memory
The Direction-Finding Task is Very Simple and Fast
BIGDOT From VR&D Uses This Method
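A minimal sketch of Fletcher-Reeves conjugate gradient with a simple backtracking line search (my construction; BIGDOT's actual implementation is proprietary). Note that only a few vectors of length N are stored:

```python
import numpy as np

def fletcher_reeves(f, grad, x0, iters=200, tol=1e-8):
    """Conjugate-gradient minimization storing only vectors."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g                                     # first direction: steepest descent
    for _ in range(iters):
        a = 1.0
        for _ in range(40):                    # bounded backtracking (Armijo test)
            if f(x + a*d) <= f(x) + 1e-4*a*(g @ d):
                break
            a *= 0.5
        x_new = x + a*d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = (g_new @ g_new) / (g @ g)       # Fletcher-Reeves update
        d = -g_new + beta*d
        x, g = x_new, g_new
    return x
```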

Storage Requirements in Words
Number of Constraints, M, Equals the Number of Design Variables, N

                     Number of Design Variables
Method       N = 100            N = 1,000              N = 10,000
DOT-MMFD     53,000             5,000,000              5x10^8
DOT-SLP      113,000            11,000,000             11x10^8
DOT-SQP      119,000            11,500,000             12x10^8
BIGDOT       1,600 to 11,000    16,000 to 1,000,000    160,000 to 10x10^7

For Realistic Problems, Often M >> N
The New Method Needs no More Storage

Discrete Variables
First Solve the Continuous Problem
Then add the Following Penalty Terms to Drive the Variables to a Nearby Discrete Value
$P(X) = R \sum_{i=1}^{N} 0.5\left\{1 - \sin\left[2\pi \, \dfrac{X_i - 0.25\left(X_i^L + 3X_i^U\right)}{X_i^U - X_i^L}\right]\right\}$
with $X_i^L$ and $X_i^U$ taken as the discrete values bracketing $X_i$, so the penalty vanishes at each discrete value and peaks midway between them
Solve this Including all Original Constraints
If this does not Drive all Discrete Variables to a Good Value, for each $X_i$, find
$\mathrm{Max}\left|\dfrac{\partial P / \partial X_i}{\partial F / \partial X_i}\right|$
Move to the next Discrete Value that is Feasible with Minimum Increase in the Objective
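A sketch of this sine penalty for one variable (mine; x_lo and x_hi are the discrete values bracketing x, per the formula above):

```python
import math

def discrete_penalty(x, x_lo, x_hi, R=1.0):
    """Zero when x equals either bracketing discrete value,
    maximal midway between them."""
    q = (x - 0.25 * (x_lo + 3.0 * x_hi)) / (x_hi - x_lo)
    return R * 0.5 * (1.0 - math.sin(2.0 * math.pi * q))

print(discrete_penalty(1.0, 1.0, 2.0))   # ~0 at a discrete value
print(discrete_penalty(1.5, 1.0, 2.0))   # maximal (R) at the midpoint
```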

Cantilevered Beam

                       Number of Design Variables, NDV
                       10,000         50,000         100,000        250,000
CONTINUOUS OPTIMUM     53,744         53,744         53,720         53,755
                       (233/43)       (243/46)       (209/38)       (262/49)
                       [9,995/12]     [49,979/46]    [99,927/150]   [249,919/211]
DISCRETE OPTIMUM       54,864         54,864         54,848         54,887
                       (80/14)        (92/38)        (96/25)        (143/24)

Example
Topology Optimization with 35,000 Variables
Using GENESIS with BIGDOT
[Figure: topology optimization design region, with a support at one edge and applied load P.]

Example
Aircraft Wing; Solved by GENESIS
50,000 DOF Model with 8 Load Cases (Very Small)
N = 1,251   M = 84,000
Computational Cost with Many Active Constraints

Method     Optimum    CPU Time per GENESIS    Time in the Approximate
                      Design Cycle            Optimization
DOT        955.07     2,870                   2,643
BIGDOT     953.98     345                     147

Approximately 95% less time in the Optimizer

Key Features of This Method
Requires very little Computer Memory
Does not solve a Complex and Time-Consuming Direction-Finding Sub-Problem
Handles Redundant Constraints (Same Value and Gradients) Easily
Deals with Equality Constraints with no loss in Efficiency or Reliability
Efficiently gets a Good Discrete Solution
Scales Very Well. Efficiency is about the same Regardless of Problem Size
Easy to Parallelize Function and Gradient Evaluations
