Chiriac Igor
Summary
The Motivation for Differential Evolution
Introduction to Parameter Optimization
Local Versus Global Optimization
Overview
Optimization - the attempt to maximize a system's desirable properties while simultaneously minimizing its undesirable characteristics

Function of the tuning knob:

f(x) = noise power / signal power
Parameter quantization
Parameter dependence
Dimensionality
Modality
Time dependence
Noise
Constraints
Differentiability
DIFFERENTIAL EVOLUTION
Classifying optimizers

Single-point
  Derivative-based: steepest descent, conjugate gradient, quasi-Newton
  Derivative-free: random walk, Hooke-Jeeves

Multi-point
  Multi-start and clustering techniques
  Nelder-Mead
  Evolutionary algorithms
  Differential evolution
Single-point, derivative-based optimization
Classical derivative-based optimization can be effective as long as the objective function fulfills two requirements:
1. The objective function must be twice differentiable.
2. The objective function must be uni-modal.
Example
A simple example of a differentiable and uni-modal objective function is:

f(x1, x2) = 10^(x1^2 + 3*x2^2)
Fig2: The method of steepest descent first computes the negative gradient,
then takes a step in the direction indicated.
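A minimal sketch of steepest descent on this example function, f(x1, x2) = 10^(x1^2 + 3*x2^2); the step size and iteration count are assumptions for illustration, not values from the slides:

```python
import math

def f(x1, x2):
    return 10.0 ** (x1 ** 2 + 3.0 * x2 ** 2)

def grad(x1, x2):
    # Chain rule: d/dx 10^g = ln(10) * 10^g * dg/dx
    c = math.log(10.0) * f(x1, x2)
    return 2.0 * c * x1, 6.0 * c * x2

def steepest_descent(x1, x2, step=1e-3, iters=20000):
    for _ in range(iters):
        g1, g2 = grad(x1, x2)
        x1 -= step * g1  # step in the direction of the negative gradient
        x2 -= step * g2
    return x1, x2

x1, x2 = steepest_descent(0.5, 0.3)  # converges toward the minimum at the origin
```

Because the function is uni-modal and twice differentiable, the fixed-step descent reliably reaches the global minimum here; on multimodal functions the same procedure would stall in whichever basin it starts in.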
Simulated Annealing
modifying the greedy criterion to accept some uphill moves while continuing to accept all downhill moves
one drawback is that special effort may be required to find an annealing schedule that lowers T at the right rate
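A sketch of the modified acceptance criterion, paired with a hypothetical geometric cooling schedule to illustrate the rate-tuning problem (function names and the alpha value are assumptions):

```python
import math
import random

def accept(delta_f, temperature):
    # Metropolis-style criterion: downhill moves (delta_f <= 0) are always
    # accepted; uphill moves are accepted with probability exp(-delta_f / T).
    if delta_f <= 0:
        return True
    return random.random() < math.exp(-delta_f / temperature)

def cool(temperature, alpha=0.95):
    # Geometric cooling, T_{k+1} = alpha * T_k. Picking alpha is exactly the
    # drawback noted above: cool too fast and the search freezes in a local
    # minimum; cool too slowly and computation is wasted.
    return alpha * temperature
```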
Nelder-Mead
tries to solve the step size problem by allowing the step size to expand or contract as needed
to obtain a new trial point, x_r, the worst point, x_D, is reflected through the opposite face of the polyhedron using a weighting factor, F1:

x_r = x_D + F1 * (x_m - x_D)

the vector x_m is the centroid of the face opposite x_D:

x_m = (1/D) * sum(x_i, i = 0..D-1)
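An illustrative sketch of the reflection step (helper names `centroid` and `reflect` are my own):

```python
def centroid(points):
    # Mean of the D vertices forming the face opposite the worst vertex.
    dim = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(dim))

def reflect(simplex, F1=2.0):
    # simplex: list of D+1 vertices, ordered so the worst vertex x_D is last.
    # Trial point: x_r = x_D + F1 * (x_m - x_D); with F1 = 2 this reflects
    # x_D exactly through the centroid x_m of the opposite face.
    x_D = simplex[-1]
    x_m = centroid(simplex[:-1])
    return tuple(xd + F1 * (xm - xd) for xd, xm in zip(x_D, x_m))
```

With F1 greater or smaller than 2, the same formula expands or contracts the step, which is how the method adapts the step size.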
Differential Evolution
DE is a population-based optimizer
DE INITIALIZATION
DE Mutation
Fig8: Mutation
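A sketch of the classic DE/rand/1 mutation (the function name and F value are assumptions):

```python
import random

def mutate(population, F=0.8):
    # DE/rand/1: choose three mutually distinct vectors x_r0, x_r1, x_r2
    # and build the mutant v = x_r0 + F * (x_r1 - x_r2).
    r0, r1, r2 = random.sample(range(len(population)), 3)
    base, a, b = population[r0], population[r1], population[r2]
    return [x + F * (y - z) for x, y, z in zip(base, a, b)]
```

The scaled difference vector F * (x_r1 - x_r2) is what gives DE its self-adapting step size: differences shrink as the population converges.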
Crossover
crossover builds trial vectors out of parameter values that have been copied from two different vectors
DE crosses each vector with a mutant vector
the crossover probability, Cr ∈ [0,1], is a user-defined value
Fig10: Crossover
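A sketch of binomial (uniform) crossover as described above; the `j_rand` safeguard is the standard way to guarantee the trial differs from the target in at least one parameter:

```python
import random

def binomial_crossover(target, mutant, Cr=0.9):
    # Each trial parameter is taken from the mutant with probability Cr,
    # otherwise inherited from the target. One randomly chosen index j_rand
    # always comes from the mutant.
    d = len(target)
    j_rand = random.randrange(d)
    return [mutant[j] if (random.random() < Cr or j == j_rand) else target[j]
            for j in range(d)]
```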
DE Selection
Fig.9: Selection
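A sketch of DE's one-to-one greedy selection for minimization (function name is mine):

```python
def select(target, trial, f):
    # The trial vector replaces the target in the next generation only if
    # its objective value is at least as good; otherwise the target survives.
    return trial if f(trial) <= f(target) else target
```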
Visualizing DE
Parameter Representation
DE encodes all parameters as floating-point numbers.
Advantages over the traditional GA bit-flipping approach:
ease of use
efficient memory utilization
lower computational complexity - scales better on large problems
lower computational effort - faster convergence
greater freedom in designing a mutation distribution
the standard GA coding scheme imposes multimodality on even uni-modal objective functions
Floating-Point
unlike the standard GA representation, the floating-point format retains only a limited number of significant digits
Initialization
Initial Distribution
Uniform distributions - preferred because they best reflect the lack of knowledge about the optimum's location
Gaussian distributions - may prove faster if the optimum's location is well known, although they may increase the probability that the population will converge prematurely
Uniform Distribution
distributing initial points with random uniformity is not mandatory, but experience has shown rand_j(0,1) to be very effective in this regard
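A sketch of uniform random initialization within per-parameter bounds, drawing each parameter with rand_j(0,1) as above (function name is mine):

```python
import random

def init_population(Np, bounds):
    # bounds: one (low, high) pair per parameter. Parameter j of each vector
    # is drawn as low_j + rand_j(0,1) * (high_j - low_j), so every vector
    # starts inside the search box.
    return [[lo + random.random() * (hi - lo) for lo, hi in bounds]
            for _ in range(Np)]
```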
Fig14: Two hundred points distributed with random uniformity (left) and according to a two-dimensional Halton point set (right)
Gaussian Distribution
r1 = r2: No Mutation
when indices are chosen without restrictions, r1 will equal r2 on
average once per generation, i.e., with probability 1/Np
the probability that all three indices will be equal is (1/Np)^2
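One common remedy, sketched here, is to sample the indices without replacement so they are always mutually distinct (function name is mine):

```python
import random

def pick_distinct(Np, base):
    # Sample r1, r2, r3 mutually distinct and different from the base index,
    # ruling out the degenerate r1 == r2 case in which the difference vector
    # vanishes and no mutation occurs.
    return random.sample([i for i in range(Np) if i != base], 3)
```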
Differential Mutation
DE uses a uniform probability distribution function to randomly sample vector differences:
in a population of Np distinct vectors, there will be Np(Np - 1) non-zero vector differences
DE Selection
Termination Criteria
Benchmarking Differential Evolution
QUESTIONS?