P. B. de Moura Oliveira
I. INTRODUCTION
6th International PhD Workshop on Systems and Control, October 4-8, 2005 Izola, Slovenia
trial solution vector $\vec{x}(t+1)$. This can be generated from the current vector by using equation (3), with $\vec{\Delta}$ representing an incremental vector (4).

$f : \mathbb{R}^n \to \mathbb{R}$, $n > 0$  (1)

$\vec{x}(t) = (x_1(t), x_2(t), \ldots, x_n(t))$  (2)

$\vec{x}(t+1) = \vec{x}(t) + \vec{\Delta}(t)$  (3)

$\vec{\Delta}(t) = (\Delta_1(t), \Delta_2(t), \ldots, \Delta_n(t))$  (4)
A. Simulated annealing
The basic simulated annealing algorithm proposed by
Metropolis et al. [9] uses a temperature variable, T, that is
decreased through the optimization process following a
specified schedule or by using a function. Considering
that a new solution is generated in the neighborhood of the
current solution value, the difference between the current
and new solutions is represented by E (5), known as
energy gradient. In the context of minimization problems,
the replacement of the current solution by the new solution
is performed using a condition defined by (6) based on a
Boltzmann probability threshold. Figure 1 illustrates the
basic simulated annealing algorithm for minimization
problems.
$\Delta E = f(\vec{x}(t+1)) - f(\vec{x}(t))$  (5)

$\vec{x}(t) = \vec{x}(t+1)$ if $\Delta E < 0$;
$\vec{x}(t) = \vec{x}(t+1)$ if $\dfrac{1}{1 + \exp(\Delta E / T)} > \mathrm{rand}[0,1]$ otherwise  (6)
t = 0
T = T_initial
initialize x(t) randomly
while !(termination criterion)
    while !(repetition criterion)
        generate a new trial solution x(t+1)
        ΔE = f(x(t+1)) - f(x(t))
        if ΔE < 0
            x(t) = x(t+1)
        else if 1/(1 + exp(ΔE/T)) > rand[0,1]
            x(t) = x(t+1)
        end
    end
    t = t + 1
    T = annealing(T, t)
end
Fig. 1. Basic simulated annealing algorithm.
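As a sketch, the scheme of Fig. 1 can be written in Python as follows; the test function, cooling rate, step size and iteration counts are illustrative assumptions, not the settings used in the experiments, and worse trials are accepted with the logistic threshold of equation (6).

```python
import math
import random

def simulated_annealing(f, x, t_initial=10.0, cooling=0.95,
                        iterations=200, repetitions=20, step=0.5):
    """Minimize f over a real vector x following Fig. 1."""
    T = t_initial
    for t in range(iterations):              # termination criterion
        for _ in range(repetitions):         # repetition criterion
            # trial solution in the neighbourhood of x, equations (3)-(4)
            trial = [xi + random.uniform(-step, step) for xi in x]
            dE = f(trial) - f(x)             # energy gradient, equation (5)
            if dE < 0:
                x = trial                    # always accept an improvement
            elif dE / T < 700 and \
                    1.0 / (1.0 + math.exp(dE / T)) > random.random():
                x = trial                    # occasionally accept a worsening
                # (the dE/T < 700 guard avoids math.exp overflow when T is tiny)
        T *= cooling                         # annealing schedule
    return x

if __name__ == "__main__":
    random.seed(1)
    sphere = lambda v: sum(xi * xi for xi in v)
    best = simulated_annealing(sphere, [5.0, -5.0])
    print(best, sphere(best))
```

Because the temperature decays geometrically, the late iterations behave almost greedily, which is what drives the final convergence.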
B. Genetic Algorithm
The genetic algorithm (GA) is a randomly based
technique to search for the best problem solutions, and was

$\vec{p}(t) = (p_1(t), p_2(t), \ldots, p_j(t))$, $1 \le j \le n = d \cdot res$  (8)
t=0
initialize probability vector p = (0.5, 0.5, ..., 0.5)
while(!(termination criterion))
t=t+1
generate new population X(t) using the probability vector
evaluate X(t)
update the probability vector
adjust the probability vector
end
Fig. 3: Basic PBIL algorithm.
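A minimal sketch of the loop of Fig. 3, assuming the common PBIL rules: each probability is moved toward the best sampled individual with a learning rate LR, then relaxed toward 0.5 with a forgetting factor FF; the paper's exact equations (10)-(13) may differ in detail, and all parameter values here are illustrative.

```python
import random

def pbil(fitness, n_bits, pop_size=20, generations=60, LR=0.1, FF=0.01):
    """Population based incremental learning over binary strings,
    maximizing `fitness`."""
    p = [0.5] * n_bits                           # initialize probability vector
    for _ in range(generations):
        # generate new population X(t) using the probability vector
        pop = [[1 if random.random() < pj else 0 for pj in p]
               for _ in range(pop_size)]
        best = max(pop, key=fitness)             # evaluate X(t)
        # update the probability vector toward the best individual
        p = [pj + LR * (bj - pj) for pj, bj in zip(p, best)]
        # adjust the probability vector back toward 0.5
        p = [pj - FF * (pj - 0.5) for pj in p]
    return p

if __name__ == "__main__":
    random.seed(2)
    p = pbil(sum, n_bits=16)                     # "count of ones" fitness
    print([round(pj, 2) for pj in p])
```

With a nonzero FF the probabilities settle slightly below 1 even on a trivial problem, which preserves a small amount of exploration.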
$1 \le j \le n$  (10)

$1 \le j \le n$  (11)

$p_j(t+1) = p_j(t) - FF\,(p_j(t) - 0.5)$, $1 \le j \le n$  (12)

$p_j(t+1) = p_j(t) - FF\,(p_j(t) - 0.5)$, $1 \le j \le n$  (13)
$x_{id}(t+1) = x_{id}(t) + \Delta_{id}(t+1)$, $1 \le i \le m$, $1 \le d \le n$  (15)

$\Delta_{id}(t+1) = F\,(x_{r_1 d}(t) - x_{r_2 d}(t))$, $1 \le i \le m$, $1 \le d \le n$  (16)
with $r_1 \in [1, m]$, $r_1 \ne i$; $r_2 \in [1, m]$, $r_2 \ne i$, $r_2 \ne r_1$; $r_3 \in [1, m]$, $r_3 \ne i$, $r_3 \ne r_1$, $r_3 \ne r_2$

$x_{v_i d}(t+1) = x_{r_3 d}(t) + \Delta_{id}(t+1)$, $1 \le i \le m$, $1 \le d \le n$  (17)

$\vec{x}_{c_i}(t+1) = \mathrm{crossover}(\vec{x}_i(t), \vec{x}_{v_i}(t+1))$, $1 \le i \le m$  (18)

$\vec{x}_i(t+1) = \vec{x}_{c_i}(t+1)$ if $f(\vec{x}_{c_i}(t+1)) \le f(\vec{x}_i(t))$, $1 \le i \le m$  (19)
        generate the incremental vector Δ_i(t+1)
        generate new solution vector x_vi(t+1)
        x_ci(t+1) = crossover(x_i(t), x_vi(t+1))
        if f(x_ci(t+1)) <= f(x_i(t)) then x_i(t+1) = x_ci(t+1)
        i = i + 1
    end
end
Fig. 4: Basic differential evolution algorithm.
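The DE scheme of Fig. 4 can be sketched as follows, using DE/rand/1 with binomial crossover as a common reading of equations (15)-(19); the values of F, CR and the population size are assumptions for illustration only.

```python
import random

def differential_evolution(f, bounds, m=20, F=0.5, CR=0.9, generations=100):
    """Minimize f over the box given by bounds = [(lo, hi), ...]."""
    n = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(m)]
    for _ in range(generations):
        for i in range(m):
            # three mutually distinct indices, all different from i (eq. 16)
            r1, r2, r3 = random.sample([j for j in range(m) if j != i], 3)
            # mutant: base vector plus scaled difference, eqs. (16)-(17)
            v = [X[r3][d] + F * (X[r1][d] - X[r2][d]) for d in range(n)]
            # binomial crossover with a forced component, eq. (18)
            jr = random.randrange(n)
            c = [v[d] if (random.random() < CR or d == jr) else X[i][d]
                 for d in range(n)]
            # greedy one-to-one selection, eq. (19)
            if f(c) <= f(X[i]):
                X[i] = c
    return min(X, key=f)

if __name__ == "__main__":
    random.seed(3)
    sphere = lambda v: sum(x * x for x in v)
    print(differential_evolution(sphere, [(-5, 5)] * 3))
```

The one-to-one selection of (19) means the population fitness never worsens, which is why DE tends to show low variance across runs.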
$\vec{\Delta}_i(t) = (\Delta_{i1}(t), \Delta_{i2}(t), \ldots, \Delta_{in}(t)) = \vec{V}_i(t) = (v_{i1}(t), v_{i2}(t), \ldots, v_{in}(t))$, $1 \le i \le m$  (22)
t=0
Initialize population X(t)
while(!(termination criterion))
Evaluate X(t)
t=t+1
while(i<=population size)
        generate the incremental vector Δ_i(t+1)
        generate new solution vector x_i(t+1)
i=i+1
end
replace X(t) by X(t+1)
end
Fig. 5: Basic particle swarm optimization algorithm.
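A sketch of Fig. 5 with the usual inertia-weight velocity update is given below; the values of w, c1 and c2, and the test function, are illustrative assumptions rather than the settings used in the experiments.

```python
import random

def pso(f, bounds, m=20, w=0.7, c1=1.5, c2=1.5, generations=100):
    """Minimize f over the box given by bounds = [(lo, hi), ...]."""
    n = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(m)]
    V = [[0.0] * n for _ in range(m)]            # incremental vectors, eq. (22)
    P = [x[:] for x in X]                        # personal best positions
    g = min(P, key=f)[:]                         # global best position
    for _ in range(generations):
        for i in range(m):
            for d in range(n):
                # inertia + cognitive pull to P[i] + social pull to g
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]               # new solution vector
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

if __name__ == "__main__":
    random.seed(4)
    sphere = lambda v: sum(x * x for x in v)
    print(pso(sphere, [(-5, 5)] * 2))
```

Setting w large makes the velocity persist and the search more global; shrinking w damps the particles onto their attractors, matching the remark on the inertia weight below.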
[Block diagram: cascade control loop with primary controller Gc1(s) acting on error e1 to give u1, secondary controller Gc2(s) acting on e2 to give u2, and processes Gp2(s) and Gp1(s) producing outputs y2 and y1.]
$u_{1,2}(t) = K_{p1,2}\left( e_{1,2}(t) + \frac{1}{T_{i1,2}} \int e_{1,2}(t)\,dt + T_{d1,2}\,\frac{d e_{1,2}(t)}{dt} \right)$  (26)
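A discrete-time sketch of the ideal PID law (26), using rectangular integration and a backward-difference derivative; the sample time and gains below are placeholders, not the tuned values from the tables.

```python
class PID:
    """Discrete PID controller in the ideal form of equation (26):
    u = Kp * (e + (1/Ti) * integral(e) + Td * de/dt)."""

    def __init__(self, Kp, Ti, Td, dt):
        self.Kp, self.Ti, self.Td, self.dt = Kp, Ti, Td, dt
        self.integral = 0.0       # running rectangular integral of the error
        self.prev_e = 0.0         # previous error, for the derivative term

    def update(self, e):
        self.integral += e * self.dt
        derivative = (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.Kp * (e + self.integral / self.Ti + self.Td * derivative)

if __name__ == "__main__":
    pid = PID(Kp=2.0, Ti=1.0, Td=0.0, dt=0.1)
    print(pid.update(1.0))        # first control move for a unit error
```

In a cascade configuration two such controllers run together: the primary PID's output becomes the set-point of the secondary loop.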
The value given to the inertia weight affects the type of
search: a large inertia weight directs the PSO toward a
global search, while a small inertia weight directs it toward
a local search. The PSO algorithm is illustrated in Figure 5.
$G_{p1,2}(s) = \frac{1}{s+3}\,e^{-s}$  (28)
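Reading (28) as a first-order lag 1/(s+3) in series with a unit transport delay e^{-s}, its unit-step response can be checked with a simple Euler integration; the sample time and horizon below are arbitrary choices for illustration.

```python
from collections import deque

def step_response(T=10.0, dt=0.001, delay=1.0):
    """Euler simulation of dy/dt = -3*y + u(t - delay), i.e. the
    plant of (28), driven by a unit step; returns the final output."""
    nd = round(delay / dt)
    u_hist = deque([0.0] * nd)       # delay line holding past inputs
    y = 0.0
    for _ in range(round(T / dt)):
        u_delayed = u_hist.popleft() # input applied `delay` seconds ago
        u_hist.append(1.0)           # unit step input
        y += dt * (-3.0 * y + u_delayed)
    return y

if __name__ == "__main__":
    print(round(step_response(), 4))
```

The static gain of 1/(s+3) is 1/3, so the simulated output should settle near 0.333, and stays at zero until the one-second delay has elapsed.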
Algorithm   Ti2     Td2     Standard Deviation
SA          0.309   2.570   524221.7
GA          0.285   0.318   831.836
PBIL        0.196   1.144   449.273
PSO         0.000   0.115   735.260
DE          0.000   0.000   330.344
[Figure: output responses y(t) of the cascaded system versus time for SA, GA, PBIL, PSO and DE.]
V. CONCLUSION
Some of the most popular modern heuristics were
reviewed in the context of an evolutionary algorithms
course assignment requiring the optimization of PID
controllers for cascaded feedback loops. The selected
techniques were: simulated annealing, the genetic algorithm,
the population-based incremental learning algorithm, the
particle swarm optimization algorithm and the differential
evolution algorithm. The best single-test result was achieved
with the particle swarm optimization algorithm, while for
the mean of the best results and the standard deviation the
differential evolution algorithm delivered the best results. It
is important to note that this analysis is conditioned by the
assignment specification, namely the small number of
function evaluations and the small population sizes used in
each optimization trial. This requirement is important in practice
Fig. 8. Control signals for the primary controller of the cascaded system.
Fig. 9. Convergence plot for the mean best fitness over the 20 runs for the
DE and PSO algorithms.
REFERENCES