Sei sulla pagina 1di 10

See discussions, stats, and author profiles for this publication at: https://www.researchgate.

net/publication/262388552

Heuristic and metaheuristic optimization techniques with application to


power systems

Conference Paper · October 2010

CITATIONS READS

8 84

1 author:

Mihai Gavrilas
Gheorghe Asachi Technical University of Iasi
58 PUBLICATIONS   181 CITATIONS   

SEE PROFILE

All content following this page was uploaded by Mihai Gavrilas on 23 June 2016.

The user has requested enhancement of the downloaded file.


SELECTED TOPICS in MATHEMATICAL METHODS and COMPUTATIONAL TECHNIQUES in ELECTRICAL ENGINEERING

Heuristic and Metaheuristic Optimization Techniques with


Application to Power Systems
MIHAI GAVRILAS
Power System Department
“Gheorghe Asachi” Technical University of Iasi
21-23 D. Mangeron Blvd., 700050, Iasi
ROMANIA
mgavril@ee.tuiasi.ro http://www.ee.tuiasi.ro/~mgavril

Abstract: - The development of modern wide-area power systems, as well as recent trends towards the creation of
sustainable energy systems have given birth to complex studies addressing technical, but also economical and
environmental, aspects related to simple or multi-objective optimization problems. Recently, heuristic and
metaheuristic approaches that apply combinations of different heuristics with or without traditional search and
optimization techniques were proposed to solve such problems. This paper provides basic knowledge about most
widely used (meta)heuristic optimization techniques, and their application in optimization problems in power systems.

Key-Words: - Heuristic optimization, Metaheuristic optimization, Power systems, Efficiency.

1 Introduction complexity of problems faced by power engineers,


Optimization is a branch of mathematics and researchers in the field of Power Systems were pioneers
computational science that studies methods and in using heuristic and metaheuristic techniques for
techniques specially designed for finding the “best” solving optimization problems [4].
solution of a given “optimization” problem. Such This paper presents an overview of the most popular
problems aim to minimize or maximize one or more (meta)heuristic techniques used for solving typical
objective functions based on one or more dependent optimization problems in the field of Power Systems.
variables, which can take integer or real values. Thus, Sections 2 and 3 describe generic heuristic
Optimization is widely applied in fields such as methods and metaheuristics. Then, Section 4 considers
engineering, commerce, transportation, finance, typical optimization problems in power engineering and
medicine, and any decision making processes. for each such problem a brief description is presented.
Numerous conventional optimization techniques have The author assumed the task to prepare the
been designed to solve a wide range of optimization presentation of the recent advances and applications in
problems, such as Linear Programming, Nonlinear (meta)heuristic-based optimization in power systems
Programming, Dynamic Programming or Combinatorial considering the fact that most numerous and valuable
Optimization [1]. However, many of the existing contributions belong to the worldwide scientific
conventional optimization techniques applied to real community. Results of research undertaken by the
world problems suffer from a marked sensitivity with author, together with other fellows, were added with
respect to problems such as: difficulties in passing over modesty to the numerous other theoretical and practical
local optimal solutions; risk of divergence; difficulties in results presented in this paper.
handling constraints or numerical difficulties related to
computing first or second order derivatives [1].
To overcome these problems, heuristic and 2 Heuristic optimization
metaheuristic techniques were proposed in the early 70‟s Difficulties faced by exact, conventional optimization
[3]. Unlike exact methods, (meta)heuristic methods have methods often end up by preventing these methods to
a simple and compact theoretical support, being often determine a solution to the optimization problem within
based on criteria of empirical nature. These issues are a reasonable amount of time. To avoid such cases,
responsible for the absence of any guarantee for alternative methods have been proposed, which are able
successfully identifying the optimal solution. to determine not perfectly accurate, but good quality
Many optimization problems encountered in the field approximations to exact solutions. These methods,
of Power Systems may be approached using heuristic called heuristics, were initially based essentially on
and metaheuristic techniques. Moreover, as in many experts‟ knowledge and experience and aimed to explore
other situations, due to the large diversity and the search space in a particularly convenient way.

ISSN: 1792-5967 95 ISBN: 978-960-474-238-7


SELECTED TOPICS in MATHEMATICAL METHODS and COMPUTATIONAL TECHNIQUES in ELECTRICAL ENGINEERING

Heuristics were first introduced by G. Polya in 1945 3 Metaheuristics


[2] and were developed later in the 70‟s, when various The new paradigms were called metaheuristics and were
heuristics were also introduced for specific purpose first introduced at mid-80s as a family of searching
problems in different fields of science and technique, algorithms able to approach and solve complex
including electrical engineering [3]. Basically, a optimization problems, using a set of several general
heuristic is designed to provide better computational heuristics. The term metaheuristic was proposed in [18]
performance as compared to conventional optimization to define a high level heuristic used to guide other
techniques, at the expense of lower accuracy. However, heuristics for a better evolution in the search space.
the „rules of thumb” underlying a heuristic are often Although traditional stochastic search methods are
very specific to the problem under consideration. mainly guided by chance (solutions change randomly
Moreover, since heuristics are problem solving from one step to another), they can be used in
techniques based on solvers‟ expertise, they use domain- combination with metaheuristic algorithms to guide the
specific representations, and hence general heuristics search process and to accelerate the convergence.
can be successfully defined only for fundamental Most metaheuristics algorithms are only
problem solving methods, such as search methods [6]. approximation algorithms, because they cannot always
A wide range of heuristic search strategies can be find the global optimal solution. But the most attractive
cited from the literature, from uninformed approaches feature of a metaheuristic is that its application requires
(basically non-heuristic) like Depth First Search (DeFS), no special knowledge on the optimization problem to be
Breadth First Search (BrFS), or Uniform Cost Search solved, hence it can be used to define the concept of
(UnCS), to informed approaches (mainly heuristic) like general problem solving model for optimization
Best First Search (BeFS), Beam Search (BeS) or the A* problems or other related problems [4, 12].
Search (A*S) [5,6,7,8,10]. Uninformed or blind search Since their introduction in the mid-80s till now,
strategies are applied with no information about the metaheuristic methods for solving optimization
search space, other than the ability to distinguish problems have been continuously developed, allowing
between an intermediate-state and a goal-state. addressing and solving a growing number of such
Informed search strategies use problem-specific problems, previously considered difficult or even
knowledge. Usually, this knowledge is represented using impossible to solve. These methods include simulated
an evaluation function that assesses either the quality of annealing, tabu search, evolutionary computation
each state in the search space, or the cost of moving techniques, artificial immune systems, memetic
from the current state to a goal-state, using various algorithms, particle swarm optimization, ant colony
possible paths. In case of BeFS, among all possible algorithm, differential evolution, harmony search,
states at one level, the algorithm chooses to expand the honey-bee colony optimization etc
most “promising” one in terms of a specified rule [9]. The next section presents a brief review of basic
BeS is an enhanced version of the BeFS algorithm. issues for the most commonly used metaheuristics cited
The improvements consist in reducing the memory above. Several applications of these methods in the field
requirements [11]. With this aim in view, BeS is defined of power systems will be presented in section 5.
based on BrFS, which is used to build the search tree. At
each level, all new states are generated and the heuristic
function is computed for each state that a inserted in a 4 Metaheuristic methods
list ordered by heuristic function values. The list is of
limited length, equal to the so called “beam width”. This 4.1 Simulated Annealing
limits the memory requirements, but the compromise Studies on Simulated Annealing (SA) were developed in
risk to pruning out the path to the goal-state. the 1980s based on the Metropolis algorithm [17], which
The A* search algorithms uses a BeFS strategy, and a was inspired by statistical thermodynamics, where the
heuristic function that combines two metrics: the cost relationship between the probabilities of two states A
from the origin to the current state (or the cost-so-far) and B, with energies EA and EB, at a common tempera-
and an estimation of the cost from the current state to a ture T, suggests that states with higher energies are less
goal-state (or the cost-to-goal). The A* algorithm is probable in thermodynamic systems. Thus, if the system
considered to be very efficient [7]. is in state A, with energy EA, another state B of lower
One of the shortcomings of the search strategies energy (EB < EA) is always possible. Conversely, a state
presented above is the numerical inefficiency of the B of higher energy (EB > EA) will not be excluded, but it
search process, especially for high dimensional will be considered with probability exp(- (EB – EA)/T).
problems. Thus, significant efforts were devoted to Application of Metropolis algorithm in search
identify new heuristics able to successfully cope with the problems is known as SA and is based on the possibility
above problems. of moving in the search space towards states with poorer

ISSN: 1792-5967 96 ISBN: 978-960-474-238-7


SELECTED TOPICS in MATHEMATICAL METHODS and COMPUTATIONAL TECHNIQUES in ELECTRICAL ENGINEERING

Table 1 – Pseudocode for Simulated Annealing Table 2 – Pseudocode for Tabu Search
Data: initial approximation X0, initial temperature T, Data: length of the Tabu list LT, number of
number of iteration for a given temperature nT. intermediate solutions N.
Optimal solution:Xbest ← X0. Initialization: approximation X, Tabu list TABU={X}.
WHILE {stopping criterion not met} Optimal solution: Xbest ← X.
n = 0; i = 0; WHILE {stopping criterion not met}
WHILE (n < nT) DO prepare the Tabu list: if Length(TABU) = LT,
choose a new approximation Y then delete the oldest item from the list.
accept or reject the new approximation based generate N new aproximations in the
on the Metropolis rule: Xi+1 = G(Xi,Y,T) neighborhood of X and select the best candidate-
update optimal solution: if Xi+1 is better than solution Y which is not TABU.
Xbest, then Xbest ← Xi+1 update current approximation X ← Y and add it
next n-iteration (n ← n+1) in the Tabu list: Add(TABU,X).
update temperature T update optimal solution: if X is better than Xbest,
next T-iteration (i ← i+1) then Xbest ← X

values of the fitness function. Starting from a Table 3 – Pseudocode for Evolution Strategy
temperature T and an initial approximation Xi, with a Data: number of parents μ and offsprings λ (λ = k· μ).
fitness function Fitness(Xi), a perturbation is applied to Initialization: create initial population P = {Pi}, i=1 …
Xi to generate a new approximation Xi+1, with λ, and initialize the best solution Best ← void.
Fitness(Xi+1). If Xi+1 is a better solution then Xi, i.e. WHILE {stopping criterion not met}
Fitness(Xi+1) > Fitness(Xi), the new approximation will evaluate P and update the best solution, Best.
replace the old one. Otherwise, when Fitness(Xi+1) < reproduction stage: select μ fittest individuals from
P and create parent-population, R = {Rj}, j=1 … μ.
Fitness(Xi), the new approximation will be considered
mutation stage: apply stochastic changes to parents
with probability pi=exp(-[Fitness(Xi)–Fitness(Xi+1)] / T).
and create k = λ / μ offsprings for each parrent:
The above steps are repeated for a given number of [ R = {Rj}, j=1… μ ] → [ Q = {Qi}, i=1… λ ]
times for a constant value of temperature, then mutation stage: replace the current population with
temperature is updated by decreasing its value, and the the mutated one:
iterative process continues until a stopping criterion is [ Pi = Qi, i=1 … λ ]
met. The pseudocode for SA is shown in Table 1.
common notation is ES(μ, λ). The main genetic operator
4.2 Tabu Search that controls the evolution from one generation to
Tabu Search (TS) was introduced by [18], as a search another is mutation.
strategy that avoids returning to solutions already visited In a general ES(μ, λ) model (where λ = k · μ), each
by maintaining a Tabu list, which stores successive generation starts with a population of λ individuals. The
approximations. Since the Tabu list is finite in length, at fitness of each individual is computed to rank them in
some point, after a number of steps, some solutions can descending order of their fitness. Amongst the current
be revisited. Adding a new solution to a complete Tabu population, only the first μ fittest individuals are
list is done by removing the oldest one from the list, selected to create the parent population (this selection
based o a FIFO principle (First In – First Out). phase is sometimes called truncation). Next, each of the
New approximations can be generated in different μ parents will create by repeated mutation k = λ / μ
ways. The pseudocode presented in Table 2 uses the offsprings. Eventually, the new, mutated population will
following procedure: at each step a given number of new replace the old one and the algorithm reiterates. The
approximations are generated in the neighborhood of the pseudocode for the ES(μ, λ) model is shown in Table 3.
current solution X, but considering as feasible only the
ones which are not in the Tabu list. Amongst the new 4.4 Genetic Algorithms
approximations the best one is chosen to replace the The Genetic Algorithm (GA) is a step forward of ES in
current solution, being also introduced in the Tabu list. the general framework of Evolutionary Computation.
GAs have been designed and developed by Holland [15]
4.3 Evolution Strategy and later by Goldberg [26] and De Jong [25].
Evolution Strategy (ES) was first proposed in [13] as a GAs are search strategies that are based on specific
branch of evolutionary computation. It was further mechanisms of genetics and natural selection, using
developed after 1970‟s. An ES may be described by two three basic operators: selection, crossover and mutation.
main parameters: number of parents in a generation μ, For each generation, selection is used to choose parent
and number of offsprings created in a generation λ. A individuals , based on their fitness function. After

ISSN: 1792-5967 97 ISBN: 978-960-474-238-7


SELECTED TOPICS in MATHEMATICAL METHODS and COMPUTATIONAL TECHNIQUES in ELECTRICAL ENGINEERING

Table 4 – Pseudocode for Genetic Algorithm Table 5 – Pseudocode for Differential Evolution
Data: population size N, crossover rate ηc and mutation Data: population size parents N, weighting factors α, β.
rate ηm Initialization: create initial population P={Pi}, i=1…N.
Initialization: create initial population P={Pi}, i=1…N, Evaluate current population P and store the fittest
and initialize the best solution Best ← void. individual as Best.
WHILE {stopping criterion not met} WHILE {stopping criterion not met}
evaluate P and update the best solution Best. FOR i = 1 TO N DO
initialize offspring population: R ← void. select two different individuals Xr1 and Xr2,
create offsprings: other than Xi .
FOR k = 1 TO N / 2 DO apply mutation: Xi‟= Xi + α·( Xr1 –Xr2).
selection stage: select parents Q1 and Q2 from apply crossover: Xi‟‟= Xi + β·( Xi –Xi‟).
P, based on fitness values. evaluate Xi‟‟ and replace Xi with Xi‟‟ anytime
crossover stage: use crossover rate ηc and when Fitness (Xi‟‟) > Fitness (Xi).
parents (Q1;Q2) to create offsprings (S1; S2). update the best solution Best.
mutation stage: use mutation rate ηm to apply
stochastic changes to S1 and S2 and create
mutated offsprings T1 and T2. solutions to which the mechanisms of DE are applied.
add T1 and T2 to offspring population: Thus, for a reference individual Xi, two different
R ←R { T1 and T2 }. individuals Xr1 and Xr2, other than Xi, are randomly
replace current population P with offspring selected and an arithmetic mutation is applied to Xi
population R: P ← R. based on the difference between Xr1 and Xr2, to produce
elitism: replace the poorest solution in P with the a mutant Xi’. Then an arithmetic crossover, based on the
best solution stored in Best.
difference between current and mutated solutions, is
applied to generate the new estimation Xi’’. Xi’’ will
selecting a pair of parent chromosomes, they enter the replace the reference solution and the best one anytime
crossover stage to generate two offsprings. Crossover is when its fitness function is better. The pseudocode for
useful to create new individuals or solutions that inherit the DE algorithm is shown in Table 5.
good characteristics from both parents. Newly created
individuals will be altered by small-scale changes in the 4.6 Immune Algorithms
genes, applying mutation operator. Mutations ensure the The Immune Algorithm (IA) was proposed first be [19],
introduction of "novelty" in the genetic material. After to simulate the learning and memory abilities of
completing the offspring population, this will replace immune systems. The IA is a search strategy based on
the parents from the previous generation and the genetic algorithm principles and inspired by protection
selection-crossover-mutation process will be resumed mechanisms of living organisms against bacteria and
for a next generation. To avoid losing the best solution viruses. The problem coding is similar for both GA and
due to the stochastic character of the search procedure IA, except that chromosomes in GA are called
described above, [25] has proposed to apply a special antibodies in IA, and problem formulation, i.e. objective
replacement procedure called “elitism” that makes a or fitness functions are coded as antigens in IA.
copy of the best individual from the current population The basic difference between AG and IA lies in the
and transfer it unchanged in the next generation. The selection procedure. Instead of fitness functions, IA
pseudocode for the GA is shown in Table 4. computes affinities between antibodies and / or between
antibodies and antigens. Based on the affinities between
4.5 Differential Evolution antibodies and antigens a selection and reproduction
Differential Evolution (DE) was developed mainly by pool (the proliferation pool), is created using antibodies
[23] as a new evolutionary algorithm. Unlike other with greatest affinities. The proliferation pool is created
evolutionary algorithms, DE change successive by clonal selection: the first M antibodies, with highest
approximations of solutions or individuals based on the affinities relative to antigens, are cloned (i.e. copied
differences between randomly selected possible unchanged) in the proliferation pool. Using a mutation
solutions. This approach uses indirectly information rate inversely proportional to the affinity of each
about the search space topography in the neighborhood antibody to antigens, mutations are applied to the
of the current solution. When candidate-solutions are clones. Then affinities for new, mutated clones and
chosen in a wide area, mutations will have large ampli- affinities between all clones are computed, and a limited
tudes. Conversely, if candidate-solutions are chosen in a number of clones Nrep (with lowest affinities) are
narrow area, mutations will be of small importance. replaced by randomly generated antibodies, to introduce
For each generation, all current individuals that diversity. Elitism can be applied to avoid losing best
describe possible solutions are considered as reference solutions. The pseudocode of IA is described in Table 6.

ISSN: 1792-5967 98 ISBN: 978-960-474-238-7


SELECTED TOPICS in MATHEMATICAL METHODS and COMPUTATIONAL TECHNIQUES in ELECTRICAL ENGINEERING

Table 6 – Pseudocode for Immune Algorithm Table 7 – Pseudocode for Particle Swarm Optimization
Data: population, clonal and replacement size N, M, Nrep. Data: population size N, personal-best weight α, local-
Initialization: create initial population P. best weight β, global-best weight γ, correction factor ε.
Evaluate affinities for antibodies in current population P. Initialization: create initial population P.
WHILE {stopping criterion not met} WHILE {stopping criterion not met}
clonal selection: clone the first M antibodies from P, select the best solution from the current P: Best.
with highest affinity. Number of clones for an select global-best solution for all particles: BG.
antibody is proportional to its affinity. Number of apply swarming:
clones in the proliferation pool Q is N. FOR i = 1 TO N DO
mutation: apply stochastic changes to clones from select personal-best solution for particle Xi: BiP.
the proliferation pool Q, with mutation rate inversly select local-best solution for particle Xi: BiL.
proportional to their affinity. compute velocity for particle Xi:
replacement: evaluate affinities for mutated antibo- FOR j = 1 TO DIM DO
dies and replace the worst Nrep clones from generate correction coefficients: a = α ·
population Q with randomly generated antibodies. rand(); b = β · rand(); c = γ · rand().
elitism: evaluate new created antibodies and replace update velocity of particle Xi along
the worst antibody from Q with the best one from P. dimension j:
next generation: replace current population with the Vij = Vij + a · (BijP –xij) +
one from the proliferation pool: P ← Q. + b · (BijL –xij) + c · (BjG –xij)
update position of particle Xi: Xi = Xi + ε · Vi
4.7 Particle Swarm Optimization
Particle Swarm Optimization (PSO) was developed by Table 8 – Pseudocode for Ant Colony Optimization
Kennedy and Eberhart in the mid-1990s [21]. PSO is a Data: population size N, set of components C = {C1,
stochastic optimization technique which emulates the …, Cn}, evaporation rate evap.
“swarming” behavior of animals such as birds or Initialization: amount of pheromones for each
insects. Basically, PSO develops a population of component PH = {PH1, … , PHn}; best solution Best
particles that move in the search space through WHILE {stopping criterion not met}
cooperation or interaction of individual particles. PSO is initialize current population, P = void.
basically a form of directed mutation. create current population of virtual solutions P:
Any particle i is considered in two parts: particle‟s FOR i = 1 TO N DO
location Xi and its velocity Vi. At any moment, the create feasible solution S.
position of a particle i is computed based on its prior update the best solution, Best ← void.
position Xi and a correction term proportional with its Add solution S to P: P ← P S
velocity ε·Vi. In its turn, the velocity assigned to each Apply evaporation:
particle is computed using four components: (i) the FOR j = 1 TO n DO
influence of the previous value of velocity Vi; (ii) the PHj = PHj · (1 – evap)
influence of the best personal solution for particle i, XiP; Update pheromones for each component:
FOR i = 1 TO N DO
(iii) the influence of the best local solution so far for
FOR j = 1 TO n DO
informants of particle i, XiL and (iv) the influence of the if component Ci is part of solution Pj,
best global solution so far for the entire swarm, XG. then update pheromones for this
These components are taken into consideration using component: PHj = PHj + Fitness(Pj)
three weighting factors, denoted by a, b and c. The
pseudocode of PSO is presented in Table 7.
number of ants will follow the same path. For each ant, a
4.8 Ant Colony Optimization taboo list may be defined to memorize its path.
The classic ant colony optimization (ACO) algorithm, The ants move between components Ci and Cj with
proposed by Marco Dorigo in 1992 [22], is inspired probability Pij, defined based on the pheromone density
from the natural behaviour of ants, which are able to between components, the visibility between the two
find their way using pheromone trails. Ants travel components and two weighting factors. After each ant
between two fixed points A and B on the shortest route, completes its path, the pheromone density for every
leaving behind them trail of pheromone that mark component is updated based on the fitness functions.
chosen paths. After the first ant reaches the B point, it The pseudocode of ACO is presented in Table 8.
returns in A following its own pheromone trail, and
doubling the pheromone layer density. Further, more 4.9 Honey Bee Colony Optimization
ants will probabilistically prefer to choose a path with The Honey Bee Colony Optimization (HBCO)
higher pheromone density, and gradually an increasing algorithm was first proposed in [24]; it is a search

ISSN: 1792-5967 99 ISBN: 978-960-474-238-7


SELECTED TOPICS in MATHEMATICAL METHODS and COMPUTATIONAL TECHNIQUES in ELECTRICAL ENGINEERING

Table 9 – Pseudocode for Honey Bee Optimization Paper [28] presents a new approach to this problem,
Data: size of populations: - drones (ND), broods (NB) based on joint application of genetic programming (GP)
and genetic pool (NP); initial queen‟s speed Smax; and symbolic regression (SR). GP is a special form of
crossover rate ηc and mutation rate ηm GA that describes possible solutions using
Initialization: create initial population Drones with ND chromosomes with a tree-like structure [16]. SR is a
individuals, and select the best drone as the Queen. special numerical technique used to identify those
WHILE {stopping criterion not met}
analytical expressions, previously unknown, that best fit
create the genetic pool: use Drones popoulation
some relations among observed data. The GP&SR
and select NP individuals using a Simulated
Annealing-type acceptance rule, based on the method was applied to a dataset comprising hourly load
queen‟s speed S, and gradually reduce S. profiles. The daily peak load was estimated using
crossover: apply arithmetic crossover between different combinations of data. The optimal solution
Queen and succesively selected drones from the found by the GP&SR estimates the daily peak load
genetic pool, until a population of NB broods using as input data other peak loads form days d-7, d-6
(offsprongs) is created. and d-1 and the number of the reference day in the year.
mutation: apply arithmetic mutation to randomly One of the most widely used techniques to represent
selected broods (offsprings). consumers is the Load Profiling (LP-ing) method, which
update the Queen: if any brood is better than the associates typical load profiles (TLPs) with simple
Queen, update the Queen. readings in the network [29, 30, 31]. Paper [32]
selection: use broods and, based on a selection proposes a new LP-ing technique that uses a fuzzy
criterion, create the new population of Drones.
implementation of Self Organizing Maps (SOMs) as a
basic clustering approach, combined with a weighting
procedure that mimics the mating process in honey-bee procedure to compute distances between metered load
colonies, using selection, crossover and mutation. profiles and TLPs around peak and valley load hours.
A honey bee colony houses a queen-bee, drones and Then the growing SOM technique is applied to
workers. The queen-bee is specialized in egg laying;
determine the optimal architecture of the SOM, using a
drones are fathers of the colony and mate with the
queen-bee. During the mating flight the queen mates control strategy based on the assessment of affinity
with drones to form a genetic pool. After the genetic measures between metered LPs and TLPs.
pool was filled with chromosomes, genetic operators are Paper [33] presents a new approach to the LP-ing
applied. During the crossover stage, drones are problem based on the HBCO algorithm. The proposed
randomly selected from the current population and mate method can be easily implemented as a clustering
with the queen using a Simulated Annealing-type technique, with high robustness properties. The
acceptance rule based on the difference between the application of HBCO algorithm is an original approach
fitness functions of the selected drone and the queen. to the LP-ing problem. A major advantage of the
The final stage of the evolutionary process consists in proposed method is its capability to produce qualitative
raising the broods generated during the second stage, classification results with fewer parameters that need
and creating a new generation of NB broods, based on calibration than other alternative clustering approaches.
mutation operators. A new generation of ND drones is
created based on a specific selection criterion. The 5.2 Network reconfiguration
pseudocode of HBCO algorithm is presented in Table 9. The network reconfiguration problem arises usually in
distribution systems and aims at changing the network
topology by altering the position and status of
5 Applications of (meta)heuristic sectionalizing switches. Generally speaking, this is a
methods in power systems complex combinatorial problem, because in real systems
Many optimization problems in Power Systems may be the number of candidate solutions is huge.
approached using heuristic and metaheuristic techniques. Paper [34] approaches the network reconfiguration
A brief description of most representative applications problem using the ACO algorithm. Candidate solutions
of the (meta)heuristic methods in the field of power are represented by vectors consisting of the load
system optimization is presented below. sections that must be opened to obtain a radial
configuration of the network, taking into account the
5.1 Load assessment and profiling problem‟s constraints (no consumer is to be
The knowledge of load characteristics at system buses, disconnected and the thermal current and voltage drop
in general, and the estimation of peak loads in must meet the admissible limits).
distribution systems, in particular, is one of the top A different approach, based on the PSO algorithm
requirements for taking high quality decisions for the was proposed in [35]. This time, a particle in the swarm
optimal operation and planning of a distribution system. encodes two m-tuples of indices, where the value of

ISSN: 1792-5967 100 ISBN: 978-960-474-238-7


SELECTED TOPICS in MATHEMATICAL METHODS and COMPUTATIONAL TECHNIQUES in ELECTRICAL ENGINEERING

parameter m is equal to the sum of the number of source 5.4 System security analysis
nodes and the number of independent loops in the System equivalents are a good solution to simplify the
system, minus one. The first m-tuple indicates the on-line analysis of present day wide-area power
feeders in the system where the loops should be opened, systems. Paper [42] presents an original approach to the
and the second m-tuple shows the line sections on each problem of the REI equivalent design optimization
feeder where the switches must be turned off. based on the sensitivity of the complex bus voltage to a
An interesting approach to network reconfiguration set of simulated contingencies. The optimal structure of
problem is the one depicted in [36]. This approach is the REI equivalent is determined using a GA approach.
based on a heuristic that replace the traditional approach Basically, besides the standard and mixed REI
that computes network losses for every switching equivalents, with PQ or PV-type buses, the new
configuration with a simple one based on the following proposed model allows to define any structure of the
heuristic. Start with the standard radial configuration equivalent, with any number of virtual REI buses, which
and select the one switch p with the maximum voltage group real buses from any category (PQ or PV-type).
difference across it. If this difference is large enough, In [39] the authors propose a feeder reconfiguration
stop the search. Otherwise, choose the bus across switch approach to increase voltage stability limits for a
p with minimum voltage, and open the switch q adjacent distribution system, based on a TS optimization
to this bus, while closing switch p. Move to the q+1 procedure. The quality of a candidate solution is
switch and repeat the above steps until losses begin to assessed based on a voltage stability index
increase. At that moment, stop the search and choose as recommended in [39].
optimal solution the one with the open switch q. FACTS (Flexible AC Transmission Systems)
optimization is approached in [41], where a hybrid
5.3 Reactive power planning metaheuristic approach to the efficient allocation of
Reactive power control can be achieved using several UPFC (Unified Power Flow Controller) for maximizing
approaches such as generator voltage control, transmission capability is presented. The model
transformer tap control and fixed or controllable VAR proposed in [41] is applied in two steps. The first step
sources. At the distribution level, the most efficient consists in determining the optimal location of UPFC
approach, as it produces other positive effects too, is the using TS algorithm, while the second step consists in
Reactive Power Compensation (RPC) method through optimizing the outputs of the UPFC using a
power factor correction. evolutionary PSO approach.
RPC was approached in [34] and [35] in parallel with
the distribution system reconfiguration, using the ACO 5.5 State estimation
and the IA algorithms. The two algorithms were run to The most popular State Estimation (SE) model is based
produce the final solution of the RPC optimization on the Weighted Least Squares method, and uses
problem. Multiple runs of the PSO and IA algorithms measurements for bus voltage, real and reactive power
will lead eventually to the optimal solution. flows or injections. Recent works, such as [42],
A different approach was used in [37], where an approached the problem of power system SE using a
enhanced PSO (E-PSO) algorithm was proposed. For the parallel implementation of the PSO algorithm based on
E-PSO algorithm, a particle changes its position based PC cluster systems. The main advantages of parallel
not only on its best position so far and the global best processing consist in reducing computing time and
position, but also on other successful positions of itself improving computation efficiency. The proposed PC
and other particles in the current population. Tests have cluster configuration considers sub-populations that
proved that a second order E-PSO model is enough to evolve independently and change information only with
guarantee a faster convergence. other predefined neighboring sub-populations.
In [38] the reactive power planning problem is Recently, GPS synchronized phasor measurement
approached in a more complex way, considering the data were proposed to be used as input data to the
simultaneous optimization of reactive generation of traditional SE model [43]. Phasor measurement units
generators, transformer tap positions and capacitor (PMUs) are placed at the buses of transmission
banks. The authors uses a hybrid optimization method: substations to measure voltage and current phasors.
while the optimal settings for reactive generation of From a financial standpoint, a hybrid solution which
generators and transformer tap positions are determined uses both traditional and synchronized phasor
using metaheuristics (GA, PSO and DE), the number of measurements is better in terms of costs. From this
capacitor banks in the system buses are established using standpoint paper [44] approaches the problem of
a heuristic approach based on an analysis of loss optimal PMUs placement as an optimization problem
sensitivity at the buses. which aims to expand an existing traditional
measurement configuration through the placement of

ISSN: 1792-5967 101 ISBN: 978-960-474-238-7


SELECTED TOPICS in MATHEMATICAL METHODS and COMPUTATIONAL TECHNIQUES in ELECTRICAL ENGINEERING

additional PMUs using a GA – based approach. The [3] F. Box, A Heuristic technique for assigning
algorithm aims to find the optimal placement of a given frequencies to mobile radio nets, IEEE Transactions
set of PMUs at the buses of a power system with a on Vehicular Technology, 27, 57–74, 1978.
known conventional measurement configuration. [4] K. Y. Lee and M.A. El-Sharkawi (editors), Modern
Heuristic Optimization Techniques with
5.6 Distributed generation Applications to Power Systems, IEEE Press Series
Present day distribution systems are facing deep on Power Engineering, John Wiley & Sons, 2008.
changing that transforms traditional design for passive [5] J. Pearl., Heuristics: Intelligent Search Strategies
operation into new concepts centered on distributed for Computer Problem Solving, Add.-Wesley, 1984.
generation (DG) and a more active role of end-users. [6] S. J. Russell, P. Norvig, Artificial Intelligence: A
Two representative types of optimization problems for Modern Approach (3rd ed.), 2009.
DG applications will be briefly described forward. [7] S. Koenig M. Likhachev, Y. Liu, et al., Incremental
An appropriate location of DG sources can determine heuristic search in AI, AI Magazine, 25 (2), 2004.
reduction in system losses, improvement of voltage [8] R. Zhou, E. A. Hansen, Breadth-First Heuristic
profiles or simplification of network protection Search, Proc. of the 14th International Conference
schemes. Thus, a multi-objective approach to the on Automated Planning and Scheduling, ICAPS-04,
optimal placement of DG, based on the HBCO Whistler, British Columbia, Canada, June 2004.
algorithm is presented in [45]. The fitness function used [9] R. Dechter, J. Pearl, Generalized Best-First Search
by the HBCO algorithm is a combination of two Strategies and the Optimality of A*. Journal of the
objective functions, which describe different features of Assoc. for Comp.Machinery, 32, pp. 505-536, 1985.
the system: system real power losses and a penalty [10] E. Burns, S. Lemons, R. Zhou,W. Ruml, Best-First
function associated with bus voltage and line Heuristic Search for Multi-Core Machines, Proc. of
overloading violations for contingency analysis. the 21-st International Joint Conference on
The problem of optimal design of multiple energy Artificial Intelligence (IJCAI-09), Pasadena,
hubs was approached in [46] for a trigeneration-energy California, pp. 449-455, 2009.
system using a GA-based approach. The hub uses as [11] R. Zhou, E. A. Hansen, Beam-Stack Search:
primary energy electricity and natural gas or biomass, Integrating Backtracking with Beam Search, Proc.
and produces as outputs electricity, cooling and heating. of the 15th International Conference on Automated
The mathematical model is based on a compound Planning and Scheduling, Monterey, June, 2005.
objective function with two components: the primary [12] C. Blum, A. Rolli, Metaheuristics in Combinatorial
energy function and an error term equal to the difference Optimization: Overview and Conceptual Compari-
between the primary energy computed on two son, ACM Computing Surveys,35(3),268–308,2003.
independent ways (starting from the heating or cooling [13] I. Rechenberg, Cybernetic Solution Path of an
output energy). The optimization process consists in Experimental Problem, Royal Aircraft
finding the optimal values for three conversion Establishment Library Translation, 1965.
coefficients that uniquely describe the hub energy flow. [14] L. Fogel, A.J. Owens, MJ. Walsh, Artificial Intelli-
gence through Simulat.ed Evolution, Wiley 1966.
[15] J. H. Holland, Adaptation in Natural and Artificial
6 Conclusion Systems, Univ. of Michigan Press, 1975.
During the last 20 years numerous (meta)heuristic [16] J. Koza, Genetic programming, MIT Press; 1992.
approaches have been devised and developed to solve [17] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi,
complex optimization problems. Their success is due Optimization by Simulated Annealing. Science, 220
largely to their most important features, namely the need (4598): 671–680, 1983.
of minimal additional knowledge on the optimization [18] F. Glover, Future Paths for Integer Programming
problem and a highly numerical robustness of and Links to Artificial Intelligence, Computers and
algorithms. This paper provided basic knowledge of Operations Research 13 (5): 533–549, 1986.
most popular (meta)heuristic optimization techniques, [19] J. D. Farmer, N. Packard, A. Perelson, The immune
and how they are applied in common optimization system, adaptation and machine learning, Physica D
problems in power systems. 22: 187–204, 1986.
[20] P. Moscato, On Evolution, Search, Optimization,
Genetic Algorithms and Martial Arts : Towards
References: Memetic Algorithms, Techn. Report C3P 826, 1989.
[1] R. Fletcher, Practical Methods of Optimization, 2- [21] J. Kennedy, R. Eberhart, Particle Swarm
nd Edition, John Wiley & Sons, 2000. Optimization, Proc. of IEEE Internal Conference on
[2] G. Polya, How to Solve It., Princeton Univ.Pr, 1945. Neural Networks,pp.1942–1948, 1995.

ISSN: 1792-5967 102 ISBN: 978-960-474-238-7


SELECTED TOPICS in MATHEMATICAL METHODS and COMPUTATIONAL TECHNIQUES in ELECTRICAL ENGINEERING

[22] M. Dorigo, Optimization, Learning and Natural Optimization and Immune Algorithm, Proc. of the
Algorithms (Phd Thesis). Polit.di Milano, 1992. First International Symposium on Electrical and
[23] R. Storm, K. Price, Differential evolution - a simple Electronics Engineering, Galati, Romania, 2006.
and efficient heuristic for global optimization over [36] R. Srinivasa, S.V.L. Narasimham, A New Heuristic
continuous spaces, Journal of Global Optimization Approach for Optimal Network Reconfiguration in
11: 341–359, 1997. Distribution Systems, Internnal Journal of Appl.
[24] S. Nakrani, S. Tovey, On honey bees and dynamic Science, Engin. and Technol., 5:1 2009, pp.15-21.
server allocation in Internet hosting centers, [37] M. Gavrilas, O. Ivanov, C.V. Sfintes, Enhanced
Adaptive Behaviour 12, 2004. Particle Swarm Optimization Method for Power
[25] K.A. De Jong, Genetic Algorithms: A 10 Year Loss Reduction in Distribution Systems, Proc. of
Perspective; Proceedings of the 1st Internal Conf. 19th Internal CIRED, Vienna, May 2007.
on GAs and Their Appl, pp. 169-177, 1985. [38] B. Bhattacharyya, S.K. Goswami, Combined
[26] D. Goldberg, Genetic Algorithms in Search, Heuristic and Evolutionary approach for Reactive
Optimization and Machine Learning, Addison- Power Planning Problem, Journal of Electrical
Wesley,Reading, MA, 1989. Systems, Volume 3, Issue 4, 2007, pp. 203-212.
[27] R. Dawkins et al., The Selfish Gene, Oxford Univ. [39] M.A.N. Guimaraies, J.E.C. Lorenzeti, C.A. Castro,
Press, 1989. Reconfiguration of distribution systems for voltage
[28] M. Gavrilas, C.V. Sfintes, O. Ivanov, Comparison stability margin -enhancement using tabu search,
of Neural and Evolutionary Approaches to Peak Proc.. of Intematlonal Conference on Power
Load Estimation in Distribution Systems, Proc. of System Technology - POWERCON 2004,
the Internal Conf. on "Computer as a Tool", EU- Singapore, Nov. 2004, pp. 1556-1561.
ROCON 2005,Belgrade,Nov.2005,pp.1461-1464 . [40] M. Gavrilas, O. Ivanov, G. Gavrilas, A New Static
[29] G. Chicco, R. Napoli, and F. Piglione, Network Reduction Technique Based on REI
Comparisons among clustering techniques for Equivalents and Genetic Optimization, Proc. of the
electricity customer classification, IEEE Trans. 8th WSEAS Intnal Conference on Power Systems
Power Syst., vol. 21, no. 2, pp. 933–940, May 2006. (PS '08),Santander,Spain, Sept. 2008, pp. 106–111.
[30] M. Gavrilas, O. Ivanov, G. Gavrilas, Load [41] H. Mori, Y. Maeda, A Hybrid Method of EPSO and
Profiling with Fuzzy Self-Organizing, Proc. of the TS for FACTS Optimal Allocation in Power
9th Symposium on Neural Network Applications in Systems, Proc. of the 2006 IEEE Internal Conf. on
Electrical Engineering NEUREL 2008, Belgrade, Systems,Man, and Cybernetics,2006,pp.1831-1836.
Sept. 2008, CD-ROM, ISBN: 978-1-4244-2903-S. [42] H.M. Jeong, H.S. Lee, J.H. Park, Application of
[31] G. Chicco, M. S. Ilie, Support Vector Cluste- Parallel Particle Swarm Optimization on Power
ring of Electrical Load Pattern Data ,IEEE Trans. System State Estimation, Proc. of IEEE Transm. &
Power Syst.,Vol.24,No.3,2009,pp.1619-1628. Distrib. Asia Conference, 2009.
[32.GG] M. Gavrilas, O. Ivanov, Load Profiling by Self [43] A. Abur, A. G. Expósito, Power System State
Organization with Affinity Control Strategy, Rev. Estimation—Theory and Implementations, Marcel
Roum. Sci. Techn. – Électrotechn. et Énerg., 53, 4, Dekker, Inc., 2004., ISBN: 0-8247-5570-7.
Bucarest, 2008, pp. 413–421. [44] M. Gavrilas, I. Rusu, G. Gavrilas, O. Ivanov,
[33] M. Gavrilas, G. Gavrilas, C.V. Sfintes, Synchronized Phasor Measurements for State
Application of Honey Bee Mating Optimization Estimation, Rev. Roum. Sci. Techn. – Électrotechn.
Algorithm to Load Profile Clustering, Proc. of the et Énerg., Vol. 54, No. 4, 2009, pp. 335-344.
IEEE International Conference on Computational [45] S. Anantasate, C. Chokpanyasuwan, W.
Intelligence for Measurement Systems and Pattaraprakor and P. Bhasaputra, Multi-objectives
Applications, Taranto, Italy, Sept. 2010. Optimal Placement of DG Using Bee Colony
[34] O. Ivanov, M. Gavrilas, C.V. Sfintes, Loss Optimization, Proc. of the 3rd GMSARN Internal
Reduction in Distribution Systems using Ant Conference, Nov.2008, China.
Colony Algorithms, Proc. of the 2-nd Reg. Conf. & [46] E. Hopulele, M. Gavrilas, C. Afanasov, Optimal
Exhib.on Electricity Distribution, Serbia, Oct. 2006. Design of a Hybrid Trigeneration System with
[35] M. Gavrilas, O. Ivanov, Optimization in Stirling Engine, Proc. of the 6th Internal Conf. on
Distribution Systems Using Particle Swarm Electrical and Power Engineering, Iasi, Oct. 2010.

ISSN: 1792-5967 103 ISBN: 978-960-474-238-7

View publication stats

Potrebbero piacerti anche