Abstract—In this paper, the swarm implementation of the Mean-Variance Mapping Optimization (MVMOS) is proposed to solve the optimal reactive power dispatch problem. Apart from incorporating swarm intelligence principles, MVMOS possesses enhanced mapping and penalty schemes as compared to the classical MVMO procedure. Based on the IEEE 57- and 118-bus systems, numerical tests and comparisons with other heuristic optimization methods are carried out. Additionally, a scheme for adaptive smart optimal reactive power source coordination, which can be used online, is suggested.

Keywords—Heuristic optimization; mean-variance mapping optimization; reactive power dispatch; smart grid; swarm intelligence.

I. INTRODUCTION

The optimal reactive power dispatch (ORPD) problem has the goal of scheduling the available reactive power sources in such a way that the overall transmission losses are minimized while a set of operational constraints is satisfied [1]. ORPD has received renewed interest in recent years, since the optimal integration and intelligent operation of an increasingly diverse mix of primary sources in the power supply system is of great concern for a well-functioning, competitive, and internationalized electricity market built around smart grids [2].

In addition to its complex underlying mathematical framework, the ORPD problem constitutes a large-scale optimization task involving a mixture of continuous and discrete control variables. Hence, modern heuristic optimization techniques, such as genetic algorithms [3], particle swarm optimization (PSO) [4], and differential evolution (DE) [5], are widely used for its solution. Although these techniques have some advantages over classical optimization methods, they may run the risk of premature convergence or local stagnation when handling discontinuous, multimodal (i.e. with multiple local optima), non-convex landscapes [6].

Mean-variance mapping optimization (MVMO) is a recent addition to the emerging optimization algorithms, with some basic conceptual similarities to other heuristic approaches. Several potential areas of application in research on power system optimization tasks have been reported [6]-[12]. Its working principle is based on a special mapping function applied for mutating the offspring on the basis of the mean and variance of the set comprising the n best solutions attained so far and saved in a continually updated archive. One remarkable trait of the classical implementation of MVMO is its single-particle approach, in which the tradeoff between search diversification and intensification results in fast progress rates with a reduced risk of premature convergence.

In this paper, the swarm implementation of MVMO, termed MVMOS, is proposed for the solution of the ORPD problem. This new approach extends the innate global search power of the original MVMO by starting the search with a set of particles (i.e. a swarm), each having its own memory represented by the corresponding archive and mapping function, and by allowing information exchange and dynamic reduction of the swarm size through simple rules. Enhanced schemes for shape-factor assignment and dynamic penalty are also incorporated.

In the following sections, details of the algorithm as well as some application examples are provided. Following this introduction, Section II presents the mathematical formulation of the ORPD problem. In Section III, the basics of MVMO are briefly reviewed, whereas MVMOS is explained in Section IV. Numerical results are given and discussed in Section V, including a performance comparison between MVMOS, MVMO, and other heuristic optimization methods, as well as an outline of the coupling of MVMOS with adaptive reactive power source coordination. Finally, Section VI summarizes the concluding remarks.

II. PROBLEM STATEMENT

Solving the ORPD entails determining the optimal settings of the reactive power control variables (i.e. generator bus voltages, settings of reactive power sources/sinks, transformer tap positions, etc.) leading to minimum power losses while fulfilling the operational constraints. Mathematically, the problem can be formulated as follows [3], [4]:

Minimize

P_{loss} = \sum_{k \in N_K} g_k \left( V_i^2 + V_j^2 - 2 V_i V_j \cos\theta_{ij} \right)   (1)

subject to

p(\mathbf{v}, \boldsymbol{\theta}) - \mathbf{p}_g + \mathbf{p}_d = 0   (2)

978-1-4673-6002-9/13/$31.00 ©2013 IEEE
q(\mathbf{v}, \boldsymbol{\theta}) - \mathbf{q}_g + \mathbf{q}_d = 0   (3)

\mathbf{v}^{min} \le \mathbf{v} \le \mathbf{v}^{max}   (4)

\mathbf{q}_g^{min} \le \mathbf{q}_g \le \mathbf{q}_g^{max}   (5)

\mathbf{q}_c^{min} \le \mathbf{q}_c \le \mathbf{q}_c^{max}   (6)

\mathbf{t}^{min} \le \mathbf{t} \le \mathbf{t}^{max}   (7)

\mathbf{s} \le \mathbf{s}^{max}   (8)

where G_{ij} and B_{ij} denote the conductance and susceptance of the link between buses i and j, respectively, and N is the total number of buses.

III. REVISITING CLASSICAL MVMO

The flowchart of the single-particle implementation of MVMO is sketched in Fig. 1. The procedure begins with an initialization phase in which the algorithm's parameter settings are defined and random samples of the control variables are generated from the space of possible solutions. Next, an iterative loop is initiated, in which the fitness evaluation (i.e. objective function plus a penalty reflecting the degree of constraint fulfillment) is performed, the termination criterion is checked, the solution archive is updated (i.e. candidate solutions are included or excluded), the global best solution is determined (i.e. parent assignment), and new candidate solutions are created (i.e. mutation by projection of selected variables onto the mapping function, followed by crossover).

Figure 1. Classical MVMO single-particle approach.

The salient features of the algorithm can be summarized as follows:

- A novel mapping function is used for mutating genes in the offspring based on the mean and variance of the solution archive.

- A compact and dynamically updated solution archive serves as the knowledge base for guiding the search direction (i.e. adaptive memory). The n best individuals that MVMO has found so far are saved in the archive and sorted in descending order of fitness.

- A single parent-offspring pair concept is adopted. In contrast to other methods, where the term iteration generally refers to the number of fitness evaluations (which is proportional to the total number of individuals in the swarm), MVMO requires only one fitness evaluation per iteration.

The individual with the best fitness so far in the archive (first position) is used in every iteration to generate a new descendant (i.e. it is assigned as parent). Basically, m out of the k dimensions of the optimization problem are strategically selected for the mutation operation via the mapping function, while the remaining dimensions inherit the corresponding values from the parent. Alternative selection methods are described in [6] and [13].

Figure 2. Change of the mapping function shape under different values of mean and shape factors (left: asymmetrical, s1 = 5, s2 = 15; right: symmetrical, s1 = s2 = 15).
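The adaptive-memory mechanics described above (the n-best archive, parent assignment, and the per-dimension mean and variance that feed the mapping function) can be sketched in Python as follows. The class and method names are illustrative, not from the paper, and minimization of the fitness (losses plus penalty) is assumed:

```python
import numpy as np

class SolutionArchive:
    """Illustrative sketch of the MVMO n-best solution archive."""

    def __init__(self, n_best):
        self.n_best = n_best
        self.entries = []  # list of (fitness, x) pairs, best (lowest) first

    def update(self, x, fitness):
        # Insert a candidate, then keep only the n best solutions found so far
        self.entries.append((fitness, np.asarray(x, dtype=float)))
        self.entries.sort(key=lambda e: e[0])   # ascending loss = best first
        del self.entries[self.n_best:]

    def best(self):
        # First position: the parent used to generate the next descendant
        return self.entries[0][1]

    def mean_and_variance(self):
        # Per-dimension statistics that parameterize the mapping function
        xs = np.stack([x for _, x in self.entries])
        return xs.mean(axis=0), xs.var(axis=0)
```

As the archive fills with increasingly similar good solutions, the per-dimension variance shrinks, which (via the shape variable) concentrates the mapping function around the mean and intensifies the search.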
The new value of each selected dimension x_i is determined by

x_i = h_x + (1 - h_1 + h_0) \cdot x_i^* - h_0   (11)

where x_i^* is varied randomly with uniform distribution in [0, 1] and the term h refers to the transformation mapping function, which is defined as

h(\bar{x}, s_1, s_2, x) = \bar{x} \left( 1 - e^{-x s_1} \right) + (1 - \bar{x}) \, e^{-(1-x) s_2}   (12)

h_x, h_1 and h_0 are the outputs of the mapping function for different inputs, given by

h_x = h(x = x_i^*), \quad h_0 = h(x = 0), \quad h_1 = h(x = 1)   (13)

s_i is the shape variable, calculated as

s_i = -\ln(v_i) \cdot f_s   (14)

At the start, the mean \bar{x}_i corresponds with the initialized value of x_i, and the variance v_i (associated with s_i) is set to one. As the optimization progresses, they are recalculated after every update of the archive for each selected optimization variable.

Recalling (14), it is evident that the factor f_s can be used to change the shape of the function. A small value (e.g. between 0.5 and 1.0) allows the slope of the mapping curve to increase and thus enables better exploration, whereas values above 1.0 result in a rather flat curve and thus lead to improved exploitation. In general, it is recommended to start the search process with a smaller f_s and then increase it as the optimization progresses. In several applications, the random variation of f_s according to (15) has resulted in significant improvements.

f_s = f_s^* \left( 1 + \mathrm{rand}() \right)   (15)

where f_s^* denotes the smallest value of f_s and rand() is a random number in the range [0, 1]. When the accuracy of the optimization needs to be improved, the following extension can also be added in order to allow a progressive increase of f_s^*:

f_s^* = f_{s\_ini}^* + \left( i / i_{final} \right)^2 \left( f_{s\_final}^* - f_{s\_ini}^* \right)   (16)
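Equations (11)-(16) can be combined into a compact sketch of the mutation step for one normalized dimension. The function names are illustrative, the variable is assumed scaled to [0, 1], and the symmetric case s1 = s2 = s_i is used:

```python
import math
import random

def h(x_bar, s1, s2, x):
    # Transformation mapping function of (12)
    return (x_bar * (1.0 - math.exp(-x * s1))
            + (1.0 - x_bar) * math.exp(-(1.0 - x) * s2))

def scaled_fs(i, i_final, fs_ini, fs_final):
    # Progressive increase of f_s* per (16), plus the random variation of (15)
    fs_star = fs_ini + (i / i_final) ** 2 * (fs_final - fs_ini)
    return fs_star * (1.0 + random.random())

def mutate_dimension(x_bar, v_i, f_s):
    # Shape variable of (14); v_i is the archive variance, assumed in (0, 1]
    s_i = -math.log(v_i) * f_s
    s1 = s2 = s_i                    # symmetric shape (right panel of Fig. 2)
    x_star = random.random()         # x_i* ~ U(0, 1)
    h_x = h(x_bar, s1, s2, x_star)   # mapping outputs per (13)
    h_0 = h(x_bar, s1, s2, 0.0)
    h_1 = h(x_bar, s1, s2, 1.0)
    return h_x + (1.0 - h_1 + h_0) * x_star - h_0   # new x_i per (11)
```

Note that by construction the result stays in [0, 1]: substituting x* = 0 and x* = 1 into (11) yields exactly 0 and 1, so no additional clipping of the mutated variable is needed.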
TABLE II. STATISTICS OF ACTIVE POWER LOSSES, IEEE 118-BUS SYSTEM

Ploss (MW)   MVMOS      MVMO       CLPSO      SPSO       UPSO       FDRPSO     DMS-PSO-HS   DE         JADE-vPS
Minimum      117.0802   117.0074   1200.2117  121.8049   123.1174   119.1387   123.4717     118.7199   118.1047
Maximum      118.1662   125.1501   1322.0461  125.4654   130.2011   123.5461   128.5504     121.1128   120.2177
Mean         117.4251   119.3353   1222.2499  123.6784   125.6709   121.6536   125.0562     119.7737   118.9533
Std.         0.2285     1.9386     22.0533    0.9145     1.7663     1.0501     1.2033       0.6289     0.5321
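Statistics such as those in Table II are computed over repeated independent optimization runs. A minimal sketch of this summary (assuming the sample standard deviation is the quantity reported) is:

```python
import statistics

def loss_statistics(losses):
    # Summary of active power losses (MW) over repeated optimization runs
    return {
        "Minimum": min(losses),
        "Maximum": max(losses),
        "Mean": statistics.mean(losses),
        "Std.": statistics.stdev(losses),  # sample standard deviation assumed
    }
```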