
ABSTRACT

The Optimal Power Flow (OPF) problem in electrical power systems is a static, non-linear, multi-objective or single-objective optimization problem. As power companies have moved into a more competitive environment, OPF has been used as a tool to define the level of inter-utility power exchange. The OPF solution aims to optimize a selected objective function, such as fuel cost, via optimal adjustment of the power system control variables, while at the same time satisfying various equality and inequality constraints.

Throughout the research history, a wide variety of classical optimization techniques, called conventional methodologies, have been applied to solving OPF problems. However, the conventional methods have drawbacks such as poor convergence characteristics or lack of accuracy.

To overcome some of the above-mentioned difficulties, this thesis presents an efficient and reliable evolutionary-based algorithm to solve Optimal Power Flow (OPF) problems in electric power systems. The proposed algorithm employs the concept of swarm behavior in finding the optimum (e.g., a food source) in nature, called Particle Swarm Optimization (PSO). The problem is to determine the set-points of the power system, including the power output of the generators, the voltage of the PV buses, etc., that supply the demand at least cost while satisfying the equality and inequality constraints of the system. The proposed algorithm utilizes the global and personal exploration capabilities of the PSO particles to search for the optimal settings of the control variables, i.e., the power outputs and voltages of the generators.

The Particle Swarm Optimization algorithm was successfully tested on a 5-bus system, and the simulation results showed its robustness and effectiveness compared with the conventional method in the literature. PSO is characterized as simple in concept, easy to implement and computationally efficient.
Table of Contents

ABSTRACT
ACKNOWLEDGEMENT
BACHELOR THESIS APPROVAL FORM
OTHER COMMENT
TABLE OF FIGURES
TABLE OF ABBREVIATIONS
CHAPTER 1: INTRODUCTION
  1.1 Overview
  1.2 Scope of thesis
  1.3 Organization of thesis
CHAPTER 2: LITERATURE SURVEY
  2.1 Introduction
  2.2 Optimal Power Flow Challenges
  2.3 OPF Solution Methodologies
    2.3.1 Conventional Methods
      a. Gradient Methods
      b. Newton Method
      c. Linear Programming Method
      d. Quadratic Programming Method
      e. Interior Point Method
    2.3.2 Computational Evolution Methods
CHAPTER 3: OPTIMAL POWER FLOW
  3.1 Introduction
  3.2 Power Flow Analysis
    3.2.1 Introduction
    3.2.2 Power Flow Equations
    3.2.3 The Power Flow Problem
    3.2.4 Solution by Gauss Iteration
    3.2.5 Newton-Raphson Iteration
    3.2.6 Application to Power Flow Equations
  3.3 General OPF Problem Formulation
  3.4 Optimization Problem in Power System
    3.4.1 Formulation of the Economic Dispatch Problem
    3.4.2 Classical Economic Dispatch (Line Losses Neglected)
    3.4.3 Generator Limits Included
    3.4.4 Line Losses Considered
    3.4.5 Proposed OPF Formulation
  3.5 The Constraints
CHAPTER 4: PARTICLE SWARM OPTIMIZATION
  4.1 Introduction
  4.2 PSO Algorithm
  4.3 PSO Example
  4.4 Merits and Demerits of PSO Method
CHAPTER 5: CASE STUDY
  5.1 Introduction
  5.2 PSO Algorithm for OPF Problem
  5.3 Illustrative Example
CHAPTER 6: CONCLUSION AND FUTURE WORK
  6.1 Conclusion
  6.2 Future Work and Potential Applications
CHAPTER 7: INTERNATIONAL JOURNAL
  Research Paper: A Novel Approach to Solving the Optimal Power Flow Problem based on Particle Swarm Optimization
APPENDIX A: Pseudo-code for the example
APPENDIX B: Pseudo-code for the case study
REFERENCES
TABLE OF FIGURES

Figure 1: Tree diagram indicating OPF methodologies
Figure 2: Step in figuration
Figure 3: The direct-current (dc) system
Figure 4: Fuel-cost curve
Figure 5: System to be optimized
Figure 6: Incremental cost curves with constraints
Figure 7: Initial PSO state
Figure 8: Flow chart for solving the example
Figure 9: The values of x and y
Figure 10: The entire set of particles after iterations
Figure 11: Flow chart of the PSO algorithm
Figure 12: 5-bus electric power system
Figure 13: PSO algorithm for OPF problems
Figure 14: The convergence of the particles in the (V2, V3) plane
Figure 15: The convergence of the particles in the (P2, P3) plane
Figure 16: The convergence of the 1st particle in the PSO algorithm
Figure 17: Optimal generation and the incremental costs of three generators
TABLE OF ABBREVIATIONS

OPF   Optimal Power Flow
EDP   Economic Dispatch Problem
PSO   Particle Swarm Optimization
LR    Lagrangian Relaxation
LP    Linear Programming
QP    Quadratic Programming
IP    Interior Point
AI    Artificial Intelligence
GA    Genetic Algorithm
SA    Simulated Annealing
EP    Evolutionary Programming
ACO   Ant Colony Optimization
ABCO  Artificial Bee Colony Optimization
MIP   Mixed Integer Programming
SQP   Sequential Quadratic Programming
ANN   Artificial Neural Network
TS    Tabu Search
N-R   Newton-Raphson
NLP   Non-Linear Programming
CHAPTER 1
INTRODUCTION

1.1 Overview

In power systems, the problem of optimally scheduling the output of electric power plants and allocating the power flows in the transmission lines is becoming increasingly important. This is largely due to the major changes taking place in the electric power industry: the restructuring of system ownership and the implementation of competitive electricity markets. These changes allow individuals to access and participate in the energy markets, buying and/or selling energy and other services. As a result, the philosophy of optimization and operation of power systems is changing fundamentally.

Optimal Power Flow (OPF) is a numerical analysis of the flow of electric power in an interconnected system. The OPF problem has had a long history in its development. More than twenty-five years ago, Carpentier introduced a generalized formulation of the Economic Dispatch Problem (EDP) including voltage and other operating constraints. This formulation was named the Optimal Power Flow problem [4]. OPF programs based on mathematical programming approaches are used daily to solve very large OPF problems. However, they are not guaranteed to converge to the global optimum of the general non-convex OPF problem, although there is some empirical evidence on the uniqueness of the OPF solution within the domain of interest. The existing OPF approaches have some problems, which include not only the robustness of the optimization methodology used, but also the power system modeling.

Power-flow or load-flow studies are important for planning future expansion of power systems as well as for determining the best operation of existing systems. The OPF problem solution aims to optimize a selected objective function such as fuel cost via optimal adjustment of the power system control variables, while at the same time satisfying various equality and inequality constraints. The equality constraints are the power flow equations, while the inequality constraints are the limits on the control variables and the operating limits of the power system dependent variables. The control variables include the generator real powers, the generator bus voltages, the transformer tap settings and the reactive power of switchable VAR sources, while the dependent variables include the load bus voltages, the generator reactive powers and the line flows.

A new application of OPF in the solution of some problems requires especially


high performance of optimization algorithms. A wide variety of classical
optimization techniques have been applied in solving the OPF problems considering
a single objective function such as Non-Linear Programming [5], Quadratic
Programming [6], Linear Programming [7], Newton-based techniques [8],
Sequential unconstrained minimization technique [9], Interior point methods [10]
and Parametric method [11]. Generally, nonlinear programming based procedures
have many drawbacks such as insecure convergence properties and algorithmic
complexity. Quadratic programming based techniques have some disadvantages
associated with the piecewise quadratic cost approximation. Newton-based
techniques have a drawback of the convergence characteristics that are sensitive to
the initial conditions and they may even fail to converge due to the inappropriate
initial conditions. Sequential unconstrained minimization techniques are known to
exhibit numerical difficulties when the penalty factors become extremely large.
Although linear programming methods are fast and reliable, they have some
disadvantages associated with the piecewise linear cost approximation. Interior
point methods have been reported as computationally efficient; however, if the step
size is not chosen properly, the sub-linear problem may have a solution that is
infeasible in the original nonlinear domain. In addition, interior point methods, in
general, suffer from bad initial, termination, and optimality criteria and, in most
cases, are unable to solve nonlinear and quadratic objective functions.

This thesis emphasizes the development of a single-objective optimal power flow technique using the Particle Swarm Optimization (PSO) method, which overcomes some of the above-mentioned difficulties. This technique is motivated by the behavior of organisms such as fish schooling and bird flocking. Generally, PSO is characterized as simple in concept, easy to implement and computationally efficient.

1.2 Scope of thesis

The aim of the proposed work is to apply the PSO optimization technique to solve the single-objective OPF problem and to improve the quality of the solution. The objective of minimizing the generation fuel cost is achieved using the PSO technique, which gives very promising results compared with other optimization techniques.

1.3 Organization of thesis

Chapter 1: Gives an overview of the thesis problem and states that the scope of this thesis is to apply the PSO optimization technique to solve the single-objective OPF problem.

Chapter 2: The literature survey gives a brief description of the solution methodologies for OPF problems. They fall into two groups, conventional and intelligent/evolutionary methods, each with its own advantages and disadvantages.

Chapter 3: Presents power flow analysis and then explains two ways to solve this problem: Gauss(-Seidel) iteration and Newton-Raphson iteration. The goals and objectives of OPF are described in this chapter. It also contains the proposed OPF problem formulation and defines the various constraints.

Chapter 4: Describes PSO and presents the algorithm along with an applied example.

Chapter 5: Case study, applying the PSO algorithm to solve the OPF problem.

Chapter 6: Conclusion and future work.

Chapter 7: International journal: A Novel Approach to Solving the Optimal Power Flow Problem based on Particle Swarm Optimization.

[Thesis organization chart: Abstract; Chapter 1: Overview of the thesis problem; Chapter 2: Literature Survey; Chapter 3: Power Flow Analysis; Chapter 4: Particle Swarm Optimization; Chapter 5: Case Study; Chapter 6: Conclusion and Future Work; Chapter 7: International Journal; References and Appendix]
CHAPTER 2
LITERATURE SURVEY

2.1 Introduction

During the past few decades, researchers have incorporated simplified network models into the scheduling problem formulation by using Lagrangian relaxation (LR), in which the transmission constraints are relaxed with an additional set of Lagrangian multipliers [1-3]. It is very difficult to model the full AC transmission system in this fashion. Carpentier introduced a general formulation of the EDP including voltage and other operating constraints; this formulation was named the Optimal Power Flow [4]. OPF programs based on mathematical programming approaches are used daily to solve very large OPF problems. However, they are not guaranteed to converge to the global optimum of the general non-convex OPF problem, although there is some empirical evidence on the uniqueness of the OPF solution within the domain of interest. The existing OPF approaches have some problems, which include not only the robustness of the optimization methodology used, but also the power system modeling.

The OPF problem comprises the twin sub-problems of active power generation dispatch (EDP) and reactive power generation dispatch. The main purpose of the EDP is to determine the generation schedule of the electrical energy system that minimizes the total generation and operation cost while not violating any of the system operating constraints, such as line overloading and bus voltage profiles and deviations. On the other hand, the objective of reactive power dispatch is to minimize the active power transmission losses in an electrical system while satisfying all the system operating constraints. The objective function of the OPF can take forms other than minimizing the generation cost and the losses in the transmission system.

The OPF problem is often infeasible, due either to being badly posed or to the system being under heavy operational stress. In the latter case, the online calculation of the OPF problem is a critical function. Therefore, in cases where the original OPF does not have a feasible solution, it is desirable to be able to relax some softer constraints to produce the best engineering solution representing a solved power system operating state. If the infeasibility is the result of an operator or system error in the definition of some constraint limit, it is also very useful to remove or mark off the offending constraints. If both active and reactive powers are dispatchable in an electrical network, then the usual criterion for optimal operation is the minimization of generation cost. If only reactive power is dispatchable, then active power loss minimization is frequently the desired objective. This is also a convenient dummy objective if the main problem is to determine a feasible reactive power/voltage solution, or for other purposes. Any other objective can be used based on the utility's interest and needs. Performance and reliability of optimal power flow algorithms remain important problems in power system control and planning. New applications of OPF require especially high performance of the optimization algorithms. Some of the control variables of OPF problems can be adjusted only in discrete steps, but present OPF solution methods treat all variables as continuous. The adjustments, if any, for discrete variables are made by arbitrary suboptimal procedures. These procedures may result in a significantly higher objective function cost than a solution in which the adjustment of the discrete variables is more nearly optimal. In conventional power flow solutions all of the control variables, including those that can be adjusted only in discrete steps, are treated as continuous until a first tentative solution has been reached. Then each discrete variable is rounded to its nearest step, and a second and final solution is obtained using only the true continuous variables. This procedure is valid for the conventional power flow problem because the only solution requirement is feasibility, but this is not the case for the OPF problem, where an objective function must also be minimized. Therefore, rounding to the nearest step does not minimize the objective function, and it could even make it impossible to obtain a feasible solution.

2.2 Optimal Power Flow Challenges


The demand for an OPF tool to assess the system state and recommend control actions, for both offline and online studies, has been increasing since the first OPF paper was presented in the 1960s. The push to use OPF for the problems of today's deregulated industry, as well as for problems left unsolved in the vertically integrated industry, poses further challenges and calls for evaluating the capabilities of existing OPF in terms of its potential and abilities [15].

Many challenges before OPF remain to be answered. They can be listed as given below.

1. Because a large number and variety of constraints must be considered and the mathematical models are non-linear, OPF poses a big challenge to mathematicians as well as engineers in obtaining optimum solutions.
2. The deregulated electricity market seeks answers from OPF to address a variety of market participants, data model requirements, real-time processing, and the selection of appropriate costing for each unbundled service evaluation.
3. OPF must cope with response-time requirements, the modeling of externalities (loop flow, environmental constraints and simultaneous transfers), and practicality and sensitivity for online use.
4. How well can future OPF provide local or global control measures to counter the impact of critical contingencies that threaten system voltage and angle stability?
5. Future OPF has to address the full gamut of operation and planning environments in providing new generation facilities, unbundled transmission services and other resource allocations.

2.3 OPF Solution Methodologies

The OPF methods are broadly grouped as conventional and intelligent. The conventional methodologies include well-known techniques such as the Gradient method, Newton method, Quadratic Programming method, Linear Programming method and Interior Point method. Intelligent methodologies include recently developed and popular methods such as the Genetic Algorithm and Particle Swarm Optimization.

The further sub-classification of each methodology is given in the tree diagram below.

OPF Methods

Conventional Methods:
- Gradient Methods (Generalised Reduced Gradient, Reduced Gradient, Conjugate Gradient)
- Newton-based
- Linear Programming
- Quadratic Programming
- Interior Point

Intelligent Methods:
- Artificial Neural Networks
- Fuzzy Logic
- Evolutionary Programming
- Ant Colony
- Particle Swarm Optimization

Figure 1: Tree diagram indicating OPF Methodologies

2.3.1 Conventional Methods

Traditionally, conventional methods are used to effectively solve OPF. The application of these methods has been an area of active research in the recent past. The conventional methods are based on mathematical programming approaches and are used to solve OPF problems of different sizes. To meet the requirements of different objective functions, types of application and natures of constraints, the popular conventional methods are further subdivided into the following:

a) Gradient Methods [2, 5, 6]


b) Newton – based [35]
c) Linear Programming [2, 5, 6, 36]
d) Quadratic Programming [5]
e) Interior Point [2, 5, 6, 37]

a. Gradient Methods

The Gradient Method is applied to the OPF problem [29], the main motivation being the existence of the concepts of state and control variables, with the load flow equations providing a nodal basis for the elimination of the state variables. Good load flow packages provide the sensitivity information needed. This in turn helps in obtaining a reduced problem in the space of the control variables, with the load flow equations and the associated state variables eliminated.

Merits and Demerits of Gradient Method

The Merits and Demerits of Gradient Method are summarized and given below.

Merits
1. With the Gradient method, the Optimal Power Flow solution usually
requires 10 to 20 computations of the Jacobian matrix formed in the Newton
method.
2. The Gradient procedure is used to find the optimal power flow solution that
is feasible with respect to all relevant inequality constraints. It handles
functional inequality constraints by making use of penalty functions.
3. Gradient methods are better suited to highly constrained problems.
4. Gradient methods can accommodate non-linearity more easily than the Quadratic method.
5. Compact explicit gradient methods are very efficient, reliable, accurate and
fast.

This is true when the optimal step in the gradient direction is computed
automatically through quadratic developments.

Demerits
1. The higher the dimension of the gradient, the higher the accuracy of the OPF solution. However, the consideration of equality and inequality constraints and penalty factors makes the relevant matrices less sparse, which complicates the procedure and increases computational time.
2. The Gradient method suffers from the difficulty of handling all the inequality constraints usually encountered in optimal power flow.
3. During the problem-solving process, the direction of the gradient has to be changed often, and this leads to very slow convergence. This is especially pronounced during the enforcement of penalty functions; the selected degree of penalty has a bearing on the convergence.
4. Gradient methods basically exhibit slow convergence characteristics near the optimal solution.

b. Newton Method
In the area of power systems, Newton's method is well known for the solution of the power flow problem; it has been the standard solution algorithm for power flow for a long time. The Newton approach [40] is a flexible formulation that can be adapted to develop different OPF algorithms suited to the requirements of different applications. Although the Newton approach exists as a concept entirely apart from any specific method of implementation, it would not be possible to develop practical OPF programs without employing special sparsity techniques. The concept and the techniques together comprise the given approach. Other Newton-based approaches are possible. Newton's method [2, 35] is a very powerful solution algorithm because of its rapid convergence near the solution. This property is especially useful for power system applications because an initial guess near the solution is easily attained: system voltages will be near rated values, generator outputs can be estimated from historical data, and transformer tap ratios will be near 1.0 p.u.

Merits and Demerits of Newton Method

The Merits and Demerits of Newton Method are summarized and given below.

Merits

1. The method has the ability to converge fast.

2. It can handle inequality constraints very well.

3. In this method, binding inequality constraints are to be identified, which

helps in fast convergence.

4. The Newton approach is a flexible formulation that can be used to develop different OPF algorithms suited to the requirements of different applications.

5. With this method efficient and robust solutions can be obtained for problems

of any practical size.

6. Solution time varies approximately in proportion to network size and is

relatively independent of the number of controls or inequality constraints.

7. There is no need of user supplied tuning and scaling factors for the

optimization process.

Demerits

1. The penalty near a limit is very small, so the optimal solution tends to let the variable float over the limit.

2. It is not possible to develop practical OPF programs without employing

sparsity techniques.

3. Newton based techniques have a drawback of the convergence

characteristics that are sensitive to the initial conditions and they may even

fail to converge due to inappropriate initial conditions.

c. Linear Programming Method


Linear Programming (LP) method [2, 5] treats problems having constraints
and objective functions formulated in linear form with non-negative variables.
Basically the simplex method is well known to be very effective for solving LP
problems.

The Linear Programming approach has been advocated [13] on the grounds that:

1. The LP solution process is completely reliable.


2. The LP solutions can be very fast.
3. The accuracy and scope of linearized model is adequate for most
engineering purposes.

It may be noted that point (1) is certainly true, while point (2) depends on the specific algorithms and problem formulations. Observation (3) is frequently valid since the transmission network is quasi-linear, but it needs to be checked for any given system and application.

Merits and Demerits of Linear Programming Method


The Merits and Demerits of Linear Programming Method are summarized and
given below.

Merits
1. The LP method easily handles non-linearity constraints.
2. It is efficient in handling inequalities.
3. It deals effectively with local constraints.
4. It has the ability to incorporate contingency constraints.
5. The latest LP methods have overcome the difficulties of solving the non-separable loss minimization problem and the limitations on the modeling of generator cost curves.
6. There is no requirement to start from a feasible point. The process is entered with a solved or unsolved power flow. If a reactive balance is not initially achievable, the first power flow solution switches in or out the necessary amount of controlled VAR compensation.
7. The LP solution is completely reliable.
8. It has the ability to detect infeasible solutions.
9. The LP solution can be very fast.
10. The advantages of the LP approach, such as complete computational reliability and very high speed, make it suitable for real-time or study-mode purposes.

Demerits

1. It suffers from a lack of accuracy.

2. Although LP methods are fast and reliable, they have some disadvantages associated with the piecewise linear cost approximation.

d. Quadratic Programming Method


The objective function of the Quadratic Programming (QP) optimization model is quadratic and the constraints are in linear form. Quadratic Programming has higher accuracy than LP-based approaches. In particular, the most often used objective function is quadratic.

Quadratic Programming based optimization is used in power systems [18] for maintaining a desired voltage profile, maximizing power flow and minimizing generation cost. These quantities are generally controlled by the complex power generation, which usually has two limits. Here only minimization is considered, since maximization can be obtained by changing the sign of the objective function. Further, the quadratic functions are characterized by matrices and vectors.

Merits and Demerits of Quadratic Programming Method

The Merits and Demerits of Quadratic Programming Method are summarized


and given below.

Merits

1. The method is suited to infeasible or divergent starting points.

2. Optimal power flow in ill-conditioned and divergent systems can be solved in most cases.

3. The Quadratic Programming method does not require the use of penalty factors or the determination of a gradient step size, which can cause convergence difficulties. In this way convergence is very fast.

4. The method can solve both the load flow and economic dispatch problems.

5. During the optimization phase all intermediate results are feasible, and the algorithm indicates whether or not a feasible solution is possible.

6. The accuracy of the QP method is much higher compared to other established methods.

Demerits

1. The main problems of using Quadratic Programming in reactive power optimization are: the convergence of the approximating programming cycle (successive solution of quadratic programming and load flow problems); difficulties in obtaining a solution of the quadratic program when the dimension of the approximating QP problems is large; and the complexity and reliability of quadratic programming algorithms.

2. QP-based techniques have some disadvantages associated with the piecewise quadratic cost approximation.

e. Interior Point Method

The Interior Point (IP) Method [2, 5, 6] is one of the most efficient algorithms. It is a relatively new optimization approach that was applied to solve power system optimization problems in the late 1980s and early 1990s, as can be seen from the list of references [32]. The Interior Point Method can solve a large-scale linear programming problem by moving through the interior of the feasible region, rather than along its boundary as in the simplex method, to find an optimal solution. The IP method was originally proposed to solve linear programming problems; later it was implemented to efficiently handle quadratic programming problems.

Merits and Demerits of Interior Point Method


Merits

1. The Interior Point Method is one of the most efficient algorithms. It maintains good accuracy while achieving advantages in speed of convergence of as much as 12:1 in some cases when compared with other known linear programming techniques.

2. The Interior Point Method can solve a large scale linear programming

problem by moving through the interior, rather than the boundary as in the

simplex method, of the feasible region to find an optimal solution.

3. The Interior Point Method is preferably adapted to OPF due to its reliability,

speed and accuracy.

4. Automatic objective selection (Economic Dispatch, VAR planning and Loss

Minimization options) based on system analysis.

5. IP provides user interaction in the selection of constraints.

Demerits

1. Limitation due to starting and terminating conditions

2. Infeasible solution if step size is chosen improperly.

2.3.2 Computational Evolution Methods

To overcome the limitations and deficiencies of analytical methods, intelligent methods based on Artificial Intelligence (AI) techniques have been developed in the recent past. These methods can be classified into the following:

a) Artificial Neural Networks


b) Evolutionary Programming
c) Ant Colony Optimization
d) Particle Swarm Optimization

Generally, most of the classical optimization techniques mentioned in the preceding section apply sensitivity analysis and gradient-based optimization algorithms by linearizing the objective function and the system constraints around an operating point. The OPF problem is often infeasible, due either to being badly posed or to the system being under heavy operational stress. If both active and reactive powers are dispatchable in an electrical network, then the usual criterion for optimal operation is the minimization of generation cost. If only reactive power is dispatchable, then active power loss minimization is desired. Unfortunately, the OPF problem is a highly non-linear and multimodal optimization problem, i.e., there exists more than one local optimum. Hence, local optimization techniques, which are well elaborated, are not suitable for such a problem. Moreover, there is no criterion to decide whether a local solution is also the global solution. Therefore, conventional optimization methods that make use of derivatives and gradients are not able to identify the global optimum. In addition, many mathematical assumptions, such as convex, analytic, and differentiable objective functions, have to be made to simplify the problem. However, the OPF problem is in general an optimization problem with non-convex, non-smooth, and non-differentiable objective functions. It therefore becomes essential to develop optimization techniques that can overcome these drawbacks and handle such difficulties.

More recently, OPF has enjoyed renewed interest in a variety of formulations through the use of evolutionary optimization techniques to overcome the limitations of the mathematical programming approaches. A wide variety of advanced optimization techniques have been applied to the single-objective OPF problem, including the Genetic Algorithm (GA), Simulated Annealing (SA), Tabu Search (TS) and the Particle Swarm Optimization (PSO) algorithm. These are evolutionary algorithms which use the mechanics of evolution to move candidate solutions toward the global optimum. They give better results than heuristic and classical algorithms, and the reported results are promising and encouraging for further research in this direction. Unfortunately, recent research has identified some deficiencies in GA performance. The degradation in efficiency is apparent in applications with highly epistatic objective functions, i.e., where the parameters being optimized are highly correlated. In addition, the premature convergence of GA degrades its performance and reduces its search capability. The SA algorithm is a metaheuristic, and many choices are required to turn it into an actual algorithm. There is a clear trade-off between the quality of the solutions and the time required to compute them. The tailoring work required to account for different classes of constraints and to fine-tune the parameters of the algorithm can be rather delicate. The precision of the numbers used in the implementation of SA can have a significant effect upon the quality of the outcome.

In recent years, several evolutionary algorithms have been proposed for constrained engineering optimization problems, and many methods have been proposed for handling constraints, which is the key point of the optimization process. Recently a new evolutionary computational technique, called Particle Swarm Optimization, has been proposed and introduced. PSO is a population-based stochastic optimization technique developed by Dr. Kennedy and Dr. Eberhart in 1995, inspired by the social behavior of bird flocking and fish schooling. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. Particles change their position by flying around in a multidimensional search space until a relatively unchanged position has been encountered, or until computational limitations are exceeded. It has many advantages over the other optimization techniques mentioned in this thesis and has been successfully applied to various problems. Therefore, this optimization technique is adopted here to solve the single-objective OPF problem.
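To make the mechanism concrete, the following is a minimal, generic sketch of the canonical PSO velocity and position update just described (inertia, cognitive and social terms). The parameter values, function names and the quadratic test function are illustrative assumptions only; the OPF-specific implementation is developed in Chapters 4 and 5.

import random

def pso_minimize(fitness, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: particles follow their own best position and the swarm's global best."""
    dim = len(bounds)
    # random initial positions inside the bounds, zero initial velocities
    x = [[random.uniform(lo, hi) for (lo, hi) in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                    # personal best positions
    pbest_val = [fitness(xi) for xi in x]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best position

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive (personal) + social (global) terms
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # position update, clipped to the feasible box
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            f = fitness(x[i])
            if f < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], f
                if f < gbest_val:
                    gbest, gbest_val = x[i][:], f
    return gbest, gbest_val

# illustrative use: minimize a simple quadratic with minimum at (1, -2)
print(pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, bounds=[(-5, 5), (-5, 5)]))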

The major advantage of the intelligent methods is that they are relatively versatile in handling various qualitative constraints. These methods can find multiple optimal solutions in a single simulation run, so they are well suited to solving multi-objective optimization problems. In most cases, they can find the global optimum solution. The main advantages of intelligent methods are that they possess learning ability, are fast, and are appropriate for non-linear modeling, whereas large dimensionality and the choice of training methodology are among their disadvantages.

CHAPTER 3
OPTIMAL POWER FLOW

3.1 Introduction

The Optimal Power Flow module is an intelligent load flow that employs techniques to automatically adjust the power system control settings while simultaneously solving the load flow and optimizing the operating conditions within specific constraints. Optimal Power Flow uses state-of-the-art techniques, with barrier functions and infeasibility handling, to achieve accuracy and flexibility; its purpose is to determine the best way to operate a power system at a given instant. Usually "best" refers to minimizing the operating cost. OPF considers the impact of the transmission system and functionally combines the power flow with economic dispatch. It minimizes a cost function, such as operating cost, taking into account realistic equality and inequality constraints. A typical OPF problem seeks a dispatch of active power and/or reactive power by adjusting the appropriate control variables, so that a specific objective in operating the power system network is optimized (maximized or minimized) while feasibility with respect to the power system constraints dictated by the electrical network is maintained. The OPF problem is considered a static, non-linear, multi-objective optimization problem with both continuous and discrete control variables. In addition it is a non-convex problem that has local minima. The non-linearity of the power flow equations makes the OPF problem a non-linear constrained problem. The importance of the OPF problem in power systems is due not only to operational security considerations, but also to the projected annual savings resulting from operating the power system in an optimized state. OPF is one of the most important operational functions of the modern energy management system. The continuous control variables include unit active power outputs and generator bus voltage magnitudes, while the discrete variables include transformer tap settings and switchable shunt devices.

OPF has been widely used in power system operation and planning. It has been used as a tool to define the level of inter-utility power exchange. The primary goal of a generic OPF is to minimize the costs of meeting the load demand for a power system while maintaining its security. The costs associated with the power system may depend on the situation, but in general they can be attributed to the cost of generating power (MW) at each generator. In OPF, maintaining system security requires keeping each device of the power system within its desired limits. This includes maximum and minimum outputs for generators, maximum MVA flows on transmission lines and transformers, and keeping bus voltages within desired ranges. The OPF mainly addresses the steady state of the power system. The OPF performs all the control functions of the power system needed to achieve these goals. These functions may include generator and transmission system control. The OPF will control the generator MW outputs as well as the generator voltages, and for the transmission system it may control the tap ratio or phase shift angle of variable transformers, switched shunt control and other FACTS devices. A secondary goal of OPF is to determine the system marginal cost data. This aids in the pricing of MW transactions as well as the pricing of ancillary services such as voltage support through MVAr support.

The OPF has many applications which include:

1. The calculation of the optimum generation pattern, as well as all control variables, in order to achieve the minimum cost of generation while meeting the transmission system limitations.
2. Using either the current state of the power system or a short-term load forecast, the OPF can be set up to provide a preventive dispatch if the security constraints are incorporated.
3. In an emergency, that is, when some component of the system is overloaded or a bus is experiencing a voltage violation, the OPF can provide a corrective dispatch, which tells the system operators what adjustments to perform in order to mitigate the overload or voltage violation.
4. The OPF can also be run periodically to find the optimum settings for generation voltages, transformer taps and switchable capacitors or static VAr components (so-called Voltage-VAr optimization).
5. The OPF is routinely used in planning studies to determine the maximum stress that a planned transmission system can withstand.

3.2 Power Flow Analysis


3.2.1 Introduction

The purpose of a power system is to deliver the power required by the customers in real time, on demand, within acceptable voltage and frequency limits, and in a reliable and economic manner. The transmission system considered here is modeled as a set of buses, or nodes, interconnected by transmission links. Generators and loads, connected to various nodes of the system, inject power into and remove power from the transmission system.

In system operation and planning, it is also extremely important to consider the economy of operation. For example, we wish to determine, among all the possible allocations of generation, the one that is optimal in the sense of minimum production cost.

3.2.2 Power Flow Equations

We define the complex per-phase bus power $S_i$ as follows:

$$S_i = S_{Gi} - S_{Di} \qquad (3.1)$$

$S_i$ is what is left of $S_{Gi}$ after stripping away the local load $S_{Di}$. We can visualize $S_i$ by splitting a bus. Using conservation of complex power, we also have for the $i$th bus

$$S_i = \sum_{k=1}^{n} S_{ik}, \qquad i = 1, 2, \ldots, n \qquad (3.2)$$

where we sum $S_{ik}$ over all the transmission links connected to the $i$th bus.

We also define the bus current $I_i$:

$$I_i = I_{Gi} - I_{Di} = \sum_{k=1}^{n} I_{ik}, \qquad i = 1, 2, \ldots, n \qquad (3.3)$$

$I_i$ is the total phase-$a$ current entering the transmission system.

The relationship between the injected node currents and the node voltages is $\mathbf{I} = Y_{bus}\mathbf{V}$. From $\mathbf{I} = Y_{bus}\mathbf{V}$, we get for the $i$th component

$$I_i = \sum_{k=1}^{n} Y_{ik} V_k, \qquad i = 1, 2, \ldots, n \qquad (3.4)$$

We next calculate the $i$th bus power:

$$S_i = V_i I_i^* = V_i \Big(\sum_{k=1}^{n} Y_{ik} V_k\Big)^* = V_i \sum_{k=1}^{n} Y_{ik}^* V_k^*, \qquad i = 1, 2, \ldots, n \qquad (3.5)$$

Suppose we let

$$V_i = |V_i| e^{j\angle V_i} = |V_i| e^{j\theta_i}, \qquad \theta_{ik} \triangleq \theta_i - \theta_k, \qquad Y_{ik} = G_{ik} + jB_{ik} \qquad (3.6)$$

Note that we use a polar representation for the (complex) voltage but a rectangular representation for the (complex) admittance. $G_{ik}$ is called the conductance and $B_{ik}$ the susceptance. Then we have:

$$S_i = \sum_{k=1}^{n} |V_i||V_k| e^{j\theta_{ik}} (G_{ik} - jB_{ik}) \qquad (3.7)$$

$$= \sum_{k=1}^{n} |V_i||V_k| (\cos\theta_{ik} + j\sin\theta_{ik})(G_{ik} - jB_{ik}), \qquad i = 1, 2, \ldots, n \qquad (3.8)$$

Resolving the above equation into real and imaginary parts, we obtain:

$$P_i = \sum_{k=1}^{n} |V_i||V_k| (G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik}) \qquad (3.9)$$

$$Q_i = \sum_{k=1}^{n} |V_i||V_k| (G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik}) \qquad (3.10)$$
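As a quick numerical illustration of (3.5) and (3.9)-(3.10), the short sketch below evaluates the active and reactive injections at every bus from a given bus admittance matrix and a set of voltage phasors. The function name and the 2-bus data are invented for illustration; they are not taken from the case study of this thesis.

import numpy as np

def bus_injections(Ybus, V):
    # S_i = V_i * sum_k Y_ik^* V_k^*  (equation 3.5); real part is P_i, imaginary part is Q_i
    S = V * np.conj(Ybus @ V)
    return S.real, S.imag

# illustrative 2-bus example (admittances and voltages made up for demonstration)
Ybus = np.array([[10 - 30j, -10 + 30j],
                 [-10 + 30j, 10 - 30j]])
V = np.array([1.0 + 0.0j, 0.98 * np.exp(-1j * np.deg2rad(2.0))])
P, Q = bus_injections(Ybus, V)
print(P, Q)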

3.2.3 The Power flow problem

Some buses are supplied by generators. We can call these generator buses.
Other buses without generators are called load buses.

Operational considerations indicate that at a generator bus, the active power $P_{Gi}$ and the voltage magnitude $|V_i|$ may be specified (by varying the turbine power and the generator field current). At all buses we assume that the $S_{Di}$ are specified. In terms of bus power, then, we see that at a generator bus $P_i = P_{Gi} - P_{Di}$ may be specified, while at a load bus $S_i = -S_{Di}$ is specified. Thus at some buses $P_i$ and $|V_i|$ may be specified, at others $P_i$ and $Q_i$. One important point must be noted. In general, we cannot specify all the $P_i$ independently. There is a constraint imposed by the need to balance active power. With a lossless transmission system, the sum of the $P_i$ over all the buses equals zero; thus one of the $P_i$ is determined by specification of the rest. On the other hand, with a lossy system the sum of the $P_i$ must equal the $I^2R$ losses in the transmission system. A problem arises because these losses are not known accurately in advance of the power flow calculation. The resolution of this problem is simple and effective and takes care of both cases: we specify $P_i$ at all the buses but one. The injected power at this bus is left open "to take up the slack" and balance the active powers. It is conventional, but completely arbitrary, to number the buses so that the generator assigned this function is connected to bus 1. For this generator we do not specify $P_1$, or equivalently $P_{G1}$, but rather specify $V_1 = |V_1| \angle V_1$. Choosing $\angle V_1$ amounts to no more than picking a time reference. For a calculation of steady-state values, this choice is arbitrary; for convenience we ordinarily pick $\theta_1 = \angle V_1 = 0$.

In summary, there are three types of sources at the different buses:

1. A voltage source, assumed at bus 1.

2. $P$, $|V|$ sources, at the other generator buses.

3. $P$, $Q$ sources, at the load buses.

We also note the following terminology. Bus 1 is called a slack bus, swing bus or voltage reference bus. We prefer not to use the terminology voltage reference bus, to avoid confusion with the designation of the neutral as voltage reference (or datum) node. Buses with $P$, $|V|$ sources are called $P$, $|V|$, or voltage control, buses. Buses with only $P$, $Q$ sources are called $P$, $Q$, or load, buses.

Finally, it should be noted that while ordinarily a bus may be clearly identified as either a generator or a load bus, in the case of a load bus with capacitors the bus may be identified as a $P$, $Q$ bus if the capacitors supply a fixed reactive power, or it may be a $P$, $|V|$ bus if the capacitors are utilized to maintain a specified $P$ (= 0) and $|V|$. Sometimes $Q$ rather than $|V|$ is specified at a generator bus; in this case we include it with the load buses. Unless otherwise indicated, we assume voltage control at generator buses.

We now state two versions of the power flow problem. In both cases we assume
that bus 1 is a slack (or swing) bus. In Case I, we assume that all the remaining
buses are P, Q buses.

Case I: Given $V_1, S_2, S_3, \ldots, S_n$, find $S_1, V_2, V_3, \ldots, V_n$.

In Case II, we assume both $P$, $|V|$ and $P$, $Q$ buses. We number the buses so that buses $2, 3, \ldots, m$ are $P$, $|V|$ buses and $m+1, \ldots, n$ are $P$, $Q$ buses.

Case II: Given $V_1$, $(P_2, |V_2|), \ldots, (P_m, |V_m|)$, $S_{m+1}, \ldots, S_n$, find $S_1$, $(Q_2, \angle V_2), \ldots, (Q_m, \angle V_m)$, $V_{m+1}, \ldots, V_n$.

We now discuss the formulation by noting the following points

1. The preceding formulation concerns bus powers (and voltages). However, the $S_{Di}$ and $S_{Gi}$ are involved, since $S_i = S_{Gi} - S_{Di}$. Thus at the $P$, $Q$ buses $S_i = -S_{Di}$, while at the $P$, $|V|$ buses $P_i = P_{Gi} - P_{Di}$.

2. Case I corresponds to the one-generator case (at bus 1). Case II is the more typical case.

3. In both cases we assume that two out of four variables at each bus are given (complex $V_i$ specifies $|V_i|$ and $\angle V_i$, while complex $S_i$ specifies $P_i$ and $Q_i$), and we are asked to find the remaining two variables.

4. From (3.6) we can see that the $\angle V_i$ appear only in the differences $\theta_{ik}$. Thus, from these equations we can only solve for the differences. However, since $\angle V_1$ is assumed specified (= 0), all the $\angle V_i$ can then be calculated.

5. Equation (3.5) or (3.6) is implicit in the $V_i$, and it turns out that we need to solve implicit (and nonlinear) equations. Only in the case of solving for $S_1$ do we have an explicit equation.

6. Once the stated problem has been solved, we know all the $V_i$ and can then solve for the power flows or currents on individual transmission links. In particular, we can check whether stability margins and line or transformer thermal ratings are satisfied.

7. In the practical case we may pose the problem differently to take into account some practical limits on the dependent variables. Thus, in Case II, we may specify $Q_i^{\min} \le Q_i \le Q_i^{\max}$, $i = 2, 3, \ldots, m$. Then, in order not to overspecify the problem, it is the practice to relax the specification on a particular $|V_i|$ in the event that a limit on $Q_i$ is reached.

8. Suppose that a particular load is specified by its impedance $Z_{Di}$ instead of its complex power $S_{Di}$. This fits into the general scheme with the following modification: set $S_{Di} = 0$, and add $Y_{Di} = 1/Z_{Di}$ to the $i$th term on the diagonal of $Y_{bus}$.

9. The reader may wish to check that, for a normal system, the problem formulation is consistent with the use of $\pi$-equivalent circuits to model the transmission links.
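The bus classification above (slack, voltage-control and load buses) is exactly what a power flow code must record before iterating. A small, hypothetical data layout is sketched below; the field names are invented for illustration and simply note which quantities are specified ("spec") and which the solution must supply ("unknown") at each bus type.

# hypothetical bus records for Case II (all names and values are illustrative only)
buses = [
    {"id": 1, "type": "slack", "spec": {"Vmag": 1.05, "angle": 0.0}, "unknown": ["P", "Q"]},
    {"id": 2, "type": "PV",    "spec": {"P": 0.5,  "Vmag": 1.02},    "unknown": ["Q", "angle"]},
    {"id": 3, "type": "PQ",    "spec": {"P": -0.8, "Q": -0.3},       "unknown": ["Vmag", "angle"]},
]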

3.2.4 Solution by Gauss Iteration

Consider Case I of the power flow problem: given $V_1, S_2, S_3, \ldots, S_n$, find $S_1, V_2, V_3, \ldots, V_n$. We have

$$S_1 = V_1 \sum_{k=1}^{n} Y_{1k}^* V_k^* \qquad \text{(a)}$$

$$S_i = V_i \sum_{k=1}^{n} Y_{ik}^* V_k^*, \qquad i = 2, 3, \ldots, n \qquad \text{(b)}$$

Note: if we know $V_1, V_2, \ldots, V_n$ we can solve for $S_1$ explicitly using equation (a). Since we already know $V_1$, it only remains to find $V_2, V_3, \ldots, V_n$. These $n-1$ unknowns may be found from the $n-1$ equations of (b). Thus, the heart of the problem is the solution of $n-1$ implicit equations in the unknowns $V_2, V_3, \ldots, V_n$, where $V_1$ and $S_2, S_3, \ldots, S_n$ are known. Equivalently, taking complex conjugates, we can replace equation (b) by

$$S_i^* = V_i^* \sum_{k=1}^{n} Y_{ik} V_k, \qquad i = 2, 3, \ldots, n \qquad (3.11)$$

We now rearrange equation (3.11) into a form in which a solution by iteration may be attempted. It should be noted that there are alternative ways of setting up the problem.

Dividing equation (3.11) by $V_i^*$ and separating out the $Y_{ii}$ term, we can rewrite (3.11) as

$$\frac{S_i^*}{V_i^*} = Y_{ii} V_i + \sum_{k=1,\, k\ne i}^{n} Y_{ik} V_k, \qquad i = 2, 3, \ldots, n \qquad (3.12)$$

or, rearranging,

$$V_i = \frac{1}{Y_{ii}} \left( \frac{S_i^*}{V_i^*} - \sum_{k=1,\, k\ne i}^{n} Y_{ik} V_k \right), \qquad i = 2, 3, \ldots, n \qquad (3.13)$$

Thus, we get $n-1$ implicit nonlinear algebraic equations in the unknown complex $V_i$, in the form

$$V_2 = h_2(V_2, V_3, \ldots, V_n)$$
$$V_3 = h_3(V_2, V_3, \ldots, V_n)$$
$$\vdots$$
$$V_n = h_n(V_2, V_3, \ldots, V_n) \qquad (3.14)$$

where the $h_i$ are given by (3.13) [e.g., $h_2$ is the right-hand side of the first equation (with $i = 2$), etc.]. The numbering in (3.14) is awkward, and we therefore renumber. Define a complex vector $\mathbf{x}$ with components $x_1 = V_2$, $x_2 = V_3, \ldots, x_N = V_n$, and renumber the equations in a similar fashion. We then get $N = n - 1$ equations in the $n-1$ unknown variables. In vector notation, (3.14) is then of the form

$$\mathbf{x} = \mathbf{h}(\mathbf{x}) \qquad (3.15)$$

We solve (3.15) by iteration. In the simplest case we use the formula

$$\mathbf{x}^{\nu+1} = \mathbf{h}(\mathbf{x}^{\nu}), \qquad \nu = 0, 1, \ldots \qquad (3.16)$$

where the superscript indicates the iteration number. Thus, starting with an initial value $\mathbf{x}^0$ [which we pick by guessing the solution of (3.15)], we can generate the sequence

$$\mathbf{x}^0, \mathbf{x}^1, \mathbf{x}^2, \ldots$$

If the sequence converges (i.e., $\mathbf{x}^{\nu} \to \mathbf{x}^*$), then $\mathbf{x}^* = \mathbf{h}(\mathbf{x}^*)$.

Figure 2: Step in figuration

Thus $\mathbf{x}^*$ is a solution of (3.15). The procedure can be visualized in a two-dimensional real space as shown in Figure 2. This solution $\mathbf{x}^*$ is also called a fixed point of $\mathbf{h}(\cdot)$, which is good terminology because while other values of $\mathbf{x}$ cause $\mathbf{h}(\mathbf{x})$ to be different from $\mathbf{x}$ (i.e., $\mathbf{h}$ applied to $\mathbf{x}$ moves it), $\mathbf{h}$ applied to $\mathbf{x}^*$ leaves it fixed.

In practice we stop the iteration when the changes in $\mathbf{x}^{\nu}$ become very small. Thus, defining $\Delta\mathbf{x}^{\nu} \triangleq \mathbf{x}^{\nu+1} - \mathbf{x}^{\nu}$, we stop when

$$\|\Delta\mathbf{x}^{\nu}\| \le \epsilon$$

where $\epsilon$ is a small positive number (typically on the order of 0.0001 p.u.) and where $\|\cdot\|$ indicates a norm. Specific examples of norms are $\|\mathbf{x}\|_{\infty} = \max_i |(\mathbf{x})_i|$, called the sup norm, and

$$\|\mathbf{x}\| = \left( \sum_{i=1}^{N} |(\mathbf{x})_i|^2 \right)^{1/2}$$

called the Euclidean norm.

Gauss iteration, v = 0,1,2,….:

x1v1  h1 ( x1v , x2v ,....., xNv )


x2v1  h2 ( x1v , x2v ,....., xNv )

xNv1  hN ( x1v , x2v ,....., xNv )
(3.17)

In carrying out the computation (normally by digital computer) we process the
equations from top to bottom. We now observe that when we solve for x_2^(v+1) we
already know x_1^(v+1). Since x_1^(v+1) is (presumably) a better estimate than x_1^v, it seems
reasonable to use the updated value. Similarly, when we solve for x_3^(v+1) we can use the
values of x_1^(v+1) and x_2^(v+1). This line of reasoning leads to the modification called the
Gauss-Seidel iteration.

Gauss-Seidel iteration, v = 0, 1, 2, ...:

    x_1^(v+1) = h_1(x_1^v, x_2^v, ..., x_N^v)
    x_2^(v+1) = h_2(x_1^(v+1), x_2^v, ..., x_N^v)
    x_3^(v+1) = h_3(x_1^(v+1), x_2^(v+1), x_3^v, ..., x_N^v)
      ⋮
    x_N^(v+1) = h_N(x_1^(v+1), x_2^(v+1), ..., x_(N-1)^(v+1), x_N^v)
                                                                          (3.18)

Note: Gauss-Seidel is actually easier to program than Gauss and it converges faster.

We conclude this section by considering Case II of the power flow problem: given
V1, (P2, |V2|), ..., (Pm, |Vm|), S_(m+1), ..., Sn, find S1, (Q2, ∠V2), ..., (Qm, ∠Vm), V_(m+1), ..., Vn. There is a
simple modification of the procedures developed to handle Case I that works for
Case II, which will now be described. Equation (3.13) is the basis for the iteration
and we show it here in a form suitable for Gauss iteration in Case II:

    Ṽ_i^(v+1) = (1 / Y_ii) [ (P_i − jQ_i^v) / (V_i^v)^* − Σ_{k=1, k≠i}^{n} Y_ik V_k^v ],    i = 2, 3, ..., n        (3.19)

For the load buses (i.e., i = m+1, ..., n), Pi and Qi are known and the iteration
proceeds just as in Case I [i.e., in (3.19) the superscript on Qi and the tilde on Ṽ_i^(v+1) are
ignored]. For the generator buses (i.e., i = 2, ..., m), Qi is not specified, but we can do
a side calculation to estimate it on the basis of the vth-step voltages, already
calculated. This yields

    Q_i^v = Im{ V_i^v Σ_{k=1}^{n} Y_ik^* (V_k^v)^* },    i = 2, 3, ..., m        (3.20)

and this is the value we use in (3.19). We then use (3.19) to calculate Ṽ_i^(v+1), a
preliminary version of V_i^(v+1).

Since |Vi| is specified for the generator buses, we then replace |Ṽ_i^(v+1)| by |Vi|^spec and
obtain V_i^(v+1).

We have been describing Gauss iteration. The extension to Gauss-Seidel
iteration in Case II is exactly the same as in Case I. We simply use the most up-to-
date values of Vi at each stage of the iteration.

Example of Gauss-Seidel iteration

Use the Gauss-Seidel method to find the solution of the following equations

    x1 + x1·x2 = 10
    x1 + x2 = 6

with the following initial estimates:

(a) x1(0) = 1 and x2(0) = 1
(b) x1(0) = 1 and x2(0) = 2

Continue the iteration until the changes in x1(k) and x2(k) are less than 0.001.

Solving for x1 and x2 from the first and second equation respectively results in

    x1 = 10 / (1 + x2)
    x2 = 6 − x1

a. With initial estimates x1(0) = 1 and x2(0) = 1, the iterative sequence becomes

    x1(1) = 10/(1 + 1) = 5              x2(1) = 6 − 5 = 1
    x1(2) = 10/(1 + 1) = 5              x2(2) = 6 − 5 = 1

b. With initial estimates x1(0) = 1 and x2(0) = 2, the iterative sequence becomes

    x1(1) = 10/(1 + 2)      = 3.3333    x2(1) = 6 − 3.3333 = 2.6666
    x1(2) = 10/(1 + 2.6666) = 2.7272    x2(2) = 6 − 2.7272 = 3.2727
    x1(3) = 10/(1 + 3.2727) = 2.3404    x2(3) = 6 − 2.3404 = 3.6596
    x1(4) = 10/(1 + 3.6596) = 2.1461    x2(4) = 6 − 2.1461 = 3.8539
    x1(5) = 10/(1 + 3.8539) = 2.0602    x2(5) = 6 − 2.0602 = 3.9398
    x1(6) = 10/(1 + 3.9398) = 2.0244    x2(6) = 6 − 2.0244 = 3.9756
    x1(7) = 10/(1 + 3.9756) = 2.0098    x2(7) = 6 − 2.0098 = 3.9902
    x1(8) = 10/(1 + 3.9902) = 2.0039    x2(8) = 6 − 2.0039 = 3.9961
    x1(9) = 10/(1 + 3.9961) = 2.0016    x2(9) = 6 − 2.0016 = 3.9984
    x1(10) = 10/(1 + 3.9984) = 2.0006   x2(10) = 6 − 2.0006 = 3.9994

The sequence in (a) settles at once at the solution (5, 1), while the sequence in (b)
converges toward the second solution (2, 4).
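A minimal MATLAB sketch of the loop behind case (b) is given below; the update order
(x1 first, then x2 using the new x1) is exactly the Gauss-Seidel idea, and the 0.001
tolerance is the one stated in the problem.

    % Gauss-Seidel sketch for x1 = 10/(1+x2), x2 = 6 - x1, starting at (1, 2)
    x1 = 1;  x2 = 2;
    for k = 1:100
        x1_new = 10/(1 + x2);              % solve the first equation for x1
        x2_new = 6 - x1_new;               % second equation, already using the new x1
        done = abs(x1_new - x1) < 0.001 && abs(x2_new - x2) < 0.001;
        x1 = x1_new;  x2 = x2_new;
        if done, break; end                % stop when both changes are below 0.001
    end
    fprintf('x1 = %.4f, x2 = %.4f after %d iterations\n', x1, x2, k);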

3.2.5 Newton-Raphson iteration

1
x v 1  x v   f '( x v )  f ( xv ) (3.21)

The iterative scheme specified in (3.21) is the well-known Newton-Raphson


(N-R) iteration formula in the one-dimensional case. By analogy we can also easily
obtain the n-dimensional Newton-Raphson iteration formula. We replace the scalars
x and f(x) by n-vectors x and f(x). It is also reasonable that the scalar derivative
operator f’(x) must be generalized into an n  n matrix operator.

One way to obtain the generalization is to write appropriate Taylor series


expansions for f(x +  x) and f(x +  x) and to compare corresponding terms. In the
case of the scalar Taylor series, we get

f(x +  x)= f(x) +f’(x)  x + h.o.t


(3.22)

38
where "h.o.t." stands for "higher-order terms". In the case of the vector Taylor series,
instead of one equation we get n scalar equations:

    f_1(x + Δx) = f_1(x) + (∂f_1(x)/∂x_1) Δx_1 + ... + (∂f_1(x)/∂x_n) Δx_n + h.o.t.
    f_2(x + Δx) = f_2(x) + (∂f_2(x)/∂x_1) Δx_1 + ... + (∂f_2(x)/∂x_n) Δx_n + h.o.t.
      ⋮
    f_n(x + Δx) = f_n(x) + (∂f_n(x)/∂x_1) Δx_1 + ... + (∂f_n(x)/∂x_n) Δx_n + h.o.t.
                                                                          (3.23)

where the notation ∂f_i(x)/∂x_j means the partial derivative of f_i with respect to x_j
evaluated at x. Using matrix notation, we get

    f(x + Δx) = f(x) + J(x)Δx + h.o.t.                                    (3.24)

where

    J(x) = [ ∂f_1(x)/∂x_1  ...  ∂f_1(x)/∂x_n ]        Δx = [ Δx_1 ]
           [      ⋮                  ⋮       ]             [ Δx_2 ]
           [ ∂f_n(x)/∂x_1  ...  ∂f_n(x)/∂x_n ]             [   ⋮  ]
                                                           [ Δx_n ]
                                                                          (3.25)

J(x) is called the Jacobian matrix of f evaluated at x. Comparing (3.24) and (3.22),
it is reasonably clear that J(x) is the generalization of the scalar derivative f′(x). In
this case, by analogy with the scalar case (3.21), we get the more general N-R
iteration formula,

    x^(v+1) = x^v − [J(x^v)]^(−1) f(x^v)                                  (3.26)

We can comment on this equation as follows:

1. Usually, rather than J(x^v) we will use the simpler notation J^v. Occasionally,
to leave room for the inverse symbol, we will use the notation J_v instead.
2. We do not need to use analogy to derive (3.26). We can use the following
reasoning. Suppose that we want to solve f(x) = 0. Try x^v. Suppose that
f(x^v) ≠ 0 but is small (i.e., x^v is pretty close to the exact solution). How
should we pick the next approximation x^(v+1)? One way: let x^(v+1) = x^v + Δx^v
with Δx^v to be determined; we expect Δx^v to be small. Then, using the Taylor
series, f(x^(v+1)) = f(x^v + Δx^v) = f(x^v) + J^v Δx^v + h.o.t. Neglecting the h.o.t, we
can pick Δx^v so that f(x^(v+1)) = 0. We get Δx^v = x^(v+1) − x^v = −[J^v]^(−1) f(x^v), which
is the same as (3.26). If the h.o.t are negligible, we get very fast convergence.
3. The h.o.t are usually negligible as Δx → 0. At the beginning, to improve the
initial guess x^0, a few steps of Gauss-Seidel iteration may be used before
the N-R iteration is started.
4. A disadvantage of N-R is the need to update J every iteration. Sometimes we
can update less often and still get good results.
5. In practice we do not evaluate the inverse matrix. Taking inverses is
computationally expensive and not really needed. Instead, using

    Δx^v = x^(v+1) − x^v                                                  (3.27)

we can write (3.26) as

    J^v Δx^v = −f(x^v)                                                    (3.28)

and solve for Δx^v by Gaussian elimination. Of course, once we know Δx^v
we can find x^(v+1) = x^v + Δx^v and proceed to the next iteration. We emphasize
that (3.28) is just a rearrangement of (3.26) to suggest the method of solution
for the unknown x^(v+1).
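As a small illustration of comment 5, one N-R step can be coded in MATLAB without
forming the inverse; this is a sketch, assuming the current estimate x, the residual
vector f and the Jacobian J have already been evaluated.

    % One Newton-Raphson step: solve (3.28) by Gaussian elimination (backslash)
    dx    = J \ (-f);        % J*dx = -f(x^v)
    x_new = x + dx;          % x^(v+1) = x^v + dx^v, as in (3.27)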

Example

Given the direct-current (dc) system shown in Figure 3, use the Newton-Raphson
method to find the (dc) bus voltages V2 and V3 and find PG1.

Solution

For simplicity we are using the same notation, although all the variables are real
quantities. We can easily form Ybus by inspection:

    Ybus = 100 × [  2  −1  −1
                   −1   2  −1
                   −1  −1   2 ]

In what corresponds to (3) we write the power flow equations

    P1 = 200V1² − 100V1V2 − 100V1V3
    P2 = −100V2V1 + 200V2² − 100V2V3
    P3 = −100V3V1 − 100V3V2 + 200V3²
                                                                          (3.29)

Figure 3: The direct-current (dc) system

Bus 1 is the slack bus, with V1 known and P1 unknown. We temporarily strip
away the first equation and solve the remaining two for the unknown V2 and V3 .

In using Newton-Raphson the next step is to put the equation in the form f(x)
= 0. The simplest way is to subtract the left sides from the right sides, or vice versa.
It does not matter which we do, the solution algorithm will be identical in either
case.

Subtracting left from right and putting in the known values of
P2 = −PD2 = −1.0, P3 = −PD3 = −0.5, V1 = 1.0, we get

    f1(x) = 1.0 − 100V2 + 200V2² − 100V2V3 = 0
    f2(x) = 0.5 − 100V3 − 100V2V3 + 200V3² = 0

The first component of x is V2, the second is V3; with that understanding we will not
always use the x1, x2 notation.

Next, we find the Jacobian matrix,

    J(x) = [ ∂f1/∂V2  ∂f1/∂V3 ]  =  100 × [ −1 + 4V2 − V3        −V2      ]
           [ ∂f2/∂V2  ∂f2/∂V3 ]           [      −V3       −1 − V2 + 4V3 ]

In the 2 × 2 case it is so easy to find the inverse that we will solve using (3.26); in
higher-dimensional cases we would almost certainly use (3.28).

Starting with a "flat profile" (i.e., V2 = 1, V3 = 1), we get

    J0 = 100 × [  2  −1 ]      (J0)^(−1) = (1/300) [ 2  1 ]      f(x0) = [ 1.0 ]
               [ −1   2 ]                          [ 1  2 ]              [ 0.5 ]

Then, using (3.26) yields

    x1 = [ 1 ]  −  (1/300) [ 2  1 ] [ 1.0 ]  =  [ 0.991667 ]
         [ 1 ]             [ 1  2 ] [ 0.5 ]     [ 0.993333 ]

Continuing to the next iteration gives us

    J1 = 100 × [  1.973333  −0.991667 ]
               [ −0.993333   1.981667 ]

    (J1)^(−1) = (1/292.54) [ 1.981667  0.991667 ]
                           [ 0.993333  1.973333 ]

    f(x1) = [ 0.00843 ]
            [ 0.00323 ]

Note that x1 is really a very good estimate. We want f(x) = 0 and already f(x1) is
almost zero. Note the great improvement from f(x0) to f(x1) in just one step.
Continuing the iteration, we have

    x2 = [ 0.991667 ]  −  (1/292.54) [ 1.981667  0.991667 ] [ 0.00843 ]  =  [ 0.991599 ]
         [ 0.993333 ]                [ 0.993333  1.973333 ] [ 0.00323 ]     [ 0.993283 ]

Since the change between x1 and x2 is less than 0.0005, we stop the iteration.
Alternatively, we can compute f(x2). We get

    f(x2) = [ 0.000053 ]    which is suitably small.
            [ 0.000040 ]

Thus the objective of finding the x for which f(x) = 0 seems very well met. An
interpretation using the power flow equations (3.29) is meaningful. Using the values
V2 = 0.991599 and V3 = 0.993283 found after two iterations, the right side of the
second equation and the given P2 on the left side match very well; the mismatch is
only 0.000053. The mismatch in the third equation is only 0.000040.

Finally, to complete the problem, we find P1 by using the first equation in (3.29).
We find P1 = 1.511800. Note that since PD2 + PD3 = 1.5, the I²R loss in the
transmission system is 0.011800.
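The whole example can be reproduced with a few lines of MATLAB; the sketch below
simply codes the f(x) and Jacobian derived above, with the flat start and the 0.0005
stopping test used in the text.

    % Newton-Raphson for the dc example: unknowns V = [V2; V3], with V1 = 1.0
    V = [1; 1];                                        % flat start
    for v = 1:10
        V2 = V(1);  V3 = V(2);
        f = [ 1.0 - 100*V2 + 200*V2^2 - 100*V2*V3;
              0.5 - 100*V3 - 100*V2*V3 + 200*V3^2 ];
        J = 100*[ -1 + 4*V2 - V3,  -V2;
                  -V3,             -1 - V2 + 4*V3 ];
        dV = -J\f;                                     % solve J*dV = -f, as in (3.28)
        V  = V + dV;
        if max(abs(dV)) < 0.0005, break; end           % stopping test used in the text
    end
    P1 = 200 - 100*V(1) - 100*V(2);                    % slack power, first equation of (3.29) with V1 = 1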

3.2.6 Application to power flow equations

We will start by considering the use of (3.28) to solve the power flow
equations of Case I. For the N-R calculation the real form of the power flow equations is
used. These may be derived from (3.7) by taking the real and imaginary parts. We
get

    P_i = Σ_{k=1}^{n} |V_i||V_k| [G_ik cos(θ_i − θ_k) + B_ik sin(θ_i − θ_k)],    i = 1, 2, ..., n

    Q_i = Σ_{k=1}^{n} |V_i||V_k| [G_ik sin(θ_i − θ_k) − B_ik cos(θ_i − θ_k)],    i = 1, 2, ..., n
                                                                          (3.30)

where θ_i = ∠V_i.

Just as in the solution by Gauss iteration discussed earlier, we strip away the first equations
(involving P1 and Q1). In the remaining equations, this being a Case I power flow
problem, the Pi and Qi on the left sides of the equations are specified numbers. The
right sides are functions of |Vi| and θi. We assume that |V1| and θ1 (= 0) are known. It
remains to find the n − 1 unknown |Vi| and n − 1 unknown θi in the equations on the
right. It is convenient to define the (n − 1)-vectors θ, |V|, and their composite vector
x as follows:

    θ = [ θ_2 ]      |V| = [ |V_2| ]      x = [  θ  ]
        [  ⋮  ]            [   ⋮   ]          [ |V| ]
        [ θ_n ]            [ |V_n| ]
                                                                          (3.31)

With this definition the right sides of (3.30) are functions of the unknown x, and we
wish to introduce notation that makes that dependence explicit. Thus define the
functions P_i(x) and Q_i(x) by

    P_i(x) = Σ_{k=1}^{n} |V_i||V_k| [G_ik cos(θ_i − θ_k) + B_ik sin(θ_i − θ_k)],    i = 1, 2, ..., n

    Q_i(x) = Σ_{k=1}^{n} |V_i||V_k| [G_ik sin(θ_i − θ_k) − B_ik cos(θ_i − θ_k)],    i = 1, 2, ..., n
                                                                          (3.32)

The notation is a natural one since for any given x the right sides are the active and
reactive components of the bus power. We can then replace (3.30) with equivalent
but notationally simpler power flow equations; at the same time we will strip away the
first (active and reactive) equations. We get

    P_i = P_i(x),    i = 2, 3, ..., n
    Q_i = Q_i(x),    i = 2, 3, ..., n
                                                                          (3.33)

In these equations the Pi and Qi are specified constants, while the P_i(x) and Q_i(x)
are specified functions of the unknown x. In the course of the iterations we will be
picking a sequence of values x^v in an effort to make the right sides match the given
left sides (i.e., to drive the mismatches to zero). We now set up the equations in the
form f(x) = 0. We subtract the left sides of (3.33) from the right sides to get

    P_i(x) − P_i = 0,    i = 2, 3, ..., n
    Q_i(x) − Q_i = 0,    i = 2, 3, ..., n
                                                                          (3.34)

Equation (3.34) identifies the 2n − 2 components of f(x). Thus f_1(x) = P_2(x) − P_2,
f_2(x) = P_3(x) − P_3, ..., f_(2n−2)(x) = Q_n(x) − Q_n. In matrix notation (3.34) becomes

    f(x) = [ P_2(x) − P_2 ]
           [      ⋮       ]
           [ P_n(x) − P_n ]  =  0                                         (3.35)
           [ Q_2(x) − Q_2 ]
           [      ⋮       ]
           [ Q_n(x) − Q_n ]

We next consider J, the Jacobian of f. It is convenient to partition it as follows:

    J = [ J11  J12 ]
        [ J21  J22 ]
                                                                          (3.36)

Each partition of the matrix J is (n − 1) × (n − 1). J11 is made up of the terms ∂P_i(x)/∂θ_k,
J12 has the terms ∂P_i(x)/∂|V_k|, J21 the terms ∂Q_i(x)/∂θ_k, and J22 the terms
∂Q_i(x)/∂|V_k|. These terms may be evaluated explicitly using (3.32). But first let us
find the form of the N-R iteration. For convenience we repeat (3.28):

    J^v Δx^v = −f(x^v)                                                    (3.37)

J has been partitioned; x, and by extension Δx, have partitioned forms. It remains
to partition f(x), and at the same time we would like to get rid of the minus sign in
(3.28). Noting (3.30), we define the so-called mismatch vectors:

    ΔP(x) = [ P_2 − P_2(x) ]        ΔQ(x) = [ Q_2 − Q_2(x) ]
            [      ⋮       ]                [      ⋮       ]
            [ P_n − P_n(x) ]                [ Q_n − Q_n(x) ]
                                                                          (3.38)

and we can express f(x) as follows:

    f(x) = − [ ΔP(x) ]
             [ ΔQ(x) ]
                                                                          (3.39)

Using (3.31), (3.36), and (3.38) in (3.28), we finally get

    [ J11  J12 ]^v  [  Δθ  ]^v     [ ΔP(x^v) ]
    [ J21  J22 ]    [ Δ|V| ]    =  [ ΔQ(x^v) ]
                                                                          (3.40)

This is a form in which the power flow equations may be solved by N-R iteration. Two
comments on (3.40):

1. The right sides are defined by (3.38) and represent the mismatch between the
specified values of P and Q and the corresponding values obtained with the
trial values x^v. As the iteration proceeds, we expect these mismatch terms
to go to zero.
2. The components of x^v are specified in (3.31). To find x^(v+1), we solve (3.40)
for Δx^v and use x^(v+1) = x^v + Δx^v. At that point we can update the mismatch
vectors and the Jacobian matrix and continue iterating.
We next calculate the elements of the Jacobian submatrices using (3.32). Let us
use the following notation: let J11_pq designate the pq element of the Jacobian
submatrix J11, etc., for all four submatrices. We will also adopt the notation
θ_pq = θ_p − θ_q.

We have the following. For the off-diagonal elements (p ≠ q),

    J11_pq = ∂P_p(x)/∂θ_q   = |V_p||V_q| (G_pq sin θ_pq − B_pq cos θ_pq)
    J21_pq = ∂Q_p(x)/∂θ_q   = −|V_p||V_q| (G_pq cos θ_pq + B_pq sin θ_pq)
    J12_pq = ∂P_p(x)/∂|V_q| = |V_p| (G_pq cos θ_pq + B_pq sin θ_pq)
    J22_pq = ∂Q_p(x)/∂|V_q| = |V_p| (G_pq sin θ_pq − B_pq cos θ_pq)

and for the diagonal elements,

    J11_pp = ∂P_p(x)/∂θ_p   = −Q_p(x) − B_pp |V_p|²
    J21_pp = ∂Q_p(x)/∂θ_p   = P_p(x) − G_pp |V_p|²
    J12_pp = ∂P_p(x)/∂|V_p| = P_p(x)/|V_p| + G_pp |V_p|
    J22_pp = ∂Q_p(x)/∂|V_p| = Q_p(x)/|V_p| − B_pp |V_p|
                                                                          (3.41)
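As an illustration of how (3.30)–(3.38) are used in practice, the sketch below evaluates
the injections P_i(x), Q_i(x) and the mismatch vectors for a trial point; the variable names
(Y_bus, Vmag, theta, P_spec, Q_spec) are assumptions made here for the example, with
bus 1 taken as the slack bus.

    % Sketch: mismatch vectors dP, dQ of (3.38) at a trial point x = (theta, Vmag)
    G = real(Y_bus);  B = imag(Y_bus);
    n = length(Vmag);
    P_calc = zeros(n,1);  Q_calc = zeros(n,1);
    for p = 1:n
        for k = 1:n
            t = theta(p) - theta(k);
            P_calc(p) = P_calc(p) + Vmag(p)*Vmag(k)*(G(p,k)*cos(t) + B(p,k)*sin(t));
            Q_calc(p) = Q_calc(p) + Vmag(p)*Vmag(k)*(G(p,k)*sin(t) - B(p,k)*cos(t));
        end
    end
    dP = P_spec(2:n) - P_calc(2:n);     % active power mismatch, slack bus stripped away
    dQ = Q_spec(2:n) - Q_calc(2:n);     % reactive power mismatch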

3.3 General OPF problem formulation

The OPF problem is to optimize the steady state performance of a power


system in terms of an objective function while satisfying several equality and
inequality constraints. Mathematically, the OPF problem can be formulated as
follows:

    Min F(x,u)                                                            (3.42)

subject to satisfaction of the nonlinear equality constraints:

    gE(x,u) = 0                                                           (3.43)

and nonlinear inequality constraints:

    go(x,u) ≤ 0
    gc(x,u) ≤ 0
                                                                          (3.44)

Where:

1. The vector of variables is partitioned into the controllable quantities (control


variables) u and the dependent (state) variables x
2. The objective function F(x,u) is a scalar quantity and is the objective
function of the optimization problem. This function represents, for instance,
economic and security-oriented interests of the power utility.
3. gE(x,u) = 0 are the equality constraints.
4. go(x,u) ≤ 0 are the operating constraints. Most network state variables are
not allowed to exceed certain lower and upper limits. These limitations are
"soft" constraints, corresponding to security- and power-quality-based
limitations and requirements. Some of the most common operating constraints
are limitations on:
a) voltage magnitude at load buses
b) reactive power of PV-generators
c) branch currents, branch MW,MVAR,MVA flows
d) angle/voltage magnitude drop along a line
e) slack bus active power output limits
5. gc(x,u) ≤ 0 are the control variable constraints. Control variables must not
exceed their lower and upper limits. These can be "hard"
constraints, especially when corresponding to the operating range of physical
apparatus. The most common control variable constraints are:

a) transformer load tap changer magnitudes
b) active generating power
c) voltage magnitude at PV buses
d) switched capacitor or reactors settings
e) MW interchange transactions
f) Phase shift transformer tap position
g) Reactive injection for a static VAR compensator

The OPF problem has many control variables to be adjusted, while the economic
dispatch problem and reactive power generation dispatch have far fewer. The
control variables u of the OPF problem can be stated as:

    u^T = [Q_C^T  T_C^T  V_G^T  P_G^T]                                    (3.45)

while its state variables x are stated as

    x^T = [V_L^T  θ^T  P_SG  Q_G^T]                                       (3.46)

where:

QC: reactive power supplied by all shunt reactors

TC: transformer load tap changer magnitudes

VG: voltage magnitude at PV buses

PG: active power generated at the PV buses

NL: number of load buses

NG: number of PV (generator) buses

VL: voltage magnitude at the load (PQ) buses

θ: voltage angles of all buses, except the slack bus

PSG: active power of the slack generator

QG: reactive power of all generator units

3.4 Optimization Problem in Power System

3.4.1 Formulation of the Economic Dispatch Problem


The total cost of operation includes fuel, labor, and maintenance costs, but for
simplicity we will assume that the only variable costs we need to consider are fuel
costs. For the costs we assume that we are given the fuel-cost curves for each
generating unit, specifying the cost of the fuel used per hour as a function of the
generator power output. For simplicity we assume that each generating unit consists
of a generator, turbine, steam generating unit (boiler furnace), and associated
auxiliary equipment. An approximation to a typical fuel-cost curve is shown in
Figure 4.

Figure 4: Fuel-cost curve

Given a system with m generators committed and all the S_Di given, pick the
P_Gi and |Vi|, i = 1, 2, 3, ..., m, to minimize the total cost

    C_T = Σ_{i=1}^{m} C_i(P_Gi)                                           (3.47)

subject to the satisfaction of the power flow equations and to the following
inequality constraints on generator power, line power flow, and voltage magnitude:

1. P_Gi^min ≤ P_Gi ≤ P_Gi^max,    i = 1, 2, ..., m
2. |P_ij| ≤ P_ij^max,    all lines
3. V_i^min ≤ |V_i| ≤ V_i^max,    i = 1, 2, ..., n

Some brief comments on the formulation:

1. The power flow equation must be satisfied (they are an equality constraint on
the optimization)
2. The upper limit on PGi is set by thermal limits on the turbine generator unit,
while the lower limit is set by boiler and/or other thermodynamic
considerations.
3. The constraints on voltage keep the system voltages from varying too far
from their rated or nominal values. The objective is to help maintain the
consumer's voltage; the voltage should be neither too high nor too low.
4. The reasons for constraints on the transmission-line powers relate to thermal
and stability limits.
5. The formulation of the problem is consistent with the availability of injected
active power and bus voltage magnitude as control variables at each
generator. It can be extended to deal with other control variables, such as the
phase angles across phase-shifting transformers, the turns ratios of tap-
changing transformers, and the admittances of variable (and controllable)
shunt and series inductors and capacitors.
6. The minimization of a cost function subject to equality and inequality
constraints is a problem in optimization that is treated by a branch of applied
mathematics called nonlinear programming.

To help fix ideas, consider next the formulation as applied to the simple
system in Figure 5. Assume that the system is specified; in particular, all the bus
admittance parameters are given, the reference voltage angle is given, and all the
S_Di are given. We can pick a particular set of |V1|, PG2, |V2|, PG3 and |V3| (within the
constraint set) and by power flow analysis find PG1 and the transmission-line powers.
If these satisfy the inequality constraints, our choice is feasible and we calculate the
total cost CT. The problem is to minimize CT over the set of feasible independent
control variables |V1|, PG2, |V2|, PG3 and |V3|.

Figure 5: System to be optimized

3.4.2 Classical Economic Dispatch (Line Losses Neglected)

Suppose that we neglect the line power flow constraints and neglect the line
losses as well. We can then replace the general problem formulation with a much
simplified version: minimize

    C_T = Σ_{i=1}^{m} C_i(P_Gi)                                           (3.48)

such that

    Σ_{i=1}^{m} P_Gi = P_D = Σ_{i=1}^{n} P_Di
                                                                          (3.49)
    P_Gi^min ≤ P_Gi ≤ P_Gi^max,    i = 1, 2, ..., m

We have the incremental costs (ICs):

    IC_i = dC_i(P_Gi)/dP_Gi = slope of the fuel-cost curve                (3.50)

If the units of C_i(P_Gi) are dollars/hr, the units of the IC_i are dollars/hr/MW or
dollars/MWh. The IC_i figure represents the increase in cost rate per increase in MW
output or, equivalently, the increase in cost per increase in MWh.

3.4.3 Generator Limits Included

Returning to the simplified problem formulation, we now add the inequality
constraints. Consider the case with generator limits. We can show these
constraints on the IC curves, as is done in Figure 6.

Figure 6: Incremental cost curves with constraints

Suppose that for a given PD, the system λ is λ1. PD then increases and we
increase λ to provide more generation. Continuing the process in this way we reach
λ2, where PG3 has reached a limit and cannot be increased further. The increased load must
be taken by PG1 and PG2. Clearly, they should operate at equal IC, say λ3. Further
increases in load can be taken by PG1 and PG2 operating at equal IC until PG2 reaches
its upper limit, at λ = λ4.
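This equal-incremental-cost rule with limits can be sketched as a simple bisection on the
system λ; the quadratic cost data below are assumed values used only for illustration
(following the C_i(P_Gi) = a_i + b_i·P_Gi + c_i·P_Gi² form introduced later in Section 3.4.5).

    % Sketch: classical dispatch (losses neglected) by bisection on lambda
    a = [1.5 2.5 2.0];  b = [0.5 0.4 0.45];  c = [0.1 0.2 0.15];   % assumed cost data
    Pmin = [0.5 0.5 0.5];  Pmax = [2.5 2.5 2.5];  PD = 4.0;        % assumed limits and demand
    lam_lo = 0;  lam_hi = 10;
    for it = 1:60
        lam = (lam_lo + lam_hi)/2;
        P   = (lam - b)./(2*c);            % IC_i = b_i + 2*c_i*P_Gi = lambda
        P   = min(max(P, Pmin), Pmax);     % units at their limits are clamped
        if sum(P) > PD, lam_hi = lam; else, lam_lo = lam; end
    end
    % P now holds the dispatch and lam the common incremental cost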

3.4.4 Line Losses Considered

If all the generators are located in one plant or are otherwise very close
geographically, it is physically reasonable to neglect line losses in calculating the
optimal dispatch. On the other hand, if the power stations are spread out
geographically, the transmission-line (link) losses must usually be considered, and
this will modify the optimal generation assignments as determined in the preceding
section.

Pick the P_Gi to minimize

    C_T = Σ_{i=1}^{m} C_i(P_Gi)                                           (3.51)

subject to

    Σ_{i=1}^{m} P_Gi − P_L(P_G2, ..., P_Gm) − P_D = 0
                                                                          (3.52)
    P_Gi^min ≤ P_Gi ≤ P_Gi^max,    i = 1, 2, ..., m

To obtain at least a formal solution, we can use the method of Lagrange multipliers.
Define the augmented cost function

    C̃_T = C_T − λ [ Σ_{i=1}^{m} P_Gi − P_L(P_G2, ..., P_Gm) − P_D ]

Setting its partial derivatives to zero gives

    ∂C̃_T/∂λ = − [ Σ_{i=1}^{m} P_Gi − P_L − P_D ] = 0

    ∂C̃_T/∂P_G1 = dC_1(P_G1)/dP_G1 − λ = 0
                                                                          (3.53)
    ∂C̃_T/∂P_Gi = dC_i(P_Gi)/dP_Gi − λ (1 − ∂P_L/∂P_Gi) = 0,    i = 2, ..., m

Define the penalty factor L_i for the ith generator:

    L_1 = 1
    L_i = 1 / (1 − ∂P_L/∂P_Gi),    i = 2, ..., m                          (3.54)

3.4.5 Proposed OPF Formulation

As with any optimization problem, the OPF problem is formulated as a
minimization or maximization of a certain objective function, subject to a variety
of equality and inequality constraints. The proposed objective
function is as follows:

The objective function: Minimization of Generation Fuel Cost

The objective function is to minimize the generation fuel cost of the thermal
units. Generally, the OPF generation fuel cost function can be expressed by a
quadratic function as follows:

    Minimize  C_T = Σ_{i=1}^{NG} C_i(P_Gi)

    C_i(P_Gi) = a_i + b_i·P_Gi + c_i·P_Gi²

where:

NG: the number of generators, including the slack generator, in the electric
network

ai: the basic cost coefficient of the ith generator

bi: the linear cost coefficient of the ith generator

ci: the quadratic cost coefficient of the ith generator

PGi: the real power output of the ith generator. PG is the vector of real power
outputs of all generator units and is defined as

    P_G = [P_G1, P_G2, ..., P_GNG]^T
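A small sketch of evaluating this objective in MATLAB is given below; the function name
total_fuel_cost is hypothetical (the case-study code in Appendix B uses a similar helper,
Fuel_Cost, but orders its coefficients as C(P) = aP² + bP + c instead).

    % Sketch: total generation fuel cost C_T = sum_i (a_i + b_i*P_i + c_i*P_i^2)
    function C_T = total_fuel_cost(P_G, a, b, c)
        C_T = sum(a + b.*P_G + c.*P_G.^2);
    end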

3.5 The constraints

The control variables for the OPF include: active power at all generator units;
generator bus voltages; transformer tap positions; and switchable shunt reactors.
The OPF constraints are divided into equality and inequality constraints. The
equality constraints are the active/reactive power balance equalities; the inequality constraints
include bus voltage constraints, generator reactive power constraints, reactive
source capacity constraints, transformer tap position
constraints, etc. Therefore, the above objective function is subject to the following
constraints:

(i) Equality Constraints:

The equality constraints of the OPF reflect the physics of the power system. The
physics are enforced through the power flow equations, which
require the net injection of real and reactive power at each bus to be zero, as
shown below.

The power flow equations of the network:

    g(V, ϕ) = 0

where

    g(V, ϕ) = [ Pi(V, ϕ) − Pi^net ]    for each PQ bus i
              [ Qi(V, ϕ) − Qi^net ]    for each PQ bus i
              [ Pm(V, ϕ) − Pm^net ]    for each PV bus m, not including the reference bus

where

Pi and Qi are respectively the calculated real and reactive power for PQ bus i

Pi^net and Qi^net are respectively the specified real and reactive power for PQ bus i

Pm and Pm^net are respectively the calculated and specified real power for PV bus m

V and ϕ are the voltage magnitudes and phase angles at the different buses.

(ii) Inequality Constraints


The inequality constraints of the OPF reflect the limits on physical
devices in the power system as well as the limits created to ensure system
security. This section lays out the inequality constraints
needed for the OPF implementation. These types of inequality constraints
are bus voltage limits at generator buses, maximum line loading limits and
limits on tap settings. The various inequality constraints are as follows:

• The inequality constraints on real power generation at bus i:

    PGi^min ≤ PGi ≤ PGi^max,    i = 1, 2, ..., NG

  where PGi^min and PGi^max are respectively the minimum and maximum values of
  real power generation allowed at generator bus i.

• The inequality constraints on reactive power generation QGi at each PV bus:

    QGi^min ≤ QGi ≤ QGi^max,    i = 1, 2, ..., NG

• The inequality constraints on the voltage magnitude Vi of each PQ bus:

    Vi^min ≤ Vi ≤ Vi^max,    i = 1, 2, ..., NL

  where Vi^min and Vi^max are respectively the minimum and maximum voltage at bus i.

• The inequality constraint on the phase angle ϕi of the voltage at all buses i:

    ϕi^min ≤ ϕi ≤ ϕi^max

  where ϕi^min and ϕi^max are respectively the minimum and maximum phase angle at bus i.

• MVA flow limit on each transmission line:

    MVAij ≤ MVAij^max

  where MVAij^max is the maximum rating of the transmission line connecting buses i and j.
Summary

The OPF problem:

    Minimize  C_T = Σ_{i=1}^{NG} C_i(P_Gi)

Constraints:

    Σ P_Gi = P_D + P_Loss
    P_Gi^min ≤ P_Gi ≤ P_Gi^max,    i = 1, 2, ..., NG
    Q_Gi^min ≤ Q_Gi ≤ Q_Gi^max,    i = 1, 2, ..., NG
    V_i^min ≤ V_i ≤ V_i^max,    i = 1, 2, ..., NL
    ϕ_i^min ≤ ϕ_i ≤ ϕ_i^max
    MVA_ij ≤ MVA_ij^max

CHAPTER 4
PARTICLE SWARM OPTIMIZATION

4.1 Introduction
Particle swarm optimization (PSO) is a population-based stochastic
optimization technique inspired by the social behavior of bird flocking or fish schooling.

In PSO, the search for an optimal solution is conducted using a population of


particles, each of which represents a candidate solution to the optimization problem.
Particles change their position by flying around a multidimensional space,
following the current optimal particles, until a relatively unchanged position has been
achieved or until computational limitations are exceeded. Each particle adjusts its
trajectory towards its own previous best position and towards the global best
position attained so far. PSO is easy to implement, provides fast convergence
for many optimization problems, and has gained a lot of attention in power system
applications recently.

The system is initialized with a population of random solutions and searches


for optima by updating generations. In PSO, the potential solutions, called particles,
fly through the problem space by following the current optimum particles. Each
particle in PSO makes its decision using its own experience together with its
neighbor’s experience.

4.2 PSO algorithm

The PSO algorithm works by simultaneously maintaining several candidate


solutions in the search space. During each iteration of the algorithm, each candidate
solution is evaluated by the objective function being optimized, determining
the fitness of that solution. Each candidate solution can be thought of as a particle
“flying” through the fitness landscape finding the maximum or minimum of the
objective function.

Initially, the PSO algorithm chooses candidate solutions randomly within the
search space. Figure 7 shows the initial state of a four-particle PSO algorithm
seeking the global maximum in a one-dimensional search space. The search space is
composed of all the possible solutions along the x-axis; the curve denotes the
objective function. It should be noted that the PSO algorithm has no knowledge of
the underlying objective function, and thus has no way of knowing if any of the
candidate solutions are near to or far away from a local or global maximum. The
PSO algorithm simply uses the objective function to evaluate its candidate
solutions, and operates upon the resultant fitness values.

Each particle maintains its position, composed of the candidate solution and its
evaluated fitness, and its velocity. Additionally, it remembers the best fitness value
it has achieved thus far during the operation of the algorithm, referred to as the
individual best fitness, and the candidate solution that achieved this fitness referred
to as the individual best position or individual best candidate solution. Finally, the
PSO algorithm maintains the best fitness value achieved among all particles in the
swarm, called the global best fitness, and the candidate solution that achieved this
fitness, called the global best position or global best candidate solution.

The PSO algorithm consists of just three main steps, which are repeated until
some stopping condition is met:

1. Evaluate the fitness of each particle


2. Update the individual and global best fitnesses and positions
3. Update velocity and position of each particle

The first two steps are fairly trivial. Fitness evaluation is conducted by
supplying the candidate solution to the objective function. Individual and global
best fitnesses and positions are updated by comparing the newly evaluated fitnesses
against the previous individual and global best fitnesses, and replacing the best
fitnesses and positions as necessary.

Figure 7. Initial PSO state

The velocity and position update step is responsible for the optimization
ability of the PSO algorithm. The velocity of each particle in the swarm is updated
using the following equation:

    v_i(k+1) = w·v_i(k) + c1·r1·[Pbest_i − x_i(k)] + c2·r2·[Gbest − x_i(k)]
             = Inertia + Cognitive + Social
                                                                          (4.1)

The index of the particle is represented by i. Thus, v_i(k) is the velocity of
particle i at iteration k and x_i(k) is the position of particle i at iteration k.

The parameters w, c1, c2 are user-supplied coefficients.

The values r1 and r2 are random values regenerated for each velocity update.
The value Pbest_i is the personal best or individual best candidate solution for particle i
up to iteration k, and Gbest is the swarm's global best candidate solution up to iteration k.

Each of the three terms of the velocity update equation has a different role in
the PSO algorithm. The first term w·v_i(k) is the inertia component, responsible for
keeping the particle moving in the same direction it was originally heading. The
value of the inertial coefficient w is typically between 0.8 and 1.2, which can either
dampen the particle's inertia or accelerate the particle in its original direction.
Generally, lower values of the inertial coefficient speed up the convergence of the
swarm to optima, and higher values of the inertial coefficient encourage exploration
of the entire search space.

The second term c1·r1·[Pbest_i − x_i(k)], called the cognitive component, acts as the
particle's memory, causing it to tend to return to the regions of the search space in
which it has experienced high individual fitness. The cognitive coefficient c1 is
usually close to 2, and affects the size of the step the particle takes towards its
individual best candidate solution.

The third term c2·r2·[Gbest − x_i(k)], called the social component, causes the
particle to move to the best region the swarm has found so far. The social coefficient
c2 is typically close to 2, and represents the size of the step the particle takes toward the
global best candidate solution Gbest the swarm has found up until that point.

The random value r1 in the cognitive component and r2 in the social
component cause these components to have a stochastic influence on the velocity
update. This stochastic nature causes each particle to move in a semi-random
manner, heavily influenced by the directions of the individual best solution of the
particle and the global best solution of the swarm.

In order to keep the particles from moving too far beyond the search space, we
use a technique called velocity clamping to limit the maximum velocity of each
particle. For a search space bounded by the range [-xmax, xmax], velocity clamping
limits the velocity to the range [-vmax, vmax], where vmax = k × xmax. The value k
represents a user-supplied velocity clamping factor, 0.1 ≤ k ≤ 1.0. In many
optimization tasks, the search space is not centered around 0 and thus the range
[-xmax, xmax] is not an adequate definition of the search space. In such a case, where the
search space is bounded by [xmin, xmax], we define vmax = k × (xmax − xmin)/2.

Once the velocity for each particle is calculated, each particle’s position is
updated by applying the new velocity to the particle’s previous position:

xi(k+1) = xi(k) + vi(k+1) (4.2)
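A minimal MATLAB sketch of one particle's update, combining (4.1), velocity clamping
and (4.2), is given below; x, v, P_best and G_best are assumed matrices/vectors holding
the current positions, velocities and best solutions, and w, c1, c2, v_max are the
user-supplied coefficients described above.

    % Sketch: velocity and position update for particle i (one iteration)
    D = size(x, 2);                                    % problem dimension
    r1 = rand(1, D);  r2 = rand(1, D);
    v(i,:) = w*v(i,:) + c1*r1.*(P_best(i,:) - x(i,:)) ...
                      + c2*r2.*(G_best - x(i,:));      % equation (4.1)
    v(i,:) = min(max(v(i,:), -v_max), v_max);          % velocity clamping
    x(i,:) = x(i,:) + v(i,:);                          % equation (4.2)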

This process is repeated until some stopping condition is met. Some stopping
conditions include:

(a) The number of iterations since the last update of the best solution
is greater than a predefined number
(b) The number of iterations reaches the maximum allowable number.

This completes the description of the basic elements of the PSO algorithm.

4.3 PSO example

The proposed PSO-based approach was implemented using Matlab to solve the
problem:

    Min F(x,y) = x² + y² − 8x − 10y + 1

    Constraints: 0 ≤ x ≤ 10;  0 ≤ y ≤ 15

In this example, we choose the swarm size N to be 100 and the inertia coefficient w
to be 0.001, which can either dampen the particle's inertia or accelerate the particle
in its original direction. The cognitive and social coefficients c1 and c2 are both 0.01. The cognitive
coefficient affects the size of the step the particle takes towards its individual best
candidate solution, and the social coefficient c2 represents the size of the step the
particle takes toward the global best candidate solution Gbest that the swarm has
found up until that point. With the PSO flow chart diagram below, we can easily write
the code and obtain the result we expected: all the particles (100) converge
together after 5000 iterations.

[Flow chart of the PSO example: initialize x_i(k) and v_i(k); compute f(x_i(k)); update
the individual and global bests; update velocities and positions; repeat until the
stopping condition is met.]
Result

We got the values of x and y as shown in Figure 9 below:

Figure 9: The value of x and y

Figure 10: The entire particles after iterations
converge all together

4.4 Merits and Demerits of PSO Method

Merits

1. PSO is one of the modern heuristic algorithms capable of solving large-scale non-
convex optimization problems like OPF.

2. The main advantages of PSO algorithms are: simple concept, easy


implementation, relative robustness to control parameters and computational
efficiency.

3. The prominent merit of PSO is its fast convergence speed.

4. The PSO algorithm can be implemented simply and requires little parameter tuning.

5. PSO can easily deal with non-differentiable and non-convex objective functions.

6. PSO has the flexibility to control the balance between the global and local
exploration of the search space.

Demerits

1. The candidate solutions in PSO are coded as a set of real numbers.
However, most of the control variables, such as transformer tap settings and switchable
shunt capacitors, change in a discrete manner. Real coding of these variables
represents a limitation of PSO methods, as simple round-off calculations may lead to
significant errors.

2. Slow convergence in refined search stage (weak local search ability).

CHAPTER 5
CASE STUDY

5.1 Introduction
Although a wide variety of optimization techniques have been applied to
solving the single-objective OPF problem, as mentioned earlier, the results
obtained by using the PSO method are much more promising compared to
other techniques. The advantages of PSO over the other techniques include:

• It is less susceptible to being trapped in local minima.
• It is more flexible and robust.
• It has no problem of premature convergence.
• The solution quality is independent of the initial population.

All these advantages make PSO preferable to the other optimization techniques.

5.2 PSO Algorithm for OPF problem


The various steps involved in the implementation of PSO for the OPF problem are:

Step 1: Input the parameters of the system, and specify the lower and upper boundaries
of each variable.

Step 2: Randomly initialize the particles of the population. These initial particles must
be feasible candidate solutions that satisfy the practical operation constraints.

Step 3: For each particle of the population, employ the Newton-Raphson method to
calculate the power flow and the transmission loss.

Step 4: Calculate the evaluation value of each particle in the population using the
evaluation function (a small sketch of such an evaluation function is given after these steps).

Step 5: Compare each particle's evaluation value with its Personal/Individual Best (Pbest).
The best evaluation value among all the Pbest is denoted as Gbest.

Step 6: Update the iteration counter t = t + 1.

Step 7: Update the inertia weight w given by

    w = w_max − [(w_max − w_min)/iter_max] × iter

Step 8: Modify the velocity v of each particle according to equation (4.1):

    v_i(k+1) = w·v_i(k) + c1·r1·[Pbest_i − x_i(k)] + c2·r2·[Gbest − x_i(k)]

Step 9: Modify the position of each particle according to equation (4.2). If a particle
violates its position limits in any dimension, set its position at the proper limit:

    x_i(k+1) = x_i(k) + v_i(k+1)

Step 10: Each particle is evaluated according to its updated position. If the evaluation
value of a particle is better than its previous Pbest, the current value is set to be Pbest.
If the best Pbest is better than Gbest, its value is set to be Gbest.

Step 11: If one of the stopping criteria is satisfied, go to Step 12; otherwise go to Step 6.

Step 12: The particle that generated the latest Gbest gives the optimal value.
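The evaluation (fitness) function referred to in Steps 4 and 10 can be built, for example,
as the fuel cost plus penalty terms for violated constraints; the sketch below mirrors the
approach taken in the case-study code of Appendix B, with the penalty weight Pen_factor
and the limit vectors assumed to be available in the workspace.

    % Sketch: evaluation value = fuel cost + penalties on violated limits
    Cost_fuel = sum(a + b.*P_G + c.*P_G.^2);           % generation fuel cost
    Cost_pen  = 0;
    viol = max(Q_G - Q_max, 0) + max(Q_min - Q_G, 0);  % reactive power violations
    Cost_pen  = Cost_pen + Pen_factor*sum(viol);
    viol = max(V - V_max, 0) + max(V_min - V, 0);      % bus voltage violations
    Cost_pen  = Cost_pen + Pen_factor*sum(viol);
    fitness   = Cost_fuel + Cost_pen;                  % smaller is better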

The parameters that must be selected carefully for the efficient performance of the PSO
algorithm are:
a. Both acceleration factors c1 and c2
b. The number of particles
c. The inertia factor
d. The search will terminate if one of the scenarios below is encountered:
   • |Gbest(i) − Gbest(i−1)| < 0.0001 for 50 iterations
   • The maximum number of iterations is reached (500 iterations)
e. The number of intervals N, which determines the maximum velocity v_k^max

The PSO algorithm for solving the OPF problem with an objective function of
minimization of generation fuel cost is shown on the next page

[Flow chart: feasible random initialization → update counter → run load flow and
calculate the objective function → ...]
5.3 Illustrative Example

In this example, we consider a 5-bus electric power system with three
generators connected to Buses 1, 2 and 3, supplying the loads at Buses 4 and 5. In addition,
there are capacitors at Buses 4 and 5 to compensate for the reactive power required by
the load. It is assumed that the impedance of every line is z_L = r_L + j·x_L = 0.0099
+ j0.099. Bus 1 is the slack (Vδ) bus; Buses 2 and 3 are generator (PV) buses; and Buses 4
and 5 are load (PQ) buses. The OPF in this case is to determine the values of P2, P3,
V2 and V3 which minimize the total cost of the system, while satisfying the equality
and inequality constraints.

Figure 12. 5-bus electric power system

[Flow chart: Start → Read data → Initialize particles → Analyze the system by the
Newton-Raphson method → Calculate the objective function → Determine the Individual
Best and Global Best → Calculate velocity → Update position → Check the stopping
condition (if not met, repeat from the power flow analysis) → Result]

Figure 13. PSO algorithm for the OPF problem

The parameters of the network and of the PSO algorithm are provided in the Appendix.
With the number of particles arbitrarily chosen to be 100, the PSO converges
quite well, with the number of iterations being about 28. The convergence of the
particles in the (V2, V3) and (P2, P3) planes is displayed in Figures 14 and 15.

In Figures 14 and 15, it can be seen that the starting points of the particles are
arbitrarily selected in the search space. By communicating with the others and
memorizing the best positions attained, the particles tend to move and converge
at the optimum point. The process stops when the particles' positions are relatively
unchanged (ε = 10⁻³).

With SD1 = 1.7 + j1.35 and SD2 = 1.7 + j1.42, the convergence of the 1st particle is
displayed in Figure 16.

Figure 14: The convergence of the particles in (V2, V3) plane.

Figure 15. The convergence of the particles in (P2, P3) plane.

Figure 16: The convergence of the 1st particle in the PSO algorithm (V2 = 1.06 pu; V3 =
1.08 pu; PG2 = 0.95 pu; and PG3 = 1.11 pu)

In this case, PG1 = 1.41 pu; the voltages at all buses and the currents in all lines are
kept within the limits. In addition, the validity of the PSO algorithm is checked, since
the incremental costs of all generators are nearly the same (IC ≈ 0.78).

When the system load changes during the day (24 hours), the optimal generation
of the three generators is as shown in Figure 17. The incremental costs of the different
generators are slightly different due to the penalty factors caused by the power losses.


Figure 17: Optimal generation and the incremental costs of three generators.

APPENDIX A
Pseudo-code for the example
clear;

% Import Parameter

a = 1;
b = 1;
c = -8;
d = -10;
e = 1;
% f = a*x^2 + b*y^2 + c*x + d*y + e
% The constraints of the problem
x_min = 0;
x_max = 10;
y_min = 0;
y_max = 15;

N = 500; % number of particles

% initialization
for i = 1:N
x(1,i) = rand()*(x_max - x_min) + x_min;
y(1,i) = rand()*(y_max - y_min) + y_min;
v_x(1,i) = 0;
v_y(1,i) = 0;
f_best_ind(i) = inf;
end;

f_best_gl = inf;
x_best_gl = 0;
y_best_gl = 0;

omega = 0.001; % inertia coefficient
c_1 = 0.01;    % cognitive coefficient
c_2 = 0.01;    % social coefficient

N_iteration = 5000;
for k = 1:N_iteration
for i = 1:N
f(k,i) = a*x(k,i)^2 + b*y(k,i)^2 + c*x(k,i) + d*y(k,i) + e;
if f(k,i) <= f_best_gl
f_best_gl = f(k,i);
x_best_gl = x(k,i);
y_best_gl = y(k,i);
end;
if f(k,i) <= f_best_ind(i)
f_best_ind(i) = f(k,i);
x_best_ind(i) = x(k,i);
y_best_ind(i) = y(k,i);
end;
end;

for i = 1:N
v_x(k+1,i) = omega*v_x(k,i) + c_1*rand()*(x_best_ind(i) - x(k,i)) + c_2*rand()*(x_best_gl - x(k,i));
v_y(k+1,i) = omega*v_y(k,i) + c_1*rand()*(y_best_ind(i) - y(k,i)) + c_2*rand()*(y_best_gl - y(k,i));

x(k+1,i) = x(k,i) + v_x(k+1,i);


y(k+1,i) = y(k,i) + v_y(k+1,i);
end;
end;
for i = 1:N

plot(x(:,i),y(:,i));
hold on;
end;

APPENDIX B
Pseudo-code for the case study
clear;
clc;
%% Fuel cost C(P) = aP^2 + bP + c
a = [.1 .2 .15];
b = [.5 .4 .45];
c = [1.5 2.5 2.0];
%% Capacity of gen. Pmin < P < Pmax
P_min = [0.5 0.5 0.5];
P_max = [2.5 2.5 2.5];
Q_min = [0.5 0.5 0.5];
Q_max = [2.5 2.5 2.5];
%% Capacity of trans. lines I < Imax
I_max = 5*[0 1.0 0 2.0 0; 1.0 0 1.0 2.0 0; 0 1.0 0 0 2.0; 2.0 2.0 0 0 1.0; 0 0 2.0 1.0 0];
V_max = 1.1;
V_min = 0.8;
%% Load data
P_4 = -[1.7 1.65 1.6 1.6 1.65 1.75 1.9 2.1 2.2 2.25 2.3 2.3 2.25 2.25 2.3 2.3 2.3 2.35 2.45 2.45 2.4 2.1 1.8 1.7];
Q_4 = 0.6*P_4;
P_5 = P_4;
Q_5 = 0.65*P_4;
%% Addmittance matrix Y_bus
Y_bus = [2-20i -1+10i 0 -1+10i 0; -1+10i 3-30i -1+10i -1+10i 0; 0 -1+10i 2-20i 0 -1+10i; -1+10i -1+10i 0 3-30i -1+10i; 0 0 -1+10i -1+10i 2-20i];
%Y_bus = [20-200i -10+100i 0 -10+100i 0; -10+100i 30-300i -10+100i -10+100i 0; 0 -10+100i 20-200i 0 -10+100i; -10+100i -10+100i 0 30-299.99i -10+100i; 0 0 -10+100i -10+100i 20-199.99i];
%% PSO algorithm

N_p = 200; % number of particles

omega = .5;
c_1 = 0.2;
c_2 = 0.2;

Pen_factor = 10;

for k = 1:length(P_4)

for i = 1:N_p % initialize the particles


P_2(i) = rand()*(P_max(2)-P_min(2)) + P_min(2);
P_3(i) = rand()*(P_max(3)-P_min(3)) + P_min(3);
V_2(i) = rand()*(V_max - V_min) + V_min;
V_3(i) = rand()*(V_max - V_min) + V_min;

v_P_2(i) = 0;
v_P_3(i) = 0;
v_V_2(i) = 0;
v_V_3(i) = 0;

Cost_ind_best(i) = inf;
end;

N_iteration = 1;
epsilon_PSO = 1;

while (epsilon_PSO > 0.002)&&(N_iteration < 100)


for i = 1:N_p

P_0 = [0 P_2(i) P_3(i) P_4(k) P_5(k)];
Q_0 = [0 0 0 Q_4(k) Q_5(k)];
V_0 = [1.05 V_2(i) V_3(i) 1 1];
delta_0_degree = [0 -5 -10 -10 -15];

[V_bus, delta, P_bus, Q_bus, I_line] = Newton_Raphson(Y_bus, P_0, Q_0, V_0, delta_0_degree);

Cost_fuel(i) = Fuel_Cost(P_bus(1:3), a, b, c);

% Calculate penalty cost of constraints


if (P_bus(1)>P_max(1))||(P_bus(1)<P_min(1))
Cost_pen(i) = Pen_factor*min(abs(P_bus(1)-P_max(1)), abs(P_bus(1)-P_min(1)));
else
Cost_pen(i) = 0;
end;

for n = 1:length(Q_max)
if (Q_bus(n)<Q_min(n))||(Q_bus(n)>Q_max(n))
Cost_pen(i) = Cost_pen(i) + Pen_factor*min(abs(Q_bus(n)-Q_max(n)), abs(Q_bus(n)-Q_min(n)));
end;
end;

for n = 1:length(Y_bus(1,:))
for m = 1:length(Y_bus(1,:))
if I_line(n,m)>I_max(n,m)
Cost_pen(i) = Cost_pen(i) + Pen_factor*(I_line(n,m) - I_max(n,m));
end;
end;

end;

for n = 1:length(Y_bus(1,:))
if (V_bus(n)<V_min)||(V_bus(n)>V_max)
Cost_pen(i) = Cost_pen(i) + Pen_factor*min(abs(V_bus(n)-V_max), abs(V_bus(n)-V_min));
end;
end;

Cost_total(i) = Cost_fuel(i) + Cost_pen(i);

if Cost_total(i) < Cost_ind_best(i)


Cost_ind_best(i) = Cost_total(i);
P_2_ind_best(i) = P_2(i);
P_3_ind_best(i) = P_3(i);
V_2_ind_best(i) = V_2(i);
V_3_ind_best(i) = V_3(i);
end;
end;

[Cost_global_best, n] = min(Cost_ind_best);
P_2_global_best = P_2_ind_best(n);
P_3_global_best = P_3_ind_best(n);
V_2_global_best = V_2_ind_best(n);
V_3_global_best = V_3_ind_best(n);

% Calculate velocity vector and new position


for i = 1:N_p
v_P_2(i) = omega*v_P_2(i) + c_1*rand()*(P_2_ind_best(i) - P_2(i)) + c_2*rand()*(P_2_global_best - P_2(i));
v_P_3(i) = omega*v_P_3(i) + c_1*rand()*(P_3_ind_best(i) - P_3(i)) + c_2*rand()*(P_3_global_best - P_3(i));
v_V_2(i) = omega*v_V_2(i) + c_1*rand()*(V_2_ind_best(i) - V_2(i)) + c_2*rand()*(V_2_global_best - V_2(i));
v_V_3(i) = omega*v_V_3(i) + c_1*rand()*(V_3_ind_best(i) - V_3(i)) + c_2*rand()*(V_3_global_best - V_3(i));

P_2(i) = P_2(i) + v_P_2(i);


P_3(i) = P_3(i) + v_P_3(i);
V_2(i) = V_2(i) + v_V_2(i);
V_3(i) = V_3(i) + v_V_3(i);

if P_2(i)>P_max(2)
P_2(i) = P_max(2);
elseif P_2(i)<P_min(2)
P_2(i) = P_min(2);
end;

if P_3(i)>P_max(3)
P_3(i) = P_max(3);
elseif P_3(i)<P_min(3)
P_3(i) = P_min(3);
end;

if V_2(i)>V_max
V_2(i) = V_max;
elseif V_2(i)<V_min
V_2(i) = V_min;
end;

if V_3(i)>V_max

V_3(i) = V_max;
elseif V_3(i)<V_min
V_3(i) = V_min;
end;

end;

epsilon(1) = max(abs(v_P_2));
epsilon(2) = max(abs(v_P_3));
epsilon(3) = max(abs(v_V_2));
epsilon(4) = max(abs(v_V_3));
epsilon_PSO = max(epsilon); % largest change in any control variable

for i = 1:N_p
P_2_gen(i,N_iteration) = P_2(i);
P_3_gen(i,N_iteration) = P_3(i);
V_2_gen(i,N_iteration) = V_2(i);
V_3_gen(i,N_iteration) = V_3(i);
end;

N_iteration = N_iteration + 1;

end;

P_2_opt(k) = P_2_global_best;
P_3_opt(k) = P_3_global_best;
V_2_opt(k) = V_2_global_best;
V_3_opt(k) = V_3_global_best;

P_0 = [0 P_2_opt(k) P_3_opt(k) P_4(k) P_5(k)];

Q_0 = [0 0 0 Q_4(k) Q_5(k)];
V_0 = [1.05 V_2_opt(k) V_3_opt(k) 1 1];
delta_0_degree = [0 -5 -10 -10 -15];

[V_bus_opt, delta_opt, P_bus_opt, Q_bus_opt, I_line_opt] = Newton_Raphson(Y_bus, P_0, Q_0, V_0, delta_0_degree);

P_1_opt(k) = P_bus_opt(1);
P_2_opt(k) = P_bus_opt(2);
P_3_opt(k) = P_bus_opt(3);

P_sys(k) = -(P_4(k) + P_5(k));

V_5_opt(k) = V_bus_opt(5);
P_loss(k) = sum(P_bus_opt);
Q_loss(k) = sum(Q_bus_opt);

Cost_fuel_opt(k) = Fuel_Cost(P_bus_opt(1:3), a, b, c);

for i = 1:3
IC(k,i) = 2*a(i)*P_bus_opt(i) + b(i);
end;

end;
