Power Systems
1001308
Decision-Making Techniques for
Security Constrained Power
Systems
EPRI • 3412 Hillview Avenue, Palo Alto, California 94304 • PO Box 10412, Palo Alto, California 94303 • USA
800.313.3774 • 650.855.2121 • askepri@epri.com • www.epri.com
DISCLAIMER OF WARRANTIES AND LIMITATION OF LIABILITIES
THIS DOCUMENT WAS PREPARED BY THE ORGANIZATION(S) NAMED BELOW AS AN
ACCOUNT OF WORK SPONSORED OR COSPONSORED BY THE ELECTRIC POWER RESEARCH
INSTITUTE, INC. (EPRI). NEITHER EPRI, ANY MEMBER OF EPRI, ANY COSPONSOR, THE
ORGANIZATION(S) BELOW, NOR ANY PERSON ACTING ON BEHALF OF ANY OF THEM:
ORDERING INFORMATION
Requests for copies of this report should be directed to the EPRI Distribution Center, 207 Coggins
Drive, P.O. Box 23205, Pleasant Hill, CA 94523, (800) 313-3774.
Electric Power Research Institute and EPRI are registered service marks of the Electric Power
Research Institute, Inc. EPRI. ELECTRIFY THE WORLD is a service mark of the Electric Power
Research Institute, Inc.
Copyright © 2001 Electric Power Research Institute, Inc. All rights reserved.
CITATIONS
Principal Investigators
J. McCalley
M. Ni
Other contributors
J. Chen
W. Fu
V. Van Acker
The report is a corporate document that should be cited in the literature in the following manner:
Decision-Making Techniques for Security Constrained Power Systems, EPRI, Palo Alto, CA: 2001. 1001308.
REPORT SUMMARY
This report provides a summary of decision-making techniques that can be applied to security-
constrained power systems. The single unifying theme throughout the report is that we are
capable of quantifying security level using risk. It is by this quantification that we are then able
to proceed in our investigation of decision-making techniques, as decision-making techniques
invariably require quantification of the criteria on which a decision is based. We summarize our
method of risk-based security assessment (RBSA) in Chapter 1, and we provide an overview of
the applications of risk-based decision-making. Chapter 2 describes how RBSA can be applied
for determining operational limits. Chapter 3 reports on the risk-based optimal power flow
(OPF), which is the classical OPF modified based on the ability to quantify security level in
terms of risk. Chapter 4 explores various decision-making methods for performing control-room
preventive/corrective action, including several multi-criteria decision-making methods based on
risk, variance, and an economic criterion. We believe that this exploration provides the basis for
developing automated decision-support tools for operators, who can reach inside a toolbox and
pull out one or perhaps several decision-making techniques, run them, and then make use of the
multiple suggestions provided. Chapter 5 develops the decision-making problem associated with
when to obtain more information. This relates to the classical data gathering problem that has for
so long plagued probabilistic techniques, but rather than focus on how to obtain the information,
we address the issue of whether to obtain it.
ABSTRACT
CONTENTS
1 INTRODUCTION
1.1 Overview of RBSA....................................................................................................... 1-2
1.2 The Decision-making Approach in Industry Today....................................................... 1-4
1.2.1 Deterministic Reliability Criteria............................................................................ 1-4
1.2.2 The Deterministic Decision Making Approach ...................................................... 1-5
1.3 Applications for Risk-based Decision-making .............................................................. 1-5
1.3.1 Operations ........................................................................................................... 1-6
1.3.2 Operational Planning ........................................................................................... 1-7
1.3.3 Facility Planning................................................................................................... 1-7
1.3.4 Reliability Criteria ................................................................................................. 1-7
1.3.5 Data Gathering by Information Valuation ............................................................. 1-8
1.4 Report Overview.......................................................................................................... 1-8
References......................................................................................................................... 1-9
2.5.1 Steps 1, 2, 3 for Deterministic and Probabilistic Studies......................................2-13
2.5.2 Steps 4, 5 for Deterministic Method ....................................................................2-15
2.5.3 Steps 4, 5 for Probabilistic Method......................................................................2-16
2.6 Discussion ..................................................................................................................2-18
2.7 Conclusion..................................................................................................................2-19
References........................................................................................................................2-20
4.3.2.2 Per-unit Method (Method No. 7) .................................................................... 4-9
4.3.2.2.1 Mini-max Criterion.................................................................................4-10
4.3.2.2.2. Minimum Maximum Regrets Criteria ....................................................4-10
4.4 Decision with Additional Information Using Bayesian Decision Tree............................4-11
4.4.1 Decision Tree.......................................................................................................4-11
4.4.2 Decision-making with Additional Information ........................................................4-12
4.5 Multi-objective Decision Making...................................................................................4-14
4.5.1 Shortcomings of Single Criterion Risk-based Approaches ...................................4-14
4.5.2 Literature Review on Multi-criteria Decision Making .............................................4-15
4.5.3 Overview..............................................................................................................4-16
4.5.4 Value or Utility-based Approaches .......................................................................4-17
4.5.4.1 Define the Scales of Measurement of the Objectives....................................4-18
4.5.4.2 Develop Value Functions..............................................................................4-19
4.5.4.3 Making Decisions based on the Value..........................................................4-23
4.5.5 ELECTRE IV........................................................................................................4-24
4.5.5.1 Main Steps of the Method.............................................................................4-25
4.5.5.2 Results with ELECTRE IV ............................................................................4-27
4.5.6 Other Methods .....................................................................................................4-31
4.6 Evidential Theory.........................................................................................................4-31
4.6.1 Brief Introduction of Evidential Theory..................................................................4-32
4.6.1.1 The Frame of Discernment and Basic Probability Assignment......................4-32
4.6.1.2 Belief and Plausibility Function .....................................................................4-33
4.6.1.3 Dempster’s Rule of Combination ..................................................................4-33
4.6.2 Application of Evidential Theory in Corrective/preventive Action Selection...........4-34
4.6.2.1 Single Decision Maker MCDM .....................................................4-34
4.6.2.1.1 Appraisal of Each Action.......................................................................4-34
4.6.2.1.2 Select Action Based on the Appraisal ...................................................4-37
4.6.2.2 Multiple Decision Makers MCDM..................................................................4-37
4.7 Conclusion...................................................................................................................4-38
References........................................................................................................................4-39
5 VALUE OF INFORMATION
5.1 Introduction................................................................................................................... 5-1
5.2 Perfect Information ....................................................................................................... 5-2
5.3 Imperfect Information.................................................................................................... 5-6
5.4 Conclusion.................................................................................................................... 5-8
APPENDIX
A.1 Introduction .................................................................................................................. A-1
A.2 Rating-based vs. Cost-based ....................................................................................... A-1
A.3 Impacts vs. Decisions .................................................................................................. A-2
A.4 Modeling Impact Uncertainty........................................................................................ A-2
A.5 Cost Estimation............................................................................................................ A-3
A.6 Classification of Impacts............................................................................................... A-4
A.6.1 Based on Affected Group ..................................................................................... A-4
A.6.2 Based on Cost Category....................................................................................... A-5
A.6.3 Based on Impact Component ............................................................................... A-5
A.6.4 Based on Cost Component................................................................................... A-6
A.7 Impacts for Different Security Problems ....................................................................... A-7
A.7.1 Overload Security ................................................................................................. A-8
A.7.2 Voltage Security ................................................................................................. A-10
A.7.3 Dynamic Security................................................................................................ A-11
A.8 Summary ................................................................................................................... A-14
References....................................................................................................................... A-14
LIST OF FIGURES
Figure 2-1 Uncertainty Due to Operating Conditions and Contingency State .......................... 2-5
Figure 2-2 Overload and Low-voltage Continuous Severity Functions .................................... 2-7
Figure 2-3 Concept of Loadability and Margin......................................................................... 2-6
Figure 2-4 Deterministic Security Boundary ...........................................................................2-10
Figure 2-5a Risk Indices with Discrete Severity Functions & Uncertainty Model 1..................2-11
Figure 2-5b Risk Level for Power A – E along the Deterministic Boundary.............................2-11
Figure 2-6 Risk Indices with Continuous Severity Functions & Uncertainty Model 1...............2-12
Figure 2-7 Risk Indices with Continuous Severity Functions & Uncertainty Model 2...............2-13
Figure 2-8 Modified IEEE RTS ‘96 .........................................................................................2-14
Figure 2-9 Deterministic Security Boundary ...........................................................................2-16
Figure 2-10 Risk Indices with Discrete Severity Functions and Uncertainty Model 1 ..............2-17
Figure 2-11 Risk Indices with Continuous Severity Functions & Uncertainty Model 1.............2-18
Figure 2-12 Risk Indices with Continuous Severity Functions & Uncertainty Model 2.............2-18
Figure 3-1 Risk-flow Curve for 138kV Line .............................................................................. 3-6
Figure 3-2 Risk-flow Curve for 230kV Line .............................................................................. 3-6
Figure 3-3 Risk-flow Curve for 400MVA Transformer .............................................................. 3-7
Figure 3-4 Risk-voltage Curve for 138kV Bus ......................................................................... 3-9
Figure 3-5 Risk-voltage Curve for 230kV Bus ......................................................................... 3-9
Figure 3-6 The IEEE RTS’96 System.....................................................................................3-14
Figure 3-7 Generation Cost vs. Component Risk Limit...........................................................3-21
Figure 3-8 Generation Cost vs. System Risk Limit .................................................................3-21
Figure 3-9 Lagrange Multipliers vs. System Risk Limits .........................................................3-22
Figure 4-1 Risk Inconsistency in System Operation ................................................................ 4-2
Figure 4-2 Decision Tree of the Example ...............................................................................4-11
Figure 4-3 The Decision Tree with Additional Information ......................................................4-13
Figure 4-4 An Example of a Value Function ...........................................................................4-18
Figure 4-5 Value Curves of Profit ...........................................................................................4-21
Figure 4-6 Value Curves of Risk ............................................................................................4-22
Figure 4-7 Value Curves of Variance .....................................................................................4-22
Figure 4-8 Hierarchy Structure for the Decision-making Problem ...........................................4-23
Figure 4-9 Preference and Indifference Thresholds ...............................................................4-26
Figure 4-10 Example of Final Ranking with ELECTRE IV ......................................................4-27
Figure 4-11 Final Ranking......................................................................................................4-31
LIST OF TABLES
Planning and operating bulk interconnected electric power systems are complex activities that require involvement
of a large number of people bringing a wide range of experiences and interests. What was once mainly the domain
of planning and operating engineers within the utility company now must involve people representing interests and
needs of transmission owners, system operators, energy sellers, large industrial customers and other end users,
regulators, reliability councils, security centers, manufacturers, marketers, brokers, and power exchange personnel.
In parallel with the increase in the diversity of participants, the conditions under which power systems are operated
have also become more diverse. Transmission loading patterns differ from those for which they were originally
planned, and the ability to monitor and control them has greatly increased in complexity. High uncertainty is a
characterizing feature of this complexity, and the ability to obtain, manage, and use large amounts of information
has become the primary means of handling this uncertainty.
Within the electric network, an individual disturbance resulting in a cost consequence may occur for a number of
reasons at any time. The disturbance may result in overload, voltage collapse, or transient instability, drawing the
prevailing system to an uncontrollable cascading situation leading to widespread power outages. To maintain system
reliability under uncertainty, studies are performed to aid in operating and planning decisions. The current practice
within the industry uses deterministic methods to perform these studies, with significant safety margins to cover
“all” the possible unknown uncertainties. In practice, this means that power engineers propose a strong system and
then operate it with large security margins. Though investment and operational costs are relatively high, this has
resulted in a corresponding high degree of reliability in most power systems.
The power system, however, has been shifting from a regulated system to a competitive and uncertain market
environment. A fluctuation of market demand and supply has led to an uncertain market price for energy in system
operation. Although some methods of risk assessment and management have been introduced into the market-
oriented energy trading business, the traditional deterministic reliability criteria are still intact. This has led
engineers to face more pressure, from economic imperatives in the marketplace, to operate power systems with
lower security margins. To operate the system closer to the traditional deterministic limits, or even beyond them, a
refined called Risk-based Security Assessment (RBSA) [1,2,3] has been developed. An important feature of this
approach is an index that quantitatively captures the basic factors that determine security level: likelihood and
severity of events. The use of this index provides that security level may be included in decision-making paradigms.
It is to this end that the work of this project is intended.
The main contribution of the work in this report is to show how RBSA can be included in formal decision-making
methods in order to select the “best” alternative or course of action accounting for the impact of network security.
Several different decision-making applications are explored in this context, and several different decision-making
paradigms are employed. We view this work as providing the foundation on which a "toolbox" of decision-making
methods will be coded, so that the decision-maker can pull out any one of them, or perhaps several, to provide
decision support in a variety of situations.
In Section 1.1, we briefly describe the RBSA approach to quantifying security level. We motivate this work in
Section 1.2 by describing the process used in the industry today for security-related decision-making. Section 1.3
identifies several different applications for risk-based decision-making.
Risk(Sev | X_t,f) = E(Sev(X_t) | X_t,f)
                  = Σ_i ∫_Xt Pr(E_i, X_t | X_t,f) × Sev(E_i, X_t) dX_t        (eq. 1-1)

Here the risk associated with the future operating condition at time t, X_t,f, is given by the expected value,
taken over the set of all possible uncertain events, of the (post-contingency) severity of the uncertain
operating condition X_t (the operating condition at time t). Another integration with respect to time (not
shown in eq. 1-1), over a specified time period, provides the basis for performing risk assessment for
planning. We emphasize that the "uncertain event" includes uncertainty in the contingency state1, E_i, as
well as in the future operating condition X_t.
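As a concrete illustration, eq. 1-1 can be evaluated numerically once the integral over operating conditions X_t is replaced by a sum over sampled conditions. The following sketch uses made-up contingency probabilities, condition samples, and a toy severity function; none of these numbers come from the report.

```python
# Numerical sketch of eq. 1-1 with the integral over operating
# conditions X_t replaced by a sum over sampled conditions. All
# probabilities and the severity function are invented for illustration.

def risk(contingencies, conditions, severity):
    """Sum over contingencies E_i and conditions X_t of
    Pr(E_i, X_t | X_t,f) * Sev(E_i, X_t)."""
    total = 0.0
    for name, p_e in contingencies:
        for x, p_x in conditions:
            # Assumes contingency and operating condition are
            # independent, so the joint probability factors as p_e * p_x.
            total += p_e * p_x * severity(name, x)
    return total

# The contingency set includes the no-outage state E_0, so the
# probabilities sum to 1 (see footnote 1).
contingencies = [("no_outage", 0.99), ("line_outage", 0.01)]
conditions = [(0.8, 0.6), (1.1, 0.4)]   # (per-unit loading, probability)

def severity(name, loading):
    # Toy severity: zero below the limit, linear above; an outage
    # raises the effective loading on the remaining circuit by 50%.
    effective = loading * (1.5 if name == "line_outage" else 1.0)
    return max(0.0, effective - 1.0)

print(round(risk(contingencies, conditions, severity), 4))  # 0.0434
```

With cost-based severity (Appendix A), the same computation yields an expected cost rather than a dimensionless index.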
The risk index is based on the elements of probability and severity. These elements also enable the calculation of the
variance. Variance characterizes the uncertainty of the risk index, and it can be important for good decision-making.
For example, an alternative for which the expected cost, or risk, is low, but the amount of potential variation from
the expected cost is great, may not be a better alternative than one where the expected cost is higher but potential for
variation is much smaller.
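The trade-off just described can be made concrete with a small numerical sketch; both alternatives below use invented outcome distributions.

```python
# Toy comparison of two alternatives by expected cost (risk) and by
# variance; the outcome distributions are invented.

def mean_var(outcomes):
    """outcomes: list of (cost, probability) pairs with probabilities summing to 1."""
    mean = sum(c * p for c, p in outcomes)
    var = sum(p * (c - mean) ** 2 for c, p in outcomes)
    return mean, var

alt_a = [(10.0, 0.95), (500.0, 0.05)]  # cheap on average, rare disaster
alt_b = [(40.0, 0.50), (50.0, 0.50)]   # dearer on average, predictable

m_a, v_a = mean_var(alt_a)   # mean 34.5, variance 11404.75
m_b, v_b = mean_var(alt_b)   # mean 45.0, variance 25.0
```

Alternative A has the lower expected cost but a far larger variance; a risk-averse decision-maker may reasonably prefer B, which is exactly why variance is worth carrying alongside the risk index.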
Severity assessment is highly influential for decision-making. In RBSA, there are at least two levels of severity
assessment. One level may be broadly identified as “rating-based” and the other level as “cost-based.” Rating-based
severity assessment establishes relatively simple severity functions that depend on deterministic criteria. For that
reason, they are preferred for operational security assessment where engineers prefer indices that reflect physical
attributes of the network that are easily understandable. These severity functions and the manner in which they are
developed are fully described in a report on the EPRI project called “Security Mapping and Reliability Index
Evaluation.” Another level of severity assessment is to assign an economic value to each possible outcome identified
as an impact. Then the corresponding risk has explicit economic meaning in that it represents the expected cost due
to possible insecurity problems. A disadvantage with the latter approach is that it introduces another layer of
uncertainty in translating the uncertain network performance to an even more uncertain associated cost of that
network performance. Yet the cost-based approach to impact evaluation provides the capability for quantification of
the cost uncertainty and for that reason may have advantages in planning.
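A rating-based severity function of the kind described above might look like the following sketch; the piecewise-linear shape and the 90% onset point are illustrative assumptions, not the functions developed in the cited EPRI report.

```python
def overload_severity(flow, rating, onset=0.9):
    """Continuous overload severity: 0 below onset*rating, rising
    linearly through 1.0 at the rating and beyond. The shape and the
    90% onset are illustrative choices, not the report's functions."""
    start = onset * rating
    if flow <= start:
        return 0.0
    return (flow - start) / (rating - start)

print(overload_severity(95.0, 100.0))   # 0.5: inside the warning band
print(overload_severity(110.0, 100.0))  # 2.0: past the rating
```

Such a function reflects a physical attribute of the network (proximity to a rating) directly, which is why rating-based severity is preferred for operational assessment.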
In either case, the resulting risk index may be used to provide a direct bridge between power system economics and
reliability, in that it is a means to explicitly include reliability in ordinary economic decision-making problems using
formal decision-making paradigms. We have included a summary of cost-based impact assessment in Appendix A.
We utilize this approach in illustrating most of the decision-making techniques described in this report.
One unique feature of RBSA that distinguishes it from traditional security assessment is that it is capable of
assessing uncertainties in the impact given the contingency state Ei and the operating condition Xt, using a
probabilistic model to account for uncertainties in Im(Ei , Xt). For line overload, the uncertainty is in the ambient
temperature, wind speed, and wind direction [4,5]. For transformer overload, it is in the ambient temperature and the
transformers' loading cycle [6,7]. For voltage security, it is in the interruption voltage level of the load [7]. For
dynamic (angle) security, it is in the fault type and fault location of the outaged circuit corresponding to contingency
state Ei [8,9,10]. Details on the related computations can be found in the references [4-11]. Appendix A describes
the impact assessment in more detail.
1 The set of contingency states {E_i, i = 0, ..., N} includes the possibility that the current state remains the
same, i.e., that an outage does not occur.
1.2 The Decision-making Approach in Industry Today
In today’s power industry, the traditional deterministic reliability criteria are still the basis for the operating and
planning decision-making. Within the electric network, an individual disturbance with non-zero cost consequences
may occur for any number of reasons or at any time in any system environment. The disturbance may result in
overload, voltage collapse or transient instability, and draw the prevailing system to an uncontrollable cascading
situation leading to wide spread power outage. To maintain system security under these uncertainties, some limits
must be satisfied regardless of the economic factor behind the system operation.
1.3 Applications for Risk-based Decision-making
Quantification of the security level via the risk calculation previously described offers us another approach for
decision-making in power systems. Below, we suggest a few typical applications where this approach is applicable.
This list is not exhaustive; additional applications are expected to be identified as the approach comes into use.
1.3.1 Operations
a. Unit commitment: In deciding whether to commit a unit to relieve a high
transmission flow, the operator would want to weigh the risk associated with the flow
against the cost of committing the additional unit.
b. Economic dispatch: Dispatching interconnected units to minimize production costs is often
constrained due to security limits. Traditionally, these limits have been hard. However, use
of hard limits sometimes results in acceptance of high energy costs even though the actual
risk may be very low. A “Risk” approach can identify and quantify these situations.
c. Market lever: Risk can function as a lever that adjusts the behavior of
market participants through an economic mechanism to avoid system security
problems, rather than through mandatory curtailment of transactions based on hard rules.
d. Preventive/corrective action selection: The preventive/corrective (P/C) action is very
important for maintaining the power system at an acceptable risk level. The selection of such
an action is a complicated decision-making process where the influence of an action must be
assessed for multiple problems, and frequently, what improves one problem may degrade
another one. Offering the best action or a possible action list will help the operator to
efficiently operate under highly stressed conditions. The traditional corrective/preventive
action selection is to solve an optimization problem, commonly known as the security
constrained optimal power flow (SC-OPF). The objective function is normally the
production cost, and the constraints include the power flow equalities and the limits on
component performance (branch flows, bus voltage limits, generator capability). In contrast,
we have formulated a risk-based optimal power flow (RB-OPF) based on the ability to
quantify risk. There are two different kinds of formulations for the RB-OPF, depending on
how risk is included:
Risk in the constraints: We may use a traditional objective function (e.g., production
costs) together with the power flow equality constraints, but rather than include limits on
branch and bus performance, we include limits on component risk. Alternatively, we
may include a limit on the system risk.
Risk in the objective: Here, we include the production costs together with the system risk
in the objective function. The only constraints modeled are the power flow equations
and the generator capability limits. Under this circumstance, the limits on bus voltage
and transmission line flow performance are not modeled as this influence is reflected in
the risk part of the objective.
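The "risk in the objective" formulation can be illustrated with a deliberately tiny two-generator dispatch, solved by brute-force grid search rather than a real OPF engine. All system data here (the linear cost curves, the 0.9 shift factor, the 80 MW line rating, and the quadratic risk surrogate) are hypothetical.

```python
# Tiny sketch of the "risk in the objective" RB-OPF formulation:
# minimize production cost plus weighted system risk, subject only to
# power balance and generator capability limits. All data are invented.

def dispatch(demand, risk_weight, step=1.0):
    """Return (total, g1, g2) minimizing cost + risk_weight * risk."""
    best = None
    g1 = 0.0
    while g1 <= 100.0:                    # G1 capability: 0..100 MW
        g2 = demand - g1                  # power balance fixes G2
        if 0.0 <= g2 <= 100.0:            # G2 capability: 0..100 MW
            cost = 10.0 * g1 + 20.0 * g2  # assumed linear cost curves
            flow = 0.9 * g1               # assumed shift factor onto one line
            # Risk surrogate: zero until flow nears the 80 MW rating,
            # then growing quadratically (mirrors a severity deadband).
            overuse = max(0.0, flow / 80.0 - 0.9)
            total = cost + risk_weight * 1e4 * overuse ** 2
            if best is None or total < best[0]:
                best = (total, g1, g2)
        g1 += step
    return best

# With no risk weighting the cheap unit runs flat out; adding the risk
# term backs it down off the line rating.
unconstrained = dispatch(150.0, 0.0)
risk_aware = dispatch(150.0, 1.0)
```

The "risk in the constraints" variant would instead keep the pure cost objective and reject any dispatch whose component or system risk exceeds a limit.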
These limitations are often given for various conditions; for example, a
transmission line typically has both a normal rating, which limits the
continuous current flow, and a 15 or 30 minute emergency rating, which
limits the flow for the corresponding amount of time. RBSA is very effective in
identifying different ratings for different durations and different
components.
b. Identifying operating limits: Operators must adhere to limits on transmission flows,
generation levels, load levels, and voltage levels. These limits, often complex functions of
several operating parameters, are driven by risk associated with normal conditions as well as
risk associated with potential outage conditions. RBSA can quantify these risks and provide
decision criteria for use in identifying them.
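The duration-dependent ratings mentioned above (a continuous normal rating plus 15- and 30-minute emergency ratings) reduce to a simple lookup; the MVA values here are illustrative, not real line data.

```python
# Hypothetical duration-dependent line ratings, echoing the normal
# vs. 15/30-minute emergency ratings above; MVA values are invented.

RATINGS = [                     # (max duration in minutes, MVA limit)
    (15, 130.0),                # 15-minute emergency rating
    (30, 120.0),                # 30-minute emergency rating
    (float("inf"), 100.0),      # normal (continuous) rating
]

def limit_for(duration_minutes):
    """Shortest-duration rating that still covers the requested time."""
    for max_minutes, mva in RATINGS:
        if duration_minutes <= max_minutes:
            return mva

print(limit_for(10), limit_for(20), limit_for(8 * 60))  # 130.0 120.0 100.0
```

RBSA would replace these fixed values with ratings derived from the risk each loading level and duration actually poses.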
In Chapter 3, a risk-based Optimal Power Flow (RB-OPF) is developed. The method assumes that the power demand
at each bus is random and normally distributed. There are two basic implementations of the RB-OPF. The first
implementation is to replace the traditional deterministic constraints with component risk functions. The advantage
here is that one can then solve the RB-OPF with individual component risk limits, regional risk limits, system risk
limits, or a combination of these various risk limits. The second implementation is to eliminate constraints altogether
and include the total risk in the objective function with the generation cost so that these two can be optimized
against each other.
Chapter 4 explores various decision-making paradigms for performing corrective and preventive action selection. In
the past, such actions have been selected based on the concepts introduced by Dy Liacco [13], where preventive
actions are selected to move the system from the alert state to the normal state, and corrective actions are selected to
move the system from the emergency state to the normal state. Thus, preventive/corrective actions require a decision
in terms of when to take action and which action to take. The basis for this decision has been the identification of the
alert or the emergency states in terms of deterministic criteria. The ability to compute risk and related measures
provides for various new decision-making paradigms in this arena. We have used a simple decision-making scenario
to test several variations on two basic types of decision-making methods. A single criterion method results from
combining economic measures with risk. There are a variety of such approaches that we describe. The Bayesian
decision tree is particularly effective as a tool that provides integration of additional information as it becomes
available. On the other hand, risk alone, as an expected value, does not completely describe the uncertainty inherent
to the decision. We may also use variance as an index, along with risk, yet its inclusion requires multi-criteria
decision-making, an approach that is also described in Chapter 4. Finally, Chapter 4 explores use of evidential
theory to deal with the corrective/preventive action selection problem. This theory provides an effective method to
process the uncertainty and has a special advantage of combining the opinions of different decision makers.
Chapter 5 addresses the issue of data gathering for use in applying probabilistic methods to decision-making. Rather
than discuss the mechanics of how to do it, we focus on the decision of whether to do it. Thus, the data-gathering
problem becomes a decision problem in itself. This decision requires assessment of the information cost to the
information value. The concept that underlies placing a dollar value on information is that the purpose of gathering
information is to reduce uncertainty. The anticipated change in uncertainty, measured by changes in probabilities,
results in changes in expected impacts (risks). The value of the information is determined by comparing the risk with
and without the additional information.
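This comparison is the classical expected value of (perfect) information. A minimal sketch, with invented probabilities and costs:

```python
# Valuing information by comparing risk with and without it: the
# expected value of perfect information. Probabilities and costs are
# invented for illustration.

priors = {"storm": 0.2, "calm": 0.8}          # states of nature

costs = {                                     # cost of action given state
    "reinforce":  {"storm": 50.0, "calm": 50.0},
    "do_nothing": {"storm": 300.0, "calm": 0.0},
}

def expected_cost(action):
    return sum(priors[s] * costs[action][s] for s in priors)

# Without information: commit to the single best action under the prior.
risk_without = min(expected_cost(a) for a in costs)

# With perfect information: choose the best action after the state is
# revealed, then average over states.
risk_with = sum(p * min(costs[a][s] for a in costs) for s, p in priors.items())

value_of_information = risk_without - risk_with   # about 40 here
```

If the forecast needed to reveal the state costs more than this difference, the rational decision is not to buy it; imperfect information is valued the same way, with posterior rather than revealed-state probabilities.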
References
[1] EPRI Final Report WO8604-01, "Risk-based Security Assessment," December 1998.
[2] J. McCalley, V. Vittal, and N. Abi-Samra, “Overview of Risk Based Security
Assessment,” Proc. of the 1999 IEEE PES Summer Meeting , July 18-22, 1999.
[3] J. McCalley, V. Vittal, N. Abi-Samra, "Use of Probabilistic Risk in Security
Assessment: A Natural Evolution," International Conference on Large High
Voltage Electric Systems (CIGRE), Selected by the CIGRE U.S. National
Committee for presentation at the CIGRE 2000 Conference, August, 2000,
Paris.
[4] H. Wan, J. McCalley, and V. Vittal, "Increasing Thermal Rating by Risk
Analysis," IEEE Trans. on Pwr Sys., Vol. 14, No. 3, Aug., 1999, pp. 815-828.
[5] J. Zhang, J. McCalley, H. Stern, and W. Gallus, “A Bayesian Approach to Short-
Term Transmission Line Thermal Overload Risk Assessment,” under review by IEEE
Transactions on Power Systems.
[6] W. Fu, J. McCalley, V. Vittal, "Risk-Based Assessment of Transformer
Thermal Loading Capability," Proc. of the 30th North American Power
Symposium, Cleveland, OH, Oct. 1998, pp. 118-123.
[7] W. Fu, J. McCalley, V. Vittal, “Transformer Risk Assessment,” to appear, IEEE
Transactions on Power Systems.
1-6
[8] H. Wan, J. McCalley, V. Vittal, "Risk-Based Voltage Security," to appear, IEEE
Trans. on Pwr Sys.
[9] J. McCalley, A. Fouad, V. Vittal, A. Irizarry-Rivera, B. Agrawal, R. Farmer, “A
Risk-based Security Index for Determining Operating Limits in Stability-
Limited Electric Power Systems” IEEE Trans. on Pwr. Sys., Vol. 12, No. 3, Aug.
1997, pp. 1210-1219.
[10] V. Van Acker, J. McCalley, V. Vittal, "Risk-Based Transient Instability," Proc.
of the 30th North American Pwr Symposium," Cleveland,OH., Oct. 1998.
[11] V. Van Acker, J. McCalley, V. Vittal, J. Pecas-Lopes, “Risk-Based Transient
Stability Assessment,” Proceedings of the Budapest Powertech Conference,
Budapest, Hungary, Sept. 1999.
[12] Barrett J Stephen, Motlis Yakov, “ Discussion for the paper: Increasing
Thermal Rating by Risk Analysis”, IEEE Transactions on Power System, Vol.
13, Aug, 1999.
[13] T. Dy Liacco, “System Security: The Computer Role,” IEEE Spectrum, Vol. 16,
No. 6, pp 48-53, June, 1978.
1-7
2
DECISION MAKING FOR OPERATIONS: COMPARISON BETWEEN RISK-BASED AND DETERMINISTIC SYSTEM OPERATING LIMITS
2.1 Introduction
In many countries today, the introduction of competitive supply and corresponding
organizational separation of supply, transmission, and system operation has resulted in
more highly stressed operating conditions, more vulnerable networks, and an increased
need to identify the operational security level of the transmission system. Here, we
regard security as the ability of the system to respond to contingencies in terms of the
branch loading, bus voltage, and dynamic response of the network. The determination
of the security level, for given operating conditions, traditionally has been done using
what we call the deterministic method. In this method, an operating condition is
identified as secure or insecure according to whether each and every contingency in a
pre-specified set, the contingency set, satisfies specified network performance criteria, the
performance evaluation criteria. If one or more contingencies are in violation, actions
are taken to move the operating condition into the secure region. If no contingencies are in
violation, then no action need be taken, or actions can be taken to enhance the economic
efficiency of the energy delivered to the end-users.
This simplicity made the deterministic method so attractive, and so useful, in the past. Today, however, with the
industry’s emphasis on economic competition, and with the associated increased
network vulnerability, there is a growing recognition that this simplicity also carries
with it significant subjectivity, and this can result in constraints that are not uniform
with respect to the security level. This suggests that the ultimate decisions that are
made may not be the “best” ones.
It is well known that probabilistic methods constitute powerful tools for use in many
kinds of decision-making problems. Therefore, today there is a great deal of interest in
using them to enhance the security-economy decision making problem. The US Western
Systems Coordinating Council (WSCC) is developing probabilistic reliability
criteria [1]. A recent CIGRE report [2] recommended further study of probabilistic
security assessment methods, and an ongoing CIGRE task force, 38.02.21, is
implementing this recommendation. There was a panel session dedicated to this subject
at the 1999 PES Summer Meeting [3-6]. Another panel session at this same meeting
focused on risk-based dynamic security assessment [7-11]. The theme of most of this
work is that security level can be quantitatively assessed using a probabilistic metric.
Although the industry has not reached a conclusion regarding which probabilistic
metrics are best, there is consensus that using them has potential to improve analysis
and decision-making.
Despite the perceived drawbacks of the deterministic method and the perceived
promise of probabilistic methods, we believe it prudent to proceed carefully in
embracing probabilistic security assessment for operations. Therefore, the objective of
this chapter is to compare probabilistic security assessment with deterministic security
assessment. The comparison is made with respect to the assessment results of each
method. In order to retain simplicity, we focus on overload and low voltage security.
Voltage and transient instability will not be addressed, although we believe that our
general conclusions are applicable to all forms of security problems.
This chapter is organized as follows. Sections 2.2 and 2.3 summarize our
implementations of the deterministic and probabilistic approaches, respectively, to
security assessment. Section 2.4 uses a simple 5-bus system for illustration. Section 2.5
gives results for a contrived constrained interconnection within the IEEE Reliability Test
System (RTS). Section 2.6 provides interpretation and explanation regarding the
differences in the results and the significance of these differences. Section 2.7 concludes.
A security study requires specification of the possible network configurations (i.e., network topology and unit commitment), a range of system operating conditions, a list of outage
events, and the performance evaluation criteria. Study definition requires careful thought and insight because the
number of possible network configurations, the range of operating conditions, and the number of conceivable outage
events are each very large, and exhaustive study of all combinations of them is generally not reasonable.
Consequently, the deterministic approach has evolved within the electric power industry to minimize study effort yet
provide useful results. This approach depends on the application of two criteria during study development:
Credibility: The combination of network configuration, outage event, and operating conditions is reasonably likely to occur.
Severity: The outage event, network configuration, and operating condition on which the decision is based result in
the most severe system performance, i.e., there should be no other credible combination of outage event, network
configuration, and operating condition which results in more severe system performance.
In this chapter, we are explicitly interested in studies conducted for the purpose of
identifying operational limits for use by the operator. In this case, the study focuses on a
limited number of operating parameters such as flows on major transfer paths,
generation levels, or load levels for a specific season. We call these the study
parameters. Application of the deterministic approach consists of the following basic
steps:
1. Develop power flow base cases corresponding to the time period (year, season) and loading conditions (peak,
partial peak, off peak) necessary for the study. In each base case, the unit commitment and network topology are
selected based on the expected conditions for the chosen time period. The topologies selected normally have all
circuits in service; here, credibility is emphasized over severity. Sometimes sensitivity studies are also performed if
weakened topologies are planned.
2. Select the contingency set. Normally this set consists of credible events for which post-contingency
performance could be significantly affected by the study parameters.
3. Identify the range of operating conditions, in terms of the study parameters, which are expected during the time
period of interest. We refer to this as the study range.
4. Identify the event or events that “first” violate the performance evaluation criteria as operational stress is
increased within the study range. We refer to these events as the limiting contingencies. If there are no such
violations within the study range, the region is not security-constrained, and the study is complete.
5. Identify the set of operating conditions within the study range where a limiting contingency “first” violates the
performance evaluation criteria. This set of operating conditions constitutes a line (for two study parameters), a
surface (for three) or a hypersurface (for more than three) that partitions the study range. We refer to this line,
surface, or hypersurface as the security boundary; it delineates between acceptable and unacceptable regions of
operation.
6. Condense the security boundary into a set of plots or tables that are easily understood and used by the operator.
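Steps 4 and 5 amount to a search over the study range for the operating points at which some contingency first violates the performance criteria. A minimal sketch, in which a made-up linear post-contingency flow model and a single flow limit stand in for actual power flow solutions of the study system:

```python
# Sketch of steps 4-5: find the limiting contingency and the boundary point
# in a one-dimensional study range. flow_after() is a hypothetical stand-in
# for a post-contingency power flow solution.

LIMIT_MW = 500.0
CONTINGENCIES = ["line5_out", "line6_out"]

def flow_after(transfer_mw, contingency):
    # assumed linear redistribution factors, one per contingency
    shift = {"line5_out": 1.3, "line6_out": 1.15}[contingency]
    return shift * transfer_mw

def first_violation(transfers):
    """Step 4: increase operational stress; return the first (transfer,
    contingency) pair that violates the performance criterion."""
    for x in transfers:
        for e in CONTINGENCIES:
            if flow_after(x, e) > LIMIT_MW:
                return x, e          # limiting contingency found
    return None                      # study range is not security-constrained

study_range = range(300, 501, 10)    # MW, the study range of step 3
boundary = first_violation(study_range)
```

With two or more study parameters, the same scan repeated over a grid traces out the boundary line or surface of step 5.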
Remark 1: There are a number of methods by which one can make the decision associated with step 4. One simple
and cautious approach is to evaluate points on the deterministic security boundary and utilize one of these values as
the threshold.
Remark 2: In the next section, we propose using the product of probability and severity, or risk, as the probabilistic
index. In this case, step 5 results in a contour or surface of constant risk.
Remark 3: The fact that step 6 does not change means that the operator sees no difference in how the two
approaches are presented.
Figure 2.1: Uncertainty due to operating conditions and contingency state
We compute the expectation of severity by summing over all possible outcomes the product of the outcome
probability and its severity. This measure corresponds to what has been called risk in many disciplines. In Figure 2.1,
if we assign probabilities to each branch, then the probability of each terminal state is the product of the probabilities
assigned to the branches that connect the initial state to that terminal state. If we assign severity values to each
terminal state, the risk can be computed as the sum over all terminal states of their product of probability and
severity, i.e.,
Risk(Sev | Xt,f) = Σi Σj Pr(Ei) Pr(Xt,j | Xt,f) × Sev(Ei, Xt,j)    (eq. 2.1)
Pr(Xt,j | Xt,f) is the probability of operating condition Xt,j at time t given that the forecasted operating condition in
time period t is Xt,f. Assuming we can forecast these operating conditions very well, it is appropriate to model the
probability distribution of Xt,j given Xt,f with a normal distribution having a mean equal to the forecast. Under this
assumption, the voltages and branch flows of Xt,j given a contingency follow the Multi-Variate-Normal (MVN)
distribution [10,12,13]. In this chapter, we consider only the risk caused by bus low voltage and line overload, so
under this circumstance, eq. (2.1) changes to:
Risk(Sev | Xt,f) = Σi=1..c Pr(Ei) × [ Σj=1..b ∫ Sevlv(Vj) Pr(Vj | Ei, Xt,f) dVj + Σk=1..l ∫ Sevol(Pk) Pr(Pk | Ei, Xt,f) dPk ]    (eq. 2.2)
where c, b, l are the total number of contingencies, buses, and branches, respectively. Pr(Vj | Ei, Xt,f) and Pr(Pk | Ei,
Xt,f) are the probability distributions of Bus j's voltage and Branch k's flow. Here, Pr(Ei) is the probability of
contingency i in the next time interval. The events Ei are assumed to be Poisson distributed so that,
Pr(Ei) = (1 − e^(−λi)) × e^(−Σj≠i λj)    (eq. 2.3)
Here, λi is the occurrence rate of contingency i per time interval.
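Under this Poisson assumption, (eq. 2.3) is a one-line computation from the occurrence rates. A small sketch with assumed rates per time interval:

```python
import math

# Sketch of (eq. 2.3): Pr(E_i) = (1 - exp(-lambda_i)) * exp(-sum of other lambdas),
# i.e., contingency i occurs and no other contingency does. Rates are assumed.

rates = {"line5_out": 0.001, "line6_out": 0.002, "gen1_out": 0.0005}

def pr_event(i, rates):
    others = sum(lam for j, lam in rates.items() if j != i)
    return (1.0 - math.exp(-rates[i])) * math.exp(-others)

probs = {i: pr_event(i, rates) for i in rates}
```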
Uncertainty Model 1
We consider only the uncertainty of contingencies and do not consider the uncertainty of operating conditions. This
means we assume that the mean of Xt equals the forecasted value and that its variance equals zero, implying that
the forecast has no error. This assumption is reasonable if the unit time interval is small. Under this
condition, the bus voltages and branch flows under operating condition Xt, given a contingency, are certain values, so
the total risk can be obtained in a simplified form of eq. (2.2), i.e.,
Risk(Sev | Xt,f) = Σi=1..c Pr(Ei) × [ Σj=1..b Sevlv(Vj | Ei, Xt,f) + Σk=1..l Sevol(Pk | Ei, Xt,f) ]    (eq. 2.4)
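A minimal sketch of (eq. 2.4), with assumed contingency probabilities, assumed deterministic post-contingency states, and simple piecewise-linear severity functions standing in for the severity models used in the study:

```python
# Sketch of (eq. 2.4): composite risk under Uncertainty Model 1 (contingency
# uncertainty only). Probabilities, states, and severity shapes are assumptions.

def sev_lv(v_pu):
    """Low-voltage severity: zero above 0.95 pu, rising linearly below it."""
    return max(0.0, (0.95 - v_pu) / 0.05)

def sev_ol(p_mw, limit_mw):
    """Overload severity: zero below the limit, rising linearly above it."""
    return max(0.0, (p_mw - limit_mw) / limit_mw)

# (Pr(E_i), post-contingency bus voltages in pu, branch flows in MW)
cases = [
    (0.001, [1.00, 0.93], [480.0]),
    (0.002, [0.98, 0.96], [550.0]),
]

risk = 0.0
for pr_e, voltages, flows in cases:
    severity = sum(sev_lv(v) for v in voltages) \
             + sum(sev_ol(p, 500.0) for p in flows)
    risk += pr_e * severity      # outer sum over contingencies in (eq. 2.4)
```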
Uncertainty Model 2
2.4 Case Study for Five Bus Test Case
[Figure: one-line diagram of the five-bus test system (buses 1-5, lines 1-8)]
2. Post-contingency under-voltage at Bus 4 due to the outage of Line 5.
3. Post-contingency overload of Line 7 due to the outage of Line 6.
In step 5, we identify the security boundary in the space of the study parameters. Figure
2.4 illustrates the deterministic security boundary (bold lines).
Figure 2.5a: Risk Indices with Discrete Severity Functions & Uncertainty Model 1
(contingency uncertainty only)
Figure 2.5b: Risk Level for Points A-E along the Deterministic Boundary
Observation 3: The influence of contingency probability is also apparent in Figure 2.6,
as the 0.001 risk contour indicates points B, C and D are significantly higher risk points
than points A and E, as indicated in Figure 2.5b.
Figure 2.6: Risk Indices with Continuous Severity Functions & Uncertainty Model 1
(contingency uncertainty only)
Figure 2.7: Risk Indices with Continuous Severity Functions & Uncertainty Model 2 (operating condition and
contingency uncertainty)
In this section, we use a modified version of the IEEE Reliability Test System (RTS) [14]
for the comparison. Figure 2.8 shows the system. As indicated in this figure, the system
has been divided into three areas. The basic idea is that significant north-to-south
transfer causes high flow through area 2 and the interconnections between areas 1 and
3, and it heavily affects some corresponding overload and voltage problems. Area 2 can
alleviate the severity of these problems by shifting generation from its bus 23 to its bus
13. Thus the study parameters are the total north-to-south flow and the bus 23
generation. The parameters are varied according to:
∆P23 = −∆P13    (eq. 2.5)
∆Parea3 = −∆Parea1    (eq. 2.6)
We describe the first three steps of the assessment procedure since they are common to both deterministic and
probabilistic approaches.
In step 1, the analyst constructs the base case according to the expected system
conditions. In this case, since we use a well-known test system, we describe only the
changes that were made from the data reported in [14]. These changes were made so as
to contrive a security-constrained region and include:
• Line 11~13 is removed.
• Set the terminal voltage of the Bus 23 generator to 1.012 pu and of the Bus 15 generator to 1.045 pu.
• Shift 480 MW of load from buses 14, 15, 19, and 20 to bus 13.
• Add generation capacity at buses 1 (100 MW unit), 7 (100 MW unit), 15 (100 MW unit, 155 MW unit), and 23 (155 MW unit).
• Change the outage rates of Lines 12~23, 13~23, and 11~14 to 0.1, 1.5, and 10, respectively, so that their
outage rates differ significantly.
In step 2, the contingency set is limited to N-1 contingencies anywhere in the system
that might cause overload or voltage problems limiting the north-to-south transfer.
This set includes:
• Circuit outages:
12~23 out; 13~23 out; 12~13 out; 15~24 out; 14~11 out; 20~23 out; 14~16 out; 12~9 out; 12~10 out
• Generator outages:
350 MW unit at bus 23; 197 MW unit at bus 13;
400 MW unit at bus 21; 100 MW unit at bus 7
Step 3 requires the identification of the parameter ranges. They are:
1. Generation at bus 23: 303 MW ~ 903 MW.
2. North-south flow (i.e., the combined active power flow on lines 15⇒24, 14⇒11, 23⇒12, and 13⇒12): 455 MW ~ 1100 MW.
Figure 2.9: Deterministic Security Boundary
Figure 2.10: Risk Indices with Discrete Severity Functions and Uncertainty Model 1
(contingency uncertainty only)
Observation 2: From Figures 2.11 and 2.12, we observe that the use of the continuous
severity functions results in continuous variation in risk throughout the operating
range. The iso-risk curves in Figures 2.11 and 2.12 are consistent with the risk value
change along the deterministic boundary in Figure 2.10.
Observation 3: The use of continuous severity functions causes the non-zero risk inside
the deterministic boundary of Figure 2.11, and it is a contributing reason for the non-zero
risk inside the deterministic boundary of Figure 2.12. Use of this severity function is
also the reason why, for a particular operating condition inside the deterministic boundary,
the risk index of Figure 2.12 is higher than that of Figure 2.11. In Figure 2.12, the non-
zero risk inside the deterministic boundary is also caused by the modeling of
uncertainty in operating conditions, since the risk evaluation of a point inside the
deterministic boundary is also affected by the system performance at points outside
the boundary.
Figure 2.11: Risk Indices with Continuous Severity Functions & Uncertainty Model 1
(contingency uncertainty only)
Observation 4: Comparing Figure 2.11 with Figure 2.10, we can see that the risk value based on the continuous
severity function is larger than that based on the discrete severity function.
Figure 2.12: Risk Indices with Continuous Severity Functions & Uncertainty Model 2
(contingency and operating condition uncertainty)
2.6 Discussion
Based on the analysis in the last two sections, we observe that the deterministic
boundary does not necessarily result in constant risk, and that there are a number of
subtle influences captured by the iso-risk curves that are not captured by the
deterministic approach:
1. Effect of outage probability.
The deterministic approach assumes all contingencies (in the contingency set) are
equally probable, but the probabilistic approach distinguishes between them. Thus,
there may be some situations where a deterministic violation is in fact very low risk
because the outage probability is extremely low. There may be other situations where a
deterministic violation contributes very high risk because of a very high outage
probability.
2. Effect of non-limiting events and problems
The deterministic approach assesses only the most restrictive contingencies and corresponding problems; i.e., it does
not recognize the influence on security level of less restrictive contingencies or problems. On the other hand, the
probabilistic approach does capture the increased risk caused by multiple constraints as it sums risk associated with
all contingencies and problems, i.e., the probabilistic approach is capable of composing risk from multiple events
and multiple problems and it reflects the total composite risk and not simply that from the single most restrictive
event.
3. Effect of violation severity
The deterministic approach considers all violations unacceptable; this implies that all violations are equally
severe. The probabilistic approach, however, distinguishes between different severities. Thus, there may be some situations
where a deterministic violation in fact contributes very low risk because the violation severity is extremely low.
There may be other situations where a deterministic violation contributes very high risk because of a very high
violation severity.
4. Effect of uncertainty in operating conditions
The deterministic approach cannot address uncertainty in operating conditions, which is a practical and unavoidable
problem when assessing security for a future time period. This influence is especially important when small variations in
operating conditions cause large deviations in performance.
2.7 Conclusion
The study reported in this chapter has compared the traditional deterministic security
assessment approach, as used for many years in industry, with an alternative approach
based on probabilistic risk. Although deterministic assessment is simple in concept and
application, results based on it can be misleading, as it does not capture the effect of
outage likelihood, non-limiting events and problems, violation severity, and uncertainty
in operating conditions. These effects can significantly influence the risk evaluation of a
near-future operating condition. Given the high frequency of stressed conditions
observed in many control centers today, it is clear that on-line control is a continuous
decision-making problem for the operator. We believe that the probabilistic risk based
security evaluation approach will serve well in this kind of environment.
References
[1] L. H. Fink, "Security: its meaning and objectives," Proc. of the Workshop on Power System Security
Assessment, pp. 35-41, Ames, Iowa, April 27-29, 1988.
[2] Mohammed J. Beshir, “Probabilistic based transmission planning and operation criteria development for
the Western Systems Coordinating Council”, Proc. of the 1999 IEEE PES summer meeting, presented at
the 1999 IEEE PES summer meeting panel session on Risk-Based Dynamic Security Assessment,
Edmonton, Canada.
[3] CIGRE task force 38.03.12, “Power System Security Assessment: A Position Paper”, June 1997.
[4] Y. Schlumberger, C. Lebrevelec, M. de Pasquale "An Application of a Risk Based Methodology for
Defining Security Rules Against Voltage Collapse", Proc. of the 1999 IEEE PES summer meeting,
presented at the 1999 IEEE PES summer meeting panel session on Risk-Based Dynamic Security
Assessment, Edmonton, Canada.
[5] Abed, "WSCC Voltage Stability Criteria, Undervoltage Load Shedding Strategy, and Reactive Power
Reserve Monitoring Methodology," Proc. of the 1999 IEEE PES summer meeting, presented at the 1999
IEEE PES summer meeting panel session on Risk-Based Dynamic Security Assessment, Edmonton,
Canada.
[6] A. M. Leite da Silva, J. Jardim, A. M. Rei, J. C. O. Mello, "Dynamic Security Risk Assessment," Proc. of the
1999 IEEE PES summer meeting, presented at the 1999 IEEE PES summer meeting panel session on Risk-
Based Dynamic Security Assessment, Edmonton, Canada.
[7] J. Momoh, M. Elfayoumy, W. Mittelstadt, Y. Makarov,"Probabilistic Angle Stability Index", Proc. of the
1999 IEEE PES summer meeting, presented at the 1999 IEEE PES summer meeting panel session on Risk-
Based Dynamic Security Assessment, Edmonton, Canada.
[8] S. Aboreshaid, R. Billinton, "A Framework for Incorporating Voltage and Transient Stability
Considerations in Well-Being Evaluation of Composite Power Systems", Proc. of the 1999 IEEE PES
summer meeting, presented at the 1999 IEEE PES summer meeting panel session on Risk-Based Dynamic
Security Assessment, Edmonton, Canada.
[9] J. McCalley, V. Vittal, N. Abi-Samra, "An Overview of Risk Based Security Assessment", Proc. of the
1999 IEEE PES summer meeting, presented at the 1999 IEEE PES summer meeting panel session on Risk-
Based Dynamic Security Assessment, Edmonton, Canada.
[10] J. McCalley, V. Vittal, H. Wan, Y. Dai, N. Abi-Samra,"Voltage Risk Assessment", Proc. of the 1999 IEEE
PES summer meeting, presented at the 1999 IEEE PES summer meeting panel session on Risk-Based
Dynamic Security Assessment, Edmonton, Canada.
[11] V. Vittal, J. McCalley, V. Van Acker, W. Fu, N. Abi-Samra, "Transient Instability Risk Assessment," Proc.
of the 1999 IEEE PES summer meeting, presented at the 1999 IEEE PES summer meeting panel session on
Risk-Based Dynamic Security Assessment, Edmonton, Canada.
[12] G. Casella and R. L. Berger, Statistical Inference, Pacific Grove, CA: Brooks/Cole Publishing Co., 1990.
[13] H. Wan, "Risk-based security assessment for operating electric power systems," Ph.D. dissertation, Iowa
State University, 1998.
[14] IEEE Reliability Test System Task Force of the Application of Probability Methods Subcommittee, "The
IEEE Reliability Test System – 1996," IEEE Transactions on Power Systems, Vol. 14, No. 3, 1999, pp. 1010-1018.
3
RISK BASED OPTIMAL POWER FLOW
3.1 Introduction
The purpose of an optimal power flow (OPF) is to schedule power system controls to
optimize an objective function while satisfying a set of nonlinear equality and
inequality constraints. The scheduling of these controls is actually a decision-making
effort, and as such, we recognize the OPF as a fundamental decision-making tool for
power system engineers. In this chapter, we explore ways of using probabilistic risk to
improve on the traditional OPF. We call the result of these efforts the risk-based OPF
(RB-OPF). We will see that a significant difference between the OPF and the RB-OPF lies
in the nature of the constraints used in the problem.
Examples of the equality and inequality constraints used in a traditional OPF include
generation/load balance, bus voltage limits, power flow equations, branch flow limits
(including both transmission line and transformer), active/reactive reserve limits, and
limits on all control variables [1]. The following is a simplified deterministic OPF
problem with no discrete variables or controls [2].
min Σi Ci(Pgi)
subject to
Pi − Σj ViVjYij cos(θij + δj − δi) = 0
Qi + Σj ViVjYij sin(θij + δj − δi) = 0
|Sij| ≤ Sij,max
Vi,min ≤ Vi ≤ Vi,max
Pgi,min ≤ Pgi ≤ Pgi,max
Qgi,min ≤ Qgi ≤ Qgi,max
Within this chapter, we will refer to the above problem as problem 0. The objective
function is the total cost of real generation. The first two constraints are the power flow
equations. The third is the branch power flow limit constraint. The fourth is the bus
voltage constraint. The last two are the active and reactive power generation constraints.
The traditional security constrained OPF (SCOPF) should also include constraints that
represent operation of the system after N-1 contingency outages, where the system is
operated so that if a contingency occurs, the resulting branch flow and bus voltages
would still be within the emergency voltage and emergency thermal limits prior to
system readjustment [1,3]. In order to include these constraints and avoid heavy
computation, a set of credible contingencies [1] is formed, and corresponding post-
outage constraints are added to the OPF constraints.
It is well known that probabilistic methods constitute powerful tools for use in many
kinds of decision-making problems. Therefore, today there is a great deal of interest in
using them to enhance the security-economy decision making problem. The risk based
Optimal Power Flow (RB-OPF) assumes that power demand in each bus is random and
normally distributed with the forecasted value as its mean and some variance. Credible
contingencies are also taken into account by incorporating them into component risk
functions, which are used to replace the traditional deterministic constraints. There are
three ways to form the risk-based OPF: set an individual risk limit on each component,
set an overall system risk limit, or treat the system risk as part of the objective.
In Section 3.2, the system composite risk assessment for thermal overload and bus
voltage out-of-limit are developed. Section 3.3 gives the new formulations of the risk
based OPF. Section 3.4 describes the algorithms used in this study. Section 3.5 gives
some case studies. Conclusions are drawn in Section 3.6.
Many studies have been done to develop a probabilistic load flow [7-13]. The first paper
dealing with probabilistic load flow was published in 1974 by Borkowska [10], in which
it is assumed that branch flows are linear combinations of net nodal injections, and that
power balance is a function of the sum of power injections only (i.e., no losses). Dopazo
et al. introduced the concept of stochastic load flow [11], commonly referred to as the
AEP approach, which assumed a linearized power flow with additive noise of zero
mean and some covariance matrix. The estimation task is carried out using a
weighted least squares minimization objective. The solution is obtained using iterative
techniques to solve the resulting optimality conditions.
In this study, we use a simplified AEP approach [7,9]. The approach is based on
following assumptions:
1. All bus loads, branch flows, and bus voltage magnitudes are normally
distributed.
2. A linearized model of the system can be used around the expected value of the
bus loads.
The above assumptions have been shown to be reasonably accurate [7], and they result
in great simplification of the computational procedure [11].
Let
− Cpp = E[(PL − P̄L)(PL − P̄L)^T] be the covariance matrix of the active bus loads.
− Cqq = E[(QL − Q̄L)(QL − Q̄L)^T] be the covariance matrix of the reactive bus loads.
− Cpq = E[(PL − P̄L)(QL − Q̄L)^T] be the covariance matrix of the bus loads between active and reactive power.
− T̄ be the vector of branch loadings at the operating point (assuming system load to be equal to the expected value).
− V̄ be the vector of bus voltage magnitudes at the operating point (assuming system load to be equal to the expected value).
− ∆T = T − T̄, ∆V = V − V̄, ∆PL = PL − P̄L, ∆QL = QL − Q̄L.
− A be the Jacobian matrix of sensitivities of branch flows and bus voltages to bus loads,
A = [ ∂T/∂PL   ∂T/∂QL
      ∂V/∂PL   ∂V/∂QL ]
The derivation of the Jacobian matrix A for the linearized model can be found in [9].
The linearized model of branch flows and bus voltages versus bus loads is
[∆T; ∆V] = A [∆PL; ∆QL]    (eq. 3-1)
with
E[∆T; ∆V] = 0    (eq. 3-2)
Cov[∆T; ∆V] = A CPQ A^T    (eq. 3-3)
Then the probabilistic load flow algorithm used in this study is as follows [9]:
Step 1: Solve a deterministic load flow assuming loads are equal to the expected
values PL and QL . We obtain the expected values for the branch flows, T , and
the bus voltages, V .
Step 3: Compute the covariance matrix of the branch flows and bus voltages by
using equation (eq. 3-3). This calculation, together with the expected values
found in step 1, provides the information necessary to characterize distributions
for the branch flows and the bus voltages.
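The covariance calculation in this step is a single matrix product, (eq. 3-3). A sketch with an assumed 3x4 sensitivity matrix A (two branch flows and one bus voltage versus four load components) and independent load forecast errors:

```python
import numpy as np

# Sketch of (eq. 3-3): propagate the bus-load covariance through the linearized
# model to obtain the covariance of branch flows and bus voltages.
# A and the load variances are illustrative assumptions.

A = np.array([[ 0.60,  0.30,  0.10,  0.00],    # dT1 / d[P1, P2, Q1, Q2]
              [ 0.20,  0.70,  0.00,  0.10],    # dT2 / d[P1, P2, Q1, Q2]
              [-0.01, -0.02, -0.05, -0.04]])   # dV1 / d[P1, P2, Q1, Q2]

# Diagonal covariance: independent forecast errors for [P1, P2, Q1, Q2].
C_PQ = np.diag([4.0, 9.0, 1.0, 1.0])

cov_TV = A @ C_PQ @ A.T                  # covariance of [T1, T2, V1], (eq. 3-3)
std_T1 = float(np.sqrt(cov_TV[0, 0]))    # standard deviation of branch 1 flow
```

Together with the expected values from step 1, the diagonal of cov_TV characterizes the normal distribution of each branch flow and bus voltage.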
The Risk-Flow curve is created for each transformer based on its local weather
conditions and physical properties. Figure 3-3 shows an example of such a curve
developed in [6].
When developing Risk-Flow curves, one needs to account for the impact of cascading.
Here, we have assumed a very high cost for the cascading cost component for high
flows. This is a very rough approach. It could be refined by including a Risk-Flow curve
that accounts for only the impact on the circuit itself and another curve that accounts for
the cascading impact on the system. The latter curve would then depend on system
conditions. In this study, we did not consider the effect of cascading.
Thermal limit risk in probabilistic load flow
In the above component level risk calculation for transmission line and transformer, the
branch flows are given deterministically. In the probabilistic load flow, as we assume
that the load is uncertain, the branch flow is also uncertain. If we define the component
risk for a given branch flow on branch i (line or transformer) as Risk(Si), the system risk
for branch i, RiskTi(Si), is given as the expectation of the component risk over the
uncertain flows on branch i, i.e.,
RiskTi(Si) = ∫_{−∞}^{+∞} Pr(Si) Risk(Si) dSi    (eq. 3-4)
where
− Si is the flow on branch i.
The Risk-Voltage curve is created for each bus in a transmission network according to
its local load mix. Figures 3-4 and 3-5 show examples of such curves for a 138kV bus and
a 230kV bus, respectively.
Figure 3-4 Risk-Voltage Curve for 138kV Bus
Similar to thermal risk, in the probabilistic load flow, the voltage is also uncertain. If we
define the component risk for a given voltage as Risk(Vj), the bus voltage risk RiskVj(Vj)
in a probabilistic load flow is
RiskVj(Vj) = ∫_{−∞}^{+∞} Pr(Vj) Risk(Vj) dVj    (eq. 3-5)
where
− Vj is the voltage on bus j.
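The expectation in (eq. 3-5) (and likewise (eq. 3-4)) can be approximated by integrating the component risk curve against the normal density produced by the probabilistic load flow. The curve below is a made-up stand-in, not the report's 138 kV Risk-Voltage curve, and the integral is a simple Riemann sum:

```python
import math

# Sketch of (eq. 3-5): expected voltage risk E[Risk(V_j)] with V_j normal.
# risk_curve() is an assumed placeholder for a component Risk-Voltage curve.

def risk_curve(v_pu):
    return max(0.0, (0.95 - v_pu) * 100.0)

def normal_pdf(v, mu, sigma):
    return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def expected_risk(mu, sigma, n=4001, width=6.0):
    """Riemann-sum approximation of the integral over mu +/- width*sigma."""
    lo = mu - width * sigma
    dv = 2.0 * width * sigma / (n - 1)
    return sum(risk_curve(lo + k * dv) * normal_pdf(lo + k * dv, mu, sigma) * dv
               for k in range(n))

# A mean voltage of 0.97 pu looks safe, yet uncertainty leaves the risk nonzero.
r = expected_risk(mu=0.97, sigma=0.02)
```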
To account for the effect of OLTC (on-load-tap-changer) on the load, when developing
the Risk-Voltage curve, one needs to identify the upper side bus voltage when the
transformer load tap hits its limit and how the lower side bus voltage changes after the
load tap hits its limit, or alternatively, one can develop the Risk-Voltage curve based on
relationship between lower side voltage and load first, and then transform the curve to
the upper side voltage by taking into account the OLTC effect.
RiskVj(Vj) = Σk=1..N Pr(k) ∫_{−∞}^{+∞} Pr(Vjk) Risk(Vjk) dVjk    (eq. 3-7)
where
− k represents the state of the system; k=1 is the base case, and k>1 denotes the (k−1)th post-contingency configuration.
− Pr(k) is the probability that the system is in the kth state in the next hour.
Based on component risk functions developed in the previous sections, the risk
constrained optimal power flow can be formulated as one of the following three
problems.
min Σi Ci(Pgi)
subject to
Pi − Σj ViVjYij cos(θij + δj − δi) = 0
Qi + Σj ViVjYij sin(θij + δj − δi) = 0
RiskTi(Si) ≤ RiskT0
RiskVj(Vj) ≤ RiskV0
Pgi,min ≤ Pgi ≤ Pgi,max
Qgi,min ≤ Qgi ≤ Qgi,max
where RiskTi(Si) is the transmission line and transformer thermal risk computed by (eq.
3-6), RiskVi(Vi) is the bus voltage out-of-limit risk computed by (eq. 3-7), and RiskT0 and
RiskV0 are the assumed maximum risk values tolerated by the system operator. It
should be noted that all the variables in the objective functions and constraints are
expected values.
subject to
Pi − Σj ViVjYij cos(θij + δj − δi) = 0
Qi + Σj ViVjYij sin(θij + δj − δi) = 0
Σi RiskTi(Si) + Σj RiskVj(Vj) ≤ RiskTV0
Pgi ,min ≤ Pgi ≤ Pgi ,max
Q gi ,min ≤ Q gi ≤ Q gi ,max
subject to
Pi − Σj ViVjYij cos(θij + δj − δi) = 0
Qi + Σj ViVjYij sin(θij + δj − δi) = 0
Pgi ,min ≤ Pgi ≤ Pgi ,max
Q gi ,min ≤ Q gi ≤ Q gi ,max
0 ≤ ω1 ≤ 1
0 ≤ ω2 ≤ 1
ω1 + ω2 = 1
Here ω1 and ω2 are weighting coefficients whose values can reflect the system operator's
attitude towards generation cost and risk.
Linear programming based OPF methods are widely adopted today in the industry
[1,17]. In this section, we describe how to solve the above risk constrained OPF using
a successive linear programming (SLP) algorithm.
The OPF problem (eq. 3-6) can be rewritten in the following compact form,
subject to
g1 ( x1 , x2 ) = 0
g 2 ( x1 , x2 ) ≤ 0
Step 1: Choose an initial point x^0 and an initial step size limit Δ.
Step 2: Solve the equality constraint equations (using the probabilistic load flow).
Step 3: Linearize the problem around x^k and solve the resulting LP for Δx.
min (∂f/∂x)|_{x=x^k} · Δx (eq. 3-12)
subject to
(∂g/∂x)|_{x=x^k} · Δx ≤ −g(x^k)
−Δ ≤ Δx ≤ Δ
Step 4: Update the solution, x^{k+1} = x^k + Δx.
Step 5: Check the termination criteria:
‖∂L/∂x‖ = ‖∂f/∂x + λᵀ (∂g/∂x)‖ ≤ tolerance1
g(x) ≤ tolerance2
‖Δx‖ ≤ tolerance3
Step 6: If the criteria are not satisfied, adjust the step size limit Δ based on the
trust region algorithm [18] and go to Step 2.
For the termination criteria given in step 5, λ is the vector of Lagrange multipliers of
the LP problem. The first condition pertains to the size of the gradient, the second to the
violation of the constraints, and the third to the step size.
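The steps above can be sketched generically. This is a minimal illustration on a toy constrained problem, not the report's OPF implementation; the trust-region update of Step 6 is simplified to an accept/shrink rule driven by a penalty merit function:

```python
import numpy as np
from scipy.optimize import linprog

def slp(f, grad_f, g, grad_g, x0, delta=0.5, iters=60, tol=1e-8):
    """Successive linear programming: linearize f and the (scalar)
    inequality constraint g around x_k, solve the LP for a step dx with
    |dx_i| <= delta, accept the step only if a penalty merit function
    improves, otherwise shrink delta (cf. Steps 2-6 above)."""
    x = np.asarray(x0, dtype=float)
    merit = lambda y: f(y) + 100.0 * max(g(y), 0.0)  # penalize violation
    for _ in range(iters):
        res = linprog(grad_f(x),
                      A_ub=np.atleast_2d(grad_g(x)),
                      b_ub=[-g(x)],
                      bounds=[(-delta, delta)] * x.size)
        if res.success and merit(x + res.x) < merit(x) - 1e-12:
            x = x + res.x          # accept the linearized step
        else:
            delta *= 0.5           # reject: shrink the step-size limit
        if delta < tol:
            break
    return x

# Toy problem: min -x1 - x2  s.t.  x1^2 + x2^2 - 2 <= 0  (optimum at (1, 1))
f      = lambda x: -x[0] - x[1]
grad_f = lambda x: np.array([-1.0, -1.0])
g      = lambda x: x[0]**2 + x[1]**2 - 2.0
grad_g = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])

x_opt = slp(f, grad_f, g, grad_g, x0=[0.5, 0.5])
```

In the risk-based OPF, f would be the expected generation cost and g would collect the component risk constraints; the same linearize/step/shrink loop applies.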
Line A2 1 3 5.82e-05
Line A5 2 6 5.48e-05
Also we used the same component risk functions developed in Section 3.2.2 for all the
components in the system. For example, the risk function of each 138kV line is the risk
function shown in Figure 3-1 times its length, and the risk function of each 138kV bus is
the risk function shown in Figure 3-4 times the load on that bus. These values are only
used to illustrate our method. For practical usage, the risk function for each component
should be developed individually depending on its own location, weather, and load
conditions.
In this case, no branch thermal limits are binding. The Lagrange multipliers
associated with the binding voltage limits are shown in Table 3-7. Since the units of the
objective function and the limits are the same, these multipliers directly indicate how
much the objective would improve if these limits were relaxed; they are therefore useful
in identifying the most effective means of improving the objective.
Figure 3-7 shows the relationship between the total generation cost and the component
risk limit. Relaxing the component risk limit reduces the generation cost, but with
diminishing returns.
Table 3-3 Thermal Risk for Deterministic Constrained Case
1 155.84 13 1729.37
2 208.33 14 1284.86
3 0.00 15 73.32
4 0.00 16 43.62
5 0.00 17 0.00
6 0.00 18 212.78
7 0.00 19 117.80
8 0.00 20 149.55
9 0.00 21 0.00
10 0.00 22 0.00
11 0.00 23 0.00
12 0.00 24 0.00
Table 3-5 Thermal Risk for Risk Constrained Case
1 116.83 13 300.00
2 111.01 14 300.00
3 0.00 15 128.59
4 0.00 16 49.72
5 0.00 17 0.00
6 0.00 18 300.00
7 0.00 19 206.19
8 0.00 20 300.00
9 0.00 21 0.00
10 0.00 22 0.00
11 0.00 23 0.00
12 0.00 24 0.00
3.6 Conclusions
In this chapter, a risk-based optimal power flow is developed. The method assumes that
the power demand at each bus is random and normally distributed, with the forecasted
value as its mean and an assumed variance. The uncertainties associated with load
characteristics, weather conditions, and contingencies are incorporated into the
component risk functions. The traditional deterministic inequality constraints, such as
branch thermal limits and bus voltage limits, are replaced by probabilistic risk
functions for each transmission line, transformer, and bus. A successive linear
programming algorithm is adopted to solve the risk-based OPF problem in this study.
Risk-based OPF provides a useful decision-making tool to help the system operator
balance system risk and cost.
Table 3-8 Solution to Problem 3
References
[1] M. Huneault and F. D. Galiana, "A Survey of the Optimal Power Flow Literature,"
IEEE Transactions on Power Systems, Vol. 6, No. 2, pp. 762-768, May 1991.
[2] IEEE tutorial course, Optimal Power Flow: Solution Techniques, Requirements, and
Challenges. IEEE Power Engineering Society, 96 TP 111-0.
[4] Mid-Continent Area Power Pool (MAPP) System Design Standards, Mid-Continent
Area Power Pool, December 1994.
[5] J. Chen and J. McCalley, “Comparison Between Deterministic and Probabilistic
Study Methods in Security Assessment for Operations,” to appear in Proceedings of the
VI International Conference on Probabilistic Methods Applied to Power Systems,
September 2000, Madeira Island, Portugal.
[6] EPRI final report WO8604-01, "Risk-Based Security Assessment," December 1998.
[11] F. Dopazo, O. A. Klitin, and A. M. Sasson, "Stochastic Load Flows," IEEE
Transactions on Power Apparatus and Systems, Vol. PAS-94, No. 2, 1975, pp. 299-309.
[18] R. Fletcher, Practical Methods of Optimization, 2nd Edition, John Wiley & Sons,
pp.95-96.
[19] IEEE Task Force Report, "The IEEE Reliability Test System - 1996," 96 WM 326-9
PWRS.
[20] Youjie Dai, “Framework for Power System Annual Risk Assessment,” Ph.D.
Dissertation, Iowa State University, 1998.
4
DECISION MAKING FOR OPERATIONS
4.1 Introduction
According to traditional security assessment, the state of the power system can be
assigned to one of the following sets: normal, alert, emergency, and restorative. When a
system is in the alert state, some preventive actions must be taken; when the system is
in the emergency state, some corrective actions must be adopted. In this chapter, we
propose that the risk level of the system can be used to identify when, to select which,
and to determine how much preventive or corrective action should be taken. The
operator thus controls the system according to its risk value: if the risk of the system
is too high, the operator should take actions to reduce it. Such an action is called a
preventive/corrective (P/C) action here. How to select an efficient P/C action is a
decision-making problem.
It has always been a challenge in system operation to find the optimal or satisfactory
balance between two generally opposing objectives: obtaining the maximal return of a
system given the current configuration and available infrastructure, versus minimizing
the adverse effects of possible security problems. The choice is limited by post-
contingency system performance limits specified by reliability criteria, which impose
restrictions on the pre-contingency operating conditions. A list of credible contingencies
is screened, and those contingencies that, should they occur, would lead to violations of
the performance criteria are selected for further analysis. The selected contingencies
impose limits on a number of operating parameters, such as circuit flows, generation
levels, and bus voltage magnitudes; operating beyond these limits may lead to security
problems if the contingencies occur.
In [1] the concept of risk was introduced: it links the economics of an operating point
with the security aspects associated with it. By evaluating the risk at operating points
lying on the deterministic operating limits, it is possible to show that they generally do
not have equal risk, as illustrated in Chapter 2. This is mainly because the limits are
imposed by different contingencies, each having a different probability of occurrence
and a different security impact. These arguments lead to the
conclusion that there is some risk inconsistency when using the deterministic operating
limits as security criteria. An illustration of this kind of inconsistency is depicted in
Figure 4-1. The discontinuous line represents the deterministic operating limits, while
the curved line connects the points with the same level of risk. Assuming risk increases
with the distance from the origin, it can clearly be observed that some points in the
secure region have a risk greater than points on the iso-risk curve, while some operating
points outside the safe region have a risk value lower than the risk of the contour.
Figure 4-1 Inconsistency of deterministic limits: in the (flow 1, flow 2) plane, the
deterministic limit separates the secure and insecure regions, while the iso-risk contour
connects operating points of equal risk.
The risk index defined in Chapter 1 provides more insight to the operator on the
expected financial consequences of operating at a particular point. It contains
information on the probability of eventual insecure contingencies1 as well as an
estimate of what it will cost if an insecure contingency turns out to be the true one.
The operator has several ways to use this information, and an obvious one is
determining operating limits based on risk. This corresponds to choosing a maximum
risk level at which the system operator is willing to operate. Other possibilities include
using risk to optimize the operating trajectory in the near future, where risk is included
as an attribute in the objective function or as a constraint, as illustrated in Chapter 3
where we described the risk-based OPF. In any of these problems the original challenge
posed, the trade-off between economics and security, is present in one way or another.
In system operation the decisions need to be made in a very short amount of time
(a few hours at most). The problems can be very complex, and the consequences of a
wrong decision can be felt immediately. Usually, the number of actions available to the
system operator is limited. As a result, decision aid tools are very helpful in an
operation environment.
1 In this chapter, "contingency" includes both real contingencies and the no-outage state.
A decision-making problem consists of various components: the decision problem, the
decision maker(s) (DM) – single or multiple, the objective(s) – single or multiple, the
attributes and their values, called pay-offs, the alternatives, and the states of nature or
scenarios.
In EPRI report [1], we discuss some decision criteria for the selection of
corrective/preventive actions, such as the Maxi-min and Minimum Maximum Regret
criteria under a profit-maximization condition. We summarize this discussion in Section 4.3.
After considering the probability of each contingency, instead of using the traditional
decision criteria, which look only at the security impact of each contingency, a risk index
(the product of the probability and the impact value of the corresponding contingency)
is calculated for each contingency; this risk index is then combined with the profit value
(or a benefit function) to form a combined index for selecting the action. The problem
with this procedure is that the profits are assumed to occur with a probability equal to
that of the no-outage condition (usually very close to 1.0), whereas the security impacts
occur only with the probabilities of the outage contingencies. The computed risks
(products of the security impacts and the outage probabilities) of each contingency are
therefore far smaller than the profit value. This procedure effectively neglects the risk,
so that the selected action is always the one with the highest profit in the action list.
This is not consistent with intuition or with actual practice, and so in Section 4.3 we
present a modified approach to using the maxi-min and maxi-min regret decision
criteria. The modification rests on the perspective that in each contingency both profits
and security impacts occur (rather than security impacts alone), and therefore both
should be weighted by the outage probability.
2 Whether the effect of this information on the decision is worth the price paid for it is
an issue discussed in Chapter 5 of this report.
via risk. Since risk is actually an expected value, a third objective that we propose is
variance.
In Section 4.6, we describe and illustrate application of Evidential Theory to deal with
the corrective/preventive action selection problem. Evidential Theory offers an efficient
way to represent uncertainty and to perform reasoning under uncertainty. In Evidential
Theory, each independent information source is regarded as a piece of evidence.
Information from different pieces of evidence is combined by applying Dempster's Rule
of Combination, and results are obtained from the combined information. In a
multi-objective decision-making problem, each objective can be regarded as an
independent information source, so Dempster's Rule of Combination can be used to
combine the information from the different objectives, and the decision can then be
made based on the combined result. One advantage of this method is that it can
conveniently include multiple DMs in multi-objective decision-making, as each DM can
also be treated as an independent information source.
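Dempster's Rule of Combination can be sketched as follows. The masses below are hypothetical: the frame of discernment is the four candidate actions, and each objective (profit, risk) is treated as one piece of evidence, with some mass left on the whole frame Θ to represent that source's residual ignorance:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's Rule of Combination for two basic probability
    assignments. Keys are frozensets of hypotheses; values are masses
    summing to 1. Mass on conflicting (empty) intersections is
    renormalized away."""
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

theta  = frozenset({"A1", "A2", "A3", "A4"})          # full frame
profit = {frozenset({"A4"}): 0.6, theta: 0.4}         # profit evidence favors Action 4
risk   = {frozenset({"A2"}): 0.7, theta: 0.3}         # risk evidence favors Action 2

m = dempster_combine(profit, risk)
```

With these illustrative masses the combined mass on Action 2 exceeds that on Action 4: the risk evidence tempers the profit evidence, mirroring the trade-off discussed above.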
The DM is assumed to be the system operator supervising a control area within the
IEEE reliability test system. This control area comprises Buses 12, 13 and 23, and the
operator must select an operating action for the coming hours. The study case is taken
at peak load conditions when most of the units are operating close to their limits. The
three 155MW units at Bus 23 are generating 105MW each. The three 200MW units at
Bus 13 are producing 600MW. The total area generation, 3*105+600=915MW supplies
250MW of local load and exports the remaining 665MW to the neighboring areas.
Table 4-1 presents the options available to the system operator, together with a
qualitative description of each one of them in terms of security level and profits. The
system operator’s objectives are two-fold: to maximize profits and to maximize security
level.
Additional information is contained in the following two tables. Table 4-2 provides the
probabilities corresponding to each one of the relevant future contingencies. It may
happen that no faults occur, or that one of the two lines emerging from Bus 13 is
faulted. To maintain simplicity, we only consider the transient stability of the bus 13
generators. Other faults might also happen in the sub-system under study but do not
affect the transient stability. The conclusions made from the following illustration are
also applicable to voltage and overload security.
No Outage 0.9999
To measure the economic benefits of an action, the projected profits that result from that
action are calculated as the difference between the revenues from energy sales and the
costs of fuel and energy purchased outside the area. The profits are not affected by an
eventual contingency occurrence, but the increased costs of eventual insecurity are
accounted for in the security impacts (see Table 4-4). The security impact is the cost
consequence of each of the listed contingencies and includes start-up and repair costs,
lost opportunity costs, and customer interruption costs [1].
Given this information, the system operator needs to find out which one of the actions,
according to his/her experience and judgment, gives the best trade-off between
economy and security.
Besides the above-mentioned approaches, the Minimum Expected Monetary Value method
(method No. 5) is also a widely used single-criterion decision-making approach. When
we apply this approach to the study case, the outcomes, in terms of security impact
only, are multiplied by the probability of occurrence of each contingency, and these
products are then summed for each action. Finally, the action with the smallest of these
sums is picked. The probabilities of each contingency from Table 4-2 are used here. In
this case the selected action would be Action 4.
3 In this chapter the following terminology is used: a criterion is a general goal, e.g.,
maximizing security or maximizing economics. An attribute is a measure used to evaluate
the level of satisfaction of one criterion, e.g., profits or risk. An objective is more
concrete than a criterion and indicates the direction in which one wants to optimize an
attribute, e.g., the objective 'minimizing risk' is a way to satisfy the criterion
'maximizing security'.
In method No. 4, since we have assumed that the profits are not affected by an eventual
contingency occurrence, they are not weighted by the probabilities of the contingencies.
In reality, however, the profit may vary with the contingency. For example, some
contingencies may cause congestion of the transmission system, which can change the
electricity price and thereby influence the profit. So if some "profits" are also obtained
under a contingency, they should be weighted by the probability of that contingency and
then included in the "risk" evaluation.
From the results obtained in Section 4.3.1, it becomes apparent that when no probability
data is used in the decision process, the suggested methods are quite conservative. On
the other hand, the probabilistic risk-based methods favor higher profit, higher risk
alternatives and in fact virtually neglect the effect of insecurity on the decision-making
due to the low probability of the events that cause the insecurity problems, in
comparison with the high probability of the no-outage condition where we realize the
profit. This can be seen clearly from Table 4-6. Here we assume that the "profits" values
in Table 4-3 are contingency-related. Then we can calculate the risk of Action i given
Contingency j, Riskij, using (eq. 4-1):
The Risk values calculated based on (eq. 4-1) are shown in Table 4-6. This table can be
regarded as the decision matrix for this study case.
We desire to transform the decision matrix so that the elements are of the same scale.
The rank method can achieve this by ranking the elements corresponding to each
contingency with the sequential number (1, 2, 3...) according to their magnitude. In our
example, the risk values of each contingency are ranked from lowest to the highest (see
Table 4-7). Then the traditional mini-max and minimum maximum regrets criteria can
be used for selecting the action.
Table 4-7 Rank Values
                 Action 1  Action 2  Action 3  Action 4
No Outage            2         3         4         1
Outage 130-120       3         2         1         4
Outage 130-230       4         1         2         3
Based on this criterion, the highest rank value for each action is selected and
indicated in bold in Table 4-7. Then the action having the lowest highest rank value is
selected, so Action 2 is selected (indicated as the underlined entry in Table 4-7).
The regret value matrix is formed in Table 4-8. For each action, find the maximum
regret value (indicated in bold in Table 4-8). Then select the action that has the lowest
maximum regret value (indicated by the underlined entry in Table 4-8). In this case,
Action 2 is selected.
Table 4-8 Regret Values
                 Action 1  Action 2  Action 3  Action 4
No Outage            1         2         3         0
Outage 130-120       2         1         0         3
Outage 130-230       3         0         1         2
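Both criteria can be checked directly against the rank matrix of Table 4-7; a minimal sketch:

```python
# Rank matrix from Table 4-7 (rows: contingencies; columns: Actions 1-4).
ranks = {
    "No Outage":      [2, 3, 4, 1],
    "Outage 130-120": [3, 2, 1, 4],
    "Outage 130-230": [4, 1, 2, 3],
}
rows = list(ranks.values())
n_actions = 4

# Mini-max criterion: pick the action whose worst (highest) rank is lowest.
worst = [max(row[a] for row in rows) for a in range(n_actions)]
minimax_choice = worst.index(min(worst)) + 1

# Minimum maximum regret: regret = rank minus the best rank in that row.
regrets = [[row[a] - min(row) for a in range(n_actions)] for row in rows]
max_regret = [max(r[a] for r in regrets) for a in range(n_actions)]
regret_choice = max_regret.index(min(max_regret)) + 1
```

The computed regret matrix reproduces Table 4-8, and both criteria select Action 2, as stated above.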
Using the Rank Method, we can ensure that all of the elements in the decision matrix are
of the same scale. But the rank values only reflect the relative ordering of the elements;
they do not reflect the real differences in their magnitudes. From Table 4-7 we can see
that the risk values for a certain contingency are almost of the same scale; only the
risk values of different contingencies differ greatly. So in the next section we introduce
the Per-unit Method, which transforms all of the elements in the decision matrix to a
commensurate scale while preserving the real differences among the risk magnitudes for
a certain contingency.
In the Per-Unit method, for each contingency, we choose the risk with the highest
absolute value as the risk base value. In this example, the risk base values for no
outage, outage 130-120, and outage 130-230 are -22,595, 50.62 and 59.62,
respectively. Then the per-unit value of each element can be obtained by dividing it by
the corresponding base value. The per-unit matrix is shown in Table 4-9. Then the
traditional mini-max and minimum maximum regrets criteria can be used for selecting
the action.
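Assuming, per (eq. 4-1), that each element of the decision matrix is the product of a contingency's probability and its impact, the per-unit transform can be sketched with the probability and impact values shown in the decision trees of Figures 4-2 and 4-3 (small rounding differences from the text's base values are expected):

```python
probs   = [0.9999, 0.0000458, 0.0000916]   # No Outage, Outage 130-120, Outage 130-230
impacts = [                                # rows: Actions 1-4 (from Figures 4-2 / 4-3)
    [-20385.0,  835294.0, 650836.0],
    [-19902.0,  215647.0, 113559.0],
    [-10602.0,  209509.0, 122859.0],
    [-22595.0, 1105287.0, 648626.0],
]
# Decision matrix: Risk_ij = Pr(contingency j) * Impact(action i, contingency j).
risk = [[p * im for p, im in zip(probs, row)] for row in impacts]

# Per-unit method: for each contingency (column), divide every entry by
# the entry with the largest absolute value in that column.
bases = [max((risk[i][j] for i in range(4)), key=abs) for j in range(3)]
per_unit = [[risk[i][j] / bases[j] for j in range(3)] for i in range(4)]

# Mini-max on the per-unit matrix: lowest maximum per-unit value wins.
worst = [max(per_unit[i]) for i in range(4)]
per_unit_choice = worst.index(min(worst)) + 1   # -> Action 3
```

The outage-column base values come out near 50.62 and 59.62, matching the text, and the mini-max criterion on the per-unit matrix selects Action 3, as stated below.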
Table 4-9 Per-unit Risk Value
Based on this criterion, the highest per-unit value for each action is selected and
indicated in bold in Table 4-9. Then the action having the lowest maximum per-unit
value is selected, which in this case is Action 3, i.e., buy 150MW from Area 30. This
action has the lowest profit but also the highest security level: a conservative
decision.
The regret value matrix is shown in Table 4-10. Based on Table 4-10 and the minimum
maximum regrets criteria, Action 2 is selected. From Table 4-3 we can see that Action 2
has a high profit (although not the highest) and a low security impact (although not the
lowest), so its regret values are very small. This reflects a reasonable trade-off between
profits and security level.
The decision tree is a network diagram that depicts the sequence of decisions and
associated chance events as the DM understands them. The branches of the tree
represent either decision alternatives or chance events. Decision actions emanate from
decision nodes, represented by squares; chance events (i.e., contingencies) emanate
from chance nodes, represented by circles. Figure 4-2 is a decision tree of the studied
example.
Figure 4-2 Decision Tree for the Study Case. Each action branch lists the chances
(contingencies), their probabilities, and their impacts (costs):
Action 1 (expected impact -20284$): No Outage (0.9999): -20385$; Outage 130-120 (0.0000458): 835294$; Outage 130-230 (0.0000916): 650836$
Action 2: No Outage (0.9999): -19902$; Outage 130-120 (0.0000458): 215647$
Action 3: No Outage (0.9999): -10602$; Outage 130-120 (0.0000458): 209509$
Action 4: No Outage (0.9999): -22595$; Outage 130-120 (0.0000458): 1105287$
The intention to select an action is a decision; the left square in Figure 4-2 represents it.
Actions 1 to 4 are the actions. No outage, outage 12-13 and outage 13-23 are three
chances for each action, so they emanate from the chance node (circle). The number
after each chance is the probability of the corresponding chance. The impact value listed
in the figure is the difference between the security impact of each chance and the profit,
given an action. The expected impact of each action, which is its risk, is calculated and
listed in brackets under each action branch. This is the "minimum expected monetary
value" approach (method No. 5 described in Section 4.3.1). The selected
action is Action 4, which has the lowest expected impact. It is a risky action that favors
the profits over security. We use this approach to illustrate the ability of the decision
tree to modify the decision as additional information becomes available.
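The expected-impact computation behind the tree can be sketched as follows. The Outage 130-230 impacts for Actions 2-4 are taken from Figure 4-3, where those branches appear in full; small rounding differences from the bracketed values in the figure are expected:

```python
# Expected monetary impact (risk) of each action: sum over the three
# chances of probability times impact (cost), as in Figure 4-2.
probs = [0.9999, 0.0000458, 0.0000916]
impacts = {
    "Action 1": [-20385.0,  835294.0, 650836.0],
    "Action 2": [-19902.0,  215647.0, 113559.0],
    "Action 3": [-10602.0,  209509.0, 122859.0],
    "Action 4": [-22595.0, 1105287.0, 648626.0],
}
emv = {a: sum(p * im for p, im in zip(probs, row)) for a, row in impacts.items()}
best = min(emv, key=emv.get)   # lowest expected impact
```

The action with the lowest expected impact is Action 4, matching the selection stated above.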
In our example, when we gave the probability of the chances, we didn’t consider the
influence of weather, such as lightning. But experience suggests that there is a close
relationship between lightning and line outages. To identify these relationships, we may
gather data and determine the following relations (denoting lightning by LT, no
lightning by NoLT, no outage by Noout, outage 130-120 by Out1, and outage 130-230 by
Out2):
P(LT|Noout)=0.01; P(NoLT|Noout)=0.99
P(LT|Out1)=0.99; P(NoLT|Out1)=0.01
P(LT|Out2)=0.99; P(NoLT|Out2)=0.01
So whether there is lightning can be regarded as additional information for the selection
of the corrective/preventive action. The prior probability of each chance (P(Noout),
P(Out1), P(Out2)), listed in Figure 4-2, should be modified by Bayes' theorem using
this information. This additional information will improve the accuracy of the
probability of each chance. The updated probability of “No Outage” can be obtained as
follows:
P(Noout|LT) = P(LT|Noout)P(Noout) / [P(LT|Noout)P(Noout) + P(LT|Out1)P(Out1) + P(LT|Out2)P(Out2)]
P(Noout|NoLT) = P(NoLT|Noout)P(Noout) / [P(NoLT|Noout)P(Noout) + P(NoLT|Out1)P(Out1) + P(NoLT|Out2)P(Out2)]
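The Bayesian update can be sketched directly from the priors in Figure 4-2 and the conditional probabilities above:

```python
# Posterior probabilities of each chance given the lightning forecast,
# via Bayes' theorem: posterior ∝ likelihood * prior.
prior = {"Noout": 0.9999, "Out1": 0.0000458, "Out2": 0.0000916}
p_lt  = {"Noout": 0.01,   "Out1": 0.99,      "Out2": 0.99}   # P(LT | state)

def posterior(likelihood, prior):
    z = sum(likelihood[s] * prior[s] for s in prior)   # normalizing constant
    return {s: likelihood[s] * prior[s] / z for s in prior}

given_lt   = posterior(p_lt, prior)
given_nolt = posterior({s: 1.0 - p_lt[s] for s in p_lt}, prior)
```

Rounded to four places, the lightning-branch posteriors come out as 0.9866, 0.0045, and 0.0089, the probabilities used in Figure 4-3.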
Other updated probabilities can be obtained similarly. These probabilities are shown in
Figure 4-3. The risk of each action is then calculated and listed in brackets under each
action branch. So if the weather forecast shows that there will be lightning in the next
time period, the selected action is Action 2; if no lightning is forecast, the selected
action will be Action 4. The result is easy to explain: when there is no lightning, the
probability of outage is very small, so the influence of risk can be neglected and the
action with the highest profit is selected, i.e., Action 4 in this example, though it has
the highest risk among the four actions. But if there is lightning, the probability of
outage increases drastically; under this circumstance, the influence of the risk results
in the selection of Action 2, a much more conservative action.
Figure 4-3 Decision Tree with Probabilities Updated by the Lightning Forecast
Lightning branch:
Action 1 (expected impact -10551$): No Outage (0.9866): -20385$; Outage 130-120 (0.0045): 835294$; Outage 130-230 (0.0089): 650836$
Action 2 (expected impact -17654$): No Outage (0.9866): -19902$; Outage 130-120 (0.0045): 215647$; Outage 130-230 (0.0089): 113559$
Action 3 (expected impact -8423$): No Outage (0.9866): -10602$; Outage 130-120 (0.0045): 209509$; Outage 130-230 (0.0089): 122859$
Action 4 (expected impact -11543$): No Outage (0.9866): -22595$; Outage 130-120 (0.0045): 1105287$; Outage 130-230 (0.0089): 648626$
No-Lightning branch:
Action 1 (expected impact -20385$): No Outage (1.0): -20385$; Outage 130-120 (0.0): 835294$; Outage 130-230 (0.0): 650836$
Action 2 (expected impact -19902$): No Outage (1.0): -19902$; Outage 130-120 (0.0): 215647$; Outage 130-230 (0.0): 113559$
Action 3 (expected impact -10602$): No Outage (1.0): -10602$; Outage 130-120 (0.0): 209509$; Outage 130-230 (0.0): 122859$
Action 4 (expected impact -22595$): No Outage (1.0): -22595$; Outage 130-120 (0.0): 1105287$; Outage 130-230 (0.0): 648626$
Several ways exist to distinguish the two cases mentioned in weakness no. 1. One of them
is to use higher-order moments, for example the variance (V) or standard deviation (σ)
of the impact (eq. 4-2) [9]. The standard deviation measures the deviation from the mean
and is a good way to evaluate the uncertainty associated with an action; minimizing this
uncertainty is now a third criterion. From this point on, only the standard deviation (σ)
will be used. Returning to the two cases mentioned in comment no. 1, they can now be
distinguished, since the first case would have a very large σ, while the σ in the second
case would be more limited.
V(Ai) = Σ_K Pr(K|Ai)·Im²(K|Ai) − (Σ_K Pr(K|Ai)·Im(K|Ai))²
σ(Ai) = √V(Ai) (eq. 4-2)
where Ai corresponds to action i and the sums are taken over all contingencies K.
The values of the standard deviations for each action are presented in Table 4-11.
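(eq. 4-2) can be sketched as follows, using the branch probabilities and impacts from the decision tree of Figure 4-2 (with the Outage 130-230 impacts taken from Figure 4-3); the resulting numbers are illustrative and need not match Table 4-11 exactly:

```python
import math

probs = [0.9999, 0.0000458, 0.0000916]   # chance probabilities (Figure 4-2)

def sigma_of_action(impacts):
    """Standard deviation per (eq. 4-2): sqrt(E[Im^2] - (E[Im])^2),
    with the expectation taken over the contingencies."""
    e_im  = sum(p * im for p, im in zip(probs, impacts))
    e_im2 = sum(p * im * im for p, im in zip(probs, impacts))
    return math.sqrt(e_im2 - e_im ** 2)

s2 = sigma_of_action([-19902.0, 215647.0, 113559.0])    # Action 2
s4 = sigma_of_action([-22595.0, 1105287.0, 648626.0])   # Action 4
```

The σ of Action 4 comes out several times larger than that of Action 2, capturing numerically that Action 4 is the riskier, more uncertain alternative.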
A first step toward improving the objective 'maximizing Profits minus Risk' would be to
include a term corresponding to the standard deviation to be minimized (with a minus
sign), which is also expressed in the same monetary units. However, the problem
mentioned in comment no. 2 still remains, i.e., the incommensurability of the attributes
to be optimized, now including the standard deviation. The most common way to get
around this problem is to use weight coefficients to give appropriate importance to each
individual attribute (eq. 4-3).
The weights could be provided by the operator according to his/her priorities with
respect to profits, risk and σ, and how he/she would feel about the trade-offs between
them. However, this approach is inappropriate because weights given in this way are
arbitrary: their values will depend highly on the state of mind of the operator at the
time of the inquiry. Arbitrary weights will lead to inconsistent results.
Several alternatives exist to provide values for the weights in equation 4-3 in a more
systematic and robust way. The key is to obtain additional information from the DM
from which the weights can be extracted. One possible way of doing this is by asking
the DM to determine several sets of attribute values for which he is indifferent [3].
Because this introduces a third criterion, we resort to multi-criteria decision-making
methods. Several approaches exist to deal with decision-making problems with various
objectives or criteria, i.e. Multi-Criteria Decision Making (MCDM). Nevertheless, it is
important to keep in mind that an action that optimizes all criteria is very unlikely to
exist. In the following a summarized overview of existing approaches is presented, and
one of them will be applied to the example presented in Section 4.2.
4.5.3 Overview
An essential measure of integrity for a method is the degree of confidence the DM has
in the method. For relatively simple problems, the decision made with the aid of
MCDM should be compared with the decision that the DM would take without any
assistance. The best method can be identified by giving the DM similar decision-making
cases and comparing the action chosen by the DM to that suggested by the methods.
Another way to select an appropriate method is to look at methods that have been
successfully applied to similar problems. In [20], several questions are listed to help the
user to evaluate different multiple criteria decision- making methods.
Methods involving multiple criteria have the particularity that they do not and cannot
provide an optimal solution. The process of arriving at a 'solution' (perhaps the term
'suggestion' would be more adequate) is based on additional subjective information
provided by the decision-maker characterizing his or her preferences.
Table 4-12 Decision Making Methods

Method                            | Minimax, Maximin, Minimax regret | Expected monetary value | Multi-attribute utility function | Linear Programming | AHP, Outranking methods
Number of objectives              | single            | single           | multiple         | single   | multiple
Use of objective probability data | No                | Yes              | No               | No       | No
Number of alternatives            | finite countable  | finite countable | finite countable | infinite | finite countable
Scores on criteria                | values            | values           | utility function | values   | values
For a multi-objective decision-making problem, the units of each objective are not
necessarily the same: some may be monetary units, some may be length units, and some
may be time units. So one way to solve the multi-objective problem is to convert all of
the objectives to a common, additive unit. Value- or Utility-based approaches can
fulfill this aim: they convert each objective to a corresponding value (utility)
reflecting the DM's preference for that objective. By adding the values or utilities of
all objectives, an index for a certain action can be obtained, and this index can be
used for making the decision.
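A minimal sketch of such an additive index, assuming simple linear value functions over each attribute's observed range (increasing for profit, decreasing for risk); the attribute numbers below are hypothetical, not the study-case values:

```python
# Additive value index over incommensurate objectives: each attribute
# is mapped to a dimensionless value in [0, 1] by a linear value
# function over its range, then the values are summed per action.
def linear_value(x, lo, hi, increasing=True):
    v = (x - lo) / (hi - lo)
    return v if increasing else 1.0 - v

actions = {                      # (profit $, risk $) -- hypothetical numbers
    "Action 2": (19902.0, 22.0),
    "Action 3": (10602.0, 21.0),
    "Action 4": (22595.0, 112.0),
}
p_lo, p_hi = 10602.0, 22595.0    # observed profit range
r_lo, r_hi = 21.0, 112.0         # observed risk range

index = {a: linear_value(p, p_lo, p_hi, True) +
            linear_value(r, r_lo, r_hi, False)
         for a, (p, r) in actions.items()}
best = max(index, key=index.get)
```

With these numbers the index favors the high-profit, low-risk compromise (Action 2), illustrating how the additive index encodes a trade-off rather than optimizing one attribute alone.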
For a certain objective, it is always possible to find a function reflecting the user's
preference for one alternative over another. When the problems involve uncertainty with
respect to the outcome of the attributes, the preference functions are referred to as utility
functions; otherwise they are called value functions. The case described above has the
uncertainty already embedded into the risk and variance attribute values: these
attributes only have one possible value per option. Technically, there’s no uncertainty
about their values, so value functions will be defined and the concept of preference value
will be used in the following paragraph.
Preference value, an economic concept that has been part of economic theory for
centuries, helps describe rational human behavior in economic decision-making. It
reflects the DM's preference, or lack of it, for a certain variable. For example, a
human being's preference for money is not proportional to the amount of money, as
represented by the dotted line in Figure 4-4; instead, it is more like the curve in
Figure 4-4, which shows that when one has little money, a small increase in money may
provide great pleasure, whereas when one has much money, the pleasure of getting more
does not increase in proportion to how much more is obtained.
Figure 4-4 An Example of a Value Function (preference value as a function of money)
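The curve in Figure 4-4 can be sketched with any concave function; the exponential form and the saturation parameter below are illustrative assumptions, not the report's elicited value function:

```python
import math

# A concave value function for money, v(x) = 1 - exp(-x / rho):
# early dollars add more value than later ones (diminishing returns).
# rho sets how quickly satisfaction saturates (hypothetical choice).
def value(x, rho=50000.0):
    return 1.0 - math.exp(-x / rho)

# The first 10k$ adds more value than the next 10k$:
gain_first = value(10000.0) - value(0.0)
gain_next  = value(20000.0) - value(10000.0)
```

Any strictly concave, increasing function reproduces the qualitative shape of the figure; the choice of functional form and ρ would come from eliciting the DM's preferences.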
In our risk-based decision-making procedure, a certain quantity of profit and the same
quantity of risk (assuming both have monetary units) will not have the same absolute
preference value (the preference value for the risk is negative). We have already seen
that probabilities of power system events are very low, so that even though the impact
of an outage may be high, the risk, which is the product of impact and probability, will
be very small and typically much smaller in magnitude than the profit. This may cause
the risk to be neglected during decision-making. In fact, though the risk of some events
is low, their high impact may give the operator a very strong negative preference value
for these events, because the impact may be unaffordable or unbearable to the
operator's company. That means the operator does not decide based on the relative
profit and risk magnitudes, but rather on his/her preferences for the profit and risk,
i.e., his/her preference values.
In Section 4.5.4.1, we introduce a procedure for using the Value method for
corrective/preventive action selection. The objectives are to maximize profit, minimize
risk and minimize variance. The corresponding profit, risk and variance are shown in
Table 4-11.
It is not sufficient merely to identify the objectives. Since the quantity (amount, level) of
each objective is to be estimated during analysis, and since a value function is to be
formulated for each objective, the objectives must be unambiguously defined and their
measurement scales must be specified.
For our problem, the definition of measurement scales is clear; they are shown in
Table 4-11. Sometimes, however, the measurement scales of objectives are not easily specified.
Lifson [23] introduced a solution to this problem.
This is the most important stage in applying the Value method. The Value functions for
the set of objectives should satisfy the following requirements:
The Value function for a given objective should represent the DM’s preference for
various quantities of that objective over the range of available choices.
The Value functions for the set of objectives should represent the DM’s preference
for trade-off between the objectives.
The Value of the various objectives should be measured on some Value scale so that
the expected Value of individual objectives can be meaningfully combined into a
single expected Value of a candidate action.
The following procedure will produce a set of Value functions that satisfy these
requirements.
For each objective, specify lower and upper limits of the range of interest. These limits
are based on an understanding of the particular decision situation under consideration.
The range of interest is broad enough to include all anticipated consequences.
In this example, for simplification, we choose the lower and upper limits according to
the magnitude of each objective, as shown in Table 4-13.
Objective               Lower Limit   Upper Limit
1. Profit               0             50,000
2. Risk                 0             20
3. Variance of impact   0             5,000,000
Since the range of interest specified in Step 1 may include both desirable and
undesirable quantities of an objective, it must also include a point of neutral
contribution to success or failure. This neutral point is the threshold, designated yT.
The Value of the threshold is 0, i.e., U(yT)=0.
In our example we assign the lower limit of each objective as the threshold.
For each objective, therefore, two relative worth points are arbitrarily designated. The
first is defined in Step 2: the Value of the threshold is set equal to zero. The second is
determined by setting the most preferred (or most disliked) amount of each objective
equal to a Value of 1 (or -1):
U1(yU) = U1(50,000) = 1
U2(yU) = U2(20) = -1
U3(yU) = U3(5,000,000) = -1
Available methods for estimating Value functions have been summarized in [28]. Four
approaches have been distinguished: direct measurement; the von Neumann-
Morgenstern or standard reference contract method; the modified reference contract
approach; and the Ramsey method.
The derived Value function has two forms: one is a curve in the Objective-Value plane;
the other is a mathematical expression.
One characteristic of the Value function is that it can reflect the DM’s attitude toward
risk, i.e., risk-averse or risk-seeking. In our example it is assumed there are two DMs
who face the same decision-making problem; one is risk-averse and the other is risk-seeking.
For simplification, we assume that the Value function of each objective is of an
exponential form. After assigning the magnitude of the exponent and according to the
defined threshold and Value scales, the corresponding Value function can be obtained.
The risk-averse Value functions for each objective are:
U1(x) = 1.0068(1 - e^(-0.0001x))       (x: profit)
U2(y) = 0.0187(1 - e^(0.2y))           (y: risk)
U3(z) = 0.0524(1 - e^(0.0000006z))     (z: variance)
The risk-seeking Value functions for each objective are:
U1(x) = 0.0068(-1 + e^(0.0001x))       (x: profit)
U2(y) = 1.0187(-1 + e^(-0.2y))         (y: risk)
U3(z) = 1.0524(-1 + e^(-0.0000006z))   (z: variance)
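As a check on the algebra, the exponential forms above can be evaluated numerically. This is a sketch, assuming the coefficients exactly as printed; each function should be 0 at its threshold (the lower limit) and approximately ±1 at the corresponding scale limit from Table 4-13.

```python
import math

# Risk-averse value functions, coefficients as printed in the text
U1 = lambda x: 1.0068 * (1 - math.exp(-0.0001 * x))     # x: profit
U2 = lambda y: 0.0187 * (1 - math.exp(0.2 * y))         # y: risk
U3 = lambda z: 0.0524 * (1 - math.exp(0.0000006 * z))   # z: variance

# Risk-seeking counterparts
V1 = lambda x: 0.0068 * (-1 + math.exp(0.0001 * x))
V2 = lambda y: 1.0187 * (-1 + math.exp(-0.2 * y))
V3 = lambda z: 1.0524 * (-1 + math.exp(-0.0000006 * z))

# Value is 0 at the threshold and roughly +1 / -1 at the scale limit
for f, limit, target in [(U1, 50_000, 1), (U2, 20, -1), (U3, 5_000_000, -1),
                         (V1, 50_000, 1), (V2, 20, -1), (V3, 5_000_000, -1)]:
    assert abs(f(0)) < 1e-9
    assert abs(f(limit) - target) < 0.005
```

The scaling constants (1.0068, 0.0187, and so on) are evidently chosen so that each curve passes through these two anchor points.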
The corresponding Value curves of each objective for both DMs are shown in Figure 4-5,
4-6, 4-7 respectively. The risk-averse functions are represented by the broken lines, and
the risk-seeking functions are represented by the solid lines.
(Figure: hierarchy of objectives, with sub-objectives at the lowest level)
A total score is arbitrarily selected to represent a perfect ideal action. Then this ideal
score is allocated among the objectives. The procedure of allocating a score among the
sub-objectives is continued until scores have been placed in all blocks of the hierarchy.
The scores so assigned to the sub-objectives of the lowest level are the scaling factors to
be used in equation (4-4).
In our example, we assume that all of the objectives have the same contribution to the
final decision, i.e., the scaling factor for each objective equals 1: W1= W2= W3=1.
After obtaining the final Value function (the product of the preliminary Value function
and its scaling factor) for each objective, we can compute the total final Value of each
action as the sum over objectives. The selected action is the action with the highest
total final Value.
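With equal scaling factors (W1 = W2 = W3 = 1), the total final Value of eq. (4-4) reduces to a simple sum, and the selection rule is an argmax. A minimal sketch with hypothetical per-objective Values (the numbers below are illustrative, not those of Table 4-14):

```python
# Hypothetical final Values U_j(a) per action: (profit, risk, variance)
values = {
    "Action 1": (0.80, -0.30, -0.10),
    "Action 2": (0.95, -0.05, -0.08),
    "Action 3": (0.60, -0.02, -0.04),
    "Action 4": (0.85, -0.40, -0.30),
}
W = (1, 1, 1)  # equal scaling factors, as assumed in the example

# Total final Value of each action: V(a) = sum_j W_j * U_j(a)
total = {a: sum(w * u for w, u in zip(W, us)) for a, us in values.items()}
best = max(total, key=total.get)  # action with the highest total final Value
```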
In Table 4-14, the final Value of each objective (obtained from eq. 4-4) and the total final
Value of each action for the risk-averse DM and the risk-seeking DM are listed.
Table 4-14 The Value of Example Case
From Table 4-14, we can see that the selected action for both the risk-averse DM and the
risk-seeking DM is Action 2, although the difference between the corresponding Values
is very large. Why do both DMs select the same action? The reason is that, in this
example, Action 2 has almost the highest profit and almost the lowest risk and variance,
so it is superior to the other actions in most DMs’ view.
4.5.5 ELECTRE IV
In most MCDM methods, the outcome results in a ranking of the alternatives, with
possible ties. However, in some situations, given the preferences of the DM, no
distinction can be made between alternatives. In spite of this evidence that no
distinction should be made, many methods force a distinction by making overly strong
assumptions about the preferences stated by the DM, and some methods cannot
provide any solution at all. In these cases, the requirement that every pair of
alternatives be comparable (one preferred, or the two equivalent) is restrictive. In the
approach presented in this section, ELECTRE IV, this restriction is dropped: two
alternatives may be declared incomparable with one another.
The first of a series of outranking methods called ELECTRE appeared in 1968, and after
that several more developed and advanced versions came out [14]-[17]. In this section,
the ELECTRE IV method [18] will be applied to the decision-making case presented in
section 4.2. Each step of the method will be explained in detail.
4.5.5.1 Main Steps of the Method
Two reasons motivate the choice of ELECTRE IV here:
1. The amount of information required from the DM is limited and easier to provide;
for example, there is no need to give the relative importance of the criteria.
2. The method will not draw strong conclusions if the available data do not permit it
to do so. It can also provide a solution where other methods cannot due to
insufficient data.
As mentioned in reason no. 1, this method does not require the DM to express priorities
among the criteria. Instead, he or she indicates thresholds of indifference and
preference. An indifference threshold for a particular criterion is the maximum change
in the attribute of that criterion to which the DM is indifferent; in plainer language, it is
the largest change that goes unnoticed. A preference threshold is the smallest difference
between two attributes of one criterion for which the DM can state a preference. These
thresholds can be either fixed or dependent on the value of the attribute for a particular
criterion. The indifference threshold can also be regarded as a way to take into account
the inaccuracy of the pay-off values. The main steps of the method are as follows.
An action a is strictly preferred to an action a' with respect to criterion gi if
gi(a) ≥ gi(a') + pi(gi(a'))
where pi is the preference threshold, which depends on the value gi(a'). An action a is
weakly preferred to an action a' with respect to criterion gi if
gi(a') + qi(gi(a')) ≤ gi(a) < gi(a') + pi(gi(a'))
where qi is the indifference threshold.
This concept of thresholds is illustrated in Figure 4-9. The value u represents the
difference between the scores of two alternatives for one criterion.
u = gi(a') − gi(a)   (u ≥ 0)
When u is smaller than qi, a and a' are said to be indifferent to each other for criterion
i (1). For u between qi and pi, a' is declared weakly preferred to a (2), while when u is
larger than pi (3), a' is strictly preferred to a. The veto threshold (vi) is used in step 3 to
distinguish between strong and weak outranking relations.
The same reasoning applies when u is defined as u = gi(a) − gi(a')   (u ≥ 0).
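The threshold logic above can be summarized in a small classifier. This is a sketch, assuming fixed thresholds with q ≤ p; the veto threshold enters only later, when strong and weak outranking relations are built.

```python
def relation(u, q, p):
    """Classify u = g_i(a') - g_i(a) >= 0 against the thresholds of criterion i.

    q: indifference threshold, p: preference threshold (q <= p).
    """
    assert 0 <= u and 0 <= q <= p
    if u < q:
        return "indifferent"        # region (1): the change goes unnoticed
    if u < p:
        return "weakly preferred"   # region (2): between q_i and p_i
    return "strictly preferred"     # region (3): u exceeds p_i
```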
Step 4: distillation
In this step it is verified, for each action, how many other actions it strongly outranks
and by how many other actions it is strongly outranked. The difference between the two
is called the strong qualification. A weak qualification is obtained in a similar fashion.
Two rankings are obtained. For the first one, descending distillation, the action with the
largest strong qualification is selected and receives the rank number 1. The
qualifications of the remaining alternatives are recalculated without the selected
alternative. The alternative that has the highest qualification is selected this time for the
second spot. This is continued until all alternatives have been selected. In case of a tie in
the strong qualifications, the weak qualifications are used to untie.
A second ranking, ascending distillation, is obtained by using the same procedure but
now the alternative with the lowest strong qualification is selected for the lowest rank.
The qualifications are recalculated again, and again the alternative with the lowest is
selected. This is repeated until all options are selected.
Step 5: final ranking
A final ranking is obtained by combining the two rankings obtained in the previous
step. This ranking can be represented by a graph. An arrow points from a node
representing the preferred action to the node of the outranked action (e.g., a to b,Figure
4-10). Two equivalent actions are represented by the same node (d and e). Actions that
are incomparable are not linked with an arrow, but are located at the same ranking
level (b and c).
The ELECTRE IV method will now be applied to the decision problem presented in
section 4.2.
Assume that the DM chooses the following thresholds, with the veto thresholds taken as
twice the preference thresholds.
The following tables indicate for how many criteria the action at the left of a row is
preferred to the action at the top of the column. Table 4-16 refers to the weak
preference, Table 4-17 to the strong preference, and Table 4-18 shows the result with
respect to the veto preference.
Weak Preference
          Action 1  Action 2  Action 3  Action 4
Action 1  —         0         0         1
Action 2  0         —         0         0
Action 3  0         0         —         0
Action 4  0         0         0         —
Strict Preference
          Action 1  Action 2  Action 3  Action 4
Action 1  —         0         1         0
Action 2  2         —         1         2
Action 3  2         0         —         2
Action 4  0         0         1         —
Veto Preference
          Action 1  Action 2  Action 3  Action 4
Action 2  1         —         0         1
Action 3  1         0         —         1
Action 4  0         0         1         —
An ‘F’ indicates that the action at the left strongly outranks the action at the top of the
column, while a lowercase ‘f’ indicates a weak outranking relation. The results are
shown in Table 4-19, where it can be seen that action 1 strongly outranks action 4,
while action 4 weakly outranks action 1.
Outranking
          Action 1  Action 2  Action 3  Action 4
Action 3  f         f         —         0
Action 4  f         0         0         —
Table 4-19 is now used to extract two rankings, one using descending distillation, and
one using ascending distillation. In Table 4-20 the numbers below each action refer to
the actions that are strongly outranked by the action in the first row. In Table 4-21 the
weakly outranked actions are listed.
The qualifications can now be obtained for each action. The strong qualification of an
action is the difference between the number of actions that it strongly outranks and the
number of actions by which it is strongly outranked. A weak qualification is obtained
in a similar fashion. The results are displayed in Table 4-22 and Table 4-23.
Table 4-22 Strong Qualifications
Action 1 Action 2 Action 3 Action 4
0 3 -1 -2
The descending distillation works as follows. The action with the highest strong
qualification is chosen; in this case it is action 2. In the case of a tie, the weak
qualifications are used to break it. When even the weak qualifications are the same,
the tied actions are selected together and considered equivalent for that ranking
procedure. Action 2 is ranked first and consequently removed from Table 4-20 and
Table 4-21, and the new qualifications are calculated. The distillation procedure
continues until all actions are ranked. The ranking obtained this way is:
2 → 1 → 3,4
Actions 3 and 4 are equivalent in this ranking. The ascending distillation works in the
same way but starts by selecting the action with the lowest strong qualification, which is
action 4. It is ranked last and then removed from the two tables, and so on. The ranking
obtained this way is:
2 → 3 → 1 → 4
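The distillation steps can be sketched as follows. The strong and weak outranking relations below are reconstructed so as to be consistent with Table 4-19, the qualifications of Table 4-22, and the two rankings quoted above; the rows missing from Table 4-19 are an inference, not taken verbatim from the report.

```python
def qualifications(pairs, actions):
    # qualification = (# actions outranked) - (# actions outranking)
    return {a: sum((a, b) in pairs for b in actions)
              - sum((b, a) in pairs for b in actions)
            for a in actions}

def distillation(strong, weak, actions, descending=True):
    remaining, ranking, pick = list(actions), [], (max if descending else min)
    while remaining:
        sq = qualifications(strong, remaining)
        tied = [a for a in remaining if sq[a] == pick(sq.values())]
        if len(tied) > 1:  # ties are broken with the weak qualifications
            wq = qualifications(weak, tied)
            tied = [a for a in tied if wq[a] == pick(wq.values())]
        ranking.append(tied)
        remaining = [a for a in remaining if a not in tied]
    return ranking if descending else ranking[::-1]

# Outranking pairs consistent with the example: F = strong, f = weak
strong = {(2, 1), (2, 3), (2, 4), (1, 4)}
weak = {(3, 1), (3, 2), (4, 1)}
actions = [1, 2, 3, 4]

assert distillation(strong, weak, actions, descending=True) == [[2], [1], [3, 4]]
assert distillation(strong, weak, actions, descending=False) == [[2], [3], [1], [4]]
```

Both assertions reproduce the rankings 2 → 1 → 3,4 and 2 → 3 → 1 → 4 stated in the text.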
From the two rankings obtained in the previous step, a final order can be extracted. In
both rankings, action 2 is in the first spot, so it has first priority. Next, actions 1 and 3
occupy spots 2 and 3 respectively in the first ranking, and spots 3 and 2 in the second;
these two actions are therefore declared incomparable. Finally, both actions 1 and 3 are
ranked higher than action 4 in both rankings, so action 4 has last priority.
Promethee [24] – This method is quite similar to the ELECTRE III method, except that it
does not use a discordance index. It also takes advantage of the most recent
developments in preference modeling at that time.
¹ This characteristic is called non-prescriptiveness.
In this section, the P/C action selection problem is treated as a single decision maker
MCDM problem and a multiple decision maker MCDM problem, respectively.
In 1967, Dempster [29] proposed the concepts of upper and lower event probabilities. Unlike
familiar probabilities, upper and lower probabilities do not satisfy the additivity
relation. In 1968, Dempster [30] developed the rule for combining two sets of evidence
(i.e. two independent information sources); this rule is now called Dempster’s Rule of
Combination. In 1976, Shafer refined the theory proposed by Dempster and published a
book named “A Mathematical Theory of Evidence” [27] which provided the foundation
of Evidential Theory (ET).
If there is a decision-making problem and all of the possible results (θ1, θ2, …, θn) of the
decision are in set Θ, then Θ is called the Frame of Discernment (FD). Each subset of Θ
corresponds to a proposition.
A piece of evidence always supports one or several propositions that correspond to one
or several subsets of Θ. The degree of support can be quantified by the Basic Probability
Number (BPN), which satisfies:
(1) m(∅) = 0
(2) ∑_{A⊆Θ} m(A) = 1      (eq. 4-7)
Here m is called the Basic Probability Assignment (BPA) of Θ. m(A) is called the Basic
Probability Number (BPN) of subset A and it is understood to be the measure of the
belief that is committed exactly to A. If A is a subset of Θ and m (A)>0, then A is called a
focal element. For each piece of evidence, one BPA can be formed.
The additive degrees of belief of the traditional methods, such as Bayesian theory,
correspond to an intuitive picture in which one’s total belief is susceptible of division
into various portions, and that intuitive picture has two fundamental features. First, to
have a degree of belief in a proposition is to commit a portion of one’s belief to it. And
second, whenever one commits only a portion of one’s belief to a proposition, one must
commit the remainder to its negation. One way to obtain a more flexible and realistic
picture is to discard the second feature while retaining the first. BPA corresponds to
such a picture. When giving BPA, instead of assigning probability or belief to each
element of Θ, the expert can assign his degree of belief (BPN) to some subset of Θ. If
there is no knowledge about the problem, 1 is assigned to the whole set Θ.
4.6.1.2. Belief and Plausibility Function
The quantity m(A) measures the belief that one commits exactly to A, not the total belief
that one commits to A. In order to obtain the measure of total belief committed to A, a
Belief Function is defined as:
Bel(A) = ∑_{B⊆A} m(B)      (eq. 4-8)
Here, Bel(A) is called the belief of A, and it reflects the total belief committed to A. As
Bel(A) does not reveal to what extent one doubts A, i.e., to what extent one believes its
negation, it is not a full description of one’s belief about A. Therefore we also define a
Plausibility Function as:
Pl(A) = 1 − Bel(Ā) = ∑_{A∩B≠∅} m(B)      (eq. 4-9)
Pl(A) is called the plausibility of A; it reflects the extent to which one finds A credible
or plausible. For any subset A of Θ, the following relationship holds:
Bel(A) ≤ Pl(A)
Thus the Plausibility and Belief functions provide upper and lower bounds on the
probability of a subset. The pair (Bel(A), Pl(A)) can be used to represent the uncertainty
of A: (1,1) means that A is true; (0,0) that A is false; and (0,1) that A is unknown. The
value Pl(A)−Bel(A) reflects the degree to which A is unknown. ET can thus separate
that which is unknown from that which is uncertain, which is a great advantage of ET
over other theories.
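The Belief and Plausibility functions can be sketched directly from a BPA whose focal elements are represented as frozensets. The frame and masses below are hypothetical illustrations, not taken from the report's tables.

```python
def bel(m, A):
    # total belief committed to A: sum of m(B) over non-empty B ⊆ A (eq. 4-8)
    return sum(v for B, v in m.items() if B and B <= A)

def pl(m, A):
    # plausibility of A: sum of m(B) over all B with B ∩ A ≠ ∅ (eq. 4-9)
    return sum(v for B, v in m.items() if B & A)

# Hypothetical BPA on the frame Θ = {Support, Oppose}
S, O = frozenset({"Support"}), frozenset({"Oppose"})
theta = S | O
m = {S: 0.6, theta: 0.4}

# Bel(A) <= Pl(A); Pl(A) - Bel(A) is the degree to which A is unknown
assert bel(m, S) == 0.6 and pl(m, S) == 1.0
assert pl(m, O) == 1 - bel(m, S)   # cross-check of eq. 4-9
```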
For each piece of evidence, we can obtain a BPA and the corresponding Bel and Pl. When
there are several pieces of evidence, we obtain several BPAs. Dempster’s Rule of
Combination offers a tool for the aggregation of these BPAs on the same FD. This can be
viewed as an information fusion procedure.
Assume Bel1 and Bel2 are two independent Belief functions on the space Θ. m1 and m2
are the corresponding BPAs. Then their combination is another BPA m, denoted as
m=m1 ⊗ m2. Assume the focal elements of m1 and m2 are Ai (i=1,...,k), Bj (j=1,..., l)
respectively, then BPA m is:
           ∑_{Ai∩Bj=A} m1(Ai)·m2(Bj)
m(A) = ───────────────────────────────── ,  A ≠ ∅      (eq. 4-10)
        1 − ∑_{Ai∩Bj=∅} m1(Ai)·m2(Bj)

m(A) = 0 ,  A = ∅
where i =1,...,k; j=1,...,l. After obtaining the combined m, we can get the corresponding
Bel and Pl and then make the decision based on them.
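Eq. (4-10) can be sketched as follows; the two BPAs being combined are hypothetical, chosen only to exercise the conflict-renormalization step.

```python
def combine(m1, m2):
    """Dempster's rule of combination (eq. 4-10) for BPAs over the same frame."""
    raw, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:                       # mass assigned to the intersection
                raw[C] = raw.get(C, 0.0) + a * b
            else:                       # conflicting (empty) intersection
                conflict += a * b
    assert conflict < 1.0, "totally conflicting evidence cannot be combined"
    return {C: v / (1.0 - conflict) for C, v in raw.items()}

# Hypothetical BPAs over Θ = {Support, Oppose}
S, O = frozenset({"Support"}), frozenset({"Oppose"})
theta = S | O
m = combine({S: 0.6, theta: 0.4}, {S: 0.5, O: 0.3, theta: 0.2})
assert abs(sum(m.values()) - 1.0) < 1e-12   # a valid BPA sums to one
```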
Profit, risk, and variance of impact of each action are three independent pieces of
evidence for this decision-making problem. From each piece of evidence we can obtain a
BPA. Normally this BPA is given by an expert or experts based on their experience; in
practical applications, it should be derived from a knowledge base containing the
experts’ experience and knowledge. In this example, for illustration and simplification,
we use the utility functions of Section 4.5.4 to form the BPA of each piece of evidence.
a) If the utility value Uj (a) is positive, then the corresponding BPA is:
b) If the utility value Uj (a) is negative, then the corresponding BPA is:
For each action we combine the three BPAs obtained from evidences of Profit, Risk and
Variance of impact. The combined BPAs of each action for the risk-averse and for the
risk-seeking DM are listed in Table 4-25.
Table 4-25 BPA of the Example
Oppose 0 0 0 0
Profit
Θ 0.1243 0.1308 0.3419 0.0983
Support 0 0 0 0
Risk – Support 0 0 0 0
Oppose 0 0 0 0
Profit
Θ 0.9546 0.9570 0.9872 0.9417
Support 0 0 0 0
The appraisal of each action is the element of Θ (‘Support’ or ‘Oppose’) with the higher
plausibility value, indicated in bold in Table 4-26. The appraisals for the risk-averse DM
are all ‘Support’, while those for the risk-seeking DM are all ‘Oppose’.
Table 4-26 The Plausibility and R of the Example

                             Action 1  Action 2  Action 3  Action 4
Risk-Averse    Pl(Support)   0.9411    0.9938    0.9845    0.9323
               Pl(Oppose)    0.1759    0.1362    0.3521    0.1593
               R             5.3502    7.2966    2.7961    5.8525
Risk-Seeking   Pl(Support)   0.0182    0.3798    0.3878    0.0102
               Pl(Oppose)    0.9992    0.9837    0.9911    0.9994
               R             0.0182    0.3861    0.3913    0.0102
The index R of each action for the risk-averse DM and the risk-seeking DM is also listed
in Table 4-26. The action with the largest R is selected as the final action: Action 2 for
the risk-averse DM and Action 3 for the risk-seeking DM.
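The index R is not given a formula in this excerpt, but the printed values are consistent with the ratio Pl('Support')/Pl('Oppose'); treating that reading as an assumption, it can be checked against Table 4-26:

```python
# (Pl(Support), Pl(Oppose), printed R) for Actions 1-4, risk-averse DM
risk_averse = [(0.9411, 0.1759, 5.3502), (0.9938, 0.1362, 7.2966),
               (0.9845, 0.3521, 2.7961), (0.9323, 0.1593, 5.8525)]
# ... and for the risk-seeking DM
risk_seeking = [(0.0182, 0.9992, 0.0182), (0.3798, 0.9837, 0.3861),
                (0.3878, 0.9911, 0.3913), (0.0102, 0.9994, 0.0102)]

for support, oppose, r_printed in risk_averse + risk_seeking:
    assert abs(support / oppose - r_printed) < 1e-3  # R ≈ Pl(Support)/Pl(Oppose)
```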
Dempster’s Rule of Combination gives ET the ability to combine the opinions of
different experts, since the opinions of different experts can be regarded as independent
pieces of evidence. In this example, we can combine the attitudes of the risk-averse DM
and the risk-seeking DM using this rule: for each action, combine the BPAs of the two
DMs listed in Table 4-25. The combined BPA for each action and the corresponding
plausibility function and index R are shown in Table 4-27.
The appraisals for Actions 1 to 4 are now ‘Oppose’, ‘Support’, ‘Support’ and ‘Oppose’
respectively; the actions no longer all share the same appraisal as in Table 4-26. The
final action is Action 2, which has the highest R. Though the final action selected on the
combined attitude of the risk-averse DM and the risk-seeking DM is the same as that of
the risk-averse DM, the index R of the selected action drops from 7.2966 to 2.8177.
4.7 Conclusions
First, the decision-making methods used in the EPRI report [1] were reviewed. It was
shown that some of the methods put too much emphasis on the economic aspect of the
problem, while others are concerned exclusively with security. To overcome these
drawbacks, some new methods that still use traditional decision criteria for risk-based
corrective/preventive action selection were proposed.
Since maximizing profit conflicts with minimizing risk, and since the two attributes are
truly incommensurate, applying multi-criteria decision-making methods is attractive.
Two methods were investigated: the value-based method and ELECTRE IV. These
methods have the advantage of easily accommodating subjective information provided
by the DM, prior to or during the decision-making process. The methods differ from
each other in the type of subjective information required from the DM, in the way this
information is processed, and in the format of the results produced.
Evidential Theory can also be used for the multi-objective corrective/preventive action
selection problem. Its most attractive feature is its ability to combine the opinions of
different DMs.
The different methods yield significantly different results, not only with respect to the
suggested ‘best’ option but also in the overall ranking of the options. The multi-objective
methods have the advantage that the parameters of each method can easily be tuned to
the DM’s preferences among the criteria.
References
[1] EPRI final report WO8604-01, “ Risk-based Security Assessment”, December, 1998.
[2] Anders G.J., Probabilistic concepts in electric power systems, John Wiley & Sons,
1990.
[3] Chankong V., Haimes Y.Y., Multi-objective Decision Making – Theory and
Methodology, North Holland, 1983.
[4] Lindley D.V., Making Decisions, John Wiley & Sons, 2nd Edition, 1985.
[5] Wan H., McCalley J., Vittal V., “Increasing Thermal Rating by Risk Analysis”, PE-
090-PWRS-0-1-1998, to appear in IEEE Transactions on Power Systems.
[6] Wan H., McCalley J., Vittal V., “Risk Based Voltage Security Assessment”, submitted
for review to the IEEE Transactions in Power Systems.
[7] Fu W., McCalley J., Vittal V., “Risk-based Assessment of Transformer Thermal
Overloading Capability,” Proceedings of the 30th North American Power Symposium,
Cleveland, Ohio, October 1998.
[8] Van Acker V., McCalley J.D., Vittal V., Peças Lopes J. A., "Risk-based Transient
Stability Assessment," Proceedings of the Budapest Powertech Conference, Budapest,
Hungary, 1999.
[9] Kmietowicz, Z.W., Pearman A.D., Decision Theory and Incomplete Knowledge,
Gower, 1981.
[10] Churchman C.W., Ackoff R., Arnoff E., Introduction to Operation Research, John
Wiley & Sons, 1957.
[12] Keeney R.L., Raiffa H., Decisions with Multiple Objectives – Preferences and value
trade-offs, John Wiley & Sons, 1976.
[14] Roy B., “Classement et choix en presence de points de vue multiples (la méthode
ELECTRE),” Revue Française d’Informatique et de recherché Opérationelle Vol. 8, 1968,
pp 57-75.
[15] Roy, B., Bertier P., “La méthode ELECTRE II,” Working paper 142, SEMA, 1971.
[16] Roy,B., Bertier P., “La méthode ELECTRE II, une application au media-planning.”
OR 72, M. Ross editor, North Holland, 1973, pp. 291-302.
[17] Roy B., “ELECTRE III; algorithme de classement base sur une representation floue
des preferences en presence de critères multiples,” Cahiers de CERO Vol. 20 no. 1, pp.
3-24.
[18] Hugonnard J., Roy B., “Ranking of suburban line extension projects for the Paris
metro system by a multi-criteria method,” Transportation Research 16A, 1982, pp. 301-
312.
[19] Stewart T.J., “A Critical Survey on the Status of Multiple Criteria Decision Making-
Theory and Practice,” OMEGA, Intl. Journal of Management Science, Vol. 20, No. 5/6,
pp. 569-586, 1992
[20] Hobbs, B.F., Chankong, V., Hamadeh, W., Stakiv E.Z. “Does the choice of Multi-
criteria Method Matter? An experiment in Water Resources Planning,” Water Resources
Research, Vol. 28,no. 7 July 1992, pp. 1767-1779.
[21] Zanakis S.H., Solomon A., Wishart N., Dublish S., “Multi-attribute decision
making: a simulation comparison of select methods,” European journal of operational
research, Vol. 107, pp. 507-529, 1998
[22] Vincke, Ph., Multicriteria Decision-aid, translated from French, John Wiley & Sons,
1992.
[23] Lifson, Melvin W., Shaifer, Edward F. “Decision and Risk: Analysis for
Construction Management”, John Willey&Sons, 1982.
[24] Brans J.P., Vincke, Ph., “A preference ranking organization method,” Management
Science Vol. 31 no. 6, 1985, pp. 647-656.
[25] Charnes A., Cooper W.W., Management Models and Industrial Applications of
Linear Programming, Wiley, New York, 1961.
5.1 Introduction
When different options available to the decision maker (DM) are subjected to a
considerable amount of uncertainty, e.g. unknown future scenarios, the DM could
consider spending some amount of money to gather information to reduce this
uncertainty. If acquisition of additional information changes the probability models
used to characterize the uncertainty, then the resulting decision may change as well. If
so, then the acquired information has value, and it is prudent to spend money to get it.
The value of the information is associated with the effect on the utility made by the
change in decision.
The worth of perfect information refers to the amount of money that the DM should be
willing to pay to acquire more information. The difference between the expected
monetary outcome with perfect information and with no additional information is a
measure of the maximum amount the DM should consider spending to obtain
additional data.
Examples of information that could be acquired include:
• Load profile over the next hour, or over the whole year
• Ambient conditions
In the next sections, the case of perfect information is discussed followed by the case
where only partial or imperfect information is available.
Case A
This case uses a fictitious situation concerning a system with some of its lines crossing
an open, remote area. The operator has access only to a regional weather prediction for
today, developed the previous day, with the following information:
Weather   Probability (%)
sunny     65
windy     29
stormy    6
Given this information, the operator evaluates the possible costs in each one of the
scenarios. It is assumed that there are basically two operating strategies:
Option 1: operate at minimum cost according to economic dispatch, but with heavily
loaded transmission.
Option 2: shift some power to a more expensive generator to off-load the transmission
systems.
Table 5-2 gives an overview of the profits and risk value for each action, given the
weather conditions. Our utility function is the difference between expected profit and
risk. With the probabilities associated with each weather type the expected utility is
obtained and shown in the last row of the table. The risk values include risk of transient
instability and thermal overload.
Table 5-2 Risk and Profit
Option 1 Option 2
Using the maximum expected value of the difference between profit and risk (last row),
the operator would decide to operate the system according to option 1, i.e., to adhere to
the economic dispatch. This would be the best decision under the available information.
Since the difference between profits and risk for either option can vary almost 10%
depending on the weather conditions, it could be useful to obtain more recent, and
therefore more accurate information available about the weather and to estimate what a
reasonable price would be to pay for this information.
First it is necessary to evaluate the worth of the “perfect information,” which indicates
with certainty whether it will be sunny, windy, or stormy. Knowing that the weather is
sunny or windy, the operator will choose the economic, heavily loaded line mode
(Option 1), while if it is known that the weather is stormy, the uneconomic, less-loaded
line operation mode (Option 2) will be chosen.
Without the perfect information, the DM only has access to the weather information
from the previous day, from which probabilities of the different weather conditions can
be estimated. It is not known in advance what the perfect information will reveal if it
were ordered. It is assumed that the probability distribution of what the perfect
information will tell is the same as the a priori distribution of weather conditions; the
DM can do no better with the available information. The perfect information will
indicate 65% of the time that the weather will be sunny, 29% of the time that it will be
windy, and 6% of the time that it will be stormy. We can evaluate the expected
difference between profits and risk under perfect information by assuming that we
make the best decision between Options 1 and 2 every time. Taking the option with the
maximum difference between profits and risk under each weather type, the expected
difference between profits and risk with perfect information is given by:
The value of perfect information is now obtained as the difference between the expected
utility with perfect information and the maximum expected utility with existing
information:
This value is a per-day measure for the system coordinator to use in deciding whether it
is worthwhile to order more precise weather information or to invest in weather
forecasting equipment. If an investment could improve the information held by the
operator by this amount every day for a year, then the investment should be made if it
costs less than 365 × $54 = $19,710.
Case B
The candidate actions for this case are listed in Table 5-3. Two different future scenarios
are deemed possible: a 4% yearly load increase and an 8% yearly load increase. For
each action in each scenario, the total annual risk is given, including risk of overload
and risk of voltage instability. It is assumed that the 4% load increase has a 75%
probability, while the 8% load increase has a 25% probability. In this example, the
objective is to minimize the annual risk.
The same methodology is applied as in Case A. For each action, the expected value of
the annual risk is calculated and presented in the last row of Table 5-4. With no
additional information, the best choice would be action 2, corresponding to an expected
annual risk of $167,627. On the other hand, if it were known that the 8% increase would
occur, then action 3 would be preferred, since it has the lowest risk in that scenario. The
fact that the decision changes depending on the information indicates that there is
something to gain from additional information.
$102,340 × 0.75 + $320,285 × 0.25 = $156,826
The worth of this perfect information is given as the difference between the two, i.e.,
$167,127 − $156,826 = $10,301
This value places an upper-bound on how much to pay to improve knowledge of the
load profile for the coming year, by ordering a study.
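As a cross-check, the Case B upper bound can be reproduced directly from the figures quoted above: the scenario-best risks ($102,340 and $320,285), the prior probabilities (75% / 25%), and the expected annual risk of the best single action. A minimal sketch:

```python
priors = {"4%": 0.75, "8%": 0.25}
best_risk = {"4%": 102_340.0, "8%": 320_285.0}  # lowest annual risk per scenario (Table 5-4)
risk_best_action = 167_127.0                     # expected annual risk of the best single action

# Expected annual risk if the scenario were always known in advance:
risk_perfect = sum(priors[s] * best_risk[s] for s in priors)  # $156,826.25

# Upper bound on what a load-forecast study is worth:
value_of_perfect_info = risk_best_action - risk_perfect
```

Any study costing more than `value_of_perfect_info` cannot pay for itself, even if it predicted the load growth perfectly.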
In the previous section, the additional information was perfect in the sense that it tells,
with probability one, which scenario will occur. The value of information derived under
this assumption is therefore an upper limit on the amount of money worth spending to
obtain additional information. Most of the time, however, additional information is not
perfect; it merely gives a better estimate of the probabilities, updating the prior
("before") probabilities to posterior ("after") probabilities.
For example, in the case with the two load-increase scenarios (Case B in Section 7.2), it is
known that there is a strong correlation between the economic (industrial, residential,
and commercial) growth of the area and the load increase. Information about the
growth in the area can be obtained for a price, and it will indicate whether the economic
growth will be high or low. As Table 5-5 shows, the correlation is indeed strong, but
there is still a non-zero probability that the economic-growth prediction leads to a
wrong conclusion about the load-increase rate. From historical analysis of the
correlation between economic growth and load increase, it has been observed that there
is a 10% chance that high economic growth is predicted when the load increase
turns out to be only 4%, and a 15% chance of the converse erroneous conclusion.
Table 5-5 Conditional Probabilities of the Growth Given the Observed Load Increase
Table 5-4 showed that a different decision is taken depending on the scenario, so any
extra information about the scenarios is relevant. The only question is how much to pay
for it. To find out, the DM uses the data in Table 5-5 to update the probabilities of the
load growth in order to reach, if not a perfect, at least an improved decision.
The posterior probabilities are calculated using Bayes' rule [1, pg. 21], a familiar rule in
probability theory that gives the probability of event $A_j$ given event $B$:

$$\Pr(A_j \mid B) = \frac{\Pr(B \mid A_j)\,\Pr(A_j)}{\sum_i \Pr(B \mid A_i)\,\Pr(A_i)}$$
In our example, the posterior probabilities follow by applying Bayes' rule with the prior
scenario probabilities and the conditional probabilities of Table 5-5.
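This posterior calculation can be sketched directly from the numbers in the text (75% / 25% priors; a 10% chance of a "high" prediction under the 4% scenario and a 15% chance of a "low" prediction under the 8% scenario):

```python
priors = {"4%": 0.75, "8%": 0.25}
# Pr(prediction | scenario), from Table 5-5
likelihood = {
    ("high", "4%"): 0.10, ("low", "4%"): 0.90,
    ("high", "8%"): 0.85, ("low", "8%"): 0.15,
}

def posterior(prediction):
    """Bayes' rule: Pr(scenario | prediction)."""
    evidence = sum(likelihood[(prediction, s)] * priors[s] for s in priors)
    return {s: likelihood[(prediction, s)] * priors[s] / evidence for s in priors}

# posterior("low")  -> Pr(4%) ~ 0.947, Pr(8%) ~ 0.053
# posterior("high") -> Pr(4%) ~ 0.261, Pr(8%) ~ 0.739
```

These posteriors, combined with the per-action risks of Table 5-4, give the expected annual risk under each prediction and hence the value of the imperfect information.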
When a study predicts that the growth will be low, action 2 will be chosen; if the study
predicts that the growth will be high, action 3 will be chosen. This follows from the
annual risk of each action in each load-increase scenario in Table 5-4. As a result, the
expected value of annual risk when the prediction is low is given by:
With these values the expected value of annual risk with imperfect information can be
found:
The value of this imperfect information is the difference between the expected value of
annual risk without additional information and the expected value of annual risk with
imperfect information:
$167,127-$158,167= $8,960
In this section, a method for evaluating the value of information was introduced. This
value is very useful to the DM in deciding whether it is worthwhile to pay for
additional information, and what the maximum amount is that should be paid, in order
to increase the accuracy of the final decision. With respect to security assessment, this
approach can be used to determine whether to spend resources to improve one's ability
to predict the future in terms of load levels, load distribution, equipment outages, and
ambient conditions. It can also be used to determine whether to spend resources to
improve one's knowledge of uncertain measured values, including electrical parameter
values (e.g., line impedances and load characteristics) as well as current weather
readings (e.g., temperature and wind speed).
References
[1] G. Casella and R. L. Berger, Statistical Inference, Wadsworth & Brooks/Cole, 1997.
[2] D. V. Lindley, Making Decisions, 2nd Edition, Wiley & Sons, 1985.
APPENDIX-A: IMPACT ASSESSMENT FOR RBSA
A.1 Introduction
In risk assessment, one must address both probability and impact. Probability analysis
quantifies the uncertainties associated with various outcomes, while impact quantifies
the cost-consequence, or severity, of those outcomes. As mentioned in [1], development
of the severity function is typically difficult in most probabilistic risk assessment
problems. In this appendix we provide some fundamental considerations of this
problem.
The traditional approach to quantify impact uses performance measures such as load
flow, steady state voltage magnitude, transient voltage dip, and others. The drawback
of this approach is that there is no common measure for comparing severity or for
obtaining a composite evaluation of security. Thus there is no way to quantitatively
compare the impact between two different kinds of security problems. For example, it is
not meaningful to compare the impact of transient voltage dip and transmission line
overload by comparing transient voltage magnitude and line current. Similarly, when a
region faces more than one kind of security problem such as transmission line overload
and bus voltage out of limits, there is no single performance indicator that can reflect
the overall system security conditions [1].
There are two severity measures for impact assessment: one is rating-based, and the
other is cost-based.
Impact has many different meanings in different contexts. In the context of RBSA,
impact refers to the cost-consequence, or severity, of a security violation.
Some impacts are actually quite certain. For example, sanctions can be regarded as
certain impacts to an entity that violates security performance requirements. The
Western Systems Coordinating Council (WSCC) has developed a Reliability
Management System (RMS) [2], which includes 17 mandatory criteria with which its
members must comply. Monetary sanctions are applied to any entity that violates these
criteria. For example, one criterion is that "the actual power flow on a bulk power path
shall not exceed the operating transfer capability for a specified time period." If a
transmission owner violates this criterion, a fine must be paid as specified by the RMS.
Unfortunately, however, most impacts are quite uncertain. For example, the end-user
losses due to system disturbances depend on many uncertain factors, such as the end-
user's activities, the nature and degree to which the impacted activities depend on
electricity, the availability of a backup power source, and the ability to resume the
impacted activities normally after power is restored. Consequently, estimating the
impact requires both objective and subjective judgment. Further, it is well recognized
that the accuracy of a cost estimate of an event occurring at a future time generally
decreases as that future time becomes more distant [3].
The first step in estimating the costs is to identify the expected or average value. This
could be enough if the estimate is quite certain. However, it is generally necessary to
account for the uncertainty in the estimate by using a probability distribution to
describe it. The two simplest distributions are the uniform, in which case one also needs
to estimate the range, and the normal, in which case one also needs to estimate the
standard deviation. Which one is used, or whether another distribution is used,
depends on the
characteristics of the uncertainty. For example, if the actual cost impact is equally likely
within an interval, then the uniform distribution is appropriate. However, if the actual
cost impact is more likely to be close to the mean than at the extremes, then a normal
distribution is a better description.
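As an illustration, drawing samples of an uncertain cost under each assumption might look like the following; the mean, range, and standard deviation used here are arbitrary placeholders, not report data.

```python
import random

def sample_cost(kind, n=10_000, seed=0):
    """Draw n samples of an uncertain cost under a chosen distribution.

    'uniform': actual cost equally likely anywhere in [low, high].
    'normal' : actual cost clusters around the mean.
    All numeric values are illustrative placeholders.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    if kind == "uniform":
        return [rng.uniform(20_000.0, 80_000.0) for _ in range(n)]
    if kind == "normal":
        return [rng.gauss(50_000.0, 10_000.0) for _ in range(n)]
    raise ValueError(kind)
```

Either distribution is summarized by the same two facts the analyst must supply: a central value, and a measure of spread (range or standard deviation).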
The cost of an event can only be accurately known after actual occurrence of the event.
Therefore, in risk assessment, where analysis is necessarily performed before the
occurrence of the event, we must estimate its expected value together with the
parameters that describe its uncertainty.
The second step in cost estimation is to identify the statistics associated with each cost.
This is an information-gathering step. It need not be a labor-intensive task, although it
certainly can be. What is important is that the analyst be capable of deciding when to
gather more information, and when not to. Nonetheless, it is always prudent to perform
a first estimate using one's own judgment. Here, one should estimate the mean or
average value of the cost and its range, i.e., a minimum value below which the cost will
not fall and a maximum value above which it will not rise. In addition, one should
choose the distribution of the cost over that range. As indicated in Section A.4, the
simplest distributions are the uniform and the normal.
In many risk assessment problems, the accuracy of a cost estimate is sufficient if its
range spans one order of magnitude or less. This means the tolerance on the decision
criteria may need to be as large as the widest range. If that tolerance is unsatisfactory,
then one needs to gather more information to narrow the range (i.e., decrease the
spread, or variance, of the distribution), assuming the test of "perfect information"
indicates it is economic to do so [1].
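The "one order of magnitude" rule of thumb can be stated precisely: the ratio of the range's maximum to its minimum should not exceed 10. A small helper (the function names are ours, not the report's):

```python
import math

def range_span_in_decades(low, high):
    """Width of a cost range in orders of magnitude (decades)."""
    return math.log10(high / low)

def needs_more_information(low, high):
    """Rule of thumb from the text: gather more data if the range
    spans more than one order of magnitude."""
    return range_span_in_decades(low, high) > 1.0
```

For example, a cost known only to lie between $1,000 and $50,000 spans about 1.7 decades, so under this rule the analyst should try to narrow it before relying on the estimate.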
The first step in impact cost estimation is to decompose the cost of each identified event
into component costs. Depending on the criterion used, four kinds of impact
classification follow; these not only help in estimating the impact costs but also provide
useful information in today's deregulated environment.
The first classification identifies which group is affected, and how much each group
suffers, under the different security problems for the current operating conditions. The
following groups are considered:
− Generation owner
Generation owners will experience losses when conditions such as generator
out-of-step or load interruption occur.
− Transmission owner
Transmission owners will face losses when system contingencies overload
transmission lines or transformers, or cause load interruptions.
− End-user
Losses may be unavoidable for end-users when system contingencies cause load
interruptions.
It should be noted that impacts to distribution owners due to system security problems
are not considered here, as this study addresses only transmission system security
assessment, although the idea can be extended to the distribution system. We also
assume that the relationship among these three groups is bilateral contract-based.
Generation owners make bilateral contracts directly with transmission owners to buy
transmission capacity rights in order to sell energy to end-users. End-users can obtain
energy only by making contracts with generation owners. If either party cannot fulfill a
contract, it must pay the monetary penalties specified in the contract. These
assumptions are made only for research convenience and can easily be modified if
necessary.
The second classification shows, for each affected group, where the impact cost comes
from. The following three cost categories are considered in this study:
− Load interruption
− Equipment damage
− Equipment outage
The impact of equipment outage includes the additional cost associated with
operating the system when a component is unavailable.
The third classification shows how the system is affected by the different security
problems; it is based on what parts of the system are threatened. For example,
transmission line overload may cause conductor loss of life and line sag and touch,
while transformer overload may cause loss of life and failure. The impact components
considered in this study are
− Voltage collapse
The fourth classification presents direct information about the makeup of the impact
cost, i.e., how the impact cost is formed. For example, the cost of transmission line
overload is the cost of reconductoring the line, while the costs of load interruption
include lost profits, end-user losses, sanctions, and penalties. The cost components
considered in this study are
− Reconductoring
This cost can be estimated from the current market price of reconductoring the same
line.
− Transformer replacement
This cost can be estimated from the cost of buying a new transformer of the same size
plus the labor of removing the old one and installing the new one.
− Lost profits
This cost can be estimated from the revenues of selling the same amount of the
interrupted energy minus the cost of producing that amount of energy.
− Sanctions
This cost can be obtained from the criteria in the regional reliability management
system.
− Penalty A
This cost can be obtained from the contracts between the generation owners and the
transmission owners.
− Penalty B
This cost can be obtained from the contracts between the generation owners and
end-users.
− System redispatch
This cost can be estimated as the production cost of using new, higher-cost generators
minus the production cost of using the original lower-cost generators.
− Generator startup
This cost can be estimated from the actual startup cost of the generator.
− End-user loss
The most direct way to estimate end-user loss is to conduct surveys of different groups
of customers [4][5][6]. But this kind of effort is usually cumbersome, time-consuming,
and expensive, especially if a large and statistically well-designed sample is to be
selected [7]. Since the penalty item in contracts between generation owners and
end-users can be regarded as compensation to end-users in case of load interruption, it
must embed information about end-user loss. Thus, an alternative estimate of end-user
loss can be based on the penalty agreement in the contract. The advantages of this
approach are that it is easily implemented and less expensive; the disadvantage is that
accuracy is sacrificed.
One significant advantage of RBSA over traditional security assessment is that it unifies
the various security problem types, allowing quantification of the composite security
level in a single index. However, this advantage is only manifested if the magnitudes of
the various impacts are quantified one relative to another. It is to this purpose that,
based on the classification in the previous section, we have developed three impact
tables: one for overload security, one for voltage security, and one for dynamic security.
In the following three subsections, we present these tables together with a brief
description of the related impacts.
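Once all impacts are expressed in dollars, the composite index is simply a probability-weighted sum of expected impact costs across problem types. A minimal sketch, with all probabilities and costs as illustrative placeholders rather than report data:

```python
# (security problem, probability of occurrence, expected impact cost in $)
# -- all numeric values below are illustrative placeholders, not report data
events = [
    ("line overload",             0.020, 150_000.0),
    ("bus voltage out of limits", 0.010, 400_000.0),
    ("transient voltage dip",     0.005, 900_000.0),
]

# Risk = sum of probability x impact, in common dollar units, so overload,
# voltage, and dynamic problems contribute to one comparable index.
composite_risk = sum(p * cost for _, p, cost in events)
```

Because every term is in dollars, the contributions of otherwise incomparable problem types (line current vs. voltage dip) can be ranked and summed directly.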
As we stated before, impact assessment requires both objective and subjective
judgment. The classification and estimated data presented in this section reflect only
the authors' opinion; different people would give different classifications and
estimates.
Power system overload causes adverse effects on transmission lines and transformers.
The following are brief descriptions of these impacts; detailed descriptions can be
found in [8][9].
− Loss of life
A transmission line's expected total life is the operating time obtained when the
conductor temperature is maintained at its maximum allowable value. When the
operating temperature exceeds that value, the conductor annealing rate can exceed the
design value. The line's life expectancy is reduced, and the impact is determined by the
loss of life and the cost of reconductoring the circuit.
− Line sag and touch
High operating temperature causes thermal expansion of the conductor, and the line
may drop below its safety clearance. Under certain conditions this may cause flashover
to ground, resulting in a ground fault and outage of the circuit. The impact costs
associated with line sag and touch are reconductoring the line, system redispatch costs,
and sanctions.
− Loss of Life
− Transformer Failure
In an electric system, the outage of a single high-voltage transmission line or
transformer usually will not cause direct load or generation interruptions. The system
operator may shed some load after assessing the situation, but we regard the operator's
intervention as the result of a decision, and it is excluded from our impact assessment
paradigm. We also do not consider cascading events, as they are very rare in practice.
Thus, for overload, the affected group is assumed to be transmission owners only.
Their losses include replacing the failed components, system redispatch, penalties, and
sanctions.
In Table 0-1, a table-form template with some estimated values is given for overload
impact assessment. All impact costs are assumed to follow a normal distribution,
except sanctions, which are assumed certain. A 95% confidence interval is also given
for each cost component.
Table 0-1 Impact Evaluation for Overload
| Performance Measure | Cost Category | Impact Component | Cost Component | Affected Group | Standard Units | Expected Value | Standard Deviation | 95% C.I. |
| | | | Sanctions | | $/MWhr | 50 | 0 | 50 |
| | Equipment Damage | Loss of life via insulation deterioration | Replace transformer | Transmission Owner | $/case | 10^7 | 10^6 | (0.8-1.2)×10^7 |
| | | | Sanctions | | $/MWhr | 50 | 0 | 50 |
*Here, we give cost estimates only for reconductoring a 230 kV line and replacing a 400 MVA transformer.
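Under the normal assumption, the 95% confidence intervals in the table follow as mean ± 1.96σ. For the transformer-replacement cost (mean 10^7, standard deviation 10^6):

```python
# 95% confidence interval under the normal assumption: mean +/- 1.96*sigma
mean, sigma = 1e7, 1e6
z95 = 1.96  # two-sided 95% point of the standard normal distribution
low, high = mean - z95 * sigma, mean + z95 * sigma
# low ~ 0.8e7 and high ~ 1.2e7, i.e., the (0.8-1.2) x 10^7 interval
```

The same formula generates the confidence-interval column for any cost component whose mean and standard deviation have been estimated.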
The two problems associated with voltage security are bus voltage out of limits and
voltage collapse. The following are simple descriptions of these impacts. Detailed
descriptions of these impacts can be found in [8].
Bus voltage out of limits includes situations in which the bus voltage is too low and in
which it is too high. On the one hand, low bus voltage may cause induction motors to
stall, drawing high lagging currents that further lower the bus voltages. Since industrial
and commercial motors are usually controlled by magnetically held contactors, a
voltage drop may also cause many motors to drop out [11]. Low bus voltage may also
trigger automatic undervoltage load-shedding schemes [12]. On the other hand, when
the bus voltage is too high, overvoltage protection schemes can automatically trip
individual loads or load groups whose setting thresholds are violated. The main impact
of bus voltage out of limits is therefore load interruption, which causes end-user losses;
the transmission and generation owners will also lose profits and have to pay penalties
and sanctions.
− Voltage Collapse
For dynamic security we consider the impact of transient voltage too low, transient
frequency too low, and generator out of step. Detailed discussion of these impacts is
presented in [13].
Similar to bus voltage out of limits in voltage security, low transient voltage dips may
cause some motors to drop out. They may also initiate undervoltage load shedding and
generator tripping. The major impact of a low transient voltage dip is thus the cost of
load interruption, which causes end-user losses. The generation owners will also lose
profits and may have to pay generator startup costs, system redispatch costs, penalties,
and sanctions for failing to fulfill contracts with end-users and for violating the
reliability criteria.
A transient frequency dip that is too low will trigger underfrequency load-shedding
programs. The impact cost of a low transient frequency dip is thus load interruption,
which causes end-user losses; the generation owners will also lose profits and have to
pay sanctions and penalties.
| PV Curve | System Load Interruption | Uncontrolled voltage decline | See above section on bus load interruption |
In Table 0-3, a table-form template with some estimated values is given for dynamic
insecurity. All impact costs are assumed to follow a normal distribution, except
sanctions and penalties, which are assumed certain. A 95% confidence interval is also
given for each cost component.
Table 0-3 Impact Evaluation for Dynamic Security
| Performance Measure | Cost Category | Impact Component | Cost Component | Affected Group | Standard Units | Expected Value | Standard Deviation | 95% C.I. |
| Transient frequency dip | Bus load interruption | Underfrequency | See above section on bus load interruption | | | | | |
| | | | Sanctions | | $/MWhr | 50 | 0 | 50 |
A.8 Summary
In this appendix, the problem of impact assessment for risk calculation is addressed.
The most important contribution is the proposal of a unified, common measure of
severity as a basis for comparing different types of security problems. The comparison
between performance-based and cost-based severity measures, the difference between
impact and decision, and issues related to uncertainty modeling of impact are
discussed. Finally, a classification of impacts and some sample data in template form
are presented.
References
[1] EPRI Final Report WO8604-01, "Risk-Based Security Assessment," December 1998.
[4] S. Burns and G. Gross, "Value of Service Reliability," IEEE Transactions on Power
Systems, Vol. 5, No. 3, August 1990, pp. 825-832.
[6] R. Billinton and R. N. Allan, Reliability Evaluation of Power Systems, Plenum Press,
1996.
[9] W. Fu, J. McCalley, and V. Vittal, "Risk Assessment for Transformer Loading,"
under review for publication in IEEE Transactions on Power Systems.
[14] North American Power Symposium, October 19-20, 1998, Cleveland, Ohio,
pp. 328-335.