
Decision-Making Techniques for Security Constrained

Power Systems

1001308
Technology Review, January 2001

EPRI Project Manager


T. Tayyib

EPRI • 3412 Hillview Avenue, Palo Alto, California 94304 • PO Box 10412, Palo Alto, California 94303 • USA
800.313.3774 • 650.855.2121 • askepri@epri.com • www.epri.com
DISCLAIMER OF WARRANTIES AND LIMITATION OF LIABILITIES
THIS DOCUMENT WAS PREPARED BY THE ORGANIZATION(S) NAMED BELOW AS AN
ACCOUNT OF WORK SPONSORED OR COSPONSORED BY THE ELECTRIC POWER RESEARCH
INSTITUTE, INC. (EPRI). NEITHER EPRI, ANY MEMBER OF EPRI, ANY COSPONSOR, THE
ORGANIZATION(S) BELOW, NOR ANY PERSON ACTING ON BEHALF OF ANY OF THEM:

(A) MAKES ANY WARRANTY OR REPRESENTATION WHATSOEVER, EXPRESS OR IMPLIED, (I) WITH RESPECT TO THE USE OF ANY INFORMATION, APPARATUS, METHOD, PROCESS, OR SIMILAR ITEM DISCLOSED IN THIS DOCUMENT, INCLUDING MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, OR (II) THAT SUCH USE DOES NOT INFRINGE ON OR INTERFERE WITH PRIVATELY OWNED RIGHTS, INCLUDING ANY PARTY'S INTELLECTUAL PROPERTY, OR (III) THAT THIS DOCUMENT IS SUITABLE TO ANY PARTICULAR USER'S CIRCUMSTANCE; OR

(B) ASSUMES RESPONSIBILITY FOR ANY DAMAGES OR OTHER LIABILITY WHATSOEVER (INCLUDING ANY CONSEQUENTIAL DAMAGES, EVEN IF EPRI OR ANY EPRI REPRESENTATIVE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES) RESULTING FROM YOUR SELECTION OR USE OF THIS DOCUMENT OR ANY INFORMATION, APPARATUS, METHOD, PROCESS, OR SIMILAR ITEM DISCLOSED IN THIS DOCUMENT.

ORGANIZATION(S) THAT PREPARED THIS DOCUMENT

Iowa State University

ORDERING INFORMATION
Requests for copies of this report should be directed to the EPRI Distribution Center, 207 Coggins
Drive, P.O. Box 23205, Pleasant Hill, CA 94523, (800) 313-3774.

Electric Power Research Institute and EPRI are registered service marks of the Electric Power
Research Institute, Inc. EPRI. ELECTRIFY THE WORLD is a service mark of the Electric Power
Research Institute, Inc.

Copyright © 2001 Electric Power Research Institute, Inc. All rights reserved.
CITATIONS

This report was prepared by

Iowa State University


Department of Electrical and Computer Engineering, Room 1113 Coover Hall
Ames, Iowa 50011

Principal Investigators
J. McCalley
M. Ni

Other contributors
J. Chen
W. Fu
V. Van Acker

This report describes research sponsored by EPRI.

The report is a corporate document that should be cited in the literature in the following manner:

Decision-Making Techniques for Security Constrained Power Systems, EPRI, Palo Alto, CA,
Copyright 2001. 1001308.

REPORT SUMMARY

This report provides a summary of decision-making techniques that can be applied to security-
constrained power systems. The single unifying theme throughout the report is that we are
capable of quantifying security level using risk. It is by this quantification that we are then able
to proceed in our investigation of decision-making techniques, as decision-making techniques
invariably require quantification of criteria on which the decision is based. We summarize our
method of risk-based security assessment (RBSA) in chapter 1, and we provide an overview of
the applications of risk-based decision-making. Chapter 2 describes how RBSA can be applied
for determining operational limits. Chapter 3 reports on the risk-based optimal power flow
(OPF), which is the classical OPF modified based on the ability to quantify security level in
terms of risk. Chapter 4 explores various decision-making methods for performing control-room
preventive/corrective action, including several multi-criteria decision-making methods based on
risk, variance, and an economic criterion. We believe that this exploration provides the basis for
developing automated decision-support tools for operators where they can reach inside a box and
pull out one or perhaps several decision-making techniques, run them, and then make use of the
multiple suggestions provided. Chapter 5 develops the decision-making problem associated with
when to obtain more information. This relates to the classical data gathering problem that has for
so long plagued probabilistic techniques, but rather than focus on how to obtain the information,
we address the issue of whether to obtain it.

ABSTRACT

This report describes various decision-making techniques, based on risk-based security assessment (RBSA), which can be applied to a security constrained bulk transmission system.
The primary focus is on operations, in terms of control room operators as well as engineers
working off-line to develop decision rules for use by the operator. For each decision-making
technique, the basic theory is described and at least one illustration is provided. This work forms
the basis for ultimately developing a toolbox of techniques that will serve as an effective
decision-support tool that can efficiently provide a series of good suggestions for use by the
operator in determining actions to take.

CONTENTS

1 INTRODUCTION
1.1 Overview of RBSA....................................................................................................... 1-2
1.2 The Decision-making Approach in Industry Today....................................................... 1-4
1.2.1 Deterministic Reliability Criteria............................................................................ 1-4
1.2.2 The Deterministic Decision Making Approach ...................................................... 1-5
1.3 Applications for Risk-based Decision-making .............................................................. 1-5
1.3.1 Operations ........................................................................................................... 1-6
1.3.2 Operational Planning ........................................................................................... 1-7
1.3.3 Facility Planning................................................................................................... 1-7
1.3.4 Reliability Criteria ................................................................................................. 1-7
1.3.5 Data Gathering by Information Valuation ............................................................. 1-8
1.4 Report Overview.......................................................................................................... 1-8
References......................................................................................................................... 1-9

2 DECISION MAKING FOR OPERATIONS—COMPARISON BETWEEN RISK-BASED AND DETERMINISTIC SYSTEM OPERATING LIMITS
2.1 Introduction.................................................................................................................. 2-1
2.2 Deterministic Study Procedure .................................................................................... 2-3
2.3 Probabilistic Study Procedure...................................................................................... 2-4
2.3.1 Modification to Analysis Steps.............................................................................. 2-4
2.3.2 Description of Probabilistic Index ......................................................................... 2-5
2.3.3 Severity Function ................................................................................................. 2-6
2.3.4 Uncertainty Models .............................................................................................. 2-7
2.4 Case Study for Five Bus Test Case............................................................................. 2-8
2.4.1 Steps 1, 2, 3 for Deterministic and Probabilistic Studies....................................... 2-8
2.4.2 Steps 4, 5 for Deterministic Method ..................................................................... 2-9
2.4.3 Steps 4, 5 for Probabilistic Method......................................................................2-10
2.5 Case Study for IEEE RTS...........................................................................................2-13

2.5.1 Steps 1, 2, 3 for Deterministic and Probabilistic Studies......................................2-13
2.5.2 Steps 4, 5 for Deterministic Method ....................................................................2-15
2.5.3 Steps 4, 5 for Probabilistic Method......................................................................2-16
2.6 Discussion ..................................................................................................................2-18
2.7 Conclusion..................................................................................................................2-19
References........................................................................................................................2-20

3 RISK BASED OPTIMAL POWER FLOW


3.1 Introduction................................................................................................................... 3-1
3.2 System Composite Risk Assessment .......................................................................... 3-2
3.2.1 Probabilistic Load Flow ......................................................................................... 3-3
3.2.2 Risk Assessment of Thermal Overload and Bus Voltage Out-of-limit .................... 3-5
3.2.2.1 Thermal Overload Risk.................................................................................. 3-5
3.2.2.2 Bus Voltage Out-of-limit Risk......................................................................... 3-8
3.2.2.3 Considering Credible Contingencies.............................................................3-10
3.3 Formulating Risk Based Optimal Power Flow Problem ................................................3-10
3.4 Algorithm to Solve the Risk Based Optimal Power Flow ..............................................3-12
3.5 Numerical Illustration ...................................................................................................3-13
3.5.1 Problem 0: Using Deterministic Limits.................................................................3-15
3.5.2 Problem 1: Set Individual Risk Limit on Each Component...................................3-16
3.5.3 Problem 2: Set An Overall System Risk Limit......................................................3-21
3.5.4 Problem 3: Treat the System Risk as a Part of Objective ....................................3-22
3.6 Conclusions.................................................................................................................3-22
References........................................................................................................................3-23

4 DECISION MAKING FOR OPERATIONS—CORRECTIVE/PREVENTIVE ACTION SELECTION
4.1 Introduction................................................................................................................... 4-1
4.2 Study Case................................................................................................................... 4-4
4.3 Profits Minus Risk Paradigm – The Single Objective Case ........................................... 4-6
4.3.1 Summary of our Previous Work ............................................................................ 4-6
4.3.2 Alternative Methods: Rank and Per-Unit .............................................................. 4-7
4.3.2.1 Rank Method................................................................................................. 4-8
4.3.2.1.1 Mini-max Criterion.................................................................................. 4-9
4.3.2.1.2 Minimum Maximum Regrets Criteria ..................................................... 4-9

4.3.2.2 Per-unit Method (Method No. 7) .................................................................... 4-9
4.3.2.2.1 Mini-max Criterion.................................................................................4-10
4.3.2.2.2. Minimum Maximum Regrets Criteria ....................................................4-10
4.4 Decision with Additional Information Using Bayesian Decision Tree............................4-11
4.4.1 Decision Tree.......................................................................................................4-11
4.4.2 Decision-making with Additional Information ........................................................4-12
4.5 Multi-objective Decision Making...................................................................................4-14
4.5.1 Shortcomings of Single Criterion Risk-based Approaches ...................................4-14
4.5.2 Literature Review on Multi-criteria Decision Making .............................................4-15
4.5.3 Overview..............................................................................................................4-16
4.5.4 Value or Utility-based Approaches .......................................................................4-17
4.5.4.1 Define the Scales of Measurement of the Objectives....................................4-18
4.5.4.2 Develop Value Functions..............................................................................4-19
4.5.4.3 Making Decisions based on the Value..........................................................4-23
4.5.5 ELECTRE IV........................................................................................................4-24
4.5.5.1 Main Steps of the Method.............................................................................4-25
4.5.5.2 Results with ELECTRE IV ............................................................................4-27
4.5.6 Other Methods .....................................................................................................4-31
4.6 Evidential Theory.........................................................................................................4-31
4.6.1 Brief Introduction of Evidential Theory..................................................................4-32
4.6.1.1 The Frame of Discernment and Basic Probability Assignment......................4-32
4.6.1.2 Belief and Plausibility Function .....................................................................4-33
4.6.1.3 Dempster’s Rule of Combination ..................................................................4-33
4.6.2 Application of Evidential Theory in Corrective/preventive Action Selection...........4-34
4.6.2.1 Single Decision Maker MCDM .....................................................4-34
4.6.2.1.1 Appraisal of Each Action.......................................................................4-34
4.6.2.1.2 Select Action Based on the Appraisal ...................................................4-37
4.6.2.2 Multiple Decision Makers MCDM..................................................................4-37
4.7 Conclusion...................................................................................................................4-38
References........................................................................................................................4-39

5 VALUE OF INFORMATION
5.1 Introduction................................................................................................................... 5-1
5.2 Perfect Information ....................................................................................................... 5-2
5.3 Imperfect Information.................................................................................................... 5-6

5.4 Conclusion.................................................................................................................... 5-8

APPENDIX
A.1 Introduction .................................................................................................................. A-1
A.2 Rating-based vs. Cost-based ....................................................................................... A-1
A.3 Impacts vs. Decisions .................................................................................................. A-2
A.4 Modeling Impact Uncertainty........................................................................................ A-2
A.5 Cost Estimation............................................................................................................ A-3
A.6 Classification of Impacts............................................................................................... A-4
A.6.1 Based on Affected Group ..................................................................................... A-4
A.6.2 Based on Cost Category....................................................................................... A-5
A.6.3 Based on Impact Component ............................................................................... A-5
A.6.4 Based on Cost Component................................................................................... A-6
A.7 Impacts for Different Security Problems ....................................................................... A-7
A.7.1 Overload Security ................................................................................................. A-8
A.7.2 Voltage Security ................................................................................................. A-10
A.7.3 Dynamic Security................................................................................................ A-11
A.8 Summary ................................................................................................................... A-14
References....................................................................................................................... A-14

LIST OF FIGURES

Figure 2-1 Uncertainty Due to Operating Conditions and Contingency State .......................... 2-5
Figure 2-2 Overload and Low-voltage Continuous Severity Functions .................................... 2-7
Figure 2-3 Concept of Loadability and Margin......................................................................... 2-6
Figure 2-4 Deterministic Security Boundary ...........................................................................2-10
Figure 2-5a Risk Indices with Discrete Severity Functions & Uncertainty Model 1..................2-11
Figure 2-5b Risk Level for Power A – E along the Deterministic Boundary.............................2-11
Figure 2-6 Risk Indices with Continuous Severity Functions & Uncertainty Model 1...............2-12
Figure 2-7 Risk Indices with Continuous Severity Functions & Uncertainty Model 2...............2-13
Figure 2-8 Modified IEEE RTS ‘96 .........................................................................................2-14
Figure 2-9 Deterministic Security Boundary ...........................................................................2-16
Figure 2-10 Risk Indices with Discrete Severity Functions and Uncertainty Model 1 ..............2-17
Figure 2-11 Risk Indices with Continuous Severity Functions & Uncertainty Model 1.............2-18
Figure 2-12 Risk Indices with Continuous Severity Functions & Uncertainty Model 2.............2-18
Figure 3-1 Risk-flow Curve for 138kV Line .............................................................................. 3-6
Figure 3-2 Risk-flow Curve for 230kV Line .............................................................................. 3-6
Figure 3-3 Risk-flow Curve for 400MVA Transformer .............................................................. 3-7
Figure 3-4 Risk-voltage Curve for 138kV Bus ......................................................................... 3-9
Figure 3-5 Risk-voltage Curve for 230kV Bus ......................................................................... 3-9
Figure 3-6 The IEEE RTS’96 System.....................................................................................3-14
Figure 3-7 Generation Cost vs. Component Risk Limit...........................................................3-21
Figure 3-8 Generation Cost vs. System Risk Limit .................................................................3-21
Figure 3-9 Lagrange Multipliers vs. System Risk Limits .........................................................3-22
Figure 4-1 Risk Inconsistency in System Operation ................................................................ 4-2
Figure 4-2 Decision Tree of the Example ...............................................................................4-11
Figure 4-3 The Decision Tree with Additional Information ......................................................4-13
Figure 4-4 An Example of a Value Function ...........................................................................4-18
Figure 4-5 Value Curves of Profit ...........................................................................................4-21
Figure 4-6 Value Curves of Risk ............................................................................................4-22
Figure 4-7 Value Curves of Variance .....................................................................................4-22
Figure 4-8 Hierarchy Structure for the Decision-making Problem ...........................................4-23
Figure 4-9 Preference and Indifference Thresholds ...............................................................4-26
Figure 4-10 Example of Final Ranking with ELECTRE IV ......................................................4-27

Figure 4-11 Final Ranking......................................................................................................4-31

LIST OF TABLES

Table 3-1 Assumed Credible Contingency Set .......................................................................3-15


Table 3-2 Deterministic Limits ................................................................................................3-15
Table 3-3 Thermal Risk for Deterministic Constrained Case ..................................................3-17
Table 3-4 Voltage-out-of-limit Risk for Deterministic Constrained Case..................................3-18
Table 3-5 Thermal Risk for Risk Constrained Case................................................................3-19
Table 3-6 Voltage-out-of-limit Risk for Risk Constrained Case ...............................................3-20
Table 3-7 Lagrange Multipliers for Bounded Constraints........................................................3-20
Table 3-8 Solution to Problem 3.............................................................................................3-23
Table 4-1 Decision Making Case............................................................................................. 4-4
Table 4-2 Contingencies and Probabilities .............................................................................. 4-5
Table 4-3 Profits...................................................................................................................... 4-5
Table 4-4 Security Impacts...................................................................................................... 4-5
Table 4-5 Risk Values ............................................................................................................. 4-7
Table 4-6 Risk Value............................................................................................................... 4-8
Table 4-7 Ranked Risk Value.................................................................................................. 4-8
Table 4-8 Regret Values ......................................................................................................... 4-9
Table 4-9 Per-unit Risk Value ................................................................................................4-10
Table 4-10 Regret Value ........................................................................................................4-10
Table 4-11 Standard Deviation for Each Option .....................................................................4-15
Table 4-12 Decision Making Methods ....................................................................................4-17
Table 4-13 Upper and Lower Limits of Objectives ..................................................................4-19
Table 4-14 The Value of Example Case.................................................................................4-24
Table 4-15 Threshold Values .................................................................................................4-27
Table 4-16 Weak Preferences................................................................................................4-28
Table 4-17 Strong Preferences ..............................................................................................4-28
Table 4-18 Veto Preferences .................................................................................................4-28
Table 4-19 Weak and Strong Outranking Relations ...............................................................4-29
Table 4-20 Strong Outranking Relations ................................................................................4-29
Table 4-21 Weak Outranking Relations..................................................................................4-29
Table 4-22 Strong Qualifications ............................................................................................4-30
Table 4-23 Weak Qualifications .............................................................................................4-30
Table 4-25 BPA of the Example .............................................................................................4-36
Table 4-26 The Plausibility and R of the Example ..................................................................4-37
Table 4-27 Combined BPA and Corresponding Plausibility and R..........................................4-37
Table 5-1 Weather Report....................................................................................................... 5-2
Table 5-2 Risk and Profit......................................................................................................... 5-3
Table 5-3 List of Actions.......................................................................................................... 5-5
Table 5-4 Annual Risk in $ ...................................................................................................... 5-5
Table 5-5 Conditional Probabilities of the Growth Given the Observed Load Increased .......... 5-6
Table A-1 Impact Evaluation for Overload............................................................................. A-10
Table A-2 Impact Evaluation for Voltage Security ................................................................. A-11
Table A-3 Impact Evaluation for Dynamic Security................................................................ A-13
1 INTRODUCTION

Planning and operating bulk interconnected electric power systems are complex activities that require involvement
of a large number of people bringing a wide range of experiences and interests. What was once mainly the domain
of planning and operating engineers within the utility company now must involve people representing interests and
needs of transmission owners, system operators, energy sellers, large industrial customers and other end users,
regulators, reliability councils, security centers, manufacturers, marketers, brokers, and power exchange personnel.
In parallel with the increase in the diversity of participants, the conditions under which power systems are operated
have also become more diverse. Transmission loading patterns differ from those for which they were originally
planned, and the ability to monitor and control them has greatly increased in complexity. High uncertainty is a
characterizing feature of this complexity, and the ability to obtain, manage, and use large amounts of information
has become the primary means of handling this uncertainty.
Within the electric network, an individual disturbance resulting in a cost consequence may occur for a number of
reasons at any time. The disturbance may result in overload, voltage collapse, or transient instability, driving the
system into an uncontrollable cascading condition that leads to widespread power outages. To maintain system
reliability under uncertainty, studies are performed to aid in operating and planning decisions. The current practice
within the industry uses deterministic methods to perform these studies, with significant safety margins to cover
“all” the possible unknown uncertainties. In practice, this means that power engineers propose a strong system and
then operate it with large security margins. Though investment and operational costs are relatively high, this has
resulted in a corresponding high degree of reliability in most power systems.
The power system, however, has been shifting from a regulated system to a competitive and uncertain market
environment. A fluctuation of market demand and supply has led to an uncertain market price for energy in system
operation. Although some methods of risk assessment and management have been introduced into the market-
oriented energy trading business, the traditional deterministic reliability criteria are still intact. This has led
engineers to face more pressure, from economic imperatives in the marketplace, to operate power systems with
lower security margins. To operate the system closer to the traditional deterministic limits, or even beyond them, a
refined approach called Risk-based Security Assessment (RBSA) [1,2,3] has been developed. An important feature of
this approach is an index that quantitatively captures the basic factors that determine security level: the likelihood
and severity of events. The use of this index allows security level to be included in decision-making paradigms.
It is to this end that the work of this project is intended.
The main contribution of the work in this report is to show how RBSA can be included in formal decision-making
methods in order to select the “best” alternative or course of action accounting for the impact of network security.
Several different decision-making applications are explored in this context, and several different decision-making
paradigms are employed. We view this work as providing the foundation on which a "toolbox" of decision-making
methods will be coded, so that the decision-maker can pull out any one of them, or perhaps several, to provide
decision support in a variety of situations.
In Section 1.1, we briefly describe the RBSA approach to quantifying security level. We motivate this work in
Section 1.2 by describing the process used in the industry today for security-related decision-making. Section 1.3
identifies several different applications for risk-based decision-making.

1.1 Overview of RBSA


The risk index is a measure of the system's exposure to failure. Consequently, this risk index accounts for both
likelihood and severity. In addition, it uses a severity model that captures the unavoidable consequences associated
with each outcome. The basic relation for computing risk is given by

    Risk( Sev | X_{t,f} ) = E( Sev( X_t ) | X_{t,f} )
                          = Σ_i ∫_{X_t} Pr( E_i, X_t | X_{t,f} ) × Sev( E_i, X_t ) dX_t        (eq. 1-1)

Here the risk associated with the future operating condition at time t, X_{t,f}, is given by the expected value of the
severity of the operating condition X_t given the forecasted operating condition, i.e., E( Sev( X_t ) | X_{t,f} ). This
expectation is an integral of the product of the probability of the uncertain event, defined by:

    uncertain event: E_i (the contingency state), and
    uncertain operating condition: X_t (the operating condition at time t),

times its corresponding (post-contingency) severity, over the set of all possible uncertain events. Another integration
with respect to time (not shown in eq. 1-1), over a specified time period, provides the basis for performing risk
assessment for planning. We emphasize that the "uncertain event" includes uncertainty in the contingency state
E_i as well as in the future operating condition X_t.
The risk index is based on the elements of probability and severity. These elements also enable the calculation of the
variance. Variance characterizes the uncertainty of the risk index, and it can be important for good decision-making.
For example, an alternative for which the expected cost, or risk, is low, but the amount of potential variation from
the expected cost is great, may not be a better alternative than one where the expected cost is higher but potential for
variation is much smaller.
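To make the variance point concrete, the following sketch (with purely hypothetical probabilities and severities, in arbitrary cost units) computes risk and its standard deviation for two candidate alternatives: A has the lower expected cost but far greater potential variation than B.

```python
import math

# Hypothetical post-contingency outcomes for two candidate alternatives.
# Each outcome is (probability, severity in arbitrary cost units).
alt_a = [(0.98, 0.0), (0.02, 500.0)]   # lower risk, large potential variation
alt_b = [(0.80, 0.0), (0.20, 60.0)]    # higher risk, small potential variation

def risk_and_std(outcomes):
    """Risk = E[Sev]; the standard deviation quantifies the uncertainty
    around that expectation."""
    mean = sum(p * s for p, s in outcomes)
    var = sum(p * (s - mean) ** 2 for p, s in outcomes)
    return mean, math.sqrt(var)

for name, alt in (("A", alt_a), ("B", alt_b)):
    r, sd = risk_and_std(alt)
    print(f"alternative {name}: risk = {r:.1f}, std dev = {sd:.1f}")
# A has the lower risk (10 vs. 12) but a far larger spread (70 vs. 24), so a
# risk-averse decision-maker might still prefer B.
```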
Severity assessment is highly influential for decision-making. In RBSA, there are at least two levels of severity
assessment. One level may be broadly identified as “rating-based” and the other level as “cost-based.” Rating-based
severity assessment establishes relatively simple severity functions that depend on deterministic criteria. For that
reason, they are preferred for operational security assessment where engineers prefer indices that reflect physical
attributes of the network that are easily understandable. These severity functions and the manner in which they are
developed are fully described in a report on the EPRI project called “Security Mapping and Reliability Index
Evaluation.” Another level of severity assessment is to assign an economic value to each possible outcome identified
as an impact. Then the corresponding risk has explicit economic meaning in that it represents the expected cost due
to possible insecurity problems. A disadvantage with the latter approach is that it introduces another layer of
uncertainty in translating the uncertain network performance to an even more uncertain associated cost of that
network performance. Yet the cost-based approach to impact evaluation provides the capability for quantification of
the cost uncertainty and for that reason may have advantages in planning.
In either case, the resulting risk index may be used to provide a direct bridge between power system economics and
reliability, in that it is a means to explicitly include reliability in ordinary economic decision-making problems using
formal decision-making paradigms. We have included a summary of cost-based impact assessment in Appendix A.
We utilize this approach in illustrating most of the decision-making techniques described in this report.
One unique feature of RBSA that distinguishes it from traditional security assessment is that it is capable of
assessing uncertainties in the impact given the contingency state Ei and the operating condition Xt, using a
probabilistic model to account for uncertainties in Im(Ei , Xt). For line overload, the uncertainty is in the ambient
temperature, wind speed, and wind direction [4,5]. For transformer overload, it is in the ambient temperature and the
transformers' loading cycle [6,7]. For voltage security, it is in the interruption voltage level of the load [7]. For
dynamic (angle) security, it is in the fault type and fault location of the outaged circuit corresponding to contingency
state Ei [8,9,10]. Details on the related computations can be found in the references [4-11]. Appendix A describes
the impact assessment in more detail.

1
The set of contingency states {Ei, ∀ i = 0, …, N} includes the possibility that the current state remains the same,
i.e., an outage does not occur.

1.2 The Decision-making Approach in Industry Today
In today’s power industry, traditional deterministic reliability criteria are still the basis for operating and
planning decision-making. Within the electric network, an individual disturbance with non-zero cost consequences
may occur for any number of reasons, at any time, and in any system environment. The disturbance may result in
overload, voltage collapse, or transient instability, and may drive the system into an uncontrollable cascading
situation leading to widespread power outage. To maintain system security under these uncertainties, certain limits
must be satisfied regardless of the economics of system operation.

1.2.1 Deterministic Reliability Criteria


Typical deterministic reliability criteria used in industry today are specified based on a
subjective assessment of event (or disturbance) likelihood. For example, for any event
resulting in loss of a single component (typically called an N-1 disturbance),
deterministic criteria might specify the following minimum allowable performance
levels:
- 0.8 pu minimum first swing bus voltage dip,
- 30 minute emergency ratings for transmission conductors based on 2 feet per second
wind speed and 40 degrees C ambient temperature,
- 0.90 minimum post-contingency steady state voltage level,
- no out of step condition,
- positively damped oscillations.
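A deterministic assessment then reduces to checking each simulated contingency against such thresholds. The sketch below is a minimal illustration of that check; the performance values are hypothetical, and the emergency-rating criterion is simplified to a loading percentage.

```python
# Hypothetical deterministic N-1 performance check against criteria like
# those listed above (all thresholds and results are illustrative).
CRITERIA = {"min_swing_voltage_pu": 0.80,
            "min_steady_voltage_pu": 0.90,
            "max_loading_pct_of_emergency_rating": 100.0}

def violations(result):
    """Return the deterministic criteria violated by one contingency result."""
    v = []
    if result["swing_voltage_pu"] < CRITERIA["min_swing_voltage_pu"]:
        v.append("first-swing voltage dip below 0.8 pu")
    if result["steady_voltage_pu"] < CRITERIA["min_steady_voltage_pu"]:
        v.append("post-contingency voltage below 0.90 pu")
    if result["loading_pct"] > CRITERIA["max_loading_pct_of_emergency_rating"]:
        v.append("flow above the 30-minute emergency rating")
    if not result["positively_damped"]:
        v.append("oscillations not positively damped")
    return v

# One hypothetical post-contingency simulation result:
case = {"swing_voltage_pu": 0.78, "steady_voltage_pu": 0.92,
        "loading_pct": 104.0, "positively_damped": True}
print(violations(case))   # two of the four criteria are violated
```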
We have provided a summary of typical disturbance-performance criteria used in the industry [1], including those
used by the North American Electric Reliability Council (NERC) and eight of its regional reliability councils.
Such disturbance-performance criteria are the basis for today’s operating and planning decision-making as
influenced by security. Therefore, they are important in the context of this report, as they establish a benchmark
“deterministic” decision-making method.

1.2.2 The Deterministic Decision-Making Approach


In the operating and planning decision-making process of today, the deterministic method seeks an optimal operating
condition in which the system survives the worst-case credible contingency. This method is equivalent to the
maximin criterion referenced in the literature. That is, decision-making within power systems, whether through an
economic dispatch algorithm or a competitive market mechanism, is done by solving an optimization problem with a
set of deterministic security constraints:

Max   Revenue(X) - Cost(X)
s.t.  Security Constraints                                            (eq. 1-2)
The constraints, expressed by the limits of line flows, bus voltages and other pre-determined thresholds, must be
satisfied not only by the pre-contingency operating situation, but also by all of the post-contingency situations.
Furthermore, the constraints are typically tightened by an additional safety margin in order to deal with
uncertainties. A well-known example of this strategy is applied when computing Available Transfer Capability
(ATC).
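The hard-constraint logic of eq. 1-2 can be sketched on a toy two-generator example. All numbers below are hypothetical, and the brute-force search stands in for a real optimization; this is an illustration of the constraint structure, not the ATC procedure itself.

```python
# Toy security-constrained dispatch: minimize production cost for a 100 MW
# load fed by a cheap remote unit over a double-circuit line and an
# expensive local unit. The security constraint is that the remote flow must
# not exceed the rating of the single circuit surviving an N-1 outage.
LOAD = 100.0
LINE_RATING_N1 = 70.0                     # MW, surviving circuit
COST = {"local": 40.0, "remote": 25.0}    # $/MWh; remote power uses the line

best = None
for remote in range(0, 101):              # MW from the remote unit
    if remote > LINE_RATING_N1:           # post-contingency hard constraint
        continue
    local = LOAD - remote
    cost = local * COST["local"] + remote * COST["remote"]
    if best is None or cost < best[1]:
        best = ((local, remote), cost)

print(best)   # the cheapest dispatch that survives the worst credible outage
```

The hard limit binds at 70 MW regardless of how unlikely or mild a small excursion beyond it might actually be, which is exactly the rigidity that risk-based formulations later in this report relax.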
This kind of method has worked quite well in the past. The rare occurrence of system blackouts and service
interruptions gives experienced engineers confidence that the method is safe. However, "engineers
sometimes do not recognize that there is still risk attached to such methods, leading them to believe, incorrectly, that
deterministic methods are inherently 'safe', while probabilistic methods are not" [12]. So it is with an eye toward
proper identification of the risks, together with taking the most effective and desirable actions to mitigate them, that
we have addressed the decision-making problems described in the next section.

1.3 Applications for Risk-based Decision-making
Quantification of the security level via the risk calculation previously described offers another approach to
decision-making in power systems. Below, we suggest a few typical applications where this approach is applicable.
This list is not exhaustive; additional applications are expected to be identified as the approach comes into use.

1.3.1 Operations
a. Unit commitment: In deciding whether to commit a unit to relieve a high
transmission flow, the operator would want to weigh the risk associated with the flow
against the cost of committing the additional unit.
b. Economic dispatch: Dispatching interconnected units to minimize production costs is often
constrained due to security limits. Traditionally, these limits have been hard. However, use
of hard limits sometimes results in acceptance of high energy costs even though the actual
risk may be very low. A “Risk” approach can identify and quantify these situations.

c. Market lever: The risk is able to function as a lever to adjust the behavior of
market participants via an economic mechanism to avoid system security
problems, rather than through mandatory curtailment of transactions based on hard rules.
d. Preventive/corrective action selection: The preventive/corrective (P/C) action is very
important for maintaining the power system at an acceptable risk level. The selection of such
an action is a complicated decision-making process where the influence of an action must be
assessed for multiple problems, and frequently, what improves one problem may degrade
another one. Offering the best action or a possible action list will help the operator to
efficiently operate under highly stressed conditions. The traditional corrective/preventive
action selection is to solve an optimization problem, commonly known as the security
constrained optimal power flow (SC-OPF). The objective function is normally the
production cost, and the constraints include the power flow equalities and the limits on
component performance (branch flows, bus voltage limits, generator capability). In contrast,
we have formulated a risk-based optimal power flow (RB-OPF) based on the ability to
quantify risk. There are two different kinds of formulations for the RB-OPF, depending on
how risk is included:
Risk in the constraints: We may use a traditional objective function (e.g., production
costs) together with the power flow equality constraints, but rather than include limits on
branch and bus performance, we include limits on component risk. Alternatively, we
may include a limit on the system risk.
Risk in the objective: Here, we include the production costs together with the system risk
in the objective function. The only constraints modeled are the power flow equations
and the generator capability limits. Under this circumstance, the limits on bus voltage
and transmission line flow performance are not modeled as this influence is reflected in
the risk part of the objective.
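The difference between the two formulations can be illustrated on a one-line toy system. The sketch below solves both by grid search; all numbers, including the simple normal-uncertainty risk model for the line flow, are hypothetical, and the actual RB-OPF of Chapter 3 is a full optimal power flow rather than this one-variable search.

```python
import math

# Toy contrast of the two RB-OPF formulations. The remote generation flows
# over one line; because the realized flow is uncertain (modeled here as
# normal around the dispatch), overload risk grows smoothly with loading.
LOAD, RATING, SIGMA = 100.0, 70.0, 8.0      # MW
COST = {"local": 40.0, "remote": 25.0}      # $/MWh
OVERLOAD_COST = 3000.0                      # $ consequence of an overload

def overload_risk(flow):
    """Risk = Pr(realized flow > rating) * consequence."""
    p = 0.5 * (1.0 - math.erf((RATING - flow) / (SIGMA * math.sqrt(2.0))))
    return p * OVERLOAD_COST

def production_cost(remote):
    return (LOAD - remote) * COST["local"] + remote * COST["remote"]

# Formulation 1: risk in the constraints (a risk limit replaces the flow limit).
RISK_LIMIT = 150.0
feasible = [r for r in range(0, 101) if overload_risk(r) <= RISK_LIMIT]
constrained = min(feasible, key=production_cost)

# Formulation 2: risk in the objective, traded off against production cost.
in_objective = min(range(0, 101),
                   key=lambda r: production_cost(r) + overload_risk(r))

print("risk-in-constraints dispatch:", constrained, "MW remote")
print("risk-in-objective dispatch:", in_objective, "MW remote")
```

With these numbers the risk limit binds at 56 MW, while the objective-form trade-off backs off slightly further; either way the dispatch responds to how fast risk grows, not to a single hard flow threshold.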

1.3.2 Operational Planning


a. Rating individual components: Limitations are associated with almost every
power system component, including lines, transformers, and generators.

These limitations are often given for various conditions; for example, a
transmission line typically has both a normal rating, which limits the
continuous current flow, and a 15 or 30 minute emergency rating, which
limits the flow for the corresponding amount of time. RBSA is very effective in
identifying different ratings for different durations and different
components.
b. Identifying operating limits: Operators must adhere to limits on transmission flows,
generation levels, load levels, and voltage levels. These limits, often complex functions of
several operating parameters, are driven by risk associated with normal conditions as well as
risk associated with potential outage conditions. RBSA can quantify these risks and provide
decision criteria for use in identifying them.

1.3.3 Facility Planning


System analysts studying transmission and generation needs for the future must identify and select from alternatives
to solve perceived problems. This work requires prediction of the conditions characterizing a distant future and
consequently results in large amounts of uncertainty inherent to the analysis, particularly with respect to outage
conditions and loading levels. RBSA provides tools for handling this uncertainty, quantifying long-term risk of a
particular facility plan, and comparing this risk with the costs and benefits of the plan.

1.3.4 Reliability Criteria


Some criteria used to judge acceptability of system performance are quite subjective. For example,
many companies specify transient voltage dip performance requirements of 0.8 pu, based on the
perception that violation of this requirement can cause interruption of some types of loads. Yet
there exists little data characterizing load interruption as a function of voltage dip severity, so
that the true consequences of transient voltage dips are not well quantified. Consequently,
justification of component ratings and operating limits can be difficult. Agents that incur
economic penalty as a result of being constrained off the system may press for justification of the
violated performance requirement. RBSA can provide this justification; alternatively, it can
provide the basis for adjustments to the performance requirements.

1.3.5 Data gathering by information valuation


Application of probability methods to characterize engineering systems always requires some
data collection, i.e., information gathering. The size and complexity of power systems make this
a formidable problem, and this has in the past been a reason for abandoning probabilistic
approaches altogether. The RBSA approach enables valuation of information before it is
gathered in order to determine whether its value exceeds the cost of gathering it. This may be
thought of as a secondary decision on whether to spend resources for gathering more data to
improve a primary decision.
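This secondary decision can be sketched with a classic expected-value-of-information calculation. The unit-commitment setting and all costs and probabilities below are hypothetical.

```python
# Sketch of valuing information before gathering it. A hypothetical operator
# can commit a reserve unit ($200) or run the risk of a $1000 insecurity cost
# that materializes only under severe weather (prior probability 0.3).
P_SEVERE = 0.3
COMMIT_COST, INSECURITY_COST = 200.0, 1000.0

# Primary decision without additional data: pick the lower expected cost.
cost_without = min(COMMIT_COST, P_SEVERE * INSECURITY_COST)

# With a perfect weather forecast, commit only when severe weather is coming.
cost_with = P_SEVERE * COMMIT_COST + (1 - P_SEVERE) * 0.0

value_of_information = cost_without - cost_with
print(f"expected value of perfect information: ${value_of_information:.0f}")
# Secondary decision: gather the data only if it costs less than this value.
```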

1.4 Report Overview


The deterministic approach to performing security-related decision-making has been a
traditional mainstay in the industry for many years. In making changes it is important
to clearly understand the changes, what is different, what is better, and what is not. This
is the goal of Chapter 2, where we compare risk-based decision-making to traditional
deterministic decision-making for identifying operational limits.

In Chapter 3, a risk based Optimal Power Flow (RB-OPF) is developed. The method assumes that power demand in
each bus is random and normally distributed. There are two basic implementations of the RB-OPF. The first
implementation is to replace the traditional deterministic constraints with component risk functions. The advantage
here is that one can then solve the RB-OPF with individual component risk limits, regional risk limits, system risk
limits, or a combination of these various risk limits. The second implementation is to eliminate constraints altogether
and include the total risk in the objective function with the generation cost so that these two can be optimized
against each other.
Chapter 4 explores various decision-making paradigms for performing corrective and preventive action selection. In
the past, such actions have been selected based on the concepts introduced by Dy Liacco [13], where preventive
actions are selected to move the system from the alert state to the normal state, and corrective actions are selected to
move the system from the emergency state to the normal state. Thus, preventive/corrective actions require a decision
in terms of when to take action and which action to take. The basis for this decision has been the identification of the
alert or the emergency states in terms of deterministic criteria. The ability to compute risk and related measures
provides for various new decision-making paradigms in this arena. We have used a simple decision-making scenario
to test several variations on two basic types of decision-making methods. A single criterion method results from
combining economic measures with risk. There are a variety of such approaches that we describe. The Bayesian
decision tree is particularly effective as a tool that provides integration of additional information as it becomes
available. On the other hand, risk alone, as an expected value, does not completely describe the uncertainty inherent
to the decision. We may also use variance as an index, along with risk, yet its inclusion requires multi-criteria
decision-making, an approach that is also described in Chapter 4. Finally, Chapter 4 explores use of evidential
theory to deal with the corrective/preventive action selection problem. This theory provides an effective method to
process the uncertainty and has a special advantage of combining the opinions of different decision makers.
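A minimal numerical sketch conveys the kind of updating a Bayesian decision tree performs for preventive action selection. The costs, prior, and storm-warning evidence model are all invented for illustration; Chapter 4 develops the full method.

```python
# Hypothetical decision: take a preventive action ($150) now, or wait and
# incur a $2000 insecurity cost only if the threatening contingency occurs.
ACTION_COST, OUTAGE_COST = 150.0, 2000.0
p_event = 0.05                          # prior probability of the contingency

def best_choice(p):
    wait = p * OUTAGE_COST              # expected cost of waiting
    return ("act", ACTION_COST) if ACTION_COST < wait else ("wait", wait)

print("before evidence:", best_choice(p_event))

# New evidence arrives (a storm warning). Bayes' rule updates p_event using
# assumed likelihoods of observing the warning with and without the event.
P_WARN_GIVEN_EVENT, P_WARN_GIVEN_NO_EVENT = 0.9, 0.2
num = P_WARN_GIVEN_EVENT * p_event
den = num + P_WARN_GIVEN_NO_EVENT * (1 - p_event)
posterior = num / den

print("after evidence:", best_choice(posterior))
# The same economics flip the decision once the evidence raises the
# probability of the contingency enough.
```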
Chapter 5 addresses the issue of data gathering for use in applying probabilistic methods to decision-making. Rather
than discuss the mechanics of how to do it, we focus on the decision of whether to do it. Thus, the data-gathering
problem becomes a decision problem in itself. This decision requires assessment of the information cost to the
information value. The concept that underlies placing a dollar value on information is that the purpose of gathering
information is to reduce uncertainty. The anticipated change in uncertainty, measured by changes in probabilities,
results in changes in expected impacts (risks). The value of the information is determined by comparing the risk with
and without the additional information.
References
[1] EPRI final report WO8604-01, “ Risk-based Security Assessment”, December,
1998.
[2] J. McCalley, V. Vittal, and N. Abi-Samra, “Overview of Risk Based Security
Assessment,” Proc. of the 1999 IEEE PES Summer Meeting , July 18-22, 1999.
[3] J. McCalley, V. Vittal, N. Abi-Samra, "Use of Probabilistic Risk in Security
Assessment: A Natural Evolution," International Conference on Large High
Voltage Electric Systems (CIGRE), Selected by the CIGRE U.S. National
Committee for presentation at the CIGRE 2000 Conference, August, 2000,
Paris.
[4] H. Wan, J. McCalley, and V. Vittal, "Increasing Thermal Rating by Risk
Analysis," IEEE Trans. on Pwr Sys., Vol. 14, No. 3, Aug., 1999, pp. 815-828.
[5] J. Zhang, J. McCalley, H. Stern, and W. Gallus, “A Bayesian Approach to Short-
Term Transmission Line Thermal Overload Risk Assessment,” under review by IEEE
Transactions on Power Systems.
[6] W. Fu, J. McCalley, V. Vittal, "Risk-Based Assessment of Transformer
Thermal Loading Capability," Proc. of the 30th North American Power
Symposium," Cleveland,OH., Oct. 1998, pp. 118-123.
[7] W. Fu, J. McCalley, V. Vittal, “Transformer Risk Assessment,” to appear, IEEE
Transactions on Power Systems.

[8] H. Wan, J. McCalley, V. Vittal, "Risk-Based Voltage Security," to appear, IEEE
Trans. on Pwr Sys.
[9] J. McCalley, A. Fouad, V. Vittal, A. Irizarry-Rivera, B. Agrawal, R. Farmer, “A
Risk-based Security Index for Determining Operating Limits in Stability-
Limited Electric Power Systems” IEEE Trans. on Pwr. Sys., Vol. 12, No. 3, Aug.
1997, pp. 1210-1219.
[10] V. Van Acker, J. McCalley, V. Vittal, "Risk-Based Transient Instability," Proc.
of the 30th North American Pwr Symposium," Cleveland,OH., Oct. 1998.
[11] V. Van Acker, J. McCalley, V. Vittal, J. Pecas-Lopes, “Risk-Based Transient
Stability Assessment,” Proceedings of the Budapest Powertech Conference,
Budapest, Hungary, Sept. 1999.
[12] J. S. Barrett and Y. Motlis, Discussion of "Increasing Thermal Rating by
Risk Analysis," IEEE Transactions on Power Systems, Vol. 13, Aug. 1999.
[13] T. Dy Liacco, “System Security: The Computer Role,” IEEE Spectrum, Vol. 16,
No. 6, pp 48-53, June, 1978.

2
DECISION MAKING FOR OPERATIONS:
COMPARISON BETWEEN RISK-BASED AND
DETERMINISTIC SYSTEM OPERATING LIMITS

Deterministic methods are widely utilized in industry to perform security evaluation of
power system operation by providing a basis for determining tradeoffs between
security and economy, while probabilistic approaches are recognized for their ability to
enhance this decision-making process. In this chapter, we compare the deterministic
approach to a probabilistic one via an overload and low voltage security assessment
study to identify secure regions of operation for a small 5-bus system and for the IEEE
Reliability Test System. The results of this comparison indicate that the probabilistic
approach offers several inherent advantages relative to the deterministic approach.

2.1 Introduction
In many countries today, the introduction of competitive supply and corresponding
organizational separation of supply, transmission, and system operation has resulted in
more highly stressed operating conditions, more vulnerable networks, and an increased
need to identify the operational security level of the transmission system. Here, we
regard security as the ability of the system to respond to contingencies in terms of the
branch loading, bus voltage, and dynamic response of the network. The determination
of the security level, for given operating conditions, traditionally has been done using
what we call the deterministic method. In this method, an operating condition is
identified as secure or insecure according to whether each and every contingency in a
pre-specified set, the contingency set, satisfies specified network performance criteria, the
performance evaluation criteria. If one or more contingencies are in violation, actions
are taken to move the security level into the secure region. If no disturbances are in
violation, then no action need be taken, or actions can be taken to enhance the economic
efficiency of the energy delivered to the end-users.

It is easy to recognize a decision-making problem in the above process; the decision is
whether to take actions and if so, what kind and how much. The deterministic method
provides a very simple rule for use in making this decision: optimize economy within
hard constraints of the secure operational region. It is this simplicity that has made the

deterministic method so attractive, and so useful, in the past. Today, however, with the
industry’s emphasis on economic competition, and with the associated increased
network vulnerability, there is a growing recognition that this simplicity also carries
with it significant subjectivity, and this can result in constraints that are not uniform
with respect to the security level. This suggests that the ultimate decisions that are
made may not be the “best” ones.

It is well known that probabilistic methods constitute powerful tools for use in many
kinds of decision-making problems. Therefore, today there is a great deal of interest in
using them to enhance the security-economy decision making problem. The US Western
Systems Coordinating Council (WSCC) is developing probabilistic based reliability
criteria [1]. A recent CIGRE report [2] recommended further study of probabilistic
security assessment methods, and an ongoing CIGRE task force, 38.02.21, is
implementing this recommendation. There was a panel session dedicated to this subject
at the 1999 PES Summer Meeting [3-6]. Another panel session at this same meeting
focused on risk-based dynamic security assessment [7-11]. The theme of most of this
work is that security level can be quantitatively assessed using a probabilistic metric.
Although the industry has not reached a conclusion regarding which probabilistic
metrics are best, there is consensus that using them has potential to improve analysis
and decision-making.

Despite the perceived drawbacks of the deterministic method and the perceived
promise of probabilistic methods, we believe it prudent to proceed carefully in
embracing probabilistic security assessment for operations. Therefore, the objective of
this chapter is to compare probabilistic security assessment with deterministic security
assessment. The comparison is made with respect to the assessment results of each
method. In order to retain simplicity, we focus on overload and low voltage security.
Voltage and transient instability will not be addressed, although we believe that our
general conclusions are applicable to all forms of security problems.

This chapter is organized as follows. Sections 2.2 and 2.3 summarize our
implementations of the deterministic and probabilistic approaches, respectively, to
security assessment. Section 2.4 uses a simple 5-bus system for illustration. Section 2.5
gives results for a contrived constrained interconnection within the IEEE Reliability Test
System (RTS). Section 2.6 provides interpretation and explanation regarding the
differences in the results and the significance of these differences. Section 2.7 concludes.

2.2 Deterministic Study Procedure


In deterministic security assessment, the decision is founded on the requirement that each outage event in the
contingency set results in system performance that satisfies the chosen performance evaluation criteria. These
assessments, typically involving large numbers of computer simulations, are defined by selecting a set of network

configurations (i.e., network topology and unit commitment), a range of system operating conditions, a list of outage
events, and the performance evaluation criteria. Study definition requires careful thought and insight because the
number of possible network configurations, the range of operating conditions, and the number of conceivable outage
events are each very large, and exhaustive study of all combinations of them is generally not reasonable.
Consequently, the deterministic approach has evolved within the electric power industry to minimize study effort yet
provide useful results. This approach depends on the application of two criteria during study development:
Credibility: The network configuration, outage event, and operating conditions are reasonably likely to occur.
Severity: The outage event, network configuration and operating condition on which the decision is based results in
the most severe system performance, i.e., there should be no other credible combination of outage event, network
configuration, and operating condition which results in more severe system performance.
In this chapter, we are explicitly interested in studies conducted for the purpose of
identifying operational limits for use by the operator. In this case, the study focuses on a
limited number of operating parameters such as flows on major transfer paths,
generation levels, or load levels for a specific season. We call these the study
parameters. Application of the deterministic approach consists of the following basic
steps:
1. Develop power flow base cases corresponding to the time period (year, season) and loading conditions (peak,
partial peak, off peak) necessary for the study. In each base case, the unit commitment and network topology are
selected based on the expected conditions for the chosen time period. The topology selected normally has all
circuits in service; here, credibility is emphasized over severity. Sometimes sensitivity studies are also performed if
weakened topologies are planned.
2. Select the contingency set. Normally this set consists of credible events for which post-contingency
performance could be significantly affected by the study parameters.
3. Identify the range of operating conditions, in terms of the study parameters, which are expected during the time
period of interest. We refer to this as the study range.
4. Identify the event or events that “first” violate the performance evaluation criteria as operational stress is
increased within the study range. We refer to these events as the limiting contingencies. If there are no such
violations within the study range, the region is not security-constrained, and the study is complete.
5. Identify the set of operating conditions within the study range where a limiting contingency “first” violates the
performance evaluation criteria. This set of operating conditions constitutes a line (for two study parameters), a
surface (for three) or a hypersurface (for more than three) that partitions the study range. We refer to this line,
surface, or hypersurface as the security boundary; it delineates between acceptable and unacceptable regions of
operation.
6. Condense the security boundary into a set of plots or tables that are easily understood and used by the operator.
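Steps 4 and 5 can be sketched numerically for two study parameters. The linear distribution factors below stand in for the power flow simulations of a real study and are purely illustrative; the coarse grid stands in for the stress increase of step 4.

```python
# Sweep two hypothetical study parameters (transfers P1, P2 on two paths,
# in MW) and locate the security boundary where some contingency first
# violates a 100 MW post-contingency flow limit on a monitored circuit.
LIMIT = 100.0
FACTORS = {"line A out": (0.60, 0.30),    # (dFlow/dP1, dFlow/dP2), assumed
           "line B out": (0.35, 0.55)}

def secure(p1, p2):
    """True if every limiting contingency satisfies the flow limit."""
    return all(f1 * p1 + f2 * p2 <= LIMIT for f1, f2 in FACTORS.values())

# Trace the boundary: for each P1, the largest secure P2 on a coarse grid.
boundary = []
for p1 in range(0, 161, 20):
    p2_max = max((p2 for p2 in range(0, 301, 10) if secure(p1, p2)),
                 default=None)
    boundary.append((p1, p2_max))
print(boundary)   # a piecewise-linear security boundary in the (P1, P2) plane
```

Which contingency is limiting changes along the boundary: line B's outage binds at low P1, line A's at high P1, giving the boundary its characteristic kinked shape.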

2.3 Probabilistic Study Procedure


In the probabilistic analysis performed in this work, we utilize a measure or index that
reflects the composite security level associated with the values of the chosen study
parameters. There are a number of different indices that could be chosen. We have
selected one that is reasonable, and it is described below. We think it unlikely that use
of alternative probabilistic indices in our study would significantly influence the
conclusions.

2.3.1 Modification to Analysis Steps


The probabilistic study procedure retains the 6 basic steps described in the preceding section. However, steps 4 and
5 are modified to read:
4. Evaluate the probabilistic index throughout the study range. Decide on a particular threshold level beyond
which the index value is unacceptable.
5. Identify the set of operating conditions within the study range that have an index evaluation equal to the
threshold level. This set of operating conditions constitutes the line (for two study parameters), a surface (for three)

or a hypersurface (for more than three) that partitions the study range. We refer to this line, surface, or hypersurface
as the security boundary; it delineates between acceptable and unacceptable regions of operation.

Remark 1: There are a number of methods by which one can make the decision associated with step 4. One simple
and cautious approach is to evaluate points on the deterministic security boundary and utilize one of these values as
the threshold.
Remark 2: In the next section, we propose using the product of probability and severity, or risk, as the probabilistic
index. In this case, step 5 results in a contour or surface of constant risk.
Remark 3: The fact that step 6 does not change means that the operator sees no difference in how the two
approaches are presented.

2.3.2 Description of Probabilistic Index


The difficulty of the security-economy problem is that there exists significant uncertainty associated with it.
Consider that we wish to assess the security level of a power system for the purpose of making a decision that will
be effective for one time period. Therefore the goal is to assess the effect of the decision made now on the conditions
in the next time period. Denote the future time by t, the corresponding operating conditions by Xt , the
corresponding forecasted operating conditions by Xt,f, and the contingency state by E. We desire to identify the
security level of the future time t. Based on the state estimation result of the current condition and the ability to
forecast, we assume an accurate forecasted operating condition can be generated. Yet, in the analysis of the future
time t, we encounter uncertainty associated with possible deviations from the forecast and in the contingency state.
These two forms of uncertainty are illustrated in Figure 2.1. We note that each state, represented by a circle,
corresponds to a particular power flow result. We call the state on the left the “initial state” and the states on the far
right the “terminal states.” The jth possible future operating condition is denoted by Xt,j, and the ith possible
contingency state is denoted by Ei.

Figure 2.1: Uncertainty due to operating conditions and contingency state

We compute the expectation of severity by summing over all possible outcomes the product of the outcome
probability and its severity. This measure corresponds to what has been called risk in many disciplines. In Figure
2.1, if we assign probabilities to each branch, then the probability of each terminal state is the product of the
probabilities assigned to the branches that connect the initial state to that terminal state. If we assign severity
values to each terminal state, the risk can be computed as the sum over all terminal states of their product of
probability and severity, i.e.,

Risk(Sev | Xt,f) = Σi Σj Pr(Ei) Pr(Xt,j | Xt,f) × Sev(Ei, Xt,j)        (eq. 2.1)
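The tree computation of eq. 2.1 can be sketched numerically. The branch probabilities and terminal-state severities below are hypothetical; severity here is simply a violation count.

```python
# Numerical sketch of eq. 2.1 over a small tree of terminal states.
# First-level branches: contingency states E_i; second level: possible
# operating conditions X_t,j around the forecast.
pr_E = {"no outage": 0.95, "line 1 out": 0.03, "line 2 out": 0.02}
pr_X = {"low": 0.25, "forecast": 0.50, "high": 0.25}   # Pr(X_t,j | X_t,f)

# Sev(E_i, X_t,j): severity of each terminal state (violation count).
sev = {("no outage", "low"): 0, ("no outage", "forecast"): 0,
       ("no outage", "high"): 0, ("line 1 out", "low"): 0,
       ("line 1 out", "forecast"): 1, ("line 1 out", "high"): 2,
       ("line 2 out", "low"): 0, ("line 2 out", "forecast"): 0,
       ("line 2 out", "high"): 1}

# Terminal-state probability = product of branch probabilities; risk = sum
# over terminal states of probability times severity.
risk = sum(pr_E[e] * pr_X[x] * sev[(e, x)] for e in pr_E for x in pr_X)
print(f"risk (expected severity): {risk:.4f}")
```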

Pr(Xt,j | Xt,f ) is the probability of operating condition Xt,j at time t given that the forecasted operating condition in
time period t is Xt,f. Assuming we can forecast these operating conditions very well, it is appropriate to model the
probability distribution of Xt,j given Xt,f with a normal distribution having a mean equal to the forecast. Under this
assumption, the voltages and branch flows of Xt,j given a contingency follow the Multi-Variate-Normal (MVN)
distribution [10,12,13]. In this paper, we only consider the risk caused by the bus low voltage and line overload, so
under this circumstance, eq. (2.1) changes to:
Risk(Sev | Xt,f) = ∑i=1..c Pr(Ei) × [ ∑j=1..b ∫ Sevlv(Vj) Pr(Vj | Ei, Xt,f) dVj
                                     + ∑k=1..l ∫ Sevol(Pk) Pr(Pk | Ei, Xt,f) dPk ]        (eq. 2.2)
where c, b, l are the total numbers of contingencies, buses, and branches, respectively. Pr(Vj | Ei, Xt,f) and
Pr(Pk | Ei, Xt,f) are the probability distributions of Bus j’s voltage and Branch k’s flow. Here, Pr(Ei) is the
probability of contingency i in the next time interval. The events Ei are assumed to be Poisson distributed, so that

Pr(Ei) = (1 − e^(−λi)) × e^(−∑j≠i λj)        (eq. 2.3)
Here, λ i is the occurrence rate of contingency i per time interval.
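Eq. (2.3) is easily evaluated in code. The following Python sketch uses hypothetical per-interval rates, obtained by scaling yearly outage rates down to an assumed one-hour interval; neither the interval length nor the rate values come from the report:

```python
import math

def contingency_probability(rates, i):
    """Pr(Ei) per eq. (2.3): the probability that contingency i occurs in
    the next interval while no other contingency does, with the events
    Poisson distributed at per-interval occurrence rates `rates`."""
    lam_i = rates[i]
    lam_others = sum(lam for j, lam in enumerate(rates) if j != i)
    return (1.0 - math.exp(-lam_i)) * math.exp(-lam_others)

# Hypothetical per-hour rates, scaled from yearly outage rates:
rates = [0.4 / 8760, 0.1 / 8760, 1.4 / 8760]
probs = [contingency_probability(rates, i) for i in range(len(rates))]
```

Note that for the small rates typical of one-hour intervals, Pr(Ei) is approximately λi, since both exponentials are close to 1.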

2.3.3 Severity Function


The individual severity functions for low voltage (Sevlv) and overload (Sevol) quantify the severity, impact,
consequence, or cost of a violation at the corresponding bus or circuit. They are generally uncertain (e.g., what is the
cost of a 104% overload?), but there are ways to quantify them very simply. We adopt two of these ways for the
purposes of our analysis here.
Discrete Severity Function: Sevol is assigned 1 if the flow on a circuit exceeds its rating and 0 otherwise. Sevlv is
assigned 1 if the bus voltage falls below its limit and 0 otherwise. Therefore, when discrete severity functions are
used, the risk computed by eq. (2.1) gives the expected number of violations.
Continuous Severity Function: The discrete severity functions do not measure the extent of the violation. For
example, these functions would not capture the difference between a 101% overload and a 110% overload, although
clearly the latter is more severe. To measure the extent of a violation, we use continuous severity functions. These
functions, for overload and for low-voltage, are illustrated in Figure 2.2. For each circuit and bus, these severity
functions evaluate to 1.0 at the deterministic limits and increase linearly as conditions exceed these limits.

Figure 2.2: Overload and Low-voltage Continuous Severity Functions


Other severity functions are clearly possible, and we do not claim that the ones chosen here are best. Rather, we
simply recognize that they are simple, yet they allow illustration of the basic features of probabilistic assessment.
This is satisfactory, since the purpose of this chapter is to compare probabilistic assessment to deterministic
assessment.
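The two severity-function choices can be sketched as follows. The breakpoints at which the continuous functions become positive (90% of a flow rating; 0.05 pu above the voltage limit) are illustrative assumptions, since Figure 2.2 fixes only the value 1.0 at the deterministic limit and the linear growth beyond it:

```python
def sev_discrete_ol(flow, rating):
    """Discrete overload severity: 1 if the circuit flow exceeds its rating."""
    return 1.0 if flow > rating else 0.0

def sev_discrete_lv(v, limit=0.95):
    """Discrete low-voltage severity: 1 if the bus voltage is below its limit."""
    return 1.0 if v < limit else 0.0

def sev_continuous_ol(flow, rating):
    """Continuous overload severity: zero up to 90% of the rating (an
    illustrative breakpoint), then linear, reaching 1.0 at the rating."""
    start = 0.9 * rating
    return 0.0 if flow <= start else (flow - start) / (rating - start)

def sev_continuous_lv(v, limit=0.95, margin=0.05):
    """Continuous low-voltage severity: zero above limit+margin (an
    illustrative breakpoint), then linear, reaching 1.0 at the limit."""
    start = limit + margin
    return 0.0 if v >= start else (start - v) / margin
```

The continuous functions keep growing past 1.0 as the violation deepens, which is what lets the risk index distinguish a 110% overload from a 101% overload.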
2.3.4 Uncertainty models
We may address the two kinds of uncertainty associated with this problem, uncertainty of operating conditions and
uncertainty of contingencies, at two levels of modeling complexity:

Uncertainty Model 1
We consider only the uncertainty of contingencies and not the uncertainty of operating conditions. This means we
assume that the mean of Xt equals the forecasted value and that its variance equals 0, implying that the forecast has
no error. This assumption is reasonable if the unit time interval is small. Under this condition, the bus voltages and
branch flows under operating condition Xt, given a contingency, are certain values, so the total risk can be obtained
from a simplified form of eq. (2.2), i.e.,
Risk(Sev | Xt,f) = ∑i=1..c Pr(Ei) × [ ∑j=1..b Sevlv(Vj | Ei, Xt,f) + ∑k=1..l Sevol(Pk | Ei, Xt,f) ]        (eq. 2.4)

Uncertainty Model 2

In this model, we consider both the uncertainty of operating conditions and the uncertainty of contingencies. This
means Xt follows a normal distribution with a corresponding variance and a mean equal to the forecast, so we need
to use eq. (2.2) to calculate the risk.
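Under uncertainty model 1, the risk of eq. (2.4) is simply a probability-weighted sum of post-contingency severities. A minimal Python sketch, using hypothetical contingency probabilities, hypothetical post-contingency states, and discrete severity functions (all values illustrative, not from the report):

```python
def risk_model1(contingencies, sev_lv, sev_ol):
    """Eq. (2.4): sum over contingencies i of Pr(Ei) times the summed
    severities of all post-contingency bus voltages and branch flows."""
    total = 0.0
    for prob, voltages, flows in contingencies:
        total += prob * (sum(sev_lv(v) for v in voltages)
                         + sum(sev_ol(p) for p in flows))
    return total

# Discrete severity functions (Section 2.3.3) and hypothetical states:
sev_lv = lambda v: 1.0 if v < 0.95 else 0.0   # voltage in pu
sev_ol = lambda p: 1.0 if p > 1.0 else 0.0    # flow in per unit of rating
states = [
    (1e-4, [0.96, 0.93], [0.80, 1.05]),  # one low voltage, one overload
    (5e-5, [0.97, 0.98], [0.90, 0.95]),  # no violations
]
risk = risk_model1(states, sev_lv, sev_ol)
```

With discrete severities, the result is the expected number of violations in the next interval; swapping in continuous severity functions would weight each violation by its extent.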

2.4 Case Study for Five Bus Test Case

2.4.1 Steps 1, 2, 3 for Deterministic and Probabilistic Studies


In this section, a 5-bus test system is used for the comparison. As this system is simple, we can obtain some obvious
comparison results that serve to validate the calculations. Figure 2.3 shows the system. There are 5 buses and 8
branches (including transformers) in this system. The study parameters in this case are real power load of Bus 3 (P3)
and real power load of Bus 4 (P4).
We describe the first three steps of the assessment procedure since they are common to both deterministic and to
probabilistic approaches. In step 1, the analyst constructs the base case according to the expected system conditions.
Bus 1 is the swing bus (V=1.1) and Bus 5 is a PV bus (V=1.0); generators are connected at both buses. We
assume that possible security violations exist only for low-voltage problems at buses 2, 3, 4 and overload problems
on lines 1, 2, 5, 6, 7.
In step 2, we select the contingency set as the N-1 contingencies that might cause violations in the respective
components. Because Line 1 and Line 2 are parallel lines with the same parameters, the contingency set includes
only the outage of Line 1. However, we account for the outage of Line 2 in the probability index calculation by
setting the outage rate of Line 1 to twice its actual outage rate; this is equivalent to considering the outages of both
Line 1 and Line 2. For the same reason, we include only the outage of Line 6 in the contingency set. The
contingency set thus includes three contingencies:
1) Line 1 outage, its yearly outage rate is 0.4;
2) Line 5 outage, its yearly outage rate is 0.1;
3) Line 6 outage, its yearly outage rate is 1.4.
Step 3 requires identification of the parameter ranges. They are:

1) Real power load of Bus 3 (X axis): varied from 5 to 280 MW;
2) Real power load of Bus 4 (Y axis): varied from 1 to 120 MW.

A simple dispatch policy is assumed in which the generators at buses 1 and 5 compensate for 85% and 15%,
respectively, of all load variation.
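This dispatch policy can be sketched as follows; only the 85%/15% participation factors come from the study, while the base-case MW figures below are hypothetical:

```python
def redispatch(p3, p4, base_load, base_g1, base_g5):
    """Allocate any deviation of total study-parameter load (P3 + P4) from
    the base case to the generators at buses 1 and 5, with participation
    factors of 85% and 15%, respectively."""
    delta = (p3 + p4) - base_load
    return base_g1 + 0.85 * delta, base_g5 + 0.15 * delta

# Hypothetical base case: 160 MW of load served by the two generators.
g1, g5 = redispatch(p3=200.0, p4=60.0, base_load=160.0,
                    base_g1=120.0, base_g5=45.0)
```

Since the two factors sum to one, total generation tracks total load exactly (losses aside), so every point in the P3–P4 study plane maps to a unique dispatch.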


Figure 2.3: Five bus test system

2.4.2 Steps 4, 5 for Deterministic Method

The task of step 4 is to identify limiting contingencies. The performance evaluation criteria are:
• Post-contingency bus voltages should be at least 0.95 pu.
• Pre-contingency circuit flow should not exceed the circuit’s continuous rating.
• Post-contingency circuit flow should not exceed the circuit’s emergency rating.
Power flow analysis indicates that there are, within the study range, three violations of the
criteria.

1. Post-contingency overload of Line 2 due to contingency Line 1 outage.

2. Post-contingency under-voltage of Bus 4 due to contingency Line 5 outage.
3. Post-contingency overload of Line 7 due to contingency Line 6 outage.
In step 5, we identify the security boundary in the space of the study parameters. Figure
2.4 illustrates the deterministic security boundary (bold lines).

Figure 2.4: Deterministic Security Boundary

2.4.3 Steps 4, 5 for Probabilistic Method


In step 4, we evaluate the probabilistic index, risk, within the study region. Three
different illustrations are provided, resulting in the risk indices as displayed in Figures
2.5, 2.6 and 2.7:
• Figure 2.5: Discrete severity functions were used with uncertainty model 1
(contingency uncertainty only);
• Figure 2.6: Continuous severity functions were used with uncertainty model 1
(contingency uncertainty only);
• Figure 2.7: Continuous severity functions were used with uncertainty model 2
(contingency and operating condition uncertainty).
From these figures, we can make the following observations:
Observation 1: When using the discrete severity function with uncertainty model 1, we
obtain zero risk inside the deterministic boundary, as there are no violations inside the
boundary. Figure 2.5a illustrates, however, that at and outside the deterministic
boundary, risk varies significantly. Some particularly important features to note in
Figure 2.5a are:
At the boundary: The deterministic boundary, indicated by the bold line, exhibits significant risk variation. This is
illustrated in Figure 2.5b, which shows the risk level for points A – E along the deterministic boundary. The risk of
points B and D differs from that of C because points B and D are located at the intersection of two different
deterministic constraints and therefore incur risk from both of them, whereas point C, like points A and E, incurs
risk from only one deterministic constraint. From Figures 2.5a and 2.5b, we conclude that the risk along the
deterministic boundary varies significantly.
Outside the boundary: The risk takes 6 different values outside the boundary, with each different risk area separated
by the deterministic constraints. This is important, as it indicates that exceeding the deterministic boundary in one
direction may incur significantly different risk than exceeding it in another.

Figure 2.5a: Risk Indices with Discrete Severity Functions & Uncertainty Model 1
(contingency uncertainty only)

Figure 2.5b: Risk Level for Points A – E along the Deterministic Boundary

Observation 2: Use of the continuous severity functions results in continuous variation in risk throughout the
operating range. This means that contours of constant risk, iso-risk curves, may be identified, as in Figures 2.6 and
2.7, where we observe:
a) Although small, risk is non-zero inside the deterministic boundary. This is caused, for both figures, by the
   particular selection of severity function (see Figure 2.2), which returns a positive quantity for values close to
   but within the line flow or low voltage limits. In addition, Figure 2.7 has a non-zero risk contribution as a
   result of modeling uncertainty in operating conditions, since the risk evaluation of a point inside the
   deterministic boundary is also affected by the system performance for points outside the boundary.
b) The risk increases continuously as the operating conditions become more stressed.
This is caused by the fact that the severity functions are continuously increasing
with stress. This is also the reason why, for a particular operating condition, the risk
index of Figure 2.6 is higher than that of Figure 2.5a.

Observation 3: The influence of contingency probability is also apparent in Figure 2.6, where the 0.001 risk
contour indicates that points B, C and D are significantly higher-risk points than points A and E, consistent with
Figure 2.5b.

Figure 2.6: Risk Indices with Continuous Severity Functions & Uncertainty Model 1
(contingency uncertainty only)

Figure 2.7: Risk Indices with Continuous Severity Functions & Uncertainty Model 2 (operating condition and
contingency uncertainty)

2.5 Case Study for IEEE RTS

2.5.1 Steps 1, 2, 3 for Deterministic and Probabilistic Studies

In this section, we use a modified version of the IEEE Reliability Test System (RTS) [14]
for the comparison. Figure 2.8 shows the system. As indicated in this figure, the system
has been divided into three areas. The basic idea is that significant north-to-south
transfer causes high flow through area 2 and the interconnections between areas 1 and
3, and it aggravates corresponding overload and voltage problems. Area 2 can
alleviate the severity of these problems by shifting generation from its bus 23 to its bus
13. Thus the study parameters are the total north-to-south flow and the bus 23
generation. The parameters are varied according to:
∆P23 = −∆P13        (eq. 2.5)
∆Parea3 = −∆Parea1        (eq. 2.6)
We describe the first three steps of the assessment procedure since they are common to both deterministic and
probabilistic approaches.

Figure 2.8: Modified IEEE RTS ‘96

In step 1, the analyst constructs the base case according to the expected system
conditions. In this case, since we use a well-known test system, we describe only the
changes that were made from the data reported in [14]. These changes were made so as
to contrive a security-constrained region and include:
• Line 11~13 is removed.
• Set terminal voltage of the Bus 23 generator to 1.012pu and Bus 15 to 1.045pu.
• Shift 480 MW of load from buses 14, 15, 19, 20 to bus 13;
• Add generation capacity at buses 1 (100 MW unit), 7 (100 MW unit), 15 (100 MW unit, 155 MW unit), 23 (155
MW unit).
• Change the outage rates of Lines 12~23, 13~23, and 11~14 to 0.1, 1.5, and 10, respectively, so that their
outage rates differ significantly.

In step 2, the contingency set is limited to N-1 contingencies anywhere in the system
that might cause overload or voltage problems limiting the north-to-south transfer.
This set includes:
• Circuit outages:
12~23 out; 13~23 out; 12~13 out; 15~24 out; 14~11 out; 20~23 out; 14~16 out; 12~9 out; 12~10 out
• Generator outages:
350 MW unit at bus 23; 197 MW unit at bus 13;
400 MW unit at bus 21; 100 MW unit at bus 7
Step 3 requires the identification of the parameter ranges. They are:
1. Generation at bus 23: 303 MW ~ 903 MW.
2. North-South flow (i.e. combined active power flow on lines 15 ⇒ 24, 14 ⇒ 11, 23 ⇒ 12 and 13 ⇒ 12): 455
MW ~ 1100 MW.

2.5.2 Steps 4, 5 for Deterministic Method

The task of step 4 is to identify limiting contingencies. The performance evaluation criteria are:
• Post-contingency bus voltages should be at least 0.95 pu.
• Pre-contingency circuit flow should not exceed the circuit’s continuous rating.
• Post-contingency circuit flow should not exceed the circuit’s emergency rating.
Power flow analysis indicates that there are, within the study range, four violations of the criteria. They are:

1. Post-contingency overload of line 13~23 due to contingency 12~23 outage.
2. Post-contingency under-voltage of bus 12 due to contingency 13~23 outage.
3. Post-contingency overload of line 12~23 due to contingency 13~23 outage.
4. Post-contingency under-voltage of bus 24 due to contingency 11~14 outage.
In step 5, we identify the security boundary in the space of the study parameters. Figure
2.9 illustrates the deterministic security boundary (bold lines).

Figure 2.9: Deterministic Security Boundary

2.5.3 Steps 4, 5 for Probabilistic Method


In step 4, we evaluate the risk indices within the study region. Three different
illustrations are provided, similar to those given in Section 2.4.3, resulting in the risk
indices displayed in Figures 2.10, 2.11 and 2.12:
• Figure 2.10: Discrete severity functions were used with uncertainty model 1
(contingency uncertainty only);
• Figure 2.11: Continuous severity functions were used with uncertainty model 1
(contingency uncertainty only);
• Figure 2.12: Continuous severity functions were used with uncertainty model 2
(contingency and operating condition uncertainty).
From these figures, we make observations similar to those made in Section 2.4.3:
Observation 1: When using the discrete severity function under uncertainty model 1, we
obtain zero risk inside the deterministic boundary, as there are no violations in that
region. Figure 2.10 also illustrates that at and outside the deterministic boundary, risk
varies significantly.

Figure 2.10: Risk Indices with Discrete Severity Functions and Uncertainty Model 1
(contingency uncertainty only)
Observation 2: From Figures 2.11 and 2.12, we observe that the use of the continuous
severity functions results in continuous variation in risk throughout the operating
range. The iso-risk curves in Figures 2.11 and 2.12 are consistent with the risk value
change along the deterministic boundary in Figure 2.10.
Observation 3: The use of continuous severity functions causes the non-zero risk inside
the deterministic boundary of Figure 2.11, and it is a contributing reason for the non-zero
risk inside the deterministic boundary of Figure 2.12. It is also part of the reason why,
for a particular operating condition inside the deterministic boundary, the risk index of
Figure 2.12 is higher than that of Figure 2.11. In Figure 2.12, the non-zero risk inside the
deterministic boundary is also caused by the modeling of uncertainty in operating
conditions, since the risk evaluation of a point inside the deterministic boundary is also
affected by the system performance for points outside the boundary.

Figure 2.11: Risk Indices with Continuous Severity Functions & Uncertainty Model 1
(contingency uncertainty only)
Observation 4: Comparing Figure 2.11 with Figure 2.10, we can see that the risk value based on the continuous
severity function is larger than that based on the discrete severity function.

Figure 2.12: Risk Indices with Continuous Severity Functions & Uncertainty Model 2
(contingency and operating condition uncertainty)

2.6 Discussion
Based on the analysis in the last two sections, we observe that the deterministic
boundary does not necessarily result in constant risk, and that there are a number of
subtle influences captured by the iso-risk curves that are not captured by the
deterministic approach:
1. Effect of outage probability.

The deterministic approach assumes all contingencies (in the contingency set) are
equally probable, but the probabilistic approach distinguishes between them. Thus,
there may be some situations where a deterministic violation is in fact very low risk
because the outage probability is extremely low. There may be other situations where a
deterministic violation contributes very high risk because of a very high outage
probability.
2. Effect of non-limiting events and problems
The deterministic approach assesses only the most restrictive contingencies and corresponding problems; i.e., it does
not recognize the influence on security level of less restrictive contingencies or problems. The probabilistic
approach, on the other hand, does capture the increased risk caused by multiple constraints, as it sums the risk
associated with all contingencies and problems. That is, the probabilistic approach is capable of composing risk
from multiple events and multiple problems, and it reflects the total composite risk, not simply that from the single
most restrictive event.
3. Effect of Violation Severity
The deterministic approach considers all violations unacceptable; this implies that all violations are equally
severe. The probabilistic approach, by contrast, distinguishes between different severities. Thus, there may be some situations
where a deterministic violation contributes in fact very low risk because the violation severity is extremely low.
There may be other situations where a deterministic violation contributes very high risk because of a very high
violation severity.
4. Effect of uncertainty in operating conditions
The deterministic approach cannot address uncertainty in operating conditions, which is a practical and unavoidable
problem when assessing security for a future time. This influence is especially important when small variations in
operating conditions cause large deviations in performance.

2.7 Conclusion
The study reported in this chapter has compared the traditional deterministic security
assessment approach, as used for many years in industry, with an alternative approach
based on probabilistic risk. Although deterministic assessment is simple in concept and
application, results based on it can be misleading, as it does not capture the effect of
outage likelihood, non-limiting events and problems, violation severity, and uncertainty
in operating conditions. These effects can significantly influence the risk evaluation of a
near-future operating condition. Given the high frequency of stressed conditions
observed in many control centers today, it is clear that on-line control is a continuous
decision-making problem for the operator. We believe that the probabilistic risk based
security evaluation approach will serve well in this kind of environment.

References
[1] Lester H. Fink, “Security: its meaning and objectives”, Proc. of the Workshop on Power System Security
Assessment, pp. 35–41, Ames, Iowa, April 27–29, 1988.
[2] Mohammed J. Beshir, “Probabilistic based transmission planning and operation criteria development for
the Western Systems Coordinating Council”, Proc. of the 1999 IEEE PES summer meeting, presented at
the 1999 IEEE PES summer meeting panel session on Risk-Based Dynamic Security Assessment,
Edmonton, Canada.
[3] CIGRE task force 38.03.12, “Power System Security Assessment: A Position Paper”, June 1997.
[4] Y. Schlumberger, C. Lebrevelec, M. de Pasquale "An Application of a Risk Based Methodology for
Defining Security Rules Against Voltage Collapse", Proc. of the 1999 IEEE PES summer meeting,
presented at the 1999 IEEE PES summer meeting panel session on Risk-Based Dynamic Security
Assessment, Edmonton, Canada.
[5] Abed, "WSCC Voltage Stability Criteria, Undervoltage Load Shedding Strategy, and Reactive Power
Reserve Monitoring Methodology", Proc. of the 1999 IEEE PES summer meeting, presented at the 1999

IEEE PES summer meeting panel session on Risk-Based Dynamic Security Assessment, Edmonton,
Canada.
[6] "Dynamic Security Risk Assessment", A.M. Leite da Silva, J. Jardim, A.M. Rei, J.C.O. Mello, Proc. of the
1999 IEEE PES summer meeting, presented at the 1999 IEEE PES summer meeting panel session on Risk-
Based Dynamic Security Assessment, Edmonton, Canada.
[7] J. Momoh, M. Elfayoumy, W. Mittelstadt, Y. Makarov,"Probabilistic Angle Stability Index", Proc. of the
1999 IEEE PES summer meeting, presented at the 1999 IEEE PES summer meeting panel session on Risk-
Based Dynamic Security Assessment, Edmonton, Canada.
[8] S. Aboreshaid, R. Billinton, "A Framework for Incorporating Voltage and Transient Stability
Considerations in Well-Being Evaluation of Composite Power Systems", Proc. of the 1999 IEEE PES
summer meeting, presented at the 1999 IEEE PES summer meeting panel session on Risk-Based Dynamic
Security Assessment, Edmonton, Canada.
[9] J. McCalley, V. Vittal, N. Abi-Samra, "An Overview of Risk Based Security Assessment", Proc. of the
1999 IEEE PES summer meeting, presented at the 1999 IEEE PES summer meeting panel session on Risk-
Based Dynamic Security Assessment, Edmonton, Canada.
[10] J. McCalley, V. Vittal, H. Wan, Y. Dai, N. Abi-Samra,"Voltage Risk Assessment", Proc. of the 1999 IEEE
PES summer meeting, presented at the 1999 IEEE PES summer meeting panel session on Risk-Based
Dynamic Security Assessment, Edmonton, Canada.
[11] V. Vittal, J. McCalley, V. Van Acker, W. Fu, N. Abi-Samra, "Transient Instability Risk Assessment", Proc.
of the 1999 IEEE PES summer meeting, presented at the 1999 IEEE PES summer meeting panel session on
Risk-Based Dynamic Security Assessment, Edmonton, Canada.
[12] George Casella, Roger L. Berger, "Statistical Inference", Pacific Grove, Calif.: Brooks/Cole Pub. Co.
c1990.
[13] H.Wan, “Risk-base security assessment for operating electric power systems”, Ph.D. dissertation, Iowa
State University, 1998.
[14] IEEE reliability test system task force of the application of probability methods subcommittee, “The IEEE
reliability test system – 1996”, IEEE Transactions on Power Systems, vol. 14, no. 3, 1999, pp. 1010–1018.

3
RISK BASED OPTIMAL POWER FLOW

3.1 Introduction

The purpose of an optimal power flow (OPF) is to schedule power system controls to
optimize an objective function while satisfying a set of nonlinear equality and
inequality constraints. The scheduling of these controls is actually a decision-making
effort, and as such, we recognize the OPF as a fundamental decision-making tool for
power system engineers. In this chapter, we explore ways of using probabilistic risk to
improve on the traditional OPF. We call the result of these efforts the risk-based OPF
(RB-OPF). We will see that a significant difference between the OPF and the RB-OPF lies
in the nature of the constraints used in the problem.

Examples of the equality and inequality constraints used in a traditional OPF include
generation/load balance, bus voltage limits, power flow equations, branch flow limits
(including both transmission line and transformer), active/reactive reserve limits, and
limits on all control variables [1]. The following is a simplified deterministic OPF
problem with no discrete variables or controls [2].

min ∑i f(Pgi)        (eq. 3-1)

subject to

Pi − ∑j Vi Vj Yij cos(θij + δj − δi) = 0
Qi + ∑j Vi Vj Yij sin(θij + δj − δi) = 0
|Sij| ≤ Sij,max
Vi,min ≤ Vi ≤ Vi,max
Pgi,min ≤ Pgi ≤ Pgi,max
Qgi,min ≤ Qgi ≤ Qgi,max

where Yij and θij are the magnitude and angle of element (i,j) of the bus admittance matrix.

Within this chapter, we will refer to the above problem as problem 0. The objective
function is the total cost of real power generation. The first two constraints are the power
flow equations. The third is the branch power flow limit constraint, the fourth is the bus
voltage constraint, and the last two are the active and reactive power generation constraints.
The traditional security constrained OPF (SCOPF) should also include constraints that
represent operation of the system after N-1 contingency outages, where the system is
operated so that if a contingency occurs, the resulting branch flow and bus voltages
would still be within the emergency voltage and emergency thermal limits prior to
system readjustment [1,3]. In order to include these constraints and avoid heavy
computation, a set of credible contingencies [1] is formed, and corresponding post-
outage constraints are added to the OPF constraints.

Problem 0 uses deterministic constraints. The deterministic method provides a
straightforward approach to operational decision-making: optimize economy within
hard constraints of the secure operational region. It is the simplicity of this approach
that has made the deterministic method so useful in the past. Today, however, with the
industry's emphasis on economic competition, and with the associated increased
network vulnerability, there is a growing recognition that this simplicity also carries
with it significant subjectivity, and this can result in constraints that are not uniform
with respect to the security level, as illustrated in Chapter 2. This suggests that the
ultimate decisions that are made may not be the “best” ones [5].

It is well known that probabilistic methods constitute powerful tools for use in many
kinds of decision-making problems. Therefore, today there is a great deal of interest in
using them to enhance the security-economy decision-making problem. The risk-based
optimal power flow (RB-OPF) assumes that the power demand at each bus is random and
normally distributed with the forecasted value as its mean and some variance. Credible
contingencies are also taken into account by incorporating them into component risk
functions, which are used to replace the traditional deterministic constraints. There are
three ways to form the risk-based OPF: set an individual risk limit on each component,
set an overall system risk limit, or treat the system risk as part of the objective.
In Section 3.2, the system composite risk assessment for thermal overload and bus
voltage out-of-limit is developed. Section 3.3 gives the new formulations of the risk-based
OPF. Section 3.4 describes the algorithms used in this study. Section 3.5 gives
some case studies. Conclusions are drawn in Section 3.6.

3.2 System Composite Risk Assessment


In this section, we show how to conduct risk based system security assessment for
thermal overload and bus voltage out-of-limit using a probabilistic load flow and
component risk functions [6].
3.2.1 Probabilistic Load Flow

Many studies have been done to develop a probabilistic load flow [7-13]. The first paper
dealing with probabilistic load flow was published in 1974 by Borkowska [10], in which
it is assumed that branch flows are linear combinations of net nodal injections, and that
power balance is a function of the sum of power injections only (i.e., no losses). Dopazo
et al. introduced the concept of stochastic load flow [11], commonly referred to as the
AEP approach, which assumed a linearized power flow with additive zero-mean noise
and some covariance matrix. The estimation task is carried out using a weighted
least-squares minimization objective. The solution is obtained using iterative
techniques to solve the resulting optimality conditions.

In this study, we use a simplified AEP approach [7,9]. The approach is based on
following assumptions:
1. All bus loads, branch flows, and bus voltage magnitudes are normally
distributed.
2. A linearized model of the system can be used around the expected value of the
bus loads.

The above assumptions have been shown to be reasonably accurate [7], and they result
in great simplification of the computational procedure [11].

Let

− PL be the vector of active parts of the bus loads

− QL be the vector of reactive parts of the bus loads

− P̄L, Q̄L be the expected values of PL and QL

− Cpp = E[(PL − P̄L)(PL − P̄L)ᵀ] be the covariance matrix of the active bus loads

− Cqq = E[(QL − Q̄L)(QL − Q̄L)ᵀ] be the covariance matrix of the reactive bus loads

− Cpq = E[(PL − P̄L)(QL − Q̄L)ᵀ] be the covariance matrix between active and reactive bus loads

− CPQ be the covariance matrix of the bus loads:

        | Cpp   Cpq |
  CPQ = | Cpqᵀ  Cqq |

− T be the vector of branch flows

− V be the vector of bus voltage magnitudes

− T̄ be the vector of branch flows at the operating point (assuming the system load to be
  equal to its expected value)

− V̄ be the vector of bus voltage magnitudes at the operating point (assuming the system
  load to be equal to its expected value)

− ∆T = T − T̄, ∆V = V − V̄

− ∆PL = PL − P̄L, ∆QL = QL − Q̄L

− A be the Jacobian matrix of the linearized model:

      | ∂Tij/∂PL   ∂Tij/∂QL |
  A = | ∂Vk/∂PL    ∂Vk/∂QL  |

The derivation of the Jacobian matrix A for the linearized model can be found in [9].

The linearized model of branch flows and bus voltages versus bus loads is

[∆T; ∆V] = A × [∆PL; ∆QL]

By using the linearized model, we obtain

E[∆T; ∆V] = 0        (eq. 3-2)

Cov[∆T; ∆V] = A × CPQ × Aᵀ        (eq. 3-3)
The probabilistic load flow algorithm used in this study is as follows [9]:

Step 1: Solve a deterministic load flow assuming the loads are equal to their expected
values P̄L and Q̄L. We obtain the expected values of the branch flows, T̄, and
the bus voltages, V̄.

Step 2: Compute the matrices A and CPQ of the linearized model

Step 3: Compute the covariance matrix of the branch flows and bus voltages
using eq. (3-3). This calculation, together with the expected values
found in step 1, provides the information necessary to characterize distributions
for the branch flows and the bus voltages.
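Steps 2 and 3 of this algorithm reduce to a single matrix product. A sketch with one branch flow and one bus voltage, using hypothetical sensitivity and covariance entries (not values from the report's test systems):

```python
import numpy as np

# Linearized sensitivities [dT/dPL, dT/dQL; dV/dPL, dV/dQL] (hypothetical):
A = np.array([[0.60, 0.10],
              [-0.02, -0.08]])

# Bus-load covariance matrix C_PQ (hypothetical):
C_PQ = np.array([[25.0, 2.0],
                 [2.0, 9.0]])

# Eq. (3-3): covariance of [dT; dV] is A * C_PQ * A^T.
C_TV = A @ C_PQ @ A.T

sigma_T = float(np.sqrt(C_TV[0, 0]))  # std. dev. of the branch flow
sigma_V = float(np.sqrt(C_TV[1, 1]))  # std. dev. of the bus voltage
```

Together with the expected values from the deterministic load flow of step 1, sigma_T and sigma_V characterize the assumed normal distributions of the branch flow and bus voltage.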

3.2.2 Risk Assessment of Thermal Overload and Bus Voltage Out-of-limit


In this section, component risk functions are introduced for risk assessment of system
thermal overload and bus voltage out-of-limit security. These functions provide the risk
for a circuit or bus given only the flow or voltage of that circuit or bus, respectively. We
describe rather highly developed component risk functions in what follows. However,
these functions may also be estimated in a very simple fashion, as we did in Chapter 2
(see Figure 2.2).

3.2.2.1 Thermal Overload Risk


Here, the thermal overload risk of transmission lines and transformers is assessed.
Transmission line thermal risk
We have developed a method that combines probability and impact calculation to
provide the risk of transmission line overload [6,14]. The risk, which indicates the
expectation of the cost consequence associated with thermal overload, may be
effectively used to make decisions regarding the line loading and system operation. As
described in [6], a given flow through a transmission line may result in thermal
overload of the conductor, and hence, result in related physical damages or circuit
opening. The cost consequence or the impact of this thermal overload has been
presented in detail in [14]. Moreover, it has been discussed that the possible overload
depends not only on the given line flow, but also on the ambient weather around the
transmission line. An expected value of the overload impact using the conductor
thermal model and the probabilistic description of ambient weather, is calculated as the
component level overload risk for the given line under the given load flow. This
procedure can be repeated under various line flows such that a “Risk vs. Flow” curve is
created for a transmission line. The component assessment encapsulates the detailed
impact calculation and thermal model of a transmission line into its final risk-flow
curve such that on the system side, given that one has this curve, one can determine the
expected monetary loss due to the line loading directly from its line flow.
The Risk-Flow curve is created on a line-by-line basis. Each transmission line has its
own Risk-Flow curve based on its local weather conditions, physical properties, and
system conditions. Figures 3-1 and 3-2 show examples of such curves developed in [6].

Figure 3-1 Risk-Flow Curve for 138kV Line

Figure 3-2 Risk-Flow Curve for 230kV Line

Transformer thermal risk


We have developed a method that combines probability and impact calculation to
provide the risk of transformer overload [6,15]. As described in [6], a given flow
through a transformer may result in elevation of the temperature of winding and
insulation, and hence, bring about possible loss of life and equipment damage on the
transformer. The elevation of temperature is dependent on the uncertain ambient
weather conditions. Thus, an expected monetary cost consequence of transformer
overload is calculated as the component overload risk by the probabilistic description of
ambient weathers. This procedure is repeated under various flows such that a Risk-
Flow curve is created for a transformer.

Similar to the component assessment of a transmission line, a transformer's Risk-Flow
curve encapsulates the detailed internal thermal model and probabilistic impact
calculation, such that on the system side one can determine the expected monetary loss
directly from the transformer loading level, without any knowledge of the intrinsic
properties of the transformer.

The Risk-Flow curve is created for each transformer based on its local weather
conditions and physical properties. Figure 3-3 shows an example of such a curve
developed in [6].

Figure 3-3 Risk-Flow Curve for 400MVA Transformer

When developing Risk-Flow curves, one needs to account for the impact of cascading.
A rough way to do so is to assign a very high cost to the cascading cost component at
high flows. This approach could be refined by using one Risk-Flow curve that accounts
only for the impact on the circuit itself and a second curve that accounts for the
cascading impact on the system; the latter curve would then depend on system
conditions. In this study, however, we did not consider the effect of cascading.
Thermal limit risk in probabilistic load flow
In the above component level risk calculations for transmission lines and transformers,
the branch flows are given deterministically. In the probabilistic load flow, since the
load is uncertain, the branch flow is also uncertain. If we define the component risk for
a given flow on branch i (line or transformer) as Risk(Si), the system risk for branch i,
RiskTi(Si), is given as the expectation of the component risk over the uncertain flow on
branch i, i.e.,


RiskTi(Si) = ∫_{−∞}^{+∞} Pr(Si) Risk(Si) dSi        (eq. 3-4)

where

− Si is the load flow on branch i.

− Pr(Si) is the probability distribution of load flow on branch i.
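The expectation in (eq. 3-4) can be sketched numerically. In the snippet below, the piecewise-linear Risk-Flow curve and the Gaussian flow parameters are illustrative assumptions, not values from this report:

```python
import numpy as np

def branch_risk(flow_mean, flow_std, risk_curve, n=4001):
    """Evaluate (eq. 3-4): the expectation of a component Risk-Flow
    curve over a Gaussian branch-flow distribution, by Riemann sum."""
    s = np.linspace(flow_mean - 6 * flow_std, flow_mean + 6 * flow_std, n)
    ds = s[1] - s[0]
    pdf = np.exp(-0.5 * ((s - flow_mean) / flow_std) ** 2) / (flow_std * np.sqrt(2 * np.pi))
    return float(np.sum(pdf * risk_curve(s)) * ds)

# Hypothetical Risk-Flow curve: no risk below 150 MVA, rising linearly above it.
curve = lambda s: 2.0 * np.maximum(0.0, s - 150.0)  # $/hr

# Even with an expected flow below 150 MVA, the tail of the flow
# distribution above 150 MVA produces a nonzero expected risk.
print(branch_risk(flow_mean=140.0, flow_std=10.0, risk_curve=curve))
```

This mirrors the point made in the text: the expected risk can be nonzero even when the expected flow itself is below the knee of the Risk-Flow curve.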

3.2.2.2 Bus voltage out-of-limit risk


End users of electricity may be interrupted under out-of-limit voltage []. Different load
classes have different distributions of voltage tolerance and interruption cost. Under a
given bus voltage, an expected monetary impact on customers due to service
interruption is calculated as the component voltage risk at a bus based on its aggregated
probabilistic description of load interruption voltages for the load mix at the bus. This
procedure is repeated under various bus voltages such that a Risk-Voltage curve is
provided for a load bus. This component level Risk-Voltage curve for a bus
encapsulates the detailed evaluation of expected impact and load mix into its final Risk-
Voltage curve such that on the system side, given that one has this curve, one can
determine the expected monetary loss due to bus voltage out of limits directly from its
bus voltage.

The Risk-Voltage curve is created for each bus in a transmission network according to
its local load mix. Figures 3-4 and 3-5 show examples of such curves for a 138kV bus
and a 230kV bus, respectively.
Figure 3-4 Risk-Voltage Curve for 138kV Bus

Figure 3-5 Risk-Voltage Curve for 230kV Bus

Similar to thermal risk, in the probabilistic load flow, the voltage is also uncertain. If we
define the component risk for a given voltage as Risk(Vj), the bus voltage risk RiskVj(Vj)
in a probabilistic load flow is


RiskVj(Vj) = ∫_{−∞}^{+∞} Pr(Vj) Risk(Vj) dVj        (eq. 3-5)

where
− Vj is the voltage on bus j.

− Pr(Vj) is the probability distribution of voltage on bus j.

To account for the effect of the OLTC (on-load tap changer) on the load when
developing the Risk-Voltage curve, one needs to identify the upper-side bus voltage at
which the transformer tap hits its limit and how the lower-side bus voltage changes
after the tap hits its limit. Alternatively, one can first develop the Risk-Voltage curve
based on the relationship between the lower-side voltage and the load, and then
transform the curve to the upper-side voltage by taking the OLTC effect into account.

3.2.2.3 Considering Credible Contingencies


We assume that the credible contingency set for overload and voltage problems is
identified a priori. This set consists of a list of component outages to be considered in
the analysis. Using component risk functions, we can easily incorporate the risk of
credible contingencies by modifying (eq. 3-4) and (eq. 3-5) as follows:
RiskTi(Si) = Σ_{k=1}^{N} Pr(k) ∫_{−∞}^{+∞} Pr(Sik) Risk(Sik) dSik        (eq. 3-6)

RiskVj(Vj) = Σ_{k=1}^{N} Pr(k) ∫_{−∞}^{+∞} Pr(Vjk) Risk(Vjk) dVjk        (eq. 3-7)

where

− k represents the state of the system: k = 1 is the base case, and k > 1 denotes the
(k−1)th post-contingency configuration.

− N−1 is the number of credible contingencies.

− Pr(k) is the probability of the system being in the kth state in the next hour.
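As a minimal sketch of (eq. 3-6) and (eq. 3-7), the sum over system states can be written directly. The contingency probabilities below are those of Table 3-1; the per-state expected risks are hypothetical placeholders:

```python
def contingency_weighted_risk(state_probs, state_risks):
    """Sum over system states k of Pr(k) times the expected component
    risk evaluated in that state's probabilistic load flow."""
    assert abs(sum(state_probs) - 1.0) < 1e-9, "state probabilities must sum to 1"
    return sum(p * r for p, r in zip(state_probs, state_risks))

# Base case (k = 1) plus the five credible contingencies of Table 3-1.
outage_probs = [5.82e-5, 5.48e-5, 5.94e-5, 5.59e-5, 6.16e-5]
probs = [1.0 - sum(outage_probs)] + outage_probs
risks = [0.40, 55.0, 48.0, 60.0, 52.0, 70.0]  # hypothetical $/hr per state
print(contingency_weighted_risk(probs, risks))
```

Because the outage probabilities are small, the base-case term dominates unless a post-contingency expected risk is very large.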

3.3 Formulating Risk Based Optimal Power Flow Problem

Based on component risk functions developed in the previous sections, the risk
constrained optimal power flow can be formulated as one of the following three
problems.

Problem 1: Set individual risk limit on each component


Here, we set a risk limit on each component in the system. Thus, the objective of the
OPF is to minimize the total generation cost while keeping the risk of each component
below a predefined limit, as shown in (eq. 3-8).
Min Σ f(Pgi)        (eq. 3-8)

subject to

Pi − Σ ViVj cos(θij + δj − δi) = 0
Qi + Σ ViVj sin(θij + δj − δi) = 0
RiskTi(Si) ≤ RiskT0
RiskVj(Vj) ≤ RiskV0
Pgi,min ≤ Pgi ≤ Pgi,max
Qgi,min ≤ Qgi ≤ Qgi,max

where RiskTi(Si) is the thermal risk of transmission line or transformer i computed by
(eq. 3-6), RiskVj(Vj) is the bus voltage out-of-limit risk computed by (eq. 3-7), and
RiskT0 and RiskV0 are the maximum risk values tolerated by the system operator. It
should be noted that all the variables in the objective function and constraints are
expected values.

Problem 2: Set an overall system risk limit


Since thermal risk and voltage out-of-limit risk both have units of dollars, we can
replace these limits by using one single system overall limit. Thus, as shown in (eq. 3-9),
the objective of OPF is to minimize the total generation cost while keeping the system
total risk below a predefined limit.

Min Σ f(Pgi)        (eq. 3-9)

subject to

Pi − Σ ViVj cos(θij + δj − δi) = 0
Qi + Σ ViVj sin(θij + δj − δi) = 0
Σi RiskTi(Si) + Σj RiskVj(Vj) ≤ RiskTV0
Pgi,min ≤ Pgi ≤ Pgi,max
Qgi,min ≤ Qgi ≤ Qgi,max

Here, RiskTV0 is the system risk limit tolerated by system operators.


Problem 3: Treat the system risk as a part of objective
The generation cost and system risk can be included into the objective function together
as follows,
Min ω1 (Σ f(Pgi)) + ω2 (Σi RiskTi(Si) + Σj RiskVj(Vj))        (eq. 3-10)

subject to

Pi − Σ ViVj cos(θij + δj − δi) = 0
Qi + Σ ViVj sin(θij + δj − δi) = 0
Pgi,min ≤ Pgi ≤ Pgi,max
Qgi,min ≤ Qgi ≤ Qgi,max
0 ≤ ω1 ≤ 1
0 ≤ ω2 ≤ 1
ω1 + ω2 = 1
Here ω1 and ω2 are weighting coefficients whose values can reflect the system operator's
attitude towards generation cost and risk.

3.4 Algorithm to Solve the Risk Based Optimal Power Flow

Linear programming based OPF methods are widely adopted in industry today [1,17].
In this section, we describe how to solve the above risk constrained OPF problems
using a successive linear programming (SLP) algorithm.

The OPF problems (eq. 3-8) through (eq. 3-10) can be rewritten in the following
compact form,

min f ( x2 ) (eq. 3-11)

subject to

g1 ( x1 , x2 ) = 0
g 2 ( x1 , x2 ) ≤ 0

The algorithm we used in this study proceeds as follows [3]:

Step 1: Set iteration counter k ← 0 and choose appropriate initial values.

Step 2: Solve the equality constraint equations (using a probabilistic load flow).

Step 3: Linearize the problem around x_k and solve the resulting LP for Δx:

min (∂f/∂x |_{x=xk}) Δx        (eq. 3-12)

subject to

(∂g/∂x |_{x=xk}) Δx ≤ −g(xk)
−Δ ≤ Δx ≤ Δ

Step 4: Set k ← k + 1 and update the current solution: x_k = x_{k−1} + Δx.

Step 5: Check whether

∂L/∂x = ∂f/∂x + λᵀ (∂g/∂x) ≤ tolerance1
g(x) ≤ tolerance2
|Δx| ≤ tolerance3

if yes, stop; if not, continue.

Step 6: Adjust step size limit ∆ based on the trust region algorithm [18], go to
Step 2.

For the termination criteria given in step 5, λ is the vector of Lagrange multipliers of
the LP problem. The first condition pertains to the size of the gradient, the second to the
violation of the constraints, and the third to the step size.
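A toy sketch of Steps 1-6, assuming scipy is available. The quadratic objective and single constraint are illustrative stand-ins, not the OPF itself; the step is accepted only when the true objective improves, otherwise the trust region Δ shrinks:

```python
import numpy as np
from scipy.optimize import linprog

def slp(f, grad_f, g, jac_g, x0, delta=0.5, iters=30):
    """Successive linear programming with a shrinking trust region:
    linearize around x_k, solve the LP for the step, accept only
    improving (or feasibility-restoring) steps."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        A_ub = jac_g(x).reshape(1, -1)
        b_ub = np.array([-g(x)])            # linearized g(x) + dg*step <= 0
        res = linprog(grad_f(x), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(-delta, delta)] * len(x), method="highs")
        if not res.success:
            delta *= 0.5
            continue
        step = res.x
        if f(x + step) < f(x) or g(x) > 0:  # accept improving/restoring steps
            x = x + step
        else:
            delta *= 0.5                    # shrink the trust region
        if delta < 1e-8:
            break
    return x

# Toy problem: min x1^2 + x2^2  subject to  g(x) = 1 - x1 - x2 <= 0.
f = lambda x: x[0]**2 + x[1]**2
grad_f = lambda x: 2 * x
g = lambda x: 1.0 - x[0] - x[1]
jac_g = lambda x: np.array([-1.0, -1.0])

print(slp(f, grad_f, g, jac_g, x0=[1.0, 1.0]))  # converges near (0.5, 0.5)
```

The trust-region update here is a crude halving rule; the text's Step 6 refers to the more careful scheme of [18].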

Quite frequently, the value of x_k given by Step 2 results in an infeasible LP problem. In
these cases, a slack variable is added for each violated constraint. These slack variables
must be zero at the optimal solution.
The procedure described above is essentially the standard linear programming
procedure for the traditional OPF problem (Problem 0), except for Step 2, in which a
probabilistic load flow is used to compute the system risk via (eq. 3-6) and (eq. 3-7).

3.5 Numerical Illustration


The risk based optimal power flow algorithm was applied to the modified IEEE RTS'96
system [19] shown in Figure 3-6. The expected load is 1909MW, and the total
generation capacity is 2305MW. The detailed network data can be found in [19]. We
also made the following statistical assumptions [20]:

− the standard deviation of the load at each bus is 1% of its mean value;

− the correlation coefficient between the active and reactive power of the load at the
same bus is 0.5;

− the correlation coefficient among the active powers of loads at different buses at the
same voltage level is 0.4;

− the correlation coefficient between the active and reactive power of loads at different
buses at the same voltage level is 0.2;

− the correlation coefficient between loads at different buses at different voltage levels
is 0.

Table 3-1 shows the assumed credible contingency set in the example. This contingency
set was determined by choosing the five contingencies with the highest outage
probabilities, each with an outage rate higher than 0.45 outages/year, as given in [19].
The set is chosen for illustration purposes only; in practice, it might be larger and
should be identified by a security assessment procedure [2].
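The correlation assumptions above translate into a correlation (and hence covariance) matrix for the load vector. The sketch below builds it for three hypothetical buses; pairs the text does not specify (reactive-reactive across buses) are set to zero as a simplifying assumption:

```python
import numpy as np

def load_correlation(levels):
    """Correlation matrix for [P_1..P_n, Q_1..Q_n] under the stated rules.
    levels[i] is the voltage level (kV) of bus i. Q-Q pairs across buses
    are not specified in the text and are set to 0 here by assumption."""
    n = len(levels)
    R = np.eye(2 * n)
    for a in range(2 * n):
        for b in range(a + 1, 2 * n):
            bus_a, is_q_a = a % n, a >= n
            bus_b, is_q_b = b % n, b >= n
            same_level = levels[bus_a] == levels[bus_b]
            if bus_a == bus_b:                  # P and Q of the same bus
                r = 0.5
            elif not same_level:
                r = 0.0                         # different voltage levels
            elif not is_q_a and not is_q_b:
                r = 0.4                         # P-P, different bus, same level
            elif is_q_a != is_q_b:
                r = 0.2                         # P-Q, different bus, same level
            else:
                r = 0.0                         # Q-Q across buses: assumed 0
            R[a, b] = R[b, a] = r
    return R

# Three hypothetical buses: two at 138 kV, one at 230 kV.
R = load_correlation([138, 138, 230])
print(R[0, 1], R[0, 4], R[0, 3], R[0, 2])
```

Multiplying R elementwise by the outer product of the standard-deviation vector (1% of each mean, per the assumptions) would give the covariance matrix used by the probabilistic load flow.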

Figure 3-6 The IEEE RTS'96 system

Table 3-1 Assumed Credible Contingency Set


Outage From Bus To Bus Probability

Line A2 1 3 5.82e-05

Line A5 2 6 5.48e-05

Line A21 12 23 5.94e-05

Line A22 13 23 5.59e-05

Line A30 17 22 6.16e-05

Also, we used the same component risk functions developed in Section 3.2.2 for all the
components in the system. For example, the risk function of each 138kV line is the risk
function shown in Figure 3-1 multiplied by its length, and the risk function of each
138kV bus is the risk function shown in Figure 3-4 multiplied by the load on that bus.
These values are used only to illustrate the method; in practice, the risk function for
each component should be developed individually according to its own location,
weather, and load conditions.

3.5.1 Problem 0: Using deterministic limits


In this case, we use the traditional deterministic OPF formulation. The deterministic
limits for lines, transformers, and bus voltages are given in Table 3-2.

Table 3-2 Deterministic Limits

Item Low limit Upper limit

138KV line - 175MVA

230KV line - 500MVA

400MVA xfmr - 600MVA

138KV bus 124.20KV 151.80KV

230KV bus 207.00KV 253.00KV


The limits for the lines and transformers are given in [19] as the short-time emergency
(STE) ratings of the equipment. The criterion for the bus voltage limits is that there
should be no expected load interruption at that voltage; the limits in Table 3-2 are thus
obtained from Figures 3-4 and 3-5.
The optimal expected generation cost is $30967.43/hr. The risk calculation results are
given in Tables 3-3 and 3-4. They show that the system risk is dominated by the bus
voltage out-of-limit risk, $3975.49/hr, while the total thermal risk is only $1.16/hr. The
high bus voltage out-of-limit risk in this case is caused by the assumption that all loads
are commercial loads, which are assumed to be sensitive to bus voltage changes and to
have very high load interruption costs [16]. The highest voltage risk, $1729.37/hr,
occurs at bus 13. Calculation shows that its base case expected bus voltage is 249.09KV
with a variance of 2.92. Although the expected value does not exceed the limit
(253.00KV), the bus still carries a very high risk because of the tail of the bus voltage
distribution beyond the limit.

The differences among the voltage out-of-limit risks of different buses in Table 3-4 are
very large. This is mainly due to the steep shape of the Risk-Voltage curves shown in
Figures 3-4 and 3-5.

3.5.2 Problem 1: Set Individual Risk Limit On Each Component


In this case, we assume that the maximum component risk accepted by the utility is
$300.00/hr and set it as the limit for both branch flows and bus voltages. The optimal
expected generation cost is $30984.94/hr. The total thermal risk for transmission lines
and transformers is $1.20/hr, and the total voltage out-of-limit risk is $1812.35/hr.
Tables 3-5 and 3-6 show the distribution of these risks over the components. Compared
with the Problem 0 results (see Tables 3-3 and 3-4), the generation cost increases by
about $17.51/hr while the total system risk is reduced by about $2163.10/hr, mainly
because the voltage out-of-limit risks of some high-risk buses, such as buses 13 and 14,
are greatly reduced. This example shows how effectively the system operator can
reduce system risk by trading off risk against cost.

In this case, no branch thermal constraints are binding. The Lagrange multipliers of the
binding voltage risk constraints are shown in Table 3-7. Since the objective function and
the limits have the same units, these multipliers directly indicate how much the
objective would improve if the corresponding limits were relaxed; they are therefore
useful in identifying the most effective means of improving the objective.

Figure 3-7 shows the relationship between the total generation cost and the component
risk limit. Relaxing the component risk limit reduces the generation cost, but with
diminishing returns.
Table 3-3 Thermal Risk for Deterministic Constrained Case

Line or Xfmr Risk($) Line or Xfmr Risk($)

1-2 0.00 12-13 0.00

1-3 0.00 12-23 0.00

1-5 0.00 13-23 0.00

2-4 0.00 14-16 0.08

2-6 0.00 15-16 0.00

3-9 0.00 15-21 0.00

3-24 0.23 15-21 0.00

4-9 0.00 15-24 0.00

5-10 0.00 16-17 0.00

6-10 0.00 16-19 0.00

7-8 0.00 17-18 0.00

8-9 0.00 17-22 0.00

8-10 0.00 18-21 0.00

9-11 0.09 18-21 0.00

9-12 0.23 19-20 0.00

10-11 0.20 19-20 0.00

10-12 0.40 20-23 0.00

11-13 0.00 20-23 0.00

11-14 0.00 21-22 0.00


Table 3-4 Voltage-out-of-limit Risk for Deterministic Constrained Case

Bus Risk($) Bus No. Risk($)


No.

1 155.84 13 1729.37

2 208.33 14 1284.86

3 0.00 15 73.32

4 0.00 16 43.62

5 0.00 17 0.00

6 0.00 18 212.78

7 0.00 19 117.80

8 0.00 20 149.55

9 0.00 21 0.00

10 0.00 22 0.00

11 0.00 23 0.00

12 0.00 24 0.00
Table 3-5 Thermal Risk for Risk Constrained Case

Line or Xfmr Risk($) Line or Xfmr Risk($)

1-2 0.00 12-13 0.01

1-3 0.00 12-23 0.01

1-5 0.00 13-23 0.00

2-4 0.00 14-16 0.00

2-6 0.00 15-16 0.00

3-9 0.00 15-21 0.00

3-24 0.24 15-21 0.00

4-9 0.00 15-24 0.00

5-10 0.00 16-17 0.00

6-10 0.00 16-19 0.00

7-8 0.00 17-18 0.00

8-9 0.00 17-22 0.00

8-10 0.00 18-21 0.00

9-11 0.08 18-21 0.00

9-12 0.23 19-20 0.00

10-11 0.19 19-20 0.00

10-12 0.42 20-23 0.00

11-13 0.00 20-23 0.00

11-14 0.00 21-22 0.00


Table 3-6 Voltage-out-of-limit Risk for Risk Constrained Case

Bus Risk($) Bus Risk($)


No. No.

1 116.83 13 300.00

2 111.01 14 300.00

3 0.00 15 128.59

4 0.00 16 49.72

5 0.00 17 0.00

6 0.00 18 300.00

7 0.00 19 206.19

8 0.00 20 300.00

9 0.00 21 0.00

10 0.00 22 0.00

11 0.00 23 0.00

12 0.00 24 0.00

Table 3-7 Lagrange Multipliers for Bounded Constraints

Item Risk($) Value Lagrange Multiplier

Bus 13 300.00 251.62KV 0.020

Bus 14 300.00 251.62KV 0.007

Bus 18 300.00 253.23KV 0.006

Bus 20 300.00 253.46KV 0.024


Figure 3-7 Generation Cost vs. Component Risk Limit

3.5.3 Problem 2: Set An Overall System Risk Limit


In this case, we combine the system thermal risk and voltage out-of-limit risk into a
single overall system risk. Figure 3-8 shows the relationship between the generation
cost and the system risk limit. This figure can help the system operator determine a
reasonable system risk limit that balances cost and benefit. Figure 3-9 shows the
relationship between the Lagrange multiplier λ and the system risk limit; it indicates
how effective increasing the overall system risk limit is in reducing the generation cost.
Figure 3-8 Generation Cost vs. System Risk Limit

Figure 3-9 Lagrange Multipliers vs. System Risk Limits

3.5.4 Problem 3: Treat The System Risk As A Part of Objective


In this case, we minimize the system risk and the total generation cost simultaneously.
Table 3-8 shows the relationship between generation cost and total system risk for
different weighting coefficients ω1 and ω2. As stated before, the choice of ω1 and ω2
reflects the system operator's attitude towards generation cost and risk. The table
shows that, for a risk-neutral operator who values risk and generation cost equally, the
best solution is No. 6, which has the smallest sum of cost and risk.
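The risk-neutral comparison can be reproduced by ranking the rows of Table 3-8 by the sum of cost and risk:

```python
# (w1, w2, cost $/hr, risk $/hr) rows from Table 3-8
rows = [
    (0.0, 1.0, 33121.20, 0.59),
    (0.1, 0.9, 31752.16, 0.85),
    (0.2, 0.8, 31123.51, 1.13),
    (0.3, 0.7, 31062.90, 1.16),
    (0.4, 0.6, 31015.68, 1.71),
    (0.5, 0.5, 31002.07, 2.54),
    (0.6, 0.4, 31000.09, 7.65),
    (0.7, 0.3, 30998.26, 8.26),
    (0.8, 0.2, 30994.37, 16.74),
    (0.9, 0.1, 30992.43, 18.34),
    (1.0, 0.0, 30357.55, 624722.87),
]
# Rank by total (cost + risk); a risk-neutral operator picks the minimum.
best = min(range(len(rows)), key=lambda i: rows[i][2] + rows[i][3])
print(best + 1, rows[best][2] + rows[best][3])  # solution No. 6
```

Note how the last row (ω2 = 0, risk ignored) has the lowest cost but an enormous risk, so it ranks worst by this criterion.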

3.6 Conclusions
In this chapter, a risk based optimal power flow is developed. The method assumes
that the power demand at each bus is random and normally distributed, with the
forecasted value as its mean and an assumed variance. The uncertainties associated
with load characteristics, weather conditions, and contingencies are incorporated into
the component risk functions. The traditional deterministic inequality constraints, such
as branch thermal limits and bus voltage limits, are replaced by probabilistic risk
functions for each transmission line, transformer, and bus. A successive linear
programming algorithm is adopted to solve the risk based OPF problem. Risk based
OPF provides a useful decision-making tool to help the system operator balance system
risk and cost.
Table 3-8 Solution to Problem 3

No. ω1 ω2 Cost ($/hr) Risk ($/hr) Total ($/hr)

1 0.0 1.0 33121.20 0.59 33121.79

2 0.1 0.9 31752.16 0.85 31753.01

3 0.2 0.8 31123.51 1.13 31124.64

4 0.3 0.7 31062.90 1.16 31064.06

5 0.4 0.6 31015.68 1.71 31017.39

6 0.5 0.5 31002.07 2.54 31004.61

7 0.6 0.4 31000.09 7.65 31007.74

8 0.7 0.3 30998.26 8.26 31006.52

9 0.8 0.2 30994.37 16.74 31011.11

10 0.9 0.1 30992.43 18.34 31010.77

11 1.0 0.0 30357.55 624722.87 655080.42

References

[1] M. Huneault and F. D. Galiana, ``A Survey of The Optimal Power Flow Literature,''
IEEE Transactions on Power Systems, Vol.6, No.2, pp 762-768, May 1991.

[2] IEEE tutorial course, Optimal Power Flow: Solution Techniques, Requirements, and
Challenges. IEEE Power Engineering Society, 96 TP 111-0.

[3]R. D. Zimmerman and D. Gan, MATPOWER - A Matlab Power System Simulation


Package, User's Manual, Version 2.0, December 24, 1997.

[4] Mid-Continent Area Power Pool (MAPP) System Design Standards, Mid-Continent
Area Power Pool, December 1994.
[5] J. Chen and J. McCalley, “Comparison Between Deterministic and Probabilistic
Study Methods in Security Assessment for Operations,” to appear in Proceedings of the
VI International Conference on Probabilistic Methods Applied to Power Systems,
September 2000, Madeira Island, Portugal.

[6] EPRI final report WO8604-01, ``Risk-based Security Assessment,'' December, 1998.

[7] R. N. Allan, A. M. Leite da Silva, and R. C. Burchett, ``Evaluation Methods and


Accuracy in Probabilistic Load Flow Solutions,'' IEEE Transactions on Power Apparatus
and Systems, Vol.PAS-100, No.5, pp 2539-2546, May 1981.

[8] R. N. Allan, C. H. Crigg, and M. R. Al-Shakarchi, “Numerical Techniques in


Probabilistic Load Flow Problems,” International Journal for Numerical Methods in
Engineering, Vol. 10, 1976, pp 853-860.

[9] A. P. Meliopoulos, A. G. Bakirtzis, and R. Kovacs, ``Power System Reliability


Evaluation using Stochastic Load Flows,'' IEEE Transactions on Power Apparatus and
Systems, Vol.PAS-103, No.5, pp 1084-1091, May 1984.

[10] B. Borkowska, ``Probabilistic Load Flow,'' IEEE Transactions on Power Apparatus


and Systems, Vol.PAS-93, No.3, 1974, pp 752-759.

[11] F. Dopazo, O. A. Klitin, and A. M. Sasson, ``Stochastic Load Flows,'' IEEE
Transactions on Power Apparatus and Systems, Vol.PAS-94, No.2, 1975, pp. 299-309.

[12] H. R. Sirisena and E. P. M. Brown, ``Representation of Non-Gaussian Probability


Distributions in Stochastic Load-Flow Studies By The Methods of Gaussian Sum
Approximations,'' IEE Proceedings, Vol. 130, Part C, No. 4, July 1983. pp. 165-172.

[13] M. E. El-Hawary and G. A. N. Mbamalu, ``A Comparison of Probabilistic


Perturbation and Deterministic Based Optimal Power Flow Solutions,'' IEEE
Transactions on Power Systems, Vol.6, No.3, pp 1099-1105, August 1991.

[14] H. Wan, J. D. McCalley, V. Vittal, ''Increasing Thermal Rating by Risk Analysis'',


PE-090-PWRS-0-1-1998, to appear in IEEE Transactions on Power Systems.

[15] W. Fu, J.D. McCalley, V. Vittal, "Risk-based Assessment of Transformer Thermal


Overloading Capability", Proceedings of the 30th North American Power Symposium,
Oct. 19-20, 1998, Cleveland State University, Ohio, pp. 118-123.

[16] H. Wan, J. D. McCalley, V. Vittal, ``Risk Based Voltage Security Assessment,'' to


appear in IEEE Transactions on Power Systems.
[17] O. Alsac, J. Bright, M. Prais, and B. Stott, ``Further Developments in LP-Based
Optimal Power Flow,'' IEEE Transactions on Power Systems, Vol.5, No.3, pp 697-711,
August 1990.

[18] R. Fletcher, Practical Methods of Optimization, 2nd Edition, John Wiley & Sons,
pp.95-96.

[19] IEEE Task Force Report, ``The IEEE Reliability Test System - 1996,'' 96 WM 326-9
PWRS.

[20] Youjie Dai, “Framework for Power System Annual Risk Assessment,” Ph.D.
Dissertation, Iowa State University, 1998.
4
DECISION MAKING FOR OPERATIONS -
CORRECTIVE/PREVENTIVE ACTION SELECTION

4.1 Introduction

According to traditional security assessment, the state of the power system can be
assigned to one of the following sets: normal, alert, emergency, and restorative. When
the system is in the alert state, preventive actions must be taken; when it is in the
emergency state, corrective actions must be adopted. In this chapter, we propose that
the risk level of the system be used to identify when action is needed, which actions to
select, and how much preventive or corrective action to take. The operator thus
controls the system according to its risk value: if the risk of the system is too high, the
operator should take actions to reduce it. Such actions are called preventive/corrective
(P/C) actions here, and selecting an efficient P/C action is a decision-making problem.

It has always been a challenge in system operation to find the optimal or satisfactory
balance between two generally opposing objectives: obtaining the maximal return from
the system given the current configuration and available infrastructure, versus
minimizing the adverse effects of possible security problems. The choice is limited by
post-contingency system performance limits specified by reliability criteria, which
impose restrictions on the pre-contingency operating conditions. A list of credible
contingencies is screened, and those contingencies that, if they occur, lead to violations
of the performance criteria are selected for further analysis. The selected contingencies
limit a number of operating parameters, such as circuit flows, generator outputs, and
bus voltage magnitudes; operating beyond these limits may lead to security problems
if the contingencies occur.

In [1] the concept of risk was introduced: it links the economics of an operating point
with the security aspects associated with it. By evaluating the risk at operating points
lying on deterministic operating limits, it is possible to show that they generally do not
have equal risk, as illustrated in Chapter 2. This is mainly because the limits are
imposed by different contingencies, each having a different probability of occurrence
and a different security impact. These arguments lead to the conclusion that there is
some risk inconsistency in using deterministic operating limits as security criteria. An
illustration of this inconsistency is depicted in Figure 4-1. The discontinuous line
represents the deterministic operating limits, while the curved line connects points
with the same level of risk. Assuming risk increases with the distance from the origin,
it can clearly be observed that some points in the secure region have a risk greater than
points on the iso-risk curve, while some operating points outside the secure region
have a risk lower than that of the contour.


Figure 4-1 Risk Inconsistency in System Operation

The risk index defined in Chapter 1 provides more insight to the operator on the
expected financial consequences of operating at a particular point. It contains
information on the probability of eventual insecure contingencies¹ as well as an
estimate of what it will cost if an insecure contingency turns out to be the true one. The
operator has several ways to use this information; an obvious one is determining
operating limits based on risk, which corresponds to choosing a maximum risk level at
which the system operator is willing to operate. Other possibilities include using risk to
optimize the operating trajectory in the near future, where risk is included as an
attribute in the objective function or as a constraint, as illustrated in Chapter 3 where
we described the risk-based OPF. In any of these problems, the original challenge, the
trade-off between economics and security, is present in one way or another.

In system operation the decisions need to be made in a very short amount of time
(maximum a few hours). The problems can be very complex, and the consequences of a
wrong decision can be felt immediately. Usually, the number of actions available to the
system operator is limited. As a result, decision aid tools are very helpful in an
operation environment.

¹ In this chapter, “contingency” includes the real contingency and the no-outage state.
A decision-making problem consists of various components: the decision problem, the
decision maker(s) (DM) – single or multiple, the objective(s) – single or multiple, the
attributes and their values, called pay-offs, the alternatives, and the states of nature or
scenarios.

In EPRI report [1], we discussed some decision criteria for the selection of
corrective/preventive actions, such as the maxi-min and mini-max regret criteria for
the profit maximization condition. We summarize this discussion in Section 4.3. To
account for the probability of each contingency, instead of using the traditional
decision criteria, which look only at the security impacts of each contingency, a risk
index (the product of the probability and the impact value of the corresponding
contingency) is calculated for each contingency and then combined with the profit
value (or a benefit function) to form a combined index for selecting the action. The
problem with this procedure is that the profits are assumed to occur with a probability
equal to that of the no-outage condition (usually very close to 1.0), whereas the security
impacts occur only with the probabilities of the outage contingencies. The computed
risks (products of the security impacts and the outage probabilities) of each
contingency are therefore far smaller than the profit value. This procedure effectively
neglects the risk, so the selected action is always the one with the highest profit in the
action list. This is not consistent with intuition or with actual practice, and so in Section
4.3 we present a modified approach to using the maxi-min and mini-max regret
decision criteria. The modification rests on the perspective that in each contingency
both profits and security impacts occur (rather than just security impacts), and
therefore both should be weighted by the outage probability.
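The two classical criteria can be sketched on a payoff matrix. The net payoffs below (profit minus security impact, in thousands of dollars) are purely hypothetical illustrations, not values from this study:

```python
def maximin(payoffs):
    """Maxi-min: pick the action whose worst-case payoff is largest."""
    worst = [min(row) for row in payoffs]
    return max(range(len(payoffs)), key=lambda a: worst[a])

def minimax_regret(payoffs):
    """Mini-max regret: regret = (best payoff in a scenario) minus the
    action's payoff there; pick the action with the smallest max regret."""
    n_s = len(payoffs[0])
    best_per_s = [max(row[s] for row in payoffs) for s in range(n_s)]
    max_regret = [max(best_per_s[s] - row[s] for s in range(n_s))
                  for row in payoffs]
    return min(range(len(payoffs)), key=lambda a: max_regret[a])

# Hypothetical net payoffs for four actions (rows) under three
# states of nature (no outage, outage 1, outage 2), in $1000s.
payoffs = [
    [50, -400, -900],   # Action 1
    [48,  -50, -120],   # Action 2
    [20,  -10,  -30],   # Action 3
    [80, -800, -1500],  # Action 4
]
print(maximin(payoffs), minimax_regret(payoffs))  # indices of the chosen actions
```

With these numbers both criteria select the conservative, high-security action, illustrating why, without probability weighting, such criteria can ignore how unlikely the outage scenarios are.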

Decision-making depends on the available information and on what is uncertain.
Increasing or improving the available information can reduce the level of uncertainty
and generally improves the decision. Additional information can be used to obtain
improved values of the various probabilities appearing in the decision model through
the application of Bayes' theorem. In Section 4.4, a Bayesian decision tree is applied to
the decision-making problem to show how additional information may be integrated
into it. Acquiring the additional information requires time and resources for gathering,
organizing, storing, processing, and reporting the information².

In Section 4.5, several methods dealing with multi-objective decision-making are
proposed for risk-based corrective/preventive action selection. These methods
represent a promising approach, as they provide a consistent framework for making
decisions when there are several objectives to optimize. The P/C action selection case is
inherently multi-objective due to the desire to optimize both profits and security level
via risk. Since risk is actually an expected value, a third objective that we propose is
variance.

² Whether the effect of this information on the decision is worth the price that is paid for it is an issue
discussed in Chapter 5 of this report.

In Section 4.6, we describe and illustrate the application of Evidential Theory to the
corrective/preventive action selection problem. Evidential Theory offers an efficient
way to represent uncertainty and to perform reasoning under uncertainty. In Evidential
Theory, each independent information source is regarded as a piece of evidence, and
the information from different pieces of evidence is combined by applying Dempster's
Rule of Combination; results are obtained from the combined information. In a
multi-objective decision-making problem, each objective can be regarded as an
independent information source, so Dempster's Rule of Combination can be used to
combine information from the different objectives, and the decision can be made based
on the combined result. One advantage of this method is that it conveniently
accommodates multiple DMs in multi-objective decision-making, as each DM can also
be treated as an independent information source.
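A compact sketch of Dempster's Rule of Combination, with two objectives treated as independent evidence sources over three candidate actions; the mass assignments are hypothetical:

```python
def dempster_combine(m1, m2):
    """Dempster's Rule of Combination for two basic probability
    assignments over subsets of a frame (subsets as frozensets)."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb     # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict                  # renormalization constant
    return {s: v / k for s, v in combined.items()}

# Frame of discernment: three candidate actions.
A1, A2, A3 = frozenset({"A1"}), frozenset({"A2"}), frozenset({"A3"})
ALL = frozenset({"A1", "A2", "A3"})

# Hypothetical evidence: one source per objective.
profit_evidence = {A1: 0.6, A3: 0.1, ALL: 0.3}
risk_evidence = {A2: 0.5, A3: 0.3, ALL: 0.2}

m = dempster_combine(profit_evidence, risk_evidence)
print(max(m, key=m.get))  # action subset with the largest combined mass
```

Mass assigned to the whole frame (ALL) models an objective's reluctance to commit, which is precisely what a Bayesian prior cannot express directly.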

4.2 Study Case

The DM is assumed to be the system operator supervising a control area within the
IEEE reliability test system. This control area comprises Buses 12, 13, and 23, and the
operator must select an operating action for the coming hours. The study case is taken
at peak load conditions, when most of the units are operating close to their limits. The
three 155MW units at Bus 23 are generating 105MW each. The three 200MW units at
Bus 13 are producing 600MW in total. The total area generation, 3×105+600 = 915MW,
supplies 250MW of local load and exports the remaining 665MW to the neighboring
areas.

Table 4-1 Decision Making Case

Description Security Economics

Action 1 Maintain present conditions. Low Medium

Action 2 Transfer 60 MW from bus 130 to bus 230 High Medium

Action 3 Buy 150 MW from Area 30 High Low

Action 4 Sell 130 MW to Area 30 Low High

Table 4-1 presents the options available to the system operator, together with a
qualitative description of each one of them in terms of security level and profits. The
system operator’s objectives are two-fold: to maximize profits and to maximize security
level.
Additional information is contained in the following two tables. Table 4-2 provides the
probabilities corresponding to each one of the relevant future contingencies. It may
happen that no faults occur, or that one of the two lines emerging from Bus 13 is
faulted. To maintain simplicity, we only consider the transient stability of the bus 13
generators. Other faults might also happen in the sub-system under study but do not
affect the transient stability. The conclusions made from the following illustration are
also applicable to voltage and overload security.

Table 4-2 Contingencies and Probabilities

Contingency Occurrence Probability

No Outage 0.9999

Outage 130-120 4.58E-5

Outage 130-230 9.16E-5

To measure the economic benefits of an action, the projected profits that result from that
action are calculated as the difference between the revenues from energy sales and the
costs of fuel and energy purchased outside the area. The profits are not affected by an
eventual contingency occurrence, but the increased costs of eventual insecurity are
accounted for in the security impacts (see Table 4-4). The security impact is the cost
consequence of each of the listed contingencies and includes start-up and repair costs,
lost opportunity costs, and customer interruption costs [1].

Table 4-3 Profits

Action 1 Action 2 Action 3 Action 4


Profit 20,385 19,902 10,602 22,595

Given this information, the system operator needs to find out which one of the actions,
according to his/her experience and judgment, gives the best trade-off between
economy and security.

Table 4-4 Security Impacts

Action 1 Action 2 Action 3 Action 4


Contingency 1 0 0 0 0
Contingency 2 855,679 235,549 220,111 1,127,882
Contingency 3 671,221 133,461 133,461 671,221

4.3 Profits Minus Risk Paradigm – The Single Objective Case

4.3.1 Summary of our previous work

We summarize the work on decision-making reported in [1]. If there is only one
relevant criterion or if different criteria can be combined into one single criterion, the
problem can be approached by single criterion decision-making methods. The
differences among the various existing methods are related to the way the probabilities
of the possible contingencies are perceived [2]-[4]. A possible criterion3 could be
maximizing the difference between the attributes profits and security impacts. For each
action under each contingency, the resulting outcome (value of the attribute) is
calculated. The best-known single criterion decision paradigms were applied to the
presented problem and the results are summarized below. More details of this study
can be found in chapter 11 of [1]. The first three paradigms are based on subjective
assumptions on the contingency likelihood, while the last one uses probabilities
obtained from historical data or through calculation.
No.1 Maxi-min paradigm: This is a pessimistic rule. It assumes that whatever action is
taken, the worst contingency for that action will occur. The action that has the
‘best’ worst outcome is selected. Under this rule, Action 3 would be chosen.
No.2 Mini-max regret paradigm: Here, the regret associated with one action is
quantified as the difference in outcome of that action and the outcome of the
action that would have been chosen, if the future were known. For each action,
the maximum regret value is identified and the action with smallest maximum
regret value is picked. According to this rule, Action 2 would be selected.
No.3 Equal likelihood paradigm: This rule assumes that all contingencies have the same
probability of happening: the action with the highest sum of outcomes is
selected. This would result in the selection of Action 2.
No.4 Maximizing Profit minus Risk: Instead of looking at the security impacts in each
contingency, a risk index can be calculated using the approaches developed in
[5]-[8]. The risk values for each action are presented in Table 4-5. Here, we
account for transient instability, assuming the overload and voltage instability
risks are zero. The probability of the instability fault is considered in this
paradigm, which depends on contingency likelihood and the probability of
instability given the contingency. The latter is computed as a function of fault
type and location. The action with the highest difference between profit and
risk is selected, i.e., Action 4.

Besides the above-mentioned approaches, the Minimum Expected Monetary Value method
(method No.5) is also a widely used single-criterion decision-making approach. When
we apply this approach to the study case, the outcome of each contingency, taken as its
security impact minus the profit, is multiplied by the probability of occurrence of that
contingency, and these products are summed for each action. Finally the action with the
smallest of these sums (the lowest expected cost) is picked. The probabilities of each
contingency from Table 4-2 are used here. In this case
3
In this chapter the following terminology is used: a criterion is a more general goal, e.g., maximizing
security or maximizing economics. An attribute is a measure of the level of satisfaction of one criterion:
profits, risk. An objective is more concrete than a criterion and indicates the direction in which one wants
to optimize an attribute, e.g., the objective 'minimizing risk' is a way to satisfy the criterion 'maximizing
security'.
the selected action would be Action 4.
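The paradigms above can be reproduced with a short script. The sketch below (Python, with illustrative variable names) builds the outcome matrix from Tables 4-2 to 4-4 as profit minus security impact and applies paradigms No.1, No.2, No.3 and No.5; it recovers the selections reported above (Action 3, Action 2, Action 2 and Action 4). Paradigm No.4 is omitted because it requires the instability probabilities behind Table 4-5, which are not reproduced in this section.

```python
# Sketch (illustrative names) of the single-criterion paradigms of Section
# 4.3.1, applied to the data of Tables 4-2 to 4-4.

profits = {"A1": 20385, "A2": 19902, "A3": 10602, "A4": 22595}  # Table 4-3
impacts = {  # security impacts in $ per contingency (Table 4-4)
    "A1": [0, 855679, 671221],
    "A2": [0, 235549, 133461],
    "A3": [0, 220111, 133461],
    "A4": [0, 1127882, 671221],
}
probs = [0.9999, 4.58e-5, 9.16e-5]  # contingency probabilities (Table 4-2)

# Outcome of an (action, contingency) pair: profit minus security impact.
outcome = {a: [profits[a] - im for im in impacts[a]] for a in profits}

# No.1 Maxi-min: the action with the best worst outcome.
maximin = max(outcome, key=lambda a: min(outcome[a]))

# No.2 Mini-max regret: regret = column-best outcome minus own outcome.
best = [max(outcome[a][j] for a in outcome) for j in range(3)]
regret = {a: max(best[j] - outcome[a][j] for j in range(3)) for a in outcome}
minimax_regret = min(regret, key=regret.get)

# No.3 Equal likelihood: the action with the highest sum of outcomes.
equal_likelihood = max(outcome, key=lambda a: sum(outcome[a]))

# No.5 Expected monetary value: probability-weighted expected cost
# (impact minus profit); pick the action with the lowest expected cost.
expected_cost = {a: sum(p * -o for p, o in zip(probs, outcome[a]))
                 for a in outcome}
emv = min(expected_cost, key=expected_cost.get)

print(maximin, minimax_regret, equal_likelihood, emv)  # A3 A2 A2 A4
```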

In method No.4, since we have assumed that the profits are not affected by an eventual
contingency occurrence, they are not weighted by the contingency probabilities. But in
reality the profit may vary with the contingency. For example, some contingencies may
cause congestion of the transmission system, which can change the electricity price and
thereby influence the profit. So if some "profits" are also obtained under a contingency,
they should be weighted by the probability of that contingency and then included in
the "risk" evaluation.

Table 4-5. Risk Values

Action 1 Action 2 Action 3 Action 4


Risk ($/hr) 9.3 2.12 2.06 10.44

4.3.2 Alternative Methods: Rank and Per-Unit

From the results obtained in Section 4.3.1, it becomes apparent that when no probability
data are used in the decision process, the suggested methods are quite conservative. On
the other hand, the probabilistic risk-based methods favor higher-profit, higher-risk
alternatives; in fact they virtually neglect the effect of insecurity on the decision,
because the probability of the events that cause the insecurity problems is low in
comparison with the high probability of the no-outage condition in which the profit is
realized. This can be seen clearly from Table 4-6. Here we assume that the "profits"
values in Table 4-3 are contingency-related. Then the risk of Action i given
Contingency j, Risk_ij, can be calculated using (eq. 4-1):

Risk_ij = (SecurityImpact_ij − Profit_ij) × Probability_j    (eq. 4-1)

The Risk values calculated based on (eq. 4-1) are shown in Table 4-6. This table can be
regarded as the decision matrix for this study case.

Table 4-6 Risk Value

Action 1 Action 2 Action 3 Action 4

No Outage ($) -20,385 -19,902 -10,602 -22,595

Outage 130-120($) 38.26 9.88 9.60 50.62

Outage 130-230($) 59.62 10.40 11.25 59.41


A decision matrix whose elements are of the same scale is a prerequisite for the above
methods (No.1 to No.5) to produce reasonable decisions. From Table 4-6 we can see that,
because the security impacts under the no-outage contingency are all 0 and the
probability of the no-outage contingency is nearly unity, the absolute value of the risk
under the no-outage contingency is, for every action, much larger than the values for
the outage contingencies. The elements are not of the same scale, which may cause
some factors to be neglected in the decision-making process. One adjustment that
accounts for this is a transformation of the decision matrix. The Rank Method for
doing so is introduced in the next section.

4.3.2.1 Rank Method (Method No.6)

We desire to transform the decision matrix so that the elements are of the same scale.
The rank method can achieve this by ranking the elements corresponding to each
contingency with the sequential number (1, 2, 3...) according to their magnitude. In our
example, the risk values of each contingency are ranked from lowest to the highest (see
Table 4-7). Then the traditional mini-max and minimum maximum regrets criteria can
be used for selecting the action.

Table 4-7 Ranked Risk Value

Action 1 Action 2 Action 3 Action 4

No Outage 2 3 4 1

Outage 130-120 3 2 1 4

Outage 130-230 4 1 2 3

4.3.2.1.1 Mini-max Criterion

Based on this criterion, the highest rank value for each action is identified in
Table 4-7. Then the action having the lowest maximum rank value is selected. So
Action 2 is selected.

4.3.2.1.2 Minimum Maximum Regrets Criteria

The regret value matrix is formed in Table 4-8. For each action, find the maximum
regret value. Then select the action that has the lowest maximum regret value. In
this case, Action 2 is selected.
Table 4-8 Regret Values

Action 1 Action 2 Action 3 Action 4

No Outage 1 2 3 0

Outage 130-120 2 1 0 3

Outage 130-230 3 0 1 2

Using the Rank Method, we can ensure that all of the elements in the decision matrix are
of the same scale. But the rank values only reflect the relative ordering of the elements;
they do not reflect the real differences in magnitude. From Table 4-6 we can see that
the risk values within a given contingency are almost of the same scale; only the risk
values across different contingencies differ greatly. So in the next section we introduce
the Per-unit Method, which transforms all of the elements of the decision matrix to a
commensurate scale while preserving the real differences among the risk magnitudes
within each contingency.

4.3.2.2 Per-unit Method (Method No.7)

In the Per-Unit method, for each contingency, we choose the magnitude of the risk with
the highest absolute value as the risk base value. So in this example, the risk base
values for no outage, outage 130-120 and outage 130-230 are 22,595, 50.62 and 59.62
respectively. Then the per-unit value of each element is obtained by dividing it by the
corresponding base value (so signs are preserved). The per-unit matrix is shown in
Table 4-9. Then the traditional mini-max and minimum maximum regrets criteria can
be used for selecting the action.
Table 4-9 Per-unit Risk Value

Action 1 Action 2 Action 3 Action 4

No Outage -0.9022 -0.8808 -0.4692 -1.0000

Outage 130-120 0.7558 0.1952 0.1896 1.0000

Outage 130-230 1.0000 0.1744 0.1887 0.9965

4.3.2.2.1 Mini-max Criterion

Based on this criterion, the highest per-unit value for each action is identified in
Table 4-9. Then the action having the lowest maximum per-unit value is selected,
which, in this case, is Action 3, i.e., buy 150 MW from Area 30. This action has the
lowest profit but the highest security level: a conservative decision.

4.3.2.2.2 Minimum Maximum Regrets Criteria

The regret value matrix is shown in Table 4-10. Based on Table 4-10 and the minimum
maximum regrets criteria, Action 2 is selected. From Table 4-3 we can see that Action 2
has a high profit (although not the highest) and a low security impact (although not the
lowest), so its regret values are very small. This reflects a reasonable trade-off between
profits and security level.

Table 4-10 Regret Value

Action 1 Action 2 Action 3 Action 4

No Outage 0.0978 0.1192 0.5308 0

Outage 130-120 0.5662 0.0056 0 0.8104

Outage 130-230 0.8256 0 0.0143 0.8221
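The Rank and Per-unit transformations of methods No.6 and No.7 can be sketched as follows. This illustrative Python fragment starts from the decision matrix of Table 4-6, applies both transformations, and then applies the mini-max and minimum-maximum-regret criteria; it reproduces the selections discussed above (Action 2 and Action 2 for the Rank Method, Action 3 and Action 2 for the Per-unit Method).

```python
# Illustrative implementation of the Rank (No.6) and Per-unit (No.7)
# transformations applied to the risk decision matrix of Table 4-6.

risk = {  # columns: No Outage, Outage 130-120, Outage 130-230 (Table 4-6)
    "A1": [-20385, 38.26, 59.62],
    "A2": [-19902, 9.88, 10.40],
    "A3": [-10602, 9.60, 11.25],
    "A4": [-22595, 50.62, 59.41],
}
actions = list(risk)

def minimax(matrix):
    """Action whose worst (largest) transformed value is smallest."""
    return min(matrix, key=lambda a: max(matrix[a]))

def minimax_regret(matrix):
    """Action whose largest regret (distance from the column best) is smallest."""
    best = [min(matrix[a][j] for a in matrix) for j in range(3)]
    return min(matrix,
               key=lambda a: max(matrix[a][j] - best[j] for j in range(3)))

# Rank Method: replace each column entry by its ascending rank (1 = lowest risk).
rank = {a: [0, 0, 0] for a in actions}
for j in range(3):
    for r, a in enumerate(sorted(actions, key=lambda a: risk[a][j]), start=1):
        rank[a][j] = r

# Per-unit Method: divide each column by the magnitude of its largest-|.| entry.
base = [max(abs(risk[a][j]) for a in actions) for j in range(3)]
per_unit = {a: [risk[a][j] / base[j] for j in range(3)] for a in actions}

print(minimax(rank), minimax_regret(rank))          # rank-method selections
print(minimax(per_unit), minimax_regret(per_unit))  # per-unit selections
```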


4.4 Decision with Additional Information Using Bayesian Decision Tree

4.4.1 Decision Tree

The decision tree is a network diagram that depicts the sequence of decisions and
associated chance events, as the DM understands them. The branch of the tree
represents either decision alternatives or chance events. Decision actions emanate from
decision nodes, represented by squares; chance events (i.e., contingencies) emanate
from chance nodes, represented by circles. Figure 4-2 is a decision tree of the studied
example.

The decision node (square, at left) branches into the four candidate actions; each
action leads to a chance node (circle) from which the three contingencies emanate
with their probabilities. The impact (cost) of each branch is the security impact
minus the profit, and the expected impact of each action is given in parentheses:

Action 1 (-20,284$): No Outage 0.9999, -20,385$; Outage 130-120 0.0000458, 835,294$; Outage 130-230 0.0000916, 650,836$
Action 2 (-19,879$): No Outage 0.9999, -19,902$; Outage 130-120 0.0000458, 215,647$; Outage 130-230 0.0000916, 113,559$
Action 3 (-10,580$): No Outage 0.9999, -10,602$; Outage 130-120 0.0000458, 209,509$; Outage 130-230 0.0000916, 122,859$
Action 4 (-22,482$): No Outage 0.9999, -22,595$; Outage 130-120 0.0000458, 1,105,287$; Outage 130-230 0.0000916, 648,626$

Figure 4-2 Decision Tree of the Example

The intention to select an action is a decision; the square at the left of Figure 4-2
represents it. Actions 1 to 4 are the alternatives. No outage, outage 130-120 and
outage 130-230 are the three chance events for each action, so they emanate from the
chance nodes (circles). The number after each chance event is its probability. The
impact value listed in the figure is the difference between the security impact of each
chance event and the profit of the action. The expected impact of each action, which is
its risk, is calculated and listed in parentheses under each action branch. This is the
"minimum expected monetary value" approach (method No.5 described in Section
4.3.1). The selected action is Action 4, which has the lowest expected impact. It is a
risky action that favors profits over security. We use this approach to illustrate the
ability of the decision tree to modify the decision as additional information becomes
available.

4.4.2 Decision-making with Additional Information

In our example, when we gave the probabilities of the chance events, we did not consider
the influence of weather, such as lightning. But experience suggests that there is a close
relationship between lightning and line outages. To identify these relationships, we may
gather data and determine the following relations ('lightning' → LT, 'no lightning' →
NoLT, 'no outage' → Noout, 'outage 130-120' → Out1, 'outage 130-230' → Out2):

P(LT|Noout)=0.01; P(NoLT|Noout)=0.99

P(LT|Out1)=0.99; P(NoLT|Out1)=0.01

P(LT|Out2)=0.99; P(NoLT|Out2)=0.01

Whether there is lightning can thus be regarded as additional information for the
selection of the corrective/preventive action. The prior probability of each chance event
(P(Noout), P(Out1), P(Out2)), listed in Figure 4-2, should be modified by Bayes'
theorem using this information, which improves the accuracy of the probability of each
chance event. The updated probability of "No Outage" is obtained as follows:

P(Noout|LT) = P(LT|Noout)P(Noout) / [P(LT|Noout)P(Noout) + P(LT|Out1)P(Out1) + P(LT|Out2)P(Out2)]

P(Noout|NoLT) = P(NoLT|Noout)P(Noout) / [P(NoLT|Noout)P(Noout) + P(NoLT|Out1)P(Out1) + P(NoLT|Out2)P(Out2)]

Other updated probabilities can be obtained similarly. These probabilities are shown in
Figure 4-3. The risk of each action is then recalculated and listed in parentheses under
each action branch. If the weather forecast shows that there will be lightning in the next
time period, the selected action is Action 2. If no lightning is forecast, the selected
action is Action 4. The result is easy to explain: when there is no lightning, the
probability of an outage is very small, so the influence of risk can be neglected and the
action with the highest profit is selected, i.e., Action 4 in this example, even though it
has the highest risk among the four actions. But if there is lightning, the probability of
an outage increases drastically, and the influence of risk results in the selection of
Action 2, a much more conservative action.
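The Bayesian update can be sketched in a few lines of Python (variable names are illustrative). The priors come from Table 4-2, the likelihoods from the lightning relations above, and the impacts from Figure 4-2; the posterior for "No Outage" given lightning and the two branch decisions match the figure.

```python
# Hedged sketch of the Bayesian update used in Figure 4-3: revise the prior
# contingency probabilities with the lightning evidence via Bayes' theorem,
# then recompute the expected impact of each action on each forecast branch.

prior = {"Noout": 0.9999, "Out1": 4.58e-5, "Out2": 9.16e-5}  # Table 4-2
p_lt = {"Noout": 0.01, "Out1": 0.99, "Out2": 0.99}  # P(LT | contingency)

def posterior(likelihood, prior):
    """Bayes' theorem over the three mutually exclusive contingencies."""
    joint = {c: likelihood[c] * prior[c] for c in prior}
    total = sum(joint.values())
    return {c: joint[c] / total for c in joint}

post_lt = posterior(p_lt, prior)
post_nolt = posterior({c: 1 - p_lt[c] for c in p_lt}, prior)

# Impact (cost) of each action under each contingency, in $ (Figure 4-2):
# security impact minus profit.
impact = {
    "A1": {"Noout": -20385, "Out1": 835294, "Out2": 650836},
    "A2": {"Noout": -19902, "Out1": 215647, "Out2": 113559},
    "A3": {"Noout": -10602, "Out1": 209509, "Out2": 122859},
    "A4": {"Noout": -22595, "Out1": 1105287, "Out2": 648626},
}

def best_action(probs):
    """Action with the lowest expected cost under the given probabilities."""
    expected = {a: sum(probs[c] * impact[a][c] for c in probs) for a in impact}
    return min(expected, key=expected.get)

print(round(post_lt["Noout"], 4))  # ~0.9866, as in Figure 4-3
print(best_action(post_lt))        # conservative choice under lightning
print(best_action(post_nolt))      # profit-driven choice otherwise
```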
The chance node for the lightning forecast splits the tree into a Lightning branch and a
No-Lightning branch; each branch repeats the four actions with the updated contingency
probabilities, and the expected impact of each action is given in parentheses:

Lightning branch (No Outage 0.9866; Outage 130-120 0.0045; Outage 130-230 0.0089):
Action 1 (-10,551$): -20,385$; 835,294$; 650,836$
Action 2 (-17,654$): -19,902$; 215,647$; 113,559$
Action 3 (-8,423$): -10,602$; 209,509$; 122,859$
Action 4 (-11,543$): -22,595$; 1,105,287$; 648,626$

No-Lightning branch (No Outage 1.0; Outage 130-120 0.0; Outage 130-230 0.0):
Action 1 (-20,385$): -20,385$; 835,294$; 650,836$
Action 2 (-19,902$): -19,902$; 215,647$; 113,559$
Action 3 (-10,602$): -10,602$; 209,509$; 122,859$
Action 4 (-22,595$): -22,595$; 1,105,287$; 648,626$

Figure 4-3 The Decision Tree with Additional Information


4.5 Multi-Objective Decision Making

4.5.1 Shortcomings of Single Criterion Risk-based Approaches


From the results obtained in Section 4.3, it is apparent that the methods that do not use
probability data (methods No.1-3) are quite conservative, as they select the actions that
lead to the safest or most secure outcomes. On the other hand, the methods that do use
probability data (methods No.4 and No.5) are quite risky, as they select the actions with
the higher profit but also the higher risk. Intuitively we know that a proper decision
paradigm for power system operations should result in decisions between these two
extremes. We believe that such a decision paradigm can be obtained by improving
method No.4. Specifically, with respect to method No.4, solutions are sought that
overcome the following weaknesses:
1. The use of risk as the single measure of the security level might not be
enough. As indicated before, risk is the mean of the cost consequences (or
expected impact) of an insecurity event. Two operating points with the same
risk value may correspond to two totally different situations. One situation
may lead to catastrophic consequences but with low probability. The other
situation may have a high probability of occurrence, but a low impact.
Although they have the same risk value (expected impact), the operator is
certainly not indifferent between these cases. Risk alone does not distinguish
the operator's preference between them.
2. The values for the profits obtained with each action appear to be
incommensurate with the values for risk. An increase of risk by $1 is not
compensated by an increase of profit by $1. Since risk is the product of
probability and impact, the sometimes-large impact is weighted by the very
low occurrence probability of the insecurity events, resulting in a small risk
value. Adhering to the actions suggested this way leads to exclusively profit-
driven operation of the system. In reality, plant operators have a more
conservative attitude and they do consider security aspects. The above-
mentioned objective (maximizing profits minus risk) therefore does not
reflect the operator's attitude and, thus, its use should not be recommended.

Several ways exist to distinguish the two cases mentioned in weakness no. 1. One of
them is using higher-order moments, such as the variance (V) or standard deviation (σ)
of the impact (eq. 4-2) [9]. The standard deviation measures the deviation from the
mean and is a good way to evaluate the uncertainty associated with an action.
Minimizing this uncertainty is now a third criterion. From this point on, only the
standard deviation (σ) will be used. Returning to the two cases mentioned in weakness
no. 1, they can now be distinguished, since the first case would have a very large σ,
while the σ in the second case would be more limited.
V(Ai) = Σ_K Pr(K|Ai) · Im²(K|Ai) − (Σ_K Pr(K|Ai) · Im(K|Ai))²

σ(Ai) = √V(Ai)    (eq. 4-2)

where Ai corresponds to action i and the sums are taken over the contingencies K.
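A minimal sketch of (eq. 4-2) follows. The two impact distributions below are invented purely to illustrate weakness no. 1: both have the same risk (expected impact) but very different standard deviations. The Table 4-11 values are not reproduced here, since the report's risk computation also folds in instability probabilities that are not given in this section.

```python
# Illustrative computation of risk (mean impact) and standard deviation per
# (eq. 4-2), for a discrete distribution of (probability, impact) pairs.

from math import sqrt

def risk_and_sigma(pairs):
    """Return (mean, sigma) where V = E[Im^2] - (E[Im])^2, sigma = sqrt(V)."""
    mean = sum(p * im for p, im in pairs)
    second = sum(p * im ** 2 for p, im in pairs)
    return mean, sqrt(second - mean ** 2)

# Hypothetical distributions: a rare, severe event vs. a frequent, mild one.
# Both have the same expected impact (risk = 100) but very different sigma,
# which is exactly the distinction that risk alone cannot make.
rare_severe = [(0.999, 0.0), (0.001, 100000.0)]
frequent_mild = [(0.9, 0.0), (0.1, 1000.0)]

mean_rare, sigma_rare = risk_and_sigma(rare_severe)
mean_mild, sigma_mild = risk_and_sigma(frequent_mild)
print(mean_rare, sigma_rare)  # same mean, large sigma
print(mean_mild, sigma_mild)  # same mean, small sigma
```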

The values of the standard deviations for each action are presented in Table 4-11.

Table 4-11 Standard Deviation for Each Option


Action 1 Action 2 Action 3 Action 4
Profits ($/hr) 20,385 19,902 10,602 22,595
Risk or mean ($/hr) 9.29 2.12 2.06 10.44
Standard deviation ($/hr) 1,725 935 921 1,829

A first step to improve the objective 'maximizing Profits minus Risk' would be to
include a term for the standard deviation to be minimized (with a minus sign), which is
also expressed in the same monetary units. However, the problem mentioned in
weakness no. 2 still remains, i.e., the incommensurability of the attributes to be
optimized, now including the standard deviation. The most common way to get around
this problem is to use weight coefficients to give appropriate importance to each
individual attribute (eq. 4-3).

Max f(x) = α·Profits(x) − β·Risk(x) − γ·σ(x)    (eq. 4-3)

The weights could be provided by the operator according to his priority with respect to
profits, risk and σ and how he would feel about the trade-offs between them. However,
this approach is inappropriate because the weights given like this are arbitrary: their
values will highly depend on the state of mind of the operator at the time of the inquiry.
Arbitrary weights will lead to inconsistent results.
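The sensitivity to arbitrary weights can be demonstrated directly. In the sketch below (Python), the attribute values come from Table 4-11, while the two weight sets are hypothetical; the selected action flips from Action 4 to Action 2 as the security-related weights are increased, which is precisely the inconsistency described above.

```python
# Weighted-sum objective of (eq. 4-3) with the Table 4-11 attribute values.
# The two weight sets are hypothetical; the point is that the selection
# flips with the (arbitrary) weights.

profits = {"A1": 20385, "A2": 19902, "A3": 10602, "A4": 22595}
risk    = {"A1": 9.29,  "A2": 2.12,  "A3": 2.06,  "A4": 10.44}
sigma   = {"A1": 1725,  "A2": 935,   "A3": 921,   "A4": 1829}

def select(alpha, beta, gamma):
    """Action maximizing alpha*Profits - beta*Risk - gamma*sigma."""
    score = {a: alpha * profits[a] - beta * risk[a] - gamma * sigma[a]
             for a in profits}
    return max(score, key=score.get)

print(select(1, 1, 1))    # profit dominates -> Action 4
print(select(1, 100, 10)) # security weighted up -> Action 2
```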
Several alternatives exist to provide values for the weights in equation 4-3 in a more
systematic and robust way. The key is to obtain additional information from the DM
from which the weights can be extracted. One possible way of doing this is by asking
the DM to determine several sets of attribute values for which he is indifferent [3].
Because this introduces a third criterion, we resort to multi-criteria decision-making
methods. Several approaches exist to deal with decision-making problems with various
objectives or criteria, i.e. Multi-Criteria Decision Making (MCDM). Nevertheless, it is
important to keep in mind that an action that optimizes all criteria is very unlikely to
exist. In the following a summarized overview of existing approaches is presented, and
one of them will be applied to the example presented in Section 4.2.

4.5.2 Literature Review on Multi-criteria Decision Making


An intuitive way of dealing with multiple objectives is to assign weights to the
objectives according to their importance in the eyes of the DM. In [10], this decision-
making method is approached from an academic point of view. In the 60's and the 70's
a lot of effort was spent on the development of the value and utility theory and its
application to problems with multiple objectives through the use of multi-attribute
utility functions[11][12]. A derivative of these methods, proposed in 1980 and called
Analytical Hierarchy Process (AHP)[13], was particularly well suited for problems in
which objectives have a hierarchical structure. In the early 70's a new kind of approach,
the outranking methods, emerged; these methods introduced a more subtle relation
between alternatives -- the outranking relation. Several different versions and
adaptations appeared, finding many applications in Europe [14]-[18]. The 80's saw a
wide proliferation of methods with varying degrees of success, and a classification of
the methods therefore became necessary. A comprehensive survey of MCDM methods
and applications is presented in [19][20]. A comparison of the results obtained with
several methods applied to one problem is presented in [21]. Another excellent
overview, including more recent methods, is presented in [22].

4.5.3 Overview
An essential measure of integrity for a method is the degree of confidence the DM has
in the method. For relatively simple problems, the decision made with the aid of
MCDM should be compared with the decision that the DM would take without any
assistance. The best method can be identified by giving the DM similar decision-making
cases and comparing the action chosen by the DM to that suggested by the methods.
Another way to select an appropriate method is to look at methods that have been
successfully applied to similar problems. In [20], several questions are listed to help the
user to evaluate different multiple criteria decision- making methods.

Many attempts to classify decision-making methods have been made. First, it is
necessary to distinguish the attributes of a decision-making problem. A problem can be
characterized by: single or multiple criteria; a finite countable number or an infinite
number of alternatives; performance of an alternative with respect to a criterion
expressed either in directly measured values (dollars, miles, people, ...) or by a utility
function value; and the use of objective probability data, assumed subjective
probabilities, or no uncertainty at all. Table 4-12 provides a classification of the
methods. Each column header shows a different group of methods, and the rows
indicate the various characteristics. This table is not exhaustive, but it is believed that
most methods can be characterized by a set of these attributes. Any other method can
be added to one of the existing columns, or by creating a new column if it has a
different mix of attributes.

Methods involving multiple criteria have the particularity that they do not, and cannot,
provide an optimal solution. The process of getting a 'solution' (perhaps the term
'suggestion' would be more adequate) is based on additional subjective information
provided by the decision-maker characterizing his or her preference.
Table 4-12 Decision Making Methods

                        Minimax,        Expected   Multi-attribute    Linear        AHP,
                        Maximin,        monetary   utility function   Programming   Outranking
                        Minimax regret  value      methods
Number of objectives    single          single     multiple           single        multiple
Use of objective
probability data        No              Yes        No                 No            No
Number of alternatives  Finite          Finite     Finite             infinite      Finite
                        countable       countable  countable                        countable
Scores on criteria      values          values     utility function   values        values

4.5.4 Value or Utility–based Approaches

For a multi-objective decision-making problem, the units of the objectives are not
necessarily the same: some may be monetary units, some may be length units, and some
may be time units. One way to solve the multi-objective problem is therefore to convert
all of the objectives to a common, additive unit. Value or Utility-based approaches
fulfill this aim. They convert each objective to a corresponding value (utility) reflecting
the DM's preference for that objective. By adding the values or utilities of all objectives,
an index for a certain action is obtained, and this index can be used for making the
decision.

For a certain objective, it is always possible to find a function reflecting the user's
preference for one alternative over another. When the problem involves uncertainty with
respect to the outcome of the attributes, the preference functions are referred to as utility
functions; otherwise they are called value functions. The case described above has the
uncertainty already embedded into the risk and variance attribute values: these
attributes have only one possible value per option. Technically, there is no uncertainty
about their values, so value functions will be defined and the concept of preference value
will be used in the following paragraphs.

Preference value, an economic concept that has been part of economic theory for
centuries, helps describe rational human behavior in economic decision-making. It
reflects the DM's preference, or lack of it, for a certain variable. For example, a human
being's preference for money is not proportional to the amount of money, as
represented by the dotted line in Figure 4-4; instead, it is more like the curve in
Figure 4-4. When one has little money, a small increase of money may provide great
pleasure. When one has much money, the pleasure of getting more does not increase in
proportion to how much more is obtained.
Figure 4-4 An Example of a Value Function (value plotted against money: a concave
curve, compared with the proportional dotted line)
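A concave value curve of the kind sketched in Figure 4-4 can be written, for illustration, as a normalized exponential. Both the functional form and the curvature parameter rho below are assumptions made for this sketch, not values taken from the report.

```python
# Hedged sketch of a concave (risk-averse) value function, normalized so
# that value(0) = 0 and value(y_max) = 1, illustrating Figure 4-4.

from math import exp

def value(y, y_max, rho):
    """Exponential value function; rho is a hypothetical curvature
    parameter chosen here purely for illustration."""
    return (1 - exp(-y / rho)) / (1 - exp(-y_max / rho))

y_max = 50000  # upper limit of the profit range (Table 4-13)
rho = 20000    # assumed curvature

# Diminishing marginal value: the first $10,000 is worth more than the step
# from $40,000 to $50,000, even though both are $10,000 increments.
first_step = value(10000, y_max, rho) - value(0, y_max, rho)
last_step = value(50000, y_max, rho) - value(40000, y_max, rho)
print(first_step > last_step)  # True
```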

In our risk-based decision-making procedure, a certain quantity of profit and the same
quantity of risk (assuming both are in monetary units) will not have the same absolute
preference value (the preference value for risk is negative). We have already seen that
probabilities for power system events are very low, so that even though the impact of
an outage may be high, the risk, which is the product of impact and probability, will be
very small and typically much smaller in magnitude than the profit. This may cause the
risk to be neglected during decision-making. In fact, though the risk of some events is
low, their high impact may give the operator a very strong negative preference value for
these events, because the impact may be unaffordable or unbearable to the operator's
company. That means the operator does not decide based on the relative profit and risk
magnitudes, but rather on his/her preferences for the profit and risk, i.e., his/her
preference values.

In Section 4.5.4.1, we introduce a procedure for using the Value method for
corrective/preventive action selection. The objectives are to maximize profit, minimize
risk and minimize variance. The corresponding profit, risk and variance are shown in
Table 4-11.

4.5.4.1 Define the Scales of Measurement of the Objectives

It is not sufficient merely to identify the objectives. Since the quantity (amount, level) of
each objective is to be estimated during the analysis, and since a value function is to be
formulated for each objective, the objectives must be unambiguously defined and their
measurement scales must be specified.

For our problem, the definition of the measurement scales is clear; they are shown in
Table 4-11. But sometimes the measurement scales of objectives are not easily
specified; Lifson [23] introduced a solution to this problem.

4.5.4.2 Develop Value Functions

This is the most important stage in applying the Value method. The Value functions for
the set of objectives should satisfy the following requirements:
The Value function for a given objective should represent the DM’s preference for
various quantities of that objective over the range of available choices.

The Value functions for the set of objectives should represent the DM’s preference
for trade-off between the objectives.

The Value of the various objectives should be measured on some Value scale so that
the expected Value of individual objectives can be meaningfully combined into a
single expected Value of a candidate action.

The following procedure will produce a set of Value functions that satisfy these
requirements.

Step 1. Specify a Range of Interest

For each objective, specify lower and upper limits of the range of interest. These limits
are based on an understanding of the particular decision situation under consideration.
The range of interest should be broad enough to include all anticipated consequences.

In this example, for simplification, we choose the lower and upper limits according to
the magnitude of each objective, as shown in Table 4-13.

Table 4-13 Upper and Lower Limits of Objectives

Objective               Lower Limit (yL)   Upper Limit (yU)

1. Profit               0                  50,000

2. Risk                 0                  20

3. Variance of impact   0                  5,000,000

Step 2. Identify the threshold

Since the range of interest specified in Step 1 may include both desirable and
undesirable quantities of an objective, it must also include a neutral contribution to
success or failure. This neutral point is the threshold, designated yT. The Value of the
threshold is 0, i.e. U(yT)=0.

In our example we assign the lower limit of each objective as the threshold.

Step 3. Define Value Scales


Our decision rule requires a cardinal Value scale for measuring preferences. Defining a
cardinal scale requires arbitrarily anchoring two and only two points on the scale to
designated phenomena or quantities.

For each objective, therefore, two relative worth points are arbitrarily designated. One
of these points is defined in Step 2: the Value of the threshold is set equal to zero. The
second point is determined by setting the most preferred (or most disliked) amount of each
objective equal to a utility of 1 (or -1).

In our example, we set: U1 (yU)=U1 (50,000)=1

U2 (yU)=U2 (20)=-1

U3 (yU)=U3 (5,000,000)=-1

Here, 1,2, 3 correspond to profit, risk and variance respectively.

Step 4. Develop the Value Functions

Available methods for estimating Value functions have been summarized in [28]. Four
approaches have been distinguished: direct measurement; the von Neumann-
Morgenstern or standard reference contract method; the modified reference contract
approach and the Ramsey method.

The derived Value function has two forms: one is a curve in the Objective-Value plane;
the other is a mathematical expression.

One characteristic of the Value function is that it can reflect the DM’s attitude to risk, i.e.
risk-averse or risk-seeking. In our example it is assumed there are two DM’s who will
face the same decision-making problem, one is risk-averse and another is risk-seeking.
For simplification, we assume that the Value function of each objective is of an
exponential form. After assigning the magnitude of the exponent and according to the
defined threshold and Value scales, the corresponding Value function can be obtained.
The risk-averse Value functions for each objective are:

U1(x) = 1.0068 (1 − e^(−0.0001x))        (x : profit)

U2(y) = 0.0187 (1 − e^(0.2y))            (y : risk)

U3(z) = 0.0524 (1 − e^(0.0000006z))      (z : variance)

And the risk-seeking Value functions for each objective are:

U1(x) = 0.0068 (−1 + e^(0.0001x))        (x : profit)

U2(y) = 1.0187 (−1 + e^(−0.2y))          (y : risk)

U3(z) = 1.0524 (−1 + e^(−0.0000006z))    (z : variance)

The corresponding Value curves of each objective for both DMs are shown in Figures 4-5,
4-6 and 4-7 respectively. The risk-averse functions are represented by the broken lines,
and the risk-seeking functions by the solid lines.
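As a quick check, the anchor points of Steps 2 and 3 can be verified numerically. The sketch below evaluates the risk-averse value functions assumed above at the threshold and at the upper limits; the function names are ours, not from the report.

```python
import math

# Risk-averse value functions of exponential form, as assumed above;
# x = profit, y = risk, z = variance of impact.
def u_profit_averse(x):
    return 1.0068 * (1 - math.exp(-0.0001 * x))

def u_risk_averse(y):
    return 0.0187 * (1 - math.exp(0.2 * y))

def u_variance_averse(z):
    return 0.0524 * (1 - math.exp(0.0000006 * z))

# Anchor checks: U(threshold) = 0, and U at the upper limit is close
# to the Value-scale anchors of Step 3 (1 for profit, -1 otherwise).
# u_profit_averse(0) is 0, u_profit_averse(50_000) is about 1,
# u_risk_averse(20) is about -1, u_variance_averse(5_000_000) is about -1.
```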

Figure 4-5 Value Curves of Profit

Figure 4-6 Value Curves of Risk


Figure 4-7 Value Curves of Variance

Step 5. Determine the Scaling Factor


The set of scaling factors {Wj} is the mechanism that ensures proper trade-offs between
or across objectives, where "proper" means consistent with the DM's perception of the
relative desirability of amounts of the different objectives, that is, with the DM's
opinion of the relative contribution of levels of the various objectives to the final
decision. Once the scaling factors are given, each preliminary Value function should be
transformed into an equivalent final Value function with Value measured on a common
scale. The transformation is as follows:

Ui,final(y) = Ui(y) · Wi        (eq. 4-4)

Here, i = 1, ..., n indexes the objectives.
How are the scaling factors obtained? The hierarchy method can be used to help the DM
specify them. A decision-making problem can be decomposed into a hierarchy structure
(Figure 4-8) according to its objectives and sub-objectives.
                         Action
         /                 |                 \
   Objective 1       Objective 2    ...   Objective n
        |
  Subobjective 1  ...  Subobjective i

Figure 4-8 Hierarchy Structure for the Decision-making Problem

A total score is arbitrarily selected to represent a perfect ideal action. Then this ideal
score is allocated among the objectives. The procedure of allocating a score among the
sub-objectives is continued until scores have been placed in all blocks of the hierarchy.
The scores so assigned to the sub-objectives of the lowest level are the scaling factors to
be used in equation (4-4).

In our example, we assume that all of the objectives have the same contribution to the
final decision, i.e., the scaling factor for each objective equals 1: W1= W2= W3=1.

4.5.4.3 Making Decisions based on the Value

After obtaining the final Value function (the product of the preliminary Value function
and the scaling factor) for each objective, we can compute the total final Value of each
action as:

TotalFinalValue_i = ∑j Uj,final(y_ij)        (eq. 4-5)

The selected action is then the one with the highest total final Value.

In Table 4-14, the final Value of each objective (obtained from eq. 4-4) and the total
final Value of each action are listed for the risk-averse DM and the risk-seeking DM.
Table 4-14 The Value of Example Case

DM             Objective   Action 1   Action 2   Action 3   Action 4
Risk-Averse    Profit       0.8757     0.8692     0.6581     0.9017
               Risk        -0.1012    -0.0099    -0.0095    -0.1322
               Variance    -0.2600    -0.0362    -0.0348    -0.3372
               Total        0.5145     0.8231     0.6138     0.4323
Risk-Seeking   Profit       0.0454     0.0430     0.0128     0.0583
               Risk        -0.8598    -0.3520    -0.3440    -0.8924
               Variance    -0.8758    -0.4298    -0.4196    -0.9109
               Total       -1.6902    -0.7388    -0.7508    -1.7450

From Table 4-14, we can see that the risk-averse DM and the risk-seeking DM both select
Action 2, although the corresponding Values differ greatly. Why do both DMs select the
same action? The reason is that, in this example, Action 2 has nearly the highest profit
and nearly the lowest risk and variance, so it is superior to the other actions in most
DMs' views.
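The computation of eq. 4-5 for the risk-averse DM can be sketched in a few lines; the per-objective values are taken from Table 4-14, and the variable names are ours.

```python
# Per-objective final values for the risk-averse DM (rows of Table 4-14,
# in the order profit, risk, variance).
values = {
    "Action 1": [0.8757, -0.1012, -0.2600],
    "Action 2": [0.8692, -0.0099, -0.0362],
    "Action 3": [0.6581, -0.0095, -0.0348],
    "Action 4": [0.9017, -0.1322, -0.3372],
}
weights = [1.0, 1.0, 1.0]  # W1 = W2 = W3 = 1, as assumed in Step 5

# eq. 4-5: total final value is the weighted sum over the objectives
totals = {a: sum(w * u for w, u in zip(weights, us))
          for a, us in values.items()}
best = max(totals, key=totals.get)  # Action 2, with total 0.8231
```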

4.5.5 ELECTRE IV
In most MCDM methods, the outcome is a ranking of the alternatives, with possible ties.
However, in some situations, given the preferences of the DM, no distinction can be made
between certain alternatives. Despite this evidence that no distinction should be made,
many methods force a distinction by making overly strong assumptions about the
preferences stated by the DM, and some methods cannot provide any solution at all. In
such cases, the requirement that every pair of alternatives be comparable (one preferred,
or the two equivalent) is restrictive. In the approach presented in this section,
ELECTRE IV, this restriction is dropped: two alternatives may be declared incomparable
with one another.

The first of a series of outranking methods called ELECTRE appeared in 1968, and after
that several more developed and advanced versions came out [14]-[17]. In this section,
the ELECTRE IV method [18] will be applied to the decision-making case presented in
section 4.2. Each step of the method will be explained in detail.
4.5.5.1 Main Steps of the Method

The ELECTRE IV method is attractive for the following reasons:

1. The amount of information required from the DM is limited and easy to provide; for
example, there is no need to give the relative importance of the criteria.

2. The method will not draw strong conclusions if the available data do not permit it to
do so, and it can provide a solution where other methods cannot due to insufficient
data.

As mentioned in reason no. 1, this method does not require the DM to express his or her
priorities over the criteria. Instead, he or she should indicate thresholds of
indifference and preference. An indifference threshold for a particular criterion is the
maximum change in the attribute of that criterion to which the DM is indifferent; in
more common language, it is the largest change that goes unnoticed. A preference
threshold is the smallest difference between two attributes of one criterion on which
the DM can base a preference. These thresholds can be either fixed or dependent on the
value of the attribute for a particular criterion. The indifference threshold can also
be regarded as a way of taking into account the inaccuracy of the pay-off values. The
main steps of the method are as follows:

Step 1: setting the thresholds


For an action a to be strongly preferred to an action a’ with respect to criterion gi,
the following condition should be fulfilled:

gi (a ) ≥ gi (a ' ) + pi ( gi (a ' ))

where pi is the preference threshold depending on the value gi(a’). An action a would
be called weakly preferred to an action a’ with respect to criterion gi, if the following
condition is satisfied.

gi (a ' ) + pi ( gi (a ' )) ≥ gi (a ) ≥ gi (a ' ) + qi ( gi (a ' ))

where qi is the indifference threshold depending on the value gi(a’).

This concept of thresholds is illustrated in Figure 4-9. The value u represents the
difference between the scores of two alternatives for one criterion.

u = g i (a' ) − g i (a) (u ≥ 0)

When u is smaller than qi, a and a’ are said to be indifferent to each other for
criterion i (region 1). For u between qi and pi, a’ is declared weakly preferred to a
(region 2), while when u is larger than pi, a’ is strictly preferred to a (region 3).
The veto threshold vi, which bounds region 4, is used in Step 3 to distinguish between
strong and weak outranking relations.
The same reasoning applies when u is defined as u = gi(a) − gi(a’) (u ≥ 0).

   (1)        (2)        (3)        (4)
-------- qi -------- pi -------- vi -------->  u

Figure 4-9 Preference and Indifference Thresholds

Step 2 – checking the strong and weak preferences


With the thresholds provided in the first step, the alternatives are compared with each
other on their scores for the different criteria. For each pair of alternatives, it is
counted for how many criteria one alternative is preferred to the other. This is
repeated for three levels of preference: weak, strong and veto. The veto threshold is
usually taken as two times the strong preference threshold.
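A minimal sketch of this pairwise classification follows, assuming a criterion where larger attribute values are better and fixed thresholds q < p < v; the function name and signature are ours, not from the report.

```python
# Classify the preference of action a over a' for one criterion, given
# the indifference (q), preference (p) and veto (v) thresholds.
def classify(g_a, g_a_prime, q, p, v):
    u = g_a - g_a_prime  # difference in scores; u >= 0 assumed
    if u < q:
        return "indifferent"
    if u < p:
        return "weak"
    if u < v:
        return "strong"
    return "veto"

# With the profit thresholds assumed later in Table 4-15
# (q = 0.3, p = 0.5, v = 1), a score difference of 0.4 is only a
# weak preference, while a difference above 1 triggers the veto level.
```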

Step 3: defining the outranking relations


Two outranking relations are introduced, based on the previous preference concepts:

aSFa’: a strongly outranks a’ if no criterion exists for which a’ is strongly preferred
to a, and the number of criteria for which a’ is weakly preferred to a is at most equal
to the number of criteria for which a is preferred (weakly or strongly) to a’.

aSfa’: a weakly outranks a’ if no criterion exists for which a’ is strongly preferred to
a but the second condition for strong outranking is not fulfilled; or if there exists a
unique criterion for which a’ is strictly preferred to a, under the condition that the
difference in favor of a’ is not larger than the veto threshold and that a is strictly
preferred for at least half of the criteria.

Step 4: distillation
In this step it is verified for each action how many other actions it strongly outranks,
and by how many other actions it is strongly outranked. The difference between both is
called the strong qualification. A weak qualification is obtained in a similar fashion.
Two rankings are obtained. For the first one, descending distillation, the action with the
largest strong qualification is selected and receives the rank number 1. The
qualifications of the remaining alternatives are recalculated without the selected
alternative. The alternative that has the highest qualification is selected this time for the
second spot. This is continued until all alternatives have been selected. In case of a tie in
the strong qualifications, the weak qualifications are used to untie.

A second ranking, ascending distillation, is obtained with the same procedure, but now
the alternative with the lowest strong qualification is selected for the lowest rank.
The qualifications are recalculated, and again the alternative with the lowest
qualification is selected. This is repeated until all options are selected.
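The descending distillation of Step 4 can be sketched as follows. For brevity, this version groups tied actions directly into one rank instead of applying the weak-qualification tie-break described above; the `strong` mapping encodes the strong outranking relations of the worked example (Table 4-20).

```python
# Descending distillation: repeatedly pick the action(s) with the
# largest strong qualification, remove them, and recompute.
def descending_distillation(strong):
    remaining = set(strong)
    ranking = []
    while remaining:
        # strong qualification = (# actions strongly outranked)
        #                      - (# actions strongly outranking),
        # counted among the remaining actions only
        qual = {a: sum(1 for b in strong[a] if b in remaining)
                   - sum(1 for b in remaining if a in strong[b])
                for a in remaining}
        best_q = max(qual.values())
        tier = {a for a in remaining if qual[a] == best_q}
        ranking.append(tier)
        remaining -= tier
    return ranking

strong = {1: {4}, 2: {1, 3, 4}, 3: set(), 4: set()}  # Table 4-20
# descending_distillation(strong) reproduces the ranking 2, 1, {3, 4}.
```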
Step 5: final ranking
A final ranking is obtained by combining the two rankings obtained in the previous
step. This ranking can be represented by a graph. An arrow points from the node
representing the preferred action to the node of the outranked action (e.g., from a to b
in Figure 4-10). Two equivalent actions are represented by the same node (d and e).
Actions that are incomparable are not linked with an arrow but are located at the same
ranking level (b and c).
a → b → (d, e)
a → c → (d, e)        (b and c are incomparable, at the same level; d and e share one node)

Figure 4-10 Example of Final Ranking with ELECTRE IV

4.5.5.2 Results with ELECTRE IV

The ELECTRE IV method will now be applied to the decision problem presented in
section 4.2.

Step 1 – defining the thresholds

Assume that the DM chooses the following thresholds, with the veto thresholds taken as
twice the preference thresholds.

Table 4-15 Threshold Values

            Indifference   Preference   Veto
Profits     0.3            0.5          1
Risk        0.05           0.2          0.4
Variance    0.15           0.4          0.8

Step 2 – checking the strong and weak preferences

The following tables indicate for how many criteria the action at the left of a row is
preferred to the action at the top of the column. Table 4-16 refers to the weak
preference, Table 4-17 to the strong preference, and Table 4-18 shows the result with
respect to the veto preference.

Table 4-16 Weak Preferences

Weak preference   Action 1   Action 2   Action 3   Action 4
Action 1          -          0          0          1
Action 2          0          -          0          0
Action 3          0          0          -          0
Action 4          0          0          0          -

Table 4-17 Strong Preferences

Strict preference   Action 1   Action 2   Action 3   Action 4
Action 1            -          0          1          0
Action 2            2          -          1          2
Action 3            2          0          -          2
Action 4            0          0          1          -

Table 4-18 Veto Preferences

Veto preference   Action 1   Action 2   Action 3   Action 4
Action 1          -          0          0          0
Action 2          1          -          0          1
Action 3          1          0          -          1
Action 4          0          0          1          -

Step 3 – outranking relations


In this step, for each pair of actions, it is decided whether one action strongly or
weakly outranks the other, or does not outrank it at all. Two actions can also outrank
each other; consequently, the outranking hypothesis should be checked in both
directions.

An ‘F’ indicates that the action at the left strongly outranks the action at the top of
the column, while a lowercase ‘f’ indicates a weak outranking relation. The results are
shown in Table 4-19. It can be seen there that action 1 strongly outranks action 4,
while action 4 weakly outranks action 1.

Table 4-19 Weak and Strong Outranking Relations

Outranking   Action 1   Action 2   Action 3   Action 4
Action 1     -          0          0          F
Action 2     F          -          F          F
Action 3     f          f          -          0
Action 4     f          0          0          -

Step 4 – distillation procedure

Table 4-19 is now used to extract two rankings, one by descending distillation and one
by ascending distillation. In Table 4-20 the entries below each action list the actions
that it strongly outranks; in Table 4-21 the weakly outranked actions are listed.

Table 4-20 Strong Outranking Relations

Action 1   Action 2   Action 3   Action 4
4          1, 3, 4    -          -

Table 4-21 Weak Outranking Relations

Action 1   Action 2   Action 3   Action 4
-          -          1, 2       1

The qualifications can now be obtained for each action. The strong qualification of an
action is the difference between the number of actions that it strongly outranks and the
number of actions by which it is strongly outranked. The weak qualification is obtained
in a similar fashion. The results are displayed in Table 4-22 and Table 4-23.
Table 4-22 Strong Qualifications
Action 1 Action 2 Action 3 Action 4
0 3 -1 -2

Table 4-23 Weak Qualifications


Action 1 Action 2 Action 3 Action 4
-2 -1 0 0

The descending distillation works as follows. The action with the highest strong
qualification is chosen; in this case it is action 2. In case of a tie, the weak
qualifications are used to untie; when even the weak qualifications are the same, the
tied actions are selected together and considered equivalent for that ranking
procedure. Action 2 is ranked first and consequently removed from Table 4-20 and Table
4-21, and the new qualifications are calculated. The distillation procedure is continued
until all actions are ranked. The ranking obtained this way is:

2 → 1 → 3,4

Actions 3 and 4 are equivalent in this ranking. The ascending distillation works in the
same way but starts by selecting the action with the lowest strong qualification, which
is action 4. It is ranked last and then removed from the two tables, and so on. The
ranking obtained this way is:

2 → 3 →1→ 4

Step 5 - graphical final order

From the two rankings obtained in the previous step, a final order can be extracted. In
both rankings, action 2 occupies the first spot, so it receives the first priority.
Next, actions 1 and 3 occupy spots 2 and 3 respectively in the first ranking, and spots
3 and 2 in the second; these two actions are therefore declared incomparable. Finally,
both actions 1 and 3 are ranked higher than action 4 in both rankings, so action 4
receives the last priority.

Graphically, the relations can be visualized as follows:

2 → 1 → 4
2 → 3 → 4        (actions 1 and 3 are incomparable, at the same level)

Figure 4-11 Final Ranking


From this result, the DM knows that there are strong reasons to favor action 2 over all
the others, while no strong reasons against this decision exist. Because the method
allows incomparability (a characteristic called non-prescriptiveness), it does not
recommend, in this case, a second-best solution, as Actions 1 and 3 are incomparable;
should a choice between 1 and 3 be necessary, the DM would have to analyze the
alternatives further to make a final decision. However, it should be pointed out that,
if a complete order (without ties) of the alternatives is desired, other variants of the
ELECTRE family of methods could be employed, as well as other outranking methods.

4.5.6 Other Methods

Promethee [24] – This method is quite similar to the ELECTRE III method, except that it
does not use a discordance index. It also takes advantage of the most recent
developments of preference modeling at that time.

Goal programming [25], comprehensive survey in [26] – The DM specifies a goal or a
target value for each of the criteria and tries to find the alternative that is closest
to those targets according to a measure of distance.

Lexicographic Method [3] (a.k.a. Housewife method) – The alternatives are first ranked
according to the most important criterion. The top subset is identified, and the
alternatives in this subset are ranked according to the 2nd most important criterion,
and so on, until only one alternative or a small subset of alternatives remains.

4.6 Evidential Theory

In this section, another useful method is introduced to address the multi-objective
decision-making problem. This method is based on Evidential Theory. Evidential
Theory offers an efficient way to represent uncertainty and to perform reasoning under
uncertainty. In Evidential Theory, each independent information source is regarded as a
piece of evidence. Information from evidence can be combined by applying Dempster’s
Rule of Combination. Results are obtained based on the combined information. In the
MCDM problem, each criterion can be regarded as one independent information
source. So we can use Dempster’s Rule of Combination to combine the information
from different criteria, and then we can make the decision based on the combined
result. One advantage of this method is that it can conveniently include multiple DM’s
in the MCDM, as each DM can also be treated as an independent information source.

In Section 4.6.1, a brief introduction to Evidential Theory is given. The method of
applying Evidential Theory in P/C action selection is illustrated in Section 4.6.2,
where the P/C action selection problem is treated as a single decision maker MCDM
problem and as a multiple decision maker MCDM problem, respectively.

4.6.1 Brief Introduction of Evidential Theory

In 1967, Dempster [29] proposed the concepts of upper and lower event probabilities.
Unlike familiar probabilities, upper and lower probabilities do not satisfy the
additivity relation. In 1968, Dempster [30] developed the rule for combining two sets of
evidence (i.e., two independent information sources); this rule is now called Dempster's
Rule of Combination. In 1976, Shafer refined the theory proposed by Dempster and
published a book entitled "A Mathematical Theory of Evidence" [27], which provided the
foundation of Evidential Theory (ET).

4.6.1.1. The Frame of Discernment and Basic Probability Assignment

If there is a decision-making problem and all of the possible results (θ1, θ2, …, θn) of the
decision are in set Θ, then Θ is called the Frame of Discernment (FD). Each subset of Θ
corresponds to a proposition.

A piece of evidence always supports one or several propositions that correspond to one
or several subsets of Θ. The degree of support can be quantified by the Basic Probability
Number (BPN), which satisfies:

(1) m(∅) = 0

(2) ∑A⊆Θ m(A) = 1        (eq. 4-7)

Here m is called the Basic Probability Assignment (BPA) of Θ. m(A) is called the Basic
Probability Number (BPN) of subset A and it is understood to be the measure of the
belief that is committed exactly to A. If A is a subset of Θ and m (A)>0, then A is called a
focal element. For each piece of evidence, one BPA can be formed.

The additive degrees of belief of the traditional methods, such as Bayesian theory,
correspond to an intuitive picture in which one’s total belief is susceptible of division
into various portions, and that intuitive picture has two fundamental features. First, to
have a degree of belief in a proposition is to commit a portion of one’s belief to it. And
second, whenever one commits only a portion of one’s belief to a proposition, one must
commit the remainder to its negation. One way to obtain a more flexible and realistic
picture is to discard the second feature while retaining the first. BPA corresponds to
such a picture. When giving BPA, instead of assigning probability or belief to each
element of Θ, the expert can assign his degree of belief (BPN) to some subset of Θ. If
there is no knowledge about the problem, 1 is assigned to the whole set Θ.
4.6.1.2. Belief and Plausibility Function

The quantity m(A) measures the belief that one commits exactly to A, not the total belief
that one commits to A. In order to obtain the measure of total belief committed to A, a
Belief Function is defined as:

Bel(A) = ∑B⊆A m(B)        (eq. 4-8)

Here, Bel(A) is called the belief of A, and it reflects the total belief committed to A.
Since Bel(A) does not reveal to what extent one doubts A, i.e., to what extent one
believes its negation, it is not a full description of one's belief about A. Therefore
we also define a Plausibility Function as:

Pl(A) = 1 − Bel(Ā) = ∑A∩B≠∅ m(B)        (eq. 4-9)

Pl(A) is called the plausibility of A; it reflects the extent to which one finds A
credible or plausible. For any subset A of Θ, the following relationship holds:

Bel(A) ≤ Prob(A) ≤ Pl(A)

Thus the Plausibility and Belief functions provide upper and lower bounds on the
probability of a subset. The pair (Bel(A), Pl(A)) can be used to represent the
uncertainty of A: (1, 1) means that A is true; (0, 0) means that A is false; and (0, 1)
means that A is unknown. The value Pl(A) − Bel(A) reflects the degree to which A is
unknown. ET can thus separate what is unknown from what is merely uncertain, which is a
great advantage of ET over other theories.
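Eqs. 4-8 and 4-9 can be sketched by representing subsets of the frame as Python frozensets; the helper names and the sample BPA below are illustrative only, not from the report.

```python
# m maps focal elements (frozensets) to their Basic Probability Numbers.
def bel(m, a):
    # eq. 4-8: sum m(B) over all focal elements B contained in A
    return sum(v for b, v in m.items() if b <= a)

def pl(m, a):
    # eq. 4-9: sum m(B) over all focal elements B intersecting A
    return sum(v for b, v in m.items() if a & b)

S = frozenset({"Support"})
O = frozenset({"Oppose"})
THETA = S | O
m = {S: 0.6, O: 0.1, THETA: 0.3}
# bel(m, S) = 0.6 and pl(m, S) = 0.9, so 0.6 <= Prob(Support) <= 0.9;
# the gap pl - bel = 0.3 is the mass left on Theta, i.e. the unknown part.
```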

4.6.1.3. Dempster’s Rule of Combination

For each piece of evidence, we can obtain a BPA and the corresponding Bel and Pl. When
there are several pieces of evidence, we obtain several BPAs. Dempster's Rule of
Combination offers a tool for aggregating these BPAs on the same FD; this can be viewed
as an information fusion procedure.

Assume Bel1 and Bel2 are two independent Belief functions on the space Θ, and let m1 and
m2 be the corresponding BPAs. Their combination is another BPA m, denoted m = m1 ⊗ m2.
Assume the focal elements of m1 and m2 are Ai (i = 1,...,k) and Bj (j = 1,...,l)
respectively; then the BPA m is:

m(A) = [ ∑Ai∩Bj=A m1(Ai)·m2(Bj) ] / [ 1 − ∑Ai∩Bj=∅ m1(Ai)·m2(Bj) ]   for A ≠ ∅    (eq. 4-10)

m(∅) = 0

where i = 1,...,k and j = 1,...,l. After obtaining the combined m, we can get the corresponding
Bel and Pl and then make the decision based on them.
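Dempster's rule (eq. 4-10) for two BPAs can be sketched as below, with subsets of Θ represented as frozensets. The function name is ours; the sample BPAs are the Action 3 entries of the risk-averse DM used later in Table 4-25.

```python
# Dempster's rule of combination for two BPAs over the same frame.
def combine(m1, m2):
    unnormalized = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:  # non-empty intersection contributes to m(inter)
                unnormalized[inter] = unnormalized.get(inter, 0.0) + ma * mb
            else:      # empty intersection is conflicting mass
                conflict += ma * mb
    k = 1.0 - conflict  # normalization factor of eq. 4-10
    return {a: v / k for a, v in unnormalized.items()}

S = frozenset({"Support"})
O = frozenset({"Oppose"})
THETA = S | O

# BPAs of Action 3 for the risk-averse DM (profit, risk, variance):
m_profit = {S: 0.6581, THETA: 0.3419}
m_risk = {O: 0.0095, THETA: 0.9905}
m_var = {O: 0.0348, THETA: 0.9652}

m = combine(combine(m_profit, m_risk), m_var)
# m[S] ≈ 0.6479, m[O] ≈ 0.0155, m[THETA] ≈ 0.3366 (cf. Table 4-25)
```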

4.6.2 Application of Evidential Theory in Corrective/preventive Action Selection

In Utility Theory, DMs can be classified as risk-averse, risk-seeking or risk-neutral
based on their attitude toward risk. This difference is reflected in their utility
functions. In this section, two operators are asked to select the P/C action: one is
risk-seeking, the other risk-averse. In Section 4.6.2.1, the two operators make their
decisions independently, so it is a single decision maker MCDM problem. In Section
4.6.2.2, they make the decision together, so it is a multiple decision maker MCDM
problem. In both situations, the procedure for P/C action selection is the same: first
use Evidential Theory to appraise each action; then select one action based on an
assessment of the appraisals.

4.6.2.1. Single Decision maker MCDM


4.6.2.1.1 Appraisal of each action
First, we form the Frame of Discernment (FD). Since the problem is to appraise each
action, the FD should contain all of the possible appraisal results. In this case, we
assume that the appraisal of each action is either ‘Support’ or ‘Oppose’, so the FD Θ is
{Support, Oppose}.

Profit, risk and variance of impact are three independent pieces of evidence for this
decision-making problem, and from each piece of evidence we can obtain a BPA. Normally
the BPA is given by one or more experts based on their experience; in a practical
application, it should be derived from a knowledge base containing the experts’
experience and knowledge. In this example, for illustration and simplicity, we use the
utility functions of Section 4.5.4 to give the BPA of each piece of evidence.

The BPA of each criterion is obtained based on the following rules:

a) If the utility value Uj(a) is positive, then the corresponding BPA is:

   mj({Support}) = Uj(a) and mj(Θ) = 1 − Uj(a)

b) If the utility value Uj(a) is negative, then the corresponding BPA is:

   mj({Oppose}) = −Uj(a) and mj(Θ) = 1 + Uj(a)
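The two rules above can be written compactly as a sketch; the dictionary keys are ours.

```python
# Build a BPA over {Support, Oppose} from a utility value u in [-1, 1],
# following rules a) and b) above.
def bpa_from_utility(u):
    if u >= 0:
        return {"Support": u, "Theta": 1 - u}
    return {"Oppose": -u, "Theta": 1 + u}

# Profit of Action 1 for the risk-averse DM has utility 0.8757, which
# yields m({Support}) = 0.8757 and m(Theta) = 0.1243.
```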

For each action, we combine the three BPAs obtained from the Profit, Risk and
Variance-of-impact evidence. The combined BPAs of each action for the risk-averse DM and
for the risk-seeking DM are listed in Table 4-25.
Table 4-25 BPA of the Example

DM             Evidence             BPA       Action 1   Action 2   Action 3   Action 4
Risk-Averse    Profit               Support   0.8757     0.8692     0.6581     0.9017
                                    Oppose    0          0          0          0
                                    Θ         0.1243     0.1308     0.3419     0.0983
               Risk                 Support   0          0          0          0
                                    Oppose    0.1012     0.0099     0.0095     0.1322
                                    Θ         0.8988     0.9901     0.9905     0.8678
               Variance of impact   Support   0          0          0          0
                                    Oppose    0.2600     0.0362     0.0348     0.3372
                                    Θ         0.7400     0.9638     0.9652     0.6628
               Total                Support   0.8241     0.8638     0.6479     0.8407
                                    Oppose    0.0589     0.0062     0.0155     0.0677
                                    Θ         0.1170     0.1300     0.3366     0.0916
Risk-Seeking   Profit               Support   0.0454     0.0430     0.0128     0.0583
                                    Oppose    0          0          0          0
                                    Θ         0.9546     0.9570     0.9872     0.9417
               Risk                 Support   0          0          0          0
                                    Oppose    0.8598     0.3520     0.3440     0.8924
                                    Θ         0.1402     0.6480     0.6560     0.1076
               Variance of impact   Support   0          0          0          0
                                    Oppose    0.8758     0.4298     0.4196     0.9109
                                    Θ         0.1242     0.5702     0.5804     0.0891
               Total                Support   0.0008     0.0163     0.0049     0.0006
                                    Oppose    0.9818     0.6202     0.6162     0.9898
                                    Θ         0.0174     0.3635     0.3789     0.0096

The appraisal of each action is the element of Θ (‘Support’ or ‘Oppose’) with the higher
plausibility value in Table 4-26. The appraisals of the risk-averse DM are ‘Support’ for
every action, while the appraisals of the risk-seeking DM are ‘Oppose’ for every action.
Table 4-26 The Plausibility and R of the Example

DM             Measure         Action 1   Action 2   Action 3   Action 4
Risk-Averse    Pl (Support)    0.9411     0.9938     0.9845     0.9323
               Pl (Oppose)     0.1759     0.1362     0.3521     0.1593
               R               5.3502     7.2966     2.7961     5.8525
Risk-Seeking   Pl (Support)    0.0182     0.3798     0.3878     0.0102
               Pl (Oppose)     0.9992     0.9837     0.9911     0.9994
               R               0.0182     0.3861     0.3913     0.0102

4.6.2.1.2 Select Action Based on the Appraisal

For each action, we calculate the following index:

R=Pl({‘Support’})/Pl({‘Oppose’}) (eq. 4-11)

This index R of each action for the risk-averse DM and the risk-seeking DM is also listed
in Table 4-26. Then the action with the largest R is selected as the final action. So the
final action selected for the risk-averse DM is Action 2, while that for the risk-seeking
DM is Action 3.
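The selection by eq. 4-11 can be sketched using the risk-averse plausibilities of Table 4-26; the dictionary layout is ours.

```python
# Risk-averse plausibilities from Table 4-26 as
# (Pl({Support}), Pl({Oppose})) pairs.
pl = {
    "Action 1": (0.9411, 0.1759),
    "Action 2": (0.9938, 0.1362),
    "Action 3": (0.9845, 0.3521),
    "Action 4": (0.9323, 0.1593),
}

# eq. 4-11: R = Pl({Support}) / Pl({Oppose}); select the largest R.
R = {a: s / o for a, (s, o) in pl.items()}
best = max(R, key=R.get)  # Action 2, with R ≈ 7.2966
```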

4.6.2.2. Multiple Decision Makers MCDM

Dempster’s Rule of Combination gives ET the ability to combine the opinions of different
experts, since the opinion of each expert can be regarded as an independent piece of
evidence. In this example, we can combine the attitudes of the risk-averse DM and the
risk-seeking DM using this rule: for each action, combine the BPAs of the risk-averse DM
and the risk-seeking DM listed in Table 4-25. The combined BPA of each action and the
corresponding plausibility function and index R are shown in Table 4-27.

Table 4-27 Combined BPA and Corresponding Plausibility and R

                Action 1   Action 2   Action 3   Action 4
BPA  Support    0.0791     0.7114     0.4167     0.0514
     Oppose     0.9103     0.1868     0.3710     0.9433
     Θ          0.0107     0.1018     0.2123     0.0052
Pl   Support    0.0898     0.8132     0.6290     0.0566
     Oppose     0.9210     0.2886     0.5822     0.9485
R               0.0975     2.8177     1.0804     0.0597

The appraisals for Actions 1 to 4 are now ‘Oppose’, ‘Support’, ‘Support’ and ‘Oppose’
respectively; the actions no longer all receive the same appraisal, as they did in Table
4-26. The final action is Action 2, which has the highest R. Although the final action
selected on the basis of the combined attitudes of the risk-averse DM and the
risk-seeking DM is the same as that of the risk-averse DM alone, the index R of the
selected action is reduced from 7.2966 to 2.8177.
4.7 Conclusions

In this section, decision-making tools are proposed for application to risk management
in a power system operating environment. A decision problem is presented in which the
decision-maker, i.e., the system operator, has to select a corrective/preventive action
among several options, each with different implications with respect to projected
profits and risk.

First, the decision-making methods used in the EPRI report [1] were reviewed. It was
shown that some of the methods put too much emphasis on the economic aspect of the
problem, while others are exclusively concerned with the security issues. To overcome
these drawbacks, some new methods were proposed that still use traditional decision
criteria for risk-based corrective/preventive action selection.

Since maximizing the profits conflicts with minimizing the risk, and since the two
attributes are truly incommensurable, applying multi-criteria decision-making methods is
attractive. Two methods were investigated: the Value-based method and ELECTRE IV. These
methods have the advantage of easily accommodating subjective information provided by
the DM, prior to or during the decision-making process. The methods differ from each
other in the type of subjective information required from the DM, in the way this
information is processed, and in the format of the results produced.

Evidential Theory can also be used for the multi-objective corrective/preventive action
selection problem. The most attractive feature of this method is its ability to combine
the opinions of different DMs.

The different methods produce significantly different results, not only with respect to
the suggested ‘best’ option but also in the overall ranking of the options. The
multi-objective methods have the advantage that the parameters of each method can easily
be tuned according to the DM’s preferences among the criteria.

References

[1] EPRI final report WO8604-01, “ Risk-based Security Assessment”, December, 1998.

[2] Anders G.J., Probabilistic concepts in electric power systems, John Wiley & Sons,
1990.

[3] Chankong V., Haimes Y.Y., Multi-objective Decision Making – Theory and
Methodology, North Holand, 1983.

[4] Lindley D.V., Making Decisions, Wiley &Sons, 2nd Edition, 1985.

[5] Wan H., McCalley J., Vittal V., “Increasing Thermal Rating by Risk Analysis”, PE-
090-PWRS-0-1-1998, to appear in IEEE Transactions on Power Systems.
[6] Wan H., McCalley J., Vittal V., “Risk Based Voltage Security Assessment”, submitted
for review to the IEEE Transactions on Power Systems.

[7] Fu W., McCalley J., Vittal V., “Risk-based Assessment of Transformer Thermal
Overloading Capability”, Proceedings of the 30th North American Power Symposium,
Cleveland, Ohio, October 1998.

[8] Van Acker V., McCalley J.D., Vittal V., Peças Lopes J. A., "Risk-based Transient
Stability Assessment," Proceedings of the Budapest Powertech Conference, Budapest,
Hungary, 1999.

[9] Kmietowicz, Z.W., Pearman A.D., Decision Theory and Incomplete Knowledge,
Gower, 1981.

[10] Churchman C.W., Ackoff R., Arnoff E., Introduction to Operation Research, John
Wiley & Sons, 1957.

[11] Raiffa H., Decision Analysis Addison-Wesley, 1968

[12] Keeney R.L., Raiffa H., Decisions with Multiple Objectives – Preferences and value
trade-offs, John Wiley & Sons, 1976.

[13] Saaty T.L., Analytical Hierarchy Process, McGraw Hill, 1980.

[14] Roy B., “Classement et choix en presence de points de vue multiples (la méthode
ELECTRE),” Revue Française d’Informatique et de recherché Opérationelle Vol. 8, 1968,
pp 57-75.

[15] Roy, B., Bertier P., “La méthode ELECTRE II,” Working paper 142, SEMA, 1971.

[16] Roy,B., Bertier P., “La méthode ELECTRE II, une application au media-planning.”
OR 72, M. Ross editor, North Holland, 1973, pp. 291-302.

[17] Roy B., “ELECTRE III; algorithme de classement base sur une representation floue
des preferences en presence de critères multiples,” Cahiers de CERO Vol. 20 no. 1, pp.
3-24.

[18] Hugonnard J., Roy B., “Ranking of suburban line extension projects for the Paris
,metro system by a multi-criteria method,” Transportation research 16A, 1982, pp. 301-
312.

[19] Stewart T.J., “A Critical Survey on the Status of Multiple Criteria Decision Making-
Theory and Practice,” OMEGA, Intl. Journal of Management Science, Vol. 20, No. 5/6,
pp. 569-586, 1992
[20] Hobbs, B.F., Chankong, V., Hamadeh, W., Stakiv E.Z. “Does the choice of Multi-
criteria Method Matter? An experiment in Water Resources Planning,” Water Resources
Research, Vol. 28,no. 7 July 1992, pp. 1767-1779.

[21] Zanakis S.H., Solomon A., Wishart N., Dublish S., “Multi-attribute decision
making: a simulation comparison of select methods,” European journal of operational
research, Vol. 107, pp. 507-529, 1998

[22] Vincke, Ph., Multicriteria Decision-aid, translated from french, John Wiley &Sons,
1992

[23] Lifson, Melvin W., Shaifer, Edward F. “Decision and Risk: Analysis for
Construction Management”, John Willey&Sons, 1982.

[24] Brans J.P., Vincke, Ph., “A preference ranking organization method,” Management
Science Vol. 31 no. 6, 1985, pp. 647-656.

[25] Charnes A., Cooper W.W., Management models and Industrial applications of
Linear Programming, Wley, New York, 1961.

[26] Romero C., “A Survey of generalized goal programming (1970-1982),” European


journal of operational research, Vol. 25, 1986, pp. 183-191.

[27] Shafer,G. “ A Mathematical theory of evidence”, Princeton University Press,


Princeton, NJ, 1976.

[28] Roumasset, James A. “ Rice and Risk”, North-Holland, 1976.

[29] A.P. Dempster, “Upper and lower probabilities induced by a multivalued


mapping”, Annals of Mathematical Statistics, Vol. 38, 1967, pp. 325-339.

[30] A.P. Dempster, “A generalization of Bayesian inference”, Journal of the Royal


Statistical Society, Series B, Vol. 30, 1968, pp. 205-247.
5. VALUE OF INFORMATION

5.1 Introduction

When different options available to the decision maker (DM) are subjected to a
considerable amount of uncertainty, e.g. unknown future scenarios, the DM could
consider spending some amount of money to gather information to reduce this
uncertainty. If acquisition of additional information changes the probability models
used to characterize the uncertainty, then the resulting decision may change as well. If
so, then the acquired information has value, and it is prudent to spend money to get it.
The value of the information is measured by the change in expected utility that results
from the change in decision.

The worth of perfect information serves as a reference for the amount of money the DM
should be willing to pay to acquire more information. The difference between the expected
monetary outcome with perfect information and with no additional information is a measure
of the maximum amount the DM should consider spending to obtain additional data.

Uncertainty can be the result of a prediction or can be caused by the difficulty of
measuring a certain parameter. Typically this is translated into probability distributions
with considerable variance. In security assessment analysis, several sources of
uncertainty can be considered:

• Load profile over the next hour, over the whole year

• Load distribution among buses and generation dispatch

• System and equipment parameter values

• Equipment outage rates (lines, generators and transformers)

• Fault types and locations

• Ambient conditions
In the next sections, the case of perfect information is discussed followed by the case
where only partial or imperfect information is available.

5.2 Perfect Information

Case A

This case uses a fictitious situation concerning a system in a remote area having some of
its lines crossing an open, remote area. The operator has only access to a regional
weather prediction for today that was developed the previous day with the following
information:

Table 5-1 Weather Report

Weather type    Probability (%)
sunny           65
windy           29
stormy           6

Given this information, the operator evaluates the possible costs in each one of the
scenarios. It is assumed that there are basically two operating strategies:

Option 1: operate at minimum cost according to economic dispatch, but with heavily
loaded transmission.

Option 2: shift some power to a more expensive generator to off-load the transmission
systems.

Table 5-2 gives an overview of the profit and risk values for each action, given the
weather conditions. Our utility function is the difference between profit and
risk. With the probabilities associated with each weather type, the expected utility is
obtained and shown in the last row of the table. The risk values include the risk of
transient instability and thermal overload.
Table 5-2 Risk and Profit

                   Option 1 (profits $20,400)   Option 2 (profits $19,900)
Weather type       risk     profits-risk        risk     profits-risk
Sunny, calm          10     20,390                 2     19,898
Windy                75     20,325                40     19,860
Stormy             1750     18,650               350     19,550

Expected profits-risk       20,266.75                    19,866.10

In case of stormy weather, the probability of an outage is considerably higher than in
the other two weather conditions. Consequently, the corresponding risk will be higher
too. Shifting power from one generator to another reduces the flow through the lines
in the open area. If the lines are off-loaded, a fault on them will be less costly.
Therefore, the risk associated with the power-shifting option is lower.

Using the maximum expected value of the difference between profit and risk (last row),
the operator would decide to operate the system according to option 1, i.e., to adhere to
the economic dispatch. This would be the best decision under the available information.

Since the difference between profits and risk for either option can vary almost 10%
depending on the weather conditions, it could be useful to obtain more recent, and
therefore more accurate information available about the weather and to estimate what a
reasonable price would be to pay for this information.

First it is necessary to evaluate the worth of the "perfect information." Perfect
information indicates with certainty whether it will be sunny, windy, or stormy.
Knowing that the weather will be sunny and calm, or windy, the operator will choose the
economic, loaded-line mode (Option 1), while if it is known that the weather will be
stormy, the uneconomic, less-loaded operating mode (Option 2) will be chosen.

Without the perfect information, the DM only has access to the weather forecast
from the previous day, from which the probabilities of the different weather conditions
can be estimated. It is not known in advance what the perfect information would reveal if
it were ordered; it is assumed that the probability distribution of what the perfect
information will tell is the same as the a priori weather distribution. The DM can do no
better with the available information. The perfect information will indicate 65% of the
time that the weather will be nice, 29% of the time that it will be windy, and 6% of the
time that it will be stormy. We can evaluate the expected difference between profits and
risk under perfect information by assuming that the best decision between Options 1 and 2
is made every time. Taking the option with the maximum difference between profits and
risk under each weather type, the expected difference between profits and risk with
perfect information is given by:

0.65 * $20,390 + 0.29 * $20,325 + 0.06 * $19,550 = $20,320.75

The value of perfect information is now obtained as the difference between the expected
utility with perfect information and the maximum expected utility with existing
information:

value of perfect information = $20,320.75 – $20,266.75 = $54

This value is a per-day measure for the system coordinator to use in deciding whether it
is worthwhile to order more precise weather information or to invest in weather
forecasting equipment. If an investment could improve the information held by the
operator by this amount every day for a year, then the investment should be made if it
costs less than 365 x $54 = $19,710.
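As an illustrative sketch, the Case A calculation can be reproduced in a few lines of Python (the figures are those of Tables 5-1 and 5-2; the variable names are ours):

```python
# Expected utility (profits - risk) per option under the prior forecast, and
# the value of perfect weather information, using the Table 5-1/5-2 figures.

weather_probs = {"sunny": 0.65, "windy": 0.29, "stormy": 0.06}

# profits - risk for each option under each weather type (Table 5-2)
utility = {
    "option1": {"sunny": 20390, "windy": 20325, "stormy": 18650},
    "option2": {"sunny": 19898, "windy": 19860, "stormy": 19550},
}

# Expected utility of each option given only the prior forecast.
expected = {opt: sum(weather_probs[w] * u for w, u in vals.items())
            for opt, vals in utility.items()}
best_no_info = max(expected.values())  # Option 1: 20,266.75

# With perfect information, the best option is chosen for each weather type.
with_perfect = sum(weather_probs[w] * max(utility[o][w] for o in utility)
                   for w in weather_probs)  # 20,320.75

value_of_perfect_info = with_perfect - best_no_info  # 54.0
print(f"value of perfect information: ${value_of_perfect_info:.2f}")
```

The same pattern applies to any finite set of options and scenarios: compare the best expected utility achievable with the prior probabilities against the expectation of the per-scenario best.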

Case B

In this operation-planning problem, it is assumed there are 4 different options available
to the system coordinator (Table 5-3). Two different future scenarios are deemed to be
possible, a 4% yearly load
increase, and an 8% load increase. For each action in each scenario, the total annual risk
is given including risk of overload and risk of voltage instability. It is assumed that the
4% load increase has a 75% probability, while the 8% load increase has a 25%
probability. In this example, the objective is to minimize the annual risk.

Table 5-3 List of Actions

#          Description
Action 1   Maintain current conditions
Action 2   Change in unit commitment
Action 3   Connect 2 parallel lines on 2 towers instead of 1
Action 4   Install SPS


The annual risk values in dollars are summarized in Table 5-4.

Table 5-4 Annual Risk in $

                  Action 1     Action 2     Action 3     Action 4
4% increase        345,690      102,340      182,780      391,720
8% increase        478,980      363,490      320,285      450,120
Expected value   379,012.5    167,627.5    217,156.3    406,320.0

The same methodology is applied as in Case A. For each action the expected value of
the annual risk is calculated and presented in the last row of Table 5-4. With no
additional information, the best solution would be Action 2, corresponding to an
expected value of annual risk of $167,627. On the other hand, if it were known that the
8% increase would occur, then Action 3 would be preferred, since it has the lowest risk
in that scenario. The fact that the best decision changes depending on the information
indicates that there is something to gain from additional information.

The expected value of annual risk with perfect information is now:

$102,340*0.75+$320,285*0.25 = $156,826

The worth of this perfect information is given as the difference between the two
expected values, i.e.,

$167,627 - $156,826 = $10,801

This value places an upper bound on how much to pay, for example by commissioning a
study, to improve knowledge of the load profile for the coming year.
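The Case B numbers can be checked with the same pattern (figures from Table 5-4; variable names are ours):

```python
# Expected annual risk per action, and the worth of perfect knowledge of the
# load-growth scenario, using the Table 5-4 figures.

scenario_probs = {"4%": 0.75, "8%": 0.25}

annual_risk = {  # dollars, from Table 5-4
    "action1": {"4%": 345_690, "8%": 478_980},
    "action2": {"4%": 102_340, "8%": 363_490},
    "action3": {"4%": 182_780, "8%": 320_285},
    "action4": {"4%": 391_720, "8%": 450_120},
}

expected_risk = {a: sum(scenario_probs[s] * r for s, r in risks.items())
                 for a, risks in annual_risk.items()}
best_no_info = min(expected_risk.values())  # Action 2: 167,627.50

# Perfect information: the lowest-risk action is taken in each scenario.
with_perfect = sum(scenario_probs[s] * min(annual_risk[a][s] for a in annual_risk)
                   for s in scenario_probs)  # 156,826.25

print(f"worth of perfect information: ${best_no_info - with_perfect:,.0f}")
```

Because the objective here is risk minimization rather than utility maximization, the roles of `min` and `max` are swapped relative to Case A.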

5.3 Imperfect Information

In the previous section, the additional information was perfect in the sense that it told
with probability one which scenario would occur. The value of information derived in this
way is an upper limit on the amount of money to spend on obtaining additional
information. Most of the time, however, the additional information is not perfect, but it
can give a better estimate of the prior probabilities: it updates the prior ("before")
probabilities to posterior ("after") probabilities.

For example, in the case with the two load-increase scenarios (Case B in Section 5.2), it
is known that there is a strong correlation between the economic (industrial, residential,
and commercial) growth of the area and the load increase. Information about the
growth in that area can be obtained for a price, and it will tell whether the economic
growth will be high or low. As Table 5-5 indicates, the correlation is indeed strong, but
there is still a non-zero probability that the economic growth leads to a wrong
conclusion about the load-increase rate. From historical analysis of the
correlation between economic growth and load increase, it has been observed that there
is still a 10% chance that high economic growth was expected when the load increase
turns out to be only 4%, and a 15% chance of the converse erroneous conclusion.

Table 5-5 Conditional Probabilities of the Growth Given the Observed Load Increase

                           Economic growth expected
Load increase observed     Low       High
4% load increase           0.90      0.10
8% load increase           0.15      0.85

Table 5-4 showed that a different decision is taken depending on the scenario, so any
extra information about the scenarios is relevant. The only question is how much to pay
for it. To find out, the DM uses the data in Table 5-5 to update the probabilities of
the load growth in order to reach, if not a perfect, at least an improved decision.
The posterior probabilities are calculated using Bayes' rule [1, p. 21], a familiar rule
in probability theory giving the probability of event Aj given event B, i.e.,

Pr(Aj | B) = Pr(B | Aj) ⋅ Pr(Aj) / ∑i Pr(B | Ai) ⋅ Pr(Ai)

In our example,

Pr(4% | Low) = Pr(Low | 4%) ⋅ Pr(4%) / [Pr(Low | 4%) ⋅ Pr(4%) + Pr(Low | 8%) ⋅ Pr(8%)] = 0.947,

corresponding to the probability of having a 4% load increase given a low economic
growth prediction.

Similarly, the following values can be obtained:

Pr(8% | Low) = 0.0526

Pr(4% | High) = 0.261

Pr(8% | High) = 0.739

When a study predicts that the growth will be low, Action 2 will be chosen; if the study
predicts that the growth will be high, Action 3 will be chosen. This follows from the
projection of the annual risk in each of the load increase scenarios in Table 5-4. As a
result, the expected value of annual risk when the prediction is low is given by:

$102,340 * Pr(4% | Low) + $363,490 * Pr(8% | Low) = $116,084

When the prediction is high,

$182,780 * Pr(4% | High) + $320,285 * Pr(8% | High) = $284,414

With these values and the marginal probabilities of each prediction,
Pr(Low) = 0.9 * 0.75 + 0.15 * 0.25 = 0.7125 and Pr(High) = 1 - 0.7125 = 0.2875, the
expected value of annual risk with imperfect information can be found:

$116,084 * Pr(Low) + $284,414 * Pr(High) = $164,479

The value of this imperfect information is the difference between the expected value of
annual risk without additional information and the expected value of annual risk with
imperfect information:

$167,627 - $164,479 = $3,148
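The complete imperfect-information calculation can be sketched as follows. Each prediction is weighted by its marginal probability (Pr(Low) = 0.7125, Pr(High) = 0.2875, obtained by total probability from Table 5-5 and the priors), and only the two competitive actions are carried along; the variable names are ours:

```python
# Bayesian update of the scenario probabilities with the study's track record
# (Table 5-5), and the resulting value of the imperfect information.

prior = {"4%": 0.75, "8%": 0.25}
likelihood = {"4%": {"Low": 0.90, "High": 0.10},   # Pr(prediction | scenario)
              "8%": {"Low": 0.15, "High": 0.85}}
risk = {"action2": {"4%": 102_340, "8%": 363_490},  # dollars, from Table 5-4
        "action3": {"4%": 182_780, "8%": 320_285}}

expected_with_info = 0.0
for pred in ("Low", "High"):
    # Marginal probability of this prediction, then Bayes' rule.
    p_pred = sum(likelihood[s][pred] * prior[s] for s in prior)
    posterior = {s: likelihood[s][pred] * prior[s] / p_pred for s in prior}
    # Best (lowest expected risk) action under the updated probabilities.
    best = min(sum(posterior[s] * r[s] for s in prior) for r in risk.values())
    expected_with_info += p_pred * best

expected_no_info = min(sum(prior[s] * r[s] for s in prior) for r in risk.values())
value = expected_no_info - expected_with_info
print(f"value of imperfect information: ${value:,.0f}")
```

Note that `p_pred * posterior[s]` reduces to the joint probability `likelihood[s][pred] * prior[s]`, so the loop is simply the law of total expectation taken over the four (prediction, scenario) pairs.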

As we would have expected intuitively, the value of imperfect information is lower
than that of perfect information. It is the accuracy of the imperfect information, in
this case the accuracy of the growth prediction and the strength of the correlation,
that determines its value [2].
5.4 Conclusion

In this section, a method for evaluating the value of information was introduced. This
value helps the DM decide whether it is worthwhile to pay for additional information,
and determine the maximum amount to pay in order to increase the accuracy of the
final decision. With respect to security assessment, this
approach can be used to determine whether to spend resources to improve one’s ability
to predict the future in terms of load levels, load distribution, equipment outages, and
ambient conditions. The approach can also be used to determine whether to spend
resources to improve one’s knowledge regarding uncertain measured values, including
electrical parameter values (e.g., line impedances and load characteristics) as well as
current weather readings (e.g., temperature and wind speed).

References

[1] Casella G., Berger R.L., Statistical Inference, Wadsworth & Brooks/Cole, 1997.

[2] Lindley D.V., Making Decisions, John Wiley & Sons, 2nd Edition, 1985.
APPENDIX-A: IMPACT ASSESSMENT FOR RBSA

A.1 Introduction

In risk assessment, one must address probability and impact. Probability analysis is
used to quantify the uncertainties associated with various outcomes, while impact
quantifies the cost-consequence, or severity, of those outcomes. As mentioned in [1],
development of the severity function is typically difficult in most probabilistic risk
assessment problems. In this appendix we provide some fundamental considerations of
this problem.

The traditional approach to quantify impact uses performance measures such as load
flow, steady state voltage magnitude, transient voltage dip, and others. The drawback
of this approach is that there is no common measure for comparing severity or for
obtaining a composite evaluation of security. Thus there is no way to quantitatively
compare the impact between two different kinds of security problems. For example, it is
not meaningful to compare the impact of transient voltage dip and transmission line
overload by comparing transient voltage magnitude and line current. Similarly, when a
region faces more than one kind of security problem such as transmission line overload
and bus voltage out of limits, there is no single performance indicator that can reflect
the overall system security conditions [1].

In this study, a monetary measure is utilized as a common measure of severity. It is a
function of the traditional performance measures (e.g., line current, bus voltage
magnitude, transient voltage dip, etc.). This approach provides a unified basis for
assessing the different security problems and ultimately yields a dollar-based risk.

A.2 Rating-Based vs. Cost-Based

There are two severity measures for impact assessment: one is rating-based, and the
other is cost-based.

In using the rating-based measure, we assume violation of any traditional deterministic
criterion is equally severe, i.e., we assign 1.0 for a violation and 0.0 otherwise. The
resulting risk is an expectation of the number of violations per time period. The
advantages of this approach are that it is simple, no estimation is required, and it
enjoys strong
coupling with the deterministic approach. One disadvantage is that it does not account
for differences in severity between one type of security violation and another. For
example, it recognizes 1% overload as being equally severe to an out of step condition
of a generating plant. Another disadvantage is that the rating-based measure does not
account for the degree of a violation. For example, it recognizes a 1% overload as being
equally severe to a 25% overload.

In using the cost-based measure, we estimate the actual cost of an operating condition's
impact. The resulting risk is an expectation of the economic consequence per time
period. It overcomes the disadvantage of rating-based risk in that it recognizes different
severity levels of various violations, and it recognizes the degree of each violation. The
main disadvantage is that cost estimates contain significant uncertainty, and reducing
this uncertainty can be expensive. Therefore, one needs to estimate average values of
these costs and also model the uncertainty associated with estimates.
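The distinction between the two measures can be illustrated with a hypothetical severity function for line loading; the 100% rating threshold and the $500-per-percent cost slope below are made-up figures for illustration, not values from this report:

```python
# Illustrative contrast between the two severity measures for line loading.
# The rating-based measure is binary; the cost-based measure grows with the
# degree of overload. The $500-per-percent cost slope is a made-up figure.

def rating_based_severity(loading_pct: float, rating_pct: float = 100.0) -> float:
    """1.0 for any violation of the deterministic limit, else 0.0."""
    return 1.0 if loading_pct > rating_pct else 0.0

def cost_based_severity(loading_pct: float, rating_pct: float = 100.0,
                        dollars_per_pct: float = 500.0) -> float:
    """Estimated dollar impact, proportional to the degree of overload."""
    return max(0.0, loading_pct - rating_pct) * dollars_per_pct

# A 1% overload and a 25% overload are indistinguishable to the
# rating-based measure, but differ 25-fold in the cost-based one.
print(rating_based_severity(101), rating_based_severity(125))  # 1.0 1.0
print(cost_based_severity(101), cost_based_severity(125))      # 500.0 12500.0
```

Multiplying either severity function by the probability of the corresponding operating condition and summing gives the two risk indices described above: expected violations per period, or expected dollars per period.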

A.3 Impacts vs. Decisions

Impact has many different meanings in different contexts. In the context of RBSA,

Impact is defined as an inevitable consequence caused by an operating condition.

The impact must therefore be treated as a function of the operating condition. This
definition excludes any influence of human intervention. For example, the consequences
of overloading a transmission line, such as loss of life and line sag and touch, are
regarded as impacts, while the consequence of shedding load to reduce the flow on a
transmission line is not regarded as an impact. We exclude consequences related to human
intervention because they are the result of decisions. We cannot model the decision
without knowing the benefit function and the risk preference of the decision-makers,
information that is unique to each situation.

A.4 Modeling Impact Uncertainty

Actually, some impacts are quite certain. For example, sanctions can be regarded as
certain impacts on an entity that violates security performance requirements. The
Western Systems Coordinating Council (WSCC) has developed a Reliability Management
System (RMS) [2], which includes 17 mandatory criteria with which its members must
comply. Monetary sanctions are applied to any entity that violates these criteria. For
example, one criterion is that ``The actual power flow on a bulk power path shall not
exceed the operating transfer capability for the specified time period.'' If a
transmission owner violates this criterion, a fine must be paid based on the RMS.

However, most impacts are unfortunately quite uncertain. For example, end-user
losses due to system disturbances depend on many uncertain factors, such as the end-
user's activities, the nature and degree to which the impacted activities depend
on electricity, the availability of a backup power source, and the ability to resume the
impacted activities normally after power is restored. Consequently, estimating the
impact requires both objective and subjective judgment. Further, it is well recognized
that the accuracy of a cost estimate for an event occurring at a future time generally
decreases the further into the future the event lies [3].

The first step in estimating the costs is to identify the expected or average value. This
may be enough if the estimate is quite certain. However, it is generally necessary to
account for the uncertainty in the estimate by describing it with a probability
distribution. The two simplest distributions are the uniform, in which case one also
needs to estimate the range, and the normal, in which case one also needs to estimate the
standard deviation. Which one is used, or whether another distribution is used, depends
on the characteristics of the uncertainty. For example, if the actual cost impact is
equally likely anywhere within an interval, then the uniform distribution is appropriate.
However, if the actual cost impact is more likely to be close to the mean than at the
extremes, then a normal distribution is a better description.
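As a small sketch with made-up numbers (a $40,000 mean interruption cost believed to lie between $10,000 and $70,000), the two models can agree on the expected cost while spreading the uncertainty very differently:

```python
import random

random.seed(0)  # reproducible draws
N = 100_000

# Uniform model: every cost in the estimated range is equally likely.
uniform_draws = [random.uniform(10_000, 70_000) for _ in range(N)]
# Normal model: same mean; std chosen so the range is roughly +/- 3 sigma.
normal_draws = [random.gauss(40_000, 10_000) for _ in range(N)]

mean_u = sum(uniform_draws) / N
mean_n = sum(normal_draws) / N
# Both sample means approach $40,000, but the uniform model places far more
# probability near the extremes of the range than the normal model does.
print(f"uniform mean ~ ${mean_u:,.0f}, normal mean ~ ${mean_n:,.0f}")
```

The expected risk computed with either model is essentially the same; the choice matters when the decision depends on the tails of the estimate, e.g., when a tolerance band on the decision criterion is compared against the spread of the distribution.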

A.5 Cost Estimation

The cost of an event can only be accurately known after actual occurrence of the event.
Therefore, in risk assessment, where analysis is necessarily performed before the
occurrence of the event, we must estimate its expected value together with the
parameters that describe its uncertainty.

An essential step in cost estimation is to decompose each identified event into
component costs. Cost estimates can be assigned to each component and then costs for
the various components are aggregated to form a cost for the event. For example, to
estimate load interruption cost at a particular bus, one can decompose this cost into
costs for residential, agricultural, commercial, and industrial portions of the load.
Likewise, to estimate equipment costs caused by overload on a line, one can decompose
this cost into cost for loss of conductor life and cost for line sag that violates clearance
codes and consequently shorts to an underlying object.

The second step in cost estimation is to identify the statistics associated with each cost.
This is an information-gathering step. It need not be a labor-intensive task, although it
certainly can be. What is important is that the analyst be capable of deciding when to
gather more information, and when not to. Nonetheless, it is always prudent to perform
a first estimation using one's own judgment. Here, one should estimate the mean or
average value of the cost, the range, i.e., a minimum value below which the cost would
not fall and a maximum value above which the cost would not fall. In addition, one
should decide the distribution of the cost over the range. As indicated in Section A.4,
the simplest distributions are uniform and normal.
In many risk assessment problems, the accuracy of the cost estimate is sufficient if the
range spans one order of magnitude or less. This means the tolerance on the
decision criteria may need to be as large as the widest range. If that tolerance is
unsatisfactory, then one needs to gather more information to narrow the range (i.e.,
decrease the spread, or variance, of the distribution), assuming that the test of
``perfect information'' indicates it is economic to do so [1].

A.6 Classification of Impacts

The first step in impact cost estimation is decomposing the cost of each
identified event into component costs. Depending on the classification criterion, there
are the following four kinds of impact classifications, which not only help in estimating
the impact costs but also provide useful information in today's deregulated environment.

A.6.1 Based on Affected Group

This classification provides information about which group is affected, and how much
each group suffers from the different security problems under the current operating
conditions. The following groups are considered:

− Generation owner

Generation owners will experience losses when conditions such as generator out-of-step
or load interruption occur.

− Transmission owner

Transmission owners will face losses when system contingencies overload
transmission lines or transformers or cause load interruptions.

− End-user

Losses may be unavoidable for end-users when system contingencies cause
load interruptions.

It should be noted that impacts on distribution owners due to system security
problems are not considered here, as this study addresses only transmission
system security assessment, although the idea can be extended to the distribution
system. We also assume that the relationship among these three groups of owners is
based on bilateral contracts. Generation owners make bilateral contracts directly with
transmission owners to buy transmission capacity rights in order to sell energy
to end-users. End-users can obtain energy only by making contracts with generation
owners. If either party cannot fulfill a contract, it must pay the monetary penalties
specified in the contract. These assumptions are made only for research convenience and
can easily be modified if necessary.

A.6.2 Based on Cost Category

This classification shows, for each affected group, where the impact cost comes from.
The following three cost categories are considered in this study:

− Load interruption

Load interruption occurs when the system contingencies result in unacceptable
voltage to the end-users.

− Equipment damage

The impact of equipment damage quantifies the cost of repairing or replacing
damaged equipment.

− Equipment outage

The impact of equipment outage includes the additional cost associated with
operating the system when a component is unavailable.

A.6.3 Based on Impact Component

This classification shows how the system is affected by the different security problems.
It is based on information about which parts of the system are threatened. For example,
transmission line overload may cause loss of life and line sag and touch, while
transformer overload may cause loss of life and failure. The impact components
considered in this study are:

− Line loss of life

This impact is caused by high conductor operating temperature.

− Line sag and touch

This impact is caused by high conductor operating temperature.

− Transformer loss of life

This impact is caused by high transformer hottest-spot temperature.


− Transformer failure

This impact is caused by high transformer hottest-spot temperature.

− Under- or over- voltage

This impact is caused by lack of reactive power support in some buses.

− Voltage collapse

This impact is caused by lack of reactive power support in the system.

− Generator out of step

This impact is caused by some large disturbances in the system.

− Transient frequency dip

This impact is caused by some large disturbances in the system.

− Transient voltage dip

This impact is caused by some large disturbances in the system.

A.6.4 Based on Cost Component

This classification presents direct information about the makeup of the impact cost. It
is based on information about how the impact cost is formed. For example, the cost of
transmission line overload is that of reconductoring the line, while the costs of load
interruption are lost profits, end-user loss, sanctions, and penalties. The cost
components considered in this study are:

− Reconductoring the line

This cost can be estimated from current market price of reconductoring the same
line.

− Replacing the transformer

This cost can be estimated as the cost of buying a new transformer of the same size
plus the labor of removing the old one and installing the new one.

− Lost profits
This cost can be estimated as the revenue from selling the interrupted energy minus
the cost of producing that amount of energy.

− Sanctions

This cost can be obtained from the criteria in the regional reliability management
system.

− Penalty A

This cost can be obtained from the contracts between the generation owners and the
transmission owners.

− Penalty B

This cost can be obtained from the contracts between the generation owners and
end-users.

− System redispatching cost

This cost can be estimated from the production cost of using new higher-cost
generators minus the production cost of using original lower-cost generators.

− Generator startup cost

This cost can be estimated from actual startup cost of the generator.

− End-user loss

The most direct way to estimate end-user loss is to conduct surveys of different
groups of customers [4] [5] [6]. But usually this kind of effort is cumbersome, time-
consuming, and expensive, especially if a large and statistically well-designed sample is
to be selected [7]. Since the penalty item in contracts between generation owners and
end-users can be regarded as compensation to the end-users in case of load
interruption, it must embed information about end-user loss. Thus, an alternative
estimate of end-user loss can be based on the penalty agreement in the contract. The
advantages of this approach are that it is easily implemented and less expensive; the
disadvantage is that accuracy is sacrificed.

A.7 Impacts for Different Security Problems

One significant advantage of RBSA over traditional security assessment is that it unifies
the various security problem types, allowing quantification of the composite security
level in a single index. However, this advantage is only realized if the magnitudes of
the various impacts are quantified relative to one another. For this purpose, based on
the classification in the previous section, we have developed three impact tables: one
for overload security, one for voltage security, and one for dynamic security. In the
following three subsections, we present these tables together with a brief description
of the related impacts.

As stated before, impact assessment requires both objective and subjective judgment.
The classification and some of the estimated data presented in this section represent
only the authors' opinion; different people may give different classifications and
estimates.

A.7.1 Overload Security

Power system overload has adverse effects on transmission lines and transformers.
The following are brief descriptions of these impacts; detailed descriptions can be
found in [8] [9].

− Line Current Too High

High current in the transmission line leads to high conductor operating temperature. In
this report, we consider two kinds of impacts related to high line current: loss of life,
and line sag and touch.

− Loss of life

A transmission line's expected total life is the amount of time it can operate with the
conductor temperature maintained at its maximum allowable value. When the operating
temperature is higher than the maximum allowable value, the conductor-annealing rate can
exceed the design value. The line's life expectancy is reduced, and the impact is
determined by the loss of life and the cost of reconductoring the circuit.

− Line sag and touch

High operating temperature causes thermal expansion of the conductor, so the line may
sag beneath its safety clearance. Under certain conditions, this may cause a flashover to
ground, resulting in a ground fault and outage of the circuit. The impact costs
associated with line sag and touch are re-conductoring the line, system redispatch costs,
and sanctions.

− Transformer Load Too High

High transformer load causes a high hottest-spot temperature. We consider two kinds
of impacts related to transformer overloading: loss of life and transformer failure.

− Loss of Life

Transformer insulation deteriorates as a function of time and temperature. The relation
of insulation deterioration to time and temperature is assumed to follow an adaptation
of Arrhenius reaction rate theory. The impact cost associated with transformer loss of
life is the cost of replacing the transformer.
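
One widely used Arrhenius-type formulation of this aging relation is the aging acceleration factor of IEEE Std C57.91 for 65°C-rise insulation, referenced to a 110°C hot-spot temperature. The sketch below uses that formulation for illustration; the report itself does not specify which adaptation it assumes, and the loading profile shown is hypothetical.

```python
import math

# Sketch of an Arrhenius-type insulation aging model, using the aging
# acceleration factor of IEEE Std C57.91 for 65 degC average-winding-rise
# insulation (reference hot-spot temperature 110 degC = 383 K). The report
# does not name a specific formulation; this one is a common choice. The
# loading profile below is a hypothetical example.

def aging_acceleration_factor(hot_spot_c: float) -> float:
    """Relative insulation aging rate compared to operation at 110 degC."""
    return math.exp(15000.0 / 383.0 - 15000.0 / (hot_spot_c + 273.0))

def equivalent_aging_hours(profile):
    """Sum of (aging factor * duration) over (hot_spot_C, hours) intervals."""
    return sum(aging_acceleration_factor(t) * h for t, h in profile)

# At the 110 degC reference, insulation ages at exactly its design rate:
assert abs(aging_acceleration_factor(110.0) - 1.0) < 1e-9

# 24-hour profile: 16 h at 95 degC plus 8 h of overload at 125 degC
profile = [(95.0, 16.0), (125.0, 8.0)]
print(f"Equivalent aging: {equivalent_aging_hours(profile):.1f} h in 24 h")
```

Sustained operation above the reference temperature makes the equivalent aging exceed the elapsed time, which is the mechanism behind the loss-of-life cost tabulated later in Table 0-1.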

− Transformer Failure

Operating with an insulated-winding hottest-spot temperature above 140°C may cause
gassing in the solid insulation and oil of the transformer. Gassing poses a risk to the
transformer's dielectric strength. Test results indicate a gradually increasing probability
of dielectric failure whenever normal operating temperatures are exceeded, and the
probability of failure increases significantly at temperatures above 150°C. A
catastrophic failure of a transformer results in considerable costs. When such a failure
occurs, the event is sudden, and the consequences can be substantial, particularly if the
system is stressed. The impact costs associated with transformer failure are the cost of
replacing the transformer, system redispatch costs, penalties, and sanctions.

In an electric power system, the outage of a single high-voltage transmission line or
transformer usually does not cause direct load or generation interruptions. The system
operator may shed some load after assessing the situation, but we regard the operator's
intervention as the result of a decision, so it is excluded from our impact assessment
paradigm. We also do not consider cascading events, as they are very rare in the real
world. Thus, for overload, the affected group is assumed to be only the transmission
owners; their loss includes replacing failed components, system redispatch, penalties,
and sanctions.

In Table 0-1, a table-form template with some estimated values is given for overload
impact assessment. All the impact costs are assumed to follow a normal distribution
except sanctions, which are assumed certain. A 95% confidence interval is also given
for each cost component.
Table 0-1 Impact Evaluation for Overload

Performance Measure | Cost Category | Impact Component | Cost Component | Affected Group | Standard Units | Expected Value | Standard Deviation | 95% C.I.
Line current | Equipment damage | Loss of life via annealing | Re-conductor line | Transmission owner | $/mile* | 10^5 | 10^4 | (0.8-1.2)×10^5
Line current | Equipment outage | Line sag and touch | Redispatch | Transmission owner | $/MWhr | 50 | 5 | 40-60
Line current | Equipment outage | Line sag and touch | Sanctions | Transmission owner | $/MWhr | 50 | 0 | 50
Line current | Equipment outage | Line sag and touch | Penalty A | Transmission owner | $/MWhr | 10,000 | 0 | 10,000
Transformer current | Equipment damage | Loss of life via insulation deterioration | Replace transformer | Transmission owner | $/case | 10^7 | 10^6 | (0.8-1.2)×10^7
Transformer current | Equipment damage | Dielectric failure | Replace transformer | Transmission owner | $/failure | 10^7 | 10^6 | (0.8-1.2)×10^7
Transformer current | Equipment outage | Dielectric failure | Redispatch | Transmission owner | $/MWhr | 50 | 5 | 40-60
Transformer current | Equipment outage | Dielectric failure | Sanctions | Transmission owner | $/MWhr | 50 | 0 | 50
Transformer current | Equipment outage | Dielectric failure | Penalty A | Transmission owner | $/MWhr | 10,000 | 0 | 10,000

*Here, we only give the cost estimates for re-conductoring a 230 kV line and replacing a 400 MVA transformer.
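
The confidence-interval entries in Table 0-1 appear to correspond to mean ± 1.96 standard deviations under the stated normal-distribution assumption, rounded for presentation (e.g., 40-60 for the redispatch cost). A minimal sketch of that calculation, assuming this reading is correct:

```python
# Sketch: reproducing the 95% confidence intervals in Table 0-1 from the
# normal-distribution assumption stated in the text. The tabulated
# intervals appear to be mean +/- 1.96 standard deviations, rounded.

Z_95 = 1.96  # two-sided 95% quantile of the standard normal distribution

def ci_95(mean: float, std: float) -> tuple[float, float]:
    """Two-sided 95% interval for a normally distributed impact cost."""
    return (mean - Z_95 * std, mean + Z_95 * std)

# Redispatch cost: mean $50/MWhr, std $5/MWhr -> about (40, 60) as tabulated
lo, hi = ci_95(50.0, 5.0)
print(f"Redispatch 95% C.I.: ({lo:.1f}, {hi:.1f}) $/MWhr")

# A certain cost (std = 0), such as sanctions, collapses to a point
assert ci_95(50.0, 0.0) == (50.0, 50.0)
```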

A.7.2 Voltage Security

The two problems associated with voltage security are bus voltage out of limits and
voltage collapse. The following are simple descriptions of these impacts. Detailed
descriptions of these impacts can be found in [8].

− Bus Voltage Out of Limits

Bus voltage out of limits includes situations in which the bus voltage is too low and in
which it is too high. On the one hand, low bus voltages may cause induction motors to
stall, drawing high lagging current that lowers the bus voltages further. Since industrial
and commercial motors are usually controlled by magnetically held contactors, a
voltage drop may also cause many motors to drop out [11]. Low bus voltage may also
trigger automatic undervoltage load-shedding schemes [12]. On the other hand, when
the bus voltage is too high, overvoltage protection schemes can automatically trip
individual loads or load groups if the voltage violates their setting thresholds. The main
impact of bus voltage out of limits is therefore load interruption, which causes end-user
loss. The transmission owners and generation owners also lose profits and must pay
penalties and sanctions.

− Voltage Collapse

When a power system is subjected to a sudden increase in reactive power demand
following a contingency, and the additional demand cannot be met by the reactive
power reserves carried by the generators and compensators, the system may collapse,
possibly resulting in loss of synchronism of generating units and a major blackout. The
consequences of voltage collapse are therefore severe. The transmission owners and
generation owners also lose profits and may have to pay penalties and sanctions.

A.7.3 Dynamic Security

For dynamic security we consider the impact of transient voltage too low, transient
frequency too low, and generator out of step. Detailed discussion of these impacts is
presented in [13].

− Transient Voltage Dip

Similar to bus voltage out of limits in voltage security, low transient voltage dips may
cause some motors to drop out. They may also initiate undervoltage load shedding and
generator tripping. The major impact of a low transient voltage dip is therefore the cost
of load interruption, which causes end-user loss. The generation owners also lose
profits and probably have to pay generator startup costs, system redispatch costs,
penalties, and sanctions for failing to fulfill their contracts with end-users and violating
the reliability criteria.

− Transient Frequency Dip

A transient frequency dip that is too low triggers the underfrequency load-shedding
program. The impact costs of low transient frequency dips are therefore load
interruption, which causes end-user loss. The generation owners also lose profits and
have to pay sanctions and penalties.

Table 0-2 Impact Evaluation for Voltage Security

Performance Measure | Cost Category | Impact Component | Cost Component | Affected Group | Standard Units | Expected Value | Standard Deviation | 95% C.I.
Bus voltage magnitude | Bus load interruption | Undervoltage or overvoltage | Lost profits | Transmission owner | $/MWhr | 50 | 5 | 40-60
Bus voltage magnitude | Bus load interruption | Undervoltage or overvoltage | Lost profits | Generation owner | $/MWhr | 50 | 5 | 40-60
Bus voltage magnitude | Bus load interruption | Undervoltage or overvoltage | Sanctions | Transmission owner | $/MWhr | 50 | 0 | 50
Bus voltage magnitude | Bus load interruption | Undervoltage or overvoltage | Sanctions | Generation owner | $/MWhr | 50 | 5 | 40-60
Bus voltage magnitude | Bus load interruption | Undervoltage or overvoltage | Penalty A | Transmission owner | $/MWhr | 10,000 | 0 | 10,000
Bus voltage magnitude | Bus load interruption | Undervoltage or overvoltage | Penalty B | Generation owner | $/MWhr | 12,000 | 0 | 12,000
Bus voltage magnitude | Bus load interruption | Undervoltage or overvoltage | End-user loss | End-user | $/MWhr | 20,000 | 5,000 | 10,000-30,000
PV curve | System load interruption | Uncontrolled voltage decline | (see the bus load interruption rows above)

− Generator Out of Step

If the interconnected synchronous machines of a power system cannot remain in
synchronism following a large disturbance, one or more generators will go out of step.
This causes some generators to trip, and their energy must be replaced by a higher-cost
source. The impact costs of a generator out-of-step condition are therefore system
redispatch costs and generator startup costs to the generation owners. The generation
owners might also have to pay sanctions.
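
A minimal sketch of the impact-cost arithmetic for this condition, combining the redispatch and startup components just described. All prices and quantities are hypothetical illustrative values, not data from the report.

```python
# Sketch: impact cost of a generator out-of-step event as described above:
# redispatch (replacing the tripped unit's energy from a costlier source)
# plus the unit's restart cost. All inputs below are hypothetical.

def out_of_step_cost(tripped_mw: float,
                     outage_hr: float,
                     lost_unit_price: float,
                     replacement_price: float,
                     startup_cost: float) -> float:
    """Redispatch premium over the outage, plus one startup cost."""
    redispatch = (replacement_price - lost_unit_price) * tripped_mw * outage_hr
    return redispatch + startup_cost

# Example: a 200 MW unit out for 4 h; its $30/MWh energy is replaced by
# $50/MWh energy; $5,000 startup cost (the expected value in Table 0-3)
cost = out_of_step_cost(200.0, 4.0, 30.0, 50.0, 5_000.0)
print(f"Impact cost: ${cost:,.0f}")  # redispatch $16,000 + startup $5,000
```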

In Table 0-3, a table-form template with some estimated values is given for dynamic
insecurity. All the impact costs are assumed to follow a normal distribution except
sanctions and penalties, which are assumed certain. A 95% confidence interval is also
given for each cost component.
Table 0-3 Impact Evaluation for Dynamic Security

Performance Measure | Cost Category | Impact Component | Cost Component | Affected Group | Standard Units | Expected Value | Standard Deviation | 95% C.I.
Transient voltage dip | Load interruption | Undervoltage or overvoltage | Lost profits | Transmission owner | $/MWhr | 50 | 5 | 40-60
Transient voltage dip | Load interruption | Undervoltage or overvoltage | Lost profits | Generation owner | $/MWhr | 50 | 5 | 40-60
Transient voltage dip | Load interruption | Undervoltage or overvoltage | Sanctions | Transmission owner | $/MWhr | 50 | 0 | 50
Transient voltage dip | Load interruption | Undervoltage or overvoltage | Sanctions | Generation owner | $/MWhr | 50 | 5 | 40-60
Transient voltage dip | Load interruption | Undervoltage or overvoltage | Penalty A | Transmission owner | $/MWhr | 10,000 | 0 | 10,000
Transient voltage dip | Load interruption | Undervoltage or overvoltage | Penalty B | Generation owner | $/MWhr | 12,000 | 0 | 12,000
Transient voltage dip | Load interruption | Undervoltage or overvoltage | End-user loss | End-user | $/MWhr | 20,000 | 5,000 | 10,000-30,000
Transient voltage dip | Equipment outage | Generator tripping | Redispatch | Generation owner | $/MWhr | 50 | 5 | 40-60
Transient voltage dip | Equipment outage | Generator tripping | Generator startup | Generation owner | $/case | 5,000 | 500 | 4,000-6,000
Transient frequency dip | Bus load interruption | Underfrequency | (see the bus load interruption rows above)
Generator out-of-step condition | Bus load interruption | Underfrequency or undervoltage | (see the bus load interruption rows above)
Generator out-of-step condition | Equipment outage | Controlled | Generator startup | Generation owner | $/case | 5,000 | 500 | 4,000-6,000
Generator out-of-step condition | Equipment outage | Controlled | Redispatch | Generation owner | $/MWhr | 50 | 5 | 40-60
Generator out-of-step condition | Equipment outage | Uncontrolled | Generator startup | Generation owner | $/case | 5,000 | 500 | 4,000-6,000
Generator out-of-step condition | Equipment outage | Uncontrolled | Redispatch | Generation owner | $/MWhr | 50 | 5 | 40-60
Generator out-of-step condition | Equipment outage | Uncontrolled | Sanctions | Generation owner | $/MWhr | 50 | 0 | 50
A.8 Summary

In this chapter, the problem of impact assessment for risk calculation has been
addressed. The most important contribution is the proposal of a unified, common
measure of severity as a basis for comparing different types of security problems. The
comparison between performance-based and cost-based severity measures, the
difference between impact and decision, and issues related to uncertainty modeling of
impact were discussed. Finally, a classification of impacts and some sample data in
template form were presented.
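
The unified severity measure makes the different security problem types commensurable, so their contributions can be summed into a single expected-impact (risk) index. A minimal sketch of that aggregation, with entirely hypothetical event probabilities and costs:

```python
# Sketch: combining heterogeneous security impacts into one composite risk
# index, Risk = sum over events of Pr(event) * E[impact cost]. The event
# probabilities and expected costs below are hypothetical illustrations.

events = [
    # (description,                     probability, expected impact cost $)
    ("line overload, loss of life",        1e-2,     1.0e5),
    ("transformer dielectric failure",     1e-4,     1.0e7),
    ("bus undervoltage load interruption", 5e-3,     2.0e5),
    ("transient instability",              1e-5,     5.0e6),
]

risk = sum(p * cost for _, p, cost in events)
print(f"Composite risk index: ${risk:,.0f}")  # $3,050
```

Because every term is in dollars, a rare but costly event (the transformer failure) and a frequent but cheaper one (the line overload) contribute on the same scale, which is the unification this chapter argues for.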

References

[1] EPRI final report WO8604-01, "Risk-Based Security Assessment," December 1998.

[2] WSCC Reliability Management System (RMS), Western Systems Coordinating
Council, December 1997.

[3] E. P. DeGarmo, W. G. Sullivan, and J. A. Bontadelli, Engineering Economy,
Macmillan Publishing Company, 1993.

[4] S. Burns and G. Gross, "Value of Service Reliability," IEEE Transactions on Power
Systems, Vol. 5, No. 3, August 1990, pp. 825-832.

[5] M. J. Sullivan, T. Vardell, B. N. Suddeth, and Z. Vojdani, "Interruption Costs,
Customer Satisfaction and Expectations for Service Reliability," IEEE Transactions on
Power Systems, Vol. 11, No. 2, May 1996, pp. 989-995.

[6] R. Billinton and R. N. Allan, Reliability Evaluation of Power Systems, Plenum Press,
1996.

[7] A. P. Sanghvi, "Economic Costs of Electricity Supply Interruptions," Energy
Economics, Vol. 4, No. 3, July 1982, pp. 180-198.

[8] H. Wan, J. D. McCalley, and V. Vittal, "Increasing Thermal Ratings by Risk
Analysis," to appear in IEEE Transactions on Power Systems.

[9] W. Fu, J. McCalley, and V. Vittal, "Risk Assessment for Transformer Loading,"
under review for publication in IEEE Transactions on Power Systems.

[10] H. Wan, J. D. McCalley, and V. Vittal, "Risk Based Voltage Security Assessment,"
to appear in IEEE Transactions on Power Systems.

[11] P. Kundur, Power System Stability and Control, McGraw-Hill, 1994.

[12] NERC Planning Standards, draft, June 1997.

[13] V. Van Acker, M. Mitchell, J. McCalley, and V. Vittal, "Risk Based Transient
Stability Assessment Using Neural Networks," North American Power Symposium,
October 19-20, 1998, Cleveland, Ohio, pp. 328-335.
About EPRI

EPRI creates science and technology solutions for the global energy and energy
services industry. U.S. electric utilities established the Electric Power Research Institute
in 1973 as a nonprofit research consortium for the benefit of utility members, their
customers, and society. Now known simply as EPRI, the company provides a wide
range of innovative products and services to more than 1000 energy-related
organizations in 40 countries. EPRI's multidisciplinary team of scientists and engineers
draws on a worldwide network of technical and business expertise to help solve today's
toughest energy and environmental problems.

EPRI. Electrify the World

© 2001 Electric Power Research Institute (EPRI), Inc. All rights reserved. Electric Power
Research Institute and EPRI are registered service marks of the Electric Power Research
Institute, Inc. EPRI. ELECTRIFY THE WORLD is a service mark of the Electric Power
Research Institute, Inc.

1001308

Printed on recycled paper in the United States of America
